Feed aggregator

My Glamorous Life : What’s the point?

Tim Hall - Fri, 2016-06-10 04:19

Last night I put out a video of my trip home.

I’ve written about the My Glamorous Life series of videos before (here). A couple of people commented recently that “nothing happens in them” and they are “quite boring”. That’s the point really.

database option - tuning

Pat Shuff - Fri, 2016-06-10 02:07
Today we are going to look at using the diagnostics and tuning package that comes with the High Performance and Extreme Performance Editions of the database. We do not get these options with the Standard Edition or Enterprise Edition and if we use Amazon RDS, EC2, Oracle IaaS, or Microsoft Azure Compute to use the tuning option we must bring along a license for the options. Diagnostics are licensed at $150 per named user or $7,500 per processor. This correlates to $294 per processor per month. Options like the SQL Tuning Advisor and Automatic SQL Tuning are part of the Tuning pack option. Tuning pack is $100 per named user or $5,000 per processor. This comes in at $196 per processor per month if we use the four year amortization that we talked about last week.

There are three ways to run the SQL Tuning Advisor. We can use Enterprise Manager on a central site and analyze historic data from days, weeks, and months back; unfortunately, we cannot use this in conjunction with Amazon RDS. We can use Enterprise Manager Express, which is part of the database and gives you three hours of history of database performance; again, we cannot use this with Amazon RDS, because these features are disabled and turned off as part of the Amazon installation. Finally, we can use SQL Developer to connect to the database on all platforms, which allows us to pull down real-time diagnostics and look at live database performance. We will go through an Oracle by Example SQL Tuning Advisor Tutorial that details how to enable and use the tuning advisor packs. The database version that we will be using is 11g. These same steps should work with 12c because the features have not changed, and SQL Developer knows what to do between the two versions of the database, presenting a common user interface for SQL tuning.

The first step is to find out the IP address of our 11g database. We do this by going to the database console and looking at our instance detail.

We then create a connection to the database with SQL Developer. This is done first as the sys user, connecting as sysdba, to the ORCL instance at the IP address of the database. We can verify that we are connected to a High Performance Edition by issuing a select statement against the v$version view.

select * from v$version;

Before we can execute step 8 in the Tuning Advisor Tutorial we must enable the user scott and set a password for the account. To do this we expand the Other Users selection at the bottom left of the screen, find the user scott, and enable the account while setting the password.
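For reference, the SQL equivalent of this GUI step, run from the sys connection, is something along these lines (pick your own password; tiger is just the classic example):

alter user scott account unlock;
alter user scott identified by tiger;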

We can now connect to the 11g instance and give user scott the privileges needed to use the advisor framework and manage SQL tuning sets, with the commands

grant advisor to scott;
grant administer sql tuning set to scott;

We then clear the existing statistics to make sure we are not looking at old artifacts but at what we are about to execute. This is done by executing

exec DBMS_STATS.DELETE_SCHEMA_STATS ('scott');

At this point we switch over to the user scott and execute a select statement

select sum(e.sal), avg(e.sal), count(1), e.deptno from dept d, emp e group by e.deptno order by e.deptno;

We can launch the SQL Tuning Advisor from the icon at the top of the screen. This opens a new tab next to the resulting output from the select statement.
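Under the covers SQL Developer drives the DBMS_SQLTUNE package, so roughly the same thing can be done from SQL*Plus; a sketch (the task name here is arbitrary, and the grants shown earlier are required):

declare
  l_task varchar2(64);
begin
  -- create and run a tuning task for our statement
  l_task := dbms_sqltune.create_tuning_task(
              sql_text  => 'select sum(e.sal), avg(e.sal), count(1), e.deptno '||
                           'from dept d, emp e group by e.deptno order by e.deptno',
              task_name => 'scott_tuning_demo');
  dbms_sqltune.execute_tuning_task(l_task);
end;
/
select dbms_sqltune.report_tuning_task('scott_tuning_demo') from dual;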

The output from the tuning advisor has four parts: the statistics that were gathered, suggested indexes, a SQL profile, and statement restructuring recommendations. The index section did not suggest anything, but the other three had recommendations.

The restructuring recommendation suggests that we remove the dept d reference, since we are not actually using it in the select statement. We then execute the following modified command

select sum(e.sal), avg(e.sal), count(1), e.deptno from emp e group by e.deptno order by e.deptno;

When we rerun the command without the dept d in the select statement we get a clean output from the SQL Advisor.

In summary, we can use Enterprise Manager, Enterprise Manager Express, or SQL Developer to run the tuning advisor. We walked through a simple example of how to do this with SQL Developer on a single select statement; we chose SQL Developer because it works on all cloud platforms, while the Enterprise Manager solutions do not work well with Amazon RDS. With these tools we can dive into SQL performance issues, tune the database, and optimize the cloud system to utilize fewer resources and cost us less money. If we can reduce the processor count by a couple of processors, that more than pays for the incremental cost of the High Performance Edition over the Enterprise Edition.

Uniquely parallel

Jonathan Lewis - Fri, 2016-06-10 01:36

Here’s a surprising (to me) execution plan from 12.1.0.2 – parallel execution to find one row in a table using a unique scan of a unique index – produced by running the following script (data creation SQL to follow):


set serveroutput off
set linesize 180
set trimspool on
set pagesize 60

alter session set statistics_level = all;

variable b1 number
exec :b1 := 50000

select /*+ parallel (3) */ id, v1 from t2 where id=:b1;

select * from table(dbms_xplan.display_cursor(null,null,'allstats parallel'));

break on dfo_number skip 1 on tq_id skip 1 on server_type

select
        dfo_number, tq_id, server_type, instance, process, num_rows
from
        v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, instance, process
;

All I’ve done is enable rowsource execution statistics, set a bind variable to a value, query a table with a /*+ parallel(3) */ hint to find the one row that will be identified by primary key, and then report the actual execution plan. When I first ran the test Oracle didn’t report the execution statistics correctly, so I’ve also queried v$pq_tqstat to show the PX servers used and the flow of data through the plan. Here’s the plan, followed by the results from v$pq_tqstat:


SQL_ID  0dzynh9d29pt9, child number 0
-------------------------------------
select /*+ parallel (3) */ id,v1 from t2 where id=:b1

Plan hash value: 247082613

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Starts | E-Rows |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |      1 |        |        |      |            |      1 |00:00:00.02 |
|   1 |  PX COORDINATOR                   |          |      1 |        |        |      |            |      1 |00:00:00.02 |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |      0 |      1 |  Q1,01 | P->S | QC (RAND)  |      0 |00:00:00.01 |
|   3 |    TABLE ACCESS BY INDEX ROWID    | T2       |      0 |      1 |  Q1,01 | PCWP |            |      0 |00:00:00.01 |
|   4 |     BUFFER SORT                   |          |      0 |        |  Q1,01 | PCWC |            |      0 |00:00:00.01 |
|   5 |      PX RECEIVE                   |          |      0 |      1 |  Q1,01 | PCWP |            |      0 |00:00:00.01 |
|   6 |       PX SEND HASH (BLOCK ADDRESS)| :TQ10000 |      0 |      1 |  Q1,00 | S->P | HASH (BLOCK|      0 |00:00:00.01 |
|   7 |        PX SELECTOR                |          |      0 |        |  Q1,00 | SCWC |            |      0 |00:00:00.01 |
|*  8 |         INDEX UNIQUE SCAN         | T2_PK    |      0 |      1 |  Q1,00 | SCWP |            |      0 |00:00:00.01 |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   8 - access("ID"=:B1)

Note
-----
   - Degree of Parallelism is 3 because of hint

DFO_NUMBER      TQ_ID SERVER_TYP   INSTANCE PROCES   NUM_ROWS
---------- ---------- ---------- ---------- ------ ----------
         1          0 Producer            1 P003            0
                                          1 P004            1
                                          1 P005            0
                      Consumer            1 P000            0
                                          1 P001            1
                                          1 P002            0

                    1 Producer            1 P000            0
                                          1 P001            1
                                          1 P002            0
                      Consumer            1 QC              1

As you can see the table access follows a unique scan of an index and, although the rowsource execution stats report zero starts for the unique scan, we can see from v$pq_tqstat that slave P004 acquired a “row” (actually a rowid) and passed it to slave P001 which then acquired a row from the table and passed that row to the query coordinator. Oracle really did execute a parallel query, starting and stopping a total of 6 sessions to perform a single unique index access.

You’ll notice operation 7 is one you’ve only seen in the latest version of Oracle. The PX SELECTOR was introduced in 12c to reduce the number of times a complex parallel query would funnel into the query coordinator (parallel to serial) and then fan out again (serial to parallel) generating a new data flow operation tree (DFO tree) spawning one or two new parallel server groups as it did so. To stop this happening a step that needs to serialise in a 12c parallel plan can nominate one of the existing PX server processes (from each set, if necessary) to do the job so that the same set of PX servers can carry on running the query without the need for a new DFO tree to appear.

This enhancement to parallel execution plans is a good idea – except when it appears in my silly little query and turns something that ought to be quick and cheap into a job that is far more resource-intensive than it should be.

At this point, of course, you’re probably wondering what kind of idiot would put a parallel() hint into a query that was doing nothing but selecting one row by primary key – the answer is: “the Oracle optimizer in 12c”. I discovered this anomaly while creating a demonstration of the way that a distributed parallel query has to serialise through a single database link even if the operations at the two ends of the link run parallel. Here’s the SQL I wrote for the full demonstration:


rem     Script:         distributed_pq.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2016

define m_remote='test@loopback'
define m_remote='orcl@loopback'

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        cast(lpad(rownum,8,'0') as varchar2(8))         v1,
        cast(rpad('x',100) as varchar2(100))            padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

create table t2
as
select  *
from    t1
where   mod(id,100) = 0
;

alter table t2 add constraint t2_pk primary key(id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

set serveroutput off

select
        /*+ parallel(3) */
        t1.v1, t2.v1
from
        t1,
        t2@&m_remote
where
        mod(t1.id,10) = 0
and     t2.id = t1.id
and     mod(to_number(t2.v1),10) = 1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats parallel'));

If you want to run this demo you’ll need to do something about formatting the output; more importantly you’ll have to create a database link (with a loopback link) and set up a define identifying it at the line where I’ve got orcl@loopback and test@loopback (which are my 12c and 11g loopback links respectively).
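For reference, a loopback link can be created along these lines; the schema, password, and TNS alias are placeholders for whatever your test environment uses (the part after the @ sign is a connection qualifier):

create database link orcl@loopback
connect to test_user identified by test_password
using 'orcl';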

Here’s the plan (with rowsource stats) I got from the 12c test:


----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Starts | E-Rows |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |          |      1 |        |        |      |            |      0 |00:01:14.67 |       7 |      0 |
|   1 |  NESTED LOOPS         |          |      1 |     10 |        |      |            |      0 |00:01:14.67 |       7 |      0 |
|   2 |   PX COORDINATOR      |          |      1 |        |        |      |            |  10000 |00:00:00.11 |       7 |      0 |
|   3 |    PX SEND QC (RANDOM)| :TQ10000 |      0 |   1000 |  Q1,00 | P->S | QC (RAND)  |      0 |00:00:00.01 |       0 |      0 |
|   4 |     PX BLOCK ITERATOR |          |      3 |   1000 |  Q1,00 | PCWC |            |  10000 |00:03:17.72 |    1745 |   1667 |
|*  5 |      TABLE ACCESS FULL| T1       |     39 |   1000 |  Q1,00 | PCWP |            |  10000 |00:00:00.06 |    1745 |   1667 |
|   6 |   REMOTE              | T2       |  10000 |      1 |        |      |            |      0 |00:01:14.44 |       0 |      0 |
----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access(:Z>=:Z AND :Z<=:Z)
       filter(MOD("T1"."ID",10)=0)

Remote SQL Information (identified by operation id):
----------------------------------------------------
   6 - SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND MOD(TO_NUMBER("V1"),10)=1
       (accessing 'ORCL@LOOPBACK' )

Note
-----
   - Degree of Parallelism is 3 because of hint

I have hacked this output a little – the “Remote SQL” section didn’t get reported by display_cursor(), so I’ve inserted the remote sql I got from a call to dbms_xplan.display() after using explain plan to generate a plan. Note the /*+ shared(3) */ hint that appears in the remote SQL – that’s the internal version of a parallel(3) hint.

In 11g the query completed in 2.4 seconds; in 12c the query took nearly 75 seconds to run thanks to the 12c enhancement that allowed it to obey the hint! Looking at the time column (and ignoring the anomalous 3:17 at operation 4 – which might roughly be echoing 3 * 1:14) we can see that the time goes on the calls to the remote database (and a check of v$session_event shows this time spent in “SQL*Net message from db link”), so the obvious thing to do is check what actually happened at the remote database, and we can do that by searching the library cache for a recognizable piece of the remote SQL – here’s the SQL to do that, with the results from 11g followed by the results from 12c:


SQL> select sql_id, child_number, executions, px_servers_executions, sql_text from v$sql
  2  where sql_text like '%SHARED%' and sql_text not like 'select sql_id%';

11g results
SQL_ID        CHILD_NUMBER EXECUTIONS PX_SERVERS_EXECUTIONS
------------- ------------ ---------- ---------------------
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
c0f292z5czhwk            0      10000                     0
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE MOD(TO_NUMBER("V1"),10)=1 AND "ID"=:1


12c results
SQL_ID        CHILD_NUMBER EXECUTIONS PX_SERVERS_EXECUTIONS
------------- ------------ ---------- ---------------------
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
7bk51w7vtagwd            0      10000                     0
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND MOD(TO_NUMBER("V1"),10)=1

7bk51w7vtagwd            1          0                 59995
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND MOD(TO_NUMBER("V1"),10)=1

It’s not surprising to see that the query has executed 10,000 times – that’s what we were told by the Starts statistic from dbms_xplan.display_cursor(), but 12c has 60,000 (with a little error) PX Servers executions of the statement. That’s 10,000 executions * degree 3 * the 2 slave sets we saw in my original execution plan. (It’s an odd little quirk of the two versions of Oracle that the order of predicates in the remote SQL was reversed between 11g and 12c – leading to two different SQL_IDs).

By enabling rowsource execution stats at the system level I was able to capture the remote execution plan with its stats:


SQL_ID  7bk51w7vtagwd, child number 0
-------------------------------------
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND
MOD(TO_NUMBER("V1"),10)=1

--------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |      0 |        |      0 |00:00:00.01 |       0 |
|   1 |  PX COORDINATOR                   |          |      0 |        |      0 |00:00:00.01 |       0 |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |      0 |      1 |      0 |00:00:00.01 |       0 |
|*  3 |    TABLE ACCESS BY INDEX ROWID    | T2       |  29983 |      1 |      0 |00:00:22.21 |    1000 |
|   4 |     BUFFER SORT                   |          |  29995 |        |    999 |00:00:21.78 |       0 |
|   5 |      PX RECEIVE                   |          |  29924 |      1 |    994 |00:00:21.21 |       0 |
|   6 |       PX SEND HASH (BLOCK ADDRESS)| :TQ10000 |      0 |      1 |      0 |00:00:00.01 |       0 |
|   7 |        PX SELECTOR                |          |  29993 |        |    999 |00:00:06.08 |   19992 |
|*  8 |         INDEX UNIQUE SCAN         | T2_PK    |  29999 |      1 |   1000 |00:00:00.24 |   20000 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(MOD(TO_NUMBER("V1"),10)=1)
   8 - access("ID"=:1)

Unlike the test case I started with, this output did show the number of starts (with a few missing) and the work done across the slaves. Our index probe had to do two buffer gets on every execution, and we have 10,000 executions of the query so 20,000 buffer gets on the index unique scan. Even though only one slave actually does any work with the PX Selector, all three slaves in that set seem to “start” the relevant operations. The definition of the data meant that only one index probe in 10 was successful so we only visited 1,000 buffers from the table. If you’re familiar with reading plans with rowsource execution stats you’ll appreciate that something has gone wrong in the reporting here – that 1,000 at operation 3 should read 21,000 because it ought to include the 20,000 from the index scan (at least, that’s what a serial plan would do).

If you’re still wondering why running this query as a parallel query should take so long – after all it’s only 10,000 executions in 70 seconds – bear in mind that Oracle has to allocate and deallocate 6 PX servers to new sessions for every execution of the remote query; the instance activity stats showed “logons cumulative” going up by 60,000 each time I ran the driving query: that’s about 850 logons (and logoffs) per second. I don’t think my test machine would give a realistic impression of the impact of a couple of copies of this query running simultaneously, but when I tried it the contention introduced increased the run time to 93 seconds.
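If you want to watch this happen on your own system, sampling the statistic before and after a run of the driving query (and taking the difference) shows it; a minimal check:

select name, value from v$sysstat where name = 'logons cumulative';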

tl;dr

Watch out for poor performance becoming disastrous for distributed parallel queries when you upgrade from 11g to 12c.

What’s in a name? or rather, in the SSA Names data

RDBMS Insight - Thu, 2016-06-09 14:31

One of the amazing things about being a DBA/developer in 2016 is the sheer amount of freely available, downloadable data to play with. One fun publicly available data set is the American Social Security Administration names data. It contains all names for which SSNs were issued in each year, with the number of occurrences (names with fewer than 5 occurrences are excluded to protect individual privacy).

What’s so fun about this dataset?

* It’s already normalized

* It updates only once a year, and then only by adding another year’s worth of data, so it’s easy to keep current

* Almost everyone can relate to this dataset personally – almost everyone’s name is in there!

* At about 1.8 million rows, it’s not particularly large, but it’s large enough to be interesting to play with.

The one slight annoyance is that the data is in over 100 files, one per year: too many to load one-by-one manually. So here’s a blog post on loading it into your Oracle database, with scripts.

1. Visit the URL:
https://catalog.data.gov/dataset/baby-names-from-social-security-card-applications-national-level-data

2. Download and unzip names.zip. This zip archive contains one file for each year from 1880 to 2015. The files are named yobXXXX.txt, e.g. yob2015.txt.
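On Linux, that could look something like this, assuming names.zip is in the current directory:

$ unzip names.zip -d names
$ cd names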

3. Create a table to hold the names data:

DROP TABLE names;
CREATE TABLE names (YEAR NUMBER(4), name varchar2(30), sex CHAR(1), freq NUMBER);

4. Load in one year to get a feeling for the data. Let’s load “yob2015.txt”, the most recent year.
Here’s a sql*loader control file “names.ctl” to load the data:

[oracle@localhost names]$ cat names.ctl
load data 
infile 'yob2015.txt' "str '\r\n'"
append
into table NAMES
fields terminated by ','
OPTIONALLY ENCLOSED BY '"' AND '"'
trailing nullcols
           ( NAME CHAR(4000),
             SEX CHAR(4000),
             FREQ CHAR(4000),
             YEAR "2015"
           )

(By the way, here's a great tip from That Jeff Smith: use SQL Developer to generate a SQL*Loader ctl file.)
Now let’s use the ctl file to load it:

[oracle@localhost names]$ sqlldr CONTROL=names.ctl   skip=0  
Username:scott/********
 
SQL*Loader: Release 12.1.0.2.0 - Production on Thu Jun 9 10:41:29 2016
 
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 
Path used:      Conventional
Commit point reached - logical record count 20
 
...
Table NAMES:
  32952 Rows successfully loaded.
 
Check the log file:
  names.log
for more information about the load.

5. Let’s take a look at the 2015 data! How about the top 10 names for each sex?

WITH n AS 
  ( SELECT name, sex, freq, 
  rank() OVER (partition BY sex ORDER BY freq DESC) AS rank_2015
  FROM names 
  WHERE YEAR=2015 )
SELECT * FROM n
WHERE rank_2015 < 11
ORDER BY sex, rank_2015;
NAME			       S       FREQ  RANK_2015
------------------------------ - ---------- ----------
Emma			       F      20355	     1
Olivia			       F      19553	     2
Sophia			       F      17327	     3
Ava			       F      16286	     4
Isabella		       F      15504	     5
Mia			       F      14820	     6
Abigail 		       F      12311	     7
Emily			       F      11727	     8
Charlotte		       F      11332	     9
Harper			       F      10241	    10
 
NAME			       S       FREQ  RANK_2015
------------------------------ - ---------- ----------
Noah			       M      19511	     1
Liam			       M      18281	     2
Mason			       M      16535	     3
Jacob			       M      15816	     4
William 		       M      15809	     5
Ethan			       M      14991	     6
James			       M      14705	     7
Alexander		       M      14460	     8
Michael 		       M      14321	     9
Benjamin		       M      13608	    10

6. Now let’s load the names data for the other 135 years.
First we’ll create a generic “names.ctl”:

$ cat names.ctl
load data 
infile 'yob%%YEAR%%.txt' "str '\r\n'"
append
into table NAMES
fields terminated by ','
OPTIONALLY ENCLOSED BY '"' AND '"'
trailing nullcols
           ( NAME CHAR(4000),
             SEX CHAR(4000),
             FREQ CHAR(4000),
             YEAR "%%YEAR%%"
           )

Now we’ll write a small shell script that substitutes each year from 1880 to 2014 for %%YEAR%%, and loads that year’s file.

$ cat names.sh
#!/usr/bin/bash
export TWO_TASK=orcl
for i in {1880..2014}
do
  echo "generating yob$i.ctl"
  sed s/%%YEAR%%/$i/g names.ctl > yob$i.ctl
  echo "loading yob$i"
  sqlldr username/password CONTROL=yob$i.ctl
  echo "done $i"
done
 
[oracle@localhost names]$ ./names.sh
... massive screen output...
 
[oracle@localhost names]$ grep "error" *.log
yob1880.log:  0 Rows not loaded due to data errors.
yob1881.log:  0 Rows not loaded due to data errors.
yob1882.log:  0 Rows not loaded due to data errors.
yob1883.log:  0 Rows not loaded due to data errors.
...
yob2012.log:  0 Rows not loaded due to data errors.
yob2013.log:  0 Rows not loaded due to data errors.
yob2014.log:  0 Rows not loaded due to data errors.

7. Now we can play with the data a bit!

Here’s a quick look at the popularity of 2015’s top girls’ names since 1880:

WITH n2015 AS 
  ( SELECT name, sex, freq, 
  rank() OVER (partition BY sex ORDER BY freq DESC) AS rank_2015
  FROM names 
  WHERE YEAR=2015 )
, y AS (SELECT  YEAR, sex, SUM(freq) tot FROM names GROUP BY YEAR, sex)
SELECT names.year, names.name, 100*names.freq/tot AS pct_by_sex
FROM n2015, y, names
WHERE n2015.name = names.name AND n2015.sex = names.sex
AND y.year = names.year AND y.sex=names.sex
AND n2015.rank_2015 < 11
AND y.sex='F'
ORDER BY YEAR, name;

I graphed this in SQL Developer. Click to embiggen:
[Graph: 2015_girls_allyrs – each name's percentage of girls' names by year, 1880 to 2015]

You can see that Emma, my grandmother’s name, is having a bit of a comeback but is nowhere near the powerhouse it was in the 1880s, when 2% of all girls were named Emma. (For the record, my grandmother was not born in the 1880s!)

My next post will look at the name Brittany and its variants.

Note: You can download the names.ctl and names.sh from github here.

Categories: DBA Blogs

UKOUG Slides - Forms, ADF and JET a Non-Aggression Pact

Andrejus Baranovski - Thu, 2016-06-09 13:43
Florin Marcus presented a topic on Forms modernization during the UKOUG Development SIG seminar in London. The session was based on our production experience from a recent Forms modernization project to ADF 12c and Alta UI.

Highlights:

- Who integrates Forms and ADF?

- Java Plugin to be Discontinued. Consequences for Oracle Forms.

- Why Yet Another Web Framework? Oracle JET.

- Forms and ADF and JET - Business Integration Patterns

- Forms Developer's perspective: life after learning ADF

Go through the slides and learn more about it:

It's time for Oracle XE on Docker

Marcelo Ochoa - Thu, 2016-06-09 10:09
Sometimes Oracle XE is used in development environments because at the early stage of a project an Oracle RDBMS EE is not necessary.
There are many Docker images in the public repo, but many of them do not include the possibility to customize some parameters, and I am not sure the legal conditions of the Oracle XE redistribution license are met.
So, using similar ideas as in the Oracle RDBMS 12c Docker image script, I made this simple Dockerfile script:
FROM oraclelinux:5
MAINTAINER marcelo.ochoa@gmail.com
RUN groupadd -g 54321 oinstall
RUN groupadd -g 54322 dba
RUN useradd -m -g oinstall -G oinstall,dba -u 54321 oracle
RUN yum -y install bc glibc make binutils gcc libaio perl wget unzip && yum clean all
RUN chown -R oracle:oinstall /home/oracle
RUN chmod g+rx /home/oracle
RUN chmod o+rx /home/oracle
ADD xe.rsp /home/oracle/
ADD oracle-xe-11.2.0-1.0.x86_64.rpm /home/oracle/
RUN rpm2cpio /home/oracle/oracle-xe-11.2.0-1.0.x86_64.rpm | cpio -idmv
RUN rm -f /home/oracle/oracle-xe-11.2.0-1.0.x86_64.rpm
RUN mkdir /u01/app/oracle/oradata
RUN mkdir /u01/app/oracle/product/11.2.0/xe/config/log
RUN mkdir -p /u01/app/oracle/diag/rdbms/xe/XE/trace/
RUN umask 0027
RUN mkdir -p /u01/app/oracle/admin/XE/adump
RUN mkdir -p /u01/app/oracle/admin/XE/dpdump
RUN mkdir -p /u01/app/oracle/admin/XE/pfile
RUN mkdir -p /u01/app/oracle/admin/cfgtoollogs/dbca/XE
RUN mkdir -p /u01/app/oracle/admin/XE/dbs
RUN mkdir -p /u01/app/oracle/fast_recovery_area
RUN umask 0022
RUN touch /u01/app/oracle/diag/rdbms/xe/XE/trace/alert_XE.log
RUN chown -R oracle:oinstall /u01/app/oracle
RUN chmod u+x /etc/init.d/oracle-xe
RUN sed -i -e 's/%memory_target%/1G/g' /u01/app/oracle/product/11.2.0/xe/config/scripts/init.ora
RUN sed -i -e 's/%memory_target%/1G/g' /u01/app/oracle/product/11.2.0/xe/config/scripts/initXETemp.ora
RUN find /u01/app/oracle/product/11.2.0/xe -name "*.sh" -exec chmod u+x {} \;
RUN echo ". /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh" >>/home/oracle/.bashrc
COPY manage-oracle.sh /home/oracle/
EXPOSE 1521 8080
VOLUME ["/u01/app/oracle/oradata","/u01/app/oracle/fast_recovery_area"]
CMD [ "sh" , "-c" ,  "/home/oracle/manage-oracle.sh" ]
This Docker image uses the official Oracle Linux 5 layer. To build the image it is necessary to download oracle-xe-11.2.0-1.0.x86_64.rpm.zip from the OTN Download Section, unzip that file, and put oracle-xe-11.2.0-1.0.x86_64.rpm in the same directory where the Dockerfile is placed.
To build the image I made this simple buildDockerImage.sh:
#!/bin/bash
docker build -t "oracle-xe:11.2.0-1.0" .
I am 100% sure that the installation will be successful; this is the magic of the Docker world. Here is the output of the docker image listing:
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
oracle-xe           11.2.0-1.0          946a685de09f        About an hour ago   1.885 GB
As in the Oracle RDBMS 12c Docker image, the installation of the seed database and the initial configuration is done by the manage-oracle.sh script.
Note also that in the above Dockerfile, instead of using the rpm tool to install the Oracle XE rpm file, I am using rpm2cpio. This is because the original rpm file provided by Oracle performs some actions that are not valid in a Docker environment, so I do all the necessary actions inside the Dockerfile instead.
Once you have the image ready, I created a run.sh file to start the container (the initial database is created on first run), which could be used as:
docker run --privileged=true --name xe --hostname xe --detach=true --publish=1521:1521 --publish=8080:8080 oracle-xe:11.2.0-1.0
The SQL*Net port 1521 and the Oracle XE web admin port 8080 are published outside the container.
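From the host you should then be able to connect with an Oracle client using something like the following, with the SYSTEM password as configured during database creation:

$ sqlplus system/<password>@//localhost:1521/XE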
If you run the container with the above script, the database will be created using Docker volumes, which means that all data stored in the RDBMS will persist across multiple run/stop executions; but if you execute the docker rm command, all datafiles will be removed with the layered filesystem.
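For example, assuming the container was started with the run.sh above:

$ docker stop xe    # datafiles remain in the container's layered filesystem
$ docker start xe   # the same database comes back
$ docker rm -f xe   # the container and its datafiles are gone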
Here is a screenshot of the Admin tool:

To get data that survives container destruction, a run-persistent.sh script looks like:
docker run --privileged=true --volume=/home/data/db/xe/fast_recovery_area:/u01/app/oracle/fast_recovery_area --volume=/home/data/db/xe/oradata:/u01/app/oracle/oradata --name xe --hostname xe --detach=true --publish=1521:1521 --publish=8080:8080 oracle-xe:11.2.0-1.0
Before running the above Docker command, create the external directories:
$ sudo mkdir -p /home/data/db/xe/oradata
$ sudo mkdir -p /home/data/db/xe/fast_recovery_area
$ sudo chown -R 54321:54321 /home/data/db/xe
And that's all. Enjoy your Oracle XE database on Docker. The complete set of files shown above can be downloaded as oracle-xe.zip from Google Drive, obviously without the Oracle XE rpm file.




Oracle USERENV Function with Examples

Complete IT Professional - Thu, 2016-06-09 06:00
The Oracle USERENV function allows you to get information about the current session. Learn how to use the USERENV function in this article. Purpose of the Oracle USERENV Function The purpose of the USERENV function is to get information about your current session. It’s a simple function but can take a few different values for […]
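As a quick illustration ahead of the full article, a query using three of the documented USERENV parameters might look like:

select userenv('SESSIONID') as audsid,
       userenv('LANGUAGE') as lang,
       userenv('ISDBA') as isdba
from dual;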
Categories: Development

New on Oracle Mobile Cloud Service 2.0

Our drive to simplify mobile development has led us to develop important new features in the new release of Oracle Mobile Cloud Service 2.0, such as Mobile Application Accelerator (MAX), a new...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Scripted Domain Creation for SOA/BPM, OSB 12.2.1

Darwin IT - Thu, 2016-06-09 02:53
Recently I blogged about the automatic install of SOASuite and ServiceBus 12.2.1. It catered for the installation of the binaries and the creation of a repository.

What it does not handle is the creation of a domain. The last few weeks I worked on a script to do that. It's based on a WLST script for 12.1.3 by Edwin Biemond, which was quite educational for me. But, as noted, it was made for 12.1.3, and I wanted more control over what artefacts are created and how. So I gave it my own swing, also adapting it to some wishes of my current customer.

Although some improvements are certainly thinkable, and will probably be made in future uses, for now it is ready to share with the world.

One of the changes I made was to divide the main function into sections to make the structure of the script clear. I also moved some duplicated code and functional parts into separate functions.

Let me describe the sections first.

1. Create base domain
The script starts with creating a base domain. It reads the default wls template 'wls.jar' and sets the log properties of the domain. It then adapts the AdminServer to:
  • change the name as set in the property file: you can have your own naming convention.
  • change the listen address + port
  • set default SSL settings
  • set log file properties.
Then it sets the admin password on the domain and the server start mode, and saves the domain.
It then creates boot.properties files for the NodeManager and AdminServer and sets the password of the NodeManager. Finally, it sets the Applications home folder.

2. Extending domain with templates
The second section extends the domain with templates. Another improvement I made is that you can select which components you want to add by toggling the appropriate 'xxxEnabled' switches in the property file, where xxx stands for the component (for instance 'soa', 'bpm', 'osb', 'bam', 'ess', etc.)

It supports the following components:
  • ServiceBus
  • SOA and BPM Suite and B2B
  • BAM
  • Enterprise Scheduler Service
Components such as Managed File Transfer can be added quite easily.
3. DataSources
Section 3 takes care of pointing the datasources to the created repository, based on the repository user '{Prefix}_STB', via the 'LocalSvcTblDataSource' datasource. In the property file you need to set:
  • soaRepositoryDbUrl: jdbc connect string to the repository database
  • soaRepositoryDbUserPrefix: prefix used in the Repository creation
  • soaRepositoryStbPwd: password for the {Prefix}_STB user.
It will adapt the LocalSvcTblDataSource with these properties, then load the information on the other repository users to create and adapt the datasources that are essential to the modules with which the domain is extended. These datasources are then made XA-enabled.

4. Create UnixMachines, Clusters and Managed Servers
This section creates Machine definitions of type 'Unix', based on the properties:
  • server1Address=darlin-vce-db.darwin-it.local
  • server1Machine=darlin-vce-db
  • server2Enabled=true
  • server2Address=darlin-vce-db2.darwin-it.local
  • server2Machine=darlin-vce-db2
The machine denoted with 'server1Machine' is always created. The AdminServer is added to it, as well as all the first, default, managed servers. The machine denoted with the property 'server2Machine' is only created when 'server2Enabled' is set to true.

I realize that 'server' in this context might be a little confusing: in serverYAddress and serverYMachine, I actually mean a server host, not a managed or admin server.

For each component to configure (soa, osb, etc.) a cluster is created, denoted with for instance osbClr or soaClr.

When you extend the domain with SOA Suite or OSB, a managed server called 'soa_server1' or 'osb_server1' is automatically created with the appropriate deployments targeted to it. In Edwin's script these are removed and new ones are created. I ran into problems with that and found it quite unnecessary, since we can rename the existing ones to the name given in the property file, denoted with soaSvr1 or osbSvr1, etc., as is done with the AdminServer. So I leave the already created ones, but rename them to the desired value.

These first servers are added to the appropriate cluster, which magically causes the deployments to be re-targeted to that cluster.

Then, if enabled, as with osbSvr2Enabled or soaSvr2Enabled, etc., the particular 'second' servers are created and added to the particular cluster.

5. Add Servers to ServerGroups
New in 12c is the concept of ServerGroups. In 11g you had only one definition of USER_MEM_ARGS in setDomainEnv.sh/cmd, so it applied to every server (admin or managed) started using the start(Managed)Weblogic.sh/cmd scripts. In 12c the determination of USER_MEM_ARGS is done in a separate script: setStartupEnv.sh/cmd.
In this script the determination is done based on so-called ServerGroups. This provides a means to differentiate the memory settings for particular servers, which was lacking in 11g.

So in this section all the Managed and Admin Servers are added to a particular ServerGroup.
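For reference, the core call is the offline WLST function setServerGroups(); section 5 of the script below does, for instance:

setServerGroups('SoaServer1', ["SOA-MGD-SVRS"])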
6. Create boot properties files
Lastly, for each created managed server a boot.properties file with the username and password is created. Smart: I used to do this every single time by hand...

The example property file
Here's an example of the property file:
#############################################################################
# Properties for creating the SOA domain
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-15
#
#############################################################################
#
fmwHome=/u01/app/oracle/FMW12210
#
soaDomainName=osb_domain
domainsHome=/u01/app/work/domains
applicationsHome=/u01/app/work/applications
productionMode=true
#
# Server Settings
server1Address=darlin-vce-db.darwin-it.local
server1Machine=darlin-vce-db
server2Enabled=true
server2Address=darlin-vce-db2.darwin-it.local
server2Machine=darlin-vce-db2
#
# Properties for AdminServer
adminServerName=AdminServer
adminListenAddress=darlin-vce-db
adminListenPort=7001
adminJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1532m
# Properties for OSB
osbEnabled=true
osbJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m
osbClr=OsbCluster
osbSvr1=OsbServer1
osbSvr1Port=8011
osbSvr2Enabled=true
osbSvr2=OsbServer2
osbSvr2Port=8012
# Properties for SOA
soaEnabled=true
bpmEnabled=true
b2bEnabled=true
soaJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=752m -Xms1024m -Xmx1532m
soaClr=SoaCluster
soaSvr1=SoaServer1
soaSvr1Port=8001
soaSvr2Enabled=true
soaSvr2=SoaServer2
soaSvr2Port=8002
# Properties for ESS
essEnabled=true
essJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m
essClr=essCluster
essSvr1=EssServer1
essSvr1Port=8021
essSvr2Enabled=true
essSvr2=EssServer2
essSvr2Port=8022
# Properties for BAM
bamEnabled=true
bamJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1532m
bamClr=BamCluster
bamSvr1=BamServer1
bamSvr1Port=9001
bamSvr2Enabled=true
bamSvr2=BamServer2
bamSvr2Port=9002
# AdminUser
adminUser=weblogic
adminPwd=welcome1
# SoaRepository Settings
soaRepositoryDbUrl=jdbc:oracle:thin:@darlin-vce-db.darwin-it.local:1521/pdborcl
soaRepositoryDbUserPrefix=DEV
soaRepositoryStbPwd=DEV_STB
# Logs
logsHome=/u01/app/work/logs
fileCount=10
fileMinSize=5000
fileTimeSpan=24
rotationType=byTime
#
# Settings
webtierEnabled=false
jsseEnabled=false

Save it with a name like darlin-vce-db.properties, adapted for each particular environment.
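You can then feed the script and the property file to WLST, as the usage() function below suggests; for instance (adjust FMW_HOME to your Middleware home):

$FMW_HOME/oracle_common/common/bin/wlst.sh createSoaBpmDomain.py -loadProperties darlin-vce-db.properties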

The script
(And of course I don't mean the band that my daughter likes...)
#############################################################################
# Create a SOA/BPM/OSB domain
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-09
#
#############################################################################
# Modify these values as necessary
import sys, traceback
from java.io import File
scriptName = 'createSoaBpmDomain.py'
#
#Home Folders
wlsHome = fmwHome+'/wlserver'
soaDomainHome = domainsHome+'/'+soaDomainName
soaApplicationsHome = applicationsHome+'/'+soaDomainName
#
# Templates for 12.1.3
#wlsjar =fmwHome+'/wlserver/common/templates/wls/wls.jar'
#oracleCommonTplHome=fmwHome+'/oracle_common/common/templates'
#wlservicetpl=oracleCommonTplHome+'/oracle.wls-webservice-template_12.1.3.jar'
#osbtpl=fmwHome+'/osb/common/templates/wls/oracle.osb_template_12.1.3.jar'
#applCoreTpl=oracleCommonTplHome+'/wls/oracle.applcore.model.stub.1.0.0_template.jar'
#soatpl=fmwHome+'/soa/common/templates/wls/oracle.soa_template_12.1.3.jar'
#bamtpl=fmwHome+'/soa/common/templates/wls/oracle.bam.server_template_12.1.3.jar'
#bpmtpl=fmwHome+'/soa/common/templates/wls/oracle.bpm_template_12.1.3.jar'
#essBasicTpl=oracleCommonTplHome+'/wls/oracle.ess.basic_template_12.1.3.jar'
#essEmTpl=fmwHome+'/em/common/templates/wls/oracle.em_ess_template_12.1.3.jar'
#ohsTpl=fmwHome+'/ohs/common/templates/wls/ohs_managed_template_12.1.3.jar'
#b2bTpl=fmwHome+'/soa/common/templates/wls/oracle.soa.b2b_template_12.1.3.jar'
#
# Templates for 12.2.1
wlsjar =fmwHome+'/wlserver/common/templates/wls/wls.jar'
oracleCommonTplHome=fmwHome+'/oracle_common/common/templates'
wlservicetpl=oracleCommonTplHome+'/wls/oracle.wls-webservice-template.jar'
osbtpl=fmwHome+'/osb/common/templates/wls/oracle.osb_template.jar'
applCoreTpl=oracleCommonTplHome+'/wls/oracle.applcore.model.stub_template.jar'
soatpl=fmwHome+'/soa/common/templates/wls/oracle.soa_template.jar'
bamtpl=fmwHome+'/soa/common/templates/wls/oracle.bam.server_template.jar'
bpmtpl=fmwHome+'/soa/common/templates/wls/oracle.bpm_template.jar'
essBasicTpl=oracleCommonTplHome+'/wls/oracle.ess.basic_template.jar'
essEmTpl=fmwHome+'/em/common/templates/wls/oracle.em_ess_template.jar'
ohsTpl=fmwHome+'/ohs/common/templates/wls/ohs_managed_template.jar' # need to be validated!
b2bTpl=fmwHome+'/soa/common/templates/wls/oracle.soa.b2b_template.jar' # need to be validated!
#
# ServerGroup definitions
adminSvrGrpDesc='WSM-CACHE-SVR WSMPM-MAN-SVR JRF-MAN-SVR'
adminSvrGrp=["WSM-CACHE-SVR" , "WSMPM-MAN-SVR" , "JRF-MAN-SVR"]
essSvrGrpDesc="ESS-MGD-SVRS"
essSvrGrp=["ESS-MGD-SVRS"]
soaSvrGrpDesc="SOA-MGD-SVRS"
soaSvrGrp=["SOA-MGD-SVRS"]
bamSvrGrpDesc="BAM12-MGD-SVRS"
bamSvrGrp=["BAM12-MGD-SVRS"]
osbSvrGrpDesc="OSB-MGD-SVRS-COMBINED"
osbSvrGrp=["OSB-MGD-SVRS-COMBINED"]
#
#
lineSeperator='__________________________________________________________________________________'
#
#
def usage():
print 'Call script as: '
print 'Windows: wlst.cmd '+scriptName+' -loadProperties localhost.properties'
print 'Linux: wlst.sh '+scriptName+' -loadProperties environment.properties'
print 'Property file should contain the following properties: '
print "adminUrl='localhost:7101'"
print "adminUser='weblogic'"
print "adminPwd='welcome1'"
#
# Create a boot properties file.
def createBootPropertiesFile(directoryPath,fileName, username, password):
print ('Create Boot Properties File for folder: '+directoryPath)
print (lineSeperator)
serverDir = File(directoryPath)
bool = serverDir.mkdirs()
fileNew=open(directoryPath + '/'+fileName, 'w')
fileNew.write('username=%s\n' % username)
fileNew.write('password=%s\n' % password)
fileNew.flush()
fileNew.close()
#
# Create Startup Properties File
def createAdminStartupPropertiesFile(directoryPath, args):
print 'Create AdminServer Boot Properties File for folder: '+directoryPath
print (lineSeperator)
adminserverDir = File(directoryPath)
bool = adminserverDir.mkdirs()
fileNew=open(directoryPath + '/startup.properties', 'w')
args=args.replace(':','\\:')
args=args.replace('=','\\=')
fileNew.write('Arguments=%s\n' % args)
fileNew.flush()
fileNew.close()
#
# Set Log properties
def setLogProperties(logMBeanPath, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan):
print '\nSet Log Properties for: '+logMBeanPath
print (lineSeperator)
cd(logMBeanPath)
print ('Server log path: '+pwd())
print '. set FileName to '+logFile
set('FileName' ,logFile)
print '. set FileCount to '+str(fileCount)
set('FileCount' ,int(fileCount))
print '. set FileMinSize to '+str(fileMinSize)
set('FileMinSize' ,int(fileMinSize))
print '. set RotationType to '+rotationType
set('RotationType',rotationType)
print '. set FileTimeSpan to '+str(fileTimeSpan)
set('FileTimeSpan',int(fileTimeSpan))
#
#
def createServerLog(serverName, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan):
print ('\nCreate Log for '+serverName)
print (lineSeperator)
cd('/Server/'+serverName)
create(serverName,'Log')
setLogProperties('/Server/'+serverName+'/Log/'+serverName, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan)
#
# Change DataSource to XA
def changeDatasourceToXA(datasource):
print 'Change datasource '+datasource
print (lineSeperator)
cd('/')
cd('/JDBCSystemResource/'+datasource+'/JdbcResource/'+datasource+'/JDBCDriverParams/NO_NAME_0')
set('DriverName','oracle.jdbc.xa.client.OracleXADataSource')
print '. Set UseXADataSourceInterface='+'True'
set('UseXADataSourceInterface','True')
cd('/JDBCSystemResource/'+datasource+'/JdbcResource/'+datasource+'/JDBCDataSourceParams/NO_NAME_0')
print '. Set GlobalTransactionsProtocol='+'TwoPhaseCommit'
set('GlobalTransactionsProtocol','TwoPhaseCommit')
cd('/')
#
#
def createCluster(cluster):
print ('\nCreate '+cluster)
print (lineSeperator)
cd('/')
create(cluster, 'Cluster')
#
# Create a Unix Machine
def createUnixMachine(serverMachine,serverAddress):
print('\nCreate machine '+serverMachine+' with type UnixMachine')
print (lineSeperator)
cd('/')
create(serverMachine,'UnixMachine')
cd('UnixMachine/'+serverMachine)
create(serverMachine,'NodeManager')
cd('NodeManager/'+serverMachine)
set('ListenAddress',serverAddress)
#
# Add server to Unix Machine
def addServerToMachine(serverName, serverMachine):
print('\nAdd server '+serverName+' to '+serverMachine)
print (lineSeperator)
cd('/Servers/'+serverName)
set('Machine',serverMachine)
#
# Determine the Server Java Args
def getServerJavaArgs(serverName,javaArgsBase,logsHome):
javaArgs = javaArgsBase+' -Dweblogic.Stdout='+logsHome+'/'+serverName+'.out -Dweblogic.Stderr='+logsHome+'/'+serverName+'_err.out'
return javaArgs
#
# Change Managed Server
def changeManagedServer(server,listenAddress,listenPort,javaArgs):
print '\nChange ManagedServer '+server
print (lineSeperator)
cd('/Servers/'+server)
print '. Set listen address and port to: '+listenAddress+':'+str(listenPort)
set('ListenAddress',listenAddress)
set('ListenPort' ,int(listenPort))
# ServerStart
print ('. Create ServerStart')
create(server,'ServerStart')
cd('ServerStart/'+server)
print ('. Set Arguments to: '+javaArgs)
set('Arguments' , javaArgs)
# SSL
cd('/Server/'+server)
print ('. Create server SSL')
create(server,'SSL')
cd('SSL/'+server)
print ('. Set SSL Enabled to: '+'False')
set('Enabled' , 'False')
print ('. Set SSL HostNameVerificationIgnored to: '+'True')
set('HostNameVerificationIgnored', 'True')
#
if jsseEnabled == 'true':
print ('. Set JSSEEnabled to: '+ 'True')
set('JSSEEnabled','True')
else:
print ('. Set JSSEEnabled to: '+ 'False')
set('JSSEEnabled','False')
#
# Create a Managed Server
def createManagedServer(server,listenAddress,listenPort,cluster,machine,
javaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan):
print('\nCreate '+server)
print (lineSeperator)
cd('/')
create(server, 'Server')
cd('/Servers/'+server)
javaArgs=getServerJavaArgs(server,javaArgsBase,logsHome)
changeManagedServer(server,listenAddress,listenPort,javaArgs)
createServerLog(server, logsHome+'/'+server+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
print('Add '+server+' to cluster '+cluster)
cd('/')
assign('Server',server,'Cluster',cluster)
addServerToMachine(server, machine)
#
# Adapt a Managed Server
def adaptManagedServer(server,newSrvName,listenAddress,listenPort,cluster,machine,
javaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan):
print('\nAdapt '+server)
print (lineSeperator)
cd('/')
cd('/Servers/'+server)
# name of adminserver
print '. Rename '+server+' to '+ newSrvName
set('Name',newSrvName )
cd('/Servers/'+newSrvName)
javaArgs=getServerJavaArgs(newSrvName,javaArgsBase,logsHome)
changeManagedServer(newSrvName,listenAddress,listenPort,javaArgs)
createServerLog(newSrvName, logsHome+'/'+newSrvName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
print('Add '+newSrvName+' to cluster '+cluster)
cd('/')
assign('Server',newSrvName,'Cluster',cluster)
addServerToMachine(newSrvName, machine)
#
# Change Admin Server
def changeAdminServer(adminServerName,listenAddress,listenPort,javaArguments):
print '\nChange AdminServer'
print (lineSeperator)
cd('/Servers/AdminServer')
# name of adminserver
print '. Set Name to '+ adminServerName
set('Name',adminServerName )
cd('/Servers/'+adminServerName)
# address and port
print '. Set ListenAddress to '+ server1Address
set('ListenAddress',server1Address)
print '. Set ListenPort to '+ str(listenPort)
set('ListenPort' ,int(listenPort))
#
# ServerStart
print 'Create ServerStart'
create(adminServerName,'ServerStart')
cd('ServerStart/'+adminServerName)
print '. Set Arguments to: '+javaArguments
set('Arguments' , javaArguments)
# SSL
cd('/Server/'+adminServerName)
print 'Create SSL'
create(adminServerName,'SSL')
cd('SSL/'+adminServerName)
set('Enabled' , 'False')
set('HostNameVerificationIgnored', 'True')
#
if jsseEnabled == 'true':
print ('. Set JSSEEnabled to: '+ 'True')
set('JSSEEnabled','True')
else:
print ('. Set JSSEEnabled to: '+ 'False')
set('JSSEEnabled','False')
#
#
def main():
try:
#
# Section 1: Base Domain + Admin Server
print (lineSeperator)
print ('1. Create Base domain '+soaDomainName)
print('\nCreate base wls domain with template '+wlsjar)
print (lineSeperator)
readTemplate(wlsjar)
#
cd('/')
# Domain Log
print('Set base_domain log')
create('base_domain','Log')
setLogProperties('/Log/base_domain', logsHome+soaDomainName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
#
# Admin Server
adminJavaArgs = getServerJavaArgs(adminServerName,adminJavaArgsBase,logsHome)
changeAdminServer(adminServerName,adminListenAddress,adminListenPort,adminJavaArgs)
createServerLog(adminServerName, logsHome+adminServerName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
#
print('\nSet password in '+'/Security/base_domain/User/weblogic')
cd('/')
cd('Security/base_domain/User/weblogic')
# weblogic user name + password
print('. Set Name to: ' +adminUser)
set('Name',adminUser)
cmo.setPassword(adminPwd)
#
if productionMode == 'true':
print('. Set ServerStartMode to: ' +'prod')
setOption('ServerStartMode', 'prod')
else:
print('. Set ServerStartMode to: ' +'dev')
setOption('ServerStartMode', 'dev')
#
print('write Domain...')
# write path + domain name
writeDomain(soaDomainHome)
closeTemplate()
#
createAdminStartupPropertiesFile(soaDomainHome+'/servers/'+adminServerName+'/data/nodemanager',adminJavaArgs)
createBootPropertiesFile(soaDomainHome+'/servers/'+adminServerName+'/security','boot.properties',adminUser,adminPwd)
createBootPropertiesFile(soaDomainHome+'/config/nodemanager','nm_password.properties',adminUser,adminPwd)
#
es = encrypt(adminPwd,soaDomainHome)
#
readDomain(soaDomainHome)
#
print('set Domain password for '+soaDomainName)
cd('/SecurityConfiguration/'+soaDomainName)
set('CredentialEncrypted',es)
#
print('Set nodemanager password')
set('NodeManagerUsername' ,adminUser )
set('NodeManagerPasswordEncrypted',es )
#
cd('/')
setOption( "AppDir", soaApplicationsHome )
#
print('Finished base domain.')
#
# Section 2: Templates
print('\n2. Extend Base domain with templates.')
print (lineSeperator)
print ('Adding Webservice template '+wlservicetpl)
addTemplate(wlservicetpl)
# SOA Suite
if soaEnabled == 'true':
print ('Adding SOA Template '+soatpl)
addTemplate(soatpl)
else:
print('SOA is disabled')
# BPM
if bpmEnabled == 'true':
print ('Adding BPM Template '+bpmtpl)
addTemplate(bpmtpl)
else:
print('BPM is disabled')
# OSB
if osbEnabled == 'true':
print ('Adding OSB template '+osbtpl)
addTemplate(osbtpl)
else:
print('OSB is disabled')
#
print ('Adding ApplCore Template '+applCoreTpl)
addTemplate(applCoreTpl)
#
if bamEnabled == 'true':
print ('Adding BAM Template '+bamtpl)
addTemplate(bamtpl)
else:
print ('BAM is disabled')
#
if webtierEnabled == 'true' == true:
print ('Adding OHS Template '+ohsTpl)
addTemplate(ohsTpl)
else:
print('OHS is disabled')
#
if b2bEnabled == 'true':
print 'Adding B2B Template '+b2bTpl
addTemplate(b2bTpl)
else:
print('B2B is disabled')
#
if essEnabled == 'true':
print ('Adding ESS Template'+essBasicTpl)
addTemplate(essBasicTpl)
print ('Adding ESS Em Template'+essEmTpl)
addTemplate(essEmTpl)
else:
print('ESS is disabled')
#
dumpStack()
print ('Finished templates')
#
# Section 3: Change Datasources
print ('\n3. Change datasources')
print 'Change datasource LocalScvTblDataSource'
cd('/JDBCSystemResource/LocalSvcTblDataSource/JdbcResource/LocalSvcTblDataSource/JDBCDriverParams/NO_NAME_0')
set('URL',soaRepositoryDbUrl)
set('PasswordEncrypted',soaRepositoryStbPwd)
cd('Properties/NO_NAME_0/Property/user')
set('Value',soaRepositoryDbUserPrefix+'_STB')
#
print ('Call getDatabaseDefaults which reads the service table')
getDatabaseDefaults()
#
if soaEnabled == 'true':
changeDatasourceToXA('EDNDataSource')
if osbEnabled == 'true':
changeDatasourceToXA('wlsbjmsrpDataSource')
changeDatasourceToXA('OraSDPMDataSource')
changeDatasourceToXA('SOADataSource')
#
if bamEnabled == 'true':
changeDatasourceToXA('BamDataSource')
#
print 'Finshed DataSources'
#
# Section 4: Create UnixMachines, Clusters and Managed Servers
print ('\n4. Create UnixMachines, Clusters and Managed Servers')
print (lineSeperator)
cd('/')
#
createUnixMachine(server1Machine,server1Address)
if server2Enabled == 'true':
createUnixMachine(server2Machine,server2Address)
#
addServerToMachine(adminServerName,server1Machine)
#
cd('/')
# SOA Suite
if soaEnabled == 'true':
createCluster(soaClr)
adaptManagedServer('soa_server1',soaSvr1,server1Address, soaSvr1Port,soaClr,server1Machine,
soaJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
if soaSvr2Enabled == 'true':
createManagedServer(soaSvr2,server2Address,soaSvr2Port,soaClr,server2Machine,
soaJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
else:
print('Do not create SOA Server2')
#
# OSB
if osbEnabled == 'true':
createCluster(osbClr)
adaptManagedServer('osb_server1',osbSvr1,server1Address,osbSvr1Port,osbClr,server1Machine,
osbJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
if osbSvr2Enabled == 'true':
createManagedServer(osbSvr2,server2Address,osbSvr2Port,osbClr,server2Machine,
osbJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
else:
print('Do not create OSB Server2')
#
# BAM
if bamEnabled == 'true':
createCluster(bamClr)
adaptManagedServer('bam_server1',bamSvr1,server1Address,bamSvr1Port,bamClr,server1Machine,
bamJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
if bamSvr2Enabled == 'true':
createManagedServer(bamSvr2,server2Address,bamSvr2Port,bamClr,server2Machine,
bamJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
else:
print('Do not create BAM Server2')
#
# ESS
if essEnabled == 'true':
createCluster(essClr)
adaptManagedServer('ess_server1',essSvr1,server1Address,essSvr1Port,essClr,server1Machine,
essJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
if essSvr2Enabled == 'true':
createManagedServer(essSvr2,server2Address,essSvr2Port,essClr,server2Machine,
essJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
else:
print('Do not create ESS Server2')

#
print ('Finished creating Machines, Clusters and ManagedServers')
#
# Section 5: Add Servers to ServerGroups.
print ('\n5. Add Servers to ServerGroups')
print (lineSeperator)
cd('/')
print ('Add server groups '+adminSvrGrpDesc+' to '+adminServerName)
setServerGroups(adminServerName, adminSvrGrp)
# SOA
if soaEnabled == 'true':
print ('Add server group '+soaSvrGrpDesc+' to '+soaSvr1+' and possibly '+soaSvr2)
setServerGroups(soaSvr1, soaSvrGrp)
if soaSvr2Enabled == 'true':
setServerGroups(soaSvr2, soaSvrGrp)
#
# OSB
if osbEnabled == 'true':
print ('Add server group '+osbSvrGrpDesc+' to '+osbSvr1+' and possibly '+osbSvr2)
setServerGroups(osbSvr1, osbSvrGrp)
if osbSvr2Enabled == 'true':
setServerGroups(osbSvr2, osbSvrGrp)
#
if bamEnabled == 'true':
print ('Add server group '+bamSvrGrpDesc+' to '+bamSvr1+' and possibly '+bamSvr2)
setServerGroups(bamSvr1, bamSvrGrp)
if bamSvr2Enabled == 'true':
setServerGroups(bamSvr2, bamSvrGrp)
#
if essEnabled == 'true':
print ('Add server group '+essSvrGrpDesc+' to '+essSvr1+' and possibly '+essSvr2)
setServerGroups(essSvr1, essSvrGrp)
if essSvr2Enabled == 'true':
setServerGroups(essSvr2, essSvrGrp)
#
print ('Finished ServerGroups.')
#
updateDomain()
closeDomain()
#
# Section 6: Create boot properties files.
print ('\n6. Create boot properties files')
print (lineSeperator)
# SOA
if soaEnabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+soaSvr1+'/security','boot.properties',adminUser,adminPwd)
if soaSvr2Enabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+soaSvr2+'/security','boot.properties',adminUser,adminPwd)
#
# OSB
if osbEnabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+osbSvr1+'/security','boot.properties',adminUser,adminPwd)
if osbSvr2Enabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+osbSvr2+'/security','boot.properties',adminUser,adminPwd)
#
if bamEnabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+bamSvr1+'/security','boot.properties',adminUser,adminPwd)
if bamSvr2Enabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+bamSvr2+'/security','boot.properties',adminUser,adminPwd)
#
if essEnabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+essSvr1+'/security','boot.properties',adminUser,adminPwd)
if essSvr2Enabled == 'true':
createBootPropertiesFile(soaDomainHome+'/servers/'+essSvr2+'/security','boot.properties',adminUser,adminPwd)
#
print ('\nFinished')
#
print('\nExiting...')
exit()
except NameError, e:
print ('Apparently properties are not set.')
print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
usage()
except:
apply(traceback.print_exception, sys.exc_info())
stopEdit('y')
exit(exitcode=1)
#call main()
main()
exit()

Conclusion

As said, although I think this script is already quite adaptable through the property file, there are of course many improvements conceivable for your particular situation. It creates a 'skeleton' SOA or Service Bus domain, but you might need to adapt it for particular network topologies or security settings.
And although it creates a 'per domain' nodemanager configuration, you would need to adapt it for your particular needs to get the domain started. I only tested this by starting the Admin server using the startWeblogic.sh script.

Having such a script is a valuable asset: it allows you to (re-)create your domains repeatably and in a standard way, ensuring that the different environments (dev, test, acc, prod) are created similarly.

One last thing though: the script somehow registers the creation of the domain, and thus the use of the datasources, in the repository. So you can't just throw away the domain and recreate it against the current repository. You'll need to recreate the repository as well.


database option - compression part 2

Pat Shuff - Thu, 2016-06-09 02:07
Yesterday we looked at the different compression options that are available for a database. Today we are going to walk through an example. The example comes from Oracle by Example - Compression. This is a hands on tutorial that has you execute code in an 11g database. Note that you must create this database as a High Performance or Extreme Performance database. If you create a Standard Edition or Enterprise Edition the execution will fail with an option not available error as we saw with partitioning a couple of days ago.

To start, we create an 11g database in the Oracle Public Cloud. We create the instance, wait an hour or so, change the network configuration to open port 1521 to the public, and connect using sys as sysdba to the instance. We are going to use SQL Developer in our Windows 2012 instance to make the connection. To get the connection information, we go to the database console and get the ip address of the instance.

We then go to our sqldeveloper tool and add this database connection. We can use ssh tunneling or open port 1521 to the world to make the connection.

The first step that we are told to do is to execute the setup.sql file available via the tutorial. We are not going to execute this script but will instead do everything by hand through SQL Developer. The purpose of this script is to enable the user sh, set a password, and grant privileges to the user. We can do all of this from SQL Developer. The code that the tutorial recommends using is

connect / as sysdba
set echo on
alter user sh identified by sh account unlock;
grant create tablespace to sh;
grant drop tablespace to sh;
First, we don't want to use such a simple password, so we set it to something a little more secure. We select the database instance (in our example it is prs11gHP), where we are connected as the sys user. We select Other Users..., then the user sh, and edit the entry. When the screen comes up to edit the user, we enable the account, set the password, grant the create tablespace and drop tablespace rights to the user, and apply. This effectively executes the script shown above.

At this point, we have a user that can create and drop tables. We now want to load the create_sales_tbls.sql code from the tutorial.

The create script first drops the existing tables. This might generate an error if a table does not exist. That error is not significant and won't stop the rest of the script from executing. We then create a non-compressed and a compressed table by selecting from the demo sales table that exists if you installed the demo data during your install.

drop table sales_nocompress purge
/
drop table sales_compress purge
/
set echo on
set timing on
create table sales_nocompress
as select * from sales
/

create table sales_compress compress for all operations
as select * from sales where 1=0
/

select count(*)
from sales_compress
/
Note that the two create statements create tables with the same structure. What we see is that the creation of the first table takes just over 4 seconds because we pull in the sales table data. The second creation does not take as long because the data is in cache and the where clause (1=0) matches no rows, so no data is copied.

When we do the select, the row count should be zero based on the where clause. We then do an insert into the table to create a table of the same size. This is done by executing

@oltp_insert

set timing off
select count(*) from sales_compress
/
select count(*) from sales_nocompress
/
This executes the oltp_insert.sql code and then compares the counts of the two tables to make sure they contain the same number of records. The code that is executed in the insert script is
SQL> set timing on
SQL> declare

  commit_after integer := 0 ;
  loop_variable integer ;

  cursor c_sales is
  select prod_id
  , cust_id
  , time_id
  , channel_id
  , promo_id
  , quantity_sold
  , amount_sold
  from sales ;

begin

  for r_sales in c_sales
  loop

    if commit_after = 0
    then

      loop_variable := 0 ;

      commit_after := round(dbms_random.value(1,1)) ;

    end if ;

    insert into sales_compress
    (prod_id, cust_id, time_id, channel_id, promo_id, quantity_sold, amount_sold)
    values
    ( r_sales.prod_id
    , r_sales.cust_id
    , r_sales.time_id
    , r_sales.channel_id
    , r_sales.promo_id
    , r_sales.quantity_sold
    , r_sales.amount_sold
    ) ;

    if loop_variable = commit_after
    then
      commit ;
      commit_after := 0 ;
    end if ;

    loop_variable := loop_variable + 1 ;

  end loop ;

end ;
/
We are not going to go through this code in detail, but it does insert the same number of entries as the uncompressed table contains. The values that are inserted are pulled from the sales table and inserted into the compressed table. Note that the table uses OLTP compression because we created it with the compress for all operations clause.
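
As a side note on syntax (a sketch, not part of the tutorial): compress for all operations, used above, is the earlier spelling of what 11.2 calls compress for oltp. In 11.2 syntax the two main table compression variants look like this:

create table sales_basic compress basic
as select * from sales;

create table sales_oltp compress for oltp
as select * from sales;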

We can execute the examine_storage.sql script to see that the compressed table takes up about half the storage of the uncompressed table. We can also see that the table is enabled for OLTP compression by looking at the parameters of the table with a select statement.
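
The script contents are not reproduced here, but a minimal sketch of an equivalent check, run as the sh user against the table names created above (dictionary column names as of 11g), might look like this:

select segment_name, bytes/1024/1024 as size_mb
from user_segments
where segment_name in ('SALES_NOCOMPRESS','SALES_COMPRESS');

select table_name, compression, compress_for
from user_tables
where table_name in ('SALES_NOCOMPRESS','SALES_COMPRESS');

The first query compares the allocated segment sizes and the second confirms the compression attributes recorded in the data dictionary.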

We can also look at the select time differences by reading all of the data from the compressed and uncompressed tables. Note that the compressed table takes about 3/4 of the time that the uncompressed takes to execute.

In summary, we were able to create an 11g database, create a compressed and a non-compressed table, and look at the relative size and timing when retrieving data from each table. We can experiment with this data and grow the table size to see if we still get the same improvements as the table gets larger. We can try different compression algorithms to see if they affect performance or compression ratios. We have done all of this in a database as a service public cloud instance. The only tools that we needed were a SQL Developer connection and an Oracle Cloud account. We could have done this with Amazon RDS as well as EC2 and Microsoft Azure Compute. The key difference is that this experiment took about two hours to execute and we only consumed about $15 to learn and play with compression on 11g (or 12c), given that a low memory option for the database is only $6.720 per OCPU per hour. With the pay as you go option we burn less than $15 and then turn off the service. We could have uploaded our own data sets into the database instance and played with the compression advisor in a sandbox without affecting our production environment. If we were using database backup as a service, we could have restored a single table from our backup and played with the compression variations and the compression advisor.
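
As a pointer for that kind of sandbox experiment: the compression advisor is exposed through the DBMS_COMPRESSION package (available from 11.2). A rough sketch of estimating the OLTP compression ratio for our uncompressed table might look like the following - the tablespace, schema, and table names are assumptions from this example, and the parameter list shown is the 11.2 one, so check the signature for your release:

set serveroutput on
declare
  l_blkcnt_cmp   pls_integer;
  l_blkcnt_uncmp pls_integer;
  l_row_cmp      pls_integer;
  l_row_uncmp    pls_integer;
  l_cmp_ratio    number;
  l_comptype_str varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'USERS',            -- scratch tablespace, an assumption
    ownname        => 'SH',
    tabname        => 'SALES_NOCOMPRESS',
    partname       => null,
    comptype       => dbms_compression.comp_for_oltp,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line('Estimated compression ratio: '||l_cmp_ratio||' ('||l_comptype_str||')');
end;
/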

Combining Features - Wrong Results With Scalar Subquery Caching

Randolf Geist - Wed, 2016-06-08 15:39
Quite often you can get into trouble with Oracle when you start combining different features.

In the case of one of my clients it is the combination of user-defined PL/SQL functions that can raise exceptions (think of a currency conversion where a non-existent currency code gets passed into the function), DML error logging, and attempting to improve performance by wrapping the PL/SQL function call in a scalar subquery to benefit from the built-in scalar subquery caching feature of the SQL runtime engine.

As long as the scalar subquery wasn't used everything worked as expected, but some time after adding the scalar subquery it became obvious that wrong results occurred - in this particular case it meant that rows that should have been rejected and written to the error logging table due to the exception raised in the user-defined PL/SQL function suddenly showed up in the target table. What was even more worrying - they included a converted amount, obviously using some arbitrary conversion rate, although the PL/SQL function didn't actually return any value but raised an exception for unknown currency codes.

The interesting point here is that everything works as expected if the error logging feature doesn't get used, or if the scalar subquery doesn't get used.

In case of no error logging the whole statement will roll back if an exception occurs, and that holds true and works even with scalar subquery caching in place.

Without scalar subquery caching the correct result gets produced, all rows that are supposed to be rejected are written to the error logging table and don't show up in the target table.

And if no exception gets raised (only valid and existing currency codes get used), again the results are correct in the target table even when using scalar subquery caching - so the caching of the values in principle works correctly.

Only when combining those features and dealing with invalid data the problem shows up, and as it can be demonstrated from a simple test case, even then it only occurs under certain circumstances.

Here is a simple test case for reproducing the issue:

set echo on timing on time on

drop table t_source purge;

drop table t_dest purge;

drop table ERR$_T_DEST purge;

drop package test_plsql_caching_with_except;

create table t_source
as
select
rownum as id
, cast('C' || to_char(round(trunc(rownum / 100000)) + 1, 'FM00') as varchar2(3)) as currency_code_clust
, cast('C' || to_char(mod(rownum, 10) + 1, 'FM00') as varchar2(3)) as currency_code_scat
, 100 as amount
from
(select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3) gen1
, (select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3) gen2
;

exec dbms_stats.gather_table_stats(null, 't_source')

create or replace package test_plsql_caching_with_except
as
function test_plsql_caching_with_except(s_currency_code in varchar2, s_do_excpt_on in varchar2 default 'C10') return number parallel_enable;
end;
/

create or replace package body test_plsql_caching_with_except
as
function test_plsql_caching_with_except(s_currency_code in varchar2, s_do_excpt_on in varchar2 default 'C10') return number parallel_enable
is
begin
if s_currency_code = s_do_excpt_on then
raise_application_error(-20001, 'exception encountered');
end if;
--
return to_number(substr(s_currency_code, 2));
end;
end;
/

create table t_dest (id number(*, 0), currency_code varchar2(3), org_amount number, conv_amount number);

exec DBMS_ERRLOG.create_error_log (dml_table_name => 't_dest')

truncate table t_dest;

truncate table ERR$_T_DEST;

alter session enable parallel dml;

-- Using "currency_code_scat" as function parameter gives correct result in T_DEST
-- But column CONV_AMOUNT in ERR$_T_DEST shows some function result that shouldn't be there since the function raised an exception (reproduces in 11.2.0.4 but not 12.1.0.2)
--
-- define ccy_code = currency_code_scat

-- Using "currency_code_clust" as function parameter gives wrong result in T_DEST - rows that should go to ERR$_T_DEST
-- due to exception being raised in PL/SQL function end up in T_DEST (with some "cached" function result applied)
-- There shouldn't be any rows in T_DEST with "currency_code = 'C10'"
-- Applies to both serial and Parallel Execution, conventional and direct path insert
define ccy_code = currency_code_clust

-- Use commented, non-cached function call to see correct results
insert /*+
append
no_parallel
-- parallel(n)
*/
into t_dest (id, currency_code, org_amount, conv_amount)
select id, &ccy_code, amount,
--test_plsql_caching_with_except.test_plsql_caching_with_except(&ccy_code) * amount
(select test_plsql_caching_with_except.test_plsql_caching_with_except(&ccy_code) * amount from dual)

from t_source
log errors
reject limit unlimited;

commit;

-- This is supposed to be 900K
select count(*) from t_dest;

-- This is supposed to be 0
select count(*) from t_dest where currency_code = 'C10';

-- This is supposed to be 100K
select count(*) from ERR$_T_DEST;

-- This is supposed to be 0
select count(*) from ERR$_T_DEST where conv_amount is not null;
The second parameter to the PL/SQL function determines for which code an exception should be raised (or whether one is raised at all), and hence allows reproducing different scenarios. By default it will raise an exception for code "C10", which represents 10% (100K rows) of the data in T_SOURCE. So 900K rows should end up in the destination table and those 100K rows having code "C10" should go into the error logging table.

Running the test case without the scalar subquery gets the correct result:

SQL> -- This is supposed to be 900K
SQL> select count(*) from t_dest;

COUNT(*)
----------
900000

Elapsed: 00:00:00.70
SQL>
SQL> -- This is supposed to be 0
SQL> select count(*) from t_dest where currency_code = 'C10';

COUNT(*)
----------
0

Elapsed: 00:00:00.21
SQL>
SQL> -- This is supposed to be 100K
SQL> select count(*) from ERR$_T_DEST;

COUNT(*)
----------
100000

Elapsed: 00:00:00.70
SQL>
SQL> -- This is supposed to be 0
SQL> select count(*) from ERR$_T_DEST where conv_amount is not null;

COUNT(*)
----------
0

Elapsed: 00:00:00.04
SQL>
Use the scalar subquery, and suddenly, instead of the 100K rows that should be rejected, only a single one gets rejected and 99,999 make it into the target table with some converted amount:

SQL> -- This is supposed to be 900K
SQL> select count(*) from t_dest;

COUNT(*)
----------
999999

Elapsed: 00:00:00.23
SQL>
SQL> -- This is supposed to be 0
SQL> select count(*) from t_dest where currency_code = 'C10';

COUNT(*)
----------
99999

Elapsed: 00:00:00.20
SQL>
SQL> -- This is supposed to be 100K
SQL> select count(*) from ERR$_T_DEST;

COUNT(*)
----------
1

Elapsed: 00:00:00.00
SQL>
SQL> -- This is supposed to be 0
SQL> select count(*) from ERR$_T_DEST where conv_amount is not null;

COUNT(*)
----------
1

Elapsed: 00:00:00.01
SQL>
Now the critical point is the order of the column values in the column passed into the PL/SQL function. The problem can only be reproduced when using the "clustered" variant of the currency codes (column CURRENCY_CODE_CLUST in T_SOURCE). Switch to the "scattered" variant (CURRENCY_CODE_SCAT), and the (almost) correct result gets produced, at least in the target table. I say "almost" because looking closer at the rows written to the error logging table it becomes obvious that they contain a converted amount that definitely shouldn't be there - so the wrong result error already shows up here, since the row was written to the error logging table due to the PL/SQL function raising an exception, hence the converted amount should be NULL. This "wrong converted amount in the error logging table" no longer occurs in 12c (12.1.0.2), so something has changed but the problem of wrong results (rows with errors / exceptions go into the target table) is still reproducible in 12c.

The problem seems to be related to the way how the scalar subquery caching works when the same value comes again - Jonathan Lewis described this feature in "Cost Based Oracle" a long time ago. There seems to be special optimisation that avoids the cache lookup in case the same value as before is used again, like a "single value cache" on top of the actual cache. And it looks like it is this special treatment that doesn't handle correctly the case of the exception being raised in the PL/SQL function.
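
A quick way to observe this caching behaviour - a sketch with made-up names, not part of the original test case - is to count the actual function executions with a package variable and compare the plain call to the scalar subquery variant:

set serveroutput on

create or replace package pkg_count as
  g_calls number := 0;
  function f(p_code in varchar2) return number;
end;
/
create or replace package body pkg_count as
  function f(p_code in varchar2) return number is
  begin
    g_calls := g_calls + 1;
    return to_number(substr(p_code, 2));
  end;
end;
/

-- plain function call: expect roughly one execution per row
exec pkg_count.g_calls := 0
select max(pkg_count.f(currency_code_scat)) from t_source;
exec dbms_output.put_line('calls: ' || pkg_count.g_calls)

-- scalar subquery wrapper: expect far fewer executions due to caching
exec pkg_count.g_calls := 0
select max((select pkg_count.f(currency_code_scat) from dual)) from t_source;
exec dbms_output.put_line('calls: ' || pkg_count.g_calls)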

The test case also allows checking whether other caching mechanisms suffer from the same problem - declaring the function "deterministic" (although in the real life case it isn't really deterministic and hence shouldn't be declared as such, since that opens up other possibilities like building a Materialized View or a function-based index on the PL/SQL function, which could again lead to wrong results) or making use of the PL/SQL function result cache - and they don't. Both produce the correct results under all scenarios tested.
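
For reference, those two alternatives differ only in the function declaration - a simplified sketch with assumed names, reduced to the conversion part of the test function:

create or replace function f_convert_det(p_code in varchar2) return number
deterministic
is
begin
  return to_number(substr(p_code, 2));
end;
/

create or replace function f_convert_rc(p_code in varchar2) return number
result_cache
is
begin
  return to_number(substr(p_code, 2));
end;
/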

I have a more complex variant of the test case that allows spotting more clearly the critical difference between the "scalar subquery / deterministic" caching, which obviously happens in the SQL engine, and the PL/SQL function result cache, which is implemented in the PL/SQL engine: it counts the number of times the function code actually gets executed, and how many times the PL/SQL code block got executed according to the Library Cache information.

In this case here, where the overhead of the actual function call is very small but the function is called very frequently, the PL/SQL function result cache adds much more overhead than the SQL engine caching, because the context switch to PL/SQL needs to be done for every function invocation, only to then be resolved in the PL/SQL function result cache instead of actually executing the function code.

Whereas the "scalar subquery / deterministic" cache avoids the context switch to PL/SQL and uses values from a cache maintained in the SQL engine.

So the smaller the overhead of the actual PL/SQL function code and the more frequently the function gets called, the more overhead the usage of the PL/SQL function result cache will produce in comparison to the SQL engine based caching mechanisms.

Of course the PL/SQL function result cache comes with other features that are not there with the others - like cross session / statement invocation caching that might make a big difference if a single function call causes a lot of overhead.

The Future of Work: What does a Smart Office look like today?

Oracle AppsLab - Wed, 2016-06-08 15:23

Editor’s note: We just returned from Holland last week where we attended AMIS 25, which was a wonderful show. One of the demos we showed was the Smart Office; Noel (@noelportugal) also gave a presentation on it.

We’ve been showing the Smart Office since OOW last year, and it remains one of our most popular demos because it uses off-the-shelf components that are available today, e.g. Amazon Echo, Leap Motion, Philips Hue lights, beacons, etc., making it an experience that anyone could replicate today with some development work.

The following article is cross-posted from our Emerging Technologies pages on oracle.com. Stay tuned for details on each of the components of the Smart Office.

In early 2015, the AppsLab team decided we were going to showcase the latest emerging technologies in an integrated demo. As part of the Oracle Applications User Experience group, our main goal as the emerging technologies team is to design products that will increase productivity and user participation in Oracle software.

We settled on the idea of the Smart Office, which is designed with the future of enterprise workplaces in mind. With the advent of the Internet of Things and more home automation in consumer products, users are expecting similar experiences in the workplace. We wanted to build an overall vision of how users will accomplish their tasks with the help of emerging technologies, no matter where they might be working.

Technologies such as voice control, gesture, and proximity have reached what we consider an acceptable maturity level for public consumption. Inexpensive products such as the Amazon Echo, Leap Motion and Bluetooth beacons are becoming more common in users’ daily lives. These examples of emerging technology have become cornerstones in our vision for the Smart Office.

Wearable technology also plays an important role in our idea of the Future of Work. Smart watches are becoming ubiquitous, and the price of wireless microprocessors continues to decrease. Dedicated mobile devices, our research shows, can increase productivity in the workplace when they are properly incorporated into the user experience as a whole.

Building for you, a Sales Cloud example

We first created what we call a user persona to assist us in building the Smart Office. This helps us develop very specific work flows using very specific technology that can be widely applied to a variety of software users. In this case, we started with a sales example as they are often mobile workers.

Sally Smith, our development example for the Smart Office, is a regional sales vice president who is traveling to her company’s headquarters office. Traveling to another office often requires extra effort to find and book a working space. To help Sally with that task, we built a geo-fence-enabled mobile app as well as a Smart Badge. Here’s what these two components help her do:

  • As Sally approaches the office building, her mobile device (using geo-fencing capabilities) alerts her via her smart watch and helps her find her way to an available office space, using micro-location with beacons. She uses her Smart Badge, which has access to data about her employee status, to go through the security doors at the office building.
  • As Sally approaches the available office space, her Smart Badge proximity sensor (a Bluetooth beacon) connects with a Lighthouse, which is a small touch-screen device outside the office space that displays space availability and works as the “brain” to control IoT devices inside the space. The proximity with the Lighthouse triggers a second confirmation to her smart watch to unlock the office and reserve the space in the company’s calendar system. This authenticates her reservation in two ways.

  • As Sally enters the office, her global preferences are loaded into the office “brain.” Settings such as light brightness and color (Hue Lights), and room temperature (Nest Thermostat) are set to her liking.
  • The office screens then start to load Sally’s familiar pictures as well as useful data relative to her location, such as weather or local events, on two Infoscreens. An Infoscreen is a Wi-Fi-enabled digital frame or LCD screen hung on the wall.

Sally has already interacted with her Smart Office in several ways. But up to this point, all of the interactions have been triggered or captured by emerging technology built into mobile devices that she is carrying with her. Now, she is ready to interact more purposefully with the Smart Office.

  • Sally uses the Amazon Echo voice control to talk to the office: “Alexa, start my day.” Since she has been authenticated by the system already, it knows that the Oracle Sales Cloud is the application she is most likely to need, and the welcome page is now loaded in the touchscreen at the desk. She can use voice navigation to check on her opportunities, leads, or any other section of the Sales Cloud.
  • Sally was working on the plane with Oracle Sales Cloud, but she did not have a chance to save her work before landing. Session portability is built into the cloud user experience, which takes care of saving her work when she is offline. Now that she is sitting inside the Smart Office and back online, she just swipes her screen to transfer her incomplete work onto the desktop screen.

  • The Smart Office also uses empty wall space to project data throughout the day. On this Ambient Screen, Sally could use her voice (Amazon Echo), or hand gestures (Leap Motion), to continue her work. Since Sally has a global sales team, she can use the Ambient Screen to project a basic overview of her team performance metrics, location, and notifications.
  • If Sally needs to interact with any of the notifications or actions she sees on the Ambient Screen, she can use a grab-and-throw motion to bring the content to her desk screen. She can also use voice commands to call up a team map, for example, and ask questions about her team such as their general location.

  • As Sally finishes her day and gets ready to close her session inside the Smart Office, she can use voice commands to turn everything off.

Find out more

The Smart Office was designed to use off-the-shelf components on purpose. We truly believe that the Future of Work no longer relies on a single device. Instead, a set of cloud-connected devices help us accomplish our work in the most efficient manner.

For more on how we decide which pieces of emerging technology to investigate and develop in a new way for use in the enterprise world, read “Influence of Emerging Technology,” on the Usable Apps website.

See this for yourself and get inspired by what the Oracle Applications Cloud looks like when it’s connected to the future. Request a lab tour.

OGh Oracle DBA and SQL Celebration Day 2016

Tim Hall - Wed, 2016-06-08 12:01

The OGh Oracle DBA and SQL Celebration Day was the last stop on my trip and it went super-quick! The morning started with a keynote by Dad. He managed to resist the urge to talk about me for 60 minutes, which was good. It gets really embarrassing when he keeps telling everyone I’ve eclipsed his accomplishments in every way possible!

After the keynote I was speaking about efficient PL/SQL calls from SQL. I lost the ability to type, but I think the session went OK. There is a playlist of videos about this stuff here.

USING SELECT 'X' in query/sub-queries

Learn oracle 12c database management - Wed, 2016-06-08 10:18


--------------------------------------------------------
--  DDL for Table TAB1
--------------------------------------------------------

  CREATE TABLE "ATEST"."TAB1"
   (    "ID" NUMBER,
    "NAME" VARCHAR2(20 BYTE)
   ) ;

Insert into ATEST.TAB1 (ID,NAME) values (1,'AAA');
Insert into ATEST.TAB1 (ID,NAME) values (2,'BBB');
Insert into ATEST.TAB1 (ID,NAME) values (3,'EEE');
Insert into ATEST.TAB1 (ID,NAME) values (4,'FFF');


--------------------------------------------------------
--  DDL for Table TAB2
--------------------------------------------------------

  CREATE TABLE "ATEST"."TAB2"
   (    "ID" NUMBER,
    "NAME" VARCHAR2(20 BYTE)
   ) ;

Insert into ATEST.TAB2 (ID,NAME) values (1,'CCC');
Insert into ATEST.TAB2 (ID,NAME) values (2,'DDD');


Get records that exist in TAB1 and not in TAB2 using select 'X':


select * from TAB1 f where not exists (select 'X' from TAB2 where id=f.id);
ID    NAME
--    ---- 
4    FFF
3    EEE

In the above query we get all the records from TAB1 whose IDs do not have a match in TAB2.
Hence we do not get the records with IDs 1 & 2, as those IDs also exist in TAB2.
This is just like using "select * from TAB1 f where not exists (select ID from TAB2 where id=f.id);"


Get records that exist in TAB1 and in TAB2 using select 'X':


select * from TAB1 f where exists (select 'X' from TAB2 where id=f.id);

ID    NAME
--    ---- 
1    AAA
2    BBB

In the above query we get all the records from TAB1 whose IDs also exist in TAB2.
Hence we get only the records with IDs 1 & 2, as they exist in both tables.
This is just like using "select * from TAB1 f where exists (select ID from TAB2 where id=f.id);"
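
One thing worth noting (standard SQL behaviour, not specific to this example): the literal 'X' is just a placeholder. EXISTS only checks whether the subquery returns a row, so the select list is effectively ignored and any constant behaves the same:

select * from TAB1 f where exists (select 1 from TAB2 where id=f.id);
select * from TAB1 f where exists (select null from TAB2 where id=f.id);

Both return exactly the same rows as the select 'X' version above.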
Categories: DBA Blogs

Using Index Hints in oracle

Learn DB Concepts with me... - Wed, 2016-06-08 09:59

Hints : Hints are used to pass information that we know about our data and application to Oracle's optimizer. This can further improve the performance of our system. There can be instances where the default optimizer plan is not efficient for certain SQL statements. We can specify hints with those SQL statements to improve their efficiency. Hints should only be used as a last resort, if statistics were gathered and the query is still following a sub-optimal execution plan.

Example of the correct syntax for an index hint:

select /*+ index(TEST_IDX IDX_OS_USR) */ * from TEST_IDX;


If we alias the table (A in the case below), we must use the alias in the index hint:

select /*+ index(A IDX_OS_USR) */ * from TEST_IDX A;

Note :

Oracle decides whether or not to use this hint: if Oracle finds that it has a faster execution plan without using the hint, it ignores it. You might think that an index would be helpful and provide it as a hint, but Oracle may still ignore it. In such a case you can see the hint being ignored by looking at the execution plan.
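
To check whether the optimizer honoured or ignored the hint, we can display the execution plan - a quick sketch, reusing the table and index names from the example above:

explain plan for
select /*+ index(A IDX_OS_USR) */ * from TEST_IDX A;

select * from table(dbms_xplan.display);

If the plan shows a full table scan on TEST_IDX instead of an access path using IDX_OS_USR, the hint was ignored.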

Categories: DBA Blogs

@OraclePartners FY17 Global Kickoff

REGISTER NOW: ORACLE FY17 GLOBAL PARTNER KICKOFF It’s the start of a new Oracle fiscal year and we are excited about the tremendous opportunity ahead of us. Join...

We share our skills to maximize your revenue!
Categories: DBA Blogs
