Feed aggregator

Error in Import of Schema containing table(s) having DEFAULT Sequence.nextval values.

Tom Kyte - Tue, 2017-03-07 19:26
Hey TOM, Consider the following test scenario : 1. Create a sequence Connected to: Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production SQL> create sequence test_seq start with 100 increment by 10; 2. Use the...
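For context, the question concerns the 12c feature that allows a column to default to a sequence value; a minimal sketch of the kind of DDL involved (the table name t_seq_default is illustrative, not from the question):

create sequence test_seq start with 100 increment by 10;

create table t_seq_default
( id  number default test_seq.nextval,  -- 12c: sequence as a column DEFAULT
  txt varchar2(30)
);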
Categories: DBA Blogs

Query runs faster with stats deleted, slower with up to date stats

Tom Kyte - Tue, 2017-03-07 19:26
Hi team AskTOM, Sorry, wasn't sure how to use livesql.oracle.com for my case. I have a situation where (no stats as in dynamic sampling (11.2.04)) is producing a better plan (and faster elapsed time) than with gather_table_stats. Please s...
Categories: DBA Blogs

Ref Cursors - types.cursor_type vs sys_refcursor

Tom Kyte - Tue, 2017-03-07 19:26
There are a few stored procedures that return OUT REF CURSORS to the calling application. They use types.cursor_type as the datatype. Is there any difference between types.cursor_type and SYS_REFCURSOR?
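Assuming types.cursor_type is declared as a weak REF CURSOR (no RETURN clause), there is no functional difference: SYS_REFCURSOR is simply a predefined weak REF CURSOR type, so the two are interchangeable. A minimal sketch (the package name pkg_demo and the procedure names are hypothetical, not from the question):

create or replace package pkg_demo as
  type cursor_type is ref cursor;                  -- user-defined weak ref cursor
  procedure get_rows_a (p_rc out cursor_type);
  procedure get_rows_b (p_rc out sys_refcursor);   -- predefined weak ref cursor
end pkg_demo;
/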
Categories: DBA Blogs

JSON in PL/SQL Oracle 12c

Tom Kyte - Tue, 2017-03-07 19:26
Hello, I have a procedure with below structure: procedure p_json (i_json clob) is l_firstname varchar2(10); begin --some code here end; Input parameter i_json will receive Json_object. Below is structure of json object ...
Categories: DBA Blogs

Using Exception Table

Michael Dinh - Tue, 2017-03-07 18:44

It has been quite a while since I have used the EXCEPTIONS table, so here is a quick demo to share.

oracle@arrow1:HAWKA:/media/sf_working/sql
$ sqlplus demo/demo @exception_demo.sql

SQL*Plus: Release 11.2.0.4.0 Production on Tue Mar 7 16:36:33 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning and Real Application Testing options

ARROW1:(DEMO@HAWKA):PRIMARY> col OWNER for a20
ARROW1:(DEMO@HAWKA):PRIMARY> col TABLE_NAME for a20
ARROW1:(DEMO@HAWKA):PRIMARY> col R_CONSTRAINT_NAME for a20
ARROW1:(DEMO@HAWKA):PRIMARY> drop table f purge;

Table dropped.

ARROW1:(DEMO@HAWKA):PRIMARY> drop table p purge;

Table dropped.

ARROW1:(DEMO@HAWKA):PRIMARY> drop table exceptions purge;

Table dropped.

ARROW1:(DEMO@HAWKA):PRIMARY> set echo off
Running @?/rdbms/admin/utlexcpt.sql

Table created.

ARROW1:(DEMO@HAWKA):PRIMARY> desc exceptions
 Name                    Null?    Type
 ----------------------- -------- ----------------
 ROW_ID                           ROWID
 OWNER                            VARCHAR2(30)
 TABLE_NAME                       VARCHAR2(30)
 CONSTRAINT                       VARCHAR2(30)

ARROW1:(DEMO@HAWKA):PRIMARY>
ARROW1:(DEMO@HAWKA):PRIMARY> create table p
  2  ( id number not null,
  3    CONSTRAINT p_pk PRIMARY KEY (id)
  4  );

Table created.

ARROW1:(DEMO@HAWKA):PRIMARY>
ARROW1:(DEMO@HAWKA):PRIMARY> create table f
  2  ( id number not null,
  3    CONSTRAINT  f_fk FOREIGN KEY (id) REFERENCES  p(id)
  4  );

Table created.

ARROW1:(DEMO@HAWKA):PRIMARY>
ARROW1:(DEMO@HAWKA):PRIMARY> alter table f DISABLE CONSTRAINT f_fk
  2  ;

Table altered.

ARROW1:(DEMO@HAWKA):PRIMARY>
ARROW1:(DEMO@HAWKA):PRIMARY> insert into p values (1);

1 row created.

ARROW1:(DEMO@HAWKA):PRIMARY> insert into f values (2);

1 row created.

ARROW1:(DEMO@HAWKA):PRIMARY> commit;

Commit complete.

ARROW1:(DEMO@HAWKA):PRIMARY>
ARROW1:(DEMO@HAWKA):PRIMARY> alter table f ENABLE CONSTRAINT f_fk
  2  EXCEPTIONS INTO exceptions
  3  ;
alter table f ENABLE CONSTRAINT f_fk
                                *
ERROR at line 1:
ORA-02298: cannot validate (DEMO.F_FK) - parent keys not found


ARROW1:(DEMO@HAWKA):PRIMARY>
ARROW1:(DEMO@HAWKA):PRIMARY> select owner,table_name,constraint_type,R_CONSTRAINT_NAME
  2  from dba_constraints
  3  where owner in ('DEMO')
  4  and table_name in ('P','F')
  5  order by owner, table_name,R_CONSTRAINT_NAME asc,constraint_type desc
  6  ;

OWNER                TABLE_NAME           C R_CONSTRAINT_NAME
-------------------- -------------------- - --------------------
DEMO                 F                    R P_PK
DEMO                 F                    C
DEMO                 P                    P
DEMO                 P                    C

ARROW1:(DEMO@HAWKA):PRIMARY>
ARROW1:(DEMO@HAWKA):PRIMARY> select owner,table_name,constraint_type,R_CONSTRAINT_NAME
  2  from dba_constraints
  3  where owner in ('DEMO')
  4  and constraint_name in ('P_PK')
  5  order by owner, table_name,R_CONSTRAINT_NAME asc,constraint_type desc
  6  ;

OWNER                TABLE_NAME           C R_CONSTRAINT_NAME
-------------------- -------------------- - --------------------
DEMO                 P                    P

ARROW1:(DEMO@HAWKA):PRIMARY>
ARROW1:(DEMO@HAWKA):PRIMARY> select * from exceptions
  2  ;

ROW_ID             OWNER                TABLE_NAME           CONSTRAINT
------------------ -------------------- -------------------- ------------------------------
AAAEv1AAFAAAAerAAA DEMO                 F                    F_FK

ARROW1:(DEMO@HAWKA):PRIMARY>
ARROW1:(DEMO@HAWKA):PRIMARY> select f.*
  2  from f, exceptions
  3  where exceptions.constraint='F_FK'
  4  and f.rowid=exceptions.row_id
  5  ;

        ID
----------
         2

ARROW1:(DEMO@HAWKA):PRIMARY>
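A possible follow-up, not part of the demo above: once the offending child rows are identified through EXCEPTIONS, either insert the missing parent rows or remove/correct the children, then re-enable the constraint. A minimal sketch of the second option:

delete from f
where  f.rowid in (select exceptions.row_id
                   from   exceptions
                   where  exceptions.constraint = 'F_FK');

commit;

alter table f enable constraint f_fk;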

Business Logic in Oracle Application Builder Cloud Service

Shay Shmeltzer - Tue, 2017-03-07 16:35

As you start building more complex applications in Oracle Application Builder Cloud Service, you might need to define more complex interactions between objects, or validations on rows of data or on specific fields.

In the new version of ABCS that we rolled out in February we added these types of capabilities.

There are several things you can do with the new Business Rules section of ABCS:

  • Triggers - allow you to create logic that will be executed on specific CRUD events, such as when you insert, remove or update a record.
  • Object Validators - allow you to define checks across multiple fields in your object.
  • Field Validators - allow you to define a check on specific field values.
  • Object Functions - reusable functions that can be called from the other points.

Note that these logic points will be running on the server side of ABCS. 

In the video below I demonstrate a couple of these capabilities. You'll learn how to update one object when a specific value is set in another object's field. You'll also see how to check whether a combination of values in fields in a record is valid.

Check it out:


Here is a screenshot of the nice logic flow editor:

Categories: Development

Oracle 12cR2: changes for login.sql

Yann Neuhaus - Tue, 2017-03-07 15:58

If you use a login.sql script to set the SQL*Plus environment from your current working directory, you will see that it will not run anymore in 12.2. This is a security feature, and a good occasion to explain how sqlplus finds the scripts to run, on Linux.

For my test I have login.sql, LOGIN.SQL and script.sql in the following directories

$ tree /tmp/mytest/
/tmp/mytest/
├── a
│   ├── login.sql
│   ├── LOGIN.SQL
│   └── script.sql
├── b
│   ├── login.sql
│   ├── LOGIN.SQL
│   └── script.sql
├── login.sql
├── LOGIN.SQL
└── script.sql

I’m going to the parent directory
cd /tmp/mytest

The scripts display their name:

+ head login.sql LOGIN.SQL script.sql
==> login.sql <==
prompt Hello from /tmp/mytest/login.sql

==> LOGIN.SQL <==
prompt Hello from /tmp/mytest/LOGIN.SQL

==> script.sql <==
prompt Hello from /tmp/mytest/script.sql

I’ll run commands from bash -x so that they are displayed, and environment variables are set only for the command to be run.

login.sql

+ sqlplus -s /nolog

Nothing is displayed here. This is what has changed in 12.2: for security reasons, the login.sql in the current working directory is not run anymore.

+ sqlplus -s /nolog @ login.sql

This is probably a side effect of the implementation of this new security feature: even when I explicitly want to run the login.sql script, it is ignored.

+ sqlplus -s /nolog @ login
Hello from /tmp/mytest/login.sql

Here, I rely on the implicit ‘.sql’ extension being added, and the script is run. Probably the check for this security feature is done before the implicit extension is appended.

+ sqlplus -s /nolog @ /tmp/mytest/login.sql
Hello from /tmp/mytest/login.sql

With full path, the script is run.

Actually, the only way to get the login.sql in the current directory run implicitly when starting sqlplus or connecting is to set the current directory in ORACLE_PATH:

+ ORACLE_PATH=.
+ sqlplus -s /nolog
Hello from /tmp/mytest/login.sql

Note that this defeats the security feature, in the same way that it is not recommended to add ‘.’ to your shell PATH. It is better to put only absolute paths in the PATH, with directories where you know nobody can add a trojan script.

LOGIN.SQL

+ sqlplus -s /nolog @ LOGIN.SQL

The implementation of this new feature is case insensitive. LOGIN.SQL is ignored even when specified in the command line.

+ sqlplus -s /nolog @ ./LOGIN.SQL
Hello from /tmp/mytest/LOGIN.SQL

It is used only when specified with a slightly different string (more or fewer characters, such as ‘./LOGIN.SQL’).

Note that the implicit login.sql is case sensitive on Linux:
+ rm login.sql
+ ORACLE_PATH=.
+ sqlplus -s /nolog

Even with ORACLE_PATH it is not found.

SQLPATH

+ SQLPATH=/tmp/mytest
+ sqlplus -s /nolog

Unlike ORACLE_PATH, SQLPATH is not a solution for finding login.sql.

Note that the documentation says something quite different in http://docs.oracle.com/database/122/SQPUG/configuring-SQL-Plus.htm#SQPUG012

script.sql

Now, because SQLPATH and ORACLE_PATH were already a mess, I’ll try with a script that is not login.sql.

+ sqlplus -s /nolog @ script.sql
Hello from /tmp/mytest/script.sql

Current directory is still searched for non-login scripts

+ sqlplus -s /nolog @ /tmp/mytest/script.sql
Hello from /tmp/mytest/script.sql

An absolute path can be used, or we can set a PATH to search:

+ SQLPATH=/tmp/mytest
+ sqlplus -s /nolog @ script
Hello from /tmp/mytest/script.sql

Unlike login.sql, SQLPATH can be used to find a script in another directory

+ cd /tmp/mytest/a
+ SQLPATH=/tmp/mytest
+ sqlplus -s /nolog @ script
Hello from /tmp/mytest/a/script.sql

But current directory is still the first one that is searched

+ rm script.sql
+ SQLPATH=/tmp/mytest
+ sqlplus -s /nolog @ script
Hello from /tmp/mytest/script.sql

Only when the script is not in the current directory is it searched for in SQLPATH.

+ rm /tmp/mytest/script.sql
+ SQLPATH=/tmp/mytest
+ sqlplus -s /nolog @ script
SP2-0310: unable to open file "script.sql"

Again, the documentation is wrong. Only the specified directories are looked up, not their sub-directories. But if I specify the subdirectory relative to SQLPATH (I am still in /tmp/mytest/a where there is no script.sql):

+ SQLPATH=/tmp/mytest
+ sqlplus -s /nolog @ b/script
Hello from /tmp/mytest/b/script.sql

The b/script was resolved from the SQLPATH=/tmp/mytest

In SQLPATH, we can add multiple paths

+ SQLPATH=/tmp/mytest:/tmp/mytest/a:/tmp/mytest/b
+ sqlplus -s /nolog @ script
Hello from /tmp/mytest/b/script.sql

Here I have a script.sql only in the ‘b’ sub-directory and sqlplus finds it when this directory is listed in SQLPATH

ORACLE_PATH

Running the same with ORACLE_PATH instead of SQLPATH

+ ORACLE_PATH=/tmp/mytest:/tmp/mytest/a:/tmp/mytest/b
+ sqlplus -s /nolog @ script
Hello from /tmp/mytest/a/login.sql
Hello from /tmp/mytest/b/script.sql

We can also have multiple paths in ORACLE_PATH (this is not in the documentation), and it acts like SQLPATH, but there are a few differences.

First, you have seen that the login.sql script is run.

Then, if I have the script in my current directory, but not in ORACLE_PATH

+ cd /tmp
+ echo 'prompt Hello from /tmp' > script.sql
+ ORACLE_PATH=/tmp/mytest:/tmp/mytest/a:/tmp/mytest/b
+ sqlplus -s /nolog @ script
Hello from /tmp/mytest/a/login.sql
Hello from /tmp/mytest/b/script.sql

The ORACLE_PATH one is used first

+ ORACLE_PATH=/tmp/mytest:/tmp/mytest/a
+ sqlplus -s /nolog @ script
Hello from /tmp/mytest/a/login.sql
Hello from /tmp

The current directory is considered only when not found in ORACLE_PATH.

+ ORACLE_PATH=/tmp/mytest:/tmp/mytest/a
+ sqlplus -s /nolog @ b/script
Hello from /tmp/mytest/a/login.sql
Hello from /tmp/mytest/b/script.sql

As with SQLPATH, subdirectory is accessible if mentioned.

Both, in order

If you don’t want to keep it simple, you can specify both ORACLE_PATH and SQLPATH

+ cd mytest
+ ORACLE_PATH=/tmp
+ SQLPATH=/tmp/mytest/b
+ sqlplus -s /nolog @ script
Hello from /tmp

and ORACLE_PATH is read first. So the order is:

  1. ORACLE_PATH, in order of paths specified
  2. Current directory (except for login.sql)
  3. SQLPATH (except for login.sql) in order of paths specified
strace

Better than reading the documentation or testing all combinations, on Linux we can trace the system calls while sqlplus is looking for the script.

I’ve set the non-existing paths /ORACLE_PATH1 and /ORACLE_PATH2 for ORACLE_PATH, and /SQLPATH1 and /SQLPATH2 for SQLPATH, and run ‘script’ without the extension:

ORACLE_PATH=/ORACLE_PATH1:/ORACLE_PATH2 SQLPATH=/SQLPATH1:/SQLPATH2 strace -e trace=file sqlplus -s /nolog @ script

This traces all system calls with a file name:


access("/ORACLE_PATH1/script", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH2/script", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH1/script.sql", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH2/script.sql", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH1/script", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH2/script", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH1/script.sql", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH2/script.sql", F_OK) = -1 ENOENT (No such file or directory)
stat("script.sql", 0x7fff01921400) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH1/script", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH2/script", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH1/script.sql", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH2/script.sql", F_OK) = -1 ENOENT (No such file or directory)
stat("/SQLPATH1/script.sql", 0x7fff0191b430) = -1 ENOENT (No such file or directory)
stat("/SQLPATH2/script.sql", 0x7fff0191b430) = -1 ENOENT (No such file or directory)
SP2-0310: unable to open file "script.sql"
access("/ORACLE_PATH1/login.sql", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH2/login.sql", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH1/login.sql", F_OK) = -1 ENOENT (No such file or directory)
access("/ORACLE_PATH2/login.sql", F_OK) = -1 ENOENT (No such file or directory)
stat("/u01/app/oracle/product/12.2.0/dbhome_1/sqlplus/admin/glogin.sql", {st_mode=S_IFREG|0644, st_size=342, ...}) = 0
access("/u01/app/oracle/product/12.2.0/dbhome_1/sqlplus/admin/glogin.sql", F_OK) = 0
statfs("/u01/app/oracle/product/12.2.0/dbhome_1/sqlplus/admin/glogin.sql", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=6676009, f_bfree=2866104, f_bavail=2521221, f_files=1703936, f_ffree=1663469, f_fsid={-1731931108, 1057261682}, f_namelen=255, f_frsize=4096}) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/sqlplus/admin/glogin.sql", O_RDONLY) = 9

This is very interesting. First, we see that the paths are searched multiple times, and I don’t know why. Second, when passing a name without an extension (i.e. without a dot in the name), the exact name is tried first for ORACLE_PATH, but the lookup in the current directory and in SQLPATH automatically adds ‘.sql’. The system calls are also different: ORACLE_PATH has no stat() call before access(), unlike the current directory and SQLPATH. Finally, login.sql is read from ORACLE_PATH only, and glogin.sql from ORACLE_HOME/sqlplus/admin.

Change occurred between 12.2.0.1 (DBaaS) and 12.2.0.1 (on-premises)

As a comparison, sqlplus 12.1.0.2 and even the 12.2.0.1 DBaaS version (built in October) have the following additional calls to look for login.sql in the current path and in SQLPATH:
stat("login.sql", 0x7fffc14d5490) = -1 ENOENT (No such file or directory)
stat("/SQLPATH1/login.sql", 0x7fffc14cf4c0) = -1 ENOENT (No such file or directory)
stat("/SQLPATH2/login.sql", 0x7fffc14cf4c0) = -1 ENOENT (No such file or directory)

This has disappeared in the 12.2.0.1 on-premises version (built in January).

So what?

WTF login.sql is not run, even when explicitly started
Except when mentioning it as 'login' or as './login.sql' pic.twitter.com/ClfvAUizO6

— Franck Pachot (@FranckPachot) March 7, 2017

Big thanks to the SQL Developer team who gave me the solution approximately 3 seconds after my tweet.

This behavior changed and, as far as I know, it is not documented and the MOS note about it is not published. It makes sense, for security reasons, to prevent running scripts from the current directory without explicitly allowing it. However, login.sql is often used for formatting only. It seems that SQLcl will implement this in a finer way, running only the formatting commands when the script is picked up implicitly.
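For reference, a typical formatting-only login.sql looks something like this (just an illustrative example, not taken from the test above):

-- SQL*Plus formatting only: no queries, no DDL
set pagesize 100
set linesize 200
set timing on
set sqlprompt "_user'@'_connection_identifier> "
define _editor=vi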

Be careful when moving between the Oracle Cloud and your premises, because you don’t run exactly the same version…

 

The post Oracle 12cR2: changes for login.sql appeared first on Blog dbi services.

Guesswork

Jonathan Lewis - Tue, 2017-03-07 13:57

A recent posting on the OTN database forum described a problem with an insert (as select) statement that sometimes ran extremely slowly: nothing interesting yet, there could be plenty of boring reasons for that to happen. The same SQL statement (by SQL_ID) might take 6 hours to insert 300K rows one night while taking just a few minutes to insert 900K another night (still nothing terribly interesting).

An analysis of the ASH data about the statement showed that the problem was on the “LOAD TABLE CONVENTIONAL” operation – which starts to get interesting if you also assume that someone who was competent to look at ASH would probably have noticed whether or not the time spent was on (the first obvious guess) some variant of “TX enqueue” waits or “log file” waits or something amazingly different. It’s also a little more interesting if you’ve noticed that the title of the posting is about “consuming a lot of CPU time” – so where could that be going if we see most of the excess time going on the insert rather than on the select?

Here’s an enormous clue that this might be a “non-standard” problem – the SQL statement starts like this:


INSERT  /*+ ignore_row_on_dupkey_index(tgt OTC_DAT_TV_PROC_STATUS_UK) */
INTO OTC_DAT_TV_PROC_STATUS TGT (
    {list of columns}
)
SELECT  /*+ parallel(8) */
    ...

See the hint which says “ignore rows if they raise duplicate key errors against index otc_dat_tv_proc_status_uk” ? Think what it might take to implement code that obeys the hint – Oracle can’t know that a row is duplicating a value until it has inserted the row (to get a rowid) then tried to maintain the index and discovered the pre-existing value, at which point it raises an exception, then handles it by undoing the single row insert (I wonder what that does to array inserts, and the logic of triggers), then carries on with the next row. (I suppose the code could check every relevant index before doing the insert – but that would mean a very big, and possibly very resource-intensive, change to all existing “insert a row” code.)

Surely we’re going to see some dramatic effects if a large fraction of our rows result in duplicate values.

Warning – the guess might not be right but it’s much more interesting than all the other guesses you might make at this point and a good excuse for doing a bit of experimentation and learning. So here’s a simple model to test the “excess work” hypothesis – running against 11.2.0.4:

rem
rem     Script:         ignore_dupkey.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2017
rem

drop table t2;
drop table t1;

create table t1
nologging
as
select  *
from    all_objects
where   rownum <= 50000 ; -- > comment to protect WordPress formatting

create table t2
nologging
as
select  *
from    all_objects
where   rownum <= 50000 ; -- > comment to protect WordPress formatting

update t1 set object_id = 500000 + (select max(object_id) from t1)
;

commit;

create unique index t1_i1 on t1(object_id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

All I’ve done is create two copies of the same data – with an update in place that changes one set of data so that the object_id doesn’t overlap with the object_id in the other set. The update statement is something that I’m going to include in one test but exclude from the second. Now for the statement that tests for the effects of the hint:


execute snap_events.start_snap
execute snap_my_stats.start_snap

insert
        /*+ ignore_row_on_dupkey_index(t1 (object_id)) */
        into t1
select  *
from    t2
;

execute snap_my_stats.end_snap
execute snap_events.end_snap


There are several different things I could do once I’ve got this basic test set up if I want to refine what I’m testing and isolate certain aspects of the mechanism, but all I’ll do for this note is a simple report of some figures from the two tests – one where the object_id values don’t overlap and one where t1 and t2 are exact copies of each other. The two procedures wrapping the statement are just my standard routines for capturing changes in v$mystat and v$session_event for my session.
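A crude manual substitute for those snapshot packages (a sketch only, not the actual snap_my_stats/snap_events code, and the work table name is arbitrary) is to copy the session statistics to a table before the test and diff them afterwards:

-- before the insert: save the current session statistics
create table my_stats_before as
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#;

-- ... run the insert under test ...

-- after the insert: report the statistics that changed
select sn.name, ms.value - b.value delta
from   v$mystat ms, v$statname sn, my_stats_before b
where  ms.statistic# = sn.statistic#
and    sn.name       = b.name
and    ms.value - b.value > 0
order  by delta desc;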

For the insert where I’ve done the update to avoid any duplicates appearing the insert completed in about 0.3 seconds, generating 10MB of redo and 2MB of undo.

When I removed the update statement the (continuously failing) insert took 35.5 seconds to complete, of which almost all the time was CPU time. The redo jumped to 478MB with 14MB of undo. The extreme level of redo surprised me slightly especially since the scale of the change was so much greater than that of the undo – I think it may be due to a problem with Oracle needing to unwind one row from an (internal) array insert before retrying. Here, taken from the session stats of the problem run, is a little indication of why the time (and especially the CPU time) increased so much:

Name                                                                     Value
----                                                                     -----
rollback changes - undo records applied                                 95,014
active txn count during cleanout                                        24,627
cleanout - number of ktugct calls                                       24,627
HSC Heap Segment Block Changes                                          97,509
Heap Segment Array Inserts                                              97,509
recursive calls                                                        682,574
recursive cpu usage                                                      2,193
session logical reads                                                1,341,077

Obviously there’s a lot of work done rolling back changes that should not have been made (though why it’s reported as 95,000 rather than 100,000 I don’t know and I’d rather not do a trace of buffer activity to find out) with an associated extra load of data block activity. Most visible, though, is the huge number of recursive calls with, perhaps associated, a very large number of session logical reads. Clearly it’s worth enabling extended tracing to see what’s going on – if you haven’t already guessed what some of those calls are about. Here’s an extract from the top of an appropriate tkprof output:


tkprof test_ora_32298.trc ignore_dupkey sort=execnt

select /*+ rule */ c.name, u.name
from
 con$ c, cdef$ cd, user$ u  where c.con# = cd.con# and cd.enabled = :1 and
  c.owner# = u.user#


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    48753      1.86       2.08          0          0          0           0
Execute  48753      3.75       4.17          0          0          0           0
Fetch    48753      2.86       3.12          0      97506          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   146259      8.47       9.38          0      97506          0           0

select o.name, u.name
from
 obj$ o, user$ u  where o.obj# = :1 and o.owner# = u.user#

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    48753      1.87       1.99          0          0          0           0
Execute  48753      3.60       3.63          0          0          0           0
Fetch    48753      7.35       7.52          0     243765          0       48753
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   146259     12.84      13.14          0     243765          0       48753

The top two statements in the trace file show Oracle first trying to find the name of the constraint that has been breached, then falling back to searching for the name of the unique index that has caused the error to appear. If I had created a unique constraint rather than just a unique index then the second of these two statements would not have appeared in the trace file (and the run would have been a little quicker – hint: constraints are a good thing).
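For completeness, a sketch of the “constraint instead of bare index” variant of the model above (the constraint name t1_uk is arbitrary):

-- declare the rule as a constraint backed by the existing unique index;
-- per the note above, the second recursive lookup (for the index name)
-- should then no longer appear in the trace
alter table t1 add constraint t1_uk unique (object_id) using index t1_i1;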

You’ll notice that the total number of calls from the two statements is roughly 292,500 – far short of the 682,000 reported in the session stats. Unfortunately there was nothing else in the trace files that could be blamed for the outstanding 400,000 missing calls. It’s not really necessary to chase up all the details, though; clearly we can see that this feature is very expensive if lots of duplicates appear – like DML error logging it has probably been created as a way of dealing with occasional errors when handling large volumes of data.

Footnote

Notice that my example uses the “index description” method for specifying the index in the hint; it’s legal with either the name or the description. Interestingly (but, perhaps, unsurprisingly) this is a hint that Oracle has to parse for correctness before optimisation. I made a mistake in my first attempt at writing the update statement leaving me with duplicates in the data so Oracle couldn’t create the unique index; as a consequence of the missing unique index the insert statement reported the Oracle error “ORA-38913: Index specified in the index hint is invalid”.

 


Welcome Back, My Friends!

Floyd Teter - Tue, 2017-03-07 13:02
Welcome back, my friends
To the show that never ends
We're so glad you could attend
Come inside! Come inside!
                     -- From Emerson, Lake and Palmer's Karn Evil 9

Yup, it's true.  We're breathing new life into an old blog with the resurrection of ORCLville.

When I closed down ORCLville last year, it was over concerns regarding conflicts of interest over my employment with Oracle while serving as an impartial source of information.  Admittedly, that conflict still exists.

I also stepped away because I'm no longer a big fan of long form text.  Terse messages and visual presentations are more my speed these days.

So why am I bringing ORCLville back?  Because there are so many stories going untold: stories that involve Oracle Cloud products and the delivery of the services that go with those products.  In one way or another, I witness many Cloud challenges and victories every day.  And this seems like the best way to share and discuss those experiences...which, in turn, does us all a bit of good as we continue on this Cloud journey.

So we're up and running again, effective immediately.  Some disclaimers to share so you'll all be aware and so I can sleep at night:

1.  I'm an Oracle employee.   And I love my job.  So don't look for any criticism or sharing of information that might mess up my gig.
2.  I also own Oracle stock.  It's a key component of my long-term savings program, so I'd like to see it do well.  It's not likely you'll see much here that might cause the stock to sink long-term.
3.  Items 1 & 2 above notwithstanding, the thoughts and opinions expressed here are my own and no-one else's.  I'm also the final decision maker in terms of what we explore here - not Oracle, but me.
4.  I'm hoping we don't have a monologue, but a two-way conversation.  Your comments matter.  So comment early and often.

So we're back.   It's been tough being gone, so I'm really happy to be back again with the show that never ends!

Oracle Java Cloud - How to Create Access Rule for Managed Server

Andrejus Baranovski - Tue, 2017-03-07 11:40
When you get a fresh instance of Oracle Java Cloud, you are assigned one admin server and one managed server. If there is a requirement to host multiple environments - demo, production, etc. - one managed server is not enough. It is better to run different environments on dedicated managed servers; this simplifies maintenance. WebLogic 12.2.1.2 partitions are not yet supported for Fusion Middleware, so the only choice we have right now is different managed servers per environment.

In this short post, I will describe how to enable access to a newly created managed server in Oracle Java Cloud. I have created a RedSamuraiProd managed server listening on port 9075 in our Oracle Java Cloud service instance:


To allow access to port 9075, I need to define a new access rule. This can be done through the Cloud instance console; select Access Rules from the menu:


In the Access Rules section, create a new rule. The important part here is to specify PUBLIC-INTERNET as the source, WLS_MANAGED_SERVER as the destination, and the managed server's port:


Once the rule is created, the managed server becomes accessible from the internet. So simple, and it works!

March 22: General Electric—Oracle ERP Cloud Customer Forum

Linda Fishman Hoyle - Tue, 2017-03-07 11:05

Join us for an Oracle Customer Reference Forum on March 22, 2017, at 8:30 a.m. PT.

Ricardo Cayeula, Chief Information Officer at General Electric Oil & Gas, will discuss GE’s digital strategy and migration to the cloud, and will share the company’s experience implementing Oracle ERP Cloud.

Register now to attend the live forum and learn more about General Electric’s experience with Oracle ERP Cloud.

The Side Effects of Drop ‘Unused’ Index

Pythian Group - Tue, 2017-03-07 10:54

In this blog post I’ll demonstrate a scenario where dropping a potentially ‘unused’ index can have a negative influence on the optimizer’s cardinality estimation. Having columns with logical correlation and skewed data can cause difficulties for the optimizer when calculating cardinality. This post will also address what we can do to help the optimizer with better cardinality estimates.

The inspiration for this post came from a recent index usage review. One of the requirements was to determine how to confirm which unused indexes qualify to be dropped. I decided to do some tests regarding extended statistics and the effect of dropping a potentially ‘unused’ index. You will see what kind of result can come from dropping an index that has not been used. It’s important to remember that this does not apply in all cases. Conversely, even if an index is used, it doesn’t mean that it’s needed.

This is more or less linked to columns with skewed data which might also have a logical relationship.
Hopefully, it can help you answer some of the following questions:

  1. Is the optimizer using the indexes behind the scenes?
  2. While there are methods to determine whether an index has been used in an execution plan (see the monitoring sketch after this list), can an index be dropped on that basis alone?
  3. If we drop a composite index (constructed from correlated columns), can we do anything to avoid performance degradation?
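As a side note on question 2, one common (but, as this post shows, incomplete) way to check whether an index appears in execution plans is basic index usage monitoring. A minimal sketch, assuming an index like the T1IX index created in the Preparation section below:

alter index t1ix monitoring usage;

-- ... run a representative workload ...

select index_name, monitoring, used
from   v$object_usage
where  index_name = 'T1IX';

alter index t1ix nomonitoring usage;

USED = NO only tells you the index did not show up in an execution plan; it says nothing about whether the optimizer relied on the index statistics, which is exactly the trap described below.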

Before we start with the use case, let’s briefly review some concepts.

The basic formula for selectivity is 1/NDV. The cardinality (CDN) is calculated as selectivity * total number of rows.

The selectivity of a join is defined as the selectivity of the most selective join column adjusted by the proportion of not null values in each join column.

Join Selectivity:
Sel = 1/max[NDV(t1.c1),NDV(t2.c2)] *
           ( (Card t1 - # t1.c1 NULLs) / Card t1) *
           ( (Card t2 - # t2.c2 NULLs) / Card t2)

Join Cardinality:
Card(Pj) = Card(T1) * Card(T2) * Sel(Pj)

In Oracle’s Doc ID 68992.1 you can find a more detailed explanation of the different selectivity calculations based on different predicates. For simplicity, I will use an equality predicate.

This blog post is divided in three sections.

  1. A use case demonstrating how dropping an “unused” index can upset the optimizer’s cardinality calculation.
  2. How to help the optimizer achieve better cardinality estimates with extended statistics.
  3. More explanation of column correlation (CorStregth).
Preparation

For the first and second part I’ll be using the following configuration.

drop table t1 purge;
drop table t2 purge;
drop table t3 purge;

create table t1
as
select rownum pk,
    mod(level,  20) n1,
    mod(level, 200) n2,
    mod(level,1000) n3,
    lpad('x',42,'x') c1
from dual
connect by level <= 10000
/

create table t2
as
select rownum pk,
    mod(level,  20) n1,
    mod(level, 200) n2,
    mod(level,1000) n3,
    lpad('x',42,'x') c1
from dual
connect by level <= 10 
/ 

CREATE TABLE t3
AS
SELECT rownum pk, lpad('x',rownum,'x') c1
FROM dual
CONNECT BY level <= 2000
/

exec dbms_stats.gather_table_stats(user,'T1',method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'T2',method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'T3',method_opt=>'for all columns size 1');


alter table t1 add constraint t1pk primary key(pk);
alter table t2 add constraint t2pk primary key(pk);
alter table t3 add constraint t3pk primary key(pk);

create index t1ix on t1(n1,n2);
create index t2ix on t2(n1,n2);

It creates two tables (T1 and T2), each with a composite index on the (N1,N2) columns, plus a third table T3 that is used later in the joins.
The statistics are as follows:

--Column statistics
TABLE_NAME	     COLUMN_NAME	  NUM_DISTINCT	  COL_SEL
-------------------- -------------------- ------------ ----------
T1		     C1 			     1		1
T1		     N1 			    20	      .05
T1		     N2 			   200	     .005
T1		     N3 			  1000	     .001
T2		     C1 			     1		1
T2		     N1 			    10	       .1
T2		     N2 			    10	       .1
T2		     N3 			    10	       .1

--Index statistics
TABLE_NAME	     INDEX_NAME 	    NUM_ROWS DISTINCT_KEYS    IND_SEL
-------------------- -------------------- ---------- ------------- ----------
T1		     T1IX		       10000	       200	 .005
T2		     T2IX			  10		10	   .1

Table T1 has 10000 rows and T2 only 10 rows. I’ve created two composite indexes for T1 and T2 since I am using both (N1,N2) columns in the selection. As you can see (from the data dictionary) we have 200 distinct keys for the T1IX index and 10 for the T2IX index.
For example, based on the formula we can easily derive the selectivity as 1/NDV; for the T1IX index we have 1/200 = 0.005.
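The column and index statistics listings above can be reproduced with dictionary queries along these lines (a reconstruction, not necessarily the exact script used for the listings):

--Column statistics and derived column selectivity (1/NDV)
select table_name, column_name, num_distinct,
       round(1/num_distinct,4) col_sel
from   user_tab_col_statistics
where  table_name in ('T1','T2')
and    num_distinct > 0
order  by table_name, column_name;

--Index statistics and derived index selectivity (1/DISTINCT_KEYS)
select table_name, index_name, num_rows, distinct_keys,
       round(1/distinct_keys,4) ind_sel
from   user_indexes
where  index_name in ('T1IX','T2IX')
order  by table_name;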

Drop of an “unused” index can mess up optimizer’s cardinality estimation

The query I’m going to use in this case is:

select t1.c1,t2.c1
from t1,t2
where t1.n1 = t2.n1
and t1.n2 = t2.n2
/

Let’s first check the cardinality (CDN) before we make any changes. The column we’re interested in is ROWS; that’s where the cardinality estimate is shown.

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |   500 | 49500 |    29   (0)| 00:00:01 |
|*  1 |  HASH JOIN	   |	  |   500 | 49500 |    29   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T2   |    10 |   490 |	3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   488K|    26   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

Based on the formula provided, you may have expected to see a cardinality (CDN) of 25. Since we don’t have nulls, we can do the calculation as 1/max(NDV(t1.n1),NDV(t2.n1)) * 1/max(NDV(t1.n2),NDV(t2.n2)) = 1/20 * 1/200 = 0.05 * 0.005 = 0.00025. We came up with a join selectivity of 0.00025.
Now, if we do the join cardinality calculation: 10 (T2) * 10000 (T1) * selectivity 0.00025 = 100000 * 0.00025 = 25.

But instead of 25, the optimizer calculated cardinality (CDN) of 500. In order to better understand the calculation, let’s review the 10053 event trace file.

  ColGroup (#1, Index) T1IX
    Col#: 1 2    CorStregth: 20.00
  ColGroup (#1, Index) T2IX
    Col#: 1 2    CorStregth: 10.00
Join ColGroups for T1[T1] and T2[T2] : (#1, #1)

What’s interesting here is the Join ColGroups analysis. Since we have a full match on the join with the indexed columns, the optimizer is considering Join ColGroups between T1 and T2. It will use the ColGroup statistics to calculate the join cardinality (CDN).

Col#: column position (self explanatory)
CorStregth: correlation strength. I will come back to correlation strength (CorStregth) later.

Within the trace file, in the Nested Loops (NL) section (remember, join cardinality is only listed in the NL section), you should see something like:

Join selectivity using 1 ColGroups: 0.005000 (sel1 = 0.000000, sel2 = 0.000000)
Join Card:  500.000000 = outer (10.000000) * inner (10000.000000) * sel (0.005000)
Join Card - Rounded: 500 Computed: 500.000000

Based on the join selectivity formula, the optimizer chooses T1IX index (ColGroup) statistics for selectivity calculation, hence 1/NDV (T1IX index) = 1/200 = 0.005.
Afterwards, join cardinality is calculated as num_rows (T1) * num_rows (T2) * selectivity (0.005) or 10 * 10000 * 0.005 = 100000 * 0.005 = 500.

If you’re not aware of such cases, you may decide to drop an index based solely on the execution plan information. Even worse, the developers can easily decide to drop an index without knowing the possible effect.

Now, let’s observe the cardinality estimation after T1IX index is dropped.

SQL> drop index t1ix;

Index dropped.

...

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |    25 |  2475 |    29   (0)| 00:00:01 |
|*  1 |  HASH JOIN	   |	  |    25 |  2475 |    29   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T2   |    10 |   490 |	3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   488K|    26   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

After we dropped T1IX the cardinality dropped from 500 to 25! This can have a big impact on performance. If we add more tables to the join, the optimizer may re-order the tables, or switch the Hash Join to Nested Loops or a Merge Join.

So, why did the cardinality (CDN) change after the “unused” index was dropped?

Even if the index was not used, it doesn’t mean that its statistics were also not used. That’s the case here. The optimizer used the NDV of the column group created by the T1IX index to calculate the join selectivity.

Let’s check the 10053 event trace file again, this time from executing the query without the T1IX index.

  
ColGroup (#1, Index) T2IX
    Col#: 1 2    CorStregth: 10.00
Join ColGroups for T1[T1] and T2[T2] : Using cdn sanity check

...

ColGroup cardinality sanity check: ndv for  T1[T1] = 4000.000000  T2[T2] = 10.000000
Join selectivity using 1 ColGroups: 2.5000e-04 (sel1 = 0.000000, sel2 = 0.000000)
Join Card:  25.000000 = outer (10.000000) * inner (10000.000000) * sel (2.5000e-04)
Join Card - Rounded: 25 Computed: 25.000000

We don’t have a full match for Join ColGroups (we’re missing the statistics from the T1IX index). The optimizer decided to perform a cardinality (CDN) sanity check, after which it ended up with a number of distinct values (NDV) of 4000.

How did the optimizer calculate 4000 ?

Since we don’t have the column group, the optimizer did a basic calculation of N1 (sel) * N2 (sel) = 0.05 * 0.005 = 0.00025. Having the (N1,N2) selectivity, we can extract the NDV as 1/0.00025 = 4000.
The representation in the trace file is slightly different: 2.5000e-04 = 2.5000 * (1/10000) = 0.00025.

With an incorrect cardinality estimate, the optimizer can make a wrong decision on join method, as we can see in the following example when we join the T3 table.

--query
select t1.c1,t2.c1,t3.c1
from t1,t2,t3
where t1.n1 = t2.n1
and t1.n2 = t2.n2
and t1.pk = t3.pk
/

Execution Plan
----------------------------------------------------------
Plan hash value: 1184213596

----------------------------------------------------------------------------
| Id  | Operation	    | Name | Rows  | Bytes | Cost (%CPU)| Time	   |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |	   |   500 |   541K|   120   (0)| 00:00:01 |
|*  1 |  HASH JOIN	    |	   |   500 |   541K|   120   (0)| 00:00:01 |
|*  2 |   HASH JOIN	    |	   |   500 | 51500 |	30   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| T2   |	10 |   490 |	 3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| T1   | 10000 |   527K|	27   (0)| 00:00:01 |
|   5 |   TABLE ACCESS FULL | T3   |  2000 |  1964K|	90   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."PK"="T3"."PK")
   2 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

SQL> drop index t1ix; 

Index dropped.

Execution Plan
----------------------------------------------------------

Plan hash value: 142045272

-------------------------------------------------------------------------------------
| Id  | Operation		     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	     |	    |	 25 | 27725 |	 35   (0)| 00:00:01 |
|   1 |  NESTED LOOPS		     |	    |	 25 | 27725 |	 35   (0)| 00:00:01 |
|   2 |   NESTED LOOPS		     |	    |	 25 | 27725 |	 35   (0)| 00:00:01 |
|*  3 |    HASH JOIN		     |	    |	 25 |  2575 |	 30   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL	     | T2   |	 10 |	490 |	  3   (0)| 00:00:01 |
|   5 |     TABLE ACCESS FULL	     | T1   | 10000 |	527K|	 27   (0)| 00:00:01 |
|*  6 |    INDEX UNIQUE SCAN	     | T3PK |	  1 |	    |	  0   (0)| 00:00:01 |
|   7 |   TABLE ACCESS BY INDEX ROWID| T3   |	  1 |  1006 |	  1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")
   6 - access("T1"."PK"="T3"."PK")

Here is another example, with a different data set, where there is a change in the row source used for building the hash table.

--query
SELECT t1.c1, t2.c1
FROM t1,t2,t3
WHERE t1.n1 = t2.n1
AND t1.n2 = t2.n2
AND t1.n1 = t3.pk
/

Execution Plan
----------------------------------------------------------
Plan hash value: 2855555999

-------------------------------------------------------------------------------
| Id  | Operation	       | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |  5000K|  1525M|  2766	(1)| 00:00:01 |
|*  1 |  HASH JOIN	       |      |  5000K|  1525M|  2766	(1)| 00:00:01 |
|   2 |   TABLE ACCESS FULL    | T2   |  1000 |   251K|    14	(0)| 00:00:01 |
|*  3 |   HASH JOIN	       |      |  1000K|    59M|  2739	(1)| 00:00:01 |
|   4 |    INDEX FAST FULL SCAN| T3PK |  2000 |  8000 |     3	(0)| 00:00:01 |
|   5 |    TABLE ACCESS FULL   | T1   |  1000K|    55M|  2734	(1)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")
   3 - access("T1"."N1"="T3"."PK")

SQL> drop index t1ix; 

Index dropped.

...

Execution Plan
----------------------------------------------------------
Plan hash value: 1993272941

------------------------------------------------------------------------------
| Id  | Operation	      | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |	 250K|	  76M|	2754   (1)| 00:00:01 |
|*  1 |  HASH JOIN	      |      |	 250K|	  76M|	2754   (1)| 00:00:01 |
|   2 |   INDEX FAST FULL SCAN| T3PK |	2000 |	8000 |	   3   (0)| 00:00:01 |
|*  3 |   HASH JOIN	      |      |	 250K|	  75M|	2750   (1)| 00:00:01 |
|   4 |    TABLE ACCESS FULL  | T2   |	1000 |	 251K|	  14   (0)| 00:00:01 |
|   5 |    TABLE ACCESS FULL  | T1   |	1000K|	  55M|	2734   (1)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T3"."PK")
   3 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

Although this will not apply in all cases, be careful before deciding to drop an index. Even if it seems like a good idea initially, it can have a big impact through wrong cardinality estimation, which can then lead to incorrect execution plan decisions, poor performance and unhappy customers.

So, what is the solution?
We don’t want huge indexes to be (DML) maintained if not used, right?

This brings me to the second part, Extended Statistics.

Extended Statistics

Oracle introduced extended statistics in Oracle 11g. Their main purpose is to capture the relationship within a column group. Extended statistics help the optimizer become aware of the logical relation between the columns in a group, information that is otherwise unknown to it. Extended statistics are represented as virtual (hidden) columns on which statistics are gathered.

When we had the T1IX index, the optimizer used its NDV to calculate the correct cardinality. The index statistics were used for the column group of the N1 and N2 columns. However, after we dropped the index (even though it was never used), we disrupted the optimizer’s cardinality estimates.

Extended statistics are maintained by the DBMS_STATS package, function CREATE_EXTENDED_STATS. Let’s create extended statistics and see the cardinality estimate.

SQL> select dbms_stats.create_extended_stats(null,'T1','(N1,N2)') from dual
  2  /

DBMS_STATS.CREATE_EXTENDED_STATS(NULL,'T1','(N1,N2)')
------------------------------------------------------------
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS

SQL> set autotrace trace exp
SQL> 
SQL> select t1.c1,t2.c1
  2  from t1,t2
  3  where t1.n1 = t2.n1
  4  and t1.n2 = t2.n2
  5  /

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |    25 |  2475 |    29   (0)| 00:00:01 |
|*  1 |  HASH JOIN	   |	  |    25 |  2475 |    29   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T2   |    10 |   490 |	3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   488K|    26   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

I know, you’re thinking “It didn’t help”, and you’re right. The optimizer did not use the column group (extension). If you look a little deeper into USER_TAB_COL_STATISTICS you will see that the virtual column has not been created; only the extension definition was created.

SQL> SELECT column_name, num_distinct, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'T1'
/

COLUMN_NAME				 NUM_DISTINCT HISTOGRAM
---------------------------------------- ------------ ---------------
N1						   20 NONE
N2						  200 NONE
N3						 1000 NONE
C1						    1 NONE

SQL> col extension_name for a40
SQL> SELECT extension_name, extension
FROM   user_stat_extensions
WHERE  table_name = 'T1'
/

EXTENSION_NAME				 EXTENSION
---------------------------------------- --------------------
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS		 ("N1","N2")

This behavior is expected, because DBMS_STATS.CREATE_EXTENDED_STATS will only create the extension; it won’t gather statistics for it. If you check the 10053 event trace file you will see NO STATISTICS (using defaults).

Column (#5): SYS_STUBZH0IHA7K$KEBJVXO5LOHAS(NUMBER)  NO STATISTICS (using defaults)

In order to fix this, we need to re-gather statistics on the table.

SQL> exec dbms_stats.gather_table_stats(null,'T1',method_opt=>'for all columns size 1');

PL/SQL procedure successfully completed.

SQL> SELECT column_name, num_distinct, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'T1'
/

COLUMN_NAME				 NUM_DISTINCT HISTOGRAM
---------------------------------------- ------------ ---------------
N1						   20 NONE
N2						  200 NONE
N3						 1000 NONE
C1						    1 NONE
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS			  200 NONE


SQL> select t1.c1,t2.c1
from t1,t2
where t1.n1 = t2.n1
and t1.n2 = t2.n2
/

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |   500 | 49500 |    29   (0)| 00:00:01 |
|*  1 |  HASH JOIN	   |	  |   500 | 49500 |    29   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T2   |    10 |   490 |	3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   488K|    26   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

Now the optimizer calculated the correct cardinality. If you examine the trace file, you should see join cardinality information comparable to when we had the index in place. The only difference is that this time it’s using the extension statistics.

...
  ColGroup (#1, VC) SYS_STUBZH0IHA7K$KEBJVXO5LOHAS
    Col#: 1 2    CorStregth: 20.00
  ColGroup (#1, Index) T2IX
    Col#: 1 2    CorStregth: 10.00

...

Join selectivity using 1 ColGroups: 0.005000 (sel1 = 0.000000, sel2 = 0.000000)
Join Card:  500.000000 = outer (10.000000) * inner (10000.000000) * sel (0.005000)
Join Card - Rounded: 500 Computed: 500.000000

With extended statistics we were able to help the optimizer calculate the cardinality (CDN) of 500. It used the extension’s NDV in the same way it previously used the T1IX index statistics.

However, what if T1 were a big table and you had already gathered statistics before creating the extension (column group)? Re-gathering can be time-consuming and have a negative impact on performance. To answer this question, we can use the METHOD_OPT approach of DBMS_STATS.GATHER_TABLE_STATS.
Creating the extension via METHOD_OPT will automatically gather statistics on the column group.

SQL>
SQL> SELECT column_name, num_distinct, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'T1'
/  

COLUMN_NAME	     NUM_DISTINCT HISTOGRAM
-------------------- ------------ ---------------
C1				1 NONE
N3			     1000 NONE
N2			      200 NONE
N1			       20 NONE

SQL> exec dbms_stats.gather_table_stats(null,'T1',method_opt=>'for all columns size 1 for columns size 1 (N1,N2)');

PL/SQL procedure successfully completed.

SQL> col column_name for a40
SQL> SELECT column_name, num_distinct, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'T1'
/

COLUMN_NAME				 NUM_DISTINCT HISTOGRAM
---------------------------------------- ------------ ---------------
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS			  200 NONE
C1						    1 NONE
N3						 1000 NONE
N2						  200 NONE
N1						   20 NONE

In this case the extension was automatically created and, at the same time, statistics were gathered on it.

If you find yourself in a situation where you used DBMS_STATS.CREATE_EXTENDED_STATS after gathering statistics, there are multiple ways to gather statistics on already existing extensions (a sketch of a couple of them follows the list below).

1) DBMS_STATS.GATHER_TABLE_STATS
(it will include all virtual columns)

2) METHOD_OPT=>’for columns size … (column group)’
(you can list the actual column group as you do for CREATE_EXTENDED_STATS)

3) METHOD_OPT=>’for columns size … extension_name’
(you can specify the extension name obtained from DBA|ALL|USER_STAT_EXTENSIONS dictionary view)

4) METHOD_OPT=>’for all hidden columns’
(hidden columns are by default included in default DBMS_STATS.GATHER_TABLE_STATS configuration)
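For example, a minimal sketch of options 2 and 4 against the T1 table used in this post (the other options follow the same pattern):

--option 2: name the column group directly
exec dbms_stats.gather_table_stats(null,'T1',method_opt=>'for columns size 1 (N1,N2)');

--option 4: gather statistics on all hidden (virtual) columns,
--which includes the SYS_STU... extension created earlier
exec dbms_stats.gather_table_stats(null,'T1',method_opt=>'for all hidden columns size 1');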

What’s the difference between METHOD_OPT and DBMS_STATS.CREATE_EXTENDED_STATS?

I believe the difference arises because METHOD_OPT is part of a procedure designed for statistics gathering. Extended statistics (column groups) were introduced with 11g as additional functionality of METHOD_OPT, whose main job is to gather statistics; hence creating a column group via METHOD_OPT both creates the extension and gathers its statistics at the same time.
Perhaps Oracle could have used different naming, something like CREATE_COLUMN_GROUP instead of CREATE_EXTENDED_STATS (which is misleading).

Column Correlation (CorStregth)

Column correlation represents how strongly the columns within a given group are related to each other. It’s calculated as the product of the NDVs of all columns in the group divided by the NDV of the column group. This is clearly explained in MultiColumn/Column Group Statistics – Additional Examples (Doc ID 872406.1).

Product of individual NDVs / NDV for the column group.
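Plugging in the statistics shown earlier for the two indexes, this matches the CorStregth values reported in the 10053 trace:

CorStregth(T1IX on N1,N2) = (20 * 200) / 200 = 20    -- trace: CorStregth: 20.00
CorStregth(T2IX on N1,N2) = (10 *  10) /  10 = 10    -- trace: CorStregth: 10.00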

If the product of the individual column NDVs is smaller than the CG NDV, the optimizer will ignore the CG (extended) stats. Based on the tests I performed, CorStregth is taken into consideration only when the optimizer has to decide between multiple column groups for which a Full Match can be constructed. In the other cases, when a Partial Match is used, the optimizer picks the column group with the smaller NDV.
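
As a quick sanity check, the CorStregth values reported in the traces below can be reproduced directly from the column NDVs used in these examples (N1=20, N2=200, N3=1000, CG(N1,N2)=200, CG(N2,N3)=1000):

-- CorStregth = product of the individual column NDVs / column group NDV
select (20  * 200)  / 200  as corstregth_n1_n2,   -- 20, as reported for CG (N1,N2)
       (200 * 1000) / 1000 as corstregth_n2_n3    -- 200, as reported for CG (N2,N3)
from   dual;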

Here are two different cases; one where Full Match is constructed and another for Partial Match:

Column Group – Full Match

Let’s examine the first case where we can construct Full Match.

COLUMN_NAME                              NUM_DISTINCT
---------------------------------------- ------------
N1                                                 20
N2                                                200
N3                                               1000
C1                                                  1
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS                    200 ---> (N1,N2)
SYS_STUXQEJ8WOFTK5R5EYM4DF_60V                   1000 ---> (N2,N3)

select *
from t1
where t1.n1 = 10
and t1.n2 = 40
and t1.n3 = 80
/

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation	  | Name | Rows  | Bytes | Cost (%CPU)| Time	 |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |     5 |   270 |   228   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1	 |     5 |   270 |   228   (1)| 00:00:01 |
--------------------------------------------------------------------------

Trace file
...
Access path analysis for T1
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T1[T1]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#1): N1(NUMBER)
    AvgLen: 3 NDV: 20 Nulls: 0 Density: 0.050000 Min: 0.000000 Max: 19.000000
  Column (#2): N2(NUMBER)
    AvgLen: 4 NDV: 200 Nulls: 0 Density: 0.005000 Min: 0.000000 Max: 199.000000
  Column (#3): N3(NUMBER)
    AvgLen: 4 NDV: 1000 Nulls: 0 Density: 0.001000 Min: 0.000000 Max: 999.000000
  Column (#6): SYS_STUXQEJ8WOFTK5R5EYM4DF_60V(NUMBER)
    AvgLen: 12 NDV: 1000 Nulls: 0 Density: 0.001000
  Column (#5): SYS_STUBZH0IHA7K$KEBJVXO5LOHAS(NUMBER)
    AvgLen: 12 NDV: 200 Nulls: 0 Density: 0.005000
  ColGroup (#1, VC) SYS_STUXQEJ8WOFTK5R5EYM4DF_60V
    Col#: 2 3    CorStregth: 200.00
  ColGroup (#2, VC) SYS_STUBZH0IHA7K$KEBJVXO5LOHAS
    Col#: 1 2    CorStregth: 20.00
  ColGroup Usage:: PredCnt: 3  Matches Full: #1  Partial:  Sel: 0.001000

We have the T1 table with two column groups, on (N1,N2) and (N2,N3).
In this case the product of the individual N1, N2, N3 NDVs is much higher than the column group NDVs, hence the optimizer will calculate the cardinality (CDN) based on the extensions. With predicates on (N1,N2,N3) the optimizer can construct a Full Match for both column groups. In this scenario, the cardinality can be expressed as:

1) Using CG #1 (N2,N3): 1/CG(NDV) * 1/N1(NDV) = 1/1000 * 1/20 = 0.001 * 0.05 = 0.00005; 0.00005 * 100000 = 5

2) Using CG #2 (N1,N2): 1/CG(NDV) * 1/N3(NDV) = 1/200 * 1/1000 = 0.005 * 0.001 = 0.000005; 0.000005 * 100000 = 0.5 (rounded to 1)

The trace above demonstrates that the optimizer decided to use CG (N2,N3), i.e. SYS_STUXQEJ8WOFTK5R5EYM4DF_60V, because it has the higher CorStregth of 200 (vs. 20 for CG (N1,N2)).
To show how CorStregth can influence the optimizer, I've changed the N1 distinct value to 210 to increase the CorStregth of CG (N1,N2).
In this case, column group (N1,N2) was used in the cardinality estimates.

exec dbms_stats.set_column_stats(null,'T1','N1',distcnt=>210);

Execution Plan
----------------------------------------------------------

Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |	1 |    11 |   228   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE    |	  |	1 |    11 |	       |	  |
|*  2 |   TABLE ACCESS FULL| T1   |	1 |    11 |   228   (1)| 00:00:01 |
---------------------------------------------------------------------------



--Trace file
...
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T1[T1]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#1): N1(NUMBER)
    AvgLen: 3 NDV: 210 Nulls: 0 Density: 0.004762 Min: 0.000000 Max: 19.000000
  Column (#2): N2(NUMBER)
    AvgLen: 4 NDV: 200 Nulls: 0 Density: 0.005000 Min: 0.000000 Max: 199.000000
  Column (#3): N3(NUMBER)
    AvgLen: 4 NDV: 1000 Nulls: 0 Density: 0.001000 Min: 0.000000 Max: 999.000000
  Column (#6): SYS_STUXQEJ8WOFTK5R5EYM4DF_60V(NUMBER)
    AvgLen: 12 NDV: 1000 Nulls: 0 Density: 0.001000
  Column (#5): SYS_STUBZH0IHA7K$KEBJVXO5LOHAS(NUMBER)
    AvgLen: 12 NDV: 200 Nulls: 0 Density: 0.005000
  ColGroup (#2, VC) SYS_STUBZH0IHA7K$KEBJVXO5LOHAS
    Col#: 1 2    CorStregth: 210.00
  ColGroup (#1, VC) SYS_STUXQEJ8WOFTK5R5EYM4DF_60V
    Col#: 2 3    CorStregth: 200.00
  ColGroup Usage:: PredCnt: 3  Matches Full: #2  Partial:  Sel: 0.005000

We can see that the optimizer chose CG #2 based on CorStregth.

Column Group – Partial Match

This second case shows what happens when Full Match can’t be constructed and the optimizer chooses the column group with smaller NDV. I’ve built the same T1 table, but now with Column Groups (N1,N2,N3) and (N1,N2,C1).

COLUMN_NAME                              NUM_DISTINCT
---------------------------------------- ------------
N1                                                 20
N2                                                200
N3                                               1000
C1                                                  1
SYS_STU2NGW2O#$ZX2CDZYOMNMFP64                    200
SYS_STUOYQUEIAZ7FI9DV53VLN$$$0                   1000


--Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation	  | Name | Rows  | Bytes | Cost (%CPU)| Time	 |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |   500 | 27000 |   228   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1	 |   500 | 27000 |   228   (1)| 00:00:01 |
--------------------------------------------------------------------------



--Trace file
...
Access path analysis for T1
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T1[T1]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#1): N1(NUMBER)
    AvgLen: 3 NDV: 20 Nulls: 0 Density: 0.050000 Min: 0.000000 Max: 19.000000
  Column (#2): N2(NUMBER)
    AvgLen: 4 NDV: 200 Nulls: 0 Density: 0.005000 Min: 0.000000 Max: 199.000000
  Column (#6): SYS_STUOYQUEIAZ7FI9DV53VLN$$$0(NUMBER)
    AvgLen: 12 NDV: 1000 Nulls: 0 Density: 0.001000
  Column (#5): SYS_STU2NGW2O#$ZX2CDZYOMNMFP64(NUMBER)
    AvgLen: 12 NDV: 200 Nulls: 0 Density: 0.005000
  Column (#3): N3(NUMBER)
    AvgLen: 4 NDV: 1000 Nulls: 0 Density: 0.001000 Min: 0.000000 Max: 999.000000
  Column (#4): C1(VARCHAR2)
    AvgLen: 43 NDV: 1 Nulls: 0 Density: 1.000000
  ColGroup (#1, VC) SYS_STUOYQUEIAZ7FI9DV53VLN$$$0
    Col#: 1 2 3    CorStregth: 4000.00
  ColGroup (#2, VC) SYS_STU2NGW2O#$ZX2CDZYOMNMFP64
    Col#: 1 2 4    CorStregth: 20.00
  ColGroup Usage:: PredCnt: 2  Matches Full:  Partial: #2 (1 2 )  Sel: 0.005000
  Table: T1  Alias: T1

Even though CG #1 has a much higher CorStregth of 4000 (vs. 20), the optimizer used CG #2, the column group with the smaller NDV (200 vs. 1000). Let's decrease CG #1's NDV to 100 to see whether that will influence the optimizer to use CG #1.

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation	  | Name | Rows  | Bytes | Cost (%CPU)| Time	 |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |  1000 | 50000 |   228   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1	 |  1000 | 50000 |   228   (1)| 00:00:01 |
--------------------------------------------------------------------------



--Trace file
...
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T1[T1]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#1): N1(NUMBER)
    AvgLen: 3 NDV: 20 Nulls: 0 Density: 0.050000 Min: 0.000000 Max: 19.000000
  Column (#2): N2(NUMBER)
    AvgLen: 4 NDV: 200 Nulls: 0 Density: 0.005000 Min: 0.000000 Max: 199.000000
  Column (#6): SYS_STUOYQUEIAZ7FI9DV53VLN$$$0(NUMBER)
    AvgLen: 12 NDV: 100 Nulls: 0 Density: 0.010000
  Column (#5): SYS_STU2NGW2O#$ZX2CDZYOMNMFP64(NUMBER)
    AvgLen: 12 NDV: 200 Nulls: 0 Density: 0.005000
  Column (#3): N3(NUMBER)
    AvgLen: 4 NDV: 1000 Nulls: 0 Density: 0.001000 Min: 0.000000 Max: 999.000000
  Column (#4): C1(VARCHAR2)
    AvgLen: 43 NDV: 1 Nulls: 0 Density: 1.000000
  ColGroup (#1, VC) SYS_STUOYQUEIAZ7FI9DV53VLN$$$0
    Col#: 1 2 3    CorStregth: 40000.00
  ColGroup (#2, VC) SYS_STU2NGW2O#$ZX2CDZYOMNMFP64
    Col#: 1 2 4    CorStregth: 20.00
  ColGroup Usage:: PredCnt: 2  Matches Full:  Partial: #1 (1 2 ) Sel: 0.010000
  Table: T1  Alias: T1

Since CG #1 now has the smaller NDV (100 vs. 200 for CG #2), the optimizer calculated the cardinality (CDN) based on CG #1 as:
1/NDV(CG) * original cardinality = 1/100 * 100000 = 0.01 * 100000 = 1000

Summary

There are situations where dropping a potentially ‘unused’ index can have a negative influence on the optimizer’s cardinality estimates. Consider using extended statistics to make the optimizer aware of the logical relationship between columns. Extended statistics are represented by a virtual (hidden) column. Regarding statistics gathering, note that DBMS_STATS.CREATE_EXTENDED_STATS won’t gather statistics automatically; use METHOD_OPT instead. If you have skewed data, you may also want to create proper histograms. When histograms exist on the base columns, you’ll need to create a histogram on the column group as well for it to be taken into consideration, because Oracle gives priority to histograms. Column group extended statistics can be used to preserve cardinality stability when dropping an index. While this might not be appropriate for all cases, it’s a viable option for helping the optimizer arrive at good cardinality estimates.

Lastly, you can use the DBMS_STATS.SEED_COL_USAGE procedure to monitor how columns are used together in filter predicates, join predicates and GROUP BY clauses. A report can then be generated with DBMS_STATS.REPORT_COL_USAGE, and its output can be used to identify appropriate column groups. I look forward to exploring more on that in a future post.
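
As a rough sketch of that workflow on the T1 table from this post (the 300-second monitoring window is just an example value):

-- Record column usage for the next 300 seconds
exec dbms_stats.seed_col_usage(null,null,300);

-- ... run (or EXPLAIN PLAN for) the representative workload here ...

-- Report the column usage recorded for T1
select dbms_stats.report_col_usage(user,'T1') from dual;

-- Optionally create the column groups recommended from that usage
select dbms_stats.create_extended_stats(user,'T1') from dual;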

Categories: DBA Blogs

Getting ADF Data in a Jet Component (1)

Amis Blog - Tue, 2017-03-07 09:33

Oracle JET has been around for a while, and at the moment we are investigating what it would take to integrate JET with our existing ADF application. In the current ADF application we want to build a dashboard in JET, but we still need to know which project to fetch the data for. Therefore I am researching how to get data from our ADF application into our JET part. In this blog I will show you a quick and easy way to get your ADF BC data into your JET components without using REST services.

I used the blog of Andrejus1 to set up JET within my ADF Application.

Add the JET libraries to the public_html folder of the ViewController project.

(Final) Structure of the project:

Create a JSF page, use af:resource to reference the CSS and RequireJS libraries, and add main.js:

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE html>
<f:view xmlns:f="http://java.sun.com/jsf/core" xmlns:af="http://xmlns.oracle.com/adf/faces/rich" xmlns:dvt="http://xmlns.oracle.com/dss/adf/faces" xmlns:ui="http://java.sun.com/jsf/facelets">
    <af:document title="main.jsf" id="d1">
        <af:messages id="m1"/>
        <af:resource type="css" source="jet/css/alta/2.2.0/web/alta.min.css"/>
        <af:resource type="javascript" source="jet/js/libs/require/require.js"/>
        <img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7" data-wp-preserve="%3Cscript%3E%0A%20%20%20%20%20%20%20%20%20%20require.config(%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20baseUrl%20%3A%20%22jet%2Fjs%22%0A%20%20%20%20%20%20%20%20%20%20%7D)%3B%0A%0A%20%20%20%20%20%20%20%20%20%20require(%5B%22main%22%5D)%3B%0A%20%20%20%20%20%20%20%20%3C%2Fscript%3E" data-mce-resize="false" data-mce-placeholder="1" class="mce-object" width="20" height="20" alt="&lt;script&gt;" title="&lt;script&gt;" />
        <af:form id="f1">
        
        </af:form>
    </af:document>
</f:view>

Then I added my composite component folder to the js folder of JET. My component is named amis-person and will show the name of the person in capital letters and the email address within a blue box. You can read more about composite components in the blog series of Duncan2.

Put the metadata directly in loader.js instead of in a .json file, otherwise it will not work. When you load it via the .json file and console.log the metadata inside the function, you will see that it does not print the metadata from the .json file.


define(['ojs/ojcore',
        './amis-person',
        'text!./amis-person.html',
        'css!./amis-person',
        'ojs/ojcomposite'],
  function (oj, ComponentModel, view, css) {
    'use strict';
    var metadata = '{ "properties": { "amisPersonName": { "type": "string"}, "amisPersonEmail": { "type": "string"}} }';
    oj.Composite.register('amis-person',
      {
        metadata: { inline: JSON.parse(metadata) },
        viewModel: { inline: ComponentModel },
        view: { inline: view },
        css: { inline: css }
      });
  });

Import the component in main.js to make it available.

require(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'jet-composites/amis-person/loader'],
  function (oj, ko, $) {
    function ViewModel() {
      var self = this;
    }
    ko.applyBindings(new ViewModel(), document.body);
  });

Create a page fragment containing the HTML that shows your component; in this case it is just the composite component.

<?xml version='1.0' encoding='UTF-8'?>
  <ui:composition xmlns:ui="http://java.sun.com/jsf/facelets">
    <amis-person amis-person-name="NAME" amis-person-email="EMAIL" />
  </ui:composition>

In the JSF page, create an iterator for the view model and put the page fragment within the iterator:

 <af:iterator id="iterator" value="#{bindings.EmployeesVO.collectionModel}" var="item">
    <ui:include src="/fragments/amis-person-fragment.jsff"/>
 </af:iterator>

Change the bindings in the page fragment to match the output of the iterator:

 <amis-person amis-person-name="#{item.bindings.FirstName.inputValue}" amis-person-email="#{item.bindings.Email.inputValue}" />

That’s it, you are done. When I now run the project, I see the data from the Employee view model in the JET component I made:

 

Sources
  1. http://andrejusb.blogspot.nl/2015/12/improved-jet-rendering-in-adf.html
  2. https://blogs.oracle.com/groundside/entry/jet_composite_components_i_backgrounder (and the other blogs)
  3. ADFJetDemo Application or Github
Versions used

JDeveloper 12.1.3,
OracleJet V2.2.0

Disclaimer

The information is based on my personal research. At the moment, Oracle does not support or encourage integrating ADF and Jet. Oracle is working on JET Composite Components in ADF.

There is also a second part on how to do this, but with DVTs.

The post Getting ADF Data in a Jet Component (1) appeared first on AMIS Oracle and Java Blog.

Oracle Brings Together Customer Experience Industry Leaders at Modern Customer Experience 2017

Oracle Press Releases - Tue, 2017-03-07 07:00
Press Release
Oracle Brings Together Customer Experience Industry Leaders at Modern Customer Experience 2017 Event for marketing, commerce, sales and service professionals provides insights necessary to meet the expectations of modern consumers and drive meaningful business results.

Redwood Shores, Calif.—Mar 7, 2017

To help customer experience professionals drive sustainable differentiation and deliver predictable and tangible business results, Oracle will host Modern Customer Experience 2017 in Las Vegas, Nevada, April 25-27. Held at the Mandalay Bay Resort, Modern Customer Experience 2017 helps attendees understand today’s empowered customers and implement modern best practices in marketing, sales, service and commerce. The event provides insights into real-world business transformation initiatives and demonstrates innovations that push the boundaries of traditional customer experience management approaches.

Empowered customers are accelerating change in all industries and forcing organizations to rethink traditional business models. Modern Customer Experience 2017 helps business leaders from across marketing, sales and customer service successfully manage these changes and leverage new digital technologies to lead successful business transformation within their organizations. To personalize the experience for attendees, Modern Customer Experience 2017 includes dedicated marketing, sales, service and commerce sessions that feature inspiring business transformation stories, hands-on workshops, small breakout sessions, product demonstrations and networking opportunities.

  • Modern Marketing Experience: Helps attendees create innovative marketing strategies to drive revenue. Sessions provide insights from experts and end-users into data-driven marketing, marketing automation, cross-channel marketing, mobile marketing, and account-based marketing. Modern Marketing Experience also includes the 2017 Markie Awards, which showcase examples of excellence in modern marketing, scheduled for Tuesday, April 25th at 7:00 p.m. PT.
  • Modern Sales Experience: Helps attendees embrace modern sales best practices to drive profitability in their organizations. Sessions provide insights from thought leaders and industry insiders into modern sales and business transformation, the future direction of the sales ecosystem, and how to build a unified sales and service approach.
  • Modern Service Experience: Provides attendees with insights into next-generation customer service that helps businesses create sustainable differentiation. Sessions include best practices for supporting organic channels and knowledge-driven, personalized customer service strategies, as well as guidance on how to deliver proactive, self-solving service powered by the Internet of Things (IoT), robotics and artificial intelligence.
  • Modern Commerce Experience: Helps attendees meet the demand for a seamless customer experience and drive digital revenues. Sessions provide actionable insights into the shift to mobile, omni-channel experience strategies, innovations in payments and the key components and drivers of customer loyalty.

“Innovations in social, mobile and cloud have fundamentally changed the way consumers interact with brands and this is just the beginning as a new wave of technologies such as artificial intelligence and the Internet of Things reach the mainstream,” said Des Cahill, vice president and head CX Evangelist, Oracle. “These changes are creating evermore empowered customers that are forcing organizations to rethink traditional approaches to customer experience management. That’s why we’re bringing together over 3,000 customer experience professionals to discuss the latest CX innovations and best practices at Modern Customer Experience 2017. It will be a great three days packed with inspiration, innovation and impact.”

Interested in attending Modern Customer Experience 2017? Register here.

For more information on Oracle Customer Experience solutions, visit the Oracle Customer Experience blog, like us on Facebook and follow us on Twitter.

Contact Info
Simon Jones
PR for Oracle
+1.415.856.5155
sjones@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Simon Jones

  • +1.415.856.5155


Partner Webcast – Enabling Oracle Database High Availability and Disaster Recovery with Oracle Cloud

The Database High Availability and Disaster Recovery needs of customers traditionally have required significant capital investment in the infrastructure that provides the redundant capabilities that...

We share our skills to maximize your revenue!
Categories: DBA Blogs

E-Business Suite 12.2 Certified with Oracle VM 3

Steven Chan - Tue, 2017-03-07 02:06

Oracle VM 3.4 is the latest release of Oracle's server virtualization and management solution, released as Oracle VM Server for x86 and Oracle VM Manager. Oracle VM 3.x is certified with E-Business Suite 12.2.x. This certification covers all current and future Oracle VM 3 updates, including subsequent 3.x releases.

Oracle VM 3 was previously certified with Oracle E-Business Suite 11i, 12.0, and 12.1. 

Oracle VM 3 Manager screenshot

Oracle VM 3 supports a number of guest operating systems including various flavors and versions of Linux, Solaris and Windows. For information regarding certified platforms, installation and upgrade guidance and prerequisite requirements please refer to the Certifications tab on My Oracle Support as well as the following documentation:

We offer a variety of Oracle VM Templates, Oracle VM Virtual Appliances, and Oracle VM Virtual Assemblies for E-Business Suite 12.2.  You can download these here:

You can search for "e-business vm" to find the latest VM downloads for Oracle E-Business Suite 12.2.

Related Articles

Categories: APPS Blogs

Statistics In-Database Archiving

Tom Kyte - Tue, 2017-03-07 01:06
Hi, I am curious about the table statistics if I use In-Database Archiving. Are the statistics calculated including or excluding invisible rows? Best regards, Mathias
Categories: DBA Blogs
