Feed aggregator

Welcome Back, My Friends!

Floyd Teter - Tue, 2017-03-07 13:02
Welcome back, my friends
To the show that never ends
We're so glad you could attend
Come inside! Come inside!
                     -- From Emerson, Lake and Palmer's Karn Evil 9

Yup, it's true.  We're breathing new life into an old blog with the resurrection of ORCLville.

When I closed down ORCLville last year, it was over concerns regarding conflicts of interest over my employment with Oracle while serving as an impartial source of information.  Admittedly, that conflict still exists.

I also stepped away because I'm no longer a big fan of long form text.  Terse messages and visual presentations are more my speed these days.

So why am I bringing ORCLville back?  Because there are so many stories going untold: stories that involve Oracle Cloud products and the delivery of the services that go with those products.  In one way or another, I witness many Cloud challenges and victories every day.  And this seems like the best way to share and discuss those experiences...which, in turn, does us all a bit of good as we continue on this Cloud journey.

So we're up and running again, effective immediately.  Some disclaimers to share so you'll all be aware and so I can sleep at night:

1.  I'm an Oracle employee.   And I love my job.  So don't look for any criticism or sharing of information that might mess up my gig.
2.  I also own Oracle stock.  It's a key component of my long-term savings program, so I'd like to see it do well.  It's not likely you'll see much here that might cause the stock to sink long-term.
3.  Items 1 & 2 above notwithstanding, the thoughts and opinions expressed here are my own and no-one else's.  I'm also the final decision maker in terms of what we explore here - not Oracle, but me.
4.  I'm hoping we don't have a monologue, but a two-way conversation.  Your comments matter.  So comment early and often.

So we're back.   It's been tough being gone, so I'm really happy to be back again with the show that never ends!

Oracle Java Cloud - How to Create Access Rule for Managed Server

Andrejus Baranovski - Tue, 2017-03-07 11:40
When you get a fresh instance of Oracle Java Cloud, you are assigned one admin server and one managed server. If there is a requirement to host multiple environments - demo, production, etc. - one managed server is not enough. It is better to run different environments on dedicated managed servers; this simplifies maintenance. WebLogic 12.2.1.2 partitions are not yet supported for Fusion Middleware, so the only choice we have right now is different managed servers per environment.

In this short post, I will describe how to enable access to a newly created managed server in Oracle Java Cloud. I have created the RedSamuraiProd managed server on port 9075 in our Oracle Java Cloud service instance:


To allow access to port 9075, I need to define a new access rule. This can be done through the Cloud instance console: select Access Rules from the menu:


In the Access Rules section, create a new rule. The important part here is to specify PUBLIC-INTERNET for the source, WLS_MANAGED_SERVER for the destination, and the managed server port:


Once the rule is created, the managed server becomes accessible from the internet. So simple, and it works!

March 22: General Electric—Oracle ERP Cloud Customer Forum

Linda Fishman Hoyle - Tue, 2017-03-07 11:05

Join us for an Oracle Customer Reference Forum on March 22, 2017, at 8:30 a.m. PT.

Ricardo Cayeula, Chief Information Officer at General Electric Oil & Gas, will discuss GE’s digital strategy and migration to the cloud and share the company’s experience implementing Oracle ERP Cloud.

Register now to attend the live forum and learn more about General Electric’s experience with Oracle ERP Cloud.

The Side Effects of Drop ‘Unused’ Index

Pythian Group - Tue, 2017-03-07 10:54

In this blog post I’ll demonstrate a scenario where dropping a potentially ‘unused’ index can have a negative influence on the optimizer’s cardinality estimation. Having columns with logical correlation and skewed data can cause difficulties for the optimizer when calculating cardinality. This post will also address what we can do to help the optimizer with better cardinality estimates.

The inspiration for this post came from a recent index usage review. One of the requirements was to determine how to confirm which unused indexes qualify to be dropped. I decided to do some tests regarding extended statistics and the effect of potentially dropping an ‘unused’ index. You will observe the kind of result that can follow from dropping an index which has not been used. It's important to remember that this does not apply in all cases. Conversely, even if an index is used, it doesn't mean that it's needed.

This is mostly relevant for columns with skewed data which might have a logical relationship.
Hopefully, it can help you answer some of the following questions:

  1. Is the optimizer using the indexes behind the scenes?
  2. While there are methods to determine if an index has been used in an execution plan, can an index be dropped on that basis alone?
  3. If we drop a composite index (built on correlated columns), can we do anything to avoid performance degradation?

Before we start with the use case, let’s briefly review some concepts.

The basic formula for selectivity is 1/NDV. The cardinality (CDN) is calculated as selectivity * total number of rows.
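
As a quick worked example (an added sketch, using numbers that also appear in the test setup below): a column with 20 distinct values in a 10,000-row table gives, for an equality predicate:

select 1/20 as selectivity, (1/20)*10000 as cardinality from dual;   -- selectivity = 0.05, CDN = 500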

The selectivity of a join is defined as the selectivity of the most selective join column adjusted by the proportion of not null values in each join column.

Join Selectivity:
Sel = 1/max[NDV(t1.c1),NDV(t2.c2)] *
           ( (Card t1 - # t1.c1 NULLs) / Card t1) *
           ( (Card t2 - # t2.c2 NULLs) / Card t2)

Join Cardinality:
Card(Pj) = Card(T1) * Card(T2) * Sel(Pj)

In Oracle’s Doc ID 68992.1 you can find a more detailed explanation about different selectivity calculations based on different predicates. For simplicity, I will use equality predicate.

This blog post is divided into three sections.

  1. A use case demonstrating how dropping an "unused" index can mess up the optimizer's cardinality calculation.
  2. How to help the optimizer get better cardinality estimates with extended statistics.
  3. More explanation of column correlation (CorStregth).

Preparation

For the first and second part I’ll be using the following configuration.

drop table t1 purge;
drop table t2 purge;
drop table t3 purge;

create table t1
as
select rownum pk,
    mod(level,  20) n1,
    mod(level, 200) n2,
    mod(level,1000) n3,
    lpad('x',42,'x') c1
from dual
connect by level <= 10000
/

create table t2
as
select rownum pk,
    mod(level,  20) n1,
    mod(level, 200) n2,
    mod(level,1000) n3,
    lpad('x',42,'x') c1
from dual
connect by level <= 10 
/ 

CREATE TABLE t3
AS
SELECT rownum pk, lpad('x',rownum,'x') c1
FROM dual
CONNECT BY level <= 2000
/

exec dbms_stats.gather_table_stats(user,'T1',method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'T2',method_opt=>'for all columns size 1');
exec dbms_stats.gather_table_stats(user,'T3',method_opt=>'for all columns size 1');


alter table t1 add constraint t1pk primary key(pk);
alter table t2 add constraint t2pk primary key(pk);
alter table t3 add constraint t3pk primary key(pk);

create index t1ix on t1(n1,n2);
create index t2ix on t2(n1,n2);

The script creates three tables (T1, T2 and T3), with a composite index on the (N1,N2) columns of both T1 and T2.
The statistics are as follows:

--Column statistics
TABLE_NAME	     COLUMN_NAME	  NUM_DISTINCT	  COL_SEL
-------------------- -------------------- ------------ ----------
T1		     C1 			     1		1
T1		     N1 			    20	      .05
T1		     N2 			   200	     .005
T1		     N3 			  1000	     .001
T2		     C1 			     1		1
T2		     N1 			    10	       .1
T2		     N2 			    10	       .1
T2		     N3 			    10	       .1

--Index statistics
TABLE_NAME	     INDEX_NAME 	    NUM_ROWS DISTINCT_KEYS    IND_SEL
-------------------- -------------------- ---------- ------------- ----------
T1		     T1IX		       10000	       200	 .005
T2		     T2IX			  10		10	   .1

Table T1 has 10000 rows and T2 only 10 rows. I've created composite indexes on both T1 and T2 since I am using the (N1,N2) columns of both in the selection. As you can see from the data dictionary, we have 200 distinct keys for the T1IX index and 10 for the T2IX index. Based on the formula we can easily derive the selectivity as 1/NDV; for the T1IX index that is 1/200 = 0.005.
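
For reference, the two listings above can be reproduced from the data dictionary. The original queries aren't shown in the post, so this is a minimal sketch under that assumption:

--Column statistics
select table_name, column_name, num_distinct, round(1/num_distinct,4) col_sel
from user_tab_col_statistics
where table_name in ('T1','T2')
order by table_name, column_name;

--Index statistics
select table_name, index_name, num_rows, distinct_keys, round(1/distinct_keys,4) ind_sel
from user_indexes
where index_name in ('T1IX','T2IX');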

Drop of an “unused” index can mess up optimizer’s cardinality estimation

The query I’m going to use in this case is:

select t1.c1,t2.c1
from t1,t2
where t1.n1 = t2.n1
and t1.n2 = t2.n2
/

Let's first check the cardinality (CDN) before we make any changes. The column we're interested in is Rows; that's where the cardinality estimate is shown.

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |   500 | 49500 |    29   (0)| 00:00:01 |
|*  1 |  HASH JOIN	   |	  |   500 | 49500 |    29   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T2   |    10 |   490 |	3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   488K|    26   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

Based on the formula provided, you may have expected to see a cardinality (CDN) of 25. Since we don't have nulls, we can do the calculation as 1/max(NDV(t1.n1),NDV(t2.n1)) * 1/max(NDV(t1.n2),NDV(t2.n2)) = 1/20 * 1/200 = 0.05 * 0.005 = 0.00025. We came up with a selectivity of 0.00025.
Now, if we do the join cardinality calculation: 10 (T2) * 10000 (T1) * selectivity 0.00025 = 100000 * 0.00025 = 25.

But instead of 25, the optimizer calculated cardinality (CDN) of 500. In order to better understand the calculation, let’s review the 10053 event trace file.

  ColGroup (#1, Index) T1IX
    Col#: 1 2    CorStregth: 20.00
  ColGroup (#1, Index) T2IX
    Col#: 1 2    CorStregth: 10.00
Join ColGroups for T1[T1] and T2[T2] : (#1, #1)

What's interesting here is the Join ColGroups analysis. Since we have a full match between the join columns and the indexed columns, the optimizer considers Join ColGroups between T1 and T2. It will use the ColGroup statistics to calculate the join cardinality (CDN).

Col#: column position (self-explanatory)
CorStregth: correlation strength. We'll come back to correlation strength (CorStregth) later.

Within the trace file, in the Nested Loops (NL) section (remember, join cardinality is only listed in the NL section), you should see something like:

Join selectivity using 1 ColGroups: 0.005000 (sel1 = 0.000000, sel2 = 0.000000)
Join Card:  500.000000 = outer (10.000000) * inner (10000.000000) * sel (0.005000)
Join Card - Rounded: 500 Computed: 500.000000

Based on the join selectivity formula, the optimizer chooses the T1IX index (ColGroup) statistics for the selectivity calculation, hence 1/NDV (T1IX index) = 1/200 = 0.005.
Afterwards, the join cardinality is calculated as num_rows (T2) * num_rows (T1) * selectivity (0.005) = 10 * 10000 * 0.005 = 100000 * 0.005 = 500.

If you’re not aware of such cases, you may decide to drop an index based solely on the execution plan information. Even worse, the developers can easily decide to drop an index without knowing the possible effect.

Now, let’s observe the cardinality estimation after T1IX index is dropped.

SQL> drop index t1ix;

Index dropped.

...

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |    25 |  2475 |    29   (0)| 00:00:01 |
|*  1 |  HASH JOIN	   |	  |    25 |  2475 |    29   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T2   |    10 |   490 |	3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   488K|    26   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

After we dropped T1IX, the cardinality estimate dropped from 500 to 25! This can have a big impact on performance. If we add more tables to the join, the optimizer may re-order the tables, or switch the Hash Join to a Nested Loops or Merge Join.

So, why did the cardinality (CDN) change after the “unused” index was dropped?

Even if the index was not used, it doesn’t mean that its statistics were also not used. That’s the case here. The optimizer used the NDV of the column group created by the T1IX index to calculate the join selectivity.
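
As a side note, classic index usage monitoring would not have warned us here: it only records whether the index shows up in an execution plan, not whether its statistics were used for cardinality estimates. A minimal sketch of such a check (an addition to the original post):

SQL> alter index t1ix monitoring usage;

-- ...run the workload for a while, then:

SQL> select index_name, table_name, monitoring, used from v$object_usage where index_name = 'T1IX';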

Let's check the 10053 event trace file again, this time for the query executed without the T1IX index.

  
ColGroup (#1, Index) T2IX
    Col#: 1 2    CorStregth: 10.00
Join ColGroups for T1[T1] and T2[T2] : Using cdn sanity check

...

ColGroup cardinality sanity check: ndv for  T1[T1] = 4000.000000  T2[T2] = 10.000000
Join selectivity using 1 ColGroups: 2.5000e-04 (sel1 = 0.000000, sel2 = 0.000000)
Join Card:  25.000000 = outer (10.000000) * inner (10000.000000) * sel (2.5000e-04)
Join Card - Rounded: 25 Computed: 25.000000

We don't have a full match for Join ColGroups (we're missing the statistics from the T1IX index). The optimizer decided to perform a cardinality (CDN) sanity check, after which it ended up with a number of distinct values (NDV) of 4000.

How did the optimizer calculate 4000?

Since we no longer have the column group, the optimizer did the basic calculation N1 (sel) * N2 (sel) = 0.05 * 0.005 = 0.00025. Having the (N1,N2) selectivity, we can derive the NDV as 1/0.00025 = 4000.
The representation in the trace file is simply scientific notation: 2.5000e-04 = 0.00025.

With an incorrect cardinality estimate, the optimizer can make the wrong decision on join methods, as we can see in the following example when we join the T3 table.

--query
select t1.c1,t2.c1,t3.c1
from t1,t2,t3
where t1.n1 = t2.n1
and t1.n2 = t2.n2
and t1.pk = t3.pk
/

Execution Plan
----------------------------------------------------------
Plan hash value: 1184213596

----------------------------------------------------------------------------
| Id  | Operation	    | Name | Rows  | Bytes | Cost (%CPU)| Time	   |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |	   |   500 |   541K|   120   (0)| 00:00:01 |
|*  1 |  HASH JOIN	    |	   |   500 |   541K|   120   (0)| 00:00:01 |
|*  2 |   HASH JOIN	    |	   |   500 | 51500 |	30   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| T2   |	10 |   490 |	 3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| T1   | 10000 |   527K|	27   (0)| 00:00:01 |
|   5 |   TABLE ACCESS FULL | T3   |  2000 |  1964K|	90   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."PK"="T3"."PK")
   2 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

SQL> drop index t1ix; 

Index dropped.

Execution Plan
----------------------------------------------------------

Plan hash value: 142045272

-------------------------------------------------------------------------------------
| Id  | Operation		     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	     |	    |	 25 | 27725 |	 35   (0)| 00:00:01 |
|   1 |  NESTED LOOPS		     |	    |	 25 | 27725 |	 35   (0)| 00:00:01 |
|   2 |   NESTED LOOPS		     |	    |	 25 | 27725 |	 35   (0)| 00:00:01 |
|*  3 |    HASH JOIN		     |	    |	 25 |  2575 |	 30   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL	     | T2   |	 10 |	490 |	  3   (0)| 00:00:01 |
|   5 |     TABLE ACCESS FULL	     | T1   | 10000 |	527K|	 27   (0)| 00:00:01 |
|*  6 |    INDEX UNIQUE SCAN	     | T3PK |	  1 |	    |	  0   (0)| 00:00:01 |
|   7 |   TABLE ACCESS BY INDEX ROWID| T3   |	  1 |  1006 |	  1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")
   6 - access("T1"."PK"="T3"."PK")

Here is another example, with a different data set, where there is a change in the row source used for building the hash table.

--query
SELECT t1.c1, t2.c1
FROM t1,t2,t3
WHERE t1.n1 = t2.n1
AND t1.n2 = t2.n2
AND t1.n1 = t3.pk
/

Execution Plan
----------------------------------------------------------
Plan hash value: 2855555999

-------------------------------------------------------------------------------
| Id  | Operation	       | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |  5000K|  1525M|  2766	(1)| 00:00:01 |
|*  1 |  HASH JOIN	       |      |  5000K|  1525M|  2766	(1)| 00:00:01 |
|   2 |   TABLE ACCESS FULL    | T2   |  1000 |   251K|    14	(0)| 00:00:01 |
|*  3 |   HASH JOIN	       |      |  1000K|    59M|  2739	(1)| 00:00:01 |
|   4 |    INDEX FAST FULL SCAN| T3PK |  2000 |  8000 |     3	(0)| 00:00:01 |
|   5 |    TABLE ACCESS FULL   | T1   |  1000K|    55M|  2734	(1)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")
   3 - access("T1"."N1"="T3"."PK")

SQL> drop index t1ix; 

Index dropped.

...

Execution Plan
----------------------------------------------------------
Plan hash value: 1993272941

------------------------------------------------------------------------------
| Id  | Operation	      | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |	 250K|	  76M|	2754   (1)| 00:00:01 |
|*  1 |  HASH JOIN	      |      |	 250K|	  76M|	2754   (1)| 00:00:01 |
|   2 |   INDEX FAST FULL SCAN| T3PK |	2000 |	8000 |	   3   (0)| 00:00:01 |
|*  3 |   HASH JOIN	      |      |	 250K|	  75M|	2750   (1)| 00:00:01 |
|   4 |    TABLE ACCESS FULL  | T2   |	1000 |	 251K|	  14   (0)| 00:00:01 |
|   5 |    TABLE ACCESS FULL  | T1   |	1000K|	  55M|	2734   (1)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T3"."PK")
   3 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

This will not apply in all cases, but be careful before deciding to drop an index. It may seem like a good idea initially, yet it can have a big impact through wrong cardinality estimates, which can then lead to incorrect execution plan decisions, poor performance and unhappy customers.

So, what is the solution?
We don’t want huge indexes to be (DML) maintained if not used, right?

This brings me to the second part, Extended Statistics.

Extended Statistics

Oracle introduced extended statistics in Oracle 11g. Their main purpose is to capture the relationship between the columns of a column group. Extended statistics help the optimizer become aware of the logical relationship between those columns, information it otherwise doesn't have. Extended statistics are represented as virtual (hidden) columns on which statistics are gathered.

When we had the T1IX index, the optimizer used its NDV to calculate the correct cardinality; the index statistics served as column group statistics for the N1 and N2 columns. However, after we dropped the index (even though it was never used), we messed up the optimizer's cardinality estimates.

Extended statistics are maintained by the DBMS_STATS package, function CREATE_EXTENDED_STATS. Let's create extended statistics and check the cardinality estimate.

SQL> select dbms_stats.create_extended_stats(null,'T1','(N1,N2)') from dual
  2  /

DBMS_STATS.CREATE_EXTENDED_STATS(NULL,'T1','(N1,N2)')
------------------------------------------------------------
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS

SQL> set autotrace trace exp
SQL> 
SQL> select t1.c1,t2.c1
  2  from t1,t2
  3  where t1.n1 = t2.n1
  4  and t1.n2 = t2.n2
  5  /

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |    25 |  2475 |    29   (0)| 00:00:01 |
|*  1 |  HASH JOIN	   |	  |    25 |  2475 |    29   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T2   |    10 |   490 |	3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   488K|    26   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

I know, you're thinking "It didn't help" and you're right. The optimizer did not use the column group (extension). If you look a little deeper, in USER_TAB_COL_STATISTICS you will see that there are no statistics for the virtual column; only the extension definition was created.

SQL> SELECT column_name, num_distinct, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'T1'
/

COLUMN_NAME				 NUM_DISTINCT HISTOGRAM
---------------------------------------- ------------ ---------------
N1						   20 NONE
N2						  200 NONE
N3						 1000 NONE
C1						    1 NONE

SQL> col extension_name for a40
SQL> SELECT extension_name, extension
FROM   user_stat_extensions
WHERE  table_name = 'T1'
/

EXTENSION_NAME				 EXTENSION
---------------------------------------- --------------------
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS		 ("N1","N2")
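
Incidentally, the extension does already exist as a hidden virtual column on the table at this point; a quick way to confirm that (an added sketch, not part of the original post):

select column_name, virtual_column, hidden_column
from user_tab_cols
where table_name = 'T1';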

This behavior is expected, because DBMS_STATS.CREATE_EXTENDED_STATS only creates the extension; it won't gather statistics for it. If you check the 10053 event trace file, you will see NO STATISTICS (using defaults).

Column (#5): SYS_STUBZH0IHA7K$KEBJVXO5LOHAS(NUMBER)  NO STATISTICS (using defaults)

In order to fix this, we need to re-gather statistics on the table.

SQL> exec dbms_stats.gather_table_stats(null,'T1',method_opt=>'for all columns size 1');

PL/SQL procedure successfully completed.

SQL> SELECT column_name, num_distinct, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'T1'
/

COLUMN_NAME				 NUM_DISTINCT HISTOGRAM
---------------------------------------- ------------ ---------------
N1						   20 NONE
N2						  200 NONE
N3						 1000 NONE
C1						    1 NONE
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS			  200 NONE


SQL> select t1.c1,t2.c1
from t1,t2
where t1.n1 = t2.n1
and t1.n2 = t2.n2
/

Execution Plan
----------------------------------------------------------
Plan hash value: 2959412835

---------------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |   500 | 49500 |    29   (0)| 00:00:01 |
|*  1 |  HASH JOIN	   |	  |   500 | 49500 |    29   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T2   |    10 |   490 |	3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   488K|    26   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

Now the optimizer calculates the correct cardinality. If you check the trace file more extensively, you should see the same join cardinality information as when we had the index in place; the only difference is that this time it's using the extension statistics.

...
  ColGroup (#1, VC) SYS_STUBZH0IHA7K$KEBJVXO5LOHAS
    Col#: 1 2    CorStregth: 20.00
  ColGroup (#1, Index) T2IX
    Col#: 1 2    CorStregth: 10.00

...

Join selectivity using 1 ColGroups: 0.005000 (sel1 = 0.000000, sel2 = 0.000000)
Join Card:  500.000000 = outer (10.000000) * inner (10000.000000) * sel (0.005000)
Join Card - Rounded: 500 Computed: 500.000000

With extended statistics we were able to help the optimizer calculate a cardinality (CDN) of 500. It used the extension NDV in the same way it had used the T1IX index statistics.

However, what if T1 were a big table and you had already gathered statistics before creating the extension (column group)? Re-gathering can be time-consuming and have a negative impact on performance. To address this, we can use the METHOD_OPT approach of DBMS_STATS.GATHER_TABLE_STATS: creating the extension via METHOD_OPT automatically gathers statistics on the column group at the same time.

SQL>
SQL> SELECT column_name, num_distinct, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'T1'
/  

COLUMN_NAME	     NUM_DISTINCT HISTOGRAM
-------------------- ------------ ---------------
C1				1 NONE
N3			     1000 NONE
N2			      200 NONE
N1			       20 NONE

SQL> exec dbms_stats.gather_table_stats(null,'T1',method_opt=>'for all columns size 1 for columns size 1 (N1,N2)');

PL/SQL procedure successfully completed.

SQL> col column_name for a40
SQL> SELECT column_name, num_distinct, histogram
FROM   user_tab_col_statistics
WHERE  table_name = 'T1'
/

COLUMN_NAME				 NUM_DISTINCT HISTOGRAM
---------------------------------------- ------------ ---------------
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS			  200 NONE
C1						    1 NONE
N3						 1000 NONE
N2						  200 NONE
N1						   20 NONE

In this case, the extension was automatically created and statistics were gathered on it at the same time.

If you find yourself in a situation where you used DBMS_STATS.CREATE_EXTENDED_STATS after gathering statistics, there are multiple ways to gather statistics on already existing extensions (see the short sketch after this list):

1) DBMS_STATS.GATHER_TABLE_STATS
(it will include all virtual columns)

2) METHOD_OPT=>’for columns size … (column group)’
(you can list the actual column group as you do for CREATE_EXTENDED_STATS)

3) METHOD_OPT=>’for columns size … extension_name’
(you can specify the extension name obtained from DBA|ALL|USER_STAT_EXTENSIONS dictionary view)

4) METHOD_OPT=>’for all hidden columns’
(hidden columns are by default included in default DBMS_STATS.GATHER_TABLE_STATS configuration)
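
A short sketch of options 3 and 4, reusing the extension name from the example above (the exact commands are an added illustration, not from the original post):

-- Option 3: reference the extension by its name
exec dbms_stats.gather_table_stats(null,'T1',method_opt=>'for columns size 1 SYS_STUBZH0IHA7K$KEBJVXO5LOHAS');

-- Option 4: gather for all hidden columns (extensions are hidden virtual columns)
exec dbms_stats.gather_table_stats(null,'T1',method_opt=>'for all hidden columns size 1');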

What's the difference between METHOD_OPT and DBMS_STATS.CREATE_EXTENDED_STATS?

I believe it behaves this way because METHOD_OPT is part of a procedure designed for statistics gathering. Extended statistics (column groups) were introduced in 11g as additional METHOD_OPT functionality. Since the procedure's main job is to gather statistics, gathering statistics via METHOD_OPT creates the extension and gathers statistics on it at the same time.
Perhaps Oracle could have used a different name, something like CREATE_COLUMN_GROUP instead of CREATE_EXTENDED_STATS (which is misleading).

Column Correlation (CorStregth)

Column correlation represents how strongly the columns within a given group are related to one another. It's calculated as the product of the NDVs of all columns in the group, divided by the NDV of the column group. This is clearly explained in MultiColumn/Column Group Statistics – Additional Examples (Doc ID 872406.1).

CorStregth = product of individual column NDVs / NDV of the column group

If the product of the individual column NDVs is smaller than the CG NDV, the optimizer will ignore the CG (extended) statistics. Based on the tests I performed, CorStregth is taken into consideration only when the optimizer has to decide between multiple column groups for which a Full Match can be constructed. In the other cases, when a Partial Match is used, the optimizer picked the column group with the smaller NDV.
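
As a quick check of the formula (the numbers come from the statistics shown in the next section; the SELECT itself is just an added illustration):

-- CorStregth = product of individual column NDVs / column group NDV
-- (N1,N2): (20 * 200)   / 200  = 20   -- matches "CorStregth: 20.00" in the trace below
-- (N2,N3): (200 * 1000) / 1000 = 200  -- matches "CorStregth: 200.00" in the trace below
select (20*200)/200 as corstregth_n1_n2, (200*1000)/1000 as corstregth_n2_n3 from dual;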

Here are two different cases; one where Full Match is constructed and another for Partial Match:

Column Group – Full Match

Let’s examine the first case where we can construct Full Match.

COLUMN_NAME		NUM_DISTINCT
---------------------------------------- ------------
N1						   20
N2						  200
N3						 1000
C1						    1
SYS_STUBZH0IHA7K$KEBJVXO5LOHAS			  200 ---> (N1,N2)
SYS_STUXQEJ8WOFTK5R5EYM4DF_60V			 1000 ---> (N2,N3)

select *
from t1
where t1.n1 = 10
and t1.n2 = 40
and t1.n3 = 80
/

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation	  | Name | Rows  | Bytes | Cost (%CPU)| Time	 |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |     5 |   270 |   228   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1	 |     5 |   270 |   228   (1)| 00:00:01 |
--------------------------------------------------------------------------

Trace file
...
Access path analysis for T1
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T1[T1]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#1): N1(NUMBER)
    AvgLen: 3 NDV: 20 Nulls: 0 Density: 0.050000 Min: 0.000000 Max: 19.000000
  Column (#2): N2(NUMBER)
    AvgLen: 4 NDV: 200 Nulls: 0 Density: 0.005000 Min: 0.000000 Max: 199.000000
  Column (#3): N3(NUMBER)
    AvgLen: 4 NDV: 1000 Nulls: 0 Density: 0.001000 Min: 0.000000 Max: 999.000000
  Column (#6): SYS_STUXQEJ8WOFTK5R5EYM4DF_60V(NUMBER)
    AvgLen: 12 NDV: 1000 Nulls: 0 Density: 0.001000
  Column (#5): SYS_STUBZH0IHA7K$KEBJVXO5LOHAS(NUMBER)
    AvgLen: 12 NDV: 200 Nulls: 0 Density: 0.005000
  ColGroup (#1, VC) SYS_STUXQEJ8WOFTK5R5EYM4DF_60V
    Col#: 2 3    CorStregth: 200.00
  ColGroup (#2, VC) SYS_STUBZH0IHA7K$KEBJVXO5LOHAS
    Col#: 1 2    CorStregth: 20.00
  ColGroup Usage:: PredCnt: 3  Matches Full: #1  Partial:  Sel: 0.001000

We have table T1 with two column groups, on (N1,N2) and (N2,N3).
In this case the product of the individual N1, N2, N3 NDVs is much higher than the column group NDVs, hence the optimizer will calculate the cardinality (CDN) based on the extensions. With a selection on (N1,N2,N3), the optimizer can perform a Full Match for both column groups. In this scenario, the cardinality can be expressed as:

1) Using CG #1 (N2,N3): 1/CG(NDV) * 1/N1(NDV) = 1/1000 * 1/20 = 0.001 * 0.05 = 0.00005; 0.00005 * 100000 = 5

2) Using CG #2 (N1,N2): 1/CG(NDV) * 1/N3(NDV) = 1/200 * 1/1000 = 0.005 * 0.001 = 0.000005; 0.000005 * 100000 = 0.5 (rounded to 1)

The trace above shows that the optimizer decided to use CG (N2,N3), i.e. SYS_STUXQEJ8WOFTK5R5EYM4DF_60V, because it has the higher CorStregth of 200 (vs. 20 for CG (N1,N2)).
To show how CorStregth can influence the optimizer, I've changed the N1 distinct value to 210 to increase the CorStregth of CG (N1,N2). This time, column group (N1,N2) was used in the cardinality estimate.

exec dbms_stats.set_column_stats(null,'T1','N1',distcnt=>210);

Execution Plan
----------------------------------------------------------

Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation	   | Name | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |	  |	1 |    11 |   228   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE    |	  |	1 |    11 |	       |	  |
|*  2 |   TABLE ACCESS FULL| T1   |	1 |    11 |   228   (1)| 00:00:01 |
---------------------------------------------------------------------------



--Trace file
...
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T1[T1]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#1): N1(NUMBER)
    AvgLen: 3 NDV: 210 Nulls: 0 Density: 0.004762 Min: 0.000000 Max: 19.000000
  Column (#2): N2(NUMBER)
    AvgLen: 4 NDV: 200 Nulls: 0 Density: 0.005000 Min: 0.000000 Max: 199.000000
  Column (#3): N3(NUMBER)
    AvgLen: 4 NDV: 1000 Nulls: 0 Density: 0.001000 Min: 0.000000 Max: 999.000000
  Column (#6): SYS_STUXQEJ8WOFTK5R5EYM4DF_60V(NUMBER)
    AvgLen: 12 NDV: 1000 Nulls: 0 Density: 0.001000
  Column (#5): SYS_STUBZH0IHA7K$KEBJVXO5LOHAS(NUMBER)
    AvgLen: 12 NDV: 200 Nulls: 0 Density: 0.005000
  ColGroup (#2, VC) SYS_STUBZH0IHA7K$KEBJVXO5LOHAS
    Col#: 1 2    CorStregth: 210.00
  ColGroup (#1, VC) SYS_STUXQEJ8WOFTK5R5EYM4DF_60V
    Col#: 2 3    CorStregth: 200.00
  ColGroup Usage:: PredCnt: 3  Matches Full: #2  Partial:  Sel: 0.005000

We can see that the optimizer chose CG #2 based on CorStregth.

Column Group – Partial Match

This second case shows what happens when Full Match can’t be constructed and the optimizer chooses the column group with smaller NDV. I’ve built the same T1 table, but now with Column Groups (N1,N2,N3) and (N1,N2,C1).

COLUMN_NAME		 NUM_DISTINCT
---------------------------------------- ------------
N1						   20
N2						  200
N3						 1000
C1						    1
SYS_STU2NGW2O#$ZX2CDZYOMNMFP64			  200
SYS_STUOYQUEIAZ7FI9DV53VLN$$$0			 1000


--Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation	  | Name | Rows  | Bytes | Cost (%CPU)| Time	 |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |   500 | 27000 |   228   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1	 |   500 | 27000 |   228   (1)| 00:00:01 |
--------------------------------------------------------------------------



--Trace file
...
Access path analysis for T1
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T1[T1]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#1): N1(NUMBER)
    AvgLen: 3 NDV: 20 Nulls: 0 Density: 0.050000 Min: 0.000000 Max: 19.000000
  Column (#2): N2(NUMBER)
    AvgLen: 4 NDV: 200 Nulls: 0 Density: 0.005000 Min: 0.000000 Max: 199.000000
  Column (#6): SYS_STUOYQUEIAZ7FI9DV53VLN$$$0(NUMBER)
    AvgLen: 12 NDV: 1000 Nulls: 0 Density: 0.001000
  Column (#5): SYS_STU2NGW2O#$ZX2CDZYOMNMFP64(NUMBER)
    AvgLen: 12 NDV: 200 Nulls: 0 Density: 0.005000
  Column (#3): N3(NUMBER)
    AvgLen: 4 NDV: 1000 Nulls: 0 Density: 0.001000 Min: 0.000000 Max: 999.000000
  Column (#4): C1(VARCHAR2)
    AvgLen: 43 NDV: 1 Nulls: 0 Density: 1.000000
  ColGroup (#1, VC) SYS_STUOYQUEIAZ7FI9DV53VLN$$$0
    Col#: 1 2 3    CorStregth: 4000.00
  ColGroup (#2, VC) SYS_STU2NGW2O#$ZX2CDZYOMNMFP64
    Col#: 1 2 4    CorStregth: 20.00
  ColGroup Usage:: PredCnt: 2  Matches Full:  Partial: #2 (1 2 )  Sel: 0.005000
  Table: T1  Alias: T1

Although CG #1 has a much higher CorStregth of 4000 (vs. 20), the optimizer used CG #2, which has the smaller NDV (200 vs. 1000). Let's decrease the NDV of CG #1 to 100 to see whether that will influence the optimizer to use CG #1.

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation	  | Name | Rows  | Bytes | Cost (%CPU)| Time	 |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |  1000 | 50000 |   228   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1	 |  1000 | 50000 |   228   (1)| 00:00:01 |
--------------------------------------------------------------------------



--Trace file
...
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T1[T1]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#1): N1(NUMBER)
    AvgLen: 3 NDV: 20 Nulls: 0 Density: 0.050000 Min: 0.000000 Max: 19.000000
  Column (#2): N2(NUMBER)
    AvgLen: 4 NDV: 200 Nulls: 0 Density: 0.005000 Min: 0.000000 Max: 199.000000
  Column (#6): SYS_STUOYQUEIAZ7FI9DV53VLN$$$0(NUMBER)
    AvgLen: 12 NDV: 100 Nulls: 0 Density: 0.010000
  Column (#5): SYS_STU2NGW2O#$ZX2CDZYOMNMFP64(NUMBER)
    AvgLen: 12 NDV: 200 Nulls: 0 Density: 0.005000
  Column (#3): N3(NUMBER)
    AvgLen: 4 NDV: 1000 Nulls: 0 Density: 0.001000 Min: 0.000000 Max: 999.000000
  Column (#4): C1(VARCHAR2)
    AvgLen: 43 NDV: 1 Nulls: 0 Density: 1.000000
  ColGroup (#1, VC) SYS_STUOYQUEIAZ7FI9DV53VLN$$$0
    Col#: 1 2 3    CorStregth: 40000.00
  ColGroup (#2, VC) SYS_STU2NGW2O#$ZX2CDZYOMNMFP64
    Col#: 1 2 4    CorStregth: 20.00
  ColGroup Usage:: PredCnt: 2  Matches Full:  Partial: #1 (1 2 ) Sel: 0.010000
  Table: T1  Alias: T1

Since CG #1 now has the smaller NDV (100 vs. 200 for CG #2), the optimizer calculated the cardinality (CDN) based on CG #1 as:
1/NDV(CG) * original cardinality = 1/100 * 100000 = 0.01 * 100000 = 1000

Summary

There are situations where dropping a potentially ‘unused’ index can have a negative influence on the optimizer's cardinality estimation. Consider using extended statistics to make the optimizer aware of the logical relationship between columns; the extended statistics are represented as virtual (hidden) columns. Regarding statistics gathering, note that DBMS_STATS.CREATE_EXTENDED_STATS won't gather statistics automatically, so use METHOD_OPT instead. If you have skewed data, you may also want to create proper histograms. When histograms exist on the base columns, you'll also need to create a histogram on the column group for it to be taken into consideration, because Oracle gives priority to histograms. Column group extended statistics can be used to preserve cardinality stability after dropping an index. While this might not be appropriate for all cases, it's a viable option for helping the optimizer arrive at good cardinality estimates.

Lastly, you can use the DBMS_STATS.SEED_COL_USAGE procedure to monitor column usage in filter predicates, join predicates and GROUP BY clauses. A report can then be generated with DBMS_STATS.REPORT_COL_USAGE, and its information used to identify appropriate column groups. I look forward to exploring that in a future post.

Categories: DBA Blogs

Getting ADF Data in a Jet Component (1)

Amis Blog - Tue, 2017-03-07 09:33

Oracle JET has been around for a while, and at the moment we are investigating what it would take to integrate JET with our existing ADF application. In the current ADF application we want to build a dashboard in JET; however, we still need to know which project to get the data for. Therefore I am researching how to get data from our ADF application into the JET part. In this blog I will show you a quick and easy way to get your ADF BC data into your JET components without using REST services.

I used the blog of Andrejus1 to set up JET within my ADF Application.

Add the JET libraries to the public_html folder of the ViewController project.

(Final) Structure of the project:

Make a JSF page and use af:resource to include the CSS and RequireJS, and add main.js:

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE html>
<f:view xmlns:f="http://java.sun.com/jsf/core" xmlns:af="http://xmlns.oracle.com/adf/faces/rich" xmlns:dvt="http://xmlns.oracle.com/dss/adf/faces" xmlns:ui="http://java.sun.com/jsf/facelets">
    <af:document title="main.jsf" id="d1">
        <af:messages id="m1"/>
        <af:resource type="css" source="jet/css/alta/2.2.0/web/alta.min.css"/>
        <af:resource type="javascript" source="jet/js/libs/require/require.js"/>
        <script>
          require.config({
              baseUrl: "jet/js"
          });

          require(["main"]);
        </script>
        <af:form id="f1">
        
        </af:form>
    </af:document>
</f:view>

Then I added my composite component folder to the js folder of JET. My component is named amis-person and will show the name of the person in capital letters and the email address within a blue box. You can read more about composite components in the blog series of Duncan2.

Put the metadata directly in loader.js instead of loading it via a JSON file, otherwise it will not work. When you do it via the .json file and console.log the metadata in the function, you will see it does not print out the metadata from the .json file.


define(['ojs/ojcore',
        './amis-person',
        'text!./amis-person.html',
        'css!./amis-person',
        'ojs/ojcomposite'],
  function (oj, ComponentModel, view, css) {
    'use strict';
    var metadata = '{ "properties": { "amisPersonName": { "type": "string"}, "amisPersonEmail": { "type": "string"}} }';
    oj.Composite.register('amis-person',
      {
        metadata: { inline: JSON.parse(metadata) },
        viewModel: { inline: ComponentModel },
        view: { inline: view },
        css: { inline: css }
      });
  });

Import the component in main.js to make it available.

require(['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'jet-composites/amis-person/loader'],
  function (oj, ko, $) {
    function ViewModel() {
      var self = this;
    }
    ko.applyBindings(new ViewModel(), document.body);
  });

Create a page fragment where you will put the HTML to show your component; in this case it is just the composite component.

<?xml version='1.0' encoding='UTF-8'?>
  <ui:composition xmlns:ui="http://java.sun.com/jsf/facelets">
    <amis-person amis-person-name="NAME" amis-person-email="EMAIL" />
  </ui:composition>

In the JSF page, create an iterator for the view model and put the page fragment within the iterator:

 <af:iterator id="iterator" value="#{bindings.EmployeesVO.collectionModel}" var="item">
    <ui:include src="/fragments/amis-person-fragment.jsff"/>
 </af:iterator>

Change the bindings in the page fragment to match the output of the iterator

 <amis-person amis-person-name="#{item.bindings.FirstName.inputValue}" amis-person-email="#{item.bindings.Email.inputValue}" />

That's it, you are done. When I now run the project, I see the data from the Employee view model in the JET component I made:

 

Sources
  1. http://andrejusb.blogspot.nl/2015/12/improved-jet-rendering-in-adf.html
  2. https://blogs.oracle.com/groundside/entry/jet_composite_components_i_backgrounder (and the other blogs)
  3. ADFJetDemo Application or Github
Versions used

JDeveloper 12.1.3,
OracleJet V2.2.0

Disclaimer

The information is based on my personal research. At the moment, Oracle does not support or encourage integrating ADF and Jet. Oracle is working on JET Composite Components in ADF.

There is also a second part on how to do this, but with DVTs.

The post Getting ADF Data in a Jet Component (1) appeared first on AMIS Oracle and Java Blog.

Oracle Brings Together Customer Experience Industry Leaders at Modern Customer Experience 2017

Oracle Press Releases - Tue, 2017-03-07 07:00
Press Release
Oracle Brings Together Customer Experience Industry Leaders at Modern Customer Experience 2017
Event for marketing, commerce, sales and service professionals provides insights necessary to meet the expectations of modern consumers and drive meaningful business results.

Redwood Shores, Calif.—Mar 7, 2017

To help customer experience professionals drive sustainable differentiation and deliver predictable and tangible business results, Oracle will host Modern Customer Experience 2017 in Las Vegas, Nevada, April 25-27. Held at the Mandalay Bay Resort, Modern Customer Experience 2017 helps attendees understand today’s empowered customers and implement modern best practices in marketing, sales, service and commerce. The event provides insights into real-world business transformation initiatives and demonstrates innovations that push the boundaries of traditional customer experience management approaches.

Empowered customers are accelerating change in all industries and forcing organizations to rethink traditional business models. Modern Customer Experience 2017 helps business leaders from across marketing, sales and customer service successfully manage these changes and leverage new digital technologies to lead successful business transformation within their organizations. To personalize the experience for attendees, Modern Customer Experience 2017 includes dedicated marketing, sales, service and commerce sessions that feature inspiring business transformation stories, hands-on workshops, small breakout sessions, product demonstrations and networking opportunities.

  • Modern Marketing Experience: Helps attendees create innovative marketing strategies to drive revenue. Sessions provide insights from experts and end-users into data-driven marketing, marketing automation, cross-channel marketing, mobile marketing, and account-based marketing. Modern Marketing Experience also includes the 2017 Markie Awards, which showcase examples of excellence in modern marketing, scheduled for Tuesday, April 25th at 7:00 p.m. PT.
  • Modern Sales Experience: Helps attendees embrace modern sales best practices to drive profitability in their organizations. Sessions provide insights from thought leaders and industry insiders into modern sales and business transformation, the future direction of the sales ecosystem, and how to build a unified sales and service approach.
  • Modern Service Experience: Provide attendees with insights into next generation customer service that help businesses create sustainable differentiation. Sessions include best practices to support organic channels and knowledge-driven, personalized customer service strategies as well as guidance on how to deliver pro-active, self-solving service powered by the Internet of Things (IoT), robotics and artificial intelligence.
  • Modern Commerce Experience: Helps attendees meet the demand for a seamless customer experience and drive digital revenues. Sessions provide actionable insights into the shift to mobile, omni-channel experience strategies, innovations in payments and the key components and drivers of customer loyalty.

“Innovations in social, mobile and cloud have fundamentally changed the way consumers interact with brands and this is just the beginning as a new wave of technologies such as artificial intelligence and the Internet of Things reach the mainstream,” said Des Cahill, vice president and head CX Evangelist, Oracle. “These changes are creating evermore empowered customers that are forcing organizations to rethink traditional approaches to customer experience management. That’s why we’re bringing together over 3,000 customer experience professionals to discuss the latest CX innovations and best practices at Modern Customer Experience 2017. It will be a great three days packed with inspiration, innovation and impact.”

Interested in attending Modern Customer Experience 2017? Register here.

For more information on Oracle Customer Experience solutions, visit the Oracle Customer Experience blog, like us on Facebook and follow us on Twitter.

Contact Info
Simon Jones
PR for Oracle
+1.415.856.5155
sjones@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Simon Jones

  • +1.415.856.5155

Partner Webcast – Enabling Oracle Database High Availability and Disaster Recovery with Oracle Cloud

The Database High Availability and Disaster Recovery needs of customers traditionally have required significant capital investment in the infrastructure that provides the redundant capabilities that...

We share our skills to maximize your revenue!
Categories: DBA Blogs

E-Business Suite 12.2 Certified with Oracle VM 3

Steven Chan - Tue, 2017-03-07 02:06

Oracle VM 3.4 is the latest release of Oracle's server virtualization and management solution, released as Oracle VM Server for x86 and Oracle VM Manager. Oracle VM 3.x is certified with E-Business Suite 12.2.x. This certification covers all current and future Oracle VM 3 updates, including subsequent 3.x releases.

Oracle VM 3 was previously certified with Oracle E-Business Suite 11i, 12.0, and 12.1. 

Oracle VM 3 Manager screenshot

Oracle VM 3 supports a number of guest operating systems including various flavors and versions of Linux, Solaris and Windows. For information regarding certified platforms, installation and upgrade guidance and prerequisite requirements please refer to the Certifications tab on My Oracle Support as well as the following documentation:

We offer a variety of Oracle VM Templates, Oracle VM Virtual Appliances, and Oracle VM Virtual Assemblies for E-Business Suite 12.2.  You can download these here:

You can search for "e-business vm" to find the latest VM downloads for Oracle E-Business Suite 12.2.

Related Articles

Categories: APPS Blogs

Statistics In-Database Archiving

Tom Kyte - Tue, 2017-03-07 01:06
Hi, I am curious about the table statistics if I use In-Database Archiving. Are the statistics calculated including or excluding invisible rows? Best regards, Mathias
Categories: DBA Blogs

counting the appearance of a certain word in a document

Tom Kyte - Tue, 2017-03-07 01:06
Hello, I am trying to count the appearance of a certain word into a document (stored as BFILE in database). Table definition looks like this: CREATE TABLE MY_DOC (ID NUMBER PRIMARY KEY, FILE_NAME VARCHAR2(255), UPLOAD_DATE VARCH...
Categories: DBA Blogs

Modify non partitioned table to partitioned online

Tom Kyte - Tue, 2017-03-07 01:06
Hi Tom, I am trying to convert a non partitioned table into interval partitioned table online.I created a simple test table and ran the below script but I get ORA-14006 invalid partition name error. My conjecture would be this is not allowed for i...
Categories: DBA Blogs

Create Partition on load table 11g

Tom Kyte - Tue, 2017-03-07 01:06
Hi Connor/Chris, Please have a look at below table structure This is load table and there is no uniqueness related to any column, hence we cannot go for Primary key for this. <code> tb_card_details --------------- card_id start_date end...
Categories: DBA Blogs

Data guard

Tom Kyte - Tue, 2017-03-07 01:06
Hi AskTOM team, -- On standby database I am using real time redo apply. My standby database is started using following commands. STARTUP MOUNT; ALTER DATABASE OPEN READ ONLY; ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFI...
Categories: DBA Blogs

Re: parameterized views revisited

Tom Kyte - Tue, 2017-03-07 01:06
Hi Connor, Thank you for your response to my question. I have a follow up question about the consolidated view. I did the following suggestion: <code> create or replace 2 view V as 3 select 'T1' tname, t1.* from t1 4 union all 5...
Categories: DBA Blogs

Exadata snapshot access limitation

Tom Kyte - Tue, 2017-03-07 01:06
we have Exadata and we've started to implement snapshots. the business case required: we create a snapshots from the same test master, and grant access on each two snapshot for a specific group of users while preventing them to access other snapsho...
Categories: DBA Blogs

Getting started with Oracle PaaS Service Manager Command Line Interface (PSM)

Amis Blog - Mon, 2017-03-06 23:41

Oracle PaaS Service Manager (PSM) provides a command line interface (CLI) with which you can manage the lifecycle of various services in the Oracle Public Cloud. This opens the door for scripting (recurring) tasks, from (re)deployment of applications on ACCS to provisioning new environments. PSM makes performing admin operations on the Oracle Public Cloud a lot easier and more efficient compared to using the GUI.

Note that the CLI is a thin wrapper over PaaS REST APIs that invokes these APIs to support common PaaS features.

The steps for installing and configuring PSM are simple enough – and take about 6 minutes. I will briefly walk you through them. They are also documented just fine.  Before I continue, I want to thank Abhijit Ramchandra Jere of Oracle for graciously helping me out with PSM.

1. Install Python (3.3+) and cURL

PSM is a Python based tool. To set it up and run it, you need to have Python set up on your machine.

2. Download PSM

The psmcli.zip can be downloaded from Cloud UI (as described here) or it can be fetched through cURL from the REST API (as described here):

curl -I -X GET -u "lucas.jellema:password" -H "X-ID-TENANT-NAME: cloud17" -H "Accept: application/json" https://psm.us.oraclecloud.com/paas/api/v1.1/instancemgmt/cloud17/services/cs/instances

3. Install PSM as a Python Package

With a simple statement, PSM is installed from the downloaded zip file (see here)

pip install -U psmcli.zip

image

This installs PSM into the Python Scripts directory: image

Verify the result using

pip show psmcli

image

On Linux:

image

 

4. Configure PSM for the identity domain and the connection to the cloud

Run the setup for PSM and configure it for use with your identity domain (see docs). Note: this step assumes that the Python scripts directory that contains PSM is in the PATH environment variable.

psm setup

image

I am not sure whether and how you can use PSM on one machine for multiple identity domains or user accounts. I have access to several Oracle Public Cloud identity domains, even in different data centers, and have now set up PSM for one of them. Whether I can do a setup for a second identity domain and then somehow switch between the two was not clear to me at first.
EDIT: switching to a different identity domain is simply done by running psm setup again. I need to provide the identity domain, region and credentials to make the switch. Note: psm remembers the setup across restarts of the operating system.

5. Start using PSM for inspecting and manipulating PaaS Services

PSM can be used with many PaaS Services – not yet all – for inspecting their health, stopping and (re)starting, scaling and performing many administrative activities. See docs for all of them.

Some examples:

List all applications on Application Container Cloud:

psm accs apps

image

List log details for a specific application on ACCS:

psm accs log -n|--app-name name -i|--instance-name name

psm accs log -n Artist-Enricher-API -i web.1

and the list of log files is presented

image

 

6. Update PSM

To get rid of the slightly annoying message about there being a later version of PSM available, and to get hold of the latest version, you simply type:

psm update

and wait for maybe 15 seconds.

image

 

Issues:

I ran into an issue caused, as it turned out, by having multiple Python versions on my machine. PSM got installed as a Python package under version 3.5, while I was trying to run PSM with Python 3.6 as the first version in my PATH environment variable. Clearly, that failed.

The error I ran into: ModuleNotFoundError: No module named ‘opaascli’

image

The solution: I removed all but one Python version (3.5, because with 3.6 the installation did not go well due to a missing pip) and then installed with that one version.

Resources

Documentation on PSM: http://docs.oracle.com/en/cloud/paas/java-cloud/pscli/abouit-paas-service-manager-command-line-interface.html

Documentation on Oracle PaaS REST APIs: https://apicatalog.oraclecloud.com/ui/

The post Getting started with Oracle PaaS Service Manager Command Line Interface (PSM) appeared first on AMIS Oracle and Java Blog.

Spring Boot Actuator support added to Pivotal Web Services (PWS) Application Manager

Pas Apicella - Mon, 2017-03-06 17:07
Recently we added "Spring Boot Actuator support" to Pivotal Web Services (PWS) http://run.pivotal.io. If you want to try this out, simply use the demo below, which is all set up to verify how this works.

https://github.com/papicella/SpringBootPCFPas

Once pushed you will see a Spring Boot icon in the Application Manager UI showing the Actuator support as per below.








Categories: Fusion Middleware

Misleading wait event names clarified in V$EVENT_NAME

Yann Neuhaus - Mon, 2017-03-06 14:15

The Oracle wait event names were originally implemented for the Oracle RDBMS developers and are now used by database users to troubleshoot performance issues. The consequence is that the names may be misleading because they have a meaning from the internal point of view. Here is some clarification about them.

In 12c the clarification is easy because we have a new DISPLAY_NAME column in the V$EVENT_NAME view:


SQL> select wait_class,name, display_name from v$event_name where display_name != name order by 1,2;
WAIT_CLASS     NAME                                DISPLAY_NAME
-------------- ----------------------------------- ----------------------------------------------
Administrative concurrent I/O completion           online move datafile IO completion
Administrative datafile copy range completion      online move datafile copy range completion
Administrative wait for possible quiesce finish    quiesce database completion
Commit         log file sync                       commit: log file sync
Configuration  log buffer space                    log buffer full - LGWR bottleneck
Idle           LGWR real time apply sync           standby apply advance notification
Other          DFS db file lock                    quiesce for datafile offline
Other          Image redo gen delay                redo resource management
Other          datafile move cleanup during resize online move datafile resize cleanup
System I/O     control file sequential read        control file read
System I/O     control file single write           control file write
System I/O     db file parallel write              db list of blocks write
System I/O     log file parallel write             log file redo write
System I/O     log file sequential read            log file multiblock read
System I/O     log file single write               log file header write
User I/O       db file parallel read               db list of blocks read
User I/O       db file scattered read              db multiblock read
User I/O       db file sequential read             db single block read
User I/O       db file single write                db single block write

We have long known the misleading ‘db file sequential read’, which is what we call ‘random reads’ from the storage point of view, and ‘db file scattered read’, which is what we call ‘sequential reads’ from the storage point of view. The DISPLAY_NAME clarifies everything: single block reads vs. multiblock reads.

‘db file parallel read’ is a batch of random reads, used by prefetching for example, which reads multiple non-contiguous blocks.
‘db file parallel write’ is similar, for DBWR writing a batch of blocks. The DISPLAY_NAME clarifies it: ‘db list of blocks’.

‘log file parallel write’ is ‘parallel’ only because you can have multiplexed files. DISPLAY_NAME is less misleading with ‘log file redo write’.
The ‘log buffer space’ has a DISPLAY_NAME that is more focused on the cause: ‘log buffer full – LGWR bottleneck’

You can look at the others where the DISPLAY_NAME is very clear about the operation: ‘online move’ for some operations on files, ‘commit’ for the well-known log file sync…
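
If you want to see the clarified names next to the sessions currently waiting, here is a small added sketch (assuming the standard V$SESSION columns; it is not part of the original post):

SQL> select s.sid, s.event, n.display_name, s.wait_class from v$session s join v$event_name n on n.name = s.event where s.wait_class != 'Idle';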

Of course they are also described in the Database Reference documentation.

 

The post Misleading wait event names clarified in V$EVENT_NAME appeared first on Blog dbi services.

Pages

Subscribe to Oracle FAQ aggregator