
Feed aggregator

Oracle ASM 12c: New Features

Jason Arneil - Mon, 2014-04-07 04:44

Last week I was lucky enough to be presenting at the UKOUG AIM SIG. There was a decent enough crowd in attendance and there were some really interesting talks and some really good speakers. In particular I found Chris Lawless an engaging speaker on replication, and Dave Webster really held the audience’s attention late in the day.

I was giving a presentation on the new features available to you with 12c ASM. The presentation is below. What you don’t get from the ppt, though, is the various demos I did, and in particular seeing Flex ASM in action on my 4-node 12c RAC demo cluster.

I should confess, the above isn’t quite what I presented, as I did pictures instead of text for the new features.

For the clearest understanding, you probably want to download the ppt and actually read the notes attached to each slide.
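If you want to poke at Flex ASM on a cluster of your own, the standard 12c srvctl commands below give a quick view of it (a minimal sketch; run as the grid infrastructure owner, and the count value is just an example):

# Show whether ASM is configured as Flex ASM and its instance count
srvctl config asm

# Show which cluster nodes are currently running an ASM instance
srvctl status asm -detail

# Adjust the ASM cardinality, i.e. how many instances the cluster runs
srvctl modify asm -count 3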


Password Change Sample

Anthony Shorten - Sun, 2014-04-06 20:27

In the Technical Best Practices whitepaper (Doc Id: 560367.1), available from My Oracle Support, there is a section (Password Management Solution for Oracle WebLogic) that mentions a sample password change JSP that used to be provided by BEA for WebLogic. That site is no longer available, but the sample code is now available on this blog.

Now, this is an example only and is very generic. It is not a drop-and-install feature that you can place in your installation, but the example is sufficient to give an idea of the Oracle WebLogic API available for changing your password. It is meant to allow you to develop a CM JSP if you require this feature.

There is NO support for this, as it is sample code only. It is merely an example of the API available. The link to the code is here. Examine it to get ideas for your own solutions.

The API used will most probably work for any security system that is configured as an authentication security provider.
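For illustration only, here is a minimal sketch of driving that kind of password change over JMX. The host, port, credentials and user names are placeholder assumptions, and the MBean name follows the usual convention for the default authenticator of realm "myrealm"; the changeUserPassword operation is the one exposed by authentication providers that support password editing. Adapt everything to your own domain.

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class ChangeWlsPassword {
    public static void main(String[] args) throws Exception {
        // Admin server host/port and credentials are placeholder assumptions
        JMXServiceURL url = new JMXServiceURL("t3", "adminhost", 7001,
                "/jndi/weblogic.management.mbeanservers.runtime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Security provider MBeans live in the "Security" JMX domain; the
            // default authenticator of realm "myrealm" conventionally shows up as:
            ObjectName authenticator =
                    new ObjectName("Security:Name=myrealmDefaultAuthenticator");
            // UserPasswordEditorMBean.changeUserPassword(user, old, new)
            conn.invoke(authenticator, "changeUserPassword",
                    new Object[] {"someuser", "oldPassword1", "newPassword1"},
                    new String[] {"java.lang.String", "java.lang.String",
                                  "java.lang.String"});
        } finally {
            connector.close();
        }
    }
}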

Private Cloud Planning Guide available for Oracle Utilities

Anthony Shorten - Sun, 2014-04-06 17:56

Oracle Utilities Application Framework based applications can be housed in private cloud infrastructure, either onsite or as a partner offering. Oracle provides a Private Cloud foundation set of software that can be used to house Oracle Utilities software. To aid in planning the installation of Oracle Utilities Application Framework based products on a private cloud, a whitepaper has been developed and published.

The Private Cloud Planning Guide (Doc Id: 1308165.1), which is available from My Oracle Support, provides an architecture and software manifest for implementing a fully functional private cloud offering onsite or via a partner. It refers to other documentation to install and configure specific components of a private cloud solution.

Creating Users in Oracle Internet Directory (OID)

Online Apps DBA - Sun, 2014-04-06 15:03
This post covers creating users in OID using ODSM; this OID user will be used as the admin user for OAM-OID integration in our Oracle Access Manager (OAM) 11gR2 Admin Training (training starts on 3rd May and the fee is 699 USD). For part I of OID/OVD installation click here and for part II click here. In this exercise, we use Oracle Directory [...]
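For those who prefer doing this outside ODSM, the same kind of user entry can be created programmatically. Below is a hedged JNDI sketch; the host, port, bind DN, password, user name and target DN are all placeholder assumptions to adapt to your own directory.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class CreateOidUser {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://oidhost:3060"); // placeholder host/port
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=orcladmin");  // OID super user
        env.put(Context.SECURITY_CREDENTIALS, "password");    // placeholder

        DirContext ctx = new InitialDirContext(env);
        try {
            Attributes attrs = new BasicAttributes(true);
            Attribute oc = new BasicAttribute("objectclass");
            oc.add("top");
            oc.add("person");
            oc.add("organizationalperson");
            oc.add("inetorgperson");
            attrs.put(oc);
            attrs.put("cn", "oamadmin");
            attrs.put("sn", "oamadmin");
            attrs.put("uid", "oamadmin");
            attrs.put("userpassword", "Welcome1");
            // Target DN under the users container is a placeholder assumption
            ctx.createSubcontext("cn=oamadmin,cn=Users,dc=example,dc=com", attrs);
        } finally {
            ctx.close();
        }
    }
}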

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

Analysing Parallel Execution Skew - Without Diagnostics / Tuning Pack License

Randolf Geist - Sun, 2014-04-06 14:27
This is the third part of the video tutorial "Analysing Parallel Execution Skew". In this part I show how to analyse a parallel SQL execution for Parallel Execution Skew.

If you don't have a Diagnostics / Tuning Pack license, the options for doing that are quite limited, and the approach demonstrated in the tutorial has several limitations and shortcomings.

Here is the video:



If you want to reproduce or play around with the examples shown in the tutorial, here is the script for creating the tables and running the queries / DML commands used in the tutorial. A shout-out goes to Christo Kutrovsky at Pythian, who I think was the one who inspired the beautified version of the V$PQ_TQSTAT query.

---------------------
-- Links for S-ASH --
---------------------
--
-- http://www.perfvision.com/ash.php
-- http://www.pythian.com/blog/trying-out-s-ash/
-- http://sourceforge.net/projects/orasash/files/v2.3/
-- http://sourceforge.net/projects/ashv/
---------------------

-- Table creation
set echo on timing on time on

drop table t_1;

purge table t_1;

drop table t_2;

purge table t_2;

drop table t_1_part;

purge table t_1_part;

drop table t_2_part;

purge table t_2_part;

drop table t1;

purge table t1;

drop table t2;

purge table t2;

drop table t3;

purge table t3;

drop table t4;

purge table t4;

drop table t5;

purge table t5;

drop table x;

purge table x;

create table t1
as
select /*+ use_nl(a b) */
(rownum * 2) as id
, rownum as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000) */ * from dual
connect by
level <= 1000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't1')

alter table t1 cache;

create table t2
compress
as
select
(rownum * 2) + 1 as id
, mod(rownum, 2000) + 1 as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000000) */ * from dual
connect by
level <= 1000000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't2')

alter table t2 cache;

create table t3
as
select /*+ use_nl(a b) */
(rownum * 2) as id
, rownum as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000) */ * from dual
connect by
level <= 1000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't3')

alter table t3 cache;

create table t4
compress
as
select
(rownum * 2) + 1 as id
, mod(rownum, 2000) + 1 as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000000) */ * from dual
connect by
level <= 1000000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't4')

alter table t4 cache;

create table t5
as
select /*+ use_nl(a b) */
(rownum * 2) as id
, rownum as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000) */ * from dual
connect by
level <= 1000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't5')

alter table t5 cache;

create table x
compress
as
select * from t2
where 1 = 2;

create unique index x_idx1 on x (id);

alter table t1 parallel 2;

alter table t2 parallel 2;

alter table t3 parallel 15;

alter table t4 parallel 15;

alter table t5 parallel 15;

create table t_1
compress
as
select /*+ use_nl(a b) */
rownum as id
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_1')
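-- Note: fk_id_skew in t_2 below is deliberately skewed - rows 1 to 5e5
-- get near-distinct values, the remaining 1.5 million rows all get 1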

create table t_2
compress
as
select
rownum as id
, case when rownum <= 5e5 then mod(rownum, 2e6) + 1 else 1 end as fk_id_skew
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_2', method_opt=>'for all columns size 1', no_invalidate=>false)

alter table t_1 parallel 8 cache;

alter table t_2 parallel 8 cache;

create table t_1_part
partition by hash(id) partitions 8
compress
as
select /*+ use_nl(a b) */
rownum as id
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_1_part')

create table t_2_part
partition by hash(fk_id_skew) partitions 8
compress
as
select
rownum as id
, case when rownum <= 5e5 then mod(rownum, 2e6) + 1 else 1 end as fk_id_skew
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_2_part', method_opt=>'for all columns size 1', no_invalidate=>false)

alter table t_1_part parallel 8 cache;

alter table t_2_part parallel 8 cache;

---------------------------------------------------------------
-- Single DFO tree (with Parallel Execution Skew), many DFOs --
---------------------------------------------------------------

set echo on timing on time on verify on

define num_cpu = "14"

alter session set workarea_size_policy = manual;

alter session set sort_area_size = 200000000;

alter session set sort_area_size = 200000000;

alter session set hash_area_size = 200000000;

alter session set hash_area_size = 200000000;

select
max(t1_id)
, max(t1_filler)
, max(t2_id)
, max(t2_filler)
, max(t3_id)
, max(t3_filler)
from (
select /*+ monitor
no_merge
no_merge(v_1)
no_merge(v_5)
parallel(t1 &num_cpu)
PQ_DISTRIBUTE(T1 HASH HASH)
PQ_DISTRIBUTE(V_5 HASH HASH)
leading (v_1 v_5 t1)
use_hash(v_1 v_5 t1)
swap_join_inputs(t1)
*/
t1.id as t1_id
, regexp_replace(v_5.t3_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as t1_filler
, v_5.*
from (
select /*+ parallel(t2 &num_cpu)
parallel(t3 &num_cpu)
leading(t3 t2)
use_hash(t3 t2)
swap_join_inputs(t2)
PQ_DISTRIBUTE(T2 HASH HASH)
*/
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id2 = t2.id2 (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
) v_1
, (
select /*+ parallel(t2 &num_cpu)
parallel(t3 &num_cpu)
leading(t3 t2)
use_hash(t3 t2)
swap_join_inputs(t2)
PQ_DISTRIBUTE(T2 HASH HASH)
*/
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id = t2.id (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
) v_5
, t1
where
v_1.t3_id = v_5.t3_id
and v_5.t2_id2 = t1.id2 (+) + 2001
and regexp_replace(v_5.t3_filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t1.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
)
;

break on dfo_number nodup on tq_id nodup on server_type skip 1 nodup on instance nodup

-- compute sum label Total of num_rows on server_type
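-- Rows per PX server from V$PQ_TQSTAT: an uneven "%" / graph distribution
-- within the same TQ_ID and server type indicates Parallel Execution Skew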

select
/*dfo_number
, */tq_id
, cast(server_type as varchar2(10)) as server_type
, instance
, cast(process as varchar2(8)) as process
, num_rows
, round(ratio_to_report(num_rows) over (partition by dfo_number, tq_id, server_type) * 100) as "%"
, cast(rpad('#', round(num_rows * 10 / nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type), 0)), '#') as varchar2(10)) as graph
, round(bytes / 1024 / 1024) as mb
, round(bytes / nullif(num_rows, 0)) as "bytes/row"
from
v$pq_tqstat
order by
dfo_number
, tq_id
, server_type desc
, instance
, process
;

---------------------------------------------------------------------------------------------------
-- Same statement with Parallel TEMP TABLE TRANSFORMATION, V$PQ_TQSTAT shows useless information --
---------------------------------------------------------------------------------------------------

set echo on timing on time on verify on

define num_cpu = "14"

alter session set workarea_size_policy = manual;

alter session set sort_area_size = 200000000;

alter session set sort_area_size = 200000000;

alter session set hash_area_size = 200000000;

alter session set hash_area_size = 200000000;

with result as
(
select /*+ materialize
monitor
no_merge
no_merge(v_1)
no_merge(v_5)
parallel(t1 &num_cpu)
PQ_DISTRIBUTE(T1 HASH HASH)
PQ_DISTRIBUTE(V_1 HASH HASH)
PQ_DISTRIBUTE(V_5 HASH HASH)
leading (v_1 v_5 t1)
use_hash(v_1 v_5 t1)
swap_join_inputs(t1)
*/
t1.id as t1_id
, regexp_replace(v_5.t3_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as t1_filler
, v_5.*
from (
select /*+ parallel(t2 &num_cpu) parallel(t3 &num_cpu) leading(t3 t2) use_hash(t3 t2) swap_join_inputs(t2) PQ_DISTRIBUTE(T2 HASH HASH) */
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id2 = t2.id2 (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
)
v_1
, (
select /*+ parallel(t2 &num_cpu) parallel(t3 &num_cpu) leading(t3 t2) use_hash(t3 t2) swap_join_inputs(t2) PQ_DISTRIBUTE(T2 HASH HASH) */
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id = t2.id (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
) v_5
, t1
where
v_1.t3_id = v_5.t3_id
and v_5.t2_id2 = t1.id2 (+) + 2001
and regexp_replace(v_5.t3_filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t1.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
)
select max(t1_id), max(t1_filler), max(t2_id), max(t2_filler), max(t3_id), max(t3_filler) from
result;

break on dfo_number nodup on tq_id nodup on server_type skip 1 nodup on instance nodup

-- compute sum label Total of num_rows on server_type

select
/*dfo_number
, */tq_id
, cast(server_type as varchar2(10)) as server_type
, instance
, cast(process as varchar2(8)) as process
, num_rows
, round(ratio_to_report(num_rows) over (partition by dfo_number, tq_id, server_type) * 100) as "%"
, cast(rpad('#', round(num_rows * 10 / nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type), 0)), '#') as varchar2(10)) as graph
, round(bytes / 1024 / 1024) as mb
, round(bytes / nullif(num_rows, 0)) as "bytes/row"
from
v$pq_tqstat
order by
dfo_number
, tq_id
, server_type desc
, instance
, process
;

--------------------------------------------------------------------------------------------------
-- This construct results in misleading information from V$PQ_TQSTAT (actually a complete mess) --
--------------------------------------------------------------------------------------------------

set echo on timing on time on

alter session enable parallel dml;

truncate table x;

insert /*+ append parallel(x 4) */ into x
select /*+ leading(v1 v2) optimizer_features_enable('11.2.0.1') */
v_1.id
, v_1.id2
, v_1.filler
from (
select
id
, id2
, filler
from (
select /*+ parallel(t2 4) no_merge */
rownum as id
, t2.id2
, t2.filler
from
t2
where
mod(t2.id2, 3) = 0
and regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) v1
) v_1
, (
select
id
, id2
, filler
from (
select /*+ parallel(t2 8) no_merge */
rownum as id
, t2.id2
, t2.filler
from
t2
where
mod(t2.id2, 3) = 0
and regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) v2
) v_2
where
v_1.id = v_2.id
and v_1.filler = v_2.filler
;

-- Parallel DML requires a COMMIT before querying V$PQ_TQSTAT
commit;

break on dfo_number nodup on tq_id nodup on server_type skip 1 nodup on instance nodup

compute sum label Total of num_rows on server_type

select
dfo_number
, tq_id
, cast(server_type as varchar2(10)) as server_type
, instance
, cast(process as varchar2(8)) as process
, num_rows
, round(ratio_to_report(num_rows) over (partition by dfo_number, tq_id, server_type) * 100) as "%"
, cast(rpad('#', round(num_rows * 10 / nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type), 0)), '#') as varchar2(10)) as graph
, round(bytes / 1024 / 1024) as mb
, round(bytes / nullif(num_rows, 0)) as "bytes/row"
from
v$pq_tqstat
order by
dfo_number
, tq_id
, server_type desc
, instance
, process
;

----------------------------------------------------------------------
-- Single DFO tree (with Parallel Execution Skew, almost no impact) --
----------------------------------------------------------------------

set echo on timing on time on

alter session set workarea_size_policy = manual;

alter session set sort_area_size = 500000000;

alter session set sort_area_size = 500000000;

alter session set hash_area_size = 500000000;

alter session set hash_area_size = 500000000;

select /*+ leading(v1)
use_hash(t_1)
no_swap_join_inputs(t_1)
pq_distribute(t_1 hash hash)
*/
max(t_1.filler)
, max(v1.t_1_filler)
, max(v1.t_2_filler)
from
t_1
, (
select /*+ no_merge
leading(t_1 t_2)
use_hash(t_2)
no_swap_join_inputs(t_2)
pq_distribute(t_2 hash hash) */
t_1.id as t_1_id
, t_1.filler as t_1_filler
, t_2.id as t_2_id
, t_2.filler as t_2_filler
from t_1
, t_2
where
t_2.fk_id_skew = t_1.id
) v1
where
v1.t_2_id = t_1.id
and regexp_replace(v1.t_2_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t_1.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and regexp_replace(v1.t_2_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') >= regexp_replace(t_1.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
;

break on dfo_number nodup on tq_id nodup on server_type skip 1 nodup on instance nodup

-- compute sum label Total of num_rows on server_type

select
/*dfo_number
, */tq_id
, cast(server_type as varchar2(10)) as server_type
, instance
, cast(process as varchar2(8)) as process
, num_rows
, round(ratio_to_report(num_rows) over (partition by dfo_number, tq_id, server_type) * 100) as "%"
, cast(rpad('#', round(num_rows * 10 / nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type), 0)), '#') as varchar2(10)) as graph
, round(bytes / 1024 / 1024) as mb
, round(bytes / nullif(num_rows, 0)) as "bytes/row"
from
v$pq_tqstat
order by
dfo_number
, tq_id
, server_type desc
, instance
, process
;

--------------------------------------------------------------------------------------------------------------------------------
-- Full Partition Wise Join with partition skew - V$PQ_TQSTAT is of no help, since no redistribution takes place (single DFO) --
--------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

alter session set workarea_size_policy = manual;

alter session set sort_area_size = 500000000;

alter session set sort_area_size = 500000000;

alter session set hash_area_size = 500000000;

alter session set hash_area_size = 500000000;

select count(t_2_filler) from (
select /*+ monitor
leading(t_1 t_2)
use_hash(t_2)
no_swap_join_inputs(t_2)
pq_distribute(t_2 none none)
*/
t_1.id as t_1_id
, t_1.filler as t_1_filler
, t_2.id as t_2_id
, t_2.filler as t_2_filler
from t_1_part t_1
, t_2_part t_2
where
t_2.fk_id_skew = t_1.id
and regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t_1.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
);

break on dfo_number nodup on tq_id nodup on server_type skip 1 nodup on instance nodup

-- compute sum label Total of num_rows on server_type

select
/*dfo_number
, */tq_id
, cast(server_type as varchar2(10)) as server_type
, instance
, cast(process as varchar2(8)) as process
, num_rows
, round(ratio_to_report(num_rows) over (partition by dfo_number, tq_id, server_type) * 100) as "%"
, cast(rpad('#', round(num_rows * 10 / nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type), 0)), '#') as varchar2(10)) as graph
, round(bytes / 1024 / 1024) as mb
, round(bytes / nullif(num_rows, 0)) as "bytes/row"
from
v$pq_tqstat
order by
dfo_number
, tq_id
, server_type desc
, instance
, process
;

OAMSSA-06252 after patching

Frank van Bortel - Sun, 2014-04-06 04:01
Once upon a time.. you had a working environment with WebLogic, Access and Identity Management (or Discoverer, or ...) and all of a sudden things start failing. Symptoms: you notice the dreaded OAMSSA-06252 (Policy Store not Available) while starting up, and start fearing the worst. Also, it seems as if you cannot log in to the OAM management console anymore; your credentials are accepted, but you get

OUGN : Summary

Tim Hall - Sun, 2014-04-06 02:33

With the exception of a 5+ hour layover in Amsterdam, the trip home was pretty straightforward. I flew to Amsterdam with Lonneke Dikmans, Ronald Luttikhuizen and Roel Hartman. During my rather excessive layover, I played catch-up with all the internet stuff I missed during the trip… :)

I must say OUGN 2014 was a pretty cool event all round! The speaker lineup was incredible. The location (on a boat) was fun. I’ve not done that for a while. In addition to the presentations, I got a lot of time to talk to people about technology, which is what I love doing, so that made me happy…

Big thanks to the organisers of the event for inviting me and paying the bill for the boat and hotel room! Thanks to all the speakers and attendees that managed to put up with me for a couple of days. On a boat, there is nowhere to run! Thanks also to OTN and the Oracle ACE Program. They didn’t fund this trip for me, but I’m still happy to be flying the flag on their behalf. :)

Cheers

Tim…

 


Bangalore Coonoor on Royal Enfield

Vattekkat Babu - Sat, 2014-04-05 11:28

The route is indicated by green icons on the map. The return was on the next day, indicated by red icons. Each marker was placed where I had stopped for at least a 5-minute break. Click/hover on a marker to get info and the odometer reading.

Open Google Route Map

Onward

Outside Ramanagaram.
Outside Ramanagaram at 6:30am. From here till Mysore, a 2-hour non-stop ride!

  • Early morning traffic was peaceful. Nobody on the roads to Mysore really. Very different when you are driving on weekends though.
  • Route was Sony World - Hosur Road - NICE Road - Ramanagaram - Mysore City - Nanjangud - Gundlupet - Bandipur - Theppakkadu - Masinagudi - Ooty
  • Overall about 330km. Took about 8 hours, including about 1.25 hours of breaks.
  • I was apprehensive of climbing the Kallatti Ghat on a relatively new bike. Just pushed it - it climbed with no drama.
  • I mostly kept a steady speed of 60kmph. Once in a while I went up to 70kmph for less than 1km, just to try the bike.
  • The waterhole near the Bandipur visitor's center has all its trees burned down. Quite bad. This is the place where I had once seen elephants, bison and even a tiger across the road.

MDS Seeded Customization Approach with Empty External Project

Andrejus Baranovski - Sat, 2014-04-05 11:27
A great feature in ADF - MDS Seeded customisation support. This is particularly useful for independent software vendors who are developing their own products on top of the ADF framework. With MDS Seeded customisation, maintenance for multiple customers becomes easier: we can create a customisation instead of changing the original source code, which makes the product easier to maintain. I would like to share one important hint related to the technical architecture for MDS Seeded customisations - the way MDS Seeded customisation files are organised and maintained. By default, you would create MDS Seeded customisation files in the original application; however, this is not a very clean approach. There is a way to create and keep all MDS Seeded customisation files in an empty external application. I will describe in this post how this can be achieved in a few easy steps.

If you are going to test the sample application - MDSCustomizationsApp.zip - with integrated WLS instances in JDeveloper, make sure to read this post (it describes how to set up a local MDS repository in the file system) - How To Setup MDS Repository for Embedded WLS Instance.

Let's start - you can download the initial version of the sample ADF application from the blog post mentioned above. An Employees form and an empty Submit button:


I don't want to create any MDS Seeded customisation files inside it; rather, I will build and deploy an ADF library out of the main application:


The sample comes with a special application - CustomizationApp (you can find it in the archive). This application was created for one purpose only: to keep the MDS Seeded customisation files. Initially an empty project was created, into which the ADF library (the one we have just deployed) was imported:


The empty project is enabled with MDS Seeded customisation support:


Restart JDeveloper in customisation mode, so we can create some customisations for the content from the imported ADF library:


If MDS Seeded customisation mode was applied successfully, you should see a special icon next to the application name. Choose 'Show Libraries' to see a list of libraries, so we can see the contents of the imported ADF library:


All attached libraries will be displayed; you should locate our ADF library. Expand it and you should see the application packaging:


We can apply several customisations now. Let's open the Employees VO and define a View Criteria (filter employees by salary >= 1000). This customisation will be stored inside our empty project:


There will be a change in the AM - the View Criteria will be applied to the VO instance:


We can go and review the XML files for the applied MDS Seeded customisations. There is one for the VO and one for the AM. The XML for the AM contains delta information about the View Criteria applied to the VO instance, along the lines of the sketch below:
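For reference, these delta files use the generic MDS customisation grammar; here is a minimal hedged sketch of their shape (the element id and attribute below are illustrative, not taken from the sample application):

<mds:customization version="11.1.1.0.0"
                   xmlns:mds="http://xmlns.oracle.com/mds">
  <!-- a delta referencing an element id from the base document -->
  <mds:modify element="EmployeesView1">
    <mds:attribute name="Where" value="SALARY >= 1000"/>
  </mds:modify>
</mds:customization>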


One more customisation, this time on the UI level - drag and drop the Commit operation onto the Submit button:


This change creates two additional XML files with MDS Seeded customisations - one for the JSF fragment and one for the Page Definition:


You must define an MDS Seeded customisations deployment profile (MAR) for the application with the empty project (containing the XMLs):


Development is completed; now for the last bit - deployment. Make sure WLS is started (you should start it separately if you want to test MAR profile deployment):


Go ahead and deploy the main application first - you should get a list of MDS repositories (see the hint on defining a local test repository in the blog post mentioned above):


Once the main application is deployed, you can apply the MDS Seeded customisations and export them through a MAR file from the external application:


You should see the main application name in the wizard; the MDS Seeded customisations will be applied to this application:


All the changes applied through MDS Seeded customisations will be visible in the log:


A good point - there is no need to restart the main application after MDS Seeded customisation changes are applied. Here you can see the original application with the changes described above:


If an applied change needs to be removed, it can simply be deleted from the MDS Seeded customisation XML file and the customisations re-applied. The main advantage of this approach: there is no need to store the XML files with MDS Seeded customisations inside the original project - we can keep them outside.

OUGN : Day 2

Tim Hall - Sat, 2014-04-05 09:22

Day 2 started really early. Having got to bed about 02:00, I was up at 05:30 and thinking about my 08:30 session. The previous evening’s conversation with Bryn was playing on my mind a little (in a good way), making me think how that conversation should/would affect my session. The session itself seemed to go well. I enjoyed it anyway. :)

From there it was more conversations with people, including a chat with Martin Bach, Martin Nash and one of the attendees (sorry, I forgot your name) about some Exadata issues he was having. I freely admit to knowing nothing about Exadata, but I do know about most of the technology stack that makes up Exadata (like ASM, RAC etc.). A number of the issues people have are not really “Exadata” issues so much as “RAC” or “ASM” issues, so it’s surprising how much you can get involved in these discussions, provided you don’t try to pretend to be something you are not!

After that I attempted to catch up on some sleep, which didn’t really work out, so then it was off to lunch.

After lunch it was time for an open database panel session. This is the first time they’ve done this sort of thing at this event, so I’m not sure what people expected, including us panellists. :) I think this sort of thing needs to run for a few conferences to let people get a feel for it, before a decision is made about whether it is going to work going forward. You have to give it an opportunity to mature… :)

After that it was off to Martin Bach‘s session on Oracle 12c, features that didn’t make the top 10. There were some things I already knew about and some things that passed me by. Food for thought!

Next up was Martin Nash speaking about what an Oracle DBA should know about Linux administration. I was pretty confident I was going to be so ahead of the game here, but he mentioned a number of things I’ve not played with. I’ll probably end up downloading his slides and working through some stuff.

Then it was my WebLogic session. Markus Eisele did a session called WebLogic 101 in the morning, and after a quick discussion I decided that we were effectively giving the same talk. I’m always a little nervous about doing WebLogic talks, but this made me even more nervous. As it turned out, with the exception of one person, I got a different crowd to him, so it was fine.

After my last session of the conference, I inevitably did some more chatting and then went off to see “Oracle Cloud Odyssey”. From there it was dinner, chatting to Mike Dietrich in the bar, then off to bed to sleep through the overnight ride back to Oslo.

Cheers

Tim…

PS. I think I got married to Debra Lilley by the ship’s captain…


OUGN : Day 1

Tim Hall - Sat, 2014-04-05 09:03

The journey to Norway was pretty straightforward, but during the second flight, from Amsterdam to Oslo, my nose and eyes started to stream. I didn’t feel ill, but I was starting to worry I might be getting ill right before a conference. I landed in Norway, got the train to the centre of Oslo and walked to my hotel. I was meant to go out to dinner, but I figured bed might be a better option…

The next day we met up and headed off to the boat to begin the conference. After boarding, we went to the keynotes. Since these were in Norwegian, a few of us ended up at the back of the room chatting. :) As soon as I got access to my room I headed on up to check it out. I’ve been on one of these ferry/cruise ships before and I think they are kind of cute.

After some food it was off to the first session of the day. I went to Connor McDonald speaking about Analytics. I’ve been to a number of different talks on analytic functions over the years, including my own. It’s kind of cool to see how different people approach the same subject and what they focus on compared to you.

Next up it was my session. Compressing a 60-minute session, which I always overrun on, into 45 minutes was a challenge and I had to miss some stuff out, but it’s all good. I got some good feedback after the session, which is also nice for the ego. :)

From there it was on to Lasse Jenssen’s session on version control in the database. I really enjoyed this session as it raised lots of important discussion points! After all these years, there is still not a single product that really does everything, but there are a number of products, like Red Gate SQL Source Control, that are coming closer to the mark.

After that I got locked in conversation with a number of people and missed the next session. I spent a long time talking to Cary Millsap about a whole bunch of stuff unrelated to technology. Then it was off to the evening drinks and dinner, where I sat with Cary and Connor. Once again, lots of chat about non-technical stuff. Pretty cool!

At about midnight we decided to head off to bed and I bumped into Bryn Llewellyn, who asked if he could have a quick word with me. Two hours later we eventually knocked that conversation on the head and I actually went to bed. :)

Cheers

Tim…


Analysing Parallel Execution Skew - Data Flow Operations (DFOs) And DFO Trees

Randolf Geist - Sat, 2014-04-05 03:24
This is the second part of the video tutorial "Analysing Parallel Execution Skew". In this part I introduce the concept of "Data Flow Operations (DFOs)" and "DFO Trees", which is what a Parallel Execution plan is made of. DFOs / DFO Trees are specific to Parallel Execution and don't have any counterpart in a serial execution plan.

Understanding the implications of DFOs / DFO Trees is an important prerequisite for understanding some of the effects shown in the later parts of the video tutorial, hence I cover it as a separate topic.

Note that this tutorial also demonstrates some new 12c features regarding Parallel Execution, in particular how Oracle 12c now lifts many of the previous limitations that led to the generation of multiple DFO Trees.

Here is the video:



If you want to reproduce and play around with the DFO Tree variations shown in the tutorial, here is the script for creating the tables and running the queries / DML commands used in the tutorial:

-- Table creation
set echo on timing on time on

drop table t1;

purge table t1;

drop table t2;

purge table t2;

drop table t3;

purge table t3;

drop table t4;

purge table t4;

drop table t5;

purge table t5;

drop table x;

purge table x;

create table t1
as
select /*+ use_nl(a b) */
(rownum * 2) as id
, rownum as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000) */ * from dual
connect by
level <= 1000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't1')

alter table t1 cache;

create table t2
compress
as
select
(rownum * 2) + 1 as id
, mod(rownum, 2000) + 1 as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000000) */ * from dual
connect by
level <= 1000000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't2')

alter table t2 cache;

create table t3
as
select /*+ use_nl(a b) */
(rownum * 2) as id
, rownum as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000) */ * from dual
connect by
level <= 1000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't3')

alter table t3 cache;

create table t4
compress
as
select
(rownum * 2) + 1 as id
, mod(rownum, 2000) + 1 as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000000) */ * from dual
connect by
level <= 1000000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't4')

alter table t4 cache;

create table t5
as
select /*+ use_nl(a b) */
(rownum * 2) as id
, rownum as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000) */ * from dual
connect by
level <= 1000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't5')

alter table t5 cache;

create table x
compress
as
select * from t2
where 1 = 2;

create unique index x_idx1 on x (id);

alter table t1 parallel 2;

alter table t2 parallel 2;

alter table t3 parallel 15;

alter table t4 parallel 15;

alter table t5 parallel 15;

---------------------------------------------------------------
-- Single DFO tree (with Parallel Execution Skew), many DFOs --
---------------------------------------------------------------

set echo on timing on time on verify on

define num_cpu = "15"

select
max(t1_id)
, max(t1_filler)
, max(t2_id)
, max(t2_filler)
, max(t3_id)
, max(t3_filler)
from (
select /*+ monitor
no_merge
no_merge(v_1)
no_merge(v_5)
parallel(t1 &num_cpu)
PQ_DISTRIBUTE(T1 HASH HASH)
PQ_DISTRIBUTE(V_5 HASH HASH)
leading (v_1 v_5 t1)
use_hash(v_1 v_5 t1)
swap_join_inputs(t1)
*/
t1.id as t1_id
, regexp_replace(v_5.t3_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as t1_filler
, v_5.*
from (
select /*+ parallel(t2 &num_cpu)
parallel(t3 &num_cpu)
leading(t3 t2)
use_hash(t3 t2)
swap_join_inputs(t2)
PQ_DISTRIBUTE(T2 HASH HASH)
*/
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id2 = t2.id2 (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
) v_1
, (
select /*+ parallel(t2 &num_cpu)
parallel(t3 &num_cpu)
leading(t3 t2)
use_hash(t3 t2)
swap_join_inputs(t2)
PQ_DISTRIBUTE(T2 HASH HASH)
*/
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id = t2.id (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
) v_5
, t1
where
v_1.t3_id = v_5.t3_id
and v_5.t2_id2 = t1.id2 (+) + 2001
and regexp_replace(v_5.t3_filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t1.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
)
;

---------------------------------------------------------------------------------------------------------------------------------------------------
-- Multiple DFO trees no parent / child (with different DOPs), separate slave sets, one active after the other (12.1: Still multiple DFO trees) --
---------------------------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on verify on

with a as (
select /*+ materialize monitor no_merge */
t1.id as t1_id
, regexp_replace(t3_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as t1_filler
, v1.*
from (
select /*+ no_merge pq_distribute(t3 hash hash) */
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id2 = t2.id
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 2) = 0
) v1
, t1
where
v1.t2_id2 = t1.id2
),
b as (
select /*+ materialize monitor no_merge */
t1.id as t1_id
, regexp_replace(t3_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as t1_filler
, v1.*
from (
select /*+ no_merge pq_distribute(t3 hash hash) */
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t3 t2
, t4 t3
where
t3.id2 = t2.id
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 2) = 0
) v1
, t5 t1
where
v1.t2_id2 = t1.id2
)
select max(t1_id), max(t1_filler), max(t2_id), max(t2_filler), max(t3_id), max(t3_filler) from (
select /*+ no_merge */
a.t1_id
, a.t1_filler
, a.t2_id
, a.t2_filler
, a.t3_id
, regexp_replace(a.t3_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as t3_filler
from
a
, b
where
a.t3_id = b.t3_id
and regexp_replace(a.t3_filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(b.t3_filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(a.t1_id, 4) = 0
and mod(b.t1_id, 4) = 0
)
;

-------------------------------------------------------------------------------------------------------------------------------
-- Multiple DFO trees parent / child (with different DOPs), separate slave sets, concurrently active (12.1: Single DFO tree) --
-------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

alter session enable parallel dml;

truncate table x;

insert /*+ append parallel(x 4) */ into x
select /*+ leading(v1 v2) */
v_1.id
, v_1.id2
, v_1.filler
from (
select
id
, id2
, filler
from (
select /*+ parallel(t2 4) no_merge */
rownum as id
, t2.id2
, t2.filler
from
t2
where
mod(t2.id2, 3) = 0
and regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) v1
) v_1
, (
select
id
, id2
, filler
from (
select /*+ parallel(t2 8) no_merge */
rownum as id
, t2.id2
, t2.filler
from
t2
where
mod(t2.id2, 3) = 0
and regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) v2
) v_2
where
v_1.id = v_2.id
and v_1.filler = v_2.filler
;

commit;

--------------------------------------------------------------------------------------------------------------------------------------
-- Multiple DFO trees parent / child (with different DOPs), separate slave sets, *not* concurrently active (12.1: Single DFO tree) --
--------------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

alter session enable parallel dml;

truncate table x;

insert /*+ append parallel(x 4) */ into x
select
v1.*
from (
select /*+ parallel(t2 4) */
lag(t2.id) over (order by regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')) as id
, t2.id2
, regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as filler
from
t2
where
mod(t2.id2, 3) = 0
) v1
, (
select /*+ parallel(t2 8) */
lag(id) over (order by regexp_replace(filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')) as id
, id2
, regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as filler
from
t2
where
mod(t2.id2, 3) = 0
) v2
where
v1.id = v2.id
and v1.filler = v2.filler
;

commit;

----------------------------------------------------------------------------------------------------------------------------------------------------------------------
-- Multiple DFO trees, no parent/child, multiple DFO tree starts, no separate slave sets, concurrently active (12.1: Single DFO tree, new parallel FILTER/SUBQUERY) --
----------------------------------------------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

select /*+ no_merge(x) */
*
from (
select
v1.filler
, (select /*+ parallel(x 2) */ id from t2 x where x.id = v1.id) as id
, (select /*+ parallel(x 2) */ id2 from t2 x where x.id = v1.id) as id2
from (
select /*+ parallel(t2 4) */
t2.id
, t2.id2
, t2.filler
from
t2
) v1
, (
select /*+ parallel(t2 8) */
t2.id
, t2.id2
, t2.filler
from
t2
) v2
where
v1.id = v2.id
and v1.filler = v2.filler
) x
where
rownum <= 100
;

----------------------------------------------------------------------------------------------------------------------------------------------------------
-- Multiple DFO trees, no parent/child, multiple DFO tree starts, separate slave sets, concurrently active (12.1: Single DFO tree, new parallel FILTER) --
----------------------------------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

select /*+ parallel(t2 8)
parallel(t3 8)
leading(t3 t2)
use_hash(t3 t2)
swap_join_inputs(t2)
PQ_DISTRIBUTE(T2 HASH HASH)
--PQ_FILTER(@"SEL$1" NONE)
*/
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id2 = t2.id2 (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
and not exists (select /*+ no_unnest parallel(t2 2) */ null from t2 x where x.id2 = t2.id2)
;

-------------------------------------------------------------------------------------------------------------------------------------------------------------
-- Multiple DFO trees, no parent/child, multiple DFO tree starts, separate slave sets, concurrently active (12.1: Still multiple DFO trees, serial FILTER) --
-------------------------------------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

alter session enable parallel dml;

truncate table x;

insert /*+ append parallel(x 4) */ into x
select /*+ leading(v1 v2) */
v_1.id
, v_1.id2
, v_1.filler
from (
select
id
, id2
, filler
from (
select /*+ parallel(t2 4) no_merge */
rownum as id
, t2.id2
, t2.filler
from
t2
where
mod(t2.id2, 3) = 0
and regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) v1
) v_1
, (
select
id
, id2
, filler
from (
select /*+ parallel(t2 8) no_merge */
rownum as id
, t2.id2
, t2.filler
from
t2
where
mod(t2.id2, 3) = 0
and regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) v2
) v_2
where
v_1.id = v_2.id
and v_1.filler = v_2.filler
and not exists (select /*+ no_unnest parallel(y 2) */ null from t2 y where y.id2 = v_1.id2)
;

commit;

Collaborate in My Oracle Support Community

Joshua Solomin - Fri, 2014-04-04 17:40

My Oracle Support Community provides a collaborative environment built in partnership between Oracle Premier Support customers and Oracle Support engineers.

Support engineers contribute their knowledge and Oracle information, and you will find a trusted network of Oracle customers who share similar goals and who have tackled many of the real-world challenges you face. You can learn new approaches to situations and examine different perspectives to find the best answer for your environment.

Together, we help you take the path to success.

Get started now with a video tour of My Oracle Support Community, the first in a series of short videos on Community. You can also register for a live webcast session: Get Proactive Essentials: Using the new My Oracle Support Community Platform.

Join us.

How to configure YUM for Oracle Enterprise Linux (OEL) to install missing RPMs

Online Apps DBA - Fri, 2014-04-04 16:21
I installed a new Linux server for our Oracle Access Manager (OAM) 11gR2 Admin Training (training starts on 3rd May and the fee is 699 USD). This post covers the steps to configure YUM so that you can install missing RPMs (RPMs required for Oracle Database and Fusion Middleware). 1. Identify your Oracle Enterprise Linux version from the enterprise-release file: cat [...]
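As a hedged sketch of the overall approach (assuming OEL 5.x; pick the repo file matching your release from public-yum.oracle.com, and the package names are just examples of common Oracle prerequisites):

cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-el5.repo

# Edit the downloaded .repo file and set enabled=1 for the channels
# you need, then install any missing prerequisite RPMs, for example:
yum install sysstat compat-libstdc++-33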

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

Hide All Search Operators for ADF View Criteria Item

Andrejus Baranovski - Fri, 2014-04-04 14:51
During this week's work, I received an interesting question from an ADF developer - how to hide all operators for a specific ADF View Criteria item. There is an option to go and hide the different operators one by one, following the API guide for JboCompOper. You could follow this approach, but there is a smarter way - hide all operators at once. See below how.

A sample application is available for download - ADFQueryNoOperatorsApp.zip. This application implements a View Criteria with several items to search on (FirstName, LastName and Salary):


Search operators are configured to be displayed in both Basic and Advanced modes:


This is how it looks on the UI - ADF Query with the standard list of search operators:


I would like to hide all search operators for the FirstName and LastName attributes. This is pretty easy - all we need to do is open the VO attribute and define a CompOper tag. If you want to hide all search operators, simply provide "*" for the Name and Oper properties, and set "-2" for the ToDo property (this is OPER_TODO_CLEAR, meaning remove all operators). Name and Oper are required properties, but we can provide the "*" sign, indicating it will be applied to all operators. Repeat the same for the FirstName and LastName attributes:
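Based on the description above, the resulting entry in the VO XML looks roughly like this (a sketch; the surrounding ViewAttribute properties are illustrative, not taken from the sample):

<ViewAttribute Name="FirstName" EntityAttrName="FirstName" EntityUsage="Employees">
  <!-- Name="*" / Oper="*" match all operators; ToDo="-2" is
       JboCompOper.OPER_TODO_CLEAR, i.e. remove all operators -->
  <CompOper Name="*" ToDo="-2" Oper="*"/>
</ViewAttribute>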


Search operators are now hidden for the FirstName and LastName items in both modes - Basic:


And Advanced mode:


Sounds pretty easy, doesn't it? Just remember the trick with the "*" symbol.

Best of OTN - Week of March 31st

OTN TechBlog - Fri, 2014-04-04 12:25

Java Community

Java 8 is Revolutionary! 
The Java SE 8 launch event is available for replay, plus 30+ videos covering the technical features of this revolutionary release. Download Java SE 8 and watch the videos.

Updated Java Tutorials
Find out where the tech docs have been updated for the Java 8 release.

For the Business Set: 8 Reasons to Love Java 8
Yes, Java 8 is a developer's dream. Find out the business-level view of Java 8 from Forbes.

Friday Funny from OTN Java Community Manager Tori Wieldt -
Dentist: "Do you floss regularly?"
Me: "Do you back up your computer data regularly?"
Thanks, @scott4arrow !

Architect Community

The top 3 most popular OTN articles written by members of the architect community for the month of March 2014.

Cookbook: Installing and Configuring Oracle BI Applications 11.1.1.7.1
by Mark Rittman and Kevin McGinley
A step-by-step guide to installing, configuring, and loading an initial set of tables within Oracle BI Applications 11.1.1.7.1, using data sourced from Oracle E-Business Suite 12.1.

Enterprise Service Bus
by Juergen Kress, Hajo Normann, Danilo Schmiedel, Guido Schmutz, Clemens Utschig-Utschig, Torsten Winterberg, and Bernd Trops
Answers to some of the most important questions surrounding the use of an enterprise service bus, using concrete examples to clarify correct areas of application. Part of the Industrial SOA article series.

Starting and Stopping a Java EE Environment when using Oracle WebLogic
by René van Wijk
Oracle ACE Director and Fusion Middleware specialist René van Wijk explores ways to simplify the life-cycle management of a Java EE environment through the use of scripts developed with WebLogic Scripting Tool and Linux Bash.

Friday fun from OTN Architect Community manager Bob Rhubart:
Check out this live concert video from the Ben Miller Band out of Joplin, Missouri. Can you say "Wah-wah washboard"?

Get involved in community conversations on the following OTN channels...

NVL() change

Jonathan Lewis - Fri, 2014-04-04 11:10

One of the problems with functions is that the optimizer generally doesn’t have any idea of how a predicate based on function(col) might affect the cardinality. However, the optimizer group are constantly refining the algorithms to cover an increasing number of special cases more accurately. This is a good thing, of course – but it does mean that you might be unlucky on an upgrade where a better cardinality estimate leads to a less efficient execution plan. Consider for example this simple query (where d1 is a column of type date):

select	*
from	t1
where	nvl(d1,to_date('01-01-1900','dd-mm-yyyy')) < sysdate

Now, there are many cases in many versions of Oracle, where the optimizer will appear to calculate the cardinality of

nvl(columnX,{constant}) operator {constant}

as if it were:

columnX is null or columnX operator {constant}

Unfortunately this doesn’t seem to be one of them – until you get to 11.2.something. Here’s a little demonstration code:

create table t1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	case
		when rownum > 100 then null else sysdate - rownum
	end	d1
from
	generator	v1,
	generator	v2
where
	rownum <= 50000
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

set autotrace traceonly explain

prompt	query with NVL

select	*
from	t1
where	nvl(d1,to_date('01-01-1900','dd-mm-yyyy')) < sysdate
;

prompt	query with OR clause

select	*
from	t1
where	d1 is null or d1 < sysdate
;

If you run this code in 11.1.0.7 you get the following – with numeric variations for cost (which I’m not interested in at the moment):


query with NVL
==============
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  2500 |  2500 |    18  (39)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |  2500 |  2500 |    18  (39)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NVL("D1",TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))<SYSDATE@!)

query with OR clause
====================
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 50000 | 50000 |    13  (16)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 50000 | 50000 |    13  (16)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("D1" IS NULL OR "D1"<SYSDATE@!)

Take note particularly of the difference in the estimated cardinality for the tablescans.

When you upgrade to 11.2.0.4 (possibly earlier – though there are some nvl() related patches that appeared only in 11.2.0.4), you get this:


query with NVL
==============
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 50000 | 50000 |    18  (39)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 50000 | 50000 |    18  (39)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NVL("D1",TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))<SYSDATE@!)

query with OR clause
====================
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 49900 | 49900 |    13  (16)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 49900 | 49900 |    13  (16)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("D1" IS NULL OR "D1"<SYSDATE@!)

As you can see, the estimate for the “NVL()” example is now correct – vastly different from the estimate in 11.1.0.7, which seemed to be using the standard “5% for a range-based predicate on function(col)”.

It’s interesting to note that a (relatively) small error has crept into the “OR” example – interestingly, the size of the error is exactly the number of rows where d1 is not null (which looks like enough of a coincidence to be a bug – but maybe there’s a good rationale for it).

Conclusion

Yet again, a simple upgrade has the capacity to make a dramatic change to a cardinality estimate – which could mean a significant change to an execution plan and a major change in performance. If you’ve read this note, though, you may recognise the pattern that is the driving feature behind the problem.

Footnote:

If you have access to versions 11.2.0.1 through 11.2.0.3 and find that the test data produces different cardinalities, please publish the results in the comments – it would be nice to know exactly when this change appeared. (You need only show the body of the execution plans with labels, not the entire output of the script.)

 


NVL() change

Jonathan Lewis - Fri, 2014-04-04 11:10

One of the problems with functions is that the optimizer generally doesn’t have any idea of how a predicate based on function(col) might affect the cardinality. However, the optimizer group are constantly refining the algorithms to cover an increasing number of special cases more accurately. This is a good thing, of course – but it does mean that you might be unlucky on an upgrade where a better cardinality estimate leads to a less efficient execution plan. Consider, for example, the simple query (where d1 is a column of type date):

select	*
from	t1
where	nvl(d1,to_date('01-01-1900','dd-mm-yyyy')) < sysdate

Now, there are many cases in many versions of Oracle, where the optimizer will appear to calculate the cardinality of

nvl(columnX,{constant}) operator {constant}

as if it were:

columnX is null or columnX operator {constant}

Unfortunately this doesn’t seem to be one of them – until you get to 11.2.something. Here’s a little demonstration code:

create table t1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	case
		when rownum > 100 then null else sysdate - rownum
	end	d1
from
	generator	v1,
	generator	v2
where
	rownum <= 50000
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

set autotrace traceonly explain

prompt	query with NVL

select	*
from	t1
where	nvl(d1,to_date('01-01-1900','dd-mm-yyyy')) < sysdate
;

prompt	query with OR clause

select	*
from	t1
where	d1 is null or d1 < sysdate
;
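
(A quick sanity check – my addition, not part of the original script: every one of the 50,000 rows satisfies both predicates, since the 100 non-null dates are all in the past and the 49,900 nulls map to 1st Jan 1900, which is also earlier than sysdate. Both of the following counts should return 50000.)

select count(*) from t1
where  nvl(d1,to_date('01-01-1900','dd-mm-yyyy')) < sysdate;

select count(*) from t1
where  d1 is null or d1 < sysdate;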

If you run this code in 11.1.0.7 you get the following – with numeric variations for cost (which I’m not interested in at the moment):


query with NVL
==============
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  2500 |  2500 |    18  (39)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |  2500 |  2500 |    18  (39)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NVL("D1",TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))<SYSDATE@!)

query with OR clause
====================
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 50000 | 50000 |    13  (16)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 50000 | 50000 |    13  (16)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("D1" IS NULL OR "D1"<SYSDATE@!)

Take note particularly of the difference in the estimated cardinality for the tablescans.

When you upgrade to 11.2.0.4 (possibly earlier – though there are some nvl() related patches that appeared only in 11.2.0.4), you get this:


query with NVL
==============
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 50000 | 50000 |    18  (39)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 50000 | 50000 |    18  (39)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(NVL("D1",TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))<SYSDATE@!)

query with OR clause
====================
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 49900 | 49900 |    13  (16)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 49900 | 49900 |    13  (16)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("D1" IS NULL OR "D1"<SYSDATE@!)

As you can see, the estimate for the “NVL()” example is now correct – and vastly different from the estimate in 11.1.0.7, which seemed to be using the standard “5% for a range-based predicate on function(col)”.

It’s interesting to note that a (relatively) small error has crept into the “OR” example – the size of the error is exactly the number of rows where d1 is not null, which looks like enough of a coincidence to be a bug (but maybe there’s a good rationale for it).

Conclusion

Yet again, a simple upgrade has the capacity to make a dramatic change to a cardinality estimate – which could mean a significant change to an execution plan and a major change in performance. If you’ve read this note, though, you may recognise the pattern that is the driving feature behind the problem.
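
One practical check (my suggestion, not part of the original note): if a plan changes after the upgrade, you can test whether this improved estimate is the trigger by re-running the statement under the old optimizer model, for example:

select /*+ optimizer_features_enable('11.1.0.7') */ *
from   t1
where  nvl(d1,to_date('01-01-1900','dd-mm-yyyy')) < sysdate;

If the old plan – and the old 2,500-row estimate – comes back, you’ve found the culprit.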

Footnote:

If you have access to versions 11.2.0.1 through 11.2.0.3 and find that the test data produces different cardinalities, please publish the results in the comments – it would be nice to know exactly when this change appears. (You need only show the body of the execution plans with labels, not the entire output of the script).
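
If you do run the comparison, it would also help to quote the exact version string alongside your plans (a trivial suggestion of mine, not a requirement):

select banner from v$version where rownum = 1;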

 


Where Does v$osstat Get Its Data? Trustworthy?

OS Data Without An OS Prompt
Have you ever wanted to get OS data from within Oracle Database 12c/11g/10g without being at the OS prompt? Have you ever wondered where v$osstat gets its data?

I have! It's the kind of stuff I think about all the time. In fact, I was so fascinated by what I found that I included the details in my online seminar, Utilization On Steroids. It's that cool.

What Is So Great About v$osstat?
For starters, v$osstat provides an operating system perspective of activity on the database server. In contrast, v$sysstat and v$sys_time_model provide performance data about a specific instance. Looking at v$osstat is like looking at OS data from within Oracle. In other words, seeing OS data without an OS prompt.
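
If you have never looked at it, here is a quick taste (a sketch of mine; the exact set of statistics exposed varies by platform and version):

select stat_name, value
from   v$osstat
where  stat_name in ('NUM_CPUS','BUSY_TIME','IDLE_TIME','USER_TIME','SYS_TIME','LOAD');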

But is the data from v$osstat trustworthy? That's exactly what this posting is all about. To figure this out, I'm going to first look at where vmstat gets its data and then look at where v$osstat gets its data. If their data source is the same, then it is highly likely v$osstat is trustworthy. Make sense? Read on...

Where Does vmstat Get Its Data?
It's really simple to determine vmstat's data source: just OS trace vmstat! On Linux I use the strace command. In the excerpt below, I did not change any lines; where I removed lines, I replaced them with a "...". Here is the output:

[oracle@sixcore local]$ strace vmstat 2 5
execve("/usr/bin/vmstat", ["vmstat", "2", "5"], [/* 39 vars */]) = 0
...
write(1, "procs -----------memory---------"..., 81procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
) = 81
write(1, " r b swpd free buff cach"..., 81 r b swpd free buff cache si so bi bo in cs us sy id wa st
) = 81
open("/proc/meminfo", O_RDONLY) = 3
lseek(3, 0, SEEK_SET) = 0
read(3, "MemTotal: 24729532 kB\nMemF"..., 2047) = 1198
open("/proc/stat", O_RDONLY) = 4
read(4, "cpu 570795858 11695 216600983 8"..., 65535) = 4643
open("/proc/vmstat", O_RDONLY) = 5
lseek(5, 0, SEEK_SET) = 0
read(5, "nr_free_pages 119649\nnr_inactive"..., 2047) = 2047
write(1, " 2 0 146388 478596 317080 22558"..., 84 2 0 146388 478596 317080 22558284 0 0 5 234 1 4 72 27 0 0 0
) = 84
...
nanosleep({2, 0}, 0x7fff0a9313b0) = 0
...
lseek(3, 0, SEEK_SET) = 0
read(3, "MemTotal: 24729532 kB\nMemF"..., 2047) = 1198
lseek(4, 0, SEEK_SET) = 0
read(4, "cpu 570796318 11695 216600996 8"..., 65535) = 4643
lseek(5, 0, SEEK_SET) = 0
read(5, "nr_free_pages 119672\nnr_inactive"..., 2047) = 2047
write(1, " 2 0 146388 478688 317080 22558"..., 84 2 0 146388 478688 317080 22558284 0 0 0 36 3781 2311 38 1 61 0 0
) = 84
...
nanosleep({2, 0},
...
exit_group(0) = ?
[oracle@sixcore local]$

There are two things I want to bring to your attention. The first is the nanosleep call. If you check the man page for nanosleep, you can see that vmstat is using the call for the two-second sleep between data outputs.

The second, and the most important for this posting, is that vmstat repeatedly references data in the /proc filesystem. I write about the /proc filesystem in my Oracle Performance Firefighting book and also explore it in my Utilization On Steroids online seminar. It's amazing. The bottom line is this: vmstat gets its performance data from the /proc filesystem. Place that in your stack and read on.

Where Does v$osstat Get Its Data?
The Oracle Database background process mmnl is, at least in part, responsible for collecting the data that we see in v$osstat. So I am going to OS trace the mmnl process using strace and see what files it opens and reads!

First I need to get the OS process ID of the mmnl background process for my Oracle Database 12c instance.

[oracle@sixcore local]$ ps -eaf|grep mmnl
oracle 11521 11170 0 08:59 pts/1 00:00:00 grep mmnl
oracle 20109 1 0 Feb24 ? 00:18:47 ora_mmnl_prod35
oracle 60274 1 0 Jan13 ? 00:24:34 ora_mmnl_prod30

Got it! The PID of the mmnl process for my Oracle Database 12c instance is 20109. Just as I did with vmstat, I'm going to OS trace it using strace; since mmnl is already running, I attach to the existing process. In the excerpt below, I did not change any lines; where I removed lines, I replaced them with a "...". Here is the output:

[oracle@sixcore local]$ strace -p 20109
Process 20109 attached - interrupt to quit
...
open("/proc/cpuinfo", O_RDONLY) = 15
fstat(15, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bd000
read(15, "processor\t: 0\nvendor_id\t: Genuin"..., 1024) = 1024
read(15, " size\t: 12288 KB\nphysical id\t: 0"..., 1024) = 1024
read(15, "gs\t\t: fpu vme de pse tsc msr pae"..., 1024) = 1024
read(15, "_tsc arch_perfmon pebs bts rep_g"..., 1024) = 1024
read(15, "2 popcnt aes lahf_lm ida arat ep"..., 1024) = 1024
read(15, "ical, 48 bits virtual\npower mana"..., 1024) = 41
read(15, "", 1024) = 0
open("/proc/mounts", O_RDONLY) = 16
fstat(16, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bc000
read(16, "rootfs / rootfs rw 0 0\nproc /pro"..., 1024) = 886
read(16, "", 1024) = 0
close(16) = 0
munmap(0x7f68548bc000, 4096) = 0
close(15) = 0
munmap(0x7f68548bd000, 4096) = 0
open("/proc/loadavg", O_RDONLY) = 15
fstat(15, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bd000
read(15, "2.11 2.07 1.73 5/302 11525\n", 1024) = 27
close(15) = 0
munmap(0x7f68548bd000, 4096) = 0
open("/proc/stat", O_RDONLY) = 15
fstat(15, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bd000
read(15, "cpu 571012017 11695 216602922 8"..., 1024) = 1024
close(15) = 0
munmap(0x7f68548bd000, 4096) = 0
open("/proc/meminfo", O_RDONLY) = 15
fstat(15, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bd000
read(15, "MemTotal: 24729532 kB\nMemF"..., 1024) = 1024
close(15) = 0
munmap(0x7f68548bd000, 4096) = 0
open("/proc/vmstat", O_RDONLY) = 15
fstat(15, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f68548bd000
read(15, "nr_free_pages 119496\nnr_inactive"..., 1024) = 1024
close(15)
...
semtimedop(5341187, {{24, -1, 0}}, 1, {1, 0}^C
Process 20109 detached
[oracle@sixcore local]$

Just in the few lines above you can see that the mmnl background process opens and reads data from /proc/cpuinfo, /proc/mounts, /proc/loadavg, /proc/stat, /proc/meminfo, and /proc/vmstat. While tracing Oracle's background processes may be a bit scary (check that it does not violate your Oracle Corporation licensing agreement), this is truly fascinating!

While my demonstration does not prove this, and I have never seen or heard anything to the contrary, it does appear that the source of OS data for v$osstat is the /proc filesystem. And certainly, the Oracle Database background process mmnl repeatedly references the /proc filesystem.

Therefore, if the source of OS data is the same for both vmstat and v$osstat, I will trust the data from v$osstat unless there is a really good reason to not trust the data.

What Can I Do With v$osstat?
That's a great question, but not the focus of this posting. Just to give you some ideas, though: we can easily determine OS CPU utilization solely with v$osstat, an AWR report, or a Statspack report. In fact, next week I'm going to give a 30-minute Quick Tip at the 2014 IOUG/Collaborate conference entitled, Calculating OS CPU Utilization From V$ Views, AWR, and Statspack. OK... here is the link where you can download the slide deck for my most recent version of the conference presentation.
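
Just as a taste (a sketch of mine, not the conference material): because BUSY_TIME and IDLE_TIME in v$osstat are cumulative centisecond counters, OS CPU utilization over an interval falls out of two samples as delta busy / (delta busy + delta idle). Something like this, assuming you can select from v$osstat and execute dbms_lock in your environment:

-- set serveroutput on first
declare
   busy1 number; idle1 number;
   busy2 number; idle2 number;
begin
   select max(decode(stat_name,'BUSY_TIME',value)),
          max(decode(stat_name,'IDLE_TIME',value))
   into   busy1, idle1
   from   v$osstat;
   dbms_lock.sleep(60);   -- sample interval in seconds
   select max(decode(stat_name,'BUSY_TIME',value)),
          max(decode(stat_name,'IDLE_TIME',value))
   into   busy2, idle2
   from   v$osstat;
   dbms_output.put_line('OS CPU util % = '||
          round(100*(busy2-busy1)/((busy2-busy1)+(idle2-idle1)),1));
end;
/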

Utilization is a fascinating topic. It is far-reaching and touches on all things computing and even beyond. And as you might expect, I really get into this in my online seminar entitled, Utilization On Steroids. It's the simplest concepts that are the most profound.

Thanks for reading!

Craig.

If you enjoy my blog, I suspect you'll get a lot out of my online or in-class training. Go to www.orapub.com. I also offer on-site training and consulting services.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com. Another option is to send an email to OraPub's general email address, which is currently info@ orapub. com.
https://resources.orapub.com/OraPub_Online_Seminars_About_Oracle_Database_Performance_s/100.htm

Categories: DBA Blogs

Unusable unique constraint

Dominic Brooks - Fri, 2014-04-04 10:20

Another annoying thing about unusable indexes

I’m surprised that I can’t remember coming across this before.

I want to archive some data from a bunch of partitions.

Unfortunately, I can’t follow the virtuous circle of data lifecycling and partition the table in such a way that I can archive all the data from a particular partition (or subpartition) via exchange partition.

Without going into too much detail and skipping a bunch of other steps, roughly what I wanted to do was:
1. Mark indexes as unusable (ignore whether they’re global/local, it doesn’t matter).
2. Delete from existing partition
3. Rebuild indexes.

But if you mark a unique index as unusable, you can’t then do DML on the table.

It is expected behaviour.
It doesn’t matter what the setting of skip_unusable_indexes is.
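
You can verify that for yourself with a one-liner added to the test case below (my addition): even with the parameter set explicitly, the delete still fails with ORA-01502.

alter session set skip_unusable_indexes = true;

The documentation for skip_unusable_indexes explains why: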

If an index is used to enforce a UNIQUE constraint on a table, then allowing insert and update operations on the table might violate the constraint.
Therefore, this setting does not disable error reporting for unusable indexes that are unique.

Well that seems to compromise one of the main benefits of marking a unique index as unusable.

SQL> get tmp
  1  drop table t1;
  2  create table t1
  3  (pt    varchar2(24)
  4  ,col1  number)
  5  PARTITION BY LIST (pt)
  6  (PARTITION PX VALUES('X')
  7  ,PARTITION PY VALUES('Y'));
  8  create unique index i1 on t1 (col1) global;
  9  insert into t1 values ('X',1);
 10  alter index i1 unusable;
 11* delete from t1 partition (px) where col1 = 1;
 12  .
SQL> @tmp

Table dropped.

Table created.

Index created.

1 row created.

Index altered.

delete from t1 partition (px) where col1 = 1
*
ERROR at line 1:
ORA-01502: index 'E668983_DBA.I1' or partition of such index is in unusable state

To get around this, I can use a unique constraint backed by a non-unique index, but that can come with some extra baggage.

SQL> get tmp
  1  drop table t1;
  2  create table t1
  3  (pt    varchar2(24)
  4  ,col1  number)
  5  PARTITION BY LIST (pt)
  6  (PARTITION PX VALUES('X')
  7  ,PARTITION PY VALUES('Y'));
  8  create  index i1 on t1 (col1) global;
  9  alter table t1 add constraint uk_t1 unique (col1) using index i1;
 10  insert into t1 values ('X',1);
 11  alter table t1 disable constraint uk_t1;
 12  alter index i1 unusable;
 13* delete from t1 partition (px) where col1 = 1;
 14  .
SQL> @tmp

Table dropped.

Table created.

Index created.

Table altered.

1 row created.

Table altered.

Index altered.

1 row deleted.
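
For completeness (my addition – the obvious closing steps, which the original post stops short of): once the archiving delete is done, rebuild the index and re-enable the constraint. Oracle should pick up the existing, now valid, non-unique index to enforce it.

alter index i1 rebuild;
alter table t1 enable constraint uk_t1;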