Feed aggregator

Highlight numbers in an APEX Report (SQL and Class)

Dimitri Gielis - Wed, 2016-01-20 09:31
Last year I blogged about highlighting negative numbers in an APEX Report, the CSS-only way.
At that time I mentioned two alternative approaches, using jQuery or SQL, but it looks like I never wrote those posts, until somebody reminded me. This post is about using SQL to highlight something in a report.

Let's say we want to highlight negative numbers in a report (as in the previous post):


We have some CSS defined inline in the Page:

.negative-number {
  color:red;
}

We will add the negative-number class to some of the values. All the logic to decide whether a number is negative lives in SQL. Why SQL, you might ask? This example is very simple, but you could call a function with a lot of complexity behind it to decide whether to assign a class to a record or not. The principle of this example is what matters: you can use logic in SQL to drive CSS.

The SQL Query of the Report looks like this. Watch the case statement, where we decide when to assign a value for the class:

select 
 description,
 amount,
 case 
   when amount < 0
   then 'negative-number'
   else ''
 end as class
from dimi_transaction
order by id

Finally we assign the class to the amount, by adding a span in the HTML Expression of the Amount column:


You can set the Class column to Conditional = Never, as it's only used behind the scenes.
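As a sketch of that HTML Expression, using APEX's #COLUMN# substitution syntax with the column names from the query above:

```
<span class="#CLASS#">#AMOUNT#</span>
```

At runtime APEX replaces #CLASS# with the value computed by the case statement, so negative amounts get the negative-number class and everything else gets an empty class attribute.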

That's how you make a bridge between SQL and CSS.

You can now play more with the case statement and even let the class or style (e.g. the color) come from a user-defined table... unlimited possibilities :)

Categories: Development

January 2016 Critical Patch Update Released

Oracle Security Team - Tue, 2016-01-19 18:11

Oracle today released the January 2016 Critical Patch Update.  With this Critical Patch Update release, the Critical Patch Update program enters its 11th year of existence (the first Critical Patch Update was released in January 2005).  As a reminder, Critical Patch Updates are currently released 4 times a year, on a schedule announced a year in advance.  Oracle recommends that customers apply this Critical Patch Update as soon as possible.

The January 2016 Critical Patch Update provides fixes for a wide range of product families, including: 

  • Oracle Database
    • None of these database vulnerabilities are remotely exploitable without authentication. 
  • Java SE
    • Oracle strongly recommends that Java home users visit the java.com web site to ensure that they are using the most recent version of Java, and advises them to remove obsolete Java SE versions from their computers if they are not absolutely needed.
  • Oracle E-Business Suite
    • Oracle’s ongoing assurance effort with E-Business Suite helps remediate security issues and is intended to help enhance the overall security posture provided by E-Business Suite.

Oracle takes security seriously, and strongly encourages customers to keep up with newer releases in order to benefit from Oracle’s ongoing security assurance effort.  

For more information:

The January 2016 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpujan2016-2367955.html

The Oracle Software Security Assurance web site is located at https://www.oracle.com/support/assurance/index.html.

Oracle Applications Lifetime Support Policy is located at http://www.oracle.com/us/support/library/lifetime-support-applications-069216.pdf.


Drop table cascade and reimport

Laurent Schneider - Tue, 2016-01-19 12:26

Happy new year 🙂

Today I had to import a subset of a database and the challenge was to restore a parent table without restoring its children. It took me some minutes to write the code, but it would have taken days to restore the whole database.

CREATE TABLE t1(
  c1 NUMBER CONSTRAINT t1_pk PRIMARY KEY);
INSERT INTO t1 (c1) VALUES (1);
CREATE TABLE t2(
  c1 NUMBER CONSTRAINT t2_t1_fk REFERENCES t1,
  c2 NUMBER CONSTRAINT t2_pk PRIMARY KEY);
INSERT INTO t2 (c1, c2) VALUES (1, 2);
CREATE TABLE t3(
  c2 NUMBER CONSTRAINT t3_t2_fk REFERENCES t2,
  c3 NUMBER CONSTRAINT t3_pk PRIMARY KEY);
INSERT INTO t3 (c2, c3) VALUES (2, 3);
CREATE TABLE t4(
  c3 NUMBER CONSTRAINT t4_t3_fk REFERENCES t3,
  c4 NUMBER CONSTRAINT t4_pk PRIMARY KEY);
INSERT INTO t4 (c3, c4) VALUES (3, 4);
COMMIT;

expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott.dmp reuse_dumpfiles=y

Now what happens if I want to restore T2 and T3?

First I check the dictionary for foreign keys from other tables pointing to T2 and T3.

SELECT table_name, constraint_name
FROM user_constraints
WHERE r_constraint_name IN (
    SELECT constraint_name
    FROM user_constraints
    WHERE table_name IN ('T2', 'T3'))
  AND table_name NOT IN ('T2', 'T3');

TABLE_NAME                     CONSTRAINT_NAME               
------------------------------ ------------------------------
T4                             T4_T3_FK                      

T4 points to T3 and T4 has data.

Now I can drop my tables with the cascade constraints option:

drop table t2 cascade constraints;
drop table t3 cascade constraints;

Now I import: first the tables, then the referential constraint that was dropped by the cascade clause and that does not belong to T2/T3.

impdp scott/tiger tables=T2,T3 directory=DATA_PUMP_DIR dumpfile=scott.dmp

impdp scott/tiger  "include=ref_constraint:\='T4_T3_FK'" directory=DATA_PUMP_DIR dumpfile=scott.dmp

It’s probably possible to do it in one import, but the include syntax is horrible.

Oracle Database Critical Patch Update (CPU) Planning for 2016

With the start of the new year, it is now time to think about Oracle Critical Patch Updates for 2016.  Oracle releases security patches in the form of Critical Patch Updates (CPU) each quarter (January, April, July, and October).  These patches include important fixes for security vulnerabilities in the Oracle Database.  The CPUs are only available for certain versions of the Oracle Database; therefore, advance planning is required to ensure supported versions are being used, and mitigating controls may be required when the CPUs cannot be applied in a timely manner.

CPU Supported Database Versions

As of the October 2015 CPU, the only CPU supported database versions are 11.2.0.4, 12.1.0.1, and 12.1.0.2.  The final CPU for 12.1.0.1 will be July 2016.  11.2.0.4 will be supported until October 2020 and 12.1.0.2 will be supported until July 2021.

11.1.0.7 and 11.2.0.3 CPU support ended as of July 2015. 

Database CPU Recommendations
  1. When possible, all Oracle databases should be upgraded to 11.2.0.4 or 12.1.0.2.  This will ensure CPUs can be applied through at least October 2020.
     
  2. [12.1.0.1] New databases or application/database upgrade projects currently testing 12.1.0.1 should immediately look to implement 12.1.0.2 instead of 12.1.0.1, even if this will require additional effort or testing.  With the final CPU for 12.1.0.1 being July 2016, unless a project is implementing in January or February 2016, we believe it is imperative to move to 12.1.0.2 to ensure long-term CPU support.
     
  3. [11.2.0.3 and prior] If a database cannot be upgraded, the only effective mitigating control for many database security vulnerabilities is to strictly limit direct database access.  To restrict database access, Integrigy recommends using valid node checking, Oracle Connection Manager, network restrictions and firewall rules, and/or terminal servers and bastion hosts.  Direct database access is required to exploit database security vulnerabilities, and most often a valid database session is required.
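As an illustrative sketch of the valid node checking approach mentioned above (the hostnames and addresses are made up; the parameters are set in the sqlnet.ora on the database server and take effect on listener restart):

```
# sqlnet.ora: only allow the listed clients to connect to the listener
tcp.validnode_checking = yes
tcp.invited_nodes = (appserver1.example.com, 10.0.0.5)
```

This is a coarse control, but it keeps arbitrary network clients from reaching the database listener at all, which is exactly the "limit direct database access" mitigation described above.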
     

Regardless of whether security patches are regularly applied, general database hardening such as changing database passwords, optimizing initialization parameters, and enabling auditing should be done for all Oracle databases. 

 

Oracle Database, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Oracle E-Business Suite Critical Patch Update (CPU) Planning for 2016

With the start of the new year, it is now time to think about Oracle Critical Patch Updates for 2016.  Oracle releases security patches in the form of Critical Patch Updates (CPU) each quarter (January, April, July, and October).  These patches include important fixes for security vulnerabilities in the Oracle E-Business Suite and its technology stack.  The CPUs are only available for certain versions of the Oracle E-Business Suite and Oracle Database; therefore, advance planning is required to ensure supported versions are being used, and mitigating controls may be required when the CPUs cannot be applied in a timely manner.

For 2016, CPUs for Oracle E-Business Suite will become a significant focus as a large number of security vulnerabilities for the Oracle E-Business Suite will be fixed.  The January 2016 CPU for the Oracle E-Business Suite (EBS) will include 78 security fixes for a wide range of security bugs with many being high risk such as SQL injection in web facing self-service modules.  Integrigy anticipates the next few quarters will have an above average number of EBS security fixes (average is 7 per CPU since 2005).  This large number of security bugs puts Oracle EBS environments at significant risk as many of these bugs will be high risk and well publicized.

Supported Oracle E-Business Suite Versions

Starting with the April 2016 CPU, only 12.1 and 12.2 will be fully supported for CPUs moving forward.  11.5.10 CPU patches for April 2016, July 2016, and October 2016 will only be available to customers with an Advanced Customer Support (ACS) contract.  There will be no 11.5.10 CPU patches after October 2016.  CPU support for 12.0 ended as of October 2015.

11.5.10 Recommendations
  1. When possible, the recommendation is to upgrade to 12.1 or 12.2.
  2. Obtaining an Advanced Customer Support (ACS) contract is a short term (until October 2016) solution, but is an expensive option.
  3. An alternative to applying CPU patches is to use Integrigy's AppDefend, an application firewall for Oracle EBS, in proxy mode which blocks EBS web security vulnerabilities.  AppDefend provides virtual patching and can effectively replace patching of EBS web security vulnerabilities.

In order to mitigate some mod_plsql security vulnerabilities, all Oracle EBS 11i environments should look at limiting the enabled mod_plsql web pages.  The script /patch/115/sql/txkDisableModPLSQL.sql can be used to limit the allowed pages listed in FND_ENABLED_PLSQL.  This script was introduced in 11i.ATG_PF.H and the most recent version is in 11i.ATG_PF.H.RUP7.  This must be thoroughly tested as it may block a few mod_plsql pages used by your organization.  Review the Apache web logs for the pattern '/pls/' to see what mod_plsql pages are actively being used.  This fix is included and implemented as part of the January 2016 CPU.

12.0 Recommendations
  1. As no security patches are available for 12.0, the recommendation is to upgrade to 12.1 or 12.2 when possible.
  2. If upgrading is not feasible, Integrigy's AppDefend, an application firewall for Oracle EBS, provides virtual patching for EBS web security vulnerabilities as well as blocks common web vulnerabilities such as SQL injection and cross-site scripting (XSS).  AppDefend is a simple to implement and cost-effective solution when upgrading EBS is not feasible.
12.1 Recommendations
  1. 12.1 is supported for CPUs through October 2019 for implementations where the minimum baseline is maintained.  The current minimum baseline is the 12.1.3 Application Technology Stack (R12.ATG_PF.B.delta.3).  This minimum baseline should remain consistent until October 2019, unless a large number of functional module specific (i.e., GL, AR, AP, etc.) security vulnerabilities are discovered.
  2. For organizations where applying CPU patches is not feasible within 30 days of release or Internet facing self-service modules (i.e., iSupplier, iStore, etc.) are used, AppDefend should be used to provide virtual patching of known, not yet patched web security vulnerabilities and to block common web security vulnerabilities such as SQL injection and cross-site scripting (XSS).
12.2 Recommendations
  1. 12.2 is supported for CPUs through July 2021 as there will be no extended support for 12.2.  The current minimum baseline is 12.2.3 plus roll-up patches R12.AD.C.Delta.7 and R12.TXK.C.Delta.7.  Integrigy anticipates the minimum baseline will creep up as new RUPs (12.2.x) are released for 12.2.  Your planning should anticipate the minimum baseline will be 12.2.4 in 2017 and 12.2.5 in 2019 with the releases of 12.2.6 and 12.2.7.  With the potential release of 12.3, a minimum baseline of 12.2.7 may be required in the future.
  2. For organizations where applying CPU patches is not feasible within 30 days of release or Internet facing self-service modules (i.e., iSupplier, iStore, etc.) are used, AppDefend should be used to provide virtual patching of known, not yet patched web security vulnerabilities and to block common web security vulnerabilities such as SQL injection and cross-site scripting (XSS).
EBS Database Recommendations
  1. As of the October 2015 CPU, the only CPU supported database versions are 11.2.0.4, 12.1.0.1, and 12.1.0.2.  11.1.0.7 and 11.2.0.3 CPU support ended as of July 2015.  The final CPU for 12.1.0.1 will be July 2016.
  2. When possible, all EBS environments should be upgraded to 11.2.0.4 or 12.1.0.2, which are supported for all EBS versions including 11.5.10.2.
  3. If database security patches (SPU or PSU) cannot be applied in a timely manner, the only effective mitigating control is to strictly limit direct database access.  To restrict database access, Integrigy recommends using the EBS feature Managed SQLNet Access, Oracle Connection Manager, network restrictions and firewall rules, and/or terminal servers and bastion hosts.
  4. Regardless of whether security patches are regularly applied, general database hardening such as changing database passwords, optimizing initialization parameters, and enabling auditing should be done for all EBS databases.
Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Using SKIP LOCKED feature in DB Adapter polling

Darwin IT - Tue, 2016-01-19 04:53
Over the last few days I described a throttling mechanism using the DB Adapter. Today the 'Distributed Polling' functionality of the DB Adapter was mentioned to me, which uses the SKIP LOCKED clause of the database.

On one of the pages you'll get to check the 'Distributed Polling' option:
Leave it as it is, since it adds the 'SKIP LOCKED' option to the 'FOR UPDATE' clause.

In my example screenshot I set the Database Rows per Transaction, but you might want to set it to a sensibly higher value with regard to the 'RowsPerPollingInterval' that you need to set yourself in the JCA file:

The 'RowsPerPollingInterval' is unfortunately not an option in the UI. You might want to set it as a multiple of the MaxTransactionSize (denoted in the UI as 'Database Rows per Transaction').

A great explanation of this functionality is this A-Team blog post. Unfortunately the link to the documentation about 'SKIP LOCKED' in that post is broken; I found this one. A nice thing is that it suggests using AQ as the preferred solution instead of SKIP LOCKED.
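As a minimal SQL sketch of what a polling query with this option does under the covers (the table and column names here are made up, not what the adapter generates):

```sql
-- Claim unprocessed rows; rows already locked by another polling
-- node are silently skipped instead of blocking this session.
select id, payload
from my_queue_table
where status = 'N'
for update skip locked;
```

Each polling node therefore ends up with a disjoint set of rows, which is what makes distributed polling safe without the nodes coordinating with each other.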

Maybe a better way for throttling is using the AQ Adapter together with the properties

CrossFit and Coding: 3 Lessons for Women and Technology

Usable Apps - Mon, 2016-01-18 17:21

Yes, it’s January again. Time to act on that New Year resolution and get into the gym to burn off those holiday excesses. But have you got what it takes to keep going back?

Here’s Sarahi Mireles (@sarahimireles), our User Experience Developer in Oracle’s México Development Center, to tell us about how her CrossFit experience not only challenges the myths about fierce workouts being something only for the guys but about what that lesson can teach us about coding and women in technology too…

Introducing CrossFit: Me Against Myself

Heard about CrossFit? In case you haven’t, it’s an intense fitness program with a mix of weights, cardio, other exercises, and a lot of social media action too about how much we love doing CrossFit.

CrossFit is also a great way to keep fit and to make new friends. Most workouts are so tough that you’re left all covered in sweat, your muscles are on fire, and you feel like it's going to be impossible to even move the next day.

But you keep doing it anyway. 

One of the things I love most about CrossFit is that it is super dynamic. The Workout of the Day (WOD) is a combination of activities, from running outside, gymnastics, weight training, to swimming. You’re never doing the same thing two days in a row. 

Sounds awesome, right? Well, it is!

But some people, particularly women, unfortunately think CrossFit will make them bulk up and they’ll end up with HUGE muscles! A lot of people on the Internet are saying this, and lots of my friends believe it too: CrossFit is really for men and not women. 

From CrossFit to CrossWIT: Women in Technology (WIT)

Just like with CrossFit, there are many young women who also believe that coding is something meant only for men. Seems crazy, but let's be honest, hiring a woman who knows how to code can be a major challenge (my manager can tell you about that!).

So, why aren't women interested in either coding or lifting weights? Or are they? Is popular opinion the truth, that there are some things that women shouldn't do rather than cannot do?

The reality is that CrossFit won't make you bulk up like a bodybuilder, any more than studying science, technology, engineering, or mathematics (STEM) subjects in school will make you any less feminine. Women have been getting the wrong messages about gender and technology from the media and from advertising since we were little girls. We grew up believing that intense workout programs, just like learning computer languages, engineering, science, and math, are “man’s stuff”. And then we wonder where the women in technology are?!

3 Lessons to Challenge Conventions and Change Yourself

So, whether you are interested in these things or not, I would like to point out 3 key lessons, based on my experience, that I am sure will help you at some stage of your life: 

  1. Don't be afraid of defying those gender stereotypes. You can become whatever you want to be: a successful doctor, a great programmer, or even a CrossFit professional. Go for it!

  2. Choosing to be or to do something different from what others consider “normal” can be hard, but keep doing it! There are talented women in many fields of work who, despite the stereotypes, are awesome professionals, are respected for what they do, and have become key parts of their organizations and companies. Coding is a world largely dominated by men right now, with 70% of the jobs held by men, but that does not stop us from challenging and changing things so that diversity makes the tech industry a better place for everyone.

  3. If you are interested in coding, computer science, or technology in general, keep up with your passion by learning more from others by reading the latest tech blogs, for example. If you don't know where to start, here are some great examples to inspire you: our own VoX, Usable Apps, and AppsLab blogs. Read up about the Oracle Women in Technology (WIT) program too.

I'm sure you'll find something of interest in the work Oracle does and you can use our resources to pursue your interests in a career in technology! And who knows? Maybe you can join us at an Oracle Applications User Experience event in the future. We would love to see you there and meet you in person.

I think you will like what you can become! Just like the gym, don’t wait until next January to start.


APEX Dashboard Competition

Denes Kubicek - Sun, 2016-01-17 15:01
The APEX Dashboard Competition initiated by Tobias Arnhold is now online. If you want to compete against your colleagues, all you need to do is create a nice-looking dashboard based on the prepared set of data, create a packaged application, and send it to the jury. You can apply here: Submit your application and win some nice prizes. Hurry up, the closing date is Friday the 1st of April 2016.

Categories: Development

DML Operations On Partitioned Tables Can Restart On Invalidation

Randolf Geist - Sun, 2016-01-17 13:12
It's probably not that well known that Oracle can actually rollback / re-start the execution of a DML statement should the cursor become invalidated. By rollback / re-start I mean that Oracle actually performs a statement level rollback (so any modification already performed by that statement until that point gets rolled back), performs another optimization phase of the statement on re-start (due to the invalidation) and begins the execution of the statement from scratch. Note that this can happen multiple times - actually it's possible to end up in a kind of infinite loop when this happens, leading to statements that can run for very, very long (I've seen statements on Production environments executing for several days although a single execution would only take minutes).

The pre-requisites to meet for this to happen are not that complex or exotic:

- The target table to manipulate needs to be partitioned

- The cursor currently executing gets invalidated - either by running DDL (typically think of partition related operations) - or simply by gathering statistics on one of the objects involved in the statement

- The DML statement hasn't yet touched one of the partitions of the target table but attempts to do so after the cursor got invalidated

When the last condition is met, the statement performs a rollback, and since it got invalidated - which is one of the conditions to be met - another optimization phase happens, meaning that it's also possible to get different execution plans for the different execution attempts. When the execution plan is ready the execution begins from scratch.

According to my tests the issue described here applies to both conventional and direct-path inserts, merge statements (insert / update / delete) as well as serial and parallel execution. I haven't explicitly tested UPDATE and DELETE statements, but the assumption is that they are affected, too.

The behaviour is documented in the following note on MOS: "Insert Statement On Partitioned Tables Is RE-Started After Invalidation (Doc ID 1462003.1)" which links to Bug "14102209 : INSERT STATEMENT' IS RESTARTING BY ITSELF AFTER INVALIDATION" where you can also find some more comments on this behaviour. The issue seems to be that Oracle at that point is no longer sure if the partition information compiled into the cursor for the partitioned target table is still correct or not (and internally raises and catches a corresponding error, like "ORA-14403: Cursor invalidation detected after getting DML partition lock", leading to the re-try), so it needs to refresh that information, hence the re-optimization and re-start of the cursor.

Note that this also means that the DML statement might already have performed modifications to other partitions when, after being invalidated, it attempts to modify another partition it hasn't touched yet - all it takes is an attempt to modify a partition not yet touched by that statement.

It's also kind of nasty that the statement keeps running the potentially lengthy query part after being invalidated, only to find out that it needs to re-start when the first row is about to be applied to a target table partition not touched yet.

Note that applications typically run into this problem, when they behave like the following:

- There are longer running DML statements that typically take several seconds / minutes until they actually attempt a modification to a partitioned target table

- They either use DBMS_STATS to gather stats on one of the involved tables, typically using NO_INVALIDATE=>FALSE, which leads to an immediate invalidation of all affected cursors

- And/Or they perform partition related operations on one of the tables involved, like truncating, creating or exchanging partitions. Note that it is important to point out that it doesn't matter which object gets DDL / stats applied, so it's not limited to activity on the partitioned target table being modified - any object involved in the query can cause the cursor invalidation

In principle this is another variation of the general theme "Don't mix concurrent DDL with DML/queries on the same objects". Doing so is something that leads to all kinds of side effects, and the way the Oracle engine is designed means that it doesn't cope very well with doing so.

Here is a simple test case for reproducing the issue, using INSERTs in this case here (either via INSERT or MERGE statement):

create table t_target (
id number(*, 0) not null,
pkey number(*, 0) not null,
filler varchar2(500)
)
--segment creation immediate
partition by range (pkey) --interval (1)
(
partition pkey_0 values less than (1)
, partition pkey_1 values less than (2)
, partition pkey_2 values less than (3)
, partition pkey_3 values less than (4)
);

create table t_source
compress
as
select 1 as id, rpad('x', 100) as filler
from
(select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3),
(select /*+ cardinality(1e0) */ null from dual connect by level <= 1e0)
union all
select 1 as id, rpad('y', 100) as filler from dual;

-- Run this again once the DML statement below got started
exec dbms_stats.gather_table_stats(null, 't_source', no_invalidate=>false)

exec dbms_stats.gather_table_stats(null, 't_target', no_invalidate=>false)

----------------------------------------------------------------------------------------------------------------------------------
-- INSERT example --
-- Run above DBMS_STATS calls or any other command that invalidates the cursor during execution to force re-start of the cursor --
----------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

-- alter session set tracefile_identifier = 'insert_restart';

-- alter session set events '10046 trace name context forever, level 12';

-- exec sys.dbms_monitor.session_trace_enable(waits => true, binds => true/*, plan_stat => 'all_executions'*/)

insert /* append */ into t_target (id, pkey, filler)
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 1 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 2 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 3 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
;

-- exec sys.dbms_monitor.session_trace_disable

----------------------------------------------------------------------------------------------------------------------------------
-- MERGE example --
-- Run above DBMS_STATS calls or any other command that invalidates the cursor during execution to force re-start of the cursor --
----------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

merge /* append */ into t_target t
using (
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 1 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 2 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 3 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
) s
on (s.id = t.id)
when not matched then
insert (id, pkey, filler) values (s.id, s.pkey, s.filler)
;
The idea of the test case is to maximise the time until each UNION ALL branch produces data to insert by performing an inefficient HASH JOIN (that in fact generates a Cartesian product and needs to apply a costly REGEXP filter on that huge intermediate result) and forcing a sort on the join result, so rows will only be handed over to the parent operations after all rows have been processed in the join operation - and each branch generates data for a different partition of the target table. Typically it should take several seconds per branch to execute (if you need more time, just un-comment the additional REGEXP_REPLACE filters), so you should have plenty of time to cause the invalidation from another session.

This means that invalidating the cursor during the execution of each branch (for example by executing either of the two DBMS_STATS calls on the source or target table using NO_INVALIDATE=>FALSE) will lead to a re-start of the statement at the next attempt to write into a new target partition, possibly rolling back rows already inserted into other partitions.
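As a concrete example of such an invalidation (a sketch; run from a second session while the DML is still executing, with the table name as used in this test case):

```sql
-- Gathering stats with NO_INVALIDATE => FALSE invalidates all dependent
-- cursors immediately, triggering the re-start at the next attempt to
-- write into a new target partition.
exec dbms_stats.gather_table_stats(null, 'T_SOURCE', no_invalidate => FALSE)
```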

Diagnostics
If you run the provided INSERT or MERGE statement on newer versions of Oracle that include SQL_EXEC_START and SQL_EXEC_ID in V$ACTIVE_SESSION_HISTORY (or V$SESSION, for that matter), and you invalidate the cursor during execution before a partition of the target table is inserted into for the first time, then you can see these entries change as the statement restarts.

In such cases the INVALIDATIONS and LOADS increase in V$SQL accordingly and the OBJECT_STATUS changes from INVALID_UNAUTH to VALID again with each re-start attempt. In newer versions where you can configure the "plan_stat" information for SQL trace to "all_executions" you'll find STAT lines for each execution attempt dumped to the trace file, but only a single final EXEC line, where the elapsed time covers all execution attempts.
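A sketch of the kind of monitoring queries meant here (substitute the SQL_ID of the running statement):

```sql
-- Each re-start shows up as a new SQL_EXEC_ID / SQL_EXEC_START
-- combination for the same SQL_ID.
select sql_exec_id, min(sql_exec_start)
from   v$active_session_history
where  sql_id = '&sql_id'
group  by sql_exec_id
order  by 2;

-- INVALIDATIONS / LOADS increase and OBJECT_STATUS flips back to
-- VALID with each re-start attempt.
select invalidations, loads, object_status
from   v$sql
where  sql_id = '&sql_id';
```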

The oldest version I've tested was 10.2.0.4, and that one already showed the re-start behaviour, although I would be inclined to think that this wasn't the case with older versions. So if anybody still runs a version older than 10.2.0.4, I would be interested to hear whether the behaviour reproduces or not.

Integrating Oracle Document Cloud and Oracle Sales Cloud, maintaining data level business object security

Angelo Santagata - Thu, 2016-01-14 11:10
Just written an article on http://www.ateam-oracle.com/ on how one would integrate SalesCloud and Documents cloud together. Interested? go and check it out!
Link to article

Adding a contact to a sales cloud account

Angelo Santagata - Thu, 2016-01-14 11:04

A common question is: how do you add a contact to an Account in Sales Cloud? I've seen developers (including me) try to issue a mergeOrganization on the org, however this is not correct. In Oracle Fusion, Contacts are not "added" to accounts; they are "related" to accounts. Therefore, to add a contact to an account you need to add a "Relationship" which links the contact and the organization. Here's a sample payload.

WSDL <host>/crmCommonSalesParties/RelationshipService?wsdl

SOAP Request Payload

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/crmCommon/salesParties/relationshipService/types/" xmlns:rel="http://xmlns.oracle.com/apps/crmCommon/salesParties/relationshipService/">

   <soapenv:Header/>
 <soapenv:Body>
      <typ:createRelationship>
         <typ:relationship>
<!-- ContactId -->
            <rel:ObjectPartyId>300000000943126</rel:ObjectPartyId>
<!-- AccountId -->
            <rel:SubjectPartyId>300000000943078</rel:SubjectPartyId>
            <rel:RelationshipType>CONTACT</rel:RelationshipType>
            <rel:RelationshipCode>CONTACT</rel:RelationshipCode>
            <rel:CreatedByModule>HZ_WS</rel:CreatedByModule>
            <rel:Status>A</rel:Status>
         </typ:relationship>
      </typ:createRelationship>
   </soapenv:Body>
</soapenv:Envelope>

Groovy Script Example

def addContact =
[
        ObjectPartyId    : '300000000943126', /* Contact */
        SubjectPartyId   : '300000000943078', /* Account */
        RelationshipType : 'CONTACT',
        RelationshipCode : 'CONTACT',
        CreatedByModule  : 'HZ_WS',
        Status           : 'A'
];

adf.webServices.RelationshipService.createRelationship(addContact);

Extranet login redirects to intranet URL

Vikram Das - Thu, 2016-01-14 10:15
For an old 11.5.10.2 ERP, we are moving from the architecture of "EBS application server in DMZ" to the architecture of "Reverse Proxy in DMZ and EBS application server in intranet". After doing all the configurations, we hit the classic issue where you log in through the extranet URL visible on the public internet and it redirects to the intranet URL.

So https://extranet.example.com asks for SSO details and, after keying in the SSO username and password, goes to http://intranet.example.com.

The support.oracle.com article DMZ Configuration with Oracle E-Business Suite 11i (Doc ID 287176.1) lists 4 checks that could be the reason for this issue:

H6: Redirection to an Incorrect Server During Login
If you are getting redirected to an incorrect server during the login process, check the following:
  • Whether the hierarchy type of the profile options mentioned in Section 5.1 is set to SERVRESP:
    select PROFILE_OPTION_NAME, HIERARCHY_TYPE from fnd_profile_options where profile_option_name in
    ('APPS_WEB_AGENT','APPS_SERVLET_AGENT','APPS_JSP_AGENT','APPS_FRAMEWORK_AGENT','ICX_FORMS_LAUNCHER','ICX_DISCOVERER_LAUNCHER','ICX_DISCOVERER_VIEWER_LAUNCHER','HELP_WEB_AGENT','APPS_PORTAL','CZ_UIMGR_URL','ASO_CONFIGURATOR_URL','QP_PRICING_ENGINE_URL','TCF:HOST');
    PROFILE_OPTION_NAME              HIERARCHY_TYPE
    -------------------------------  --------------
    APPS_FRAMEWORK_AGENT             SERVRESP
    APPS_JSP_AGENT                   SERVRESP
    APPS_PORTAL                      SERVRESP
    APPS_SERVLET_AGENT               SERVRESP
    APPS_WEB_AGENT                   SERVRESP
    ASO_CONFIGURATOR_URL             SERVRESP
    CZ_UIMGR_URL                     SERVRESP
    HELP_WEB_AGENT                   SERVRESP
    ICX_DISCOVERER_LAUNCHER          SERVRESP
    ICX_DISCOVERER_VIEWER_LAUNCHER   SERVRESP
    ICX_FORMS_LAUNCHER               SERVRESP
    QP_PRICING_ENGINE_URL            SERVRESP
    TCF:HOST                         SERVRESP

    All good on this point

  • Whether the profile option values for the FND profile options (APPS_FRAMEWORK_AGENT, APPS_WEB_AGENT, APPS_JSP_AGENT, APPS_SERVLET_AGENT) are pointing to the correct node. Replace <node_id> with the node_id of the external or internal web tier. For example:
    select fnd_profile.value_specific('APPS_FRAMEWORK_AGENT',null,null,null,null,<node_id>) from dual;
    This query returned https://extranet.example.com

  • Whether the dbc file pointed to by the JVM parameter (JTFDBCFILE) in jserv.properties exists:
    wrapper.bin.parameters=-DJTFDBCFILE=
    This was incorrect. It was pointing to the intranet dbc file location.

  • Whether the value of the parameter APPL_SERVER_ID set in the dbc file for the node is the same as the value of server_id in the fnd_nodes table:
    select node_name, node_id, server_id from fnd_nodes;
    This value was being overwritten in the dbc file: with the appl_server_id of the intranet node whenever autoconfig ran on the intranet, and with the appl_server_id of the extranet node whenever autoconfig ran on the extranet, because the DBC file location and name were the same for both intranet and extranet.
I asked the DBA team to manually correct the dbc file name inside $IAS_CONFIG_HOME/Apache/Apache/Jserv/etc/jserv.properties, create a file of that name in $FND_SECURE/$CONTEXT_NAME.dbc on the extranet node, and bounce the services. Once that was done, we tested and it worked: no more redirection to the intranet URL.
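A quick sanity check along these lines can be scripted (a sketch; the sample dbc fragment and /tmp path below are illustrative only - point DBC_FILE at the real dbc file under $FND_SECURE on the extranet node and compare the output against SERVER_ID in FND_NODES):

```shell
# Extract APPL_SERVER_ID from a dbc file so it can be compared against
# the SERVER_ID column of FND_NODES for the extranet node.
# A sample dbc fragment is written to /tmp purely for illustration.
DBC_FILE=${DBC_FILE:-/tmp/sample.dbc}
printf 'GUEST_USER_PWD=GUEST/ORACLE\nAPPL_SERVER_ID=EXTRANET123\n' > "$DBC_FILE"

dbc_id=$(awk -F= '/^APPL_SERVER_ID/ {print $2}' "$DBC_FILE")
echo "APPL_SERVER_ID in $DBC_FILE: $dbc_id"
# Then compare with: select node_name, server_id from fnd_nodes;
```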

Then I asked them to correct the s_dbc_file_name variable in the context file of the extranet node, run autoconfig on the extranet, verify the value of the dbc file in the jserv.properties DJTFDBCFILE parameter, verify that the DBC file had the server_id of the extranet node, and restart all services.
Checked again, and it worked again.

So apart from checking the values of context file variables like s_webentryhost, s_webentrydomain and s_active_port, you also need to check the value of s_dbc_file_name while verifying the setups for an extranet configuration. This can happen in 11i, R12.1 and R12.2 as well.
Categories: APPS Blogs

Indexes and Initrans (Blackstar)

Richard Foote - Thu, 2016-01-14 00:26
It’s been a very tough week so to help keep my mind off rather sad events, thought I’ll finish off one of my unpublished articles. Initrans is a physical attribute that determines the initial number of concurrent transaction entries allocated within each data block for a given table/index/cluster. Every transaction that updates a block has to acquire an […]
Categories: DBA Blogs

Gather system stats on EXADATA X4-2 throws an ORA-20001,ORA-06512

Syed Jaffar - Wed, 2016-01-13 00:40
After a recent patch deployment on an Exadata X4-2 system, the following error was encountered whilst gathering system stats in Exadata mode (dbms_stats.gather_system_stats('EXADATA')):

SQL> exec dbms_stats.gather_system_stats('EXADATA');
BEGIN dbms_stats.gather_system_stats('EXADATA'); END;

*
ERROR at line 1:
ORA-20001: Invalid or inconsistent input values
ORA-06512: at "SYS.DBMS_STATS", line 27155
ORA-06512: at line 1

It appears to be a bug, and the following workaround should resolve the issue:

(1) As SYS user execute the scripts below:

SQL> @?/rdbms/admin/dbmsstat.sql
SQL> @?/rdbms/admin/prvtstas.plb
SQL> @?/rdbms/admin/prvtstat.plb
SQL> @?/rdbms/admin/prvtstai.plb

(2) Then, re-run your DBMS_STATS call:

exec dbms_stats.gather_system_stats('EXADATA');

Indeed this worked for us, and hopefully it will work for you as well.
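To confirm that the Exadata-mode statistics actually landed, the values in SYS.AUX_STATS$ can be inspected afterwards (a sketch; in EXADATA mode the MBRC row is typically set to a large value such as 128):

```sql
-- System statistics live in SYS.AUX_STATS$; after gathering in
-- EXADATA mode the MBRC row should be populated.
select pname, pval1
from   sys.aux_stats$
where  sname = 'SYSSTATS_MAIN'
order  by pname;
```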

Exclusive Oracle CloudWorld Hands-on-lab!

OTN TechBlog - Tue, 2016-01-12 11:30

Please join us at the Oracle CloudWorld Developer Conference in New York on January 19th for an exclusive Oracle CloudWorld hands-on lab. The lab has limited seating, so confirm your participation today and learn how the cloud will transform how you work. REGISTER TODAY! Time: January 19, 2016, 10:00 AM - 4:00 PM / Location: Waldorf Astoria New York, 540 Lexington Avenue, New York, NY 10022

Topics:
»   Provision a new database in the cloud with Oracle Database Cloud Service (DBCS)
»   Gain experience in building new applications for the Oracle Cloud
»   Set up connectivity between the Compute tier and DBCS
»   Learn about the REST APIs available to access the Oracle Cloud

Following the session, you can meet with the Oracle Cloud experts to take a deeper dive into specific areas of interest.

Requirements:
Each participant must bring their own laptop with the following:
»   Intel Core i5 Laptop with 8GB Memory, 55GB Free Hard Disk Space
»   Windows 7 64-Bit
»   Microsoft Internet Explorer, Apple Safari or Mozilla Browser

Sign up today! Limited seating for this session.

Your registration gives you full access to Oracle CloudWorld Developers Conference. Enjoy networking opportunities, product demos, break-out sessions and a keynote from Chris Tonas, Vice President of Mobility and Development tools at Oracle on best practices for Cloud development and why innovation begins with the cloud.

Netflix for Webinars

Gerger Consulting - Tue, 2016-01-12 01:35
This February, we are launching ProHuddle, a website where you can find high quality webinars for every topic you are interested in.

Our core community consists of Oracle professionals, so we’ll start with Oracle related topics and branch off from there to other areas.

Benefits of ProHuddle for You
It's Free
Attending conferences can be expensive. ProHuddle brings you high quality presentations for free.

Content Discovery
You miss a lot of content you might enjoy, simply because you never hear about it. ProHuddle curates presentations, surfaces the best ones and notifies you about them. You'll have access to presentations from all over the world, connecting you to experts from everywhere.

Easy to Attend
Attending conferences can be time consuming. You can attend ProHuddle webinars from the comfort of your home or your office, on any device.

More Engaging
Conference presentations have time constraints. At ProHuddle, there is no next presentation to catch or room to clear. The presenter has as much time as she needs to deliver her message, answer questions and interact with the audience (which, to me, is the best part of all).

If you are a curious, open-minded person with an interest in new people, ideas, products and technologies, sign up at www.prohuddle.com. We are launching in February 2016!
Categories: Development

Goodbye Spaceboy

Andrew Clarke - Mon, 2016-01-11 22:45
"Sometimes I feel
The need to move on
So I pack a bag
And move on"

Can't believe Bowie has taken that final train.

David Bowie's music has been part of my life pretty much since I started listening to pop music seriously. Lodger was the first Bowie album I listened to all the way through. It's probably his most under-appreciated album. It's funny to think that back then in 1979 Bowie was dismissed as past it, a boring old fart who should be swept aside by the vital surge of post-punk bands. Because those bands were raised on Ziggy, they were taught to dance by the Thin White Duke and they learnt that moodiness from listening to Low in darkened bedrooms too many times.

Even if you don't listen to Bowie, probably your favourite bands did. If they style their hair or wear make up, they listened to Bowie. If they play synths they listened to Bowie. If they make dance music for awkward white boys at indie discos they listened to Bowie. If they lurk in shadows smoking cigarettes in their videos they listened to Bowie. That's a large part of his legacy.

The other thing about Bowie is that his back catalogue has something for pretty much everybody. People who loved Ziggy Stardust might loathe the plastic soul phase. Hardly anybody gets Hunky Dory; but for some fans it's their favourite album. My favourite is the first side of "Heroes" and the second side of Low, but that whole stretch from Young Americans to Lodger is a seam of sustained musical invention unparalleled by any other pop act. (Judicious picking of collaborators is an art in itself.)

Of course, there was a long fallow period. Tin Machine weren't as bad as we thought at the time, but the drum'n'bass was too 'Dad dancing at a wedding reception' for comfort. So it was a relief when he finally started producing decent albums again. Heathen has some lovely moments. The Next Day was something of a return to form (although a bit too long to be a classic). Then there's Blackstar.

It's almost as though Bowie hung on just long enough that Blackstar would be reviewed as his latest album, rather than his last one. The four and five star reviews were earned through merit rather than the mawkishness which would have accompanied a posthumous release. And it really is pretty good. When I first heard the title track it sounded like Bowie was taking a cue from Scott Walker's latter period: edgy, experimental and deliberately designed not to be a fan-pleaser. But, unlike Walker, Bowie can't do wilfully unlistenable. Even in the midst of all that drone and skronk there are tunes. He can't help himself, his pop sensibility is too strong. Which is why I've already listened to Blackstar more times than I've listened to Bish Bosch.

So, farewell David Bowie. We're all going to miss you terribly. "May God's love be with you."

Conference Season

Scott Spendolini - Mon, 2016-01-11 21:56
It's conference season!  That means time to start looking at flights and hotels and ensuring that while I'm on the road, my wife is not at work (no easy task).  In addition to many of the conferences that I've been presenting at for years, I have a couple of new additions to the list.

Here it is:

RMOUG - Denver, CO
One of the larger conferences, the year usually starts out in Denver for me, where crowds are always large and appreciative.  RMOUG has some of the most dedicated volunteers and puts on a great conference year after year.

GAOUG - Atlanta, GA
This will be my first time at GAOUG, and I'm excited to help them get their annual conference started. Lots of familiar faces will be in attendance. At only $150, if you're near Atlanta, it's worth the drive.

OCOJ - Williamsburg, VA (submitted)
This will (hopefully) also be my first Oracle Conference on the James.  Held in historic Williamsburg, OCOJ is also a steal at just $99.

UTOUG - Salt Lake City, UT
I'll head back out west to Utah for UTOUG.  Always good to catch up with the local Oracle community in Utah each year.  Plus, I make my annual SLC brewery tour while there.

GLOC - Cleveland, OH (submitted)
Steadily growing in popularity, the folks at GLOC put on an excellent conference.  Waiting to hear back on whether my sessions got accepted.

KSCOPE - Chicago, IL
Like everyone, I'm looking forward to one of the best annual technical conferences that I've regularly attended.  In addition to the traditional APEX content, there are a few surprises planned this year!

ECO - Raleigh/Durham, NC (planning on submitting)
ECO - formerly VOUG - is also growing in numbers each year.  There's a lot of tech in the RDU area, and many of the talented locals present here.  Bonus: a Jeff Smith-curated brewery/bar tour the night before.

OOW - San Francisco, CA (planning on submitting)
As always, the conference year typically ends with the biggest one - Oracle OpenWorld.  While there's not as much APEX content as there once was, it's always been more focused on the marketing side of technology, which is good to hear every now and then.



User group conferences are one of the best types of training available, especially since they typically cost just a couple hundred dollars.  I encourage you to check out one near you.  Smaller groups are also great places to get an opportunity to present.  In addition to annual conferences, many smaller groups meet monthly or quarterly and are always on the lookout for new content.

David Bowie 1947-2016. My Memories.

Richard Foote - Mon, 2016-01-11 20:06
In mid-April 1979, as a nerdy 13 year old, I sat in my bedroom in Sale, North England listening to the radio when a song called “Boys Keep Swinging” came on by a singer called David Bowie, who I had never heard of before. I instantly loved it and taped it the next time it came on the radio via my […]
Categories: DBA Blogs

Intro to JCS

Angelo Santagata - Mon, 2016-01-11 06:04

Happy New Year all!!!

To start this year's blogging I thought I'd highlight a nice and easy quick win on getting started with Oracle Java Cloud Service. This link https://www.youtube.com/watch?v=5DHsE2x5mks takes you to a YouTube video which I found very clear and easy to follow.

Enjoy

Pages

Subscribe to Oracle FAQ aggregator