Feed aggregator

January 2016 Critical Patch Update Released

Oracle Security Team - Tue, 2016-01-19 18:11

Oracle today released the January 2016 Critical Patch Update.  With this Critical Patch Update release, the Critical Patch Update program enters its 11th year of existence (the first Critical Patch Update was released in January 2005).  As a reminder, Critical Patch Updates are currently released 4 times a year, on a schedule announced a year in advance.  Oracle recommends that customers apply this Critical Patch Update as soon as possible.

The January 2016 Critical Patch Update provides fixes for a wide range of product families, including: 

  • Oracle Database
    • None of these database vulnerabilities are remotely exploitable without authentication. 
  • Java SE vulnerabilities
    • Oracle strongly recommends that home users visit the java.com web site to ensure that they are using the most recent version of Java, and advises them to remove obsolete Java SE versions from their computers if they are not absolutely needed.
  • Oracle E-Business Suite
    • Oracle’s ongoing assurance effort with E-Business Suite helps remediate security issues and is intended to help enhance the overall security posture provided by E-Business Suite.

Oracle takes security seriously, and strongly encourages customers to keep up with newer releases in order to benefit from Oracle’s ongoing security assurance effort.  

For more information:

The January 2016 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpujan2016-2367955.html

The Oracle Software Security Assurance web site is located at https://www.oracle.com/support/assurance/index.html.

Oracle Applications Lifetime Support Policy is located at http://www.oracle.com/us/support/library/lifetime-support-applications-069216.pdf.


Drop table cascade and reimport

Laurent Schneider - Tue, 2016-01-19 12:26

Happy new year 🙂

Today I had to import a subset of a database and the challenge was to restore a parent table without restoring its children. It took me some minutes to write the code, but it would have taken days to restore the whole database.

CREATE TABLE t1(
  c1 NUMBER CONSTRAINT t1_pk PRIMARY KEY);
INSERT INTO t1 (c1) VALUES (1);
CREATE TABLE t2(
  c1 NUMBER CONSTRAINT t2_t1_fk REFERENCES t1,
  c2 NUMBER CONSTRAINT t2_pk PRIMARY KEY);
INSERT INTO t2 (c1, c2) VALUES (1, 2);
CREATE TABLE t3(
  c2 NUMBER CONSTRAINT t3_t2_fk REFERENCES t2,
  c3 NUMBER CONSTRAINT t3_pk PRIMARY KEY);
INSERT INTO t3 (c2, c3) VALUES (2, 3);
CREATE TABLE t4(
  c3 NUMBER CONSTRAINT t4_t3_fk REFERENCES t3,
  c4 NUMBER CONSTRAINT t4_pk PRIMARY KEY);
INSERT INTO t4 (c3, c4) VALUES (3, 4);
COMMIT;

expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott.dmp reuse_dumpfiles=y

Now what happens if I want to restore T2 and T3?

If possible, I check the dictionary for foreign keys from other tables pointing to T2 and T3.

SELECT table_name, constraint_name
FROM user_constraints
WHERE r_constraint_name IN (
    SELECT constraint_name
    FROM user_constraints
    WHERE table_name IN ('T2', 'T3'))
  AND table_name NOT IN ('T2', 'T3');

TABLE_NAME                     CONSTRAINT_NAME               
------------------------------ ------------------------------
T4                             T4_T3_FK                      

T4 points to T3 and T4 has data.

Now I can drop my tables with the cascade option:

drop table t2 cascade constraints;
drop table t3 cascade constraints;

Now I import: first the tables, then the referential constraint that was dropped by the cascade clause but does not belong to T2/T3.

impdp scott/tiger tables=T2,T3 directory=DATA_PUMP_DIR dumpfile=scott.dmp

impdp scott/tiger  "include=ref_constraint:\='T4_T3_FK'" directory=DATA_PUMP_DIR dumpfile=scott.dmp

It’s probably possible to do it in one import, but the include syntax is horrible. I tried there

Oracle Database Critical Patch Update (CPU) Planning for 2016

With the start of the new year, it is now time to think about Oracle Critical Patch Updates for 2016.  Oracle releases security patches in the form of Critical Patch Updates (CPU) each quarter (January, April, July, and October).  These patches include important fixes for security vulnerabilities in the Oracle Database.  The CPUs are only available for certain versions of the Oracle Database; therefore, advance planning is required to ensure supported versions are being used, and mitigating controls may be required when the CPUs cannot be applied in a timely manner.

CPU Supported Database Versions

As of the October 2015 CPU, the only CPU supported database versions are 11.2.0.4, 12.1.0.1, and 12.1.0.2.  The final CPU for 12.1.0.1 will be July 2016.  11.2.0.4 will be supported until October 2020 and 12.1.0.2 will be supported until July 2021.

11.1.0.7 and 11.2.0.3 CPU support ended as of July 2015. 

Database CPU Recommendations
  1. When possible, all Oracle databases should be upgraded to 11.2.0.4 or 12.1.0.2.  This will ensure CPUs can be applied through at least October 2020.
     
  2. [12.1.0.1] New databases or application/database upgrade projects currently testing 12.1.0.1 should immediately look to implement 12.1.0.2 instead of 12.1.0.1, even if this will require additional effort or testing.  With the final CPU for 12.1.0.1 being July 2016, unless a project is implementing in January or February 2016, we believe it is imperative to move to 12.1.0.2 to ensure long-term CPU support.
     
  3. [11.2.0.3 and prior] If a database cannot be upgraded, the only effective mitigating control for many database security vulnerabilities is to strictly limit direct database access.  In order to restrict database access, Integrigy recommends using valid node checking, Oracle Connection Manager, network restrictions and firewall rules, and/or terminal servers and bastion hosts; a sketch of a valid node checking configuration follows below.  Direct database access is required to exploit database security vulnerabilities, and most often a valid database session is required.
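
A minimal sketch of the valid node checking approach: on the database server, the listener-side sqlnet.ora can restrict which hosts are allowed to connect. The host name and address below are placeholders, and this illustrates the mechanism rather than a complete lockdown; the listener must be restarted after editing.

# sqlnet.ora on the database server (placeholder hosts)
TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (appserver1.example.com, 192.168.10.21)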
     

Regardless of whether security patches are regularly applied, general database hardening such as changing database passwords, optimizing initialization parameters, and enabling auditing should be done for all Oracle databases. 

 

Oracle Database, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Oracle E-Business Suite Critical Patch Update (CPU) Planning for 2016

With the start of the new year, it is now time to think about Oracle Critical Patch Updates for 2016.  Oracle releases security patches in the form of Critical Patch Updates (CPU) each quarter (January, April, July, and October).  These patches include important fixes for security vulnerabilities in the Oracle E-Business Suite and its technology stack.  The CPUs are only available for certain versions of the Oracle E-Business Suite and Oracle Database; therefore, advance planning is required to ensure supported versions are being used, and mitigating controls may be required when the CPUs cannot be applied in a timely manner.

For 2016, CPUs for Oracle E-Business Suite will become a significant focus as a large number of security vulnerabilities for the Oracle E-Business Suite will be fixed.  The January 2016 CPU for the Oracle E-Business Suite (EBS) will include 78 security fixes for a wide range of security bugs, many of them high risk, such as SQL injection in web-facing self-service modules.  Integrigy anticipates the next few quarters will have an above-average number of EBS security fixes (the average is 7 per CPU since 2005).  This large number of security bugs puts Oracle EBS environments at significant risk, as many of these bugs will be high risk and well publicized.

Supported Oracle E-Business Suite Versions

Starting with the April 2016 CPU, only 12.1 and 12.2 will be fully supported for CPUs moving forward.  11.5.10 CPU patches for April 2016, July 2016, and October 2016 will only be available to customers with an Advanced Customer Support (ACS) contract.  There will be no 11.5.10 CPU patches after October 2016.  CPU support for 12.0 ended as of October 2015.

11.5.10 Recommendations
  1. When possible, the recommendation is to upgrade to 12.1 or 12.2.
  2. Obtaining an Advanced Customer Support (ACS) contract is a short-term solution (until October 2016), but an expensive one.
  3. An alternative to applying CPU patches is to use Integrigy's AppDefend, an application firewall for Oracle EBS, in proxy mode, which blocks EBS web security vulnerabilities.  AppDefend provides virtual patching and can effectively replace patching of EBS web security vulnerabilities.

In order to mitigate some mod_plsql security vulnerabilities, all Oracle EBS 11i environments should look at limiting the enabled mod_plsql web pages.  The script /patch/115/sql/txkDisableModPLSQL.sql can be used to limit the allowed pages listed in FND_ENABLED_PLSQL.  This script was introduced in 11i.ATG_PF.H and the most recent version is in 11i.ATG_PF.H.RUP7.  It must be thoroughly tested, as it may block a few mod_plsql pages used by your organization.  Review the Apache web logs for the pattern '/pls/' to see which mod_plsql pages are actively being used (a sketch follows below).  This fix is included and implemented as part of the January 2016 CPU.
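
For example, assuming the common/combined Apache access log format (where the request path is the seventh field) and a log file named access_log, a one-liner along these lines summarizes which mod_plsql pages are actually hit; the file name and path are placeholders for your environment:

grep '/pls/' access_log | awk '{print $7}' | sort | uniq -c | sort -rn | head -20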

12.0 Recommendations
  1. As no security patches are available for 12.0, the recommendation is to upgrade to 12.1 or 12.2 when possible.
  2. If upgrading is not feasible, Integrigy's AppDefend, an application firewall for Oracle EBS, provides virtual patching for EBS web security vulnerabilities as well as blocking common web vulnerabilities such as SQL injection and cross-site scripting (XSS).  AppDefend is a simple-to-implement and cost-effective solution when upgrading EBS is not feasible.
12.1 Recommendations
  1. 12.1 is supported for CPUs through October 2019 for implementations where the minimum baseline is maintained.  The current minimum baseline is the 12.1.3 Application Technology Stack (R12.ATG_PF.B.delta.3).  This minimum baseline should remain consistent until October 2019, unless a large number of functional module-specific (e.g., GL, AR, AP) security vulnerabilities are discovered.
  2. For organizations where applying CPU patches is not feasible within 30 days of release, or where Internet-facing self-service modules (e.g., iSupplier, iStore) are used, AppDefend should be used to provide virtual patching of known but not yet patched web security vulnerabilities and to block common web security vulnerabilities such as SQL injection and cross-site scripting (XSS).
12.2 Recommendations
  1. 12.2 is supported for CPUs through July 2021 as there will be no extended support for 12.2.  The current minimum baseline is 12.2.3 plus roll-up patches R12.AD.C.Delta.7 and R12.TXK.C.Delta.7.  Integrigy anticipates the minimum baseline will creep up as new RUPs (12.2.x) are released for 12.2.  Your planning should anticipate the minimum baseline will be 12.2.4 in 2017 and 12.2.5 in 2019 with the releases of 12.2.6 and 12.2.7.  With the potential release of 12.3, a minimum baseline of 12.2.7 may be required in the future.
  2. For organizations where applying CPU patches is not feasible within 30 days of release, or where Internet-facing self-service modules (e.g., iSupplier, iStore) are used, AppDefend should be used to provide virtual patching of known but not yet patched web security vulnerabilities and to block common web security vulnerabilities such as SQL injection and cross-site scripting (XSS).
EBS Database Recommendations
  1. As of the October 2015 CPU, the only CPU supported database versions are 11.2.0.4, 12.1.0.1, and 12.1.0.2.  11.1.0.7 and 11.2.0.3 CPU support ended as of July 2015.  The final CPU for 12.1.0.1 will be July 2016.
  2. When possible, all EBS environments should be upgraded to 11.2.0.4 or 12.1.0.2, which are supported for all EBS versions including 11.5.10.2.
  3. If database security patches (SPU or PSU) cannot be applied in a timely manner, the only effective mitigating control is to strictly limit direct database access.  In order to restrict database access, Integrigy recommends using the EBS feature Managed SQLNet Access, Oracle Connection Manager, network restrictions and firewall rules, and/or terminal servers and bastion hosts.
  4. Regardless of whether security patches are regularly applied, general database hardening such as changing database passwords, optimizing initialization parameters, and enabling auditing should be done for all EBS databases.
Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Recover from ORA-01172 & ORA-01151

DBASolved - Tue, 2016-01-19 07:48

This morning I was working on an Oracle Management Repository (OMR) for a test Enterprise Manager that is used by a few consultants I work with. When I logged into the box, I found that the OMR was down. When I went to start the database, I was greeted with ORA-01172 and ORA-01151.

These errors basically say:

ORA-01172 – recovery of thread % stuck at block % of file %
ORA-01151 – use media recovery to recover block, restore backup if needed

So how do I recover from this? The solution is simple; I just needed to perform the following steps:

1. Shut down the database

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

2. Mount the database

SQL> startup mount;
ORACLE instance started.
Total System Global Area 1.0033E+10 bytes
Fixed Size 2934696 bytes
Variable Size 1677723736 bytes
Database Buffers 8321499136 bytes
Redo Buffers 30617600 bytes
Database mounted.

3. Recover the database

SQL> recover database;
Media recovery complete.

4. Open the database with “alter database”

SQL> alter database open;
Database altered.

At this point, you should be able to access the database (OMR) and then have the EM environment up and running.

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Database
Categories: DBA Blogs

Using SKIP LOCKED feature in DB Adapter polling

Darwin IT - Tue, 2016-01-19 04:53
I spent the last few days describing a throttle mechanism using the DB Adapter. Today the 'Distributed Polling' functionality of the DB Adapter was mentioned to me, which uses the SKIP LOCKED clause of the database.

On one of the wizard pages you'll get to check the 'Distributed Polling' option:
Leave it as it is, since it adds the 'SKIP LOCKED' option to the 'FOR UPDATE' clause.

In my example screendump I set the Database Rows per Transaction, but you might want to set it to a sensibly higher value with regard to the 'RowsPerPollingInterval' that you need to set yourself in the JCA file:

The 'RowsPerPollingInterval' is not an option in the UI, unfortunately. You might want to set it as a multiple of the MaxTransactionSize (in the UI denoted as 'Database Rows per Transaction'); a sketch of the resulting JCA entries follows below.
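
As an illustration, the polling properties end up in the generated .jca file roughly as below. The DBActivationSpec class and property names are the standard DB Adapter ones, but the descriptor, query name, and values are placeholders for your own artifact:

<!-- fragment of a generated DB Adapter .jca file (illustrative values) -->
<activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
  <property name="DescriptorName" value="PollMyTable.MyTable"/>
  <property name="QueryName" value="PollMyTableSelect"/>
  <property name="PollingInterval" value="5"/>
  <property name="MaxTransactionSize" value="10"/>
  <property name="RowsPerPollingInterval" value="100"/>
</activation-spec>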

A great explanation of this functionality is this A-Team blogpost. Unfortunately the link to the documentation about 'SKIP LOCKED' in that post is broken; I found this one. A nice thing is that it suggests using AQ as the preferred solution instead of SKIP LOCKED.

Maybe a better way for throttling is using the AQ Adapter together with the properties

CrossFit and Coding: 3 Lessons for Women and Technology

Usable Apps - Mon, 2016-01-18 17:21

Yes, it’s January again. Time to act on that New Year resolution and get into the gym to burn off those holiday excesses. But have you got what it takes to keep going back?

Here’s Sarahi Mireles (@sarahimireles), our User Experience Developer in Oracle’s México Development Center, to tell us how her CrossFit experience not only challenges the myth that fierce workouts are something only for the guys, but also what that lesson can teach us about coding and women in technology too…

Introducing CrossFit: Me Against Myself

Heard about CrossFit? In case you haven’t, it’s an intense fitness program with a mix of weights, cardio, other exercises, and a lot of social media action too about how much we love doing CrossFit.

CrossFit is also a great way to keep fit and to make new friends. Most workouts are so tough that you’re left all covered in sweat, your muscles are on fire, and you feel like it's going to be impossible to even move the next day.

But you keep doing it anyway. 

One of the things I love most about CrossFit is that it is super dynamic. The Workout of the Day (WOD) is a combination of activities, from running outside and gymnastics to weight training and swimming. You’re never doing the same thing two days in a row. 

Sounds awesome, right? Well, it is!

But some people, particularly women, unfortunately think CrossFit will make them bulk up and they’ll end up with HUGE muscles! A lot of people on the Internet are saying this, and lots of my friends believe it too: CrossFit is really for men and not women. 

From CrossFit to CrossWIT: Women in Technology (WIT)

Just like with CrossFit, there are many young women who also believe that coding is something meant only for men. Seems crazy, but let's be honest, hiring a woman who knows how to code can be a major challenge (my manager can tell you about that!).

So, why aren't women interested in either coding or lifting weights? Or are they? Is popular opinion the truth, that there are some things that women shouldn't do rather than cannot do?

The reality is that CrossFit won't make you bulk up like a bodybuilder, any more than studying science, technology, engineering or mathematics (STEM) subjects in school will make you any less feminine. Women have been getting the wrong messages about gender and technology from the media and from advertising since we were little girls. We grew up believing that intense workout programs, just like computer languages, engineering, science and math, are “man’s stuff”. And then we wonder where are the women in technology?!

3 Lessons to Challenge Conventions and Change Yourself

So, whether you are interested in these things or not, I would like to point out 3 key lessons, based on my experience, that I am sure will help you at some stage of your life: 

  1. Don't be afraid of defying those gender stereotypes. You can become whatever you want to be: a successful doctor, a great programmer, or even a CrossFit professional. Go for it!

  2. Choosing to be or to do something different from what others consider “normal” can be hard, but keep doing it! There are talented women in many fields of work who, despite the stereotypes, are awesome professionals, are respected for what they do, and have become key parts of their organizations and companies. Coding is a world largely dominated by men now, with 70% of the jobs taken by males, but that does not stop us from challenging and changing things so that diversity makes the tech industry a better place for everyone.

  3. If you are interested in coding, computer science, or technology in general, keep up with your passion by learning from others, for example by reading the latest tech blogs. If you don't know where to start, here are some great ones to inspire you: our own VoX, Usable Apps, and AppsLab blogs. Read up about the Oracle Women in Technology (WIT) program too.

I'm sure you'll find something of interest in the work Oracle does and you can use our resources to pursue your interests in a career in technology! And who knows? Maybe you can join us at an Oracle Applications User Experience event in the future. We would love to see you there and meet you in person.

I think you will like what you can become! Just like the gym, don’t wait until next January to start.


APEX Dashboard Competition

Denes Kubicek - Sun, 2016-01-17 15:01
APEX Dashboard Competition, initiated by Tobias Arnhold, is now online. If you want to compete against your colleagues, all you need to do is create a nice-looking dashboard based on the prepared set of data, create a packaged application, and send it to the jury. You can apply here: Submit your application and win some nice prizes. Hurry up! The closing is on Friday the 1st of April 2016.

Categories: Development

DML Operations On Partitioned Tables Can Restart On Invalidation

Randolf Geist - Sun, 2016-01-17 13:12
It's probably not that well known that Oracle can actually rollback / re-start the execution of a DML statement should the cursor become invalidated. By rollback / re-start I mean that Oracle actually performs a statement level rollback (so any modification already performed by that statement until that point gets rolled back), performs another optimization phase of the statement on re-start (due to the invalidation) and begins the execution of the statement from scratch. Note that this can happen multiple times - actually it's possible to end up in a kind of infinite loop when this happens, leading to statements that can run for very, very long (I've seen statements on Production environments executing for several days although a single execution would only take minutes).

The pre-requisites to meet for this to happen are not that complex or exotic:

- The target table to manipulate needs to be partitioned

- The cursor currently executing gets invalidated - either by running DDL (typically think of partition related operations) - or simply by gathering statistics on one of the objects involved in the statement

- The DML statement hasn't yet touched one of the partitions of the target table but attempts to do so after the cursor got invalidated

When the last condition is met, the statement performs a rollback, and since it got invalidated - which is one of the conditions to be met - another optimization phase happens, meaning that it's also possible to get different execution plans for the different execution attempts. When the execution plan is ready the execution begins from scratch.

According to my tests the issue described here applies to both conventional and direct-path inserts, merge statements (insert / update / delete) as well as serial and parallel execution. I haven't explicitly tested UPDATE and DELETE statements, but the assumption is that they are affected, too.

The behaviour is documented in the following note on MOS: "Insert Statement On Partitioned Tables Is RE-Started After Invalidation (Doc ID 1462003.1)" which links to Bug "14102209 : INSERT STATEMENT' IS RESTARTING BY ITSELF AFTER INVALIDATION" where you can also find some more comments on this behaviour. The issue seems to be that Oracle at that point is no longer sure if the partition information compiled into the cursor for the partitioned target table is still correct or not (and internally raises and catches a corresponding error, like "ORA-14403: Cursor invalidation detected after getting DML partition lock", leading to the re-try), so it needs to refresh that information, hence the re-optimization and re-start of the cursor.

Note that this also means that the DML statement might already have performed modifications to other partitions when, after being invalidated, it attempts to modify another partition - all it takes is an attempt to modify a partition not yet touched by that statement.

It's also kind of nasty that the statement keeps running the potentially lengthy query part after being invalidated only to find out it needs to re-start after the first row is attempted to be applied to a target table partition not touched yet.

Note that applications typically run into this problem, when they behave like the following:

- There are longer running DML statements that typically take several seconds / minutes until they attempt to actually perform a modification to a partitioned target table

- They either use DBMS_STATS to gather stats on one of the involved tables, typically using NO_INVALIDATE=>FALSE, which leads to an immediate invalidation of all affected cursors

- And/or they perform partition related operations on one of the tables involved, like truncating, creating or exchanging partitions. Note that it is important to point out that it doesn't matter which object gets DDL / stats applied, so it's not limited to activity on the partitioned target table being modified - any object involved in the query can cause the cursor invalidation

In principle this is another variation of the general theme "Don't mix concurrent DDL with DML/queries on the same objects". Doing so is something that leads to all kinds of side effects, and the way the Oracle engine is designed means that it doesn't cope very well with doing so.

Here is a simple test case for reproducing the issue, using INSERTs in this case here (either via INSERT or MERGE statement):

create table t_target (
id number(*, 0) not null,
pkey number(*, 0) not null,
filler varchar2(500)
)
--segment creation immediate
partition by range (pkey) --interval (1)
(
partition pkey_0 values less than (1)
, partition pkey_1 values less than (2)
, partition pkey_2 values less than (3)
, partition pkey_3 values less than (4)
);

create table t_source
compress
as
select 1 as id, rpad('x', 100) as filler
from
(select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3),
(select /*+ cardinality(1e0) */ null from dual connect by level <= 1e0)
union all
select 1 as id, rpad('y', 100) as filler from dual;

-- Run this again once the DML statement below got started
exec dbms_stats.gather_table_stats(null, 't_source', no_invalidate=>false)

exec dbms_stats.gather_table_stats(null, 't_target', no_invalidate=>false)

----------------------------------------------------------------------------------------------------------------------------------
-- INSERT example --
-- Run above DBMS_STATS calls or any other command that invalidates the cursor during execution to force re-start of the cursor --
----------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

-- alter session set tracefile_identifier = 'insert_restart';

-- alter session set events '10046 trace name context forever, level 12';

-- exec sys.dbms_monitor.session_trace_enable(waits => true, binds => true/*, plan_stat => 'all_executions'*/)

insert /* append */ into t_target (id, pkey, filler)
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 1 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 2 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 3 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
;

-- exec sys.dbms_monitor.session_trace_disable

----------------------------------------------------------------------------------------------------------------------------------
-- MERGE example --
-- Run above DBMS_STATS calls or any other command that invalidates the cursor during execution to force re-start of the cursor --
----------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

merge /* append */ into t_target t
using (
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 1 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 2 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
union all
select * from (
select /*+
use_hash(a b)
no_eliminate_oby
*/
a.id, 3 as pkey, a.filler
from t_source a, t_source b
where a.id = b.id
and (
regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
--or regexp_replace(a.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm') != regexp_replace(b.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'm')
or (b.filler = rpad('y', 100) and a.filler = rpad('y', 100))
)
order by a.id
)
) s
on (s.id = t.id)
when not matched then
insert (id, pkey, filler) values (s.id, s.pkey, s.filler)
;
The idea of the test case is to maximise the time until each UNION ALL branch produces data to insert by performing an inefficient HASH JOIN (that in fact generates a Cartesian product and needs to apply a costly REGEXP filter on that huge intermediate result) and forcing a sort on the join result, so rows will only be handed over to the parent operations once all rows have been processed in the join operation - and each branch generates data for a different partition of the target table. Typically it should take several seconds per branch to execute (if you need more time just un-comment the additional REGEXP_REPLACE filters), so you should have plenty of time to cause the invalidation from another session.

This means during the execution of each branch invalidating the cursor (for example by executing either of the two DBMS_STATS calls on the source or target table using NO_INVALIDATE=>FALSE) will lead to a re-start of the statement at the next attempt to write into a new target partition, possibly rolling back rows already inserted into other partitions.

Diagnostics
If you run the provided INSERT or MERGE statement on newer versions of Oracle that include SQL_EXEC_START and SQL_EXEC_ID in V$ACTIVE_SESSION_HISTORY (or V$SESSION for that matter), and invalidate the cursor during execution and before a partition of the target table gets inserted into for the first time, then you can see that these entries change as the statement restarts.

In such cases the INVALIDATIONS and LOADS counters increase in V$SQL accordingly, and the OBJECT_STATUS changes from INVALID_UNAUTH to VALID again with each re-start attempt. In newer versions, where you can configure the "plan_stat" information for SQL trace to "all_executions", you'll find STAT lines for each execution attempt dumped to the trace file, but only a single final EXEC line, where the elapsed time covers all execution attempts.
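
As a sketch, from a second session the re-starts can be watched with queries along these lines; the SQL_ID literal is a placeholder for that of your running statement:

-- watch SQL_EXEC_ID / SQL_EXEC_START change with each re-start attempt
select distinct sql_exec_id, sql_exec_start
from v$active_session_history
where sql_id = '0123456789abc'
order by sql_exec_id;

-- watch the invalidation / load counters and the cursor status
select invalidations, loads, object_status
from v$sql
where sql_id = '0123456789abc';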

The oldest version I've tested was 10.2.0.4, and that one already showed the re-start behaviour, although I would be inclined to think that this wasn't the case with older versions. So if anybody still runs a version older than 10.2.0.4 I would be interested to hear whether the behaviour reproduces or not.

Defining Resources in #GoldenGate Studio 12c

DBASolved - Fri, 2016-01-15 16:12

As I've been working with the beta of GoldenGate Studio 12c, I have tried to do simple things to see what will break and what is needed to make the process work. One of the things that I like about the studio is that, prior to creating any solutions, mappings or projects, you can define what databases and GoldenGate instances will be used during the design process. What I want to show you in this blog post is how to create the database resource and the GoldenGate instance resource.

Creating a Resource:

To create a database resource, after opening GoldenGate Studio, go to the Resources tab. On this tab, you will see that it is empty. This is because no resources have been created yet.

In the left-hand corner of the Resources tab, you should see a folder with a small arrow next to it. When you click on the arrow, you are presented with a context menu that provides three options for resources (Databases, Global Mappings, and GoldenGate Instances).


Database Resources:

Now that you know how to select which resource you want to create, let's create a database resource. To do this, select the database resource from the context menu. This will open a one-page wizard/dialog for you to fill out with the connection information for the database you want to use as a resource.

You will notice there are a few fields that need to be populated. Provide the relevant information needed to connect to the database. Once all the information has been provided, you can test the connection to validate that it works before clicking OK.

Once you click OK, the database resource will be added to the Resources tab under the Databases header.

Notice that the database is automatically connected to once it is created. This allows you to immediately start using the resource for mappings and global mappings.

GoldenGate Instance Resources:

The GoldenGate Instance resources are a little more complex to configure. This is due to the requirement that the GoldenGate environment has to have the GoldenGate Monitoring Agent (aka JAgent, 12.1.3.0) running. This is the same JAgent that is used with the OEM plug-in. If you need more information on how to install and configure the JAgent, you can find it here.

Now, to create a new GoldenGate Instance resource, you follow the same approach as you would to create a database resource; instead of selecting Database, select GoldenGate Instance. This will open the GoldenGate Instance wizard/dialog for you to fill out. Provide all the information requested.

In setting up the GoldenGate Instance, there are a few things that you need to provide. In my opinion, the names of the items requested in the GoldenGate Information section are misleading. To make this a bit easier, I’m providing an explanation of what each field means.

GoldenGate Version: This is the version of GoldenGate running with the JAgent
GoldenGate Database Type: The database which GoldenGate is running against. There are multiple options here
GoldenGate Port: This is the port number of the manager process
Agent Username: This is the username that is defined in $GGAGENT_HOME/cfg/Config.properties
Agent Password: This is the password that is created and stored in the datastore for the JAgent
Agent Port: This is the JMX port number that is defined in $GGAGENT_HOME/cfg/Config.properties (a sketch of that file follows below)
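
To illustrate where the agent values come from, Config.properties typically carries entries along the following lines; the keys and values shown are examples from a typical JAgent install and may differ in your version:

# $GGAGENT_HOME/cfg/Config.properties (illustrative entries)
jagent.host=localhost
jagent.jmx.port=5555
jagent.username=oggmajmxusr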

After providing all the required information, you can then perform a test connection. If the connection is successful, then you can click “ok” to create the GoldenGate Instance resource. If the connection fails, then you need to confirm all your settings.

Once all the resources you need for designing your GoldenGate architecture are defined, you will see them all under the Resources tab.

Now that you know how to create resources in GoldenGate Studio, you can use them to design your replication flows.

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Integrating Oracle Document Cloud and Oracle Sales Cloud, maintaining data level business object security

Angelo Santagata - Thu, 2016-01-14 11:10
Just written an article on http://www.ateam-oracle.com/ on how one would integrate Sales Cloud and Documents Cloud together. Interested? Go and check it out!
Link to article

Adding a contact to a sales cloud account

Angelo Santagata - Thu, 2016-01-14 11:04

A common question is: how do you add a contact to an Account in Sales Cloud? I've seen developers (including me) try to issue a mergeOrganization on the org; however, this is not correct. In Oracle Fusion, contacts are not "added" to accounts, they are "related" to accounts. Therefore, to add a contact to an account you need to add a "Relationship" which links the contact and the organization. Here's a sample payload.

WSDL <host>/crmCommonSalesParties/RelationshipService?wsdl

SOAP Request Payload

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/crmCommon/salesParties/relationshipService/types/" xmlns:rel="http://xmlns.oracle.com/apps/crmCommon/salesParties/relationshipService/">

   <soapenv:Header/>
 <soapenv:Body>
      <typ:createRelationship>
         <typ:relationship>
<!-- ContactId -->
            <rel:ObjectPartyId>300000000943126</rel:ObjectPartyId>
<!-- AccountId -->
            <rel:SubjectPartyId>300000000943078</rel:SubjectPartyId>
            <rel:RelationshipType>CONTACT</rel:RelationshipType>
            <rel:RelationshipCode>CONTACT</rel:RelationshipCode>
            <rel:CreatedByModule>HZ_WS</rel:CreatedByModule>
            <rel:Status>A</rel:Status>
         </typ:relationship>
      </typ:createRelationship>
   </soapenv:Body>
</soapenv:Envelope>

Groovy Script Example

def addContact =
[
        ObjectPartyId        : '300000000943126', /*Contact */
        SubjectPartyId       : '300000000943078', /*Account */
        RelationshipType  : 'CONTACT',
        RelationshipCode  : 'CONTACT',
        CreatedByModule : 'HZ_WS',
        Status  : 'A'
];

adf.webServices.RelationshipService.createRelationship(addContact);
Extranet login redirects to intranet URL

Vikram Das - Thu, 2016-01-14 10:15
For an old 11.5.10.2 ERP, we are moving from the architecture of "EBS application server in DMZ" to the architecture of "Reverse Proxy in DMZ and EBS application server in intranet".  After doing all the configurations, we hit the classic issue where you log in through the extranet URL visible on the public internet and it redirects to the intranet URL.

So https://extranet.example.com asks for SSO details and, after keying in the SSO username and password, goes to http://intranet.example.com.

The support.oracle.com article DMZ Configuration with Oracle E-Business Suite 11i (Doc ID 287176.1) lists 4 checks for the possible cause of this issue:

H6: Redirection to an Incorrect Server During Login
If you are getting redirected to an incorrect server during the login process, check the following:
  • Whether the hierarchy type of the profile options mentioned in Section 5.1 is set to SERVRESP:
    select PROFILE_OPTION_NAME, HIERARCHY_TYPE from fnd_profile_options where profile_option_name in
    ('APPS_WEB_AGENT','APPS_SERVLET_AGENT','APPS_JSP_AGENT','APPS_FRAMEWORK_AGENT','ICX_FORMS_LAUNCHER','ICX_DISCOVERER_LAUNCHER','ICX_DISCOVERER_VIEWER_LAUNCHER','HELP_WEB_AGENT','APPS_PORTAL','CZ_UIMGR_URL','ASO_CONFIGURATOR_URL','QP_PRICING_ENGINE_URL','TCF:HOST');
    PROFILE_OPTION_NAME                  HIERARCHY_TYPE
    ------------------------------------ --------------
    APPS_FRAMEWORK_AGENT                 SERVRESP
    APPS_JSP_AGENT                       SERVRESP
    APPS_PORTAL                          SERVRESP
    APPS_SERVLET_AGENT                   SERVRESP
    APPS_WEB_AGENT                       SERVRESP
    ASO_CONFIGURATOR_URL                 SERVRESP
    CZ_UIMGR_URL                         SERVRESP
    HELP_WEB_AGENT                       SERVRESP
    ICX_DISCOVERER_LAUNCHER              SERVRESP
    ICX_DISCOVERER_VIEWER_LAUNCHER       SERVRESP
    ICX_FORMS_LAUNCHER                   SERVRESP
    QP_PRICING_ENGINE_URL                SERVRESP
    TCF:HOST                             SERVRESP

    All good on this point.

  • Whether the profile option values for the FND profile options (APPS_FRAMEWORK_AGENT, APPS_WEB_AGENT, APPS_JSP_AGENT, APPS_SERVLET_AGENT) are pointing to the correct node. Replace <node_id> with the node_id of the external or internal web tier. For example:
    select fnd_profile.value_specific('APPS_FRAMEWORK_AGENT',null,null,null,null,<node_id>) from dual;
    This query returned https://extranet.example.com

  • Whether the dbc file pointed to by the JVM parameter (JTFDBCFILE) in jserv.properties exists:
    wrapper.bin.parameters=-DJTFDBCFILE=
    This was incorrect.  It was pointing to the intranet dbc file location.

  • Whether the value of the parameter APPL_SERVER_ID set in the dbc file for the node is the same as the value of the server_id in the fnd_nodes table.
    select node_name,node_id,server_id from fnd_nodes;
    This was being overwritten in the dbc file: the appl_server_id of the intranet was written when autoconfig was run on the intranet node, and the appl_server_id of the extranet when autoconfig was run on the extranet node, as the DBC file location and name were the same for both intranet and extranet.
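
A quick way to compare the two values is sketched below, assuming a standard 11i file layout and your own context name; compare the output against the server_id returned by the fnd_nodes query above:

grep APPL_SERVER_ID $FND_SECURE/$CONTEXT_NAME.dbc
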
I asked the DBA team to manually correct the dbc file name inside $IAS_CONFIG_HOME/Apache/Apache/Jserv/etc/jserv.properties
and to create a file of that name in $FND_SECURE/$CONTEXT_NAME.dbc on the extranet node, then bounce services.  Once that was done, we tested and it worked: no more redirection to the intranet URL.

Then I asked them to correct the s_dbc_file_name variable in the context file of the extranet node, run autoconfig on the extranet, verify the value of the dbc file in the jserv.properties DJTFDBCFILE parameter, and verify that the DBC file had the server_id of the extranet node.  After restarting all services, we checked again and it worked again.

So apart from checking the values of context file variables like s_webentryhost, s_webentrydomain, and s_active_port, you also need to check the value of s_dbc_file_name while verifying the setups for an extranet configuration. This can happen in 11i, R12.1, and R12.2 as well.
Categories: APPS Blogs

Indexes and Initrans (Blackstar)

Richard Foote - Thu, 2016-01-14 00:26
It's been a very tough week, so to help keep my mind off rather sad events, I thought I'd finish off one of my unpublished articles. Initrans is a physical attribute that determines the initial number of concurrent transaction entries allocated within each data block for a given table/index/cluster. Every transaction that updates a block has to acquire an […]
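
As a minimal illustration of the attribute itself (table and index names here are hypothetical):

-- reserve space for 4 concurrent transaction entries in each new index block
create index bowie_name_i on bowie (name) initrans 4;

-- raising it later only affects newly formatted blocks; a rebuild applies it everywhere
alter index bowie_name_i rebuild initrans 8;
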
Categories: DBA Blogs

Gather system stats on EXADATA X4-2 throws an ORA-20001,ORA-06512

Syed Jaffar - Wed, 2016-01-13 00:40
After a recent patch deployment on an Exadata X4-2 system, the following error was encountered whilst gathering system stats in Exadata mode (dbms_stats.gather_system_stats('EXADATA')):

SQL> exec dbms_stats.gather_system_stats('EXADATA');
BEGIN dbms_stats.gather_system_stats('EXADATA'); END;

*
ERROR at line 1:
ORA-20001: Invalid or inconsistent input values
ORA-06512: at "SYS.DBMS_STATS", line 27155
ORA-06512: at line 1

It appears to be a bug, and the following workaround should resolve the issue:

(1) As SYS user execute the scripts below:

SQL> @?/rdbms/admin/dbmsstat.sql
SQL> @?/rdbms/admin/prvtstas.plb
SQL> @?/rdbms/admin/prvtstat.plb
SQL> @?/rdbms/admin/prvtstai.plb

(2) Then, re-run your DBMS_STATS call:

exec dbms_stats.gather_system_stats('EXADATA');

Indeed this worked for us, and hopefully it will work for you as well.
