Feed aggregator

Starting out with MAF?

Angelo Santagata - Tue, 2014-08-26 08:45
If you're starting out with MAF, Oracle's Mobile Application Framework, then you MUST read this blog entry and make sure everything is right.. even if you're a seasoned ADF Mobile developer like me, you'll want to check this out; it got me a couple of times!

https://blogs.oracle.com/mobile/entry/10_tips_for_getting_started

%sql: To Pandas and Back

Catherine Devlin - Tue, 2014-08-26 06:03

A Pandas DataFrame has a nice to_sql(table_name, sqlalchemy_engine) method that saves itself to a database.
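For illustration, the manual route looks roughly like this (a sketch; the connection string and table name are placeholders):

import pandas as pd
from sqlalchemy import create_engine

# Building the Engine by hand is the "little bit of a pain" part
engine = create_engine('postgresql://@localhost/')

df = pd.DataFrame({'city': ['Dayton'], 'population': [141527]})
df.to_sql('my_table', engine, if_exists='replace')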

The only trouble is that coming up with the SQLAlchemy Engine object is a little bit of a pain, and if you're using the IPython %sql magic, your %sql session already has an SQLAlchemy engine anyway. So I created a bogus PERSIST pseudo-SQL command that simply calls to_sql with the open database connection:

%sql PERSIST mydataframe

The result is that your data can make a very convenient round-trip from your database, to Pandas and whatever transformations you want to apply there, and back to your database:



In [1]: %load_ext sql

In [2]: %sql postgresql://@localhost/
Out[2]: u'Connected: @'

In [3]: ohio = %sql select * from cities_of_ohio;
246 rows affected.

In [4]: df = ohio.DataFrame()

In [5]: montgomery = df[df['county']=='Montgomery County']

In [6]: %sql PERSIST montgomery
Out[6]: u'Persisted montgomery'

In [7]: %sql SELECT * FROM montgomery
11 rows affected.
Out[7]:
[(27L, u'Brookville', u'5,884', u'Montgomery County'),
(54L, u'Dayton', u'141,527', u'Montgomery County'),
(66L, u'Englewood', u'13,465', u'Montgomery County'),
(81L, u'Germantown', u'6,215', u'Montgomery County'),
(130L, u'Miamisburg', u'20,181', u'Montgomery County'),
(136L, u'Moraine', u'6,307', u'Montgomery County'),
(157L, u'Oakwood', u'9,202', u'Montgomery County'),
(180L, u'Riverside', u'25,201', u'Montgomery County'),
(210L, u'Trotwood', u'24,431', u'Montgomery County'),
(220L, u'Vandalia', u'15,246', u'Montgomery County'),
(230L, u'West Carrollton', u'13,143', u'Montgomery County')]

Flipping Herman Millers

FeuerThoughts - Mon, 2014-08-25 13:44
I was one of those dot.com boom excitables who decided that he must own a Herman Miller Aeron chair in order to be super cool and incredibly productive while I blogged and tweeted (well, I don't think there was Twitter back then) and so forth.

Soon after I got the chair, I realized that I really didn't like it very much at all. It didn't seem to provide support in the right places, didn't make it easy to avoid slouching... it wasn't the desk chair for me.

[My chair of choice, for years now, is the Swopper.]

Tried to get my son interested. Nope.

Tried to get my wife interested. Nope.

Moved it to the garage. 

Finally decided to offer it up on Craigslist nice and cheap to make it go away. Posted the ad for $200 and within minutes had someone lined up. The buyer lived in St. Louis, but his dad lived nearby, and the dad told me when picking up the chair:

His son had built up a business "flipping" Aeron chairs: buy them cheap, buy lots of parts, fix 'em up, and spin 'em back out into the market.

Who knew?

What a remarkable system is capitalism. Sure, it's not nice. It's not fair. It's played a key role in the destruction of our planet as humans suck out the "natural resources" to make all the stuff that consumers want to buy. 

But capitalism is a very organic way of organizing an economy. 

Oh and this same chair-flipping fellow also does a wholesale business in ugly Christmas sweaters.

Finger on the pulse of a nation.
Categories: Development

To Hint or not to hint (Application Engine), that is the question

David Kurtz - Mon, 2014-08-25 13:36
Over the years Oracle has provided a number of plan stability technologies to control how SQL statements are executed.  At the risk of over-simplification: Outlines (deprecated in 11g), Profiles, Baselines and Patches all work by injecting a set of hints into a SQL statement at parse time.  There is quite a lot of advice from Oracle to use these technologies to fix errant execution plans rather than hint the application.  I think it is generally good advice; however, there are times when this approach does not work well with PeopleSoft, and that is due to the behaviour and structure of PeopleSoft rather than the Oracle database.

It is possible to produce a SQL profile from a plan captured by AWR.  Part of the distribution for the SQLT Diagnostic Tool (Doc ID 215187.1) is a script called coe_xfr_sql_profile.sql, written by Carlos Sierra.
The only thing I would change in the delivered script (for use with PeopleSoft, and as suggested in a comment) is to create the profile with FORCE_MATCHING, so that similar statements with different literal values still match.
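For reference, the workflow is sketched below (the SQL_ID and plan hash value are taken from the example later in this post; the script prompts for them if they are omitted).  The generated script is then edited to set force_match => TRUE before it is run:

SQL> START coe_xfr_sql_profile.sql 5gtxdqbx0d0c3 3552771247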
The Slings and Arrows of outrageous execution plans
Let's take an example of a statement (from the vanilla Financials product, not customised) that performed poorly because it didn't generate a good execution plan (I have cut out most of the statement for readability).  Note that it references instance 5 of the PeopleTools temporary record CA_SUM_TAO.
INSERT INTO PS_CA_SUM_RC_TAO5 (…) SELECT

FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE

B.PROCESS_INSTANCE = 51381955 AND C.IN_USE_FLAG = 'Y'

Plan hash value: 2039212279
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 14424 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 1 | 430 | 14424 (1)| 00:02:54 |
| 4 | NESTED LOOPS | | 1 | 318 | 14421 (1)| 00:02:54 |
| 5 | TABLE ACCESS FULL | PS_CA_SUM_IN_USE | 1 | 85 | 14420 (1)| 00:02:54 |
| 6 | TABLE ACCESS BY INDEX ROWID| PS_CA_SUM_TAO5 | 1 | 233 | 1 (0)| 00:00:01 |
| 7 | INDEX UNIQUE SCAN | PS_CA_SUM_TAO5 | 1 | | 0 (0)| |
| 8 | INDEX RANGE SCAN | PSACA_PR_SUMM | 1 | | 2 (0)| 00:00:01 |
| 9 | TABLE ACCESS BY INDEX ROWID | PS_CA_PR_SUMM | 1 | 112 | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
However, below is the plan we get on instance 4.  We get this plan because a profile was applied in the past; but now we are on a different non-shared instance of the temporary table, so the profile cannot match because the objects are different, and we get the same problem on this other non-shared instance of the temporary record.  Different literal values, such as those for Process Instance, can be handled by FORCE_MATCHING, but different tables cannot: as far as the database is concerned, this is a totally different SQL statement.
SQL_ID 5gtxdqbx0d0c3
--------------------
INSERT INTO PS_CA_SUM_RC_TAO4 (…) SELECT

FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE

B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'

Plan hash value: 3552771247

---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 36361 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | HASH JOIN | | 1 | 430 | 36361 (3)| 00:07:17 |
| 3 | TABLE ACCESS FULL | PS_CA_SUM_IN_USE | 1 | 85 | 14347 (1)| 00:02:53 |
| 4 | NESTED LOOPS | | | | | |
| 5 | NESTED LOOPS | | 1 | 345 | 22014 (3)| 00:04:25 |
| 6 | TABLE ACCESS FULL | PS_CA_PR_SUMM | 5268K| 562M| 21539 (1)| 00:04:19 |
| 7 | INDEX UNIQUE SCAN | PS_CA_SUM_TAO4 | 1 | | 0 (0)| |
| 8 | TABLE ACCESS BY INDEX ROWID| PS_CA_SUM_TAO4 | 1 | 233 | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Note
-----
- SQL profile "coe_gn3n77gs6xj2a_3552771247" used for this statement
Of course, the statement on instance 4 had a profile because it was added as a short-term fix and then left in situ long term.  It worked fine until a process errored, leaving the non-shared instance of the temporary record allocated to that process instance, so PeopleSoft allocated instance 5 on the next execution.
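You can confirm that the two statements can never share a profile by computing their FORCE_MATCHING signatures (a quick sketch; the statements are simplified stand-ins for the real SQL):

SET SERVEROUTPUT ON
DECLARE
  sig4 NUMBER;
  sig5 NUMBER;
BEGIN
  -- TRUE requests the FORCE_MATCHING signature, which normalises
  -- literal values but not object names
  sig4 := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE('SELECT * FROM PS_CA_SUM_TAO4 WHERE PROCESS_INSTANCE = 1', TRUE);
  sig5 := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE('SELECT * FROM PS_CA_SUM_TAO5 WHERE PROCESS_INSTANCE = 2', TRUE);
  DBMS_OUTPUT.PUT_LINE('TAO4 signature: '||sig4);
  DBMS_OUTPUT.PUT_LINE('TAO5 signature: '||sig5);
END;
/

The two signatures differ, so a profile created against one statement can never be applied to the other.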
So we could just create another profile using the coe_xfr_sql_profile.sql script:
SPO coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.log;
SET ECHO ON TERM ON LIN 2000 TRIMS ON NUMF 99999999999999999999;
REM
REM $Header: 215187.1 coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql 11.4.1.4 2014/08/13 csierra $
REM
REM Copyright (c) 2000-2010, Oracle Corporation. All rights reserved.
REM
REM AUTHOR
REM carlos.sierra@oracle.com
REM
REM SCRIPT
REM coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql
REM
REM DESCRIPTION
REM This script is generated by coe_xfr_sql_profile.sql
REM It contains the SQL*Plus commands to create a custom
REM SQL Profile for SQL_ID 5gtxdqbx0d0c3 based on plan hash
REM value 3552771247.
REM The custom SQL Profile to be created by this script
REM will affect plans for SQL commands with signature
REM matching the one for SQL Text below.
REM Review SQL Text and adjust accordingly.
REM
REM PARAMETERS
REM None.
REM
REM EXAMPLE
REM SQL> START coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql;
REM
REM NOTES
REM 1. Should be run as SYSTEM or SYSDBA.
REM 2. User must have CREATE ANY SQL PROFILE privilege.
REM 3. SOURCE and TARGET systems can be the same or similar.
REM 4. To drop this custom SQL Profile after it has been created:
REM EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE('coe_5gtxdqbx0d0c3_3552771247');
REM 5. Be aware that using DBMS_SQLTUNE requires a license
REM for the Oracle Tuning Pack.
REM
WHENEVER SQLERROR EXIT SQL.SQLCODE;
REM
VAR signature NUMBER;
REM
DECLARE
sql_txt CLOB;
h SYS.SQLPROF_ATTR;
BEGIN
sql_txt := q'[
INSERT INTO PS_CA_SUM_RC_TAO4 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]';
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('_unnest_subquery' 'false')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
q'[ALL_ROWS]',
q'[OUTLINE_LEAF(@"SEL$1")]',
q'[OUTLINE_LEAF(@"INS$1")]',
q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO4"@"INS$1")]',
q'[FULL(@"SEL$1" "A"@"SEL$1")]',
q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO4"."PROCESS_INSTANCE" "PS_CA_SUM_TAO4"."BUSINESS_UNIT" "PS_CA_SUM_TAO4"."PROJECT_ID" "PS_CA_SUM_TAO4"."ACTIVITY_ID" "PS_CA_SUM_TAO4"."ANALYSIS_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO4"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO4"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_NUM" ]',
q'[ "PS_CA_SUM_TAO4"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO4"."CA_FEE_STATUS"))]',
q'[FULL(@"SEL$1" "C"@"SEL$1")]',
q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
q'[END_OUTLINE_DATA]');
:signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => sql_txt,
profile => h,
name => 'coe_5gtxdqbx0d0c3_3552771247',
description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
END;
/
WHENEVER SQLERROR CONTINUE
SET ECHO OFF;
PRINT signature
PRO
PRO ... manual custom SQL Profile has been created
PRO
SET TERM ON ECHO OFF LIN 80 TRIMS OFF NUMF "";
SPO OFF;
PRO
PRO COE_XFR_SQL_PROFILE_5gtxdqbx0d0c3_3552771247 completed
But then we must manually change the table and index names from 4 to 5:
DECLARE
sql_txt CLOB;
h SYS.SQLPROF_ATTR;
BEGIN
sql_txt := q'[
INSERT INTO PS_CA_SUM_RC_TAO5 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]';
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
q'[DB_VERSION('11.2.0.3')]',
q'[OPT_PARAM('_unnest_subquery' 'false')]',
q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
q'[ALL_ROWS]',
q'[OUTLINE_LEAF(@"SEL$1")]',
q'[OUTLINE_LEAF(@"INS$1")]',
q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO5"@"INS$1")]',
q'[FULL(@"SEL$1" "A"@"SEL$1")]',
q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO5"."PROCESS_INSTANCE" "PS_CA_SUM_TAO5"."BUSINESS_UNIT" "PS_CA_SUM_TAO5"."PROJECT_ID" "PS_CA_SUM_TAO5"."ACTIVITY_ID" "PS_CA_SUM_TAO5"."ANALYSIS_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO5"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO5"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_NUM" ]',
q'[ "PS_CA_SUM_TAO5"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO5"."CA_FEE_STATUS"))]',
q'[FULL(@"SEL$1" "C"@"SEL$1")]',
q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
q'[END_OUTLINE_DATA]');
:signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => sql_txt,
profile => h,
name => 'coe_5gtxdqbx0d0c3_3552771247',
description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
END;
/
Or to take Arms against a Sea of statements,
The profile has the advantage that it can be applied quickly without a code change.  It is the perfect tool for the DBA with a production performance problem. However, there are some other considerations.
  • If applying to a statement that references a PeopleSoft temporary record, then we need to apply the profile to all instances of the record (both the non-shared instances and the shared instance).
  • We were lucky that we referenced instance 5 of two temporary records. However, you could get a situation where a statement references different instances of different temporary records: perhaps instance 5 of one table and instance 6 of another.  In that case, you might also get instance 6 of the first table and instance 5 of the other.  A SQL profile could be needed for each permutation.
  • Bear in mind also that some areas of PeopleSoft use dynamically generated SQL, so you get similar SQL statements that are sufficiently different for the profile not to match.
  • Any changes to the expansion of Application Engine and PeopleCode MetaSQL on upgrading PeopleTools, or potentially even patching, will also prevent matching.
  • There is also the challenge of dealing with code changes as the system is upgraded, patched and customised.  A small code change, perhaps just an extra field in the select clause, can result in a performance regression because the profile stops matching. Of course, this challenge is not limited to PeopleSoft systems! 
Profiles are likely to be effective if there are no PeopleSoft temporary records present.  So you can generally use them in COBOL and SQR processes and the on-line application (other than in on-line Application Engine processes).
Aye, there's the rub,
I would use a profile (or a set of profiles) as a short-term temporary fix that is easier to introduce into production, and then add hints to the source code, thereby fixing all instances of the code, not just the ones that have been profiled. Of course, that does entail a code change and everything that goes with it.  One strong argument against making a code change is that you have to change the code again to remove or alter the hint if it becomes unnecessary at some time in the future after a significant change, such as an Oracle upgrade.  However, on balance, I think it is better than the scenario where the profile stops working one day without warning.
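For illustration only, the source-code alternative might look like the sketch below.  The hints mirror the good plan above (driving from PS_CA_SUM_IN_USE into the temporary tables), and because they reference the table aliases rather than the table names, the same hinted statement is valid for every non-shared instance that Application Engine resolves via the %Table() MetaSQL:

INSERT INTO %Table(CA_SUM_RC_TAO) (…)
SELECT /*+ LEADING(C B A) USE_NL(B) USE_NL(A) */ …
FROM PS_CA_PR_SUMM A, %Table(CA_SUM_TAO) B, PS_CA_SUM_IN_USE C
WHERE …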
The rest is silence.
Unless you add a comment.

★ How BIG is Oracle OpenWorld?

Eddie Awad - Sat, 2014-08-23 16:19

Oracle OpenWorld

Here is how big it was in 2013. Compare it to its size in 2012. It is safe to assume that it will be bigger in 2014!

I will attend this year’s event by invitation from the Oracle ACE Program. Prior to the start of the conference, I will be attending a two-day product briefing with product teams at Oracle HQ. It’s like a mini OpenWorld, but only for Oracle ACE Directors.

During the briefing, Oracle product managers talk about the latest and greatest product news. They also share super secret information that has not yet been made public. I will report this information to you here on awads.net and via Twitter, unless of course it is protected by a non-disclosure agreement.

See you there!


DRM Free Downloads - Is That OK?

Bradley Brown - Fri, 2014-08-22 23:01
I talked to a prospect today that was using a competitive platform to deliver their video products.  We talked extensively about the differentiation of InteliVideo over their current platform (from brand to analytics to customers to apps and so on).  But one thing that came up had to do with DRM-free downloads.  You might think that's a good thing if you don't understand what it means... as they did.  I said, "So you're OK with your customers downloading the MP4 and putting it on YouTube, or emailing it to their friends, or throwing it on a thumb drive?"  He said, "No, our customers can't do that."  So I showed him the language on their website ("DRM free downloads"), to which he replied, "Right, it's protected."  When I looked puzzled, he said, "What is DRM?"  I said DRM stands for Digital Rights Management.  You can think of it as protection.  So that wording is saying "you get this video that has NO protection; use it as you wish."  Of course there's likely wording in the click-through contract that says the customer is only supposed to use the content on their own devices.  But... you've just handed them a risky piece of content.

So... in my humble opinion, I would say "no": DRM-free downloads are not OK.

How the Big Boys Do It!

Bradley Brown - Fri, 2014-08-22 17:03
We have a number of VERY large (video on demand) customers.  We've broken our customer base into 3 categories:

1. Infopreneurs - these are companies that do something and have video, but video is not their primary business.  For example, RunBare: their primary business is teaching people how to run in their bare feet.  They do have a video on this topic, along with books and other items.  Another example is BDB Software, which is my original side consulting business.  I started it to sell online Oracle Application Express training.  So all that business does is video; yet I'm the CEO at InteliVideo, and that's my primary business, so BDB is just a side business.  You could say that Infopreneurs are wannabes.

2. Enterprise Content Providers (ECP) - these are companies whose primary business revolves around selling digital goods, education, etc. online.  These are the "big boys" of our industry.  In this blog post I'm going to talk more about how the Infopreneur can become an ECP.

3. Corporations - these are companies that use video content to educate their own employees.  They typically aren't interested in selling their videos, they are interested in protecting their videos.

So how do the big boys become so big?  Like any business, it typically happens over an extended period of time.  Even those who focus on infomercials are not usually overnight successes.  Take P90X or Beachbody: they've been selling DVDs for MANY years.  Infomercials contribute to their success, but the fact that they figured out how to do affiliate marketing is another large piece of their success.

So how do they do it?  Creative marketing, marketing, and more marketing - lots of it!  Promotions, A/B split tests, refining the message, analyzing what works and building on it.  Said another way, they are professional marketers!  You might ask: social marketing, or pay-per-click marketing, or SEO, or...?  The answer is yes.

You want to get big?  Well then...it's time to focus.  On what?  One guess!

Introduction to MongoDB Geospatial feature

Tugdual Grall - Thu, 2014-08-21 16:30
This post is a quick and simple introduction to the geospatial features of MongoDB 2.6, using a simple dataset and queries.
Storing Geospatial Information
As you know, you can store any type of data, but if you want to query it you need to use coordinates and create an index on them. MongoDB supports three types of indexes for geospatial queries:
2d Index: uses simple coordinate pairs (longitude, latitude)…
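To make this concrete, here is a minimal sketch in the 2.6-era mongo shell (the collection, field names and coordinates are invented for illustration):

// Store a GeoJSON point; note the coordinate order is [longitude, latitude]
db.places.insert({
  name: "Tour Eiffel",
  location: { type: "Point", coordinates: [2.2945, 48.8584] }
})

// Index the field for geospatial queries
db.places.ensureIndex({ location: "2dsphere" })

// Find the documents nearest a point, closest first
db.places.find({
  location: { $near: { $geometry: { type: "Point", coordinates: [2.35, 48.85] } } }
})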

ORA-14048 When Adding Composite Partition

Don Seiler - Thu, 2014-08-21 15:35
Fresh off the heels of my earlier composite partitioning post, I just ran into this confusing issue:

SQL> alter table p_objects
  2          add partition p201410
  3                  values less than (to_date('2014/11/01','yyyy/mm/dd'))
  4          (
  5                  subpartition p201410_spdts values ('DTS')
  6                  , subpartition p201410_spfoo values ('FOO')
  7                  , subpartition p201410_spbar values ('BAR')
  8                  , subpartition p201410_spsys values ('SYS')
  9                  , subpartition p201410_spsysaux values ('SYSAUX')
 10          )
 11  tablespace tbs1
 12  ;
tablespace tbs1
*
ERROR at line 11:
ORA-14048: a partition maintenance operation may not be combined with other
operations

The error doesn't quite make it easy to determine what the problem is. It turns out that the tablespace/storage clause of the ALTER TABLE ... ADD PARTITION statement has to come before the subpartition definitions. Simply moving that part of the statement a few lines up yields success:

SQL> alter table p_objects
  2          add partition p201410
  3                  values less than (to_date('2014/11/01','yyyy/mm/dd'))
  4          tablespace tbs1
  5          (
  6                  subpartition p201410_spdts values ('DTS')
  7                  , subpartition p201410_spfoo values ('FOO')
  8                  , subpartition p201410_spbar values ('BAR')
  9                  , subpartition p201410_spsys values ('SYS')
 10                  , subpartition p201410_spsysaux values ('SYSAUX')
 11          )
 12  ;

Table altered.

Again, this is probably obvious to most of you, but it wasn't as obvious to decipher when I was trying to add a partition with over 6,300 subpartitions.


Categories: DBA Blogs

Adding New Partitions with Custom Subpartition Definition (Range-List)

Don Seiler - Thu, 2014-08-21 14:28
As part of a project for work I wanted to create a script that would create a new range partition but also pre-create all the list subpartitions. By default the subpartitions would be created based on the subpartition template. However, for various reasons which I won't get into, we don't update or use the subpartition template. I wanted to define the subpartition list as part of the ALTER TABLE ... ADD PARTITION statement. I assumed it was perfectly acceptable but didn't see any obvious examples in my hasty web searching, so I thought I'd share one myself.

I start by creating my composite range-list partitioned table:

SQL> create table p_objects
  2  tablespace tbs1
  3  partition by range(rdate)
  4  subpartition by list(owner)
  5  subpartition template
  6  (
  7          subpartition spsys values ('SYS')
  8          , subpartition spsysaux values ('SYSAUX')
  9  )
 10  (
 11          partition p201301 values less than (to_date('2013/02/01','YYYY/MM/DD')),
 12          partition p201302 values less than (to_date('2013/03/01','YYYY/MM/DD')),
 13          partition p201303 values less than (to_date('2013/04/01','YYYY/MM/DD')),
 14          partition p201304 values less than (to_date('2013/05/01','YYYY/MM/DD')),
 15          partition p201305 values less than (to_date('2013/06/01','YYYY/MM/DD')),
 16          partition p201306 values less than (to_date('2013/07/01','YYYY/MM/DD')),
 17          partition p201307 values less than (to_date('2013/08/01','YYYY/MM/DD')),
 18          partition p201308 values less than (to_date('2013/09/01','YYYY/MM/DD')),
 19          partition p201309 values less than (to_date('2013/10/01','YYYY/MM/DD')),
 20          partition p201310 values less than (to_date('2013/11/01','YYYY/MM/DD')),
 21          partition p201311 values less than (to_date('2013/12/01','YYYY/MM/DD')),
 22          partition p201312 values less than (to_date('2014/01/01','YYYY/MM/DD'))
 23  )
 24  as select object_id
 25          , owner
 26          , object_name
 27          , object_type
 28          , to_date(trunc(dbms_random.value(
 29                  to_char(to_date('2013/01/01','YYYY/MM/DD'),'J'),
 30                  to_char(to_date('2013/12/31','YYYY/MM/DD'),'J')
 31                  )),'J') rdate
 32  from all_objects
 33  where owner in ('SYS','SYSAUX');

Table created.

This creates the partitions with 2 subpartitions each, per my defined template. For example:

PARTITION_NAME                 SUBPARTITION_NAME
------------------------------ ------------------------------
P201301                        P201301_SPSYS
                               P201301_SPSYSAUX
P201302                        P201302_SPSYS
                               P201302_SPSYSAUX
P201303                        P201303_SPSYS
                               P201303_SPSYSAUX

Next I'll add two more partitions, one with no subpartition definition specified and another with a custom definition:

SQL> alter table p_objects
  2          add partition p201408
  3                  values less than (to_date('2014/09/01','yyyy/mm/dd'));

Table altered.

SQL> alter table p_objects
  2          add partition p201409
  3                  values less than (to_date('2014/10/01','yyyy/mm/dd'))
  4          (
  5                  subpartition p201409_spdts values ('DTS')
  6                  , subpartition p201409_spsys values ('SYS')
  7                  , subpartition p201409_spsysaux values ('SYSAUX')
  8          )
  9  ;

Table altered.

The results:
PARTITION_NAME                 SUBPARTITION_NAME
------------------------------ ------------------------------
P201408                        P201408_SPSYS
                               P201408_SPSYSAUX
P201409                        P201409_SPDTS
                               P201409_SPSYS
                               P201409_SPSYSAUX

You can see that the first statement just used the subpartition template as defined in the original CREATE TABLE statement. The second uses the list I defined in the ALTER TABLE statement. I could have only defined the SPDTS subpartition and it would have only used that one and not used SYS or SYSAUX subpartitions at all.

Hopefully this post helps a few of you in your searching when faced with a similar task. This was probably obvious to most of you, but the mind starts to slip for some of us.
Categories: DBA Blogs

Up and Running with HCM 9.2 on PT 8.54 via PUM "Demo Image"

Jim Marion - Thu, 2014-08-21 12:57

Yes, you read that correctly. PUM is the new demo image. According to MOS Doc ID 1464619.1 "As of HCM 9.2.006, the PeopleSoft Image and PeopleSoft Demo Image have been combined into one PeopleSoft Update Image. You can use the PeopleSoft Update Image for both your patching and demonstration purposes." If you are current on your PUM images, you probably already knew that. If you are like me, however, and haven't downloaded a demo image for a while, then you may have been looking for demo images on the old MOS demo image page.

Since I use the image for prototyping and demonstrations, I turned off SES and reduced the memory requirements to 2048 MB. It is working great at this lower memory setting.

There are a lot of new and great features in the PT 8.54 PUM:

  • Attribute-based branding,
  • Component Branding (add your own CSS and JavaScript to components without hacking a delivered HTML definition),
  • REST Query Access Service,
  • Mobile Application Platform (MAP), and
  • Fluid homepages

Tip: Access the fluid homepages by visiting the URL http://<yourserver>:8000/psc/ps/EMPLOYEE/HRMS/c/NUI_FRAMEWORK.PT_LANDINGPAGE.GBL. For example, if you have a hosts entry mapping your PUM image to the hostname hcmdb.example.com, then use the URL http://hcmdb.example.com:8000/psc/ps/EMPLOYEE/HRMS/c/NUI_FRAMEWORK.PT_LANDINGPAGE.GBL.

Oracle 12C - In-Memory Option Resources

Karl Reitschuster - Thu, 2014-08-21 00:17

Hi folks,

Introduced as an option, Oracle's In-Memory option will change the database world, much as SAP HANA is doing. The release has been out since July, but the search for resources and documentation yields little.

Here some useful links I found.

First, the home of Oracle In-Memory.

Employee or Member Training

Bradley Brown - Wed, 2014-08-20 20:12
Do you have a group of employees or members that you would like to train?  Would you like to make the training available for a limited time only, such as for 2 weeks?  Would you like the ability to take viewing access away at your discretion (such as when an employee leaves the company)?  Would you like to know who watched which videos?  For example, did Jim watch the introductory video on Tuesday as he said he did?

If you answered yes to any of these questions, I have great news for you!  The InteliVideo platform supports all of these needs and more.  In fact, you can upload any number of videos, you can group them as you wish, and you can grant or deny access at any time (even if people have downloaded videos to their iPad, they will no longer be able to watch them once you deny access).

Below I'm going to run you through an actual use case for a company that's using our platform to train their employees.

Signing Up for an InteliVideo Account
Signing up for InteliVideo is an easy, painless and free process.  First, go to the InteliVideo site and click on "Sign Up:"


You will be asked for your subdomain name (typically something like your company name), first name, last name and email address.  Finally, just fill in the Captcha (anti-spammer protection) information and click on "Create Account."


You will then receive an email that will provide your InteliVideo account information.  Congratulations!  You're getting closer!
Customizing the Site with Your Logo, Color Scheme, etc.
Once you create your account, you'll be taken to the "Display" page within the InteliVideo administration / backend pages.  This is where you can update your subdomain name, logo, color schemes, page header, choose a template (look and feel) and so much more.  We work with most of our customers to make sure that the look and feel of their main website matches the look and feel of their InteliVideo website.  If you want to point your domain to our website, we can do that for you too.  For example, if you signed up for coolvideos.intelivideo.com and you want to change it to videos.coolvideos.com, we can even do that for you!
Signing Up for a Paid Account
Under "Account Settings" and then "My Plan" you can sign up for a paid account.  The highest value is the $249/year account, which includes more videos, storage, etc.  You can always go over the number of hours or minutes provided; we just charge by the minute for those overages.


Uploading Your Video(s)
Uploading your videos is easy!  Simply drag and drop your videos into the platform and we do the rest for you!  When you click on the "Videos" tab, you'll see an "Upload Videos" button.  Click on this button, which will present you with a window like this one:


You can drag and drop any video file into the "Drag file here" section, or you can choose files, import them from Dropbox, download from a URL, and there are other options as well.  If you have 100s of video files, we will work directly with you to get your videos into the platform.  Most of our customers who have more than 100 videos will send us a hard drive with the videos and we'll upload them.

Once the videos are uploaded, we take care of transcoding.  Transcoding is just a fancy way of saying that we convert just about any source video file format (MOV file, AVI file, VOB, etc.) into a number of different resolutions and formats so that your video will play on any device.  Another way of explaining this is that we take care of the hard stuff for you.

You'll see your videos in the list along with a status.  If the video file is corrupt, you would see an error message, but most of the time once your videos are transcoded, you'll see that you can watch the video as you see here below:


You can also edit the details (i.e. the description, if it's a public or private video, etc.) by clicking the edit button:


As you can see, you can edit the short (title) and long description here.  You can also indicate whether a video is public or private here.  Public means anyone can view it for free.  Private means you must be a member (or buyer) to view the video.  The override image allows you to upload an image that should be used as the default background image for the video.  If you don't upload an override image, we extract the first frame of the video and we use that image.

If there is a 1:1 correlation between your video and a product, you can click on "Create Product" in the list of videos page above.  Most of the time a product is made up of more than 1 video, but sometimes this is a good starting point for a product.  You can always add more videos to a product.
Grouping Your Video(s) into a Product
If you didn't click the "Create Product" button above, you'll need to create a product.  A Product is simply a bundle of videos that you wish to offer to the public, for sale or for members.

Click on the "Products" and then click on "New Product."  You'll see that there are a number of options here:


Again, you can set a short (title) and long description for your product.  You can determine whether the product is available to the public, members only or for sale.  If it's for sale, you can determine if the product is a one time payment, a rental, subscription or installment payment.
Offering Products for Sale
If you want to sell your products, you must connect your InteliVideo account with a Stripe account.  If you don't have an existing Stripe account, you can create one through our platform.  If you already have a Stripe account, you can connect to that account.  Either way, click on "Connect with Stripe" and we'll walk you through, step by step, what is required to connect InteliVideo and Stripe.
Granting Access to a Product / Video
Access to any product that is available for sale or for members only can be granted (or revoked/denied) manually.  Click on the "Customers" tab, which will show you a list of your existing customers/members.  To add a new customer or member, click on "New Customer:"


Enter the first and last name along with the email of the person, and select any products that you would like them to have access to.  "IPs allow" indicates how many unique locations you want the user to be able to access your videos from.  The default is 8.  If you wanted them to be able to access it from one computer only, you could change this to 1.

You can view any customer/member by clicking on their name from the customer page.  You can click on "Edit User" to disable the user.  As you can see here, when you drill into a user, you'll see their usage stats:

When you edit the user, you can disable the user at any time.
Timed or Dripped Content
Within the details of every product, you can change the order in which the videos are displayed by dragging any video into the correct position:


You can also set up a delayed delivery schedule or "drip schedule" for each video.  In other words, if you want Module 1 to be available for days 0 through 7 (first week), you can set that schedule up.  If you wanted all of the videos to be available for 3 weeks, you could set each video to 0 through 21.


Knowing Who's Watched What
The InteliVideo platform tracks all of the usage for every video, whether it's watched streaming or downloaded and watched on a plane.  You saw one example of those usage statistics for a specific customer/member above.  There are MANY other ways of slicing and dicing the data to know what your customers/members are watching, what they are having a difficult time with (i.e. they are watching repeatedly), and what they aren't watching.  You can see where (in the world) they were when they were watching, what devices they watched from and so much more.  We are data guys, so believe me when I say "we track everything!"
Employee / Member's Viewing Options
We support just about every device out there for viewing.  Most video platforms only support streaming videos, which limits your members to watching when they have an Internet connection.  We have apps for iOS (iPhone and iPad), Android (phones and tablets), Chromecast, Roku, Windows computers, Apple computers and much much more.

The apps manage which content users can access, and they keep track of usage statistics when people are disconnected from the Internet, uploading those stats when the person's phone, tablet, laptop, etc. "phones home."  This allows your customers or members to download a video without ever having access to the source video file.  Therefore they can't archive it to watch later, they can't email it to their friends, and they can't post it on Facebook or YouTube.  It also allows you to control when they can no longer watch the product: if you deny the user, they won't be able to watch the content any more.  The bottom line is that we protect your content, and we don't allow people to access it once you're done allowing them to have access.
Sign Up Today!
If you're ready to sign up for an account at InteliVideo, please do so now!  We would love to have you as a customer!

Slicing the EDG

Antony Reynolds - Tue, 2014-08-19 21:24
Different SOA Domain Configurations

In this blog entry I would like to introduce three different configurations for a SOA environment.  I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion.  For each possible deployment architecture I have identified some of the advantages.

Super Domain

This is a single EDG style domain for everything needed for SOA/OSB.   It extends the standard EDG slightly but otherwise assumes a single “super” domain.

This is basically the SOA EDG.  I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies.
Key Points

  • Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if rest of domain is unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service.
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
Benefits
  • Single Administration Point (1 Admin Server)
  • Closely follows EDG with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
Drawbacks
  • Patching is an all or nothing affair.
  • Startup time for SOA may be slow if large number of composites deployed.
Multiple Domains

This extends the EDG into multiple domains, allowing separate management and update of these domains.  I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence etc.

SOA & BAM are kept in the same domain as little benefit is obtained by separating them.Key Points

  • Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if other domains are unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service.
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
Benefits
  • Follows EDG but in separate domains and with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
  • Patch lifecycle of OSB/SOA/JMS are no longer lock stepped.
  • JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
  • OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
  • All domains use same OWSM policy store (MDS-WSM).
Drawbacks
  • Multiple domains to manage and configure.
  • Multiple Admin servers (single view requires use of Grid Control)
  • Multiple Admin servers/WSM clusters waste resources.
  • Additional homes needed to enjoy benefits of separate patching.
  • Cross domain trust needs setting up to simplify cross domain interactions.
  • Startup time for SOA may be slow if large number of composites deployed.
Shared Service Environment

This model extends the previous multiple domain arrangement to provide a true shared service environment.

This extends the previous model by allowing multiple additional SOA domains and/or other domains to take advantage of the shared services.  Only one non-shared domain is shown, but there could be multiple, allowing groups of applications to share patching independent of other application groups.
Key Points

  • Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if other domains are unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
  • Shared SOA Domain hosts
    • Human Workflow Tasks
    • BAM
    • Common "utility" composites
  • Single OSB domain provides "Enterprise Service Bus"
  • All domains use same OWSM policy store (MDS-WSM)
Benefits
  • Follows EDG but in separate domains and with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
  • Patch lifecycle of OSB/SOA/JMS are no longer lock stepped.
  • JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
  • OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
  • All domains use same OWSM policy store (MDS-WSM).
  • Supports large numbers of deployed composites in multiple domains.
  • Single URL for Human Workflow end users.
  • Single URL for BAM end users.
Drawbacks
  • Multiple domains to manage and configure.
  • Multiple Admin servers (single view requires use of Grid Control)
  • Multiple Admin servers/WSM clusters waste resources.
  • Additional homes needed to enjoy benefits of separate patching.
  • Cross domain trust needs setting up to simplify cross domain interactions.
  • Human Workflow needs to be specially configured to point to shared services domain.
Summary

The alternatives in this blog allow for patching to have different impacts, depending on the model chosen.  Each organization must decide the tradeoffs for itself.  One extreme is to go for the shared services model and have one domain per SOA application.  This requires a lot of administration of the multiple domains.  The other extreme is to have a single super domain.  This makes the entire enterprise susceptible to an outage at the same time due to patching or other domain level changes.  Hopefully this blog will help your organization choose the right model for you.


Oracle 11204 Clusterware upgrade - ASM glitch

Syed Jaffar - Tue, 2014-08-19 04:01
Yet another tough challenge was thrown at my team right after the disaster recovery (DR) simulation drill we performed barely a couple of weeks ago. The new task (challenge) in hand is to upgrade the existing four cluster environments from 11.2.0.2 to 11.2.0.4, as Oracle has already stopped supporting v11.2.0.2.

Although we had a successful 3-node upgrade last week, we encountered ASM upgrade trouble whilst running rootupgrade.sh in a new cluster environment (7 nodes). The following error was reported during the course of the rootupgrade.sh script execution:

CRS-2672: Attempting to start 'ora.asm' on 'node01' 
CRS-5017: The resource action "ora.asm start" encountered the following error: 
ORA-48108: invalid value given for the diagnostic_dest init.ora parameter 
ORA-48140: the specified ADR Base directory does not exist [/u00/app/11.2.0/grid/dbs/{ORACLE_BASE}] 
ORA-48187: specified directory does not exist 
HPUX-ia64 Error: 2: No such file or directory 
Additional information: 1
CRS-2674: Start of 'ora.asm' on 'node01' failed
CRS-2679: Attempting to clean 'ora.asm' on 'node01'
CRS-2681: Clean of 'ora.asm' on 'node01' succeeded 
CRS-4000: Command Start failed, or completed with errors. 

When we tried to start up the ASM instance manually from the SQL*Plus prompt, the following errors were thrown:

SQL> 
ORA-32004: obsolete or deprecated parameter(s) specified for ASM instance 
ORA-48108: invalid value given for the diagnostic_dest init.ora parameter 
ORA-48140: the specified ADR Base directory does not exist [/u00/app/11.2.0/grid/dbs/{ORACLE_BASE}] 
ORA-48187: specified directory does not exist 
HPUX-ia64 Error: 2: No such file or directory

Sadly, there wasn't much info available about the nature of this problem. As usual, after an hour of trying different options, we opened an SR with Oracle Support and agreed to roll back the upgrade on the node where the rootupgrade.sh script had failed. Luckily, this was the first node we tried, and the other 6 nodes were still running fine. Even after rolling back to the previous cluster version, the ASM instance error persisted.

To resolve the ASM instance startup issue, the following actions were taken (a verification sketch follows the list):

  • export diagnostic_dest=/u00/app/oracle
  • From an active ASM instance on another node, executed the following statement:
    • SQL> ALTER SYSTEM STOP ROLLING MIGRATION;
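
As a quick sanity check after stopping the rolling migration, something like the following can be run from any node. This is my own sketch, not from the SR; it assumes the documented sys_cluster_properties SYS_CONTEXT namespace and the standard gv$instance view:

-- Verify the rolling migration state; 'Normal' is expected once
-- rolling migration has been stopped.
SELECT SYS_CONTEXT('sys_cluster_properties', 'cluster_state') AS cluster_state
FROM dual;

-- Confirm the version and status of every ASM instance in the cluster.
SELECT instance_name, version, status FROM gv$instance;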

Cause:
The problem that caused the ASM instance startup issue was reported/logged as a known bug (17449823).

Workaround:
According to MOS Doc ID 1598959.1, the bug is still being worked on by the development team; they suggest the following workaround on each node just before running the rootupgrade.sh script:
  • mkdir <New-GI-HOME>/dbs/{ORACLE_BASE}
Third attempt successful
The upgrade failed in the first 2 attempts; the 3rd attempt succeeded, and we managed to upgrade all 7 nodes from 11.2.0.2 to 11.2.0.4. We also learnt that CRS_HOME, ORACLE_HOME, and ORACLE_BASE had not been unset before runInstaller was initiated. In the 3rd attempt, with those variables unset, the upgrade went through successfully.

Addendum (24-Aug-2014)
A couple of new challenges were encountered in the latest upgrade task, on 10 nodes.

  1. The OUI window from which runInstaller was initiated got closed when the PC rebooted.
  2. Although the {ORACLE_BASE} directory had been created under the new Grid home, the issue kept recurring.
Here is the solution:
  1. How to Complete 11gR2 Grid Infrastructure Configuration Assistant (Plug-in) if OUI is not Available (Doc ID 1360798.1)
  2. Ensure diagnostic_dest is updated in the ASM spfile to the new location before running rootupgrade.sh.
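
For item 2, here is a minimal sketch (my own, assuming this post's ORACLE_BASE of /u00/app/oracle; substitute your own path) to run against the ASM instance:

-- Point diagnostic_dest at a directory that actually exists, in the
-- spfile for all instances, before running rootupgrade.sh.
ALTER SYSTEM SET diagnostic_dest = '/u00/app/oracle' SCOPE=SPFILE SID='*';

-- Verify what the running instance currently uses.
SHOW PARAMETER diagnostic_dest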


References:

  • Things to Consider Before Upgrading to 11.2.0.3/11.2.0.4 Grid Infrastructure/ASM (Doc ID 1363369.1)
  • Things to Consider Before Upgrading to 11.2.0.4 to Avoid Poor Performance or Wrong Results (Doc ID 1645862.1)
  • GI rootupgrade.sh on last node: ASM rolling upgrade action failed (Doc ID 1598959.1)
  • Bug 17449823




Accessing Database Photos from Query Access Service

Jim Marion - Tue, 2014-08-19 00:47

I have been working with the PeopleTools 8.54 REST Query Access Services. I would absolutely LOVE them if they returned JSON instead of XML. With a little help from x2js I am able to work around this "shortcoming." One specific challenge I faced was accessing image data. For example, with PeopleSoft query I can see who has photos in PS_EMPL_PHOTO, but I can't see the actual uploaded photo. With a little help from Oracle and a query expression, however, I can convert the photo blob into base64:

SELECT UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(EMPLOYEE_PHOTO))
FROM PS_EMPL_PHOTO
WHERE EMPLID = 'KU0003'

The only problem with this approach is that the Oracle database has a maximum size limit on the data that can be encoded, and most of the photos I have seen exceed that maximum. The way I chose to work around this limitation is to substring the blob and encode it in fragments. I create a separate column for each fragment and then concatenate them together in the REST client. Here is some sample SQL from a PeopleSoft query; each of the CASE statements is a query expression.

SELECT
CASE
WHEN DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) > 1455 THEN UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(DBMS_LOB.SUBSTR(EMPLOYEE_PHOTO, 1455, 1)))
WHEN DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) <= 1455 THEN UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(EMPLOYEE_PHOTO))
END AS C1,
CASE
WHEN DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) > 2910 THEN UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(DBMS_LOB.SUBSTR(EMPLOYEE_PHOTO, 1455, 1456)))
WHEN DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) <= 2910 AND DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) > 1455 THEN UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(DBMS_LOB.SUBSTR(EMPLOYEE_PHOTO, DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) - 1455, 1456)))
END AS C2,
CASE
WHEN DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) > 4365 THEN UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(DBMS_LOB.SUBSTR(EMPLOYEE_PHOTO, 1455, 2911)))
WHEN DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) <= 4365 AND DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) > 2910 THEN UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(DBMS_LOB.SUBSTR(EMPLOYEE_PHOTO, DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) - 2910, 2911)))
END AS C3,
CASE
WHEN DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) > 5820 THEN UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(DBMS_LOB.SUBSTR(EMPLOYEE_PHOTO, 1455, 4366)))
WHEN DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) <= 5820 AND DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) > 4365 THEN UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(DBMS_LOB.SUBSTR(EMPLOYEE_PHOTO, DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) - 4365, 4366)))
END AS C4,
CASE
WHEN DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) > 7275 THEN UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(DBMS_LOB.SUBSTR(EMPLOYEE_PHOTO, 1455, 5821)))
WHEN DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) <= 7275 AND DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) > 5820 THEN UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(DBMS_LOB.SUBSTR(EMPLOYEE_PHOTO, DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) - 5820, 5821)))
END AS C5
FROM PS_EMPL_PHOTO
WHERE EMPLID = 'KUL704'
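
A side note on the chunk size, plus a helper query (my own addition, not part of the original approach): 1455 is a multiple of 3, so each fragment base64-encodes independently with no padding characters at fragment boundaries, and 1455 bytes encode to 1940 characters, comfortably within SQL VARCHAR2 limits. Something like this shows which photos need chunking and how many columns they require:

-- Hypothetical helper: how big is each photo, and how many
-- 1455-byte fragments would it need?
SELECT EMPLID,
       DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) AS photo_bytes,
       CEIL(DBMS_LOB.GETLENGTH(EMPLOYEE_PHOTO) / 1455) AS fragments_needed
FROM PS_EMPL_PHOTO
ORDER BY photo_bytes DESC;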

On the client I use something like this:

var data = "data:image/jpeg;base64," + columns.join("");

The end result is something like this (right-click to see base64 data):

[Image: Mikko Jill's photo, rendered from the base64 data]

What Is Oracle Elapsed Time And Wall Time With A Parallelism Twist

In this post I'm focusing on Oracle Database SQL elapsed time, adding parallelism into the mix and then revisiting wall time. What initially seems simple can take some very interesting twists!

If you are into tuning Oracle Database systems, you care about time. And if you care about time, then you need to understand the most important time parameters: what they are, their differences, how they relate to each other and how to use them in your performance tuning work.

A couple weeks ago I wrote about Oracle DB Time, non-idle wait time, and server process CPU consumption (DB CPU) time. If you have not read that posting, HERE is the link. It must be a good read because it quickly became my most viewed post ever! In this posting, the focus is SQL elapsed time, parallelism, and again wall time. Enjoy!

Quick Review
In my previous related post, I covered non-idle wait time, DB CPU, and DB Time. Here is a very quick summary of each.

Non-Idle Wait Time occurs when an Oracle process is not consuming CPU: the session pauses (i.e., waits), and Oracle considers the wait time important for performance tuning. An example of a non-idle wait event is direct path read temp. Examples of idle wait events are SQL*Net message from client and pmon timer.

DB CPU is Oracle server/foreground/shadow process CPU consumption. It does not include Oracle background process CPU consumption.

DB Time is DB CPU plus Non-Idle Wait Time. Remember that DB Time does not include background process CPU consumption, and Oracle Corporation determines which wait events are considered idle.
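
If you want to see these buckets on a live system, here is a minimal sketch of mine against the standard time model view (values are reported in microseconds):

-- Instance-wide time model, converted to seconds.
-- 'DB time' covers foreground DB CPU plus non-idle wait time;
-- 'background cpu time' is tracked separately, as noted above.
SELECT stat_name, ROUND(value / 1e6, 1) AS seconds
FROM v$sys_time_model
WHERE stat_name IN ('DB time', 'DB CPU', 'background cpu time');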

Elapsed Time
Elapsed Time (ET) is all DB Time related to a defined task. A "defined task" could be a SQL statement, a group of SQL statements, a PL/SQL procedure, a batch job, etc. It is whatever makes sense in your tuning situation.

The elapsed time for a SQL_ID can be found in v$sql. But be careful, because that elapsed time covers all of the SQL_ID's executions. Thankfully, there is also an "executions" column in v$sql.
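
For example, here is a quick sketch (mine, not one of the OSM scripts) that pulls both the total and the per-execution elapsed time for a single SQL_ID; v$sql reports times in microseconds:

-- Total and average elapsed time for one SQL_ID (:sql_id is a bind placeholder).
SELECT sql_id,
       executions,
       ROUND(elapsed_time / 1e6, 2) AS total_elapsed_sec,
       ROUND(cpu_time / 1e6, 2) AS total_cpu_sec,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 2) AS avg_elapsed_sec
FROM v$sql
WHERE sql_id = :sql_id;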


[Screenshot: AWR report, "SQL ordered by Elapsed Time" section]

Elapsed time is displayed in a number of areas within an Oracle Database AWR and Statspack report. Looking at the above screenshot, the "top" elapsed time SQL has an elapsed time of 268561 seconds. This means that over the AWR report's snapshot interval, for all of this SQL's executions, its total DB Time is 268561 seconds. Said another way, if we were to add up all this SQL's DB CPU and non-idle wait time for all its executions within the snapshot interval, the value should be 268561.

There is a lot of great information provided in the AWR and Statspack SQL reports. For example, because the elapsed time and the CPU time (DB CPU) are shown above, we can calculate the non-idle wait time for the "top" elapsed time SQL ID.

non idle wait time = elapsed time - cpu time
268465 = 268561 - 96

For the "top" elapsed SQL, its elapsed time 268561 and it's DB CPU is 96 therefore its non-idle wait time is 268465. Wow! This statement has tons of associated wait time compared to CPU consumption time.

But it gets even better! Because the total Elapsed Time and the total number of Executions over the snapshot interval are displayed, we can determine the average elapsed time!

average elapsed time = total elapsed time / executions
7460.03 = 268561 / 36

Do not be deceived! The average elapsed time is unlikely to be what the user is experiencing. Two possible causes of this deception are skewed elapsed times and parallelism.

For most DBAs this is unexpected. It also causes performance perception problems, yet solutions are available to understand what's really going on.

I've spent so much time researching this topic, and seen it increase my consulting value, that I've posted a number of blog entries on the subject. Plus, I created an OraPub Online Institute seminar focused specifically on it, called Using Skewed Performance Data To Your Advantage. Check it out; I'm really proud of how it turned out. I also have a couple of OSM scripts dedicated to this topic, sqlelget[11].sql.

Revisiting Wall Time With A Parallelism Twist
Now it's time to put this all together.

DB CPU is the Oracle server process CPU consumption.

Non-Idle Wait Time (NIWT) is the time when an Oracle process cannot consume CPU and must pause, and we care about this time.

DB Time is the Oracle server process CPU consumption and all non-idle wait time.

Elapsed Time (ET) is the sum of all DB Time related to a task, such as a SQL_ID.

Wall Time is what we hope the user experiences. I'll assume there is no time gap between Oracle and the user; therefore, the wall time will equal the user's experience.

Effective Parallelism is the effective number of Oracle parallel slaves or some other form of parallelism, such as designed-in application parallelism. (For simplicity, I'm only going to mention Oracle parallel query.) If Oracle parallel query is not involved, then the effective parallelism is one. If two parallel query slaves are involved, then the effective parallelism will be a little less than 2.

Parallelism can reduce wall time because we can simultaneously "burn time" in multiple places. For example, 60 seconds of elapsed time with a process running serially results in a wall time of 60 seconds. But if we have two parallel query slaves, while the elapsed time (i.e., all the DB Time) is still 60 seconds (plus some overhead time), the wall time will be around 30 seconds (plus some overhead time).

The math is really simple...that is until you factor in scalability (i.e., the overhead), which I won't. If you're interested, read the last chapter of my book, Forecasting Oracle Performance.

Let's simplify this by using some mathematical notation.

DB Time = DB CPU + NIWT

Elapsed Time = Sum of DB Time

Wall Time = Elapsed Time / Effective Parallelism

Pretty straightforward, eh? Below is a short video clip summarizing the key time parameters, taken from the OraPub Online Institute seminar, Tuning Oracle Using an AWR Report - Part 2. If you can't see the video, click HERE to watch it on YouTube.



Test Your Knowledge
True or False? If the total elapsed time is 60 seconds and parallel query is not involved, the total wall time will also be 60 seconds. True

True or False? If the elapsed time per execution is 60 seconds and the wall time is 30 seconds, then parallel query is involved. True

True or False? (Bonus question, yet very important to understand.) If the elapsed time per execution is 60 seconds and two PQ slaves are involved, then the wall time will be 30 seconds.

The last question is false because there is overhead when parallelizing. Parallelism is not free. Because of this, the wall time will hopefully drop to perhaps 35 seconds; that 5 seconds is the parallelization overhead.
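
To put rough numbers on this from a live system, here is a sketch of mine (not an OSM script) that estimates effective parallelism from v$sql's px_servers_executions column and derives a wall time estimate. It ignores the parallelization overhead just discussed, so the real wall time will be somewhat higher:

-- Estimate effective parallelism and per-execution wall time for one SQL_ID.
-- GREATEST(..., 1) treats serial executions as an effective parallelism of 1.
SELECT sql_id,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 2) AS avg_elapsed_sec,
       GREATEST(px_servers_executions / NULLIF(executions, 0), 1) AS est_parallelism,
       ROUND((elapsed_time / NULLIF(executions, 0) / 1e6)
             / GREATEST(px_servers_executions / NULLIF(executions, 0), 1), 2) AS est_wall_sec
FROM v$sql
WHERE sql_id = :sql_id;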

Coming Up Next: Video Proof!
While the above may seem correct, I ran some SQL statements and captured the relevant time statistics. There is quite a bit of detail and I ran two different tests, so I'll post that in a week or two.

Thanks for reading,

Craig.
Categories: DBA Blogs
