Feed aggregator

APEX Summer School 2015 - Online Webinars

Dietmar Aust - Mon, 2015-04-20 09:38
As part of the APEX Summer School 2015 (Twitter hashtag: #apexsummer2015) we will run 9 free webinars on APEX 5 with experts from the German-speaking community:



Many thanks to Carsten Czarski for the invitation and the organization! Thanks as well for the great webinar homepage, built 100% with out-of-the-box APEX 5. A simple yet effective example of what is possible with APEX 5 by default:

APEX 5 has been production-ready since last week and can be downloaded here.

There are more APEX events coming up, so have a look.

It is going to be a hot APEX summer ;).



Enjoy,
~Dietmar.

Destroying The Moon

Scott Spendolini - Mon, 2015-04-20 09:18
Just under three years ago, I joined Enkitec when they acquired Sumneva.  The next three years brought a whirlwind of change and excitement - new products, additional training, and expanding the APEX practice from an almost nonexistent state to one of the best in the world.

Like all good things, that run has come to an end.  Last Friday was my final day at Accenture, and I am once again back in the arena of being self-employed.  Without any doubt, I am leaving behind some of the best minds in the Oracle community.  However, I am not leaving behind the new friendships that I have forged over the past three years.  Those will come with me and hopefully remain with me for many, many years to come.

Making the jump for the second time is not nearly as scary as it was the first time, but it's still an emotional move.  So what, specifically, is next for me?  That's a good question, as the answer is not 100% clear yet.  There are a lot of possibilities, and hopefully things will be a lot more defined by the end of the week.

#letswreckthistogether

PeopleSoft's paths to the Cloud - Part III

Javier Delgado - Mon, 2015-04-20 07:49
In my previous posts on this series, I have covered how cloud computing could be used to reduce costs and maximize the flexibility of PeopleSoft Development and Production environments. In both cases, I focused on one specific area of cloud computing, Infrastructure as a Service (IaaS).

Today I will explain what kind of benefits can be expected from using another important area: Database as a Service (DBaaS). Instead of using an IaaS-provisioned server on which you install and maintain your database yourself, DBaaS providers take responsibility for installing and maintaining the database.

There are many players in this market, including Amazon, Microsoft and Oracle. The service features may differ, but in a nutshell, they normally offer these capabilities:

  • Backups: database backups are automated, and you can restore point-in-time backups at any moment. You can also decide when to take a snapshot of your database, which may eventually be used to create another database instance (for example, to copy your Production database into the User Acceptance environment).
  • High Availability: while some IaaS providers do not support high-availability database solutions such as Oracle RAC (for instance, it is not supported by Amazon EC2), many DBaaS providers include high availability by default.
  • Contingency: some providers maintain a standby copy of your database in another data center. This allows you to quickly restore your system in case the original data center's services are lost.
  • Patching: although you can decide when to apply a database patch, the DBaaS provider applies it for you. In many cases, you can turn on automatic patching to make sure your database engine is always up to date.
  • Monitoring: providers give system administrators access to a management console, in which they can monitor the database behavior and add or remove resources as needed.
  • Notifications: to simplify the monitoring effort, you can normally set up notifications to be delivered by email and/or SMS for a list of events, which may include CPU usage, storage availability, etc.

From my point of view, these services offer significant advantages for PeopleSoft customers, particularly if your current architecture does not support all the previously mentioned services or you do not have the right DBA skills in-house. Even if your organization does not fall into these categories, the scalability and elasticity of DBaaS providers are very difficult for most internal IT organizations to match.

In any case, if you are interested in using Database as a Service for your PeopleSoft installation, make sure you correctly evaluate what each provider can give you.



Debugging PeopleSoft Absence Management Forecast

Javier Delgado - Mon, 2015-04-20 07:47
Forecasting is one of the most useful PeopleSoft Absence Management functionalities. It allows users to know what the resulting balance is going to be when entering an absence. The alternative is to wait until the Global Payroll calendar group is calculated, which is naturally far from an online calculation.

Although this is a handy functionality, the calculation process does not always return the expected results. For some specific needs, the system elements FCST ASOF DT, FCST BGN DT and FCST END DT may be needed. These elements are null for normal Global Payroll runs, so the formulas may behave differently in those runs than in the actual forecast execution. If you ever hit a calculation issue in the forecast process that cannot be solved by looking at the element definitions, you may be stuck.

When this type of issue is found in a normal Global Payroll execution, one handy feature is to enable the Debug information and then review the Element Resolution Chain page. This page shows the step-by-step calculation of each element and is particularly helpful in identifying how an element was calculated.

Unfortunately, this information is not available in the standard forecast functionality. Luckily, it can be enabled using a tiny customisation.

In PeopleSoft HCM 9.1, the forecast functionality is executed from two different places:

DERIVED_GP.FCST_PB.FieldFormula - Abs_ForecastSetup function
FUNCLIB_GP_ABS.FCST_PB.FieldFormula - Abs_ForecastExec function

In both PeopleCode events, you will find a statement like this one:

SQLExec("INSERT INTO PS_GP_RUNCTL(OPRID, RUN_CNTL_ID, CAL_RUN_ID, TXN_ID, STRM_NUM, GROUP_LIST_ID, RUN_IDNT_IND, RUN_UNFREEZE_IND, RUN_CALC_IND, RUN_RECALC_ALL_IND, RUN_FREEZE_IND, SUSP_ACTIVE_IND, STOP_BULK_IND, RUN_FINAL_IND, RUN_CANCEL_IND, RUN_SUSPEND_IND, RUN_TRACE_OPTN, RUN_PHASE_OPTN, RUN_PHASE_STEP, IDNT_PGM_OPTN, NEXT_PGM, NEXT_STEP, NEXT_NUM, CANCEL_PGM_OPTN, NEXT_EMPLID, UPDATE_STATS_IND, LANGUAGE_CD, EXIT_POINT, SEQ_NUM5, UE_CHKPT_CH1, UE_CHKPT_CH2, UE_CHKPT_CH3, UE_CHKPT_DT1, UE_CHKPT_DT2, UE_CHKPT_DT3, UE_CHKPT_NUM1, UE_CHKPT_NUM2, UE_CHKPT_NUM3,PRC_NUM,OFF_CYCLE) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,%datein(:33),%datein(:34),%datein(:35),:36,:37,:38,:39,:40)", &OprID, &RunCntl_ID, &CalcRunId, &TxnID, 0, &SpaceFiller, "Y", "N", "Y", "N", "N", "N", &ApprByInd, "N", "N", "N", "N", &RunPhaseOptN, &RunPhaseStep, &SpaceFiller, &SpaceFiller, 0, 0, &SpaceFiller, &SpaceFiller, "N", "ENG", &SpaceFiller, 0, &SpaceFiller, &SpaceFiller, &SpaceFiller, "", "", "", 0, 0, 0, 0, "N");

You will notice that the RUN_TRACE_OPTN field is set to "N". If you use "A" instead as the trace option value, you will obtain the Element Resolution Chain:

SQLExec("INSERT INTO PS_GP_RUNCTL(OPRID, RUN_CNTL_ID, CAL_RUN_ID, TXN_ID, STRM_NUM, GROUP_LIST_ID, RUN_IDNT_IND, RUN_UNFREEZE_IND, RUN_CALC_IND, RUN_RECALC_ALL_IND, RUN_FREEZE_IND, SUSP_ACTIVE_IND, STOP_BULK_IND, RUN_FINAL_IND, RUN_CANCEL_IND, RUN_SUSPEND_IND, RUN_TRACE_OPTN, RUN_PHASE_OPTN, RUN_PHASE_STEP, IDNT_PGM_OPTN, NEXT_PGM, NEXT_STEP, NEXT_NUM, CANCEL_PGM_OPTN, NEXT_EMPLID, UPDATE_STATS_IND, LANGUAGE_CD, EXIT_POINT, SEQ_NUM5, UE_CHKPT_CH1, UE_CHKPT_CH2, UE_CHKPT_CH3, UE_CHKPT_DT1, UE_CHKPT_DT2, UE_CHKPT_DT3, UE_CHKPT_NUM1, UE_CHKPT_NUM2, UE_CHKPT_NUM3,PRC_NUM,OFF_CYCLE) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,%datein(:33),%datein(:34),%datein(:35),:36,:37,:38,:39,:40)", &OprID, &RunCntl_ID, &CalcRunId, &TxnID, 0, &SpaceFiller, "Y", "N", "Y", "N", "N", "N", &ApprByInd, "N", "N", "N", "A", &RunPhaseOptN, &RunPhaseStep, &SpaceFiller, &SpaceFiller, 0, 0, &SpaceFiller, &SpaceFiller, "N", "ENG", &SpaceFiller, 0, &SpaceFiller, &SpaceFiller, &SpaceFiller, "", "", "", 0, 0, 0, 0, "N");

After making this change, you will notice that the GP_AUDIT_TBL table starts to be populated with the Element Resolution Chain information. However, it may still not be visible from the page itself, because some tables are only populated temporarily during the forecast execution. In order to enable access for forecast runs, you will need to customise the GP_AUDIT_SEG_VW search record by adding the additional lines (the UNION ALL branch in the SQL below, shown in italics in the original post) to the SQL definition:

SELECT DISTINCT A.CAL_RUN_ID 
 , A.EMPLID 
 , A.EMPL_RCD 
 , A.GP_PAYGROUP 
 , A.CAL_ID 
 , A.ORIG_CAL_RUN_ID 
 , B.RSLT_SEG_NUM 
 , A.FICT_CAL_ID 
 , A.FICT_CAL_RUN_ID 
 , A.FICT_RSLT_SEG_NUM 
 , B.RSLT_VER_NUM 
 , B.RSLT_REV_NUM 
 , B.SEG_BGN_DT 
 , B.SEG_END_DT 
  FROM PS_GP_AUDIT_TBL A 
  , PS_GP_PYE_SEG_STAT B 
 WHERE A.CAL_RUN_ID = B.CAL_RUN_ID 
   AND A.EMPLID = B.EMPLID 
   AND A.EMPL_RCD = B.EMPL_RCD 
   AND A.GP_PAYGROUP = B.GP_PAYGROUP 
   AND A.CAL_ID = B.CAL_ID 
  UNION ALL 
 SELECT DISTINCT A.CAL_RUN_ID 
 , A.EMPLID 
 , A.EMPL_RCD 
 , A.GP_PAYGROUP 
 , A.CAL_ID 
 , A.ORIG_CAL_RUN_ID 
 , A.RSLT_SEG_NUM 
 , A.FICT_CAL_ID 
 , A.FICT_CAL_RUN_ID 
 , A.FICT_RSLT_SEG_NUM 
 , 1 
 , 1 
 , NULL 
 , NULL 
  FROM PS_GP_AUDIT_TBL A 
 WHERE NOT EXISTS ( 
 SELECT 'X' 
  FROM PS_GP_PYE_SEG_STAT B 
 WHERE A.CAL_RUN_ID = B.CAL_RUN_ID 
   AND A.EMPLID = B.EMPLID 
   AND A.EMPL_RCD = B.EMPL_RCD 
   AND A.GP_PAYGROUP = B.GP_PAYGROUP 
   AND A.CAL_ID = B.CAL_ID)

I hope you find this useful. Should you have any questions or doubts, I will be happy to assist.

Note: Keep in mind that it is not a good idea to leave the Debug information enabled in Production environments, at least not permanently. The time needed to run a forecast calculation with this information is significantly higher than without it. So, if you do not want to hit performance issues, my recommendation is to store a flag in a table indicating whether the Element Resolution Chain for forecasts should be enabled or not.
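A minimal sketch of such a flag table (the table and field names below are made-up examples, not delivered PeopleSoft objects):

-- Hypothetical custom flag table; names are assumptions, not delivered objects
CREATE TABLE PS_Z_GP_FCST_TRACE
   (TRACE_FCST_IND VARCHAR2(1) NOT NULL);

-- 'N' = normal forecast run, 'A' = populate the Element Resolution Chain
INSERT INTO PS_Z_GP_FCST_TRACE VALUES ('N');

The customised PeopleCode in Abs_ForecastSetup and Abs_ForecastExec could then read this flag and pass its value, instead of a hard-coded "N" or "A", as the RUN_TRACE_OPTN bind value in the SQLExec shown above.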


Function-Based Indexes And CURSOR_SHARING = FORCE

Randolf Geist - Mon, 2015-04-20 03:00
In general it is known that Function-Based Indexes (FBIs) can no longer be used by the optimizer if the expression contains literals and CURSOR_SHARING = FORCE / SIMILAR (deprecated) turns those literals into bind variables. Jonathan Lewis described the issue quite a while ago here in detail.

In a recent OTN thread this issue was raised again, but to my surprise, when I played around with a test case that mimicked the OP's problem query, I found that (certain) Oracle versions have some built-in logic that enables FBI usage in certain cases where you would expect them not to be usable.

If you test the following code on versions from 10.2.0.4 (possibly earlier) up to and including version 11.2.0.3 then you'll notice some interesting details:


create table t
as
select * from all_objects;

create index t_idx on t (owner || ' ' || object_name);

exec dbms_stats.gather_table_stats(null, 't')

set echo on linesize 200 pagesize 0

alter session set cursor_sharing = force;

select /*+ full(t) */ * from t where owner || ' ' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);

select /*+ index(t) */ * from t where owner || ' ' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);

select /*+ index(t) */ * from t where owner || 'A' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);
Here is the relevant output I got from 11.2.0.1 for example:

SQL> alter session set cursor_sharing = force;

Session altered.

SQL>
SQL> select /*+ full(t) */ * from t where owner || ' ' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID ar3tw7r1rvawk, child number 0
-------------------------------------
select /*+ full(t) */ * from t where owner || :"SYS_B_0" || object_name
= :"SYS_B_1"

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 284 (100)| |
|* 1 | TABLE ACCESS FULL| T | 1 | 117 | 284 (2)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"||' '||"OBJECT_NAME"=:SYS_B_1)


19 rows selected.

SQL>
SQL> select /*+ index(t) */ * from t where owner || ' ' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID 6kzz3vw5x8x3b, child number 0
-------------------------------------
select /*+ index(t) */ * from t where owner || :"SYS_B_0" ||
object_name = :"SYS_B_1"

Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 117 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | T_IDX | 1 | | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T"."SYS_NC00016$"=:SYS_B_1)


20 rows selected.

SQL>
SQL> select /*+ index(t) */ * from t where owner || 'A' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID 6kzz3vw5x8x3b, child number 1
-------------------------------------
select /*+ index(t) */ * from t where owner || :"SYS_B_0" ||
object_name = :"SYS_B_1"

Plan hash value: 3778778741

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 52472 (100)| |
|* 1 | TABLE ACCESS BY INDEX ROWID| T | 724 | 84708 | 52472 (1)| 00:10:30 |
| 2 | INDEX FULL SCAN | T_IDX | 72351 | | 420 (1)| 00:00:06 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"||:SYS_B_0||"OBJECT_NAME"=:SYS_B_1)


20 rows selected.
Looking at the statement text that results from "CURSOR_SHARING = force" we can spot the expected bind variables instead of the literals, and this should result in a corresponding predicate that doesn't match the FBI expression. However, when looking at the filter expression in the predicate section (when forcing a full table scan) we can spot something interesting: It still shows the literal, which doesn't correspond to the predicate of the rewritten query text.

The next execution shows that the FBI really can be used despite the bind variable replacement taking place, and the final execution shows that cursor sharing works correctly in the sense that a new child cursor got created for the same SQL text with a different plan and different predicate section when using a different literal in the original SQL text. V$SQL_SHARED_CURSOR shows "HASH_MATCH_FAILED", which is described as "No existing child cursors have the unsafe literal bind hash values required by the current cursor", which makes sense and probably means that the corresponding bind variable is marked as "unsafe" internally.
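As a sketch, you can check this non-sharing reason yourself by querying V$SQL_SHARED_CURSOR for the SQL_ID from the output above (the HASH_MATCH_FAILED column is available in recent 11.2 releases):

-- why didn't the statement with the different literal share the existing child cursor?
select child_number, hash_match_failed
  from v$sql_shared_cursor
 where sql_id = '6kzz3vw5x8x3b';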

This optimisation only shows up if there is a suitable FBI - if there's no corresponding expression, the SQL text and the predicate section match. Furthermore it only supports certain expressions - Jonathan's example shows that in general these rewrites do prevent FBI usage. And obviously it ceases to work in 11.2.0.4 and 12c. Whether this is a bug or a feature I don't know, but since it only seems to apply to certain expressions it's probably not that relevant anyway.

As Jonathan points out in his note you can always work around the general problem by hiding the expression in a view, and since 11g of course a proper virtual column definition is the better approach, which doesn't expose this problem either.
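For illustration, here is a minimal sketch of that virtual column approach against the table T from above (the column and index names are made up; the FBI is dropped first to avoid a duplicate expression):

drop index t_idx;

-- virtual column capturing the expression (11g and later)
alter table t add (owner_name as (owner || ' ' || object_name));

create index t_vc_idx on t (owner_name);

-- CURSOR_SHARING = FORCE still replaces the literal with a bind variable,
-- but the predicate now references the virtual column directly, so the
-- index remains usable:
select * from t where owner_name = 'BLA';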

Even better would be the proper usage of bind variables and not using forced cursor sharing, but there are still many installations out there that rely on that feature.

Oracle HCM Cloud and the iGeneration: Not Your Parents’ Applications

Usable Apps - Sun, 2015-04-19 13:56

“Harder Than You Think” is one awesome rap anthem by Public Enemy (I’ve always suspected Flavor Flav was a wearable technology innovator with that clock).

There’s a particular line in the song that resonates soundly with the Oracle Applications User Experience (OAUX) capacity for empathy: to "bear witness". Witness to real people doing real tasks in real places. I used the line to open my session on Future User Experience (UX) at the Oracle HCM Day 2015 at Oracle Nederland in Utrecht.


Public Enemy: Harder Than You Think 

UX is harder than you think too, but our job is to make things easy for Oracle Applications Cloud users by simplifying things; hiding the complexity of technology and making convoluted business processes fade away.

The theme of HCM Day was the iGeneration: those technology-versed young adults about to enter the workforce with expectations and attitudes about job permanence and how talent is recruited and managed that have shattered the employment model status quo. So, to explain the OAUX fit with their world, I jumped right into Oracle co-CEO Mark Hurd’s “Welcome to Nobody Cares” HCM World keynote, research and insights into younger generations' use of technology, how work apps are more than a pretty face; and then into Oracle HCM Cloud, stealing a line from U2 about everything you know (about enterprise software) being wrong, along the way.

The Oracle Applications Cloud UX message for the iGeneration can be summed up as:

  1. We care. Winning in HCM is strategic for Oracle.
  2. This is not your parents’ software (Not that they wanted crapplications, either).
  3. Tech-savvy people need tech-savvy workplaces.
  4. User experience is a competitive must-have.
  5. The cloud has changed everything. Vendor and partner complacency about how people and business use software is dead. 
  6. Making things "Apple Easy" (h/t Richard Twelvetrees [@rg12t]) is no longer merely a dream for enterprise applications users thanks to simplicity and a design philosophy of Glance, Scan, Commit.

The Message 

To keep the performance real, I offered a glimpse of the UX that might come the iGeneration's way too. I shared some innovative concepts from the OAUX AppsLab team (@theappslab), our mobile Cloud apps, and the Oracle HCM Cloud, Oracle Sales Cloud and Oracle ERP Cloud designs for the forthcoming Apple Watch. I then showed live proof-of-concept demos of a glanceable UI for time entry microtransactions (using a Motorola Moto 360 smartwatch and location detection for mega convenience) and of Oracle Taleo interview evaluations and phone screener text (SMS) messaging built using Oracle partner Twilio's cloud API.


Glance UI time entry on Motorola Moto 360


Oracle Taleo Texting (SMS)

HCM Day was a great event, with over 130 attendees (including some 90 customers). It was a cool, fun, fast-paced learning event and an opportunity to build more relationships across the Benelux area. I was really wowed by the heads-up displays and 3D printing by main partner Deloitte too!


3D Print by Deloitte 

Special thanks to Conny Groen in 't Woud (Marketing Manager Applications, HCM Benelux) and Henry Barenholz (Senior Director, HCM Leader Benelux and Nordics) for asking me to share our insight and innovation at a brilliantly organized and orchestrated happening.

You can see more of the cool stuff I showed off at HCM Day in the free Oracle Applications Cloud User Experience Trends and Strategy eBook.

Remember, it’s not how you swipe or click. It’s how you work. Tell your parents.

The iGeneration already knows.

A Werken.FM podcast of the day is now available (in Dutch). 

Oracle BPM / SOA Instance Migration Script Issues

Jan Kettenis - Fri, 2015-04-17 11:29
In this blog posting I explain some changes you need to apply to get instance migration to work for the Oracle BPM 11g Suite.

While figuring out a couple things about instance migrations I have been doing a few Bundle Patch installations lately. Up to 11.1.1.7.0 + BP4 I had issues with migration even after applying patch 18025048. I just applied BP7 and installed the latest 11.1.1.7.0 SOA and BPM plugins for JDeveloper (those indicated in the readme), and I finally got rid of most but still not all java.lang.reflect.InvocationTargetException exceptions that I saw before in the migration feasibility report.

And then I had the annoying error below back again when trying to run the report from JDeveloper:

java.lang.NoClassDefFoundError: oracle/soa/management/util/CompositeInstanceFilter
    at java.lang.Class.getDeclaredMethods0(Native Method)


I might as well write down what I did about the issue, so that next time I don't have to try to remember what it was.

Up to BP4 I used the ant-composite-instance-migration.xml script because, contrary to what the documentation suggests, there was no /jdeveloper/bin/ant-bpm-migration.xml script. But at least since BP7 there is. However, neither works out-of-the-box (as was the case for all previous versions of the ant-composite-instance-migration.xml that I used), the problem being that some of the library paths are incorrect.

In the ant-composite-instance-migration.xml, instead of the next line:


It should read:


When using the ant-composite-instance-migration.xml it still complained about the composite not being compatible, while according to the documentation it should be (there are no non-durable BPEL processes or Mediators involved, and I am trying to migrate to a new revision of exactly the same code). So I thought I'd try my luck with the ant-bpm-migration.xml script instead.

Initially that gives another error:

java.lang.NoClassDefFoundError: oracle/jrf/PortabilityLayerException
    at oracle.soa.management.internal.ejb.EJBLocatorImpl.lookupBean(EJBLocatorImpl.java:817)


The issue here is that instead of the next lines:


It should have read:


Unfortunately it still reported the composite as not compatible. But that is something I will address in a future post (I hope).

JSON for APEX Developers (part 3): Querying JSON in Oracle DB 12c

Dimitri Gielis - Fri, 2015-04-17 10:34
In previous blog posts I talked about JSON for APEX Developers:
In this post I want to show how you store and query JSON data straight in the database.

To start, create a table to store the JSON object:

CREATE TABLE ssn_json
   (id          NUMBER GENERATED ALWAYS AS IDENTITY NOT NULL,
    date_loaded DATE,
    json_document CLOB
    CONSTRAINT ensure_json CHECK (json_document IS JSON));


I inserted a couple of records in the table:
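As a sketch, such an insert could look like this (the document content, with an analyses array and its attributes, is a made-up example):

INSERT INTO ssn_json (date_loaded, json_document)
VALUES (SYSDATE,
        '{"analyses":[{"name":"test1","result":"OK"},{"name":"test2","result":"NOK"}]}');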


Here's in more detail the JSON in the json_document column:

Now to query the JSON data I can do:
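For example, a minimal sketch of such a dot-notation query against the sample document inserted above (the attribute names are assumptions):

SELECT j.json_document.analyses
  FROM ssn_json j;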


You basically say j (table) . json_document (column) . analyses (first entry in JSON) . ... (other fields of the hierarchy you want to navigate to)
 
You see the data I get back is actually two records, as I get back the JSON array.

To go into the array and see for example the first record, I can use json_value:
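A sketch of such a json_value call (again, the attribute names are assumptions):

SELECT json_value(json_document, '$.analyses[0].result') AS first_result
  FROM ssn_json;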


Note that the array index starts at 0.

And finally to see both records that are in the JSON array I can make use of json_table like this:
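A sketch of such a json_table query, unnesting the array into relational rows (column names and paths are assumptions):

SELECT j.id, jt.name, jt.result
  FROM ssn_json j,
       json_table(j.json_document, '$.analyses[*]'
         COLUMNS (name   VARCHAR2(50) PATH '$.name',
                  result VARCHAR2(50) PATH '$.result')) jt;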


The above is just a glimpse of what you can do with JSON directly in the database (12c) by using SQL.
If you're interested in reading more about how to manipulate JSON in the database, have a look at the documentation; it contains a wealth of information with great examples.

As you can basically query the JSON with SQL, you can use this SQL in your APEX reports... in the coming days I'll show you a real case where I used the above techniques to do some interesting searches on the data.
Categories: Development

You Are Trying To Access a Page That Is No Longer Active.The Referring Page May Have Come From a Previous Session. Please Select Home To Proceed

Vikram Das - Wed, 2015-04-15 17:06
Shahed pinged me about this error.  This R12.1.3 instance had just been migrated from an old server to a new one. Once you logged in, this error would be displayed:

You Are Trying To Access a Page That Is No Longer Active.The Referring Page May Have Come From a Previous Session. Please Select Home To Proceed

The hits on support.oracle.com were not helpful, but one gave a clue that it might have something to do with the session cookie.  So I used Firefox to check the HTTP headers.  If you press Ctrl+Shift+K, you will get a panel at the bottom of the browser. Click on the Network tab, click on AppsLocalLogin.jsp, and on the right side of the pane you'll see a cookie tab.

The domain appearing in the cookie tab was from the old server.  So I checked:

select session_cookie_domain from icx_parameters;
olddomain.justanexample.com

So I nullified it:

update icx_parameters set session_cookie_domain=null;

commit;

Restarted Apache

cd $ADMIN_SCRIPTS_HOME
adapcctl.sh stop
adapcctl.sh start

No more error.  I was able to log in and so was Shahed.
Categories: APPS Blogs

Oracle APEX 5.0 released today

Dimitri Gielis - Wed, 2015-04-15 15:34
After 2.5 years of development, today is the day APEX 5.0 is publicly released and ready to be downloaded to install on your own environment.

In my view it's the best release ever. Not so much because of the new Page Designer - although that is definitely a piece of art and it increased productivity even further - but because it's the first time the whole of APEX got refreshed and every piece was put under the radar to see how it could be improved. All the small changes and the new UI, together with the Page Designer, make it a whole new development tool, without losing its strengths from before.

Also note that APEX 5.0 enables many new features built on top of Oracle Database 12c features, so if you're on that database, you'll see even more nice features.

If you're wondering whether you should hold off on upgrading to APEX 5.0 because you're afraid that your current APEX applications will break, I can only share that I upgraded many of my applications as part of EA/beta and most of my apps kept running without issues. As always you have to try your applications yourself, but the APEX development team spent a lot of time trying to keep things backwards compatible. But make sure to have a look at the APEX 5.0 release notes and known issues as they contain important information about changes, expected behaviour and workarounds.

You can develop online on apex.oracle.com or you can download APEX 5.0 and install into your own environment.
Categories: Development

Oracle Application Express 5 - The Unofficial Announcement

Joel Kallman - Wed, 2015-04-15 14:35
What started on a whiteboard in New York City more than 2 years ago is now finally realized.  I and the other members of the Oracle Application Express team proudly announce the release of Oracle Application Express 5.

The official blog posting and announcement is HERE.  But this is my personal blog, and the thoughts and words are my own, so I can be a bit more free.

Firstly, I don't ever want to see a release of Oracle Application Express take 2.5 years again, ever.  It's not good for Oracle, not good for Oracle Application Express, and certainly not good for the vast Oracle Application Express community.  We're going to strive, going forward, for a cadence of annual release cycles.  But with this said, I'm not about to apologize for the duration of the APEX 5 release cycle either.  It's broader and more ambitious than anything we've ever approached, and it happened the way it was supposed to happen.  Rather than say "redesigned", I'd prefer to use Shakeeb's words of "reimagined", because that's really what has transpired.  Not only has every one of the 1,945 pages that make up "internal APEX" (like the Application Builder) been visited, redesigned, and modernized, but the Page Designer is a radically different yet productive way to build and maintain your applications.  It takes time to iterate to this high level of quality.

At the end of the day, what matters most for developers is what they can produce with Oracle Application Express.  They'd gladly suffer through the non-Page Designer world and click the mouse all day, as long as what they produced and delivered made them a hero.  And I believe we have delivered on this goal of focusing on high-quality results in the applications you create.  I've seen my share of bad-looking APEX applications over the years, and with prior releases of APEX, we've essentially enabled the creation of these rather poor examples of APEX.  Not everyone is a Shakeeb or Marc.  I'm not.  But we've harnessed the talents of some of the brightest minds in the UI world, who also happen to be on the APEX development team, and delivered a framework that makes it easy for ordinary people like me to deliver beautiful, responsive and accessible applications, out-of-the-box.

What I'm most happy about is what this does for the Oracle Database.  I believe APEX 5 will make superheroes out of our Oracle Database and Oracle Database Cloud customers.  There is a massive wealth of functionality for application developers and data architects and citizen developers and everyone in-between, in the Oracle Database.  And all of it is a simple SQL or PL/SQL call away!  The Oracle Database is extraordinarily good at managing large amounts of data and helping people turn data into information.  And now, for customers to be able to easily create elegant UI and be able to beautifully visualize this information using Oracle Application Express 5, well...it's just an awesome combination.

I am blessed to work with some of the brightest, most focused, professional, talented, and yet humble people on the planet.  As my wife likes to say, they're all "quality people".  It truly takes an array of people who are deep in very different technologies to pull this off - Oracle Database design, data modeling, PL/SQL programming, database security, performance tuning, JavaScript programming, accessibility, Web security, HTML 5 design, CSS layout, graphic artistry, globalization, integration, documentation, testing, and on and on.  Both the breadth and depth of the talent to pull this off is staggering.

You might think that we get to take a breath now.  In fact, the fun only begins now and plenty of hard work is ahead for all of us.  But we look forward to the great successes of our many Oracle customers.  The #orclapex community is unrivaled.  And we are committed to making heroes out of every one of them.  That's the least we could do for the #orclapex community, such an amazingly passionate and vibrant collection of professionals and enthusiasts.

When anyone asks about the "watershed event" for Oracle Application Express, you can tell them that the day was April 15, 2015 - when Oracle Application Express 5 was released.

Joel

P.S.  #letswreckthistogether

Chrome and E-Business Suite

Vikram Das - Wed, 2015-04-15 13:23
Dhananjay came to me today.  He said that his users were complaining about forms not launching after upgrading to the latest version of Chrome. On launching forms they got this error:

/dev60cgi/oracle forms engine Main was not found on this server

I recalled that the Google Chrome team had announced that they would not support Java going forward. Googling the keywords chrome java brought up this page:

https://java.com/en/download/faq/chrome.xml#npapichrome

It states that:

NPAPI support by Chrome
The Java plug-in for web browsers relies on the cross platform plugin architecture NPAPI, which has long been, and currently is, supported by all major web browsers. Google announced in September 2013 plans to remove NPAPI support from Chrome by "the end of 2014", thus effectively dropping support for Silverlight, Java, Facebook Video and other similar NPAPI based plugins. Recently, Google has revised their plans and now state that they plan to completely remove NPAPI by late 2015. As it is unclear if these dates will be further extended or not, we strongly recommend Java users consider alternatives to Chrome as soon as possible. Instead, we recommend Firefox, Internet Explorer and Safari as longer-term options. As of April 2015, starting with Chrome Version 42, Google has added an additional step to configuring NPAPI based plugins like Java to run — see the section Enabling NPAPI in Chrome Version 42 and later below.
Enabling NPAPI in Chrome Version 42 and later
As of Chrome Version 42, an additional configuration step is required to continue using NPAPI plugins.
  1. In your URL bar, enter:
    chrome://flags/#enable-npapi 
  2. Click the Enable link for the Enable NPAPI configuration option.
  3. Click the Relaunch button that now appears at the bottom of the configuration page.
Developers and System administrators looking for alternative ways to support users of Chrome should see this blog, in particular "Running Web Start applications outside of a browser" and "Additional Deployment Options" section.
Once Dhananjay did the above steps, Chrome started launching forms again.  He quickly gave these steps to all his users who had upgraded to the latest version of Chrome (version 42) and it started working for them too.
Oracle doesn't certify E-Business Suite forms on Chrome.  Only the self-service pages of E-Business Suite are certified on Google Chrome.
Categories: APPS Blogs

Oracle Application Express 5.0 now available

Marc Sewtz - Wed, 2015-04-15 12:44
The result of a two-and-a-half-year engineering effort, Oracle Application Express 5.0 represents the greatest advancement of Oracle Application Express in its 10-year history.  Oracle Application Express 5.0 enables customers to develop, design and deploy beautiful, responsive, database-driven desktop and mobile applications using only a browser.  



Now that APEX 5.0 is finally available, it’s time to spread the word and to get everyone up to speed on what’s new in 5.0, how to make best use of the beautiful, new Universal Theme and how to get the most out of the incredibly powerful new Page Designer. 


So here are some of the events coming up in the near future, where you can learn all about APEX 5.0 and meet many of the APEX development team members in person:


APEXposed 2015

Montreal, Canada - May 6, 2015
  • Keynote: Oracle Application Express 5.0 (Marc Sewtz - 8:30 - 9:30am)
  • Oracle Application Express 5.0 Plug-In Enhancements (Patrick Maniraho - 1:00 - 2:00pm)
  • Transitioning to Oracle Application Express 5.0 (Marc Sewtz - 2:15 - 3:15pm)
Solihull, UK - May 14, 2015
  • Oracle Application Express 5.0 (Anthony Rayner - 2:25-3:35pm)
Düsseldorf, Germany - June 9 - 10, 2015
  • Keynote: Oracle Application Express 5.0 (Marc Sewtz - Jun-09 - 9:30 - 10:30am)
  • Beautiful UI with APEX 5 (Shakeeb Rahman - Jun-09 11:00 - 11:45am)
  • Der Oracle Application Express Entwicklungsprozess (Marc Sewtz - Jun-10 - 11:00 - 11:45am)
Plovdiv, Bulgaria - June 12 - 14, 2015
  • Oracle Application Express 5.0 New Features (Marc Sewtz)
  • Transitioning to Oracle Application Express 5.0 (Marc Sewtz)
ODTUG KScope 2015
Hollywood, FL, USA - June 21 - 25, 2015

June 21
  • APEX Episode 5:  A New Frontier (Joel Kallman, 8:30 - 9:30am)
  • Need for Speed: Page Designer (Patrick Wolf, 9:30 - 10:30am)
  • Interstellar: The Universal Theme (Shakeeb Rahman, 11:00am - 12:00pm)
  • The Fifth Element: HMTL5 + APEX5 Mobile (Marc Sewtz, 1:00 - 2:00pm)
  • The Matrix Reloaded: Interactive Report (Anthony Rayner, 2:00 - 3:00pm)
  • The Prestige: Converting to APEX 5.0 Universal Theme (David Peake, 3:15 – 4:15pm)
June 22
  • Transitioning to Oracle Application Express 5.0 (David Peake, 08:30am - 09:30pm)
  • Introduction to Oracle Application Express (David Peake, 09:45 -10:45am)
  • You Don't Lack APEX Skills; You Lack Oracle Skills (Joel Kallman, 2:00 - 3:00pm)
June 23
  • Self-Service Application Deployment in a Runtime Instance: This Is How We (Oracle) Do It... (Jason Straub, 4:45 - 5:45pm)
June 24
  • Oracle Application Express 5.0 New Features (Hilary Farrell, 9:45 - 10:45am)
  • Application Express 5.0: Features Nobody Else Will Tell You About (David Peake, 1:45 - 2:45pm)

... and be sure to join one of the numerous APEX Meetup groups worldwide - and if there isn't one close to where you live, then consider starting your own local APEX Meetup group - don't know how to do that? Check out the APEX Meetup site for helpful information:



Oracle APEX 5.0 Available for Download!

Patrick Wolf - Wed, 2015-04-15 12:43
After a longer development cycle than usual, Oracle Application Express 5.0 is finally available for download! I think it was worth the wait. It comes with a ton of features, here are just a few marquee features. Page Designer Universal … Continue reading
Categories: Development

APEX 5.0: Set Cursor Focus for Region Type Plug-Ins

Patrick Wolf - Tue, 2015-04-14 08:19
Complex Region Type Plug-ins might render their own input fields, like our Interactive Report component does with the search field. If a developer had set the page level attribute Cursor Focus to First item on Page in Oracle APEX 4.x, … Continue reading
Categories: Development

Combined ACCESS And FILTER Predicates - Excessive Throw-Away

Randolf Geist - Mon, 2015-04-13 01:00
Catchy title... Let's assume the following data setup:

create table t1
as
select
rownum as id
, 1 as id2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1e4
;

create table t2
as
select
rownum as id
, 1 as id2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1e4
;

create table t3
as
select
rownum as id
, 1 as id2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1e4
;

exec dbms_stats.gather_table_stats(null, 't1')

exec dbms_stats.gather_table_stats(null, 't2')

exec dbms_stats.gather_table_stats(null, 't3')

-- Deliberately wrong order (FBI after gather stats) - the virtual columns created for this FBI don't have statistics, see below
create index t2_idx on t2 (case when id2 = 1 then id2 else 1 end, case when id2 = 2 then id2 else 1 end, filler, id);

create index t3_idx on t3 (id, filler, id2);
And the following execution plan (all results are from 12.1.0.2 but should be applicable to other versions, too):

----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10000 | 1416K| 132 (0)| 00:00:01 |
|* 1 | HASH JOIN | | 10000 | 1416K| 132 (0)| 00:00:01 |
| 2 | TABLE ACCESS FULL | T2 | 10000 | 292K| 44 (0)| 00:00:01 |
|* 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 |
----------------------------------------------------------------------------
How long would you expect it to run to return all rows (no tricks like expensive regular expressions or user-defined PL/SQL functions)?

Probably should take just a blink, given the tiny tables with just 10000 rows each.

However, these are the runtime statistics for a corresponding execution:

| | | |
| |DATABASE |CPU |
|DURATION |TIME |TIME |
|------------|------------|------------|
|+0 00:00:23 |+0 00:00:23 |+0 00:00:23 |
| | | |

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Ord | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows| PGA | Start | Dur(T)| Dur(A)| Time Active Graph | Activity Graph ASH | Top 5 Activity ASH |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | 6 | SELECT STATEMENT | | | | 132 (100)| | 1 | 1 | 0 | | | | | | |
|* 1 | 0 | 5 | HASH JOIN | | 10000 | 1191K| 132 (0)| 00:00:01 | 1 | 1 | 1401K | 2 | 23 | 22 | ##### ############## | @@@@@@@@@@@@@@@@@@@ (100%) | ON CPU(22) |
| 2 | 1 | 1 | TABLE ACCESS FULL | T2 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
|* 3 | 1 | 4 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 1930K | | | | | | |
| 4 | 3 | 2 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
| 5 | 3 | 3 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
How is it possible to burn more than 20 seconds of CPU time with that execution plan?

The actual rows produced correspond pretty much to the estimated cardinalities (except for the final hash join), so that doesn't look suspect at first glance.
What becomes obvious from the SQL Monitoring output is that all the time is spent on the hash join operation ID = 1.

Of course at that point (at the latest) you should tell me off for not having shown you the predicate section of the plan and the corresponding query in the first place.

So here is the predicate section and the corresponding query:

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END =CASE
"T2"."ID2" WHEN 1 THEN "T2"."ID2" ELSE 1 END AND CASE "T1"."ID2" WHEN
2 THEN "T1"."ID2" ELSE 1 END =CASE "T2"."ID2" WHEN 2 THEN "T2"."ID2"
ELSE 1 END )
filter("T3"."ID2"=CASE WHEN ("T1"."ID">"T2"."ID") THEN
"T1"."ID" ELSE "T2"."ID" END )

3 - access("T3"."ID"="T1"."ID")


select /*+
leading(t1 t3 t2)
full(t1)
full(t3)
use_hash(t3)
swap_join_inputs(t3)
full(t2)
use_hash(t2)
swap_join_inputs(t2)
*/
t1.*
--, t3.id2
--, case when t1.id > t2.id then t1.id else t2.id end
from
t1
, t2
, t3
where
1 = 1
--
and case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
--
and t3.id = t1.id
and t3.id2 = case when t1.id > t2.id then t1.id else t2.id end
;
There are two important aspects to this query and the plan: First, the join expression (without corresponding expression statistics) between T1 and T2 is sufficiently deceptive to hide from the optimizer that in fact this produces a cartesian product (mimicking real life multi table join expressions that lead to bad estimates) and second, the table T3 is joined to both T1 and an expression based on T1 and T2, which means that this expression can only be evaluated after the join to T1 and T2.
With the execution plan shape enforced via my hints (but it could be a real-life execution plan shape preferred by the optimizer), T3 and T1 are joined first, producing an innocent 10K-row row source, which is then joined to T2. And here the accident happens inside the hash join operation:

If you look closely at the predicate section you'll notice that the hash join operation has both an ACCESS predicate and a FILTER predicate. Based on the join between T1 and T2, the ACCESS part performs a lookup into the hash table, which happens to be a cartesian product, so it produces 10K times 10K rows, and only afterwards is the FILTER (representing the T3 to T1/T2 join expression) applied to these 100M rows, matching only a single row in my example here, which is what A-Rows shows for this operation.

So the point is that this excessive work and FILTER throwaway isn't very well represented in the row source statistics. Ideally you would need one of the following two modifications to get a better picture of what is going on:

Either the FILTER operator should be a separate step in the plan, which in theory would then look like this:

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows|
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 132 (100)| | 1 | 1 |
|* 1a| FILTER | | 10000 | 1191K| 132 (0)| 00:00:01 | 1 | 1 |
|* 1b| HASH JOIN | | 10000 | 1191K| 132 (0)| 00:00:01 | 1 | 100M |
| 2 | TABLE ACCESS FULL | T2 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K |
|* 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K |
| 4 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K |
| 5 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1a- filter("T3"."ID2"=CASE WHEN ("T1"."ID">"T2"."ID") THEN
"T1"."ID" ELSE "T2"."ID" END )
1b- access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END =CASE
"T2"."ID2" WHEN 1 THEN "T2"."ID2" ELSE 1 END AND CASE "T1"."ID2" WHEN
2 THEN "T1"."ID2" ELSE 1 END =CASE "T2"."ID2" WHEN 2 THEN "T2"."ID2"
ELSE 1 END )
3 - access("T3"."ID"="T1"."ID")
This would make the excess rows produced by the ACCESS part of the hash join very obvious, but is probably not a good solution for performance reasons, because then the data would have to flow from one operation to another rather than being processed within the HASH JOIN operator, which means increased overhead.

Or an additional row source statistic should be made available:

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows|AE-Rows|
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 132 (100)| | 1 | 1 | 1 |
|* 1 | HASH JOIN | | 10000 | 1191K| 132 (0)| 00:00:01 | 1 | 1 | 100M |
| 2 | TABLE ACCESS FULL | T2 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | 10K |
|* 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 10K |
| 4 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | 10K |
| 5 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | 10K |
----------------------------------------------------------------------------------------------------
This is what I've called "Actually evaluated rows" (AE-Rows) here; in addition to this case of combined ACCESS and FILTER operations it could also be helpful for other FILTER cases, for example even for a simple full table scan, to see how many rows were evaluated and not only how many rows matched a possible filter (which is what A-Rows currently shows).

In a recent OTN thread this topic came up again, and since I also came across this phenomenon a couple of times recently I thought I'd put this note together. Note that Martin Preiss has submitted a corresponding database idea on the OTN forum.

Expanding on this idea a bit further, it could be useful to have an additional "Estimated evaluated rows (EE-Rows)" calculated by the optimizer and shown in the plan. This could also be used to improve the optimizer's cost model for such cases, because at present it looks like the optimizer doesn't consider additional FILTER predicates on top of ACCESS predicates when calculating the CPU cost of operations like HASH JOINs.

Note that this problem isn't specific to HASH JOIN operations, you can get similar effects with other join methods, like NESTED LOOP joins, or even simple INDEX lookup operations, where again the ACCESS part isn't very selective but only the FILTER applied afterwards filters matching rows.

Here are some examples with the given setup:

select /*+
leading(t1 t3 t2)
full(t1)
full(t3)
use_hash(t3)
swap_join_inputs(t3)
index(t2)
use_nl(t2)
*/
t1.*
--, t3.id2
--, case when t1.id > t2.id then t1.id else t2.id end
from
t1
, t2
, t3
where
1 = 1
--
and case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
--
and t3.id = t1.id
and t3.id2 = case when t1.id > t2.id then t1.id else t2.id end
;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Ord | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows| PGA | Start | Dur(T)| Dur(A)| Time Active Graph | Activity Graph ASH | Top 5 Activity ASH |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | 6 | SELECT STATEMENT | | | | 10090 (100)| | 1 | 1 | | | | | | | |
| 1 | 0 | 5 | NESTED LOOPS | | 10000 | 1416K| 10090 (1)| 00:00:01 | 1 | 1 | | | | | | | |
|* 2 | 1 | 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 1890K | | | | | | |
| 3 | 2 | 1 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
| 4 | 2 | 2 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
|* 5 | 1 | 4 | INDEX RANGE SCAN | T2_IDX | 1 | 30 | 1 (0)| 00:00:01 | 10K | 1 | | 3 | 33 | 32 | ################### | @@@@@@@@@@@@@@@@@@@ (100%) | ON CPU(32) |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T3"."ID"="T1"."ID")
5 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END
="T2"."SYS_NC00004$" AND CASE "T1"."ID2" WHEN 2 THEN "T1"."ID2" ELSE 1
END ="T2"."SYS_NC00005$")
filter("T3"."ID2"=CASE WHEN ("T1"."ID">"T2"."ID") THEN
"T1"."ID" ELSE "T2"."ID" END )



select /*+
leading(t1 t3 t2)
full(t1)
full(t3)
use_hash(t3)
swap_join_inputs(t3)
index(t2)
use_nl(t2)
*/
max(t1.filler)
--, t3.id2
--, case when t1.id > t2.id then t1.id else t2.id end
from
t1
, t2
, t3
where
1 = 1
--
and case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
--
and t3.id = t1.id
and t2.filler >= t1.filler
and t2.id = case when t1.id2 > t3.id2 then t1.id2 else t3.id2 end

;

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Ord | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows| PGA | Start | Dur(T)| Dur(A)| Time Active Graph | Activity Graph ASH | Top 5 Activity ASH |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | 7 | SELECT STATEMENT | | | | 20092 (100)| | 1 | 1 | | | | | | | |
| 1 | 0 | 6 | SORT AGGREGATE | | 1 | 223 | | | 1 | 1 | | | | | | | |
| 2 | 1 | 5 | NESTED LOOPS | | 1 | 223 | 20092 (1)| 00:00:01 | 1 | 10K | | | | | | | |
|* 3 | 2 | 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 1900K | | | | | | |
| 4 | 3 | 1 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
| 5 | 3 | 2 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
|* 6 | 2 | 4 | INDEX RANGE SCAN | T2_IDX | 1 | 108 | 2 (0)| 00:00:01 | 10K | 10K | | 2 | 34 | 34 | #################### | @@@@@@@@@@@@@@@@@@@ (100%) | ON CPU(34) |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("T3"."ID"="T1"."ID")
6 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END
="T2"."SYS_NC00004$" AND CASE "T1"."ID2" WHEN 2 THEN "T1"."ID2" ELSE 1
END ="T2"."SYS_NC00005$" AND "T2"."FILLER">="T1"."FILLER" AND
"T2"."ID"=CASE WHEN ("T1"."ID2">"T3"."ID2") THEN "T1"."ID2" ELSE
"T3"."ID2" END AND "T2"."FILLER" IS NOT NULL)
filter("T2"."ID"=CASE WHEN ("T1"."ID2">"T3"."ID2") THEN
"T1"."ID2" ELSE "T3"."ID2" END )

The former one exhibits exactly the same problem as the HASH JOIN example, only that the FILTER is evaluated in the inner row source of a NESTED LOOP join after the index access operation.

The latter one shows, as a variation, the classic partial "index access" due to a range comparison in between - although the entire expression can be evaluated at index level, the access part matches every index entry, so the range scan actually needs to walk the entire index at each loop iteration, and the FILTER is then applied to all the index values evaluated.

In what ways is buckthorn harmful?

FeuerThoughts - Sun, 2015-04-12 17:54
I spend 10-15 hours a week in various locations of still-wooded Chicago (and now nearby Lincolnwood) cutting down buckthorn. Some people have taken me to task for it ("Just let it be, let nature take its course," etc.). So I thought I would share this excellent, concise summary of the damage that can be wrought by buckthorn.

And if anyone lives on the north side of Chicago and would like to help out, there is both "heavy" work (cutting large trees and dragging them around) and now lots of "light" work (clipping the new growth from the stumps from last year's cutting - I don't use poison). 

It's great exercise and without a doubt you will be helping rescue native trees and ensure that the next generation of those trees will survive and thrive!


Buckthorn should be on America's "Most Wanted" list, with its picture hanging up in every US Post Office! Here are a few of the dangers of Buckthorn:

a) Buckthorn squeezes out native plants for nutrients, sunlight, and moisture. It literally chokes out surrounding healthy trees and makes it impossible for any new growth to take root under its cancerous canopy of dense vegetation.

b) Buckthorn degrades wildlife habitats and alters the natural food chain in and growth of an otherwise healthy forest. It disrupts the whole natural balance of the ecosystem.

c) Buckthorn can host pests like Crown Rust Fungus and Soybean Aphids. Crown Rust can devastate oat crops and a wide variety of other grasses. Soybean Aphids can have a devastating effect on the yield of soybean crops. Without buckthorn as host, these pests couldn't survive to blight crops.

d) Buckthorn contributes to erosion by overshadowing plants that grow on the forest floor, causing them to die and causing the soil to lose the integrity and structure created by such plants.

e) Buckthorn lacks "natural controls" like insects or diseases that would curb its growth. A Buckthorn-infested forest is too dense to walk through, and the thorns of Common Buckthorn will leave you bloodied.

f) Buckthorn attracts many species of birds (especially robins and cedar waxwings) that eat the berries and spread the seeds through excrement. Not only are the birds attracted to the plentiful berries, but because the buckthorn berries have a diuretic and cathartic effect, the birds pass the seeds very quickly to the surrounding areas of the forest. This makes Buckthorn spread even more widely and rapidly, making it harder for us to control and contain.
Categories: Development

opatch hangs on /sbin/fuser oracle

Vikram Das - Sat, 2015-04-11 19:30
Pipu pinged me today about opatch hanging. The opatch log showed this:

[Apr 11, 2015 5:24:13 PM]    Start fuser command /sbin/fuser $ORACLE_HOME/bin/oracle at Sat Apr 11 17:24:13 EDT 2015

I had faced this issue once before, but was not able to recall what the solution was.  So I started fresh.

As oracle user:

/sbin/fuser $ORACLE_HOME/bin/oracle hung

As root user

/sbin/fuser $ORACLE_HOME/bin/oracle hung

As root user

lsof hung.

Google searches about it brought up a lot of hits about NFS issues.  So I did df -h.

df -h also hung.

So I checked /var/log/messages and found many messages like these:

Apr 11 19:44:42 erpserver kernel: nfs: server share.justanexample.com not responding, still trying

That server has a mount called /R12.2stage that has the installation files for R12.2.

So I tried unmounting it:

umount /R12.2stage
Device Busy

umount -f /R12.2stage
Device Busy

umount -l /R12.2stage

df -h didn't hang any more.

Next I did strace /sbin/fuser $ORACLE_HOME/bin/oracle and it stopped here:

open("/proc/12854/fdinfo/3", O_RDONLY)  = 7
fstat(7, {st_mode=S_IFREG|0400, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b99de014000
read(7, "pos:\t0\nflags:\t04002\n", 1024) = 20
close(7)                                = 0
munmap(0x2b99de014000, 4096)            = 0
getdents(4, /* 0 entries */, 32768)     = 0
close(4)                                = 0
stat("/proc/12857/", {st_mode=S_IFDIR|0555, st_size=0, ...}) = 0
open("/proc/12857/stat", O_RDONLY)      = 4
read(4, "12857 (bash) S 12853 12857 12857"..., 4096) = 243
close(4)                                = 0
readlink("/proc/12857/cwd", "11.2.0.4/examples (deleted)"..., 4096) = 27
rt_sigaction(SIGALRM, {0x411020, [ALRM], SA_RESTORER|SA_RESTART, 0x327bc30030}, {SIG_DFL, [ALRM], SA_RESTORER|SA_RESTART, 0x327bc30030}, 8) = 0
alarm(15)                               = 0
write(5, "@\20A\0\0\0\0\0", 8)          = 8
write(5, "\20\0\0\0", 4)                = 4
write(5, "/proc/12857/cwd\0", 16)       = 16
write(5, "\220\0\0\0", 4)               = 4
read(6,  

It stopped here. So I did Ctrl+C
# ps -ef |grep 12857
oracle   12857 12853  0 Apr10 pts/2    00:00:00 -bash
root     21688  2797  0 19:42 pts/8    00:00:00 grep 12857

Killed this process

# kill -9 12857

Again I did strace /sbin/fuser $ORACLE_HOME/bin/oracle and it stopped at a different process this time that was another bash process.  I killed that process also.

I executed it for 3rd time: strace /sbin/fuser $ORACLE_HOME/bin/oracle

This time it completed.

Ran it without strace

/sbin/fuser $ORACLE_HOME/bin/oracle

It came out in 1 second.

Then I did the same process for lsof

strace lsof

and killed the processes where it was getting stuck.  Eventually lsof also worked.

Pipu retried opatch and it worked fine.

A stale NFS mount was the root cause of this issue.  It was stale because the source server was down for Unix security patching during the weekend.
 

Categories: APPS Blogs

Come See Integrigy at Collaborate 2015

Come see Integrigy's session at Collaborate 2015 in Las Vegas (http://collaborate.ioug.org/). Integrigy is presenting the following paper:

IOUG #763
Detecting and Stopping Cyber Attacks against Oracle Databases
Monday, April 13th, 9:15 - 11:30 am
North Convention, South Pacific J

If you are going to Collaborate 2015, we would also be more than happy to talk with you about Oracle security or any questions you may have. If you would like to talk with us while at Collaborate, please contact us at info@integrigy.com.

Categories: APPS Blogs, Security Blogs
