Feed aggregator

You Are Trying To Access a Page That Is No Longer Active. The Referring Page May Have Come From a Previous Session. Please Select Home To Proceed

Vikram Das - Wed, 2015-04-15 17:06
Shahed pinged me about this error on an R12.1.3 instance that had just been migrated from an old server to a new one. Once you logged in, this error would be displayed:

You Are Trying To Access a Page That Is No Longer Active. The Referring Page May Have Come From a Previous Session. Please Select Home To Proceed

The hits on support.oracle.com were not helpful, but they gave a clue that it might have something to do with the session cookie.  So I used Firefox to check the HTTP headers.  If you press Ctrl+Shift+K, you get a panel at the bottom of the browser. Click on the Network tab, click on AppsLocalLogin.jsp, and on the right side of the pane you'll see a cookie tab.

The domain appearing in the cookie tab was from the old server.  So I checked:

select session_cookie_domain from icx_parameters;
olddomain.justanexample.com

So I nullified it:

update icx_parameters set session_cookie_domain=null;

commit;
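
To confirm the change before restarting, re-run the earlier query; it should now return an empty value:

select session_cookie_domain from icx_parameters;
-- SESSION_COOKIE_DOMAIN now comes back empty (null), so at login the
-- session cookie should be scoped to the new host rather than the old domain.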

Then I restarted Apache:

cd $ADMIN_SCRIPTS_HOME
adapcctl.sh stop
adapcctl.sh start

No more error.  I was able to log in and so was Shahed.
Categories: APPS Blogs

Oracle APEX 5.0 released today

Dimitri Gielis - Wed, 2015-04-15 15:34
After 2.5 years of development, today is the day APEX 5.0 is publicly released and ready to be downloaded to install on your own environment.

In my view it's the best release ever. Not so much because of the new Page Designer - although that is definitely a piece of art and increases productivity even further - but because it's the first time the whole of APEX got refreshed: every piece was put under the microscope to see how it could be improved. All the small changes and the new UI, together with the Page Designer, make it a whole new development tool, without losing its strengths from before.

Also note that APEX 5.0 enables many new features built on top of Oracle Database 12c features, so if you're on that database, you'll see even more nice features.

If you're wondering whether to hold off upgrading to APEX 5.0 because you're afraid your current APEX applications will break, I can only share that I upgraded many of my applications as part of the EA/beta programs and most of them kept running without issues. As always, you have to test your applications yourself, but the APEX development team spent a lot of time keeping things backwards compatible. Make sure to have a look at the APEX 5.0 release notes and known issues, as they contain important information about changes, expected behaviour and workarounds.

You can develop online on apex.oracle.com or you can download APEX 5.0 and install into your own environment.
Categories: Development

Oracle Application Express 5 - The Unofficial Announcement

Joel Kallman - Wed, 2015-04-15 14:35
What started on a whiteboard in New York City more than 2 years ago is now finally realized.  I and the other members of the Oracle Application Express team proudly announce the release of Oracle Application Express 5.

The official blog posting and announcement is HERE.  But this is my personal blog, and the thoughts and words are my own, so I can be a bit more free.

Firstly, I don't ever want to see a release of Oracle Application Express take 2.5 years again, ever.  It's not good for Oracle, not good for Oracle Application Express, and certainly not good for the vast Oracle Application Express community.  We're going to strive, going forward, for a cadence of annual release cycles.  But with this said, I'm not about to apologize for the duration of the APEX 5 release cycle either.  It's broader and more ambitious than anything we've ever approached, and it happened the way it was supposed to happen.  Rather than say "redesigned", I'd prefer to use Shakeeb's words of "reimagined", because that's really what has transpired.  Not only has every one of the 1,945 pages that make up "internal APEX" (like the Application Builder) been visited, redesigned, and modernized, but the Page Designer is a radically different yet productive way to build and maintain your applications.  It takes time to iterate to this high level of quality.

At the end of the day, what matters most for developers is what they can produce with Oracle Application Express.  They'd gladly suffer through the non-Page Designer world and click the mouse all day, as long as what they produced and delivered made them a hero.  And I believe we have delivered on this goal of focusing on high-quality results in the applications you create.  I've seen my share of bad-looking APEX applications over the years, and with prior releases of APEX, we've essentially enabled the creation of these rather poor examples of APEX.  Not everyone is a Shakeeb or Marc.  I'm not.  But we've harnessed the talents of some of the brightest minds in the UI world, who also happen to be on the APEX development team, and delivered a framework that makes it easy for ordinary people like me to deliver beautiful, responsive and accessible applications, out-of-the-box.

What I'm most happy about is what this does for the Oracle Database.  I believe APEX 5 will make superheroes out of our Oracle Database and Oracle Database Cloud customers.  There is a massive wealth of functionality for application developers and data architects and citizen developers and everyone in-between, in the Oracle Database.  And all of it is a simple SQL or PL/SQL call away!  The Oracle Database is extraordinarily good at managing large amounts of data and helping people turn data into information.  And now, for customers to be able to easily create elegant UI and be able to beautifully visualize this information using Oracle Application Express 5, well...it's just an awesome combination.

I am blessed to work with some of the brightest, most focused, professional, talented, and yet humble people on the planet.  As my wife likes to say, they're all "quality people".  It truly takes an array of people who are deep in very different technologies to pull this off - Oracle Database design, data modeling, PL/SQL programming, database security, performance tuning, JavaScript programming, accessibility, Web security, HTML 5 design, CSS layout, graphic artistry, globalization, integration, documentation, testing, and on and on.  Both the breadth and depth of the talent required to pull this off are staggering.

You might think that we get to take a breath now.  In fact, the fun only begins now and plenty of hard work is ahead for all of us.  But we look forward to the great successes of our many Oracle customers.  The #orclapex community is unrivaled.  And we are committed to making heroes out of every one of them.  That's the least we could do for the #orclapex community, such an amazingly passionate and vibrant collection of professionals and enthusiasts.

When anyone asks about the "watershed event" for Oracle Application Express, you can tell them that the day was April 15, 2015 - when Oracle Application Express 5 was released.

Joel

P.S.  #letswreckthistogether

Chrome and E-Business Suite

Vikram Das - Wed, 2015-04-15 13:23
Dhananjay came to me today.  He said that his users were complaining about forms not launching after upgrading to the latest version of Chrome. On launching forms they got this error:

/dev60cgi/oracle forms engine Main was not found on this server

I recalled that the Google Chrome team had announced that they would not support Java going forward. Googling the keywords "chrome java" brought up this page:

https://java.com/en/download/faq/chrome.xml#npapichrome

It states that:

NPAPI support by Chrome
The Java plug-in for web browsers relies on the cross platform plugin architecture NPAPI, which has long been, and currently is, supported by all major web browsers. Google announced in September 2013 plans to remove NPAPI support from Chrome by "the end of 2014", thus effectively dropping support for Silverlight, Java, Facebook Video and other similar NPAPI based plugins. Recently, Google has revised their plans and now state that they plan to completely remove NPAPI by late 2015. As it is unclear if these dates will be further extended or not, we strongly recommend Java users consider alternatives to Chrome as soon as possible. Instead, we recommend Firefox, Internet Explorer and Safari as longer-term options. As of April 2015, starting with Chrome Version 42, Google has added an additional step to configuring NPAPI based plugins like Java to run — see the section Enabling NPAPI in Chrome Version 42 and later below.
Enabling NPAPI in Chrome Version 42 and later
As of Chrome Version 42, an additional configuration step is required to continue using NPAPI plugins.
  1. In your URL bar, enter:
    chrome://flags/#enable-npapi 
  2. Click the Enable link for the Enable NPAPI configuration option.
  3. Click the Relaunch button that now appears at the bottom of the configuration page.
Developers and System administrators looking for alternative ways to support users of Chrome should see this blog, in particular "Running Web Start applications outside of a browser" and "Additional Deployment Options" section.
Once Dhananjay did the above steps, Chrome started launching forms again.  He quickly gave these steps to all his users who had upgraded to the latest version of Chrome (version 42) and it started working for them too.
Oracle doesn't certify E-Business Suite forms on Chrome.  Only the self-service pages of E-Business Suite are certified on Google Chrome.
Categories: APPS Blogs

Oracle Application Express 5.0 now available

Marc Sewtz - Wed, 2015-04-15 12:44
The result of a two-and-a-half-year engineering effort, Oracle Application Express 5.0 represents the greatest advancement of Oracle Application Express in its 10-year history.  Oracle Application Express 5.0 enables customers to develop, design and deploy beautiful, responsive, database-driven desktop and mobile applications using only a browser.

Now that APEX 5.0 is finally available, it’s time to spread the word and to get everyone up to speed on what’s new in 5.0, how to make best use of the beautiful, new Universal Theme and how to get the most out of the incredibly powerful new Page Designer. 

So here are some of the events coming up in the near future, where you can learn all about APEX 5.0 and meet many of the APEX development team members in person:

APEXposed 2015

Montreal, Canada - May 6, 2015
  • Keynote: Oracle Application Express 5.0 (Marc Sewtz - 8:30 - 9:30am)
  • Oracle Application Express 5.0 Plug-In Enhancements (Patrick Maniraho - 1:00 - 2:00pm)
  • Transitioning to Oracle Application Express 5.0 (Marc Sewtz - 2:15 - 3:15pm)
Solihull, UK - May 14, 2015
  • Oracle Application Express 5.0 (Anthony Rayner - 2:25-3:35pm)
Düsseldorf, Germany - June 9 - 10, 2015
  • Keynote: Oracle Application Express 5.0 (Marc Sewtz - Jun-09 - 9:30 - 10:30am)
  • Beautiful UI with APEX 5 (Shakeeb Rahman - Jun-09 11:00 - 11:45am)
  • Der Oracle Application Express Entwicklungsprozess (Marc Sewtz - Jun-10 - 11:00 - 11:45am)
Plovdiv, Bulgaria - June 12 - 14, 2015
  • Oracle Application Express 5.0 New Features (Marc Sewtz)
  • Transitioning to Oracle Application Express 5.0 (Marc Sewtz)
ODTUG KScope 2015
Hollywood, FL, USA - June 21 - 25, 2015

June 21
  • APEX Episode 5:  A New Frontier (Joel Kallman, 8:30 - 9:30am)
  • Need for Speed: Page Designer (Patrick Wolf, 9:30 - 10:30am)
  • Interstellar: The Universal Theme (Shakeeb Rahman, 11:00am - 12:00pm)
  • The Fifth Element: HTML5 + APEX5 Mobile (Marc Sewtz, 1:00 - 2:00pm)
  • The Matrix Reloaded: Interactive Report (Anthony Rayner, 2:00 - 3:00pm)
  • The Prestige: Converting to APEX 5.0 Universal Theme (David Peake, 3:15 – 4:15pm)
June 22
  • Transitioning to Oracle Application Express 5.0 (David Peake, 08:30 - 09:30am)
  • Introduction to Oracle Application Express (David Peake, 09:45 -10:45am)
  • You Don't Lack APEX Skills; You Lack Oracle Skills (Joel Kallman, 2:00 - 3:00pm)
June 23
  • Self-Service Application Deployment in a Runtime Instance: This Is How We (Oracle) Do It... (Jason Straub, 4:45 - 5:45pm)
June 24
  • Oracle Application Express 5.0 New Features (Hilary Farrell, 9:45 - 10:45am)
  • Application Express 5.0: Features Nobody Else Will Tell You About (David Peake, 1:45 - 2:45pm)

... and be sure to join one of the numerous APEX Meetup groups worldwide - and if there isn't one close to where you live, then consider starting your own local APEX Meetup group. Don't know how to do that? Check out the APEX Meetup site for helpful information.

Oracle APEX 5.0 Available for Download!

Patrick Wolf - Wed, 2015-04-15 12:43
After a longer development cycle than usual, Oracle Application Express 5.0 is finally available for download! I think it was worth the wait. It comes with a ton of features; here are just a few marquee ones. Page Designer Universal … Continue reading
Categories: Development

APEX 5.0: Set Cursor Focus for Region Type Plug-Ins

Patrick Wolf - Tue, 2015-04-14 08:19
Complex Region Type Plug-ins might render their own input fields, like our Interactive Report component does with the search field. If a developer had set the page level attribute Cursor Focus to First item on Page in Oracle APEX 4.x, … Continue reading
Categories: Development

Scheduling Processes on Oracle JavaCloud Service SaaS Extensions (JCSSX)

Angelo Santagata - Mon, 2015-04-13 18:38
In the past, if we wanted to schedule some background processing in JCSSX, we had to get help from the database's scheduler. Using a clever feature of SchemaDB - the ability to call a REST service from a PL/SQL procedure - we could effectively execute code on JCSSX at predetermined intervals or times. All this is now unnecessary! The current version of JCSSX fully supports a list of 3rd-party frameworks (see the link for a list), and Quartz is one of them. Jani, a colleague from the Fusion Developer Relations team, has written up a really nice posting on the FADevrel blog summarising what you can do with Quartz, with some code to get you started - you could call it a "Quartz in 10 minutes" blog. Check it out!
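
If you want a taste before reading Jani's post, here is a minimal Quartz 2.x sketch; the job class, names and schedule here are illustrative, not taken from his post, and in a real JCSSX application you would start the scheduler from a lifecycle listener rather than a main method:

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

// Illustrative job: prints a message each time it fires.
public class HeartbeatJob implements Job {
    public void execute(JobExecutionContext context) throws JobExecutionException {
        System.out.println("Background work running");
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        JobDetail job = JobBuilder.newJob(HeartbeatJob.class)
                .withIdentity("heartbeat", "background").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("every10min", "background")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInMinutes(10).repeatForever())
                .build();
        scheduler.scheduleJob(job, trigger);  // fires now, then every 10 minutes
    }
}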

Combined ACCESS And FILTER Predicates - Excessive Throw-Away

Randolf Geist - Mon, 2015-04-13 01:00
Catchy title... Let's assume the following data setup:

create table t1
as
select
rownum as id
, 1 as id2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1e4
;

create table t2
as
select
rownum as id
, 1 as id2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1e4
;

create table t3
as
select
rownum as id
, 1 as id2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1e4
;

exec dbms_stats.gather_table_stats(null, 't1')

exec dbms_stats.gather_table_stats(null, 't2')

exec dbms_stats.gather_table_stats(null, 't3')

-- Deliberately wrong order (FBI after gather stats) - the virtual columns created for this FBI don't have statistics, see below
create index t2_idx on t2 (case when id2 = 1 then id2 else 1 end, case when id2 = 2 then id2 else 1 end, filler, id);

create index t3_idx on t3 (id, filler, id2);
And the following execution plan (all results are from 12.1.0.2 but should be applicable to other versions, too):

----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10000 | 1416K| 132 (0)| 00:00:01 |
|* 1 | HASH JOIN | | 10000 | 1416K| 132 (0)| 00:00:01 |
| 2 | TABLE ACCESS FULL | T2 | 10000 | 292K| 44 (0)| 00:00:01 |
|* 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 |
----------------------------------------------------------------------------
How long would you expect it to run to return all rows (no tricks like expensive regular expressions or user-defined PL/SQL functions)?

It should probably take just a blink, given the tiny tables with just 10,000 rows each.

However, these are the runtime statistics for a corresponding execution:

|DURATION    |DATABASE TIME|CPU TIME    |
|------------|-------------|------------|
|+0 00:00:23 |+0 00:00:23  |+0 00:00:23 |

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Ord | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows| PGA | Start | Dur(T)| Dur(A)| Time Active Graph | Activity Graph ASH | Top 5 Activity ASH |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | 6 | SELECT STATEMENT | | | | 132 (100)| | 1 | 1 | 0 | | | | | | |
|* 1 | 0 | 5 | HASH JOIN | | 10000 | 1191K| 132 (0)| 00:00:01 | 1 | 1 | 1401K | 2 | 23 | 22 | ##### ############## | @@@@@@@@@@@@@@@@@@@ (100%) | ON CPU(22) |
| 2 | 1 | 1 | TABLE ACCESS FULL | T2 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
|* 3 | 1 | 4 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 1930K | | | | | | |
| 4 | 3 | 2 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
| 5 | 3 | 3 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
How is it possible to burn more than 20 seconds of CPU time with that execution plan?

The actual rows produced correspond pretty much to the estimated cardinalities (except for the final hash join), so that doesn't look suspect at first glance.
What becomes obvious from the SQL Monitoring output is that all the time is spent on the hash join operation ID = 1.

Of course at that point (at the latest) you should tell me off for not having shown you the predicate section of the plan and the corresponding query in the first place.

So here is the predicate section and the corresponding query:

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END =CASE
"T2"."ID2" WHEN 1 THEN "T2"."ID2" ELSE 1 END AND CASE "T1"."ID2" WHEN
2 THEN "T1"."ID2" ELSE 1 END =CASE "T2"."ID2" WHEN 2 THEN "T2"."ID2"
ELSE 1 END )
filter("T3"."ID2"=CASE WHEN ("T1"."ID">"T2"."ID") THEN
"T1"."ID" ELSE "T2"."ID" END )

3 - access("T3"."ID"="T1"."ID")


select /*+
leading(t1 t3 t2)
full(t1)
full(t3)
use_hash(t3)
swap_join_inputs(t3)
full(t2)
use_hash(t2)
swap_join_inputs(t2)
*/
t1.*
--, t3.id2
--, case when t1.id > t2.id then t1.id else t2.id end
from
t1
, t2
, t3
where
1 = 1
--
and case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
--
and t3.id = t1.id
and t3.id2 = case when t1.id > t2.id then t1.id else t2.id end
;
There are two important aspects to this query and the plan: first, the join expression between T1 and T2 (without corresponding expression statistics) is sufficiently deceptive to hide from the optimizer that it in fact produces a cartesian product (mimicking real-life multi-table join expressions that lead to bad estimates); and second, table T3 is joined to both T1 and an expression based on T1 and T2, which means that this expression can only be evaluated after the join between T1 and T2.
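
(As an aside, expression statistics are one way to make the cartesian product visible to the optimizer. A sketch for T1 - T2 would need the same treatment - assuming the current schema; whether this fixes the estimate here would of course need to be verified:)

select dbms_stats.create_extended_stats(null, 't1', '(case when "ID2" = 1 then "ID2" else 1 end)') from dual;
select dbms_stats.create_extended_stats(null, 't1', '(case when "ID2" = 2 then "ID2" else 1 end)') from dual;
-- gather statistics on the newly created hidden extension columns
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all hidden columns size auto')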
With the execution plan shape enforced via my hints (but which could equally be a real-life execution plan shape preferred by the optimizer), T3 and T1 are joined first, producing an innocent 10K-row row source, which is then joined to T2. And here the accident happens inside the hash join operation:

If you look closely at the predicate section, you'll notice that the hash join operation has both an ACCESS predicate and a FILTER predicate. The ACCESS part performs the lookup into the hash table based on the join between T1 and T2, which happens to be a cartesian product, so it produces 10K times 10K = 100M rows; only afterwards is the FILTER (representing the T3-to-T1/T2 join expression) applied to these 100M rows, matching only a single row in my example here, which is what A-Rows shows for this operation.

So the point is that this excessive work and FILTER throwaway isn't very well represented in the row source statistics. Ideally you would need one of the following two modifications to get a better picture of what is going on:

Either the FILTER operator should be a separate step in the plan, which in theory would then look like this:

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows|
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 132 (100)| | 1 | 1 |
|* 1a| FILTER | | 10000 | 1191K| 132 (0)| 00:00:01 | 1 | 1 |
|* 1b| HASH JOIN | | 10000 | 1191K| 132 (0)| 00:00:01 | 1 | 100M |
| 2 | TABLE ACCESS FULL | T2 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K |
|* 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K |
| 4 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K |
| 5 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1a- filter("T3"."ID2"=CASE WHEN ("T1"."ID">"T2"."ID") THEN
"T1"."ID" ELSE "T2"."ID" END )
1b- access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END =CASE
"T2"."ID2" WHEN 1 THEN "T2"."ID2" ELSE 1 END AND CASE "T1"."ID2" WHEN
2 THEN "T1"."ID2" ELSE 1 END =CASE "T2"."ID2" WHEN 2 THEN "T2"."ID2"
ELSE 1 END )
3 - access("T3"."ID"="T1"."ID")
This would make the excess rows produced by the ACCESS part of the hash join very obvious, but it is probably not a good solution for performance reasons: the data would have to flow from one operation to another rather than being processed within the HASH JOIN operator, which means increased overhead.

Alternatively, an additional row source statistic could be made available:

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows|AE-Rows|
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 132 (100)| | 1 | 1 | 1 |
|* 1 | HASH JOIN | | 10000 | 1191K| 132 (0)| 00:00:01 | 1 | 1 | 100M |
| 2 | TABLE ACCESS FULL | T2 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | 10K |
|* 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 10K |
| 4 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | 10K |
| 5 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | 10K |
----------------------------------------------------------------------------------------------------
I've called this "Actually evaluated rows" (AE-Rows) here. In addition to this case of combined ACCESS and FILTER predicates, it could also be helpful for other FILTER cases - for example, even for a simple full table scan, to see how many rows were evaluated and not only how many rows matched a possible filter (which is what A-Rows currently shows).

In a recent OTN thread this topic came up again, and since I also came across this phenomenon a couple of times recently, I thought I'd put this note together. Note that Martin Preiss has submitted a corresponding database idea on the OTN forum.

Expanding on this idea a bit further, it could be useful to have an additional "Estimated evaluated rows (EE-Rows)" calculated by the optimizer and shown in the plan. This could also be used to improve the optimizer's cost model for such cases, because at present it looks like the optimizer doesn't consider additional FILTER predicates on top of ACCESS predicates when calculating the CPU cost of operations like HASH JOINs.

Note that this problem isn't specific to HASH JOIN operations: you can get similar effects with other join methods, like NESTED LOOPS joins, or even simple INDEX lookup operations, where again the ACCESS part isn't very selective and only the FILTER applied afterwards weeds out the non-matching rows.

Here are some examples with the given setup:

select /*+
leading(t1 t3 t2)
full(t1)
full(t3)
use_hash(t3)
swap_join_inputs(t3)
index(t2)
use_nl(t2)
*/
t1.*
--, t3.id2
--, case when t1.id > t2.id then t1.id else t2.id end
from
t1
, t2
, t3
where
1 = 1
--
and case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
--
and t3.id = t1.id
and t3.id2 = case when t1.id > t2.id then t1.id else t2.id end
;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Ord | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows| PGA | Start | Dur(T)| Dur(A)| Time Active Graph | Activity Graph ASH | Top 5 Activity ASH |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | 6 | SELECT STATEMENT | | | | 10090 (100)| | 1 | 1 | | | | | | | |
| 1 | 0 | 5 | NESTED LOOPS | | 10000 | 1416K| 10090 (1)| 00:00:01 | 1 | 1 | | | | | | | |
|* 2 | 1 | 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 1890K | | | | | | |
| 3 | 2 | 1 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
| 4 | 2 | 2 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
|* 5 | 1 | 4 | INDEX RANGE SCAN | T2_IDX | 1 | 30 | 1 (0)| 00:00:01 | 10K | 1 | | 3 | 33 | 32 | ################### | @@@@@@@@@@@@@@@@@@@ (100%) | ON CPU(32) |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T3"."ID"="T1"."ID")
5 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END
="T2"."SYS_NC00004$" AND CASE "T1"."ID2" WHEN 2 THEN "T1"."ID2" ELSE 1
END ="T2"."SYS_NC00005$")
filter("T3"."ID2"=CASE WHEN ("T1"."ID">"T2"."ID") THEN
"T1"."ID" ELSE "T2"."ID" END )



select /*+
leading(t1 t3 t2)
full(t1)
full(t3)
use_hash(t3)
swap_join_inputs(t3)
index(t2)
use_nl(t2)
*/
max(t1.filler)
--, t3.id2
--, case when t1.id > t2.id then t1.id else t2.id end
from
t1
, t2
, t3
where
1 = 1
--
and case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
--
and t3.id = t1.id
and t2.filler >= t1.filler
and t2.id = case when t1.id2 > t3.id2 then t1.id2 else t3.id2 end

;

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Ord | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows| PGA | Start | Dur(T)| Dur(A)| Time Active Graph | Activity Graph ASH | Top 5 Activity ASH |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | 7 | SELECT STATEMENT | | | | 20092 (100)| | 1 | 1 | | | | | | | |
| 1 | 0 | 6 | SORT AGGREGATE | | 1 | 223 | | | 1 | 1 | | | | | | | |
| 2 | 1 | 5 | NESTED LOOPS | | 1 | 223 | 20092 (1)| 00:00:01 | 1 | 10K | | | | | | | |
|* 3 | 2 | 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 1900K | | | | | | |
| 4 | 3 | 1 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
| 5 | 3 | 2 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
|* 6 | 2 | 4 | INDEX RANGE SCAN | T2_IDX | 1 | 108 | 2 (0)| 00:00:01 | 10K | 10K | | 2 | 34 | 34 | #################### | @@@@@@@@@@@@@@@@@@@ (100%) | ON CPU(34) |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("T3"."ID"="T1"."ID")
6 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END
="T2"."SYS_NC00004$" AND CASE "T1"."ID2" WHEN 2 THEN "T1"."ID2" ELSE 1
END ="T2"."SYS_NC00005$" AND "T2"."FILLER">="T1"."FILLER" AND
"T2"."ID"=CASE WHEN ("T1"."ID2">"T3"."ID2") THEN "T1"."ID2" ELSE
"T3"."ID2" END AND "T2"."FILLER" IS NOT NULL)
filter("T2"."ID"=CASE WHEN ("T1"."ID2">"T3"."ID2") THEN
"T1"."ID2" ELSE "T3"."ID2" END )

The former exhibits exactly the same problem as the HASH JOIN example, except that the FILTER is evaluated in the inner row source of a NESTED LOOPS join after the index access operation.

The latter shows, as a variation, the classic partial "index access" caused by a range comparison in between: although the entire expression can be evaluated at index level, the access part matches every index entry, so the range scan actually needs to walk the entire index at each loop iteration, and the FILTER is then applied to all the index values evaluated.

In what ways is buckthorn harmful?

FeuerThoughts - Sun, 2015-04-12 17:54
I spend 10-15 hours a week in various still-wooded locations of Chicago (and now nearby Lincolnwood) cutting down buckthorn. Some people have taken me to task for it ("Just let it be, let nature take its course," etc.). So I thought I would share this excellent, concise summary of the damage that can be wrought by buckthorn.

And if anyone lives on the north side of Chicago and would like to help out, there is both "heavy" work (cutting large trees and dragging them around) and now lots of "light" work (clipping the new growth from the stumps from last year's cutting - I don't use poison). 

It's great exercise and without a doubt you will be helping rescue native trees and ensure that the next generation of those trees will survive and thrive!


Buckthorn should be on America's "Most Wanted" list, with its picture hanging up in every US Post Office! Here are a few of the dangers of Buckthorn:

a) Buckthorn squeezes out native plants for nutrients, sunlight, and moisture. It literally chokes out surrounding healthy trees and makes it impossible for any new growth to take root under its cancerous canopy of dense vegetation.

b) Buckthorn degrades wildlife habitats and alters the natural food chain in and growth of an otherwise healthy forest. It disrupts the whole natural balance of the ecosystem.

c) Buckthorn can host pests like Crown Rust Fungus and Soybean Aphids. Crown Rust can devastate oat crops and a wide variety of other grasses. Soybean Aphids can have a devastating effect on the yield of soybean crops. Without buckthorn as host, these pests couldn't survive to blight crops.

d) Buckthorn contributes to erosion by overshadowing plants that grow on the forest floor, causing them to die and causing the soil to lose the integrity and structure created by such plants.

e) Buckthorn lacks "natural controls" like insects or diseases that would curb its growth. A Buckthorn-infested forest is too dense to walk through, and the thorns of Common Buckthorn will leave you bloodied.

f) Buckthorn attracts many species of birds (especially robins and cedar waxwings) that eat the berries and spread the seeds through excrement. Not only are the birds attracted to the plentiful berries, but because the buckthorn berries have a diuretic and cathartic effect, the birds pass the seeds very quickly to the surrounding areas of the forest. This makes Buckthorn spread even more widely and rapidly, making it harder for us to control and contain.
Categories: Development

opatch hangs on /sbin/fuser oracle

Vikram Das - Sat, 2015-04-11 19:30
Pipu pinged me today about opatch hanging. The opatch log showed this:

[Apr 11, 2015 5:24:13 PM]    Start fuser command /sbin/fuser $ORACLE_HOME/bin/oracle at Sat Apr 11 17:24:13 EDT 2015

I had faced this issue once before, but was not able to recall the solution.  So I started fresh.

As oracle user:

/sbin/fuser $ORACLE_HOME/bin/oracle hung

As root user:

/sbin/fuser $ORACLE_HOME/bin/oracle hung

As root user:

lsof hung.

Google searches about it brought up a lot of hits about NFS issues.  So I did df -h.

df -h also hung.
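
(A tip for this situation: df hangs because it queries every mount point, but the kernel's mount table can be read without touching the mounts themselves, so a command like this should still return immediately and list the NFS mounts that might be stale:)

grep nfs /proc/mounts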

So I checked /var/log/messages and found many messages like these:

Apr 11 19:44:42 erpserver kernel: nfs: server share.justanexample.com not responding, still trying

That server provides an NFS mount, /R12.2stage, which holds the installation files for R12.2.

So I tried unmounting it:

umount /R12.2stage
Device Busy

umount -f /R12.2stage
Device Busy

umount -l /R12.2stage

df -h didn't hang any more.

Next I did strace /sbin/fuser $ORACLE_HOME/bin/oracle and it stopped here:

open("/proc/12854/fdinfo/3", O_RDONLY)  = 7
fstat(7, {st_mode=S_IFREG|0400, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b99de014000
read(7, "pos:\t0\nflags:\t04002\n", 1024) = 20
close(7)                                = 0
munmap(0x2b99de014000, 4096)            = 0
getdents(4, /* 0 entries */, 32768)     = 0
close(4)                                = 0
stat("/proc/12857/", {st_mode=S_IFDIR|0555, st_size=0, ...}) = 0
open("/proc/12857/stat", O_RDONLY)      = 4
read(4, "12857 (bash) S 12853 12857 12857"..., 4096) = 243
close(4)                                = 0
readlink("/proc/12857/cwd", "11.2.0.4/examples (deleted)"..., 4096) = 27
rt_sigaction(SIGALRM, {0x411020, [ALRM], SA_RESTORER|SA_RESTART, 0x327bc30030}, {SIG_DFL, [ALRM], SA_RESTORER|SA_RESTART, 0x327bc30030}, 8) = 0
alarm(15)                               = 0
write(5, "@\20A\0\0\0\0\0", 8)          = 8
write(5, "\20\0\0\0", 4)                = 4
write(5, "/proc/12857/cwd\0", 16)       = 16
write(5, "\220\0\0\0", 4)               = 4
read(6,  

The trace stopped at this read, so I pressed Ctrl+C and looked up the process:
# ps -ef |grep 12857
oracle   12857 12853  0 Apr10 pts/2    00:00:00 -bash
root     21688  2797  0 19:42 pts/8    00:00:00 grep 12857

I killed this process:

# kill -9 12857

Again I ran strace /sbin/fuser $ORACLE_HOME/bin/oracle, and this time it stopped at a different process, another bash.  I killed that process also.

I executed it a third time: strace /sbin/fuser $ORACLE_HOME/bin/oracle

This time it completed.

Then I ran it without strace:

/sbin/fuser $ORACLE_HOME/bin/oracle

It returned within a second.

Then I did the same process for lsof

strace lsof

and killed the processes where it was getting stuck.  Eventually lsof also worked.

Pipu retried opatch and it worked fine.

A stale NFS mount was the root cause of this issue.  It was stale because the source server had been down for Unix security patching over the weekend.

Categories: APPS Blogs

Come See Integrigy at Collaborate 2015

Come see Integrigy's session at Collaborate 2015 in Las Vegas (http://collaborate.ioug.org/). Integrigy is presenting the following paper:

IOUG #763
Detecting and Stopping Cyber Attacks against Oracle Databases
Monday, April 13th, 9:15 - 11:30 am
North Convention, South Pacific J

If you are going to Collaborate 2015, we would also be more than happy to talk with you about your Oracle security questions or concerns. If you would like to talk with us while at Collaborate, please contact us at info@integrigy.com.

Categories: APPS Blogs, Security Blogs

Video Tutorial: XPLAN_ASH Active Session History - Part 5

Randolf Geist - Fri, 2015-04-10 02:02
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality is now available, continuing the actual walk-through of the script output.

More parts to follow.

Deploying applications remotely with WebLogic REST Management Interface

Steve Button - Wed, 2015-04-08 20:20
WebLogic 12.1.3 provides a fully documented REST Management Interface that enables deployment and management operations to be performed using REST calls.

The table of contents in the documentation lists the set of operations available.

To use the REST Management Interface, it must first be enabled for the domain. This can be done from the WebLogic Console:

Domain > Configuration > General [Advanced] > Enable RESTful Management Services

To deploy an application to a remote server, the REST Management Interface provides the ability to upload an application archive and execute the deployment operation. To perform this type of deployment operation, an HTTP POST is invoked on the deployment URL, specifying:
  1. The HTTP header Content-Type: multipart/form-data;
  2. The application archive, supplied as the form field deployment;
  3. The deployment information, specified using the form field model, in which the application name, targets and other attributes are supplied.
Using curl this looks like the following:

$ curl \
--user weblogic:welcome1 \
-H X-Requested-By:TestForRest \
-H Accept:application/json \
-H Content-Type:multipart/form-data \
-F "model={
name: 'basicwebapp',
targets: [ 'AdminServer' ]
}" \
-F "deployment=@/tmp/basicwebapp.war" \
-X POST http://adc2101001.us.oracle.com:7001/management/wls/latest/deployments/application

When this REST call is made with curl, the application archive specified as @/tmp/basicwebapp.war is uploaded as a binary file using the form field deployment, and the configuration information supplied in the model form parameter is used to define and perform the deployment.

The response returned from the REST call contains a payload with the result of the operation that was performed.

{
"messages": [{
"message": "Deployed the application 'basicwebapp'.",
"severity": "SUCCESS"
}],
"links": [{
"rel": "job",
"uri": "http://adc2101001:7001/management/wls/latest/jobs/deployment/id/3"
}],
"item": {
"name": "ADTR-3",
"id": "3",
"type": "deployment",
"status": "completed",
"beginTime": 1428541581923,
"endTime": 1428541584150,
"targets": [{
"name": "AdminServer",
"type": "server",
"errors": [],
"status": "completed"
}],
"description": "[Deployer:149026]deploy application basicwebapp on AdminServer.",
"operation": "deploy",
"deploymentName": "basicwebapp"
}
}
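
The "job" link in that response can itself be queried later to check the status of the deployment job, using the same style of call:

$ curl \
--user weblogic:welcome1 \
-H Accept:application/json \
http://adc2101001:7001/management/wls/latest/jobs/deployment/id/3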

Further information about the deployment resource can be obtained using a further REST call, which provides links to operations that can be performed on the application, URLs to access the application, context-paths for registered servlets, runtime data and other useful information.

$ curl \
--user weblogic:welcome1 \
-H Accept:application/json \
http://adc2101001:7001/management/wls/latest/deployments/application/id/basicwebapp

Which returns a response containing a payload with details of the deployment:

{
"links": [
{
"rel": "parent",
"uri": "http://adc2101001:7001/management/wls/latest/deployments/application"
},
{
"rel": "action",
"uri": "http://adc2101001:7001/management/wls/latest/deployments/application/id/basicwebapp/redeploy",
"title": "redeploy"
},
{
"rel": "action",
"uri": "http://adc2101001:7001/management/wls/latest/deployments/application/id/basicwebapp/update",
"title": "update"
},
{
"rel": "action",
"uri": "http://adc2101001:7001/management/wls/latest/deployments/application/id/basicwebapp/start",
"title": "start"
},
{
"rel": "action",
"uri": "http://adc2101001:7001/management/wls/latest/deployments/application/id/basicwebapp/stop",
"title": "stop"
},
{
"rel": "bindables",
"uri": "http://adc2101001:7001/management/wls/latest/deployments/application/id/basicwebapp/bindables"
}
],
"item": {
"name": "basicwebapp",
"state": "active",
"type": "application",
"displayName": "basicwebapp",
"targets": ["AdminServer"],
"planPath": "/scratch/sbutton/Oracle/Middleware/user_projects/domains/arq_domain/servers/AdminServer/upload/basicwebapp/plan/Plan.xml",
"urls": ["http://adc2101001:7001/basicwebapp"],
"openSessionsCurrentCount": 1,
"sessionsOpenedTotalCount": 1,
"servlets": [
{
"servletName": "JspServlet",
"contextPath": "/basicwebapp",
"aggregateMetrics": {
"executionTimeTotal": 0,
"invocationTotalCount": 0,
"reloadTotalCount": 0,
"executionTimeHigh": 0,
"executionTimeLow": 0
},
"servletMetrics": [{
"serverName": "AdminServer",
"executionTimeTotal": 0,
"invocationTotalCount": 0,
"reloadTotalCount": 0,
"executionTimeHigh": 0,
"executionTimeLow": 0
}]
},
{
"servletName": "Faces Servlet",
"contextPath": "/basicwebapp",
"aggregateMetrics": {
"executionTimeTotal": 22,
"invocationTotalCount": 1,
"reloadTotalCount": 1,
"executionTimeHigh": 22,
"executionTimeLow": 22
},
"servletMetrics": [{
"serverName": "AdminServer",
"executionTimeTotal": 22,
"invocationTotalCount": 1,
"reloadTotalCount": 1,
"executionTimeHigh": 22,
"executionTimeLow": 22
}]
},
{
"servletName": "FileServlet",
"contextPath": "/basicwebapp",
"aggregateMetrics": {
"executionTimeTotal": 0,
"invocationTotalCount": 0,
"reloadTotalCount": 0,
"executionTimeHigh": 0,
"executionTimeLow": 0
},
"servletMetrics": [{
"serverName": "AdminServer",
"executionTimeTotal": 0,
"invocationTotalCount": 0,
"reloadTotalCount": 0,
"executionTimeHigh": 0,
"executionTimeLow": 0
}]
}
],
"ejbs": [],
"deploymentPath": "servers/AdminServer/upload/basicwebapp/app/basicwebapp.war",
"applicationType": "war",
"health": {"state": "ok"}
}

The documentation provides extensive coverage of the many resources and entity types that can be reached via the REST Management Interface.

Sample Payload: Creating a Lead

Angelo Santagata - Wed, 2015-04-08 11:43

A very common operation.

Object Name: Lead
Version Tested On: R9
Description: This payload demonstrates how to create a salesLead, assign a primary lead owner AND add additional resources to the lead.
Operation: createSalesLead
Parameters (* = required): Name*, StatusCode, CustomerId, ResourceId, PrimaryFlag

Payload:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/marketing/leadMgmt/leads/leadService/types/" xmlns:lead="http://xmlns.oracle.com/oracle/apps/marketing/leadMgmt/leads/leadService/" xmlns:lead1="http://xmlns.oracle.com/apps/marketing/leadMgmt/leads/leadService/">
   <soapenv:Header/>
   <soapenv:Body>
      <typ:createSalesLead>
         <typ:salesLead>
            <lead:Name>gold2 - cust</lead:Name>
            <lead:StatusCode>UNQUALIFIED</lead:StatusCode>
            <lead:CustomerId>100000000055319</lead:CustomerId>
            <!--PartyID of person you want to assign it to-->
            <lead:OwnerId>300000000629932</lead:OwnerId>
            <lead:MklLeadResources>
        <!-- ResourceID is PartyID of additional Resource -->
               <lead1:ResourceId>300000000623680</lead1:ResourceId>
               <lead1:PrimaryFlag>false</lead1:PrimaryFlag>
            </lead:MklLeadResources>
         </typ:salesLead>
      </typ:createSalesLead>
   </soapenv:Body>
</soapenv:Envelope>

LPAR and Oracle Database

Pakistan's First Oracle Blog - Tue, 2015-04-07 20:30
What is LPAR?

LPAR stands for Logical Partitioning, a feature of IBM's AIX operating system (also available in Linux). By abstracting all the physical devices in a system, LPAR creates a virtualized computing environment.

In a server, the processor, memory, and storage are divided into multiple sets, each with its own processor, memory and storage resources. Each such set is called an LPAR.

One server can have many LPARs operating at the same time. These LPARs communicate with each other as if they are on separate machines.

What is DLPAR?

DLPAR stands for Dynamic Logical Partitioning, which allows LPARs to be reconfigured dynamically without a restart. With DLPAR, memory, CPU and storage can be moved between LPARs on the fly.

What is HMC?

HMC stands for Hardware Management Console, the interface used to manage LPARs. It is Java-based and can be used to manage many systems.

If an LPAR is in shared processor mode, it may see excessive CPU usage without the following fixes:


APARs for WAITPROC IDLE LOOPING CONSUMES CPU:
IV01111 AIX 6.1 TL05 if before SP08 (fixed in SP08)
IV06197 AIX 6.1 TL06 if before SP07 (fixed in SP07)
IV10172 AIX 6.1 TL07 if before SP02 (fixed in SP02)
IV09133 AIX 7.1 TL00 if before SP05 (fixed in SP05)
IV10484 AIX 7.1 TL01 if before SP02 (fixed in SP02)

This problem can affect POWER7 systems running any level of Ax720 firmware prior to Ax720_101, but it is recommended to update to the latest available firmware. If required, AIX and firmware fixes can be obtained from IBM Support Fix Central:
http://www-933.ibm.com/support/fixcentral/main/System+p/AIX
Categories: DBA Blogs

Consumer Reports guide on which fruits and veggies to always buy organic

FeuerThoughts - Mon, 2015-04-06 10:32
First, I encourage everyone reading this (and beyond) to subscribe to Consumer Reports and make use of their unbiased, science-based reviews of products and services.

It is the best antidote to advertising you will ever find.

In their May 2015 issue, they analyze the "perils of pesticides" and offer a guide to fruits and vegetables. When should you buy organic? When might conventional be OK for you?

[or, as CR puts it, "Though we believe that organic is always the best choice because it promotes sustainable agriculture, getting plenty of fruits and vegetables - even if you can't obtain organic - takes precedence when it comes to your health."]

Here are the most important findings:

ALWAYS BUY ORGANIC

CR found that for these fruits and vegetables, you should always buy organic - the pesticide risk in conventional is too high.

Fruit

Peaches
Tangerines
Nectarines
Strawberries
Cranberries

Vegetables

Green Beans
Sweet Bell Peppers
Hot Peppers
Sweet Potatoes
Carrots
Categories: Development

EBS 12.2: do not ignore the database patches on top of AD Delta 5 and TXK Delta 5

Senthil Rajendran - Sun, 2015-04-05 09:13

Make sure all the recommended patches are in place as part of the bundle patch. Your EBS 12.2 ADOP cycle could become unstable without the database patches.

A world of confusion

Gary Myers - Sat, 2015-04-04 03:00
It has got to the stage where I often don't even know what day it is. No, not premature senility (although some may disagree). But time zones.

Mostly I've had it fairly easy in my career. When I worked in the UK, I just had the one time zone to work with. The only time things got complicated was when I was working at one of the power generation companies, and we had to make provision for the 23-hour and 25-hour days that go with Daylight Savings.

And in Australia we only have a handful of timezones, and when I start and finish work, it is the same day for any part of Australia. I did work on one system where the database clock was set to UTC, but dates weren't important on that application.

Now it is different. I'm dealing with events that happen all over the world. Again the database clock is UTC, with the odd effect that TRUNC(SYSDATE) 'flips over' around lunchtime. Now when I want to look at 'recent' entries (eg a log table) I've got into the habit of asking WHERE LOG_DATE > SYSDATE - INTERVAL '9' HOUR

And we also have columns that are TIMESTAMP WITH TIME ZONE. So I'm getting into the habit of selecting COL_TS AT TIME ZONE DBTIMEZONE. I could use SESSIONTIMEZONE, but then the time component of DATE columns would be inconsistent.  This becomes just a little more confusing at this time of year, as various places slip in and out of Daylight Savings.
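
Putting both habits together, a typical query against such a log table looks something like this (table and column names are invented for illustration):

select log_id,
       log_date,                          -- DATE, effectively UTC because of the database clock
       col_ts at time zone dbtimezone     -- TIMESTAMP WITH TIME ZONE rendered in the database time zone
from   app_log
where  log_date > sysdate - interval '9' hour
order by log_date;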

Now things are getting even more complicated for me.

Again, during my career, I've been lucky enough to be pretty oblivious to character set issues. Most things have squeezed into my databases without any significant trouble. Occasionally I've had to look for some accented characters in people's names, but that's been it.

In the past few months, I've been working with some European data where the issues have been more pronounced. Aside from a few issues in emails, I've been coping quite well (with a lot of help from Google Translate). 

Now I get to work with some Japanese data. And things get complicated.

"The modern Japanese writing system is a combination of two character types: logographic kanji, which are adopted Chinese characters, and syllabic kana. Kana itself consists of a pair of syllabarieshiragana, used for native or naturalised Japanese words and grammatical elements, and katakana, used for foreign words and names, loanwordsonomatopoeia, scientific names, and sometimes for emphasis. Almost all Japanese sentences contain a mixture of kanji and kana. Because of this mixture of scripts, in addition to a large inventory of kanji characters, the Japanese writing system is often considered to be the most complicated in use anywhere in the world.[1][2]"Japanese writing system

Firstly I hit katakana. With some tables, I can get syllables corresponding to the characters and work out something that I can eyeball and match up to some English data. As an extra complication, there are also half-width characters which are semantically equivalent but occupy different codepoints in Unicode. That has parallels to upper/lower case in English, but is a modern development that came about from trying to fit the previously squarish forms into print, typewriters and computer screens.
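
You can see the codepoint split directly in SQL (a quick sketch: U+30AB is the full-width katakana KA, U+FF76 its half-width form):

select unistr('\30ab') as fullwidth_ka,
       unistr('\ff76') as halfwidth_ka
from   dual;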

Kanji is a different order of shock. Primary school children in Japan learn the first 1000 or so characters. Another thousand plus get taught in high school. The character set is significantly larger in total.

I will have to see if the next few months cause my head to explode. In the meantime, I can recommend reading this article about the politics involved in getting characters (glyphs? letters?) into Unicode: I Can Text You A Pile of Poo, But I Can’t Write My Name

Oh, and I'm still trying to find the most useful character/font set I can have on my PC and  use practically in SQL Developer. My current choice shows the Japanese characters when I click in the field in the dataset, but only little rectangles when I'm not in the field. The only one I've found that does show up all the time is really UGLY. 

adoafmctl.sh hangs

Vikram Das - Fri, 2015-04-03 20:26
Rajesh and Shahed called me about an issue where, after a reboot of the servers, adoafmctl.sh wouldn't start.  It gave errors like these:

You are running adoafmctl.sh version 120.6.12000000.3 
Starting OPMN managed OAFM OC4J instance ... 
adoafmctl.sh: exiting with status 152 
adoafmctl.sh: check the logfile 
$INST_TOP/logs/appl/admin/log/adoafmctl.txt for more information

adoafmctl.txt showed:
ias-component/process-type/process-set:
default_group/oafm/default_group/
Error
--> Process (index=1,uid=349189076,pid=15039)
time out while waiting for a managed process to start
Log:
$INST_TOP/logs/ora/10.1.3/opmn/default_group~oafm~default_group~1
07/31/09-09:14:28 :: adoafmctl.sh: exiting with status 152
================================================================================
07/31/09-09:14:40 :: adoafmctl.sh version 120.6.12000000.3
07/31/09-09:14:40 :: adoafmctl.sh: Checking the status of OPMN managed OAFM OC4J instance
Processes in Instance: SID_machine.machine.domain
-------------------+--------------------+---------+---------
ias-component | process-type | pid | status
-------------------+--------------------+---------+---------
default_group | oafm | N/A | Down

Solution:

1. Shut down all middle tier services and ensure no defunct processes remain by running the following from the operating system:
# ps -ef | grep

If one finds any, kill these processes.
2. Navigate to the $INST_TOP/ora/10.1.3/opmn/logs/states directory. It contains the hidden file .opmndat:
# ls -lrt .opmndat
3. Delete this file .opmndat after making a backup of it:
# rm .opmndat
4. Restart the services.

5. Re-test the issue.

This resolved the issue.
Categories: APPS Blogs
