Feed aggregator

11.2.0.3 Install failing for Pre-requisite Checks (error: configured=unknown)

Madan Mohan - Sat, 2012-12-08 06:50
Error
****
All the Pre-req checks are failing for kernel parameters  

Current = "250"  configured=unknown


Fix
***
The user running the installer does not have read permission on /etc/sysctl.conf, so the pre-requisite checks cannot read the current kernel parameter values.

chmod +r  /etc/sysctl.conf
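Before changing anything, you can confirm the diagnosis by testing readability as the installing OS user; a minimal sketch (the helper function is mine, the path is the one from the fix above):

```python
import os

def readable_by_current_user(path):
    # The pre-req checks read current kernel settings from this file,
    # so the installing OS user needs read access to it.
    return os.access(path, os.R_OK)

if not readable_by_current_user("/etc/sysctl.conf"):
    print("run as root: chmod +r /etc/sysctl.conf")
```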


Challenges with APM 1.0 product

Debu Panda - Tue, 2012-12-04 11:38
Customers have been managing application performance since the early days of the mainframe. However, Application Performance Management as a discipline has gained popularity in the past decade.

See my blog on BMC Communities for the challenges with the old generation of APM products.

Here is the direct link: https://communities.bmc.com/communities/community/bsm_initiatives/app_mgmt/blog/2012/12/04/challenges-with-apm-10-products

GOTOs, considered

Andrew Clarke - Mon, 2012-12-03 00:12
Extreme programming is old hat now, safe even. The world is ready for something new, something tougher, something that'll... break through. You know? And here is what the world's been waiting for: Transgressive Programming.

The Transgressive Manifesto is quite short: It's okay to use GOTO. The single underlying principle is that we value willful controversy over mindless conformity.

I do have a serious point here. Even programmers who haven't read the original article (because they can't spell Dijkstra and so can't find it through Google) know that GOTOs are "considered harmful". But as Marshall and Webber point out, "the problem lies ... in the loss of knowledge and experience. If something is forbidden for long enough, it becomes difficult to resurrect the knowledge of how to use it."

How many Oracle developers even realise PL/SQL supports GOTO? It does, of course. Why wouldn't it? PL/SQL is a proper programming language.

The standard objection is that there is no role for GOTO because PL/SQL has loops, procedures, CASE, etc. But sometimes we need to explicitly transfer control. In recent times I have come across these examples:

  • a loop which raised a user-defined exception to skip to the END LOOP; point when the data was in certain states, thus avoiding a large chunk of processing. A GOTO would have been cleaner, because it is poor practice to represent normal business states as exceptions.
  • a huge function with more than a dozen separate RETURN statements. GOTOs directing flow to a single RETURN call would have been really helpful, because I needed to log the returned value.
  • a condition which set a counter variable to a large number so as to short-circuit a loop. Here a GOTO would simply have been more honest.
These examples are all explicit control transfers: they cause exactly the sort of random paths through the code which Dijkstra inveighed against. But the coders didn't honour the principle underlying his fatwa, they just lacked the moxie to invoke the dread statement. Instead they kludged. I'm not saying that using a GOTO would have redeemed a function with 800 LOC; clearly there's a lot more refactoring to be done there. But it would have been better.

Here is a situation I have come across a few times. The spec is to implement a series of searches of increasing coarseness, depending on which arguments are passed; the users want the most focused set of records available, so once a specific search gets some hits we don't need to run the more general searches.

Nested IF statements provide one way to do this:


result_set := sieve_1(p1=>var1, p2=>var2, p3=>var4, p4=>var5);

if result_set.count() = 0
then
    result_set := sieve_2(p1=>var2, p2=>var3, p3=>var4);

    if result_set.count() = 0
    then
        result_set := sieve_3(p1=>var3, p2=>var5);

        if result_set.count() = 0
        then
            ....
        end if;
    end if;
end if;
return result_set;
Obviously as the number of distinct searches increases the nested indentation drives the code towards the right-hand side of the page. Here is an alternative implementation which breaks the taboo and does away with the tabs.

result_set := sieve_1(p1=>var1, p2=>var2, p3=>var4, p4=>var5);
if result_set.count() > 0
then
goto return_point;
end if;

result_set := sieve_2(p1=>var2, p2=>var3, p3=>var4);
if result_set.count() > 0
then
goto return_point;
end if;

result_set := sieve_3(p1=>var3, p2=>var5);
if result_set.count() > 0
then
goto return_point;
end if;
...
<< return_point >>
return result_set;
I think the second version has a clearer expression of intent. Did we find any records? Yes we did, job's a good'un, let's crack on.

tl;dr
GOTO: not as evil as triggers.

Exadata "marketing" and reality

Nuno Souto - Fri, 2012-11-30 04:24
Sorry for the marked absence of posts, folks. This year we've been upgrading all our Oracle dbs to 11.2.0.3 and all our MSSQL dbs to 2008R2 - hectic is an understatement for how things have been! On top of that we've been told we may need to increase the size of our main Oracle dbs by 10X in the next 2 years. So, in the process of upgrading I had to ensure the groundwork for that sort of …

Track and Trace E-Business Suite Concurrent Request - EBS Logs, O/S Process IDs, Locks

Gareth Roberts - Thu, 2012-11-29 17:21

I often get asked to take a look at an Oracle eBusiness Suite concurrent request to see what it is doing. The question can come from a few different angles:

  1. What requests are currently running?
  2. I have an operating system process that is taking too much CPU - what is it doing?
  3. Can you tell me where the request is at? I've clicked on the log but it doesn't show anything!
  4. My request is taking too long - can you check for blocking locks?

There are a number of strategies to track and trace where things are at for a running request. These include:

  1. Monitor the currently running requests in fnd_concurrent_requests
  2. Checking the v$sqlarea to see what SQL statement or PL/SQL is running
  3. Tailing the concurrent request log / output file while it is being written to near realtime - prior to request completing
  4. Checking for locks blocking the concurrent request

So without further ado, let's take a look at the following sweet query. UPDATE: 23-Aug-2012: fixed multiple rows being returned, due to a missing join on inst_id between gv$session and gv$process. Also note for non-RAC environments change gv$ to v$ and remove the joins to sys.v_$active_instances:

set pages 9999 feed on lines 150
col user_concurrent_program_name format a40 head PROGRAM trunc
col elapsed format 9999
col request_id format 9999999 head REQUEST
col user_name format a12
col oracle_process_id format a5 head OSPID
col inst_name format a10
col sql_text format a30
col outfile_tmp format a30
col logfile_tmp format a30

REM *********************
REM **** RAC VERSION ****
REM *********************
select /*+ ordered */ 
       fcp.user_concurrent_program_name
,      fcr.request_id
,      round(24*60*( sysdate - actual_start_date )) elapsed
,      fu.user_name
,      fcr.oracle_process_id
,      sess.sid
,      sess.serial#
,      inst.inst_name
,      sa.sql_text
,      cp.plsql_dir || '/' || cp.plsql_out outfile_tmp
,      cp.plsql_dir || '/' || cp.plsql_log logfile_tmp
from   apps.fnd_concurrent_requests fcr
,      apps.fnd_concurrent_programs_tl fcp
,      apps.fnd_concurrent_processes cp
,      apps.fnd_user fu
,      gv$process pro
,      gv$session sess
,      gv$sqlarea sa
,      sys.v_$active_instances inst
where  fcp.concurrent_program_id = fcr.concurrent_program_id
and    fcp.application_id = fcr.program_application_id
and    fcr.controlling_manager = cp.concurrent_process_id
and    fcr.requested_by = fu.user_id (+)
and    fcr.oracle_process_id = pro.spid (+)
and    pro.addr = sess.paddr (+)
and    pro.inst_id = sess.inst_id (+)
and    sess.sql_address = sa.address (+)
and    sess.sql_hash_value = sa.hash_value (+)
and    sess.inst_id = inst.inst_number (+)
and    fcr.phase_code = 'R' /* only running requests */
;

REM *********************
REM ** NON-RAC VERSION **
REM *********************
select /*+ ordered */ 
       fcp.user_concurrent_program_name
,      fcr.request_id
,      round(24*60*( sysdate - actual_start_date )) elapsed
,      fu.user_name
,      fcr.oracle_process_id
,      sess.sid
,      sess.serial#
,      sa.sql_text
,      cp.plsql_dir || '/' || cp.plsql_out outfile_tmp
,      cp.plsql_dir || '/' || cp.plsql_log logfile_tmp
from   apps.fnd_concurrent_requests fcr
,      apps.fnd_concurrent_programs_tl fcp
,      apps.fnd_concurrent_processes cp
,      apps.fnd_user fu
,      v$process pro
,      v$session sess
,      v$sqlarea sa
where  fcp.concurrent_program_id = fcr.concurrent_program_id
and    fcp.application_id = fcr.program_application_id
and    fcr.controlling_manager = cp.concurrent_process_id
and    fcr.requested_by = fu.user_id (+)
and    fcr.oracle_process_id = pro.spid (+)
and    pro.addr = sess.paddr (+)
and    sess.sql_address = sa.address (+)
and    sess.sql_hash_value = sa.hash_value (+)
and    fcr.phase_code = 'R' /* only running requests */
;

PROGRAM                                   REQUEST ELAPSED USER_NAME    OSPID        SID    SERIAL# INST_NAME  SQL_TEXT                       OUTFILE_TMP                    LOGFILE_TMP
---------------------------------------- -------- ------- ------------ ----- ---------- ---------- ---------- ------------------------------ ------------------------------ ------------------------------
Workflow Background Process               2960551       1 VIRTUATE     24814        130      29699 APPLPROD1  BEGIN WF_ENGINE.BACKGROUNDCONC /usr/tmp/o0068194.tmp          /usr/tmp/l0068194.tmp
                                                                                                              URRENT(:errbuf,:rc,:A0,:A1,:A2
                                                                                                              ,:A3,:A4,:A5); END;

1 row selected.
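The ELAPSED column is in minutes: Oracle date subtraction yields days, which round(24*60*(sysdate - actual_start_date)) scales up. As a sanity check, the same arithmetic sketched in Python (sample timestamps are made up):

```python
from datetime import datetime

def elapsed_minutes(actual_start_date, now):
    # Oracle: round(24*60*(sysdate - actual_start_date)) -- date minus date is days
    days = (now - actual_start_date).total_seconds() / 86400.0
    return round(24 * 60 * days)

start = datetime(2012, 11, 29, 17, 0, 0)
now = datetime(2012, 11, 29, 17, 5, 0)
print(elapsed_minutes(start, now))  # → 5
```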

From the above we can see key information:

  • The running Concurrent Request Program Name and Request_ID
  • The database node operating system process id (OSPID) so we can monitor usage via top / iostat / vmstat
  • The SID / Serial in case we want to kill the session via alter system kill session '130,29699';
  • The instance name the database session is running on in a RAC environment
  • The currently running SQL text
  • The temporary files where concurrent request log/out is being written to via utl_file while running. These files are copied over to the Concurrent Tier $APPLCSF/$APPLOUT and $APPLLOG after completion of the request.

We can break out the above into a few queries and procedures to drill into specific information from the core EBS tables and the v$ views.

1. Get concurrent requests running, temp files and Operating System PID
col user_concurrent_program_name format a40 head PROGRAM trunc
col elapsed format 9999
col request_id format 9999999 head REQUEST
col user_name format a12
col oracle_process_id format a5 head OSPID
select  fcp.user_concurrent_program_name
,       fcr.request_id
,       round(24*60*( sysdate - actual_start_date )) elapsed
,       fu.user_name
,       fcr.oracle_process_id
from    apps.fnd_concurrent_requests fcr
,       apps.fnd_concurrent_programs_tl fcp
,       apps.fnd_user fu
where   fcp.concurrent_program_id = fcr.concurrent_program_id
and     fcp.application_id = fcr.program_application_id
and     fu.user_id = fcr.requested_by
and     fcr.phase_code = 'R';

PROGRAM                                   REQUEST ELAPSED USER_NAME    OSPID
---------------------------------------- -------- ------- ------------ -----
Virtuate GL OLAP Data Refresh             2960541       5 VIRTUATE     21681

2. Get current running sql from instance (RAC inst_id/name) where request is running
col inst_name format a10
col sql_text format a30
col module format a20

REM *********************
REM **** RAC VERSION ****
REM *********************
select sess.sid
,      sess.serial#
,      sess.module
,      sess.inst_id
,      inst.inst_name
,      sa.fetches
,      sa.runtime_mem
,      sa.sql_text
,      pro.spid
from   gv$sqlarea sa
,      gv$session sess
,      gv$process pro
,      sys.v_$active_instances inst
where  sa.address = sess.sql_address
and    sa.hash_value = sess.sql_hash_value
and    sess.paddr = pro.addr
and    sess.inst_id = pro.inst_id
and    sess.inst_id = inst.inst_number (+)
and    pro.spid = &OSPID_from_running_request;

REM *********************
REM ** NON-RAC VERSION **
REM *********************

select sess.sid
,      sess.serial#
,      sess.module
,      sa.fetches
,      sa.runtime_mem
,      sa.sql_text
,      pro.spid
from   v$sqlarea sa
,      v$session sess
,      v$process pro
where  sa.address = sess.sql_address
and    sa.hash_value = sess.sql_hash_value
and    sess.paddr = pro.addr
and    pro.spid = &OSPID_from_running_request;

If you're running something that has long SQL statements, get the full SQL Statement by selecting from v$sqltext_with_newlines as follows

select t.sql_text
from   v$sqltext_with_newlines t
,      v$session s
where  s.sid = &SID
and    s.sql_address = t.address
order by t.piece;
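v$sqltext_with_newlines stores each statement in fixed-size pieces, which is why the order by t.piece matters: the client just reassembles the pieces in order. A small Python sketch of that reassembly (sample rows are made up):

```python
def assemble_sql(rows):
    # rows: (piece, sql_text) pairs as returned by v$sqltext_with_newlines;
    # sort on the piece number and concatenate to rebuild the full statement.
    return "".join(text for piece, text in sorted(rows))

rows = [(1, "from authors "), (0, "select count(*) "), (2, "where 1 = 1")]
print(assemble_sql(rows))  # → select count(*) from authors where 1 = 1
```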
3. Find currently spooling temp file from request
col outfile format a30
col logfile format a30
select cp.plsql_dir || '/' || cp.plsql_out outfile
,      cp.plsql_dir || '/' || cp.plsql_log logfile
from  apps.fnd_concurrent_requests cr
,     apps.fnd_concurrent_processes cp
where cp.concurrent_process_id = cr.controlling_manager
and cr.request_id = &request_id;

OUTFILE                        LOGFILE
------------------------------ ------------------------------
/usr/tmp/PROD/o0068190.tmp     /usr/tmp/PROD/l0068190.tmp

REM Now tail log file on database node to see where it is at, near realtime
REM tail -f /usr/tmp/l0068190.tmp

Then on the Database node you can tail -f the above plsql_out or plsql_log files to see where program is at. Combine this with good logging techniques (date/time stamp on each entry) and you'll be able to know where your program is at.
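Under the hood, tail -f just keeps reading past the last known offset. If you would rather poll the temp log from a script, here is a rough Python equivalent (my own helper, assuming plain ASCII log content so character and byte offsets agree):

```python
def read_new_lines(path, offset):
    """Return (new_lines, new_offset) for content appended since `offset`.
    Call this in a loop with a short sleep to approximate `tail -f`."""
    with open(path, "r") as f:
        f.seek(offset)
        data = f.read()
    return data.splitlines(), offset + len(data)
```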

If locks are the potential problem, then drill into those:

Currently held locks per concurrent request
set lines 150
col object_name format a32
col mode_held format a15
select /*+ ordered */
       fcr.request_id
,      object_name
,      object_type
,      decode( l.block
             , 0, 'Not Blocking'
             , 1, 'Blocking'
             , 2, 'Global'
             ) status
,      decode( v.locked_mode
             , 0, 'None'
             , 1, 'Null'
             , 2, 'Row-S (SS)'
             , 3, 'Row-X (SX)'
             , 4, 'Share'
             , 5, 'S/Row-X (SSX)'
             , 6, 'Exclusive'
             , to_char(lmode)
             ) mode_held
from   apps.fnd_concurrent_requests fcr
,      gv$process pro
,      gv$session sess
,      gv$locked_object v
,      gv$lock l
,      dba_objects d
where  fcr.phase_code = 'R'
and    fcr.oracle_process_id = pro.spid (+)
and    pro.addr = sess.paddr (+)
and    sess.sid = v.session_id (+)
and    v.object_id = d.object_id (+)
and    v.object_id = l.id1 (+)
;

REQUEST_ID OBJECT_NAME                      OBJECT_TYPE         STATUS       MODE_HELD
---------- -------------------------------- ------------------- ------------ ---------------
   1070780 VIRTUATE_GL_OLAP_REFRESH         TABLE               Not Blocking Exclusive
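The decode() on locked_mode above is just a lookup table over the numeric lock modes; for reference, the same mapping expressed in Python:

```python
# Numeric lock modes from v$locked_object and their conventional names,
# matching the decode() in the query above.
LOCK_MODE_NAMES = {
    0: "None",
    1: "Null",
    2: "Row-S (SS)",
    3: "Row-X (SX)",
    4: "Share",
    5: "S/Row-X (SSX)",
    6: "Exclusive",
}

def mode_held(locked_mode):
    # Unknown codes fall back to the raw number, like the decode()'s default branch.
    return LOCK_MODE_NAMES.get(locked_mode, str(locked_mode))

print(mode_held(6))  # → Exclusive
```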

So there you have it - enough tools to keep you happy Track n Tracing! Maybe next time we'll look at tracing with binds / waits or PL/SQL profiling of concurrent programs.

Catch ya!
Gareth
This is a post from Gareth's blog at http://garethroberts.blogspot.com
Related Posts

Book review: Oracle APEX Best Practices

Jornica - Thu, 2012-11-29 14:59
Read my book review of Oracle APEX Best practices.

Berkeley DB Java Edition 5.x Cleaner Performance Improvements

Charles Lamb - Thu, 2012-11-29 14:14

Berkeley DB Java Edition 5.x has significant performance improvements. One user noted that they are seeing a 20x improvement in cleaner performance.

E-Business Suite Adapter vs Integrated SOA Gateway

Ramkumar Menon - Wed, 2012-11-28 17:52

Snip from a very Nice Slide Deck from a Presentation by the E-Business Suite Team.

See the entire presentation here


Review Oracle Apex Best Practices- Packt Publishing

H.Tonguç Yılmaz - Wed, 2012-11-28 07:50
Oracle APEX Best Practices has been released by Packt Publishing recently. First impression on me was that it's not a beginner guide to APEX. It's an enterprise-level APEX development features book. When developers begin to learn APEX, they think "we can do simple web projects with APEX but not more". But this book shows that everything can be […]

APEX 5.0 Statement of Direction

David Peake - Tue, 2012-11-27 18:35
We have published an updated Statement of Direction for Application Express and plan on calling our next major release APEX 5.0.

This outlines the major features we *plan* on delivering. However, as always, there is also a large number of other features we have listed in Team Development for this release; these are covered under the "Numerous functional improvements" bucket. Many of the features we are planning to deliver come from the APEX Feature Request Application.

As always our disclaimer (as listed at the bottom of the Statement of Direction page) in relation to such forward looking statements applies.

Looks like our team is going to be busy ... again!!!

APEX at the NYOUG Special Winter Meeting

Marc Sewtz - Mon, 2012-11-26 14:12

Just back from the DOAG conference (German Oracle User Group) in Nuremberg, it’s now time to get ready for an event closer to home: the annual NYOUG Special Winter Meeting at The New Yorker Hotel on December 12. If you’re interested in APEX and the Oracle Database, please stop by - you’ll have a busy day.

Willie Hardie, Vice President for Oracle Database Management will kick things off at 9:30am with his keynote on “What’s next for the Oracle Database". Right after that - at 10:30 - I’ll be talking about what’s new in Oracle Application Express 4.2. 

Later – at 2pm – you can hear about "Responsive Web Design in Oracle Application Express with HTML5 and CSS3" from Shakeeb Rahman and then finish the afternoon with Josh Millinger’s "The Maturity of Tabular Forms". 

Looking forward to seeing you at the NYOUG Special Winter Meeting. You can find the full agenda, including abstracts on all sessions here:

http://nyoug.org/upcoming_events.htm#General_Meeting1

A new career in a new town

Andrew Clarke - Sun, 2012-11-25 07:36
Well, I say "new career" but really it's the same thing: Oracle development. I'm working on some complex data matching routines for a data warehouse. But it is new, because I'm a contractor.

Going freelance has been a big step, as I was with Logica for over seventeen years. Actually, the company I joined was CMG, which through a chain of mergers, acquisitions and re-brandings became CMG Admiral, LogicaCMG, Logica and is now part of CGI. Contracting is something I had been considering for a while but the actual decision almost took me by surprise. It's two weeks now, and I'm still enjoying it.

The "new town" part of the title is not quite accurate either, as Croydon is a town with a long and venerable history. But it's not a place I had been to before, although I have driven through it. This is not a crack at Croydon, it's just that sort of place; Wikipedia describes it as "a natural transport corridor between London and England's south coast".

Croydon is a bit odd, comprising zones of office blocks, shopping malls, car parks and entertainment complexes delineated by dual carriageways and fly-overs. For me its chief virtue is a shortish commute. My last engagement for Logica entailed working away from home. After several months of living in a hotel room it's just nice to come back to the family every evening.

Oracle eBS R12 customers can co-exist with Fusion Accounting Hub

Krishanu Bose - Fri, 2012-11-23 13:58
Existing Oracle eBS customers who are on R12 can leverage the features of Fusion General Ledger and extensive reporting capabilities of Oracle Fusion without re-implementing or upgrading the entire sub-ledger system to Fusion ecosystem through a co-existence configuration setup using Fusion Accounting Hub (FAH).
Oracle Fusion Accounting Hub, comes embedded with Essbase multidimensional cube that can be leveraged to derive business benefits through BI analytics, Smartview reporting, Financial Studio reporting, Account Monitor and Account Inspector and a very intuitive Allocation Manager for creating Allocation rules.
Using GoldenGate, a seamless integration between Oracle eBS and FAH has been provided. GL actual balances data can be transferred to FAH and drill-down back to eBS system is also possible.
The SaaS model of FAH deployment would provide faster time to deployment and lower TCO for customers as well while the existing eBS system can be an on-premise installation.

Application Migration - Part 3

Barry McGillin - Fri, 2012-11-23 12:09
Ok, finally, we have got to part 3 of Application Migration. In Part 1, we outlined a program which runs in Sybase through iSQL. We then followed this, in Part 2, with 2 important pieces:
  1. Recognizers to identify the file types of the source we post
  2. Rules to identify items within the files and report on them
In this part, we will take the rules we used in the previous part and add some replacement rules. So, let's recap. Our recognizer is set for shell files as below.

<?xml version="1.0" encoding="UTF-8"?>
<rulesfile version="1.0" name="Shell file recognizer" description="Recognize .sh files">
<recognizer name="sh" description="sh recognizer" enabled="true">
<fileExtension extension="sh" />
<expression><![CDATA[#!/bin/sh]]></expression>
</recognizer>
</rulesfile>
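In effect this recognizer is a file-extension filter plus a content regex. A minimal Python sketch of the same check (the function name is mine, purely illustrative):

```python
import re

def recognizes_sh(filename, content):
    # Mirrors the rules file above: .sh extension and the #!/bin/sh expression.
    return filename.endswith(".sh") and re.search(r"#!/bin/sh", content) is not None

print(recognizes_sh("test.sh", "#!/bin/sh\nisql -UMYUSER ..."))  # → True
print(recognizes_sh("test.sql", "select 1"))                     # → False
```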


Our rules file is now extended to include replacement rules. Looking at the rules file below, we have the two main sections:
  1. The required section, which defines the expressions used to decide whether we should scan a file
  2. The rules section, where each rule can have 3 clauses:
    1. An example clause, which contains an expression showing what should be found by the rule
    2. An expression clause, which defines a regular expression used to identify the items to be changed
    3. A replacement clause, which can be configured in 3 different ways to cope with different replacements
Review the file below:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="../schema/scanner.xsd"?>
<rulesfile version="1.0" name="Sample sh scanner rules"
description="Sample rules to show whats possible" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="../../schema/scanner.xsd">
<ruleset name="isql sample rules" enable="true" type="SYBASE"
description="" source="sh" codetype="isql">
<required>
<regex>
<expression><![CDATA[go|isql]]></expression>
</regex>
</required>
<rules>
<regex>
<expression><![CDATA[go *\n]]></expression>
<replacement type="text"><![CDATA[]]>
</replacement>
</regex>
<regex>
<expression><![CDATA[use.*\n]]></expression>
<replacement type="regex"><![CDATA[]]>
</replacement>
</regex>
<regex>
<expression><![CDATA[isql.*EOF]]></expression>
<replacement type="regex"><![CDATA[sqlplus barry/barry <<EOF]]></replacement>
</regex>
</rules>
</ruleset>
<ruleset name="sql sample rules" enable="true" type="SYBASE"
description="" source="sh" codetype="sql">
<required>
<regex>
<expression><![CDATA[select]]></expression>
</regex>
</required>
<rules>
<regex>
<expression><![CDATA[select.*\n]]></expression>
<replacement type="translator"/>
</regex>
</rules>
</ruleset>
</rulesfile>


The replacement types are:
  1. text
    • This is the simplest type of replacement, taking the source strings found and replacing them with the string in the replacement tag.
  2. regex
    • The regular expression replacement can either simply replace text, or it can use regular expressions to rearrange the string that was found. For example, function(a,b,c) can be switched to myfunction(c,a,b).
  3. translator
    • The translator type allows the user to take the string found and pass it to a language translator denoted by the ruleset type. In our example, the type is SYBASE, which will call our Sybase translator and translate the source string.
In the rules file above, we have 2 rulesets defined, the first doing text and regex replacements, and the second doing translator replacements.  All these can be mixed together, though.  If you have a lot of rules, it makes sense to delineate them in rulesets so the tool can filter out what is not required.
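To make the text and regex replacements concrete, here is a rough Python sketch of what they do to the Part 1 script (the translator type, which rewrites the SQL dialect itself, is the migration tool's own engine and is not reproduced here):

```python
import re

# (pattern, replacement) pairs taken from the rules file above
rules = [
    (r"go *\n", ""),                              # drop iSQL batch terminators
    (r"use.*\n", ""),                             # drop the `use pubs2` switch
    (r"isql.*EOF", "sqlplus barry/barry <<EOF"),  # swap isql for SQL*Plus
]

src = (
    "#!/bin/sh\n"
    "isql -UMYUSER -PMYPASS -SMYSERVER <<EOF\n"
    "use pubs2\n"
    "go\n"
    "select count(*) from authors\n"
    "go\n"
    "EOF\n"
)
for pattern, replacement in rules:
    src = re.sub(pattern, replacement, src)
print(src)
```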
Now, taking a look at the source we had in Part 1

bamcgill-macbook-pro:src bamcgill$ cat test.sh
#!/bin/sh
isql -UMYUSER -PMYPASS -SMYSERVER <<EOF
use pubs2
go
select count(*) from authors
go
select top 5 au_lname,au_fname,postalcode from authors
go
EOF
bamcgill-macbook-pro:src bamcgill$

We can now run the scanner and make the replacements. Using a similar command to that used in Part 1, we can replace code in this script.

bamcgill-macbook-pro:demo bamcgill$ migration -actions=scan -dir=/Users/bamcgill/code/demo/src -rulesdir=/Users/bamcgill/code/demo/rules -inplace=true

Now, when we look at the file test.sh again, we have

#!/bin/sh
sqlplus barry/barry <<EOF
SELECT COUNT(*)
FROM authors ;

SELECT au_lname ,
au_fname ,
postalcode
FROM authors WHERE ROWNUM <= 5;

EOF

So, there we have a translated file which was running iSQL on Sybase, but is now running SQL*Plus and Oracle-specific SQL.

Now, if you are interested, here's what the scanner said as it was running. It prints out individual statements as they are found, and where in the source they were found. At the end, the tool summarises what was found and where.
bamcgill-macbook-pro:demo bamcgill$ migration -actions=scan -dir=/Users/bamcgill/code/demo/src -rulesdir=/Users/bamcgill/code/demo/rules -inplace=true

Oracle SQL Developer
Copyright (c) 1997, 2011, Oracle and/or its affiliates. All rights reserved.

Finding files....
Default Application Name

test.sh
3:go

5:go

7:go

2:use pubs2

1:isql -UMYUSER -PMYPASS -SMYSERVER <<EOF
2:select count(*) from authors

3:select top 5 au_lname,au_fname,postalcode from authors

------------------------ Application Results -----------------

Call Breakdown Summary by File
------------------------------
/Users/bamcgill/code/demo/src/test.sh
3: go

1: use pubs2

1: isql -UMYUSER -PMYPASS -SMYSERVER <<EOF
1: select count(*) from authors

1: select top 5 au_lname,au_fname,postalcode from authors


-------------------------------------------------------------

Call Breakdown Summary
----------------------
3: go

1: use pubs2

1: isql -UMYUSER -PMYPASS -SMYSERVER <<EOF
1: select count(*) from authors

1: select top 5 au_lname,au_fname,postalcode from authors


-------------------------------------------------------------

File Type Summary
----------------------
sh 1 file

-------------------------------------------------------------
------------------------ Summary -----------------------
High Level Overview
-------------------
7 total calls found
5 distinct calls found
1 files scanned
1 language types
2 total files in source
9 lines of code in scanned application
-------------------------------------------------------------
-------------------------------------------------------------
scan completed successfully
If you want to know more about this, drop me a line on Twitter @bamcgill, or by email at barry.mcgillin@oracle.com

You can download SQL Developer from OTN.

Reinstalling an Oracle Database Appliance Part 3

Fuad Arshad - Fri, 2012-11-23 10:47
An Oracle Database Appliance is a set of standard hardware and software molded together to create a 2-node RAC cluster that is easy to deploy. In parts 1 & 2 I talked about what methods you can use to reformat or erase the Oracle Database Appliance and rebuild it. In this post I will talk about 2 methods to complete a deployment on the Oracle Database Appliance (ODA).
You can use the GUI tool oakcli deploy, which will walk you through all the steps, let you add all the IPs and then do the deploy.
Or you can use the Oracle Appliance Kit offline configurator to generate the conf file, then run oakcli deploy to quickly validate all the information and run a graphical deploy.
The Oracle Appliance Kit Offline Configurator is available at the Oracle Technology Network ODA page. The offline configurator is an exact replica of the oakcli deploy tool, except it can be run on your laptop so you don't have to type all the info on the server. The offline configurator allows you to save a text file in the below format.
#onecommand.params generated by GUI tool
# Configuration information
CONFIGVERSION=2.4.0.0.0

# CLUSTERINFO
DBMPREFIX=test1
DOMAIN=test.com
CLUSTERNAME=test1-c
REGION=America
TIMEZONE=America/Chicago

# DATABASE INFO
DbHomeArr=(test1)
OraHomeArr=(/u01/app/oracle/product/11.2.0/dbhome_1)
DbBlkSizArr=(8192)
DBBackupType=External
DataDgArr=(DATA)
RecoDgArr=(RECO)
RedoDgArr=(REDO)
RecoSizeArr=(2048)
DbTypeArr=(RAC)
DbNodeArr=(ALL )
DbCharset=AL32UTF8
DbTeritory=AMERICA
DbLang=AMERICAN
DbClass=VSmall
DbQOSTypArr=(ADMIN)
COMPONENT_LANGUAGES={en}


This text file contains a complete snapshot of the ODA being built. Below is what the ODA configurator looks like.
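The format is simple KEY=VALUE text with bash-style (...) arrays, so it is easy to inspect programmatically before a deploy. A small parser sketch (my own helper, not part of the OAK tooling):

```python
def parse_params(text):
    """Parse onecommand.params-style text: '#' lines are comments,
    values wrapped in (...) become lists, everything else stays a string."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if not sep:
            continue
        if value.startswith("(") and value.endswith(")"):
            conf[key] = value[1:-1].split()
        else:
            conf[key] = value
    return conf

sample = "# CLUSTERINFO\nDBMPREFIX=test1\nDataDgArr=(DATA)\nDbNodeArr=(ALL )"
print(parse_params(sample))
```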


Once the file is loaded you can go through a standard GUI deployment.
The second method, which is very useful for deploying an ODA, is the silent method. You can use the text file generated from the ODA offline configurator and copy the config file:

./oakcli copy -conf /software/test11_12.txt
Successfully copied the config file!!!

Once the config file has been copied, you can list the GridInst.pl steps:

[root@oak1 onecmd]# cd /opt/oracle/oak/onecmd
[root@oak1 onecmd]# ./GridInst.pl -l
INFO : Logging all actions in /dev/null and traces in /dev/null
INFO : Loading configuration file /opt/oracle/oak/onecmd/onecommand.params...
The steps in order are...
Step 0 = ValidateParamFile
Step 1 = SetupNetwork
Step 2 = WriteNodelists
Step 3 = SetupSSHroot
Step 4 = SetupDNS
Step 5 = UpdateEtcHosts
Step 6 = SetTimezone
Step 7 = SetupNTP
Step 8 = SetupILOM
Step 9 = ValidateEnv
Step 10 = CreateUsers
Step 11 = SetupStorage
Step 12 = SetupSSHusers
Step 13 = InstallGIClone
Step 14 = RunGIClonePl
Step 15 = RunRootScripts
Step 16 = GIConfigAssists
Step 17 = CreateASMDiskgroups
Step 18 = InstallDBClone
Step 19 = RunDBClonePl
Step 20 = DbcaDB
Step 21 = SetupEMDbControl
Step 22 = SetupACFS
Step 23 = SetupTFA
Step 24 = SetupASR
Step 25 = ResecureMachine
[root@oak1 onecmd]#
It is very important to validate how many steps there are, since Oracle can change the number between versions. The above is a snapshot of the ODA 2.4 steps. You can decide to break the run down into smaller ranges of steps, run it in debug mode (in which case you will have interaction on some steps), or just execute GridInst.pl via a shell script:
[root@oak1 onecmd]# ./GridInst.pl -r 1-25

This will execute the ODA deploy in non GUI mode with all logging happening as normal.
This concludes this 3-part series on how to reinstall an Oracle Database Appliance. Please feel free to ask any questions regarding the ODA and I will try, to the best of my knowledge, to answer them.

Reinstalling an Oracle Database Appliance Part 1
Reinstalling an Oracle Database Appliance Part 2

Restoring a Database with Rman Controlfile Parameters

Fuad Arshad - Tue, 2012-11-20 19:08
There are occasions when you receive a complete backup of a database from a place different than yours and have to do a restore. Normally a restore is simple: put the files in a directory, run rman catalog on them, and you are done. But what if you have RMAN configuration parameters stored in the controlfile and you are trying to duplicate the database? During a duplicate, RMAN will restore the controlfile and then use the parameters stored in the controlfile for the next steps.
I hit this issue when the db i was trying to duplicate  has sbt_tape parameter and the host i was trying to duplicate on  has no MML configured to it.
After trying a couple of times with no luck. I opened an SR and got a response from a very helpful Oracle Support guy (even on Sev 3).
Below are the procedures i followed to solve my rather simple problem.
1. Startup the auxiliary database in nomount
2. Restore the controlfile from the autobackup piece
RESTORE CONTROLFILE FROM '.... location and name of the autobackup piece ...';
3. Mount the database:
ALTER DATABASE MOUNT;
4. Clear the SBT-related configurations (CHANNEL and DEVICE). Do a SHOW ALL and make sure no parameters remain that could cause an issue.
See the RMAN reference guide under CONFIGURE for details on how to do this.
5. In SQLPLUS backup the controlfile:
ALTER DATABASE BACKUP CONTROLFILE TO '/location/of/backups/control.dbf';
6. Restart the auxiliary database in nomount
7. Move the controlfile autobackup piece out of the backup location
8. Run the Duplicate, which will now use the controlfile copy we created with the correct RMAN configuration.
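Put together, the sequence looks roughly like this. This is a sketch, not a transcript from my environment: the autobackup piece name and paths are placeholders, and the DUPLICATE options are elided.

```
# Steps 1-4, in RMAN connected to the auxiliary instance
# (the autobackup piece name below is a placeholder):
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM '/backups/c-1234567890-20121120-00';
RMAN> ALTER DATABASE MOUNT;
RMAN> SHOW ALL;
RMAN> CONFIGURE CHANNEL DEVICE TYPE SBT CLEAR;
RMAN> CONFIGURE DEVICE TYPE SBT CLEAR;

# Step 5, in SQL*Plus:
SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/backups/control.dbf';

# Steps 6-8: restart the auxiliary in nomount, move the autobackup
# piece out of the backup location, then run the DUPLICATE, which
# picks up the controlfile copy with the cleared configuration.
```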

I was unaware of this approach, and I guess it's good to learn something new every day.

JPA SQL and Fetching tuning ( EclipseLink )

Edwin Biemond - Mon, 2012-11-19 15:50
When you use JPA in your project and your model project contains many entities with eagerly fetched relation attributes, you will probably notice that EclipseLink can fire a lot of SQL queries against the database. This works OK and fast on Dev or Test, but in production these queries can lead to big problems (higher load and more data). In our project we also use ADF DataControls, Web Services or
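The first step in tuning this is seeing exactly which SQL statements EclipseLink fires, which its standard logging properties make visible. Below is a minimal persistence.xml sketch; the persistence-unit name "Model" is a hypothetical placeholder:

```xml
<!-- persistence.xml fragment; the unit name "Model" is a placeholder -->
<persistence-unit name="Model">
  <properties>
    <!-- Log every SQL statement EclipseLink issues, with bind parameters -->
    <property name="eclipselink.logging.level.sql" value="FINE"/>
    <property name="eclipselink.logging.parameters" value="true"/>
  </properties>
</persistence-unit>
```

With the generated SQL visible, the usual next move is to switch the offending relation attributes from eager to lazy fetching (FetchType.LAZY) so related rows are only read when actually touched.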

Fusion Applications Security

Krishanu Bose - Thu, 2012-11-15 22:56
The Fusion security architecture is quite different from EBS and maps to real-world scenarios better. Fusion security is externalized from the applications and moved to standardized Fusion Middleware components: OPSS and LDAP.

Management of enterprise users and roles in fusion applications is done via Oracle Identity Manager (OIM). OIM manages this data in a database and keeps it in sync with the LDAP directory.

OAM is an authentication, authorization, and auditing solution that provides centralized security administration. This includes functionality for access control, single sign-on (separate from Oracle Single Sign-On), personalization, and user profile management in heterogeneous application environments across a variety of application servers, legacy applications, and databases. OAM provides key features for creating, managing, and enforcing access policies. In Fusion v1, the LDAP server holds both the Identity Store and the Policy Store. The Identity Store holds users and external roles; the Policy Store holds application roles and function and data security policies.

Fusion ships with a full pre-seeded set of Job Roles. The separation of jobs and duties allows access control to be customized to a company's particular organization:

   • You can add/change jobs and their duties

   • You can add/change duties and their privileges

Below is a mapping of Fusion Apps security concepts to their closest EBS equivalents.

Job Role -> Top Level Menu

Data Role -> Responsibility

Duty Role -> Sub Menu

Privilege -> Form Function

Permission -> Executable

However, at times you may need to create a new set of policies, duties, and job roles. I will try to demonstrate the steps necessary to set this up with an example in another article.

When is Video Better?

Cary Millsap - Thu, 2012-11-15 16:59
Ok, I’m stuck, and I need your help.

At my company, we sell software tools that help Oracle application developers and database administrators see exactly where their code spends their users’ time. I want to publish better information at our web page that will allow people who are interested to learn more about our software, and that will allow people who don’t even realize we exist to discover what we have. My theory is that the more people who understand exactly what we have, the more customers we’ll get, and we have some evidence that bears that out.

I’ve gotten so much help from YouTube in various of my endeavors that I’ve formed the grand idea in my head:
We need more videos showing our products. The first one I made is the 1:13 video on YouTube called “Method R Tools v3.0: Getting Started.” I’m interested to see how effective it is. I think this content is perfect for the video format, because the whole point is to show you how easy it is to get going. Saying it’s easy just isn’t nearly as fun or convincing as showing that it’s easy.

The next thing I need to share with people is a great demonstration that Jeff Holt has helped me pull together, which shows off all the tools in our suite, how they interact and solve an interesting and important common problem. But this one can’t be a 1-minute video; it’s a story that will take a lot longer to tell than that. It’s probably a 10- to 15-minute story, if I had to guess.

Here’s where I’m stuck. Should I make a video? Or write up a blog post for it? The reason this is a difficult question is that making a video costs me about 4 hours per minute of output I can create. That will get better over time, as I practice and accumulate experience. But right now, videos cost me a lot of time. On the other hand, I can whip together a blog post with plenty of detail in a fraction of the time.

Where I need your help is to figure out how much benefit there is, really, to creating a video instead of just a write-up. Some things I have to consider:
  • Do people glaze over (TL;DR) on longish printed case studies, yet gladly watch about 15 minutes of video covering the same material?
  • Or do people just pass on the prospect of watching a 15-minute case study about a problem they might not even realize they have yet? I ask this, because I find myself not clicking on any video that I know will take longer than a minute or two, unless I believe it’s going to help me solve a difficult problem that I know I have right now.
So, if you don’t mind giving me a hand, I’ve created a survey at SurveyMonkey that asks just three simple questions that will help me determine which direction to go next. I’m eager to see what you say.

Thank you for your time.
