Feed aggregator

Error Handling: Or, How to Start an Argument Among Programmers

Tahiti Views - Tue, 2012-12-11 01:18
Lately I see a lot of discussions come up about error handling. For example: the Dr. Dobb's article "The Scourge of Error Handling", covering mainly C-descended languages, and this blog post stemming from OTN discussions of Oracle exception handling in PL/SQL. I've been on both sides of the fence (cowboy coder and database programmer) so I'll offer my take. I'll write mostly from the perspective of PL/SQL stored… (John Russell)

Big Data career adviser says you should be a… Big Data analyst

William Vambenepe - Tue, 2012-12-11 00:13

LinkedIn CEO Jeff Weiner wrote an interesting post on “the future of LinkedIn and the economic graph“. There’s a lot to like about his vision. The part about making education and career choices better informed by data especially resonates with me:

With the existence of an economic graph, we could look at where the jobs are in any given locality, identify the fastest growing jobs in that area, the skills required to obtain those jobs, the skills of the existing aggregate workforce there, and then quantify the size of the gap. Even more importantly, we could then provide a feed of that data to local vocational training facilities, junior colleges, etc. so they could develop a just-in-time curriculum that provides local job seekers the skills they need to obtain the jobs that are and will be, and not just the jobs that once were.

I consider myself very lucky. I happened to like computers and enjoy programming them. This eventually lead me to an engineering degree, a specialization in Computer Science and a very enjoyable career in an attractive industry. I could have been similarly attracted by other domains which would have been unlikely to give me such great professional options. Not everyone is so lucky, and better data could help make better career and education choices. The benefits, both at the individual and societal levels, could be immense.

Of course, like for every Big Data example, you can’t expect a crystal ball either. It’s unlikely that the “economic graph” for France in 1994 would have told me: “this would be a good time to install Linux Slackware, learn Python and write your first CGI script”. It’s also debatable whether that “economic graph” would have been able to avoid one of the worst talent waste of recent time, when too many science and engineering graduates went into banking. The “economic graph” might actually have encouraged that.

But, even under moderate expectations, there is a lot of potential for better informed education and career decision (both on the part of the training profession and the students themselves) and I am glad that LinkedIn is going after that. Along with the choice of a life partner (and other companies are after that problem), this is maybe the most important and least informed decision people will make in their lifetime.

Jeff Weiner also made proclamation of openness in that same article:

Once realized, we then want to get out of the way and allow all of the nodes on this network to connect seamlessly by removing as much friction as possible and allowing all forms of capital, e.g. working capital, intellectual capital, and human capital, to flow to where it can best be leveraged.

I’m naturally suspicious of such claims. And a few hours later, I get a nice email from LinkedIn, announcing that as of tomorrow they are dropping the “blog link” application which, as far as I can tell, fetches recent posts form my blog and includes them on my LinkedIn profile. Seems to me that this was a nice and easy way to “allow all of the nodes on this network to connect seamlessly by removing as much friction as possible”…

Categories: Other

Webinar: Using XMLA with Cognos and Oracle OLAP Cubes

Keith Laker - Mon, 2012-12-10 10:01
When:  Thursday, Dec 13, 2012 at 9:00am PST / 12:00pm EST / 6:00pm CET.

To attend:    Sign up here.
 
 
If you use a business intelligence tool such as IBM Cognos, Microstrategy or SAP BusinessObjects Analysis that uses XMLA to connect to multidimensional data sources, check out a free webinar by Simba Technologies which offers a "sneak peek" of the Simba XMLA Provider for Oracle OLAP.  The Simba XMLA Provider for Oracle OLAP is an XMLA version of the Simba MDX Provider for Oracle OLAP, the gold standard in MDX connectivity to Oracle OLAP.   (The Simba MDX Provider for Oracle OLAP allows MDX-based clients such as Microsoft Excel PivotTables to query Oracle OLAP cubes.  The XMLA version allows clients that use XMLA rather than ODBO to connect to Oracle OLAP.)

Simba will demonstrate using IBM Cognos using the XMLA provider to query Oracle OLAP cubes.  Here's a brief outline of the session.

See how:
  • Familiar business intelligence applications such as IBM Cognos can connect to an Oracle OLAP cube.
  • Ad-hoc querying and data analysis can be performed directly in IBM Cognos on your OLAP data.
  • The most advanced XMLA-capable applications available enable users to interactively build reports, drill into details, and slice and dice data.
  • Connectivity can be established without the need to install any software on the client machine.
    Simply connect to the XMLA service and everything works!
See you there!


Categories: BI & Warehousing

Missing controlfiles ?

Bas Klaassen - Mon, 2012-12-10 03:36
During the backup check this morning, I noticed something strange. The backup logfile was showing the following error… (Bas Klaassen)
Categories: APPS Blogs

Error Wrangling

Andrew Clarke - Mon, 2012-12-10 01:50
Last week the OTN SQL and PL/SQL Forum hosted one of those threads which generate heat and insight without coming to a firm conclusion: this one was titled WHEN OTHERS is a bug. Eventually Rahul, the OP, complained that he was as confused as ever. The problem is, his question asked for a proof of Tom Kyte's opinion that, well, that WHEN OTHERS is a bug. We can't prove an opinion, even an opinion from a well-respected source like Tom. All we can do is weigh in with our own opinions on the topic.

One of the most interesting things in the thread was Steve Cosner's observations on designing code "to be called by non-database, and ... non-Oracle software (Example: Oracle Forms, or a web process)". He uses WHEN OTHERS to produce a return code. Now return codes are often cited as an example of bad practice: they disguise the error and allow the calling program to proceed as though nothing had gone wrong in the called unit. Exceptions, on the other hand, cannot be ignored.
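The trade-off can be sketched in a few lines. Here is an illustrative Python sketch (the function names and the rate-lookup scenario are invented for the example, not taken from Steve's code):

```python
class RateError(Exception):
    pass

def fetch_rate_raises(currency):
    """Exception style: a failure cannot be silently ignored by the caller."""
    rates = {"USD": 1.0, "EUR": 0.78}
    if currency not in rates:
        raise RateError("no rate for " + currency)
    return rates[currency]

def fetch_rate_retcode(currency):
    """Return-code style: caller gets (status, value) and may ignore status."""
    rates = {"USD": 1.0, "EUR": 0.78}
    if currency not in rates:
        return "Error: no rate for " + currency, None
    return "Success", rates[currency]

# The return-code caller is free to carry on despite the failure...
status, rate = fetch_rate_retcode("GBP")
print(status)

# ...whereas the exception forces the caller to decide what to do.
try:
    fetch_rate_raises("GBP")
except RateError as e:
    print("caught:", e)
```

Which style is right depends, as the thread concludes, on who is calling: a PL/SQL caller can catch an exception, but a concurrent manager or Forms front end may only understand a status value.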

But Steve goes on to make a crucial point: we must make allowances for how our code will be called. For instance, one time I was writing PL/SQL packages for an E-Business Suite application and I was doing my usual thing, coding procedures which raised exceptions on failure. This was fine for the low-level routines which were only used in other PL/SQL routines. But the procedures which ran as concurrent jobs had to finish cleanly, no matter what happened inside; each passed a return code to the concurrent job manager, and logged the details. Similarly the procedures used by Forms passed back an outcome message, prefixed with "Success" or "Error" as appropriate.

This rankled at first but it is undoubtedly true that exceptions make demands on the calling program. Although the demands are not as high in PL/SQL as in say C++ or Java, raising exceptions still changes how the calling function must be coded. We have to work with the grain of the existing code base when introducing new functionality. (Interestingly, Google has a fiat against exceptions in its C++ code base, and developed its Go language to support return codes as the default error handling mechanism. Find out more.)

The point is APIs are a contract. On its side the called program can enforce rules about how it is called - number and validity of input parameters, return values. But it cannot impose rules about what the calling program does with the outcome. So there's no point in exposing a function externally if its behaviour is unacceptable to the program which wants to call it. When the calling program wants to use return codes there's little point in raising exceptions instead. Sure the coder writing the calling program can ignore the value in the return code, but that is why we need code reviews.

So, is WHEN OTHERS a bug? The answer is, as so often, it depends.

11.2.0.3 Install failing for Pre-requisite Checks (error: configured=unknown)

Madan Mohan - Sat, 2012-12-08 06:50
Error
****
All the Pre-req checks are failing for kernel parameters  

Current = "250"  configured=unknown


Fix
***
/etc/sysctl.conf does not have read permission for the user that is installing the software.

chmod +r  /etc/sysctl.conf
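A quick way to confirm the diagnosis before re-running the installer is to check the file's mode bits. A minimal sketch, in Python, using a temporary file to stand in for /etc/sysctl.conf (the file contents here are illustrative):

```python
import os
import stat
import tempfile

# Stand-in for /etc/sysctl.conf: a file whose read bits have been stripped,
# which is what made the installer's pre-req checks report "configured=unknown".
path = os.path.join(tempfile.mkdtemp(), "sysctl.conf")
with open(path, "w") as f:
    f.write("kernel.sem = 250 32000 100 128\n")
os.chmod(path, stat.S_IWUSR)  # write-only: no read permission for anyone

mode = stat.S_IMODE(os.stat(path).st_mode)
print("read bits present:", bool(mode & (stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)))

# The fix from the post, chmod +r, adds the read bit for user, group and other:
os.chmod(path, mode | stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
mode = stat.S_IMODE(os.stat(path).st_mode)
print("read bits present:", bool(mode & stat.S_IRUSR))
```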


Challenges with APM 1.0 product

Debu Panda - Tue, 2012-12-04 11:38
Customers have been managing application performance since the early days of the mainframe. However, Application Performance Management as a discipline has only gained popularity in the past decade.

See my blog in BMC communities for challenges with old generation of APM products.

Here is the direct link : https://communities.bmc.com/communities/community/bsm_initiatives/app_mgmt/blog/2012/12/04/challenges-with-apm-10-products

GOTOs, considered

Andrew Clarke - Mon, 2012-12-03 00:12
Extreme programming is old hat now, safe even. The world is ready for something new, something tougher, something that'll... break through. You know? And here is what the world's been waiting for: Transgressive Programming.

The Transgressive Manifesto is quite short: It's okay to use GOTO. The single underlying principle is that we value willful controversy over mindless conformity.

I do have a serious point here. Even programmers who haven't read the original article (because they can't spell Dijkstra and so can't find it through Google) know that GOTOs are "considered harmful". But as Marshall and Webber point out, "the problem lies ... in the loss of knowledge and experience. If something is forbidden for long enough, it becomes difficult to resurrect the knowledge of how to use it."

How many Oracle developers even realise PL/SQL supports GOTO? It does, of course. Why wouldn't it? PL/SQL is a proper programming language.

The standard objection is that there is no role for GOTO because PL/SQL has loops, procedures, CASE, etc. But sometimes we need to explicitly transfer control. In recent times I have come across these examples:

  • a loop which raised a user-defined exception to skip to the END LOOP; point when the data was in certain states, thus avoiding a large chunk of processing. A GOTO would have been cleaner, because it is poor practice to represent normal business states as exceptions.
  • a huge function with more than a dozen separate RETURN statements. GOTOs directing flow to a single RETURN call would have been really helpful, because I needed to log the returned value.
  • a condition which set a counter variable to a large number so as to short-circuit a loop. Here a GOTO would simply have been more honest.
These examples are all explicit control transfers: they cause exactly the sort of random paths through the code which Dijkstra inveighed against. But the coders didn't honour the principle underlying his fatwa, they just lacked the moxie to invoke the dread statement. Instead they kludged. I'm not saying that using a GOTO would have redeemed a function with 800 LOC; clearly there's a lot more refactoring to be done there. But it would have been better.

Here is a situation I have come across a few times. The spec is to implement a series of searches of increasing coarseness, depending on which arguments are passed; the users want the most focused set of records available, so once a specific search gets some hits we don't need to run the more general searches.

Nested IF statements provide one way to do this:


result_set := sieve_1(p1=>var1, p2=>var2, p3=>var4, p4=>var5);

if result_set.count() = 0
then
    result_set := sieve_2(p1=>var2, p2=>var3, p3=>var4);

    if result_set.count() = 0
    then
        result_set := sieve_3(p1=>var3, p2=>var5);

        if result_set.count() = 0
        then
            ....
        end if;
    end if;
end if;
return result_set;
Obviously, as the number of distinct searches increases, the nested indentation drives the code towards the right-hand side of the page. Here is an alternative implementation which breaks the taboo and does away with the tabs.

result_set := sieve_1(p1=>var1, p2=>var2, p3=>var4, p4=>var5);
if result_set.count() > 0
then
goto return_point;
end if;

result_set := sieve_2(p1=>var2, p2=>var3, p3=>var4);
if result_set.count() > 0
then
goto return_point;
end if;

result_set := sieve_3(p1=>var3, p2=>var5);
if result_set.count() > 0
then
goto return_point;
end if;
...
<< return_point >>
return result_set;
I think the second version has a clearer expression of intent. Did we find any records? Yes we did, job's a good'un, let's crack on.
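For comparison, in a language without GOTO the same flat shape falls out of early returns. A small Python sketch of the sieve idea (function and field names invented for illustration):

```python
# Increasingly coarse searches: return as soon as any sieve finds records.
def sieve_1(records, name, city):
    return [r for r in records if r["name"] == name and r["city"] == city]

def sieve_2(records, name):
    return [r for r in records if r["name"] == name]

def sieve_3(records, city):
    return [r for r in records if r["city"] == city]

def search(records, name, city):
    result = sieve_1(records, name, city)
    if result:
        return result                 # found: job's a good'un, crack on
    result = sieve_2(records, name)
    if result:
        return result
    return sieve_3(records, city)     # coarsest search is the last resort

records = [{"name": "Clarke", "city": "Croydon"},
           {"name": "Clarke", "city": "London"}]
print(search(records, "Clarke", "Leeds"))  # falls through to sieve_2
```

PL/SQL functions can use multiple RETURNs the same way, but as noted above, a single labelled exit point is handy when the returned value has to be logged before leaving.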

tl;dr
GOTO: not as evil as triggers.

Exadata "marketing" and reality

Nuno Souto - Fri, 2012-11-30 04:24
Sorry for the marked absence of posts, folks.  This year we've been upgrading all our Oracle dbs to 11.2.0.3 and all our MSSQL dbs to 2008R2 - hectic is an understatement for how things have been! On top of that we've been told we may need to increase the size of our main Oracle dbs by 10X in the next 2 years. So, in the process of upgrading I had to ensure the groundwork for that sort of… (Noons)

Track and Trace E-Business Suite Concurrent Request - EBS Logs, O/S Process IDs, Locks

Gareth Roberts - Thu, 2012-11-29 17:21

I often get asked to take a look at an Oracle eBusiness Suite concurrent request to see what it is doing, this can come from a few different angles:

  1. What requests are currently running?
  2. I have an operating system process that is taking too much CPU - what is it doing?
  3. Can you tell me where the request is at? I've clicked on the log but it doesn't show anything!
  4. My request is taking too long - can you check for blocking locks?

There are a number of strategies to track and trace where things are at for a running request, these include:

  1. Monitor the currently running requests in fnd_concurrent_requests
  2. Checking the v$sqlarea to see what SQL statement or PL/SQL is running
  3. Tailing the concurrent request log / output file while it is being written to near realtime - prior to request completing
  4. Checking for locks blocking the concurrent request

So without further ado, let's take a look at the following sweet query. UPDATE: 23-Aug-2012 fixed multiple rows being returned due to a missing join on inst_id between gv$session and gv$process. Also note for non-RAC environments change gv$ to v$ and remove joins to sys.v_$active_instances:

set pages 9999 feed on lines 150
col user_concurrent_program_name format a40 head PROGRAM trunc
col elapsed format 9999
col request_id format 9999999 head REQUEST
col user_name format a12
col oracle_process_id format a5 head OSPID
col inst_name format a10
col sql_text format a30
col outfile_tmp format a30
col logfile_tmp format a30

REM *********************
REM **** RAC VERSION ****
REM *********************
select /*+ ordered */ 
       fcp.user_concurrent_program_name
,      fcr.request_id
,      round(24*60*( sysdate - actual_start_date )) elapsed
,      fu.user_name
,      fcr.oracle_process_id
,      sess.sid
,      sess.serial#
,      inst.inst_name
,      sa.sql_text
,      cp.plsql_dir || '/' || cp.plsql_out outfile_tmp
,      cp.plsql_dir || '/' || cp.plsql_log logfile_tmp
from   apps.fnd_concurrent_requests fcr
,      apps.fnd_concurrent_programs_tl fcp
,      apps.fnd_concurrent_processes cp
,      apps.fnd_user fu
,      gv$process pro
,      gv$session sess
,      gv$sqlarea sa
,      sys.v_$active_instances inst
where  fcp.concurrent_program_id = fcr.concurrent_program_id
and    fcp.application_id = fcr.program_application_id
and    fcr.controlling_manager = cp.concurrent_process_id
and    fcr.requested_by = fu.user_id (+)
and    fcr.oracle_process_id = pro.spid (+)
and    pro.addr = sess.paddr (+)
and    pro.inst_id = sess.inst_id (+)
and    sess.sql_address = sa.address (+)
and    sess.sql_hash_value = sa.hash_value (+)
and    sess.inst_id = inst.inst_number (+)
and    fcr.phase_code = 'R' /* only running requests */
;

REM *********************
REM ** NON-RAC VERSION **
REM *********************
select /*+ ordered */ 
       fcp.user_concurrent_program_name
,      fcr.request_id
,      round(24*60*( sysdate - actual_start_date )) elapsed
,      fu.user_name
,      fcr.oracle_process_id
,      sess.sid
,      sess.serial#
,      sa.sql_text
,      cp.plsql_dir || '/' || cp.plsql_out outfile_tmp
,      cp.plsql_dir || '/' || cp.plsql_log logfile_tmp
from   apps.fnd_concurrent_requests fcr
,      apps.fnd_concurrent_programs_tl fcp
,      apps.fnd_concurrent_processes cp
,      apps.fnd_user fu
,      v$process pro
,      v$session sess
,      v$sqlarea sa
where  fcp.concurrent_program_id = fcr.concurrent_program_id
and    fcp.application_id = fcr.program_application_id
and    fcr.controlling_manager = cp.concurrent_process_id
and    fcr.requested_by = fu.user_id (+)
and    fcr.oracle_process_id = pro.spid (+)
and    pro.addr = sess.paddr (+)
and    sess.sql_address = sa.address (+)
and    sess.sql_hash_value = sa.hash_value (+)
and    fcr.phase_code = 'R' /* only running requests */
;

PROGRAM                                   REQUEST ELAPSED USER_NAME    OSPID        SID    SERIAL# INST_NAME  SQL_TEXT                       OUTFILE_TMP                    LOGFILE_TMP
---------------------------------------- -------- ------- ------------ ----- ---------- ---------- ---------- ------------------------------ ------------------------------ ------------------------------
Workflow Background Process               2960551       1 VIRTUATE     24814        130      29699 APPLPROD1  BEGIN WF_ENGINE.BACKGROUNDCONC /usr/tmp/o0068194.tmp          /usr/tmp/l0068194.tmp
                                                                                                              URRENT(:errbuf,:rc,:A0,:A1,:A2
                                                                                                              ,:A3,:A4,:A5); END;

1 row selected.

From the above we can see key information:

  • The running Concurrent Request Program Name and Request_ID
  • The database node operating system process id (OSPID) so we can monitor usage via top / iostat / vmstat
  • The SID / Serial in case we want to kill the session via alter system kill session '130,29699';
  • The instance name the database session is running on in a RAC environment
  • The currently running SQL text
  • The temporary files where concurrent request log/out is being written to via utl_file while running. These files are copied over to the Concurrent Tier $APPLCSF/$APPLOUT and $APPLLOG after completion of the request.

We can break out the above into a few queries and procedures to drill into specific information from the core EBS tables and DBA v$ views.

1. Get concurrent requests running, temp files and Operating System PID
col user_concurrent_program_name format a40 head PROGRAM trunc
col elapsed format 9999
col request_id format 9999999 head REQUEST
col user_name format a12
col oracle_process_id format a5 head OSPID
select  fcp.user_concurrent_program_name
,       fcr.request_id
,       round(24*60*( sysdate - actual_start_date )) elapsed
,       fu.user_name
,       fcr.oracle_process_id
from    apps.fnd_concurrent_requests fcr
,       apps.fnd_concurrent_programs_tl fcp
,       apps.fnd_user fu
where   fcp.concurrent_program_id = fcr.concurrent_program_id
and     fcp.application_id = fcr.program_application_id
and     fu.user_id = fcr.requested_by
and     fcr.phase_code = 'R';

PROGRAM                                   REQUEST ELAPSED USER_NAME    OSPID
---------------------------------------- -------- ------- ------------ -----
Virtuate GL OLAP Data Refresh             2960541       5 VIRTUATE     21681

2. Get current running sql from instance (RAC inst_id/name) where request is running
col inst_name format a10
col sql_text format a30
col module format a20

REM *********************
REM **** RAC VERSION ****
REM *********************
select sess.sid
,      sess.serial#
,      sess.module
,      sess.inst_id
,      inst.inst_name
,      sa.fetches
,      sa.runtime_mem
,      sa.sql_text
,      pro.spid
from   gv$sqlarea sa
,      gv$session sess
,      gv$process pro
,      sys.v_$active_instances inst
where  sa.address = sess.sql_address
and    sa.hash_value = sess.sql_hash_value
and    sess.paddr = pro.addr
and    sess.inst_id = pro.inst_id
and    sess.inst_id = inst.inst_number (+)
and    pro.spid = &OSPID_from_running_request;

REM *********************
REM ** NON-RAC VERSION **
REM *********************

select sess.sid
,      sess.serial#
,      sess.module
,      sa.fetches
,      sa.runtime_mem
,      sa.sql_text
,      pro.spid
from   v$sqlarea sa
,      v$session sess
,      v$process pro
where  sa.address = sess.sql_address
and    sa.hash_value = sess.sql_hash_value
and    sess.paddr = pro.addr
and    pro.spid = &OSPID_from_running_request;

If you're running something that has long SQL statements, get the full SQL Statement by selecting from v$sqltext_with_newlines as follows

select t.sql_text
from   v$sqltext_with_newlines t
,      v$session s
where  s.sid = &SID
and    s.sql_address = t.address
order by t.piece;

3. Find currently spooling temp file from request
col outfile format a30
col logfile format a30
select cp.plsql_dir || '/' || cp.plsql_out outfile
,      cp.plsql_dir || '/' || cp.plsql_log logfile
from  apps.fnd_concurrent_requests cr
,     apps.fnd_concurrent_processes cp
where cp.concurrent_process_id = cr.controlling_manager
and cr.request_id = &request_id;

OUTFILE                        LOGFILE
------------------------------ ------------------------------
/usr/tmp/PROD/o0068190.tmp     /usr/tmp/PROD/l0068190.tmp

REM Now tail log file on database node to see where it is at, near realtime
REM tail -f /usr/tmp/l0068190.tmp

Then on the Database node you can tail -f the above plsql_out or plsql_log files to see where the program is at. Combine this with good logging techniques (date/time stamp on each entry) and you'll be able to know where your program is at.
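The "date/time stamp on each entry" advice can be as simple as prefixing every message before it is written. A trivial sketch (in Python; in an EBS concurrent program the writes would go through the program's usual logging call):

```python
import datetime

# Each log entry carries a timestamp, so tailing the file shows not just
# where the program is, but how long each step is taking.
def log_line(msg):
    stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return stamp + " " + msg

print(log_line("Starting data extract"))
print(log_line("Extract complete, beginning load"))
```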

If locks are the potential problem, then drill into those:

Currently held locks per concurrent request
set lines 150
col object_name format a32
col mode_held format a15
select /*+ ordered */
       fcr.request_id
,      object_name
,      object_type
,      decode( l.block
             , 0, 'Not Blocking'
             , 1, 'Blocking'
             , 2, 'Global'
             ) status
,      decode( v.locked_mode
             , 0, 'None'
             , 1, 'Null'
             , 2, 'Row-S (SS)'
             , 3, 'Row-X (SX)'
             , 4, 'Share'
             , 5, 'S/Row-X (SSX)'
             , 6, 'Exclusive'
             , to_char(v.locked_mode)
             ) mode_held
from   apps.fnd_concurrent_requests fcr
,      gv$process pro
,      gv$session sess
,      gv$locked_object v
,      gv$lock l
,      dba_objects d
where  fcr.phase_code = 'R'
and    fcr.oracle_process_id = pro.spid (+)
and    pro.addr = sess.paddr (+)
and    sess.sid = v.session_id (+)
and    v.object_id = d.object_id (+)
and    v.object_id = l.id1 (+)
;

REQUEST_ID OBJECT_NAME                      OBJECT_TYPE         STATUS       MODE_HELD
---------- -------------------------------- ------------------- ------------ ---------------
   1070780 VIRTUATE_GL_OLAP_REFRESH         TABLE               Not Blocking Exclusive

So there you have it - enough tools to keep you happy Track n Tracing! Maybe next time we'll look at tracing with binds/waits or PL/SQL profiling of concurrent programs.

Catch ya!
Gareth
This is a post from Gareth's blog at http://garethroberts.blogspot.com
Related Posts

Book review: Oracle APEX Best Practices

Jornica - Thu, 2012-11-29 14:59
Read my book review of Oracle APEX Best practices.

Berkeley DB Java Edition 5.x Cleaner Performance Improvements

Charles Lamb - Thu, 2012-11-29 14:14

Berkeley DB Java Edition 5.x has significant performance improvements. One user noted that they are seeing a 20x improvement in cleaner performance.

E-Business Suite Adapter vs Integrated SOA Gateway

Ramkumar Menon - Wed, 2012-11-28 17:52

Snip from a very Nice Slide Deck from a Presentation by the E-Business Suite Team.

See the entire presentation here



Review Oracle Apex Best Practices- Packt Publishing

H.Tonguç Yılmaz - Wed, 2012-11-28 07:50
Oracle APEX Best Practices has been released by Packt Publishing recently. First impression on me was that it's not a beginner guide to APEX. It's an enterprise-level APEX development features book. When developers begin to learn APEX, they think "we can do simple web projects with APEX but not more". But this book shows that everything can be […]

APEX 5.0 Statement of Direction

David Peake - Tue, 2012-11-27 18:35
We have published an updated Statement of Direction for Application Express and plan on calling our next major release APEX 5.0.

This outlines the major features we *plan* on delivering; however, as always, there are also a large number of other features we have listed in Team Development for this release. These are covered under the "Numerous functional improvements" bucket. Many of the features we are planning to deliver come from the APEX Feature Request Application.

As always our disclaimer (as listed at the bottom of the Statement of Direction page) in relation to such forward looking statements applies.

Looks like our team is going to be busy ... again!!!

APEX at the NYOUG Special Winter Meeting

Marc Sewtz - Mon, 2012-11-26 14:12

Just back from the DOAG conference (German Oracle User Group) in Nuremberg, it’s now time to get ready for an event closer to home: the annual NYOUG Special Winter Meeting at The New Yorker Hotel on December 12. If you’re interested in APEX and the Oracle Database, please stop by - you’ll have a busy day.

Willie Hardie, Vice President for Oracle Database Management, will kick things off at 9:30am with his keynote on “What’s next for the Oracle Database”. Right after that - at 10:30 - I’ll be talking about what’s new in Oracle Application Express 4.2. 

Later – at 2pm – you can hear about "Responsive Web Design in Oracle Application Express with HTML5 and CSS3" from Shakeeb Rahman and then finish the afternoon with Josh Millinger’s "The Maturity of Tabular Forms". 

Looking forward to seeing you at the NYOUG Special Winter Meeting. You can find the full agenda, including abstracts on all sessions here:

http://nyoug.org/upcoming_events.htm#General_Meeting1

A new career in a new town

Andrew Clarke - Sun, 2012-11-25 07:36
Well, I say "new career" but really it's the same thing: Oracle development. I'm working on some complex data matching routines for a data warehouse. But it is new, because I'm a contractor.

Going freelance has been a big step, as I was with Logica for over seventeen years. Actually, the company I joined was CMG, which through a chain of mergers, acquisitions and re-brandings became CMG Admiral, LogicaCMG, Logica and is now part of CGI. Contracting is something I have been considering for a while, but the actual decision almost took me by surprise. It's two weeks now, and I'm still enjoying it.

The "new town" part of the title is not quite accurate either, as Croydon is a town with a long and venerable history. But it's not a place I had been to before, although I have driven through it. This is not a crack at Croydon, it's just that sort of place; Wikipedia describes it as "a natural transport corridor between London and England's south coast".

Croydon is a bit odd, comprising zones of office blocks, shopping malls, car parks and entertainment complexes delineated by dual carriageways and fly-overs. For me its chief virtue is a shortish commute. My last engagement for Logica entailed working away from home. After several months of living in a hotel room it's just nice to come back to the family every evening.

Oracle eBS R12 customers can co-exist with Fusion Accounting Hub

Krishanu Bose - Fri, 2012-11-23 13:58
Existing Oracle eBS customers who are on R12 can leverage the features of Fusion General Ledger and extensive reporting capabilities of Oracle Fusion without re-implementing or upgrading the entire sub-ledger system to Fusion ecosystem through a co-existence configuration setup using Fusion Accounting Hub (FAH).
Oracle Fusion Accounting Hub comes embedded with an Essbase multidimensional cube that can be leveraged to derive business benefits through BI analytics, Smartview reporting, Financial Studio reporting, Account Monitor and Account Inspector, and a very intuitive Allocation Manager for creating allocation rules.
Using GoldenGate, a seamless integration between Oracle eBS and FAH is provided. GL actual balances data can be transferred to FAH, and drill-down back to the eBS system is also possible.
The SaaS model of FAH deployment would provide faster time to deployment and lower TCO for customers as well while the existing eBS system can be an on-premise installation.

Application Migration - Part 3

Barry McGillin - Fri, 2012-11-23 12:09
OK, finally, we have got to part 3 of Application Migration.  In Part 1, we outlined a program which runs in Sybase through iSQL.  We then followed this, in part 2, with 2 important pieces.
  1. Recognizers to identify the file types of the source we post
  2. Rules to identify items within the files and report on them
In this part, we will take the rules we used for the previous part and add some replacement rules.  So, let's recap.  Our recogniser is set for shell files as below.

<?xml version="1.0" encoding="UTF-8"?>
<rulesfile version="1.0" name="Shell file recognizer" description="Recognize .sh files">
<recognizer name="sh" description="sh recognizer" enabled="true">
<fileExtension extension="sh" />
<expression><![CDATA[#!/bin/sh]]></expression>
</recognizer>
</rulesfile>


Our rules file is now extended to include replacement rules.  Looking at the rules file below, we have the two main sections
  1. The required section, which defines the expressions which are used to see if we should scan a file
  2. Rules section which can have 3 sections
    1. Example clause which contains the expression which shows what should be found by the rule.
    2. Expression clause which defines a regular expression used to identify the items to be changed
    3. Replacement clause which can be configured in 3 different ways to cope with different replacements.
Review the file below:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="../schema/scanner.xsd"?>
<rulesfile version="1.0" name="Sample sh scanner rules"
description="Sample rules to show whats possible" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="../../schema/scanner.xsd">
<ruleset name="isql sample rules" enable="true" type="SYBASE"
description="" source="sh" codetype="isql">
<required>
<regex>
<expression><![CDATA[go|isql]]></expression>
</regex>
</required>
<rules>
<regex>
<expression><![CDATA[go *\n]]></expression>
<replacement type="text"><![CDATA[]]>
</replacement>
</regex>
<regex>
<expression><![CDATA[use.*\n]]></expression>
<replacement type="regex"><![CDATA[]]>
</replacement>
</regex>
<regex>
<expression><![CDATA[isql.*EOF]]></expression>
<replacement type="regex"><![CDATA[sqlplus barry/barry <<EOF]]></replacement>
</regex>
</rules>
</ruleset>
<ruleset name="sql sample rules" enable="true" type="SYBASE"
description="" source="sh" codetype="sql">
<required>
<regex>
<expression><![CDATA[select]]></expression>
</regex>
</required>
<rules>
<regex>
<expression><![CDATA[select.*\n]]></expression>
<replacement type="translator"/>
</regex>
</rules>
</ruleset>
</rulesfile>


The replacement types are:
  1. text
    • This is the simplest type of replacement, taking the source strings found and replacing them with the string in the replacement tag.
  2. regex
    • The regular expression replacement can either simply replace text, or it can use regular expressions to rearrange the string that was found.  For example, function(a,b,c) can be switched to myfunction(c,a,b).
  3. translator
    • The translator type allows the user to take the string found and pass it to a language translator denoted by the ruleset's type attribute.  In our example, the type is SYBASE, which calls our Sybase translator to translate the source string.
In the rules file above, we have two rulesets defined, the first doing text and regex replacements, and the second doing translator replacements.  These can all be mixed together, though.  If you have a lot of rules, it makes sense to separate them into rulesets so the tool can filter out what is not required.
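The rearrangement described for the regex type works through capture groups. Here is a small Python illustration of the idea (a conceptual sketch only; the migration tool's own replacement syntax is not shown in this post):

```python
import re

# Capture the three arguments of function(...) so the replacement can
# rearrange them: function(a,b,c) becomes myfunction(c,a,b).
pattern = r"function\((\w+),(\w+),(\w+)\)"
replacement = r"myfunction(\3,\1,\2)"

print(re.sub(pattern, replacement, "x = function(a,b,c)"))  # x = myfunction(c,a,b)
```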
Now, taking a look at the source we had in Part 1:

bamcgill-macbook-pro:src bamcgill$ cat test.sh
#!/bin/sh
isql -UMYUSER -PMYPASS -SMYSERVER <<EOF
use pubs2
go
select count(*) from authors
go
select top 5 au_lname,au_fname,postalcode from authors
go
EOF
bamcgill-macbook-pro:src bamcgill$

we can now run the scanner and make the replacements.  Using a command similar to the one used in Part 1, we can replace the code in this script.

bamcgill-macbook-pro:demo bamcgill$ migration -actions=scan -dir=/Users/bamcgill/code/demo/src -rulesdir=/Users/bamcgill/code/demo/rules -inplace=true

Now, when we look at the file test.sh again, we have

#!/bin/sh
sqlplus barry/barry <<EOF
SELECT COUNT(*)
FROM authors ;

SELECT au_lname ,
au_fname ,
postalcode
FROM authors WHERE ROWNUM <= 5;

EOF

So, there we have a translated file which was running isql against Sybase, but is now running SQL*Plus with Oracle-specific SQL.
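Since the rewrite happens in place, it is worth a quick sanity check that the translated file is still valid shell before running it.  sh -n parses a script without executing it, so no database connection is needed (the /tmp path below is just for illustration):

```shell
# Recreate the translated script from above and syntax-check it.
# sh -n parses the file but does not execute anything.
cat > /tmp/test.sh <<'SCRIPT'
#!/bin/sh
sqlplus barry/barry <<EOF
SELECT COUNT(*) FROM authors ;
EOF
SCRIPT
sh -n /tmp/test.sh && echo "syntax OK"
```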

Now, if you are interested, here's what the scanner said as it was running.  It prints out individual statements as they are found and where in the source they were found.  At the end, the tool summarises what was found and where.
bamcgill-macbook-pro:demo bamcgill$ migration -actions=scan -dir=/Users/bamcgill/code/demo/src -rulesdir=/Users/bamcgill/code/demo/rules -inplace=true

Oracle SQL Developer
Copyright (c) 1997, 2011, Oracle and/or its affiliates. All rights reserved.

Finding files....
Default Application Name

test.sh
3:go

5:go

7:go

2:use pubs2

1:isql -UMYUSER -PMYPASS -SMYSERVER <<EOF
2:select count(*) from authors

3:select top 5 au_lname,au_fname,postalcode from authors

------------------------ Application Results -----------------

Call Breakdown Summary by File
------------------------------
/Users/bamcgill/code/demo/src/test.sh
3: go

1: use pubs2

1: isql -UMYUSER -PMYPASS -SMYSERVER <<EOF
1: select count(*) from authors

1: select top 5 au_lname,au_fname,postalcode from authors


-------------------------------------------------------------

Call Breakdown Summary
----------------------
3: go

1: use pubs2

1: isql -UMYUSER -PMYPASS -SMYSERVER <<EOF
1: select count(*) from authors

1: select top 5 au_lname,au_fname,postalcode from authors


-------------------------------------------------------------

File Type Summary
----------------------
sh 1 file

-------------------------------------------------------------
------------------------ Summary -----------------------
High Level Overview
-------------------
7 total calls found
5 distinct calls found
1 files scanned
1 language types
2 total files in source
9 lines of code in scanned application
-------------------------------------------------------------
-------------------------------------------------------------
scan completed successfully
If you want to know more about this, drop me a line on twitter @bamcgill, or by email at barry.mcgillin@oracle.com.

You can download SQL Developer from OTN.
