Feed aggregator

The beginning of Oracle Denmark

Moans Nogood - Thu, 2008-03-27 20:03
I started working for a bank called Sparekassen SDS 1st of January 1987. They had just bought Oracle, and that's how I ended up in the database world.

In 1990 I joined Oracle Denmark's support organisation under the magnificent leadership of Jannik Ohl.

He was fired by Peter Perregaard in 1998 or so, because they didn't like each other. Until then things were fantastic. After that things were not.

Jannik was replaced by Allan Marker, who was not nearly his equal in any way you choose to look at it. Especially when it comes to the art of thinking, rather than wondering how you can survive in the corporate culture for the next few months.

But that's how things are. Peter made a mistake, and he regrets it to this day, I'm sure (as in: sure).

So Jannik went into geo-stationary orbit. In other words: He joined the Oracle EMEA organisation (Europe, Middle East, Africa).

When you "go into orbit", i.e. join EMEA or some global outfit, you're never heard from again. In space, nobody can hear you scream, as they say.

Until it's time to lay off some bodies. So Jannik, uhm, resigned just now.

Today I served a bit of Miracle beer for my friend Jannik in Oracle Denmark's canteen.

To honour the best boss I ever had.

And to honour one of the most creative minds I've met. Really.

He was the one that came up with the idea of doing serious database stuff in Lalandia (which is why Miracle now do two conferences there a year).

He was the one that told me: "With all this internet stuff and not-being-able-to-call-a-person thing going on in Support, people will pay for extra services that allow them to talk to people and get their problems resolved without too much bullshit" - and we now have 130 Miracle Support customers.

He came up with the idea of having a credit-card thing for Good Oracle Customers (GOC).

Miracle Support shouldn't be allowed to live. It's feeding off the failings of the big vendor support organisations, because they're failing. That's wrong. But it's a fact.

I just hope Jannik doesn't do the boring thing of leaning back and waiting for the early-age pension to arrive. He's not old, he's not spent. We need him.

As for the headline (The beginning of Oracle Denmark), I'll just pass on this piece of information from an unknown source:

The beginning of Oracle Denmark: Jørgen Balle, Ole Bisgaard, Hanne Cederberg & Jannik started at the same time. Then came Pete Francis, and later Klaus Holse Andersen.

We need more details, folks :-))

Mogens

Columns to String: Comma Separated Values (CSV) (Updated SQL Snippets Tutorial)

Joe Fuda - Thu, 2008-03-27 19:00
The "Columns to String: Comma Separated Values (CSV)" tutorial now includes three new sections, "How to Create a CSV File", "How to Escape Double Quotes, CR, and LF", and "How to Include a Header Line".
...
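For context, a minimal sketch of the quoting technique such an export typically relies on; the EMP table and its columns here are only illustrative, not taken from the tutorial:

-- Wrap each value in double quotes and double any embedded quotes, so values
-- containing commas, double quotes, CR or LF stay within a single CSV field.
SELECT '"' || REPLACE(ename, '"', '""') || '",' ||
       '"' || REPLACE(job,   '"', '""') || '"' AS csv_line
FROM   emp;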

E-Business Suite Integration: Using Irep to discover available business services

Peeyush Tugnawat - Thu, 2008-03-27 10:40


To plan your SOA-based integrations, architects and business users need to know which services are available within EBS that can be leveraged as part of your information integration, business process integration, or a composite application spanning enterprise silos.

The first step when planning and designing your integrations should be to use the Oracle Integration Repository (IRep). It gives you the details of the business services available within EBS, as well as the details of the service end-points. IRep lets users easily discover the appropriate business service interface for integration with any system, application, or business partner.

It is a pre-built central catalog of information about the numerous public integration interfaces delivered with Oracle applications, known as business interfaces.

The key advantages of using IRep are:

  • Helps in better integration planning by providing information to make informed decisions

  • Acts as a single source of truth for the available business services

  • Enhanced re-use of existing components


  • Assurance that you are using supported public interfaces

    Using IRep

    Go to http://irep.oracle.com/

    If you are working on EBS R12: From the Navigator menu, select the Integration Repository responsibility, then click the Integration Repository link that appears.

    Browse IRep: You can browse IRep by product category or by the integration standard you wish to leverage.

    [Screenshot: browsing the Integration Repository by product category]

    Search IRep: IRep also lets you search using various search parameters. You can search by interface name, internal name, product family, interface type (concurrent program, web service, XML Gateway map, etc.), product, and business entities.

    [Screenshot: searching the Integration Repository]


    In Release 12, the Oracle Integration Repository ships as part of the E-Business Suite. As your instance is patched, the repository is automatically updated with content appropriate for the precise revisions of the interfaces in your environment. Until Release 12 is available to you, you can explore an on-line version of the Integration Repository for the 11i10 release of the E-Business Suite applications.

    New Stuff (3) Start Stop Table item is for real!

    Carl Backstrom - Wed, 2008-03-26 13:47
    This is a small feature but fixes something that has always bugged me.

    In Application Express there is the Start Stop Table item, which is very useful for form layout, especially when building forms with large textareas.

    The problem was that there was no easy way to access the Start Stop Table itself with JavaScript or CSS, since it didn't have any distinguishing attributes. Well, that has all changed in APEX 3.1, as the Start Stop Table gets some of the same attributes as a regular item does.

    Start Stop Tables will get the id attribute set to the Item Name just like regular items. Start Stop Tables will also insert attributes from the HTML Form Element Attributes property, again just like a regular APEX item.

    You can see a very simple usage example here http://apex.oracle.com/pls/otn/f?p=11933:137.

    I can definitely see this being used for some more dynamic and just plain prettier forms and layouts, there are a few spots in the APEX builder slated to get some treatment from this.

    As with my last few posts, and my next couple, this isn't the most WizBang feature, but the impact if properly used can be huge.

    RMAN, RAC, ASM, FRA and Archive Logs

    Eric S. Emrick - Wed, 2008-03-26 09:37
    The topic, as the title suggests, concerns RMAN, RAC, ASM and archive logs. This post is rather different from my prior posts in that I want to open up a dialogue concerning the subject matter. So, I'll start the thread by posing a question: are any of you who run RAC in your production environments backing up your archive logs to an FRA that resides in an ASM disk group (and of course backing up the archive logs to tape from the FRA)? Managing the free space within your FRA is paramount, as are judicious backups of the FRA (actually these really go hand in hand). However, I am very interested in your experience. Have you come across any "gotchas", bad experiences, positive experiences, more robust alternatives, extended solutions, etc.? Being somewhat of a backup and recovery junkie, I am extremely interested in your thoughts. Let the dialogue commence!

    Update: 03/26/2008

    A colleague of mine has been doing some testing using RMAN, RAC, ASM, FRA for archive log management. Also, he has tested the integration of DataGuard into this configuration. To be more precise, he has tested using an FRA residing in an ASM disk group as the only local archive log destination. In addition to the local destination, each archive log is sent to the standby destination. Based on his testing this approach is rather robust. The archive logs are backed up via the "BACKUP RECOVERY AREA" command with a regular periodicity. This enables the FRA's internal algorithm to remove archive logs that have been backed up, once the space reaches 80% full. No manual intervention is required to remove the archive logs. Moreover, the archive logs in this configuration will only be automatically deleted from the FRA if both of the following are true: 1) the archive log has been backed up satisfying the retention policy and 2) the archive log has been sent to the standby. When there is a gap issue with the standby database, the archive logs are read from the FRA and sent to the standby. It works real nice!
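    For readers who want to try a similar layout, here is a minimal configuration sketch. The FRA size, ASM disk group, destination numbers, and standby service name are illustrative assumptions, not the configuration from the testing described above:

    -- Place the FRA in an ASM disk group (size and disk group name are assumptions).
    ALTER SYSTEM SET db_recovery_file_dest_size = 200G SCOPE=BOTH SID='*';
    ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';

    -- Make the FRA the only local archive log destination, plus a remote
    -- destination for the standby (service name assumed).
    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH SID='*';
    ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=standby_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' SCOPE=BOTH SID='*';

    -- Then back up the FRA regularly from RMAN so it can age out archive logs
    -- on its own once space pressure builds:
    --   RMAN> BACKUP RECOVERY AREA;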

    E-Business Suite Integration Components

    Peeyush Tugnawat - Wed, 2008-03-26 05:14



    It is important to understand the different integration components available within EBS in order to make an informed decision about using one or more of them for your SOA integration project. Selecting among them depends on your requirements and on the interaction pattern that best fits the service-oriented integration.
    The following integration mechanisms are available within the E-Business Suite.

    Oracle XML Gateway: E-Business Suite utilizes the Oracle Workflow Business Event System to support event-based XML message creation and consumption. It can consume events raised by the Oracle E-Business Suite and can subscribe to inbound events for processing. It can be leveraged for Business-to-Business (B2B) and Application-to-Application (A2A) integration scenarios. The majority of messages delivered with the Oracle E-Business Suite are mapped using the Open Applications Group (OAG) standard.

    Business Events: The Oracle Workflow Business Event System is an application service that leverages the Oracle Advanced Queuing (AQ) infrastructure to communicate business events between systems. There are more than 1,000 built-in events within EBS that can be leveraged for event-based integration of business processes; a minimal sketch of raising an event from PL/SQL appears after this list.


    Concurrent Programs: A concurrent program is an instance of an execution file. Concurrent programs use a concurrent program executable to locate the correct execution file. Several concurrent programs may use the same execution file to perform their specific tasks, each having different parameter defaults.

    Interface Tables: Interface tables are intermediate tables into which the data is inserted first. Once the data gets inserted into the interface tables, the data is validated, and then transferred to the base tables. Base tables are real application tables that reside in the application database. The data that resides in the interface tables is transferred to the base tables using concurrent programs. Interface views provide a way to retrieve data from Oracle Applications. By using views, you can get synchronous data access to Oracle Applications.

    PL/SQL APIs: These are stored procedures that enable you to insert and update data in Oracle Applications.
    Oracle e-Commerce (EDI) Gateway: Oracle e-Commerce Gateway provides a common, standards-based approach for Electronic Data Interchange (EDI) integration between Oracle Applications and third party applications. It is the EDI integration enabler for Oracle Applications.
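    As one concrete example of the Business Event System mentioned above, here is a minimal PL/SQL sketch of raising an event from the database tier. The event name, event key, and parameter are illustrative assumptions; a real integration would raise a seeded event or a custom event registered in Workflow.

    DECLARE
      l_parameters wf_parameter_list_t;
    BEGIN
      -- Attach an illustrative parameter to the event (name and value are assumptions).
      wf_event.addparametertolist(p_name          => 'XX_SOURCE_SYSTEM',
                                  p_value         => 'LEGACY_CRM',
                                  p_parameterlist => l_parameters);

      -- Raise the event; any subscriptions registered against this event name
      -- (XML Gateway maps, BPEL processes, custom PL/SQL) are then executed.
      wf_event.raise(p_event_name => 'xx.custom.demo.order.created',  -- assumed custom event
                     p_event_key  => 'ORDER-12345',
                     p_parameters => l_parameters);
      COMMIT;
    END;
    /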



    Objects Remain in Their Original Tablespaces After Running OATM

    Madan Mohan - Wed, 2008-03-26 04:51
    After migrating to the new tablespaces using OATM, some objects are left behind in the original tablespaces, even though no errors were reported during the tablespace migration.

    SQL> select tablespace_name, count(1) from dba_Segments group by tablespace_name;
    TABLESPACE_NAME COUNT(1)
    ------------------------------ ----------
    APPLSYSD 1
    APPLSYSX 1
    COMD 26
    COMX 47
    CTXD 77
    EDWREP 88
    EDWREPX 31
    PVD 1
    PVX 1

    SQL> select segment_name, segment_type from dba_segments
    2* where tablespace_name='APPLSYSD'
    SEGMENT_NA SEGMENT_TYPE
    ---------- ------------------
    20.42 SPACE HEADER

    Cause
    *******

    One of the circumstances under which a 'SPACE HEADER' segment gets created is if a 'dictionary managed' tablespace is migrated to 'locally managed' (see dbms_space_admin.tablespace_migrate_to_local()).

    The space header segment contains the extent bitmap and is allocated during the migration of the tablespace. Since there is no reserved space after the file header (as there would be in a tablespace created as locally managed), the bitmap segment is allocated somewhere in the "data" area of the datafile. During its creation the segment picks up some of the storage attributes (e.g. MAXEXTENTS) from the default storage clause of the tablespace. Once the segment has been created it can neither be dropped nor changed.

    Fix
    ****

    You can ignore these "left-over" objects. Go ahead and drop the old tablespaces.
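    A minimal sketch of the sanity check and cleanup, assuming the old tablespace is called APPLSYSD (substitute your own tablespace names):

    -- Confirm that nothing but the migration's SPACE HEADER segment remains.
    SELECT segment_name, segment_type
    FROM   dba_segments
    WHERE  tablespace_name = 'APPLSYSD'
    AND    segment_type <> 'SPACE HEADER';

    -- If that returns no rows, the old tablespace can be dropped.
    DROP TABLESPACE applsysd INCLUDING CONTENTS AND DATAFILES;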

    How to Purge the RECYCLEBIN in Oracle 10g

    Madan Mohan - Tue, 2008-03-25 21:49
    THE RECYCLE BIN
    *****************


    The Recycle Bin is a virtual container where all dropped objects reside. Underneath the covers, the objects occupy the same space as when they were created. If table EMP was created in the USERS tablespace, the dropped table EMP remains in the USERS tablespace. Dropped tables and any associated objects such as indexes, constraints, nested tables, and other dependent objects are not moved; they are simply renamed with a prefix of BIN$. You can continue to access the data in a dropped table or even use Flashback Query against it. Each user has the same rights and privileges on a Recycle Bin object as before it was dropped. You can view your dropped tables by querying the new RECYCLEBIN view. Objects in the Recycle Bin remain in the database until the owner of the dropped objects decides to permanently remove them using the new PURGE command. Recycle Bin objects are counted against a user's quota, but Flashback Drop is otherwise a non-intrusive feature. Objects in the Recycle Bin will be automatically purged by the space reclamation process if

    o A user creates a new table or adds data that causes their quota to be exceeded.
    o The tablespace needs to extend its file size to accommodate create/insert operations.


    There are no issues with dropping the table, behaviour-wise. It is the same as in 8i/9i: the space is not released immediately and is still accounted for within the same tablespace/schema after the drop.

    When we drop a tablespace or a user there is NO recycling of the objects.

    o Recyclebin does not work for SYS objects

    Checking the RECYCLEBIN Objects
    *******************************


    SELECT object_name,original_name,operation,type,dropscn,droptime FROM user_recyclebin;

    SELECT owner,original_name,operation,type FROM dba_recyclebin;


    Purging the Recyclebin
    **************************

    Subject: 10g Recyclebin Features And How To Disable It (_recyclebin)
    Doc ID: Note:265253.1  Type: BULLETIN

    Applies to: Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.0
    Information in this document applies to any platform.
    Purpose: This bulletin illustrates the new recyclebin functionality provided with the 10g database.

    Scope and Application: Can be used by Oracle Support Analysts and DBAs.

    ABOUT THE 10g RECYCLEBIN
    In order to have FLASHBACK DROP functionality, a recyclebin is provided to every Oracle user.

    SQL> desc recyclebin
    Name Null? Type
    ----------------------------------------- -------- ------------
    OBJECT_NAME NOT NULL VARCHAR2(30)
    ORIGINAL_NAME VARCHAR2(32)
    OPERATION VARCHAR2(9)
    TYPE VARCHAR2(25)
    TS_NAME VARCHAR2(30)
    CREATETIME VARCHAR2(19)
    DROPTIME VARCHAR2(19)
    DROPSCN NUMBER
    PARTITION_NAME VARCHAR2(32)
    CAN_UNDROP VARCHAR2(3)
    CAN_PURGE VARCHAR2(3)
    RELATED NOT NULL NUMBER
    BASE_OBJECT NOT NULL NUMBER
    PURGE_OBJECT NOT NULL NUMBER
    SPACE NUMBER

    The recyclebin is a public synonym and it is based on the view user_recyclebin, which in turn is based on the sys.recyclebin$ table.

    Related recyclebin objects:

    SQL> SELECT SUBSTR(object_name,1,50),object_type,owner
    FROM dba_objects
    WHERE object_name LIKE '%RECYCLEBIN%';
    SUBSTR(OBJECT_NAME,1,50) OBJECT_TYPE OWNER
    --------------------------- ------------------- ----------
    RECYCLEBIN$ TABLE SYS
    RECYCLEBIN$_OBJ INDEX SYS
    RECYCLEBIN$_TS INDEX SYS
    RECYCLEBIN$_OWNER INDEX SYS
    USER_RECYCLEBIN VIEW SYS
    USER_RECYCLEBIN SYNONYM PUBLIC
    RECYCLEBIN SYNONYM PUBLIC
    DBA_RECYCLEBIN VIEW SYS
    DBA_RECYCLEBIN SYNONYM PUBLIC

    9 rows selected.

    EXAMPLE
    SQL> SELECT * FROM v$version;
    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bi
    PL/SQL Release 10.1.0.2.0 - Production
    CORE 10.1.0.2.0 Production
    TNS for Solaris: Version 10.1.0.2.0 - Production
    NLSRTL Version 10.1.0.2.0 - Production

    SQL> sho user
    USER is "BH"

    SQL> SELECT object_name,original_name,operation,type,dropscn,droptime
    2 FROM user_recyclebin
    3 /
    no rows selected

    SQL> CREATE TABLE t1(a NUMBER);
    Table created.

    SQL> DROP TABLE t1;
    Table dropped.

    SQL> SELECT object_name,original_name,operation,type,dropscn,droptime
    2 FROM user_recyclebin
    3 /
    OBJECT_NAME ORIGINAL_NAME OPERATION TYPE DROPSCN DROPTIME
    ------------------------------ -------------------------------- --------- ------------------------- ---------- -------------------
    BIN$1Unhj5+DSHDgNAgAIKds8A==$0 T1 DROP TABLE 8.1832E+12 2004-03-10:11:03:49

    SQL> sho user
    USER is "SYS"

    SQL> SELECT owner,original_name,operation,type
    2 FROM dba_recyclebin
    3 /

    OWNER ORIGINAL_NAME OPERATION TYPE
    ------------------------------ -------------------------------- --------- ------
    BH T1 DROP TABLE

    We can also create a new table with the same name at this point.
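    As an aside, before purging, the dropped table can also be brought back with Flashback Drop; a minimal sketch (the RENAME TO clause is only needed if a new T1 has been created in the meantime):

    SQL> FLASHBACK TABLE t1 TO BEFORE DROP RENAME TO t1_restored;
    Flashback complete.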

    NOTE:
    Pre-10.1.0.3, the recycled objects could also be viewed in user_tables and dba_tables.
    The fix for Bug 3255906 changed this behaviour to maintain compatibility with 9i.



    PURGING
    ********


    In order to completely remove the table from the database and release the space, the new PURGE command is used.

    From BH user:
    SQL> PURGE TABLE t1;
    Table purged.

    OR

    SQL> PURGE TABLE "BIN$1UtrT/b1ScbgNAgAIKds8A==$0";
    Table purged.

    From SYSDBA user:
    SQL> SELECT owner,original_name,operation,type
    2 FROM dba_recyclebin
    3 /
    no rows selected

    From BH user:
    SQL> SHOW recyclebin
    SQL>

    There are various ways to PURGE objects:

    PURGE TABLE t1;
    PURGE INDEX ind1;
    PURGE recyclebin; (Purge all objects in Recyclebin)
    PURGE dba_recyclebin; (Purge all objects / only SYSDBA can)
    PURGE TABLESPACE users; (Purge all objects of the tablespace)
    PURGE TABLESPACE users USER bh; (Purge all objects of the tablespace belonging to BH)

    An object can be PURGEd by its owner, by a user with the SYSDBA privilege, or by a user with the appropriate DROP ANY ... system privilege for the type of object to be purged.


    DISABLING RECYCLEBIN
    **********************


    We can DROP and PURGE a table with a single command

    From BH user:
    SQL> DROP TABLE t1 PURGE;
    Table dropped.

    SQL> SELECT *
    2 FROM recyclebin
    3 /
    no rows selected

    There is no need to PURGE.

    On 10gR1, in case we want to disable the behavior of recycling, there is an underscore parameter
    "_recyclebin" which defaults to TRUE. We can disable recyclebin by setting it to FALSE.

    From SYSDBA user:
    SQL> SELECT a.ksppinm, b.ksppstvl, b.ksppstdf
    FROM x$ksppi a, x$ksppcv b
    WHERE a.indx = b.indx
    AND a.ksppinm like '%recycle%'
    ORDER BY a.ksppinm
    /
    Parameter Value Default?
    ---------------------------- ---------------------------------------- --------
    _recyclebin TRUE TRUE

    From BH user:
    SQL> CREATE TABLE t1(a NUMBER);
    Table created.

    SQL> DROP TABLE t1;
    Table dropped.

    SQL> SELECT original_name
    FROM user_recyclebin;
    ORIGINAL_NAME
    --------------
    T1

    From SYSDBA user:
    SQL> ALTER SYSTEM SET "_recyclebin"=FALSE SCOPE = BOTH;
    System altered.

    SQL> SELECT a.ksppinm, b.ksppstvl, b.ksppstdf
    FROM x$ksppi a, x$ksppcv b
    WHERE a.indx = b.indx
    AND a.ksppinm like '%recycle%'
    ORDER BY a.ksppinm
    /
    Parameter Value Default?
    ---------------------------- ---------------------------------------- --------
    _recyclebin FALSE TRUE

    From BH user:
    SQL> CREATE TABLE t1(a NUMBER);
    Table created.

    SQL> DROP TABLE t1;
    Table dropped.

    SQL> SELECT original_name
    FROM user_recyclebin;
    no rows selected

    There is no need to PURGE.

    As with any other underscore parameter, setting this parameter is not recommended unless
    advised by Oracle Support Services.

    On 10gR2, RECYCLEBIN is an initialization parameter and by default it is ON.
    We can disable the recyclebin by using the following commands:

    SQL> ALTER SESSION SET recyclebin = OFF;
    SQL> ALTER SYSTEM SET recyclebin = OFF;

    Objects dropped while the recyclebin was ON will remain in the recyclebin even if we later set the recyclebin parameter to OFF.
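    Since those objects keep counting against the owners' quotas, a quick way to see how much space the recyclebin is still holding per owner (the SPACE column is reported in database blocks):

    SQL> SELECT owner, COUNT(*) objects, SUM(space) blocks_held
      2  FROM dba_recyclebin
      3  GROUP BY owner;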

    A reading list for our developers

    Rob Baillie - Tue, 2008-03-25 12:54
    An idea I'm thinking of trying to get implemented at our place is a required reading list for all our developers. A collection of books that will improve the way developers think about their code, and the ways in which they solve problems. The company would buy the books as gifts to the employees, maybe one or two every three months.

    Some questions though:

    • Is it fair for a company to expect its employees to read educational material out of hours?

    Conversely:
    • Is it fair for an employee to expect to be moved forward in their career without a little bit of personal development outside the office?


    If anyone has any books out there that they'd recommend - please let me know. Otherwise, here's my initial ideas - the first three would be in your welcome pack:

    Update: Gary Myers came up with a good point: any book should really be readable on public transport. That probably rules out Code Complete (although I read it on the tube, I can see that it's a little tricky), but Design Patterns and Refactoring to Patterns are small enough, I reckon.

    Unfortunately, Code Complete is a really good book that gives a lot of great, simple, valuable advice. Does anyone out there have any other suggestions for similar books?

    Update 2: Andy Beacock reminded me of Fowler's Refactoring, which really should also make the list.

    Update 3: The development team have bought into the idea and the boss has been asked. In fact, I'm pretty pleased with the enthusiasm shown by the team for the idea. I can't see the boss turning it down. Interestingly though, someone suggested that Code Complete go onto the list...

    In this order:


    Ruled out because of their size:

    Forgot your Password?

    Aviad Elbaz - Tue, 2008-03-25 02:58

    Almost every website that uses a username and password has "forgot password" functionality to let users retrieve their passwords, and so does the Oracle E-Business Suite.

    This is very useful functionality, since it reduces the number of SRs opened with the helpdesk team about login problems, and it keeps customers happy: they can get a new password in a very short time with no helpdesk intervention.

    The implementation of this functionality is very simple and easy.
    To enable it you should:

    1. Set the profile "Local Login Mask" to the current value plus 8 (e.g. if the current value is 32, set it to 40)
    2. Bounce Apache

    The "Local Login Mask" profile is used to customize some attributes of the login page (AppsLocalLogin.jsp); one of them is the "Forgot your password" link.
    You should set the value of this profile to the sum of the mask values of all the attributes you are interested in.
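    To check the current value before adding 8, you can query the profile from SQL*Plus. A minimal sketch, assuming the profile's internal name is FND_SSO_LOCAL_LOGIN_MASK (verify the internal name in your instance):

    SQL> SELECT fnd_profile.value('FND_SSO_LOCAL_LOGIN_MASK') local_login_mask FROM dual;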

    The full attributes list is:

    Attribute                   Mask Value   Binary Value
    --------------------------  ----------   ------------
    Hint for Username               01         00000001
    Hint for Password               02         00000010
    Cancel button                   04         00000100
    Forgot Password link            08         00001000
    Registration link               16         00010000
    Language Images                 32         00100000
    Corporate Policy Message        64         01000000

    Setting the Forgot Password link mask value will add the following TIP to the login page:

    The reset password process:

    - Clicking the "Forgot your password?" link brings up a prompt for the username whose password should be reset.

    - After typing the username and clicking OK, a new workflow process is started (item type UMXUPWD) and you'll get a confirmation message:

    - Shortly afterwards you'll get an email, "Password reset required approval" (it expires after 4 hours).

    - Click on "Approve" to confirm you are interested in a new password.

    - Shortly you'll get an email with a temporary password which you have to change on first login.

    A very nice and easy-to-implement piece of functionality, which could be very beneficial.

    Related Note 399766.1 - Reset Password Functionality FAQ

    You are welcome to leave a comment

    Aviad

    Categories: APPS Blogs

    Which Temporary Tablespace is used for sorts?

    Pawel Barut - Mon, 2008-03-24 12:34
    Written by Paweł Barut
    This time I will write about "Which Temporary Tablespace is used for sorts?". I had not wondered about this much, as usually there is only one temporary tablespace in a DB. Let's assume the following situation:
    • User A
      • Assigned to Temporary Tablespace TEMP_A
      • has table TA
    • User B
      • Assigned to Temporary Tablespace TEMP_B
      • has table TB
      • owns procedure PB (definer rights)
    • Both users have access rights to all the above objects (SELECT on the tables and EXECUTE on the procedure)
    So let's discuss some situations:
    1. User A runs a query on table TA or TB (or any other) - when a disk sort is needed, tablespace TEMP_A is used
    2. User A executes procedure PB. Procedure PB opens a cursor on table TB (or TA, or any other). If a disk sort is required, tablespace TEMP_B is used.
    For me it was a bit surprising, especially since I did not find a description of this behaviour anywhere in the Oracle documentation:
    TEMPORARY TABLESPACE Clause
    Specify the tablespace or tablespace group for the user's temporary segments.
    I was expecting that all sort segments would be created in the tablespace assigned to that user; I was hoping to solve one of my issues that way. But it turned out that the sort segment is created as user B, because procedure PB runs with user B's (definer) rights. It is reasonable, as it is consistent with how access to objects is granted, and temporary segments are treated the same way as permanent ones. On the other hand, the select runs on behalf of user A - shouldn't TEMP_A be used in all cases? What is your opinion on that?
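    For anyone who wants to reproduce this, a minimal sketch of the setup and of how to watch where the sort segment lands; the tablespace, file, and user names are illustrative assumptions:

    -- As a DBA: two users with different temporary tablespaces.
    CREATE TEMPORARY TABLESPACE temp_a TEMPFILE 'temp_a01.dbf' SIZE 100M;
    CREATE TEMPORARY TABLESPACE temp_b TEMPFILE 'temp_b01.dbf' SIZE 100M;
    CREATE USER a IDENTIFIED BY a TEMPORARY TABLESPACE temp_a;
    CREATE USER b IDENTIFIED BY b TEMPORARY TABLESPACE temp_b;
    -- ...then create TA (owned by A), TB (owned by B), grant SELECT both ways,
    -- and create a definer-rights procedure PB in schema B that sorts a large result.

    -- While user A runs the query directly and then via B.PB, watch which
    -- temporary tablespace the sort segment is allocated in:
    SELECT s.username, u.tablespace, u.segtype, u.blocks
    FROM   v$tempseg_usage u, v$session s
    WHERE  u.session_addr = s.saddr;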

    Cheers Paweł

    --
    Related Articles on Paweł Barut blog:
    Categories: Development

    PL/SQL optimisation in 10g

    Adrian Billington - Sat, 2008-03-22 03:00
    Oracle 10g's compiler optimisation for faster PL/SQL, with a new section on optimisation bugs. November 2004 (updated March 2008)
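    As a quick reminder of the knob the article covers, the optimisation level can be set per session and checked per unit. A minimal sketch; MY_PROC is an assumed example unit:

    -- 2 is the default (and maximum) level in 10g; 0 and 1 progressively disable optimisation.
    ALTER SESSION SET plsql_optimize_level = 2;
    ALTER PROCEDURE my_proc COMPILE;

    -- See which level each stored unit was compiled with.
    SELECT name, type, plsql_optimize_level
    FROM   user_plsql_object_settings
    ORDER  BY name;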

    New Stuff (1)

    Carl Backstrom - Thu, 2008-03-20 22:15
    So I'm going through my example application updating the different examples with new APEX 3.1 features; as I work through them I'll be posting examples of the changes.

    So the first one is the changes to the basic example illustrating Ajax using an OnDemand Process.

    Javascript


    function f_TestOnDemand(){
      // Call the on-demand application process SimpleExample via the htmldb_Get Ajax wrapper.
      var get = new htmldb_Get(null,$v('pFlowId'),'APPLICATION_PROCESS=SimpleExample',0);
      // x01 is a temporary global variable that carries the value of P11_TEST to the process.
      get.addParam('x01',$v('P11_TEST'));
      gReturn = get.get();
      get = null;
      gReturn = (!gReturn)?'null':gReturn;
      // Write whatever the process printed back into the P11_TEXT_DROP item.
      $s('P11_TEXT_DROP',gReturn);
    }



    One of the biggest changes, and one of my favorites, is that in older versions of APEX, to pass a value to an OnDemand Process you would in most cases need an application-level item that was only used for that purpose; this is no longer needed.

    This new functionality is illustrated in the get.addParam('x01', ...) call, where the global variable x01 gets the value of the textarea to post to the process. There are 10 global variables, x01 - x10, and a few others, so you can pass around quite a few values at once; more in a later post.

    There are a couple of calls to $v('ITEM_NAME'), which, given an item name, returns the value of that item.

    And $s('ITEM_NAME','Some Value'), which, given an item name and a value, sets the value of that item.
    * These both work with most of the basic item types and will be extended to support all item types.

    OnDemand Process (SimpleExample)

    declare
    l_value varchar2(4000);
    begin
    l_value := wwv_flow.g_x01;
    htp.p('');
    htp.p('This was just put into one of the global temporary values.');
    htp.p(''||l_value||'');
    htp.p('');
    end;


    In the OnDemand Process, the line l_value := wwv_flow.g_x01; retrieves the value of the global variable. The global variables are only available for that Ajax call and do not get saved into session state.

    Simple changes to be sure, but they allow for much more generic JavaScript and easier integration across different applications.

    A Day On The Road (To Hell)

    Moans Nogood - Thu, 2008-03-20 17:52
    My ringtone on my mobile is currently Highway To Hell by AC/DC, but I thought Chris Rea's The Road to Hell was more appropriate as a title today. I hope you'll understand why after reading this.

    I've just come home from 10 days in a Danish town called Horsens doing a reality TV show called "The Secret Millionaire", which has run for two seasons in England.

    Now they've done 11 programs in Denmark. Mine will probably be shown in the fall of this year.

    Basically, a TV crew of three followed me all day long while I (complete with a cover story) visited places where good souls help out people in need. At night I stayed in a borrowed, Turkish immigrant apartment.

    At the end of the 10 days I put on one of my Armani suits and told the good people that in fact I was not that much down and out, and that I'd like to donate some of my own money to their cause (a total of 250.000 Danish kroner, to be exact).

    In fact I'm not a millionaire in the sense that I can take out that amount from my bank account at all. Instead, we had to take a loan in our house, which my wife Anette was OK with (and thank you so much for that!).

    The 10 other folks look a LOT more like millionaires than me, let me tell you that.

    Folks like the guy behind JustEat, a guy with his own investment bank in London, a big IT guy called Asger Jensby, and so on and so forth. Some of them with private chauffeurs; one lives in a French castle, for crying out loud. You know the type.

    The filming ended last Thursday - a week ago - and it was a good day. Lots of happiness, tears, and much more. And of course I threw a big party with more than 200 participants at the end of it.

    Fantastic. But perhaps the most emotionally draining thing I've tried.

    Then last Friday (the day after) after spending 30 minutes in my house while re-packing and re-grouping, I found myself with my co-director Lasse racing snow scooters and drilling holes through 70 cm ice on frozen lakes in Northern Iceland with hard, Icelandic men around me.

    Talk of a change of scene within 24 hours.

    A couple of rough days here in Denmark, and it was time to relax on this beautiful Easter Thursday...

    The plan was to eat brunch with my friend Søren (who buys breweries for Carlsberg) and his family. Well, I made it, but late of course, due to all sorts of things.

    Then I left around 1400 hours in order to drive back to my town Maaloev and pick up three kids and then take them to a football match between Brondby and FC Midtjylland due to take place at 1500 hours. Running a bit late...

    I had 10 free tickets from Brondby because I tried to help them with a social project called "Fra Bænken Til Banen" (from the bench to the field) where they try to get kids into jobs (they've been so successful that they're now starting to find jobs for the kids' fathers, too).

    But nobody wanted my free tickets, so I ended up throwing six of them away. Bah.

    I was running a bit late for the game. Perhaps that's why I was driving too fast on the street where the Police was checking speed.

    I was charged with driving 97 where 60 was the limit. Ouch.

    That means a 2,500 kroner fine (that's OK) and I have to take a new driving test (which costs a lot more and takes a lot of time). Hmm.

    But hey, I get to learn about all the new street signs and rules that have appeared since I learned to drive back in 1982. Might even get one of those new, fancy credit-card style driving licenses.

    While I was standing there talking to the cops, another car was stopped for speeding in the opposite direction of me. Turned out he had just been at Brondby Stadium, but had discovered that the game had been moved from 1500 hours to 1800 hours due to demands from Viasat television, since they had another important sport thing to cover today, too.

    So evidently, one should check game times and not rely on whatever is written on the tickets. Anyway, that's how I discovered that it wasn't necessary for me to drive faster today :-)).

    So we drove back, and then we went to Brondby and saw a fine match (2-1 to Brondby) in rain and snow, then back to Maaloev with the kids and then back into Copenhagen to pick up my girl Nathalie (9 years of age) who had stayed with Søren to play with his daughter Louise.

    Shortly before arriving in the street where Søren lives, I hit something with my right front tire and all air went out. Then I spent half an hour in heavy rain and sleet trying to change the #%&/Q tire.

    Due to very slippery cobblestones, the jack kept slipping and the car kept crashing down. That happened four times, the last time with the tire only halfway off, and that's when I called for professional help (and a professional jack).

    They came, tire was changed, we drove home.

    All through this I was looking forward to a nice evening with my wife Anette (whom I hadn't seen too much of in the last couple of weeks) and some cheese and champagne that she had promised me.

    Like in 'Driving home for Christmas'. Chris Rea. Can't wait to see those faces. Oh, I'm driving down that lane.

    It was late when I finally made it home, and Anette had had to go to sleep, of course. She had been up early and had been taking care of little Viktor all day.

    Bummer. But there's always email and blogging for you, then.

    But I just got a text message from my oldest daughter Christine (18 years old), who's on some kind of survival training thing with the scouts.

    She wrote: "By the way: I love you, dad. Have I ever told you that?".

    No, you haven't. And you never needed to. But it was the finest of timings when you did :-)).

    I've just poured myself a large Bowmore 12 year single malt (Enigma edition).

    Here's to life.

    Management Agent Install problem on AIX 5.3

    Mark Vakoc - Thu, 2008-03-20 11:23
    An issue has been discovered when installing the management agent on an AIX machine. This is a known issue with running an InstallShield Multi-Platform installer on an AIX 5.3 machine.

    The process may create a core dump when executing the installation of the agent on your AIX 5.3 machine. Here is the important information from the core dump of the Java process:
    SIGSEGV received at 0x362220e4 in /opt/JDEdwards/Install/ismp001/libaixppk.so. Processing terminated.

    If you see the above in the dump file, use the link below to solve the problem. Follow the instructions closely and this should allow you to install the agent. It is a fix to libaixppk.so for ISMP. Here is the link:
    http://knowledge.macrovision.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=Q111262&sliceId=

    SQL Features Tutorials: Materialized Views (New SQL Snippets Tutorial)

    Joe Fuda - Wed, 2008-03-19 18:00
    Originally introduced in 1992 as "Snapshots" in Oracle 7, Materialized Views are now used in ways far removed from their original raison d'être, replication. Database programmers use them for data warehousing, denormalization, and even validation. Despite their versatility though, materialized views remain a mystery to some programmers due to their complexity. The new SQL Snippets tutorial "Materialized Views" strips away the mystery with its simple test cases, step-by-step exploration of the basics, common programming pitfall alerts, and a useful utility called MY_MV_CAPABILITIES which analyzes and reports a materialized view's capabilities in a single step.
    ...
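    For readers new to the topic, a minimal sketch of the data-warehousing style of use the tutorial covers; the ORDERS table and its columns are illustrative assumptions:

    -- A materialized view log lets Oracle fast-refresh the MV from changed rows only.
    CREATE MATERIALIZED VIEW LOG ON orders
      WITH SEQUENCE, ROWID (customer_id, amount) INCLUDING NEW VALUES;

    -- An aggregate MV kept in sync on every commit. COUNT(*) and COUNT(amount)
    -- are required alongside SUM(amount) for fast refresh to be possible.
    CREATE MATERIALIZED VIEW order_totals_mv
      BUILD IMMEDIATE
      REFRESH FAST ON COMMIT
    AS
    SELECT customer_id,
           SUM(amount)   AS total_amount,
           COUNT(amount) AS cnt_amount,
           COUNT(*)      AS cnt_rows
    FROM   orders
    GROUP  BY customer_id;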

    Service Oriented approach for ERP Integration

    Peeyush Tugnawat - Wed, 2008-03-19 13:44


    Business Requirements: Seamless Information

    Today businesses are changing faster than ever. Business models are evolving and being transformed every day. There are greater expectations in terms of innovation, business performance, results, and effective use of resources. All this is putting greater pressure on the organizations to find new ways to streamline their processes and share information effectively and seamlessly. In other words businesses today need to be more flexible and responsive than ever before to their customers, suppliers, partners, vendors, and most importantly changing business models.
    As ERP systems are at the center of the enterprise business model, they need to provide a way for business processes to participate and collaborate with external applications, partners, and information. Enterprise applications are moving from being monolithic and self-contained to being more and more flexible and collaborative.

    What's Wrong with Traditional EAI?

    Traditional integration focused on solving some of the above-mentioned requirements by enabling data flow between silos and external applications. It helped to a certain extent, but over time the complexity of the integrations started posing serious concerns. The proprietary technology used for integrating applications created lock-in, and it was not easy to orchestrate business processes between the disparate applications using traditional EAI.
    What is Service Oriented Architecture?
    SOA, as it relates to the software paradigm, is an agile architecture approach based on the service-oriented principles of composition, abstraction, loose coupling, discoverability, and amalgamation. SOA inherently enables scalability, evolution of services, interoperability, re-usability, and modularity.

    Do we need SOA based Integrations?

    Using SOA principles while designing application integrations results in SOA-based application integration. Simplicity is desirable in the traditionally complex world of integrations, and common-sense approaches such as service interactions and amalgamation supported by open standards should be enabled. SOA is needed for the following main reasons:

  • To provide seamless agility to business
  • To improve business process visibility
  • To simplify the current rigid and complex state of IT
  • To enhance efficiency and provide cost-effectiveness
  • To enable re-usability factor
  • To provide better quality of service

    But you are still doing integration?

    Yes. Besides the obvious advantages enumerated by everyone, the key advantage of this approach is that you are contributing to the future SOA of your enterprise. Integrations built with a service-oriented approach are loosely coupled with the infrastructure components and are more flexible and refactorable. Logical end-points for integration services provide far more decoupling and are implementation agnostic. The components and integration services can later be reused to create a composite application or business process. The benefits of adopting SOA grow with time: once you have these reusable components from across applications, application modules, and other enterprise software components, creating a new application is relatively easy, and that's when the full potential of SOA is realized.

    Design your SOA Integrations

    Most of the design depends on the requirements. But before applying the very same approach you took for your earlier integration project, keep in mind that the goal here is to come up with integration components that are designed for interoperability, re-usability, and modularity. The key to designing your SOA integrations is remembering that they are SOA based: always try to apply the SOA principles of composition, abstraction, loose coupling, discoverability, and amalgamation to your services and integrations.

    Architecture Perspective: Guidelines for SOA based Integrations

    Based on my experience, considering the following guidelines can help you realize the SOA vision for integrations with EBS. Most of the guidelines are generic and can be, and have been, applied to other ERP integrations as well.
    Use standards: Using standards-based technologies for your service-oriented integrations helps eliminate lock-in to particular products and vendors, which is one of the biggest challenges with traditional EAI. It also eases the evolution, enhancement, and composition of the business processes that may use the integration services.

    Classify Integration Requirements:
    Categorize requirements into data integration and business process integration. Identify message exchange patterns and use ESB functions (transportation, mediation, and routing) to model data integration processes. Use BPEL for modeling processes that involve anything more than what can be satisfied with ESB functions. Many times it is hard to fit a particular integration process into one of the two buckets; in such cases it is a good idea to use a layered approach, with ESB functions for the data integration and BPEL for future extensions to the business integration process.

    Introduce Extensibility:
    To deliver on the high hopes for an SOA-based integration architecture, it is very important to do some forward thinking about the desired flexibility and agility. Think hard, early on, about ways to introduce extensibility and forward compatibility into the architecture and all its components, including individual integrations and messages.

    Service Enable Enterprise Application Functions:
    Enterprise applications have many business functions and technology components that are application specific and depend on proprietary technologies. These components or functions should be service enabled before they can participate in the service oriented integration architecture. Using resource adapters is a way of connecting and interacting with the application specific components. It is important that the resource adapter is implemented using industry standards such as J2CA, WSIF, and WSDL and can provide a web service interface to the application specific functions.

    Inject Resiliency:
    Build resiliency into the individual integration processes. This is easy to miss, even with the best architecture in place. Always think about the "what if" scenarios and try to inject process-level resiliency into the individual integration processes.

    Exception Handling:
    Despite all the forward thinking, things might and will go wrong. Define a reusable, extensible, and agile approach to handling process-level exceptions and other unknown exceptions. A common exception-handler service with an extensible interface can provide that flexibility, re-usability, and extensibility.

    Simplify Support Functions:
    Anyone who has worked with application integration can relate to the great deal of time and energy wasted when troubleshooting integration issues. With asynchronous messaging and multiple services, the idea should be to ease the pains of traditional EAI support functions. This can be done by thinking ahead about how support functions can be given better ways to see what is going on and to take action. Notifications and human workflow are some of the ways to empower your support team.

    Human interaction and intervention:
    Business processes inevitably involve human interaction in some form or other. If your integration process involves such role-based people interaction, plan ahead and use standards-based mechanisms for human workflows.

    Separate Business Rules:
    The integration process is probably not a good place to embed and hard-code business rules. Identify the rules and provide loose coupling between your process and the rules. This gives you the flexibility to change business rules dynamically without modifying or redeploying your integration services or processes.

    Business Process Visibility:
    Plan for providing visibility into your SOA integration or business process. This is very important because enterprises today have heterogeneous systems and applications, and with integrations spanning multiple systems it becomes very hard to have visibility at run time. Users (IT and business) should be able to monitor and have visibility into your business processes and integrations.

    Service Composition:
    Provide the capability to deliver business functionality that is composed of disparate and/or independent services. The composite solution may cater to an integration scenario or to a new future business process. Service Component Architecture (SCA) is the relevant standard that addresses service composition; it provides the specifications describing the model for building applications and systems using a Service-Oriented Architecture (SOA).

    SOA Governance:
    In simple terms, plan for the capability to manage and apply policies for the services within the service portfolio of your integration services. This is critical for SOA and needs to be planned well to ensure better management and control of services.

    Conclusion

    A service-oriented approach to enterprise integration offers tremendous advantages over traditional EAI. Enterprise integrations can be turned into reusable, implementation-agnostic services by applying the basic principles of flexibility, agility, and extensibility in all the components of the service-oriented integration architecture.


    Tip: Using a Non-Standard Port for the Collaborator Database

    Oracle EPM Smart Space - Tue, 2008-03-18 17:04
    In Smart Space 9.3.1, one of the common “bumps” in the installation road is when you try to use a non-standard port for the Collaborator database. This is because the configuration utility does not properly update the wildfire.xml file for this situation. To remedy this mistake, simply find the wildfire.xml file and look for the following line:

    jdbc:jtds:sqlserver://serverName/databaseName;appName=jive

    Change the above line by adding the port number (e.g. 14330) after the serverName, like so:

    jdbc:jtds:sqlserver://serverName:14330/databaseName;appName=jive
    Categories: Development

    Forces for Good in the Universe

    Mary Ann Davidson - Tue, 2008-03-18 14:12

    Between prime time television and the newspapers, the average person could be forgiven for thinking that most of life in America is sordid, self-serving and sensationalistic. If you go by news and TV, businessmen are always greedy exploiters of the poor/despoilers of the environment, veterans are always crazed gunmen, and hardly anybody takes marital vows seriously, if at all.

     

    The negative emphasis of some media is all the more reason to enjoy those who practice excelsior living ("excelsior" is Latin for "higher" or "superior") instead of degradation and debasement.

     

    One such event occurred for me last week when I attended the IT Security Entrepreneur's Forum. A friend of mine is the executive kahuna and founding force for good behind this event (though other organizations sponsor it, like the Department of Homeland Security and the Kaufmann Foundation). It's an opportunity for entrepreneurs in IT security to understand what security challenges the US government faces, and to learn how to work with the government. The topics covered everything from the VCs that have government involvement, like In-Q-Tel, to how to deal with system integrators and procurement programs. The idea was to get entrepreneurs' Cool New Security Ideas in front of people dealing with Large Scale National Security Challenges, for the betterment of all.  (Mahalo nui loa, Robert, for a great event.)

     

    I was reminded several times during the week that there are people who not only want to make the world better, they are committing their lives and fortunes (or at least, investors' fortunes) to doing so. (And, unlike the target of my last entry on Do-Gooderitis, these problems all need solving, badly.)

     

    One of my happy "better world" moments occurred in the discussion of energy security at the Forum. Truthfully, I never thought much about the IT security implications of energy. You can see that protecting information about promising new energy sources, new extraction techniques and technologies would be important. Also (while I do not intend to be polemical or political) it is pretty clear that the extent to which we are dependent on non-US oil supplies does drive our involvement in the Middle East. Ergo, finding alternative sources of energy (and making wise use of the energy we have) has important national security implications. 

     

    We live in a country where we mostly take energy for granted: you plug in your whatever, you get power, no problem. (Though it can be expensive. It's been a cold winter in Idaho and my last two Idaho Power bills have been high enough to make me consider listing them as a dependent on my tax return.) We forget that not everyone lives in a place where there's a plug and ready access to a steady power supply. For example, soldiers and marines in war zones have an unbelievable plethora of electronic gadgets and gizmos on their person, many of which require them to carry God knows how many chargers, not to mention lots of batteries. For them, being able to eliminate unnecessary electronic chargers mean they could fight more nimbly (carrying less weight in their packs), or that they could carry an extra magazine or Ka-Bar instead of a power cord.  Most of us, though not typically getting shot at on business trips, can relate to the annoyance of schlepping a bunch of cords and adapters along wherever we go. I think I carry about four on the average business trip (camera, iPod, computer, cell phone). Probably an extra cord or two to charge things in the car. For weight reasons alone, I'd like to carry fewer chargers (and then I'd have room for more books, instead of the three or four I typically carry on a trip).

     

    Wouldn't it be really great if you could carry one charger that charged all your devices? A charger that would be smart enough to detect when a device is charged and automatically stop sucking power? Also, although I am not always the most ecologically correct person, I hate the idea of throwing more stuff into landfills. It probably comes from having parents who grew up during the Depression: throwing things away that are perfectly good to use again just doesn't sit well with me. One thing, energy efficient, that you can reuse over and over sounds pretty darn good.

     

    There's a company called GreenPlug that would really - is really - making it a better world, because what I just described is the GreenPlug vision. Someday soon, I hope all those electronic gadgets we love to have with us can be GreenPlug-enabled, so we only suck the power we need to charge a device - and no more - and we have one thing that charges all our gadgets instead of rebuying charger after charger after charger. Back to security, I think about "GI Joe" or "Marine Bob" (Robert or Roberta) in the field, who could take five pounds of chargers and batteries out of their packs and replace the weight with more MREs or a couple of spare magazines. (Sometimes better security is as simple as having more firepower than the other guy.)

     

    In the near future, I want to buy my very last power hub/charger/cord/thingy - ever. (Mahalo nui loa, Palani, na honua 'apau.) (Thanks, Frank, for all the world.) Special mahalo for helping the warriors in harm's way, who will one day carry more he mau mea kaua (weapons) and fewer power cords.

     

    Another group out in force at the IT Security Entrepreneur's Event was one of my favorite government organizations, the National Institute of Standards and Technology (NIST). I have been a huge NIST fan for a long time. In fact, the title of this blog came from comments I have made about NIST in the past: "NIST: A Force for Good in the Universe." NIST has a long record of developing standards and benchmarks for things in a highly transparent way. That's their charter. So you think, why give them credit for "just doing their job?" Because of the way they do it, the fact they are so good at it, and the individuals who work there I deal with. (I am still wearing a black armband several years after Ed Roback left NIST to go work at Treasury. I miss him.)

     

    The fact is that industry, despite much posturing, does not always do standards well. Too many times it is Big Companies A and B teaming up against More Big Companies C and D to duel over standards. A couple of disparate standards limp along, things don't work together, the companies involved may never want or work towards a truly independent standard. What they want is a lock-in to "their way or the highway" for competitive advantage. That's business.

     

    There is, however, a public good argument for getting plumbing to work together so we can all have nice hot showers. NIST is in the "getting everyone a nice hot shower" business by working to help create the standards that make public good activities in IT security (among other areas) happen. If standards (true open standards, not "dueling standards") do not happen, what consumers end up with is stuff that has to be spliced together with digital duct tape. Try taking a hot shower with duct taped-together pipes sometime to see how well it works.

     

    We need a truly independent group to do standards well. I realize I am going against the nerdy grain here, but really, most consumers do not care two hoots in hell for "elegant technical solutions" half as much as things that just work together without digital duct tape. NIST's only "dog in the hunt" is to solve a problem well and with broad industry feedback. Their entire MO is to help create standards by working with industry. When they are engaged in standards development, the result is typically really good, because they get great minds working on it and listen to people. What's better than that? NIST's purview also covers technical benchmarks (like security configurations) and there, too, there is a dialogue with industry, instead of a few people locking themselves in an ivory tower and creating drawbridge specs without ever actually using a drawbridge or consulting castle defenders.

     

    NIST does a great job at working with all stakeholders to the point where lots of vendors, including me on behalf of Oracle, are happy to traipse up to the US House of Representatives Science and Technology Committee asking for more money for NIST to continue Doing Good Things.  For all the times when you wonder where your tax dollars are going (and why), when it comes to NIST, they are doing good things with your money and if given more, will do more good things with it.

     

    Both NIST and NSA folks graciously visited Oracle a couple of days before the Forum (as well as participating in the Forum) to talk about SCAP (Security Content Automation Protocol). Our goal for inviting them was for them to explain what issues the Defense Department is trying to address through SCAP and, on the Oracle side, what technology we have that gets at the problem space (with a view towards "can we play /talk/work with SCAP?")  I have - and probably will continue to have - issues with some of the particulars of SCAP. What I don't have an issue with is the problem space. I also appreciate that we had a productive discussion with the experts from NIST (and NSA). Bilateral. Not, "We dreamed this up and we know everything."

     

    (For those who are nerdy enough to know that there is a linkage between Federal Desktop Core Configuration (FDCC) and SCAP, you are probably wondering why I like SCAP and (per last blog entry) am less than thrilled about (some aspects of) FDCC. The issue is that the actual configuration required by FDCC was mandated instead of first being developed in conjunction with industry. Had pretty much any vendor affected by FDCC gotten a chance to comment on the benchmark before it was mandated, lots of issues would have - we think - been clarified. I still do not know what a "desktop" is because there is no definition yet. This is exactly the sort of dialogue NIST fosters and is good at, which is why the technical standards and benchmarks they work on are adoptable and adopted.)

     

    The reason SCAP matters is that the lack of basic "security plumbing" puts all of us at a distinct disadvantage in protecting our systems. Can anybody answer these questions, real time:

     

    Who is on my network?
    What is on my network?
    What is my "mission readiness" (my security configuration, patch level and so on)?
    What is happening that I should be worried about?

     

    You can think of the network as the battlespace (it surely is), and the answers to the above four questions are necessary to give you what the military calls "situational awareness." Nobody has it, and thus the advantage is all to the attackers. SCAP does not address all the above issues, but it does answer questions related to mission readiness (and also "what's on my network?"). Being able to get enough standardization so that you can determine whether your network components are locked down correctly, or which components you have that are subject to a particular vulnerability - in some automated way - would be really useful. Nobody adds any value by manually reading security bulletin FOO and then manually trying to figure out what they have on their network that is subject to the FOO problem. No automated tool does this for everything, or does it well, or works with any other tool someone would use. Which is why everyone is using digital duct tape, with predictable results: advantage to attackers.
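
    To make the plumbing point concrete, here is a toy query - purely illustrative, with hypothetical table and column names, and emphatically not SCAP itself - showing the kind of question that only becomes answerable automatically once everyone names products, versions and configurations the same way:

    -- Hypothetical asset-inventory and advisory tables, for illustration only.
    SELECT a.host_name,
           a.product_name,
           a.product_version
    FROM   network_assets a
           JOIN vulnerability_advisories v
             ON  v.product_name = a.product_name
             AND a.product_version BETWEEN v.affected_from AND v.affected_to
    WHERE  v.advisory_id = 'FOO';

    Without agreed-upon naming, that join is fiction; with it, "what on my network is subject to FOO?" becomes a query instead of a week of manual cross-referencing.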

     

    One-off security products that do pieces of this but don't do it comprehensively are not enough. You need to know "what's my security posture?" real time, so if something is happening that you should be worried about you can "take evasive action" real time (e.g., reset a security parameter or turn off a service). Attacks are real time; defenses need to be real-time, too. 

     

    If there is any worse example of fiddling while Rome burns than people arguing over the elegance of their individual technical solutions instead of trying to make comprehensive, universal situational awareness a reality for everyone's networks, I don't know what it is. (Get over yourselves, people, it's national security.)

     

    So, mahalo nui loa to NIST for - whatever one's individual issues with individual standards - creating not only a dialogue, but a climate for discussion, instead of diktats. And for being a force for good in the universe, especially for DoD. That goodness will trickle down to other communities, I have no doubt of it.

     

    For More Information:

     

    Book of the Week: Lone Survivor by Marcus Luttrell.

     

    It is a source of ineffable sadness and more than a little pique to me that the average American can more readily bring to mind the names of celebutantes or tartlets (sorry, I meant starlets - I think) than the names of the last three recipients of the Congressional Medal of Honor (Paul Smith, Jason Dunham, and Michael Murphy, if you want to know). This book recounts the story of SEAL Team 10's actions in Afghanistan, which led to the deaths of LT Michael Murphy, two others in his squad, and 16 people aboard a helicopter that came to extract Luttrell's SEAL team. Marcus Luttrell was the lone survivor (and recipient of the Navy Cross).

     

    This book should be required reading for anybody who wants to know what real heroism is (hint: it's not the ability to putt, throw or slam dunk). And, in my opinion, there is something wrong when members of the armed forces are more afraid of violating the rules of engagement than they are of the enemy. As Luttrell puts it: "...any government that thinks war is somehow fair and subject to rules like a baseball game probably should not get into one. Because nothing's fair in war, and occasionally the wrong people do get killed."

     

    http://www.amazon.com/Lone-Survivor-Eyewitness-Account-Operation/dp/0316067601/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1205801463&sr=8-1

     

    The citation for Michael Murphy's Medal of Honor:

     

    http://www.history.army.mil/html/moh/afghanistan.html

     

    The citations for Paul Smith's and Jason Dunham's Medals of Honor:

     

    http://www.history.army.mil/html/moh/iraq.html

     

    More on the IT Security Entrepreneur's Forum:

     

    http://www.publicprivatepartnerships.org/

     

    More on GreenPlug ("One Plug, One Planet"):

     

    http://www.greenplug.us/

     

    Marines love their Ka-Bars, and who can blame them?

     

    http://www.geocities.com/heartland/6350/kbar.htm

     

    Unbelievably cool that KGMB9 station in Hawai'i is doing a regular news segment in the Hawaiian language. Maika'i nui loa! (Woo hoo!) 'A'ha'i 'olelo ola (messenger of a living language).

     

    http://kgmb9.com/main/content/view/4738/40/

     




     

    Cloning 11i with 10g database (you may need a patch)

    Fadi Hasweh - Sun, 2008-03-16 10:56
    I faced the following error while configuring the database when cloning Apps 11i with a 10g database. It happens directly after entering the path of the third data_top (by the way, the script will always assume you have four data_tops, even if you have more or fewer):

    StackTrace:
    java.lang.NullPointerException
    at oracle.apps.ad.context.CloneContext.createContextFileForDbhomes(CloneContext.java:2816)
    at oracle.apps.ad.context.CloneContext.getInputFromUsers(CloneContext.java:1485)
    at oracle.apps.ad.context.CloneContext.doClone(CloneContext.java:627)
    at oracle.apps.ad.context.CloneContext.main(CloneContext.java:6085)

    On Metalink the solution is in Note 427981.1, "RC-50004 When Specifying DATA_TOPS While Cloning a 10.2 Database".
    The solution is to apply a patch that brings CloneContext.java to a version above 115.203. The following are possible patches:

    Patch 5473292, Patch 5732291, Patch 5604818, Patch 5456078, Patch 5474116

    Well, I did not do that, because it would mean applying the patch and then recopying the database, which takes time.
    Instead I recreated the database manually using a control file; a rough sketch of that step is below.
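
    The control file recreation looked roughly like this. This is a hedged sketch only: the database name, log groups and sizes, limits, extra datafile names and character set below are illustrative and not taken from my system (generate the real statement on the source with ALTER DATABASE BACKUP CONTROLFILE TO TRACE and edit the paths for the clone); the datafile paths follow the /ora_dev/data layout shown in the error further down.

    STARTUP NOMOUNT;

    CREATE CONTROLFILE REUSE SET DATABASE "DEV" RESETLOGS NOARCHIVELOG
        MAXLOGFILES   16
        MAXLOGMEMBERS 3
        MAXDATAFILES  512
        MAXINSTANCES  8
        MAXLOGHISTORY 292
    LOGFILE
        GROUP 1 '/ora_dev/data/redo01.log' SIZE 100M,
        GROUP 2 '/ora_dev/data/redo02.log' SIZE 100M
    DATAFILE
        '/ora_dev/data/system01.dbf',
        '/ora_dev/data/sysaux01.dbf',
        '/ora_dev/data/undotbs01.dbf'
        -- ...list every datafile of the clone here...
    CHARACTER SET UTF8;

    With the control file recreated, I then tried to open the database: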
    SQL>alter database open resetlogs;
    It failed with the following:
    alter database open resetlogs
    *
    ERROR at line 1:
    ORA-01113: file 1 needs media recovery
    ORA-01110: data file 1: '/ora_dev/data/system01.dbf'
    Even though the database was completely down when I copied it, I still faced this error. The good thing is that two days earlier I had been reading Bas's post: http://basklaassen.blogspot.com/2008/01/recover-database.html
    I followed the recovery steps there, provided the path to my online log files, and the recovery completed successfully; a sketch of the commands is below.
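
    For reference, the recovery itself was roughly the following (a minimal sketch; the redo log path in the comment is illustrative - when SQL*Plus prompts for a log, supply whichever current online log file was copied over with your datafiles):

    SQL> recover database using backup controlfile until cancel;
         -- at the "Specify log" prompt, give the path of the current online
         -- redo log, for example: /ora_dev/data/redo01.log
         -- once media recovery is reported complete:
    SQL> alter database open resetlogs;
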
    At the next downtime for the production database I will definitely apply one of the patches above.

    Hope that helped
    Fadi
