APPS Blogs

OTL - Authorized Delegate

RameshKumar Shanmugam - Sun, 2008-07-06 00:55

Authorized Delegate: This functionality is similar to the Timekeeper functionality but has a few differences:

  • Timekeeper uses the PUI, whereas Authorized Delegate uses Self Service
  • Timekeeper can access the people who are assigned to the Timekeeper group, whereas Authorized Delegate access is controlled by the HR: Security Profile
  • Timekeeper has a seeded responsibility, whereas for Authorized Delegate the user must build a custom responsibility using the seeded menu 'Authorized Delegate Timecard Entry'

This functionality is a good fit when a single person needs to enter time for a small number of employees; if there is a larger number of employees, it is advisable to use Timekeeper.

Try it out

Categories: APPS Blogs

How to execute TKPROF on trace files larger than 2GB? --> Use a pipe

Aviad Elbaz - Tue, 2008-06-24 05:54

Here is a nice trick for working with files larger than 2GB on Unix/Linux using a pipe.

First case - TKPROF

When trying to execute TKPROF on a trace file larger than 2 GB I got this error:

[udump]$ ll test_ora_21769.trc

-rw-r-----  1 oratest dba 2736108204 Jun 23 11:04 test_ora_21769.trc

[udump]$ tkprof test_ora_21769.trc test_ora_21769.out

TKPROF: Release 9.2.0.6.0 - Production on Thu Jun 23 21:05:10 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

could not open trace file test_ora_21769.trc

In order to successfully execute TKPROF on this trace file you can use the mkfifo command to create a named pipe as follows:

  • Open a new Unix/Linux session (1st), change to the directory where the trace file exists and execute:

[udump]$ mkfifo mytracepipe
[udump]$ tkprof mytracepipe test_ora_21769.out

TKPROF: Release 9.2.0.6.0 - Production on Thu Jun 23 21:07:35 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

  • Open another session (2nd), change to the directory where the trace file exists and execute:

[udump]$ cat test_ora_21769.trc > mytracepipe

This way you'll successfully get the output file.

 

Second case - spool

A similar issue with spooling to a file larger than 2GB can be handled the same way.

$ mkfifo myspoolpipe.out

--> Create a new named pipe called 'myspoolpipe.out'

$ dd if=myspoolpipe.out of=aviad.out &

--> Whatever is read from 'myspoolpipe.out' is written to 'aviad.out' (running in the background)

$ sqlplus user/pwd@dbname

SQL*Plus: Release 9.2.0.6.0 - Production on Tue Jun 24 12:05:37 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

SQL> spool myspoolpipe.out

--> Spool to the pipe

SQL> select .....

SQL> spool off
SQL> 5225309+294082 records in
5367174+1 records out

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

[1]+  Done                    dd if=myspoolpipe.out of=aviad.out

$ ls -ltr

prw-r--r--  1 oratest dba          0 Jun 24 12:22 myspoolpipe.out
-rw-r--r--  1 oratest dba 2747993487 Jun 24 12:22 aviad.out

Related Notes:

Note 62427.1 - 2Gb or Not 2Gb - File limits in Oracle
Note 94486.1 - How to Create a SQL*Plus Spool File Larger Than 2 GB on UNIX

Aviad

Categories: APPS Blogs

Mix of Old & New style buttons in OA Framework pages

Aviad Elbaz - Fri, 2008-06-06 08:59

After some heavy patches were applied on our system, we noticed that some buttons in OAF pages look like the old-style gray buttons while the others are the fine new-style yellow buttons.

For example, the "Advanced" button is the old style while all the others are the new style.

Clearing the cache ($COMMON_TOP/_pages) and bouncing Apache didn't solve the problem.

The solution is hiding within jserv.properties:

  1. Edit $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties
  2. Change the following to TRUE:
    wrapper.bin.parameters=-Djava.awt.headless=true
  3. (optional) Clear all content from $OA_HTML/cabo/images/cache (e.g. rm -rf $OA_HTML/cabo/images/cache)
  4. (optional) Clear all content from $COMMON_TOP/_pages
  5. Bounce Apache

And the problem will be resolved...

 


In order to make this change permanent, you should update the Application context file as follows; otherwise the next run of AutoConfig will overwrite your change.

  1. Edit $APPL_TOP/admin/$CONTEXT_NAME.xml
  2. Change the following to:
    <java_awt_headless oa_var="s_java_awt_headless">true</java_awt_headless>
  3. Run AutoConfig on Apps Tier.
  4. Bounce Apache

Related Note: 368188.1 - Buttons Are Not Rendering Correctly In Self Service Framework Pages.

Aviad

Categories: APPS Blogs

OTL Time Keeper

RameshKumar Shanmugam - Sun, 2008-06-01 18:13
There are multiple ways you can enter time into OTL:

  • Self Service time entry - an employee entering his own time
  • Line Manager time entry - a manager entering time for his direct reports
  • Timestore Deposit API - time imported through an interface from a third party system
  • Timekeeper - one person entering time for a group of employees based on the group assigned to them
  • Authorized Delegate - one person entering time for a group of employees based on the security profile attached to the user/responsibility
In this post I am going to explain how to set up the Timekeeper module for entering time for a group of employees.

Before starting the Timekeeper configuration, if you go directly to the OTL Super Timekeeper responsibility and click the Timekeeper Entry function, you will get an error message that clearly tells you which setup steps you need to complete to enable Timekeeper.

The first two steps are to be done in the OTL Application Developer responsibility:
1. Timekeeper Misc Setup Items
2. Timekeeper Layout Attribute

The third step should be done in the OTL Super Timekeeper responsibility:
3. Create the Timekeeper Group

The fourth step should be done in System Administrator; it is set at the user level and only for the super timekeeper:
4. Profile OTL: Allow Change Group Timekeeper

I'll be explaining each of the above steps in detail in my next post
Categories: APPS Blogs

FND_GLOBAL affected by New Global Performance Changes

Aviad Elbaz - Thu, 2008-05-29 04:59

After applying the ATG Rollup 5 patch (or above) we discovered an issue with some of our custom developments.
For some processes we got the following errors:

ORA-20001: Oracle error -20001: ORA-20001: Oracle error -4092: ORA-04092: cannot SET NLS in a trigger
has been detected in fnd_global.set_nls.set_parameter('NLS_LANGUAGE','AMERICAN').
has been detected in fnd_global.set_nls.
ORA-06512: at "APPS.APP_EXCEPTION", line 72
ORA-06512: at "APPS.FND_GLOBAL", line 240
ORA-06512: at "APPS.FND_GLOBAL", line 1410
ORA-06512: at "APPS.FND_GLOBAL", line 1655
ORA-06512: at "APPS.FND_GLOBAL", line 2170
ORA-06512: at "APPS.FND_GLOBAL", line 2312
ORA-06512: at "APPS.FND_GLOBAL", line 2250

and this:

ORA-20001: Oracle error -2074: ORA-02074: cannot SET NLS in a distributed transaction has been
detected in
fnd_global.set_nls.set_paramenters('NLS_LANGUAGE','AMERICAN').

After some debug work we found that this issue happens when executing FND_GLOBAL.apps_initialize more than once within a trigger/via a db link in the same transaction.

According to Note 556391.1 - "ORA-02074: Cannot SET NLS in a Distributed Transaction", this issue is caused by new global performance changes.

Oracle Development said: "Very sorry if the new global performance changes have exposed you to this error, but there is no way we can back out these changes. They are not only complex and wide spread but required to maintain functional performance levels. Using fnd_global to change user/resp context from a trigger is not only not supported it is ill advised."

OK, so we had to find a workaround for this issue, and we found two...

I'll start with a sample of the new behavior of fnd_global to demonstrate the issue and the solutions/workarounds will come right after.

SQL> create table test1 (a number, b number);
Table created

SQL> insert into test1 (a) values (1001);
1 row inserted

SQL> insert into test1 (a) values (1002);
1 row inserted

SQL> commit;
Commit complete

SQL> create or replace trigger test1_trg_bi
  2  after update on test1
  3  for each row
  4  begin
  5       fnd_global.APPS_INITIALIZE(:new.a,1,1);
  6       -- fnd_request.submit_request...
  7       -- ....
  8       -- ....
  9  end;
10  /
Trigger created

SQL> select fnd_global.user_id from dual; 
   USER_ID
----------
        -1

SQL> update test1 set b=1101 where a=1001;
1 row updated

SQL> select fnd_global.user_id from dual; 
   USER_ID
----------
      1001

SQL> update test1 set b=1102 where a=1002;

update test1 set b=1102 where a=1002

ORA-20001: Oracle error -20001: ORA-20001: Oracle error -4092: ORA-04092: cannot SET NLS in a trigger
has been detected in fnd_global.set_nls.set_parameter('NLS_LANGUAGE','AMERICAN').
has been detected in fnd_global.set_nls.
ORA-06512: at "APPS.APP_EXCEPTION", line 72
ORA-06512: at "APPS.FND_GLOBAL", line 240
ORA-06512: at "APPS.FND_GLOBAL", line 1410
ORA-06512: at "APPS.FND_GLOBAL", line 1655
ORA-06512: at "APPS.FND_GLOBAL", line 2170
ORA-06512: at "APPS.FND_GLOBAL", line 2312
ORA-06512: at "APPS.FND_GLOBAL", line 2250
ORA-06512: at "APPS.TEST1_TRG_BI", line 2
ORA-04088: error during execution of trigger 'APPS.TEST1_TRG_BI'

As you can see, the second update failed because apps_initialize was executed for the second time in the same transaction.

Now I'll show two ways to workaround this issue:

1) As suggested in Note 556391.1 - "ORA-02074: Cannot SET NLS in a Distributed Transaction", one possible solution is a wrapper concurrent request which contains a call to the context setting (apps_initialize) and afterwards submits the original request.

Instead of:

create or replace trigger test1_trg_bi
after update on test1
for each row
declare
     . . .
begin
     fnd_global.APPS_INITIALIZE(:new.a,1,1);
     ret_code := fnd_request.submit_request ('OWNER', 'ORIGINAL_CONC', . . .);
     . . .
     . . .
end;

create the following trigger:

create or replace trigger test1_trg_bi
after update on test1
for each row
declare
     . . .
begin
     ret_code := fnd_request.submit_request ('OWNER', 'WRAPPER_CONC', . . . , :new.a, . . . );
     . . .
     . . .
end;

Additionally, create a new PL/SQL concurrent program (WRAPPER_CONC) that calls fnd_global.apps_initialize and then submits the ORIGINAL_CONC concurrent request, as sketched below.
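
Here is a minimal sketch of what such a wrapper procedure could look like. The procedure name, the 'OWNER'/'ORIGINAL_CONC' program and the single p_user_id parameter are illustrative placeholders taken from the sample above; a real wrapper would pass through whatever parameters the original program needs:

create or replace procedure xx_wrapper_conc
  (errbuf    out varchar2,
   retcode   out varchar2,
   p_user_id in  number) is
  l_request_id number;
begin
  -- runs inside the concurrent manager, i.e. in a separate transaction,
  -- so setting the context here does not hit the trigger/db-link limitation
  fnd_global.apps_initialize(p_user_id, 1, 1);

  -- submit the original concurrent program
  l_request_id := fnd_request.submit_request('OWNER', 'ORIGINAL_CONC');

  if l_request_id = 0 then
    errbuf  := fnd_message.get;
    retcode := '2';   -- error
  else
    retcode := '0';   -- success
  end if;
end xx_wrapper_conc;
/

The wrapper is then registered as a PL/SQL Stored Procedure executable and as the WRAPPER_CONC concurrent program, which is what the trigger submits.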

This way, the apps_initialize call is executed in a separate transaction with no error.

This is the preferred and recommended solution by Oracle.

2) The second solution is easier to implement and works fine, but according to Note 556391.1 it is not supported since it contains calls to fnd_global within a database trigger.

Anyway...

The idea is to call the apps_initialize in an Autonomous Transaction procedure.

Follow this sample:

SQL> create or replace procedure test1_apps_init (p_user_id number) is
  2  pragma autonomous_transaction;
  3  begin
  4       fnd_global.APPS_INITIALIZE(p_user_id,1,1);
  5       commit;
  6  end;
  7  /

Procedure created

SQL> create or replace trigger test1_trg_bi
  2  after update on test1
  3  for each row
  4  begin
  5       test1_apps_init (:new.a);
  6       -- fnd_request.submit_request...
  7       -- ....
  8       -- .....
  9  end;
10  /

Trigger created

SQL> select fnd_global.user_id from dual; 
   USER_ID
----------
        -1

SQL> update test1 set b=1101 where a=1001;
1 row updated

SQL> select fnd_global.user_id from dual; 
   USER_ID
----------
      1001

SQL> update test1 set b=1102 where a=1002;
1 row updated

SQL> select fnd_global.user_id from dual; 
   USER_ID
----------
      1002

As you can see, the update statements were executed successfully this time and the session was updated with the appropriate user context in each update statement.

Both workarounds work fine, but keep in mind that the second one is not supported.

You are welcome to leave a comment.

Aviad

Categories: APPS Blogs

Oracle Reports using BI Publisher

OracleAppsBlog - Mon, 2008-04-14 18:32
Categories: APPS Blogs

Oracle Transparent Gateways - General Description - Part I

Aviad Elbaz - Sun, 2008-04-13 17:15

A lot of companies have several applications based on more than one database system (e.g. DB2, SQL Server, Sybase, etc.).
Each database system stores its own data, and naturally there's a need to share data among the various heterogeneous database systems.

Oracle, starting with Oracle Database 9i, offers the "Oracle Transparent Gateways" (Oracle Database Gateways) to allow integration of an Oracle database with non-Oracle databases.
Unlike "Oracle Generic Connectivity", which provides a generic solution to connect to any ODBC/OLEDB compliant non-Oracle system using the ODBC and OLEDB standards, the "Oracle Transparent Gateways" are solutions specifically tailored for each target non-Oracle database system.
The "Oracle Transparent Gateways" communicate using the target database's native interface, which makes it possible to access non-Oracle systems as if they were Oracle databases.

The Transparent Gateway solution is composed of two parts:

  • Heterogeneous Services (HS) - a general integrated component that makes it possible to connect to non-Oracle systems from an Oracle database
  • Oracle Database Gateways (agents) - specifically tailored agents for non-Oracle systems that make it possible to interact with the target non-Oracle system

Heterogeneous Services (HS)

This is a generic component for connecting to non-Oracle systems.
It's an integrated component of the database that "extends the Oracle SQL engine to recognize the SQL and procedural capabilities of the remote non-Oracle system and the mappings required to obtain necessary data dictionary information" (Oracle Doc').

The following services are provided by the Heterogeneous Services (HS):

  • Transaction service
    Responsible for establishing an authenticated connection when the non-Oracle system is accessed and closing the connection when the session ends.
    Also responsible for global data integrity using the two-phase commit protocol, even for non-Oracle systems that do not support two-phase commit natively.
  • SQL Service
    Provides the translation capabilities: SQL & data dictionary translations.
    The SQL Service uses information received from the Gateway to translate Oracle SQL into the appropriate SQL dialect of the non-Oracle system. Also, references to data dictionary tables in a query will be rewritten by the SQL Service and return a result set as from an Oracle database.
    ** Data type translation is performed by the Gateway.
  • Procedural Service
    An interface for executing stored procedures on the non-Oracle system.
  • Pass-through SQL
    A mechanism for issuing a SQL statement directly against the non-Oracle system. It is useful when the statement/function/procedure is not supported by the Gateway.

Oracle Database Gateways (agents)

This component is responsible for the interface to the remote non-Oracle system.
It's also responsible for SQL mappings and data type conversions.
The Gateway interacts with Heterogeneous Services to make it possible to transparently connect from an Oracle database to a non-Oracle system.
In contrast to the HS (Heterogeneous Services), which is a generic component, the Gateways are tailored specifically for each target non-Oracle system.
There are Gateways for many systems such as DB2, Sybase, Informix, SQL Server, IMS, VSAM, Adabas, Ingres and Teradata, to name a few.
The Gateway can be installed on the same server as the non-Oracle system, on the same server as the Oracle system, or on a separate server.

In the next post I'll show an example of connecting to and retrieving data from a SQL Server database from an Oracle database using the Oracle Transparent Gateway for Microsoft SQL Server, including all configuration required for the Transparent Gateway and the source Oracle system.

Related Documents for more information:

- Oracle® Transparent Gateway for Microsoft SQL Server Administrator's Guide 10g Release 2 (10.2) for Microsoft Windows (32-bit)

- Database Gateways Technical Whitepaper

You are more than welcome to leave a comment.

Aviad

Categories: APPS Blogs

Accrual Balance Display in Self Service HR

RameshKumar Shanmugam - Fri, 2008-04-11 19:18
One of the common requirements that often comes up in Absence Management is that the HR team wants their employees to be able to see their available accrued/PTO balance.
As an employee, it is always good to know how much vacation balance you have left.

Here is a simple setup that can enable your employees to view the available balance through Self Service Absence Management:

Step 1: Define the Absence element
Step 2: Link the element based on the eligibility criteria
Step 3: Define the Absence Type
Step 4: Define the Accrual Plan
Step 5: Attach the Accrual Plan to the employee
Step 6: Complete the setup for Self Service HR
Step 7: Make sure the employee is able to access the Absence Management functionality

Setup steps to enable the Entitlement Balance in the Absence Management

Step 1: Create an Element Set with the type 'Run Set'
Step 2: Attach the element set to the profile option HR: Accrual Plan Element Set Displayed to User at the responsibility level (Employee Self Service)
Step 3: Bounce Apache

Now navigate to Employee Self Service > Absence Management > (T) Entitlement Balance


Try this out
Categories: APPS Blogs

LinkedIn Oracle Contractors Group

OracleAppsBlog - Thu, 2008-04-03 17:46
Categories: APPS Blogs

Forgot your Password?

Aviad Elbaz - Tue, 2008-03-25 02:58

Almost every website that uses a username & password has a "forgot password" functionality to reset users' passwords, and so does the Oracle E-Business Suite.

This is a very useful functionality since it reduces the number of SRs opened to the helpdesk team regarding login problems, and moreover it satisfies the users, who can get a new password in a very short time with no helpdesk intervention.

The implementation of this functionality is very simple and easy.
To enable it you should:

  1. Set the profile "Local Login Mask" to the current value plus 8 (e.g. if the current value is 32 -> set the value to 40)
  2. Bounce Apache

The "Local Login Mask" profile used to customize some attributes of the login page (AppsLocalLogin.jsp), one of them is the "forgot your password" link.
You should set the value of this profile to the sum of all attribute's mask values you are interested in.

The full attributes list is:

Attribute                  Mask Value   Binary value
Hint for Username          01           00000001
Hint for Password          02           00000010
Cancel button              04           00000100
Forgot Password link       08           00001000
Registration link          16           00010000
Language Images            32           00100000
Corporate Policy Message   64           01000000
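
For example, to display only the username hint, the Forgot Password link and the language images, you would set the profile to:

1 (Hint for Username) + 8 (Forgot Password link) + 32 (Language Images) = 41

The "32 -> 40" example in step 1 above is simply an existing value of 32 (Language Images) plus the Forgot Password link bit (8).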

 

Setting the Forgot Password link mask value adds a "Forgot your password?" tip to the login page.

The reset password process:

- Click on "Forgot your password?" link will ask for a username to which reset the password.

- After typing the username and click OK, a new workflow process is started (Item type UMXUPWD) and you'll get this confirmation message:

- Shortly you'll get this email - "Password reset required approval" (expired after 4 hours).

- Click on "Approve" to confirm you are interested in a new password.

- Shortly you'll get an email with a temporary password which you have to change on first login.
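
If you want to follow these reset requests, a quick way (just a sketch I find handy, not an official troubleshooting step) is to query the workflow items of the UMXUPWD item type:

-- recent password reset workflow processes
SELECT item_key, begin_date, end_date
FROM   wf_items
WHERE  item_type = 'UMXUPWD'
ORDER  BY begin_date DESC;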

A very nice and easy to implement functionality, which could be very beneficial.

Related Note 399766.1 - Reset Password Functionality FAQ

You are welcome to leave a comment

Aviad

Categories: APPS Blogs

Job Vs Position

RameshKumar Shanmugam - Sun, 2008-03-09 16:23
As a functional consultant, the first thing we should decide before we can design the solution for a customer is whether the system is going to be a single Business Group or a multi Business Group setup.

The second important thing we need to decide is whether the customer is going to use Jobs or Positions.

When we put this question to the customer, the first expected question from the customer's side would be: what is the difference between a Job and a Position?

The content in this blog is more of my own view and the simplistic approach I always like; review the documentation before you decide on the approach you want to take.

To explain it at a very high level:
A Job is a generic title or role within a Business Group, independent of any single organization. Jobs are required and are usually more specific if Positions are not used.

A Position is a specific occurrence of one Job, fixed within an organization. Positions are not required.

If you are under US legislation, your Job will drive your FLSA and EEO reporting. Personally, I feel maintaining Positions is hard in an unstructured organization. A Position hierarchy will suit universities/colleges/schools and government organizations.

The maintenance effort is higher with a Position hierarchy than with Jobs. If your customer feels they need less maintenance activity, then you should recommend Jobs, not Positions.
Categories: APPS Blogs

Upgrade from Jinitiator 1.3 to Java Plugin 1.6.0.x

Aviad Elbaz - Fri, 2008-03-07 05:51

Lately Oracle announced the end of Error Correction Support for Jinitiator 1.3 for E-Business Suite 11i, effective July 2009.

This is the sign it’s about time to upgrade to the native Java Plug-in… :-)

Among other things, one of the main advantages of upgrading from Jinitiator to the native Java Plug-in is the prevention of conflicts between them.

This upgrade is great news for all who are working with Oracle Discoverer Plus (with the Java Plug-in) and Oracle EBS 11i (with Jinitiator) and experiencing those conflicts.

I’ll skip all the others advantages and disadvantages of upgrading to Java Plug-in as they are well described in Steven Chan’s post - Jinitiator 1.1.8 To Be Desupported for Apps 11i and in Metalink Note: 290807.1 - Upgrading Sun JRE with Oracle Applications 11i.

So I will focus on the upgrade process itself - step by step.

I tested the upgrade on the following test environment:

  • EBS 11.5.10.2
  • Database 10.2.0.3
  • ATG Rollup 5
  • Developer 6i patchset 18
  • OS RHEL4.

Be aware that before upgrading to Java Plug-in you must upgrade to Developer 6i patchset 18 or later (currently the latest patchset is 19).

* You can use my previous post on upgrading Developer 6i with Oracle Apps 11i to patchset 18.

  1. Download JRE plug-in Oracle E-Business Suite interoperability patch - 6863618
  2. Download the Sun JRE Plug-in 
    • Select Java Runtime Environment (JRE) 6 Update X (select the latest available update, currently it’s 5)
    • Select Windows offline installation, multi-language
  3. Rename the downloaded installation file jre-6_uX-windows-i586-p.exe to j2se1600x.exe
    In my case rename jre-6_u5-windows-i586-p.exe to ==>> j2se16005.exe
  4. Copy the renamed j2se1600x.exe file (j2se16005.exe in my case) to $COMMON_TOP/util/jinitiator on the Apps Tier node
  5. If you are on Developer 6i patchset 18 you should apply forms patches 6195758 & 5884875.
    ** Skip this step if you are on Developer 6i patchset 19.
    • Download Patches 6195758 & 5884875
    • Apply patch 6195758
      • Stop all applications processes by adstpall.sh
      • Unzip p6195758_60827_GENERIC.zip
      • cd 6195758
      • cp -r $ORACLE_HOME/forms60/java/oracle/forms/handler/UICommon.class $ORACLE_HOME/forms60/java/oracle/forms/handler/UICommon.class.PRE_BUG6195758
      • cp -r $ORACLE_HOME/forms60/java/oracle/forms/handler/ComponentItem.class $ORACLE_HOME/forms60/java/oracle/forms/handler/ComponentItem.class.PRE_BUG6195758
      • cp oracle/forms/handler/UICommon.class $ORACLE_HOME/forms60/java/oracle/forms/handler/UICommon.class
      • cp oracle/forms/handler/ComponentItem.class $ORACLE_HOME/forms60/java/oracle/forms/handler/ComponentItem.class
    • Apply Patch 5884875
      • Unzip p5884875_60827_GENERIC.zip
      • cd 5884875
      • cp -r $ORACLE_HOME/forms60/java/oracle/forms/engine/Main.class $ORACLE_HOME/forms60/java/oracle/forms/engine/Main.class.PRE_BUG5884875
      • cp -r $ORACLE_HOME/forms60/java/oracle/forms/handler/AlertDialog.class $ORACLE_HOME/forms60/java/oracle/forms/handler/AlertDialog.class.PRE_BUG5884875
      • cp oracle/forms/engine/Main.class $ORACLE_HOME/forms60/java/oracle/forms/engine/Main.class
      • cp oracle/forms/handler/AlertDialog.class $ORACLE_HOME/forms60/java/oracle/forms/handler/AlertDialog.class
      • Run adadmin -> Generate Applications Files menu -> Generate product JAR files
  6. Apply the Interoperability patch 6863618
    • Make sure all applications processes are down
    • Enable maintenance mode:
      Execute adadmin -> Change Maintenance Mode (5) -> Enable Maintenance Mode (1)
    • Unzip p6863618_11i_GENERIC.zip
    • cd 6863618
    • Run adpatch to apply patch driver u6863618.drv
    • cd [PATCH_NUMBER]/fnd/bin
    • Execute the following command, where X represents the update number:
      $ txkSetPlugin.sh 1600X
      In my case:
      $ txkSetPlugin.sh 16005
    • Disable maintenance mode:
      Execute adadmin -> Change Maintenance Mode (5) -> Disable Maintenance Mode (2)
  7. Start all applications processes by adstrtall.sh
  8. Verify the installation by signing in to Oracle EBS and selecting a forms-based responsibility.

For those who worry about performance, take a look at this benchmark:
Benchmark comparison test with Jinitiator 1.3.1.23 and Java Plug-in 1.5.0_05 – performance whitepaper

For more information - Note: 290807.1 - Upgrading Sun JRE with Oracle Applications 11i

Aviad

Categories: APPS Blogs

UTL_FILE_DIR issue after applying patch 5985992 'TXK AUTOCONFIG RUP Q'

Aviad Elbaz - Mon, 2008-02-04 09:17

Do you have more than 240 characters in your utl_file_dir?

If so, you should read this before applying patch 5985992 'TXK AUTOCONFIG RUP Q (Jul/Aug 2007)'.

After applying this patch, AutoConfig on the database tier failed with the following error:

[PROFILE PHASE]
  AutoConfig could not successfully execute the following scripts:
    Directory: [RDBMS_ORACLE_HOME]/appsutil/install/[context_name] 
      afdbprf.sh              INSTE8_PRF         1

AutoConfig is exiting with status 1

It wasn't clear why afdbprf.sh failed, so I ran it manually from an ssh terminal and got this:

$ [RDBMS_ORACLE_HOME]/appsutil/install/[context_name]/afdbprf.sh

afdbprf.sh started at Tue Jan 29 17:43:21 IST 2008

The environment settings are as follows ...

       ORACLE_HOME : ....
       .........
       .........

Application Id : 0
Profile Name : BIS_DEBUG_LOG_DIRECTORY
Level Id : 10001
New Value : /usr/tmp
Old Value : /usr/tmp
declare
*
ERROR at line 1:
ORA-12899: value too large for column
"APPLSYS"."FND_PROFILE_OPTION_VALUES"."PROFILE_OPTION_VALUE" (actual: 486,
maximum: 240)
ORA-06512: at line 44
ORA-06512: at line 139

Looking into afdbprf.sql  (executed by afdbprf.sh) reveals the problem:

.......

--Setting  BIS_DEBUG_LOG_DIRECTORY
           set_profile(0, 'BIS_DEBUG_LOG_DIRECTORY',
                       10001, 0,
                       '&BIS_DEBUG_LOG_DIR',
                       NULL);

-- This profile option was earlier set in addbprf.sql via bug 2843457, Now moved here for bug 5722442
--
-- Set up UTL_FILE_LOG profile option
--
          set_profile(1, 'UTL_FILE_LOG',
                      10001, 0, '/usr/tmp,........[more than 240 characters..... :-) ]',
                      NULL);

In order to fix bug 5722442, the UTL_FILE_LOG profile is updated with the value of s_db_util_filedir from the database context file, and a new bug was created (not a bug according to Bug 6404909).

If s_db_util_filedir contains more than 240 characters, it can't be stored in a profile, since the PROFILE_OPTION_VALUE column is defined as varchar2(240).
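
A quick way to check whether you are affected (a simple sketch, run from SQL*Plus as a DBA user) is to measure the length of the current parameter value:

-- length of the current utl_file_dir setting; more than 240 means you will hit this issue
SELECT LENGTH(value) AS utl_file_dir_length
FROM   v$parameter
WHERE  name = 'utl_file_dir';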

The s_db_util_filedir variable is initialized when the database context file is created by adbldxml.pl, and is set to the value of the utl_file_dir database parameter.

I'm not sure why it should be updated with all the directories within utl_file_dir and not with the relevant directories only...?!
The UTL_FILE_LOG profile (or "Stored procedure log directory") wasn't updated by AutoConfig before applying this patch, so I'm not sure about the purpose of this profile.
Before applying this patch I had this profile set to a directory that doesn't exist...

The solution according to Note 458511.1 - "After patch 5985992 AutoConfig On Database Tier Fails with script afdbprf.sh" is to change the value of s_db_util_filedir in the database context file to a value of less than 240 characters and run AutoConfig again.

And it works, of course...

What should I do if my utl_file_dir contains more than 240 characters?!

Actually I don't have a good answer to this question, but I will try to list all the places this issue might affect when s_db_util_filedir is updated with a value of less than 240 characters.

  1. AutoConfig on the DB Tier creates the [SID]_APPS_BASE.ora file under $RDBMS_ORACLE_HOME/dbs (if it doesn't already exist), which contains the utl_file_dir database parameter generated from the s_db_util_filedir value in the database context file.
    So if you plan to rebuild your init.ora with AutoConfig you will need to update utl_file_dir manually.
  2. When executing adbldxml.pl on the DB Tier to rebuild the database context file, s_db_util_filedir will be filled with the utl_file_dir database parameter - which might contain more than 240 characters.
    Therefore, before executing AutoConfig you should edit the new context file and shorten the value of s_db_util_filedir.
  3. The UTL_FILE_LOG profile ("Stored procedure log directory") will be updated with the shortened s_db_util_filedir value (you can check the current value with the query below).
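
A quick way to see what this profile currently holds (a sketch only; run from the apps schema, and note that without an applications context it returns the site-level value):

-- current value of the "Stored procedure log directory" profile
SELECT fnd_profile.value('UTL_FILE_LOG') FROM dual;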

You are welcome to leave a comment or update with additional information regarding this issue.

Aviad

Categories: APPS Blogs

Oracle Discoverer Query Prediction functionality and Performance

Aviad Elbaz - Tue, 2008-01-22 12:21

Lately we noticed that our Discoverer reports run very slowly.
Actually, the problem wasn't the Discoverer report queries but the query prediction, which for some reason took very long.

What is Query Prediction in Discoverer?

“Discoverer includes functionality to predict the time required to retrieve the information in a Discoverer query.
The query prediction appears before the query begins, enabling Discoverer users to decide whether or not to run the query.
This is a powerful facility that enables Discoverer users to control how long they wait for large reports.”
(from Oracle doc’)

The query prediction runs during the time the following message appears at the bottom left of the Discoverer Desktop window: "Determining query time estimate".

For each report we tested, we found that query prediction takes 30%-50% (!!!) of the report's total run time.

The next phase was to start a SQL trace on a Discoverer session to see what actually happens when running a Discoverer report.

This is the relevant section from the SQL Trace:

SELECT QS_ID, QS_COST, QS_ACT_CPU_TIME,
       QS_ACT_ELAP_TIME, QS_EST_ELAP_TIME,
       QS_CREATED_DATE, QS_OBJECT_USE_KEY,
       QS_NUM_ROWS,QS_STATE
FROM [EUL_USER].EUL5_QPP_STATS WHERE  QS_COST IS NOT NULL
AND    QS_OBJECT_USE_KEY = :OBJECTUSEKEY
ORDER BY QS_CREATED_DATE DESC


As you can see, the query prediction functionality tries to retrieve statistics information from EUL5_QPP_STATS and it takes 35 seconds. (total time for this report is 55 seconds).

The query prediction is based, among other things, on the query prediction statistics table – EUL5_QPP_STATS.
This table records query prediction statistics while running Discoverer reports.

There is no logic in a query time estimation that runs longer than the report's query itself…

Since the query prediction functionality is important to our users, we avoided disabling it (by setting QPPEnable to 0).
Furthermore, I found that we have statistics data in this table from 7 years ago… 
There is no point in keeping these statistics…
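
Before purging, you can get a feel for how much history has accumulated with a quick query against the statistics table (a sketch only; replace [EUL_USER] with the EUL owner, as in the traced query above):

-- how many prediction statistics rows exist and how old the oldest row is
SELECT COUNT(*), MIN(qs_created_date)
FROM   [EUL_USER].EUL5_QPP_STATS;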

I tried to find information about purging the EUL5_QPP_STATS and I found this: “How to delete old query prediction statistics” in Oracle® Business Intelligence Discoverer Administration Guide 10g Release 2 (10.1.2.1)

There is a SQL script at [ORACLE_HOME]\discoverer\util\eulstdel.sql that deletes all query prediction statistics that were created before a specified date.

Great!
I executed this script on my database, gave 90 days as a parameter, and it deleted 460,000 (of 468,000) rows.
I ran a Discoverer report again, but query prediction still took too long, the same as before.
I checked the explain plan and the cost of the above SQL and it remained the same.
I tried to gather statistics on the EUL5_QPP_STATS table and rebuild its indexes, but the cost became even higher… (more than 103, something like 800…).

I had no choice but to rebuild the EUL5_QPP_STATS table (by export, drop table and import).

After recreating the EUL5_QPP_STATS table I ran a Discoverer report again and the query prediction took insignificant time, almost nothing…  :-)

This is from the trace I took afterwards:

SELECT QS_ID, QS_COST, QS_ACT_CPU_TIME,
       QS_ACT_ELAP_TIME, QS_EST_ELAP_TIME,
       QS_CREATED_DATE, QS_OBJECT_USE_KEY,
       QS_NUM_ROWS,QS_STATE
FROM [EUL_USER].EUL5_QPP_STATS WHERE  QS_COST IS NOT NULL
AND    QS_OBJECT_USE_KEY = :OBJECTUSEKEY
ORDER BY QS_CREATED_DATE DESC


The elapsed time for this SQL was reduced to 0.05 seconds!! (it was 35 seconds before)


The SQL cost was reduced from 103 to 31!

I checked this issue on Discoverer Desktop 10g (10.1.2.2), but it is relevant to the web tools (Discoverer Viewer and Discoverer Plus) as well, since the query prediction functionality exists in these tools just as in the client version.

You are welcome to leave a comment.

Aviad

Categories: APPS Blogs
