APPS Blogs

Configure Oracle dbconsole...

Bas Klaassen - Tue, 2010-09-28 04:38
For a 10g database I wanted to check something in the database console. When trying to start this console, I received the following error: OC4J Configuration issue. Oracle/ora102/oc4j/j2ee/OC4J_DBConsole_.local_ not found. The ORACLE_HOME is a shared one, meaning different databases use the same ORACLE_HOME. Because of this, there were some other directories, but not the one I needed. I...
Categories: APPS Blogs

Oracle webcache and Oracle eBS 11i....

Bas Klaassen - Thu, 2010-09-09 08:31
I have installed Oracle webcache 10.1.2.3.0 to act as a reverse proxy server for our Oracle eBS environments. After installing the webcache, I used note 380486.1 to configure webcache for our R12 environments. Now we can use the webcache URL to enter the R12 application. For our 11i environments I used note 306653.1. Using the webcache I am able to log in to the 11i application, but when starting a...
Categories: APPS Blogs

Read/Write NTFS on my MacBook Pro

Bas Klaassen - Thu, 2010-08-12 03:41
Today I tried to start my virtual Linux machines (created in VMware Workstation on Windows to install/upgrade eBS environments) on my MacBook Pro. I downloaded a trial version of VMware Fusion. When trying to start a virtual machine, VMware would show me the following error, 'Read only file system', and the machine would not start. It seemed I could not write to the folders containing the VMware...
Categories: APPS Blogs

Agent blocked....

Bas Klaassen - Fri, 2010-08-06 04:29
In our 11g Grid Control I noticed an agent that was no longer uploading any data to the OMS. When checking the status of the agent I noticed the following:
Last successful upload : (none)
Last attempted upload : (none)
Total Megabytes of XML files uploaded so far : 0.00
Number of XML files pending upload : 199
Size of XML files pending...
Categories: APPS Blogs

Incomplete restore....

Bas Klaassen - Tue, 2010-07-13 02:32
Yesterday I was asked to help during a restore/recovery operation of the eBS database. Before restoring, the DBA removed the original database (rm *). After restoring the database files from the backup and trying to start the database, we noticed the database would not start because it could not find its redo log files. It seemed the redo log files were not in the backup, and therefore not...
Categories: APPS Blogs

Oracle.exe on Windows

Bas Klaassen - Wed, 2010-04-07 03:09
So, I solved my last problem: deleting the registry entry for NLS_LANG fixed it. After this, the open database also showed me the NLS error again. I decided to start again with another environment; maybe the source was the problem ;) With a new set, I ran into another problem... the database service could not be started: DIM-00019: Create Service Error. The Oracle Service service terminated...
Categories: APPS Blogs

Cloning eBS environment on windows....

Bas Klaassen - Wed, 2010-03-31 07:04
For a new project I am working with Oracle eBS 11i on the Windows platform. When cloning an environment using Rapid Clone, the database clone shows an error. When the clone tries to create a new controlfile, the following error is shown: The log information will be written to "F:\oracle\behdb\9.2.0\appsutil\log\beh_smdbp02\adcrdb.txt" ECHO is off. ECHO is off. Creating the control file...
Categories: APPS Blogs

Oracle R12.1.2 HCM New Functionality Part 1

RameshKumar Shanmugam - Sat, 2010-01-02 22:35
EBS R12.1.2 has been released, and a much-awaited gap in the OTL and Absence Management products is closed in this release.
The OTL timecard is now integrated with SSHR Absence Management. With this new functionality, absences entered in Oracle Core HR, ESS, or MSS are populated in the OTL timecard; similarly, absence time entered in OTL can be viewed in Core HR and SSHR. This helps maintain data integrity and also eliminates much of the custom work that consultants previously had to do to validate time entered in OTL against Absence Management.

To understand how to set up Absence Management, refer to the blog post at http://ramesh-oraclehrms.blogspot.com/2007/07/leave-management.html
Categories: APPS Blogs

Happy Xmas to everybody !!!!!

Bas Klaassen - Fri, 2009-12-25 05:40
Categories: APPS Blogs

Login problems R12

Bas Klaassen - Fri, 2009-12-04 06:39
On our R12 eBS environment we are facing a problem when logging in. It does not happen all the time, because we use more than one web node (behind a load balancer), but when trying to access the login page the following error is shown: "Unable to generate forwarding URL. Exception: oracle.apps.fnd.cache.CacheException", or a blank page is shown instead of the login page. In the applications.log file ($...
Categories: APPS Blogs

Missing AppsLogin.jsp...

Bas Klaassen - Tue, 2009-08-25 01:20
I am still facing the same problem with my R12 upgrade. When running the post-install checks using Rapidwiz, only the JSP and Login Page checks show errors. For JSP I see 'JSP not responding, waiting 15 seconds and retesting', and the Login Page shows 'RW-50016: Error. -{0} was not created. File= {1}'. The strange thing is that all other checks are fine, even the /OA_HTML/help check! So, the problem is...
Categories: APPS Blogs

R12 upgrade

Bas Klaassen - Sun, 2009-08-23 03:46
I finally upgraded my 11.5.10.2 environment to R12. I followed the steps mentioned in the different upgrade guides. What do I have running right now?
- Oracle eBS 12.0.6
- Oracle database 10.2.0.4
- Oracle tech stack (old ora directory) 10.1.2.3.0
- Oracle tech stack (old iAS directory) 10.1.3.4.0
So, having had no problems during the upgrade process, I finished starting all services. When trying to log in my...
Categories: APPS Blogs

Purge old files on Linux/Unix using “find” command

Aviad Elbaz - Wed, 2009-06-10 01:30

I've noticed that one of our interface directories has a lot of old files, some of them more than a year old. I checked with our implementers, and it turns out we can delete all files that are older than 60 days.

I decided to write a (tiny) shell script to purge all files older than 60 days and schedule it with crontab, so I won't have to deal with it manually. I wrote a find command to identify and delete those files. I started with the following command:

find /interfaces/inbound -maxdepth 1 -type f -mtime +60 -exec rm {} \;

It finds and deletes all files in the directory /interfaces/inbound that are older than 60 days.
"-maxdepth 1" restricts the search to the top-level directory only, so files in subdirectories are not touched.

After packing it into a shell script, I got a request to delete "csv" files only. No problem... I added the "-name" option to the find command:

find /interfaces/inbound -maxdepth 1 -type f -name "*.csv" -mtime +60 -exec rm {} \;

All csv files in /interfaces/inbound that are older than 60 days will be deleted.

But then the request changed, and I was asked to delete "*.xls" files in addition to the "*.csv" files. At this point things got complicated for me, since I'm not a shell script expert...

I tried several things, like adding another "-name" to the find command:

find /interfaces/inbound -name "*.csv" -name "*.xls" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

But no file was deleted. A few moments later I understood why: find combines its tests with a logical AND by default, so I was looking for files that are both csv and xls files... (logically impossible, of course).

After struggling a little with the find command, I managed to make it work using the "-o" (OR) operator:

find /interfaces/inbound -maxdepth 1 -type f \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -exec rm {} \;
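
For reference, here is a minimal sketch of how the command could be wrapped in a script and scheduled; the script name, log path, and schedule below are my own illustrative choices, not from the original setup:

#!/bin/sh
# purge_interfaces.sh - delete top-level *.csv and *.xls files
# older than 60 days from the interface directory (hypothetical paths).
PURGE_DIR=/interfaces/inbound
LOG=/var/tmp/purge_interfaces.log

find "$PURGE_DIR" -maxdepth 1 -type f \( -name "*.csv" -o -name "*.xls" \) \
    -mtime +60 -exec rm {} \; >> "$LOG" 2>&1

A crontab entry like the following would run it daily at 02:00:

0 2 * * * /home/oracle/scripts/purge_interfaces.sh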

:-)

Aviad

Categories: APPS Blogs

Upgrade Java plug-in (JRE) to the latest certified version

Aviad Elbaz - Wed, 2009-05-20 03:15

If you have already migrated Oracle EBS 11i to the native Java JRE, you may want to update EBS to the latest JRE update from time to time. For example, your EBS environment may be configured to work with Java JRE 6 update 5, and you want to upgrade your clients to the latest JRE 6 update 13.

This upgrade process is very simple:

  1. Download the latest Java JRE installation file
    The latest update can be downloaded from here.
    Download the "JRE 6 Update XX" under "Java SE Runtime Environment".
     
  2. Copy the above installation file to the appropriate directory:
    $> cp jre-6uXX-windows-i586-p.exe $COMMON_TOP/util/jinitiator/j2se160XX.exe
    We have to rename the installation file to the format "j2se160XX.exe", where XX indicates the update version.
     
  3. Execute the upgrade script:
    $> cd $FND_TOP/bin
    $> ./txkSetPlugin.sh 160XX
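
As a concrete example, assuming update 13 (the version we used) and the default paths, the full sequence would look like this:

    $> cp jre-6u13-windows-i586-p.exe $COMMON_TOP/util/jinitiator/j2se16013.exe
    $> cd $FND_TOP/bin
    $> ./txkSetPlugin.sh 16013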

That's all....

Since we upgraded our system to JRE 6 update 13 (two weeks ago), our users no longer complain about the mouse focus issues and some other Forms freezes they experienced before. So... it was worth it...

If you haven't migrated from Jinitiator to the native Sun Java plug-in yet, it's highly recommended to do so soon, as Jinitiator is going to be desupported.

See the following post for detailed, step by step, migration instructions: Upgrade from Jinitiator 1.3 to Java Plugin 1.6.0.x.

You are welcome to leave a comment.

Aviad

Categories: APPS Blogs

Corruption in redo log file when implementing Physical Standby

Aviad Elbaz - Tue, 2009-03-17 10:55

Lately I started implementing Data Guard - Physical Standby - as a DRP environment for our production E-Business Suite database, and I must share with you one issue I encountered during implementation.

I chose one of our test environments as the primary instance, and as the server for the standby database in test I used a new server that had been prepared for the standby database in production. Both run Red Hat Enterprise Linux 4.

The implementation process went fast with no special issues (at least I thought so...); everything seemed to work fine: archived logs were transmitted from the primary server to the standby server and successfully applied on the standby database. I even executed a switchover to the standby server (both database and application tier), and a switchover back to the primary server, with no problems.

The standby database was configured for maximum performance mode; I also created standby redo log files, and LGWR was set to asynchronous (ASYNC) network transmission.

The exact setting from init.ora file:
log_archive_dest_2='SERVICE=[SERVICE_NAME] LGWR ASYNC=20480 OPTIONAL REOPEN=15 MAX_FAILURE=10 NET_TIMEOUT=30'
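
As a side note, a quick way to verify that the destination is shipping and that logs are being applied is to query the relevant v$ views; this is a generic sketch, not taken from the original setup:

On the primary:
SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;

On the standby:
SQL> select sequence#, applied from v$archived_log order by sequence#;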

At this stage, when the major part of the implementation had been done, I found some time to deal with some other issues, like interfaces to other systems, scripts, configuring rsync for concurrent log files, etc., and some modifications to the setup document I wrote during implementation.

While working on those other issues, I left the physical standby instance active, so archived log files were transmitted and applied on the standby instance. After a couple of hours I noticed the following errors in the primary database alert log file:

ARC3: Log corruption near block 146465 change 8181238407160 time ?
Mon Mar  2 13:04:43 2009
Errors in file [ORACLE_HOME]/admin/[CONTEXT_NAME]/bdump/[sid]_arc3_16575.trc:
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 146465 change 8181238407160 time 02/03/2009 11:57:54
ORA-00312: online log 3 thread 1: '[logfile_dir]/redolog3.ora'
ARC3: All Archive destinations made inactive due to error 354
Mon Mar  2 13:04:44 2009
ARC3: Closing local archive destination LOG_ARCHIVE_DEST_1: '[archivelog_dir]/arch_[xxxxx].arc' (error 354)([SID])
Committing creation of archivelog '[archivelog_dir]/arch_[xxxxx].arc' (error 354)
ARCH: Archival stopped, error occurred. Will continue retrying
Mon Mar  2 13:04:45 2009
ORACLE Instance [SID] - Archival Error

I don't remember ever having had a corruption in a redo log file before...
What was wrong?! Was it something with the physical standby instance? Actually, if it were something with the standby instance, I would have expected a corruption in the standby redo log files, not the primary's.

The primary instance resides on a NetApp volume, so I checked the mount options in /etc/fstab, but they were fine. I asked our infrastructure team to check whether something had gone wrong with the network around the time I got the corruption, but they reported no errors or anything unusual.

OK, I had no choice but to reconstruct the physical standby database, since when an archived log file is missing, the standby database is out of sync. I set 'log_archive_dest_state_2' to defer so no further archived logs would be transferred to the standby server, cleared the corrupted redo log files (alter database clear unarchived logfile 'logfile.log'), and reconstructed the physical standby database.
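
For reference, a sketch of those two steps in SQL*Plus, using the redo log file name from the alert log excerpt above:

SQL> alter system set log_archive_dest_state_2='defer';
SQL> alter database clear unarchived logfile '[logfile_dir]/redolog3.ora';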

Meanwhile (copying database files takes a long time...), I checked the documentation again - maybe I had missed something, maybe I had configured something wrong. I read a lot and didn't find anything that could shed some light on this issue.

At this stage, the standby was up and ready. First, I held up the redo transport service (log_archive_dest_state_2='defer') to see if I would get a corruption while the standby was off. After one or two days with no corruption, I activated the standby.

Then I saw the following sentence in Oracle® Data Guard Concepts and Administration 10g Release 2 (10.2):
"All members of a Data Guard configuration must run an Oracle image that is built for the same platform. For example, this means a Data Guard configuration with a primary database on a 32-bit Linux on Intel system can have a standby database that is configured on a 32-bit Linux on Intel system"

One moment, I thought to myself: the standby server is based on AMD processors and the primary server on Intel's... Is that the problem?!
When talking about the same platform, does that mean the same processors as well? Isn't it sufficient to have the same 32-bit OS on x86 machines?
Weird, but I had to check it...

Meanwhile, I got a corruption in a redo log file again, which confirmed there was a real problem and it hadn't been accidental.

So I used another AMD-based server (identical to the standby server) and started all over again - primary and standby instances. After two or three days with no corruption, I started to believe the difference in processors was the problem. But one day later I got a corruption again (oh no...)

I must say that on the one hand I was very frustrated, but on the other hand it was a relief to know it wasn't the difference in the processors.
It was already clear that when I finally found the problem, it would turn out to be something stupid...

So it was not the processors, not the OS, and not the network. What else could it be?!

And here my familiarity with the "filesystemio_options" initialization parameter begins (thanks to Oracle Support!). I don't know how I missed this note before, but it is all written here - Note 437005.1: Redo Log Corruption While Using Netapps Filesystem With Default Setting of Filesystemio_options Parameter.

When the redo log files are on a NetApp volume, "filesystemio_options" must be set to "directio" (or "setall"). When "filesystemio_options" is set to "none" (as on my instance before), reads and writes to the redo log files go through the OS buffer cache. Since NetApp storage is based on NFS (a stateless protocol), the consistency of asynchronous writes over the network is not guaranteed; some writes can be lost. Setting "filesystemio_options" to "directio" makes writes bypass the OS cache layer, so no write is lost.
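
For reference, a minimal sketch of checking and changing the parameter in SQL*Plus; since filesystemio_options is a static parameter, it has to be set in the spfile and the instance restarted:

SQL> show parameter filesystemio_options
SQL> alter system set filesystemio_options=DIRECTIO scope=spfile;
SQL> shutdown immediate
SQL> startup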

Needless to say, once I set it to "directio" everything was fine, and I haven't gotten any corruption since.

Aviad

Categories: APPS Blogs

JRE Plug-in “Next-Generation” – Part II

Aviad Elbaz - Tue, 2009-03-10 04:22

In my last post, "JRE Plug-in “Next-Generation” – to migrate or not?", I wrote about a Forms launching issue in EBS right after upgrading JRE (the Java Plug-in) to version 6 update 11, which works with the new next-generation Java Plug-in architecture. The problem happens inconsistently, and Forms launches reliably only when I disable the "next-generation Java Plug-in".

Following an SR I opened with Oracle Support about this issue, I was asked to verify that the profile option "Self Service Personal Home Page Mode" is set to "Framework Only".

We had this profile option set to "Personal Home Page", as our users prefer that mode to "Framework Only".

It's important to note that "Personal Home Page" is not a supported value for the "Self Service Personal Home Page Mode" profile option and may cause unexpected issues.

After setting the profile option to "Framework Only", the problem was resolved and the screen doesn't freeze anymore.

So the solution in my case was to set the profile option "Self Service Personal Home Page Mode" to "Framework Only" (we are still testing it, but it looks fine so far). However, there are two more options that seem to work even when the profile option is set to "Personal Home Page" and the "next-generation Java Plug-in" is enabled.

1) Uncheck "Keep temporary files on my computer"
- Navigate to the Java control panel (start -> Settings -> Control Panel -> Java, or start -> run -> javacpl.cpl)
- On the General tab -> Temporary Internet Files -> Settings -> uncheck "Keep temporary files on my computer".
- Check the issue from a fresh IE session.
 

I'm not sure how or why, but it solves the problem; no more freezing this way.

2) Set “splashScreen” to null
- Edit the $FORMS60_WEB_CONFIG_FILE file on your Forms server node.
- Change this line
"splashScreen=oracle/apps/media/splash.gif"
to
"splashScreen="

- No need to bounce any service.
- Check the issue from a fresh IE session.

Again, it's not so clear how or why, but it solves the problem as well.

Now we just need to convince our users to accept the "Framework Only" look and feel, and then we will consider upgrading all our clients to the new next-generation Java Plug-in.

You are welcome to leave a comment or share your experience with the new Java Plug-in.

Aviad

Categories: APPS Blogs

Archiving OTL Timecard

RameshKumar Shanmugam - Sun, 2009-02-22 16:18
The timecard archive process in OTL archives timecards that are no longer needed, which helps improve OTL performance. Once archived, a timecard's summary information is still available to users, but not the detailed view. Users will not be able to edit or modify any details of the timecard.

The following steps need to be performed to archive OTL timecards:

1) Set up the following profile options:
  • OTL: Archive Restore Chunk Size
  • OTL: Minimum Age of Data Set for Archiving
  • OTL: Max Errors in Validate Data Set
2) Run the 'Define Data Set Process' - this is the first step in the overall archiving process. It identifies the date range of the timecards to be archived and moves the data set to temporary tables in preparation for archiving.
Note: Make sure not to select too much data at once, or the process may fail.
If the process fails, the data can be restored using the 'Undo Data Set Process'.



3) The next step in the archiving process is to run the 'Validate Data Set Process' - this process checks for errors on the timecards in the data set. It returns a validation warning message if it finds a timecard with the status working, rejected, submitted, approved, or error.

Note: The validation process may encounter some errors during processing. To restrict the number of errors reported at one time, you can set the OTL: Max Errors in Validate Data Set profile option to a maximum number of errors. The process stops running when it reaches the number of errors you define.


4) The final step in timecard archiving is to run the 'Archive Data Set Process' - the archive process moves the defined data set of validated timecard data from the active tables in the OTL application to the archive tables.


In my next blog post I'll cover the details of restoring the archived data back to the OTL application.

Try this out!
Categories: APPS Blogs
