Feed aggregator

ORA-12547 while creating ASM instance using DBCA in 10gR2

Madhu Thatamsetty - Wed, 2008-05-28 04:39
ORA-12547 - TNS Lost Contact while creating an ASM instance using DBCA.

Cause: It appears the Oracle binaries were not relinked properly.

[oracle@testsrv01 ~]$ ldd `which oracle`
lddlibc4: cannot read header from `/oracle01/oracle/product/10.2.0/db_1/bin/oracle'
[oracle@testsrv01 ~]$

Fix: Shut down the listener, kill any stale processes still referring to executables in your ORACLE_HOME, and relink the binaries from $ORACLE_HOME.
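A minimal sketch of that relink sequence, assuming a standard single-instance 10gR2 environment with ORACLE_HOME and PATH already set:

$ lsnrctl stop
$ ps -ef | grep $ORACLE_HOME    # find (and kill) any stale processes still using the home
$ cd $ORACLE_HOME/bin
$ ./relink all
$ ldd `which oracle`            # verify the executable header is readable again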

Recommended by Joe

Mark A. Williams - Tue, 2008-05-27 18:21

A friend of mine at Oracle (that is to say Greg is still at Oracle whilst I am not) pointed out to me that Microsoft's Joe Stagner has Pro .NET Oracle Programming as a recommended book on the Oracle - ASP.NET forum. Currently the recommended books list looks like this (subject to change naturally):


That got me to thinking a bit. It has been just over 4 years since I started writing that book. (I started the first chapter in March 2004). Certainly a lot has changed since then with several releases of the Oracle Data Provider for .NET, Oracle Database, Visual Studio, the Oracle Developer Tools for Visual Studio .NET, etc. I was just at one of the major booksellers here locally over the weekend and the "computer section" has dwindled immensely. I wonder if it would be worthwhile to update the book? There are several things I would want to change to be sure, but would it be worth it? Do people get their technical information from OTN and MSDN mostly now?

More on JavaFX

Oracle EPM Smart Space - Tue, 2008-05-27 14:18

OK, I will be totally honest: I don't have a whole lot on this one, simply because it is the newest entry in the market. It does look promising from the videos I have seen, and what is being said about platform support sounds great. I just hope we can avoid all the issues I have seen over the years with the JRE and version compatibility… I have signed up to preview the SDK, and when I get a hold of it I will be sure to share more. I do want to share one cool feature I have seen on video: the ability to start with the application in the browser and then drag it to the desktop. Here is the video that shows this:

The feature I am talking about is about 2:10 into the video and I think this will be a key differentiator that the other RIA (Rich Internet Application) players will quickly try to copy.

Categories: Development

Are your projects failing? How to avoid the Pitfalls

Project Directions - Tue, 2008-05-27 10:20

In a recent article entitled Why Projects Fail (And How to Avoid the Pitfalls), published by Enterprise Systems, Colleen Baumbach, Senior Director of Strategy for Oracle Projects, outlines many of the common mistakes that lead to project failure.

I think one of the best points Ms. Baumbach makes is at the end, where she says the accumulated years of project failures almost create a mindset from the start that a project is doomed.  As she notes, there are countless studies showing how dismal project success rates are.

How are companies addressing this?  According to a Forrester study published in early 2007, twenty-six percent of IT leaders planned to hire project managers and 59 percent planned to train their current staff in project management in 2007.  They noted that those numbers changed very little from 2002.

Further reasoning behind the rush to acquire or train more qualified project managers: 

“The reason for the continued emphasis on project management skills is because IT’s value to business remains contingent on it’s ability to deliver projects which meet business requirements both on time and on budget. IT staff accustomed to more technical roles struggle to transition to project management, CIO’s argue, and complain that educational institutions are not putting adequate focus on these skills through coursework.”

It should be a good time to be a project manager as long as you know how to avoid the pitfalls.

"Demo It To Oracle" (DITO) - CamStudio Help

Pankaj Chandiramani - Sun, 2008-05-25 22:03

Now you can record the issues you are facing and share them with Oracle Support.


It's a nice way to share the error, show the support engineers how it occurred, and speed up reproducibility.

Categories: DBA Blogs

Time saving tactic: How we saved 6 hrs of downtime in production 10g upgrade

Gaurav Verma - Sat, 2008-05-24 03:57

Preface
So you want to upgrade your database from 9i to 10g. Well, welcome to the club.

If you have RAC, then you will definitely need to install the CRS and database binaries, along with some RDBMS patches.

When a customer challenged us to keep the downtime within 24 hrs, it set us thinking about what time-saving tactics could be employed to achieve this end.
Divide and conquer the downtime
The old strategy of divide and conquer is time tested and works well in most paradigms. In this article, we demonstrate how it is possible to split the 10g upgrade downtime into two logical parts:

Part 1)

Install the 10g technology stack ahead of time, including the CRS and database binaries, any patchsets, and RDBMS patches. Before doing this, shut down the 9i RAC/DB processes. After the 10g CRS/DB technology stack installation, shut down the 10g CRS, bring up the 9i RAC processes, and carry on as if nothing happened.

   At this stage, the 9i and 10g technology stacks will co-exist peacefully with each other. The production system can run on 9i RAC for a week or more till you decide to do the actual upgrade.

Part 2)

In the subsequent main upgrade outage, you shut down the 9i RAC/DB, bring up the database using the 10g CRS/DB Oracle home, and do the actual upgrade. This outage could last anywhere from 10-24 hrs, or even up to 32 hrs, depending on your pre-production timing practice. It is assumed that one would do at least 2-3 practice rounds of the upgrade before gaining the confidence to do the production round.

Adopting this split strategy saved us about 6 hrs in the main upgrade downtime window and we were able to do the upgrade in a window of 16-20 hrs. The size of the database was ~1 TB on HP-UX PA RISC 64 bit OS.

How was it done
When you do a 9i->10g upgrade for RAC, the following happens:

1) OCR device location is automatically picked up from /var/opt/oracle/srvConfig.loc (9i  setup file)

2) The OCR device's contents are upgraded to 10g format

3) A new file called /var/opt/oracle/ocr.loc is created with the OCR device name
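For reference, the ocr.loc created in this step typically contains little more than a pointer to the OCR device, for example (illustrative contents; the device name shown is the 10g OCR device used later in this article):

$ cat /var/opt/oracle/ocr.loc
ocrconfig_loc=/dev/rawdev/raw_10g_ocr.dbf
local_only=FALSE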

Since we had to preserve the 9i OCR device for running 9i RAC after the 10g CRS/DB techstack installation, we did the following:

1) Got a new set of OCR and voting devices for 10g. This was a separate set of devices, in addition to the 9i OCR and voting disks. Then we brought down the 9i CRS processes.

A caveat here was that HP Serviceguard (the vendor cluster solution) was required to be up in order to perform the 10g CRS installation.

2) Copied the contents of the 9i OCR into the 10g OCR device using the dd command:

$ whoami

$ dd if=/dev/rawdev/raw_9i_ocr.dbf  of=/dev/rawdev/raw_10g_ocr.dbf   bs=1024

3) Edited the srvConfig.loc file to point to the /dev/rawdev/raw_10g_ocr.dbf file

4) Did the 10g CRS installation and ran root.sh, which upgraded the OCR device to 10g format

5) We then installed the DB binaries and applied the patchset, along with RDBMS patches. This was the major activity and took about 4-5 hrs.

Another option to save time here would have been to clone the ORACLE_HOME from the UAT servers, but since this was production, we wanted to do everything with a clean start and not carry over any mistakes from UAT.

6) Brought down the 10g CRS services with $ORA_CRS_HOME/bin/crsctl stop crs and also disabled the automatic startup of CRS with /etc/init.d/init.crs disable

7) Re-pointed the 9i OCR device back in /var/opt/oracle/srvConfig.loc as  /dev/rawdev/raw_9i_ocr.dbf

Then we brought the 9i CRS services back up and started the database with the 9i binaries.

At this point, the 9i and 10g binaries co-existed with each other peacefully as if 10g techstack was never there.
Conclusion
There might be better ways of reducing downtime during the upgrade, but this approach was one of the selling points in projecting a successful 10g upgrade to the customer, especially when we were under extreme pressure to keep the main downtime window to less than 24 hrs.

puzzling RMAN: No channel to restore a backup or copy of datafile?

Gaurav Verma - Fri, 2008-05-23 06:43

An RMAN puzzle  
In this article, we will look at a puzzling situation that happened at one of our environments, when we were carrying out an RMAN restoration. The RMAN backup was taken using a catalog (not using controlfile) and our mission was to restore PROD to UAT. Simple, right?

An Arrrgh! Moment..

We were using the following restoration script:

RMAN> run
2> {
3> set until time  "to_date('05/13/08 14:40:00','MM/DD/YY hh24:mi:ss')";
4> allocate auxiliary channel 'adisk_0' type DISK;
5> allocate auxiliary channel 'adisk_1' type DISK;
6> allocate auxiliary channel 'adisk_2' type DISK;
7> allocate auxiliary channel 'adisk_3' type DISK;
9> }

The backup location was available and so were adequate backup channels, but we always ended up getting this laconic error:


Starting restore at 13-MAY-08

released channel: adisk_0
released channel: adisk_1
released channel: adisk_2
released channel: adisk_3
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/13/2008 15:25:38
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06100: no channel to restore a backup or copy of datafile 471
RMAN-06100: no channel to restore a backup or copy of datafile 466

RMAN> **end-of-file**


Recovery Manager complete.

No channel to restore?... Huh? Well, as you can see, the channels were well defined, and the RMAN log even showed that the channels were being allocated. We tried increasing the channels to 8 or 10, and also tried allocating auxiliary channels (which turned out to be irrelevant), but to no avail.

Another option we tried was a plain and simple restoration using the RMAN> restore datafile 1; command. Although that failed, since the path of datafile 1 had not been switched to UAT's convention, it did not prove anything.

My teammates, Dhandapani Perungulam and Sandeep Rebba, were trying out different options. Then someone had the bright idea of checking the timestamp of the completed backup. It was possible that we were trying to restore to a point in time before the backup had completed.
Verifying the RMAN backup completion time from the catalog
First of all, we needed to set the NLS_DATE_FORMAT variable to include the HH24:MI format:

$  export NLS_DATE_FORMAT='dd-mon-rr hh24:mi'

Then we needed to connect to the target (the source instance for duplication, in RMAN terminology) and the RMAN catalog:

$  rman catalog rman@rman target sys@prd

Recovery Manager: Release - Production on Fri May 23 14:34:02 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

rman database Password:

target database Password:

connected to target database: PRD (DBID=2530951715)
connected to recovery catalog database


This command will give us details of the backupsets and the time when the backup completed:

RMAN> list backup of database completed between '12-may-2008' and '14-may-2008';

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2794098 Incr 0  53.28G     SBT_TAPE    00:38:54     13-may-08 23:11
        BP Key: 2794121   Status: AVAILABLE  Compressed: NO  Tag: TAG20080513T203101
        Handle: racprod1_PRD<PRD_116563:654647578:1>.dbf   Media:
  List of Datafiles in backup set 2794098
  File LV Type Ckp SCN    Ckp Time        Name
  ---- -- ---- ---------- --------------- ----
  7    0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/owad01.dbf
  295  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/system06.dbf
  379  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/rbs11.dbf
  459  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_interface01.dbf
  460  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_summary01.dbf
  481  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_tx_data12.dbf
  509  0  Incr 7247000051870 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_archive02.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2794099 Incr 0  55.62G     SBT_TAPE    00:39:54     13-may-08 23:12
        BP Key: 2794122   Status: AVAILABLE  Compressed: NO  Tag: TAG20080513T203101
        Handle: racprod1_PRD<PRD_116562:654647562:1>.dbf   Media:
  List of Datafiles in backup set 2794099
  File LV Type Ckp SCN    Ckp Time        Name
  ---- -- ---- ---------- --------------- ----
  5    0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/system05.dbf
  17   0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/rbs15.dbf
  18   0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_interface03.dbf
  22   0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_summary05.dbf
  398  0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/discoverer01.dbf
  480  0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_tx_data11.dbf
  508  0  Incr 7247000051797 13-may-08 22:32 /u01/oracle/prdracdata/apps_ts_nologging02.dbf


As we can see, 13-may-08 22:32 is the golden time. Anything after that in the until time statement will do.

Further progress
So then, we tried this script:

RMAN> run {
2> allocate auxiliary channel 'adisk_0' type DISK;
3> allocate auxiliary channel 'adisk_1' type DISK;
4> allocate auxiliary channel 'adisk_2' type DISK;
5> allocate auxiliary channel 'adisk_3' type DISK;
6> allocate auxiliary channel 'adisk_4' type DISK;
8> set until time  "to_date('13/05/2008 22:40:00','DD/MM/YYYY HH24:MI:SS')";
11> }

Although we got the following error for the archivelogs, we were able to open the database after manually supplying them and then running SQL> alter database open resetlogs; :


Starting recover at 13-MAY-08

starting media recovery

Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/oracle/uatracdata/system01.dbf'

released channel: adisk_0
released channel: adisk_1
released channel: adisk_2
released channel: adisk_3
released channel: adisk_4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/13/2008 22:47:51
RMAN-03015: error occurred in stored script Memory Script
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06102: no channel to restore a backup or copy of log thread 2 seq 111965 lowscn 7246877796511
RMAN-06102: no channel to restore a backup or copy of log thread 1 seq 143643 lowscn 7246877796417
Conclusion
Some RMAN errors can be really hard to understand or debug. In particular, no channel to restore a backup or copy seems to be a very generic error message in RMAN, which could really mean anything and can be very misleading.

For example, we can see that at the end, it gave the
RMAN-06102: no channel to restore a backup or copy of log thread 2 seq 111965 lowscn 7246877796511 error, but that meant that it could not find the archive logs. After supplying the archive logs, the database opened up fine with the resetlogs option.
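In such a situation, a quick sanity check is to ask the catalog whether backups of the missing log sequences exist at all, for example (the sequence range below is taken from the RMAN-06102 messages above):

RMAN> list backup of archivelog sequence between 111960 and 111970 thread 2;
RMAN> list backup of archivelog all completed between '12-may-2008' and '14-may-2008';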

upcoming posts

Pankaj Chandiramani - Thu, 2008-05-22 20:37

It's been a long time since I posted something; the reason being I have started in new directions... working with OIM, OID & related stuff these days.

So I will be writing a post on installing/configuring OIM in the next couple of days & will keep posting regularly on the topic.


Categories: DBA Blogs

Hurray Patchset!

Carl Backstrom - Thu, 2008-05-22 16:02
As Joel blogged here our patchset for Application Express 3.1 is out.

Hopefully this fixes any issues you ran into with APEX 3.1 so we can start on creating all brand new ones in future versions ;)

CFOs rate Data Integrity a Critical Issue

Project Directions - Thu, 2008-05-22 10:23

In a recent article from Baseline titled One in Nine CFOs See High Return Benefits from IT, they report that of the 629 CFOs surveyed, close to half rate improving data quality in their enterprise as a critical issue.

Here in Oracle Projects, one of our greatest assets has always been our tight integration with the rest of the E-Business Suite, such as Financials and HR.  When products are built from the ground up to be integrated together, you eliminate many of the problems that arise from having disparate 3rd-party systems all trying to share data.

Even if you must use specialized best-of-breed or niche products to help manage your business, CFOs and CIOs should at least look into our AIA strategy to build tighter integration and hence improve their data integrity.

Using mod_rails with Rails applications on Oracle

Raimonds Simanovskis - Tue, 2008-05-20 16:00

Like many others, I got interested in the new mod_rails deployment solution for Rails applications. And when I read how to use it for development environment needs, I decided to try it out.

As you probably know, I am using a Mac for development and an Oracle database for many Rails applications. So if you do as well, then first you need to set up Ruby and Oracle on your Mac.

After that I installed and set up mod_rails according to these instructions and these additional notes.

One additional thing I had to do was change the user that runs the Apache httpd server, as otherwise the default www user could not see my Rails application directories. You do this in /etc/apache2/httpd.conf:

User yourusername
Group yourusername

And then I started to fight with the issue that the ruby started from mod_rails could not load the ruby-oci8 library, as it could not find the Oracle Instant Client shared libraries. The reason was that mod_rails launched ruby with a very minimal set of environment variables; e.g., since the DYLD_LIBRARY_PATH environment variable was not set, ruby-oci8 could not find the Oracle Instant Client libraries.

The problem is that there is no documented way to pass the necessary environment variables to mod_rails. Unfortunately, mod_rails ignores SetEnv settings from the Apache httpd.conf file. Therefore I needed a workaround, and finally arrived at the following solution.

I created an executable script file /usr/local/bin/ruby_with_env:

#!/bin/sh
# Make the Oracle Instant Client libraries and TNS configuration visible to ruby-oci8
export DYLD_LIBRARY_PATH="/usr/local/oracle/instantclient_10_2:$DYLD_LIBRARY_PATH"
export TNS_ADMIN="/usr/local/oracle/network/admin"
exec /usr/bin/ruby "$@"

and then in the Apache httpd.conf file I changed the RailsRuby line to

RailsRuby /usr/local/bin/ruby_with_env

In this way I was able to set the necessary environment variables before Ruby and Rails were started, and after this change the ruby-oci8 library loaded successfully.
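One small note: after editing httpd.conf, Apache has to be restarted so that mod_rails picks up the new RailsRuby setting; on Mac OS X something like this should do:

$ sudo apachectl restart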

You can also use this solution on Linux hosts where you deploy Rails applications in production.

Currently I still have an issue with mod_rails: it fails to execute RMagick library methods (RMagick is compiled against ImageMagick). I get strange errors in the Apache error_log:

The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().
[error] [client ::1] Premature end of script headers:

When I was running the same application with Mongrel, everything worked correctly. If anyone has any ideas about what could be the reason, please leave a comment.

Categories: Development

PMXPO – Final Thoughts

Project Directions - Mon, 2008-05-19 22:57

I guess I didn’t get back to my thoughts on this cool virtual conference that day but wanted to get it done before I forget!

What did I like:

  • No travel to a conference!
  • Ability to pop in and out of sessions as I liked
  • Easily browse the vendor offerings and chat if you’d like
  • Could continue to work while listening
  • Their briefcase feature made grabbing vendor and presentation information extremely easy – no having to leave to someone’s website to get materials
  • One click entry of contests (where is my Xbox 360 bundle or iPod Touch??)  Since you’re logged in I liked the fact they didn’t make you have to re-enter any information or fill out a contact card
  • Reminders for presentations that come in your email so you don’t miss a session
  • All presentations and materials are available later if you missed a session

What I wasn’t so hot on:

  • You could easily tune out because you’re continuing to work – next thing you know you missed 5-10 minutes of what the speaker said or have no context of the slide you’re looking at
  • The content was pretty much focused on new Project Managers – both myself and a couple of colleagues that attended didn’t find many sessions all that riveting
  • The networking is very difficult to do – you basically know a name and company (sometimes!) of a person and most people didn’t even put a picture on their avatar.  I can’t imagine you’d ever really meet someone new on this format unless you want to pester someone with a message.

Those are my top of the head thoughts.  Overall I thought it looked fantastic – they did a great job making it look clean and professional.  I would definitely attend next year although I hope the topics presented get a little bit more out of the box for more experienced PM’s.  Maybe they can offer more than one session at a time to be able to cover more ground in the day and let people choose what interests them.

To b:\ or not to B:\

Jornica - Mon, 2008-05-19 13:50

Recently, I ran into a small issue with Java permissions. The starting point is the DirList Java procedure, which lists the contents of an OS directory. Here is the code to get started (executed as SCOTT):


import java.io.*;
import java.sql.*;

public class DirList {
  public static void getList(String directory) throws SQLException {
    File path = new File(directory);
    String[] list = path.list();
    String element;
    for (int i = 0; i < list.length; i++) {
      element = list[i];
      // Output each directory entry (the loop body is truncated in the original post;
      // writing to standard output is one plausible completion)
      System.out.println(element);
    }
  }
}

CREATE OR REPLACE PROCEDURE get_dir_list ( p_directory IN VARCHAR2 )
AS language java name 'DirList.getList( java.lang.String )';

Don't grant the role JAVAUSERPRIV but use a more granular option. Grant read permission to SCOTT on the d: drive:

EXECUTE dbms_java.grant_permission( 'SCOTT', 'java.io.FilePermission','d:\','read' );

Everything is in place now, time to run the procedure:

EXECUTE get_dir_list('D:\');
ORA-29532: Java call terminated by uncaught Java exception: java.security.AccessControlException:
the Permission (java.io.FilePermission D:\ read)
has not been granted by dbms_java.grant_permission to SchemaProtectionDomain(SCOTT|PolicyTableProxy(SCOTT))
ORA-06512: at "SCOTT.GET_DIR_LIST", line 0
ORA-06512: at line 2

An error occurred; perhaps the directory does not exist? Executing a dir D:\ at the command prompt on the database server lists the files. The command prompt is not case sensitive; executing a dir d:\ returns the same listing. But perhaps DBMS_JAVA.GRANT_PERMISSION is case sensitive?

EXECUTE get_dir_list( 'd:\' );
PL/SQL procedure successfully completed

System Volume Information
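As a side note, one way to see exactly what has been granted, including the case of the file path, is to query the Java policy view as a privileged user; a quick sketch:

SELECT kind, grantee, type_name, name, action, enabled
  FROM dba_java_policy
 WHERE grantee = 'SCOTT'
   AND type_name = 'java.io.FilePermission';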

10gR2 CRS case study: CRS would not start after reboot - stuck at /etc/init.d/init.cssd startcheck

Gaurav Verma - Mon, 2008-05-19 04:25

Preface
I had recently done a 10gR2 CRS installation on SuSE Linux 9.3 and noticed that after a reboot of the RAC nodes, the CRS would not come up!

The CSS daemon was stuck at the /etc/init.d/init.cssd startcheck command:

raclinux1:/tmp # ps -ef | grep css
root      6929     1  0 13:56 ?        00:00:00 /bin/sh /etc/init.d/init.cssd fatal
root      6960  6928  0 13:56 ?        00:00:00 /bin/sh /etc/init.d/init.cssd startcheck
root      6963  6929  0 13:56 ?        00:00:00 /bin/sh /etc/init.d/init.cssd startcheck
root      7064  6935  0 13:56 ?        00:00:00 /bin/sh /etc/init.d/init.cssd startcheck

Debugging
To debug this further, I went to $ORA_CRS_HOME/log/<nodename>/client and checked the latest files there:

raclinux1:/opt/oracle/product/10.2.0/crs/log/raclinux1/client # ls -ltr
total 435
-rw-r-----  1 root   root 2561 May 18 23:20 ocrconfig_8870.log
-rw-r--r--  1 root   root  195 May 18 23:22 clscfg_8924.log
-rw-r-----  1 root   root  172 May 18 23:29 ocr_15307_3.log
-rw-r-----  1 root   root  172 May 18 23:29 ocr_15319_3.log
-rw-r-----  1 root   root  172 May 18 23:29 ocr_15447_3.log
drwxr-x---  2 oracle dba  3472 May 19 08:10 .
drwxr-xr-t  8 root   dba   232 May 19 13:50 ..
-rw-r--r--  1 root   root 2946 May 19 14:11 clsc.log
-rw-r--r--  1 root   root 7702 May 19 14:11 css.log

I viewed clsc.log and css.log with more and saw the following errors:

$ more clsc.log
2008-05-19 14:11:29.912: [ COMMCRS][1094672672]clsc_connect: (0x81c74b8) no listener at (ADDRESS=(PROTOCOL=IPC)(KEY=CRSD_UI_SOCKET))

2008-05-19 14:11:31.582: [ COMMCRS][1094672672]clsc_connect: (0x817e3f0) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth))

2008-05-19 14:11:31.583: [ default][1094672672]Terminating clsd session

$ more css.log
2008-05-19 02:42:48.307: [  OCROSD][1094672672]utopen:7:failed to open OCR file/disk /var/opt/oracle/ocr1 /var/opt/oracle/oc
r2, errno=19, os err string=No such device
2008-05-19 02:42:48.308: [  OCRRAW][1094672672]proprinit: Could not open raw device
2008-05-19 02:42:48.308: [ default][1094672672]a_init:7!: Backend init unsuccessful : [26]
2008-05-19 02:42:48.308: [ CSSCLNT][1094672672]clsssinit: Unable to access OCR device in OCR init.

2008-05-19 02:43:41.982: [  OCROSD][1094672672]utopen:7:failed to open OCR file/disk /var/opt/oracle/ocr1 /var/opt/oracle/oc
r2, errno=19, os err string=No such device
2008-05-19 02:43:41.983: [  OCRRAW][1094672672]proprinit: Could not open raw device
2008-05-19 02:43:41.983: [ default][1094672672]a_init:7!: Backend init unsuccessful : [26]
2008-05-19 02:43:41.983: [ CSSCLNT][1094672672]clsssinit: Unable to access OCR device in OCR init.

2008-05-19 02:46:40.204: [ CSSCLNT][1094672672]clsssInitNative: connect failed, rc 9

2008-05-19 14:11:28.217: [ CSSCLNT][1094672672]clsssInitNative: connect failed, rc 9

2008-05-19 14:11:37.186: [ CSSCLNT][1094672672]clsssInitNative: connect failed, rc 9

So it was pointing towards the OCR not being available, as could be verified from the /tmp/crsctl.<PID> files too:

raclinux1:/tmp # ls -ltr crsctl*
-rw-r--r--  1 oracle dba 148 May 19 02:44 crsctl.6826
-rw-r--r--  1 oracle dba 148 May 19 02:44 crsctl.6679
-rw-r--r--  1 oracle dba 148 May 19 02:44 crsctl.6673
-rw-r--r--  1 oracle dba 148 May 19 02:49 crsctl.7784
-rw-r--r--  1 oracle dba 148 May 19 02:49 crsctl.7890
-rw-r--r--  1 oracle dba 148 May 19 02:49 crsctl.7794
-rw-r--r--  1 oracle dba 148 May 19 13:55 crsctl.7034
-rw-r--r--  1 oracle dba 148 May 19 13:55 crsctl.6886
-rw-r--r--  1 oracle dba 148 May 19 13:55 crsctl.6883
-rw-r--r--  1 oracle dba 148 May 19 14:18 crsctl.6960
-rw-r--r--  1 oracle dba 148 May 19 14:18 crsctl.7064
-rw-r--r--  1 oracle dba 148 May 19 14:18 crsctl.6963

raclinux1:/tmp # more crsctl.6963
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage Operating System error [Permission denied] [13]
Permission issue!
Duh! So it was a permission issue on the OCR disk (at this point), which could expand into a permissions issue for the voting and ASM disks later:

raclinux1:/tmp # ls -ltr /dev/raw/raw*
crw-rw-r--  1 root disk 162,  9 Nov 18  2005 /dev/raw/raw9
crw-rw-r--  1 root disk 162,  8 Nov 18  2005 /dev/raw/raw8
crw-rw-r--  1 root disk 162,  7 Nov 18  2005 /dev/raw/raw7
crw-rw-r--  1 root disk 162,  6 Nov 18  2005 /dev/raw/raw6
crw-rw-r--  1 root disk 162,  5 Nov 18  2005 /dev/raw/raw5
crw-rw-r--  1 root disk 162,  4 Nov 18  2005 /dev/raw/raw4
crw-rw-r--  1 root disk 162,  3 Nov 18  2005 /dev/raw/raw3
crw-rw-r--  1 root disk 162,  2 Nov 18  2005 /dev/raw/raw2
crw-rw-r--  1 root disk 162, 15 Nov 18  2005 /dev/raw/raw15
crw-rw-r--  1 root disk 162, 14 Nov 18  2005 /dev/raw/raw14
crw-rw-r--  1 root disk 162, 13 Nov 18  2005 /dev/raw/raw13
crw-rw-r--  1 root disk 162, 12 Nov 18  2005 /dev/raw/raw12
crw-rw-r--  1 root disk 162, 11 Nov 18  2005 /dev/raw/raw11
crw-rw-r--  1 root disk 162, 10 Nov 18  2005 /dev/raw/raw10
crw-rw-r--  1 root disk 162,  1 Nov 18  2005 /dev/raw/raw1

I enabled read and write permission for the raw devices using # chmod +rw /dev/raw/raw*, but even after that the latest /tmp/crsctl.<PID> files being generated were showing this message:

raclinux1:/tmp # more crsctl.6960
Failure -2 opening file handle for (vote1)
Failure 1 checking the CSS voting disk 'vote1'.
Failure -2 opening file handle for (vote2)
Failure 1 checking the CSS voting disk 'vote2'.
Failure -2 opening file handle for (vote3)
Failure 1 checking the CSS voting disk 'vote3'.
Not able to read adequate number of voting disks

At this point, I just chowned /dev/raw/raw* to oracle:dba like this:

raclinux1:/tmp # chown oracle:dba /dev/raw/raw*

After 1-2 mins, the CSS came up:

raclinux1:/tmp # ps -ef | grep css
root      6929     1  0 13:56 ?        00:00:00 /bin/sh /etc/init.d/init.cssd fatal
root     10900  6929  0 14:39 ?        00:00:00 /bin/sh /etc/init.d/init.cssd daemon
oracle   10980 10900  0 14:40 ?        00:00:00 /bin/su -l oracle -c /bin/sh -c 'ulimit -c unlimited; cd /opt/oracle/product/10.2.0/crs/log/raclinux1/cssd;  /opt/oracle/product/10.2.0/crs/bin/ocssd  || exit $?'
oracle   10981 10980  0 14:40 ?        00:00:00 /bin/sh -c ulimit -c unlimited; cd /opt/oracle/product/10.2.0/crs/log/raclinux1/cssd;  /opt/oracle/product/10.2.0/crs/bin/ocssd  || exit $?
oracle   11007 10981  2 14:40 ?        00:00:00 /opt/oracle/product/10.2.0/crs/bin/ocssd.bin
root     12013  7414  0 14:40 pts/2    00:00:00 grep css
raclinux1:/tmp #

The CRS components came up fine automatically:

raclinux1:/opt/oracle/product/10.2.0/crs/bin # ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

The ASM and RAC instances also came up fine:

raclinux1:/opt/oracle/product/10.2.0/crs/bin # ps -ef |grep smon
oracle   12257     1  0 14:41 ?        00:00:00 asm_smon_+ASM1
oracle   13100     1  0 14:41 ?        00:00:02 ora_smon_o10g1
root     32282  7414  0 14:55 pts/2    00:00:00 grep smon
For the long term
To make this change permanent, I put it in the /etc/init.d/boot.local file, along with the modprobe hangcheck-timer command:

raclinux1:/opt/oracle/product/10.2.0/crs/bin # more /etc/init.d/boot.local
#! /bin/sh
# Copyright (c) 2002 SuSE Linux AG Nuernberg, Germany.  All rights reserved.
# Author: Werner Fink <werner@suse.de>, 1996
#         Burchard Steinbild, 1996
# /etc/init.d/boot.local
# script with local commands to be executed from init on system startup
# Here you should add things, that should happen directly after booting
# before we're going to the first run level.
chown oracle:dba /dev/raw/raw*
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

Conclusion
If simple things like permissions are not correct on the OCR devices, they can hold down the CRS daemons and the ASM/DB instances. It may be necessary to put workarounds in /etc/init.d/boot.local to get around the situation.

ORA-00904: "XMLROOT": invalid identifier

Pawel Barut - Sat, 2008-05-17 04:55
Written by Paweł Barut
Some time ago I noticed a strange problem with the XMLRoot function. I was installing an application on a production server and noticed that this code:
SQL> select XMLRoot(xmltype('<a>a</a>'))
  2  from dual;
gives an error:
select XMLRoot(xmltype('<a>a</a>'))
Error in line 1:
ORA-00904: "XMLROOT": invalid identifier
WTF, it was running perfectly on the development and test environments!
A quick search revealed that XMLROOT is a function in the XDB schema, which was missing in the production environment. I just copied the source code for the function from the test environment and could proceed further.
After some time, I decided to check why this function was missing.
A quick search showed that the function is created by the script ?\demo\schema\order_entry\xdbUtilities.sql
Strange; a well documented function is created only when you install the demo schemas? It seemed there should be another explanation.
Then I found that in the documentation this function has two mandatory attributes, while my code used only one. So there are two versions of the XMLRoot function:
  1. SQL function; see documentation
  2. Simplified version created by demo in XDB schema - this version can be also used in PL/SQL

My original code should have looked like this:
SQL> select XMLRoot(xmltype('<a>a</a>'), version '1.0', standalone yes)
  2  from dual;


<?xml version="1.0" standalone="yes"?>
This runs without the XMLROOT function in the XDB schema.

Hope this will help someone to save some time.

Categories: Development

Conversion (Data Migration) of Invoices in Receivables

Krishanu Bose - Fri, 2008-05-16 15:44

Whenever we go in for an implementation of the Receivables module, we have to consider the necessity of bringing customer open balances from the old system into Oracle Receivables.

Here are some of the key questions that need to be addressed before we take up a conversion activity. This is just a sample list, not an exhaustive one:

1. What are the different types of invoices in the existing system (invoices, credit/debit memos, commitments, chargebacks)? Can you provide invoice samples?

2. Do we need to migrate only open invoices?

3. Do we migrate closed invoices also, if yes, then for what time period?

4. Please explain the invoice numbering mechanism. Is it automatic?

5. What are the interfaces from/to your existing receivables system?

6. Will the old system still be in place for querying and reporting purpose?

One can adopt one of the following three strategies for conversion:

1. Consolidate all the open balances customer-wise and create a single open invoice for each customer in the new Oracle system. The advantage of this approach is that it is quite easy, not data intensive, and makes good business sense for small businesses with very few customers. The major drawback is that one cannot later track the individual invoices that were sent to the customer, which can also become an audit issue. In case of a dispute over payment, this consolidated invoice will remain open till the dispute is resolved. Also, invoice aging and dunning history will be lost.

2. Bring all the open and partially paid invoices and credit/debit memos into the new system, and migrate all the unapplied and partially applied receipts as well. The advantage of this approach is that you can track all open invoices individually and apply the correct receipt to the correct invoice. Also, the conversion effort is moderately low compared to migrating all open and closed invoices. The disadvantage is that you cannot track closed invoices in the new system, and it would be tough to handle scenarios where there is a dispute regarding an incorrect receipt application, etc. This is the most common approach taken for receivables invoice, credit/debit memo, and receipt migration.

3. Migrate all open and closed invoices to the new system, and reapply the migrated receipts to invoices in the new system. This approach makes sense only if your receivables data volume is quite small; otherwise the effort involved in migrating all closed invoices and credit memos does not make much business sense.

The next question that arises is how we should migrate the invoices, credit/debit memos and receipts to the new system. Oracle provides standard interfaces to load the same. We can also use tools like Dataloader or manually key in the data into Oracle.

In this article I will talk about invoice and credit/debit memo conversion only. Prior to invoice migration, customer migration should be complete, apart from other prerequisites. Following is the list of prerequisites that should be completed prior to invoice and credit/debit memo conversion:

•Set-up of Customer Payment Terms should be complete

•Set-up of Currencies should be complete (this is necessary in case you have foreign currency invoices also)

•Set-up of Transaction Types should be complete

•Set-up of Accounting Rules should be complete

•Set-up of Tax rates and Tax codes should be complete

•Set up for sales representative should be complete

•Set up for debtor area should be complete

•Set up for income category should be complete

•Automatic customer invoice numbering should be set to 'No'

•Customer and Customer address should be migrated in the system

•Disable the Invoice interface purge program so that the data successfully imported should not get purged in the interface table.

•Set up for invoice batch source name should be complete

In the next step, extract the invoice data from the legacy files and, using SQL*Loader, populate the interface tables RA_INTERFACE_LINES_ALL and RA_INTERFACE_DISTRIBUTIONS_ALL. Then submit the AutoInvoice open interface program. Data from the two interface tables is loaded by AutoInvoice into the standard Receivables transaction base tables, such as:

RA_CUSTOMER_TRX_ALL
RA_CUSTOMER_TRX_LINES_ALL
RA_CUST_TRX_LINE_GL_DIST_ALL
RA_CUST_TRX_LINE_SALESREPS_ALL
AR_PAYMENT_SCHEDULES_ALL

Ensure that the Purge Interface check box is not checked when you submit the AutoInvoice program. In the AutoInvoice errors form you can see the errors corresponding to the failed records. Correct the errors in the interface table and rerun the AutoInvoice program. Submit the AutoInvoice Purge Program separately; only records that have been successfully processed by AutoInvoice are purged.
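If you prefer SQL*Plus over the errors form, the failed records and their messages can also be read directly from the errors interface table; a quick sketch (only a couple of the available columns are selected here):

SELECT interface_line_id, message_text
  FROM ra_interface_errors_all
 ORDER BY interface_line_id;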

Using AutoInvoice you can migrate invoices, credit/debit memos, and on-account credits into Oracle. However, you have to set up grouping rules (Navigation > Setup > Transactions > AutoInvoice > Grouping Rule) to group lines into one transaction, and ordering rules (Navigation > Setup > Transactions > AutoInvoice > Line Ordering Rules) to determine the order of the transaction lines on a particular invoice.

PMXPO – It’s better than I expected!

Project Directions - Thu, 2008-05-15 09:04

Just a heads up that I’m attending the virtual PMXPO 2008 right now.  It is much cooler than I thought it would be.  It’s just like being at a conference without all the hassles of travel and that annoying thing called walking between sessions and booths!

Right now I’m listening to the keynote speaker and there are several more sessions coming throughout the day.  I don’t think it’s too late to sign up so just go to http://events.unisfair.com/rt/pmexpo and sign up now.  I’ll have more details on what I liked about it later.

Bluetooth for Solaris

Siva Doe - Wed, 2008-05-14 23:16

Check this announcement. One of the gaps identified for Solaris on the desktop was Bluetooth connectivity. Looks like these folks have filled that gap. They are still missing a GUI for better integration into the desktop; right now, it is all command line, like an FTP client. I am happy to say that I too played a part as their external guide. Good work there, folks.

Silverlight 2.0

Oracle EPM Smart Space - Wed, 2008-05-14 16:53

In my post on desktop and web convergence I said I would spend some time on each of the technologies I mentioned, so here we go. I will start out with Silverlight, only because for the past 2 years I have been immersed in C# and smart clients, so it was a pretty simple jump. Don't expect that after reading this post you will be an expert in Silverlight, as I have only scratched the surface. I figure the best way to start is to list the pros and cons; here are some things I think fall into the pro category:

Language Support

Silverlight allows development and extensions using a variety of programming languages, including JavaScript, C#, and VB.NET. This will help ease the learning curve for existing developers familiar with these languages. Silverlight also has a Dynamic Language SDK that allows developers to communicate with the .NET libraries included with Silverlight. This makes it open to many other language possibilities, like Python and Ruby.

IDE availability:

This is often a con for newer technologies, but for Silverlight it is a pro: it's integrated into Visual Studio 2008 and has a separate design environment of its own (Expression Studio). Expression Studio is truly for the designer and allows UI-driven creation of animations and the overall user experience. Having great IDEs for both the designer and the developer is a big plus.

Platform and browser support:

Silverlight supports many platforms and browsers. This is different from what one might expect from Microsoft, but Silverlight supports IE, Safari, and Mozilla and will run on both Windows and the Mac. Plus there is now support for a growing list of mobile devices.

User Experience Support:

Silverlight has a wide variety of features that support a rich user experience; this, combined with the power of Expression Studio, makes for crisp UIs. Features include media support, panel and canvas support, animations with timelines, AJAX support, etc.

Here are some things that I considered cons:

Limited .Net Framework:

This one can go either way, but for me it is a con. Silverlight comes with a limited set of the .NET framework assemblies (to keep the client install small), and for developers who are used to desktop development this will be difficult. For others new to the .NET world, this will not be a con.

Separate IDE's for Design and Development:

I am always looking for that one IDE that does it all, and I tend to cross the line between designer and developer, so switching in and out of Visual Studio and Expression Blend was not so smooth. I wish they would just stuff Blend into Visual Studio, but I am sure many would disagree on this.

Keep in mind that Silverlight 2.0 is currently in beta and, like any other beta software, you should install and use it with care. Here are some cool Silverlight samples.

Categories: Development


Subscribe to Oracle FAQ aggregator