root.sh fails for 11gR2 Grid Infrastructure install

From: Kumar Madduri <ksmadduri_at_gmail.com>
Date: Fri, 28 Jan 2011 14:15:40 -0800
Message-ID: <AANLkTinPvZCp4Mi3yvgSgZHRKO-HEq9gtyXFt4S5B_Cy_at_mail.gmail.com>



Hello All:
Any idea on why this may be happening?
root.sh ran successfully on node 1
When running it on node 2, it fails with this error:

[root_at_asiadbg3dev2 grid]# /app/11.2.0/grid/root.sh
Running Oracle 11g root script...
The following environment variables are set as:

    ORACLE_OWNER= oracle
    ORACLE_HOME= /app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/11.2.0/grid/crs/install/crsconfig_params
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'asiadbg3dev2'
CRS-2676: Start of 'ora.cssdmonitor' on 'asiadbg3dev2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'asiadbg3dev2'
CRS-2672: Attempting to start 'ora.diskmon' on 'asiadbg3dev2'
CRS-2676: Start of 'ora.diskmon' on 'asiadbg3dev2' succeeded
CRS-2676: Start of 'ora.cssd' on 'asiadbg3dev2' succeeded
Mounting Disk Group DATA failed with the following message:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15003: diskgroup "DATA" already mounted in another lock name space

Configuration of ASM ... failed
see asmca logs at /app/oracle_base/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6464.

/app/11.2.0/grid/perl/bin/perl -I/app/11.2.0/grid/perl/lib
-I/app/11.2.0/grid/crs/install
/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
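
For reference, this is how I'm planning to roll back the partial
configuration on node 2 before retrying, once the mount problem is
understood. Just a sketch, assuming the Grid home is /app/11.2.0/grid
and nothing else has been configured on this node:

# On node 2 only, as root: deconfigure the partial root.sh run
/app/11.2.0/grid/perl/bin/perl -I/app/11.2.0/grid/perl/lib \
    -I/app/11.2.0/grid/crs/install \
    /app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force

# Then re-run root.sh after the underlying mount issue is fixed
/app/11.2.0/grid/root.sh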


The DATA disk group lives on a shared device that is visible from both nodes, so root.sh on node 2 should be mounting the existing disk group, not trying to create it. The oracleasm checks further below give the same output on both nodes.
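
To confirm that node 1's ASM instance already has DATA mounted, this is
the check I'm using (rough sketch; assumes the instance name is +ASM1
and the grid environment is set):

# On node 1, as the grid software owner
export ORACLE_SID=+ASM1
export ORACLE_HOME=/app/11.2.0/grid
$ORACLE_HOME/bin/sqlplus -s "/ as sysasm" <<'EOF'
select name, state from v$asm_diskgroup;
EOF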

[root_at_asiadbg3dev2 asmca]# /usr/sbin/oracleasm listdisks

FSS_POC_ASM

[root_at_asiadbg3dev1 crsconfig]# /usr/sbin/oracleasm querydisk /dev/emcpowerb1
Device "/dev/emcpowerb1" is marked an ASM disk with the label "FSS_POC_ASM"
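
I also rescanned ASMLib on node 2 to make sure it sees the same disk
(just the standard oracleasm checks):

# On node 2, as root
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks
/usr/sbin/oracleasm querydisk FSS_POC_ASM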

Thank you
Kumar

--
http://www.freelists.org/webpage/oracle-l