Problem installing CRS using raw devices... 10.2 + Sun Solaris

From: Pankaj Jain <pjain_at_ibasis.net>
Date: Mon, 27 Apr 2009 16:33:03 -0400
Message-ID: <D98F41646724244FAC1DC0B4462220250200018C_at_SERVER719C.VIPCALLING.CORP>



I want to bring to your notice one thing which is unique in our case: the two servers that I am configuring for CRS/ASM/RAC using raw devices are DIFFERENT models. They run the same OS, but because of the hardware difference the network interface names differ on each node, for both the public and the private networks. For your reference, I am furnishing the output of uname -a from both servers along with the public and private interface names.

racasm1>uname -a
SunOS csgbur01 5.10 Generic_137137-09 sun4v sparc SUNW,Netra-T5220

racasm2>uname -a
SunOS dbmbur01 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-480R

Interface names - Public

csgbur01 - e1000g0
dbmbur01 - ce0

Interface names - Private

csgbur01 - e1000g2
dbmbur01 - ce2
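
(For anyone reproducing this, a quick way to confirm what each node actually reports is sketched below. This is only a sketch; $CRS_HOME stands in for wherever your Clusterware home lives, and oifcfg is available only after CRS is installed on the node.)

    # Solaris: list all plumbed interfaces and their addresses
    /usr/sbin/ifconfig -a

    # After CRS is installed: list the interfaces Clusterware itself can see
    $CRS_HOME/bin/oifcfg iflist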

Since the CRS installation needs two sets of IP addresses, one set for each server, as a workaround I first installed CRS on one node only (csgbur01) and, once that was done, tried to add the second node (dbmbur01) via addnode. Since I can't use vipca because of the different interface names, I am trying to configure the two nodes with oifcfg setif as follows, and I get the error shown below:

racasm1>sudo oifcfg setif -node csgbur01 e1000g2/10.10.100.0:cluster_interconnect
racasm1>sudo oifcfg setif -node dbmbur01 ce2/10.10.100.0:cluster_interconnect
racasm1>sudo oifcfg setif -node csgbur01 e1000g0/216.168.188.0:public
racasm1>sudo oifcfg setif -node dbmbur01 ce0/216.168.188.0:public

PROC-4: The cluster registry key to be operated on does not exist.
PRIF-11: cluster registry error
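
(A side note in case it helps others hitting the same errors: PROC-4/PRIF-11 on a per-node setif generally means the key for that node does not yet exist in the OCR, so it is worth confirming that the addnode step really registered dbmbur01 before retrying. A minimal check, again assuming $CRS_HOME points at the Clusterware home:)

    # Which nodes does Clusterware know about, with their node numbers?
    $CRS_HOME/bin/olsnodes -n

    # What interface definitions are already stored in the OCR?
    sudo $CRS_HOME/bin/oifcfg getif

    # Is the OCR itself reachable and healthy?
    sudo $CRS_HOME/bin/ocrcheck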

When I discussed this issue with our Unix admin, his response was as follows.

In looking at the csgbur01/dbmbur01 Oracle ASM cluster, the shared storage looks fine from dbmbur01, but it is largely invisible from csgbur01. I therefore suspect that the clusterware on dbmbur01 has ejected csgbur01 from accessing the shared SAN storage. I googled a bit, but I don't know how to use Oracle commands to query whether this is indeed the case. Let me know if you think this makes any sense, since I see no evidence of a SAN/storage issue.
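
(A few stock 10.2 commands can at least partially answer the admin's question about whether a node has been evicted; a sketch, assuming the CRS stack is at least partly up and $CRS_HOME is the Clusterware home:)

    # On each node: is the local CRS stack (CSS/CRS/EVM) healthy?
    $CRS_HOME/bin/crsctl check crs

    # On any node that is up: which nodes are cluster members,
    # and what state are the registered resources in?
    $CRS_HOME/bin/olsnodes
    $CRS_HOME/bin/crs_stat -t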

FYI: we have successfully set up a RAC environment using VERITAS Cluster on these same two servers (csgbur01 and dbmbur01). In light of the above, please advise at your earliest convenience if anybody has a solution.

Thanks & regards,  

Pankaj Jain

10 Second Avenue, Burlington, MA 01803

Work (781) 505-7925 Cell (978) 987-0004 Fax (781) 505-7382

Email: pjain_at_ibasis.net ; 9789870004_at_vtext.com

Confidentiality Statement: This e-mail contains proprietary information of iBasis. It is exclusively intended for the recipient of this e-mail. The information should not be copied or distributed to third parties.


From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of A Ebadi
Sent: Wednesday, January 14, 2009 12:55 PM
To: oracle-l_at_freelists.org
Subject: RAC - remove node

Hi,  

Has anyone on this list had experience removing a node from an Oracle RAC cluster when that node is down/gone? In other words, does the node you are trying to remove have to be up for you to remove it?  

Background:
We have a six-node RAC cluster that we're having trouble removing nodes from and need some help. Nodes 1, 2, 5, and 6 are all still fully in the cluster, while nodes 3 and 4 are only partially in. We tried removing nodes 3 and 4 and were not able to, due to dbca hanging and other issues that followed. Any suggestions would be appreciated.
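
(Not a definitive answer, but in 10.2 the clusterware-side removal can be driven entirely from a surviving node, so the dead node should not have to be up. A sketch only; the node name/number below are illustrative and $CRS_HOME is your Clusterware home:)

    # On a surviving node: look up the node number of the node to remove
    $CRS_HOME/bin/olsnodes -n

    # As root: stop its nodeapps (VIP etc.), then delete it from the OCR
    srvctl stop nodeapps -n node3
    $CRS_HOME/install/rootdeletenode.sh node3,3

    # Update the OUI inventory on the surviving nodes
    $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME \
        "CLUSTER_NODES={node1,node2,node5,node6}" CRS=TRUE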

We are on Oracle 10.2.0.3 with ASM on Solaris 10.

Thanks,
Abdul

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Apr 27 2009 - 15:33:03 CDT
