RE: RACnode communication problem?

From: Amaral, Rui <Rui.Amaral_at_tdsecurities.com>
Date: Wed, 28 Sep 2011 11:59:57 -0400
Message-ID: <B38858412AFF9F4098331DE549A774490638C328C3_at_EX7T2-SV05.TDBFG.COM>



Jed,

From your output it looks like the banner is causing the problem.

You run "ssh flux-rac-node-wcdp-02 ls -l /tmp" and it returns the warning banner ahead of the listing - cluvfy expects remote commands to produce no extra output. That's the first place to start.
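A common fix (a sketch, assuming the banner text comes from sshd's Banner directive - adjust to wherever it is actually configured on your nodes) is in /etc/ssh/sshd_config on each node:

```
# /etc/ssh/sshd_config -- the Banner file is sent to every ssh session,
# including the non-interactive ones cluvfy runs, so its text pollutes
# the remote command output:
#Banner /etc/issue.net     # comment out, or move the text to /etc/motd,
                           # which is only printed on interactive logins
PrintMotd yes
```

Then restart sshd (e.g. "service sshd restart" on RHEL 5) and re-run runcluvfy.sh. If the banner instead comes from an echo in a shell startup file, guard it with an interactive-session test.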

Rui Amaral

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Walker, Jed S
Sent: Wednesday, September 28, 2011 11:52 AM
To: oracle-l_at_freelists.org
Subject: RACnode communication problem?

Hi,
I'm trying to install RAC 11.2.0.3.0 on Linux x86-64, RedHat 5.3 (FYI - I also tried with 11.2.0.2.0 and got the same result).

Has anyone seen this before, and do you have a solution? runcluvfy.sh and runInstaller complain that the nodes can't talk to each other, but I can ssh (passwordless) between them, and Oracle has even copied files from node1 to the other nodes in /tmp. I believe nothing is actually wrong, but I need to figure out why Oracle suddenly thinks something is.

BTW, yesterday it wasn't complaining at all, but the servers were rebooted last night. Everything works fine when I do things manually, but Oracle doesn't seem to think it does. I'm hoping I'm either doing something "RAC newbie" or it is just a weird Oracle thing.

Here is what runcluvfy.sh says (runInstaller reports the same). (Note: yes, I changed the first two octets of the addresses so I don't get in trouble.)

[oracle_at_flux-rac-node-wcdp-01 grid]$ ./runcluvfy.sh stage -pre crsinst -n flux-rac-node-wcdp-01,flux-rac-node-wcdp-02

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "flux-rac-node-wcdp-01"

Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

WARNING:
Make sure IP address "bond0 : 99.99.230.195 [99.99.230.128] " is up and is a valid IP address on node "flux-rac-node-wcdp-02"

WARNING:
Make sure IP address "bond0 : 99.99.230.194 [99.99.230.128] " is up and is a valid IP address on node "flux-rac-node-wcdp-01"

ERROR:
PRVF-7616 : Node connectivity failed for subnet "99.99.230.128" between "flux-rac-node-wcdp-02 - bond0 : 99.99.230.195" and "flux-rac-node-wcdp-01 - bond0 : 99.99.230.194"

Checking multicast communication...

Checking subnet "99.99.230.128" for multicast communication with multicast group "230.0.1.0"...

I've checked ifconfig and everything looks fine, and here you can see I can ssh between them (and yes, I've done it both ways). Also, runcluvfy.sh apparently is able to connect, because when I run it, Oracle files get created in /tmp on node2.

[oracle_at_flux-rac-node-wcdp-02 ~]$ ssh flux-rac-node-wcdp-02 ls -l /tmp
WARNING:   This system is solely for ...
total 4
drwxr-xr-x 3 oracle dba 4096 Sep 28 15:36 CVU_11.2.0.3.0_oracle
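The listing above shows the banner text printed ahead of the ls output, and cluvfy treats any such extra output as a failure. A minimal local sketch of that check (check_quiet is a hypothetical helper; in practice you would run it against "ssh <node> true" from each node):

```shell
#!/bin/sh
# Sketch: cluvfy-style user equivalence needs remote commands to emit *no*
# output beyond the command's own. A clean node returns zero extra bytes.
check_quiet() {
  out=$("$@" 2>&1)           # capture everything the command prints
  if [ -n "$out" ]; then
    echo "NOISY: $out"       # a login banner would show up here
  else
    echo "QUIET"             # this is what cluvfy needs to see
  fi
}

check_quiet true                   # a silent command prints QUIET
check_quiet echo "banner text"     # simulates a login banner -> NOISY
```

If "ssh <node> true" prints anything at all, the installer's connectivity checks can fail even though ssh itself works.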

--
http://www.freelists.org/webpage/oracle-l



Received on Wed Sep 28 2011 - 10:59:57 CDT
