RE: RAC patch 10.2.0.4 with lib/lib32/bin differences
Date: Mon, 31 Mar 2008 18:04:34 +0100
I am about to do something similar; however, it will be a patch for a 10.2.0.1 active/passive cluster using a CRS_HOME, a clustered ASM_HOME and a single-instance database on each node.
Did the propagation fail for CRS _and_ the database/ASM homes, or was it limited to one of the two? Is there any indication of problems in the OUI log? Time differences between the nodes can sometimes cause issues.
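A quick way to check the OUI logs for swallowed errors is to grep the installActions logs under the central inventory. A minimal sketch; the fallback inventory path below is an assumption, the authoritative location is the inventory_loc entry in /etc/oraInst.loc:

```shell
#!/bin/sh
# Sketch: scan OUI install logs for errors/warnings after a patch run.
# Read the inventory location from /etc/oraInst.loc if possible;
# the fallback path is hypothetical.
INV=$(sed -n 's/^inventory_loc=//p' /etc/oraInst.loc 2>/dev/null)
INV=${INV:-/u01/app/oraInventory}          # assumed fallback path

for log in "$INV"/logs/installActions*.log; do
    [ -f "$log" ] || continue              # glob may match nothing
    echo "== $log =="
    grep -inE 'error|warn|fail' "$log" | head -20
done
```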
RAC patch 10.2.0.4 with lib/lib32/bin differences
- From: Martin Klier <usn_at_xxxxxxxxx>
- To: oracle-l_at_xxxxxxxxxxxxx
- Date: Mon, 31 Mar 2008 15:32:51 +0200
Recently I patched one of my two-node RACs on Linux x86_64 from 10.2.0.3 to 10.2.0.4. Patching Node1 and propagating the patch to Node2 showed no errors, and neither did the DB instance patching script (which does not matter for this case).
But now, some days later, we noticed odd scheduler execution errors ("could not determine OS PID for job XYZ") and other strange behaviour. There was no real pattern, except that all errors occurred on Node1.
After chasing several dead ends, we compared the bin, lib and lib32 directories of the two nodes: Node1 has _more recent_ libraries and binaries in 5-10% of the files. I suspect the propagation of the patch did not work properly, even though I saw no error or warning at all.
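One way to make such a comparison repeatable is to checksum each directory tree, sorted by path, so the listings from both nodes can be diffed directly. A minimal sketch; on a real cluster you would run the checksum step on each node and diff the results, the throwaway /tmp trees here only stand in for the two homes:

```shell
#!/bin/sh
# Sketch: produce a diff-able checksum listing of an Oracle home subtree.
checksum_tree() {
    ( cd "$1" && find . -type f -exec md5sum {} + | sort -k2 )
}

# Demo: two fake homes whose lib/ contents have drifted apart
# (placeholders for the real bin/lib/lib32 directories on each node).
mkdir -p /tmp/oh_node1/lib /tmp/oh_node2/lib
printf 'patched' > /tmp/oh_node1/lib/libclntsh.so
printf 'stale'   > /tmp/oh_node2/lib/libclntsh.so

checksum_tree /tmp/oh_node1 > /tmp/node1.md5
checksum_tree /tmp/oh_node2 > /tmp/node2.md5
diff /tmp/node1.md5 /tmp/node2.md5 || echo "homes differ"
```

Sorting by the path column keeps the two listings aligned, so diff points straight at the files whose checksums disagree.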
I will rsync these directories so that both nodes have the most recent files, and will report back with my results. But first of all: has anybody seen this behaviour before?
--
Usn's IT Blog for Linux, Oracle, Asterisk
www.usn-it.de
--
http://www.freelists.org/webpage/oracle-l
Received on Mon Mar 31 2008 - 12:04:34 CDT