RE: active-passive cluster in Linux

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Tue, 23 Feb 2010 15:13:20 -0500
Message-ID: <CA94211CCD2C4ED99C1968DF5B11EE29_at_rsiz.com>



One question is whether you want the diagnostic logs stored inside the very thing they might be logging about. So even if you do use Oracle clustered file systems, it can be of value to understand that non-clustered file systems on shareable volumes can be mounted read-only with no increase in the chances of corruption. And I believe the OP of this thread (Juan Cruz Miranda Vigo [jcmiranda_at_oesia.com]) was asking about alternatives to using Oracle clusterware. That was David's point, if I understood him. If you're going active-passive, you can just mount the file systems on the active node. I was adding that it can be useful, and carries no additional risk, to mount the file systems containing the diagnostic logs read-only on the passive node.

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of LS Cheng
Sent: Tuesday, February 23, 2010 2:24 PM
To: Mark W. Farnham
Cc: ballester.david_at_gmail.com; jcmiranda_at_oesia.com; oracle-l_at_freelists.org
Subject: Re: active-passive cluster in Linux

That is my point: even if mounting and dismounting filesystems is doable, do we really want to do it with clustering software that knows nothing about filesystems except its own "filesystem", which is ASM? That is why I said ASM is a must if we want to use Oracle Clusterware to set up an active-passive cluster.

Thanks

--

LSC

On Tue, Feb 23, 2010 at 7:08 PM, Mark W. Farnham <mwf_at_rsiz.com> wrote:

Most ports support a read-only mount of a volume mounted read/write on one node. The only wrinkle I've noticed is that the appearance of new objects on the non-rw node may be slightly delayed. I'm uncertain of the exact mechanism of this delay, and it may vary by port.

An interesting application of this is mounting all the log directories (both archived redo and diagnostic) read-only, without the need for clustering software at the file system level. (They still have to be on shareable volumes, of course.) If you're not storing online redo in ASM, this can also be used for the online redo logs, but of course you need the database files mounted read/write on multiple nodes for any flavor of RAC. But this thread isn't really about RAC.
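For example, something like this on the passive node (the device and mount point names below are just placeholders, not from any actual configuration):

    # Passive node: shared volumes holding archived redo and diagnostic logs,
    # mounted read-only so nothing on this node can write to them.
    mount -o ro /dev/mapper/shared_arch /u01/oradata/ORCL/arch
    mount -o ro /dev/mapper/shared_diag /u01/app/oracle/diag
    # Depending on the filesystem type you may also want an option that skips
    # journal recovery on the read-only mount (ext3/ext4 have "noload").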

Purely for the active-passive idea, this of course means you can tail files on read-only mounted volumes and see the dying diagnostics from a crashed node without waiting for some other node to start or waiting to mount the volumes. Then you have the last messages at hand if you want to look at them before you activate the passive node. This can save a lot of time if the logs are telling you about, for example, a media failure or some other issue that would just be repeated on the passive node.
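For instance, something along these lines (the ADR path is illustrative only):

    # Passive node: read the last messages the crashed instance wrote,
    # straight off the read-only mount, before deciding to fail over.
    tail -n 200 /u01/app/oracle/diag/rdbms/orcl/ORCL/trace/alert_ORCL.log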

Presumably the interest in this type of architecture is minimizing the time until the applications are available on the passive node when the active node has some sort of failure, as well as allowing routine planned switchovers for preventive maintenance and just to prove it will work.

<I wrote too much. I just wanted to introduce the notion of non-corrupting read-only mountability even without a clustered file system; David is absolutely correct that you don't want multiple concurrent rw mounts without a clustered file system, even if you can trick the OS into doing it for you.>

mwf


From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of David Ballester
Sent: Monday, February 22, 2010 6:26 PM
To: LS Cheng
Cc: jcmiranda_at_oesia.com; oracle-l_at_freelists.org
Subject: Re: active-passive cluster in Linux

2010/2/22 LS Cheng <exriscer_at_gmail.com>

Hi

You can use OCFS2 if you wish, but basically you need some sort of cluster filesystem or volume manager such as ASM.

I don't mention OCFS2 because introducing two clustering software stacks (OCFS2 and CRS) for a single-instance active-passive solution is a bit too much in my opinion; that is why I said you are forced to use ASM (which runs on top of CRS).

Thanks  

Or don't use OCFS2 or any cluster filesystem; the only node that needs to have the filesystems mounted is the active one.

If you try to mount a non-cluster filesystem on another node while it is mounted on the first one, the other node should answer 'partition used' or something similar (which prevents corruption).

When switching between nodes, as part of the automatic process, one node should umount the filesystems and the other should mount them (maybe running fsck first).
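Roughly something like this, again with placeholder device and mount point names:

    # Node giving up the service (if it is still reachable):
    umount /u01/oradata

    # Node taking over: check the filesystem if the other node died uncleanly,
    # then mount it read/write and bring up the instance.
    fsck -p /dev/mapper/shared_data
    mount /dev/mapper/shared_data /u01/oradata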

Regards

D.    

--

http://www.freelists.org/webpage/oracle-l

Received on Tue Feb 23 2010 - 14:13:20 CST
