
Re: RAC/CRS and OCFS2

From: Valentin Minzatu <valentinminzatu_at_yahoo.com>
Date: 15 Mar 2007 08:17:53 -0700
Message-ID: <1173971873.154987.281500@b75g2000hsg.googlegroups.com>


On Mar 15, 10:55 am, "hpuxrac" <johnbhur..._at_sbcglobal.net> wrote:
> On Mar 15, 10:19 am, "Keith" <kkna..._at_gmail.com> wrote:
>
> > On Mar 15, 9:59 am, "hpuxrac" <johnbhur..._at_sbcglobal.net> wrote:
>
> > > On Mar 15, 9:50 am, "Keith" <kkna..._at_gmail.com> wrote:
>
> > > > Hello RAC gurus,
>
> > > > I am currently running 10.2.0.3 RAC/CRS on RHEL4. All our database
> > > > files are using ASM/raw devices. However, we now have the need for a
> > > > clustered file system. This file system likely would not be used
> > > > directly by the database, but requires high availability. So, we'd
> > > > like to use OCFS2 -- which has prompted some questions:
>
> > > > Are there any conflicts or issues with running OCFS2 and CRS
> > > > concurrently?
> > > > Can they share the same private interconnect?
> > > > Would this be an Oracle supported configuration?
> > > > If it is supported, is http://oss.oracle.com/projects/ocfs2/files/
> > > > Oracle's preferred place to obtain the software?
>
> > > > If OCFS2 panics the system (or anything "panics" the system), what is
> > > > the expected system behavior? (I'm curious whether OCFS2 would ever
> > > > prompt or cause a cluster node to be evicted/rebooted/etc.)
>
> > > > Also, Metalink note 391771.1 discusses a bug that affects RHEL4 (fixed
> > > > in a later version than I'm running):
>
> > > > "Kernel panic - not syncing: ocfs2 is very sorry to be fencing this
> > > > system by panicing"
>
> > > > It gives the option of using the "DEADLINE" IO scheduler versus the
> > > > "CFQ" IO scheduler. Is there any impact associated with this change?
>
> > > > Any help is greatly appreciated. This came up very quickly -- and my
> > > > 9i -> 10g migration is scheduled for next weekend. I'm in a pinch!
>
> > > You are planning not only to implement 10g next weekend but also to
> > > introduce a new clustered file system into an environment where you
> > > haven't done exhaustive testing ... I wouldn't worry if I were you!
>
> > > Opinions vary on ocfs2 but even some of the most devoted linux/oracle
> > > fans think it's something to run away from.
>
> > > If you really proceed down this path, I would recommend bringing in
> > > oracle consulting for the OCFS2 install and the 10g implementation so
> > > that you don't put your job on the line.
>
> > Well, the RAC/CRS has been tested thoroughly, but OCFS2 is the new
> > requirement. As I started to read the OCFS2 user's guide, I became
> > very wary because of the OCFS2 "cluster services" -- I'm concerned they
> > may interfere with CRS. If OCFS2 is not the way to go, are there
> > better, proven alternatives? RH GFS?
>
> Personally I would look at veritas for the clustered file system but
> that's me.
>
> Officially I think that oracle is still "promoting" OCFS2 ...
>
>
> > If it helps any, I have two RAC nodes tapping into an EMC storage
> > array. On one node I have a file system built that will be NFS'd out
> > to several "client" systems (in support of an old web technology, and
> > I'm not sure when that will be replaced). Management would like this
> > file system "clustered" for high availability, which raises more
> > questions:
>
> > If I NFS-export a clustered file system, can I only export it from one
> > of the nodes?
> > If so, should that node go down, would I lose the NFS mount and be
> > forced to remount from a surviving node?
> > Is there likely an "online" and quick method to re-present the disk
> > (say, from the SAN side) to the surviving node and mount the file
> > system there (so as to avoid using a clustered file system altogether)?
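
On the CRS question: OCFS2's "cluster services" are the O2CB stack, which keeps its own membership and heartbeat separate from CRS -- the two do not coordinate with each other. O2CB's membership comes from /etc/ocfs2/cluster.conf. Just as a rough sketch of what that file looks like for two nodes (the cluster name, node names and addresses below are made-up examples, not anything from your setup; in the real file the attribute lines are tab-indented, and ocfs2console/o2cb_ctl will generate it for you, which is safer than hand-editing):

    cluster:
            node_count = 2
            name = webfs

    node:
            ip_port = 7777
            ip_address = 10.0.0.1
            number = 0
            name = racnode1
            cluster = webfs

    node:
            ip_port = 7777
            ip_address = 10.0.0.2
            number = 1
            name = racnode2
            cluster = webfs

Whether you point ip_address at the same private interconnect CRS uses or at a separate network is something I would test rather than assume; two independent heartbeats on one wire is exactly the kind of interaction you want to see under load before go-live.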

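On the scheduler question from note 391771.1: the usual way to apply the workaround on RHEL4 is to set the elevator at boot time in grub and reboot; roughly like this (the kernel line is only an illustration, yours will differ):

    # /etc/grub.conf -- append elevator=deadline to the existing kernel line
    kernel /vmlinuz-2.6.9-42.0.10.ELsmp ro root=/dev/VolGroup00/LogVol00 elevator=deadline

    # on kernels that support per-device switching you can check or change it
    # at run time instead:
    cat /sys/block/sda/queue/scheduler

As for impact, deadline changes how requests are ordered and merged compared to CFQ; for database I/O it is generally considered a safe or even preferable choice, but I would benchmark your own workload before and after rather than take anyone's word for it.
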
I've been using NFS mounts for a while in "active/passive" (read RW/RO) configurations on RH without any problem. I was told that they could be configured RW/RW, but that was not my purpose, as I needed them to be writable from only a single/specific node at a time.
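
To make the RW/RO part concrete, it is essentially just export/mount options; something along these lines (hostnames and paths are invented for the example):

    # /etc/exports on the node that currently serves the file system
    /u02/webcontent    webclient1(rw,sync) webclient2(ro,sync)

    # on the clients
    mount -t nfs -o rw racnode1:/u02/webcontent /mnt/webcontent   # the one writer
    mount -t nfs -o ro racnode1:/u02/webcontent /mnt/webcontent   # everyone else

If the serving node dies, the clients do lose the mount; you would re-present the LUN to the surviving node, mount and export it there, and remount on the clients. That can be scripted, but it is a failover rather than anything transparent, which is the trade-off against running a clustered file system.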
