Re: popular storage for ORACLE RAC

From: Mladen Gogala <gogala.mladen_at_gmail.com>
Date: Wed, 18 Mar 2009 21:07:41 +0000 (UTC)
Message-ID: <gprnqt$a7k$3_at_solani.org>



On Wed, 18 Mar 2009 10:12:18 -0700, novickivan_at_gmail.com wrote:

> Thanks everyone for the replies. Is it fair to say not many people are
> using clustered file systems directly on commodity boxes as storage for
> Oracle RAC. For example installing Red Hat's GFS on 8 standard hp
> servers and using the local disks as your storage for Oracle RAC?

I did a test with GFS and was rather pleasantly surprised. GFS beats OCFS2 in speed because it does have caching. It is, however, a rather exotic option, and my management did not vote for it. Personally, I prefer cluster file systems over ASM because:

  1. ASM is proprietary and can hold nothing but Oracle files.
  2. Standard OS utilities like df, du, tar, cpio or scp do not work with ASM (see the small sketch after this list).
  3. ASM uses quite a lot of resources compared to a cluster file system.
  4. ASM cannot do buffered reads, even if I want it to. A good clustering file system like PolyServe or VxFS/CFS can beat ASM hands down in many situations.
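
To illustrate point 2: on a cluster file system an ordinary df tells me how much space is left, while with ASM I have to ask the instance (or asmcmd) instead. Here is a minimal sketch in Python, assuming the cx_Oracle driver; the mount point and connection details are purely hypothetical placeholders, not from any real setup:

  # Minimal sketch, not a production script. The mount point and the
  # connection details below are placeholders only.
  import subprocess
  import cx_Oracle

  # Cluster file system: ordinary OS tools just work.
  df = subprocess.run(["df", "-h", "/u02/oradata"],
                      capture_output=True, text=True)
  print(df.stdout)

  # ASM: there is no mount point for df to look at, so free space has to be
  # read from the ASM instance (V$ASM_DISKGROUP) or via asmcmd.
  conn = cx_Oracle.connect("sys", "change_me", "asm-host/+ASM",
                           mode=cx_Oracle.SYSDBA)
  cur = conn.cursor()
  cur.execute("SELECT name, total_mb, free_mb FROM v$asm_diskgroup")
  for name, total_mb, free_mb in cur:
      print("%s: %d MB free of %d MB" % (name, free_mb, total_mb))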

Unfortunately, ASM is free, as opposed to HP PolyServe or VxFS/CFS. That leaves only OCFS, ASM, GFS and NAS in the game. OCFS is clearly not Oracle's favorite, and its further development is slow. My management was scared by rumors of GFS corruption. I must confess that I didn't see any GFS corruption, but, on the other hand, this was a test system only. I must also say the test was done in 2006, and I don't read much about Oracle on GFS these days. It still doesn't sound like a winning combination, maybe because of the central lock manager: GFS needs a special node, not running Oracle, dedicated to the lock manager.
To make a long story short, if there were a decent, general-purpose clustering file system that I could use for free or for very little money, I would instantly choose it over ASM. Of course, NAS is fine, but it's also far from free.

-- 
http://mgogala.freehostia.com
Received on Wed Mar 18 2009 - 16:07:41 CDT
