Oracle FAQ Your Portal to the Oracle Knowledge Grid


Re: Best fs for Oracle RAC

From: gerryt <>
Date: Fri, 21 Sep 2007 09:40:18 -0700
Message-ID: <>

On Sep 20, 12:08 pm, hpuxrac <> wrote:
> On Sep 20, 1:26 pm, Andrea <> wrote:
> > Hi,
> > we have been evaluating several Oracle RAC storage options, deciding
> > which filesystem to use on a SAN in a cluster environment on the HP-UX (or
> > Linux) platform.
> > HP says there are these options to choose from (in order of
> > importance):
> > raw device
> > oracle ASM
> > OCFS (only for linux)
> > NFS in high avail.
> > Our choice inclines toward ASM, because I think it avoids
> > platform-compatibility problems and makes datafile management easier.
You mean like "let Oracle handle it"... type of management?
> > I would like to know if there are any papers that explain these
> > filesystem options, including their drawbacks.
> > The products that we have to install are Oracle DB 10g with RAC.
I'm not using RAC, so your mileage may vary below.
> RAW partitions give the best performance but take more administration
> and up-front design.

Somehow, maybe, a LOT of up-front design? I'd like to know what kind of up-front design you are talking about:

Certainly ASM requires some jumping through setup hoops.
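For the OP's benefit, those hoops are roughly: run a separate ASM instance, hand it candidate disks, and carve disk groups out of them. A minimal 10g sketch follows; the device paths, failure-group names, and the disk group name are all hypothetical, and it assumes a +ASM instance is already up and owns the raw devices:

```shell
# Sketch only -- paths and names are made up for illustration.
# Connect to the (already running) ASM instance and create a disk group.
export ORACLE_SID=+ASM
sqlplus / as sysdba <<'EOF'
-- NORMAL redundancy mirrors extents across the two failure groups
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/raw/raw1'
  FAILGROUP fg2 DISK '/dev/raw/raw2';
EOF
```

After that, datafiles are created with '+data' as the destination instead of a filesystem path, which is the "let Oracle handle it" management mentioned above.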

I've been testing various fs configs with swingbench lately, and ASM beats all the others by a significant margin: up to 50% better numbers in some tests.
Included in these tests were raw partitions and "cooked" filesystems with various mount options that are alleged to speed things up; raw vs. cooked came out virtually the same every time. I have not looked into ZFS yet, but it's normally quicker than UFS. This is on a Solaris SPARC platform
using 44 FC-AL drives in a striped/mirrored-with-hot-spares setup for "DOM102". I'm by no means finished with these tests (just getting started), but ASM
is the winner so far. The OP really needs to set up his own tests and see for himself.
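To make the "up to 50% better" margin concrete: take the transactions-per-second figure swingbench reports for each storage config and compute the relative improvement. The TPS numbers below are invented for illustration; only the arithmetic is the point.

```shell
# Hypothetical swingbench results for the same workload (illustrative only):
#   ASM disk group  -> 1200 TPS
#   raw partitions  ->  800 TPS
awk 'BEGIN {
  asm = 1200; raw = 800
  # relative improvement of ASM over raw, as a percentage
  printf "improvement: %.0f%%\n", (asm - raw) / raw * 100
}'
# prints "improvement: 50%"
```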
