Re: Do we need multiple REDOLOG member if it is already on SAN box?

From: onedbguru <>
Date: Fri, 29 Apr 2011 16:12:29 -0700 (PDT)
Message-ID: <>

On Apr 29, 3:12 pm, charles <> wrote:
> > This is not exactly a new type question.  Various people will have
> > different answers here.
> > On most of my dev/test databases ... no, I don't usually use multiple
> > members in a redo log group.
> > For production databases I go case by case.  Highly critical systems?
> > Sure, why not.
> > Have I ever lost a redolog group or just one member of a group?  Not
> > so far ...
> Here is our SA's comment
> I'm NOT fine with this, but please remember that I HIGHLY recommend
> against it.  It is a best practice never to do software RAID, which
> is what you are basically doing.  This is a very old practice from
> when you didn't have the infrastructure that we have now.

Ask him if he would bet his job on that quote. Actually, in Oracle, if you do NOT do "software RAID" (i.e., multiple redo log members per group), he needs to have his head examined. I don't know where he pulled that "best practices" quote from, but it was not from anything I have seen in the Oracle world going back 20+ years. To bet that your hardware RAID will save your bacon is to place extreme confidence in technology that proves, over and over again, that if it can fail, it will. It is not a matter of "will it fail" but "when will it fail" - AND usually at the most inconvenient time.
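For anyone following along, multiplexing redo logs is a one-statement-per-group job. A minimal sketch (the file paths and group numbers are hypothetical - the point is that each group's members live on separate storage):

```sql
-- See how many members each redo group currently has
-- (2 or more means the group is multiplexed)
SELECT group#, COUNT(*) AS members
  FROM v$logfile
 GROUP BY group#;

-- Add a second member, on a different device, to each group
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo01b.log' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo02b.log' TO GROUP 2;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo03b.log' TO GROUP 3;
```

If one member of a group is lost or corrupted, Oracle keeps writing to the surviving member - which is exactly the protection hardware RAID alone does not give you against a bad write or an accidentally deleted file.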

If he takes that view of things, then I guess that he would never use ASM (which, in the *n*x world, would be a huge mistake).

Not sure how big your environment is, but if he really wants to play that game, tell him to give you {N} equally sized LUNs, install ASM, and tell him to get out of the way. Those LUNs can be on RAID 10, or even RAID 5 or 6, and doing so may just increase your performance. Then you can manage your space instead of an uninformed SA. If the database is mission critical, you may also want to look at ASM FAILURE groups. Cool stuff, and it runs really fast.
Received on Fri Apr 29 2011 - 18:12:29 CDT
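For the failure-group idea, a rough sketch of what that DDL looks like (disk paths and names are hypothetical; NORMAL redundancy makes ASM mirror extents across the two failure groups, so losing every LUN in one group still leaves a full copy):

```sql
-- Hypothetical disk group: two failure groups, two-way mirroring
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/mapper/lun1', '/dev/mapper/lun2'
  FAILGROUP fg2 DISK '/dev/mapper/lun3', '/dev/mapper/lun4';
```

The win is that ASM stripes across all the LUNs you give it and rebalances automatically when you add or drop one, so the DBA controls placement instead of the SA.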
