Re: ASM on SAN

From: Frits Hoogland <frits.hoogland_at_gmail.com>
Date: Mon, 22 Feb 2010 09:49:44 +0100
Message-Id: <527409AD-A199-4596-A5A0-573AEAD9B98A_at_gmail.com>



I have little to add to what others mentioned, just a few things I've found from using it.

I would go for option 1 too.
It makes little sense to have a redundant setup inside the SAN (RAID > 0) and redundancy at the ASM level as well, unless you have multiple SANs and explicitly want to set up HA across them.
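For what it's worth, a minimal sketch of the option-1 style setup, run on the ASM instance; the LUN paths are hypothetical, and EXTERNAL REDUNDANCY simply tells ASM not to mirror, leaving the protection to the EVA's RAID:

-- Sketch only: one diskgroup per role, EXTERNAL REDUNDANCY so ASM does not
-- mirror (the array's RAID handles that). The device paths are made up.
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/mapper/eva_data01';

CREATE DISKGROUP fra EXTERNAL REDUNDANCY
  DISK '/dev/mapper/eva_fra01';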

As Greg mentioned, there are sweet spots and best practices that let you set it up in the best possible way (the defaults are not always good).

One very important thing, which others have mentioned: keep the ASM LUNs in a diskgroup the same size. If they differ, they will be filled in proportion to their size: with two LUNs of 50GB and 100GB, the second one gets twice as much data (so that both fill up evenly), which also means it can get twice the amount of I/Os, which is probably not what you want.
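If you want to check for that, something along these lines against the ASM instance shows the disk sizes and fill levels per diskgroup; the column names come from V$ASM_DISK and V$ASM_DISKGROUP, the rest is just a sketch:

-- List every disk per diskgroup with its size and fill percentage;
-- equally sized (and equally filled) LUNs per diskgroup is what you want.
SELECT dg.name AS diskgroup,
       d.path,
       d.total_mb,
       d.total_mb - d.free_mb AS used_mb,
       ROUND((d.total_mb - d.free_mb) / d.total_mb * 100, 1) AS pct_used
FROM   v$asm_disk d
       JOIN v$asm_diskgroup dg ON dg.group_number = d.group_number
ORDER  BY dg.name, d.path;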

frits
On Feb 21, 2010, at 8:16 PM, Chen Shapira wrote:

> Hi Oracle-L,
>
> I'm preparing to install ASM using our EVA storage and I'm trying to
> decide how many volumes to request from my storage manager.
>
> There are two configurations we consider:
> 1) Ask for two volumes - one for data files, the other for flashback,
> archive logs, backups, etc. Then run ASM with external redundancy and
> external striping. Let EVA do the RAID thing for us.
> 2) Ask for multiple data volumes and multiple backup volumes.
> Configure ASM to do its own striping. Since EVA will do its own
> stripe+mirror thing, we'll have double striping.
>
> I'm leaning toward the first option since it seems more manageable.
>
> Does anyone on the list have a good reason to go with the second
> option? I'm worried that I'm missing something, because all ASM papers
> naturally assume there will be multiple ASM "disks".
>
> Chen
> --
> http://www.freelists.org/webpage/oracle-l
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Feb 22 2010 - 02:49:44 CST
