Re: ASM question

From: Greg Rahn <greg_at_structureddata.org>
Date: Sat, 24 Feb 2007 19:03:48 -0800
Message-ID: <45E0FC94.80103@structureddata.org>


To quote the documentation:
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/storeman.htm

for normal redundancy:
FREE_MB - REQUIRED_MIRROR_FREE_MB = 2 * USABLE_FILE_MB or
(FREE_MB - REQUIRED_MIRROR_FREE_MB)/2 = USABLE_FILE_MB

Using the below example's numbers:
(20374 - 5120)/2 = 7627

REQUIRED_MIRROR_FREE_MB is 5120 because at most 1 ASM disk can fail with the disk group still able to restore full redundancy. Given that each failgroup only has 2 ASM disks, it should be obvious why no more than 1 can fail, correct?
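
As a quick sanity check, you can have the ASM instance do this arithmetic for you (the diskgroup name DATA is just the one from the example below; adjust as needed):

select name,
       free_mb,
       required_mirror_free_mb,
       (free_mb - required_mirror_free_mb)/2 as computed_usable_mb,
       usable_file_mb
from   v$asm_diskgroup
where  name = 'DATA'
and    type = 'NORMAL';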

I don't know if I agree with your example. Perhaps I am misunderstanding it. I believe that in either a 4 disk/4 failgroup scenario or a 4 disk/2 failgroup (2 disks each) scenario the math is the same; however, how (or whether) the diskgroup can be restored to full redundancy on the surviving ASM disks may differ.

Here is how I would describe it:

There are 4 disks: disk1, disk2, disk3, disk4.
failgroup1 = disk1 & disk2
failgroup2 = disk3 & disk4
Let's say disk4 fails.
At this point there is no data loss; all data is still available as a primary extent, a mirrored extent, or both.

In order to get full redundancy back, the primary and mirrored extents from disk4 now need to be rebuilt. Given there are 2 failgroups with 2 ASM disks each, there is only one place these extents can be rebuilt - disk3. So disk3 must now support its own primary/mirrored extents as well as the primary/mirrored extents that disk4 previously supported. I believe this may or may not be possible - it depends on how much space has been used in the disk group. (see "A5" in the Metalink note below)
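
If you want to see the layout and the repair for yourself, something along these lines should do it (the disk and diskgroup names are just the ones from this example):

-- which ASM disk sits in which failure group, and how full each one is
select g.name as diskgroup, d.failgroup, d.name as disk,
       d.total_mb, d.free_mb, d.mode_status
from   v$asm_disk d, v$asm_diskgroup g
where  d.group_number = g.group_number
order  by g.name, d.failgroup, d.name;

-- once a disk is lost (or dropped to simulate the failure), the rebalance
-- that restores redundancy shows up in V$ASM_OPERATION
select group_number, operation, state, power, est_minutes
from   v$asm_operation;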

In the case of a 4 disk, 4 failgroup scenario, if disk4 failed, its primary/mirrored extents could be rebuilt across all 3 surviving ASM disks.
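
For comparison, the 4 disk, 4 failgroup case is simply the default behaviour: if no FAILGROUP clause is given, each disk becomes its own failure group. Using the same example paths, that diskgroup would be created like this:

SQL> create diskgroup data normal redundancy
disk '/dev/vx/rdsk/ux016_RAW/volraw_01',
     '/dev/vx/rdsk/ux016_RAW/volraw_02',
     '/dev/vx/rdsk/ux016_RAW/volraw_03',
     '/dev/vx/rdsk/ux016_RAW/volraw_04'
/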

At this point I'm going to recommend reading Metalink note 395712.1, which works through a similar example as well as some other useful questions and answers.
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=395712.1

I would also agree that external mirroring may be simpler. There isn't even a need for Veritas as an LVM; just RAID it on the DMX3000.

Cheers,

Greg Rahn
http://structureddata.org

> This math seems a bit odd. It applies to a disk group of 4 disks without
> separation into two failure groups, where by default each disk forms its
> own failure group.
>
> How it works in this case:
> if one disk fails, then Oracle can mirror extents in the following ways:
> half of disk1 + half of disk2 = 2.5 GB
> +
> half of disk2 + half of disk3 = 2.5 GB
> +
> half of disk1 + half of disk3 = 2.5 GB
> =
> 7.5 GB
>
> However, with disk1 and disk2 being in the same failure group, Oracle
> won't be able to mirror extents between them (the first 2.5 GB above), so
> it should really be 5 GB.
>
> To the original poster - be sure you know why you want to separate
> disks into failure groups. It doesn't make sense if they are disks from the
> same SAN box, for example. Unless they are accessed by different
> controllers/FC switches or something.
>
> Chances are the 5 GB volumes do not have exactly one spindle behind them.
> Judging by the paths, it seems they are volumes from the same Veritas
> diskgroup. Though it's possible to allocate them from particular
> disk(s), that's probably not the case. Is it? So it's hardly justifiable
> to split them into such small chunks.
>
> Since you are already using Veritas, you might as well go for their
> mirroring instead of ASM normal redundancy, as it is a more mature solution.
>
> On 2/23/07, Greg Rahn <greg_at_structureddata.org> wrote:

>>
>>  To benefit the list...
>>  --
>> Greg Rahn
>> http://structureddata.org
>>
>>
>>  -------- Original Message --------
>>  Subject: Re:ASM question
>>  From: "Hameed, Amir" <Amir.Hameed_at_xerox.com>
>>  To: "Greg Rahn" <greg_at_structureddata.org>
>>  Date: 2/23/2007 1:15 PM
>>
>>
>> Thank you for your explanation.
>> Amir
>>
>>  ________________________________
>>  From: Greg Rahn [mailto:greg_at_structureddata.org]
>>  Sent: Friday, February 23, 2007 3:17 PM
>>  To: Hameed, Amir
>>  Subject: Re: ASM question
>>
>>  First, let's understand a couple of things:
>>
>>  ASM failgroups & normal redundancy:  Any ASM disk in a given failgroup
>> may not have its extent mirrors on any other ASM disk in that same
>> failgroup.
>>  A normal redundancy disk group can tolerate the failure of one failure
>> group. If only one failure group fails, the disk group remains mounted and
>> serviceable, and ASM performs a rebalance of the surviving disks (including
>> the surviving disks in the failed failure group) to restore redundancy for
>> the data in the failed disks. If more than one failure group fails, ASM
>> dismounts the disk group.
>>
>>  REQUIRED_MIRROR_FREE_MB indicates the amount of space that must be
>> available in the disk group to restore full redundancy after the worst
>> failure that can be tolerated by the disk group.
>>
>> http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/storeman.htm 
>>
>>
>>  In your second example the 7627 MB USABLE_FILE_MB comes from here:
>>  The worst failure that this disk group could tolerate, and still restore
>> full redundancy, is 1 ASM disk failure (this is where
>> REQUIRED_MIRROR_FREE_MB = 5GB comes from).  Given that, 100% of the data
>> and its redundant copy would have to reside on 3 ASM disks.  So if ASM
>> needs to fit 4 ASM disks' worth of data on 3 ASM disks, no more than 75%
>> of the capacity could be used.  Using normal redundancy the math would be:
>>  4 ASM disks @5GB = 20GB = TOTAL_MB
>>  20GB TOTAL_MB / 2 = 10GB (for primary extent mirrors)
>>  3/4 (support 4 disks data on 3) * 10GB = 7.5GB
>>
>>  In the first example REQUIRED_MIRROR_FREE_MB is 0 because that diskgroup
>> could not sustain a failure and still restore full redundancy.
>>
>>  Regards,
>>  -Greg
>>
>>
>>  -------- Original Message --------
>>  Subject: ASM question
>>  From: "Hameed, Amir" <Amir.Hameed_at_xerox.com>
>>  To: oracle-l_at_freelists.org
>>  Date: 2/23/2007 10:13 AM
>>
>>  Hi Folks,
>> I have a 10.2.0.2 ASM instance running on Solaris 9 with the following
>> scenario:
>>
>> 1. All raw disk slices are 5GB in size
>>
>> 2. I have created a normal redundancy ASM diskgroup with two failure
>> groups as shown below:
>> SQL> create diskgroup data normal redundancy
>> failgroup failgroup_1
>> disk '/dev/vx/rdsk/ux016_RAW/volraw_01'
>> failgroup failgroup_2
>> disk '/dev/vx/rdsk/ux016_RAW/volraw_02'
>> /
>>
>> When I run the SQL statement shown below, I see the following
>> output:
>> SQL> select GROUP_NUMBER GROUP#, NAME, STATE, TOTAL_MB, FREE_MB,
>> REQUIRED_MIRROR_FREE_MB REQ_MIRR_FREE_MB, USABLE_FILE_MB
>> from V$ASM_DISKGROUP;
>>
>>  GROUP# NAME  STATE    TOTAL_MB  FREE_MB  REQ_MIRR_FREE_MB  USABLE_FILE_MB
>>  ------ ----- -------- --------  -------  ----------------  --------------
>>       1 DATA  MOUNTED     10240    10138                 0            5069
>>
>> So, the total size of the DG is 10GB with usable space of 5GB. Because
>> the group is mirrored 1:1, the REQ_MIRR_FREE_MB is zero.
>>
>> 3. When I create the same group with two disks in each failure group, I
>> see output that I am not able to comprehend:
>> SQL> create diskgroup data
>> failgroup failgroup_1
>> disk
>> '/dev/vx/rdsk/ux016_RAW/volraw_01',
>> '/dev/vx/rdsk/ux016_RAW/volraw_02'
>> failgroup failgroup_2
>> disk
>> '/dev/vx/rdsk/ux016_RAW/volraw_03',
>> '/dev/vx/rdsk/ux016_RAW/volraw_04'
>> /
>>
>> SQL> select GROUP_NUMBER GROUP#, NAME, STATE, TOTAL_MB, FREE_MB,
>> REQUIRED_MIRROR_FREE_MB REQ_MIRR_FREE_MB, USABLE_FILE_MB
>> from V$ASM_DISKGROUP;
>>
>>  GROUP# NAME  STATE    TOTAL_MB  FREE_MB  REQ_MIRR_FREE_MB  USABLE_FILE_MB
>>  ------ ----- -------- --------  -------  ----------------  --------------
>>       1 DATA  MOUNTED     20480    20374              5120            7627
>>
>> I was hoping to see REQ_MIRR_FREE_MB of zero because I have a DG that
>> contains two failure groups, with each group containing two disks. I was
>> also expecting to see 10GB for the USABLE_FILE_MB.
>>
>> Can someone please clarify how to interpret these stats?
>>
>> Thanks
>> Amir
>>
--
http://www.freelists.org/webpage/oracle-l
Received on Sat Feb 24 2007 - 21:03:48 CST
