Re: RAC vote 11gR2 Question

From: Sanjay Mishra <smishra_97_at_yahoo.com>
Date: Fri, 27 Sep 2013 07:07:45 -0700 (PDT)
Message-ID: <1380290865.39434.YahooMailNeo_at_web122106.mail.ne1.yahoo.com>



Dimitre
 

So for the OCR: the Oracle doc you quoted says that "Normal redundancy disk groups provide 3 voting disk files, 1 OCR and 2 copies (one primary and one secondary mirror)", which means there are 2 OCR copies in this diskgroup. Now,
1. Are they on separate failure groups? Hopefully yes, as per the requirement.
2. What happens if the failure group holding the primary OCR fails? Does that mean CRS can hang?
3. If the assumption in point 2 is correct, is there any benefit to having an OCR mirror in a second diskgroup, to avoid the CRS hang problem?

 

Thanks for your time and help,
 

Sanjay
 
 

 From: "Radoulov, Dimitre" <cichomitiko_at_gmail.com> To: smishra_97_at_yahoo.com; "oracle-l_at_freelists.org" <oracle-l_at_freelists.org> Sent: Friday, September 27, 2013 4:36 AM Subject: Re: RAC vote 11gR2 Question   

Hi Sanjay,

On 27/09/2013 00:42, Sanjay Mishra wrote:  

  1. If I have the OCR/voting disks in an ASM diskgroup with normal redundancy, and the three disks defined at diskgroup creation come from three different SANs, where are the OCR primary and mirror created in this case? I was reading on one site that there is only one copy of the OCR per diskgroup, so for protection and best practices the second OCR has to be on a second diskgroup. Can someone clarify what the correct architecture is?

Quoting the documentation:

Oracle® Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux:

    For Oracle Clusterware files, Normal redundancy disk groups provide 3 voting disk files, 1 OCR and 2 copies (one primary and one secondary mirror). With normal redundancy, the cluster can survive the loss of one failure group.

In other words:

RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent) (Doc ID 810394.1):

    For those who wish to utilize Oracle supplied redundancy for the OCR and Voting Disks in 11gR2 and above one could create a separate (3rd) ASM Diskgroup having a minimum of 3 fail groups (total of 3 disks). This configuration will provide 3 Voting Disks (1 on each fail group) and a single OCR which takes on the redundancy of that disk group (mirrored within ASM). The minimum size of the 3 disks that make up this normal redundancy diskgroup is 1GB.
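
For illustration, a minimal diskgroup of that shape could be created like this (a sketch only: the diskgroup name, the ASMLib disk labels and the compatible attribute are hypothetical, adjust them to your environment):

    CREATE DISKGROUP ocrvote NORMAL REDUNDANCY
      FAILGROUP fg1 DISK 'ORCL:OCRVOTE1'  -- disk presented by SAN/controller 1
      FAILGROUP fg2 DISK 'ORCL:OCRVOTE2'  -- disk presented by SAN/controller 2
      FAILGROUP fg3 DISK 'ORCL:OCRVOTE3'  -- disk presented by SAN/controller 3
      ATTRIBUTE 'compatible.asm' = '11.2';

With three failure groups ASM can place one voting file in each, and the single OCR is mirrored across the failure groups like any other file in the diskgroup.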

Regarding an OCR copy in a separate diskgroup, the Oracle® Clusterware Administration and Deployment Guide 11g Release 2 (11.2) states:

    If OCR is stored in an Oracle ASM disk group with _external redundancy_, then Oracle recommends that you add another OCR location to another disk group to avoid the loss of OCR, if a disk fails in the disk group.

But the best practice document on MOS -  RACGuides_Rac11gR2OnLinux.pdf - describes the following implementation:  

  • a normal redundancy ASM diskgroup for OCR and voting files
  • an OCR mirror on a separate ASM diskgroup

And says:

    It is Oracle's Best Practice to have an OCR mirror stored in a second diskgroup. To follow this recommendation add an OCR mirror. Mind that you can only have one OCR in a diskgroup.
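
Adding the mirror is a one-liner once the second diskgroup exists. A sketch, assuming a diskgroup named +OCRMIRROR (the name is just an example):

    # run as root from the Grid Infrastructure home
    ocrconfig -add +OCRMIRROR
    # verify the OCR locations and integrity afterwards
    ocrcheck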

2. At my site I saw an OCRVOTE diskgroup created with the following kind of syntax (please ignore the exact syntax):

    create diskgroup ocrvote normal redundancy
      failgroup fg1 disk 'orcl:1', 'orcl:2',  'orcl:3',  'orcl:4'
      failgroup fg2 disk 'orcl:5', 'orcl:6',  'orcl:7',  'orcl:8'
      failgroup fg3 disk 'orcl:9', 'orcl:10', 'orcl:11', 'orcl:12';

So it uses 12 disks, where disks 1-4 are from one SAN, 5-8 from a second and 9-12 from a third. This was done to protect voting disk availability, so that the failure of any one SAN will not cause an eviction. The question is: what is the advantage of using 12 disks here? Isn't it a waste of 9 disks, since three disks can provide the same availability?
No, it doesn't provide the same availability: with the above configuration you can lose up to three disks per failure group. Note that if you lose a controller/FG your cluster _should_ remain up (i.e. you'll have at least one node running), but you may (actually "will") still have some nodes evicted.
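
For what it's worth, you can check where the voting files actually landed and in which failure group. A sketch, as of 11.2:

    # voting file locations, one per failure group
    crsctl query css votedisk

    -- from the ASM instance: the failure group of each disk holding a voting file
    select name, failgroup, path from v$asm_disk where voting_file = 'Y';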

Regards
Dimitre

--
http://www.freelists.org/webpage/oracle-l