ASM with device mapper and OCR and vote disks redundancy

From: Radoulov, Dimitre <>
Date: Mon, 13 Feb 2012 16:34:27 +0100
Message-ID: <>

Hi all,

target environment: RAC on RHEL 5.7 x86-64.

Regarding the correct asmlib configuration with Linux native multipathing (device mapper):
note that I'm aware of the asmlib vs. udev vs. multipath fixed-device-names discussion,
but our choice is asmlib for now.

The multipath configuration for asmlib is described in the official Oracle documentation and
in various MOS notes.

  1. The official documentation (Grid Infrastructure Installation Guide for Linux) states the following:

        Configuring ASMLIB for Multipath Disks

        For Oracle Linux and Red Hat Enterprise Linux version 5, when scanning, the kernel
        sees the devices as /dev/mapper/XXX entries. By default, the 2.6 kernel device file
        naming scheme udev creates the /dev/mapper/XXX names for human readability.
        Any configuration using ORACLEASM_SCANORDER should use the
        /dev/mapper/XXX entries.

        Edit the ORACLEASM_SCANORDER variable to provide the prefix path of the
        multipath disks. For example, if the multipath disks use the prefix multipath
        (/dev/mapper/multipatha, /dev/mapper/multipathb and so on), and the
        multipath disks mount SCSI disks, then provide a prefix path similar to the
        following:

            ORACLEASM_SCANORDER="multipath sd"

   2. A MOS note: Linux: 11gR2 Grid Infrastructure doesn't start up after node reboot due to incorrect asmlib setting [ID 1050164.1]

        states that scanorder/scanexclude should be set as:

            ORACLEASM_SCANORDER="dm"
            ORACLEASM_SCANEXCLUDE="sd"

AFAIK the dm-N names are not persistent across reboots, while the udev-created user-friendly names in /dev/mapper/<string> are.
So: should we use the string we have under /dev/mapper/<some_string> as the prefix for scanorder, or do we need the other one, dm-? Is there any difference as far as asmlib and GI are concerned?
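For what it's worth, this is how I've been comparing the two namings on a test node (output obviously depends on the storage; the commands below are standard device-mapper-multipath tooling):

```shell
# Show the multipath maps: the user-friendly names are bound to the
# LUN WWIDs (see /var/lib/multipath/bindings) and survive reboots.
multipath -ll

# Show how each /dev/mapper entry maps to a dm-N device on this boot;
# the dm-N minor numbers can come up in a different order next time.
ls -l /dev/mapper/

# The kernel's own view of the block devices, dm-N names included.
cat /proc/partitions
```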

Another question: we want to use an ASM disk group to store the voting disks and the OCR.
The nodes are connected to a SAN, so we'll use its HW RAID capabilities (the disk groups
dedicated to user/application data and the FRA will be defined with external redundancy).

We plan to create a separate disk group for the voting disks and the OCR, with 3 x 1G LUNs
and normal redundancy, just to be sure we have multiple copies.

Could this mirror (SAN) on mirror (ASM normal redundancy) configuration have significant performance implications? Note that we plan to implement normal redundancy on top of HW RAID only for a single disk group (for voting and OCR).
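What we have in mind for that disk group is something like the following sketch (the OCRVOTE name and the ORCL:OCRVOTE1..3 asmlib labels are hypothetical; three failure groups so that normal redundancy can place the three voting files on separate LUNs):

```shell
# Sketch only - assumes three 1G LUNs already stamped with
# "oracleasm createdisk OCRVOTE1 ..." etc. on hypothetical labels.
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP OCRVOTE NORMAL REDUNDANCY
  FAILGROUP fg1 DISK 'ORCL:OCRVOTE1'
  FAILGROUP fg2 DISK 'ORCL:OCRVOTE2'
  FAILGROUP fg3 DISK 'ORCL:OCRVOTE3'
  ATTRIBUTE 'compatible.asm' = '11.2';
EOF
```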

Best regards

