Re: Powerpath and ASM w/o ASMLib

From: Harel Safra <harel.safra_at_gmail.com>
Date: Thu, 2 Jan 2020 22:15:39 +0200
Message-ID: <CA+UC=5Gxv8mQE+5HPuQNjwqT1dQzB0cwitW0yaLMvA85mzR_Fg_at_mail.gmail.com>



Are the new devices created by udev?
Did you set ASM_DISKSTRING
(https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/ASM_DISKSTRING.html)
to search that location?
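
For example (a quick sketch, assuming asmcmd is run as the Grid Infrastructure
owner; the /dev/ora/* value is only an example matching the NAME= in the udev
rule further down):

  asmcmd dsget                  # show the current ASM_DISKSTRING discovery path
  asmcmd lsdsk --candidate      # list disks ASM can discover but has not yet added to a disk group
  asmcmd dsset '/dev/ora/*'     # example only: point discovery at the path the udev rule creates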

Harel

On Thu, Jan 2, 2020 at 9:35 PM David Barbour <david.barbour1_at_gmail.com> wrote:

> Happy New Year!
>
> We are changing SANs from an EMC VNX to an EMC UNITY. The servers and
> SANs are hosted at a third-party site. We are on RHEL 6.8. We are not
> using ASMLib.
>
> When I was informed that the LUNs had been presented to the servers (it's
> a 2-node RAC) I re-scanned the hosts and the Powerpath devices showed up.
> Using parted I partitioned each LUN as a full primary partition. I added
> the WWID to udev rules, but no matter what I try, the LUNs are not showing
> up in the ASM search path. Generally I don't use Powerpath, so I'm hoping
> I don't have to reboot the server to get these to show up.
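>
> The partitioning was roughly along these lines per LUN (a sketch; the 4MiB
> start offset matches the parted output below):
>
>   parted -s /dev/emcpoweree mklabel msdos                # label the pseudo device
>   parted -s /dev/emcpoweree mkpart primary 4194kB 100%   # one full-size primary partition
>   partprobe /dev/emcpoweree                              # re-read the partition table so emcpoweree1 appears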
>
> Here's what I see (using one LUN as an example):
>
>
>
> [root_at_685925-db6 dev]# ls -al |grep emcpoweree
> brw-rw---- 1 root disk 120, 2144 Jan  2 12:32 emcpoweree
> brw-rw---- 1 root disk 120, 2145 Jan  2 12:08 emcpoweree1
>
>
> [root_at_685925-db6 dev]# parted /dev/emcpoweree
>
> Disk /dev/emcpoweree: 859GB
> Sector size (logical/physical): 512B/512B
> Partition Table: msdos
>
> Number  Start   End    Size   Type     File system  Flags
>  1      4194kB  859GB  859GB  primary
>
> [root_at_685925-db6 dev]# powermt display dev=emcpoweree
> Pseudo name=emcpoweree
> Unity ID=APM00193839645 [Host_5]
> Logical device ID=60060160E6914D00ADF8F35D73CD831D [863648-685926-800GB-Pool1-RAC_Cluster-56]
> state=alive; policy=CLAROpt; queued-IOs=0
> Owner: default=SP A, current=SP A   Array failover mode: 4
> ==============================================================================
> --------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
> ###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
> ==============================================================================
>    1 bfa                   sdtq         SP B3     active  alive       0      0
>    1 bfa                   sdrj         SP A2     active  alive       0      0
>    2 bfa                   sdpc         SP A3     active  alive       0      0
>    2 bfa                   sdmv         SP B2     active  alive       0      0
>
>
>
> [root_at_685925-db6 by-id]# ls -al |grep 60060160E6914D00ADF8F35D73CD831D
> [root_at_685925-db6 by-id]#
>
> Here's the stanza in /etc/udev/rules.d/99-oracle-grid.rules:
>
>
>
> #emcpoweree
> #[863648-685926-800GB-Pool1-RAC_Cluster-56]
> ACTION=="add|change", KERNEL=="emcpower[a-z][a-z]?", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$parent", RESULT=="360060160E6914D00ADF8F35D73CD831D", OWNER="grid", GROUP="asmadmin", MODE="0660", NAME="/dev/ora/ORA-ACTIVE101p%n"
>
> I've tried rescanning the host(s), running /sbin/udevadm control --reload-rules
> followed by /sbin/udevadm trigger --type=devices --action=change, and running
> powermt save and powermt config.
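>
> Concretely, that was along these lines (a sketch; the SCSI host rescan loop is
> one common form of the rescan I mean):
>
>   for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done   # rescan the HBAs for new LUNs
>   powermt config && powermt save                           # claim the new paths under PowerPath and persist
>   /sbin/udevadm control --reload-rules                     # reload the edited rules file
>   /sbin/udevadm trigger --type=devices --action=change     # replay change events so the rules re-run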
>
> Nothing.
>
> Do I need to reboot or is there some way to get these to show up?
>
> David A. Barbour
> dbarbour_at_istation.com
> (214) 292-4096
>
> Istation
> 8150 North Central Expressway, Suite 2000
> Dallas, TX 75206
> www.Istation.com
>

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Jan 02 2020 - 21:15:39 CET
