RE: Oracle ASM disk corruption

From: Hameed, Amir <Amir.Hameed_at_xerox.com>
Date: Wed, 29 Jul 2020 01:40:38 +0000
Message-ID: <DM6PR11MB3483CF72EC3911F96B07F16BF4700_at_DM6PR11MB3483.namprd11.prod.outlook.com>



Hi Mladen,
Thank you for your input. I am using udev, not ASMLIB, to manage these devices. I had an SA re-initialize the device and clean it up with dd (a sketch of that cleanup follows the listing below). Once that was done, the mount and header statuses changed to CLOSED and CANDIDATE respectively, as shown below:

                                                      OS disk Space   Space              Disk
                Mount   Header       Mode    Disk     Size    Total   Free    ASM Disk   Failgroup                                      Vote
Ins# Grp# Disk# Status  Status       Status  State    (MB)    (MB)    (MB)    Name       Name       Disk path                      SS   file
---- ---- ----- ------- ------------ ------- -------- ------- ------- ------- ---------- ---------- ------------------------------ ---- ----
   1    0     0 CLOSED  CANDIDATE    ONLINE  NORMAL    20,490       0       0                       /dev/oracleasm/grid/asmgrid01   512 N
        2     0 CACHED  MEMBER       ONLINE  NORMAL    20,490  20,480   9,987 GRID_0000  GRID_0000  /dev/oracleasm/grid/asmgrid03   512 Y
        2     1 CACHED  MEMBER       ONLINE  NORMAL    20,490  20,480   9,987 GRID_0001  GRID_0001  /dev/oracleasm/grid/asmgrid02   512 Y
   2    0     0 CLOSED  CANDIDATE    ONLINE  NORMAL    20,490       0       0                       /dev/oracleasm/grid/asmgrid01   512 N
        2     0 CACHED  MEMBER       ONLINE  NORMAL    20,490  20,480   9,987 GRID_0000  GRID_0000  /dev/oracleasm/grid/asmgrid03   512 Y
        2     1 CACHED  MEMBER       ONLINE  NORMAL    20,490  20,480   9,987 GRID_0001  GRID_0001  /dev/oracleasm/grid/asmgrid02   512 Y
   3    0     0 CLOSED  CANDIDATE    ONLINE  NORMAL    20,490       0       0                       /dev/oracleasm/grid/asmgrid01   512 N
        2     0 CACHED  MEMBER       ONLINE  NORMAL    20,490  20,480   9,987 GRID_0000  GRID_0000  /dev/oracleasm/grid/asmgrid03   512 Y
        2     1 CACHED  MEMBER       ONLINE  NORMAL    20,490  20,480   9,987 GRID_0001  GRID_0001  /dev/oracleasm/grid/asmgrid02   512 Y
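
For reference, the dd cleanup mentioned above is typically something along these lines (a sketch, not the SA's exact command; the 100 MB count is illustrative and simply needs to cover the ASM disk header region at the start of the device):

# run as root; zeroes the first 100 MB of the device
dd if=/dev/zero of=/dev/oracleasm/grid/asmgrid01 bs=1M count=100 oflag=direct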

I tried to add the disk but got the same result (this syntax was suggested by the Oracle engineer):

SQL> ALTER DISKGROUP GRID ADD DISK '/dev/oracleasm/grid/asmgrid01' size 2048M rebalance power 11
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15410: Disks in disk group GRID do not have equal size.
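
One detail that may be worth checking (my own observation, not something from the SR): the existing members show 20,480 MB usable, while the ADD DISK command declares SIZE 2048M. If that explicit SIZE clause is what ASM is comparing, omitting it and letting ASM derive the size from the device itself would sidestep the mismatch:

SQL> ALTER DISKGROUP GRID ADD DISK '/dev/oracleasm/grid/asmgrid01' REBALANCE POWER 11;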

The following command shows that these devices have the same size:

+ASM1> blockdev -v --getsize64 /dev/oracleasm/grid/asmgrid01 /dev/oracleasm/grid/asmgrid02 /dev/oracleasm/grid/asmgrid03

get size in bytes: 21485574144
get size in bytes: 21485574144
get size in bytes: 21485574144
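
As a sanity check, that byte count converts to the same 20,490 MB reported in the OS disk size column above:

+ASM1> echo $((21485574144 / 1024 / 1024))
20490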

The Oracle SR is now back to Work In Progress status.

Thank you,
Amir
From: Mladen Gogala <gogala.mladen_at_gmail.com>
Sent: Tuesday, July 28, 2020 5:57 PM
To: Hameed, Amir <Amir.Hameed_at_xerox.com>; Mark W. Farnham <mwf_at_rsiz.com>; 'John Chacho' <jchacho_at_gmail.com>
Cc: oracle-l_at_freelists.org
Subject: Re: Oracle ASM disk corruption

Hi Amir!

Please delete the device using oracleasm and then clean it up with dd. I don't like oracleasm because it writes its own header to the device and therefore changes the usable size. I prefer a plain SCSI device setup through udev rules, as described here:

https://oracle-base.com/articles/linux/udev-scsi-rules-configuration-in-oracle-linux
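
For illustration, a rule of the kind that article describes looks roughly like this (a sketch: the RESULT string is a placeholder for the LUN's WWID as reported by scsi_id, and the owner/group assume a typical grid install):

# /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<wwid-of-the-lun>", SYMLINK+="oracleasm/grid/asmgrid01", OWNER="grid", GROUP="asmadmin", MODE="0660"

After editing the rules file, udevadm control --reload-rules followed by udevadm trigger re-applies the rules without a reboot.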

Regards
On 7/28/20 1:30 PM, Hameed, Amir wrote:

*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15410: Disks in disk group GRID do not have equal size.

This is confusing because all of the disks allocated to the GRID disk group are the same size. After I added '/dev/oracleasm/grid/asmgrid01' to TEMPDG, GV$ASM_DISK.TOTAL_MB showed the same size for it as for '/dev/oracleasm/grid/asmgrid02' and '/dev/oracleasm/grid/asmgrid03'.
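
(For reference, that comparison can be pulled with a query along these lines; all columns are standard GV$ASM_DISK columns:)

SELECT inst_id, group_number, disk_number, os_mb, total_mb, free_mb, path
  FROM gv$asm_disk
 ORDER BY inst_id, group_number, disk_number;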

Thanks,
Amir

--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217

--
http://www.freelists.org/webpage/oracle-l