Friday, August 12, 2016


Storage addition (LUNs) in RAC - 4 node setup
Brief Steps:-
·         Storage team will assign the LUNs to all the nodes in the cluster.
·         Unix team will re-scan the multipath config to identify the newly added LUNs.
→ If these LUNs are not visible, there are 2 options:
1.       Restart the multipath daemon services.
2.       Reboot the server completely.
You can check for the new LUNs under /dev/mapper (if it's a multipath config); otherwise, check under /dev/disk/by-id. A rough rescan sketch follows.
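For reference, a rescan on a setup like this might look something like the sketch below (HBA host numbers vary per server, and on newer systemd-based releases you would restart multipathd via systemctl instead):

echo "- - -" > /sys/class/scsi_host/host0/scan    # rescan the SCSI bus; repeat for each HBA (host1, host2, ...)
service multipathd restart                        # option 1: bounce the multipath daemon
multipath -ll                                     # the new LUNs should now be listed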
DBA Activities:-
→ Cross-check the LUNs against their LUN IDs on all nodes
→ Disk partitioning using fdisk (on any one node, e.g., abc1 in my case)
→ After partitioning, the partitions won't be visible on the other nodes; we have to use the 'kpartx -a' command to add them
→ Label the disks using the ASM library (ASMLib) tools


Upon getting confirmation from both the storage and UNIX teams, we must cross-check the below things on all nodes in the cluster.
Under /dev/mapper, check that all the LUNs have the same multipath names and LUN IDs on every node. Below is an example:

First Node: abc1:root@[/dev/mapper]
------------
# multipath -ll|grep mpath2
mpath225 (3600601607c4133005386b1d6cf5ee611) dm-151 DGC,VRAID
mpath224 (3600601607c413300e8b4e365d35ee611) dm-158 DGC,VRAID
mpath223 (3600601607c413300e8394011d35ee611) dm-157 DGC,VRAID
mpath222 (3600601607c4133002e63b6c2d25ee611) dm-156 DGC,VRAID

Second Node: abc2:root@[/dev/mapper]
# multipath -ll | grep mpath2
mpath225 (3600601607c4133005386b1d6cf5ee611) dm-158 DGC,VRAID
mpath224 (3600601607c413300e8b4e365d35ee611) dm-157 DGC,VRAID
mpath223 (3600601607c413300e8394011d35ee611) dm-156 DGC,VRAID
mpath222 (3600601607c4133002e63b6c2d25ee611) dm-155 DGC,VRAID

Third node: abc3:root@[/dev/mapper]
# multipath -ll|grep mpath2
mpath225 (3600601607c4133005386b1d6cf5ee611) dm-158 DGC,VRAID
mpath224 (3600601607c413300e8b4e365d35ee611) dm-157 DGC,VRAID
mpath223 (3600601607c413300e8394011d35ee611) dm-156 DGC,VRAID
mpath222 (3600601607c4133002e63b6c2d25ee611) dm-155 DGC,VRAID

Fourth node: abc4:root@[/dev/mapper]
# multipath -ll|grep mpath2
mpath225 (3600601607c4133005386b1d6cf5ee611) dm-159 DGC,VRAID
mpath224 (3600601607c413300e8b4e365d35ee611) dm-158 DGC,VRAID
mpath223 (3600601607c413300e8394011d35ee611) dm-157 DGC,VRAID
mpath222 (3600601607c4133002e63b6c2d25ee611) dm-156 DGC,VRAID
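Rather than eyeballing four terminals, a small loop can pull and compare the mappings (a sketch, assuming passwordless root SSH between the nodes; we compare only the mpath name and WWID, since the dm-* minor numbers legitimately differ per node, as seen above):

for h in abc1 abc2 abc3 abc4; do
  ssh root@$h "multipath -ll | grep mpath2 | awk '{print \$1, \$2}' | sort" > /tmp/luns_$h.txt
done
diff /tmp/luns_abc1.txt /tmp/luns_abc2.txt    # repeat against abc3 and abc4; no output means they match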

Once cross-checking confirms that all the LUNs have matching LUN IDs on all the nodes, proceed with disk partitioning.
I'm demonstrating only the disk partitioning and ASM disk creation here.
ASM partitioning

# fdisk -l /dev/mapper/mpath222    (this is to double-check whether a partition already exists)

Disk /dev/mapper/mpath222: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mapper/mpath222 doesn't contain a valid partition table



# fdisk /dev/mapper/mpath222
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 6527.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): u
Changing display/entry units to sectors

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (63-104857599, default 63): 2048
Last sector or +size or +sizeM or +sizeK (2048-104857599, default 104857599):
Using default value 104857599

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

To make the partitions visible on each node, perform the below steps on every node:-

kpartx -a /dev/mapper/mpath222
kpartx -a /dev/mapper/mpath223
kpartx -a /dev/mapper/mpath224
kpartx -a /dev/mapper/mpath225
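Once kpartx has run, the partition maps should show up under /dev/mapper on that node; a quick check:

# ls -l /dev/mapper/mpath222p1 /dev/mapper/mpath223p1 /dev/mapper/mpath224p1 /dev/mapper/mpath225p1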

Disk Creation:
# /usr/sbin/oracleasm createdisk ORADISK74   /dev/mapper/mpath222p1
Writing disk header: done
Instantiating disk: done
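The scan output below picks up four disks, so assuming the remaining three labels map to the other partitions in numeric order (my assumption, going by the numbering), their creation would look like:

# /usr/sbin/oracleasm createdisk ORADISK75   /dev/mapper/mpath223p1
# /usr/sbin/oracleasm createdisk ORADISK76   /dev/mapper/mpath224p1
# /usr/sbin/oracleasm createdisk ORADISK77   /dev/mapper/mpath225p1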
--
Upon creation, run a scan on all the other nodes:
14:48:47 root@abc2: /dev/mapper
# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ORADISK74"
Instantiating disk "ORADISK75"
Instantiating disk "ORADISK76"
Instantiating disk "ORADISK77"
14:50:52 root@abc2: /dev/mapper
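You can also list the ASMLib labels on each node to confirm all four are visible (the grep here is just to narrow the output; each of ORADISK74-77 should appear):

# /usr/sbin/oracleasm listdisks | grep ORADISK7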

Just cross-verify:
# /usr/sbin/oracleasm querydisk -d ORADISK74
Disk "ORADISK74" is a valid ASM disk on device /dev/dm-164[253,164]
15:52:06 root@abc2: /dev/mapper

SQL> select PATH,HEADER_STATUS from v$asm_disk where HEADER_STATUS='PROVISIONED';

PATH                HEADER_STATU
------------------- ------------
ORCL:ORADISK77      PROVISIONED
ORCL:ORADISK74      PROVISIONED
ORCL:ORADISK75      PROVISIONED
ORCL:ORADISK76      PROVISIONED
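HEADER_STATUS = PROVISIONED means the disks are labeled for ASM but not yet part of any diskgroup. The eventual next step (beyond the scope of this demo, and assuming a diskgroup named DATA) would be something like:

SQL> ALTER DISKGROUP DATA ADD DISK 'ORCL:ORADISK74','ORCL:ORADISK75','ORCL:ORADISK76','ORCL:ORADISK77';

ASM then rebalances the existing data across the new disks automatically.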




Thanks!