Resolution | Steps to replace a faulty hard drive on any SAW node:
- First, determine which hard disk is failing or has a predictive failure. This can be done using the nwraidutil tool; also note the Adapter number, the Enclosure number, and the Slot number. You will also need to know the name of the hard disk under /dev (for example /dev/sdx).
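If you want to cross-check which /dev name corresponds to the physical drive reported by nwraidutil, a quick look from the OS side can help. This is a minimal sketch assuming lsblk is available; /dev/sdx is only a placeholder device name:
# List block devices with size, model, and serial to match against the RAID tool's report
lsblk -o NAME,SIZE,MODEL,SERIAL
# Check the kernel log for recent I/O errors against the suspect device (sdx is a placeholder)
dmesg | grep -i sdx | tail -20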
- You will then need to remove this disk from the MapR storage pool by issuing the following command:
maprcli disk remove -host <host> -disks <disks>
For example:
maprcli disk remove -host saw_node_1 -disks /dev/sdx
Note that the output of this command will look something like the example below; /dev/sdy and /dev/sdz are removed as well, since they are members of the same storage pool:
maprcli disk remove -host saw_node_1 -disks /dev/sdx
message   host        disk
removed.  saw_node_1  /dev/sdx
removed.  saw_node_1  /dev/sdy
removed.  saw_node_1  /dev/sdz
- Now you can safely pull the failed hard disk physically from the server.
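To see which other disks shared a storage pool with the failing drive (these are the ones removed automatically by the command above), the mrconfig utility used again in the final verification step can list pool membership; the path below assumes a default MapR installation:
# List all storage pools on this node together with their member disks
/opt/mapr/server/mrconfig sp list -v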
- Now insert the new disk physically and get its name under /dev using the following command; it might still be named /dev/sdx or it might be given a new name:
dmesg | tail -100 | grep --color /dev
- Now prepare the newly added disk for RAID 0. The MegaCli command can be located using "locate MegaCli":
MegaCli -CfgForeign -Clear -aALL
MegaCli -CfgLdAdd -r0 [E:S] -aN
where E is the enclosure number, S is the slot number, and N is the adapter number (a concrete example with hypothetical values is sketched below).
- Now that you have configured a RAID 0 virtual drive on the newly added disk, confirm that the disk is showing as online in nwraidutil.
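For illustration, here is what the MegaCli sequence above might look like with hypothetical values of enclosure 32, slot 4, and adapter 0; substitute the numbers reported by nwraidutil for your drive:
# Clear any foreign configuration left over on the replacement disk (all adapters)
MegaCli -CfgForeign -Clear -aALL
# Create a single-drive RAID 0 virtual drive; 32:4 and adapter 0 are example values only
MegaCli -CfgLdAdd -r0 [32:4] -a0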
- Assuming that the new disk is named /dev/sdx as well, we will now need to issue the following command:
maprcli disk add -host saw_node_1 -disks /dev/sdx,/dev/sdy,/dev/sdz
- Note that /dev/sdy and /dev/sdz are members of the same storage pool and were removed automatically in the removal step above; do not forget to add them back in this step. Upon command execution you should see something like the following:
maprcli disk add -host saw_node_1 -disks /dev/sdx,/dev/sdy,/dev/sdz
message  host        disk
added.   saw_node_1  /dev/sdx
added.   saw_node_1  /dev/sdy
added.   saw_node_1  /dev/sdz
- You can confirm that the disks of the newly added storage pool (3 disks in our case) are showing correctly using the following commands:
/opt/mapr/server/mrconfig sp list -v
maprcli disk list -host saw_node_1
Make sure that the 3 disks added to the storage pool are showing as MapR-FS in the Filesystem column.
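To narrow the listing down to just the three re-added disks, the output can be filtered; this assumes the device paths appear in the listing and that /dev/sdx, /dev/sdy, and /dev/sdz are still the example names used throughout this article:
# Show only the three re-added disks and check that the Filesystem column reads MapR-FS
maprcli disk list -host saw_node_1 | grep -E '/dev/sd[xyz]'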
|