000032728 - Security Analytics | How to replace faulty disk on RSA Security Analytics Warehouse (SAW) node in RSA Security Analytics 10.3 and higher

Document created by RSA Customer Support Employee on Jun 30, 2016. Last modified by RSA Customer Support Employee on Apr 21, 2017.
Version 2

Article Content

Article Number: 000032728
Applies To: RSA Product Set: RSA Security Analytics
RSA Product/Service Type: Warehouse Node
RSA Version/Condition: 10.3 or higher
 
Issue: Replacing a faulty hard drive on a SAW node.
There is no foreign configuration that can be imported for the newly added disk.
Pulling the faulty hard disk and installing the new one will cause the new disk to be shown as Unconfigured Good.
Cause: On SAW nodes, each hard disk is presented to the RAID controller as a separate virtual drive.
Each hard disk must be configured as its own RAID 0 virtual drive.
There is no foreign configuration to import because each disk is a member of its own RAID 0 virtual drive only.
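This layout can be verified from the RAID controller itself. As a minimal check (assuming the standard MegaCli utility is installed on the node), list the virtual drives on all adapters; each virtual drive should report RAID level 0 and contain a single physical drive:
    MegaCli -LDInfo -Lall -aALL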
Resolution: Steps to replace a faulty hard drive on any SAW node:
  1. First, determine which hard disk has failed or is reporting a predictive failure. This can be done using the nwraidutil tool, which also provides the Adapter number, the Enclosure number, and the Slot number. You will also need to know the name of the hard disk under /dev (for example, /dev/sdx).
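    - For example, the physical drive states and their Adapter/Enclosure/Slot numbers can also be read directly from the RAID controller, and the block device names listed with lsblk. The nwraidutil path below is an assumption and may differ between releases:
    /opt/rsa/saTools/nwraidutil.pl    # path is an assumption; locate nwraidutil on your appliance
    MegaCli -PDList -aALL | grep -E "Adapter|Enclosure Device ID|Slot Number|Firmware state"
    lsblk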
  2. You will then need to remove this disk from the MapR storage pool by issuing the following command:
    maprcli disk remove -host <host> -disks <disks>


    - For example:
    maprcli disk remove -host saw_node_1 -disks /dev/sdx


    - Note that the output of this command will look something like the example below. It also removes /dev/sdy and /dev/sdz, since they are members of the same storage pool:
    maprcli disk remove -host saw_node_1 -disks /dev/sdx
    message   host               disk
    removed.  saw_node_1      /dev/sdx
    removed.  saw_node_1      /dev/sdy
    removed.  saw_node_1      /dev/sdz

  3. Now you can safely pull the faulty hard disk from the server.
  4. Insert the new disk, then determine its device name using the following command. It might still be named /dev/sdx, or it might be assigned a new name:
    dmesg | tail -100 | grep --color /dev

  5. Next, prepare the newly added disk as a RAID 0 virtual drive. The MegaCli binary can be located using "locate MegaCli":
    MegaCli -CfgForeign -Clear -aALL

    MegaCli -CfgLdAdd -r0 [E:S] -aN
    where E is the enclosure number, S is the slot number, and N is the adapter number.
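    - For example, if the replacement disk were in enclosure 32, slot 5, on adapter 0 (placeholder values only; use the numbers gathered in step 1):
    MegaCli -CfgForeign -Clear -aALL
    MegaCli -CfgLdAdd -r0 [32:5] -a0    # 32:5 and adapter 0 are placeholder values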

  6. Now that you have configured a RAID 0 virtual drive on the newly added disk, confirm that the disk is showing as Online in nwraidutil.
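    - The same check can be made directly with MegaCli (adapter 0 is only an example); the replacement drive should report a Firmware state of Online:
    MegaCli -PDList -a0 | grep "Firmware state"    # adapter 0 is an example value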
  7. Assuming that the new disk is also named /dev/sdx, issue the following command:
    maprcli disk add -host saw_node_1 -disks /dev/sdx,/dev/sdy,/dev/sdz

    - Note that /dev/sdy and /dev/sdz are members of the same storage pool and were removed automatically in step 2.
    - Do not forget to add them back in this step. Upon command execution, you should see output like the following:
    maprcli disk add -host saw_node_1 -disks /dev/sdx,/dev/sdy,/dev/sdz
    message  host               disk
    added.   saw_node_1        /dev/sdx
    added.   saw_node_1        /dev/sdy
    added.   saw_node_1        /dev/sdz

  8. You can confirm that the newly added storage pool (three disks in this case) is showing correctly using the following commands:
    /opt/mapr/server/mrconfig sp list -v
    maprcli disk list -host saw_node_1
    Make sure that the three disks added to the storage pool are showing as MapR-FS in the Filesystem column.

Notes: Do not change the order of the steps or pull the faulty disk before removing it from MapR, as this might corrupt the CLDB (Container Location Database).
