000032552 - How to add new RSA Security Analytics Warehouse (SAW) node into existing SAW cluster

Document created by RSA Customer Support Employee on Jul 4, 2016. Last modified by RSA Customer Support Employee on Apr 21, 2017.
Version 5

Article Content

Article Number: 000032552
Applies To: RSA Product Set: Security Analytics
RSA Product/Service Type: Security Analytics Warehouse
RSA Version/Condition: 10.3.x, 10.4.x, 10.5.x
Platform: CentOS
O/S Version: 5, 6
Issue
This article describes how to add a SAW node to an existing SAW cluster. There are two scenarios where this may be required:
  1. A new SAW node needs to be added to the SAW cluster.
  2. An existing SAW node has been replaced (RMA'd), and the replacement node needs to be added back into the SAW cluster.
Tasks
Checklist for the node being added:
      1. Verify the IP address and subnet of the new SAW node. All existing SAW nodes must be reachable (for example, by ping) from the new SAW node.
      2. Make sure "hostname -f" returns the fully qualified domain name (FQDN) by configuring the hostname in the appropriate configuration files.
After the above configuration, the "hostname -f" output on the new node shows the FQDN.
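A minimal sketch of the hostname configuration, assuming the standard CentOS 5/6 files (/etc/sysconfig/network and /etc/hosts) and a hypothetical node saw-node4.example.com at 192.168.1.14; substitute the node's actual FQDN and IP address:

```shell
# /etc/sysconfig/network -- set the FQDN (hypothetical example value):
#   HOSTNAME=saw-node4.example.com
#
# /etc/hosts -- map the node's IP to its FQDN and short name:
#   192.168.1.14   saw-node4.example.com   saw-node4

# Apply the new name without a reboot, then verify:
hostname saw-node4.example.com
hostname -f    # should print saw-node4.example.com
```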

      3. The UID and GID of the "mapr" user should be 2147483632. Use "id -u mapr" for the UID and "id -g mapr" for the GID on the new appliance.

If the new node has a different UID or GID, use the commands below to correct them.

usermod -u <NEWUID> <LOGIN>
groupmod -g <NEWGID> <GROUP>
find / -user <OLDUID> -exec chown -h <NEWUID> {} \;
find / -group <OLDGID> -exec chgrp -h <NEWGID> {} \;
usermod -g <NEWGID> <LOGIN>


In the commands above, replace <NEWUID> with 2147483632, <NEWGID> with 2147483632, <LOGIN> with mapr, <OLDUID> with 501, and <OLDGID> with 501 (a new ISO image may have 501 as the old UID/GID).
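With those values substituted in, the sequence looks like the following. Note that 501 is only the typical default on a fresh ISO image; confirm the actual old values with "id mapr" before running the find commands:

```shell
# Confirm the current (old) UID/GID of mapr before renumbering.
id -u mapr && id -g mapr

# Renumber the user and group to the required MapR value.
usermod -u 2147483632 mapr
groupmod -g 2147483632 mapr

# Re-own any files still carrying the old IDs (501 assumed here).
find / -user 501 -exec chown -h 2147483632 {} \;
find / -group 501 -exec chgrp -h 2147483632 {} \;

# Point the user's primary group at the new GID.
usermod -g 2147483632 mapr

# Both commands should now print 2147483632.
id -u mapr && id -g mapr
```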

      4. Verify that the "/opt" and "/tmp" mounts allow suid in the /etc/fstab file.

Locate the mount point entries for /opt and /tmp in /etc/fstab. The following are example entries for /opt and /tmp:

/dev/mapper/VolGroup01-optlv /opt ext4 nosuid 1 2

/dev/mapper/VolGroup01-tmplv /tmp ext4 nosuid 1 2

Change nosuid to suid in the /etc/fstab file, as shown below:
/dev/mapper/VolGroup01-optlv /opt ext4 suid 1 2
/dev/mapper/VolGroup01-tmplv /tmp ext4 suid 1 2
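The edit can be scripted with sed. The sketch below previews the change against a sample copy first; the device names are taken from the example entries above, so adjust them to the node's actual fstab lines before running against the real file:

```shell
# Sample /etc/fstab entries, matching the example above.
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/VolGroup01-optlv /opt ext4 nosuid 1 2
/dev/mapper/VolGroup01-tmplv /tmp ext4 nosuid 1 2
EOF

# Flip nosuid to suid on the /opt and /tmp lines only.
sed -i -e '\|/opt ext4|s/nosuid/suid/' -e '\|/tmp ext4|s/nosuid/suid/' /tmp/fstab.sample
cat /tmp/fstab.sample

# Once the preview looks right, run the same sed against /etc/fstab
# (back it up first) and remount:
#   cp /etc/fstab /etc/fstab.bak
#   sed -i -e '\|/opt ext4|s/nosuid/suid/' -e '\|/tmp ext4|s/nosuid/suid/' /etc/fstab
#   mount -o remount /opt && mount -o remount /tmp
```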

      5. Clean up old data from the image:

  • Remove the ZooKeeper data using "rm -rf /opt/mapr/zkdata/*"
  • Remove the host ID using "rm -f /opt/mapr/hostid"
Information needed to add the new node into the SAW cluster:


     1. List of CLDB nodes in the SAW cluster:
  • Log in via SSH to an existing SAW node and use the "maprcli node cldbmaster" command to find the CLDB master node.

  • Log in to the CLDB master node and use "cat /opt/mapr/conf/mapr-clusters.conf" to get the details.


  • The output has the format: <Cluster_Name> secure=false <cldb_node1>:7222
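As a sketch, the cluster name and CLDB node list can be pulled out of a line in that format with awk. The sample line below is hypothetical; use the actual contents of mapr-clusters.conf on your node:

```shell
# Hypothetical mapr-clusters.conf line, in the format shown above:
#   <Cluster_Name> secure=false <cldb_node1>:7222
conf_line='SAWCluster secure=false saw-node1.example.com:7222'

# Field 1 is the cluster name; fields 3 onward are the CLDB node:port entries.
cluster_name=$(echo "$conf_line" | awk '{print $1}')
cldb_nodes=$(echo "$conf_line" | awk '{for (i = 3; i <= NF; i++) print $i}')

echo "Cluster name: $cluster_name"
echo "CLDB nodes:   $cldb_nodes"
```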

      2. List of ZooKeeper nodes in the SAW cluster:

  • Log in to the CLDB master node and use "cat /opt/mapr/conf/cldb.conf | grep zoo" to get the ZooKeeper node list.

      3. Cluster name: already found in step 1.
      4. List of disks to be added on the node: use the "lsblk" command to identify unmounted/unformatted disks available for cluster configuration.


  • If, for example, the lsblk output shows that sda has partitions while sdb has none, sdb is the disk available to add to the cluster.
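One way to narrow the lsblk output down to candidate disks is to keep only devices of type "disk" that have neither partitions nor a mount point. A sketch against sample output (device names hypothetical; run lsblk on the node itself):

```shell
# Sample `lsblk -rn -o NAME,TYPE,MOUNTPOINT` output (hypothetical):
lsblk_out='sda disk
sda1 part /boot
sda2 part /
sdb disk'

# Keep unmounted "disk" devices that own no partitions.
candidates=$(echo "$lsblk_out" | awk '
  $2 == "part" { p = $1; sub(/[0-9]+$/, "", p); has_part[p] = 1 }
  $2 == "disk" && $3 == "" { disks[$1] = 1 }
  END { for (d in disks) if (!(d in has_part)) print "/dev/" d }')

echo "$candidates"
```

In this sample, sda is excluded because it carries the sda1 and sda2 partitions, leaving /dev/sdb as the candidate.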
      5. rpm packages that need to be installed (only required if the new node is a CLDB node).

      6. Prepare a file "/root/mydisks" on the new SAW node listing the disks to be added to HDFS, one disk per line.
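For example, if lsblk showed that /dev/sdb is the only free disk (device name hypothetical), the file would contain a single line:

```shell
# Write the disk list file read by configure.sh -F (one device per line).
cat > /root/mydisks <<'EOF'
/dev/sdb
EOF

cat /root/mydisks
```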

Resolution
Follow the steps below to add the node into the SAW cluster.
      1. Run the below command on the new SAW node:
/opt/mapr/server/configure.sh -C <CLDB_NODES> -Z <ZOOKEEPER_NODES> -N <CLUSTER_NAME> -F /root/mydisks --create-user

      2. Verify that the new node is configured properly using "hadoop fs -ls /".
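Putting the gathered information together, a filled-in invocation might look like the following. The hostnames and cluster name are hypothetical placeholders; 7222 and 5181 are the usual MapR CLDB and ZooKeeper ports:

```shell
# Join the new node to the cluster and format the disks listed in
# /root/mydisks (all values below are example placeholders).
/opt/mapr/server/configure.sh \
    -C saw-node1.example.com:7222 \
    -Z saw-node1.example.com:5181,saw-node2.example.com:5181,saw-node3.example.com:5181 \
    -N SAWCluster \
    -F /root/mydisks \
    --create-user

# Verify the node can see the cluster file system.
hadoop fs -ls /
```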