000033266 - Vulnerability Risk Management 1.1 SP1.X Hadoop node tmp utilization - "toBeDeleted" directory full

Document created by RSA Customer Support Employee on Jun 14, 2016. Last modified by RSA Customer Support Employee on Apr 22, 2017.
Version 2

Article Content

Article Number000033266
Applies ToRSA Product Set: Security Management
RSA Product/Service Type: Vulnerability Risk Manager
RSA Version/Condition: 1.1 SP1.X
Platform: CentOS
IssueThe /tmp directory is 100% utilized on one or more nodes. Most of the space is consumed by the mapr-hadoop directory; specifically, the Distcache content under /tmp/mapr-hadoop/mapred/local/toBeDeleted/ is using the most space.
CauseThis is a known bug with certain versions of MapR 3.1.X. 
Link to a MapR Community page describing the issue: https://community.mapr.com/thread/7604
ResolutionUpgrade to VRM 1.2, which uses MapR 4.1.
WorkaroundIf upgrading to VRM 1.2 is not an option, you can attempt the following workaround; note that upgrading to VRM 1.2 is the preferred method.
If needed, create a cron job to clean the unneeded Hadoop files/logs using the instructions below.
This needs to be done on all 3 nodes:
1- Run crontab -e
2- Copy the lines below and paste them into the cron file opened in step 1

# This is a scheduled cron job to clean all unneeded Hadoop files/logs.
# Scheduled to run every Sunday at 12 PM.

0 12 * * 0 rm -rf /tmp/mapr-hadoop/mapred/local/toBeDeleted/*
0 12 * * 0 rm -rf /tmp/mapr-hadoop/mapred/local/jobTracker/*
0 12 * * 0 rm -rf /tmp/mapr-hadoop/mapred/local/tt_log_tmp/*
0 12 * * 0 rm -rf /tmp/mapr-hadoop/mapred/local/taskTracker/root/jobcache/*
0 12 * * 0 rm -rf /tmp/mapr-hadoop/mapred/local/ttprivate/taskTracker/root/jobcache/*
0 12 * * 0 rm -rf /opt/mapr/hadoop/hadoop-0.20.2/logs/history/*
0 12 * * 0 rm -rf /opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs/*

3- Once you have pasted the lines above, save the cron job by pressing <ESC> and then typing :wq!
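Alternatively, the entries can be installed non-interactively, which avoids editing with crontab -e on each node. The sketch below builds a crontab file and leaves the final activation step commented out for review; the file path /tmp/hadoop_cleanup.cron is an arbitrary choice for this example, not part of the product.

```shell
# Sketch: build the weekly cleanup crontab non-interactively.
# Run as root on each of the 3 nodes. The target file name is arbitrary.
CRON_FILE=/tmp/hadoop_cleanup.cron

{
  crontab -l 2>/dev/null        # preserve any existing entries
  cat <<'EOF'
# Weekly cleanup of unneeded Hadoop files/logs (Sundays at 12 PM)
0 12 * * 0 rm -rf /tmp/mapr-hadoop/mapred/local/toBeDeleted/*
0 12 * * 0 rm -rf /tmp/mapr-hadoop/mapred/local/jobTracker/*
0 12 * * 0 rm -rf /tmp/mapr-hadoop/mapred/local/tt_log_tmp/*
0 12 * * 0 rm -rf /tmp/mapr-hadoop/mapred/local/taskTracker/root/jobcache/*
0 12 * * 0 rm -rf /tmp/mapr-hadoop/mapred/local/ttprivate/taskTracker/root/jobcache/*
0 12 * * 0 rm -rf /opt/mapr/hadoop/hadoop-0.20.2/logs/history/*
0 12 * * 0 rm -rf /opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs/*
EOF
} > "$CRON_FILE"

# After reviewing the file, activate it with:
# crontab "$CRON_FILE"
```

Reviewing the generated file before running crontab on it gives a chance to drop any directories that should not be cleaned (see the notes below).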


1. Schedule the cron job for a time when the cluster is not running or processing any workflow, as it is not clear what Hadoop is doing with those files while jobs are still running.
2. Check the directories first, and only include in the cron job the directories that contain a huge number of files. (Do not delete every single log file.)
3. Make sure that MapR is not using more than 10 GB in the /tmp folder.
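The checks in notes 2 and 3 can be sketched as below, assuming standard findutils/coreutils on CentOS. The inspect_dir helper name is ours for illustration, not part of VRM or MapR.

```shell
# Sketch: report file count and disk usage for each candidate directory,
# so only directories with a huge number of files are added to the cron job.
# inspect_dir is a hypothetical helper, not part of VRM or MapR.
inspect_dir() {
  [ -d "$1" ] || return 0   # skip directories that do not exist
  printf '%s\t%s files\t%s\n' \
    "$1" "$(find "$1" -type f | wc -l)" "$(du -sh "$1" | cut -f1)"
}

for dir in \
  /tmp/mapr-hadoop/mapred/local/toBeDeleted \
  /tmp/mapr-hadoop/mapred/local/jobTracker \
  /tmp/mapr-hadoop/mapred/local/tt_log_tmp \
  /tmp/mapr-hadoop/mapred/local/taskTracker/root/jobcache \
  /tmp/mapr-hadoop/mapred/local/ttprivate/taskTracker/root/jobcache \
  /opt/mapr/hadoop/hadoop-0.20.2/logs/history \
  /opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs
do
  inspect_dir "$dir"
done

# MapR should not use more than 10 GB in /tmp:
if [ -d /tmp/mapr-hadoop ]; then du -sh /tmp/mapr-hadoop; fi
```

Directories that show only a small number of files can be left out of the cron job entirely.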