The following error appears in the HiveServer2 logs when Hive reports for the warehouse fail to run. SSH to a warehouse node and inspect the HiveServer2 log for entries like these:
2019-07-03 13:24:10,727 ERROR [HiveServer2-Background-Pool: Thread-8156]: fs.MapRFileSystem (MapRFileSystem.java:delete(1060)) - Failed to delete path maprfs:/user/mapr/tmp/hive/mapr/27c5166b-7f74-475f-adf1-c02e47bcfc55/hive_2019-07-03_13-24-07_196_8853042487779820155-5/_task_tmp.-ext-10001, error: No such file or directory (2)
2019-07-03 13:24:10,727 ERROR [HiveServer2-Background-Pool: Thread-8156]: ql.Driver (SessionState.java:printError(836)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2019-07-03 13:24:10,727 INFO [HiveServer2-Background-Pool: Thread-8156]: log.PerfLogger (PerfLogger.java:PerfLogEnd(135)) - </PERFLOG method=Driver.execute start=1562160247235 end=1562160250727 duration=3492 from=org.apache.hadoop.hive.ql.Driver>
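To confirm the failure signature quickly, you can grep the HiveServer2 log for the two error lines above. This is a minimal sketch: the log path is a placeholder, and the sample file is created only so the snippet is self-contained -- point LOG at your real HiveServer2 log instead.

```shell
# Scan a HiveServer2 log for the two failure signatures shown above.
# LOG is a placeholder path -- replace it with your actual HiveServer2 log.
LOG="${1:-/tmp/hive.log.sample}"

# Build a tiny sample log so the sketch runs anywhere; skip this step
# when pointing at a real log file.
if [ ! -f "$LOG" ]; then
    cat > "$LOG" <<'EOF'
2019-07-03 13:24:10,727 ERROR fs.MapRFileSystem - Failed to delete path maprfs:/user/mapr/tmp/hive/mapr/...
2019-07-03 13:24:10,727 ERROR ql.Driver - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
EOF
fi

# Print every line matching either signature.
grep -E "Failed to delete path|return code 2" "$LOG"
```

Both patterns matching in the same time window is a strong hint that the scratch-directory cleanup described below is the cause.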
If Hive reports are not run for an extended period (around 10 days or more) and are then started again, the underlying YARN jobs can fail and the reports will return the error above.
On a MapR cluster that has been left running but idle for an extended period, this issue appears as a "No such file or directory" error. The /user/mapr/tmp/ directory no longer exists, and yet the mapr user has no trouble recreating it. This is a side effect of the tmpwatch utility, which runs daily on CentOS systems and cleans up files under /user/mapr/tmp/hive/mapr that have not been accessed recently.
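To verify whether the scratch directory has been removed, and to recreate it manually before restarting services, something like the following can be used. This is a hedged sketch: the directory path, the use of the `hadoop fs` shell, and the 777 permissions are assumptions -- adjust them to your cluster's layout and security policy.

```shell
# Check whether the Hive scratch directory survived the tmpwatch cleanup,
# and recreate it if it is missing. Run as the mapr user.
check_scratch_dir() {
    dir="$1"   # e.g. /user/mapr/tmp/hive -- path is an assumption
    # Degrade gracefully on hosts without the hadoop CLI installed.
    if ! command -v hadoop >/dev/null 2>&1; then
        echo "hadoop CLI not found; run this on a cluster node"
        return 0
    fi
    if hadoop fs -test -d "$dir"; then
        echo "$dir exists"
    else
        echo "$dir missing; recreating"
        hadoop fs -mkdir -p "$dir"
        hadoop fs -chmod 777 "$dir"   # scratch dirs are typically world-writable
    fi
}

check_scratch_dir /user/mapr/tmp/hive
```

Recreating the directory by hand works because, as noted above, the mapr user has no trouble creating it; the NodeManager restart below is still required so jobs pick it up.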
The NodeManager service on MapR clusters does not re-create the top-level Hadoop.user.dir when it launches a job, so you must restart the NodeManager service on every node in the cluster.
SSH to any warehouse node and run the following commands to restart the NodeManager service:

# maprcli node services -name nodemanager -nodes <node1,node2,node3> -action stop
# maprcli node services -name nodemanager -nodes <node1,node2,node3> -action start
where node1, node2, node3, etc. are the hostnames of the warehouse nodes.
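The two maprcli commands above can be wrapped in a small helper so both steps run in one pass. This is a sketch under assumptions: the node list is a placeholder for your warehouse hostnames, and the guard simply reports when the script is run on a host without the MapR CLI.

```shell
# Restart the NodeManager service on a comma-separated list of nodes.
restart_nodemanager() {
    nodes="$1"   # e.g. "node1,node2,node3" -- placeholder hostnames
    # Degrade gracefully on hosts that do not have the MapR CLI installed.
    if ! command -v maprcli >/dev/null 2>&1; then
        echo "maprcli not found; run this on a MapR cluster node"
        return 0
    fi
    maprcli node services -name nodemanager -nodes "$nodes" -action stop
    maprcli node services -name nodemanager -nodes "$nodes" -action start
}

restart_nodemanager "node1,node2,node3"
```

Stopping all listed nodes before starting them keeps the restart consistent across the cluster, matching the stop/start order shown above.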