HDFS enters safe mode while running AsterSpark*_hadoop.sh script - Aster Analytics

Teradata Aster® Spark Connector User Guide

Product
Aster Analytics
Release Number
7.00.00.01
Published
May 2017
Language
English (United States)
Last Update
2018-04-13
dita:mapPath
dbt1482959363906.ditamap
dita:ditavalPath
Generic_no_ie_no_tempfilter.ditaval
dita:id
dbt1482959363906
lifecycle
previous
Product Category
Software
Problem: Running an AsterSpark*_hadoop.sh script fails with this message:
hadoop fs -mkdir -p /user/sparkJobSubmitter/tmp ... Cannot create directory ... Name node is in safe mode.

Reason: The Hadoop nodes do not have enough free disk space, so the NameNode has entered safe mode and rejects write operations such as directory creation.
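Before applying the solution, you can confirm the diagnosis. A minimal sketch, assuming the hdfs CLI is on the PATH of the machine you run it from (the guard lets the snippet degrade gracefully elsewhere):

```shell
# Diagnose safe mode and disk space on a Hadoop node.
if command -v hdfs >/dev/null 2>&1; then
    # Check whether the NameNode is currently in safe mode
    hdfs dfsadmin -safemode get
    # Report cluster capacity; look for low "DFS Remaining" values
    hdfs dfsadmin -report | grep -i remaining
else
    echo "hdfs CLI not found on this machine"
fi
# Local filesystem usage on this node
df -h /
```

If `-safemode get` reports "Safe mode is ON" and the report shows little remaining DFS capacity, the cause described above applies.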

Solution:
  1. Free disk space on the Hadoop nodes.
  2. Take the NameNode out of safe mode, using this command:
    hdfs dfsadmin -safemode leave
  3. To prevent a recurrence, ensure that Hadoop cleans up cached files before the nodes run out of disk space by tuning these YARN properties:
    yarn.nodemanager.localizer.cache.cleanup.interval-ms
    yarn.nodemanager.localizer.cache.target-size-mb
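The two properties above control how often the NodeManager checks its localizer cache for cleanup and how large that cache may grow. A sketch of how they might be set in yarn-site.xml; the values shown are Hadoop's defaults (10 minutes and 10 GB) and are illustrative, not tuning recommendations:

```xml
<!-- yarn-site.xml: illustrative values, tune for your cluster -->
<property>
  <!-- How often (in ms) the NodeManager checks the localizer cache for cleanup -->
  <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
  <value>600000</value>
</property>
<property>
  <!-- Target maximum size (in MB) of the localizer cache per NodeManager -->
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <value>10240</value>
</property>
```

Lowering the interval or the target size makes the NodeManager reclaim cache space more aggressively, at the cost of re-localizing files more often.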