How to fix the Apache Hadoop "Name node is in safe mode" error

During startup, the Namenode loads the filesystem state from the fsimage and edits log files. It then waits for the datanodes to report their blocks, so that it does not prematurely start replicating blocks for which sufficient replicas already exist in the cluster. During this time, the Namenode stays in safe mode. Safe mode is essentially a read-only mode for the HDFS cluster: it does not allow any modifications to the file system or its blocks. Normally, the Namenode leaves safe mode automatically once the datanodes have reported enough blocks. If required, HDFS can also be placed in safe mode explicitly using the hdfs dfsadmin -safemode command (the older bin/hadoop dfsadmin form is deprecated). The Namenode web UI front page shows whether safe mode is on or off.
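For example, assuming the hdfs client is on your PATH, you can check and toggle safe mode from the command line with the standard subcommands of the HDFS CLI:

hdfs dfsadmin -safemode get    # report whether safe mode is ON or OFF
hdfs dfsadmin -safemode enter  # place the cluster in safe mode manually
hdfs dfsadmin -safemode leave  # take the cluster out of safe mode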

Solution:

One way to work around this is to manually move the namenode out of safe mode. Before doing so, make sure you know and understand why the namenode is stuck in safe mode by reviewing the status of all datanodes and the namenode logs; in some cases, manually disabling safe mode can lead to data loss. The checks below are a common starting point.
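A minimal set of checks, assuming the HDFS superuser account is named hdfs (adjust the user for your distribution):

sudo -u hdfs hdfs dfsadmin -report        # per-datanode status, capacity, and last contact
sudo -u hdfs hdfs dfsadmin -safemode get  # confirm safe mode is still ON
sudo -u hdfs hdfs fsck /                  # look for missing, corrupt, or under-replicated blocks

If the datanodes all look healthy and the block report is complete, you can then force the namenode out of safe mode: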
sudo -u hdfs hdfs dfsadmin -safemode leave
Note: You must run the command as the hdfs OS user, which is the default superuser for HDFS. Otherwise, you will encounter the following error: "Access denied for user Hadoop. Superuser privilege is required".
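If the cluster is simply slow to finish its block reports and you only need to block until safe mode ends on its own, the wait subcommand is a safer alternative to forcing the namenode out:

sudo -u hdfs hdfs dfsadmin -safemode wait  # blocks until safe mode is OFF, then returns

This is useful in startup scripts that must not write to HDFS before the namenode becomes writable.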