Tuesday, November 14, 2023

What is SafeMode in Hadoop

When the NameNode starts in a Hadoop cluster, it performs the following tasks, and it stays in a state known as Safemode in Hadoop while they run.

  1. NameNode reads the FsImage and EditLog from disk, applies all the transactions from the EditLog to the in-memory representation of the FsImage, and flushes out this new version into a new FsImage on disk.
  2. NameNode receives block reports from the DataNodes in the cluster.
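As a rough illustration of the first task, here is a toy Python sketch (not actual NameNode code; the function name and data shapes are invented for this example) of replaying logged transactions on top of the last checkpoint and producing a new one:

```python
# Toy model of NameNode startup: load the last checkpoint (FsImage),
# replay the EditLog transactions on top of it in memory, and the
# merged result becomes the new FsImage flushed to disk.

def replay_editlog(fsimage, editlog):
    """Apply logged transactions to the in-memory namespace."""
    namespace = dict(fsimage)          # in-memory representation of FsImage
    for op, path in editlog:
        if op == "create":
            namespace[path] = {"blocks": []}
        elif op == "delete":
            namespace.pop(path, None)
    return namespace                   # written out as the new FsImage

fsimage = {"/data/a.txt": {"blocks": []}}
editlog = [("create", "/data/b.txt"), ("delete", "/data/a.txt")]
new_fsimage = replay_editlog(fsimage, editlog)
print(sorted(new_fsimage))             # ['/data/b.txt']
```

The point of the merge is that the next startup can begin from the fresh FsImage instead of replaying a long EditLog again.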

Restrictions during Safemode

While the NameNode is in Safemode, no write operations can be performed by any client application. Only read-only operations, like listing the files in a directory, are allowed during that period. Any write operation attempted during that time results in a SafeModeException with the message "Name node is in safe mode".

If the NameNode did not wait until it received enough block reports from the DataNodes in the cluster, it would start unnecessarily re-replicating blocks to the DataNodes right after startup. That is why it is important that the NameNode stays in Safemode until the stated tasks are finished.

When does NameNode exit SafeMode

After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds, which is also configurable), the NameNode exits the Safemode state automatically.
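As a toy sketch (not NameNode internals), the automatic exit check can be modeled with the safemode parameters described in the configuration section below, behaving the way their documentation defines them:

```python
# Toy sketch of the safemode exit condition. Parameter names mirror the
# hdfs-site.xml properties; the function itself is invented for illustration.

def can_leave_safemode(reported_blocks, total_blocks, live_datanodes,
                       threshold_pct=0.999,   # dfs.namenode.safemode.threshold-pct
                       min_datanodes=0):      # dfs.namenode.safemode.min.datanodes
    """True once both thresholds are met. The real NameNode then still
    waits dfs.namenode.safemode.extension ms (30000 by default) before
    actually leaving safemode."""
    if total_blocks == 0:
        blocks_ok = True   # threshold <= 0 or nothing to report: no waiting
    else:
        blocks_ok = reported_blocks / total_blocks >= threshold_pct
    return blocks_ok and live_datanodes >= min_datanodes

print(can_leave_safemode(998, 1000, 3))   # False - only 99.8% reported
print(can_leave_safemode(1000, 1000, 3))  # True
```

Note how a value just under the 99.9% default keeps the NameNode in safemode, which is why a few dead DataNodes after a restart can hold the whole cluster in safemode.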

Configuration for Safemode

Safemode is controlled by the following configuration parameters, which go in the configuration file hdfs-site.xml.

dfs.namenode.safemode.threshold-pct - Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by the dfs.namenode.replication.min parameter. Values less than or equal to 0 mean not to wait for any particular percentage of blocks before exiting safemode. Values greater than 1 will make safemode permanent. Default value is 0.999f, or 99.9%, which means 99.9% of blocks should satisfy the minimal replication requirement.

dfs.namenode.safemode.extension - Determines how long the NameNode remains in safemode, in milliseconds, after the threshold level is reached. Default value is 30000 milliseconds, or 30 seconds.

dfs.namenode.safemode.min.datanodes - Specifies the number of DataNodes that must be considered alive before the NameNode exits safemode. Values less than or equal to 0 mean not to take the number of live DataNodes into account when deciding whether to remain in safemode during startup. Values greater than the number of DataNodes in the cluster will make safemode permanent. Default value is 0, so by default the number of live DataNodes is not taken into account.
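Putting the three parameters together, an hdfs-site.xml fragment would look like this (the values shown are the defaults described above; you would only add these entries to override them):

```xml
<!-- hdfs-site.xml: safemode settings (values shown are the defaults) -->
<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.999f</value>
</property>
<property>
  <name>dfs.namenode.safemode.extension</name>
  <value>30000</value>
</property>
<property>
  <name>dfs.namenode.safemode.min.datanodes</name>
  <value>0</value>
</property>
```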

HDFS commands for safemode

If required, HDFS can be placed in Safemode explicitly using the bin/hdfs dfsadmin -safemode command. The NameNode front page UI also shows whether Safemode is on or off.

1- Entering the safemode - hdfs dfsadmin -safemode enter

2- Leaving the safemode - hdfs dfsadmin -safemode leave
You can use this command if the NameNode has been in safemode for a long time and is unable to leave it on its own. But first try to verify why the NameNode cannot leave safemode, as that might be because of insufficient resources.

3- Checking whether NameNode is in safemode - hdfs dfsadmin -safemode get

4- If you want any file operation command to block till HDFS exits safemode - hdfs dfsadmin -safemode wait

5- Forcefully exit the safemode - hdfs dfsadmin -safemode forceExit

That's all for this topic What is SafeMode in Hadoop. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. Replica Placement Policy in Hadoop Framework
  2. HDFS Federation in Hadoop Framework
  3. HDFS High Availability
  4. File Read in HDFS - Hadoop Framework Internal Steps
  5. Input Splits in Hadoop
