Tuesday, November 28, 2023

How to Handle Missing And Under Replicated Blocks in HDFS

In this post we’ll see how to get information about missing or corrupt blocks in HDFS and how to fix them. We'll also see how to fix under replicated blocks in HDFS.

Get information about corrupt or missing HDFS blocks

To get information about corrupt or missing blocks in HDFS you can use the following HDFS command, which prints out a list of missing blocks and the files they belong to.

hdfs fsck / -list-corruptfileblocks
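You can also limit the check to a specific directory tree by passing that path instead of /. A minimal example, using a hypothetical path /user/data:

hdfs fsck /user/data -list-corruptfileblocks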

Fixing corrupt or missing HDFS blocks

Using that information you can decide how important the files with missing blocks are, since the easiest fix is to delete the file and copy it to HDFS again. If you are OK with deleting the files that have corrupt blocks, you can use the following command.

hdfs fsck / -delete

This command deletes corrupted files.
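Once a corrupt file has been deleted, you can copy a good copy back from the original source. As a sketch, assuming the file is still available on the local file system (both paths below are hypothetical):

hdfs dfs -put /local/backup/sales.csv /user/data/sales.csv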

If you still want to have a shot at fixing the corrupted blocks, you can take the file names you got from running the hdfs fsck / -list-corruptfileblocks command and use the following command.

hdfs fsck <path to file> -files -blocks -locations

This command prints out the locations for every block. Using that information you can go to the DataNodes where the block is stored and verify whether there is any network or hardware related error, or any file system problem, and whether fixing that will make the block healthy again.
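Apart from logging on to individual nodes, you can also get a quick overview of DataNode health as reported by the NameNode, which can help you spot a dead or failing node:

hdfs dfsadmin -report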

Fixing the under replicated blocks problem in Hadoop

If files in HDFS have under replicated blocks, you can use the hdfs fsck / command to get that information.
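The replication factor you want to restore is usually the cluster's configured default. If you are not sure what that default is, you can read the dfs.replication property from the configuration:

hdfs getconf -confKey dfs.replication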

Then you can use the following script, where the hdfs dfs -setrep <replication number> command is used to set the required replication factor for the files.

 
$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' > /tmp/files

$ for problemfile in $(cat /tmp/files); do echo "Setting replication for $problemfile"; hdfs dfs -setrep 3 "$problemfile"; done

When you run the hdfs fsck / command, the output for the under replicated blocks is in the following form -

<file name>: Under replicated <block>.
   Target Replicas is 3 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
From this output, the awk command takes the file name from each line where the phrase "Under replicated" is found and writes it to a temp file. Then the replication factor is set to 3 (in this case) for those files.
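Note that changing the replication factor only schedules the re-replication; the actual block copies happen asynchronously. If you want the command to wait until replication completes, hdfs dfs -setrep also supports a -w flag, though it can take a long time on big files. For example, with a hypothetical file path:

hdfs dfs -setrep -w 3 /user/data/sales.csv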

That's all for this topic How to Handle Missing And Under Replicated Blocks in HDFS. If you have any doubts or any suggestions to make, please drop a comment. Thanks!

>>>Return to Hadoop Framework Tutorial Page


Related Topics

  1. Replica Placement Policy in Hadoop Framework
  2. NameNode, DataNode And Secondary NameNode in HDFS
  3. HDFS Commands Reference List
  4. HDFS High Availability
  5. File Read in HDFS - Hadoop Framework Internal Steps

You may also like-

  1. Speculative Execution in Hadoop
  2. YARN in Hadoop
  3. How to Compress Intermediate Map Output in Hadoop
  4. How to Configure And Use LZO Compression in Hadoop
  5. How HashMap Internally Works in Java
  6. Stream API in Java 8
  7. How to Run a Shell Script From Java Program
  8. Lazy Initializing Spring Beans
