Sunday, July 1, 2018

How to Write a Map Only Job in Hadoop MapReduce

In a MapReduce job in Hadoop you generally write both a map function and a reduce function: the map function generates (key, value) pairs and the reduce function aggregates those (key, value) pairs. You may, however, opt to have only the map function in your MapReduce job and skip the reducer part altogether. That is known as a mapper only job in Hadoop MapReduce.

Mapper only job in Hadoop

You may have a scenario where you just want to generate (key, value) pairs with no aggregation; in that case you can write a job with only a map function. For example, you may want to convert a file to a binary file format like SequenceFile or to a columnar file format like Parquet.
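As a minimal sketch of that scenario, the mapper below (a hypothetical class name, not from any particular library) simply re-emits each input record; with zero reducers the pairs go straight to the job's output format, which does the actual file-format conversion:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper for a map only job: it passes each
// (byte offset, line) pair through unchanged. The job's output
// format (for example SequenceFileOutputFormat) then writes the
// pairs in the target file format.
public class SequenceFileConvertMapper
    extends Mapper<LongWritable, Text, LongWritable, Text> {

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // No aggregation is needed, so the pair is written as-is
    context.write(key, value);
  }
}
```

In the driver you would pair a mapper like this with job.setOutputFormatClass(SequenceFileOutputFormat.class) so the map output lands in SequenceFile format.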

Note that in a regular MapReduce job the output of the mappers is written to local disk rather than to HDFS. In a mapper only job the map output is written directly to HDFS, which is one of the differences between a full MapReduce job and a mapper only job in Hadoop.
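You can see this difference in the output directory itself (a hypothetical path is used below): with zero reducers each map task writes its own part-m-NNNNN file directly into the HDFS output directory, instead of the part-r-NNNNN files that reducers would produce.

```shell
# List the job output directory (hypothetical path) after a map only job.
# Expect one part-m-NNNNN file per map task, e.g. part-m-00000, part-m-00001;
# a job with reducers would instead contain part-r-NNNNN files.
hadoop fs -ls /output/path
```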

Writing Mapper only job

In order to write a mapper only job you need to set the number of reducers to zero. You can do this by adding job.setNumReduceTasks(0); in your driver class.

As an example-

public int run(String[] args) throws Exception {
  Configuration conf = getConf();
  Job job = Job.getInstance(conf, "TestClass");
  // Setting number of reducers to zero makes this a map only job
  job.setNumReduceTasks(0);
  FileInputFormat.addInputPath(job, new Path(args[0]));
  FileOutputFormat.setOutputPath(job, new Path(args[1]));
  return job.waitForCompletion(true) ? 0 : 1;
}

Another way to get a mapper only job is to pass the configuration parameter on the command line. The parameter used is mapreduce.job.reduces; note that before Hadoop 2 the parameter was mapred.reduce.tasks, which is now deprecated.

As an example-

hadoop jar /path/to/jar ClasstoRun -D mapreduce.job.reduces=0 /input/path /output/path

Mapper only job runs faster

In a regular job the output of the map tasks is partitioned and sorted on keys and then sent across the network to the nodes where the reducers run. This whole shuffle phase is avoided in a mapper only job, making it faster.

That's all for this topic How to Write a Map Only Job in Hadoop MapReduce. If you have any doubts or any suggestions to make, please drop a comment. Thanks!


Related Topics

  1. Word Count MapReduce Program in Hadoop
  2. How to Compress Intermediate Map Output in Hadoop
  3. Input Splits in Hadoop
  4. Uber Mode in Hadoop
  5. NameNode, DataNode And Secondary NameNode in HDFS

You may also like-

  1. HDFS Commands Reference List
  2. How to Handle Missing And Under Replicated Blocks in HDFS
  3. What is SafeMode in Hadoop
  4. HDFS High Availability
  5. Fair Scheduler in YARN
  6. Difference Between Abstract Class And Interface in Java
  7. Writing File in Java
  8. Java Lambda Expressions Interview Questions