Friday, 3 October 2014

Set the Number of Mappers and Reducers

Mappers:

The number of maps is usually driven by the number of DFS blocks in the input files, which is why people often adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps per node, although it has been taken up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if each map takes at least a minute to execute.

Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. 
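For example, if the input files were written with 128 MB blocks but fewer, larger splits are wanted, the lower bound can be raised when submitting the job (this assumes the job uses GenericOptionsParser, e.g. by implementing Tool; the jar and class names are placeholders):

    hadoop jar myjob.jar MyJob -Dmapred.min.split.size=268435456 input output

With a 268435456-byte (256 MB) minimum split size, each map processes roughly two 128 MB blocks instead of one, halving the number of maps.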



The number of map tasks can also be set manually using the JobConf's conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but it will not set the number below the one Hadoop determines by splitting the input data.
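A minimal sketch with the old (MapReduce v1) JobConf API is shown below; the class name, job name, paths and task count are placeholders, and no mapper or reducer is set so the identity defaults are used:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class MyJob {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(MyJob.class);
            conf.setJobName("myjob");

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            // A hint only: the actual number of maps will not drop below
            // the number of input splits computed by the InputFormat.
            conf.setNumMapTasks(20);

            JobClient.runJob(conf);
        }
    }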

The number of maps can also be changed by adjusting the DFS block size of the input files. For example, to set the block size to 1 MB (at least 1 MB is required):

    -Ddfs.block.size=1048576
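Keep in mind that the block size is a per-file property fixed when a file is written to HDFS (the property is named dfs.blocksize in newer releases, with dfs.block.size kept as a deprecated alias), so it has to be applied while writing or re-writing the input data rather than at job submission. For example, with placeholder paths:

    hadoop fs -Ddfs.block.size=1048576 -put localfile /user/hadoop/input/

Files already stored in HDFS keep the block size they were written with.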

Reducers:
   
     MapReduce v1:  -Dmapred.reduce.tasks=10
     MapReduce v2:  -Dmapreduce.job.reduces=10
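The reduce count can also be set in code; below is a minimal sketch with the newer (MapReduce v2) API, where the class name, job name and paths are placeholders and the identity mapper/reducer defaults are used:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MyJobV2 {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "myjob");
            job.setJarByClass(MyJobV2.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            // Unlike the map count, the requested reduce count is used as-is.
            job.setNumReduceTasks(10);

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }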

Reference:
http://wiki.apache.org/hadoop/HowManyMapsAndReduces
