In the MapReduce framework, map tasks run in parallel, one per input split. If the splits are small, the job is better load-balanced: a faster node can process proportionally more splits over the course of the job than a slower node.
But if the splits are much smaller than the default HDFS block size, the overhead of managing the splits and creating map tasks starts to dominate the total job execution time.
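To make this concrete, here is a minimal sketch of pinning the split size to the block size so undersized splits are not produced. It assumes the newer `org.apache.hadoop.mapreduce` API and a 128 MB block size (the default in Hadoop 2.x and later; the class name `SplitSizeExample` and job name are illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "split-size-demo");

        // Assumed block size of 128 MB (Hadoop 2.x+ default).
        long blockSize = 128L * 1024 * 1024;

        // Raise the minimum split size to the block size so that map-task
        // startup overhead does not dominate the job's run time.
        FileInputFormat.setMinInputSplitSize(job, blockSize);

        // Optionally cap the maximum as well; FileInputFormat computes the
        // effective split size as max(minSize, min(maxSize, blockSize)).
        FileInputFormat.setMaxInputSplitSize(job, blockSize);
    }
}
```

With both bounds set to the block size, each split maps onto exactly one HDFS block, which also preserves data locality: each map task can usually be scheduled on a node that already holds its block.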