It avoids a full shuffle. If the number of partitions is known to be decreasing, the executor can safely keep data on the minimum number of partitions, only moving the data off the extra nodes onto the nodes that we keep.
So, it would go something like this:
Node 1 = 1,2,3
Node 2 = 4,5,6
Node 3 = 7,8,9
Node 4 = 10,11,12
Then coalesce down to 2 partitions:
Node 1 = 1,2,3 + (10,11,12)
Node 3 = 7,8,9 + (4,5,6)
Notice that Node 1 and Node 3 did not require their original data to move.
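This merge-without-moving behaviour can be sketched in plain Scala (a toy model of the example above, not Spark's actual logic; real Spark chooses which partitions survive based on locality):

```scala
// Toy model of coalesce(n): the first n partitions stay where they are,
// and the remaining partitions are appended onto them round-robin.
// (Real Spark picks survivors by locality; this is only an illustration.)
object CoalesceSketch {
  def coalesce(parts: Vector[Vector[Int]], n: Int): Vector[Vector[Int]] = {
    val (kept, moved) = parts.splitAt(n)      // survivors keep their data in place
    moved.zipWithIndex.foldLeft(kept) { case (acc, (p, i)) =>
      acc.updated(i % n, acc(i % n) ++ p)     // only the extra partitions move
    }
  }

  def main(args: Array[String]): Unit = {
    val nodes = Vector(
      Vector(1, 2, 3),    // Node 1
      Vector(4, 5, 6),    // Node 2
      Vector(7, 8, 9),    // Node 3
      Vector(10, 11, 12)  // Node 4
    )
    println(coalesce(nodes, 2))
    // Vector(Vector(1, 2, 3, 7, 8, 9), Vector(4, 5, 6, 10, 11, 12))
  }
}
```

Note that in this toy model the two surviving partitions never touch their original elements; only the data from the dropped partitions is appended.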
All the answers add some great knowledge to this frequently asked question.
So, following the tradition of this question's timeline, here are my 2 cents.
I found repartition to be faster than coalesce in one very specific case.
In my application, when the number of files we estimate is lower than a certain threshold, repartition works faster.
Here is what I mean:
if (numFiles > 20)
  df.coalesce(numFiles).write.mode(SaveMode.Overwrite).parquet(dest)
else
  df.repartition(numFiles).write.mode(SaveMode.Overwrite).parquet(dest)
In the above snippet, when the target file count was below 20, coalesce was taking forever to finish while repartition was much faster, hence the check in the code.
Of course, this number (20) will depend on the number of workers and the amount of data.
Hope that helps.
A quick spark-shell session shows that repartition returns a new RDD with the requested number of partitions, while the original RDD is left unchanged:

scala> pairMrkt.repartition(10)
res16: org.apache.spark.rdd.RDD[(String, Array[String])] = MapPartitionsRDD[11] at repartition at <console>:26

scala> res16.partitions.length
res17: Int = 10

scala> pairMrkt.partitions.length
res20: Int = 2
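For contrast with the coalesce example earlier, a full shuffle of the kind repartition performs can be sketched in plain Scala (again a toy model, not Spark's code): every element is reassigned by hash, so even partitions that would survive the resize still move their data.

```scala
// Toy model of repartition(n): a full shuffle that hash-assigns every
// element to a target partition, regardless of where it lived before.
object RepartitionSketch {
  def repartition(parts: Vector[Vector[Int]], n: Int): Vector[Vector[Int]] = {
    val all = parts.flatten                    // every element participates in the shuffle
    Vector.tabulate(n)(i => all.filter(x => math.floorMod(x.hashCode, n) == i))
  }

  def main(args: Array[String]): Unit = {
    val before = Vector(Vector(1, 2, 3), Vector(4, 5, 6))
    val after = repartition(before, 10)
    println(after.length)              // 10 partitions, as requested
    println(after.count(_.nonEmpty))   // only 6 hold data for this tiny input
  }
}
```

This is why repartition can both grow and shrink the partition count, at the price of moving all the data, whereas coalesce can only shrink it cheaply.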