Keeping track of the Information Technology revolution that has India in its grip, and its profound visible and invisible effects on Indian society, culture, ethos, and the thinking of its citizens. This blog keeps a pulse on the evolution of IT in India and elsewhere, and analyzes the reverberations of these developments as felt in India.
A well-written article chronicles how the Mumbai Dabbawalas use the same principles that power today's Big Data technologies. Titled "What is Common between Mumbai Dabbawalas and Apache Hadoop?", the article details how the entire process of collecting, shuffling, sorting, and delivering tiffin boxes by the Mumbai Dabbawalas mirrors the MapReduce algorithm that is key to processing huge data collections.
Here's the crux of the article:
Just as HDFS slices data into chunks and distributes them to individual nodes, each household submits its lunchbox to a Dabbawala.
All the lunchboxes are collected at a common place, where they are tagged and placed into carriages with unique codes. This is the job of the Mapper!
Based on the code, carriages headed for a common destination are sorted and loaded onto the respective trains. This is the Shuffle and Sort phase of MapReduce.
At each railway station, a Dabbawala picks up the carriage and delivers each box in it to the respective customer. This is the Reduce phase.
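The four steps above can be sketched as a tiny in-memory MapReduce in Python. This is only an illustrative simulation, not real Hadoop code: the box codes, station abbreviations, and household names are invented for the example, and the real Dabbawala coding system is more elaborate.

```python
from collections import defaultdict

# Each "record" is a lunchbox: (household, destination code).
# The codes and stations below are hypothetical examples.
lunchboxes = [
    ("household_A", "VLP-9-EX-12"),  # "VLP" stands in for a station code
    ("household_B", "CST-3-BO-7"),
    ("household_C", "VLP-9-EX-12"),
    ("household_D", "CST-1-AB-2"),
]

def mapper(household, code):
    # Map phase: tag each box with its destination station (the first
    # code segment), like the Dabbawala marking boxes at collection.
    station = code.split("-")[0]
    yield station, (household, code)

def shuffle_and_sort(mapped):
    # Shuffle and Sort phase: group all tagged boxes by station, like
    # sorting carriages onto the train bound for a common destination.
    groups = defaultdict(list)
    for station, box in mapped:
        groups[station].append(box)
    return {station: sorted(groups[station]) for station in sorted(groups)}

def reducer(station, boxes):
    # Reduce phase: at the destination station, deliver every box
    # that arrived in the carriage for that station.
    return [f"deliver {code} from {hh} via {station}" for hh, code in boxes]

mapped = [kv for hh, code in lunchboxes for kv in mapper(hh, code)]
grouped = shuffle_and_sort(mapped)
deliveries = {st: reducer(st, boxes) for st, boxes in grouped.items()}
```

Running this groups the four boxes into two "trains" (CST and VLP) and then emits one delivery per box at each station, mirroring how the Mapper output is partitioned, shuffled to a common destination, and consumed by the Reducer.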