A starting point for learning how to implement MapReduce/Hadoop in Python?

  • I've recently started getting into data analysis and have learned quite a bit over the last year (at the moment, pretty much exclusively using Python). I feel the next step is to begin training myself in MapReduce/Hadoop. I have no formal computer science training, however, so I often don't quite understand the jargon used when people write about Hadoop, hence my question here.

    What I am hoping for is a top level overview of Hadoop (unless there is something else I should be using?) and perhaps a recommendation for some sort of tutorial/text book.

    If, for example, I want to parallelise a neural network which I have written in Python, where would I start? Is there a relatively standard method for implementing Hadoop with an algorithm or is each solution very problem specific?

    The Apache wiki page describes Hadoop as "a framework for running applications on large clusters built of commodity hardware". But what does that mean? I've heard the term "Hadoop cluster", and I know that Hadoop is Java-based. So does that mean that, for the above example, I would need to learn Java, set up a Hadoop cluster on, say, a few Amazon servers, and then Jython-ify my algorithm before finally getting it to work on the cluster using Hadoop?

    Thanks a lot for any help!

      December 15, 2021 12:48 PM IST
    0
  • For those who like MOOCs, there is Intro to Hadoop and MapReduce on Udacity, made in collaboration with Cloudera. During the course you get the chance to install the Cloudera Hadoop Distribution virtual machine locally and run some map/reduce jobs on sample datasets. Hadoop Streaming is used to interact with the Hadoop cluster, and the programming is done in Python.

     
      January 1, 2022 2:15 PM IST
    0
  • First, to use Hadoop with Python (whether you run it on your own cluster, on Amazon EMR, or on anything else) you will need an option called "Hadoop Streaming".

    Read the Hadoop Streaming chapter of the Hadoop manual to get an idea of how it works.
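
    To get a feel for it, here is a minimal word-count sketch in the Hadoop Streaming style (the file names mapper.py and reducer.py are just illustrative). Hadoop pipes raw input lines to the mapper's stdin, sorts the mapper's output by key, and pipes the grouped stream to the reducer's stdin:

    #!/usr/bin/env python3
    # mapper.py -- emit a "word<TAB>1" pair for every word on stdin
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    #!/usr/bin/env python3
    # reducer.py -- input arrives sorted by key, so the counts for each
    # word can be accumulated in a single pass
    import sys

    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")

    You can test the pair on your local machine with a plain Unix pipe, no cluster needed: cat input.txt | python3 mapper.py | sort | python3 reducer.py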

    There is also a great library, mrjob, that simplifies running Python jobs on Hadoop.
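
    A sketch of the same word count with mrjob (assuming you save it as word_count.py; the class name is made up for illustration):

    from mrjob.job import MRJob

    class MRWordCount(MRJob):
        # map step: called once per input line
        def mapper(self, _, line):
            for word in line.split():
                yield word, 1

        # reduce step: called once per word, with all of its counts
        def reducer(self, word, counts):
            yield word, sum(counts)

    if __name__ == "__main__":
        MRWordCount.run()

    Run it locally with python word_count.py input.txt, or point it at a cluster with the -r hadoop or -r emr runner.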

    You could set up your own cluster or try playing with Amazon Elastic MapReduce. The latter can cost you something, but it is sometimes easier to get started with. There is a great tutorial on running Python with Hadoop Streaming on Amazon EMR; it immediately shows a simple but practical application.

    To learn Hadoop itself I would recommend reading one of the books out there. They say that "Hadoop in Action" is better at covering things for those who are interested in Python/Hadoop Streaming.

    Also note that for testing/learning things you can run Hadoop on your local machine without having an actual cluster.

    UPDATE:

    As for understanding MapReduce (that is, how to identify and express different kinds of problems in MapReduce terms), read the great article "MapReduce Patterns, Algorithms, and Use Cases", which has examples in Python.
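
    To make "expressing a problem in MapReduce terms" concrete, here is a toy, cluster-free simulation of the three phases in plain Python (the records are made up for illustration):

    from itertools import groupby
    from operator import itemgetter

    records = [("Paris", 21), ("Oslo", 12), ("Paris", 25), ("Oslo", 9)]

    # map phase: turn each input record into a (key, value) pair
    mapped = [(city, temp) for city, temp in records]

    # shuffle phase: Hadoop sorts and groups the pairs by key for you
    mapped.sort(key=itemgetter(0))

    # reduce phase: combine all values that share a key
    for city, group in groupby(mapped, key=itemgetter(0)):
        print(city, max(temp for _, temp in group))  # max temperature per city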

      December 16, 2021 12:44 PM IST
    0
  • I would recommend you start by downloading the Cloudera VM for Hadoop, which is pretty much a standard across many industries these days and simplifies the Hadoop setup process. Then follow this tutorial for the word-count example, which is the standard hello-world equivalent for learning MapReduce.

    Before that, a simple way to understand map/reduce is to try Python's own map and reduce functions (in Python 3, reduce lives in functools):

    from functools import reduce  # in Python 3, reduce moved to functools

    x = [1, 2, 3, 4]
    y = list(map(lambda z: z * z, x))  # map: square every element
    print(y)  # [1, 4, 9, 16]
    q = reduce(lambda m, n: m + n, y)  # reduce: sum the squares
    print(q)  # 30

     

    Here the mapper transforms the data by squaring every element, and the reducer sums up the squares. Hadoop applies the same idea to large-scale computations spread across a cluster, but you have to figure out your own mapping and reducing functions.
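
    To connect the toy example back to Hadoop's model, here is the same computation restated as explicit (key, value) pairs, the form that mappers and reducers actually exchange (the key name is arbitrary):

    data = [1, 2, 3, 4]

    # map: every record becomes a (key, value) pair; one shared key here
    pairs = [("squares", z * z) for z in data]

    # reduce: all values for a given key are folded into one result
    total = sum(value for _, value in pairs)
    print(total)  # 30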

     
      December 18, 2021 11:27 AM IST
    0