
Run Python MapReduce on Hadoop Cluster


Based on Michael Noll's Writing an Hadoop MapReduce Program in Python.

  1. Create the mapper and reducer scripts and make them executable with chmod 755 *.py:

mapper.py:

#!/usr/bin/env python
import sys
# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into words
    words = line.split()
    # increase counters
    for word in words:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        #
        # tab-delimited; the trivial word count is 1
        print '%s\t%s' % (word, 1)
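
The mapper can be sanity-checked locally before going anywhere near the cluster, by piping some text into it (assuming the script sits in the current directory):

$ echo "foo foo quux labs foo bar quux" | ./mapper.py
foo	1
foo	1
quux	1
labs	1
foo	1
bar	1
quux	1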

reducer.py:

#!/usr/bin/env python

import sys

current_word = None
current_count = 0
word = None

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()

    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)

    # convert count (currently a string) to int
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently ignore/discard this line
        continue

    # this IF-switch only works because Hadoop sorts map output
    # by key (here: word) before it is passed to the reducer
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # write result to STDOUT
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# do not forget to output the last word if needed!
# (checking against None instead of against word also covers the
# case where the last input line was discarded as malformed)
if current_word is not None:
    print '%s\t%s' % (current_word, current_count)
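
Because the reducer relies on its input arriving sorted by key, the whole job can be simulated locally, with sort standing in for Hadoop's shuffle-and-sort phase:

$ echo "foo foo quux labs foo bar quux" | ./mapper.py | sort -k1,1 | ./reducer.py
bar	1
foo	3
labs	1
quux	2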
  2. Get the input text files and put them into HDFS: download the plain-text versions of The Outline of Science, Vol. 1 (of 4) by J. Arthur Thomson, The Notebooks of Leonardo Da Vinci and Ulysses by James Joyce. Then upload them to HDFS and launch the streaming job:

    $ hadoop fs -mkdir gutenberg
    $ hadoop fs -put pg.txt gutenberg/
    $ hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming.jar \
        -file /home/hduser/mapper.py -mapper /home/hduser/mapper.py \
        -file /home/hduser/reducer.py -reducer /home/hduser/reducer.py \
        -input /user/hduser/gutenberg/ -output /user/hduser/gutenberg-output
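
If you want more than one reducer (and hence several part-xxxxx output files), the number of reduce tasks can be passed as a generic -D option, which the streaming jar requires before its own options. A sketch, using the MRv1 property name that matches this hadoop-0.20-mapreduce package:

    $ hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming.jar \
        -D mapred.reduce.tasks=2 \
        -file /home/hduser/mapper.py -mapper /home/hduser/mapper.py \
        -file /home/hduser/reducer.py -reducer /home/hduser/reducer.py \
        -input /user/hduser/gutenberg/ -output /user/hduser/gutenberg-output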

Make sure the "gutenberg-output" directory does not already exist, or the job will fail. When the job finishes, you can view the result with:

$ hadoop fs -ls gutenberg-output
$ hadoop fs -cat gutenberg-output/part-00000
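
Before re-running the job, the output directory has to be removed; hadoop fs -getmerge is also handy for concatenating all part files into a single local file (wordcounts.txt below is just an arbitrary local name; on older shells the remove command is spelled hadoop fs -rmr):

$ hadoop fs -getmerge gutenberg-output wordcounts.txt
$ hadoop fs -rm -r gutenberg-output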

Verified on CDH 4.3, running on 8 CentOS 6.3 64-bit hosts, with Python 2.6.6, on 2014-08-07.


