This is the first MapReduce program I ran after setting up the Hadoop environment: a word count written as Python scripts and driven by Hadoop Streaming.
1 map.py, which splits the input text into words:
#!/usr/bin/python
import sys

# Read lines from standard input, split each into whitespace-separated
# words, and emit one "word<TAB>1" record per word.
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print('%s\t%s' % (word, 1))
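A quick standalone check of the mapper (the sample text is arbitrary):

$ echo 'foo bar foo' | python map.py
foo	1
bar	1
foo	1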
2 reduce.py, which counts how many times each word occurs:
#!/usr/bin/python
import sys

last_key = None
running_total = 0

# Input arrives sorted by key, so all counts for a given word are adjacent.
for input_line in sys.stdin:
    input_line = input_line.strip()
    this_key, value = input_line.split("\t", 1)
    value = int(value)
    if last_key == this_key:
        running_total += value
    else:
        # The key changed: flush the total for the previous key, if any.
        if last_key:
            print("%s\t%d" % (last_key, running_total))
        running_total = value
        last_key = this_key

# Flush the final key.
if last_key == this_key:
    print("%s\t%d" % (last_key, running_total))
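The reducer can be checked on its own as well; note that its input must already be sorted by key, because it only aggregates adjacent runs of the same word:

$ printf 'bar\t1\nfoo\t1\nfoo\t1\n' | python reduce.py
bar	1
foo	2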
3 Test the Python scripts locally to verify the results (the sort between the two scripts stands in for Hadoop's shuffle/sort phase; without it the reducer would see unsorted keys and undercount repeated words):

cat in.txt | python map.py | sort | python reduce.py
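For example, given a hypothetical two-line in.txt, the pipeline should print each word with its total count:

$ printf 'hello world\nhello hadoop\n' > in.txt
$ cat in.txt | python map.py | sort | python reduce.py
hadoop	1
hello	2
world	1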
4 The Hadoop driver script (start.sh), which sets the output directory OUTPUT:
It invokes Hadoop Streaming, the environment that lets MapReduce jobs be written in languages other than Java. -input is the input log file whose per-word counts we want to compute with MapReduce; -output is the output path; -mapper is the command that runs the mapper, here a Python script; -reducer is the command that runs the reducer; -file gives the paths of the mapper and reducer files so they are shipped to the cluster; -numReduceTasks is the number of reduce tasks, here 3, meaning the counting work is distributed over 3 tasks.
#!/bin/bash
OUTPUT=/home/apm3/outdir
hadoop fs -rmr $OUTPUT
hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -input /opt/mapr/logs/warden.log \
    -output $OUTPUT \
    -mapper "python map.py" \
    -reducer "python reduce.py" \
    -file map.py \
    -file reduce.py \
    -numReduceTasks 3
Running bash -x start.sh produces three files in the output path, i.e. the results of the three reduce tasks. (Between the map and reduce phases the framework performs shuffle and sort: shuffle assigns each word to a reduce task by hashing the key, as sketched below, and sort orders the keys delivered to each task.)
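To make the shuffle step concrete, here is a minimal Python sketch of hash-based key routing. The partition function is an illustrative assumption, not Hadoop's actual partitioner (Streaming defaults to Java's HashPartitioner), and Python's hash() is only stable within one process unless PYTHONHASHSEED is fixed:

# Illustrative sketch of shuffle routing, not Hadoop's implementation.
# Every occurrence of a word hashes to the same reduce task, so each
# reducer receives the complete count for its share of the keys.
def partition(word, num_reduce_tasks=3):
    return hash(word) % num_reduce_tasks  # always in 0..num_reduce_tasks-1

for w in ('hello', 'world', 'hadoop', 'hello'):
    print('%s -> reducer %d' % (w, partition(w)))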
5 Running the program on MapR, run.sh:
hadoop fs -rm -r /user/rongyu/output
hadoop jar hadoop-streaming-2.7.0-mapr-1602.jar \
    -input "/user/input/*" \
    -output "/user/rongyu/output" \
    -file "/home/mapr/Develop/rongyu/mapreduce/map.py" \
    -mapper "python map.py" \
    -file "/home/mapr/Develop/rongyu/mapreduce/reduce.py" \
    -reducer "python reduce.py" \
    -numReduceTasks 3
6 Check the results
List the output directory:
$ hadoop fs -ls /user/rongyu/output/
Found 4 items
-rwxr-xr-x mapr mapr -- : /user/rongyu/output/_SUCCESS
-rwxr-xr-x mapr mapr -- : /user/rongyu/output/part-00000
-rwxr-xr-x mapr mapr -- : /user/rongyu/output/part-00001
-rwxr-xr-x mapr mapr -- : /user/rongyu/output/part-00002
Dump one of the three output files, part-00000:
$ hadoop fs -cat /user/rongyu/output/part-00000 | less
/nodes/apm1/services/nfs	17
/opt/mapr/conf/cldb.conf	12
/opt/mapr/hostid	6
/services/cldb/master.	4
/services/fileserver.	2
/services/fileserver/master	1
/services/hbmaster/apm2.	1
/services/hbregionserver/apm4.	207
/services/hbregionserver/master	1
/services/historyserver/master	1
/services/hoststats/apm2.	2
/services/kvstore/apm3.	2
/services/nfs.	22
/services/nfs/master.	53
/services_config/kvstore.	2
/services_config/nodemanager.	3
/services_config/nodemanager/apm4.	26
00:00:00,3402	1
00:00:00,4710	1
00:00:01,6710	1
00:00:01,7916	1
00:00:01,9725	1
7 Exception:
INFO mapreduce.Job: Task Id : attempt_1469682745105_0016_m_000001_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java)
Solution: add #!/usr/bin/python as the first line of each Python script, and double-check the -mapper, -reducer, and related argument settings in run.sh.
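Since this error usually just means the Python subprocess crashed, hardening the reducer against bad input can also help. The try/except below and the skip-malformed-lines policy are my additions, not part of the original scripts:

#!/usr/bin/python
# Hypothetical hardened reducer: same aggregation as reduce.py, but
# malformed input lines are skipped instead of raising an exception
# that would kill the streaming subprocess.
import sys

last_key = None
running_total = 0

for input_line in sys.stdin:
    try:
        this_key, value = input_line.strip().split("\t", 1)
        value = int(value)
    except ValueError:
        continue  # not a "key<TAB>integer" line: ignore rather than crash
    if last_key == this_key:
        running_total += value
    else:
        if last_key is not None:
            print("%s\t%d" % (last_key, running_total))
        running_total = value
        last_key = this_key

if last_key is not None:
    print("%s\t%d" % (last_key, running_total))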
Code download: https://github.com/rongyux/Hadoop_WordCount