1. First prepare a file of words to count, word.txt. The words are separated by spaces, so we can simply split on spaces when counting:
hello hadoop
hello yarn
hello zookeeper
hdfs hadoop
select from hadoop
select from yarn
mapReduce
MapReduce
2. Upload word.txt to the HDFS root directory:
$ bin/hdfs dfs -put test/word.txt /
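You can optionally confirm the upload before moving on (a quick sanity check; both are standard hdfs dfs subcommands):
$ bin/hdfs dfs -ls /
$ bin/hdfs dfs -cat /word.txt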
3. With the preparation done, write the code in Eclipse: the Map, Reduce, and Driver Java files.
WordCountMap.java
map processes our word.txt file line by line; each line of input triggers one call to the map method:
package com.ijeffrey.mapreduce.wordcount.client;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * The key/value pair types output by map must match the input
 * key/value pair types of the reducer.
 * @author PXY
 */
public class WordCountMap extends Mapper<LongWritable, Text, Text, IntWritable> {

    private Text keyout = new Text();
    private IntWritable valueout = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, IntWritable>.Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        // The words in our file are separated by spaces, so split on a space
        String[] words = line.split(" ");
        // Iterate over the array and emit each word as a k/v pair
        for (String word : words) {
            keyout.set(word);
            context.write(keyout, valueout);
        }
    }

}
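For example, the first input line "hello hadoop" makes this map method emit two pairs: <hello,1> and <hadoop,1>.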
WordCountReducer.java
package com.ijeffrey.mapreduce.wordcount.client;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * The input key/value pair types of the reducer must match the output
 * key/value pair types of map.
 * map emits:       <hello,1> <world,1> <hello,1> <apple,1> ....
 * reduce receives: <apple,[1]> <hello,[1,1]> <world,[1]>
 * @author PXY
 */
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable valueout = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
            Reducer<Text, IntWritable, Text, IntWritable>.Context context) throws IOException, InterruptedException {
        int count = 0; // running total
        // Iterate over the values and sum them up
        for (IntWritable value : values) {
            // An IntWritable cannot be added to an int, so call get()
            // to convert it to an int first
            count += value.get();
        }
        // Wrap the total back into an IntWritable
        valueout.set(count);
        // Finally reduce emits the final k/v pair
        context.write(key, valueout);
    }

}
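For example, hello appears on three lines of word.txt, so after the shuffle this reduce method receives <hello,[1,1,1]> and writes <hello,3>.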
WordCountDriver.java
package com.ijeffrey.mapreduce.wordcount.client;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Main entry point.
 * @author PXY
 */
public class WordCountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // Get a Job object, which represents one MapReduce job
        Job job = Job.getInstance(conf);
        // Let the framework locate the jar via the main class
        job.setJarByClass(WordCountDriver.class);
        // Set the input directory and the directory the results are written to,
        // both taken from the command line, e.g.:
        // bin/yarn jar share/hadoop/xxxxxxx.jar wordcount /wordcount/input/ /wordcount/output/
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Tell the job which map and reduce classes to call
        job.setMapperClass(WordCountMap.class);
        job.setReducerClass(WordCountReducer.class);
        // Set the types of the key/value pairs output by map
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        // Set the types of the key/value pairs output by reduce
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Submit the job and wait for it to finish
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }

}
4. Package the finished code into a jar and run it on the cluster.
Upload the jar to the server, start the cluster services, then run our own MapReduce job to count word.txt in the root directory and write the result to /output:
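If you are not exporting the jar from Eclipse, a minimal command-line build could look like the following sketch (the src/ and classes/ directories are assumptions for illustration; hadoop classpath prints the classpath needed to compile against the cluster's Hadoop libraries):
$ mkdir classes
$ javac -cp $(bin/hadoop classpath) -d classes src/com/ijeffrey/mapreduce/wordcount/client/*.java
$ jar -cvf test/wordCount.jar -C classes .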
$ bin/yarn jar test/wordCount.jar com.ijeffrey.mapreduce.wordcount.client.WordCountDriver /word.txt /output
Note: when running the jar, pass the fully qualified class name of the Driver.
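Also note that the output directory must not already exist, or the job fails with a FileAlreadyExistsException; remove a stale one first if needed:
$ bin/hdfs dfs -rm -r /output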
After the job finishes, check the result in output:
$ bin/hdfs dfs -text /output/part-r-00000
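Given the word.txt above, the output should contain one line per distinct word (a single reducer yields one part-r-00000 file; counting is case-sensitive, so mapReduce and MapReduce are separate keys, and keys come out in Text byte order, uppercase before lowercase):
MapReduce	1
from	2
hadoop	3
hdfs	1
hello	3
mapReduce	1
select	2
yarn	2
zookeeper	1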