Running Word Count on Hadoop

Date: 2023-03-09 04:50:57

1. Create the input directory

hadoop fs -mkdir input
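
Because no absolute path is given, the directory is created under the current HDFS user's home directory (for root this is normally /user/root, though that depends on your configuration). As a quick sanity check, not part of the original steps, you can list it:

hadoop fs -ls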

2. Upload the file to Hadoop (HDFS)

hadoop fs -put /root/data/output.txt input
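
To confirm the upload, you can list the input directory; the file should appear under its original name, output.txt:

hadoop fs -ls input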

3. Run wordcount (delete the old output directory before running; you can do this from Eclipse, or from the command line as shown below)

hadoop jar ./hadoop-examples-1.2..jar wordcount input output
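
If you would rather not use Eclipse, the old output directory can also be removed from the shell. The command below is the Hadoop 1.x form (on Hadoop 2+ use hadoop fs -rm -r output instead), and it assumes the job writes to a directory named output, as in the command above:

hadoop fs -rmr output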

4. Download the output to the local machine

hadoop fs -get output /root/data/
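
You can also inspect the result directly in HDFS before (or instead of) downloading it. For the bundled wordcount example the reducer output is usually a file named part-r-00000 (older releases may name it part-00000); each line is a word, a tab, and its count:

hadoop fs -cat output/part-r-00000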

Run output (timestamps and counter values omitted):

[root@VM_238_215_centos hadoop-1.2.]# hadoop jar ./hadoop-examples-1.2..jar wordcount input output
Warning: $HADOOP_HOME is deprecated.
INFO input.FileInputFormat: Total input paths to process :
INFO util.NativeCodeLoader: Loaded the native-hadoop library
WARN snappy.LoadSnappy: Snappy native library not loaded
INFO mapred.JobClient: Running job: job_201705080035_0003
INFO mapred.JobClient:  map % reduce %
INFO mapred.JobClient:  map % reduce %
INFO mapred.JobClient:  map % reduce %
INFO mapred.JobClient:  map % reduce %
INFO mapred.JobClient: Job complete: job_201705080035_0003
INFO mapred.JobClient: Counters:
INFO mapred.JobClient:   Map-Reduce Framework
INFO mapred.JobClient:     Spilled Records=
INFO mapred.JobClient:     Map output materialized bytes=
INFO mapred.JobClient:     Reduce input records=
INFO mapred.JobClient:     Virtual memory (bytes) snapshot=
INFO mapred.JobClient:     Map input records=
INFO mapred.JobClient:     SPLIT_RAW_BYTES=
INFO mapred.JobClient:     Map output bytes=
INFO mapred.JobClient:     Reduce shuffle bytes=
INFO mapred.JobClient:     Physical memory (bytes) snapshot=
INFO mapred.JobClient:     Reduce input groups=
INFO mapred.JobClient:     Combine output records=
INFO mapred.JobClient:     Reduce output records=
INFO mapred.JobClient:     Map output records=
INFO mapred.JobClient:     Combine input records=
INFO mapred.JobClient:     CPU time spent (ms)=
INFO mapred.JobClient:     Total committed heap usage (bytes)=
INFO mapred.JobClient:   File Input Format Counters
INFO mapred.JobClient:     Bytes Read=
INFO mapred.JobClient:   FileSystemCounters
INFO mapred.JobClient:     HDFS_BYTES_READ=
INFO mapred.JobClient:     FILE_BYTES_WRITTEN=
INFO mapred.JobClient:     FILE_BYTES_READ=
INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=
INFO mapred.JobClient:   Job Counters
INFO mapred.JobClient:     Launched map tasks=
INFO mapred.JobClient:     Launched reduce tasks=
INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=
INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=
INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=
INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=
INFO mapred.JobClient:     Data-local map tasks=
INFO mapred.JobClient:   File Output Format Counters
INFO mapred.JobClient:     Bytes Written=

