Introduction to Hadoop
Hadoop is a software framework for the distributed processing of large amounts of data. Its core components are HDFS, a distributed file system; MapReduce, a programming model that runs on top of HDFS; and a family of higher-level applications built on HDFS and MapReduce.
HDFS is a distributed file system that spans multiple machines on a network and stores very large files using a streaming data-access pattern. Supported file sizes currently range from megabytes up to petabytes.
MapReduce is a programming model for data processing; programs written against it are inherently parallel. A MapReduce program consists of a map function that extracts the data, an optional combine (merge) step that pre-processes intermediate results, and a reduce function that produces the final output. After the map (and optional combine) phase, the intermediate data has been sorted and grouped into key-value pairs; the reduce function then turns these intermediate results into the final result. The map tasks all run in parallel, each one processing a single block of a large file, so for files stored on HDFS the map phase can exploit the processing power of many machines to compute intermediate results quickly.
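The map → sort/group → reduce flow described above can be sketched in a few lines of plain Python (for illustration only; real Hadoop jobs are normally written in Java against the MapReduce API, and the shuffle happens across machines rather than in one process):

```python
from itertools import groupby
from operator import itemgetter

def map_fn(line):
    # Map: emit a (word, 1) pair for every word in an input line.
    return [(word, 1) for word in line.split()]

def reduce_fn(key, values):
    # Reduce: sum the counts for one key.
    return (key, sum(values))

lines = ["the quick brown fox", "the lazy dog", "the fox"]

# Map phase: each line (file block, in Hadoop) could run on a different machine.
intermediate = [pair for line in lines for pair in map_fn(line)]

# Shuffle phase: sort and group the intermediate key-value pairs by key.
intermediate.sort(key=itemgetter(0))
grouped = [(k, [v for _, v in g]) for k, g in groupby(intermediate, key=itemgetter(0))]

# Reduce phase: one call per distinct key.
result = dict(reduce_fn(k, vs) for k, vs in grouped)
print(result["the"])  # "the" appears 3 times
```

The sort-and-group step in the middle is exactly the guarantee MapReduce gives the reduce function: each reducer sees one key together with all of that key's values.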
The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.
The project includes these modules:
Hadoop Common: The common utilities that support the other Hadoop modules.
Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
Hadoop YARN: A framework for job scheduling and cluster resource management.
Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.
Downloading the latest stable release, Hadoop 2.4.1
Hadoop 2.4.1 (the stable release of the 2.x series) can be downloaded from the Apache Hadoop releases page.
Installing Hadoop 2.4.1 in a virtual machine: single-node setup
1 Install Java and set the Java environment variables
2 Create the user account and set the host's hostname in /etc/hosts
Add the following to the user's .bash_profile:
export JAVA_HOME=/usr/java/jdk1.7.0_60
export HADOOP_PREFIX=/home/hadoop/hadoop-2.4.1
export CLASSPATH=".:$JAVA_HOME/lib:$CLASSPATH"
export PATH="$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$PATH"
export HADOOP_PREFIX PATH CLASSPATH
3 Set up passwordless SSH login
First make sure the firewall is disabled on all hosts.

$ cd ~/.ssh
$ ssh-keygen -t rsa    # press Enter at every prompt; the key is saved to .ssh/id_rsa by default
$ cp id_rsa.pub authorized_keys
$ sudo service sshd restart
4 Configure Hadoop 2.4.1
Enter the hadoop-2.4.1 directory and edit the files under etc/hadoop.

In hadoop-env.sh:

export JAVA_HOME=/usr/java/jdk1.7.0_60

Optionally, also add:

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
5 Configure the Hadoop 2.4.1 XML files

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/tmp</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hadoop-2.4.1/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hadoop-2.4.1/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
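A malformed *-site.xml file is a common source of startup failures, so it can be worth sanity-checking the files before starting the daemons. A small Python sketch using only the standard library (the XML content is inlined here for illustration; in practice you would read it from etc/hadoop):

```python
import xml.etree.ElementTree as ET

core_site = """<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>"""

def read_props(xml_text):
    # Parse a Hadoop *-site.xml document into a {name: value} dict.
    # ET.fromstring raises ParseError if the XML is malformed.
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

props = read_props(core_site)
print(props["fs.default.name"])  # hdfs://localhost:9000
```

If the parse succeeds and the expected property names come back, the file is at least structurally sound; whether each value is correct for your cluster still needs a human eye.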
With the five steps above, the Hadoop 2.4.1 single-node configuration is complete. To start it:

./bin/hadoop namenode -format    # format the NameNode
./sbin/start-dfs.sh followed by ./sbin/start-yarn.sh    # newer Hadoop releases discourage start-all.sh; start HDFS first, then YARN
./bin/hadoop dfsadmin -report    # check the cluster status
Then open the NameNode web UI at http://localhost:50070.