I. Prepare the installation media
a) hadoop-2.8.0.tar.gz
b) jdk-7u71-linux-x64.tar.gz
II. Node deployment diagram
III. Installation steps
Environment overview:
Master server IP: 192.168.80.128 (master) — NameNode, SecondaryNameNode, ResourceManager
Slave server IP: 192.168.80.129 (slave1) — DataNode, NodeManager
Slave server IP: 192.168.80.130 (slave2) — DataNode, NodeManager
1. Configure the host-name mappings on all three machines (in /etc/hosts):
192.168.80.128 master
192.168.80.129 slave1
192.168.80.130 slave2
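The step above can be scripted; a minimal sketch (the staging file name hosts.append is just an illustration — appending to /etc/hosts itself needs root, so that final step is shown as a comment):

```shell
# Stage the three host mappings in a local file first.
cat > hosts.append <<'EOF'
192.168.80.128 master
192.168.80.129 slave1
192.168.80.130 slave2
EOF
# Then, on each of the three machines, as root:
# cat hosts.append >> /etc/hosts
```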
2. Disable the firewall (on all three machines):
systemctl stop firewalld.service #stop firewalld
systemctl disable firewalld.service #keep firewalld from starting at boot
firewall-cmd --state #check firewall status (prints "not running" when stopped, "running" when active)
3. Upload the installation packages
1) JDK package: jdk-7u71-linux-x64.tar.gz
2) Hadoop package: hadoop-2.8.0.tar.gz
4. Install the JDK
1) Unpack the JDK: tar -zxvf jdk-7u71-linux-x64.tar.gz
2) Edit the environment variables: vi /etc/profile
3) Append the JDK settings at the end of the file:
JAVA_HOME=/home/hadoop/jdk1.7.0_71
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH
4) Log out and back in, then check that the JDK installed correctly: java -version
5) Repeat the steps above on the other two machines
5. Install Hadoop
1) Unpack Hadoop: tar -zxvf hadoop-2.8.0.tar.gz
2) Edit the environment variables (vi /etc/profile) and append:
export HADOOP_HOME=/home/hadoop/hadoop-2.8.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
3) Log out and back in, then check that Hadoop installed correctly: hadoop version
4) Change to the /home/hadoop/hadoop-2.8.0/etc/hadoop directory
5) Open hadoop-2.8.0/etc/hadoop/hadoop-env.sh and set JAVA_HOME explicitly:
export JAVA_HOME=/home/hadoop/jdk1.7.0_71
6) Edit hadoop-2.8.0/etc/hadoop/slaves and add the slave host names:
slave1
slave2
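Note that start-all.sh later starts the slave daemons over SSH, so passwordless SSH from master to slave1 and slave2 is assumed to be in place. One common setup sketch (run on master; the ssh-copy-id lines are commented out because they prompt for a password):

```shell
# Generate an SSH key pair on master if one does not exist yet.
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Push the public key to each slave (prompts once per host):
# ssh-copy-id root@slave1
# ssh-copy-id root@slave2
```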
7) Configure hadoop-2.8.0/etc/hadoop/core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<!-- Size of read/write buffer used in SequenceFiles. -->
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<!-- Hadoop temporary directory; create it yourself -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/*/hadoop/tmp</value>
</property>
</configuration>
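The directory named by hadoop.tmp.dir above is not created automatically. A sketch, using a hypothetical base path (substitute the path you actually configured, on every node):

```shell
# Create the temporary directory referenced by hadoop.tmp.dir.
# /tmp/hadoop-demo is a placeholder, not the path from the config above.
mkdir -p /tmp/hadoop-demo/tmp
```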
8) Configure hadoop-2.8.0/etc/hadoop/hdfs-site.xml:
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<!-- NameNode data directory; create it yourself -->
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/*/hadoop/hdfs/name</value>
</property>
<!-- DataNode data directory; create it yourself -->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/*/hadoop/hdfs/data</value>
</property>
</configuration>
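Likewise, the NameNode and DataNode directories have to exist before the first start. A sketch with the same hypothetical base path (use your configured paths on each node):

```shell
# Create the dfs.namenode.name.dir and dfs.datanode.data.dir directories.
# /tmp/hadoop-demo is a placeholder, not the path from the config above.
mkdir -p /tmp/hadoop-demo/hdfs/name /tmp/hadoop-demo/hdfs/data
```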
9) Configure hadoop-2.8.0/etc/hadoop/yarn-site.xml:
<configuration>
<!-- Site specific YARN configuration properties -->
<!-- Configurations for ResourceManager -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
10) Configure hadoop-2.8.0/etc/hadoop/mapred-site.xml
Note: mapred-site.xml does not exist by default, so copy the template first:
cp mapred-site.xml.template mapred-site.xml
Then add the following to mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
11) Copy the finished configuration to the slave nodes:
scp -r hadoop root@slave1:/home/hadoop/hadoop-2.8.0/etc
scp -r hadoop root@slave2:/home/hadoop/hadoop-2.8.0/etc
Note: the Hadoop directory itself also has to be copied over:
scp -r hadoop root@slave1:/home/*/
scp -r hadoop root@slave2:/home/*/
12) Format the NameNode (on master, before the first start only):
cd hadoop-2.8.0/sbin
hdfs namenode -format
13) Start the whole Hadoop cluster (on master):
cd hadoop-2.8.0/sbin
./start-all.sh
14) Start the JobHistoryServer (optional: it only records MapReduce job history and this step can be skipped):
cd hadoop-2.8.0/sbin
./mr-jobhistory-daemon.sh start historyserver
15) Check that the processes started correctly
Run the jps command on the master node; it should list processes like the following:
3458 Jps
3150 SecondaryNameNode
2939 NameNode
3364 ResourceManager
Run jps on slave1 and slave2; it should list processes like the following:
2969 NodeManager
3191 Jps
2801 DataNode
If a process is missing, look for errors in the logs under hadoop-2.8.0/logs.
Master node:
hadoop-2.8.0/logs/hadoop-root-namenode-master.log #NameNode log
hadoop-root-secondarynamenode-master.log #SecondaryNameNode log
yarn-root-resourcemanager-master.log #ResourceManager log
slave1/slave2 nodes:
hadoop-root-datanode-slave1.log #DataNode log
yarn-root-nodemanager-slave1.log #NodeManager log
16) Access the web UIs
http://192.168.80.128:50070 #HDFS overview for the whole cluster
http://192.168.80.128:50090 #SecondaryNameNode status
http://192.168.80.128:8088 #ResourceManager (YARN) status
http://192.168.80.128:19888 #JobHistoryServer (MapReduce job history)
Note:
If the NameNode process never appears on master, the daemons can also be started one at a time:
1) Start the NameNode: sbin/hadoop-daemon.sh start namenode
2) Start the DataNode: sbin/hadoop-daemon.sh start datanode