SSH Mutual Trust
Under the hadoop user:
1. Generate an RSA key pair on every node:
ssh-keygen -t rsa
2. Append id_rsa.pub to the authorized_keys file:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
3. Fix the permissions:
chmod 644 ~/.ssh/authorized_keys
4. SSH into the local machine: ssh localhost
Exit the session: exit
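If ssh localhost still prompts for a password, sshd (with its default StrictModes setting) is usually rejecting the key because the directory permissions are too open; a quick check:
chmod 700 ~/.ssh    # sshd ignores keys when ~/.ssh is group- or world-writable
ls -ld ~/.ssh ~/.ssh/authorized_keys    # confirm the permissions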
Configuring the Environment
Under root (repeat on every node):
1. Prepare the environment.
- Change the hostname:
vi /etc/sysconfig/network
- Edit the hosts file:
vi /etc/hosts
Add:
192.168._._ master
192.168._._ slave01
192.168._._ slave02
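To confirm the mappings took effect, each node should be able to reach the others by name (hostnames as listed above):
ping -c 1 master
ping -c 1 slave01
ping -c 1 slave02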
2. List all services: chkconfig
Check the firewall status: service iptables status
Disable the firewall: chkconfig iptables off    // permanent
service iptables stop    // temporary (current session only)
Check the SELinux status: getenforce
Disable SELinux: vi /etc/selinux/config
and change the setting to SELINUX=disabled
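The firewall and SELinux steps can also be applied in one pass per node; a minimal sketch (setenforce 0 only switches SELinux to permissive for the current session, while the sed edit makes the change permanent after a reboot):
chkconfig iptables off
service iptables stop
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config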
Under the hadoop user, copy each node's key to the other two:
master: ssh-copy-id -i hadoop@slave01
ssh-copy-id -i hadoop@slave02
slave01: ssh-copy-id -i hadoop@master
ssh-copy-id -i hadoop@slave02
slave02: ssh-copy-id -i hadoop@master
ssh-copy-id -i hadoop@slave01
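A quick way to verify full mutual trust, run from each node in turn (hostnames as configured above):
for h in master slave01 slave02; do
    ssh hadoop@$h hostname    # should print each hostname without asking for a password
done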
Installing hadoop-2.6.0
Install Java; either of two methods works.
Method 1:
1. As root, create a java directory under /usr, copy the JDK archive in, and extract it:
mkdir /usr/java
tar -xvf jdk-8u66-linux-x64.tar.gz
2. Configure /etc/profile:
vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_66
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
source /etc/profile
Installed this way, Java is available to root and to all other users.
Method 2:
1. As the hadoop user, create a java directory under /home/hadoop/, copy the JDK archive in, and extract it:
mkdir /home/hadoop/java
tar -xvf jdk-8u66-linux-x64.tar.gz
2. Configure .bash_profile:
vim .bash_profile
export JAVA_HOME=/home/hadoop/java/jdk1.8.0_66
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
source .bash_profile
Installing this way avoids changing root's environment variables; choose this mode when you have no root access.
Both methods are tested the same way: run java or java -version.
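A minimal check that the variables took effect (source the file you actually edited):
source /etc/profile    # method 1; for method 2: source ~/.bash_profile
echo $JAVA_HOME
java -version    # should report version 1.8.0_66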
Installing Hadoop
1. Copy hadoop-2.6.0.tar.gz to /home/hadoop and extract it.
2. Add the environment variables:
vi .bash_profile
HADOOP_HOME=/home/hadoop/hadoop-2.6.0
PATH=/home/hadoop/hadoop-2.6.0/bin:/home/hadoop/hadoop-2.6.0/sbin:$PATH:$HOME/bin
HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export PATH HADOOP_HOME HADOOP_COMMON_LIB_NATIVE_DIR HADOOP_OPTS
Apply the changes and verify:
source .bash_profile
hadoop version
3. Fully distributed installation: edit the configuration files (a 3-node cluster here):
cd /home/hadoop/hadoop-2.6.0/etc/hadoop
(1) hadoop-env.sh
Set JAVA_HOME:
export JAVA_HOME=/usr/java/jdk1.8.0_66
(2) yarn-env.sh
Set JAVA_HOME:
export JAVA_HOME=/usr/java/jdk1.8.0_66
(3)core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-2.6.0/tmp</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
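hadoop.tmp.dir is not necessarily created automatically, so it is safest to create it up front on every node (path as configured above):
mkdir -p /home/hadoop/hadoop-2.6.0/tmp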
(4)hdfs-site.xml
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9005</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/hadoop-2.6.0/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/hadoop-2.6.0/hdfs/data</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<!-- only two DataNodes (slave01, slave02) are listed in slaves, so a factor of 3 would leave every block under-replicated -->
<name>dfs.replication</name>
<value>2</value>
</property>
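Likewise, the NameNode and DataNode directories can be created ahead of time (the name directory matters on master, the data directory on the slaves; paths as above):
mkdir -p /home/hadoop/hadoop-2.6.0/hdfs/name /home/hadoop/hadoop-2.6.0/hdfs/data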
(5) mapred-site.xml (create it from the template first):
cp mapred-site.xml.template mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:10021</value>
</property>
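Note that start-dfs.sh/start-yarn.sh (used later) do not start the JobHistory server; with the two addresses configured above, it is started separately on master:
mr-jobhistory-daemon.sh start historyserver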
(6)yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
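Optionally, the four resourcemanager addresses above can be collapsed into a single hostname entry, from which Hadoop 2.x derives the same per-service defaults (8030/8031/8032/8033); a sketch:
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>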
(7) Edit slaves to list the DataNodes, one per line:
slave01
slave02
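With SSH trust already in place, the whole Hadoop directory (including the edited configuration) can be pushed from master to the slaves in one step:
scp -r /home/hadoop/hadoop-2.6.0 hadoop@slave01:/home/hadoop/
scp -r /home/hadoop/hadoop-2.6.0 hadoop@slave02:/home/hadoop/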
Format the NameNode (on master, first start only; reformatting destroys existing HDFS metadata):
hdfs namenode -format
Start HDFS:
start-dfs.sh
Start YARN:
start-yarn.sh
Check the processes: jps (master should show NameNode, SecondaryNameNode, and ResourceManager; each slave should show DataNode and NodeManager)
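Once the daemons are up, a quick smoke test with the standard HDFS shell:
hdfs dfs -mkdir -p /user/hadoop    # create a home directory in HDFS
hdfs dfs -ls /    # list the filesystem root
hdfs dfsadmin -report    # both DataNodes should appear as live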
Stop the services:
stop-dfs.sh
stop-yarn.sh