1. Hadoop's HA Mechanism
Preface: HA was formally introduced in Hadoop 2.0; earlier versions had no HA mechanism.
1.1. How HA Works
(1) Overview of how a Hadoop HA cluster works
HA means high availability (uninterrupted 24x7 service).
The key to high availability is eliminating single points of failure.
Strictly speaking, Hadoop HA should be split into the HA mechanisms of its components: HDFS HA and YARN HA.
(2) HDFS HA in detail
The single point of failure is eliminated by running two NameNodes.
Key points for coordinating the two NameNodes:
A. The way metadata is managed has to change:
Each NameNode keeps its own copy of the metadata in memory.
There is only one edits log, and only the NameNode in Active state may write to it.
Both NameNodes can read the edits.
The shared edits are kept in shared storage (qjournal and NFS are the two mainstream implementations).
B. A state-management module is needed:
A ZKFailoverController (zkfc) runs on the host of each NameNode.
Each zkfc monitors its own NameNode and records its state in ZooKeeper.
When a state switch is needed, the zkfc performs the switchover.
During the switchover, split-brain must be prevented (a quick way to check which NameNode is active is sketched right after this list).
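For reference, a minimal sketch of that state check with the haadmin tool, assuming the nameservice bi and the NameNode ids nn1/nn2 configured later in section 7.4 (these commands are covered in more detail in section 10):
hdfs haadmin -getServiceState nn1    # prints active or standby
hdfs haadmin -getServiceState nn2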
1.2. HDFS HA Diagram
2. Host Plan

| Hostname | Public IP | Private IP | OS | Notes | Installed software | Running processes |
| --- | --- | --- | --- | --- | --- | --- |
| mini01 | 10.0.0.111 | 172.16.1.111 | CentOS 7.4 | ssh port:22 | jdk, hadoop | NameNode, DFSZKFailoverController (zkfc) |
| mini02 | 10.0.0.112 | 172.16.1.112 | CentOS 7.4 | ssh port:22 | jdk, hadoop | NameNode, DFSZKFailoverController (zkfc) |
| mini03 | 10.0.0.113 | 172.16.1.113 | CentOS 7.4 | ssh port:22 | jdk, hadoop, zookeeper | ResourceManager |
| mini04 | 10.0.0.114 | 172.16.1.114 | CentOS 7.4 | ssh port:22 | jdk, hadoop, zookeeper | ResourceManager |
| mini05 | 10.0.0.115 | 172.16.1.115 | CentOS 7.4 | ssh port:22 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| mini06 | 10.0.0.116 | 172.16.1.116 | CentOS 7.4 | ssh port:22 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| mini07 | 10.0.0.117 | 172.16.1.117 | CentOS 7.4 | ssh port:22 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
Note: in HA mode there is no SecondaryNameNode, because the NameNode in STANDBY state is responsible for checkpointing.
Add the hosts entries below on every Linux machine so that all hosts can ping each other by name.
[root@mini01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.111 mini01
10.0.0.112 mini02
10.0.0.113 mini03
10.0.0.114 mini04
10.0.0.115 mini05
10.0.0.116 mini06
10.0.0.117 mini07
Modify the Windows hosts file
# File location: C:\Windows\System32\drivers\etc — append the following to the hosts file
…………………………………………
10.0.0.111 mini01
10.0.0.112 mini02
10.0.0.113 mini03
10.0.0.114 mini04
10.0.0.115 mini05
10.0.0.116 mini06
10.0.0.117 mini07
3. Create the User Account
# Use a dedicated user rather than working as root directly
# Add the user, set its home directory, and set its password
useradd -d /app yun && echo '' | /usr/bin/passwd --stdin yun
# Grant sudo privileges
echo "yun ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# Allow other regular users to enter the directory and view its contents
chmod 755 /app/
4. Passwordless SSH Login for the yun User
Requirement: according to the plan, each of mini01 through mini07 must be able to log in to every one of mini01, mini02, mini03, mini04, mini05, mini06 and mini07 without a password.
# Either IPs or hostnames could be used; since we plan to interact by hostname, the keys are distributed by hostname
# With hostname-based key distribution you can still log in remotely either by hostname or by IP
The detailed steps are not repeated here; see Hadoop2.7.6_01_部署.
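For convenience, a minimal sketch of the key distribution (run as the yun user on each of mini01–mini07; ssh-copy-id prompts for the password once per target host):
[yun@mini01 ~]$ ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''    # generate a key pair without a passphrase
[yun@mini01 ~]$ for host in mini01 mini02 mini03 mini04 mini05 mini06 mini07; do ssh-copy-id yun@$host; done    # append the public key to ~/.ssh/authorized_keys on every host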
5. JDK [Java 8]
The detailed steps are not repeated here; see Hadoop2.7.6_01_部署.
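For reference, a minimal sketch of the environment file this document assumes (the /app/jdk symlink is visible in section 7.1; the file name jdk.sh is an assumption):
# /etc/profile.d/jdk.sh — sketch; assumes the JDK is unpacked under /app and symlinked to /app/jdk
export JAVA_HOME="/app/jdk"
export PATH=$JAVA_HOME/bin:$PATH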
6. ZooKeeper Deployment
According to the plan, ZooKeeper is deployed on mini03, mini04, mini05, mini06 and mini07.
6.1. Configuration
[yun@mini03 conf]$ pwd
/app/zookeeper/conf
[yun@mini03 conf]$ vim zoo.cfg
# Limit on the number of connections from a single client to a single server, enforced per IP; the default is 60, and 0 means no limit.
maxClientCnxns=
# The number of milliseconds of each tick
tickTime=
# The number of ticks that the initial
# synchronization phase can take
initLimit=
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# dataDir=/tmp/zookeeper
dataDir=/app/bigdata/zookeeper/data
# the port at which the clients will connect
clientPort=
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=
# Purge task interval in hours
# Set to "" to disable auto purge feature
#autopurge.purgeInterval=
# Ports used for leader/follower communication and for leader election
server.=mini03::
server.=mini04::
server.=mini05::
server.=mini06::
server.=mini07::
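The numeric values in the listing above did not survive extraction. For reference, a complete zoo.cfg for this layout might look like the sketch below, assuming ZooKeeper's stock defaults (tickTime 2000, initLimit 10, syncLimit 5, client port 2181, peer/election ports 2888/3888) and the server ids chosen in section 6.2:
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=0                         # 0 = unlimited; an assumption
dataDir=/app/bigdata/zookeeper/data
clientPort=2181
# server.<myid>=<host>:<peer port>:<election port>
server.3=mini03:2888:3888
server.4=mini04:2888:3888
server.5=mini05:2888:3888
server.6=mini06:2888:3888
server.7=mini07:2888:3888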
6.2. Add the myid File
[yun@mini03 data]$ pwd
/app/bigdata/zookeeper/data
[yun@mini03 data]$ vim myid    # the myid is 3 on mini03, 4 on mini04, 5 on mini05, 6 on mini06 and 7 on mini07
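Equivalently, the file can be written in one command; a sketch for mini03 (change the number on each host to match the list above):
[yun@mini03 data]$ echo 3 > /app/bigdata/zookeeper/data/myid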
6.3. Start the ZooKeeper Service
# Start the zk service on mini03, mini04, mini05, mini06 and mini07 in turn
[yun@mini03 ~]$ cd zookeeper/bin/
[yun@mini03 bin]$ pwd
/app/zookeeper/bin
[yun@mini03 bin]$ ll
total
-rwxr-xr-x yun yun Oct README.txt
-rwxr-xr-x yun yun Oct zkCleanup.sh
-rwxr-xr-x yun yun Oct zkCli.cmd
-rwxr-xr-x yun yun Oct zkCli.sh
-rwxr-xr-x yun yun Oct zkEnv.cmd
-rwxr-xr-x yun yun Oct zkEnv.sh
-rwxr-xr-x yun yun Oct zkServer.cmd
-rwxr-xr-x yun yun Oct zkServer.sh
-rw-rw-r-- yun yun Jun : zookeeper.out
[yun@mini03 bin]$ ./zkServer.sh start
JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
6.4. Check the Running Status
# The status on mini03, mini04, mini06 and mini07 looks like this
[yun@mini03 bin]$ ./zkServer.sh status
JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Mode: follower
# The status on mini05 looks like this
[yun@mini05 bin]$ ./zkServer.sh status
JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Mode: leader
PS: 4 followers and 1 leader.
7. Hadoop Deployment and Configuration Changes
Note: the Hadoop installation and configuration are identical on every machine.
7.1. Deployment
[yun@mini01 software]$ pwd
/app/software
[yun@mini01 software]$ ll
total
-rw-r--r-- yun yun Jun : CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ tar xf CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ mv hadoop-2.7.6/ /app/
[yun@mini01 software]$ cd
[yun@mini01 ~]$ ln -s hadoop-2.7.6/ hadoop
[yun@mini01 ~]$ ll
total
lrwxrwxrwx yun yun Jun : hadoop -> hadoop-2.7.6/
drwxr-xr-x yun yun Jun : hadoop-2.7.6
lrwxrwxrwx yun yun May : jdk -> jdk1.8.0_112
drwxr-xr-x yun yun Sep jdk1.8.0_112
7.2. Environment Variables
[root@mini01 profile.d]# pwd
/etc/profile.d
[root@mini01 profile.d]# vim hadoop.sh
export HADOOP_HOME="/app/hadoop"
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
[root@mini01 profile.d]# source /etc/profile    # make it take effect
7.3. core-site.xml
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
……………………
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- Set the HDFS nameservice to bi -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bi/</value>
    </property>
    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop/tmp</value>
    </property>
    <!-- ZooKeeper quorum address -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>mini03:,mini04:,mini05:,mini06:,mini07:</value>
    </property>
</configuration>
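The client ports in ha.zookeeper.quorum did not survive above; assuming ZooKeeper's default client port 2181 (consistent with the zoo.cfg sketch in section 6.1), the value would read:
<value>mini03:2181,mini04:2181,mini05:2181,mini06:2181,mini07:2181</value>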
7.4. hdfs-site.xml
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
……………………
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- Set the HDFS nameservice to bi; this must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>bi</value>
    </property>
    <!-- The nameservice bi has two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.bi</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.bi.nn1</name>
        <value>mini01:</value>
    </property>
    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.bi.nn1</name>
        <value>mini01:</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.bi.nn2</name>
        <value>mini02:</value>
    </property>
    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.bi.nn2</name>
        <value>mini02:</value>
    </property>
    <!-- Where the NameNode shared edits are stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://mini05:8485;mini06:8485;mini07:8485/bi</value>
    </property>
    <!-- Where each JournalNode stores its data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/app/hadoop/journaldata</value>
    </property>
    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Client-side failover implementation -->
    <property>
        <name>dfs.client.failover.proxy.provider.bi</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; multiple methods are separated by newlines, one method per line -->
    <!-- shell(/bin/true) means a script is executed, e.g. shell(/app/yunwei/hadoop_fence.sh) -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- sshfence requires passwordless SSH -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/app/.ssh/id_rsa</value>
    </property>
    <!-- sshfence connect timeout, in milliseconds -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value></value>
    </property>
</configuration>
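The NameNode port numbers above were also lost. Section 9 reaches the NameNode web UI on port 50070, and 9000 is a commonly used RPC port in setups like this one (an assumption, not confirmed by the original); with those values the four address properties would read:
<value>mini01:9000</value>    <!-- dfs.namenode.rpc-address.bi.nn1; 9000 is assumed -->
<value>mini01:50070</value>   <!-- dfs.namenode.http-address.bi.nn1; 50070 matches section 9 -->
<value>mini02:9000</value>    <!-- dfs.namenode.rpc-address.bi.nn2; 9000 is assumed -->
<value>mini02:50070</value>   <!-- dfs.namenode.http-address.bi.nn2 -->
For dfs.ha.fencing.ssh.connect-timeout, 30000 (30 seconds) is a typical choice.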
7.5. mapred-site.xml
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ cp -a mapred-site.xml.template mapred-site.xml
[yun@mini01 hadoop]$ vim mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
……………………
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- Use YARN as the MapReduce framework -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
7.6. yarn-site.xml
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim yarn-site.xml
<?xml version="1.0"?>
<!--
……………………
-->
<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- The RM cluster id -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- Logical names of the RMs -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- Hostname of each RM -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>mini03</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>mini04</value>
    </property>
    <!-- ZooKeeper quorum address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>mini03:,mini04:,mini05:,mini06:,mini07:</value>
    </property>
    <!-- Shuffle service used by reducers to fetch data -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
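As in core-site.xml, the ZooKeeper client ports in yarn.resourcemanager.zk-address were stripped; assuming the default port 2181, the value would read:
<value>mini03:2181,mini04:2181,mini05:2181,mini06:2181,mini07:2181</value>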
7.7. Edit slaves
The slaves file lists the worker nodes. HDFS is started from mini01 and YARN from mini03, so the slaves file on mini01 determines where the DataNodes are started, and the slaves file on mini03 determines where the NodeManagers are started.
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim slaves
mini05
mini06
mini07
PS: after changing the configuration, copy it to the other Hadoop machines.
8. Start the Services
Note: the first startup must follow the steps below strictly!
8.1. Start the ZooKeeper Cluster
It was already started above, so it is not repeated here.
8.2. Start the JournalNodes
# According to the plan, start them on mini05, mini06 and mini07
# The JournalNodes only need to be started manually before the first format; afterwards this step is unnecessary
[yun@mini05 ~]$ hadoop-daemon.sh start journalnode    # the environment variables are already set, so there is no need to cd into the Hadoop directory
starting journalnode, logging to /app/hadoop-2.7./logs/hadoop-yun-journalnode-mini05.out
[yun@mini05 ~]$ jps
QuorumPeerMain
Jps
JournalNode
8.3. Format HDFS
# Run this command on mini01
[yun@mini01 ~]$ hdfs namenode -format
// :: INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = mini01/10.0.0.111
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.6
STARTUP_MSG: classpath = ………………
STARTUP_MSG: build = Unknown -r Unknown; compiled by 'root' on 2018-06-08T08:30Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
// :: INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
// :: INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-2385f26e-72e6--aa09-47848b5ba4be
// :: INFO namenode.FSNamesystem: No KeyProvider found.
// :: INFO namenode.FSNamesystem: fsLock is fair: true
// :: INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
// :: INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=
// :: INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
// :: INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to :::00.000
// :: INFO blockmanagement.BlockManager: The block deletion will start around Jun ::
// :: INFO util.GSet: Computing capacity for map BlocksMap
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
// :: INFO blockmanagement.BlockManager: defaultReplication =
// :: INFO blockmanagement.BlockManager: maxReplication =
// :: INFO blockmanagement.BlockManager: minReplication =
// :: INFO blockmanagement.BlockManager: maxReplicationStreams =
// :: INFO blockmanagement.BlockManager: replicationRecheckInterval =
// :: INFO blockmanagement.BlockManager: encryptDataTransfer = false
// :: INFO blockmanagement.BlockManager: maxNumBlocksToLog =
// :: INFO namenode.FSNamesystem: fsOwner = yun (auth:SIMPLE)
// :: INFO namenode.FSNamesystem: supergroup = supergroup
// :: INFO namenode.FSNamesystem: isPermissionEnabled = true
// :: INFO namenode.FSNamesystem: Determined nameservice ID: bi
// :: INFO namenode.FSNamesystem: HA Enabled: true
// :: INFO namenode.FSNamesystem: Append Enabled: true
// :: INFO util.GSet: Computing capacity for map INodeMap
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO namenode.FSDirectory: ACLs enabled? false
// :: INFO namenode.FSDirectory: XAttrs enabled? true
// :: INFO namenode.FSDirectory: Maximum size of an xattr:
// :: INFO namenode.NameNode: Caching file names occuring more than times
// :: INFO util.GSet: Computing capacity for map cachedBlocks
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
// :: INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes =
// :: INFO namenode.FSNamesystem: dfs.namenode.safemode.extension =
// :: INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets =
// :: INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users =
// :: INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = ,,
// :: INFO namenode.FSNamesystem: Retry cache on namenode is enabled
// :: INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is millis
// :: INFO util.GSet: Computing capacity for map NameNodeRetryCache
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO namenode.FSImage: Allocated new BlockPoolId: BP--10.0.0.111-
// :: INFO common.Storage: Storage directory /app/hadoop/tmp/dfs/name has been successfully formatted.
// :: INFO namenode.FSImageFormatProtobuf: Saving image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
// :: INFO namenode.FSImageFormatProtobuf: Image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size bytes saved in seconds.
// :: INFO namenode.NNStorageRetentionManager: Going to retain images with txid >=
// :: INFO util.ExitUtil: Exiting with status
// :: INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at mini01/10.0.0.111
************************************************************/
Copy the metadata to mini02
# Formatting generates files under the directory configured as hadoop.tmp.dir in core-site.xml — /app/hadoop/tmp here. Copy /app/hadoop/tmp to /app/hadoop/ on mini02.
# Method 1:
[yun@mini01 hadoop]$ pwd
/app/hadoop
[yun@mini01 hadoop]$ scp -r tmp/ yun@mini02:/app/hadoop
VERSION % .4KB/s :
seen_txid % .0KB/s :
fsimage_0000000000000000000.md5 % .7KB/s :
fsimage_0000000000000000000 % .1KB/s :
##########################
# Method 2: alternatively, run hdfs namenode -bootstrapStandby on mini02 (recommended); this requires the NameNode just formatted on mini01 to be running, since the standby copies the namespace from it
8.4. Format ZKFC
# Run this once, on mini01
[yun@mini01 ~]$ hdfs zkfc -formatZK
// :: INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at mini01/10.0.0.111:
// :: INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.-, built on // : GMT
// :: INFO zookeeper.ZooKeeper: Client environment:host.name=mini01
// :: INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_112
// :: INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
// :: INFO zookeeper.ZooKeeper: Client environment:java.home=/app/jdk1..0_112/jre
// :: INFO zookeeper.ZooKeeper: Client environment:java.class.path=……………………
// :: INFO zookeeper.ZooKeeper: Client environment:java.library.path=/app/hadoop-2.7./lib/native
// :: INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
// :: INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
// :: INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
// :: INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
// :: INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.-.el7.x86_64
// :: INFO zookeeper.ZooKeeper: Client environment:user.name=yun
// :: INFO zookeeper.ZooKeeper: Client environment:user.home=/app
// :: INFO zookeeper.ZooKeeper: Client environment:user.dir=/app/hadoop-2.7.
// :: INFO zookeeper.ZooKeeper: Initiating client connection, connectString=mini03:,mini04:,mini05:,mini06:,mini07: sessionTimeout= watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@7f3b84b8
// :: INFO zookeeper.ClientCnxn: Opening socket connection to server mini04/10.0.0.114:. Will not attempt to authenticate using SASL (unknown error)
// :: INFO zookeeper.ClientCnxn: Socket connection established to mini04/10.0.0.114:, initiating session
// :: INFO zookeeper.ClientCnxn: Session establishment complete on server mini04/10.0.0.114:, sessionid = 0x4644fff9cb80000, negotiated timeout =
// :: INFO ha.ActiveStandbyElector: Session connected.
// :: INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/bi in ZK.
// :: INFO zookeeper.ZooKeeper: Session: 0x4644fff9cb80000 closed
// :: INFO zookeeper.ClientCnxn: EventThread shut down
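To confirm that the znode exists, you can check from any ZooKeeper node with the ZooKeeper CLI; a sketch (2181 assumes the default client port):
[yun@mini03 bin]$ ./zkCli.sh -server mini03:2181
ls /hadoop-ha          # should list the nameservice znode created above: [bi]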
8.5. Start HDFS
# Run on mini01
[yun@mini01 ~]$ start-dfs.sh
Starting namenodes on [mini01 mini02]
mini01: starting namenode, logging to /app/hadoop-2.7./logs/hadoop-yun-namenode-mini01.out
mini02: starting namenode, logging to /app/hadoop-2.7./logs/hadoop-yun-namenode-mini02.out
mini07: starting datanode, logging to /app/hadoop-2.7./logs/hadoop-yun-datanode-mini07.out
mini06: starting datanode, logging to /app/hadoop-2.7./logs/hadoop-yun-datanode-mini06.out
mini05: starting datanode, logging to /app/hadoop-2.7./logs/hadoop-yun-datanode-mini05.out
Starting journal nodes [mini05 mini06 mini07]
mini07: journalnode running as process . Stop it first.
mini06: journalnode running as process . Stop it first.
mini05: journalnode running as process . Stop it first.
Starting ZK Failover Controllers on NN hosts [mini01 mini02]
mini01: starting zkfc, logging to /app/hadoop-2.7./logs/hadoop-yun-zkfc-mini01.out
mini02: starting zkfc, logging to /app/hadoop-2.7./logs/hadoop-yun-zkfc-mini02.out
8.6. Start YARN
##### Note #####: run start-yarn.sh on mini03. The NameNode and ResourceManager are placed on separate machines for performance reasons
# Both consume a lot of resources, so they are separated; and since they run on different machines, they must be started separately on each machine
[yun@mini03 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /app/hadoop-2.7./logs/yarn-yun-resourcemanager-mini03.out
mini06: starting nodemanager, logging to /app/hadoop-2.7./logs/yarn-yun-nodemanager-mini06.out
mini07: starting nodemanager, logging to /app/hadoop-2.7./logs/yarn-yun-nodemanager-mini07.out
mini05: starting nodemanager, logging to /app/hadoop-2.7./logs/yarn-yun-nodemanager-mini05.out
################################
# Start the second resourcemanager on mini04
[yun@mini04 ~]$ yarn-daemon.sh start resourcemanager    # start-yarn.sh would also work
starting resourcemanager, logging to /app/hadoop-2.7./logs/yarn-yun-resourcemanager-mini04.out
8.7. Startup Notes
# The first startup must follow the steps above strictly (the first time involves formatting)
# From the second startup onwards, the order is simply: start ZooKeeper, then HDFS, then YARN
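For reference, a sketch of that routine startup, using the hosts from the plan in section 2:
# 1) on mini03, mini04, mini05, mini06 and mini07
zkServer.sh start                        # or ./zkServer.sh start from /app/zookeeper/bin
# 2) on mini01
start-dfs.sh                             # NameNodes, DataNodes, JournalNodes and ZKFCs
# 3) on mini03
start-yarn.sh                            # ResourceManager and NodeManagers
# 4) on mini04
yarn-daemon.sh start resourcemanager     # the second ResourceManager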
9. Web Access
9.1. HDFS Web UI
9.1.1. Normal Access
http://mini01:50070
http://mini02:50070
9.1.2. Automatic Active Failover when mini01 Goes Down
# On mini01
[yun@mini01 ~]$ jps
DFSZKFailoverController
NameNode
Jps
[yun@mini01 ~]$ kill
[yun@mini01 ~]$ jps
DFSZKFailoverController
Jps
The NameNode process is down, so mini01's web UI is no longer reachable.
http://mini02:50070
As you can see, Hadoop has failed over to mini02. Even after mini01 comes back up, its NameNode can only be in standby state.
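To bring mini01's NameNode back and confirm the roles, the commands from section 10 apply; a sketch:
[yun@mini01 ~]$ hadoop-daemon.sh start namenode
[yun@mini01 ~]$ hdfs haadmin -getServiceState nn1    # expected: standby
[yun@mini01 ~]$ hdfs haadmin -getServiceState nn2    # expected: active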
9.2. YARN Web UI
http://mini03:8088
http://mini04:8088
Accessing http://mini04:8088 redirects straight to http://mini03:8088/
# The screenshot was taken from another environment, so it does not quite match
# Access from Linux
[yun@mini01 ~]$ curl mini04:8088
This is standby RM. The redirect url is: http://mini03:8088/
This completes the HA setup.
10. Cluster Operations and Maintenance Tests
10.1. haadmin and State Management
[yun@mini01 ~]$ hdfs haadmin
Usage: haadmin
[-transitionToActive [--forceactive] <serviceId>]
[-transitionToStandby <serviceId>]
[-failover [--forcefence] [--forceactive] <serviceId> <serviceId>]
[-getServiceState <serviceId>]
[-checkHealth <serviceId>]
[-help <command>]

Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|resourcemanager:port> specify a ResourceManager
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
As shown above, example commands for state operations:
# Check a NameNode's HA state
hdfs haadmin -getServiceState nn1
# Switch a standby NameNode to active
hdfs haadmin -transitionToActive nn1
# Switch an active NameNode to standby
hdfs haadmin -transitionToStandby nn2
10.2. Commands for Checking Cluster Status
Some useful commands for checking the working state of the cluster:
hdfs dfsadmin -report                 # view status information for every HDFS node
hdfs haadmin -getServiceState nn1     # or nn2; get the HA state of a NameNode
hadoop-daemon.sh start namenode       # start a single NameNode process
hadoop-daemon.sh start zkfc           # start a single zkfc process
10.3. Adding and Removing DataNodes Dynamically
Bringing a DataNode online dynamically is simple; the steps are:
a) Prepare a server and set up the environment
b) Deploy the Hadoop package and sync the cluster configuration
c) Connect it to the network and bring it online; the new DataNode joins the cluster automatically
d) If a large batch of DataNodes is added at once, rebalance the cluster afterwards
10.4. Block Balancing
Command to start the balancer:
start-balancer.sh -threshold 8
After it starts, a Balancer process appears.
The command above sets the threshold to 8%. The balancer first computes the average disk utilization across all DataNodes; any DataNode whose utilization differs from that average by more than the threshold percentage participates in rebalancing, and blocks are moved from over-utilized DataNodes to DataNodes with lower utilization. This is particularly useful after new nodes join the cluster. The threshold may be between 1 and 100; if it is not set explicitly, the default is 10.
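For reference, a sketch of invoking the balancer with the 8% threshold discussed above and confirming that it is running:
[yun@mini01 ~]$ start-balancer.sh -threshold 8
[yun@mini01 ~]$ jps | grep Balancer      # the Balancer process is listed while rebalancing is in progress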