The Road to Advanced Spark: Spark HA Configuration

Date: 2024-01-06 23:31:56


Author: 尹正杰

Copyright notice: This is original work. Reproduction is not permitted; violators will be held legally responsible.

  The cluster is deployed, but one big problem remains: the Master node is a single point of failure. To solve it, we bring in ZooKeeper and start at least two Master nodes to achieve high availability; the configuration is fairly simple. The environment in this post is built on top of the Standalone-mode deployment (https://www.cnblogs.com/yinzhengjie/p/9458161.html).

1>. Edit the spark-env.sh file, remove the previously hard-coded master host, and point Spark at the ZooKeeper cluster hosts

[yinzhengjie@s101 ~]$ grep -v ^# /soft/spark/conf/spark-env.sh | grep -v ^$
export JAVA_HOME=/soft/jdk
SPARK_MASTER_PORT=7077
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=4000 -Dspark.history.retainedApplications=3 -Dspark.history.fs.logDirectory=hdfs://s105:8020/yinzhengjie/logs"
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=s102:2181,s103:2181,s104:2181 -Dspark.deploy.zookeeper.dir=/spark"      #Specify the ZooKeeper ensemble addresses and the znode path where Spark stores its recovery state.
[yinzhengjie@s101 ~]$
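
  The three -D properties above are what enable HA: spark.deploy.recoveryMode=ZOOKEEPER makes the masters coordinate through ZooKeeper, spark.deploy.zookeeper.url lists the ensemble, and spark.deploy.zookeeper.dir is the znode under which election and recovery data are kept. As a quick sanity check you can inspect that znode with the zkCli.sh client that ships with ZooKeeper. This is only a sketch; the /soft/zk installation path is an assumption for this environment, and the /spark znode appears only after a master has started in recovery mode:

[yinzhengjie@s102 ~]$ /soft/zk/bin/zkCli.sh -server s102:2181      #connect to one of the servers listed in spark.deploy.zookeeper.url
ls /spark                                                          #inside the zkCli shell: list the znode set by spark.deploy.zookeeper.dir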

2>. Distribute the configuration

[yinzhengjie@s101 ~]$ more `which xrsync.sh`
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed an argument
if [ $# -lt 1 ];then
        echo "Please pass in a parameter";
        exit
fi

#Get the file path
file=$@
#Get the base name
filename=`basename $file`
#Get the parent directory
dirpath=`dirname $file`
#Get the full path
cd $dirpath
fullpath=`pwd -P`

#Sync the file to the DataNodes
for (( i=102;i<=105;i++ ))
do
        #Turn the terminal output green
        tput setaf 2
        echo =========== s$i %file ===========
        #Restore the default terminal color (grayish white)
        tput setaf 7
        #Run the command on the remote host
        rsync -lr $filename `whoami`@s$i:$fullpath
        #Check whether the command succeeded
        if [ $? == 0 ];then
                echo "Command executed successfully"
        fi
done
[yinzhengjie@s101 ~]$

This file-sync script requires passwordless SSH login to be configured before it can be used ([yinzhengjie@s101 ~]$ more `which xrsync.sh`).
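
  If passwordless login is not set up yet, the rough setup looks like the sketch below, assuming the same yinzhengjie account exists on every node; repeat the ssh-copy-id step for each of s102 through s105:

[yinzhengjie@s101 ~]$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa     #generate a key pair without a passphrase
[yinzhengjie@s101 ~]$ ssh-copy-id yinzhengjie@s102                 #install the public key on s102; repeat for s103, s104 and s105
[yinzhengjie@s101 ~]$ ssh s102 hostname                            #should print s102 without asking for a password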

[yinzhengjie@s101 ~]$ xrsync.sh /soft/spark
=========== s102 %file ===========
Command executed successfully
=========== s103 %file ===========
Command executed successfully
=========== s104 %file ===========
Command executed successfully
=========== s105 %file ===========
Command executed successfully
[yinzhengjie@s101 ~]$ xrsync.sh /soft/spark-2.1.-bin-hadoop2./
=========== s102 %file ===========
Command executed successfully
=========== s103 %file ===========
Command executed successfully
=========== s104 %file ===========
Command executed successfully
=========== s105 %file ===========
Command executed successfully
[yinzhengjie@s101 ~]$

3>. Start the Spark cluster from s101

[yinzhengjie@s101 ~]$ /soft/spark/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /soft/spark/logs/spark-yinzhengjie-org.apache.spark.deploy.master.Master-1-s101.out
s103: starting org.apache.spark.deploy.worker.Worker, logging to /soft/spark/logs/spark-yinzhengjie-org.apache.spark.deploy.worker.Worker-1-s103.out
s104: starting org.apache.spark.deploy.worker.Worker, logging to /soft/spark/logs/spark-yinzhengjie-org.apache.spark.deploy.worker.Worker-1-s104.out
s102: starting org.apache.spark.deploy.worker.Worker, logging to /soft/spark/logs/spark-yinzhengjie-org.apache.spark.deploy.worker.Worker-1-s102.out
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
DFSZKFailoverController
Jps
Master
NameNode
HistoryServer
Command executed successfully
============= s102 jps ============
QuorumPeerMain
DataNode
Jps
JournalNode
Worker
Command executed successfully
============= s103 jps ============
JournalNode
Worker
QuorumPeerMain
Jps
DataNode
Command executed successfully
============= s104 jps ============
Worker
QuorumPeerMain
Jps
DataNode
JournalNode
Command executed successfully
============= s105 jps ============
DFSZKFailoverController
NameNode
Jps
Command executed successfully
[yinzhengjie@s101 ~]$  

4>. Manually start a second master on s105

[yinzhengjie@s105 ~]$ /soft/spark/sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /soft/spark/logs/spark-yinzhengjie-org.apache.spark.deploy.master.Master-1-s105.out
[yinzhengjie@s105 ~]$ jps
Master
Jps
DFSZKFailoverController
NameNode
[yinzhengjie@s105 ~]$
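
  To see which master is currently active without opening a browser, you can query the master web UI from the shell. This is only a sketch; it assumes the default web UI port 8080 and that this Spark version exposes the /json status endpoint:

[yinzhengjie@s101 ~]$ curl -s http://s101:8080/json | grep -i status       #expected to report ALIVE (the active master)
[yinzhengjie@s101 ~]$ curl -s http://s105:8080/json | grep -i status       #expected to report STANDBY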

5>. Connect to the Spark cluster

(screenshot)
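
  When connecting a client to an HA standalone cluster, pass both master addresses so the driver can fail over automatically. A minimal sketch, assuming the default master port 7077:

[yinzhengjie@s101 ~]$ /soft/spark/bin/spark-shell --master spark://s101:7077,s105:7077

  The driver registers with whichever master is currently active; if that master goes down, it reconnects to the other address in the list.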

6>. Check the web UI of the master nodes

  The master info for s105 is shown below. At this point s105 shows nothing: no workers have registered with it and no running applications have been handed over to it, because it is still the standby master.

(screenshot)

   The master info for s101 is shown below. You will notice that the master currently doing the work is s101.

(screenshot)

7>. Manually kill the master process on s101

(screenshot)
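
  One way to kill the active master, sketched here under the assumption that the PID is the first field of the jps output:

[yinzhengjie@s101 ~]$ jps | grep -w Master                                  #locate the Master process
[yinzhengjie@s101 ~]$ kill -9 `jps | grep -w Master | awk '{print $1}'`     #force-kill it to simulate a master crash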

8>. Check whether the spark-shell command line still works

(screenshot)
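
  A quick way to confirm this is to run a tiny job after the failover. The sketch below pipes one line of Scala into a new spark-shell session; it should complete even though the s101 master is gone, because the driver falls back to s105:

[yinzhengjie@s101 ~]$ echo 'sc.parallelize(1 to 1000).sum' | /soft/spark/bin/spark-shell --master spark://s101:7077,s105:7077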

9>. Check whether a healthy master is still alive in the cluster (obviously, s105 must have taken over by now)

(screenshot)
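
  Besides checking jps, the standby master writes a leader-election message to its log when it takes over. A sketch, assuming the log file follows the naming shown in step 4 (the exact wording of the message can differ between Spark versions):

[yinzhengjie@s105 ~]$ jps | grep -w Master                                   #the Master process on s105 is still running
[yinzhengjie@s105 ~]$ grep -i "elected leader" /soft/spark/logs/spark-yinzhengjie-org.apache.spark.deploy.master.Master-1-s105.out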

10>. Check the web UI of s105 again

(screenshot)

  Since we have manually killed the master process on s101, its web UI can no longer be accessed:

(screenshot)
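
  The same thing can be verified from the command line; a sketch assuming the default web UI port 8080:

[yinzhengjie@s101 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://s101:8080      #fails (prints 000): the master process on s101 is gone
[yinzhengjie@s101 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://s105:8080      #prints 200: the web UI of the new active master is reachable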