Setting Up Spark HA

Posted: 2023-03-09 19:42:04

This follows on from the Hadoop HA setup: the ZooKeeper ensemble is already deployed there, so Spark can be installed directly.
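
Before going further, it is worth confirming that the ZooKeeper ensemble is actually serving. A minimal check, using ZooKeeper's standard ruok four-letter command against the three hosts from the Hadoop HA setup:

# Each healthy ZooKeeper server answers "imok" on its client port
for zk in MSJTVL-DSJC-H03 MSJTVL-DSJC-H04 MSJTVL-DSJC-H05; do
  echo -n "$zk: "; echo ruok | nc "$zk" 2181; echo
done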

tar -xzf spark-1.6.1-bin-hadoop2.6.tgz -C ../services

-bash-4.1$ ln -sv services/spark-1.6.1-bin-hadoop2.6/ spark
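
Optionally, put the symlinked directory on the PATH so the bin/ and sbin/ scripts resolve from anywhere; a small sketch, assuming the login shell reads ~/.bash_profile (an assumption, not shown in the post):

# Hypothetical ~/.bash_profile additions matching the symlink above
export SPARK_HOME=/hadoop/spark
export PATH=$SPARK_HOME/bin:$SPARK_HOME/sbin:$PATH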

Edit spark-env.sh with the settings below:

-bash-4.1$ vim spark-env.sh

# ZooKeeper-based Master recovery: recoveryMode enables HA, zookeeper.url
# names the ensemble, and zookeeper.dir is the znode for persisted state
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=MSJTVL-DSJC-H03:2181,MSJTVL-DSJC-H04:2181,MSJTVL-DSJC-H05:2181 -Dspark.deploy.zookeeper.dir=/hadoop/spark"
export JAVA_HOME=/opt/java/jdk1.8.0_91
export SCALA_HOME=/hadoop/services/scala-2.10.5
#export SPARK_WORKER_CORES=5
# Memory each Worker may hand out to executors on its node
export SPARK_WORKER_MEMORY=5g
# Point Spark at the Hadoop HA client configuration
export HADOOP_HOME=/hadoop/hadoop
export HADOOP_CONF_DIR=/hadoop/hadoop/etc/hadoop
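
The SPARK_DAEMON_JAVA_OPTS line does the HA work: each Master registers under the configured znode and takes part in a leader election, and the leader persists worker and application state there so a standby can recover it. Once the masters are running, the znode can be inspected; a sketch assuming zkCli.sh from the ZooKeeper installation is on the PATH:

# List the recovery znode; the Masters create leader-election and state entries here
zkCli.sh -server MSJTVL-DSJC-H03:2181 ls /hadoop/spark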

Add the worker hostnames to the slaves file (a sketch for syncing the config to those nodes follows the list):

-bash-4.1$ vim slaves

  MSJTVL-DSJC-H03
  MSJTVL-DSJC-H04
  MSJTVL-DSJC-H05
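
Every node needs the same conf/ contents. A minimal sync sketch, assuming passwordless SSH between the nodes (already in place for Hadoop HA) and an identical /hadoop/spark layout on each worker:

for host in MSJTVL-DSJC-H03 MSJTVL-DSJC-H04 MSJTVL-DSJC-H05; do
  # Push the edited config files to each worker node
  scp /hadoop/spark/conf/spark-env.sh /hadoop/spark/conf/slaves "$host:/hadoop/spark/conf/"
done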

Start Spark. Running start-all.sh on the Master node launches the Master locally and a Worker on each host listed in slaves, which is why jps here shows a Master but no Worker:

-bash-4.1$ ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /hadoop/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-iZ2zefn1rjw3rfejj5hhwbZ.out
iZ2zehhwq5a6tmvi3wg17iZ: starting org.apache.spark.deploy.worker.Worker, logging to /hadoop/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-iZ2zehhwq5a6tmvi3wg17iZ.out
iZ2zee62ni1rdbg34t5mydZ: starting org.apache.spark.deploy.worker.Worker, logging to /hadoop/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-iZ2zee62ni1rdbg34t5mydZ.out
iZ2zee62ni1rdbg34t5mycZ: starting org.apache.spark.deploy.worker.Worker, logging to /hadoop/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-iZ2zee62ni1rdbg34t5mycZ.out
-bash-4.1$ jps
9905 DFSZKFailoverController
31076 Jps
9621 NameNode
9995 ResourceManager
30990 Master

On another node, start a standby Master:

-bash-4.1$ ./start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /hadoop/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-iZ2zefn1rjw3rfejj5hhwaZ.out
-bash-4.1$
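
With both Masters up, failover can be exercised by killing the active one: ZooKeeper elects the standby, which recovers the persisted state, and the Workers re-register with it. A sketch using the PID from the jps output above:

# On the node running the active Master (PID 30990 in the jps output above)
kill -9 30990
# Refresh the standby's web UI: within about a minute its status
# should flip from STANDBY to ALIVE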

Each Master's state can be checked in the web UI on port 8080: the active Master reports ALIVE, while the one just started reports STANDBY.
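
Applications should name both Masters so the driver finds whichever is currently alive. A sketch assuming the Masters run on MSJTVL-DSJC-H01 and MSJTVL-DSJC-H02 (hypothetical hostnames; the post never lists them) and the examples jar shipped in the spark-1.6.1-bin-hadoop2.6 tarball:

# The comma-separated master list is tried in order until the leader answers
/hadoop/spark/bin/spark-submit \
  --master spark://MSJTVL-DSJC-H01:7077,MSJTVL-DSJC-H02:7077 \
  --class org.apache.spark.examples.SparkPi \
  /hadoop/spark/lib/spark-examples-1.6.1-hadoop2.6.0.jar 100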