Cluster Environment
hadoop-2.8.3 (for the detailed setup, see the Hadoop series of articles)
scala-2.11.12 (for the environment setup, see the Scala series of articles)
jdk1.8.0_161
spark-2.4.0-bin-hadoop2.7
192.168.217.201 hadoop1.org.cn hadoop1
192.168.217.202 hadoop2.org.cn hadoop2
192.168.217.203 hadoop3.org.cn hadoop3
Spark 2.4.0 Fully Distributed Environment Setup
Download the installation package
http://spark.apache.org/downloads.html
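If you prefer to fetch the package from the command line, the 2.4.0 release can also be pulled from the Apache release archive (the URL below follows the standard archive layout and is an assumption, not taken from the original):
wget https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz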
Extract the installation package
tar zxf spark-2.4.0-bin-hadoop2.7.tgz -C /usr/hdp/
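A quick check (not part of the original steps) that the package landed where the later commands expect it:
ls /usr/hdp/
# should now contain spark-2.4.0-bin-hadoop2.7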
Environment configuration
# SET SPARK_HOME
export SPARK_HOME=/usr/hdp/spark-2.4.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
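Assuming the export lines above were appended to /etc/profile (the original does not say which file they go into), reload the shell environment and confirm the Spark scripts are on the PATH:
source /etc/profile
spark-submit --version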
Modifying the configuration files
Note: all of the following files are located in the conf directory under the Spark installation.
The spark-env.sh file
cp spark-env.sh.template spark-env.sh
Then edit spark-env.sh; the content to add is as follows:
# appended at the end of spark-env.sh (the template's license header and default comments are left as-is)
export JAVA_HOME=/opt/jdk1.8.0_161
export SCALA_HOME=/usr/scala/scala-2.11.12
export HADOOP_HOME=/usr/hdp/hadoop-2.8.3
export HADOOP_CONF_DIR=/usr/hdp/hadoop-2.8.3/etc/hadoop
export SPARK_MASTER_HOST=hadoop1
export SPARK_MASTER_IP=192.168.217.201
export SPARK_LOCAL_IP=192.168.217.201
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=
export SPARK_HOME=/usr/hdp/spark-2.4.0-bin-hadoop2.7
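Before moving on, a hypothetical sanity check (not in the original steps) that the paths referenced by these exports actually exist on this node:
ls -d /opt/jdk1.8.0_161 /usr/scala/scala-2.11.12 /usr/hdp/hadoop-2.8.3 /usr/hdp/spark-2.4.0-bin-hadoop2.7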
The slaves file
cp slaves.template slaves
Then edit the slaves file and add the following:
hadoop1
hadoop2
hadoop3
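Because hadoop1 itself is listed, a Worker will also run on the master node, which matches the jps output shown later. A quick hostname-resolution check (an extra step not in the original, relying on the /etc/hosts entries from the cluster-environment section):
for h in hadoop1 hadoop2 hadoop3; do ping -c 1 $h; done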
Copying the files
Copy the entire Spark directory to the other two nodes:
scp -r spark-2.4.0-bin-hadoop2.7/ root@192.168.217.202:/usr/hdp/
scp -r spark-2.4.0-bin-hadoop2.7/ root@192.168.217.203:/usr/hdp/
After the files have been copied, add the Spark environment variables on the other two nodes as well, and edit spark-env.sh on each of them so that export SPARK_LOCAL_IP=192.168.217.201 is changed to that node's own IP address.
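One way to apply that per-node change without opening an editor is sed over ssh; the commands below are an illustrative sketch, assuming passwordless root ssh between the nodes:
ssh root@192.168.217.202 "sed -i 's/^export SPARK_LOCAL_IP=.*/export SPARK_LOCAL_IP=192.168.217.202/' /usr/hdp/spark-2.4.0-bin-hadoop2.7/conf/spark-env.sh"
ssh root@192.168.217.203 "sed -i 's/^export SPARK_LOCAL_IP=.*/export SPARK_LOCAL_IP=192.168.217.203/' /usr/hdp/spark-2.4.0-bin-hadoop2.7/conf/spark-env.sh"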
Starting the cluster
Start the entire Hadoop cluster first, then go into Spark's sbin directory and run the start-all.sh script.
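Spelled out as commands, the start-up sequence looks roughly like this (script paths assume the installation directories used above; Hadoop is started with its usual start-dfs.sh and start-yarn.sh scripts):
# on hadoop1: bring up HDFS and YARN first
/usr/hdp/hadoop-2.8.3/sbin/start-dfs.sh
/usr/hdp/hadoop-2.8.3/sbin/start-yarn.sh
# then start the Spark standalone Master and the Workers listed in slaves
/usr/hdp/spark-2.4.0-bin-hadoop2.7/sbin/start-all.sh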
[root@hadoop1 hdp]# jps
NameNode
ResourceManager
SecondaryNameNode
Master
Jps
Worker

[root@hadoop2 conf]# jps
Jps
DataNode
Worker
NodeManager

[root@hadoop3 ~]# jps
DataNode
Jps
Worker
NodeManager
At this point you can open the relevant web pages, for example the Spark master web UI (by default http://hadoop1:8080), to confirm that the Master and the three Workers are up.
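A quick way to confirm that the cluster actually accepts jobs is to submit the SparkPi example that ships with the distribution against the standalone master (the default master port 7077 is assumed here):
spark-submit --class org.apache.spark.examples.SparkPi \
  --master spark://hadoop1:7077 \
  /usr/hdp/spark-2.4.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.4.0.jar 100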
坚壁清野