1) First, four Linux virtual machines were set up (root pwd: z****l*3).
Disabled the firewall.
Enabled the sshd service.
After installing vsftpd on Linux, the default configuration
allows anonymous login; there are two anonymous accounts:
Username: anonymous
Password: (empty)
Username: ftp
Password: ftp
Enabled the ftp service.
Configured JDK 1.8.
Set up passwordless SSH trust between the machines (I forgot the steps from last time; checked, and it still works).
Note: /etc/hosts must contain the hostnames (set this on every machine).
Note: when establishing trust, the first connection asks you to type "yes". Even if you have already connected by IP, the first connection after switching to a hostname prompts again, so finish by testing scp with the hostnames (a sketch of the host preparation follows below).
The Linux version is
Fedora release 22 (Twenty Two), with a graphical desktop.
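For reference, a minimal sketch of the host preparation described above. It assumes Fedora's systemd tooling and the four hostnames used later in this post (master, slave1, slave2, slave3); the IP-to-hostname mapping below is an assumption based on the addresses that appear in this post, so adjust it to your own machines.
# firewall / SELinux / sshd (run on every machine)
systemctl stop firewalld && systemctl disable firewalld
setenforce 0                      # plus SELINUX=disabled in /etc/selinux/config for a permanent change
systemctl enable sshd && systemctl start sshd
# /etc/hosts entries on every machine (assumed mapping):
#   192.168.1.19 master
#   192.168.1.20 slave1
#   192.168.1.21 slave2
#   192.168.1.22 slave3
# pre-accept the host keys so the first connection does not stop at the "yes" prompt
ssh-keyscan -H master slave1 slave2 slave3 >> ~/.ssh/known_hosts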
2)
Modify the configuration files, following this post:
https://www.cnblogs.com/lzxlfly/p/7221890.html
On startup
it kept prompting "The authenticity of host 'localhost (::1)' can't be established."
The fix is to run: ssh -o StrictHostKeyChecking=no root@localhost
Then open
http://192.168.1.19:50070 to check whether the startup succeeded.
Start HBase.
It prints the warning: ignoring option PermSize=128m; support was removed in 8.0
In hbase/conf/hbase-env.sh,
since the JDK in use is jdk1.8.0_65,
comment out the following:
# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
After starting, the HMaster process was missing on the master node.
Checking the .out log showed the cause: a logging jar in HBase conflicts with the one in Hadoop.
Fix: remove the conflicting logging jar from HBase (a sketch follows below).
Then restart and check the master node's hbase .out log again.
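A sketch of that cleanup, assuming the conflicting jar is the SLF4J log4j binding that both distributions bundle; the exact file name depends on your HBase version, so list the directory first:
ls /opt/hbase/hbase-1.2.1/lib/ | grep -i slf4j                     # see which logging binding HBase ships
mkdir -p /opt/hbase/hbase-1.2.1/lib/removed
mv /opt/hbase/hbase-1.2.1/lib/slf4j-log4j12-*.jar /opt/hbase/hbase-1.2.1/lib/removed/   # move it aside instead of deleting
Moving the jar aside rather than deleting it makes it easy to restore if the conflict turns out to be a different jar.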
Hadoop started successfully: the processes were all running and the web UI was reachable.
Then I started HBase.
It kept failing with: java.io.IOException: No FileSystem for scheme: hdfs
The advice I found was to copy
Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory.
After copying, it still failed, supposedly because this property was missing:
<property>
<name>fs.hdfs.impl</name>
<value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
<description>The FileSystem for hdfs: uris.</description>
</property>
After adding it, I deleted everything, re-extracted the archives and restarted. Still an error:
the class org.apache.hadoop.hdfs.DistributedFileSystem could not be found.
Searching again, I found
https://*.com/questions/26063359/org-apache-hadoop-fs-filesystem-provider-org-apache-hadoop-hdfs-distributedfile
One answer in that thread says the cause is that the hadoop-2.7.1 and hbase-1.2.1 I was using
ship different versions of hadoop-common-2.x.x.jar. The official site marks these two versions as compatible, but you have to resolve the jar conflict yourself.
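A quick way to see the mismatch and to unify the jars by hand, a sketch using the directory layout from this post (the same idea as the commented-out lines in HBase.cop.sh further down; the exact jar versions are whatever your tarballs ship):
ls /opt/hbase/hbase-1.2.1/lib/hadoop-common-*.jar                        # version bundled with HBase
ls /opt/hbase/hadoop-2.7.1/share/hadoop/common/hadoop-common-*.jar       # version shipped with Hadoop
# if they differ, replace HBase's copy with Hadoop's:
rm /opt/hbase/hbase-1.2.1/lib/hadoop-common-2.5.1.jar
cp /opt/hbase/hadoop-2.7.1/share/hadoop/common/hadoop-common-2.7.1.jar /opt/hbase/hbase-1.2.1/lib/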
So I decided to look for a pair of versions that would not need manual reconciliation.
I went for something fairly recent:
1) First I picked hbase-2.0.1-bin.tar.gz,
because the hadoop-common-2.7.4 it bundles is close to my existing hadoop-2.7.1.
2) Then I picked hadoop-2.7.4.tar.gz, since hadoop-common-2.7.4 is built from that release.
To save effort I did not want to touch the config files, so the directories are still /opt/hbase/hadoop-2.7.1 and /opt/hbase/hbase-1.2.1,
but what is actually installed now is hadoop-2.7.4 (hadoop-common-2.7.4) + hbase-2.0.1.
Then I repeated the steps above, and the startup succeeded.
To summarize:
1) Set up one virtual machine first: configure the FTP service, configure the SSH service, disable the firewall, disable SELinux, configure the JDK, and set HADOOP_HOME and HBASE_HOME. Upload the tarballs to the target directory via FTP.
2) Clone that virtual machine a few times, change each clone's virtual NIC MAC address, configure the IPs, and test that the physical machine and the virtual machines can ping each other.
A note on VMware network configuration.
There are a few options.
Method 1 (the convenient one): bridged mode.
This suits the case where your own router is the one connected to the broadband line, i.e. your router is already doing NAT for its own subnet.
In that case:
the virtual machines connect to a virtual switch, and the virtual switch connects to the real router;
the physical machine connects to the router through its real NIC;
traffic between the virtual machines and the physical machine is forwarded by the real router;
the gateway address is whatever is configured on the real router.
In this mode there is no extra configuration:
just treat each virtual machine as another physical machine.
Method 2: NAT mode (a virtual subnet).
Method 1 looks convenient, but if you take your laptop to another office and still want the physical network to exchange data with the virtual machines,
it gets awkward: the router there is usually not under your control, so you cannot change its gateway settings at will.
To join that router's network you would have to change your physical IP and every virtual machine IP to match its gateway,
which means changing a pile of configuration. Is there a better way?
Use NAT mode.
In NAT mode,
the physical machine and the virtual machines exchange traffic through a virtual router.
The virtual subnet's gateway (e.g. 192.168.8.1) is configured in the Virtual Network Editor; dynamic IP allocation (DHCP) is not needed, so simply disable it.
VM NIC <---> virtual subnet (router) <--> physical router
physical NIC <--> physical router
VM NIC <---> virtual subnet (router) <--> VMnet8
The NAT network is VMnet8 by default (it can be changed in the Virtual Network Editor).
In the Virtual Network Editor, edit VMnet8:
subnet IP is the network address of the subnet; to use 192.168.8.x as the virtual machine subnet, enter 192.168.8.0 here (read the trailing zero as a wildcard);
subnet mask is the netmask; enter 255.255.255.0.
Clicking "NAT Settings" opens a dialog where you can set
the gateway IP, i.e. the virtual router's gateway for the virtual machines: enter 192.168.8.1.
On the host you also need to set the IP of the virtual NIC VMnet8 (e.g. 192.168.8.100); this is the IP the host uses to talk to the virtual router.
Suppose two virtual machines are installed as 192.168.8.10 and 192.168.8.11 with gateway 192.168.8.1; they can ping 192.168.8.100, which is the host's VMnet8.
Suppose the host's wired NIC is 192.168.1.88 with gateway 192.168.1.1,
and there is another physical machine at 192.168.1.89.
With that, the layout above should be clear.
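For one of the VMs in this NAT example, a static network configuration might look like the following. This is a sketch: the interface name ens33, the file name and the DNS entry are assumptions, and Fedora/RHEL-style ifcfg files are assumed.
# /etc/sysconfig/network-scripts/ifcfg-ens33   (interface name is an assumption)
DEVICE=ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.8.10
NETMASK=255.255.255.0
GATEWAY=192.168.8.1
DNS1=192.168.8.1        # or whatever DNS the virtual router forwards to
The second VM would be identical except for IPADDR=192.168.8.11.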
One more question: can the virtual machines sit on the same 192.168.1.* segment as the host, with the virtual machines' gateway set to 192.168.1.1?
Yes, that actually works. For example,
switch the virtual machines to 192.168.1.10 and 192.168.1.11,
and in the Virtual Network Editor set VMnet8 to subnet IP 192.168.1.0 (matching the new segment), subnet mask 255.255.255.0, gateway IP 192.168.1.1.
Host VMnet8 IP: 192.168.1.100
Local physical NIC: 192.168.1.88
Another physical machine: 192.168.1.89
Now, on the local machine (the host),
ping 192.168.1.10 and 192.168.1.11: both reachable;
ping 192.168.1.89: also reachable.
But a new problem appears:
if you change 192.168.1.89 to 192.168.1.10 and then ping 192.168.1.10, what happens?
Each NIC finds a different machine... errors are likely.
The only options are to shut down the real machine, shut down the virtual machine, or change one of the IPs.
So it pays to plan ahead: give the virtual machines their own subnet when you install them and you save a lot of trouble.
Method 3:
host-only.
This mode is similar to Method 2; the difference is that NAT mode virtualizes a router, while host-only virtualizes a switch.
In host-only mode, the only requirement is that the host's VMnet1 address is on the same segment as the virtual machines.
When you configure VMnet1 in the Virtual Network Editor you will notice there is no "NAT Settings" button; that is where the gateway would be set, and a switch has no gateway.
In the host's network settings, set the VMnet1 IP to the same segment as the virtual machines and they can all ping each other.
Example: the physical NIC is 192.168.1.88,
VMnet1 is 192.168.3.66,
and the virtual machines are 192.168.3.10 and 192.168.3.11.
These three IPs can then ping each other, but 192.168.3.10 cannot ping www.baidu.com.
To let the virtual machines reach external sites such as www.baidu.com, open the host's network connections, right-click the adapter that actually has internet access, choose Properties -> Sharing, tick the option to allow sharing of its connection, and share it with VMnet1.
3) Set up passwordless login and test copying files between all the machines (a minimal sketch follows below).
The first three steps have little to do with Hadoop itself; they can be tested independently and there is plenty of material about them online.
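A minimal sketch of the passwordless setup, assuming the four hostnames used in this post and that ssh-copy-id is available (the appendix below shows the manual authorized_keys approach as well); run this on every machine:
ssh-keygen -t rsa                              # accept the defaults, empty passphrase
for h in master slave1 slave2 slave3; do
    ssh-copy-id root@$h                        # appends this machine's public key to $h's authorized_keys
done
scp /etc/hosts root@slave1:/tmp/hosts.copy     # quick test that passwordless scp works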
4) SSH into each machine and configure Hadoop and HBase.
The configuration is concentrated in /opt/hbase/hadoop-2.7.1/etc/hadoop and /opt/hbase/hbase-1.2.1/conf.
Under /opt/hbase/hadoop-2.7.1/etc/hadoop, modify or add:
core-site.xml
hdfs-site.xml
yarn-env.sh
hadoop-env.sh
mapred-site.xml
slaves
yarn-site.xml
Startup steps
Format: bin/hdfs namenode -format
Start / stop: sbin/start-all.sh, sbin/stop-all.sh
Check:
on master, run jps; you should see the ResourceManager, NameNode and SecondaryNameNode processes;
on each slave, run jps; you should see the DataNode and NodeManager processes.
If these five processes appear, Hadoop has started successfully.
Master status: http://master:50070
Cluster status: http://master:8088
Under /opt/hbase/hbase-1.2.1/conf, modify or add:
core-site.xml (same as above)
hbase-site.xml (same as above)
regionservers
hbase-env.sh
hdfs-site.xml
Start / stop: start-hbase.sh, stop-hbase.sh
Check:
jps
If HMaster and HQuorumPeer appear on master,
and HRegionServer and HQuorumPeer appear on the slaves, the startup succeeded.
HBase web UI: http://master:16010
Related scripts
hadoop.cop.sh
#ssh root@192.168.1.21 rm -rf /opt/hbase/hadoop-2.7.1/*
#ssh root@192.168.1.22 rm -rf /opt/hbase/hadoop-2.7.1/*
#ssh root@192.168.1.20 rm -rf /opt/hbase/hadoop-2.7.1/*
#tar -zvxf /opt/hbase/hadoop-2.7.1.tar.gz -C /opt/hbase/
scp core-site.xml root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hdfs-site.xml root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp mapred-site.xml root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp slaves root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-site.xml root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hadoop-env.sh root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-env.sh root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp core-site.xml root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hdfs-site.xml root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp mapred-site.xml root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp slaves root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-site.xml root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hadoop-env.sh root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-env.sh root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp core-site.xml root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hdfs-site.xml root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp mapred-site.xml root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp slaves root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-site.xml root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hadoop-env.sh root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-env.sh root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
HBase.cop.sh
# rm -rf /opt/hbase/hbase-1.2.1/*
#tar -zvxf /opt/hbase/hbase-1.2.1-bin.tar.gz -C /opt/hbase/
#tar -zvxf /opt/hbase/hbase-2.0.1-bin.tar.gz -C /opt/hbase/
#tar -zvxf /opt/hbase/hadoop-2.7.4.tar.gz -C /opt/hbase/
scp hbase-env.sh root@192.168.1.20:/opt/hbase/hbase-1.2.1/conf
scp hbase-site.xml root@192.168.1.20:/opt/hbase/hbase-1.2.1/conf
scp regionservers root@192.168.1.20:/opt/hbase/hbase-1.2.1/conf
scp core-site.xml root@192.168.1.20:/opt/hbase/hbase-1.2.1/conf
scp hdfs-site.xml root@192.168.1.20:/opt/hbase/hbase-1.2.1/conf
scp hbase-env.sh root@192.168.1.21:/opt/hbase/hbase-1.2.1/conf
scp hbase-site.xml root@192.168.1.21:/opt/hbase/hbase-1.2.1/conf
scp regionservers root@192.168.1.21:/opt/hbase/hbase-1.2.1/conf
scp core-site.xml root@192.168.1.21:/opt/hbase/hbase-1.2.1/conf
scp hdfs-site.xml root@192.168.1.21:/opt/hbase/hbase-1.2.1/conf
scp hbase-env.sh root@192.168.1.22:/opt/hbase/hbase-1.2.1/conf
scp hbase-site.xml root@192.168.1.22:/opt/hbase/hbase-1.2.1/conf
scp regionservers root@192.168.1.22:/opt/hbase/hbase-1.2.1/conf
scp core-site.xml root@192.168.1.22:/opt/hbase/hbase-1.2.1/conf
scp hdfs-site.xml root@192.168.1.22:/opt/hbase/hbase-1.2.1/conf
#rm /opt/hbase/hbase-1.2.1/lib/hadoop-common-2.5.1.jar
#cp -r /opt/hbase/hadoop-2.7.1/share/hadoop/common/hadoop-common-2.7.1.jar /opt/hbase/hbase-1.2.1/lib
Full configuration files
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name> <!-- URI of the NameNode -->
<value>hdfs://master:9000</value>
</property>
<!-- <property>
<name>fs.hdfs.impl</name>
<value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
<description>The FileSystem for hdfs: uris.</description>
</property> -->
<property>
<name>hadoop.tmp.dir</name> <!-- directory for Hadoop temporary files -->
<value>/opt/hbase/hadoop-2.7.1/temp</value>
</property>
</configuration>
hadoop-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol. Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
if [ "$HADOOP_CLASSPATH" ]; then
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
else
export HADOOP_CLASSPATH=$f
fi
done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol. This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Advanced Users Only!
###

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
# the user that will run the hadoop daemons. Otherwise there is the
# potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property> <!-- local filesystem path where the NameNode persistently stores the namespace and transaction logs -->
<name>dfs.namenode.name.dir</name>
<value>/opt/hbase/hadoop-2.7.1/dfs/name</value> <!-- no need to create the directory in advance; it is created automatically -->
</property>
<property> <!-- local filesystem path where the DataNode stores its blocks -->
<name>dfs.datanode.data.dir</name>
<value>/opt/hbase/hadoop-2.7.1/dfs/data</value>
</property>
<property> <!-- replication factor; must not exceed the number of machines in the cluster; default is 3 -->
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property> <!-- set to true so HDFS can be browsed in a browser via IP+port -->
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property> <!-- MapReduce runs on the YARN framework, so set this to yarn -->
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property> <!-- job history server, for viewing MapReduce job records -->
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
slaves
master
slave1
slave2
slave3
yarn-env.sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# User for YARN daemons
export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91

# resolve links - $0 may be a softlink
export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"

# some Java parameters
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
if [ "$JAVA_HOME" != "" ]; then
#echo "run java in $JAVA_HOME"
JAVA_HOME=$JAVA_HOME
fi

if [ "$JAVA_HOME" = "" ]; then
echo "Error: JAVA_HOME is not set."
exit
fi

JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx1000m

# For setting YARN specific HEAP sizes please use this
# Parameter and set appropriately
# YARN_HEAPSIZE=

# check envvars which might override default args
if [ "$YARN_HEAPSIZE" != "" ]; then
JAVA_HEAP_MAX="-Xmx""$YARN_HEAPSIZE""m"
fi

# Resource Manager specific parameters

# Specify the max Heapsize for the ResourceManager using a numerical value
# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set
# the value to 1000.
# This value will be overridden by an Xmx setting specified in either YARN_OPTS
# and/or YARN_RESOURCEMANAGER_OPTS.
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_RESOURCEMANAGER_HEAPSIZE=

# Specify the max Heapsize for the timeline server using a numerical value
# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set
# the value to 1000.
# This value will be overridden by an Xmx setting specified in either YARN_OPTS
# and/or YARN_TIMELINESERVER_OPTS.
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_TIMELINESERVER_HEAPSIZE=

# Specify the JVM options to be used when starting the ResourceManager.
# These options will be appended to the options specified as YARN_OPTS
# and therefore may override any similar flags set in YARN_OPTS
#export YARN_RESOURCEMANAGER_OPTS=

# Node Manager specific parameters

# Specify the max Heapsize for the NodeManager using a numerical value
# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set
# the value to 1000.
# This value will be overridden by an Xmx setting specified in either YARN_OPTS
# and/or YARN_NODEMANAGER_OPTS.
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_NODEMANAGER_HEAPSIZE=

# Specify the JVM options to be used when starting the NodeManager.
# These options will be appended to the options specified as YARN_OPTS
# and therefore may override any similar flags set in YARN_OPTS
#export YARN_NODEMANAGER_OPTS=

# so that filenames w/ spaces are handled correctly in loops below
IFS=

# default log directory & file
if [ "$YARN_LOG_DIR" = "" ]; then
YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
fi
if [ "$YARN_LOGFILE" = "" ]; then
YARN_LOGFILE='yarn.log'
fi

# default policy file for service-level authorization
if [ "$YARN_POLICYFILE" = "" ]; then
YARN_POLICYFILE="hadoop-policy.xml"
fi

# restore ordinary behaviour
unset IFS

YARN_OPTS="$YARN_OPTS -Dhadoop.log.dir=$YARN_LOG_DIR"
YARN_OPTS="$YARN_OPTS -Dyarn.log.dir=$YARN_LOG_DIR"
YARN_OPTS="$YARN_OPTS -Dhadoop.log.file=$YARN_LOGFILE"
YARN_OPTS="$YARN_OPTS -Dyarn.log.file=$YARN_LOGFILE"
YARN_OPTS="$YARN_OPTS -Dyarn.home.dir=$YARN_COMMON_HOME"
YARN_OPTS="$YARN_OPTS -Dyarn.id.str=$YARN_IDENT_STRING"
YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
YARN_OPTS="$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
YARN_OPTS="$YARN_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
fi
YARN_OPTS="$YARN_OPTS -Dyarn.policy.file=$YARN_POLICYFILE"
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91
yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property> <!-- auxiliary service run by the NodeManager, needed to run MapReduce -->
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property> <!-- address the ResourceManager exposes to clients -->
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property> <!-- address the ResourceManager exposes to ApplicationMasters -->
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property> <!-- address the ResourceManager exposes to NodeManagers -->
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property> <!-- address the ResourceManager exposes to administrators -->
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property> <!-- the ResourceManager's external web address, viewable in a browser -->
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
hbase-env.sh
#
#/**
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements. See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership. The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License. You may obtain a copy of the License at
# *
# * http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */

# Set environment variables here.

# This script sets variables multiple times over the course of starting an hbase process,
# so try to keep things idempotent unless you want to take an even deeper look
# into the startup scripts (bin/hbase, etc.)

# The java implementation to use.  Java 1.7+ required.
# export JAVA_HOME=/usr/java/jdk1.6.0/

# Extra Java CLASSPATH elements.  Optional.
# export HBASE_CLASSPATH=

# The maximum amount of heap to use. Default is left to JVM default.
# export HBASE_HEAPSIZE=1G

# Uncomment below if you intend to use off heap cache. For example, to allocate 8G of
# offheap, set the value to "8G".
# export HBASE_OFFHEAPSIZE=1G

# Extra Java runtime options.
# Below are what we set by default. May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
#export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
#export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

# Uncomment one of the below three options to enable java garbage collection logging for the server-side processes.

# This enables basic gc logging to the .out file.
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# Uncomment one of the below three options to enable java garbage collection logging for the client processes.

# This enables basic gc logging to the .out file.
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# See the package documentation for org.apache.hadoop.hbase.io.hfile for other configurations
# needed setting up off-heap block caching.

# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
# NOTE: HBase provides an alternative JMX implementation to fix the random ports issue, please see JMX
# section in HBase Reference Guide for instructions.

# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
# export HBASE_REST_OPTS="$HBASE_REST_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10105"

# File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

# Uncomment and adjust to keep all the Region Server pages mapped to be memory resident
#HBASE_REGIONSERVER_MLOCK=true
#HBASE_REGIONSERVER_UID="hbase"

# File naming hosts on which backup HMaster will run.  $HBASE_HOME/conf/backup-masters by default.
# export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters

# Extra ssh options.  Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

# Where log files are stored.  $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs

# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true

# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the
# RFA appender. Please refer to the log4j.properties file to see more details on this appender.
# In case one needs to do log rolling on a date change, one should set the environment property
# HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
# For example:
# HBASE_ROOT_LOGGER=INFO,DRFA
# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as
# DRFA doesn't put any cap on the log size. Please refer to HBase-5655 for more context.
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91
hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-->
<configuration>
<property>
<name>hbase.rootdir</name> <!-- directory where HBase stores its data -->
<value>hdfs://master:9000/opt/hbase/hbase_db</value> <!-- the port must match Hadoop's fs.defaultFS port -->
</property>
<property>
<name>hbase.cluster.distributed</name> <!-- whether this is a distributed deployment -->
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name> <!-- list of ZooKeeper quorum hosts -->
<value>master,slave1,slave2,slave3</value>
</property>
<property> <!-- where ZooKeeper stores its configuration, logs, etc. -->
<name>hbase.zookeeper.property.dataDir</name>
<value>/opt/hbase/zookeeper</value>
</property>
</configuration>
regionservers
master
slave1
slave2
slave3
Finally, thanks to
未知的风fly, https://www.cnblogs.com/lzxlfly/p/7221890.html
Hadoop 2.7.3 + HBase 1.2.6 fully distributed installation and deployment
Basic steps for installing and deploying Hadoop:
1. Install the JDK and configure the environment variables.
The JDK can be downloaded from the internet; the environment variables are as follows.
Edit the file vim /etc/profile and add:
export JAVA_HOME=/opt/java_environment/jdk1.7.0_80   (use your own JDK installation path)
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
Run source /etc/profile to make the configuration take effect.
Run java, javac and java -version to check that the JDK environment variables are configured correctly.
2. On Linux, at least 3 machines are needed: one as master and 2 (or more) as slaves.
Here I use 3 machines as an example, running CentOS 6.5 x64.
master 192.168.172.71
slave1 192.168.172.72
slave2 192.168.172.73
3. Configure the hostname and hosts file on all machines.
(1) To change the hostname, edit vim /etc/sysconfig/network
and change HOSTNAME on the master, here to HOSTNAME=master;
on the slaves use HOSTNAME=slave1 and HOSTNAME=slave2. This takes effect after a reboot.
Alternatively, run: hostname <name>. The change takes effect immediately without a reboot,
but it is lost after a reboot and the old name comes back.
(2) To change the hosts file, edit vim /etc/hosts and add:
192.168.172.71 master
192.168.172.72 slave1
192.168.172.73 slave2
The hosts entries do not have to match the hostnames; here they are kept the same for easy memorization.
4. Configure passwordless SSH login between all machines.
(1) CentOS does not enable passwordless SSH login by default. Edit vim /etc/ssh/sshd_config
and uncomment the following two lines to enable key-based authentication:
#RSAAuthentication yes
#PubkeyAuthentication yes
If you are operating as root, also uncomment #PermitRootLogin yes to allow root logins.
(2) Run ssh-keygen -t rsa to generate a key, pressing Enter at every prompt.
This creates three files under /root/.ssh: authorized_keys, id_rsa.pub and id_rsa.
Note that for all machines to log in to each other without passwords, this must be done on every machine.
(3) Next, on the master server, merge the public keys into the authorized_keys file.
Enter the /root/.ssh directory and run the following commands:
cat id_rsa.pub >> authorized_keys            (merges master's public key into authorized_keys)
ssh root@192.168.172.72 cat ~/.ssh/id_rsa.pub>> authorized_keys
ssh root@192.168.172.73 cat ~/.ssh/id_rsa.pub>> authorized_keys
(the two lines above merge slave1's and slave2's public keys into authorized_keys)
When that is done, copy authorized_keys to slave1 and slave2 with the remote copy command:
scp authorized_keys 192.168.172.72:/root/.ssh/
scp authorized_keys 192.168.172.73:/root/.ssh/
It is best to run chmod 600 authorized_keys on every machine,
so that the current user has read/write permission on authorized_keys.
After copying, run service sshd restart on every machine to restart the SSH service.
Then, on each machine, run ssh 192.168.172.xx to test that you can connect to the other two machines without entering a password.
5. Configure the Hadoop environment variables: HADOOP_HOME, hadoop-env.sh, yarn-env.sh.
(1) To configure HADOOP_HOME, edit vim /etc/profile and add the following:
export HADOOP_HOME=/opt/hbase/hadoop-2.7.3   (the Hadoop installation path)
export PATH=$PATH:$HADOOP_HOME/sbin
export PATH=$PATH:$HADOOP_HOME/bin
(The following two lines are best added as well; without them, starting Hadoop and HBase prints warnings about the native lib not being loaded.)
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
(2) Configure hadoop-env.sh and yarn-env.sh in the Hadoop installation directory.
Edit vim etc/hadoop/hadoop-env.sh
and add export JAVA_HOME=/opt/java_environment/jdk1.7.0_80   (the JDK installation path).
Edit vim etc/hadoop/yarn-env.sh
and add export JAVA_HOME=/opt/java_environment/jdk1.7.0_80   (the JDK installation path).
Save and exit.
6. Configure the basic xml files: core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml.
(1) Configure core-site.xml; in the Hadoop installation directory, edit vim etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name> <!-- URI of the NameNode -->
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name> <!-- directory for Hadoop temporary files -->
<value>/opt/hbase/hadoop-2.7.3/temp</value>
</property>
</configuration>
(2) Configure hdfs-site.xml; in the Hadoop installation directory, edit vim etc/hadoop/hdfs-site.xml
<configuration>
<property> <!-- local filesystem path where the NameNode persistently stores the namespace and transaction logs -->
<name>dfs.namenode.name.dir</name>
<value>/opt/hbase/hadoop-2.7.3/dfs/name</value>
<!-- no need to create the directory in advance; it is created automatically -->
</property>
<property> <!-- local filesystem path where the DataNode stores its blocks -->
<name>dfs.datanode.data.dir</name>
<value>/opt/hbase/hadoop-2.7.3/dfs/data</value>
</property>
<property> <!-- replication factor; must not exceed the number of machines in the cluster; default is 3 -->
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property> <!-- set to true so HDFS can be browsed in a browser via IP+port -->
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
(3) Configure mapred-site.xml; in the Hadoop installation directory, edit vim etc/hadoop/mapred-site.xml
<configuration>
<property> <!-- MapReduce runs on the YARN framework, so set this to yarn -->
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property> <!-- job history server, for viewing MapReduce job records -->
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
(4) Configure yarn-site.xml; in the Hadoop installation directory, edit vim etc/hadoop/yarn-site.xml
<configuration>
<property> <!-- auxiliary service run by the NodeManager, needed to run MapReduce -->
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property> <!-- address the ResourceManager exposes to clients -->
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property> <!-- address the ResourceManager exposes to ApplicationMasters -->
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property> <!-- address the ResourceManager exposes to NodeManagers -->
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property> <!-- address the ResourceManager exposes to administrators -->
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property> <!-- the ResourceManager's external web address, viewable in a browser -->
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
7. Configure the slaves file.
In the Hadoop installation directory, edit vim etc/hadoop/slaves,
remove the default localhost, add slave1 and slave2, then save and exit.
8. Use the remote copy command scp to copy the configured Hadoop to the corresponding location on each node:
scp -r /opt/hadoop-2.7.3 192.168.172.72:/opt/hadoop-2.7.3
scp -r /opt/hadoop-2.7.3 192.168.172.73:/opt/hadoop-2.7.3
9. Starting and stopping Hadoop.
(1) Start Hadoop on the master server; the slave nodes start automatically. In the Hadoop directory,
run bin/hdfs namenode -format to format HDFS,
then run sbin/start-all.sh to start everything
(you can also start the pieces separately with sbin/start-dfs.sh and sbin/start-yarn.sh).
On master, run jps; you should see the ResourceManager,
NameNode and SecondaryNameNode processes.
On the slaves, run jps; you should see the DataNode and NodeManager processes.
If these five processes appear, Hadoop has started successfully.
(2) Next configure the local hosts file: edit C:\Windows\System32\drivers\etc\hosts and add
192.168.172.71 master
192.168.172.72 slave1
192.168.172.73 slave2
Open http://master:50070 in a browser to view the master status,
and http://192.168.172.72:8088 to view the cluster status.
(3) To stop Hadoop, go into the Hadoop directory and run sbin/stop-all.sh;
this stops the Hadoop processes on the master and the slaves.
Basic steps for installing and deploying HBase:
1. On top of the Hadoop configuration, configure the HBASE_HOME environment variable and hbase-env.sh.
Edit vim /etc/profile and add:
export HBASE_HOME=/opt/hbase-1.2.6
export PATH=$HBASE_HOME/bin:$PATH
Edit vim /opt/hbase-1.2.6/conf/hbase-env.sh and add:
export JAVA_HOME=/opt/java_environment/jdk1.7.0_80   (the JDK installation path)
Uncomment # export HBASE_MANAGES_ZK=true to use HBase's bundled ZooKeeper.
2. Configure the hbase-site.xml file.
<configuration>
<property>
<name>hbase.rootdir</name> <!-- directory where HBase stores its data -->
<value>hdfs://master:9000/opt/hbase/hbase_db</value>
<!-- the port must match Hadoop's fs.defaultFS port -->
</property>
<property>
<name>hbase.cluster.distributed</name> <!-- whether this is a distributed deployment -->
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name> <!-- list of ZooKeeper quorum hosts -->
<value>master,slave1,slave2</value>
</property>
<property> <!-- where ZooKeeper stores its configuration, logs, etc. -->
<name>hbase.zookeeper.property.dataDir</name>
<value>/opt/hbase/zookeeper</value>
</property>
</configuration>
3. Configure regionservers.
Edit vim /opt/hbase-1.2.6/conf/regionservers, remove the default localhost,
add slave1 and slave2, then save and exit.
Then copy the HBase configured on master, using the remote copy command
scp -r /opt/hbase-1.2.6 192.168.172.72/73:/opt/hbase-1.2.6
to the corresponding location on slave1 and slave2.
4. Starting and stopping HBase.
(1) With Hadoop already running, run start-hbase.sh; it finishes starting after a few seconds.
Run jps to check whether the processes started. If HMaster and HQuorumPeer appear on master,
and HRegionServer and HQuorumPeer appear on the slaves, the startup succeeded.
(2) Run hbase shell to enter the HBase command line.
Running status shows output like the following: 1 master, 2 servers, all 3 machines started successfully.
1 active master, 0 backup masters, 2 servers, 0 dead, 2.0000 average load
(3) Next configure the local hosts file (skip this if you already did it earlier):
edit C:\Windows\System32\drivers\etc\hosts and add
192.168.172.71 master
192.168.172.72 slave1
192.168.172.73 slave2
Open http://master:16010 in a browser to see the HBase web UI.
(4) To stop HBase, run stop-hbase.sh; HBase stops after a few seconds.