Problem: Invalid host name: local host is: (unknown); destination host is: "master":9000
Error log:
2017-07-13 21:26:45,915 FATAL [master:] : Failed to become active master
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
	at ...
Caused by: java.net.UnknownHostException
	... 32 more
2017-07-13 21:26:45,924 FATAL [master:] : Unhandled exception. Starting shutdown.
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "master":9000; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
	at ...
Caused by: java.net.UnknownHostException
	... 32 more
2017-07-13 21:26:45,925 INFO [master:] : STOPPED: Unhandled exception. Starting shutdown.
Additional details
1. Firewalls are disabled on all nodes, and everything runs with root privileges.
2. Hadoop starts normally: jps shows the daemons are up, and the web UIs on ports 50070 and 8088 open in a browser without any problem.
3. ZooKeeper starts normally: jps shows it is running.
4. All Hadoop-related jars under hbase/lib have been deleted, the Hadoop jars from hadoop/share have been copied into hbase/lib, and aws-java-sdk-core-1.11. and aws-java-sdk-s3-1.11. have been added.
Versions
1. Hadoop 2.7.2
2. HBase 1.2.6
3. ZooKeeper 3.4.2
/etc/hosts configuration file
192.168.1.151 master
192.168.1.152 slave1
192.168.1.153 slave2
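The exception above is a name-resolution failure: "local host is: (unknown)" means the node cannot resolve its own hostname, and the quoted "master":9000 means the destination name does not resolve either. A minimal check outside of HBase, assuming only a JDK on the master node (the class name HostCheck is just for illustration):

import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostCheck {
    public static void main(String[] args) throws UnknownHostException {
        // The local hostname must resolve; if not, Hadoop reports "local host is: (unknown)"
        System.out.println("local host -> " + InetAddress.getLocalHost());
        // "master" is the destination taken from fs.defaultFS / hbase.rootdir
        System.out.println("master     -> " + InetAddress.getByName("master"));
    }
}

Both lines should print the addresses listed in /etc/hosts above; an UnknownHostException here reproduces the HBase failure without involving HBase at all.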
Hadoop configuration (the four <configuration> blocks below are core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml, in that order)
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
  </property>
</configuration>
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/hdf/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/hdf/name</value>
    <final>true</final>
  </property>
</configuration>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
slaves
slave1
slave2
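To confirm that a plain HDFS client can reach hdfs://master:9000 with this configuration, independent of HBase, here is a minimal sketch, assuming the Hadoop 2.7.2 client jars are on the classpath (HdfsCheck is a made-up name):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same address as fs.defaultFS above; HBase resolves hbase.rootdir the same way
        conf.set("fs.defaultFS", "hdfs://master:9000");
        FileSystem fs = FileSystem.get(conf);
        System.out.println("HDFS root exists: " + fs.exists(new Path("/")));
        fs.close();
    }
}

If this throws the same UnknownHostException, the problem is host resolution rather than anything HBase-specific.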
ZooKeeper configuration (zoo.cfg)
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper/zookeeper-data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.0=master:2888:3888
server.1=slave1:2888:3888
server.2=slave2:2888:3888
myid: the three hosts use 0, 1 and 2 respectively (matching server.0, server.1 and server.2 above)
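Since hbase.zookeeper.quorum points at this ensemble, it can also be checked from Java; a rough sketch assuming the zookeeper-3.4.2 jar is on the classpath (ZkCheck is a made-up name):

import org.apache.zookeeper.ZooKeeper;

public class ZkCheck {
    public static void main(String[] args) throws Exception {
        // Connect string matches the quorum used in hbase-site.xml below
        ZooKeeper zk = new ZooKeeper("master:2181,slave1:2181,slave2:2181", 30000, event -> {});
        Thread.sleep(2000);                 // give the client a moment to connect
        System.out.println(zk.getState());  // expect CONNECTED
        zk.close();
    }
}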
HBase configuration (hbase-site.xml)
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>hdfs://master:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/zookeeper/zookeeper-data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
regionservers
master
slave1
slave2
Solution:
In the configuration file under /usr/local/hadoop/etc/hadoop/, specify the host IP by adding the following:
<property>
<name></name>
<value>192.168.1.151</value>
</property>
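After adding the property and restarting the cluster, one way to confirm that an active HMaster is running again is a client connection through the same quorum; a minimal sketch, assuming the HBase 1.2.6 client jars are on the classpath (HBaseCheck is a made-up name):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Quorum from hbase-site.xml above
        conf.set("hbase.zookeeper.quorum", "master,slave1,slave2");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // This call only succeeds when an active master is up
            System.out.println("tables: " + admin.listTableNames().length);
        }
    }
}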