Troubleshooting notes for an HBase exception: org.apache.hadoop.ipc.RemoteException(java.io.IOException)

Date: 2021-03-07 08:27:30

ERROR: Can't get master address from ZooKeeper; znode data == null. Be aware that this is only the first-level symptom; the real problem is:

File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1)
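Before changing anything, it helps to confirm what that message is really saying: zero DataNodes are registered with the NameNode. A quick check, assuming the Hadoop client tools are on the PATH (this command is not in the original notes):

hdfs dfsadmin -report    # shows capacity plus live/dead DataNode counts; here it would report 0 live DataNodes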

Many posts online suggest two fixes (sketched as shell commands after this list):

  • stop/start: simply restart HBase
  • reformat HDFS with hdfs namenode -format; do not reformat the Hadoop NameNode lightly, since formatting wipes the existing HDFS metadata
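For reference, a rough sketch of those two suggestions, assuming the standard scripts that ship with HBase and Hadoop; $HBASE_HOME and $HADOOP_HOME stand in for the actual install paths on your machine:

# 1) restart HBase
$HBASE_HOME/bin/stop-hbase.sh
$HBASE_HOME/bin/start-hbase.sh

# 2) reformat the NameNode -- destroys all HDFS metadata, only acceptable on a throwaway test cluster
$HADOOP_HOME/bin/hdfs namenode -format

In my case neither of these helped, as described below.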



I tried the approaches above and spent an hour or two hunting for the cause without success; in the end the real problem was hiding in each application's own logs.
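These are the kinds of places to look, a sketch assuming the default log locations under each install directory (exact file names include the user and hostname, and the ZooKeeper log location varies by distribution):

tail -n 200 $HBASE_HOME/logs/hbase-*-master-*.log      # HBase master log (where the trace below came from)
tail -n 200 $HADOOP_HOME/logs/hadoop-*-namenode-*.log  # HDFS NameNode log
tail -n 200 $HADOOP_HOME/logs/hadoop-*-datanode-*.log  # HDFS DataNode log
cat zookeeper.out                                      # ZooKeeper log, usually in the directory where zkServer.sh was started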

Starting HBase you will run into plenty of exception pitfalls, but please don't panic: most of the pitfalls exist because we are not yet familiar with the system. I spent a whole morning on this one. Back in May of this year I remember being able to start standalone HBase, Hadoop, and ZooKeeper; because my Alibaba Cloud server was needed for something else, I shut all three services down. When I tried to start them again in September, they would no longer come up. This is what the HBase master log showed:


org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1622)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3351)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:683)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:214)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:495)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)

        at org.apache.hadoop.ipc.Client.call(Client.java:1472)
        at org.apache.hadoop.ipc.Client.call(Client.java:1409)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:413)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
        at com.sun.proxy.$Proxy19.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1812)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1608)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:772)
2018-09-22 10:50:56,289 INFO  [app:60000.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.
2018-09-22 10:50:56,290 INFO  [master/app.server/172.16.216.42:60000] regionserver.HRegionServer: Stopping infoServer
2018-09-22 10:50:56,320 INFO  [master/app.server/172.16.216.42:60000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010

I opened the data cache directories used by HBase, Hadoop, and ZooKeeper, and they still held the data from May. The nasty part is that restarting does not overwrite the old files by itself. The lesson: from now on, do not use kill to shut these processes down.

[root@app hbase-1.2.0-cdh5.10.0]# cd data/tmp/
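Checking the modification times was enough to confirm the staleness; a minimal sketch run inside that directory (the data/tmp path is specific to my configuration):

ls -lt | head    # newest files first; in my case everything was still dated May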

Now restart HBase, Hadoop, and ZooKeeper (one possible stop/start sequence is sketched below) and go back into the hbase shell client.
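A graceful stop/start cycle (rather than kill), a sketch assuming the standard scripts in each component's bin/sbin directory, with $ZOOKEEPER_HOME, $HADOOP_HOME, and $HBASE_HOME standing in for my install paths:

# stop in dependency order: HBase first, then HDFS, then ZooKeeper
$HBASE_HOME/bin/stop-hbase.sh
$HADOOP_HOME/sbin/stop-dfs.sh
$ZOOKEEPER_HOME/bin/zkServer.sh stop

# start in the reverse order: ZooKeeper, then HDFS, then HBase
$ZOOKEEPER_HOME/bin/zkServer.sh start
$HADOOP_HOME/sbin/start-dfs.sh
$HBASE_HOME/bin/start-hbase.sh

With everything back up, the shell session looked like this: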

[root@app bin]# ./hbase shell
2018-09-22 11:12:00,809 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2018-09-22 11:12:03,263 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.0-cdh5.10.0, rUnknown, Fri Jan 20 12:18:02 PST 2017

hbase(main):001:0> list
TABLE                                                                                                                                                                                        
0 row(s) in 0.3760 seconds

=> []
hbase(main):002:0> 

Finally, one more reminder: use jps to check whether all of the processes you just started are actually running. Mine is a standalone (single-node) setup, so the list below is only for reference.

[root@app tmp]# jps
4336 Jps
2529 HRegionServer
2418 HMaster
2276 QuorumPeerMain
1947 DataNode
2109 SecondaryNameNode
2847 Main
1823 NameNode
[root@app tmp]#