I get an error when I run the HBase shell

Date: 2022-12-22 00:55:45

My local environment: OS X 10.9.2, HBase 0.98.0, Java 1.6

conf/hbase-site.xml

 <property>
     <name>hbase.rootdir</name>
     <!--<value>hdfs://127.0.0.1:9000/hbase</value> need to run dfs -->
     <value>file:///Users/apple/Documents/tools/hbase-rootdir/hbase</value>
 </property>

 <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/Users/apple/Documents/tools/hbase-zookeeper/zookeeper</value>
 </property> 

conf/hbase-env.sh

export JAVA_HOME=$(/usr/libexec/java_home -d 64 -v 1.6)
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
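As a quick sanity check (a minimal sketch, not part of the original setup), you can verify that the path hbase-env.sh exports actually contains a java binary before starting HBase; the check_java_home helper name is made up for illustration:

```shell
# Hypothetical helper: confirm a JAVA_HOME-style path really contains a
# java executable before HBase tries to use it.
check_java_home() {
  if [ -x "$1/bin/java" ]; then
    echo "java found under $1"
  else
    echo "no java binary under $1"
  fi
}

check_java_home "${JAVA_HOME:-/no/java/home/set}"
```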

And when I ran

> list

in the HBase shell, I got the following errors:

2014-03-29 10:25:53.412 java[2434:1003] Unable to load realm info from SCDynamicStore
2014-03-29 10:25:53,416 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-03-29 10:26:14,470 ERROR [main] zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts
2014-03-29 10:26:14,471 WARN  [main] zookeeper.ZKUtil: hconnection-0x5e15e68d, quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:199)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:479)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:857)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:662)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:414)
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:393)
    at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:274)
    at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:183)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.jruby.javasupport.JavaConstructor.newInstanceDirect(JavaConstructor.java:275)
    at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:91)
    at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:178)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322)
    at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:182)
    at org.jruby.java.proxies.ConcreteJavaProxy$2.call(ConcreteJavaProxy.java:48)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322)
    at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:182)
    at org.jruby.RubyClass.newInstance(RubyClass.java:829)
        ...
at Users.apple.Documents.tools.hbase_minus_0_dot_98_dot_0_minus_hadoop2.bin.hirb.block_2$RUBY$start(/Users/apple/Documents/tools/hbase-0.98.0-hadoop2/bin/hirb.rb:185)
    at Users$apple$Documents$tools$hbase_minus_0_dot_98_dot_0_minus_hadoop2$bin$hirb$block_2$RUBY$start.call(Users$apple$Documents$tools$hbase_minus_0_dot_98_dot_0_minus_hadoop2$bin$hirb$block_2$RUBY$start:65535)
    at org.jruby.runtime.CompiledBlock.yield(CompiledBlock.java:112)
    at org.jruby.runtime.CompiledBlock.yield(CompiledBlock.java:95)
    at org.jruby.runtime.Block.yield(Block.java:130)
    at org.jruby.RubyContinuation.enter(RubyContinuation.java:106)
    at org.jruby.RubyKernel.rbCatch(RubyKernel.java:1212)
    at org.jruby.RubyKernel$s$1$0$rbCatch.call(RubyKernel$s$1$0$rbCatch.gen:65535)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322)
    at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178)
    at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:187)
    at Users.apple.Documents.tools.hbase_minus_0_dot_98_dot_0_minus_hadoop2.bin.hirb.method__5$RUBY$start(/Users/apple/Documents/tools/hbase-0.98.0-hadoop2/bin/hirb.rb:184)
    at Users$apple$Documents$tools$hbase_minus_0_dot_98_dot_0_minus_hadoop2$bin$hirb$method__5$RUBY$start.call(Users$apple$Documents$tools$hbase_minus_0_dot_98_dot_0_minus_hadoop2$bin$hirb$method__5$RUBY$start:65535)
    at org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:203)
    at org.jruby.internal.runtime.methods.CompiledMethod.call(CompiledMethod.java:255)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
    at Users.apple.Documents.tools.hbase_minus_0_dot_98_dot_0_minus_hadoop2.bin.hirb.__file__(/Users/apple/Documents/tools/hbase-0.98.0-hadoop2/bin/hirb.rb:190)
    at Users.apple.Documents.tools.hbase_minus_0_dot_98_dot_0_minus_hadoop2.bin.hirb.load(/Users/apple/Documents/tools/hbase-0.98.0-hadoop2/bin/hirb.rb)
    at org.jruby.Ruby.runScript(Ruby.java:697)
    at org.jruby.Ruby.runScript(Ruby.java:690)
    at org.jruby.Ruby.runNormally(Ruby.java:597)
    at org.jruby.Ruby.runFromMain(Ruby.java:446)
    at org.jruby.Main.doRunFromMain(Main.java:369)
    at org.jruby.Main.internalRun(Main.java:258)
    at org.jruby.Main.run(Main.java:224)
    at org.jruby.Main.run(Main.java:208)
    at org.jruby.Main.main(Main.java:188)
2014-03-29 10:28:21,137 ERROR [main] client.HConnectionManager$HConnectionImplementation: Can't get connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase

ERROR: KeeperErrorCode = ConnectionLoss for /hbase

And my /etc/hosts looks right:

127.0.0.1   localhost
255.255.255.255 broadcasthost
::1             localhost 
fe80::1%lo0 localhost
127.0.0.1 activate.adobe.com
127.0.0.1 practivate.adobe.com
127.0.0.1 ereg.adobe.com
127.0.0.1 activate.wip3.adobe.com
127.0.0.1 wip3.adobe.com
127.0.0.1 3dns-3.adobe.com
127.0.0.1 3dns-2.adobe.com
127.0.0.1 adobe-dns.adobe.com
127.0.0.1 adobe-dns-2.adobe.com
127.0.0.1 adobe-dns-3.adobe.com
127.0.0.1 ereg.wip3.adobe.com
127.0.0.1 activate-sea.adobe.com
127.0.0.1 wwis-dubc1-vip60.adobe.com
127.0.0.1 activate-sjc0.adobe.com
127.0.0.1 adobe.activate.com
127.0.0.1 209.34.83.73:443
127.0.0.1 209.34.83.73:43
127.0.0.1 209.34.83.73
127.0.0.1 209.34.83.67:443
127.0.0.1 209.34.83.67:43
127.0.0.1 209.34.83.67
127.0.0.1 ood.opsource.net
127.0.0.1 CRL.VERISIGN.NET
127.0.0.1 199.7.52.190:80
127.0.0.1 199.7.52.190
127.0.0.1 adobeereg.com
127.0.0.1 OCSP.SPO1.VERISIGN.COM
127.0.0.1 199.7.54.72:80
127.0.0.1 199.7.54.72

5 Answers

#1


12  

I also ran into the same problem and struggled with it for a long time. Following the instructions here: before running the ./bin/hbase shell command, you should run ./bin/start-hbase.sh first. That solved my problem.

#2


3  

As your hbase-site.xml shows, you have also tried running HBase on HDFS, and now you are trying to run it on the local file system.
Solution: run hadoop.x.x.x/bin/start-dfs.sh first, and then run hbase.x.x.x/bin/start-hbase.sh. It will then run as expected on the local file system.
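To make the distinction in this answer concrete, here is a rough sketch that reports whether the hbase.rootdir in a config file points at HDFS. The needs_dfs helper is made up, and the sed one-liner is not a real XML parser (it does not skip commented-out &lt;value&gt; lines, such as the one in the question's hbase-site.xml):

```shell
# Hypothetical helper: look at the first URI-like <value> in an
# hbase-site.xml-style file and report whether HDFS must be started first.
# Note: this naive sed extraction does not skip XML comments.
needs_dfs() {
  rootdir=$(sed -n 's|.*<value>\(.*\)</value>.*|\1|p' "$1" | grep -m1 '://')
  case "$rootdir" in
    hdfs://*) echo "hdfs rootdir: run start-dfs.sh before start-hbase.sh" ;;
    file://*) echo "local rootdir: no HDFS needed" ;;
    *)        echo "no rootdir found" ;;
  esac
}
```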

#3


0  

I ran into this problem too.

If you are running in standalone mode, use only the HBase libraries: remove Hadoop from your libraries and use the hbase.hadoop libraries instead.

#4


0  

I faced this problem when I hadn't added my hostname to the /etc/hosts file.

For example, my hostname is node1.

Add 127.0.0.1 node1 to /etc/hosts.
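A small sketch of that check (the has_host_entry name is made up for illustration) that greps a hosts-format file for a given name:

```shell
# Hypothetical helper: report whether a hosts-format file maps a given name.
# Matches the name as a whole word on non-comment lines.
has_host_entry() {
  if grep -qE "^[^#]*[[:space:]]$2([[:space:]]|\$)" "$1"; then
    echo "$2 is mapped in $1"
  else
    echo "$2 is NOT mapped in $1"
  fi
}

# Example (assumed hostname "node1"):
# has_host_entry /etc/hosts node1
```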

#5


-1  

I also faced this problem and eventually figured out what was going on.

When I typed start-hbase.sh directly at the shell prompt, it showed a "No Command" error.

Then I navigated to the HBase bin folder (cd /usr/local/hbase/bin) and ran ./start-hbase.sh. It started working (the ZooKeeper and master services were found running).

Likewise, for the HBase shell, you first need to enter the HBase bin folder, then type ./hbase shell.
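The "No Command" error is just the shell failing to find the script on PATH. A tiny guard sketch (run_if_present is a made-up name, and /usr/local/hbase is only the install path from this answer):

```shell
# Hypothetical guard: run a script only if it exists and is executable,
# instead of hitting "command not found" from a bare script name.
run_if_present() {
  if [ -x "$1" ]; then
    echo "running $1"
    "$1"
  else
    echo "not found: $1"
  fi
}

# Example with the install path from this answer:
run_if_present /usr/local/hbase/bin/start-hbase.sh
```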

Hope this works :)
