I am following this tutorial to set up Hadoop 2.9.0. When I execute the following command:
sbin/start-dfs.sh
I get the following output in the terminal:
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/uname/hadoop-2.9.0/logs/hadoop-uname-namenode-mname.out
localhost: starting datanode, logging to /home/uname/hadoop-2.9.0/logs/hadoop-uname-datanode-mname.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/uname/hadoop-2.9.0/logs/hadoop-uname-secondarynamenode-mname.out
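For reference, a quick way to see which daemons actually came up is jps (it ships with the JDK); on a healthy single-node setup it should list NameNode, DataNode, and SecondaryNameNode:

jps
# Expected on a working single-node cluster (PIDs are illustrative):
# 24802 NameNode
# 25166 SecondaryNameNode
# 25391 DataNode
# 25987 Jps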
Then, when I try to copy files as described in the tutorial here, using the following command:
bin/hdfs dfs -put etc/hadoop input
I get the following error:
put: File /user/uname/input/yarn-site.xml._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
The following code block is the detailed stack trace of the above error; you can skip it on a first read.
18/02/17 21:59:45 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/uname/input/yarn-site.xml._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1797)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2559)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:846)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:868)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:814)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2603)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1493)
at org.apache.hadoop.ipc.Client.call(Client.java:1439)
at org.apache.hadoop.ipc.Client.call(Client.java:1349)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:444)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1845)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
put: File /user/uname/input/yarn-site.xml._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
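To confirm what the NameNode itself sees, the live-datanode count can be checked with dfsadmin (a standard HDFS admin command, run from the Hadoop install directory):

bin/hdfs dfsadmin -report
# "Live datanodes (0)" in this report would match the error above.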
I am not sure why no datanode is running. I have tried the suggestions in this and this answer, but they didn't work. If you look at the second code block of the question, it says that it is starting the datanode, and that logs are written to /home/uname/hadoop-2.9.0/logs/hadoop-uname-datanode-mname.out. I didn't find any error in that log. I am copying it below.
ulimit -a for user uname
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63757
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 63757
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
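One thing worth knowing here: the .out file only captures the daemon's stdout/stderr (hence the ulimit dump above), while the datanode's real errors, if any, go to the matching .log file. Something like the following should surface them (the path is inferred from the .out name):

tail -n 100 /home/uname/hadoop-2.9.0/logs/hadoop-uname-datanode-mname.log
# Look for FATAL/ERROR lines, e.g. "Incompatible clusterIDs".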
To verify whether the datanode is really running, I executed the very first start command again:
sbin/start-dfs.sh
I got the following message:
Starting namenodes on [localhost]
localhost: namenode running as process 24802. Stop it first.
localhost: starting datanode, logging to /home/uname/hadoop-2.9.0/logs/hadoop-uname-datanode-mname.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 25166. Stop it first.
This message says that the namenode and secondarynamenode are already running. However, it still does not show that the datanode is running; it attempts to start the datanode again and writes the same log to the log file.
Any idea why the datanode is not starting?
1 Answer
#1
Actually you need to debug the log files in the logs directory. You may need to delete the Namenode and Datanode log directories and create them again. After that, run the hadoop namenode -format command, then run start-dfs.sh and start-yarn.sh.
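A sketch of those steps as shell commands, with one addition: the data-directory cleanup on the third line is not in the answer above, but it is usually needed so the reformat takes effect (the /tmp/hadoop-uname path is the assumed default of hadoop.tmp.dir; adjust it if your core-site.xml says otherwise):

sbin/stop-dfs.sh                    # stop any running daemons first
rm -rf logs && mkdir logs           # recreate the log directories
rm -rf /tmp/hadoop-uname            # assumed default hadoop.tmp.dir; clears stale HDFS data
bin/hdfs namenode -format           # 'hadoop namenode -format' is the older, deprecated form
sbin/start-dfs.sh
sbin/start-yarn.sh

One caveat: reformatting the NameNode without also clearing the DataNode's data directory leaves the two with mismatched clusterIDs, which silently prevents the DataNode from starting; that is exactly the symptom in the question (a clean-looking .out log but no running DataNode).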