MapReduce job fails when submitted to YARN

Date: 2021-12-31 12:08:30

Why I'm sharing this: writing a whole blog post about one problem may feel a bit extravagant, but a Baidu search turned up very few relevant articles, and I only found the solution after painstakingly combing through the logs.

The problem: the MapReduce job hangs after submission and never produces a result. The client output stops at the following lines:

[QC] INFO [main] org.apache.hadoop.yarn.client.RMProxy.createRMProxy(98) | Connecting to ResourceManager at master/192.168.56.100:8032
[QC] WARN [main] org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(64) | Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
[QC] INFO [main] org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(283) | Total input paths to process : 2
[QC] INFO [main] org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(198) | number of splits:2
[QC] INFO [main] org.apache.hadoop.mapreduce.JobSubmitter.printTokens(287) | Submitting tokens for job: job_1496627557122_0004
[QC] INFO [main] org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(273) | Submitted application application_1496627557122_0004
[QC] INFO [main] org.apache.hadoop.mapreduce.Job.submit(1294) | The url to track the job: http://master:8088/proxy/application_1496627557122_0004/
[QC] INFO [main] org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(1339) | Running job: job_1496627557122_0004
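
As an aside, the WARN line about "Hadoop command-line option parsing not performed" is not the cause of the hang; it only means the driver class does not implement the Tool interface. For completeness, here is a minimal driver sketch that follows the log's suggestion (the class name MyJobDriver, the job name, and the input/output arguments are placeholders, and mapper/reducer setup is omitted):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver class; implementing Tool lets ToolRunner parse the
// generic options (-D, -files, -libjars), which removes the WARN above.
public class MyJobDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() returns the Configuration that ToolRunner has already
        // populated from the generic command-line options.
        Job job = Job.getInstance(getConf(), "my-job");
        job.setJarByClass(MyJobDriver.class);
        // set mapper, reducer and output key/value classes here ...
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
    }
}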

Master (NameNode) log

java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
        at org.apache.hadoop.ipc.Server.channelRead(Server.java:2603)
        at org.apache.hadoop.ipc.Server.access$2800(Server.java:136)
        at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1481)
        at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:771)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:637)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:608)
 
Abnormal log on the Slave (DataNode) machines (the NodeManager repeatedly retrying the ResourceManager):
2017-06-05 09:49:40,464 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:41,464 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:42,465 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:43,467 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:44,468 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:45,470 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:46,472 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-06-05 09:49:47,474 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
 
Notes
My Hadoop cluster consists of Master (NameNode), Slave1, Slave2, and Slave3.
 
Solution
Add the following to yarn-site.xml on every Slave machine; previously I had added it only on the Master. Without yarn.resourcemanager.hostname set, a NodeManager falls back to the default of 0.0.0.0 for the ResourceManager's resource-tracker address (port 8031), which is exactly the endless retry loop in the slave logs above: the slaves never register with the ResourceManager, so the job never gets containers and just sits there.
<configuration>
  <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>master</value>
  </property>
  <property> 
      <name>yarn.nodemanager.aux-services</name> 
      <value>mapreduce_shuffle</value> 
  </property> 
  <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
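
After adding this to yarn-site.xml on every slave, restart YARN so the NodeManagers pick up the new ResourceManager address (for example, sbin/stop-yarn.sh followed by sbin/start-yarn.sh on the Master). Once the NodeManagers register, the retries to 0.0.0.0:8031 stop and the previously stuck job runs to completion.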