1. Cluster Environment
The cluster setup process is documented at https://blog.csdn.net/lynne_cat/article/details/102975026
2. Command Run
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar grep input output 'dfs[a-z.]+'
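Before running the example, the HDFS input directory must exist and contain files to search. A minimal sketch, assuming the standard single/multi-node setup from the Hadoop docs where the `etc/hadoop` configuration XMLs are used as sample input (the `input` path here resolves to `/user/hadoop/input` for the `hadoop` user; adjust to your environment):

```shell
# Create the per-user input directory in HDFS and load sample files.
bin/hdfs dfs -mkdir -p /user/hadoop/input
bin/hdfs dfs -put etc/hadoop/*.xml input
# Verify the files landed (the job log below reports 35 input paths).
bin/hdfs dfs -ls input
```

The `output` directory must not already exist, or the job will fail immediately with an "output directory already exists" error.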
3. Error Output
Command-line output:
19/11/12 16:47:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/11/12 16:47:12 INFO client.RMProxy: Connecting to ResourceManager at hslave162/10.0.35.162:8032
19/11/12 16:47:13 INFO input.FileInputFormat: Total input paths to process : 35
19/11/12 16:47:14 INFO mapreduce.JobSubmitter: number of splits:35
19/11/12 16:47:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1573296317172_0006
19/11/12 16:47:15 INFO impl.YarnClientImpl: Submitted application application_1573296317172_0006
19/11/12 16:47:15 INFO mapreduce.Job: The url to track the job: http://hslave162:8088/proxy/application_1573296317172_0006/
19/11/12 16:47:15 INFO mapreduce.Job: Running job: job_1573296317172_0006
19/11/12 16:47:21 INFO mapreduce.Job: Job job_1573296317172_0006 running in uber mode : false
19/11/12 16:47:21 INFO mapreduce.Job: map 0% reduce 0%
19/11/12 16:47:21 INFO mapreduce.Job: Job job_1573296317172_0006 failed with state FAILED due to: Application application_1573296317172_0006 failed 2 times due to AM Container for appattempt_1573296317172_0006_000002 exited with exitCode: 1 For more detailed output, check application tracking page:http://hslave162:8088/cluster/app/application_1573296317172_0006Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1573296317172_0006_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
at org.apache.hadoop.util.Shell.run(Shell.java:482)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
19/11/12 16:47:21 INFO mapreduce.Job: Counters: 0
19/11/12 16:47:21 INFO client.RMProxy: Connecting to ResourceManager at hslave162/10.0.35.162:8032
19/11/12 16:47:22 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1573296317172_0007
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://hmaster156:9000/user/hadoop/grep-temp-160258091
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:59)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.hadoop.examples.Grep.run(Grep.java:94)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.Grep.main(Grep.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:226)
at org.apache.hadoop.util.RunJar.main(RunJar.java:141)
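Note that the log shows two failures, and the second is a symptom of the first. The `grep` example chains two MapReduce jobs: a search job that writes its results to a temporary path (`grep-temp-160258091`), followed by a sort job that reads that path. Because the first job's ApplicationMaster container exited with code 1, the temporary output was never created, so the second job fails with `InvalidInputException: Input path does not exist`. The root cause is therefore in the AM container logs, which can be pulled with the standard YARN CLI (assuming log aggregation is enabled; otherwise inspect the NodeManager's local log directories on the node that launched the container):

```shell
# Fetch the aggregated logs for the failed application attempt;
# the application ID is taken from the job output above.
bin/yarn logs -applicationId application_1573296317172_0006
```

The tracking URL in the output (http://hslave162:8088/cluster/app/application_1573296317172_0006) shows the same diagnostics in the ResourceManager web UI.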
4. Solution