While integrating HBase with MapReduce to load a text file from HDFS into an HBase table, I did not build a "fat jar" (a fat jar bundles all of the project's dependency jars into the lib directory of the packaged artifact); instead, I added HBase's lib directory directly to Hadoop's classpath.
I added the HBase jars in /opt/modules/hadoop/etc/hadoop/hadoop-env.sh. After editing this file, distribute it to every node; this change does not require restarting the cluster.
# In hadoop-env.sh: list every HBase jar and join them with ':' into the classpath
TEMP=`ls /opt/modules/hbase/lib/*.jar`
HBASE_JARS=`echo $TEMP | sed 's/ /:/g'`
export HADOOP_CLASSPATH=$HBASE_JARS
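If your HBase release ships the mapredcp subcommand, a shorter alternative for the same hadoop-env.sh is to let HBase print just the jars MapReduce jobs need; a minimal sketch, assuming the launcher lives at /opt/modules/hbase/bin/hbase:

# Alternative: have HBase compute its own MapReduce classpath instead of globbing all of lib/
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:`/opt/modules/hbase/bin/hbase mapredcp`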
You can check which jars are now on Hadoop's classpath with:
[hadoop@master ~]$ hdfs classpath
Then run the Hadoop MapReduce job:
hadoop jar xx.jar input '<table name>'
It failed with the following error:
16/08/11 08:52:05 INFO mapreduce.Job: Running job: job_1470930593079_0001
16/08/11 08:52:24 INFO mapreduce.Job: Job job_1470930593079_0001 running in uber mode : false
16/08/11 08:52:24 INFO mapreduce.Job: map 0% reduce 0%
16/08/11 08:52:24 INFO mapreduce.Job: Job job_1470930593079_0001 failed with state FAILED due to: Application application_1470930593079_0001 failed 2 times due to AM Container for appattempt_1470930593079_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://master.xjl456852.com:8088/cluster/app/application_1470930593079_0001
Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1470930593079_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
        at org.apache.hadoop.util.Shell.run(Shell.java:456)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
16/08/11 08:52:24 INFO mapreduce.Job: Counters: 0
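Exit code 1 alone says little; the container logs behind the tracking URL carry the real stack trace (in a setup like this, typically a ClassNotFoundException for HBase classes in the ApplicationMaster). Assuming log aggregation is enabled, they can also be pulled straight from the shell:

# Fetch the aggregated container logs for the failed application
yarn logs -applicationId application_1470930593079_0001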
Searching online, I found that the class libraries MapReduce needs at runtime have to be listed in /opt/modules/hadoop/etc/hadoop/yarn-site.xml by setting yarn.application.classpath:
So I added the following property to yarn-site.xml, including HBase's lib directory. After editing the file, distribute it to every node; this change does require restarting the cluster.
<property>
    <name>yarn.application.classpath</name>
    <value>
        /opt/modules/hadoop/etc/*,
        /opt/modules/hadoop/etc/hadoop/*,
        /opt/modules/hadoop/lib/*,
        /opt/modules/hadoop/share/hadoop/common/*,
        /opt/modules/hadoop/share/hadoop/common/lib/*,
        /opt/modules/hadoop/share/hadoop/mapreduce/*,
        /opt/modules/hadoop/share/hadoop/mapreduce/lib/*,
        /opt/modules/hadoop/share/hadoop/hdfs/*,
        /opt/modules/hadoop/share/hadoop/hdfs/lib/*,
        /opt/modules/hadoop/share/hadoop/yarn/*,
        /opt/modules/hadoop/share/hadoop/yarn/lib/*,
        /opt/modules/hbase/lib/*
    </value>
</property>
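For the restart, the YARN daemons are the part that matters for a yarn-site.xml change; a minimal sketch, assuming the stock scripts under the same /opt/modules/hadoop install:

# Restart YARN so the ResourceManager and every NodeManager pick up the new yarn.application.classpath
/opt/modules/hadoop/sbin/stop-yarn.sh
/opt/modules/hadoop/sbin/start-yarn.sh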