First, a quick look at the requirements:
I need to process txt data files for a set of table-like structures. The files contain only values; the column names and everything else are configured in an xml file. Every row of every table has to be transformed into a single line in a specific format that uses ASCII 1 and ASCII 2 as delimiters: ASCII 2 separates a field name from its value, and ASCII 1 separates one field from the next. When parsing a txt file, I first take its file name, look up the matching schema information in the xml, and use it to map each position in the row to its column name. This works because the column positions in these txt files are fixed, and the mapping between each file and its table structure is configured up front in the xml, as shown in the figure below:
There are roughly 20 table files with structures like this. The data provider will hand us the txt files, and I need to transform them into the target format and write the result to HDFS, where our indexing system builds indexes in batch with MapReduce.
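To make the target record format concrete, here is a tiny standalone sketch. The column names and the sample row below are made up purely for illustration; in the real job they come from the xml mapping and the txt rows:

public class RecordFormatDemo {
    public static void main(String[] args) {
        //hypothetical column names and one hypothetical row of values
        String[] fields = {"id", "name", "price"};
        String[] values = {"1001", "phone case", "2.99"};
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) {
                sb.append('\1');//ASCII 1: separates one field from the next
            }
            sb.append(fields[i])
              .append('\2')     //ASCII 2: separates a field name from its value
              .append(values[i]);
        }
        //prints: id^B1001^Aname^Bphone case^Aprice^B2.99 (^A = ASCII 1, ^B = ASCII 2)
        System.out.println(sb);
    }
}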
My first thought was to write a standalone Java program that processes the files serially and then writes to HDFS. But if the data volume turns out to be large, the serial program would have to be rewritten as a multithreaded one, and rather than rework it back and forth, it is easier to just use MapReduce from the start.

OK, time to get to work. The test environment already has a CDH 5.3 cluster running Hadoop 2.5, so I developed and debugged the MapReduce program directly in Eclipse. It had been a while since I last wrote MapReduce by hand; lately I have been analyzing data with Apache Pig, and since the logic here is not complicated, it was a good chance to get some practice in. The CDH cluster sits on remote servers, while my local Hadoop is Apache Hadoop 2.2, and I did not get around to switching versions for this Eclipse setup. In theory it is best to develop against matching versions and distributions so that compatibility problems do not appear, but I was too lazy to switch this time: CDH 5.x is built on top of community Apache Hadoop 2.2, so most of the interfaces should line up. That is only my guess, of course.
(1) The first thing to sort out is parsing the xml. Before the job starts, the xml is parsed and loaded into a Map, so that while each txt file is being processed, its file name can be used to look up the corresponding schema information in that Map. For the xml parsing I used jsoup directly; for the reasons why, see this earlier post of mine: http://qindongliang.iteye.com/blog/2162519. Along the way I ran into a rather annoying problem, practically a bug. My original xml used a table tag for each table, with the property mappings defined underneath it, but when parsing it with jsoup's cssQuery syntax, nothing ever came back. The same approach had always worked before, so this was baffling. After a lot of searching and testing, it turned out that renaming the table tag to anything else made the problem disappear. I have not had time to dig through the jsoup source for the exact cause; my guess is that table is a special HTML tag, so during parsing it collides with HTML's table handling and the selector fails on the xml. It took close to two hours of testing and verifying to pin down this little bug.
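For reference, here is a small sketch that reproduces the symptom with a toy fragment. This is not the real mapping.xml, and the alternative shown at the end (jsoup's XML parser via org.jsoup.parser.Parser.xmlParser()) is just another option besides renaming the tag:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.parser.Parser;

public class JsoupTableTagDemo {
    public static void main(String[] args) {
        //a toy fragment, only to illustrate the symptom (not the real mapping.xml)
        String xml = "<root><table name=\"t1\"><map pos=\"0\" field=\"id\"/></table></root>";
        //default HTML parser: <table> goes through the HTML tree builder,
        //so its children can be relocated and the selector may come back empty
        Document asHtml = Jsoup.parse(xml);
        System.out.println("html parser matches: " + asHtml.select("table map").size());
        //explicit XML parser: tags are treated as plain XML and the nested <map> is found
        Document asXml = Jsoup.parse(xml, "", Parser.xmlParser());
        System.out.println("xml parser matches: " + asXml.select("table map").size());
    }
}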
(2) With that solved, I moved on to developing and debugging the MapReduce version of the processing program, which raised the next question: how do you use Jsoup to parse an xml file stored on HDFS? Anyone with Hadoop programming experience knows that HDFS is a distributed file system whose storage model differs from a local disk. Code that parses C:\file\t.txt in an ordinary Java program, or /home/user/t.txt on Linux, will not work on Hadoop; you have to read the file through Hadoop's API, turn its contents into a String, and then hand that String to jsoup.
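A minimal sketch of that idea (the NameNode address and file path here are placeholders; the full version used in the job is listed near the end of the post):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class ReadXmlFromHdfs {
    public static Document load(String defaultFs, String xmlPath) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", defaultFs);//e.g. hdfs://namenode:8020
        FileSystem fs = FileSystem.get(conf);
        StringBuilder sb = new StringBuilder();
        BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(new Path(xmlPath)), "UTF-8"));
        try {
            String line;
            while ((line = br.readLine()) != null) {//read the whole file line by line
                sb.append(line).append('\n');
            }
        } finally {
            br.close();//release the stream
        }
        return Jsoup.parse(sb.toString());//hand the full content to jsoup
    }
}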
(3) OK, with the second problem out of the way, you still have to write the MR program that processes the txt files and guarantee that the data in each different txt is matched with the correct schema. So in the map method you need the path of the file currently being processed, and you branch on it to apply the right handling.
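A stripped-down sketch of just that lookup (the schema branching is omitted here; the full mapper appears in the code listed further down):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class FileAwareMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        //the input split tells us which file the current record came from
        String fileName = ((FileSplit) context.getInputSplit()).getPath().getName().split("\\.")[0];
        //from here, look up the schema for fileName and format the row accordingly
        context.write(NullWritable.get(), new Text(fileName + " -> " + value));
    }
}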
(4) Very good. With the third problem solved, the MR program is more or less written, and the next step is to figure out how to submit it to the Hadoop cluster for debugging. Since I develop in Eclipse on Windows, this step can hit quite a few problems, and with the Hadoop versions and distributions not matching on top of that, running into issues is entirely normal.
A bit more on this point: in general I advise against debugging Hadoop programs on Windows, because the pitfalls are numerous; if you can, just work directly on Linux. Below are the pitfalls I stepped into today while debugging an Eclipse-developed MR program running on YARN from Windows. I have written about this before; if you are interested, see this link: http://qindongliang.iteye.com/blog/2078452.
(5) Before submitting, you need to package the project into a jar with ant, maven, or Java's built-in export tool; keep that in mind. Testing in the end showed that an MR program written against Apache Hadoop 2.2 can be submitted straight to CDH Hadoop 2.5, but because Hadoop 2.5 embeds Google's Guice (a lightweight dependency injection framework), a few extra Guice jars have to be bundled when packaging on Windows, as shown in the screenshot below:
With the steps above done, package the whole project and run it. A successful run looks like this:
Output path exists; deleted!
2015-04-08 19:35:18,001 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(56)) - Connecting to ResourceManager at /172.26.150.18:8032
2015-04-08 19:35:18,170 WARN [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(149)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2015-04-08 19:35:21,156 INFO [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) - Total input paths to process : 2
2015-04-08 19:35:21,219 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(394)) - number of splits:2
2015-04-08 19:35:21,228 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - user.name is deprecated. Instead, use mapreduce.job.user.name
2015-04-08 19:35:21,228 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapred.jar is deprecated. Instead, use mapreduce.job.jar
2015-04-08 19:35:21,228 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-04-08 19:35:21,229 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
2015-04-08 19:35:21,229 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapred.mapoutput.value.class is deprecated. Instead, use mapreduce.map.output.value.class
2015-04-08 19:35:21,230 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
2015-04-08 19:35:21,230 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapred.job.name is deprecated. Instead, use mapreduce.job.name
2015-04-08 19:35:21,230 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
2015-04-08 19:35:21,230 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
2015-04-08 19:35:21,230 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
2015-04-08 19:35:21,230 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
2015-04-08 19:35:21,231 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2015-04-08 19:35:21,233 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapred.mapoutput.key.class is deprecated. Instead, use mapreduce.map.output.key.class
2015-04-08 19:35:21,233 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
2015-04-08 19:35:21,331 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(477)) - Submitting tokens for job: job_1419419533357_5012
2015-04-08 19:35:21,481 INFO [main] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(174)) - Submitted application application_1419419533357_5012 to ResourceManager at /172.21.50.108:8032
2015-04-08 19:35:21,506 INFO [main] mapreduce.Job (Job.java:submit(1272)) - The url to track the job: http://http://dnode1:8088/proxy/application_1419419533357_5012/
2015-04-08 19:35:21,506 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1317)) - Running job: job_1419419533357_5012
2015-04-08 19:35:33,777 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1338)) - Job job_1419419533357_5012 running in uber mode : false
2015-04-08 19:35:33,779 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1345)) - map 0% reduce 0%
2015-04-08 19:35:43,885 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1345)) - map 100% reduce 0%
2015-04-08 19:35:43,902 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1356)) - Job job_1419419533357_5012 completed successfully
2015-04-08 19:35:44,011 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1363)) - Counters: 27
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=166572
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=47795
        HDFS: Number of bytes written=594
        HDFS: Number of read operations=12
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=4
    Job Counters
        Launched map tasks=2
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=9617
        Total time spent by all reduces in occupied slots (ms)=0
    Map-Reduce Framework
        Map input records=11
        Map output records=5
        Input split bytes=252
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=53
        CPU time spent (ms)=2910
        Physical memory (bytes) snapshot=327467008
        Virtual memory (bytes) snapshot=1905754112
        Total committed heap usage (bytes)=402653184
    File Input Format Counters
        Bytes Read=541
    File Output Format Counters
        Bytes Written=594
true
Finally, the core code, kept here for future reference:
(1) The Map-only job:
package com.dhgate.search.rate.convert;

import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.dhgate.parse.xml.tools.HDFSParseXmlTools;

/**
 * Transforms the raw txt rows into the ASCII 1 / ASCII 2 delimited format.
 *
 * @author qindongliang 2015-04-07
 **/
public class StoreConvert {

    //log4j logger
    static Logger log = LoggerFactory.getLogger(StoreConvert.class);

    /**
     * Converts the supported formats.
     **/
    private static class FormatMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            //the file name (without extension) decides which schema applies to this row
            String filename = ((FileSplit) context.getInputSplit()).getPath().getName().split("\\.")[0];
            String vs[] = value.toString().split(",");
            Map<String, String> m = HDFSParseXmlTools.map.get(filename);
            if (m != null) {
                StringBuffer sb = new StringBuffer();
                for (int i = 0; i < vs.length; i++) {
                    //field name \2 value, fields joined with \1
                    if (i == vs.length - 1) {
                        sb.append(m.get(i + "")).append("\2").append(vs[i]);
                    } else {
                        sb.append(m.get(i + "")).append("\2").append(vs[i]).append("\1");
                    }
                }
                context.write(NullWritable.get(), new Text(filename + " == " + sb.toString()));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // System.setProperty("HADOOP_USER_NAME", "root");
        Configuration conf = new Configuration();
        conf.set("mapreduce.job.jar", "searchrate.jar");
        conf.set("fs.defaultFS", "hdfs://172.21.50.108:8020");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("mapred.remote.os", "Linux");
        conf.set("yarn.resourcemanager.scheduler.address", "172.21.50.108:8030");
        conf.set("yarn.resourcemanager.address", "172.21.50.108:8032");

        Job job = Job.getInstance(conf, "formatdata");
        job.setJarByClass(StoreConvert.class);
        job.setMapperClass(FormatMapper.class);
        job.setMapOutputKeyClass(NullWritable.class);
        job.setMapOutputValueClass(Text.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setNumReduceTasks(0);//Map-only job

        String path = "/tmp/qin/out";
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path(path);
        if (fs.exists(p)) {
            fs.delete(p, true);
            System.out.println("Output path exists; deleted!");
        }
        FileInputFormat.setInputPaths(job, "/tmp/qin/testfile/");
        FileOutputFormat.setOutputPath(job, p);
        System.out.println(job.waitForCompletion(true));
    }
}
(2) The code that parses the xml file stored on HDFS:
package com.dhgate.parse.xml.tools;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import java.util.TreeMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Created by qindongliang on 15-4-6.
 * Big data discussion group: 415886155
 */
public class HDFSParseXmlTools {

    private final static Logger log = LoggerFactory.getLogger(HDFSParseXmlTools.class);
    //holds the schema metadata: table name -> (column position -> field name)
    public static Map<String, Map<String, String>> map = new HashMap<String, Map<String, String>>();
    static Configuration conf = new Configuration();
    static FileSystem fs = null;

    static {
        log.info("Initializing: loading mapping.xml ......");
        try {
            conf.set("fs.defaultFS", "hdfs://172.21.50.108:8020/");
            fs = FileSystem.get(conf);//get the FileSystem handle
            Path xml = new Path("/tmp/qin/mapping.xml");//the xml file on HDFS
            BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(xml)));//open an input stream
            StringBuffer sb = new StringBuffer();//buffer that accumulates the xml content
            String line;
            while ((line = br.readLine()) != null) {//read the file line by line
                sb.append(line);
            }
            br.close();//release the stream
            Document d = Jsoup.parse(sb.toString(), "UTF-8");//parse the xml (the second argument is jsoup's baseUri, not a charset)
            Set<String> set = new HashSet<String>();//tables that should not be parsed
            List<Element> excludes = d.select("exclude");
            for (Element ee : excludes) {
                set.add(ee.text().trim());
            }
            List<Element> tables = d.select("type");
            for (Element t : tables) {
                String num = t.attr("num");
                String name = t.attr("name");
                String indexname = t.attr("indexname");
                if (set.contains(name)) {
                    log.info("Skipping excluded table: " + name);
                    continue;
                }
                // System.out.println("num: " + num + "  table: " + name + "  index: " + indexname);
                Map<String, String> data = new TreeMap<String, String>();
                for (Element s : t.select("map")) {
                    String pos = s.attr("pos");//column position
                    String field = s.attr("field");//index field name
                    data.put(pos, field);
                }
                map.put(name, data);//store this table's mapping
            }
        } catch (Exception e) {
            log.error("Failed to load the mapping file!", e);
        }
    }

    public static void main(String[] args) throws Exception {
        for (Entry<String, Map<String, String>> m : map.entrySet()) {
            System.out.println("Table: " + m.getKey());
        }
    }
}
The project structure is shown in the figure below:
The Ant build script:
<project name="${component.name}" basedir="." default="jar">
    <property environment="env"/>
    <!-- <property name="hadoop.home" value="${env.HADOOP_HOME}"/> -->
    <property name="hadoop.home" value="E:/hadooplib"/>
    <!-- name of the output jar -->
    <property name="jar.name" value="searchrate.jar"/>
    <path id="project.classpath">
        <fileset dir="lib">
            <include name="*.jar" />
        </fileset>
        <fileset dir="${hadoop.home}">
            <include name="**/*.jar" />
        </fileset>
    </path>
    <target name="clean" >
        <delete dir="bin" failonerror="false" />
        <mkdir dir="bin"/>
    </target>
    <target name="build" depends="clean">
        <echo message="${ant.project.name}: ${ant.file}"/>
        <javac destdir="bin" encoding="utf-8" debug="true" includeantruntime="false" debuglevel="lines,vars,source">
            <src path="src"/>
            <exclude name="**/.svn" />
            <classpath refid="project.classpath"/>
        </javac>
        <copy todir="bin">
            <fileset dir="src">
                <include name="*config*"/>
            </fileset>
        </copy>
    </target>
    <target name="jar" depends="build">
        <copy todir="bin/lib">
            <fileset dir="lib">
                <include name="**/*.*"/>
            </fileset>
        </copy>
        <copy todir="bin/lib">
            <fileset dir="${hadoop.home}">
                <include name="**/*.*"/>
            </fileset>
        </copy>
        <path id="lib-classpath">
            <fileset dir="lib" includes="**/*.jar" />
        </path>
        <pathconvert property="my.classpath" pathsep=" " >
            <mapper>
                <chainedmapper>
                    <!-- strip the absolute path -->
                    <flattenmapper />
                    <!-- prefix with lib/ -->
                    <globmapper from="*" to="lib/*" />
                </chainedmapper>
            </mapper>
            <path refid="lib-classpath" />
        </pathconvert>
        <jar basedir="bin" destfile="${jar.name}" >
            <include name="**/*"/>
            <!-- define MANIFEST.MF -->
            <manifest>
                <attribute name="Class-Path" value="${my.classpath}" />
            </manifest>
        </jar>
    </target>
</project>
With that, this little project is done. Back on the production side we will eventually package it as a jar and run it on a schedule on Linux; developing and debugging Hadoop directly in a Linux environment runs into far fewer problems. Even though developing Hadoop programs on Windows is not recommended, knowing a few of the basic methods and tricks for it is still a worthwhile thing.