【Hadoop】MapReduce Input/Output Formats: Input Formats

Date: 2021-07-20 18:20:40

1 Common Input Formats

| Input format | Characteristics | RecordReader used | Uses FileInputFormat's getSplits? |
| --- | --- | --- | --- |
| TextInputFormat | Key is the byte offset of the line; value is the line content up to (excluding) the line terminator | LineRecordReader | Yes |
| KeyValueTextInputFormat | Splits each line on a separator (default "\t"); text before it becomes the key, text after it the value | KeyValueLineRecordReader (internally uses LineRecordReader) | Yes |
| NLineInputFormat | Each split contains the number of lines set by the property mapreduce.input.lineinputformat.linespermap | LineRecordReader | No; overrides FileInputFormat's getSplits |
| SequenceFileInputFormat | Reads binary files in Hadoop's own SequenceFile format via SequenceFile.Reader | SequenceFileRecordReader | Yes |
| DBInputFormat | Opens a database connection and splits the rows to read by the number of map tasks | DBRecordReader | No; extends InputFormat directly, implementing both splitting and its RecordReader |
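As a quick illustration, here is a minimal driver sketch showing how such a format is selected and configured on a job (assuming the new org.apache.hadoop.mapreduce API; the class name InputFormatConfigDemo is made up for this example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

public class InputFormatConfigDemo {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "input-format-demo");

        // TextInputFormat is the default, so it needs no explicit call.

        // KeyValueTextInputFormat: override the default "\t" separator
        job.getConfiguration().set(
            KeyValueLineRecordReader.KEY_VALUE_SEPERATOR, ",");
        job.setInputFormatClass(KeyValueTextInputFormat.class);

        // Alternatively, NLineInputFormat with 1000 lines per split
        // (sets mapreduce.input.lineinputformat.linespermap):
        // NLineInputFormat.setNumLinesPerSplit(job, 1000);
        // job.setInputFormatClass(NLineInputFormat.class);
    }
}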


2 Workflow for Customizing an InputFormat

1) For text-format data, implement an XXInputFormat that extends FileInputFormat (see the sketch after this list)
2) Override FileInputFormat's isSplitable() method. Files compressed with a non-splittable codec cannot be split; plain text files generally can
3) Override FileInputFormat's createRecordReader() method
4) Implement a custom XXRecordReader that reads the specific format
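A minimal sketch of the XXInputFormat class itself (assuming Text/Text key-value types; the compression check is a simplified version of what TextInputFormat does, treating any compressed file as non-splittable):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class XXInputFormat extends FileInputFormat<Text, Text> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // Splittable only if the file is not compressed
        CompressionCodec codec =
            new CompressionCodecFactory(context.getConfiguration()).getCodec(file);
        return codec == null;
    }

    @Override
    public RecordReader<Text, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new XXRecordReader();
    }
}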

The two key methods to implement in XXRecordReader are:
// Fields assumed on XXRecordReader:
//   private LineReader in;   // org.apache.hadoop.util.LineReader
//   private Text line, lineKey, lineValue;

@Override
public void initialize(InputSplit input, TaskAttemptContext context)
        throws IOException, InterruptedException {
    FileSplit split = (FileSplit) input;
    Configuration job = context.getConfiguration();
    Path file = split.getPath();
    FileSystem fs = file.getFileSystem(job);

    FSDataInputStream fileIn = fs.open(file);
    // For text-based data this setup is essentially always the same
    in = new LineReader(fileIn, job);
    line = new Text();
    lineKey = new Text();
    lineValue = new Text();
}

// Reads one line per call and fills in the custom key and value
@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
    int linesize = in.readLine(line);  // read the next line
    if (linesize == 0) return false;   // end of the split reached
    String[] pieces = line.toString().split("\\s+"); // tokenize the line
    // ... derive the key and value from pieces ...
    lineKey.set("key");     // fill in the custom key
    lineValue.set("value"); // fill in the custom value
    return true;
}
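Besides initialize() and nextKeyValue(), RecordReader also requires getCurrentKey(), getCurrentValue(), getProgress(), and close(); for a reader like the one above these simply return lineKey and lineValue, report how much of the split has been consumed, and close the underlying stream.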

3 Multiple Inputs

1) When the job has multiple input formats, a different Mapper can be assigned to each data source:

MultipleInputs.addInputPath(job, ncdcInputPath, TextInputFormat.class, NCDCTemperatureMapper.class);

2) When there are multiple input formats but only a single Mapper, use the overload without a Mapper argument:

public static void addInputPath(Job job, Path path, Class<? extends InputFormat> inputFormatClass);

and then set the single shared Mapper with job.setMapperClass().
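A combined driver sketch of both cases (the job variable and NCDC names come from the example above; the Met Office path, MetOfficeTemperatureMapper, and TemperatureMapper are hypothetical; a real job would use only one of the two approaches):

Path ncdcInputPath = new Path("/input/ncdc");
Path metOfficeInputPath = new Path("/input/metoffice");

// Case 1: a dedicated Mapper per data source
MultipleInputs.addInputPath(job, ncdcInputPath,
    TextInputFormat.class, NCDCTemperatureMapper.class);
MultipleInputs.addInputPath(job, metOfficeInputPath,
    SequenceFileInputFormat.class, MetOfficeTemperatureMapper.class);

// Case 2: different input formats, one shared Mapper
MultipleInputs.addInputPath(job, ncdcInputPath, TextInputFormat.class);
MultipleInputs.addInputPath(job, metOfficeInputPath,
    SequenceFileInputFormat.class);
job.setMapperClass(TemperatureMapper.class);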

4 InputFormat and Its Subclasses Handle Splitting and Reading for Different Data Formats

4.1 How does the getSplits method implement splitting?

1) Different data sources call for different splitting strategies.

Example 1: a text-format data source
The split size is computed first; if the input file is 100 MB and splitSize comes out as 64 MB, the file is cut into two splits of 64 MB and 36 MB.
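The split size comes from FileInputFormat.computeSplitSize, which simply clamps the block size between the configured minimum and maximum (this is the actual Hadoop implementation):

protected long computeSplitSize(long blockSize, long minSize,
                                long maxSize) {
  return Math.max(minSize, Math.min(maxSize, blockSize));
}

With default settings this is just the HDFS block size. The full FileInputFormat.getSplits then looks as follows: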

public List<InputSplit> getSplits(JobContext job) throws IOException {
  StopWatch sw = new StopWatch().start();
  long minSize = Math.max(getFormatMinSplitSize(), getMinSplitSize(job));
  long maxSize = getMaxSplitSize(job);

  // generate splits
  List<InputSplit> splits = new ArrayList<InputSplit>();
  List<FileStatus> files = listStatus(job);
  for (FileStatus file: files) {
    Path path = file.getPath();
    long length = file.getLen();
    if (length != 0) {
      BlockLocation[] blkLocations;
      if (file instanceof LocatedFileStatus) {
        blkLocations = ((LocatedFileStatus) file).getBlockLocations();
      } else {
        FileSystem fs = path.getFileSystem(job.getConfiguration());
        blkLocations = fs.getFileBlockLocations(file, 0, length);
      }
      if (isSplitable(job, path)) {
        long blockSize = file.getBlockSize();
        long splitSize = computeSplitSize(blockSize, minSize, maxSize);

        long bytesRemaining = length;
        while (((double) bytesRemaining)/splitSize > SPLIT_SLOP) {
          int blkIndex = getBlockIndex(blkLocations, length-bytesRemaining);
          splits.add(makeSplit(path, length-bytesRemaining, splitSize,
                      blkLocations[blkIndex].getHosts(),
                      blkLocations[blkIndex].getCachedHosts()));
          bytesRemaining -= splitSize;
        }

        if (bytesRemaining != 0) {
          int blkIndex = getBlockIndex(blkLocations, length-bytesRemaining);
          splits.add(makeSplit(path, length-bytesRemaining, bytesRemaining,
                     blkLocations[blkIndex].getHosts(),
                     blkLocations[blkIndex].getCachedHosts()));
        }
      } else { // not splitable
        if (LOG.isDebugEnabled()) {
          // Log only if the file is big enough to be splitted
          if (length > Math.min(file.getBlockSize(), minSize)) {
            LOG.debug("File is not splittable so no parallelization "
                + "is possible: " + file.getPath());
          }
        }
        splits.add(makeSplit(path, 0, length, blkLocations[0].getHosts(),
                    blkLocations[0].getCachedHosts()));
      }
    } else {
      //Create empty hosts array for zero length files
      splits.add(makeSplit(path, 0, length, new String[0]));
    }
  }
  // Save the number of input files for metrics/loadgen
  job.getConfiguration().setLong(NUM_INPUT_FILES, files.size());
  sw.stop();
  if (LOG.isDebugEnabled()) {
    LOG.debug("Total # of splits generated by getSplits: " + splits.size()
        + ", TimeTaken: " + sw.now(TimeUnit.MILLISECONDS));
  }
  return splits;
}
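Note the SPLIT_SLOP constant (1.1) in the loop condition: a new full-size split is only cut while more than 110% of splitSize remains, so a 65 MB file with a 64 MB splitSize becomes a single 65 MB split instead of a 64 MB split plus a 1 MB fragment.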

Example 2: a database data source
DBInputFormat queries the total number of rows to read, divides it by the number of map tasks to get the rows per task, and splits on row ranges; with 1,000 rows and 4 map tasks, each task handles a chunk of 250 rows.

public List<InputSplit> getSplits(JobContext job) throws IOException {

  ResultSet results = null;
  Statement statement = null;
  try {
    statement = connection.createStatement();

    results = statement.executeQuery(getCountQuery()); // query the total row count
    results.next();

    long count = results.getLong(1);
    int chunks = job.getConfiguration().getInt(MRJobConfig.NUM_MAPS, 1);
    long chunkSize = (count / chunks);

    results.close();
    statement.close();

    List<InputSplit> splits = new ArrayList<InputSplit>();

    // Split the rows into n-number of chunks and adjust the last chunk
    // accordingly
    for (int i = 0; i < chunks; i++) {
      DBInputSplit split;

      if ((i + 1) == chunks)
        split = new DBInputSplit(i * chunkSize, count);
      else
        split = new DBInputSplit(i * chunkSize, (i * chunkSize)
            + chunkSize);

      splits.add(split);
    }

    connection.commit();
    return splits;
  } catch (SQLException e) {
    throw new IOException("Got SQLException", e);
  } finally {
    try {
      if (results != null) { results.close(); }
    } catch (SQLException e1) {}
    try {
      if (statement != null) { statement.close(); }
    } catch (SQLException e1) {}

    closeConnection();
  }
}
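Each resulting DBInputSplit carries a (start, end) row range; the matching DBRecordReader then turns that range into a paged query (on MySQL, for instance, a LIMIT ... OFFSET ... clause), so each map task reads only its own rows.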

2) The RecordReader class implements how the data is actually read
Plain text formats are generally read with LineRecordReader, and the raw lines are then processed according to the application's needs, as in the sketch below.
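For instance, a hypothetical UpperCaseRecordReader that delegates to LineRecordReader and only post-processes the value might look like this:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class UpperCaseRecordReader extends RecordReader<LongWritable, Text> {
    private final LineRecordReader delegate = new LineRecordReader();
    private final Text value = new Text();

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException {
        delegate.initialize(split, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        if (!delegate.nextKeyValue()) return false;
        // Post-process the raw line as needed; here: upper-case it
        value.set(delegate.getCurrentValue().toString().toUpperCase());
        return true;
    }

    @Override
    public LongWritable getCurrentKey() { return delegate.getCurrentKey(); }

    @Override
    public Text getCurrentValue() { return value; }

    @Override
    public float getProgress() throws IOException { return delegate.getProgress(); }

    @Override
    public void close() throws IOException { delegate.close(); }
}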