Reading files in order in Google Cloud Dataflow

Time: 2022-02-01 15:24:20

I'm using Spotify Scio to read logs that are exported from Stackdriver to Google Cloud Storage. They are JSON files where every line is a single entry. Looking at the worker logs it seems like the file is split into chunks, which are then read in any order. I've already limited my job to exactly 1 worker in this case. Is there a way to force these chunks to be read and processed in order?

As an example (textFile is basically a TextIO.Read):

val sc = ScioContext(myOptions)
sc.textFile(myFile).map(line => logger.info(line))

Would produce output similar to this based on the worker logs:

line 5
line 6
line 7
line 8
<Some other work>
line 1
line 2
line 3
line 4
<Some other work>
line 9
line 10
line 11
line 12

What I want to know is if there's a way to force it to read lines 1-12 in order. I've found that gzipping the file and reading it with the CompressionType specified is a workaround, but I'm wondering if there's a way to do this that doesn't involve zipping or changing the original file.
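
For reference, a minimal sketch of that gzip workaround, assuming a recent Scio version (where textFile takes a Compression argument; older releases used TextIO.CompressionType) and a placeholder GCS path. A gzipped file cannot be split, so Beam reads it as a single sequential stream, which is why the lines come back in order:

import com.spotify.scio._
import org.apache.beam.sdk.io.Compression
import org.slf4j.LoggerFactory

object OrderedReadExample {
  private val logger = LoggerFactory.getLogger(this.getClass)

  def main(cmdlineArgs: Array[String]): Unit = {
    val (sc, _) = ContextAndArgs(cmdlineArgs)
    // A gzip stream is unsplittable, so this read is one sequential chunk
    // and the lines are observed in file order.
    sc.textFile("gs://my-bucket/logs.json.gz", compression = Compression.GZIP)
      .map { line =>
        logger.info(line)
        line
      }
    sc.run()
  }
}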

1 Solution

#1

Google Cloud Dataflow / Apache Beam currently does not support sorting or preserving order in processing pipelines. The drawback of allowing sorted output is that producing such a result eventually bottlenecks on a single machine, which does not scale to large datasets.
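
To make that point concrete: you can force a global order by hand, but only by funneling the whole dataset through a single group, which is exactly the single-machine bottleneck described. A sketch under that caveat; extractTimestamp is a hypothetical helper, and it assumes the Stackdriver entries carry an RFC 3339 timestamp field (which sorts lexicographically):

import com.spotify.scio._

object GlobalSortSketch {
  // Hypothetical helper: pull the "timestamp" field out of a Stackdriver
  // LogEntry JSON line. Real parsing would use a JSON library.
  def extractTimestamp(line: String): String =
    "\"timestamp\"\\s*:\\s*\"([^\"]+)\"".r
      .findFirstMatchIn(line).map(_.group(1)).getOrElse("")

  def main(cmdlineArgs: Array[String]): Unit = {
    val (sc, _) = ContextAndArgs(cmdlineArgs)
    sc.textFile("gs://my-bucket/logs.json")
      .keyBy(_ => ())          // one key: every line lands in the same group
      .groupByKey
      .flatMap { case (_, lines) =>
        // The sort runs on a single worker -- exactly the bottleneck
        // the answer describes.
        lines.toSeq.sortBy(extractTimestamp)
      }
    sc.run()
  }
}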
