How to update a google-cloud-dataflow job running on App Engine without clearing the BigQuery table

Date: 2021-12-31 15:25:39

I have a google-cloud-dataflow process running on App Engine. It listens to messages sent via Pub/Sub and streams them to BigQuery.

I updated my code and I am trying to rerun the app.

But I receive this error:

Exception in thread "main" java.lang.IllegalArgumentException: BigQuery table is not empty

Is there any way to update the Dataflow job without deleting the table? My code might change quite often, and I do not want to delete the data in the table.

Here is my code:

public class MyPipline {
    private static final Logger LOG = LoggerFactory.getLogger(MyPipline.class);
    private static String name;

    public static void main(String[] args) {

        List<TableFieldSchema> fields = new ArrayList<>();
        fields.add(new TableFieldSchema().setName("a").setType("string"));
        fields.add(new TableFieldSchema().setName("b").setType("string"));
        fields.add(new TableFieldSchema().setName("c").setType("string"));
        TableSchema tableSchema = new TableSchema().setFields(fields);

        DataflowPipelineOptions options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
        options.setRunner(BlockingDataflowPipelineRunner.class);
        options.setProject("my-data-analysis");
        options.setStagingLocation("gs://my-bucket/dataflow-jars");
        options.setStreaming(true);

        Pipeline pipeline = Pipeline.create(options);

        PCollection<String> input = pipeline
                .apply(PubsubIO.Read.subscription(
                        "projects/my-data-analysis/subscriptions/myDataflowSub"));

        input.apply(ParDo.of(new DoFn<String, Void>() {

            @Override
            public void processElement(DoFn<String, Void>.ProcessContext c) throws Exception {
                LOG.info("json" + c.element());
            }

        }));
        String fileName = UUID.randomUUID().toString().replaceAll("-", "");


        input.apply(ParDo.of(new DoFn<String, String>() {
            @Override
            public void processElement(DoFn<String, String>.ProcessContext c) throws Exception {
                JSONObject firstJSONObject = new JSONObject(c.element());
                firstJSONObject.put("a", firstJSONObject.get("a").toString()+ "1000");
                c.output(firstJSONObject.toString());

            }

        }).named("update json")).apply(ParDo.of(new DoFn<String, TableRow>() {

            @Override
            public void processElement(DoFn<String, TableRow>.ProcessContext c) throws Exception {
                JSONObject json = new JSONObject(c.element());
                TableRow row = new TableRow().set("a", json.get("a")).set("b", json.get("b")).set("c", json.get("c"));
                c.output(row);
            }

        }).named("convert json to table row"))
                .apply(BigQueryIO.Write.to("my-data-analysis:mydataset.mytable").withSchema(tableSchema)
        );

        pipeline.run();
    }
}

1 Answer

#1


You need to specify withWriteDisposition on your BigQueryIO.Write - see the documentation of the method and of its argument. Depending on your requirements, you need either WRITE_TRUNCATE or WRITE_APPEND.

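For example, a minimal sketch of how the sink in the pipeline above could be changed, using the table and schema from the question (WRITE_APPEND keeps the existing rows, while WRITE_TRUNCATE would replace them; the CreateDisposition line is optional and only shown for completeness):

        input.apply(BigQueryIO.Write
                .to("my-data-analysis:mydataset.mytable")
                .withSchema(tableSchema)
                // allow writing into a non-empty table by appending new rows
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
                // create the table first if it does not exist yet
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));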
