Cloud Dataflow job scales beyond the maximum worker value

Time: 2021-03-09 15:34:29

Dataflow job id: 2016-01-13_16_00_09-15016519893798477319

Pipeline was configured with the following worker/scaling config:

  • min 2 workers
  • max 50 workers

However, the job scaled to 55 workers. Why was the max worker value of 50 not honoured?

Jan 14, 2016, 11:00:10 AM
(77f7e53b4884ba02): Autoscaling: Enabled for job 2016-01-13_16_00_09-15016519893798477319 between 1 and 1000000 worker processes.

Jan 14, 2016, 11:00:17 AM
(374d4f69f65e2506): Worker configuration: n1-standard-1 in us-central1-a.

Jan 14, 2016, 11:00:18 AM
(28acda8454e90ad2): Starting 2 workers...

Jan 14, 2016, 11:01:49 AM
(cf611e5d4ce4784d): Autoscaling: Resizing worker pool from 2 to 50.

Jan 14, 2016, 11:06:20 AM
(36c68efd7f1743cf): Autoscaling: Resizing worker pool from 50 to 55.

1 Solution

#1


This turned out to be a bug in our code: we were calling the wrong method. We need to call setMaxNumWorkers, not setNumWorkers. setNumWorkers only sets the initial worker count and places no upper bound on autoscaling, which is consistent with the first log entry showing autoscaling enabled between 1 and 1000000 worker processes, i.e. the default, uncapped range.
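For reference, here is a minimal sketch of the fix, written against the Apache Beam Dataflow runner options (the original 2016 job would have used the pre-Beam com.google.cloud.dataflow.sdk, which exposes the same two setters); the class name ScalingConfigFix and the pipeline body are placeholders:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class ScalingConfigFix {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).withValidation()
                .as(DataflowPipelineOptions.class);

        // Buggy call: setNumWorkers only sets the initial worker count and
        // places no upper bound on autoscaling.
        // options.setNumWorkers(50);

        // Fix: use setNumWorkers for the starting size and setMaxNumWorkers
        // for the autoscaling cap.
        options.setNumWorkers(2);      // start the worker pool at 2
        options.setMaxNumWorkers(50);  // autoscaler may never exceed 50

        Pipeline p = Pipeline.create(options);
        // ... construct the pipeline transforms here ...
        p.run();
      }
    }

Equivalently, the same settings can be passed on the command line as --numWorkers=2 and --maxNumWorkers=50.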
