If you are running a streaming pipeline on Google Cloud Dataflow, is there (or is there planned to be at some point in the future) a method of manually scaling the number of requested worker instances up and down based on some form of custom metric or API call? Or does such scaling require performing a clean shutdown of the pipeline, a difficult task in and of itself even in the most well-controlled circumstances, and then restarting it completely with a different number of instances?
1 Solution
#1
Currently, there isn't a way of scaling the number of workers for a running pipeline. We are aware of this need and are working on several avenues for addressing it. Stay tuned!
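Until an in-place scaling API exists, the shutdown-and-restart approach the question describes can at least be scripted. Below is a minimal sketch, assuming the Dataflow v1b3 REST API via the `google-api-python-client` library and application-default credentials; `PROJECT`, `REGION`, and `JOB_ID` are placeholders, and the final resubmission step is left as a comment since it depends on your pipeline code.

```python
# Sketch of the drain-and-restart workaround (not an in-place scaling API).
# Assumes: pip install google-api-python-client, and gcloud application-default
# credentials configured. PROJECT, REGION, and JOB_ID are placeholder values.
import time

from googleapiclient.discovery import build

PROJECT = "my-project"                       # placeholder
REGION = "us-central1"                       # placeholder
JOB_ID = "2015-01-01_00_00_00-1234567890"    # placeholder

dataflow = build("dataflow", "v1b3")

# Step 1: request a drain -- the service finishes in-flight work and stops
# pulling new input. This is the "clean shutdown" half of the cycle.
dataflow.projects().locations().jobs().update(
    projectId=PROJECT,
    location=REGION,
    jobId=JOB_ID,
    body={"requestedState": "JOB_STATE_DRAINED"},
).execute()

# Step 2: poll until the drain completes.
while True:
    job = dataflow.projects().locations().jobs().get(
        projectId=PROJECT, location=REGION, jobId=JOB_ID
    ).execute()
    if job["currentState"] == "JOB_STATE_DRAINED":
        break
    time.sleep(30)

# Step 3: resubmit the same pipeline with a different fixed worker count,
# e.g. by passing --num_workers=20 (and --autoscaling_algorithm=NONE to pin
# the count) when launching the pipeline again.
```

Draining preserves in-flight records, whereas cancelling discards them, which is why drain is the safer half of the restart cycle for a streaming job.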