I am running a 2-node Presto cluster pointed at Hive on EMR, which is configured with data on S3.
The Hive metadata is visible; in the CLI I can DESCRIBE the table claim1 and see its metadata.
Both nodes show up as active in the sys.node table.
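For reference, both checks are just plain statements from the Presto CLI, e.g.:

describe claim1;
select * from sys.node;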
When I run a query (select count(*) from claim1 where col1='M'), I see a lot of logging on the coordinator node, finishing with:
2014-05-19T22:19:32.760+0000 INFO HiveHdfsWalker-144 stdout 22:19:32.760 [HiveHdfsWalker-144] DEBUG c.a.s.s.m.t.XmlResponsesSaxParser - Examining listing for bucket: unzippeddata
2014-05-19T22:19:32.763+0000 INFO HiveHdfsWalker-144 stdout 22:19:32.763 [HiveHdfsWalker-144] DEBUG com.amazonaws.request - Received successful response: 200, AWS Request ID: 9B844EEC8586FF3B
2014-05-19T22:19:32.766+0000 DEBUG query-scheduler-8 com.facebook.presto.execution.SqlStageExecution Stage 20140519_221932_00005_mfhtx.1 is FAILED
2014-05-19T22:19:32.766+0000 DEBUG query-scheduler-6 com.facebook.presto.execution.SqlStageExecution Stage 20140519_221932_00005_mfhtx.0 is FAILED
2014-05-19T22:19:32.768+0000 DEBUG query-scheduler-7 com.facebook.presto.execution.QueryStateMachine Query 20140519_221932_00005_mfhtx is FAILED
2014-05-19T22:19:32.770+0000 ERROR Stage-20140519_221932_00005_mfhtx.1-126 com.facebook.presto.execution.SqlStageExecution Error while starting stage 20140519_221932_00005_mfhtx.1
com.facebook.presto.spi.PrestoException: No nodes available to run query
at com.facebook.presto.util.Failures.checkCondition(Failures.java:79) ~[presto-main-0.68.jar:0.68]
at com.facebook.presto.util.Failures.checkCondition(Failures.java:73) ~[presto-main-0.68.jar:0.68]
at com.facebook.presto.execution.NodeScheduler$NodeSelector.computeAssignments(NodeScheduler.java:184) ~[presto-main-0.68.jar:0.68]
at com.facebook.presto.execution.SqlStageExecution.scheduleSourcePartitionedNodes(SqlStageExecution.java:631) [presto-main-0.68.jar:0.68]
at com.facebook.presto.execution.SqlStageExecution.startTasks(SqlStageExecution.java:549) [presto-main-0.68.jar:0.68]
at com.facebook.presto.execution.SqlStageExecution.access$200(SqlStageExecution.java:91) [presto-main-0.68.jar:0.68]
at com.facebook.presto.execution.SqlStageExecution$4.run(SqlStageExecution.java:521) [presto-main-0.68.jar:0.68]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_51]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_51]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
2014-05-19T22:19:32.775+0000 INFO query-scheduler-7 com.facebook.presto.event.query.QueryMonitor TIMELINE: Query 20140519_221932_00005_mfhtx :: elapsed 483.00ms :: planning 74.12ms :: scheduling 409.00ms :: running 0.00ms :: finishing 409.00ms :: begin 2014-05-19T22:19:32.285Z :: end 2014-05-19T22:19:32.768Z
2014-05-19T22:19:32.872+0000 DEBUG task-notification-0 com.facebook.presto.execution.TaskStateMachine Task 20140519_221932_00005_mfhtx.0.0 is CANCELED
2014-05-19T22:19:32.880+0000 DEBUG 20140519_221932_00005_mfhtx.0.0-0-56 com.facebook.presto.execution.TaskExecutor Split 20140519_221932_00005_mfhtx.0.0-0 (start = 1400537972443, wall = 437 ms, cpu = 3 ms, calls = 2) is finished
...or alternatively:
2014-05-19T22:22:43.972+0000 INFO HiveHdfsWalker-170 stdout 22:22:43.972 [HiveHdfsWalker-170] DEBUG c.a.s.s.m.t.XmlResponsesSaxParser - Parsing XML response document with handler: class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler
2014-05-19T22:22:43.972+0000 INFO HiveHdfsWalker-170 stdout 22:22:43.972 [HiveHdfsWalker-170] DEBUG c.a.s.s.m.t.XmlResponsesSaxParser - Examining listing for bucket: unzippeddata
2014-05-19T22:22:43.976+0000 INFO HiveHdfsWalker-170 stdout 22:22:43.976 [HiveHdfsWalker-170] DEBUG com.amazonaws.request - Received successful response: 200, AWS Request ID: 476D4ABA552DAA66
2014-05-19T22:22:43.979+0000 DEBUG query-scheduler-17 com.facebook.presto.execution.SqlStageExecution Stage 20140519_222243_00007_mfhtx.1 is FAILED
2014-05-19T22:22:43.979+0000 ERROR Stage-20140519_222243_00007_mfhtx.1-160 com.facebook.presto.execution.SqlStageExecution Error while starting stage 20140519_222243_00007_mfhtx.1
com.facebook.presto.spi.PrestoException: No nodes available to run query
at com.facebook.presto.util.Failures.checkCondition(Failures.java:79) ~[presto-main-0.68.jar:0.68]
at com.facebook.presto.util.Failures.checkCondition(Failures.java:73) ~[presto-main-0.68.jar:0.68]
at com.facebook.presto.execution.NodeScheduler$NodeSelector.computeAssignments(NodeScheduler.java:184) ~[presto-main-0.68.jar:0.68]
at com.facebook.presto.execution.SqlStageExecution.scheduleSourcePartitionedNodes(SqlStageExecution.java:631) [presto-main-0.68.jar:0.68]
at com.facebook.presto.execution.SqlStageExecution.startTasks(SqlStageExecution.java:549) [presto-main-0.68.jar:0.68]
at com.facebook.presto.execution.SqlStageExecution.access$200(SqlStageExecution.java:91) [presto-main-0.68.jar:0.68]
at com.facebook.presto.execution.SqlStageExecution$4.run(SqlStageExecution.java:521) [presto-main-0.68.jar:0.68]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_51]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_51]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
2014-05-19T22:22:43.983+0000 DEBUG query-scheduler-13 com.facebook.presto.execution.SqlStageExecution Stage 20140519_222243_00007_mfhtx.0 is FAILED
2014-05-19T22:22:43.984+0000 DEBUG query-scheduler-14 com.facebook.presto.execution.QueryStateMachine Query 20140519_222243_00007_mfhtx is FAILED
2014-05-19T22:22:43.990+0000 INFO query-scheduler-14 com.facebook.presto.event.query.QueryMonitor TIMELINE: Query 20140519_222243_00007_mfhtx :: elapsed 233.00ms :: planning 60.01ms :: scheduling 173.00ms :: running 0.00ms :: finishing 173.00ms :: begin 2014-05-19T22:22:43.751Z :: end 2014-05-19T22:22:43.984Z
2014-05-19T22:22:44.088+0000 DEBUG task-notification-3 com.facebook.presto.execution.TaskStateMachine Task 20140519_222243_00007_mfhtx.0.0 is CANCELED
2014-05-19T22:22:44.102+0000 DEBUG 20140519_222243_00007_mfhtx.0.0-0-50 com.facebook.presto.execution.TaskExecutor Split 20140519_222243_00007_mfhtx.0.0-0 (start = 1400538163839, wall = 259 ms, cpu = 0 ms, calls = 2) is finished
The non-coordinator node sometimes (but not consistently) gets a couple of lines in its logs:
2014-05-19T22:19:32.340+0000 DEBUG task-notification-10 com.facebook.presto.execution.TaskStateMachine Task 20140519_222103_00006_mfhtx.0.0 is CANCELED
2014-05-19T22:19:32.352+0000 DEBUG 20140519_222103_00006_mfhtx.0.0-0-48 com.facebook.presto.execution.TaskExecutor Split 20140519_222103_00006_mfhtx.0.0-0 (start = 1400538063009, wall = 343 ms, cpu = 0 ms, calls = 2) is finished
1 Answer
#1
This problem used to be caused by forgetting to configure datasources on the workers, but this is no longer an issue because Presto no longer requires explicit configuration for this (and the config property will likely be removed in a future release).
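For anyone still on an older release (such as the 0.68 build shown in the logs above), the legacy fix was to list the catalogs in the datasources property of each worker's etc/config.properties. A rough sketch of what such a worker config looked like, with placeholder host, port, and memory values (the exact property set varied by version):

coordinator=false
http-server.http.port=8080
task.max-memory=1GB
discovery.uri=http://coordinator-host:8080
# The property in question: if "hive" was not listed here, the worker was never
# assigned Hive splits and queries failed with "No nodes available to run query".
datasources=jmx,hive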