Why doesn't Future.get(...) kill the thread?

Time: 2022-04-07 00:17:04

I have an application that uses a Future for asynchronous execution.
I set the timeout parameter on the get method so that the thread should get killed after 10 seconds if it does not get a response:


    Future<RecordMetadata> meta = producer.send(record, new ProducerCallBack());
    RecordMetadata data = meta.get(10, TimeUnit.SECONDS);

But the thread gets killed after 60 seconds:


java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1124)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:823)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:760)
    at io.khinkali.KafkaProducerClient.main(KafkaProducerClient.java:49)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

What am I doing wrong?


2 Solutions

#1



From the docs:


The threshold for time to block is determined by max.block.ms after which it throws a TimeoutException.


Check the Kafka Appender config in logback.xml and look for:


<producerConfig>max.block.ms=60000</producerConfig>
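If the producer is built in code rather than through the Logback appender, a minimal sketch of lowering that threshold could look like the following (the bootstrap server address and the String serializers are assumptions for illustration, not taken from the question):

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerWithShortBlock {
        public static KafkaProducer<String, String> create() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // max.block.ms controls how long send()/metadata fetches may block
            // before a TimeoutException; lowered here from the 60000 ms default.
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");
            return new KafkaProducer<>(props);
        }
    }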

#2



I set the timeout parameter on the get method so that the thread should get killed after 10 seconds if it does not get a response:


If we are talking about Future.get(...) there is nothing about it that "kills" the thread at all. To quote from the javadocs, the Future.get(...) method:


Waits if necessary for at most the given time for the computation to complete, and then retrieves its result, if available.


If the get(...) method times out then it will throw TimeoutException but your thread is free to continue to run. If you want to stop the thread running then you'll need to catch TimeoutException and then call meta.cancel(true) but even that doesn't guarantee that the thread will be "killed". That causes the thread to be interrupted which means that certain methods will throw InterruptedException or the thread needs to be checking for Thread.currentThread().isInterrupted().

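A minimal sketch of that pattern, reusing the producer, record, and ProducerCallBack names from the question (cancel(true) only requests an interrupt; it does not guarantee the underlying work is killed):

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    import org.apache.kafka.clients.producer.RecordMetadata;

    Future<RecordMetadata> meta = producer.send(record, new ProducerCallBack());
    try {
        RecordMetadata data = meta.get(10, TimeUnit.SECONDS);
        System.out.println("Record written to offset " + data.offset());
    } catch (TimeoutException e) {
        // No result within 10 seconds: request cancellation. This interrupts
        // the task if it is running, but the thread is not "killed".
        meta.cancel(true);
    } catch (InterruptedException | ExecutionException e) {
        // Interrupted while waiting, or the send itself failed.
        throw new RuntimeException(e);
    }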

java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.


Yeah, this timeout has nothing to do with the Future.get(...) timeout.

