I looked online and found some Stack Overflow discussions and ActiveState recipes for running code with a timeout. There seem to be a few common approaches:
- Run the code in a thread and join it with a timeout. If the timeout elapses, kill the thread. This is not directly supported in Python (the recipes use the private _Thread__stop method), so it is bad practice.
- Use signal.SIGALRM - but this approach does not work on Windows!
- Use a subprocess with a timeout - but this is too heavy. What if I want to start interruptible tasks often? I don't want to fire up a process for each one!
So, what is the right way? I'm not asking about workarounds (e.g. use Twisted and async I/O), but an actual way to solve the actual problem: I have some function and I want to run it only with some timeout. If the timeout elapses, I want control back. And I want it to work on Linux and Windows.
9 Answers
#1
9
A completely general solution to this honestly does not exist. You have to use the right solution for a given domain.
- If you want timeouts for code you fully control, you have to write it to cooperate. Such code has to be able to break up into little chunks in some way, as in an event-driven system. You can also do this with threading if you can ensure nothing will hold a lock too long, but handling locks correctly is actually pretty hard.
- If you want timeouts because you're afraid the code is out of control (for example, if you're afraid the user will ask your calculator to compute 9**(9**9)), you need to run it in another process. This is the only easy way to isolate it sufficiently. Running it in your event system or even in a different thread will not be enough. It is also possible to break things up into little chunks, as in the other solution, but that requires very careful handling and usually isn't worth it; in any event, it doesn't allow you to do the exact same thing as just running the Python code.
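A minimal sketch of the process-isolation approach this answer describes (the helper names are mine, not from the answer; assumes a platform where `multiprocessing` can pickle or fork the target):

```python
import multiprocessing
import time

def untrusted(n):
    # stand-in for runaway work, e.g. computing 9**(9**9)
    time.sleep(n)

def run_with_timeout(target, args, timeout):
    """Run target in a separate process; kill it if it exceeds timeout.
    Returns True if it finished in time, False if it was terminated."""
    p = multiprocessing.Process(target=target, args=args)
    p.start()
    p.join(timeout)
    if p.is_alive():
        p.terminate()  # forcibly kill the runaway process
        p.join()
        return False   # timed out
    return True        # finished in time

if __name__ == "__main__":
    run_with_timeout(untrusted, (5,), 0.5)   # killed after ~0.5 s
```

Killing a whole process is safe in a way that killing a thread is not: the OS reclaims all its resources, which is exactly the isolation the answer is talking about.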
#2
9
What you might be looking for is the multiprocessing module. If subprocess is too heavy, then this may not suit your needs either.
import time
import multiprocessing

def do_this_other_thing_that_may_take_too_long(duration):
    time.sleep(duration)
    return 'done after sleeping {0} seconds.'.format(duration)

pool = multiprocessing.Pool(1)
print('starting....')
res = pool.apply_async(do_this_other_thing_that_may_take_too_long, [8])
for timeout in range(1, 10):
    try:
        print('{0}: {1}'.format(timeout, res.get(timeout)))
    except multiprocessing.TimeoutError:
        print('{0}: timed out'.format(timeout))
print('end')
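For reference, the same pattern can be sketched with the stdlib concurrent.futures API (my adaptation, not part of the original answer; the helper name is hypothetical):

```python
import concurrent.futures
import time

def slow_task(duration):
    time.sleep(duration)
    return 'done after {0} seconds'.format(duration)

def call_with_timeout(func, args, timeout):
    # run func in a single worker process and wait at most `timeout` seconds
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as ex:
        future = ex.submit(func, *args)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            return None  # timed out; the worker keeps running until shutdown
```

Note that exiting the `with` block still waits for the running worker to finish, so this gives you control back but does not kill the stray task.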
#3
4
I found this in the eventlet library:
http://eventlet.net/doc/modules/timeout.html
from eventlet.timeout import Timeout
timeout = Timeout(seconds, exception)
try:
... # execution here is limited by timeout
finally:
timeout.cancel()
#4
3
If it's network related you could try:
import socket
socket.setdefaulttimeout(number)
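For illustration, the default applies to every socket created afterwards, so blocking network calls raise socket.timeout instead of hanging forever (a small sketch, not from the answer):

```python
import socket

socket.setdefaulttimeout(5.0)  # every new socket now times out after 5 s
s = socket.socket()            # inherits the 5.0-second default
print(s.gettimeout())
s.close()
```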
#5
2
For "normal" Python code that doesn't linger for prolonged times in C extensions or I/O waits, you can achieve your goal by setting a trace function with sys.settrace() that aborts the running code when the timeout is reached.
Whether that is sufficient depends on how cooperative or malicious the code you run is. If it's well-behaved, a tracing function is sufficient.
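A rough sketch of what such a trace function could look like (my illustration of the idea; the answer itself includes no code, and the names are assumed):

```python
import sys
import time

class CodeTimeout(Exception):
    pass

def run_with_trace_timeout(func, seconds):
    # abort well-behaved pure-Python code by raising from a trace function;
    # raising inside the tracer propagates into the traced code
    deadline = time.monotonic() + seconds
    def tracer(frame, event, arg):
        if time.monotonic() > deadline:
            raise CodeTimeout('timed out after %s seconds' % seconds)
        return tracer  # keep receiving line events for this frame
    old_trace = sys.gettrace()
    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(old_trace)
```

The tracer only fires on Python-level events, which is exactly the caveat above: a long call into a C extension or a blocking I/O wait produces no events, so the timeout cannot fire there.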
#6
2
Another way is to use faulthandler:
import time
import faulthandler

faulthandler.enable()
try:
    # dump the traceback of every thread after 3 seconds
    # (pass exit=True to terminate the process instead of just dumping)
    faulthandler.dump_traceback_later(3)
    time.sleep(10)
finally:
    faulthandler.cancel_dump_traceback_later()
N.B.: The faulthandler module is part of the stdlib since Python 3.3, where the functions are named dump_traceback_later and cancel_dump_traceback_later.
#7
0
If you're running code that you expect to die after a set time, then you should write it properly so that there aren't any negative effects on shutdown, no matter whether it's a thread or a subprocess. A command pattern with undo would be useful here.
So, it really depends on what the thread is doing when you kill it. If it's just crunching numbers, who cares if you kill it? If it's interacting with the filesystem and you kill it, then maybe you should really rethink your strategy.
What is supported in Python when it comes to threads? Daemon threads and joins. Why does Python let the main thread exit if you've joined a daemon while it's still active? Because it's understood that someone using daemon threads will (hopefully) write the code in a way that it won't matter when that thread dies. Giving a timeout to a join and then letting main die, and thus taking any daemon threads with it, is perfectly acceptable in this context.
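A small sketch of that join-with-timeout pattern (the helper name is mine, not from the answer):

```python
import threading

def run_daemon_with_timeout(func, timeout):
    # start func in a daemon thread and wait at most `timeout` seconds;
    # when the main thread exits, any still-running daemon dies with it
    t = threading.Thread(target=func, daemon=True)
    t.start()
    t.join(timeout)
    return not t.is_alive()  # True if func finished within the timeout
```

This gives control back after the timeout, but note the caveat from the answer: the daemon thread keeps running until the process exits, so it must be written so that dying mid-work is harmless.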
#8
0
I've solved it this way (it worked great for me on Windows and isn't heavy at all); I hope it's useful for someone:
import threading
import time

class LongFunctionInside(object):
    lock_state = threading.Lock()
    working = False

    def long_function(self, timeout):
        self.working = True
        # watchdog thread that sleeps for `timeout` seconds, then clears the flag
        timeout_work = threading.Thread(name="thread_name", target=self.work_time, args=(timeout,))
        timeout_work.daemon = True
        timeout_work.start()
        while True:  # endless/long work
            time.sleep(0.1)  # at this rate the CPU is almost unused
            if not self.working:  # keep working while the flag is still True
                break
        self.set_state(True)

    def work_time(self, sleep_time):
        # watchdog: sleep for the specified time; on waking up, if the long
        # function is still working, clear the shared flag to stop it
        time.sleep(sleep_time)
        if self.working:
            self.set_state(False)

    def set_state(self, state):  # lock-protected state change
        with self.lock_state:
            self.working = state

lw = LongFunctionInside()
lw.long_function(10)
The main idea is to create a thread that sleeps in parallel with the "long work" and, on waking up (after the timeout), changes the lock-protected flag; the long function checks that flag during its work. I'm pretty new to Python programming, so if this solution has fundamental errors (resources, timing, deadlock problems), please respond.
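One common refinement of this pattern (my suggestion, not part of the original answer) replaces the hand-rolled flag and lock with threading.Event and threading.Timer, which are already thread-safe:

```python
import threading
import time

def long_function(timeout):
    cancelled = threading.Event()  # thread-safe flag, no manual lock needed
    watchdog = threading.Timer(timeout, cancelled.set)
    watchdog.daemon = True
    watchdog.start()
    iterations = 0
    while not cancelled.is_set():  # endless/long work, checks the flag each step
        time.sleep(0.01)
        iterations += 1
    watchdog.cancel()  # no-op here, but needed if the work finishes early
    return iterations
```

Event.set() and Event.is_set() handle the synchronization internally, so the set_state helper and its lock disappear entirely.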
#9
0
Solving it with the 'with' construct, merging the solution from
- Timeout function if it takes too long to finish

with this thread, which works better.
import threading
import time

class Exception_TIMEOUT(Exception):
    pass

class linwintimeout:
    def __init__(self, f, seconds=1.0, error_message='Timeout'):
        self.seconds = seconds
        self.thread = threading.Thread(target=f)
        self.thread.daemon = True
        self.error_message = error_message

    def handle_timeout(self):
        raise Exception_TIMEOUT(self.error_message)

    def __enter__(self):
        self.thread.start()
        self.thread.join(self.seconds)

    def __exit__(self, type, value, traceback):
        if self.thread.is_alive():
            return self.handle_timeout()

def function():
    while True:
        print("keep printing ...", end=' ')
        time.sleep(1)

try:
    with linwintimeout(function, seconds=5.0, error_message='exceeded timeout of %s seconds' % 5.0):
        pass
except Exception_TIMEOUT as e:
    print(" attention !! exceeded timeout, giving up ... %s " % e)