Is there any easy way to have a system-wide mutex in Python on Linux? By "system-wide", I mean the mutex will be used by a group of Python processes; this is in contrast to a traditional mutex, which is used by a group of threads within the same process.
EDIT: I'm not sure Python's multiprocessing package is what I need. For example, I can execute the following in two different interpreters:
from multiprocessing import Lock
L = Lock()
L.acquire()
When I execute these commands simultaneously in two separate interpreters, I want one of them to hang. Instead, neither hangs; it appears they aren't acquiring the same mutex.
4 Answers
#1
23
The "traditional" Unix answer is to use file locks. You can use lockf(3)
to lock sections of a file so that other processes can't edit it; a very common abuse is to use this as a mutex between processes. The python equivalent is fcntl.lockf.
Traditionally you write the PID of the locking process into the lock file, so that deadlocks due to processes dying while holding the lock are identifiable and fixable.
This gets you what you want, since your lock is in a global namespace (the filesystem) and accessible to all processes. This approach also has the perk that non-Python programs can participate in your locking. The downside is that you need a place for this lock file to live; also, some filesystems don't actually lock correctly, so there's a risk that it will silently fail to achieve exclusion. You win some, you lose some.
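A minimal sketch of this pattern (the lock-file path is an arbitrary placeholder; any file that all participating processes agree on will do):

import fcntl
import os

LOCKFILE = "/tmp/myapp.lock"  # hypothetical path, chosen for illustration only

# Keep the file object alive for as long as the lock is held;
# closing it (or the process exiting) releases the lock.
lockfile = open(LOCKFILE, "w")
fcntl.lockf(lockfile, fcntl.LOCK_EX)  # blocks until the lock is free
try:
    # Record our PID so a stale lock can be traced back to its owner.
    lockfile.write(str(os.getpid()) + "\n")
    lockfile.flush()
    # ... critical section: only one process runs this at a time ...
finally:
    fcntl.lockf(lockfile, fcntl.LOCK_UN)
    lockfile.close()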
#2
11
The POSIX standard specifies inter-process semaphores which can be used for this purpose. http://linux.die.net/man/7/sem_overview
The multiprocessing module in Python is built on this API and others. In particular, multiprocessing.Lock provides a cross-process "mutex". http://docs.python.org/library/multiprocessing.html#synchronization-between-processes
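For completely unrelated processes, a named POSIX semaphore can be opened by name from each of them. This is only a sketch and assumes the third-party posix_ipc package (not part of the standard library, and not mentioned above) is installed:

import posix_ipc

# Every process that opens a semaphore with this name shares the same one.
# O_CREAT creates it if it doesn't exist yet; initial_value=1 makes it behave as a mutex.
sem = posix_ipc.Semaphore("/my_global_lock", flags=posix_ipc.O_CREAT, initial_value=1)

sem.acquire()  # blocks here if another process already holds it
try:
    pass  # critical section
finally:
    sem.release()
    sem.close()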
EDIT to respond to edited question:
In your proof of concept each process is constructing a Lock(). So you have two separate locks. That is why neither process waits. You will need to share the same lock between processes. The section I linked to in the multiprocessing documentation explains how to do that.
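Roughly, the lock has to be created once and handed to each process, e.g. by a common parent; a minimal sketch (the function and process names are made up for illustration):

from multiprocessing import Process, Lock

def worker(lock, name):
    with lock:  # both workers contend for the same lock object
        print(name, "has the lock")

if __name__ == "__main__":
    lock = Lock()  # created once, in the parent
    procs = [Process(target=worker, args=(lock, "worker-%d" % i)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

Note that this only helps when the processes share a common parent; two independently started interpreters, as in the question, would still need a file lock or a named semaphore.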
#3
4
Try the ilock library:
from ilock import ILock

with ILock('Unique lock name'):
    # The code should be run as a system-wide single instance
    ...
#4
0
For a system-wide mutex that enables the synchronization of absolutely separate processes (i.e., to INCLUDE Linux processes that do NOT belong to the same process tree), simply use fcntl.flock. I suppose that using a memory-backed file under Linux's /run/shm folder may make it perform faster.
See more here.
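A minimal sketch of that idea (the file name is an arbitrary choice, and it assumes your distribution provides /run/shm):

import fcntl

# /run/shm is a tmpfs on many Linux systems, so the lock file never touches disk.
with open("/run/shm/my_global.lock", "w") as lockfile:
    fcntl.flock(lockfile, fcntl.LOCK_EX)  # blocks until no other process holds the lock
    try:
        pass  # critical section
    finally:
        fcntl.flock(lockfile, fcntl.LOCK_UN)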