Python / SQLite - database is locked despite large timeout

Time: 2021-05-25 18:20:57

I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a "database is locked" error. I have two scripts, one to load data into the database and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds:

cx = sqlite.connect("database.sql", timeout=30.0)

And I think I can see some evidence of the timeouts, in that I occasionally get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1 - and how do I stop that being printed?) dumped in the middle of my curses-formatted output screen. But the delay never gets anywhere near the 30-second timeout, and one script or the other still keeps crashing again and again because of this. I'm running RHEL 5.4 on a 64-bit 4-CPU HS21 IBM blade, and I've heard some mention of issues with multi-threading, though I'm not sure whether that's relevant here. The packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside Red Hat's official provisions is not a great option for me. Possible, but not desirable given the environment in general.

I had autocommit=1 on previously in both scripts, but have since disabled it in both; I am now calling cx.commit() in the inserting script and not committing at all in the select script. Ultimately, since only one script ever actually makes any modifications, I don't really see why this locking should ever happen. I have noticed that it gets significantly worse over time as the database grows. It was recently at 13 MB with 3 equal-sized tables, which was about one day's worth of data. Creating a new file has significantly improved this, which seems understandable, but ultimately the timeout just doesn't seem to be obeyed.
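
For illustration, here's a stripped-down sketch of the pattern I'm describing; the table name and values are invented, and the real scripts are more involved:

    import sqlite

    # Inserting script (simplified): explicit commit after each write so
    # the write lock is released as soon as possible.
    cx = sqlite.connect("database.sql", timeout=30.0)
    cur = cx.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, value REAL)")
    cur.execute("INSERT INTO readings VALUES ('2021-05-25 18:20:57', 1.23)")
    cx.commit()

    # Select script (simplified): reads only, never commits.
    cx2 = sqlite.connect("database.sql", timeout=30.0)
    cur2 = cx2.cursor()
    cur2.execute("SELECT ts, value FROM readings")
    rows = cur2.fetchall()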

Any pointers very much appreciated.

EDIT: since asking, I have been able to restructure my code slightly and use a signal to periodically write between 0 and 150 updates in one transaction every 5 seconds (roughly as sketched below). This has significantly reduced the occurrences of the locking, to less than one an hour as opposed to once every minute or so. I guess I could go further by ensuring that the times I write data are offset by a few seconds from when I read data in the other script, but fundamentally I'm working around an issue as I perceive it, making a timeout not required, which still doesn't seem right. Ta.
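
The restructured writer now looks roughly like this (a sketch only; the queueing is simplified and the names are invented):

    import signal
    import sqlite

    cx = sqlite.connect("database.sql", timeout=30.0)
    pending = []  # complete INSERT/UPDATE statements queued between flushes

    def flush(signum, frame):
        # Write everything queued in the last 5 seconds in one transaction,
        # so the write lock is taken once per interval rather than per row.
        cur = cx.cursor()
        for stmt in pending:
            cur.execute(stmt)
        cx.commit()
        del pending[:]
        signal.alarm(5)  # re-arm the timer for the next flush

    signal.signal(signal.SIGALRM, flush)
    signal.alarm(5)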

3 Answers

#1 (score: 2)

In early versions of pysqlite, the timeout parameter to sqlite.connect is apparently interpreted as milliseconds. So your timeout=30.0 should be timeout=30000.
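
In other words, assuming that interpretation holds for the packaged python-sqlite 1.1.7 (worth verifying against its docs):

    import sqlite

    # If timeout is read as milliseconds here, 30.0 means 30 ms, which would
    # explain near-instant "database is locked" errors; 30000 would be 30 s.
    cx = sqlite.connect("database.sql", timeout=30000)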

#2 (score: 0)

SQLite uses database-level locking for every write (update/insert/delete/...). IMHO, this lock is held until the transaction ends, and AFAIK it is a single lock shared across threads and processes.

So I'd try explicitly ending both the transaction and the connection in the writing script, explicitly committing even in the reading script, and then debugging the concurrency issues from there.
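
For example (a sketch, with an invented table name), the reading script could end its transaction explicitly after each batch of queries:

    import sqlite

    cx = sqlite.connect("database.sql", timeout=30.0)
    cur = cx.cursor()
    cur.execute("SELECT COUNT(*) FROM readings")
    print cur.fetchone()[0]
    # Committing is harmless for a read, but it ends the open transaction
    # and releases any shared lock the reader was holding.
    cx.commit()
    cur.close()
    cx.close()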

#3 (score: 0)

SQLite is simply not optimized for write-heavy workloads, nor does it pretend to be (but it doesn't mind writing quite a lot in one transaction). It sounds to me like you might be getting to the point where you need to switch to another database like MySQL, PostgreSQL, Oracle or DB2. Some of those options are expensive indeed, but for some workloads that's what you need. (Also note that write-heavy workloads tend to be better done with a dedicated database server solution too, despite the fact that that pushes up deployment costs and complexity. Some things just cost.)
