I'm trying to code a book indexer using Python (traditional, 2.7) and SQLite (3).
The code boils down to this sequence of SQL statements:
'select count(*) from tag_dict' ()
/* [(30,)] */
'select count(*) from file_meta' ()
/* [(63613,)] */
'begin transaction' ()
'select id from archive where name=?' ('158326-158457.zip',)
/* [(20,)] */
'select id from file where name=? and archive=?' ('158328.fb2', 20)
/* [(122707,)] */
'delete from file_meta where file=?' (122707,)
'commit transaction' ()
# error: cannot commit - no transaction is active
The isolation level is 'DEFERRED' ('EXCLUSIVE' is no better).
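For context: in the stdlib sqlite3 module, isolation_level only controls when the module itself issues an implicit BEGIN; it never expects you to issue one by hand. A minimal sketch (shown with Python 3's in_transaction attribute, which 2.7 lacks, but the transaction behavior is the same):

```python
import sqlite3

# With a non-None isolation_level, the module opens the transaction
# for you before the first data-modifying (DML) statement.
con = sqlite3.connect(":memory:", isolation_level="DEFERRED")
cur = con.cursor()
cur.execute("create table t (x integer)")   # DDL: no transaction started
print(con.in_transaction)                   # False
cur.execute("insert into t values (1)")     # DML: module issues BEGIN here
print(con.in_transaction)                   # True
con.commit()                                # COMMIT belongs to the connection
print(con.in_transaction)                   # False
```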
I've tried connection.commit() instead of cursor.execute('commit'); nothing useful happened.
- Sure, I've searched * and the Net, but the answers I found are irrelevant.
- Autocommit mode is unacceptable for performance reasons.
- I use only one database file at a time.
- My code runs in a single thread.
- All SQL execution goes through a single function that ensures no more than one cursor is open at a time.
So, what's wrong with transaction here?
If I use connection.commit() (note: there is no connection.begin method!), then I simply lose my data.
Sure, I've double/triple/quadruple-checked file permissions on the database file and its directory.
3 Answers
#1
11
Well, as it often happens, I found the solution just minutes after posting the question.
The solution was found here, and it consists of a single idea:
Never use BEGIN/COMMIT in non-autocommit mode in a Python application; use db.commit() and db.rollback() only!
It sounds odd, but it works.
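Applied to the sequence from the question, the fix looks like this (table names borrowed from the question; a sketch, not the poster's actual code):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level="DEFERRED")
cur = con.cursor()
cur.execute("create table file_meta (file integer)")
cur.executemany("insert into file_meta values (?)", [(122707,), (122708,)])
con.commit()

# No cursor.execute('begin transaction'): the module starts the
# transaction itself on the first DML statement.
cur.execute("delete from file_meta where file=?", (122707,))
con.commit()   # instead of cursor.execute('commit transaction')

cur.execute("select count(*) from file_meta")
print(cur.fetchone())   # (1,)
```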
#2
3
This is a pretty late response, but take a look at APSW if you want finer-grained control over transactions. I ran a few tests on deferred transactions involving reads with pysqlite, and it just doesn't seem to behave correctly.
https://code.google.com/p/apsw/
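Short of switching libraries, the stdlib module can also be told to stay out of transaction management entirely: with isolation_level=None the connection runs in SQLite's own autocommit mode, and manual BEGIN/COMMIT statements then work as written. A sketch (whether this performs acceptably still depends on batching writes into explicit transactions, as the poster notes):

```python
import sqlite3

# isolation_level=None: the module never injects BEGIN/COMMIT,
# so explicit transaction statements behave as plain SQL.
con = sqlite3.connect(":memory:", isolation_level=None)
cur = con.cursor()
cur.execute("create table t (x integer)")
cur.execute("begin")
cur.executemany("insert into t values (?)", [(i,) for i in range(3)])
cur.execute("commit")            # no 'no transaction is active' error
cur.execute("select count(*) from t")
print(cur.fetchone())            # (3,)
```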
#3
0
import sqlite3

connection = sqlite3.connect('books.db')  # filename is illustrative
cursor = connection.cursor()
cursor.executemany("insert into person(firstname, lastname) values (?, ?)", persons)
connection.commit()  # let the module manage the transaction