Concurrent access to a MySQL database using stored procedures

Posted: 2021-04-30 07:10:03

I have a stored procedure that reads and then increments a value in the database. This particular procedure is used by many programs at the same time. I am concerned about concurrency issues, in particular the reader-writer problem. Can anybody please suggest possible solutions?

Thanks.

4 Answers

#1 (score: 13)

First, as stated in another post, use InnoDB. It is the default storage engine as of MySQL 5.5 and is more robust.


Second, look at this page: http://dev.mysql.com/doc/refman/5.5/en/innodb-locking-reads.html


You should use SELECT ... FOR UPDATE to prevent other transactions from locking or modifying the row you are about to update until your transaction completes:

START TRANSACTION;

-- Read the current value and take an exclusive lock on the row;
-- other transactions trying to lock or update it block until COMMIT
SELECT value INTO @value
FROM mytable
WHERE id = 5
FOR UPDATE;

UPDATE mytable
SET value = value + 1
WHERE id = 5;

COMMIT;

This is better than locking the whole table because InnoDB uses row-level locks. The transaction above only locks the rows where id = 5, so another query working with id = 10 is not held up by this one.

#2 (score: 0)

Use InnoDB. Inside the stored procedure, start a transaction before doing anything else and commit at the end. This will resolve the read/write problem.

But be aware that it will slow down concurrent operations. It is fine for cases where only a few concurrent requests are expected at any given time.
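A minimal sketch of this pattern, reusing the table and column names from answer #1 (the procedure name and parameter are assumptions for illustration; this requires a MySQL server):

```sql
-- Wrap the read-and-increment in a transaction inside the procedure so
-- concurrent callers serialize on the row lock rather than racing.
DELIMITER //
CREATE PROCEDURE increment_value(IN p_id INT)
BEGIN
  DECLARE v_current INT;

  START TRANSACTION;

  -- FOR UPDATE takes an exclusive row lock held until COMMIT
  SELECT value INTO v_current
  FROM mytable
  WHERE id = p_id
  FOR UPDATE;

  UPDATE mytable
  SET value = v_current + 1
  WHERE id = p_id;

  COMMIT;
END //
DELIMITER ;
```

Callers then simply run `CALL increment_value(5);` and the locking is handled inside the procedure.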

#3 (score: 0)

Create a separate table (or reuse the original if appropriate) for all of the incremental inserts and use a SUM() for retrieval.


If the number of accumulated rows becomes a concern, periodically use a transaction to sum them back into a single row in the original table. The trade-offs here (eventual consistency in the sum as reads lag writes, or the cost of summing many rows) are most likely less of a problem than multiple writers stalling while they wait for the lock on a single row.
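A sketch of this append-then-SUM() design; all table and column names below are invented for illustration:

```sql
-- Each increment is an INSERT, so writers never contend on one row lock.
CREATE TABLE counter_increments (
  counter_id INT NOT NULL,
  delta      INT NOT NULL DEFAULT 1,
  INDEX (counter_id)
) ENGINE=InnoDB;

-- A writer records an increment for counter 5
INSERT INTO counter_increments (counter_id, delta) VALUES (5, 1);

-- A reader reconstructs the current value
SELECT COALESCE(SUM(delta), 0) AS current_value
FROM counter_increments
WHERE counter_id = 5;
```

The periodic compaction step would, inside a transaction, replace the accumulated rows with a single row carrying their sum.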

#4 (score: 0)

If possible, you can lock the table just before calling the stored procedure, then unlock it immediately after. I had a similar problem and this is how I worked around it.

Example:

-- Hold an exclusive write lock across the procedure call
LOCK TABLES my_table LOW_PRIORITY WRITE;
CALL my_stored_procedure('ff');
UNLOCK TABLES;
