How do I handle query synchronization in PHP?

Date: 2022-09-20 12:18:03

I would like to insert some value into a table, which has an auto-incrementing field as a primary key. Then I want to retrieve the ID by using mysql_insert_id() and use this ID as a foreign key in another table. The problem is - although very unlikely - it may happen that between the first insertion and the later retrieving, another insertion happens into the first table, thus a wrong ID will be given back.


Does PHP handle this automatically, or are my concerns valid? If so, how can I overcome them?


3 Answers

#1


4  

mysql_insert_id() will return the last inserted id on a per-connection basis. So basically, you don't have to worry about concurrent script requests, if that is your worry.


For reference: see the MySQL docs.


Edit:


BTW, you could test this quite easily.


test.php


<?php
    // Sleep if ?sleep=1 is passed, so a second request can insert in between.
    $sleep = isset( $_GET['sleep'] );

    $conn = mysql_connect( 'localhost', 'user', 'password' ); // your parameters
    mysql_select_db( 'your_db', $conn );                      // your db

    $sql = 'INSERT INTO yourtable(id,col1,col2) VALUES(null,"test","test")';
    mysql_query( $sql, $conn );

    if( $sleep ) sleep( 5 );

    // Reports this connection's insert, even if another insert happened meanwhile.
    echo mysql_insert_id( $conn );
?>

Open two browser tabs.

Request this in the first:
http://localhost/test.php?sleep=1

Request this in the second within, say, 4 seconds max:
http://localhost/test.php

The first request should give you a lower ID than the second, even though it finishes later: mysql_insert_id() reports the insert made on its own connection, not the most recent insert on the table.
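
As an aside, the mysql_* extension has since been removed (in PHP 7), but the per-connection guarantee is the same in mysqli; a rough equivalent of test.php might be (credentials and table name are placeholders):

<?php
    // Rough mysqli equivalent of test.php. insert_id is read from $conn,
    // so a concurrent insert on another connection cannot change what is
    // echoed here.
    $conn = new mysqli( 'localhost', 'user', 'password', 'your_db' );

    $conn->query( 'INSERT INTO yourtable(id,col1,col2) VALUES(null,"test","test")' );

    if( isset( $_GET['sleep'] ) ) sleep( 5 );

    echo $conn->insert_id;
?>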

#2


3  

SQL transactions are what you need. In MySQL, InnoDB is the main storage engine that supports transactions (MyISAM does not).

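For illustration, here is a minimal sketch of that approach using mysqli; the connection parameters and the parent/child table names are assumptions, not from the answer, and both tables would need to use InnoDB:

<?php
    // Minimal sketch: insert a parent row, grab its id, and use it as a
    // foreign key in a child row, all inside one transaction.
    mysqli_report( MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT );
    $conn = new mysqli( 'localhost', 'user', 'password', 'your_db' );

    $conn->begin_transaction();
    try {
        $conn->query( 'INSERT INTO parent (col1) VALUES ("test")' );
        $parentId = $conn->insert_id;   // per-connection, like mysql_insert_id()

        $stmt = $conn->prepare( 'INSERT INTO child (parent_id, col2) VALUES (?, ?)' );
        $value = 'test';
        $stmt->bind_param( 'is', $parentId, $value );
        $stmt->execute();

        $conn->commit();    // both rows become visible together
    } catch ( Exception $e ) {
        $conn->rollback();  // undo both inserts on any failure
        throw $e;
    }
?>

Since insert_id is per-connection anyway, what the transaction adds here is atomicity: either both rows are committed or neither is.
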

#3


-2  

There are two general strategies that you can use to ensure that this problem does not impact you. First, you can use transactions (as another author pointed out).


However, from a performance perspective, it may be faster to track ID numbers manually. You can do this with a global ID in the database: a rolling counter that makes IDs in the system globally unique.


Let's say the global ID is 100 and you know you are going to need 6 IDs. You write 106 back to the global ID row in the global ID table, then use 101 for the first entry, which becomes the foreign key in the 102 data point, and so on. This improves performance considerably when working on large data sets.


So if you need to make 100 new inserts at a time, this might be a good idea. If you only need 6 at a time, use transactions.


As suggested by jmucchiello in the comments, you can use an UPDATE statement to ensure that another process is not writing to the global ID entry at the same time. Something like:


UPDATE globaltable SET id = 106 WHERE id = 100

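A hedged PHP sketch of this scheme (the globaltable name and id column come from the statement above; the function name, connection handling, and retry loop are illustrative):

<?php
    // Reserve a block of $count ids atomically: the UPDATE matches zero
    // rows if another process moved the counter first, so we retry.
    function reserveIds( mysqli $conn, $count ) {
        do {
            $current = (int) $conn->query( 'SELECT id FROM globaltable' )
                                  ->fetch_row()[0];
            $conn->query( sprintf(
                'UPDATE globaltable SET id = %d WHERE id = %d',
                $current + $count, $current
            ) );
        } while ( $conn->affected_rows === 0 );  // lost the race; retry

        return $current + 1;    // first id of the reserved block
    }
?>

The caller can then hand out $current + 1 through $current + $count without touching the database again, which is where the bulk-import savings come from.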

I can see that I am getting modded down on this answer, but this really is the best strategy if you have a million rows to import... oh well...


-FT

