PHP / MySQL returns no results if the number of rows is greater than 6400

Time: 2021-12-22 06:57:07

I am using a simple PHP query to fetch rows from the database. The table has over 7000 rows, and whenever I fetch all of them the script dies, but when I limit the results to 6400 everything works fine. Is there any limitation in MySQL or PHP that I should be aware of? If so, where do I need to configure these settings? Any help is highly appreciated. Please note that my PHP settings allow a script execution time of 1800 seconds and the memory limit is set to 512MB.

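For reference, both limits mentioned above can be checked (and raised for a single request) from within the script itself; a minimal sketch using the values already stated in the question:

<?php
// Check the limits currently in effect for this script.
echo ini_get('memory_limit'), "\n";        // expected "512M"
echo ini_get('max_execution_time'), "\n";  // expected "1800"

// Or raise them for this request only (values from the question above).
ini_set('memory_limit', '512M');
set_time_limit(1800);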

2 Solutions

#1 (2 votes)

Why do you need that many rows? I'm pretty sure PHP dies because it runs out of memory.

Run your query in a console to see whether you get more than 7000 rows back without issues. If it returns them all there, you can be sure the problem is PHP and not your database, and I'm fairly sure it is PHP.

Whatever you do, it would be better to loop over the data, also known as "pagination", and read it in chunks of, for example, 100 rows and process them: 0-100, 100-200, 200-300, and so on.

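A minimal sketch of that chunked approach, written against the legacy mysql_* API this thread uses (the extension was removed in PHP 7, where mysqli or PDO would be used instead); the connection details, the table name my_table, and the id column are assumptions for illustration:

<?php
$link = mysql_connect('localhost', 'user', 'pass');   // hypothetical credentials
mysql_select_db('mydb', $link);

$chunk  = 100;   // rows per chunk, as suggested above
$offset = 0;

do {
    // Fetch one chunk at a time: 0-100, 100-200, 200-300, ...
    $result = mysql_query(sprintf(
        'SELECT * FROM my_table ORDER BY id LIMIT %d OFFSET %d', $chunk, $offset
    ), $link);

    $fetched = 0;
    while ($row = mysql_fetch_assoc($result)) {
        // Process one row here; only one chunk is held in memory at a time.
        $fetched++;
    }
    mysql_free_result($result);

    $offset += $chunk;
} while ($fetched === $chunk);   // a short or empty chunk means we are done

With roughly 7000 rows, LIMIT/OFFSET pagination is cheap; on much larger tables, seeking with a WHERE id > ? cursor avoids rescanning the skipped rows on every chunk.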

#2 (1 vote)

You have 2 solutions as I see it:

1) The one that @burzum suggested. Really nice, although you would have to empirically establish the maximum chunk size based on your server load (if it's not constant).

2) Use mysql_unbuffered_query()

mysql_unbuffered_query() does have some drawbacks, as described in the PHP manual:

The benefits of mysql_unbuffered_query() come at a cost: you cannot use mysql_num_rows() and mysql_data_seek() on a result set returned from mysql_unbuffered_query(), until all rows are fetched. You also have to fetch all result rows from an unbuffered SQL query before you can send a new SQL query to MySQL, using the same link_identifier.

But since you're dealing with a large dataset, it seems well justified...

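A minimal sketch of option 2, again using the legacy mysql_* API the answer names (in mysqli, the equivalent is passing MYSQLI_USE_RESULT to mysqli_query()); connection details and the table name my_table are placeholders:

<?php
$link = mysql_connect('localhost', 'user', 'pass');   // hypothetical credentials
mysql_select_db('mydb', $link);

// The result set is streamed from the server instead of being buffered in PHP,
// so memory use stays roughly constant regardless of the number of rows.
$result = mysql_unbuffered_query('SELECT * FROM my_table', $link);

while ($row = mysql_fetch_assoc($result)) {
    // Process one row at a time.
}

// As quoted above: mysql_num_rows()/mysql_data_seek() are not available here,
// and every row must be fetched before sending another query on the same $link.
mysql_free_result($result);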
