My project uses an open source PHP MySQL library: https://github.com/ajillion/PHP-MySQLi-Database-Class

But when the project runs, it reports this error:

"Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4294967296 bytes) in /home1/flipalbu/public_html/kvisofttest/login-admin/Lib/class.MysqliDb.php on line 422"

My server is: Linux x86_64
PHP version: 5.4.17
MySQL version: 5.5.32
memory_limit = 128M

Line 422 is:

call_user_func_array(array($stmt, 'bind_result'), $parameters);

The query part of the code:

$db = new MysqliDb('LocalHost', 'root', 'PASSWD', 'DB');
$wqdb = $db->query("SELECT * FROM db_table");
foreach ($wqdb as $row) {
    $con .= $row['ID'];
}
echo $con;

Is there any way to solve it?
/** Code at the error location **/

protected function _dynamicBindResults(mysqli_stmt $stmt)
{
    $parameters = array();
    $results = array();
    $meta = $stmt->result_metadata();
    $row = array();
    while ($field = $meta->fetch_field()) {
        $row[$field->name] = null;
        $parameters[] = & $row[$field->name];
    }
    call_user_func_array(array($stmt, 'bind_result'), $parameters);
    while ($stmt->fetch()) {
        $x = array();
        foreach ($row as $key => $val) {
            $x[$key] = $val;
        }
        array_push($results, $x);
    }
    return $results;
}
4 Answers
#1
21
I read this bug report here: https://bugs.php.net/bug.php?id=51386

Your problem seems to happen because there is a longblob or longtext column in the table.

longtext / longblob columns have a maximum length of 4294967295 bytes [4GB]; that's why mysqli tries to allocate that much memory for the buffer, to be sure nothing is lost. I would suggest that you use mediumtext (16777215 bytes [16MB] max length) instead, which is usually enough for everything.
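If you go that route, the change is a single ALTER TABLE statement, which could be issued through the same library; the table and column names below are placeholders, and note that values longer than 16MB would be truncated by the conversion:

```php
<?php
// Hypothetical: change a longtext column to mediumtext (16MB max),
// so mysqli no longer tries to bind a 4GB buffer for it.
// Table name, column name, and credentials are placeholders.
$db = new MysqliDb('LocalHost', 'root', 'PASSWD', 'DB');
$db->query('ALTER TABLE db_table MODIFY description MEDIUMTEXT');
```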
Update: Because this answer has seen some activity, I am adding this solution from Phil_1984 (see comments):

"I use mysqli and, after reading that quote from the PHP dev, adding a $stmt->store_result(); between execute and bind_result seems to fix the issue for me."

=> If you use $stmt->store_result(), you can use mysqli with longblob / longtext columns without getting the error.
-
Old answer: I suggest that you either change the column to another type (mediumtext) or use PDO (I think it doesn't have that problem). But if you want to keep the column as longtext, you have to switch your MySQL library.
Quote from a PHP dev:

"This is a known limitation of ext/mysqli when using libmysql (always in 5.2 and earlier, and when libmysql is enabled in 5.3). The reason is that the server does not send very specific metadata about the column. A longtext has a max length of 4G and ext/mysqli tries to bind with the max length, to be sure no data loss occurs (the data wouldn't fit in the bind buffer at the C level). However, that means 4G for a longtext/longblob column. ext/mysqli has been changed to offer a way to work around that: you need to call mysqli_stmt_store_result(), which will store the data locally; this of course means higher memory usage for PHP. However, because you use libmysql, this won't hit PHP's memory limit. During store_result the max_length of every column is calculated, and then when bind_result is executed only a buffer of size max_length is allocated, which will definitely be lower than 4G. In short: prepare, execute, store_result, bind_result, fetch...fetch...fetch."
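That sequence (prepare, execute, store_result, bind_result, fetch) can be sketched with plain mysqli prepared statements; the connection details, table, and longtext column name below are hypothetical:

```php
<?php
// Hypothetical connection parameters; adjust to your environment.
$mysqli = new mysqli('localhost', 'root', 'PASSWD', 'DB');

$stmt = $mysqli->prepare('SELECT id, body FROM articles'); // `body` is a longtext column
$stmt->execute();

// Buffer the result set client-side. After this call mysqli knows the real
// max_length of each column, so bind_result allocates buffers of that size
// instead of the 4GB theoretical maximum of longtext/longblob.
$stmt->store_result();

$stmt->bind_result($id, $body);
while ($stmt->fetch()) {
    echo $id, ': ', strlen($body), " bytes\n";
}
$stmt->free_result();
$stmt->close();
```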
#2
4
If you're trying to read an entire table in one go, and that table has a lot of rows and columns, then running out of memory is inevitable. You can put it off by raising the memory limit in php.ini, but the problem will only recur once you add a few thousand more rows.

You need to rewrite your script to be more sensible about what it fetches. If you only need specific records, then pulling down the entire table and searching the result set for the row(s) you want is horribly inefficient. Use a WHERE clause to specify what you really want to get. The rule of thumb with PHP/SQL applications is: "Use SQL wherever possible to specify what you want, then do what you need to do with it in PHP."

Of course, there may be an entirely legitimate reason why you need to process an entire table in PHP. In that case, you should fetch the data in chunks (say, 100 rows at a time) with LIMIT and OFFSET, process those rows, get the next chunk, and so on until you've gone through the entire table. That uses far less memory than trying to load the entire table at once.
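A minimal sketch of that chunked approach, reusing the hypothetical connection and table from the question:

```php
<?php
// Fetch db_table in chunks of 100 rows so only one chunk is in memory
// at a time. Credentials and table name are the question's placeholders.
$db = new MysqliDb('LocalHost', 'root', 'PASSWD', 'DB');

$chunkSize = 100;
$offset = 0;
$con = '';
do {
    $rows = $db->query("SELECT ID FROM db_table LIMIT $chunkSize OFFSET $offset");
    foreach ($rows as $row) {
        $con .= $row['ID']; // process one row at a time
    }
    $offset += $chunkSize;
} while (count($rows) === $chunkSize); // a short chunk means we reached the end
echo $con;
```

Selecting only the ID column also avoids pulling the large longtext/longblob columns into PHP at all.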
#3
1
That doesn't seem like a huge table! It seems like an endless loop! It tries to allocate about 4GB; I don't think you have such a big table...

Check that you don't create a loop here:

call_user_func_array(array($stmt, 'bind_result'), $parameters);

Maybe you should post the code surrounding this line.
#4
0
You are exceeding the maximum available memory. You have two options:

- Increase the maximum memory allowed for each PHP script, either in configuration (the memory_limit directive in php.ini) or at execution time using ini_set('memory_limit', '200M').

- Improve the code so it handles only the required information.
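The runtime variant of the first option is a one-liner; '200M' is just the example figure from above, so pick whatever your server can actually afford:

```php
<?php
// Raise the per-script memory limit for this request only.
// This does not change php.ini; other scripts keep the configured limit.
ini_set('memory_limit', '200M');
echo ini_get('memory_limit'); // prints "200M"
```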