Doctrine 1.2 hydration fails with HYDRATE_RECORD, but works with HYDRATE_ARRAY

Date: 2021-08-11 07:22:49

I have code that runs perfectly with Doctrine_Core::HYDRATE_ARRAY, but crashes with Doctrine_Core::HYDRATE_RECORD. The page loads for about two minutes and then shows a standard browser error message, something like

Connection to the server was lost during the page load.

(My browser is localized, so that's not the exact error message, but a translation.)

Output of SHOW PROCESSLIST from the MySQL command line:

+-----+--------+-----------------+--------+---------+------+-------+------------------+
| Id  | User   | Host            | db     | Command | Time | State | Info             |
+-----+--------+-----------------+--------+---------+------+-------+------------------+
| 698 | root   | localhost:53899 | NULL   | Query   |    0 | NULL  | show processlist |
| 753 | *user* | localhost:54202 | *db1*  | Sleep   |  102 |       | NULL             |
| 754 | *user* | localhost:54204 | *db2*  | Sleep   |  102 |       | NULL             |
+-----+--------+-----------------+--------+---------+------+-------+------------------+

The code itself:

 $q = Doctrine_Query::create()
        ->select("fc.*")
        ->from("Card fc")
        ->leftJoin("fc.Fact f")
        ->where("f.deckid = ?", $deck_id);
  // The HYDRATE_RECORD call below is the one that crashes; switching to the
  // commented HYDRATE_ARRAY line makes it work.
  $card = $q->execute(array(), Doctrine_Core::HYDRATE_RECORD);
  //$card = $q->execute(array(), Doctrine_Core::HYDRATE_ARRAY);

So I thought the query was not being populated with the correct SQL. However, $q->getSqlQuery() outputs the correct SQL, which runs perfectly when executed from the command line or phpMyAdmin.
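
To make that check concrete, here is a minimal sketch (reusing the $q and $deck_id from the snippet above; getFlattenedParams() is, to my knowledge, the Doctrine 1.2 helper that returns the bound values):

    // Sketch: print exactly what Doctrine will send, then run it by hand
    // in the MySQL client or phpMyAdmin.
    echo $q->getSqlQuery() . "\n";        // generated SQL with ? placeholders
    print_r($q->getFlattenedParams());    // bound values, here just $deck_id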

Server configuration:

Apache/2.2.4 (Win32) mod_ssl/2.2.4 OpenSSL/0.9.8k mod_wsgi/3.3 Python/2.7.1 PHP/5.2.12
MySQL 5.1.40-community

Everything runs on localhost, so that's not a connection issue.

The amount of data for that specific query is very small - about a dozen records - so it has nothing to do with memory or time limits. safe_mode is off, display_errors is on, error_reporting is 6135.

Could somebody point out any hints or caveats I'm missing?

UPDATE: what's weirdest is that it works with HYDRATE_RECORD from time to time.

UPDATE 2: it crashes when I try to fetch something from the result, e.g. getFirst(). Without fetching it works, but I really have no use for a query I can't fetch data from.

UPDATE 3: I've worked around this issue, but I'm still interested in what's going on.

Update 4:

SQL query:

SELECT f.id AS f__id, f.createdat AS f__createdat, f.updatedat AS f__updatedat,
    f.flashcardmodelid AS f__flashcardmodelid, f.source AS f__source, 
    f.content AS f__content, f.md5 AS f__md5 
FROM flashcard f 
LEFT JOIN fact f2 ON f.id = f2.flashcardid AND (f2.deleted_at IS NULL) 
WHERE (f2.deckid = 19413)

Output:

f__id   f__createdat            f__updatedat            f__flashcardmodelid     f__source           f__content
245639  2011-08-05 20:00:00     2011-08-05 20:00:00     179                     jpod lesson 261     {"source":"\u7f8e\u5473\u3057\u3044","target":"del... 

So the query itself is OK, and the data is fetched as expected. Do you need the model definitions?

Update 5: When running the query with HYDRATE_RECORD, httpd.exe consumes 100% of one of the CPU cores.

Final Update: Don't know why, but now it works... I haven't changed anything. Looks like it was just waiting for me to place a bounty on this question. :) Still, since I've already placed the bounty, any idea of what difference between HYDRATE_ARRAY and HYDRATE_RECORD might crash the script is appreciated.

2 Answers

#1


1  

I've seen similar behavior when dumping the whole record set, or even a single record, in some way (print_r, var_dump, and so on). This is caused by the fact that Doctrine uses a highly structured class hierarchy that contains a lot of circular references. This obviously isn't the case when you use Doctrine_Core::HYDRATE_ARRAY.

So any of the mentioned functions (though I think there may be other ways to reproduce it) will start an endless loop that causes 100% CPU usage until it reaches a kill point.

Don't know if this can help in your case.
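
To illustrate, a minimal sketch of how to dump records safely, assuming the same query as in the question: convert the collection to plain arrays first, or use Doctrine's own dump helper, instead of running print_r/var_dump on the objects themselves.

    $cards = $q->execute(array(), Doctrine_Core::HYDRATE_RECORD);

    // toArray(true) flattens the collection (and its loaded relations) into
    // plain arrays, so print_r no longer follows circular object references.
    print_r($cards->toArray(true));

    // Doctrine 1.2 also provides a dump helper that is aware of its own objects.
    Doctrine_Core::dump($cards->getFirst());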

#2


0  

I have had a similar issue with Doctrine 1.2, and found that PHP reported a fatal error due to exceeding the memory limit or execution time; sometimes the situation even caused PHP to segfault.

You can find these errors in your Apache error log file. On my OS X box those files are in /var/log/apache2/error_log. You can increase the allowed memory or max execution time in your PHP configuration.
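
A minimal sketch of that, with assumed values (adjust them to your own setup), placed at the top of the script while debugging:

    // Raise the limits for this request only, to see whether record hydration
    // simply hits the memory or execution-time ceiling.
    ini_set('memory_limit', '512M');
    set_time_limit(300);              // seconds

    // Make fatal errors visible instead of a silently dropped connection.
    ini_set('display_errors', '1');
    error_reporting(E_ALL);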

In my case, it was caused by the number of records fetched from the database, which led to excessive memory consumption. Hydrating Doctrine_Records can be a relatively expensive operation at times.
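
If memory does turn out to be the culprit, one option available in Doctrine 1.2 (a sketch, not something from the original question) is on-demand hydration, which materializes one record at a time while iterating instead of building the whole collection up front:

    // HYDRATE_ON_DEMAND hydrates records lazily during iteration, keeping
    // only one record in memory at a time.
    $cards = $q->execute(array(), Doctrine_Core::HYDRATE_ON_DEMAND);
    foreach ($cards as $card) {
        echo $card->id, "\n";
    }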

Just out of curiosity, how many rows do you expect in your result?
