I have been running into this issue every time I try to sync a medium-sized JSON object to my database so that we can run some reporting on it. From looking into what can cause it, I have come across these links on the matter:
http://blog.corrlabs.com/2013/04/mysql-prepared-statement-needs-to-be-re.html
http://bugs.mysql.com/bug.php?id=42041
Both seem to point me in the direction of table_definition_cache. However, both say the issue is caused by a mysqldump running on the server at the same time, and I can assure you that is not the case here. I have also slimmed the query down so it inserts only one object at a time.
public function fire($job, $data)
{
    foreach (unserialize($data['message']) as $org)
    {
        // Ignore ID 33421: it contains all users in the system
        // and will time out.
        if ($org->id != 33421) {
            $organization = new Organization();
            $organization->orgsync_id = $org->id;
            $organization->short_name = $org->short_name;
            $organization->long_name = $org->long_name;
            $organization->category = $org->category->name;
            $organization->save();

            $org_groups = $this->getGroupsInOrganization($org->id);

            // An int return value signals an error code rather than
            // a list of groups.
            if (!is_int($org_groups))
            {
                foreach ($org_groups as $group)
                {
                    foreach ($group->account_ids as $account_id)
                    {
                        $student = Student::where('orgsync_id', '=', $account_id)->first();

                        if (is_object($student))
                        {
                            $student->organizations()->attach($organization->id, array('is_officer' => ($group->name == 'Officers')));
                        }
                    }
                }
            }
        }
    }

    $job->delete();
}
This is the code that is running when the error is thrown. The error normally comes in this form:
SQLSTATE[HY000]: General error: 1615 Prepared statement needs to be re-prepared (SQL: insert into `organization_student` (`is_officer`, `organization_id`, `student_id`) values (0, 284, 26))
It is then followed by this error, repeated three times:
SQLSTATE[HY000]: General error: 1615 Prepared statement needs to be re-prepared (SQL: insert into `organizations` (`orgsync_id`, `short_name`, `long_name`, `category`, `updated_at`, `created_at`) values (24291, SA, Society of American, Professional, 2014-09-15 16:26:01, 2014-09-15 16:26:01))
If anyone can point me in the right direction I would be very grateful. I am more curious about what actually triggers this error than about the fix for this specific case. It also seems to be somewhat common in Laravel applications that use the ORM.
2 Answers
#1
7
While mysqldump is the commonly reported cause of this error, it is not the only one.
In my case, running artisan migrate on any database also triggers this error for other databases on the same server.
http://bugs.mysql.com/bug.php?id=42041 mentions the table locks/flushes that a mysqldump performs, so it is worth checking whether you have any migrations, locks, or flushes happening at the same time.
Failing that, try switching the prepares to emulated:
'options' => [
\PDO::ATTR_EMULATE_PREPARES => true
]
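For context, that option belongs in the connection settings. Below is a minimal sketch of where it sits in the mysql connection of a Laravel config/database.php; the surrounding keys are the framework's usual defaults and the credential values are placeholders:

// config/database.php (sketch; credential values are placeholders)
'connections' => array(
    'mysql' => array(
        'driver'    => 'mysql',
        'host'      => 'localhost',
        'database'  => 'database',
        'username'  => 'root',
        'password'  => '',
        'charset'   => 'utf8',
        'collation' => 'utf8_unicode_ci',
        'prefix'    => '',
        // With emulation on, PDO interpolates bindings client-side and
        // sends plain SQL, so there is no server-side statement left
        // to invalidate and re-prepare.
        'options'   => array(
            \PDO::ATTR_EMULATE_PREPARES => true,
        ),
    ),
),

The trade-off is that you lose true server-side prepares, so MySQL no longer validates the statement against bound parameter types before execution.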
#2
3
This error occurs while a mysqldump is in progress. It does not matter which database is being dumped. Wait for the dump to finish and the error will vanish.
The issue is with the table definitions being dumped, which is what causes this error.
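If overlapping dumps are unavoidable, one way to follow this "wait for the dump" advice in code is to catch the error and retry after a short pause. This is a hedged sketch, not part of the original answer; retryOnReprepare is a hypothetical helper name, and 1615 is read from PDO's driver-specific errorInfo slot:

// Hypothetical helper: retry a callback while MySQL reports error 1615.
function retryOnReprepare(callable $callback, $attempts = 3, $waitSeconds = 2)
{
    for ($i = 1; $i <= $attempts; $i++) {
        try {
            return $callback();
        } catch (\PDOException $e) {
            // errorInfo[1] holds the driver error code; 1615 is
            // "Prepared statement needs to be re-prepared".
            $code = isset($e->errorInfo[1]) ? $e->errorInfo[1] : null;
            if ($code != 1615 || $i == $attempts) {
                throw $e; // not our error, or out of retries
            }
            sleep($waitSeconds); // give the dump a chance to finish
        }
    }
}

// Usage: wrap the failing save.
retryOnReprepare(function () use ($organization) {
    $organization->save();
});

Laravel's QueryException extends PDOException, so the catch above should also cover Eloquent calls.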
Yeah, I tried changing these MySQL settings, but it still happens sometimes (mostly when heavy MySQL backups/dumps run at night):
table_open_cache: 128 => 16384
table_definition_cache: 1024 => 16384
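For reference, these are server-side settings; a sketch of how they would look in the MySQL option file, assuming you have access to it (the file's location varies by distribution, and the values can also be changed at runtime with SET GLOBAL):

# my.cnf (sketch; location varies, e.g. /etc/mysql/my.cnf)
[mysqld]
table_open_cache       = 16384
table_definition_cache = 16384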