pg_query() - "Cannot set connection to blocking mode (Error No. 8)"

Date: 2021-09-12 01:15:12

Our application inserts data from CSV files into Redshift using a COPY query. It uploads roughly 700 GB in total across roughly 11,000 files. Each file maps to one database table. We run a SELECT COUNT(*) FROM <table> before and after each COPY for logging and sanity checking.

After a period of time (it seems to vary) the call to pg_query() returns this E_NOTICE PHP error:

pg_query() - "Cannot set connection to blocking mode (Error No. 8)"

This is returned for the SELECT COUNT(*) FROM <table> query; our application propagates all PHP errors as exceptions. Removing this propagation gives us the following error message, in addition to the E_NOTICE above, on both the SELECT and the COPY:

Failed to run query: server closed the connection unexpectedly
    This probably means the server terminated abnormally

The COPY query definitely does not actually insert the files.

Once present, this error happens on every attempt to insert a file. It does not seem to resolve itself.

We initially had one database connection open (opened with pg_connect()) at the start of the script and re-used it for all following SELECTs and COPYs. When we got the E_NOTICE above we then tried - just as an experiment - opening a fresh connection for each query. This changed nothing.

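For context, the workflow amounts to something like the following (a sketch only; the host, credentials, table name and S3 path are hypothetical placeholders, and the Redshift COPY credentials clause is omitted):

```php
<?php
// Minimal sketch of the load loop described in the question. One connection
// is opened up front and reused for COUNT / COPY / COUNT. Host, credentials,
// table name and S3 path are placeholders; the IAM_ROLE/CREDENTIALS clause
// that Redshift COPY normally requires is omitted.
$conn = pg_connect('host=example-cluster.example.com port=5439 dbname=main user=loader password=secret');
if ($conn === false) {
    die("Connection failed\n");
}

// Count before the load.
$before = (int) pg_fetch_result(pg_query($conn, 'SELECT COUNT(*) FROM my_table'), 0, 0);

// Load one CSV file from S3.
pg_query($conn, "COPY my_table FROM 's3://my-bucket/file0001.csv' CSV");

// Count after the load and log the difference.
$after = (int) pg_fetch_result(pg_query($conn, 'SELECT COUNT(*) FROM my_table'), 0, 0);
echo 'Inserted ' . ($after - $before) . " rows\n";
```
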
Our current pgsql settings in the php.ini file are:

pgsql.allow_persistent = Off
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0

What could be causing this error and how could it be resolved?

Update - see the attached screenshot. It seems we only have the default query queue, with 'concurrency' set to 5 and the timeout set to 0 ms?

Also: we only have these DB users connected while the application is running (the one with 'username_removed' is the only one that is created by our application):

main=# select * from stv_sessions;
       starttime        | process |                     user_name                      |                      db_name
------------------------+---------+----------------------------------------------------+----------------------------------------------------
 2017-03-24 10:07:49.50 |   18263 | rdsdb                                              | dev
 2017-03-24 10:08:41.50 |   18692 | rdsdb                                              | dev
 2017-03-30 10:34:49.50 |   21197 | <username_removed>                              | main
 2017-03-24 10:09:39.50 |   18985 | rdsdb                                              | dev
 2017-03-30 10:36:40.50 |   21605 | root                                               | main
 2017-03-30 10:52:13.50 |   23516 | rdsdb                                              | dev
 2017-03-30 10:56:10.50 |   23886 | root                                               | main

2 Answers

#1


Have you tried changing pg_connect to pg_pconnect? This will reuse an existing connection, reducing the number of connections to your database, and the server will run more smoothly.

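If you want to try this, it is a one-line swap (host and credentials below are placeholders):

```php
<?php
// pg_pconnect() reuses an already-open connection that matches the same
// connection string instead of opening a new one. Host and credentials
// here are placeholders.
$conn = pg_pconnect('host=example-cluster.example.com port=5439 dbname=main user=loader password=secret');
```

Note that pgsql.allow_persistent must be On in php.ini for persistent connections to actually be created; with it Off, as in the settings shown in the question, pg_pconnect() should simply behave like pg_connect().
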
I would say never to do a count using *. You are forcing the database to build a hash for each row and count it. Use a column whose values are unique instead; if you don't have one, consider creating a sequence and using it as an "auto_increment"-style field. I see that you work with huge files, so any performance improvement will help your workload.

You can also check your blocking mode config.

I found this while searching the web; it may work for you: "Changing pgsql.auto_reset_persistent from Off to On and restarting Apache resolves the error."

My last piece of advice concerns transactions: if you are using them, you can make your COUNT query skip locked rows, which will make it run faster.

https://www.postgresql.org/docs/9.5/static/explicit-locking.html#LOCKING-ROWS

#2


Your connection may be timing out. Make sure you enable keepalives in your connection options.

Setting keepalives=1 in your connection string should send keepalive packets and prevent the connection from timing out. You can also try setting keepalives_idle=60.

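With the pgsql extension, these libpq parameters can be passed directly in the pg_connect() connection string; a sketch, with placeholder host and credentials:

```php
<?php
// keepalives* are standard libpq connection parameters, passed through
// unchanged by pg_connect(). Host and credentials are placeholders.
$conn = pg_connect(
    'host=example-cluster.example.com port=5439 dbname=main ' .
    'user=loader password=secret ' .
    'keepalives=1 keepalives_idle=60 keepalives_interval=10 keepalives_count=3'
);
```
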
By default, connections from your OS may not be allowed to request keepalives, so these settings will appear not to work until you also update the corresponding OS-level settings.

Take a look at the similar question "TCP Keep-Alive PDO Connection Parameter" for more information.
