I get the following error when I try to upload files to my node.js based web app:
2014/05/20 04:30:20 [error] 31070#0: *5 upstream prematurely closed connection while reading response header from upstream, client: ... [clipped]
I'm using a front-end proxy here:
upstream app_mywebsite {
    server 127.0.0.1:3000;
}

server {
    listen 0.0.0.0:80;
    server_name {{ MY IP}} mywebsite;
    access_log /var/log/nginx/mywebsite.log;

    # pass the request to the node.js server with the correct headers;
    # much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://app_mywebsite;
        proxy_redirect off;

        # web socket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
This is my nginx.conf file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 2048;
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 20;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    # default_type application/octet-stream;
    default_type text/html;
    charset UTF-8;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_min_length 256;
    gzip_comp_level 5;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Any idea on how to better debug this? The things I've found haven't really worked (e.g. removing the trailing slash from my proxy_pass).
4 Answers
#1
4
Try adding the following to your server{} block. I was able to solve an Nginx reverse proxy issue by defining these proxy attributes:
# define buffers, necessary for proper communication to prevent 502s
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
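In context (a sketch based on the config from the question; the upstream name is taken from there), the directives sit alongside proxy_pass:

```nginx
server {
    listen 0.0.0.0:80;

    location / {
        proxy_pass http://app_mywebsite;

        # define buffers, necessary for proper communication to prevent 502s
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
```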
#2
0
Try adding the following to the http section of your /etc/nginx/nginx.conf:
fastcgi_read_timeout 400s;
and restart nginx.
Further reading: nginx docs
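One caveat: fastcgi_read_timeout only applies to upstreams reached via fastcgi_pass. Since the question proxies to Node with proxy_pass, the analogous directive is proxy_read_timeout; a sketch:

```nginx
http {
    # for a proxy_pass upstream like the Node app here, the
    # equivalent of fastcgi_read_timeout is proxy_read_timeout
    proxy_read_timeout 400s;
}
```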
#3
0
Try this: client_max_body_size - maximum uploadable file size
http {
    send_timeout 10m;
    client_header_timeout 10m;
    client_body_timeout 10m;
    client_max_body_size 100m;
    large_client_header_buffers 8 32k;
}
and in the server section:
server {
    location / {
        proxy_buffer_size 32k;
    }
}
large_client_header_buffers 8 32k and proxy_buffer_size 32k are enough for most scripts, but you can try 64k, 128k, 256k...
(sorry, I'm not a native English speaker) =)
#4
-1
So in the end I ended up changing my keepalive from 20 to 64, and it seems to handle large files fine now. The bummer is that I rewrote from scratch the image upload library I was using, node-imager, but at least I learned something from it.
Note that the keepalive directive is only valid inside an upstream block (and needs a terminating semicolon), not inside location:

upstream app_mywebsite {
    server 127.0.0.1:3000;
    keepalive 64;
}
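For nginx to actually reuse connections from an upstream keepalive pool, the nginx docs also require proxying with HTTP/1.1 and a cleared Connection header; roughly:

```nginx
location / {
    proxy_pass http://app_mywebsite;
    # connection reuse with an upstream keepalive pool requires
    # HTTP/1.1 to the backend and an empty Connection header
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```

This conflicts with the websocket Connection "upgrade" header in the question's config, so it would apply only to plain HTTP locations.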