Problem with Nginx and multiple Meteor/Node.js applications

Date: 2021-10-20 17:05:43

I understand that multiple Node.js applications, and I assume by extension Meteor applications, can be run on one server using Nginx. I've got Nginx set up and running on an Ubuntu server just fine; I can even get it to respond to requests and proxy them to one of my applications. However, I hit a roadblock when trying to get Nginx to proxy traffic to the second application.

Some background:

  • 1st app running on port 8001
  • 2nd app running on port 8002
  • Nginx listening on port 80
  • Attempting to get nginx to send traffic at / to app one and traffic at /app2/ to app two
  • Both apps can be reached by going to domain:8001 and domain:8002

My Nginx config:

upstream mydomain.com {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

# the nginx server instance
server {
    listen 0.0.0.0:80 default_server;
    access_log /var/log/nginx/mydomain.log;

    location /app2 {
        rewrite /app2/(.*) /$1 break;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:8002;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:8001;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
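For context, the rewrite rule in the /app2 block strips the prefix before the request is proxied. Its effect can be sketched in Python (illustrative only; this is not how nginx evaluates it, and `rewrite_app2` is a made-up name):

```python
import re

def rewrite_app2(uri: str) -> str:
    # nginx: rewrite /app2/(.*) /$1 break;
    # strips the /app2 prefix before the request is passed upstream
    return re.sub(r'^/app2/(.*)$', r'/\1', uri)

print(rewrite_app2('/app2/assets/main.js'))  # /assets/main.js
print(rewrite_app2('/app2'))                 # /app2 (no trailing slash, so the regex does not match)
```

Note the second case: `location /app2` matches a bare request to /app2, but the rewrite does not, so that request reaches the backend with the /app2 path intact.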

Any insight as to what might be going on when traffic goes to /app2/ would be greatly appreciated!

2 Answers

#1 (score: 27)

proxy_pass http://127.0.0.1:8002;    <-- lines like these should instead be
proxy_pass http://my_upstream_name;  <-- this

then

upstream my_upstream_name {
    # Nginx does round-robin load balancing by default, so with both
    # servers in one upstream some users will connect to app 1 and others to app 2
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}
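The round-robin behavior warned about in that comment can be sketched as follows (illustrative Python, not nginx internals):

```python
from itertools import cycle

# Default nginx upstream balancing is round robin: successive requests
# alternate between the listed servers, so with two different apps in
# one upstream, users would land on either app unpredictably.
backends = cycle(['127.0.0.1:8001', '127.0.0.1:8002'])
picks = [next(backends) for _ in range(4)]
print(picks)  # ['127.0.0.1:8001', '127.0.0.1:8002', '127.0.0.1:8001', '127.0.0.1:8002']
```

This is why one shared upstream is wrong for two distinct applications: each app needs its own upstream block.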

A few tips to control the proxy:

Take a look at the nginx docs.

Then here we go:

weight = NUMBER - sets the weight of the server; if not set, the weight equals one. Use it to unbalance the default round robin.

max_fails = NUMBER - number of unsuccessful attempts at communicating with the server within the time period (assigned by parameter fail_timeout) after which it is considered inoperative. If not set, the number of attempts is one. A value of 0 turns off this check. What is considered a failure is defined by proxy_next_upstream or fastcgi_next_upstream (except http_404 errors which do not count towards max_fails).

fail_timeout = TIME - the time during which *max_fails* unsuccessful attempts at communication with the server must occur for the server to be considered inoperative, and also the time for which the server will be considered inoperative (before another attempt is made). If not set, the time is 10 seconds. fail_timeout has nothing to do with upstream response time; use proxy_connect_timeout and proxy_read_timeout for controlling that.

down - marks the server as permanently offline; to be used with the directive ip_hash.

backup - (0.6.7 or later) only uses this server if the non-backup servers are all down or busy (cannot be used with the directive ip_hash)

A generic example:

    upstream my_upstream_name {
        server backend1.example.com weight=5;
        server 127.0.0.1:8080       max_fails=3 fail_timeout=30s;
        server unix:/tmp/backend3;
    }
    # proxy_pass http://my_upstream_name;

Though this is what you need:

If you just want to control the load between vhosts for one app:

    upstream my_upstream_name {
        server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8082 max_fails=3 fail_timeout=30s;
        # the keyword "backup" means this server is only used
        # when the rest are non-responsive
        server 127.0.0.1:8083 backup;
    }
    # proxy_pass http://my_upstream_name;

If you have 2 or more apps, use one upstream per app, like:

    upstream my_upstream_name {
        server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8082 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8083 backup;
    }
    upstream my_upstream_name_app2 {
        server 127.0.0.1:8084 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8085 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8086 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8087 backup;
    }
    upstream my_upstream_name_app3 {
        server 127.0.0.1:8088 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8089 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8090 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:8091 backup;
    }
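Applied back to the setup in the question (apps on ports 8001 and 8002, routed by path rather than load-balanced), a sketch might look like the following. The upstream names are made up for illustration, and the proxy_set_header / WebSocket upgrade directives from the question would still belong in each location block; they are omitted here for brevity:

```nginx
upstream app1_upstream {
    server 127.0.0.1:8001 max_fails=3 fail_timeout=30s;
}

upstream app2_upstream {
    server 127.0.0.1:8002 max_fails=3 fail_timeout=30s;
}

server {
    listen 80 default_server;

    location /app2/ {
        # strip the /app2 prefix before handing the request to app 2
        rewrite /app2/(.*) /$1 break;
        proxy_pass http://app2_upstream;
    }

    location / {
        proxy_pass http://app1_upstream;
    }
}
```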

hope it helps.

#2 (score: 0)

For people looking for an alternative to Nginx: install the Cluster package for each Meteor app, and the package will handle load balancing automatically. https://github.com/meteorhacks/cluster
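Installation is presumably via Atmosphere (the package name here is inferred from the repository name, so verify it against the repo's README):

```shell
# add the cluster package to each Meteor app
meteor add meteorhacks:cluster
```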

How to set it up:

# You can use your existing MONGO_URL for this
export CLUSTER_DISCOVERY_URL=mongodb://host:port/db
# this is the direct URL to your server (it could be a private URL)
export CLUSTER_ENDPOINT_URL=http://ipaddress
# mark your server as a web service (you can set any name for this)
export CLUSTER_SERVICE=web

Example setup:

{
  "ip-1": {
    "endpointUrl": "http://ip-1",
    "balancerUrl": "https://one.bulletproofmeteor.com"
  },
  "ip-2": {
    "endpointUrl": "http://ip-2",
    "balancerUrl": "https://two.bulletproofmeteor.com"
  },
  "ip-3": {
    "endpointUrl": "http://ip-3",
    "balancerUrl": "https://three.bulletproofmeteor.com"
  },
  "ip-4": {
    "endpointUrl": "http://ip-4"
  }
}
