Node.js event loop - nginx / apache

Time: 2021-07-17 23:57:25

Both nginx and Node.js have event loops to handle requests. I put nginx in front of Node.js as has been recommended here

Using Node.js only vs. using Node.js with Apache/Nginx

with the setup shown here

Node.js + Nginx - What now?

  1. How do the two event loops play together? Is there any risk of conflicts between the two? I wonder because Nginx may not be able to handle as many events per second as Node.js, or vice versa. For example, if Nginx can handle 1000 events per second but node.js only 500, won't that cause issues? (I have no idea whether 1000 and 500 are reasonable orders of magnitude; feel free to correct me on that.)

  2. What about putting Apache in front of Node.js? Apache has no event loop. Just threads. So won't putting Apache in front of Node.js defeat the purpose?

  3. In this 2010 talk, Node.js creator Ryan Dahl shared his vision of getting rid of nginx/apache/whatever entirely and making Node talk directly to the internet. When do you think this will become reality?

4 Answers

#1


17  

  1. Both nginx and Node use an asynchronous and event-driven approach. The communication between them will go more or less like this:

    • nginx receives a request

    • nginx forwards the request to the Node process and immediately goes back to waiting for more requests

    • Node receives the request from nginx

    • Node handles the request with minimal CPU usage, until at some point it needs to issue one or more I/O requests (read from a database, write the response, etc.). At this point it launches all of these I/O requests and goes back to waiting for more requests.

    • The above can repeat many times. You could have hundreds of thousands of requests all in a non-blocking wait state, where nginx is waiting on Node and Node is waiting on I/O. And while this happens, both nginx and Node are ready to accept even more requests!

    • Eventually the async I/O started by the Node process will complete and a callback function will be invoked.

    • If there are still I/O requests that haven't completed for this request, Node goes back to its loop one more time. It can also happen that once an I/O operation completes, its data is consumed by the Node callback and new I/O needs to happen, so Node can start more async I/O requests before going back to the loop.

    • Eventually all the I/O operations started by Node for a particular request will be complete, including those that write the response back to nginx. Node then ends the request and, as always, goes back to its loop.

    • nginx receives an event indicating that response data has arrived for the request, so it takes that data and writes it back to the client, once again in a non-blocking fashion. When the response has been written to the client, an event will trigger and nginx will end the request.

    You are asking what would happen if nginx and Node can each handle a different maximum number of connections. They really don't have a fixed maximum; the maximum generally comes from operating system configuration, for example the maximum number of open handles the system can have at a time, or from CPU throughput. So your question does not really apply. If the system is configured correctly and all processes are I/O bound, neither nginx nor Node will ever block.

  2. Putting Apache in front of Node will only work well if you can guarantee that your Apache never blocks (i.e. it never reaches its maximum connection limit). This is hard or impossible to achieve for a large number of connections, because Apache uses an individual process or thread for each connection. nginx and Node scale really well; Apache does not.

  3. Running Node without another server in front works fine, and it should be okay for small/medium-load sites. The reason putting a web server in front of it is preferred is that web servers like nginx come with features that Node lacks and that you would otherwise need to implement yourself: caching, load balancing, running multiple apps on the same server, etc.

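The non-blocking handler pattern described in point 1 can be sketched in a few lines of Node. This is an illustrative demo, not nginx or a real server: `setImmediate` stands in for the async I/O step, to show that the handler returns to the event loop before the I/O callback runs.

```javascript
// Sketch: a handler starts async I/O and immediately returns control
// to the event loop, leaving the process free to accept more requests
// while the I/O is pending.
const order = [];

function handleRequest(done) {
  order.push('request received');
  // Simulated async I/O (a database read, a file read, etc.);
  // setImmediate stands in for the real operation here.
  setImmediate(() => {
    order.push('I/O completed, callback invoked');
    done();
  });
  // This line runs before the callback above fires: the handler has
  // already handed control back to the event loop.
  order.push('handler done, back in the event loop');
}

handleRequest(() => console.log(order.join('\n')));
```

Running it shows the "back in the event loop" line recorded before the I/O callback, which is exactly why hundreds of thousands of requests can sit in a non-blocking wait state at once.
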
#2


2  

I think your questions have been largely covered by some of the other answers, but there are a few pieces missing, and some that I disagree with, so here are mine:

  1. The event loops are isolated from each other at the process level, but do interact. The issues you're most likely to encounter are around the configuration of nginx response buffers, chunked data, etc. but this is optimisation rather than error resolution.

  2. As you point out, if you use Apache you're nullifying the benefit of using Node.js, i.e. massive concurrency and websockets. I wouldn't recommend doing that.

  3. People are already using Node.js at the front of their stack. Searching for benchmarks returns some reasonable-looking results in Node's favour, so performance to my mind isn't an issue. However, there are still reasons to put Nginx in front of Node.

    1. Security - Node has been given increasing scrutiny, but it's still young. You may not have problems here, but caution is often your friend.

    2. Training - Ops staff that you hire will know how to manage Nginx, but the configuration and management of your custom Node app will only ever be understood by those people your developers successfully communicate it to. In some companies this is nobody.

    3. Operational Flexibility - If you reach scale you might want to split out the serving of static content, purely to reduce the load on your app servers. You might want to split content amongst different domains and have it managed separately, or have different SSL or proxying behaviour for different domains or URL patterns. These are the things that are easy for Ops guys to configure in Nginx, but you'd have to code manually in a Node app.

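As a concrete illustration of points 1 and 3, here is a minimal nginx sketch that serves static content directly, buffers proxied responses, and forwards everything else to a Node app. The address 127.0.0.1:3000, the domain, and the paths are assumptions for the example, not anything from the answers above.

```nginx
# Hypothetical example: static files served by nginx, everything else
# proxied to a Node process assumed to listen on 127.0.0.1:3000.
upstream node_app {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name example.com;

    # Static content handled by nginx, keeping load off the app server.
    location /static/ {
        root /var/www;
        expires 1d;
    }

    # Everything else goes to Node.
    location / {
        proxy_pass http://node_app;
        proxy_http_version 1.1;
        # Response buffering is one of the knobs mentioned in point 1;
        # turn it off for streaming or chunked responses.
        proxy_buffering on;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Ops staff can adjust this file without touching the Node app, which is the "operational flexibility" argument in a nutshell.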
#3


1  

  1. The event loops are independent. Event loops are implemented at the application level, so neither cares what sort of architecture the other uses.

  2. NodeJS is good at many things, but there are some places where it still falters. One example is serving static files. At the moment, nodejs performs fairly poorly at this, so having a dedicated web server for your static files greatly improves response time. Also, nodejs is still in its infancy and has not been "tested and hardened" in matters of security the way Apache or nginx have.

  3. It'll take a long time before people consider fronting their stack with nodejs all by itself. The cluster module is a step in the right direction, but it'll take a long time, even after it reaches v1, before that happens.

#4


1  

  1. Both event loops are unrelated. They don't play together.

  2. Yes, it is pretty useless. Apache is not a load balancer.

  3. What Ryan Dahl said may already be applicable. The limit on concurrent users is definitely higher than Apache's. Before node.js, websites with a fair number of concurrent users had to use nginx to balance the load. For small to medium sized businesses, node.js alone can do the job. But ruling out nginx completely will take time. Let node.js become stable before it can follow this ambitious dream.
