Unexpected results in a node.js vs ASP.NET Core performance test

Time: 2022-08-10 03:41:52

I am doing a quick stress test on two (kinda) hello world projects, one written in node.js and one in ASP.NET Core. Both of them are running in production mode without a logger attached. The result is astonishing! The ASP.NET Core app outperforms the node.js app even though it does some extra work, whereas the node.js app just renders a view.


App 1 (node.js): http://localhost:3000/nodejs

Using: node.js, express and vash rendering engine.


The code in this endpoint is


router.get('/', function(req, res, next) {
  var vm = {
    title: 'Express',
    time: new Date()
  }
  res.render('index', vm);
});

As you can see, it does nothing apart from sending the current date to the view via the time variable.


App 2 (ASP.NET Core): http://localhost:5000/aspnet-core

Using: ASP.NET Core, default template targeting dnxcore50


However, this app does something beyond just rendering a page with a date on it: it generates 5 paragraphs of random text. This should, in theory, make it a little heavier than the node.js app.


Here is the action method that renders this page:


[ResponseCache(Location = ResponseCacheLocation.None, NoStore = true)]
[Route("aspnet-core")]
public IActionResult Index()
{
    var sb = new StringBuilder(1024);
    GenerateParagraphs(5, sb);

    ViewData["Message"] = sb.ToString();
    return View();
}
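To make the comparison fairer, the node.js endpoint could do similar work before rendering. A hypothetical sketch (the helper name and word list are illustrative, not from the original project):

```javascript
// Hypothetical node.js counterpart to the C# GenerateParagraphs helper,
// so both apps do comparable work per request.
function generateParagraphs(count, wordsPerParagraph = 50) {
  const words = ['lorem', 'ipsum', 'dolor', 'sit', 'amet', 'consectetur'];
  const paragraphs = [];
  for (let i = 0; i < count; i++) {
    const picked = [];
    for (let j = 0; j < wordsPerParagraph; j++) {
      picked.push(words[Math.floor(Math.random() * words.length)]);
    }
    paragraphs.push(picked.join(' '));
  }
  return paragraphs.join('\n\n');
}

// In the route handler, something like: vm.message = generateParagraphs(5);
```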

Stress test result

Node.js App stress test result

Update: following a suggestion by Gorgi Kosev


Using npm install -g recluster-cli && NODE_ENV=production recluster-cli app.js 8



ASP.NET Core App stress test result


I can't believe my eyes! It can't be true that in this basic test ASP.NET Core is way faster than node.js. Of course this is not the only metric for comparing these two web technologies, but I am wondering: what am I doing wrong on the node.js side?


Being a professional ASP.NET developer who wants to adopt node.js for personal projects, this result is putting me off a little, as I'm somewhat paranoid about performance. I thought node.js was faster than ASP.NET Core in general (as seen in various other benchmarks); I just wanted to prove it to myself (to encourage myself to adopt node.js).


Please reply in comment if you want me to include more code snippets.


Update: Time distribution of .NET Core app


Server response


HTTP/1.1 200 OK
Cache-Control: no-store,no-cache
Date: Fri, 12 May 2017 07:46:56 GMT
Pragma: no-cache
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Server: Kestrel

2 Answers

#1


100  

As many others have alluded, the comparison lacks context.
At the time of its release, node.js's async approach was revolutionary. Since then, other languages and web frameworks have been adopting the approach it took mainstream.


To understand what the difference meant, you need to simulate a blocking request that represents some IO workload, such as a database request. In a thread-per-request system, this will exhaust the threadpool, and new requests will be put into a queue to wait for an available thread.
With non-blocking IO frameworks this does not happen.


Consider this node.js server that waits 1 second before responding


const http = require('http');

const server = http.createServer((req, res) => {
  setTimeout(() => {
    res.statusCode = 200;
    res.end();
  }, 1000);
});

server.listen(8000);

Now let's throw 100 concurrent connections at it for 10 seconds. So we expect about 1000 requests to complete.

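That expectation follows from Little's law: 100 in-flight requests at ~1 s each gives ~100 requests/s, hence ~1000 over 10 s. As a quick sanity check on the arithmetic:

```javascript
// Little's law: L = λ · W, so λ = L / W
const concurrency = 100; // in-flight requests (L)
const latencySec = 1;    // time per request (W)
const durationSec = 10;

const reqPerSec = concurrency / latencySec;       // 100
const expectedRequests = reqPerSec * durationSec; // 1000
console.log(reqPerSec, expectedRequests);         // → 100 1000
```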

$ wrk -t100 -c100 -d10s http://localhost:8000
Running 10s test @ http://localhost:8000
  100 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.01s    10.14ms   1.16s    99.57%
    Req/Sec     0.13      0.34     1.00     86.77%
  922 requests in 10.09s, 89.14KB read
Requests/sec:     91.34
Transfer/sec:      8.83KB

As you can see we get in the ballpark with 922 completed.


Now consider the following asp.net code, written as though async/await were not supported yet, therefore dating us back to the node.js launch era.


app.Run((context) =>
{
    Thread.Sleep(1000);
    context.Response.StatusCode = 200;
    return Task.CompletedTask;
});

$ wrk -t100 -c100 -d10s http://localhost:5000
Running 10s test @ http://localhost:5000
  100 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.08s    74.62ms   1.15s   100.00%
    Req/Sec     0.00      0.00     0.00    100.00%
  62 requests in 10.07s, 5.57KB read
  Socket errors: connect 0, read 0, write 0, timeout 54
Requests/sec:      6.16
Transfer/sec:     566.51B

62! Here we see the limit of the threadpool. By tuning it up we could get more concurrent requests happening, but at the cost of more server resources.


For these IO-bound workloads, the move to avoid blocking the processing threads was that dramatic.


Now let's bring it to the present day, where that influence has rippled through the industry, and let dotnet take advantage of its improvements.


app.Run(async (context) =>
{
    await Task.Delay(1000);
    context.Response.StatusCode = 200;
});

$ wrk -t100 -c100 -d10s http://localhost:5000
Running 10s test @ http://localhost:5000
  100 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.01s    19.84ms   1.16s    98.26%
    Req/Sec     0.12      0.32     1.00     88.06%
  921 requests in 10.09s, 82.75KB read
Requests/sec:     91.28
Transfer/sec:      8.20KB

No surprises here, we now match node.js.


So what does all this mean?


Your impressions that node.js is the "fastest" come from an era we are no longer living in. Add to that it was never node/js/v8 that were "fast", it was that they broke the thread-per-request model. Everyone else has been catching up.


If your goal is the fastest possible processing of single requests, then look at the serious benchmarks instead of rolling your own. But if instead what you want is simply something that scales to modern standards, then go for whichever language you like and make sure you are not blocking those threads.


Disclaimer: All code written, and tests run, on an ageing MacBook Air during a sleepy Sunday morning. Feel free to grab the code and try it on Windows or tweak to your needs - https://github.com/csainty/nodejs-vs-aspnetcore


#2


6  

Node Frameworks like Express and Koa have a terrible overhead. "Raw" Node is significantly faster.


I haven't tried it, but there's a newer framework that gets very close to "Raw" Node performance: https://github.com/aerojs/aero


(see benchmark on that page)


Update: here are some figures: https://github.com/blitzprog/webserver-benchmarks


Node:
    31336.78
    31940.29
Aero:
    29922.20
    27738.14
Restify:
    19403.99
    19744.61
Express:
    19020.79
    18937.67
Koa:
    16182.02
    16631.97
Koala:
    5806.04
    6111.47
Hapi:
    497.56
    500.00

As you can see, the overheads in the most popular node.js frameworks are VERY significant!

