Is there any HTTP proxy with explicit, configurable support for request/response buffering and delayed connections?

Date: 2020-11-29 03:51:11

When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5 ms. I am looking for an HTTP server, load balancer, or proxy server that supports the following:

  1. A request arrives at the proxy. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.

  2. The proxy server stops buffering the request when:

    • A size limit has been reached (say, 4KB), or

    • The request has been received completely, headers and body.

  3. Only now, with (part of) the request buffered, is a connection opened to the backend and the request relayed.

  4. The backend sends back the response. Again, the proxy starts buffering it immediately (up to a more generous size limit, say 64KB).

  5. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy within a matter of milliseconds, and the backend process/thread is free to handle more requests. The backend connection is closed immediately.

  6. The proxy sends the response back to the mobile client, as fast or as slow as the client can take it, without a connection to the backend tying up resources.

I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? And is there one that is not a dinosaur like Squid, and that supports a lean single-process, asynchronous, event-based execution model?

(Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh.)

5 Solutions

#1


4  

What about using both nginx and Squid (client — Squid — nginx — backend)? When returning data from a backend, Squid does convert it from chunked transfer encoding (C-T-E: chunked) to a regular stream with a Content-Length set, so maybe it can normalize POSTs as well.
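
For what it's worth, a rough, untested sketch of the Squid half of that chain, with Squid as a reverse proxy (accelerator) in front of nginx. The directive names are standard Squid accelerator configuration, but the hostname, ports and ACL names below are placeholders, and the exact syntax varies a little between Squid versions:

    # Squid listens for clients on port 80 and relays to nginx on 127.0.0.1:8080
    http_port 80 accel defaultsite=www.example.com
    cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=nginx_front

    # Only accept and forward requests for our own site
    acl our_site dstdomain www.example.com
    http_access allow our_site
    cache_peer_access nginx_front allow our_site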

#2


2  

Nginx can do everything you want. The configuration parameters you are looking for are

http://wiki.codemongers.com/NginxHttpCoreModule#client_body_buffer_size

and

http://wiki.codemongers.com/NginxHttpProxyModule#proxy_buffer_size
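
For what it's worth, a minimal, untested sketch of how those directives (plus the related proxy_buffering/proxy_buffers directives) could be combined. The buffer sizes simply mirror the 4KB/64KB figures from the question, and the upstream name and addresses are placeholders:

    # Fragment of the http{} context; illustrative only, tune sizes for your workload.
    upstream backend {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        location / {
            # Buffer up to 4KB of the request body in nginx before it is
            # passed on to the backend (steps 1-3 of the question).
            client_body_buffer_size 4k;

            # Buffer the backend response (8 x 8KB = 64KB) so the backend
            # connection can be released quickly (steps 4-6).
            proxy_buffering   on;
            proxy_buffer_size 8k;
            proxy_buffers     8 8k;

            proxy_pass http://backend;
        }
    }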

#3


2  

Fiddler, a free tool from Telerik, does at least some of the things you're looking for.

Specifically, go to Rules | Custom Rules... and you can add arbitrary JavaScript code at all points during the connection. You could simulate some of the things you need with sleep() calls.

I'm not sure this method gives you the fine buffering control you want, however. Still, something might be better than nothing?

#4


1  

Squid 2.7 can support 1-3 with a patch:

I've tested this and found it to work well, with the proviso that it only buffers to memory, not disk (unless it swaps, of course, and you don't want that), so you need to run it on a box that is appropriately provisioned for your workload.

Chunked POSTs are a problem for most servers and intermediaries. Are you sure you need to support them? Usually clients should retry the request when they get a 411.
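
For illustration, that exchange looks roughly like the made-up trace below: the intermediary refuses the length-less chunked request with 411 Length Required, and a well-behaved client retries with an explicit Content-Length (host, path and sizes are invented):

    POST /upload HTTP/1.1
    Host: api.example.com
    Transfer-Encoding: chunked

    ...chunked body...

    HTTP/1.1 411 Length Required

    (client retries)

    POST /upload HTTP/1.1
    Host: api.example.com
    Content-Length: 1234

    ...body...

    HTTP/1.1 200 OK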

#5


0  

Unfortunately, I'm not aware of a ready-made solution for this. In the worst-case scenario, consider developing it yourself, say, using Java NIO -- it shouldn't take more than a week.
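
If you did go down that road, a heavily simplified, untested sketch of the buffer-first-connect-later idea using java.nio channels in blocking mode might look like the following. It handles one client at a time, does no header parsing, and the ports are placeholders; it is only meant to show the shape of the solution, not a production proxy:

    // Minimal sketch of "buffer the request first, connect to the backend later".
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    public class BufferingProxy {
        static final int REQUEST_BUFFER = 4 * 1024;    // step 2: 4KB request cap
        static final int RESPONSE_BUFFER = 64 * 1024;  // step 4: 64KB response cap

        public static void main(String[] args) throws Exception {
            ServerSocketChannel listener = ServerSocketChannel.open();
            listener.bind(new InetSocketAddress(8080));   // placeholder listen port

            while (true) {
                try (SocketChannel client = listener.accept()) {
                    // Steps 1-2: soak up the (slow) client request into RAM.
                    // This naive loop reads until the 4KB buffer fills or the
                    // client half-closes; a real implementation would parse the
                    // headers and Content-Length to detect end-of-request.
                    ByteBuffer request = ByteBuffer.allocate(REQUEST_BUFFER);
                    while (request.hasRemaining() && client.read(request) > 0) {
                        // keep reading
                    }
                    request.flip();

                    // Step 3: only now open the backend connection and relay.
                    ByteBuffer response = ByteBuffer.allocate(RESPONSE_BUFFER);
                    try (SocketChannel backend =
                             SocketChannel.open(new InetSocketAddress("127.0.0.1", 8000))) {
                        while (request.hasRemaining()) {
                            backend.write(request);
                        }
                        backend.shutdownOutput();

                        // Steps 4-5: drain the (fast) backend response into RAM;
                        // the try-with-resources then closes the backend socket.
                        while (response.hasRemaining() && backend.read(response) > 0) {
                            // keep reading
                        }
                    }

                    // Step 6: dribble the buffered response back to the client at
                    // whatever pace it can manage; the backend is already free.
                    response.flip();
                    while (response.hasRemaining()) {
                        client.write(response);
                    }
                }
            }
        }
    }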
