I'm creating a server-side app in Swift 3. I've chosen libevent for implementing the networking code because it's cross-platform and doesn't suffer from the C10k problem. Libevent implements its own event loop, but I want to keep CFRunLoop and GCD (DispatchQueue.main.after etc.) functional as well, so I need to glue them together somehow.
This is what I've come up with:
var terminated = false

DispatchQueue.main.after(when: DispatchTime.now() + 3) {
    print("Dispatch works!")
    terminated = true
}

while !terminated {
    switch event_base_loop(eventBase, EVLOOP_NONBLOCK) { // libevent
    case 1:
        break // No events were processed
    case 0:
        print("DEBUG: Libevent processed one or more events")
    default: // -1
        print("Unhandled error in network backend")
        exit(1)
    }
    RunLoop.current().run(mode: RunLoopMode.defaultRunLoopMode,
                          before: Date(timeIntervalSinceNow: 0.01))
}
This works, but introduces a latency of 0.01 sec. While RunLoop is sleeping, libevent won't be able to process events. Lowering this timeout increases CPU usage significantly when the app is idle.
I was also considering using only libevent, but third-party libraries in the project may use dispatch_async internally, so that could be problematic.
Running libevent's loop on a separate thread makes synchronization more complex; is that the only way of solving this latency issue?
LINUX UPDATE. The above code does not work on Linux (2016-07-25-a Swift snapshot): RunLoop.current().run exits with an error. Below is a working Linux version reimplemented with a timer and dispatch_main. It suffers from the same latency issue:
let queue = dispatch_get_main_queue()
let timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue)
let interval = 0.01

let block: () -> () = {
    guard !terminated else {
        print("Quitting")
        exit(0)
    }
    switch server.loop() {
    case 1: break // Just idling
    case 0: break // print("Libevent: processed event(s)")
    default: // -1
        print("Unhandled error in network backend")
        exit(1)
    }
}
block()

let fireTime = dispatch_time(DISPATCH_TIME_NOW, Int64(interval * Double(NSEC_PER_SEC)))
dispatch_source_set_timer(timer, fireTime, UInt64(interval * Double(NSEC_PER_SEC)), UInt64(NSEC_PER_SEC) / 10)
dispatch_source_set_event_handler(timer, block)
dispatch_resume(timer)
dispatch_main()
1 Answer
#1
A quick search of the Open Source Swift Foundation libraries on GitHub reveals that the support in CFRunLoop is (perhaps obviously) implemented differently on different platforms. This means, in essence, that RunLoop and libevent, with respect to cross-platform-ness, are just different ways to achieve the same thing. I can see the thinking behind the idea that libevent is probably better suited to server implementations, since CFRunLoop didn't grow up with that specific goal, but as far as being cross-platform goes, they're both barking up the same tree.
That said, the underlying synchronization primitives used by RunLoop and libevent are inherently private implementation details and, perhaps more importantly, differ between platforms. From the source, it looks like RunLoop uses epoll on Linux, as does libevent, but on macOS/iOS/etc., RunLoop uses Mach ports as its fundamental primitive, whereas libevent looks like it's going to use kqueue. You might, with enough effort, be able to make a hybrid RunLoopSource that ties to a libevent source for a given platform, but this would likely be very fragile and generally ill-advised, for a couple of reasons: First, it would be based on private implementation details of RunLoop that are not part of the public API, and therefore subject to change at any time without notice. Second, unless you went through and did this for every platform supported by both Swift and libevent, you would have broken the cross-platform-ness of it, which was one of your stated reasons for going with libevent in the first place.
One additional option you might not have considered would be to use GCD by itself, without RunLoops. Look at the docs for dispatch_main. In a server application there's (typically) nothing special about a "main thread," so dispatching to the main queue should be good enough (if it's needed at all). You can use dispatch "sources" to manage your connections, etc. I can't personally speak to how dispatch sources scale up to the C10K/C100K/etc. level, but they've seemed pretty lightweight and low-overhead in my experience. I also suspect that using GCD like this would likely be the most idiomatic way to write a server application in Swift. I've written up a small example of a GCD-based TCP echo server as part of another answer here.
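To make that concrete, here's a minimal sketch of the dispatch-source pattern, assuming Swift 3's Dispatch overlay on macOS or Linux; the port (8080) and the trivial accept handler are placeholders for illustration, not the echo server linked above:

import Foundation
import Dispatch

#if os(Linux)
import Glibc
let sockStreamType = Int32(SOCK_STREAM.rawValue)
#else
import Darwin
let sockStreamType = SOCK_STREAM
#endif

// Listening socket on an arbitrary port (8080), purely for illustration.
let listenFD = socket(AF_INET, sockStreamType, 0)
precondition(listenFD >= 0, "socket() failed")

var addr = sockaddr_in()
addr.sin_family = sa_family_t(AF_INET)
addr.sin_port = in_port_t(8080).bigEndian
addr.sin_addr.s_addr = in_addr_t(0) // INADDR_ANY

let bound = withUnsafePointer(to: &addr) {
    $0.withMemoryRebound(to: sockaddr.self, capacity: 1) {
        bind(listenFD, $0, socklen_t(MemoryLayout<sockaddr_in>.size))
    }
}
precondition(bound == 0 && listen(listenFD, 16) == 0, "bind/listen failed")

// One read source per socket; the handler fires whenever a connection is
// ready to be accepted, with no polling and no run loop.
let acceptSource = DispatchSource.makeReadSource(fileDescriptor: listenFD,
                                                 queue: DispatchQueue.main)
acceptSource.setEventHandler {
    let clientFD = accept(listenFD, nil, nil)
    guard clientFD >= 0 else { return }
    print("Accepted connection on fd \(clientFD)")
    // A real server would create another read source for clientFD here.
    _ = close(clientFD)
}
acceptSource.resume()

dispatchMain() // takes the place of both RunLoop.run() and event_base_loop()

Because the read source only fires when the descriptor is actually readable, there's no polling interval and no idle CPU cost, which is exactly the latency/CPU trade-off the question ran into.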
If you were bound and determined to use both RunLoop and libevent in the same application, it would, as you guessed, be best to give libevent its own separate thread, but I don't think it's as complex as you might think. You should be able to call dispatch_async from libevent callbacks freely, and similarly marshal replies from GCD-managed threads back to libevent fairly easily using libevent's multi-threading mechanisms (i.e. either by running with locking on, or by marshaling your calls into libevent as events themselves). Similarly, third-party libraries using GCD should not be an issue even if you chose to use libevent's loop structure. GCD manages its own thread pools and would have no way of stepping on libevent's main loop, etc.
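A rough sketch of that two-loop arrangement, assuming libevent 2.x is exposed to Swift through a hypothetical CLibevent module map (covering event2/event.h and event2/thread.h) and linked against libevent_pthreads; the queue label and the wakeup-event pattern are illustrative choices, not the only way to do it:

import Dispatch
import CLibevent // hypothetical system module wrapping libevent 2.x

_ = evthread_use_pthreads()  // make libevent safe to touch from other threads
let base = event_base_new()

// A manually activated "user" event (no fd, no I/O flags) that GCD code can
// trigger to hand work back to the libevent thread.
let wakeup = event_new(base, -1, 0, { _, _, _ in
    print("Woken up by a GCD thread; drain queued work here")
}, nil)

// Run libevent's loop on its own GCD-managed thread. event_base_dispatch()
// blocks that thread, while the main queue stays free for dispatch_async
// work coming from third-party libraries.
DispatchQueue(label: "libevent.loop").async {
    _ = event_base_dispatch(base)
}

// Direction 1: inside any libevent callback, hop onto a GCD queue:
//     DispatchQueue.main.async { /* handle the request */ }

// Direction 2: from GCD back into libevent, activate the pre-registered
// user event; this is safe because evthread_use_pthreads() was called.
DispatchQueue.global().async {
    event_active(wakeup, 0, 0)
}

dispatchMain()

The only cross-thread traffic is dispatch_async in one direction and event_active on a pre-registered event in the other, so the application code itself doesn't need extra locking for the handoff.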
You might also consider architecting your application such that it didn't matter what concurrency and connection handling library you used. Then you could swap out libevent, GCD, CFStreams, etc. (or mix and match) depending on what worked best for a given situation or deployment. Choosing a concurrency approach is important, but ideally you wouldn't couple yourself to it so tightly that you couldn't switch if circumstances called for it.
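One minimal way to sketch that kind of decoupling; the EventLoop protocol and DispatchLoop type below are made-up names for illustration, not an existing API:

import Dispatch

// The application talks only to this abstraction.
protocol EventLoop {
    func schedule(_ work: @escaping () -> Void) // run a block on the loop
    func run()                                  // block the caller and drive the loop
}

// A GCD-backed implementation: the main queue is the loop.
struct DispatchLoop: EventLoop {
    func schedule(_ work: @escaping () -> Void) {
        DispatchQueue.main.async(execute: work)
    }
    func run() {
        dispatchMain()
    }
}

let loop: EventLoop = DispatchLoop()
loop.schedule { print("application work") }
loop.run()

A libevent-backed conformance could run event_base_dispatch() inside run(), so swapping transports would touch only the conformance, not the call sites.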
When you have such an architecture, I'm generally a fan of the approach of using the highest level abstraction that gets the job done, and only driving down to lower level abstractions when specific circumstances require it. In this case, that would probably mean using CFStreams and RunLoops to start, and switching out to "bare" GCD or libevent later, if you hit a wall and also determined (through empirical measurement) that it was the transport layer and not the application layer that was the limiting factor. Very few non-trivial applications actually get to the C10K problem in the transport layer; things tend to have to scale "out" at the application layer first, at least for apps more complicated than basic message passing.