I have a Java app on Linux which opens a UDP socket and waits for messages.

After a couple of hours under heavy load, there is packet loss, i.e. the packets are received by the kernel but not by my app (we see the lost packets in a sniffer, we see UDP packets lost in netstat, and we don't see those packets in our app logs).

We tried enlarging the socket buffers, but this didn't help; we started losing packets later than before, but that's it.

For debugging, I want to know how full the OS UDP buffer is at any given moment. I googled, but didn't find anything. Can you help me?

P.S. Guys, I'm aware that UDP is unreliable. However, my computer receives all UDP messages, while my app is unable to consume some of them. I want to optimize my app to the max; that's the reason for the question. Thanks.
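For reference, this is roughly how we enlarge the receive buffer (the 4 MB figure is just an example). On Linux the kernel silently caps the requested size at net.core.rmem_max, so we read the value back to check whether the request was actually honored:

```java
import java.net.DatagramSocket;
import java.net.SocketException;

public class CheckRcvBuf {
    public static void main(String[] args) throws SocketException {
        try (DatagramSocket socket = new DatagramSocket()) {
            int requested = 4 * 1024 * 1024; // example value, not a recommendation
            socket.setReceiveBufferSize(requested);
            // The kernel may clamp the value (see net.core.rmem_max on Linux),
            // so read it back instead of trusting the setter.
            int actual = socket.getReceiveBufferSize();
            System.out.println("requested=" + requested + " actual=" + actual);
            if (actual < requested) {
                System.out.println("Buffer was clamped; raise net.core.rmem_max.");
            }
        }
    }
}
```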
4 Answers
#1
28
Linux provides the files /proc/net/udp and /proc/net/udp6, which list all open UDP sockets (for IPv4 and IPv6, respectively). In both of them, the columns tx_queue and rx_queue show the outgoing and incoming queues in bytes.

If everything is working as expected, you usually will not see any value other than zero in those two columns: as soon as your application generates packets they are sent through the network, and as soon as those packets arrive from the network your application will wake up and receive them (the recv call returns immediately). You may see rx_queue go up if your application has the socket open but is not invoking recv to receive the data, or if it is not processing such data fast enough.
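To automate this check, the rx_queue field can be parsed out of /proc/net/udp programmatically. A minimal sketch in Java (the class and method names are my own; the sample line is taken from the output shown in the next answer):

```java
// Parses the rx_queue field (bytes queued for reading) from one line of
// /proc/net/udp. Class and method names are illustrative.
public class RxQueue {
    // A data line looks like:
    //   "67: 00000000:231D 00000000:0000 07 00000000:0001E4C8 ..."
    // Field 4 is "tx_queue:rx_queue", both in hexadecimal.
    public static long parseRxQueueBytes(String procLine) {
        String[] fields = procLine.trim().split("\\s+");
        String[] queues = fields[4].split(":");
        return Long.parseLong(queues[1], 16); // rx_queue, hex -> bytes
    }

    public static void main(String[] args) {
        String sample = "67: 00000000:231D 00000000:0000 07 "
                + "00000000:0001E4C8 00:00000000 00000000 1006 0 "
                + "16940862 2 ffff88013abc9040 2237";
        System.out.println(parseRxQueueBytes(sample) + " bytes queued");
    }
}
```

In a real monitor you would stream /proc/net/udp, skip the header line, and filter on your socket's local port before parsing.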
#2
49
UDP is a perfectly viable protocol. It is the same old case of the right tool for the right job!

If you have a program that waits for UDP datagrams, and then goes off to process them before returning to wait for another, then your elapsed processing time needs to always be faster than the worst-case arrival rate of datagrams. If it is not, the UDP socket receive queue will begin to fill.

This can be tolerated for short bursts. The queue does exactly what it is supposed to do: queue datagrams until you are ready. But if the average arrival rate regularly causes a backlog in the queue, it is time to redesign your program. There are two main choices here: reduce the elapsed processing time via crafty programming techniques, and/or multi-thread your program. Load balancing across multiple instances of your program may also be employed.

As mentioned, on Linux you can examine the proc filesystem to get status on what UDP is up to. For example, if I cat the /proc/net/udp node, I get something like this:
$ cat /proc/net/udp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode ref pointer drops
40: 00000000:0202 00000000:0000 07 00000000:00000000 00:00000000 00000000 0 0 3466 2 ffff88013abc8340 0
67: 00000000:231D 00000000:0000 07 00000000:0001E4C8 00:00000000 00000000 1006 0 16940862 2 ffff88013abc9040 2237
122: 00000000:30D4 00000000:0000 07 00000000:00000000 00:00000000 00000000 1006 0 912865 2 ffff88013abc8d00 0
From this, I can see that a socket owned by user id 1006 is listening on port 0x231D (8989) and that the receive queue is at about 128KB. As 128KB is the max size on my system, this tells me my program is woefully weak at keeping up with the arriving datagrams. There have been 2237 drops so far, meaning the UDP layer cannot put any more datagrams into the socket queue and must drop them.

You could watch your program's behaviour over time, e.g. using:

watch -d 'cat /proc/net/udp|grep 00000000:231D'

Note also that the netstat command does about the same thing: netstat -c --udp -an
My solution for my weenie program will be to multi-thread.

Cheers!
#3
4
rx_queue will tell you the queue length at any given instant, but it will not tell you how full the queue has been, i.e. the high-water mark. There is no way to constantly monitor this value, and no way to get it programmatically (see How do I get amount of queued data for UDP socket?).

The only way I can imagine monitoring the queue length is to move the queue into your own program. In other words, start two threads: one reads the socket as fast as it can and dumps the datagrams into your queue; the other is your program pulling from this queue and processing the packets. This of course assumes that you can ensure each thread is on a separate CPU. Now you can monitor the length of your own queue and keep track of the high-water mark.
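A minimal sketch of that two-thread design in Java (the class names, port, and buffer size are my own choices, not from the answer): a blocking-queue wrapper that records its high-water mark, fed by a socket-reading thread and drained by a worker thread.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Blocking-queue wrapper that tracks the maximum size ever reached.
class MonitoredQueue<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
    private final AtomicInteger highWater = new AtomicInteger();

    public void put(T item) throws InterruptedException {
        queue.put(item);
        // Sampling the size after each put is approximate under concurrency,
        // but good enough for a high-water-mark diagnostic.
        highWater.accumulateAndGet(queue.size(), Math::max);
    }

    public T take() throws InterruptedException { return queue.take(); }

    public int highWaterMark() { return highWater.get(); }
}

public class TwoThreadReceiver {
    public static void main(String[] args) throws Exception {
        MonitoredQueue<DatagramPacket> queue = new MonitoredQueue<>();
        DatagramSocket socket = new DatagramSocket(8989); // example port

        // Reader thread: drain the kernel buffer as fast as possible.
        Thread reader = new Thread(() -> {
            try {
                while (true) {
                    byte[] buf = new byte[2048]; // fresh buffer per packet
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    queue.put(packet);
                }
            } catch (Exception e) { /* socket closed: stop */ }
        });
        reader.start();

        // Worker: the (possibly slow) processing loop, decoupled from the socket.
        while (true) {
            DatagramPacket packet = queue.take();
            process(packet);
            System.out.println("high-water mark: " + queue.highWaterMark());
        }
    }

    private static void process(DatagramPacket packet) {
        // application-specific processing goes here
    }
}
```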
#4
-1
The process is simple:

- If desired, pause the application process.
- Open the UDP socket. You can snag it from the running process using /proc/<PID>/fd if necessary. Or you can add this code to the application itself and send it a signal; it will already have the socket open, of course.
- Call recvmsg in a tight loop as quickly as possible.
- Count how many packets/bytes you got.

This will discard any datagrams currently buffered, but if that breaks your application, your application was already broken.
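In Java, where the recvmsg loop above corresponds to DatagramSocket.receive, the drain-and-count step could be sketched like this (the timeout and port are arbitrary examples):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketTimeoutException;

public class DrainAndCount {
    // Receives until the socket buffer is empty (a short timeout fires),
    // returning how many datagrams were queued. Destructive: the drained
    // datagrams are discarded.
    public static int drain(DatagramSocket socket) throws Exception {
        socket.setSoTimeout(200); // assume the buffer is empty after 200 ms idle
        byte[] buf = new byte[65535];
        int count = 0;
        try {
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                count++;
            }
        } catch (SocketTimeoutException e) {
            // no more queued datagrams
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(8989)) { // example port
            System.out.println("queued datagrams: " + drain(socket));
        }
    }
}
```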