
Date: 2021-11-25 02:29:59

Source: https://www.cyberciti.biz/faq/linux-traffic-shaping-using-tc-to-control-http-traffic/

I have a 10Mbps port dedicated to our small business server. The server also acts as a backup DNS server, and I'd like to slow down outbound traffic on port 80. How do I limit the bandwidth allocated to the HTTP service to 5Mbps (bursting to 8Mbps) at peak times, so that DNS and other services will not go down due to heavy activity, under a Linux operating system?

You need to use the tc command, which can slow down traffic for a given port or service on a server. This is called traffic shaping:

 

When traffic is shaped, its rate of transmission is under control; in other words, you apply some sort of bandwidth allocation for each port or service. Shaping occurs on egress.

You can only apply traffic shaping to outgoing or forwarded traffic, i.e. you have no shaping control over traffic arriving at the server. However, tc can apply policing controls to arriving traffic; policing thus occurs on ingress. This FAQ only deals with traffic shaping.
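As a preview of how the original question (HTTP limited to 5Mbps, bursting to 8Mbps) could be handled on egress, here is a hedged sketch using an HTB qdisc plus a u32 filter. The interface name, class IDs, and rates are illustrative assumptions, not the article's own example:

```shell
# Hedged sketch (requires root; eth0 and the class IDs are assumptions).
# Create an HTB hierarchy and direct outbound HTTP (source port 80)
# into a class limited to 5mbit that may borrow up to 8mbit.
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 5mbit ceil 8mbit
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 5mbit ceil 10mbit
tc filter add dev eth0 protocol ip parent 1: prio 1 u32 \
    match ip sport 80 0xffff flowid 1:10
```

Traffic not matched by the filter falls into the default class 1:30, so DNS and other services keep their own share of the link.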

Token Bucket (TB)

A token bucket is a common algorithm used to control the amount of data injected into a network while still allowing bursts of data to be sent. It is used for network traffic shaping and rate limiting. With a token bucket you can define the maximum rate of traffic allowed on an interface at a given moment in time.

     tokens/sec
         |
         |
         |        Bucket to
         |        hold b tokens
   +=====+======+
         |
         |
        \|/
 Packets     +============+
 stream ---> | token wait | ---> Remove token ---> eth0
             +============+

The TB filter puts tokens into the bucket at a certain rate.

Each token is permission for the source to send a specific number of bits into the network.

The bucket can hold b tokens, as per the shaping rules.

The kernel can send a packet if a token is available; otherwise the traffic has to wait.
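The token-bucket behaviour described above can be sketched as a tiny shell simulation. All values here are illustrative; this is not the kernel's implementation:

```shell
# Token bucket simulation sketch (illustrative values, not the kernel code).
# Tokens are added at RATE per tick, capped at BUCKET (b); a packet
# costing SIZE tokens is sent only when enough tokens are available.
RATE=3       # tokens added each tick
BUCKET=8     # maximum tokens the bucket can hold (b)
SIZE=4       # tokens one packet costs
tokens=0
for tick in 1 2 3 4 5; do
  tokens=$((tokens + RATE))
  [ "$tokens" -gt "$BUCKET" ] && tokens=$BUCKET
  if [ "$tokens" -ge "$SIZE" ]; then
    tokens=$((tokens - SIZE))
    echo "tick $tick: packet sent, $tokens tokens left"
  else
    echo "tick $tick: packet waits, $tokens tokens"
  fi
done
```

Ticks 1 and 5 show the packet waiting because fewer than SIZE tokens have accumulated; the intermediate ticks show tokens being spent as packets go out.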

How Do I Use the tc Command?

WARNING! These examples require a good understanding of TCP/IP and other networking concepts. New users should try out the examples in a test environment.

The tc command is installed by default on most Linux distributions. To list existing rules, enter:
# tc -s qdisc ls dev eth0
Sample outputs:

qdisc pfifo_fast 0: root bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 2732108 bytes 10732 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0

Your First Traffic Shaping Rule

First, send a ping request to cyberciti.biz from your local Linux workstation and note down the round-trip time, enter:
# ping cyberciti.biz
Sample outputs:

PING cyberciti.biz (74.86.48.99) 56(84) bytes of data.
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=1 ttl=47 time=304 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=2 ttl=47 time=304 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=3 ttl=47 time=304 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=4 ttl=47 time=304 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=5 ttl=47 time=304 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=6 ttl=47 time=304 ms

Type the following tc command to add a 200 ms delay to all outgoing packets on eth0:
# tc qdisc add dev eth0 root netem delay 200ms
Now, send ping requests again:
# ping cyberciti.biz
Sample outputs:

PING cyberciti.biz (74.86.48.99) 56(84) bytes of data.
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=1 ttl=47 time=505 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=2 ttl=47 time=505 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=3 ttl=47 time=505 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=4 ttl=47 time=505 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=5 ttl=47 time=505 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=6 ttl=47 time=505 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=7 ttl=47 time=505 ms
64 bytes from txvip1.simplyguide.org (74.86.48.99): icmp_seq=8 ttl=47 time=505 ms
^C
--- cyberciti.biz ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7006ms
rtt min/avg/max/mdev = 504.464/505.303/506.308/0.949 ms

To list current rules, enter:
# tc -s qdisc ls dev eth0
Sample outputs:

qdisc netem 8001: root limit 1000 delay 200.0ms
 Sent 175545 bytes 540 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
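netem can do more than add a fixed delay; it can also introduce jitter and random packet loss. A hedged sketch with illustrative values:

```shell
# Replace the existing netem rule with a 200 ms delay that varies
# by +/- 20 ms, plus 1% random packet loss (illustrative values;
# requires root and an existing netem qdisc on eth0).
tc qdisc change dev eth0 root netem delay 200ms 20ms loss 1%
```

This is useful for simulating a lossy WAN link when testing applications against realistic network conditions.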

To delete all rules, enter:
# tc qdisc del dev eth0 root
# tc -s qdisc ls dev eth0

TBF Example
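A minimal TBF rule might look like the following sketch; the rate, burst, and latency values are illustrative assumptions, and the worked example may differ:

```shell
# Hedged sketch (requires root): shape all egress traffic on eth0
# with a Token Bucket Filter. burst is the bucket size (b tokens);
# latency bounds how long a packet may sit waiting for tokens.
tc qdisc add dev eth0 root tbf rate 5mbit burst 16kb latency 50ms
```

Note that TBF shapes the whole interface; to shape only one service (such as HTTP), TBF is normally combined with a classful qdisc and filters.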