Binding KVM virtual machine vCPUs to physical CPUs

Date: 2022-12-25 07:49:53

Setting KVM processor affinities

This section covers setting processor and processing core affinities with libvirt for KVM guests. By default, libvirt provisions guests using the hypervisor's default policy. For most hypervisors, the policy is to run guests on any available processing core or CPU. There are times when an explicit policy may be better, in particular for systems with a NUMA (Non-Uniform Memory Access) architecture. A guest on a NUMA system should be pinned to a processing core so that its memory allocations are always local to the node it is running on. This avoids cross-node memory transfers, which have less bandwidth and can significantly degrade performance. On non-NUMA systems, some form of explicit placement across the host's sockets, cores and hyperthreads may be more efficient.

Identifying CPU and NUMA topology

The first step in deciding which policy to apply is to determine the host's memory and CPU topology. The virsh nodeinfo command provides information about how many sockets, cores and hyperthreads are attached to a host.
# virsh nodeinfo
CPU model: x86_64
CPU(s): 8
CPU frequency: 1000 MHz
CPU socket(s): 2
Core(s) per socket: 4
Thread(s) per core: 1
NUMA cell(s): 1
Memory size: 8179176 kB
This system has eight CPUs, in two sockets, and each processor has four cores. The output shows that the system has a NUMA architecture. NUMA is more complex and requires more data to interpret accurately. Use the virsh capabilities command to get additional data on the CPU configuration.
# virsh capabilities
<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
    </cpu>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <cpus num='4'>
            <cpu id='0'/>
            <cpu id='1'/>
            <cpu id='2'/>
            <cpu id='3'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='4'>
            <cpu id='4'/>
            <cpu id='5'/>
            <cpu id='6'/>
            <cpu id='7'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>selinux</model>
      <doi>0</doi>
    </secmodel>
  </host>

[ Additional XML removed ]

</capabilities>
The output shows two NUMA nodes (also known as NUMA cells), each containing four logical CPUs (four processing cores). This system has two sockets, so we can infer that each socket is a separate NUMA node. For a guest with four virtual CPUs, it would be optimal to lock the guest to physical CPUs 0 to 3, or 4 to 7, to avoid accessing non-local memory, which is significantly slower than accessing local memory. If a guest requires eight virtual CPUs, since each NUMA node only has four physical CPUs, better utilization may be obtained by running a pair of four virtual CPU guests and splitting the work between them, rather than using a single eight-CPU guest. Running across multiple NUMA nodes significantly degrades performance for physical and virtualized tasks.

Decide which NUMA node can run the guest

Locking a guest to a particular NUMA node offers no benefit if that node does not have sufficient free memory for that guest. libvirt stores information on the free memory available on each node. Use the virsh freecell command to display the free memory on all NUMA nodes.
# virsh freecell
0: 2203620 kB
1: 3354784 kB
If a guest requires 3 GB of RAM, then the guest should be run on NUMA node (cell) 1. Node 0 only has 2.2 GB free, which is probably not sufficient for certain guests.

Lock a guest to a NUMA node or physical CPU set

Once you have determined which node to run the guest on, refer to the capabilities data (the output of the virsh capabilities command) for the NUMA topology.
  1. Extract the topology section from the virsh capabilities output.
    <topology>
      <cells num='2'>
        <cell id='0'>
          <cpus num='4'>
            <cpu id='0'/>
            <cpu id='1'/>
            <cpu id='2'/>
            <cpu id='3'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='4'>
            <cpu id='4'/>
            <cpu id='5'/>
            <cpu id='6'/>
            <cpu id='7'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  2. Observe that node 1, <cell id='1'>, has physical CPUs 4 to 7.
  3. The guest can be locked to a set of CPUs by adding a cpuset attribute to the configuration file.
    1. While the guest is offline, open the configuration file with virsh edit.
    2. Locate where the guest's virtual CPU count is specified. Find the vcpu element.
      <vcpu>4</vcpu>
      The guest in this example has four CPUs.
    3. Add a cpuset attribute with the CPU numbers for the relevant NUMA cell.
      <vcpu cpuset='4-7'>4</vcpu>
  4. Save the configuration file and restart the guest; a sketch of the full sequence follows below.
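As a rough sketch of that sequence, assuming the guest is named guest1 (the name used in the running-guest examples later in this post):

# virsh shutdown guest1
# virsh edit guest1
(change <vcpu>4</vcpu> to <vcpu cpuset='4-7'>4</vcpu>, then save and exit)
# virsh start guest1

virsh edit opens the domain XML in an editor and re-validates it when you save, which is why configuration changes should go through virsh rather than editing the XML file directly.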
The guest has been locked to CPUs 4 to 7.

Automatically locking guests to CPUs with virt-install

The virt-install provisioning tool provides a simple way to automatically apply a 'best fit' NUMA policy when guests are created. The cpuset option for virt-install can take a set of physical CPUs or the parameter auto. The auto parameter automatically determines the optimal CPU locking using the available NUMA data. For a NUMA system, use --cpuset=auto with the virt-install command when creating new guests, for example:
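As an illustrative sketch only (the guest name, memory size, disk path and install source below are invented; adjust them for your environment):

# virt-install --name guest1 --ram 4096 --vcpus 4 --cpuset=auto \
    --disk path=/var/lib/libvirt/images/guest1.img,size=10 \
    --location http://example.com/rhel/ --nographics

The --cpuset option also accepts an explicit list such as --cpuset=4-7 if you prefer to choose the NUMA node yourself.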
Tuning CPU affinity on running guests

There may be times when modifying CPU affinities on running guests is preferable to rebooting the guest. The virsh vcpuinfo and virsh vcpupin commands can perform CPU affinity changes on running guests.
The virsh vcpuinfo command gives up-to-date information about where each virtual CPU is running.
In this example, guest1 is a guest with four virtual CPUs running on a KVM host.
# virsh vcpuinfo guest1
VCPU: 0
CPU: 3
State: running
CPU time: 0.5s
CPU Affinity: yyyyyyyy
VCPU: 1
CPU: 1
State: running
CPU Affinity: yyyyyyyy
VCPU: 2
CPU: 1
State: running
CPU Affinity: yyyyyyyy
VCPU: 3
CPU: 2
State: running
CPU Affinity: yyyyyyyy
The virsh vcpuinfo output (the yyyyyyyy value of CPU Affinity) shows that the guest can presently run on any CPU. Each character in the affinity string corresponds to one physical CPU: 'y' means the virtual CPU may run on that CPU, '-' means it may not.
To lock the virtual CPUs to the second NUMA node (CPUs four to seven), run the following commands.
# virsh vcpupin guest1 0 4
# virsh vcpupin guest1 1 5
# virsh vcpupin guest1 2 6
# virsh vcpupin guest1 3 7
The virsh vcpuinfo command confirms the change in affinity.
# virsh vcpuinfo guest1
VCPU: 0
CPU: 4
State: running
CPU time: 32.2s
CPU Affinity: ----y---
VCPU: 1
CPU: 5
State: running
CPU time: 16.9s
CPU Affinity: -----y--
VCPU: 2
CPU: 6
State: running
CPU time: 11.9s
CPU Affinity: ------y-
VCPU: 3
CPU: 7
State: running
CPU time: 14.6s
CPU Affinity: -------y
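Note that pinning applied with virsh vcpupin in this way affects only the running guest. To make it persist across guest restarts, either set the cpuset attribute in the domain XML as shown earlier or, on reasonably recent libvirt versions, repeat the command with the --config flag, for example:

# virsh vcpupin guest1 0 4 --config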

System resources often feel insufficient: a single machine runs no fewer than three fairly important services, yet every day we also run backup and compression jobs on it, plus long network transfers, which puts further pressure on resources that are already stretched.

In this situation we can confine the less important work, such as copy, backup and sync jobs, to a single CPU, or to one core of a multi-core CPU. This is not necessarily the most efficient approach, but it makes the most of the available resources and limits how much CPU those less important processes consume.

taskset can do this for us, and it is very simple to use.

The tool is installed by default; it comes from the util-linux RPM package.
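For instance, a less important job can be started already confined to a single CPU (the tar command and paths here are invented purely for illustration):

taskset -c 0 tar czf /backup/data.tar.gz /data

This starts the backup on CPU 0 only; the -p examples below show how to re-pin a process that is already running.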

To illustrate with an example, I used a CPU-burning shell script from an earlier post ([原] 消耗CPU资源的shell脚本) to max out 4 of the CPUs on a 16-CPU machine.


Using the top command you can see the effect of the 4 CPUs running at full load.

Now the taskset command can be used to adjust which CPUs these processes run on:

taskset -cp 1 25718
taskset -cp 3 25720
taskset -cp 5 25722
taskset -cp 7 25724

Check the effect again in top:

Sure enough, the CPU usage has been redistributed. In the same way, we can restrict a process to just a few of the CPUs:

taskset -cp 1,2 25718

See man taskset for more details.


Using the taskset command to pin a virtual machine to a fixed CPU

1) Pin a process (by PID) to a specific CPU:

[root@test ~]# taskset -p 000000000000000000000000000000000000100 95090

pid 95090's current affinity mask: 1

pid 95090's new affinity mask: 100

Explanation: this pins process 95090 to CPU 8 (the new mask 100 is hexadecimal, i.e. bit 8 is set).

95090 is the virtual machine's process ID, which I had found beforehand with ps -aux | grep "<VM name>".
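For reference, the same pinning can be expressed with a CPU list instead of a bit mask, which is usually easier to read:

[root@test ~]# taskset -cp 8 95090

Both forms restrict process 95090 to CPU 8.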

2) The vcpupin command is documented as: "Pin guest domain virtual CPUs to physical host CPUs."

Pinning command: virsh vcpupin 4 0 8 binds vCPU 0 of domain 4 to physical CPU 8.
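To check the result, recent versions of virsh let you run vcpupin with only the domain argument to list the current affinity of every vCPU, and vcpuinfo to see where each vCPU is actually running:

# virsh vcpupin 4
# virsh vcpuinfo 4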

 

3) Check which CPU a process is running on with ps -eopid,args,psr|grep 95090 (the psr column shows the processor the process is currently assigned to):

[root@test ~]# ps -eopid,args,psr|grep 95090

 95090 /usr/bin/qemu-system-test    8

 95091 [vhost-95090]                80

161336 grep --color=auto 95090      72

 

The difference between taskset and vcpupin:

taskset works at the granularity of a task (that is, the whole virtual machine): it binds all of the VM's CPUs as one unit to physical CPUs on the host. It cannot bind an individual vCPU of the VM to a specific physical CPU, so its granularity is coarse.

The vcpupin command, by contrast, can bind an individual vCPU of the VM to an individual physical CPU on the host.

For example, if vm1 has 4 vCPUs (cores) and the host has 8 CPUs (8 cores, assuming one thread per core), taskset can bind the 4 vCPUs as a group to one or more physical CPUs, while vcpupin can bind each vCPU to its own physical CPU.
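As a rough sketch that puts the two side by side (reusing the QEMU process ID 95090 and the domain name vm1 from the examples above; the -a option tells taskset to apply the mask to all threads of the process, including the vCPU threads):

# taskset -acp 4-7 95090
(the whole VM, i.e. all of its vCPU threads, may now float among CPUs 4-7)

# virsh vcpupin vm1 0 4
# virsh vcpupin vm1 1 5
# virsh vcpupin vm1 2 6
# virsh vcpupin vm1 3 7
(each vCPU of vm1 is pinned to its own dedicated physical CPU)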



ppc64_cpu --smt=on/off : enable or disable SMT
virsh list : list VMs and related information
virsh vcpuinfo <domain> : list a VM's vCPU information
cpupower -c all frequency-info : show the frequency of all CPUs
cpupower frequency-set -f 3.69GHz : set the frequency of all CPUs
virsh edit rhel1 : edit a VM's configuration file (do not modify the XML file directly; VM configuration changes should only be made through virsh)


References:

1. KVM上如何绑定虚拟机vcpu与物理CPU: http://blog.csdn.net/qianlong4526888/article/details/42554265

2. Red Hat Enterprise Linux 5 Virtualization Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Virtualization/ch33s08.html

3. 使用taskset命令来限制进程的CPU: http://www.cnblogs.com/killkill/archive/2012/04/08/2437960.html