I know the Linux scheduler schedules `task_struct`s, each of which represents a thread. So if we have two processes, e.g., A contains 100 threads while B is single-threaded, how can the two processes be scheduled fairly, given that each thread is scheduled fairly?
Also, in Linux, a context switch between threads of the same process should be faster than one between threads of different processes, right? The latter involves switching per-process state (the process control block, including the address space), while the former doesn't.
1 Solution
The point you are missing here is how the scheduler looks at threads and tasks: the Linux kernel scheduler treats each of them as an individual scheduling entity, so each one is counted and scheduled separately.
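As a quick illustration of that point (a standard-library Python sketch, not from the original answer): each thread of a process has its own kernel task ID while sharing the same process ID, which is exactly why the scheduler counts threads as separate entities.

```python
import os
import threading

def worker(out):
    # Each thread is a separate kernel task, so it has its own TID.
    out["tid"] = threading.get_native_id()

info = {}
t = threading.Thread(target=worker, args=(info,))
t.start()
t.join()

main_tid = threading.get_native_id()
print("pid:", os.getpid())                # shared by both threads
print("main tid:", main_tid)              # differs from the worker's TID
print("worker tid:", info["tid"])
```

Both threads report the same PID, but distinct native (kernel) thread IDs; from the scheduler's point of view they are two independent tasks.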
Now let's see what the CFS documentation says: it takes a simple approach of handing out an even slice of CPU time to each runnable task, so if there are 4 runnable processes/threads, each gets 25% of the CPU time. On real hardware this ideal is not achievable exactly, and to approximate it the notion of vruntime was introduced (more on this here).
Coming back to your example: if process A has 100 threads and B has 1 thread, the number of runnable tasks becomes 101 (assuming all are in the runnable state), and CFS will share the CPU evenly using the formula 1/101 (CPU / number of running tasks). Context switching works the same way for all scheduling entities; threads of one process merely share the task's mm_struct, and when they run they each have their own set of registers and task state to load. Hope this helps you understand it better.
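The 1/(number of runnable tasks) arithmetic can be sketched directly (a hypothetical back-of-the-envelope model, not kernel code):

```python
# Back-of-the-envelope model of CFS's even per-task split for the example above.
threads_a = 100  # runnable threads in process A
threads_b = 1    # runnable threads in process B
runnable = threads_a + threads_b  # 101 runnable tasks in total

per_thread_share = 1 / runnable          # each task's share of the CPU
share_a = threads_a * per_thread_share   # process A's aggregate share
share_b = threads_b * per_thread_share   # process B's aggregate share

print(f"per thread: {per_thread_share:.2%}")  # ~0.99%
print(f"process A:  {share_a:.2%}")           # ~99.01%
print(f"process B:  {share_b:.2%}")           # ~0.99%
```

So per-task fairness does not imply per-process fairness: the process with more runnable threads gets proportionally more CPU, unless group scheduling (cgroups/autogroups) is used to balance at the group level instead.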