CFS stands for "Completely Fair Scheduler," and is the new "desktop" process
scheduler implemented by Ingo Molnar and merged in Linux 2.6.23. It is the
replacement for the previous vanilla scheduler's SCHED_OTHER interactivity
code.
80% of CFS's design can be summed up in a single sentence: CFS basically models
an "ideal, precise multi-tasking CPU" on real hardware.
"Ideal multi-tasking CPU" is a (non-existent :-)) CPU that has 100% physical
power and which can run each task at precise equal speed, in parallel, each at
1/nr_running speed. For example: if there are 2 tasks running, then it runs
each at 50% physical power --- i.e., actually in parallel.
On real hardware, we can run only a single task at once, so we have to
introduce the concept of "virtual runtime." The virtual runtime of a task
specifies when its next timeslice would start execution on the ideal
multi-tasking CPU described above. In practice, the virtual runtime of a task
is its actual runtime normalized to the total number of running tasks.
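As a rough sketch (not the kernel's actual code), the normalization can be
pictured as scaling each slice of real runtime by the task's weight before
adding it to the task's virtual clock. The NICE_0_LOAD value and the helper
name below are illustrative; the kernel's real version of this idea is
calc_delta_fair() in kernel/sched_fair.c, which also guards against overflow:

    /* Weight of a nice-0 task; heavier (lower-nice) tasks have larger
     * weights. The value is chosen here for illustration only. */
    #define NICE_0_LOAD 1024ULL

    /* Convert a slice of real runtime into vruntime: a nice-0 task's
     * virtual clock tracks real time, while a heavier task's virtual
     * clock advances more slowly, so the task stays "leftmost" longer
     * and thus receives proportionally more CPU time. */
    static inline unsigned long long
    vruntime_delta(unsigned long long delta_exec_ns, unsigned long weight)
    {
            return delta_exec_ns * NICE_0_LOAD / weight;
    }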
2. A FEW IMPLEMENTATION DETAILS
In CFS the virtual runtime is expressed and tracked via the per-task
p->se.vruntime (nanosec-unit) value. This way, it's possible to accurately
timestamp and measure the "expected CPU time" a task should have gotten.
[ small detail: on "ideal" hardware, at any time all tasks would have the same
p->se.vruntime value --- i.e., tasks would execute simultaneously and no task
would ever get "out of balance" from the "ideal" share of CPU time. ]
CFS's task picking logic is based on this p->se.vruntime value and it is thus
very simple: it always tries to run the task with the smallest p->se.vruntime
value (i.e., the task which executed least so far). CFS always tries to split
up CPU time between runnable tasks as close to "ideal multitasking hardware" as
possible.
Most of the rest of CFS's design just falls out of this really simple concept,
with a few add-on embellishments like nice levels, multiprocessing and various
algorithm variants to recognize sleepers.
3. THE RBTREE

CFS's design is quite radical: it does not use the old data structures for the
runqueues, but uses a time-ordered rbtree to build a "timeline" of future task
execution, and thus has no "array switch" artifacts (by which both the previous
vanilla scheduler and RSDL/SD are affected).
CFS also maintains the rq->cfs.min_vruntime value, which is a monotonically
increasing value tracking the smallest vruntime among all tasks in the
runqueue. The total amount of work done by the system is tracked using
min_vruntime; that value is used to place newly activated entities on the left
side of the tree as much as possible.
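A minimal sketch of the monotonicity requirement, with an illustrative helper
name (the real bookkeeping in sched_fair.c is more involved, since the
currently running task is not kept in the tree):

    /* Advance min_vruntime toward the smallest queued vruntime, but
     * never move it backwards; the signed comparison keeps the check
     * correct even when the unsigned nanosecond clocks wrap around. */
    static inline unsigned long long
    advance_min_vruntime(unsigned long long min_vruntime,
                         unsigned long long smallest_vruntime)
    {
            if ((long long)(smallest_vruntime - min_vruntime) > 0)
                    min_vruntime = smallest_vruntime;
            return min_vruntime;
    }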
The total number of running tasks in the runqueue is accounted for through the
rq->cfs.load value, which is the sum of the weights of the tasks queued on the
runqueue.
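Schematically (toy types and names, not the kernel's), enqueue and dequeue
keep this sum in step with the set of queued tasks:

    struct toy_cfs_rq {
            unsigned long load;        /* sum of queued task weights */
            unsigned long nr_running;  /* number of queued tasks */
    };

    /* Called when a task is added to the runqueue. */
    static void toy_account_enqueue(struct toy_cfs_rq *cfs_rq,
                                    unsigned long weight)
    {
            cfs_rq->load += weight;
            cfs_rq->nr_running++;
    }

    /* Called when a task is removed from the runqueue. */
    static void toy_account_dequeue(struct toy_cfs_rq *cfs_rq,
                                    unsigned long weight)
    {
            cfs_rq->load -= weight;
            cfs_rq->nr_running--;
    }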
CFS maintains a time-ordered rbtree, where all runnable tasks are sorted by the
p->se.vruntime key (there is a subtraction using rq->cfs.min_vruntime to
account for possible wraparounds). CFS picks the "leftmost" task from this
tree and sticks to it.
As the system progresses forwards, the executed tasks are put into the tree
more and more to the right --- slowly but surely giving every task a chance to
become the "leftmost task" and thus get on the CPU within a deterministic
amount of time.
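The wraparound-safe ordering can be sketched like this (illustrative helper
name; the kernel computes an equivalent signed key when inserting entities
into the tree):

    /* Sort key for the rbtree: the signed distance of a task's
     * vruntime from min_vruntime. Interpreting the difference as a
     * signed value keeps the relative order of tasks stable even
     * after the unsigned counters wrap. The task with the smallest
     * key is the tree's leftmost node, which CFS picks next. */
    static inline long long
    entity_key(unsigned long long vruntime, unsigned long long min_vruntime)
    {
            return (long long)(vruntime - min_vruntime);
    }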
Summing up, CFS works like this: it runs a task a bit, and when the task
schedules (or a scheduler tick happens) the task's CPU usage is "accounted
for": the (small) time it just spent using the physical CPU is added to
p->se.vruntime. Once p->se.vruntime gets high enough so that another task
becomes the "leftmost task" of the time-ordered rbtree it maintains (plus a
small amount of "granularity" distance relative to the leftmost task so that we
do not over-schedule tasks and thrash the cache), then the new leftmost task is
picked and the current task is preempted.
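In sketch form (toy names; granularity_ns stands in for the tunable described
in the next section, and resched_task() is a hypothetical helper), the
tick-time decision boils down to:

    struct toy_task {
            unsigned long long vruntime;
    };

    extern unsigned long long granularity_ns;      /* stand-in tunable */
    extern void resched_task(struct toy_task *t);  /* hypothetical helper */

    /* Preempt the current task only once the leftmost task trails it
     * by more than the granularity, so that we do not over-schedule. */
    static void tick_check_preempt(struct toy_task *curr,
                                   struct toy_task *leftmost)
    {
            if (leftmost != curr &&
                (long long)(curr->vruntime - leftmost->vruntime) >
                (long long)granularity_ns)
                    resched_task(curr);
    }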
4. SOME FEATURES OF CFS

CFS uses nanosecond granularity accounting and does not rely on any jiffies or
other HZ detail. Thus the CFS scheduler has no notion of "timeslices" in the
way the previous scheduler had, and has no heuristics whatsoever. There is
only one central tunable (you have to switch on CONFIG_SCHED_DEBUG):

    /proc/sys/kernel/sched_granularity_ns

which can be used to tune the scheduler from "desktop" (i.e., low latencies) to
"server" (i.e., good batching) workloads. It defaults to a setting suitable
for desktop workloads. SCHED_BATCH is handled by the CFS scheduler module too.
Due to its design, the CFS scheduler is not prone to any of the "attacks" that
exist today against the heuristics of the stock scheduler: fiftyp.c, thud.c,
chew.c, ring-test.c and massive_intr.c all work fine, do not impact
interactivity, and produce the expected behavior.
The CFS scheduler handles nice levels and SCHED_BATCH much more robustly than
the previous vanilla scheduler: both types of workloads are isolated much more
aggressively.
SMP load-balancing has been reworked/sanitized: the runqueue-walking
assumptions are gone from the load-balancing code now, and iterators of the
scheduling modules are used instead. The balancing code got quite a bit
simpler as a result.
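The iterator interface can be sketched as a small callback table, roughly in
the spirit of the struct rq_iterator used by the balancing code of this era
(abridged here; treat the exact layout as illustrative):

    struct task_struct;

    /* The balancer walks a scheduling class's runnable tasks through
     * these callbacks instead of making assumptions about how the
     * class stores its runqueue. */
    struct rq_iterator {
            void *arg;
            struct task_struct *(*start)(void *arg);
            struct task_struct *(*next)(void *arg);
    };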
5. SCHEDULING CLASSES

The new CFS scheduler has been designed in such a way as to introduce
"Scheduling Classes," an extensible hierarchy of scheduler modules. These
modules encapsulate scheduling policy details and are handled by the scheduler
core without the core code assuming too much about them.
sched_fair.c implements the CFS scheduler described above.
sched_rt.c implements SCHED_FIFO and SCHED_RR semantics, in a simpler way than
the previous vanilla scheduler did. It uses 100 runqueues (for all 100 RT
priority levels, instead of 140 in the previous scheduler) and it needs no
expired array.
Scheduling classes are implemented through the sched_class structure, which
contains hooks to functions that must be called whenever an interesting event
occurs.

This is the (partial) list of the hooks (a simplified sketch of the structure
itself follows the list):
 - enqueue_task(...)

   Called when a task enters a runnable state. It puts the scheduling entity
   (task) into the red-black tree and increments the nr_running variable.
 - dequeue_task(...)

   When a task is no longer runnable, this function is called to keep the
   corresponding scheduling entity out of the red-black tree. It decrements
   the nr_running variable.
 - yield_task(...)

   This function is basically just a dequeue followed by an enqueue, unless the
   compat_yield sysctl is turned on; in that case, it places the scheduling
   entity at the right-most end of the red-black tree.
 - check_preempt_curr(...)

   This function checks if a task that entered the runnable state should
   preempt the currently running task.
 - pick_next_task(...)

   This function chooses the most appropriate task eligible to run next.
 - set_curr_task(...)

   This function is called when a task changes its scheduling class or changes
   its task group.
 - task_tick(...)

   This function is mostly called from time tick functions; it might lead to a
   process switch. This drives the running preemption.
 - task_new(...)

   The core scheduler gives the scheduling module an opportunity to manage new
   task startup. The CFS scheduling module uses it for group scheduling, while
   the scheduling module for a real-time task does not use it.
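Put together, the hook table looks roughly like the following (abridged and
with simplified signatures; the real struct sched_class in kernel/sched.c has
additional members, e.g. put_prev_task() and the load-balancing callbacks):

    struct rq;
    struct task_struct;

    /* Each scheduling class provides one statically allocated instance
     * of this structure; the core scheduler dispatches through it. */
    struct sched_class {
            const struct sched_class *next;  /* next class, in priority order */

            void (*enqueue_task)(struct rq *rq, struct task_struct *p,
                                 int wakeup);
            void (*dequeue_task)(struct rq *rq, struct task_struct *p,
                                 int sleep);
            void (*yield_task)(struct rq *rq);

            void (*check_preempt_curr)(struct rq *rq, struct task_struct *p);
            struct task_struct *(*pick_next_task)(struct rq *rq);

            void (*set_curr_task)(struct rq *rq);
            void (*task_tick)(struct rq *rq, struct task_struct *p);
            void (*task_new)(struct rq *rq, struct task_struct *p);
    };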
6. GROUP SCHEDULER EXTENSIONS TO CFS

Normally, the scheduler operates on individual tasks and strives to provide
fair CPU time to each task. Sometimes, it may be desirable to group tasks and
provide fair CPU time to each such task group. For example, it may be
desirable to first provide fair CPU time to each user on the system and then
to each task belonging to a user.
CONFIG_GROUP_SCHED strives to achieve exactly that. It lets tasks be grouped
and divides CPU time fairly among such groups.
CONFIG_RT_GROUP_SCHED permits grouping of real-time (i.e., SCHED_FIFO and
SCHED_RR) tasks.

CONFIG_FAIR_GROUP_SCHED permits grouping of CFS (i.e., SCHED_NORMAL and
SCHED_BATCH) tasks.
At present, there are two (mutually exclusive) mechanisms to group tasks for
CPU bandwidth control purposes:

 - Based on user id (CONFIG_USER_SCHED)

   With this option, tasks are grouped according to their user id.
 - Based on the "cgroup" pseudo filesystem (CONFIG_CGROUP_SCHED)

   This option requires CONFIG_CGROUPS to be defined, and lets the
   administrator create arbitrary groups of tasks, using the "cgroup" pseudo
   filesystem. See Documentation/cgroups.txt for more information about this
   filesystem.
Only one of these options for grouping tasks can be chosen, not both.
When CONFIG_USER_SCHED is defined, a directory is created in sysfs for each
new user and a "cpu_share" file is added in that directory.
    # cd /sys/kernel/uids
    # cat 512/cpu_share             # Display user 512's CPU share
    1024
    # echo 2048 > 512/cpu_share     # Modify user 512's CPU share
    # cat 512/cpu_share             # Display user 512's CPU share
    2048
The CPU bandwidth between two users is divided in the ratio of their CPU
shares. For example: if you would like user "root" to get twice the bandwidth
of user "guest," then set the cpu_share for both users such that "root"'s
cpu_share is twice "guest"'s cpu_share.
When CONFIG_CGROUP_SCHED is defined, a "cpu.shares" file is created for each
group created using the pseudo filesystem. See the example steps below to
create task groups and modify their CPU share using the "cgroups" pseudo
filesystem.
    # mkdir /dev/cpuctl
    # mount -t cgroup -ocpu none /dev/cpuctl
    # cd /dev/cpuctl

    # mkdir multimedia      # create "multimedia" group of tasks
    # mkdir browser         # create "browser" group of tasks

    # # Configure the multimedia group to receive twice the CPU
    # # bandwidth of the browser group

    # echo 2048 > multimedia/cpu.shares
    # echo 1024 > browser/cpu.shares

    # firefox &             # Launch firefox and move it to "browser" group
    # echo <firefox_pid> > browser/tasks

    # # Launch gmplayer (or your favourite movie player)
    # echo <movie_player_pid> > multimedia/tasks