sched: Rewrite tg_shares_up()
author     Peter Zijlstra <a.p.zijlstra@chello.nl>
           Mon, 15 Nov 2010 23:47:00 +0000 (15:47 -0800)
committer  Ingo Molnar <mingo@elte.hu>
           Thu, 18 Nov 2010 12:27:46 +0000 (13:27 +0100)
By tracking a per-cpu load average for each cfs_rq and folding it into a
global task_group load on each tick, tg_shares_up() can be reworked to
operate strictly per-cpu.
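
A minimal userspace sketch of that scheme follows; the struct and
function names (fold_cfs_load(), cfs_rq_shares(), etc.) are illustrative
assumptions, not the kernel's actual implementation. Each cpu keeps a
local load average for its cfs_rq and, on each tick, folds only the
delta since the last fold into a single atomic per-task_group sum, so a
cpu's share can be computed without cross-cpu locking or walking other
cpus' runqueues.

#include <stdatomic.h>
#include <stdint.h>

struct task_group_sketch {
	_Atomic int64_t load_avg;   /* sum of all per-cpu contributions */
	uint64_t shares;            /* configured group weight */
};

struct cfs_rq_sketch {
	uint64_t load_avg;          /* this cpu's decaying load average */
	uint64_t load_contribution; /* portion already folded into tg */
	struct task_group_sketch *tg;
};

/* Called from the periodic tick, on the owning cpu only: fold the
 * change in this cpu's load average into the global task_group sum. */
static void fold_cfs_load(struct cfs_rq_sketch *cfs_rq)
{
	int64_t delta = (int64_t)cfs_rq->load_avg -
			(int64_t)cfs_rq->load_contribution;

	if (delta) {
		atomic_fetch_add(&cfs_rq->tg->load_avg, delta);
		cfs_rq->load_contribution = cfs_rq->load_avg;
	}
}

/* This cpu's share of the group weight, proportional to its part of
 * the global load sum; reads one atomic, touches no other cpu. */
static uint64_t cfs_rq_shares(struct cfs_rq_sketch *cfs_rq)
{
	int64_t tg_load = atomic_load(&cfs_rq->tg->load_avg);

	if (tg_load <= 0)
		return cfs_rq->tg->shares;

	return cfs_rq->tg->shares * cfs_rq->load_avg / (uint64_t)tg_load;
}

Folding only the delta keeps the tick-time work O(1) per cpu and makes
the local cfs_rq the only state that needs updating outside the single
atomic add.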

This should significantly improve cpu-cgroup performance on SMP
systems.

[ Paul: changed to use queueing cfs_rq + bug fixes ]

Signed-off-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20101115234937.580480400@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
