x86: voluntary leave_mm before entering ACPI C3
authorVenki Pallipadi <venkatesh.pallipadi@intel.com>
Wed, 30 Jan 2008 12:32:01 +0000 (13:32 +0100)
committerIngo Molnar <mingo@elte.hu>
Wed, 30 Jan 2008 12:32:01 +0000 (13:32 +0100)
commitbde6f5f59c2b2b48a7a849c129d5b48838fe77ee
tree4fa3befdfa227db56770a0dc85b8fc18be232f70
parent7d409d6057c7244f8757ce15245f6df27271be0c
x86: voluntary leave_mm before entering ACPI C3

Avoid TLB flush IPIs during C3 states by voluntarily calling leave_mm()
before entering C3.
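
A minimal sketch of the idea (the helper name acpi_unlazy_tlb() and the
exact wiring below are illustrative assumptions, not necessarily the
literal hunks of this commit): an arch hook drops the lazy mm on x86,
is a no-op on ia64, and is called from the ACPI idle path just before
the C3 entry sequence.

	/* include/asm-x86/acpi.h: drop the lazy mm so this CPU leaves
	 * cpu_vm_mask and stops receiving TLB-flush IPIs while asleep */
	#define acpi_unlazy_tlb(x)	leave_mm(x)

	/* include/asm-ia64/acpi.h: nothing to do on ia64 */
	#define acpi_unlazy_tlb(x)

	/* drivers/acpi/processor_idle.c: call the hook right before
	 * the existing C3 entry sequence */
	acpi_unlazy_tlb(smp_processor_id());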

The performance impact of the TLB flush should not be significant relative to
C3 wakeup latency. Also, CPUs tend to flush the TLB in hardware while in C3
anyway.

On an 8 logical CPU system running make -j2, the number of TLB-flush IPIs goes
down from 40 per second to ~0. The total number of interrupts during the run
of this workload was ~1200 per second, so this is a ~3% saving in wakeups.

There was no measurable performance or power impact, however.

[ akpm@linux-foundation.org: symbol export fixes. ]

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
arch/x86/kernel/smp_32.c
arch/x86/kernel/smp_64.c
drivers/acpi/processor_idle.c
include/asm-ia64/acpi.h
include/asm-x86/acpi.h
include/asm-x86/mmu.h
include/asm-x86/mmu_context_32.h