ARM: 6517/1: kexec: Add missing memory clobber to inline asm in crash_setup_regs()
author Dave Martin <dave.martin@linaro.org>
Wed, 1 Dec 2010 17:05:13 +0000 (18:05 +0100)
committer Russell King <rmk+kernel@arm.linux.org.uk>
Sat, 4 Dec 2010 11:00:52 +0000 (11:00 +0000)
Currently, the inline asm is passed &newregs->ARM_r0 as an input,
while modifying multiple fields of newregs.

It's plausible to assume that GCC will conclude that
newregs->ARM_r0 is modified when passed its address, but
unfortunately this assumption is incorrect.

Also, GCC has no way to know that the other ARM_r* fields are
modified without the addition of a "memory" clobber.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arch/arm/include/asm/kexec.h

index 8ec9ef5..b37e02c 100644
@@ -34,7 +34,7 @@ static inline void crash_setup_regs(struct pt_regs *newregs,
                memcpy(newregs, oldregs, sizeof(*newregs));
        } else {
                __asm__ __volatile__ ("stmia %0, {r0 - r15}"
-                                     : : "r" (&newregs->ARM_r0));
+                                     : : "r" (&newregs->ARM_r0) : "memory");
                __asm__ __volatile__ ("mrs %0, cpsr"
                                      : "=r" (newregs->ARM_cpsr));
        }