ARM: 7955/1: spinlock: ensure we have a compiler barrier before sev
author     Will Deacon <will.deacon@arm.com>
           Fri, 7 Feb 2014 18:12:32 +0000 (19:12 +0100)
committer  Ben Hutchings <ben@decadent.org.uk>
           Tue, 1 Apr 2014 23:58:50 +0000 (00:58 +0100)
commit 7c8746a9eb287642deaad0e7c2cdf482dce5e4be upstream.

When unlocking a spinlock, we require the following, strictly ordered
sequence of events:

<barrier> /* dmb */
<unlock>
<barrier> /* dsb */
<sev>
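
For reference, a minimal sketch of how this sequence maps onto the unlock
path on ARM kernels of this vintage (not the verbatim 3.2 source; the
function body here is assumed from the description above):

	static inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		smp_mb();	/* <barrier>: dmb, orders prior accesses before the unlock */
		lock->lock = 0;	/* <unlock>: release the lock word */
		dsb_sev();	/* <barrier> + <sev>: drain the store, then wake waiters */
	}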

Whilst the code does indeed reflect this in terms of the architecture,
the final <barrier> + <sev> have been contracted into a single inline
asm without a "memory" clobber, so the compiler is at liberty to reorder
the unlock to the end of the above sequence. In such a case, a waiting
CPU may be woken up before the lock has been unlocked, leading to
extremely poor performance.

This patch reworks the dsb_sev() function to make use of the dsb()
macro and ensure ordering against the unlock.
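
The dsb() macro carries a "memory" clobber, which is what forbids the
compiler from sinking the unlock store below the barrier. A sketch of
the relevant definitions (assumed to follow the 3.2-era asm/system.h;
the exact preprocessor guards may differ):

	#if __LINUX_ARM_ARCH__ >= 7
	#define dsb() __asm__ __volatile__ ("dsb" : : : "memory")
	#else
	/* pre-v7: data synchronisation barrier via CP15 */
	#define dsb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
					    : : "r" (0) : "memory")
	#endif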

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[bwh: Backported to 3.2: 'ishst' variant is not used here]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
arch/arm/include/asm/spinlock.h

index 65fa3c8..095a9a3 100644
 
 static inline void dsb_sev(void)
 {
-#if __LINUX_ARM_ARCH__ >= 7
-       __asm__ __volatile__ (
-               "dsb\n"
-               SEV
-       );
-#else
-       __asm__ __volatile__ (
-               "mcr p15, 0, %0, c7, c10, 4\n"
-               SEV
-               : : "r" (0)
-       );
-#endif
+
+       dsb();
+       __asm__(SEV);
 }
 
 /*