x86: Avoid 'constant_test_bit()' misoptimization due to cast to non-volatile
author	Alexander Chumachenko <ledest@gmail.com>
Thu, 1 Apr 2010 12:34:52 +0000 (15:34 +0300)
committer	H. Peter Anvin <hpa@zytor.com>
Mon, 27 Sep 2010 05:43:07 +0000 (22:43 -0700)
commit	c9e2fbd909c20b165b2b9ffb59f8b674cf0a55b0
tree	29b4f977e3a1fa4b0b85057c35652ee5fe58949e
parent	7329cf0201f48695862e334828a108aa7175e955
x86: Avoid 'constant_test_bit()' misoptimization due to cast to non-volatile

While debugging a bit_spin_lock() hang, the problem was tracked down to a
gcc-4.4 misoptimization of the non-inlined constant_test_bit(): the cast of
'const volatile unsigned long *addr' to 'unsigned long *' discards the
volatile qualifier, so the generated spin loop ends with an unconditional
jump back to pause (and not back to the test), leading to the hang.
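
For illustration, a sketch of the pre-fix x86 helper and a simplified
bit_spin_lock()-style wait loop (details differ from the actual kernel
sources); the cast is what discards the volatile qualifier:

	/* Pre-fix helper: the cast throws away 'volatile'. */
	static __always_inline int constant_test_bit(unsigned int nr,
			const volatile unsigned long *addr)
	{
		return ((1UL << (nr % BITS_PER_LONG)) &
			(((unsigned long *)addr)[nr / BITS_PER_LONG])) != 0;
	}

	/*
	 * Simplified waiter: without 'volatile' the compiler may assume
	 * the word cannot change underneath it, so the re-test is dropped
	 * and the loop degenerates into an unconditional jump back to
	 * cpu_relax() (the 'pause' instruction).
	 */
	while (test_bit(bitnum, addr))
		cpu_relax();

BITS_PER_LONG, test_bit(), cpu_relax(), bitnum and addr stand for the usual
kernel definitions; the loop is only shorthand for the wait path inside
bit_spin_lock().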

Compiling with gcc-4.3, or disabling CONFIG_OPTIMIZE_INLINING, yields an
inlined constant_test_bit() and a correct jump, thus working around the
kernel bug.

Architectures other than x86 may implement this slightly differently;
2.6.29 mitigates the misoptimization by changing the function prototype
(commit c4295fbb6048d85f0b41c5ced5cbf63f6811c46c), but fixing the issue
itself is preferable.
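
The change applied here (sketched below; see the diff against
arch/x86/include/asm/bitops.h for the exact hunk) is to index through the
original 'const volatile' pointer instead of casting it away, so every
call re-reads memory:

	static __always_inline int constant_test_bit(unsigned int nr,
			const volatile unsigned long *addr)
	{
		return ((1UL << (nr % BITS_PER_LONG)) &
			(addr[nr / BITS_PER_LONG])) != 0;
	}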

Signed-off-by: Alexander Chumachenko <ledest@gmail.com>
Signed-off-by: Michael Shigorin <mike@osdn.org.ua>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
arch/x86/include/asm/bitops.h