x86/mm: Improve switch_mm() barrier comments
author Andy Lutomirski <luto@kernel.org>
Tue, 12 Jan 2016 20:47:40 +0000 (12:47 -0800)
committer Ben Hutchings <ben@decadent.org.uk>
Sat, 13 Feb 2016 10:34:09 +0000 (10:34 +0000)
commit 4eaffdd5a5fe6ff9f95e1ab4de1ac904d5e0fa8b upstream.

My previous comments were still a bit confusing and there was a
typo. Fix it up.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 71b3c126e611 ("x86/mm: Add barriers and document switch_mm()-vs-flush synchronization")
Link: http://lkml.kernel.org/r/0a0b43cdcdd241c5faaaecfbcc91a155ddedc9a1.1452631609.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
arch/x86/include/asm/mmu_context.h

index bae923c..babbcd1 100644
@@ -103,14 +103,16 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
                 * be sent, and CPU 0's TLB will contain a stale entry.)
                 *
                 * The bad outcome can occur if either CPU's load is
-                * reordered before that CPU's store, so both CPUs much
+                * reordered before that CPU's store, so both CPUs must
                 * execute full barriers to prevent this from happening.
                 *
                 * Thus, switch_mm needs a full barrier between the
                 * store to mm_cpumask and any operation that could load
-                * from next->pgd.  This barrier synchronizes with
-                * remote TLB flushers.  Fortunately, load_cr3 is
-                * serializing and thus acts as a full barrier.
+                * from next->pgd.  TLB fills are special and can happen
+                * due to instruction fetches or for no reason at all,
+                * and neither LOCK nor MFENCE orders them.
+                * Fortunately, load_cr3() is serializing and gives the
+                * ordering guarantee we need.
                 *
                 */
                load_cr3(next->pgd);
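
[Editor's note: the comment above describes the classic "store buffering" pattern: each CPU performs a store and then a load, and if either load is reordered before that CPU's own store, both sides can miss the other's update. The following user-space sketch models that pattern with C11 atomics; it is purely illustrative, not kernel code, and the names (cpu0_in_mm, pte_updated, the seq_cst fences standing in for load_cr3() and the flusher's barrier) are invented for the example.]

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	/*
	 * Illustration only, not kernel code: cpu0_in_mm stands in for this
	 * CPU's bit in mm_cpumask, pte_updated for the flusher's page table
	 * update, and the seq_cst fences for load_cr3() (serializing) on one
	 * side and the flusher's full barrier on the other.
	 */
	static atomic_int cpu0_in_mm;
	static atomic_int pte_updated;

	static void *cpu0_switch_mm(void *arg)	/* models switch_mm() on CPU 0 */
	{
		(void)arg;
		atomic_store_explicit(&cpu0_in_mm, 1, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);  /* load_cr3() plays this role */
		/* models a TLB fill loading from next->pgd */
		return (void *)(long)atomic_load_explicit(&pte_updated, memory_order_relaxed);
	}

	static void *cpu1_flush(void *arg)	/* models the remote TLB flusher */
	{
		(void)arg;
		atomic_store_explicit(&pte_updated, 1, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);  /* the flusher's full barrier */
		/* models reading mm_cpumask to decide which CPUs get the IPI */
		return (void *)(long)atomic_load_explicit(&cpu0_in_mm, memory_order_relaxed);
	}

	int main(void)
	{
		pthread_t t0, t1;
		void *saw_pte, *saw_cpu0;

		pthread_create(&t0, NULL, cpu0_switch_mm, NULL);
		pthread_create(&t1, NULL, cpu1_flush, NULL);
		pthread_join(t0, &saw_pte);
		pthread_join(t1, &saw_cpu0);

		/*
		 * With full barriers on both sides, at least one thread must
		 * observe the other's store, so "0 0" (the stale-TLB outcome)
		 * cannot happen.
		 */
		printf("switch_mm side saw PTE update: %ld, flusher saw CPU 0: %ld\n",
		       (long)saw_pte, (long)saw_cpu0);
		return 0;
	}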
@@ -134,9 +136,8 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
                         * tlb flush IPI delivery. We must reload CR3
                         * to make sure to use no freed page tables.
                         *
-                        * As above, this is a barrier that forces
-                        * TLB repopulation to be ordered after the
-                        * store to mm_cpumask.
+                        * As above, load_cr3() is serializing and orders TLB
+                        * fills with respect to the mm_cpumask write.
                         */
                        load_cr3(next->pgd);
                        load_mm_ldt(next);
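
[Editor's note: the same requirement applies in this lazy-TLB branch: the mm_cpumask update is the store, and the CR3 reload is what keeps later TLB fills from being ordered before it. To see why the barrier is load-bearing, one can drop the fence from the switch_mm-side thread in the sketch above; this is still only a user-space model, not kernel code.]

	/*
	 * As above, but with the barrier removed on the switch_mm side.  The
	 * C11 memory model (and real x86 store buffering) now allows the load
	 * to complete before the store becomes visible, so both threads may
	 * report 0: the flusher skips the IPI and the stale translation
	 * survives -- exactly the outcome the comments warn about.
	 */
	static void *cpu0_switch_mm_no_barrier(void *arg)
	{
		(void)arg;
		atomic_store_explicit(&cpu0_in_mm, 1, memory_order_relaxed);
		/* no fence here: nothing orders the load after the store */
		return (void *)(long)atomic_load_explicit(&pte_updated, memory_order_relaxed);
	}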