mm: reduce atomic use on use_mm fast path
author	Michael S. Tsirkin <mst@redhat.com>	Tue, 22 Sep 2009 00:03:52 +0000 (17:03 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>	Tue, 22 Sep 2009 14:17:42 +0000 (07:17 -0700)
When the mm being switched to matches the active mm, we don't need to
increment and then drop the mm count.  In a simple benchmark this happens
about 50% of the time.  Making the reference count update conditional
reduces contention on that cacheline on SMP systems.

Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/mmu_context.c

index fd473b5..ded9081 100644
@@ -26,13 +26,16 @@ void use_mm(struct mm_struct *mm)
 
        task_lock(tsk);
        active_mm = tsk->active_mm;
-       atomic_inc(&mm->mm_count);
+       if (active_mm != mm) {
+               atomic_inc(&mm->mm_count);
+               tsk->active_mm = mm;
+       }
        tsk->mm = mm;
-       tsk->active_mm = mm;
        switch_mm(active_mm, mm, tsk);
        task_unlock(tsk);
 
-       mmdrop(active_mm);
+       if (active_mm != mm)
+               mmdrop(active_mm);
 }
 
 /*
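For reference, a minimal sketch of what use_mm() looks like with this patch
applied, reconstructed from the hunk above.  The local declarations at the
top of the function come from the surrounding file, not from this diff, so
treat them as assumptions; includes and the mmput/unuse_mm counterpart are
omitted.

void use_mm(struct mm_struct *mm)
{
	struct mm_struct *active_mm;
	struct task_struct *tsk = current;

	task_lock(tsk);
	active_mm = tsk->active_mm;
	if (active_mm != mm) {
		/* Only pin the new mm when it actually changes. */
		atomic_inc(&mm->mm_count);
		tsk->active_mm = mm;
	}
	tsk->mm = mm;
	switch_mm(active_mm, mm, tsk);
	task_unlock(tsk);

	/* Drop the old reference only if we took a new one above. */
	if (active_mm != mm)
		mmdrop(active_mm);
}

The point of the change is visible here: on the fast path where the task is
already running on the target mm, neither the atomic_inc() nor the mmdrop()
touches the shared mm_count cacheline.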