locking/rwsem: Avoid double checking before try acquiring write lock
Author:     Jason Low <jason.low2@hp.com>
AuthorDate: Wed, 17 Sep 2014 00:16:57 +0000 (17:16 -0700)
Commit:     Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 3 Oct 2014 04:09:29 +0000 (06:09 +0200)
Commit 9b0fc9c09f1b ("rwsem: skip initial trylock in rwsem_down_write_failed")
checks whether there are known active lockers in order to avoid a write
trylock using an expensive cmpxchg() when it likely wouldn't get the lock.

However, a subsequent patch added a direct check for
sem->count == RWSEM_WAITING_BIAS right before trying that cmpxchg().

Thus, commit 9b0fc9c09f1b now just adds overhead.

This patch modifies it so that we only check whether
count == RWSEM_WAITING_BIAS.

Also, add a comment on why we do an "extra check" of count
before the cmpxchg().
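
For illustration only, the "cheap check before an expensive cmpxchg()"
pattern can be sketched in plain C11 atomics. This is a user-space
analogue, not the kernel's rwsem code: the toy_* names and bias values
below are invented for the example, and only the relaxed-load-then-
compare-and-exchange shape mirrors the patch.

	#include <stdatomic.h>
	#include <stdbool.h>

	/* Illustrative values for a toy lock word; not the kernel's rwsem biases. */
	#define TOY_WAITING_BIAS	(-1L)
	#define TOY_ACTIVE_WRITE_BIAS	(-2L)

	/*
	 * Toy write-lock attempt: a cheap relaxed load filters out states in
	 * which the compare-and-exchange cannot succeed, so the expensive
	 * atomic is only attempted when the word looks like "waiters queued,
	 * no active lockers".  The cmpxchg revalidates the value, so a racing
	 * change between the load and the exchange is still handled safely.
	 */
	static bool toy_try_write_lock(_Atomic long *count)
	{
		long expected = TOY_WAITING_BIAS;

		if (atomic_load_explicit(count, memory_order_relaxed) != TOY_WAITING_BIAS)
			return false;

		return atomic_compare_exchange_strong(count, &expected,
						      TOY_ACTIVE_WRITE_BIAS);
	}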

Signed-off-by: Jason Low <jason.low2@hp.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Chegu Vinod <chegu_vinod@hp.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1410913017.2447.22.camel@j-VirtualBox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 12166ec..7628c3f 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -250,16 +250,18 @@ EXPORT_SYMBOL(rwsem_down_read_failed);
 
 static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
 {
-       if (!(count & RWSEM_ACTIVE_MASK)) {
-               /* try acquiring the write lock */
-               if (sem->count == RWSEM_WAITING_BIAS &&
-                   cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
-                           RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
-                       if (!list_is_singular(&sem->wait_list))
-                               rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
-                       return true;
-               }
+       /*
+        * Try acquiring the write lock. Check count first in order
+        * to reduce unnecessary expensive cmpxchg() operations.
+        */
+       if (count == RWSEM_WAITING_BIAS &&
+           cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
+                   RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
+               if (!list_is_singular(&sem->wait_list))
+                       rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
+               return true;
        }
+
        return false;
 }
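
A design note on the hunk above, hedged since the bias macros live in
rwsem.h and are not shown here: a successful cmpxchg() leaves the count
at the value for a writer-held lock, so when other tasks are still
queued on sem->wait_list the waiting bias has to be put back for them,
which is what rwsem_atomic_update(RWSEM_WAITING_BIAS, sem) does. The
toy walk-through below uses invented bias magnitudes to show only the
arithmetic, not the real constants.

	/*
	 * Toy walk-through of the waiter accounting with invented values;
	 * the real macros are defined in the kernel's rwsem headers.
	 */
	#include <assert.h>

	#define TOY_ACTIVE_BIAS		1L
	#define TOY_WAITING_BIAS	(-256L)
	#define TOY_ACTIVE_WRITE_BIAS	(TOY_WAITING_BIAS + TOY_ACTIVE_BIAS)

	int main(void)
	{
		/* Case 1: this writer was the only waiter. */
		long count = TOY_WAITING_BIAS;	/* waiters queued, lock idle */
		count = TOY_ACTIVE_WRITE_BIAS;	/* cmpxchg() succeeded       */
		assert(count == TOY_WAITING_BIAS + TOY_ACTIVE_BIAS);

		/* Case 2: other waiters remain, so the waiting bias is restored. */
		count = TOY_WAITING_BIAS;
		count = TOY_ACTIVE_WRITE_BIAS;
		count += TOY_WAITING_BIAS;	/* mirrors the rwsem_atomic_update() call */
		assert(count == TOY_ACTIVE_WRITE_BIAS + TOY_WAITING_BIAS);

		return 0;
	}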