slab, lockdep: Annotate slab -> rcu -> debug_object -> slab
author Peter Zijlstra <peterz@infradead.org>
Fri, 22 Jul 2011 13:26:05 +0000 (15:26 +0200)
committer Ingo Molnar <mingo@elte.hu>
Thu, 4 Aug 2011 08:17:54 +0000 (10:17 +0200)
commit 83835b3d9aec8e9f666d8223d8a386814f756266
tree 6112e44af7202c41b48d468eff8b5a28138efd33
parent 70a0686a72c7a7e554b404ca11406ceec709d425
slab, lockdep: Annotate slab -> rcu -> debug_object -> slab

Lockdep thinks there's lock recursion through:

kmem_cache_free()
  cache_flusharray()
    spin_lock(&l3->list_lock)  <----------------.
    free_block()                                |
      slab_destroy()                            |
        call_rcu()                              |
          debug_object_activate()               |
            debug_object_init()                 |
              __debug_object_init()             |
                kmem_cache_alloc()              |
                  cache_alloc_refill()          |
                    spin_lock(&l3->list_lock) --'

Now debug objects doesn't use SLAB_DESTROY_BY_RCU, and hence there is no
actual possibility of recursion. Luckily debug objects marks its slab
cache with SLAB_DEBUG_OBJECTS, so we can identify it.

Mark all SLAB_DEBUG_OBJECTS slab caches (all one of them!) with a special
lockdep key so that lockdep sees it's a different cachep.

Also add a WARN on trying to create a SLAB_DESTROY_BY_RCU |
SLAB_DEBUG_OBJECTS cache, to avoid possible future trouble.

Reported-and-tested-by: Sebastian Siewior <sebastian@breakpoint.cc>
[ fixes to the initial patch ]
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1311341165.27400.58.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
mm/slab.c