lockdep: Fix redundant_hardirqs_on incremented with irqs enabled
author    Frederic Weisbecker <fweisbec@gmail.com>
          Thu, 15 Apr 2010 21:10:42 +0000 (23:10 +0200)
committer Frederic Weisbecker <fweisbec@gmail.com>
          Fri, 30 Apr 2010 17:15:49 +0000 (19:15 +0200)
When a path restores the flags while irqs are already enabled, we
update the per cpu var redundant_hardirqs_on in a racy fashion and
debug_atomic_inc() warns about this situation.
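
debug_atomic_inc() expects to be called with irqs disabled so that the
plain per cpu update can't be interrupted, roughly along these lines
(a sketch of the idea, not the exact lockdep_internals.h definition):

    #define debug_atomic_inc(ptr) do {                                \
            WARN_ON_ONCE(!irqs_disabled()); /* fires in this case */  \
            __this_cpu_inc(lockdep_stats.ptr);                        \
    } while (0)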

In this particular case, losing a few hits in a stat is not a big
deal, so increment it without protection.

v2: Don't bother with disabling irqs; we can miss one count in
    rare situations
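
An illustrative sketch of such a path (with CONFIG_TRACE_IRQFLAGS,
local_irq_restore() calls trace_hardirqs_on() before the raw restore
when the saved flags had irqs enabled):

    unsigned long flags;

    local_irq_save(flags);     /* irqs were enabled, flags records that */
    local_irq_enable();        /* irqs re-enabled before the restore    */
    local_irq_restore(flags);  /* trace_hardirqs_on() runs with irqs on,
                                  curr->hardirqs_enabled already set    */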

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: David Miller <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
kernel/lockdep.c

index 78325f8..1b58a1b 100644
@@ -2298,7 +2298,12 @@ void trace_hardirqs_on_caller(unsigned long ip)
                return;
 
        if (unlikely(curr->hardirqs_enabled)) {
-               debug_atomic_inc(redundant_hardirqs_on);
+               /*
+                * Neither irqs nor preemption are disabled here,
+                * so this is racy by nature, but losing one hit
+                * in a stat is not a big deal.
+                */
+               this_cpu_inc(lockdep_stats.redundant_hardirqs_on);
                return;
        }
        /* we'll do an OFF -> ON transition: */