From: Daniel Mentz
Date: Fri, 28 Oct 2016 00:46:59 +0000 (-0700)
Subject: lib/genalloc.c: start search from start of chunk
X-Git-Tag: v3.2.85~63
X-Git-Url: http://git.openpandora.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=56f3b449b5961a63b0c03bea225209ef00c3c666;p=pandora-kernel.git

lib/genalloc.c: start search from start of chunk

commit 62e931fac45b17c2a42549389879411572f75804 upstream.

gen_pool_alloc_algo() iterates over the chunks of a pool trying to find
a contiguous block of memory that satisfies the allocation request.

The shortcut

	if (size > atomic_read(&chunk->avail))
		continue;

makes the loop skip over chunks that do not have enough bytes left to
fulfill the request.

There are two situations, though, where an allocation might still fail:

(1) The available memory is not contiguous, i.e. the request cannot be
    fulfilled due to external fragmentation.

(2) A race condition.  Another thread runs the same code concurrently
    and is quicker to grab the available memory.

In those situations, the loop calls pool->algo() to search the entire
chunk, and pool->algo() returns some value that is >= end_bit to
indicate that the search failed.  This return value is then assigned to
start_bit.  The variables start_bit and end_bit describe the range that
should be searched, and this range should be reset for every chunk that
is searched.  Today, the code fails to reset start_bit to 0.  As a
result, prefixes of subsequent chunks are ignored.  Memory allocations
might fail even though there is plenty of room left in the prefixes of
those other chunks.

Fixes: 7f184275aa30 ("lib, Make gen_pool memory allocator lockless")
Link: http://lkml.kernel.org/r/1477420604-28918-1-git-send-email-danielmentz@google.com
Signed-off-by: Daniel Mentz
Reviewed-by: Mathieu Desnoyers
Acked-by: Will Deacon
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings
---

diff --git a/lib/genalloc.c b/lib/genalloc.c
index 716f94747c81..266e0e67d0aa 100644
--- a/lib/genalloc.c
+++ b/lib/genalloc.c
@@ -263,7 +263,7 @@ unsigned long gen_pool_alloc(struct gen_pool *pool, size_t size)
 	struct gen_pool_chunk *chunk;
 	unsigned long addr = 0;
 	int order = pool->min_alloc_order;
-	int nbits, start_bit = 0, end_bit, remain;
+	int nbits, start_bit, end_bit, remain;
 
 #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
 	BUG_ON(in_nmi());
@@ -278,6 +278,7 @@ unsigned long gen_pool_alloc(struct gen_pool *pool, size_t size)
 		if (size > atomic_read(&chunk->avail))
 			continue;
 
+		start_bit = 0;
 		end_bit = (chunk->end_addr - chunk->start_addr) >> order;
 retry:
 		start_bit = bitmap_find_next_zero_area(chunk->bits, end_bit,
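
For reference only (not part of the patch): a minimal, self-contained C
sketch of the chunk-scanning loop, using hypothetical names
(sketch_chunk, find_zero_area) rather than the kernel's gen_pool API.
It illustrates why start_bit must be reset before each chunk is
searched: a failed search returns a value >= end_bit, and if that stale
value is carried into the next iteration, the prefix of the next chunk
is never examined.

	/* One bit per allocation unit; a set bit means "in use". */
	struct sketch_chunk {
		const unsigned char *bits;
		unsigned long nr_bits;		/* units in this chunk */
		struct sketch_chunk *next;
	};

	/*
	 * Naive stand-in for bitmap_find_next_zero_area(): find nbits
	 * consecutive clear bits in [start_bit, end_bit).  On failure it
	 * returns end_bit, i.e. a value >= end_bit, just like the text
	 * above describes for pool->algo().
	 */
	static unsigned long find_zero_area(const unsigned char *bits,
					    unsigned long end_bit,
					    unsigned long start_bit,
					    unsigned long nbits)
	{
		unsigned long i, run = 0;

		for (i = start_bit; i < end_bit; i++) {
			if (bits[i / 8] & (1u << (i % 8)))
				run = 0;
			else if (++run == nbits)
				return i - nbits + 1;
		}
		return end_bit;
	}

	/* Return the offset of a free run in some chunk, or -1 if none. */
	static long sketch_alloc(struct sketch_chunk *chunks, unsigned long nbits)
	{
		unsigned long start_bit, end_bit;
		struct sketch_chunk *chunk;

		for (chunk = chunks; chunk; chunk = chunk->next) {
			/*
			 * The fix: without this reset, a failed search in the
			 * previous chunk leaves start_bit >= its end_bit, so
			 * the prefix of this chunk would be skipped.
			 */
			start_bit = 0;
			end_bit = chunk->nr_bits;
			start_bit = find_zero_area(chunk->bits, end_bit,
						   start_bit, nbits);
			if (start_bit >= end_bit)
				continue;	/* no contiguous run here */
			return (long)start_bit;
		}
		return -1;	/* no chunk can satisfy the request */
	}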