mremap: don't do mm_populate(new_addr) on failure
author     Oleg Nesterov <oleg@redhat.com>
           Fri, 4 Sep 2015 22:48:07 +0000 (15:48 -0700)
committer  Linus Torvalds <torvalds@linux-foundation.org>
           Fri, 4 Sep 2015 23:54:41 +0000 (16:54 -0700)
move_vma() sets *locked even if move_page_tables() or ->mremap() fails;
change sys_mremap() to check "ret & ~PAGE_MASK" and clear "locked" on
failure, so we don't mm_populate() a new area that was never set up.

I think we should simply remove the VM_LOCKED code in move_vma(); that is
why this patch doesn't change move_vma() itself.  But that removal needs
more cleanups first.
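
For reference, a small standalone sketch (my illustration, not part of the
patch) of why the "ret & ~PAGE_MASK" test distinguishes success from
failure: on success sys_mremap() returns a page-aligned address, on failure
a negative errno whose low bits are set.

	/*
	 * Userspace sketch, not kernel code: only the errno case has
	 * bits set below PAGE_MASK, so "ret & ~PAGE_MASK" is non-zero
	 * exactly when the syscall failed.
	 */
	#include <stdio.h>

	#define PAGE_SIZE	4096UL
	#define PAGE_MASK	(~(PAGE_SIZE - 1))

	int main(void)
	{
		unsigned long ok  = 0x7f0000400000UL;	/* page-aligned new_addr */
		unsigned long err = (unsigned long)-14;	/* e.g. -EFAULT */

		printf("success -> %#lx\n", ok & ~PAGE_MASK);	/* 0: not an error */
		printf("failure -> %#lx\n", err & ~PAGE_MASK);	/* non-zero: error */
		return 0;
	}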

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Benjamin LaHaise <bcrl@kvack.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/mremap.c b/mm/mremap.c
index 3310378..7dcf7b4 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -578,8 +578,10 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
                ret = move_vma(vma, addr, old_len, new_len, new_addr, &locked);
        }
 out:
-       if (ret & ~PAGE_MASK)
+       if (ret & ~PAGE_MASK) {
                vm_unacct_memory(charged);
+               locked = 0;
+       }
        up_write(&current->mm->mmap_sem);
        if (locked && new_len > old_len)
                mm_populate(new_addr + old_len, new_len - old_len);
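
Hedged usage sketch (my addition, not part of the commit): the path above
matters when an mlock()ed mapping is grown with mremap() and has to move.
On success the kernel pre-faults the newly added tail via mm_populate();
with this change a failed move no longer populates new_addr.

	#define _GNU_SOURCE		/* for mremap() and MREMAP_MAYMOVE */
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t old_len = 4096, new_len = 8192;

		void *p = mmap(NULL, old_len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;

		mlock(p, old_len);	/* VM_LOCKED: any grown part must be populated */

		void *q = mremap(p, old_len, new_len, MREMAP_MAYMOVE);
		if (q == MAP_FAILED) {
			perror("mremap");	/* on failure nothing new should be populated */
			return 1;
		}

		memset(q, 0, new_len);	/* the locked, grown mapping is already resident */
		return 0;
	}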