mm: memcontrol: uncharge pages on swapout
Author:    Johannes Weiner <hannes@cmpxchg.org>
Wed, 10 Dec 2014 23:43:54 +0000 (15:43 -0800)
Committer: Linus Torvalds <torvalds@linux-foundation.org>
Thu, 11 Dec 2014 01:41:07 +0000 (17:41 -0800)
This series gets rid of the remaining page_cgroup flags, thus cutting the
memcg per-page overhead down to one pointer.

This patch (of 4):

mem_cgroup_swapout() is called with exclusive access to the page at the
end of the page's lifetime.  Instead of clearing the PCG_MEMSW flag and
deferring the uncharge, just do it right away.  This allows follow-up
patches to simplify the uncharge code.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Vladimir Davydov <vdavydov@parallels.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8c10d4c..266a440 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5777,6 +5777,7 @@ static void __init enable_swap_cgroup(void)
  */
 void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 {
+       struct mem_cgroup *memcg;
        struct page_cgroup *pc;
        unsigned short oldid;
 
@@ -5793,13 +5794,22 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
                return;
 
        VM_BUG_ON_PAGE(!(pc->flags & PCG_MEMSW), page);
+       memcg = pc->mem_cgroup;
 
-       oldid = swap_cgroup_record(entry, mem_cgroup_id(pc->mem_cgroup));
+       oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg));
        VM_BUG_ON_PAGE(oldid, page);
+       mem_cgroup_swap_statistics(memcg, true);
+
+       pc->flags = 0;
 
-       pc->flags &= ~PCG_MEMSW;
-       css_get(&pc->mem_cgroup->css);
-       mem_cgroup_swap_statistics(pc->mem_cgroup, true);
+       if (!mem_cgroup_is_root(memcg))
+               page_counter_uncharge(&memcg->memory, 1);
+
+       /* XXX: caller holds IRQ-safe mapping->tree_lock */
+       VM_BUG_ON(!irqs_disabled());
+
+       mem_cgroup_charge_statistics(memcg, page, -1);
+       memcg_check_events(memcg, page);
 }
 
 /**