From 600c8254c73e823a32dfb2a73939c47afb5d8ebe Mon Sep 17 00:00:00 2001
From: Jia He <hejianet@gmail.com>
Date: Tue, 2 Aug 2016 14:02:31 -0700
Subject: [PATCH] mm/hugetlb: avoid soft lockup in set_max_huge_pages()

commit 649920c6ab93429b94bc7c1aa7c0e8395351be32 upstream.

In powerpc servers with large memory (32TB), we observed several soft
lockups for hugepages under stress tests.

The call traces are as follows:

1. get_page_from_freelist+0x2d8/0xd50
   __alloc_pages_nodemask+0x180/0xc20
   alloc_fresh_huge_page+0xb0/0x190
   set_max_huge_pages+0x164/0x3b0

2. prep_new_huge_page+0x5c/0x100
   alloc_fresh_huge_page+0xc8/0x190
   set_max_huge_pages+0x164/0x3b0

This patch fixes such soft lockups.  It is safe to call cond_resched()
there because it is outside the spin_lock/unlock section.

Link: http://lkml.kernel.org/r/1469674442-14848-1-git-send-email-hejianet@gmail.com
Signed-off-by: Jia He <hejianet@gmail.com>
Reviewed-by: Naoya Horiguchi
Acked-by: Michal Hocko
Acked-by: Dave Hansen
Cc: Mike Kravetz
Cc: "Kirill A. Shutemov"
Cc: Paul Gortmaker
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings
---
 mm/hugetlb.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c52095ce40b4..390f0ac4eed6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1417,6 +1417,10 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
 		 * and reducing the surplus.
 		 */
 		spin_unlock(&hugetlb_lock);
+
+		/* yield cpu to avoid soft lockup */
+		cond_resched();
+
 		ret = alloc_fresh_huge_page(h, nodes_allowed);
 		spin_lock(&hugetlb_lock);
 		if (!ret)
-- 
2.39.2
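
For context, the fix amounts to yielding the CPU once per iteration of a
long-running allocation loop, at a point where hugetlb_lock has already been
dropped.  The following is a minimal sketch of that pattern in kernel-style C,
not the actual mm/hugetlb.c code; the names pool_lock, grow_pool() and
alloc_one_page() are hypothetical placeholders.

#include <linux/spinlock.h>
#include <linux/sched.h>

static DEFINE_SPINLOCK(pool_lock);

/* Hypothetical allocator; like alloc_fresh_huge_page() it may take a while. */
static int alloc_one_page(void);

static int grow_pool(unsigned long count)
{
	int ret = 0;

	spin_lock(&pool_lock);
	while (count--) {
		/* The allocation must run unlocked: it can sleep and,
		 * on large machines, each pass can take a long time. */
		spin_unlock(&pool_lock);

		/* Yield the CPU between iterations, as the patch does;
		 * safe here because the spinlock is not held. */
		cond_resched();

		ret = alloc_one_page();
		spin_lock(&pool_lock);
		if (!ret)
			break;
	}
	spin_unlock(&pool_lock);

	return ret;
}

Without the cond_resched(), a task looping like this on a huge request (e.g. a
large write to nr_hugepages on a 32TB box) can monopolize a CPU long enough to
trip the soft-lockup watchdog; yielding once per iteration keeps the loop
preemptible without changing its behaviour.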