From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
mainline inclusion
from mainline-v5.8-rc1
commit ae2c5d8042426b69c5f4a74296d1a20bb769a8ad
category: bugfix
bugzilla: 36222
CVE: NA
-------------------------------------------------
collapse_huge_page() tries to swap in pages that are part of the PMD range. A just-swapped-in page goes through the LRU add cache, and the cache takes an extra reference on the page.

The extra reference can make the collapse fail: the subsequent __collapse_huge_page_isolate() checks the refcount and aborts the collapse when it sees an unexpected value.

The fix is to drain the local LRU add cache in __collapse_huge_page_swapin() if we successfully swapped in any pages.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Link: http://lkml.kernel.org/r/20200416160026.16538-5-kirill.shutemov@linux.intel....
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/khugepaged.c | 5 +++++
 1 file changed, 5 insertions(+)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a0eae9df34bd..ad386978d7e0 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -937,6 +937,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 	}
 	vmf.pte--;
 	pte_unmap(vmf.pte);
+
+	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
+	if (swapped_in)
+		lru_add_drain();
+
 	trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 1);
 	return true;
 }