From: "Kirill A. Shutemov" kirill.shutemov@linux.intel.com
mainline inclusion
from mainline-v5.8-rc1
commit a980df33e9351e5474c06ec0fd96b2f409e2ff22
category: bugfix
bugzilla: 36242
CVE: NA
-------------------------------------------------
Having a page in the LRU add cache offsets the page refcount and gives a false negative on PageLRU(), which reduces the collapse success rate.

Drain all LRU add caches before scanning. This happens relatively rarely and should not disturb the system too much.
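For context, a minimal sketch of the failure mode. The helper below is hypothetical; it only condenses the refcount and LRU checks that khugepaged's isolation path performs around this kernel version:

	#include <linux/mm.h>		/* page_count(), page_mapcount() */
	#include <linux/page-flags.h>	/* PageLRU() */

	/*
	 * Hypothetical condensation of khugepaged's isolation checks.
	 * A page still sitting in a per-CPU LRU add cache holds one
	 * extra reference and has not been put on the LRU yet, so both
	 * checks below reject it even though it is a perfectly good
	 * collapse candidate.
	 */
	static bool collapse_candidate_ok(struct page *page)
	{
		/* Extra cache reference skews the expected count. */
		if (page_count(page) != 1 + page_mapcount(page))
			return false;	/* looks pinned */

		/* Not drained to the LRU yet. */
		if (!PageLRU(page))
			return false;	/* false negative on PageLRU() */

		return true;
	}

lru_add_drain_all() flushes the per-CPU LRU add caches on every CPU, so queued pages reach the LRU (and drop the extra reference) before the scan starts.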
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Link: http://lkml.kernel.org/r/20200416160026.16538-4-kirill.shutemov@linux.intel....
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/khugepaged.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0ad9f2b2b33e..a0eae9df34bd 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1834,6 +1834,8 @@ static void khugepaged_do_scan(void)
 
 	barrier(); /* write khugepaged_pages_to_scan to local stack */
 
+	lru_add_drain_all();
+
 	while (progress < pages) {
 		if (!khugepaged_prealloc_page(&hpage, &wait))
 			break;