From: Yang Shi <yang.shi@linux.alibaba.com>
mainline inclusion
from mainline-v5.4-rc1
commit 7ae88534cdd96235cd775c03b32a75009355740b
category: bugfix
bugzilla: 47240
CVE: NA
-------------------------------------------------
A later patch makes the THP deferred split shrinker memcg aware, but it needs the page->mem_cgroup information in the THP destructor, which currently runs after mem_cgroup_uncharge().
So move mem_cgroup_uncharge() from __page_cache_release() to the compound page destructor, which is called for both THP and other compound pages except HugeTLB. And call it from __put_single_page() for order-0 pages.
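For reference, a condensed sketch of the resulting ordering (simplified, not the literal kernel code; the real change is in the diff below): the uncharge moves inside free_compound_page() for compound pages and into __put_single_page() for order-0 pages.

void free_compound_page(struct page *page)
{
	/* The uncharge now happens inside the destructor, so a THP
	 * destructor can still see page->mem_cgroup before this point. */
	mem_cgroup_uncharge(page);
	__free_pages_ok(page, compound_order(page));
}

static void __put_single_page(struct page *page)
{
	__page_cache_release(page);	/* no longer uncharges */
	mem_cgroup_uncharge(page);	/* order-0 pages uncharge here */
	free_unref_page(page);
}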
Link: http://lkml.kernel.org/r/1565144277-36240-3-git-send-email-yang.shi@linux.al...
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Suggested-by: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 mm/page_alloc.c | 1 +
 mm/swap.c       | 2 +-
 mm/vmscan.c     | 6 ++----
 3 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f5746b4c5a3f..d85b5a0bfda6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -593,6 +593,7 @@ static void bad_page(struct page *page, const char *reason,
 
 void free_compound_page(struct page *page)
 {
+	mem_cgroup_uncharge(page);
 	__free_pages_ok(page, compound_order(page));
 }
 
diff --git a/mm/swap.c b/mm/swap.c
index bdb9b294afbf..002c98a81555 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -71,12 +71,12 @@ static void __page_cache_release(struct page *page)
 		spin_unlock_irqrestore(zone_lru_lock(zone), flags);
 	}
 	__ClearPageWaiters(page);
-	mem_cgroup_uncharge(page);
 }
 
 static void __put_single_page(struct page *page)
 {
 	__page_cache_release(page);
+	mem_cgroup_uncharge(page);
 	free_unref_page(page);
 }
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f49c11a6cff3..497df95dd7b8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1475,10 +1475,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		 * Is there need to periodically free_page_list? It would
 		 * appear not as the counts should be low
 		 */
-		if (unlikely(PageTransHuge(page))) {
-			mem_cgroup_uncharge(page);
+		if (unlikely(PageTransHuge(page)))
 			(*get_compound_page_dtor(page))(page);
-		} else
+		else
 			list_add(&page->lru, &free_pages);
 
 		continue;
@@ -1876,7 +1875,6 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 
 		if (unlikely(PageCompound(page))) {
 			spin_unlock_irq(&pgdat->lru_lock);
-			mem_cgroup_uncharge(page);
 			(*get_compound_page_dtor(page))(page);
 			spin_lock_irq(&pgdat->lru_lock);
 		} else