From: Muchun Song <songmuchun@bytedance.com>
mainline inclusion
from mainline-5.12-rc1
commit 96403bfe50c344b587ea53894954a9d152af1c9d
category: bugfix
bugzilla: 51349
CVE: NA
-------------------------------------------------
SLUB currently accounts kmalloc() and kmalloc_node() allocations larger than
an order-1 page per-node, but it forgets to update the per-memcg vmstats.
This can lead to an inaccurate "slab_unreclaimable" value in memory.stat.
Fix it by using mod_lruvec_page_state() instead of mod_node_page_state().
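For context, the lruvec helper fixes the memcg statistics because it updates
the memcg-level counter in addition to the node-level one. The sketch below is
a simplified, hypothetical paraphrase of that behaviour; the helper name
sketch_mod_lruvec_page_state() and its body are illustrative only, not the
real mm/memcontrol.c implementation:

	/*
	 * Illustrative sketch only: a simplified view of what
	 * mod_lruvec_page_state() does compared to mod_node_page_state().
	 * The name and body are hypothetical, not the actual kernel code.
	 */
	static inline void sketch_mod_lruvec_page_state(struct page *page,
							enum node_stat_item idx,
							int val)
	{
		/* Per-node vmstat update, same as mod_node_page_state(). */
		mod_node_page_state(page_pgdat(page), idx, val);

		/*
		 * Also charge the memcg that owns the page, so the value is
		 * reflected in that cgroup's memory.stat
		 * ("slab_unreclaimable").
		 */
		if (page->mem_cgroup)
			mod_memcg_state(page->mem_cgroup, idx, val);
	}

With mod_node_page_state() alone, only the first update happens, so large
kmalloc() allocations never show up in the per-memcg counters.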
Link: https://lkml.kernel.org/r/20210223092423.42420-1-songmuchun@bytedance.com
Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Michal Koutný <mkoutny@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 96403bfe50c344b587ea53894954a9d152af1c9d)
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/slab_common.c | 4 ++--
 mm/slub.c        | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 321a9abed5d9d..b8b0df81bece3 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1259,8 +1259,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_pages(flags, order);
 	if (likely(page)) {
 		ret = page_address(page);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE,
-				    1 << order);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE,
+				      PAGE_SIZE << order);
 	}
 	kmemleak_alloc(ret, size, 1, flags);
 	kasan_kmalloc_large(ret, size, flags);
diff --git a/mm/slub.c b/mm/slub.c
index 0d69d5b3ceefe..12f23ceab1177 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3799,8 +3799,8 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	page = alloc_pages_node(node, flags, order);
 	if (page) {
 		ptr = page_address(page);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE,
-				    1 << order);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE,
+				      PAGE_SIZE << order);
 	}
 
 	kmalloc_large_node_hook(ptr, size, flags);
@@ -3940,8 +3940,8 @@ void kfree(const void *x)
 
 		BUG_ON(!PageCompound(page));
 		kfree_hook(object);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE,
-				    -(1 << order));
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE,
+				      -(PAGE_SIZE << order));
 		__free_pages(page, order);
 		return;
 	}