
From: Ma Wupeng <mawupeng1@huawei.com>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I6RKHX
CVE: NA

--------------------------------

After fork, the child process erroneously ends up with a reliable page
count twice that of its parent. Examining struct mm_struct shows that
reliable_nr_page should be initialized to 0 in mm_init(), just as the
RSS counters are. The fork path is merely one example of this problem.

Fix it by setting reliable_nr_page to 0 in mm_init().

Fixes: 094eaabb3fe8 ("proc: Count reliable memory usage of reliable tasks")
Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
---
 include/linux/mem_reliable.h | 8 ++++++++
 kernel/fork.c                | 1 +
 2 files changed, 9 insertions(+)

diff --git a/include/linux/mem_reliable.h b/include/linux/mem_reliable.h
index 6d57c36fb676..aa3fe77c8a72 100644
--- a/include/linux/mem_reliable.h
+++ b/include/linux/mem_reliable.h
@@ -123,6 +123,13 @@ static inline bool mem_reliable_shmem_limit_check(void)
 		shmem_reliable_nr_page;
 }
 
+static inline void reliable_clear_page_counter(struct mm_struct *mm)
+{
+	if (!mem_reliable_is_enabled())
+		return;
+
+	atomic_long_set(&mm->reliable_nr_page, 0);
+}
 #else
 #define reliable_enabled 0
 #define reliable_allow_fb_enabled() false
@@ -171,6 +178,7 @@ static inline void reliable_lru_add_batch(int zid, enum lru_list lru,
 					  int val) {}
 
 static inline bool mem_reliable_counter_initialized(void) { return false; }
+static inline void reliable_clear_page_counter(struct mm_struct *mm) {}
 #endif
 
 #endif
diff --git a/kernel/fork.c b/kernel/fork.c
index b5453a26655e..c256525d4ce5 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1007,6 +1007,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	atomic_long_set(&mm->locked_vm, 0);
 	mm->pinned_vm = 0;
 	memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
+	reliable_clear_page_counter(mm);
 	spin_lock_init(&mm->page_table_lock);
 	spin_lock_init(&mm->arg_lock);
 	mm_init_cpumask(mm);
-- 
2.25.1