From: Ma Wupeng <mawupeng1@huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I6RKHX
CVE: NA
--------------------------------
After fork, the child process erroneously ends up with a reliable page count twice that of its parent.

Inspection of mm_struct shows that reliable_nr_page should be initialized to 0 in mm_init(), just as the RSS counters are; the inflated count after fork is only one symptom of the missing initialization.

Fix this by clearing reliable_nr_page in mm_init().
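
To make the doubling easier to see, below is a minimal user-space sketch of the fork path. It is an illustrative model only, not kernel code: fake_mm, fake_mm_init() and fake_dup_mm() are made-up stand-ins, and it assumes that, like the RSS counters, reliable_nr_page is bumped again when the parent's pages are duplicated during fork.

/* Illustrative model of the bug, not kernel code. */
#include <stdio.h>
#include <string.h>

struct fake_mm {
	long rss_stat;
	long reliable_nr_page;
};

/* Stand-in for mm_init(): with the fix, the reliable counter is
 * cleared here together with the RSS counters. */
static void fake_mm_init(struct fake_mm *mm, int clear_reliable)
{
	memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
	if (clear_reliable)
		mm->reliable_nr_page = 0;
}

/* Stand-in for dup_mm(): the whole struct is copied first, so the
 * parent's counter is inherited, then the duplicated "pages" are
 * counted a second time. */
static void fake_dup_mm(struct fake_mm *child, const struct fake_mm *parent,
			int clear_reliable)
{
	memcpy(child, parent, sizeof(*child));	/* inherits counter */
	fake_mm_init(child, clear_reliable);
	child->reliable_nr_page += parent->reliable_nr_page; /* pages copied */
}

int main(void)
{
	struct fake_mm parent = { .rss_stat = 100, .reliable_nr_page = 100 };
	struct fake_mm buggy, fixed;

	fake_dup_mm(&buggy, &parent, 0);
	fake_dup_mm(&fixed, &parent, 1);

	/* prints: parent 100, without clear 200, with clear 100 */
	printf("parent %ld, without clear %ld, with clear %ld\n",
	       parent.reliable_nr_page, buggy.reliable_nr_page,
	       fixed.reliable_nr_page);
	return 0;
}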
Fixes: d81e9624de21 ("proc: Count reliable memory usage of reliable tasks")
Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
---
 include/linux/mem_reliable.h | 9 +++++++++
 kernel/fork.c                | 1 +
 2 files changed, 10 insertions(+)
diff --git a/include/linux/mem_reliable.h b/include/linux/mem_reliable.h
index ddadf2803742..79228a1b2f0b 100644
--- a/include/linux/mem_reliable.h
+++ b/include/linux/mem_reliable.h
@@ -135,6 +135,14 @@ static inline void reliable_page_counter(struct page *page,
 	if (page_reliable(page))
 		atomic_long_add(val, &mm->reliable_nr_page);
 }
+
+static inline void reliable_clear_page_counter(struct mm_struct *mm)
+{
+	if (!mem_reliable_is_enabled())
+		return;
+
+	atomic_long_set(&mm->reliable_nr_page, 0);
+}
 #else
 #define reliable_enabled 0
 #define pagecache_use_reliable_mem 0
@@ -178,6 +186,7 @@ static inline void reliable_page_counter(struct page *page,
 					 struct mm_struct *mm, int val) {}
 static inline void reliable_report_usage(struct seq_file *m,
 					 struct mm_struct *mm) {}
+static inline void reliable_clear_page_counter(struct mm_struct *mm) {}
 #endif
 
 #endif
diff --git a/kernel/fork.c b/kernel/fork.c
index f0aa2da990b8..908c4e2e7896 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1049,6 +1049,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	atomic_set(&mm->has_pinned, 0);
 	atomic64_set(&mm->pinned_vm, 0);
 	memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
+	reliable_clear_page_counter(mm);
 	spin_lock_init(&mm->page_table_lock);
 	spin_lock_init(&mm->arg_lock);
 	mm_init_cpumask(mm);