hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8USBA

--------------------------------

If the fallback for memory reliable is enabled, direct reclaim will be
used if the task's reliable memory limit is reached and pages need to be
released. However, percpu_counter_read_positive() provides a fast but
imprecise counter reading. During limit enforcement, this inaccuracy may
cause the observed usage to appear significantly larger than the actual
value. As a result, even repeated constrained reclaim attempts may fail
to bring memory usage below the limit, eventually leading to OOM.

To avoid this issue, use an accurate counter check when determining
whether the reliable memory limit has been exceeded.

Fixes: 200321e8a69e ("mm: mem_reliable: Add limiting the usage of reliable memory")
Signed-off-by: Wupeng Ma <mawupeng1@huawei.com>
---
 include/linux/mem_reliable.h | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/include/linux/mem_reliable.h b/include/linux/mem_reliable.h
index 1e928ff69d99..9047918e1331 100644
--- a/include/linux/mem_reliable.h
+++ b/include/linux/mem_reliable.h
@@ -169,11 +169,15 @@ static inline void shmem_reliable_folio_add(struct folio *folio, int nr_page)
 	percpu_counter_add(&shmem_reliable_pages, nr_page);
 }
 
-
 static inline bool reliable_mem_limit_check(unsigned long nr_page)
 {
-	return (task_reliable_used_pages() + nr_page) <=
-	       (task_reliable_limit >> PAGE_SHIFT);
+	s64 nr_task_pages;
+
+	/* limit check need precise counter, use sum rather than read */
+	nr_task_pages = percpu_counter_sum_positive(&pagecache_reliable_pages);
+	nr_task_pages += percpu_counter_sum_positive(&anon_reliable_pages);
+
+	return (nr_task_pages + nr_page) <= (task_reliable_limit >> PAGE_SHIFT);
 }
 
 static inline bool mem_reliable_should_reclaim(void)
-- 
2.43.0