From: Jing Xiangfeng jingxiangfeng@huawei.com
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I46IUJ
CVE: NA
--------------------------------------
In oom_next_task(), if points is equal to LONG_MIN, the task can still be chosen to be killed. That is not correct: LONG_MIN is the sentinel that marks a task as one that must not be killed. Fix the comparison so such tasks are skipped.
Fixes: 4da32073a8fe ("memcg: support priority for oom")
Signed-off-by: Jing Xiangfeng jingxiangfeng@huawei.com
Reviewed-by: Chen Wandun chenwandun@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 mm/oom_kill.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 41668bd37f52..fb39b0902476 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -311,15 +311,15 @@ static enum oom_constraint constrained_alloc(struct oom_control *oc)
  * choose the task with the highest number of 'points'.
  */
 static bool oom_next_task(struct task_struct *task, struct oom_control *oc,
-			  unsigned long points)
+			  long points)
 {
 	struct mem_cgroup *cur_memcg;
 	struct mem_cgroup *oc_memcg;

 	if (!static_branch_likely(&memcg_qos_stat_key))
-		return !points || points < oc->chosen_points;
+		return (points == LONG_MIN || points < oc->chosen_points);

-	if (!points)
+	if (points == LONG_MIN)
 		return true;

 	if (!oc->chosen)
@@ -341,9 +341,9 @@ static bool oom_next_task(struct task_struct *task,
 }
 #else
 static inline bool oom_next_task(struct task_struct *task,
-				 struct oom_control *oc, unsigned long points)
+				 struct oom_control *oc, long points)
 {
-	return !points || points < oc->chosen_points;
+	return (points == LONG_MIN || points < oc->chosen_points);
 }
 #endif

From: Mel Gorman mgorman@techsingularity.net
mainline inclusion
from mainline-5.14-rc1
commit ff4b2b4014cbffb3d32b22629252f4dc8616b0fe
category: bugfix
bugzilla: https://gitee.com/src-openeuler/nfs-utils/issues/I46NSS
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
-------------------------------------------------
Dave Jones reported the following:

  This made it into 5.13 final, and completely breaks NFSD for me
  (serving tcp v3 mounts). Existing mounts on clients hang, as do new
  mounts from new clients. Rebooting the server back to rc7, everything
  recovers.
Commit b3b64ebd3822 ("mm/page_alloc: do bulk array bounds check after checking populated elements") made __alloc_pages_bulk() return the wrong value if the array is already populated, which callers interpret as an allocation failure. Dave reported that this patch fixes his problem, and it also passed a test running dbench over NFS.
Link: https://lkml.kernel.org/r/20210628150219.GC3840@techsingularity.net
Fixes: b3b64ebd3822 ("mm/page_alloc: do bulk array bounds check after checking populated elements")
Signed-off-by: Mel Gorman mgorman@techsingularity.net
Reported-by: Dave Jones davej@codemonkey.org.uk
Tested-by: Dave Jones davej@codemonkey.org.uk
Cc: Dan Carpenter dan.carpenter@oracle.com
Cc: Jesper Dangaard Brouer brouer@redhat.com
Cc: Vlastimil Babka vbabka@suse.cz
Cc: stable@vger.kernel.org [5.13+]
Signed-off-by: Andrew Morton akpm@linux-foundation.org
Signed-off-by: Linus Torvalds torvalds@linux-foundation.org
(cherry picked from commit ff4b2b4014cbffb3d32b22629252f4dc8616b0fe)
Signed-off-by: Yongqiang Liu liuyongqiang13@huawei.com
Reviewed-by: tong tiangen tongtiangen@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 853d65d5420f..19394f8f9daf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4965,7 +4965,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,

 	/* Already populated array? */
 	if (unlikely(page_array && nr_pages - nr_populated == 0))
-		return 0;
+		return nr_populated;

 	/* Use the single page allocator for one page. */
 	if (nr_pages - nr_populated == 1)