From: Kefeng Wang <wangkefeng.wang@huawei.com>
mainline inclusion
from stable-v6.12-rc1
commit 658be46520ce480a44fe405730a1725166298f27
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB0OV7
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
Similar to other poison recovery paths, use copy_mc_user_highpage() to avoid a potential kernel panic while copying a page in copy_present_page() during fork. If the copy fails due to hwpoison in the source page, we need to break out of the copy loop in copy_pte_range() and release the prealloc folio, so copy_mc_user_highpage() is moved ahead of setting *prealloc to NULL.
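A minimal, self-contained userspace sketch of the flow described above (illustrative only: copy_page_mc(), copy_one_page(), PAGE_SZ and the EHWPOISON fallback value are hypothetical stand-ins for copy_mc_user_highpage(), copy_present_page() and the kernel definitions, not the kernel API). The point is that the preallocated destination is consumed, and *prealloc cleared, only after the copy succeeds, while a failed copy propagates -EHWPOISON so the caller can stop and release the page:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef EHWPOISON
#define EHWPOISON 133	/* assumed: the usual Linux asm-generic value */
#endif

#define PAGE_SZ 4096

/* Stand-in for copy_mc_user_highpage(): abort instead of crashing on a
 * poisoned source; return 0 only when the whole copy succeeded. */
static int copy_page_mc(char *dst, const char *src, int src_poisoned)
{
	if (src_poisoned)
		return -EHWPOISON;
	memcpy(dst, src, PAGE_SZ);
	return 0;
}

/* Stand-in for copy_present_page(): consume the preallocated destination
 * only after the copy succeeded, as the patch below does. */
static int copy_one_page(char **prealloc, const char *src, int src_poisoned)
{
	int ret = copy_page_mc(*prealloc, src, src_poisoned);

	if (ret)
		return ret;	/* copy failed: leave *prealloc for the caller */
	*prealloc = NULL;	/* ownership moves only after a successful copy */
	return 0;
}

int main(void)
{
	char src[PAGE_SZ] = "source page contents";
	char *prealloc = malloc(PAGE_SZ);
	int ret;

	if (!prealloc)
		return 1;

	/* Simulate a poisoned source: the caller sees -EHWPOISON, stops
	 * copying and releases the still-unused preallocated page, which
	 * mirrors the copy_pte_range() handling in the patch below. */
	ret = copy_one_page(&prealloc, src, 1);
	if (ret == -EHWPOISON)
		printf("copy aborted, releasing prealloc page\n");
	free(prealloc);
	return 0;
}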
Link: https://lkml.kernel.org/r/20240906024201.1214712-3-wangkefeng.wang@huawei.co...
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jiaqi Yan <jiaqiyan@google.com>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Conflicts:
	mm/memory.c
[Ma Wupeng: copy_pte_range() does not need to handle the -EBUSY case]
Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
---
 mm/memory.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index a4b8b1d47a3b..c364158a5889 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -844,8 +844,11 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 	 * We have a prealloc page, all good!  Take it
 	 * over and copy the page & arm it.
 	 */
+
+	if (copy_user_highpage_mc(new_page, page, addr, src_vma))
+		return -EHWPOISON;
+
 	*prealloc = NULL;
-	copy_user_highpage(new_page, page, addr, src_vma);
 	__SetPageUptodate(new_page);
 	reliable_page_counter(new_page, dst_vma->vm_mm, 1);
 	page_add_new_anon_rmap(new_page, dst_vma, addr, false);
@@ -996,8 +999,9 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		/*
 		 * If we need a pre-allocated page for this pte, drop the
 		 * locks, allocate, and try again.
+		 * If copy failed due to hwpoison in source page, break out.
 		 */
-		if (unlikely(ret == -EAGAIN))
+		if (unlikely(ret == -EAGAIN || ret == -EHWPOISON))
 			break;
 		if (unlikely(prealloc)) {
 			/*
@@ -1025,6 +1029,8 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 			goto out;
 		}
 		entry.val = 0;
+	} else if (unlikely(ret == -EHWPOISON)) {
+		goto out;
 	} else if (ret) {
 		WARN_ON_ONCE(ret != -EAGAIN);
 		prealloc = page_copy_prealloc(src_mm, src_vma, addr);