From: Ma Wupeng <mawupeng1@huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5JT6V
CVE: NA
--------------------------------
shmem_mfill_atomic_pte() wrongly calls mem_cgroup_cancel_charge() in the "success" path; it should call mem_cgroup_uncharge() to decrease the memory counter instead. mem_cgroup_cancel_charge() may only be used while the transaction is still unfinished, whereas mem_cgroup_uncharge() is the correct call once the transaction has succeeded.
This leaves page->memcg non-NULL, so put_page() uncharges the page one more time. The page counter then underflows to a huge value and triggers the OOM killer, which kills every process including sshd and leaves the system inaccessible.
page->memcg is set in the following path:

mem_cgroup_commit_charge
  commit_charge
    page->mem_cgroup = memcg;
The extra uncharge is done in the following path:

put_page
  __put_page
    __put_single_page
      mem_cgroup_uncharge
        if (!page->mem_cgroup)  <-- should return here
          return
        uncharge_page
          uncharge_batch
To fix this, call mem_cgroup_commit_charge() at the end of the transaction, once it is guaranteed to have finished successfully, so that mem_cgroup_cancel_charge() remains valid on every error path.
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
---
 mm/shmem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 34981c7aad14..e300395fe308 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2464,8 +2464,6 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	if (ret)
 		goto out_release_uncharge;
 
-	mem_cgroup_commit_charge(page, memcg, false, false);
-
 	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
 	if (dst_vma->vm_flags & VM_WRITE)
 		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
@@ -2491,6 +2489,8 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	if (!pte_none(*dst_pte))
 		goto out_release_uncharge_unlock;
 
+	mem_cgroup_commit_charge(page, memcg, false, false);
+
 	lru_cache_add_anon(page);
 
 	spin_lock_irq(&info->lock);