From: Alistair Popple <alistair@popple.id.au>
mainline inclusion
from mainline-v5.9-rc4
commit ad7df764b7e1c7dc64e016da7ada2e3e1bb90700
category: bugfix
bugzilla: 42213
CVE: NA
-------------------------------------------------
During memory migration, a pte is temporarily replaced with a migration swap pte. Some pte bits from the existing mapping, such as the soft-dirty and uffd write-protect bits, are preserved by copying them to the temporary migration swap pte.
However, these bits are not stored at the same location for swap and non-swap ptes. Therefore, testing these bits requires using the appropriate helper function for the given pte type.
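As an illustration, the required dispatch looks like the sketch below. pte_present(), pte_soft_dirty() and pte_swp_soft_dirty() are the real kernel helpers; the wrapper function itself is hypothetical and exists only to name the pattern:

	/* Hypothetical helper: test soft-dirty regardless of pte type. */
	static bool any_soft_dirty(pte_t pte)
	{
		if (pte_present(pte))
			return pte_soft_dirty(pte);	/* present-pte bit layout */
		return pte_swp_soft_dirty(pte);		/* swap-pte bit layout */
	}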
Unfortunately, several code locations were found where the wrong helper function is used to test the soft_dirty and uffd_wp bits, which leads to them being incorrectly set or cleared during page migration.
Fix these by using the correct tests based on pte type.
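For reference, the mainline commit applies this dispatch to both preserved bits when constructing the migration swap pte. A sketch of that shape follows; the uffd_wp helpers (pte_uffd_wp(), pte_swp_uffd_wp(), pte_swp_mkuffd_wp()) assume a kernel with userfaultfd write-protect support, and the backported hunks below carry only the soft-dirty half, presumably because the target kernel lacks uffd-wp:

	if (pte_present(pte)) {
		if (pte_soft_dirty(pte))
			swp_pte = pte_swp_mksoft_dirty(swp_pte);
		if (pte_uffd_wp(pte))
			swp_pte = pte_swp_mkuffd_wp(swp_pte);
	} else {
		if (pte_swp_soft_dirty(pte))
			swp_pte = pte_swp_mksoft_dirty(swp_pte);
		if (pte_swp_uffd_wp(pte))
			swp_pte = pte_swp_mkuffd_wp(swp_pte);
	}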
Fixes: a5430dda8a3a ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
Fixes: f45ec5ff16a7 ("userfaultfd: wp: support swap and page migration")
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Alistair Popple <alistair@popple.id.au>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200825064232.10023-2-alistair@popple.id.au
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
	mm/migrate.c
	mm/rmap.c
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/migrate.c | 9 +++++++--
 mm/rmap.c    | 7 ++++++-
 2 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 069a76fe5bf1..22b08aea0697 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2326,8 +2326,13 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			entry = make_migration_entry(page, mpfn &
 						     MIGRATE_PFN_WRITE);
 			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pte))
-				swp_pte = pte_swp_mksoft_dirty(swp_pte);
+			if (pte_present(pte)) {
+				if (pte_soft_dirty(pte))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+			} else {
+				if (pte_swp_soft_dirty(pte))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+			}
 			set_pte_at(mm, addr, ptep, swp_pte);
 
 			/*
diff --git a/mm/rmap.c b/mm/rmap.c
index 4ca7a0db9645..aabd094d310f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1467,7 +1467,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			 */
 			entry = make_migration_entry(page, 0);
 			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pteval))
+
+			/*
+			 * pteval maps a zone device page and is therefore
+			 * a swap pte.
+			 */
+			if (pte_swp_soft_dirty(pteval))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);
 			set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
 			/*