From: David Hildenbrand <david@redhat.com>
mainline inclusion
from mainline-v6.11-rc1
commit 2c1f057e5be63e890f2dd89e4c25ab5eef084a91
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IBGU7R
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
We added PM_MMAP_EXCLUSIVE in 2015 via commit 77bb499bb60f ("pagemap: add mmap-exclusive bit for marking pages mapped only here"), back when THPs could not be partially mapped and page_mapcount() returned a value that held for every page of the THP.
In 2016, we added support for partially mapping THPs via commit 53f9263baba6 ("mm: rework mapcount accounting to enable 4k mapping of THPs") but failed to also determine PM_MMAP_EXCLUSIVE on a per-page basis.
Checking page_mapcount() on the head page does not tell the whole story.
We should check each individual page instead. In a future without per-page mapcounts the behaviour will differ, but we'll change it to be consistent with PTE-mapped THPs once we deal with that.
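
To illustrate the user-visible semantics, here is a minimal user-space sketch (not part of this patch) that dumps the per-page exclusivity state. It assumes only the documented pagemap layout: one 64-bit entry per 4 KiB page, bit 63 = present, bit 56 = page exclusively mapped (see Documentation/admin-guide/mm/pagemap.rst):

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	/* Print present/exclusive bits for each 4 KiB page of [buf, buf + len). */
	static void dump_exclusive(const void *buf, size_t len)
	{
		int fd = open("/proc/self/pagemap", O_RDONLY);
		uint64_t ent;
		size_t i;

		if (fd < 0)
			return;
		for (i = 0; i < len / 4096; i++) {
			/* one 64-bit entry per virtual page */
			off_t off = (((uintptr_t)buf / 4096) + i) * sizeof(ent);

			if (pread(fd, &ent, sizeof(ent), off) != sizeof(ent))
				break;
			printf("page %zu: present=%d exclusive=%d\n", i,
			       (int)((ent >> 63) & 1), (int)((ent >> 56) & 1));
		}
		close(fd);
	}

With the fix, a partially shared, PMD-mapped THP can report exclusive=1 for some subpages and exclusive=0 for others, instead of copying the head page's state into every entry.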
Link: https://lkml.kernel.org/r/20240607122357.115423-4-david@redhat.com
Fixes: 53f9263baba6 ("mm: rework mapcount accounting to enable 4k mapping of THPs")
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Lance Yang <ioworker0@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Conflicts:
	fs/proc/task_mmu.c
[Because OLK-5.10 has not merged the mainline inclusion patches b83d7e432399d44454411dec5c25afb5c4469e96 ("mm, /proc/pid/pagemap: fix soft dirty marking for PMD migration entry") and 8e165e733bfa06edbcdbe491ef13b2bf1a3fa883 ("mm/pagemap: recognize uffd-wp bit for shmem/hugetlbfs")]
Signed-off-by: Kaixiong Yu <yukaixiong@huawei.com>
---
 fs/proc/task_mmu.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index e02dead5b3b4..d320e304938b 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1451,6 +1451,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 
 	ptl = pmd_trans_huge_lock(pmdp, vma);
 	if (ptl) {
+		unsigned int idx = (addr & ~PMD_MASK) >> PAGE_SHIFT;
 		u64 flags = 0, frame = 0;
 		pmd_t pmd = *pmdp;
 		struct page *page = NULL;
@@ -1465,8 +1466,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 			if (pmd_soft_dirty(pmd))
 				flags |= PM_SOFT_DIRTY;
 			if (pm->show_pfn)
-				frame = pmd_pfn(pmd) +
-					((addr & ~PMD_MASK) >> PAGE_SHIFT);
+				frame = pmd_pfn(pmd) + idx;
 		}
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 		else if (is_swap_pmd(pmd)) {
@@ -1474,8 +1474,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 			unsigned long offset;
 
 			if (pm->show_pfn) {
-				offset = swp_offset(entry) +
-					((addr & ~PMD_MASK) >> PAGE_SHIFT);
+				offset = swp_offset(entry) + idx;
 				frame = swp_type(entry) |
 					(offset << MAX_SWAPFILES_SHIFT);
 			}
@@ -1494,9 +1493,15 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 		if (page && !migration && page_mapcount(page) == 1)
 			flags |= PM_MMAP_EXCLUSIVE;
 
-		for (; addr != end; addr += PAGE_SIZE) {
-			pagemap_entry_t pme = make_pme(frame, flags);
+		for (; addr != end; addr += PAGE_SIZE, idx++) {
+			unsigned long cur_flags = flags;
+			pagemap_entry_t pme;
 
+			if (page && (flags & PM_PRESENT) &&
+			    page_mapcount(page + idx) == 1)
+				cur_flags |= PM_MMAP_EXCLUSIVE;
+
+			pme = make_pme(frame, cur_flags);
 			err = add_to_pagemap(addr, &pme, pm);
 			if (err)
 				break;
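
For reference, the new idx computation recovers the subpage index within the PMD-sized region; a worked example with illustrative values, assuming 4 KiB pages (PAGE_SHIFT == 12) and 2 MiB PMDs as on x86-64:

	addr             == 0x7f120056f000
	addr & ~PMD_MASK == 0x16f000              /* byte offset into the 2 MiB region */
	idx              == 0x16f000 >> 12 == 0x16f == 367

With idx carried through the loop, page_mapcount(page + idx) inspects exactly the subpage backing the current addr, rather than always consulting the head page.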