The rmap interface overhaul is intended to speed up folio operations: the page-based rmap functions are replaced by folio-based [pte|ptes|pmd] variants that make the mapping granularity explicit and allow batching multiple page-table mappings in a single call (sketched below).
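
As a quick illustration (a minimal sketch, not code from the series: the folio_* names below come from the patch subjects in the shortlog, but the exact prototypes are an assumption and may differ between trees), a caller that maps and later unmaps a single small anonymous page changes roughly like this:

/*
 * Hedged sketch of the interface change; prototypes assumed, see
 * include/linux/rmap.h in the actual tree.
 */
#include <linux/mm.h>
#include <linux/rmap.h>

/* Before the series: page-based, with a "compound" bool selecting PMD scope. */
static void rmap_one_pte_old(struct page *page, struct vm_area_struct *vma,
			     unsigned long addr)
{
	page_add_anon_rmap(page, vma, addr, RMAP_NONE);
	/* ... the caller installs and later clears the PTE ... */
	page_remove_rmap(page, vma, /* compound= */ false);
}

/* After the series: folio-based, with the granularity encoded in the name. */
static void rmap_one_pte_new(struct folio *folio, struct page *page,
			     struct vm_area_struct *vma, unsigned long addr)
{
	folio_add_anon_rmap_pte(folio, page, vma, addr, RMAP_NONE);
	/* ... the caller installs and later clears the PTE ... */
	folio_remove_rmap_pte(folio, page, vma);
}

The file-backed and remove-side conversions listed in the shortlog follow the same pattern.
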
David Hildenbrand (42):
      mm/rmap: rename hugepage_add* to hugetlb_add*
      mm/rmap: introduce and use hugetlb_remove_rmap()
      mm/rmap: introduce and use hugetlb_add_file_rmap()
      mm/rmap: introduce and use hugetlb_try_dup_anon_rmap()
      mm/rmap: introduce and use hugetlb_try_share_anon_rmap()
      mm/rmap: add hugetlb sanity checks for anon rmap handling
      mm/rmap: convert folio_add_file_rmap_range() into folio_add_file_rmap_[pte|ptes|pmd]()
      mm/memory: page_add_file_rmap() -> folio_add_file_rmap_[pte|pmd]()
      mm/huge_memory: page_add_file_rmap() -> folio_add_file_rmap_pmd()
      mm/migrate: page_add_file_rmap() -> folio_add_file_rmap_pte()
      mm/userfaultfd: page_add_file_rmap() -> folio_add_file_rmap_pte()
      mm/rmap: remove page_add_file_rmap()
      mm/rmap: factor out adding folio mappings into __folio_add_rmap()
      mm/rmap: introduce folio_add_anon_rmap_[pte|ptes|pmd]()
      mm/huge_memory: batch rmap operations in __split_huge_pmd_locked()
      mm/huge_memory: page_add_anon_rmap() -> folio_add_anon_rmap_pmd()
      mm/migrate: page_add_anon_rmap() -> folio_add_anon_rmap_pte()
      mm/ksm: page_add_anon_rmap() -> folio_add_anon_rmap_pte()
      mm/swapfile: page_add_anon_rmap() -> folio_add_anon_rmap_pte()
      mm/memory: page_add_anon_rmap() -> folio_add_anon_rmap_pte()
      mm/rmap: remove page_add_anon_rmap()
      mm/rmap: remove RMAP_COMPOUND
      mm/rmap: introduce folio_remove_rmap_[pte|ptes|pmd]()
      kernel/events/uprobes: page_remove_rmap() -> folio_remove_rmap_pte()
      mm/huge_memory: page_remove_rmap() -> folio_remove_rmap_pmd()
      mm/khugepaged: page_remove_rmap() -> folio_remove_rmap_pte()
      mm/ksm: page_remove_rmap() -> folio_remove_rmap_pte()
      mm/memory: page_remove_rmap() -> folio_remove_rmap_pte()
      mm/migrate_device: page_remove_rmap() -> folio_remove_rmap_pte()
      mm/rmap: page_remove_rmap() -> folio_remove_rmap_pte()
      Documentation: stop referring to page_remove_rmap()
      mm/rmap: remove page_remove_rmap()
      mm/rmap: convert page_dup_file_rmap() to folio_dup_file_rmap_[pte|ptes|pmd]()
      mm/rmap: introduce folio_try_dup_anon_rmap_[pte|ptes|pmd]()
      mm/huge_memory: page_try_dup_anon_rmap() -> folio_try_dup_anon_rmap_pmd()
      mm/memory: page_try_dup_anon_rmap() -> folio_try_dup_anon_rmap_pte()
      mm/rmap: remove page_try_dup_anon_rmap()
      mm: convert page_try_share_anon_rmap() to folio_try_share_anon_rmap_[pte|pmd]()
      mm/rmap: rename COMPOUND_MAPPED to ENTIRELY_MAPPED
      mm: remove one last reference to page_add_*_rmap()
      mm/huge_memory: fix folio_set_dirty() vs. folio_mark_dirty()
      mm/memory: fix folio_set_dirty() vs. folio_mark_dirty() in zap_pte_range()
Kefeng Wang (1):
      mm: userswap: page_remove_rmap() -> folio_remove_rmap_pte()
Vishal Moola (Oracle) (5):
      mm/khugepaged: convert __collapse_huge_page_isolate() to use folios
      mm/khugepaged: convert hpage_collapse_scan_pmd() to use folios
      mm/khugepaged: convert is_refcount_suitable() to use folios
      mm/khugepaged: convert alloc_charge_hpage() to use folios
      mm/khugepaged: convert collapse_pte_mapped_thp() to use folios
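
The new [ptes] variants take a page count, which is what lets, e.g., __split_huge_pmd_locked() (see the batching patch above) adjust the mapcounts for a whole range of PTE mappings in one call rather than once per page. A minimal sketch, again assuming the prototype rather than quoting it:

/*
 * Hedged sketch of the batched interface; the nr_pages parameter and its
 * position are assumptions based on the [pte|ptes|pmd] naming above.
 */
#include <linux/mm.h>
#include <linux/rmap.h>

static void add_anon_mappings_batched(struct folio *folio, struct page *page,
				      int nr_pages, struct vm_area_struct *vma,
				      unsigned long addr)
{
	/*
	 * One call covers nr_pages consecutive PTE mappings of the folio,
	 * replacing nr_pages individual page_add_anon_rmap() calls.
	 */
	folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, addr, RMAP_NONE);
}
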
 Documentation/mm/transhuge.rst       |   4 +-
 Documentation/mm/unevictable-lru.rst |   4 +-
 include/linux/memcontrol.h           |  14 -
 include/linux/mm.h                   |   6 +-
 include/linux/rmap.h                 | 397 +++++++++++++++++++-----
 kernel/events/uprobes.c              |   2 +-
 mm/filemap.c                         |  10 +-
 mm/gup.c                             |   2 +-
 mm/huge_memory.c                     |  85 +++---
 mm/hugetlb.c                         |  21 +-
 mm/internal.h                        |  14 +-
 mm/khugepaged.c                      | 154 +++++-----
 mm/ksm.c                             |  15 +-
 mm/memory-failure.c                  |   4 +-
 mm/memory.c                          |  60 ++--
 mm/migrate.c                         |  12 +-
 mm/migrate_device.c                  |  41 +--
 mm/mmu_gather.c                      |   2 +-
 mm/rmap.c                            | 433 ++++++++++++++++-----------
 mm/swapfile.c                        |   2 +-
 mm/userfaultfd.c                     |   2 +-
 mm/userswap.c                        |   2 +-
 22 files changed, 808 insertions(+), 478 deletions(-)