From: "Matthew Wilcox (Oracle)" willy@infradead.org
mainline inclusion
from mainline-v6.10-rc1
commit e06d03d5590ae1c257b8aa2cfbfe6765e0755c14
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IAIHQO
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
Convert directly from a pmd to a folio without going through another representation first. For now this is just a slightly shorter way to write it, but it might end up being more efficient later.
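
As a sketch of what the conversion looks like at a call site (illustrative only; `pmd` stands for any present pmd_t, as in the hunks below), callers shrink from a two-step lookup:

	/* before: reach the folio via the page or via the pfn */
	folio = page_folio(pmd_page(*pmd));
	folio = pfn_folio(pmd_pfn(*pmd));

to a single helper that wraps the same lookup:

	/* after: same result for a present pmd */
	folio = pmd_folio(*pmd);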
Link: https://lkml.kernel.org/r/20240326202833.523759-4-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Conflicts:
	mm/mempolicy.c
	mm/userfaultfd.c
[ Context conflicts in mm/mempolicy.c due to missing commit 5beaee54a324.
  mm/userfaultfd.c not converted because commit adef440691ba is missing. ]
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 include/linux/pgtable.h | 2 ++
 mm/huge_memory.c        | 6 +++---
 mm/madvise.c            | 2 +-
 mm/mempolicy.c          | 2 +-
 mm/mlock.c              | 2 +-
 5 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index a0fafb8e7005..d96927423a08 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -50,6 +50,8 @@
 #define pmd_pgtable(pmd) pmd_page(pmd)
 #endif
 
+#define pmd_folio(pmd) page_folio(pmd_page(pmd))
+
 /*
  * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
  *
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 16d8ed7f46bd..47f249c1bf7a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2081,7 +2081,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		goto out;
 	}
 
-	folio = pfn_folio(pmd_pfn(orig_pmd));
+	folio = pmd_folio(orig_pmd);
 	/*
 	 * If other processes are mapping this folio, we couldn't discard
 	 * the folio unless they all do MADV_FREE so let's skip the folio.
@@ -2352,7 +2352,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (pmd_protnone(*pmd))
 			goto unlock;
 
-		folio = page_folio(pmd_page(*pmd));
+		folio = pmd_folio(*pmd);
 		toptier = node_is_toptier(folio_nid(folio));
 		/*
 		 * Skip scanning top tier node if normal numa
@@ -2791,7 +2791,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	 * It's safe to call pmd_page when folio is set because it's
 	 * guaranteed that pmd is present.
 	 */
-	if (folio && folio != page_folio(pmd_page(*pmd)))
+	if (folio && folio != pmd_folio(*pmd))
 		goto out;
 	__split_huge_pmd_locked(vma, pmd, range.start, freeze);
 }
diff --git a/mm/madvise.c b/mm/madvise.c
index 57ad2c766aa6..a38bcdf335f6 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -408,7 +408,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			goto huge_unlock;
 		}
 
-		folio = pfn_folio(pmd_pfn(orig_pmd));
+		folio = pmd_folio(orig_pmd);
 
 		/* Do not interfere with other mappings of this folio */
 		if (folio_likely_mapped_shared(folio))
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 720664398432..f4dfeb5f052f 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -471,7 +471,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		ret = -EIO;
 		goto unlock;
 	}
-	folio = pfn_folio(pmd_pfn(*pmd));
+	folio = pmd_folio(*pmd);
 	if (is_huge_zero_page(&folio->page)) {
 		walk->action = ACTION_CONTINUE;
 		goto unlock;
diff --git a/mm/mlock.c b/mm/mlock.c
index 4cecef94de4d..cd0997d89c7c 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -378,7 +378,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 		goto out;
 	if (is_huge_zero_pmd(*pmd))
 		goto out;
-	folio = page_folio(pmd_page(*pmd));
+	folio = pmd_folio(*pmd);
 	if (vma->vm_flags & VM_LOCKED)
 		mlock_folio(folio);
 	else