
From: Peter Xu <peterx@redhat.com>

mainline inclusion
from mainline-v6.10-rc1
commit 239e9a90c887d33e9339870a650d2bd621b95674
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IC2YHO
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...

-------------------------------------------

Introduce per-vma begin()/end() helpers for pgtable walks.  This is
preparation work for merging the hugetlb pgtable walkers with generic mm.

The helpers need to be called before and after a pgtable walk, and will
start to be needed once the pgtable walker code supports hugetlb pages.
It's a hook point for any type of VMA, but for now only hugetlb uses it
to stabilize the pgtable pages from getting away (due to possible pmd
unsharing).

Link: https://lkml.kernel.org/r/20240327152332.950956-5-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Christoph Hellwig <hch@infradead.org>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Tested-by: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrew Jones <andrew.jones@linux.dev>
Cc: Aneesh Kumar K.V (IBM) <aneesh.kumar@kernel.org>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: David Hildenbrand <david@redhat.com>
Cc: James Houghton <jthoughton@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: "Mike Rapoport (IBM)" <rppt@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Conflicts:
	include/linux/mm.h
	mm/migrate.c
[Context conflicts.]
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
---
 include/linux/mm.h | 3 +++
 mm/memory.c        | 12 ++++++++++++
 mm/migrate.c       | 1 +
 3 files changed, 16 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b974880cf283..d24d6115a9bf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4205,6 +4205,9 @@ static inline void accept_memory(phys_addr_t start, phys_addr_t end)
 #endif
 
+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
 /* added to mm.h to avoid every caller adding new header file */
 #include <linux/mem_reliable.h>
 
diff --git a/mm/memory.c b/mm/memory.c
index abc7f1645831..c76a1dae75d9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6863,3 +6863,15 @@ void ptlock_free(struct ptdesc *ptdesc)
 	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_unlock_read(vma);
+}
diff --git a/mm/migrate.c b/mm/migrate.c
index 710fb5610fd3..26fad79380b1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -51,6 +51,7 @@
 #include <linux/sched/sysctl.h>
 #include <linux/memory-tiers.h>
 #include <linux/dynamic_pool.h>
+#include <linux/pagewalk.h>
 
 #include <asm/tlbflush.h>
-- 
2.43.0