From: Yicong Yang <yangyicong@hisilicon.com>
maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I7U78A
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?...
-------------------------------------------
Currently we flush the mm in flush_tlb_batched_pending() to avoid a race between reclaim, which unmaps pages via a batched TLB flush, and mprotect/munmap/etc. Other architectures like arm64 may only need a synchronization barrier (dsb) here rather than a full mm flush. So add arch_flush_tlb_batched_pending() to allow an arch-specific implementation. This intends no functional change on x86, which still performs a full mm flush.
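As an illustration of what the arch hook enables, below is a minimal sketch of the dsb-only variant the message alludes to for arm64. The actual arm64 wiring lands in a separate patch of this series, so treat this as an assumption rather than part of this change:

/*
 * Hypothetical arm64 variant (not part of this patch): the TLB
 * invalidations for pages unmapped by the batched reclaim path have
 * already been issued, so a DSB that waits for them to complete is
 * sufficient; no flush of the whole mm is required.
 */
static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
{
	dsb(ish);
}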
Link: https://lkml.kernel.org/r/20230717131004.12662-4-yangyicong@huawei.com
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Barry Song <baohua@kernel.org>
Cc: Barry Song <v-songbaohua@oppo.com>
Cc: Darren Hart <darren@os.amperecomputing.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: lipeifeng <lipeifeng@oppo.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Nadav Amit <namit@vmware.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Punit Agrawal <punit.agrawal@bytedance.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Will Deacon <will@kernel.org>
Cc: Xin Hao <xhao@linux.alibaba.com>
Cc: Zeng Tao <prime.zeng@hisilicon.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Conflicts:
	mm/rmap.c
	arch/x86/include/asm/tlbflush.h
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
---
 arch/x86/include/asm/tlbflush.h | 5 +++++
 mm/rmap.c                       | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 1404f43a5986..1f1e5add0b97 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -270,6 +270,11 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
 }
 
+static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
+{
+	flush_tlb_mm(mm);
+}
+
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
 #endif /* !MODULE */
diff --git a/mm/rmap.c b/mm/rmap.c
index 306709acb288..4d9304c751d5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -677,7 +677,7 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	if (data_race(mm->tlb_flush_batched)) {
-		flush_tlb_mm(mm);
+		arch_flush_tlb_batched_pending(mm);
 
 		/*
 		 * Do not allow the compiler to re-order the clearing of