From: Jinjiang Tu <tujinjiang@huawei.com>

To utilize the TLBID feature effectively, we need to know which CPUs a
given mm (address space) has been active on. This patch implements
cumulative CPU tracking for each mm:

- On new ASID allocation (new_context()): clear the cpumask to start
  fresh. This handles both new processes and ASID generation
  wrap-around cases.

- On context switch (check_and_switch_context()): set the current CPU
  in mm_cpumask(mm) if TLBID is supported.

The tracking is cumulative (CPUs are never cleared except on ASID
re-allocation). While this may include CPUs where the task no longer
runs, TLB invalidation to a superset remains functionally correct.

This infrastructure enables a subsequent flush_tlb_mm() to use
domain-based invalidation instead of a full broadcast when the mm's
CPU footprint is limited.

Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/arm64/mm/context.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index fd6d093bbdff..9be3ff8f8106 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -207,6 +207,9 @@ static u64 new_context(struct mm_struct *mm)
 	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
 
 set_asid:
+	if (system_supports_tlbid())
+		cpumask_clear(mm_cpumask(mm));
+
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
 	return asid2ctxid(asid, generation);
@@ -215,8 +218,8 @@ static u64 new_context(struct mm_struct *mm)
 void check_and_switch_context(struct mm_struct *mm)
 {
 	unsigned long flags;
-	unsigned int cpu;
 	u64 asid, old_active_asid;
+	unsigned int cpu = smp_processor_id();
 
 	if (system_supports_cnp())
 		cpu_set_reserved_ttbr0();
@@ -251,7 +254,6 @@ void check_and_switch_context(struct mm_struct *mm)
 		atomic64_set(&mm->context.id, asid);
 	}
 
-	cpu = smp_processor_id();
 	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
 		local_flush_tlb_all();
 
@@ -262,6 +264,9 @@ void check_and_switch_context(struct mm_struct *mm)
 
 	arm64_apply_bp_hardening();
 
+	if (system_supports_tlbid())
+		cpumask_set_cpu(cpu, mm_cpumask(mm));
+
 	/*
 	 * Defer TTBR0_EL1 setting for user threads to uaccess_enable() when
 	 * emulating PAN.
-- 
2.25.1