hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8882

--------------------------------

wake_wide() uses sd_llc_size as the spreading threshold to detect
wide waker/wakee relationships and to disable wake_affine() for
those cases. On SMT systems, sd_llc_size counts logical CPUs rather
than physical cores. This inflates the wake_wide() threshold,
allowing wake_affine() to pack more tasks into one LLC domain than
the actual compute capacity of its physical cores can sustain. The
resulting SMT interference may cost more than the cache-locality
benefit wake_affine() intends to gain.

Scale the factor by the SMT width of the current CPU so that it
approximates the number of independent physical cores in the LLC
domain, making wake_wide() more likely to kick in before SMT
interference becomes significant. On non-SMT systems the SMT width
is 1 and behaviour is unchanged.

Fixes: 63b0e9edceec ("sched/fair: Beef up wake_wide()")
Signed-off-by: Zhang Qiao <zhangqiao22@huawei.com>
---
 kernel/sched/fair.c     | 5 +++++
 kernel/sched/features.h | 2 ++
 2 files changed, 7 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ad30bb800961..4100998e18cd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7850,6 +7850,11 @@ static int wake_wide(struct task_struct *p)
 	unsigned int slave = p->wakee_flips;
 	int factor = __this_cpu_read(sd_llc_size);
 
+	/* Scale factor to physical-core count to account for SMT interference. */
+	if (sched_feat(WA_SMT))
+		factor = DIV_ROUND_UP(factor,
+				      cpumask_weight(cpu_smt_mask(smp_processor_id())));
+
 	if (master < slave)
 		swap(master, slave);
 	if (slave < factor || master < slave * factor)

diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 24a0c853a8a0..c9ad8e72ecd0 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -128,3 +128,5 @@ SCHED_FEAT(SOFT_DOMAIN, false)
 #ifdef CONFIG_SCHED_SOFT_QUOTA
 SCHED_FEAT(SOFT_QUOTA, false)
 #endif
+
+SCHED_FEAT(WA_SMT, false)
-- 
2.18.0