hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IB42TC
--------------------------------
After a preferred CPU goes offline, the CPU selection logic of dynamic affinity does not check whether that CPU is still valid. As a result, the affected dynamic-affinity processes become concentrated on a shared CPU.

Fix this by checking whether each preferred CPU is online and comparing the utilization threshold only across the valid (online) preferred CPUs.
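For clarity, the fixed selection step behaves roughly as in the sketch below. The helper name prefer_cpus_valid_capacity() is hypothetical and used only for illustration; the identifiers it relies on (for_each_cpu_and, cpu_online_mask, capacity_of, sysctl_sched_util_low_pct, tg->se[cpu]->avg.util_avg) come from the diff that follows, and the rcu_read_lock()/unlock() protection of the real code is elided for brevity:

	/* Illustrative sketch only, mirroring set_task_select_cpus() below. */
	static bool prefer_cpus_valid_capacity(struct task_struct *p)
	{
		struct task_group *tg = task_group(p);
		unsigned long util_avg_sum = 0, tg_capacity = 0;
		int nr_cpus_valid = 0;
		int cpu;

		/* Only online preferred CPUs are taken into account. */
		for_each_cpu_and(cpu, p->prefer_cpus, cpu_online_mask) {
			if (unlikely(!tg->se[cpu]))
				continue;

			util_avg_sum += tg->se[cpu]->avg.util_avg;
			tg_capacity += capacity_of(cpu);
			nr_cpus_valid++;
		}

		/*
		 * True only when at least one preferred CPU is online with
		 * real cfs capacity and the preferred CPUs stay below the
		 * utilization limit; otherwise the caller keeps cpus_ptr.
		 */
		return tg_capacity > nr_cpus_valid &&
		       util_avg_sum * 100 <= tg_capacity * sysctl_sched_util_low_pct;
	}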
Fixes: 70a232a564cf ("sched: Adjust wakeup cpu range according CPU util dynamicly")
Signed-off-by: He Yujie <coka.heyujie@huawei.com>
---
 kernel/sched/fair.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 63f4344ac344..41b339c5b671 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7248,6 +7248,7 @@ static void set_task_select_cpus(struct task_struct *p, int *idlest_cpu,
 	struct task_group *tg;
 	long spare;
 	int cpu, mode;
+	int nr_cpus_valid = 0;
 
 	rcu_read_lock();
 	mode = dynamic_affinity_mode(p);
@@ -7265,7 +7266,7 @@ static void set_task_select_cpus(struct task_struct *p, int *idlest_cpu,
 
 	/* manual mode */
 	tg = task_group(p);
-	for_each_cpu(cpu, p->prefer_cpus) {
+	for_each_cpu_and(cpu, p->prefer_cpus, cpu_online_mask) {
 		if (unlikely(!tg->se[cpu]))
 			continue;
 
@@ -7289,10 +7290,18 @@ static void set_task_select_cpus(struct task_struct *p, int *idlest_cpu,
 
 		util_avg_sum += tg->se[cpu]->avg.util_avg;
 		tg_capacity += capacity_of(cpu);
+		nr_cpus_valid++;
 	}
 	rcu_read_unlock();
 
-	if (tg_capacity > cpumask_weight(p->prefer_cpus) &&
+	/*
+	 * The following cases should fall back to cpus_ptr, which the check
+	 * tg_capacity > nr_cpus_valid covers:
+	 * 1. all prefer_cpus are offline;
+	 * 2. all prefer_cpus have no cfs capacity (tg_capacity ==
+	 *    nr_cpus_valid * 1)
+	 */
+	if (tg_capacity > nr_cpus_valid &&
 	    util_avg_sum * 100 <= tg_capacity * sysctl_sched_util_low_pct) {
 		p->select_cpus = p->prefer_cpus;
 		if (sd_flag & SD_BALANCE_WAKE)
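To illustrate the comment in the hunk above with the two degenerate cases: if every CPU in prefer_cpus is offline, the loop never runs, so nr_cpus_valid == 0 and tg_capacity == 0, and 0 > 0 fails; if the preferred CPUs are online but have no cfs capacity left, each capacity_of() call contributes only its floor value, so, as the comment notes, tg_capacity == nr_cpus_valid * 1 and the check fails again. In both cases select_cpus keeps pointing at cpus_ptr, as intended.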