
From: He Yujie <coka.heyujie@huawei.com>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IB42TC

--------------------------------

Commit aed539da59ad fixed the prefer_cpu offline problem, but introduced
a CI problem. When a CPU is 100% occupied by RT tasks, the CFS tasks'
util_avg is 0 but the CPU capacity is not 0. During core selection,
commit aed539da59ad removed the checks of the prefer_cpu set capacity
and the prefer_cpus mask weight. As a result, a dynamic_affinity task
always selects prefer_cpu as idlest_cpu.

This patch fixes the problem by adding checks for a valid prefer_cpu
set capacity and a valid prefer_cpu weight.

Fixes: 56bfe5272963 ("sched/dynamic_affinity: fix preffered_cpu offline problem")
Signed-off-by: He Yujie <coka.heyujie@huawei.com>
Signed-off-by: Cheng Yu <serein.chengyu@huawei.com>
---
 kernel/sched/fair.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f83094121207..d2ad18d4b554 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9128,6 +9128,7 @@ static void set_task_select_cpus(struct task_struct *p, int *idlest_cpu,
 	struct task_group *tg;
 	long spare;
 	int cpu, mode;
+	int nr_cpus_valid = 0;
 
 	p->select_cpus = p->cpus_ptr;
 	if (!prefer_cpus_valid(p))
@@ -9171,10 +9172,18 @@ static void set_task_select_cpus(struct task_struct *p, int *idlest_cpu,
 
 		util_avg_sum += taskgroup_cpu_util(tg, cpu);
 		tg_capacity += capacity_of(cpu);
+		nr_cpus_valid++;
 	}
 	rcu_read_unlock();
 
-	if (util_avg_sum * 100 < tg_capacity * sysctl_sched_util_low_pct) {
+	/*
+	 * The following cases should select cpus_ptr, checked by the
+	 * condition tg_capacity > nr_cpus_valid:
+	 * 1. all prefer_cpus are offline;
+	 * 2. all prefer_cpus have no CFS capacity (tg_capacity = nr_cpus_valid * 1)
+	 */
+	if (tg_capacity > nr_cpus_valid &&
+	    util_avg_sum * 100 <= tg_capacity * sysctl_sched_util_low_pct) {
 		p->select_cpus = p->prefer_cpus;
 		if (sd_flag & SD_BALANCE_WAKE)
 			schedstat_inc(p->stats.nr_wakeups_preferred_cpus);
-- 
2.25.1