On 05/03/21 11:29, Barry Song wrote:
> mask is built in build_balance_mask() by for_each_cpu(i, sg_span), so it
> must be a subset of sched_group_span(sg).
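
For reference, the loop in build_balance_mask() that this refers to only
sets bits it found while walking sg_span, so the subset property follows
directly. Roughly (a trimmed paraphrase of kernel/sched/topology.c, comments
mine, not the verbatim upstream body):

  static void
  build_balance_mask(struct sched_domain *sd, struct sched_group *sg,
                     struct cpumask *mask)
  {
          const struct cpumask *sg_span = sched_group_span(sg);
          struct sd_data *sdd = sd->private;
          struct sched_domain *sibling;
          int i;

          cpumask_clear(mask);

          for_each_cpu(i, sg_span) {
                  sibling = *per_cpu_ptr(sdd->sd, i);

                  /* Unused sibling in asymmetric setups. */
                  if (!sibling->child)
                          continue;

                  /* Keep only CPUs whose child domain spans exactly sg_span. */
                  if (!cpumask_equal(sg_span, sched_domain_span(sibling->child)))
                          continue;

                  /* Every bit set here comes out of sg_span. */
                  cpumask_set_cpu(i, mask);
          }

          /* The mask must not end up empty. */
          WARN_ON_ONCE(cpumask_empty(mask));
  }
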
So we should indeed have
  cpumask_subset(mask, sched_group_span(sg))
but that doesn't imply
  cpumask_first(sched_group_span(sg)) == cpumask_first(mask)
does it? I'm thinking if in your topology of N CPUs, CPUs 0 and N-1 are the furthest away, you will most likely hit
!cpumask_equal(sg_pan, sched_domain_span(sibling->child)) ^^^^^^ ^^^^^^^^^^^^^ CPUN-1 CPU0
which should be the case on your Kunpeng920 system.
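
To make that last point concrete, here is a quick userspace sketch (my own
toy, with a uint64_t standing in for struct cpumask and first_cpu()/
first_cpu_and() standing in for cpumask_first()/cpumask_first_and()): when
mask is a subset of span, first_cpu_and(span, mask) always equals
first_cpu(mask), so the patch cannot change the chosen balance CPU, but
neither of them has to equal first_cpu(span).

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Toy stand-ins for cpumask_first()/cpumask_first_and() on a 64-bit mask. */
  static int first_cpu(uint64_t m)                 { return __builtin_ctzll(m); }
  static int first_cpu_and(uint64_t a, uint64_t b) { return __builtin_ctzll(a & b); }

  int main(void)
  {
          /* sg_span covers CPUs 0-3; build_balance_mask() kept only CPUs 2-3. */
          uint64_t span = 0x0f;   /* {0,1,2,3} */
          uint64_t mask = 0x0c;   /* {2,3}, a subset of span */

          /* mask being a subset of span makes the extra AND a no-op... */
          assert(first_cpu_and(span, mask) == first_cpu(mask));

          /* ...but the balance CPU need not be the first CPU of the group span. */
          printf("first(span)=%d first(mask)=%d first_and(span, mask)=%d\n",
                 first_cpu(span), first_cpu(mask), first_cpu_and(span, mask));
          return 0;
  }
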
> Though cpumask_first_and() doesn't lead to a wrong result for the balance
> CPU, it is pointless to do the cpumask_and again.
>
> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> ---
>  kernel/sched/topology.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 12f8058..45f3db2 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -934,7 +934,7 @@ static void init_overlap_sched_group(struct sched_domain *sd,
>  	int cpu;
>
>  	build_balance_mask(sd, sg, mask);
> -	cpu = cpumask_first_and(sched_group_span(sg), mask);
> +	cpu = cpumask_first(mask);
>
>  	sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
>  	if (atomic_inc_return(&sg->sgc->ref) == 1)
> --
> 1.8.3.1