hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I9RZC8
--------------------------------
When using cgroup rt_bandwidth with RT_RUNTIME_SHARE, if a cpu hotplug operation and a cpu.rt_runtime_us write happen concurrently, the warning in __disable_runtime() can be triggered:

[ 991.697692] WARNING: CPU: 0 PID: 49573 at kernel/sched/rt.c:802 rq_offline_rt+0x24d/0x260
[ 991.697795] CPU: 0 PID: 49573 Comm: kworker/1:0 Kdump: loaded Not tainted 6.9.0-rc1+ #4
[ 991.697800] Workqueue: events cpuset_hotplug_workfn
[ 991.697803] RIP: 0010:rq_offline_rt+0x24d/0x260
[ 991.697825] Call Trace:
[ 991.697827]  <TASK>
[ 991.697858]  set_rq_offline.part.125+0x2d/0x70
[ 991.697864]  rq_attach_root+0xda/0x110
[ 991.697867]  cpu_attach_domain+0x433/0x860
[ 991.697880]  partition_sched_domains_locked+0x2a8/0x3a0
[ 991.697885]  rebuild_sched_domains_locked+0x608/0x800
[ 991.697895]  rebuild_sched_domains+0x1b/0x30
[ 991.697897]  cpuset_hotplug_workfn+0x4b6/0x1160
[ 991.697909]  process_scheduled_works+0xad/0x430
[ 991.697917]  worker_thread+0x105/0x270
[ 991.697922]  kthread+0xde/0x110
[ 991.697928]  ret_from_fork+0x2d/0x50
[ 991.697935]  ret_from_fork_asm+0x11/0x20
[ 991.697940]  </TASK>
[ 991.697941] ---[ end trace 0000000000000000 ]---
This is how it happens:

            CPU0                            CPU1
            ----                            ----
 set_rq_offline(rq)
   __disable_runtime(rq)    (1)
                                  tg_set_rt_bandwidth    (2)
                                  do_balance_runtime     (3)
 set_rq_online(rq)
   __enable_runtime(rq)     (4)
In step (1) rt_rq->rt_runtime is set to RUNTIME_INF, and this rt_rq's runtime is not supposed to change again until its rq goes back online. However, in step (2) tg_set_rt_bandwidth() can still set rt_rq->rt_runtime to rt_bandwidth.rt_runtime. Then, in step (3) the rt_rq's runtime is no longer RUNTIME_INF, so other runqueues can borrow rt_runtime from it. Finally, in step (4) the rq goes online and its rt_rq's runtime is reset to rt_bandwidth.rt_runtime, so the runtime lent out in step (3) is never returned and the total rt_runtime in the root domain grows. After that, when a cpu is taken offline and the sched domains are rebuilt, every rq is set offline in turn, and the last one finds that the total rt_runtime has increased with nowhere to return the excess, which triggers the warning in __disable_runtime().
To fix this, add a new state RUNTIME_DISABLED, which means the runtime is disabled and must not be used. When the rq goes offline, set its rt_rq's rt_runtime to RUNTIME_DISABLED; when the rq comes back online, the runtime is reset. In tg_set_rt_bandwidth() and do_balance_runtime(), never change a disabled rt_runtime.
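For reference, the online side needs no code change: the existing __enable_runtime() already overwrites each rt_rq's runtime from its rt_bandwidth when the rq comes back online, which also clears RUNTIME_DISABLED. A sketch of that (unchanged) function, paraphrased from kernel/sched/rt.c rather than quoted exactly:

    static void __enable_runtime(struct rq *rq)
    {
            rt_rq_iter_t iter;
            struct rt_rq *rt_rq;

            if (unlikely(!scheduler_running))
                    return;

            /* Reset each rt_rq's bandwidth settings from its rt_bandwidth. */
            for_each_rt_rq(rt_rq, iter, rq) {
                    struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);

                    raw_spin_lock(&rt_b->rt_runtime_lock);
                    raw_spin_lock(&rt_rq->rt_runtime_lock);
                    rt_rq->rt_runtime = rt_b->rt_runtime; /* replaces RUNTIME_DISABLED */
                    rt_rq->rt_time = 0;
                    rt_rq->rt_throttled = 0;
                    raw_spin_unlock(&rt_rq->rt_runtime_lock);
                    raw_spin_unlock(&rt_b->rt_runtime_lock);
            }
    }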
Fixes: 7def2be1dc67 ("sched: fix hotplug cpus on ia64")
Closes: https://lore.kernel.org/all/47b4a790-9a27-2fc5-f2aa-f9981c6da015@huawei.com/
Co-developed-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Hui Tang <tanghui20@huawei.com>
Signed-off-by: Zhao Wenhui <zhaowenhui8@huawei.com>
---
 kernel/sched/rt.c    | 15 +++++++++------
 kernel/sched/sched.h |  5 +++++
 2 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 58364f489529..9093fa9f8fdc 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -664,7 +664,8 @@ static void do_balance_runtime(struct rt_rq *rt_rq)
 		 * or __disable_runtime() below sets a specific rq to inf to
 		 * indicate its been disabled and disalow stealing.
 		 */
-		if (iter->rt_runtime == RUNTIME_INF)
+		if (iter->rt_runtime == RUNTIME_INF ||
+		    iter->rt_runtime == RUNTIME_DISABLED)
 			goto next;
 
 		/*
@@ -735,7 +736,9 @@ static void __disable_runtime(struct rq *rq)
 			/*
 			 * Can't reclaim from ourselves or disabled runqueues.
 			 */
-			if (iter == rt_rq || iter->rt_runtime == RUNTIME_INF)
+			if (iter == rt_rq ||
+			    iter->rt_runtime == RUNTIME_INF ||
+			    iter->rt_runtime == RUNTIME_DISABLED)
 				continue;
 
 			raw_spin_lock(&iter->rt_runtime_lock);
@@ -761,10 +764,9 @@ static void __disable_runtime(struct rq *rq)
 		WARN_ON_ONCE(want);
 balanced:
 		/*
-		 * Disable all the borrow logic by pretending we have inf
-		 * runtime - in which case borrowing doesn't make sense.
+		 * Disable all the borrow logic by marking runtime disabled.
 		 */
-		rt_rq->rt_runtime = RUNTIME_INF;
+		rt_rq->rt_runtime = RUNTIME_DISABLED;
 		rt_rq->rt_throttled = 0;
 		raw_spin_unlock(&rt_rq->rt_runtime_lock);
 		raw_spin_unlock(&rt_b->rt_runtime_lock);
@@ -2555,7 +2557,8 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
 		struct rt_rq *rt_rq = tg->rt_rq[i];
 
 		raw_spin_lock(&rt_rq->rt_runtime_lock);
-		rt_rq->rt_runtime = rt_runtime;
+		if (rt_rq->rt_runtime != RUNTIME_DISABLED)
+			rt_rq->rt_runtime = rt_runtime;
 		raw_spin_unlock(&rt_rq->rt_runtime_lock);
 	}
 	raw_spin_unlock_irq(&tg->rt_bandwidth.rt_runtime_lock);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4dd0e4de0aab..80e9d254ab7c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -163,6 +163,11 @@ static inline void cpu_load_update_active(struct rq *this_rq) { }
  */
 #define RUNTIME_INF		((u64)~0ULL)
 
+/*
+ * Single value that denotes runtime is disabled, and it should not be used.
+ */
+#define RUNTIME_DISABLED	(-2ULL)
+
 static inline int idle_policy(int policy)
 {
 	return policy == SCHED_IDLE;
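A note on the sentinel value, as I read it (the patch itself does not spell this out): RUNTIME_INF is ((u64)~0ULL), i.e. -1ULL, so -2ULL is the nearest distinct sentinel, and both sit far above any valid rt_runtime, which is a nanosecond count bounded by the period. A small standalone C sketch of that relationship (plain userspace code, not part of the patch; the macro names only mirror the kernel ones for illustration):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t u64;

    #define RUNTIME_INF      ((u64)~0ULL)   /* -1ULL: "no limit" */
    #define RUNTIME_DISABLED ((u64)-2ULL)   /* distinct sentinel: runtime disabled */

    int main(void)
    {
            /* The two sentinels never collide with each other ... */
            assert(RUNTIME_INF != RUNTIME_DISABLED);

            /* ... and a realistic runtime (950 ms in ns, the rt default) stays below both. */
            u64 runtime_ns = 950ULL * 1000 * 1000;
            assert(runtime_ns < RUNTIME_DISABLED && runtime_ns < RUNTIME_INF);

            printf("sentinel values do not collide with valid runtimes\n");
            return 0;
    }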