From: Frederic Weisbecker <frederic@kernel.org>
mainline inclusion
from mainline-5.14-rc1
commit 29721b859217b946bfc001c1644745ed4d7c26cb
category: feature
feature: Deep isolation
bugzilla: https://gitee.com/openeuler/kernel/issues/I4N00D
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
When adding a tick dependency to a task, it's necessary to wake up the CPU where the task resides to re-evaluate tick dependencies on that CPU.

However, the current code wakes up all nohz_full CPUs, which is unnecessary.

Switch to waking up a single CPU, by relying on the ordering of writes to task->cpu and task->tick_dep_mask.
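To illustrate the ordering argument (an editorial sketch, not part of the patch; names are abbreviated and the preemption/online checks are omitted): each side stores its variable, executes a full barrier, then loads the other side's variable, so at least one side is guaranteed to observe the other's store:

	/* Kicker side: tick_nohz_dep_set_task() */
	atomic_fetch_or(BIT(bit), &tsk->tick_dep_mask); /* STORE mask; fully ordered RMW implies smp_mb() */
	cpu = task_cpu(tsk);                            /* LOAD p->cpu */
	tick_nohz_full_kick_cpu(cpu);                   /* kick the loaded (possibly stale) CPU */

	/* Switch-in side: scheduler */
	p->cpu = new_cpu;           /* STORE p->cpu in set_task_cpu() */
	smp_mb__after_spin_lock();  /* full barrier under rq->lock in __schedule() */
	mask = p->tick_dep_mask;    /* LOAD in tick_nohz_task_switch() */

Either the kicker observes the task's new CPU and kicks the right one, or the task observes the new dependency bit when it is switched in and restarts the tick locally; a stale kick sent to the old CPU is then harmless.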
[ mingo: Minor readability edit. ]
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210512232924.150322-7-frederic@kernel.org
Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
Reviewed-by: Chao Liu <liuchao173@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 kernel/time/tick-sched.c | 40 +++++++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 13 deletions(-)
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index e53a2a835f30..a88741991e03 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -302,6 +302,31 @@ void tick_nohz_full_kick_cpu(int cpu)
 	irq_work_queue_on(&per_cpu(nohz_full_kick_work, cpu), cpu);
 }
 
+static void tick_nohz_kick_task(struct task_struct *tsk)
+{
+	int cpu = task_cpu(tsk);
+
+	/*
+	 * If the task concurrently migrates to another CPU,
+	 * we guarantee it sees the new tick dependency upon
+	 * schedule.
+	 *
+	 *
+	 * set_task_cpu(p, cpu);
+	 *   STORE p->cpu = @cpu
+	 * __schedule() (switch to task 'p')
+	 *   LOCK rq->lock
+	 *   smp_mb__after_spin_lock()		STORE p->tick_dep_mask
+	 *   tick_nohz_task_switch()		smp_mb() (atomic_fetch_or())
+	 *      LOAD p->tick_dep_mask		LOAD p->cpu
+	 */
+
+	preempt_disable();
+	if (cpu_online(cpu))
+		tick_nohz_full_kick_cpu(cpu);
+	preempt_enable();
+}
+
 /*
  * Kick all full dynticks CPUs in order to force these to re-evaluate
  * their dependency on the tick and restart it if necessary.
@@ -384,19 +409,8 @@ EXPORT_SYMBOL_GPL(tick_nohz_dep_clear_cpu);
  */
 void tick_nohz_dep_set_task(struct task_struct *tsk, enum tick_dep_bits bit)
 {
-	if (!atomic_fetch_or(BIT(bit), &tsk->tick_dep_mask)) {
-		if (tsk == current) {
-			preempt_disable();
-			tick_nohz_full_kick();
-			preempt_enable();
-		} else {
-			/*
-			 * Some future tick_nohz_full_kick_task()
-			 * should optimize this.
-			 */
-			tick_nohz_full_kick_all();
-		}
-	}
+	if (!atomic_fetch_or(BIT(bit), &tsk->tick_dep_mask))
+		tick_nohz_kick_task(tsk);
 }
 EXPORT_SYMBOL_GPL(tick_nohz_dep_set_task);
 
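For context (an editorial note, not part of the patch): callers typically reach tick_nohz_dep_set_task() through the tick_dep_set_task() wrapper in include/linux/tick.h, which is a no-op unless nohz_full is enabled. For example, the posix CPU timer code arms and clears a per-task dependency roughly like this:

	/* when a per-task posix CPU timer is armed */
	tick_dep_set_task(p, TICK_DEP_BIT_POSIX_TIMER);

	/* ... and once no such timer remains */
	tick_dep_clear_task(p, TICK_DEP_BIT_POSIX_TIMER);

With this patch applied, the set path kicks only the CPU that 'p' runs on instead of every nohz_full CPU.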