From: Thomas Gleixner <tglx@linutronix.de>
mainline inclusion
from mainline-v5.17-rc1
commit 24ee940d89277602147ce1b8b4fd87b01b9a6660
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I53K0E
CVE: NA
-------------------------------------------------------------------------
While reporting a quiescent state for a given CPU, rcu_core() takes advantage of the freshly loaded grace period sequence number and the locked rnp to accelerate the callbacks whose sequence numbers have been assigned a stale value.
This action is only necessary when the rdp isn't offloaded; otherwise the NOCB kthreads already take care of callback progression.
However, the check for the offloaded state is volatile because it is performed outside the IRQs-disabled section. It's possible for the offloading process to preempt rcu_core() at that point on PREEMPT_RT.
This is dangerous because rcu_core() may end up accelerating callbacks concurrently with NOCB kthreads without appropriate locking.
Fix this by moving the offloaded check inside the rnp locking section.
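To illustrate the race, here is a simplified sketch of the pre-fix ordering in rcu_report_qs_rdp() (abridged, with unrelated logic elided; the "..." stands for the quiescent state bookkeeping done under the rnp lock):

	/* Pre-fix ordering (simplified sketch, not a compilable excerpt): */
	const bool offloaded = IS_ENABLED(CONFIG_RCU_NOCB_CPU) &&
			       rcu_segcblist_is_offloaded(&rdp->cblist);
	/*
	 * On PREEMPT_RT, rcu_core() can be preempted right here and the
	 * (de-)offloading process can flip the offloaded state, so the
	 * snapshot above may be stale by the time it is used below.
	 */
	raw_spin_lock_irqsave_rcu_node(rnp, flags);
	...
	if (!offloaded)
		/* Possibly racing with the NOCB kthreads. */
		needwake = rcu_accelerate_cbs(rnp, rdp);

Taking the snapshot after raw_spin_lock_irqsave_rcu_node() closes that window: IRQs are disabled and the rnp lock is held, so the offloaded state is stable for the duration of the check.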
Reported-and-tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

Conflicts:
	kernel/rcu/tree.c
Move "const bool offloaded = ..." down so that it is within the IRQ-disabled protection range, with minimal changes.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 kernel/rcu/tree.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b9c45b2d7690..fc3e2180e9d5 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2276,8 +2276,6 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
 	unsigned long flags;
 	unsigned long mask;
 	bool needwake = false;
-	const bool offloaded = IS_ENABLED(CONFIG_RCU_NOCB_CPU) &&
-			       rcu_segcblist_is_offloaded(&rdp->cblist);
 	struct rcu_node *rnp;
 
 	WARN_ON_ONCE(rdp->cpu != smp_processor_id());
@@ -2301,9 +2299,13 @@ rcu_report_qs_rdp(struct rcu_data *rdp)
 	if ((rnp->qsmask & mask) == 0) {
 		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 	} else {
+		const bool offloaded = IS_ENABLED(CONFIG_RCU_NOCB_CPU) &&
+				       rcu_segcblist_is_offloaded(&rdp->cblist);
 		/*
 		 * This GP can't end until cpu checks in, so all of our
 		 * callbacks can be processed during the next GP.
+		 *
+		 * NOCB kthreads have their own way to deal with that.
 		 */
 		if (!offloaded)
 			needwake = rcu_accelerate_cbs(rnp, rdp);