mainline inclusion
from mainline-v5.4
commit 7b2b55da1db10a5525460633ae4b6fb0be060c41
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I47QS2
CVE: NA
--------------------------------
Only when calling the poll syscall the first time can the user receive POLLPRI correctly. After that, the user always fails to acquire the event signal.
Reproduce case:
1. Get the monitor code in Documentation/accounting/psi.txt (a minimal
   sketch of such a monitor follows below).
2. Run it, and wait for the event to be triggered.
3. Kill and restart the process.
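For step 1, a minimal user-space monitor along the lines of the sample in Documentation/accounting/psi.txt looks roughly like this; the memory target and the "some 150000 1000000" trigger are illustrative choices, any PSI trigger that fires reproduces the problem:

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Ask for a POLLPRI notification whenever memory "some" stall time
 * exceeds 150ms within any 1s window, then block in poll().
 */
int main(void)
{
	const char trig[] = "some 150000 1000000";
	struct pollfd fds;
	int n;

	fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (fds.fd < 0) {
		printf("open error: %s\n", strerror(errno));
		return 1;
	}
	fds.events = POLLPRI;

	if (write(fds.fd, trig, strlen(trig) + 1) < 0) {
		printf("trigger write error: %s\n", strerror(errno));
		return 1;
	}

	printf("waiting for events...\n");
	while (1) {
		n = poll(&fds, 1, -1);
		if (n < 0) {
			printf("poll error: %s\n", strerror(errno));
			return 1;
		}
		if (fds.revents & POLLERR) {
			printf("event source is gone\n");
			return 0;
		}
		if (fds.revents & POLLPRI)
			printf("event triggered!\n");
	}
	return 0;
}

Before this patch, the restarted monitor never sees POLLPRI again, because the first monitor's teardown leaves poll_scheduled at 1.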
The question is why we can end up with poll_scheduled = 1 but the work not running (which would reset it to 0). The answer is that the scheduling side sees group->poll_kworker under RCU protection and then schedules it, but here we cancel the work and destroy the worker. The cancel needs to pair with resetting the poll_scheduled flag.
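To illustrate the reasoning, here is a small user-space model of the flag handshake. It is not kernel code; schedule_poll(), run_poll_work() and destroy_trigger() are stand-ins for the RCU-protected scheduling path, the poll worker and the destroy path described above:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int poll_scheduled;	/* models group->poll_scheduled */
static bool work_pending;		/* models the queued delayed work */

static void schedule_poll(void)
{
	int expected = 0;

	/* Only queue the work if it is not already scheduled (0 -> 1). */
	if (!atomic_compare_exchange_strong(&poll_scheduled, &expected, 1)) {
		printf("schedule_poll: already scheduled, bailing out\n");
		return;
	}
	work_pending = true;
	printf("schedule_poll: work queued\n");
}

static void run_poll_work(void)
{
	if (!work_pending)
		return;
	work_pending = false;
	/* Normally the work itself resets the flag when it runs. */
	atomic_store(&poll_scheduled, 0);
	printf("poll_work: ran and reset the flag\n");
}

static void destroy_trigger(bool with_fix)
{
	/* Cancel the pending work: it will never run, never reset the flag. */
	work_pending = false;
	if (with_fix)
		atomic_store(&poll_scheduled, 0);	/* the pairing this patch adds */
	printf("destroy: cancelled%s\n", with_fix ? " and reset the flag" : "");
}

int main(void)
{
	/* First monitor: the event fires, the work gets scheduled, then we exit. */
	schedule_poll();
	destroy_trigger(false);		/* without the fix: flag stuck at 1 */

	/* Restarted monitor: every schedule attempt bails out, no POLLPRI ever. */
	schedule_poll();

	/* Tear down again, this time pairing the cancel with a flag reset. */
	destroy_trigger(true);
	schedule_poll();		/* now works again */
	run_poll_work();
	return 0;
}

Running the model shows the restarted monitor's schedule_poll() bailing out, which is exactly the "poll_scheduled = 1 but no work running" state described above.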
Link: http://lkml.kernel.org/r/1566357985-97781-1-git-send-email-joseph.qi@linux.a...
Signed-off-by: Jason Xing <kerneljasonxing@linux.alibaba.com>
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reviewed-by: Caspar Zhang <caspar@linux.alibaba.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Liu Xinpeng <liuxp11@chinatelecom.cn>
Signed-off-by: Ctyun Kernel <ctyuncommiter01@chinatelecom.cn>
---
 kernel/sched/psi.c | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 23fbbcc..6e52b67 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -1131,7 +1131,15 @@ static void psi_trigger_destroy(struct kref *ref)
 	 * deadlock while waiting for psi_poll_work to acquire trigger_lock
 	 */
 	if (kworker_to_destroy) {
+		/*
+		 * After the RCU grace period has expired, the worker
+		 * can no longer be found through group->poll_kworker.
+		 * But it might have been already scheduled before
+		 * that - deschedule it cleanly before destroying it.
+		 */
 		kthread_cancel_delayed_work_sync(&group->poll_work);
+		atomic_set(&group->poll_scheduled, 0);
+
 		kthread_destroy_worker(kworker_to_destroy);
 	}
 	kfree(t);