From: Peter Zijlstra <peterz@infradead.org>
mainline inclusion
from mainline-v5.13-rc1
commit 3a7956e25e1d7b3c148569e78895e1f3178122a9
bugzilla: 52510
CVE: NA
--------------------------------
The kthread_is_per_cpu() construct relies on only being called on PF_KTHREAD tasks (per the WARN in to_kthread). This gives rise to the following usage pattern:
if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
However, as reported by syzkaller, this is broken. The scenario is:
	CPU0				CPU1 (running p)

	(p->flags & PF_KTHREAD) // true

					begin_new_exec()
					  me->flags &= ~(PF_KTHREAD|...);

	kthread_is_per_cpu(p)
	  to_kthread(p)
	    WARN(!(p->flags & PF_KTHREAD) <-- *SPLAT*
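For illustration only (not part of this patch), here is a minimal
userspace analogue of that check-then-use race; all names in it
(fake_task, FAKE_PF_KTHREAD and friends) are hypothetical stand-ins,
not kernel API:

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	#define FAKE_PF_KTHREAD 0x1

	struct fake_task {
		atomic_int flags;	/* stands in for p->flags */
		void *set_child_tid;	/* stands in for the struct kthread pointer */
	};

	/* Mirrors the old to_kthread(): assumes the flag is still set. */
	static void *fake_to_kthread(struct fake_task *p)
	{
		if (!(atomic_load(&p->flags) & FAKE_PF_KTHREAD))
			fprintf(stderr, "SPLAT: flag cleared between the two reads\n");
		return p->set_child_tid;
	}

	/* Plays the role of begin_new_exec() on CPU1: clears the flag. */
	static void *exec_thread(void *arg)
	{
		struct fake_task *p = arg;

		atomic_fetch_and(&p->flags, ~FAKE_PF_KTHREAD);
		return NULL;
	}

	int main(void)
	{
		struct fake_task task = { .set_child_tid = &task };
		pthread_t t;

		atomic_store(&task.flags, FAKE_PF_KTHREAD);
		pthread_create(&t, NULL, exec_thread, &task);

		/*
		 * The broken two-step pattern: the flag is read here and
		 * again inside fake_to_kthread(); the other thread may
		 * clear it in between, which is the window shown above.
		 */
		if (atomic_load(&task.flags) & FAKE_PF_KTHREAD)
			fake_to_kthread(&task);

		pthread_join(t, NULL);
		return 0;
	}

Whether the "SPLAT" fires depends on scheduling, exactly as with the
syzkaller report; the point is that the two reads of the flags word are
not atomic as a pair.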
Introduce __to_kthread() that omits the WARN and is sure to check both values.
Use this to remove the problematic pattern for kthread_is_per_cpu() and fix a number of other kthread_*() functions that have similar issues but are currently not used in ways that would expose the problem.
Notably kthread_func() is only ever called on 'current', while kthread_probe_data() is only used for PF_WQ_WORKER, which implies the task is from kthread_create*().
Fixes: ac687e6e8c26 ("kthread: Extract KTHREAD_IS_PER_CPU")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
Link: https://lkml.kernel.org/r/YH6WJc825C4P0FCK@hirez.programming.kicks-ass.net
Signed-off-by: Zheng Zucheng <zhengzucheng@huawei.com>
Conflicts:
	kernel/kthread.c
	kernel/sched/core.c
	kernel/sched/fair.c
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 kernel/kthread.c | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index fee5cc055791b..8873128ef6498 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -74,6 +74,25 @@ static inline struct kthread *to_kthread(struct task_struct *k)
 	return (__force void *)k->set_child_tid;
 }
 
+/*
+ * Variant of to_kthread() that doesn't assume @p is a kthread.
+ *
+ * Per construction; when:
+ *
+ *   (p->flags & PF_KTHREAD) && p->set_child_tid
+ *
+ * the task is both a kthread and struct kthread is persistent. However
+ * PF_KTHREAD on it's own is not, kernel_thread() can exec() (See umh.c and
+ * begin_new_exec()).
+ */
+static inline struct kthread *__to_kthread(struct task_struct *p)
+{
+	void *kthread = (__force void *)p->set_child_tid;
+	if (kthread && !(p->flags & PF_KTHREAD))
+		kthread = NULL;
+	return kthread;
+}
+
 void free_kthread_struct(struct task_struct *k)
 {
 	struct kthread *kthread;
@@ -177,10 +196,11 @@ void set_kthreadd_affinity(void)
  */
 void *kthread_probe_data(struct task_struct *task)
 {
-	struct kthread *kthread = to_kthread(task);
+	struct kthread *kthread = __to_kthread(task);
 	void *data = NULL;
 
-	probe_kernel_read(&data, &kthread->data, sizeof(data));
+	if (kthread)
+		probe_kernel_read(&data, &kthread->data, sizeof(data));
 	return data;
 }
 
@@ -492,9 +512,9 @@ void kthread_set_per_cpu(struct task_struct *k, int cpu)
 	set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
 }
 
-bool kthread_is_per_cpu(struct task_struct *k)
+bool kthread_is_per_cpu(struct task_struct *p)
 {
-	struct kthread *kthread = to_kthread(k);
+	struct kthread *kthread = __to_kthread(p);
 
 	if (!kthread)
 		return false;
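For reference, the caller-side effect of the fix is a sketch like the
following (the actual hunks live in the sched files listed under
Conflicts and are not part of the kernel/kthread.c diff above):

	/* Before: racy two-step test; p->flags is read twice. */
	if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
		return true;

	/* After: a single call; __to_kthread() checks p->set_child_tid
	 * and PF_KTHREAD together and never WARNs on non-kthreads. */
	if (kthread_is_per_cpu(p))
		return true;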