hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8331
CVE: NA

------------------

When a process has hundreds of threads and runs for an extended period,
the aggregated stime or utime of its thread group can become extremely
large. Accessing /proc/xx/stat then triggers a divide error like the
following:

  divide error: 0000 [#1] SMP NOPTI
  CPU: 273 PID: 4619 Comm: ... Tainted: 5.10.0 x86_64
  RIP: 0010:cputime_adjust+0x55/0xb0
  RSP: 0018:ffffae408e07bbc8 EFLAGS: 00010807
  ...
  Call Trace:
   thread_group_cputime_adjusted+0x4b/0x70
   do_task_stat+0x2d8/0xdc0

This issue is caused by an unsigned overflow of stime + utime. In older
kernel versions, this overflow could lead to a division by zero in
scale_stime(). Although scale_stime() itself might misbehave on certain
specially crafted inputs, the actual arguments passed to it are
constrained by the context at its call sites within the kernel. Since
this interface is no longer used in newer kernel versions, only this
minimal fix is applied.

Add overflow detection and, when an overflow is detected, right-shift
both values once before the computation. A single shift guarantees the
new sum cannot overflow while keeping the stime:utime ratio, so the
original calculation semantics are preserved.
Fixes: 9d7fb0427648 ("sched/cputime: Guarantee stime + utime == rtime")
Signed-off-by: Xia Fukun <xiafukun@huawei.com>
---
 kernel/sched/cputime.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 4055f2008cc6..e2077ee4c710 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -692,6 +692,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
 		   u64 *ut, u64 *st)
 {
 	u64 rtime, stime, utime;
+	u64 s, u, sum;
 	unsigned long flags;
 
 	/* Serialize concurrent callers such that we can honour our guarantees */
@@ -727,7 +728,23 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
 		goto update;
 	}
 
-	stime = scale_stime(stime, rtime, stime + utime);
+	s = stime;
+	u = utime;
+	sum = s + u;
+
+	/*
+	 * Detect unsigned overflow: if sum < either operand,
+	 * overflow occurred. A single right shift is sufficient
+	 * to guarantee that the new sum won't overflow.
+	 */
+	if (unlikely(sum < s)) {
+		s >>= 1;
+		u >>= 1;
+		sum = s + u;
+		stime = scale_stime(s, rtime, sum);
+	} else {
+		stime = scale_stime(stime, rtime, stime + utime);
+	}
 	/*
 	 * Because mul_u64_u64_div_u64() can approximate on some
 	 * achitectures; enforce the constraint that: a*b/(b+c) <= a.
-- 
2.34.1