From: Jann Horn <jannh(a)google.com>
stable inclusion
from stable-v6.6.67
commit f9f85df30118f3f4112761e6682fc60ebcce23e5
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IBEAKR
CVE: CVE-2024-56675
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
commit ef1b808e3b7c98612feceedf985c2fbbeb28f956 upstream.
Uprobes always use bpf_prog_run_array_uprobe() under tasks-trace-RCU
protection. But it is possible to attach a non-sleepable BPF program to a
uprobe, and non-sleepable BPF programs are freed via normal RCU (see
__bpf_prog_put_noref()). This leads to UAF of the bpf_prog because a normal
RCU grace period does not imply a tasks-trace-RCU grace period.
Fix it by explicitly waiting for a tasks-trace-RCU grace period after
removing the attachment of a bpf_prog to a perf_event.
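To make the ordering concrete, here is a minimal sketch of the two sides (illustrative pseudo-kernel code based on the description above, not the actual functions; run_prog() is a stand-in for invoking a program):
	/* Reader side: uprobe hit, running the prog array as
	 * bpf_prog_run_array_uprobe() does, inside a tasks-trace-RCU
	 * read-side section. */
	rcu_read_lock_trace();
	item = &array->items[0];
	while ((prog = READ_ONCE(item->prog))) {
		run_prog(prog);		/* prog must stay alive until here */
		item++;
	}
	rcu_read_unlock_trace();
	/* Detach side: a non-sleepable prog is freed after only a normal RCU
	 * grace period once its last reference is dropped, and that grace
	 * period does not wait for the rcu_read_lock_trace() section above.
	 * The fix waits for a tasks-trace-RCU grace period before dropping
	 * the reference. */
	synchronize_rcu_tasks_trace();
	bpf_prog_put(event->prog);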
Fixes: 8c7dcb84e3b7 ("bpf: implement sleepable uprobes by chaining gps")
Suggested-by: Andrii Nakryiko <andrii(a)kernel.org>
Suggested-by: Alexei Starovoitov <ast(a)kernel.org>
Signed-off-by: Jann Horn <jannh(a)google.com>
Signed-off-by: Andrii Nakryiko <andrii(a)kernel.org>
Cc: stable(a)vger.kernel.org
Link: https://lore.kernel.org/bpf/20241210-bpf-fix-actual-uprobe-uaf-v1-1-1943984…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
kernel/trace/bpf_trace.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 93e06d370395..b084fe1dbe12 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2224,6 +2224,13 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
 		bpf_prog_array_free_sleepable(old_array);
 	}
 
+	/*
+	 * It could be that the bpf_prog is not sleepable (and will be freed
+	 * via normal RCU), but is called from a point that supports sleepable
+	 * programs and uses tasks-trace-RCU.
+	 */
+	synchronize_rcu_tasks_trace();
+
 	bpf_prog_put(event->prog);
 	event->prog = NULL;
 
--
2.34.1
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IB963V
CVE: NA
--------------------------------
In nmi_trigger_cpumask_backtrace(), printk_safe_flush() is called after the
NMIs are sent in order to flush the logs. When logbuf_lock is already held
and the current CPU is in printk-safe context (e.g., NMI context), attempting
to acquire the lock again can lead to deadlock.
Modify printk_safe_flush() to return early when it detects that logbuf_lock
is held and the current CPU is in printk-safe context. This prevents deadlock
scenarios where CPU0 holds the lock while other CPUs try to acquire it in NMI
context.
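The added check can be read as a predicate like the one below (a sketch only; the helper name printk_safe_flush_would_deadlock() is hypothetical, the patch open-codes the test at the top of printk_safe_flush()):
	static bool printk_safe_flush_would_deadlock(void)
	{
		/* logbuf_lock is already taken somewhere ... */
		if (!raw_spin_is_locked(&logbuf_lock))
			return false;
		/*
		 * ... and this CPU is inside a printk-safe (e.g. NMI)
		 * section, so it must not spin on the lock: the holder may
		 * be the very context that was interrupted and would never
		 * get to release it.
		 */
		return this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK;
	}
Skipping the flush does not drop the messages: they stay in the per-CPU safe buffers and can still be flushed later, once logbuf_lock has been released.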
Fixes: 099f1c84c005 ("printk: introduce per-cpu safe_print seq buffer")
Signed-off-by: Xiaomeng Zhang <zhangxiaomeng13(a)huawei.com>
---
kernel/printk/printk_safe.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
index 809f92492ec7..c97845688fe1 100644
--- a/kernel/printk/printk_safe.c
+++ b/kernel/printk/printk_safe.c
@@ -256,6 +256,10 @@ void printk_safe_flush(void)
 {
 	int cpu;
 
+	if (raw_spin_is_locked(&logbuf_lock) &&
+	    (this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK))
+		return;
+
 	for_each_possible_cpu(cpu) {
 #ifdef CONFIG_PRINTK_NMI
 		__printk_safe_flush(&per_cpu(nmi_print_seq, cpu).work);
--
2.34.1