[PATCH OLK-5.10] perf: Fix __perf_event_overflow() vs perf_remove_from_context() race
From: Peter Zijlstra <peterz@infradead.org>

mainline inclusion
from mainline-v7.0-rc2
commit c9bc1753b3cc41d0e01fbca7f035258b5f4db0ae
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13907
CVE: CVE-2026-23271

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...

----------------------------------------------------------------------

Make sure that __perf_event_overflow() runs with IRQs disabled for all
possible callchains. Specifically, the software events can end up running
it with only preemption disabled. This opens up a race vs
perf_event_exit_event() and friends that will go and free various things
the overflow path expects to be present, like the BPF program.

Fixes: 592903cdcbf6 ("perf_counter: add an event_list")
Reported-by: Simond Hu <cmdhh1767@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Simond Hu <cmdhh1767@gmail.com>
Link: https://patch.msgid.link/20260224122909.GV1395416@noisy.programming.kicks-as...
Conflicts:
	kernel/events/core.c
[Fix ctx conflicts and guard conflict.]
Signed-off-by: Luo Gengkun <luogengkun2@huawei.com>
---
 kernel/events/core.c | 56 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 52 insertions(+), 4 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index d1ffd5fb9c6f..a0ac222caca0 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8972,6 +8972,13 @@ int perf_event_overflow(struct perf_event *event,
 			struct perf_sample_data *data,
 			struct pt_regs *regs)
 {
+	/*
+	 * Entry point from hardware PMI, interrupts should be disabled here.
+	 * This serializes us against perf_event_remove_from_context() in
+	 * things like perf_event_release_kernel().
+	 */
+	lockdep_assert_irqs_disabled();
+
 	return __perf_event_overflow(event, 1, data, regs);
 }
 
@@ -9051,7 +9058,21 @@ static void perf_swevent_event(struct perf_event *event, u64 nr,
 			       struct pt_regs *regs)
 {
 	struct hw_perf_event *hwc = &event->hw;
+	unsigned long flags;
 
+	/*
+	 * This is:
+	 *  - software		preempt
+	 *  - tracepoint		preempt
+	 *  - tp_target_task	irq (ctx->lock)
+	 *  - uprobes		preempt/irq
+	 *  - kprobes		preempt/irq
+	 *  - hw_breakpoint	irq
+	 *
+	 * Any of these are sufficient to hold off RCU and thus ensure @event
+	 * exists.
+	 */
+	lockdep_assert_preemption_disabled();
 	local64_add(nr, &event->count);
 
 	if (!regs)
@@ -9060,19 +9081,36 @@ static void perf_swevent_event(struct perf_event *event, u64 nr,
 	if (!is_sampling_event(event))
 		return;
 
+	/*
+	 * Serialize against event_function_call() IPIs like normal overflow
+	 * event handling. Specifically, must not allow
+	 * perf_event_release_kernel() -> perf_remove_from_context() to make
+	 * progress and 'release' the event from under us.
+	 */
+	local_irq_save(flags);
+	if (event->state != PERF_EVENT_STATE_ACTIVE) {
+		local_irq_restore(flags);
+		return;
+	}
+
 	if ((event->attr.sample_type & PERF_SAMPLE_PERIOD) && !event->attr.freq) {
 		data->period = nr;
+		local_irq_restore(flags);
 		return perf_swevent_overflow(event, 1, data, regs);
 	} else
 		data->period = event->hw.last_period;
 
-	if (nr == 1 && hwc->sample_period == 1 && !event->attr.freq)
+	if (nr == 1 && hwc->sample_period == 1 && !event->attr.freq) {
+		local_irq_restore(flags);
 		return perf_swevent_overflow(event, 1, data, regs);
+	}
 
-	if (local64_add_negative(nr, &hwc->period_left))
+	if (local64_add_negative(nr, &hwc->period_left)) {
+		local_irq_restore(flags);
 		return;
-
+	}
 	perf_swevent_overflow(event, 0, data, regs);
+	local_irq_restore(flags);
 }
 
 static int perf_exclude_event(struct perf_event *event,
@@ -9476,6 +9514,11 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
 	struct perf_sample_data data;
 	struct perf_event *event;
 
+	/*
+	 * Per being a tracepoint, this runs with preemption disabled.
+	 */
+	lockdep_assert_preemption_disabled();
+
 	struct perf_raw_record raw = {
 		.frag = {
 			.size = entry_size,
@@ -9908,6 +9951,11 @@ void perf_bp_event(struct perf_event *bp, void *data)
 {
 	struct perf_sample_data sample;
 	struct pt_regs *regs = data;
 
+	/*
+	 * Exception context, will have interrupts disabled.
+	 */
+	lockdep_assert_irqs_disabled();
+
 	perf_sample_data_init(&sample, bp->attr.bp_addr, 0);
 	if (!bp->hw.state && !perf_exclude_event(bp, regs))
@@ -10361,7 +10409,7 @@ static enum hrtimer_restart perf_swevent_hrtimer(struct hrtimer *hrtimer)
 
 	if (regs && !perf_exclude_event(event, regs)) {
 		if (!(event->attr.exclude_idle && is_idle_task(current)))
-			if (__perf_event_overflow(event, 1, &data, regs))
+			if (perf_event_overflow(event, &data, regs))
 				ret = HRTIMER_NORESTART;
 	}
 
-- 
2.34.1
Feedback: The patch(es) you sent to the kernel@openeuler.org mailing list have been converted to a pull request successfully!
Pull request link: https://atomgit.com/openeuler/kernel/merge_requests/21736
Mailing list address: https://mailweb.openeuler.org/archives/list/kernel@openeuler.org/message/2VJ...