mailweb.openeuler.org

Kernel

kernel@openeuler.org

  • 44 participants
  • 18676 discussions
[PATCH openEuler-22.03-LTS-SP1] bpf: Use raw_spinlock_t in ringbuf
by Xiaomeng Zhang 12 Nov '24

From: Wander Lairson Costa <wander.lairson(a)gmail.com>

mainline inclusion
from mainline-v6.12-rc4
commit 8b62645b09f870d70c7910e7550289d444239a46
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IB3I26
CVE: CVE-2024-50138
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

The function __bpf_ringbuf_reserve is invoked from a tracepoint, which
disables preemption. Using spinlock_t in this context can lead to a
"sleep in atomic" warning in the RT variant. This issue is illustrated
in the example below:

BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 556208, name: test_progs
preempt_count: 1, expected: 0
RCU nest depth: 1, expected: 1
INFO: lockdep is turned off.
Preemption disabled at:
[<ffffd33a5c88ea44>] migrate_enable+0xc0/0x39c
CPU: 7 PID: 556208 Comm: test_progs Tainted: G
Hardware name: Qualcomm SA8775P Ride (DT)
Call trace:
 dump_backtrace+0xac/0x130
 show_stack+0x1c/0x30
 dump_stack_lvl+0xac/0xe8
 dump_stack+0x18/0x30
 __might_resched+0x3bc/0x4fc
 rt_spin_lock+0x8c/0x1a4
 __bpf_ringbuf_reserve+0xc4/0x254
 bpf_ringbuf_reserve_dynptr+0x5c/0xdc
 bpf_prog_ac3d15160d62622a_test_read_write+0x104/0x238
 trace_call_bpf+0x238/0x774
 perf_call_bpf_enter.isra.0+0x104/0x194
 perf_syscall_enter+0x2f8/0x510
 trace_sys_enter+0x39c/0x564
 syscall_trace_enter+0x220/0x3c0
 do_el0_svc+0x138/0x1dc
 el0_svc+0x54/0x130
 el0t_64_sync_handler+0x134/0x150
 el0t_64_sync+0x17c/0x180

Switch the spinlock to raw_spinlock_t to avoid this error.

Fixes: 457f44363a88 ("bpf: Implement BPF ring buffer and verifier support for it")
Reported-by: Brian Grech <bgrech(a)redhat.com>
Signed-off-by: Wander Lairson Costa <wander.lairson(a)gmail.com>
Signed-off-by: Wander Lairson Costa <wander(a)redhat.com>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Acked-by: Daniel Borkmann <daniel(a)iogearbox.net>
Link: https://lore.kernel.org/r/20240920190700.617253-1-wander@redhat.com

Conflicts:
	kernel/bpf/ringbuf.c
[The conflicts were due to commit 457f44363a889 not being merged.]

Signed-off-by: Xiaomeng Zhang <zhangxiaomeng13(a)huawei.com>
---
 kernel/bpf/ringbuf.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index d21d4ba2eb39..984f4772a01e 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -36,7 +36,7 @@ struct bpf_ringbuf {
 	u64 mask;
 	struct page **pages;
 	int nr_pages;
-	spinlock_t spinlock ____cacheline_aligned_in_smp;
+	raw_spinlock_t spinlock ____cacheline_aligned_in_smp;
 	/* Consumer and producer counters are put into separate pages to allow
 	 * mapping consumer page as r/w, but restrict producer page to r/o.
 	 * This protects producer position from being modified by user-space
@@ -139,7 +139,7 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
 	if (!rb)
 		return ERR_PTR(-ENOMEM);
 
-	spin_lock_init(&rb->spinlock);
+	raw_spin_lock_init(&rb->spinlock);
 	init_waitqueue_head(&rb->waitq);
 	init_irq_work(&rb->work, bpf_ringbuf_notify);
 
@@ -339,10 +339,10 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
 	cons_pos = smp_load_acquire(&rb->consumer_pos);
 
 	if (in_nmi()) {
-		if (!spin_trylock_irqsave(&rb->spinlock, flags))
+		if (!raw_spin_trylock_irqsave(&rb->spinlock, flags))
 			return NULL;
 	} else {
-		spin_lock_irqsave(&rb->spinlock, flags);
+		raw_spin_lock_irqsave(&rb->spinlock, flags);
 	}
 
 	pend_pos = rb->pending_pos;
@@ -368,7 +368,7 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
 	 */
 	if (new_prod_pos - cons_pos > rb->mask ||
 	    new_prod_pos - pend_pos > rb->mask) {
-		spin_unlock_irqrestore(&rb->spinlock, flags);
+		raw_spin_unlock_irqrestore(&rb->spinlock, flags);
 		return NULL;
 	}
 
@@ -380,7 +380,7 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
 	/* pairs with consumer's smp_load_acquire() */
 	smp_store_release(&rb->producer_pos, new_prod_pos);
 
-	spin_unlock_irqrestore(&rb->spinlock, flags);
+	raw_spin_unlock_irqrestore(&rb->spinlock, flags);
 
 	return (void *)hdr + BPF_RINGBUF_HDR_SZ;
 }
-- 
2.34.1
[PATCH v6 OLK-5.10 0/3] arm64: support page mapping percpu first chunk allocator
by Kaixiong Yu 12 Nov '24

The percpu embedded first chunk allocator is the first option, but it can fail on ARM64 when NUMA is enabled with CONFIG_KASAN=y. Let's implement the page mapping percpu first chunk allocator as a fallback to the embedding allocator to increase the robustness of the system. Also fix a crash when both NEED_PER_CPU_PAGE_FIRST_CHUNK and KASAN_VMALLOC are enabled. After merging this patch set, the ARM64 machine can start and work normally.

Kefeng Wang (3):
  vmalloc: choose a better start address in vm_area_register_early()
  arm64: support page mapping percpu first chunk allocator
  kasan: arm64: fix pcpu_page_first_chunk crash with KASAN_VMALLOC

 arch/arm64/Kconfig         |  4 ++
 arch/arm64/mm/kasan_init.c | 16 ++++++++
 arch/arm64/mm/numa.c       | 83 +++++++++++++++++++++++++++++++++-----
 include/linux/kasan.h      | 10 ++++-
 mm/kasan/common.c          |  5 +++
 mm/vmalloc.c               | 19 ++++++---
 6 files changed, 119 insertions(+), 18 deletions(-)
-- 
2.34.1
[PATCH OLK-5.10] sched: smart_grid: Prevent double-free in sched_grid_qos_free
by Yipeng Zou 12 Nov '24

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IB3N2A

----------------------------------------

KASAN detected a double-free bug in the smart grid. This issue arises from the uninitialized use of p->grid_qos within the task {fork,free} processes. The sequence of events leading to the double-free is as follows:

CPU0                                   CPU1
fork (in some error process)
  goto bad_fork_free
  call_rcu(__delayed_free_task)
                                       __delayed_free_task
                                         sched_grid_qos_free
release_task
  delayed_put_task_struct
    __put_task_struct
      sched_grid_qos_free (double free)

When copy_process returns with an error, grid_qos is double-freed. To address this, grid_qos is initialized to NULL in dup_task_struct, and a NULL check is added for p->grid_qos before freeing.

Bug report details:

==================================================================
BUG: KASAN: double-free or invalid-free in sched_grid_qos_free+0x3c/0x90
CPU: 343 PID: 0 Comm: swapper/343 Kdump: loaded Tainted: G B E
Call trace:
 dump_backtrace+0x0/0x3e0
 show_stack+0x1c/0x28
 dump_stack+0x13c/0x190
 print_address_description.constprop.0+0x28/0x1f0
 kasan_report_invalid_free+0x44/0x6c
 __kasan_slab_free+0x158/0x180
 kasan_slab_free+0x10/0x20
 kfree+0xe0/0x6e0
 sched_grid_qos_free+0x3c/0x90
 free_task+0xc4/0x164
 __put_task_struct+0x264/0x31c
 delayed_put_task_struct+0x94/0x180
 rcu_do_batch+0x2ec/0x9f0
 rcu_core+0x34c/0x530
 rcu_core_si+0x14/0x30
 __do_softirq+0x284/0x900
 irq_exit+0x2d4/0x35c
 __handle_domain_irq+0x108/0x1f0
 gic_handle_irq+0x74/0x620
 el1_irq+0xbc/0x140
 arch_cpu_idle+0x14/0x3c
 default_idle_call+0x80/0x320
 cpuidle_idle_call+0x244/0x2b0
 do_idle+0x138/0x260
 cpu_startup_entry+0x2c/0x70
 secondary_start_kernel+0x35c/0x4e4

Allocated by task 44027:
 kasan_save_stack+0x24/0x50
 __kasan_kmalloc.constprop.0+0xa0/0xcc
 kasan_kmalloc+0xc/0x14
 kmem_cache_alloc_trace+0xdc/0x5d0
 sched_grid_qos_fork+0x50/0x20c
 copy_process+0x8fc/0x3f60
 kernel_clone+0x12c/0x660
 __se_sys_clone+0xc0/0x110
 __arm64_sys_clone+0xa8/0x110
 invoke_syscall+0x70/0x274
 el0_svc_common.constprop.0+0x1fc/0x2dc
 do_el0_svc+0xe8/0x140
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xb0/0xb4
 el0_sync+0x168/0x180

Freed by task 1748:
 kasan_save_stack+0x24/0x50
 kasan_set_track+0x24/0x34
 kasan_set_free_info+0x24/0x4c
 __kasan_slab_free+0xf8/0x180
 kasan_slab_free+0x10/0x20
 kfree+0xe0/0x6e0
 sched_grid_qos_free+0x3c/0x90
 free_task+0xc4/0x164
 __delayed_free_task+0x18/0x3c
 rcu_do_batch+0x2ec/0x9f0
 rcu_core+0x34c/0x530
 rcu_core_si+0x14/0x30
 __do_softirq+0x284/0x900

Fixes: 700bfc4068cf ("sched: smart grid: init sched_grid_qos structure on QOS purpose")
Signed-off-by: Yipeng Zou <zouyipeng(a)huawei.com>
---
 kernel/fork.c           | 4 ++++
 kernel/sched/grid/qos.c | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/kernel/fork.c b/kernel/fork.c
index dcf1f9c655d8..9b1ea79deaa5 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -953,6 +953,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	tsk->prefer_cpus = NULL;
 #endif
 
+#ifdef CONFIG_QOS_SCHED_SMART_GRID
+	tsk->grid_qos = NULL;
+#endif
+
 #ifdef CONFIG_SCHED_TASK_RELATIONSHIP
 	tsk->rship = NULL;
 #endif
diff --git a/kernel/sched/grid/qos.c b/kernel/sched/grid/qos.c
index 7ee3687347ce..60b1ff843bbb 100644
--- a/kernel/sched/grid/qos.c
+++ b/kernel/sched/grid/qos.c
@@ -70,6 +70,9 @@ int sched_grid_qos_fork(struct task_struct *p, struct task_struct *orig)
 
 void sched_grid_qos_free(struct task_struct *p)
 {
+	if (!p->grid_qos)
+		return;
+
 	kfree(p->grid_qos);
 	p->grid_qos = NULL;
 }
-- 
2.34.1
[PATCH OLK-6.6] sched: smart_grid: Prevent double-free in sched_grid_qos_free
by Yipeng Zou 12 Nov '24

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IB3N2A

----------------------------------------

KASAN detected a double-free bug in the smart grid. This issue arises from the uninitialized use of p->grid_qos within the task {fork,free} processes. The sequence of events leading to the double-free is as follows:

CPU0                                   CPU1
fork (in some error process)
  goto bad_fork_free
  call_rcu(__delayed_free_task)
                                       __delayed_free_task
                                         sched_grid_qos_free
release_task
  delayed_put_task_struct
    __put_task_struct
      sched_grid_qos_free (double free)

When copy_process returns with an error, grid_qos is double-freed. To address this, grid_qos is initialized to NULL in dup_task_struct, and a NULL check is added for p->grid_qos before freeing.

Bug report details:

==================================================================
BUG: KASAN: double-free or invalid-free in sched_grid_qos_free+0x3c/0x90
CPU: 343 PID: 0 Comm: swapper/343 Kdump: loaded Tainted: G B E
Call trace:
 dump_backtrace+0x0/0x3e0
 show_stack+0x1c/0x28
 dump_stack+0x13c/0x190
 print_address_description.constprop.0+0x28/0x1f0
 kasan_report_invalid_free+0x44/0x6c
 __kasan_slab_free+0x158/0x180
 kasan_slab_free+0x10/0x20
 kfree+0xe0/0x6e0
 sched_grid_qos_free+0x3c/0x90
 free_task+0xc4/0x164
 __put_task_struct+0x264/0x31c
 delayed_put_task_struct+0x94/0x180
 rcu_do_batch+0x2ec/0x9f0
 rcu_core+0x34c/0x530
 rcu_core_si+0x14/0x30
 __do_softirq+0x284/0x900
 irq_exit+0x2d4/0x35c
 __handle_domain_irq+0x108/0x1f0
 gic_handle_irq+0x74/0x620
 el1_irq+0xbc/0x140
 arch_cpu_idle+0x14/0x3c
 default_idle_call+0x80/0x320
 cpuidle_idle_call+0x244/0x2b0
 do_idle+0x138/0x260
 cpu_startup_entry+0x2c/0x70
 secondary_start_kernel+0x35c/0x4e4

Allocated by task 44027:
 kasan_save_stack+0x24/0x50
 __kasan_kmalloc.constprop.0+0xa0/0xcc
 kasan_kmalloc+0xc/0x14
 kmem_cache_alloc_trace+0xdc/0x5d0
 sched_grid_qos_fork+0x50/0x20c
 copy_process+0x8fc/0x3f60
 kernel_clone+0x12c/0x660
 __se_sys_clone+0xc0/0x110
 __arm64_sys_clone+0xa8/0x110
 invoke_syscall+0x70/0x274
 el0_svc_common.constprop.0+0x1fc/0x2dc
 do_el0_svc+0xe8/0x140
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xb0/0xb4
 el0_sync+0x168/0x180

Freed by task 1748:
 kasan_save_stack+0x24/0x50
 kasan_set_track+0x24/0x34
 kasan_set_free_info+0x24/0x4c
 __kasan_slab_free+0xf8/0x180
 kasan_slab_free+0x10/0x20
 kfree+0xe0/0x6e0
 sched_grid_qos_free+0x3c/0x90
 free_task+0xc4/0x164
 __delayed_free_task+0x18/0x3c
 rcu_do_batch+0x2ec/0x9f0
 rcu_core+0x34c/0x530
 rcu_core_si+0x14/0x30
 __do_softirq+0x284/0x900

Fixes: 1a553561230a ("sched: smart grid: init sched_grid_qos structure on QOS purpose")
Signed-off-by: Yipeng Zou <zouyipeng(a)huawei.com>
---
 kernel/fork.c           | 4 ++++
 kernel/sched/grid/qos.c | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/kernel/fork.c b/kernel/fork.c
index 97a89ab68a26..a8a30a21799a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1192,6 +1192,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	tsk->prefer_cpus = NULL;
 #endif
 
+#ifdef CONFIG_QOS_SCHED_SMART_GRID
+	tsk->grid_qos = NULL;
+#endif
+
 	setup_thread_stack(tsk, orig);
 	clear_user_return_notifier(tsk);
 	clear_tsk_need_resched(tsk);
diff --git a/kernel/sched/grid/qos.c b/kernel/sched/grid/qos.c
index 7c4cb867b60b..e1504170cc6c 100644
--- a/kernel/sched/grid/qos.c
+++ b/kernel/sched/grid/qos.c
@@ -71,6 +71,9 @@ int sched_grid_qos_fork(struct task_struct *p, struct task_struct *orig)
 
 void sched_grid_qos_free(struct task_struct *p)
 {
+	if (!p->grid_qos)
+		return;
+
 	kfree(p->grid_qos);
 	p->grid_qos = NULL;
 }
-- 
2.34.1
[PATCH openEuler-1.0-LTS] sched: smart_grid: Prevent double-free in sched_grid_qos_free
by Yipeng Zou 12 Nov '24

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IB3N2A

----------------------------------------

KASAN detected a double-free bug in the smart grid. This issue arises from the uninitialized use of p->grid_qos within the task {fork,free} processes. The sequence of events leading to the double-free is as follows:

CPU0                                   CPU1
fork (in some error process)
  goto bad_fork_free
  call_rcu(__delayed_free_task)
                                       __delayed_free_task
                                         sched_grid_qos_free
release_task
  delayed_put_task_struct
    __put_task_struct
      sched_grid_qos_free (double free)

When copy_process returns with an error, grid_qos is double-freed. To address this, grid_qos is initialized to NULL in dup_task_struct, and a NULL check is added for p->grid_qos before freeing.

Bug report details:

==================================================================
BUG: KASAN: double-free or invalid-free in sched_grid_qos_free+0x3c/0x90
CPU: 343 PID: 0 Comm: swapper/343 Kdump: loaded Tainted: G B E
Call trace:
 dump_backtrace+0x0/0x3e0
 show_stack+0x1c/0x28
 dump_stack+0x13c/0x190
 print_address_description.constprop.0+0x28/0x1f0
 kasan_report_invalid_free+0x44/0x6c
 __kasan_slab_free+0x158/0x180
 kasan_slab_free+0x10/0x20
 kfree+0xe0/0x6e0
 sched_grid_qos_free+0x3c/0x90
 free_task+0xc4/0x164
 __put_task_struct+0x264/0x31c
 delayed_put_task_struct+0x94/0x180
 rcu_do_batch+0x2ec/0x9f0
 rcu_core+0x34c/0x530
 rcu_core_si+0x14/0x30
 __do_softirq+0x284/0x900
 irq_exit+0x2d4/0x35c
 __handle_domain_irq+0x108/0x1f0
 gic_handle_irq+0x74/0x620
 el1_irq+0xbc/0x140
 arch_cpu_idle+0x14/0x3c
 default_idle_call+0x80/0x320
 cpuidle_idle_call+0x244/0x2b0
 do_idle+0x138/0x260
 cpu_startup_entry+0x2c/0x70
 secondary_start_kernel+0x35c/0x4e4

Allocated by task 44027:
 kasan_save_stack+0x24/0x50
 __kasan_kmalloc.constprop.0+0xa0/0xcc
 kasan_kmalloc+0xc/0x14
 kmem_cache_alloc_trace+0xdc/0x5d0
 sched_grid_qos_fork+0x50/0x20c
 copy_process+0x8fc/0x3f60
 kernel_clone+0x12c/0x660
 __se_sys_clone+0xc0/0x110
 __arm64_sys_clone+0xa8/0x110
 invoke_syscall+0x70/0x274
 el0_svc_common.constprop.0+0x1fc/0x2dc
 do_el0_svc+0xe8/0x140
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xb0/0xb4
 el0_sync+0x168/0x180

Freed by task 1748:
 kasan_save_stack+0x24/0x50
 kasan_set_track+0x24/0x34
 kasan_set_free_info+0x24/0x4c
 __kasan_slab_free+0xf8/0x180
 kasan_slab_free+0x10/0x20
 kfree+0xe0/0x6e0
 sched_grid_qos_free+0x3c/0x90
 free_task+0xc4/0x164
 __delayed_free_task+0x18/0x3c
 rcu_do_batch+0x2ec/0x9f0
 rcu_core+0x34c/0x530
 rcu_core_si+0x14/0x30
 __do_softirq+0x284/0x900

Fixes: ce35ded5d577 ("sched: smart grid: init sched_grid_qos structure on QOS purpose")
Signed-off-by: Yipeng Zou <zouyipeng(a)huawei.com>
---
 kernel/fork.c           | 4 ++++
 kernel/sched/grid/qos.c | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/kernel/fork.c b/kernel/fork.c
index 02b676d10054..54c9b8841e00 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -919,6 +919,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	tsk->se.dyn_affi_stats = NULL;
 #endif
 
+#ifdef CONFIG_QOS_SCHED_SMART_GRID
+	tsk->grid_qos = NULL;
+#endif
+
 	setup_thread_stack(tsk, orig);
 	clear_user_return_notifier(tsk);
 	clear_tsk_need_resched(tsk);
diff --git a/kernel/sched/grid/qos.c b/kernel/sched/grid/qos.c
index b3df69d91499..d6c8525fc16f 100644
--- a/kernel/sched/grid/qos.c
+++ b/kernel/sched/grid/qos.c
@@ -68,6 +68,9 @@ int sched_grid_qos_fork(struct task_struct *p, struct task_struct *orig)
 
 void sched_grid_qos_free(struct task_struct *p)
 {
+	if (!p->grid_qos)
+		return;
+
 	kfree(p->_resvd->grid_qos);
 	p->_resvd->grid_qos = NULL;
 }
-- 
2.34.1
[PATCH OLK-6.6] sched: smart_grid: Prevent double-free in sched_grid_qos_free
by Yipeng Zou 12 Nov '24

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8WMOG
CVE: NA

----------------------------------------

KASAN detected a double-free bug in the smart grid. This issue arises from the uninitialized use of p->grid_qos within the task {fork,free} processes. The sequence of events leading to the double-free is as follows:

CPU0                                   CPU1
fork (in some error process)
  goto bad_fork_free
  call_rcu(__delayed_free_task)
                                       __delayed_free_task
                                         sched_grid_qos_free
release_task
  delayed_put_task_struct
    __put_task_struct
      sched_grid_qos_free (double free)

When copy_process returns with an error, grid_qos is double-freed. To address this, grid_qos is initialized to NULL in dup_task_struct, and a NULL check is added for p->grid_qos before freeing.

Bug report details:

==================================================================
BUG: KASAN: double-free or invalid-free in sched_grid_qos_free+0x3c/0x90
CPU: 343 PID: 0 Comm: swapper/343 Kdump: loaded Tainted: G B E
Call trace:
 dump_backtrace+0x0/0x3e0
 show_stack+0x1c/0x28
 dump_stack+0x13c/0x190
 print_address_description.constprop.0+0x28/0x1f0
 kasan_report_invalid_free+0x44/0x6c
 __kasan_slab_free+0x158/0x180
 kasan_slab_free+0x10/0x20
 kfree+0xe0/0x6e0
 sched_grid_qos_free+0x3c/0x90
 free_task+0xc4/0x164
 __put_task_struct+0x264/0x31c
 delayed_put_task_struct+0x94/0x180
 rcu_do_batch+0x2ec/0x9f0
 rcu_core+0x34c/0x530
 rcu_core_si+0x14/0x30
 __do_softirq+0x284/0x900
 irq_exit+0x2d4/0x35c
 __handle_domain_irq+0x108/0x1f0
 gic_handle_irq+0x74/0x620
 el1_irq+0xbc/0x140
 arch_cpu_idle+0x14/0x3c
 default_idle_call+0x80/0x320
 cpuidle_idle_call+0x244/0x2b0
 do_idle+0x138/0x260
 cpu_startup_entry+0x2c/0x70
 secondary_start_kernel+0x35c/0x4e4

Allocated by task 44027:
 kasan_save_stack+0x24/0x50
 __kasan_kmalloc.constprop.0+0xa0/0xcc
 kasan_kmalloc+0xc/0x14
 kmem_cache_alloc_trace+0xdc/0x5d0
 sched_grid_qos_fork+0x50/0x20c
 copy_process+0x8fc/0x3f60
 kernel_clone+0x12c/0x660
 __se_sys_clone+0xc0/0x110
 __arm64_sys_clone+0xa8/0x110
 invoke_syscall+0x70/0x274
 el0_svc_common.constprop.0+0x1fc/0x2dc
 do_el0_svc+0xe8/0x140
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xb0/0xb4
 el0_sync+0x168/0x180

Freed by task 1748:
 kasan_save_stack+0x24/0x50
 kasan_set_track+0x24/0x34
 kasan_set_free_info+0x24/0x4c
 __kasan_slab_free+0xf8/0x180
 kasan_slab_free+0x10/0x20
 kfree+0xe0/0x6e0
 sched_grid_qos_free+0x3c/0x90
 free_task+0xc4/0x164
 __delayed_free_task+0x18/0x3c
 rcu_do_batch+0x2ec/0x9f0
 rcu_core+0x34c/0x530
 rcu_core_si+0x14/0x30
 __do_softirq+0x284/0x900

Fixes: 1a553561230a ("sched: smart grid: init sched_grid_qos structure on QOS purpose")
Signed-off-by: Yipeng Zou <zouyipeng(a)huawei.com>
---
 kernel/fork.c           | 4 ++++
 kernel/sched/grid/qos.c | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/kernel/fork.c b/kernel/fork.c
index 97a89ab68a26..a8a30a21799a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1192,6 +1192,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	tsk->prefer_cpus = NULL;
 #endif
 
+#ifdef CONFIG_QOS_SCHED_SMART_GRID
+	tsk->grid_qos = NULL;
+#endif
+
 	setup_thread_stack(tsk, orig);
 	clear_user_return_notifier(tsk);
 	clear_tsk_need_resched(tsk);
diff --git a/kernel/sched/grid/qos.c b/kernel/sched/grid/qos.c
index 7c4cb867b60b..e1504170cc6c 100644
--- a/kernel/sched/grid/qos.c
+++ b/kernel/sched/grid/qos.c
@@ -71,6 +71,9 @@ int sched_grid_qos_fork(struct task_struct *p, struct task_struct *orig)
 
 void sched_grid_qos_free(struct task_struct *p)
 {
+	if (!p->grid_qos)
+		return;
+
 	kfree(p->grid_qos);
 	p->grid_qos = NULL;
 }
-- 
2.34.1
[PATCH OLK-5.10] sched: smart_grid: Prevent double-free in sched_grid_qos_free
by Yipeng Zou 12 Nov '24

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I7BQZ0
CVE: NA

----------------------------------------

KASAN detected a double-free bug in the smart grid. This issue arises from the uninitialized use of p->grid_qos within the task {fork,free} processes. The sequence of events leading to the double-free is as follows:

CPU0                                   CPU1
fork (in some error process)
  goto bad_fork_free
  call_rcu(__delayed_free_task)
                                       __delayed_free_task
                                         sched_grid_qos_free
release_task
  delayed_put_task_struct
    __put_task_struct
      sched_grid_qos_free (double free)

When copy_process returns with an error, grid_qos is double-freed. To address this, grid_qos is initialized to NULL in dup_task_struct, and a NULL check is added for p->grid_qos before freeing.

Bug report details:

==================================================================
BUG: KASAN: double-free or invalid-free in sched_grid_qos_free+0x3c/0x90
CPU: 343 PID: 0 Comm: swapper/343 Kdump: loaded Tainted: G B E
Call trace:
 dump_backtrace+0x0/0x3e0
 show_stack+0x1c/0x28
 dump_stack+0x13c/0x190
 print_address_description.constprop.0+0x28/0x1f0
 kasan_report_invalid_free+0x44/0x6c
 __kasan_slab_free+0x158/0x180
 kasan_slab_free+0x10/0x20
 kfree+0xe0/0x6e0
 sched_grid_qos_free+0x3c/0x90
 free_task+0xc4/0x164
 __put_task_struct+0x264/0x31c
 delayed_put_task_struct+0x94/0x180
 rcu_do_batch+0x2ec/0x9f0
 rcu_core+0x34c/0x530
 rcu_core_si+0x14/0x30
 __do_softirq+0x284/0x900
 irq_exit+0x2d4/0x35c
 __handle_domain_irq+0x108/0x1f0
 gic_handle_irq+0x74/0x620
 el1_irq+0xbc/0x140
 arch_cpu_idle+0x14/0x3c
 default_idle_call+0x80/0x320
 cpuidle_idle_call+0x244/0x2b0
 do_idle+0x138/0x260
 cpu_startup_entry+0x2c/0x70
 secondary_start_kernel+0x35c/0x4e4

Allocated by task 44027:
 kasan_save_stack+0x24/0x50
 __kasan_kmalloc.constprop.0+0xa0/0xcc
 kasan_kmalloc+0xc/0x14
 kmem_cache_alloc_trace+0xdc/0x5d0
 sched_grid_qos_fork+0x50/0x20c
 copy_process+0x8fc/0x3f60
 kernel_clone+0x12c/0x660
 __se_sys_clone+0xc0/0x110
 __arm64_sys_clone+0xa8/0x110
 invoke_syscall+0x70/0x274
 el0_svc_common.constprop.0+0x1fc/0x2dc
 do_el0_svc+0xe8/0x140
 el0_svc+0x1c/0x2c
 el0_sync_handler+0xb0/0xb4
 el0_sync+0x168/0x180

Freed by task 1748:
 kasan_save_stack+0x24/0x50
 kasan_set_track+0x24/0x34
 kasan_set_free_info+0x24/0x4c
 __kasan_slab_free+0xf8/0x180
 kasan_slab_free+0x10/0x20
 kfree+0xe0/0x6e0
 sched_grid_qos_free+0x3c/0x90
 free_task+0xc4/0x164
 __delayed_free_task+0x18/0x3c
 rcu_do_batch+0x2ec/0x9f0
 rcu_core+0x34c/0x530
 rcu_core_si+0x14/0x30
 __do_softirq+0x284/0x900

Fixes: 700bfc4068cf ("sched: smart grid: init sched_grid_qos structure on QOS purpose")
Signed-off-by: Yipeng Zou <zouyipeng(a)huawei.com>
---
 kernel/fork.c           | 4 ++++
 kernel/sched/grid/qos.c | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/kernel/fork.c b/kernel/fork.c
index dcf1f9c655d8..9b1ea79deaa5 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -953,6 +953,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	tsk->prefer_cpus = NULL;
 #endif
 
+#ifdef CONFIG_QOS_SCHED_SMART_GRID
+	tsk->grid_qos = NULL;
+#endif
+
 #ifdef CONFIG_SCHED_TASK_RELATIONSHIP
 	tsk->rship = NULL;
 #endif
diff --git a/kernel/sched/grid/qos.c b/kernel/sched/grid/qos.c
index 7ee3687347ce..60b1ff843bbb 100644
--- a/kernel/sched/grid/qos.c
+++ b/kernel/sched/grid/qos.c
@@ -70,6 +70,9 @@ int sched_grid_qos_fork(struct task_struct *p, struct task_struct *orig)
 
 void sched_grid_qos_free(struct task_struct *p)
 {
+	if (!p->grid_qos)
+		return;
+
 	kfree(p->grid_qos);
 	p->grid_qos = NULL;
 }
-- 
2.34.1
[PATCH openEuler-22.03-LTS-SP1 V1] drm/amd/display: Add null check for pipe_ctx->plane_state in dcn20_program_pipe
by Zicheng Qu 12 Nov '24

From: Srinivasan Shanmugam <srinivasan.shanmugam(a)amd.com>

stable inclusion
from stable-v6.11.3
commit 65a6fee22d5cfa645cb05489892dc9cd3d142fc2
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IAYRAY
CVE: CVE-2024-49914

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

[ Upstream commit 8e4ed3cf1642df0c4456443d865cff61a9598aa8 ]

This commit addresses a null pointer dereference issue in the
`dcn20_program_pipe` function. The issue could occur when
`pipe_ctx->plane_state` is null. The fix adds a check to ensure
`pipe_ctx->plane_state` is not null before accessing it, which prevents
a null pointer dereference.

Reported by smatch:
drivers/gpu/drm/amd/amdgpu/../display/dc/hwss/dcn20/dcn20_hwseq.c:1925
dcn20_program_pipe() error: we previously assumed 'pipe_ctx->plane_state'
could be null (see line 1877)

Cc: Tom Chung <chiahsuan.chung(a)amd.com>
Cc: Rodrigo Siqueira <Rodrigo.Siqueira(a)amd.com>
Cc: Roman Li <roman.li(a)amd.com>
Cc: Alex Hung <alex.hung(a)amd.com>
Cc: Aurabindo Pillai <aurabindo.pillai(a)amd.com>
Cc: Harry Wentland <harry.wentland(a)amd.com>
Cc: Hamza Mahfooz <hamza.mahfooz(a)amd.com>
Signed-off-by: Srinivasan Shanmugam <srinivasan.shanmugam(a)amd.com>
Reviewed-by: Tom Chung <chiahsuan.chung(a)amd.com>
Signed-off-by: Alex Deucher <alexander.deucher(a)amd.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>

Conflicts:
	drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
	drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
[The stable version 5.10 is missing patch
65a6fee22d5cfa645cb05489892dc9cd3d142fc2, which was pulled from 6.11.
Manually removed unnecessary code, such as
if (pipe_ctx->update_flags.bits.det_size),
if (hws->funcs.populate_mcm_luts),
if (pipe_ctx->update_flags.bits.enable), and
if ((pipe_ctx->plane_state && pipe_ctx->plane_state->visible)).
The purpose of this patch is to check that pipe_ctx->plane_state is not
null before using its properties. Other extraneous code from higher
versions is unrelated to the current patch and has been removed.]
Signed-off-by: Zicheng Qu <quzicheng(a)huawei.com>
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index 0adbcfc5e222..8d6a5b45b688 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -1601,16 +1601,20 @@ static void dcn20_program_pipe(
 		dc->res_pool->hubbub->funcs->force_wm_propagate_to_pipes(dc->res_pool->hubbub);
 	}
 
-	if (pipe_ctx->update_flags.raw || pipe_ctx->plane_state->update_flags.raw || pipe_ctx->stream->update_flags.raw)
+	if (pipe_ctx->update_flags.raw ||
+	    (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.raw) ||
+	    pipe_ctx->stream->update_flags.raw)
 		dcn20_update_dchubp_dpp(dc, pipe_ctx, context);
 
-	if (pipe_ctx->update_flags.bits.enable
-			|| pipe_ctx->plane_state->update_flags.bits.hdr_mult)
+	if (pipe_ctx->update_flags.bits.enable ||
+	    (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.bits.hdr_mult))
 		hws->funcs.set_hdr_multiplier(pipe_ctx);
 
 	if (pipe_ctx->update_flags.bits.enable ||
-		pipe_ctx->plane_state->update_flags.bits.in_transfer_func_change ||
-		pipe_ctx->plane_state->update_flags.bits.gamma_change)
+		(pipe_ctx->plane_state &&
+		 pipe_ctx->plane_state->update_flags.bits.in_transfer_func_change) ||
+		(pipe_ctx->plane_state &&
+		 pipe_ctx->plane_state->update_flags.bits.gamma_change))
 		hws->funcs.set_input_transfer_func(dc, pipe_ctx, pipe_ctx->plane_state);
 
 	/* dcn10_translate_regamma_to_hw_format takes 750us to finish
-- 
2.34.1
