mailweb.openeuler.org


Kernel

kernel@openeuler.org

  • 44 participants
  • 22884 discussions
[PATCH openEuler-1.0-LTS] bpf, cpumap: Make sure kthread is running before map update returns
by Pu Lehui 27 Feb '26

From: Hou Tao <houtao1(a)huawei.com>

mainline inclusion
from mainline-v6.5-rc5
commit 640a604585aa30f93e39b17d4d6ba69fcb1e66c9
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/8156
CVE: CVE-2023-53577
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

The following warning was reported when running stress-mode enabled
xdp_redirect_cpu with some RT threads:

  ------------[ cut here ]------------
  WARNING: CPU: 4 PID: 65 at kernel/bpf/cpumap.c:135
  CPU: 4 PID: 65 Comm: kworker/4:1 Not tainted 6.5.0-rc2+ #1
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
  Workqueue: events cpu_map_kthread_stop
  RIP: 0010:put_cpu_map_entry+0xda/0x220
  ......
  Call Trace:
   <TASK>
   ? show_regs+0x65/0x70
   ? __warn+0xa5/0x240
   ......
   ? put_cpu_map_entry+0xda/0x220
   cpu_map_kthread_stop+0x41/0x60
   process_one_work+0x6b0/0xb80
   worker_thread+0x96/0x720
   kthread+0x1a5/0x1f0
   ret_from_fork+0x3a/0x70
   ret_from_fork_asm+0x1b/0x30
   </TASK>

The root cause is the same as commit 436901649731 ("bpf: cpumap: Fix
memory leak in cpu_map_update_elem"). The kthread is stopped prematurely
by kthread_stop() in cpu_map_kthread_stop(), and kthread() doesn't call
cpu_map_kthread_run() at all, but the XDP program has already queued some
frames or skbs into the ptr_ring. So when __cpu_map_ring_cleanup() checks
the ptr_ring, it finds it was not emptied and reports a warning.

An alternative fix is to use __cpu_map_ring_cleanup() to drop these
pending frames or skbs when kthread_stop() returns -EINTR, but that may
confuse the user, because these frames or skbs have been handled correctly
by the XDP program. So instead of dropping these frames or skbs, just make
sure the per-cpu kthread is running before __cpu_map_entry_alloc()
returns. After applying the fix, the error handling for kthread_stop()
becomes unnecessary because it will always return 0, so just remove it.

Fixes: 6710e1126934 ("bpf: introduce new bpf cpu map type BPF_MAP_TYPE_CPUMAP")
Signed-off-by: Hou Tao <houtao1(a)huawei.com>
Reviewed-by: Pu Lehui <pulehui(a)huawei.com>
Acked-by: Jesper Dangaard Brouer <hawk(a)kernel.org>
Link: https://lore.kernel.org/r/20230729095107.1722450-2-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau <martin.lau(a)kernel.org>
Conflicts:
        kernel/bpf/cpumap.c
[ctx conflicts]
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
 kernel/bpf/cpumap.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 31faabe4cd26..7d8646688f4e 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -25,6 +25,7 @@
 #include <linux/workqueue.h>
 #include <linux/kthread.h>
 #include <linux/capability.h>
+#include <linux/completion.h>
 #include <trace/events/xdp.h>

 #include <linux/netdevice.h>   /* netif_receive_skb_core */
@@ -56,6 +57,7 @@ struct bpf_cpu_map_entry {
        struct ptr_ring *queue;
        struct task_struct *kthread;
        struct work_struct kthread_stop_wq;
+       struct completion kthread_running;

        atomic_t refcnt; /* Control when this struct can be free'ed */
        struct rcu_head rcu;
@@ -228,7 +230,6 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
 static void cpu_map_kthread_stop(struct work_struct *work)
 {
        struct bpf_cpu_map_entry *rcpu;
-       int err;

        rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq);

@@ -238,20 +239,14 @@ static void cpu_map_kthread_stop(struct work_struct *work)
        rcu_barrier();

        /* kthread_stop will wake_up_process and wait for it to complete */
-       err = kthread_stop(rcpu->kthread);
-       if (err) {
-               /* kthread_stop may be called before cpu_map_kthread_run
-                * is executed, so we need to release the memory related
-                * to rcpu.
-                */
-               put_cpu_map_entry(rcpu);
-       }
+       kthread_stop(rcpu->kthread);
 }

 static int cpu_map_kthread_run(void *data)
 {
        struct bpf_cpu_map_entry *rcpu = data;

+       complete(&rcpu->kthread_running);
        set_current_state(TASK_INTERRUPTIBLE);

        /* When kthread gives stop order, then rcpu have been disconnected
@@ -348,6 +343,7 @@ static struct bpf_cpu_map_entry *__cpu_map_entry_alloc(u32 qsize, u32 cpu,
        rcpu->qsize = qsize;

        /* Setup kthread */
+       init_completion(&rcpu->kthread_running);
        rcpu->kthread = kthread_create_on_node(cpu_map_kthread_run, rcpu, numa,
                                               "cpumap/%d/map:%d", cpu, map_id);
        if (IS_ERR(rcpu->kthread))
@@ -360,6 +356,12 @@ static struct bpf_cpu_map_entry *__cpu_map_entry_alloc(u32 qsize, u32 cpu,
        kthread_bind(rcpu->kthread, cpu);
        wake_up_process(rcpu->kthread);

+       /* Make sure kthread has been running, so kthread_stop() will not
+        * stop the kthread prematurely and all pending frames or skbs
+        * will be handled by the kthread before kthread_stop() returns.
+        */
+       wait_for_completion(&rcpu->kthread_running);
+
        return rcpu;

 free_ptr_ring:
-- 
2.34.1
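The heart of this patch is a startup handshake: the creator blocks on a completion until the new kthread has actually begun executing, so a later kthread_stop() can never fire before the worker's run function starts. Below is a minimal userspace sketch of that handshake, using pthreads in place of kthreads and a mutex/condvar pair in place of struct completion; all names (`entry_alloc`, `worker_started`, etc.) are illustrative stand-ins, not kernel API.

```c
#include <pthread.h>
#include <stdbool.h>

/* Userspace stand-in for struct completion. */
struct completion {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        bool done;
};

static void init_completion(struct completion *c)
{
        pthread_mutex_init(&c->lock, NULL);
        pthread_cond_init(&c->cond, NULL);
        c->done = false;
}

static void complete(struct completion *c)
{
        pthread_mutex_lock(&c->lock);
        c->done = true;
        pthread_cond_signal(&c->cond);
        pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion *c)
{
        pthread_mutex_lock(&c->lock);
        while (!c->done)        /* predicate loop guards against spurious wakeups */
                pthread_cond_wait(&c->cond, &c->lock);
        pthread_mutex_unlock(&c->lock);
}

struct entry {
        struct completion running;
        int worker_started;     /* set by the worker, like cpu_map_kthread_run() would */
};

static void *worker(void *data)
{
        struct entry *e = data;

        e->worker_started = 1;
        complete(&e->running);  /* mirrors complete(&rcpu->kthread_running) */
        return NULL;
}

/* Mirrors __cpu_map_entry_alloc(): don't return until the worker runs. */
int entry_alloc(struct entry *e)
{
        pthread_t tid;

        init_completion(&e->running);
        if (pthread_create(&tid, NULL, worker, e))
                return -1;
        wait_for_completion(&e->running); /* worker is guaranteed running here */
        pthread_join(&tid, NULL);
        return 0;
}
```

Because `entry_alloc()` only returns after the wait, any teardown path running afterwards sees a worker that has definitely entered its run function — the same guarantee the patch gives kthread_stop().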
[PATCH OLK-5.10] bpf, cpumap: Make sure kthread is running before map update returns
by Pu Lehui 27 Feb '26

From: Hou Tao <houtao1(a)huawei.com>

mainline inclusion
from mainline-v6.5-rc5
commit 640a604585aa30f93e39b17d4d6ba69fcb1e66c9
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/8156
CVE: CVE-2023-53577
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

The following warning was reported when running stress-mode enabled
xdp_redirect_cpu with some RT threads:

  ------------[ cut here ]------------
  WARNING: CPU: 4 PID: 65 at kernel/bpf/cpumap.c:135
  CPU: 4 PID: 65 Comm: kworker/4:1 Not tainted 6.5.0-rc2+ #1
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
  Workqueue: events cpu_map_kthread_stop
  RIP: 0010:put_cpu_map_entry+0xda/0x220
  ......
  Call Trace:
   <TASK>
   ? show_regs+0x65/0x70
   ? __warn+0xa5/0x240
   ......
   ? put_cpu_map_entry+0xda/0x220
   cpu_map_kthread_stop+0x41/0x60
   process_one_work+0x6b0/0xb80
   worker_thread+0x96/0x720
   kthread+0x1a5/0x1f0
   ret_from_fork+0x3a/0x70
   ret_from_fork_asm+0x1b/0x30
   </TASK>

The root cause is the same as commit 436901649731 ("bpf: cpumap: Fix
memory leak in cpu_map_update_elem"). The kthread is stopped prematurely
by kthread_stop() in cpu_map_kthread_stop(), and kthread() doesn't call
cpu_map_kthread_run() at all, but the XDP program has already queued some
frames or skbs into the ptr_ring. So when __cpu_map_ring_cleanup() checks
the ptr_ring, it finds it was not emptied and reports a warning.

An alternative fix is to use __cpu_map_ring_cleanup() to drop these
pending frames or skbs when kthread_stop() returns -EINTR, but that may
confuse the user, because these frames or skbs have been handled correctly
by the XDP program. So instead of dropping these frames or skbs, just make
sure the per-cpu kthread is running before __cpu_map_entry_alloc()
returns. After applying the fix, the error handling for kthread_stop()
becomes unnecessary because it will always return 0, so just remove it.

Fixes: 6710e1126934 ("bpf: introduce new bpf cpu map type BPF_MAP_TYPE_CPUMAP")
Signed-off-by: Hou Tao <houtao1(a)huawei.com>
Reviewed-by: Pu Lehui <pulehui(a)huawei.com>
Acked-by: Jesper Dangaard Brouer <hawk(a)kernel.org>
Link: https://lore.kernel.org/r/20230729095107.1722450-2-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau <martin.lau(a)kernel.org>
Conflicts:
        kernel/bpf/cpumap.c
[ctx conflicts]
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
 kernel/bpf/cpumap.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 918202cdff16..1c4c2312bf4d 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -25,6 +25,7 @@
 #include <linux/workqueue.h>
 #include <linux/kthread.h>
 #include <linux/capability.h>
+#include <linux/completion.h>
 #include <trace/events/xdp.h>

 #include <linux/netdevice.h>   /* netif_receive_skb_core */
@@ -69,6 +70,7 @@ struct bpf_cpu_map_entry {
        struct rcu_head rcu;

        struct work_struct kthread_stop_wq;
+       struct completion kthread_running;
 };

 struct bpf_cpu_map {
@@ -213,7 +215,6 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
 static void cpu_map_kthread_stop(struct work_struct *work)
 {
        struct bpf_cpu_map_entry *rcpu;
-       int err;

        rcpu = container_of(work, struct bpf_cpu_map_entry, kthread_stop_wq);

@@ -223,14 +224,7 @@ static void cpu_map_kthread_stop(struct work_struct *work)
        rcu_barrier();

        /* kthread_stop will wake_up_process and wait for it to complete */
-       err = kthread_stop(rcpu->kthread);
-       if (err) {
-               /* kthread_stop may be called before cpu_map_kthread_run
-                * is executed, so we need to release the memory related
-                * to rcpu.
-                */
-               put_cpu_map_entry(rcpu);
-       }
+       kthread_stop(rcpu->kthread);
 }

 static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
@@ -308,6 +302,7 @@ static int cpu_map_kthread_run(void *data)
 {
        struct bpf_cpu_map_entry *rcpu = data;

+       complete(&rcpu->kthread_running);
        set_current_state(TASK_INTERRUPTIBLE);

        /* When kthread gives stop order, then rcpu have been disconnected
@@ -464,6 +459,7 @@ __cpu_map_entry_alloc(struct bpf_map *map, struct bpf_cpumap_val *value, u32 cpu
                goto free_ptr_ring;

        /* Setup kthread */
+       init_completion(&rcpu->kthread_running);
        rcpu->kthread = kthread_create_on_node(cpu_map_kthread_run, rcpu, numa,
                                               "cpumap/%d/map:%d", cpu, map->id);
        if (IS_ERR(rcpu->kthread))
@@ -476,6 +472,12 @@ __cpu_map_entry_alloc(struct bpf_map *map, struct bpf_cpumap_val *value, u32 cpu
        kthread_bind(rcpu->kthread, cpu);
        wake_up_process(rcpu->kthread);

+       /* Make sure kthread has been running, so kthread_stop() will not
+        * stop the kthread prematurely and all pending frames or skbs
+        * will be handled by the kthread before kthread_stop() returns.
+        */
+       wait_for_completion(&rcpu->kthread_running);
+
        return rcpu;

 free_prog:
-- 
2.34.1
[PATCH OLK-6.6] ext4: fix iloc.bh leak in ext4_xattr_inode_update_ref
by Yongjian Sun 26 Feb '26

From: Yang Erkun <yangerkun(a)huawei.com>

mainline inclusion
from mainline-v6.19-rc6
commit d250bdf531d9cd4096fedbb9f172bb2ca660c868
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13461
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

The error branch in ext4_xattr_inode_update_ref forgets to release the
reference on iloc.bh. Found this during code review.

Fixes: 57295e835408 ("ext4: guard against EA inode refcount underflow in xattr update")
Signed-off-by: Yang Erkun <yangerkun(a)huawei.com>
Reviewed-by: Baokun Li <libaokun1(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Link: https://patch.msgid.link/20251213055706.3417529-1-yangerkun@huawei.com
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Cc: stable(a)kernel.org
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 fs/ext4/xattr.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
index cd906aa08afa..96810af688f5 100644
--- a/fs/ext4/xattr.c
+++ b/fs/ext4/xattr.c
@@ -1047,6 +1047,7 @@ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
                ext4_error_inode(ea_inode, __func__, __LINE__, 0,
                        "EA inode %lu ref wraparound: ref_count=%lld ref_change=%d",
                        ea_inode->i_ino, ref_count, ref_change);
+               brelse(iloc.bh);
                ret = -EFSCORRUPTED;
                goto out;
        }
-- 
2.39.2
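The bug class here is a resource acquired early in a function but not released on a newly added error branch. A toy sketch of the invariant the fix restores — every exit after the reference is taken must drop it, including the corruption branch. The refcounting helpers below are illustrative stand-ins, not ext4's buffer-head API.

```c
/* Toy stand-ins for buffer-head refcounting: get_bh() takes a
 * reference, brelse() drops it. A global counter tracks leaks. */
static int bh_refs;

static void *get_bh(void)     { bh_refs++; return &bh_refs; }
static void  brelse(void *bh) { (void)bh; bh_refs--; }

/* Mirrors the shape of ext4_xattr_inode_update_ref(): the wraparound
 * branch must drop the reference before bailing out. */
int update_ref(long long ref_count, int ref_change)
{
        void *bh = get_bh();

        if (ref_count + ref_change < 0) {  /* refcount wraparound detected */
                brelse(bh);                /* the one line the fix adds */
                return -1;                 /* -EFSCORRUPTED in the patch */
        }

        /* ... would update and write back the refcount here ... */
        brelse(bh);
        return 0;
}
```

With the release in place, the error path leaves `bh_refs` balanced exactly like the success path.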
[PATCH v7 0/5] arm64/riscv: Add support for crashkernel CMA reservation
by Jinjie Ruan 26 Feb '26

The crash memory allocation, and the exclusion of crashk_res,
crashk_low_res and crashk_cma memory, are almost identical across
different architectures. This patch set handles them in the crash core in
a generic way, which eliminates a lot of duplicated code, and adds support
for crashkernel CMA reservation on arm64 and riscv.

Rebased on v7.0-rc1.

Basic tests were performed on QEMU platforms for the x86, ARM64, and
RISC-V architectures with the following parameters:
"cma=256M crashkernel=256M crashkernel=64M,cma"

Changes in v7:
- Correct the inclusion of CMA-reserved ranges for the kdump kernel in
  of/kexec for arm64 and riscv.
- Add Acked-by.

Changes in v6:
- Update the crash core exclude code as Mike suggested.
- Rebased on v7.0-rc1.
- Add Acked-by.
- Link to v5: https://lore.kernel.org/all/20260212101001.343158-1-ruanjinjie@huawei.com/

Changes in v5:
- Fix the kernel test robot build warnings.
- Sort crash memory ranges before preparing elfcorehdr for powerpc.
- Link to v4: https://lore.kernel.org/all/20260209095931.2813152-1-ruanjinjie@huawei.com/

Changes in v4:
- Move the size calculation (and the realloc if needed) into the generic
  crash core.
- Link to v3: https://lore.kernel.org/all/20260204093728.1447527-1-ruanjinjie@huawei.com/

Jinjie Ruan (4):
  crash: Exclude crash kernel memory in crash core
  crash: Use crash_exclude_core_ranges() on powerpc
  arm64: kexec: Add support for crashkernel CMA reservation
  riscv: kexec: Add support for crashkernel CMA reservation

Sourabh Jain (1):
  powerpc/crash: sort crash memory ranges before preparing elfcorehdr

 .../admin-guide/kernel-parameters.txt      |  16 +--
 arch/arm64/kernel/machine_kexec_file.c     |  39 +++----
 arch/arm64/mm/init.c                       |   5 +-
 arch/loongarch/kernel/machine_kexec_file.c |  39 +++----
 arch/powerpc/include/asm/kexec_ranges.h    |   4 +-
 arch/powerpc/kexec/crash.c                 |   5 +-
 arch/powerpc/kexec/ranges.c                | 101 +----------------
 arch/riscv/kernel/machine_kexec_file.c     |  38 +++----
 arch/riscv/mm/init.c                       |   5 +-
 arch/x86/kernel/crash.c                    |  89 +++-------------
 drivers/of/kexec.c                         |   9 ++
 include/linux/crash_core.h                 |   9 ++
 kernel/crash_core.c                        |  89 ++++++++++++++-
 13 files changed, 176 insertions(+), 272 deletions(-)

-- 
2.34.1
[PATCH OLK-6.6] sched/soft_domain: Increasing NR_MAX_CLUSTER to 32
by Zhang Qiao 25 Feb '26

hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8540

--------------------------------

Bump the maximum number of clusters per node from 16 to 32 to resolve an
out-of-bounds array overflow.

Fixes: 645a1ba256ef ("sched: topology: Build soft domain for LLC")
Signed-off-by: Zhang Qiao <zhangqiao22(a)huawei.com>
---
 kernel/sched/soft_domain.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/soft_domain.c b/kernel/sched/soft_domain.c
index 3c52680ca82f..7b569fdb2c6e 100644
--- a/kernel/sched/soft_domain.c
+++ b/kernel/sched/soft_domain.c
@@ -17,6 +17,8 @@

 #include <linux/sort.h>

+#define NR_MAX_CLUSTER 32
+
 static DEFINE_STATIC_KEY_TRUE(__soft_domain_switch);

 static int __init soft_domain_switch_setup(char *str)
@@ -50,6 +52,7 @@ static int build_soft_sub_domain(int nid, struct cpumask *cpus)
        const struct cpumask *span = cpumask_of_node(nid);
        struct soft_domain *sf_d = NULL;
        int i;
+       int cls_cnt = 0;

        sf_d = kzalloc_node(sizeof(struct soft_domain) + cpumask_size(),
                            GFP_KERNEL, nid);
@@ -63,6 +66,12 @@ static int build_soft_sub_domain(int nid, struct cpumask *cpus)
        for_each_cpu_and(i, span, cpus) {
                struct soft_subdomain *sub_d = NULL;

+               cls_cnt++;
+               if (cls_cnt > NR_MAX_CLUSTER) {
+                       pr_info("cluster number > %d, soft domain unsupported.\n", NR_MAX_CLUSTER);
+                       return -EINVAL;
+               }
+
                sub_d = kzalloc_node(sizeof(struct soft_subdomain) + cpumask_size(),
                                     GFP_KERNEL, nid);
                if (!sub_d) {
@@ -138,8 +147,6 @@ void build_soft_domain(void)

 static DEFINE_MUTEX(soft_domain_mutex);

-#define NR_MAX_CLUSTER 16
-
 struct domain_node {
        struct soft_subdomain *sud_d;
        unsigned int attached;
-- 
2.18.0.huawei.25
[PATCH OLK-6.6] sched/soft_domain: Increasing NR_MAX_CLUSTER to 32
by Zhang Qiao 25 Feb '26

hulk inclusion
category: bugfix
bugzilla: https://gitcode.com/openeuler/kernel/issues/8540

--------------------------------

Bump the maximum number of clusters per node from 16 to 32 to resolve an
out-of-bounds array overflow.

Fixes: 645a1ba256ef ("sched: topology: Build soft domain for LLC")
Signed-off-by: Zhang Qiao <zhangqiao22(a)huawei.com>
---
 kernel/sched/soft_domain.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/soft_domain.c b/kernel/sched/soft_domain.c
index 3c52680ca82f..7b569fdb2c6e 100644
--- a/kernel/sched/soft_domain.c
+++ b/kernel/sched/soft_domain.c
@@ -17,6 +17,8 @@

 #include <linux/sort.h>

+#define NR_MAX_CLUSTER 32
+
 static DEFINE_STATIC_KEY_TRUE(__soft_domain_switch);

 static int __init soft_domain_switch_setup(char *str)
@@ -50,6 +52,7 @@ static int build_soft_sub_domain(int nid, struct cpumask *cpus)
        const struct cpumask *span = cpumask_of_node(nid);
        struct soft_domain *sf_d = NULL;
        int i;
+       int cls_cnt = 0;

        sf_d = kzalloc_node(sizeof(struct soft_domain) + cpumask_size(),
                            GFP_KERNEL, nid);
@@ -63,6 +66,12 @@ static int build_soft_sub_domain(int nid, struct cpumask *cpus)
        for_each_cpu_and(i, span, cpus) {
                struct soft_subdomain *sub_d = NULL;

+               cls_cnt++;
+               if (cls_cnt > NR_MAX_CLUSTER) {
+                       pr_info("cluster number > %d, soft domain unsupported.\n", NR_MAX_CLUSTER);
+                       return -EINVAL;
+               }
+
                sub_d = kzalloc_node(sizeof(struct soft_subdomain) + cpumask_size(),
                                     GFP_KERNEL, nid);
                if (!sub_d) {
@@ -138,8 +147,6 @@ void build_soft_domain(void)

 static DEFINE_MUTEX(soft_domain_mutex);

-#define NR_MAX_CLUSTER 16
-
 struct domain_node {
        struct soft_subdomain *sud_d;
        unsigned int attached;
-- 
2.18.0.huawei.25
[PATCH openEuler-1.0-LTS] squashfs: fix memory leak in squashfs_fill_super
by Long Li 25 Feb '26

From: Phillip Lougher <phillip(a)squashfs.org.uk>

stable inclusion
from stable-v5.10.240
commit 9bdb2e7f9b11a1c9bb7096d5726fe5a738fa5d73
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/9383
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=…

--------------------------------

commit b64700d41bdc4e9f82f1346c15a3678ebb91a89c upstream.

If sb_min_blocksize returns 0, squashfs_fill_super exits without freeing
allocated memory (sb->s_fs_info). Fix this by moving the call to
sb_min_blocksize to before memory is allocated.

Link: https://lkml.kernel.org/r/20250811223740.110392-1-phillip@squashfs.org.uk
Fixes: 734aa85390ea ("Squashfs: check return result of sb_min_blocksize")
Signed-off-by: Phillip Lougher <phillip(a)squashfs.org.uk>
Reported-by: Scott GUO <scottzhguo(a)tencent.com>
Closes: https://lore.kernel.org/all/20250811061921.3807353-1-scott_gzh@163.com
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
        fs/squashfs/super.c
[Context conflicts]
Signed-off-by: Long Li <leo.lilong(a)huawei.com>
---
 fs/squashfs/super.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
index 41756ca57ff2..119c0d63dafc 100644
--- a/fs/squashfs/super.c
+++ b/fs/squashfs/super.c
@@ -85,10 +85,15 @@ static int squashfs_fill_super(struct super_block *sb, void *data, int silent)
        unsigned short flags;
        unsigned int fragments;
        u64 lookup_table_start, xattr_id_table_start, next_table;
-       int err;
+       int err, devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);

        TRACE("Entered squashfs_fill_superblock\n");

+       if (!devblksize) {
+               ERROR("squashfs: unable to set blocksize\n");
+               return -EINVAL;
+       }
+
        sb->s_fs_info = kzalloc(sizeof(*msblk), GFP_KERNEL);
        if (sb->s_fs_info == NULL) {
                ERROR("Failed to allocate squashfs_sb_info\n");
@@ -96,12 +101,7 @@ static int squashfs_fill_super(struct super_block *sb, void *data, int silent)
        }
        msblk = sb->s_fs_info;

-       msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
-       if (!msblk->devblksize) {
-               ERROR("squashfs: unable to set blocksize\n");
-               return -EINVAL;
-       }
-
+       msblk->devblksize = devblksize;
        msblk->devblksize_log2 = ffz(~msblk->devblksize);

        mutex_init(&msblk->meta_index_mutex);
-- 
2.39.2
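The fix reorders the function so the check that can fail runs before any allocation, leaving the early error return with nothing to free. A toy sketch of that ordering, with an allocation counter standing in for sb->s_fs_info; names (`fill_super`, `info_alloc`) are illustrative, not squashfs API.

```c
#include <stdlib.h>

/* Track live allocations so a leak is observable. */
static int live_allocs;

static void *info_alloc(void)   { live_allocs++; return malloc(16); }
static void  info_free(void *p) { live_allocs--; free(p); }

/* Sketch of the fixed squashfs_fill_super(): validate the device block
 * size *before* allocating the per-mount info, so the -EINVAL path has
 * nothing to leak. */
int fill_super(int devblksize)
{
        if (!devblksize)        /* moved ahead of the allocation by the fix */
                return -1;      /* -EINVAL: no allocation yet, nothing leaks */

        void *info = info_alloc();
        if (!info)
                return -2;      /* -ENOMEM */

        /* ... the rest of superblock setup would use devblksize here ... */
        info_free(info);        /* stands in for the normal teardown path */
        return 0;
}
```

Before the fix, the equivalent of the `!devblksize` branch ran after the allocation and returned without freeing it; reordering makes the leak structurally impossible.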
[PATCH OLK-6.6] efivarfs: fix error propagation in efivar_entry_get()
by Yongjian Sun 25 Feb '26

From: Kohei Enju <kohei(a)enjuk.jp>

mainline inclusion
from mainline-v6.19-rc8
commit 4b22ec1685ce1fc0d862dcda3225d852fb107995
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13706
CVE: CVE-2026-23156
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

efivar_entry_get() always returns success even if the underlying
__efivar_entry_get() fails, masking errors. This may result in
uninitialized heap memory being copied to userspace in the
efivarfs_file_read() path.

Fix it by returning the error from __efivar_entry_get().

Fixes: 2d82e6227ea1 ("efi: vars: Move efivar caching layer into efivarfs")
Cc: <stable(a)vger.kernel.org> # v6.1+
Signed-off-by: Kohei Enju <kohei(a)enjuk.jp>
Signed-off-by: Ard Biesheuvel <ardb(a)kernel.org>
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 fs/efivarfs/vars.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/efivarfs/vars.c b/fs/efivarfs/vars.c
index 13bc60698955..b10aa5afd7f7 100644
--- a/fs/efivarfs/vars.c
+++ b/fs/efivarfs/vars.c
@@ -609,7 +609,7 @@ int efivar_entry_get(struct efivar_entry *entry, u32 *attributes,
        err = __efivar_entry_get(entry, attributes, size, data);
        efivar_unlock();

-       return 0;
+       return err;
 }

 /**
-- 
2.39.2
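The bug is a classic error-masking wrapper: the callee's status is captured into `err` but the wrapper returns a hard-coded 0, so the caller proceeds as if the (possibly uninitialized) output buffer were valid. A minimal illustration of the buggy and fixed shapes side by side; the names are stand-ins, not the EFI API.

```c
/* Stand-in for __efivar_entry_get(): fails on demand. */
static int backend_get(int fail)
{
        return fail ? -5 : 0;   /* -EIO-style error vs. success */
}

/* Pre-fix shape: the status is computed, then thrown away. */
int entry_get_buggy(int fail)
{
        int err = backend_get(fail);

        (void)err;
        return 0;               /* caller always sees success */
}

/* Post-fix shape: propagate the callee's status. */
int entry_get_fixed(int fail)
{
        int err = backend_get(fail);

        return err;             /* the one-line change in the patch */
}
```

With the fix, a failing read surfaces as an error instead of letting the read path copy an unfilled buffer to userspace.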
[PATCH OLK-6.6] io_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop
by Yongjian Sun 25 Feb '26

From: Jens Axboe <axboe(a)kernel.dk>

mainline inclusion
from mainline-v6.19-rc7
commit 10dc959398175736e495f71c771f8641e1ca1907
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13663
CVE: CVE-2026-23113
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

Currently this is checked before running the pending work. Normally this
is quite fine, as work items either end up blocking (which will create a
new worker for other items), or they complete fairly quickly. But syzbot
reports an issue where io-wq takes seemingly forever to exit, and with a
bit of debugging, this turns out to be because it queues a bunch of big
(2GB - 4096b) reads with a /dev/msr* file. Since this file type doesn't
support ->read_iter(), loop_rw_iter() ends up handling them. Each read
returns 16MB of data read, which takes 20 (!!) seconds. With a bunch of
these pending, processing the whole chain can take a long time. Easily
longer than the syzbot uninterruptible sleep timeout of 140 seconds. This
then triggers a complaint off the io-wq exit path:

INFO: task syz.4.135:6326 blocked for more than 143 seconds.
      Not tainted syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.135 state:D stack:26824 pid:6326 tgid:6324 ppid:5957 task_flags:0x400548 flags:0x00080000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121
 io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
 io_wq_put_and_exit+0x271/0x8a0 io_uring/io-wq.c:1356
 io_uring_clean_tctx+0x10d/0x190 io_uring/tctx.c:203
 io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:651
 io_uring_files_cancel include/linux/io_uring.h:19 [inline]
 do_exit+0x2ce/0x2bd0 kernel/exit.c:911
 do_group_exit+0xd3/0x2a0 kernel/exit.c:1112
 get_signal+0x2671/0x26d0 kernel/signal.c:3034
 arch_do_signal_or_restart+0x8f/0x7e0 arch/x86/kernel/signal.c:337
 __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
 exit_to_user_mode_loop+0x8c/0x540 kernel/entry/common.c:75
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
 do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa02738f749
RSP: 002b:00007fa0281ae0e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00007fa0275e6098 RCX: 00007fa02738f749
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa0275e6098
RBP: 00007fa0275e6090 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fa0275e6128 R14: 00007fff14e4fcb0 R15: 00007fff14e4fd98

There's really nothing wrong here, outside of the fact that processing
these reads will take a LONG time. However, we can speed up the exit by
checking IO_WQ_BIT_EXIT inside the io_worker_handle_work() loop, as syzbot
will exit the ring after queueing up all of these reads. Then once the
first item is processed, io-wq will simply cancel the rest. That should
avoid syzbot running into this complaint again.

Cc: stable(a)vger.kernel.org
Link: https://lore.kernel.org/all/68a2decc.050a0220.e29e5.0099.GAE@google.com/
Reported-by: syzbot+4eb282331cab6d5b6588(a)syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 io_uring/io-wq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 1c4ef4e4eb52..c848a5018d12 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -554,9 +554,9 @@ static void io_worker_handle_work(struct io_wq_acct *acct,
        __releases(&acct->lock)
 {
        struct io_wq *wq = worker->wq;
-       bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state);

        do {
+               bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state);
                struct io_wq_work *work;

                /*
-- 
2.39.2
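The change is small but the behavioral difference matters: sampling the exit flag once before the loop means an exit requested mid-queue is not seen until every pending item has run; re-reading it each iteration lets the remaining items be cancelled immediately. A simplified model of that loop — the struct and the "first executed item triggers exit" step are illustrative, standing in for the syzbot scenario, not io_uring internals.

```c
#include <stdbool.h>

/* Toy work queue: counts how items are disposed of. */
struct wq {
        bool exit_requested;
        int  pending;     /* items still queued */
        int  ran;         /* items fully executed */
        int  cancelled;   /* items cancelled because exit was requested */
};

void handle_work(struct wq *wq)
{
        while (wq->pending > 0) {
                /* The fix: re-read the flag per item, not once up front. */
                bool do_kill = wq->exit_requested;

                wq->pending--;
                if (do_kill)
                        wq->cancelled++;  /* fail fast instead of running it */
                else
                        wq->ran++;

                /* Model the reproducer: the ring exits right after the
                 * first (very slow) item completes. */
                if (wq->ran == 1)
                        wq->exit_requested = true;
        }
}
```

With the pre-fix placement (flag read once before the loop), all five items in the test below would have run; with the per-iteration check, only the first runs and the rest are cancelled, which is exactly the speedup the patch describes.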
[PATCH OLK-6.6] dmaengine: mmp_pdma: Fix race condition in mmp_pdma_residue()
by Zhang Yuwei 25 Feb '26

From: Guodong Xu <guodong(a)riscstar.com>

mainline inclusion
from mainline-v6.19-rc6
commit a143545855bc2c6e1330f6f57ae375ac44af00a7
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13727
CVE: CVE-2025-71221
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

Add proper locking in mmp_pdma_residue() to prevent use-after-free when
accessing the descriptor list and descriptor contents.

The race occurs when multiple threads call tx_status() while the tasklet
on another CPU is freeing completed descriptors:

CPU 0                                 CPU 1
-----                                 -----
mmp_pdma_tx_status()
  mmp_pdma_residue() -> NO LOCK held
    list_for_each_entry(sw, ..)
                                      DMA interrupt
                                      dma_do_tasklet()
                                        -> spin_lock(&desc_lock)
                                        list_move(sw->node, ...)
                                        spin_unlock(&desc_lock)
                                        dma_pool_free(sw)  <- FREED!
    -> access sw->desc  <- UAF!

This issue can be reproduced when running dmatest on the same channel with
multiple threads (threads_per_chan > 1).

Fix by protecting the chain_running list iteration and descriptor access
with the chan->desc_lock spinlock.

Signed-off-by: Juan Li <lijuan(a)linux.spacemit.com>
Signed-off-by: Guodong Xu <guodong(a)riscstar.com>
Link: https://patch.msgid.link/20251216-mmp-pdma-race-v1-1-976a224bb622@riscstar.…
Signed-off-by: Vinod Koul <vkoul(a)kernel.org>
Conflicts:
        drivers/dma/mmp_pdma.c
[commit 35e40bf761fcb not merged]
Signed-off-by: Zhang Yuwei <zhangyuwei20(a)huawei.com>
---
 drivers/dma/mmp_pdma.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index ebdfdcbb4f7a..3a035a31bec5 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -763,6 +763,7 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
                                     dma_cookie_t cookie)
 {
        struct mmp_pdma_desc_sw *sw;
+       unsigned long flags;
        u32 curr, residue = 0;
        bool passed = false;
        bool cyclic = chan->cyclic_first != NULL;
@@ -779,6 +780,8 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
        else
                curr = readl(chan->phy->base + DSADR(chan->phy->idx));

+       spin_lock_irqsave(&chan->desc_lock, flags);
+
        list_for_each_entry(sw, &chan->chain_running, node) {
                u32 start, end, len;

@@ -822,6 +825,7 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
                        continue;

                if (sw->async_tx.cookie == cookie) {
+                       spin_unlock_irqrestore(&chan->desc_lock, flags);
                        return residue;
                } else {
                        residue = 0;
@@ -829,6 +833,8 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
                }
        }

+       spin_unlock_irqrestore(&chan->desc_lock, flags);
+
        /* We should only get here in case of cyclic transactions */
        return residue;
 }
-- 
2.22.0
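The rule the fix enforces: a reader walking a list that another context mutates and frees under a lock must hold that same lock for the whole walk, and every return path out of the locked region must unlock. A userspace sketch of that shape, with a pthread mutex standing in for the driver's spin_lock_irqsave on chan->desc_lock and a plain array standing in for the descriptor list; all names are illustrative, not driver API.

```c
#include <pthread.h>

/* Toy channel: the "list" is an array of per-descriptor remaining
 * byte counts, mutated elsewhere under desc_lock. */
struct chan {
        pthread_mutex_t desc_lock;
        int lens[4];    /* remaining bytes per descriptor */
        int ndesc;
};

/* Mirrors the fixed mmp_pdma_residue(): the whole walk happens with
 * desc_lock held, so a concurrent "tasklet" holding the same lock
 * cannot move or free entries mid-iteration. */
unsigned int residue(struct chan *c)
{
        unsigned int sum = 0;

        pthread_mutex_lock(&c->desc_lock);      /* added by the fix */
        for (int i = 0; i < c->ndesc; i++)
                sum += c->lens[i];              /* safe under the lock */
        pthread_mutex_unlock(&c->desc_lock);    /* every exit path unlocks */

        return sum;
}
```

Note the patch needs two unlock sites because the kernel function has an early `return residue;` inside the loop; the sketch avoids that by accumulating and unlocking once, which is the easier shape to keep balanced.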
