mailweb.openeuler.org


Kernel

kernel@openeuler.org

  • 43 participants
  • 22862 discussions
[PATCH OLK-6.6] ext4: fix iloc.bh leak in ext4_xattr_inode_update_ref
by Yongjian Sun 26 Feb '26

From: Yang Erkun <yangerkun(a)huawei.com>

mainline inclusion
from mainline-v6.19-rc6
commit d250bdf531d9cd4096fedbb9f172bb2ca660c868
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13461
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

The error branch in ext4_xattr_inode_update_ref() forgets to release the
refcount for iloc.bh. Found during code review.

Fixes: 57295e835408 ("ext4: guard against EA inode refcount underflow in xattr update")
Signed-off-by: Yang Erkun <yangerkun(a)huawei.com>
Reviewed-by: Baokun Li <libaokun1(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Link: https://patch.msgid.link/20251213055706.3417529-1-yangerkun@huawei.com
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Cc: stable(a)kernel.org
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 fs/ext4/xattr.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
index cd906aa08afa..96810af688f5 100644
--- a/fs/ext4/xattr.c
+++ b/fs/ext4/xattr.c
@@ -1047,6 +1047,7 @@ static int ext4_xattr_inode_update_ref(handle_t *handle, struct inode *ea_inode,
 		ext4_error_inode(ea_inode, __func__, __LINE__, 0,
 			"EA inode %lu ref wraparound: ref_count=%lld ref_change=%d",
 			ea_inode->i_ino, ref_count, ref_change);
+		brelse(iloc.bh);
 		ret = -EFSCORRUPTED;
 		goto out;
 	}
--
2.39.2
[PATCH v7 0/5] arm64/riscv: Add support for crashkernel CMA reservation
by Jinjie Ruan 26 Feb '26

The crash memory allocation, and the exclusion of crashk_res, crashk_low_res and crashk_cma memory, are almost identical across architectures. This patch set handles them in the crash core in a generic way, eliminating a lot of duplicated code, and adds support for crashkernel CMA reservation on arm64 and riscv.

Rebased on v7.0-rc1.

Basic tests were performed on QEMU platforms for the x86, arm64, and riscv architectures with the following parameters: "cma=256M crashkernel=256M crashkernel=64M,cma"

Changes in v7:
- Correct the inclusion of CMA-reserved ranges for the kdump kernel in of/kexec for arm64 and riscv.
- Add Acked-by.

Changes in v6:
- Update the crash core exclude code as Mike suggested.
- Rebased on v7.0-rc1.
- Add Acked-by.
- Link to v5: https://lore.kernel.org/all/20260212101001.343158-1-ruanjinjie@huawei.com/

Changes in v5:
- Fix the kernel test robot build warnings.
- Sort crash memory ranges before preparing elfcorehdr for powerpc.
- Link to v4: https://lore.kernel.org/all/20260209095931.2813152-1-ruanjinjie@huawei.com/

Changes in v4:
- Move the size calculation (and the realloc if needed) into the generic crash.
- Link to v3: https://lore.kernel.org/all/20260204093728.1447527-1-ruanjinjie@huawei.com/

Jinjie Ruan (4):
  crash: Exclude crash kernel memory in crash core
  crash: Use crash_exclude_core_ranges() on powerpc
  arm64: kexec: Add support for crashkernel CMA reservation
  riscv: kexec: Add support for crashkernel CMA reservation

Sourabh Jain (1):
  powerpc/crash: sort crash memory ranges before preparing elfcorehdr

 .../admin-guide/kernel-parameters.txt      |  16 +--
 arch/arm64/kernel/machine_kexec_file.c     |  39 +++----
 arch/arm64/mm/init.c                       |   5 +-
 arch/loongarch/kernel/machine_kexec_file.c |  39 +++----
 arch/powerpc/include/asm/kexec_ranges.h    |   4 +-
 arch/powerpc/kexec/crash.c                 |   5 +-
 arch/powerpc/kexec/ranges.c                | 101 +-----------------
 arch/riscv/kernel/machine_kexec_file.c     |  38 +++----
 arch/riscv/mm/init.c                       |   5 +-
 arch/x86/kernel/crash.c                    |  89 +++------------
 drivers/of/kexec.c                         |   9 ++
 include/linux/crash_core.h                 |   9 ++
 kernel/crash_core.c                        |  89 ++++++++++++++-
 13 files changed, 176 insertions(+), 272 deletions(-)

--
2.34.1
[PATCH OLK-6.6] sched/soft_domain: Increasing NR_MAX_CLUSTER to 32
by Zhang Qiao 25 Feb '26

hulk inclusion
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8540

--------------------------------

Bump the maximum number of clusters per node from 16 to 32 to resolve an
out-of-bounds array overflow.

Fixes: 645a1ba256ef ("sched: topology: Build soft domain for LLC")
Signed-off-by: Zhang Qiao <zhangqiao22(a)huawei.com>
---
 kernel/sched/soft_domain.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/soft_domain.c b/kernel/sched/soft_domain.c
index 3c52680ca82f..7b569fdb2c6e 100644
--- a/kernel/sched/soft_domain.c
+++ b/kernel/sched/soft_domain.c
@@ -17,6 +17,8 @@

 #include <linux/sort.h>

+#define NR_MAX_CLUSTER 32
+
 static DEFINE_STATIC_KEY_TRUE(__soft_domain_switch);

 static int __init soft_domain_switch_setup(char *str)
@@ -50,6 +52,7 @@ static int build_soft_sub_domain(int nid, struct cpumask *cpus)
 	const struct cpumask *span = cpumask_of_node(nid);
 	struct soft_domain *sf_d = NULL;
 	int i;
+	int cls_cnt = 0;

 	sf_d = kzalloc_node(sizeof(struct soft_domain) + cpumask_size(),
 			    GFP_KERNEL, nid);
@@ -63,6 +66,12 @@ static int build_soft_sub_domain(int nid, struct cpumask *cpus)
 	for_each_cpu_and(i, span, cpus) {
 		struct soft_subdomain *sub_d = NULL;

+		cls_cnt++;
+		if (cls_cnt > NR_MAX_CLUSTER) {
+			pr_info("clsuter number > %d, unsupport soft domain.\n", NR_MAX_CLUSTER);
+			return -EINVAL;
+		}
+
 		sub_d = kzalloc_node(sizeof(struct soft_subdomain) + cpumask_size(),
 				     GFP_KERNEL, nid);
 		if (!sub_d) {
@@ -138,8 +147,6 @@ void build_soft_domain(void)

 static DEFINE_MUTEX(soft_domain_mutex);

-#define NR_MAX_CLUSTER 16
-
 struct domain_node {
	struct soft_subdomain *sud_d;
	unsigned int attached;
--
2.18.0.huawei.25
[PATCH OLK-6.6] sched/soft_domain: Increasing NR_MAX_CLUSTER to 32
by Zhang Qiao 25 Feb '26

hulk inclusion
category: bugfix
bugzilla: https://gitcode.com/openeuler/kernel/issues/8540

--------------------------------

Bump the maximum number of clusters per node from 16 to 32 to resolve an
out-of-bounds array overflow.

Fixes: 645a1ba256ef ("sched: topology: Build soft domain for LLC")
Signed-off-by: Zhang Qiao <zhangqiao22(a)huawei.com>
---
 kernel/sched/soft_domain.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/soft_domain.c b/kernel/sched/soft_domain.c
index 3c52680ca82f..7b569fdb2c6e 100644
--- a/kernel/sched/soft_domain.c
+++ b/kernel/sched/soft_domain.c
@@ -17,6 +17,8 @@

 #include <linux/sort.h>

+#define NR_MAX_CLUSTER 32
+
 static DEFINE_STATIC_KEY_TRUE(__soft_domain_switch);

 static int __init soft_domain_switch_setup(char *str)
@@ -50,6 +52,7 @@ static int build_soft_sub_domain(int nid, struct cpumask *cpus)
 	const struct cpumask *span = cpumask_of_node(nid);
 	struct soft_domain *sf_d = NULL;
 	int i;
+	int cls_cnt = 0;

 	sf_d = kzalloc_node(sizeof(struct soft_domain) + cpumask_size(),
 			    GFP_KERNEL, nid);
@@ -63,6 +66,12 @@ static int build_soft_sub_domain(int nid, struct cpumask *cpus)
 	for_each_cpu_and(i, span, cpus) {
 		struct soft_subdomain *sub_d = NULL;

+		cls_cnt++;
+		if (cls_cnt > NR_MAX_CLUSTER) {
+			pr_info("clsuter number > %d, unsupport soft domain.\n", NR_MAX_CLUSTER);
+			return -EINVAL;
+		}
+
 		sub_d = kzalloc_node(sizeof(struct soft_subdomain) + cpumask_size(),
 				     GFP_KERNEL, nid);
 		if (!sub_d) {
@@ -138,8 +147,6 @@ void build_soft_domain(void)

 static DEFINE_MUTEX(soft_domain_mutex);

-#define NR_MAX_CLUSTER 16
-
 struct domain_node {
	struct soft_subdomain *sud_d;
	unsigned int attached;
--
2.18.0.huawei.25
[PATCH openEuler-1.0-LTS] squashfs: fix memory leak in squashfs_fill_super
by Long Li 25 Feb '26

From: Phillip Lougher <phillip(a)squashfs.org.uk>

stable inclusion
from stable-v5.10.240
commit 9bdb2e7f9b11a1c9bb7096d5726fe5a738fa5d73
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/9383
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=…

--------------------------------

commit b64700d41bdc4e9f82f1346c15a3678ebb91a89c upstream.

If sb_min_blocksize returns 0, squashfs_fill_super exits without freeing
allocated memory (sb->s_fs_info). Fix this by moving the call to
sb_min_blocksize to before memory is allocated.

Link: https://lkml.kernel.org/r/20250811223740.110392-1-phillip@squashfs.org.uk
Fixes: 734aa85390ea ("Squashfs: check return result of sb_min_blocksize")
Signed-off-by: Phillip Lougher <phillip(a)squashfs.org.uk>
Reported-by: Scott GUO <scottzhguo(a)tencent.com>
Closes: https://lore.kernel.org/all/20250811061921.3807353-1-scott_gzh@163.com
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Conflicts:
	fs/squashfs/super.c
[Context conflicts]
Signed-off-by: Long Li <leo.lilong(a)huawei.com>
---
 fs/squashfs/super.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
index 41756ca57ff2..119c0d63dafc 100644
--- a/fs/squashfs/super.c
+++ b/fs/squashfs/super.c
@@ -85,10 +85,15 @@ static int squashfs_fill_super(struct super_block *sb, void *data, int silent)
 	unsigned short flags;
 	unsigned int fragments;
 	u64 lookup_table_start, xattr_id_table_start, next_table;
-	int err;
+	int err, devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);

 	TRACE("Entered squashfs_fill_superblock\n");

+	if (!devblksize) {
+		ERROR("squashfs: unable to set blocksize\n");
+		return -EINVAL;
+	}
+
 	sb->s_fs_info = kzalloc(sizeof(*msblk), GFP_KERNEL);
 	if (sb->s_fs_info == NULL) {
 		ERROR("Failed to allocate squashfs_sb_info\n");
@@ -96,12 +101,7 @@ static int squashfs_fill_super(struct super_block *sb, void *data, int silent)
 	}
 	msblk = sb->s_fs_info;

-	msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
-	if (!msblk->devblksize) {
-		ERROR("squashfs: unable to set blocksize\n");
-		return -EINVAL;
-	}
-
+	msblk->devblksize = devblksize;
 	msblk->devblksize_log2 = ffz(~msblk->devblksize);

 	mutex_init(&msblk->meta_index_mutex);
--
2.39.2
[PATCH OLK-6.6] efivarfs: fix error propagation in efivar_entry_get()
by Yongjian Sun 25 Feb '26

From: Kohei Enju <kohei(a)enjuk.jp>

mainline inclusion
from mainline-v6.19-rc8
commit 4b22ec1685ce1fc0d862dcda3225d852fb107995
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13706
CVE: CVE-2026-23156
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

efivar_entry_get() always returns success even if the underlying
__efivar_entry_get() fails, masking errors. This may result in
uninitialized heap memory being copied to userspace in the
efivarfs_file_read() path.

Fix it by returning the error from __efivar_entry_get().

Fixes: 2d82e6227ea1 ("efi: vars: Move efivar caching layer into efivarfs")
Cc: <stable(a)vger.kernel.org> # v6.1+
Signed-off-by: Kohei Enju <kohei(a)enjuk.jp>
Signed-off-by: Ard Biesheuvel <ardb(a)kernel.org>
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 fs/efivarfs/vars.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/efivarfs/vars.c b/fs/efivarfs/vars.c
index 13bc60698955..b10aa5afd7f7 100644
--- a/fs/efivarfs/vars.c
+++ b/fs/efivarfs/vars.c
@@ -609,7 +609,7 @@ int efivar_entry_get(struct efivar_entry *entry, u32 *attributes,
 	err = __efivar_entry_get(entry, attributes, size, data);
 	efivar_unlock();

-	return 0;
+	return err;
 }

 /**
--
2.39.2
[PATCH OLK-6.6] io_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop
by Yongjian Sun 25 Feb '26

From: Jens Axboe <axboe(a)kernel.dk>

mainline inclusion
from mainline-v6.19-rc7
commit 10dc959398175736e495f71c771f8641e1ca1907
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13663
CVE: CVE-2026-23113
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

Currently this is checked before running the pending work. Normally this
is quite fine, as work items either end up blocking (which will create a
new worker for other items), or they complete fairly quickly. But syzbot
reports an issue where io-wq takes seemingly forever to exit, and with a
bit of debugging, this turns out to be because it queues a bunch of big
(2GB - 4096b) reads with a /dev/msr* file. Since this file type doesn't
support ->read_iter(), loop_rw_iter() ends up handling them. Each read
returns 16MB of data read, which takes 20 (!!) seconds. With a bunch of
these pending, processing the whole chain can take a long time. Easily
longer than the syzbot uninterruptible sleep timeout of 140 seconds.
This then triggers a complaint off the io-wq exit path:

INFO: task syz.4.135:6326 blocked for more than 143 seconds.
      Not tainted syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.135 state:D stack:26824 pid:6326 tgid:6324 ppid:5957 task_flags:0x400548 flags:0x00080000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121
 io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
 io_wq_put_and_exit+0x271/0x8a0 io_uring/io-wq.c:1356
 io_uring_clean_tctx+0x10d/0x190 io_uring/tctx.c:203
 io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:651
 io_uring_files_cancel include/linux/io_uring.h:19 [inline]
 do_exit+0x2ce/0x2bd0 kernel/exit.c:911
 do_group_exit+0xd3/0x2a0 kernel/exit.c:1112
 get_signal+0x2671/0x26d0 kernel/signal.c:3034
 arch_do_signal_or_restart+0x8f/0x7e0 arch/x86/kernel/signal.c:337
 __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
 exit_to_user_mode_loop+0x8c/0x540 kernel/entry/common.c:75
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
 do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa02738f749
RSP: 002b:00007fa0281ae0e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00007fa0275e6098 RCX: 00007fa02738f749
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa0275e6098
RBP: 00007fa0275e6090 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fa0275e6128 R14: 00007fff14e4fcb0 R15: 00007fff14e4fd98

There's really nothing wrong here, outside of processing these reads
will take a LONG time. However, we can speed up the exit by checking
IO_WQ_BIT_EXIT inside the io_worker_handle_work() loop, as syzbot will
exit the ring after queueing up all of these reads. Then once the first
item is processed, io-wq will simply cancel the rest. That should avoid
syzbot running into this complaint again.

Cc: stable(a)kernel.org
Link: https://lore.kernel.org/all/68a2decc.050a0220.e29e5.0099.GAE@google.com/
Reported-by: syzbot+4eb282331cab6d5b6588(a)syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 io_uring/io-wq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 1c4ef4e4eb52..c848a5018d12 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -554,9 +554,9 @@ static void io_worker_handle_work(struct io_wq_acct *acct,
 	__releases(&acct->lock)
 {
 	struct io_wq *wq = worker->wq;
-	bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state);

 	do {
+		bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state);
 		struct io_wq_work *work;

 		/*
--
2.39.2
[PATCH OLK-6.6] dmaengine: mmp_pdma: Fix race condition in mmp_pdma_residue()
by Zhang Yuwei 25 Feb '26

From: Guodong Xu <guodong(a)riscstar.com>

mainline inclusion
from mainline-v6.19-rc6
commit a143545855bc2c6e1330f6f57ae375ac44af00a7
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13727
CVE: CVE-2025-71221
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

Add proper locking in mmp_pdma_residue() to prevent use-after-free when
accessing the descriptor list and descriptor contents.

The race occurs when multiple threads call tx_status() while the tasklet
on another CPU is freeing completed descriptors:

CPU 0                              CPU 1
-----                              -----
mmp_pdma_tx_status()
  mmp_pdma_residue()
  -> NO LOCK held
  list_for_each_entry(sw, ..)
                                   DMA interrupt
                                   dma_do_tasklet()
                                   -> spin_lock(&desc_lock)
                                      list_move(sw->node, ...)
                                   spin_unlock(&desc_lock)
                                   dma_pool_free(sw)  <- FREED!
  -> access sw->desc  <- UAF!

This issue can be reproduced when running dmatest on the same channel
with multiple threads (threads_per_chan > 1).

Fix by protecting the chain_running list iteration and descriptor access
with the chan->desc_lock spinlock.

Signed-off-by: Juan Li <lijuan(a)linux.spacemit.com>
Signed-off-by: Guodong Xu <guodong(a)riscstar.com>
Link: https://patch.msgid.link/20251216-mmp-pdma-race-v1-1-976a224bb622@riscstar.…
Signed-off-by: Vinod Koul <vkoul(a)kernel.org>
Conflicts:
	drivers/dma/mmp_pdma.c
[commit 35e40bf761fcb not merged]
Signed-off-by: Zhang Yuwei <zhangyuwei20(a)huawei.com>
---
 drivers/dma/mmp_pdma.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index ebdfdcbb4f7a..3a035a31bec5 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -763,6 +763,7 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 				     dma_cookie_t cookie)
 {
 	struct mmp_pdma_desc_sw *sw;
+	unsigned long flags;
 	u32 curr, residue = 0;
 	bool passed = false;
 	bool cyclic = chan->cyclic_first != NULL;
@@ -779,6 +780,8 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 	else
 		curr = readl(chan->phy->base + DSADR(chan->phy->idx));

+	spin_lock_irqsave(&chan->desc_lock, flags);
+
 	list_for_each_entry(sw, &chan->chain_running, node) {
 		u32 start, end, len;

@@ -822,6 +825,7 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 			continue;

 		if (sw->async_tx.cookie == cookie) {
+			spin_unlock_irqrestore(&chan->desc_lock, flags);
 			return residue;
 		} else {
 			residue = 0;
@@ -829,6 +833,8 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 		}
 	}

+	spin_unlock_irqrestore(&chan->desc_lock, flags);
+
 	/* We should only get here in case of cyclic transactions */
 	return residue;
 }
--
2.22.0
[PATCH OLK-5.10] dmaengine: mmp_pdma: Fix race condition in mmp_pdma_residue()
by Zhang Yuwei 25 Feb '26

From: Guodong Xu <guodong(a)riscstar.com>

mainline inclusion
from mainline-v6.19-rc6
commit a143545855bc2c6e1330f6f57ae375ac44af00a7
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13727
CVE: CVE-2025-71221
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

Add proper locking in mmp_pdma_residue() to prevent use-after-free when
accessing the descriptor list and descriptor contents.

The race occurs when multiple threads call tx_status() while the tasklet
on another CPU is freeing completed descriptors:

CPU 0                              CPU 1
-----                              -----
mmp_pdma_tx_status()
  mmp_pdma_residue()
  -> NO LOCK held
  list_for_each_entry(sw, ..)
                                   DMA interrupt
                                   dma_do_tasklet()
                                   -> spin_lock(&desc_lock)
                                      list_move(sw->node, ...)
                                   spin_unlock(&desc_lock)
                                   dma_pool_free(sw)  <- FREED!
  -> access sw->desc  <- UAF!

This issue can be reproduced when running dmatest on the same channel
with multiple threads (threads_per_chan > 1).

Fix by protecting the chain_running list iteration and descriptor access
with the chan->desc_lock spinlock.

Signed-off-by: Juan Li <lijuan(a)linux.spacemit.com>
Signed-off-by: Guodong Xu <guodong(a)riscstar.com>
Link: https://patch.msgid.link/20251216-mmp-pdma-race-v1-1-976a224bb622@riscstar.…
Signed-off-by: Vinod Koul <vkoul(a)kernel.org>
Conflicts:
	drivers/dma/mmp_pdma.c
[commit 35e40bf761fcb not merged]
Signed-off-by: Zhang Yuwei <zhangyuwei20(a)huawei.com>
---
 drivers/dma/mmp_pdma.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index 4eb63f1ad224..c56cb954b392 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -764,6 +764,7 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 				     dma_cookie_t cookie)
 {
 	struct mmp_pdma_desc_sw *sw;
+	unsigned long flags;
 	u32 curr, residue = 0;
 	bool passed = false;
 	bool cyclic = chan->cyclic_first != NULL;
@@ -780,6 +781,8 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 	else
 		curr = readl(chan->phy->base + DSADR(chan->phy->idx));

+	spin_lock_irqsave(&chan->desc_lock, flags);
+
 	list_for_each_entry(sw, &chan->chain_running, node) {
 		u32 start, end, len;

@@ -823,6 +826,7 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 			continue;

 		if (sw->async_tx.cookie == cookie) {
+			spin_unlock_irqrestore(&chan->desc_lock, flags);
 			return residue;
 		} else {
 			residue = 0;
@@ -830,6 +834,8 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 		}
 	}

+	spin_unlock_irqrestore(&chan->desc_lock, flags);
+
 	/* We should only get here in case of cyclic transactions */
 	return residue;
 }
--
2.22.0
[PATCH] Prefer steal task within LLC and fallback to native newidle_balance on failure
by Chen Jinghuang 25 Feb '26

---
 kernel/sched/fair.c     | 15 +++++++++++++--
 kernel/sched/features.h |  1 +
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e60f19cb0fee..70ba6ee3c058 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10343,9 +10343,15 @@ done: __maybe_unused;
 	 */
 	rq_idle_stamp_update(rq);

-	new_tasks = newidle_balance(rq, rf);
-	if (new_tasks == 0)
+	if (sched_feat(STEALPOC)) {
 		new_tasks = try_steal(rq, rf);
+		if (new_tasks == 0)
+			new_tasks = newidle_balance(rq, rf);
+	} else {
+		new_tasks = newidle_balance(rq, rf);
+		if (new_tasks == 0)
+			new_tasks = try_steal(rq, rf);
+	}

 	schedstat_end_time(rq, time);

 	if (new_tasks)
@@ -14313,6 +14319,11 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 	int continue_balancing = 1;
 	u64 domain_cost;

+	if (sched_feat(STEALPOC)) {
+		if (sd->flags & SD_SHARE_PKG_RESOURCES)
+			continue;
+	}
+
 	update_next_balance(sd, &next_balance);

 	if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost)

diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 1f665fbf0137..c74329fd86b9 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -63,6 +63,7 @@ SCHED_FEAT(SIS_UTIL, true)
  * Improves CPU utilization.
  */
 SCHED_FEAT(STEAL, false)
+SCHED_FEAT(STEALPOC, false)
 #endif

 #ifdef CONFIG_SCHED_PARAL
--
2.34.1

Powered by HyperKitty