mailweb.openeuler.org

Kernel

kernel@openeuler.org

  • 47 participants
  • 19846 discussions
[openeuler:OLK-6.6 2650/2650] WARNING: modpost: vmlinux: section mismatch in reference: virtio_fs_init+0x105 (section: .init.text) -> virtio_fs_sysfs_exit (section: .exit.text)
by kernel test robot 05 Aug '25

tree:   https://gitee.com/openeuler/kernel.git OLK-6.6
head:   5d1af5a159ade19469a4a6aceaa10d4cece4eebb
commit: cc6009cb24f9754275aa850a61fd12554acdec36 [2650/2650] virtiofs: export filesystem tags through sysfs
config: x86_64-buildonly-randconfig-2001-20250805 (https://download.01.org/0day-ci/archive/20250805/202508051633.5bENwyOF-lkp@…)
compiler: gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250805/202508051633.5bENwyOF-lkp@…)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202508051633.5bENwyOF-lkp@intel.com/

All warnings (new ones prefixed by >>, old ones prefixed by <<):

>> WARNING: modpost: vmlinux: section mismatch in reference: virtio_fs_init+0x105 (section: .init.text) -> virtio_fs_sysfs_exit (section: .exit.text)

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
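For context: modpost flags any reference from a function placed in .init.text
to a symbol placed in .exit.text, because for built-in code (vmlinux) the
.exit.text section can be discarded at link or boot time, leaving a dangling
reference. A minimal sketch of the offending pattern, with hypothetical names
rather than the actual virtiofs code, looks like this:

    #include <linux/init.h>
    #include <linux/module.h>

    static void __exit demo_sysfs_exit(void)  /* placed in .exit.text */
    {
    }

    static int __init demo_init(void)         /* placed in .init.text */
    {
            /* calling an __exit function from __init, e.g. on an error
             * path, is exactly what modpost reports as a section mismatch */
            demo_sysfs_exit();
            return 0;
    }

    module_init(demo_init);
    module_exit(demo_sysfs_exit);
    MODULE_LICENSE("GPL");

The usual fix is to drop the __exit annotation from the function the init
error path needs, so it is kept even in built-in configurations.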
[PATCH openEuler-1.0-LTS] md/raid1: Fix stack memory use after return in raid1_reshape
by Zheng Qixing 05 Aug '25

From: Wang Jinchao <wangjinchao600(a)gmail.com>

mainline inclusion
from mainline-v6.16-rc6
commit d67ed2ccd2d1dcfda9292c0ea8697a9d0f2f0d98
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXOC
CVE: CVE-2025-38445

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

------------------

In the raid1_reshape function, newpool is allocated on the stack and
assigned to conf->r1bio_pool. This results in conf->r1bio_pool.wait.head
pointing to a stack address. Accessing this address later can lead to a
kernel panic.

Example access path:

raid1_reshape() {
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request() {
	alloc_r1bio() {
		mempool_alloc() {
			// if pool->alloc fails
			remove_element() {
				--pool->curr_nr;
			}
		}
	}
}

mempool_free() {
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address,
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix: reinit conf->r1bio_pool.wait after assigning newpool.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600(a)gmail.com>
Reviewed-by: Yu Kuai <yukuai3(a)huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@…
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Signed-off-by: Zheng Qixing <zhengqixing(a)huawei.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 0275dcb18692..b1d79edaa92b 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3312,6 +3312,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);

 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
--
2.39.2
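The root cause generalizes beyond raid1: a wait_queue_head_t embeds a doubly
linked list whose head points at its own address, so copying the containing
struct by value leaves the copy's list pointers aimed at the old location. A
self-contained userspace sketch (illustrative only, not kernel code) shows the
dangling self-reference that init_waitqueue_head() repairs:

    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };
    struct pool { int curr_nr; struct list_head wait; };

    static void init_head(struct list_head *h) { h->next = h->prev = h; }

    int main(void)
    {
            struct pool on_stack;
            init_head(&on_stack.wait);       /* points at &on_stack.wait */

            struct pool persistent = on_stack; /* struct copy, as in raid1_reshape */
            /* persistent.wait.next still points at the stack copy */
            printf("self-referential after copy? %s\n",
                   persistent.wait.next == &persistent.wait ? "yes" : "no");

            init_head(&persistent.wait);     /* the fix: re-init at the new home */
            printf("after re-init: %s\n",
                   persistent.wait.next == &persistent.wait ? "yes" : "no");
            return 0;
    }

Once raid1_reshape() returns, the stack frame that held newpool is reused, so
any waiter queued through the stale pointer scribbles on live stack memory;
re-initializing the head right after the assignment closes the hole.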
[PATCH OLK-6.6] raid10: cleanup memleak at raid10_make_request
by Zheng Qixing 05 Aug '25

From: Nigel Croxon <ncroxon(a)redhat.com>

mainline inclusion
from mainline-v6.16-rc6
commit 43806c3d5b9bb7d74ba4e33a6a8a41ac988bde24
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXO6
CVE: CVE-2025-38444

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

------------------

If raid10_read_request or raid10_write_request registers a new request
and the REQ_NOWAIT flag is set, the code does not free the malloc from
the mempool.

unreferenced object 0xffff8884802c3200 (size 192):
  comm "fio", pid 9197, jiffies 4298078271
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 88 41 02 00 00 00 00 00  .........A......
    08 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace (crc c1a049a2):
    __kmalloc+0x2bb/0x450
    mempool_alloc+0x11b/0x320
    raid10_make_request+0x19e/0x650 [raid10]
    md_handle_request+0x3b3/0x9e0
    __submit_bio+0x394/0x560
    __submit_bio_noacct+0x145/0x530
    submit_bio_noacct_nocheck+0x682/0x830
    __blkdev_direct_IO_async+0x4dc/0x6b0
    blkdev_read_iter+0x1e5/0x3b0
    __io_read+0x230/0x1110
    io_read+0x13/0x30
    io_issue_sqe+0x134/0x1180
    io_submit_sqes+0x48c/0xe90
    __do_sys_io_uring_enter+0x574/0x8b0
    do_syscall_64+0x5c/0xe0
    entry_SYSCALL_64_after_hwframe+0x76/0x7e

V4: changing backing tree to see if CKI tests will pass. The patch code
has not changed between any versions.

Fixes: c9aa889b035f ("md: raid10 add nowait support")
Signed-off-by: Nigel Croxon <ncroxon(a)redhat.com>
Link: https://lore.kernel.org/linux-raid/c0787379-9caa-42f3-b5fc-369aed784400@red…
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Signed-off-by: Zheng Qixing <zhengqixing(a)huawei.com>
---
 drivers/md/raid10.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index fbe52bd69ff9..3aabd087d375 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1194,8 +1194,11 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
 		}
 	}

-	if (!regular_request_wait(mddev, conf, bio, r10_bio->sectors))
+	if (!regular_request_wait(mddev, conf, bio, r10_bio->sectors)) {
+		raid_end_bio_io(r10_bio);
 		return;
+	}
+
 	rdev = read_balance(conf, r10_bio, &max_sectors);
 	if (!rdev) {
 		if (err_rdev) {
@@ -1385,8 +1388,11 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 	}

 	sectors = r10_bio->sectors;
-	if (!regular_request_wait(mddev, conf, bio, sectors))
+	if (!regular_request_wait(mddev, conf, bio, sectors)) {
+		raid_end_bio_io(r10_bio);
 		return;
+	}
+
 	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
 	    (mddev->reshape_backwards
 	     ? (bio->bi_iter.bi_sector < conf->reshape_safe &&
--
2.39.2
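The bug is a general early-return-leak pattern: an object is drawn from a
pool, then an error path returns without handing it back. A minimal userspace
analogy (hypothetical names, not the raid10 code) of the bug and the fix:

    #include <stdbool.h>
    #include <stdlib.h>

    struct request { int sectors; };

    /* stands in for regular_request_wait(); under REQ_NOWAIT semantics
     * it can refuse immediately instead of sleeping */
    static bool wait_ok(bool nowait) { return !nowait; }

    static void make_request_buggy(bool nowait)
    {
            struct request *r = malloc(sizeof(*r)); /* mempool_alloc() */
            if (!wait_ok(nowait))
                    return;         /* leak: r is never released */
            free(r);
    }

    static void make_request_fixed(bool nowait)
    {
            struct request *r = malloc(sizeof(*r));
            if (!wait_ok(nowait)) {
                    free(r);        /* raid_end_bio_io() plays this role */
                    return;
            }
            free(r);
    }

    int main(void)
    {
            make_request_buggy(true);  /* leaks one object */
            make_request_fixed(true);  /* releases it on the same path */
            return 0;
    }

In the patch, raid_end_bio_io() is the release step: it puts the abandoned
r10_bio back through the mempool instead of dropping it on the floor.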
[PATCH OLK-5.10] md/raid1: Fix stack memory use after return in raid1_reshape
by Zheng Qixing 05 Aug '25

From: Wang Jinchao <wangjinchao600(a)gmail.com>

stable inclusion
from stable-v5.10.240
commit 12b00ec99624f8da8c325f2dd6e807df26df0025
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXOC
CVE: CVE-2025-38445

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

------------------

[ Upstream commit d67ed2ccd2d1dcfda9292c0ea8697a9d0f2f0d98 ]

In the raid1_reshape function, newpool is allocated on the stack and
assigned to conf->r1bio_pool. This results in conf->r1bio_pool.wait.head
pointing to a stack address. Accessing this address later can lead to a
kernel panic.

Example access path:

raid1_reshape() {
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request() {
	alloc_r1bio() {
		mempool_alloc() {
			// if pool->alloc fails
			remove_element() {
				--pool->curr_nr;
			}
		}
	}
}

mempool_free() {
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address,
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix: reinit conf->r1bio_pool.wait after assigning newpool.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600(a)gmail.com>
Reviewed-by: Yu Kuai <yukuai3(a)huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@…
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Zheng Qixing <zhengqixing(a)huawei.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 11f649ed2466..acd008f97847 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3295,6 +3295,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);

 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
--
2.39.2
[PATCH OLK-6.6] md/raid1: Fix stack memory use after return in raid1_reshape
by Zheng Qixing 05 Aug '25

From: Wang Jinchao <wangjinchao600(a)gmail.com>

stable inclusion
from stable-v6.6.99
commit df5894014a92ff0196dbc212a7764e97366fd2b7
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXOC
CVE: CVE-2025-38445

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

------------------

[ Upstream commit d67ed2ccd2d1dcfda9292c0ea8697a9d0f2f0d98 ]

In the raid1_reshape function, newpool is allocated on the stack and
assigned to conf->r1bio_pool. This results in conf->r1bio_pool.wait.head
pointing to a stack address. Accessing this address later can lead to a
kernel panic.

Example access path:

raid1_reshape() {
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request() {
	alloc_r1bio() {
		mempool_alloc() {
			// if pool->alloc fails
			remove_element() {
				--pool->curr_nr;
			}
		}
	}
}

mempool_free() {
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address,
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix: reinit conf->r1bio_pool.wait after assigning newpool.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600(a)gmail.com>
Reviewed-by: Yu Kuai <yukuai3(a)huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@…
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Zheng Qixing <zhengqixing(a)huawei.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 4b7e68276d6c..8741b10db2bc 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3263,6 +3263,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);

 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
--
2.39.2
[openEuler-1.0-LTS] md/raid1: Fix stack memory use after return in raid1_reshape
by Zheng Qixing 05 Aug '25

From: Wang Jinchao <wangjinchao600(a)gmail.com>

mainline inclusion
from mainline-v6.16-rc6
commit d67ed2ccd2d1dcfda9292c0ea8697a9d0f2f0d98
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXOC
CVE: CVE-2025-38445

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

------------------

In the raid1_reshape function, newpool is allocated on the stack and
assigned to conf->r1bio_pool. This results in conf->r1bio_pool.wait.head
pointing to a stack address. Accessing this address later can lead to a
kernel panic.

Example access path:

raid1_reshape() {
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request() {
	alloc_r1bio() {
		mempool_alloc() {
			// if pool->alloc fails
			remove_element() {
				--pool->curr_nr;
			}
		}
	}
}

mempool_free() {
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address,
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix: reinit conf->r1bio_pool.wait after assigning newpool.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600(a)gmail.com>
Reviewed-by: Yu Kuai <yukuai3(a)huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@…
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Signed-off-by: Zheng Qixing <zhengqixing(a)huawei.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 0275dcb18692..b1d79edaa92b 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3312,6 +3312,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);

 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
--
2.39.2
[PATCH] md/raid1: Fix stack memory use after return in raid1_reshape
by Zheng Qixing 05 Aug '25

From: Wang Jinchao <wangjinchao600(a)gmail.com>

mainline inclusion
from mainline-v6.16-rc6
commit d67ed2ccd2d1dcfda9292c0ea8697a9d0f2f0d98
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXOC
CVE: CVE-2025-38445

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

------------------

In the raid1_reshape function, newpool is allocated on the stack and
assigned to conf->r1bio_pool. This results in conf->r1bio_pool.wait.head
pointing to a stack address. Accessing this address later can lead to a
kernel panic.

Example access path:

raid1_reshape() {
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request() {
	alloc_r1bio() {
		mempool_alloc() {
			// if pool->alloc fails
			remove_element() {
				--pool->curr_nr;
			}
		}
	}
}

mempool_free() {
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address,
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix: reinit conf->r1bio_pool.wait after assigning newpool.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600(a)gmail.com>
Reviewed-by: Yu Kuai <yukuai3(a)huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@…
Signed-off-by: Yu Kuai <yukuai3(a)huawei.com>
Signed-off-by: Zheng Qixing <zhengqixing(a)huawei.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 0275dcb18692..b1d79edaa92b 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3312,6 +3312,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);

 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
--
2.39.2
[PATCH OLK-5.10] blk-mq: don't touch ->tagset in blk_mq_get_sq_hctx
by Zheng Qixing 05 Aug '25

From: Ming Lei <ming.lei(a)redhat.com>

mainline inclusion
from mainline-v5.19-rc1
commit 5d05426e2d5fd7df8afc866b78c36b37b00188b7
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IBP2VT
CVE: CVE-2022-49377

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

blk_mq_run_hw_queues() can run when there is no queued request and after
the queue has been cleaned up. At that point the tagset may already be
freed, because the tagset's lifetime is controlled by the driver and it
is often freed after blk_cleanup_queue() returns.

So don't touch ->tagset to figure out the current default hctx; use the
mapping built into the request queue instead, so a use-after-free on the
tagset is avoided. This should also be faster than retrieving the
mapping from the tagset.

Cc: "yukuai (C)" <yukuai3(a)huawei.com>
Cc: Jan Kara <jack(a)suse.cz>
Fixes: b6e68ee82585 ("blk-mq: Improve performance of non-mq IO schedulers with multiple HW queues")
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Link: https://lore.kernel.org/r/20220522122350.743103-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Zheng Qixing <zhengqixing(a)huawei.com>
---
 block/blk-mq.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f94adf15bf53..e7c690e954ee 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1796,8 +1796,7 @@ EXPORT_SYMBOL(blk_mq_run_hw_queue);
  */
 static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
 {
-	struct blk_mq_hw_ctx *hctx;
-
+	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
 	/*
 	 * If the IO scheduler does not respect hardware queues when
 	 * dispatching, we just don't bother with multiple HW queues and
@@ -1805,8 +1804,8 @@ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
 	 * just causes lock contention inside the scheduler and pointless cache
 	 * bouncing.
 	 */
-	hctx = blk_mq_map_queue_type(q, HCTX_TYPE_DEFAULT,
-				     raw_smp_processor_id());
+	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, 0, ctx);
+
 	if (!blk_mq_hctx_stopped(hctx))
 		return hctx;
 	return NULL;
--
2.46.1
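The fix follows an ownership rule: derive the lookup only from data the
request queue itself owns, never from a pointer whose lifetime the driver
controls. A toy userspace sketch (hypothetical types, not the real block-layer
structs) of the difference:

    #include <stdio.h>

    struct tagset { int map[4]; };      /* owned and freed by the driver */
    struct queue {
            struct tagset *set;         /* may already be freed */
            int map[4];                 /* queue's own copy of the mapping */
    };

    /* like the old code: reaches through q->set, a use-after-free once
     * the driver has freed the tagset after blk_cleanup_queue() */
    static int pick_hctx_unsafe(struct queue *q, int cpu)
    {
            return q->set->map[cpu];
    }

    /* like the fixed code: touches only memory owned by q itself, valid
     * for as long as q is */
    static int pick_hctx_safe(struct queue *q, int cpu)
    {
            return q->map[cpu];
    }

    int main(void)
    {
            struct queue q = { .set = 0, .map = { 0, 1, 2, 3 } };
            printf("cpu 2 -> hctx %d\n", pick_hctx_safe(&q, 2));
            return 0;
    }

In the patch, blk_mq_get_ctx() and blk_mq_map_queue() play the safe role:
both dereference only the request queue, whose lifetime the block layer
itself controls.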
[PATCH OLK-5.10] posix-cpu-timers: fix race between handle_posix_cpu_timers() and posix_cpu_timer_del()
by Xiongfeng Wang 05 Aug '25

From: Oleg Nesterov <oleg(a)redhat.com>

stable inclusion
from stable-v5.10.239
commit c076635b3a42771ace7d276de8dc3bc76ee2ba1b
category: bugfix
bugzilla: 189268
CVE: CVE-2025-38352

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

commit f90fff1e152dedf52b932240ebbd670d83330eca upstream.

If an exiting non-autoreaping task has already passed exit_notify() and
calls handle_posix_cpu_timers() from IRQ, it can be reaped by its parent
or debugger right after unlock_task_sighand().

If a concurrent posix_cpu_timer_del() runs at that moment, it won't be
able to detect timer->it.cpu.firing != 0: cpu_timer_task_rcu() and/or
lock_task_sighand() will fail.

Add the tsk->exit_state check into run_posix_cpu_timers() to fix this.

This fix is not needed if CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y, because
exit_task_work() is called before exit_notify(). But the check still
makes sense, task_work_add(&tsk->posix_cputimers_work.work) will fail
anyway in this case.

Cc: stable(a)vger.kernel.org
Reported-by: Benoît Sevens <bsevens(a)google.com>
Fixes: 0bdd2ed4138e ("sched: run_posix_cpu_timers: Don't check ->exit_state, use lock_task_sighand()")
Signed-off-by: Oleg Nesterov <oleg(a)redhat.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
---
 kernel/time/posix-cpu-timers.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
index 578d0ebedb677..3e08c89918336 100644
--- a/kernel/time/posix-cpu-timers.c
+++ b/kernel/time/posix-cpu-timers.c
@@ -1310,6 +1310,15 @@ void run_posix_cpu_timers(void)

 	lockdep_assert_irqs_disabled();

+	/*
+	 * Ensure that release_task(tsk) can't happen while
+	 * handle_posix_cpu_timers() is running. Otherwise, a concurrent
+	 * posix_cpu_timer_del() may fail to lock_task_sighand(tsk) and
+	 * miss timer->it.cpu.firing != 0.
+	 */
+	if (tsk->exit_state)
+		return;
+
 	/*
 	 * If the actual expiry is deferred to task work context and the
 	 * work is already scheduled there is no point to do anything here.
--
2.20.1
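To make the window concrete, an illustrative interleaving (reconstructed from
the description above, not taken from the original report):

    exiting task T (IRQ context)          another task
    ----------------------------          ------------
    run_posix_cpu_timers()
      handle_posix_cpu_timers()
        sets timer->it.cpu.firing
        unlock_task_sighand()
                                          parent/debugger reaps T
                                          (release_task(T))
                                          posix_cpu_timer_del()
                                            cpu_timer_task_rcu() and/or
                                            lock_task_sighand() fail
                                            -> firing != 0 never observed,
                                               timer torn down while firing

Bailing out of run_posix_cpu_timers() once tsk->exit_state is set means an
exiting task past exit_notify() never enters the firing window, so the reap
cannot race with an in-flight expiry.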
[PATCH OLK-6.6] posix-cpu-timers: fix race between handle_posix_cpu_timers() and posix_cpu_timer_del()
by Xiongfeng Wang 05 Aug '25

From: Oleg Nesterov <oleg(a)redhat.com>

stable inclusion
from stable-v6.6.94
commit 2c72fe18cc5f9f1750f5bc148cf1c94c29e106ff
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/ICOE0M
CVE: CVE-2025-38352

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

commit f90fff1e152dedf52b932240ebbd670d83330eca upstream.

If an exiting non-autoreaping task has already passed exit_notify() and
calls handle_posix_cpu_timers() from IRQ, it can be reaped by its parent
or debugger right after unlock_task_sighand().

If a concurrent posix_cpu_timer_del() runs at that moment, it won't be
able to detect timer->it.cpu.firing != 0: cpu_timer_task_rcu() and/or
lock_task_sighand() will fail.

Add the tsk->exit_state check into run_posix_cpu_timers() to fix this.

This fix is not needed if CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y, because
exit_task_work() is called before exit_notify(). But the check still
makes sense, task_work_add(&tsk->posix_cputimers_work.work) will fail
anyway in this case.

Cc: stable(a)vger.kernel.org
Reported-by: Benoît Sevens <bsevens(a)google.com>
Fixes: 0bdd2ed4138e ("sched: run_posix_cpu_timers: Don't check ->exit_state, use lock_task_sighand()")
Signed-off-by: Oleg Nesterov <oleg(a)redhat.com>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
---
 kernel/time/posix-cpu-timers.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
index e9c6f9d0e42c..9af1f2a72a0a 100644
--- a/kernel/time/posix-cpu-timers.c
+++ b/kernel/time/posix-cpu-timers.c
@@ -1437,6 +1437,15 @@ void run_posix_cpu_timers(void)

 	lockdep_assert_irqs_disabled();

+	/*
+	 * Ensure that release_task(tsk) can't happen while
+	 * handle_posix_cpu_timers() is running. Otherwise, a concurrent
+	 * posix_cpu_timer_del() may fail to lock_task_sighand(tsk) and
+	 * miss timer->it.cpu.firing != 0.
+	 */
+	if (tsk->exit_state)
+		return;
+
 	/*
 	 * If the actual expiry is deferred to task work context and the
 	 * work is already scheduled there is no point to do anything here.
--
2.20.1

Powered by HyperKitty