mailweb.openeuler.org
Kernel

kernel@openeuler.org

  • 47 participants
  • 19848 discussions
[openeuler:openEuler-1.0-LTS 1743/1743] fs/block_dev.c:1075:5: warning: no previous prototype for 'bd_prepare_to_claim'
by kernel test robot 06 Aug '25

tree:   https://gitee.com/openeuler/kernel.git openEuler-1.0-LTS
head:   58693b1f5fa8f02b34ab9a02096472232731e864
commit: fb5186196e267b3cd58b3ef31951427e8b153d05 [1743/1743] block: fix scan partition for exclusively open device again
config: x86_64-buildonly-randconfig-2004-20250802 (https://download.01.org/0day-ci/archive/20250805/202508051928.DGkSyfnv-lkp@…)
compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250805/202508051928.DGkSyfnv-lkp@…)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202508051928.DGkSyfnv-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> fs/block_dev.c:1075:5: warning: no previous prototype for 'bd_prepare_to_claim' [-Wmissing-prototypes]
    1075 | int bd_prepare_to_claim(struct block_device *bdev,
         |     ^~~~~~~~~~~~~~~~~~~
   fs/block_dev.c:1123:6: warning: no previous prototype for 'bd_abort_claiming' [-Wmissing-prototypes]
    1123 | void bd_abort_claiming(struct block_device *bdev, struct block_device *whole,
         |      ^~~~~~~~~~~~~~~~~

vim +/bd_prepare_to_claim +1075 fs/block_dev.c

  1061
  1062	/**
  1063	 * bd_prepare_to_claim - claim a block device
  1064	 * @bdev: block device of interest
  1065	 * @whole: the whole device containing @bdev, may equal @bdev
  1066	 * @holder: holder trying to claim @bdev
  1067	 *
  1068	 * Claim @bdev. This function fails if @bdev is already claimed by another
  1069	 * holder and waits if another claiming is in progress. return, the caller
  1070	 * has ownership of bd_claiming and bd_holder[s].
  1071	 *
  1072	 * RETURNS:
  1073	 * 0 if @bdev can be claimed, -EBUSY otherwise.
  1074	 */
> 1075	int bd_prepare_to_claim(struct block_device *bdev,
  1076			struct block_device *whole, void *holder)
  1077	{
  1078	retry:
  1079		spin_lock(&bdev_lock);
  1080		/* if someone else claimed, fail */
  1081		if (!bd_may_claim(bdev, whole, holder)) {
  1082			spin_unlock(&bdev_lock);
  1083			return -EBUSY;
  1084		}
  1085
  1086		/* if claiming is already in progress, wait for it to finish */
  1087		if (whole->bd_claiming) {
  1088			wait_queue_head_t *wq = bit_waitqueue(&whole->bd_claiming, 0);
  1089			DEFINE_WAIT(wait);
  1090
  1091			prepare_to_wait(wq, &wait, TASK_UNINTERRUPTIBLE);
  1092			spin_unlock(&bdev_lock);
  1093			schedule();
  1094			finish_wait(wq, &wait);
  1095			goto retry;
  1096		}
  1097
  1098		/* yay, all mine */
  1099		whole->bd_claiming = holder;
  1100		spin_unlock(&bdev_lock);
  1101		return 0;
  1102	}
  1103

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
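gcc raises -Wmissing-prototypes when a non-static function is defined without a prior declaration in scope; in a backport like this it usually means the function was carried over without its header declaration. A minimal sketch of the usual fix, assuming the declarations belong in include/linux/blkdev.h as they do in mainline (the exact header is a backport decision, not something this report dictates):

	/* include/linux/blkdev.h -- assumed location, mirroring mainline */
	struct block_device;

	int bd_prepare_to_claim(struct block_device *bdev,
				struct block_device *whole, void *holder);
	void bd_abort_claiming(struct block_device *bdev,
			       struct block_device *whole, void *holder);

Once fs/block_dev.c sees these declarations, both warnings disappear; marking the functions static would be the alternative only if no other translation unit needed them.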
[openeuler:openEuler-1.0-LTS 1743/1743] block/genhd.c:642:5: warning: no previous prototype for 'disk_scan_partitions'
by kernel test robot 05 Aug '25

tree:   https://gitee.com/openeuler/kernel.git openEuler-1.0-LTS
head:   3c7bbbad8e4b8331c0db8c827bcd03a54741d7fa
commit: cdfb5c11ad89867cd28c903369fbfebe3f36ca26 [1743/1743] block: fix kabi broken in ioctl.c
config: x86_64-buildonly-randconfig-2004-20250802 (https://download.01.org/0day-ci/archive/20250805/202508051758.1jFhoh8X-lkp@…)
compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250805/202508051758.1jFhoh8X-lkp@…)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202508051758.1jFhoh8X-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> block/genhd.c:642:5: warning: no previous prototype for 'disk_scan_partitions' [-Wmissing-prototypes]
     642 | int disk_scan_partitions(struct gendisk *disk, fmode_t mode)
         |     ^~~~~~~~~~~~~~~~~~~~
   block/genhd.c:533: warning: Function parameter or member 'devt' not described in 'blk_invalidate_devt'

vim +/disk_scan_partitions +642 block/genhd.c

d2bf1b6723ed0ea Tejun Heo         2010-12-08  641
efc73feb2901d27 Christoph Hellwig 2023-04-07 @642  int disk_scan_partitions(struct gendisk *disk, fmode_t mode)
b9484a857f600ca Yu Kuai           2022-08-09  643  {
b9484a857f600ca Yu Kuai           2022-08-09  644  	struct block_device *bdev;
efc73feb2901d27 Christoph Hellwig 2023-04-07  645  	int ret;
b9484a857f600ca Yu Kuai           2022-08-09  646
efc73feb2901d27 Christoph Hellwig 2023-04-07  647  	if (!disk_part_scan_enabled(disk))
efc73feb2901d27 Christoph Hellwig 2023-04-07  648  		return -EINVAL;
b9484a857f600ca Yu Kuai           2022-08-09  649
b9484a857f600ca Yu Kuai           2022-08-09  650  	bdev = bdget_disk(disk, 0);
b9484a857f600ca Yu Kuai           2022-08-09  651  	if (!bdev)
efc73feb2901d27 Christoph Hellwig 2023-04-07  652  		return -ENOMEM;
b9484a857f600ca Yu Kuai           2022-08-09  653
b9484a857f600ca Yu Kuai           2022-08-09  654  	bdev->bd_invalidated = 1;
efc73feb2901d27 Christoph Hellwig 2023-04-07  655
efc73feb2901d27 Christoph Hellwig 2023-04-07  656  	ret = blkdev_get(bdev, mode, NULL);
efc73feb2901d27 Christoph Hellwig 2023-04-07  657  	if (!ret)
efc73feb2901d27 Christoph Hellwig 2023-04-07  658  		blkdev_put(bdev, mode);
efc73feb2901d27 Christoph Hellwig 2023-04-07  659
efc73feb2901d27 Christoph Hellwig 2023-04-07  660  	return ret;
fbbec472351c994 Christoph Hellwig 2023-04-07  661  }
fbbec472351c994 Christoph Hellwig 2023-04-07  662

:::::: The code at line 642 was first introduced by commit
:::::: efc73feb2901d27dcd01fa859d1378aee42850aa block: merge disk_scan_partitions and blkdev_reread_part

:::::: TO: Christoph Hellwig <hch@lst.de>
:::::: CC: Yongqiang Liu <duanzi@zju.edu.cn>

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
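The missing-prototype half of this report has the same shape as the fs/block_dev.c one above: a declaration visible to block/genhd.c (block/blk.h would be a plausible home, though that choice is an assumption) silences it. The second warning is a kernel-doc complaint: the comment above blk_invalidate_devt() never describes its parameter. A hedged sketch of that fix, with illustrative wording rather than the actual openEuler change:

	/**
	 * blk_invalidate_devt - invalidate a cached device-number lookup
	 * @devt: device number to invalidate
	 *
	 * The missing "@devt:" line is all the W=1 kernel-doc checker is
	 * asking for; the rest of the existing comment can stay as it is.
	 */
	void blk_invalidate_devt(dev_t devt);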
[openeuler:OLK-6.6 2650/2650] WARNING: modpost: vmlinux: section mismatch in reference: virtio_fs_init+0x105 (section: .init.text) -> virtio_fs_sysfs_exit (section: .exit.text)
by kernel test robot 05 Aug '25

tree:   https://gitee.com/openeuler/kernel.git OLK-6.6
head:   5d1af5a159ade19469a4a6aceaa10d4cece4eebb
commit: cc6009cb24f9754275aa850a61fd12554acdec36 [2650/2650] virtiofs: export filesystem tags through sysfs
config: x86_64-buildonly-randconfig-2001-20250805 (https://download.01.org/0day-ci/archive/20250805/202508051633.5bENwyOF-lkp@…)
compiler: gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250805/202508051633.5bENwyOF-lkp@…)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202508051633.5bENwyOF-lkp@intel.com/

All warnings (new ones prefixed by >>, old ones prefixed by <<):

>> WARNING: modpost: vmlinux: section mismatch in reference: virtio_fs_init+0x105 (section: .init.text) -> virtio_fs_sysfs_exit (section: .exit.text)

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
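modpost complains because virtio_fs_init() sits in .init.text while virtio_fs_sysfs_exit() sits in .exit.text, and .exit.text is discarded for built-in code, so the error-path call could target freed memory. The customary fix is to drop the __exit annotation from the helper that the init error path calls. A sketch under assumptions: the helper body and the virtio_fs_driver name follow the upstream virtiofs code, and the actual openEuler patch may differ.

	static struct kset *virtio_fs_kset;
	static struct virtio_driver virtio_fs_driver;	/* stand-in for the driver's real definition */

	/* was "static void __exit virtio_fs_sysfs_exit(void)": __exit must go,
	 * because the __init function below references it */
	static void virtio_fs_sysfs_exit(void)
	{
		kset_unregister(virtio_fs_kset);
		virtio_fs_kset = NULL;
	}

	static int __init virtio_fs_init(void)
	{
		int ret;

		virtio_fs_kset = kset_create_and_add("virtiofs", NULL, fs_kobj);
		if (!virtio_fs_kset)
			return -ENOMEM;

		ret = register_virtio_driver(&virtio_fs_driver);
		if (ret < 0)
			virtio_fs_sysfs_exit();	/* the reference modpost flagged */

		return ret;
	}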
[PATCH openEuler-1.0-LTS] md/raid1: Fix stack memory use after return in raid1_reshape
by Zheng Qixing 05 Aug '25

From: Wang Jinchao <wangjinchao600@gmail.com>

mainline inclusion
from mainline-v6.16-rc6
commit d67ed2ccd2d1dcfda9292c0ea8697a9d0f2f0d98
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXOC
CVE: CVE-2025-38445

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

------------------

In the raid1_reshape function, newpool is allocated on the stack and
assigned to conf->r1bio_pool. This results in conf->r1bio_pool.wait.head
pointing to a stack address. Accessing this address later can lead to a
kernel panic.

Example access path:

raid1_reshape()
{
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request()
{
	alloc_r1bio()
	{
		mempool_alloc()
		{
			// if pool->alloc fails
			remove_element()
			{
				--pool->curr_nr;
			}
		}
	}
}

mempool_free()
{
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix:
reinit conf->r1bio_pool.wait after assigning newpool.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@…
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 0275dcb18692..b1d79edaa92b 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3312,6 +3312,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);
 
 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
-- 
2.39.2
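The failure mode generalizes beyond md: wait_queue_head_t embeds a list head whose next/prev point back into the structure itself, so the struct assignment conf->r1bio_pool = newpool copies those pointers verbatim and leaves them aimed at the stack frame that is about to die. A self-contained userspace sketch with stand-in types (not the kernel's), compilable with any C compiler:

	#include <stdio.h>

	struct list_head { struct list_head *next, *prev; };

	struct pool {
		struct list_head wait;	/* stands in for r1bio_pool.wait.head */
	};

	static void pool_init(struct pool *p)
	{
		/* self-referential, just like init_waitqueue_head() */
		p->wait.next = &p->wait;
		p->wait.prev = &p->wait;
	}

	int main(void)
	{
		struct pool global;		/* stands in for conf->r1bio_pool */

		{
			struct pool newpool;	/* stack-local, as in raid1_reshape() */
			pool_init(&newpool);
			global = newpool;	/* struct copy: pointers still target newpool */
		}

		/* global.wait.next still points at the dead stack slot, not at global */
		printf("head %p, next %p (%s)\n",
		       (void *)&global.wait, (void *)global.wait.next,
		       global.wait.next == &global.wait ? "ok" : "dangling");
		return 0;
	}

Re-running the initializer on the copy is the whole fix, which is exactly what the added init_waitqueue_head() call does for the real structure.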
[PATCH OLK-6.6] raid10: cleanup memleak at raid10_make_request
by Zheng Qixing 05 Aug '25

From: Nigel Croxon <ncroxon@redhat.com>

mainline inclusion
from mainline-v6.16-rc6
commit 43806c3d5b9bb7d74ba4e33a6a8a41ac988bde24
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXO6
CVE: CVE-2025-38444

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

------------------

If raid10_read_request or raid10_write_request registers a new request
and the REQ_NOWAIT flag is set, the code does not free the malloc from
the mempool.

unreferenced object 0xffff8884802c3200 (size 192):
  comm "fio", pid 9197, jiffies 4298078271
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 88 41 02 00 00 00 00 00  .........A......
    08 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace (crc c1a049a2):
    __kmalloc+0x2bb/0x450
    mempool_alloc+0x11b/0x320
    raid10_make_request+0x19e/0x650 [raid10]
    md_handle_request+0x3b3/0x9e0
    __submit_bio+0x394/0x560
    __submit_bio_noacct+0x145/0x530
    submit_bio_noacct_nocheck+0x682/0x830
    __blkdev_direct_IO_async+0x4dc/0x6b0
    blkdev_read_iter+0x1e5/0x3b0
    __io_read+0x230/0x1110
    io_read+0x13/0x30
    io_issue_sqe+0x134/0x1180
    io_submit_sqes+0x48c/0xe90
    __do_sys_io_uring_enter+0x574/0x8b0
    do_syscall_64+0x5c/0xe0
    entry_SYSCALL_64_after_hwframe+0x76/0x7e

V4: changing backing tree to see if CKI tests will pass. The patch code
has not changed between any versions.

Fixes: c9aa889b035f ("md: raid10 add nowait support")
Signed-off-by: Nigel Croxon <ncroxon@redhat.com>
Link: https://lore.kernel.org/linux-raid/c0787379-9caa-42f3-b5fc-369aed784400@red…
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
---
 drivers/md/raid10.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index fbe52bd69ff9..3aabd087d375 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1194,8 +1194,11 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
 		}
 	}
 
-	if (!regular_request_wait(mddev, conf, bio, r10_bio->sectors))
+	if (!regular_request_wait(mddev, conf, bio, r10_bio->sectors)) {
+		raid_end_bio_io(r10_bio);
 		return;
+	}
+
 	rdev = read_balance(conf, r10_bio, &max_sectors);
 	if (!rdev) {
 		if (err_rdev) {
@@ -1385,8 +1388,11 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 	}
 
 	sectors = r10_bio->sectors;
-	if (!regular_request_wait(mddev, conf, bio, sectors))
+	if (!regular_request_wait(mddev, conf, bio, sectors)) {
+		raid_end_bio_io(r10_bio);
 		return;
+	}
+
 	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) &&
 	    (mddev->reshape_backwards
 	     ? (bio->bi_iter.bi_sector < conf->reshape_safe &&
-- 
2.39.2
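The invariant the patch restores: once r10_bio comes out of the mempool, every exit path, including the REQ_NOWAIT "would block" bail-out in regular_request_wait(), must hand it back, and raid_end_bio_io() is the routine that does so. A condensed userspace sketch of the before/after shape, with hypothetical stand-ins (malloc/free playing the roles of mempool_alloc and raid_end_bio_io):

	#include <stdbool.h>
	#include <stdlib.h>

	struct r10bio { int sectors; };

	/* stand-in: returns false when a REQ_NOWAIT request would have to block */
	static bool regular_request_wait_stub(bool nowait)
	{
		return !nowait;
	}

	static void read_request_shape(bool nowait)
	{
		struct r10bio *r10_bio = malloc(sizeof(*r10_bio));

		if (!regular_request_wait_stub(nowait)) {
			free(r10_bio);	/* the fix: the old code returned without this */
			return;
		}

		/* ... normal path uses r10_bio and frees it at end_io ... */
		free(r10_bio);
	}

	int main(void)
	{
		read_request_shape(true);	/* before the fix, this path leaked */
		return 0;
	}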
[PATCH OLK-5.10] md/raid1: Fix stack memory use after return in raid1_reshape
by Zheng Qixing 05 Aug '25

From: Wang Jinchao <wangjinchao600@gmail.com>

stable inclusion
from stable-v5.10.240
commit 12b00ec99624f8da8c325f2dd6e807df26df0025
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXOC
CVE: CVE-2025-38445

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

------------------

[ Upstream commit d67ed2ccd2d1dcfda9292c0ea8697a9d0f2f0d98 ]

In the raid1_reshape function, newpool is allocated on the stack and
assigned to conf->r1bio_pool. This results in conf->r1bio_pool.wait.head
pointing to a stack address. Accessing this address later can lead to a
kernel panic.

Example access path:

raid1_reshape()
{
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request()
{
	alloc_r1bio()
	{
		mempool_alloc()
		{
			// if pool->alloc fails
			remove_element()
			{
				--pool->curr_nr;
			}
		}
	}
}

mempool_free()
{
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix:
reinit conf->r1bio_pool.wait after assigning newpool.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@…
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 11f649ed2466..acd008f97847 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3295,6 +3295,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);
 
 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
-- 
2.39.2
[PATCH OLK-6.6] md/raid1: Fix stack memory use after return in raid1_reshape
by Zheng Qixing 05 Aug '25

From: Wang Jinchao <wangjinchao600@gmail.com>

stable inclusion
from stable-v6.6.99
commit df5894014a92ff0196dbc212a7764e97366fd2b7
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXOC
CVE: CVE-2025-38445

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

------------------

[ Upstream commit d67ed2ccd2d1dcfda9292c0ea8697a9d0f2f0d98 ]

In the raid1_reshape function, newpool is allocated on the stack and
assigned to conf->r1bio_pool. This results in conf->r1bio_pool.wait.head
pointing to a stack address. Accessing this address later can lead to a
kernel panic.

Example access path:

raid1_reshape()
{
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request()
{
	alloc_r1bio()
	{
		mempool_alloc()
		{
			// if pool->alloc fails
			remove_element()
			{
				--pool->curr_nr;
			}
		}
	}
}

mempool_free()
{
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix:
reinit conf->r1bio_pool.wait after assigning newpool.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@…
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 4b7e68276d6c..8741b10db2bc 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3263,6 +3263,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);
 
 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
-- 
2.39.2
[openEuler-1.0-LTS] md/raid1: Fix stack memory use after return in raid1_reshape
by Zheng Qixing 05 Aug '25

From: Wang Jinchao <wangjinchao600@gmail.com>

mainline inclusion
from mainline-v6.16-rc6
commit d67ed2ccd2d1dcfda9292c0ea8697a9d0f2f0d98
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXOC
CVE: CVE-2025-38445

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

------------------

In the raid1_reshape function, newpool is allocated on the stack and
assigned to conf->r1bio_pool. This results in conf->r1bio_pool.wait.head
pointing to a stack address. Accessing this address later can lead to a
kernel panic.

Example access path:

raid1_reshape()
{
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request()
{
	alloc_r1bio()
	{
		mempool_alloc()
		{
			// if pool->alloc fails
			remove_element()
			{
				--pool->curr_nr;
			}
		}
	}
}

mempool_free()
{
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix:
reinit conf->r1bio_pool.wait after assigning newpool.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@…
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 0275dcb18692..b1d79edaa92b 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3312,6 +3312,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);
 
 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
-- 
2.39.2
[PATCH] md/raid1: Fix stack memory use after return in raid1_reshape
by Zheng Qixing 05 Aug '25

From: Wang Jinchao <wangjinchao600@gmail.com>

mainline inclusion
from mainline-v6.16-rc6
commit d67ed2ccd2d1dcfda9292c0ea8697a9d0f2f0d98
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICOXOC
CVE: CVE-2025-38445

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

------------------

In the raid1_reshape function, newpool is allocated on the stack and
assigned to conf->r1bio_pool. This results in conf->r1bio_pool.wait.head
pointing to a stack address. Accessing this address later can lead to a
kernel panic.

Example access path:

raid1_reshape()
{
	// newpool is on the stack
	mempool_t newpool, oldpool;
	// initialize newpool.wait.head to stack address
	mempool_init(&newpool, ...);
	conf->r1bio_pool = newpool;
}

raid1_read_request() or raid1_write_request()
{
	alloc_r1bio()
	{
		mempool_alloc()
		{
			// if pool->alloc fails
			remove_element()
			{
				--pool->curr_nr;
			}
		}
	}
}

mempool_free()
{
	if (pool->curr_nr < pool->min_nr) {
		// pool->wait.head is a stack address
		// wake_up() will try to access this invalid address
		// which leads to a kernel panic
		wake_up(&pool->wait);
		return;
	}
}

Fix:
reinit conf->r1bio_pool.wait after assigning newpool.

Fixes: afeee514ce7f ("md: convert to bioset_init()/mempool_init()")
Signed-off-by: Wang Jinchao <wangjinchao600@gmail.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/linux-raid/20250612112901.3023950-1-wangjinchao600@…
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
---
 drivers/md/raid1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 0275dcb18692..b1d79edaa92b 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3312,6 +3312,7 @@ static int raid1_reshape(struct mddev *mddev)
 	/* ok, everything is stopped */
 	oldpool = conf->r1bio_pool;
 	conf->r1bio_pool = newpool;
+	init_waitqueue_head(&conf->r1bio_pool.wait);
 
 	for (d = d2 = 0; d < conf->raid_disks; d++) {
 		struct md_rdev *rdev = conf->mirrors[d].rdev;
-- 
2.39.2
[PATCH OLK-5.10] blk-mq: don't touch ->tagset in blk_mq_get_sq_hctx
by Zheng Qixing 05 Aug '25

From: Ming Lei <ming.lei@redhat.com>

mainline inclusion
from mainline-v5.19-rc1
commit 5d05426e2d5fd7df8afc866b78c36b37b00188b7
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IBP2VT
CVE: CVE-2022-49377

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

blk_mq_run_hw_queues() could be run when there is no queued request and
after the queue is cleaned up; at that point the tagset is freed, because
tagset lifetime is covered by the driver and it is often freed after
blk_cleanup_queue() returns.

So don't touch ->tagset for figuring out the current default hctx; use
the mapping built into the request queue instead, so a use-after-free on
the tagset can be avoided. Meantime this way should be faster than
retrieving the mapping from the tagset.

Cc: "yukuai (C)" <yukuai3@huawei.com>
Cc: Jan Kara <jack@suse.cz>
Fixes: b6e68ee82585 ("blk-mq: Improve performance of non-mq IO schedulers with multiple HW queues")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20220522122350.743103-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Zheng Qixing <zhengqixing@huawei.com>
---
 block/blk-mq.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index f94adf15bf53..e7c690e954ee 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1796,8 +1796,7 @@ EXPORT_SYMBOL(blk_mq_run_hw_queue);
  */
 static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
 {
-	struct blk_mq_hw_ctx *hctx;
-
+	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
 	/*
 	 * If the IO scheduler does not respect hardware queues when
 	 * dispatching, we just don't bother with multiple HW queues and
@@ -1805,8 +1804,8 @@ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
 	 * just causes lock contention inside the scheduler and pointless cache
 	 * bouncing.
 	 */
-	hctx = blk_mq_map_queue_type(q, HCTX_TYPE_DEFAULT,
-			raw_smp_processor_id());
+	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, 0, ctx);
+
 	if (!blk_mq_hctx_stopped(hctx))
 		return hctx;
 	return NULL;
-- 
2.46.1
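The lifetime argument in miniature: q->tag_set points at driver-owned memory that can already be freed while blk_mq_run_hw_queues() still runs during teardown, whereas the per-ctx hctx table is owned by, and dies with, the request queue itself. A toy sketch with illustrative structs (not the real blk-mq definitions) of why resolving the hctx through queue-owned data avoids the use-after-free:

	#include <stddef.h>

	struct hctx { int stopped; };

	struct ctx {				/* per-cpu software context, queue-owned */
		struct hctx *hctxs[1];		/* slot ~ HCTX_TYPE_DEFAULT */
	};

	struct tag_set { struct hctx *map; };	/* driver-owned; may be freed first */

	struct queue {
		struct ctx *ctx;		/* same lifetime as the queue */
		struct tag_set *set;		/* may dangle during teardown */
	};

	/* after the patch: only queue-owned data is dereferenced,
	 * ~ blk_mq_map_queue(q, 0, ctx) instead of the tagset mapping */
	static struct hctx *get_sq_hctx(struct queue *q)
	{
		return q->ctx->hctxs[0];
	}

	int main(void)
	{
		struct hctx h = { 0 };
		struct ctx c = { { &h } };
		struct queue q = { &c, NULL };	/* tagset already gone */

		return get_sq_hctx(&q) == &h ? 0 : 1;	/* q->set never touched */
	}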