[PATCH OLK-6.6 v2 0/2] block: fix discard ioctl

Li Nan (2):
  block: fix overflow in blk_ioctl_discard()
  block: check io size before submit discard

 block/blk-lib.c | 8 ++++++++
 block/ioctl.c   | 5 +++--
 2 files changed, 11 insertions(+), 2 deletions(-)

--
2.39.2

mainline inclusion
from mainline-v6.9-rc3
commit 22d24a544b0d49bbcbd61c8c0eaf77d3c9297155
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I9K0H6
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...

--------------------------------

There is no check for overflow of 'start + len' in blk_ioctl_discard().
A hung task occurs if a discard ioctl is submitted with the following
parameters:

  start = 0x80000000000ff000, len = 0x8000000000fff000;

Add the overflow validation now.

Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240329012319.2034550-1-linan666@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/ioctl.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index 572bef75d601..b7104fd80a46 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -89,7 +89,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
 		unsigned long arg)
 {
 	uint64_t range[2];
-	uint64_t start, len;
+	uint64_t start, len, end;
 	struct inode *inode = bdev->bd_inode;
 	int err;
 
@@ -110,7 +110,8 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
 	if (len & 511)
 		return -EINVAL;
 
-	if (start + len > bdev_nr_bytes(bdev))
+	if (check_add_overflow(start, len, &end) ||
+	    end > bdev_nr_bytes(bdev))
 		return -EINVAL;
 
 	filemap_invalidate_lock(inode->i_mapping);
--
2.39.2
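To see why the old check passed, note that with the values above
'start + len' wraps around u64 to 0x10fe000 (about 17MB), which is
smaller than almost any block device, so the bogus range looked valid.
The sketch below is a minimal user-space reproduction of the wraparound,
not the kernel code path; it assumes GCC/Clang's
__builtin_add_overflow(), the primitive that the kernel's
check_add_overflow() macro is built on.

#include <stdio.h>
#include <stdint.h>

/*
 * User-space illustration only: with the values from the commit
 * message, 'start + len' wraps, so the old unchecked comparison
 * "start + len > bdev_nr_bytes(bdev)" let the range through.
 */
int main(void)
{
	uint64_t start = 0x80000000000ff000ULL;
	uint64_t len   = 0x8000000000fff000ULL;
	uint64_t end;

	if (__builtin_add_overflow(start, len, &end))
		printf("overflow: start + len wraps to %#llx\n",
		       (unsigned long long)end);
	return 0;
}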

Offering: HULK
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I9K0H6

--------------------------------

In __blkdev_issue_discard(), q->limits.discard_granularity is read
multiple times:

  WARN_ON_ONCE(!q->limits.discard_granularity)		[1]
  ...
  q->limits.discard_granularity >> SECTOR_SHIFT		[2]
  ...
  bio_aligned_discard_max_sectors			[3]

It can be changed to 0 after check [1], for example by the ioctl
'LOOP_SET_STATUS'. This is undesirable: if 'discard_granularity' is set
to 0, a BUG_ON might be triggered and the 'while (nr_sects)' loop will
never exit (see the Fixes commit). Fix it by checking 'req_sects' before
submitting IO, and return an error code directly if it is 0.

Fixes: b35fd7422c2f ("block: check queue's limits.discard_granularity in __blkdev_issue_discard()")
Signed-off-by: Li Nan <linan122@huawei.com>
---
 block/blk-lib.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index e59c3069e835..564904eeaed5 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -64,6 +64,14 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 		sector_t req_sects =
 			min(nr_sects, bio_discard_limit(bdev, sector));
 
+		if (!req_sects) {
+			if (bio) {
+				bio_io_error(bio);
+				bio_put(bio);
+			}
+			return -EOPNOTSUPP;
+		}
+
 		bio = blk_next_bio(bio, bdev, 0, REQ_OP_DISCARD, gfp_mask);
 		bio->bi_iter.bi_sector = sector;
 		bio->bi_iter.bi_size = req_sects << 9;
--
2.39.2
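The race is a classic check-then-use: the granularity is validated once
with WARN_ON_ONCE() but re-read on every loop iteration, so a concurrent
LOOP_SET_STATUS can zero it mid-loop. A hypothetical user-space analogue
follows (issue_discard() and limit_sects() are invented names for
illustration, not kernel APIs); it shows why a limit that collapses to 0
hangs the loop and how the new guard breaks out.

#include <stdint.h>
#include <errno.h>

/*
 * Illustrative analogue only. limit_sects() stands in for
 * bio_discard_limit(), whose result depends on
 * q->limits.discard_granularity and can become 0 if another thread
 * zeroes the granularity after the initial WARN_ON_ONCE() check.
 */
int issue_discard(uint64_t nr_sects, uint64_t (*limit_sects)(void))
{
	while (nr_sects) {
		uint64_t limit = limit_sects();	/* re-read each pass */
		uint64_t req_sects = nr_sects < limit ? nr_sects : limit;

		/*
		 * Without this guard, a zero limit means req_sects == 0,
		 * nr_sects never shrinks, and the loop spins forever.
		 */
		if (!req_sects)
			return -EOPNOTSUPP;

		/* ... build and submit one bio for req_sects here ... */
		nr_sects -= req_sects;
	}
	return 0;
}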

FeedBack: The patch(es) which you have sent to kernel@openeuler.org mailing list has been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/6819
Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/I...