mainline inclusion
from mainline-v6.5-rc1
commit 4f1731df60f9033669f024d06ae26a6301260b55
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IAQPKU
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4f1731df60f9033669f024d06ae26a6301260b55
--------------------------------
In __blk_mq_tag_busy/idle(), updating 'active_queues' and calculating 'wake_batch' is not atomic:
t1:                             t2:
__blk_mq_tag_busy               blk_mq_tag_busy
inc active_queues
// assume 1 -> 2
                                inc active_queues
                                // 2 -> 3
                                blk_mq_update_wake_batch
                                // calculate based on 3
blk_mq_update_wake_batch
/* calculate based on 2, while active_queues is actually 3. */
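The 'wake_batch' being recalculated is derived from the sampled 'users'
count, so whichever thread writes last with a stale sample leaves a stale
batch size behind. For reference, a simplified sketch of the helper
introduced by commit 180dccb0dba4 (field types and the exact body vary
across kernel versions; this is not the verbatim tree contents):

	static void blk_mq_update_wake_batch(struct blk_mq_tags *tags,
					     unsigned int users)
	{
		/*
		 * 'users' is the active_queues sample taken by the caller;
		 * the sbitmap wake batches are recalculated from it, so a
		 * stale sample leaves a stale wake_batch.
		 */
		if (!users)
			return;

		sbitmap_queue_recalculate_wake_batch(tags->bitmap_tags, users);
		sbitmap_queue_recalculate_wake_batch(tags->breserved_tags, users);
	}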
Fix this problem by protecting them with 'tags->lock'. This is not a hot path, so performance is not a concern. And now that all writers are inside the lock, switch 'active_queues' from atomic to unsigned int.
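For comparison, the mainline fix (commit 4f1731df60f9) does drop the
atomic. A sketch of the upstream busy path, reconstructed from that
commit and possibly differing from the actual source in minor details:

	void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
	{
		unsigned int users;
		struct blk_mq_tags *tags = hctx->tags;

		/* ... mark the hctx or queue active, as in the diff below ... */

		spin_lock_irq(&tags->lock);
		/*
		 * Upstream, 'active_queues' is a plain unsigned int: all
		 * writers hold tags->lock, and lockless readers pair with
		 * READ_ONCE().
		 */
		users = tags->active_queues + 1;
		WRITE_ONCE(tags->active_queues, users);
		blk_mq_update_wake_batch(tags, users);
		spin_unlock_irq(&tags->lock);
	}

This backport instead keeps the atomic_inc_return()/atomic_dec_return()
under the same lock, as explained in the kABI note below.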
Fixes: 180dccb0dba4 ("blk-mq: fix tag_get wait task can't be awakened")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20230610023043.2559121-1-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>

Conflicts:
	block/blk-mq-debugfs.c
	block/blk-mq-tag.c
	block/blk-mq-tag.h
	block/blk-mq.h
	include/linux/blk-mq.h
[Still use atomic for 'active_queues' to avoid breaking kABI]
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-mq-tag.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 676d56a04094..7a509ecc47dc 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -42,6 +42,7 @@ static void blk_mq_update_wake_batch(struct blk_mq_tags *tags,
 bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 {
 	unsigned int users;
+	struct blk_mq_tags *tags = hctx->tags;
 
 	if (blk_mq_is_sbitmap_shared(hctx->flags)) {
 		struct request_queue *q = hctx->queue;
@@ -57,9 +58,10 @@ bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 		}
 	}
 
-	users = atomic_inc_return(&hctx->tags->active_queues);
-
-	blk_mq_update_wake_batch(hctx->tags, users);
+	spin_lock_irq(&tags->lock);
+	users = atomic_inc_return(&tags->active_queues);
+	blk_mq_update_wake_batch(tags, users);
+	spin_unlock_irq(&tags->lock);
 
 	return true;
 }
@@ -94,9 +96,10 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
 		return;
 	}
 
+	spin_lock_irq(&tags->lock);
 	users = atomic_dec_return(&tags->active_queues);
-
 	blk_mq_update_wake_batch(tags, users);
+	spin_unlock_irq(&tags->lock);
 
 	blk_mq_tag_wakeup_all(tags, false);
 }
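A note on why the atomic type matters here: lockless readers such as
hctx_may_queue() still sample 'active_queues' when enforcing fair tag
sharing, and changing the field type would change the struct layout.
A simplified sketch of that reader side, loosely based on the
v5.10-era block/blk-mq.h (not part of this patch; details may differ):

	static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
					  struct sbitmap_queue *bt)
	{
		unsigned int depth, users;

		if (!hctx || !(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
			return true;

		/* Lockless sample: this side is a hot path and stays unlocked. */
		users = atomic_read(&hctx->tags->active_queues);
		if (!users)
			return true;

		/* Allow at least 4 tags; share the rest roughly evenly. */
		depth = max((bt->sb.depth + users - 1) / users, 4U);
		return __blk_mq_active_requests(hctx) < depth;
	}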