
From: Ming Lei <ming.lei@redhat.com>

mainline inclusion
from mainline-v5.14-rc7
commit c2da19ed50554ce52ecbad3655c98371fe58599f
category: bugfix
bugzilla: 34280, https://gitee.com/openeuler/kernel/issues/I4AKY4
CVE: NA

-----------------------------------------------

For fixing use-after-free during iterating over requests, we grabbed
request's refcount before calling ->fn in commit 2e315dc07df0 ("blk-mq:
grab rq->refcount before calling ->fn in blk_mq_tagset_busy_iter").
Turns out this way may cause kernel panic when iterating over one flush
request:

1) the old flush request's tag is just released, and this tag is reused
   by one new request, but ->rqs[] isn't updated yet

2) the flush request can be re-used for submitting one new flush
   command, so blk_rq_init() is called at the same time

3) meantime blk_mq_queue_tag_busy_iter() is called, and the old flush
   request is retrieved from ->rqs[tag]; when blk_mq_put_rq_ref() is
   called, flush_rq->end_io may not be updated yet, so a NULL pointer
   dereference is triggered in blk_mq_put_rq_ref().

Fix the issue by calling refcount_set(&flush_rq->ref, 1) after
flush_rq->end_io is set. So far the only other caller of blk_rq_init()
is scsi_ioctl_reset(), in which the request doesn't enter the block IO
stack and the request reference count isn't used, so the change is safe.

Fixes: 2e315dc07df0 ("blk-mq: grab rq->refcount before calling ->fn in blk_mq_tagset_busy_iter")
Reported-by: "Blank-Burian, Markus, Dr." <blankburian@uni-muenster.de>
Tested-by: "Blank-Burian, Markus, Dr." <blankburian@uni-muenster.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/20210811142624.618598-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>

Conflicts: use __blk_rq_init() instead of blk_rq_init().

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 block/blk-flush.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index c357e5c16d89c..c87a0ffba9c74 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -340,7 +340,7 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
 	 */
 	fq->flush_pending_idx ^= 1;
 
-	blk_rq_init(q, flush_rq);
+	__blk_rq_init(q, flush_rq);
 
 	/*
 	 * In case of none scheduler, borrow tag from the first request
@@ -371,6 +371,15 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
 	flush_rq->rq_disk = first_rq->rq_disk;
 	flush_rq->end_io = flush_end_io;
 
+	/*
+	 * Order WRITE ->end_io and WRITE rq->ref, and its pair is the one
+	 * implied in refcount_inc_not_zero() called from
+	 * blk_mq_find_and_get_req(), which orders WRITE/READ flush_rq->ref
+	 * and READ flush_rq->end_io
+	 */
+	smp_wmb();
+	refcount_set(&flush_rq->ref, 1);
+
 	return blk_flush_queue_rq(flush_rq, false);
 }
 
-- 
2.25.1
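
Not part of the patch itself: below is a minimal userspace sketch of the
publish/consume ordering the fix relies on, modelled with C11 atomics
instead of the kernel's smp_wmb()/refcount helpers. The names (fake_rq,
publish, try_get_and_end, fake_end_io) are hypothetical and only loosely
mirror blk_kick_flush(), blk_mq_find_and_get_req() and
blk_mq_put_rq_ref().

/*
 * Writer publishes ->end_io before making ->ref non-zero (a release
 * store stands in for smp_wmb() + refcount_set()); the reader only
 * dereferences ->end_io after taking a reference with an acquire
 * compare-and-swap, standing in for refcount_inc_not_zero().
 */
#include <stdatomic.h>
#include <stdio.h>

struct fake_rq {
	void (*end_io)(struct fake_rq *rq);
	atomic_int ref;			/* 0 means "not published yet" */
};

static void fake_end_io(struct fake_rq *rq)
{
	printf("end_io called on %p\n", (void *)rq);
}

/* Writer side: roughly what blk_kick_flush() does after this patch. */
static void publish(struct fake_rq *rq)
{
	rq->end_io = fake_end_io;
	/* ->end_io must be visible before ref becomes non-zero. */
	atomic_store_explicit(&rq->ref, 1, memory_order_release);
}

/* Reader side: roughly the iterator's get-ref-then-put path. */
static int try_get_and_end(struct fake_rq *rq)
{
	int old = atomic_load_explicit(&rq->ref, memory_order_acquire);

	/* Analogue of refcount_inc_not_zero(): bump only if non-zero. */
	while (old != 0 &&
	       !atomic_compare_exchange_weak_explicit(&rq->ref, &old, old + 1,
						      memory_order_acquire,
						      memory_order_acquire))
		;
	if (old == 0)
		return 0;		/* not published yet, skip it */

	rq->end_io(rq);			/* safe: pairs with the release store */
	return 1;
}

int main(void)
{
	struct fake_rq rq = { .end_io = NULL };

	atomic_init(&rq.ref, 0);
	printf("before publish: %d\n", try_get_and_end(&rq));	/* 0 */
	publish(&rq);
	printf("after publish:  %d\n", try_get_and_end(&rq));	/* 1 */
	return 0;
}

Without the ordering introduced by the patch (reference count already
non-zero before ->end_io is written, and no barrier between the two
stores), the reader could observe ref != 0 while ->end_io still holds
its old or NULL value, which is the NULL pointer dereference described
in step 3) above.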