From: Jens Axboe <axboe@kernel.dk>

stable inclusion
from stable-v6.1.167
commit 0f4ce79b8db7b040373fc664c8bc6c5fd74bd196
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14112
CVE: CVE-2026-23473

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...

--------------------------------

commit a68ed2df72131447d131531a08fe4dfcf4fa4653 upstream.

When a socket send and shutdown() happen back-to-back, both fire
wake-ups before the receiver's task_work has a chance to run. The
first wake gets poll ownership (poll_refs=1), and the second bumps it
to 2. When io_poll_check_events() runs, it calls io_poll_issue(),
which does a recv that reads the data and returns IOU_RETRY. The loop
then drains all accumulated refs (atomic_sub_return(2) -> 0) and
exits, even though only the first event was consumed. Since the
shutdown is a persistent state change, no further wakeups will happen,
and the multishot recv can hang forever.

Check specifically for HUP in the poll loop, and ensure that another
loop iteration is done to check for status if more than a single poll
activation is pending. This ensures we don't lose the shutdown event.

Cc: stable@vger.kernel.org
Fixes: dbc2564cfe0f ("io_uring: let fast poll support multishot")
Reported-by: Francis Brosseau <francis@malagauche.com>
Link: https://github.com/axboe/liburing/issues/1549
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Conflicts:
	io_uring/poll.c
[1. bcf8a0293a01 ("io_uring: introduce type alias for io_tw_state")
not merged; does not affect this patch.
2. 5027d02452c9 ("io_uring: unify STOP_MULTISHOT with IOU_OK")
not merged; does not affect this patch.
3. 2c762be5b798 ("io_uring: keep multishot request NAPI timeout
current") not merged, because the patch that caused that problem was
not merged.]
Signed-off-by: Zizhi Wo <wozizhi@huawei.com>
---
 io_uring/poll.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/io_uring/poll.c b/io_uring/poll.c
index caa2c5cf88c5..534351899b9e 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -286,10 +286,11 @@ static int io_poll_check_events(struct io_kiocb *req, struct io_tw_state *ts)
 				 * flag in advance.
 				 */
				atomic_andnot(IO_POLL_RETRY_FLAG, &req->poll_refs);
				v &= ~IO_POLL_RETRY_FLAG;
			}
+			v &= IO_POLL_REF_MASK;
		}

		/* the mask was stashed in __io_poll_execute */
		if (!req->cqe.res) {
			struct poll_table_struct pt = { ._key = req->apoll_events };
@@ -318,11 +319,17 @@ static int io_poll_check_events(struct io_kiocb *req, struct io_tw_state *ts)
			if (!io_req_post_cqe(req, mask, IORING_CQE_F_MORE)) {
				io_req_set_res(req, mask, 0);
				return IOU_POLL_REMOVE_POLL_USE_RES;
			}
		} else {
-			int ret = io_poll_issue(req, ts);
+			int ret;
+
+			/* multiple refs and HUP, ensure we loop once more */
+			if ((req->cqe.res & (POLLHUP | POLLRDHUP)) && v != 1)
+				v--;
+
+			ret = io_poll_issue(req, ts);
			if (ret == IOU_STOP_MULTISHOT)
				return IOU_POLL_REMOVE_POLL_USE_RES;
			else if (ret == IOU_REQUEUE)
				return IOU_POLL_REQUEUE;
			if (ret < 0)
@@ -334,11 +341,10 @@ static int io_poll_check_events(struct io_kiocb *req, struct io_tw_state *ts)
		}

		/*
		 * Release all references, retry if someone tried to restart
		 * task_work while we were executing it.
		 */
-		v &= IO_POLL_REF_MASK;
	} while (atomic_sub_return(v, &req->poll_refs) & IO_POLL_REF_MASK);

	return IOU_POLL_NO_ACTION;
}
-- 
2.39.2