From: Jens Axboe <axboe@kernel.dk>
mainline inclusion
from mainline-5.6-rc1
commit e46a7950d362231a4d0b078af5f4c109b8e5ac9e
category: feature
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=27
CVE: NA

---------------------------
We currently flush early, but if a switch is already in progress and a new one gets scheduled, we need to ensure we flush after our teardown as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: yangerkun <yangerkun@huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 fs/io_uring.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 40cbb76ed770..ed348a47cea1 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5030,11 +5030,14 @@ static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
 		return -ENXIO;
 
 	/* protect against inflight atomic switch, which drops the ref */
-	flush_work(&data->ref_work);
 	percpu_ref_get(&data->refs);
+	/* wait for existing switches */
+	flush_work(&data->ref_work);
 	percpu_ref_kill_and_confirm(&data->refs, io_file_ref_kill);
 	wait_for_completion(&data->done);
 	percpu_ref_put(&data->refs);
+	/* flush potential new switch */
+	flush_work(&data->ref_work);
 	percpu_ref_exit(&data->refs);
 
 	__io_sqe_files_unregister(ctx);
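For readers outside the kernel tree, the ordering the patch enforces (take a reference, flush any in-flight switch work, drop the references and wait for teardown to complete, then flush once more to catch a switch scheduled during teardown) can be modeled in userspace. The sketch below is a hedged illustration only: `FileData`, `ref_work`, and the two `put()` calls standing in for `percpu_ref_kill_and_confirm()` plus `percpu_ref_put()` are hypothetical analogues, not the io_uring implementation.

```python
import threading

class FileData:
    """Toy stand-in for io_uring's fixed-file data: a refcount plus a
    work item ("ref_work") that may run concurrently with unregister."""
    def __init__(self):
        self.refs = 1                  # initial reference
        self.lock = threading.Lock()
        self.done = threading.Event()  # analogue of data->done
        self.ref_work = None           # an in-flight threading.Thread, if any

    def get(self):
        with self.lock:
            self.refs += 1

    def put(self):
        with self.lock:
            self.refs -= 1
            if self.refs == 0:
                self.done.set()        # last ref gone -> teardown complete

    def flush_work(self):
        """Analogue of flush_work(): wait for any scheduled switch work."""
        if self.ref_work is not None:
            self.ref_work.join()
            self.ref_work = None

def files_unregister(data):
    """Mirrors the patched ordering in io_sqe_files_unregister()."""
    data.get()          # percpu_ref_get(): keep data alive while we flush
    data.flush_work()   # wait for existing switches
    data.put()          # "kill": drop the initial reference
    data.put()          # drop our temporary reference
    data.done.wait()    # wait_for_completion()
    data.flush_work()   # flush a switch scheduled during teardown
```

The key point the patch makes is the second `flush_work()`: flushing only before the kill leaves a window where new switch work scheduled during teardown is never waited for.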