From: Brian Gianforcaro <b.gianfo@gmail.com>
mainline inclusion
from mainline-5.5-rc3
commit d195a66e367b3d24fdd3c3565f37ab7c6882b9d2
category: feature
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=27
CVE: NA

---------------------------
- Fix a few typos found while reading the code.
- Fix stale io_get_sqring comment referencing s->sqe; the 's' parameter was renamed to 'req', but the comment's point still holds.
Signed-off-by: Brian Gianforcaro <b.gianfo@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: yangerkun <yangerkun@huawei.com>
Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 fs/io-wq.c    | 2 +-
 fs/io_uring.c | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/io-wq.c b/fs/io-wq.c
index 1be307a174b5..e38e3c6e30f7 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -949,7 +949,7 @@ static enum io_wq_cancel io_wqe_cancel_work(struct io_wqe *wqe,
 	/*
 	 * Now check if a free (going busy) or busy worker has the work
 	 * currently running. If we find it there, we'll return CANCEL_RUNNING
-	 * as an indication that we attempte to signal cancellation. The
+	 * as an indication that we attempt to signal cancellation. The
 	 * completion will run normally in this case.
 	 */
 	rcu_read_lock();
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 17103383d146..7c388b4331a3 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1177,7 +1177,7 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
 }
 
 /*
- * Poll for a mininum of 'min' events. Note that if min == 0 we consider that a
+ * Poll for a minimum of 'min' events. Note that if min == 0 we consider that a
  * non-spinning poll check - we'll still enter the driver poll loop, but only
  * as a non-spinning completion check.
  */
@@ -2572,7 +2572,7 @@ static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
 
 	/*
 	 * Adjust the reqs sequence before the current one because it
-	 * will consume a slot in the cq_ring and the the cq_tail
+	 * will consume a slot in the cq_ring and the cq_tail
 	 * pointer will be increased, otherwise other timeout reqs may
 	 * return in advance without waiting for enough wait_nr.
 	 */
@@ -3429,7 +3429,7 @@ static void io_commit_sqring(struct io_ring_ctx *ctx)
 }
 
 /*
- * Fetch an sqe, if one is available. Note that s->sqe will point to memory
+ * Fetch an sqe, if one is available. Note that req->sqe will point to memory
  * that is mapped by userspace. This means that care needs to be taken to
  * ensure that reads are stable, as we cannot rely on userspace always
  * being a good citizen. If members of the sqe are validated and then later
@@ -3693,7 +3693,7 @@ static inline bool io_should_wake(struct io_wait_queue *iowq, bool noflush)
 	struct io_ring_ctx *ctx = iowq->ctx;
 
 	/*
-	 * Wake up if we have enough events, or if a timeout occured since we
+	 * Wake up if we have enough events, or if a timeout occurred since we
 	 * started waiting. For timeouts, we always want to return to userspace,
 	 * regardless of event count.
 	 */
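A note on the io_get_sqring comment touched above: the "reads are stable" concern exists because the sqe lives in memory mapped into (and writable by) userspace, so a field must be loaded once, validated, and then only the local copy used; re-reading the shared field after validation would let a misbehaving task change the value between check and use. The surrounding comment in fs/io_uring.c goes on to recommend READ_ONCE() for exactly this reason. Below is a minimal userspace sketch of that pattern, with a simplified READ_ONCE() stand-in (GNU typeof, gcc/clang) and a hypothetical one-field sqe; it is illustrative only, not the actual io_uring code.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's READ_ONCE(): the volatile cast
 * forces a single load and keeps the compiler from re-reading the
 * shared location later. */
#define READ_ONCE(x) (*(const volatile typeof(x) *)&(x))

/* Hypothetical, trimmed-down sqe; the real struct io_uring_sqe is larger. */
struct sqe {
	uint8_t opcode;
};

#define OP_NOP 0
#define OP_MAX 1

/* Pretend 'sqe' points into a ring mapped by, and writable from, userspace. */
static int handle_sqe(struct sqe *sqe)
{
	/* Load the field once; validate and use only the local copy.
	 * Re-reading sqe->opcode after the check would reopen the race. */
	uint8_t opcode = READ_ONCE(sqe->opcode);

	if (opcode > OP_MAX)
		return -1;	/* reject before acting on it */

	printf("dispatching opcode %u\n", opcode);
	return 0;
}

int main(void)
{
	struct sqe s = { .opcode = OP_NOP };
	return handle_sqe(&s);
}

The same load-once-then-validate discipline is what the full comment asks of any code that consumes sqe members after checking them.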