From: Jinke Han <hanjinke.666@bytedance.com>
mainline inclusion
from mainline-v6.5-rc1
commit ad7c3b41e86b59943a903d23c7b037d820e6270c
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IAUKH4
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
After commit f382fb0bcef4 ("block: remove legacy IO schedulers"), blkio.throttle.io_serviced and blkio.throttle.io_service_bytes became the only stable io stats interface of cgroup v1, and these statistics are done in the blk-throttle code. But the current code only counts the bios that are actually throttled. When the user does not set any throttle limit, the io stats for cgroup v1 show nothing. Fix it according to the statistical method of v2, and make it count all ios accurately.
Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
Tested-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>
Acked-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/r/20230507170631.89607-1-hanjinke.666@bytedance.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Conflicts:
	block/blk-cgroup.c
[commit 0416f3be58c6 ("blk-cgroup: don't update io stat for root cgroup") is
 not backported.]
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-cgroup.c   | 6 ++++--
 block/blk-throttle.c | 6 ------
 block/blk-throttle.h | 9 +++++++++
 3 files changed, 13 insertions(+), 8 deletions(-)
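Note for reviewers: the "statistical method of v2" mentioned in the commit message is the per-cpu accounting done in blk_cgroup_bio_start(). The excerpt below is a sketch reconstructed from the mainline version of that function (the exact code in this tree may differ slightly); it shows the BIO_CGROUP_ACCT guard that the new cgroup v1 path in blk_should_throtl() mirrors, so that a split bio only adds its bytes once while each re-submitted fragment still bumps the io count:

	cpu = get_cpu();
	bis = per_cpu_ptr(bio->bi_blkg->iostat_cpu, cpu);
	flags = u64_stats_update_begin_irqsave(&bis->sync);

	/*
	 * A bio flagged BIO_CGROUP_ACCT is a re-submitted split: its size
	 * has already been accounted, so only the io count is bumped again.
	 */
	if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
		bio_set_flag(bio, BIO_CGROUP_ACCT);
		bis->cur.bytes[rwd] += bio->bi_iter.bi_size;
	}
	bis->cur.ios[rwd]++;

	u64_stats_update_end_irqrestore(&bis->sync, flags);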
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 0872b392360d..0805864543be 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1953,6 +1953,9 @@ void blk_cgroup_bio_start(struct bio *bio)
 	struct blkg_iostat_set *bis;
 	unsigned long flags;
 
+	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
+		return;
+
 	cpu = get_cpu();
 	bis = per_cpu_ptr(bio->bi_blkg->iostat_cpu, cpu);
 	flags = u64_stats_update_begin_irqsave(&bis->sync);
@@ -1968,8 +1971,7 @@ void blk_cgroup_bio_start(struct bio *bio)
 	bis->cur.ios[rwd]++;
 
 	u64_stats_update_end_irqrestore(&bis->sync, flags);
-	if (cgroup_subsys_on_dfl(io_cgrp_subsys))
-		cgroup_rstat_updated(bio->bi_blkg->blkcg->css.cgroup, cpu);
+	cgroup_rstat_updated(bio->bi_blkg->blkcg->css.cgroup, cpu);
 	put_cpu();
 }
 
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index d04a72b30faf..79ffce4bc213 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2149,12 +2149,6 @@ bool __blk_throtl_bio(struct bio *bio)
 
 	rcu_read_lock();
 
-	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
-		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
-				bio->bi_iter.bi_size);
-		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
-	}
-
 	spin_lock_irq(&q->queue_lock);
 
 	throtl_update_latency_buckets(td);
diff --git a/block/blk-throttle.h b/block/blk-throttle.h
index 45fcf7aa0eb5..cb7f7d0e6f0d 100644
--- a/block/blk-throttle.h
+++ b/block/blk-throttle.h
@@ -182,6 +182,15 @@ static inline bool blk_should_throtl(struct bio *bio)
 	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
 	int rw = bio_data_dir(bio);
 
+	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
+		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
+			bio_set_flag(bio, BIO_CGROUP_ACCT);
+			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
+					bio->bi_iter.bi_size);
+		}
+		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
+	}
+
 	/* iops limit is always counted */
 	if (tg->has_rules_iops[rw])
 		return true;
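For reference, this is roughly how blk_should_throtl() reads with the hunk above applied; the tail of the function (the has_rules_bps/BIO_BPS_THROTTLED check) is reconstructed from the mainline block/blk-throttle.h around v6.4 and may differ slightly in this backport. Doing the v1 accounting here, before the has_rules checks, is what makes every submitted bio show up in blkio.throttle.io_serviced and blkio.throttle.io_service_bytes rather than only the throttled ones:

static inline bool blk_should_throtl(struct bio *bio)
{
	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
	int rw = bio_data_dir(bio);

	/* cgroup v1: account every bio here, mirroring blk_cgroup_bio_start() */
	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
			bio_set_flag(bio, BIO_CGROUP_ACCT);
			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
					bio->bi_iter.bi_size);
		}
		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
	}

	/* iops limit is always counted */
	if (tg->has_rules_iops[rw])
		return true;

	/* reconstructed context: bios already bps-throttled skip the bps check */
	if (tg->has_rules_bps[rw] && !bio_flagged(bio, BIO_BPS_THROTTLED))
		return true;

	return false;
}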