From: Xin Yin <yinxin.x@bytedance.com>
anolis inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB5UKT
Reference: https://gitee.com/anolis/cloud-kernel/commit/5ef14211857f
--------------------------------
ANBZ: #3211
commit 1122f40072731525c06b1371cfa30112b9b54d27 upstream.
Currently, enqueuing and dequeuing of on-demand requests both start from idx 0, which makes request distribution unfair. Under heavy concurrent I/O, requests stored at higher idx values can starve.
Searching for requests cyclically in cachefiles_ondemand_daemon_read() makes the distribution fairer.
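The effect of the change can be illustrated with a small userspace sketch. This is not the patched kernel code (which iterates a radix tree under xa_lock); the names pending[], pick_request() and NREQ below are made up for illustration, but the two-pass scan and the saved req_id_next cursor mirror the logic of the patch.

/*
 * Minimal userspace sketch of the cyclic scan: requests live in a fixed
 * table, and the reader resumes from a saved cursor so that entries at
 * high indices are not starved by new arrivals at low indices.
 */
#include <stdbool.h>
#include <stdio.h>

#define NREQ 8

static bool pending[NREQ];          /* stands in for the CACHEFILES_REQ_NEW tag */
static unsigned long req_id_next;   /* where the next scan starts */

/* Return the index of the next pending request, or -1 if none. */
static int pick_request(void)
{
	/* First pass: from the saved cursor to the end of the table. */
	for (unsigned long i = req_id_next; i < NREQ; i++) {
		if (pending[i]) {
			pending[i] = false;
			req_id_next = i + 1;
			return (int)i;
		}
	}
	/* Second pass: wrap around and scan [0, req_id_next). */
	for (unsigned long i = 0; i < req_id_next && i < NREQ; i++) {
		if (pending[i]) {
			pending[i] = false;
			req_id_next = i + 1;
			return (int)i;
		}
	}
	return -1;
}

int main(void)
{
	pending[1] = pending[5] = pending[6] = true;

	/* Requests are picked 1, 5, 6 rather than always restarting at idx 0. */
	for (int id; (id = pick_request()) >= 0; )
		printf("picked request %d\n", id);
	return 0;
}

Note that req_id_next is only a scan cursor: it is advanced past the request just handed out, so the next read resumes after it and wraps back to index 0 only when nothing newer is pending.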
Fixes: c8383054506c ("cachefiles: notify the user daemon when looking up cookie")
Reported-by: Yongqing Li <liyongqing@bytedance.com>
Signed-off-by: Xin Yin <yinxin.x@bytedance.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Link: https://lore.kernel.org/r/20220817065200.11543-1-yinxin.x@bytedance.com/ # v1
Link: https://lore.kernel.org/r/20220825020945.2293-1-yinxin.x@bytedance.com/ # v2
Acked-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Link: https://gitee.com/anolis/cloud-kernel/pulls/881
Link: https://gitee.com/anolis/cloud-kernel/pulls/884
Signed-off-by: Baokun Li <libaokun1@huawei.com>
---
 fs/cachefiles/internal.h |  1 +
 fs/cachefiles/ondemand.c | 19 ++++++++++++++++---
 2 files changed, 17 insertions(+), 3 deletions(-)
diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
index 7440c3e4fd14..cff061f208f6 100644
--- a/fs/cachefiles/internal.h
+++ b/fs/cachefiles/internal.h
@@ -97,6 +97,7 @@ struct cachefiles_cache {
 	char				*tag;		/* cache binding tag */
 	refcount_t			unbind_pincount;/* refcount to do daemon unbind */
 	struct radix_tree_root		reqs;		/* xarray of pending on-demand requests */
+	unsigned long			req_id_next;
 	struct idr			ondemand_ids;	/* xarray for ondemand_id allocation */
 	u32				ondemand_id_next;
 };
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index 68b8c2a4e6a8..bebc2f8627d8 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -262,17 +262,29 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 	void **slot;
 
 	/*
-	 * Search for a request that has not ever been processed, to prevent
-	 * requests from being processed repeatedly.
+	 * Cyclically search for a request that has not ever been processed,
+	 * to prevent requests from being processed repeatedly, and make
+	 * request distribution fair.
 	 */
 	xa_lock(&cache->reqs);
-	radix_tree_for_each_tagged(slot, &cache->reqs, &iter, 0,
+	radix_tree_for_each_tagged(slot, &cache->reqs, &iter, cache->req_id_next,
 				   CACHEFILES_REQ_NEW) {
 		req = radix_tree_deref_slot_protected(slot, &cache->reqs.xa_lock);
 		WARN_ON(!req);
 		break;
 	}
 
+	if (!req && cache->req_id_next > 0) {
+		radix_tree_for_each_tagged(slot, &cache->reqs, &iter, 0,
+					   CACHEFILES_REQ_NEW) {
+			if (iter.index >= cache->req_id_next)
+				break;
+			req = radix_tree_deref_slot_protected(slot, &cache->reqs.xa_lock);
+			WARN_ON(!req);
+			break;
+		}
+	}
+
 	/* no request tagged with CACHEFILES_REQ_NEW found */
 	if (!req) {
 		xa_unlock(&cache->reqs);
@@ -288,6 +300,7 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 	}
 
 	radix_tree_iter_tag_clear(&cache->reqs, &iter, CACHEFILES_REQ_NEW);
+	cache->req_id_next = iter.index + 1;
 	xa_unlock(&cache->reqs);
 
 	id = iter.index;