[PATCH OLK-5.10 v2] eventpoll: Fix return fixed cpu bug in set_prefetch_numa_cpu()

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/release-management/issues/IC9Q31

--------------------------------

The original intention of set_prefetch_numa_cpu() was to uniformly select
a CPU from the pfi->related_cpus range.

cpumask_next(int n, const struct cpumask *srcp) returns the number of the
next set CPU (i.e. bit set to 1) in the given cpumask "srcp" after the
specified position n. So the cpumask_next() call in set_prefetch_numa_cpu()
always returns cpumask_first(&related_cpus) whenever the following
condition holds:

	fd % cpumask_weight(&related_cpus) < cpumask_first(&related_cpus)

Fix it by maintaining a per-NUMA-node array of the last selected CPU, and
each time selecting the next CPU after the last selected one in
"related_cpus".

Fixes: 7e1291339cb5 ("eventpoll: Support xcall async prefetch")
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 fs/eventpoll.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index dd11e92994c9..8087f50f3f83 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -949,28 +949,34 @@ static void prefetch_work_fn(struct work_struct *work)
 	trace_epoll_rc_ready(pfi->file, pfi->len);
 }
 
-static void set_prefetch_numa_cpu(struct prefetch_item *pfi, int fd)
+static int prefetch_cpus[MAX_NUMNODES] = { [0 ... MAX_NUMNODES - 1] = -1 };
+
+static void set_prefetch_numa_cpu(struct prefetch_item *pfi)
 {
 	int cur_cpu = smp_processor_id();
+	int cur_nid = numa_node_id();
+	int old_cpu, new_cpu;
 	struct cpumask tmp;
-	int cpu;
 
 	cpumask_copy(&tmp, &xcall_mask);
 	cpumask_and(&pfi->related_cpus, cpu_cpu_mask(cur_cpu), cpu_online_mask);
 	if (cpumask_intersects(&tmp, &pfi->related_cpus))
 		cpumask_and(&pfi->related_cpus, &pfi->related_cpus, &tmp);
-	cpu = cpumask_next(fd % cpumask_weight(&pfi->related_cpus),
-			   &pfi->related_cpus);
-	if (cpu > cpumask_last(&pfi->related_cpus))
-		cpu = cpumask_first(&pfi->related_cpus);
-	pfi->cpu = cpu;
+
+	do {
+		old_cpu = prefetch_cpus[cur_nid];
+		new_cpu = cpumask_next(old_cpu, &pfi->related_cpus);
+		if (new_cpu > cpumask_last(&pfi->related_cpus))
+			new_cpu = cpumask_first(&pfi->related_cpus);
+	} while (cmpxchg(&prefetch_cpus[cur_nid], old_cpu, new_cpu) != old_cpu);
+
+	pfi->cpu = new_cpu;
 }
 
 static struct prefetch_item *alloc_prefetch_item(struct epitem *epi)
 {
 	struct file *tfile = epi->ffd.file;
 	struct prefetch_item *pfi;
-	int fd = epi->ffd.fd;
 	unsigned int hash;
 
 	pfi = kmalloc(sizeof(struct prefetch_item), GFP_KERNEL);
@@ -991,7 +997,7 @@ static struct prefetch_item *alloc_prefetch_item(struct epitem *epi)
 	pfi->file = tfile;
 	pfi->len = 0;
 	pfi->pos = 0;
-	set_prefetch_numa_cpu(pfi, fd);
+	set_prefetch_numa_cpu(pfi);
 
 	hash = hash_64((u64)tfile, PREFETCH_ITEM_HASH_BITS);
 	write_lock(&xcall_table_lock);
-- 
2.34.1
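For illustration only (not part of the patch), below is a minimal userspace
sketch of the round-robin selection logic the fix uses. It stands in a
64-bit bitmask and C11 atomics for the kernel's cpumask helpers and
cmpxchg(); the names mask_next(), pick_cpu() and last_cpu are hypothetical
and only mirror the logic of set_prefetch_numa_cpu() after this change.

/*
 * Minimal userspace sketch (illustrative only): round-robin selection of
 * the next set bit in a CPU bitmask, advancing a shared cursor with an
 * atomic compare-and-swap, analogous to prefetch_cpus[nid]/cmpxchg() in
 * the patch above. Assumes at most 64 CPUs and a non-empty mask.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic int last_cpu = -1;	/* analogue of prefetch_cpus[cur_nid] */

/* Next set bit strictly after position n, or -1: analogue of cpumask_next(). */
static int mask_next(int n, uint64_t mask)
{
	for (int cpu = n + 1; cpu < 64; cpu++)
		if (mask & (1ULL << cpu))
			return cpu;
	return -1;
}

static int pick_cpu(uint64_t related_cpus)
{
	int old_cpu, new_cpu;

	do {
		old_cpu = atomic_load(&last_cpu);
		new_cpu = mask_next(old_cpu, related_cpus);
		if (new_cpu < 0)		/* past the last set CPU: wrap around */
			new_cpu = mask_next(-1, related_cpus);
	} while (!atomic_compare_exchange_weak(&last_cpu, &old_cpu, new_cpu));

	return new_cpu;
}

int main(void)
{
	uint64_t related_cpus = 0xF0;	/* CPUs 4-7 form the "related" set */

	for (int i = 0; i < 6; i++)
		printf("pick %d -> CPU %d\n", i, pick_cpu(related_cpus));
	return 0;
}

With the mask above, successive calls yield CPUs 4, 5, 6, 7, 4, 5, ...,
spreading the prefetch work across the related CPUs instead of repeatedly
landing on the first one, which is the behavior the original fd-based start
position could degenerate to.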

FeedBack: The patch(es) which you have sent to kernel@openeuler.org mailing list have been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/17344
Mailing list address: https://mailweb.openeuler.org/archives/list/kernel@openeuler.org/message/GY4...
participants (2):
- Jinjie Ruan
- patchwork bot