
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IBN6HC

--------------------------------

When the user moves a running task to a new rdtgroup using the task's
file interface or by deleting its rdtgroup, the resulting change in
CLOSID/RMID must be immediately propagated to the PQR_ASSOC MSR on the
task(s) CPUs.

x86 allows reordering loads with prior stores, so if the task starts
running between a task_curr() check that the CPU hoisted before the
stores in the CLOSID/RMID update, then it can start running with the
old CLOSID/RMID until it is switched again because
__rdtgroup_move_task() failed to determine that it needs to be
interrupted to obtain the new CLOSID/RMID.

Refer to the diagram below:

CPU 0                                   CPU 1
-----                                   -----
__rdtgroup_move_task():
  curr <- t1->cpu->rq->curr
                                        __schedule():
                                          rq->curr <- t1
                                        resctrl_sched_in():
                                          t1->{closid,rmid} -> {1,1}
  t1->{closid,rmid} <- {2,2}
  if (curr == t1) // false
   IPI(t1->cpu)

A similar race impacts rdt_move_group_tasks(), which updates tasks in a
deleted rdtgroup.

In both cases, use smp_mb() to order the task_struct::{closid,rmid}
stores before the loads in task_curr(). In particular, in the
rdt_move_group_tasks() case, simply execute an smp_mb() on every
iteration with a matching task.

It is possible to use a single smp_mb() in rdt_move_group_tasks(), but
this would require two passes and a means of remembering which
task_structs were updated in the first loop. However, benchmarking
results below showed too little performance impact in the simple
approach to justify implementing the two-pass approach.

Times below were collected using `perf stat` to measure the time to
remove a group containing a 1600-task, parallel workload.

CPU: Intel(R) Xeon(R) Platinum P-8136 CPU @ 2.00GHz (112 threads)

  # mkdir /sys/fs/resctrl/test
  # echo $$ > /sys/fs/resctrl/test/tasks
  # perf bench sched messaging -g 40 -l 100000

task-clock time ranges collected using:

  # perf stat rmdir /sys/fs/resctrl/test

Baseline:                     1.54 - 1.60 ms
smp_mb() every matching task: 1.57 - 1.67 ms

Link: https://lore.kernel.org/r/20221220161123.432120-1-peternewman@google.com
Fixes: 6065f5513f78 ("resctrlfs: init support resctrlfs")
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
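As an aside for review (not part of the commit message): the reordering
described above is the classic store-buffering pattern. Below is a
minimal, self-contained user-space analogue, assuming C11 atomics and
pthreads; all names in it (mover, scheduler, closid, curr) are
illustrative stand-ins, not kernel code, and
atomic_thread_fence(memory_order_seq_cst) plays the role of smp_mb().
With both fences present, the (r0, r1) == (0, 0) outcome is forbidden;
removing either fence allows x86 to order the load before the prior
store becomes visible, which is the same reordering that lets
__rdtgroup_move_task() miss a just-scheduled task.

  /* sb.c: store-buffering litmus test (illustrative names only). */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_int closid, curr;   /* stand-ins for t->closid and rq->curr */
  static int r0, r1;

  /* Plays __rdtgroup_move_task(): store the new closid, then check curr. */
  static void *mover(void *arg)
  {
      atomic_store_explicit(&closid, 1, memory_order_relaxed);
      atomic_thread_fence(memory_order_seq_cst);     /* the new smp_mb() */
      r0 = atomic_load_explicit(&curr, memory_order_relaxed);
      return NULL;
  }

  /* Plays __schedule(): publish rq->curr, then load the task's closid. */
  static void *scheduler(void *arg)
  {
      atomic_store_explicit(&curr, 1, memory_order_relaxed);
      atomic_thread_fence(memory_order_seq_cst);     /* context-switch barrier */
      r1 = atomic_load_explicit(&closid, memory_order_relaxed);
      return NULL;
  }

  int main(void)
  {
      for (int i = 0; i < 200000; i++) {
          pthread_t a, b;

          atomic_store(&closid, 0);
          atomic_store(&curr, 0);
          pthread_create(&a, NULL, mover, NULL);
          pthread_create(&b, NULL, scheduler, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          /* Both sides missing the other's store == the missed-IPI case. */
          if (r0 == 0 && r1 == 0) {
              printf("store->load reorder observed at iteration %d\n", i);
              return 1;
          }
      }
      puts("no store->load reordering observed");
      return 0;
  }

Building with `gcc -O2 -pthread sb.c` and deleting either fence will
typically make the reorder message appear within a few thousand
iterations on x86; with both fences in place it should never appear.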
 arch/arm64/kernel/mpam/mpam_resctrl.c | 4 +++-
 fs/resctrlfs.c                        | 8 ++++++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/mpam/mpam_resctrl.c b/arch/arm64/kernel/mpam/mpam_resctrl.c
index b04c0d2b8312..ab822fd6828d 100644
--- a/arch/arm64/kernel/mpam/mpam_resctrl.c
+++ b/arch/arm64/kernel/mpam/mpam_resctrl.c
@@ -1371,8 +1371,10 @@ int __resctrl_group_move_task(struct task_struct *tsk,
 	/*
 	 * Ensure the task's closid and rmid are written before determining if
 	 * the task is current that will decide if it will be interrupted.
+	 * This pairs with the full barrier between the rq->curr update and
+	 * resctrl_sched_in() during context switch.
 	 */
-	barrier();
+	smp_mb();
 
 	/*
 	 * By now, the task's closid and rmid are set. If the task is current
diff --git a/fs/resctrlfs.c b/fs/resctrlfs.c
index 29fe73571870..7d816b33e209 100644
--- a/fs/resctrlfs.c
+++ b/fs/resctrlfs.c
@@ -457,6 +457,14 @@ static void resctrl_move_group_tasks(struct resctrl_group *from, struct resctrl_
 			WRITE_ONCE(t->closid, resctrl_navie_closid(to->closid));
 			WRITE_ONCE(t->rmid, resctrl_navie_rmid(to->mon.rmid));
 
+			/*
+			 * Order the closid/rmid stores above before the loads
+			 * in task_curr(). This pairs with the full barrier
+			 * between the rq->curr update and mpam_sched_in()
+			 * during context switch.
+			 */
+			smp_mb();
+
 #ifdef CONFIG_SMP
 			/*
 			 * This is safe on x86 w/o barriers as the ordering
-- 
2.25.1