[PATCH OLK-6.6 0/3] The QoS feature adapts to the cfs bandwidth throttling

The QoS feature adapts to the cfs bandwidth throttling.

Liu Kai (2):
  hungtask: fixed offline group hung task issue under high load scenarios
  sched/qos: Fix qos throttling in SMT expelled

Vishal Chourasia (1):
  sched/fair: Fix CPU bandwidth limit bypass during CPU hotplug

 kernel/sched/fair.c | 40 ++++++++++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 14 deletions(-)

--
2.34.1

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/release-management/issues/IBGRJE

--------------------------------

If an offline group exists, its parent group has a quota configured,
and other online groups running in the parent are fully loaded, the
online groups can claim all of the parent group's time slices. Once
they exhaust those slices, the offline group can never be scheduled.

Therefore, when the parent group of the offline group is throttled,
do not cancel the qos timer: the timer is what later lifts the
throttling imposed on the offline group by the online groups,
ensuring that the offline group can still be scheduled.

Signed-off-by: Liu Kai <liukai284@huawei.com>
---
 kernel/sched/fair.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d8a6686774ea..eb8037d36b7e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9566,17 +9566,22 @@ static void unthrottle_qos_cfs_rq(struct cfs_rq *cfs_rq)
 static int __unthrottle_qos_cfs_rqs(int cpu)
 {
 	struct cfs_rq *cfs_rq, *tmp_rq;
-	int res = 0;
+	int all_is_runnable = 1;
+
+	if (list_empty(&per_cpu(qos_throttled_cfs_rq, cpu)))
+		return 0;
 
 	list_for_each_entry_safe(cfs_rq, tmp_rq, &per_cpu(qos_throttled_cfs_rq, cpu),
 				 qos_throttled_list) {
 		if (cfs_rq_throttled(cfs_rq)) {
 			unthrottle_qos_cfs_rq(cfs_rq);
-			res++;
 		}
+
+		if (throttled_hierarchy(cfs_rq))
+			all_is_runnable = 0;
 	}
 
-	return res;
+	return all_is_runnable;
 }
 
 static int unthrottle_qos_cfs_rqs(int cpu)
@@ -9584,9 +9589,14 @@ static int unthrottle_qos_cfs_rqs(int cpu)
 	int res;
 
 	res = __unthrottle_qos_cfs_rqs(cpu);
-	if (qos_timer_is_activated(cpu) && !qos_smt_expelled(cpu))
+	/*
+	 * We should not cancel the timer if there is still a cfs_rq
+	 * throttling after __unthrottle_qos_cfs_rqs().
+	 */
+	if (res && qos_timer_is_activated(cpu) && !qos_smt_expelled(cpu))
 		cancel_qos_timer(cpu);
-	return res;
+
+	return cpu_rq(cpu)->cfs.h_nr_running;
 }
 
 static bool check_qos_cfs_rq(struct cfs_rq *cfs_rq)
--
2.34.1
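For illustration only, here is a minimal userspace model of the decision
the hunks above implement. struct rq_entry and unthrottle_all() are
stand-ins invented for this sketch, not kernel APIs; the real code walks
the per-CPU qos_throttled_cfs_rq list, checks throttled_hierarchy(), and
additionally gates the cancel on qos_timer_is_activated() and
!qos_smt_expelled(), which are elided here:

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for one QoS-throttled runqueue entry on a CPU's list. */
struct rq_entry {
	bool qos_throttled;	/* throttled by QoS (SMT expel) */
	bool bw_throttled;	/* hierarchy throttled by CFS bandwidth */
};

/*
 * Lift the QoS throttling, but report whether every entry really
 * became runnable. If any hierarchy is still throttled by CFS
 * bandwidth (e.g. the parent's quota is exhausted by online groups),
 * the caller must keep the qos timer armed so the offline group is
 * eventually unthrottled and can run.
 */
static bool unthrottle_all(struct rq_entry *list, int n)
{
	bool all_runnable = true;

	for (int i = 0; i < n; i++) {
		if (list[i].qos_throttled)
			list[i].qos_throttled = false;	/* unthrottle */
		if (list[i].bw_throttled)
			all_runnable = false;		/* still blocked */
	}
	return all_runnable;
}

int main(void)
{
	struct rq_entry list[] = {
		{ .qos_throttled = true, .bw_throttled = false },
		{ .qos_throttled = true, .bw_throttled = true }, /* parent quota used up */
	};
	bool cancel_timer = unthrottle_all(list, 2);

	printf("cancel qos timer: %s\n", cancel_timer ? "yes" : "no");
	return 0;
}

With the second entry's hierarchy still bandwidth-throttled, the sketch
prints "cancel qos timer: no", which is exactly the behaviour the patch
adds: the timer survives and later reschedules the offline group.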

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/release-management/issues/IBGRJE

--------------------------------

When the CPU does not require SMT expelling, the QoS timer needs to
be canceled. However, when qos_throttled_list is empty, the system
incorrectly concludes that the offline tasks are throttled by CFS
bandwidth. As a result, the timer is not canceled and the QoS
throttling is lifted prematurely, so the expelled time of offline
tasks is shorter than expected.

Fixes: efccb199ef7c ("hungtask: fixed offline group hung task issue under high load scenarios")
Signed-off-by: Liu Kai <liukai284@huawei.com>
---
 kernel/sched/fair.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eb8037d36b7e..76d7a63a69d6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9566,10 +9566,7 @@ static void unthrottle_qos_cfs_rq(struct cfs_rq *cfs_rq)
 static int __unthrottle_qos_cfs_rqs(int cpu)
 {
 	struct cfs_rq *cfs_rq, *tmp_rq;
-	int all_is_runnable = 1;
-
-	if (list_empty(&per_cpu(qos_throttled_cfs_rq, cpu)))
-		return 0;
+	int cfs_bandwidth_throttle = 0;
 
 	list_for_each_entry_safe(cfs_rq, tmp_rq, &per_cpu(qos_throttled_cfs_rq, cpu),
 				 qos_throttled_list) {
@@ -9578,22 +9575,21 @@ static int __unthrottle_qos_cfs_rqs(int cpu)
 		}
 
 		if (throttled_hierarchy(cfs_rq))
-			all_is_runnable = 0;
+			cfs_bandwidth_throttle = 1;
 	}
 
-	return all_is_runnable;
+	return cfs_bandwidth_throttle;
 }
 
 static int unthrottle_qos_cfs_rqs(int cpu)
 {
-	int res;
-
-	res = __unthrottle_qos_cfs_rqs(cpu);
+	int throttled = __unthrottle_qos_cfs_rqs(cpu);
+
 	/*
 	 * We should not cancel the timer if there is still a cfs_rq
 	 * throttling after __unthrottle_qos_cfs_rqs().
 	 */
-	if (res && qos_timer_is_activated(cpu) && !qos_smt_expelled(cpu))
+	if (qos_timer_is_activated(cpu) && !(qos_smt_expelled(cpu) || throttled))
 		cancel_qos_timer(cpu);
 
 	return cpu_rq(cpu)->cfs.h_nr_running;
--
2.34.1
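The heart of the fix is inverting the return-value semantics: the
helper now reports "is anything still bandwidth-throttled" (0 for an
empty list) instead of "is everything runnable" (which an early return
wrongly reported as 0, i.e. not runnable, for an empty list). A small
self-contained sketch of the new cancel condition; should_cancel() is
an illustrative stand-in, not a kernel function:

#include <stdbool.h>
#include <stdio.h>

/*
 * New condition from the hunk above: cancel the timer only when the
 * CPU neither needs SMT expelling nor still has a bandwidth-throttled
 * cfs_rq. With an empty qos_throttled_list, throttled is now false,
 * so the timer is correctly canceled instead of being left armed.
 */
static bool should_cancel(bool timer_active, bool smt_expelled, bool throttled)
{
	return timer_active && !(smt_expelled || throttled);
}

int main(void)
{
	for (int smt = 0; smt <= 1; smt++)
		for (int thr = 0; thr <= 1; thr++)
			printf("smt_expelled=%d throttled=%d -> cancel=%d\n",
			       smt, thr, should_cancel(true, smt, thr));
	return 0;
}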

From: Vishal Chourasia <vishalc@linux.ibm.com>

mainline inclusion
from mainline-v6.14-rc1
commit af98d8a36a963e758e84266d152b92c7b51d4ecb
category: bugfix
bugzilla: https://gitee.com/openeuler/release-management/issues/IBGRJE
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...

--------------------------------

CPU controller limits are not properly enforced during CPU hotplug
operations, particularly during CPU offline. When a CPU goes offline,
throttled processes are unintentionally being unthrottled across all
CPUs in the system, allowing them to exceed their assigned quota
limits.

Consider the following example: assign a 6.25% bandwidth limit to a
cgroup in an 8-CPU system, where the workload runs 8 threads for
20 seconds at 100% CPU utilization; the expected (user+sys) time is
10 seconds.

$ cat /sys/fs/cgroup/test/cpu.max
50000 100000

$ ./ebizzy -t 8 -S 20        // non-hotplug case
real 20.00 s
user 10.81 s                 // intended behaviour
sys   0.00 s

$ ./ebizzy -t 8 -S 20        // hotplug case
real 20.00 s
user 14.43 s                 // workload is able to run for 14 secs
sys   0.00 s                 // when it should have only run for 10 secs

During CPU hotplug, scheduler domains are rebuilt and
cpu_attach_domain is called for every active CPU to update the root
domain. That ends up calling rq_offline_fair which unthrottles any
throttled hierarchies. Unthrottling should only occur for the CPU
being hotplugged, to allow its throttled processes to become runnable
and get migrated to other CPUs.

With the current patch applied,

$ ./ebizzy -t 8 -S 20        // hotplug case
real 21.00 s
user 10.16 s                 // intended behaviour
sys   0.00 s

This also has another symptom: when a CPU goes offline, if the cfs_rq
is not in the throttled state and runtime_remaining still has plenty
remaining, it gets reset to 1 here, causing the runtime_remaining of
the cfs_rq to be quickly depleted.
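The expected figure above follows directly from the cpu.max values; as
a quick check, here is a standalone sketch of the arithmetic (plain
userspace C written for this note, not part of the patch):

#include <stdio.h>

int main(void)
{
	const double quota_us  = 50000;   /* cpu.max quota per period */
	const double period_us = 100000;  /* cpu.max period */
	const double wall_s    = 20.0;    /* workload duration in seconds */
	const int    ncpus     = 8;

	double cpus_worth = quota_us / period_us;  /* 0.5 of one CPU */
	double share      = cpus_worth / ncpus;    /* 6.25% of the machine */
	double expected_s = cpus_worth * wall_s;   /* 10 s of user+sys */

	printf("%.2f CPUs (%.2f%% of %d CPUs) -> %.1f s expected over %.0f s\n",
	       cpus_worth, share * 100, ncpus, expected_s, wall_s);
	return 0;
}

This prints "0.50 CPUs (6.25% of 8 CPUs) -> 10.0 s expected over 20 s",
matching the non-hotplug measurement and showing how far the hotplug
case (14.43 s) overshoots the quota.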
Note: the hotplug operation (online, offline) was performed in a
while(1) loop (a minimal sketch of such a loop follows this patch).

v3: https://lore.kernel.org/all/20241210102346.228663-2-vishalc@linux.ibm.com
v2: https://lore.kernel.org/all/20241207052730.1746380-2-vishalc@linux.ibm.com
v1: https://lore.kernel.org/all/20241126064812.809903-2-vishalc@linux.ibm.com

Suggested-by: Zhang Qiao <zhangqiao22@huawei.com>
Signed-off-by: Vishal Chourasia <vishalc@linux.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
Tested-by: Samir Mulani <samir@linux.ibm.com>
Link: https://lore.kernel.org/r/20241212043102.584863-2-vishalc@linux.ibm.com
Signed-off-by: Liu Kai <liukai284@huawei.com>
---
 kernel/sched/fair.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 76d7a63a69d6..c9a4ea466689 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6692,6 +6692,10 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 
 	lockdep_assert_rq_held(rq);
 
+	// Do not unthrottle for an active CPU
+	if (cpumask_test_cpu(cpu_of(rq), cpu_active_mask))
+		return;
+
 	/*
 	 * The rq clock has already been updated in the
 	 * set_rq_offline(), so we should skip updating
@@ -6709,19 +6713,21 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
 		if (!cfs_rq->runtime_enabled)
 			continue;
 
-		/*
-		 * clock_task is not advancing so we just need to make sure
-		 * there's some valid quota amount
-		 */
-		cfs_rq->runtime_remaining = 1;
 		/*
 		 * Offline rq is schedulable till CPU is completely disabled
 		 * in take_cpu_down(), so we prevent new cfs throttling here.
 		 */
 		cfs_rq->runtime_enabled = 0;
 
-		if (cfs_rq_throttled(cfs_rq))
-			unthrottle_cfs_rq(cfs_rq);
+		if (!cfs_rq_throttled(cfs_rq))
+			continue;
+
+		/*
+		 * clock_task is not advancing so we just need to make sure
+		 * there's some valid quota amount
+		 */
+		cfs_rq->runtime_remaining = 1;
+		unthrottle_cfs_rq(cfs_rq);
 	}
 	rcu_read_unlock();
 }
--
2.34.1
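As a reference for the reproduction setup mentioned in the note above,
here is a minimal userspace sketch of such a hotplug loop. The choice
of CPU 1 and the direct sysfs writes are illustrative assumptions for
this sketch (any non-boot CPU works, and root privileges are required);
the commit itself does not include this code:

#include <stdio.h>
#include <stdlib.h>

/* Write "0" or "1" to a CPU's sysfs online file. */
static void set_online(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		exit(EXIT_FAILURE);
	}
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	const char *path = "/sys/devices/system/cpu/cpu1/online";

	/* Toggle the CPU offline/online forever while the throttled
	 * workload runs, exercising the rq_offline_fair path. */
	for (;;) {
		set_online(path, "0");	/* offline */
		set_online(path, "1");	/* online */
	}
}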

FeedBack: The patch(es) which you have sent to kernel@openeuler.org failed to be converted to a PR!
Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/4...
Failed Reason: creating the PR via the gitee API failed; the reason is: source branch patch-1740100561 does not exist
Suggest Solution: please wait, the bot will retry in the next interval

FeedBack: The patch(es) which you have sent to the kernel@openeuler.org mailing list have been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/15170
Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/4...