Enable SIS_UTIL for arm64 and fix NUMA imbalance in load_balance().
Zhang Qiao (2):
  sched/numa: Fix numa imbalance in load_balance()
  config: Disable CONFIG_ARCH_CUSTOM_NUMA_DISTANCE for arm64
Zheng Zengkai (2):
  Revert "sched: ARM64 enables SIS_PROP and disables SIS_UTIL"
  Revert "Revert "sched/fair: ARM64 enables SIS_UTIL and disables SIS_PROP""
 arch/arm64/configs/openeuler_defconfig |  2 +-
 kernel/sched/fair.c                    | 10 ++++++----
 kernel/sched/features.h                |  5 +++++
 3 files changed, 12 insertions(+), 5 deletions(-)
FeedBack: The patch(es) you have sent to the kernel@openeuler.org mailing list have been converted to a pull request successfully! Pull request link: https://gitee.com/openeuler/kernel/pulls/9507 Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/5...
hulk inclusion category: performance bugzilla: https://gitee.com/openeuler/kernel/issues/IA7CCA CVE: NA
--------------------------------
This reverts commit 256597906bbbd85a5b17c77307f604d2c6935cf7.
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 kernel/sched/features.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 1b789ae9079c..83f524cea64d 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -54,8 +54,8 @@ SCHED_FEAT(TTWU_QUEUE, true)
 /*
  * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
  */
-SCHED_FEAT(SIS_PROP, true)
-SCHED_FEAT(SIS_UTIL, false)
+SCHED_FEAT(SIS_PROP, false)
+SCHED_FEAT(SIS_UTIL, true)

 #ifdef CONFIG_SCHED_STEAL
 /*
hulk inclusion category: performance bugzilla: https://gitee.com/openeuler/kernel/issues/I61E4M CVE: NA
--------------------------------
This reverts commit 3f18c6fb5dedf655cb6d9c24d599de5d658e39e9.
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 kernel/sched/features.h | 5 +++++
 1 file changed, 5 insertions(+)
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 83f524cea64d..76fade025c4b 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -54,8 +54,13 @@ SCHED_FEAT(TTWU_QUEUE, true)
 /*
  * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
  */
+#ifdef CONFIG_ARM64
 SCHED_FEAT(SIS_PROP, false)
 SCHED_FEAT(SIS_UTIL, true)
+#else
+SCHED_FEAT(SIS_PROP, true)
+SCHED_FEAT(SIS_UTIL, false)
+#endif

 #ifdef CONFIG_SCHED_STEAL
 /*
From: Zhang Qiao <zhangqiao22@huawei.com>
hulk inclusion category: performance bugzilla: https://gitee.com/openeuler/kernel/issues/I9RMHW CVE: NA
--------------------------------
When performing load balance, a NUMA imbalance is allowed if the number of busy CPUs is below the maximum threshold; this keeps a pair of communicating tasks on the current node when the destination is lightly loaded.
1. However, calculate_imbalance() uses local->sum_nr_running, which may be inaccurate: the communicating tasks are in the busiest group, so busiest->sum_nr_running should be used instead.
2. In addition, idle CPUs are used to calculate the imbalance, but the group_weight may differ between the local and busiest groups. In that case, even if both groups are very idle, a very large imbalance is computed. Therefore, calculate the imbalance from the difference in busy CPUs between the groups.
Signed-off-by: Zhang Qiao <zhangqiao22@huawei.com>
Conflicts:
	kernel/sched/fair.c
Signed-off-by: Zhao Wenhui <zhaowenhui8@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 kernel/sched/fair.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 45577cd1aa84..778fb388b2e8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11465,17 +11465,19 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s

 		/*
 		 * If there is no overload, we just want to even the number of
-		 * idle cpus.
+		 * busy cpus.
 		 */
 		env->migration_type = migrate_task;
-		env->imbalance = max_t(long, 0, (local->idle_cpus -
-						 busiest->idle_cpus) >> 1);
+		env->imbalance = max_t(long, 0,
+				((busiest->group_weight - busiest->idle_cpus)
+				 - (local->group_weight - local->idle_cpus)) >> 1);
 	}

 	/* Consider allowing a small imbalance between NUMA groups */
 	if (env->sd->flags & SD_NUMA) {
 		env->imbalance = adjust_numa_imbalance(env->imbalance,
-				local->sum_nr_running + 1, env->sd->imb_numa_nr);
+				busiest->sum_nr_running,
+				env->sd->imb_numa_nr);
 	}
return;
From: Zhang Qiao <zhangqiao22@huawei.com>
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8PG0C CVE: NA
--------------------------------
Disable CONFIG_ARCH_CUSTOM_NUMA_DISTANCE for arm64.
Signed-off-by: Zhang Qiao <zhangqiao22@huawei.com>
Signed-off-by: Zhao Wenhui <zhaowenhui8@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 arch/arm64/configs/openeuler_defconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 05b50ca381b1..69ff0b64ba59 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -7449,7 +7449,7 @@ CONFIG_ULTRASOC_SMB=m
 CONFIG_FUNCTION_ERROR_INJECTION=y
 # CONFIG_FAULT_INJECTION is not set
 CONFIG_ARCH_HAS_KCOV=y
-CONFIG_ARCH_CUSTOM_NUMA_DISTANCE=y
+# CONFIG_ARCH_CUSTOM_NUMA_DISTANCE is not set

 # CONFIG_KCOV is not set
 # CONFIG_RUNTIME_TESTING_MENU is not set