ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 CPUs. All clusters share the L3 cache data, while each cluster has its own local L3 tag. On the other hand, the CPUs within one cluster share some internal system bus. This means the cache is much more affine inside one cluster than across clusters.
[Diagram: one NUMA node of Kunpeng 920 with six 4-CPU clusters (CPU0-3, CPU4-7, ...); each cluster has its own local L3 tag, while the L3 data is shared by all clusters.]
The following small program shows the performance impact of running its two threads within one cluster versus across two clusters:
#include <pthread.h>

/* f.x and f.y sit in the same cacheline, so the two threads false-share it */
struct foo {
	int x;
	int y;
} f;

void *thread1_fun(void *param)
{
	int s = 0;
	for (int i = 0; i < 0xfffffff; i++)
		s += f.x;
	return NULL;
}

void *thread2_fun(void *param)
{
	for (int i = 0; i < 0xfffffff; i++)
		f.y++;
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tid1, tid2;

	pthread_create(&tid1, NULL, thread1_fun, NULL);
	pthread_create(&tid2, NULL, thread2_fun, NULL);
	pthread_join(tid1, NULL);
	pthread_join(tid2, NULL);
	return 0;
}
While running this program within one cluster, it takes:

$ time taskset -c 0,1 ./a.out
real	0m0.832s
user	0m1.649s
sys	0m0.004s

In contrast, it takes much more time if we run the same program across two clusters:

$ time taskset -c 0,4 ./a.out
real	0m1.133s
user	0m1.960s
sys	0m0.000s

The in-cluster run needs only 0.832/1.133 = 73% of the cross-cluster time, which is a huge difference.
hackbench running on 4 CPUs within a single cluster and on 4 CPUs spread across different clusters also shows a large contrast:

* inside a cluster:
root@ubuntu:~# taskset -c 0,1,2,3 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each (== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 4.285

* across clusters:
root@ubuntu:~# taskset -c 0,4,8,12 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each (== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 5.524

The scores are 4.285 vs 5.524; a shorter time means better performance.
All this testing implies that we should let the Linux scheduler use this topology to make better load-balancing and WAKE_AFFINE decisions. However, the current scheduler has no idea of clusters at all.
This patchset first exposes the cluster topology and then adds a sched domain for clusters. While it is named "cluster", architectures and machines can define the exact meaning of cluster as long as they share some resources below the LLC and can leverage the affinity of those resources to achieve better scheduling performance.
-v3:
 - rebased against 5.11-rc2
 - with respect to the comments of Valentin Schneider, Peter Zijlstra,
   Vincent Guittot and Mel Gorman etc.:
   * moved the scheduler changes from arm64 to the common place for all
     architectures;
   * added the SD_SHARE_CLS_RESOURCES sd_flag, specifying the sched_domain
     from which select_idle_cpu() should begin to scan;
   * removed the redundant select_idle_cluster() function since all the code
     is in select_idle_cpu() now; this also avoids scanning the cluster CPUs
     twice as the v2 code did;
   * redid the hackbench in one NUMA node after the above changes.
Valentin suggested that select_idle_cpu() could begin to scan from the domain with SD_SHARE_PKG_RESOURCES. Changing it like this might be too aggressive and limit the spreading of tasks. Thus, this patch lets architectures and machines decide where to start by adding a new SD_SHARE_CLS_RESOURCES flag.
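For illustration, the sketch below shows how an architecture that builds its own sched_domain_topology_level table could opt in; the names my_arch_cluster_flags, my_arch_topology and my_arch_register_topology are made up for this example, while cpu_clustergroup_mask() and SD_SHARE_CLS_RESOURCES come from this series:

/*
 * Hypothetical sketch: attach SD_SHARE_CLS_RESOURCES to whatever level this
 * machine considers its "cluster"; select_idle_cpu() then starts scanning
 * from the first sched_domain carrying that flag.
 */
static inline int my_arch_cluster_flags(void)
{
	return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
}

static struct sched_domain_topology_level my_arch_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
#endif
	/* CPUs sharing e.g. L3 tags or an internal bus below the LLC */
	{ cpu_clustergroup_mask, my_arch_cluster_flags, SD_INIT_NAME(CLS) },
	/* CPUs sharing the LLC */
	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};

static void __init my_arch_register_topology(void)
{
	set_sched_topology(my_arch_topology);
}

Machines that are fine with the default table do not need any of this: patch 2 already inserts an equivalent CLS level, guarded by CONFIG_SCHED_CLUSTER, between SMT and MC.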
Barry Song (1): scheduler: add scheduler level for clusters
Jonathan Cameron (1): topology: Represent clusters of CPUs within a die.
 Documentation/admin-guide/cputopology.rst | 26 +++++++++++---
 arch/arm64/Kconfig                        |  7 ++++
 arch/arm64/kernel/topology.c              |  2 ++
 drivers/acpi/pptt.c                       | 60 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 14 ++++++++
 drivers/base/topology.c                   | 10 ++++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/sched/sd_flags.h            |  9 +++++
 include/linux/sched/topology.h            |  7 ++++
 include/linux/topology.h                  | 13 +++++++
 kernel/sched/fair.c                       | 27 ++++++++++----
 kernel/sched/topology.c                   |  6 ++++
 13 files changed, 181 insertions(+), 10 deletions(-)
From: Jonathan Cameron Jonathan.Cameron@huawei.com
Both ACPI and DT provide the ability to describe additional layers of topology between that of individual cores and higher level constructs such as the level at which the last level cache is shared. In ACPI this can be represented in PPTT as a Processor Hierarchy Node Structure [1] that is the parent of the CPU cores and in turn has a parent Processor Hierarchy Nodes Structure representing a higher level of topology.
For example, Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 CPUs. All clusters share the L3 cache data, but each cluster has a local L3 tag. On the other hand, the CPUs within each cluster share some internal system bus.
[Diagram: one NUMA node of Kunpeng 920 with six 4-CPU clusters (CPU0-3, CPU4-7, ...); each cluster has its own local L3 tag, while the L3 data is shared by all clusters.]
That means the cost of transferring ownership of a cacheline between CPUs within a cluster is lower than between CPUs in different clusters on the same die. Hence, it can make sense to tell the scheduler to use the cache affinity of the cluster to make better decisions on thread migration.
This patch simply exposes this information to userspace libraries like hwloc by providing cluster_cpus and related sysfs attributes. PoC of HWLOC support at [2].
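As a quick illustration (not part of the patch), the new attributes can be read like any other topology file in sysfs; the minimal userspace sketch below assumes the paths added by this patch, and the printed values are machine dependent (on Kunpeng 920, cpu0's cluster_cpus_list would be expected to show 0-3):

#include <stdio.h>

int main(void)
{
	const char *attrs[] = { "cluster_id", "cluster_cpus_list" };
	char path[128], buf[256];

	for (int i = 0; i < 2; i++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu0/topology/%s", attrs[i]);
		FILE *f = fopen(path, "r");

		if (!f)
			continue;	/* attribute absent on kernels without this patch */
		if (fgets(buf, sizeof(buf), f))
			printf("%s: %s", attrs[i], buf);
		fclose(f);
	}
	return 0;
}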
Note this patch only handles the ACPI case.
Special consideration is needed for SMT processors, where it is necessary to move 2 levels up the hierarchy from the leaf nodes (thus skipping the processor core level).
Currently the ID provided is the offset of the Processor Hierarchy Nodes Structure within PPTT. Whilst this is unique it is not terribly elegant so alternative suggestions welcome.
Note that arm64/ACPI does not provide any means of identifying a die level in the topology, but that may be unrelated to the cluster level.
[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node structure (Type 0) [2] https://github.com/hisilicon/hwloc/tree/linux-cluster
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
-v3:
 * no real change (v2 comments not addressed);
 * rebased against 5.11-rc2;
 Documentation/admin-guide/cputopology.rst | 26 +++++++++++---
 arch/arm64/kernel/topology.c              |  2 ++
 drivers/acpi/pptt.c                       | 60 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 14 ++++++++
 drivers/base/topology.c                   | 10 ++++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/topology.h                  |  6 ++++
 8 files changed, 124 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst index b90dafc..f9d3745 100644 --- a/Documentation/admin-guide/cputopology.rst +++ b/Documentation/admin-guide/cputopology.rst @@ -24,6 +24,12 @@ core_id: identifier (rather than the kernel's). The actual value is architecture and platform dependent.
+cluster_id: + + the Cluster ID of cpuX. Typically it is the hardware platform's + identifier (rather than the kernel's). The actual value is + architecture and platform dependent. + book_id:
the book ID of cpuX. Typically it is the hardware platform's @@ -56,6 +62,14 @@ package_cpus_list: human-readable list of CPUs sharing the same physical_package_id. (deprecated name: "core_siblings_list")
+cluster_cpus: + + internal kernel map of CPUs within the same cluster. + +cluster_cpus_list: + + human-readable list of CPUs within the same cluster. + die_cpus:
internal kernel map of CPUs within the same die. @@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
#define topology_physical_package_id(cpu) #define topology_die_id(cpu) + #define topology_cluster_id(cpu) #define topology_core_id(cpu) #define topology_book_id(cpu) #define topology_drawer_id(cpu) #define topology_sibling_cpumask(cpu) #define topology_core_cpumask(cpu) + #define topology_cluster_cpumask(cpu) #define topology_die_cpumask(cpu) #define topology_book_cpumask(cpu) #define topology_drawer_cpumask(cpu) @@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
1) topology_physical_package_id: -1 2) topology_die_id: -1 -3) topology_core_id: 0 -4) topology_sibling_cpumask: just the given CPU -5) topology_core_cpumask: just the given CPU -6) topology_die_cpumask: just the given CPU +3) topology_cluster_id: -1 +4) topology_core_id: 0 +5) topology_sibling_cpumask: just the given CPU +6) topology_core_cpumask: just the given CPU +7) topology_cluster_cpumask: just the given CPU +8) topology_die_cpumask: just the given CPU
For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
default definitions for topology_book_id() and topology_book_cpumask().

diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index f6faa69..fe076b3 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].thread_id  = -1;
 			cpu_topology[cpu].core_id    = topology_id;
 		}
+		topology_id = find_acpi_cpu_topology_cluster(cpu);
+		cpu_topology[cpu].cluster_id = topology_id;
 		topology_id = find_acpi_cpu_topology_package(cpu);
 		cpu_topology[cpu].package_id = topology_id;
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..8646a93 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,66 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
 }
 
 /**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	struct acpi_pptt_processor *cpu_node, *cluster_node;
+	int retval;
+	int is_thread;
+
+	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+	if (ACPI_FAILURE(status)) {
+		acpi_pptt_warn_missing();
+		return -ENOENT;
+	}
+	cpu_node = acpi_find_processor_node(table, cpu);
+	if (cpu_node == NULL || !cpu_node->parent) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+
+	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+	cluster_node = fetch_pptt_node(table, cpu_node->parent);
+	if (cluster_node == NULL) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+	if (is_thread) {
+		if (!cluster_node->parent) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+		cluster_node = fetch_pptt_node(table, cluster_node->parent);
+		if (cluster_node == NULL) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+	}
+	retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+	acpi_put_table(table);
+
+	return retval;
+}
+
+/**
  * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
  * @cpu: Kernel logical CPU number
  *
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index de8587c..3079232 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return core_mask;
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].cluster_sibling;
+}
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
 
+		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+		}
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu) cpumask_clear(&cpu_topo->llc_sibling); cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
+ cpumask_clear(&cpu_topo->cluster_sibling); + cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling); + cpumask_clear(&cpu_topo->core_sibling); cpumask_set_cpu(cpu, &cpu_topo->core_sibling); cpumask_clear(&cpu_topo->thread_sibling); @@ -571,6 +584,7 @@ void remove_cpu_topology(unsigned int cpu) cpumask_clear_cpu(cpu, topology_core_cpumask(sibling)); for_each_cpu(sibling, topology_sibling_cpumask(cpu)) cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling)); + for_each_cpu(sibling, topology_llc_cpumask(cpu)) cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
diff --git a/drivers/base/topology.c b/drivers/base/topology.c index 4d254fc..7157ac0 100644 --- a/drivers/base/topology.c +++ b/drivers/base/topology.c @@ -46,6 +46,9 @@ static DEVICE_ATTR_RO(physical_package_id); define_id_show_func(die_id); static DEVICE_ATTR_RO(die_id);
+define_id_show_func(cluster_id); +static DEVICE_ATTR_RO(cluster_id); + define_id_show_func(core_id); static DEVICE_ATTR_RO(core_id);
@@ -61,6 +64,10 @@ define_siblings_show_func(core_siblings, core_cpumask); static DEVICE_ATTR_RO(core_siblings); static DEVICE_ATTR_RO(core_siblings_list);
+define_siblings_show_func(cluster_cpus, cluster_cpumask); +static DEVICE_ATTR_RO(cluster_cpus); +static DEVICE_ATTR_RO(cluster_cpus_list); + define_siblings_show_func(die_cpus, die_cpumask); static DEVICE_ATTR_RO(die_cpus); static DEVICE_ATTR_RO(die_cpus_list); @@ -88,6 +95,7 @@ static DEVICE_ATTR_RO(drawer_siblings_list); static struct attribute *default_attrs[] = { &dev_attr_physical_package_id.attr, &dev_attr_die_id.attr, + &dev_attr_cluster_id.attr, &dev_attr_core_id.attr, &dev_attr_thread_siblings.attr, &dev_attr_thread_siblings_list.attr, @@ -95,6 +103,8 @@ static struct attribute *default_attrs[] = { &dev_attr_core_cpus_list.attr, &dev_attr_core_siblings.attr, &dev_attr_core_siblings_list.attr, + &dev_attr_cluster_cpus.attr, + &dev_attr_cluster_cpus_list.attr, &dev_attr_die_cpus.attr, &dev_attr_die_cpus_list.attr, &dev_attr_package_cpus.attr, diff --git a/include/linux/acpi.h b/include/linux/acpi.h index 2630c2e..8f603cd 100644 --- a/include/linux/acpi.h +++ b/include/linux/acpi.h @@ -1325,6 +1325,7 @@ static inline int lpit_read_residency_count_address(u64 *address) #ifdef CONFIG_ACPI_PPTT int acpi_pptt_cpu_is_thread(unsigned int cpu); int find_acpi_cpu_topology(unsigned int cpu, int level); +int find_acpi_cpu_topology_cluster(unsigned int cpu); int find_acpi_cpu_topology_package(unsigned int cpu); int find_acpi_cpu_topology_hetero_id(unsigned int cpu); int find_acpi_cpu_cache_topology(unsigned int cpu, int level); @@ -1337,6 +1338,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level) { return -EINVAL; } +static inline int find_acpi_cpu_topology_cluster(unsigned int cpu) +{ + return -EINVAL; +} static inline int find_acpi_cpu_topology_package(unsigned int cpu) { return -EINVAL; diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h index 0f6cd6b..987c7ea 100644 --- a/include/linux/arch_topology.h +++ b/include/linux/arch_topology.h @@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus, struct cpu_topology { int thread_id; int core_id; + int cluster_id; int package_id; int llc_id; cpumask_t thread_sibling; cpumask_t core_sibling; + cpumask_t cluster_sibling; cpumask_t llc_sibling; };
@@ -60,13 +62,16 @@ struct cpu_topology { extern struct cpu_topology cpu_topology[NR_CPUS];
#define topology_physical_package_id(cpu) (cpu_topology[cpu].package_id) +#define topology_cluster_id(cpu) (cpu_topology[cpu].cluster_id) #define topology_core_id(cpu) (cpu_topology[cpu].core_id) #define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling) #define topology_sibling_cpumask(cpu) (&cpu_topology[cpu].thread_sibling) +#define topology_cluster_cpumask(cpu) (&cpu_topology[cpu].cluster_sibling) #define topology_llc_cpumask(cpu) (&cpu_topology[cpu].llc_sibling) void init_cpu_topology(void); void store_cpu_topology(unsigned int cpuid); const struct cpumask *cpu_coregroup_mask(int cpu); +const struct cpumask *cpu_clustergroup_mask(int cpu); void update_siblings_masks(unsigned int cpu); void remove_cpu_topology(unsigned int cpuid); void reset_cpu_topology(void); diff --git a/include/linux/topology.h b/include/linux/topology.h index ad03df1..bf2cc3c 100644 --- a/include/linux/topology.h +++ b/include/linux/topology.h @@ -185,6 +185,9 @@ static inline int cpu_to_mem(int cpu) #ifndef topology_die_id #define topology_die_id(cpu) ((void)(cpu), -1) #endif +#ifndef topology_cluster_id +#define topology_cluster_id(cpu) ((void)(cpu), -1) +#endif #ifndef topology_core_id #define topology_core_id(cpu) ((void)(cpu), 0) #endif @@ -194,6 +197,9 @@ static inline int cpu_to_mem(int cpu) #ifndef topology_core_cpumask #define topology_core_cpumask(cpu) cpumask_of(cpu) #endif +#ifndef topology_cluster_cpumask +#define topology_cluster_cpumask(cpu) cpumask_of(cpu) +#endif #ifndef topology_die_cpumask #define topology_die_cpumask(cpu) cpumask_of(cpu) #endif
On Wed, Jan 06, 2021 at 09:30:25PM +1300, Barry Song wrote:
From: Jonathan Cameron Jonathan.Cameron@huawei.com
Both ACPI and DT provide the ability to describe additional layers of topology between that of individual cores and higher level constructs such as the level at which the last level cache is shared. In ACPI this can be represented in PPTT as a Processor Hierarchy Node Structure [1] that is the parent of the CPU cores and in turn has a parent Processor Hierarchy Nodes Structure representing a higher level of topology.
For example Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data, but each cluster has local L3 tag. On the other hand, each clusters will share some internal system bus.
[Diagram: one NUMA node of Kunpeng 920 with six 4-CPU clusters (CPU0-3, CPU4-7, ...); each cluster has its own local L3 tag, while the L3 data is shared by all clusters.]
That means the cost to transfer ownership of a cacheline between CPUs within a cluster is lower than between CPUs in different clusters on the same die. Hence, it can make sense to tell the scheduler to use the cache affinity of the cluster to make better decision on thread migration.
This patch simply exposes this information to userspace libraries like hwloc by providing cluster_cpus and related sysfs attributes. PoC of HWLOC support at [2].
Note this patch only handle the ACPI case.
Special consideration is needed for SMT processors, where it is necessary to move 2 levels up the hierarchy from the leaf nodes (thus skipping the processor core level).
Currently the ID provided is the offset of the Processor Hierarchy Nodes Structure within PPTT. Whilst this is unique it is not terribly elegant so alternative suggestions welcome.
Note that arm64 / ACPI does not provide any means of identifying a die level in the topology but that may be unrelate to the cluster level.
[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node structure (Type 0) [2] https://github.com/hisilicon/hwloc/tree/linux-cluster
Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Barry Song song.bao.hua@hisilicon.com
-v3:
- no real change(v2 comments not addressed);
- rebased againest 5.11-rc2;
Documentation/admin-guide/cputopology.rst | 26 +++++++++++--- arch/arm64/kernel/topology.c | 2 ++ drivers/acpi/pptt.c | 60 +++++++++++++++++++++++++++++++ drivers/base/arch_topology.c | 14 ++++++++ drivers/base/topology.c | 10 ++++++ include/linux/acpi.h | 5 +++ include/linux/arch_topology.h | 5 +++ include/linux/topology.h | 6 ++++ 8 files changed, 124 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst index b90dafc..f9d3745 100644 --- a/Documentation/admin-guide/cputopology.rst +++ b/Documentation/admin-guide/cputopology.rst @@ -24,6 +24,12 @@ core_id: identifier (rather than the kernel's). The actual value is architecture and platform dependent.
+cluster_id:
- the Cluster ID of cpuX. Typically it is the hardware platform's
- identifier (rather than the kernel's). The actual value is
- architecture and platform dependent.
book_id:
the book ID of cpuX. Typically it is the hardware platform's @@ -56,6 +62,14 @@ package_cpus_list: human-readable list of CPUs sharing the same physical_package_id. (deprecated name: "core_siblings_list")
+cluster_cpus:
- internal kernel map of CPUs within the same cluster.
+cluster_cpus_list:
- human-readable list of CPUs within the same cluster.
die_cpus:
internal kernel map of CPUs within the same die. @@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
#define topology_physical_package_id(cpu) #define topology_die_id(cpu)
- #define topology_cluster_id(cpu) #define topology_core_id(cpu) #define topology_book_id(cpu) #define topology_drawer_id(cpu) #define topology_sibling_cpumask(cpu) #define topology_core_cpumask(cpu)
- #define topology_cluster_cpumask(cpu) #define topology_die_cpumask(cpu) #define topology_book_cpumask(cpu) #define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
- topology_physical_package_id: -1
- topology_die_id: -1
-3) topology_core_id: 0 -4) topology_sibling_cpumask: just the given CPU -5) topology_core_cpumask: just the given CPU -6) topology_die_cpumask: just the given CPU +3) topology_cluster_id: -1 +4) topology_core_id: 0 +5) topology_sibling_cpumask: just the given CPU +6) topology_core_cpumask: just the given CPU +7) topology_cluster_cpumask: just the given CPU +8) topology_die_cpumask: just the given CPU
For architectures that don't support books (CONFIG_SCHED_BOOK) there are no default definitions for topology_book_id() and topology_book_cpumask(). diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c index f6faa69..fe076b3 100644 --- a/arch/arm64/kernel/topology.c +++ b/arch/arm64/kernel/topology.c @@ -103,6 +103,8 @@ int __init parse_acpi_topology(void) cpu_topology[cpu].thread_id = -1; cpu_topology[cpu].core_id = topology_id; }
topology_id = find_acpi_cpu_topology_cluster(cpu);
topology_id = find_acpi_cpu_topology_package(cpu); cpu_topology[cpu].package_id = topology_id;cpu_topology[cpu].cluster_id = topology_id;
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c index 4ae9335..8646a93 100644 --- a/drivers/acpi/pptt.c +++ b/drivers/acpi/pptt.c @@ -737,6 +737,66 @@ int find_acpi_cpu_topology_package(unsigned int cpu) }
/**
- find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
- @cpu: Kernel logical CPU number
- Determine a topology unique cluster ID for the given CPU/thread.
- This ID can then be used to group peers, which will have matching ids.
- The cluster, if present is the level of topology above CPUs. In a
- multi-thread CPU, it will be the level above the CPU, not the thread.
- It may not exist in single CPU systems. In simple multi-CPU systems,
- it may be equal to the package topology level.
- Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
- or there is no toplogy level above the CPU..
- Otherwise returns a value which represents the package for this CPU.
- */
+int find_acpi_cpu_topology_cluster(unsigned int cpu) +{
- struct acpi_table_header *table;
- acpi_status status;
- struct acpi_pptt_processor *cpu_node, *cluster_node;
- int retval;
- int is_thread;
- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
- if (ACPI_FAILURE(status)) {
acpi_pptt_warn_missing();
return -ENOENT;
- }
- cpu_node = acpi_find_processor_node(table, cpu);
The second argument of acpi_find_processor_node() expects an ACPI processor ID, not the logical CPU number. How about the following?
	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
Thanks, Masa
- if (cpu_node == NULL || !cpu_node->parent) {
retval = -ENOENT;
goto put_table;
- }
- is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
- cluster_node = fetch_pptt_node(table, cpu_node->parent);
- if (cluster_node == NULL) {
retval = -ENOENT;
goto put_table;
- }
- if (is_thread) {
if (!cluster_node->parent) {
retval = -ENOENT;
goto put_table;
}
cluster_node = fetch_pptt_node(table, cluster_node->parent);
if (cluster_node == NULL) {
retval = -ENOENT;
goto put_table;
}
- }
- retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
- acpi_put_table(table);
- return retval;
+}
+/**
- find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
- @cpu: Kernel logical CPU number
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c index de8587c..3079232 100644 --- a/drivers/base/arch_topology.c +++ b/drivers/base/arch_topology.c @@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu) return core_mask; }
+const struct cpumask *cpu_clustergroup_mask(int cpu) +{
- return &cpu_topology[cpu].cluster_sibling;
+}
void update_siblings_masks(unsigned int cpuid) { struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid]; @@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid) if (cpuid_topo->package_id != cpu_topo->package_id) continue;
if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
}
- cpumask_set_cpu(cpuid, &cpu_topo->core_sibling); cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu) cpumask_clear(&cpu_topo->llc_sibling); cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
- cpumask_clear(&cpu_topo->cluster_sibling);
- cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
- cpumask_clear(&cpu_topo->core_sibling); cpumask_set_cpu(cpu, &cpu_topo->core_sibling); cpumask_clear(&cpu_topo->thread_sibling);
@@ -571,6 +584,7 @@ void remove_cpu_topology(unsigned int cpu) cpumask_clear_cpu(cpu, topology_core_cpumask(sibling)); for_each_cpu(sibling, topology_sibling_cpumask(cpu)) cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
- for_each_cpu(sibling, topology_llc_cpumask(cpu)) cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
diff --git a/drivers/base/topology.c b/drivers/base/topology.c index 4d254fc..7157ac0 100644 --- a/drivers/base/topology.c +++ b/drivers/base/topology.c @@ -46,6 +46,9 @@ static DEVICE_ATTR_RO(physical_package_id); define_id_show_func(die_id); static DEVICE_ATTR_RO(die_id);
+define_id_show_func(cluster_id); +static DEVICE_ATTR_RO(cluster_id);
define_id_show_func(core_id); static DEVICE_ATTR_RO(core_id);
@@ -61,6 +64,10 @@ define_siblings_show_func(core_siblings, core_cpumask); static DEVICE_ATTR_RO(core_siblings); static DEVICE_ATTR_RO(core_siblings_list);
+define_siblings_show_func(cluster_cpus, cluster_cpumask); +static DEVICE_ATTR_RO(cluster_cpus); +static DEVICE_ATTR_RO(cluster_cpus_list);
define_siblings_show_func(die_cpus, die_cpumask); static DEVICE_ATTR_RO(die_cpus); static DEVICE_ATTR_RO(die_cpus_list); @@ -88,6 +95,7 @@ static DEVICE_ATTR_RO(drawer_siblings_list); static struct attribute *default_attrs[] = { &dev_attr_physical_package_id.attr, &dev_attr_die_id.attr,
- &dev_attr_cluster_id.attr, &dev_attr_core_id.attr, &dev_attr_thread_siblings.attr, &dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@ static struct attribute *default_attrs[] = { &dev_attr_core_cpus_list.attr, &dev_attr_core_siblings.attr, &dev_attr_core_siblings_list.attr,
- &dev_attr_cluster_cpus.attr,
- &dev_attr_cluster_cpus_list.attr, &dev_attr_die_cpus.attr, &dev_attr_die_cpus_list.attr, &dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h index 2630c2e..8f603cd 100644 --- a/include/linux/acpi.h +++ b/include/linux/acpi.h @@ -1325,6 +1325,7 @@ static inline int lpit_read_residency_count_address(u64 *address) #ifdef CONFIG_ACPI_PPTT int acpi_pptt_cpu_is_thread(unsigned int cpu); int find_acpi_cpu_topology(unsigned int cpu, int level); +int find_acpi_cpu_topology_cluster(unsigned int cpu); int find_acpi_cpu_topology_package(unsigned int cpu); int find_acpi_cpu_topology_hetero_id(unsigned int cpu); int find_acpi_cpu_cache_topology(unsigned int cpu, int level); @@ -1337,6 +1338,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level) { return -EINVAL; } +static inline int find_acpi_cpu_topology_cluster(unsigned int cpu) +{
- return -EINVAL;
+} static inline int find_acpi_cpu_topology_package(unsigned int cpu) { return -EINVAL; diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h index 0f6cd6b..987c7ea 100644 --- a/include/linux/arch_topology.h +++ b/include/linux/arch_topology.h @@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus, struct cpu_topology { int thread_id; int core_id;
- int cluster_id; int package_id; int llc_id; cpumask_t thread_sibling; cpumask_t core_sibling;
- cpumask_t cluster_sibling; cpumask_t llc_sibling;
};
@@ -60,13 +62,16 @@ struct cpu_topology { extern struct cpu_topology cpu_topology[NR_CPUS];
#define topology_physical_package_id(cpu) (cpu_topology[cpu].package_id) +#define topology_cluster_id(cpu) (cpu_topology[cpu].cluster_id) #define topology_core_id(cpu) (cpu_topology[cpu].core_id) #define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling) #define topology_sibling_cpumask(cpu) (&cpu_topology[cpu].thread_sibling) +#define topology_cluster_cpumask(cpu) (&cpu_topology[cpu].cluster_sibling) #define topology_llc_cpumask(cpu) (&cpu_topology[cpu].llc_sibling) void init_cpu_topology(void); void store_cpu_topology(unsigned int cpuid); const struct cpumask *cpu_coregroup_mask(int cpu); +const struct cpumask *cpu_clustergroup_mask(int cpu); void update_siblings_masks(unsigned int cpu); void remove_cpu_topology(unsigned int cpuid); void reset_cpu_topology(void); diff --git a/include/linux/topology.h b/include/linux/topology.h index ad03df1..bf2cc3c 100644 --- a/include/linux/topology.h +++ b/include/linux/topology.h @@ -185,6 +185,9 @@ static inline int cpu_to_mem(int cpu) #ifndef topology_die_id #define topology_die_id(cpu) ((void)(cpu), -1) #endif +#ifndef topology_cluster_id +#define topology_cluster_id(cpu) ((void)(cpu), -1) +#endif #ifndef topology_core_id #define topology_core_id(cpu) ((void)(cpu), 0) #endif @@ -194,6 +197,9 @@ static inline int cpu_to_mem(int cpu) #ifndef topology_core_cpumask #define topology_core_cpumask(cpu) cpumask_of(cpu) #endif +#ifndef topology_cluster_cpumask +#define topology_cluster_cpumask(cpu) cpumask_of(cpu) +#endif #ifndef topology_die_cpumask #define topology_die_cpumask(cpu) cpumask_of(cpu)
#endif
2.7.4
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 CPUs. All clusters share the L3 cache data, but each cluster has a local L3 tag. On the other hand, the CPUs within each cluster share some internal system bus. This means the cache coherence overhead inside one cluster is much less than the overhead across clusters.
This patch adds the sched_domain for clusters. On Kunpeng 920, without this patch, domain0 of cpu0 would be MC spanning cpu0~cpu23; with this patch, MC becomes domain1 and a new domain0 "CLS" spans cpu0-cpu3.
This will affect load balancing. For example, without this patch, when cpu0 becomes idle, it will pull a task from cpu1-cpu15. With this patch, cpu0 will try to pull a task from cpu1-cpu3 first. This means much less task-migration overhead.
On the other hand, when doing WAKE_AFFINE, this patch will try to find a CPU in the target cluster before scanning the whole LLC domain. This means it will proactively use a CPU which has better cache affinity with the target first.
Though it is named "cluster", architectures or machines can define the exact meaning of cluster as long as some CPUs share some resources at a level lower than the LLC. So the implementation is applicable to all architectures.
Different CPUs might share different resources, such as L1, L2, cache tags, internal buses etc. Since it is hard to know where we should start to scan, this patch adds a SD_SHARE_CLS_RESOURCES flag rather than directly leveraging the existing SD_SHARE_PKG_RESOURCES flag. Architectures or machines can decide what a cluster is and which sched_domain should get SD_SHARE_CLS_RESOURCES. select_idle_cpu() will scan from the first sched_domain with SD_SHARE_CLS_RESOURCES.
The below is a hackbench result:
We run the command below with the -g parameter varying from 1 to 10; for each g, we run the command 10 times and take the average time:

$ numactl -N 0 hackbench -p -T -l 20000 -g $1
hackbench reports the time needed to complete a certain number of message transmissions between a certain number of tasks, for example:

$ numactl -N 0 hackbench -p -T -l 20000 -g 10
Running in threaded mode with 10 groups using 40 file descriptors each (== 400 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 8.874
Below is the result of hackbench w/ and w/o the patch (the last row is the improvement of w/ over w/o):

g        1        2        3        4        5        6        7        8        9       10
w/o   1.4777   2.0112   3.1919   4.2654   5.3246   6.4019   7.5939   8.7073   9.7526  10.8987
w/    1.4793   1.9344   2.9080   3.9267   4.8339   5.7186   6.6923   7.5088   8.3715   9.2173
                        +8.9%    +7.9%    +9.3%   +10.7%   +11.8%   +13.8%   +14.2%   +15.5%
Tracing the kernel while g=10 shows that select_idle_cpu() has a large chance of getting a CPU in the same cluster as the target, while it sometimes gets a CPU outside the cluster:

target  cpu
19  ->  17
13  ->  15
23  ->  20
23  ->  20
19  ->  17
13  ->  15
16  ->  17
19  ->  17
 7  ->   5
10  ->  11
23  ->  20
*23 ->   4
...
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
-v3:
 - rebased against 5.11-rc2
 - with respect to the comments of Valentin Schneider, Peter Zijlstra,
   Vincent Guittot and Mel Gorman etc.:
   * moved the scheduler changes from arm64 to the common place for all
     architectures;
   * added the SD_SHARE_CLS_RESOURCES sd_flag, specifying the sched_domain
     from which select_idle_cpu() should begin to scan;
   * removed the redundant select_idle_cluster() function since all the code
     is in select_idle_cpu() now; this also avoids scanning the cluster CPUs
     twice as the v2 code did;
   * redid the hackbench in one NUMA node after the above changes.
 arch/arm64/Kconfig             |  7 +++++++
 include/linux/sched/sd_flags.h |  9 +++++++++
 include/linux/sched/topology.h |  7 +++++++
 include/linux/topology.h       |  7 +++++++
 kernel/sched/fair.c            | 27 +++++++++++++++++++++------
 kernel/sched/topology.c        |  6 ++++++
 6 files changed, 57 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 05e1735..546cd61 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -973,6 +973,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (sharing an
+	  internal bus or sharing LLC cache tags). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index 34b21e9..fc3c894 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -100,6 +100,15 @@ SD_FLAG(SD_ASYM_CPUCAPACITY, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
 SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 /*
+ * Domain members share CPU cluster resources (i.e. llc cache tags)
+ *
+ * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
+ *               the cluster resources (such as llc tags and internal bus)
+ * NEEDS_GROUPS: Caches are shared between groups.
+ */
+SD_FLAG(SD_SHARE_CLS_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
+
+/*
  * Domain members share CPU package resources (i.e. caches)
  *
  * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778..846fcac 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
 }
 #endif
+#ifdef CONFIG_SCHED_CLUSTER +static inline int cpu_cluster_flags(void) +{ + return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES; +} +#endif + #ifdef CONFIG_SCHED_MC static inline int cpu_core_flags(void) { diff --git a/include/linux/topology.h b/include/linux/topology.h index bf2cc3c..81be614 100644 --- a/include/linux/topology.h +++ b/include/linux/topology.h @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu) } #endif
+#ifdef CONFIG_SCHED_CLUSTER +static inline const struct cpumask *cpu_cluster_mask(int cpu) +{ + return topology_cluster_cpumask(cpu); +} +#endif + static inline const struct cpumask *cpu_cpu_mask(int cpu) { return cpumask_of_node(cpu_to_node(cpu)); diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 04a3ce2..c14fae6 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -6145,6 +6145,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t { struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask); struct sched_domain *this_sd; + struct sched_domain *prev_ssd = NULL, *ssd; u64 avg_cost, avg_idle; u64 time; int this = smp_processor_id(); @@ -6174,15 +6175,29 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
time = cpu_clock(this);
-	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
-
-	for_each_cpu_wrap(cpu, cpus, target) {
-		if (!--nr)
-			return -1;
-		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+	/*
+	 * we first scan those child domains which declare they are sharing
+	 * cluster resources such as llc tags, internal busses; then scan
+	 * the whole llc
+	 */
+	for_each_domain(target, ssd) {
+		if ((ssd->flags & SD_SHARE_CLS_RESOURCES) || (ssd == sd)) {
+			cpumask_and(cpus, sched_domain_span(ssd), p->cpus_ptr);
+			if (prev_ssd)
+				cpumask_andnot(cpus, cpus, sched_domain_span(prev_ssd));
+			for_each_cpu_wrap(cpu, cpus, target) {
+				if (!--nr)
+					return -1;
+				if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+					goto done;
+			}
+			prev_ssd = ssd;
+		}
+		if (ssd == sd)
 			break;
 	}
+done:
 	time = cpu_clock(this) - time;
 	update_avg(&this_sd->avg_scan_cost, time);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 5d3675c..79030c9 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1361,6 +1361,7 @@ int __read_mostly node_reclaim_distance = RECLAIM_DISTANCE;
  */
 #define TOPOLOGY_SD_FLAGS		\
 	(SD_SHARE_CPUCAPACITY	|	\
+	 SD_SHARE_CLS_RESOURCES	|	\
 	 SD_SHARE_PKG_RESOURCES |	\
 	 SD_NUMA		|	\
 	 SD_ASYM_PACKING)
@@ -1480,6 +1481,11 @@ static struct sched_domain_topology_level default_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
 #endif
On Wed, 6 Jan 2021 at 09:35, Barry Song song.bao.hua@hisilicon.com wrote:
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data, but each cluster has local L3 tag. On the other hand, each clusters will share some internal system bus. This means cache coherence overhead inside one cluster is much less than the overhead across clusters.
This patch adds the sched_domain for clusters. On kunpeng 920, without this patch, domain0 of cpu0 would be MC with cpu0~cpu23 with ; with this patch, MC becomes domain1, a new domain0 "CLS" including cpu0-cpu3.
This will affect load balance. For example, without this patch, while cpu0 becomes idle, it will pull a task from cpu1-cpu15. With this patch, cpu0 will try to pull a task from cpu1-cpu3 first. This will have much less overhead of task migration.
On the other hand, while doing WAKE_AFFINE, this patch will try to find a core in the target cluster before scanning the whole llc domain. This means it will proactively use a core which has better affinity with target core at first.
Though it is named "cluster", architectures or machines can define its exact meaning of cluster as long as some cpus can share some resources in lower level than llc. So the implementation is applicable to all architectures.
Different cpus might have different resource sharing like L1, L2, cache tags, internal busses etc. Since it is hard to know where we should start to scan, this patch adds a SD_SHARE_CLS_RESOURCES rather than directly leveraging the existing
Not sure that we need this new flag. See more about this below.
You should have a look at https://lkml.org/lkml/2020/12/14/560, which reworks select_idle_core/cpu/smt.
SD_SHARE_PKG_RESOURCES flag. Architectures or machines can decide what is cluster and who should get SD_SHARE_CLS_RESOURCES. select_idle_cpu() will scan from the first sched_domain with SD_SHARE_CLS_RESOURCES.
The below is a hackbench result:
we run the below command with different -g parameter from 1 to 10, for each different g, we run the command 10 times and get the average time $ numactl -N 0 hackbench -p -T -l 20000 -g $1
hackbench will report the time which is needed to complete a certain number of messages transmissions between a certain number of tasks, for example: $ numactl -N 0 hackbench -p -T -l 20000 -g 10 Running in threaded mode with 10 groups using 40 file descriptors each (== 400 tasks) Each sender will pass 20000 messages of 100 bytes Time: 8.874
The below is the result of hackbench w/ and w/o the patch: g 1 2 3 4 5 6 7 8 9 10 w/o 1.4777 2.0112 3.1919 4.2654 5.3246 6.4019 7.5939 8.7073 9.7526 10.8987 w/ 1.4793 1.9344 2.9080 3.9267 4.8339 5.7186 6.6923 7.5088 8.3715 9.2173 +8.9% +7.9% +9.3% +10.7% +11.8% +13.8% +14.2% +15.5%
Tracing the kernel while g=10, it shows select_idle_cpu() has a large chance to get cpu in the same cluster with the target while it sometimes gets cpu outside the cluster: target cpu 19 -> 17 13 -> 15 23 -> 20 23 -> 20 19 -> 17 13 -> 15 16 -> 17 19 -> 17 7 -> 5 10 -> 11 23 -> 20 *23 -> 4 ...
Signed-off-by: Barry Song song.bao.hua@hisilicon.com
-v3:
- rebased againest 5.11-rc2
- with respect to the comments of Valentin Schneider, Peter Zijlstra, Vincent Guittot and Mel Gorman etc.
- moved the scheduler changes from arm64 to the common place for all architectures.
- added SD_SHARE_CLS_RESOURCES sd_flags specifying the sched_domain where select_idle_cpu() should begin to scan from
- removed redundant select_idle_cluster() function since all code is in select_idle_cpu() now. it also avoided scanning cluster cpus twice in v2 code;
- redo the hackbench in one numa after the above changes
arch/arm64/Kconfig | 7 +++++++ include/linux/sched/sd_flags.h | 9 +++++++++ include/linux/sched/topology.h | 7 +++++++ include/linux/topology.h | 7 +++++++ kernel/sched/fair.c | 27 +++++++++++++++++++++------ kernel/sched/topology.c | 6 ++++++ 6 files changed, 57 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 05e1735..546cd61 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -973,6 +973,13 @@ config SCHED_MC making when dealing with multi-core CPU chips at a cost of slightly increased overhead in some places. If unsure say N here.
+config SCHED_CLUSTER
bool "Cluster scheduler support"
help
Cluster scheduler support improves the CPU scheduler's decision
making when dealing with machines that have clusters(sharing internal
bus or sharing LLC cache tag). If unsure say N here.
config SCHED_SMT bool "SMT scheduler support" help diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h index 34b21e9..fc3c894 100644 --- a/include/linux/sched/sd_flags.h +++ b/include/linux/sched/sd_flags.h @@ -100,6 +100,15 @@ SD_FLAG(SD_ASYM_CPUCAPACITY, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS) SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
/*
- Domain members share CPU cluster resources (i.e. llc cache tags)
- SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
the cluster resouces (such as llc tags and internal bus)
- NEEDS_GROUPS: Caches are shared between groups.
- */
+SD_FLAG(SD_SHARE_CLS_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
+/*
- Domain members share CPU package resources (i.e. caches)
- SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h index 8f0f778..846fcac 100644 --- a/include/linux/sched/topology.h +++ b/include/linux/sched/topology.h @@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void) } #endif
+#ifdef CONFIG_SCHED_CLUSTER +static inline int cpu_cluster_flags(void) +{
return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
+} +#endif
#ifdef CONFIG_SCHED_MC static inline int cpu_core_flags(void) { diff --git a/include/linux/topology.h b/include/linux/topology.h index bf2cc3c..81be614 100644 --- a/include/linux/topology.h +++ b/include/linux/topology.h @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu) } #endif
+#ifdef CONFIG_SCHED_CLUSTER +static inline const struct cpumask *cpu_cluster_mask(int cpu) +{
return topology_cluster_cpumask(cpu);
+} +#endif
static inline const struct cpumask *cpu_cpu_mask(int cpu) { return cpumask_of_node(cpu_to_node(cpu)); diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 04a3ce2..c14fae6 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -6145,6 +6145,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t { struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask); struct sched_domain *this_sd;
struct sched_domain *prev_ssd = NULL, *ssd; u64 avg_cost, avg_idle; u64 time; int this = smp_processor_id();
@@ -6174,15 +6175,29 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
time = cpu_clock(this);
cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
for_each_cpu_wrap(cpu, cpus, target) {
if (!--nr)
return -1;
if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
/*
* we first scan those child domains who declare they are sharing
* cluster resources such as llc tags, internal busses; then scan
* the whole llc
*/
for_each_domain(target, ssd) {
I don't like looping over the sched domains in the fast path. Instead you should follow a policy similar to the one used for SMT in the rework patchset mentioned above: in select_idle_core(), you can add a static key for the cluster case and loop over cpu_cluster_mask(cpu), similarly to what is done with cpu_smt_mask(), but looking for one idle CPU in cpu_cluster_mask() instead of requiring all CPUs to be idle as in the SMT case.
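A rough sketch of that suggestion, for clarity (sched_cluster_active and select_idle_cluster() are hypothetical names, and this is not code taken from the rework patchset):

static DEFINE_STATIC_KEY_FALSE(sched_cluster_active);

/* Look for one idle CPU inside the target's cluster before widening the scan. */
static int select_idle_cluster(struct task_struct *p, int target)
{
	int cpu;

	if (!static_branch_likely(&sched_cluster_active))
		return -1;

	for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
			continue;
		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
			return cpu;
	}

	return -1;
}

Unlike select_idle_core(), which needs all SMT siblings of a core to be idle, this only needs a single idle CPU within the cluster.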
if ((ssd->flags & SD_SHARE_CLS_RESOURCES) || (ssd == sd)) {
cpumask_and(cpus, sched_domain_span(ssd), p->cpus_ptr);
if (prev_ssd)
cpumask_andnot(cpus, cpus, sched_domain_span(prev_ssd));
for_each_cpu_wrap(cpu, cpus, target) {
if (!--nr)
return -1;
if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
goto done;
}
prev_ssd = ssd;
}
if (ssd == sd) break; }
+done: time = cpu_clock(this) - time; update_avg(&this_sd->avg_scan_cost, time);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c index 5d3675c..79030c9 100644 --- a/kernel/sched/topology.c +++ b/kernel/sched/topology.c @@ -1361,6 +1361,7 @@ int __read_mostly node_reclaim_distance = RECLAIM_DISTANCE; */ #define TOPOLOGY_SD_FLAGS \ (SD_SHARE_CPUCAPACITY | \
SD_SHARE_CLS_RESOURCES | \ SD_SHARE_PKG_RESOURCES | \ SD_NUMA | \ SD_ASYM_PACKING)
@@ -1480,6 +1481,11 @@ static struct sched_domain_topology_level default_topology[] = { #ifdef CONFIG_SCHED_SMT { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) }, #endif
+#ifdef CONFIG_SCHED_CLUSTER
{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
#ifdef CONFIG_SCHED_MC { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
#endif
2.7.4
-----Original Message----- From: Vincent Guittot [mailto:vincent.guittot@linaro.org] Sent: Thursday, January 7, 2021 5:29 AM To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com Cc: Valentin Schneider valentin.schneider@arm.com; Catalin Marinas catalin.marinas@arm.com; Will Deacon will@kernel.org; Rafael J. Wysocki rjw@rjwysocki.net; Cc: Len Brown lenb@kernel.org; gregkh@linuxfoundation.org; Jonathan Cameron jonathan.cameron@huawei.com; Ingo Molnar mingo@redhat.com; Peter Zijlstra peterz@infradead.org; Juri Lelli juri.lelli@redhat.com; Dietmar Eggemann dietmar.eggemann@arm.com; Steven Rostedt rostedt@goodmis.org; Ben Segall bsegall@google.com; Mel Gorman mgorman@suse.de; Mark Rutland mark.rutland@arm.com; Sudeep Holla sudeep.holla@arm.com; Aubrey Li aubrey.li@linux.intel.com; LAK linux-arm-kernel@lists.infradead.org; linux-kernel linux-kernel@vger.kernel.org; ACPI Devel Maling List linux-acpi@vger.kernel.org; linuxarm@openeuler.org; xuwei (O) xuwei5@huawei.com; Zengtao (B) prime.zeng@hisilicon.com; tiantao (H) tiantao6@hisilicon.com Subject: Re: [RFC PATCH v3 2/2] scheduler: add scheduler level for clusters
On Wed, 6 Jan 2021 at 09:35, Barry Song song.bao.hua@hisilicon.com wrote:
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data, but each cluster has local L3 tag. On the other hand, each clusters will share some internal system bus. This means cache coherence overhead inside one cluster is much less than the overhead across clusters.
This patch adds the sched_domain for clusters. On kunpeng 920, without this patch, domain0 of cpu0 would be MC with cpu0~cpu23 with ; with this patch, MC becomes domain1, a new domain0 "CLS" including cpu0-cpu3.
This will affect load balance. For example, without this patch, while cpu0 becomes idle, it will pull a task from cpu1-cpu15. With this patch, cpu0 will try to pull a task from cpu1-cpu3 first. This will have much less overhead of task migration.
On the other hand, while doing WAKE_AFFINE, this patch will try to find a core in the target cluster before scanning the whole llc domain. This means it will proactively use a core which has better affinity with target core at first.
Though it is named "cluster", architectures or machines can define its exact meaning of cluster as long as some cpus can share some resources in lower level than llc. So the implementation is applicable to all architectures.
Different cpus might have different resource sharing like L1, L2, cache tags, internal busses etc. Since it is hard to know where we should start to scan, this patch adds a SD_SHARE_CLS_RESOURCES rather than directly leveraging the existing
Not sure that we need this new flag. See more about this below
You should have a look at https://lkml.org/lkml/2020/12/14/560 which rework select_idle_core/cpu/smt
SD_SHARE_PKG_RESOURCES flag. Architectures or machines can decide what is cluster and who should get SD_SHARE_CLS_RESOURCES. select_idle_cpu() will scan from the first sched_domain with SD_SHARE_CLS_RESOURCES.
The below is a hackbench result:
we run the below command with different -g parameter from 1 to 10, for each different g, we run the command 10 times and get the average time $ numactl -N 0 hackbench -p -T -l 20000 -g $1
hackbench will report the time which is needed to complete a certain number of messages transmissions between a certain number of tasks, for example: $ numactl -N 0 hackbench -p -T -l 20000 -g 10 Running in threaded mode with 10 groups using 40 file descriptors each (== 400 tasks) Each sender will pass 20000 messages of 100 bytes Time: 8.874
The below is the result of hackbench w/ and w/o the patch: g 1 2 3 4 5 6 7 8 9 10 w/o 1.4777 2.0112 3.1919 4.2654 5.3246 6.4019 7.5939 8.7073 9.7526 10.8987 w/ 1.4793 1.9344 2.9080 3.9267 4.8339 5.7186 6.6923 7.5088 8.3715 9.2173 +8.9% +7.9% +9.3% +10.7% +11.8% +13.8% +14.2% +15.5%
Tracing the kernel while g=10, it shows select_idle_cpu() has a large chance to get cpu in the same cluster with the target while it sometimes gets cpu outside the cluster: target cpu 19 -> 17 13 -> 15 23 -> 20 23 -> 20 19 -> 17 13 -> 15 16 -> 17 19 -> 17 7 -> 5 10 -> 11 23 -> 20 *23 -> 4 ...
Signed-off-by: Barry Song song.bao.hua@hisilicon.com
-v3:
 - rebased against 5.11-rc2
 - addressed comments from Valentin Schneider, Peter Zijlstra, Vincent Guittot, Mel Gorman and others
 - moved the scheduler changes from arm64 to the common place for all architectures
 - added the SD_SHARE_CLS_RESOURCES sd_flag specifying the sched_domain from which select_idle_cpu() should begin to scan
 - removed the redundant select_idle_cluster() function since all the code is now in select_idle_cpu(); this also avoids scanning cluster cpus twice as the v2 code did
 - redid the hackbench runs within one NUMA node after the above changes
 arch/arm64/Kconfig             |  7 +++++++
 include/linux/sched/sd_flags.h |  9 +++++++++
 include/linux/sched/topology.h |  7 +++++++
 include/linux/topology.h       |  7 +++++++
 kernel/sched/fair.c            | 27 +++++++++++++++++++++------
 kernel/sched/topology.c        |  6 ++++++
 6 files changed, 57 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 05e1735..546cd61 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -973,6 +973,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (sharing an
+	  internal bus or LLC cache tags). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help

diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index 34b21e9..fc3c894 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -100,6 +100,15 @@ SD_FLAG(SD_ASYM_CPUCAPACITY, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
 SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 
 /*
+ * Domain members share CPU cluster resources (i.e. llc cache tags)
+ *
+ * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
+ *               the cluster resources (such as llc tags and internal bus)
+ * NEEDS_GROUPS: Caches are shared between groups.
+ */
+SD_FLAG(SD_SHARE_CLS_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
+
+/*
  * Domain members share CPU package resources (i.e. caches)
  *
  * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778..846fcac 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline int cpu_cluster_flags(void)
+{
+	return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
+}
+#endif
+
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {

diff --git a/include/linux/topology.h b/include/linux/topology.h
index bf2cc3c..81be614 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 04a3ce2..c14fae6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6145,6 +6145,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	struct sched_domain *this_sd;
+	struct sched_domain *prev_ssd = NULL, *ssd;
 	u64 avg_cost, avg_idle;
 	u64 time;
 	int this = smp_processor_id();
@@ -6174,15 +6175,29 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	time = cpu_clock(this);
 
-	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
-
-	for_each_cpu_wrap(cpu, cpus, target) {
-		if (!--nr)
-			return -1;
-		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+	/*
+	 * we first scan those child domains who declare they are sharing
+	 * cluster resources such as llc tags, internal busses; then scan
+	 * the whole llc
+	 */
+	for_each_domain(target, ssd) {
I don't like looping over sched domains in the fast path. Instead you should follow a policy similar to what the rework patchset mentioned above does for SMT: in select_idle_core(), add a static key for the cluster case and loop over cpu_cluster_mask(cpu), similarly to what is done with cpu_smt_mask(), but looking for one idle cpu in cpu_cluster_mask() instead of all cpus as in the SMT case.
Hi Vincent, Thanks!
Probably I got the idea to iterate sched_domains from Valentin's comment: https://lore.kernel.org/lkml/jhj1rg9v7gr.mognet@arm.com/
Just in case hardware has a hierarchy of clusters, we might want to scan the smaller cluster first, then the bigger cluster:

            +-------------------+
            |                   |
            |  bigger cluster   |
            |                   |
            ++-----------------++
             |                 |
             |                 |
 +-----------+------+   +------+------------+
 |                  |   |                   |
 |  small cluster   |   |  small cluster    |
 |                  |   |                   |
 +------------------+   +-------------------+
But you are right. Anyway, the current code supports only one layer of cluster, which makes neither SD_SHARE_CLS_RESOURCES nor looping over sched_domains necessary.
+		if ((ssd->flags & SD_SHARE_CLS_RESOURCES) || (ssd == sd)) {
+			cpumask_and(cpus, sched_domain_span(ssd), p->cpus_ptr);
+			if (prev_ssd)
+				cpumask_andnot(cpus, cpus,
+					       sched_domain_span(prev_ssd));
+			for_each_cpu_wrap(cpu, cpus, target) {
+				if (!--nr)
+					return -1;
+				if (available_idle_cpu(cpu) ||
+				    sched_idle_cpu(cpu))
+					goto done;
+			}
+			prev_ssd = ssd;
+		}
+		if (ssd == sd)
+			break;
+	}
 
+done:
 	time = cpu_clock(this) - time;
 	update_avg(&this_sd->avg_scan_cost, time);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 5d3675c..79030c9 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1361,6 +1361,7 @@ int __read_mostly node_reclaim_distance = RECLAIM_DISTANCE;
  */
 #define TOPOLOGY_SD_FLAGS		\
 	(SD_SHARE_CPUCAPACITY	|	\
+	 SD_SHARE_CLS_RESOURCES	|	\
 	 SD_SHARE_PKG_RESOURCES	|	\
 	 SD_NUMA		|	\
 	 SD_ASYM_PACKING)
@@ -1480,6 +1481,11 @@ static struct sched_domain_topology_level default_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
 #endif
2.7.4
Thanks Barry
On 1/6/21 12:30 AM, Barry Song wrote:
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data while each cluster has local L3 tag. On the other hand, each cluster will share some internal system bus. This means cache is much more affine inside one cluster than across clusters.
[ASCII diagram of the Kunpeng 920 cluster / shared-L3 topology, quoted from the cover letter]
There is a similar need for clustering in x86. Some x86 cores could share L2 caches, which is similar to the cluster in Kunpeng 920 (e.g. on Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing L3). Having a sched domain at the L2 cluster helps spread load among L2 domains. This will reduce L2 cache contention and help with performance for low to moderate load scenarios.
The cluster detection mechanism will need to be based on L2 cache sharing in this case. I suggest making the cluster detection to be CPU architecture dependent so both ARM64 and x86 use cases can be accommodated.
Attached below are two RFC patches for creating x86 L2 cache sched domain, sans the idle cpu selection on wake up code. It is similar enough in concept to Barry's patch that we should have a single patchset that accommodates both use cases.
Thanks.
Tim
From e0e7e42e1a033c9634723ff1dc80b426deeec1e9 Mon Sep 17 00:00:00 2001
Message-Id: e0e7e42e1a033c9634723ff1dc80b426deeec1e9.1609970726.git.tim.c.chen@linux.intel.com
In-Reply-To: cover.1609970726.git.tim.c.chen@linux.intel.com
References: cover.1609970726.git.tim.c.chen@linux.intel.com
From: Tim Chen tim.c.chen@linux.intel.com
Date: Wed, 19 Aug 2020 16:22:35 -0700
Subject: [RFC PATCH 1/2] sched: Add L2 cache cpu mask
There are x86 CPU architectures (e.g. Jacobsville) where the L2 cache is shared among a group of cores instead of being exclusive to one single core.
To prevent oversubscription of L2 cache, load could be balanced between such L2 domains.
Add CPU masks of CPUs sharing the L2 cache so we can build such L2 scheduler domain for load balancing at the L2 level.
Signed-off-by: Tim Chen tim.c.chen@linux.intel.com
---
 arch/x86/include/asm/topology.h |  1 +
 arch/x86/kernel/smpboot.c       | 12 ++++++++++++
 2 files changed, 13 insertions(+)
diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h index f4234575f3fd..e35f5f55cb15 100644 --- a/arch/x86/include/asm/topology.h +++ b/arch/x86/include/asm/topology.h @@ -103,6 +103,7 @@ static inline void setup_node_to_cpumask_map(void) { } #include <asm-generic/topology.h>
extern const struct cpumask *cpu_coregroup_mask(int cpu); +extern const struct cpumask *cpu_l2group_mask(int cpu);
#define topology_logical_package_id(cpu) (cpu_data(cpu).logical_proc_id) #define topology_physical_package_id(cpu) (cpu_data(cpu).phys_proc_id) diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c index 27aa04a95702..8ba0b505f020 100644 --- a/arch/x86/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -56,6 +56,7 @@ #include <linux/numa.h> #include <linux/pgtable.h> #include <linux/overflow.h> +#include <linux/cacheinfo.h>
#include <asm/acpi.h> #include <asm/desc.h> @@ -643,6 +644,17 @@ const struct cpumask *cpu_coregroup_mask(int cpu) return cpu_llc_shared_mask(cpu); }
+const struct cpumask *cpu_l2group_mask(int cpu)
+{
+	struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);
+
+	/* Sanity check for presence of L2, leaf index 2 */
+	if (ci->num_leaves < 3)
+		return topology_sibling_cpumask(cpu);
+
+	return &ci->info_list[2].shared_cpu_map;
+}
+
 static void impress_friends(void)
 {
 	int cpu;
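Independently of the kernel-internal cpu_l2group_mask() above, the same sharing information can be inspected from user space through the cacheinfo sysfs files. The small sketch below prints shared_cpu_list for cache index2 and index3 of cpu0; treating index2 as the unified L2 (and index3 as the L3) is an assumption that holds on many x86 parts but should be checked against the "level" file on a given machine:

#include <stdio.h>

/*
 * Print which cpus share the cache at a given cacheinfo index with 'cpu'.
 * On many x86 systems index2 is the unified L2 and index3 the L3; adjust
 * the index for other machines.
 */
static void print_shared(int cpu, int index)
{
	char path[128], buf[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/cache/index%d/shared_cpu_list",
		 cpu, index);
	f = fopen(path, "r");
	if (!f || !fgets(buf, sizeof(buf), f)) {
		printf("cpu%d index%d: no cacheinfo\n", cpu, index);
		if (f)
			fclose(f);
		return;
	}
	printf("cpu%d index%d shared with cpus: %s", cpu, index, buf);
	fclose(f);
}

int main(void)
{
	print_shared(0, 2);	/* usually the L2 */
	print_shared(0, 3);	/* usually the L3, if present */
	return 0;
}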
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data while each cluster has local L3 tag. On the other hand, each cluster will share some internal system bus. This means cache is much more affine inside one cluster than across clusters.
There is a similar need for clustering in x86. Some x86 cores could share L2 caches that is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing L3). Having a sched domain at the L2 cluster helps spread load among L2 domains. This will reduce L2 cache contention and help with performance for low to moderate load scenarios.
IIUC, you are arguing for the exact opposite behaviour, i.e. balancing between L2 caches while Barry is after consolidating tasks within the boundaries of a L3 tag cache. One helps cache utilization, the other communication latency between tasks. Am I missing something?
IMHO, we need some numbers on the table to say which way to go. Looking at just benchmarks of one type doesn't show that this is a good idea in general.
Morten
On 1/8/21 7:12 AM, Morten Rasmussen wrote:
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data while each cluster has local L3 tag. On the other hand, each cluster will share some internal system bus. This means cache is much more affine inside one cluster than across clusters.
There is a similar need for clustering in x86. Some x86 cores could share L2 caches that is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing L3). Having a sched domain at the L2 cluster helps spread load among L2 domains. This will reduce L2 cache contention and help with performance for low to moderate load scenarios.
IIUC, you are arguing for the exact opposite behaviour, i.e. balancing between L2 caches while Barry is after consolidating tasks within the boundaries of a L3 tag cache. One helps cache utilization, the other communication latency between tasks. Am I missing something?
IMHO, we need some numbers on the table to say which way to go. Looking at just benchmarks of one type doesn't show that this is a good idea in general.
I think it is going to depend on the workload. If there are dependent tasks that communicate with one another, putting them together in the same cluster will be the right thing to do to reduce communication costs. On the other hand, if the tasks are independent, putting them together on the same cluster will increase resource contention and spreading them out will be better.
Any thoughts on what is the right clustering "tag" to use to clump related tasks together? Cgroup? Pid? Tasks with same mm?
Tim
On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
On 1/8/21 7:12 AM, Morten Rasmussen wrote:
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data while each cluster has local L3 tag. On the other hand, each cluster will share some internal system bus. This means cache is much more affine inside one cluster than across clusters.
There is a similar need for clustering in x86. Some x86 cores could share L2 caches that is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing L3). Having a sched domain at the L2 cluster helps spread load among L2 domains. This will reduce L2 cache contention and help with performance for low to moderate load scenarios.
IIUC, you are arguing for the exact opposite behaviour, i.e. balancing between L2 caches while Barry is after consolidating tasks within the boundaries of a L3 tag cache. One helps cache utilization, the other communication latency between tasks. Am I missing something?
IMHO, we need some numbers on the table to say which way to go. Looking at just benchmarks of one type doesn't show that this is a good idea in general.
I think it is going to depend on the workload. If there are dependent tasks that communicate with one another, putting them together in the same cluster will be the right thing to do to reduce communication costs. On the other hand, if the tasks are independent, putting them together on the same cluster will increase resource contention and spreading them out will be better.
Agree. That is exactly where I'm coming from. This is all about the task placement policy. We generally tend to spread tasks to avoid resource contention, SMT and caches, which seems to be what you are proposing to extend. I think that makes sense given it can produce significant benefits.
Any thoughts on what is the right clustering "tag" to use to clump related tasks together? Cgroup? Pid? Tasks with same mm?
I think this is the real question. I think the closest thing we have at the moment is the wakee/waker flip heuristic. This seems to be related. Perhaps the wake_affine tricks can serve as starting point?
Morten
On 11/01/2021 10:28, Morten Rasmussen wrote:
On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
On 1/8/21 7:12 AM, Morten Rasmussen wrote:
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
[...]
I think it is going to depend on the workload. If there are dependent tasks that communicate with one another, putting them together in the same cluster will be the right thing to do to reduce communication costs. On the other hand, if the tasks are independent, putting them together on the same cluster will increase resource contention and spreading them out will be better.
Agree. That is exactly where I'm coming from. This is all about the task placement policy. We generally tend to spread tasks to avoid resource contention, SMT and caches, which seems to be what you are proposing to extend. I think that makes sense given it can produce significant benefits.
Any thoughts on what is the right clustering "tag" to use to clump related tasks together? Cgroup? Pid? Tasks with same mm?
I think this is the real question. I think the closest thing we have at the moment is the wakee/waker flip heuristic. This seems to be related. Perhaps the wake_affine tricks can serve as starting point?
wake_wide() switches between packing (select_idle_sibling(), llc_size CPUs) and spreading (find_idlest_cpu(), all CPUs).
AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently all wakeups are (llc-)packed.
select_task_rq_fair()

    for_each_domain(cpu, tmp)
        if (tmp->flags & sd_flag)
            sd = tmp;
In case we would like to further distinguish between llc-packing and even narrower (cluster or MC-L2) packing, we would introduce a 2nd-level packing vs. spreading heuristic further down in sis().
IMHO, Barry's current implementation doesn't do this right now. Instead he's trying to pack on cluster first and if not successful look further among the remaining llc CPUs for an idle CPU.
-----Original Message-----
From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
Sent: Wednesday, January 13, 2021 12:00 AM
To: Morten Rasmussen morten.rasmussen@arm.com; Tim Chen tim.c.chen@linux.intel.com
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler
On 11/01/2021 10:28, Morten Rasmussen wrote:
On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
On 1/8/21 7:12 AM, Morten Rasmussen wrote:
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
[...]
I think it is going to depend on the workload. If there are dependent tasks that communicate with one another, putting them together in the same cluster will be the right thing to do to reduce communication costs. On the other hand, if the tasks are independent, putting them together
on the same cluster
will increase resource contention and spreading them out will be better.
Agree. That is exactly where I'm coming from. This is all about the task placement policy. We generally tend to spread tasks to avoid resource contention, SMT and caches, which seems to be what you are proposing to extend. I think that makes sense given it can produce significant benefits.
Any thoughts on what is the right clustering "tag" to use to clump related tasks together? Cgroup? Pid? Tasks with same mm?
I think this is the real question. I think the closest thing we have at the moment is the wakee/waker flip heuristic. This seems to be related. Perhaps the wake_affine tricks can serve as starting point?
wake_wide() switches between packing (select_idle_sibling(), llc_size CPUs) and spreading (find_idlest_cpu(), all CPUs).
AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently all wakeups are (llc-)packed.
Sorry for late response. I was struggling with some other topology issues recently.
For "all wakeups are (llc-)packed", it seems you mean current want_affine is only affecting the new_cpu, and for wake-up path, we will always go to select_idle_sibling() rather than find_idlest_cpu() since nobody sets SD_WAKE_BALANCE in any sched_domain ?
select_task_rq_fair()
for_each_domain(cpu, tmp)
if (tmp->flags & sd_flag) sd = tmp;
In case we would like to further distinguish between llc-packing and even narrower (cluster or MC-L2)-packing, we would introduce a 2. level packing vs. spreading heuristic further down in sis().
I didn't get your point on "2nd-level packing". Would you like to describe it more? It seems you mean we need separate handling of avg_scan_cost and the sched_feat(SIS_) features for the cluster (or MC-L2) since cluster and llc are not at the same level physically?
IMHO, Barry's current implementation doesn't do this right now. Instead he's trying to pack on cluster first and if not successful look further among the remaining llc CPUs for an idle CPU.
Yes. That is exactly what the current patch is doing.
Thanks Barry
On 25/01/2021 11:50, Song Bao Hua (Barry Song) wrote:
-----Original Message-----
From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
Sent: Wednesday, January 13, 2021 12:00 AM
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler
On 11/01/2021 10:28, Morten Rasmussen wrote:
On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
On 1/8/21 7:12 AM, Morten Rasmussen wrote:
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
[...]
wake_wide() switches between packing (select_idle_sibling(), llc_size CPUs) and spreading (find_idlest_cpu(), all CPUs).
AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently all wakeups are (llc-)packed.
Sorry for late response. I was struggling with some other topology issues recently.
For "all wakeups are (llc-)packed", it seems you mean current want_affine is only affecting the new_cpu, and for wake-up path, we will always go to select_idle_sibling() rather than find_idlest_cpu() since nobody sets SD_WAKE_BALANCE in any sched_domain ?
select_task_rq_fair()
for_each_domain(cpu, tmp)
if (tmp->flags & sd_flag) sd = tmp;
In case we would like to further distinguish between llc-packing and even narrower (cluster or MC-L2)-packing, we would introduce a 2. level packing vs. spreading heuristic further down in sis().
I didn't get your point on "2 level packing". Would you like to describe more? It seems you mean we need to have separate calculation for avg_scan_cost and sched_feat(SIS_) for cluster (or MC-L2) since cluster and llc are not in the same level physically?
By '1st-level packing' I meant going through sis() (i.e. sd = per_cpu(sd_llc, target)) instead of routing WF_TTWU through find_idlest_cpu(), which uses a broader sd span (in case all sd's, or at least sd's above the llc, had SD_BALANCE_WAKE set). wake_wide() (the wakee/waker flip heuristic) is currently used to make this decision. But since no sd sets SD_BALANCE_WAKE, we always go through sis() for WF_TTWU.
'2nd-level packing' would be the decision between cluster- and llc-packing. The question was which heuristic could be used here.
IMHO, Barry's current implementation doesn't do this right now. Instead he's trying to pack on cluster first and if not successful look further among the remaining llc CPUs for an idle CPU.
Yes. That is exactly what the current patch is doing.
And this will be favoring cluster- over llc-packing for each task instead.
-----Original Message-----
From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
Sent: Wednesday, January 13, 2021 12:00 AM
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler
On 11/01/2021 10:28, Morten Rasmussen wrote:
On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
On 1/8/21 7:12 AM, Morten Rasmussen wrote:
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
[...]
I think it is going to depend on the workload. If there are dependent tasks that communicate with one another, putting them together in the same cluster will be the right thing to do to reduce communication costs. On the other hand, if the tasks are independent, putting them together
on the same cluster
will increase resource contention and spreading them out will be better.
Agree. That is exactly where I'm coming from. This is all about the task placement policy. We generally tend to spread tasks to avoid resource contention, SMT and caches, which seems to be what you are proposing to extend. I think that makes sense given it can produce significant benefits.
Any thoughts on what is the right clustering "tag" to use to clump related tasks together? Cgroup? Pid? Tasks with same mm?
I think this is the real question. I think the closest thing we have at the moment is the wakee/waker flip heuristic. This seems to be related. Perhaps the wake_affine tricks can serve as starting point?
wake_wide() switches between packing (select_idle_sibling(), llc_size CPUs) and spreading (find_idlest_cpu(), all CPUs).
AFAICS, since none of the sched domains set SD_BALANCE_WAKE, currently all wakeups are (llc-)packed.
select_task_rq_fair()
for_each_domain(cpu, tmp)
if (tmp->flags & sd_flag) sd = tmp;
In case we would like to further distinguish between llc-packing and even narrower (cluster or MC-L2)-packing, we would introduce a 2. level packing vs. spreading heuristic further down in sis().
IMHO, Barry's current implementation doesn't do this right now. Instead he's trying to pack on cluster first and if not successful look further among the remaining llc CPUs for an idle CPU.
Right now, in the main cases where wake_affine is used to achieve better performance, processes are actually bound within one NUMA node, which is also one LLC on Kunpeng 920.
Probably LLC=NUMA is also true for X86 Jacobsville, Tim?
So one possible way to approximate 2-level packing might be: if the affinity cpusets of the waker and the wakee are both subsets of one same LLC, we use only the cluster as the factor to decide packing or not and ignore the LLC.
I haven't really done this, but the below code can produce the same result by forcing llc_id = cluster_id:
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index d72eb8d..3d78097 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -107,7 +107,7 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].cluster_id = topology_id;
 			topology_id = find_acpi_cpu_topology_package(cpu);
 			cpu_topology[cpu].package_id = topology_id;
-
+#if 0
 			i = acpi_find_last_cache_level(cpu);
 
 			if (i > 0) {
@@ -119,8 +119,11 @@ int __init parse_acpi_topology(void)
 				if (cache_id > 0)
 					cpu_topology[cpu].llc_id = cache_id;
 			}
-	}
+#else
+			cpu_topology[cpu].llc_id = cpu_topology[cpu].cluster_id;
+#endif
+	}
 	return 0;
 }
 #endif
With this, I have seen some major improvement in hackbench, especially for the pair-wise communication model (fds_num=1, one sender for one receiver):
numactl -N 0 hackbench -p -T -l 200000 -f 1 -g $1
I have tested -g (group_nums) values of 6, 12, 18, 24, 28 and 32. For each g, I ran the command 20 times and took the average value. The result is as below:
g=        6        12       18       24       28       32
w/o    1.3243   1.6741   1.7560   1.9036   2.0262   2.1826
w/     1.1314   1.1864   1.4494   1.6159   1.9078   2.1249
Using top -H and hitting "f" to show the cpu of each thread, I can see that the two threads in one group usually run in the same cluster. That's why the hackbench latency decreases so much.
Thanks Barry
On 4/13/21 3:45 AM, Song Bao Hua (Barry Song) wrote:
Right now in the main cases of using wake_affine to achieve better performance, processes are actually bound within one numa which is also a LLC in kunpeng920.
Probably LLC=NUMA is also true for X86 Jacobsville, Tim?
In general for x86, LLC is partitioned at the sub-numa cluster level. LLC could be divided between sub-numa cluster within a NUMA node. That said, for Jacobsville, we don't have sub-numa clusters so LLC=NUMA is true for that platform.
Tim
-----Original Message-----
From: Morten Rasmussen [mailto:morten.rasmussen@arm.com]
Sent: Saturday, January 9, 2021 4:13 AM
To: Tim Chen tim.c.chen@linux.intel.com
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data while each cluster has local L3 tag. On the other hand, each cluster will share some internal system bus. This means cache is much more affine inside one cluster than across clusters.
There is a similar need for clustering in x86. Some x86 cores could share
L2 caches that
is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing
L3).
Having a sched domain at the L2 cluster helps spread load among L2 domains. This will reduce L2 cache contention and help with performance for low to moderate load scenarios.
IIUC, you are arguing for the exact opposite behaviour, i.e. balancing between L2 caches while Barry is after consolidating tasks within the boundaries of a L3 tag cache. One helps cache utilization, the other communication latency between tasks. Am I missing something?
Morten, this is not true.
We are actually both looking for the same behavior. My patch does the exact same spreading as Tim's patch.
Consider the below two cases.

Case 1: we have two tasks without any relationship running in a system with 2 clusters and 8 cpus.

Without the sched_domain of cluster, these two tasks might be put as below:
+-------------------+   +-----------------+
| +----+  +----+    |   |                 |
| |task|  |task|    |   |                 |
| |1   |  |2   |    |   |                 |
| +----+  +----+    |   |                 |
|                   |   |                 |
|      cluster1     |   |    cluster2     |
+-------------------+   +-----------------+

With the sched_domain of cluster, load balance will spread them as below:
+-------------------+   +-----------------+
| +----+            |   | +----+          |
| |task|            |   | |task|          |
| |1   |            |   | |2   |          |
| +----+            |   | +----+          |
|                   |   |                 |
|      cluster1     |   |    cluster2     |
+-------------------+   +-----------------+
Then task1 and task2 get more cache and less cache contention. They will get better performance.
That is also what my original patch achieves, and what Tim's patch is doing. Once we add a sched_domain, load balance gets involved.
Case 2: we have 8 tasks running in a system with 2 clusters and 8 cpus, but they are working in 4 groups:
Task1 wakes up task4
Task2 wakes up task5
Task3 wakes up task6
Task4 wakes up task7
With my change in select_idle_sibling(), the WAKE_AFFINE mechanism will try to put task1 and 4, task2 and 5, task3 and 6, task4 and 7 in the same cluster rather than scattering them randomly across the 8 cpus. However, the 8 tasks still spread among the 8 cpus with my change in select_idle_sibling(), as load balance is still working.
+---------------------------+   +----------------------+
| +----+   +-----+          |   | +----+   +-----+     |
| |task|   |task |          |   | |task|   |task |     |
| |1   |   | 4   |          |   | |2   |   |5    |     |
| +----+   +-----+          |   | +----+   +-----+     |
|                           |   |                      |
|        cluster1           |   |      cluster2        |
|                           |   |                      |
| +-----+  +------+         |   | +-----+  +------+    |
| |task |  | task |         |   | |task |  |task  |    |
| |3    |  | 6    |         |   | |4    |  |8     |    |
| +-----+  +------+         |   | +-----+  +------+    |
+---------------------------+   +----------------------+
Let's consider the 3rd case; that one would be more tricky:
task1 and task2 have a close relationship and they are a waker-wakee pair. With my current patch, select_idle_sibling() wants to put them in one cluster while load balance wants to put them in two clusters, and load balance will win. Then maybe we need some mechanism like the NUMA imbalance adjustment: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/ke... If we permit a light imbalance between clusters, select_idle_sibling() will win, and task1 and task2 get better cache affinity.
The 3rd case could be our goal for next step.
Thanks Barry
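To make the 3rd case above a bit more concrete, here is a toy model of the kind of "allow a small imbalance between clusters" rule being floated; the helper name, the threshold of 2 and the half-full cutoff are all made up for illustration, loosely mimicking the idea of the referenced NUMA-imbalance change rather than any actual kernel code:

#include <stdio.h>
#include <stdbool.h>

/*
 * Toy model of allowing a small imbalance between cluster groups:
 * if the busier cluster only leads by a couple of tasks and both
 * clusters are far from full, leave a waker/wakee pair where it is
 * so the two tasks keep sharing the cluster's L3 tag / internal bus.
 * The threshold of 2 and the half-full cutoff are arbitrary and only
 * for illustration.
 */
static bool keep_cluster_imbalance(int busiest_nr, int dst_nr, int cluster_weight)
{
	int imbalance = busiest_nr - dst_nr;

	/* only tolerate this while the busier cluster is lightly loaded */
	if (busiest_nr > cluster_weight / 2)
		return false;

	return imbalance <= 2;
}

int main(void)
{
	/* 4-cpu clusters: a waker/wakee pair vs. an empty cluster */
	printf("2 vs 0 tasks: %s\n",
	       keep_cluster_imbalance(2, 0, 4) ? "leave them packed" : "balance");
	/* heavier load: spread as usual */
	printf("3 vs 0 tasks: %s\n",
	       keep_cluster_imbalance(3, 0, 4) ? "leave them packed" : "balance");
	return 0;
}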
On 08/01/2021 22:30, Song Bao Hua (Barry Song) wrote:
-----Original Message-----
From: Morten Rasmussen [mailto:morten.rasmussen@arm.com]
Sent: Saturday, January 9, 2021 4:13 AM
To: Tim Chen tim.c.chen@linux.intel.com
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data while each cluster has local L3 tag. On the other hand, each cluster will share some internal system bus. This means cache is much more affine inside one cluster than across clusters.
There is a similar need for clustering in x86. Some x86 cores could share
L2 caches that
is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing
L3).
Having a sched domain at the L2 cluster helps spread load among L2 domains. This will reduce L2 cache contention and help with performance for low to moderate load scenarios.
IIUC, you are arguing for the exact opposite behaviour, i.e. balancing between L2 caches while Barry is after consolidating tasks within the boundaries of a L3 tag cache. One helps cache utilization, the other communication latency between tasks. Am I missing something?
Morten, this is not true.
we are both actually looking for the same behavior. My patch also has done the exact same behavior of spreading with Tim's patch.
That's the case for the load-balance path because of the extra Sched Domain (SD) (CLS/MC_L2) below MC.
But in wakeup you add code which leads to a different packing strategy.
It looks like Tim's workload (SPECrate mcf) shows a performance boost solely because of the changes that the additional MC_L2 SD introduces in load balance. The wakeup path is unchanged, i.e. llc-packing. IMHO we have to carefully distinguish between packing vs. spreading in wakeup and load-balance here.
Considering the below two cases: Case 1. we have two tasks without any relationship running in a system with 2 clusters and 8 cpus.
Without the sched_domain of cluster, these two tasks might be put as below: +-------------------+ +-----------------+ | +----+ +----+ | | | | |task| |task| | | | | |1 | |2 | | | | | +----+ +----+ | | | | | | | | cluster1 | | cluster2 | +-------------------+ +-----------------+ With the sched_domain of cluster, load balance will spread them as below: +-------------------+ +-----------------+ | +----+ | | +----+ | | |task| | | |task| | | |1 | | | |2 | | | +----+ | | +----+ | | | | | | cluster1 | | cluster2 | +-------------------+ +-----------------+
Then task1 and tasks2 get more cache and decrease cache contention. They will get better performance.
That is what my original patch also can make. And tim's patch is also doing. Once we add a sched_domain, load balance will get involved.
Case 2. we have 8 tasks, running in a system with 2 clusters and 8 cpus. But they are working in 4 groups: Task1 wakes up task4 Task2 wakes up task5 Task3 wakes up task6 Task4 wakes up task7
With my changing in select_idle_sibling, the WAKE_AFFINE mechanism will try to put task1 and 4, task2 and 5, task3 and 6, task4 and 7 in same clusters rather than putting all of them in the random one of the 8 cpus. However, the 8 tasks are still spreading among the 8 cpus with my change in select_idle_sibling as load balance is still working.
+---------------------------+ +----------------------+ | +----+ +-----+ | | +----+ +-----+ | | |task| |task | | | |task| |task | | | |1 | | 4 | | | |2 | |5 | | | +----+ +-----+ | | +----+ +-----+ | | | | | | cluster1 | | cluster2 | | | | | | | | | | +-----+ +------+ | | +-----+ +------+ | | |task | | task | | | |task | |task | | | |3 | | 6 | | | |4 | |8 | | | +-----+ +------+ | | +-----+ +------+ | +---------------------------+ +----------------------+
Your use-case (#tasks, runtime/period) seems to be perfectly crafted to show the benefit of your patch on your specific system (cluster-size = 4). IMHO, this extra infrastructure especially in the wakeup path should show benefits over a range of different benchmarks.
Let's consider the 3rd case, that one would be more tricky:
task1 and task2 have close relationship and they are waker-wakee pair. With my current patch, select_idle_sidling() wants to put them in one cluster, load balance wants to put them in two clusters. Load balance will win. Then maybe we need some same mechanism like adjusting numa imbalance: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/ke... if we permit a light imbalance between clusters, select_idle_sidling() will win. And task1 and task2 get better cache affinity.
This would look weird to allow this kind of imbalance on CLS (MC_L2) and NUMA domains but not on the MC domain for example.
-----Original Message-----
From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
Sent: Wednesday, January 13, 2021 1:53 AM
To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com; Morten Rasmussen morten.rasmussen@arm.com; Tim Chen tim.c.chen@linux.intel.com
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler
On 08/01/2021 22:30, Song Bao Hua (Barry Song) wrote:
-----Original Message-----
From: Morten Rasmussen [mailto:morten.rasmussen@arm.com]
Sent: Saturday, January 9, 2021 4:13 AM
To: Tim Chen tim.c.chen@linux.intel.com
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data while each cluster has local L3 tag. On the other hand, each cluster will share some internal system bus. This means cache is much more affine inside one cluster than across clusters.
There is a similar need for clustering in x86. Some x86 cores could share
L2 caches that
is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6
clusters
of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing
L3).
Having a sched domain at the L2 cluster helps spread load among L2 domains. This will reduce L2 cache contention and help with performance for low to moderate load scenarios.
IIUC, you are arguing for the exact opposite behaviour, i.e. balancing between L2 caches while Barry is after consolidating tasks within the boundaries of a L3 tag cache. One helps cache utilization, the other communication latency between tasks. Am I missing something?
Morten, this is not true.
we are both actually looking for the same behavior. My patch also has done the exact same behavior of spreading with Tim's patch.
That's the case for the load-balance path because of the extra Sched Domain (SD) (CLS/MC_L2) below MC.
But in wakeup you add code which leads to a different packing strategy.
Yes, but I put a note for the 1st case: "Case 1. we have two tasks *without* any relationship running in a system with 2 clusters and 8 cpus"
so for tasks without wake-up relationship, the current patch will only result in spreading.
Anyway, I will also test Tim's benchmark on Kunpeng 920 with SCHED_CLUSTER to see what will happen. Till now, the benchmarks have only covered the case that shows the benefit of changing the wake-up path. I would also be interested in figuring out what we gain from the change in load_balance().
It looks like that Tim's workload (SPECrate mcf) shows a performance boost solely because of the changes the additional MC_L2 SD introduces in load balance. The wakeup path is unchanged, i.e. llc-packing. IMHO we have to carefully distinguish between packing vs. spreading in wakeup and load-balance here.
Considering the below two cases: Case 1. we have two tasks without any relationship running in a system with
2 clusters and 8 cpus.
Without the sched_domain of cluster, these two tasks might be put as below: +-------------------+ +-----------------+ | +----+ +----+ | | | | |task| |task| | | | | |1 | |2 | | | | | +----+ +----+ | | | | | | | | cluster1 | | cluster2 | +-------------------+ +-----------------+ With the sched_domain of cluster, load balance will spread them as below: +-------------------+ +-----------------+ | +----+ | | +----+ | | |task| | | |task| | | |1 | | | |2 | | | +----+ | | +----+ | | | | | | cluster1 | | cluster2 | +-------------------+ +-----------------+
Then task1 and tasks2 get more cache and decrease cache contention. They will get better performance.
That is what my original patch also can make. And tim's patch is also doing. Once we add a sched_domain, load balance will get involved.
Case 2. we have 8 tasks, running in a system with 2 clusters and 8 cpus. But they are working in 4 groups: Task1 wakes up task4 Task2 wakes up task5 Task3 wakes up task6 Task4 wakes up task7
With my changing in select_idle_sibling, the WAKE_AFFINE mechanism will try to put task1 and 4, task2 and 5, task3 and 6, task4 and 7 in same clusters
rather
than putting all of them in the random one of the 8 cpus. However, the 8 tasks are still spreading among the 8 cpus with my change in select_idle_sibling as load balance is still working.
+---------------------------+ +----------------------+ | +----+ +-----+ | | +----+ +-----+ | | |task| |task | | | |task| |task | | | |1 | | 4 | | | |2 | |5 | | | +----+ +-----+ | | +----+ +-----+ | | | | | | cluster1 | | cluster2 | | | | | | | | | | +-----+ +------+ | | +-----+ +------+ | | |task | | task | | | |task | |task | | | |3 | | 6 | | | |4 | |8 | | | +-----+ +------+ | | +-----+ +------+ | +---------------------------+ +----------------------+
Your use-case (#tasks, runtime/period) seems to be perfectly crafted to show the benefit of your patch on your specific system (cluster-size = 4). IMHO, this extra infrastructure especially in the wakeup path should show benefits over a range of different benchmarks.
Let's consider the 3rd case, that one would be more tricky:
task1 and task2 have close relationship and they are waker-wakee pair. With my current patch, select_idle_sidling() wants to put them in one cluster, load balance wants to put them in two clusters. Load balance will
win.
Then maybe we need some same mechanism like adjusting numa imbalance:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/ kernel/sched/fair.c?id=b396f52326de20
if we permit a light imbalance between clusters, select_idle_sidling() will win. And task1 and task2 get better cache affinity.
This would look weird to allow this kind of imbalance on CLS (MC_L2) and NUMA domains but not on the MC domain for example.
Yes. I guess I actually meant permitting imbalance between the sched_groups made by the child sched_cluster domains of the parent sched_mc domain.
                sched_mc domain
 +----------------------------------+
 |  +--------+       +----------+   |
 |  |sched_  |       |sched_    |   |
 |  |group   |       |group     |   |
 |  +--+-----+       +----+-----+   |
 |     |   allow small    |         |
 |     |   imbalance      |         |
 +----------------------------------+
       |                  |
       |                  |
       +                  +
  child domain:      child domain:
  sched_cluster      sched_cluster
For sched_groups within one sched_cluster domain, we don't allow this kind of imbalance.
Anyway, I would be happier if this kind of imbalance were only allowed when we know for sure that two tasks in the cluster have a wake-up relationship. Right now, SD_NUMA seems to simply allow this imbalance without any knowledge of the relationships between the tasks causing it.
Thanks Barry
-----Original Message-----
From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
Sent: Friday, January 8, 2021 12:17 PM
To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler
On 1/6/21 12:30 AM, Barry Song wrote:
ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each cluster has 4 cpus. All clusters share L3 cache data while each cluster has local L3 tag. On the other hand, each cluster will share some internal system bus. This means cache is much more affine inside one cluster than across clusters.
[ASCII diagram of the Kunpeng 920 cluster / shared-L3 topology, quoted from the cover letter]
There is a similar need for clustering in x86. Some x86 cores could share L2 caches that is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing L3). Having a sched domain at the L2 cluster helps spread load among L2 domains. This will reduce L2 cache contention and help with performance for low to moderate load scenarios.
The cluster detection mechanism will need to be based on L2 cache sharing in this case. I suggest making the cluster detection to be CPU architecture dependent so both ARM64 and x86 use cases can be accommodated.
Attached below are two RFC patches for creating x86 L2 cache sched domain, sans the idle cpu selection on wake up code. It is similar enough in concept to Barry's patch that we should have a single patchset that accommodates both use cases.
Hi Tim, agreed on this. Hopefully the RFC v4 I am preparing will cover your case.
Thanks.
Tim
Thanks Barry
On 2/3/21 3:32 AM, Song Bao Hua (Barry Song) wrote:
Attached below are two RFC patches for creating x86 L2 cache sched domain, sans the idle cpu selection on wake up code. It is similar enough in concept to Barry's patch that we should have a single patchset that accommodates both use cases.
Hi Tim, Agreed on this. hopefully the RFC v4 I am preparing will cover your case.
Barry,
I've taken a crack at it. Attached is a patch on top of your v3 patches to implement L2 cluster sched domain for x86.
Thanks.
Tim
---->8------
From 9189e489b019e110ee6e9d4183e243e48f44ff25 Mon Sep 17 00:00:00 2001
From: Tim Chen tim.c.chen@linux.intel.com
Date: Tue, 16 Feb 2021 08:24:39 -0800
Subject: [RFC PATCH] scheduler: Add cluster scheduler level for x86
There are x86 CPU architectures (e.g. Jacobsville) where the L2 cache is shared among a cluster of cores instead of being exclusive to one single core.
To prevent oversubscription of L2 cache, load should be balanced between such L2 clusters, especially for tasks with no shared data.
Also, with a cluster scheduling policy where tasks are woken up in the same L2 cluster, we benefit from keeping tasks that are related to each other, and likely sharing data, in the same L2 cluster.
Add CPU masks of CPUs sharing the L2 cache so we can build such L2 cluster scheduler domain.
Signed-off-by: Tim Chen tim.c.chen@linux.intel.com
---
 arch/x86/Kconfig                |  8 ++++++
 arch/x86/include/asm/smp.h      |  7 ++++++
 arch/x86/include/asm/topology.h |  1 +
 arch/x86/kernel/cpu/cacheinfo.c |  1 +
 arch/x86/kernel/cpu/common.c    |  3 +++
 arch/x86/kernel/smpboot.c       | 43 ++++++++++++++++++++++++++++++++-
 6 files changed, 62 insertions(+), 1 deletion(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 21f851179ff0..10fc95005df7 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1008,6 +1008,14 @@ config NR_CPUS This is purely to save memory: each supported CPU adds about 8KB to the kernel image.
+config SCHED_CLUSTER + bool "Cluster scheduler support" + default n + help + Cluster scheduler support improves the CPU scheduler's decision + making when dealing with machines that have clusters of CPUs + sharing L2 cache. If unsure say N here. + config SCHED_SMT def_bool y if SMP
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h index c0538f82c9a2..9cbc4ae3078f 100644 --- a/arch/x86/include/asm/smp.h +++ b/arch/x86/include/asm/smp.h @@ -16,7 +16,9 @@ DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_map); DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_die_map); /* cpus sharing the last level cache: */ DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map); +DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map); DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id); +DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id); DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
static inline struct cpumask *cpu_llc_shared_mask(int cpu) @@ -24,6 +26,11 @@ static inline struct cpumask *cpu_llc_shared_mask(int cpu) return per_cpu(cpu_llc_shared_map, cpu); }
+static inline struct cpumask *cpu_l2c_shared_mask(int cpu) +{ + return per_cpu(cpu_l2c_shared_map, cpu); +} + DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid); DECLARE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid); DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid); diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h index 9239399e5491..2a11ccc14fb1 100644 --- a/arch/x86/include/asm/topology.h +++ b/arch/x86/include/asm/topology.h @@ -103,6 +103,7 @@ static inline void setup_node_to_cpumask_map(void) { } #include <asm-generic/topology.h>
 extern const struct cpumask *cpu_coregroup_mask(int cpu);
+extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 #define topology_logical_package_id(cpu)	(cpu_data(cpu).logical_proc_id)
 #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 3ca9be482a9e..0d03a71e713e 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -846,6 +846,7 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
 		l2 = new_l2;
 #ifdef CONFIG_SMP
 		per_cpu(cpu_llc_id, cpu) = l2_id;
+		per_cpu(cpu_l2c_id, cpu) = l2_id;
 #endif
 	}
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 35ad8480c464..fb08c73d752c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -78,6 +78,9 @@ EXPORT_SYMBOL(smp_num_siblings);
 /* Last level cache ID of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
+/* L2 cache ID of each logical CPU */
+DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id) = BAD_APICID;
+
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 117e24fbfd8a..b9e8780b2617 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,8 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
+
 /* Per CPU bogomips and other parameters */
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
@@ -501,6 +503,21 @@ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 	return topology_sane(c, o, "llc");
 }
+static bool match_l2c(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+{
+	int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
+
+	/* Do not match if we do not have a valid APICID for cpu: */
+	if (per_cpu(cpu_l2c_id, cpu1) == BAD_APICID)
+		return false;
+
+	/* Do not match if L2 cache id does not match: */
+	if (per_cpu(cpu_l2c_id, cpu1) != per_cpu(cpu_l2c_id, cpu2))
+		return false;
+
+	return topology_sane(c, o, "l2c");
+}
+
 /*
  * Unlike the other levels, we do not enforce keeping a
  * multicore group inside a NUMA node. If this happens, we will
@@ -522,7 +539,7 @@ static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 }
-#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
+#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_CLUSTER) || defined(CONFIG_SCHED_MC)
 static inline int x86_sched_itmt_flags(void)
 {
 	return sysctl_sched_itmt_enabled ? SD_ASYM_PACKING : 0;
@@ -540,12 +557,21 @@ static int x86_smt_flags(void)
 	return cpu_smt_flags() | x86_sched_itmt_flags();
 }
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+static int x86_cluster_flags(void)
+{
+	return cpu_cluster_flags() | x86_sched_itmt_flags();
+}
+#endif
 #endif
 static struct sched_domain_topology_level x86_numa_in_package_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -556,6 +582,9 @@ static struct sched_domain_topology_level x86_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -583,6 +612,7 @@ void set_cpu_sibling_map(int cpu)
 	if (!has_mp) {
 		cpumask_set_cpu(cpu, topology_sibling_cpumask(cpu));
 		cpumask_set_cpu(cpu, cpu_llc_shared_mask(cpu));
+		cpumask_set_cpu(cpu, cpu_l2c_shared_mask(cpu));
 		cpumask_set_cpu(cpu, topology_core_cpumask(cpu));
 		cpumask_set_cpu(cpu, topology_die_cpumask(cpu));
 		c->booted_cores = 1;
@@ -598,6 +628,8 @@ void set_cpu_sibling_map(int cpu)
 		if ((i == cpu) || (has_mp && match_llc(c, o)))
 			link_mask(cpu_llc_shared_mask, cpu, i);
+		if ((i == cpu) || (has_mp && match_l2c(c, o)))
+			link_mask(cpu_l2c_shared_mask, cpu, i);
 	}
 	/*
@@ -649,6 +681,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return cpu_llc_shared_mask(cpu);
 }
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return cpu_l2c_shared_mask(cpu);
+}
+
 static void impress_friends(void)
 {
 	int cpu;
@@ -1332,6 +1369,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 		zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_die_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_llc_shared_map, i), GFP_KERNEL);
+		zalloc_cpumask_var(&per_cpu(cpu_l2c_shared_map, i), GFP_KERNEL);
 	}
 	/*
@@ -1556,7 +1594,10 @@ static void remove_siblinginfo(int cpu)
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
 	for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
 		cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));
+	for_each_cpu(sibling, cpu_l2c_shared_mask(cpu))
+		cpumask_clear_cpu(cpu, cpu_l2c_shared_mask(sibling));
 	cpumask_clear(cpu_llc_shared_mask(cpu));
+	cpumask_clear(cpu_l2c_shared_mask(cpu));
 	cpumask_clear(topology_sibling_cpumask(cpu));
 	cpumask_clear(topology_core_cpumask(cpu));
 	cpumask_clear(topology_die_cpumask(cpu));
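As a rough sanity check (not part of the patch): on a kernel built with CONFIG_SCHED_CLUSTER=y and CONFIG_SCHED_DEBUG=y, the new CLS level should appear when dumping the sched domain names. The path below is the procfs location used by kernels of this vintage, and the exact set of domains depends on the machine's topology, so the output is a sketch rather than a capture:

$ cat /proc/sys/kernel/sched_domain/cpu0/domain*/name
CLS
MC
NUMA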