On Fri, 5 Mar 2021 20:12:45 +0800 "tiantao (H)" <tiantao6@huawei.com> wrote:
Hi All:
I have found a strange phenomenon with smmu interrupt affinity.
The background is as follows.
First, I have a kunpeng920 with two CPUs and four numa nodes; each numa node has 32 cores, and each CPU has two smmu devices, which sit in numa 0 and numa 2 respectively.
Second, I found the irq number of the smmu evtq with the debug patch quoted below; in this case the irq number is 26.
As just communicated on espace: I did a bit of debugging on this and could not replicate it. Eventually I checked the git history, and it seems that irqbalance only recently (28 Jan) gained support for detecting the numa node of non-PCI devices. Before we go further with this I'd like to confirm that you have a recent enough version to include that support; before that change it was ignoring /proc/irq/N/node completely.
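For reference, the node the kernel reports for the irq can be read directly (irq 26 is the number from your report; given the smmu sits in numa 0, I would expect this to print 0):

localhost:~ # cat /proc/irq/26/node
0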
Otherwise, the internal representation of topology in irqbalance is rather old, so there are some hacks to make it not do anything too stupid on modern architectures. The assumption is that a numa node is a container of packages, whereas these days it is the other way around. The hack makes the package no larger than the numa node (confusion is caused because it doesn't rename the package id to take this into account).
On our system I think it works as follows (a conceptual sketch of the clamping follows below):
1) Numa nodes map 1-to-1 to packages.
2) Each package then contains the CPUs that map to a particular numa node.
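To make the clamping concrete, here is a minimal C sketch of the idea, assuming a kunpeng920-like layout (one package of 64 CPUs split across two 32-core nodes). The struct and function names are hypothetical illustration, not actual irqbalance code:

/* Conceptual sketch of the "package no larger than a numa node" hack.
 * Names are hypothetical; this is not irqbalance source. */
#include <stdio.h>

struct topo_obj {
	int id;        /* package or node id */
	int first_cpu;
	int last_cpu;
};

/* If a physical package spans several numa nodes, shrink the package
 * object to the (package, node) intersection so no package is larger
 * than a node.  The package keeps its original id, which is what makes
 * the resulting topology output confusing. */
static struct topo_obj clamp_package_to_node(struct topo_obj pkg,
					     struct topo_obj node)
{
	struct topo_obj clamped = pkg;

	if (clamped.first_cpu < node.first_cpu)
		clamped.first_cpu = node.first_cpu;
	if (clamped.last_cpu > node.last_cpu)
		clamped.last_cpu = node.last_cpu;
	return clamped;
}

int main(void)
{
	/* Assumed layout: package 0 has CPUs 0-63, node 1 has CPUs 32-63. */
	struct topo_obj pkg  = { .id = 0, .first_cpu = 0,  .last_cpu = 63 };
	struct topo_obj node = { .id = 1, .first_cpu = 32, .last_cpu = 63 };
	struct topo_obj c = clamp_package_to_node(pkg, node);

	printf("package %d clamped to cpus %d-%d\n",
	       c.id, c.first_cpu, c.last_cpu);
	return 0;
}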
Jonathan
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 8594b4a..ce894bc 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2945,6 +2945,7 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
 	switch (desc->platform.msi_index) {
 	case EVTQ_MSI_INDEX:
 		smmu->evtq.q.irq = desc->irq;
+		dev_err(smmu->dev, "evtq irq number %d\n", desc->irq);
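With this patch applied, the evtq irq number is printed at smmu probe time and can be pulled back out of the kernel log, e.g.:

localhost:~ # dmesg | grep "evtq irq number"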
The strange phenomenon is as follows. After the system boots normally, and while the system load is still very low, I can see that the irq's affinity is on numa 1 with the following command:

localhost:~ # cat /proc/irq/26/smp_affinity_list
32-63

But this is wrong; for correct affinity the irq should be set to numa 0. When I disable irqbalance via "systemctl disable irqbalance.service" and reboot the system, then confirm in the log that the irq number of the smmu evtq is still 26, I get the correct result:

localhost:~ # cat /proc/irq/26/smp_affinity_list
0-31
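As a stopgap, assuming irq 26 as above, the affinity can be written back by hand (needs root; note that a running irqbalance may rewrite it on its next pass unless the irq is excluded, e.g. with irqbalance's --banirq option):

localhost:~ # echo 0-31 > /proc/irq/26/smp_affinity_list
localhost:~ # cat /proc/irq/26/smp_affinity_list
0-31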
Why does irqbalance cause the affinity of smmu's evtq interrupt to be set to a numa node that the smmu device is not on?