Hi All:
I ran into a strange phenomenon with SMMU interrupt affinity.
The background is as follows.
First, I have a Kunpeng 920 machine with two CPUs and four NUMA nodes; each NUMA node has 32 cores, and each CPU has two SMMU devices, which are located on NUMA node 0 and NUMA node 2 respectively.
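The placement described above can be double-checked from sysfs. This is a minimal sketch using the standard Linux sysfs attributes; the `*smmu*` glob for the platform device path is an assumption and may need to be adjusted to the actual device names on a given system:

```shell
# CPUs belonging to each NUMA node:
for n in /sys/devices/system/node/node*/cpulist; do
    if [ -r "$n" ]; then
        echo "$n: $(cat "$n")"
    fi
done

# NUMA node of a platform device (e.g. an SMMU); -1 means "no node assigned".
# The "*smmu*" pattern is a guess -- substitute the real SMMU device path.
for d in /sys/devices/platform/*smmu*/numa_node; do
    if [ -r "$d" ]; then
        echo "$d: $(cat "$d")"
    fi
done
```

On the machine described here, one would expect two SMMU devices reporting node 0 and two reporting node 2.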
Second, I determined the IRQ number of the SMMU evtq with the following debug patch; in this case the IRQ number is 26:
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 8594b4a..ce894bc 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2945,6 +2945,7 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
 		switch (desc->platform.msi_index) {
 		case EVTQ_MSI_INDEX:
 			smmu->evtq.q.irq = desc->irq;
+			dev_err(smmu->dev, "evtq irq number %d\n", desc->irq);
The strange phenomenon is as follows. After the system boots normally, while the system load is still very low, I can see that IRQ 26 is affine to NUMA node 1:

localhost:~ # cat /proc/irq/26/smp_affinity_list
32-63

But this is wrong: for correct affinity the IRQ should be set to NUMA node 0, where the SMMU device is located. When I disable irqbalance via "systemctl disable irqbalance.service" and reboot the system, the log confirms that the SMMU evtq IRQ is still 26, and the same query now gives the correct result:

localhost:~ # cat /proc/irq/26/smp_affinity_list
0-31
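For checking this on more machines, here is a minimal sketch that tests whether an IRQ's affinity range intersects a node's cpulist. The `overlaps` helper is hypothetical and only handles a single "lo-hi" range, which matches the values seen in this report ("0-31", "32-63"); real cpulists can also be comma-separated:

```shell
# Return 0 (true) if two "lo-hi" CPU ranges intersect.
overlaps() {
    a_lo=${1%-*}; a_hi=${1#*-}
    b_lo=${2%-*}; b_hi=${2#*-}
    [ "$a_lo" -le "$b_hi" ] && [ "$b_lo" -le "$a_hi" ]
}

# Example with the values from this report: evtq IRQ 26 pinned to 32-63
# while the SMMU sits on node 0 (CPUs 0-31) -- no overlap, i.e. misplaced.
if overlaps "32-63" "0-31"; then
    echo "affinity overlaps the device's node"
else
    echo "affinity does NOT overlap the device's node"
fi
```

In practice one would feed it `$(cat /proc/irq/26/smp_affinity_list)` and `$(cat /sys/devices/system/node/node0/cpulist)` instead of the literals.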
Why does irqbalance cause the affinity of the SMMU's evtq interrupt to be set to a NUMA node that the SMMU device is not on?