[+ Joerg]
On Thu, Mar 25, 2021 at 11:38:24AM +0800, chenxiang wrote:
> From: Xiang Chen <chenxiang66@hisilicon.com>
> After the change in patch ("iommu: Switch gather->end to the inclusive end"), performance drops from 1600+K IOPS to 1200K on our Kunpeng ARM64 platform. We find that a range [start1, end1) is actually adjacent to the range [end1, end2), but after the change the two are considered disjoint, so more TLB syncs are issued and more time is spent on them. Fix the boundary check to avoid the performance drop.
> Fixes: 862c3715de8f ("iommu: Switch gather->end to the inclusive end")
> Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
> ---
>  include/linux/iommu.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index ae8eddd..4d5bcc2 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -547,7 +547,7 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
>  	 * structure can be rewritten.
>  	 */
>  	if (gather->pgsize != size ||
> -	    end < gather->start || start > gather->end) {
> +	    end + 1 < gather->start || start > gather->end + 1) {
>  		if (gather->pgsize)
>  			iommu_iotlb_sync(domain, gather);
>  		gather->pgsize = size;
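
For readers following the boundary logic, here is a minimal userspace sketch (not kernel code; the 4K page addresses are made up for illustration) of the disjointness test before and after this patch, using the same inclusive [start, end] convention as iommu_iotlb_gather_add_page():

	/*
	 * Minimal sketch of the disjointness test in
	 * iommu_iotlb_gather_add_page() before and after this patch.
	 * Ranges are inclusive [start, end]; addresses are illustrative.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	static bool disjoint_old(unsigned long gstart, unsigned long gend,
				 unsigned long start, unsigned long end)
	{
		/* Old check: an adjacent range already looks disjoint. */
		return end < gstart || start > gend;
	}

	static bool disjoint_new(unsigned long gstart, unsigned long gend,
				 unsigned long start, unsigned long end)
	{
		/* New check: only a gap of at least one byte is disjoint. */
		return end + 1 < gstart || start > gend + 1;
	}

	int main(void)
	{
		/* Gathered so far: [0x1000, 0x1fff]. Next 4K page: [0x2000, 0x2fff]. */
		unsigned long gstart = 0x1000, gend = 0x1fff;
		unsigned long start = 0x2000, end = 0x2fff;

		printf("old: %s\n", disjoint_old(gstart, gend, start, end)
				    ? "disjoint (extra TLB sync)" : "joint (merged)");
		printf("new: %s\n", disjoint_new(gstart, gend, start, end)
				    ? "disjoint (extra TLB sync)" : "joint (merged)");
		return 0;
	}

With the old predicate, a page starting exactly at gather->end + 1 looks disjoint and forces an iommu_iotlb_sync(); the new predicate merges it into the gather, which is what removes the extra syncs described above.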
Urgh, I must say I much preferred these things being exclusive, but this looks like a necessary fix:
Acked-by: Will Deacon <will@kernel.org>
I wonder whether we should've just made these things u64s instead...
Will
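
On the u64 remark: the inclusive end was adopted in 862c3715de8f because iova + size can wrap a 32-bit unsigned long for a range ending at the top of the address space. Assuming that overflow is the concern here, a small sketch with illustrative values:

	/*
	 * Sketch of the 32-bit wrap that motivated the inclusive end:
	 * iova + size overflows a 32-bit unsigned long, while the
	 * inclusive end (iova + size - 1) still fits; a u64 end would
	 * have held the exclusive value as well. Values are illustrative.
	 */
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t iova = 0xfff00000u;	/* 32-bit "unsigned long" */
		uint32_t size = 0x100000u;

		printf("exclusive end (u32): 0x%x\n", iova + size);	/* wraps to 0 */
		printf("inclusive end (u32): 0x%x\n", iova + size - 1);	/* 0xffffffff */
		printf("exclusive end (u64): 0x%llx\n",
		       (unsigned long long)iova + size);		/* 0x100000000 */
		return 0;
	}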