From: Robin Murphy <robin.murphy@arm.com>
stable inclusion
from stable-v5.10.110
commit fcd3c31dd1608b9977860562a8847b57b0596b4b
bugzilla: https://gitee.com/openeuler/kernel/issues/I574AL
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 5b61343b50590fb04a3f6be2cdc4868091757262 upstream.
For various reasons based on the allocator behaviour and typical use-cases at the time, when the max32_alloc_size optimisation was introduced it seemed reasonable to couple the reset of the tracked size to the update of cached32_node upon freeing a relevant IOVA. However, since subsequent optimisations focused on helping genuine 32-bit devices make best use of even more limited address spaces, it is now a lot more likely for cached32_node to be anywhere in a "full" 32-bit address space, and as such more likely for space to become available from IOVAs below that node being freed.
At this point, the short-cut in __cached_rbnode_delete_update() really doesn't hold up any more, and we need to fix the logic to reliably provide the expected behaviour. We still want cached32_node to only move upwards, but we should reset the allocation size if *any* 32-bit space has become available.
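For readability, here is a sketch of how __cached_rbnode_delete_update() in drivers/iommu/iova.c would look with the hunk below applied. This is only the diff replayed onto the 5.10 version of the function for illustration; the lines after the second if-block sit outside the hunk and are reconstructed from the existing code, so treat them as approximate context rather than part of the change:

	static void
	__cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
	{
		struct iova *cached_iova;

		/* cached32_node still only ever moves upwards... */
		cached_iova = rb_entry(iovad->cached32_node, struct iova, node);
		if (free == cached_iova ||
		    (free->pfn_hi < iovad->dma_32bit_pfn &&
		     free->pfn_lo >= cached_iova->pfn_lo))
			iovad->cached32_node = rb_next(&free->node);

		/* ...but any free below dma_32bit_pfn reopens the 32-bit space */
		if (free->pfn_lo < iovad->dma_32bit_pfn)
			iovad->max32_alloc_size = iovad->dma_32bit_pfn;

		cached_iova = rb_entry(iovad->cached_node, struct iova, node);
		if (free->pfn_lo >= cached_iova->pfn_lo)
			iovad->cached_node = rb_next(&free->node);
	}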
Reported-by: Yunfei Wang <yf.wang@mediatek.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Miles Chen <miles.chen@mediatek.com>
Link: https://lore.kernel.org/r/033815732d83ca73b13c11485ac39336f15c3b40.164631840...
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Cc: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yu Liao <liaoyu15@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 drivers/iommu/iova.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 82504049f8e4..1246e8f8bf08 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -158,10 +158,11 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 	cached_iova = rb_entry(iovad->cached32_node, struct iova, node);
 	if (free == cached_iova ||
 	    (free->pfn_hi < iovad->dma_32bit_pfn &&
-	     free->pfn_lo >= cached_iova->pfn_lo)) {
+	     free->pfn_lo >= cached_iova->pfn_lo))
 		iovad->cached32_node = rb_next(&free->node);
+
+	if (free->pfn_lo < iovad->dma_32bit_pfn)
 		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
-	}
 
 	cached_iova = rb_entry(iovad->cached_node, struct iova, node);
 	if (free->pfn_lo >= cached_iova->pfn_lo)