On some Zhaoxin platforms, xHCI prefetches TRBs to improve performance. However, this prefetch can cross a page boundary and access memory that does not belong to the xHCI controller. To fix this issue, allocate two pages for each TRB segment and use only the first page.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
---
 drivers/usb/host/xhci-mem.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 9e87c282a743..f3c0eb0d4622 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -2385,6 +2385,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
 {
 	dma_addr_t	dma;
 	struct device	*dev = xhci_to_hcd(xhci)->self.sysdev;
+	struct pci_dev	*pdev = to_pci_dev(dev);
 	unsigned int	val, val2;
 	u64		val_64;
 	u32		page_size, temp;
@@ -2450,8 +2451,13 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
 	 * and our use of dma addresses in the trb_address_map radix tree needs
 	 * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
 	 */
-	xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
-			TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
+	/*With xHCI TRB prefetch patch:To fix cross page boundry access issue in IOV environment*/
+	if ((pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) && (pdev->device == 0x9202 || pdev->device == 0x9203)) {
+		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+				TRB_SEGMENT_SIZE*2, TRB_SEGMENT_SIZE*2, xhci->page_size*2);
+	} else
+		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+				TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
 	/* See Table 46 and Note on Figure 55 */
 	xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,
On 2021/3/25 18:09, LeoLiu-oc wrote:
> On some Zhaoxin platforms, xHCI prefetches TRBs to improve performance. However, this prefetch can cross a page boundary and access memory that does not belong to the xHCI controller. To fix this issue, allocate two pages for each TRB segment and use only the first page.
>
> The patch is scheduled to be submitted to the kernel mainline in 2021.
This will be hard to upstream to mainline kernel :), but it's fine for openEuler kernel.
> Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
> ---
>  drivers/usb/host/xhci-mem.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
> index 9e87c282a743..f3c0eb0d4622 100644
> --- a/drivers/usb/host/xhci-mem.c
> +++ b/drivers/usb/host/xhci-mem.c
> @@ -2385,6 +2385,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
>  {
>  	dma_addr_t	dma;
>  	struct device	*dev = xhci_to_hcd(xhci)->self.sysdev;
> +	struct pci_dev	*pdev = to_pci_dev(dev);
>  	unsigned int	val, val2;
>  	u64		val_64;
>  	u32		page_size, temp;
> @@ -2450,8 +2451,13 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
>  	 * and our use of dma addresses in the trb_address_map radix tree needs
>  	 * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
>  	 */
> -	xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
> -			TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
> +	/*With xHCI TRB prefetch patch:To fix cross page boundry access issue in IOV environment*/
> +	if ((pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) && (pdev->device == 0x9202 || pdev->device == 0x9203)) {
I think this will cause a compile error if CONFIG_PCI=n is set.

Thanks,
Hanjun
> +		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
> +				TRB_SEGMENT_SIZE*2, TRB_SEGMENT_SIZE*2, xhci->page_size*2);
> +	} else
> +		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
> +				TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
>
>  	/* See Table 46 and Note on Figure 55 */
>  	xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,
On 26/03/2021 11:00, Hanjun Guo wrote:
> On 2021/3/25 18:09, LeoLiu-oc wrote:
>> On some Zhaoxin platforms, xHCI prefetches TRBs to improve performance. However, this prefetch can cross a page boundary and access memory that does not belong to the xHCI controller. To fix this issue, allocate two pages for each TRB segment and use only the first page.
>>
>> The patch is scheduled to be submitted to the kernel mainline in 2021.
>
> This will be hard to upstream to the mainline kernel :), but it's fine for the openEuler kernel.
>
>> Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
>> ---
>>  drivers/usb/host/xhci-mem.c | 10 ++++++++--
>>  1 file changed, 8 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
>> index 9e87c282a743..f3c0eb0d4622 100644
>> --- a/drivers/usb/host/xhci-mem.c
>> +++ b/drivers/usb/host/xhci-mem.c
>> @@ -2385,6 +2385,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
>>  {
>>  	dma_addr_t	dma;
>>  	struct device	*dev = xhci_to_hcd(xhci)->self.sysdev;
>> +	struct pci_dev	*pdev = to_pci_dev(dev);
>>  	unsigned int	val, val2;
>>  	u64		val_64;
>>  	u32		page_size, temp;
>> @@ -2450,8 +2451,13 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
>>  	 * and our use of dma addresses in the trb_address_map radix tree needs
>>  	 * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
>>  	 */
>> -	xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
>> -			TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
>> +	/*With xHCI TRB prefetch patch:To fix cross page boundry access issue in IOV environment*/
>> +	if ((pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) && (pdev->device == 0x9202 || pdev->device == 0x9203)) {
> I think this will cause a compile error if CONFIG_PCI=n is set.

Will change the patch and add an XHCI_ZX_TRB_FETCH quirk in V2.

Sincerely,
TonyWWangoc

> Thanks,
> Hanjun
>> +		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
>> +				TRB_SEGMENT_SIZE*2, TRB_SEGMENT_SIZE*2, xhci->page_size*2);
>> +	} else
>> +		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
>> +				TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
>>
>>  	/* See Table 46 and Note on Figure 55 */
>>  	xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev,