[PATCH v2] PCI/DPC: Check host->native_dpc before enabling the DPC service
by Yicong Yang
Per the Downstream Port Containment Related Enhancements ECN [1],
Table 4-6 (Interpretation of _OSC Control Field Returned Value),
bit 7 of the _OSC control return value is defined as:
"Firmware sets this bit to 1 to grant the OS control over PCI Express
Downstream Port Containment configuration."
"If control of this feature was requested and denied,
or was not requested, the firmware returns this bit set to 0."
We store bit 7 of the _OSC control return value in host->native_dpc and
check it before enabling the DPC service, since the firmware may not
have granted the OS control.
[1] Downstream Port Containment Related Enhancements ECN,
Jan 28, 2019, affecting PCI Firmware Specification, Rev. 3.2
https://members.pcisig.com/wg/PCI-SIG/document/12888
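The check the patch relies on is a plain bit test against that returned
value. Below is a minimal, standalone sketch (illustrative only:
os_owns_dpc() is a hypothetical helper, though the macro name and value
match OSC_PCI_EXPRESS_DPC_CONTROL in include/linux/acpi.h):
#include <stdio.h>
#include <stdint.h>

/* Bit 7 of the _OSC control return value (same value as the kernel's
 * OSC_PCI_EXPRESS_DPC_CONTROL in include/linux/acpi.h). */
#define OSC_PCI_EXPRESS_DPC_CONTROL 0x0080

/* Hypothetical helper: may the OS drive DPC natively? */
static int os_owns_dpc(uint32_t osc_control_ret)
{
	return !!(osc_control_ret & OSC_PCI_EXPRESS_DPC_CONTROL);
}

int main(void)
{
	printf("0x0000 -> %d\n", os_owns_dpc(0x0000)); /* denied or not requested */
	printf("0x0080 -> %d\n", os_owns_dpc(0x0080)); /* control granted */
	return 0;
}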
Signed-off-by: Yicong Yang <yangyicong(a)hisilicon.com>
---
Change since v1:
- use correct reference for _OSC control return value
drivers/pci/pcie/portdrv_core.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
index e1fed664..7445d03 100644
--- a/drivers/pci/pcie/portdrv_core.c
+++ b/drivers/pci/pcie/portdrv_core.c
@@ -253,7 +253,8 @@ static int get_port_device_capability(struct pci_dev *dev)
*/
if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC) &&
pci_aer_available() &&
- (pcie_ports_dpc_native || (services & PCIE_PORT_SERVICE_AER)))
+ (pcie_ports_dpc_native ||
+ ((services & PCIE_PORT_SERVICE_AER) && host->native_dpc)))
services |= PCIE_PORT_SERVICE_DPC;
if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
--
2.8.1
[RFC PATCH 0/5] KVM/ARM64 Add support for pinned VMIDs
by Shameer Kolothum
On an ARM64 system with an SMMUv3 implementation that fully supports
the Broadcast TLB Maintenance (BTM) feature as part of the Distributed
Virtual Memory (DVM) protocol, the CPU TLB invalidate instructions are
received by the SMMUv3. This is very useful when the SMMUv3 shares the
page tables with the CPU (e.g. the guest SVA use case). For this to
work, the SMMU must use the same VMID that is allocated by KVM to
configure the stage 2 translations. At present, KVM VMID allocations
are recycled on rollover and may change as a result. This will create
issues if we have to share the KVM VMID with the SMMU.
Please see the discussion here,
https://lore.kernel.org/linux-iommu/20200522101755.GA3453945@myrica/
This series proposes a way to share the VMID between KVM and the IOMMU
driver (a rough sketch of the split allocation policy appears after the
list) by:
1. Splitting the KVM VMID space into two equal halves based on the
command line option "kvm-arm.pinned_vmid_enable".
2. First half of the VMID space follows the normal recycle on rollover
policy.
3. Second half of the VMID space doesn't roll over and is used to
allocate pinned VMIDs.
4. Provides a helper function to retrieve the KVM instance associated
with a device (if it is part of a vfio group).
5. Introduces generic interfaces to get/put pinned KVM VMIDs.
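For illustration only, here is a standalone sketch of the split
allocation policy described above, with made-up names and sizes (an
8-bit VMID space instead of the real one, and none of the locking the
series actually needs):
#include <stdio.h>

#define VMID_BITS  8
#define VMID_MAX   (1U << VMID_BITS)          /* 256 VMIDs             */
#define VMID_SPLIT (VMID_MAX / 2)             /* [1, SPLIT) recycles   */

static unsigned int next_normal = 1;          /* VMID 0 stays reserved */
static unsigned int next_pinned = VMID_SPLIT; /* [SPLIT, MAX) pinned   */

static unsigned int alloc_vmid(int pinned)
{
	if (pinned) {
		/* Pinned half: never rolls over; 0 means exhausted. */
		return next_pinned < VMID_MAX ? next_pinned++ : 0;
	}
	/* Normal half: recycle on rollover; a real implementation
	 * would also invalidate stale TLB entries at this point. */
	if (next_normal == VMID_SPLIT)
		next_normal = 1;
	return next_normal++;
}

int main(void)
{
	printf("normal: %u, pinned: %u\n", alloc_vmid(0), alloc_vmid(1));
	return 0;
}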
Open Items:
1. I couldn't figure out a way to determine whether a platform
actually fully supports DVM/BTM or not. I am not sure we can decide
based on the SMMUv3 BTM feature bit alone. Perhaps we can get it from
firmware via IORT?
2. The current splitting of VMID space is only one way to do this and
probably not the best. Maybe we can follow the pinned ASID method used
in SVA code. Suggestions welcome here.
3. The detach_pasid_table() interface is not very clear to me, as the
current Qemu prototype is not using it. This requires fixing on my side.
This is based on Jean-Philippe's SVA series[1] and Eric's SMMUv3 dual-stage
support series[2].
The branch with the whole vSVA + BTM solution is here,
https://github.com/hisilicon/kernel-dev/tree/5.10-rc4-2stage-v13-vsva-btm...
This is lightly tested on a HiSilicon D06 platform with uacce/zip dev test tool,
./zip_sva_per -k tlb
Thanks,
Shameer
1. https://github.com/Linaro/linux-kernel-uadk/commits/uacce-devel-5.10
2. https://lore.kernel.org/linux-iommu/20201118112151.25412-1-eric.auger@red...
Shameer Kolothum (5):
vfio: Add a helper to retrieve kvm instance from a dev
KVM: Add generic infrastructure to support pinned VMIDs
KVM: ARM64: Add support for pinned VMIDs
iommu/arm-smmu-v3: Use pinned VMID for NESTED stage with BTM
KVM: arm64: Make sure pinned vmid is released on VM exit
arch/arm64/include/asm/kvm_host.h | 2 +
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/arm.c | 116 +++++++++++++++++++-
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 49 ++++++++-
drivers/vfio/vfio.c | 12 ++
include/linux/kvm_host.h | 17 +++
include/linux/vfio.h | 1 +
virt/kvm/Kconfig | 2 +
virt/kvm/kvm_main.c | 25 +++++
9 files changed, 220 insertions(+), 5 deletions(-)
--
2.17.1
[PATCH net] net: hns3: Fixes+Refactors the broken set channel error fallback logic
by Salil Mehta
The fallback logic of set channels is not handled properly when
set_channel() fails to configure the TX Sched/RSS H/W configuration,
or when the code which brings down/restores the client before/after it
fails. Fix and refactor the code to handle these errors properly and
to improve readability.
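The reworked flow boils down to one configure path that is retried once
with the old TQP count, guarded by a flag. A standalone sketch of just
that shape (illustrative only; apply_config() is a stand-in for the
real H/W calls):
#include <stdio.h>

/* Stand-in for the real TX Sched/RSS H/W configuration call. */
static int apply_config(int tqps)
{
	return tqps > 8 ? -1 : 0;   /* pretend the H/W rejects > 8 TQPs */
}

static int change_channels(int new_tqps, int old_tqps)
{
	int req = new_tqps, first_err = 0, ret;
	int reverting = 0;

retry:
	ret = apply_config(req);
	if (ret) {
		if (reverting)
			return ret;     /* even the revert failed */
		first_err = ret;        /* remember the original error */
		reverting = 1;
		req = old_tqps;         /* fall back to the old TQP count */
		goto retry;
	}
	return first_err;               /* 0, or original error after revert */
}

int main(void)
{
	printf("%d\n", change_channels(16, 4));   /* prints -1: reverted */
	return 0;
}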
Fixes: 3a5a5f06d4d2 ("net: hns3: revert to old channel when setting new channel num fail")
Signed-off-by: Salil Mehta <salil.mehta(a)huawei.com>
Signed-off-by: Peng Li <lipeng321(a)huawei.com>
---
.../net/ethernet/hisilicon/hns3/hns3_enet.c | 77 +++++++++++--------
1 file changed, 47 insertions(+), 30 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index bf4302a5cf95..fbb0f4c9b98e 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -4692,24 +4692,60 @@ static int hns3_reset_notify(struct hnae3_handle *handle,
static int hns3_change_channels(struct hnae3_handle *handle, u32 new_tqp_num,
bool rxfh_configured)
{
- int ret;
+ const struct hnae3_ae_ops *ops = handle->ae_algo->ops;
+ u32 org_tqp_num = handle->kinfo.num_tqps;
+ struct device *dev = &handle->pdev->dev;
+ u32 req_tqp_num = new_tqp_num;
+ bool revert_old_config = false;
+ int ret, retval = 0;
+
+ /* bring down the client */
+ ret = hns3_reset_notify(handle, HNAE3_DOWN_CLIENT);
+ if (ret) {
+ dev_err(dev, "client down fail, this should'nt have happened!\n");
+ return ret;
+ }
- ret = handle->ae_algo->ops->set_channels(handle, new_tqp_num,
- rxfh_configured);
+ ret = hns3_reset_notify(handle, HNAE3_UNINIT_CLIENT);
if (ret) {
- dev_err(&handle->pdev->dev,
- "Change tqp num(%u) fail.\n", new_tqp_num);
+ dev_err(dev, "client uninit fail, this should'nt have happened!\n");
return ret;
}
+revert_old_tqps_config:
+ /* update the TX Sched and RSS config in the H/W */
+ ret = ops->set_channels(handle, req_tqp_num, rxfh_configured);
+ if (ret) {
+ dev_err(dev, "TX Sched/RSS H/W cfg fail(=%d) for %s TQPs\n",
+ ret, revert_old_config ? "old" : "new");
+ goto err_set_channel;
+ }
+
+ /* restore the client */
ret = hns3_reset_notify(handle, HNAE3_INIT_CLIENT);
- if (ret)
- return ret;
+ if (ret) {
+ dev_err(dev, "failed to initialize the client again\n");
+ goto err_set_channel;
+ }
ret = hns3_reset_notify(handle, HNAE3_UP_CLIENT);
- if (ret)
- hns3_reset_notify(handle, HNAE3_UNINIT_CLIENT);
+ if (ret) {
+ dev_err(dev, "Client up fail, this should'nt have happened!\n");
+ return ret;
+ }
+
+ return retval;
+err_set_channel:
+ if (!revert_old_config) {
+ dev_warn(dev, "Revert TX Sched/RSS H/W config with old TQPs\n");
+ req_tqp_num = org_tqp_num;
+ revert_old_config = true;
+ retval = ret;
+ goto revert_old_tqps_config;
+ }
+ dev_err(dev, "Bad, we couldn't revert to old TQP H/W config\n");
+ dev_warn(dev, "Device may be unusable. Driver reload/reset required!\n");
return ret;
}
@@ -4720,7 +4756,6 @@ int hns3_set_channels(struct net_device *netdev,
struct hnae3_knic_private_info *kinfo = &h->kinfo;
bool rxfh_configured = netif_is_rxfh_configured(netdev);
u32 new_tqp_num = ch->combined_count;
- u16 org_tqp_num;
int ret;
if (hns3_nic_resetting(netdev))
@@ -4750,28 +4785,10 @@ int hns3_set_channels(struct net_device *netdev,
"set channels: tqp_num=%u, rxfh=%d\n",
new_tqp_num, rxfh_configured);
- ret = hns3_reset_notify(h, HNAE3_DOWN_CLIENT);
- if (ret)
- return ret;
-
- ret = hns3_reset_notify(h, HNAE3_UNINIT_CLIENT);
- if (ret)
- return ret;
-
- org_tqp_num = h->kinfo.num_tqps;
ret = hns3_change_channels(h, new_tqp_num, rxfh_configured);
if (ret) {
- int ret1;
-
- netdev_warn(netdev,
- "Change channels fail, revert to old value\n");
- ret1 = hns3_change_channels(h, org_tqp_num, rxfh_configured);
- if (ret1) {
- netdev_err(netdev,
- "revert to old channel fail\n");
- return ret1;
- }
-
+ netdev_err(netdev, "fail(=%d) to set number of channels to %u\n", ret,
+ new_tqp_num);
return ret;
}
--
2.17.1
[PATCH net-next 0/9] net: hns3: refactor and new features for flow director
by Huazhong Tan
This patchset refactor some functions and add some new features for
flow director.
patch 1~3: refactor large functions
patch 4, 7: add traffic class and user-def field support for ethtool
patch 5: use asynchronous configuration
patch 6: clean up for hns3_del_all_fd_entries()
patch 8, 9: add support for queue bonding mode
Jian Shen (9):
net: hns3: refactor out hclge_add_fd_entry()
net: hns3: refactor out hclge_fd_get_tuple()
net: hns3: refactor for function hclge_fd_convert_tuple
net: hns3: add support for traffic class tuple support for flow
director by ethtool
net: hns3: refactor flow director configuration
net: hns3: refine for hns3_del_all_fd_entries()
net: hns3: add support for user-def data of flow director
net: hns3: add support for queue bonding mode of flow director
net: hns3: add queue bonding mode support for VF
drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h | 8 +
drivers/net/ethernet/hisilicon/hns3/hnae3.h | 9 +-
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c | 7 +-
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 91 +-
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h | 14 +-
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c | 13 +-
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c | 2 +
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h | 21 +
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 1570 ++++++++++++++------
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 63 +
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c | 33 +
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c | 2 +
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c | 74 +
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h | 7 +
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c | 17 +
15 files changed, 1450 insertions(+), 481 deletions(-)
--
2.7.4
[RESEND PATCH v3] iommu/iova: put free_iova_mem() outside of spinlock iova_rbtree_lock
by chenxiang
From: Xiang Chen <chenxiang66(a)hisilicon.com>
It is not necessary to call free_iova_mem() inside the
iova_rbtree_lock spinlock critical section, which only adds contention
on the spinlock. Moving it outside gives a small performance
improvement. Also rename private_free_iova() to remove_iova(), because
the function no longer frees the iova after this change.
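The pattern is generic: do only the data-structure surgery under the
lock and defer the free until after unlock. A minimal userspace sketch
of that shape (a pthread spinlock standing in for iova_rbtree_lock and
a list standing in for the rbtree):
#include <pthread.h>
#include <stdlib.h>

struct node { struct node *next; };

static pthread_spinlock_t lock;   /* stands in for iova_rbtree_lock */
static struct node *head;

/* Caller must hold the lock: only detaches, never frees. */
static struct node *remove_head(void)
{
	struct node *n = head;

	if (n)
		head = n->next;
	return n;
}

static void pop_and_free(void)
{
	struct node *n;

	pthread_spin_lock(&lock);
	n = remove_head();        /* only list surgery inside the lock */
	pthread_spin_unlock(&lock);
	free(n);                  /* the free happens outside the lock */
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	head = calloc(1, sizeof(*head));
	pop_and_free();
	return 0;
}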
Signed-off-by: Xiang Chen <chenxiang66(a)hisilicon.com>
Reviewed-by: John Garry <john.garry(a)huawei.com>
---
drivers/iommu/iova.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index b7ecd5b..b6cf5f1 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -412,12 +412,11 @@ private_find_iova(struct iova_domain *iovad, unsigned long pfn)
return NULL;
}
-static void private_free_iova(struct iova_domain *iovad, struct iova *iova)
+static void remove_iova(struct iova_domain *iovad, struct iova *iova)
{
assert_spin_locked(&iovad->iova_rbtree_lock);
__cached_rbnode_delete_update(iovad, iova);
rb_erase(&iova->node, &iovad->rbroot);
- free_iova_mem(iova);
}
/**
@@ -452,8 +451,9 @@ __free_iova(struct iova_domain *iovad, struct iova *iova)
unsigned long flags;
spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
- private_free_iova(iovad, iova);
+ remove_iova(iovad, iova);
spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+ free_iova_mem(iova);
}
EXPORT_SYMBOL_GPL(__free_iova);
@@ -472,10 +472,13 @@ free_iova(struct iova_domain *iovad, unsigned long pfn)
spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
iova = private_find_iova(iovad, pfn);
- if (iova)
- private_free_iova(iovad, iova);
+ if (!iova) {
+ spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+ return;
+ }
+ remove_iova(iovad, iova);
spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
-
+ free_iova_mem(iova);
}
EXPORT_SYMBOL_GPL(free_iova);
@@ -825,7 +828,8 @@ iova_magazine_free_pfns(struct iova_magazine *mag, struct iova_domain *iovad)
if (WARN_ON(!iova))
continue;
- private_free_iova(iovad, iova);
+ remove_iova(iovad, iova);
+ free_iova_mem(iova);
}
spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
--
2.8.1
[PATCH net-next 0/3] Some optimizations for lockless qdisc
by Yunsheng Lin
Patch 1: remove unnecessary seqcount operation.
Patch 2: implement TCQ_F_CAN_BYPASS.
Patch 3: remove qdisc->empty.
Performance data for pktgen in queue_xmit mode + dummy netdev
with pfifo_fast:
threads    unpatched    patched      delta
      1    2.60Mpps     3.21Mpps     +23%
      2    3.84Mpps     5.56Mpps     +44%
      4    5.52Mpps     5.58Mpps      +1%
      8    2.77Mpps     2.76Mpps     -0.3%
     16    2.24Mpps     2.23Mpps     +0.4%
Performance for IP forward testing: 1.05Mpps increases to
1.16Mpps, about 10% improvement.
V1: Drop RFC tag. Add nolock_qdisc_is_empty() and do the qdisc
empty checking without the protection of qdisc->seqlock to
avoid doing an unnecessary spin_trylock() for the contention case.
RFC v4: Use STATE_MISSED and STATE_DRAINING to indicate non-empty
qdisc, and add patch 1 and 3.
Yunsheng Lin (3):
net: sched: avoid unnecessary seqcount operation for lockless qdisc
net: sched: implement TCQ_F_CAN_BYPASS for lockless qdisc
net: sched: remove qdisc->empty for lockless qdisc
include/net/sch_generic.h | 31 ++++++++++++++++++-------------
net/core/dev.c | 26 ++++++++++++++++++++++++--
net/sched/sch_generic.c | 23 ++++++++++++++++-------
3 files changed, 58 insertions(+), 22 deletions(-)
--
2.7.4
[RFC v2 plinth/topic-sas-5.12 0/6] Add support for IOMMU debugfs related to IOVA
by chenxiang
From: Xiang Chen <chenxiang66(a)hisilicon.com>
The first patch releases the rcaches when the driver of the last
device is removed (rmmod), to save memory.
Patches 2~6 add support for IOMMU debugfs entries related to IOVA, as follows:
/sys/kernel/debug/iommu/iovad/iommu_domainx
Under the iommu_domainx dir, add the debugfs files iova_rcache and
drop_rcache.
From the debugfs file iova_rcache we can see how many cpu_rcache /
shared rcache entries / iovas are in use, and we can also drop those
rcaches via the debugfs file drop_rcache:
For a cpu_rcache, [i]=x|y indicates that there are x iovas in the 'load'
iova_magazine and y iovas in the 'prev' iova_magazine (128 at most).
For the shared rcache, [i]=x indicates that x iova_magazines are in use.
estuary:/sys/kernel/debug/iommu/iovad/iommu_domain2$ cat iova_rcache
[ 272.814457] cpu0 [0]=60|0 [1]=7|0 [2]=32|0 [3]=0|0 [4]=97|0 [5]=104|0
[ 272.820982] cpu1 [0]=0|0 [1]=3|0 [2]=14|0 [3]=76|0 [4]=15|0 [5]=64|0
[ 272.827399] cpu2 [0]=85|128 [1]=84|128 [2]=83|128 [3]=112|128 [4]=22|128 [5]=116|128
[ 272.835197] cpu3 [0]=0|0 [1]=91|0 [2]=101|0 [3]=29|0 [4]=0|0 [5]=36|0
[ 272.841699] cpu4 [0]=0|0 [1]=39|0 [2]=113|0 [3]=75|0 [4]=95|0 [5]=0|0
[ 272.848201] cpu5 [0]=27|0 [1]=48|0 [2]=82|0 [3]=0|0 [4]=19|0 [5]=36|0
[ 272.854702] cpu6 [0]=30|0 [1]=0|0 [2]=48|0 [3]=2|0 [4]=18|0 [5]=56|0
[ 272.861117] cpu7 [0]=27|0 [1]=89|0 [2]=101|0 [3]=59|0 [4]=28|0 [5]=0|0
[ 272.867706] cpu8 [0]=66|0 [1]=114|0 [2]=42|0 [3]=123|0 [4]=96|0 [5]=68|0
[ 272.874466] cpu9 [0]=71|0 [1]=61|0 [2]=28|0 [3]=118|0 [4]=116|0 [5]=41|0
[ 272.881227] cpu10 [0]=83|128 [1]=63|128 [2]=109|128 [3]=79|128 [4]=7|128 [5]=54|128
[ 272.888938] cpu11 [0]=90|0 [1]=34|0 [2]=88|0 [3]=58|0 [4]=20|0 [5]=35|0
[ 272.895611] cpu12 [0]=64|0 [1]=20|0 [2]=18|0 [3]=33|0 [4]=42|0 [5]=22|0
[ 272.902285] cpu13 [0]=17|0 [1]=70|0 [2]=115|0 [3]=59|0 [4]=108|0 [5]=58|0
[ 272.909132] cpu14 [0]=28|0 [1]=18|0 [2]=27|0 [3]=105|0 [4]=65|0 [5]=81|0
[ 272.915892] cpu15 [0]=75|0 [1]=3|0 [2]=73|0 [3]=104|0 [4]=127|0 [5]=102|0
[ 272.922738] cpu16 [0]=54|0 [1]=116|0 [2]=90|0 [3]=31|0 [4]=108|0 [5]=41|0
[ 272.929590] cpu17 [0]=47|0 [1]=82|0 [2]=3|0 [3]=66|0 [4]=68|0 [5]=66|0
[ 272.936179] cpu18 [0]=126|128 [1]=110|128 [2]=48|128 [3]=118|128 [4]=54|128 [5]=73|128
[ 272.944156] cpu19 [0]=31|0 [1]=13|0 [2]=104|0 [3]=45|0 [4]=108|0 [5]=96|0
[ 272.951006] cpu20 [0]=58|0 [1]=113|0 [2]=14|0 [3]=123|0 [4]=52|0 [5]=54|0
[ 272.957856] cpu21 [0]=116|0 [1]=47|0 [2]=96|0 [3]=60|0 [4]=47|0 [5]=126|0
[ 272.964701] cpu22 [0]=84|0 [1]=87|0 [2]=88|0 [3]=68|0 [4]=37|0 [5]=119|0
[ 272.971462] cpu23 [0]=13|0 [1]=63|0 [2]=124|0 [3]=3|0 [4]=7|0 [5]=38|0
[ 272.978051] cpu24 [0]=15|0 [1]=64|0 [2]=65|0 [3]=53|0 [4]=102|0 [5]=69|0
[ 272.984812] cpu25 [0]=94|0 [1]=108|0 [2]=67|0 [3]=125|0 [4]=107|0 [5]=8|0
[ 272.991663] cpu26 [0]=84|128 [1]=86|128 [2]=91|128 [3]=121|128 [4]=77|128 [5]=25|128
[ 272.999464] cpu27 [0]=60|0 [1]=105|0 [2]=61|0 [3]=91|0 [4]=79|0 [5]=6|0
[ 273.006141] cpu28 [0]=39|0 [1]=91|0 [2]=11|0 [3]=87|0 [4]=112|0 [5]=10|0
[ 273.012904] cpu29 [0]=88|0 [1]=43|0 [2]=0|0 [3]=77|0 [4]=10|0 [5]=79|0
[ 273.019492] cpu30 [0]=65|0 [1]=24|0 [2]=125|0 [3]=24|0 [4]=54|0 [5]=21|0
[ 273.026254] cpu31 [0]=26|0 [1]=90|0 [2]=42|0 [3]=17|0 [4]=73|0 [5]=35|0
[ 273.032929] cpu32 [0]=1|0 [1]=83|0 [2]=76|0 [3]=62|0 [4]=117|0 [5]=96|0
[ 273.039612] cpu33 [0]=50|0 [1]=55|0 [2]=63|0 [3]=79|0 [4]=86|0 [5]=15|0
[ 273.046293] cpu34 [0]=122|128 [1]=36|128 [2]=36|128 [3]=79|128 [4]=113|128 [5]=80|128
[ 273.054179] cpu35 [0]=101|0 [1]=18|0 [2]=7|0 [3]=10|0 [4]=7|0 [5]=112|0
[ 273.060854] cpu36 [0]=12|0 [1]=107|0 [2]=43|0 [3]=60|0 [4]=19|0 [5]=110|0
[ 273.067703] cpu37 [0]=90|0 [1]=34|0 [2]=66|0 [3]=91|0 [4]=85|0 [5]=31|0
[ 273.074378] cpu38 [0]=0|0 [1]=22|0 [2]=18|0 [3]=73|0 [4]=54|0 [5]=96|0
[ 273.080968] cpu39 [0]=109|0 [1]=54|0 [2]=124|0 [3]=21|0 [4]=88|0 [5]=61|0
[ 273.087816] cpu40 [0]=95|0 [1]=50|0 [2]=45|0 [3]=66|0 [4]=30|0 [5]=84|0
[ 273.094490] cpu41 [0]=99|0 [1]=47|0 [2]=8|0 [3]=81|0 [4]=0|0 [5]=95|0
[ 273.100992] cpu42 [0]=25|128 [1]=92|128 [2]=53|128 [3]=49|128 [4]=43|128 [5]=78|128
[ 273.108704] cpu43 [0]=88|0 [1]=42|0 [2]=10|0 [3]=124|0 [4]=4|0 [5]=105|0
[ 273.115464] cpu44 [0]=80|0 [1]=63|0 [2]=1|0 [3]=123|0 [4]=35|0 [5]=17|0
[ 273.122139] cpu45 [0]=31|0 [1]=92|0 [2]=8|0 [3]=60|0 [4]=74|0 [5]=92|0
[ 273.128727] cpu46 [0]=78|0 [1]=40|0 [2]=95|0 [3]=33|0 [4]=67|0 [5]=63|0
[ 273.135401] cpu47 [0]=112|0 [1]=93|0 [2]=96|0 [3]=24|0 [4]=93|0 [5]=15|0
[ 273.142162] cpu48 [0]=92|0 [1]=120|0 [2]=49|0 [3]=118|0 [4]=1|0 [5]=83|0
[ 273.148923] cpu49 [0]=101|0 [1]=7|0 [2]=108|0 [3]=15|0 [4]=69|0 [5]=116|0
[ 273.155771] cpu50 [0]=111|128 [1]=98|128 [2]=21|128 [3]=27|128 [4]=109|128 [5]=21|128
[ 273.163655] cpu51 [0]=31|0 [1]=33|0 [2]=82|0 [3]=117|0 [4]=98|0 [5]=1|0
[ 273.170329] cpu52 [0]=113|0 [1]=64|0 [2]=16|0 [3]=48|0 [4]=97|0 [5]=80|0
[ 273.177090] cpu53 [0]=95|0 [1]=39|0 [2]=26|0 [3]=107|0 [4]=2|0 [5]=18|0
[ 273.183764] cpu54 [0]=114|0 [1]=94|0 [2]=110|0 [3]=85|0 [4]=66|0 [5]=45|0
[ 273.190611] cpu55 [0]=52|0 [1]=89|0 [2]=43|0 [3]=117|0 [4]=115|0 [5]=91|0
[ 273.197460] cpu56 [0]=0|0 [1]=51|0 [2]=81|0 [3]=60|0 [4]=20|0 [5]=27|0
[ 273.204048] cpu57 [0]=124|0 [1]=121|0 [2]=56|0 [3]=0|0 [4]=77|0 [5]=59|0
[ 273.210807] cpu58 [0]=109|128 [1]=98|128 [2]=6|128 [3]=39|128 [4]=64|128 [5]=24|128
[ 273.218518] cpu59 [0]=57|0 [1]=62|0 [2]=66|0 [3]=55|0 [4]=95|0 [5]=47|0
[ 273.225192] cpu60 [0]=29|0 [1]=12|0 [2]=112|0 [3]=23|0 [4]=65|0 [5]=34|0
[ 273.231955] cpu61 [0]=92|0 [1]=5|0 [2]=19|0 [3]=91|0 [4]=101|0 [5]=97|0
[ 273.238629] cpu62 [0]=97|0 [1]=42|0 [2]=30|0 [3]=111|0 [4]=99|0 [5]=2|0
[ 273.245304] cpu63 [0]=77|0 [1]=79|0 [2]=62|0 [3]=56|0 [4]=17|0 [5]=76|0
[ 273.251982] cpu64 [0]=0|0 [1]=108|0 [2]=87|0 [3]=58|0 [4]=26|0 [5]=0|0
[ 273.258582] cpu65 [0]=0|0 [1]=114|0 [2]=99|0 [3]=73|0 [4]=33|0 [5]=0|0
[ 273.265174] cpu66 [0]=39|128 [1]=52|128 [2]=82|128 [3]=107|128 [4]=73|128 [5]=13|128
[ 273.272974] cpu67 [0]=0|0 [1]=123|0 [2]=87|0 [3]=49|0 [4]=27|0 [5]=98|0
[ 273.279651] cpu68 [0]=108|0 [1]=0|0 [2]=123|0 [3]=47|0 [4]=30|0 [5]=109|0
[ 273.286500] cpu69 [0]=92|0 [1]=110|0 [2]=92|0 [3]=59|0 [4]=42|0 [5]=104|0
[ 273.293351] cpu70 [0]=105|0 [1]=106|0 [2]=99|0 [3]=57|0 [4]=15|0 [5]=111|0
[ 273.300287] cpu71 [0]=106|0 [1]=108|0 [2]=108|0 [3]=58|0 [4]=35|0 [5]=98|0
[ 273.307222] cpu72 [0]=0|0 [1]=0|0 [2]=114|0 [3]=118|0 [4]=95|0 [5]=0|0
[ 273.313813] cpu73 [0]=0|0 [1]=0|0 [2]=0|0 [3]=98|0 [4]=95|0 [5]=124|0
[ 273.320318] cpu74 [0]=65|0 [1]=76|0 [2]=109|0 [3]=58|128 [4]=115|128 [5]=105|0
[ 273.327603] cpu75 [0]=0|0 [1]=0|0 [2]=123|0 [3]=102|0 [4]=109|0 [5]=0|0
[ 273.334279] cpu76 [0]=125|0 [1]=0|0 [2]=0|0 [3]=101|0 [4]=112|0 [5]=0|0
[ 273.340954] cpu77 [0]=124|0 [1]=124|0 [2]=0|0 [3]=103|0 [4]=100|0 [5]=0|0
[ 273.347803] cpu78 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=118|0 [5]=0|0
[ 273.354134] cpu79 [0]=124|0 [1]=0|0 [2]=116|0 [3]=104|0 [4]=120|0 [5]=0|0
[ 273.360981] cpu80 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.367138] cpu81 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.373292] cpu82 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.379447] cpu83 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.385601] cpu84 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.391755] cpu85 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.397910] cpu86 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.404065] cpu87 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.410220] cpu88 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.416376] cpu89 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.422532] cpu90 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.428687] cpu91 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.434841] cpu92 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.440997] cpu93 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.447152] cpu94 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.453311] cpu95 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 273.459468] cpu96 [0]=78|0 [1]=95|0 [2]=117|0 [3]=120|0 [4]=37|0 [5]=31|0
[ 273.466317] cpu97 [0]=105|0 [1]=99|0 [2]=37|0 [3]=24|0 [4]=86|0 [5]=39|0
[ 273.473080] cpu98 [0]=37|128 [1]=13|128 [2]=102|128 [3]=124|128 [4]=60|128 [5]=36|128
[ 273.480967] cpu99 [0]=69|0 [1]=12|0 [2]=84|0 [3]=49|0 [4]=108|0 [5]=127|0
[ 273.487818] cpu100 [0]=92|0 [1]=104|0 [2]=22|0 [3]=31|0 [4]=2|0 [5]=72|0
[ 273.494581] cpu101 [0]=0|0 [1]=31|0 [2]=44|0 [3]=8|0 [4]=13|0 [5]=50|0
[ 273.501170] cpu102 [0]=79|0 [1]=14|0 [2]=88|0 [3]=53|0 [4]=7|0 [5]=44|0
[ 273.507846] cpu103 [0]=10|0 [1]=117|0 [2]=53|0 [3]=112|0 [4]=11|0 [5]=71|0
[ 273.514781] cpu104 [0]=123|0 [1]=30|0 [2]=9|0 [3]=2|0 [4]=30|0 [5]=24|0
[ 273.521462] cpu105 [0]=116|0 [1]=44|0 [2]=29|0 [3]=104|0 [4]=71|0 [5]=3|0
[ 273.528309] cpu106 [0]=38|128 [1]=127|128 [2]=21|128 [3]=83|128 [4]=59|128 [5]=81|128
[ 273.536195] cpu107 [0]=127|0 [1]=126|0 [2]=125|0 [3]=79|0 [4]=12|0 [5]=115|0
[ 273.543303] cpu108 [0]=0|0 [1]=43|0 [2]=111|0 [3]=10|0 [4]=51|0 [5]=68|0
[ 273.550065] cpu109 [0]=37|0 [1]=63|0 [2]=11|0 [3]=10|0 [4]=58|0 [5]=9|0
[ 273.556742] cpu110 [0]=86|0 [1]=44|0 [2]=83|0 [3]=85|0 [4]=82|0 [5]=119|0
[ 273.563590] cpu111 [0]=114|0 [1]=27|0 [2]=111|0 [3]=11|0 [4]=45|0 [5]=56|0
[ 273.570524] cpu112 [0]=74|0 [1]=86|0 [2]=93|0 [3]=122|0 [4]=126|0 [5]=31|0
[ 273.577458] cpu113 [0]=51|0 [1]=81|0 [2]=7|0 [3]=124|0 [4]=71|0 [5]=5|0
[ 273.584133] cpu114 [0]=58|128 [1]=95|128 [2]=14|128 [3]=119|128 [4]=85|128 [5]=23|128
[ 273.592018] cpu115 [0]=59|0 [1]=66|0 [2]=1|0 [3]=50|0 [4]=77|0 [5]=6|0
[ 273.598607] cpu116 [0]=65|0 [1]=81|0 [2]=22|0 [3]=18|0 [4]=96|0 [5]=2|0
[ 273.605282] cpu117 [0]=23|0 [1]=41|0 [2]=19|0 [3]=69|0 [4]=108|0 [5]=126|0
[ 273.612216] cpu118 [0]=74|0 [1]=40|0 [2]=0|0 [3]=100|0 [4]=109|0 [5]=7|0
[ 273.618979] cpu119 [0]=47|0 [1]=31|0 [2]=103|0 [3]=65|0 [4]=84|0 [5]=124|0
[ 273.625914] cpu120 [0]=74|0 [1]=18|0 [2]=87|0 [3]=117|0 [4]=17|0 [5]=62|0
[ 273.632762] cpu121 [0]=64|0 [1]=73|0 [2]=3|0 [3]=127|0 [4]=120|0 [5]=105|0
[ 273.639698] cpu122 [0]=46|128 [1]=89|128 [2]=22|128 [3]=78|128 [4]=100|128 [5]=106|128
[ 273.647671] cpu123 [0]=36|0 [1]=71|0 [2]=25|0 [3]=23|0 [4]=126|0 [5]=86|0
[ 273.654521] cpu124 [0]=26|0 [1]=93|0 [2]=101|0 [3]=103|0 [4]=54|0 [5]=100|0
[ 273.661542] cpu125 [0]=58|0 [1]=57|0 [2]=116|0 [3]=35|0 [4]=73|0 [5]=12|0
[ 273.668392] cpu126 [0]=11|0 [1]=54|0 [2]=41|0 [3]=0|0 [4]=27|0 [5]=54|0
[ 273.675067] cpu127 [0]=70|0 [1]=38|0 [2]=58|0 [3]=123|0 [4]=47|0 [5]=21|0
[ 273.681913] share cache: [0]=0 [1]=4 [2]=9 [3]=3 [4]=6 [5]=5
[ 273.687633] rb_total: 59443
estuary:/sys/kernel/debug/iommu/iovad/iommu_domain2$ echo 1 > drop_rcache
estuary:/sys/kernel/debug/iommu/iovad/iommu_domain2$ cat iova_rcache
[ 295.489690] cpu0 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.495786] cpu1 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.501861] cpu2 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.507929] cpu3 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.514004] cpu4 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.520074] cpu5 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.526142] cpu6 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.532212] cpu7 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.538287] cpu8 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.544355] cpu9 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.550423] cpu10 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.556581] cpu11 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.562739] cpu12 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.568897] cpu13 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.575054] cpu14 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.581209] cpu15 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.587368] cpu16 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.593525] cpu17 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.599680] cpu18 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.605837] cpu19 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.611999] cpu20 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.618159] cpu21 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.624313] cpu22 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.630467] cpu23 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.636624] cpu24 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.642779] cpu25 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.648933] cpu26 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.655088] cpu27 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.661241] cpu28 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.667396] cpu29 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.673549] cpu30 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.679703] cpu31 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.685857] cpu32 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.692010] cpu33 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.698165] cpu34 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.704319] cpu35 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.710474] cpu36 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.716628] cpu37 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.722782] cpu38 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.728936] cpu39 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.735091] cpu40 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.741246] cpu41 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.747400] cpu42 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.753553] cpu43 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.759707] cpu44 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.765860] cpu45 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.772014] cpu46 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.778168] cpu47 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.784322] cpu48 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.790476] cpu49 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.796632] cpu50 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.802786] cpu51 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.808941] cpu52 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.815095] cpu53 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.821250] cpu54 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.827405] cpu55 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.833561] cpu56 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.839716] cpu57 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.845870] cpu58 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.852024] cpu59 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.858181] cpu60 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.864336] cpu61 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.870492] cpu62 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.876646] cpu63 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.882801] cpu64 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.888956] cpu65 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.895109] cpu66 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.901265] cpu67 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.907419] cpu68 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.913574] cpu69 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.919729] cpu70 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.925882] cpu71 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.932037] cpu72 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.938191] cpu73 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.944345] cpu74 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.950499] cpu75 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.956653] cpu76 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.962807] cpu77 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.968961] cpu78 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.975115] cpu79 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.981269] cpu80 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.987423] cpu81 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.993578] cpu82 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 295.999731] cpu83 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.005885] cpu84 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.012040] cpu85 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.018196] cpu86 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.024350] cpu87 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.030504] cpu88 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.036659] cpu89 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.042814] cpu90 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.048968] cpu91 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.055122] cpu92 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.061275] cpu93 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.067429] cpu94 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.073582] cpu95 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.079736] cpu96 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.085890] cpu97 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.092043] cpu98 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.098198] cpu99 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.104352] cpu100 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.110593] cpu101 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.116834] cpu102 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.123076] cpu103 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.129315] cpu104 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.135555] cpu105 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.141795] cpu106 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.148035] cpu107 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.154275] cpu108 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.160515] cpu109 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.166757] cpu110 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.172997] cpu111 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.179237] cpu112 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.185477] cpu113 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.191717] cpu114 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.197958] cpu115 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.204198] cpu116 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.210438] cpu117 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.216677] cpu118 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.222917] cpu119 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.229157] cpu120 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.235398] cpu121 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.241638] cpu122 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.247879] cpu123 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.254121] cpu124 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.260361] cpu125 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.266601] cpu126 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.272843] cpu127 [0]=0|0 [1]=0|0 [2]=0|0 [3]=0|0 [4]=0|0 [5]=0|0
[ 296.279083] share cache: [0]=0 [1]=0 [2]=0 [3]=0 [4]=0 [5]=0
[ 296.284804] rb_total: 4134
estuary:/sys/kernel/debug/iommu/iovad/iommu_domain2$
We can add more debugfs interfaces if we want to.
Xiang Chen (6):
{topost} iommu: free rcache iovas when rmmod the driver of the last
device
{topost} iommu/iova: create iovad debugfs dir under iommu debugfs dir
{topost} iommu/iova: create iommu domain debugfs dir under iovad
debugfs dir
{topost} iommu/iova: create debugfs file related to iova
{topost} iommu/iova: trace how many iovas are in use
{topost} iommu/iova: add a debugfs file drop_rcache to drop rcache
drivers/iommu/dma-iommu.c | 9 +++++
drivers/iommu/iommu-debugfs.c | 93 +++++++++++++++++++++++++++++++++++++++++++
drivers/iommu/iommu.c | 26 +++++-------
drivers/iommu/iova.c | 43 ++++++++------------
include/linux/dma-iommu.h | 6 ++-
include/linux/iommu.h | 32 ++++++++++++++-
include/linux/iova.h | 29 +++++++++++++-
7 files changed, 193 insertions(+), 45 deletions(-)
--
2.8.1
[PATCH] libata: configure max sectors properly
by chenxiang
From: Xiang Chen <chenxiang66(a)hisilicon.com>
The max-sectors limit for a SCSI host can be set through
scsi_host_template->max_sectors in a SCSI driver. But we find that
max_sectors may exceed scsi_host_template->max_sectors for SATA disks
even if we set it: it may be overwritten in some SCSI drivers (those
whose slave_configure callback also calls ata_scsi_dev_config). The
invoking relationship is as follows:
scsi_probe_and_add_lun
  ...
  scsi_alloc_sdev
    scsi_mq_alloc_queue
      ...
      __scsi_init_queue
        blk_queue_max_hw_sectors(q, shost->max_sectors) // from sht->max_sectors
    scsi_change_queue_depth
    scsi_sysfs_device_initialize
    shost->hostt->slave_alloc()
  xxx_slave_configure
    ...
    ata_scsi_dev_config
      blk_queue_max_hw_sectors(q, dev->max_sectors) // overwritten by dev->max_sectors
To avoid the issue, set q->limits.max_sectors to the minimum of
dev->max_sectors and the existing q->limits.max_sectors.
Signed-off-by: Xiang Chen <chenxiang66(a)hisilicon.com>
---
drivers/ata/libata-scsi.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index 48b8934..fb7b243 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -1026,12 +1026,15 @@ EXPORT_SYMBOL_GPL(ata_scsi_dma_need_drain);
int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
{
struct request_queue *q = sdev->request_queue;
+ unsigned int max_sectors;
if (!ata_id_has_unload(dev->id))
dev->flags |= ATA_DFLAG_NO_UNLOAD;
/* configure max sectors */
- blk_queue_max_hw_sectors(q, dev->max_sectors);
+ max_sectors = min_t(unsigned int, dev->max_sectors,
+ q->limits.max_sectors);
+ blk_queue_max_hw_sectors(q, max_sectors);
if (dev->class == ATA_DEV_ATAPI) {
sdev->sector_size = ATA_SECT_SIZE;
--
2.8.1
[PATCH 0/2] use bin_attribute to avoid buffer overflow
by Tian Tao
The first patch adds a new function, cpumap_print_to_buf, and uses
this function in drivers/base/topology.c; the second patch uses this
new function in drivers/base/node.c.
Tian Tao (2):
topology: use bin_attribute to avoid buffer overflow
drivers/base/node.c: use bin_attribute to avoid buffer overflow
drivers/base/node.c | 50 +++++++++++++--------
drivers/base/topology.c | 115 ++++++++++++++++++++++++++----------------------
include/linux/bitmap.h | 3 ++
include/linux/cpumask.h | 25 +++++++++++
lib/bitmap.c | 34 ++++++++++++++
5 files changed, 157 insertions(+), 70 deletions(-)
--
2.7.4
[PATCH v8] topology: use bin_attribute to avoid buffer overflow
by Tian Tao
Reading the files under /sys/devices/system/cpu/cpuX/topology/ returns
the CPU topology. However, the size of each of these files is limited
to PAGE_SIZE because of the limitation on sysfs attributes, so use
bin_attribute instead of attribute to avoid a buffer overflow when
NR_CPUS is too big.
This patch is based on the following discussion:
https://www.spinics.net/lists/linux-doc/msg95921.html
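The off/count contract that makes this work is the one
memory_read_from_buffer() implements: format the whole string once,
then let each read() copy the next chunk. A standalone sketch of that
contract (read_from_buffer() here is a simplified stand-in, not the
kernel function):
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for the kernel's memory_read_from_buffer(). */
static long read_from_buffer(char *to, size_t count, long *off,
			     const char *from, size_t available)
{
	if ((size_t)*off >= available)
		return 0;                    /* EOF */
	if ((size_t)*off + count > available)
		count = available - (size_t)*off;
	memcpy(to, from + *off, count);
	*off += (long)count;
	return (long)count;
}

int main(void)
{
	const char mask[] = "00000000,0000ffff\n";  /* formatted once */
	char chunk[8];
	long off = 0, n;

	/* Userspace reads the >PAGE_SIZE-capable result in chunks. */
	while ((n = read_from_buffer(chunk, sizeof(chunk) - 1, &off,
				     mask, strlen(mask))) > 0) {
		chunk[n] = '\0';
		printf("read %ld bytes at off %ld\n", n, off - n);
	}
	return 0;
}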
Signed-off-by: Tian Tao <tiantao6(a)hisilicon.com>
---
v2: rewrite the function name##_read
v3: remove the temp buffer.
v4: rewrite the function name##_read
v5: rewrite the patch
v6: update the commit message
v7: remove BUFF_SIZE, dynamic calculation of size
v8: Extracting the public part as a subfunction
---
drivers/base/topology.c | 115 ++++++++++++++++++++++++++----------------------
include/linux/bitmap.h | 3 ++
include/linux/cpumask.h | 25 +++++++++++
lib/bitmap.c | 34 ++++++++++++++
4 files changed, 125 insertions(+), 52 deletions(-)
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..013edbb 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -21,25 +21,27 @@ static ssize_t name##_show(struct device *dev, \
return sysfs_emit(buf, "%d\n", topology_##name(dev->id)); \
}
-#define define_siblings_show_map(name, mask) \
-static ssize_t name##_show(struct device *dev, \
- struct device_attribute *attr, char *buf) \
-{ \
- return cpumap_print_to_pagebuf(false, buf, topology_##mask(dev->id));\
+#define define_siblings_read_func(name, mask) \
+static ssize_t name##_read(struct file *file, struct kobject *kobj, \
+ struct bin_attribute *attr, char *buf, \
+ loff_t off, size_t count) \
+{ \
+ struct device *dev = kobj_to_dev(kobj); \
+ \
+ return cpumap_print_to_buf(false, buf, topology_##mask(dev->id), \
+ off, count); \
+} \
+ \
+static ssize_t name##_list_read(struct file *file, struct kobject *kobj, \
+ struct bin_attribute *attr, char *buf, \
+ loff_t off, size_t count) \
+{ \
+ struct device *dev = kobj_to_dev(kobj); \
+ \
+ return cpumap_print_to_buf(true, buf, topology_##mask(dev->id), \
+ off, count); \
}
-#define define_siblings_show_list(name, mask) \
-static ssize_t name##_list_show(struct device *dev, \
- struct device_attribute *attr, \
- char *buf) \
-{ \
- return cpumap_print_to_pagebuf(true, buf, topology_##mask(dev->id));\
-}
-
-#define define_siblings_show_func(name, mask) \
- define_siblings_show_map(name, mask); \
- define_siblings_show_list(name, mask)
-
define_id_show_func(physical_package_id);
static DEVICE_ATTR_RO(physical_package_id);
@@ -49,71 +51,80 @@ static DEVICE_ATTR_RO(die_id);
define_id_show_func(core_id);
static DEVICE_ATTR_RO(core_id);
-define_siblings_show_func(thread_siblings, sibling_cpumask);
-static DEVICE_ATTR_RO(thread_siblings);
-static DEVICE_ATTR_RO(thread_siblings_list);
+define_siblings_read_func(thread_siblings, sibling_cpumask);
+static BIN_ATTR_RO(thread_siblings, 0);
+static BIN_ATTR_RO(thread_siblings_list, 0);
-define_siblings_show_func(core_cpus, sibling_cpumask);
-static DEVICE_ATTR_RO(core_cpus);
-static DEVICE_ATTR_RO(core_cpus_list);
+define_siblings_read_func(core_cpus, sibling_cpumask);
+static BIN_ATTR_RO(core_cpus, 0);
+static BIN_ATTR_RO(core_cpus_list, 0);
-define_siblings_show_func(core_siblings, core_cpumask);
-static DEVICE_ATTR_RO(core_siblings);
-static DEVICE_ATTR_RO(core_siblings_list);
+define_siblings_read_func(core_siblings, core_cpumask);
+static BIN_ATTR_RO(core_siblings, 0);
+static BIN_ATTR_RO(core_siblings_list, 0);
-define_siblings_show_func(die_cpus, die_cpumask);
-static DEVICE_ATTR_RO(die_cpus);
-static DEVICE_ATTR_RO(die_cpus_list);
+define_siblings_read_func(die_cpus, die_cpumask);
+static BIN_ATTR_RO(die_cpus, 0);
+static BIN_ATTR_RO(die_cpus_list, 0);
-define_siblings_show_func(package_cpus, core_cpumask);
-static DEVICE_ATTR_RO(package_cpus);
-static DEVICE_ATTR_RO(package_cpus_list);
+define_siblings_read_func(package_cpus, core_cpumask);
+static BIN_ATTR_RO(package_cpus, 0);
+static BIN_ATTR_RO(package_cpus_list, 0);
#ifdef CONFIG_SCHED_BOOK
define_id_show_func(book_id);
static DEVICE_ATTR_RO(book_id);
-define_siblings_show_func(book_siblings, book_cpumask);
-static DEVICE_ATTR_RO(book_siblings);
-static DEVICE_ATTR_RO(book_siblings_list);
+define_siblings_read_func(book_siblings, book_cpumask);
+static BIN_ATTR_RO(book_siblings, 0);
+static BIN_ATTR_RO(book_siblings_list, 0);
#endif
#ifdef CONFIG_SCHED_DRAWER
define_id_show_func(drawer_id);
static DEVICE_ATTR_RO(drawer_id);
-define_siblings_show_func(drawer_siblings, drawer_cpumask);
-static DEVICE_ATTR_RO(drawer_siblings);
-static DEVICE_ATTR_RO(drawer_siblings_list);
+define_siblings_read_func(drawer_siblings, drawer_cpumask);
+static BIN_ATTR_RO(drawer_siblings, 0);
+static BIN_ATTR_RO(drawer_siblings_list, 0);
#endif
+static struct bin_attribute *bin_attrs[] = {
+ &bin_attr_core_cpus,
+ &bin_attr_core_cpus_list,
+ &bin_attr_thread_siblings,
+ &bin_attr_thread_siblings_list,
+ &bin_attr_core_siblings,
+ &bin_attr_core_siblings_list,
+ &bin_attr_die_cpus,
+ &bin_attr_die_cpus_list,
+ &bin_attr_package_cpus,
+ &bin_attr_package_cpus_list,
+#ifdef CONFIG_SCHED_BOOK
+ &bin_attr_book_siblings,
+ &bin_attr_book_siblings_list,
+#endif
+#ifdef CONFIG_SCHED_DRAWER
+ &bin_attr_drawer_siblings,
+ &bin_attr_drawer_siblings_list,
+#endif
+ NULL,
+};
+
static struct attribute *default_attrs[] = {
&dev_attr_physical_package_id.attr,
&dev_attr_die_id.attr,
&dev_attr_core_id.attr,
- &dev_attr_thread_siblings.attr,
- &dev_attr_thread_siblings_list.attr,
- &dev_attr_core_cpus.attr,
- &dev_attr_core_cpus_list.attr,
- &dev_attr_core_siblings.attr,
- &dev_attr_core_siblings_list.attr,
- &dev_attr_die_cpus.attr,
- &dev_attr_die_cpus_list.attr,
- &dev_attr_package_cpus.attr,
- &dev_attr_package_cpus_list.attr,
#ifdef CONFIG_SCHED_BOOK
&dev_attr_book_id.attr,
- &dev_attr_book_siblings.attr,
- &dev_attr_book_siblings_list.attr,
#endif
#ifdef CONFIG_SCHED_DRAWER
&dev_attr_drawer_id.attr,
- &dev_attr_drawer_siblings.attr,
- &dev_attr_drawer_siblings_list.attr,
#endif
NULL
};
static const struct attribute_group topology_attr_group = {
.attrs = default_attrs,
+ .bin_attrs = bin_attrs,
.name = "topology"
};
diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 70a9324..bc401bd9b 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -219,6 +219,9 @@ extern unsigned int bitmap_ord_to_pos(const unsigned long *bitmap, unsigned int
extern int bitmap_print_to_pagebuf(bool list, char *buf,
const unsigned long *maskp, int nmaskbits);
+extern int bitmap_print_to_buf(bool list, char *buf,
+ const unsigned long *maskp, int nmaskbits, loff_t off, size_t count);
+
#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1)))
#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 383684e..e4810b3e 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -928,6 +928,31 @@ cpumap_print_to_pagebuf(bool list, char *buf, const struct cpumask *mask)
nr_cpu_ids);
}
+/**
+ * cpumap_print_to_buf - copies the cpumask into the buffer either
+ * as a comma-separated list of cpus or as hex values of the cpumask
+ * @list: indicates whether the cpumap must be a list
+ * @mask: the cpumask to copy
+ * @buf: the buffer to copy into
+ * @off: offset into the formatted string to start copying from
+ * @count: maximum number of bytes to copy into @buf
+ *
+ * The role of cpumap_print_to_buf is the same as cpumap_print_to_pagebuf;
+ * the difference is that the formatted output may exceed one page and is
+ * therefore read in chunks of @count bytes starting at @off.
+ *
+ * Returns the length of the (null-terminated) @buf string, zero if
+ * nothing is copied.
+ */
+
+static inline ssize_t
+cpumap_print_to_buf(bool list, char *buf, const struct cpumask *mask,
+ loff_t off, size_t count)
+{
+ return bitmap_print_to_buf(list, buf, cpumask_bits(mask),
+ nr_cpu_ids, off, count);
+}
+
#if NR_CPUS <= BITS_PER_LONG
#define CPU_MASK_ALL \
(cpumask_t) { { \
diff --git a/lib/bitmap.c b/lib/bitmap.c
index 75006c4..5bf89f1 100644
--- a/lib/bitmap.c
+++ b/lib/bitmap.c
@@ -460,6 +460,40 @@ int bitmap_parse_user(const char __user *ubuf,
EXPORT_SYMBOL(bitmap_parse_user);
/**
+ * bitmap_print_to_buf - convert bitmap to list or hex format ASCII string
+ * @list: indicates whether the bitmap must be a list
+ * @buf: buffer into which the string is placed
+ * @maskp: pointer to bitmap to convert
+ * @nmaskbits: size of bitmap, in bits
+ * @off: offset into the formatted string to start copying from
+ * @count: maximum number of bytes to copy into @buf
+ *
+ * The role of bitmap_print_to_buf is the same as bitmap_print_to_pagebuf;
+ * the difference is that the formatted output may exceed one page and is
+ * therefore read in chunks of @count bytes starting at @off.
+ */
+int bitmap_print_to_buf(bool list, char *buf, const unsigned long *maskp,
+ int nmaskbits, loff_t off, size_t count)
+{
+ int len, size;
+ void *data;
+ char *fmt = list ? "%*pbl\n" : "%*pb\n";
+
+ len = snprintf(NULL, 0, fmt, nmaskbits, maskp);
+
+ data = kvmalloc(len+1, GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ size = scnprintf(data, len+1, fmt, nmaskbits, maskp);
+ size = memory_read_from_buffer(buf, count, &off, data, size);
+ kvfree(data);
+
+ return size;
+}
+EXPORT_SYMBOL(bitmap_print_to_buf);
+
+/**
* bitmap_print_to_pagebuf - convert bitmap to list or hex format ASCII string
* @list: indicates whether the bitmap must be list
* @buf: page aligned buffer into which string is placed
--
2.7.4