[PATCH v2] PCI/DPC: Check host->native_dpc before enabling DPC service
by Yicong Yang
Per Downstream Port Containment Related Enhancements ECN[1]
Table 4-6, Interpretation of _OSC Control Field Returned Value,
for bit 7 of the _OSC control return value:
"Firmware sets this bit to 1 to grant the OS control over PCI Express
Downstream Port Containment configuration."
"If control of this feature was requested and denied,
or was not requested, the firmware returns this bit set to 0."
We store bit 7 of the _OSC control return value in host->native_dpc;
check it before enabling the DPC service, as the firmware may not
have granted the OS control of this feature.
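For clarity, the resulting gating logic can be restated as a small
stand-alone sketch (plain C, illustrative only; the real check is the
one in get_port_device_capability() shown in the diff below):

#include <stdbool.h>

static bool dpc_service_allowed(bool has_dpc_cap, bool aer_available,
				bool dpc_native_cmdline, bool aer_service,
				bool native_dpc)
{
	/* No DPC capability or no AER support: never enable the service. */
	if (!has_dpc_cap || !aer_available)
		return false;
	/*
	 * Either the user forced native DPC on the command line, or AER is
	 * a native service *and* firmware granted DPC control (bit 7 of the
	 * _OSC control return value, cached in host->native_dpc).
	 */
	return dpc_native_cmdline || (aer_service && native_dpc);
}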
[1] Downstream Port Containment Related Enhancements ECN,
Jan 28, 2019, affecting PCI Firmware Specification, Rev. 3.2
https://members.pcisig.com/wg/PCI-SIG/document/12888
Signed-off-by: Yicong Yang <yangyicong(a)hisilicon.com>
---
Changes since v1:
- use correct reference for _OSC control return value
drivers/pci/pcie/portdrv_core.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c
index e1fed664..7445d03 100644
--- a/drivers/pci/pcie/portdrv_core.c
+++ b/drivers/pci/pcie/portdrv_core.c
@@ -253,7 +253,8 @@ static int get_port_device_capability(struct pci_dev *dev)
*/
if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC) &&
pci_aer_available() &&
- (pcie_ports_dpc_native || (services & PCIE_PORT_SERVICE_AER)))
+ (pcie_ports_dpc_native ||
+ ((services & PCIE_PORT_SERVICE_AER) && host->native_dpc)))
services |= PCIE_PORT_SERVICE_DPC;
if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
--
2.8.1
[RFC PATCH 0/5] KVM/ARM64 Add support for pinned VMIDs
by Shameer Kolothum
On an ARM64 system with an SMMUv3 implementation that fully supports
the Broadcast TLB Maintenance (BTM) feature as part of the Distributed
Virtual Memory (DVM) protocol, the CPU TLB invalidate instructions are
received by the SMMUv3. This is very useful when the SMMUv3 shares the
page tables with the CPU (e.g. the guest SVA use case). For this to work,
the SMMU must use the same VMID that KVM allocates to configure
the stage 2 translations. At present, KVM VMID allocations are recycled
on rollover and may change as a result. This creates issues if we
have to share the KVM VMID with the SMMU.
Please see the discussion here,
https://lore.kernel.org/linux-iommu/20200522101755.GA3453945@myrica/
This series proposes a way to share the VMID between KVM and the IOMMU
driver (a rough sketch of the split-allocator idea follows this list) by:
1. Splitting the KVM VMID space into two equal halves based on the
command line option "kvm-arm.pinned_vmid_enable".
2. The first half of the VMID space follows the normal recycle-on-rollover
policy.
3. The second half of the VMID space doesn't roll over and is used to
allocate pinned VMIDs.
4. Providing a helper function to retrieve the KVM instance associated
with a device (if it is part of a VFIO group).
5. Introducing generic interfaces to get/put pinned KVM VMIDs.
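A rough, stand-alone sketch of the split-allocator idea (plain C,
illustrative only; this is not the KVM code, and the 16-bit VMID size
is just an example):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VMID_BITS	16
#define VMID_MAX	(1u << VMID_BITS)
#define PINNED_BASE	(VMID_MAX / 2)	/* second half never rolls over */

static uint32_t next_dynamic = 1;	/* 0 is reserved */
static uint32_t next_pinned  = PINNED_BASE;

/* First half: recycled on rollover, so a VMID may be reused later. */
static uint32_t alloc_dynamic_vmid(void)
{
	if (next_dynamic == PINNED_BASE)
		next_dynamic = 1;
	return next_dynamic++;
}

/* Second half: handed out once and kept until explicitly released. */
static bool alloc_pinned_vmid(uint32_t *vmid)
{
	if (next_pinned == VMID_MAX)
		return false;	/* pinned space exhausted */
	*vmid = next_pinned++;
	return true;
}

int main(void)
{
	uint32_t pinned;

	printf("dynamic vmid: %u\n", alloc_dynamic_vmid());
	if (alloc_pinned_vmid(&pinned))
		printf("pinned vmid:  %u\n", pinned);
	return 0;
}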
Open Items:
1. I couldn't figure out a way to determine whether a platform actually
fully supports DVM/BTM or not. Not sure we can take a call based on
SMMUv3 BTM feature bit alone. Probably we can get it from firmware
via IORT?
2. The current splitting of VMID space is only one way to do this and
probably not the best. Maybe we can follow the pinned ASID method used
in SVA code. Suggestions welcome here.
3. The detach_pasid_table() interface is not very clear to me as the current
Qemu prototype is not using that. This requires fixing from my side.
This is based on Jean-Philippe's SVA series[1] and Eric's SMMUv3 dual-stage
support series[2].
The branch with the whole vSVA + BTM solution is here,
https://github.com/hisilicon/kernel-dev/tree/5.10-rc4-2stage-v13-vsva-btm...
This is lightly tested on a HiSilicon D06 platform with uacce/zip dev test tool,
./zip_sva_per -k tlb
Thanks,
Shameer
1. https://github.com/Linaro/linux-kernel-uadk/commits/uacce-devel-5.10
2. https://lore.kernel.org/linux-iommu/20201118112151.25412-1-eric.auger@red...
Shameer Kolothum (5):
vfio: Add a helper to retrieve kvm instance from a dev
KVM: Add generic infrastructure to support pinned VMIDs
KVM: ARM64: Add support for pinned VMIDs
iommu/arm-smmu-v3: Use pinned VMID for NESTED stage with BTM
KVM: arm64: Make sure pinned vmid is released on VM exit
arch/arm64/include/asm/kvm_host.h | 2 +
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/arm.c | 116 +++++++++++++++++++-
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 49 ++++++++-
drivers/vfio/vfio.c | 12 ++
include/linux/kvm_host.h | 17 +++
include/linux/vfio.h | 1 +
virt/kvm/Kconfig | 2 +
virt/kvm/kvm_main.c | 25 +++++
9 files changed, 220 insertions(+), 5 deletions(-)
--
2.17.1
[PATCH net] net: hns3: Fixes+Refactors the broken set channel error fallback logic
by Salil Mehta
The fallback logic of set channels is not handled properly when
set_channels() fails to configure the TX Sched/RSS H/W configuration,
or in the code which brings down/restores the client before/thereafter.
Fix and refactor the code to handle these errors properly and to
improve readability.
Fixes: 3a5a5f06d4d2 ("net: hns3: revert to old channel when setting new channel num fail")
Signed-off-by: Salil Mehta <salil.mehta(a)huawei.com>
Signed-off-by: Peng Li <lipeng321(a)huawei.com>
---
.../net/ethernet/hisilicon/hns3/hns3_enet.c | 77 +++++++++++--------
1 file changed, 47 insertions(+), 30 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index bf4302a5cf95..fbb0f4c9b98e 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -4692,24 +4692,60 @@ static int hns3_reset_notify(struct hnae3_handle *handle,
static int hns3_change_channels(struct hnae3_handle *handle, u32 new_tqp_num,
bool rxfh_configured)
{
- int ret;
+ const struct hnae3_ae_ops *ops = handle->ae_algo->ops;
+ u32 org_tqp_num = handle->kinfo.num_tqps;
+ struct device *dev = &handle->pdev->dev;
+ u32 req_tqp_num = new_tqp_num;
+ bool revert_old_config = false;
+ int ret, retval = 0;
+
+ /* bring down the client */
+ ret = hns3_reset_notify(handle, HNAE3_DOWN_CLIENT);
+ if (ret) {
+ dev_err(dev, "client down fail, this should'nt have happened!\n");
+ return ret;
+ }
- ret = handle->ae_algo->ops->set_channels(handle, new_tqp_num,
- rxfh_configured);
+ ret = hns3_reset_notify(handle, HNAE3_UNINIT_CLIENT);
if (ret) {
- dev_err(&handle->pdev->dev,
- "Change tqp num(%u) fail.\n", new_tqp_num);
+ dev_err(dev, "client uninit fail, this should'nt have happened!\n");
return ret;
}
+revert_old_tqps_config:
+ /* update the TX Sched and RSS config in the H/W */
+ ret = ops->set_channels(handle, req_tqp_num, rxfh_configured);
+ if (ret) {
+ dev_err(dev, "TX Sched/RSS H/W cfg fail(=%d) for %s TPQs\n",
+ ret, revert_old_config ? "old" : "new");
+ goto err_set_channel;
+ }
+
+ /* restore the client */
ret = hns3_reset_notify(handle, HNAE3_INIT_CLIENT);
- if (ret)
- return ret;
+ if (ret) {
+ dev_err(dev, "failed to initialize the client again\n");
+ goto err_set_channel;
+ }
ret = hns3_reset_notify(handle, HNAE3_UP_CLIENT);
- if (ret)
- hns3_reset_notify(handle, HNAE3_UNINIT_CLIENT);
+ if (ret) {
+ dev_err(dev, "Client up fail, this should'nt have happened!\n");
+ return ret;
+ }
+
+ return retval;
+err_set_channel:
+ if (!revert_old_config) {
+ dev_warn(dev, "Revert TX Sched/RSS H/W config with old TPQs\n");
+ req_tqp_num = org_tqp_num;
+ revert_old_config = true;
+ retval = ret;
+ goto revert_old_tpqs_config;
+ }
+ dev_err(dev, "Bad, we could'nt revert to old TPQ H/W config\n");
+ dev_warn(dev, "Device maybe insane. Reload driver/Reset required!\n");
return ret;
}
@@ -4720,7 +4756,6 @@ int hns3_set_channels(struct net_device *netdev,
struct hnae3_knic_private_info *kinfo = &h->kinfo;
bool rxfh_configured = netif_is_rxfh_configured(netdev);
u32 new_tqp_num = ch->combined_count;
- u16 org_tqp_num;
int ret;
if (hns3_nic_resetting(netdev))
@@ -4750,28 +4785,10 @@ int hns3_set_channels(struct net_device *netdev,
"set channels: tqp_num=%u, rxfh=%d\n",
new_tqp_num, rxfh_configured);
- ret = hns3_reset_notify(h, HNAE3_DOWN_CLIENT);
- if (ret)
- return ret;
-
- ret = hns3_reset_notify(h, HNAE3_UNINIT_CLIENT);
- if (ret)
- return ret;
-
- org_tqp_num = h->kinfo.num_tqps;
ret = hns3_change_channels(h, new_tqp_num, rxfh_configured);
if (ret) {
- int ret1;
-
- netdev_warn(netdev,
- "Change channels fail, revert to old value\n");
- ret1 = hns3_change_channels(h, org_tqp_num, rxfh_configured);
- if (ret1) {
- netdev_err(netdev,
- "revert to old channel fail\n");
- return ret1;
- }
-
+ netdev_err(netdev, "fail(=%d) to set number of channels to %u\n", ret,
+ new_tqp_num);
return ret;
}
--
2.17.1
[PATCH net-next 0/9] net: hns3: refactor and new features for flow director
by Huazhong Tan
This patchset refactors some functions and adds some new features for
the flow director.
patch 1~3: refactor large functions
patch 4, 7: add traffic class and user-def field support for ethtool
patch 5: use asynchronously configuration
patch 6: clean up for hns3_del_all_fd_entries()
patch 8, 9: add support for queue bonding mode
Jian Shen (9):
net: hns3: refactor out hclge_add_fd_entry()
net: hns3: refactor out hclge_fd_get_tuple()
net: hns3: refactor for function hclge_fd_convert_tuple
net: hns3: add support for traffic class tuple support for flow
director by ethtool
net: hns3: refactor flow director configuration
net: hns3: refine for hns3_del_all_fd_entries()
net: hns3: add support for user-def data of flow director
net: hns3: add support for queue bonding mode of flow director
net: hns3: add queue bonding mode support for VF
drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h | 8 +
drivers/net/ethernet/hisilicon/hns3/hnae3.h | 9 +-
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c | 7 +-
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 91 +-
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h | 14 +-
drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c | 13 +-
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c | 2 +
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h | 21 +
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 1570 ++++++++++++++------
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 63 +
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c | 33 +
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c | 2 +
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c | 74 +
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h | 7 +
.../ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c | 17 +
15 files changed, 1450 insertions(+), 481 deletions(-)
--
2.7.4
Re: [PATCH net v4 1/2] net: sched: fix packet stuck problem for lockless qdisc
by Yunsheng Lin
On 2021/4/21 17:25, Yunsheng Lin wrote:
> On 2021/4/21 16:44, Michal Kubecek wrote:
>
>>
>> I'll try running some tests also on other architectures, including arm64
>> and s390x (to catch potential endianness issues).
I tried debugging nperf on arm64 with the below patch:
diff --git a/client/main.c b/client/main.c
index 429634d..de1a3ef 100644
--- a/client/main.c
+++ b/client/main.c
@@ -63,7 +63,10 @@ static int client_init(void)
ret = client_set_usr1_handler();
if (ret < 0)
return ret;
- return ignore_signal(SIGPIPE);
+ //return ignore_signal(SIGPIPE);
+ signal(SIGPIPE, SIG_IGN);
+
+ return 0;
}
static int ctrl_send_start(struct client_config *config)
diff --git a/client/worker.c b/client/worker.c
index ac026893..d269311 100644
--- a/client/worker.c
+++ b/client/worker.c
@@ -7,7 +7,7 @@
#include "worker.h"
#include "main.h"
-#define WORKER_STACK_SIZE 16384
+#define WORKER_STACK_SIZE 131072
struct client_worker_data *workers_data;
union sockaddr_any test_addr;
It gives the below error output:
../nperf/nperf -H 127.0.0.1 -l 3 -i 1 --exact -t TCP_STREAM -M 1
server: 127.0.0.1, port 12543
iterations: 1, threads: 1, test length: 3
test: TCP_STREAM, message size: 1048576
run test begin
send begin
send done: -32
failed to receive server stats
*** Iteration 1 failed, quitting. ***
Tcpdump has the below output:
09:55:12.253341 IP localhost.53080 > localhost.12543: Flags [S], seq 3954442980, win 65495, options [mss 65495,sackOK,TS val 3268837738 ecr 0,nop,wscale 7], length 0
09:55:12.253363 IP localhost.12543 > localhost.53080: Flags [S.], seq 4240541653, ack 3954442981, win 65483, options [mss 65495,sackOK,TS val 3268837738 ecr 3268837738,nop,wscale 7], length 0
09:55:12.253379 IP localhost.53080 > localhost.12543: Flags [.], ack 1, win 512, options [nop,nop,TS val 3268837738 ecr 3268837738], length 0
09:55:12.253412 IP localhost.53080 > localhost.12543: Flags [P.], seq 1:29, ack 1, win 512, options [nop,nop,TS val 3268837738 ecr 3268837738], length 28
09:55:12.253863 IP localhost.12543 > localhost.53080: Flags [P.], seq 1:17, ack 29, win 512, options [nop,nop,TS val 3268837739 ecr 3268837738], length 16
09:55:12.253891 IP localhost.53080 > localhost.12543: Flags [.], ack 17, win 512, options [nop,nop,TS val 3268837739 ecr 3268837739], length 0
09:55:12.254265 IP localhost.12543 > localhost.53080: Flags [F.], seq 17, ack 29, win 512, options [nop,nop,TS val 3268837739 ecr 3268837739], length 0
09:55:12.301992 IP localhost.53080 > localhost.12543: Flags [.], ack 18, win 512, options [nop,nop,TS val 3268837787 ecr 3268837739], length 0
09:55:15.254389 IP localhost.53080 > localhost.12543: Flags [F.], seq 29, ack 18, win 512, options [nop,nop,TS val 3268840739 ecr 3268837739], length 0
09:55:15.254426 IP localhost.12543 > localhost.53080: Flags [.], ack 30, win 512, options [nop,nop,TS val 3268840739 ecr 3268840739], length 0
Any idea what went wrong here?
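(For reference: -32 here is -EPIPE. With SIGPIPE ignored, as in the
client patch above, a write to a connection the peer has already closed
fails with EPIPE instead of killing the process. A minimal stand-alone
illustration:)

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int sv[2];
	char buf[] = "ping";

	signal(SIGPIPE, SIG_IGN);		/* as in the client patch above */
	socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
	close(sv[1]);				/* peer goes away */

	if (write(sv[0], buf, sizeof(buf)) < 0)
		printf("write failed: %d (%s)\n", -errno, strerror(errno));
	return 0;
}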
Also, would you mind running netperf to see if there is a similar issue
on your system?
>>
>> Michal
>>
>> .
>>
>
>
> .
>
[RFC PATCH v6 1/4] topology: Represent clusters of CPUs within a die
by Barry Song
From: Jonathan Cameron <Jonathan.Cameron(a)huawei.com>
Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared.
In ACPI this can be represented in PPTT as a Processor Hierarchy
Node Structure [1] that is the parent of the CPU cores and in turn
has a parent Processor Hierarchy Node Structure representing
a higher level of topology.
For example, Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share L3 cache data, but each cluster
has a local L3 tag. On the other hand, the clusters also share some
internal system bus.
+-----------------------------------+ +---------+
| +------+ +------+ +---------------------------+ |
| | CPU0 | | cpu1 | | +-----------+ | |
| +------+ +------+ | | | | |
| +----+ L3 | | |
| +------+ +------+ cluster | | tag | | |
| | CPU2 | | CPU3 | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | |
+-----------------------------------+ | |
| +------+ +------+ +--------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| | | L3 | | |
| +------+ +------+ +----+ tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | L3 |
| data |
+-----------------------------------+ | |
| +------+ +------+ | +-----------+ | |
| | | | | | | | | |
| +------+ +------+ +----+ L3 | | |
| | | tag | | |
| +------+ +------+ | | | | |
| | | | | ++ +-----------+ | |
| +------+ +------+ |---------------------------+ |
+-----------------------------------| | |
+-----------------------------------| | |
| +------+ +------+ +---------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| +----+ L3 | | |
| +------+ +------+ | | tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | |
+-----------------------------------+ | |
| +------+ +------+ +--------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| | | L3 | | |
| +------+ +------+ +---+ tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | | |
+-----------------------------------+ | |
+-----------------------------------+ ++ |
| +------+ +------+ +--------------------------+ |
| | | | | | +-----------+ | |
| +------+ +------+ | | | | |
| | | L3 | | |
| +------+ +------+ +--+ tag | | |
| | | | | | | | | |
| +------+ +------+ | +-----------+ | |
| | +---------+
+-----------------------------------+
That means the cost to transfer ownership of a cacheline between CPUs
within a cluster is lower than between CPUs in different clusters on
the same die. Hence, it can make sense to tell the scheduler to use
the cache affinity of the cluster to make better decisions on thread
migration.
This patch simply exposes this information to userspace libraries
like hwloc by providing cluster_cpus and related sysfs attributes.
PoC of HWLOC support at [2].
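As a rough illustration (not part of this patch), the new attributes can
be read from userspace like so, assuming the sysfs layout added below:

#include <stdio.h>

int main(void)
{
	char path[128], buf[256];
	FILE *f;
	int cpu;

	/* Walk CPUs until a topology directory is missing. */
	for (cpu = 0; ; cpu++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/cluster_cpus_list",
			 cpu);
		f = fopen(path, "r");
		if (!f)
			break;
		if (fgets(buf, sizeof(buf), f))
			printf("cpu%d cluster siblings: %s", cpu, buf);
		fclose(f);
	}
	return 0;
}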
Note this patch only handles the ACPI case.
Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes
(thus skipping the processor core level).
Currently the ID provided is the offset of the Processor
Hierarchy Nodes Structure within PPTT. Whilst this is unique
it is not terribly elegant so alternative suggestions welcome.
Note that arm64 / ACPI does not provide any means of identifying
a die level in the topology, but that may be unrelated to the cluster
level.
[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster
Signed-off-by: Jonathan Cameron <Jonathan.Cameron(a)huawei.com>
Signed-off-by: Barry Song <song.bao.hua(a)hisilicon.com>
---
-v6:
* the topology ABI documentation required by Greg is not complete yet;
will send a separate patch for that.
Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
arch/arm64/kernel/topology.c | 2 +
drivers/acpi/pptt.c | 63 +++++++++++++++++++++++++++++++
drivers/base/arch_topology.c | 15 ++++++++
drivers/base/topology.c | 10 +++++
include/linux/acpi.h | 5 +++
include/linux/arch_topology.h | 5 +++
include/linux/topology.h | 6 +++
8 files changed, 128 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
identifier (rather than the kernel's). The actual value is
architecture and platform dependent.
+cluster_id:
+
+ the Cluster ID of cpuX. Typically it is the hardware platform's
+ identifier (rather than the kernel's). The actual value is
+ architecture and platform dependent.
+
book_id:
the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
human-readable list of CPUs sharing the same physical_package_id.
(deprecated name: "core_siblings_list")
+cluster_cpus:
+
+ internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+ human-readable list of CPUs within the same cluster.
+
die_cpus:
internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
#define topology_physical_package_id(cpu)
#define topology_die_id(cpu)
+ #define topology_cluster_id(cpu)
#define topology_core_id(cpu)
#define topology_book_id(cpu)
#define topology_drawer_id(cpu)
#define topology_sibling_cpumask(cpu)
#define topology_core_cpumask(cpu)
+ #define topology_cluster_cpumask(cpu)
#define topology_die_cpumask(cpu)
#define topology_book_cpumask(cpu)
#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
1) topology_physical_package_id: -1
2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index e08a412..d72eb8d 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
cpu_topology[cpu].thread_id = -1;
cpu_topology[cpu].core_id = topology_id;
}
+ topology_id = find_acpi_cpu_topology_cluster(cpu);
+ cpu_topology[cpu].cluster_id = topology_id;
topology_id = find_acpi_cpu_topology_package(cpu);
cpu_topology[cpu].package_id = topology_id;
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..11f8b02 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,69 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
}
/**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+ struct acpi_table_header *table;
+ acpi_status status;
+ struct acpi_pptt_processor *cpu_node, *cluster_node;
+ u32 acpi_cpu_id;
+ int retval;
+ int is_thread;
+
+ status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+ if (ACPI_FAILURE(status)) {
+ acpi_pptt_warn_missing();
+ return -ENOENT;
+ }
+
+ acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+ cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+ if (cpu_node == NULL || !cpu_node->parent) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+
+ is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+ cluster_node = fetch_pptt_node(table, cpu_node->parent);
+ if (cluster_node == NULL) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+ if (is_thread) {
+ if (!cluster_node->parent) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+ cluster_node = fetch_pptt_node(table, cluster_node->parent);
+ if (cluster_node == NULL) {
+ retval = -ENOENT;
+ goto put_table;
+ }
+ }
+ retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+ acpi_put_table(table);
+
+ return retval;
+}
+
+/**
* find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
* @cpu: Kernel logical CPU number
*
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index de8587c..ca3b8c1 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
return core_mask;
}
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+ return &cpu_topology[cpu].cluster_sibling;
+}
+
void update_siblings_masks(unsigned int cpuid)
{
struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
if (cpuid_topo->package_id != cpu_topo->package_id)
continue;
+ if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+ cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+ cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+ }
+
cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu)
cpumask_clear(&cpu_topo->llc_sibling);
cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
+ cpumask_clear(&cpu_topo->cluster_sibling);
+ cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
cpumask_clear(&cpu_topo->core_sibling);
cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
cpumask_clear(&cpu_topo->thread_sibling);
@@ -556,6 +569,7 @@ void __init reset_cpu_topology(void)
cpu_topo->thread_id = -1;
cpu_topo->core_id = -1;
+ cpu_topo->cluster_id = -1;
cpu_topo->package_id = -1;
cpu_topo->llc_id = -1;
@@ -571,6 +585,7 @@ void remove_cpu_topology(unsigned int cpu)
cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
for_each_cpu(sibling, topology_sibling_cpumask(cpu))
cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
for_each_cpu(sibling, topology_llc_cpumask(cpu))
cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..7157ac0 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@
define_id_show_func(die_id);
static DEVICE_ATTR_RO(die_id);
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
define_id_show_func(core_id);
static DEVICE_ATTR_RO(core_id);
@@ -61,6 +64,10 @@
static DEVICE_ATTR_RO(core_siblings);
static DEVICE_ATTR_RO(core_siblings_list);
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
define_siblings_show_func(die_cpus, die_cpumask);
static DEVICE_ATTR_RO(die_cpus);
static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@
static struct attribute *default_attrs[] = {
&dev_attr_physical_package_id.attr,
&dev_attr_die_id.attr,
+ &dev_attr_cluster_id.attr,
&dev_attr_core_id.attr,
&dev_attr_thread_siblings.attr,
&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@
&dev_attr_core_cpus_list.attr,
&dev_attr_core_siblings.attr,
&dev_attr_core_siblings_list.attr,
+ &dev_attr_cluster_cpus.attr,
+ &dev_attr_cluster_cpus_list.attr,
&dev_attr_die_cpus.attr,
&dev_attr_die_cpus_list.attr,
&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 9f43241..138b779 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1307,6 +1307,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
#ifdef CONFIG_ACPI_PPTT
int acpi_pptt_cpu_is_thread(unsigned int cpu);
int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
int find_acpi_cpu_topology_package(unsigned int cpu);
int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1319,6 +1320,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
{
return -EINVAL;
}
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+ return -EINVAL;
+}
static inline int find_acpi_cpu_topology_package(unsigned int cpu)
{
return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 0f6cd6b..987c7ea 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
struct cpu_topology {
int thread_id;
int core_id;
+ int cluster_id;
int package_id;
int llc_id;
cpumask_t thread_sibling;
cpumask_t core_sibling;
+ cpumask_t cluster_sibling;
cpumask_t llc_sibling;
};
@@ -60,13 +62,16 @@ struct cpu_topology {
extern struct cpu_topology cpu_topology[NR_CPUS];
#define topology_physical_package_id(cpu) (cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu) (cpu_topology[cpu].cluster_id)
#define topology_core_id(cpu) (cpu_topology[cpu].core_id)
#define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling)
#define topology_sibling_cpumask(cpu) (&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu) (&cpu_topology[cpu].cluster_sibling)
#define topology_llc_cpumask(cpu) (&cpu_topology[cpu].llc_sibling)
void init_cpu_topology(void);
void store_cpu_topology(unsigned int cpuid);
const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
void update_siblings_masks(unsigned int cpu);
void remove_cpu_topology(unsigned int cpuid);
void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 7634cd7..80d27d7 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -186,6 +186,9 @@ static inline int cpu_to_mem(int cpu)
#ifndef topology_die_id
#define topology_die_id(cpu) ((void)(cpu), -1)
#endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu) ((void)(cpu), -1)
+#endif
#ifndef topology_core_id
#define topology_core_id(cpu) ((void)(cpu), 0)
#endif
@@ -195,6 +198,9 @@ static inline int cpu_to_mem(int cpu)
#ifndef topology_core_cpumask
#define topology_core_cpumask(cpu) cpumask_of(cpu)
#endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu) cpumask_of(cpu)
+#endif
#ifndef topology_die_cpumask
#define topology_die_cpumask(cpu) cpumask_of(cpu)
#endif
--
1.8.3.1
Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
by Dietmar Eggemann
On 20/04/2021 02:18, Barry Song wrote:
[...]
> @@ -5786,11 +5786,12 @@ static void record_wakee(struct task_struct *p)
> * whatever is irrelevant, spread criteria is apparent partner count exceeds
> * socket size.
> */
> -static int wake_wide(struct task_struct *p)
> +static int wake_wide(struct task_struct *p, int cluster)
> {
> unsigned int master = current->wakee_flips;
> unsigned int slave = p->wakee_flips;
> - int factor = __this_cpu_read(sd_llc_size);
> + int factor = cluster ? __this_cpu_read(sd_cluster_size) :
> + __this_cpu_read(sd_llc_size);
I don't see that the wake_wide() change has any effect here. None of the
sched domains has SD_BALANCE_WAKE set so a wakeup (WF_TTWU) can never
end up in the slow path.
Have you seen a difference in what wake_wide() returns when running your
`lmbench stream` workload with `sd cluster size` instead of `sd llc size`
as the factor?
I guess for you, wakeups are now subdivided into faster (cluster = 4
CPUs) and fast (llc = 24 CPUs) via sis(), not into fast (sis()) and slow
(find_idlest_cpu()).
>
> if (master < slave)
> swap(master, slave);
[...]
> @@ -6745,6 +6748,12 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> int want_affine = 0;
> /* SD_flags and WF_flags share the first nibble */
> int sd_flag = wake_flags & 0xF;
> + /*
> + * if cpu and prev_cpu share LLC, consider cluster sibling rather
> + * than llc. this is typically true while tasks are bound within
> + * one numa
> + */
> + int cluster = sched_cluster_active() && cpus_share_cache(cpu, prev_cpu, 0);
So you changed from scanning the cluster before the LLC to scanning
either the cluster or the LLC.
And this is based on whether `this_cpu` and `prev_cpu` share the LLC
or not. So you only see an effect when running the workload with
`numactl -N X ...`.
>
> if (wake_flags & WF_TTWU) {
> record_wakee(p);
> @@ -6756,7 +6765,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> new_cpu = prev_cpu;
> }
>
> - want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
> + want_affine = !wake_wide(p, cluster) && cpumask_test_cpu(cpu, p->cpus_ptr);
> }
>
> rcu_read_lock();
> @@ -6768,7 +6777,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
> cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
> if (cpu != prev_cpu)
> - new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
> + new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync, cluster);
>
> sd = NULL; /* Prefer wake_affine over balance flags */
> break;
> @@ -6785,7 +6794,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
> } else if (wake_flags & WF_TTWU) { /* XXX always ? */
> /* Fast path */
> - new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
> + new_cpu = select_idle_sibling(p, prev_cpu, new_cpu, cluster);
>
> if (want_affine)
> current->recent_used_cpu = cpu;
[...]
Re: [PATCH V3 7/7] app/testpmd: remove redundant fwd streams initialization
by Li, Xiaoyun
> -----Original Message-----
> From: Huisong Li <lihuisong(a)huawei.com>
> Sent: Tuesday, April 20, 2021 17:01
> To: dev(a)dpdk.org
> Cc: Yigit, Ferruh <ferruh.yigit(a)intel.com>; Li, Xiaoyun <xiaoyun.li(a)intel.com>;
> linuxarm(a)openeuler.org; lihuisong(a)huawei.com
> Subject: [PATCH V3 7/7] app/testpmd: remove redundant fwd streams
> initialization
>
> The fwd_config_setup() is called after init_fwd_streams().
> The fwd_config_setup() will reinitialize forwarding streams.
> This patch removes init_fwd_streams() from init_config().
>
> Signed-off-by: Huisong Li <lihuisong(a)huawei.com>
> Signed-off-by: Lijun Ou <oulijun(a)huawei.com>
> ---
Agree. Seems redundant. Every fwd setup will call init_fwd_streams() again.
Acked-by: Xiaoyun Li <xiaoyun.li(a)intel.com>
Re: [PATCH V3 6/7] app/testpmd: add forwarding config in start port
by Li, Xiaoyun
> -----Original Message-----
> From: Huisong Li <lihuisong(a)huawei.com>
> Sent: Tuesday, April 20, 2021 17:01
> To: dev(a)dpdk.org
> Cc: Yigit, Ferruh <ferruh.yigit(a)intel.com>; Li, Xiaoyun <xiaoyun.li(a)intel.com>;
> linuxarm(a)openeuler.org; lihuisong(a)huawei.com
> Subject: [PATCH V3 6/7] app/testpmd: add forwarding config in start port
>
> Most operations in testpmd that need to update the forwarding streams in
> testpmd call fwd_config_setup(). In some scenarios, eg, dev_configure is called
> again, the forwarding streams may not be updated. As a result, the actual
> forwarding streams cannot be queried by "show config fwd" cmd.
I don't agree with this.
Fwd config should only be changed after the user changes something like nb-cores, queue number or eth-peer in non-DCB mode. These are already handled in those commands.
You should do fwd_config_setup() at the end of cmd_config_dcb_parsed(); I agree on that.
But doing it in start port seems to run fwd_config_setup() a redundant number of times. It's not really needed.
>
> The procedure is as follows:
> set nbcore 4
> port stop all
> port config 0 dcb vt off 4 pfc on
> port start all
> show config fwd
>
> Signed-off-by: Huisong Li <lihuisong(a)huawei.com>
> Signed-off-by: Lijun Ou <oulijun(a)huawei.com>
> ---
> app/test-pmd/testpmd.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> abcbdaa..f8052b6 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2678,6 +2678,12 @@ start_port(portid_t pid)
> }
> }
> }
> + /*
> + * In some scenarios, eg, dev_configure is called again, the forwarding
> + * streams may not be updated. As a result, the actual forwarding
> + * streams cannot be queried by "show config fwd" command.
> + */
> + fwd_config_setup();
>
> printf("Done\n");
> return 0;
> --
> 2.7.4