mailweb.openeuler.org
Linuxarm

Threads by month
  • ----- 2025 -----
  • May
  • April
  • March
  • February
  • January
  • ----- 2024 -----
  • December
  • November
  • October
  • September
  • August
  • July
  • June
  • May
  • April
  • March
  • February
  • January
  • ----- 2023 -----
  • December
  • November
  • October
  • September
  • August
  • July
  • June
  • May
  • April
  • March
  • February
  • January
  • ----- 2022 -----
  • December
  • November
  • October
  • September
  • August
  • July
  • June
  • May
  • April
  • March
  • February
  • January
  • ----- 2021 -----
  • December
  • November
  • October
  • September
  • August
  • July
  • June
  • May
  • April
  • March
  • February
  • January
  • ----- 2020 -----
  • December
linuxarm@openeuler.org

May 2021

  • 35 participants
  • 62 discussions
[PATCH] topology: use bin_attribute to avoid buff overflow
by Tian Tao 19 May '21

Use bin_attribute to display thread_siblings/core_cpus/core_siblings,
die_cpus/package_cpus/book_siblings/drawer_siblings, avoiding NR_CPUS
too big to cause buff overflow. This patch is based on the following
discussion.

https://www.spinics.net/lists/linux-doc/msg95921.html

Signed-off-by: Tian Tao <tiantao6(a)hisilicon.com>
---
 drivers/base/topology.c | 54 +++++++++++++++++++++++++++++++------------------
 1 file changed, 34 insertions(+), 20 deletions(-)

diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..be589b6b 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -21,11 +21,15 @@ static ssize_t name##_show(struct device *dev, \
 	return sysfs_emit(buf, "%d\n", topology_##name(dev->id)); \
 }
 
-#define define_siblings_show_map(name, mask) \
-static ssize_t name##_show(struct device *dev, \
-	struct device_attribute *attr, char *buf) \
-{ \
-	return cpumap_print_to_pagebuf(false, buf, topology_##mask(dev->id));\
+#define define_siblings_read(name, mask) \
+static ssize_t name##_read(struct file *file, struct kobject *kobj, \
+	struct bin_attribute *attr, char *buf, \
+	loff_t off, size_t count) \
+{ \
+	struct device *dev = kobj_to_dev(kobj); \
+	char temp[(NR_CPUS/32)*9] = {0}; \
+	cpumap_print_to_pagebuf(false, temp, topology_##mask(dev->id)); \
+	return memory_read_from_buffer(buf, count, &off, temp, (NR_CPUS/32)*9); \
 }
 
 #define define_siblings_show_list(name, mask) \
@@ -37,7 +41,7 @@ static ssize_t name##_list_show(struct device *dev, \
 }
 
 #define define_siblings_show_func(name, mask) \
-	define_siblings_show_map(name, mask); \
+	define_siblings_read(name, mask); \
 	define_siblings_show_list(name, mask)
 
 define_id_show_func(physical_package_id);
@@ -50,30 +54,30 @@ define_id_show_func(core_id);
 static DEVICE_ATTR_RO(core_id);
 
 define_siblings_show_func(thread_siblings, sibling_cpumask);
-static DEVICE_ATTR_RO(thread_siblings);
+static BIN_ATTR_RO(thread_siblings, 0);
 static DEVICE_ATTR_RO(thread_siblings_list);
 
 define_siblings_show_func(core_cpus, sibling_cpumask);
-static DEVICE_ATTR_RO(core_cpus);
+static BIN_ATTR_RO(core_cpus, 0);
 static DEVICE_ATTR_RO(core_cpus_list);
 
 define_siblings_show_func(core_siblings, core_cpumask);
-static DEVICE_ATTR_RO(core_siblings);
+static BIN_ATTR_RO(core_siblings, 0);
 static DEVICE_ATTR_RO(core_siblings_list);
 
 define_siblings_show_func(die_cpus, die_cpumask);
-static DEVICE_ATTR_RO(die_cpus);
+static BIN_ATTR_RO(die_cpus, 0);
 static DEVICE_ATTR_RO(die_cpus_list);
 
 define_siblings_show_func(package_cpus, core_cpumask);
-static DEVICE_ATTR_RO(package_cpus);
+static BIN_ATTR_RO(package_cpus, 0);
 static DEVICE_ATTR_RO(package_cpus_list);
 
 #ifdef CONFIG_SCHED_BOOK
 define_id_show_func(book_id);
 static DEVICE_ATTR_RO(book_id);
 define_siblings_show_func(book_siblings, book_cpumask);
-static DEVICE_ATTR_RO(book_siblings);
+static BIN_ATTR_RO(book_siblings, 0);
 static DEVICE_ATTR_RO(book_siblings_list);
 #endif
 
@@ -81,32 +85,41 @@ static DEVICE_ATTR_RO(book_siblings_list);
 define_id_show_func(drawer_id);
 static DEVICE_ATTR_RO(drawer_id);
 define_siblings_show_func(drawer_siblings, drawer_cpumask);
-static DEVICE_ATTR_RO(drawer_siblings);
+static BIN_ATTR_RO(drawer_siblings, 0);
 static DEVICE_ATTR_RO(drawer_siblings_list);
 #endif
 
+static struct bin_attribute *bin_attrs[] = {
+	&bin_attr_thread_siblings,
+	&bin_attr_core_cpus,
+	&bin_attr_core_siblings,
+	&bin_attr_die_cpus,
+	&bin_attr_package_cpus,
+#ifdef CONFIG_SCHED_BOOK
+	&bin_attr_book_siblings,
+#endif
+#ifdef CONFIG_SCHED_DRAWER
+	&bin_attr_drawer_siblings,
+#endif
+	NULL,
+};
+
 static struct attribute *default_attrs[] = {
 	&dev_attr_physical_package_id.attr,
 	&dev_attr_die_id.attr,
 	&dev_attr_core_id.attr,
-	&dev_attr_thread_siblings.attr,
 	&dev_attr_thread_siblings_list.attr,
-	&dev_attr_core_cpus.attr,
 	&dev_attr_core_cpus_list.attr,
-	&dev_attr_core_siblings.attr,
 	&dev_attr_core_siblings_list.attr,
-	&dev_attr_die_cpus.attr,
 	&dev_attr_die_cpus_list.attr,
-	&dev_attr_package_cpus.attr,
 	&dev_attr_package_cpus_list.attr,
 #ifdef CONFIG_SCHED_BOOK
 	&dev_attr_book_id.attr,
-	&dev_attr_book_siblings.attr,
 	&dev_attr_book_siblings_list.attr,
 #endif
 #ifdef CONFIG_SCHED_DRAWER
 	&dev_attr_drawer_id.attr,
-	&dev_attr_drawer_siblings.attr,
 	&dev_attr_drawer_siblings_list.attr,
 #endif
 	NULL
@@ -114,6 +127,7 @@ static struct attribute *default_attrs[] = {
 
 static const struct attribute_group topology_attr_group = {
 	.attrs = default_attrs,
+	.bin_attrs = bin_attrs,
 	.name = "topology"
 };
-- 
2.7.4
Re: [RFC PATCH v3 4/9] hw/arm/virt: Initialize the present cpu members
by Salil Mehta 19 May '21

> From: wangyanan (Y)
> Sent: Tuesday, May 18, 2021 5:43 AM
>
> Hi Salil,
>
> On 2021/5/18 4:48, Salil Mehta wrote:
> >> From: Qemu-arm [mailto:qemu-arm-bounces+salil.mehta=huawei.com@nongnu.org]
> >> On Behalf Of Yanan Wang
> >> Sent: Sunday, May 16, 2021 11:29 AM
> >> To: Peter Maydell <peter.maydell(a)linaro.org>; Andrew Jones
> >> <drjones(a)redhat.com>; Michael S . Tsirkin <mst(a)redhat.com>; Igor Mammedov
> >> <imammedo(a)redhat.com>; Shannon Zhao <shannon.zhaosl(a)gmail.com>; Alistair
> >> Francis <alistair.francis(a)wdc.com>; David Gibson
> >> <david(a)gibson.dropbear.id.au>; qemu-devel(a)nongnu.org; qemu-arm(a)nongnu.org
> >> Cc: Song Bao Hua (Barry Song) <song.bao.hua(a)hisilicon.com>; zhukeqian
> >> <zhukeqian1(a)huawei.com>; yangyicong <yangyicong(a)huawei.com>; Zengtao (B)
> >> <prime.zeng(a)hisilicon.com>; Wanghaibin (D) <wanghaibin.wang(a)huawei.com>;
> >> yuzenghui <yuzenghui(a)huawei.com>; Paolo Bonzini <pbonzini(a)redhat.com>;
> >> Philippe Mathieu-Daudé <philmd(a)redhat.com>
> >> Subject: [RFC PATCH v3 4/9] hw/arm/virt: Initialize the present cpu members
> >>
> >> We create and initialize a cpuobj for each present cpu in
> >> machvirt_init(). Now we also initialize the cpu member of
> >> structure CPUArchId for each present cpu in the function.
> > [...]
> >
> >>         qdev_realize(DEVICE(cpuobj), NULL, &error_fatal);
> >> +
> >> +        /*
> >> +         * As ARM cpu hotplug is not supported yet, we initialize
> >> +         * the present cpu members here.
> >> +         */
> >> +        machine->possible_cpus->cpus[n].cpu = cpuobj;
> >
> > when vcpu Hotplug is not supported yet, what necessitates this change now?
> >
> The initialization will gives a way to determine whether a CPU is
> present or not.
> At least, for now it will be used when generating ACPI tables, e.g.
> DSDT, MADT.
> See patch 5 and 6.

yes, but why do you require it now as part of the vcpu topology change?
As-far-as-i-can-see, PPTT table changes (part of patch 5/9) do not require
this change. Change in Patch 5/9 has also been done in anticipation of some
future requirement (vcpu Hotplug?). Please correct me here if I am wrong?

Thanks
Re: [RFC PATCH v3 6/9] hw/arm/virt-acpi-build: Use possible cpus in generation of MADT
by Salil Mehta 18 May '21

> From: wangyanan (Y)
> Sent: Tuesday, May 18, 2021 6:03 AM
>
> Hi Salil,
>
> On 2021/5/18 1:07, Salil Mehta wrote:
> >> From: Qemu-arm [mailto:qemu-arm-bounces+salil.mehta=huawei.com@nongnu.org]
> >> On Behalf Of Yanan Wang
> >> Sent: Sunday, May 16, 2021 11:29 AM
> >> To: Peter Maydell <peter.maydell(a)linaro.org>; Andrew Jones
> >> <drjones(a)redhat.com>; Michael S . Tsirkin <mst(a)redhat.com>; Igor Mammedov
> >> <imammedo(a)redhat.com>; Shannon Zhao <shannon.zhaosl(a)gmail.com>; Alistair
> >> Francis <alistair.francis(a)wdc.com>; David Gibson
> >> <david(a)gibson.dropbear.id.au>; qemu-devel(a)nongnu.org; qemu-arm(a)nongnu.org
> >> Cc: Song Bao Hua (Barry Song) <song.bao.hua(a)hisilicon.com>; zhukeqian
> >> <zhukeqian1(a)huawei.com>; yangyicong <yangyicong(a)huawei.com>; Zengtao (B)
> >> <prime.zeng(a)hisilicon.com>; Wanghaibin (D) <wanghaibin.wang(a)huawei.com>;
> >> yuzenghui <yuzenghui(a)huawei.com>; Paolo Bonzini <pbonzini(a)redhat.com>;
> >> Philippe Mathieu-Daudé <philmd(a)redhat.com>
> >> Subject: [RFC PATCH v3 6/9] hw/arm/virt-acpi-build: Use possible cpus in
> >> generation of MADT
> >>
> >> When building ACPI tables regarding CPUs we should always build
> >> them for the number of possible CPUs, not the number of present
> >> CPUs. So we create gicc nodes in MADT for possible cpus and then
> >> ensure only the present CPUs are marked ENABLED. Furthermore, it
> >> also needed if we are going to support CPU hotplug in the future.
> > Hi Yanan,
> > Yes, these changes are part of the QEMU patch-set I floated last year.
> >
> > Link: https://www.mail-archive.com/qemu-devel@nongnu.org/msg712018.html
> Yes, I noticed this. Thanks!
> >
> > Perhaps I am missing something, but how this patch is related to the vcpu
> > topology support?
> No related actually. But this patch together with patch 5 aim to provide
> complete information (all cpus including enabled and the others) to guest,
> which will be more consistent with requirement in ACPI spec.

Well, if it is not related to the cpu topology support then this and other
similar patches included with the same line of thought should not be part
of this patch-set. I am already working with ARM folks in this regard.

Thanks

>
> We don't consider cpu hotplug at all in this patch, but it indeed pave way
> for cpu hotplug in the future.
>
> Thanks,
> Yanan
> > Thanks
> >
> >> Co-developed-by: Andrew Jones <drjones(a)redhat.com>
> >> Signed-off-by: Andrew Jones <drjones(a)redhat.com>
> >> Co-developed-by: Ying Fang <fangying1(a)huawei.com>
> >> Signed-off-by: Ying Fang <fangying1(a)huawei.com>
> >> Co-developed-by: Yanan Wang <wangyanan55(a)huawei.com>
> >> Signed-off-by: Yanan Wang <wangyanan55(a)huawei.com>
> >> ---
> >>  hw/arm/virt-acpi-build.c | 29 +++++++++++++++++++++++++----
> >>  1 file changed, 25 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> >> index a2d8e87616..4d64aeb865 100644
> >> --- a/hw/arm/virt-acpi-build.c
> >> +++ b/hw/arm/virt-acpi-build.c
> >> @@ -481,6 +481,9 @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> >>      const int *irqmap = vms->irqmap;
> >>      AcpiMadtGenericDistributor *gicd;
> >>      AcpiMadtGenericMsiFrame *gic_msi;
> >> +    MachineClass *mc = MACHINE_GET_CLASS(vms);
> >> +    const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(MACHINE(vms));
> >> +    bool pmu;
> >>      int i;
> >>
> >>      acpi_data_push(table_data, sizeof(AcpiMultipleApicTable));
> >> @@ -491,11 +494,21 @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> >>      gicd->base_address = cpu_to_le64(memmap[VIRT_GIC_DIST].base);
> >>      gicd->version = vms->gic_version;
> >>
> >> -    for (i = 0; i < MACHINE(vms)->smp.cpus; i++) {
> >> +    for (i = 0; i < possible_cpus->len; i++) {
> >>          AcpiMadtGenericCpuInterface *gicc = acpi_data_push(table_data,
> >>                                                             sizeof(*gicc));
> >>          ARMCPU *armcpu = ARM_CPU(qemu_get_cpu(i));
> >>
> >> +        /*
> >> +         * PMU should have been either implemented for all CPUs or not,
> >> +         * so we only get information from the first CPU, which could
> >> +         * represent the others.
> >> +         */
> >> +        if (i == 0) {
> >> +            pmu = arm_feature(&armcpu->env, ARM_FEATURE_PMU);
> >> +        }
> >> +        assert(!armcpu || arm_feature(&armcpu->env, ARM_FEATURE_PMU) == pmu);
> >> +
> >>          gicc->type = ACPI_APIC_GENERIC_CPU_INTERFACE;
> >>          gicc->length = sizeof(*gicc);
> >>          if (vms->gic_version == 2) {
> >> @@ -504,11 +517,19 @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> >>              gicc->gicv_base_address = cpu_to_le64(memmap[VIRT_GIC_VCPU].base);
> >>          }
> >>          gicc->cpu_interface_number = cpu_to_le32(i);
> >> -        gicc->arm_mpidr = cpu_to_le64(armcpu->mp_affinity);
> >> +        gicc->arm_mpidr = cpu_to_le64(possible_cpus->cpus[i].arch_id);
> >>          gicc->uid = cpu_to_le32(i);
> >> -        gicc->flags = cpu_to_le32(ACPI_MADT_GICC_ENABLED);
> >>
> >> -        if (arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
> >> +        /*
> >> +         * ACPI spec says that LAPIC entry for non present CPU may be
> >> +         * omitted from MADT or it must be marked as disabled. Here we
> >> +         * choose to also keep the disabled ones in MADT.
> >> +         */
> >> +        if (possible_cpus->cpus[i].cpu != NULL) {
> >> +            gicc->flags = cpu_to_le32(ACPI_MADT_GICC_ENABLED);
> >> +        }
> >> +
> >> +        if (pmu) {
> >>              gicc->performance_interrupt = cpu_to_le32(PPI(VIRTUAL_PMU_IRQ));
> >>          }
> >>          if (vms->virt) {
> >> --
> >> 2.19.1
Re: [PATCH v3] iommu/iova: put free_iova_mem() outside of spinlock iova_rbtree_lock
by Joerg Roedel 18 May '21

On Mon, Apr 19, 2021 at 03:13:35PM +0800, chenxiang wrote:
> From: Xiang Chen <chenxiang66(a)hisilicon.com>
>
> It is not necessary to put free_iova_mem() inside of spinlock/unlock
> iova_rbtree_lock which only leads to more completion for the spinlock.
> It has a small promote on the performance after the change. And also
> rename private_free_iova() as remove_iova() because the function will not
> free iova after that change.
>
> Signed-off-by: Xiang Chen <chenxiang66(a)hisilicon.com>
> ---
>  drivers/iommu/iova.c | 18 +++++++++++-------
>  1 file changed, 11 insertions(+), 7 deletions(-)

Looks good to me, but I'll wait for Robins opinion before applying.
Re: [RFC PATCH v3 4/4] hw/arm/virt: Parse -smp cluster parameter in virt_smp_parse
by Salil Mehta 18 May '21

> From: wangyanan (Y)
>
> Hi Salil,
>
> On 2021/5/17 23:17, Salil Mehta wrote:
> >> From: Qemu-devel
> >> [mailto:qemu-devel-bounces+salil.mehta=huawei.com@nongnu.org] On Behalf Of
> >> Yanan Wang
> >> Sent: Sunday, May 16, 2021 11:32 AM
> >> To: Peter Maydell <peter.maydell(a)linaro.org>; Paolo Bonzini
> >> <pbonzini(a)redhat.com>; Andrew Jones <drjones(a)redhat.com>; Michael S . Tsirkin
> >> <mst(a)redhat.com>; Igor Mammedov <imammedo(a)redhat.com>; Shannon Zhao
> >> <shannon.zhaosl(a)gmail.com>; qemu-devel(a)nongnu.org; qemu-arm(a)nongnu.org
> >> Cc: Song Bao Hua (Barry Song) <song.bao.hua(a)hisilicon.com>; Philippe
> >> Mathieu-Daudé <philmd(a)redhat.com>; wangyanan (Y) <wangyanan55(a)huawei.com>;
> >> Zengtao (B) <prime.zeng(a)hisilicon.com>; Wanghaibin (D)
> >> <wanghaibin.wang(a)huawei.com>; yuzenghui <yuzenghui(a)huawei.com>; yangyicong
> >> <yangyicong(a)huawei.com>; zhukeqian <zhukeqian1(a)huawei.com>
> >> Subject: [RFC PATCH v3 4/4] hw/arm/virt: Parse -smp cluster parameter in
> >> virt_smp_parse
> >>
> >> There is a separate function virt_smp_parse() in hw/virt/arm.c used
> >> to parse cpu topology for the ARM machines. So add parsing of -smp
> >> cluster parameter in it, then total number of logical cpus will be
> >> calculated like: max_cpus = sockets * clusters * cores * threads.
> >>
> >> Note, we will assume multi-cluster in one socket is not supported
> >> and default the value of clusters to 1, if it's not explicitly
> >> specified in -smp cmdline.
> >>
> >> Signed-off-by: Yanan Wang <wangyanan55(a)huawei.com>
> >> ---
> >>  hw/arm/virt.c | 32 ++++++++++++++++++--------------
> >>  1 file changed, 18 insertions(+), 14 deletions(-)
> >>
> >> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> >> index 7de822e491..678d5ef36c 100644
> >> --- a/hw/arm/virt.c
> >> +++ b/hw/arm/virt.c
> >> @@ -2642,8 +2642,8 @@ static int virt_kvm_type(MachineState *ms, const char *type_str)
> >>   * with the -smp cmdlines when parsing them.
> >>   *
> >>   * We require that at least one of cpus or maxcpus must be provided.
> >> - * Threads will default to 1 if not provided. Sockets and cores must
> >> - * be either both provided or both not.
> >> + * Clusters and threads will default to 1 if they are not provided.
> >> + * Sockets and cores must be either both provided or both not.
> >>   *
> >>   * Note, if neither sockets nor cores are specified, we will calculate
> >>   * all the missing values just like smp_parse() does, but will disable
> >> @@ -2652,15 +2652,18 @@ static int virt_kvm_type(MachineState *ms, const char *type_str)
> >>  static void virt_smp_parse(MachineState *ms, QemuOpts *opts)
> >>  {
> >>      VirtMachineClass *vmc = VIRT_MACHINE_GET_CLASS(ms);
> >> +    VirtMachineState *vms = VIRT_MACHINE(ms);
> >>
> >>      if (opts) {
> >>          unsigned cpus = qemu_opt_get_number(opts, "cpus", 0);
> >>          unsigned maxcpus = qemu_opt_get_number(opts, "maxcpus", 0);
> >>          unsigned sockets = qemu_opt_get_number(opts, "sockets", 0);
> >> +        unsigned clusters = qemu_opt_get_number(opts, "clusters", 0);
> >>          unsigned cores = qemu_opt_get_number(opts, "cores", 0);
> >>          unsigned threads = qemu_opt_get_number(opts, "threads", 0);
> >>
> >> -        /* Default threads to 1 if not provided */
> >> +        /* Default clusters and threads to 1 if not provided */
> >> +        clusters = clusters > 0 ? clusters : 1;
> >>          threads = threads > 0 ? threads : 1;
> >>
> >>          if (cpus == 0 && maxcpus == 0) {
> >> @@ -2676,13 +2679,13 @@ static void virt_smp_parse(MachineState *ms, QemuOpts *opts)
> >>              cores = 1;
> >>              if (cpus == 0) {
> >>                  sockets = 1;
> >> -                cpus = sockets * cores * threads;
> >> +                cpus = sockets * clusters * cores * threads;
> >>              } else {
> >>                  maxcpus = maxcpus > 0 ? maxcpus : cpus;
> >> -                sockets = maxcpus / (cores * threads);
> >> +                sockets = maxcpus / (clusters * cores * threads);
> >>              }
> >>          } else if (sockets > 0 && cores > 0) {
> >> -            cpus = cpus > 0 ? cpus : sockets * cores * threads;
> >> +            cpus = cpus > 0 ? cpus : sockets * clusters * cores * threads;
> >>              maxcpus = maxcpus > 0 ? maxcpus : cpus;
> >>          } else {
> >>              error_report("sockets and cores must be both provided "
> >> @@ -2695,25 +2698,26 @@ static void virt_smp_parse(MachineState *ms, QemuOpts *opts)
> >>              exit(1);
> >>          }
> >>
> >> -        if (sockets * cores * threads < cpus) {
> >> +        if (sockets * clusters * cores * threads < cpus) {
> >>              error_report("cpu topology: "
> >> -                         "sockets (%u) * cores (%u) * threads (%u) < "
> >> -                         "smp_cpus (%u)",
> >> -                         sockets, cores, threads, cpus);
> >> +                         "sockets (%u) * clusters (%u) * cores (%u) * "
> >> +                         "threads (%u) < smp_cpus (%u)",
> >> +                         sockets, clusters, cores, threads, cpus);
> >>              exit(1);
> >>          }
> >>
> >> -        if (sockets * cores * threads != maxcpus) {
> >> +        if (sockets * clusters * cores * threads != maxcpus) {
> >>              error_report("cpu topology: "
> >> -                         "sockets (%u) * cores (%u) * threads (%u) "
> >> -                         "!= maxcpus (%u)",
> >> -                         sockets, cores, threads, maxcpus);
> >> +                         "sockets (%u) * clusters (%u) * cores (%u) * "
> >> +                         "threads (%u) != maxcpus (%u)",
> >> +                         sockets, clusters, cores, threads, maxcpus);
> >>              exit(1);
> >>          }
> >>
> >>          ms->smp.cpus = cpus;
> >>          ms->smp.max_cpus = maxcpus;
> >>          ms->smp.sockets = sockets;
> >> +        vms->smp_clusters = clusters;
> >
> > This variable naming *smp_clusters* looks out-of-sorts. I thought a similar
> > variable *smp_cpus* was destined to be removed for the reason given in below
> > link - a patch by Andrew Jones?
> >
> > Link: https://lists.gnu.org/archive/html/qemu-arm/2020-12/msg00418.html
> >
> > Am I missing anything here?
> The smp_clusters is added in VirtMachineState and nowhere else because
> it's currently only used for ARM. But I think maybe I should also move it to
> CpuTopology structure like [1] is doing to move dies to CpuTopology.

yes, that’s the idea. It is always good to have right place holders so that
the code comprehension/usage (in this case) becomes easy and obvious.

>
> Move clusters to CpuTopology won't affect other architectures that don't
> support it yet, and will also make it easy if they want to in the future.
>
> [1] From Paolo:
> https://patchwork.kernel.org/project/qemu-devel/patch/20210513162901.1310239-10-pbonzini(a)redhat.com/

sure.

>
> Thanks,
> Yanan
> > Salil.
> >
> >>          ms->smp.cores = cores;
> >>          ms->smp.threads = threads;
> >>      }
Re: [PATCH net v8 1/3] net: sched: fix packet stuck problem for lockless qdisc
by Cong Wang 18 May '21

On Fri, May 14, 2021 at 7:25 PM Yunsheng Lin <linyunsheng(a)huawei.com> wrote:
>
> On 2021/5/15 8:17, Jakub Kicinski wrote:
> > On Fri, 14 May 2021 16:57:29 -0700 Cong Wang wrote:
> >> On Fri, May 14, 2021 at 4:39 PM Jakub Kicinski <kuba(a)kernel.org> wrote:
> >>>
> >>> On Fri, 14 May 2021 16:36:16 -0700 Cong Wang wrote:
> >> [...]
> >>>>
> >>>> We have test_and_clear_bit() which is atomic, test_bit()+clear_bit()
> >>>> is not.
> >>>
> >>> It doesn't have to be atomic, right? I asked to split the test because
> >>> test_and_clear is a locked op on x86, test by itself is not.
> >>
> >> It depends on whether you expect the code under the true condition
> >> to run once or multiple times, something like:
> >>
> >> if (test_bit()) {
> >>     clear_bit();
> >>     // this code may run multiple times
> >> }
> >>
> >> With the atomic test_and_clear_bit(), it only runs once:
> >>
> >> if (test_and_clear_bit()) {
> >>     // this code runs once
> >> }
>
> I am not sure if the above really matter when the test and clear
> does not need to be atomic.
>
> In order for the above to happens, the MISSED has to set between
> test and clear, right?

Nope, see the following:

// MISSED bit is already set
CPU0                              CPU1
if (test_bit(MISSED)) { // true
                                  if (test_bit(MISSED)) { // also true
clear_bit(MISSED);
do_something();
                                  clear_bit(MISSED);
                                  do_something();
}
                                  }

Now do_something() is called twice instead of once. This may or may not
be a problem, hence I asked this question.

> >>
> >> This is why __netif_schedule() uses test_and_set_bit() instead of
> >> test_bit()+set_bit().
>
> I think test_and_set_bit() is needed in __netif_schedule() mainly
> because STATE_SCHED is also used to indicate if the qdisc is in
> sd->output_queue, so it has to be atomic.

If you replace the "do_something()" above with __netif_reschedule(),
this is exactly my point. An entry can not be inserted twice into a
list, hence it should never be called twice like above.

Thanks.
Re: [RFC PATCH v3 6/9] hw/arm/virt-acpi-build: Use possible cpus in generation of MADT
by Salil Mehta 18 May '21

> From: Qemu-arm [mailto:qemu-arm-bounces+salil.mehta=huawei.com@nongnu.org]
> On Behalf Of Yanan Wang
> Sent: Sunday, May 16, 2021 11:29 AM
> To: Peter Maydell <peter.maydell(a)linaro.org>; Andrew Jones
> <drjones(a)redhat.com>; Michael S . Tsirkin <mst(a)redhat.com>; Igor Mammedov
> <imammedo(a)redhat.com>; Shannon Zhao <shannon.zhaosl(a)gmail.com>; Alistair
> Francis <alistair.francis(a)wdc.com>; David Gibson
> <david(a)gibson.dropbear.id.au>; qemu-devel(a)nongnu.org; qemu-arm(a)nongnu.org
> Cc: Song Bao Hua (Barry Song) <song.bao.hua(a)hisilicon.com>; zhukeqian
> <zhukeqian1(a)huawei.com>; yangyicong <yangyicong(a)huawei.com>; Zengtao (B)
> <prime.zeng(a)hisilicon.com>; Wanghaibin (D) <wanghaibin.wang(a)huawei.com>;
> yuzenghui <yuzenghui(a)huawei.com>; Paolo Bonzini <pbonzini(a)redhat.com>;
> Philippe Mathieu-Daudé <philmd(a)redhat.com>
> Subject: [RFC PATCH v3 6/9] hw/arm/virt-acpi-build: Use possible cpus in
> generation of MADT
>
> When building ACPI tables regarding CPUs we should always build
> them for the number of possible CPUs, not the number of present
> CPUs. So we create gicc nodes in MADT for possible cpus and then
> ensure only the present CPUs are marked ENABLED. Furthermore, it
> also needed if we are going to support CPU hotplug in the future.

Hi Yanan,

Yes, these changes are part of the QEMU patch-set I floated last year.

Link: https://www.mail-archive.com/qemu-devel@nongnu.org/msg712018.html

Perhaps I am missing something, but how this patch is related to the vcpu
topology support?

Thanks

>
> Co-developed-by: Andrew Jones <drjones(a)redhat.com>
> Signed-off-by: Andrew Jones <drjones(a)redhat.com>
> Co-developed-by: Ying Fang <fangying1(a)huawei.com>
> Signed-off-by: Ying Fang <fangying1(a)huawei.com>
> Co-developed-by: Yanan Wang <wangyanan55(a)huawei.com>
> Signed-off-by: Yanan Wang <wangyanan55(a)huawei.com>
> ---
>  hw/arm/virt-acpi-build.c | 29 +++++++++++++++++++++++++----
>  1 file changed, 25 insertions(+), 4 deletions(-)
>
> diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> index a2d8e87616..4d64aeb865 100644
> --- a/hw/arm/virt-acpi-build.c
> +++ b/hw/arm/virt-acpi-build.c
> @@ -481,6 +481,9 @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>      const int *irqmap = vms->irqmap;
>      AcpiMadtGenericDistributor *gicd;
>      AcpiMadtGenericMsiFrame *gic_msi;
> +    MachineClass *mc = MACHINE_GET_CLASS(vms);
> +    const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(MACHINE(vms));
> +    bool pmu;
>      int i;
>
>      acpi_data_push(table_data, sizeof(AcpiMultipleApicTable));
> @@ -491,11 +494,21 @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>      gicd->base_address = cpu_to_le64(memmap[VIRT_GIC_DIST].base);
>      gicd->version = vms->gic_version;
>
> -    for (i = 0; i < MACHINE(vms)->smp.cpus; i++) {
> +    for (i = 0; i < possible_cpus->len; i++) {
>          AcpiMadtGenericCpuInterface *gicc = acpi_data_push(table_data,
>                                                             sizeof(*gicc));
>          ARMCPU *armcpu = ARM_CPU(qemu_get_cpu(i));
>
> +        /*
> +         * PMU should have been either implemented for all CPUs or not,
> +         * so we only get information from the first CPU, which could
> +         * represent the others.
> +         */
> +        if (i == 0) {
> +            pmu = arm_feature(&armcpu->env, ARM_FEATURE_PMU);
> +        }
> +        assert(!armcpu || arm_feature(&armcpu->env, ARM_FEATURE_PMU) == pmu);
> +
>          gicc->type = ACPI_APIC_GENERIC_CPU_INTERFACE;
>          gicc->length = sizeof(*gicc);
>          if (vms->gic_version == 2) {
> @@ -504,11 +517,19 @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
>              gicc->gicv_base_address = cpu_to_le64(memmap[VIRT_GIC_VCPU].base);
>          }
>          gicc->cpu_interface_number = cpu_to_le32(i);
> -        gicc->arm_mpidr = cpu_to_le64(armcpu->mp_affinity);
> +        gicc->arm_mpidr = cpu_to_le64(possible_cpus->cpus[i].arch_id);
>          gicc->uid = cpu_to_le32(i);
> -        gicc->flags = cpu_to_le32(ACPI_MADT_GICC_ENABLED);
>
> -        if (arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
> +        /*
> +         * ACPI spec says that LAPIC entry for non present CPU may be
> +         * omitted from MADT or it must be marked as disabled. Here we
> +         * choose to also keep the disabled ones in MADT.
> +         */
> +        if (possible_cpus->cpus[i].cpu != NULL) {
> +            gicc->flags = cpu_to_le32(ACPI_MADT_GICC_ENABLED);
> +        }
> +
> +        if (pmu) {
>              gicc->performance_interrupt = cpu_to_le32(PPI(VIRTUAL_PMU_IRQ));
>          }
>          if (vms->virt) {
> --
> 2.19.1
Re: [RFC PATCH v3 4/4] hw/arm/virt: Parse -smp cluster parameter in virt_smp_parse
by Salil Mehta 17 May '21

> From: Qemu-devel
> [mailto:qemu-devel-bounces+salil.mehta=huawei.com@nongnu.org] On Behalf Of
> Yanan Wang
> Sent: Sunday, May 16, 2021 11:32 AM
> To: Peter Maydell <peter.maydell(a)linaro.org>; Paolo Bonzini
> <pbonzini(a)redhat.com>; Andrew Jones <drjones(a)redhat.com>; Michael S . Tsirkin
> <mst(a)redhat.com>; Igor Mammedov <imammedo(a)redhat.com>; Shannon Zhao
> <shannon.zhaosl(a)gmail.com>; qemu-devel(a)nongnu.org; qemu-arm(a)nongnu.org
> Cc: Song Bao Hua (Barry Song) <song.bao.hua(a)hisilicon.com>; Philippe
> Mathieu-Daudé <philmd(a)redhat.com>; wangyanan (Y) <wangyanan55(a)huawei.com>;
> Zengtao (B) <prime.zeng(a)hisilicon.com>; Wanghaibin (D)
> <wanghaibin.wang(a)huawei.com>; yuzenghui <yuzenghui(a)huawei.com>; yangyicong
> <yangyicong(a)huawei.com>; zhukeqian <zhukeqian1(a)huawei.com>
> Subject: [RFC PATCH v3 4/4] hw/arm/virt: Parse -smp cluster parameter in
> virt_smp_parse
>
> There is a separate function virt_smp_parse() in hw/virt/arm.c used
> to parse cpu topology for the ARM machines. So add parsing of -smp
> cluster parameter in it, then total number of logical cpus will be
> calculated like: max_cpus = sockets * clusters * cores * threads.
>
> Note, we will assume multi-cluster in one socket is not supported
> and default the value of clusters to 1, if it's not explicitly
> specified in -smp cmdline.
>
> Signed-off-by: Yanan Wang <wangyanan55(a)huawei.com>
> ---
>  hw/arm/virt.c | 32 ++++++++++++++++++--------------
>  1 file changed, 18 insertions(+), 14 deletions(-)
>
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 7de822e491..678d5ef36c 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -2642,8 +2642,8 @@ static int virt_kvm_type(MachineState *ms, const char *type_str)
>   * with the -smp cmdlines when parsing them.
>   *
>   * We require that at least one of cpus or maxcpus must be provided.
> - * Threads will default to 1 if not provided. Sockets and cores must
> - * be either both provided or both not.
> + * Clusters and threads will default to 1 if they are not provided.
> + * Sockets and cores must be either both provided or both not.
>   *
>   * Note, if neither sockets nor cores are specified, we will calculate
>   * all the missing values just like smp_parse() does, but will disable
> @@ -2652,15 +2652,18 @@ static int virt_kvm_type(MachineState *ms, const char *type_str)
>  static void virt_smp_parse(MachineState *ms, QemuOpts *opts)
>  {
>      VirtMachineClass *vmc = VIRT_MACHINE_GET_CLASS(ms);
> +    VirtMachineState *vms = VIRT_MACHINE(ms);
>
>      if (opts) {
>          unsigned cpus = qemu_opt_get_number(opts, "cpus", 0);
>          unsigned maxcpus = qemu_opt_get_number(opts, "maxcpus", 0);
>          unsigned sockets = qemu_opt_get_number(opts, "sockets", 0);
> +        unsigned clusters = qemu_opt_get_number(opts, "clusters", 0);
>          unsigned cores = qemu_opt_get_number(opts, "cores", 0);
>          unsigned threads = qemu_opt_get_number(opts, "threads", 0);
>
> -        /* Default threads to 1 if not provided */
> +        /* Default clusters and threads to 1 if not provided */
> +        clusters = clusters > 0 ? clusters : 1;
>          threads = threads > 0 ? threads : 1;
>
>          if (cpus == 0 && maxcpus == 0) {
> @@ -2676,13 +2679,13 @@ static void virt_smp_parse(MachineState *ms, QemuOpts *opts)
>              cores = 1;
>              if (cpus == 0) {
>                  sockets = 1;
> -                cpus = sockets * cores * threads;
> +                cpus = sockets * clusters * cores * threads;
>              } else {
>                  maxcpus = maxcpus > 0 ? maxcpus : cpus;
> -                sockets = maxcpus / (cores * threads);
> +                sockets = maxcpus / (clusters * cores * threads);
>              }
>          } else if (sockets > 0 && cores > 0) {
> -            cpus = cpus > 0 ? cpus : sockets * cores * threads;
> +            cpus = cpus > 0 ? cpus : sockets * clusters * cores * threads;
>              maxcpus = maxcpus > 0 ? maxcpus : cpus;
>          } else {
>              error_report("sockets and cores must be both provided "
> @@ -2695,25 +2698,26 @@ static void virt_smp_parse(MachineState *ms, QemuOpts *opts)
>              exit(1);
>          }
>
> -        if (sockets * cores * threads < cpus) {
> +        if (sockets * clusters * cores * threads < cpus) {
>              error_report("cpu topology: "
> -                         "sockets (%u) * cores (%u) * threads (%u) < "
> -                         "smp_cpus (%u)",
> -                         sockets, cores, threads, cpus);
> +                         "sockets (%u) * clusters (%u) * cores (%u) * "
> +                         "threads (%u) < smp_cpus (%u)",
> +                         sockets, clusters, cores, threads, cpus);
>              exit(1);
>          }
>
> -        if (sockets * cores * threads != maxcpus) {
> +        if (sockets * clusters * cores * threads != maxcpus) {
>              error_report("cpu topology: "
> -                         "sockets (%u) * cores (%u) * threads (%u) "
> -                         "!= maxcpus (%u)",
> -                         sockets, cores, threads, maxcpus);
> +                         "sockets (%u) * clusters (%u) * cores (%u) * "
> +                         "threads (%u) != maxcpus (%u)",
> +                         sockets, clusters, cores, threads, maxcpus);
>              exit(1);
>          }
>
>          ms->smp.cpus = cpus;
>          ms->smp.max_cpus = maxcpus;
>          ms->smp.sockets = sockets;
> +        vms->smp_clusters = clusters;

This variable naming *smp_clusters* looks out-of-sorts. I thought a similar
variable *smp_cpus* was destined to be removed for the reason given in below
link - a patch by Andrew Jones?

Link: https://lists.gnu.org/archive/html/qemu-arm/2020-12/msg00418.html

Am I missing anything here?

Salil.

>          ms->smp.cores = cores;
>          ms->smp.threads = threads;
>      }
[PATCH 8/9] tty: hvc_console: Remove the repeated words 'no' and 'from'
by Xiaofei Tan 17 May '21

Remove the repeated words 'no' and 'from', reported by checkpatch.pl.

Signed-off-by: Xiaofei Tan <tanxiaofei(a)huawei.com>
---
 drivers/tty/hvc/hvc_console.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
index f31efeb..b6720b0 100644
--- a/drivers/tty/hvc/hvc_console.c
+++ b/drivers/tty/hvc/hvc_console.c
@@ -293,7 +293,7 @@ int hvc_instantiate(uint32_t vtermno, int index, const struct hv_ops *ops)
 	if (vtermnos[index] != -1)
 		return -1;
 
-	/* make sure no no tty has been registered in this index */
+	/* make sure no tty has been registered in this index */
 	hp = hvc_get_by_index(index);
 	if (hp) {
 		tty_port_put(&hp->port);
@@ -622,7 +622,7 @@ static u32 timeout = MIN_TIMEOUT;
 /*
  * Maximum number of bytes to get from the console driver if hvc_poll is
  * called from driver (and can't sleep). Any more than this and we break
- * and start polling with khvcd. This value was derived from from an OpenBMC
+ * and start polling with khvcd. This value was derived from an OpenBMC
  * console with the OPAL driver that results in about 0.25ms interrupts off
  * latency.
  */
-- 
2.8.1
[PATCH 7/9] tty: hvc_console: Add a blank line after declarations
by Xiaofei Tan 17 May '21

Add a blank line after declarations, reported by checkpatch.pl.

Signed-off-by: Xiaofei Tan <tanxiaofei(a)huawei.com>
---
 drivers/tty/hvc/hvc_console.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
index a61cdf0..f31efeb 100644
--- a/drivers/tty/hvc/hvc_console.c
+++ b/drivers/tty/hvc/hvc_console.c
@@ -922,6 +922,7 @@ struct hvc_struct *hvc_alloc(uint32_t vtermno, int data,
 	/* We wait until a driver actually comes along */
 	if (atomic_inc_not_zero(&hvc_needs_init)) {
 		int err = hvc_init();
+
 		if (err)
 			return ERR_PTR(err);
 	}
-- 
2.8.1
