Adds Zhaoxin CPU support
LeoLiu-oc (33):
  x86/cpu: Create Zhaoxin processors architecture support file
  x86/cpu: Remove redundant cpu_detect_cache_sizes() call
  x86/cpu/centaur: Replace two-condition switch-case with an if statement
  x86/cpu/centaur: Add Centaur family >=7 CPUs initialization support
  x86/cpufeatures: Add Zhaoxin feature bits
  x86/cpu: Add detect extended topology for Zhaoxin CPUs
  ACPI, x86: Add Zhaoxin processors support for NONSTOP TSC
  x86/power: Optimize C3 entry on Centaur CPUs
  x86/acpi/cstate: Add Zhaoxin processors support for cache flush policy in C3
  x86/mce: Add Zhaoxin MCE support
  x86/mce: Add Zhaoxin CMCI support
  x86/mce: Add Zhaoxin LMCE support
  x86/speculation/spectre_v2: Exclude Zhaoxin CPUs from SPECTRE_V2
  x86/speculation/swapgs: Exclude Zhaoxin CPUs from SWAPGS vulnerability
  crypto: x86/crc32c-intel - Don't match some Zhaoxin CPUs
  x86/perf: Add hardware performance events support for Zhaoxin CPU.
  PCI: Add Zhaoxin Vendor ID
  ata: sata_zhaoxin: Add support for Zhaoxin Serial ATA
  xhci: Add Zhaoxin xHCI LPM U1/U2 feature support
  PCI: Add ACS quirk for Zhaoxin multi-function devices
  PCI: Add ACS quirk for Zhaoxin Root/Downstream Ports
  xhci: fix issue of cross page boundary in TRB prefetch
  xhci: Show Zhaoxin XHCI root hub speed correctly
  ALSA: hda: Add support of Zhaoxin SB HDAC
  ALSA: hda: Add support of Zhaoxin NB HDAC
  ALSA: hda: Add support of Zhaoxin NB HDAC codec
  xhci: Adjust the UHCI Controllers bit value
  xhci: fix issue with resume from system Sx state
  x86/apic: Mask IOAPIC entries when disabling the local APIC
  USB: Fix kernel NULL pointer when unbind UHCI from vfio-pci
  iommu/vt-d: Add support for detecting ACPI device in RMRR
  x86/Kconfig: Rename UMIP config parameter
  x86/Kconfig: Drop vendor dependency for X86_UMIP
 MAINTAINERS                                    |    6 +
 arch/x86/Kconfig                               |   15 +-
 arch/x86/Kconfig.cpu                           |   13 +
 arch/x86/crypto/crc32c-intel_glue.c            |    7 +
 arch/x86/events/Makefile                       |    2 +
 arch/x86/events/core.c                         |    4 +
 arch/x86/events/perf_event.h                   |   14 +-
 arch/x86/events/zhaoxin/Makefile               |    3 +
 arch/x86/events/zhaoxin/core.c                 |  612 +++++++++
 arch/x86/events/zhaoxin/uncore.c               | 1101 +++++++++++++++++
 arch/x86/events/zhaoxin/uncore.h               |  308 +++++
 arch/x86/include/asm/cpufeatures.h             |   21 +
 arch/x86/include/asm/disabled-features.h       |    2 +-
 arch/x86/include/asm/processor.h               |    3 +-
 arch/x86/include/asm/umip.h                    |    4 +-
 arch/x86/kernel/Makefile                       |    2 +-
 arch/x86/kernel/acpi/cstate.c                  |   27 +
 arch/x86/kernel/apic/apic.c                    |    7 +
 arch/x86/kernel/cpu/Makefile                   |    1 +
 arch/x86/kernel/cpu/centaur.c                  |   47 +-
 arch/x86/kernel/cpu/common.c                   |    9 +-
 arch/x86/kernel/cpu/mce/core.c                 |   97 +-
 arch/x86/kernel/cpu/mce/intel.c                |   11 +-
 arch/x86/kernel/cpu/mce/internal.h             |    6 +
 arch/x86/kernel/cpu/perfctr-watchdog.c         |    8 +
 arch/x86/kernel/cpu/zhaoxin.c                  |  170 +++
 drivers/acpi/acpi_pad.c                        |    1 +
 drivers/acpi/processor_idle.c                  |    1 +
 drivers/ata/Kconfig                            |    8 +
 drivers/ata/Makefile                           |    1 +
 drivers/ata/sata_zhaoxin.c                     |  384 ++++++
 drivers/iommu/dmar.c                           |   75 +-
 drivers/iommu/intel-iommu.c                    |   24 +-
 drivers/pci/quirks.c                           |   31 +
 drivers/usb/core/hcd-pci.c                     |   10 +
 drivers/usb/host/uhci-pci.c                    |    3 +
 drivers/usb/host/xhci-mem.c                    |   11 +-
 drivers/usb/host/xhci-pci.c                    |   12 +
 drivers/usb/host/xhci.c                        |   53 +-
 drivers/usb/host/xhci.h                        |    2 +
 include/linux/dmar.h                           |   11 +-
 include/linux/pci_ids.h                        |    2 +
 sound/pci/hda/hda_controller.c                 |   17 +-
 sound/pci/hda/hda_controller.h                 |    2 +
 sound/pci/hda/hda_intel.c                      |   68 +-
 sound/pci/hda/patch_hdmi.c                     |   26 +
 .../arch/x86/include/asm/disabled-features.h   |    2 +-
 47 files changed, 3143 insertions(+), 101 deletions(-)
 create mode 100644 arch/x86/events/zhaoxin/Makefile
 create mode 100644 arch/x86/events/zhaoxin/core.c
 create mode 100644 arch/x86/events/zhaoxin/uncore.c
 create mode 100644 arch/x86/events/zhaoxin/uncore.h
 create mode 100644 arch/x86/kernel/cpu/zhaoxin.c
 create mode 100644 drivers/ata/sata_zhaoxin.c
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

mainline inclusion
from mainline-5.2
commit 761fdd5e3327db6c646a09bab5ad48cd42680cd2
category: x86/cpu
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Add x86 architecture support for new Zhaoxin processors. Carve out initialization code needed by Zhaoxin processors into a separate compilation unit.
To identify Zhaoxin CPUs, add a new vendor type, X86_VENDOR_ZHAOXIN, for system recognition.
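As background for how this registration takes effect: the boot code matches each registered cpu_dev against the 12-byte vendor string from CPUID leaf 0, assembled from EBX:EDX:ECX. Below is a minimal user-space sketch of that identification step (illustrative only, not kernel code; Zhaoxin parts report the string "  Shanghai  ", which is what the c_ident entry in the diff matches):

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;
            char vendor[13];

            /* CPUID(0): the vendor string comes back in EBX, EDX, ECX */
            __cpuid(0, eax, ebx, ecx, edx);
            memcpy(vendor + 0, &ebx, 4);
            memcpy(vendor + 4, &edx, 4);
            memcpy(vendor + 8, &ecx, 4);
            vendor[12] = '\0';

            printf("vendor string: \"%s\"\n", vendor);
            return 0;
    }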
Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "hpa@zytor.com" <hpa@zytor.com>
Cc: "gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
Cc: "rjw@rjwysocki.net" <rjw@rjwysocki.net>
Cc: "lenb@kernel.org" <lenb@kernel.org>
Cc: David Wang <DavidWang@zhaoxin.com>
Cc: "Cooper Yan(BJ-RD)" <CooperYan@zhaoxin.com>
Cc: "Qiyuan Wang(BJ-RD)" <QiyuanWang@zhaoxin.com>
Cc: "Herry Yang(BJ-RD)" <HerryYang@zhaoxin.com>
Link: https://lkml.kernel.org/r/01042674b2f741b2aed1f797359bdffb@zhaoxin.com
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 MAINTAINERS                      |   6 ++
 arch/x86/Kconfig.cpu             |  13 +++
 arch/x86/include/asm/processor.h |   3 +-
 arch/x86/kernel/cpu/Makefile     |   1 +
 arch/x86/kernel/cpu/zhaoxin.c    | 167 +++++++++++++++++++++++++++++
 5 files changed, 189 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kernel/cpu/zhaoxin.c
diff --git a/MAINTAINERS b/MAINTAINERS
index ada8fbdd1d71..210fdd54b496 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -16265,6 +16265,12 @@ Q:	https://patchwork.linuxtv.org/project/linux-media/list/
 S:	Maintained
 F:	drivers/media/dvb-frontends/zd1301_demod*

+ZHAOXIN PROCESSOR SUPPORT
+M:	Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	arch/x86/kernel/cpu/zhaoxin.c
+
 ZPOOL COMPRESSED PAGE STORAGE API
 M:	Dan Streetman <ddstreet@ieee.org>
 L:	linux-mm@kvack.org
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index 76e274a0fd0a..d1a51794c587 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -480,3 +480,16 @@ config CPU_SUP_UMC_32
 	  CPU might render the kernel unbootable.

 	  If unsure, say N.
+
+config CPU_SUP_ZHAOXIN
+	default y
+	bool "Support Zhaoxin processors" if PROCESSOR_SELECT
+	help
+	  This enables detection, tunings and quirks for Zhaoxin processors
+
+	  You need this enabled if you want your kernel to run on a
+	  Zhaoxin CPU. Disabling this option on other types of CPUs
+	  makes the kernel a tiny bit smaller. Disabling it on a Zhaoxin
+	  CPU might render the kernel unbootable.
+
+	  If unsure, say N.
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index e4b27128aaea..7a4b529881ff 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -156,7 +156,8 @@ enum cpuid_regs_idx {
 #define X86_VENDOR_TRANSMETA	7
 #define X86_VENDOR_NSC		8
 #define X86_VENDOR_HYGON	9
-#define X86_VENDOR_NUM		10
+#define X86_VENDOR_ZHAOXIN	10
+#define X86_VENDOR_NUM		11

 #define X86_VENDOR_UNKNOWN	0xff

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index e46d718ba4cc..69bba2b1ef08 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -35,6 +35,7 @@ obj-$(CONFIG_CPU_SUP_CYRIX_32)	+= cyrix.o
 obj-$(CONFIG_CPU_SUP_CENTAUR)		+= centaur.o
 obj-$(CONFIG_CPU_SUP_TRANSMETA_32)	+= transmeta.o
 obj-$(CONFIG_CPU_SUP_UMC_32)		+= umc.o
+obj-$(CONFIG_CPU_SUP_ZHAOXIN)		+= zhaoxin.o

 obj-$(CONFIG_INTEL_RDT)	+= intel_rdt.o intel_rdt_rdtgroup.o intel_rdt_monitor.o
 obj-$(CONFIG_INTEL_RDT)	+= intel_rdt_ctrlmondata.o intel_rdt_pseudo_lock.o
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
new file mode 100644
index 000000000000..8e6f2f4b4afe
--- /dev/null
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -0,0 +1,167 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/sched.h>
+#include <linux/sched/clock.h>
+
+#include <asm/cpufeature.h>
+
+#include "cpu.h"
+
+#define MSR_ZHAOXIN_FCR57 0x00001257
+
+#define ACE_PRESENT	(1 << 6)
+#define ACE_ENABLED	(1 << 7)
+#define ACE_FCR		(1 << 7)	/* MSR_ZHAOXIN_FCR */
+
+#define RNG_PRESENT	(1 << 2)
+#define RNG_ENABLED	(1 << 3)
+#define RNG_ENABLE	(1 << 8)	/* MSR_ZHAOXIN_RNG */
+
+#define X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW	0x00200000
+#define X86_VMX_FEATURE_PROC_CTLS_VNMI		0x00400000
+#define X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS	0x80000000
+#define X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC	0x00000001
+#define X86_VMX_FEATURE_PROC_CTLS2_EPT		0x00000002
+#define X86_VMX_FEATURE_PROC_CTLS2_VPID		0x00000020
+
+static void init_zhaoxin_cap(struct cpuinfo_x86 *c)
+{
+	u32 lo, hi;
+
+	/* Test for Extended Feature Flags presence */
+	if (cpuid_eax(0xC0000000) >= 0xC0000001) {
+		u32 tmp = cpuid_edx(0xC0000001);
+
+		/* Enable ACE unit, if present and disabled */
+		if ((tmp & (ACE_PRESENT | ACE_ENABLED)) == ACE_PRESENT) {
+			rdmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+			/* Enable ACE unit */
+			lo |= ACE_FCR;
+			wrmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+			pr_info("CPU: Enabled ACE h/w crypto\n");
+		}
+
+		/* Enable RNG unit, if present and disabled */
+		if ((tmp & (RNG_PRESENT | RNG_ENABLED)) == RNG_PRESENT) {
+			rdmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+			/* Enable RNG unit */
+			lo |= RNG_ENABLE;
+			wrmsr(MSR_ZHAOXIN_FCR57, lo, hi);
+			pr_info("CPU: Enabled h/w RNG\n");
+		}
+
+		/*
+		 * Store Extended Feature Flags as word 5 of the CPU
+		 * capability bit array
+		 */
+		c->x86_capability[CPUID_C000_0001_EDX] = cpuid_edx(0xC0000001);
+	}
+
+	if (c->x86 >= 0x6)
+		set_cpu_cap(c, X86_FEATURE_REP_GOOD);
+
+	cpu_detect_cache_sizes(c);
+}
+
+static void early_init_zhaoxin(struct cpuinfo_x86 *c)
+{
+	if (c->x86 >= 0x6)
+		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+#ifdef CONFIG_X86_64
+	set_cpu_cap(c, X86_FEATURE_SYSENTER32);
+#endif
+	if (c->x86_power & (1 << 8)) {
+		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
+	}
+
+	if (c->cpuid_level >= 0x00000001) {
+		u32 eax, ebx, ecx, edx;
+
+		cpuid(0x00000001, &eax, &ebx, &ecx, &edx);
+		/*
+		 * If HTT (EDX[28]) is set EBX[16:23] contain the number of
+		 * apicids which are reserved per package. Store the resulting
+		 * shift value for the package management code.
+		 */
+		if (edx & (1U << 28))
+			c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
+	}

+}
+
+static void zhaoxin_detect_vmx_virtcap(struct cpuinfo_x86 *c)
+{
+	u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2;
+
+	rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high);
+	msr_ctl = vmx_msr_high | vmx_msr_low;
+
+	if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW)
+		set_cpu_cap(c, X86_FEATURE_TPR_SHADOW);
+	if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_VNMI)
+		set_cpu_cap(c, X86_FEATURE_VNMI);
+	if (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_2ND_CTLS) {
+		rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2,
+		      vmx_msr_low, vmx_msr_high);
+		msr_ctl2 = vmx_msr_high | vmx_msr_low;
+		if ((msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC) &&
+		    (msr_ctl & X86_VMX_FEATURE_PROC_CTLS_TPR_SHADOW))
+			set_cpu_cap(c, X86_FEATURE_FLEXPRIORITY);
+		if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_EPT)
+			set_cpu_cap(c, X86_FEATURE_EPT);
+		if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID)
+			set_cpu_cap(c, X86_FEATURE_VPID);
+	}
+}
+
+static void init_zhaoxin(struct cpuinfo_x86 *c)
+{
+	early_init_zhaoxin(c);
+	init_intel_cacheinfo(c);
+	detect_num_cpu_cores(c);
+#ifdef CONFIG_X86_32
+	detect_ht(c);
+#endif
+
+	if (c->cpuid_level > 9) {
+		unsigned int eax = cpuid_eax(10);
+
+		/*
+		 * Check for version and the number of counters
+		 * Version(eax[7:0]) can't be 0;
+		 * Counters(eax[15:8]) should be greater than 1;
+		 */
+		if ((eax & 0xff) && (((eax >> 8) & 0xff) > 1))
+			set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
+	}
+
+	if (c->x86 >= 0x6)
+		init_zhaoxin_cap(c);
+#ifdef CONFIG_X86_64
+	set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
+#endif
+
+	if (cpu_has(c, X86_FEATURE_VMX))
+		zhaoxin_detect_vmx_virtcap(c);
+}
+
+#ifdef CONFIG_X86_32
+static unsigned int
+zhaoxin_size_cache(struct cpuinfo_x86 *c, unsigned int size)
+{
+	return size;
+}
+#endif
+
+static const struct cpu_dev zhaoxin_cpu_dev = {
+	.c_vendor	= "zhaoxin",
+	.c_ident	= { "  Shanghai  " },
+	.c_early_init	= early_init_zhaoxin,
+	.c_init		= init_zhaoxin,
+#ifdef CONFIG_X86_32
+	.legacy_cache_size = zhaoxin_size_cache,
+#endif
+	.c_x86_vendor	= X86_VENDOR_ZHAOXIN,
+};
+
+cpu_dev_register(zhaoxin_cpu_dev);
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

mainline inclusion
from mainline-5.6
commit 283bab9809786cf41798512f5c1e97f4b679ba96
category: x86/cpu
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Both functions call init_intel_cacheinfo() which computes L2 and L3 cache sizes from CPUID(4). But then they also call cpu_detect_cache_sizes() a bit later which computes ->x86_tlbsize and L2 size from CPUID(80000006).
However, the latter call is not needed because
- on these CPUs, CPUID(80000006).EBX for ->x86_tlbsize is reserved
- CPUID(80000006).ECX for the L2 size has the same result as CPUID(4)
Therefore, remove the latter call to simplify the code.
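For illustration, here is a user-space sketch (not kernel code) that derives the L2 size both ways. The bit layouts are from the SDM; treating subleaf 2 of CPUID(4) as the unified L2 cache is an assumption that holds on the CPUs discussed here:

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* CPUID(0x80000006).ECX[31:16]: L2 size in KB */
            __cpuid(0x80000006, eax, ebx, ecx, edx);
            printf("L2 from leaf 0x80000006: %u KB\n", ecx >> 16);

            /* CPUID(4), subleaf 2: deterministic cache parameters for the L2 */
            __cpuid_count(4, 2, eax, ebx, ecx, edx);
            if (eax & 0x1f) {                       /* cache type != null */
                    unsigned int ways  = ((ebx >> 22) & 0x3ff) + 1;
                    unsigned int parts = ((ebx >> 12) & 0x3ff) + 1;
                    unsigned int line  = (ebx & 0xfff) + 1;
                    unsigned int sets  = ecx + 1;

                    printf("L2 from leaf 4: %u KB\n",
                           ways * parts * line * sets / 1024);
            }
            return 0;
    }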
[ bp: Rewrite commit message. ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/1579075257-6985-1-git-send-email-TonyWWang-oc@zhao....
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/kernel/cpu/centaur.c | 2 --
 arch/x86/kernel/cpu/zhaoxin.c | 2 --
 2 files changed, 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index 14433ff5b828..b98529e50d6f 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -71,8 +71,6 @@ static void init_c3(struct cpuinfo_x86 *c)
 		c->x86_cache_alignment = c->x86_clflush_size * 2;
 		set_cpu_cap(c, X86_FEATURE_REP_GOOD);
 	}
-
-	cpu_detect_cache_sizes(c);
 }

 enum {
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
index 8e6f2f4b4afe..452fd0a6bc61 100644
--- a/arch/x86/kernel/cpu/zhaoxin.c
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -58,8 +58,6 @@ static void init_zhaoxin_cap(struct cpuinfo_x86 *c)

 	if (c->x86 >= 0x6)
 		set_cpu_cap(c, X86_FEATURE_REP_GOOD);
-
-	cpu_detect_cache_sizes(c);
 }

 static void early_init_zhaoxin(struct cpuinfo_x86 *c)
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

mainline inclusion
from mainline-5.9
commit 8687bdc04128b2bd16faaae11db10128ad0da7b8
category: x86/cpu
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Use normal if statements instead of a two-condition switch-case.
[ bp: Massage commit message. ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/1599562666-31351-2-git-send-email-TonyWWang-oc@zha...
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/kernel/cpu/centaur.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index b98529e50d6f..b3be281334e4 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -96,18 +96,14 @@ enum {

 static void early_init_centaur(struct cpuinfo_x86 *c)
 {
-	switch (c->x86) {
 #ifdef CONFIG_X86_32
-	case 5:
-		/* Emulate MTRRs using Centaur's MCR. */
+	/* Emulate MTRRs using Centaur's MCR. */
+	if (c->x86 == 5)
 		set_cpu_cap(c, X86_FEATURE_CENTAUR_MCR);
-		break;
 #endif
-	case 6:
-		if (c->x86_model >= 0xf)
-			set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
-		break;
-	}
+	if (c->x86 == 6 && c->x86_model >= 0xf)
+		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+
 #ifdef CONFIG_X86_64
 	set_cpu_cap(c, X86_FEATURE_SYSENTER32);
 #endif
@@ -176,9 +172,8 @@ static void init_centaur(struct cpuinfo_x86 *c)
 			set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
 	}

-	switch (c->x86) {
 #ifdef CONFIG_X86_32
-	case 5:
+	if (c->x86 == 5) {
 		switch (c->x86_model) {
 		case 4:
 			name = "C6";
@@ -238,12 +233,10 @@ static void init_centaur(struct cpuinfo_x86 *c)
 			c->x86_cache_size = (cc>>24)+(dd>>24);
 		}
 		sprintf(c->x86_model_id, "WinChip %s", name);
-		break;
+	}
 #endif
-	case 6:
+	if (c->x86 == 6)
 		init_c3(c);
-		break;
-	}
 #ifdef CONFIG_X86_64
 	set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
 #endif
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

mainline inclusion
from mainline-5.9
commit 33b4711df4c1b3aec7c267c60fc24abccfadd40c
category: x86/cpu
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Add Centaur family >=7 CPUs specific initialization support.
Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/1599562666-31351-3-git-send-email-TonyWWang-oc@zha...
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/kernel/cpu/centaur.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index b3be281334e4..8735be464bc1 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -71,6 +71,9 @@ static void init_c3(struct cpuinfo_x86 *c)
 		c->x86_cache_alignment = c->x86_clflush_size * 2;
 		set_cpu_cap(c, X86_FEATURE_REP_GOOD);
 	}
+
+	if (c->x86 >= 7)
+		set_cpu_cap(c, X86_FEATURE_REP_GOOD);
 }

 enum {
@@ -101,7 +104,8 @@ static void early_init_centaur(struct cpuinfo_x86 *c)
 	if (c->x86 == 5)
 		set_cpu_cap(c, X86_FEATURE_CENTAUR_MCR);
 #endif
-	if (c->x86 == 6 && c->x86_model >= 0xf)
+	if ((c->x86 == 6 && c->x86_model >= 0xf) ||
+	    (c->x86 >= 7))
 		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);

 #ifdef CONFIG_X86_64
@@ -235,7 +239,7 @@ static void init_centaur(struct cpuinfo_x86 *c)
 		sprintf(c->x86_model_id, "WinChip %s", name);
 	}
 #endif
-	if (c->x86 == 6)
+	if (c->x86 == 6 || c->x86 >= 7)
 		init_c3(c);
 #ifdef CONFIG_X86_64
 	set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

zhaoxin inclusion
category: feature
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Add Zhaoxin feature bits on Zhaoxin CPUs.
The patch is scheduled to be submitted to the kernel mainline in 2021.
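These flags populate capability word 5, which init_zhaoxin_cap() fills from CPUID(0xC0000001).EDX (see the first patch in this series). Below is a user-space sketch probing two of the new present/enabled pairs (illustrative only; __cpuid() is the GCC/clang cpuid.h macro, used here because these leaves sit outside the standard ranges):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* The Centaur/Zhaoxin extended leaves start at 0xC0000000 */
            __cpuid(0xC0000000, eax, ebx, ecx, edx);
            if (eax < 0xC0000001)
                    return 1;               /* leaf not implemented */

            __cpuid(0xC0000001, eax, ebx, ecx, edx);
            printf("SM2:     present=%u enabled=%u\n",
                   (edx >> 0) & 1, (edx >> 1) & 1);
            printf("SM3/SM4: present=%u enabled=%u\n",
                   (edx >> 4) & 1, (edx >> 5) & 1);
            return 0;
    }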
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/include/asm/cpufeatures.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index f7f9604b10cc..48535113efa6 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -145,8 +145,12 @@
 #define X86_FEATURE_HYPERVISOR	( 4*32+31) /* Running on a hypervisor */

 /* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */
+#define X86_FEATURE_SM2		( 5*32+ 0) /* SM2 present */
+#define X86_FEATURE_SM2_EN	( 5*32+ 1) /* SM2 enabled */
 #define X86_FEATURE_XSTORE	( 5*32+ 2) /* "rng" RNG present (xstore) */
 #define X86_FEATURE_XSTORE_EN	( 5*32+ 3) /* "rng_en" RNG enabled */
+#define X86_FEATURE_CCS		( 5*32+ 4) /* "sm3 sm4" present */
+#define X86_FEATURE_CCS_EN	( 5*32+ 5) /* "sm3_en sm4_en" enabled */
 #define X86_FEATURE_XCRYPT	( 5*32+ 6) /* "ace" on-CPU crypto (xcrypt) */
 #define X86_FEATURE_XCRYPT_EN	( 5*32+ 7) /* "ace_en" on-CPU crypto enabled */
 #define X86_FEATURE_ACE2	( 5*32+ 8) /* Advanced Cryptography Engine v2 */
@@ -155,6 +159,23 @@
 #define X86_FEATURE_PHE_EN	( 5*32+11) /* PHE enabled */
 #define X86_FEATURE_PMM	( 5*32+12) /* PadLock Montgomery Multiplier */
 #define X86_FEATURE_PMM_EN	( 5*32+13) /* PMM enabled */
+#define X86_FEATURE_ZX_FMA	( 5*32+15) /* FMA supported */
+#define X86_FEATURE_PARALLAX	( 5*32+16) /* Adaptive P-state control present */
+#define X86_FEATURE_PARALLAX_EN	( 5*32+17) /* Adaptive P-state control enabled */
+#define X86_FEATURE_OVERSTRESS	( 5*32+18) /* Overstress feature for auto overclock present */
+#define X86_FEATURE_OVERSTRESS_EN ( 5*32+19) /* Overstress feature for auto overclock enabled */
+#define X86_FEATURE_TM3		( 5*32+20) /* Thermal Monitor 3 present */
+#define X86_FEATURE_TM3_EN	( 5*32+21) /* Thermal Monitor 3 enabled */
+#define X86_FEATURE_RNG2	( 5*32+22) /* 2nd generation of RNG present */
+#define X86_FEATURE_RNG2_EN	( 5*32+23) /* 2nd generation of RNG enabled */
+#define X86_FEATURE_SEM		( 5*32+24) /* SME feature present */
+#define X86_FEATURE_PHE2	( 5*32+25) /* SHA384 and SHA512 present */
+#define X86_FEATURE_PHE2_EN	( 5*32+26) /* SHA384 and SHA512 enabled */
+#define X86_FEATURE_XMODX	( 5*32+27) /* "rsa" XMODEXP and MONTMUL2 instructions are present */
+#define X86_FEATURE_XMODX_EN	( 5*32+28) /* "rsa_en" XMODEXP and MONTMUL2 instructions are enabled */
+#define X86_FEATURE_VEX		( 5*32+29) /* VEX instructions are present */
+#define X86_FEATURE_VEX_EN	( 5*32+30) /* VEX instructions are enabled */
+#define X86_FEATURE_STK		( 5*32+31) /* STK is present */

 /* More extended AMD flags: CPUID level 0x80000001, ECX, word 6 */
 #define X86_FEATURE_LAHF_LM	( 6*32+ 0) /* LAHF/SAHF in long mode */
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

zhaoxin inclusion
category: feature
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Detect the extended topology information of Zhaoxin CPUs if available.
The patch is scheduled to be submitted to the kernel mainline in 2021.
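detect_extended_topology() walks CPUID leaf 0xB, which describes one topology level (SMT, core, ...) per subleaf until a level type of 0 is returned. Below is a user-space sketch of the same walk (illustrative, not the kernel's implementation):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx, level;

            for (level = 0; ; level++) {
                    __cpuid_count(0xb, level, eax, ebx, ecx, edx);

                    unsigned int type  = (ecx >> 8) & 0xff; /* 1 = SMT, 2 = core */
                    unsigned int shift = eax & 0x1f;        /* x2APIC ID shift   */

                    if (!type)
                            break;                          /* no more levels */
                    printf("level %u: type=%u shift=%u logical CPUs=%u\n",
                           level, type, shift, ebx & 0xffff);
            }
            return 0;
    }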
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/kernel/cpu/centaur.c | 20 +++++++++++++++++++-
 arch/x86/kernel/cpu/zhaoxin.c |  7 ++++++-
 2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index 8735be464bc1..608b8dfa119f 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -115,6 +115,21 @@ static void early_init_centaur(struct cpuinfo_x86 *c)
 		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
 		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
 	}
+
+	if (c->cpuid_level >= 0x00000001) {
+		u32 eax, ebx, ecx, edx;
+
+		cpuid(0x00000001, &eax, &ebx, &ecx, &edx);
+		/*
+		 * If HTT (EDX[28]) is set EBX[16:23] contain the number of
+		 * apicids which are reserved per package. Store the resulting
+		 * shift value for the package management code.
+		 */
+		if (edx & (1U << 28))
+			c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
+	}
+	if (detect_extended_topology_early(c) < 0)
+		detect_ht_early(c);
 }

 static void centaur_detect_vmx_virtcap(struct cpuinfo_x86 *c)
@@ -158,11 +173,14 @@ static void init_centaur(struct cpuinfo_x86 *c)
 	clear_cpu_cap(c, 0*32+31);
 #endif
 	early_init_centaur(c);
+	detect_extended_topology(c);
 	init_intel_cacheinfo(c);
-	detect_num_cpu_cores(c);
+	if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) {
+		detect_num_cpu_cores(c);
 #ifdef CONFIG_X86_32
 	detect_ht(c);
 #endif
+	}

 	if (c->cpuid_level > 9) {
 		unsigned int eax = cpuid_eax(10);
diff --git a/arch/x86/kernel/cpu/zhaoxin.c b/arch/x86/kernel/cpu/zhaoxin.c
index 452fd0a6bc61..e4ed34361a1f 100644
--- a/arch/x86/kernel/cpu/zhaoxin.c
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -85,6 +85,8 @@ static void early_init_zhaoxin(struct cpuinfo_x86 *c)
 			c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
 	}

+	if (detect_extended_topology_early(c) < 0)
+		detect_ht_early(c);
 }

 static void zhaoxin_detect_vmx_virtcap(struct cpuinfo_x86 *c)
@@ -115,11 +117,14 @@ static void zhaoxin_detect_vmx_virtcap(struct cpuinfo_x86 *c)
 static void init_zhaoxin(struct cpuinfo_x86 *c)
 {
 	early_init_zhaoxin(c);
+	detect_extended_topology(c);
 	init_intel_cacheinfo(c);
-	detect_num_cpu_cores(c);
+	if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) {
+		detect_num_cpu_cores(c);
 #ifdef CONFIG_X86_32
 	detect_ht(c);
 #endif
+	}

 	if (c->cpuid_level > 9) {
 		unsigned int eax = cpuid_eax(10);
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

mainline inclusion
from mainline-5.3
commit 773b2f30a3fc026f3ed121a8b945b0ae19b64ec5
category: ACPI
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Zhaoxin CPUs have NONSTOP TSC feature, so enable the ACPI driver support for it.
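NONSTOP (invariant) TSC is advertised in CPUID(0x80000007).EDX[8]; this is the same bit that early_init_zhaoxin() tests via c->x86_power before setting X86_FEATURE_NONSTOP_TSC. A minimal user-space check (illustrative):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* CPUID(0x80000007).EDX[8]: invariant TSC (NONSTOP_TSC) */
            __cpuid(0x80000007, eax, ebx, ecx, edx);
            printf("invariant TSC: %s\n", (edx & (1u << 8)) ? "yes" : "no");
            return 0;
    }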
Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "hpa@zytor.com" <hpa@zytor.com>
Cc: "gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
Cc: "rjw@rjwysocki.net" <rjw@rjwysocki.net>
Cc: "lenb@kernel.org" <lenb@kernel.org>
Cc: David Wang <DavidWang@zhaoxin.com>
Cc: "Cooper Yan(BJ-RD)" <CooperYan@zhaoxin.com>
Cc: "Qiyuan Wang(BJ-RD)" <QiyuanWang@zhaoxin.com>
Cc: "Herry Yang(BJ-RD)" <HerryYang@zhaoxin.com>
Link: https://lkml.kernel.org/r/d1cfd937dabc44518d42038b55522c53@zhaoxin.com
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 drivers/acpi/acpi_pad.c       | 1 +
 drivers/acpi/processor_idle.c | 1 +
 2 files changed, 2 insertions(+)
diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c
index a47676a55b84..c06306e6ac92 100644
--- a/drivers/acpi/acpi_pad.c
+++ b/drivers/acpi/acpi_pad.c
@@ -73,6 +73,7 @@ static void power_saving_mwait_init(void)
 	case X86_VENDOR_HYGON:
 	case X86_VENDOR_AMD:
 	case X86_VENDOR_INTEL:
+	case X86_VENDOR_ZHAOXIN:
 		/*
 		 * AMD Fam10h TSC will tick in all
 		 * C/P/S0/S1 states when this bit is set.
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index b2131c4ea124..6336f956a144 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -209,6 +209,7 @@ static void tsc_check_state(int state)
 	case X86_VENDOR_AMD:
 	case X86_VENDOR_INTEL:
 	case X86_VENDOR_CENTAUR:
+	case X86_VENDOR_ZHAOXIN:
 		/*
 		 * AMD Fam10h TSC will tick in all
 		 * C/P/S0/S1 states when this bit is set.
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

mainline inclusion
from mainline-5.2
commit 987ddbe4870b53623d76ac64044c55a13e368113
category: x86/power
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
For new Centaur CPUs the ucode will take care of the preservation of cache coherence between CPU cores in C-states regardless of how deep the C-states are. So, it is not necessary to flush the caches in software before entering C3. This useless operation would cause a performance drop for the cores which share some caches with the idling core.
Signed-off-by: David Wang <davidwang@zhaoxin.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Pavel Machek <pavel@ucw.cz>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: brucechang@via-alliance.com
Cc: cooperyan@zhaoxin.com
Cc: len.brown@intel.com
Cc: linux-pm@kernel.org
Cc: qiyuanwang@zhaoxin.com
Cc: rjw@rjwysocki.net
Cc: timguo@zhaoxin.com
Link: http://lkml.kernel.org/r/1545900110-2757-1-git-send-email-davidwang@zhaoxin....
[ Tidy up the comment. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/kernel/acpi/cstate.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
diff --git a/arch/x86/kernel/acpi/cstate.c b/arch/x86/kernel/acpi/cstate.c
index 92539a1c3e31..45745ecaa624 100644
--- a/arch/x86/kernel/acpi/cstate.c
+++ b/arch/x86/kernel/acpi/cstate.c
@@ -51,6 +51,18 @@ void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flags,
 	if (c->x86_vendor == X86_VENDOR_INTEL &&
 	    (c->x86 > 0xf || (c->x86 == 6 && c->x86_model >= 0x0f)))
 		flags->bm_control = 0;
+	/*
+	 * For all recent Centaur CPUs, the ucode will make sure that each
+	 * core can keep cache coherence with each other while entering C3
+	 * type state. So, set bm_check to 1 to indicate that the kernel
+	 * doesn't need to execute a cache flush operation (WBINVD) when
+	 * entering C3 type state.
+	 */
+	if (c->x86_vendor == X86_VENDOR_CENTAUR) {
+		if (c->x86 > 6 || (c->x86 == 6 && c->x86_model == 0x0f &&
+		    c->x86_stepping >= 0x0e))
+			flags->bm_check = 1;
+	}
 }
 EXPORT_SYMBOL(acpi_processor_power_init_bm_check);
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

mainline inclusion
from mainline-5.2
commit f8c0e061cb83bd528ff0843e717bcebc846d4838
category: x86/acpi/cstate
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Same as Intel, Zhaoxin MP CPUs support C3 shared cache, and on all recent Zhaoxin platforms ARB_DISABLE is a nop. So set the related flags correctly, in the same way Intel does.
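For context, here is a condensed, paraphrased sketch of how the ACPI idle path consumes these two flags on C3 entry (simplified from drivers/acpi/processor_idle.c; the real code also tracks bus-master activity and serializes ARB_DISABLE, so treat this as a model, not the actual function):

    /* Paraphrased model of the acpi_idle C3 entry policy; not verbatim. */
    static void enter_c3(struct acpi_processor *pr, struct acpi_processor_cx *cx)
    {
            /* bm_check == 1: ucode keeps caches coherent, skip the WBINVD */
            if (!pr->flags.bm_check)
                    ACPI_FLUSH_CPU_CACHE();

            /* bm_control == 0: ARB_DISABLE is a nop, don't touch it */
            if (pr->flags.bm_control)
                    acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 1);

            acpi_idle_do_entry(cx);         /* actually enter the C-state */

            if (pr->flags.bm_control)
                    acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 0);
    }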
Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "hpa@zytor.com" <hpa@zytor.com>
Cc: "gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
Cc: "rjw@rjwysocki.net" <rjw@rjwysocki.net>
Cc: "lenb@kernel.org" <lenb@kernel.org>
Cc: David Wang <DavidWang@zhaoxin.com>
Cc: "Cooper Yan(BJ-RD)" <CooperYan@zhaoxin.com>
Cc: "Qiyuan Wang(BJ-RD)" <QiyuanWang@zhaoxin.com>
Cc: "Herry Yang(BJ-RD)" <HerryYang@zhaoxin.com>
Link: https://lkml.kernel.org/r/a370503660994669991a7f7cda7c5e98@zhaoxin.com
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/kernel/acpi/cstate.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
diff --git a/arch/x86/kernel/acpi/cstate.c b/arch/x86/kernel/acpi/cstate.c
index 45745ecaa624..5eebe05b00fb 100644
--- a/arch/x86/kernel/acpi/cstate.c
+++ b/arch/x86/kernel/acpi/cstate.c
@@ -63,6 +63,21 @@ void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flags,
 		    c->x86_stepping >= 0x0e))
 			flags->bm_check = 1;
 	}
+
+	if (c->x86_vendor == X86_VENDOR_ZHAOXIN) {
+		/*
+		 * All Zhaoxin CPUs that support C3 share cache.
+		 * And caches should not be flushed by software while
+		 * entering C3 type state.
+		 */
+		flags->bm_check = 1;
+		/*
+		 * On all recent Zhaoxin platforms, ARB_DISABLE is a nop.
+		 * So, set bm_control to zero to indicate that ARB_DISABLE
+		 * is not required while entering C3 type state.
+		 */
+		flags->bm_control = 0;
+	}
 }
 EXPORT_SYMBOL(acpi_processor_power_init_bm_check);
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

mainline inclusion
from mainline-5.5
commit 6e898d2bf67a82df0aa0c955adc9278faba9a635
category: x86/mce
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Add support for more Zhaoxin CPUs.
All newer Zhaoxin CPUs are compatible with Intel's Machine-Check Architecture, so add support for them.
[ bp: Reflow comment in vendor_disable_error_reporting() and massage commit message. ]
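The Intel-compatible capabilities this relies on are advertised in IA32_MCG_CAP (MSR 0x179). Below is a user-space sketch that dumps the bits relevant to this series through the msr driver (assumes `modprobe msr` and root privileges; illustrative only):

    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            uint64_t cap;
            int fd = open("/dev/cpu/0/msr", O_RDONLY);

            if (fd < 0 || pread(fd, &cap, 8, 0x179) != 8) {
                    perror("IA32_MCG_CAP");
                    return 1;
            }
            printf("banks: %llu, CMCI: %llu, LMCE: %llu\n",
                   (unsigned long long)(cap & 0xff),        /* MCG_BANKCNT */
                   (unsigned long long)((cap >> 10) & 1),   /* MCG_CMCI_P  */
                   (unsigned long long)((cap >> 27) & 1));  /* MCG_LMCE_P  */
            return 0;
    }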
Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: CooperYan@zhaoxin.com
Cc: DavidWang@zhaoxin.com
Cc: HerryYang@zhaoxin.com
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: QiyuanWang@zhaoxin.com
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/1568787573-1297-2-git-send-email-TonyWWang-oc@zhao...
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/kernel/cpu/mce/core.c | 42 ++++++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 5221c49d335e..9acfe70e8c00 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -473,8 +473,10 @@ int mce_usable_address(struct mce *m)
 	if (!(m->status & MCI_STATUS_ADDRV))
 		return 0;

-	/* Checks after this one are Intel-specific: */
-	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+	/* Checks after this one are Intel/Zhaoxin-specific: */
+	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL &&
+	    boot_cpu_data.x86_vendor != X86_VENDOR_ZHAOXIN &&
+	    boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR)
 		return 1;

 	if (!(m->status & MCI_STATUS_MISCV))
@@ -492,10 +494,14 @@ EXPORT_SYMBOL_GPL(mce_usable_address);

 bool mce_is_memory_error(struct mce *m)
 {
-	if (m->cpuvendor == X86_VENDOR_AMD ||
-	    m->cpuvendor == X86_VENDOR_HYGON) {
+	switch (m->cpuvendor) {
+	case X86_VENDOR_AMD:
+	case X86_VENDOR_HYGON:
 		return amd_mce_is_memory_error(m);
-	} else if (m->cpuvendor == X86_VENDOR_INTEL) {
+
+	case X86_VENDOR_INTEL:
+	case X86_VENDOR_ZHAOXIN:
+	case X86_VENDOR_CENTAUR:
 		/*
 		 * Intel SDM Volume 3B - 15.9.2 Compound Error Codes
 		 *
@@ -512,9 +518,10 @@ bool mce_is_memory_error(struct mce *m)
 		return (m->status & 0xef80) == BIT(7) ||
 		       (m->status & 0xef00) == BIT(8) ||
 		       (m->status & 0xeffc) == 0xc;
-	}

-	return false;
+	default:
+		return false;
+	}
 }
 EXPORT_SYMBOL_GPL(mce_is_memory_error);

@@ -1658,6 +1665,19 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
 		if (c->x86 == 6 && c->x86_model == 45)
 			quirk_no_way_out = quirk_sandybridge_ifu;
 	}
+
+	if (c->x86_vendor == X86_VENDOR_ZHAOXIN ||
+	    c->x86_vendor == X86_VENDOR_CENTAUR) {
+		/*
+		 * All newer Zhaoxin CPUs support MCE broadcasting. Enable
+		 * synchronization with a one second timeout.
+		 */
+		if (c->x86 > 6 || (c->x86_model == 0x19 || c->x86_model == 0x1f)) {
+			if (cfg->monarch_timeout < 0)
+				cfg->monarch_timeout = USEC_PER_SEC;
+		}
+	}
+
 	if (cfg->monarch_timeout < 0)
 		cfg->monarch_timeout = 0;
 	if (cfg->bootlog != 0)
@@ -1963,15 +1983,17 @@ static void mce_disable_error_reporting(void)
 static void vendor_disable_error_reporting(void)
 {
 	/*
-	 * Don't clear on Intel, AMD or Hygon CPUs. Some of these MSRs are
-	 * socket-wide.
+	 * Don't clear on Intel, AMD, Hygon or Zhaoxin CPUs. Some of these
+	 * MSRs are socket-wide.
 	 * Disabling them for just a single offlined CPU is bad, since it will
 	 * inhibit reporting for all shared resources on the socket like the
 	 * last level cache (LLC), the integrated memory controller (iMC), etc.
 	 */
 	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL ||
 	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON ||
-	    boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+	    boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+	    boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN ||
+	    boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR)
 		return;

 	mce_disable_error_reporting();
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

mainline inclusion
from mainline-5.5
commit 5a3d56a034be9e8e87a6cb9ed3f2928184db1417
category: x86/mce
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Add support for more Zhaoxin CPUs.
All newer Zhaoxin CPUs support CMCI and are compatible with Intel's Machine-Check Architecture. Add that support for Zhaoxin CPUs.
[ bp: Massage comments and export intel_init_cmci(). ]
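For reference, CMCI is armed per MCA bank by setting CMCI_EN (bit 30) in IA32_MCx_CTL2 (MSR 0x280 + bank) and checking whether the bit sticks. Below is a condensed kernel-style sketch modeled on cmci_discover() in mce/intel.c (locking, bank-ownership tracking and threshold programming are omitted, so this is a model rather than the real function):

    #define MSR_IA32_MCx_CTL2(b)    (0x280 + (b))
    #define MCI_CTL2_CMCI_EN        (1ULL << 30)

    static void cmci_try_enable_bank(int bank)
    {
            u64 val;

            rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
            wrmsrl(MSR_IA32_MCx_CTL2(bank), val | MCI_CTL2_CMCI_EN);
            rdmsrl(MSR_IA32_MCx_CTL2(bank), val);

            /* If the bit stuck, this bank supports CMCI on this CPU */
            if (val & MCI_CTL2_CMCI_EN)
                    pr_debug("CMCI enabled on bank %d\n", bank);
    }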
Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: CooperYan@zhaoxin.com
Cc: DavidWang@zhaoxin.com
Cc: HerryYang@zhaoxin.com
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: QiyuanWang@zhaoxin.com
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/1568787573-1297-4-git-send-email-TonyWWang-oc@zhao...
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/kernel/cpu/mce/core.c     | 30 +++++++++++++++++++-----------
 arch/x86/kernel/cpu/mce/intel.c    |  7 +++++--
 arch/x86/kernel/cpu/mce/internal.h |  2 ++
 3 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 9acfe70e8c00..2e26423c48c9 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1726,19 +1726,26 @@ static void __mcheck_cpu_init_early(struct cpuinfo_x86 *c)
 	}
 }

-static void mce_centaur_feature_init(struct cpuinfo_x86 *c)
+static void mce_zhaoxin_feature_init(struct cpuinfo_x86 *c)
 {
 	struct mca_config *cfg = &mca_cfg;
-
-	/*
-	 * All newer Centaur CPUs support MCE broadcasting. Enable
-	 * synchronization with a one second timeout.
-	 */
-	if ((c->x86 == 6 && c->x86_model == 0xf && c->x86_stepping >= 0xe) ||
-	     c->x86 > 6) {
-		if (cfg->monarch_timeout < 0)
-			cfg->monarch_timeout = USEC_PER_SEC;
+
+	/*
+	 * These CPUs have MCA bank 8 which reports only one error type called
+	 * SVAD (System View Address Decoder). The reporting of that error is
+	 * controlled by IA32_MC8.CTL.0.
+	 *
+	 * If enabled, prefetching on these CPUs will cause SVAD MCE when
+	 * virtual machines start and result in a system panic. Always disable
+	 * bank 8 SVAD error by default.
+	 */
+	if ((c->x86 == 7 && c->x86_model == 0x1b) ||
+	    (c->x86_model == 0x19 || c->x86_model == 0x1f)) {
+		if (cfg->banks > 8)
+			mce_banks[8].ctl = 0;
 	}
+
+	intel_init_cmci();
+	mce_adjust_timer = cmci_intel_adjust_timer;
 }

 static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
@@ -1759,7 +1766,8 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
 		break;

 	case X86_VENDOR_CENTAUR:
-		mce_centaur_feature_init(c);
+	case X86_VENDOR_ZHAOXIN:
+		mce_zhaoxin_feature_init(c);
 		break;

 	default:
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index 693c8cfac75d..6a220c999a01 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -85,8 +85,11 @@ static int cmci_supported(int *banks)
 	 * initialization is vendor keyed and this
 	 * makes sure none of the backdoors are entered otherwise.
 	 */
-	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL &&
+	    boot_cpu_data.x86_vendor != X86_VENDOR_ZHAOXIN &&
+	    boot_cpu_data.x86_vendor != X86_VENDOR_CENTAUR)
 		return 0;
+
 	if (!boot_cpu_has(X86_FEATURE_APIC) || lapic_get_maxlvt() < 6)
 		return 0;
 	rdmsrl(MSR_IA32_MCG_CAP, cap);
@@ -423,7 +426,7 @@ void cmci_disable_bank(int bank)
 	raw_spin_unlock_irqrestore(&cmci_discover_lock, flags);
 }

-static void intel_init_cmci(void)
+void intel_init_cmci(void)
 {
 	int banks;

diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index ceb67cd5918f..99d73d18f2c4 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -52,11 +52,13 @@ unsigned long cmci_intel_adjust_timer(unsigned long interval);
 bool mce_intel_cmci_poll(void);
 void mce_intel_hcpu_update(unsigned long cpu);
 void cmci_disable_bank(int bank);
+void intel_init_cmci(void);
 #else
 # define cmci_intel_adjust_timer mce_adjust_timer_default
 static inline bool mce_intel_cmci_poll(void) { return false; }
 static inline void mce_intel_hcpu_update(unsigned long cpu) { }
 static inline void cmci_disable_bank(int bank) { }
+static inline void intel_init_cmci(void) { }
 #endif

 void mce_timer_kick(unsigned long interval);
Hi LeoLiu,
On 2021/4/7 14:12, Cheng Jian wrote:
> From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
>
> mainline inclusion from mainline-5.5
> commit 5a3d56a034be9e8e87a6cb9ed3f2928184db1417
>
> [full patch quoted above; trimmed]
>
> -static void mce_centaur_feature_init(struct cpuinfo_x86 *c)
> +static void mce_zhaoxin_feature_init(struct cpuinfo_x86 *c)
In this patch we merge the centaur and zhaoxin cases into one function. It's different from the upstream patch, and removes this setting, is that right?
> -	/*
> -	 * All newer Centaur CPUs support MCE broadcasting. Enable
> -	 * synchronization with a one second timeout.
> -	 */
> -	if ((c->x86 == 6 && c->x86_model == 0xf && c->x86_stepping >= 0xe) ||
> -	     c->x86 > 6) {
> -		if (cfg->monarch_timeout < 0)
> -			cfg->monarch_timeout = USEC_PER_SEC;
On 07/04/2021 14:31, Xie XiuQi wrote:
> Hi LeoLiu,
>
> On 2021/4/7 14:12, Cheng Jian wrote:
>> [patch "x86/mce: Add Zhaoxin CMCI support" quoted; trimmed]
>
> In this patch we merge the centaur and zhaoxin cases into one function.
> It's different from the upstream patch, and removes this setting, is
> that right?
It's right. Will submit a patch to the mainline kernel to update this.
Sincerely,
TonyWWang-oc
Hi,
On 2021/4/7 15:23, Tony W Wang-oc wrote:
> On 07/04/2021 14:31, Xie XiuQi wrote:
>> In this patch we merge the centaur and zhaoxin cases into one function.
>> It's different from the upstream patch, and removes this setting, is
>> that right?
>
> It's right. Will submit a patch to the mainline kernel to update this.
Looks good to me, Thanks.
> Sincerely,
> TonyWWang-oc
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>

mainline inclusion
from mainline-5.5
commit 70f0c230031dfef3c9b3e37b2a8c18d3f7186fb2
category: x86/mce
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Add support for more Zhaoxin CPUs.
Newer Zhaoxin CPUs support LMCE compatible with Intel. Add support for that.
[ bp: Export functions and massage. ]
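LMCE is enabled through IA32_MCG_EXT_CTL (MSR 0x4d0) bit 0; when a machine check is delivered locally, the CPU sets MCG_STATUS.LMCES (bit 3), which is exactly what do_machine_check() tests in the hunk below. A condensed sketch of the enable path modeled on intel_init_lmce() (the IA32_FEAT_CTL opt-in check that the real function performs is omitted here):

    #define MSR_IA32_MCG_EXT_CTL    0x4d0
    #define MCG_EXT_CTL_LMCE_EN     (1ULL << 0)

    static void lmce_enable(void)
    {
            u64 val;

            rdmsrl(MSR_IA32_MCG_EXT_CTL, val);
            if (!(val & MCG_EXT_CTL_LMCE_EN))
                    wrmsrl(MSR_IA32_MCG_EXT_CTL, val | MCG_EXT_CTL_LMCE_EN);
    }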
Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: CooperYan@zhaoxin.com
Cc: DavidWang@zhaoxin.com
Cc: HerryYang@zhaoxin.com
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: QiyuanWang@zhaoxin.com
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/1568787573-1297-5-git-send-email-TonyWWang-oc@zhao...
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/kernel/cpu/mce/core.c     | 25 +++++++++++++++++++++++--
 arch/x86/kernel/cpu/mce/intel.c    |  4 ++--
 arch/x86/kernel/cpu/mce/internal.h |  4 ++++
 3 files changed, 29 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 2e26423c48c9..88fcdddccd39 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1125,6 +1125,13 @@ static bool __mc_check_crashing_cpu(int cpu)
 		u64 mcgstatus;

 		mcgstatus = mce_rdmsrl(MSR_IA32_MCG_STATUS);
+
+		if (boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN ||
+		    boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR) {
+			if (mcgstatus & MCG_STATUS_LMCES)
+				return false;
+		}
+
 		if (mcgstatus & MCG_STATUS_RIPV) {
 			mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
 			return true;
@@ -1274,9 +1281,11 @@ void do_machine_check(struct pt_regs *regs, long error_code)

 	/*
 	 * Check if this MCE is signaled to only this logical processor,
-	 * on Intel only.
+	 * on Intel, Zhaoxin only.
 	 */
-	if (m.cpuvendor == X86_VENDOR_INTEL)
+	if (m.cpuvendor == X86_VENDOR_INTEL ||
+	    m.cpuvendor == X86_VENDOR_ZHAOXIN ||
+	    m.cpuvendor == X86_VENDOR_CENTAUR)
 		lmce = m.mcgstatus & MCG_STATUS_LMCES;

 	/*
@@ -1745,9 +1754,15 @@ static void mce_zhaoxin_feature_init(struct cpuinfo_x86 *c)
 	}

 	intel_init_cmci();
+	intel_init_lmce();
 	mce_adjust_timer = cmci_intel_adjust_timer;
 }

+static void mce_zhaoxin_feature_clear(struct cpuinfo_x86 *c)
+{
+	intel_clear_lmce();
+}
+
 static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
 {
 	switch (c->x86_vendor) {
@@ -1781,6 +1796,12 @@ static void __mcheck_cpu_clear_vendor(struct cpuinfo_x86 *c)
 	case X86_VENDOR_INTEL:
 		mce_intel_feature_clear(c);
 		break;
+
+	case X86_VENDOR_ZHAOXIN:
+	case X86_VENDOR_CENTAUR:
+		mce_zhaoxin_feature_clear(c);
+		break;
+
 	default:
 		break;
 	}
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index 6a220c999a01..f6f3b2675164 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -445,7 +445,7 @@ void intel_init_cmci(void)
 	cmci_recheck();
 }

-static void intel_init_lmce(void)
+void intel_init_lmce(void)
 {
 	u64 val;

@@ -458,7 +458,7 @@ static void intel_init_lmce(void)
 	wrmsrl(MSR_IA32_MCG_EXT_CTL, val | MCG_EXT_CTL_LMCE_EN);
 }

-static void intel_clear_lmce(void)
+void intel_clear_lmce(void)
 {
 	u64 val;

diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 99d73d18f2c4..22e8aa8c8fe7 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -53,12 +53,16 @@ bool mce_intel_cmci_poll(void);
 void mce_intel_hcpu_update(unsigned long cpu);
 void cmci_disable_bank(int bank);
 void intel_init_cmci(void);
+void intel_init_lmce(void);
+void intel_clear_lmce(void);
 #else
 # define cmci_intel_adjust_timer mce_adjust_timer_default
 static inline bool mce_intel_cmci_poll(void) { return false; }
 static inline void mce_intel_hcpu_update(unsigned long cpu) { }
 static inline void cmci_disable_bank(int bank) { }
 static inline void intel_init_cmci(void) { }
+static inline void intel_init_lmce(void) { }
+static inline void intel_clear_lmce(void) { }
 #endif

 void mce_timer_kick(unsigned long interval);
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
mainline inclusion from mainline-5.6 commit 1e41a766c98b481400ab8c5a7aa8ea63a1bb03de category: x86/speculation/spectre_v2 bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
New Zhaoxin family 7 CPUs are not affected by SPECTRE_V2. So define a separate cpu_vuln_whitelist bit NO_SPECTRE_V2 and add these CPUs to the cpu vulnerability whitelist.
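As a rough sketch of the mechanism (the helper name here is hypothetical; the in-tree cpu_matches() in arch/x86/kernel/cpu/common.c behaves like this), the whitelist lookup is an x86_match_cpu() table walk followed by a bit test against the NO_* flags stored in driver_data:

#include <asm/cpu_device_id.h>	/* x86_match_cpu(), struct x86_cpu_id */

static bool cpu_in_vuln_whitelist(const struct x86_cpu_id *table,
				  unsigned long which)
{
	const struct x86_cpu_id *m = x86_match_cpu(table);

	/* driver_data carries the NO_* bits set in the VULNWL() entry */
	return m && (m->driver_data & which);
}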
Signed-off-by: Tony W Wang-oc TonyWWang-oc@zhaoxin.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/1579227872-26972-2-git-send-email-TonyWWang-oc@zha... Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- arch/x86/kernel/cpu/common.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index a5954a2f8591..246f98153240 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -954,6 +954,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c) #define MSBDS_ONLY BIT(5) #define NO_SWAPGS BIT(6) #define NO_ITLB_MULTIHIT BIT(7) +#define NO_SPECTRE_V2 BIT(8)
#define VULNWL(_vendor, _family, _model, _whitelist) \ { X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist } @@ -1014,6 +1015,10 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { /* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */ VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT), VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT), + + /* Zhaoxin Family 7 */ + VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2), + VULNWL(ZHAOXIN, 7, X86_MODEL_ANY, NO_SPECTRE_V2), {} };
@@ -1068,7 +1073,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) return;
setup_force_cpu_bug(X86_BUG_SPECTRE_V1); - setup_force_cpu_bug(X86_BUG_SPECTRE_V2); + + if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2)) + setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
mainline inclusion from mainline-5.6 commit a84de2fa962c1b0551653fe245d6cb5f6129179c category: x86/speculation/swapgs bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
New Zhaoxin family 7 CPUs are not affected by the SWAPGS vulnerability. So mark these CPUs in the cpu vulnerability whitelist accordingly.
Signed-off-by: Tony W Wang-oc TonyWWang-oc@zhaoxin.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/1579227872-26972-3-git-send-email-TonyWWang-oc@zha... Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- arch/x86/kernel/cpu/common.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 246f98153240..1d83e5f7c5a8 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1017,8 +1017,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { VULNWL_HYGON(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT),
/* Zhaoxin Family 7 */ - VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2), - VULNWL(ZHAOXIN, 7, X86_MODEL_ANY, NO_SPECTRE_V2), + VULNWL(CENTAUR, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS), + VULNWL(ZHAOXIN, 7, X86_MODEL_ANY, NO_SPECTRE_V2 | NO_SWAPGS), {} };
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
The crc32c-intel driver matches CPUs supporting X86_FEATURE_XMM4_2. On platforms with Zhaoxin CPUs supporting this X86 feature, when crc32c-intel and crc32c-generic are both registered, the system will use crc32c-intel because its .cra_priority is greater than that of crc32c-generic.
When running the lmbench3 Create and Delete file tests on ext4 partitions with metadata checksums enabled, we found that the crc32c-generic driver gives about a 20% performance gain over crc32c-intel on some Zhaoxin CPUs.
These Zhaoxin CPUs are therefore expected to use the crc32c-generic driver to get this performance gain, so remove support for them from crc32c-intel.
This patch was submitted to the mainline kernel but was not accepted; the upstream maintainer's response was "Then create a BUG flag for it,".
We do not consider this a CPU bug for Zhaoxin CPUs, so the crc32c driver should be patched for these CPUs rather than reporting a BUG.
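For verification, here is an illustrative userspace program (not part of the patch) that dumps the driver and priority lines /proc/crypto reports for crc32c, showing whether crc32c-intel or crc32c-generic won the registration:

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/crypto", "r");
	char line[256];
	int in_crc32c = 0;

	if (!f)
		return 1;

	while (fgets(line, sizeof(line), f)) {
		/* each /proc/crypto record starts with a "name : ..." line */
		if (!strncmp(line, "name", 4))
			in_crc32c = strstr(line, "crc32c") != NULL;
		if (in_crc32c && (!strncmp(line, "driver", 6) ||
				  !strncmp(line, "priority", 8)))
			fputs(line, stdout);
	}

	fclose(f);
	return 0;
}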
References: https://lkml.org/lkml/2020/12/11/308 Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- arch/x86/crypto/crc32c-intel_glue.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c index 5773e1161072..48e20da286df 100644 --- a/arch/x86/crypto/crc32c-intel_glue.c +++ b/arch/x86/crypto/crc32c-intel_glue.c @@ -242,8 +242,15 @@ MODULE_DEVICE_TABLE(x86cpu, crc32c_cpu_id);
static int __init crc32c_intel_mod_init(void) { + struct cpuinfo_x86 *c = &boot_cpu_data; + if (!x86_match_cpu(crc32c_cpu_id)) return -ENODEV; + + if ((c->x86_vendor == X86_VENDOR_ZHAOXIN || c->x86_vendor == X86_VENDOR_CENTAUR) && + (c->x86 <= 7 && c->x86_model <= 59)) { + return -ENODEV; + } #ifdef CONFIG_X86_64 if (boot_cpu_has(X86_FEATURE_PCLMULQDQ)) { alg.update = crc32c_pcl_intel_update;
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
mainline inclusion from mainline-5.8 commit 3a4ac121c2cacbf97d493fa3bc42ead88657abe4 category: x86/perf bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Add generic Zhaoxin uncore PMU support.
Zhaoxin CPUs provide facilities for monitoring performance via a PMU (Performance Monitor Unit), but this functionality has been unused so far. Therefore, add support for the Zhaoxin PMU to make performance-related hardware events available.
The PMU is mostly an Intel Architectural PerfMon-v2, with a novel erratum for the ZXC line. It supports the following events:
-------------------------------------------------------------------------------------------------------------
Event                     | Event  | Umask | Description
                          | Select |       |
-------------------------------------------------------------------------------------------------------------
cpu-cycles                |  82h   |  00h  | unhalted core clock
instructions              |  00h   |  00h  | number of instructions at retirement.
cache-references          |  15h   |  05h  | number of fillq pushes at the current cycle.
cache-misses              |  1ah   |  05h  | number of l2 misses pushed by fillq.
branch-instructions       |  28h   |  00h  | counts the number of branch instructions retired.
branch-misses             |  29h   |  00h  | mispredicted branch instructions at retirement.
bus-cycles                |  83h   |  00h  | unhalted bus clock
stalled-cycles-frontend   |  01h   |  01h  | increments each cycle the # of uops issued by the RAT to the RS.
stalled-cycles-backend    |  0fh   |  04h  | RS0/1/2/3/45 empty
L1-dcache-loads           |  68h   |  05h  | number of retire/commit loads.
L1-dcache-load-misses     |  4bh   |  05h  | retired load uops whose data source followed an L1 miss.
L1-dcache-stores          |  69h   |  06h  | number of retire/commit stores, no LEA.
L1-dcache-store-misses    |  62h   |  05h  | cache lines in M state evicted out of L1D due to snoop HitM or dirty line replacement.
L1-icache-loads           |  00h   |  03h  | number of L1I cache accesses for valid normal fetch, including un-cacheable accesses.
L1-icache-load-misses     |  01h   |  03h  | number of L1I cache misses for valid normal fetch, including un-cacheable misses.
L1-icache-prefetches      |  0ah   |  03h  | number of prefetches.
L1-icache-prefetch-misses |  0bh   |  03h  | number of prefetch misses.
dTLB-loads                |  68h   |  05h  | number of retire/commit loads.
dTLB-load-misses          |  2ch   |  05h  | number of load operations that miss all TLB levels and cause a tablewalk.
dTLB-stores               |  69h   |  06h  | number of retire/commit stores, no LEA.
dTLB-store-misses         |  30h   |  05h  | number of store operations that miss all TLB levels and cause a tablewalk.
dTLB-prefetches           |  64h   |  05h  | number of hardware PTE prefetch requests dispatched out of the prefetch FIFO.
dTLB-prefetch-misses      |  65h   |  05h  | number of hardware PTE prefetch requests that miss the L1D cache.
iTLB-load                 |  00h   |  00h  | actually counts instructions.
iTLB-load-misses          |  34h   |  05h  | number of code operations that miss all TLB levels and cause a tablewalk.
-------------------------------------------------------------------------------------------------------------
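As a usage note, raw events from this table can be requested from userspace through perf_event_open(2): per the format attributes this driver exposes (config:0-7 for the event select, config:8-15 for the umask), the raw config is (umask << 8) | event. A minimal illustrative program (not part of the patch) counting cache-misses (event 1ah, umask 05h, i.e. raw config 0x051a):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
	struct perf_event_attr attr;
	long long count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_RAW;
	attr.size = sizeof(attr);
	attr.config = 0x051a;	/* umask 05h << 8 | event 1ah: cache-misses */
	attr.disabled = 1;
	attr.exclude_kernel = 1;

	/* measure this task on any CPU */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return 1;

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	/* ... workload to measure goes here ... */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	read(fd, &count, sizeof(count));
	printf("cache-misses: %lld\n", count);
	close(fd);
	return 0;
}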
Reported-by: kbuild test robot lkp@intel.com Signed-off-by: CodyYao-oc CodyYao-oc@zhaoxin.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Link: https://lkml.kernel.org/r/1586747669-4827-1-git-send-email-CodyYao-oc@zhaoxi... Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- arch/x86/events/Makefile | 2 + arch/x86/events/core.c | 4 + arch/x86/events/perf_event.h | 14 +- arch/x86/events/zhaoxin/Makefile | 3 + arch/x86/events/zhaoxin/core.c | 612 +++++++++++++ arch/x86/events/zhaoxin/uncore.c | 1101 ++++++++++++++++++++++++ arch/x86/events/zhaoxin/uncore.h | 308 +++++++ arch/x86/kernel/cpu/perfctr-watchdog.c | 8 + 8 files changed, 2051 insertions(+), 1 deletion(-) create mode 100644 arch/x86/events/zhaoxin/Makefile create mode 100644 arch/x86/events/zhaoxin/core.c create mode 100644 arch/x86/events/zhaoxin/uncore.c create mode 100644 arch/x86/events/zhaoxin/uncore.h
diff --git a/arch/x86/events/Makefile b/arch/x86/events/Makefile index b8ccdb5c9244..ad4a7c789637 100644 --- a/arch/x86/events/Makefile +++ b/arch/x86/events/Makefile @@ -2,3 +2,5 @@ obj-y += core.o obj-y += amd/ obj-$(CONFIG_X86_LOCAL_APIC) += msr.o obj-$(CONFIG_CPU_SUP_INTEL) += intel/ +obj-$(CONFIG_CPU_SUP_CENTAUR) += zhaoxin/ +obj-$(CONFIG_CPU_SUP_ZHAOXIN) += zhaoxin/ diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index 8e8970dd1af1..640f85da2b34 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -1758,6 +1758,10 @@ static int __init init_hw_perf_events(void) err = amd_pmu_init(); x86_pmu.name = "HYGON"; break; + case X86_VENDOR_ZHAOXIN: + case X86_VENDOR_CENTAUR: + err = zhaoxin_pmu_init(); + break; default: err = -ENOTSUPP; } diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index 05659c7b43d4..dd24cac3d5e5 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -565,9 +565,12 @@ struct x86_pmu { struct event_constraint *event_constraints; struct x86_pmu_quirk *quirks; int perfctr_second_write; - bool late_ack; u64 (*limit_period)(struct perf_event *event, u64 l);
+ /* PMI handler bits */ + unsigned int late_ack :1, + enabled_ack :1, + counter_freezing :1; /* * sysfs attrs */ @@ -1044,3 +1047,12 @@ static inline int is_ht_workaround_enabled(void) return 0; } #endif /* CONFIG_CPU_SUP_INTEL */ + +#if ((defined CONFIG_CPU_SUP_CENTAUR) || (defined CONFIG_CPU_ZHAOXIN)) +int zhaoxin_pmu_init(void); +#else +static inline int zhaoxin_pmu_init(void) +{ + return 0; +} +#endif /*CONFIG_CPU_SUP_CENTAUR or CONFIG_CPU_SUP_ZHAOXIN*/ diff --git a/arch/x86/events/zhaoxin/Makefile b/arch/x86/events/zhaoxin/Makefile new file mode 100644 index 000000000000..767d6212bac1 --- /dev/null +++ b/arch/x86/events/zhaoxin/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0 +obj-y += core.o +obj-y += uncore.o diff --git a/arch/x86/events/zhaoxin/core.c b/arch/x86/events/zhaoxin/core.c new file mode 100644 index 000000000000..c2e5bdf3893d --- /dev/null +++ b/arch/x86/events/zhaoxin/core.c @@ -0,0 +1,612 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Zhaoxin PMU; + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include <linux/stddef.h> +#include <linux/types.h> +#include <linux/init.h> +#include <linux/slab.h> +#include <linux/export.h> +#include <linux/nmi.h> + +#include <asm/cpufeature.h> +#include <asm/hardirq.h> +#include <asm/apic.h> + +#include "../perf_event.h" + +/* + * Zhaoxin PerfMon, used on zxc and later. + */ +static u64 zx_pmon_event_map[PERF_COUNT_HW_MAX] __read_mostly = { + + [PERF_COUNT_HW_CPU_CYCLES] = 0x0082, + [PERF_COUNT_HW_INSTRUCTIONS] = 0x00c0, + [PERF_COUNT_HW_CACHE_REFERENCES] = 0x0515, + [PERF_COUNT_HW_CACHE_MISSES] = 0x051a, + [PERF_COUNT_HW_BUS_CYCLES] = 0x0083, +}; + +static struct event_constraint zxc_event_constraints[] __read_mostly = { + + FIXED_EVENT_CONSTRAINT(0x0082, 1), /* unhalted core clock cycles */ + EVENT_CONSTRAINT_END +}; + +static struct event_constraint zxd_event_constraints[] __read_mostly = { + + FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* retired instructions */ + FIXED_EVENT_CONSTRAINT(0x0082, 1), /* unhalted core clock cycles */ + FIXED_EVENT_CONSTRAINT(0x0083, 2), /* unhalted bus clock cycles */ + EVENT_CONSTRAINT_END +}; + +static __initconst const u64 zxd_hw_cache_event_ids + [PERF_COUNT_HW_CACHE_MAX] + [PERF_COUNT_HW_CACHE_OP_MAX] + [PERF_COUNT_HW_CACHE_RESULT_MAX] = { +[C(L1D)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x0042, + [C(RESULT_MISS)] = 0x0538, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = 0x0043, + [C(RESULT_MISS)] = 0x0562, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, +}, +[C(L1I)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x0300, + [C(RESULT_MISS)] = 0x0301, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = 0x030a, + [C(RESULT_MISS)] = 0x030b, + }, +}, +[C(LL)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, +}, +[C(DTLB)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x0042, + [C(RESULT_MISS)] = 0x052c, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = 0x0043, + [C(RESULT_MISS)] = 0x0530, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = 0x0564, + [C(RESULT_MISS)] = 0x0565, + }, +}, +[C(ITLB)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x00c0, + [C(RESULT_MISS)] = 0x0534, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_PREFETCH)] = { + 
[C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, +}, +[C(BPU)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x0700, + [C(RESULT_MISS)] = 0x0709, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, +}, +[C(NODE)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, +}, +}; + +static __initconst const u64 zxe_hw_cache_event_ids + [PERF_COUNT_HW_CACHE_MAX] + [PERF_COUNT_HW_CACHE_OP_MAX] + [PERF_COUNT_HW_CACHE_RESULT_MAX] = { +[C(L1D)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x0568, + [C(RESULT_MISS)] = 0x054b, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = 0x0669, + [C(RESULT_MISS)] = 0x0562, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, +}, +[C(L1I)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x0300, + [C(RESULT_MISS)] = 0x0301, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = 0x030a, + [C(RESULT_MISS)] = 0x030b, + }, +}, +[C(LL)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x0, + [C(RESULT_MISS)] = 0x0, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = 0x0, + [C(RESULT_MISS)] = 0x0, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = 0x0, + [C(RESULT_MISS)] = 0x0, + }, +}, +[C(DTLB)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x0568, + [C(RESULT_MISS)] = 0x052c, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = 0x0669, + [C(RESULT_MISS)] = 0x0530, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = 0x0564, + [C(RESULT_MISS)] = 0x0565, + }, +}, +[C(ITLB)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x00c0, + [C(RESULT_MISS)] = 0x0534, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, +}, +[C(BPU)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = 0x0028, + [C(RESULT_MISS)] = 0x0029, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, +}, +[C(NODE)] = { + [C(OP_READ)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_WRITE)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, + [C(OP_PREFETCH)] = { + [C(RESULT_ACCESS)] = -1, + [C(RESULT_MISS)] = -1, + }, +}, +}; + +static void zhaoxin_pmu_disable_all(void) +{ + wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0); +} + +static void zhaoxin_pmu_enable_all(int added) +{ + wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, x86_pmu.intel_ctrl); +} + +static inline u64 zhaoxin_pmu_get_status(void) +{ + u64 status; + + rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, status); + + return status; +} + +static inline void zhaoxin_pmu_ack_status(u64 ack) +{ + wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, ack); +} + +static inline void zxc_pmu_ack_status(u64 ack) +{ + /* + * ZXC needs global control enabled in order to clear status bits. 
+ */ + zhaoxin_pmu_enable_all(0); + zhaoxin_pmu_ack_status(ack); + zhaoxin_pmu_disable_all(); +} + +static void zhaoxin_pmu_disable_fixed(struct hw_perf_event *hwc) +{ + int idx = hwc->idx - INTEL_PMC_IDX_FIXED; + u64 ctrl_val, mask; + + mask = 0xfULL << (idx * 4); + + rdmsrl(hwc->config_base, ctrl_val); + ctrl_val &= ~mask; + wrmsrl(hwc->config_base, ctrl_val); +} + +static void zhaoxin_pmu_disable_event(struct perf_event *event) +{ + struct hw_perf_event *hwc = &event->hw; + + if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) { + zhaoxin_pmu_disable_fixed(hwc); + return; + } + + x86_pmu_disable_event(event); +} + +static void zhaoxin_pmu_enable_fixed(struct hw_perf_event *hwc) +{ + int idx = hwc->idx - INTEL_PMC_IDX_FIXED; + u64 ctrl_val, bits, mask; + + /* + * Enable IRQ generation (0x8), + * and enable ring-3 counting (0x2) and ring-0 counting (0x1) + * if requested: + */ + bits = 0x8ULL; + if (hwc->config & ARCH_PERFMON_EVENTSEL_USR) + bits |= 0x2; + if (hwc->config & ARCH_PERFMON_EVENTSEL_OS) + bits |= 0x1; + + bits <<= (idx * 4); + mask = 0xfULL << (idx * 4); + + rdmsrl(hwc->config_base, ctrl_val); + ctrl_val &= ~mask; + ctrl_val |= bits; + wrmsrl(hwc->config_base, ctrl_val); +} + +static void zhaoxin_pmu_enable_event(struct perf_event *event) +{ + struct hw_perf_event *hwc = &event->hw; + + if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) { + zhaoxin_pmu_enable_fixed(hwc); + return; + } + + __x86_pmu_enable_event(hwc, ARCH_PERFMON_EVENTSEL_ENABLE); +} + +/* + * This handler is triggered by the local APIC, so the APIC IRQ handling + * rules apply: + */ +static int zhaoxin_pmu_handle_irq(struct pt_regs *regs) +{ + struct perf_sample_data data; + struct cpu_hw_events *cpuc; + int handled = 0; + u64 status; + int bit; + + cpuc = this_cpu_ptr(&cpu_hw_events); + apic_write(APIC_LVTPC, APIC_DM_NMI); + zhaoxin_pmu_disable_all(); + status = zhaoxin_pmu_get_status(); + if (!status) + goto done; + +again: + if (x86_pmu.enabled_ack) + zxc_pmu_ack_status(status); + else + zhaoxin_pmu_ack_status(status); + + inc_irq_stat(apic_perf_irqs); + + /* + * CondChgd bit 63 doesn't mean any overflow status. Ignore + * and clear the bit. 
+ */ + if (__test_and_clear_bit(63, (unsigned long *)&status)) { + if (!status) + goto done; + } + + for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) { + struct perf_event *event = cpuc->events[bit]; + + handled++; + + if (!test_bit(bit, cpuc->active_mask)) + continue; + + x86_perf_event_update(event); + perf_sample_data_init(&data, 0, event->hw.last_period); + + if (!x86_perf_event_set_period(event)) + continue; + + if (perf_event_overflow(event, &data, regs)) + x86_pmu_stop(event, 0); + } + + /* + * Repeat if there is more work to be done: + */ + status = zhaoxin_pmu_get_status(); + if (status) + goto again; + +done: + zhaoxin_pmu_enable_all(0); + return handled; +} + +static u64 zhaoxin_pmu_event_map(int hw_event) +{ + return zx_pmon_event_map[hw_event]; +} + +static struct event_constraint * +zhaoxin_get_event_constraints(struct cpu_hw_events *cpuc, int idx, + struct perf_event *event) +{ + struct event_constraint *c; + + if (x86_pmu.event_constraints) { + for_each_event_constraint(c, x86_pmu.event_constraints) { + if ((event->hw.config & c->cmask) == c->code) + return c; + } + } + + return &unconstrained; +} + +PMU_FORMAT_ATTR(event, "config:0-7"); +PMU_FORMAT_ATTR(umask, "config:8-15"); +PMU_FORMAT_ATTR(edge, "config:18"); +PMU_FORMAT_ATTR(inv, "config:23"); +PMU_FORMAT_ATTR(cmask, "config:24-31"); + +static struct attribute *zx_arch_formats_attr[] = { + &format_attr_event.attr, + &format_attr_umask.attr, + &format_attr_edge.attr, + &format_attr_inv.attr, + &format_attr_cmask.attr, + NULL, +}; + +static ssize_t zhaoxin_event_sysfs_show(char *page, u64 config) +{ + u64 event = (config & ARCH_PERFMON_EVENTSEL_EVENT); + + return x86_event_sysfs_show(page, config, event); +} + +static const struct x86_pmu zhaoxin_pmu __initconst = { + .name = "zhaoxin_pmu", + .handle_irq = zhaoxin_pmu_handle_irq, + .disable_all = zhaoxin_pmu_disable_all, + .enable_all = zhaoxin_pmu_enable_all, + .enable = zhaoxin_pmu_enable_event, + .disable = zhaoxin_pmu_disable_event, + .hw_config = x86_pmu_hw_config, + .schedule_events = x86_schedule_events, + .eventsel = MSR_ARCH_PERFMON_EVENTSEL0, + .perfctr = MSR_ARCH_PERFMON_PERFCTR0, + .event_map = zhaoxin_pmu_event_map, + .max_events = ARRAY_SIZE(zx_pmon_event_map), + .apic = 1, + /* + * For zxd/zxe, read/write operation for PMCx MSR is 48 bits. 
+ */ + .max_period = (1ULL << 47) - 1, + .get_event_constraints = zhaoxin_get_event_constraints, + + .format_attrs = zx_arch_formats_attr, + .events_sysfs_show = zhaoxin_event_sysfs_show, +}; + +static const struct { int id; char *name; } zx_arch_events_map[] __initconst = { + { PERF_COUNT_HW_CPU_CYCLES, "cpu cycles" }, + { PERF_COUNT_HW_INSTRUCTIONS, "instructions" }, + { PERF_COUNT_HW_BUS_CYCLES, "bus cycles" }, + { PERF_COUNT_HW_CACHE_REFERENCES, "cache references" }, + { PERF_COUNT_HW_CACHE_MISSES, "cache misses" }, + { PERF_COUNT_HW_BRANCH_INSTRUCTIONS, "branch instructions" }, + { PERF_COUNT_HW_BRANCH_MISSES, "branch misses" }, +}; + +static __init void zhaoxin_arch_events_quirk(void) +{ + int bit; + + /* disable event that reported as not presend by cpuid */ + for_each_set_bit(bit, x86_pmu.events_mask, ARRAY_SIZE(zx_arch_events_map)) { + zx_pmon_event_map[zx_arch_events_map[bit].id] = 0; + pr_warn("CPUID marked event: '%s' unavailable\n", + zx_arch_events_map[bit].name); + } +} + +__init int zhaoxin_pmu_init(void) +{ + union cpuid10_edx edx; + union cpuid10_eax eax; + union cpuid10_ebx ebx; + struct event_constraint *c; + unsigned int unused; + int version; + + pr_info("Welcome to pmu!\n"); + + /* + * Check whether the Architectural PerfMon supports + * hw_event or not. + */ + cpuid(10, &eax.full, &ebx.full, &unused, &edx.full); + + if (eax.split.mask_length < ARCH_PERFMON_EVENTS_COUNT - 1) + return -ENODEV; + + version = eax.split.version_id; + if (version != 2) + return -ENODEV; + + x86_pmu = zhaoxin_pmu; + pr_info("Version check pass!\n"); + + x86_pmu.version = version; + x86_pmu.num_counters = eax.split.num_counters; + x86_pmu.cntval_bits = eax.split.bit_width; + x86_pmu.cntval_mask = (1ULL << eax.split.bit_width) - 1; + x86_pmu.events_maskl = ebx.full; + x86_pmu.events_mask_len = eax.split.mask_length; + + x86_pmu.num_counters_fixed = edx.split.num_counters_fixed; + x86_add_quirk(zhaoxin_arch_events_quirk); + + switch (boot_cpu_data.x86) { + case 0x06: + if (boot_cpu_data.x86_model == 0x0f || boot_cpu_data.x86_model == 0x19) { + + x86_pmu.max_period = x86_pmu.cntval_mask >> 1; + + /* Clearing status works only if the global control is enable on zxc. 
*/ + x86_pmu.enabled_ack = 1; + + x86_pmu.event_constraints = zxc_event_constraints; + zx_pmon_event_map[PERF_COUNT_HW_INSTRUCTIONS] = 0; + zx_pmon_event_map[PERF_COUNT_HW_CACHE_REFERENCES] = 0; + zx_pmon_event_map[PERF_COUNT_HW_CACHE_MISSES] = 0; + zx_pmon_event_map[PERF_COUNT_HW_BUS_CYCLES] = 0; + + pr_cont("C events, "); + break; + } + return -ENODEV; + + case 0x07: + zx_pmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = + X86_CONFIG(.event = 0x01, .umask = 0x01, .inv = 0x01, .cmask = 0x01); + + zx_pmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = + X86_CONFIG(.event = 0x0f, .umask = 0x04, .inv = 0, .cmask = 0); + + switch (boot_cpu_data.x86_model) { + case 0x1b: + memcpy(hw_cache_event_ids, zxd_hw_cache_event_ids, + sizeof(hw_cache_event_ids)); + + x86_pmu.event_constraints = zxd_event_constraints; + + zx_pmon_event_map[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x0700; + zx_pmon_event_map[PERF_COUNT_HW_BRANCH_MISSES] = 0x0709; + + pr_cont("D events, "); + break; + case 0x3b: + memcpy(hw_cache_event_ids, zxe_hw_cache_event_ids, + sizeof(hw_cache_event_ids)); + + x86_pmu.event_constraints = zxd_event_constraints; + + zx_pmon_event_map[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x0028; + zx_pmon_event_map[PERF_COUNT_HW_BRANCH_MISSES] = 0x0029; + + pr_cont("E events, "); + break; + default: + return -ENODEV; + } + break; + + default: + return -ENODEV; + } + + x86_pmu.intel_ctrl = (1 << (x86_pmu.num_counters)) - 1; + x86_pmu.intel_ctrl |= ((1LL << x86_pmu.num_counters_fixed)-1) << INTEL_PMC_IDX_FIXED; + + if (x86_pmu.event_constraints) { + for_each_event_constraint(c, x86_pmu.event_constraints) { + c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1; + c->weight += x86_pmu.num_counters; + } + } + + return 0; +} diff --git a/arch/x86/events/zhaoxin/uncore.c b/arch/x86/events/zhaoxin/uncore.c new file mode 100644 index 000000000000..4c4ea01d23c8 --- /dev/null +++ b/arch/x86/events/zhaoxin/uncore.c @@ -0,0 +1,1101 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include <linux/module.h> + +#include <asm/cpu_device_id.h> +#include "uncore.h" + +static struct zhaoxin_uncore_type *empty_uncore[] = { NULL, }; +static struct zhaoxin_uncore_type **uncore_msr_uncores = empty_uncore; + +/* mask of cpus that collect uncore events */ +static cpumask_t uncore_cpu_mask; + +/* constraint for the fixed counter */ +static struct event_constraint uncore_constraint_fixed = + EVENT_CONSTRAINT(~0ULL, 1 << UNCORE_PMC_IDX_FIXED, ~0ULL); + +static int max_packages; + +/* CHX event control */ +#define CHX_UNC_CTL_EV_SEL_MASK 0x000000ff +#define CHX_UNC_CTL_UMASK_MASK 0x0000ff00 +#define CHX_UNC_CTL_EDGE_DET (1 << 18) +#define CHX_UNC_CTL_EN (1 << 22) +#define CHX_UNC_CTL_INVERT (1 << 23) +#define CHX_UNC_CTL_CMASK_MASK 0xff000000 +#define CHX_UNC_FIXED_CTR_CTL_EN (1 << 0) + +#define CHX_UNC_RAW_EVENT_MASK (CHX_UNC_CTL_EV_SEL_MASK | \ + CHX_UNC_CTL_UMASK_MASK | \ + CHX_UNC_CTL_EDGE_DET | \ + CHX_UNC_CTL_INVERT | \ + CHX_UNC_CTL_CMASK_MASK) + +/* CHX global control register */ +#define CHX_UNC_PERF_GLOBAL_CTL 0x391 +#define CHX_UNC_FIXED_CTR 0x394 +#define CHX_UNC_FIXED_CTR_CTRL 0x395 + +/* CHX uncore global control */ +#define CHX_UNC_GLOBAL_CTL_EN_PC_ALL ((1ULL << 4) - 1) +#define CHX_UNC_GLOBAL_CTL_EN_FC (1ULL << 32) + +/* CHX uncore register */ +#define CHX_UNC_PERFEVTSEL0 0x3c0 +#define CHX_UNC_UNCORE_PMC0 0x3b0 + +DEFINE_UNCORE_FORMAT_ATTR(event, event, "config:0-7"); +DEFINE_UNCORE_FORMAT_ATTR(umask, umask, "config:8-15"); +DEFINE_UNCORE_FORMAT_ATTR(edge, edge, "config:18"); +DEFINE_UNCORE_FORMAT_ATTR(inv, inv, 
"config:23"); +DEFINE_UNCORE_FORMAT_ATTR(cmask8, cmask, "config:24-31"); + +ssize_t zx_uncore_event_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) +{ + struct uncore_event_desc *event = + container_of(attr, struct uncore_event_desc, attr); + return sprintf(buf, "%s", event->config); +} + +/*chx uncore support */ +static void chx_uncore_msr_disable_event(struct zhaoxin_uncore_box *box, struct perf_event *event) +{ + wrmsrl(event->hw.config_base, 0); +} + +static u64 uncore_msr_read_counter(struct zhaoxin_uncore_box *box, struct perf_event *event) +{ + u64 count; + + rdmsrl(event->hw.event_base, count); + + return count; +} + +static void chx_uncore_msr_disable_box(struct zhaoxin_uncore_box *box) +{ + wrmsrl(CHX_UNC_PERF_GLOBAL_CTL, 0); +} + +static void chx_uncore_msr_enable_box(struct zhaoxin_uncore_box *box) +{ + wrmsrl(CHX_UNC_PERF_GLOBAL_CTL, CHX_UNC_GLOBAL_CTL_EN_PC_ALL | CHX_UNC_GLOBAL_CTL_EN_FC); +} + +static void chx_uncore_msr_enable_event(struct zhaoxin_uncore_box *box, struct perf_event *event) +{ + struct hw_perf_event *hwc = &event->hw; + + if (hwc->idx < UNCORE_PMC_IDX_FIXED) + wrmsrl(hwc->config_base, hwc->config | CHX_UNC_CTL_EN); + else + wrmsrl(hwc->config_base, CHX_UNC_FIXED_CTR_CTL_EN); +} + +static struct attribute *chx_uncore_formats_attr[] = { + &format_attr_event.attr, + &format_attr_umask.attr, + &format_attr_edge.attr, + &format_attr_inv.attr, + &format_attr_cmask8.attr, + NULL, +}; + +static struct attribute_group chx_uncore_format_group = { + .name = "format", + .attrs = chx_uncore_formats_attr, +}; + +static struct uncore_event_desc chx_uncore_events[] = { + { /* end: all zeroes */ }, +}; + +static struct zhaoxin_uncore_ops chx_uncore_msr_ops = { + .disable_box = chx_uncore_msr_disable_box, + .enable_box = chx_uncore_msr_enable_box, + .disable_event = chx_uncore_msr_disable_event, + .enable_event = chx_uncore_msr_enable_event, + .read_counter = uncore_msr_read_counter, +}; + +static struct zhaoxin_uncore_type chx_uncore_box = { + .name = "", + .num_counters = 4, + .num_boxes = 1, + .perf_ctr_bits = 48, + .fixed_ctr_bits = 48, + .event_ctl = CHX_UNC_PERFEVTSEL0, + .perf_ctr = CHX_UNC_UNCORE_PMC0, + .fixed_ctr = CHX_UNC_FIXED_CTR, + .fixed_ctl = CHX_UNC_FIXED_CTR_CTRL, + .event_mask = CHX_UNC_RAW_EVENT_MASK, + .event_descs = chx_uncore_events, + .ops = &chx_uncore_msr_ops, + .format_group = &chx_uncore_format_group, +}; + +static struct zhaoxin_uncore_type *chx_msr_uncores[] = { + &chx_uncore_box, + NULL, +}; + +static struct zhaoxin_uncore_box *uncore_pmu_to_box(struct zhaoxin_uncore_pmu *pmu, int cpu) +{ + unsigned int package_id = topology_logical_package_id(cpu); + + /* + * The unsigned check also catches the '-1' return value for non + * existent mappings in the topology map. + */ + return package_id < max_packages ? 
pmu->boxes[package_id] : NULL; +} + +static void uncore_assign_hw_event(struct zhaoxin_uncore_box *box, + struct perf_event *event, int idx) +{ + struct hw_perf_event *hwc = &event->hw; + + hwc->idx = idx; + hwc->last_tag = ++box->tags[idx]; + + if (uncore_pmc_fixed(hwc->idx)) { + hwc->event_base = uncore_fixed_ctr(box); + hwc->config_base = uncore_fixed_ctl(box); + return; + } + + hwc->config_base = uncore_event_ctl(box, hwc->idx); + hwc->event_base = uncore_perf_ctr(box, hwc->idx); +} + +void uncore_perf_event_update(struct zhaoxin_uncore_box *box, struct perf_event *event) +{ + u64 prev_count, new_count, delta; + int shift; + + if (uncore_pmc_fixed(event->hw.idx)) + shift = 64 - uncore_fixed_ctr_bits(box); + else + shift = 64 - uncore_perf_ctr_bits(box); + + /* the hrtimer might modify the previous event value */ +again: + prev_count = local64_read(&event->hw.prev_count); + new_count = uncore_read_counter(box, event); + if (local64_xchg(&event->hw.prev_count, new_count) != prev_count) + goto again; + + delta = (new_count << shift) - (prev_count << shift); + delta >>= shift; + + local64_add(delta, &event->count); +} + +static enum hrtimer_restart uncore_pmu_hrtimer(struct hrtimer *hrtimer) +{ + struct zhaoxin_uncore_box *box; + struct perf_event *event; + unsigned long flags; + int bit; + + box = container_of(hrtimer, struct zhaoxin_uncore_box, hrtimer); + if (!box->n_active || box->cpu != smp_processor_id()) + return HRTIMER_NORESTART; + /* + * disable local interrupt to prevent uncore_pmu_event_start/stop + * to interrupt the update process + */ + local_irq_save(flags); + + /* + * handle boxes with an active event list as opposed to active + * counters + */ + list_for_each_entry(event, &box->active_list, active_entry) { + uncore_perf_event_update(box, event); + } + + for_each_set_bit(bit, box->active_mask, UNCORE_PMC_IDX_MAX) + uncore_perf_event_update(box, box->events[bit]); + + local_irq_restore(flags); + + hrtimer_forward_now(hrtimer, ns_to_ktime(box->hrtimer_duration)); + return HRTIMER_RESTART; +} + +static void uncore_pmu_start_hrtimer(struct zhaoxin_uncore_box *box) +{ + hrtimer_start(&box->hrtimer, ns_to_ktime(box->hrtimer_duration), + HRTIMER_MODE_REL_PINNED); +} + +static void uncore_pmu_cancel_hrtimer(struct zhaoxin_uncore_box *box) +{ + hrtimer_cancel(&box->hrtimer); +} + +static void uncore_pmu_init_hrtimer(struct zhaoxin_uncore_box *box) +{ + hrtimer_init(&box->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + box->hrtimer.function = uncore_pmu_hrtimer; +} + +static struct zhaoxin_uncore_box *uncore_alloc_box(struct zhaoxin_uncore_type *type, + int node) +{ + int i, size, numshared = type->num_shared_regs; + struct zhaoxin_uncore_box *box; + + size = sizeof(*box) + numshared * sizeof(struct zhaoxin_uncore_extra_reg); + + box = kzalloc_node(size, GFP_KERNEL, node); + if (!box) + return NULL; + + for (i = 0; i < numshared; i++) + raw_spin_lock_init(&box->shared_regs[i].lock); + + uncore_pmu_init_hrtimer(box); + box->cpu = -1; + box->package_id = -1; + + /* set default hrtimer timeout */ + box->hrtimer_duration = UNCORE_PMU_HRTIMER_INTERVAL; + + INIT_LIST_HEAD(&box->active_list); + + return box; +} + +static bool is_box_event(struct zhaoxin_uncore_box *box, struct perf_event *event) +{ + return &box->pmu->pmu == event->pmu; +} + +static struct event_constraint * +uncore_get_event_constraint(struct zhaoxin_uncore_box *box, struct perf_event *event) +{ + struct zhaoxin_uncore_type *type = box->pmu->type; + struct event_constraint *c; + + if (type->ops->get_constraint) { + c = 
type->ops->get_constraint(box, event); + if (c) + return c; + } + + if (event->attr.config == UNCORE_FIXED_EVENT) + return &uncore_constraint_fixed; + + if (type->constraints) { + for_each_event_constraint(c, type->constraints) { + if ((event->hw.config & c->cmask) == c->code) + return c; + } + } + + return &type->unconstrainted; +} + +static void uncore_put_event_constraint(struct zhaoxin_uncore_box *box, + struct perf_event *event) +{ + if (box->pmu->type->ops->put_constraint) + box->pmu->type->ops->put_constraint(box, event); +} + +static int uncore_assign_events(struct zhaoxin_uncore_box *box, int assign[], int n) +{ + unsigned long used_mask[BITS_TO_LONGS(UNCORE_PMC_IDX_MAX)]; + struct event_constraint *c; + int i, wmin, wmax, ret = 0; + struct hw_perf_event *hwc; + + bitmap_zero(used_mask, UNCORE_PMC_IDX_MAX); + + for (i = 0, wmin = UNCORE_PMC_IDX_MAX, wmax = 0; i < n; i++) { + c = uncore_get_event_constraint(box, box->event_list[i]); + box->event_constraint[i] = c; + wmin = min(wmin, c->weight); + wmax = max(wmax, c->weight); + } + + /* fastpath, try to reuse previous register */ + for (i = 0; i < n; i++) { + hwc = &box->event_list[i]->hw; + c = box->event_constraint[i]; + + /* never assigned */ + if (hwc->idx == -1) + break; + + /* constraint still honored */ + if (!test_bit(hwc->idx, c->idxmsk)) + break; + + /* not already used */ + if (test_bit(hwc->idx, used_mask)) + break; + + __set_bit(hwc->idx, used_mask); + if (assign) + assign[i] = hwc->idx; + } + /* slow path */ + if (i != n) + ret = perf_assign_events(box->event_constraint, n, + wmin, wmax, n, assign); + + if (!assign || ret) { + for (i = 0; i < n; i++) + uncore_put_event_constraint(box, box->event_list[i]); + } + return ret ? -EINVAL : 0; +} + +static void uncore_pmu_event_start(struct perf_event *event, int flags) +{ + struct zhaoxin_uncore_box *box = uncore_event_to_box(event); + int idx = event->hw.idx; + + + if (WARN_ON_ONCE(idx == -1 || idx >= UNCORE_PMC_IDX_MAX)) + return; + + if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED))) + return; + + event->hw.state = 0; + box->events[idx] = event; + box->n_active++; + __set_bit(idx, box->active_mask); + + local64_set(&event->hw.prev_count, uncore_read_counter(box, event)); + uncore_enable_event(box, event); + + if (box->n_active == 1) { + uncore_enable_box(box); + uncore_pmu_start_hrtimer(box); + } +} + +static void uncore_pmu_event_stop(struct perf_event *event, int flags) +{ + struct zhaoxin_uncore_box *box = uncore_event_to_box(event); + struct hw_perf_event *hwc = &event->hw; + + if (__test_and_clear_bit(hwc->idx, box->active_mask)) { + uncore_disable_event(box, event); + box->n_active--; + box->events[hwc->idx] = NULL; + WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED); + hwc->state |= PERF_HES_STOPPED; + + if (box->n_active == 0) { + uncore_disable_box(box); + uncore_pmu_cancel_hrtimer(box); + } + } + + if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) { + /* + * Drain the remaining delta count out of a event + * that we are disabling: + */ + uncore_perf_event_update(box, event); + hwc->state |= PERF_HES_UPTODATE; + } +} + +static int +uncore_collect_events(struct zhaoxin_uncore_box *box, struct perf_event *leader, + bool dogrp) +{ + struct perf_event *event; + int n, max_count; + + max_count = box->pmu->type->num_counters; + if (box->pmu->type->fixed_ctl) + max_count++; + + if (box->n_events >= max_count) + return -EINVAL; + + n = box->n_events; + + if (is_box_event(box, leader)) { + box->event_list[n] = leader; + n++; + } + + if (!dogrp) + return 
n; + + for_each_sibling_event(event, leader) { + if (!is_box_event(box, event) || + event->state <= PERF_EVENT_STATE_OFF) + continue; + + if (n >= max_count) + return -EINVAL; + + box->event_list[n] = event; + n++; + } + return n; +} + +static int uncore_pmu_event_add(struct perf_event *event, int flags) +{ + struct zhaoxin_uncore_box *box = uncore_event_to_box(event); + struct hw_perf_event *hwc = &event->hw; + int assign[UNCORE_PMC_IDX_MAX]; + int i, n, ret; + + if (!box) + return -ENODEV; + + ret = n = uncore_collect_events(box, event, false); + if (ret < 0) + return ret; + + hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED; + if (!(flags & PERF_EF_START)) + hwc->state |= PERF_HES_ARCH; + + ret = uncore_assign_events(box, assign, n); + if (ret) + return ret; + + /* save events moving to new counters */ + for (i = 0; i < box->n_events; i++) { + event = box->event_list[i]; + hwc = &event->hw; + + if (hwc->idx == assign[i] && + hwc->last_tag == box->tags[assign[i]]) + continue; + /* + * Ensure we don't accidentally enable a stopped + * counter simply because we rescheduled. + */ + if (hwc->state & PERF_HES_STOPPED) + hwc->state |= PERF_HES_ARCH; + + uncore_pmu_event_stop(event, PERF_EF_UPDATE); + } + + /* reprogram moved events into new counters */ + for (i = 0; i < n; i++) { + event = box->event_list[i]; + hwc = &event->hw; + + if (hwc->idx != assign[i] || + hwc->last_tag != box->tags[assign[i]]) + uncore_assign_hw_event(box, event, assign[i]); + else if (i < box->n_events) + continue; + + if (hwc->state & PERF_HES_ARCH) + continue; + + uncore_pmu_event_start(event, 0); + } + box->n_events = n; + + return 0; +} + +static int uncore_validate_group(struct zhaoxin_uncore_pmu *pmu, + struct perf_event *event) +{ + struct perf_event *leader = event->group_leader; + struct zhaoxin_uncore_box *fake_box; + int ret = -EINVAL, n; + + fake_box = uncore_alloc_box(pmu->type, NUMA_NO_NODE); + if (!fake_box) + return -ENOMEM; + + fake_box->pmu = pmu; + /* + * the event is not yet connected with its + * siblings therefore we must first collect + * existing siblings, then add the new event + * before we can simulate the scheduling + */ + n = uncore_collect_events(fake_box, leader, true); + if (n < 0) + goto out; + + fake_box->n_events = n; + n = uncore_collect_events(fake_box, event, false); + if (n < 0) + goto out; + + fake_box->n_events = n; + + ret = uncore_assign_events(fake_box, NULL, n); +out: + kfree(fake_box); + return ret; +} + +static void uncore_pmu_event_del(struct perf_event *event, int flags) +{ + struct zhaoxin_uncore_box *box = uncore_event_to_box(event); + int i; + + uncore_pmu_event_stop(event, PERF_EF_UPDATE); + + for (i = 0; i < box->n_events; i++) { + if (event == box->event_list[i]) { + uncore_put_event_constraint(box, event); + + for (++i; i < box->n_events; i++) + box->event_list[i - 1] = box->event_list[i]; + + --box->n_events; + break; + } + } + + event->hw.idx = -1; + event->hw.last_tag = ~0ULL; +} + +static void uncore_pmu_event_read(struct perf_event *event) +{ + struct zhaoxin_uncore_box *box = uncore_event_to_box(event); + + uncore_perf_event_update(box, event); +} + +static int uncore_pmu_event_init(struct perf_event *event) +{ + struct zhaoxin_uncore_pmu *pmu; + struct zhaoxin_uncore_box *box; + struct hw_perf_event *hwc = &event->hw; + int ret; + + if (event->attr.type != event->pmu->type) + return -ENOENT; + + pmu = uncore_event_to_pmu(event); + /* no device found for this pmu */ + if (pmu->func_id < 0) + return -ENOENT; + + /* Sampling not supported yet */ + if 
(hwc->sample_period) + return -EINVAL; + + /* + * Place all uncore events for a particular physical package + * onto a single cpu + */ + if (event->cpu < 0) + return -EINVAL; + box = uncore_pmu_to_box(pmu, event->cpu); + if (!box || box->cpu < 0) + return -EINVAL; + event->cpu = box->cpu; + event->pmu_private = box; + + event->event_caps |= PERF_EV_CAP_READ_ACTIVE_PKG; + + event->hw.idx = -1; + event->hw.last_tag = ~0ULL; + event->hw.extra_reg.idx = EXTRA_REG_NONE; + event->hw.branch_reg.idx = EXTRA_REG_NONE; + + if (event->attr.config == UNCORE_FIXED_EVENT) { + /* no fixed counter */ + if (!pmu->type->fixed_ctl) + return -EINVAL; + /* + * if there is only one fixed counter, only the first pmu + * can access the fixed counter + */ + if (pmu->type->single_fixed && pmu->pmu_idx > 0) + return -EINVAL; + + /* fixed counters have event field hardcoded to zero */ + hwc->config = 0ULL; + } else { + hwc->config = event->attr.config & + (pmu->type->event_mask | ((u64)pmu->type->event_mask_ext << 32)); + if (pmu->type->ops->hw_config) { + ret = pmu->type->ops->hw_config(box, event); + if (ret) + return ret; + } + } + + if (event->group_leader != event) + ret = uncore_validate_group(pmu, event); + else + ret = 0; + + return ret; +} + +static ssize_t uncore_get_attr_cpumask(struct device *dev, struct device_attribute *attr, char *buf) +{ + return cpumap_print_to_pagebuf(true, buf, &uncore_cpu_mask); +} + +static DEVICE_ATTR(cpumask, S_IRUGO, uncore_get_attr_cpumask, NULL); + +static struct attribute *uncore_pmu_attrs[] = { + &dev_attr_cpumask.attr, + NULL, +}; + +static const struct attribute_group uncore_pmu_attr_group = { + .attrs = uncore_pmu_attrs, +}; + +static void uncore_pmu_unregister(struct zhaoxin_uncore_pmu *pmu) +{ + if (!pmu->registered) + return; + perf_pmu_unregister(&pmu->pmu); + pmu->registered = false; +} + +static void uncore_free_boxes(struct zhaoxin_uncore_pmu *pmu) +{ + int package; + + for (package = 0; package < max_packages; package++) + kfree(pmu->boxes[package]); + kfree(pmu->boxes); +} + +static void uncore_type_exit(struct zhaoxin_uncore_type *type) +{ + struct zhaoxin_uncore_pmu *pmu = type->pmus; + int i; + + if (pmu) { + for (i = 0; i < type->num_boxes; i++, pmu++) { + uncore_pmu_unregister(pmu); + uncore_free_boxes(pmu); + } + kfree(type->pmus); + type->pmus = NULL; + } + kfree(type->events_group); + type->events_group = NULL; +} + +static void uncore_types_exit(struct zhaoxin_uncore_type **types) +{ + for (; *types; types++) + uncore_type_exit(*types); +} + +static int __init uncore_type_init(struct zhaoxin_uncore_type *type, bool setid) +{ + struct zhaoxin_uncore_pmu *pmus; + size_t size; + int i, j; + + pmus = kcalloc(type->num_boxes, sizeof(*pmus), GFP_KERNEL); + if (!pmus) + return -ENOMEM; + + size = max_packages*sizeof(struct zhaoxin_uncore_box *); + + for (i = 0; i < type->num_boxes; i++) { + pmus[i].func_id = setid ? 
i : -1; + pmus[i].pmu_idx = i; + pmus[i].type = type; + pmus[i].boxes = kzalloc(size, GFP_KERNEL); + if (!pmus[i].boxes) + goto err; + } + + type->pmus = pmus; + type->unconstrainted = (struct event_constraint) + __EVENT_CONSTRAINT(0, (1ULL << type->num_counters) - 1, + 0, type->num_counters, 0, 0); + + if (type->event_descs) { + struct { + struct attribute_group group; + struct attribute *attrs[]; + } *attr_group; + for (i = 0; type->event_descs[i].attr.attr.name; i++) + ; + + attr_group = kzalloc(struct_size(attr_group, attrs, i + 1), GFP_KERNEL); + if (!attr_group) + goto err; + + attr_group->group.name = "events"; + attr_group->group.attrs = attr_group->attrs; + + for (j = 0; j < i; j++) + attr_group->attrs[j] = &type->event_descs[j].attr.attr; + + type->events_group = &attr_group->group; + } + + type->pmu_group = &uncore_pmu_attr_group; + + return 0; + +err: + for (i = 0; i < type->num_boxes; i++) + kfree(pmus[i].boxes); + kfree(pmus); + + return -ENOMEM; +} + +static int __init +uncore_types_init(struct zhaoxin_uncore_type **types, bool setid) +{ + int ret; + + for (; *types; types++) { + ret = uncore_type_init(*types, setid); + if (ret) + return ret; + } + return 0; +} + +static void uncore_change_type_ctx(struct zhaoxin_uncore_type *type, int old_cpu, + int new_cpu) +{ + struct zhaoxin_uncore_pmu *pmu = type->pmus; + struct zhaoxin_uncore_box *box; + int i, package; + + package = topology_logical_package_id(old_cpu < 0 ? new_cpu : old_cpu); + for (i = 0; i < type->num_boxes; i++, pmu++) { + box = pmu->boxes[package]; + if (!box) + continue; + + if (old_cpu < 0) { + WARN_ON_ONCE(box->cpu != -1); + box->cpu = new_cpu; + continue; + } + + WARN_ON_ONCE(box->cpu != old_cpu); + box->cpu = -1; + if (new_cpu < 0) + continue; + + uncore_pmu_cancel_hrtimer(box); + perf_pmu_migrate_context(&pmu->pmu, old_cpu, new_cpu); + box->cpu = new_cpu; + } +} + +static void uncore_change_context(struct zhaoxin_uncore_type **uncores, + int old_cpu, int new_cpu) +{ + for (; *uncores; uncores++) + uncore_change_type_ctx(*uncores, old_cpu, new_cpu); +} + +static void uncore_box_unref(struct zhaoxin_uncore_type **types, int id) +{ + struct zhaoxin_uncore_type *type; + struct zhaoxin_uncore_pmu *pmu; + struct zhaoxin_uncore_box *box; + int i; + + for (; *types; types++) { + type = *types; + pmu = type->pmus; + for (i = 0; i < type->num_boxes; i++, pmu++) { + box = pmu->boxes[id]; + if (box && atomic_dec_return(&box->refcnt) == 0) + uncore_box_exit(box); + } + } +} + +static int uncore_event_cpu_offline(unsigned int cpu) +{ + int package, target; + + /* Check if exiting cpu is used for collecting uncore events */ + if (!cpumask_test_and_clear_cpu(cpu, &uncore_cpu_mask)) + goto unref; + /* Find a new cpu to collect uncore events */ + target = cpumask_any_but(topology_core_cpumask(cpu), cpu); + + /* Migrate uncore events to the new target */ + if (target < nr_cpu_ids) + cpumask_set_cpu(target, &uncore_cpu_mask); + else + target = -1; + + uncore_change_context(uncore_msr_uncores, cpu, target); + +unref: + /* Clear the references */ + package = topology_logical_package_id(cpu); + uncore_box_unref(uncore_msr_uncores, package); + return 0; +} + +static int allocate_boxes(struct zhaoxin_uncore_type **types, + unsigned int package, unsigned int cpu) +{ + struct zhaoxin_uncore_box *box, *tmp; + struct zhaoxin_uncore_type *type; + struct zhaoxin_uncore_pmu *pmu; + LIST_HEAD(allocated); + int i; + + /* Try to allocate all required boxes */ + for (; *types; types++) { + type = *types; + pmu = type->pmus; + for (i = 0; i < 
type->num_boxes; i++, pmu++) { + if (pmu->boxes[package]) + continue; + box = uncore_alloc_box(type, cpu_to_node(cpu)); + if (!box) + goto cleanup; + box->pmu = pmu; + box->package_id = package; + list_add(&box->active_list, &allocated); + } + } + /* Install them in the pmus */ + list_for_each_entry_safe(box, tmp, &allocated, active_list) { + list_del_init(&box->active_list); + box->pmu->boxes[package] = box; + } + return 0; + +cleanup: + list_for_each_entry_safe(box, tmp, &allocated, active_list) { + list_del_init(&box->active_list); + kfree(box); + } + return -ENOMEM; +} + +static int uncore_box_ref(struct zhaoxin_uncore_type **types, + int id, unsigned int cpu) +{ + struct zhaoxin_uncore_type *type; + struct zhaoxin_uncore_pmu *pmu; + struct zhaoxin_uncore_box *box; + int i, ret; + + ret = allocate_boxes(types, id, cpu); + if (ret) + return ret; + + for (; *types; types++) { + type = *types; + pmu = type->pmus; + for (i = 0; i < type->num_boxes; i++, pmu++) { + box = pmu->boxes[id]; + if (box && atomic_inc_return(&box->refcnt) == 1) + uncore_box_init(box); + } + } + return 0; +} + +static int uncore_event_cpu_online(unsigned int cpu) +{ + int package, target, msr_ret; + + package = topology_logical_package_id(cpu); + msr_ret = uncore_box_ref(uncore_msr_uncores, package, cpu); + + if (msr_ret) + return -ENOMEM; + + /* + * Check if there is an online cpu in the package + * which collects uncore events already. + */ + target = cpumask_any_and(&uncore_cpu_mask, topology_core_cpumask(cpu)); + if (target < nr_cpu_ids) + return 0; + + cpumask_set_cpu(cpu, &uncore_cpu_mask); + + if (!msr_ret) + uncore_change_context(uncore_msr_uncores, -1, cpu); + + return 0; +} + +static int uncore_pmu_register(struct zhaoxin_uncore_pmu *pmu) +{ + int ret; + + if (!pmu->type->pmu) { + pmu->pmu = (struct pmu) { + .attr_groups = pmu->type->attr_groups, + .task_ctx_nr = perf_invalid_context, + .event_init = uncore_pmu_event_init, + .add = uncore_pmu_event_add, + .del = uncore_pmu_event_del, + .start = uncore_pmu_event_start, + .stop = uncore_pmu_event_stop, + .read = uncore_pmu_event_read, + .module = THIS_MODULE, + }; + } else { + pmu->pmu = *pmu->type->pmu; + pmu->pmu.attr_groups = pmu->type->attr_groups; + } + + if (pmu->type->num_boxes == 1) { + if (strlen(pmu->type->name) > 0) + sprintf(pmu->name, "uncore_%s", pmu->type->name); + else + sprintf(pmu->name, "uncore"); + } else { + sprintf(pmu->name, "uncore_%s_%d", pmu->type->name, + pmu->pmu_idx); + } + + ret = perf_pmu_register(&pmu->pmu, pmu->name, -1); + if (!ret) + pmu->registered = true; + return ret; +} + +static int __init type_pmu_register(struct zhaoxin_uncore_type *type) +{ + int i, ret; + + for (i = 0; i < type->num_boxes; i++) { + ret = uncore_pmu_register(&type->pmus[i]); + if (ret) + return ret; + } + return 0; +} + +static int __init uncore_msr_pmus_register(void) +{ + struct zhaoxin_uncore_type **types = uncore_msr_uncores; + int ret; + + for (; *types; types++) { + ret = type_pmu_register(*types); + if (ret) + return ret; + } + return 0; +} + +static int __init uncore_cpu_init(void) +{ + int ret; + + ret = uncore_types_init(uncore_msr_uncores, true); + if (ret) + goto err; + + ret = uncore_msr_pmus_register(); + if (ret) + goto err; + return 0; +err: + uncore_types_exit(uncore_msr_uncores); + uncore_msr_uncores = empty_uncore; + return ret; +} + + +#define CENTAUR_UNCORE_MODEL_MATCH(model, init) \ + { X86_VENDOR_CENTAUR, 7, model, X86_FEATURE_ANY, (unsigned long)&init } + +#define ZHAOXIN_UNCORE_MODEL_MATCH(model, init) \ + { 
X86_VENDOR_ZHAOXIN, 7, model, X86_FEATURE_ANY, (unsigned long)&init } + +struct zhaoxin_uncore_init_fun { + void (*cpu_init)(void); +}; + +void chx_uncore_cpu_init(void) +{ + uncore_msr_uncores = chx_msr_uncores; +} + +static const struct zhaoxin_uncore_init_fun chx_uncore_init __initconst = { + .cpu_init = chx_uncore_cpu_init, +}; + +static const struct x86_cpu_id zhaoxin_uncore_match[] __initconst = { + CENTAUR_UNCORE_MODEL_MATCH(ZHAOXIN_FAM7_CHX001, chx_uncore_init), + CENTAUR_UNCORE_MODEL_MATCH(ZHAOXIN_FAM7_CHX002, chx_uncore_init), + ZHAOXIN_UNCORE_MODEL_MATCH(ZHAOXIN_FAM7_CHX001, chx_uncore_init), + ZHAOXIN_UNCORE_MODEL_MATCH(ZHAOXIN_FAM7_CHX002, chx_uncore_init), + {}, +}; + +MODULE_DEVICE_TABLE(x86cpu, zhaoxin_uncore_match); + +static int __init zhaoxin_uncore_init(void) +{ + const struct x86_cpu_id *id; + struct zhaoxin_uncore_init_fun *uncore_init; + int cret = 0, ret; + + id = x86_match_cpu(zhaoxin_uncore_match); + + if (!id) + return -ENODEV; + + if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) + return -ENODEV; + + max_packages = topology_max_packages(); + + pr_info("welcome to uncore!\n"); + + uncore_init = (struct zhaoxin_uncore_init_fun *)id->driver_data; + + if (uncore_init->cpu_init) { + uncore_init->cpu_init(); + cret = uncore_cpu_init(); + } + + if (cret) + return -ENODEV; + + ret = cpuhp_setup_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE, + "perf/x86/zhaoxin/uncore:online", + uncore_event_cpu_online, + uncore_event_cpu_offline); + if (ret) + goto err; + + pr_info("uncore init success!\n"); + return 0; +err: + uncore_types_exit(uncore_msr_uncores); + return ret; +} +module_init(zhaoxin_uncore_init); + +static void __exit zhaoxin_uncore_exit(void) +{ + cpuhp_remove_state(CPUHP_AP_PERF_X86_UNCORE_ONLINE); + uncore_types_exit(uncore_msr_uncores); +} +module_exit(zhaoxin_uncore_exit); + +MODULE_LICENSE("GPL"); diff --git a/arch/x86/events/zhaoxin/uncore.h b/arch/x86/events/zhaoxin/uncore.h new file mode 100644 index 000000000000..3521123dc95d --- /dev/null +++ b/arch/x86/events/zhaoxin/uncore.h @@ -0,0 +1,308 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include <linux/slab.h> +#include <linux/pci.h> +#include <asm/apicdef.h> +#include <linux/io-64-nonatomic-lo-hi.h> + +#include <linux/perf_event.h> +#include "../perf_event.h" + +#define ZHAOXIN_FAM7_CHX001 0x1b +#define ZHAOXIN_FAM7_CHX002 0x3b + +#define UNCORE_PMU_NAME_LEN 32 +#define UNCORE_PMU_HRTIMER_INTERVAL (60LL * NSEC_PER_SEC) +#define UNCORE_CHX_IMC_HRTIMER_INTERVAL (5ULL * NSEC_PER_SEC) + + +#define UNCORE_FIXED_EVENT 0xff +#define UNCORE_PMC_IDX_MAX_GENERIC 4 +#define UNCORE_PMC_IDX_MAX_FIXED 1 +#define UNCORE_PMC_IDX_FIXED UNCORE_PMC_IDX_MAX_GENERIC + +#define UNCORE_PMC_IDX_MAX (UNCORE_PMC_IDX_FIXED + 1) + +struct zhaoxin_uncore_ops; +struct zhaoxin_uncore_pmu; +struct zhaoxin_uncore_box; +struct uncore_event_desc; + +struct zhaoxin_uncore_type { + const char *name; + int num_counters; + int num_boxes; + int perf_ctr_bits; + int fixed_ctr_bits; + unsigned perf_ctr; + unsigned event_ctl; + unsigned event_mask; + unsigned event_mask_ext; + unsigned fixed_ctr; + unsigned fixed_ctl; + unsigned box_ctl; + unsigned msr_offset; + unsigned num_shared_regs:8; + unsigned single_fixed:1; + unsigned pair_ctr_ctl:1; + unsigned *msr_offsets; + struct event_constraint unconstrainted; + struct event_constraint *constraints; + struct zhaoxin_uncore_pmu *pmus; + struct zhaoxin_uncore_ops *ops; + struct uncore_event_desc *event_descs; + const struct attribute_group *attr_groups[4]; + struct pmu *pmu; /* for custom pmu ops */ +}; + +#define 
pmu_group attr_groups[0] +#define format_group attr_groups[1] +#define events_group attr_groups[2] + +struct zhaoxin_uncore_ops { + void (*init_box)(struct zhaoxin_uncore_box *); + void (*exit_box)(struct zhaoxin_uncore_box *); + void (*disable_box)(struct zhaoxin_uncore_box *); + void (*enable_box)(struct zhaoxin_uncore_box *); + void (*disable_event)(struct zhaoxin_uncore_box *, struct perf_event *); + void (*enable_event)(struct zhaoxin_uncore_box *, struct perf_event *); + u64 (*read_counter)(struct zhaoxin_uncore_box *, struct perf_event *); + int (*hw_config)(struct zhaoxin_uncore_box *, struct perf_event *); + struct event_constraint *(*get_constraint)(struct zhaoxin_uncore_box *, + struct perf_event *); + void (*put_constraint)(struct zhaoxin_uncore_box *, struct perf_event *); +}; + +struct zhaoxin_uncore_pmu { + struct pmu pmu; + char name[UNCORE_PMU_NAME_LEN]; + int pmu_idx; + int func_id; + bool registered; + atomic_t activeboxes; + struct zhaoxin_uncore_type *type; + struct zhaoxin_uncore_box **boxes; +}; + +struct zhaoxin_uncore_extra_reg { + raw_spinlock_t lock; + u64 config, config1, config2; + atomic_t ref; +}; + +struct zhaoxin_uncore_box { + int pci_phys_id; + int package_id; /*Package ID */ + int n_active; /* number of active events */ + int n_events; + int cpu; /* cpu to collect events */ + unsigned long flags; + atomic_t refcnt; + struct perf_event *events[UNCORE_PMC_IDX_MAX]; + struct perf_event *event_list[UNCORE_PMC_IDX_MAX]; + struct event_constraint *event_constraint[UNCORE_PMC_IDX_MAX]; + unsigned long active_mask[BITS_TO_LONGS(UNCORE_PMC_IDX_MAX)]; + u64 tags[UNCORE_PMC_IDX_MAX]; + struct pci_dev *pci_dev; + struct zhaoxin_uncore_pmu *pmu; + u64 hrtimer_duration; /* hrtimer timeout for this box */ + struct hrtimer hrtimer; + struct list_head list; + struct list_head active_list; + void __iomem *io_addr; + struct zhaoxin_uncore_extra_reg shared_regs[0]; +}; + +#define UNCORE_BOX_FLAG_INITIATED 0 + +struct uncore_event_desc { + struct kobj_attribute attr; + const char *config; +}; + +ssize_t zx_uncore_event_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf); + +#define ZHAOXIN_UNCORE_EVENT_DESC(_name, _config) \ +{ \ + .attr = __ATTR(_name, 0444, zx_uncore_event_show, NULL), \ + .config = _config, \ +} + +#define DEFINE_UNCORE_FORMAT_ATTR(_var, _name, _format) \ +static ssize_t __uncore_##_var##_show(struct kobject *kobj, \ + struct kobj_attribute *attr, \ + char *page) \ +{ \ + BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE); \ + return sprintf(page, _format "\n"); \ +} \ +static struct kobj_attribute format_attr_##_var = \ + __ATTR(_name, 0444, __uncore_##_var##_show, NULL) + +static inline bool uncore_pmc_fixed(int idx) +{ + return idx == UNCORE_PMC_IDX_FIXED; +} + +static inline unsigned uncore_msr_box_offset(struct zhaoxin_uncore_box *box) +{ + struct zhaoxin_uncore_pmu *pmu = box->pmu; + + return pmu->type->msr_offsets ? 
+ pmu->type->msr_offsets[pmu->pmu_idx] : + pmu->type->msr_offset * pmu->pmu_idx; +} + +static inline unsigned uncore_msr_box_ctl(struct zhaoxin_uncore_box *box) +{ + if (!box->pmu->type->box_ctl) + return 0; + return box->pmu->type->box_ctl + uncore_msr_box_offset(box); +} + +static inline unsigned uncore_msr_fixed_ctl(struct zhaoxin_uncore_box *box) +{ + if (!box->pmu->type->fixed_ctl) + return 0; + return box->pmu->type->fixed_ctl + uncore_msr_box_offset(box); +} + +static inline unsigned uncore_msr_fixed_ctr(struct zhaoxin_uncore_box *box) +{ + return box->pmu->type->fixed_ctr + uncore_msr_box_offset(box); +} + +static inline +unsigned uncore_msr_event_ctl(struct zhaoxin_uncore_box *box, int idx) +{ + return box->pmu->type->event_ctl + + (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) + + uncore_msr_box_offset(box); +} + +static inline +unsigned uncore_msr_perf_ctr(struct zhaoxin_uncore_box *box, int idx) +{ + return box->pmu->type->perf_ctr + + (box->pmu->type->pair_ctr_ctl ? 2 * idx : idx) + + uncore_msr_box_offset(box); +} + +static inline +unsigned uncore_fixed_ctl(struct zhaoxin_uncore_box *box) +{ + return uncore_msr_fixed_ctl(box); +} + +static inline +unsigned uncore_fixed_ctr(struct zhaoxin_uncore_box *box) +{ + return uncore_msr_fixed_ctr(box); +} + +static inline +unsigned uncore_event_ctl(struct zhaoxin_uncore_box *box, int idx) +{ + return uncore_msr_event_ctl(box, idx); +} + +static inline +unsigned uncore_perf_ctr(struct zhaoxin_uncore_box *box, int idx) +{ + return uncore_msr_perf_ctr(box, idx); +} + +static inline int uncore_perf_ctr_bits(struct zhaoxin_uncore_box *box) +{ + return box->pmu->type->perf_ctr_bits; +} + +static inline int uncore_fixed_ctr_bits(struct zhaoxin_uncore_box *box) +{ + return box->pmu->type->fixed_ctr_bits; +} + +static inline int uncore_num_counters(struct zhaoxin_uncore_box *box) +{ + return box->pmu->type->num_counters; +} + +static inline void uncore_disable_box(struct zhaoxin_uncore_box *box) +{ + if (box->pmu->type->ops->disable_box) + box->pmu->type->ops->disable_box(box); +} + +static inline void uncore_enable_box(struct zhaoxin_uncore_box *box) +{ + if (box->pmu->type->ops->enable_box) + box->pmu->type->ops->enable_box(box); +} + +static inline void uncore_disable_event(struct zhaoxin_uncore_box *box, + struct perf_event *event) +{ + box->pmu->type->ops->disable_event(box, event); +} + +static inline void uncore_enable_event(struct zhaoxin_uncore_box *box, + struct perf_event *event) +{ + box->pmu->type->ops->enable_event(box, event); +} + +static inline u64 uncore_read_counter(struct zhaoxin_uncore_box *box, + struct perf_event *event) +{ + return box->pmu->type->ops->read_counter(box, event); +} + +static inline void uncore_box_init(struct zhaoxin_uncore_box *box) +{ + if (!test_and_set_bit(UNCORE_BOX_FLAG_INITIATED, &box->flags)) { + if (box->pmu->type->ops->init_box) + box->pmu->type->ops->init_box(box); + } +} + +static inline void uncore_box_exit(struct zhaoxin_uncore_box *box) +{ + if (test_and_clear_bit(UNCORE_BOX_FLAG_INITIATED, &box->flags)) { + if (box->pmu->type->ops->exit_box) + box->pmu->type->ops->exit_box(box); + } +} + +static inline bool uncore_box_is_fake(struct zhaoxin_uncore_box *box) +{ + return (box->package_id < 0); +} + +static inline struct zhaoxin_uncore_pmu *uncore_event_to_pmu(struct perf_event *event) +{ + return container_of(event->pmu, struct zhaoxin_uncore_pmu, pmu); +} + +static inline struct zhaoxin_uncore_box *uncore_event_to_box(struct perf_event *event) +{ + return event->pmu_private; +} + + +static 
struct zhaoxin_uncore_box *uncore_pmu_to_box(struct zhaoxin_uncore_pmu *pmu, int cpu); +static u64 uncore_msr_read_counter(struct zhaoxin_uncore_box *box, struct perf_event *event); + +static void uncore_pmu_start_hrtimer(struct zhaoxin_uncore_box *box); +static void uncore_pmu_cancel_hrtimer(struct zhaoxin_uncore_box *box); +static void uncore_pmu_event_start(struct perf_event *event, int flags); +static void uncore_pmu_event_stop(struct perf_event *event, int flags); +static int uncore_pmu_event_add(struct perf_event *event, int flags); +static void uncore_pmu_event_del(struct perf_event *event, int flags); +static void uncore_pmu_event_read(struct perf_event *event); +static void uncore_perf_event_update(struct zhaoxin_uncore_box *box, struct perf_event *event); +struct event_constraint * +uncore_get_constraint(struct zhaoxin_uncore_box *box, struct perf_event *event); +void uncore_put_constraint(struct zhaoxin_uncore_box *box, struct perf_event *event); +u64 uncore_shared_reg_config(struct zhaoxin_uncore_box *box, int idx); + +void chx_uncore_cpu_init(void); diff --git a/arch/x86/kernel/cpu/perfctr-watchdog.c b/arch/x86/kernel/cpu/perfctr-watchdog.c index 9556930cd8c1..a548d9104604 100644 --- a/arch/x86/kernel/cpu/perfctr-watchdog.c +++ b/arch/x86/kernel/cpu/perfctr-watchdog.c @@ -63,6 +63,10 @@ static inline unsigned int nmi_perfctr_msr_to_bit(unsigned int msr) case 15: return msr - MSR_P4_BPU_PERFCTR0; } + break; + case X86_VENDOR_ZHAOXIN: + case X86_VENDOR_CENTAUR: + return msr - MSR_ARCH_PERFMON_PERFCTR0; } return 0; } @@ -92,6 +96,10 @@ static inline unsigned int nmi_evntsel_msr_to_bit(unsigned int msr) case 15: return msr - MSR_P4_BSU_ESCR0; } + break; + case X86_VENDOR_ZHAOXIN: + case X86_VENDOR_CENTAUR: + return msr - MSR_ARCH_PERFMON_EVENTSEL0; } return 0;
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
mainline inclusion from mainline-5.6.9 commit 3375590623e4a132b19a8740512f4deb95728933 category: PCI bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Add Zhaoxin Vendor ID to pci_ids.h
Link: https://lore.kernel.org/r/20200327091148.5190-2-RaymondPang-oc@zhaoxin.com Signed-off-by: Raymond Pang RaymondPang-oc@zhaoxin.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- include/linux/pci_ids.h | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index 277d8f87d551..31e226dcd923 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -2596,6 +2596,8 @@
#define PCI_VENDOR_ID_AMAZON 0x1d0f
+#define PCI_VENDOR_ID_ZHAOXIN 0x1d17 + #define PCI_VENDOR_ID_HYGON 0x1d94
#define PCI_VENDOR_ID_TEKRAM 0x1de1
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Add Serial ATA controller support for Zhaoxin platforms.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/ata/Kconfig | 8 + drivers/ata/Makefile | 1 + drivers/ata/sata_zhaoxin.c | 384 +++++++++++++++++++++++++++++++++++++ 3 files changed, 393 insertions(+) create mode 100644 drivers/ata/sata_zhaoxin.c
diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig index 99698d7fe585..78a6338d074e 100644 --- a/drivers/ata/Kconfig +++ b/drivers/ata/Kconfig @@ -494,6 +494,14 @@ config SATA_VITESSE
If unsure, say N.
+config SATA_ZHAOXIN + tristate "ZhaoXin SATA support" + depends on PCI + help + This option enables support for ZhaoXin Serial ATA. + + If unsure, say N. + comment "PATA SFF controllers with BMDMA"
config PATA_ALI diff --git a/drivers/ata/Makefile b/drivers/ata/Makefile index d21cdd83f7ab..2d9220311187 100644 --- a/drivers/ata/Makefile +++ b/drivers/ata/Makefile @@ -44,6 +44,7 @@ obj-$(CONFIG_SATA_SIL) += sata_sil.o obj-$(CONFIG_SATA_SIS) += sata_sis.o obj-$(CONFIG_SATA_SVW) += sata_svw.o obj-$(CONFIG_SATA_ULI) += sata_uli.o +obj-$(CONFIG_SATA_ZHAOXIN) += sata_zhaoxin.o obj-$(CONFIG_SATA_VIA) += sata_via.o obj-$(CONFIG_SATA_VITESSE) += sata_vsc.o
diff --git a/drivers/ata/sata_zhaoxin.c b/drivers/ata/sata_zhaoxin.c new file mode 100644 index 000000000000..f4a694355f13 --- /dev/null +++ b/drivers/ata/sata_zhaoxin.c @@ -0,0 +1,384 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * sata_zhaoxin.c - ZhaoXin Serial ATA controllers + */ + +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/pci.h> +#include <linux/blkdev.h> +#include <linux/delay.h> +#include <linux/device.h> +#include <scsi/scsi.h> +#include <scsi/scsi_cmnd.h> +#include <scsi/scsi_host.h> +#include <linux/libata.h> + +#define DRV_NAME "sata_zx" +#define DRV_VERSION "2.6.1" + +enum board_ids_enum { + cnd001, +}; + +enum { + SATA_CHAN_ENAB = 0x40, /* SATA channel enable */ + SATA_INT_GATE = 0x41, /* SATA interrupt gating */ + SATA_NATIVE_MODE = 0x42, /* Native mode enable */ + PATA_UDMA_TIMING = 0xB3, /* PATA timing for DMA/ cable detect */ + PATA_PIO_TIMING = 0xAB, /* PATA timing register */ + + PORT0 = (1 << 1), + PORT1 = (1 << 0), + ALL_PORTS = PORT0 | PORT1, + + NATIVE_MODE_ALL = (1 << 7) | (1 << 6) | (1 << 5) | (1 << 4), + + SATA_EXT_PHY = (1 << 6), /* 0==use PATA, 1==ext phy */ +}; + +static int szx_init_one(struct pci_dev *pdev, const struct pci_device_id *ent); +static int cnd001_scr_read(struct ata_link *link, unsigned int scr, u32 *val); +static int cnd001_scr_write(struct ata_link *link, unsigned int scr, u32 val); +static int szx_hardreset(struct ata_link *link, unsigned int *class, + unsigned long deadline); + +static void szx_tf_load(struct ata_port *ap, const struct ata_taskfile *tf); + +static const struct pci_device_id szx_pci_tbl[] = { + { PCI_VDEVICE(ZHAOXIN, 0x9002), cnd001 }, + { PCI_VDEVICE(ZHAOXIN, 0x9003), cnd001 }, + + { } /* terminate list */ +}; + +static struct pci_driver szx_pci_driver = { + .name = DRV_NAME, + .id_table = szx_pci_tbl, + .probe = szx_init_one, +#ifdef CONFIG_PM_SLEEP + .suspend = ata_pci_device_suspend, + .resume = ata_pci_device_resume, +#endif + .remove = ata_pci_remove_one, +}; + +static struct scsi_host_template szx_sht = { + ATA_BMDMA_SHT(DRV_NAME), +}; + +static struct ata_port_operations szx_base_ops = { + .inherits = &ata_bmdma_port_ops, + .sff_tf_load = szx_tf_load, +}; + +static struct ata_port_operations cnd001_ops = { + .inherits = &szx_base_ops, + .hardreset = szx_hardreset, + .scr_read = cnd001_scr_read, + .scr_write = cnd001_scr_write, +}; + +static struct ata_port_info cnd001_port_info = { + .flags = ATA_FLAG_SATA | ATA_FLAG_SLAVE_POSS, + .pio_mask = ATA_PIO4, + .mwdma_mask = ATA_MWDMA2, + .udma_mask = ATA_UDMA6, + .port_ops = &cnd001_ops, +}; + + +static int szx_hardreset(struct ata_link *link, unsigned int *class, + unsigned long deadline) +{ + int rc; + + rc = sata_std_hardreset(link, class, deadline); + if (!rc || rc == -EAGAIN) { + struct ata_port *ap = link->ap; + int pmp = link->pmp; + int tmprc; + + if (pmp) { + ap->ops->sff_dev_select(ap, pmp); + tmprc = ata_sff_wait_ready(&ap->link, deadline); + } else { + tmprc = ata_sff_wait_ready(link, deadline); + } + if (tmprc) + ata_link_err(link, "COMRESET failed for wait (errno=%d)\n", + rc); + else + ata_link_err(link, "wait for bsy success\n"); + + ata_link_err(link, "COMRESET success (errno=%d) ap=%d link %d\n", + rc, link->ap->port_no, link->pmp); + } else { + ata_link_err(link, "COMRESET failed (errno=%d) ap=%d link %d\n", + rc, link->ap->port_no, link->pmp); + } + return rc; +} + +static int cnd001_scr_read(struct ata_link *link, unsigned int scr, u32 *val) +{ + static const u8 ipm_tbl[] = { 1, 2, 6, 0 }; + struct pci_dev *pdev = 
to_pci_dev(link->ap->host->dev); + int slot = 2 * link->ap->port_no + link->pmp; + u32 v = 0; + u8 raw; + + switch (scr) { + case SCR_STATUS: + pci_read_config_byte(pdev, 0xA0 + slot, &raw); + + /* read the DET field, bit0 and 1 of the config byte */ + v |= raw & 0x03; + + /* read the SPD field, bit4 of the configure byte */ + v |= raw & 0x30; + + /* read the IPM field, bit2 and 3 of the config byte */ + v |= ((ipm_tbl[(raw >> 2) & 0x3])<<8); + break; + + case SCR_ERROR: + /* devices other than 5287 uses 0xA8 as base */ + WARN_ON(pdev->device != 0x9002 && pdev->device != 0x9003); + pci_write_config_byte(pdev, 0x42, slot); + pci_read_config_dword(pdev, 0xA8, &v); + break; + + case SCR_CONTROL: + pci_read_config_byte(pdev, 0xA4 + slot, &raw); + + /* read the DET field, bit0 and bit1 */ + v |= ((raw & 0x02) << 1) | (raw & 0x01); + + /* read the IPM field, bit2 and bit3 */ + v |= ((raw >> 2) & 0x03) << 8; + + break; + + default: + return -EINVAL; + } + + *val = v; + return 0; +} + +static int cnd001_scr_write(struct ata_link *link, unsigned int scr, u32 val) +{ + struct pci_dev *pdev = to_pci_dev(link->ap->host->dev); + int slot = 2 * link->ap->port_no + link->pmp; + u32 v = 0; + + WARN_ON(pdev == NULL); + + switch (scr) { + case SCR_ERROR: + /* devices 0x9002 uses 0xA8 as base */ + WARN_ON(pdev->device != 0x9002 && pdev->device != 0x9003); + pci_write_config_byte(pdev, 0x42, slot); + pci_write_config_dword(pdev, 0xA8, val); + return 0; + + case SCR_CONTROL: + /* set the DET field */ + v |= ((val & 0x4) >> 1) | (val & 0x1); + + /* set the IPM field */ + v |= ((val >> 8) & 0x3) << 2; + + + pci_write_config_byte(pdev, 0xA4 + slot, v); + + + return 0; + + default: + return -EINVAL; + } +} + + +/** + * szx_tf_load - send taskfile registers to host controller + * @ap: Port to which output is sent + * @tf: ATA taskfile register set + * + * Outputs ATA taskfile to standard ATA host controller. + * + * This is to fix the internal bug of zx chipsets, which will + * reset the device register after changing the IEN bit on ctl + * register. + */ +static void szx_tf_load(struct ata_port *ap, const struct ata_taskfile *tf) +{ + struct ata_taskfile ttf; + + if (tf->ctl != ap->last_ctl) { + ttf = *tf; + ttf.flags |= ATA_TFLAG_DEVICE; + tf = &ttf; + } + ata_sff_tf_load(ap, tf); +} + +static const unsigned int szx_bar_sizes[] = { + 8, 4, 8, 4, 16, 256 +}; + +static const unsigned int cnd001_bar_sizes0[] = { + 8, 4, 8, 4, 16, 0 +}; + +static const unsigned int cnd001_bar_sizes1[] = { + 8, 4, 0, 0, 16, 0 +}; + +static int cnd001_prepare_host(struct pci_dev *pdev, struct ata_host **r_host) +{ + const struct ata_port_info *ppi0[] = { + &cnd001_port_info, NULL + }; + const struct ata_port_info *ppi1[] = { + &cnd001_port_info, &ata_dummy_port_info + }; + struct ata_host *host; + int i, rc; + + if (pdev->device == 0x9002) + rc = ata_pci_bmdma_prepare_host(pdev, ppi0, &host); + else if (pdev->device == 0x9003) + rc = ata_pci_bmdma_prepare_host(pdev, ppi1, &host); + else + rc = -EINVAL; + + if (rc) + return rc; + + *r_host = host; + + /* cnd001 9002 hosts four sata ports as M/S of the two channels */ + /* cnd001 9003 hosts two sata ports as M/S of the one channel */ + for (i = 0; i < host->n_ports; i++) + ata_slave_link_init(host->ports[i]); + + return 0; +} + +static void szx_configure(struct pci_dev *pdev, int board_id) +{ + u8 tmp8; + + pci_read_config_byte(pdev, PCI_INTERRUPT_LINE, &tmp8); + dev_info(&pdev->dev, "routed to hard irq line %d\n", + (int) (tmp8 & 0xf0) == 0xf0 ? 
0 : tmp8 & 0x0f); + + /* make sure SATA channels are enabled */ + pci_read_config_byte(pdev, SATA_CHAN_ENAB, &tmp8); + if ((tmp8 & ALL_PORTS) != ALL_PORTS) { + dev_dbg(&pdev->dev, "enabling SATA channels (0x%x)\n", + (int)tmp8); + tmp8 |= ALL_PORTS; + pci_write_config_byte(pdev, SATA_CHAN_ENAB, tmp8); + } + + /* make sure interrupts for each channel sent to us */ + pci_read_config_byte(pdev, SATA_INT_GATE, &tmp8); + if ((tmp8 & ALL_PORTS) != ALL_PORTS) { + dev_dbg(&pdev->dev, "enabling SATA channel interrupts (0x%x)\n", + (int) tmp8); + tmp8 |= ALL_PORTS; + pci_write_config_byte(pdev, SATA_INT_GATE, tmp8); + } + + /* make sure native mode is enabled */ + pci_read_config_byte(pdev, SATA_NATIVE_MODE, &tmp8); + if ((tmp8 & NATIVE_MODE_ALL) != NATIVE_MODE_ALL) { + dev_dbg(&pdev->dev, + "enabling SATA channel native mode (0x%x)\n", + (int) tmp8); + tmp8 |= NATIVE_MODE_ALL; + pci_write_config_byte(pdev, SATA_NATIVE_MODE, tmp8); + } +} + +static int szx_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) +{ + unsigned int i; + int rc; + struct ata_host *host = NULL; + int board_id = (int) ent->driver_data; + const unsigned int *bar_sizes; + int legacy_mode = 0; + + ata_print_version_once(&pdev->dev, DRV_VERSION); + + if (pdev->device == 0x9002 || pdev->device == 0x9003) { + if ((pdev->class >> 8) == PCI_CLASS_STORAGE_IDE) { + u8 tmp8, mask; + + /* TODO: What if one channel is in native mode ... */ + pci_read_config_byte(pdev, PCI_CLASS_PROG, &tmp8); + mask = (1 << 2) | (1 << 0); + if ((tmp8 & mask) != mask) + legacy_mode = 1; + } + if (legacy_mode) + return -EINVAL; + } + + rc = pcim_enable_device(pdev); + if (rc) + return rc; + + if (board_id == cnd001 && pdev->device == 0x9002) + bar_sizes = &cnd001_bar_sizes0[0]; + else if (board_id == cnd001 && pdev->device == 0x9003) + bar_sizes = &cnd001_bar_sizes1[0]; + else + bar_sizes = &szx_bar_sizes[0]; + + for (i = 0; i < ARRAY_SIZE(szx_bar_sizes); i++) { + if ((pci_resource_start(pdev, i) == 0) || + (pci_resource_len(pdev, i) < bar_sizes[i])) { + if (bar_sizes[i] == 0) + continue; + + dev_err(&pdev->dev, + "invalid PCI BAR %u (sz 0x%llx, val 0x%llx)\n", + i, + (unsigned long long)pci_resource_start(pdev, i), + (unsigned long long)pci_resource_len(pdev, i)); + + return -ENODEV; + } + } + + switch (board_id) { + case cnd001: + rc = cnd001_prepare_host(pdev, &host); + break; + default: + rc = -EINVAL; + } + if (rc) + return rc; + + szx_configure(pdev, board_id); + + pci_set_master(pdev); + return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt, + IRQF_SHARED, &szx_sht); +} + +module_pci_driver(szx_pci_driver); + +MODULE_AUTHOR("Yanchen:YanchenSun@zhaoxin.com"); +MODULE_DESCRIPTION("SCSI low-level driver for ZX SATA controllers"); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, szx_pci_tbl); +MODULE_VERSION(DRV_VERSION);
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Add LPM U1/U2 feature support for Zhaoxin xHCI controllers
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/usb/host/xhci-pci.c | 4 ++++ drivers/usb/host/xhci.c | 34 ++++++++++++++++++++++++++++++++-- drivers/usb/host/xhci.h | 1 + 3 files changed, 37 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index 6b828d79a6d8..e1b2dec099f2 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -228,6 +228,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) } if (pdev->vendor == PCI_VENDOR_ID_VIA) xhci->quirks |= XHCI_RESET_ON_RESUME; + if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) { + xhci->quirks |= XHCI_LPM_SUPPORT; + xhci->quirks |= XHCI_ZHAOXIN_HOST; + }
/* See https://bugzilla.kernel.org/show_bug.cgi?id=79511 */ if (pdev->vendor == PCI_VENDOR_ID_VIA && diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index 69617b8f5e00..8848e98c4824 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -4569,7 +4569,7 @@ static u16 xhci_calculate_u1_timeout(struct xhci_hcd *xhci, { unsigned long long timeout_ns;
- if (xhci->quirks & XHCI_INTEL_HOST) + if (xhci->quirks & (XHCI_INTEL_HOST | XHCI_ZHAOXIN_HOST)) timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc); else timeout_ns = udev->u1_params.sel; @@ -4633,7 +4633,7 @@ static u16 xhci_calculate_u2_timeout(struct xhci_hcd *xhci, { unsigned long long timeout_ns;
- if (xhci->quirks & XHCI_INTEL_HOST) + if (xhci->quirks & (XHCI_INTEL_HOST | XHCI_ZHAOXIN_HOST)) timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc); else timeout_ns = udev->u2_params.sel; @@ -4738,12 +4738,42 @@ static int xhci_check_intel_tier_policy(struct usb_device *udev, return -E2BIG; }
+static int xhci_check_zhaoxin_tier_policy(struct usb_device *udev, + enum usb3_link_state state) +{ + struct usb_device *parent; + unsigned int num_hubs; + char *state_name; + + if (state == USB3_LPM_U1) + state_name = "U1"; + else if (state == USB3_LPM_U2) + state_name = "U2"; + else + state_name = "Unknown"; + /* Don't enable U1/U2 if the device is on an external hub*/ + for (parent = udev->parent, num_hubs = 0; parent->parent; + parent = parent->parent) + num_hubs++; + + if (num_hubs < 1) + return 0; + + dev_dbg(&udev->dev, "Disabling %s link state for device" \ + " below external hub.\n", state_name); + dev_dbg(&udev->dev, "Plug device into root port " \ + "to decrease power consumption.\n"); + return -E2BIG; +} + static int xhci_check_tier_policy(struct xhci_hcd *xhci, struct usb_device *udev, enum usb3_link_state state) { if (xhci->quirks & XHCI_INTEL_HOST) return xhci_check_intel_tier_policy(udev, state); + else if (xhci->quirks & XHCI_ZHAOXIN_HOST) + return xhci_check_zhaoxin_tier_policy(udev, state); else return 0; } diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index 7a4195f8cd1c..069390a1f2ac 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -1872,6 +1872,7 @@ struct xhci_hcd { #define XHCI_ZERO_64B_REGS BIT_ULL(32) #define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34) #define XHCI_SNPS_BROKEN_SUSPEND BIT_ULL(35) +#define XHCI_ZHAOXIN_HOST BIT_ULL(36) #define XHCI_DISABLE_SPARSE BIT_ULL(38)
unsigned int num_active_eps;
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
mainline inclusion from mainline-5.6.9 commit 0325837c51cb7c9a5bd3e354ac0c0cda0667d50e category: PCI bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Some Zhaoxin endpoints are implemented as multi-function devices without an ACS capability, but they actually don't support peer-to-peer transactions. Add ACS quirks to declare DMA isolation.
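For reference, pci_quirk_mf_endpoint_acs() already exists in drivers/pci/quirks.c; the following is a minimal sketch of its logic (paraphrased from the 4.19-era source, shown only to illustrate what the new table entries opt these devices into, not as part of this patch):

static int pci_quirk_mf_endpoint_acs(struct pci_dev *dev, u16 acs_flags)
{
	/*
	 * Multi-function endpoints that cannot do peer-to-peer between
	 * functions effectively provide Source Validation, Request
	 * Redirect, Completion Redirect and Upstream Forwarding, so
	 * mask those bits out and report "enabled" only if nothing
	 * else was requested.
	 */
	acs_flags &= ~(PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR |
		       PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT);

	return acs_flags ? 0 : 1;
}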
Link: https://lore.kernel.org/r/20200327091148.5190-3-RaymondPang-oc@zhaoxin.com Signed-off-by: Raymond Pang RaymondPang-oc@zhaoxin.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/pci/quirks.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index edb6138a9e7f..8f072a511fdd 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -4767,6 +4767,10 @@ static const struct pci_dev_acs_enabled { { PCI_VENDOR_ID_AMPERE, 0xE00B, pci_quirk_xgene_acs }, { PCI_VENDOR_ID_AMPERE, 0xE00C, pci_quirk_xgene_acs }, { PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs }, + /* Zhaoxin multi-function devices */ + { PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs }, + { PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs }, + { PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs }, { 0 } };
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
mainline inclusion from mainline-5.6.9 commit 299bd044a6f332b4a6c8f708575c27cad70a35c1 category: PCI bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Adapt to current kernel code
Many Zhaoxin Root Ports and Switch Downstream Ports do provide ACS-like capability but have no ACS Capability Structure. Peer-to-Peer transactions could be blocked between these ports, so add a quirk so that devices behind them can be assigned to different IOMMU groups.
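The quirk table is consulted by pci_dev_specific_acs_enabled(); a condensed sketch of that walk (simplified from drivers/pci/quirks.c, for orientation only) shows why the new entry can use PCI_ANY_ID for the device ID: matching is per-entry, and a negative return from a callback lets the walk continue, so the quirk's -ENOTTY for non-port devices is harmless.

int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags)
{
	const struct pci_dev_acs_enabled *i;
	int ret;

	for (i = pci_dev_acs_enabled; i->acs_enabled; i++) {
		if ((i->vendor == dev->vendor ||
		     i->vendor == (u16)PCI_ANY_ID) &&
		    (i->device == dev->device ||
		     i->device == (u16)PCI_ANY_ID)) {
			ret = i->acs_enabled(dev, acs_flags);
			if (ret >= 0)
				return ret;	/* quirk decided */
		}
	}

	return -ENOTTY;	/* no quirk applies */
}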
Link: https://lore.kernel.org/r/20200327091148.5190-4-RaymondPang-oc@zhaoxin.com Signed-off-by: Raymond Pang RaymondPang-oc@zhaoxin.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/pci/quirks.c | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+)
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index 8f072a511fdd..5bfe2457aea9 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -4281,6 +4281,31 @@ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f0d, PCI_CLASS_NOT_DEFINED DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f0e, PCI_CLASS_NOT_DEFINED, 8, quirk_relaxedordering_disable);
+ /* + * Many Zhaoxin Root Ports and Switch Downstream Ports have no ACS capability. + * But the implementation could block peer-to-peer transactions between them + * and provide ACS-like functionality. + */ +static int pci_quirk_zhaoxin_pcie_ports_acs(struct pci_dev *dev, u16 acs_flags) +{ + u16 flags = (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_SV); + int ret = acs_flags & ~flags ? 0 : 1; + + if (!pci_is_pcie(dev) || + ((pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) && + (pci_pcie_type(dev) != PCI_EXP_TYPE_DOWNSTREAM))) + return -ENOTTY; + + switch (dev->device) { + case 0x0710 ... 0x071e: + case 0x0721: + case 0x0723 ... 0x0732: + return ret; + } + + return false; +} + /* * The AMD ARM A1100 (aka "SEATTLE") SoC has a bug in its PCIe Root Complex * where Upstream Transaction Layer Packets with the Relaxed Ordering @@ -4771,6 +4796,8 @@ static const struct pci_dev_acs_enabled { { PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs }, { PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs }, { PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs }, + /* Zhaoxin Root/Downstream Ports */ + { PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs }, { 0 } };
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
On some Zhaoxin platforms, the xHCI controller prefetches TRBs to improve performance. However, this prefetch may cross a page boundary and access memory that does not belong to the xHCI driver. To fix this issue, allocate two pages for each TRB segment but use only the first page.
The patch is scheduled to be submitted to the kernel mainline in 2021.
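As a quick sanity check of why doubling the pool element size and alignment is sufficient (TRB sizes taken from this tree's xhci.h; the exact prefetch depth is not documented, so this is only a sketch of the reasoning):

#define TRB_SIZE		16	/* bytes per TRB */
#define TRBS_PER_SEGMENT	256
#define TRB_SEGMENT_SIZE	(TRBS_PER_SEGMENT * TRB_SIZE)	/* 4096 */

/*
 * A prefetch that runs past the last TRB of a segment touches memory
 * just above the 4 KiB segment.  With the quirk, every element in the
 * pool is 2 * TRB_SEGMENT_SIZE bytes and aligned to that same value,
 * so the page following the ring always belongs to the same element
 * and the prefetcher can never read another owner's page.
 */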
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/usb/host/xhci-mem.c | 11 +++++++++-- drivers/usb/host/xhci-pci.c | 5 +++++ drivers/usb/host/xhci.h | 1 + 3 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 9e87c282a743..a6101f095db8 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -2450,8 +2450,15 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
 	 * and our use of dma addresses in the trb_address_map radix tree needs
 	 * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
 	 */
-	xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
-			TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
+	/* With the xHCI TRB prefetch quirk: fix the cross page boundary
+	 * access issue in IOV environments */
+	if (xhci->quirks & XHCI_ZHAOXIN_TRB_FETCH) {
+		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+				TRB_SEGMENT_SIZE*2, TRB_SEGMENT_SIZE*2, xhci->page_size*2);
+	} else {
+		xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+				TRB_SEGMENT_SIZE, TRB_SEGMENT_SIZE, xhci->page_size);
+	}
/* See Table 46 and Note on Figure 55 */ xhci->device_pool = dma_pool_create("xHCI input/output contexts", dev, diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index e1b2dec099f2..9a7c88067db0 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -238,6 +238,11 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) pdev->device == 0x3432) xhci->quirks |= XHCI_BROKEN_STREAMS;
+ if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN && + (pdev->device == 0x9202 || + pdev->device == 0x9203)) + xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH; + if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI) xhci->quirks |= XHCI_BROKEN_STREAMS; diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index 069390a1f2ac..3ae8e25a2622 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -1874,6 +1874,7 @@ struct xhci_hcd { #define XHCI_SNPS_BROKEN_SUSPEND BIT_ULL(35) #define XHCI_ZHAOXIN_HOST BIT_ULL(36) #define XHCI_DISABLE_SPARSE BIT_ULL(38) +#define XHCI_ZHAOXIN_TRB_FETCH BIT_ULL(39)
unsigned int num_active_eps; unsigned int limit_active_eps;
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Some Zhaoxin xHCI controllers follow the USB 3.1 spec but support only the Gen1 speed of 5 Gbps. In the Linux kernel, if an xHCI controller advertises USB 3.1, the root hub speed is reported as 10 Gbps. To fix this issue, read the USB speed IDs supported by the xHCI controller to determine the real root hub speed.
The patch is scheduled to be submitted to the kernel mainline in 2021.
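A minimal restatement of the decode used below, assuming this tree's xhci->port_caps[] layout (PSIV is bits 3:0 of each Protocol Speed ID dword, and the patch treats IDs of 5 and above as 10 Gbps entries); this helper is a sketch, not part of the patch:

/* Sketch only: returns true if any extended port capability advertises
 * a Protocol Speed ID of 5 or higher, i.e. a Gen2 (10 Gbps) speed. */
static bool xhci_port_caps_have_gen2(struct xhci_hcd *xhci)
{
	u8 i, j;

	for (j = 0; j < xhci->num_port_caps; j++)
		for (i = 0; i < xhci->port_caps[j].psi_count; i++)
			if (XHCI_EXT_PORT_PSIV(xhci->port_caps[j].psi[i]) >= 5)
				return true;

	return false;
}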
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/usb/host/xhci.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index 8848e98c4824..e6e0a46e48a6 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -5089,6 +5089,7 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks) */ struct device *dev = hcd->self.sysdev; unsigned int minor_rev; + u8 i, j; int retval;
/* Accept arbitrarily long scatter-gather lists */
@@ -5143,6 +5144,24 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
 		hcd->self.root_hub->speed = USB_SPEED_SUPER_PLUS;
 		break;
 	}
+
+	/* USB 3.1 has Gen1 and Gen2; some Zhaoxin xHCI controllers follow
+	 * the USB 3.1 spec but support only Gen1 speeds.
+	 */
+	if (xhci->quirks & XHCI_ZHAOXIN_HOST) {
+		minor_rev = 0;
+		for (j = 0; j < xhci->num_port_caps; j++) {
+			for (i = 0; i < xhci->port_caps[j].psi_count; i++) {
+				if (XHCI_EXT_PORT_PSIV(xhci->port_caps[j].psi[i]) >= 5)
+					minor_rev = 1;
+			}
+		}
+		if (minor_rev != 1) {
+			hcd->speed = HCD_USB3;
+			hcd->self.root_hub->speed = USB_SPEED_SUPER;
+		}
+	}
+
 	xhci_info(xhci, "Host supports USB 3.%x %sSuperSpeed\n",
 		  minor_rev,
 		  minor_rev ? "Enhanced " : "");
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Add the new PCI ID 0x1d17 0x3288 for Zhaoxin SB HDAC support.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- sound/pci/hda/hda_intel.c | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c index 2cd8bfd5293b..67791114471c 100644 --- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -250,7 +250,8 @@ MODULE_SUPPORTED_DEVICE("{{Intel, ICH6}," "{VIA, VT8251}," "{VIA, VT8237A}," "{SiS, SIS966}," - "{ULI, M5461}}"); + "{ULI, M5461}," + "{ZX, ZhaoxinHDA}}"); MODULE_DESCRIPTION("Intel HDA driver");
#if defined(CONFIG_PM) && defined(CONFIG_VGA_SWITCHEROO) @@ -281,6 +282,7 @@ enum { AZX_DRIVER_CTX, AZX_DRIVER_CTHDA, AZX_DRIVER_CMEDIA, + AZX_DRIVER_ZHAOXIN, AZX_DRIVER_GENERIC, AZX_NUM_DRIVERS, /* keep this as last entry */ }; @@ -401,6 +403,7 @@ static char *driver_short_names[] = { [AZX_DRIVER_CTX] = "HDA Creative", [AZX_DRIVER_CTHDA] = "HDA Creative", [AZX_DRIVER_CMEDIA] = "HDA C-Media", + [AZX_DRIVER_ZHAOXIN] = "HDA Zhaoxin", [AZX_DRIVER_GENERIC] = "HD-Audio Generic", };
@@ -1599,6 +1602,9 @@ static int check_position_fix(struct azx *chip, int fix) dev_dbg(chip->card->dev, "Using FIFO position fix\n"); return POS_FIX_FIFO; } + if (chip->driver_type == AZX_DRIVER_ZHAOXIN) { + return POS_FIX_VIACOMBO; + } if (chip->driver_caps & AZX_DCAPS_POSFIX_LPIB) { dev_dbg(chip->card->dev, "Using LPIB position fix\n"); return POS_FIX_LPIB; @@ -1755,6 +1761,15 @@ static void azx_check_snoop_available(struct azx *chip) snoop = false; }
+ if (azx_get_snoop_type(chip) == AZX_SNOOP_TYPE_NONE && + chip->driver_type == AZX_DRIVER_ZHAOXIN) { + u8 val1; + pci_read_config_byte(chip->pci, 0x42, &val1); + if (!(val1 & 0x80) && chip->pci->revision == 0x20) { + snoop = false; + } + } + if (chip->driver_caps & AZX_DCAPS_SNOOP_OFF) snoop = false;
@@ -2811,6 +2826,8 @@ static const struct pci_device_id azx_ids[] = { .class = PCI_CLASS_MULTIMEDIA_HD_AUDIO << 8, .class_mask = 0xffffff, .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_HDMI }, + /* Zhaoxin */ + { PCI_DEVICE(0x1d17, 0x3288), .driver_data = AZX_DRIVER_ZHAOXIN }, { 0, } }; MODULE_DEVICE_TABLE(pci, azx_ids);
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Add the new PCI IDs 0x1d17 0x9141/0x9142/0x9144 for Zhaoxin NB HDAC support, and add some special initialization required by the Zhaoxin NB HDAC.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- sound/pci/hda/hda_controller.c | 17 +++++++++++- sound/pci/hda/hda_controller.h | 2 ++ sound/pci/hda/hda_intel.c | 51 +++++++++++++++++++++++++++++++++- 3 files changed, 68 insertions(+), 2 deletions(-)
diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c index 0c5d41e5d146..0341637aa5d9 100644 --- a/sound/pci/hda/hda_controller.c +++ b/sound/pci/hda/hda_controller.c @@ -1116,6 +1116,16 @@ void azx_stop_chip(struct azx *chip) } EXPORT_SYMBOL_GPL(azx_stop_chip);
+static void azx_rirb_zxdelay(struct azx *chip, int enable) +{ + if (chip->remap_diu_addr) { + if (!enable) + writel(0x0, (char *)chip->remap_diu_addr + 0x490a8); + else + writel(0x1000000, (char *)chip->remap_diu_addr + 0x490a8); + } +} + /* * interrupt handler */ @@ -1175,9 +1185,14 @@ irqreturn_t azx_interrupt(int irq, void *dev_id) azx_writeb(chip, RIRBSTS, RIRB_INT_MASK); active = true; if (status & RIRB_INT_RESPONSE) { - if (chip->driver_caps & AZX_DCAPS_CTX_WORKAROUND) + if ((chip->driver_caps & AZX_DCAPS_CTX_WORKAROUND) || + (chip->driver_caps & AZX_DCAPS_RIRB_PRE_DELAY)) { + azx_rirb_zxdelay(chip, 1); udelay(80); + } snd_hdac_bus_update_rirb(bus); + if (chip->driver_caps & AZX_DCAPS_RIRB_PRE_DELAY) + azx_rirb_zxdelay(chip, 0); } } } while (active && ++repeat < 10); diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h index 63cc10604afc..16bffded0aa3 100644 --- a/sound/pci/hda/hda_controller.h +++ b/sound/pci/hda/hda_controller.h @@ -58,6 +58,7 @@ #define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28) /* CORBRP clears itself after reset */ #define AZX_DCAPS_NO_MSI64 (1 << 29) /* Stick to 32-bit MSIs */ #define AZX_DCAPS_SEPARATE_STREAM_TAG (1 << 30) /* capture and playback use separate stream tag */ +#define AZX_DCAPS_RIRB_PRE_DELAY (1 << 31)
enum { AZX_SNOOP_TYPE_NONE, @@ -167,6 +168,7 @@ struct azx {
/* GTS present */ unsigned int gts_present:1; + void __iomem *remap_diu_addr;
#ifdef CONFIG_SND_HDA_DSP_LOADER struct azx_dev saved_azx_dev; diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c index 67791114471c..a72852b37118 100644 --- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -251,7 +251,8 @@ MODULE_SUPPORTED_DEVICE("{{Intel, ICH6}," "{VIA, VT8237A}," "{SiS, SIS966}," "{ULI, M5461}," - "{ZX, ZhaoxinHDA}}"); + "{ZX, ZhaoxinHDA}," + "{ZX, ZhaoxinHDMI}}"); MODULE_DESCRIPTION("Intel HDA driver");
#if defined(CONFIG_PM) && defined(CONFIG_VGA_SWITCHEROO) @@ -283,6 +284,7 @@ enum { AZX_DRIVER_CTHDA, AZX_DRIVER_CMEDIA, AZX_DRIVER_ZHAOXIN, + AZX_DRIVER_ZXHDMI, AZX_DRIVER_GENERIC, AZX_NUM_DRIVERS, /* keep this as last entry */ }; @@ -404,6 +406,7 @@ static char *driver_short_names[] = { [AZX_DRIVER_CTHDA] = "HDA Creative", [AZX_DRIVER_CMEDIA] = "HDA C-Media", [AZX_DRIVER_ZHAOXIN] = "HDA Zhaoxin", + [AZX_DRIVER_ZXHDMI] = "HDA Zhaoxin GFX", [AZX_DRIVER_GENERIC] = "HD-Audio Generic", };
@@ -480,6 +483,29 @@ static void update_pci_byte(struct pci_dev *pci, unsigned int reg, pci_write_config_byte(pci, reg, data); }
+static int azx_init_pci_zx(struct azx *chip) +{ + struct snd_card *card = chip->card; + unsigned int diu_reg; + struct pci_dev *diu_pci = NULL; + + diu_pci = pci_get_device(0x1d17, 0x3a03, NULL); + if (!diu_pci) { + dev_err(card->dev, "hda no chx001 device. \n"); + return -ENXIO; + } + pci_read_config_dword(diu_pci, PCI_BASE_ADDRESS_0, &diu_reg); + chip->remap_diu_addr = ioremap_nocache(diu_reg, 0x50000); + dev_info(card->dev, "hda %x %p \n", diu_reg, chip->remap_diu_addr); + return 0; +} + +static void azx_free_pci_zx(struct azx *chip) +{ + if (chip->remap_diu_addr) + iounmap(chip->remap_diu_addr); +} + static void azx_init_pci(struct azx *chip) { int snoop_type = azx_get_snoop_type(chip); @@ -1450,6 +1476,10 @@ static int azx_free(struct azx *chip) hda->init_failed = 1; /* to be sure */ complete_all(&hda->probe_wait);
+ if (chip->driver_type == AZX_DRIVER_ZXHDMI) { + azx_free_pci_zx(chip); + } + if (use_vga_switcheroo(hda)) { if (chip->disabled && hda->probe_continued) snd_hda_unlock_devices(&chip->bus); @@ -1803,6 +1833,8 @@ static int default_bdl_pos_adj(struct azx *chip) case AZX_DRIVER_ICH: case AZX_DRIVER_PCH: return 1; + case AZX_DRIVER_ZXHDMI: + return 128; default: return 32; } @@ -1921,6 +1953,12 @@ static int azx_first_init(struct azx *chip) } #endif
+ chip->remap_diu_addr = NULL; + + if (chip->driver_type == AZX_DRIVER_ZXHDMI) { + azx_init_pci_zx(chip); + } + err = pci_request_regions(pci, "ICH HD audio"); if (err < 0) return err; @@ -2030,6 +2068,7 @@ static int azx_first_init(struct azx *chip) chip->playback_streams = ATIHDMI_NUM_PLAYBACK; chip->capture_streams = ATIHDMI_NUM_CAPTURE; break; + case AZX_DRIVER_ZXHDMI: case AZX_DRIVER_GENERIC: default: chip->playback_streams = ICH6_NUM_PLAYBACK; @@ -2773,6 +2812,11 @@ static const struct pci_device_id azx_ids[] = { { PCI_DEVICE(0x1106, 0x9170), .driver_data = AZX_DRIVER_GENERIC }, /* VIA GFX VT6122/VX11 */ { PCI_DEVICE(0x1106, 0x9140), .driver_data = AZX_DRIVER_GENERIC }, + { PCI_DEVICE(0x1106, 0x9141), .driver_data = AZX_DRIVER_GENERIC }, + { PCI_DEVICE(0x1106, 0x9142), + .driver_data = AZX_DRIVER_ZXHDMI | AZX_DCAPS_POSFIX_LPIB | AZX_DCAPS_NO_MSI | AZX_DCAPS_RIRB_PRE_DELAY }, + { PCI_DEVICE(0x1106, 0x9144), + .driver_data = AZX_DRIVER_ZXHDMI | AZX_DCAPS_POSFIX_LPIB | AZX_DCAPS_NO_MSI | AZX_DCAPS_RIRB_PRE_DELAY }, /* SIS966 */ { PCI_DEVICE(0x1039, 0x7502), .driver_data = AZX_DRIVER_SIS }, /* ULI M5461 */ @@ -2828,6 +2872,11 @@ static const struct pci_device_id azx_ids[] = { .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_HDMI }, /* Zhaoxin */ { PCI_DEVICE(0x1d17, 0x3288), .driver_data = AZX_DRIVER_ZHAOXIN }, + { PCI_DEVICE(0x1d17, 0x9141), .driver_data = AZX_DRIVER_GENERIC }, + { PCI_DEVICE(0x1d17, 0x9142), + .driver_data = AZX_DRIVER_ZXHDMI | AZX_DCAPS_POSFIX_LPIB | AZX_DCAPS_NO_MSI | AZX_DCAPS_RIRB_PRE_DELAY }, + { PCI_DEVICE(0x1d17, 0x9144), + .driver_data = AZX_DRIVER_ZXHDMI | AZX_DCAPS_POSFIX_LPIB | AZX_DCAPS_NO_MSI | AZX_DCAPS_RIRB_PRE_DELAY }, { 0, } }; MODULE_DEVICE_TABLE(pci, azx_ids);
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Add Zhaoxin NB HDAC codec support.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- sound/pci/hda/patch_hdmi.c | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+)
diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c index d21a4eb1ca49..8a10e660c616 100644 --- a/sound/pci/hda/patch_hdmi.c +++ b/sound/pci/hda/patch_hdmi.c @@ -3843,6 +3843,20 @@ static int patch_via_hdmi(struct hda_codec *codec) return patch_simple_hdmi(codec, VIAHDMI_CVT_NID, VIAHDMI_PIN_NID); }
+/* ZHAOXIN HDMI Implementation */ +static int patch_zx_hdmi(struct hda_codec *codec) +{ + int err; + + err = patch_generic_hdmi(codec); + codec->no_sticky_stream = 1; + + if (err) + return err; + + return 0; +} + /* * patch entries */ @@ -3932,6 +3946,12 @@ HDA_CODEC_ENTRY(0x11069f80, "VX900 HDMI/DP", patch_via_hdmi), HDA_CODEC_ENTRY(0x11069f81, "VX900 HDMI/DP", patch_via_hdmi), HDA_CODEC_ENTRY(0x11069f84, "VX11 HDMI/DP", patch_generic_hdmi), HDA_CODEC_ENTRY(0x11069f85, "VX11 HDMI/DP", patch_generic_hdmi), +HDA_CODEC_ENTRY(0x11069f86, "CND001 HDMI/DP", patch_generic_hdmi), +HDA_CODEC_ENTRY(0x11069f87, "CND001 HDMI/DP", patch_generic_hdmi), +HDA_CODEC_ENTRY(0x11069f88, "CHX001 HDMI/DP", patch_zx_hdmi), +HDA_CODEC_ENTRY(0x11069f89, "CHX001 HDMI/DP", patch_zx_hdmi), +HDA_CODEC_ENTRY(0x11069f8a, "CHX002 HDMI/DP", patch_zx_hdmi), +HDA_CODEC_ENTRY(0x11069f8b, "CHX002 HDMI/DP", patch_zx_hdmi), HDA_CODEC_ENTRY(0x80860054, "IbexPeak HDMI", patch_i915_cpt_hdmi), HDA_CODEC_ENTRY(0x80862801, "Bearlake HDMI", patch_generic_hdmi), HDA_CODEC_ENTRY(0x80862802, "Cantiga HDMI", patch_generic_hdmi), @@ -3951,6 +3971,12 @@ HDA_CODEC_ENTRY(0x80862880, "CedarTrail HDMI", patch_generic_hdmi), HDA_CODEC_ENTRY(0x80862882, "Valleyview2 HDMI", patch_i915_byt_hdmi), HDA_CODEC_ENTRY(0x80862883, "Braswell HDMI", patch_i915_byt_hdmi), HDA_CODEC_ENTRY(0x808629fb, "Crestline HDMI", patch_generic_hdmi), +HDA_CODEC_ENTRY(0x1d179f86, "CND001 HDMI/DP", patch_generic_hdmi), +HDA_CODEC_ENTRY(0x1d179f87, "CND001 HDMI/DP", patch_generic_hdmi), +HDA_CODEC_ENTRY(0x1d179f88, "CHX001 HDMI/DP", patch_zx_hdmi), +HDA_CODEC_ENTRY(0x1d179f89, "CHX001 HDMI/DP", patch_zx_hdmi), +HDA_CODEC_ENTRY(0x1d179f8a, "CHX002 HDMI/DP", patch_zx_hdmi), +HDA_CODEC_ENTRY(0x1d179f8b, "CHX002 HDMI/DP", patch_zx_hdmi), /* special ID for generic HDMI */ HDA_CODEC_ENTRY(HDA_CODEC_ID_GENERIC_HDMI, "Generic HDMI", patch_generic_hdmi), {} /* terminator */
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
The Over Current condition is not standardized in the UHCI spec. Zhaoxin UHCI controllers report Over Current as active-low, whereas Intel controllers report it active-high, so adjust the bit value accordingly.
The patch is scheduled to be submitted to the kernel mainline in 2021.
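A hypothetical helper (uhci_port_over_current() is not part of the patch) showing how a consumer of the uhci->oc_low flag would normalize the PORTSC bit; USBPORTSC_OC is the Over Current Active bit from uhci-hcd.h:

/* Hypothetical sketch, not in the patch: normalize the Over Current
 * Active bit so callers always see 1 == over-current, regardless of
 * the controller's polarity. */
static inline int uhci_port_over_current(struct uhci_hcd *uhci, u16 portsc)
{
	int oc = !!(portsc & USBPORTSC_OC);

	return uhci->oc_low ? !oc : oc;
}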
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/usb/host/uhci-pci.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/usb/host/uhci-pci.c b/drivers/usb/host/uhci-pci.c index 0dd944277c99..3c0d4c43b640 100644 --- a/drivers/usb/host/uhci-pci.c +++ b/drivers/usb/host/uhci-pci.c @@ -134,6 +134,9 @@ static int uhci_pci_init(struct usb_hcd *hcd) if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_INTEL) device_set_wakeup_capable(uhci_dev(uhci), true);
+ if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_ZHAOXIN) + uhci->oc_low = 1; + /* Set up pointers to PCI-specific functions */ uhci->reset_hc = uhci_pci_reset_hc; uhci->check_and_reset_hc = uhci_pci_check_and_reset_hc;
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
On the Zhaoxin ZX-100 project, xHCI cannot work normally after resume from a system Sx state. To fix this issue, reinitialize the xHCI controller on resume from Sx instead of restoring its state.
The patch is scheduled to be submitted to the kernel mainline in 2021.
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/usb/host/xhci-pci.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index 9a7c88067db0..31068c6c0af3 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -264,6 +264,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) if (pdev->vendor == PCI_VENDOR_ID_TI && pdev->device == 0x8241) xhci->quirks |= XHCI_LIMIT_ENDPOINT_INTERVAL_7;
+ if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN && pdev->device == 0x9202) + xhci->quirks |= XHCI_RESET_ON_RESUME; + if ((pdev->vendor == PCI_VENDOR_ID_BROADCOM || pdev->vendor == PCI_VENDOR_ID_CAVIUM) && pdev->device == 0x9026)
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
mainline inclusion from mainline-5.6 commit 0f378d73d429d5f73fe2f00be4c9a15dbe9779ee category: x86/apic bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
When a system suspends, the local APIC is disabled in the suspend sequence, but the IOAPIC is left in the current state. This means unmasked interrupt lines stay unmasked. This is usually the case for IOAPIC pin 9 to which the ACPI interrupt is connected.
That means that in suspended state the IOAPIC can respond to an external interrupt, e.g. the wakeup via keyboard/RTC/ACPI, but the interrupt message cannot be handled by the disabled local APIC. As a consequence the Remote IRR bit is set, but the local APIC does not send an EOI to acknowledge it. This causes the affected interrupt line to become stale and the stale Remote IRR bit will cause a hang when __synchronize_hardirq() is invoked for that interrupt line.
To prevent this, mask all IOAPIC entries before disabling the local APIC. The resume code already has the unmask operation inside.
[ tglx: Massaged changelog ]
Signed-off-by: Tony W Wang-oc TonyWWang-oc@zhaoxin.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/1579076539-7267-1-git-send-email-TonyWWang-oc@zhao... Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- arch/x86/kernel/apic/apic.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c index 11c2bee8b4e5..c4b599b8fb61 100644 --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -2638,6 +2638,13 @@ static int lapic_suspend(void) #endif
local_irq_save(flags); + + /* + * Mask IOAPIC before disabling the local APIC to prevent stale IRR + * entries on some implementations. + */ + mask_ioapic_entries(); + disable_local_APIC();
irq_remapping_disable();
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
This bug was found on a Zhaoxin platform, but it is a common code bug.

Failure sequence:
step 1: Unbind the UHCI controller from its native driver;
step 2: Bind the UHCI controller to vfio-pci, which puts the UHCI controller on one vfio group's device list and sets UHCI's dev->driver_data to a struct vfio-pci (for UHCI);
step 3: Unbind the EHCI controller from its native driver. This tries to tell the UHCI native driver "I'm removed" by setting companion_hcd->self.hs_companion to NULL. However, companion_hcd is taken from UHCI's dev->driver_data, which vfio-pci has already modified, so the vfio-pci structure gets damaged;
step 4: Bind the EHCI controller to the vfio-pci driver, which puts the EHCI controller in the same vfio group as the UHCI controller;
step 5: Unbind the UHCI controller from vfio-pci, which deletes UHCI from the vfio group's device list that was damaged in step 3. The delete operation can randomly result in a NULL pointer dereference with the stack dump below;
step 6: Bind the UHCI controller to its native driver;
step 7: Unbind the EHCI controller from vfio-pci, which tries to remove the EHCI controller from the vfio group;
step 8: Bind the EHCI controller to its native driver.
[ 929.114641] uhci_hcd 0000:00:10.0: remove, state 1 [ 929.114652] usb usb1: USB disconnect, device number 1 [ 929.114655] usb 1-1: USB disconnect, device number 2 [ 929.270313] usb 1-2: USB disconnect, device number 3 [ 929.318404] uhci_hcd 0000:00:10.0: USB bus 1 deregistered [ 929.343029] uhci_hcd 0000:00:10.1: remove, state 4 [ 929.343045] usb usb3: USB disconnect, device number 1 [ 929.343685] uhci_hcd 0000:00:10.1: USB bus 3 deregistered [ 929.369087] ehci-pci 0000:00:10.7: remove, state 4 [ 929.369102] usb usb4: USB disconnect, device number 1 [ 929.370325] ehci-pci 0000:00:10.7: USB bus 4 deregistered [ 932.398494] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000 [ 932.398496] PGD 42a67d067 P4D 42a67d067 PUD 42a65f067 PMD 0 [ 932.398502] Oops: 0002 [#2] SMP NOPTI [ 932.398505] CPU: 2 PID: 7824 Comm: vfio_unbind.sh Tainted: P D 4.19.65-2020051917-rainos #1 [ 932.398506] Hardware name: Shanghai Zhaoxin Semiconductor Co., Ltd. HX002EH/HX002EH, BIOS HX002EH0_01_R480_R_200408 04/08/2020 [ 932.398513] RIP: 0010:vfio_device_put+0x31/0xa0 [vfio] [ 932.398515] Code: 89 e5 41 54 53 4c 8b 67 18 48 89 fb 49 8d 74 24 30 e8 e3 0e f3 de 84 c0 74 67 48 8b 53 20 48 8b 43 28 48 8b 7b 18 48 89 42 08 <48> 89 10 48 b8 00 01 00 00 00 00 ad de 48 89 43 20 48 b8 00 02 00 [ 932.398516] RSP: 0018:ffffbbfd04cffc18 EFLAGS: 00010202 [ 932.398518] RAX: 0000000000000000 RBX: ffff92c7ea717880 RCX: 0000000000000000 [ 932.398519] RDX: ffff92c7ea713620 RSI: ffff92c7ea713630 RDI: ffff92c7ea713600 [ 932.398521] RBP: ffffbbfd04cffc28 R08: ffff92c7f02a8080 R09: ffff92c7efc03980 [ 932.398522] R10: ffffbbfd04cff9a8 R11: 0000000000000000 R12: ffff92c7ea713600 [ 932.398523] R13: ffff92c7ed8bb0a8 R14: ffff92c7ea717880 R15: 0000000000000000 [ 932.398525] FS: 00007f3031500740(0000) GS:ffff92c7f0280000(0000) knlGS:0000000000000000 [ 932.398526] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 932.398527] CR2: 0000000000000000 CR3: 0000000428626004 CR4: 0000000000160ee0 [ 932.398528] Call Trace: [ 932.398534] vfio_del_group_dev+0xe8/0x2a0 [vfio] [ 932.398539] ? __blocking_notifier_call_chain+0x52/0x60 [ 932.398542] ? do_wait_intr_irq+0x90/0x90 [ 932.398546] ? iommu_bus_notifier+0x75/0x100 [ 932.398551] vfio_pci_remove+0x20/0xa0 [vfio_pci] [ 932.398554] pci_device_remove+0x3e/0xc0 [ 932.398557] device_release_driver_internal+0x17a/0x240 [ 932.398560] device_release_driver+0x12/0x20 [ 932.398561] unbind_store+0xee/0x180 [ 932.398564] drv_attr_store+0x27/0x40 [ 932.398567] sysfs_kf_write+0x3c/0x50 [ 932.398568] kernfs_fop_write+0x125/0x1a0 [ 932.398572] __vfs_write+0x3a/0x190 [ 932.398575] ? apparmor_file_permission+0x1a/0x20 [ 932.398577] ? security_file_permission+0x3b/0xc0 [ 932.398581] ? _cond_resched+0x1a/0x50 [ 932.398582] vfs_write+0xb8/0x1b0 [ 932.398584] ksys_write+0x5c/0xe0 [ 932.398586] __x64_sys_write+0x1a/0x20 [ 932.398589] do_syscall_64+0x5a/0x110 [ 932.398592] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Booting a guest OS with virt-manager/qemu shows the same failure sequence.
Fix this by determining whether the PCI driver of the companion USB controller is a native kernel HCD driver. If it is not, do not let the code touch that controller's dev->driver_data.
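The hazard is easiest to see in miniature. Below is a sketch of the unsafe pattern the new check guards against (names as in drivers/usb/core/hcd-pci.c):

/*
 * Unsafe without the driver check: pci_get_drvdata() returns a
 * struct usb_hcd only while a native HCD driver owns the device.
 * Once the companion is bound to vfio-pci, drvdata points at
 * vfio-pci's private structure, and this store corrupts it.
 */
struct usb_hcd *companion_hcd = pci_get_drvdata(companion);

companion_hcd->self.hs_companion = NULL;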
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/usb/core/hcd-pci.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c index 7537681355f6..c3cddaab708d 100644 --- a/drivers/usb/core/hcd-pci.c +++ b/drivers/usb/core/hcd-pci.c @@ -49,6 +49,7 @@ static void for_each_companion(struct pci_dev *pdev, struct usb_hcd *hcd, struct pci_dev *companion; struct usb_hcd *companion_hcd; unsigned int slot = PCI_SLOT(pdev->devfn); + struct pci_driver *drv;
/* * Iterate through other PCI functions in the same slot. @@ -61,6 +62,15 @@ static void for_each_companion(struct pci_dev *pdev, struct usb_hcd *hcd, PCI_SLOT(companion->devfn) != slot) continue;
+		drv = companion->driver;
+		if (!drv)
+			continue;
+
+		if (strncmp(drv->name, "uhci_hcd", sizeof("uhci_hcd") - 1) &&
+		    strncmp(drv->name, "ohci_hcd", sizeof("ohci_hcd") - 1) &&
+		    strncmp(drv->name, "ehci_hcd", sizeof("ehci_hcd") - 1))
+			continue;
+
 		/*
 		 * Companion device should be either UHCI,OHCI or EHCI host
 		 * controller, otherwise skip.
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19 CVE: NA
----------------------------------------------------------------
Some ACPI devices need to issue DMA requests to access the reserved memory area. BIOS uses the device scope type ACPI_NAMESPACE_DEVICE in the RMRR to report these ACPI devices. This patch adds support for detecting ACPI devices in the RMRR and, in order to distinguish them from PCI devices, modifies some interface functions.
This patch was submitted to the mainline kernel but was not accepted; the upstream maintainer's reason was: "As I explained in the previous reply, RMRRs were added as work around for certain legacy device and we have been working hard to fix those legacy devices so that RMRR are no longer needed. Any new use case of RMRR is not encouraged".
The VT-d 1.3/2.5/3.0 specs cover this case, so we think the Intel driver should support it too.
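For orientation, the device-scope entry being matched here is the standard ACPI DMAR structure (paraphrased from the kernel's ACPI table headers; the comments are ours):

struct acpi_dmar_device_scope {
	u8	entry_type;	/* ACPI_DMAR_SCOPE_TYPE_NAMESPACE for ACPI devices */
	u8	length;		/* total entry length, including the path below */
	u16	reserved;
	u8	enumeration_id;	/* matches the ANDD device number */
	u8	bus;		/* start bus of the path that follows */
};

/* Each scope entry is followed by one or more path elements: */
struct acpi_dmar_pci_path {
	u8	device;
	u8	function;
};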
References: https://lkml.org/lkml/2020/10/10/56 Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Cheng Jian cj.chengjian@huawei.com --- drivers/iommu/dmar.c | 75 +++++++++++++++++++++---------------- drivers/iommu/intel-iommu.c | 24 +++++++++++- include/linux/dmar.h | 11 +++++- 3 files changed, 75 insertions(+), 35 deletions(-)
diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c index 3f0c2c1ef0cb..9a07bcad38e5 100644 --- a/drivers/iommu/dmar.c +++ b/drivers/iommu/dmar.c @@ -226,7 +226,7 @@ static bool dmar_match_pci_path(struct dmar_pci_notify_info *info, int bus, }
/* Return: > 0 if match found, 0 if no match found, < 0 if error happens */ -int dmar_insert_dev_scope(struct dmar_pci_notify_info *info, +int dmar_pci_insert_dev_scope(struct dmar_pci_notify_info *info, void *start, void*end, u16 segment, struct dmar_dev_scope *devices, int devices_cnt) @@ -315,7 +315,7 @@ static int dmar_pci_bus_add_dev(struct dmar_pci_notify_info *info)
drhd = container_of(dmaru->hdr, struct acpi_dmar_hardware_unit, header); - ret = dmar_insert_dev_scope(info, (void *)(drhd + 1), + ret = dmar_pci_insert_dev_scope(info, (void *)(drhd + 1), ((void *)drhd) + drhd->header.length, dmaru->segment, dmaru->devices, dmaru->devices_cnt); @@ -707,47 +707,58 @@ dmar_find_matched_drhd_unit(struct pci_dev *dev) return dmaru; }
-static void __init dmar_acpi_insert_dev_scope(u8 device_number, - struct acpi_device *adev) +/* Return: > 0 if match found, 0 if no match found */ +bool dmar_acpi_insert_dev_scope(u8 device_number, + struct acpi_device *adev, + void *start, void *end, + struct dmar_dev_scope *devices, + int devices_cnt) { - struct dmar_drhd_unit *dmaru; - struct acpi_dmar_hardware_unit *drhd; struct acpi_dmar_device_scope *scope; struct device *tmp; int i; struct acpi_dmar_pci_path *path;
+ for (; start < end; start += scope->length) { + scope = start; + if (scope->entry_type != ACPI_DMAR_SCOPE_TYPE_NAMESPACE) + continue; + if (scope->enumeration_id != device_number) + continue; + path = (void *)(scope + 1); + for_each_dev_scope(devices, devices_cnt, i, tmp) + if (tmp == NULL) { + devices[i].bus = scope->bus; + devices[i].devfn = PCI_DEVFN(path->device, path->function); + rcu_assign_pointer(devices[i].dev, + get_device(&adev->dev)); + return true; + } + WARN_ON(i >= devices_cnt); + } + return false; +} + +static int dmar_acpi_bus_add_dev(u8 device_number, struct acpi_device *adev) +{ + struct dmar_drhd_unit *dmaru; + struct acpi_dmar_hardware_unit *drhd; + int ret; + for_each_drhd_unit(dmaru) { drhd = container_of(dmaru->hdr, struct acpi_dmar_hardware_unit, header);
-		for (scope = (void *)(drhd + 1);
-		     (unsigned long)scope < ((unsigned long)drhd) + drhd->header.length;
-		     scope = ((void *)scope) + scope->length) {
-			if (scope->entry_type != ACPI_DMAR_SCOPE_TYPE_NAMESPACE)
-				continue;
-			if (scope->enumeration_id != device_number)
-				continue;
-
-			path = (void *)(scope + 1);
-			pr_info("ACPI device \"%s\" under DMAR at %llx as %02x:%02x.%d\n",
-				dev_name(&adev->dev), dmaru->reg_base_addr,
-				scope->bus, path->device, path->function);
-			for_each_dev_scope(dmaru->devices, dmaru->devices_cnt, i, tmp)
-				if (tmp == NULL) {
-					dmaru->devices[i].bus = scope->bus;
-					dmaru->devices[i].devfn = PCI_DEVFN(path->device,
-						path->function);
-					rcu_assign_pointer(dmaru->devices[i].dev,
-						get_device(&adev->dev));
-					return;
-				}
-			BUG_ON(i >= dmaru->devices_cnt);
-		}
+		ret = dmar_acpi_insert_dev_scope(device_number, adev, (void *)(drhd+1),
+						 ((void *)drhd)+drhd->header.length,
+						 dmaru->devices, dmaru->devices_cnt);
+		if (ret)
+			break;
 	}
-	pr_warn("No IOMMU scope found for ANDD enumeration ID %d (%s)\n",
-		device_number, dev_name(&adev->dev));
+	if (ret > 0)
+		ret = dmar_rmrr_add_acpi_dev(device_number, adev);
+	return ret;
 }
 static int __init dmar_acpi_dev_scope_init(void)
@@ -776,7 +787,7 @@ static int __init dmar_acpi_dev_scope_init(void)
 				       andd->device_name);
 				continue;
 			}
-			dmar_acpi_insert_dev_scope(andd->device_number, adev);
+			dmar_acpi_bus_add_dev(andd->device_number, adev);
 		}
 	}
 	return 0;
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index f51ae0086786..18e0be8e05a5 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -4512,6 +4512,26 @@ int dmar_find_matched_atsr_unit(struct pci_dev *dev)
 	return ret;
 }
+int dmar_rmrr_add_acpi_dev(u8 device_number, struct acpi_device *adev)
+{
+	int ret;
+	struct dmar_rmrr_unit *rmrru;
+	struct acpi_dmar_reserved_memory *rmrr;
+
+	list_for_each_entry(rmrru, &dmar_rmrr_units, list) {
+		rmrr = container_of(rmrru->hdr,
+				    struct acpi_dmar_reserved_memory,
+				    header);
+		ret = dmar_acpi_insert_dev_scope(device_number, adev, (void *)(rmrr + 1),
+						 ((void *)rmrr) + rmrr->header.length,
+						 rmrru->devices, rmrru->devices_cnt);
+		if (ret)
+			break;
+	}
+	pr_info("Add acpi_dev:%s to rmrru->devices\n", dev_name(&adev->dev));
+	return 0;
+}
+
 int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
 {
 	int ret = 0;
@@ -4527,7 +4547,7 @@ int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
 		rmrr = container_of(rmrru->hdr,
 				    struct acpi_dmar_reserved_memory, header);
 		if (info->event == BUS_NOTIFY_ADD_DEVICE) {
-			ret = dmar_insert_dev_scope(info, (void *)(rmrr + 1),
+			ret = dmar_pci_insert_dev_scope(info, (void *)(rmrr + 1),
 				((void *)rmrr) + rmrr->header.length,
 				rmrr->segment, rmrru->devices,
 				rmrru->devices_cnt);
@@ -4545,7 +4565,7 @@ int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
 		atsr = container_of(atsru->hdr, struct acpi_dmar_atsr, header);
 		if (info->event == BUS_NOTIFY_ADD_DEVICE) {
-			ret = dmar_insert_dev_scope(info, (void *)(atsr + 1),
+			ret = dmar_pci_insert_dev_scope(info, (void *)(atsr + 1),
 					(void *)atsr + atsr->header.length,
 					atsr->segment, atsru->devices,
 					atsru->devices_cnt);
diff --git a/include/linux/dmar.h b/include/linux/dmar.h
index 843a41ba7e28..68de8732d8d4 100644
--- a/include/linux/dmar.h
+++ b/include/linux/dmar.h
@@ -117,10 +117,13 @@ extern int dmar_parse_dev_scope(void *start, void *end, int *cnt,
 				struct dmar_dev_scope **devices, u16 segment);
 extern void *dmar_alloc_dev_scope(void *start, void *end, int *cnt);
 extern void dmar_free_dev_scope(struct dmar_dev_scope **devices, int *cnt);
-extern int dmar_insert_dev_scope(struct dmar_pci_notify_info *info,
+extern int dmar_pci_insert_dev_scope(struct dmar_pci_notify_info *info,
 				 void *start, void*end, u16 segment,
 				 struct dmar_dev_scope *devices,
 				 int devices_cnt);
+extern bool dmar_acpi_insert_dev_scope(u8 device_number,
+				struct acpi_device *adev, void *start, void *end,
+				struct dmar_dev_scope *devices, int devices_cnt);
 extern int dmar_remove_dev_scope(struct dmar_pci_notify_info *info,
 				 u16 segment, struct dmar_dev_scope *devices,
 				 int count);
@@ -143,6 +146,7 @@ extern int dmar_parse_one_atsr(struct acpi_dmar_header *header, void *arg);
 extern int dmar_check_one_atsr(struct acpi_dmar_header *hdr, void *arg);
 extern int dmar_release_one_atsr(struct acpi_dmar_header *hdr, void *arg);
 extern int dmar_iommu_hotplug(struct dmar_drhd_unit *dmaru, bool insert);
+extern int dmar_rmrr_add_acpi_dev(u8 device_number, struct acpi_device *adev);
 extern int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info);
 #else /* !CONFIG_INTEL_IOMMU: */
 static inline int intel_iommu_init(void) { return -ENODEV; }
@@ -152,6 +156,11 @@ static inline int intel_iommu_init(void) { return -ENODEV; }
 #define	dmar_check_one_atsr		dmar_res_noop
 #define	dmar_release_one_atsr		dmar_res_noop
+static inline int dmar_rmrr_add_acpi_dev(u8 device_number, struct acpi_device *adev)
+{
+	return 0;
+}
+
 static inline int dmar_iommu_notify_scope_dev(struct dmar_pci_notify_info *info)
 {
 	return 0;
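For readers who have not followed the DMAR code, the refactoring above boils down to one reusable walk over packed, variable-length ACPI scope entries, applied first to each DRHD unit and then, via dmar_rmrr_add_acpi_dev(), to each RMRR unit. Below is a minimal user-space sketch of that walk; struct scope_hdr and find_acpi_scope() are simplified stand-ins invented for illustration, not the kernel's ACPI DMAR types. It uses void-pointer arithmetic, a GNU C extension the kernel itself relies on.

/*
 * Illustration only: the scope-walk pattern shared by
 * dmar_acpi_insert_dev_scope() for DRHD and RMRR scope lists.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { SCOPE_TYPE_NAMESPACE = 5 };	/* ACPI_DMAR_SCOPE_TYPE_NAMESPACE */

struct scope_hdr {
	uint8_t entry_type;
	uint8_t length;		/* total size of this entry in bytes */
	uint8_t enumeration_id;	/* ANDD device number to match */
	uint8_t bus;
};

static bool find_acpi_scope(void *start, void *end, uint8_t device_number,
			    uint8_t *bus_out)
{
	struct scope_hdr *scope;

	/* Entries are packed back to back; each states its own length. */
	for (; start < end; start += scope->length) {
		scope = start;
		if (scope->entry_type != SCOPE_TYPE_NAMESPACE)
			continue;
		if (scope->enumeration_id != device_number)
			continue;
		*bus_out = scope->bus;
		return true;	/* the kernel records the device here */
	}
	return false;
}

int main(void)
{
	struct scope_hdr table[2] = {
		{ .entry_type = 1, .length = sizeof(struct scope_hdr) },
		{ .entry_type = SCOPE_TYPE_NAMESPACE,
		  .length = sizeof(struct scope_hdr),
		  .enumeration_id = 3, .bus = 0x10 },
	};
	uint8_t bus;

	if (find_acpi_scope(table, table + 2, 3, &bus))
		printf("ANDD device 3 found on bus 0x%02x\n", bus);
	return 0;
}

Compiled stand-alone this prints the bus of the matching entry; in the kernel the match instead records the ACPI device in a free dmar_dev_scope slot under RCU.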
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
mainline inclusion
from mainline-5.5
commit b971880fe79f4042aaaf426744a5b19521bf77b3
category: x86/Kconfig
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
AMD 2nd generation EPYC processors support the UMIP (User-Mode Instruction Prevention) feature. So, rename X86_INTEL_UMIP to generic X86_UMIP and modify the text to cover both Intel and AMD.
[ bp: Take care of the disabled-features.h copy in tools/ too. ]
Signed-off-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "x86@kernel.org" <x86@kernel.org>
Link: https://lkml.kernel.org/r/157298912544.17462.2018334793891409521.stgit@naple...
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/Kconfig                               | 16 ++++++++--------
 arch/x86/include/asm/disabled-features.h       |  2 +-
 arch/x86/include/asm/umip.h                    |  4 ++--
 arch/x86/kernel/Makefile                       |  2 +-
 tools/arch/x86/include/asm/disabled-features.h |  2 +-
 5 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2b0695630031..5e00e8900748 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1867,16 +1867,16 @@ config X86_SMAP
 	  If unsure, say Y.

-config X86_INTEL_UMIP
+config X86_UMIP
 	def_bool y
-	depends on CPU_SUP_INTEL
-	prompt "Intel User Mode Instruction Prevention" if EXPERT
+	depends on CPU_SUP_INTEL || CPU_SUP_AMD
+	prompt "User Mode Instruction Prevention" if EXPERT
 	---help---
-	  The User Mode Instruction Prevention (UMIP) is a security
-	  feature in newer Intel processors.  If enabled, a general
-	  protection fault is issued if the SGDT, SLDT, SIDT, SMSW
-	  or STR instructions are executed in user mode. These instructions
-	  unnecessarily expose information about the hardware state.
+	  User Mode Instruction Prevention (UMIP) is a security feature in
+	  some x86 processors. If enabled, a general protection fault is
+	  issued if the SGDT, SLDT, SIDT, SMSW or STR instructions are
+	  executed in user mode. These instructions unnecessarily expose
+	  information about the hardware state.
 	  The vast majority of applications do not use these instructions.
 	  For the very few that do, software emulation is provided in
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index 33833d1909af..9d9da3487425 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -16,7 +16,7 @@
 # define DISABLE_MPX	(1<<(X86_FEATURE_MPX & 31))
 #endif
-#ifdef CONFIG_X86_INTEL_UMIP
+#ifdef CONFIG_X86_UMIP
 # define DISABLE_UMIP	0
 #else
 # define DISABLE_UMIP	(1<<(X86_FEATURE_UMIP & 31))
diff --git a/arch/x86/include/asm/umip.h b/arch/x86/include/asm/umip.h
index db43f2a0d92c..aeed98c3c9e1 100644
--- a/arch/x86/include/asm/umip.h
+++ b/arch/x86/include/asm/umip.h
@@ -4,9 +4,9 @@
 #include <linux/types.h>
 #include <asm/ptrace.h>
-#ifdef CONFIG_X86_INTEL_UMIP
+#ifdef CONFIG_X86_UMIP
 bool fixup_umip_exception(struct pt_regs *regs);
 #else
 static inline bool fixup_umip_exception(struct pt_regs *regs) { return false; }
-#endif /* CONFIG_X86_INTEL_UMIP */
+#endif /* CONFIG_X86_UMIP */
 #endif /* _ASM_X86_UMIP_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index da0b6bc090f3..66835d9a6f72 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -134,7 +134,7 @@ obj-$(CONFIG_EFI)			+= sysfb_efi.o
 obj-$(CONFIG_PERF_EVENTS)		+= perf_regs.o
 obj-$(CONFIG_TRACING)			+= tracepoint.o
 obj-$(CONFIG_SCHED_MC_PRIO)		+= itmt.o
-obj-$(CONFIG_X86_INTEL_UMIP)		+= umip.o
+obj-$(CONFIG_X86_UMIP)			+= umip.o
 obj-$(CONFIG_UNWINDER_ORC)		+= unwind_orc.o
 obj-$(CONFIG_UNWINDER_FRAME_POINTER)	+= unwind_frame.o
diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
index 33833d1909af..9d9da3487425 100644
--- a/tools/arch/x86/include/asm/disabled-features.h
+++ b/tools/arch/x86/include/asm/disabled-features.h
@@ -16,7 +16,7 @@
 # define DISABLE_MPX	(1<<(X86_FEATURE_MPX & 31))
 #endif
-#ifdef CONFIG_X86_INTEL_UMIP
+#ifdef CONFIG_X86_UMIP
 # define DISABLE_UMIP	0
 #else
 # define DISABLE_UMIP	(1<<(X86_FEATURE_UMIP & 31))
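To make concrete what the renamed option gates, here is a minimal user-space demo written for this note, not code from the patch: with CONFIG_X86_UMIP active on UMIP-capable hardware, the SGDT below either raises #GP (delivered as SIGSEGV) or, where the kernel emulates the instruction, returns a dummy descriptor base instead of the real one.

/*
 * Illustration only: run SGDT from user mode and print the result.
 * Build with gcc on x86-64; behavior depends on UMIP enforcement
 * and on whether the kernel emulates the instruction.
 */
#include <stdint.h>
#include <stdio.h>

struct __attribute__((packed)) desc_ptr {
	uint16_t limit;
	uint64_t base;		/* 64-bit mode: 8-byte base address */
};

int main(void)
{
	struct desc_ptr gdt = { 0, 0 };

	asm volatile("sgdt %0" : "=m" (gdt));
	printf("GDT base 0x%llx, limit 0x%x\n",
	       (unsigned long long)gdt.base, gdt.limit);
	return 0;
}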
From: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
mainline inclusion
from mainline-5.7
commit bdb04a1abbf92c998f1afb5f00a037f2edaec1f7
category: x86/Kconfig
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=19
CVE: NA
----------------------------------------------------------------
Some Centaur family 7 CPUs and Zhaoxin family 7 CPUs support the UMIP feature too. The text size growth which UMIP adds is ~1K, and distro kernels enable it anyway, so remove the vendor dependency.
[ bp: Rewrite commit message. ]
Signed-off-by: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/1583733990-2587-1-git-send-email-TonyWWang-oc@zhao...
Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
---
 arch/x86/Kconfig | 1 -
 1 file changed, 1 deletion(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5e00e8900748..dd4dfb80ac6c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1869,7 +1869,6 @@ config X86_SMAP
 config X86_UMIP
 	def_bool y
-	depends on CPU_SUP_INTEL || CPU_SUP_AMD
 	prompt "User Mode Instruction Prevention" if EXPERT
 	---help---
 	  User Mode Instruction Prevention (UMIP) is a security feature in
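With the Kconfig dependency dropped, the option now builds for every vendor, but enforcement still depends on the CPU actually advertising UMIP. A quick way to check that from user space (again an illustration, not part of the patch) is CPUID leaf 7, sub-leaf 0, ECX bit 2, the same bit the kernel maps to X86_FEATURE_UMIP:

/*
 * Illustration only: UMIP support is advertised in
 * CPUID.(EAX=7,ECX=0):ECX[bit 2].  Uses the <cpuid.h> helper
 * shipped with GCC and Clang.
 */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 7 not supported");
		return 1;
	}
	printf("UMIP %s by this CPU\n",
	       (ecx & (1u << 2)) ? "supported" : "not supported");
	return 0;
}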
On 2021/4/7 14:12, Cheng Jian wrote:
Adds Zhaoxin CPU support
Reviewed-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
For this series, Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
On 2021/4/7 14:12, Cheng Jian wrote:
Adds Zhaoxin CPU support
Applied.
On 2021/4/7 14:12, Cheng Jian wrote:
Adds Zhaoxin CPU support