bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
Babu Moger (1):
      KVM: SVM: Clear the CR4 register on reset

David Edmondson (1):
      KVM: x86: clflushopt should be treated as a no-op by emulation

Fenghua Yu (2):
      x86/cpufeatures: Enumerate MOVDIRI instruction
      x86/cpufeatures: Enumerate MOVDIR64B instruction

Haiyan Song (2):
      perf vendor events intel: Add Icelake V1.00 event file
      perf vendor events intel: Add Tremontx event file v1.02

Isaac Vaughn (1):
      EDAC/amd64: Add PCI device IDs for family 17h, model 70h

Jan H. Schönherr (1):
      x86/mce: Fix use of uninitialized MCE message string

Jim Mattson (1):
      kvm: x86: Expose RDPID in KVM_GET_SUPPORTED_CPUID

John Allen (2):
      kvm/svm: PKU not currently supported
      x86/microcode/AMD: Increase microcode PATCH_MAX_SIZE

John Garry (4):
      perf jevents: Add support for Hisi hip08 DDRC PMU aliasing
      perf jevents: Add support for Hisi hip08 HHA PMU aliasing
      perf jevents: Add support for Hisi hip08 L3C PMU aliasing
      perf vendor events arm64: Fix Hisi hip08 DDRC PMU eventname

Kai Huang (3):
      kvm: x86: Move kvm_set_mmio_spte_mask() from x86.c to mmu.c
      kvm: x86: Fix reserved bits related calculation errors caused by MKTME
      kvm: x86: Fix L1TF mitigation for shadow MMU

Kan Liang (1):
      perf vendor events intel: Add uncore_upi JSON support

Kim Phillips (17):
      perf/amd/uncore: Prepare L3 thread mask code for Family 19h
      perf/amd/uncore: Make L3 thread mask code more readable
      perf/amd/uncore: Add support for Family 19h L3 PMU
      arch/x86/amd/ibs: Fix re-arming IBS Fetch
      perf/x86/amd/ibs: Fix raw sample data accumulation
      perf/amd/uncore: Set all slices and threads to restore perf stat -a behaviour
      perf/x86/amd/ibs: Don't include randomized bits in get_ibs_op_count()
      perf/amd/uncore: Prepare to scale for more attributes that vary per family
      perf/amd/uncore: Allow F17h user threadmask and slicemask specification
      perf/amd/uncore: Allow F19h user coreid, threadmask, and sliceid specification
      perf vendor events amd: Add L3 cache events for Family 17h
      perf vendor events amd: Remove redundant '['
      perf vendor events amd: Enable Family 19h users by matching Zen2 events
      x86/cpu/amd: Call init_amd_zn() om Family 19h processors too
      perf/x86/amd/ibs: Support 27-bit extended Op/cycle counter
      tools/power turbostat: Support AMD Family 19h
      perf vendor events amd: Add L2 Prefetch events for zen1

Krish Sadhukhan (1):
      KVM: SVM: Replace hard-coded value with #define

Like Xu (1):
      perf/x86/amd: Don't touch the AMD64_EVENTSEL_HOSTONLY bit inside the guest

Liu Jingqi (2):
      KVM: x86: expose MOVDIRI CPU feature into VM.
      KVM: x86: expose MOVDIR64B CPU feature into VM.

Maciej S. Szmigiero (1):
      KVM: mmu: Fix SPTE encoding of MMIO generation upper half

Marcel Bocu (1):
      x86/amd_nb: Add PCI device IDs for family 17h, model 70h

Martin Liška (1):
      perf vendor events amd: perf PMU events for AMD Family 17h

Nathan Chancellor (2):
      crypto: ccp - Remove forward declaration
      perf/amd/uncore: Fix sysfs type mismatch

Paolo Bonzini (3):
      KVM: x86: only do L1TF workaround on affected processors
      KVM: x86: assign two bits to track SPTE kinds
      KVM: x86: fix overlap between SPTE_MMIO_MASK and generation

Rasmus Villemoes (1):
      build_bug.h: add wrapper for _Static_assert

Sean Christopherson (13):
      KVM: nVMX: Allocate and configure VM{READ,WRITE} bitmaps iff enable_shadow_vmcs
      KVM: x86: Add requisite includes to kvm_cache_regs.h
      KVM: x86: Add requisite includes to hyperv.h
      KVM: x86: Use a u64 when passing the MMIO gen around
      KVM: Explicitly define the "memslot update in-progress" bit
      KVM: x86: Refactor the MMIO SPTE generation handling
      KVM: x86: Rename access permissions cache member in struct kvm_vcpu_arch
      KVM: x86/mmu: Add explicit access mask for MMIO SPTEs
      KVM: x86/mmu: Consolidate "is MMIO SPTE" code
      KVM: x86/mmu: Apply max PA check for MMIO sptes to 32-bit KVM
      KVM: x86/mmu: Set mmio_value to '0' if reserved #PF can't be generated
      KVM: Remove the hack to trigger memslot generation wraparound
      KVM: Move the memslot update in-progress flag to bit 63

Sebastian Andrzej Siewior (1):
      x86/pkeys: Don't check if PKRU is zero before writing it

Tom Lendacky (1):
      KVM: SVM: Override default MMIO mask if memory encryption is enabled

Vijay Thakkar (3):
      perf vendor events amd: Restrict model detection for zen1 based processors
      perf vendor events amd: Add Zen2 events
      perf vendor events amd: Update Zen1 events to V2

Woods, Brian (2):
      hwmon/k10temp, x86/amd_nb: Consolidate shared device IDs
      x86/amd_nb: Add PCI device IDs for family 17h, model 30h

Yazen Ghannam (24):
      EDAC/amd64: Drop some family checks for newer systems
      x86/amd_nb: Add Family 19h PCI IDs
      EDAC/mce_amd: Always load on SMCA systems
      x86/MCE/AMD, EDAC/mce_amd: Add new MP5, NBIO, and PCIE SMCA bank types
      x86/MCE/AMD, EDAC/mce_amd: Add new McaTypes for CS, PSP, and SMU units
      x86/MCE/AMD, EDAC/mce_amd: Add new error descriptions for some SMCA bank types
      x86/MCE/AMD, EDAC/mce_amd: Add new Load Store unit McaType
      EDAC/amd64: Use a macro for iterating over Unified Memory Controllers
      EDAC/amd64: Support more than two controllers for chip selects handling
      EDAC/amd64: Initialize DIMM info for systems with more than two channels
      EDAC/amd64: Add Family 17h Model 30h PCI IDs
      EDAC/amd64: Support more than two Unified Memory Controllers
      EDAC/amd64: Set maximum channel layer size depending on family
      EDAC/amd64: Recognize x16 symbol size
      EDAC/amd64: Adjust printed chip select sizes when interleaved
      EDAC/amd64: Find Chip Select memory size using Address Mask
      EDAC/amd64: Cache secondary Chip Select registers
      EDAC/amd64: Support asymmetric dual-rank DIMMs
      EDAC/amd64: Set grain per DIMM
      EDAC/amd64: Make struct amd64_family_type global
      EDAC/amd64: Gather hardware information early
      EDAC/amd64: Save max number of controllers to family type
      EDAC/amd64: Add family ops for Family 19h Models 00h-0Fh
      EDAC/amd64: Handle three rank interleaving mode
 Documentation/virtual/kvm/mmu.txt                 |  13 +-
 arch/x86/events/amd/ibs.c                         |  93 +-
 arch/x86/events/amd/uncore.c                      | 179 ++--
 arch/x86/events/perf_event.h                      |   3 +-
 arch/x86/include/asm/cpufeatures.h                |   4 +-
 arch/x86/include/asm/kvm_host.h                   |  10 +-
 arch/x86/include/asm/mce.h                        |   7 +
 arch/x86/include/asm/microcode_amd.h              |   2 +-
 arch/x86/include/asm/msr-index.h                  |   1 +
 arch/x86/include/asm/perf_event.h                 |  16 +-
 arch/x86/kernel/amd_nb.c                          |  15 +-
 arch/x86/kernel/cpu/amd.c                         |   3 +-
 arch/x86/kernel/cpu/mce/amd.c                     |  28 +-
 arch/x86/kernel/cpu/mce/core.c                    |   4 +-
 arch/x86/kvm/cpuid.c                              |   6 +-
 arch/x86/kvm/emulate.c                            |   8 +-
 arch/x86/kvm/hyperv.h                             |   2 +
 arch/x86/kvm/kvm_cache_regs.h                     |   2 +
 arch/x86/kvm/mmu.c                                | 215 +++--
 arch/x86/kvm/mmu.h                                |   2 +-
 arch/x86/kvm/svm.c                                |  53 +-
 arch/x86/kvm/vmx.c                                |  51 +-
 arch/x86/kvm/x86.c                                |  33 +-
 arch/x86/kvm/x86.h                                |   4 +-
 arch/x86/mm/pkeys.c                               |   7 -
 drivers/crypto/ccp/sp-platform.c                  |  53 +-
 drivers/edac/amd64_edac.c                         | 609 ++++++++----
 drivers/edac/amd64_edac.h                         |  31 +-
 drivers/edac/mce_amd.c                            | 134 ++-
 drivers/hwmon/k10temp.c                           |   9 +-
 include/linux/build_bug.h                         |  19 +
 include/linux/kvm_host.h                          |  21 +
 include/linux/pci_ids.h                           |   5 +
 .../arm64/hisilicon/hip08/uncore-ddrc.json        |  44 +
 .../arm64/hisilicon/hip08/uncore-hha.json         |  51 +
 .../arm64/hisilicon/hip08/uncore-l3c.json         |  37 +
 tools/perf/pmu-events/arch/x86/amdzen1/branch.json |  23 +
 tools/perf/pmu-events/arch/x86/amdzen1/cache.json | 312 ++++++
 tools/perf/pmu-events/arch/x86/amdzen1/core.json  | 125 +++
 .../arch/x86/amdzen1/floating-point.json          | 224 +++++
 tools/perf/pmu-events/arch/x86/amdzen1/memory.json | 184 ++++
 tools/perf/pmu-events/arch/x86/amdzen1/other.json |  56 ++
 tools/perf/pmu-events/arch/x86/amdzen2/branch.json |  52 +
 tools/perf/pmu-events/arch/x86/amdzen2/cache.json | 338 +++++++
 tools/perf/pmu-events/arch/x86/amdzen2/core.json  | 130 +++
 .../arch/x86/amdzen2/floating-point.json          | 140 +++
 tools/perf/pmu-events/arch/x86/amdzen2/memory.json | 341 +++++++
 tools/perf/pmu-events/arch/x86/amdzen2/other.json | 115 +++
 tools/perf/pmu-events/arch/x86/icelake/cache.json | 552 +++++++++++
 .../arch/x86/icelake/floating-point.json          | 102 ++
 .../pmu-events/arch/x86/icelake/frontend.json     | 424 +++++++++
 tools/perf/pmu-events/arch/x86/icelake/memory.json | 410 ++++++++
 tools/perf/pmu-events/arch/x86/icelake/other.json | 121 +++
 .../pmu-events/arch/x86/icelake/pipeline.json     | 892 ++++++++++++++++++
 .../arch/x86/icelake/virtual-memory.json          | 236 +++++
 tools/perf/pmu-events/arch/x86/mapfile.csv        |   6 +
 .../pmu-events/arch/x86/tremontx/cache.json       | 111 +++
 .../arch/x86/tremontx/frontend.json               |  26 +
 .../pmu-events/arch/x86/tremontx/memory.json      |  26 +
 tools/perf/pmu-events/arch/x86/tremontx/other.json |  26 +
 .../arch/x86/tremontx/pipeline.json               | 111 +++
 .../arch/x86/tremontx/uncore-memory.json          |  73 ++
 .../arch/x86/tremontx/uncore-other.json           | 431 +++++++++
 .../arch/x86/tremontx/uncore-power.json           |  11 +
 .../arch/x86/tremontx/virtual-memory.json         |  86 ++
 tools/perf/pmu-events/jevents.c                   |   5 +
 tools/power/x86/turbostat/turbostat.c             |  34 +-
 virt/kvm/kvm_main.c                               |  36 +-
 68 files changed, 6981 insertions(+), 552 deletions(-)
 create mode 100644 tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json
 create mode 100644 tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json
 create mode 100644 tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen1/branch.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen1/cache.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen1/core.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen1/floating-point.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen1/memory.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen1/other.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/branch.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/cache.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/core.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/floating-point.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/memory.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/other.json
 create mode 100644 tools/perf/pmu-events/arch/x86/icelake/cache.json
 create mode 100644 tools/perf/pmu-events/arch/x86/icelake/floating-point.json
 create mode 100644 tools/perf/pmu-events/arch/x86/icelake/frontend.json
 create mode 100644 tools/perf/pmu-events/arch/x86/icelake/memory.json
 create mode 100644 tools/perf/pmu-events/arch/x86/icelake/other.json
 create mode 100644 tools/perf/pmu-events/arch/x86/icelake/pipeline.json
 create mode 100644 tools/perf/pmu-events/arch/x86/icelake/virtual-memory.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/cache.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/frontend.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/memory.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/other.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/pipeline.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/uncore-memory.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/uncore-other.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/uncore-power.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/virtual-memory.json
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

mainline inclusion
from mainline-v5.2-rc1
commit 0556cbdc2fbcb3068e5b924a8b3d5386ae0dd27d
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
write_pkru() already checks whether the new value matches the current one and skips the write when they are equal, so the caller's special case for zero (skipping the write when both the current and the new value are zero) is redundant; we can rely on that check instead.

Remove the zero check of PKRU; __write_pkru() provides such a check now.
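For reference, the guard that makes this removal safe lives in __write_pkru(); a minimal sketch of the idea, not the verbatim kernel source:

/*
 * Sketch: skip the WRPKRU when the value would not change, since any
 * write takes PKRU out of the XSAVE 'init state' and a redundant
 * write would only add context-switch cost.
 */
static inline void __write_pkru(u32 pkru)
{
	if (pkru == rdpkru())
		return;
	wrpkru(pkru);
}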
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Cc: kvm ML <kvm@vger.kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190403164156.19645-15-bigeasy@linutronix.de
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/mm/pkeys.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index 6e98e0a7c923..552233885f8f 100644
--- a/arch/x86/mm/pkeys.c
+++ b/arch/x86/mm/pkeys.c
@@ -146,13 +146,6 @@ u32 init_pkru_value = PKRU_AD_KEY( 1) | PKRU_AD_KEY( 2) | PKRU_AD_KEY( 3) |
 void copy_init_pkru_to_fpregs(void)
 {
 	u32 init_pkru_value_snapshot = READ_ONCE(init_pkru_value);
-	/*
-	 * Any write to PKRU takes it out of the XSAVE 'init
-	 * state' which increases context switch cost. Avoid
-	 * writing 0 when PKRU was already 0.
-	 */
-	if (!init_pkru_value_snapshot && !read_pkru())
-		return;
 	/*
 	 * Override the PKRU state that came from 'init_fpstate'
 	 * with the baseline from the process.
From: Fenghua Yu <fenghua.yu@intel.com>

mainline inclusion
from mainline-v4.20-rc1
commit 33823f4d63f7a010653d219800539409a78ef4be
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
MOVDIRI moves a doubleword or quadword from a register to memory as a direct store. Direct stores are implemented with write combining (WC) and write the data directly to memory without caching it.

Programmable agents can handle streaming offload (e.g. high-speed packet processing in networking). Hardware implements a doorbell (tail pointer) register that is updated by software when adding new work elements to the streaming-offload work queue.
MOVDIRI can be used for the doorbell write, which is a 4-byte or 8-byte uncacheable write to MMIO. MOVDIRI has lower overhead than other ways to write the doorbell.
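As an illustration, a userspace doorbell write built on the compiler's MOVDIRI intrinsic could look like the sketch below ('doorbell' is a hypothetical mmap()ed MMIO register; build with -mmovdiri):

#include <immintrin.h>
#include <stdint.h>

/* Hypothetical doorbell register, e.g. an mmap()ed device BAR. */
static volatile uint64_t *doorbell;

static void ring_doorbell(uint64_t new_tail)
{
	/* _directstoreu_u64() emits MOVDIRI: a single 8-byte direct
	 * store, no read-for-ownership of the cache line, not cached. */
	_directstoreu_u64((void *)doorbell, new_tail);
}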
Availability of the MOVDIRI instruction is indicated by the presence of the CPUID feature flag MOVDIRI (CPUID.0x07.0x0:ECX[bit 27]).
Please check the latest Intel Architecture Instruction Set Extensions and Future Features Programming Reference for more details on the CPUID feature MOVDIRI flag.
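Software is expected to check the flag before using the instruction; a userspace sketch using GCC's cpuid.h:

#include <cpuid.h>
#include <stdbool.h>

/* CPUID.(EAX=07H, ECX=0):ECX[bit 27] == MOVDIRI */
static bool cpu_has_movdiri(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
		return false;
	return ecx & (1u << 27);
}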
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1540418237-125817-2-git-send-email-fenghua.yu@intel...
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/include/asm/cpufeatures.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b40125de2770..35ff9c7168c9 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -363,6 +363,7 @@
 #define X86_FEATURE_LA57		(16*32+16) /* 5-level page tables */
 #define X86_FEATURE_RDPID		(16*32+22) /* RDPID instruction */
 #define X86_FEATURE_CLDEMOTE		(16*32+25) /* CLDEMOTE instruction */
+#define X86_FEATURE_MOVDIRI		(16*32+27) /* MOVDIRI instruction */
 
 /* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */
 #define X86_FEATURE_OVERFLOW_RECOV	(17*32+ 0) /* MCA overflow recovery support */
From: Fenghua Yu <fenghua.yu@intel.com>

mainline inclusion
from mainline-v4.20-rc1
commit ace6485a03266cc3c198ce8e927a1ce0ce139699
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
MOVDIR64B moves 64 bytes as a direct store with 64-byte write atomicity. Direct stores are implemented with write combining (WC) and write the data directly to memory without caching it.
For low-latency offload (e.g. to Non-Volatile Memory), MOVDIR64B writes work descriptors (and data in some cases) to device-hosted work queues atomically and without cache pollution.
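As an illustration, submitting such a descriptor from userspace with the compiler's MOVDIR64B intrinsic might look like the sketch below ('portal' and the descriptor layout are hypothetical; build with -mmovdir64b):

#include <immintrin.h>
#include <stdint.h>

/* Hypothetical 64-byte work descriptor for a device-hosted work queue. */
struct work_desc {
	uint64_t words[8];
} __attribute__((aligned(64)));

static void submit_desc(volatile void *portal, const struct work_desc *desc)
{
	/* _movdir64b() emits MOVDIR64B: the 64-byte write to the
	 * destination is atomic; the destination must be 64-byte
	 * aligned, the source need not be. */
	_movdir64b((void *)portal, desc);
}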
Availability of the MOVDIR64B instruction is indicated by the presence of the CPUID feature flag MOVDIR64B (CPUID.0x07.0x0:ECX[bit 28]).
Please check the latest Intel Architecture Instruction Set Extensions and Future Features Programming Reference for more details on the CPUID feature MOVDIR64B flag.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1540418237-125817-3-git-send-email-fenghua.yu@intel...
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/include/asm/cpufeatures.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 35ff9c7168c9..b40e72201ccb 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -364,6 +364,7 @@
 #define X86_FEATURE_RDPID		(16*32+22) /* RDPID instruction */
 #define X86_FEATURE_CLDEMOTE		(16*32+25) /* CLDEMOTE instruction */
 #define X86_FEATURE_MOVDIRI		(16*32+27) /* MOVDIRI instruction */
+#define X86_FEATURE_MOVDIR64B		(16*32+28) /* MOVDIR64B instruction */
 
 /* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */
 #define X86_FEATURE_OVERFLOW_RECOV	(17*32+ 0) /* MCA overflow recovery support */
From: Liu Jingqi <jingqi.liu@intel.com>

mainline inclusion
from mainline-v5.1-rc1
commit 74f2370bb64f5c1c418f115e338f20f11b75f954
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
MOVDIRI moves a doubleword or quadword from a register to memory as a direct store. Direct stores are implemented with write combining (WC) and write the data directly to memory without caching it.

Availability of the MOVDIRI instruction is indicated by the presence of the CPUID feature flag MOVDIRI (CPUID.0x07.0x0:ECX[bit 27]).
This patch exposes the movdiri feature to the guest.
The release document can be found at the link below:
https://software.intel.com/sites/default/files/managed/c5/15/\
architecture-instruction-set-extensions-programming-reference.pdf
Signed-off-by: Liu Jingqi <jingqi.liu@intel.com>
Cc: Xu Tao <tao3.xu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 89d26399eaad..77a86af73c0c 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -353,7 +353,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
 		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ |
 		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
 		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
-		F(CLDEMOTE);
+		F(CLDEMOTE) | F(MOVDIRI);
 
 	/* cpuid 7.0.edx*/
 	const u32 kvm_cpuid_7_0_edx_x86_features =
From: Liu Jingqi <jingqi.liu@intel.com>

mainline inclusion
from mainline-v5.1-rc1
commit c029b5deb0b5d7e5090317b835f21c5d93999db7
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
MOVDIR64B moves 64 bytes as a direct store with 64-byte write atomicity. Direct stores are implemented with write combining (WC) and write the data directly to memory without caching it.
Availability of the MOVDIR64B instruction is indicated by the presence of the CPUID feature flag MOVDIR64B (CPUID.0x07.0x0:ECX[bit 28]).
This patch exposes the movdir64b feature to the guest.
The release document can be found at the link below:
https://software.intel.com/sites/default/files/managed/c5/15/\
architecture-instruction-set-extensions-programming-reference.pdf
Signed-off-by: Liu Jingqi <jingqi.liu@intel.com>
Cc: Xu Tao <tao3.xu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 77a86af73c0c..5249a60966dc 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -353,7 +353,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
 		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ |
 		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
 		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
-		F(CLDEMOTE) | F(MOVDIRI);
+		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B);
 
 	/* cpuid 7.0.edx*/
 	const u32 kvm_cpuid_7_0_edx_x86_features =
From: Jim Mattson <jmattson@google.com>

mainline inclusion
from mainline-v5.4-rc5
commit 41cd02c6f7f6e66e7abf02a4379e355a7db89f78
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
When the RDPID instruction is supported on the host, enumerate it in KVM_GET_SUPPORTED_CPUID.
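For reference, RDPID returns the value of IA32_TSC_AUX (which Linux loads with an encoding of the CPU and node numbers) without also computing the TSC as RDTSCP does; a minimal userspace sketch:

#include <stdint.h>

static uint64_t read_tsc_aux(void)
{
	uint64_t aux;

	/* RDPID loads IA32_TSC_AUX into the destination register. */
	asm volatile("rdpid %0" : "=r"(aux));
	return aux;
}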
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 5249a60966dc..3d84c57304aa 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -350,7 +350,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
 
 	/* cpuid 7.0.ecx*/
 	const u32 kvm_cpuid_7_0_ecx_x86_features =
-		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ |
+		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ | F(RDPID) |
 		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
 		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
 		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B);
From: Sean Christopherson <sean.j.christopherson@intel.com>

mainline inclusion
from mainline-v5.0-rc1
commit dfae3c03b89fd5547b1adee857b10dc6f1c66132
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
...and make enable_shadow_vmcs depend on nested. Aside from the obvious memory savings, this will allow moving the relevant code out of vmx.c in the future, e.g. to a nested specific file.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/vmx.c | 43 +++++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index b8fceac285f6..145708925b69 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4835,6 +4835,9 @@ static void init_vmcs_shadow_fields(void)
 {
 	int i, j;
 
+	memset(vmx_vmread_bitmap, 0xff, PAGE_SIZE);
+	memset(vmx_vmwrite_bitmap, 0xff, PAGE_SIZE);
+
 	for (i = j = 0; i < max_shadow_read_only_fields; i++) {
 		u16 field = shadow_read_only_fields[i];
 		if (vmcs_field_width(field) == VMCS_FIELD_WIDTH_U64 &&
@@ -7898,19 +7901,8 @@ static __init int hardware_setup(void)
 	for (i = 0; i < ARRAY_SIZE(vmx_msr_index); ++i)
 		kvm_define_shared_msr(i, vmx_msr_index[i]);
 
-	for (i = 0; i < VMX_BITMAP_NR; i++) {
-		vmx_bitmap[i] = (unsigned long *)__get_free_page(GFP_KERNEL);
-		if (!vmx_bitmap[i])
-			goto out;
-	}
-
-	memset(vmx_vmread_bitmap, 0xff, PAGE_SIZE);
-	memset(vmx_vmwrite_bitmap, 0xff, PAGE_SIZE);
-
-	if (setup_vmcs_config(&vmcs_config) < 0) {
-		r = -EIO;
-		goto out;
-	}
+	if (setup_vmcs_config(&vmcs_config) < 0)
+		return -EIO;
 
 	if (boot_cpu_has(X86_FEATURE_NX))
 		kvm_enable_efer_bits(EFER_NX);
@@ -8021,10 +8013,18 @@ static __init int hardware_setup(void)
 		kvm_x86_ops->cancel_hv_timer = NULL;
 	}
 
-	if (!cpu_has_vmx_shadow_vmcs())
+	if (!cpu_has_vmx_shadow_vmcs() || !nested)
 		enable_shadow_vmcs = 0;
-	if (enable_shadow_vmcs)
+	if (enable_shadow_vmcs) {
+		for (i = 0; i < VMX_BITMAP_NR; i++) {
+			vmx_bitmap[i] = (unsigned long *)
+				__get_free_page(GFP_KERNEL);
+			if (!vmx_bitmap[i])
+				goto out;
+		}
+
 		init_vmcs_shadow_fields();
+	}
 
 	kvm_set_posted_intr_wakeup_handler(wakeup_handler);
 	nested_vmx_setup_ctls_msrs(&vmcs_config.nested, enable_apicv);
@@ -8037,9 +8037,10 @@ static __init int hardware_setup(void)
 	return 0;
 
 out:
-	for (i = 0; i < VMX_BITMAP_NR; i++)
-		free_page((unsigned long)vmx_bitmap[i]);
-
+	if (enable_shadow_vmcs) {
+		for (i = 0; i < VMX_BITMAP_NR; i++)
+			free_page((unsigned long)vmx_bitmap[i]);
+	}
 	return r;
 }
 
@@ -8047,8 +8048,10 @@ static __exit void hardware_unsetup(void)
 {
 	int i;
 
-	for (i = 0; i < VMX_BITMAP_NR; i++)
-		free_page((unsigned long)vmx_bitmap[i]);
+	if (enable_shadow_vmcs) {
+		for (i = 0; i < VMX_BITMAP_NR; i++)
+			free_page((unsigned long)vmx_bitmap[i]);
+	}
 
 	free_kvm_area();
 }
From: Sean Christopherson <sean.j.christopherson@intel.com>

mainline inclusion
from mainline-v5.0-rc1
commit 8ba2e525ecd7428e25d80f37c533612d62f2dc26
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Until this point vmx.c has been the only consumer and included the file after many others. Prepare for multiple consumers, i.e. the shattering of vmx.c.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/kvm_cache_regs.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 9619dcc2b325..f8f56a93358b 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -2,6 +2,8 @@
 #ifndef ASM_KVM_CACHE_REGS_H
 #define ASM_KVM_CACHE_REGS_H
 
+#include <linux/kvm_host.h>
+
 #define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS
 #define KVM_POSSIBLE_CR4_GUEST_BITS				  \
 	(X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \
From: Sean Christopherson <sean.j.christopherson@intel.com>

mainline inclusion
from mainline-v5.0-rc1
commit 3592cda6bc27fd6e73f73a6e793cbd0c09a07a36
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Until this point vmx.c has been the only consumer and included the file after many others. Prepare for multiple consumers, i.e. the shattering of vmx.c.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/hyperv.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index 0e66c12ed2c3..9c21c3479899 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -24,6 +24,8 @@
 #ifndef __ARCH_X86_KVM_HYPERV_H__
 #define __ARCH_X86_KVM_HYPERV_H__
 
+#include <linux/kvm_host.h>
+
 static inline struct kvm_vcpu_hv *vcpu_to_hv_vcpu(struct kvm_vcpu *vcpu)
 {
 	return &vcpu->arch.hyperv;
From: John Allen <john.allen@amd.com>

mainline inclusion
from mainline-v5.6-rc1
commit a47970ed74a535b1accb4bc73643fd5a93993c3e
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
The current SVM implementation does not support handling PKU. Guests running on a host with future AMD CPUs that support the feature will read garbage from the PKRU register and will hit segmentation faults on boot, because memory that should not be protected gets marked as such. Ensure that the CPUID exposed by SVM does not advertise the feature.
Signed-off-by: John Allen <john.allen@amd.com>
Cc: stable@vger.kernel.org
Fixes: 0556cbdc2fbc ("x86/pkeys: Don't check if PKRU is zero before writing it")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/cpuid.c            | 4 +++-
 arch/x86/kvm/svm.c              | 6 ++++++
 arch/x86/kvm/vmx.c              | 6 ++++++
 4 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4344e56c9925..a48f443fd7e6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1084,6 +1084,7 @@ struct kvm_x86_ops {
 	bool (*mpx_supported)(void);
 	bool (*xsaves_supported)(void);
 	bool (*umip_emulated)(void);
+	bool (*pku_supported)(void);
 
 	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
 	void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 3d84c57304aa..0d15273bb4a0 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -339,6 +339,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
 	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
 	unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
 	unsigned f_la57;
+	unsigned f_pku = kvm_x86_ops->pku_supported() ? F(PKU) : 0;
 
 	/* cpuid 7.0.ebx */
 	const u32 kvm_cpuid_7_0_ebx_x86_features =
@@ -350,7 +351,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
 
 	/* cpuid 7.0.ecx*/
 	const u32 kvm_cpuid_7_0_ecx_x86_features =
-		F(AVX512VBMI) | F(LA57) | F(PKU) | 0 /*OSPKE*/ | F(RDPID) |
+		F(AVX512VBMI) | F(LA57) | 0 /*PKU*/ | 0 /*OSPKE*/ | F(RDPID) |
 		F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) |
 		F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) |
 		F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B);
@@ -378,6 +379,7 @@ static inline void do_cpuid_7_mask(struct kvm_cpuid_entry2 *entry, int index)
 			/* Set LA57 based on hardware capability. */
 			entry->ecx |= f_la57;
 			entry->ecx |= f_umip;
+			entry->ecx |= f_pku;
 			/* PKU is not yet implemented for shadow paging. */
 			if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
 				entry->ecx &= ~F(PKU);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 236e94d56722..511f428a49c8 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5969,6 +5969,11 @@ static bool svm_has_wbinvd_exit(void)
 	return true;
 }
 
+static bool svm_pku_supported(void)
+{
+	return false;
+}
+
 #define PRE_EX(exit)  { .exit_code = (exit), \
 			.stage = X86_ICPT_PRE_EXCEPT, }
 #define POST_EX(exit) { .exit_code = (exit), \
@@ -7201,6 +7206,7 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 	.mpx_supported = svm_mpx_supported,
 	.xsaves_supported = svm_xsaves_supported,
 	.umip_emulated = svm_umip_emulated,
+	.pku_supported = svm_pku_supported,
 
 	.set_supported_cpuid = svm_set_supported_cpuid,
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 145708925b69..347d045a5567 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1938,6 +1938,11 @@ static bool vmx_umip_emulated(void)
 		SECONDARY_EXEC_DESC;
 }
 
+static inline bool vmx_pku_supported(void)
+{
+	return boot_cpu_has(X86_FEATURE_PKU);
+}
+
 static inline bool report_flexpriority(void)
 {
 	return flexpriority_enabled;
@@ -14497,6 +14502,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.mpx_supported = vmx_mpx_supported,
 	.xsaves_supported = vmx_xsaves_supported,
 	.umip_emulated = vmx_umip_emulated,
+	.pku_supported = vmx_pku_supported,
 
 	.check_nested_events = vmx_check_nested_events,
 	.request_immediate_exit = vmx_request_immediate_exit,
From: Sean Christopherson <sean.j.christopherson@intel.com>

mainline inclusion
from mainline-v5.1-rc1
commit 5192f9b976f9687569a90602b8a6c053da4498f6
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
KVM currently uses an 'unsigned int' for the MMIO generation number despite it being derived from the 64-bit memslots generation and being propagated to (potentially) 64-bit sptes. There is no hidden agenda behind using an 'unsigned int'; it's done simply because the MMIO generation will never set bits above bit 19.
Passing a u64 will allow the "update in-progress" flag to be relocated from bit 0 to bit 63 and will remove the need to cast the generation back to a u64 when propagating it to a spte.
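A toy illustration of why the width matters once the flag moves to bit 63 (nothing here is kernel code):

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint64_t gen = (1ULL << 63) | 0x1234;	/* flag + low generation bits */
	unsigned int narrow = (unsigned int)gen;

	assert(((uint64_t)narrow & (1ULL << 63)) == 0);	/* bit 63 silently lost */
	return 0;
}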
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/mmu.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index eddf91a0e363..a7d3aa4c59bb 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -349,20 +349,20 @@ static inline bool is_access_track_spte(u64 spte)
 #define MMIO_GEN_LOW_MASK	((1 << MMIO_GEN_LOW_SHIFT) - 2)
 #define MMIO_GEN_MASK		((1 << MMIO_GEN_SHIFT) - 1)
 
-static u64 generation_mmio_spte_mask(unsigned int gen)
+static u64 generation_mmio_spte_mask(u64 gen)
 {
 	u64 mask;
 
 	WARN_ON(gen & ~MMIO_GEN_MASK);
 
 	mask = (gen & MMIO_GEN_LOW_MASK) << MMIO_SPTE_GEN_LOW_SHIFT;
-	mask |= ((u64)gen >> MMIO_GEN_LOW_SHIFT) << MMIO_SPTE_GEN_HIGH_SHIFT;
+	mask |= (gen >> MMIO_GEN_LOW_SHIFT) << MMIO_SPTE_GEN_HIGH_SHIFT;
 	return mask;
 }
 
-static unsigned int get_mmio_spte_generation(u64 spte)
+static u64 get_mmio_spte_generation(u64 spte)
 {
-	unsigned int gen;
+	u64 gen;
 
 	spte &= ~shadow_mmio_mask;
 
@@ -371,7 +371,7 @@ static unsigned int get_mmio_spte_generation(u64 spte)
 	return gen;
 }
 
-static unsigned int kvm_current_mmio_generation(struct kvm_vcpu *vcpu)
+static u64 kvm_current_mmio_generation(struct kvm_vcpu *vcpu)
 {
 	return kvm_vcpu_memslots(vcpu)->generation & MMIO_GEN_MASK;
 }
@@ -379,7 +379,7 @@ static unsigned int kvm_current_mmio_generation(struct kvm_vcpu *vcpu)
 static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
 			   unsigned access)
 {
-	unsigned int gen = kvm_current_mmio_generation(vcpu);
+	u64 gen = kvm_current_mmio_generation(vcpu);
 	u64 mask = generation_mmio_spte_mask(gen);
 	u64 gpa = gfn << PAGE_SHIFT;
 
@@ -427,7 +427,7 @@ static bool set_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 
 static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte)
 {
-	unsigned int kvm_gen, spte_gen;
+	u64 kvm_gen, spte_gen;
 
 	kvm_gen = kvm_current_mmio_generation(vcpu);
 	spte_gen = get_mmio_spte_generation(spte);
From: Sean Christopherson <sean.j.christopherson@intel.com>

mainline inclusion
from mainline-v5.1-rc1
commit 361209e054a2c9f34da090ee1ee4c1e8bfe76a64
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
KVM uses bit 0 of the memslots generation as an "update in-progress" flag, which is used by x86 to prevent caching MMIO access while the memslots are changing. Although the intended behavior is flag-like, e.g. MMIO sptes intentionally drop the in-progress bit so as to avoid caching data from in-flux memslots, the implementation oftentimes treats the bit as part of the generation number itself, e.g. a memslots update increments the generation twice, once to set the flag and once to clear it.
Prior to commit 4bd518f1598d ("KVM: use separate generations for each address space"), incorporating the "update in-progress" bit into the generation number largely made sense, e.g. "real" generations are even, "bogus" generations are odd, most code doesn't need to be aware of the bit, etc...
Now that unique memslots generation numbers are assigned to each address space, stealthing the in-progress status into the generation number results in a wide variety of subtle code, e.g. kvm_create_vm() jumps over bit 0 when initializing the memslots generation without any hint as to why.
Explicitly define the flag and convert as much code as possible (which isn't much) to actually treat it like a flag. This paves the way for eventually using a different bit for "update in-progress" so that it can be a flag in truth instead of an awkward extension to the generation number.
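Distilled to a sketch, the protocol looks as follows (the names and the fixed increment are simplifications, not the kernel's exact code):

#include <stdint.h>

#define GEN_UPDATE_IN_PROGRESS	(1ULL << 0)	/* simplified stand-in */

static uint64_t memslots_generation;

static void begin_memslots_update(void)
{
	/* Publish "in progress": consumers must not cache under this gen. */
	memslots_generation |= GEN_UPDATE_IN_PROGRESS;
}

static void end_memslots_update(void)
{
	/* Drop the flag and advance to the next "real" generation. */
	memslots_generation = (memslots_generation & ~GEN_UPDATE_IN_PROGRESS) + 2;
}

static int may_cache_mmio(uint64_t gen)
{
	return !(gen & GEN_UPDATE_IN_PROGRESS);
}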
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/x86.h       |  2 +-
 include/linux/kvm_host.h | 21 +++++++++++++++++++++
 virt/kvm/kvm_main.c      | 26 +++++++++++++-------------
 3 files changed, 35 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 608e5f8c5d0a..40476fa647cf 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -188,7 +188,7 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
 {
 	u64 gen = kvm_memslots(vcpu->kvm)->generation;
 
-	if (unlikely(gen & 1))
+	if (unlikely(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS))
 		return;
 
 	/*
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 27e31d972dfb..4ae67f641951 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -49,6 +49,27 @@
  */
 #define KVM_MEMSLOT_INVALID	(1UL << 16)
 
+/*
+ * Bit 0 of the memslot generation number is an "update in-progress flag",
+ * e.g. is temporarily set for the duration of install_new_memslots().
+ * This flag effectively creates a unique generation number that is used to
+ * mark cached memslot data, e.g. MMIO accesses, as potentially being stale,
+ * i.e. may (or may not) have come from the previous memslots generation.
+ *
+ * This is necessary because the actual memslots update is not atomic with
+ * respect to the generation number update.  Updating the generation number
+ * first would allow a vCPU to cache a spte from the old memslots using the
+ * new generation number, and updating the generation number after switching
+ * to the new memslots would allow cache hits using the old generation number
+ * to reference the defunct memslots.
+ *
+ * This mechanism is used to prevent getting hits in KVM's caches while a
+ * memslot update is in-progress, and to prevent cache hits *after* updating
+ * the actual generation number against accesses that were inserted into the
+ * cache *before* the memslots were updated.
+ */
+#define KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS	BIT_ULL(0)
+
 /* Two fragments for cross MMIO pages. */
 #define KVM_MAX_MMIO_FRAGMENTS	2
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 78bcd465c4f4..8aecda31801f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -927,30 +927,30 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 		int as_id, struct kvm_memslots *slots)
 {
 	struct kvm_memslots *old_memslots = __kvm_memslots(kvm, as_id);
-	u64 gen;
+	u64 gen = old_memslots->generation;
 
-	/*
-	 * Set the low bit in the generation, which disables SPTE caching
-	 * until the end of synchronize_srcu_expedited.
-	 */
-	WARN_ON(old_memslots->generation & 1);
-	slots->generation = old_memslots->generation + 1;
+	WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
+	slots->generation = gen | KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS;
 
 	rcu_assign_pointer(kvm->memslots[as_id], slots);
 	synchronize_srcu_expedited(&kvm->srcu);
 
 	/*
-	 * Increment the new memslot generation a second time. This prevents
-	 * vm exits that race with memslot updates from caching a memslot
-	 * generation that will (potentially) be valid forever.
-	 *
+	 * Increment the new memslot generation a second time, dropping the
+	 * update in-progress flag and incrementing the generation based on
+	 * the number of address spaces.  This provides a unique and easily
+	 * identifiable generation number while the memslots are in flux.
+	 */
+	gen = slots->generation & ~KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS;
+
+	/*
 	 * Generations must be unique even across address spaces.  We do not need
 	 * a global counter for that, instead the generation space is evenly split
 	 * across address spaces.  For example, with two address spaces, address
-	 * space 0 will use generations 0, 4, 8, ... while * address space 1 will
+	 * space 0 will use generations 0, 4, 8, ... while address space 1 will
 	 * use generations 2, 6, 10, 14, ...
 	 */
-	gen = slots->generation + KVM_ADDRESS_SPACE_NUM * 2 - 1;
+	gen += KVM_ADDRESS_SPACE_NUM * 2;
 
 	kvm_arch_memslots_updated(kvm, gen);
From: Sean Christopherson <sean.j.christopherson@intel.com>

mainline inclusion
from mainline-v5.1-rc1
commit cae7ed3c2cb06680400adab632a243c5e5f42637
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
The code to propagate the memslots generation number into MMIO sptes is a bit convoluted. The "what" is relatively straightforward, e.g. the comment explaining which bits go where is quite readable, but the "how" requires a lot of staring to understand what is happening. For example, 'MMIO_GEN_LOW_SHIFT' is actually used to calculate the high bits of the spte, while 'MMIO_SPTE_GEN_LOW_SHIFT' is used to calculate the low bits.
Refactor the code to:
- use #defines whose values align with the bits defined in the comment
- use consistent code for both the high and low mask
- explicitly highlight the handling of bit 0 (update in-progress flag)
- explicitly call out that the defines are for MMIO sptes (to avoid confusion
  with the per-vCPU MMIO cache, which uses the full memslots generation)
In addition to making the code a little less magical, this paves the way for moving the update in-progress flag to bit 63 without having to simultaneously rewrite all of the MMIO spte code.
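For readers unfamiliar with it, GENMASK_ULL(h, l) from include/linux/bits.h produces a mask with bits h..l set, which is what the new defines lean on; a standalone restatement:

#include <stdint.h>
#include <stdio.h>

#define MY_GENMASK_ULL(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

int main(void)
{
	/* spte generation low bits, 11..3 */
	printf("%#llx\n", (unsigned long long)MY_GENMASK_ULL(11, 3));
	/* spte generation high bits, 61..52 */
	printf("%#llx\n", (unsigned long long)MY_GENMASK_ULL(61, 52));
	return 0;
}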
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/mmu.c | 76 ++++++++++++++++++++++++++--------------------
 1 file changed, 43 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a7d3aa4c59bb..1294804a5aff 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -333,30 +333,41 @@ static inline bool is_access_track_spte(u64 spte)
 }
 
 /*
- * the low bit of the generation number is always presumed to be zero.
- * This disables mmio caching during memslot updates.  The concept is
- * similar to a seqcount but instead of retrying the access we just punt
- * and ignore the cache.
+ * Due to limited space in PTEs, the MMIO generation is a 19 bit subset of
+ * the memslots generation and is derived as follows:
  *
- * spte bits 3-11 are used as bits 1-9 of the generation number,
- * the bits 52-61 are used as bits 10-19 of the generation number.
+ * Bits 1-9 of the memslot generation are propagated to spte bits 3-11
+ * Bits 10-19 of the memslot generation are propagated to spte bits 52-61
+ *
+ * The MMIO generation starts at bit 1 of the memslots generation in order to
+ * skip over bit 0, the KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS flag.  Including
+ * the flag would require stealing a bit from the "real" generation number and
+ * thus effectively halve the maximum number of MMIO generations that can be
+ * handled before encountering a wrap (which requires a full MMU zap).  The
+ * flag is instead explicitly queried when checking for MMIO spte cache hits.
  */
-#define MMIO_SPTE_GEN_LOW_SHIFT	2
-#define MMIO_SPTE_GEN_HIGH_SHIFT	52
-
-#define MMIO_GEN_SHIFT			20
-#define MMIO_GEN_LOW_SHIFT		10
-#define MMIO_GEN_LOW_MASK		((1 << MMIO_GEN_LOW_SHIFT) - 2)
-#define MMIO_GEN_MASK			((1 << MMIO_GEN_SHIFT) - 1)
-
+#define MMIO_SPTE_GEN_MASK		GENMASK_ULL(19, 1)
+#define MMIO_SPTE_GEN_SHIFT		1
+
+#define MMIO_SPTE_GEN_LOW_START		3
+#define MMIO_SPTE_GEN_LOW_END		11
+#define MMIO_SPTE_GEN_LOW_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_END, \
+						    MMIO_SPTE_GEN_LOW_START)
+
+#define MMIO_SPTE_GEN_HIGH_START	52
+#define MMIO_SPTE_GEN_HIGH_END		61
+#define MMIO_SPTE_GEN_HIGH_MASK		GENMASK_ULL(MMIO_SPTE_GEN_HIGH_END, \
+						    MMIO_SPTE_GEN_HIGH_START)
 static u64 generation_mmio_spte_mask(u64 gen)
 {
 	u64 mask;
 
-	WARN_ON(gen & ~MMIO_GEN_MASK);
+	WARN_ON(gen & ~MMIO_SPTE_GEN_MASK);
 
-	mask = (gen & MMIO_GEN_LOW_MASK) << MMIO_SPTE_GEN_LOW_SHIFT;
-	mask |= (gen >> MMIO_GEN_LOW_SHIFT) << MMIO_SPTE_GEN_HIGH_SHIFT;
+	gen >>= MMIO_SPTE_GEN_SHIFT;
+
+	mask = (gen << MMIO_SPTE_GEN_LOW_START) & MMIO_SPTE_GEN_LOW_MASK;
+	mask |= (gen << MMIO_SPTE_GEN_HIGH_START) & MMIO_SPTE_GEN_HIGH_MASK;
 	return mask;
 }
 
@@ -366,20 +377,15 @@ static u64 get_mmio_spte_generation(u64 spte)
 
 	spte &= ~shadow_mmio_mask;
 
-	gen = (spte >> MMIO_SPTE_GEN_LOW_SHIFT) & MMIO_GEN_LOW_MASK;
-	gen |= (spte >> MMIO_SPTE_GEN_HIGH_SHIFT) << MMIO_GEN_LOW_SHIFT;
-	return gen;
-}
-
-static u64 kvm_current_mmio_generation(struct kvm_vcpu *vcpu)
-{
-	return kvm_vcpu_memslots(vcpu)->generation & MMIO_GEN_MASK;
+	gen = (spte & MMIO_SPTE_GEN_LOW_MASK) >> MMIO_SPTE_GEN_LOW_START;
+	gen |= (spte & MMIO_SPTE_GEN_HIGH_MASK) >> MMIO_SPTE_GEN_HIGH_START;
+	return gen << MMIO_SPTE_GEN_SHIFT;
 }
 
 static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
 			   unsigned access)
 {
-	u64 gen = kvm_current_mmio_generation(vcpu);
+	u64 gen = kvm_vcpu_memslots(vcpu)->generation & MMIO_SPTE_GEN_MASK;
 	u64 mask = generation_mmio_spte_mask(gen);
 	u64 gpa = gfn << PAGE_SHIFT;
 
@@ -410,7 +416,7 @@ static gfn_t get_mmio_spte_gfn(u64 spte)
 
 static unsigned get_mmio_spte_access(u64 spte)
 {
-	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask;
+	u64 mask = generation_mmio_spte_mask(MMIO_SPTE_GEN_MASK) | shadow_mmio_mask;
 	return (spte & ~mask) & ~PAGE_MASK;
 }
 
@@ -427,9 +433,13 @@ static bool set_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 
 static bool check_mmio_spte(struct kvm_vcpu *vcpu, u64 spte)
 {
-	u64 kvm_gen, spte_gen;
+	u64 kvm_gen, spte_gen, gen;
+
+	gen = kvm_vcpu_memslots(vcpu)->generation;
+	if (unlikely(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS))
+		return false;
 
-	kvm_gen = kvm_current_mmio_generation(vcpu);
+	kvm_gen = gen & MMIO_SPTE_GEN_MASK;
 	spte_gen = get_mmio_spte_generation(spte);
 
 	trace_check_mmio_spte(spte, kvm_gen, spte_gen);
@@ -5876,13 +5886,13 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
 
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 {
-	gen &= MMIO_GEN_MASK;
+	gen &= MMIO_SPTE_GEN_MASK;
 
 	/*
-	 * Shift to eliminate the "update in-progress" flag, which isn't
-	 * included in the spte's generation number.
+	 * Shift to adjust for the "update in-progress" flag, which isn't
+	 * included in the MMIO generation number.
 	 */
-	gen >>= 1;
+	gen >>= MMIO_SPTE_GEN_SHIFT;
 
 	/*
 	 * Generation numbers are incremented in multiples of the number of
From: Kai Huang <kai.huang@linux.intel.com>

mainline inclusion
from mainline-v5.3-rc1
commit 7b6f8a06e482960ba6ab06faba51c8f3727a5c7b
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
As a prerequisite to fixing several SPTE reserved-bits calculation errors caused by MKTME, move kvm_set_mmio_spte_mask() from x86.c to mmu.c so that it can use a local static variable defined in mmu.c.
Also move call site of kvm_set_mmio_spte_mask() from kvm_arch_init() to kvm_mmu_module_init() so that kvm_set_mmio_spte_mask() can be static.
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/mmu.c | 31 +++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c | 31 -------------------------------
 2 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 1294804a5aff..411cb9736333 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -6039,6 +6039,35 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
 	return 0;
 }
 
+static void kvm_set_mmio_spte_mask(void)
+{
+	u64 mask;
+	int maxphyaddr = boot_cpu_data.x86_phys_bits;
+
+	/*
+	 * Set the reserved bits and the present bit of an paging-structure
+	 * entry to generate page fault with PFER.RSV = 1.
+	 */
+
+	/*
+	 * Mask the uppermost physical address bit, which would be reserved as
+	 * long as the supported physical address width is less than 52.
+	 */
+	mask = 1ull << 51;
+
+	/* Set the present bit. */
+	mask |= 1ull;
+
+	/*
+	 * If reserved bit is not supported, clear the present bit to disable
+	 * mmio page fault.
+	 */
+	if (IS_ENABLED(CONFIG_X86_64) && maxphyaddr == 52)
+		mask &= ~1ull;
+
+	kvm_mmu_set_mmio_spte_mask(mask, mask);
+}
+
 int kvm_mmu_module_init(void)
 {
 	int ret = -ENOMEM;
@@ -6048,6 +6077,8 @@ int kvm_mmu_module_init(void)
 
 	kvm_mmu_reset_all_pte_masks();
 
+	kvm_set_mmio_spte_mask();
+
 	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
 					    sizeof(struct pte_list_desc),
 					    0, SLAB_ACCOUNT, NULL);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6b3c45a1313..5933460bc386 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6805,35 +6805,6 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {
 	.get_guest_ip		= kvm_get_guest_ip,
 };
 
-static void kvm_set_mmio_spte_mask(void)
-{
-	u64 mask;
-	int maxphyaddr = boot_cpu_data.x86_phys_bits;
-
-	/*
-	 * Set the reserved bits and the present bit of an paging-structure
-	 * entry to generate page fault with PFER.RSV = 1.
-	 */
-
-	/*
-	 * Mask the uppermost physical address bit, which would be reserved as
-	 * long as the supported physical address width is less than 52.
-	 */
-	mask = 1ull << 51;
-
-	/* Set the present bit. */
-	mask |= 1ull;
-
-	/*
-	 * If reserved bit is not supported, clear the present bit to disable
-	 * mmio page fault.
-	 */
-	if (IS_ENABLED(CONFIG_X86_64) && maxphyaddr == 52)
-		mask &= ~1ull;
-
-	kvm_mmu_set_mmio_spte_mask(mask, mask);
-}
-
 #ifdef CONFIG_X86_64
 static void pvclock_gtod_update_fn(struct work_struct *work)
 {
@@ -6911,8 +6882,6 @@ int kvm_arch_init(void *opaque)
 	if (r)
 		goto out_free_percpu;
 
-	kvm_set_mmio_spte_mask();
-
 	kvm_x86_ops = ops;
 
 	kvm_mmu_set_mask_ptes(PT_USER_MASK, PT_ACCESSED_MASK,
From: Sean Christopherson <sean.j.christopherson@intel.com>

mainline inclusion
from mainline-v5.4-rc1
commit 871bd0346018df53055141f09754cb5ffb334c7b
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Rename "access" to "mmio_access" to match the other MMIO cache members and to make it more obvious that it's tracking the access permissions for the MMIO cache.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/x86.c              | 2 +-
 arch/x86/kvm/x86.h              | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a48f443fd7e6..b2a4d29a56ef 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -668,7 +668,7 @@ struct kvm_vcpu_arch {
 
 	/* Cache MMIO info */
 	u64 mmio_gva;
-	unsigned access;
+	unsigned mmio_access;
 	gfn_t mmio_gfn;
 	u64 mmio_gen;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5933460bc386..06df5c23267a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5214,7 +5214,7 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 	 */
 	if (vcpu_match_mmio_gva(vcpu, gva)
 	    && !permission_fault(vcpu, vcpu->arch.walk_mmu,
-				 vcpu->arch.access, 0, access)) {
+				 vcpu->arch.mmio_access, 0, access)) {
 		*gpa = vcpu->arch.mmio_gfn << PAGE_SHIFT |
 			(gva & (PAGE_SIZE - 1));
 		trace_vcpu_match_mmio(gva, *gpa, write, false);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 40476fa647cf..587c39f323f1 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -196,7 +196,7 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
 	 * actually a nGPA.
 	 */
 	vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
-	vcpu->arch.access = access;
+	vcpu->arch.mmio_access = access;
 	vcpu->arch.mmio_gfn = gfn;
 	vcpu->arch.mmio_gen = gen;
 }
From: Kai Huang <kai.huang@linux.intel.com>

mainline inclusion
from mainline-v5.3-rc1
commit f3ecb59dd49f1742b97df6ba071aaa3d031154ac
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Intel MKTME repurposes several high bits of physical address as 'keyID' for memory encryption thus effectively reduces platform's maximum physical address bits. Exactly how many bits are reduced is configured by BIOS. To honor such HW behavior, the repurposed bits are reduced from cpuinfo_x86->x86_phys_bits when MKTME is detected in CPU detection. Similarly, AMD SME/SEV also reduces physical address bits for memory encryption, and cpuinfo->x86_phys_bits is reduced too when SME/SEV is detected, so for both MKTME and SME/SEV, boot_cpu_data.x86_phys_bits doesn't hold physical address bits reported by CPUID anymore.
Currently KVM treats bits from boot_cpu_data.x86_phys_bits to 51 as reserved bits, but that is no longer true with MKTME, since MKTME treats the reduced bits as 'keyID' bits and not as reserved bits. Therefore boot_cpu_data.x86_phys_bits cannot be used to calculate reserved bits anymore, although we can still use it for AMD SME/SEV, since SME/SEV treats the reduced bits differently -- they are treated as reserved bits, the same as other reserved bits in a page table entry [1].

Fix this by introducing a new 'shadow_phys_bits' variable in the KVM x86 MMU code to store the effective physical bits without reserved bits -- for MKTME it equals the physical address bits reported by CPUID, and for SME/SEV it is boot_cpu_data.x86_phys_bits.

Note that the physical address bits reported to the guest should remain unchanged -- KVM should report the physical address bits from CPUID to the guest, not boot_cpu_data.x86_phys_bits, because for Intel MKTME there is no harm if the guest sets up 'keyID' bits in its page tables (MKTME only works at the physical address level), and KVM doesn't even expose MKTME to the guest. Arguably, for AMD SME/SEV, the guest is aware of SEV and thus should itself adjust boot_cpu_data.x86_phys_bits when it detects SEV; therefore KVM should still report the physical address bits from CPUID to the guest.
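A userspace sketch of the CPUID query that kvm_get_shadow_phys_bits() relies on (CPUID.0x80000008:EAX[7:0] reports the physical address width, unaffected by MKTME's keyID repurposing):

#include <cpuid.h>

static unsigned int cpuid_phys_bits(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
		return 36;	/* assumed fallback when the leaf is absent */
	return eax & 0xff;
}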
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/kvm/mmu.c | 33 +++++++++++++++++++++++++++++++------
 1 file changed, 27 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 411cb9736333..357f34904cee 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -281,6 +281,11 @@ static const u64 shadow_nonpresent_or_rsvd_mask_len = 5;
  */
 static u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
 
+/*
+ * The number of non-reserved physical address bits irrespective of features
+ * that repurpose legal bits, e.g. MKTME.
+ */
+static u8 __read_mostly shadow_phys_bits;
 
 static void mmu_spte_set(u64 *sptep, u64 spte);
 static bool is_executable_pte(u64 spte);
@@ -472,6 +477,21 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
 
+static u8 kvm_get_shadow_phys_bits(void)
+{
+	/*
+	 * boot_cpu_data.x86_phys_bits is reduced when MKTME is detected
+	 * in CPU detection code, but MKTME treats those reduced bits as
+	 * 'keyID' thus they are not reserved bits. Therefore for MKTME
+	 * we should still return physical address bits reported by CPUID.
+	 */
+	if (!boot_cpu_has(X86_FEATURE_TME) ||
+	    WARN_ON_ONCE(boot_cpu_data.extended_cpuid_level < 0x80000008))
+		return boot_cpu_data.x86_phys_bits;
+
+	return cpuid_eax(0x80000008) & 0xff;
+}
+
 static void kvm_mmu_reset_all_pte_masks(void)
 {
 	u8 low_phys_bits;
@@ -485,6 +505,8 @@ static void kvm_mmu_reset_all_pte_masks(void)
 	shadow_present_mask = 0;
 	shadow_acc_track_mask = 0;
 
+	shadow_phys_bits = kvm_get_shadow_phys_bits();
+
 	/*
 	 * If the CPU has 46 or less physical address bits, then set an
 	 * appropriate mask to guard against L1TF attacks. Otherwise, it is
@@ -4534,7 +4556,7 @@ reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 	 */
 	shadow_zero_check = &context->shadow_zero_check;
 	__reset_rsvds_bits_mask(vcpu, shadow_zero_check,
-				boot_cpu_data.x86_phys_bits,
+				shadow_phys_bits,
 				context->shadow_root_level, uses_nx,
 				guest_cpuid_has(vcpu, X86_FEATURE_GBPAGES),
 				is_pse(vcpu), true);
@@ -4571,13 +4593,13 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
 
 	if (boot_cpu_is_amd())
 		__reset_rsvds_bits_mask(vcpu, shadow_zero_check,
-					boot_cpu_data.x86_phys_bits,
+					shadow_phys_bits,
 					context->shadow_root_level, false,
 					boot_cpu_has(X86_FEATURE_GBPAGES),
 					true, true);
 	else
 		__reset_rsvds_bits_mask_ept(shadow_zero_check,
-					    boot_cpu_data.x86_phys_bits,
+					    shadow_phys_bits,
 					    false);
 
 	if (!shadow_me_mask)
@@ -4598,7 +4620,7 @@ reset_ept_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
 				bool execonly)
 {
 	__reset_rsvds_bits_mask_ept(&context->shadow_zero_check,
-				    boot_cpu_data.x86_phys_bits, execonly);
+				    shadow_phys_bits, execonly);
 }
 
 #define BYTE_MASK(access) \
@@ -6042,7 +6064,6 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
 static void kvm_set_mmio_spte_mask(void)
 {
 	u64 mask;
-	int maxphyaddr = boot_cpu_data.x86_phys_bits;
 
 	/*
 	 * Set the reserved bits and the present bit of an paging-structure
@@ -6062,7 +6083,7 @@ static void kvm_set_mmio_spte_mask(void)
 	 * If reserved bit is not supported, clear the present bit to disable
 	 * mmio page fault.
 	 */
-	if (IS_ENABLED(CONFIG_X86_64) && maxphyaddr == 52)
+	if (IS_ENABLED(CONFIG_X86_64) && shadow_phys_bits == 52)
 		mask &= ~1ull;
 
 	kvm_mmu_set_mmio_spte_mask(mask, mask);
From: Sean Christopherson sean.j.christopherson@intel.com
mainline inclusion from mainline-v5.4-rc1 commit 4af7715110a2617fc40ac2c1232f664019269f3a category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
When shadow paging is enabled, KVM tracks the allowed access type for MMIO SPTEs so that it can do a permission check on an MMIO GVA cache hit without having to walk the guest's page tables. The tracking is done by retaining the WRITE and USER bits of the access when inserting the MMIO SPTE (read access is implicitly allowed), which allows the MMIO page fault handler to retrieve and cache the WRITE/USER bits from the SPTE.
Unfortunately for EPT, the mask used to retain the WRITE/USER bits is hardcoded using the x86 paging versions of the bits. This funkiness happens to work because KVM uses a completely different mask/value for MMIO SPTEs when EPT is enabled, and the EPT mask/value just happens to overlap exactly with the x86 WRITE/USER bits[*].
Explicitly define the access mask for MMIO SPTEs to accurately reflect that EPT does not want to incorporate any access bits into the SPTE, and so that KVM isn't subtly relying on EPT's WX bits always being set in MMIO SPTEs, e.g. attempting to use other bits for experimentation breaks horribly.
Note, vcpu_match_mmio_gva() explicitly prevents matching GVA==0, and all TDP flows explicitly set mmio_gva to 0, i.e. zeroing vcpu->arch.access for EPT has no (known) functional impact.
[*] Using WX to generate EPT misconfigurations (equivalent to a reserved-bit page fault) ensures KVM can employ its MMIO page fault tricks even on platforms without reserved address bits.
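To spell out the overlap (a sketch; the bit positions are architectural, the macro names mirror the kernel's):

#define PT_WRITABLE_MASK        (1ULL << 1)   /* x86 paging R/W */
#define PT_USER_MASK            (1ULL << 2)   /* x86 paging U/S */
#define VMX_EPT_WRITABLE_MASK   (1ULL << 1)   /* EPT W */
#define VMX_EPT_EXECUTABLE_MASK (1ULL << 2)   /* EPT X */

/*
 * The EPT misconfig value 110b (write/execute, no read) is
 * VMX_EPT_WRITABLE_MASK | VMX_EPT_EXECUTABLE_MASK == 0x6, which happens
 * to equal PT_WRITABLE_MASK | PT_USER_MASK.  Masking an EPT MMIO SPTE
 * with the x86 WRITE/USER bits was therefore a silent no-op; passing an
 * explicit access_mask (0 for EPT) removes the reliance on that
 * coincidence.
 */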
Fixes: ce88decffd17 ("KVM: MMU: mmio page fault support") Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/mmu.c | 15 +++++++++------ arch/x86/kvm/mmu.h | 2 +- arch/x86/kvm/vmx.c | 2 +- 3 files changed, 11 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index 357f34904cee..e04e0195d024 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -239,6 +239,7 @@ static u64 __read_mostly shadow_accessed_mask; static u64 __read_mostly shadow_dirty_mask; static u64 __read_mostly shadow_mmio_mask; static u64 __read_mostly shadow_mmio_value; +static u64 __read_mostly shadow_mmio_access_mask; static u64 __read_mostly shadow_present_mask; static u64 __read_mostly shadow_me_mask;
@@ -296,11 +297,13 @@ kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu); #include "mmutrace.h"
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value) +void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask) { + BUG_ON((u64)(unsigned)access_mask != access_mask); BUG_ON((mmio_mask & mmio_value) != mmio_value); shadow_mmio_value = mmio_value | SPTE_SPECIAL_MASK; shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK; + shadow_mmio_access_mask = access_mask; } EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
@@ -394,7 +397,7 @@ static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn, u64 mask = generation_mmio_spte_mask(gen); u64 gpa = gfn << PAGE_SHIFT;
- access &= ACC_WRITE_MASK | ACC_USER_MASK; + access &= shadow_mmio_access_mask; mask |= shadow_mmio_value | access; mask |= gpa | shadow_nonpresent_or_rsvd_mask; mask |= (gpa & shadow_nonpresent_or_rsvd_mask) @@ -421,8 +424,7 @@ static gfn_t get_mmio_spte_gfn(u64 spte)
static unsigned get_mmio_spte_access(u64 spte) { - u64 mask = generation_mmio_spte_mask(MMIO_SPTE_GEN_MASK) | shadow_mmio_mask; - return (spte & ~mask) & ~PAGE_MASK; + return spte & shadow_mmio_access_mask; }
static bool set_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn, @@ -3328,7 +3330,8 @@ static bool handle_abnormal_pfn(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn, }
if (unlikely(is_noslot_pfn(pfn))) - vcpu_cache_mmio_info(vcpu, gva, gfn, access); + vcpu_cache_mmio_info(vcpu, gva, gfn, + access & shadow_mmio_access_mask);
return false; } @@ -6086,7 +6089,7 @@ static void kvm_set_mmio_spte_mask(void) if (IS_ENABLED(CONFIG_X86_64) && shadow_phys_bits == 52) mask &= ~1ull;
- kvm_mmu_set_mmio_spte_mask(mask, mask); + kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK); }
int kvm_mmu_module_init(void) diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index f7b2de7b6382..a8b75ef4499e 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -56,7 +56,7 @@ static inline u64 rsvd_bits(int s, int e) return ((1ULL << (e - s + 1)) - 1) << s; }
-void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value); +void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask);
void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context); diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 347d045a5567..6faae26176bb 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -6593,7 +6593,7 @@ static void ept_set_mmio_spte_mask(void) * of an EPT paging-structure entry is 110b (write/execute). */ kvm_mmu_set_mmio_spte_mask(VMX_EPT_RWX_MASK, - VMX_EPT_MISCONFIG_WX_VALUE); + VMX_EPT_MISCONFIG_WX_VALUE, 0); }
#define VMX_XSS_EXIT_BITMAP 0
From: Tom Lendacky thomas.lendacky@amd.com
mainline inclusion from mainline-v5.6-rc1 commit 52918ed5fcf05d97d257f4131e19479da18f5d16 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
The KVM MMIO support uses bit 51 as the reserved bit to cause nested page faults when a guest performs MMIO. The AMD memory encryption support uses a CPUID function to define the encryption bit position. Given this, it is possible that these bits can conflict.
Use svm_hardware_setup() to override the MMIO mask if memory encryption support is enabled. Various checks are performed to ensure that the mask is properly defined and rsvd_bits() is used to generate the new mask (as was done prior to the change that necessitated this patch).
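A worked sketch of the computation with hypothetical inputs (it mirrors the rsvd_bits() arithmetic; PT_PRESENT_MASK is bit 0):

#include <stdint.h>

/* Sketch of svm_adjust_mmio_mask()'s arithmetic, hypothetical inputs. */
static uint64_t mmio_mask(unsigned int enc_bit, unsigned int phys_bits)
{
	unsigned int mask_bit = phys_bits;

	if (enc_bit == mask_bit)	/* don't collide with the C-bit */
		mask_bit++;

	if (mask_bit >= 52)		/* no reserved bits left: clear */
		return 0;

	/* rsvd_bits(mask_bit, 51) | present bit (bit 0) */
	return (((1ULL << (52 - mask_bit)) - 1) << mask_bit) | 1ULL;
}

/*
 * mmio_mask(47, 48) -> bits 48-51 plus P: reserved, clear of the C-bit.
 * mmio_mask(43, 43) -> mask_bit bumped to 44, so bits 44-51 plus P.
 */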
Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs") Suggested-by: Sean Christopherson sean.j.christopherson@intel.com Reviewed-by: Sean Christopherson sean.j.christopherson@intel.com Signed-off-by: Tom Lendacky thomas.lendacky@amd.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/svm.c | 43 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 43 insertions(+)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 511f428a49c8..746f0926c51c 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -1300,6 +1300,47 @@ static void shrink_ple_window(struct kvm_vcpu *vcpu) control->pause_filter_count, old); }
+/* + * The default MMIO mask is a single bit (excluding the present bit), + * which could conflict with the memory encryption bit. Check for + * memory encryption support and override the default MMIO mask if + * memory encryption is enabled. + */ +static __init void svm_adjust_mmio_mask(void) +{ + unsigned int enc_bit, mask_bit; + u64 msr, mask; + + /* If there is no memory encryption support, use existing mask */ + if (cpuid_eax(0x80000000) < 0x8000001f) + return; + + /* If memory encryption is not enabled, use existing mask */ + rdmsrl(MSR_K8_SYSCFG, msr); + if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT)) + return; + + enc_bit = cpuid_ebx(0x8000001f) & 0x3f; + mask_bit = boot_cpu_data.x86_phys_bits; + + /* Increment the mask bit if it is the same as the encryption bit */ + if (enc_bit == mask_bit) + mask_bit++; + + /* + * If the mask bit location is below 52, then some bits above the + * physical addressing limit will always be reserved, so use the + * rsvd_bits() function to generate the mask. This mask, along with + * the present bit, will be used to generate a page fault with + * PFER.RSV = 1. + * + * If the mask bit location is 52 (or above), then clear the mask. + */ + mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0; + + kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK); +} + static __init int svm_hardware_setup(void) { int cpu; @@ -1354,6 +1395,8 @@ static __init int svm_hardware_setup(void) } }
+ svm_adjust_mmio_mask(); + for_each_possible_cpu(cpu) { r = svm_cpu_init(cpu); if (r)
From: John Allen john.allen@amd.com
mainline inclusion from mainline-v5.7-rc2 commit bdf89df3c54518eed879d8fac7577fcfb220c67e category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Future AMD CPUs will have microcode patches that exceed the default 4K patch size. Raise our limit.
Signed-off-by: John Allen john.allen@amd.com Signed-off-by: Borislav Petkov bp@suse.de Cc: stable@vger.kernel.org # v4.14.. Link: https://lkml.kernel.org/r/20200409152931.GA685273@mojo.amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/include/asm/microcode_amd.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/microcode_amd.h b/arch/x86/include/asm/microcode_amd.h index 209492849566..5c524d4f71cd 100644 --- a/arch/x86/include/asm/microcode_amd.h +++ b/arch/x86/include/asm/microcode_amd.h @@ -41,7 +41,7 @@ struct microcode_amd { unsigned int mpb[0]; };
-#define PATCH_MAX_SIZE PAGE_SIZE +#define PATCH_MAX_SIZE (3 * PAGE_SIZE)
#ifdef CONFIG_MICROCODE_AMD extern void __init load_ucode_amd_bsp(unsigned int family);
From: Yazen Ghannam yazen.ghannam@amd.com
mainline inclusion from mainline-v5.6-rc1 commit dcd01394ce7cd7d25bb15c81ad2e804d8090611f category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
In general, "pvt->umc != NULL" is used to check if the system is Family 17h+. However, there are a few places that are using direct family checks.
Replace the remaining family checks with a check for "pvt->umc != NULL".
Signed-off-by: Yazen Ghannam yazen.ghannam@amd.com Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20200110015651.14887-6-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- drivers/edac/amd64_edac.c | 45 +++++++++++++++++---------------------- 1 file changed, 19 insertions(+), 26 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index 12e55d4fb156..0a436b099aac 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -211,7 +211,7 @@ static int __set_scrub_rate(struct amd64_pvt *pvt, u32 new_bw, u32 min_rate)
scrubval = scrubrates[i].scrubval;
- if (pvt->fam == 0x17 || pvt->fam == 0x18) { + if (pvt->umc) { __f17h_set_scrubval(pvt, scrubval); } else if (pvt->fam == 0x15 && pvt->model == 0x60) { f15h_select_dct(pvt, 0); @@ -253,18 +253,7 @@ static int get_scrub_rate(struct mem_ctl_info *mci) int i, retval = -EINVAL; u32 scrubval = 0;
- switch (pvt->fam) { - case 0x15: - /* Erratum #505 */ - if (pvt->model < 0x10) - f15h_select_dct(pvt, 0); - - if (pvt->model == 0x60) - amd64_read_pci_cfg(pvt->F2, F15H_M60H_SCRCTRL, &scrubval); - break; - - case 0x17: - case 0x18: + if (pvt->umc) { amd64_read_pci_cfg(pvt->F6, F17H_SCR_BASE_ADDR, &scrubval); if (scrubval & BIT(0)) { amd64_read_pci_cfg(pvt->F6, F17H_SCR_LIMIT_ADDR, &scrubval); @@ -273,11 +262,15 @@ static int get_scrub_rate(struct mem_ctl_info *mci) } else { scrubval = 0; } - break; + } else if (pvt->fam == 0x15) { + /* Erratum #505 */ + if (pvt->model < 0x10) + f15h_select_dct(pvt, 0);
- default: + if (pvt->model == 0x60) + amd64_read_pci_cfg(pvt->F2, F15H_M60H_SCRCTRL, &scrubval); + } else { amd64_read_pci_cfg(pvt->F3, SCRCTRL, &scrubval); - break; }
scrubval = scrubval & 0x001F; @@ -999,6 +992,16 @@ static void determine_memory_type(struct amd64_pvt *pvt) { u32 dram_ctrl, dcsm;
+ if (pvt->umc) { + if ((pvt->umc[0].dimm_cfg | pvt->umc[1].dimm_cfg) & BIT(5)) + pvt->dram_type = MEM_LRDDR4; + else if ((pvt->umc[0].dimm_cfg | pvt->umc[1].dimm_cfg) & BIT(4)) + pvt->dram_type = MEM_RDDR4; + else + pvt->dram_type = MEM_DDR4; + return; + } + switch (pvt->fam) { case 0xf: if (pvt->ext_model >= K8_REV_F) @@ -1044,16 +1047,6 @@ static void determine_memory_type(struct amd64_pvt *pvt) case 0x16: goto ddr3;
- case 0x17: - case 0x18: - if ((pvt->umc[0].dimm_cfg | pvt->umc[1].dimm_cfg) & BIT(5)) - pvt->dram_type = MEM_LRDDR4; - else if ((pvt->umc[0].dimm_cfg | pvt->umc[1].dimm_cfg) & BIT(4)) - pvt->dram_type = MEM_RDDR4; - else - pvt->dram_type = MEM_DDR4; - return; - default: WARN(1, KERN_ERR "%s: Family??? 0x%x\n", __func__, pvt->fam); pvt->dram_type = MEM_EMPTY;
From: "Woods, Brian" Brian.Woods@amd.com
mainline inclusion from mainline-v5.0-rc1 commit dedf7dce4cec5c0abe69f4fa6938d5100398220b category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Consolidate shared PCI_DEVICE_IDs that were scattered through k10temp and amd_nb, and move them into pci_ids.
Signed-off-by: Brian Woods brian.woods@amd.com Signed-off-by: Borislav Petkov bp@suse.de Acked-by: Guenter Roeck linux@roeck-us.net CC: Bjorn Helgaas bhelgaas@google.com CC: Clemens Ladisch clemens@ladisch.de CC: "H. Peter Anvin" hpa@zytor.com CC: Ingo Molnar mingo@redhat.com CC: Jean Delvare jdelvare@suse.com CC: Jia Zhang qianyue.zj@alibaba-inc.com CC: linux-hwmon@vger.kernel.org CC: linux-pci@vger.kernel.org CC: Pu Wen puwen@hygon.cn CC: Thomas Gleixner tglx@linutronix.de CC: x86-ml x86@kernel.org Link: http://lkml.kernel.org/r/20181106200754.60722-2-brian.woods@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kernel/amd_nb.c | 3 +-- drivers/hwmon/k10temp.c | 9 +-------- include/linux/pci_ids.h | 2 ++ 3 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c index 7e1a0995d8c4..420991baa5ea 100644 --- a/arch/x86/kernel/amd_nb.c +++ b/arch/x86/kernel/amd_nb.c @@ -11,13 +11,12 @@ #include <linux/errno.h> #include <linux/export.h> #include <linux/spinlock.h> +#include <linux/pci_ids.h> #include <asm/amd_nb.h>
#define PCI_DEVICE_ID_AMD_17H_ROOT 0x1450 #define PCI_DEVICE_ID_AMD_17H_M10H_ROOT 0x15d0 -#define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463 #define PCI_DEVICE_ID_AMD_17H_DF_F4 0x1464 -#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 0x15eb #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F4 0x15ec
/* Protect the PCI config register pairs used for SMN and DF indirect access. */ diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c index e24ba1014670..20bdb6c47f8e 100644 --- a/drivers/hwmon/k10temp.c +++ b/drivers/hwmon/k10temp.c @@ -23,6 +23,7 @@ #include <linux/init.h> #include <linux/module.h> #include <linux/pci.h> +#include <linux/pci_ids.h> #include <asm/amd_nb.h> #include <asm/processor.h>
@@ -41,14 +42,6 @@ static DEFINE_MUTEX(nb_smu_ind_mutex); #define PCI_DEVICE_ID_AMD_15H_M70H_NB_F3 0x15b3 #endif
-#ifndef PCI_DEVICE_ID_AMD_17H_DF_F3 -#define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463 -#endif - -#ifndef PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 -#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 0x15eb -#endif - /* CPUID function 0x80000001, ebx */ #define CPUID_PKGTYPE_MASK 0xf0000000 #define CPUID_PKGTYPE_F 0x00000000 diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index 2551d928472a..f43107d0cd06 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -541,6 +541,8 @@ #define PCI_DEVICE_ID_AMD_16H_NB_F4 0x1534 #define PCI_DEVICE_ID_AMD_16H_M30H_NB_F3 0x1583 #define PCI_DEVICE_ID_AMD_16H_M30H_NB_F4 0x1584 +#define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463 +#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 0x15eb #define PCI_DEVICE_ID_AMD_CNB17H_F3 0x1703 #define PCI_DEVICE_ID_AMD_LANCE 0x2000 #define PCI_DEVICE_ID_AMD_LANCE_HOME 0x2001
From: "Woods, Brian" Brian.Woods@amd.com
mainline inclusion from mainline-v5.0-rc1 commit be3518a16ef270e3b030a6ae96055f83f51bd3dd category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Add the PCI device IDs for family 17h model 30h, since they are needed for accessing various registers via the data fabric/SMN interface.
Signed-off-by: Brian Woods brian.woods@amd.com Signed-off-by: Borislav Petkov bp@suse.de CC: Bjorn Helgaas bhelgaas@google.com CC: Clemens Ladisch clemens@ladisch.de CC: Guenter Roeck linux@roeck-us.net CC: "H. Peter Anvin" hpa@zytor.com CC: Ingo Molnar mingo@redhat.com CC: Jean Delvare jdelvare@suse.com CC: Jia Zhang qianyue.zj@alibaba-inc.com CC: linux-hwmon@vger.kernel.org CC: linux-pci@vger.kernel.org CC: Pu Wen puwen@hygon.cn CC: Thomas Gleixner tglx@linutronix.de CC: x86-ml x86@kernel.org Link: http://lkml.kernel.org/r/20181106200754.60722-4-brian.woods@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kernel/amd_nb.c | 6 ++++++ include/linux/pci_ids.h | 1 + 2 files changed, 7 insertions(+)
diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c index 420991baa5ea..f6ec5aa98b9b 100644 --- a/arch/x86/kernel/amd_nb.c +++ b/arch/x86/kernel/amd_nb.c @@ -16,8 +16,10 @@
#define PCI_DEVICE_ID_AMD_17H_ROOT 0x1450 #define PCI_DEVICE_ID_AMD_17H_M10H_ROOT 0x15d0 +#define PCI_DEVICE_ID_AMD_17H_M30H_ROOT 0x1480 #define PCI_DEVICE_ID_AMD_17H_DF_F4 0x1464 #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F4 0x15ec +#define PCI_DEVICE_ID_AMD_17H_M30H_DF_F4 0x1494
/* Protect the PCI config register pairs used for SMN and DF indirect access. */ static DEFINE_MUTEX(smn_mutex); @@ -27,9 +29,11 @@ static u32 *flush_words; static const struct pci_device_id amd_root_ids[] = { { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_ROOT) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_ROOT) }, + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_ROOT) }, {} };
+ #define PCI_DEVICE_ID_AMD_CNB17H_F4 0x1704
const struct pci_device_id amd_nb_misc_ids[] = { @@ -43,6 +47,7 @@ const struct pci_device_id amd_nb_misc_ids[] = { { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F3) }, + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F3) }, {} }; @@ -56,6 +61,7 @@ static const struct pci_device_id amd_nb_link_ids[] = { { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F4) }, + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) }, {} }; diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index f43107d0cd06..17b1cc675215 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -543,6 +543,7 @@ #define PCI_DEVICE_ID_AMD_16H_M30H_NB_F4 0x1584 #define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463 #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 0x15eb +#define PCI_DEVICE_ID_AMD_17H_M30H_DF_F3 0x1493 #define PCI_DEVICE_ID_AMD_CNB17H_F3 0x1703 #define PCI_DEVICE_ID_AMD_LANCE 0x2000 #define PCI_DEVICE_ID_AMD_LANCE_HOME 0x2001
From: Marcel Bocu marcel.p.bocu@gmail.com
mainline inclusion from mainline-v5.4-rc1 commit af4e1c5eca95bed1192d8dc45c8ed63aea2209e8 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
The AMD Ryzen gen 3 processors came with different PCI IDs for functions 3 and 4, which are used to access the SMN interface. The root PCI address, however, remained the same as on model 30h.
Adding the F3/F4 PCI IDs respectively to the misc and link ids appears to be sufficient for k10temp, so let's add them and follow up on the patch if other functions need more tweaking.
Vicki Pfau sent an identical patch after I had checked that no one had written this patch yet. I would have been happy to drop my patch, but unlike her patch series, mine had already been Cc:ed to the x86 people, and they had already reviewed the changes. Since Vicki has not answered any email since her initial series, let's assume she is on vacation, avoid duplicating the maintainers' reviews, and merge my series. To acknowledge Vicki's anteriority, I added her S-o-b to the patch.
v2, suggested by Guenter Roeck and Brian Woods: - rename from 71h to 70h
Signed-off-by: Vicki Pfau vi@endrift.com Signed-off-by: Marcel Bocu marcel.p.bocu@gmail.com Tested-by: Marcel Bocu marcel.p.bocu@gmail.com Acked-by: Thomas Gleixner tglx@linutronix.de Acked-by: Brian Woods brian.woods@amd.com Acked-by: Bjorn Helgaas bhelgaas@google.com # pci_ids.h
Cc: Thomas Gleixner tglx@linutronix.de Cc: Ingo Molnar mingo@redhat.com Cc: Borislav Petkov bp@alien8.de Cc: "H. Peter Anvin" hpa@zytor.com Cc: x86@kernel.org Cc: "Woods, Brian" Brian.Woods@amd.com Cc: Clemens Ladisch clemens@ladisch.de Cc: Jean Delvare jdelvare@suse.com Cc: Guenter Roeck linux@roeck-us.net Cc: linux-hwmon@vger.kernel.org Link: https://lore.kernel.org/r/20190722174510.2179-1-marcel.p.bocu@gmail.com Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kernel/amd_nb.c | 3 +++ include/linux/pci_ids.h | 1 + 2 files changed, 4 insertions(+)
diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c index f6ec5aa98b9b..f03967f45eb3 100644 --- a/arch/x86/kernel/amd_nb.c +++ b/arch/x86/kernel/amd_nb.c @@ -20,6 +20,7 @@ #define PCI_DEVICE_ID_AMD_17H_DF_F4 0x1464 #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F4 0x15ec #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F4 0x1494 +#define PCI_DEVICE_ID_AMD_17H_M70H_DF_F4 0x1444
/* Protect the PCI config register pairs used for SMN and DF indirect access. */ static DEFINE_MUTEX(smn_mutex); @@ -49,6 +50,7 @@ const struct pci_device_id amd_nb_misc_ids[] = { { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F3) }, + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M70H_DF_F3) }, {} }; EXPORT_SYMBOL_GPL(amd_nb_misc_ids); @@ -62,6 +64,7 @@ static const struct pci_device_id amd_nb_link_ids[] = { { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F4) }, + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M70H_DF_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) }, {} }; diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index 17b1cc675215..cfbd6ef9d994 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -544,6 +544,7 @@ #define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463 #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 0x15eb #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F3 0x1493 +#define PCI_DEVICE_ID_AMD_17H_M70H_DF_F3 0x1443 #define PCI_DEVICE_ID_AMD_CNB17H_F3 0x1703 #define PCI_DEVICE_ID_AMD_LANCE 0x2000 #define PCI_DEVICE_ID_AMD_LANCE_HOME 0x2001
From: Yazen Ghannam yazen.ghannam@amd.com
mainline inclusion from mainline-v5.6-rc1 commit b3f79ae45904ae987a7c06a9e8d6084d7b73e67f category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Add the new PCI Device 18h IDs for AMD Family 19h systems. Note that Family 19h systems will not have a new PCI root device ID.
Signed-off-by: Yazen Ghannam yazen.ghannam@amd.com Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20200110015651.14887-4-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kernel/amd_nb.c | 3 +++ include/linux/pci_ids.h | 1 + 2 files changed, 4 insertions(+)
diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c index f03967f45eb3..b0dbe5d5162d 100644 --- a/arch/x86/kernel/amd_nb.c +++ b/arch/x86/kernel/amd_nb.c @@ -21,6 +21,7 @@ #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F4 0x15ec #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F4 0x1494 #define PCI_DEVICE_ID_AMD_17H_M70H_DF_F4 0x1444 +#define PCI_DEVICE_ID_AMD_19H_DF_F4 0x1654
/* Protect the PCI config register pairs used for SMN and DF indirect access. */ static DEFINE_MUTEX(smn_mutex); @@ -51,6 +52,7 @@ const struct pci_device_id amd_nb_misc_ids[] = { { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F3) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M70H_DF_F3) }, + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_DF_F3) }, {} }; EXPORT_SYMBOL_GPL(amd_nb_misc_ids); @@ -65,6 +67,7 @@ static const struct pci_device_id amd_nb_link_ids[] = { { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M10H_DF_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M30H_DF_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_M70H_DF_F4) }, + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_19H_DF_F4) }, { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) }, {} }; diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index cfbd6ef9d994..3d622046b753 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -545,6 +545,7 @@ #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F3 0x15eb #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F3 0x1493 #define PCI_DEVICE_ID_AMD_17H_M70H_DF_F3 0x1443 +#define PCI_DEVICE_ID_AMD_19H_DF_F3 0x1653 #define PCI_DEVICE_ID_AMD_CNB17H_F3 0x1703 #define PCI_DEVICE_ID_AMD_LANCE 0x2000 #define PCI_DEVICE_ID_AMD_LANCE_HOME 0x2001
From: Yazen Ghannam yazen.ghannam@amd.com
mainline inclusion from mainline-v5.6-rc1 commit 9f6aef86315ac31481a288ba1b3f43b2aac93757 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
MCA error decoding on SMCA systems is not dependent on family. Return success early if the system supports the SMCA feature.
Signed-off-by: Yazen Ghannam yazen.ghannam@amd.com Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20200110015651.14887-3-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- drivers/edac/mce_amd.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c index c605089d899f..b2838d774d12 100644 --- a/drivers/edac/mce_amd.c +++ b/drivers/edac/mce_amd.c @@ -1067,6 +1067,11 @@ static int __init mce_amd_init(void) if (!fam_ops) return -ENOMEM;
+ if (boot_cpu_has(X86_FEATURE_SMCA)) { + xec_mask = 0x3f; + goto out; + } + switch (c->x86) { case 0xf: fam_ops->mc0_mce = k8_mc0_mce; @@ -1115,11 +1120,8 @@ static int __init mce_amd_init(void)
case 0x17: case 0x18: - xec_mask = 0x3f; - if (!boot_cpu_has(X86_FEATURE_SMCA)) { - printk(KERN_WARNING "Decoding supported only on Scalable MCA processors.\n"); - goto err_out; - } + pr_warn("Decoding supported only on Scalable MCA processors.\n"); + goto err_out; break;
default: @@ -1127,6 +1129,7 @@ static int __init mce_amd_init(void) goto err_out; }
+out: pr_info("MCE: In-kernel MCE decoding enabled.\n");
mce_register_decode_chain(&amd_mce_dec_nb);
From: Yazen Ghannam yazen.ghannam@amd.com
mainline inclusion from mainline-v5.1-rc1 commit cbfa447edd6a3825fdb8a4ffae74ff7208f2d2c0 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Add the (HWID, MCATYPE) tuples and names for the new MP5, NBIO, and PCIE SMCA bank types.
Also, add their respective error descriptions to the MCE decoding module edac_mce_amd.
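As a minimal sketch of how such tuples key a lookup -- the HWID_MCATYPE packing is assumed to follow the kernel's ((hwid) << 16 | (mcatype)) convention, and the linear scan and helper names here are illustrative:

#include <stdint.h>
#include <stddef.h>

#define HWID_MCATYPE(hwid, mcatype) (((hwid) << 16) | (mcatype))

struct smca_hwid_sketch {
	int	 bank_type;	/* e.g. an SMCA_* enum value */
	uint32_t hwid_mcatype;	/* packed (HWID, MCATYPE) key */
};

static const struct smca_hwid_sketch table[] = {
	{ 0 /* SMCA_MP5  */, HWID_MCATYPE(0x01, 0x2) },
	{ 1 /* SMCA_NBIO */, HWID_MCATYPE(0x18, 0x0) },
	{ 2 /* SMCA_PCIE */, HWID_MCATYPE(0x46, 0x0) },
};

/* Resolve a bank type from the (HWID, MCATYPE) read out of MCA_IPID. */
static int lookup_bank_type(uint32_t hwid_mcatype)
{
	for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if (table[i].hwid_mcatype == hwid_mcatype)
			return table[i].bank_type;
	return -1;	/* unknown bank */
}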
Signed-off-by: Yazen Ghannam yazen.ghannam@amd.com Signed-off-by: Borislav Petkov bp@suse.de Cc: Arnd Bergmann arnd@arndb.de Cc: "H. Peter Anvin" hpa@zytor.com Cc: Ingo Molnar mingo@redhat.com Cc: Kees Cook keescook@chromium.org Cc: linux-edac linux-edac@vger.kernel.org Cc: Mauro Carvalho Chehab mchehab@kernel.org Cc: Pu Wen puwen@hygon.cn Cc: Qiuxu Zhuo qiuxu.zhuo@intel.com Cc: Shirish S Shirish.S@amd.com Cc: Thomas Gleixner tglx@linutronix.de Cc: Tony Luck tony.luck@intel.com Cc: Vishal Verma vishal.l.verma@intel.com Cc: x86-ml x86@kernel.org Link: https://lkml.kernel.org/r/20190201225534.8177-2-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/include/asm/mce.h | 3 +++ arch/x86/kernel/cpu/mce/amd.c | 12 ++++++++++++ drivers/edac/mce_amd.c | 32 ++++++++++++++++++++++++++++++++ 3 files changed, 47 insertions(+)
diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h index c1a812bd5a27..91b65d859ca8 100644 --- a/arch/x86/include/asm/mce.h +++ b/arch/x86/include/asm/mce.h @@ -312,6 +312,9 @@ enum smca_bank_types { SMCA_PB, /* Parameter Block */ SMCA_PSP, /* Platform Security Processor */ SMCA_SMU, /* System Management Unit */ + SMCA_MP5, /* Microprocessor 5 Unit */ + SMCA_NBIO, /* Northbridge IO Unit */ + SMCA_PCIE, /* PCI Express Unit */ N_SMCA_BANK_TYPES };
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c index 27f7c2cb2561..b0e540287552 100644 --- a/arch/x86/kernel/cpu/mce/amd.c +++ b/arch/x86/kernel/cpu/mce/amd.c @@ -93,6 +93,9 @@ static struct smca_bank_name smca_names[] = { [SMCA_PB] = { "param_block", "Parameter Block" }, [SMCA_PSP] = { "psp", "Platform Security Processor" }, [SMCA_SMU] = { "smu", "System Management Unit" }, + [SMCA_MP5] = { "mp5", "Microprocessor 5 Unit" }, + [SMCA_NBIO] = { "nbio", "Northbridge IO Unit" }, + [SMCA_PCIE] = { "pcie", "PCI Express Unit" }, };
static u32 smca_bank_addrs[MAX_NR_BANKS][NR_BLOCKS] __ro_after_init = @@ -162,6 +165,15 @@ static struct smca_hwid smca_hwid_mcatypes[] = {
/* System Management Unit MCA type */ { SMCA_SMU, HWID_MCATYPE(0x01, 0x0), 0x1 }, + + /* Microprocessor 5 Unit MCA type */ + { SMCA_MP5, HWID_MCATYPE(0x01, 0x2), 0x3FF }, + + /* Northbridge IO Unit MCA type */ + { SMCA_NBIO, HWID_MCATYPE(0x18, 0x0), 0x1F }, + + /* PCI Express Unit MCA type */ + { SMCA_PCIE, HWID_MCATYPE(0x46, 0x0), 0x1F }, };
struct smca_bank smca_banks[MAX_NR_BANKS]; diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c index b2838d774d12..db334053b8bc 100644 --- a/drivers/edac/mce_amd.c +++ b/drivers/edac/mce_amd.c @@ -285,6 +285,35 @@ static const char * const smca_smu_mce_desc[] = { "SMU RAM ECC or parity error", };
+static const char * const smca_mp5_mce_desc[] = { + "High SRAM ECC or parity error", + "Low SRAM ECC or parity error", + "Data Cache Bank A ECC or parity error", + "Data Cache Bank B ECC or parity error", + "Data Tag Cache Bank A ECC or parity error", + "Data Tag Cache Bank B ECC or parity error", + "Instruction Cache Bank A ECC or parity error", + "Instruction Cache Bank B ECC or parity error", + "Instruction Tag Cache Bank A ECC or parity error", + "Instruction Tag Cache Bank B ECC or parity error", +}; + +static const char * const smca_nbio_mce_desc[] = { + "ECC or Parity error", + "PCIE error", + "SDP ErrEvent error", + "SDP Egress Poison Error", + "IOHC Internal Poison Error", +}; + +static const char * const smca_pcie_mce_desc[] = { + "CCIX PER Message logging", + "CCIX Read Response with Status: Non-Data Error", + "CCIX Write Response with Status: Non-Data Error", + "CCIX Read Response with Status: Data Error", + "CCIX Non-okay write response with data error", +}; + struct smca_mce_desc { const char * const *descs; unsigned int num_descs; @@ -304,6 +333,9 @@ static struct smca_mce_desc smca_mce_descs[] = { [SMCA_PB] = { smca_pb_mce_desc, ARRAY_SIZE(smca_pb_mce_desc) }, [SMCA_PSP] = { smca_psp_mce_desc, ARRAY_SIZE(smca_psp_mce_desc) }, [SMCA_SMU] = { smca_smu_mce_desc, ARRAY_SIZE(smca_smu_mce_desc) }, + [SMCA_MP5] = { smca_mp5_mce_desc, ARRAY_SIZE(smca_mp5_mce_desc) }, + [SMCA_NBIO] = { smca_nbio_mce_desc, ARRAY_SIZE(smca_nbio_mce_desc) }, + [SMCA_PCIE] = { smca_pcie_mce_desc, ARRAY_SIZE(smca_pcie_mce_desc) }, };
static bool f12h_mc0_mce(u16 ec, u8 xec)
From: Yazen Ghannam yazen.ghannam@amd.com
mainline inclusion from mainline-v5.1-rc1 commit 3ad7e748c12cc771df6020a552def3e1727e8a17 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
The existing CS, PSP, and SMU SMCA bank types will see new versions (as indicated by their McaTypes) in future SMCA systems.
Add the new (HWID, MCATYPE) tuples for these new versions. Reuse the same names as the older versions, since they are logically the same to the user. SMCA systems won't mix and match IP blocks with different McaType versions within one system, so there is no need to distinguish them. The MCA_IPID register is saved when logging an MCA error, and that can be used to triage the error.
Also, add the new error descriptions to edac_mce_amd. Some error types (positions in the list) are overloaded compared to the previous McaTypes. Therefore, just create new lists of the error descriptions to keep things simple even if some of the error descriptions are the same between versions.
Signed-off-by: Yazen Ghannam yazen.ghannam@amd.com Signed-off-by: Borislav Petkov bp@suse.de Cc: Arnd Bergmann arnd@arndb.de Cc: "H. Peter Anvin" hpa@zytor.com Cc: Ingo Molnar mingo@redhat.com Cc: Kees Cook keescook@chromium.org Cc: linux-edac linux-edac@vger.kernel.org Cc: Mauro Carvalho Chehab mchehab@kernel.org Cc: Pu Wen puwen@hygon.cn Cc: Qiuxu Zhuo qiuxu.zhuo@intel.com Cc: Shirish S Shirish.S@amd.com Cc: Thomas Gleixner tglx@linutronix.de Cc: Tony Luck tony.luck@intel.com Cc: Vishal Verma vishal.l.verma@intel.com Cc: x86-ml x86@kernel.org Link: https://lkml.kernel.org/r/20190201225534.8177-3-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/include/asm/mce.h | 3 ++ arch/x86/kernel/cpu/mce/amd.c | 6 ++++ drivers/edac/mce_amd.c | 55 +++++++++++++++++++++++++++++++++++ 3 files changed, 64 insertions(+)
diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h index 91b65d859ca8..299a38536567 100644 --- a/arch/x86/include/asm/mce.h +++ b/arch/x86/include/asm/mce.h @@ -307,11 +307,14 @@ enum smca_bank_types { SMCA_FP, /* Floating Point */ SMCA_L3_CACHE, /* L3 Cache */ SMCA_CS, /* Coherent Slave */ + SMCA_CS_V2, /* Coherent Slave */ SMCA_PIE, /* Power, Interrupts, etc. */ SMCA_UMC, /* Unified Memory Controller */ SMCA_PB, /* Parameter Block */ SMCA_PSP, /* Platform Security Processor */ + SMCA_PSP_V2, /* Platform Security Processor */ SMCA_SMU, /* System Management Unit */ + SMCA_SMU_V2, /* System Management Unit */ SMCA_MP5, /* Microprocessor 5 Unit */ SMCA_NBIO, /* Northbridge IO Unit */ SMCA_PCIE, /* PCI Express Unit */ diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c index b0e540287552..81569eee7d29 100644 --- a/arch/x86/kernel/cpu/mce/amd.c +++ b/arch/x86/kernel/cpu/mce/amd.c @@ -88,11 +88,14 @@ static struct smca_bank_name smca_names[] = { [SMCA_FP] = { "floating_point", "Floating Point Unit" }, [SMCA_L3_CACHE] = { "l3_cache", "L3 Cache" }, [SMCA_CS] = { "coherent_slave", "Coherent Slave" }, + [SMCA_CS_V2] = { "coherent_slave", "Coherent Slave" }, [SMCA_PIE] = { "pie", "Power, Interrupts, etc." }, [SMCA_UMC] = { "umc", "Unified Memory Controller" }, [SMCA_PB] = { "param_block", "Parameter Block" }, [SMCA_PSP] = { "psp", "Platform Security Processor" }, + [SMCA_PSP_V2] = { "psp", "Platform Security Processor" }, [SMCA_SMU] = { "smu", "System Management Unit" }, + [SMCA_SMU_V2] = { "smu", "System Management Unit" }, [SMCA_MP5] = { "mp5", "Microprocessor 5 Unit" }, [SMCA_NBIO] = { "nbio", "Northbridge IO Unit" }, [SMCA_PCIE] = { "pcie", "PCI Express Unit" }, @@ -153,6 +156,7 @@ static struct smca_hwid smca_hwid_mcatypes[] = { /* Data Fabric MCA types */ { SMCA_CS, HWID_MCATYPE(0x2E, 0x0), 0x1FF }, { SMCA_PIE, HWID_MCATYPE(0x2E, 0x1), 0xF }, + { SMCA_CS_V2, HWID_MCATYPE(0x2E, 0x2), 0x3FFF },
/* Unified Memory Controller MCA type */ { SMCA_UMC, HWID_MCATYPE(0x96, 0x0), 0x3F }, @@ -162,9 +166,11 @@ static struct smca_hwid smca_hwid_mcatypes[] = {
/* Platform Security Processor MCA type */ { SMCA_PSP, HWID_MCATYPE(0xFF, 0x0), 0x1 }, + { SMCA_PSP_V2, HWID_MCATYPE(0xFF, 0x1), 0x3FFFF },
/* System Management Unit MCA type */ { SMCA_SMU, HWID_MCATYPE(0x01, 0x0), 0x1 }, + { SMCA_SMU_V2, HWID_MCATYPE(0x01, 0x1), 0x7FF },
/* Microprocessor 5 Unit MCA type */ { SMCA_MP5, HWID_MCATYPE(0x01, 0x2), 0x3FF }, diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c index db334053b8bc..970ee54750a6 100644 --- a/drivers/edac/mce_amd.c +++ b/drivers/edac/mce_amd.c @@ -257,6 +257,23 @@ static const char * const smca_cs_mce_desc[] = { "ECC error on probe filter access", };
+static const char * const smca_cs2_mce_desc[] = { + "Illegal Request", + "Address Violation", + "Security Violation", + "Illegal Response", + "Unexpected Response", + "Request or Probe Parity Error", + "Read Response Parity Error", + "Atomic Request Parity Error", + "SDP read response had no match in the CS queue", + "Probe Filter Protocol Error", + "Probe Filter ECC Error", + "SDP read response had an unexpected RETRY error", + "Counter overflow error", + "Counter underflow error", +}; + static const char * const smca_pie_mce_desc[] = { "HW assert", "Internal PIE register security violation", @@ -281,10 +298,45 @@ static const char * const smca_psp_mce_desc[] = { "PSP RAM ECC or parity error", };
+static const char * const smca_psp2_mce_desc[] = { + "High SRAM ECC or parity error", + "Low SRAM ECC or parity error", + "Instruction Cache Bank 0 ECC or parity error", + "Instruction Cache Bank 1 ECC or parity error", + "Instruction Tag Ram 0 parity error", + "Instruction Tag Ram 1 parity error", + "Data Cache Bank 0 ECC or parity error", + "Data Cache Bank 1 ECC or parity error", + "Data Cache Bank 2 ECC or parity error", + "Data Cache Bank 3 ECC or parity error", + "Data Tag Bank 0 parity error", + "Data Tag Bank 1 parity error", + "Data Tag Bank 2 parity error", + "Data Tag Bank 3 parity error", + "Dirty Data Ram parity error", + "TLB Bank 0 parity error", + "TLB Bank 1 parity error", + "System Hub Read Buffer ECC or parity error", +}; + static const char * const smca_smu_mce_desc[] = { "SMU RAM ECC or parity error", };
+static const char * const smca_smu2_mce_desc[] = { + "High SRAM ECC or parity error", + "Low SRAM ECC or parity error", + "Data Cache Bank A ECC or parity error", + "Data Cache Bank B ECC or parity error", + "Data Tag Cache Bank A ECC or parity error", + "Data Tag Cache Bank B ECC or parity error", + "Instruction Cache Bank A ECC or parity error", + "Instruction Cache Bank B ECC or parity error", + "Instruction Tag Cache Bank A ECC or parity error", + "Instruction Tag Cache Bank B ECC or parity error", + "System Hub Read Buffer ECC or parity error", +}; + static const char * const smca_mp5_mce_desc[] = { "High SRAM ECC or parity error", "Low SRAM ECC or parity error", @@ -328,11 +380,14 @@ static struct smca_mce_desc smca_mce_descs[] = { [SMCA_FP] = { smca_fp_mce_desc, ARRAY_SIZE(smca_fp_mce_desc) }, [SMCA_L3_CACHE] = { smca_l3_mce_desc, ARRAY_SIZE(smca_l3_mce_desc) }, [SMCA_CS] = { smca_cs_mce_desc, ARRAY_SIZE(smca_cs_mce_desc) }, + [SMCA_CS_V2] = { smca_cs2_mce_desc, ARRAY_SIZE(smca_cs2_mce_desc) }, [SMCA_PIE] = { smca_pie_mce_desc, ARRAY_SIZE(smca_pie_mce_desc) }, [SMCA_UMC] = { smca_umc_mce_desc, ARRAY_SIZE(smca_umc_mce_desc) }, [SMCA_PB] = { smca_pb_mce_desc, ARRAY_SIZE(smca_pb_mce_desc) }, [SMCA_PSP] = { smca_psp_mce_desc, ARRAY_SIZE(smca_psp_mce_desc) }, + [SMCA_PSP_V2] = { smca_psp2_mce_desc, ARRAY_SIZE(smca_psp2_mce_desc) }, [SMCA_SMU] = { smca_smu_mce_desc, ARRAY_SIZE(smca_smu_mce_desc) }, + [SMCA_SMU_V2] = { smca_smu2_mce_desc, ARRAY_SIZE(smca_smu2_mce_desc) }, [SMCA_MP5] = { smca_mp5_mce_desc, ARRAY_SIZE(smca_mp5_mce_desc) }, [SMCA_NBIO] = { smca_nbio_mce_desc, ARRAY_SIZE(smca_nbio_mce_desc) }, [SMCA_PCIE] = { smca_pcie_mce_desc, ARRAY_SIZE(smca_pcie_mce_desc) },
From: Yazen Ghannam yazen.ghannam@amd.com
mainline inclusion from mainline-v5.1-rc1 commit 8a5dd2cd2f2e94878cacc969655a69ca214795ab category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Some SMCA bank types on future systems will report new error types even though the bank type is not treated as a new version. These new error types will be reported by bits that are reserved on past systems.
Add the new error descriptions to the lists in edac_mce_amd.
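For orientation: the third field of each (HWID, MCATYPE) tuple is the bitmap of valid extended error codes (XEC), so widening it is what unlocks a new description. A sketch of the gate, with the SMCA_LS change as the example:

#include <stdint.h>

/* Widening SMCA_LS's XEC bitmap from 0x1FFFEF to 0x1FFFFF turns on
 * formerly-reserved bit 4, matching the new "DC Tag error type 5"
 * entry at index 4 of smca_ls_mce_desc[]. */
static int xec_is_valid(uint32_t xec_bitmap, unsigned int xec)
{
	return !!(xec_bitmap & (1u << xec));
}

/* xec_is_valid(0x1FFFEF, 4) == 0; xec_is_valid(0x1FFFFF, 4) == 1 */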
Signed-off-by: Yazen Ghannam yazen.ghannam@amd.com Signed-off-by: Borislav Petkov bp@suse.de Cc: "H. Peter Anvin" hpa@zytor.com Cc: Ingo Molnar mingo@redhat.com Cc: Kees Cook keescook@chromium.org Cc: linux-edac linux-edac@vger.kernel.org Cc: Mauro Carvalho Chehab mchehab@kernel.org Cc: Shirish S Shirish.S@amd.com Cc: Thomas Gleixner tglx@linutronix.de Cc: Tony Luck tony.luck@intel.com Cc: x86-ml x86@kernel.org Link: https://lkml.kernel.org/r/20190201225534.8177-4-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kernel/cpu/mce/amd.c | 8 ++++---- drivers/edac/mce_amd.c | 6 +++++- 2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c index 81569eee7d29..3ed77a97200d 100644 --- a/arch/x86/kernel/cpu/mce/amd.c +++ b/arch/x86/kernel/cpu/mce/amd.c @@ -144,22 +144,22 @@ static struct smca_hwid smca_hwid_mcatypes[] = { { SMCA_RESERVED, HWID_MCATYPE(0x00, 0x0), 0x0 },
/* ZN Core (HWID=0xB0) MCA types */ - { SMCA_LS, HWID_MCATYPE(0xB0, 0x0), 0x1FFFEF }, + { SMCA_LS, HWID_MCATYPE(0xB0, 0x0), 0x1FFFFF }, { SMCA_IF, HWID_MCATYPE(0xB0, 0x1), 0x3FFF }, { SMCA_L2_CACHE, HWID_MCATYPE(0xB0, 0x2), 0xF }, { SMCA_DE, HWID_MCATYPE(0xB0, 0x3), 0x1FF }, /* HWID 0xB0 MCATYPE 0x4 is Reserved */ - { SMCA_EX, HWID_MCATYPE(0xB0, 0x5), 0x7FF }, + { SMCA_EX, HWID_MCATYPE(0xB0, 0x5), 0xFFF }, { SMCA_FP, HWID_MCATYPE(0xB0, 0x6), 0x7F }, { SMCA_L3_CACHE, HWID_MCATYPE(0xB0, 0x7), 0xFF },
/* Data Fabric MCA types */ { SMCA_CS, HWID_MCATYPE(0x2E, 0x0), 0x1FF }, - { SMCA_PIE, HWID_MCATYPE(0x2E, 0x1), 0xF }, + { SMCA_PIE, HWID_MCATYPE(0x2E, 0x1), 0x1F }, { SMCA_CS_V2, HWID_MCATYPE(0x2E, 0x2), 0x3FFF },
/* Unified Memory Controller MCA type */ - { SMCA_UMC, HWID_MCATYPE(0x96, 0x0), 0x3F }, + { SMCA_UMC, HWID_MCATYPE(0x96, 0x0), 0xFF },
/* Parameter Block MCA type */ { SMCA_PB, HWID_MCATYPE(0x05, 0x0), 0x1 }, diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c index 970ee54750a6..8ee3acc6644e 100644 --- a/drivers/edac/mce_amd.c +++ b/drivers/edac/mce_amd.c @@ -155,7 +155,7 @@ static const char * const smca_ls_mce_desc[] = { "Store queue parity", "Miss address buffer payload parity", "L1 TLB parity", - "Reserved", + "DC Tag error type 5", "DC tag error type 6", "DC tag error type 1", "Internal error type 1", @@ -222,6 +222,7 @@ static const char * const smca_ex_mce_desc[] = { "Retire status queue parity error", "Scheduling queue parity error", "Branch buffer queue parity error", + "Hardware Assertion error", };
static const char * const smca_fp_mce_desc[] = { @@ -279,6 +280,7 @@ static const char * const smca_pie_mce_desc[] = { "Internal PIE register security violation", "Error on GMI link", "Poison data written to internal PIE register", + "A deferred error was detected in the DF" };
static const char * const smca_umc_mce_desc[] = { @@ -288,6 +290,8 @@ static const char * const smca_umc_mce_desc[] = { "Advanced peripheral bus error", "Command/address parity error", "Write data CRC error", + "DCQ SRAM ECC error", + "AES SRAM ECC error", };
static const char * const smca_pb_mce_desc[] = {
From: Yazen Ghannam yazen.ghannam@amd.com
mainline inclusion from mainline-v5.6-rc1 commit 89a76171bf50bd20d44338408b8c09433c302956 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Add support for a new version of the Load Store unit bank type as indicated by its McaType value, which will be present in future SMCA systems.
Add the new (HWID, MCATYPE) tuple. Reuse the same name, since this is logically the same to the user.
Also, add the new error descriptions to edac_mce_amd.
Signed-off-by: Yazen Ghannam yazen.ghannam@amd.com Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20200110015651.14887-2-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/include/asm/mce.h | 1 + arch/x86/kernel/cpu/mce/amd.c | 2 ++ drivers/edac/mce_amd.c | 28 ++++++++++++++++++++++++++++ 3 files changed, 31 insertions(+)
diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h index 299a38536567..77cb81ec0c42 100644 --- a/arch/x86/include/asm/mce.h +++ b/arch/x86/include/asm/mce.h @@ -299,6 +299,7 @@ extern void apei_mce_report_mem_error(int corrected, /* These may be used by multiple smca_hwid_mcatypes */ enum smca_bank_types { SMCA_LS = 0, /* Load Store */ + SMCA_LS_V2, /* Load Store */ SMCA_IF, /* Instruction Fetch */ SMCA_L2_CACHE, /* L2 Cache */ SMCA_DE, /* Decoder Unit */ diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c index 3ed77a97200d..553112f483a4 100644 --- a/arch/x86/kernel/cpu/mce/amd.c +++ b/arch/x86/kernel/cpu/mce/amd.c @@ -80,6 +80,7 @@ struct smca_bank_name {
static struct smca_bank_name smca_names[] = { [SMCA_LS] = { "load_store", "Load Store Unit" }, + [SMCA_LS_V2] = { "load_store", "Load Store Unit" }, [SMCA_IF] = { "insn_fetch", "Instruction Fetch Unit" }, [SMCA_L2_CACHE] = { "l2_cache", "L2 Cache" }, [SMCA_DE] = { "decode_unit", "Decode Unit" }, @@ -145,6 +146,7 @@ static struct smca_hwid smca_hwid_mcatypes[] = {
/* ZN Core (HWID=0xB0) MCA types */ { SMCA_LS, HWID_MCATYPE(0xB0, 0x0), 0x1FFFFF }, + { SMCA_LS_V2, HWID_MCATYPE(0xB0, 0x10), 0xFFFFFF }, { SMCA_IF, HWID_MCATYPE(0xB0, 0x1), 0x3FFF }, { SMCA_L2_CACHE, HWID_MCATYPE(0xB0, 0x2), 0xF }, { SMCA_DE, HWID_MCATYPE(0xB0, 0x3), 0x1FF }, diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c index 8ee3acc6644e..af89f4b68ecd 100644 --- a/drivers/edac/mce_amd.c +++ b/drivers/edac/mce_amd.c @@ -174,6 +174,33 @@ static const char * const smca_ls_mce_desc[] = { "L2 fill data error", };
+static const char * const smca_ls2_mce_desc[] = { + "An ECC error was detected on a data cache read by a probe or victimization", + "An ECC error or L2 poison was detected on a data cache read by a load", + "An ECC error was detected on a data cache read-modify-write by a store", + "An ECC error or poison bit mismatch was detected on a tag read by a probe or victimization", + "An ECC error or poison bit mismatch was detected on a tag read by a load", + "An ECC error or poison bit mismatch was detected on a tag read by a store", + "An ECC error was detected on an EMEM read by a load", + "An ECC error was detected on an EMEM read-modify-write by a store", + "A parity error was detected in an L1 TLB entry by any access", + "A parity error was detected in an L2 TLB entry by any access", + "A parity error was detected in a PWC entry by any access", + "A parity error was detected in an STQ entry by any access", + "A parity error was detected in an LDQ entry by any access", + "A parity error was detected in a MAB entry by any access", + "A parity error was detected in an SCB entry state field by any access", + "A parity error was detected in an SCB entry address field by any access", + "A parity error was detected in an SCB entry data field by any access", + "A parity error was detected in a WCB entry by any access", + "A poisoned line was detected in an SCB entry by any access", + "A SystemReadDataError error was reported on read data returned from L2 for a load", + "A SystemReadDataError error was reported on read data returned from L2 for an SCB store", + "A SystemReadDataError error was reported on read data returned from L2 for a WCB store", + "A hardware assertion error was reported", + "A parity error was detected in an STLF, SCB EMEM entry or SRB store data by any access", +}; + static const char * const smca_if_mce_desc[] = { "microtag probe port parity error", "IC microtag or full tag multi-hit error", @@ -377,6 +404,7 @@ struct smca_mce_desc {
static struct smca_mce_desc smca_mce_descs[] = { [SMCA_LS] = { smca_ls_mce_desc, ARRAY_SIZE(smca_ls_mce_desc) }, + [SMCA_LS_V2] = { smca_ls2_mce_desc, ARRAY_SIZE(smca_ls2_mce_desc) }, [SMCA_IF] = { smca_if_mce_desc, ARRAY_SIZE(smca_if_mce_desc) }, [SMCA_L2_CACHE] = { smca_l2_mce_desc, ARRAY_SIZE(smca_l2_mce_desc) }, [SMCA_DE] = { smca_de_mce_desc, ARRAY_SIZE(smca_de_mce_desc) },
From: Jan H. Schönherr jschoenh@amazon.de
mainline inclusion from mainline-v5.6-rc1 commit 7a8bc2b0462eaca0072d1f7f4ddc749fcb8a773c category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
The function mce_severity() is not required to update its msg argument. In fact, mce_severity_amd() does not, which makes mce_no_way_out() return uninitialized data, which may be used later for printing.
Assuming that implementations of mce_severity() either always or never update the msg argument (which is currently the case), it is sufficient to initialize the temporary variable in mce_no_way_out().
While at it, avoid printing a useless "Unknown".
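The hazard in miniature, as a self-contained user-space sketch (the names are stand-ins, not the kernel code):

#include <stdio.h>

/* Stand-in for an mce_severity() flavor that never updates *msg. */
static void severity_amd(char **msg) { (void)msg; }

int main(void)
{
	char *msg = NULL;	/* was "Unknown"; NULL means nothing to print */
	char *tmp = msg;	/* was uninitialized -- reading it was UB */

	severity_amd(&tmp);
	if (tmp)		/* print only if a callee actually set it */
		printf("%s\n", tmp);
	return 0;
}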
Signed-off-by: Jan H. Schönherr jschoenh@amazon.de Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20200103150722.20313-4-jschoenh@amazon.de Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kernel/cpu/mce/core.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 31aa078650bb..2b0fecdc137d 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -793,7 +793,7 @@ EXPORT_SYMBOL_GPL(machine_check_poll); static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp, struct pt_regs *regs) { - char *tmp; + char *tmp = *msg; int i;
for (i = 0; i < mca_cfg.banks; i++) { @@ -1218,8 +1218,8 @@ void do_machine_check(struct pt_regs *regs, long error_code) DECLARE_BITMAP(toclear, MAX_NR_BANKS); struct mca_config *cfg = &mca_cfg; int cpu = smp_processor_id(); - char *msg = "Unknown"; struct mce m, *final; + char *msg = NULL; int worst = 0;
/*
From: Yazen Ghannam yazen.ghannam@amd.com
mainline inclusion from mainline-v5.2-rc1 commit 4d30d2bc3c23e63c2608bc5b03b0960490d5b740 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Define and use a macro for looping over the number of Unified Memory Controllers.
No functional change.
Signed-off-by: Yazen Ghannam yazen.ghannam@amd.com Signed-off-by: Borislav Petkov bp@suse.de Tested-by: Kim Phillips kim.phillips@amd.com Cc: James Morse james.morse@arm.com Cc: Mauro Carvalho Chehab mchehab@kernel.org Cc: linux-edac linux-edac@vger.kernel.org Link: https://lkml.kernel.org/r/20190228153558.127292-2-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- drivers/edac/amd64_edac.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index 0a436b099aac..8f979b5e4f73 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -442,6 +442,9 @@ static void get_cs_base_and_mask(struct amd64_pvt *pvt, int csrow, u8 dct, #define for_each_chip_select_mask(i, dct, pvt) \ for (i = 0; i < pvt->csels[dct].m_cnt; i++)
+#define for_each_umc(i) \ + for (i = 0; i < NUM_UMCS; i++) + /* * @input_addr is an InputAddr associated with the node given by mci. Return the * csrow that input_addr maps to, or -1 on failure (no csrow claims input_addr). @@ -715,7 +718,7 @@ static unsigned long determine_edac_cap(struct amd64_pvt *pvt) if (pvt->umc) { u8 i, umc_en_mask = 0, dimm_ecc_en_mask = 0;
- for (i = 0; i < NUM_UMCS; i++) { + for_each_umc(i) { if (!(pvt->umc[i].sdp_ctrl & UMC_SDP_INIT)) continue;
@@ -804,7 +807,7 @@ static void __dump_misc_regs_df(struct amd64_pvt *pvt) struct amd64_umc *umc; u32 i, tmp, umc_base;
- for (i = 0; i < NUM_UMCS; i++) { + for_each_umc(i) { umc_base = get_umc_base(i); umc = &pvt->umc[i];
@@ -1381,7 +1384,7 @@ static int f17_early_channel_count(struct amd64_pvt *pvt) int i, channels = 0;
/* SDP Control bit 31 (SdpInit) is clear for unused UMC channels */ - for (i = 0; i < NUM_UMCS; i++) + for_each_umc(i) channels += !!(pvt->umc[i].sdp_ctrl & UMC_SDP_INIT);
amd64_info("MCT channel count: %d\n", channels); @@ -2596,7 +2599,7 @@ static void determine_ecc_sym_sz(struct amd64_pvt *pvt) if (pvt->umc) { u8 i;
- for (i = 0; i < NUM_UMCS; i++) { + for_each_umc(i) { /* Check enabled channels only: */ if ((pvt->umc[i].sdp_ctrl & UMC_SDP_INIT) && (pvt->umc[i].ecc_ctrl & BIT(7))) { @@ -2632,7 +2635,7 @@ static void __read_mc_regs_df(struct amd64_pvt *pvt) u32 i, umc_base;
/* Read registers from each UMC */ - for (i = 0; i < NUM_UMCS; i++) { + for_each_umc(i) {
umc_base = get_umc_base(i); umc = &pvt->umc[i]; @@ -3045,7 +3048,7 @@ static bool ecc_enabled(struct pci_dev *F3, u16 nid) if (boot_cpu_data.x86 >= 0x17) { u8 umc_en_mask = 0, ecc_en_mask = 0;
- for (i = 0; i < NUM_UMCS; i++) { + for_each_umc(i) { u32 base = get_umc_base(i);
/* Only check enabled UMCs. */ @@ -3098,7 +3101,7 @@ f17h_determine_edac_ctl_cap(struct mem_ctl_info *mci, struct amd64_pvt *pvt) { u8 i, ecc_en = 1, cpk_en = 1, dev_x4 = 1, dev_x16 = 1;
- for (i = 0; i < NUM_UMCS; i++) { + for_each_umc(i) { if (pvt->umc[i].sdp_ctrl & UMC_SDP_INIT) { ecc_en &= !!(pvt->umc[i].umc_cap_hi & UMC_ECC_ENABLED); cpk_en &= !!(pvt->umc[i].umc_cap_hi & UMC_ECC_CHIPKILL_CAP);
From: Yazen Ghannam yazen.ghannam@amd.com
mainline inclusion from mainline-v5.4-rc1 commit d971e28e2ce4696fcc32998c8aced5e47701fffe category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
The struct chip_select array that is used for saving chip select bases and masks is fixed at a length of two. There should be one struct chip_select per controller, so this array should be increased to support systems that may have more than two controllers.
Increase the size of the struct chip_select array to eight, which is the largest number of controllers per die currently supported on AMD systems.
Fix the number of DIMMs and chip select bases/masks on Family 17h, because AMD Family 17h systems support 2 DIMMs, 4 CS bases, and 2 CS masks per channel.
Also, carve out the Family 17h+ reading of the bases/masks into a separate function. This effectively reverts the original bases/masks reading code to before Family 17h support was added.
Signed-off-by: Yazen Ghannam yazen.ghannam@amd.com Signed-off-by: Borislav Petkov bp@suse.de Cc: "linux-edac@vger.kernel.org" linux-edac@vger.kernel.org Cc: James Morse james.morse@arm.com Cc: Mauro Carvalho Chehab mchehab@kernel.org Cc: Tony Luck tony.luck@intel.com Link: https://lkml.kernel.org/r/20190821235938.118710-2-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- drivers/edac/amd64_edac.c | 123 +++++++++++++++++++++----------------- drivers/edac/amd64_edac.h | 5 +- 2 files changed, 71 insertions(+), 57 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index 8f979b5e4f73..c31de480185e 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -783,7 +783,7 @@ static void debug_display_dimm_sizes_df(struct amd64_pvt *pvt, u8 ctrl)
edac_printk(KERN_DEBUG, EDAC_MC, "UMC%d chip selects:\n", ctrl);
- for (dimm = 0; dimm < 4; dimm++) { + for (dimm = 0; dimm < 2; dimm++) { size0 = 0; cs0 = dimm * 2;
@@ -905,89 +905,102 @@ static void prep_chip_selects(struct amd64_pvt *pvt) } else if (pvt->fam == 0x15 && pvt->model == 0x30) { pvt->csels[0].b_cnt = pvt->csels[1].b_cnt = 4; pvt->csels[0].m_cnt = pvt->csels[1].m_cnt = 2; + } else if (pvt->fam >= 0x17) { + int umc; + + for_each_umc(umc) { + pvt->csels[umc].b_cnt = 4; + pvt->csels[umc].m_cnt = 2; + } + } else { pvt->csels[0].b_cnt = pvt->csels[1].b_cnt = 8; pvt->csels[0].m_cnt = pvt->csels[1].m_cnt = 4; } }
+static void read_umc_base_mask(struct amd64_pvt *pvt) +{ + u32 umc_base_reg, umc_mask_reg; + u32 base_reg, mask_reg; + u32 *base, *mask; + int cs, umc; + + for_each_umc(umc) { + umc_base_reg = get_umc_base(umc) + UMCCH_BASE_ADDR; + + for_each_chip_select(cs, umc, pvt) { + base = &pvt->csels[umc].csbases[cs]; + + base_reg = umc_base_reg + (cs * 4); + + if (!amd_smn_read(pvt->mc_node_id, base_reg, base)) + edac_dbg(0, " DCSB%d[%d]=0x%08x reg: 0x%x\n", + umc, cs, *base, base_reg); + } + + umc_mask_reg = get_umc_base(umc) + UMCCH_ADDR_MASK; + + for_each_chip_select_mask(cs, umc, pvt) { + mask = &pvt->csels[umc].csmasks[cs]; + + mask_reg = umc_mask_reg + (cs * 4); + + if (!amd_smn_read(pvt->mc_node_id, mask_reg, mask)) + edac_dbg(0, " DCSM%d[%d]=0x%08x reg: 0x%x\n", + umc, cs, *mask, mask_reg); + } + } +} + /* * Function 2 Offset F10_DCSB0; read in the DCS Base and DCS Mask registers */ static void read_dct_base_mask(struct amd64_pvt *pvt) { - int base_reg0, base_reg1, mask_reg0, mask_reg1, cs; + int cs;
prep_chip_selects(pvt);
- if (pvt->umc) { - base_reg0 = get_umc_base(0) + UMCCH_BASE_ADDR; - base_reg1 = get_umc_base(1) + UMCCH_BASE_ADDR; - mask_reg0 = get_umc_base(0) + UMCCH_ADDR_MASK; - mask_reg1 = get_umc_base(1) + UMCCH_ADDR_MASK; - } else { - base_reg0 = DCSB0; - base_reg1 = DCSB1; - mask_reg0 = DCSM0; - mask_reg1 = DCSM1; - } + if (pvt->umc) + return read_umc_base_mask(pvt);
for_each_chip_select(cs, 0, pvt) { - int reg0 = base_reg0 + (cs * 4); - int reg1 = base_reg1 + (cs * 4); + int reg0 = DCSB0 + (cs * 4); + int reg1 = DCSB1 + (cs * 4); u32 *base0 = &pvt->csels[0].csbases[cs]; u32 *base1 = &pvt->csels[1].csbases[cs];
- if (pvt->umc) { - if (!amd_smn_read(pvt->mc_node_id, reg0, base0)) - edac_dbg(0, " DCSB0[%d]=0x%08x reg: 0x%x\n", - cs, *base0, reg0); - - if (!amd_smn_read(pvt->mc_node_id, reg1, base1)) - edac_dbg(0, " DCSB1[%d]=0x%08x reg: 0x%x\n", - cs, *base1, reg1); - } else { - if (!amd64_read_dct_pci_cfg(pvt, 0, reg0, base0)) - edac_dbg(0, " DCSB0[%d]=0x%08x reg: F2x%x\n", - cs, *base0, reg0); + if (!amd64_read_dct_pci_cfg(pvt, 0, reg0, base0)) + edac_dbg(0, " DCSB0[%d]=0x%08x reg: F2x%x\n", + cs, *base0, reg0);
- if (pvt->fam == 0xf) - continue; + if (pvt->fam == 0xf) + continue;
- if (!amd64_read_dct_pci_cfg(pvt, 1, reg0, base1)) - edac_dbg(0, " DCSB1[%d]=0x%08x reg: F2x%x\n", - cs, *base1, (pvt->fam == 0x10) ? reg1 - : reg0); - } + if (!amd64_read_dct_pci_cfg(pvt, 1, reg0, base1)) + edac_dbg(0, " DCSB1[%d]=0x%08x reg: F2x%x\n", + cs, *base1, (pvt->fam == 0x10) ? reg1 + : reg0); }
for_each_chip_select_mask(cs, 0, pvt) { - int reg0 = mask_reg0 + (cs * 4); - int reg1 = mask_reg1 + (cs * 4); + int reg0 = DCSM0 + (cs * 4); + int reg1 = DCSM1 + (cs * 4); u32 *mask0 = &pvt->csels[0].csmasks[cs]; u32 *mask1 = &pvt->csels[1].csmasks[cs];
- if (pvt->umc) { - if (!amd_smn_read(pvt->mc_node_id, reg0, mask0)) - edac_dbg(0, " DCSM0[%d]=0x%08x reg: 0x%x\n", - cs, *mask0, reg0); - - if (!amd_smn_read(pvt->mc_node_id, reg1, mask1)) - edac_dbg(0, " DCSM1[%d]=0x%08x reg: 0x%x\n", - cs, *mask1, reg1); - } else { - if (!amd64_read_dct_pci_cfg(pvt, 0, reg0, mask0)) - edac_dbg(0, " DCSM0[%d]=0x%08x reg: F2x%x\n", - cs, *mask0, reg0); + if (!amd64_read_dct_pci_cfg(pvt, 0, reg0, mask0)) + edac_dbg(0, " DCSM0[%d]=0x%08x reg: F2x%x\n", + cs, *mask0, reg0);
- if (pvt->fam == 0xf) - continue; + if (pvt->fam == 0xf) + continue;
- if (!amd64_read_dct_pci_cfg(pvt, 1, reg0, mask1)) - edac_dbg(0, " DCSM1[%d]=0x%08x reg: F2x%x\n", - cs, *mask1, (pvt->fam == 0x10) ? reg1 - : reg0); - } + if (!amd64_read_dct_pci_cfg(pvt, 1, reg0, mask1)) + edac_dbg(0, " DCSM1[%d]=0x%08x reg: F2x%x\n", + cs, *mask1, (pvt->fam == 0x10) ? reg1 + : reg0); } }
diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h index 4242f8e39c18..e4563721ae35 100644 --- a/drivers/edac/amd64_edac.h +++ b/drivers/edac/amd64_edac.h @@ -96,6 +96,7 @@ /* Hardware limit on ChipSelect rows per MC and processors per system */ #define NUM_CHIPSELECTS 8 #define DRAM_RANGES 8 +#define NUM_CONTROLLERS 8
#define ON true #define OFF false @@ -350,8 +351,8 @@ struct amd64_pvt { u32 dbam0; /* DRAM Base Address Mapping reg for DCT0 */ u32 dbam1; /* DRAM Base Address Mapping reg for DCT1 */
- /* one for each DCT */ - struct chip_select csels[2]; + /* one for each DCT/UMC */ + struct chip_select csels[NUM_CONTROLLERS];
/* DRAM base and limit pairs F1x[78,70,68,60,58,50,48,40] */ struct dram_range ranges[DRAM_RANGES];
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.4-rc1
commit 353a1fcb8f9e5857c0fb720b9e57a86c1fb7c17e
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Currently, the DIMM info for AMD Family 17h systems is initialized in init_csrows(). This function is shared with legacy systems, and it is limited to two-channel support.

This prevents the DIMM info from being initialized for some ranks, so those ranks will be missing from the EDAC sysfs.
Create a new init_csrows_df() for Family 17h+ and revert init_csrows() back to pre-Family 17h support.
Loop over all channels in the new function in order to support systems with more than two channels.
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20190821235938.118710-4-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 66 ++++++++++++++++++++++++++++++---------
 1 file changed, 52 insertions(+), 14 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index c31de480185e..db89386c85bf 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -2799,6 +2799,49 @@ static u32 get_csrow_nr_pages(struct amd64_pvt *pvt, u8 dct, int csrow_nr_orig) return nr_pages; }
+static int init_csrows_df(struct mem_ctl_info *mci) +{ + struct amd64_pvt *pvt = mci->pvt_info; + enum edac_type edac_mode = EDAC_NONE; + enum dev_type dev_type = DEV_UNKNOWN; + struct dimm_info *dimm; + int empty = 1; + u8 umc, cs; + + if (mci->edac_ctl_cap & EDAC_FLAG_S16ECD16ED) { + edac_mode = EDAC_S16ECD16ED; + dev_type = DEV_X16; + } else if (mci->edac_ctl_cap & EDAC_FLAG_S8ECD8ED) { + edac_mode = EDAC_S8ECD8ED; + dev_type = DEV_X8; + } else if (mci->edac_ctl_cap & EDAC_FLAG_S4ECD4ED) { + edac_mode = EDAC_S4ECD4ED; + dev_type = DEV_X4; + } else if (mci->edac_ctl_cap & EDAC_FLAG_SECDED) { + edac_mode = EDAC_SECDED; + } + + for_each_umc(umc) { + for_each_chip_select(cs, umc, pvt) { + if (!csrow_enabled(cs, umc, pvt)) + continue; + + empty = 0; + dimm = mci->csrows[cs]->channels[umc]->dimm; + + edac_dbg(1, "MC node: %d, csrow: %d\n", + pvt->mc_node_id, cs); + + dimm->nr_pages = get_csrow_nr_pages(pvt, umc, cs); + dimm->mtype = pvt->dram_type; + dimm->edac_mode = edac_mode; + dimm->dtype = dev_type; + } + } + + return empty; +} + /* * Initialize the array of csrow attribute instances, based on the values * from pci config hardware registers. @@ -2813,15 +2856,16 @@ static int init_csrows(struct mem_ctl_info *mci) int nr_pages = 0; u32 val;
- if (!pvt->umc) { - amd64_read_pci_cfg(pvt->F3, NBCFG, &val); + if (pvt->umc) + return init_csrows_df(mci); + + amd64_read_pci_cfg(pvt->F3, NBCFG, &val);
- pvt->nbcfg = val; + pvt->nbcfg = val;
- edac_dbg(0, "node %d, NBCFG=0x%08x[ChipKillEccCap: %d|DramEccEn: %d]\n", - pvt->mc_node_id, val, - !!(val & NBCFG_CHIPKILL), !!(val & NBCFG_ECC_ENABLE)); - } + edac_dbg(0, "node %d, NBCFG=0x%08x[ChipKillEccCap: %d|DramEccEn: %d]\n", + pvt->mc_node_id, val, + !!(val & NBCFG_CHIPKILL), !!(val & NBCFG_ECC_ENABLE));
/* * We iterate over DCT0 here but we look at DCT1 in parallel, if needed. @@ -2858,13 +2902,7 @@ static int init_csrows(struct mem_ctl_info *mci) edac_dbg(1, "Total csrow%d pages: %u\n", i, nr_pages);
/* Determine DIMM ECC mode: */ - if (pvt->umc) { - if (mci->edac_ctl_cap & EDAC_FLAG_S4ECD4ED) - edac_mode = EDAC_S4ECD4ED; - else if (mci->edac_ctl_cap & EDAC_FLAG_SECDED) - edac_mode = EDAC_SECDED; - - } else if (pvt->nbcfg & NBCFG_ECC_ENABLE) { + if (pvt->nbcfg & NBCFG_ECC_ENABLE) { edac_mode = (pvt->nbcfg & NBCFG_CHIPKILL) ? EDAC_S4ECD4ED : EDAC_SECDED;
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.2-rc1
commit 6e846239e5487cbb89ac8192d5f11437d010130e
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Add the new Family 17h Model 30h PCI IDs to the AMD64 EDAC module.
This also fixes a probe failure that appeared when some other PCI IDs for Family 17h Model 30h were added to the AMD NB code.
Fixes: be3518a16ef2 ("x86/amd_nb: Add PCI device IDs for family 17h, model 30h")
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: linux-edac <linux-edac@vger.kernel.org>
Link: https://lkml.kernel.org/r/20190228153558.127292-1-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 13 +++++++++++++
 drivers/edac/amd64_edac.h |  3 +++
 2 files changed, 16 insertions(+)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index db89386c85bf..6b6a9206ba6d 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -2220,6 +2220,15 @@ static struct amd64_family_type family_types[] = { .dbam_to_cs = f17_base_addr_to_cs_size, } }, + [F17_M30H_CPUS] = { + .ctl_name = "F17h_M30h", + .f0_id = PCI_DEVICE_ID_AMD_17H_M30H_DF_F0, + .f6_id = PCI_DEVICE_ID_AMD_17H_M30H_DF_F6, + .ops = { + .early_channel_count = f17_early_channel_count, + .dbam_to_cs = f17_base_addr_to_cs_size, + } + }, };
/* @@ -3260,6 +3269,10 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt) fam_type = &family_types[F17_M10H_CPUS]; pvt->ops = &family_types[F17_M10H_CPUS].ops; break; + } else if (pvt->model >= 0x30 && pvt->model <= 0x3f) { + fam_type = &family_types[F17_M30H_CPUS]; + pvt->ops = &family_types[F17_M30H_CPUS].ops; + break; } /* fall through */ case 0x18: diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h index e4563721ae35..e1a8586ca347 100644 --- a/drivers/edac/amd64_edac.h +++ b/drivers/edac/amd64_edac.h @@ -118,6 +118,8 @@ #define PCI_DEVICE_ID_AMD_17H_DF_F6 0x1466 #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F0 0x15e8 #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F6 0x15ee +#define PCI_DEVICE_ID_AMD_17H_M30H_DF_F0 0x1490 +#define PCI_DEVICE_ID_AMD_17H_M30H_DF_F6 0x1496
/* * Function 1 - Address Map @@ -285,6 +287,7 @@ enum amd_families { F16_M30H_CPUS, F17_CPUS, F17_M10H_CPUS, + F17_M30H_CPUS, NUM_FAMILIES, };
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.2-rc1
commit bdcee7747f5c490297665af0e1e0fbeb4368804d
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
The first few models of Family 17h all had 2 Unified Memory Controllers per Die, so this was treated as a fixed value. However, future systems may have more Unified Memory Controllers per Die.
Related to this, the channel number and base address of a Unified Memory Controller were found by matching on fixed, known values. However, current and future systems follow this pattern for the channel number and base address of a Unified Memory Controller: 0xYXXXXX, where Y is the channel number. So matching on hardcoded values is not necessary.
Set the number of Unified Memory Controllers at driver init time based on the family/model. Also, update the functions that find the channel number and base address of a Unified Memory Controller to support more than two.
[ bp: Move num_umcs into the .c file and simplify comment. ]
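As a rough standalone sketch of the two derivations described above (illustrative only; the driver's versions appear in the diff below):

  #include <stdint.h>
  #include <stdio.h>

  /* The 6th nibble of the bank's instance ID is the channel number. */
  static int find_umc_channel(uint32_t instance_id)
  {
  	return instance_id >> 20;
  }

  /* UMC channel Y has its register base at 0xY50000. */
  static uint32_t get_umc_base(uint8_t channel)
  {
  	return 0x50000 + ((uint32_t)channel << 20);
  }

  int main(void)
  {
  	printf("channel(0x150f00) = %d\n", find_umc_channel(0x150f00)); /* 1 */
  	printf("base(3) = 0x%x\n", get_umc_base(3)); /* 0x350000 */
  	return 0;
  }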
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: linux-edac <linux-edac@vger.kernel.org>
Link: https://lkml.kernel.org/r/20190228153558.127292-3-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 47 +++++++++++++++++++++++++--------------
 drivers/edac/amd64_edac.h |  6 ++---
 2 files changed, 32 insertions(+), 21 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index 6b6a9206ba6d..db9a56f0c13d 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -18,6 +18,9 @@ static struct msr __percpu *msrs; /* Per-node stuff */ static struct ecc_settings **ecc_stngs;
+/* Number of Unified Memory Controllers */ +static u8 num_umcs; + /* * Valid scrub rates for the K8 hardware memory scrubber. We map the scrubbing * bandwidth to a valid bit pattern. The 'set' operation finds the 'matching- @@ -443,7 +446,7 @@ static void get_cs_base_and_mask(struct amd64_pvt *pvt, int csrow, u8 dct, for (i = 0; i < pvt->csels[dct].m_cnt; i++)
#define for_each_umc(i) \ - for (i = 0; i < NUM_UMCS; i++) + for (i = 0; i < num_umcs; i++)
/* * @input_addr is an InputAddr associated with the node given by mci. Return the @@ -2482,18 +2485,14 @@ static inline void decode_bus_error(int node_id, struct mce *m) * To find the UMC channel represented by this bank we need to match on its * instance_id. The instance_id of a bank is held in the lower 32 bits of its * IPID. + * + * Currently, we can derive the channel number by looking at the 6th nibble in + * the instance_id. For example, instance_id=0xYXXXXX where Y is the channel + * number. */ -static int find_umc_channel(struct amd64_pvt *pvt, struct mce *m) +static int find_umc_channel(struct mce *m) { - u32 umc_instance_id[] = {0x50f00, 0x150f00}; - u32 instance_id = m->ipid & GENMASK(31, 0); - int i, channel = -1; - - for (i = 0; i < ARRAY_SIZE(umc_instance_id); i++) - if (umc_instance_id[i] == instance_id) - channel = i; - - return channel; + return (m->ipid & GENMASK(31, 0)) >> 20; }
static void decode_umc_error(int node_id, struct mce *m) @@ -2515,11 +2514,7 @@ static void decode_umc_error(int node_id, struct mce *m) if (m->status & MCI_STATUS_DEFERRED) ecc_type = 3;
- err.channel = find_umc_channel(pvt, m); - if (err.channel < 0) { - err.err_code = ERR_CHANNEL; - goto log_error; - } + err.channel = find_umc_channel(m);
if (!(m->status & MCI_STATUS_SYNDV)) { err.err_code = ERR_SYND; @@ -3306,6 +3301,22 @@ static const struct attribute_group *amd64_edac_attr_groups[] = { NULL };
+/* Set the number of Unified Memory Controllers in the system. */ +static void compute_num_umcs(void) +{ + u8 model = boot_cpu_data.x86_model; + + if (boot_cpu_data.x86 < 0x17) + return; + + if (model >= 0x30 && model <= 0x3f) + num_umcs = 8; + else + num_umcs = 2; + + edac_dbg(1, "Number of UMCs: %x", num_umcs); +} + static int init_one_instance(unsigned int nid) { struct pci_dev *F3 = node_to_amd_nb(nid)->misc; @@ -3330,7 +3341,7 @@ static int init_one_instance(unsigned int nid) goto err_free;
if (pvt->fam >= 0x17) { - pvt->umc = kcalloc(NUM_UMCS, sizeof(struct amd64_umc), GFP_KERNEL); + pvt->umc = kcalloc(num_umcs, sizeof(struct amd64_umc), GFP_KERNEL); if (!pvt->umc) { ret = -ENOMEM; goto err_free; @@ -3551,6 +3562,8 @@ static int __init amd64_edac_init(void) if (!msrs) goto err_free;
+ compute_num_umcs(); + for (i = 0; i < amd_nb_num(); i++) { err = probe_one_instance(i); if (err) { diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h index e1a8586ca347..df269383f47f 100644 --- a/drivers/edac/amd64_edac.h +++ b/drivers/edac/amd64_edac.h @@ -275,8 +275,6 @@
#define UMC_SDP_INIT BIT(31)
-#define NUM_UMCS 2 - enum amd_families { K8_CPUS = 0, F10_CPUS, @@ -400,8 +398,8 @@ struct err_info {
static inline u32 get_umc_base(u8 channel) { - /* ch0: 0x50000, ch1: 0x150000 */ - return 0x50000 + (!!channel << 20); + /* chY: 0xY50000 */ + return 0x50000 + (channel << 20); }
static inline u64 get_dram_base(struct amd64_pvt *pvt, u8 i)
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.2-rc1
commit 869adc4316ea348e3c52af2494d9b1f6bd68abbd
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
The AMD64 EDAC module currently hardcodes the EDAC channel layer size to two. Future AMD systems may have more channels than this.
Set the EDAC channel layer size equal to the maximum number of channels possible for the system. On Family 17h and later, this is set in the num_umcs variable. Older systems will continue to use two as the default.
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: linux-edac <linux-edac@vger.kernel.org>
Link: https://lkml.kernel.org/r/20190325203319.7603-1-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index db9a56f0c13d..29a452c83e13 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -3380,8 +3380,14 @@ static int init_one_instance(unsigned int nid) * Always allocate two channels since we can have setups with DIMMs on * only one channel. Also, this simplifies handling later for the price * of a couple of KBs tops. + * + * On Fam17h+, the number of controllers may be greater than two. So set + * the size equal to the maximum number of UMCs. */ - layers[1].size = 2; + if (pvt->fam >= 0x17) + layers[1].size = num_umcs; + else + layers[1].size = 2; layers[1].is_virt_csrow = false;
mci = edac_mc_alloc(nid, ARRAY_SIZE(layers), layers, 0);
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.2-rc1
commit 7835961d377b75ab9ae77f715e378fcb72508306
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Future AMD systems may support x16 symbol sizes.
Recognize when a system is using the x16 symbol size. Also, simplify the print statement.

Note that an x16 syndrome vector table is not necessary, unlike with x4 or x8 syndromes. This is because systems that support x16 symbol sizes are SMCA systems, where the syndrome can be extracted directly from the MCA_SYND[Syndrome] field.
[ bp: massage. ]
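A minimal sketch of the detection logic (bit positions taken from the diff below; the x4 fallback is an assumption about the driver's default):

  #include <stdint.h>

  /* UMC ECC_CTRL bits reporting the symbol size in use. */
  #define ECC_CTRL_X16	(1u << 9)
  #define ECC_CTRL_X8	(1u << 7)

  static uint8_t ecc_symbol_size(uint32_t ecc_ctrl)
  {
  	if (ecc_ctrl & ECC_CTRL_X16)
  		return 16;
  	if (ecc_ctrl & ECC_CTRL_X8)
  		return 8;
  	return 4;	/* assumed fallback: x4 symbols */
  }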
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: linux-edac <linux-edac@vger.kernel.org>
Link: https://lkml.kernel.org/r/20190228153558.127292-4-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 21 ++++++++++-----------
 drivers/edac/amd64_edac.h |  2 +-
 2 files changed, 11 insertions(+), 12 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index 29a452c83e13..c4d634225969 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -893,8 +893,7 @@ static void dump_misc_regs(struct amd64_pvt *pvt)
edac_dbg(1, " DramHoleValid: %s\n", dhar_valid(pvt) ? "yes" : "no");
- amd64_info("using %s syndromes.\n", - ((pvt->ecc_sym_sz == 8) ? "x8" : "x4")); + amd64_info("using x%u syndromes.\n", pvt->ecc_sym_sz); }
/* @@ -2618,17 +2617,17 @@ static void determine_ecc_sym_sz(struct amd64_pvt *pvt)
for_each_umc(i) { /* Check enabled channels only: */ - if ((pvt->umc[i].sdp_ctrl & UMC_SDP_INIT) && - (pvt->umc[i].ecc_ctrl & BIT(7))) { - pvt->ecc_sym_sz = 8; - break; + if (pvt->umc[i].sdp_ctrl & UMC_SDP_INIT) { + if (pvt->umc[i].ecc_ctrl & BIT(9)) { + pvt->ecc_sym_sz = 16; + return; + } else if (pvt->umc[i].ecc_ctrl & BIT(7)) { + pvt->ecc_sym_sz = 8; + return; + } } } - - return; - } - - if (pvt->fam >= 0x10) { + } else if (pvt->fam >= 0x10) { u32 tmp;
amd64_read_pci_cfg(pvt->F3, EXT_NB_MCA_CFG, &tmp); diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h index df269383f47f..4dce6a2ac75f 100644 --- a/drivers/edac/amd64_edac.h +++ b/drivers/edac/amd64_edac.h @@ -365,7 +365,7 @@ struct amd64_pvt { u32 dct_sel_hi; /* DRAM Controller Select High */ u32 online_spare; /* On-Line spare Reg */
- /* x4 or x8 syndromes in use */ + /* x4, x8, or x16 syndromes in use */ u8 ecc_sym_sz;
/* place to store error injection parameters prior to issue */
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.2-rc1
commit fc00c6a416381010c4a721a4142ddd0260d68f20
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
AMD systems may support chip select interleaving. However, on Family 17h+ this was not taken into account when printing the chip select sizes.

Add support to detect whether chip selects are interleaved on Family 17h+, and adjust the reported sizes accordingly.
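The check amounts to comparing the mask against a contiguous mask spanning its outermost set bits. A standalone sketch, using GCC builtins in place of the kernel's fls()/ffs() helpers and assuming a non-zero mask:

  #include <stdbool.h>
  #include <stdint.h>

  static bool cs_interleaved(uint32_t mask)
  {
  	int msb = 31 - __builtin_clz(mask);	/* fls(mask) - 1 */
  	int lsb = __builtin_ctz(mask);		/* ffs(mask) - 1 */
  	/* GENMASK(msb, lsb): contiguous ones from lsb up to msb */
  	uint32_t test_mask = (uint32_t)(((1ull << (msb + 1)) - 1) &
  					~((1ull << lsb) - 1));

  	/* Any hole in the mask means the chip selects are interleaved. */
  	return mask != test_mask;
  }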
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: linux-edac <linux-edac@vger.kernel.org>
Link: https://lkml.kernel.org/r/20190228153558.127292-6-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index c4d634225969..c6ae9d36a2cf 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -780,6 +780,22 @@ static void debug_dump_dramcfg_low(struct amd64_pvt *pvt, u32 dclr, int chan) (dclr & BIT(15)) ? "yes" : "no"); }
+/* + * The Address Mask should be a contiguous set of bits in the non-interleaved + * case. So to check for CS interleaving, find the most- and least-significant + * bits of the mask, generate a contiguous bitmask, and compare the two. + */ +static bool f17_cs_interleaved(struct amd64_pvt *pvt, u8 ctrl, int cs) +{ + u32 mask = pvt->csels[ctrl].csmasks[cs >> 1]; + u32 msb = fls(mask) - 1, lsb = ffs(mask) - 1; + u32 test_mask = GENMASK(msb, lsb); + + edac_dbg(1, "mask=0x%08x test_mask=0x%08x\n", mask, test_mask); + + return mask ^ test_mask; +} + static void debug_display_dimm_sizes_df(struct amd64_pvt *pvt, u8 ctrl) { int dimm, size0, size1, cs0, cs1; @@ -796,8 +812,19 @@ static void debug_display_dimm_sizes_df(struct amd64_pvt *pvt, u8 ctrl) size1 = 0; cs1 = dimm * 2 + 1;
- if (csrow_enabled(cs1, ctrl, pvt)) - size1 = pvt->ops->dbam_to_cs(pvt, ctrl, 0, cs1); + if (csrow_enabled(cs1, ctrl, pvt)) { + /* + * CS interleaving is only supported if both CSes have + * the same amount of memory. Because they are + * interleaved, it will look like both CSes have the + * full amount of memory. Save the size for both as + * half the amount we found on CS0, if interleaved. + */ + if (f17_cs_interleaved(pvt, ctrl, cs1)) + size1 = size0 = (size0 >> 1); + else + size1 = pvt->ops->dbam_to_cs(pvt, ctrl, 0, cs1); + }
amd64_info(EDAC_MC ": %d: %5dMB %d: %5dMB\n", cs0, size0,
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.4-rc1
commit e53a3b267fb0a79db9ca1f1e08b97889b22013e6
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Chip Select memory size reporting on AMD Family 17h was recently fixed in order to account for interleaving. However, the current method is not robust.
The Chip Select Address Mask can be used to find the memory size. There are a couple of cases.
1) For single-rank and dual-rank non-interleaved, use the address mask plus 1 as the size.
2) For dual-rank interleaved, do #1 but "de-interleave" the address mask first.
Always "de-interleave" the address mask in order to simplify the code flow. Bit mask manipulation is necessary to check for interleaving, so just go ahead and do the de-interleaving. In the non-interleaved case, the original and de-interleaved address masks will be the same.
To de-interleave the mask, count the number of zero bits in the middle of the mask and swap them with the most significant bits.
For example, Original=0xFFFF9FE, De-interleaved=0x3FFFFFE
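A standalone sketch of the de-interleaving step, checked against the example above (GCC builtins stand in for the kernel's fls() and hweight_long() helpers):

  #include <stdint.h>
  #include <stdio.h>

  static uint32_t deinterleave_mask(uint32_t addr_mask_orig)
  {
  	int msb = 31 - __builtin_clz(addr_mask_orig);	/* fls() - 1 */
  	int weight = __builtin_popcount(addr_mask_orig);
  	/* BIT(0) is always 0, so a full mask has 'msb' set bits. */
  	int num_zero_bits = msb - weight;

  	/* Take the zero bits off the top: GENMASK(msb - num_zero_bits, 1) */
  	return (uint32_t)((1ull << (msb - num_zero_bits + 1)) - 1) & ~1u;
  }

  int main(void)
  {
  	/* Original=0xFFFF9FE -> De-interleaved=0x3FFFFFE */
  	printf("0x%X\n", deinterleave_mask(0xFFFF9FE));
  	return 0;
  }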
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20190821235938.118710-5-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 114 +++++++++++++++++++++++---------
 1 file changed, 70 insertions(+), 44 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index c6ae9d36a2cf..4a7b6d5d5329 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -780,51 +780,39 @@ static void debug_dump_dramcfg_low(struct amd64_pvt *pvt, u32 dclr, int chan) (dclr & BIT(15)) ? "yes" : "no"); }
-/* - * The Address Mask should be a contiguous set of bits in the non-interleaved - * case. So to check for CS interleaving, find the most- and least-significant - * bits of the mask, generate a contiguous bitmask, and compare the two. - */ -static bool f17_cs_interleaved(struct amd64_pvt *pvt, u8 ctrl, int cs) +#define CS_EVEN_PRIMARY BIT(0) +#define CS_ODD_PRIMARY BIT(1) + +#define CS_EVEN CS_EVEN_PRIMARY +#define CS_ODD CS_ODD_PRIMARY + +static int f17_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt) { - u32 mask = pvt->csels[ctrl].csmasks[cs >> 1]; - u32 msb = fls(mask) - 1, lsb = ffs(mask) - 1; - u32 test_mask = GENMASK(msb, lsb); + int cs_mode = 0;
- edac_dbg(1, "mask=0x%08x test_mask=0x%08x\n", mask, test_mask); + if (csrow_enabled(2 * dimm, ctrl, pvt)) + cs_mode |= CS_EVEN_PRIMARY;
- return mask ^ test_mask; + if (csrow_enabled(2 * dimm + 1, ctrl, pvt)) + cs_mode |= CS_ODD_PRIMARY; + + return cs_mode; }
static void debug_display_dimm_sizes_df(struct amd64_pvt *pvt, u8 ctrl) { - int dimm, size0, size1, cs0, cs1; + int dimm, size0, size1, cs0, cs1, cs_mode;
edac_printk(KERN_DEBUG, EDAC_MC, "UMC%d chip selects:\n", ctrl);
for (dimm = 0; dimm < 2; dimm++) { - size0 = 0; cs0 = dimm * 2; - - if (csrow_enabled(cs0, ctrl, pvt)) - size0 = pvt->ops->dbam_to_cs(pvt, ctrl, 0, cs0); - - size1 = 0; cs1 = dimm * 2 + 1;
- if (csrow_enabled(cs1, ctrl, pvt)) { - /* - * CS interleaving is only supported if both CSes have - * the same amount of memory. Because they are - * interleaved, it will look like both CSes have the - * full amount of memory. Save the size for both as - * half the amount we found on CS0, if interleaved. - */ - if (f17_cs_interleaved(pvt, ctrl, cs1)) - size1 = size0 = (size0 >> 1); - else - size1 = pvt->ops->dbam_to_cs(pvt, ctrl, 0, cs1); - } + cs_mode = f17_get_cs_mode(dimm, ctrl, pvt); + + size0 = pvt->ops->dbam_to_cs(pvt, ctrl, cs_mode, cs0); + size1 = pvt->ops->dbam_to_cs(pvt, ctrl, cs_mode, cs1);
amd64_info(EDAC_MC ": %d: %5dMB %d: %5dMB\n", cs0, size0, @@ -1561,18 +1549,54 @@ static int f16_dbam_to_chip_select(struct amd64_pvt *pvt, u8 dct, return ddr3_cs_size(cs_mode, false); }
-static int f17_base_addr_to_cs_size(struct amd64_pvt *pvt, u8 umc, +static int f17_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc, unsigned int cs_mode, int csrow_nr) { - u32 base_addr = pvt->csels[umc].csbases[csrow_nr]; + u32 addr_mask_orig, addr_mask_deinterleaved; + u32 msb, weight, num_zero_bits; + int dimm, size = 0;
- /* Each mask is used for every two base addresses. */ - u32 addr_mask = pvt->csels[umc].csmasks[csrow_nr >> 1]; + /* No Chip Selects are enabled. */ + if (!cs_mode) + return size;
- /* Register [31:1] = Address [39:9]. Size is in kBs here. */ - u32 size = ((addr_mask >> 1) - (base_addr >> 1) + 1) >> 1; + /* Requested size of an even CS but none are enabled. */ + if (!(cs_mode & CS_EVEN) && !(csrow_nr & 1)) + return size;
- edac_dbg(1, "BaseAddr: 0x%x, AddrMask: 0x%x\n", base_addr, addr_mask); + /* Requested size of an odd CS but none are enabled. */ + if (!(cs_mode & CS_ODD) && (csrow_nr & 1)) + return size; + + /* + * There is one mask per DIMM, and two Chip Selects per DIMM. + * CS0 and CS1 -> DIMM0 + * CS2 and CS3 -> DIMM1 + */ + dimm = csrow_nr >> 1; + + addr_mask_orig = pvt->csels[umc].csmasks[dimm]; + + /* + * The number of zero bits in the mask is equal to the number of bits + * in a full mask minus the number of bits in the current mask. + * + * The MSB is the number of bits in the full mask because BIT[0] is + * always 0. + */ + msb = fls(addr_mask_orig) - 1; + weight = hweight_long(addr_mask_orig); + num_zero_bits = msb - weight; + + /* Take the number of zero bits off from the top of the mask. */ + addr_mask_deinterleaved = GENMASK_ULL(msb - num_zero_bits, 1); + + edac_dbg(1, "CS%d DIMM%d AddrMasks:\n", csrow_nr, dimm); + edac_dbg(1, " Original AddrMask: 0x%x\n", addr_mask_orig); + edac_dbg(1, " Deinterleaved AddrMask: 0x%x\n", addr_mask_deinterleaved); + + /* Register [31:1] = Address [39:9]. Size is in kBs here. */ + size = (addr_mask_deinterleaved >> 2) + 1;
/* Return size in MBs. */ return size >> 10; @@ -2237,7 +2261,7 @@ static struct amd64_family_type family_types[] = { .f6_id = PCI_DEVICE_ID_AMD_17H_DF_F6, .ops = { .early_channel_count = f17_early_channel_count, - .dbam_to_cs = f17_base_addr_to_cs_size, + .dbam_to_cs = f17_addr_mask_to_cs_size, } }, [F17_M10H_CPUS] = { @@ -2246,7 +2270,7 @@ static struct amd64_family_type family_types[] = { .f6_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F6, .ops = { .early_channel_count = f17_early_channel_count, - .dbam_to_cs = f17_base_addr_to_cs_size, + .dbam_to_cs = f17_addr_mask_to_cs_size, } }, [F17_M30H_CPUS] = { @@ -2255,7 +2279,7 @@ static struct amd64_family_type family_types[] = { .f6_id = PCI_DEVICE_ID_AMD_17H_M30H_DF_F6, .ops = { .early_channel_count = f17_early_channel_count, - .dbam_to_cs = f17_base_addr_to_cs_size, + .dbam_to_cs = f17_addr_mask_to_cs_size, } }, }; @@ -2814,10 +2838,12 @@ static u32 get_csrow_nr_pages(struct amd64_pvt *pvt, u8 dct, int csrow_nr_orig) int csrow_nr = csrow_nr_orig; u32 cs_mode, nr_pages;
- if (!pvt->umc) + if (!pvt->umc) { csrow_nr >>= 1; - - cs_mode = DBAM_DIMM(csrow_nr, dbam); + cs_mode = DBAM_DIMM(csrow_nr, dbam); + } else { + cs_mode = f17_get_cs_mode(csrow_nr >> 1, dct, pvt); + }
nr_pages = pvt->ops->dbam_to_cs(pvt, dct, cs_mode, csrow_nr); nr_pages <<= 20 - PAGE_SHIFT;
From: Isaac Vaughn <isaac.vaughn@Knights.ucf.edu>

mainline inclusion
from mainline-v5.4-rc1
commit 3e443eb353eda6f4b4796e07f2599683fa752f1d
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Add the new Family 17h Model 70h PCI IDs (device 18h functions 0 and 6) to the AMD64 EDAC module.
[ bp: s/f17_base_addr_to_cs_size/f17_addr_mask_to_cs_size/g ]
Signed-off-by: Isaac Vaughn <isaac.vaughn@knights.ucf.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: James Morse <james.morse@arm.com>
Cc: linux-edac@vger.kernel.org
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Robert Richter <rrichter@marvell.com>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20190906192131.8ced0ca112146f32d82b6cae@knights.uc...
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 13 +++++++++++++
 drivers/edac/amd64_edac.h |  3 +++
 2 files changed, 16 insertions(+)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index 4a7b6d5d5329..b62a3d17aa18 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -2282,6 +2282,15 @@ static struct amd64_family_type family_types[] = { .dbam_to_cs = f17_addr_mask_to_cs_size, } }, + [F17_M70H_CPUS] = { + .ctl_name = "F17h_M70h", + .f0_id = PCI_DEVICE_ID_AMD_17H_M70H_DF_F0, + .f6_id = PCI_DEVICE_ID_AMD_17H_M70H_DF_F6, + .ops = { + .early_channel_count = f17_early_channel_count, + .dbam_to_cs = f17_addr_mask_to_cs_size, + } + }, };
/* @@ -3320,6 +3329,10 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt) fam_type = &family_types[F17_M30H_CPUS]; pvt->ops = &family_types[F17_M30H_CPUS].ops; break; + } else if (pvt->model >= 0x70 && pvt->model <= 0x7f) { + fam_type = &family_types[F17_M70H_CPUS]; + pvt->ops = &family_types[F17_M70H_CPUS].ops; + break; } /* fall through */ case 0x18: diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h index 4dce6a2ac75f..d8c79c9465df 100644 --- a/drivers/edac/amd64_edac.h +++ b/drivers/edac/amd64_edac.h @@ -120,6 +120,8 @@ #define PCI_DEVICE_ID_AMD_17H_M10H_DF_F6 0x15ee #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F0 0x1490 #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F6 0x1496 +#define PCI_DEVICE_ID_AMD_17H_M70H_DF_F0 0x1440 +#define PCI_DEVICE_ID_AMD_17H_M70H_DF_F6 0x1446
/* * Function 1 - Address Map @@ -286,6 +288,7 @@ enum amd_families { F17_CPUS, F17_M10H_CPUS, F17_M30H_CPUS, + F17_M70H_CPUS, NUM_FAMILIES, };
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.4-rc1
commit 7574729e91468d568cc198de438feb35ef04f41a
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
AMD Family 17h systems have a set of secondary Chip Select Base Addresses and Address Masks. These do not represent unique Chip Selects; rather, they are used in conjunction with the primary Chip Select registers in certain cases.
Cache these secondary Chip Select registers for future use.
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20190821235938.118710-7-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 23 ++++++++++++++++++++---
 drivers/edac/amd64_edac.h |  4 ++++
 2 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index b62a3d17aa18..a5dda6357d99 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -938,34 +938,51 @@ static void prep_chip_selects(struct amd64_pvt *pvt)
static void read_umc_base_mask(struct amd64_pvt *pvt) { - u32 umc_base_reg, umc_mask_reg; - u32 base_reg, mask_reg; - u32 *base, *mask; + u32 umc_base_reg, umc_base_reg_sec; + u32 umc_mask_reg, umc_mask_reg_sec; + u32 base_reg, base_reg_sec; + u32 mask_reg, mask_reg_sec; + u32 *base, *base_sec; + u32 *mask, *mask_sec; int cs, umc;
for_each_umc(umc) { umc_base_reg = get_umc_base(umc) + UMCCH_BASE_ADDR; + umc_base_reg_sec = get_umc_base(umc) + UMCCH_BASE_ADDR_SEC;
for_each_chip_select(cs, umc, pvt) { base = &pvt->csels[umc].csbases[cs]; + base_sec = &pvt->csels[umc].csbases_sec[cs];
base_reg = umc_base_reg + (cs * 4); + base_reg_sec = umc_base_reg_sec + (cs * 4);
if (!amd_smn_read(pvt->mc_node_id, base_reg, base)) edac_dbg(0, " DCSB%d[%d]=0x%08x reg: 0x%x\n", umc, cs, *base, base_reg); + + if (!amd_smn_read(pvt->mc_node_id, base_reg_sec, base_sec)) + edac_dbg(0, " DCSB_SEC%d[%d]=0x%08x reg: 0x%x\n", + umc, cs, *base_sec, base_reg_sec); }
umc_mask_reg = get_umc_base(umc) + UMCCH_ADDR_MASK; + umc_mask_reg_sec = get_umc_base(umc) + UMCCH_ADDR_MASK_SEC;
for_each_chip_select_mask(cs, umc, pvt) { mask = &pvt->csels[umc].csmasks[cs]; + mask_sec = &pvt->csels[umc].csmasks_sec[cs];
mask_reg = umc_mask_reg + (cs * 4); + mask_reg_sec = umc_mask_reg_sec + (cs * 4);
if (!amd_smn_read(pvt->mc_node_id, mask_reg, mask)) edac_dbg(0, " DCSM%d[%d]=0x%08x reg: 0x%x\n", umc, cs, *mask, mask_reg); + + if (!amd_smn_read(pvt->mc_node_id, mask_reg_sec, mask_sec)) + edac_dbg(0, " DCSM_SEC%d[%d]=0x%08x reg: 0x%x\n", + umc, cs, *mask_sec, mask_reg_sec); } } } diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h index d8c79c9465df..338d78404ed8 100644 --- a/drivers/edac/amd64_edac.h +++ b/drivers/edac/amd64_edac.h @@ -261,7 +261,9 @@
/* UMC CH register offsets */ #define UMCCH_BASE_ADDR 0x0 +#define UMCCH_BASE_ADDR_SEC 0x10 #define UMCCH_ADDR_MASK 0x20 +#define UMCCH_ADDR_MASK_SEC 0x28 #define UMCCH_ADDR_CFG 0x30 #define UMCCH_DIMM_CFG 0x80 #define UMCCH_UMC_CFG 0x100 @@ -315,9 +317,11 @@ struct dram_range { /* A DCT chip selects collection */ struct chip_select { u32 csbases[NUM_CHIPSELECTS]; + u32 csbases_sec[NUM_CHIPSELECTS]; u8 b_cnt;
u32 csmasks[NUM_CHIPSELECTS]; + u32 csmasks_sec[NUM_CHIPSELECTS]; u8 m_cnt; };
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.4-rc1
commit 81f5090db843be897414418c24fe472fa6e082b6
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Future AMD systems will support asymmetric dual-rank DIMMs. These are DIMMs where the ranks are of different sizes.
The even rank will use the Primary Even Chip Select registers and the odd rank will use the Secondary Odd Chip Select registers.
Recognize if a Secondary Odd Chip Select is being used. Use the Secondary Odd Address Mask when calculating the chip select size.
[ bp: move csrow_sec_enabled() to the header, fix CS_ODD define and tone-down the capitalized words spelling. ]
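A standalone restatement of the mode classification (flag values as in the diff below; the enabled[]/sec_enabled[] arrays are hypothetical stand-ins for the driver's csrow_enabled()/csrow_sec_enabled() macros):

  #include <stdbool.h>

  #define CS_EVEN_PRIMARY	(1 << 0)
  #define CS_ODD_PRIMARY	(1 << 1)
  #define CS_ODD_SECONDARY	(1 << 3)

  static int get_cs_mode(int dimm, const bool *enabled, const bool *sec_enabled)
  {
  	int cs_mode = 0;

  	if (enabled[2 * dimm])
  		cs_mode |= CS_EVEN_PRIMARY;

  	if (enabled[2 * dimm + 1])
  		cs_mode |= CS_ODD_PRIMARY;

  	/* Asymmetric dual-rank DIMM: the odd rank uses the secondary CS. */
  	if (sec_enabled[2 * dimm + 1])
  		cs_mode |= CS_ODD_SECONDARY;

  	return cs_mode;
  }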
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20190821235938.118710-8-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 16 +++++++++++++---
 drivers/edac/amd64_edac.h |  3 ++-
 2 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index a5dda6357d99..a128baec5faf 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -782,9 +782,11 @@ static void debug_dump_dramcfg_low(struct amd64_pvt *pvt, u32 dclr, int chan)
#define CS_EVEN_PRIMARY BIT(0) #define CS_ODD_PRIMARY BIT(1) +#define CS_EVEN_SECONDARY BIT(2) +#define CS_ODD_SECONDARY BIT(3)
-#define CS_EVEN CS_EVEN_PRIMARY -#define CS_ODD CS_ODD_PRIMARY +#define CS_EVEN (CS_EVEN_PRIMARY | CS_EVEN_SECONDARY) +#define CS_ODD (CS_ODD_PRIMARY | CS_ODD_SECONDARY)
static int f17_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt) { @@ -796,6 +798,10 @@ static int f17_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt) if (csrow_enabled(2 * dimm + 1, ctrl, pvt)) cs_mode |= CS_ODD_PRIMARY;
+ /* Asymmetric dual-rank DIMM support. */ + if (csrow_sec_enabled(2 * dimm + 1, ctrl, pvt)) + cs_mode |= CS_ODD_SECONDARY; + return cs_mode; }
@@ -1592,7 +1598,11 @@ static int f17_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc, */ dimm = csrow_nr >> 1;
- addr_mask_orig = pvt->csels[umc].csmasks[dimm]; + /* Asymmetric dual-rank DIMM support. */ + if ((csrow_nr & 1) && (cs_mode & CS_ODD_SECONDARY)) + addr_mask_orig = pvt->csels[umc].csmasks_sec[dimm]; + else + addr_mask_orig = pvt->csels[umc].csmasks[dimm];
/* * The number of zero bits in the mask is equal to the number of bits diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h index 338d78404ed8..8c3cda81e619 100644 --- a/drivers/edac/amd64_edac.h +++ b/drivers/edac/amd64_edac.h @@ -171,7 +171,8 @@ #define DCSM0 0x60 #define DCSM1 0x160
-#define csrow_enabled(i, dct, pvt) ((pvt)->csels[(dct)].csbases[(i)] & DCSB_CS_ENABLE) +#define csrow_enabled(i, dct, pvt) ((pvt)->csels[(dct)].csbases[(i)] & DCSB_CS_ENABLE) +#define csrow_sec_enabled(i, dct, pvt) ((pvt)->csels[(dct)].csbases_sec[(i)] & DCSB_CS_ENABLE)
#define DRAM_CONTROL 0x78
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.5-rc1
commit 466503d6b1b33be46ab87c6090f0ade6c6011cbc
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
The following commit introduced a warning that is emitted for error reports with a zero grain value.
3724ace582d9 ("EDAC/mc: Fix grain_bits calculation")
The amd64_edac_mod module does not provide a value, so the warning will be given on the first reported memory error.
Set the grain per DIMM to cacheline size (64 bytes). This is the current recommendation.
Fixes: 3724ace582d9 ("EDAC/mc: Fix grain_bits calculation")
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Robert Richter <rrichter@marvell.com>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20191022203448.13962-7-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index a128baec5faf..62dd8032f2d8 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -2928,6 +2928,7 @@ static int init_csrows_df(struct mem_ctl_info *mci) dimm->mtype = pvt->dram_type; dimm->edac_mode = edac_mode; dimm->dtype = dev_type; + dimm->grain = 64; } }
@@ -3004,6 +3005,7 @@ static int init_csrows(struct mem_ctl_info *mci) dimm = csrow->channels[j]->dimm; dimm->mtype = pvt->dram_type; dimm->edac_mode = edac_mode; + dimm->grain = 64; } }
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.5-rc1
commit 38ddd4d1574530e1447b6ad91d27225d0f7662fb
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
The struct amd64_family_type doesn't change between multiple nodes and instances of the module, so make it global.
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Robert Richter <rrichter@marvell.com>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20191106012448.243970-2-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index 62dd8032f2d8..b195962cc955 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -15,6 +15,8 @@ module_param(ecc_enable_override, int, 0644);
static struct msr __percpu *msrs;
+static struct amd64_family_type *fam_type; + /* Per-node stuff */ static struct ecc_settings **ecc_stngs;
@@ -3272,8 +3274,7 @@ f17h_determine_edac_ctl_cap(struct mem_ctl_info *mci, struct amd64_pvt *pvt) } }
-static void setup_mci_misc_attrs(struct mem_ctl_info *mci, - struct amd64_family_type *fam) +static void setup_mci_misc_attrs(struct mem_ctl_info *mci) { struct amd64_pvt *pvt = mci->pvt_info;
@@ -3292,7 +3293,7 @@ static void setup_mci_misc_attrs(struct mem_ctl_info *mci,
mci->edac_cap = determine_edac_cap(pvt); mci->mod_name = EDAC_MOD_STR; - mci->ctl_name = fam->ctl_name; + mci->ctl_name = fam_type->ctl_name; mci->dev_name = pci_name(pvt->F3); mci->ctl_page_to_phys = NULL;
@@ -3306,8 +3307,6 @@ static void setup_mci_misc_attrs(struct mem_ctl_info *mci, */ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt) { - struct amd64_family_type *fam_type = NULL; - pvt->ext_model = boot_cpu_data.x86_model >> 4; pvt->stepping = boot_cpu_data.x86_stepping; pvt->model = boot_cpu_data.x86_model; @@ -3414,7 +3413,6 @@ static void compute_num_umcs(void) static int init_one_instance(unsigned int nid) { struct pci_dev *F3 = node_to_amd_nb(nid)->misc; - struct amd64_family_type *fam_type = NULL; struct mem_ctl_info *mci = NULL; struct edac_mc_layer layers[2]; struct amd64_pvt *pvt = NULL; @@ -3491,7 +3489,7 @@ static int init_one_instance(unsigned int nid) mci->pvt_info = pvt; mci->pdev = &pvt->F3->dev;
- setup_mci_misc_attrs(mci, fam_type); + setup_mci_misc_attrs(mci);
if (init_csrows(mci)) mci->edac_cap = EDAC_FLAG_NONE;
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.5-rc1
commit 80355a3b2db9d0b713af5169e2cdd7f8fbfdad82
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Split out gathering hardware information from init_one_instance() into a separate function hw_info_get(). This is necessary so that the information can be cached earlier and used to check if memory is populated and if ECC is enabled on a node.
Also, define a function hw_info_put() to back out changes made in hw_info_get().
Check for an allocated PCI device (Function 0 for Family 17h or Function 1 for pre-Family 17h) before freeing, since hw_info_put() may be called before PCI siblings are reserved.
Drop the family check when freeing pvt->umc. This will be NULL on pre-Family 17h systems. However, kfree() is safe and will check for a NULL pointer before freeing.
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Robert Richter <rrichter@marvell.com>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20191106012448.243970-3-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 101 +++++++++++++++++++-------------
 1 file changed, 51 insertions(+), 50 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index b195962cc955..82157e2a0f9c 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -3410,34 +3410,15 @@ static void compute_num_umcs(void) edac_dbg(1, "Number of UMCs: %x", num_umcs); }
-static int init_one_instance(unsigned int nid) +static int hw_info_get(struct amd64_pvt *pvt) { - struct pci_dev *F3 = node_to_amd_nb(nid)->misc; - struct mem_ctl_info *mci = NULL; - struct edac_mc_layer layers[2]; - struct amd64_pvt *pvt = NULL; u16 pci_id1, pci_id2; - int err = 0, ret; - - ret = -ENOMEM; - pvt = kzalloc(sizeof(struct amd64_pvt), GFP_KERNEL); - if (!pvt) - goto err_ret; - - pvt->mc_node_id = nid; - pvt->F3 = F3; - - ret = -EINVAL; - fam_type = per_family_init(pvt); - if (!fam_type) - goto err_free; + int ret = -EINVAL;
if (pvt->fam >= 0x17) { pvt->umc = kcalloc(num_umcs, sizeof(struct amd64_umc), GFP_KERNEL); - if (!pvt->umc) { - ret = -ENOMEM; - goto err_free; - } + if (!pvt->umc) + return -ENOMEM;
pci_id1 = fam_type->f0_id; pci_id2 = fam_type->f6_id; @@ -3446,21 +3427,37 @@ static int init_one_instance(unsigned int nid) pci_id2 = fam_type->f2_id; }
- err = reserve_mc_sibling_devs(pvt, pci_id1, pci_id2); - if (err) - goto err_post_init; + ret = reserve_mc_sibling_devs(pvt, pci_id1, pci_id2); + if (ret) + return ret;
read_mc_regs(pvt);
+ return 0; +} + +static void hw_info_put(struct amd64_pvt *pvt) +{ + if (pvt->F0 || pvt->F1) + free_mc_sibling_devs(pvt); + + kfree(pvt->umc); +} + +static int init_one_instance(struct amd64_pvt *pvt) +{ + struct mem_ctl_info *mci = NULL; + struct edac_mc_layer layers[2]; + int ret = -EINVAL; + /* * We need to determine how many memory channels there are. Then use * that information for calculating the size of the dynamic instance * tables in the 'mci' structure. */ - ret = -EINVAL; pvt->channel_count = pvt->ops->early_channel_count(pvt); if (pvt->channel_count < 0) - goto err_siblings; + return ret;
ret = -ENOMEM; layers[0].type = EDAC_MC_LAYER_CHIP_SELECT; @@ -3482,9 +3479,9 @@ static int init_one_instance(unsigned int nid) layers[1].size = 2; layers[1].is_virt_csrow = false;
- mci = edac_mc_alloc(nid, ARRAY_SIZE(layers), layers, 0); + mci = edac_mc_alloc(pvt->mc_node_id, ARRAY_SIZE(layers), layers, 0); if (!mci) - goto err_siblings; + return ret;
mci->pvt_info = pvt; mci->pdev = &pvt->F3->dev; @@ -3497,31 +3494,17 @@ static int init_one_instance(unsigned int nid) ret = -ENODEV; if (edac_mc_add_mc_with_groups(mci, amd64_edac_attr_groups)) { edac_dbg(1, "failed edac_mc_add_mc()\n"); - goto err_add_mc; + edac_mc_free(mci); + return ret; }
return 0; - -err_add_mc: - edac_mc_free(mci); - -err_siblings: - free_mc_sibling_devs(pvt); - -err_post_init: - if (pvt->fam >= 0x17) - kfree(pvt->umc); - -err_free: - kfree(pvt); - -err_ret: - return ret; }
static int probe_one_instance(unsigned int nid) { struct pci_dev *F3 = node_to_amd_nb(nid)->misc; + struct amd64_pvt *pvt = NULL; struct ecc_settings *s; int ret;
@@ -3532,6 +3515,21 @@ static int probe_one_instance(unsigned int nid)
ecc_stngs[nid] = s;
+ pvt = kzalloc(sizeof(struct amd64_pvt), GFP_KERNEL); + if (!pvt) + goto err_settings; + + pvt->mc_node_id = nid; + pvt->F3 = F3; + + fam_type = per_family_init(pvt); + if (!fam_type) + goto err_enable; + + ret = hw_info_get(pvt); + if (ret < 0) + goto err_enable; + if (!ecc_enabled(F3, nid)) { ret = 0;
@@ -3548,7 +3546,7 @@ static int probe_one_instance(unsigned int nid) goto err_enable; }
- ret = init_one_instance(nid); + ret = init_one_instance(pvt); if (ret < 0) { amd64_err("Error probing instance: %d\n", nid);
@@ -3561,6 +3559,10 @@ static int probe_one_instance(unsigned int nid) return ret;
err_enable: + hw_info_put(pvt); + kfree(pvt); + +err_settings: kfree(s); ecc_stngs[nid] = NULL;
@@ -3587,14 +3589,13 @@ static void remove_one_instance(unsigned int nid)
restore_ecc_error_reporting(s, nid, F3);
- free_mc_sibling_devs(pvt); - kfree(ecc_stngs[nid]); ecc_stngs[nid] = NULL;
/* Free the EDAC CORE resources */ mci->pvt_info = NULL;
+ hw_info_put(pvt); kfree(pvt); edac_mc_free(mci); }
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.5-rc1
commit 5e4c55276ae8758f5789722b384bb2ab3de3a24f
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
The maximum number of memory controllers is fixed within a family/model group. In most cases, this has been fixed at 2, but some systems may have up to 8.
The struct amd64_family_type already contains family/model-specific information, and this can be used rather than adding model checks to various functions.
Create a new field in struct amd64_family_type for max_mcs. Set this when setting other family type information, and use this when needing the maximum number of memory controllers possible for a system.
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "linux-edac@vger.kernel.org" <linux-edac@vger.kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Robert Richter <rrichter@marvell.com>
Cc: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20191106012448.243970-4-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 44 +++++++++++++--------------------
 drivers/edac/amd64_edac.h |  2 ++
 2 files changed, 16 insertions(+), 30 deletions(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index 82157e2a0f9c..0abbd41266d1 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -20,9 +20,6 @@ static struct amd64_family_type *fam_type; /* Per-node stuff */ static struct ecc_settings **ecc_stngs;
-/* Number of Unified Memory Controllers */ -static u8 num_umcs; - /* * Valid scrub rates for the K8 hardware memory scrubber. We map the scrubbing * bandwidth to a valid bit pattern. The 'set' operation finds the 'matching- @@ -448,7 +445,7 @@ static void get_cs_base_and_mask(struct amd64_pvt *pvt, int csrow, u8 dct, for (i = 0; i < pvt->csels[dct].m_cnt; i++)
#define for_each_umc(i) \ - for (i = 0; i < num_umcs; i++) + for (i = 0; i < fam_type->max_mcs; i++)
/* * @input_addr is an InputAddr associated with the node given by mci. Return the @@ -2218,6 +2215,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "K8", .f1_id = PCI_DEVICE_ID_AMD_K8_NB_ADDRMAP, .f2_id = PCI_DEVICE_ID_AMD_K8_NB_MEMCTL, + .max_mcs = 2, .ops = { .early_channel_count = k8_early_channel_count, .map_sysaddr_to_csrow = k8_map_sysaddr_to_csrow, @@ -2228,6 +2226,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "F10h", .f1_id = PCI_DEVICE_ID_AMD_10H_NB_MAP, .f2_id = PCI_DEVICE_ID_AMD_10H_NB_DRAM, + .max_mcs = 2, .ops = { .early_channel_count = f1x_early_channel_count, .map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow, @@ -2238,6 +2237,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "F15h", .f1_id = PCI_DEVICE_ID_AMD_15H_NB_F1, .f2_id = PCI_DEVICE_ID_AMD_15H_NB_F2, + .max_mcs = 2, .ops = { .early_channel_count = f1x_early_channel_count, .map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow, @@ -2248,6 +2248,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "F15h_M30h", .f1_id = PCI_DEVICE_ID_AMD_15H_M30H_NB_F1, .f2_id = PCI_DEVICE_ID_AMD_15H_M30H_NB_F2, + .max_mcs = 2, .ops = { .early_channel_count = f1x_early_channel_count, .map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow, @@ -2258,6 +2259,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "F15h_M60h", .f1_id = PCI_DEVICE_ID_AMD_15H_M60H_NB_F1, .f2_id = PCI_DEVICE_ID_AMD_15H_M60H_NB_F2, + .max_mcs = 2, .ops = { .early_channel_count = f1x_early_channel_count, .map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow, @@ -2268,6 +2270,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "F16h", .f1_id = PCI_DEVICE_ID_AMD_16H_NB_F1, .f2_id = PCI_DEVICE_ID_AMD_16H_NB_F2, + .max_mcs = 2, .ops = { .early_channel_count = f1x_early_channel_count, .map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow, @@ -2278,6 +2281,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "F16h_M30h", .f1_id = PCI_DEVICE_ID_AMD_16H_M30H_NB_F1, .f2_id = PCI_DEVICE_ID_AMD_16H_M30H_NB_F2, + .max_mcs = 2, .ops = { .early_channel_count = f1x_early_channel_count, .map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow, @@ -2288,6 +2292,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "F17h", .f0_id = PCI_DEVICE_ID_AMD_17H_DF_F0, .f6_id = PCI_DEVICE_ID_AMD_17H_DF_F6, + .max_mcs = 2, .ops = { .early_channel_count = f17_early_channel_count, .dbam_to_cs = f17_addr_mask_to_cs_size, @@ -2297,6 +2302,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "F17h_M10h", .f0_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F0, .f6_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F6, + .max_mcs = 2, .ops = { .early_channel_count = f17_early_channel_count, .dbam_to_cs = f17_addr_mask_to_cs_size, @@ -2306,6 +2312,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "F17h_M30h", .f0_id = PCI_DEVICE_ID_AMD_17H_M30H_DF_F0, .f6_id = PCI_DEVICE_ID_AMD_17H_M30H_DF_F6, + .max_mcs = 8, .ops = { .early_channel_count = f17_early_channel_count, .dbam_to_cs = f17_addr_mask_to_cs_size, @@ -2315,6 +2322,7 @@ static struct amd64_family_type family_types[] = { .ctl_name = "F17h_M70h", .f0_id = PCI_DEVICE_ID_AMD_17H_M70H_DF_F0, .f6_id = PCI_DEVICE_ID_AMD_17H_M70H_DF_F6, + .max_mcs = 2, .ops = { .early_channel_count = f17_early_channel_count, .dbam_to_cs = f17_addr_mask_to_cs_size, @@ -3394,29 +3402,13 @@ static const struct attribute_group *amd64_edac_attr_groups[] = { NULL };
-/* Set the number of Unified Memory Controllers in the system. */ -static void compute_num_umcs(void) -{ - u8 model = boot_cpu_data.x86_model; - - if (boot_cpu_data.x86 < 0x17) - return; - - if (model >= 0x30 && model <= 0x3f) - num_umcs = 8; - else - num_umcs = 2; - - edac_dbg(1, "Number of UMCs: %x", num_umcs); -} - static int hw_info_get(struct amd64_pvt *pvt) { u16 pci_id1, pci_id2; int ret = -EINVAL;
if (pvt->fam >= 0x17) { - pvt->umc = kcalloc(num_umcs, sizeof(struct amd64_umc), GFP_KERNEL); + pvt->umc = kcalloc(fam_type->max_mcs, sizeof(struct amd64_umc), GFP_KERNEL); if (!pvt->umc) return -ENOMEM;
@@ -3469,14 +3461,8 @@ static int init_one_instance(struct amd64_pvt *pvt) * Always allocate two channels since we can have setups with DIMMs on * only one channel. Also, this simplifies handling later for the price * of a couple of KBs tops. - * - * On Fam17h+, the number of controllers may be greater than two. So set - * the size equal to the maximum number of UMCs. */ - if (pvt->fam >= 0x17) - layers[1].size = num_umcs; - else - layers[1].size = 2; + layers[1].size = fam_type->max_mcs; layers[1].is_virt_csrow = false;
mci = edac_mc_alloc(pvt->mc_node_id, ARRAY_SIZE(layers), layers, 0); @@ -3661,8 +3647,6 @@ static int __init amd64_edac_init(void) if (!msrs) goto err_free;
- compute_num_umcs(); - for (i = 0; i < amd_nb_num(); i++) { err = probe_one_instance(i); if (err) { diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h index 8c3cda81e619..9be31688110b 100644 --- a/drivers/edac/amd64_edac.h +++ b/drivers/edac/amd64_edac.h @@ -479,6 +479,8 @@ struct low_ops { struct amd64_family_type { const char *ctl_name; u16 f0_id, f1_id, f2_id, f6_id; + /* Maximum number of memory controllers per die/node. */ + u8 max_mcs; struct low_ops ops; };
From: Yazen Ghannam <yazen.ghannam@amd.com>

mainline inclusion
from mainline-v5.6-rc1
commit 2eb61c91c3e2738218e55f2eaf7e78a4435c233d
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Add family ops to support AMD Family 19h systems. Existing Family 17h functions can be used. Also, add Family 19h to the list of families to automatically load the module.
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200110015651.14887-5-Yazen.Ghannam@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 drivers/edac/amd64_edac.c | 17 +++++++++++++++++
 drivers/edac/amd64_edac.h |  3 +++
 2 files changed, 20 insertions(+)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c index 0abbd41266d1..a0d929a1309d 100644 --- a/drivers/edac/amd64_edac.c +++ b/drivers/edac/amd64_edac.c @@ -2328,6 +2328,16 @@ static struct amd64_family_type family_types[] = { .dbam_to_cs = f17_addr_mask_to_cs_size, } }, + [F19_CPUS] = { + .ctl_name = "F19h", + .f0_id = PCI_DEVICE_ID_AMD_19H_DF_F0, + .f6_id = PCI_DEVICE_ID_AMD_19H_DF_F6, + .max_mcs = 8, + .ops = { + .early_channel_count = f17_early_channel_count, + .dbam_to_cs = f17_addr_mask_to_cs_size, + } + }, };
/* @@ -3379,6 +3389,12 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt) family_types[F17_CPUS].ctl_name = "F18h"; break;
+ case 0x19: + fam_type = &family_types[F19_CPUS]; + pvt->ops = &family_types[F19_CPUS].ops; + family_types[F19_CPUS].ctl_name = "F19h"; + break; + default: amd64_err("Unsupported family!\n"); return NULL; @@ -3616,6 +3632,7 @@ static const struct x86_cpu_id amd64_cpuids[] = { { X86_VENDOR_AMD, 0x16, X86_MODEL_ANY, X86_FEATURE_ANY, 0 }, { X86_VENDOR_AMD, 0x17, X86_MODEL_ANY, X86_FEATURE_ANY, 0 }, { X86_VENDOR_HYGON, 0x18, X86_MODEL_ANY, X86_FEATURE_ANY, 0 }, + { X86_VENDOR_AMD, 0x19, X86_MODEL_ANY, X86_FEATURE_ANY, 0 }, { } }; MODULE_DEVICE_TABLE(x86cpu, amd64_cpuids); diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h index 9be31688110b..abbf3c274d74 100644 --- a/drivers/edac/amd64_edac.h +++ b/drivers/edac/amd64_edac.h @@ -122,6 +122,8 @@ #define PCI_DEVICE_ID_AMD_17H_M30H_DF_F6 0x1496 #define PCI_DEVICE_ID_AMD_17H_M70H_DF_F0 0x1440 #define PCI_DEVICE_ID_AMD_17H_M70H_DF_F6 0x1446 +#define PCI_DEVICE_ID_AMD_19H_DF_F0 0x1650 +#define PCI_DEVICE_ID_AMD_19H_DF_F6 0x1656
/* * Function 1 - Address Map @@ -292,6 +294,7 @@ enum amd_families { F17_M10H_CPUS, F17_M30H_CPUS, F17_M70H_CPUS, + F19_CPUS, NUM_FAMILIES, };
From: Kim Phillips <kim.phillips@amd.com>

mainline inclusion
from mainline-v5.7-rc1
commit 4dcc3df82573a946c620dda5fb00e27c7b080105
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
In order to better accommodate the upcoming Family 19h, given the 80-char line limit, move the existing code into a new l3_thread_slice_mask() function.
No functional changes.
[ bp: Touchups. ]
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200313231024.17601-1-kim.phillips@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/events/amd/uncore.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c index b3ab1658a309..353765cab5c5 100644 --- a/arch/x86/events/amd/uncore.c +++ b/arch/x86/events/amd/uncore.c @@ -183,6 +183,20 @@ static void amd_uncore_del(struct perf_event *event, int flags) hwc->idx = -1; }
+/* + * Convert logical CPU number to L3 PMC Config ThreadMask format + */ +static u64 l3_thread_slice_mask(int cpu) +{ + int thread = 2 * (cpu_data(cpu).cpu_core_id % 4); + + if (smp_num_siblings > 1) + thread += cpu_data(cpu).apicid & 1; + + return (1ULL << (AMD64_L3_THREAD_SHIFT + thread) & + AMD64_L3_THREAD_MASK) | AMD64_L3_SLICE_MASK; +} + static int amd_uncore_event_init(struct perf_event *event) { struct amd_uncore *uncore; @@ -217,15 +231,8 @@ static int amd_uncore_event_init(struct perf_event *event) * SliceMask and ThreadMask need to be set for certain L3 events in * Family 17h. For other events, the two fields do not affect the count. */ - if (l3_mask && is_llc_event(event)) { - int thread = 2 * (cpu_data(event->cpu).cpu_core_id % 4); - - if (smp_num_siblings > 1) - thread += cpu_data(event->cpu).apicid & 1; - - hwc->config |= (1ULL << (AMD64_L3_THREAD_SHIFT + thread) & - AMD64_L3_THREAD_MASK) | AMD64_L3_SLICE_MASK; - } + if (l3_mask && is_llc_event(event)) + hwc->config |= l3_thread_slice_mask(event->cpu);
uncore = event_to_amd_uncore(event); if (!uncore)
From: Kim Phillips <kim.phillips@amd.com>

mainline inclusion
from mainline-v5.7-rc1
commit 9689dbbeaea884d19e3085439c6a247ef986b2af
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Convert the l3_thread_slice_mask() function to use the more readable topology_* helper functions, more intuitive variable names like shift and thread_mask, and BIT_ULL().
No functional changes.
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200313231024.17601-2-kim.phillips@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/events/amd/uncore.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c index 353765cab5c5..685b62efaa4d 100644 --- a/arch/x86/events/amd/uncore.c +++ b/arch/x86/events/amd/uncore.c @@ -188,13 +188,16 @@ static void amd_uncore_del(struct perf_event *event, int flags) */ static u64 l3_thread_slice_mask(int cpu) { - int thread = 2 * (cpu_data(cpu).cpu_core_id % 4); + u64 thread_mask, core = topology_core_id(cpu); + unsigned int shift, thread = 0;
- if (smp_num_siblings > 1) - thread += cpu_data(cpu).apicid & 1; + if (topology_smt_supported() && !topology_is_primary_thread(cpu)) + thread = 1;
- return (1ULL << (AMD64_L3_THREAD_SHIFT + thread) & - AMD64_L3_THREAD_MASK) | AMD64_L3_SLICE_MASK; + shift = AMD64_L3_THREAD_SHIFT + 2 * (core % 4) + thread; + thread_mask = BIT_ULL(shift); + + return AMD64_L3_SLICE_MASK | thread_mask; }
static int amd_uncore_event_init(struct perf_event *event)
From: Kim Phillips <kim.phillips@amd.com>

mainline inclusion
from mainline-v5.7-rc1
commit e48667b865480d8bf0f1171a8b474ffc785b9ace
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Family 19h introduces a change in the slice, core and thread specification in its L3 Performance Event Select (ChL3PmcCfg) h/w register. The change is incompatible with Family 17h's version of the register.
Introduce a new path in l3_thread_slice_mask() to do things differently for Family 19h vs. Family 17h, otherwise the new hardware doesn't get programmed correctly.
Instead of a linear core--thread bitmask, Family 19h takes an encoded core number, and a separate thread mask. There are new bits that are set for all cores and all slices, of which only the latter is used, since the driver counts events for all slices on behalf of the specified CPU.
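To make the incompatibility concrete, here is a minimal user-space sketch of the two encodings. The shift and mask values mirror the hunks below; the helper names and the example core/thread values are made up for illustration:

  #include <stdint.h>
  #include <stdio.h>

  #define BIT_ULL(n)        (1ULL << (n))
  #define L3_THREAD_SHIFT   56
  #define L3_SLICE_MASK     (0xFULL << 48)
  #define L3_COREID_SHIFT   42
  #define L3_COREID_MASK    (0x7ULL << L3_COREID_SHIFT)
  #define L3_EN_ALL_SLICES  BIT_ULL(46)

  /* Family 17h: one bit per (core % 4, thread) pair, plus the slice mask. */
  static uint64_t f17h_cfg(uint64_t core, unsigned int thread)
  {
          return L3_SLICE_MASK |
                 BIT_ULL(L3_THREAD_SHIFT + 2 * (core % 4) + thread);
  }

  /* Family 19h: binary-encoded core id plus a separate thread mask bit. */
  static uint64_t f19h_cfg(uint64_t core, unsigned int thread)
  {
          return L3_EN_ALL_SLICES |
                 ((core << L3_COREID_SHIFT) & L3_COREID_MASK) |
                 BIT_ULL(L3_THREAD_SHIFT + thread);
  }

  int main(void)
  {
          printf("F17h core 2, thread 1: %#llx\n",
                 (unsigned long long)f17h_cfg(2, 1));
          printf("F19h core 2, thread 1: %#llx\n",
                 (unsigned long long)f19h_cfg(2, 1));
          return 0;
  }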
Also update amd_uncore_init() to base its L2/NB vs. L3/Data Fabric mode decision on Family 17h or above, not just on Families 17h and 18h: the Family 19h Data Fabric PMC is compatible with the Family 17h DF PMC.
[ bp: Touchups. ]
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200313231024.17601-3-kim.phillips@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/events/amd/uncore.c      | 20 ++++++++++++++------
 arch/x86/include/asm/perf_event.h | 15 +++++++++++++--
 2 files changed, 27 insertions(+), 8 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c index 685b62efaa4d..63461a868fed 100644 --- a/arch/x86/events/amd/uncore.c +++ b/arch/x86/events/amd/uncore.c @@ -194,10 +194,18 @@ static u64 l3_thread_slice_mask(int cpu) if (topology_smt_supported() && !topology_is_primary_thread(cpu)) thread = 1;
- shift = AMD64_L3_THREAD_SHIFT + 2 * (core % 4) + thread; + if (boot_cpu_data.x86 <= 0x18) { + shift = AMD64_L3_THREAD_SHIFT + 2 * (core % 4) + thread; + thread_mask = BIT_ULL(shift); + + return AMD64_L3_SLICE_MASK | thread_mask; + } + + core = (core << AMD64_L3_COREID_SHIFT) & AMD64_L3_COREID_MASK; + shift = AMD64_L3_THREAD_SHIFT + thread; thread_mask = BIT_ULL(shift);
- return AMD64_L3_SLICE_MASK | thread_mask; + return AMD64_L3_EN_ALL_SLICES | core | thread_mask; }
static int amd_uncore_event_init(struct perf_event *event) @@ -231,8 +239,8 @@ static int amd_uncore_event_init(struct perf_event *event) return -EINVAL;
/* - * SliceMask and ThreadMask need to be set for certain L3 events in - * Family 17h. For other events, the two fields do not affect the count. + * SliceMask and ThreadMask need to be set for certain L3 events. + * For other events, the two fields do not affect the count. */ if (l3_mask && is_llc_event(event)) hwc->config |= l3_thread_slice_mask(event->cpu); @@ -539,9 +547,9 @@ static int __init amd_uncore_init(void) if (!boot_cpu_has(X86_FEATURE_TOPOEXT)) return -ENODEV;
- if (boot_cpu_data.x86 == 0x17 || boot_cpu_data.x86 == 0x18) { + if (boot_cpu_data.x86 >= 0x17) { /* - * For F17h or F18h, the Northbridge counters are + * For F17h and above, the Northbridge counters are * repurposed as Data Fabric counters. Also, L3 * counters are supported too. The PMUs are exported * based on family as either L2 or L3 and NB or DF. diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index 27afef6a050a..647c40e4ed23 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -50,11 +50,22 @@
#define AMD64_L3_SLICE_SHIFT 48 #define AMD64_L3_SLICE_MASK \ - ((0xFULL) << AMD64_L3_SLICE_SHIFT) + (0xFULL << AMD64_L3_SLICE_SHIFT) +#define AMD64_L3_SLICEID_MASK \ + (0x7ULL << AMD64_L3_SLICE_SHIFT)
#define AMD64_L3_THREAD_SHIFT 56 #define AMD64_L3_THREAD_MASK \ - ((0xFFULL) << AMD64_L3_THREAD_SHIFT) + (0xFFULL << AMD64_L3_THREAD_SHIFT) +#define AMD64_L3_F19H_THREAD_MASK \ + (0x3ULL << AMD64_L3_THREAD_SHIFT) + +#define AMD64_L3_EN_ALL_CORES BIT_ULL(47) +#define AMD64_L3_EN_ALL_SLICES BIT_ULL(46) + +#define AMD64_L3_COREID_SHIFT 42 +#define AMD64_L3_COREID_MASK \ + (0x7ULL << AMD64_L3_COREID_SHIFT)
#define X86_RAW_EVENT_MASK \ (ARCH_PERFMON_EVENTSEL_EVENT | \
From: Kim Phillips <kim.phillips@amd.com>

mainline inclusion
from mainline-v5.10-rc1
commit 221bfce5ebbdf72ff08b3bf2510ae81058ee568b
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Stephane Eranian found a bug: IBS' current Fetch counter was not being reset when the driver wrote the new count value with the enable bit already set. He also found that adding an MSR write that first disables IBS Fetch makes IBS Fetch reset its current count.
Indeed, the PPR for AMD Family 17h Model 31h B0 55803 Rev 0.54 - Sep 12, 2019 states "The periodic fetch counter is set to IbsFetchCnt [...] when IbsFetchEn is changed from 0 to 1."
Explicitly set IbsFetchEn to 0 and then to 1 when re-enabling IBS Fetch, so the driver properly resets the internal counter to 0 and IBS Fetch starts counting again.
A family 15h machine tested does not have this problem, and the extra wrmsr is also not needed on Family 19h, so only do the extra wrmsr on families 16h through 18h.
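On those families the enable path therefore issues two MSR writes instead of one. A minimal sketch of the sequence, using the MSR and bit names from the kernel headers (cfg stands for the fully programmed fetch control value; the driver keys this off a fetch_count_reset_broken flag, as the hunk below shows):

  	wrmsrl(MSR_AMD64_IBSFETCHCTL, cfg & ~IBS_FETCH_ENABLE);	/* IbsFetchEn = 0 */
  	wrmsrl(MSR_AMD64_IBSFETCHCTL, cfg | IBS_FETCH_ENABLE);	/* 0 -> 1 resets IbsFetchCnt */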
Reported-by: Stephane Eranian <stephane.eranian@google.com>
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
[peterz: optimized]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/events/amd/ibs.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c index 07bf5517d9d8..801c50be8e1d 100644 --- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -89,6 +89,7 @@ struct perf_ibs { u64 max_period; unsigned long offset_mask[1]; int offset_max; + unsigned int fetch_count_reset_broken : 1; struct cpu_perf_ibs __percpu *pcpu;
struct attribute **format_attrs; @@ -375,7 +376,12 @@ perf_ibs_event_update(struct perf_ibs *perf_ibs, struct perf_event *event, static inline void perf_ibs_enable_event(struct perf_ibs *perf_ibs, struct hw_perf_event *hwc, u64 config) { - wrmsrl(hwc->config_base, hwc->config | config | perf_ibs->enable_mask); + u64 tmp = hwc->config | config; + + if (perf_ibs->fetch_count_reset_broken) + wrmsrl(hwc->config_base, tmp & ~perf_ibs->enable_mask); + + wrmsrl(hwc->config_base, tmp | perf_ibs->enable_mask); }
/* @@ -744,6 +750,13 @@ static __init void perf_event_ibs_init(void) { struct attribute **attr = ibs_op_format_attrs;
+ /* + * Some chips fail to reset the fetch count when it is written; instead + * they need a 0-1 transition of IbsFetchEn. + */ + if (boot_cpu_data.x86 >= 0x16 && boot_cpu_data.x86 <= 0x18) + perf_ibs_fetch.fetch_count_reset_broken = 1; + perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
if (ibs_caps & IBS_CAPS_OPCNT) {
From: Kim Phillips <kim.phillips@amd.com>

mainline inclusion
from mainline-v5.10-rc1
commit 36e1be8ada994d509538b3b1d0af8b63c351e729
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Neither IbsBrTarget nor OPDATA4 are populated in IBS Fetch mode. Don't accumulate them into raw sample user data in that case.
Also, in Fetch mode, add saving the IBS Fetch Control Extended MSR.
Technically, there is an ABI change here with respect to the IBS raw sample data format. perf.data file headers carry no perf driver version information, but existing users can detect whether the size of the sample record has shrunk by 8 bytes to determine whether the IBS driver has this fix.
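A consumer that parses IBS fetch samples could, for instance, key off the raw record size. A rough sketch, assuming the usual PERF_SAMPLE_RAW layout (a u32 size followed by the payload); old_size is a placeholder for the pre-fix record size:

  #include <stdbool.h>
  #include <stdint.h>
  #include <string.h>

  static bool fetch_sample_has_fix(const void *sample_raw, uint32_t old_size)
  {
          uint32_t raw_size;

          memcpy(&raw_size, sample_raw, sizeof(raw_size)); /* u32 size, then payload */

          return raw_size == old_size - 8; /* IbsBrTarget/IbsOpData4 dropped */
  }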
Fixes: 904cb3677f3a ("perf/x86/amd/ibs: Update IBS MSRs and feature definitions")
Reported-by: Stephane Eranian <stephane.eranian@google.com>
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20200908214740.18097-6-kim.phillips@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/events/amd/ibs.c        | 26 ++++++++++++++++----------
 arch/x86/include/asm/msr-index.h |  1 +
 2 files changed, 17 insertions(+), 10 deletions(-)
diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c index 801c50be8e1d..db9e0b33510f 100644 --- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -643,18 +643,24 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs) perf_ibs->offset_max, offset + 1); } while (offset < offset_max); + /* + * Read IbsBrTarget, IbsOpData4, and IbsExtdCtl separately + * depending on their availability. + * Can't add to offset_max as they are staggered + */ if (event->attr.sample_type & PERF_SAMPLE_RAW) { - /* - * Read IbsBrTarget and IbsOpData4 separately - * depending on their availability. - * Can't add to offset_max as they are staggered - */ - if (ibs_caps & IBS_CAPS_BRNTRGT) { - rdmsrl(MSR_AMD64_IBSBRTARGET, *buf++); - size++; + if (perf_ibs == &perf_ibs_op) { + if (ibs_caps & IBS_CAPS_BRNTRGT) { + rdmsrl(MSR_AMD64_IBSBRTARGET, *buf++); + size++; + } + if (ibs_caps & IBS_CAPS_OPDATA4) { + rdmsrl(MSR_AMD64_IBSOPDATA4, *buf++); + size++; + } } - if (ibs_caps & IBS_CAPS_OPDATA4) { - rdmsrl(MSR_AMD64_IBSOPDATA4, *buf++); + if (perf_ibs == &perf_ibs_fetch && (ibs_caps & IBS_CAPS_FETCHCTLEXTD)) { + rdmsrl(MSR_AMD64_ICIBSEXTDCTL, *buf++); size++; } } diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index cc387a1da9f9..8d433235dcbb 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -378,6 +378,7 @@ #define MSR_AMD64_IBSOP_REG_MASK ((1UL<<MSR_AMD64_IBSOP_REG_COUNT)-1) #define MSR_AMD64_IBSCTL 0xc001103a #define MSR_AMD64_IBSBRTARGET 0xc001103b +#define MSR_AMD64_ICIBSEXTDCTL 0xc001103c #define MSR_AMD64_IBSOPDATA4 0xc001103d #define MSR_AMD64_IBS_REG_COUNT_MAX 8 /* includes MSR_AMD64_IBSBRTARGET */ #define MSR_AMD64_SEV 0xc0010131
From: Kim Phillips <kim.phillips@amd.com>

mainline inclusion
from mainline-v5.10-rc1
commit c8fe99d0701fec9fb849ec880a86bc5592530496
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Commit 2f217d58a8a0 ("perf/x86/amd/uncore: Set the thread mask for F17h L3 PMCs") inadvertently changed the uncore driver's behaviour wrt perf tool invocations with or without a CPU list, specified with -C / --cpu=.
Change the behaviour of the driver to assume the former all-cpu (-a) case, which is the more commonly desired default. This fixes '-a -A' invocations without an explicit CPU list (-C), which previously counted L3 events only on behalf of the first thread of the first core in the L3 domain.
BEFORE:
Activity performed by the first thread of the last core (CPU#43) in CPU#40's L3 domain is not reported by CPU#40:
sudo perf stat -a -A -e l3_request_g1.caching_l3_cache_accesses taskset -c 43 perf bench mem memcpy -s 32mb -l 100 -f default ... CPU36 21,835 l3_request_g1.caching_l3_cache_accesses CPU40 87,066 l3_request_g1.caching_l3_cache_accesses CPU44 17,360 l3_request_g1.caching_l3_cache_accesses ...
AFTER:
The L3 domain activity is now reported by CPU#40:
sudo perf stat -a -A -e l3_request_g1.caching_l3_cache_accesses taskset -c 43 perf bench mem memcpy -s 32mb -l 100 -f default ... CPU36 354,891 l3_request_g1.caching_l3_cache_accesses CPU40 1,780,870 l3_request_g1.caching_l3_cache_accesses CPU44 315,062 l3_request_g1.caching_l3_cache_accesses ...
Fixes: 2f217d58a8a0 ("perf/x86/amd/uncore: Set the thread mask for F17h L3 PMCs")
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20200908214740.18097-2-kim.phillips@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/events/amd/uncore.c | 28 ++++++++--------------------
 1 file changed, 8 insertions(+), 20 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c index 63461a868fed..b3cee06e61ee 100644 --- a/arch/x86/events/amd/uncore.c +++ b/arch/x86/events/amd/uncore.c @@ -184,28 +184,16 @@ static void amd_uncore_del(struct perf_event *event, int flags) }
/* - * Convert logical CPU number to L3 PMC Config ThreadMask format + * Return a full thread and slice mask until per-CPU is + * properly supported. */ -static u64 l3_thread_slice_mask(int cpu) +static u64 l3_thread_slice_mask(void) { - u64 thread_mask, core = topology_core_id(cpu); - unsigned int shift, thread = 0; + if (boot_cpu_data.x86 <= 0x18) + return AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK;
- if (topology_smt_supported() && !topology_is_primary_thread(cpu)) - thread = 1; - - if (boot_cpu_data.x86 <= 0x18) { - shift = AMD64_L3_THREAD_SHIFT + 2 * (core % 4) + thread; - thread_mask = BIT_ULL(shift); - - return AMD64_L3_SLICE_MASK | thread_mask; - } - - core = (core << AMD64_L3_COREID_SHIFT) & AMD64_L3_COREID_MASK; - shift = AMD64_L3_THREAD_SHIFT + thread; - thread_mask = BIT_ULL(shift); - - return AMD64_L3_EN_ALL_SLICES | core | thread_mask; + return AMD64_L3_EN_ALL_SLICES | AMD64_L3_EN_ALL_CORES | + AMD64_L3_F19H_THREAD_MASK; }
static int amd_uncore_event_init(struct perf_event *event) @@ -243,7 +231,7 @@ static int amd_uncore_event_init(struct perf_event *event) * For other events, the two fields do not affect the count. */ if (l3_mask && is_llc_event(event)) - hwc->config |= l3_thread_slice_mask(event->cpu); + hwc->config |= l3_thread_slice_mask();
uncore = event_to_amd_uncore(event); if (!uncore)
From: Kim Phillips <kim.phillips@amd.com>

mainline inclusion
from mainline-v5.10-rc1
commit 680d69635005ba0e58fe3f4c52fc162b8fc743b0
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
get_ibs_op_count() adds hardware's current count (IbsOpCurCnt) bits to its count regardless of hardware's valid status.
According to the PPR for AMD Family 17h Model 31h B0 55803 Rev 0.54, if the counter rolls over, valid status is set, and the lower 7 bits of IbsOpCurCnt are randomized by hardware.
Don't include those bits in the driver's event count.
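The two cases are thus mutually exclusive. A standalone sketch of the resulting logic (bit positions assumed for illustration and the RDWROPCNT capability check dropped; the hunk below has the in-tree version):

  #include <stdint.h>

  #define IBS_OP_VAL      (1ULL << 18)        /* counter rolled over */
  #define IBS_OP_MAX_CNT  0x0000FFFFULL       /* programmed period >> 4 */
  #define IBS_OP_CUR_CNT  (0xFFFFFULL << 32)  /* current counter value */

  static uint64_t ibs_op_count(uint64_t config)
  {
          if (config & IBS_OP_VAL)            /* low 7 bits of CurCnt are random */
                  return (config & IBS_OP_MAX_CNT) << 4;

          return (config & IBS_OP_CUR_CNT) >> 32;
  }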
Fixes: 8b1e13638d46 ("perf/x86-ibs: Fix usage of IBS op current count")
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/events/amd/ibs.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c index db9e0b33510f..2410bd4bb48f 100644 --- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -347,11 +347,15 @@ static u64 get_ibs_op_count(u64 config) { u64 count = 0;
+ /* + * If the internal 27-bit counter rolled over, the count is MaxCnt + * and the lower 7 bits of CurCnt are randomized. + * Otherwise CurCnt has the full 27-bit current counter value. + */ if (config & IBS_OP_VAL) - count += (config & IBS_OP_MAX_CNT) << 4; /* cnt rolled over */ - - if (ibs_caps & IBS_CAPS_RDWROPCNT) - count += (config & IBS_OP_CUR_CNT) >> 32; + count = (config & IBS_OP_MAX_CNT) << 4; + else if (ibs_caps & IBS_CAPS_RDWROPCNT) + count = (config & IBS_OP_CUR_CNT) >> 32;
return count; }
From: Kim Phillips <kim.phillips@amd.com>

mainline inclusion
from mainline-v5.10-rc1
commit 06f2c24584f31bc16129643bfb8239a1af82a17f
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Replace AMD_FORMAT_ATTR with the more apropos DEFINE_UNCORE_FORMAT_ATTR stolen from arch/x86/events/intel/uncore.h. This way we can clearly see the bit-variants of each of the attributes that want to have the same name across families.
Also unroll AMD_ATTRIBUTE because we are going to separately add new attributes that differ between DF and L3.
Also clean up the if-Family 17h-else logic in amd_uncore_init.
This is basically a rewrite of commit da6adaea2b7e ("perf/x86/amd/uncore: Update sysfs attributes for Family17h processors").
No functional changes.
Tested that the F17h+ /sys/bus/event_source/devices/amd_{l3,df}/format/* content remains unchanged:
/sys/bus/event_source/devices/amd_l3/format/event:config:0-7 /sys/bus/event_source/devices/amd_l3/format/umask:config:8-15 /sys/bus/event_source/devices/amd_df/format/event:config:0-7,32-35,59-60 /sys/bus/event_source/devices/amd_df/format/umask:config:8-15
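For reference, each format string tells the perf tool which config bits a user-supplied field value is scattered into. A small self-contained sketch of that bit scattering (this is not perf's actual parser; the ranges match the event14 spec above and the sample value is arbitrary):

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t scatter(uint64_t val, const int ranges[][2], int n)
  {
          uint64_t config = 0;
          int bit = 0;

          for (int r = 0; r < n; r++)
                  for (int b = ranges[r][0]; b <= ranges[r][1]; b++, bit++)
                          if (val & (1ULL << bit))
                                  config |= 1ULL << b;

          return config;
  }

  int main(void)
  {
          const int event14[][2] = { {0, 7}, {32, 35}, {59, 60} };

          /* The low 8 bits land in config:0-7, the next 4 in config:32-35. */
          printf("config = %#llx\n",
                 (unsigned long long)scatter(0x1f3, event14, 3));
          return 0;
  }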
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200921144330.6331-2-kim.phillips@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/events/amd/uncore.c | 111 +++++++++++++++++++----------------
 1 file changed, 61 insertions(+), 50 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c index b3cee06e61ee..bcdeda906c77 100644 --- a/arch/x86/events/amd/uncore.c +++ b/arch/x86/events/amd/uncore.c @@ -273,47 +273,60 @@ static struct attribute_group amd_uncore_attr_group = { .attrs = amd_uncore_attrs, };
-/* - * Similar to PMU_FORMAT_ATTR but allowing for format_attr to be assigned based - * on family - */ -#define AMD_FORMAT_ATTR(_dev, _name, _format) \ -static ssize_t \ -_dev##_show##_name(struct device *dev, \ - struct device_attribute *attr, \ - char *page) \ -{ \ - BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE); \ - return sprintf(page, _format "\n"); \ -} \ -static struct device_attribute format_attr_##_dev##_name = __ATTR_RO(_dev); - -/* Used for each uncore counter type */ -#define AMD_ATTRIBUTE(_name) \ -static struct attribute *amd_uncore_format_attr_##_name[] = { \ - &format_attr_event_##_name.attr, \ - &format_attr_umask.attr, \ - NULL, \ -}; \ -static struct attribute_group amd_uncore_format_group_##_name = { \ - .name = "format", \ - .attrs = amd_uncore_format_attr_##_name, \ -}; \ -static const struct attribute_group *amd_uncore_attr_groups_##_name[] = { \ - &amd_uncore_attr_group, \ - &amd_uncore_format_group_##_name, \ - NULL, \ +#define DEFINE_UNCORE_FORMAT_ATTR(_var, _name, _format) \ +static ssize_t __uncore_##_var##_show(struct kobject *kobj, \ + struct kobj_attribute *attr, \ + char *page) \ +{ \ + BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE); \ + return sprintf(page, _format "\n"); \ +} \ +static struct kobj_attribute format_attr_##_var = \ + __ATTR(_name, 0444, __uncore_##_var##_show, NULL) + +DEFINE_UNCORE_FORMAT_ATTR(event12, event, "config:0-7,32-35"); +DEFINE_UNCORE_FORMAT_ATTR(event14, event, "config:0-7,32-35,59-60"); /* F17h+ DF */ +DEFINE_UNCORE_FORMAT_ATTR(event8, event, "config:0-7"); /* F17h+ L3 */ +DEFINE_UNCORE_FORMAT_ATTR(umask, umask, "config:8-15"); + +static struct attribute *amd_uncore_df_format_attr[] = { + &format_attr_event12.attr, /* event14 if F17h+ */ + &format_attr_umask.attr, + NULL, +}; + +static struct attribute *amd_uncore_l3_format_attr[] = { + &format_attr_event12.attr, /* event8 if F17h+ */ + &format_attr_umask.attr, + NULL, +}; + +static struct attribute_group amd_uncore_df_format_group = { + .name = "format", + .attrs = amd_uncore_df_format_attr, +}; + +static struct attribute_group amd_uncore_l3_format_group = { + .name = "format", + .attrs = amd_uncore_l3_format_attr, };
-AMD_FORMAT_ATTR(event, , "config:0-7,32-35"); -AMD_FORMAT_ATTR(umask, , "config:8-15"); -AMD_FORMAT_ATTR(event, _df, "config:0-7,32-35,59-60"); -AMD_FORMAT_ATTR(event, _l3, "config:0-7"); -AMD_ATTRIBUTE(df); -AMD_ATTRIBUTE(l3); +static const struct attribute_group *amd_uncore_df_attr_groups[] = { + &amd_uncore_attr_group, + &amd_uncore_df_format_group, + NULL, +}; + +static const struct attribute_group *amd_uncore_l3_attr_groups[] = { + &amd_uncore_attr_group, + &amd_uncore_l3_format_group, + NULL, +};
static struct pmu amd_nb_pmu = { .task_ctx_nr = perf_invalid_context, + .attr_groups = amd_uncore_df_attr_groups, + .name = "amd_nb", .event_init = amd_uncore_event_init, .add = amd_uncore_add, .del = amd_uncore_del, @@ -324,6 +337,8 @@ static struct pmu amd_nb_pmu = {
static struct pmu amd_llc_pmu = { .task_ctx_nr = perf_invalid_context, + .attr_groups = amd_uncore_l3_attr_groups, + .name = "amd_l2", .event_init = amd_uncore_event_init, .add = amd_uncore_add, .del = amd_uncore_del, @@ -526,6 +541,8 @@ static int amd_uncore_cpu_dead(unsigned int cpu)
static int __init amd_uncore_init(void) { + struct attribute **df_attr = amd_uncore_df_format_attr; + struct attribute **l3_attr = amd_uncore_l3_format_attr; int ret = -ENODEV;
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD && @@ -535,6 +552,8 @@ static int __init amd_uncore_init(void) if (!boot_cpu_has(X86_FEATURE_TOPOEXT)) return -ENODEV;
+ num_counters_nb = NUM_COUNTERS_NB; + num_counters_llc = NUM_COUNTERS_L2; if (boot_cpu_data.x86 >= 0x17) { /* * For F17h and above, the Northbridge counters are @@ -542,27 +561,16 @@ static int __init amd_uncore_init(void) * counters are supported too. The PMUs are exported * based on family as either L2 or L3 and NB or DF. */ - num_counters_nb = NUM_COUNTERS_NB; num_counters_llc = NUM_COUNTERS_L3; amd_nb_pmu.name = "amd_df"; amd_llc_pmu.name = "amd_l3"; - format_attr_event_df.show = &event_show_df; - format_attr_event_l3.show = &event_show_l3; l3_mask = true; - } else { - num_counters_nb = NUM_COUNTERS_NB; - num_counters_llc = NUM_COUNTERS_L2; - amd_nb_pmu.name = "amd_nb"; - amd_llc_pmu.name = "amd_l2"; - format_attr_event_df = format_attr_event; - format_attr_event_l3 = format_attr_event; - l3_mask = false; }
- amd_nb_pmu.attr_groups = amd_uncore_attr_groups_df; - amd_llc_pmu.attr_groups = amd_uncore_attr_groups_l3; - if (boot_cpu_has(X86_FEATURE_PERFCTR_NB)) { + if (boot_cpu_data.x86 >= 0x17) + *df_attr = &format_attr_event14.attr; + amd_uncore_nb = alloc_percpu(struct amd_uncore *); if (!amd_uncore_nb) { ret = -ENOMEM; @@ -579,6 +587,9 @@ static int __init amd_uncore_init(void) }
if (boot_cpu_has(X86_FEATURE_PERFCTR_LLC)) { + if (boot_cpu_data.x86 >= 0x17) + *l3_attr = &format_attr_event8.attr; + amd_uncore_llc = alloc_percpu(struct amd_uncore *); if (!amd_uncore_llc) { ret = -ENOMEM;
From: Kim Phillips <kim.phillips@amd.com>

mainline inclusion
from mainline-v5.10-rc1
commit 8170f386f19ca7120393c957d4bfbdc07f964ab6
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Continue to fully populate threadmask or slicemask whenever the user does not supply one; a user-provided mask is passed through unchanged.
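For example, with this change an invocation that supplies only a slicemask gets the full threadmask filled in by the driver (an illustrative command; the event/umask values are taken from the F17h L3 events added elsewhere in this series):

perf stat -a -e amd_l3/event=0x01,umask=0x80,slicemask=0x1/ <workload>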
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200921144330.6331-3-kim.phillips@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/events/amd/uncore.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c index bcdeda906c77..2ce1c7d81c77 100644 --- a/arch/x86/events/amd/uncore.c +++ b/arch/x86/events/amd/uncore.c @@ -184,13 +184,14 @@ static void amd_uncore_del(struct perf_event *event, int flags) }
/* - * Return a full thread and slice mask until per-CPU is - * properly supported. + * Return a full thread and slice mask unless user + * has provided them */ -static u64 l3_thread_slice_mask(void) +static u64 l3_thread_slice_mask(u64 config) { if (boot_cpu_data.x86 <= 0x18) - return AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK; + return ((config & AMD64_L3_SLICE_MASK) ? : AMD64_L3_SLICE_MASK) | + ((config & AMD64_L3_THREAD_MASK) ? : AMD64_L3_THREAD_MASK);
return AMD64_L3_EN_ALL_SLICES | AMD64_L3_EN_ALL_CORES | AMD64_L3_F19H_THREAD_MASK; @@ -231,7 +232,7 @@ static int amd_uncore_event_init(struct perf_event *event) * For other events, the two fields do not affect the count. */ if (l3_mask && is_llc_event(event)) - hwc->config |= l3_thread_slice_mask(); + hwc->config |= l3_thread_slice_mask(event->attr.config);
uncore = event_to_amd_uncore(event); if (!uncore) @@ -288,6 +289,8 @@ DEFINE_UNCORE_FORMAT_ATTR(event12, event, "config:0-7,32-35"); DEFINE_UNCORE_FORMAT_ATTR(event14, event, "config:0-7,32-35,59-60"); /* F17h+ DF */ DEFINE_UNCORE_FORMAT_ATTR(event8, event, "config:0-7"); /* F17h+ L3 */ DEFINE_UNCORE_FORMAT_ATTR(umask, umask, "config:8-15"); +DEFINE_UNCORE_FORMAT_ATTR(slicemask, slicemask, "config:48-51"); /* F17h L3 */ +DEFINE_UNCORE_FORMAT_ATTR(threadmask8, threadmask, "config:56-63"); /* F17h L3 */
static struct attribute *amd_uncore_df_format_attr[] = { &format_attr_event12.attr, /* event14 if F17h+ */ @@ -298,6 +301,8 @@ static struct attribute *amd_uncore_df_format_attr[] = { static struct attribute *amd_uncore_l3_format_attr[] = { &format_attr_event12.attr, /* event8 if F17h+ */ &format_attr_umask.attr, + NULL, /* slicemask if F17h */ + NULL, /* threadmask8 if F17h */ NULL, };
@@ -587,8 +592,12 @@ static int __init amd_uncore_init(void) }
if (boot_cpu_has(X86_FEATURE_PERFCTR_LLC)) { - if (boot_cpu_data.x86 >= 0x17) - *l3_attr = &format_attr_event8.attr; + if (boot_cpu_data.x86 >= 0x17) { + *l3_attr++ = &format_attr_event8.attr; + *l3_attr++ = &format_attr_umask.attr; + *l3_attr++ = &format_attr_slicemask.attr; + *l3_attr++ = &format_attr_threadmask8.attr; + }
amd_uncore_llc = alloc_percpu(struct amd_uncore *); if (!amd_uncore_llc) {
From: Kim Phillips <kim.phillips@amd.com>

mainline inclusion
from mainline-v5.10-rc1
commit 87a54a1fd525f2af8d82becf583c7e836918cf22
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
On Family 19h, the driver checks for a populated 2-bit threadmask in order to establish that the user wants to measure individual slices, individual cores (only one can be measured at a time), and lets the user also directly specify enallcores and/or enallslices if desired.
Example F19h invocation to measure L3 accesses (event 4, umask 0xff) by the first thread (id 0 -> mask 0x1) of the first core (id 0) on the first slice (id 0):
perf stat -a -e instructions,amd_l3/umask=0xff,event=0x4,coreid=0,threadmask=1,sliceid=0,enallcores=0,enallslices=0/ <workload>
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200921144330.6331-4-kim.phillips@amd.com
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 arch/x86/events/amd/uncore.c | 37 +++++++++++++++++++++++++++++++-----
 1 file changed, 32 insertions(+), 5 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c index 2ce1c7d81c77..9adad0408068 100644 --- a/arch/x86/events/amd/uncore.c +++ b/arch/x86/events/amd/uncore.c @@ -193,8 +193,19 @@ static u64 l3_thread_slice_mask(u64 config) return ((config & AMD64_L3_SLICE_MASK) ? : AMD64_L3_SLICE_MASK) | ((config & AMD64_L3_THREAD_MASK) ? : AMD64_L3_THREAD_MASK);
- return AMD64_L3_EN_ALL_SLICES | AMD64_L3_EN_ALL_CORES | - AMD64_L3_F19H_THREAD_MASK; + /* + * If the user doesn't specify a threadmask, they're not trying to + * count core 0, so we enable all cores & threads. + * We'll also assume that they want to count slice 0 if they specify + * a threadmask and leave sliceid and enallslices unpopulated. + */ + if (!(config & AMD64_L3_F19H_THREAD_MASK)) + return AMD64_L3_F19H_THREAD_MASK | AMD64_L3_EN_ALL_SLICES | + AMD64_L3_EN_ALL_CORES; + + return config & (AMD64_L3_F19H_THREAD_MASK | AMD64_L3_SLICEID_MASK | + AMD64_L3_EN_ALL_CORES | AMD64_L3_EN_ALL_SLICES | + AMD64_L3_COREID_MASK); }
static int amd_uncore_event_init(struct perf_event *event) @@ -289,8 +300,13 @@ DEFINE_UNCORE_FORMAT_ATTR(event12, event, "config:0-7,32-35"); DEFINE_UNCORE_FORMAT_ATTR(event14, event, "config:0-7,32-35,59-60"); /* F17h+ DF */ DEFINE_UNCORE_FORMAT_ATTR(event8, event, "config:0-7"); /* F17h+ L3 */ DEFINE_UNCORE_FORMAT_ATTR(umask, umask, "config:8-15"); +DEFINE_UNCORE_FORMAT_ATTR(coreid, coreid, "config:42-44"); /* F19h L3 */ DEFINE_UNCORE_FORMAT_ATTR(slicemask, slicemask, "config:48-51"); /* F17h L3 */ DEFINE_UNCORE_FORMAT_ATTR(threadmask8, threadmask, "config:56-63"); /* F17h L3 */ +DEFINE_UNCORE_FORMAT_ATTR(threadmask2, threadmask, "config:56-57"); /* F19h L3 */ +DEFINE_UNCORE_FORMAT_ATTR(enallslices, enallslices, "config:46"); /* F19h L3 */ +DEFINE_UNCORE_FORMAT_ATTR(enallcores, enallcores, "config:47"); /* F19h L3 */ +DEFINE_UNCORE_FORMAT_ATTR(sliceid, sliceid, "config:48-50"); /* F19h L3 */
static struct attribute *amd_uncore_df_format_attr[] = { &format_attr_event12.attr, /* event14 if F17h+ */ @@ -301,8 +317,11 @@ static struct attribute *amd_uncore_df_format_attr[] = { static struct attribute *amd_uncore_l3_format_attr[] = { &format_attr_event12.attr, /* event8 if F17h+ */ &format_attr_umask.attr, - NULL, /* slicemask if F17h */ - NULL, /* threadmask8 if F17h */ + NULL, /* slicemask if F17h, coreid if F19h */ + NULL, /* threadmask8 if F17h, enallslices if F19h */ + NULL, /* enallcores if F19h */ + NULL, /* sliceid if F19h */ + NULL, /* threadmask2 if F19h */ NULL, };
@@ -592,7 +611,15 @@ static int __init amd_uncore_init(void) }
if (boot_cpu_has(X86_FEATURE_PERFCTR_LLC)) { - if (boot_cpu_data.x86 >= 0x17) { + if (boot_cpu_data.x86 >= 0x19) { + *l3_attr++ = &format_attr_event8.attr; + *l3_attr++ = &format_attr_umask.attr; + *l3_attr++ = &format_attr_coreid.attr; + *l3_attr++ = &format_attr_enallslices.attr; + *l3_attr++ = &format_attr_enallcores.attr; + *l3_attr++ = &format_attr_sliceid.attr; + *l3_attr++ = &format_attr_threadmask2.attr; + } else if (boot_cpu_data.x86 >= 0x17) { *l3_attr++ = &format_attr_event8.attr; *l3_attr++ = &format_attr_umask.attr; *l3_attr++ = &format_attr_slicemask.attr;
From: Martin Liška <mliska@suse.cz>

mainline inclusion
from mainline-v5.1-rc2
commit 98c07a8f74f85a19aeee2016f5afa0c667fa694d
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
This patch adds PMC events for AMD Family 17h CPUs as defined in [1]. It covers the events described in section 2.1.13. The regex pattern in mapfile.csv covers all CPUs of the family.
[1] https://support.amd.com/TechDocs/54945_PPR_Family_17h_Models_00h-0Fh.pdf
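With these files installed, the events can be requested by name rather than by raw code, e.g.:

perf stat -e ex_ret_instr,ex_ret_brn_misp,l2_cache_req_stat.ic_fill_miss <workload>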
Signed-off-by: Martin Liška <mliska@suse.cz>
Acked-by: Borislav Petkov <bp@suse.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Jon Grimm <jon.grimm@amd.com>
Cc: Martin Jambor <mjambor@suse.cz>
Cc: William Cohen <wcohen@redhat.com>
Link: https://lkml.kernel.org/r/d65873ca-e402-b198-4fe9-8c4af81258c8@suse.cz
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 .../pmu-events/arch/x86/amdfam17h/branch.json |  12 +
 .../pmu-events/arch/x86/amdfam17h/cache.json  | 287 ++++++++++++++++++
 .../pmu-events/arch/x86/amdfam17h/core.json   | 134 ++++++++
 .../arch/x86/amdfam17h/floating-point.json    | 168 ++++++++++
 .../pmu-events/arch/x86/amdfam17h/memory.json | 162 ++++++++++
 .../pmu-events/arch/x86/amdfam17h/other.json  |  65 ++++
 tools/perf/pmu-events/arch/x86/mapfile.csv    |   1 +
 7 files changed, 829 insertions(+)
 create mode 100644 tools/perf/pmu-events/arch/x86/amdfam17h/branch.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdfam17h/cache.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdfam17h/core.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdfam17h/floating-point.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdfam17h/memory.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdfam17h/other.json
diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/branch.json b/tools/perf/pmu-events/arch/x86/amdfam17h/branch.json new file mode 100644 index 000000000000..93ddfd8053ca --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdfam17h/branch.json @@ -0,0 +1,12 @@ +[ + { + "EventName": "bp_l1_btb_correct", + "EventCode": "0x8a", + "BriefDescription": "L1 BTB Correction." + }, + { + "EventName": "bp_l2_btb_correct", + "EventCode": "0x8b", + "BriefDescription": "L2 BTB Correction." + } +] diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/cache.json b/tools/perf/pmu-events/arch/x86/amdfam17h/cache.json new file mode 100644 index 000000000000..fad4af9142cb --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdfam17h/cache.json @@ -0,0 +1,287 @@ +[ + { + "EventName": "ic_fw32", + "EventCode": "0x80", + "BriefDescription": "The number of 32B fetch windows transferred from IC pipe to DE instruction decoder (includes non-cacheable and cacheable fill responses)." + }, + { + "EventName": "ic_fw32_miss", + "EventCode": "0x81", + "BriefDescription": "The number of 32B fetch windows tried to read the L1 IC and missed in the full tag." + }, + { + "EventName": "ic_cache_fill_l2", + "EventCode": "0x82", + "BriefDescription": "The number of 64 byte instruction cache line was fulfilled from the L2 cache." + }, + { + "EventName": "ic_cache_fill_sys", + "EventCode": "0x83", + "BriefDescription": "The number of 64 byte instruction cache line fulfilled from system memory or another cache." + }, + { + "EventName": "bp_l1_tlb_miss_l2_hit", + "EventCode": "0x84", + "BriefDescription": "The number of instruction fetches that miss in the L1 ITLB but hit in the L2 ITLB." + }, + { + "EventName": "bp_l1_tlb_miss_l2_miss", + "EventCode": "0x85", + "BriefDescription": "The number of instruction fetches that miss in both the L1 and L2 TLBs." + }, + { + "EventName": "bp_snp_re_sync", + "EventCode": "0x86", + "BriefDescription": "The number of pipeline restarts caused by invalidating probes that hit on the instruction stream currently being executed. This would happen if the active instruction stream was being modified by another processor in an MP system - typically a highly unlikely event." + }, + { + "EventName": "ic_fetch_stall.ic_stall_any", + "EventCode": "0x87", + "BriefDescription": "IC pipe was stalled during this clock cycle for any reason (nothing valid in pipe ICM1).", + "PublicDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle for any reason (nothing valid in pipe ICM1).", + "UMask": "0x4" + }, + { + "EventName": "ic_fetch_stall.ic_stall_dq_empty", + "EventCode": "0x87", + "BriefDescription": "IC pipe was stalled during this clock cycle (including IC to OC fetches) due to DQ empty.", + "PublicDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle (including IC to OC fetches) due to DQ empty.", + "UMask": "0x2" + }, + { + "EventName": "ic_fetch_stall.ic_stall_back_pressure", + "EventCode": "0x87", + "BriefDescription": "IC pipe was stalled during this clock cycle (including IC to OC fetches) due to back-pressure.", + "PublicDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle (including IC to OC fetches) due to back-pressure.", + "UMask": "0x1" + }, + { + "EventName": "ic_cache_inval.l2_invalidating_probe", + "EventCode": "0x8c", + "BriefDescription": "IC line invalidated due to L2 invalidating probe (external or LS).", + "PublicDescription": "The number of instruction cache lines invalidated. 
A non-SMC event is CMC (cross modifying code), either from the other thread of the core or another core. IC line invalidated due to L2 invalidating probe (external or LS).", + "UMask": "0x2" + }, + { + "EventName": "ic_cache_inval.fill_invalidated", + "EventCode": "0x8c", + "BriefDescription": "IC line invalidated due to overwriting fill response.", + "PublicDescription": "The number of instruction cache lines invalidated. A non-SMC event is CMC (cross modifying code), either from the other thread of the core or another core. IC line invalidated due to overwriting fill response.", + "UMask": "0x1" + }, + { + "EventName": "bp_tlb_rel", + "EventCode": "0x99", + "BriefDescription": "The number of ITLB reload requests." + }, + { + "EventName": "l2_request_g1.rd_blk_l", + "EventCode": "0x60", + "BriefDescription": "Requests to L2 Group1.", + "PublicDescription": "Requests to L2 Group1.", + "UMask": "0x80" + }, + { + "EventName": "l2_request_g1.rd_blk_x", + "EventCode": "0x60", + "BriefDescription": "Requests to L2 Group1.", + "PublicDescription": "Requests to L2 Group1.", + "UMask": "0x40" + }, + { + "EventName": "l2_request_g1.ls_rd_blk_c_s", + "EventCode": "0x60", + "BriefDescription": "Requests to L2 Group1.", + "PublicDescription": "Requests to L2 Group1.", + "UMask": "0x20" + }, + { + "EventName": "l2_request_g1.cacheable_ic_read", + "EventCode": "0x60", + "BriefDescription": "Requests to L2 Group1.", + "PublicDescription": "Requests to L2 Group1.", + "UMask": "0x10" + }, + { + "EventName": "l2_request_g1.change_to_x", + "EventCode": "0x60", + "BriefDescription": "Requests to L2 Group1.", + "PublicDescription": "Requests to L2 Group1.", + "UMask": "0x8" + }, + { + "EventName": "l2_request_g1.prefetch_l2", + "EventCode": "0x60", + "BriefDescription": "Requests to L2 Group1.", + "PublicDescription": "Requests to L2 Group1.", + "UMask": "0x4" + }, + { + "EventName": "l2_request_g1.l2_hw_pf", + "EventCode": "0x60", + "BriefDescription": "Requests to L2 Group1.", + "PublicDescription": "Requests to L2 Group1.", + "UMask": "0x2" + }, + { + "EventName": "l2_request_g1.other_requests", + "EventCode": "0x60", + "BriefDescription": "Events covered by l2_request_g2.", + "PublicDescription": "Requests to L2 Group1. Events covered by l2_request_g2.", + "UMask": "0x1" + }, + { + "EventName": "l2_request_g2.group1", + "EventCode": "0x61", + "BriefDescription": "All Group 1 commands not in unit0.", + "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous. All Group 1 commands not in unit0.", + "UMask": "0x80" + }, + { + "EventName": "l2_request_g2.ls_rd_sized", + "EventCode": "0x61", + "BriefDescription": "RdSized, RdSized32, RdSized64.", + "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous. RdSized, RdSized32, RdSized64.", + "UMask": "0x40" + }, + { + "EventName": "l2_request_g2.ls_rd_sized_nc", + "EventCode": "0x61", + "BriefDescription": "RdSizedNC, RdSized32NC, RdSized64NC.", + "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous. 
RdSizedNC, RdSized32NC, RdSized64NC.", + "UMask": "0x20" + }, + { + "EventName": "l2_request_g2.ic_rd_sized", + "EventCode": "0x61", + "BriefDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "UMask": "0x10" + }, + { + "EventName": "l2_request_g2.ic_rd_sized_nc", + "EventCode": "0x61", + "BriefDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "UMask": "0x8" + }, + { + "EventName": "l2_request_g2.smc_inval", + "EventCode": "0x61", + "BriefDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "UMask": "0x4" + }, + { + "EventName": "l2_request_g2.bus_locks_originator", + "EventCode": "0x61", + "BriefDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "UMask": "0x2" + }, + { + "EventName": "l2_request_g2.bus_locks_responses", + "EventCode": "0x61", + "BriefDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "UMask": "0x1" + }, + { + "EventName": "l2_latency.l2_cycles_waiting_on_fills", + "EventCode": "0x62", + "BriefDescription": "Total cycles spent waiting for L2 fills to complete from L3 or memory, divided by four. Event counts are for both threads. To calculate average latency, the number of fills from both threads must be used.", + "PublicDescription": "Total cycles spent waiting for L2 fills to complete from L3 or memory, divided by four. Event counts are for both threads. To calculate average latency, the number of fills from both threads must be used.", + "UMask": "0x1" + }, + { + "EventName": "l2_wcb_req.wcb_write", + "EventCode": "0x63", + "PublicDescription": "LS (Load/Store unit) to L2 WCB (Write Combining Buffer) write requests.", + "BriefDescription": "LS to L2 WCB write requests.", + "UMask": "0x40" + }, + { + "EventName": "l2_wcb_req.wcb_close", + "EventCode": "0x63", + "BriefDescription": "LS to L2 WCB close requests.", + "PublicDescription": "LS (Load/Store unit) to L2 WCB (Write Combining Buffer) close requests.", + "UMask": "0x20" + }, + { + "EventName": "l2_wcb_req.zero_byte_store", + "EventCode": "0x63", + "BriefDescription": "LS to L2 WCB zero byte store requests.", + "PublicDescription": "LS (Load/Store unit) to L2 WCB (Write Combining Buffer) zero byte store requests.", + "UMask": "0x4" + }, + { + "EventName": "l2_wcb_req.cl_zero", + "EventCode": "0x63", + "PublicDescription": "LS to L2 WCB cache line zeroing requests.", + "BriefDescription": "LS (Load/Store unit) to L2 WCB (Write Combining Buffer) cache line zeroing requests.", + "UMask": "0x1" + }, + { + "EventName": "l2_cache_req_stat.ls_rd_blk_cs", + "EventCode": "0x64", + "BriefDescription": "LS ReadBlock C/S Hit.", + "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. 
LS ReadBlock C/S Hit.", + "UMask": "0x80" + }, + { + "EventName": "l2_cache_req_stat.ls_rd_blk_l_hit_x", + "EventCode": "0x64", + "BriefDescription": "LS Read Block L Hit X.", + "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. LS Read Block L Hit X.", + "UMask": "0x40" + }, + { + "EventName": "l2_cache_req_stat.ls_rd_blk_l_hit_s", + "EventCode": "0x64", + "BriefDescription": "LsRdBlkL Hit Shared.", + "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. LsRdBlkL Hit Shared.", + "UMask": "0x20" + }, + { + "EventName": "l2_cache_req_stat.ls_rd_blk_x", + "EventCode": "0x64", + "BriefDescription": "LsRdBlkX/ChgToX Hit X. Count RdBlkX finding Shared as a Miss.", + "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. LsRdBlkX/ChgToX Hit X. Count RdBlkX finding Shared as a Miss.", + "UMask": "0x10" + }, + { + "EventName": "l2_cache_req_stat.ls_rd_blk_c", + "EventCode": "0x64", + "BriefDescription": "LS Read Block C S L X Change to X Miss.", + "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. LS Read Block C S L X Change to X Miss.", + "UMask": "0x8" + }, + { + "EventName": "l2_cache_req_stat.ic_fill_hit_x", + "EventCode": "0x64", + "BriefDescription": "IC Fill Hit Exclusive Stale.", + "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. IC Fill Hit Exclusive Stale.", + "UMask": "0x4" + }, + { + "EventName": "l2_cache_req_stat.ic_fill_hit_s", + "EventCode": "0x64", + "BriefDescription": "IC Fill Hit Shared.", + "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. IC Fill Hit Shared.", + "UMask": "0x2" + }, + { + "EventName": "l2_cache_req_stat.ic_fill_miss", + "EventCode": "0x64", + "BriefDescription": "IC Fill Miss.", + "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. IC Fill Miss.", + "UMask": "0x1" + }, + { + "EventName": "l2_fill_pending.l2_fill_busy", + "EventCode": "0x6d", + "BriefDescription": "Total cycles spent with one or more fill requests in flight from L2.", + "PublicDescription": "Total cycles spent with one or more fill requests in flight from L2.", + "UMask": "0x1" + } +] diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/core.json b/tools/perf/pmu-events/arch/x86/amdfam17h/core.json new file mode 100644 index 000000000000..7b285b0a7f35 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdfam17h/core.json @@ -0,0 +1,134 @@ +[ + { + "EventName": "ex_ret_instr", + "EventCode": "0xc0", + "BriefDescription": "Retired Instructions." + }, + { + "EventName": "ex_ret_cops", + "EventCode": "0xc1", + "BriefDescription": "Retired Uops.", + "PublicDescription": "The number of uOps retired. This includes all processor activity (instructions, exceptions, interrupts, microcode assists, etc.). The number of events logged per cycle can vary from 0 to 4." + }, + { + "EventName": "ex_ret_brn", + "EventCode": "0xc2", + "BriefDescription": "[Retired Branch Instructions.", + "PublicDescription": "The number of branch instructions retired. 
This includes all types of architectural control flow changes, including exceptions and interrupts." + }, + { + "EventName": "ex_ret_brn_misp", + "EventCode": "0xc3", + "BriefDescription": "Retired Branch Instructions Mispredicted.", + "PublicDescription": "The number of branch instructions retired, of any type, that were not correctly predicted. This includes those for which prediction is not attempted (far control transfers, exceptions and interrupts)." + }, + { + "EventName": "ex_ret_brn_tkn", + "EventCode": "0xc4", + "BriefDescription": "Retired Taken Branch Instructions.", + "PublicDescription": "The number of taken branches that were retired. This includes all types of architectural control flow changes, including exceptions and interrupts." + }, + { + "EventName": "ex_ret_brn_tkn_misp", + "EventCode": "0xc5", + "BriefDescription": "Retired Taken Branch Instructions Mispredicted.", + "PublicDescription": "The number of retired taken branch instructions that were mispredicted." + }, + { + "EventName": "ex_ret_brn_far", + "EventCode": "0xc6", + "BriefDescription": "Retired Far Control Transfers.", + "PublicDescription": "The number of far control transfers retired including far call/jump/return, IRET, SYSCALL and SYSRET, plus exceptions and interrupts. Far control transfers are not subject to branch prediction." + }, + { + "EventName": "ex_ret_brn_resync", + "EventCode": "0xc7", + "BriefDescription": "Retired Branch Resyncs.", + "PublicDescription": "The number of resync branches. These reflect pipeline restarts due to certain microcode assists and events such as writes to the active instruction stream, among other things. Each occurrence reflects a restart penalty similar to a branch mispredict. This is relatively rare." + }, + { + "EventName": "ex_ret_near_ret", + "EventCode": "0xc8", + "BriefDescription": "Retired Near Returns.", + "PublicDescription": "The number of near return instructions (RET or RET Iw) retired." + }, + { + "EventName": "ex_ret_near_ret_mispred", + "EventCode": "0xc9", + "BriefDescription": "Retired Near Returns Mispredicted.", + "PublicDescription": "The number of near returns retired that were not correctly predicted by the return address predictor. Each such mispredict incurs the same penalty as a mispredicted conditional branch instruction." + }, + { + "EventName": "ex_ret_brn_ind_misp", + "EventCode": "0xca", + "BriefDescription": "Retired Indirect Branch Instructions Mispredicted.", + "PublicDescription": "Retired Indirect Branch Instructions Mispredicted." + }, + { + "EventName": "ex_ret_mmx_fp_instr.sse_instr", + "EventCode": "0xcb", + "BriefDescription": "SSE instructions (SSE, SSE2, SSE3, SSSE3, SSE4A, SSE41, SSE42, AVX).", + "PublicDescription": "The number of MMX, SSE or x87 instructions retired. The UnitMask allows the selection of the individual classes of instructions as given in the table. Each increment represents one complete instruction. Since this event includes non-numeric instructions it is not suitable for measuring MFLOPS. SSE instructions (SSE, SSE2, SSE3, SSSE3, SSE4A, SSE41, SSE42, AVX).", + "UMask": "0x4" + }, + { + "EventName": "ex_ret_mmx_fp_instr.mmx_instr", + "EventCode": "0xcb", + "BriefDescription": "MMX instructions.", + "PublicDescription": "The number of MMX, SSE or x87 instructions retired. The UnitMask allows the selection of the individual classes of instructions as given in the table. Each increment represents one complete instruction. 
Since this event includes non-numeric instructions it is not suitable for measuring MFLOPS. MMX instructions.", + "UMask": "0x2" + }, + { + "EventName": "ex_ret_mmx_fp_instr.x87_instr", + "EventCode": "0xcb", + "BriefDescription": "x87 instructions.", + "PublicDescription": "The number of MMX, SSE or x87 instructions retired. The UnitMask allows the selection of the individual classes of instructions as given in the table. Each increment represents one complete instruction. Since this event includes non-numeric instructions it is not suitable for measuring MFLOPS. x87 instructions.", + "UMask": "0x1" + }, + { + "EventName": "ex_ret_cond", + "EventCode": "0xd1", + "BriefDescription": "Retired Conditional Branch Instructions." + }, + { + "EventName": "ex_ret_cond_misp", + "EventCode": "0xd2", + "BriefDescription": "Retired Conditional Branch Instructions Mispredicted." + }, + { + "EventName": "ex_div_busy", + "EventCode": "0xd3", + "BriefDescription": "Div Cycles Busy count." + }, + { + "EventName": "ex_div_count", + "EventCode": "0xd4", + "BriefDescription": "Div Op Count." + }, + { + "EventName": "ex_tagged_ibs_ops.ibs_count_rollover", + "EventCode": "0x1cf", + "BriefDescription": "Number of times an op could not be tagged by IBS because of a previous tagged op that has not retired.", + "PublicDescription": "Tagged IBS Ops. Number of times an op could not be tagged by IBS because of a previous tagged op that has not retired.", + "UMask": "0x4" + }, + { + "EventName": "ex_tagged_ibs_ops.ibs_tagged_ops_ret", + "EventCode": "0x1cf", + "BriefDescription": "Number of Ops tagged by IBS that retired.", + "PublicDescription": "Tagged IBS Ops. Number of Ops tagged by IBS that retired.", + "UMask": "0x2" + }, + { + "EventName": "ex_tagged_ibs_ops.ibs_tagged_ops", + "EventCode": "0x1cf", + "BriefDescription": "Number of Ops tagged by IBS.", + "PublicDescription": "Tagged IBS Ops. Number of Ops tagged by IBS.", + "UMask": "0x1" + }, + { + "EventName": "ex_ret_fus_brnch_inst", + "EventCode": "0x1d0", + "BriefDescription": "The number of fused retired branch instructions retired per cycle. The number of events logged per cycle can vary from 0 to 3." + } +] diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/floating-point.json b/tools/perf/pmu-events/arch/x86/amdfam17h/floating-point.json new file mode 100644 index 000000000000..ea4711983d1d --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdfam17h/floating-point.json @@ -0,0 +1,168 @@ +[ + { + "EventName": "fpu_pipe_assignment.dual", + "EventCode": "0x00", + "BriefDescription": "Total number multi-pipe uOps.", + "PublicDescription": "The number of operations (uOps) and dual-pipe uOps dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number multi-pipe uOps assigned to Pipe 3.", + "UMask": "0xf0" + }, + { + "EventName": "fpu_pipe_assignment.total", + "EventCode": "0x00", + "BriefDescription": "Total number uOps.", + "PublicDescription": "The number of operations (uOps) and dual-pipe uOps dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. 
This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number uOps assigned to Pipe 3.", + "UMask": "0xf" + }, + { + "EventName": "fp_sched_empty", + "EventCode": "0x01", + "BriefDescription": "This is a speculative event. The number of cycles in which the FPU scheduler is empty. Note that some Ops like FP loads bypass the scheduler." + }, + { + "EventName": "fp_retx87_fp_ops.all", + "EventCode": "0x02", + "BriefDescription": "All Ops.", + "PublicDescription": "The number of x87 floating-point Ops that have retired. The number of events logged per cycle can vary from 0 to 8.", + "UMask": "0x7" + }, + { + "EventName": "fp_retx87_fp_ops.div_sqr_r_ops", + "EventCode": "0x02", + "BriefDescription": "Divide and square root Ops.", + "PublicDescription": "The number of x87 floating-point Ops that have retired. The number of events logged per cycle can vary from 0 to 8. Divide and square root Ops.", + "UMask": "0x4" + }, + { + "EventName": "fp_retx87_fp_ops.mul_ops", + "EventCode": "0x02", + "BriefDescription": "Multiply Ops.", + "PublicDescription": "The number of x87 floating-point Ops that have retired. The number of events logged per cycle can vary from 0 to 8. Multiply Ops.", + "UMask": "0x2" + }, + { + "EventName": "fp_retx87_fp_ops.add_sub_ops", + "EventCode": "0x02", + "BriefDescription": "Add/subtract Ops.", + "PublicDescription": "The number of x87 floating-point Ops that have retired. The number of events logged per cycle can vary from 0 to 8. Add/subtract Ops.", + "UMask": "0x1" + }, + { + "EventName": "fp_ret_sse_avx_ops.all", + "EventCode": "0x03", + "BriefDescription": "All FLOPS.", + "PublicDescription": "This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15.", + "UMask": "0xff" + }, + { + "EventName": "fp_ret_sse_avx_ops.dp_mult_add_flops", + "EventCode": "0x03", + "BriefDescription": "Double precision multiply-add FLOPS. Multiply-add counts as 2 FLOPS.", + "PublicDescription": "This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15. Double precision multiply-add FLOPS. Multiply-add counts as 2 FLOPS.", + "UMask": "0x80" + }, + { + "EventName": "fp_ret_sse_avx_ops.dp_div_flops", + "EventCode": "0x03", + "BriefDescription": "Double precision divide/square root FLOPS.", + "PublicDescription": "This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15. Double precision divide/square root FLOPS.", + "UMask": "0x40" + }, + { + "EventName": "fp_ret_sse_avx_ops.dp_mult_flops", + "EventCode": "0x03", + "BriefDescription": "Double precision multiply FLOPS.", + "PublicDescription": "This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15. Double precision multiply FLOPS.", + "UMask": "0x20" + }, + { + "EventName": "fp_ret_sse_avx_ops.dp_add_sub_flops", + "EventCode": "0x03", + "BriefDescription": "Double precision add/subtract FLOPS.", + "PublicDescription": "This is a retire-based event. The number of retired SSE/AVX FLOPS. 
The number of events logged per cycle can vary from 0 to 64. This event can count above 15. Double precision add/subtract FLOPS.", + "UMask": "0x10" + }, + { + "EventName": "fp_ret_sse_avx_ops.sp_mult_add_flops", + "EventCode": "0x03", + "BriefDescription": "Single precision multiply-add FLOPS. Multiply-add counts as 2 FLOPS.", + "PublicDescription": "This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15. Single precision multiply-add FLOPS. Multiply-add counts as 2 FLOPS.", + "UMask": "0x8" + }, + { + "EventName": "fp_ret_sse_avx_ops.sp_div_flops", + "EventCode": "0x03", + "BriefDescription": "Single-precision divide/square root FLOPS.", + "PublicDescription": "This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15. Single-precision divide/square root FLOPS.", + "UMask": "0x4" + }, + { + "EventName": "fp_ret_sse_avx_ops.sp_mult_flops", + "EventCode": "0x03", + "BriefDescription": "Single-precision multiply FLOPS.", + "PublicDescription": "This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15. Single-precision multiply FLOPS.", + "UMask": "0x2" + }, + { + "EventName": "fp_ret_sse_avx_ops.sp_add_sub_flops", + "EventCode": "0x03", + "BriefDescription": "Single-precision add/subtract FLOPS.", + "PublicDescription": "This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15. Single-precision add/subtract FLOPS.", + "UMask": "0x1" + }, + { + "EventName": "fp_num_mov_elim_scal_op.optimized", + "EventCode": "0x04", + "BriefDescription": "Number of Scalar Ops optimized.", + "PublicDescription": "This is a dispatch based speculative event, and is useful for measuring the effectiveness of the Move elimination and Scalar code optimization schemes. Number of Scalar Ops optimized.", + "UMask": "0x8" + }, + { + "EventName": "fp_num_mov_elim_scal_op.opt_potential", + "EventCode": "0x04", + "BriefDescription": "Number of Ops that are candidates for optimization (have Z-bit either set or pass).", + "PublicDescription": "This is a dispatch based speculative event, and is useful for measuring the effectiveness of the Move elimination and Scalar code optimization schemes. Number of Ops that are candidates for optimization (have Z-bit either set or pass).", + "UMask": "0x4" + }, + { + "EventName": "fp_num_mov_elim_scal_op.sse_mov_ops_elim", + "EventCode": "0x04", + "BriefDescription": "Number of SSE Move Ops eliminated.", + "PublicDescription": "This is a dispatch based speculative event, and is useful for measuring the effectiveness of the Move elimination and Scalar code optimization schemes. Number of SSE Move Ops eliminated.", + "UMask": "0x2" + }, + { + "EventName": "fp_num_mov_elim_scal_op.sse_mov_ops", + "EventCode": "0x04", + "BriefDescription": "Number of SSE Move Ops.", + "PublicDescription": "This is a dispatch based speculative event, and is useful for measuring the effectiveness of the Move elimination and Scalar code optimization schemes. 
Number of SSE Move Ops.", + "UMask": "0x1" + }, + { + "EventName": "fp_retired_ser_ops.x87_ctrl_ret", + "EventCode": "0x05", + "BriefDescription": "x87 control word mispredict traps due to mispredictions in RC or PC, or changes in mask bits.", + "PublicDescription": "The number of serializing Ops retired. x87 control word mispredict traps due to mispredictions in RC or PC, or changes in mask bits.", + "UMask": "0x8" + }, + { + "EventName": "fp_retired_ser_ops.x87_bot_ret", + "EventCode": "0x05", + "BriefDescription": "x87 bottom-executing uOps retired.", + "PublicDescription": "The number of serializing Ops retired. x87 bottom-executing uOps retired.", + "UMask": "0x4" + }, + { + "EventName": "fp_retired_ser_ops.sse_ctrl_ret", + "EventCode": "0x05", + "BriefDescription": "SSE control word mispredict traps due to mispredictions in RC, FTZ or DAZ, or changes in mask bits.", + "PublicDescription": "The number of serializing Ops retired. SSE control word mispredict traps due to mispredictions in RC, FTZ or DAZ, or changes in mask bits.", + "UMask": "0x2" + }, + { + "EventName": "fp_retired_ser_ops.sse_bot_ret", + "EventCode": "0x05", + "BriefDescription": "SSE bottom-executing uOps retired.", + "PublicDescription": "The number of serializing Ops retired. SSE bottom-executing uOps retired.", + "UMask": "0x1" + } +] diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/memory.json b/tools/perf/pmu-events/arch/x86/amdfam17h/memory.json new file mode 100644 index 000000000000..fa2d60d4def0 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdfam17h/memory.json @@ -0,0 +1,162 @@ +[ + { + "EventName": "ls_locks.bus_lock", + "EventCode": "0x25", + "BriefDescription": "Bus lock when a locked operations crosses a cache boundary or is done on an uncacheable memory type.", + "PublicDescription": "Bus lock when a locked operations crosses a cache boundary or is done on an uncacheable memory type.", + "UMask": "0x1" + }, + { + "EventName": "ls_dispatch.ld_st_dispatch", + "EventCode": "0x29", + "BriefDescription": "Load-op-Stores.", + "PublicDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed. Load-op-Stores.", + "UMask": "0x4" + }, + { + "EventName": "ls_dispatch.store_dispatch", + "EventCode": "0x29", + "BriefDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed.", + "PublicDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed.", + "UMask": "0x2" + }, + { + "EventName": "ls_dispatch.ld_dispatch", + "EventCode": "0x29", + "BriefDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed.", + "PublicDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed.", + "UMask": "0x1" + }, + { + "EventName": "ls_stlf", + "EventCode": "0x35", + "BriefDescription": "Number of STLF hits." + }, + { + "EventName": "ls_dc_accesses", + "EventCode": "0x40", + "BriefDescription": "The number of accesses to the data cache for load and store references. This may include certain microcode scratchpad accesses, although these are generally rare. Each increment represents an eight-byte access, although the instruction may only be accessing a portion of that. This event is a speculative event." 
+ }, + { + "EventName": "ls_l1_d_tlb_miss.all", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss or Reload off all sizes.", + "PublicDescription": "L1 DTLB Miss or Reload off all sizes.", + "UMask": "0xff" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_1g_l2_miss", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss of a page of 1G size.", + "PublicDescription": "L1 DTLB Miss of a page of 1G size.", + "UMask": "0x80" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_2m_l2_miss", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss of a page of 2M size.", + "PublicDescription": "L1 DTLB Miss of a page of 2M size.", + "UMask": "0x40" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_32k_l2_miss", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss of a page of 32K size.", + "PublicDescription": "L1 DTLB Miss of a page of 32K size.", + "UMask": "0x20" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_4k_l2_miss", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss of a page of 4K size.", + "PublicDescription": "L1 DTLB Miss of a page of 4K size.", + "UMask": "0x10" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_1g_l2_hit", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Reload of a page of 1G size.", + "PublicDescription": "L1 DTLB Reload of a page of 1G size.", + "UMask": "0x8" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_2m_l2_hit", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Reload of a page of 2M size.", + "PublicDescription": "L1 DTLB Reload of a page of 2M size.", + "UMask": "0x4" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_32k_l2_hit", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Reload of a page of 32K size.", + "PublicDescription": "L1 DTLB Reload of a page of 32K size.", + "UMask": "0x2" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_4k_l2_hit", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Reload of a page of 4K size.", + "PublicDescription": "L1 DTLB Reload of a page of 4K size.", + "UMask": "0x1" + }, + { + "EventName": "ls_tablewalker.perf_mon_tablewalk_alloc_iside", + "EventCode": "0x46", + "BriefDescription": "Tablewalker allocation.", + "PublicDescription": "Tablewalker allocation.", + "UMask": "0xc" + }, + { + "EventName": "ls_tablewalker.perf_mon_tablewalk_alloc_dside", + "EventCode": "0x46", + "BriefDescription": "Tablewalker allocation.", + "PublicDescription": "Tablewalker allocation.", + "UMask": "0x3" + }, + { + "EventName": "ls_misal_accesses", + "EventCode": "0x47", + "BriefDescription": "Misaligned loads." + }, + { + "EventName": "ls_pref_instr_disp.prefetch_nta", + "EventCode": "0x4b", + "BriefDescription": "Software Prefetch Instructions (PREFETCHNTA instruction) Dispatched.", + "PublicDescription": "Software Prefetch Instructions (PREFETCHNTA instruction) Dispatched.", + "UMask": "0x4" + }, + { + "EventName": "ls_pref_instr_disp.store_prefetch_w", + "EventCode": "0x4b", + "BriefDescription": "Software Prefetch Instructions (3DNow PREFETCHW instruction) Dispatched.", + "PublicDescription": "Software Prefetch Instructions (3DNow PREFETCHW instruction) Dispatched.", + "UMask": "0x2" + }, + { + "EventName": "ls_pref_instr_disp.load_prefetch_w", + "EventCode": "0x4b", + "BriefDescription": "Prefetch, Prefetch_T0_T1_T2.", + "PublicDescription": "Software Prefetch Instructions Dispatched. 
Prefetch, Prefetch_T0_T1_T2.", + "UMask": "0x1" + }, + { + "EventName": "ls_inef_sw_pref.mab_mch_cnt", + "EventCode": "0x52", + "BriefDescription": "The number of software prefetches that did not fetch data outside of the processor core.", + "PublicDescription": "The number of software prefetches that did not fetch data outside of the processor core.", + "UMask": "0x2" + }, + { + "EventName": "ls_inef_sw_pref.data_pipe_sw_pf_dc_hit", + "EventCode": "0x52", + "BriefDescription": "The number of software prefetches that did not fetch data outside of the processor core.", + "PublicDescription": "The number of software prefetches that did not fetch data outside of the processor core.", + "UMask": "0x1" + }, + { + "EventName": "ls_not_halted_cyc", + "EventCode": "0x76", + "BriefDescription": "Cycles not in Halt." + } +] diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/other.json b/tools/perf/pmu-events/arch/x86/amdfam17h/other.json new file mode 100644 index 000000000000..b26a00d05a2e --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdfam17h/other.json @@ -0,0 +1,65 @@ +[ + { + "EventName": "ic_oc_mode_switch.oc_ic_mode_switch", + "EventCode": "0x28a", + "BriefDescription": "OC to IC mode switch.", + "PublicDescription": "OC Mode Switch. OC to IC mode switch.", + "UMask": "0x2" + }, + { + "EventName": "ic_oc_mode_switch.ic_oc_mode_switch", + "EventCode": "0x28a", + "BriefDescription": "IC to OC mode switch.", + "PublicDescription": "OC Mode Switch. IC to OC mode switch.", + "UMask": "0x1" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.retire_token_stall", + "EventCode": "0xaf", + "BriefDescription": "RETIRE Tokens unavailable.", + "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. RETIRE Tokens unavailable.", + "UMask": "0x40" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.agsq_token_stall", + "EventCode": "0xaf", + "BriefDescription": "AGSQ Tokens unavailable.", + "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. AGSQ Tokens unavailable.", + "UMask": "0x20" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.alu_token_stall", + "EventCode": "0xaf", + "BriefDescription": "ALU tokens total unavailable.", + "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALU tokens total unavailable.", + "UMask": "0x10" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.alsq3_0_token_stall", + "EventCode": "0xaf", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall.", + "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall.", + "UMask": "0x8" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.alsq3_token_stall", + "EventCode": "0xaf", + "BriefDescription": "ALSQ 3 Tokens unavailable.", + "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ 3 Tokens unavailable.", + "UMask": "0x4" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.alsq2_token_stall", + "EventCode": "0xaf", + "BriefDescription": "ALSQ 2 Tokens unavailable.", + "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. 
ALSQ 2 Tokens unavailable.", + "UMask": "0x2" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.alsq1_token_stall", + "EventCode": "0xaf", + "BriefDescription": "ALSQ 1 Tokens unavailable.", + "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ 1 Tokens unavailable.", + "UMask": "0x1" + } +] diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv index e05c2c8458fc..d6984a3017e0 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -33,3 +33,4 @@ GenuineIntel-6-25,v2,westmereep-sp,core GenuineIntel-6-2F,v2,westmereex,core GenuineIntel-6-55-[01234],v1,skylakex,core GenuineIntel-6-55-[56789ABCDEF],v1,cascadelakex,core +AuthenticAMD-23-[[:xdigit:]]+,v1,amdfam17h,core
From: Kan Liang kan.liang@linux.intel.com
mainline inclusion from mainline-v5.2-rc1 commit bf6d18cffa5f26bd5dc71485c2a2ad0c42a0ce60 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Perf cannot parse UPI (Intel's "Ultra Path Interconnect" [1]) events.
  # perf stat -e UPI_DATA_BANDWIDTH_TX
  event syntax error: 'UPI_DATA_BANDWIDTH_TX'
                       \___ parser error
  Run 'perf list' for a list of valid events
The JSON lists call the box "UPI LL", while perf calls it "upi". Add conversion support to jevents so that the unit is translated properly.
Committer notes:
[1] https://en.wikipedia.org/wiki/Intel_Ultra_Path_Interconnect
"The Intel Ultra Path Interconnect (UPI) is a point-to-point processor interconnect developed by Intel which replaced the Intel QuickPath Interconnect (QPI) in Xeon Skylake-SP platforms starting in 2017.
UPI is a low-latency coherent interconnect for scalable multiprocessor systems with a shared address space. It uses a directory-based home snoop coherency protocol with a transfer speed of up to 10.4 GT/s. Supporting processors typically have two or three UPI links."
Signed-off-by: Kan Liang kan.liang@linux.intel.com Cc: Andi Kleen ak@linux.intel.com Cc: Jiri Olsa jolsa@kernel.org Link: http://lkml.kernel.org/r/1557234991-130456-1-git-send-email-kan.liang@linux.... Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- tools/perf/pmu-events/jevents.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c index 6631970f9683..581a31412f00 100644 --- a/tools/perf/pmu-events/jevents.c +++ b/tools/perf/pmu-events/jevents.c @@ -235,6 +235,7 @@ static struct map { { "iMPH-U", "uncore_arb" }, { "CPU-M-CF", "cpum_cf" }, { "CPU-M-SF", "cpum_sf" }, + { "UPI LL", "uncore_upi" }, {} };
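For readers unfamiliar with jevents, the translation is just a scan of the table shown in the hunk above: the "Unit" string found in the JSON is looked up and replaced with the PMU name perf uses. A minimal self-contained C sketch of that idea follows; the table rows mirror the diff, but the function and variable names are illustrative, not the actual jevents internals:

  #include <stdio.h>
  #include <string.h>

  /* Illustrative copy of the JSON-unit to perf-PMU-name table. */
  static struct map {
          const char *json;       /* unit name as it appears in the JSON */
          const char *perf;       /* PMU name that perf expects */
  } unit_to_pmu[] = {
          { "iMPH-U",   "uncore_arb" },
          { "CPU-M-CF", "cpum_cf" },
          { "CPU-M-SF", "cpum_sf" },
          { "UPI LL",   "uncore_upi" },  /* entry added by this patch */
          { NULL, NULL }
  };

  /* Return the perf PMU name for a JSON unit, or the unit itself
   * when no translation is known. */
  static const char *unit_name(const char *json_unit)
  {
          const struct map *m;

          for (m = unit_to_pmu; m->json; m++)
                  if (!strcmp(m->json, json_unit))
                          return m->perf;
          return json_unit;
  }

  int main(void)
  {
          printf("%s\n", unit_name("UPI LL"));  /* prints: uncore_upi */
          return 0;
  }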
From: John Garry john.garry@huawei.com
mainline inclusion from mainline-v5.3-rc1 commit 57cc732479bac2a3cbd759fb07188657c871d5c1 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Add support for Hisi hip08 DDRC PMU aliasing. We can now do something like this:
$ perf list
[snip]
uncore ddrc:
  uncore_hisi_ddrc.act_cmd
       [DDRC active commands. Unit: hisi_sccl,ddrc]
  uncore_hisi_ddrc.flux_rcmd
       [DDRC read commands. Unit: hisi_sccl,ddrc]
  uncore_hisi_ddrc.flux_wcmd
       [DDRC write commands. Unit: hisi_sccl,ddrc]
  uncore_hisi_ddrc.flux_wr
       [DDRC precharge commands. Unit: hisi_sccl,ddrc]
  uncore_hisi_ddrc.rnk_chg
       [DDRC rank commands. Unit: hisi_sccl,ddrc]
  uncore_hisi_ddrc.rw_chg
       [DDRC read and write changes. Unit: hisi_sccl,ddrc]
Performance counter stats for 'system wide':
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl1_ddrc0]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl3_ddrc1]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl5_ddrc2]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl7_ddrc3]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl5_ddrc0]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl7_ddrc1]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl1_ddrc3]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl1_ddrc1]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl3_ddrc2]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl5_ddrc3]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl3_ddrc0]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl5_ddrc1]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl7_ddrc2]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl7_ddrc0]
            20,421      uncore_hisi_ddrc.flux_rcmd [hisi_sccl1_ddrc2]
                 0      uncore_hisi_ddrc.flux_rcmd [hisi_sccl3_ddrc3]
1.001559011 seconds time elapsed
The kernel driver is in drivers/perf/hisilicon/hisi_uncore_ddrc_pmu.c
Signed-off-by: John Garry john.garry@huawei.com Acked-by: Jiri Olsa jolsa@kernel.org Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Andi Kleen ak@linux.intel.com Cc: Ben Hutchings ben@decadent.org.uk Cc: Hendrik Brueckner brueckner@linux.ibm.com Cc: Kan Liang kan.liang@linux.intel.com Cc: Mark Rutland mark.rutland@arm.com Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Shaokun Zhang zhangshaokun@hisilicon.com Cc: Thomas Richter tmricht@linux.ibm.com Cc: Will Deacon will.deacon@arm.com Cc: linux-arm-kernel@lists.infradead.org Cc: linuxarm@huawei.com Link: http://lkml.kernel.org/r/1561732552-143038-3-git-send-email-john.garry@huawe... Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- .../arm64/hisilicon/hip08/uncore-ddrc.json | 44 +++++++++++++++++++ tools/perf/pmu-events/jevents.c | 1 + 2 files changed, 45 insertions(+) create mode 100644 tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json new file mode 100644 index 000000000000..0d1556fcdffe --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json @@ -0,0 +1,44 @@ +[ + { + "EventCode": "0x02", + "EventName": "uncore_hisi_ddrc.flux_wcmd", + "BriefDescription": "DDRC write commands", + "PublicDescription": "DDRC write commands", + "Unit": "hisi_sccl,ddrc", + }, + { + "EventCode": "0x03", + "EventName": "uncore_hisi_ddrc.flux_rcmd", + "BriefDescription": "DDRC read commands", + "PublicDescription": "DDRC read commands", + "Unit": "hisi_sccl,ddrc", + }, + { + "EventCode": "0x04", + "EventName": "uncore_hisi_ddrc.flux_wr", + "BriefDescription": "DDRC precharge commands", + "PublicDescription": "DDRC precharge commands", + "Unit": "hisi_sccl,ddrc", + }, + { + "EventCode": "0x05", + "EventName": "uncore_hisi_ddrc.act_cmd", + "BriefDescription": "DDRC active commands", + "PublicDescription": "DDRC active commands", + "Unit": "hisi_sccl,ddrc", + }, + { + "EventCode": "0x06", + "EventName": "uncore_hisi_ddrc.rnk_chg", + "BriefDescription": "DDRC rank commands", + "PublicDescription": "DDRC rank commands", + "Unit": "hisi_sccl,ddrc", + }, + { + "EventCode": "0x07", + "EventName": "uncore_hisi_ddrc.rw_chg", + "BriefDescription": "DDRC read and write changes", + "PublicDescription": "DDRC read and write changes", + "Unit": "hisi_sccl,ddrc", + }, +] diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c index 581a31412f00..8d9e42823975 100644 --- a/tools/perf/pmu-events/jevents.c +++ b/tools/perf/pmu-events/jevents.c @@ -236,6 +236,7 @@ static struct map { { "CPU-M-CF", "cpum_cf" }, { "CPU-M-SF", "cpum_sf" }, { "UPI LL", "uncore_upi" }, + { "hisi_sccl,ddrc", "hisi_sccl,ddrc" }, {} };
From: John Garry john.garry@huawei.com
mainline inclusion from mainline-v5.3-rc1 commit 8f5b703add99473b59b4a38a6b66afbafc29d92e category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Add support for Hisi hip08 HHA PMU aliasing.
The kernel driver is in drivers/perf/hisilicon/hisi_uncore_hha_pmu.c
Signed-off-by: John Garry john.garry@huawei.com Acked-by: Jiri Olsa jolsa@kernel.org Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Andi Kleen ak@linux.intel.com Cc: Ben Hutchings ben@decadent.org.uk Cc: Hendrik Brueckner brueckner@linux.ibm.com Cc: Kan Liang kan.liang@linux.intel.com Cc: Mark Rutland mark.rutland@arm.com Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Shaokun Zhang zhangshaokun@hisilicon.com Cc: Thomas Richter tmricht@linux.ibm.com Cc: Will Deacon will.deacon@arm.com Cc: linux-arm-kernel@lists.infradead.org Cc: linuxarm@huawei.com Link: http://lkml.kernel.org/r/1561732552-143038-4-git-send-email-john.garry@huawe... Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- .../arm64/hisilicon/hip08/uncore-hha.json | 51 +++++++++++++++++++ tools/perf/pmu-events/jevents.c | 1 + 2 files changed, 52 insertions(+) create mode 100644 tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json new file mode 100644 index 000000000000..447d3064de90 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json @@ -0,0 +1,51 @@ +[ + { + "EventCode": "0x00", + "EventName": "uncore_hisi_hha.rx_ops_num", + "BriefDescription": "The number of all operations received by the HHA", + "PublicDescription": "The number of all operations received by the HHA", + "Unit": "hisi_sccl,hha", + }, + { + "EventCode": "0x01", + "EventName": "uncore_hisi_hha.rx_outer", + "BriefDescription": "The number of all operations received by the HHA from another socket", + "PublicDescription": "The number of all operations received by the HHA from another socket", + "Unit": "hisi_sccl,hha", + }, + { + "EventCode": "0x02", + "EventName": "uncore_hisi_hha.rx_sccl", + "BriefDescription": "The number of all operations received by the HHA from another SCCL in this socket", + "PublicDescription": "The number of all operations received by the HHA from another SCCL in this socket", + "Unit": "hisi_sccl,hha", + }, + { + "EventCode": "0x1c", + "EventName": "uncore_hisi_hha.rd_ddr_64b", + "BriefDescription": "The number of read operations sent by HHA to DDRC which size is 64 bytes", + "PublicDescription": "The number of read operations sent by HHA to DDRC which size is 64bytes", + "Unit": "hisi_sccl,hha", + }, + { + "EventCode": "0x1d", + "EventName": "uncore_hisi_hha.wr_dr_64b", + "BriefDescription": "The number of write operations sent by HHA to DDRC which size is 64 bytes", + "PublicDescription": "The number of write operations sent by HHA to DDRC which size is 64 bytes", + "Unit": "hisi_sccl,hha", + }, + { + "EventCode": "0x1e", + "EventName": "uncore_hisi_hha.rd_ddr_128b", + "BriefDescription": "The number of read operations sent by HHA to DDRC which size is 128 bytes", + "PublicDescription": "The number of read operations sent by HHA to DDRC which size is 128 bytes", + "Unit": "hisi_sccl,hha", + }, + { + "EventCode": "0x1f", + "EventName": "uncore_hisi_hha.wr_ddr_128b", + "BriefDescription": "The number of write operations sent by HHA to DDRC which size is 128 bytes", + "PublicDescription": "The number of write operations sent by HHA to DDRC which size is 128 bytes", + "Unit": "hisi_sccl,hha", + }, +] diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c index 8d9e42823975..5e0fdab4dce6 100644 --- a/tools/perf/pmu-events/jevents.c +++ b/tools/perf/pmu-events/jevents.c @@ -237,6 +237,7 @@ static struct map { { "CPU-M-SF", "cpum_sf" }, { "UPI LL", "uncore_upi" }, { "hisi_sccl,ddrc", "hisi_sccl,ddrc" }, + { "hisi_sccl,hha", "hisi_sccl,hha" }, {} };
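As with the DDRC events two patches back, once this alias table and its unit mapping are in place the HHA events can be requested symbolically. A hypothetical invocation (the event name comes from the JSON above; no output was captured here):

  # perf stat -e uncore_hisi_hha.rx_ops_num -a sleep 1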
From: John Garry john.garry@huawei.com
mainline inclusion from mainline-v5.3-rc1 commit edd93a4076cf18ede423c167de6d6fb8e4211e7b category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Add support for Hisi hip08 L3C PMU aliasing.
The kernel driver is in drivers/perf/hisilicon/hisi_uncore_l3c_pmu.c
Signed-off-by: John Garry john.garry@huawei.com Acked-by: Jiri Olsa jolsa@kernel.org Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Andi Kleen ak@linux.intel.com Cc: Ben Hutchings ben@decadent.org.uk Cc: Hendrik Brueckner brueckner@linux.ibm.com Cc: Kan Liang kan.liang@linux.intel.com Cc: Mark Rutland mark.rutland@arm.com Cc: Mathieu Poirier mathieu.poirier@linaro.org Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Shaokun Zhang zhangshaokun@hisilicon.com Cc: Thomas Richter tmricht@linux.ibm.com Cc: Will Deacon will.deacon@arm.com Cc: linux-arm-kernel@lists.infradead.org Cc: linuxarm@huawei.com Link: http://lkml.kernel.org/r/1561732552-143038-5-git-send-email-john.garry@huawe... Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- .../arm64/hisilicon/hip08/uncore-l3c.json | 37 +++++++++++++++++++ tools/perf/pmu-events/jevents.c | 1 + 2 files changed, 38 insertions(+) create mode 100644 tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json new file mode 100644 index 000000000000..ca48747642e1 --- /dev/null +++ b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json @@ -0,0 +1,37 @@ +[ + { + "EventCode": "0x00", + "EventName": "uncore_hisi_l3c.rd_cpipe", + "BriefDescription": "Total read accesses", + "PublicDescription": "Total read accesses", + "Unit": "hisi_sccl,l3c", + }, + { + "EventCode": "0x01", + "EventName": "uncore_hisi_l3c.wr_cpipe", + "BriefDescription": "Total write accesses", + "PublicDescription": "Total write accesses", + "Unit": "hisi_sccl,l3c", + }, + { + "EventCode": "0x02", + "EventName": "uncore_hisi_l3c.rd_hit_cpipe", + "BriefDescription": "Total read hits", + "PublicDescription": "Total read hits", + "Unit": "hisi_sccl,l3c", + }, + { + "EventCode": "0x03", + "EventName": "uncore_hisi_l3c.wr_hit_cpipe", + "BriefDescription": "Total write hits", + "PublicDescription": "Total write hits", + "Unit": "hisi_sccl,l3c", + }, + { + "EventCode": "0x04", + "EventName": "uncore_hisi_l3c.victim_num", + "BriefDescription": "l3c precharge commands", + "PublicDescription": "l3c precharge commands", + "Unit": "hisi_sccl,l3c", + }, +] diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c index 5e0fdab4dce6..09816c443036 100644 --- a/tools/perf/pmu-events/jevents.c +++ b/tools/perf/pmu-events/jevents.c @@ -238,6 +238,7 @@ static struct map { { "UPI LL", "uncore_upi" }, { "hisi_sccl,ddrc", "hisi_sccl,ddrc" }, { "hisi_sccl,hha", "hisi_sccl,hha" }, + { "hisi_sccl,l3c", "hisi_sccl,l3c" }, {} };
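The L3C aliases work the same way, and pairing the access and hit events gives a quick read hit ratio. A hypothetical invocation (event names from the JSON above; the ratio rd_hit_cpipe/rd_cpipe is then computed by hand from the two counts):

  # perf stat -e uncore_hisi_l3c.rd_cpipe,uncore_hisi_l3c.rd_hit_cpipe -a sleep 1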
From: Kim Phillips kim.phillips@amd.com
mainline inclusion from mainline-v5.4-rc1 commit faef87494139cf2cc4d188d5730251ade9b2022d category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Allow users to symbolically specify L3 events for Family 17h processors using the existing AMD Uncore driver.
The source of the event descriptions is section 2.1.15.4.1 "L3 Cache PMC Events" of the latest Family 17h PPR, available here:
https://www.amd.com/system/files/TechDocs/55570-B1_PUB.zip
Only BriefDescriptions are added, since they show both with and without the -v and --details flags.
Tested with:
  # perf stat -e l3_request_g1.caching_l3_cache_accesses,amd_l3/event=0x01,umask=0x80/,l3_comb_clstr_state.request_miss,amd_l3/event=0x06,umask=0x01/ perf bench mem memcpy -s 4mb -l 100 -f default
  ...
         7,006,831      l3_request_g1.caching_l3_cache_accesses
         7,006,830      amd_l3/event=0x01,umask=0x80/
           366,530      l3_comb_clstr_state.request_miss
           366,568      amd_l3/event=0x06,umask=0x01/
Signed-off-by: Kim Phillips kim.phillips@amd.com Reviewed-by: Andi Kleen ak@linux.intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Andi Kleen ak@linux.intel.com Cc: Borislav Petkov bp@suse.de Cc: Janakarajan Natarajan janakarajan.natarajan@amd.com Cc: Jin Yao yao.jin@linux.intel.com Cc: Jiri Olsa jolsa@redhat.com Cc: Kan Liang kan.liang@linux.intel.com Cc: Luke Mujica lukemujica@google.com Cc: Martin Liška mliska@suse.cz Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Link: http://lore.kernel.org/lkml/20190919204306.12598-1-kim.phillips@amd.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- .../pmu-events/arch/x86/amdfam17h/cache.json | 42 +++++++++++++++++++ tools/perf/pmu-events/jevents.c | 1 + 2 files changed, 43 insertions(+)
diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/cache.json b/tools/perf/pmu-events/arch/x86/amdfam17h/cache.json index fad4af9142cb..6221a840fcea 100644 --- a/tools/perf/pmu-events/arch/x86/amdfam17h/cache.json +++ b/tools/perf/pmu-events/arch/x86/amdfam17h/cache.json @@ -283,5 +283,47 @@ "BriefDescription": "Total cycles spent with one or more fill requests in flight from L2.", "PublicDescription": "Total cycles spent with one or more fill requests in flight from L2.", "UMask": "0x1" + }, + { + "EventName": "l3_request_g1.caching_l3_cache_accesses", + "EventCode": "0x01", + "BriefDescription": "Caching: L3 cache accesses", + "UMask": "0x80", + "Unit": "L3PMC" + }, + { + "EventName": "l3_lookup_state.all_l3_req_typs", + "EventCode": "0x04", + "BriefDescription": "All L3 Request Types", + "UMask": "0xff", + "Unit": "L3PMC" + }, + { + "EventName": "l3_comb_clstr_state.other_l3_miss_typs", + "EventCode": "0x06", + "BriefDescription": "Other L3 Miss Request Types", + "UMask": "0xfe", + "Unit": "L3PMC" + }, + { + "EventName": "l3_comb_clstr_state.request_miss", + "EventCode": "0x06", + "BriefDescription": "L3 cache misses", + "UMask": "0x01", + "Unit": "L3PMC" + }, + { + "EventName": "xi_sys_fill_latency", + "EventCode": "0x90", + "BriefDescription": "L3 Cache Miss Latency. Total cycles for all transactions divided by 16. Ignores SliceMask and ThreadMask.", + "UMask": "0x00", + "Unit": "L3PMC" + }, + { + "EventName": "xi_ccx_sdp_req1.all_l3_miss_req_typs", + "EventCode": "0x9a", + "BriefDescription": "All L3 Miss Request Types. Ignores SliceMask and ThreadMask.", + "UMask": "0x3f", + "Unit": "L3PMC" } ] diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c index 09816c443036..48bf546e6798 100644 --- a/tools/perf/pmu-events/jevents.c +++ b/tools/perf/pmu-events/jevents.c @@ -239,6 +239,7 @@ static struct map { { "hisi_sccl,ddrc", "hisi_sccl,ddrc" }, { "hisi_sccl,hha", "hisi_sccl,hha" }, { "hisi_sccl,l3c", "hisi_sccl,l3c" }, + { "L3PMC", "amd_l3" }, {} };
From: Haiyan Song haiyanx.song@intel.com
mainline inclusion from mainline-v5.4-rc1 commit b115df076d337a727017538d11d7d46f5bcbff15 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Add an Intel Icelake event file for perf.
Signed-off-by: Haiyan Song haiyanx.song@intel.com Reviewed-by: Kan Liang kan.liang@linux.intel.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Andi Kleen ak@linux.intel.com Cc: Jin Yao yao.jin@intel.com Cc: Jiri Olsa jolsa@kernel.org Cc: Peter Zijlstra peterz@infradead.org Link: https://lkml.kernel.org/r/8859095e-5b02-d6b7-fbdc-3f42b714bae0@intel.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- .../pmu-events/arch/x86/icelake/cache.json | 552 +++++++++++ .../arch/x86/icelake/floating-point.json | 102 ++ .../pmu-events/arch/x86/icelake/frontend.json | 424 +++++++++ .../pmu-events/arch/x86/icelake/memory.json | 410 ++++++++ .../pmu-events/arch/x86/icelake/other.json | 121 +++ .../pmu-events/arch/x86/icelake/pipeline.json | 892 ++++++++++++++++++ .../arch/x86/icelake/virtual-memory.json | 236 +++++ tools/perf/pmu-events/arch/x86/mapfile.csv | 2 + 8 files changed, 2739 insertions(+) create mode 100644 tools/perf/pmu-events/arch/x86/icelake/cache.json create mode 100644 tools/perf/pmu-events/arch/x86/icelake/floating-point.json create mode 100644 tools/perf/pmu-events/arch/x86/icelake/frontend.json create mode 100644 tools/perf/pmu-events/arch/x86/icelake/memory.json create mode 100644 tools/perf/pmu-events/arch/x86/icelake/other.json create mode 100644 tools/perf/pmu-events/arch/x86/icelake/pipeline.json create mode 100644 tools/perf/pmu-events/arch/x86/icelake/virtual-memory.json
diff --git a/tools/perf/pmu-events/arch/x86/icelake/cache.json b/tools/perf/pmu-events/arch/x86/icelake/cache.json new file mode 100644 index 000000000000..3529fc338c17 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/icelake/cache.json @@ -0,0 +1,552 @@ +[ + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of demand Data Read requests that miss L2 cache. Only not rejected loads are counted.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0x21", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "Demand Data Read miss L2, no rejects" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the RFO (Read-for-Ownership) requests that miss L2 cache.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0x22", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.RFO_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "RFO requests that miss L2 cache" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts L2 cache misses when fetching instructions.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0x24", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.CODE_RD_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "L2 cache misses when fetching instructions" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts demand requests that miss L2 cache.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0x27", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.ALL_DEMAND_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "Demand requests that miss L2 cache" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts Software prefetch requests that miss the L2 cache. This event accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0x28", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.SWPF_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "SW prefetch requests that miss L2 cache." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of demand Data Read requests initiated by load instructions that hit L2 cache.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0xc1", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "Demand Data Read requests that hit L2 cache" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the RFO (Read-for-Ownership) requests that hit L2 cache.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0xc2", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.RFO_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "RFO requests that hit L2 cache" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts L2 cache hits when fetching instructions, code reads.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0xc4", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.CODE_RD_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "L2 cache hits when fetching instructions, code reads." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts Software prefetch requests that hit the L2 cache. 
This event accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0xc8", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.SWPF_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "SW prefetch requests that hit L2 cache." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non rejected loads are counted.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0xe1", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD", + "SampleAfterValue": "200003", + "BriefDescription": "Demand Data Read requests" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0xe2", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.ALL_RFO", + "SampleAfterValue": "200003", + "BriefDescription": "RFO requests to L2 cache" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the total number of L2 code requests.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0xe4", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.ALL_CODE_RD", + "SampleAfterValue": "200003", + "BriefDescription": "L2 code requests" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts demand requests to L2 cache.", + "EventCode": "0x24", + "Counter": "0,1,2,3", + "UMask": "0xe7", + "PEBScounters": "0,1,2,3", + "EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES", + "SampleAfterValue": "200003", + "BriefDescription": "Demand requests to L2 cache" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of L1D misses that are outstanding in each cycle, that is each cycle the number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.", + "EventCode": "0x48", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "L1D_PEND_MISS.PENDING", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of L1D misses that are outstanding" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts duration of L1D miss outstanding in cycles.", + "EventCode": "0x48", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "L1D_PEND_MISS.PENDING_CYCLES", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles with L1D load Misses outstanding.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailablability. 
Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", + "EventCode": "0x48", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "L1D_PEND_MISS.FB_FULL", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailablability." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailablability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", + "EventCode": "0x48", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "L1D_PEND_MISS.FB_FULL_PERIODS", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailablability.", + "CounterMask": "1", + "EdgeDetect": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.", + "EventCode": "0x48", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "L1D_PEND_MISS.L2_STALL", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of cycles a demand request has waited due to L1D due to lack of L2 resources." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.", + "EventCode": "0x51", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "L1D.REPLACEMENT", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts the number of cache lines replaced in L1 data cache." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of offcore outstanding demand rfo Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.", + "EventCode": "0x60", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles with offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of offcore outstanding cacheable Core Data Read transactions in the super queue every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.", + "EventCode": "0x60", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD", + "SampleAfterValue": "2000003", + "BriefDescription": "Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles when offcore outstanding cacheable Core Data Read transactions are present in the super queue. 
A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.", + "EventCode": "0x60", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.", + "EventCode": "0xB0", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD", + "SampleAfterValue": "100003", + "BriefDescription": "Demand Data Read requests sent to uncore" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.", + "EventCode": "0xB0", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_REQUESTS.DEMAND_RFO", + "SampleAfterValue": "100003", + "BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.", + "EventCode": "0xB0", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_REQUESTS.ALL_DATA_RD", + "SampleAfterValue": "100003", + "BriefDescription": "Demand and prefetch data reads" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts memory transactions reached the super queue including requests initiated by the core, all L3 prefetches, page walks, etc..", + "EventCode": "0xB0", + "Counter": "0,1,2,3", + "UMask": "0x80", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_REQUESTS.ALL_REQUESTS", + "SampleAfterValue": "100003", + "BriefDescription": "Any memory transaction that reached the SQ." 
+ }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions that true miss the STLB.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x11", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS", + "SampleAfterValue": "100003", + "BriefDescription": "Retired load instructions that miss the STLB.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired store instructions that true miss the STLB.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x12", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_INST_RETIRED.STLB_MISS_STORES", + "SampleAfterValue": "100003", + "BriefDescription": "Retired store instructions that miss the STLB.", + "Data_LA": "1", + "L1_Hit_Indication": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions with locked access.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x21", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_INST_RETIRED.LOCK_LOADS", + "SampleAfterValue": "100007", + "BriefDescription": "Retired load instructions with locked access.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions that split across a cacheline boundary.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x41", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_INST_RETIRED.SPLIT_LOADS", + "SampleAfterValue": "100003", + "BriefDescription": "Retired load instructions that split across a cacheline boundary.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired store instructions that split across a cacheline boundary.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x42", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_INST_RETIRED.SPLIT_STORES", + "SampleAfterValue": "100003", + "BriefDescription": "Retired store instructions that split across a cacheline boundary.", + "Data_LA": "1", + "L1_Hit_Indication": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions for loads.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x81", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_INST_RETIRED.ALL_LOADS", + "SampleAfterValue": "2000003", + "BriefDescription": "All retired load instructions.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts all retired store instructions. This event account for SW prefetch instructions and PREFETCHW instruction for stores.", + "EventCode": "0xD0", + "Counter": "0,1,2,3", + "UMask": "0x82", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_INST_RETIRED.ALL_STORES", + "SampleAfterValue": "2000003", + "BriefDescription": "All retired store instructions.", + "Data_LA": "1", + "L1_Hit_Indication": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L1 data cache. 
This event includes all SW prefetches and lock instructions regardless of the data source.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_RETIRED.L1_HIT", + "SampleAfterValue": "2000003", + "BriefDescription": "Retired load instructions with L1 cache hits as data sources", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions with L2 cache hits as data sources.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_RETIRED.L2_HIT", + "SampleAfterValue": "100003", + "BriefDescription": "Retired load instructions with L2 cache hits as data sources", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L3 cache.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_RETIRED.L3_HIT", + "SampleAfterValue": "50021", + "BriefDescription": "Retired load instructions with L3 cache hits as data sources", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L1 cache.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_RETIRED.L1_MISS", + "SampleAfterValue": "100003", + "BriefDescription": "Retired load instructions missed L1 cache as data sources", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions missed L2 cache as data sources.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_RETIRED.L2_MISS", + "SampleAfterValue": "50021", + "BriefDescription": "Retired load instructions missed L2 cache as data sources", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L3 cache.", + "EventCode": "0xD1", + "Counter": "0,1,2,3", + "UMask": "0x20", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_RETIRED.L3_MISS", + "SampleAfterValue": "100007", + "BriefDescription": "Retired load instructions missed L3 cache as data sources", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions with at least one uop was load missed in L1 but hit FB (Fill Buffers) due to preceding miss to the same cache line with data not ready.", + "EventCode": "0xd1", + "Counter": "0,1,2,3", + "UMask": "0x40", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_RETIRED.FB_HIT", + "SampleAfterValue": "100007", + "BriefDescription": "Number of completed demand load requests that missed the L1, but hit the FB(fill buffer), because a preceding miss to the same cacheline initiated the line to be brought into L1, but data is not yet ready in L1.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.", + "EventCode": "0xd2", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS", + "SampleAfterValue": "20011", + "BriefDescription": 
"Retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache.", + "EventCode": "0xd2", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT", + "SampleAfterValue": "20011", + "BriefDescription": "Retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3.", + "EventCode": "0xd2", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM", + "SampleAfterValue": "20011", + "BriefDescription": "Retired load instructions whose data sources were HitM responses from shared L3", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired load instructions whose data sources were hits in L3 without snoops required.", + "EventCode": "0xd2", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE", + "SampleAfterValue": "100003", + "BriefDescription": "Retired load instructions whose data sources were hits in L3 without snoops required", + "Data_LA": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of L2 cache lines filling the L2. Counting does not cover rejects.", + "EventCode": "0xF1", + "Counter": "0,1,2,3", + "UMask": "0x1f", + "PEBScounters": "0,1,2,3", + "EventName": "L2_LINES_IN.ALL", + "SampleAfterValue": "100003", + "BriefDescription": "L2 cache lines filling L2" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the cycles for which the thread is active and the superQ cannot take any more entries.", + "EventCode": "0xF4", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "SQ_MISC.SQ_FULL", + "SampleAfterValue": "100003", + "BriefDescription": "Cycles the thread is active and superQ cannot take any more entries." + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/icelake/floating-point.json b/tools/perf/pmu-events/arch/x86/icelake/floating-point.json new file mode 100644 index 000000000000..594c5551f610 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/icelake/floating-point.json @@ -0,0 +1,102 @@ +[ + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts all microcode Floating Point assists.", + "EventCode": "0xC1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "ASSISTS.FP", + "SampleAfterValue": "100003", + "BriefDescription": "Counts all microcode FP assists.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. 
FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "EventCode": "0xc7", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 RANGE SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "EventCode": "0xc7", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 RANGE SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "EventCode": "0xc7", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x4", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT14 RCP14 RANGE DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. 
DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "EventCode": "0xc7", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x8", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "EventCode": "0xc7", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x10", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 RANGE SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "EventCode": "0xc7", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x20", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 RANGE SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. 
Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 RANGE FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "EventCode": "0xc7", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x40", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 RANGE FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 RANGE FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.", + "EventCode": "0xc7", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x80", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 RANGE FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element." + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/icelake/frontend.json b/tools/perf/pmu-events/arch/x86/icelake/frontend.json new file mode 100644 index 000000000000..9c3cfbfcec0f --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/icelake/frontend.json @@ -0,0 +1,424 @@ +[ + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).", + "EventCode": "0x79", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "IDQ.MITE_UOPS", + "SampleAfterValue": "2000003", + "BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path.
During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).", + "EventCode": "0x79", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "IDQ.MITE_CYCLES_OK", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles MITE is delivering optimal number of Uops", + "CounterMask": "5" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of cycles uops were delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).", + "EventCode": "0x79", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "IDQ.MITE_CYCLES_ANY", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles MITE is delivering any Uop", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.", + "EventCode": "0x79", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "IDQ.DSB_UOPS", + "SampleAfterValue": "2000003", + "BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. During these cycles uops are not being delivered from the MITE (legacy decode pipeline) path.", + "EventCode": "0x79", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "IDQ.DSB_CYCLES_OK", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles DSB is delivering optimal number of Uops", + "CounterMask": "5" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of cycles uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.", + "EventCode": "0x79", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "IDQ.DSB_CYCLES_ANY", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.", + "EventCode": "0x79", + "Counter": "0,1,2,3", + "UMask": "0x30", + "PEBScounters": "0,1,2,3", + "EventName": "IDQ.MS_SWITCHES", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of switches from DSB or MITE to the MS", + "CounterMask": "1", + "EdgeDetect": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the total number of uops delivered by the Microcode Sequencer (MS). Any instruction over 4 uops will be delivered by the MS. Some instructions such as transcendentals may additionally generate uops from the MS.", + "EventCode": "0x79", + "Counter": "0,1,2,3", + "UMask": "0x30", + "PEBScounters": "0,1,2,3", + "EventName": "IDQ.MS_UOPS", + "SampleAfterValue": "2000003", + "BriefDescription": "Uops delivered to IDQ while MS is busy" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy.
Uops may be initiated by Decode Stream Buffer (DSB) or MITE.", + "EventCode": "0x79", + "Counter": "0,1,2,3", + "UMask": "0x30", + "PEBScounters": "0,1,2,3", + "EventName": "IDQ.MS_CYCLES_ANY", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles when uops are being delivered to IDQ while MS is busy", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity.", + "EventCode": "0x80", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "ICACHE_16B.IFDATA_STALL", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity. Accounts for both cacheable and uncacheable accesses.", + "EventCode": "0x83", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "ICACHE_64B.IFTAG_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "Instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity. Accounts for both cacheable and uncacheable accesses.", + "EventCode": "0x83", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "ICACHE_64B.IFTAG_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "Instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss.", + "EventCode": "0x83", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "ICACHE_64B.IFTAG_STALL", + "SampleAfterValue": "200003", + "BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of uops not delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there were no back-end stalls. This event counts for one SMT thread in a given cycle.", + "EventCode": "0x9C", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "IDQ_UOPS_NOT_DELIVERED.CORE", + "SampleAfterValue": "2000003", + "BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of cycles when no uops were delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there were no back-end stalls.
This event counts for one SMT thread in a given cycle.", + "EventCode": "0x9c", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles when no uops are delivered by the IDQ when backend of the machine is not stalled", + "CounterMask": "5" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of cycles when the optimal number of uops was delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there were no back-end stalls. This event counts for one SMT thread in a given cycle.", + "EventCode": "0x9C", + "Invert": "1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Decode Stream Buffer (DSB) is a Uop-cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.", + "EventCode": "0xAB", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES", + "SampleAfterValue": "2000003", + "BriefDescription": "DSB-to-MITE switch true penalty cycles." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss.", + "EventCode": "0xC6", + "MSRValue": "0x11", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.DSB_MISS", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired Instructions who experienced DSB miss.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired Instructions who experienced Instruction L1 Cache true miss.", + "EventCode": "0xC6", + "MSRValue": "0x12", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.L1I_MISS", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired Instructions who experienced Instruction L2 Cache true miss.", + "EventCode": "0xC6", + "MSRValue": "0x13", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.L2_MISS", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired Instructions that experienced iTLB (Instruction TLB) true miss.", + "EventCode": "0xC6", + "MSRValue": "0x14", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.ITLB_MISS", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired
Instructions who experienced iTLB true miss.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired Instructions that experienced STLB (2nd level TLB) true miss.", + "EventCode": "0xC6", + "MSRValue": "0x15", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.STLB_MISS", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 2 cycles which was not interrupted by a back-end stall.", + "EventCode": "0xC6", + "MSRValue": "0x500206", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.LATENCY_GE_2", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 2 cycles which was not interrupted by a back-end stall.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.", + "EventCode": "0xC6", + "MSRValue": "0x500406", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.LATENCY_GE_4", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops.", + "EventCode": "0xC6", + "MSRValue": "0x500806", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.LATENCY_GE_8", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops.", + "EventCode": "0xC6", + "MSRValue": "0x501006", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.LATENCY_GE_16", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. 
During this period the front-end delivered no uops.", + "EventCode": "0xC6", + "MSRValue": "0x502006", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.LATENCY_GE_32", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.", + "EventCode": "0xC6", + "MSRValue": "0x504006", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.LATENCY_GE_64", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.", + "EventCode": "0xC6", + "MSRValue": "0x508006", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.LATENCY_GE_128", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.", + "EventCode": "0xC6", + "MSRValue": "0x510006", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.LATENCY_GE_256", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.", + "EventCode": "0xC6", + "MSRValue": "0x520006", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.LATENCY_GE_512", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.", + "TakenAlone": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. 
A bubble-slot is an empty issue-pipeline slot while there was no RAT stall.", + "EventCode": "0xC6", + "MSRValue": "0x100206", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1", + "MSRIndex": "0x3F7", + "SampleAfterValue": "100007", + "BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.", + "TakenAlone": "1" + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/icelake/memory.json b/tools/perf/pmu-events/arch/x86/icelake/memory.json new file mode 100644 index 000000000000..f158366b9dd6 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/icelake/memory.json @@ -0,0 +1,410 @@ +[ + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times a TSX line had a cache conflict.", + "EventCode": "0x54", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "TX_MEM.ABORT_CONFLICT", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) Aborts due to a data capacity limitation for transactional writes.", + "EventCode": "0x54", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "TX_MEM.ABORT_CAPACITY_WRITE", + "SampleAfterValue": "2000003", + "BriefDescription": "Speculatively counts the number of TSX Aborts due to a data capacity limitation for transactional writes." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times a TSX Abort was triggered due to a non-release/commit store to lock.", + "EventCode": "0x54", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK", + "SampleAfterValue": "100003", + "BriefDescription": "Number of times a HLE transactional region aborted due to a non XRELEASE prefixed instruction writing to an elided lock in the elision buffer" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times a TSX Abort was triggered due to commit but Lock Buffer not empty.", + "EventCode": "0x54", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an HLE transactional execution aborted due to NoAllocatedElisionBuffer being non-zero."
+ }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times a TSX Abort was triggered due to release/commit but data and address mismatch.", + "EventCode": "0x54", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an HLE transactional execution aborted due to XRELEASE lock not satisfying the address and value requirements in the elision buffer" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer.", + "EventCode": "0x54", + "Counter": "0,1,2,3", + "UMask": "0x20", + "PEBScounters": "0,1,2,3", + "EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMENT", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an HLE transactional execution aborted due to an unsupported read alignment from the elision buffer." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times we could not allocate Lock Buffer.", + "EventCode": "0x54", + "Counter": "0,1,2,3", + "UMask": "0x40", + "PEBScounters": "0,1,2,3", + "EventName": "TX_MEM.HLE_ELISION_BUFFER_FULL", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times HLE lock could not be elided due to ElisionBufferAvailable being zero." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts Unfriendly TSX abort triggered by a vzeroupper instruction.", + "EventCode": "0x5d", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "TX_EXEC.MISC2", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed inside a transactional region" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts Unfriendly TSX abort triggered by a nest count that is too deep.", + "EventCode": "0x5d", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x4", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "TX_EXEC.MISC3", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an instruction execution caused the transactional nest count supported to be exceeded" + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xA3", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "CYCLE_ACTIVITY.CYCLES_L3_MISS", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles while L3 cache miss demand load is outstanding.", + "CounterMask": "2" + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xA3", + "Counter": "0,1,2,3", + "UMask": "0x6", + "PEBScounters": "0,1,2,3", + "EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS", + "SampleAfterValue": "2000003", + "BriefDescription": "Execution stalls while L3 cache miss demand load is outstanding.", + "CounterMask": "6" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Demand Data Read requests that miss L3 cache.", + "EventCode": "0xB0", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD", + "SampleAfterValue": "100003", + "BriefDescription": "Demand Data Read requests that miss L3 cache" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of Machine Clears detected due to memory ordering.
Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture.", + "EventCode": "0xc3", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MACHINE_CLEARS.MEMORY_ORDERING", + "SampleAfterValue": "100003", + "BriefDescription": "Number of machine clears due to memory ordering conflicts." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times we entered an HLE region. Does not count nested transactions.", + "EventCode": "0xC8", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "HLE_RETIRED.START", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an HLE execution started." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times HLE commit succeeded.", + "EventCode": "0xC8", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "HLE_RETIRED.COMMIT", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an HLE execution successfully committed", + "Data_LA": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times HLE abort was triggered.", + "EventCode": "0xc8", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x4", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "HLE_RETIRED.ABORTED", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an HLE execution aborted for any reason (multiple categories may count as one)." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).", + "EventCode": "0xC8", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x8", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "HLE_RETIRED.ABORTED_MEM", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts)." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times an HLE execution aborted due to HLE-unfriendly instructions and certain unfriendly events (such as AD assists etc.).", + "EventCode": "0xC8", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x20", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "HLE_RETIRED.ABORTED_UNFRIENDLY", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an HLE execution aborted due to HLE-unfriendly instructions and certain unfriendly events (such as AD assists etc.)." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times an HLE execution aborted due to unfriendly events (such as interrupts).", + "EventCode": "0xC8", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x80", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "HLE_RETIRED.ABORTED_EVENTS", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an HLE execution aborted due to unfriendly events (such as interrupts)." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times we entered an RTM region. Does not count nested transactions.", + "EventCode": "0xC9", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RTM_RETIRED.START", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an RTM execution started."
+ }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times RTM commit succeeded.", + "EventCode": "0xC9", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RTM_RETIRED.COMMIT", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an RTM execution successfully committed" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times RTM abort was triggered.", + "EventCode": "0xc9", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x4", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RTM_RETIRED.ABORTED", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an RTM execution aborted.", + "Data_LA": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).", + "EventCode": "0xC9", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x8", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RTM_RETIRED.ABORTED_MEM", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times an RTM execution aborted due to HLE-unfriendly instructions.", + "EventCode": "0xC9", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x20", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RTM_RETIRED.ABORTED_UNFRIENDLY", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times an RTM execution aborted due to incompatible memory type.", + "EventCode": "0xC9", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x40", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RTM_RETIRED.ABORTED_MEMTYPE", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).", + "EventCode": "0xC9", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x80", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RTM_RETIRED.ABORTED_EVENTS", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency.", + "EventCode": "0xcd", + "MSRValue": "0x4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4", + "MSRIndex": "0x3F6", + "SampleAfterValue": "100003", + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.", + "TakenAlone": "1" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. 
Reported latency may be longer than just the memory latency.", + "EventCode": "0xcd", + "MSRValue": "0x8", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8", + "MSRIndex": "0x3F6", + "SampleAfterValue": "50021", + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.", + "TakenAlone": "1" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency.", + "EventCode": "0xcd", + "MSRValue": "0x10", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16", + "MSRIndex": "0x3F6", + "SampleAfterValue": "20011", + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.", + "TakenAlone": "1" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency.", + "EventCode": "0xcd", + "MSRValue": "0x20", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32", + "MSRIndex": "0x3F6", + "SampleAfterValue": "100007", + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.", + "TakenAlone": "1" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency.", + "EventCode": "0xcd", + "MSRValue": "0x40", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64", + "MSRIndex": "0x3F6", + "SampleAfterValue": "2003", + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.", + "TakenAlone": "1" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency.", + "EventCode": "0xcd", + "MSRValue": "0x80", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128", + "MSRIndex": "0x3F6", + "SampleAfterValue": "1009", + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.", + "TakenAlone": "1" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. 
Reported latency may be longer than just the memory latency.", + "EventCode": "0xcd", + "MSRValue": "0x100", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256", + "MSRIndex": "0x3F6", + "SampleAfterValue": "503", + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.", + "TakenAlone": "1" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency.", + "EventCode": "0xcd", + "MSRValue": "0x200", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512", + "MSRIndex": "0x3F6", + "SampleAfterValue": "101", + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.", + "TakenAlone": "1" + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/icelake/other.json b/tools/perf/pmu-events/arch/x86/icelake/other.json new file mode 100644 index 000000000000..f8dfdb847224 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/icelake/other.json @@ -0,0 +1,121 @@ +[ + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method. The count is distributed among unhalted logical processors (hyper-threads) that share the same physical core. Software can use this event as the denominator for the top-level metrics of the Top-down Microarchitecture Analysis method. This event is counted on a designated fixed counter (Fixed Counter 3) and is an architectural event.", + "Counter": "35", + "UMask": "0x4", + "PEBScounters": "35", + "EventName": "TOPDOWN.SLOTS", + "SampleAfterValue": "10000003", + "BriefDescription": "Counts the number of available slots for an unhalted logical processor." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes.", + "EventCode": "0x28", + "Counter": "0,1,2,3", + "UMask": "0x7", + "PEBScounters": "0,1,2,3", + "EventName": "CORE_POWER.LVL0_TURBO_LICENSE", + "SampleAfterValue": "200003", + "BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the Non-AVX turbo schedule." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions.", + "EventCode": "0x28", + "Counter": "0,1,2,3", + "UMask": "0x18", + "PEBScounters": "0,1,2,3", + "EventName": "CORE_POWER.LVL1_TURBO_LICENSE", + "SampleAfterValue": "200003", + "BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX2 turbo schedule." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Core cycles where the core was running with power-delivery for license level 2 (introduced in Skylake Server microarchitecture).
This includes high current AVX 512-bit instructions.", + "EventCode": "0x28", + "Counter": "0,1,2,3", + "UMask": "0x20", + "PEBScounters": "0,1,2,3", + "EventName": "CORE_POWER.LVL2_TURBO_LICENSE", + "SampleAfterValue": "200003", + "BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX512 turbo schedule." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of PREFETCHNTA instructions executed.", + "EventCode": "0x32", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "SW_PREFETCH_ACCESS.NTA", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of PREFETCHNTA instructions executed." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of PREFETCHT0 instructions executed.", + "EventCode": "0x32", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "SW_PREFETCH_ACCESS.T0", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of PREFETCHT0 instructions executed." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.", + "EventCode": "0x32", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "SW_PREFETCH_ACCESS.T1_T2", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of PREFETCHW instructions executed.", + "EventCode": "0x32", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "SW_PREFETCH_ACCESS.PREFETCHW", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of PREFETCHW instructions executed." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method. The count is distributed among unhalted logical processors (hyper-threads) that share the same physical core.", + "EventCode": "0xa4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "TOPDOWN.SLOTS_P", + "SampleAfterValue": "10000003", + "BriefDescription": "Counts the number of available slots for an unhalted logical processor." + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xA4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "TOPDOWN.BACKEND_BOUND_SLOTS", + "SampleAfterValue": "10000003", + "BriefDescription": "Issue slots where no uops were being issued due to lack of back end resources." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of occurrences where a microcode assist is invoked by hardware. Examples include AD (page Access Dirty), FP and AVX related assists.", + "EventCode": "0xc1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x7", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "ASSISTS.ANY", + "SampleAfterValue": "100003", + "BriefDescription": "Number of occurrences where a microcode assist is invoked by hardware."
+ } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/icelake/pipeline.json b/tools/perf/pmu-events/arch/x86/icelake/pipeline.json new file mode 100644 index 000000000000..6d8311e634aa --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/icelake/pipeline.json @@ -0,0 +1,892 @@ +[ + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.", + "Counter": "32", + "UMask": "0x1", + "PEBScounters": "32", + "EventName": "INST_RETIRED.ANY", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of instructions retired. Fixed Counter - architectural event" + }, + { + "PEBS": "2", + "CollectPEBSRecord": "3", + "PublicDescription": "A version of INST_RETIRED that allows for a more unbiased distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR) feature to mitigate some bias in how retired instructions get sampled. Use on Fixed Counter 0.", + "Counter": "32", + "UMask": "0x1", + "PEBScounters": "32", + "EventName": "INST_RETIRED.PREC_DIST", + "SampleAfterValue": "2000003", + "BriefDescription": "Precise instruction retired event with a reduced effect of PEBS shadow in IP distribution" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events.", + "Counter": "33", + "UMask": "0x2", + "PEBScounters": "33", + "EventName": "CPU_CLK_UNHALTED.THREAD", + "SampleAfterValue": "2000003", + "BriefDescription": "Core cycles when the thread is not in halt state" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states and duty off periods in which the processor is 'halted'. The counter update is done at a lower clock rate than the core clock, so the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed, software clears the overflow status bit and resets the counter to less than MAX.
The reset value to the counter is not clocked immediately, so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled), after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1' for bit 34 while the counter value is less than MAX. Software should ignore this case.", + "Counter": "34", + "UMask": "0x3", + "PEBScounters": "34", + "EventName": "CPU_CLK_UNHALTED.REF_TSC", + "SampleAfterValue": "2000003", + "BriefDescription": "Reference cycles when the core is not in halt state." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times the load operation got the true Block-on-Store blocking code preventing store forwarding. This includes cases when: a. preceding store conflicts with the load (incomplete overlap), b. store forwarding is impossible due to u-arch limitations, c. preceding lock RMW operations are not forwarded, d. store has the no-forward bit set (uncacheable/page-split/masked stores), e. all-blocking stores are used (mostly, fences and port I/O), and others. The most common case is a load blocked due to its address range overlapping with a preceding smaller uncompleted store. Note: This event does not take into account cases of out-of-SW-control (for example, SbTailHit), unknown physical STA, and cases of blocking loads on store due to being non-WB memory type or a lock. These cases are covered by other events. See the table of not supported store forwards in the Optimization Guide.", + "EventCode": "0x03", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "LD_BLOCKS.STORE_FORWARD", + "SampleAfterValue": "100003", + "BriefDescription": "Loads blocked by overlapping with store buffer that cannot be forwarded." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.", + "EventCode": "0x03", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "LD_BLOCKS.NO_SR", + "SampleAfterValue": "100003", + "BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times a load got blocked due to false dependencies in MOB due to partial compare on address.", + "EventCode": "0x07", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS", + "SampleAfterValue": "100003", + "BriefDescription": "False dependencies in MOB due to partial compare on address."
+ }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts core cycles when the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event.", + "EventCode": "0x0D", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "INT_MISC.RECOVERY_CYCLES", + "SampleAfterValue": "2000003", + "BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles the Backend cluster is recovering after a miss-speculation or a Store Buffer or Load Buffer drain stall.", + "EventCode": "0x0D", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x3", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "INT_MISC.ALL_RECOVERY_CYCLES", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles the Backend cluster is recovering after a miss-speculation or a Store Buffer or Load Buffer drain stall.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.", + "EventCode": "0x0d", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x80", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "INT_MISC.CLEAR_RESTEER_CYCLES", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).", + "EventCode": "0x0E", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_ISSUED.ANY", + "SampleAfterValue": "2000003", + "BriefDescription": "Uops that RAT issues to RS" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles during which the Resource Allocation Table (RAT) does not issue any Uops to the reservation station (RS) for the current thread.", + "EventCode": "0x0E", + "Invert": "1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_ISSUED.STALL_CYCLES", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles when RAT does not issue Uops to RS for the thread", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles when divide unit is busy executing divide or square root operations. Accounts for integer and floating-point operations.", + "EventCode": "0x14", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x9", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "ARITH.DIVIDER_ACTIVE", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles when divide unit is busy executing divide or square root operations.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. 
For this reason, this event may have a changing ratio with regards to wall clock time.", + "EventCode": "0x3C", + "Counter": "0,1,2,3,4,5,6,7", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "CPU_CLK_UNHALTED.THREAD_P", + "SampleAfterValue": "2000003", + "BriefDescription": "Thread cycles when thread is not in halt state" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts core crystal clock cycles when the thread is unhalted.", + "EventCode": "0x3C", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "CPU_CLK_UNHALTED.REF_XCLK", + "SampleAfterValue": "25003", + "BriefDescription": "Core crystal clock cycles when the thread is unhalted." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts Core crystal clock cycles when current thread is unhalted and the other thread is halted.", + "EventCode": "0x3C", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE", + "SampleAfterValue": "25003", + "BriefDescription": "Core crystal clock cycles when this thread is unhalted and the other thread is halted." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.", + "EventCode": "0x4c", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "LOAD_HIT_PREFETCH.SWPF", + "SampleAfterValue": "100003", + "BriefDescription": "Counts the number of demand load dispatches that hit L1D fill buffer (FB) allocated for software prefetch." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into stravation periods (e.g. branch mispredictions or i-cache misses)", + "EventCode": "0x5E", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RS_EVENTS.EMPTY_CYCLES", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to closely sample on front-end latency issues (see the FRONTEND_RETIRED event of designated precise events)", + "EventCode": "0x5E", + "Invert": "1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RS_EVENTS.EMPTY_END", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.", + "CounterMask": "1", + "EdgeDetect": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. 
This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk.", + "EventCode": "0x87", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "ILD_STALL.LCP", + "SampleAfterValue": "2000003", + "BriefDescription": "Stalls caused by changing prefix length of the instruction." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 0.", + "EventCode": "0xa1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_DISPATCHED.PORT_0", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of uops executed on port 0" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 1.", + "EventCode": "0xa1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_DISPATCHED.PORT_1", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of uops executed on port 1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 2 and 3.", + "EventCode": "0xa1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x4", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_DISPATCHED.PORT_2_3", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of uops executed on port 2 and 3" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 4 and 9.", + "EventCode": "0xa1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x10", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_DISPATCHED.PORT_4_9", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of uops executed on port 4 and 9" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 5.", + "EventCode": "0xa1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x20", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_DISPATCHED.PORT_5", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of uops executed on port 5" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 6.", + "EventCode": "0xa1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x40", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_DISPATCHED.PORT_6", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of uops executed on port 6" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 7 and 8.", + "EventCode": "0xa1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x80", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_DISPATCHED.PORT_7_8", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of uops executed on port 7 and 8" + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xa2", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", +
"PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RESOURCE_STALLS.SCOREBOARD", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts cycles where the pipeline is stalled due to serializing operations." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-end.", + "EventCode": "0xA2", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x8", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "RESOURCE_STALLS.SB", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync)." + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xA3", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles while L2 cache miss demand load is outstanding.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xA3", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x4", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "CYCLE_ACTIVITY.STALLS_TOTAL", + "SampleAfterValue": "2000003", + "BriefDescription": "Total execution stalls.", + "CounterMask": "4" + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xA3", + "Counter": "0,1,2,3", + "UMask": "0x5", + "PEBScounters": "0,1,2,3", + "EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS", + "SampleAfterValue": "2000003", + "BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.", + "CounterMask": "5" + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xA3", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles while L1 cache miss demand load is outstanding.", + "CounterMask": "8" + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xA3", + "Counter": "0,1,2,3", + "UMask": "0xc", + "PEBScounters": "0,1,2,3", + "EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS", + "SampleAfterValue": "2000003", + "BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.", + "CounterMask": "12" + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xA3", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x10", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles while memory subsystem has an outstanding load.", + "CounterMask": "16" + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xA3", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x14", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "CYCLE_ACTIVITY.STALLS_MEM_ANY", + "SampleAfterValue": "2000003", + "BriefDescription": "Execution stalls while memory subsystem has an outstanding load.", + "CounterMask": "20" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles during which a total of 1 uop was executed on all ports and Reservation Station (RS) was not empty.", + "EventCode": "0xa6", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "EXE_ACTIVITY.1_PORTS_UTIL", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles total of 1 uop is executed on all ports and Reservation Station was not empty." 
+ }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles during which a total of 2 uops were executed on all ports and Reservation Station (RS) was not empty.", + "EventCode": "0xa6", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x4", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "EXE_ACTIVITY.2_PORTS_UTIL", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles total of 2 uops are executed on all ports and Reservation Station was not empty." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles where the Store Buffer was full and no loads caused an execution stall.", + "EventCode": "0xA6", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x40", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "EXE_ACTIVITY.BOUND_ON_STORES", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles where the Store Buffer was full and no loads caused an execution stall.", + "CounterMask": "2" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles during which no uops were executed on all ports and Reservation Station (RS) was not empty.", + "EventCode": "0xa6", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x80", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "EXE_ACTIVITY.EXE_BOUND_0_PORTS", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles where no uops were executed, the Reservation Station was not empty, the Store Buffer was full and there was no outstanding load." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of uops delivered to the back-end by the LSD(Loop Stream Detector).", + "EventCode": "0xA8", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "LSD.UOPS", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of Uops delivered by the LSD." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the cycles when at least one uop is delivered by the LSD (Loop-stream detector).", + "EventCode": "0xA8", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "LSD.CYCLES_ACTIVE", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the cycles when optimal number of uops is delivered by the LSD (Loop-stream detector).", + "EventCode": "0xa8", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "LSD.CYCLES_OK", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles optimal number of Uops delivered by the LSD, but did not come from the decoder.", + "CounterMask": "5" + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xB1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.THREAD", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts the number of uops to be executed per-thread each cycle." 
+ }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles during which no uops were dispatched from the Reservation Station (RS) per thread.", + "EventCode": "0xB1", + "Invert": "1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.STALL_CYCLES", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Cycles where at least 1 uop was executed per-thread.", + "EventCode": "0xb1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.CYCLES_GE_1", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles where at least 1 uop was executed per-thread", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Cycles where at least 2 uops were executed per-thread.", + "EventCode": "0xb1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.CYCLES_GE_2", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles where at least 2 uops were executed per-thread", + "CounterMask": "2" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Cycles where at least 3 uops were executed per-thread.", + "EventCode": "0xb1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.CYCLES_GE_3", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles where at least 3 uops were executed per-thread", + "CounterMask": "3" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Cycles where at least 4 uops were executed per-thread.", + "EventCode": "0xb1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.CYCLES_GE_4", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles where at least 4 uops were executed per-thread", + "CounterMask": "4" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of uops executed from any thread.", + "EventCode": "0xB1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.CORE", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of uops executed on the core." 
+ }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles when at least 1 micro-op is executed from any thread on physical core.", + "EventCode": "0xB1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles when at least 2 micro-ops are executed from any thread on physical core.", + "EventCode": "0xB1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.", + "CounterMask": "2" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles when at least 3 micro-ops are executed from any thread on physical core.", + "EventCode": "0xB1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.", + "CounterMask": "3" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles when at least 4 micro-ops are executed from any thread on physical core.", + "EventCode": "0xB1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.", + "CounterMask": "4" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of x87 uops executed.", + "EventCode": "0xB1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x10", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_EXECUTED.X87", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts the number of x87 uops dispatched." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.", + "EventCode": "0xC0", + "Counter": "0,1,2,3,4,5,6,7", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "INST_RETIRED.ANY_P", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of instructions retired. 
General Counter - architectural event" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of cycles using always true condition (uops_ret < 16) applied to non PEBS uops retired event.", + "EventCode": "0xC2", + "Invert": "1", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_RETIRED.TOTAL_CYCLES", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycles with less than 10 actually retired uops.", + "CounterMask": "10" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the retirement slots used each cycle.", + "EventCode": "0xc2", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "UOPS_RETIRED.SLOTS", + "SampleAfterValue": "2000003", + "BriefDescription": "Retirement slots used." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of machine clears (nukes) of any type.", + "EventCode": "0xC3", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MACHINE_CLEARS.COUNT", + "SampleAfterValue": "100003", + "BriefDescription": "Number of machine clears (nukes) of any type.", + "CounterMask": "1", + "EdgeDetect": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.", + "EventCode": "0xC3", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x4", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MACHINE_CLEARS.SMC", + "SampleAfterValue": "100003", + "BriefDescription": "Self-modifying code (SMC) detected." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts all branch instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3,4,5,6,7", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_INST_RETIRED.ALL_BRANCHES", + "SampleAfterValue": "400009", + "BriefDescription": "All branch instructions retired." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts taken conditional branch instructions retired.", + "EventCode": "0xc4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_INST_RETIRED.COND_TAKEN", + "SampleAfterValue": "400009", + "BriefDescription": "Taken conditional branch instructions retired." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts both direct and indirect near call instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_INST_RETIRED.NEAR_CALL", + "SampleAfterValue": "100007", + "BriefDescription": "Direct and indirect near call instructions retired." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts return instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x8", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_INST_RETIRED.NEAR_RETURN", + "SampleAfterValue": "100007", + "BriefDescription": "Return instructions retired." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts not taken branch instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x10", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_INST_RETIRED.COND_NTAKEN", + "SampleAfterValue": "400009", + "BriefDescription": "Not taken branch instructions retired."
+ }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts conditional branch instructions retired.", + "EventCode": "0xc4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x11", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_INST_RETIRED.COND", + "SampleAfterValue": "400009", + "BriefDescription": "Conditional branch instructions retired." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts taken branch instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x20", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_INST_RETIRED.NEAR_TAKEN", + "SampleAfterValue": "400009", + "BriefDescription": "Taken branch instructions retired." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts far branch instructions retired.", + "EventCode": "0xC4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x40", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_INST_RETIRED.FAR_BRANCH", + "SampleAfterValue": "100007", + "BriefDescription": "Far branch instructions retired." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts all indirect branch instructions retired (excluding RETs. TSX aborts is considered indirect branch).", + "EventCode": "0xc4", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x80", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_INST_RETIRED.INDIRECT", + "SampleAfterValue": "100003", + "BriefDescription": "All indirect branch instructions retired (excluding RETs. TSX aborts are considered indirect branch)." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.", + "EventCode": "0xC5", + "Counter": "0,1,2,3,4,5,6,7", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_MISP_RETIRED.ALL_BRANCHES", + "SampleAfterValue": "400009", + "BriefDescription": "All mispredicted branch instructions retired.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts taken conditional mispredicted branch instructions retired.", + "EventCode": "0xc5", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x1", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_MISP_RETIRED.COND_TAKEN", + "SampleAfterValue": "400009", + "BriefDescription": "number of branch instructions retired that were mispredicted and taken. 
Non PEBS", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts mispredicted conditional branch instructions retired.", + "EventCode": "0xc5", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x11", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_MISP_RETIRED.COND", + "SampleAfterValue": "400009", + "BriefDescription": "Mispredicted conditional branch instructions retired.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts number of near branch instructions retired that were mispredicted and taken.", + "EventCode": "0xC5", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x20", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_MISP_RETIRED.NEAR_TAKEN", + "SampleAfterValue": "400009", + "BriefDescription": "Number of near branch instructions retired that were mispredicted and taken.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts all miss-predicted indirect branch instructions retired (excluding RETs. TSX aborts is considered indirect branch).", + "EventCode": "0xC5", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x80", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "BR_MISP_RETIRED.INDIRECT", + "SampleAfterValue": "100003", + "BriefDescription": "All miss-predicted indirect branch instructions retired (excluding RETs. TSX aborts is considered indirect branch).", + "Data_LA": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR enable via IA32_DEBUGCTL MSR and branch type selection via MSR_LBR_SELECT.", + "EventCode": "0xcc", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x20", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "MISC_RETIRED.LBR_INSERTS", + "SampleAfterValue": "2000003", + "BriefDescription": "Increments whenever there is an update to the LBR array." + }, + { + "PublicDescription": "Counts number of retired PAUSE instructions (that do not end up with a VMExit to the VMM; TSX aborted Instructions may be counted).", + "EventCode": "0xcc", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x40", + "EventName": "MISC_RETIRED.PAUSE_INST", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of retired PAUSE instructions." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times the front-end is resteered when it finds a branch instruction in a fetch line. This occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore.", + "EventCode": "0xE6", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "BACLEARS.ANY", + "SampleAfterValue": "100003", + "BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "This event distributes cycle counts between active hyperthreads, i.e., those in C0. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If all other hyperthreads are inactive (or disabled or do not exist), all counts are attributed to this hyperthread. 
To obtain the full count when the Core is active, sum the counts from each hyperthread.", + "EventCode": "0xec", + "Counter": "0,1,2,3,4,5,6,7", + "UMask": "0x2", + "PEBScounters": "0,1,2,3,4,5,6,7", + "EventName": "CPU_CLK_UNHALTED.DISTRIBUTED", + "SampleAfterValue": "2000003", + "BriefDescription": "Cycle counts are evenly distributed between active threads in the Core." + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/icelake/virtual-memory.json b/tools/perf/pmu-events/arch/x86/icelake/virtual-memory.json new file mode 100644 index 000000000000..7180a900c175 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/icelake/virtual-memory.json @@ -0,0 +1,236 @@ +[ + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts page walks completed due to demand data loads whose address translations missed in the TLB and were mapped to 4K pages. The page walks can end with or without a page fault.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walks completed due to a demand data load to a 4K page." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts page walks completed due to demand data loads whose address translations missed in the TLB and were mapped to 2M/4M pages. The page walks can end with or without a page fault.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walks completed due to a demand data load to a 2M/4M page." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts demand data loads that caused a completed page walk of any page size (4K/2M/4M/1G). This implies it missed in all TLB levels. The page walk can end with or without a fault.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0xe", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED", + "SampleAfterValue": "100003", + "BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of page walks outstanding for a demand load in the PMH (Page Miss Handler) each cycle.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_PENDING", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of page walks outstanding for a demand load in the PMH each cycle." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a demand load.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE", + "SampleAfterValue": "100003", + "BriefDescription": "Cycles when at least one PMH is busy with a page walk for a demand load.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x20", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.STLB_HIT", + "SampleAfterValue": "2000003", + "BriefDescription": "Loads that miss the DTLB and hit the STLB." 
+ }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 4K pages. The page walks can end with or without a page fault.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K", + "SampleAfterValue": "100003", + "BriefDescription": "Page walks completed due to a demand data store to a 4K page." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 2M/4M pages. The page walks can end with or without a page fault.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M", + "SampleAfterValue": "100003", + "BriefDescription": "Page walks completed due to a demand data store to a 2M/4M page." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts demand data stores that caused a completed page walk of any page size (4K/2M/4M/1G). This implies it missed in all TLB levels. The page walk can end with or without a fault.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0xe", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED", + "SampleAfterValue": "100003", + "BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of page walks outstanding for a store in the PMH (Page Miss Handler) each cycle.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_PENDING", + "SampleAfterValue": "2000003", + "BriefDescription": "Number of page walks outstanding for a store in the PMH each cycle." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a store.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_ACTIVE", + "SampleAfterValue": "100003", + "BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x20", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.STLB_HIT", + "SampleAfterValue": "100003", + "BriefDescription": "Stores that miss the DTLB and hit the STLB." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts completed page walks (4K page size) caused by a code fetch. This implies it missed in the ITLB and further levels of TLB. The page walk can end with or without a fault.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_COMPLETED_4K", + "SampleAfterValue": "100003", + "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts code misses in all ITLB (Instruction TLB) levels that caused a completed page walk (2M and 4M page sizes). 
The page walk can end with or without a fault.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M", + "SampleAfterValue": "100003", + "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts completed page walks (2M and 4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0xe", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_COMPLETED", + "SampleAfterValue": "100003", + "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of page walks outstanding for an outstanding code (instruction fetch) request in the PMH (Page Miss Handler) each cycle.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_PENDING", + "SampleAfterValue": "100003", + "BriefDescription": "Number of page walks outstanding for an outstanding code request in the PMH each cycle." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a code (instruction fetch) request.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_ACTIVE", + "SampleAfterValue": "100003", + "BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request.", + "CounterMask": "1" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB (Second-level TLB).", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x20", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.STLB_HIT", + "SampleAfterValue": "100003", + "BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of flushes of the big or small ITLB pages. Counting include both TLB Flush (covering all sets) and TLB Set Clear (set-specific).", + "EventCode": "0xAE", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB.ITLB_FLUSH", + "SampleAfterValue": "100007", + "BriefDescription": "Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages." 
+ }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of DTLB flush attempts of the thread-specific entries.", + "EventCode": "0xBD", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "TLB_FLUSH.DTLB_THREAD", + "SampleAfterValue": "100007", + "BriefDescription": "DTLB flush attempts of the thread-specific entries" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, etc.).", + "EventCode": "0xBD", + "Counter": "0,1,2,3", + "UMask": "0x20", + "PEBScounters": "0,1,2,3", + "EventName": "TLB_FLUSH.STLB_ANY", + "SampleAfterValue": "100007", + "BriefDescription": "STLB flush attempts" + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv index d6984a3017e0..b90e5fec2f32 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -33,4 +33,6 @@ GenuineIntel-6-25,v2,westmereep-sp,core GenuineIntel-6-2F,v2,westmereex,core GenuineIntel-6-55-[01234],v1,skylakex,core GenuineIntel-6-55-[56789ABCDEF],v1,cascadelakex,core +GenuineIntel-6-7D,v1,icelake,core +GenuineIntel-6-7E,v1,icelake,core AuthenticAMD-23-[[:xdigit:]]+,v1,amdfam17h,core
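The two new mapfile.csv rows are what connect these JSON files to real hardware: at build time, perf's jevents matches the running CPU's vendor-family-model identifier against the regular expression in the first column and compiles in the events from the named directory. Below is a minimal Python sketch of that lookup, assuming only the simplified rows shown above; the function name and the POSIX-class translation are illustrative, not perf's actual code.

import re

# Rows mirror the mapfile.csv hunks above: column 1 is a regular expression
# matched against the CPU identifier string, column 3 names the event
# directory under tools/perf/pmu-events/arch/x86/.
MAPFILE_ROWS = [
    ("GenuineIntel-6-55-[01234]", "v1", "skylakex", "core"),
    ("GenuineIntel-6-55-[56789ABCDEF]", "v1", "cascadelakex", "core"),
    ("GenuineIntel-6-7D", "v1", "icelake", "core"),
    ("GenuineIntel-6-7E", "v1", "icelake", "core"),
    ("AuthenticAMD-23-[[:xdigit:]]+", "v1", "amdfam17h", "core"),
]

def lookup_event_dir(cpuid):
    """Return the event directory for a vendor-family-model cpuid string."""
    for pattern, _version, event_dir, _pmu_type in MAPFILE_ROWS:
        # mapfile.csv patterns may use POSIX character classes; translate
        # the one class that appears above so Python's re accepts it.
        pattern = pattern.replace("[[:xdigit:]]", "[0-9A-Fa-f]")
        if re.match(pattern, cpuid):
            return event_dir
    return None

# Ice Lake client parts report family 6, model 0x7D or 0x7E.
assert lookup_event_dir("GenuineIntel-6-7E") == "icelake"
assert lookup_event_dir("AuthenticAMD-23-1") == "amdfam17h"

Once the model matches, the events defined above become addressable by name, so on an Ice Lake machine something like "perf stat -e dtlb_load_misses.walk_completed" should resolve without any raw event encoding.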
From: Haiyan Song <haiyanx.song@intel.com>
mainline inclusion
from mainline-v5.4-rc1
commit 11e54d35e6d5c3533b706753224ef38ea235684b
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Add an Intel Tremontx event file (v1.02) for perf.
Signed-off-by: Haiyan Song <haiyanx.song@intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20190815035942.30602-1-haiyanx.song@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 tools/perf/pmu-events/arch/x86/mapfile.csv    |   1 +
 .../pmu-events/arch/x86/tremontx/cache.json   | 111 +++++
 .../arch/x86/tremontx/frontend.json           |  26 ++
 .../pmu-events/arch/x86/tremontx/memory.json  |  26 ++
 .../pmu-events/arch/x86/tremontx/other.json   |  26 ++
 .../arch/x86/tremontx/pipeline.json           | 111 +++++
 .../arch/x86/tremontx/uncore-memory.json      |  73 +++
 .../arch/x86/tremontx/uncore-other.json       | 431 ++++++++++++++++++
 .../arch/x86/tremontx/uncore-power.json       |  11 +
 .../arch/x86/tremontx/virtual-memory.json     |  86 ++++
 10 files changed, 902 insertions(+)
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/cache.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/frontend.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/memory.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/other.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/pipeline.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/uncore-memory.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/uncore-other.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/uncore-power.json
 create mode 100644 tools/perf/pmu-events/arch/x86/tremontx/virtual-memory.json
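For readers unfamiliar with the JSON schema used throughout this series: each entry is a flat string dictionary, and the hardware-facing fields (EventCode, UMask, CounterMask, Invert, EdgeDetect) simply pack into one x86 PERFEVTSELx register value, while fields such as SampleAfterValue (the default sampling period) and PEBScounters stay on the tool side. Below is a rough Python sketch of that packing using the architectural bit layout; the helper is illustrative, not perf's actual code.

# Pack the hardware-facing JSON fields into a PERFEVTSELx-style encoding,
# the raw value perf accepts as -e cpu/event=0x..,umask=0x../ or rNNNN.
def perfevtsel(event):
    config = int(event.get("EventCode", "0"), 16)        # event select, bits 0-7
    config |= int(event.get("UMask", "0"), 16) << 8      # unit mask, bits 8-15
    if event.get("EdgeDetect") == "1":
        config |= 1 << 18                                # edge detect
    if event.get("Invert") == "1":
        config |= 1 << 23                                # invert counter mask
    config |= int(event.get("CounterMask", "0")) << 24   # cmask, bits 24-31
    return config

# MACHINE_CLEARS.COUNT above: event 0xC3, umask 0x1, cmask 1, edge detect.
assert perfevtsel({"EventCode": "0xC3", "UMask": "0x1",
                   "CounterMask": "1", "EdgeDetect": "1"}) == 0x10401C3

The CounterMask/Invert pairing is how several events above are derived from a single encoding: UOPS_RETIRED.TOTAL_CYCLES, for instance, is the uops-retired event with cmask=10 and inv=1, which turns "uops retired per cycle" into "cycles with fewer than 10 retired uops".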
diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv index b90e5fec2f32..745ced083844 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -35,4 +35,5 @@ GenuineIntel-6-55-[01234],v1,skylakex,core GenuineIntel-6-55-[56789ABCDEF],v1,cascadelakex,core GenuineIntel-6-7D,v1,icelake,core GenuineIntel-6-7E,v1,icelake,core +GenuineIntel-6-86,v1,tremontx,core AuthenticAMD-23-[[:xdigit:]]+,v1,amdfam17h,core diff --git a/tools/perf/pmu-events/arch/x86/tremontx/cache.json b/tools/perf/pmu-events/arch/x86/tremontx/cache.json new file mode 100644 index 000000000000..f88040171b4d --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/tremontx/cache.json @@ -0,0 +1,111 @@ +[ + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cacheable memory requests that miss in the the Last Level Cache. Requests include Demand Loads, Reads for Ownership(RFO), Instruction fetches and L1 HW prefetches. If the platform has an L3 cache, last level cache is the L3, otherwise it is the L2.", + "EventCode": "0x2e", + "Counter": "0,1,2,3", + "UMask": "0x41", + "PEBScounters": "0,1,2,3", + "EventName": "LONGEST_LAT_CACHE.MISS", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Counts memory requests originating from the core that miss in the last level cache. If the platform has an L3 cache, last level cache is the L3, otherwise it is the L2." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts cacheable memory requests that access the Last Level Cache. Requests include Demand Loads, Reads for Ownership(RFO), Instruction fetches and L1 HW prefetches. If the platform has an L3 cache, last level cache is the L3, otherwise it is the L2.", + "EventCode": "0x2e", + "Counter": "0,1,2,3", + "UMask": "0x4f", + "PEBScounters": "0,1,2,3", + "EventName": "LONGEST_LAT_CACHE.REFERENCE", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Counts memory requests originating from the core that reference a cache line in the last level cache. If the platform has an L3 cache, last level cache is the L3, otherwise it is the L2." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of load uops retired. This event is Precise Event capable", + "EventCode": "0xd0", + "Counter": "0,1,2,3", + "UMask": "0x81", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.ALL_LOADS", + "SampleAfterValue": "200003", + "BriefDescription": "Counts the number of load uops retired.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of store uops retired. 
This event is Precise Event capable", + "EventCode": "0xd0", + "Counter": "0,1,2,3", + "UMask": "0x82", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_UOPS_RETIRED.ALL_STORES", + "SampleAfterValue": "200003", + "BriefDescription": "Counts the number of store uops retired.", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "EventCode": "0xd1", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "Counts the number of load uops retired that hit the level 1 data cache", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "EventCode": "0xd1", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "Counts the number of load uops retired that hit in the level 2 cache", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "EventCode": "0xd1", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.L3_HIT", + "SampleAfterValue": "200003", + "BriefDescription": "Counts the number of load uops retired that miss in the level 3 cache" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "EventCode": "0xd1", + "Counter": "0,1,2,3", + "UMask": "0x8", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "Counts the number of load uops retired that miss in the level 1 data cache", + "Data_LA": "1" + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "EventCode": "0xd1", + "Counter": "0,1,2,3", + "UMask": "0x10", + "PEBScounters": "0,1,2,3", + "EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS", + "SampleAfterValue": "200003", + "BriefDescription": "Counts the number of load uops retired that miss in the level 2 cache", + "Data_LA": "1" + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/tremontx/frontend.json b/tools/perf/pmu-events/arch/x86/tremontx/frontend.json new file mode 100644 index 000000000000..73b0a1ed5756 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/tremontx/frontend.json @@ -0,0 +1,26 @@ +[ + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line and that cache line is not in the ICache (miss). The event strives to count on a cache line basis, so that multiple accesses which miss in a single cache line count as one ICACHE.MISS. Specifically, the event counts when straight line code crosses the cache line boundary, or when a branch target is to a new line, and that cache line is not in the ICache.", + "EventCode": "0x80", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "ICACHE.MISSES", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in a cache line and they do not hit in the ICache (miss)." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line. The event strives to count on a cache line basis, so that multiple fetches to a single cache line count as one ICACHE.ACCESS. 
Specifically, the event counts when accesses from straight line code crosses the cache line boundary, or when a branch target is to a new line.", + "EventCode": "0x80", + "Counter": "0,1,2,3", + "UMask": "0x3", + "PEBScounters": "0,1,2,3", + "EventName": "ICACHE.ACCESSES", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes cache Line." + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/tremontx/memory.json b/tools/perf/pmu-events/arch/x86/tremontx/memory.json new file mode 100644 index 000000000000..65469e84f35b --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/tremontx/memory.json @@ -0,0 +1,26 @@ +[ + { + "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.", + "EventCode": "0XB7", + "MSRValue": "0x000000003F04000001", + "Counter": "0,1,2,3", + "UMask": "0x1", + "EventName": "OCR.DEMAND_DATA_RD.L3_MISS", + "MSRIndex": "0x1a6,0x1a7", + "SampleAfterValue": "100003", + "BriefDescription": "Counts demand data reads that was not supplied by the L3 cache.", + "Offcore": "1" + }, + { + "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.", + "EventCode": "0XB7", + "MSRValue": "0x000000003F04000002", + "Counter": "0,1,2,3", + "UMask": "0x1", + "EventName": "OCR.DEMAND_RFO.L3_MISS", + "MSRIndex": "0x1a6,0x1a7", + "SampleAfterValue": "100003", + "BriefDescription": "Counts all demand reads for ownership (RFO) requests and software based prefetches for exclusive ownership (PREFETCHW) that was not supplied by the L3 cache.", + "Offcore": "1" + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/tremontx/other.json b/tools/perf/pmu-events/arch/x86/tremontx/other.json new file mode 100644 index 000000000000..85bf3c8f3914 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/tremontx/other.json @@ -0,0 +1,26 @@ +[ + { + "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.", + "EventCode": "0XB7", + "MSRValue": "0x000000000000010001", + "Counter": "0,1,2,3", + "UMask": "0x1", + "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE", + "MSRIndex": "0x1a6,0x1a7", + "SampleAfterValue": "100003", + "BriefDescription": "Counts demand data reads that have any response type.", + "Offcore": "1" + }, + { + "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.", + "EventCode": "0XB7", + "MSRValue": "0x000000000000010002", + "Counter": "0,1,2,3", + "UMask": "0x1", + "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE", + "MSRIndex": "0x1a6,0x1a7", + "SampleAfterValue": "100003", + "BriefDescription": "Counts all demand reads for ownership (RFO) requests and software based prefetches for exclusive ownership (PREFETCHW) that have any response type.", + "Offcore": "1" + } +] \ No newline at end of file diff --git 
a/tools/perf/pmu-events/arch/x86/tremontx/pipeline.json b/tools/perf/pmu-events/arch/x86/tremontx/pipeline.json new file mode 100644 index 000000000000..05a8f6a7d9c0 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/tremontx/pipeline.json @@ -0,0 +1,111 @@ +[ + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of instructions that retire. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. The counter continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses fixed counter 0.", + "Counter": "32", + "UMask": "0x1", + "PEBScounters": "32", + "EventName": "INST_RETIRED.ANY", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts the number of instructions retired. (Fixed event)" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses fixed counter 1.", + "Counter": "33", + "UMask": "0x2", + "PEBScounters": "33", + "EventName": "CPU_CLK_UNHALTED.CORE", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts the number of unhalted core clock cycles. (Fixed event)" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time. This event is not affected by core frequency changes and at a fixed frequency. This event uses fixed counter 2.", + "Counter": "34", + "UMask": "0x3", + "PEBScounters": "34", + "EventName": "CPU_CLK_UNHALTED.REF_TSC", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency. (Fixed event)" + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses a programmable general purpose performance counter.", + "EventCode": "0x3c", + "Counter": "0,1,2,3", + "PEBScounters": "0,1,2,3", + "EventName": "CPU_CLK_UNHALTED.CORE_P", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts the number of unhalted core clock cycles." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts reference cycles (at TSC frequency) when core is not halted. This event uses a programmable general purpose perfmon counter.", + "EventCode": "0x3c", + "Counter": "0,1,2,3", + "UMask": "0x1", + "PEBScounters": "0,1,2,3", + "EventName": "CPU_CLK_UNHALTED.REF", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of instructions that retire execution. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. The event continues counting during hardware interrupts, traps, and inside interrupt handlers. 
This is an architectural performance event. This event uses a Programmable general purpose perfmon counter. *This event is Precise Event capable: The EventingRIP field in the PEBS record is precise to the address of the instruction which caused the event.", + "EventCode": "0xc0", + "Counter": "0,1,2,3", + "PEBScounters": "0,1,2,3", + "EventName": "INST_RETIRED.ANY_P", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts the number of instructions retired." + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xc3", + "Counter": "0,1,2,3", + "PEBScounters": "0,1,2,3", + "EventName": "MACHINE_CLEARS.ANY", + "PDIR_COUNTER": "na", + "SampleAfterValue": "20003", + "BriefDescription": "Counts all machine clears due to, but not limited to memory ordering, memory disambiguation, SMC, page faults and FP assist." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts branch instructions retired for all branch types. This event is Precise Event capable. This is an architectural event.", + "EventCode": "0xc4", + "Counter": "0,1,2,3", + "PEBScounters": "0,1,2,3", + "EventName": "BR_INST_RETIRED.ALL_BRANCHES", + "SampleAfterValue": "200003", + "BriefDescription": "Counts the number of branch instructions retired for all branch types." + }, + { + "PEBS": "1", + "CollectPEBSRecord": "2", + "PublicDescription": "Counts mispredicted branch instructions retired for all branch types. This event is Precise Event capable. This is an architectural event.", + "EventCode": "0xc5", + "Counter": "0,1,2,3", + "PEBScounters": "0,1,2,3", + "EventName": "BR_MISP_RETIRED.ALL_BRANCHES", + "SampleAfterValue": "200003", + "BriefDescription": "Counts the number of mispredicted branch instructions retired." + }, + { + "CollectPEBSRecord": "2", + "EventCode": "0xcd", + "Counter": "0,1,2,3", + "PEBScounters": "0,1,2,3", + "EventName": "CYCLES_DIV_BUSY.ANY", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Counts cycles the floating point divider or integer divider or both are busy. Does not imply a stall waiting for either divider." + } +] \ No newline at end of file diff --git a/tools/perf/pmu-events/arch/x86/tremontx/uncore-memory.json b/tools/perf/pmu-events/arch/x86/tremontx/uncore-memory.json new file mode 100644 index 000000000000..15376f2cf052 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/tremontx/uncore-memory.json @@ -0,0 +1,73 @@ +[ + { + "BriefDescription": "read requests to memory controller. Derived from unc_m_cas_count.rd", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x04", + "EventName": "LLC_MISSES.MEM_READ", + "PerPkg": "1", + "ScaleUnit": "64Bytes", + "UMask": "0x0f", + "Unit": "iMC" + }, + { + "BriefDescription": "write requests to memory controller. 
Derived from unc_m_cas_count.wr", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x04", + "EventName": "LLC_MISSES.MEM_WRITE", + "PerPkg": "1", + "ScaleUnit": "64Bytes", + "UMask": "0x30", + "Unit": "iMC" + }, + { + "BriefDescription": "Memory controller clock ticks", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventName": "UNC_M_CLOCKTICKS", + "PerPkg": "1", + "Unit": "iMC" + }, + { + "BriefDescription": "Pre-charge for reads", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x02", + "EventName": "UNC_M_PRE_COUNT.RD", + "PerPkg": "1", + "UMask": "0x04", + "Unit": "iMC" + }, + { + "BriefDescription": "Pre-charge for writes", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x02", + "EventName": "UNC_M_PRE_COUNT.WR", + "PerPkg": "1", + "UMask": "0x08", + "Unit": "iMC" + }, + { + "BriefDescription": "Precharge due to read on page miss, write on page miss or PGT", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x02", + "EventName": "UNC_M_PRE_COUNT.ALL", + "PerPkg": "1", + "UMask": "0x1c", + "Unit": "iMC" + }, + { + "BriefDescription": "DRAM Precharge commands. : Precharge due to page table", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x02", + "EventName": "UNC_M_PRE_COUNT.PGT", + "PerPkg": "1", + "PublicDescription": "DRAM Precharge commands. : Precharge due to page table : Counts the number of DRAM Precharge commands sent on this channel.", + "UMask": "0x10", + "Unit": "iMC" + } +] diff --git a/tools/perf/pmu-events/arch/x86/tremontx/uncore-other.json b/tools/perf/pmu-events/arch/x86/tremontx/uncore-other.json new file mode 100644 index 000000000000..6deff1fe89e3 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/tremontx/uncore-other.json @@ -0,0 +1,431 @@ +[ + { + "BriefDescription": "Uncore cache clock ticks", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventName": "UNC_CHA_CLOCKTICKS", + "PerPkg": "1", + "Unit": "CHA" + }, + { + "BriefDescription": "LLC misses - Uncacheable reads (from cpu) . Derived from unc_cha_tor_inserts.ia_miss", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "LLC_MISSES.UNCACHEABLE", + "Filter": "config1=0x40e33", + "PerPkg": "1", + "UMask": "0xC001FE01", + "UMaskExt": "0xC001FE", + "Unit": "CHA" + }, + { + "BriefDescription": "MMIO reads. Derived from unc_cha_tor_inserts.ia_miss", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "LLC_MISSES.MMIO_READ", + "Filter": "config1=0x40040e33", + "PerPkg": "1", + "UMask": "0xC001FE01", + "UMaskExt": "0xC001FE", + "Unit": "CHA" + }, + { + "BriefDescription": "MMIO writes. Derived from unc_cha_tor_inserts.ia_miss", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "LLC_MISSES.MMIO_WRITE", + "Filter": "config1=0x40041e33", + "PerPkg": "1", + "UMask": "0xC001FE01", + "UMaskExt": "0xC001FE", + "Unit": "CHA" + }, + { + "BriefDescription": "Streaming stores (full cache line). Derived from unc_cha_tor_inserts.ia_miss", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "LLC_REFERENCES.STREAMING_FULL", + "Filter": "config1=0x41833", + "PerPkg": "1", + "ScaleUnit": "64Bytes", + "UMask": "0xC001FE01", + "UMaskExt": "0xC001FE", + "Unit": "CHA" + }, + { + "BriefDescription": "Streaming stores (partial cache line). 
Derived from unc_cha_tor_inserts.ia_miss", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "LLC_REFERENCES.STREAMING_PARTIAL", + "Filter": "config1=0x41a33", + "PerPkg": "1", + "ScaleUnit": "64Bytes", + "UMask": "0xC001FE01", + "UMaskExt": "0xC001FE", + "Unit": "CHA" + }, + { + "BriefDescription": "PCI Express bandwidth reading at IIO. Derived from unc_iio_data_req_of_cpu.mem_read.part0", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "LLC_MISSES.PCIE_READ", + "FCMask": "0x07", + "Filter": "ch_mask=0x1f", + "MetricExpr": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 +UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 +UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 +UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3", + "MetricName": "LLC_MISSES.PCIE_READ", + "PerPkg": "1", + "PortMask": "0x01", + "ScaleUnit": "4Bytes", + "UMask": "0x04", + "Unit": "IIO" + }, + { + "BriefDescription": "PCI Express bandwidth writing at IIO. Derived from unc_iio_data_req_of_cpu.mem_write.part0", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "LLC_MISSES.PCIE_WRITE", + "FCMask": "0x07", + "Filter": "ch_mask=0x1f", + "MetricExpr": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 +UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 +UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 +UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3", + "MetricName": "LLC_MISSES.PCIE_WRITE", + "PerPkg": "1", + "PortMask": "0x01", + "ScaleUnit": "4Bytes", + "UMask": "0x01", + "Unit": "IIO" + }, + { + "BriefDescription": "PCI Express bandwidth writing at IIO, part 1", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x02", + "ScaleUnit": "4Bytes", + "UMask": "0x01", + "Unit": "IIO" + }, + { + "BriefDescription": "PCI Express bandwidth writing at IIO, part 2", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x04", + "ScaleUnit": "4Bytes", + "UMask": "0x01", + "Unit": "IIO" + }, + { + "BriefDescription": "PCI Express bandwidth writing at IIO, part 3", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x08", + "ScaleUnit": "4Bytes", + "UMask": "0x01", + "Unit": "IIO" + }, + { + "BriefDescription": "PCI Express bandwidth reading at IIO, part 1", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x02", + "ScaleUnit": "4Bytes", + "UMask": "0x04", + "Unit": "IIO" + }, + { + "BriefDescription": "PCI Express bandwidth reading at IIO, part 2", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x04", + "ScaleUnit": "4Bytes", + "UMask": "0x04", + "Unit": "IIO" + }, + { + "BriefDescription": "PCI Express bandwidth reading at IIO, part 3", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x08", + "ScaleUnit": "4Bytes", + "UMask": "0x04", + "Unit": "IIO" + }, + { + "BriefDescription": "TOR Inserts; CRd misses from local IA", + 
"Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD", + "PerPkg": "1", + "PublicDescription": "TOR Inserts; Code read from local IA that misses in the snoop filter", + "UMask": "0xC80FFE01", + "UMaskExt": "0xC80FFE", + "Unit": "CHA" + }, + { + "BriefDescription": "TOR Inserts; CRd Pref misses from local IA", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF", + "PerPkg": "1", + "PublicDescription": "TOR Inserts; Code read prefetch from local IA that misses in the snoop filter", + "UMask": "0xC88FFE01", + "UMaskExt": "0xC88FFE", + "Unit": "CHA" + }, + { + "BriefDescription": "TOR Inserts; DRd Opt misses from local IA", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT", + "PerPkg": "1", + "PublicDescription": "TOR Inserts; Data read opt from local IA that misses in the snoop filter", + "UMask": "0xC827FE01", + "UMaskExt": "0xC827FE", + "Unit": "CHA" + }, + { + "BriefDescription": "TOR Inserts; DRd Opt Pref misses from local IA", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF", + "PerPkg": "1", + "PublicDescription": "TOR Inserts; Data read opt prefetch from local IA that misses in the snoop filter", + "UMask": "0xC8A7FE01", + "UMaskExt": "0xC8A7FE", + "Unit": "CHA" + }, + { + "BriefDescription": "TOR Inserts; RFO misses from local IA", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO", + "PerPkg": "1", + "PublicDescription": "TOR Inserts; Read for ownership from local IA that misses in the snoop filter", + "UMask": "0xC807FE01", + "UMaskExt": "0xC807FE", + "Unit": "CHA" + }, + { + "BriefDescription": "TOR Inserts; RFO pref misses from local IA", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF", + "PerPkg": "1", + "PublicDescription": "TOR Inserts; Read for ownership prefetch from local IA that misses in the snoop filter", + "UMask": "0xC887FE01", + "UMaskExt": "0xC887FE", + "Unit": "CHA" + }, + { + "BriefDescription": "TOR Inserts; WCiL misses from local IA", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL", + "PerPkg": "1", + "PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter", + "UMask": "0xC86FFE01", + "UMaskExt": "0xC86FFE", + "Unit": "CHA" + }, + { + "BriefDescription": "TOR Inserts; WCiLF misses from local IA", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x35", + "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF", + "PerPkg": "1", + "PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter", + "UMask": "0xC867FE01", + "UMaskExt": "0xC867FE", + "Unit": "CHA" + }, + { + "BriefDescription": "Clockticks of the integrated IO (IIO) traffic controller", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x01", + "EventName": "UNC_IIO_CLOCKTICKS", + "PerPkg": "1", + "PublicDescription": "Clockticks of the integrated IO (IIO) traffic controller", + "Unit": "IIO" + }, + { + "BriefDescription": "Data requested of the CPU : Card reading from DRAM", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": 
"UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART4", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x10", + "PublicDescription": "Data requested of the CPU : Card reading from DRAM : Number of DWs (4 bytes) the card requests of the main die. Includes all requests initiated by the Card, including reads and writes. : x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0", + "UMask": "0x04", + "Unit": "IIO" + }, + { + "BriefDescription": "Data requested of the CPU : Card reading from DRAM", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART5", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x20", + "PublicDescription": "Data requested of the CPU : Card reading from DRAM : Number of DWs (4 bytes) the card requests of the main die. Includes all requests initiated by the Card, including reads and writes. : x4 card is plugged in to slot 1", + "UMask": "0x04", + "Unit": "IIO" + }, + { + "BriefDescription": "Data requested of the CPU : Card reading from DRAM", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART6", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x40", + "PublicDescription": "Data requested of the CPU : Card reading from DRAM : Number of DWs (4 bytes) the card requests of the main die. Includes all requests initiated by the Card, including reads and writes. : x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 1", + "UMask": "0x04", + "Unit": "IIO" + }, + { + "BriefDescription": "Data requested of the CPU : Card reading from DRAM", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART7", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x80", + "PublicDescription": "Data requested of the CPU : Card reading from DRAM : Number of DWs (4 bytes) the card requests of the main die. Includes all requests initiated by the Card, including reads and writes. : x4 card is plugged in to slot 3", + "UMask": "0x04", + "Unit": "IIO" + }, + { + "BriefDescription": "Data requested of the CPU : Card writing to DRAM", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART4", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x10", + "PublicDescription": "Data requested of the CPU : Card writing to DRAM : Number of DWs (4 bytes) the card requests of the main die. Includes all requests initiated by the Card, including reads and writes. : x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0", + "UMask": "0x01", + "Unit": "IIO" + }, + { + "BriefDescription": "Data requested of the CPU : Card writing to DRAM", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART5", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x20", + "PublicDescription": "Data requested of the CPU : Card writing to DRAM : Number of DWs (4 bytes) the card requests of the main die. Includes all requests initiated by the Card, including reads and writes. 
: x4 card is plugged in to slot 1", + "UMask": "0x01", + "Unit": "IIO" + }, + { + "BriefDescription": "Data requested of the CPU : Card writing to DRAM", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART6", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x40", + "PublicDescription": "Data requested of the CPU : Card writing to DRAM : Number of DWs (4 bytes) the card requests of the main die. Includes all requests initiated by the Card, including reads and writes. : x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 1", + "UMask": "0x01", + "Unit": "IIO" + }, + { + "BriefDescription": "Data requested of the CPU : Card writing to DRAM", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x83", + "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART7", + "FCMask": "0x07", + "PerPkg": "1", + "PortMask": "0x80", + "PublicDescription": "Data requested of the CPU : Card writing to DRAM : Number of DWs (4 bytes) the card requests of the main die. Includes all requests initiated by the Card, including reads and writes. : x4 card is plugged in to slot 3", + "UMask": "0x01", + "Unit": "IIO" + }, + { + "BriefDescription": "Clockticks of the IO coherency tracker (IRP)", + "Counter": "0,1", + "CounterType": "PGMABLE", + "EventCode": "0x01", + "EventName": "UNC_I_CLOCKTICKS", + "PerPkg": "1", + "PublicDescription": "Clockticks of the IO coherency tracker (IRP)", + "Unit": "IRP" + }, + { + "BriefDescription": "Clockticks of the mesh to memory (M2M)", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventName": "UNC_M2M_CLOCKTICKS", + "PerPkg": "1", + "PublicDescription": "Clockticks of the mesh to memory (M2M)", + "Unit": "M2M" + }, + { + "BriefDescription": "Clockticks of the mesh to PCI (M2P)", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventCode": "0x01", + "EventName": "UNC_M2P_CLOCKTICKS", + "PerPkg": "1", + "PublicDescription": "Clockticks of the mesh to PCI (M2P)", + "Unit": "M2PCIe" + }, + { + "BriefDescription": "Clockticks in the UBOX using a dedicated 48-bit Fixed Counter", + "Counter": "FIXED", + "CounterType": "PGMABLE", + "EventCode": "0xff", + "EventName": "UNC_U_CLOCKTICKS", + "PerPkg": "1", + "PublicDescription": "Clockticks in the UBOX using a dedicated 48-bit Fixed Counter", + "Unit": "UBOX" + } +] diff --git a/tools/perf/pmu-events/arch/x86/tremontx/uncore-power.json b/tools/perf/pmu-events/arch/x86/tremontx/uncore-power.json new file mode 100644 index 000000000000..ea62c092b43f --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/tremontx/uncore-power.json @@ -0,0 +1,11 @@ +[ + { + "BriefDescription": "Clockticks of the power control unit (PCU)", + "Counter": "0,1,2,3", + "CounterType": "PGMABLE", + "EventName": "UNC_P_CLOCKTICKS", + "PerPkg": "1", + "PublicDescription": "Clockticks of the power control unit (PCU)", + "Unit": "PCU" + } +] diff --git a/tools/perf/pmu-events/arch/x86/tremontx/virtual-memory.json b/tools/perf/pmu-events/arch/x86/tremontx/virtual-memory.json new file mode 100644 index 000000000000..93e407a0f645 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/tremontx/virtual-memory.json @@ -0,0 +1,86 @@ +[ + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts page walks completed due to demand data loads (including SW prefetches) whose address translations missed in all TLB levels and were mapped to 4K pages. 
The page walks can end with or without a page fault.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Page walk completed due to a demand load to a 4K page." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts page walks completed due to demand data loads (including SW prefetches) whose address translations missed in all TLB levels and were mapped to 2M or 4M pages. The page walks can end with or without a page fault.", + "EventCode": "0x08", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Page walk completed due to a demand load to a 2M or 4M page." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 4K pages. The page walks can end with or without a page fault.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walk completed due to a demand data store to a 4K page." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 2M or 4M pages. The page walks can end with or without a page fault.", + "EventCode": "0x49", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walk completed due to a demand data store to a 2M or 4M page." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts the number of times the machine was unable to find a translation in the Instruction Translation Lookaside Buffer (ITLB) and new translation was filled into the ITLB. The event is speculative in nature, but will not count translations (page walks) that are begun and not finished, or translations that are finished but not filled into the ITLB.", + "EventCode": "0x81", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB.FILLS", + "PDIR_COUNTER": "na", + "SampleAfterValue": "200003", + "BriefDescription": "Counts the number of times there was an ITLB miss and a new translation was filled into the ITLB." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts page walks completed due to instruction fetches whose address translations missed in the TLB and were mapped to 4K pages. The page walks can end with or without a page fault.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x2", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_COMPLETED_4K", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walk completed due to an instruction fetch in a 4K page." + }, + { + "CollectPEBSRecord": "2", + "PublicDescription": "Counts page walks completed due to instruction fetches whose address translations missed in the TLB and were mapped to 2M or 4M pages. 
The page walks can end with or without a page fault.", + "EventCode": "0x85", + "Counter": "0,1,2,3", + "UMask": "0x4", + "PEBScounters": "0,1,2,3", + "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M", + "PDIR_COUNTER": "na", + "SampleAfterValue": "2000003", + "BriefDescription": "Page walk completed due to an instruction fetch in a 2M or 4M page." + } +] \ No newline at end of file
From: Kim Phillips <kim.phillips@amd.com>
mainline inclusion
from mainline-v5.4-rc1
commit 0c03d3aa255b5d3a7b64051a79f6e9f487194a9f
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Remove the redundant '['.
'perf list' output before:
  ex_ret_brn
       [[Retired Branch Instructions]
'perf list' output after:
  ex_ret_brn
       [Retired Branch Instructions]
Fixes: 98c07a8f74f8 ("perf vendor events amd: perf PMU events for AMD Family 17h")
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Janakarajan Natarajan <janakarajan.natarajan@amd.com>
Cc: Jin Yao <yao.jin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Luke Mujica <lukemujica@google.com>
Cc: Martin Liška <mliska@suse.cz>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20190919204306.12598-2-kim.phillips@amd.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 tools/perf/pmu-events/arch/x86/amdfam17h/core.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/core.json b/tools/perf/pmu-events/arch/x86/amdfam17h/core.json
index 7b285b0a7f35..1079544eeed5 100644
--- a/tools/perf/pmu-events/arch/x86/amdfam17h/core.json
+++ b/tools/perf/pmu-events/arch/x86/amdfam17h/core.json
@@ -13,7 +13,7 @@
   {
     "EventName": "ex_ret_brn",
     "EventCode": "0xc2",
-    "BriefDescription": "[Retired Branch Instructions.",
+    "BriefDescription": "Retired Branch Instructions.",
     "PublicDescription": "The number of branch instructions retired. This includes all types of architectural control flow changes, including exceptions and interrupts."
   },
   {
From: Vijay Thakkar <vijaythakkar@me.com>
mainline inclusion
from mainline-v5.7-rc1
commit c5f18e9e94bad244115dc5e47f27bd061ecc5552
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
This patch changes the previous blanket detection of AMD Family 17h processors to be more specific to Zen1 core based products only, by replacing the model detection regex pattern [[:xdigit:]]+ with ([12][0-9A-F]|[0-9A-F]), restricting matches to models 0 through 2f only.
This change is required to allow for the addition of separate PMU events for Zen2 core based models in the following patches, as those belong to family 17h but have different PMCs. The existing PMU events directory has also been renamed from "amdfam17h" to "amdzen1" to reflect this specificity.
Note that although this change does not break PMU counters for existing Zen1 based systems, it does disable the current set of counters for Zen2 based systems. Counters for Zen2 are added in the following patches in this patchset.
Signed-off-by: Vijay Thakkar <vijaythakkar@me.com>
Acked-by: Kim Phillips <kim.phillips@amd.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Jon Grimm <jon.grimm@amd.com>
Cc: Martin Liška <mliska@suse.cz>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200318190002.307290-2-vijaythakkar@me.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 .../perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/branch.json         | 0
 .../perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/cache.json          | 0
 tools/perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/core.json         | 0
 .../arch/x86/{amdfam17h => amdzen1}/floating-point.json                 | 0
 .../perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/memory.json         | 0
 .../perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/other.json          | 0
 tools/perf/pmu-events/arch/x86/mapfile.csv                              | 2 +-
 7 files changed, 1 insertion(+), 1 deletion(-)
 rename tools/perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/branch.json (100%)
 rename tools/perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/cache.json (100%)
 rename tools/perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/core.json (100%)
 rename tools/perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/floating-point.json (100%)
 rename tools/perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/memory.json (100%)
 rename tools/perf/pmu-events/arch/x86/{amdfam17h => amdzen1}/other.json (100%)
diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/branch.json b/tools/perf/pmu-events/arch/x86/amdzen1/branch.json
similarity index 100%
rename from tools/perf/pmu-events/arch/x86/amdfam17h/branch.json
rename to tools/perf/pmu-events/arch/x86/amdzen1/branch.json
diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/cache.json b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
similarity index 100%
rename from tools/perf/pmu-events/arch/x86/amdfam17h/cache.json
rename to tools/perf/pmu-events/arch/x86/amdzen1/cache.json
diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/core.json b/tools/perf/pmu-events/arch/x86/amdzen1/core.json
similarity index 100%
rename from tools/perf/pmu-events/arch/x86/amdfam17h/core.json
rename to tools/perf/pmu-events/arch/x86/amdzen1/core.json
diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/floating-point.json b/tools/perf/pmu-events/arch/x86/amdzen1/floating-point.json
similarity index 100%
rename from tools/perf/pmu-events/arch/x86/amdfam17h/floating-point.json
rename to tools/perf/pmu-events/arch/x86/amdzen1/floating-point.json
diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/memory.json b/tools/perf/pmu-events/arch/x86/amdzen1/memory.json
similarity index 100%
rename from tools/perf/pmu-events/arch/x86/amdfam17h/memory.json
rename to tools/perf/pmu-events/arch/x86/amdzen1/memory.json
diff --git a/tools/perf/pmu-events/arch/x86/amdfam17h/other.json b/tools/perf/pmu-events/arch/x86/amdzen1/other.json
similarity index 100%
rename from tools/perf/pmu-events/arch/x86/amdfam17h/other.json
rename to tools/perf/pmu-events/arch/x86/amdzen1/other.json
diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv
index 745ced083844..82a9db00125e 100644
--- a/tools/perf/pmu-events/arch/x86/mapfile.csv
+++ b/tools/perf/pmu-events/arch/x86/mapfile.csv
@@ -36,4 +36,4 @@ GenuineIntel-6-55-[56789ABCDEF],v1,cascadelakex,core
 GenuineIntel-6-7D,v1,icelake,core
 GenuineIntel-6-7E,v1,icelake,core
 GenuineIntel-6-86,v1,tremontx,core
-AuthenticAMD-23-[[:xdigit:]]+,v1,amdfam17h,core
+AuthenticAMD-23-([12][0-9A-F]|[0-9A-F]),v1,amdzen1,core
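[Editorial aside, not part of the patch.] A minimal C sketch of how a mapfile.csv pattern like the one in the hunk above selects an events directory. perf matches its generated cpuid string against these patterns with POSIX regcomp()/regexec() and, roughly speaking, only takes an entry when the pattern covers the whole cpuid string (x86 adds some leniency for a trailing stepping field). The cpuid strings below are illustrative assumptions for family 17h models 1, 2F and 71:

#include <regex.h>
#include <stdio.h>
#include <string.h>

/* Sketch of perf's mapfile matching: an entry applies only when the
 * pattern matches the entire cpuid string, not just a prefix of it. */
static int cpuid_matches(const char *pattern, const char *cpuid)
{
	regex_t re;
	regmatch_t m;
	int ok = 0;

	if (regcomp(&re, pattern, REG_EXTENDED))
		return 0;
	if (!regexec(&re, cpuid, 1, &m, 0))
		ok = (m.rm_eo - m.rm_so) == (regoff_t)strlen(cpuid);
	regfree(&re);
	return ok;
}

int main(void)
{
	const char *pat = "AuthenticAMD-23-([12][0-9A-F]|[0-9A-F])";
	/* Hypothetical cpuid strings for family 17h models 1, 2F, 71. */
	const char *ids[] = {
		"AuthenticAMD-23-1",	/* Zen1 -> match */
		"AuthenticAMD-23-2F",	/* Zen1 -> match */
		"AuthenticAMD-23-71",	/* Zen2 -> no match */
	};

	for (unsigned int i = 0; i < sizeof(ids) / sizeof(ids[0]); i++)
		printf("%-20s %s\n", ids[i],
		       cpuid_matches(pat, ids[i]) ? "match" : "no match");
	return 0;
}

With the whole-string check, model 71h does not fall through to the amdzen1 entry even though the lone '7' would satisfy [0-9A-F], which is what leaves room for a separate amdzen2 entry in the next patch.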
From: Vijay Thakkar <vijaythakkar@me.com>
mainline inclusion
from mainline-v5.7-rc1
commit 2079f7aa0a49d3ae83a82f70785a28b07bb9b16b
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
This patch adds PMU events for AMD Zen2 core based processors, namely, Matisse (model 71h), Castle Peak (model 31h) and Rome (model 2xh), as documented in the AMD Processor Programming Reference for Matisse [1]. The model number regex has been set to detect all the models under family 17h that do not match those of Zen1, as the model range is larger for Zen2.
Zen2 adds some additional counters that are not present in Zen1, and events for them have been added in this patch. Some counters that were previously present in Zen1 have also been removed for Zen2, having been confirmed to always sample zero on Zen2. These added/removed counters have been omitted for brevity but can be found here: https://gist.github.com/thakkarV/5b12ca5fd7488eb2c42e451e40bdd5f3
Note that the PPR for Zen2 [1] does not include some counters that were documented in the PPR for Zen1 based processors [2]. After testing these counters, those that still work on Zen2 systems have been preserved in the Zen2 events. The counters that are omitted in [1] but are still measurable and non-zero on Zen2 (tested on a Ryzen 3900X system) are the following:
PMC 0x000 fpu_pipe_assignment.{total|total0|total1|total2|total3}
PMC 0x004 fp_num_mov_elim_scal_op.*
PMC 0x046 ls_tablewalker.*
PMC 0x062 l2_latency.l2_cycles_waiting_on_fills
PMC 0x063 l2_wcb_req.*
PMC 0x06D l2_fill_pending.l2_fill_busy
PMC 0x080 ic_fw32
PMC 0x081 ic_fw32_miss
PMC 0x086 bp_snp_re_sync
PMC 0x087 ic_fetch_stall.*
PMC 0x08C ic_cache_inval.*
PMC 0x099 bp_tlb_rel
PMC 0x0C7 ex_ret_brn_resync
PMC 0x28A ic_oc_mode_switch.*
L3PMC 0x001 l3_request_g1.*
L3PMC 0x006 l3_comb_clstr_state.*
[1]: Processor Programming Reference (PPR) for AMD Family 17h Model 71h, Revision B0 Processors, 56176 Rev 3.06 - Jul 17, 2019
[2]: Processor Programming Reference (PPR) for AMD Family 17h Models 01h,08h, Revision B2 Processors, 54945 Rev 3.03 - Jun 14, 2019
All of the PPRs can be found at:
https://bugzilla.kernel.org/show_bug.cgi?id=206537
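[Editorial aside, not part of the patch.] The commit does not show how the leftover counters above were probed; one plausible way, sketched below under assumptions, is to program the raw event with perf_event_open(2) and check whether it ever counts. The example uses PMCx080 (ic_fw32), which takes no unit mask, so the raw config is just the event select; event numbers above 0xFF would need the extended AMD event-select bits and are not handled here. This is an illustration, not the author's test procedure.

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* No glibc wrapper exists for perf_event_open(2). */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	volatile double x = 1.0;	/* keeps the loop from being optimized out */
	uint64_t count = 0;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_RAW;
	attr.config = 0x80;	/* hypothetical probe: PMCx080 ic_fw32, no umask */
	attr.disabled = 1;
	attr.exclude_kernel = 1;

	fd = perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	for (int i = 0; i < 10000000; i++)
		x *= 1.0000001;
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	(void)x;

	if (read(fd, &count, sizeof(count)) != sizeof(count))
		return 1;
	/* A count that stays zero across varied workloads suggests the
	 * counter is dead on this part. */
	printf("PMCx080 counted %llu\n", (unsigned long long)count);
	close(fd);
	return 0;
}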
Here are the results of running "fpu_pipe_assignment.total" events on my Ryzen 3900X family 17h model 71h system:
Before this patch:
$> perf list *fpu_pipe_assignment*
List of pre-defined events (to be used in -e):
After:
$> perf list *fpu_pipe_assignment*
floating point:
  fpu_pipe_assignment.total
       [Total number of fp uOps]
  fpu_pipe_assignment.total0
       [Total number uOps assigned to pipe 0]
  fpu_pipe_assignment.total1
       [Total number uOps assigned to pipe 1]
  fpu_pipe_assignment.total2
       [Total number uOps assigned to pipe 2]
  fpu_pipe_assignment.total3
       [Total number uOps assigned to pipe 3]
Metric Groups:
$> perf stat -e fpu_pipe_assignment.total sleep 1
Performance counter stats for 'sleep 1':
25,883 fpu_pipe_assignment.total
1.004145868 seconds time elapsed
0.001805000 seconds user 0.000000000 seconds sys
Usage tests while running Linpack in the background:
$> perf stat -I1000 -e fpu_pipe_assignment.total
    1.000266796     79,313,191,516      fpu_pipe_assignment.total
    2.000809630     68,091,474,430      fpu_pipe_assignment.total
    3.001028115     52,925,023,174      fpu_pipe_assignment.total
$> perf record -e fpu_pipe_assignment.total,fpu_pipe_assignment.total0 -a sleep 1
[ perf record: Woken up 9 times to write data ]
[ perf record: Captured and wrote 4.031 MB perf.data (64764 samples) ]
$> perf report --stdio --no-header | head -30
    98.33%  xhpl             xhpl                 [.] dgemm_kernel
     0.28%  xhpl             xhpl                 [.] dtrsm_kernel_LT
     0.10%  xhpl             [kernel.kallsyms]    [k] entry_SYSCALL_64
     0.08%  xhpl             xhpl                 [.] idamax_k
     0.07%  baloo_file_extr  liblmdb.so           [.] mdb_mid2l_insert
     0.06%  xhpl             xhpl                 [.] dgemm_itcopy
     0.06%  xhpl             xhpl                 [.] dgemm_oncopy
     0.06%  xhpl             [kernel.kallsyms]    [k] __schedule
     0.06%  xhpl             [kernel.kallsyms]    [k] syscall_trace_enter
     0.06%  xhpl             [kernel.kallsyms]    [k] native_sched_clock
     0.06%  xhpl             [kernel.kallsyms]    [k] pick_next_task_fair
     0.05%  xhpl             xhpl                 [.] blas_thread_server.llvm.15009391670273914865
     0.04%  xhpl             [kernel.kallsyms]    [k] do_syscall_64
     0.04%  xhpl             [kernel.kallsyms]    [k] yield_task_fair
     0.04%  xhpl             libpthread-2.31.so   [.] __pthread_mutex_unlock_usercnt
     0.03%  xhpl             [kernel.kallsyms]    [k] cpuacct_charge
     0.03%  xhpl             [kernel.kallsyms]    [k] syscall_return_via_sysret
     0.03%  xhpl             libc-2.31.so         [.] __sched_yield
     0.03%  xhpl             [kernel.kallsyms]    [k] __calc_delta
$> perf annotate --stdio2 dgemm_kernel | egrep '^ {0,2}[0-9]+' -B2 -A2
        sub    $0x60,%rsp
        mov    %rbx,(%rsp)
  0.00  mov    %rbp,0x8(%rsp)
        mov    %r12,0x10(%rsp)
  0.00  mov    %r13,0x18(%rsp)
        mov    %r14,0x20(%rsp)
        mov    %r15,0x28(%rsp)
--
        mov    %rdi,%r13
        mov    %rsi,0x28(%rsp)
  0.00  mov    %rdx,%r12
        vmovsd %xmm0,0x30(%rsp)
        shl    $0x3,%r10
        mov    0x28(%rsp),%rax
  0.00  xor    %rdx,%rdx
        mov    $0x18,%rdi
        div    %rdi
--
        nop
  a0:   mov    %r12,%rax
  0.00  shl    $0x3,%rax
        mov    %r8,%rdi
        lea    (%r8,%rax,8),%r15
--
        mov    %r12,%rax
        nop
  0.00  c0:   vmovups (%rdi),%ymm1
  0.09  vmovups 0x20(%rdi),%ymm2
  0.02  vmovups (%r15),%ymm3
  0.10  vmovups %ymm1,(%rsi)
  0.07  vmovups %ymm2,0x20(%rsi)
  0.07  vmovups %ymm3,0x40(%rsi)
  0.06  add    $0x40,%rdi
        add    $0x40,%r15
        add    $0x60,%rsi
  0.00  dec    %rax
      ↑ jne    c0
        mov    %r9,%r15
--
        nop
  110:  lea    0x80(%rsp),%rsi
  0.01  add    $0x60,%rsi
  0.03  mov    %r12,%rax
  0.00  sar    $0x3,%rax
        cmp    $0x2,%rax
      ↓ jl     d26
        prefetcht0 0x200(%rdi)
  0.01  vmovups -0x60(%rsi),%ymm1
  0.02  prefetcht0 0xa0(%rsi)
  0.00  vbroadcastsd -0x80(%rdi),%ymm0
  0.00  prefetcht0 0xe0(%rsi)
  0.03  vmovups -0x40(%rsi),%ymm2
  0.00  prefetcht0 0x120(%rsi)
        vmovups -0x20(%rsi),%ymm3
        vmulpd %ymm0,%ymm1,%ymm4
  0.01  prefetcht0 0x160(%rsi)
        vmulpd %ymm0,%ymm2,%ymm8
  0.01  vmulpd %ymm0,%ymm3,%ymm12
  0.02  prefetcht0 0x1a0(%rsi)
  0.01  vbroadcastsd -0x78(%rdi),%ymm0
        vmulpd %ymm0,%ymm1,%ymm5
  0.01  vmulpd %ymm0,%ymm2,%ymm9
        vmulpd %ymm0,%ymm3,%ymm13
  0.01  vbroadcastsd -0x70(%rdi),%ymm0
        vmulpd %ymm0,%ymm1,%ymm6
  0.00  vmulpd %ymm0,%ymm2,%ymm10
  0.00  add    $0x60,%rsi
... snip ...
        nop
  65e0: vmovddup -0x60(%rsi),%xmm2
  0.00  vmovups -0x80(%rdi),%xmm0
        vmovups -0x70(%rdi),%xmm1
  0.00  vmovddup -0x58(%rsi),%xmm3
        vfmadd231pd %xmm0,%xmm2,%xmm4
  0.00  vfmadd231pd %xmm1,%xmm2,%xmm5
  0.00  vfmadd231pd %xmm0,%xmm3,%xmm6
  0.00  vfmadd231pd %xmm1,%xmm3,%xmm7
  0.00  add    $0x10,%rsi
        add    $0x20,%rdi
  0.00  dec    %rax
      ↑ jne    65e0
        nop
        nop
  6620: vmovddup 0x30(%rsp),%xmm0
  0.00  vmulpd %xmm0,%xmm4,%xmm4
  0.00  vmulpd %xmm0,%xmm5,%xmm5
        vmulpd %xmm0,%xmm6,%xmm6
        vmulpd %xmm0,%xmm7,%xmm7
        vaddpd (%r15),%xmm4,%xmm4
        vaddpd 0x10(%r15),%xmm5,%xmm5
  0.00  vaddpd (%r15,%r10,1),%xmm6,%xmm6
  0.00  vaddpd 0x10(%r15,%r10,1),%xmm7,%xmm7
  0.00  vmovups %xmm4,(%r15)
        vmovups %xmm5,0x10(%r15)
  0.00  vmovups %xmm6,(%r15,%r10,1)
        vmovups %xmm7,0x10(%r15,%r10,1)
        add    $0x20,%r15
--
        lea    (%r8,%rax,8),%r8
  69d8: mov    0x20(%rsp),%r14
  0.00  test   $0x1,%r14
      ↓ je     6d84
        mov    %r9,%r15
--
        vbroadcastsd -0x28(%rsi),%ymm3
        vfmadd231pd (%rdi),%ymm0,%ymm4
  0.00  vfmadd231pd 0x20(%rdi),%ymm1,%ymm5
        vfmadd231pd 0x40(%rdi),%ymm2,%ymm6
        vfmadd231pd 0x60(%rdi),%ymm3,%ymm7
--
        vmulpd %ymm0,%ymm4,%ymm4
        vaddpd (%r15),%ymm4,%ymm4
  0.00  vmovups %ymm4,(%r15)
        add    $0x20,%r15
        dec    %r11
--
        mov    %rbx,%rsp
        mov    (%rsp),%rbx
  0.01  mov    0x8(%rsp),%rbp
        mov    0x10(%rsp),%r12
        mov    0x18(%rsp),%r13
Signed-off-by: Vijay Thakkar <vijaythakkar@me.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Kim Phillips <kim.phillips@amd.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Jon Grimm <jon.grimm@amd.com>
Cc: Martin Liška <mliska@suse.cz>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200318190002.307290-3-vijaythakkar@me.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 .../pmu-events/arch/x86/amdzen2/branch.json   |  52 +++
 .../pmu-events/arch/x86/amdzen2/cache.json    | 338 +++++++++++++++++
 .../pmu-events/arch/x86/amdzen2/core.json     | 130 +++++++
 .../arch/x86/amdzen2/floating-point.json      | 140 +++++++
 .../pmu-events/arch/x86/amdzen2/memory.json   | 341 ++++++++++++++++++
 .../pmu-events/arch/x86/amdzen2/other.json    | 115 ++++++
 tools/perf/pmu-events/arch/x86/mapfile.csv    |   1 +
 7 files changed, 1117 insertions(+)
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/branch.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/cache.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/core.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/floating-point.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/memory.json
 create mode 100644 tools/perf/pmu-events/arch/x86/amdzen2/other.json
diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/branch.json b/tools/perf/pmu-events/arch/x86/amdzen2/branch.json new file mode 100644 index 000000000000..ef4166a66288 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdzen2/branch.json @@ -0,0 +1,52 @@ +[ + { + "EventName": "bp_l1_btb_correct", + "EventCode": "0x8a", + "BriefDescription": "L1 Branch Prediction Overrides Existing Prediction (speculative)." + }, + { + "EventName": "bp_l2_btb_correct", + "EventCode": "0x8b", + "BriefDescription": "L2 Branch Prediction Overrides Existing Prediction (speculative)." + }, + { + "EventName": "bp_dyn_ind_pred", + "EventCode": "0x8e", + "BriefDescription": "Dynamic Indirect Predictions.", + "PublicDescription": "Indirect Branch Prediction for potential multi-target branch (speculative)." + }, + { + "EventName": "bp_de_redirect", + "EventCode": "0x91", + "BriefDescription": "Decoder Overrides Existing Branch Prediction (speculative)." + }, + { + "EventName": "bp_l1_tlb_fetch_hit", + "EventCode": "0x94", + "BriefDescription": "The number of instruction fetches that hit in the L1 ITLB.", + "UMask": "0xFF" + }, + { + "EventName": "bp_l1_tlb_fetch_hit.if1g", + "EventCode": "0x94", + "BriefDescription": "The number of instruction fetches that hit in the L1 ITLB. Instruction fetches to a 1GB page.", + "UMask": "0x4" + }, + { + "EventName": "bp_l1_tlb_fetch_hit.if2m", + "EventCode": "0x94", + "BriefDescription": "The number of instruction fetches that hit in the L1 ITLB. Instruction fetches to a 2MB page.", + "UMask": "0x2" + }, + { + "EventName": "bp_l1_tlb_fetch_hit.if4k", + "EventCode": "0x94", + "BriefDescription": "The number of instruction fetches that hit in the L1 ITLB. Instruction fetches to a 4KB page.", + "UMask": "0x1" + }, + { + "EventName": "bp_tlb_rel", + "EventCode": "0x99", + "BriefDescription": "The number of ITLB reload requests." + } +] diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/cache.json b/tools/perf/pmu-events/arch/x86/amdzen2/cache.json new file mode 100644 index 000000000000..1c60bfa0f00b --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdzen2/cache.json @@ -0,0 +1,338 @@ +[ + { + "EventName": "l2_request_g1.rd_blk_l", + "EventCode": "0x60", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). Data cache reads (including hardware and software prefetch).", + "UMask": "0x80" + }, + { + "EventName": "l2_request_g1.rd_blk_x", + "EventCode": "0x60", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). Data cache stores.", + "UMask": "0x40" + }, + { + "EventName": "l2_request_g1.ls_rd_blk_c_s", + "EventCode": "0x60", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). Data cache shared reads.", + "UMask": "0x20" + }, + { + "EventName": "l2_request_g1.cacheable_ic_read", + "EventCode": "0x60", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). Instruction cache reads.", + "UMask": "0x10" + }, + { + "EventName": "l2_request_g1.change_to_x", + "EventCode": "0x60", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). Data cache state change requests. Request change to writable, check L2 for current state.", + "UMask": "0x8" + }, + { + "EventName": "l2_request_g1.prefetch_l2_cmd", + "EventCode": "0x60", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). PrefetchL2Cmd.", + "UMask": "0x4" + }, + { + "EventName": "l2_request_g1.l2_hw_pf", + "EventCode": "0x60", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). L2 Prefetcher. 
All prefetches accepted by L2 pipeline, hit or miss. Types of PF and L2 hit/miss broken out in a separate perfmon event.", + "UMask": "0x2" + }, + { + "EventName": "l2_request_g1.group2", + "EventCode": "0x60", + "BriefDescription": "Miscellaneous events covered in more detail by l2_request_g2 (PMCx061).", + "UMask": "0x1" + }, + { + "EventName": "l2_request_g2.group1", + "EventCode": "0x61", + "BriefDescription": "Miscellaneous events covered in more detail by l2_request_g1 (PMCx060).", + "UMask": "0x80" + }, + { + "EventName": "l2_request_g2.ls_rd_sized", + "EventCode": "0x61", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Data cache read sized.", + "UMask": "0x40" + }, + { + "EventName": "l2_request_g2.ls_rd_sized_nc", + "EventCode": "0x61", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Data cache read sized non-cacheable.", + "UMask": "0x20" + }, + { + "EventName": "l2_request_g2.ic_rd_sized", + "EventCode": "0x61", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Instruction cache read sized.", + "UMask": "0x10" + }, + { + "EventName": "l2_request_g2.ic_rd_sized_nc", + "EventCode": "0x61", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Instruction cache read sized non-cacheable.", + "UMask": "0x8" + }, + { + "EventName": "l2_request_g2.smc_inval", + "EventCode": "0x61", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Self-modifying code invalidates.", + "UMask": "0x4" + }, + { + "EventName": "l2_request_g2.bus_locks_originator", + "EventCode": "0x61", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Bus locks.", + "UMask": "0x2" + }, + { + "EventName": "l2_request_g2.bus_locks_responses", + "EventCode": "0x61", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Bus lock response.", + "UMask": "0x1" + }, + { + "EventName": "l2_latency.l2_cycles_waiting_on_fills", + "EventCode": "0x62", + "BriefDescription": "Total cycles spent waiting for L2 fills to complete from L3 or memory, divided by four. Event counts are for both threads. To calculate average latency, the number of fills from both threads must be used.", + "UMask": "0x1" + }, + { + "EventName": "l2_wcb_req.wcb_write", + "EventCode": "0x63", + "BriefDescription": "LS to L2 WCB write requests. LS (Load/Store unit) to L2 WCB (Write Combining Buffer) write requests.", + "UMask": "0x40" + }, + { + "EventName": "l2_wcb_req.wcb_close", + "EventCode": "0x63", + "BriefDescription": "LS to L2 WCB close requests. LS (Load/Store unit) to L2 WCB (Write Combining Buffer) close requests.", + "UMask": "0x20" + }, + { + "EventName": "l2_wcb_req.zero_byte_store", + "EventCode": "0x63", + "BriefDescription": "LS to L2 WCB zero byte store requests. LS (Load/Store unit) to L2 WCB (Write Combining Buffer) zero byte store requests.", + "UMask": "0x4" + }, + { + "EventName": "l2_wcb_req.cl_zero", + "EventCode": "0x63", + "BriefDescription": "LS to L2 WCB cache line zeroing requests. LS (Load/Store unit) to L2 WCB (Write Combining Buffer) cache line zeroing requests.", + "UMask": "0x1" + }, + { + "EventName": "l2_cache_req_stat.ls_rd_blk_cs", + "EventCode": "0x64", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Data cache shared read hit in L2", + "UMask": "0x80" + }, + { + "EventName": "l2_cache_req_stat.ls_rd_blk_l_hit_x", + "EventCode": "0x64", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). 
Data cache read hit in L2.", + "UMask": "0x40" + }, + { + "EventName": "l2_cache_req_stat.ls_rd_blk_l_hit_s", + "EventCode": "0x64", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Data cache read hit on shared line in L2.", + "UMask": "0x20" + }, + { + "EventName": "l2_cache_req_stat.ls_rd_blk_x", + "EventCode": "0x64", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Data cache store or state change hit in L2.", + "UMask": "0x10" + }, + { + "EventName": "l2_cache_req_stat.ls_rd_blk_c", + "EventCode": "0x64", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Data cache request miss in L2 (all types).", + "UMask": "0x8" + }, + { + "EventName": "l2_cache_req_stat.ic_fill_hit_x", + "EventCode": "0x64", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Instruction cache hit modifiable line in L2.", + "UMask": "0x4" + }, + { + "EventName": "l2_cache_req_stat.ic_fill_hit_s", + "EventCode": "0x64", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Instruction cache hit clean line in L2.", + "UMask": "0x2" + }, + { + "EventName": "l2_cache_req_stat.ic_fill_miss", + "EventCode": "0x64", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Instruction cache request miss in L2.", + "UMask": "0x1" + }, + { + "EventName": "l2_fill_pending.l2_fill_busy", + "EventCode": "0x6d", + "BriefDescription": "Cycles with fill pending from L2. Total cycles spent with one or more fill requests in flight from L2.", + "UMask": "0x1" + }, + { + "EventName": "l2_pf_hit_l2", + "EventCode": "0x70", + "BriefDescription": "L2 prefetch hit in L2.", + "UMask": "0xff" + }, + { + "EventName": "l2_pf_miss_l2_hit_l3", + "EventCode": "0x71", + "BriefDescription": "L2 prefetcher hits in L3. Counts all L2 prefetches accepted by the L2 pipeline which miss the L2 cache and hit the L3.", + "UMask": "0xff" + }, + { + "EventName": "l2_pf_miss_l2_l3", + "EventCode": "0x72", + "BriefDescription": "L2 prefetcher misses in L3. All L2 prefetches accepted by the L2 pipeline which miss the L2 and the L3 caches.", + "UMask": "0xff" + }, + { + "EventName": "ic_fw32", + "EventCode": "0x80", + "BriefDescription": "The number of 32B fetch windows transferred from IC pipe to DE instruction decoder (includes non-cacheable and cacheable fill responses)." + }, + { + "EventName": "ic_fw32_miss", + "EventCode": "0x81", + "BriefDescription": "The number of 32B fetch windows tried to read the L1 IC and missed in the full tag." + }, + { + "EventName": "ic_cache_fill_l2", + "EventCode": "0x82", + "BriefDescription": "The number of 64 byte instruction cache line was fulfilled from the L2 cache." + }, + { + "EventName": "ic_cache_fill_sys", + "EventCode": "0x83", + "BriefDescription": "The number of 64 byte instruction cache line fulfilled from system memory or another cache." + }, + { + "EventName": "bp_l1_tlb_miss_l2_hit", + "EventCode": "0x84", + "BriefDescription": "The number of instruction fetches that miss in the L1 ITLB but hit in the L2 ITLB." 
+ }, + { + "EventName": "bp_l1_tlb_miss_l2_tlb_miss", + "EventCode": "0x85", + "BriefDescription": "The number of instruction fetches that miss in both the L1 and L2 TLBs.", + "UMask": "0xff" + }, + { + "EventName": "bp_l1_tlb_miss_l2_tlb_miss.if1g", + "EventCode": "0x85", + "BriefDescription": "The number of instruction fetches that miss in both the L1 and L2 TLBs. Instruction fetches to a 1GB page.", + "UMask": "0x4" + }, + { + "EventName": "bp_l1_tlb_miss_l2_tlb_miss.if2m", + "EventCode": "0x85", + "BriefDescription": "The number of instruction fetches that miss in both the L1 and L2 TLBs. Instruction fetches to a 2MB page.", + "UMask": "0x2" + }, + { + "EventName": "bp_l1_tlb_miss_l2_tlb_miss.if4k", + "EventCode": "0x85", + "BriefDescription": "The number of instruction fetches that miss in both the L1 and L2 TLBs. Instruction fetches to a 4KB page.", + "UMask": "0x1" + }, + { + "EventName": "bp_snp_re_sync", + "EventCode": "0x86", + "BriefDescription": "The number of pipeline restarts caused by invalidating probes that hit on the instruction stream currently being executed. This would happen if the active instruction stream was being modified by another processor in an MP system - typically a highly unlikely event." + }, + { + "EventName": "ic_fetch_stall.ic_stall_any", + "EventCode": "0x87", + "BriefDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle for any reason (nothing valid in pipe ICM1).", + "UMask": "0x4" + }, + { + "EventName": "ic_fetch_stall.ic_stall_dq_empty", + "EventCode": "0x87", + "BriefDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle (including IC to OC fetches) due to DQ empty.", + "UMask": "0x2" + }, + { + "EventName": "ic_fetch_stall.ic_stall_back_pressure", + "EventCode": "0x87", + "BriefDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle (including IC to OC fetches) due to back-pressure.", + "UMask": "0x1" + }, + { + "EventName": "ic_cache_inval.l2_invalidating_probe", + "EventCode": "0x8c", + "BriefDescription": "IC line invalidated due to L2 invalidating probe (external or LS). The number of instruction cache lines invalidated. A non-SMC event is CMC (cross modifying code), either from the other thread of the core or another core.", + "UMask": "0x2" + }, + { + "EventName": "ic_cache_inval.fill_invalidated", + "EventCode": "0x8c", + "BriefDescription": "IC line invalidated due to overwriting fill response. The number of instruction cache lines invalidated. A non-SMC event is CMC (cross modifying code), either from the other thread of the core or another core.", + "UMask": "0x1" + }, + { + "EventName": "ic_oc_mode_switch.oc_ic_mode_switch", + "EventCode": "0x28a", + "BriefDescription": "OC Mode Switch. OC to IC mode switch.", + "UMask": "0x2" + }, + { + "EventName": "ic_oc_mode_switch.ic_oc_mode_switch", + "EventCode": "0x28a", + "BriefDescription": "OC Mode Switch. 
IC to OC mode switch.", + "UMask": "0x1" + }, + { + "EventName": "l3_request_g1.caching_l3_cache_accesses", + "EventCode": "0x01", + "BriefDescription": "Caching: L3 cache accesses", + "UMask": "0x80", + "Unit": "L3PMC" + }, + { + "EventName": "l3_lookup_state.all_l3_req_typs", + "EventCode": "0x04", + "BriefDescription": "All L3 Request Types", + "UMask": "0xff", + "Unit": "L3PMC" + }, + { + "EventName": "l3_comb_clstr_state.other_l3_miss_typs", + "EventCode": "0x06", + "BriefDescription": "Other L3 Miss Request Types", + "UMask": "0xfe", + "Unit": "L3PMC" + }, + { + "EventName": "l3_comb_clstr_state.request_miss", + "EventCode": "0x06", + "BriefDescription": "L3 cache misses", + "UMask": "0x01", + "Unit": "L3PMC" + }, + { + "EventName": "xi_sys_fill_latency", + "EventCode": "0x90", + "BriefDescription": "L3 Cache Miss Latency. Total cycles for all transactions divided by 16. Ignores SliceMask and ThreadMask.", + "UMask": "0x00", + "Unit": "L3PMC" + }, + { + "EventName": "xi_ccx_sdp_req1.all_l3_miss_req_typs", + "EventCode": "0x9A", + "BriefDescription": "All L3 Miss Request Types. Ignores SliceMask and ThreadMask.", + "UMask": "0x3f", + "Unit": "L3PMC" + } +] diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/core.json b/tools/perf/pmu-events/arch/x86/amdzen2/core.json new file mode 100644 index 000000000000..de89e5a44ff1 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdzen2/core.json @@ -0,0 +1,130 @@ +[ + { + "EventName": "ex_ret_instr", + "EventCode": "0xc0", + "BriefDescription": "Retired Instructions." + }, + { + "EventName": "ex_ret_cops", + "EventCode": "0xc1", + "BriefDescription": "Retired Uops.", + "PublicDescription": "The number of micro-ops retired. This count includes all processor activity (instructions, exceptions, interrupts, microcode assists, etc.). The number of events logged per cycle can vary from 0 to 8." + }, + { + "EventName": "ex_ret_brn", + "EventCode": "0xc2", + "BriefDescription": "Retired Branch Instructions.", + "PublicDescription": "The number of branch instructions retired. This includes all types of architectural control flow changes, including exceptions and interrupts." + }, + { + "EventName": "ex_ret_brn_misp", + "EventCode": "0xc3", + "BriefDescription": "Retired Branch Instructions Mispredicted.", + "PublicDescription": "The number of branch instructions retired, of any type, that were not correctly predicted. This includes those for which prediction is not attempted (far control transfers, exceptions and interrupts)." + }, + { + "EventName": "ex_ret_brn_tkn", + "EventCode": "0xc4", + "BriefDescription": "Retired Taken Branch Instructions.", + "PublicDescription": "The number of taken branches that were retired. This includes all types of architectural control flow changes, including exceptions and interrupts." + }, + { + "EventName": "ex_ret_brn_tkn_misp", + "EventCode": "0xc5", + "BriefDescription": "Retired Taken Branch Instructions Mispredicted.", + "PublicDescription": "The number of retired taken branch instructions that were mispredicted." + }, + { + "EventName": "ex_ret_brn_far", + "EventCode": "0xc6", + "BriefDescription": "Retired Far Control Transfers.", + "PublicDescription": "The number of far control transfers retired including far call/jump/return, IRET, SYSCALL and SYSRET, plus exceptions and interrupts. Far control transfers are not subject to branch prediction." 
+ }, + { + "EventName": "ex_ret_brn_resync", + "EventCode": "0xc7", + "BriefDescription": "Retired Branch Resyncs.", + "PublicDescription": "The number of resync branches. These reflect pipeline restarts due to certain microcode assists and events such as writes to the active instruction stream, among other things. Each occurrence reflects a restart penalty similar to a branch mispredict. This is relatively rare." + }, + { + "EventName": "ex_ret_near_ret", + "EventCode": "0xc8", + "BriefDescription": "Retired Near Returns.", + "PublicDescription": "The number of near return instructions (RET or RET Iw) retired." + }, + { + "EventName": "ex_ret_near_ret_mispred", + "EventCode": "0xc9", + "BriefDescription": "Retired Near Returns Mispredicted.", + "PublicDescription": "The number of near returns retired that were not correctly predicted by the return address predictor. Each such mispredict incurs the same penalty as a mispredicted conditional branch instruction." + }, + { + "EventName": "ex_ret_brn_ind_misp", + "EventCode": "0xca", + "BriefDescription": "Retired Indirect Branch Instructions Mispredicted." + }, + { + "EventName": "ex_ret_mmx_fp_instr.sse_instr", + "EventCode": "0xcb", + "BriefDescription": "SSE instructions (SSE, SSE2, SSE3, SSSE3, SSE4A, SSE41, SSE42, AVX).", + "PublicDescription": "The number of MMX, SSE or x87 instructions retired. The UnitMask allows the selection of the individual classes of instructions as given in the table. Each increment represents one complete instruction. Since this event includes non-numeric instructions it is not suitable for measuring MFLOPS. SSE instructions (SSE, SSE2, SSE3, SSSE3, SSE4A, SSE41, SSE42, AVX).", + "UMask": "0x4" + }, + { + "EventName": "ex_ret_mmx_fp_instr.mmx_instr", + "EventCode": "0xcb", + "BriefDescription": "MMX instructions.", + "PublicDescription": "The number of MMX, SSE or x87 instructions retired. The UnitMask allows the selection of the individual classes of instructions as given in the table. Each increment represents one complete instruction. Since this event includes non-numeric instructions it is not suitable for measuring MFLOPS. MMX instructions.", + "UMask": "0x2" + }, + { + "EventName": "ex_ret_mmx_fp_instr.x87_instr", + "EventCode": "0xcb", + "BriefDescription": "x87 instructions.", + "PublicDescription": "The number of MMX, SSE or x87 instructions retired. The UnitMask allows the selection of the individual classes of instructions as given in the table. Each increment represents one complete instruction. Since this event includes non-numeric instructions it is not suitable for measuring MFLOPS. x87 instructions.", + "UMask": "0x1" + }, + { + "EventName": "ex_ret_cond", + "EventCode": "0xd1", + "BriefDescription": "Retired Conditional Branch Instructions." + }, + { + "EventName": "ex_ret_cond_misp", + "EventCode": "0xd2", + "BriefDescription": "Retired Conditional Branch Instructions Mispredicted." + }, + { + "EventName": "ex_div_busy", + "EventCode": "0xd3", + "BriefDescription": "Div Cycles Busy count." + }, + { + "EventName": "ex_div_count", + "EventCode": "0xd4", + "BriefDescription": "Div Op Count." + }, + { + "EventName": "ex_tagged_ibs_ops.ibs_count_rollover", + "EventCode": "0x1cf", + "BriefDescription": "Tagged IBS Ops. Number of times an op could not be tagged by IBS because of a previous tagged op that has not retired.", + "UMask": "0x4" + }, + { + "EventName": "ex_tagged_ibs_ops.ibs_tagged_ops_ret", + "EventCode": "0x1cf", + "BriefDescription": "Tagged IBS Ops. 
Number of Ops tagged by IBS that retired.", + "UMask": "0x2" + }, + { + "EventName": "ex_tagged_ibs_ops.ibs_tagged_ops", + "EventCode": "0x1cf", + "BriefDescription": "Tagged IBS Ops. Number of Ops tagged by IBS.", + "UMask": "0x1" + }, + { + "EventName": "ex_ret_fus_brnch_inst", + "EventCode": "0x1d0", + "BriefDescription": "Retired Fused Instructions. The number of fuse-branch instructions retired per cycle. The number of events logged per cycle can vary from 0-8.", + } +] diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/floating-point.json b/tools/perf/pmu-events/arch/x86/amdzen2/floating-point.json new file mode 100644 index 000000000000..622a0c420e46 --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdzen2/floating-point.json @@ -0,0 +1,140 @@ +[ + { + "EventName": "fpu_pipe_assignment.total", + "EventCode": "0x00", + "BriefDescription": "Total number of fp uOps.", + "PublicDescription": "Total number of fp uOps. The number of operations (uOps) dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS.", + "UMask": "0xf" + }, + { + "EventName": "fpu_pipe_assignment.total3", + "EventCode": "0x00", + "BriefDescription": "Total number uOps assigned to pipe 3.", + "PublicDescription": "The number of operations (uOps) dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one-cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number uOps assigned to pipe 3.", + "UMask": "0x8" + }, + { + "EventName": "fpu_pipe_assignment.total2", + "EventCode": "0x00", + "BriefDescription": "Total number uOps assigned to pipe 2.", + "PublicDescription": "The number of operations (uOps) dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number uOps assigned to pipe 2.", + "UMask": "0x4" + }, + { + "EventName": "fpu_pipe_assignment.total1", + "EventCode": "0x00", + "BriefDescription": "Total number uOps assigned to pipe 1.", + "PublicDescription": "The number of operations (uOps) dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. 
Total number uOps assigned to pipe 1.", + "UMask": "0x2" + }, + { + "EventName": "fpu_pipe_assignment.total0", + "EventCode": "0x00", + "BriefDescription": "Total number of fp uOps on pipe 0.", + "PublicDescription": "The number of operations (uOps) dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number uOps assigned to pipe 0.", + "UMask": "0x1" + }, + { + "EventName": "fp_ret_sse_avx_ops.all", + "EventCode": "0x03", + "BriefDescription": "All FLOPS. This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15.", + "UMask": "0xff" + }, + { + "EventName": "fp_ret_sse_avx_ops.mac_flops", + "EventCode": "0x03", + "BriefDescription": "Multiply-add FLOPS. Multiply-add counts as 2 FLOPS. This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15.", + "PublicDescription": "", + "UMask": "0x8" + }, + { + "EventName": "fp_ret_sse_avx_ops.div_flops", + "EventCode": "0x03", + "BriefDescription": "Divide/square root FLOPS. This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15.", + "UMask": "0x4" + }, + { + "EventName": "fp_ret_sse_avx_ops.mult_flops", + "EventCode": "0x03", + "BriefDescription": "Multiply FLOPS. This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15.", + "UMask": "0x2" + }, + { + "EventName": "fp_ret_sse_avx_ops.add_sub_flops", + "EventCode": "0x03", + "BriefDescription": "Add/subtract FLOPS. This is a retire-based event. The number of retired SSE/AVX FLOPS. The number of events logged per cycle can vary from 0 to 64. This event can count above 15.", + "UMask": "0x1" + }, + { + "EventName": "fp_num_mov_elim_scal_op.optimized", + "EventCode": "0x04", + "BriefDescription": "Number of Scalar Ops optimized. This is a dispatch based speculative event, and is useful for measuring the effectiveness of the Move elimination and Scalar code optimization schemes.", + "UMask": "0x8" + }, + { + "EventName": "fp_num_mov_elim_scal_op.opt_potential", + "EventCode": "0x04", + "BriefDescription": "Number of Ops that are candidates for optimization (have Z-bit either set or pass). This is a dispatch based speculative event, and is useful for measuring the effectiveness of the Move elimination and Scalar code optimization schemes.", + "UMask": "0x4" + }, + { + "EventName": "fp_num_mov_elim_scal_op.sse_mov_ops_elim", + "EventCode": "0x04", + "BriefDescription": "Number of SSE Move Ops eliminated. This is a dispatch based speculative event, and is useful for measuring the effectiveness of the Move elimination and Scalar code optimization schemes.", + "UMask": "0x2" + }, + { + "EventName": "fp_num_mov_elim_scal_op.sse_mov_ops", + "EventCode": "0x04", + "BriefDescription": "Number of SSE Move Ops. 
This is a dispatch based speculative event, and is useful for measuring the effectiveness of the Move elimination and Scalar code optimization schemes.", + "UMask": "0x1" + }, + { + "EventName": "fp_retired_ser_ops.sse_bot_ret", + "EventCode": "0x05", + "BriefDescription": "SSE bottom-executing uOps retired. The number of serializing Ops retired.", + "UMask": "0x8" + }, + { + "EventName": "fp_retired_ser_ops.sse_ctrl_ret", + "EventCode": "0x05", + "BriefDescription": "The number of serializing Ops retired. SSE control word mispredict traps due to mispredictions in RC, FTZ or DAZ, or changes in mask bits.", + "UMask": "0x4" + }, + { + "EventName": "fp_retired_ser_ops.x87_bot_ret", + "EventCode": "0x05", + "BriefDescription": "x87 bottom-executing uOps retired. The number of serializing Ops retired.", + "UMask": "0x2" + }, + { + "EventName": "fp_retired_ser_ops.x87_ctrl_ret", + "EventCode": "0x05", + "BriefDescription": "x87 control word mispredict traps due to mispredictions in RC or PC, or changes in mask bits. The number of serializing Ops retired.", + "UMask": "0x1" + }, + { + "EventName": "fp_disp_faults.ymm_spill_fault", + "EventCode": "0x0e", + "BriefDescription": "Floating Point Dispatch Faults. YMM spill fault.", + "UMask": "0x8" + }, + { + "EventName": "fp_disp_faults.ymm_fill_fault", + "EventCode": "0x0e", + "BriefDescription": "Floating Point Dispatch Faults. YMM fill fault.", + "UMask": "0x4" + }, + { + "EventName": "fp_disp_faults.xmm_fill_fault", + "EventCode": "0x0e", + "BriefDescription": "Floating Point Dispatch Faults. XMM fill fault.", + "UMask": "0x2" + }, + { + "EventName": "fp_disp_faults.x87_fill_fault", + "EventCode": "0x0e", + "BriefDescription": "Floating Point Dispatch Faults. x87 fill fault.", + "UMask": "0x1" + } +] diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/memory.json b/tools/perf/pmu-events/arch/x86/amdzen2/memory.json new file mode 100644 index 000000000000..715046b339cb --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdzen2/memory.json @@ -0,0 +1,341 @@ +[ + { + "EventName": "ls_bad_status2.stli_other", + "EventCode": "0x24", + "BriefDescription": "Non-forwardable conflict; used to reduce STLI's via software. All reasons. Store To Load Interlock (STLI) are loads that were unable to complete because of a possible match with an older store, and the older store could not do STLF for some reason.", + "PublicDescription" : "Store-to-load conflicts: A load was unable to complete due to a non-forwardable conflict with an older store. Most commonly, a load's address range partially but not completely overlaps with an uncompleted older store. Software can avoid this problem by using same-size and same-alignment loads and stores when accessing the same data. Vector/SIMD code is particularly susceptible to this problem; software should construct wide vector stores by manipulating vector elements in registers using shuffle/blend/swap instructions prior to storing to memory, instead of using narrow element-by-element stores.", + "UMask": "0x2" + }, + { + "EventName": "ls_locks.spec_lock_hi_spec", + "EventCode": "0x25", + "BriefDescription": "Retired lock instructions. High speculative cacheable lock speculation succeeded.", + "UMask": "0x8" + }, + { + "EventName": "ls_locks.spec_lock_lo_spec", + "EventCode": "0x25", + "BriefDescription": "Retired lock instructions. Low speculative cacheable lock speculation succeeded.", + "UMask": "0x4" + }, + { + "EventName": "ls_locks.non_spec_lock", + "EventCode": "0x25", + "BriefDescription": "Retired lock instructions. 
Non-speculative lock succeeded.", + "UMask": "0x2" + }, + { + "EventName": "ls_locks.bus_lock", + "EventCode": "0x25", + "BriefDescription": "Retired lock instructions. Bus lock when a locked operations crosses a cache boundary or is done on an uncacheable memory type. Comparable to legacy bus lock.", + "UMask": "0x1" + }, + { + "EventName": "ls_ret_cl_flush", + "EventCode": "0x26", + "BriefDescription": "Number of retired CLFLUSH instructions." + }, + { + "EventName": "ls_ret_cpuid", + "EventCode": "0x27", + "BriefDescription": "Number of retired CPUID instructions." + }, + { + "EventName": "ls_dispatch.ld_st_dispatch", + "EventCode": "0x29", + "BriefDescription": "Dispatch of a single op that performs a load from and store to the same memory address. Number of single ops that do load/store to an address.", + "UMask": "0x4" + }, + { + "EventName": "ls_dispatch.store_dispatch", + "EventCode": "0x29", + "BriefDescription": "Number of stores dispatched. Counts the number of operations dispatched to the LS unit. Unit Masks ADDed.", + "UMask": "0x2" + }, + { + "EventName": "ls_dispatch.ld_dispatch", + "EventCode": "0x29", + "BriefDescription": "Number of loads dispatched. Counts the number of operations dispatched to the LS unit. Unit Masks ADDed.", + "UMask": "0x1" + }, + { + "EventName": "ls_smi_rx", + "EventCode": "0x2B", + "BriefDescription": "Number of SMIs received." + }, + { + "EventName": "ls_int_taken", + "EventCode": "0x2C", + "BriefDescription": "Number of interrupts taken." + }, + { + "EventName": "ls_rdtsc", + "EventCode": "0x2D", + "BriefDescription": "Number of reads of the TSC (RDTSC instructions). The count is speculative." + }, + { + "EventName": "ls_stlf", + "EventCode": "0x35", + "BriefDescription": "Number of STLF hits." + }, + { + "EventName": "ls_st_commit_cancel2.st_commit_cancel_wcb_full", + "EventCode": "0x37", + "BriefDescription": "A non-cacheable store and the non-cacheable commit buffer is full." + }, + { + "EventName": "ls_dc_accesses", + "EventCode": "0x40", + "BriefDescription": "Number of accesses to the dcache for load/store references.", + "PublicDescription": "The number of accesses to the data cache for load and store references. This may include certain microcode scratchpad accesses, although these are generally rare. Each increment represents an eight-byte access, although the instruction may only be accessing a portion of that. This event is a speculative event." + }, + { + "EventName": "ls_mab_alloc.dc_prefetcher", + "EventCode": "0x41", + "BriefDescription": "LS MAB Allocates by Type. DC prefetcher.", + "UMask": "0x8" + }, + { + "EventName": "ls_mab_alloc.stores", + "EventCode": "0x41", + "BriefDescription": "LS MAB Allocates by Type. Stores.", + "UMask": "0x2" + }, + { + "EventName": "ls_mab_alloc.loads", + "EventCode": "0x41", + "BriefDescription": "LS MAB Allocates by Type. Loads.", + "UMask": "0x1" + }, + { + "EventName": "ls_refills_from_sys.ls_mabresp_rmt_dram", + "EventCode": "0x43", + "BriefDescription": "Demand Data Cache Fills by Data Source. DRAM or IO from different die.", + "UMask": "0x40" + }, + { + "EventName": "ls_refills_from_sys.ls_mabresp_rmt_cache", + "EventCode": "0x43", + "BriefDescription": "Demand Data Cache Fills by Data Source. Hit in cache; Remote CCX and the address's Home Node is on a different die.", + "UMask": "0x10" + }, + { + "EventName": "ls_refills_from_sys.ls_mabresp_lcl_dram", + "EventCode": "0x43", + "BriefDescription": "Demand Data Cache Fills by Data Source. 
DRAM or IO from this thread's die.", + "UMask": "0x8" + }, + { + "EventName": "ls_refills_from_sys.ls_mabresp_lcl_cache", + "EventCode": "0x43", + "BriefDescription": "Demand Data Cache Fills by Data Source. Hit in cache; local CCX (not Local L2), or Remote CCX and the address's Home Node is on this thread's die.", + "UMask": "0x2" + }, + { + "EventName": "ls_refills_from_sys.ls_mabresp_lcl_l2", + "EventCode": "0x43", + "BriefDescription": "Demand Data Cache Fills by Data Source. Local L2 hit.", + "UMask": "0x1" + }, + { + "EventName": "ls_l1_d_tlb_miss.all", + "EventCode": "0x45", + "BriefDescription": "All L1 DTLB Misses or Reloads.", + "UMask": "0xff" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_1g_l2_miss", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss. DTLB reload to a 1G page that miss in the L2 TLB.", + "UMask": "0x80" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_2m_l2_miss", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss. DTLB reload to a 2M page that miss in the L2 TLB.", + "UMask": "0x40" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_coalesced_page_miss", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss. DTLB reload coalesced page miss.", + "UMask": "0x20" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_4k_l2_miss", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss. DTLB reload to a 4K page that miss the L2 TLB.", + "UMask": "0x10" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_1g_l2_hit", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss. DTLB reload to a 1G page that hit in the L2 TLB.", + "UMask": "0x8" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_2m_l2_hit", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss. DTLB reload to a 2M page that hit in the L2 TLB.", + "UMask": "0x4" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_coalesced_page_hit", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss. DTLB reload hit a coalesced page.", + "UMask": "0x2" + }, + { + "EventName": "ls_l1_d_tlb_miss.tlb_reload_4k_l2_hit", + "EventCode": "0x45", + "BriefDescription": "L1 DTLB Miss. DTLB reload to a 4K page that hit in the L2 TLB.", + "UMask": "0x1" + }, + { + "EventName": "ls_tablewalker.iside", + "EventCode": "0x46", + "BriefDescription": "Total Page Table Walks on I-side.", + "UMask": "0xc" + }, + { + "EventName": "ls_tablewalker.ic_type1", + "EventCode": "0x46", + "BriefDescription": "Total Page Table Walks IC Type 1.", + "UMask": "0x8" + }, + { + "EventName": "ls_tablewalker.ic_type0", + "EventCode": "0x46", + "BriefDescription": "Total Page Table Walks IC Type 0.", + "UMask": "0x4" + }, + { + "EventName": "ls_tablewalker.dside", + "EventCode": "0x46", + "BriefDescription": "Total Page Table Walks on D-side.", + "UMask": "0x3" + }, + { + "EventName": "ls_tablewalker.dc_type1", + "EventCode": "0x46", + "BriefDescription": "Total Page Table Walks DC Type 1.", + "UMask": "0x2" + }, + { + "EventName": "ls_tablewalker.dc_type0", + "EventCode": "0x46", + "BriefDescription": "Total Page Table Walks DC Type 0.", + "UMask": "0x1" + }, + { + "EventName": "ls_misal_accesses", + "EventCode": "0x47", + "BriefDescription": "Misaligned loads." + }, + { + "EventName": "ls_pref_instr_disp", + "EventCode": "0x4b", + "BriefDescription": "Software Prefetch Instructions Dispatched (Speculative).", + "UMask": "0xff" + }, + { + "EventName": "ls_pref_instr_disp.prefetch_nta", + "EventCode": "0x4b", + "BriefDescription": "Software Prefetch Instructions Dispatched (Speculative). 
PrefetchNTA instruction. See docAPM3 PREFETCHlevel.", + "UMask": "0x4" + }, + { + "EventName": "ls_pref_instr_disp.prefetch_w", + "EventCode": "0x4b", + "BriefDescription": "Software Prefetch Instructions Dispatched (Speculative). See docAPM3 PREFETCHW.", + "UMask": "0x2" + }, + { + "EventName": "ls_pref_instr_disp.prefetch", + "EventCode": "0x4b", + "BriefDescription": "Software Prefetch Instructions Dispatched (Speculative). Prefetch_T0_T1_T2. PrefetchT0, T1 and T2 instructions. See docAPM3 PREFETCHlevel.", + "UMask": "0x1" + }, + { + "EventName": "ls_inef_sw_pref.mab_mch_cnt", + "EventCode": "0x52", + "BriefDescription": "The number of software prefetches that did not fetch data outside of the processor core. Software PREFETCH instruction saw a match on an already-allocated miss request buffer.", + "UMask": "0x2" + }, + { + "EventName": "ls_inef_sw_pref.data_pipe_sw_pf_dc_hit", + "EventCode": "0x52", + "BriefDescription": "The number of software prefetches that did not fetch data outside of the processor core. Software PREFETCH instruction saw a DC hit.", + "UMask": "0x1" + }, + { + "EventName": "ls_sw_pf_dc_fill.ls_mabresp_rmt_dram", + "EventCode": "0x59", + "BriefDescription": "Software Prefetch Data Cache Fills by Data Source. From DRAM (home node remote).", + "UMask": "0x40" + }, + { + "EventName": "ls_sw_pf_dc_fill.ls_mabresp_rmt_cache", + "EventCode": "0x59", + "BriefDescription": "Software Prefetch Data Cache Fills by Data Source. From another cache (home node remote).", + "UMask": "0x10" + }, + { + "EventName": "ls_sw_pf_dc_fill.ls_mabresp_lcl_dram", + "EventCode": "0x59", + "BriefDescription": "Software Prefetch Data Cache Fills by Data Source. DRAM or IO from this thread's die. From DRAM (home node local).", + "UMask": "0x8" + }, + { + "EventName": "ls_sw_pf_dc_fill.ls_mabresp_lcl_cache", + "EventCode": "0x59", + "BriefDescription": "Software Prefetch Data Cache Fills by Data Source. From another cache (home node local).", + "UMask": "0x2" + }, + { + "EventName": "ls_sw_pf_dc_fill.ls_mabresp_lcl_l2", + "EventCode": "0x59", + "BriefDescription": "Software Prefetch Data Cache Fills by Data Source. Local L2 hit.", + "UMask": "0x1" + }, + { + "EventName": "ls_hw_pf_dc_fill.ls_mabresp_rmt_dram", + "EventCode": "0x5A", + "BriefDescription": "Hardware Prefetch Data Cache Fills by Data Source. From DRAM (home node remote).", + "UMask": "0x40" + }, + { + "EventName": "ls_hw_pf_dc_fill.ls_mabresp_rmt_cache", + "EventCode": "0x5A", + "BriefDescription": "Hardware Prefetch Data Cache Fills by Data Source. From another cache (home node remote).", + "UMask": "0x10" + }, + { + "EventName": "ls_hw_pf_dc_fill.ls_mabresp_lcl_dram", + "EventCode": "0x5A", + "BriefDescription": "Hardware Prefetch Data Cache Fills by Data Source. From DRAM (home node local).", + "UMask": "0x8" + }, + { + "EventName": "ls_hw_pf_dc_fill.ls_mabresp_lcl_cache", + "EventCode": "0x5A", + "BriefDescription": "Hardware Prefetch Data Cache Fills by Data Source. From another cache (home node local).", + "UMask": "0x2" + }, + { + "EventName": "ls_hw_pf_dc_fill.ls_mabresp_lcl_l2", + "EventCode": "0x5A", + "BriefDescription": "Hardware Prefetch Data Cache Fills by Data Source. Local L2 hit.", + "UMask": "0x1" + }, + { + "EventName": "ls_not_halted_cyc", + "EventCode": "0x76", + "BriefDescription": "Cycles not in Halt." 
+ }, + { + "EventName": "ls_tlb_flush", + "EventCode": "0x78", + "BriefDescription": "All TLB Flushes" + } +] diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/other.json b/tools/perf/pmu-events/arch/x86/amdzen2/other.json new file mode 100644 index 000000000000..e94994d4a60e --- /dev/null +++ b/tools/perf/pmu-events/arch/x86/amdzen2/other.json @@ -0,0 +1,115 @@ +[ + { + "EventName": "de_dis_uop_queue_empty_di0", + "EventCode": "0xa9", + "BriefDescription": "Cycles where the Micro-Op Queue is empty." + }, + { + "EventName": "de_dis_uops_from_decoder", + "EventCode": "0xaa", + "BriefDescription": "Ops dispatched from either the decoders, OpCache or both.", + "UMask": "0xff" + }, + { + "EventName": "de_dis_uops_from_decoder.opcache_dispatched", + "EventCode": "0xaa", + "BriefDescription": "Count of dispatched Ops from OpCache.", + "UMask": "0x2" + }, + { + "EventName": "de_dis_uops_from_decoder.decoder_dispatched", + "EventCode": "0xaa", + "BriefDescription": "Count of dispatched Ops from Decoder.", + "UMask": "0x1" + }, + { + "EventName": "de_dis_dispatch_token_stalls1.fp_misc_rsrc_stall", + "EventCode": "0xae", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. FP Miscellaneous resource unavailable. Applies to the recovery of mispredicts with FP ops.", + "UMask": "0x80" + }, + { + "EventName": "de_dis_dispatch_token_stalls1.fp_sch_rsrc_stall", + "EventCode": "0xae", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. FP scheduler resource stall. Applies to ops that use the FP scheduler.", + "UMask": "0x40" + }, + { + "EventName": "de_dis_dispatch_token_stalls1.fp_reg_file_rsrc_stall", + "EventCode": "0xae", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. Floating point register file resource stall. Applies to all FP ops that have a destination register.", + "UMask": "0x20" + }, + { + "EventName": "de_dis_dispatch_token_stalls1.taken_branch_buffer_rsrc_stall", + "EventCode": "0xae", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. Taken branch buffer resource stall.", + "UMask": "0x10" + }, + { + "EventName": "de_dis_dispatch_token_stalls1.int_sched_misc_token_stall", + "EventCode": "0xae", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. Integer Scheduler miscellaneous resource stall.", + "UMask": "0x8" + }, + { + "EventName": "de_dis_dispatch_token_stalls1.store_queue_token_stall", + "EventCode": "0xae", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. Store queue resource stall. Applies to all ops with store semantics.", + "UMask": "0x4" + }, + { + "EventName": "de_dis_dispatch_token_stalls1.load_queue_token_stall", + "EventCode": "0xae", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. Load queue resource stall. Applies to all ops with load semantics.", + "UMask": "0x2" + }, + { + "EventName": "de_dis_dispatch_token_stalls1.int_phy_reg_file_token_stall", + "EventCode": "0xae", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. Integer Physical Register File resource stall. 
Applies to all ops that have an integer destination register.", + "UMask": "0x1" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.sc_agu_dispatch_stall", + "EventCode": "0xaf", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. SC AGU dispatch stall.", + "UMask": "0x40" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.retire_token_stall", + "EventCode": "0xaf", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. RETIRE Tokens unavailable.", + "UMask": "0x20" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.agsq_token_stall", + "EventCode": "0xaf", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. AGSQ Tokens unavailable.", + "UMask": "0x10" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.alu_token_stall", + "EventCode": "0xaf", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALU tokens total unavailable.", + "UMask": "0x8" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.alsq3_0_token_stall", + "EventCode": "0xaf", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ3_0_TokenStall.", + "UMask": "0x4" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.alsq2_token_stall", + "EventCode": "0xaf", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ 2 Tokens unavailable.", + "UMask": "0x2" + }, + { + "EventName": "de_dis_dispatch_token_stalls0.alsq1_token_stall", + "EventCode": "0xaf", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ 1 Tokens unavailable.", + "UMask": "0x1" + } +] diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv index 82a9db00125e..244a36e37a3a 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -37,3 +37,4 @@ GenuineIntel-6-7D,v1,icelake,core GenuineIntel-6-7E,v1,icelake,core GenuineIntel-6-86,v1,tremontx,core AuthenticAMD-23-([12][0-9A-F]|[0-9A-F]),v1,amdzen1,core +AuthenticAMD-23-[[:xdigit:]]+,v1,amdzen2,core
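Note the ordering of the two AuthenticAMD rows added above: perf selects the first mapfile.csv row whose regex matches the CPU's vendor-family-model string, so the narrower amdzen1 row (Family 17h, models 00h-2Fh) must stay above the new amdzen2 catch-all. A rough Python sketch of that first-match selection; the "AuthenticAMD-23-8"-style CPUID strings and the fullmatch anchoring here are simplifying assumptions for illustration, not perf's exact matching code:

import re

# mapfile.csv rows in file order after this patch (regex fragment, event table)
MAPFILE = [
    ("AuthenticAMD-23-([12][0-9A-F]|[0-9A-F])", "amdzen1"),  # Family 17h, models 00h-2Fh
    ("AuthenticAMD-23-[[:xdigit:]]+", "amdzen2"),            # remaining Family 17h models
]

def pick_table(cpuid):
    """Return the event table for a vendor-family-model string; first match wins."""
    for pattern, table in MAPFILE:
        # Python's re lacks POSIX character classes, so translate [[:xdigit:]] first.
        pattern = pattern.replace("[[:xdigit:]]", "[0-9A-Fa-f]")
        if re.fullmatch(pattern, cpuid):
            return table
    return None

print(pick_table("AuthenticAMD-23-18"))  # model 18h -> amdzen1
print(pick_table("AuthenticAMD-23-71"))  # model 71h -> amdzen2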
From: Vijay Thakkar <vijaythakkar@me.com>
mainline inclusion
from mainline-v5.7-rc1
commit b5b8a7cf141acba6588d2a43cda0dd258299f902
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
This patch updates the PMCs for AMD Zen1 core based processors (Family 17h, models 00h-2Fh) to match the PMCs documented in the latest versions of the AMD Processor Programming Reference [1], [2] and [3]. Note that some events, such as the FPU pipe assignment events, are missing from [1]; [3] is therefore included for full coverage of events.
PMCs added:
fpu_pipe_assignment.dual{0|1|2|3}
fpu_pipe_assignment.total{0|1|2|3}
ls_mab_alloc.dc_prefetcher
ls_mab_alloc.stores
ls_mab_alloc.loads
bp_dyn_ind_pred
bp_de_redirect
PMC removed:
ex_ret_cond_misp
The cumulative counts, fpu_pipe_assignment.total and fpu_pipe_assignment.dual, existed in v1, but did not expose the per-pipe (port-level) counters.
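The split is easy to see in the unit masks: each new per-pipe event takes one bit of the cumulative event's mask, so ORing the per-pipe masks reproduces the v1 masks. A minimal sanity-check sketch, with values copied from the fpu_pipe_assignment entries in this series (illustrative only, not part of the patch):

# Per-pipe unit masks from the fpu_pipe_assignment JSON entries in this series.
TOTAL_PER_PIPE = {"total0": 0x1, "total1": 0x2, "total2": 0x4, "total3": 0x8}
DUAL_PER_PIPE = {"dual0": 0x10, "dual1": 0x20, "dual2": 0x40, "dual3": 0x80}

def combine(masks):
    """OR the per-pipe unit masks into one cumulative mask."""
    result = 0
    for mask in masks.values():
        result |= mask
    return result

# The cumulative events carried over from v1 use exactly these masks.
assert combine(TOTAL_PER_PIPE) == 0xf    # fpu_pipe_assignment.total
assert combine(DUAL_PER_PIPE) == 0xf0    # fpu_pipe_assignment.dual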
ex_ret_cond_misp has been removed because it no longer appears in the latest versions of the PPR and, when tested on a Ryzen 3400G system, always sampled zero.
[1]: Processor Programming Reference (PPR) for AMD Family 17h Models 01h,08h, Revision B2 Processors, 54945 Rev 3.03 - Jun 14, 2019.
[2]: Processor Programming Reference (PPR) for AMD Family 17h Model 18h, Revision B1 Processors, 55570-B1 Rev 3.14 - Sep 26, 2019.
[3]: OSRR for AMD Family 17h processors, Models 00h-2Fh, 56255 Rev 3.03 - July, 2018
All of the PPRs can be found at: https://bugzilla.kernel.org/show_bug.cgi?id=206537
Signed-off-by: Vijay Thakkar <vijaythakkar@me.com>
Acked-by: Kim Phillips <kim.phillips@amd.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Jon Grimm <jon.grimm@amd.com>
Cc: Martin Liška <mliska@suse.cz>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: vijay thakkar <vijaythakkar@me.com>
Link: http://lore.kernel.org/lkml/20200318190002.307290-4-vijaythakkar@me.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
---
 .../pmu-events/arch/x86/amdzen1/branch.json   |  11 ++
 .../pmu-events/arch/x86/amdzen1/cache.json    | 107 ++++++------------
 .../pmu-events/arch/x86/amdzen1/core.json     |  15 +--
 .../arch/x86/amdzen1/floating-point.json      |  64 ++++++++++-
 .../pmu-events/arch/x86/amdzen1/memory.json   |  82 +++++++++-----
 .../pmu-events/arch/x86/amdzen1/other.json    |  27 ++---
 tools/perf/pmu-events/arch/x86/mapfile.csv    |   2 +-
 7 files changed, 172 insertions(+), 136 deletions(-)
diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/branch.json b/tools/perf/pmu-events/arch/x86/amdzen1/branch.json index 93ddfd8053ca..a9943eeb8d6b 100644 --- a/tools/perf/pmu-events/arch/x86/amdzen1/branch.json +++ b/tools/perf/pmu-events/arch/x86/amdzen1/branch.json @@ -8,5 +8,16 @@ "EventName": "bp_l2_btb_correct", "EventCode": "0x8b", "BriefDescription": "L2 BTB Correction." + }, + { + "EventName": "bp_dyn_ind_pred", + "EventCode": "0x8e", + "BriefDescription": "Dynamic Indirect Predictions.", + "PublicDescription": "Indirect Branch Prediction for potential multi-target branch (speculative)." + }, + { + "EventName": "bp_de_redirect", + "EventCode": "0x91", + "BriefDescription": "Decoder Overrides Existing Branch Prediction (speculative)." } ] diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json index 6221a840fcea..404d4c569c01 100644 --- a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json +++ b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json @@ -37,36 +37,31 @@ { "EventName": "ic_fetch_stall.ic_stall_any", "EventCode": "0x87", - "BriefDescription": "IC pipe was stalled during this clock cycle for any reason (nothing valid in pipe ICM1).", - "PublicDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle for any reason (nothing valid in pipe ICM1).", + "BriefDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle for any reason (nothing valid in pipe ICM1).", "UMask": "0x4" }, { "EventName": "ic_fetch_stall.ic_stall_dq_empty", "EventCode": "0x87", - "BriefDescription": "IC pipe was stalled during this clock cycle (including IC to OC fetches) due to DQ empty.", - "PublicDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle (including IC to OC fetches) due to DQ empty.", + "BriefDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle (including IC to OC fetches) due to DQ empty.", "UMask": "0x2" }, { "EventName": "ic_fetch_stall.ic_stall_back_pressure", "EventCode": "0x87", - "BriefDescription": "IC pipe was stalled during this clock cycle (including IC to OC fetches) due to back-pressure.", - "PublicDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle (including IC to OC fetches) due to back-pressure.", + "BriefDescription": "Instruction Pipe Stall. IC pipe was stalled during this clock cycle (including IC to OC fetches) due to back-pressure.", "UMask": "0x1" }, { "EventName": "ic_cache_inval.l2_invalidating_probe", "EventCode": "0x8c", - "BriefDescription": "IC line invalidated due to L2 invalidating probe (external or LS).", - "PublicDescription": "The number of instruction cache lines invalidated. A non-SMC event is CMC (cross modifying code), either from the other thread of the core or another core. IC line invalidated due to L2 invalidating probe (external or LS).", + "BriefDescription": "IC line invalidated due to L2 invalidating probe (external or LS). The number of instruction cache lines invalidated. A non-SMC event is CMC (cross modifying code), either from the other thread of the core or another core.", "UMask": "0x2" }, { "EventName": "ic_cache_inval.fill_invalidated", "EventCode": "0x8c", - "BriefDescription": "IC line invalidated due to overwriting fill response.", - "PublicDescription": "The number of instruction cache lines invalidated. A non-SMC event is CMC (cross modifying code), either from the other thread of the core or another core. 
IC line invalidated due to overwriting fill response.", + "BriefDescription": "IC line invalidated due to overwriting fill response. The number of instruction cache lines invalidated. A non-SMC event is CMC (cross modifying code), either from the other thread of the core or another core.", "UMask": "0x1" }, { @@ -77,211 +72,181 @@ { "EventName": "l2_request_g1.rd_blk_l", "EventCode": "0x60", - "BriefDescription": "Requests to L2 Group1.", - "PublicDescription": "Requests to L2 Group1.", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). Data cache reads (including hardware and software prefetch).", "UMask": "0x80" }, { "EventName": "l2_request_g1.rd_blk_x", "EventCode": "0x60", - "BriefDescription": "Requests to L2 Group1.", - "PublicDescription": "Requests to L2 Group1.", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). Data cache stores.", "UMask": "0x40" }, { "EventName": "l2_request_g1.ls_rd_blk_c_s", "EventCode": "0x60", - "BriefDescription": "Requests to L2 Group1.", - "PublicDescription": "Requests to L2 Group1.", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). Data cache shared reads.", "UMask": "0x20" }, { "EventName": "l2_request_g1.cacheable_ic_read", "EventCode": "0x60", - "BriefDescription": "Requests to L2 Group1.", - "PublicDescription": "Requests to L2 Group1.", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). Instruction cache reads.", "UMask": "0x10" }, { "EventName": "l2_request_g1.change_to_x", "EventCode": "0x60", - "BriefDescription": "Requests to L2 Group1.", - "PublicDescription": "Requests to L2 Group1.", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). Data cache state change requests. Request change to writable, check L2 for current state.", "UMask": "0x8" }, { - "EventName": "l2_request_g1.prefetch_l2", + "EventName": "l2_request_g1.prefetch_l2_cmd", "EventCode": "0x60", - "BriefDescription": "Requests to L2 Group1.", - "PublicDescription": "Requests to L2 Group1.", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). PrefetchL2Cmd.", "UMask": "0x4" }, { "EventName": "l2_request_g1.l2_hw_pf", "EventCode": "0x60", - "BriefDescription": "Requests to L2 Group1.", - "PublicDescription": "Requests to L2 Group1.", + "BriefDescription": "All L2 Cache Requests (Breakdown 1 - Common). L2 Prefetcher. All prefetches accepted by L2 pipeline, hit or miss. Types of PF and L2 hit/miss broken out in a separate perfmon event.", "UMask": "0x2" }, { - "EventName": "l2_request_g1.other_requests", + "EventName": "l2_request_g1.group2", "EventCode": "0x60", - "BriefDescription": "Events covered by l2_request_g2.", - "PublicDescription": "Requests to L2 Group1. Events covered by l2_request_g2.", + "BriefDescription": "Miscellaneous events covered in more detail by l2_request_g2 (PMCx061).", "UMask": "0x1" }, { "EventName": "l2_request_g2.group1", "EventCode": "0x61", - "BriefDescription": "All Group 1 commands not in unit0.", - "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous. All Group 1 commands not in unit0.", + "BriefDescription": "Miscellaneous events covered in more detail by l2_request_g1 (PMCx060).", "UMask": "0x80" }, { "EventName": "l2_request_g2.ls_rd_sized", "EventCode": "0x61", - "BriefDescription": "RdSized, RdSized32, RdSized64.", - "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous. 
RdSized, RdSized32, RdSized64.", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Data cache read sized.", "UMask": "0x40" }, { "EventName": "l2_request_g2.ls_rd_sized_nc", "EventCode": "0x61", - "BriefDescription": "RdSizedNC, RdSized32NC, RdSized64NC.", - "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous. RdSizedNC, RdSized32NC, RdSized64NC.", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Data cache read sized non-cacheable.", "UMask": "0x20" }, { "EventName": "l2_request_g2.ic_rd_sized", "EventCode": "0x61", - "BriefDescription": "Multi-events in that LS and IF requests can be received simultaneous.", - "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Instruction cache read sized.", "UMask": "0x10" }, { "EventName": "l2_request_g2.ic_rd_sized_nc", "EventCode": "0x61", - "BriefDescription": "Multi-events in that LS and IF requests can be received simultaneous.", - "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Instruction cache read sized non-cacheable.", "UMask": "0x8" }, { "EventName": "l2_request_g2.smc_inval", "EventCode": "0x61", - "BriefDescription": "Multi-events in that LS and IF requests can be received simultaneous.", - "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Self-modifying code invalidates.", "UMask": "0x4" }, { "EventName": "l2_request_g2.bus_locks_originator", "EventCode": "0x61", - "BriefDescription": "Multi-events in that LS and IF requests can be received simultaneous.", - "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Bus locks.", "UMask": "0x2" }, { "EventName": "l2_request_g2.bus_locks_responses", "EventCode": "0x61", - "BriefDescription": "Multi-events in that LS and IF requests can be received simultaneous.", - "PublicDescription": "Multi-events in that LS and IF requests can be received simultaneous.", + "BriefDescription": "All L2 Cache Requests (Breakdown 2 - Rare). Bus lock response.", "UMask": "0x1" }, { "EventName": "l2_latency.l2_cycles_waiting_on_fills", "EventCode": "0x62", "BriefDescription": "Total cycles spent waiting for L2 fills to complete from L3 or memory, divided by four. Event counts are for both threads. To calculate average latency, the number of fills from both threads must be used.", - "PublicDescription": "Total cycles spent waiting for L2 fills to complete from L3 or memory, divided by four. Event counts are for both threads. To calculate average latency, the number of fills from both threads must be used.", "UMask": "0x1" }, { "EventName": "l2_wcb_req.wcb_write", "EventCode": "0x63", - "PublicDescription": "LS (Load/Store unit) to L2 WCB (Write Combining Buffer) write requests.", - "BriefDescription": "LS to L2 WCB write requests.", + "BriefDescription": "LS to L2 WCB write requests. 
LS (Load/Store unit) to L2 WCB (Write Combining Buffer) write requests.", "UMask": "0x40" }, { "EventName": "l2_wcb_req.wcb_close", "EventCode": "0x63", - "BriefDescription": "LS to L2 WCB close requests.", - "PublicDescription": "LS (Load/Store unit) to L2 WCB (Write Combining Buffer) close requests.", + "BriefDescription": "LS to L2 WCB close requests. LS (Load/Store unit) to L2 WCB (Write Combining Buffer) close requests.", "UMask": "0x20" }, { "EventName": "l2_wcb_req.zero_byte_store", "EventCode": "0x63", - "BriefDescription": "LS to L2 WCB zero byte store requests.", - "PublicDescription": "LS (Load/Store unit) to L2 WCB (Write Combining Buffer) zero byte store requests.", + "BriefDescription": "LS to L2 WCB zero byte store requests. LS (Load/Store unit) to L2 WCB (Write Combining Buffer) zero byte store requests.", "UMask": "0x4" }, { "EventName": "l2_wcb_req.cl_zero", "EventCode": "0x63", - "PublicDescription": "LS to L2 WCB cache line zeroing requests.", - "BriefDescription": "LS (Load/Store unit) to L2 WCB (Write Combining Buffer) cache line zeroing requests.", + "BriefDescription": "LS to L2 WCB cache line zeroing requests. LS (Load/Store unit) to L2 WCB (Write Combining Buffer) cache line zeroing requests.", "UMask": "0x1" }, { "EventName": "l2_cache_req_stat.ls_rd_blk_cs", "EventCode": "0x64", - "BriefDescription": "LS ReadBlock C/S Hit.", - "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. LS ReadBlock C/S Hit.", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Data cache shared read hit in L2", "UMask": "0x80" }, { "EventName": "l2_cache_req_stat.ls_rd_blk_l_hit_x", "EventCode": "0x64", - "BriefDescription": "LS Read Block L Hit X.", - "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. LS Read Block L Hit X.", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Data cache read hit in L2.", "UMask": "0x40" }, { "EventName": "l2_cache_req_stat.ls_rd_blk_l_hit_s", "EventCode": "0x64", - "BriefDescription": "LsRdBlkL Hit Shared.", - "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. LsRdBlkL Hit Shared.", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Data cache read hit on shared line in L2.", "UMask": "0x20" }, { "EventName": "l2_cache_req_stat.ls_rd_blk_x", "EventCode": "0x64", - "BriefDescription": "LsRdBlkX/ChgToX Hit X. Count RdBlkX finding Shared as a Miss.", - "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. LsRdBlkX/ChgToX Hit X. Count RdBlkX finding Shared as a Miss.", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Data cache store or state change hit in L2.", "UMask": "0x10" }, { "EventName": "l2_cache_req_stat.ls_rd_blk_c", "EventCode": "0x64", - "BriefDescription": "LS Read Block C S L X Change to X Miss.", - "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. LS Read Block C S L X Change to X Miss.", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). 
Data cache request miss in L2 (all types).", "UMask": "0x8" }, { "EventName": "l2_cache_req_stat.ic_fill_hit_x", "EventCode": "0x64", - "BriefDescription": "IC Fill Hit Exclusive Stale.", - "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. IC Fill Hit Exclusive Stale.", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Instruction cache hit modifiable line in L2.", "UMask": "0x4" }, { "EventName": "l2_cache_req_stat.ic_fill_hit_s", "EventCode": "0x64", - "BriefDescription": "IC Fill Hit Shared.", - "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. IC Fill Hit Shared.", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Instruction cache hit clean line in L2.", "UMask": "0x2" }, { "EventName": "l2_cache_req_stat.ic_fill_miss", "EventCode": "0x64", - "BriefDescription": "IC Fill Miss.", - "PublicDescription": "This event does not count accesses to the L2 cache by the L2 prefetcher, but it does count accesses by the L1 prefetcher. IC Fill Miss.", + "BriefDescription": "Core to L2 cacheable request access status (not including L2 Prefetch). Instruction cache request miss in L2.", "UMask": "0x1" }, { "EventName": "l2_fill_pending.l2_fill_busy", "EventCode": "0x6d", - "BriefDescription": "Total cycles spent with one or more fill requests in flight from L2.", - "PublicDescription": "Total cycles spent with one or more fill requests in flight from L2.", + "BriefDescription": "Cycles with fill pending from L2. Total cycles spent with one or more fill requests in flight from L2.", "UMask": "0x1" }, { diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/core.json b/tools/perf/pmu-events/arch/x86/amdzen1/core.json index 1079544eeed5..7e1aa8273935 100644 --- a/tools/perf/pmu-events/arch/x86/amdzen1/core.json +++ b/tools/perf/pmu-events/arch/x86/amdzen1/core.json @@ -62,7 +62,6 @@ "EventName": "ex_ret_brn_ind_misp", "EventCode": "0xca", "BriefDescription": "Retired Indirect Branch Instructions Mispredicted.", - "PublicDescription": "Retired Indirect Branch Instructions Mispredicted." }, { "EventName": "ex_ret_mmx_fp_instr.sse_instr", @@ -90,11 +89,6 @@ "EventCode": "0xd1", "BriefDescription": "Retired Conditional Branch Instructions." }, - { - "EventName": "ex_ret_cond_misp", - "EventCode": "0xd2", - "BriefDescription": "Retired Conditional Branch Instructions Mispredicted." - }, { "EventName": "ex_div_busy", "EventCode": "0xd3", @@ -108,22 +102,19 @@ { "EventName": "ex_tagged_ibs_ops.ibs_count_rollover", "EventCode": "0x1cf", - "BriefDescription": "Number of times an op could not be tagged by IBS because of a previous tagged op that has not retired.", - "PublicDescription": "Tagged IBS Ops. Number of times an op could not be tagged by IBS because of a previous tagged op that has not retired.", + "BriefDescription": "Tagged IBS Ops. Number of times an op could not be tagged by IBS because of a previous tagged op that has not retired.", "UMask": "0x4" }, { "EventName": "ex_tagged_ibs_ops.ibs_tagged_ops_ret", "EventCode": "0x1cf", - "BriefDescription": "Number of Ops tagged by IBS that retired.", - "PublicDescription": "Tagged IBS Ops. Number of Ops tagged by IBS that retired.", + "BriefDescription": "Tagged IBS Ops. 
Number of Ops tagged by IBS that retired.", "UMask": "0x2" }, { "EventName": "ex_tagged_ibs_ops.ibs_tagged_ops", "EventCode": "0x1cf", - "BriefDescription": "Number of Ops tagged by IBS.", - "PublicDescription": "Tagged IBS Ops. Number of Ops tagged by IBS.", + "BriefDescription": "Tagged IBS Ops. Number of Ops tagged by IBS.", "UMask": "0x1" }, { diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/floating-point.json b/tools/perf/pmu-events/arch/x86/amdzen1/floating-point.json index ea4711983d1d..a35542bd3b36 100644 --- a/tools/perf/pmu-events/arch/x86/amdzen1/floating-point.json +++ b/tools/perf/pmu-events/arch/x86/amdzen1/floating-point.json @@ -2,17 +2,73 @@ { "EventName": "fpu_pipe_assignment.dual", "EventCode": "0x00", - "BriefDescription": "Total number multi-pipe uOps.", - "PublicDescription": "The number of operations (uOps) and dual-pipe uOps dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number multi-pipe uOps assigned to Pipe 3.", + "BriefDescription": "Total number multi-pipe uOps assigned to all pipes.", + "PublicDescription": "The number of operations (uOps) and dual-pipe uOps dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number multi-pipe uOps assigned to all pipes.", "UMask": "0xf0" }, + { + "EventName": "fpu_pipe_assignment.dual3", + "EventCode": "0x00", + "BriefDescription": "Total number multi-pipe uOps assigned to pipe 3.", + "PublicDescription": "The number of operations (uOps) and dual-pipe uOps dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number multi-pipe uOps assigned to pipe 3.", + "UMask": "0x80" + }, + { + "EventName": "fpu_pipe_assignment.dual2", + "EventCode": "0x00", + "BriefDescription": "Total number multi-pipe uOps assigned to pipe 2.", + "PublicDescription": "The number of operations (uOps) and dual-pipe uOps dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. 
Total number multi-pipe uOps assigned to pipe 2.", + "UMask": "0x40" + }, + { + "EventName": "fpu_pipe_assignment.dual1", + "EventCode": "0x00", + "BriefDescription": "Total number multi-pipe uOps assigned to pipe 1.", + "PublicDescription": "The number of operations (uOps) and dual-pipe uOps dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number multi-pipe uOps assigned to pipe 1.", + "UMask": "0x20" + }, + { + "EventName": "fpu_pipe_assignment.dual0", + "EventCode": "0x00", + "BriefDescription": "Total number multi-pipe uOps assigned to pipe 0.", + "PublicDescription": "The number of operations (uOps) and dual-pipe uOps dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number multi-pipe uOps assigned to pipe 0.", + "UMask": "0x10" + }, { "EventName": "fpu_pipe_assignment.total", "EventCode": "0x00", - "BriefDescription": "Total number uOps.", - "PublicDescription": "The number of operations (uOps) and dual-pipe uOps dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number uOps assigned to Pipe 3.", + "BriefDescription": "Total number uOps assigned to all fpu pipes.", + "PublicDescription": "The number of operations (uOps) and dual-pipe uOps dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number uOps assigned to all pipes.", "UMask": "0xf" }, + { + "EventName": "fpu_pipe_assignment.total3", + "EventCode": "0x00", + "BriefDescription": "Total number of fp uOps on pipe 3.", + "PublicDescription": "The number of operations (uOps) dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one-cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. 
Total number uOps assigned to pipe 3.", + "UMask": "0x8" + }, + { + "EventName": "fpu_pipe_assignment.total2", + "EventCode": "0x00", + "BriefDescription": "Total number of fp uOps on pipe 2.", + "PublicDescription": "The number of operations (uOps) dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number uOps assigned to pipe 2.", + "UMask": "0x4" + }, + { + "EventName": "fpu_pipe_assignment.total1", + "EventCode": "0x00", + "BriefDescription": "Total number of fp uOps on pipe 1.", + "PublicDescription": "The number of operations (uOps) dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number uOps assigned to pipe 1.", + "UMask": "0x2" + }, + { + "EventName": "fpu_pipe_assignment.total0", + "EventCode": "0x00", + "BriefDescription": "Total number of fp uOps on pipe 0.", + "PublicDescription": "The number of operations (uOps) dispatched to each of the 4 FPU execution pipelines. This event reflects how busy the FPU pipelines are and may be used for workload characterization. This includes all operations performed by x87, MMX, and SSE instructions, including moves. Each increment represents a one- cycle dispatch event. This event is a speculative event. Since this event includes non-numeric operations it is not suitable for measuring MFLOPS. Total number uOps assigned to pipe 0.", + "UMask": "0x1" + }, { "EventName": "fp_sched_empty", "EventCode": "0x01", diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/memory.json b/tools/perf/pmu-events/arch/x86/amdzen1/memory.json index fa2d60d4def0..b33a3c308019 100644 --- a/tools/perf/pmu-events/arch/x86/amdzen1/memory.json +++ b/tools/perf/pmu-events/arch/x86/amdzen1/memory.json @@ -3,28 +3,24 @@ "EventName": "ls_locks.bus_lock", "EventCode": "0x25", "BriefDescription": "Bus lock when a locked operations crosses a cache boundary or is done on an uncacheable memory type.", - "PublicDescription": "Bus lock when a locked operations crosses a cache boundary or is done on an uncacheable memory type.", "UMask": "0x1" }, { "EventName": "ls_dispatch.ld_st_dispatch", "EventCode": "0x29", - "BriefDescription": "Load-op-Stores.", - "PublicDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed. Load-op-Stores.", + "BriefDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed. Load-op-Stores.", "UMask": "0x4" }, { "EventName": "ls_dispatch.store_dispatch", "EventCode": "0x29", - "BriefDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed.", - "PublicDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed.", + "BriefDescription": "Counts the number of stores dispatched to the LS unit. 
Unit Masks ADDed.", "UMask": "0x2" }, { "EventName": "ls_dispatch.ld_dispatch", "EventCode": "0x29", - "BriefDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed.", - "PublicDescription": "Counts the number of operations dispatched to the LS unit. Unit Masks ADDed.", + "BriefDescription": "Counts the number of loads dispatched to the LS unit. Unit Masks ADDed.", "UMask": "0x1" }, { @@ -37,83 +33,114 @@ "EventCode": "0x40", "BriefDescription": "The number of accesses to the data cache for load and store references. This may include certain microcode scratchpad accesses, although these are generally rare. Each increment represents an eight-byte access, although the instruction may only be accessing a portion of that. This event is a speculative event." }, + { + "EventName": "ls_mab_alloc.dc_prefetcher", + "EventCode": "0x41", + "BriefDescription": "LS MAB allocates by type - DC prefetcher.", + "UMask": "0x8" + }, + { + "EventName": "ls_mab_alloc.stores", + "EventCode": "0x41", + "BriefDescription": "LS MAB allocates by type - stores.", + "UMask": "0x2" + }, + { + "EventName": "ls_mab_alloc.loads", + "EventCode": "0x41", + "BriefDescription": "LS MAB allocates by type - loads.", + "UMask": "0x01" + }, { "EventName": "ls_l1_d_tlb_miss.all", "EventCode": "0x45", "BriefDescription": "L1 DTLB Miss or Reload off all sizes.", - "PublicDescription": "L1 DTLB Miss or Reload off all sizes.", "UMask": "0xff" }, { "EventName": "ls_l1_d_tlb_miss.tlb_reload_1g_l2_miss", "EventCode": "0x45", "BriefDescription": "L1 DTLB Miss of a page of 1G size.", - "PublicDescription": "L1 DTLB Miss of a page of 1G size.", "UMask": "0x80" }, { "EventName": "ls_l1_d_tlb_miss.tlb_reload_2m_l2_miss", "EventCode": "0x45", "BriefDescription": "L1 DTLB Miss of a page of 2M size.", - "PublicDescription": "L1 DTLB Miss of a page of 2M size.", "UMask": "0x40" }, { "EventName": "ls_l1_d_tlb_miss.tlb_reload_32k_l2_miss", "EventCode": "0x45", "BriefDescription": "L1 DTLB Miss of a page of 32K size.", - "PublicDescription": "L1 DTLB Miss of a page of 32K size.", "UMask": "0x20" }, { "EventName": "ls_l1_d_tlb_miss.tlb_reload_4k_l2_miss", "EventCode": "0x45", "BriefDescription": "L1 DTLB Miss of a page of 4K size.", - "PublicDescription": "L1 DTLB Miss of a page of 4K size.", "UMask": "0x10" }, { "EventName": "ls_l1_d_tlb_miss.tlb_reload_1g_l2_hit", "EventCode": "0x45", "BriefDescription": "L1 DTLB Reload of a page of 1G size.", - "PublicDescription": "L1 DTLB Reload of a page of 1G size.", "UMask": "0x8" }, { "EventName": "ls_l1_d_tlb_miss.tlb_reload_2m_l2_hit", "EventCode": "0x45", "BriefDescription": "L1 DTLB Reload of a page of 2M size.", - "PublicDescription": "L1 DTLB Reload of a page of 2M size.", "UMask": "0x4" }, { "EventName": "ls_l1_d_tlb_miss.tlb_reload_32k_l2_hit", "EventCode": "0x45", "BriefDescription": "L1 DTLB Reload of a page of 32K size.", - "PublicDescription": "L1 DTLB Reload of a page of 32K size.", "UMask": "0x2" }, { "EventName": "ls_l1_d_tlb_miss.tlb_reload_4k_l2_hit", "EventCode": "0x45", "BriefDescription": "L1 DTLB Reload of a page of 4K size.", - "PublicDescription": "L1 DTLB Reload of a page of 4K size.", "UMask": "0x1" }, { - "EventName": "ls_tablewalker.perf_mon_tablewalk_alloc_iside", + "EventName": "ls_tablewalker.iside", "EventCode": "0x46", - "BriefDescription": "Tablewalker allocation.", - "PublicDescription": "Tablewalker allocation.", + "BriefDescription": "Total Page Table Walks on I-side.", "UMask": "0xc" }, { - "EventName": 
"ls_tablewalker.perf_mon_tablewalk_alloc_dside", + "EventName": "ls_tablewalker.ic_type1", + "EventCode": "0x46", + "BriefDescription": "Total Page Table Walks IC Type 1.", + "UMask": "0x8" + }, + { + "EventName": "ls_tablewalker.ic_type0", "EventCode": "0x46", - "BriefDescription": "Tablewalker allocation.", - "PublicDescription": "Tablewalker allocation.", + "BriefDescription": "Total Page Table Walks IC Type 0.", + "UMask": "0x4" + }, + { + "EventName": "ls_tablewalker.dside", + "EventCode": "0x46", + "BriefDescription": "Total Page Table Walks on D-side.", "UMask": "0x3" }, + { + "EventName": "ls_tablewalker.dc_type1", + "EventCode": "0x46", + "BriefDescription": "Total Page Table Walks DC Type 1.", + "UMask": "0x2" + }, + { + "EventName": "ls_tablewalker.dc_type0", + "EventCode": "0x46", + "BriefDescription": "Total Page Table Walks DC Type 0.", + "UMask": "0x1" + }, { "EventName": "ls_misal_accesses", "EventCode": "0x47", @@ -123,35 +150,30 @@ "EventName": "ls_pref_instr_disp.prefetch_nta", "EventCode": "0x4b", "BriefDescription": "Software Prefetch Instructions (PREFETCHNTA instruction) Dispatched.", - "PublicDescription": "Software Prefetch Instructions (PREFETCHNTA instruction) Dispatched.", "UMask": "0x4" }, { "EventName": "ls_pref_instr_disp.store_prefetch_w", "EventCode": "0x4b", "BriefDescription": "Software Prefetch Instructions (3DNow PREFETCHW instruction) Dispatched.", - "PublicDescription": "Software Prefetch Instructions (3DNow PREFETCHW instruction) Dispatched.", "UMask": "0x2" }, { "EventName": "ls_pref_instr_disp.load_prefetch_w", "EventCode": "0x4b", - "BriefDescription": "Prefetch, Prefetch_T0_T1_T2.", - "PublicDescription": "Software Prefetch Instructions Dispatched. Prefetch, Prefetch_T0_T1_T2.", + "BriefDescription": "Software Prefetch Instructions Dispatched. Prefetch, Prefetch_T0_T1_T2.", "UMask": "0x1" }, { "EventName": "ls_inef_sw_pref.mab_mch_cnt", "EventCode": "0x52", - "BriefDescription": "The number of software prefetches that did not fetch data outside of the processor core.", - "PublicDescription": "The number of software prefetches that did not fetch data outside of the processor core.", + "BriefDescription": "The number of software prefetches that did not fetch data outside of the processor core. Software PREFETCH instruction saw a match on an already-allocated miss request buffer.", "UMask": "0x2" }, { "EventName": "ls_inef_sw_pref.data_pipe_sw_pf_dc_hit", "EventCode": "0x52", - "BriefDescription": "The number of software prefetches that did not fetch data outside of the processor core.", - "PublicDescription": "The number of software prefetches that did not fetch data outside of the processor core.", + "BriefDescription": "The number of software prefetches that did not fetch data outside of the processor core. Software PREFETCH instruction saw a DC hit.", "UMask": "0x1" }, { diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/other.json b/tools/perf/pmu-events/arch/x86/amdzen1/other.json index b26a00d05a2e..ff780098d36e 100644 --- a/tools/perf/pmu-events/arch/x86/amdzen1/other.json +++ b/tools/perf/pmu-events/arch/x86/amdzen1/other.json @@ -2,64 +2,55 @@ { "EventName": "ic_oc_mode_switch.oc_ic_mode_switch", "EventCode": "0x28a", - "BriefDescription": "OC to IC mode switch.", - "PublicDescription": "OC Mode Switch. OC to IC mode switch.", + "BriefDescription": "OC Mode Switch. 
OC to IC mode switch.", "UMask": "0x2" }, { "EventName": "ic_oc_mode_switch.ic_oc_mode_switch", "EventCode": "0x28a", - "BriefDescription": "IC to OC mode switch.", - "PublicDescription": "OC Mode Switch. IC to OC mode switch.", + "BriefDescription": "OC Mode Switch. IC to OC mode switch.", "UMask": "0x1" }, { "EventName": "de_dis_dispatch_token_stalls0.retire_token_stall", "EventCode": "0xaf", - "BriefDescription": "RETIRE Tokens unavailable.", - "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. RETIRE Tokens unavailable.", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. RETIRE Tokens unavailable.", "UMask": "0x40" }, { "EventName": "de_dis_dispatch_token_stalls0.agsq_token_stall", "EventCode": "0xaf", - "BriefDescription": "AGSQ Tokens unavailable.", - "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. AGSQ Tokens unavailable.", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. AGSQ Tokens unavailable.", "UMask": "0x20" }, { "EventName": "de_dis_dispatch_token_stalls0.alu_token_stall", "EventCode": "0xaf", - "BriefDescription": "ALU tokens total unavailable.", - "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALU tokens total unavailable.", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALU tokens total unavailable.", "UMask": "0x10" }, { "EventName": "de_dis_dispatch_token_stalls0.alsq3_0_token_stall", "EventCode": "0xaf", - "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall.", - "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall.", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ 3_0 Tokens unavailable.", "UMask": "0x8" }, { "EventName": "de_dis_dispatch_token_stalls0.alsq3_token_stall", "EventCode": "0xaf", - "BriefDescription": "ALSQ 3 Tokens unavailable.", - "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ 3 Tokens unavailable.", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ 3 Tokens unavailable.", "UMask": "0x4" }, { "EventName": "de_dis_dispatch_token_stalls0.alsq2_token_stall", "EventCode": "0xaf", - "BriefDescription": "ALSQ 2 Tokens unavailable.", - "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ 2 Tokens unavailable.", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ 2 Tokens unavailable.", "UMask": "0x2" }, { "EventName": "de_dis_dispatch_token_stalls0.alsq1_token_stall", "EventCode": "0xaf", - "BriefDescription": "ALSQ 1 Tokens unavailable.", - "PublicDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. ALSQ 1 Tokens unavailable.", + "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a token stall. 
ALSQ 1 Tokens unavailable.", "UMask": "0x1" } ] diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv index 244a36e37a3a..25b06cf98747 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -36,5 +36,5 @@ GenuineIntel-6-55-[56789ABCDEF],v1,cascadelakex,core GenuineIntel-6-7D,v1,icelake,core GenuineIntel-6-7E,v1,icelake,core GenuineIntel-6-86,v1,tremontx,core -AuthenticAMD-23-([12][0-9A-F]|[0-9A-F]),v1,amdzen1,core +AuthenticAMD-23-([12][0-9A-F]|[0-9A-F]),v2,amdzen1,core AuthenticAMD-23-[[:xdigit:]]+,v1,amdzen2,core
From: Kim Phillips kim.phillips@amd.com
mainline inclusion from mainline-v5.10-rc1 commit 09b54b30ccdcd3e17cc13079f581b1a389b04939 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
This enables zen3 users by reusing mostly-compatible zen2 events until the official public list of zen3 events is published in a future PPR.
Signed-off-by: Kim Phillips kim.phillips@amd.com Acked-by: Ian Rogers irogers@google.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Andi Kleen ak@linux.intel.com Cc: Borislav Petkov bp@suse.de Cc: Jin Yao yao.jin@linux.intel.com Cc: Jiri Olsa jolsa@redhat.com Cc: John Garry john.garry@huawei.com Cc: Jon Grimm jon.grimm@amd.com Cc: Kan Liang kan.liang@linux.intel.com Cc: Mark Rutland mark.rutland@arm.com Cc: Martin Jambor mjambor@suse.cz Cc: Martin Liška mliska@suse.cz Cc: Michael Petlan mpetlan@redhat.com Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: Stephane Eranian eranian@google.com Cc: Vijay Thakkar vijaythakkar@me.com Cc: William Cohen wcohen@redhat.com Cc: Yunfeng Ye yeyunfeng@huawei.com Link: http://lore.kernel.org/lkml/20200901220944.277505-4-kim.phillips@amd.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- tools/perf/pmu-events/arch/x86/mapfile.csv | 1 + 1 file changed, 1 insertion(+)
diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv index 25b06cf98747..2f2a209e87e1 100644 --- a/tools/perf/pmu-events/arch/x86/mapfile.csv +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv @@ -38,3 +38,4 @@ GenuineIntel-6-7E,v1,icelake,core GenuineIntel-6-86,v1,tremontx,core AuthenticAMD-23-([12][0-9A-F]|[0-9A-F]),v2,amdzen1,core AuthenticAMD-23-[[:xdigit:]]+,v1,amdzen2,core +AuthenticAMD-25-[[:xdigit:]]+,v1,amdzen2,core
From: Sean Christopherson sean.j.christopherson@intel.com
mainline inclusion from mainline-v5.4-rc1 commit 26c44a63a291893e0a00f01e96b6e1d0310a79a9 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
[ Upstream commit 26c44a63a291893e0a00f01e96b6e1d0310a79a9 ]
Replace the open-coded "is MMIO SPTE" checks in the MMU warnings related to software-based access/dirty tracking to make the code slightly more self-documenting.
No functional change intended.
Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/mmu.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index e04e0195d024..1c3726c79da9 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -307,6 +307,11 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask) } EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
+static bool is_mmio_spte(u64 spte) +{ + return (spte & shadow_mmio_mask) == shadow_mmio_value; +} + static inline bool sp_ad_disabled(struct kvm_mmu_page *sp) { return sp->role.ad_disabled; @@ -314,7 +319,7 @@ static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
static inline bool spte_ad_enabled(u64 spte) { - MMU_WARN_ON((spte & shadow_mmio_mask) == shadow_mmio_value); + MMU_WARN_ON(is_mmio_spte(spte)); return !(spte & shadow_acc_track_value); }
@@ -325,13 +330,13 @@ static bool is_nx_huge_page_enabled(void)
static inline u64 spte_shadow_accessed_mask(u64 spte) { - MMU_WARN_ON((spte & shadow_mmio_mask) == shadow_mmio_value); + MMU_WARN_ON(is_mmio_spte(spte)); return spte_ad_enabled(spte) ? shadow_accessed_mask : 0; }
static inline u64 spte_shadow_dirty_mask(u64 spte) { - MMU_WARN_ON((spte & shadow_mmio_mask) == shadow_mmio_value); + MMU_WARN_ON(is_mmio_spte(spte)); return spte_ad_enabled(spte) ? shadow_dirty_mask : 0; }
@@ -407,11 +412,6 @@ static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn, mmu_spte_set(sptep, mask); }
-static bool is_mmio_spte(u64 spte) -{ - return (spte & shadow_mmio_mask) == shadow_mmio_value; -} - static gfn_t get_mmio_spte_gfn(u64 spte) { u64 gpa = spte & shadow_nonpresent_or_rsvd_lower_gfn_mask;
From: Kai Huang kai.huang@linux.intel.com
mainline inclusion from mainline-v5.2-rc1 commit 61455bf26236e7f3d72705382a6437fdfd1bd0af category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
[ Upstream commit 61455bf26236e7f3d72705382a6437fdfd1bd0af ]
Currently KVM sets the 5 most significant bits of the physical address width reported by CPUID (boot_cpu_data.x86_phys_bits) in nonpresent or reserved SPTEs to mitigate L1TF attacks from the guest when using the shadow MMU. However, for some particular Intel CPUs the physical address width of the internal cache is greater than the physical address width reported by CPUID.
Use the kernel's existing boot_cpu_data.x86_cache_bits to determine the five most significant bits. Doing so improves KVM's L1TF mitigation in the unlikely scenario that system RAM overlaps the high order bits of the "real" physical address space as reported by CPUID. This aligns with the kernel's warnings regarding L1TF mitigation, e.g. in the above scenario the kernel won't warn the user about lack of L1TF mitigation if x86_cache_bits is greater than x86_phys_bits.
Also initialize shadow_nonpresent_or_rsvd_mask explicitly to make it consistent with other 'shadow_{xxx}_mask', and opportunistically add a WARN once if KVM's L1TF mitigation cannot be applied on a system that is marked as being susceptible to L1TF.
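As a worked example of the computation above (a standalone userspace sketch, not kernel code; the 46-bit cache width is an assumed value, and rsvd_bits() is re-declared locally to mirror the kernel helper):

#include <stdint.h>
#include <stdio.h>

/* Mirrors the kernel's rsvd_bits() helper: bits s..e inclusive. */
static uint64_t rsvd_bits(int s, int e)
{
	return ((1ULL << (e - s + 1)) - 1) << s;
}

int main(void)
{
	const int mask_len = 5;      /* shadow_nonpresent_or_rsvd_mask_len */
	const int cache_bits = 46;   /* assumed boot_cpu_data.x86_cache_bits */
	uint64_t mask = 0;

	if (cache_bits < 52 - mask_len)
		mask = rsvd_bits(cache_bits - mask_len, cache_bits - 1);

	/* Prints 0x3e0000000000: PA bits 41..45 of a nonpresent SPTE. */
	printf("mask = 0x%llx\n", (unsigned long long)mask);
	return 0;
}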
Reviewed-by: Sean Christopherson sean.j.christopherson@intel.com Signed-off-by: Kai Huang kai.huang@linux.intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/mmu.c | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index 1c3726c79da9..cee03f1bf906 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -513,16 +513,24 @@ static void kvm_mmu_reset_all_pte_masks(void) * If the CPU has 46 or less physical address bits, then set an * appropriate mask to guard against L1TF attacks. Otherwise, it is * assumed that the CPU is not vulnerable to L1TF. + * + * Some Intel CPUs address the L1 cache using more PA bits than are + * reported by CPUID. Use the PA width of the L1 cache when possible + * to achieve more effective mitigation, e.g. if system RAM overlaps + * the most significant bits of legal physical address space. */ - low_phys_bits = boot_cpu_data.x86_phys_bits; - if (boot_cpu_data.x86_phys_bits < + shadow_nonpresent_or_rsvd_mask = 0; + low_phys_bits = boot_cpu_data.x86_cache_bits; + if (boot_cpu_data.x86_cache_bits < 52 - shadow_nonpresent_or_rsvd_mask_len) { shadow_nonpresent_or_rsvd_mask = - rsvd_bits(boot_cpu_data.x86_phys_bits - + rsvd_bits(boot_cpu_data.x86_cache_bits - shadow_nonpresent_or_rsvd_mask_len, - boot_cpu_data.x86_phys_bits - 1); + boot_cpu_data.x86_cache_bits - 1); low_phys_bits -= shadow_nonpresent_or_rsvd_mask_len; - } + } else + WARN_ON_ONCE(boot_cpu_has_bug(X86_BUG_L1TF)); + shadow_nonpresent_or_rsvd_lower_gfn_mask = GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT); }
From: Paolo Bonzini pbonzini@redhat.com
mainline inclusion from mainline-v5.8-rc1 commit d43e2675e96fc6ae1a633b6a69d296394448cc32 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
[ Upstream commit d43e2675e96fc6ae1a633b6a69d296394448cc32 ]
KVM stores the gfn in MMIO SPTEs as a caching optimization. These are split in two parts, as in "[high 11111 low]", to thwart any attempt to use these bits in an L1TF attack. This works as long as there are 5 free bits between MAXPHYADDR and bit 50 (inclusive), leaving bit 51 free so that the MMIO access triggers a reserved-bit-set page fault.
The bit positions, however, were computed wrongly for AMD processors that have encryption support. In this case, x86_phys_bits is reduced (for example from 48 to 43, to account for the C bit at position 47 and four bits used internally to store the SEV ASID and other stuff) while x86_cache_bits would remain set to 48, and _all_ bits between the reduced MAXPHYADDR and bit 51 are set. Then low_phys_bits would also cover some of the bits that are set in the shadow_mmio_value, terribly confusing the gfn caching mechanism.
To fix this, avoid splitting gfns as long as the processor does not have the L1TF bug (which includes all AMD processors). When there is no splitting, low_phys_bits can be set to the reduced MAXPHYADDR removing the overlap. This fixes "npt=0" operation on EPYC processors.
Thanks to Maxim Levitsky for bisecting this bug.
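For illustration, here is a standalone sketch of the "[high 11111 low]" split (assumed values: a 46-bit MAXPHYADDR with a 5-bit reserved window; encode() and decode() are simplified stand-ins for the kernel's mark_mmio_spte() and get_mmio_spte_gfn() logic):

#include <stdint.h>
#include <assert.h>

#define GENMASK_ULL(h, l) ((~0ULL << (l)) & (~0ULL >> (63 - (h))))

static const int len = 5;                             /* reserved window width */
static const uint64_t rsvd = GENMASK_ULL(45, 41);     /* the "11111" part */
static const uint64_t low_gfn = GENMASK_ULL(40, 12);  /* low gpa bits */

static uint64_t encode(uint64_t gpa)
{
	/* Set the reserved window and duplicate the overlapping gpa bits
	 * shifted up above it. */
	return gpa | rsvd | ((gpa & rsvd) << len);
}

static uint64_t decode(uint64_t spte)
{
	/* Fold the shifted-up high bits back down. */
	return (spte & low_gfn) | ((spte >> len) & rsvd);
}

int main(void)
{
	uint64_t gpa = 0x234567896000ULL;  /* any 46-bit, page-aligned gpa */

	assert(decode(encode(gpa)) == gpa);
	return 0;
}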
Cc: stable@vger.kernel.org Fixes: 52918ed5fcf0 ("KVM: SVM: Override default MMIO mask if memory encryption is enabled") Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/mmu.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index cee03f1bf906..d5568df8552f 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -301,6 +301,8 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask) { BUG_ON((u64)(unsigned)access_mask != access_mask); BUG_ON((mmio_mask & mmio_value) != mmio_value); + WARN_ON(mmio_value & (shadow_nonpresent_or_rsvd_mask << shadow_nonpresent_or_rsvd_mask_len)); + WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask); shadow_mmio_value = mmio_value | SPTE_SPECIAL_MASK; shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK; shadow_mmio_access_mask = access_mask; @@ -520,16 +522,15 @@ static void kvm_mmu_reset_all_pte_masks(void) * the most significant bits of legal physical address space. */ shadow_nonpresent_or_rsvd_mask = 0; - low_phys_bits = boot_cpu_data.x86_cache_bits; - if (boot_cpu_data.x86_cache_bits < - 52 - shadow_nonpresent_or_rsvd_mask_len) { + low_phys_bits = boot_cpu_data.x86_phys_bits; + if (boot_cpu_has_bug(X86_BUG_L1TF) && + !WARN_ON_ONCE(boot_cpu_data.x86_cache_bits >= + 52 - shadow_nonpresent_or_rsvd_mask_len)) { + low_phys_bits = boot_cpu_data.x86_cache_bits + - shadow_nonpresent_or_rsvd_mask_len; shadow_nonpresent_or_rsvd_mask = - rsvd_bits(boot_cpu_data.x86_cache_bits - - shadow_nonpresent_or_rsvd_mask_len, - boot_cpu_data.x86_cache_bits - 1); - low_phys_bits -= shadow_nonpresent_or_rsvd_mask_len; - } else - WARN_ON_ONCE(boot_cpu_has_bug(X86_BUG_L1TF)); + rsvd_bits(low_phys_bits, boot_cpu_data.x86_cache_bits - 1); + }
shadow_nonpresent_or_rsvd_lower_gfn_mask = GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT);
From: Sean Christopherson sean.j.christopherson@intel.com
mainline inclusion from mainline-v5.6-rc1 commit e30a7d623dccdb3f880fbcad980b0cb589a1da45 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Remove the bogus 64-bit only condition from the check that disables MMIO spte optimization when the system supports the max PA, i.e. doesn't have any reserved PA bits. 32-bit KVM always uses PAE paging for the shadow MMU, and per Intel's SDM:
PAE paging translates 32-bit linear addresses to 52-bit physical addresses.
The kernel's restrictions on max physical addresses are limits on how much memory the kernel can reasonably use, not what physical addresses are supported by hardware.
Fixes: ce88decffd17 ("KVM: MMU: mmio page fault support") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index d5568df8552f..1df612429886 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -6095,7 +6095,7 @@ static void kvm_set_mmio_spte_mask(void) * If reserved bit is not supported, clear the present bit to disable * mmio page fault. */ - if (IS_ENABLED(CONFIG_X86_64) && shadow_phys_bits == 52) + if (shadow_phys_bits == 52) mask &= ~1ull;
kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
From: Sean Christopherson sean.j.christopherson@intel.com
mainline inclusion from mainline-v5.8-rc1 commit 6129ed877d409037b79866327102c9dc59a302fe category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Set the mmio_value to '0' instead of simply clearing the present bit to squash a benign warning in kvm_mmu_set_mmio_spte_mask() that complains about the mmio_value overlapping the lower GFN mask on systems with 52 bits of PA space.
Opportunistically clean up the code and comments.
Cc: stable@vger.kernel.org Fixes: d43e2675e96fc ("KVM: x86: only do L1TF workaround on affected processors") Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Message-Id: 20200527084909.23492-1-sean.j.christopherson@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/mmu.c | 27 +++++++++------------------ 1 file changed, 9 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index 1df612429886..c1ec03b1e596 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -6078,25 +6078,16 @@ static void kvm_set_mmio_spte_mask(void) u64 mask;
/* - * Set the reserved bits and the present bit of an paging-structure - * entry to generate page fault with PFER.RSV = 1. + * Set a reserved PA bit in MMIO SPTEs to generate page faults with + * PFEC.RSVD=1 on MMIO accesses. 64-bit PTEs (PAE, x86-64, and EPT + * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports + * 52-bit physical addresses then there are no reserved PA bits in the + * PTEs and so the reserved PA approach must be disabled. */ - - /* - * Mask the uppermost physical address bit, which would be reserved as - * long as the supported physical address width is less than 52. - */ - mask = 1ull << 51; - - /* Set the present bit. */ - mask |= 1ull; - - /* - * If reserved bit is not supported, clear the present bit to disable - * mmio page fault. - */ - if (shadow_phys_bits == 52) - mask &= ~1ull; + if (shadow_phys_bits < 52) + mask = BIT_ULL(51) | PT_PRESENT_MASK; + else + mask = 0;
kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK); }
From: Krish Sadhukhan krish.sadhukhan@oracle.com
mainline inclusion from mainline-v5.12-rc1-dontuse commit 04548ed0206ca895c8edd6a078c20a218423890b category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Replace the hard-coded value for bit #1 in EFLAGS with the available X86_EFLAGS_FIXED #define.
Signed-off-by: Krish Sadhukhan krish.sadhukhan@oracle.com Message-Id: 20210203012842.101447-2-krish.sadhukhan@oracle.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/svm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 746f0926c51c..2355f20b1342 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -1615,7 +1615,7 @@ static void init_vmcb(struct vcpu_svm *svm)
svm_set_efer(&svm->vcpu, 0); save->dr6 = 0xffff0ff0; - kvm_set_rflags(&svm->vcpu, 2); + kvm_set_rflags(&svm->vcpu, X86_EFLAGS_FIXED); save->rip = 0x0000fff0; svm->vcpu.arch.regs[VCPU_REGS_RIP] = save->rip;
From: Babu Moger babu.moger@amd.com
mainline inclusion from mainline-v5.12-rc2 commit 9e46f6c6c959d9bb45445c2e8f04a75324a0dfd0 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
This problem was reported on an SVM guest while executing kexec. Kexec fails to load the new kernel when the PCID feature is enabled.
When kexec starts loading the new kernel, it starts the process by resetting the vCPUs and then bringing each vCPU online one by one. The vCPU reset is supposed to reset all the register states before the vCPUs are brought online. However, the CR4 register is not reset during this process. If this register is already set up during the last boot, all the flags can remain intact. The X86_CR4_PCIDE bit can only be enabled in long mode. So, it must be enabled much later in SMP initialization. Having the X86_CR4_PCIDE bit set during SMP boot can cause boot failures.
Fix the issue by resetting the CR4 register in init_vmcb().
Signed-off-by: Babu Moger babu.moger@amd.com Message-Id: 161471109108.30811.6392805173629704166.stgit@bmoger-ubuntu Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/svm.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 2355f20b1342..46567343680f 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -389,6 +389,7 @@ static int nested_svm_intercept(struct vcpu_svm *svm); static int nested_svm_vmexit(struct vcpu_svm *svm); static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr, bool has_error_code, u32 error_code); +static int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
enum { VMCB_INTERCEPTS, /* Intercept vectors, TSC offset, @@ -1613,6 +1614,7 @@ static void init_vmcb(struct vcpu_svm *svm) init_sys_seg(&save->ldtr, SEG_TYPE_LDT); init_sys_seg(&save->tr, SEG_TYPE_BUSY_TSS16);
+ svm_set_cr4(&svm->vcpu, 0); svm_set_efer(&svm->vcpu, 0); save->dr6 = 0xffff0ff0; kvm_set_rflags(&svm->vcpu, X86_EFLAGS_FIXED);
From: Kim Phillips kim.phillips@amd.com
mainline inclusion from mainline-v5.7-rc1 commit 753039ef8b2f1078e5bff8cd42f80578bf6385b0 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Family 19h CPUs are Zen-based and still share most architectural features with Family 17h CPUs, and therefore still need to call init_amd_zn(), e.g. to set the RECLAIM_DISTANCE override.
init_amd_zn() also sets X86_FEATURE_ZEN, which today is only used in amd_set_core_ssb_state(), which isn't called on some late model Family 17h CPUs, nor on any Family 19h CPUs: X86_FEATURE_AMD_SSBD replaces X86_FEATURE_LS_CFG_SSBD on those later model CPUs, where the SSBD mitigation is done via the SPEC_CTRL MSR instead of the LS_CFG MSR.
Family 19h CPUs also don't have the erratum where the CPB feature bit isn't set, but that code can stay unchanged and run safely on Family 19h.
Signed-off-by: Kim Phillips kim.phillips@amd.com Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20200311191451.13221-1-kim.phillips@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/include/asm/cpufeatures.h | 2 +- arch/x86/kernel/cpu/amd.c | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index b40e72201ccb..0a4145100c3a 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -239,7 +239,7 @@ #define X86_FEATURE_IBRS ( 7*32+25) /* Indirect Branch Restricted Speculation */ #define X86_FEATURE_IBPB ( 7*32+26) /* Indirect Branch Prediction Barrier */ #define X86_FEATURE_STIBP ( 7*32+27) /* Single Thread Indirect Branch Predictors */ -#define X86_FEATURE_ZEN ( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */ +#define X86_FEATURE_ZEN ( 7*32+28) /* "" CPU is AMD family 0x17 or above (Zen) */ #define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* "" L1TF workaround PTE inversion */ #define X86_FEATURE_IBRS_ENHANCED ( 7*32+30) /* Enhanced IBRS */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index 5d68964d968b..404b44729ac3 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -921,7 +921,8 @@ static void init_amd(struct cpuinfo_x86 *c) case 0x12: init_amd_ln(c); break; case 0x15: init_amd_bd(c); break; case 0x16: init_amd_jg(c); break; - case 0x17: init_amd_zn(c); break; + case 0x17: /* fallthrough */; + case 0x19: init_amd_zn(c); break; }
/*
From: Kim Phillips kim.phillips@amd.com
mainline inclusion from mainline-v5.10-rc1 commit 8b0bed7d410f48499d72af2e2bcd890daad94e0d category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
IBS hardware with the OpCntExt feature gets a 7-bit wider internal counter. Both the maximum and current count bitfields in the IBS_OP_CTL register are extended to support reading and writing it.
No changes are necessary to the driver for handling the extra contiguous current count bits (IbsOpCurCnt), as the driver already passes through 32 bits of that field. However, the driver has to do some extra bit manipulation when converting from a period to the non-contiguous (although conveniently aligned) extra bits in the IbsOpMaxCnt bitfield.
This decreases IBS Op interrupt overhead when the period is over 1,048,560 (0xffff0), which would previously activate the driver's software counter. That threshold is now 134,217,712 (0x7fffff0).
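A small round-trip sketch of that bit manipulation (constants copied from the hunk below; this models the period-to-IBS_OP_CTL conversion in isolation, not the driver itself):

#include <stdint.h>
#include <assert.h>

#define IBS_OP_MAX_CNT          0x0000FFFFULL
#define IBS_OP_MAX_CNT_EXT_MASK (0x7FULL << 20)  /* separate upper 7 bits */

int main(void)
{
	uint64_t period = 0x7FFFFF0;  /* new maximum: 134,217,712 */

	/* As in perf_ibs_start(): the extension bits stay in place, the
	 * rest is shifted into the 16-bit IbsOpMaxCnt field. */
	uint64_t config = period & IBS_OP_MAX_CNT_EXT_MASK;
	config |= (period & ~IBS_OP_MAX_CNT_EXT_MASK) >> 4;

	/* Reverse mapping, as in get_ibs_op_count(). */
	uint64_t count = (config & IBS_OP_MAX_CNT) << 4;
	count += config & IBS_OP_MAX_CNT_EXT_MASK;

	assert(count == period);
	return 0;
}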
Signed-off-by: Kim Phillips kim.phillips@amd.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Link: https://lkml.kernel.org/r/20200908214740.18097-7-kim.phillips@amd.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/events/amd/ibs.c | 42 +++++++++++++++++++++++-------- arch/x86/include/asm/perf_event.h | 1 + 2 files changed, 32 insertions(+), 11 deletions(-)
diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c index 2410bd4bb48f..25123359c8aa 100644 --- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -352,10 +352,13 @@ static u64 get_ibs_op_count(u64 config) * and the lower 7 bits of CurCnt are randomized. * Otherwise CurCnt has the full 27-bit current counter value. */ - if (config & IBS_OP_VAL) + if (config & IBS_OP_VAL) { count = (config & IBS_OP_MAX_CNT) << 4; - else if (ibs_caps & IBS_CAPS_RDWROPCNT) + if (ibs_caps & IBS_CAPS_OPCNTEXT) + count += config & IBS_OP_MAX_CNT_EXT_MASK; + } else if (ibs_caps & IBS_CAPS_RDWROPCNT) { count = (config & IBS_OP_CUR_CNT) >> 32; + }
return count; } @@ -416,7 +419,7 @@ static void perf_ibs_start(struct perf_event *event, int flags) struct hw_perf_event *hwc = &event->hw; struct perf_ibs *perf_ibs = container_of(event->pmu, struct perf_ibs, pmu); struct cpu_perf_ibs *pcpu = this_cpu_ptr(perf_ibs->pcpu); - u64 period; + u64 period, config = 0;
if (WARN_ON_ONCE(!(hwc->state & PERF_HES_STOPPED))) return; @@ -425,13 +428,19 @@ static void perf_ibs_start(struct perf_event *event, int flags) hwc->state = 0;
perf_ibs_set_period(perf_ibs, hwc, &period); + if (perf_ibs == &perf_ibs_op && (ibs_caps & IBS_CAPS_OPCNTEXT)) { + config |= period & IBS_OP_MAX_CNT_EXT_MASK; + period &= ~IBS_OP_MAX_CNT_EXT_MASK; + } + config |= period >> 4; + /* * Set STARTED before enabling the hardware, such that a subsequent NMI * must observe it. */ set_bit(IBS_STARTED, pcpu->state); clear_bit(IBS_STOPPING, pcpu->state); - perf_ibs_enable_event(perf_ibs, hwc, period >> 4); + perf_ibs_enable_event(perf_ibs, hwc, config);
perf_event_update_userpage(event); } @@ -598,7 +607,7 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs) struct perf_ibs_data ibs_data; int offset, size, check_rip, offset_max, throttle = 0; unsigned int msr; - u64 *buf, *config, period; + u64 *buf, *config, period, new_config = 0;
if (!test_bit(IBS_STARTED, pcpu->state)) { fail: @@ -693,13 +702,17 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs) if (throttle) { perf_ibs_stop(event, 0); } else { - period >>= 4; - - if ((ibs_caps & IBS_CAPS_RDWROPCNT) && - (*config & IBS_OP_CNT_CTL)) - period |= *config & IBS_OP_CUR_CNT_RAND; + if (perf_ibs == &perf_ibs_op) { + if (ibs_caps & IBS_CAPS_OPCNTEXT) { + new_config = period & IBS_OP_MAX_CNT_EXT_MASK; + period &= ~IBS_OP_MAX_CNT_EXT_MASK; + } + if ((ibs_caps & IBS_CAPS_RDWROPCNT) && (*config & IBS_OP_CNT_CTL)) + new_config |= *config & IBS_OP_CUR_CNT_RAND; + } + new_config |= period >> 4;
- perf_ibs_enable_event(perf_ibs, hwc, period); + perf_ibs_enable_event(perf_ibs, hwc, new_config); }
perf_event_update_userpage(event); @@ -773,6 +786,13 @@ static __init void perf_event_ibs_init(void) perf_ibs_op.config_mask |= IBS_OP_CNT_CTL; *attr++ = &format_attr_cnt_ctl.attr; } + + if (ibs_caps & IBS_CAPS_OPCNTEXT) { + perf_ibs_op.max_period |= IBS_OP_MAX_CNT_EXT_MASK; + perf_ibs_op.config_mask |= IBS_OP_MAX_CNT_EXT_MASK; + perf_ibs_op.cnt_mask |= IBS_OP_MAX_CNT_EXT_MASK; + } + perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs"); diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h index 647c40e4ed23..f1cf33e2da2e 100644 --- a/arch/x86/include/asm/perf_event.h +++ b/arch/x86/include/asm/perf_event.h @@ -373,6 +373,7 @@ struct pebs_lbr { #define IBS_OP_ENABLE (1ULL<<17) #define IBS_OP_MAX_CNT 0x0000FFFFULL #define IBS_OP_MAX_CNT_EXT 0x007FFFFFULL /* not a register bit mask */ +#define IBS_OP_MAX_CNT_EXT_MASK (0x7FULL<<20) /* separate upper 7 bits */ #define IBS_RIP_INVALID (1ULL<<38)
#ifdef CONFIG_X86_LOCAL_APIC
From: Kim Phillips kim.phillips@amd.com
mainline inclusion from mainline-v5.10-rc4 commit 33eb82251af9be47a625ca1578f44e596a3a0ca9 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Family 19h processors have the same RAPL (Running average power limit) hardware register interface as Family 17h processors.
Change the family checks to succeed for Family 17h and above to enable core and package energy measurement on Family 19h machines.
Also update the TDP to the largest found at the bottom of the page at amd.com->processors->servers->epyc->2nd-gen-epyc, i.e., the EPYC 7H12.
Signed-off-by: Kim Phillips kim.phillips@amd.com Cc: Len Brown len.brown@intel.com Cc: Len Brown lenb@kernel.org Cc: linux-pm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Len Brown len.brown@intel.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- tools/power/x86/turbostat/turbostat.c | 34 +++++++++------------------ 1 file changed, 11 insertions(+), 23 deletions(-)
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c index 9805314c3d65..1be2effb7316 100644 --- a/tools/power/x86/turbostat/turbostat.c +++ b/tools/power/x86/turbostat/turbostat.c @@ -3784,13 +3784,8 @@ double get_tdp_intel(unsigned int model)
double get_tdp_amd(unsigned int family) { - switch (family) { - case 0x17: - case 0x18: - default: - /* This is the max stock TDP of HEDT/Server Fam17h chips */ - return 250.0; - } + /* This is the max stock TDP of HEDT/Server Fam17h+ chips */ + return 280.0; }
/* @@ -3958,27 +3953,20 @@ void rapl_probe_amd(unsigned int family, unsigned int model)
if (max_extended_level >= 0x80000007) { __cpuid(0x80000007, eax, ebx, ecx, edx); - /* RAPL (Fam 17h) */ + /* RAPL (Fam 17h+) */ has_rapl = edx & (1 << 14); }
- if (!has_rapl) + if (!has_rapl || family < 0x17) return;
- switch (family) { - case 0x17: /* Zen, Zen+ */ - case 0x18: /* Hygon Dhyana */ - do_rapl = RAPL_AMD_F17H | RAPL_PER_CORE_ENERGY; - if (rapl_joules) { - BIC_PRESENT(BIC_Pkg_J); - BIC_PRESENT(BIC_Cor_J); - } else { - BIC_PRESENT(BIC_PkgWatt); - BIC_PRESENT(BIC_CorWatt); - } - break; - default: - return; + do_rapl = RAPL_AMD_F17H | RAPL_PER_CORE_ENERGY; + if (rapl_joules) { + BIC_PRESENT(BIC_Pkg_J); + BIC_PRESENT(BIC_Cor_J); + } else { + BIC_PRESENT(BIC_PkgWatt); + BIC_PRESENT(BIC_CorWatt); }
if (get_msr(base_cpu, MSR_RAPL_PWR_UNIT, &msr))
From: Like Xu likexu@tencent.com
mainline inclusion from mainline-v5.14-rc5 commit df51fe7ea1c1c2c3bfdb81279712fdd2e4ea6c27 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
If we use "perf record" in an AMD Milan guest, dmesg reports a #GP warning from an unchecked MSR access error on MSR_F15H_PERF_CTLx:
[] unchecked MSR access error: WRMSR to 0xc0010200 (tried to write 0x0000020000110076) at rIP: 0xffffffff8106ddb4 (native_write_msr+0x4/0x20) [] Call Trace: [] amd_pmu_disable_event+0x22/0x90 [] x86_pmu_stop+0x4c/0xa0 [] x86_pmu_del+0x3a/0x140
The AMD64_EVENTSEL_HOSTONLY bit is defined and used on the host, while the guest perf driver should avoid such use.
Fixes: 1018faa6cf23 ("perf/x86/kvm: Fix Host-Only/Guest-Only counting with SVM disabled") Signed-off-by: Like Xu likexu@tencent.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Reviewed-by: Liam Merwick liam.merwick@oracle.com Tested-by: Kim Phillips kim.phillips@amd.com Tested-by: Liam Merwick liam.merwick@oracle.com Link: https://lkml.kernel.org/r/20210802070850.35295-1-likexu@tencent.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/events/perf_event.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index 73173f058dc6..db719665e147 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -912,9 +912,10 @@ void x86_pmu_stop(struct perf_event *event, int flags);
static inline void x86_pmu_disable_event(struct perf_event *event) { + u64 disable_mask = __this_cpu_read(cpu_hw_events.perf_ctr_virt_mask); struct hw_perf_event *hwc = &event->hw;
- wrmsrl(hwc->config_base, hwc->config); + wrmsrl(hwc->config_base, hwc->config & ~disable_mask);
if (is_counter_pair(hwc)) wrmsrl(x86_pmu_config_addr(hwc->idx + 1), 0);
From: David Edmondson david.edmondson@oracle.com
mainline inclusion from mainline-v5.10-rc4 commit 51b958e5aeb1e18c00332e0b37c5d4e95a3eff84 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
The instruction emulator ignores clflush instructions, yet fails to support clflushopt. Treat both similarly.
Fixes: 13e457e0eebf ("KVM: x86: Emulator does not decode clflush well") Signed-off-by: David Edmondson david.edmondson@oracle.com Message-Id: 20201103120400.240882-1-david.edmondson@oracle.com Reviewed-by: Joao Martins joao.m.martins@oracle.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/emulate.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index f93b19ed0f68..366d8d96b139 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -4003,6 +4003,12 @@ static int em_clflush(struct x86_emulate_ctxt *ctxt) return X86EMUL_CONTINUE; }
+static int em_clflushopt(struct x86_emulate_ctxt *ctxt) +{ + /* emulating clflushopt regardless of cpuid */ + return X86EMUL_CONTINUE; +} + static int em_movsxd(struct x86_emulate_ctxt *ctxt) { ctxt->dst.val = (s32) ctxt->src.val; @@ -4516,7 +4522,7 @@ static const struct opcode group11[] = { };
static const struct gprefix pfx_0f_ae_7 = { - I(SrcMem | ByteOp, em_clflush), N, N, N, + I(SrcMem | ByteOp, em_clflush), I(SrcMem | ByteOp, em_clflushopt), N, N, };
static const struct group_dual group15 = { {
From: Sean Christopherson sean.j.christopherson@intel.com
mainline inclusion from mainline-v5.1-rc1 commit 0e32958ec449a9bb63c031ed04ac7a494ea1bc1c category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
x86 captures a subset of the memslot generation (19 bits) in its MMIO sptes so that it can expedite emulated MMIO handling by checking only the relevant spte, i.e. it doesn't need to do a full page fault walk.
Because the MMIO sptes capture only 19 bits (due to limited space in the sptes), there is a non-zero probability that the MMIO generation could wrap, e.g. after 500k memslot updates. Since normal usage is extremely unlikely to result in 500k memslot updates, a hack was added by commit 69c9ea93eaea ("KVM: MMU: init kvm generation close to mmio wrap-around value") to offset the MMIO generation in order to trigger a wraparound, e.g. after 150 memslot updates.
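(For reference, the "500k" figure is simply the width of the field: 2^19 = 524,288 possible generation values before the 19-bit subset wraps.)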
When separate memslot generation sequences were assigned to each address space, commit 00f034a12fdd ("KVM: do not bias the generation number in kvm_current_mmio_generation") moved the offset logic into the initialization of the memslot generation itself so that the per-address space bit(s) were not dropped/corrupted by the MMIO shenanigans.
Remove the offset hack for three reasons:
- While it does exercise x86's kvm_mmu_invalidate_mmio_sptes(), simply wrapping the generation doesn't actually test the interesting case of having stale MMIO sptes with the new generation number, e.g. old sptes with a generation number of 0.
- Triggering kvm_mmu_invalidate_mmio_sptes() prematurely makes its performance rather important since the probability of invalidating MMIO sptes jumps from "effectively never" to "fairly likely". This limits what can be done in future patches, e.g. to simplify the invalidation code, as doing so without proper caution could lead to a noticeable performance regression.
- Forcing the memslots generation, which is a 64-bit number, to wrap prevents KVM from assuming the memslots generation will never wrap. This in turn prevents KVM from using an arbitrary bit for the "update in-progress" flag, e.g. using bit 63 would immediately collide with using a large value as the starting generation number. The "update in-progress" flag is effectively forced into bit 0 so that it's (subtly) taken into account when incrementing the generation.
Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- virt/kvm/kvm_main.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 8aecda31801f..98674b0ed14a 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -700,12 +700,8 @@ static struct kvm *kvm_create_vm(unsigned long type) struct kvm_memslots *slots = kvm_alloc_memslots(); if (!slots) goto out_err_no_srcu; - /* - * Generations must be different for each address space. - * Init kvm generation close to the maximum to easily test the - * code of handling generation number wrap-around. - */ - slots->generation = i * 2 - 150; + /* Generations must be different for each address space. */ + slots->generation = i * 2; rcu_assign_pointer(kvm->memslots[i], slots); }
From: Sean Christopherson sean.j.christopherson@intel.com
mainline inclusion from mainline-v5.1-rc1 commit 164bf7e56c5a73f2f819c39ba7e0f20e0f97dc7b category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
...now that KVM won't explode by moving it out of bit 0. Using bit 63 eliminates the need to jump over bit 0, e.g. when calculating a new memslots generation or when propagating the memslots generation to an MMIO spte.
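A minimal sketch of the resulting arithmetic, assuming two address spaces as on x86 with SMM; it mirrors the install_new_memslots() flow rather than reproducing it:

#include <stdint.h>
#include <assert.h>

#define GEN_UPDATE_IN_PROGRESS (1ULL << 63)  /* the relocated flag */
#define ADDRESS_SPACES 2

int main(void)
{
	uint64_t gen = 42;  /* current (clean) generation, address space 0 */

	/* Readers during the update window observe the flag... */
	uint64_t published = gen | GEN_UPDATE_IN_PROGRESS;
	assert(published != gen);

	/* ...and the final value no longer has to jump over bit 0: a
	 * plain increment yields the next unique, flag-free number. */
	gen += ADDRESS_SPACES;
	assert(gen == 44 && !(gen & GEN_UPDATE_IN_PROGRESS));
	return 0;
}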
Signed-off-by: Sean Christopherson sean.j.christopherson@intel.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- Documentation/virtual/kvm/mmu.txt | 13 ++++++++----- arch/x86/kvm/mmu.c | 31 ++++++++++++------------------- include/linux/kvm_host.h | 4 ++-- virt/kvm/kvm_main.c | 8 ++++---- 4 files changed, 26 insertions(+), 30 deletions(-)
diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt index 851a8abcadce..c843bee035ac 100644 --- a/Documentation/virtual/kvm/mmu.txt +++ b/Documentation/virtual/kvm/mmu.txt @@ -452,13 +452,16 @@ stored into the MMIO spte. Thus, the MMIO spte might be created based on out-of-date information, but with an up-to-date generation number.
To avoid this, the generation number is incremented again after synchronize_srcu -returns; thus, the low bit of kvm_memslots(kvm)->generation is only 1 during a +returns; thus, bit 63 of kvm_memslots(kvm)->generation set to 1 only during a memslot update, while some SRCU readers might be using the old copy. We do not want to use an MMIO sptes created with an odd generation number, and we can do -this without losing a bit in the MMIO spte. The low bit of the generation -is not stored in MMIO spte, and presumed zero when it is extracted out of the -spte. If KVM is unlucky and creates an MMIO spte while the low bit is 1, -the next access to the spte will always be a cache miss. +this without losing a bit in the MMIO spte. The "update in-progress" bit of the +generation is not stored in MMIO spte, and is so is implicitly zero when the +generation is extracted out of the spte. If KVM is unlucky and creates an MMIO +spte while an update is in-progress, the next access to the spte will always be +a cache miss. For example, a subsequent access during the update window will +miss due to the in-progress flag diverging, while an access after the update +window closes will have a higher generation number (as compared to the spte).
Further reading diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index c1ec03b1e596..4f23d0c0dea6 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -351,18 +351,17 @@ static inline bool is_access_track_spte(u64 spte) * Due to limited space in PTEs, the MMIO generation is a 19 bit subset of * the memslots generation and is derived as follows: * - * Bits 1-9 of the memslot generation are propagated to spte bits 3-11 - * Bits 10-19 of the memslot generation are propagated to spte bits 52-61 + * Bits 0-8 of the MMIO generation are propagated to spte bits 3-11 + * Bits 9-18 of the MMIO generation are propagated to spte bits 52-61 * - * The MMIO generation starts at bit 1 of the memslots generation in order to - * skip over bit 0, the KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS flag. Including - * the flag would require stealing a bit from the "real" generation number and - * thus effectively halve the maximum number of MMIO generations that can be - * handled before encountering a wrap (which requires a full MMU zap). The - * flag is instead explicitly queried when checking for MMIO spte cache hits. + * The KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS flag is intentionally not included in + * the MMIO generation number, as doing so would require stealing a bit from + * the "real" generation number and thus effectively halve the maximum number + * of MMIO generations that can be handled before encountering a wrap (which + * requires a full MMU zap). The flag is instead explicitly queried when + * checking for MMIO spte cache hits. */ -#define MMIO_SPTE_GEN_MASK GENMASK_ULL(19, 1) -#define MMIO_SPTE_GEN_SHIFT 1 +#define MMIO_SPTE_GEN_MASK GENMASK_ULL(18, 0)
#define MMIO_SPTE_GEN_LOW_START 3 #define MMIO_SPTE_GEN_LOW_END 11 @@ -379,8 +378,6 @@ static u64 generation_mmio_spte_mask(u64 gen)
WARN_ON(gen & ~MMIO_SPTE_GEN_MASK);
- gen >>= MMIO_SPTE_GEN_SHIFT; - mask = (gen << MMIO_SPTE_GEN_LOW_START) & MMIO_SPTE_GEN_LOW_MASK; mask |= (gen << MMIO_SPTE_GEN_HIGH_START) & MMIO_SPTE_GEN_HIGH_MASK; return mask; @@ -394,7 +391,7 @@ static u64 get_mmio_spte_generation(u64 spte)
gen = (spte & MMIO_SPTE_GEN_LOW_MASK) >> MMIO_SPTE_GEN_LOW_START; gen |= (spte & MMIO_SPTE_GEN_HIGH_MASK) >> MMIO_SPTE_GEN_HIGH_START; - return gen << MMIO_SPTE_GEN_SHIFT; + return gen; }
static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn, @@ -5920,13 +5917,9 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen) { - gen &= MMIO_SPTE_GEN_MASK; + WARN_ON(gen & KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS);
- /* - * Shift to adjust for the "update in-progress" flag, which isn't - * included in the MMIO generation number. - */ - gen >>= MMIO_SPTE_GEN_SHIFT; + gen &= MMIO_SPTE_GEN_MASK;
/* * Generation numbers are incremented in multiples of the number of diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 4ae67f641951..386ac3f48178 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -50,7 +50,7 @@ #define KVM_MEMSLOT_INVALID (1UL << 16)
/* - * Bit 0 of the memslot generation number is an "update in-progress flag", + * Bit 63 of the memslot generation number is an "update in-progress flag", * e.g. is temporarily set for the duration of install_new_memslots(). * This flag effectively creates a unique generation number that is used to * mark cached memslot data, e.g. MMIO accesses, as potentially being stale, @@ -68,7 +68,7 @@ * the actual generation number against accesses that were inserted into the * cache *before* the memslots were updated. */ -#define KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS BIT_ULL(0) +#define KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS BIT_ULL(63)
/* Two fragments for cross MMIO pages. */ #define KVM_MAX_MMIO_FRAGMENTS 2 diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 98674b0ed14a..38e758e0f452 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -701,7 +701,7 @@ static struct kvm *kvm_create_vm(unsigned long type) if (!slots) goto out_err_no_srcu; /* Generations must be different for each address space. */ - slots->generation = i * 2; + slots->generation = i; rcu_assign_pointer(kvm->memslots[i], slots); }
@@ -943,10 +943,10 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm, * Generations must be unique even across address spaces. We do not need * a global counter for that, instead the generation space is evenly split * across address spaces. For example, with two address spaces, address - * space 0 will use generations 0, 4, 8, ... while address space 1 will - * use generations 2, 6, 10, 14, ... + * space 0 will use generations 0, 2, 4, ... while address space 1 will + * use generations 1, 3, 5, ... */ - gen += KVM_ADDRESS_SPACE_NUM * 2; + gen += KVM_ADDRESS_SPACE_NUM;
kvm_arch_memslots_updated(kvm, gen);
From: Paolo Bonzini pbonzini@redhat.com
mainline inclusion from mainline-v5.4-rc2 commit 6eeb4ef049e7cd89783e8ebe1ea2f1dac276f82c category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Currently, we are overloading SPTE_SPECIAL_MASK to mean both "A/D bits unavailable" and MMIO, where the difference between the two is determined by mmio_mask and mmio_value.
However, the next patch will need two bits to distinguish availability of A/D bits from write protection. So, while at it give MMIO its own bit pattern, and move the two bits from bit 62 to bits 52..53 since Intel is allocating EPT page table bits from the top.
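For reference, a sketch of the resulting two-bit encoding at bits 52..53 (mask values taken from the hunk below; the string-returning decoder is purely illustrative):

#include <stdint.h>
#include <stdio.h>

#define SPTE_SPECIAL_MASK     (3ULL << 52)
#define SPTE_AD_ENABLED_MASK  (0ULL << 52)
#define SPTE_AD_DISABLED_MASK (1ULL << 52)
#define SPTE_MMIO_MASK        (3ULL << 52)

static const char *spte_kind(uint64_t spte)
{
	switch (spte & SPTE_SPECIAL_MASK) {
	case SPTE_AD_ENABLED_MASK:  return "A/D bits enabled";
	case SPTE_AD_DISABLED_MASK: return "A/D bits disabled";
	case SPTE_MMIO_MASK:        return "MMIO";
	default:                    return "unused encoding";
	}
}

int main(void)
{
	printf("%s\n", spte_kind(SPTE_AD_DISABLED_MASK | 0x1000));
	printf("%s\n", spte_kind(SPTE_MMIO_MASK));
	return 0;
}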
Reviewed-by: Junaid Shahid junaids@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/include/asm/kvm_host.h | 7 ------- arch/x86/kvm/mmu.c | 28 ++++++++++++++++++---------- 2 files changed, 18 insertions(+), 17 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index b2a4d29a56ef..6549c20c958c 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -211,13 +211,6 @@ enum { PFERR_WRITE_MASK | \ PFERR_PRESENT_MASK)
-/* - * The mask used to denote special SPTEs, which can be either MMIO SPTEs or - * Access Tracking SPTEs. We use bit 62 instead of bit 63 to avoid conflicting - * with the SVE bit in EPT PTEs. - */ -#define SPTE_SPECIAL_MASK (1ULL << 62) - /* apic attention bits */ #define KVM_APIC_CHECK_VAPIC 0 /* diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index 4f23d0c0dea6..03624f52cc75 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -110,7 +110,16 @@ module_param(dbg, bool, 0644); #define PTE_PREFETCH_NUM 8
#define PT_FIRST_AVAIL_BITS_SHIFT 10 -#define PT64_SECOND_AVAIL_BITS_SHIFT 52 +#define PT64_SECOND_AVAIL_BITS_SHIFT 54 + +/* + * The mask used to denote special SPTEs, which can be either MMIO SPTEs or + * Access Tracking SPTEs. + */ +#define SPTE_SPECIAL_MASK (3ULL << 52) +#define SPTE_AD_ENABLED_MASK (0ULL << 52) +#define SPTE_AD_DISABLED_MASK (1ULL << 52) +#define SPTE_MMIO_MASK (3ULL << 52)
#define PT64_LEVEL_BITS 9
@@ -244,12 +253,11 @@ static u64 __read_mostly shadow_present_mask; static u64 __read_mostly shadow_me_mask;
/* - * SPTEs used by MMUs without A/D bits are marked with shadow_acc_track_value. - * Non-present SPTEs with shadow_acc_track_value set are in place for access - * tracking. + * SPTEs used by MMUs without A/D bits are marked with SPTE_AD_DISABLED_MASK; + * shadow_acc_track_mask is the set of bits to be cleared in non-accessed + * pages. */ static u64 __read_mostly shadow_acc_track_mask; -static const u64 shadow_acc_track_value = SPTE_SPECIAL_MASK;
/* * The mask/shift to use for saving the original R/X bits when marking the PTE @@ -303,7 +311,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask) BUG_ON((mmio_mask & mmio_value) != mmio_value); WARN_ON(mmio_value & (shadow_nonpresent_or_rsvd_mask << shadow_nonpresent_or_rsvd_mask_len)); WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask); - shadow_mmio_value = mmio_value | SPTE_SPECIAL_MASK; + shadow_mmio_value = mmio_value | SPTE_MMIO_MASK; shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK; shadow_mmio_access_mask = access_mask; } @@ -322,7 +330,7 @@ static inline bool sp_ad_disabled(struct kvm_mmu_page *sp) static inline bool spte_ad_enabled(u64 spte) { MMU_WARN_ON(is_mmio_spte(spte)); - return !(spte & shadow_acc_track_value); + return (spte & SPTE_SPECIAL_MASK) == SPTE_AD_ENABLED_MASK; }
static bool is_nx_huge_page_enabled(void) @@ -465,7 +473,7 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask, { BUG_ON(!dirty_mask != !accessed_mask); BUG_ON(!accessed_mask && !acc_track_mask); - BUG_ON(acc_track_mask & shadow_acc_track_value); + BUG_ON(acc_track_mask & SPTE_SPECIAL_MASK);
shadow_user_mask = user_mask; shadow_accessed_mask = accessed_mask; @@ -2621,7 +2629,7 @@ static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, shadow_user_mask | shadow_x_mask | shadow_me_mask;
if (sp_ad_disabled(sp)) - spte |= shadow_acc_track_value; + spte |= SPTE_AD_DISABLED_MASK; else spte |= shadow_accessed_mask;
@@ -2953,7 +2961,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
sp = page_header(__pa(sptep)); if (sp_ad_disabled(sp)) - spte |= shadow_acc_track_value; + spte |= SPTE_AD_DISABLED_MASK;
/* * For the EPT case, shadow_present_mask is 0 if hardware
From: Paolo Bonzini pbonzini@redhat.com
mainline inclusion from mainline-v5.6-rc1 commit 56871d444bc4d7ea66708775e62e2e0926384dbc category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
The SPTE_MMIO_MASK overlaps with the bits used to track MMIO generation number. A high enough generation number would overwrite the SPTE_SPECIAL_MASK region and cause the MMIO SPTE to be misinterpreted.
Likewise, setting bits 52 and 53 would also cause an incorrect generation number to be read from the PTE, though this was partially mitigated by the (useless if it weren't for the bug) removal of SPTE_SPECIAL_MASK from the spte in get_mmio_spte_generation. Drop that removal, and replace it with a compile-time assertion.
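The overlap is easy to see with the bit ranges written out; a small assertion-style sketch, with bit positions as described above:

#include <stdint.h>
#include <assert.h>

#define GENMASK_ULL(h, l) ((~0ULL << (l)) & (~0ULL >> (63 - (h))))
#define SPTE_SPECIAL_MASK (3ULL << 52)

int main(void)
{
	uint64_t gen_high_old = GENMASK_ULL(61, 52);  /* buggy placement */
	uint64_t gen_high_new = GENMASK_ULL(62, 54);  /* after this patch */

	assert(gen_high_old & SPTE_SPECIAL_MASK);     /* collides */
	assert(!(gen_high_new & SPTE_SPECIAL_MASK));  /* disjoint */
	return 0;
}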
Fixes: 6eeb4ef049e7 ("KVM: x86: assign two bits to track SPTE kinds") Reported-by: Ben Gardon bgardon@google.com Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- arch/x86/kvm/mmu.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index 03624f52cc75..af34f9c4551c 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -369,22 +369,24 @@ static inline bool is_access_track_spte(u64 spte) * requires a full MMU zap). The flag is instead explicitly queried when * checking for MMIO spte cache hits. */ -#define MMIO_SPTE_GEN_MASK GENMASK_ULL(18, 0) +#define MMIO_SPTE_GEN_MASK GENMASK_ULL(17, 0)
#define MMIO_SPTE_GEN_LOW_START 3 #define MMIO_SPTE_GEN_LOW_END 11 #define MMIO_SPTE_GEN_LOW_MASK GENMASK_ULL(MMIO_SPTE_GEN_LOW_END, \ MMIO_SPTE_GEN_LOW_START)
-#define MMIO_SPTE_GEN_HIGH_START 52 -#define MMIO_SPTE_GEN_HIGH_END 61 +#define MMIO_SPTE_GEN_HIGH_START PT64_SECOND_AVAIL_BITS_SHIFT +#define MMIO_SPTE_GEN_HIGH_END 62 #define MMIO_SPTE_GEN_HIGH_MASK GENMASK_ULL(MMIO_SPTE_GEN_HIGH_END, \ MMIO_SPTE_GEN_HIGH_START) + static u64 generation_mmio_spte_mask(u64 gen) { u64 mask;
WARN_ON(gen & ~MMIO_SPTE_GEN_MASK); + BUILD_BUG_ON((MMIO_SPTE_GEN_HIGH_MASK | MMIO_SPTE_GEN_LOW_MASK) & SPTE_SPECIAL_MASK);
mask = (gen << MMIO_SPTE_GEN_LOW_START) & MMIO_SPTE_GEN_LOW_MASK; mask |= (gen << MMIO_SPTE_GEN_HIGH_START) & MMIO_SPTE_GEN_HIGH_MASK; @@ -395,8 +397,6 @@ static u64 get_mmio_spte_generation(u64 spte) { u64 gen;
- spte &= ~shadow_mmio_mask; - gen = (spte & MMIO_SPTE_GEN_LOW_MASK) >> MMIO_SPTE_GEN_LOW_START; gen |= (spte & MMIO_SPTE_GEN_HIGH_MASK) >> MMIO_SPTE_GEN_HIGH_START; return gen;
From: Rasmus Villemoes linux@rasmusvillemoes.dk
mainline inclusion from mainline-v5.1-rc1 commit 6bab69c65013bed5fce9f101a64a84d0385b3946 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
BUILD_BUG_ON() is a little annoying, since it cannot be used outside function scope. So one cannot put assertions about the sizeof() a struct next to the struct definition, but has to hide that in some more or less arbitrary function.
Since gcc 4.6 (which is now also the required minimum), there is support for the C11 _Static_assert in all C modes, including gnu89. So add a simple wrapper for that.
_Static_assert() requires a message argument, which is usually quite redundant (and I believe that bug got fixed at least in newer C++ standards), but we can easily work around that with a little macro magic, making it optional.
For example, adding
static_assert(sizeof(struct printf_spec) == 8);
in vsprintf.c and modifying that struct to violate it, one gets
./include/linux/build_bug.h:78:41: error: static assertion failed: "sizeof(struct printf_spec) == 8" #define __static_assert(expr, msg, ...) _Static_assert(expr, "" msg "")
godbolt.org suggests that _Static_assert() has been supported by clang since at least 3.0.0.
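For reference, a short usage sketch showing both forms at global scope (the struct and the message are illustrative):

#include <linux/build_bug.h>

struct pair { int a, b; };

/* The message defaults to the stringified expression... */
static_assert(sizeof(struct pair) == 2 * sizeof(int));
/* ...or can be given explicitly. */
static_assert(sizeof(struct pair) == 2 * sizeof(int),
	      "struct pair must stay two ints");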
Link: http://lkml.kernel.org/r/20190208203015.29702-1-linux@rasmusvillemoes.dk Signed-off-by: Rasmus Villemoes linux@rasmusvillemoes.dk Acked-by: Alexey Dobriyan adobriyan@gmail.com Cc: Masahiro Yamada yamada.masahiro@socionext.com Cc: Nick Desaulniers ndesaulniers@google.com Cc: Kees Cook keescook@chromium.org Cc: Luc Van Oostenryck luc.vanoostenryck@gmail.com Cc: Alexander Viro viro@zeniv.linux.org.uk Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Jackie Liu liuyun01@kylinos.cn Signed-off-by: Laibin Qiu qiulaibin@huawei.com --- include/linux/build_bug.h | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/include/linux/build_bug.h b/include/linux/build_bug.h index 219d24e19de0..e8fe68102c1e 100644 --- a/include/linux/build_bug.h +++ b/include/linux/build_bug.h @@ -72,4 +72,23 @@ */ #define BUILD_BUG() BUILD_BUG_ON_MSG(1, "BUILD_BUG failed")
+/** + * static_assert - check integer constant expression at build time + * + * static_assert() is a wrapper for the C11 _Static_assert, with a + * little macro magic to make the message optional (defaulting to the + * stringification of the tested expression). + * + * Contrary to BUILD_BUG_ON(), static_assert() can be used at global + * scope, but requires the expression to be an integer constant + * expression (i.e., it is not enough that __builtin_constant_p() is + * true for expr). + * + * Also note that BUILD_BUG_ON() fails the build if the condition is + * true, while static_assert() fails the build if the expression is + * false. + */ +#define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr) +#define __static_assert(expr, msg, ...) _Static_assert(expr, msg) + #endif /* _LINUX_BUILD_BUG_H */
From: "Maciej S. Szmigiero" maciej.szmigiero@oracle.com
mainline inclusion from mainline-v5.10 commit 34c0f6f2695a2db81e09a3ab7bdb2853f45d4d3d category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA
--------------------------------
Commit cae7ed3c2cb0 ("KVM: x86: Refactor the MMIO SPTE generation handling") cleaned up the computation of MMIO generation SPTE masks, but it introduced a bug in how the upper part was encoded: SPTE bits 52-61 were supposed to contain bits 10-19 of the current generation number, yet a missing shift encoded bits 1-10 there instead (mostly duplicating the lower part of the encoded generation number, which then consisted of bits 1-9).
In the meantime, the upper part was shrunk by one bit and moved by subsequent commits to become an upper half of the encoded generation number (bits 9-17 of bits 0-17 encoded in a SPTE).
In addition to the above, commit 56871d444bc4 ("KVM: x86: fix overlap between SPTE_MMIO_MASK and generation") changed the SPTE bit range assigned to encode the generation number and the total number of bits encoded, but did not update the comment attached to their defines, nor the KVM MMU doc. Let's do that here, too, since it is too trivial a thing to warrant a separate commit.
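[Editor's note: a minimal userspace sketch of the corrected encoding under the post-fix layout; the helper names are hypothetical, not the kernel code. Generation bits 0-8 map to SPTE bits 3-11 and bits 9-17 to SPTE bits 54-62, and the round-trip holds for every 18-bit generation.]

#include <assert.h>
#include <stdint.h>

#define GEN_LOW_SHIFT  3                    /* gen bit 0 -> spte bit 3 */
#define GEN_LOW_BITS   9
#define GEN_HIGH_START 54                   /* gen bit 9 -> spte bit 54 */
#define GEN_HIGH_SHIFT (GEN_HIGH_START - GEN_LOW_BITS)
#define GEN_LOW_MASK   (0x1ffULL << GEN_LOW_SHIFT)
#define GEN_HIGH_MASK  (0x1ffULL << GEN_HIGH_START)

static uint64_t encode_gen(uint64_t gen)
{
	/* The buggy version shifted by GEN_HIGH_START here, re-encoding
	 * the low generation bits in the high SPTE field. */
	return ((gen << GEN_LOW_SHIFT) & GEN_LOW_MASK) |
	       ((gen << GEN_HIGH_SHIFT) & GEN_HIGH_MASK);
}

static uint64_t decode_gen(uint64_t spte)
{
	return ((spte & GEN_LOW_MASK) >> GEN_LOW_SHIFT) |
	       ((spte & GEN_HIGH_MASK) >> GEN_HIGH_SHIFT);
}

int main(void)
{
	for (uint64_t gen = 0; gen < (1ULL << 18); gen++)
		assert(decode_gen(encode_gen(gen)) == gen);
	return 0;
}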
Fixes: cae7ed3c2cb0 ("KVM: x86: Refactor the MMIO SPTE generation handling")
Signed-off-by: Maciej S. Szmigiero maciej.szmigiero@oracle.com
Message-Id: 156700708db2a5296c5ed7a8b9ac71f1e9765c85.1607129096.git.maciej.szmigiero@oracle.com
Cc: stable@vger.kernel.org
[Reorganize macros so that everything is computed from the bit ranges. - Paolo]
Signed-off-by: Paolo Bonzini pbonzini@redhat.com
Signed-off-by: Jackie Liu liuyun01@kylinos.cn
Signed-off-by: Laibin Qiu qiulaibin@huawei.com
---
 arch/x86/kvm/mmu.c | 29 ++++++++++++++++++++---------
 1 file changed, 20 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index af34f9c4551c..8003daa945a3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -356,11 +356,11 @@ static inline bool is_access_track_spte(u64 spte)
 }

 /*
- * Due to limited space in PTEs, the MMIO generation is a 19 bit subset of
+ * Due to limited space in PTEs, the MMIO generation is a 18 bit subset of
  * the memslots generation and is derived as follows:
  *
  * Bits 0-8 of the MMIO generation are propagated to spte bits 3-11
- * Bits 9-18 of the MMIO generation are propagated to spte bits 52-61
+ * Bits 9-17 of the MMIO generation are propagated to spte bits 54-62
  *
  * The KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS flag is intentionally not included in
  * the MMIO generation number, as doing so would require stealing a bit from
@@ -369,18 +369,29 @@ static inline bool is_access_track_spte(u64 spte)
  * requires a full MMU zap). The flag is instead explicitly queried when
  * checking for MMIO spte cache hits.
  */
-#define MMIO_SPTE_GEN_MASK		GENMASK_ULL(17, 0)

 #define MMIO_SPTE_GEN_LOW_START		3
 #define MMIO_SPTE_GEN_LOW_END		11
-#define MMIO_SPTE_GEN_LOW_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_END, \
-						    MMIO_SPTE_GEN_LOW_START)

 #define MMIO_SPTE_GEN_HIGH_START	PT64_SECOND_AVAIL_BITS_SHIFT
 #define MMIO_SPTE_GEN_HIGH_END		62
+
+#define MMIO_SPTE_GEN_LOW_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_END, \
+						    MMIO_SPTE_GEN_LOW_START)
 #define MMIO_SPTE_GEN_HIGH_MASK		GENMASK_ULL(MMIO_SPTE_GEN_HIGH_END, \
 						    MMIO_SPTE_GEN_HIGH_START)

+#define MMIO_SPTE_GEN_LOW_BITS		(MMIO_SPTE_GEN_LOW_END - MMIO_SPTE_GEN_LOW_START + 1)
+#define MMIO_SPTE_GEN_HIGH_BITS		(MMIO_SPTE_GEN_HIGH_END - MMIO_SPTE_GEN_HIGH_START + 1)
+
+/* remember to adjust the comment above as well if you change these */
+static_assert(MMIO_SPTE_GEN_LOW_BITS == 9 && MMIO_SPTE_GEN_HIGH_BITS == 9);
+
+#define MMIO_SPTE_GEN_LOW_SHIFT		(MMIO_SPTE_GEN_LOW_START - 0)
+#define MMIO_SPTE_GEN_HIGH_SHIFT	(MMIO_SPTE_GEN_HIGH_START - MMIO_SPTE_GEN_LOW_BITS)
+
+#define MMIO_SPTE_GEN_MASK		GENMASK_ULL(MMIO_SPTE_GEN_LOW_BITS + MMIO_SPTE_GEN_HIGH_BITS - 1, 0)
+
 static u64 generation_mmio_spte_mask(u64 gen)
 {
 	u64 mask;
@@ -388,8 +399,8 @@ static u64 generation_mmio_spte_mask(u64 gen)
 	WARN_ON(gen & ~MMIO_SPTE_GEN_MASK);
 	BUILD_BUG_ON((MMIO_SPTE_GEN_HIGH_MASK | MMIO_SPTE_GEN_LOW_MASK) & SPTE_SPECIAL_MASK);

-	mask = (gen << MMIO_SPTE_GEN_LOW_START) & MMIO_SPTE_GEN_LOW_MASK;
-	mask |= (gen << MMIO_SPTE_GEN_HIGH_START) & MMIO_SPTE_GEN_HIGH_MASK;
+	mask = (gen << MMIO_SPTE_GEN_LOW_SHIFT) & MMIO_SPTE_GEN_LOW_MASK;
+	mask |= (gen << MMIO_SPTE_GEN_HIGH_SHIFT) & MMIO_SPTE_GEN_HIGH_MASK;
 	return mask;
 }

@@ -397,8 +408,8 @@ static u64 get_mmio_spte_generation(u64 spte)
 {
 	u64 gen;

-	gen = (spte & MMIO_SPTE_GEN_LOW_MASK) >> MMIO_SPTE_GEN_LOW_START;
-	gen |= (spte & MMIO_SPTE_GEN_HIGH_MASK) >> MMIO_SPTE_GEN_HIGH_START;
+	gen = (spte & MMIO_SPTE_GEN_LOW_MASK) >> MMIO_SPTE_GEN_LOW_SHIFT;
+	gen |= (spte & MMIO_SPTE_GEN_HIGH_MASK) >> MMIO_SPTE_GEN_HIGH_SHIFT;
 	return gen;
 }
From: Nathan Chancellor natechancellor@gmail.com
mainline inclusion
from mainline-v4.20-rc1
commit 3512dcb4e6c64733871202c01f0ec6b5d84d32ac
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
drivers/crypto/ccp/sp-platform.c:36:36: warning: tentative array definition assumed to have one element
static const struct acpi_device_id sp_acpi_match[];
                                   ^
1 warning generated.
Just remove the forward declarations and move the initializations up so that they can be used in sp_get_of_version and sp_get_acpi_version.
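[Editor's note: a standalone sketch of the pattern involved, not the driver code; 'table' and 'lookup' are hypothetical names. A file-scope static array declared without a size is a tentative definition, which clang assumes has one element.]

/* Clang warns on the tentative file-scope declaration: */
static const int table[];	/* tentative array definition assumed to have one element */

static int lookup(int i)
{
	return table[i];	/* user of the array, defined only below */
}

static const int table[] = { 1, 2, 3 };

/* The fix is simply to move the definition above lookup() and drop the
 * tentative declaration, as the patch does for dev_vdata, sp_acpi_match
 * and sp_of_match. */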
Reported-by: Nick Desaulniers ndesaulniers@google.com
Signed-off-by: Nathan Chancellor natechancellor@gmail.com
Reviewed-by: Nick Desaulniers ndesaulniers@google.com
Acked-by: Gary R Hook gary.hook@amd.com
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au
Signed-off-by: Jackie Liu liuyun01@kylinos.cn
Signed-off-by: Laibin Qiu qiulaibin@huawei.com
---
 drivers/crypto/ccp/sp-platform.c | 53 +++++++++++++++-----------------
 1 file changed, 25 insertions(+), 28 deletions(-)
diff --git a/drivers/crypto/ccp/sp-platform.c b/drivers/crypto/ccp/sp-platform.c
index 71734f254fd1..b75dc7db2d4a 100644
--- a/drivers/crypto/ccp/sp-platform.c
+++ b/drivers/crypto/ccp/sp-platform.c
@@ -33,8 +33,31 @@ struct sp_platform {
 	unsigned int irq_count;
 };

-static const struct acpi_device_id sp_acpi_match[];
-static const struct of_device_id sp_of_match[];
+static const struct sp_dev_vdata dev_vdata[] = {
+	{
+		.bar = 0,
+#ifdef CONFIG_CRYPTO_DEV_SP_CCP
+		.ccp_vdata = &ccpv3_platform,
+#endif
+	},
+};
+
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id sp_acpi_match[] = {
+	{ "AMDI0C00", (kernel_ulong_t)&dev_vdata[0] },
+	{ },
+};
+MODULE_DEVICE_TABLE(acpi, sp_acpi_match);
+#endif
+
+#ifdef CONFIG_OF
+static const struct of_device_id sp_of_match[] = {
+	{ .compatible = "amd,ccp-seattle-v1a",
+	  .data = (const void *)&dev_vdata[0] },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, sp_of_match);
+#endif

 static struct sp_dev_vdata *sp_get_of_version(struct platform_device *pdev)
 {
@@ -201,32 +224,6 @@ static int sp_platform_resume(struct platform_device *pdev)
 }
 #endif

-static const struct sp_dev_vdata dev_vdata[] = {
-	{
-		.bar = 0,
-#ifdef CONFIG_CRYPTO_DEV_SP_CCP
-		.ccp_vdata = &ccpv3_platform,
-#endif
-	},
-};
-
-#ifdef CONFIG_ACPI
-static const struct acpi_device_id sp_acpi_match[] = {
-	{ "AMDI0C00", (kernel_ulong_t)&dev_vdata[0] },
-	{ },
-};
-MODULE_DEVICE_TABLE(acpi, sp_acpi_match);
-#endif
-
-#ifdef CONFIG_OF
-static const struct of_device_id sp_of_match[] = {
-	{ .compatible = "amd,ccp-seattle-v1a",
-	  .data = (const void *)&dev_vdata[0] },
-	{ },
-};
-MODULE_DEVICE_TABLE(of, sp_of_match);
-#endif
-
 static struct platform_driver sp_platform_driver = {
 	.driver = {
 		.name = "ccp",
From: Yazen Ghannam yazen.ghannam@amd.com
mainline inclusion
from mainline-next-20211008
commit 9f4873fb6af7966de8fcbd95c36b61351c1c4b1f
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
AMD Rome systems and later support interleaving between three identical ranks within a channel.
Check for this mode by counting the number of enabled chip selects and comparing their masks. If there are exactly three enabled chip selects and their masks are identical, then three rank interleaving is enabled.
The size of a rank is determined from its mask value. However, three rank interleaving doesn't follow the method of swapping an interleave bit with the most significant bit. Rather, the interleave bit is flipped and the most significant bit remains the same. There is only a single interleave bit in this case.
Account for this when determining the chip select size by keeping the most significant bit at its original value and ignoring any zero bits. This will return a full bitmask in [MSB:1].
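[Editor's note: a userspace sketch of that mask arithmetic; deinterleave_mask() is a hypothetical helper mirroring the logic in f17_addr_mask_to_cs_size(), not the driver itself. Normally every zero bit below the MSB (other than BIT(0)) came from an interleave-bit swap and is trimmed off the top; in the 3R case one zero bit is the flipped interleave bit itself and must be ignored.]

#include <stdbool.h>
#include <stdint.h>

/* Highest set bit position, like the kernel's fls() minus one. */
static int msb_of(uint32_t x)
{
	return 31 - __builtin_clz(x);
}

static uint64_t deinterleave_mask(uint32_t addr_mask_orig, bool is_3r)
{
	int msb = msb_of(addr_mask_orig);
	int weight = __builtin_popcount(addr_mask_orig);
	/* Zero bits below the MSB, excluding BIT(0) which is always 0;
	 * in 3R mode one of them is the flipped interleave bit and is
	 * deliberately not trimmed. */
	int num_zero_bits = msb - weight - (is_3r ? 1 : 0);

	/* Full bitmask in [msb - num_zero_bits : 1]. */
	return (((uint64_t)1 << (msb - num_zero_bits + 1)) - 1) & ~1ULL;
}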
Fixes: e53a3b267fb0 ("EDAC/amd64: Find Chip Select memory size using Address Mask")
Signed-off-by: Yazen Ghannam yazen.ghannam@amd.com
Signed-off-by: Borislav Petkov bp@suse.de
Link: https://lkml.kernel.org/r/20211005154419.2060504-1-yazen.ghannam@amd.com
Signed-off-by: Jackie Liu liuyun01@kylinos.cn
Signed-off-by: Laibin Qiu qiulaibin@huawei.com
---
 drivers/edac/amd64_edac.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
index a0d929a1309d..c588bbfafb0d 100644
--- a/drivers/edac/amd64_edac.c
+++ b/drivers/edac/amd64_edac.c
@@ -783,12 +783,14 @@ static void debug_dump_dramcfg_low(struct amd64_pvt *pvt, u32 dclr, int chan)
 #define CS_ODD_PRIMARY		BIT(1)
 #define CS_EVEN_SECONDARY	BIT(2)
 #define CS_ODD_SECONDARY	BIT(3)
+#define CS_3R_INTERLEAVE	BIT(4)

 #define CS_EVEN			(CS_EVEN_PRIMARY | CS_EVEN_SECONDARY)
 #define CS_ODD			(CS_ODD_PRIMARY | CS_ODD_SECONDARY)

 static int f17_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt)
 {
+	u8 base, count = 0;
 	int cs_mode = 0;

 	if (csrow_enabled(2 * dimm, ctrl, pvt))
@@ -801,6 +803,20 @@ static int f17_get_cs_mode(int dimm, u8 ctrl, struct amd64_pvt *pvt)
 	if (csrow_sec_enabled(2 * dimm + 1, ctrl, pvt))
 		cs_mode |= CS_ODD_SECONDARY;

+	/*
+	 * 3 Rank interleaving support.
+	 * There should be only three bases enabled and their two masks should
+	 * be equal.
+	 */
+	for_each_chip_select(base, ctrl, pvt)
+		count += csrow_enabled(base, ctrl, pvt);
+
+	if (count == 3 &&
+	    pvt->csels[ctrl].csmasks[0] == pvt->csels[ctrl].csmasks[1]) {
+		edac_dbg(1, "3R interleaving in use.\n");
+		cs_mode |= CS_3R_INTERLEAVE;
+	}
+
 	return cs_mode;
 }

@@ -1609,10 +1625,14 @@ static int f17_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc,
 	 *
 	 * The MSB is the number of bits in the full mask because BIT[0] is
 	 * always 0.
+	 *
+	 * In the special 3 Rank interleaving case, a single bit is flipped
+	 * without swapping with the most significant bit. This can be handled
+	 * by keeping the MSB where it is and ignoring the single zero bit.
 	 */
 	msb = fls(addr_mask_orig) - 1;
 	weight = hweight_long(addr_mask_orig);
-	num_zero_bits = msb - weight;
+	num_zero_bits = msb - weight - !!(cs_mode & CS_3R_INTERLEAVE);

 	/* Take the number of zero bits off from the top of the mask. */
 	addr_mask_deinterleaved = GENMASK_ULL(msb - num_zero_bits, 1);
From: Nathan Chancellor nathan@kernel.org
mainline inclusion
from mainline-v5.13-rc1
commit 5deac80d4571dffb51f452f0027979d72259a1b9
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
dev_attr_show() calls the __uncore_*_show() functions via an indirect call but their type does not currently match the type of the show() member in 'struct device_attribute', resulting in a Control Flow Integrity violation.
$ cat /sys/devices/amd_l3/format/umask
config:8-15

$ dmesg | grep "CFI failure"
[ 1258.174653] CFI failure (target: __uncore_umask_show...):
Update the type in the DEFINE_UNCORE_FORMAT_ATTR macro to match 'struct device_attribute' so that there is no more CFI violation.
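[Editor's note: a minimal sketch of the expected callback shape; umask_show is a hypothetical attribute, not the patch itself. With CONFIG_CFI_CLANG, an indirect call through struct device_attribute::show must target a function with exactly this prototype.]

#include <linux/device.h>
#include <linux/sysfs.h>

/* Matches the show() member of struct device_attribute, so the
 * indirect call from dev_attr_show() passes the CFI type check. */
static ssize_t umask_show(struct device *dev, struct device_attribute *attr,
			  char *page)
{
	return sprintf(page, "config:8-15\n");
}

static DEVICE_ATTR_RO(umask);	/* read-only, .show = umask_show */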
Fixes: 06f2c24584f3 ("perf/amd/uncore: Prepare to scale for more attributes that vary per family")
Signed-off-by: Nathan Chancellor nathan@kernel.org
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Link: https://lkml.kernel.org/r/20210415001112.3024673-2-nathan@kernel.org
Signed-off-by: Jackie Liu liuyun01@kylinos.cn
Signed-off-by: Laibin Qiu qiulaibin@huawei.com
---
 arch/x86/events/amd/uncore.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index 9adad0408068..1e2aa644e6ef 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -286,14 +286,14 @@ static struct attribute_group amd_uncore_attr_group = {
 };

 #define DEFINE_UNCORE_FORMAT_ATTR(_var, _name, _format)			\
-static ssize_t __uncore_##_var##_show(struct kobject *kobj,		\
-				struct kobj_attribute *attr,		\
+static ssize_t __uncore_##_var##_show(struct device *dev,		\
+				struct device_attribute *attr,		\
 				char *page)				\
 {									\
 	BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE);			\
 	return sprintf(page, _format "\n");				\
 }									\
-static struct kobj_attribute format_attr_##_var =			\
+static struct device_attribute format_attr_##_var =			\
 	__ATTR(_name, 0444, __uncore_##_var##_show, NULL)

 DEFINE_UNCORE_FORMAT_ATTR(event12,	event,		"config:0-7,32-35");
From: Kim Phillips kim.phillips@amd.com
mainline inclusion
from mainline-v5.10-rc1
commit 60d804521ec4cd01217a96f33cd1bb29e295333d
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
Later revisions of PPRs that post-date the original Family 17h events submission patch add these events.
Specifically, they were not in this 2017 revision of the F17h PPR:
Processor Programming Reference (PPR) for AMD Family 17h Model 01h, Revision B1 Processors Rev 1.14 - April 15, 2017
But e.g., are included in this 2019 version of the PPR:
Processor Programming Reference (PPR) for AMD Family 17h Model 18h, Revision B1 Processors Rev. 3.14 - Sep 26, 2019
Fixes: 98c07a8f74f8 ("perf vendor events amd: perf PMU events for AMD Family 17h")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
Signed-off-by: Kim Phillips kim.phillips@amd.com
Reviewed-by: Ian Rogers irogers@google.com
Cc: Alexander Shishkin alexander.shishkin@linux.intel.com
Cc: Andi Kleen ak@linux.intel.com
Cc: Borislav Petkov bp@suse.de
Cc: Jin Yao yao.jin@linux.intel.com
Cc: Jiri Olsa jolsa@redhat.com
Cc: John Garry john.garry@huawei.com
Cc: Jon Grimm jon.grimm@amd.com
Cc: Kan Liang kan.liang@linux.intel.com
Cc: Mark Rutland mark.rutland@arm.com
Cc: Martin Jambor mjambor@suse.cz
Cc: Martin Liška mliska@suse.cz
Cc: Michael Petlan mpetlan@redhat.com
Cc: Namhyung Kim namhyung@kernel.org
Cc: Peter Zijlstra peterz@infradead.org
Cc: stable@vger.kernel.org
Cc: Stephane Eranian eranian@google.com
Cc: Vijay Thakkar vijaythakkar@me.com
Cc: William Cohen wcohen@redhat.com
Cc: Yunfeng Ye yeyunfeng@huawei.com
Link: http://lore.kernel.org/lkml/20200901220944.277505-1-kim.phillips@amd.com
Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com
Signed-off-by: Jackie Liu liuyun01@kylinos.cn
Signed-off-by: Laibin Qiu qiulaibin@huawei.com
---
 .../pmu-events/arch/x86/amdzen1/cache.json | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
index 404d4c569c01..695ed3ffa3a6 100644
--- a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
+++ b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
@@ -249,6 +249,24 @@
     "BriefDescription": "Cycles with fill pending from L2. Total cycles spent with one or more fill requests in flight from L2.",
     "UMask": "0x1"
   },
+  {
+    "EventName": "l2_pf_hit_l2",
+    "EventCode": "0x70",
+    "BriefDescription": "L2 prefetch hit in L2.",
+    "UMask": "0xff"
+  },
+  {
+    "EventName": "l2_pf_miss_l2_hit_l3",
+    "EventCode": "0x71",
+    "BriefDescription": "L2 prefetcher hits in L3. Counts all L2 prefetches accepted by the L2 pipeline which miss the L2 cache and hit the L3.",
+    "UMask": "0xff"
+  },
+  {
+    "EventName": "l2_pf_miss_l2_l3",
+    "EventCode": "0x72",
+    "BriefDescription": "L2 prefetcher misses in L3. All L2 prefetches accepted by the L2 pipeline which miss the L2 and the L3 caches.",
+    "UMask": "0xff"
+  },
   {
     "EventName": "l3_request_g1.caching_l3_cache_accesses",
     "EventCode": "0x01",
From: John Garry john.garry@huawei.com
mainline inclusion
from mainline-v5.5-rc1
commit 84b0975f4853ba32d2d9b3c19ffa2b947f023fb3
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4
CVE: NA
--------------------------------
The "EventName" for the DDRC precharge command event is incorrect, so fix it.
Fixes: 57cc732479ba ("perf jevents: Add support for Hisi hip08 DDRC PMU aliasing")
Signed-off-by: John Garry john.garry@huawei.com
Reviewed-by: Shaokun Zhang zhangshaokun@hisilicon.com
Cc: Alexander Shishkin alexander.shishkin@linux.intel.com
Cc: Jiri Olsa jolsa@redhat.com
Cc: Mark Rutland mark.rutland@arm.com
Cc: Namhyung Kim namhyung@kernel.org
Cc: Peter Zijlstra peterz@infradead.org
Cc: Will Deacon will@kernel.org
Cc: linuxarm@huawei.com
Link: http://lore.kernel.org/lkml/1567612484-195727-2-git-send-email-john.garry@hu...
Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com
Signed-off-by: Jackie Liu liuyun01@kylinos.cn
Signed-off-by: Laibin Qiu qiulaibin@huawei.com
---
 .../perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json
index 0d1556fcdffe..99f4fc425564 100644
--- a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json
+++ b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json
@@ -15,7 +15,7 @@
    },
    {
            "EventCode": "0x04",
-           "EventName": "uncore_hisi_ddrc.flux_wr",
+           "EventName": "uncore_hisi_ddrc.pre_cmd",
            "BriefDescription": "DDRC precharge commands",
            "PublicDescription": "DDRC precharge commands",
            "Unit": "hisi_sccl,ddrc",