New architectural features and CPUID bits related to the Speculative Return Stack Overflow (SRSO) vulnerability.
Arnaldo Carvalho de Melo (2):
      tools headers cpufeatures: Sync with the kernel sources
      tools arch x86: Sync the msr-index.h copy with the kernel sources

Borislav Petkov (AMD) (9):
      x86/bugs: Increase the x86 bugs vector size to two u32s
      x86/srso: Add a Speculative RAS Overflow mitigation
      x86/srso: Add IBPB_BRTYPE support
      x86/srso: Add SRSO_NO support
      x86/srso: Add IBPB
      x86/srso: Add IBPB on VMEXIT
      x86/srso: Tie SBPB bit setting to microcode patch detection
      x86/srso: Disable the mitigation on unaffected configurations
      x86/srso: Correct the mitigation status when SMT is disabled

Christophe Leroy (1):
      objtool: Fix SEGFAULT

Jialin Zhang (1):
      x86/cpufeatures: Fix abi breakage caused by NCAPINTS in cpufeature header file.

Josh Poimboeuf (2):
      x86/srso: Fix return thunks in generated code
      objtool: Add frame-pointer-specific function ignore

Kim Phillips (1):
      x86/cpu, kvm: Add support for CPUID_80000021_EAX

Nick Desaulniers (1):
      x86/srso: Fix build breakage with the LLVM linker

Nikolay Borisov (1):
      kabi: Allow extra bugsints (bsc#1213927).

Peter Zijlstra (10):
      x86/ibt: Add ANNOTATE_NOENDBR
      x86/cpu: Fix __x86_return_thunk symbol type
      x86/cpu: Fix up srso_safe_ret() and __x86_return_thunk()
      x86/alternative: Make custom return thunk unconditional
      x86/cpu: Clean up SRSO return thunk mess
      x86/cpu: Rename original retbleed methods
      x86/cpu: Rename srso_(.*)_alias to srso_alias_\1
      x86/cpu: Cleanup the untrain mess
      objtool/x86: Fixup frame-pointer vs rethunk
      objtool/x86: Fix SRSO mess

Sean Christopherson (1):
      x86/retpoline: Don't clobber RFLAGS during srso_safe_ret()

 Documentation/admin-guide/hw-vuln/index.rst      |   1 +
 Documentation/admin-guide/hw-vuln/srso.rst       | 133 ++++++++++++
 .../admin-guide/kernel-parameters.txt            |  11 +
 arch/x86/Kconfig                                 |   7 +
 arch/x86/include/asm/cpufeature.h                |  23 +-
 arch/x86/include/asm/cpufeatures.h               |  15 +-
 arch/x86/include/asm/msr-index.h                 |   1 +
 arch/x86/include/asm/nospec-branch.h             |  34 ++-
 arch/x86/include/asm/processor.h                 |   7 +-
 arch/x86/kernel/alternative.c                    |   2 +-
 arch/x86/kernel/cpu/amd.c                        |  19 ++
 arch/x86/kernel/cpu/bugs.c                       | 197 ++++++++++++++++++
 arch/x86/kernel/cpu/common.c                     |  17 +-
 arch/x86/kernel/cpu/mkcapflags.sh                |   2 +-
 arch/x86/kernel/cpu/proc.c                       |   2 +-
 arch/x86/kernel/cpu/scattered.c                  |   3 +
 arch/x86/kernel/vmlinux.lds.S                    |  38 +++-
 arch/x86/kvm/cpuid.c                             |   7 +
 arch/x86/kvm/cpuid.h                             |  14 ++
 arch/x86/kvm/svm/svm.c                           |   4 +-
 arch/x86/kvm/svm/vmenter.S                       |   3 +
 arch/x86/lib/retpoline.S                         | 135 ++++++++++--
 drivers/base/cpu.c                               |   8 +
 include/linux/cpu.h                              |   2 +
 include/linux/objtool.h                          |  28 +++
 tools/arch/x86/include/asm/cpufeatures.h         |  16 +-
 tools/arch/x86/include/asm/msr-index.h           |   1 +
 tools/include/linux/objtool.h                    |  28 +++
 tools/objtool/arch.h                             |   1 +
 tools/objtool/arch/x86/decode.c                  |   6 +
 tools/objtool/check.c                            |  43 +++-
 tools/objtool/elf.h                              |   1 +
 .../perf/trace/beauty/tracepoints/x86_msr.sh     |   2 +-
 33 files changed, 759 insertions(+), 52 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/srso.rst
From: Christophe Leroy <christophe.leroy@csgroup.eu>
stable inclusion
from stable-v5.10.163
commit 0af0e115ff59d638f45416a004cdd8edb38db40c
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7PJ9N
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
----------------------------------------------------
[ Upstream commit efb11fdb3e1a9f694fa12b70b21e69e55ec59c36 ]
find_insn() will return NULL in case of failure. Check insn in order to avoid a kernel Oops due to a NULL pointer dereference.
Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221114175754.1131267-9-sv@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: zhaoxiaoqiang11 <zhaoxiaoqiang11@jd.com>
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
---
 tools/objtool/check.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c index ea80b29b9913..654ffb51e1dd 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -196,7 +196,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func, return false;
insn = find_insn(file, func->sec, func->offset); - if (!insn->func) + if (!insn || !insn->func) return false;
func_for_each_insn(file, func, insn) {
From: Arnaldo Carvalho de Melo <acme@redhat.com>
stable inclusion
from stable-v5.10.189
commit 9b7fe7c6fbc007564f97805ff45882e79f0c70d0
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
CVE: CVE-2023-20569
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 1a9bcadd0058a3e81c1beca48e5e08dee9446a01 upstream.
To pick the changes from:
  3b9c723ed7cfa4e1 ("KVM: SVM: Add support for SVM instruction address check change")
  b85a0425d8056f3b ("Enumerate AVX Vector Neural Network instructions")
  fb35d30fe5b06cc2 ("x86/cpufeatures: Assign dedicated feature word for CPUID_0x8000001F[EAX]")
This only causes these perf files to be rebuilt:
  CC       /tmp/build/perf/bench/mem-memcpy-x86-64-asm.o
  CC       /tmp/build/perf/bench/mem-memset-x86-64-asm.o
And addresses this perf build warning:
  Warning: Kernel ABI header at 'tools/arch/x86/include/asm/cpufeatures.h' differs from latest version at 'arch/x86/include/asm/cpufeatures.h'
  diff -u tools/arch/x86/include/asm/cpufeatures.h arch/x86/include/asm/cpufeatures.h
Cc: Borislav Petkov <bp@suse.de>
Cc: Kyung Min Park <kyung.min.park@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Conflicts:
	tools/arch/x86/include/asm/cpufeatures.h

Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
(cherry picked from commit c8ba64d2f149c365e6f4dd757095db1894547432)
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
---
 tools/arch/x86/include/asm/cpufeatures.h | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h index ec53f52a06a5..7b66d6c2448e 100644 --- a/tools/arch/x86/include/asm/cpufeatures.h +++ b/tools/arch/x86/include/asm/cpufeatures.h @@ -96,7 +96,7 @@ #define X86_FEATURE_SYSCALL32 ( 3*32+14) /* "" syscall in IA32 userspace */ #define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */ #define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */ -#define X86_FEATURE_SME_COHERENT ( 3*32+17) /* "" AMD hardware-enforced cache coherency */ +/* FREE! ( 3*32+17) */ #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" LFENCE synchronizes RDTSC */ #define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */ #define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */ @@ -201,7 +201,7 @@ #define X86_FEATURE_INVPCID_SINGLE ( 7*32+ 7) /* Effectively INVPCID && CR4.PCIDE=1 */ #define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */ #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */ -#define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */ +/* FREE! ( 7*32+10) */ #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ #define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */ #define X86_FEATURE_RSB_VMEXIT ( 7*32+13) /* "" Fill RSB on VM-Exit */ @@ -211,7 +211,7 @@ #define X86_FEATURE_SSBD ( 7*32+17) /* Speculative Store Bypass Disable */ #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */ #define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* "" Fill RSB on context switches */ -#define X86_FEATURE_SEV ( 7*32+20) /* AMD Secure Encrypted Virtualization */ +/* FREE! ( 7*32+20) */ #define X86_FEATURE_USE_IBPB ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */ #define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* "" Use IBRS during runtime firmware calls */ #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. */ @@ -236,7 +236,6 @@ #define X86_FEATURE_EPT_AD ( 8*32+17) /* Intel Extended Page Table access-dirty bit */ #define X86_FEATURE_VMCALL ( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */ #define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */ -#define X86_FEATURE_SEV_ES ( 8*32+20) /* AMD Secure Encrypted Virtualization - Encrypted State */
/* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */ #define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/ @@ -299,6 +298,7 @@ #define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM-Exit when EIBRS is enabled */
/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ +#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */ #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */
/* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */ @@ -343,6 +343,7 @@ #define X86_FEATURE_AVIC (15*32+13) /* Virtual Interrupt Controller */ #define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* Virtual VMSAVE VMLOAD */ #define X86_FEATURE_VGIF (15*32+16) /* Virtual GIF */ +#define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* "" SVME addr check */
/* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */ #define X86_FEATURE_AVX512VBMI (16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/ @@ -370,6 +371,13 @@ #define X86_FEATURE_SUCCOR (17*32+ 1) /* Uncorrectable error containment and recovery */ #define X86_FEATURE_SMCA (17*32+ 3) /* Scalable MCA */
+/* AMD-defined memory encryption features, CPUID level 0x8000001f (EAX), word 19 */ +#define X86_FEATURE_SME (17*32+27) /* AMD Secure Memory Encryption */ +#define X86_FEATURE_SEV (17*32+28) /* AMD Secure Encrypted Virtualization */ +#define X86_FEATURE_VM_PAGE_FLUSH (17*32+29) /* "" VM Page Flush MSR is supported */ +#define X86_FEATURE_SEV_ES (17*32+30) /* AMD Secure Encrypted Virtualization - Encrypted State */ +#define X86_FEATURE_SME_COHERENT (17*32+31) /* "" AMD hardware-enforced cache coherency */ + /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */ #define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */ #define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
From: "Borislav Petkov (AMD)" bp@alien8.de
stable inclusion
from stable-v5.10.189
commit 073a28a9b50662991e7d6956c2cf2fc5d54f28cd
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
CVE: CVE-2023-20569
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
Upstream commit: 0e52740ffd10c6c316837c6c128f460f1aaba1ea
There was never a doubt in my mind that they would not fit into a single u32 eventually.
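As an aside, the reason a second u32 "just works" is that bug bits are encoded past the feature words: X86_BUG(x) expands to NCAPINTS*32 + x, so extra bug words simply extend the same capability bit space. A minimal userspace sketch of that arithmetic (illustrative only; the constants mirror this series, the program itself is not part of any patch):

    #include <stdio.h>

    #define NCAPINTS    20                      /* feature words (after this series) */
    #define NBUGINTS    2                       /* bug words, bumped from 1 here */
    #define X86_BUG(x)  (NCAPINTS * 32 + (x))   /* bug bits live past the features */

    int main(void)
    {
            /* X86_BUG_SRSO is defined later in the series as X86_BUG(1*32 + 0),
             * i.e. the first bit of the second bug word: */
            unsigned int bit = X86_BUG(1 * 32 + 0);

            printf("SRSO bug bit: capability word %u, bit %u\n", bit / 32, bit % 32);
            return 0;
    }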
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Conflicts:
	arch/x86/include/asm/cpufeatures.h
	tools/arch/x86/include/asm/cpufeatures.h

Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
(cherry picked from commit bd3c6fdbe55cd11d4e7750401507dbebcbf9fce5)
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
---
 arch/x86/include/asm/cpufeatures.h       | 2 +-
 tools/arch/x86/include/asm/cpufeatures.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index b324dacac04f..bb6cffb2476b 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -14,7 +14,7 @@ * Defines x86 CPU feature bits */ #define NCAPINTS 19 /* N 32-bit words worth of info */ -#define NBUGINTS 1 /* N 32-bit bug flags */ +#define NBUGINTS 2 /* N 32-bit bug flags */
/* * Note: If the comment begins with a quoted string, that string is used diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h index 7b66d6c2448e..285e490696f4 100644 --- a/tools/arch/x86/include/asm/cpufeatures.h +++ b/tools/arch/x86/include/asm/cpufeatures.h @@ -14,7 +14,7 @@ * Defines x86 CPU feature bits */ #define NCAPINTS 19 /* N 32-bit words worth of info */ -#define NBUGINTS 1 /* N 32-bit bug flags */ +#define NBUGINTS 2 /* N 32-bit bug flags */
/* * Note: If the comment begins with a quoted string, that string is used
From: Kim Phillips <kim.phillips@amd.com>
stable inclusion
from stable-v5.10.189
commit 34f23ba8a399ecd38b45c84da257b91d278e88aa
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
CVE: CVE-2023-20569
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 8415a74852d7c24795007ee9862d25feb519007c upstream.
Add support for CPUID leaf 80000021, EAX. The majority of the features will be used in the kernel and thus a separate leaf is appropriate.
Include KVM's reverse_cpuid entry because features are used by VM guests, too.
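The new word is plain CPUID data, so the raw leaf can also be inspected from userspace. A minimal sketch (illustrative only, not part of this patch), using the same maximum-extended-leaf guard that get_cpu_cap() applies:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* Leaf 0x80000000 reports the maximum supported extended leaf. */
            if (!__get_cpuid(0x80000000, &eax, &ebx, &ecx, &edx) ||
                eax < 0x80000021)
                    return 1;       /* leaf not implemented */

            __get_cpuid(0x80000021, &eax, &ebx, &ecx, &edx);
            printf("CPUID 0x80000021 EAX: 0x%08x\n", eax);
            return 0;
    }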
[ bp: Massage commit message. ]
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20230124163319.2277355-2-kim.phillips@amd.com
[bwh: Backported to 6.1: adjust context]
Signed-off-by: Ben Hutchings <benh@debian.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Conflicts:
	arch/x86/include/asm/cpufeature.h
	arch/x86/include/asm/cpufeatures.h
	arch/x86/include/asm/disabled-features.h
	arch/x86/include/asm/required-features.h
	arch/x86/kernel/cpu/common.c
	arch/x86/kvm/cpuid.h

Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
(cherry picked from commit 6c0893ee6540431092830da3e05eb50beafc803c)
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
---
 arch/x86/include/asm/cpufeature.h        | 7 +++++--
 arch/x86/include/asm/cpufeatures.h       | 2 +-
 arch/x86/include/asm/disabled-features.h | 3 ++-
 arch/x86/include/asm/required-features.h | 3 ++-
 arch/x86/kernel/cpu/common.c             | 3 +++
 arch/x86/kvm/cpuid.h                     | 1 +
 6 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h index f4cbc01c0bc4..5efb04544612 100644 --- a/arch/x86/include/asm/cpufeature.h +++ b/arch/x86/include/asm/cpufeature.h @@ -31,6 +31,7 @@ enum cpuid_leafs CPUID_7_ECX, CPUID_8000_0007_EBX, CPUID_7_EDX, + CPUID_8000_0021_EAX, };
#ifdef CONFIG_X86_FEATURE_NAMES @@ -89,8 +90,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32]; CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 16, feature_bit) || \ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 17, feature_bit) || \ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) || \ + CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 19, feature_bit) || \ REQUIRED_MASK_CHECK || \ - BUILD_BUG_ON_ZERO(NCAPINTS != 19)) + BUILD_BUG_ON_ZERO(NCAPINTS != 20))
#define DISABLED_MASK_BIT_SET(feature_bit) \ ( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 0, feature_bit) || \ @@ -112,8 +114,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32]; CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 16, feature_bit) || \ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 17, feature_bit) || \ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) || \ + CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 19, feature_bit) || \ DISABLED_MASK_CHECK || \ - BUILD_BUG_ON_ZERO(NCAPINTS != 19)) + BUILD_BUG_ON_ZERO(NCAPINTS != 20))
#define cpu_has(c, bit) \ (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \ diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index bb6cffb2476b..3fda3d27bed5 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -13,7 +13,7 @@ /* * Defines x86 CPU feature bits */ -#define NCAPINTS 19 /* N 32-bit words worth of info */ +#define NCAPINTS 20 /* N 32-bit words worth of info */ #define NBUGINTS 2 /* N 32-bit bug flags */
/* diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h index fb51349e45a7..f7be189e9723 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -110,6 +110,7 @@ DISABLE_ENQCMD) #define DISABLED_MASK17 0 #define DISABLED_MASK18 0 -#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19) +#define DISABLED_MASK19 0 +#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
#endif /* _ASM_X86_DISABLED_FEATURES_H */ diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h index 3ff0d48469f2..b2d504f11937 100644 --- a/arch/x86/include/asm/required-features.h +++ b/arch/x86/include/asm/required-features.h @@ -101,6 +101,7 @@ #define REQUIRED_MASK16 0 #define REQUIRED_MASK17 0 #define REQUIRED_MASK18 0 -#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19) +#define REQUIRED_MASK19 0 +#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
#endif /* _ASM_X86_REQUIRED_FEATURES_H */ diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index abba125765e4..ae66bc84df81 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -963,6 +963,9 @@ void get_cpu_cap(struct cpuinfo_x86 *c) if (c->extended_cpuid_level >= 0x8000000a) c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
+ if (c->extended_cpuid_level >= 0x80000021) + c->x86_capability[CPUID_8000_0021_EAX] = cpuid_eax(0x80000021); + init_scattered_cpuid_features(c); init_speculation_control(c);
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h index 63d96ec61cd3..ed6aabb27424 100644 --- a/arch/x86/kvm/cpuid.h +++ b/arch/x86/kvm/cpuid.h @@ -84,6 +84,7 @@ static const struct cpuid_reg reverse_cpuid[] = { [CPUID_7_EDX] = { 7, 0, CPUID_EDX}, [CPUID_7_1_EAX] = { 7, 1, CPUID_EAX}, [CPUID_12_EAX] = {0x00000012, 0, CPUID_EAX}, + [CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX}, };
/*
From: "Borislav Petkov (AMD)" bp@alien8.de
stable inclusion
from stable-v5.10.189
commit 3f9b7101bea1dcb63410c016ceb266f6e9f733c9
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
CVE: CVE-2023-20569
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
Upstream commit: fb3bd914b3ec28f5fb697ac55c4846ac2d542855
Add a mitigation for the speculative return address stack overflow vulnerability found on AMD processors.
The mitigation works by ensuring all RET instructions speculate to a controlled location, similar to how speculation is controlled in the retpoline sequence. To accomplish this, the __x86_return_thunk forces the CPU to mispredict every function return using a 'safe return' sequence.
To ensure the safety of this mitigation, the kernel must ensure that the safe return sequence is itself free from attacker interference. In Zen3 and Zen4, this is accomplished by creating a BTB alias between the untraining function srso_untrain_ret_alias() and the safe return function srso_safe_ret_alias() which results in evicting a potentially poisoned BTB entry and using that safe one for all function returns.
In older Zen1 and Zen2, this is accomplished using a reinterpretation technique similar to the Retbleed one: srso_untrain_ret() and srso_safe_ret().
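For illustration, the Zen3/4 aliasing constraint can be stated as a few lines of C. The addresses below are made up; only the invariant (both thunks in the same 2M page, addresses differing in exactly bits 2, 8, 14 and 20) matches what the vmlinux.lds.S changes later assert:

    #include <assert.h>
    #include <stdint.h>

    #define ALIAS_BITS  ((1UL << 2) | (1UL << 8) | (1UL << 14) | (1UL << 20))

    int main(void)
    {
            /* Hypothetical link addresses: the untraining thunk is 2M aligned,
             * the safe thunk sets the four alias bits on top of it. */
            uint64_t srso_untrain_ret_alias = 0xffffffff82a00000UL;
            uint64_t srso_safe_ret_alias    = srso_untrain_ret_alias | ALIAS_BITS;

            /* GNU ld has no XOR operator, so the linker script spells it
             * (A | B) - (A & B); same check here: */
            assert(((srso_untrain_ret_alias | srso_safe_ret_alias) -
                    (srso_untrain_ret_alias & srso_safe_ret_alias)) == ALIAS_BITS);

            /* Both thunks live in the same 2M page (all alias bits < bit 21). */
            assert((srso_untrain_ret_alias >> 21) == (srso_safe_ret_alias >> 21));
            return 0;
    }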
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Conflicts:
	arch/x86/include/asm/processor.h
	arch/x86/kernel/cpu/bugs.c
	arch/x86/kernel/cpu/common.c
	include/linux/cpu.h

Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
(cherry picked from commit 8105f71bb9145778d57008e312634c780c9180cb)
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
---
 Documentation/admin-guide/hw-vuln/index.rst |   1 +
 Documentation/admin-guide/hw-vuln/srso.rst  | 133 ++++++++++++++++++
 .../admin-guide/kernel-parameters.txt       |  11 ++
 arch/x86/Kconfig                            |   7 +
 arch/x86/include/asm/cpufeatures.h          |   5 +
 arch/x86/include/asm/nospec-branch.h        |   9 +-
 arch/x86/include/asm/processor.h            |   2 +
 arch/x86/kernel/alternative.c               |   4 +-
 arch/x86/kernel/cpu/amd.c                   |  14 ++
 arch/x86/kernel/cpu/bugs.c                  | 106 ++++++++++++
 arch/x86/kernel/cpu/common.c                |   8 +-
 arch/x86/kernel/vmlinux.lds.S               |  32 ++++-
 arch/x86/lib/retpoline.S                    |  81 ++++++++++-
 drivers/base/cpu.c                          |   8 ++
 include/linux/cpu.h                         |   2 +
 tools/objtool/arch/x86/decode.c             |   5 +-
 16 files changed, 419 insertions(+), 9 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/srso.rst
diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst index 8de11532ca76..d629a2348498 100644 --- a/Documentation/admin-guide/hw-vuln/index.rst +++ b/Documentation/admin-guide/hw-vuln/index.rst @@ -19,3 +19,4 @@ are configurable at compile, boot or run time. processor_mmio_stale_data.rst cross-thread-rsb.rst gather_data_sampling.rst + srso diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst new file mode 100644 index 000000000000..2f923c805802 --- /dev/null +++ b/Documentation/admin-guide/hw-vuln/srso.rst @@ -0,0 +1,133 @@ +.. SPDX-License-Identifier: GPL-2.0 + +Speculative Return Stack Overflow (SRSO) +======================================== + +This is a mitigation for the speculative return stack overflow (SRSO) +vulnerability found on AMD processors. The mechanism is by now the well +known scenario of poisoning CPU functional units - the Branch Target +Buffer (BTB) and Return Address Predictor (RAP) in this case - and then +tricking the elevated privilege domain (the kernel) into leaking +sensitive data. + +AMD CPUs predict RET instructions using a Return Address Predictor (aka +Return Address Stack/Return Stack Buffer). In some cases, a non-architectural +CALL instruction (i.e., an instruction predicted to be a CALL but is +not actually a CALL) can create an entry in the RAP which may be used +to predict the target of a subsequent RET instruction. + +The specific circumstances that lead to this varies by microarchitecture +but the concern is that an attacker can mis-train the CPU BTB to predict +non-architectural CALL instructions in kernel space and use this to +control the speculative target of a subsequent kernel RET, potentially +leading to information disclosure via a speculative side-channel. + +The issue is tracked under CVE-2023-20569. + +Affected processors +------------------- + +AMD Zen, generations 1-4. That is, all families 0x17 and 0x19. Older +processors have not been investigated. + +System information and options +------------------------------ + +First of all, it is required that the latest microcode be loaded for +mitigations to be effective. + +The sysfs file showing SRSO mitigation status is: + + /sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow + +The possible values in this file are: + + - 'Not affected' The processor is not vulnerable + + - 'Vulnerable: no microcode' The processor is vulnerable, no + microcode extending IBPB functionality + to address the vulnerability has been + applied. + + - 'Mitigation: microcode' Extended IBPB functionality microcode + patch has been applied. It does not + address User->Kernel and Guest->Host + transitions protection but it does + address User->User and VM->VM attack + vectors. + + (spec_rstack_overflow=microcode) + + - 'Mitigation: safe RET' Software-only mitigation. It complements + the extended IBPB microcode patch + functionality by addressing User->Kernel + and Guest->Host transitions protection. + + Selected by default or by + spec_rstack_overflow=safe-ret + + - 'Mitigation: IBPB' Similar protection as "safe RET" above + but employs an IBPB barrier on privilege + domain crossings (User->Kernel, + Guest->Host). + + (spec_rstack_overflow=ibpb) + + - 'Mitigation: IBPB on VMEXIT' Mitigation addressing the cloud provider + scenario - the Guest->Host transitions + only. 
+ + (spec_rstack_overflow=ibpb-vmexit) +In order to exploit the vulnerability, an attacker needs to: + + - gain local access on the machine + + - break kASLR + + - find gadgets in the running kernel in order to use them in the exploit + + - potentially create and pin an additional workload on the sibling + thread, depending on the microarchitecture (not necessary on fam 0x19) + + - run the exploit + +Considering the performance implications of each mitigation type, the +default one is 'Mitigation: safe RET' which should take care of most +attack vectors, including the local User->Kernel one. + +As always, the user is advised to keep her/his system up-to-date by +applying software updates regularly. + +The default setting will be reevaluated when needed and especially when +new attack vectors appear. + +As one can surmise, 'Mitigation: safe RET' does come at the cost of some +performance depending on the workload. If one trusts her/his userspace +and does not want to suffer the performance impact, one can always +disable the mitigation with spec_rstack_overflow=off. + +Similarly, 'Mitigation: IBPB' is another full mitigation type employing +an indirect branch prediction barrier after having applied the required +microcode patch for one's system. This mitigation also comes at +a performance cost. + +Mitigation: safe RET +-------------------- + +The mitigation works by ensuring all RET instructions speculate to +a controlled location, similar to how speculation is controlled in the +retpoline sequence. To accomplish this, the __x86_return_thunk forces +the CPU to mispredict every function return using a 'safe return' +sequence. + +To ensure the safety of this mitigation, the kernel must ensure that the +safe return sequence is itself free from attacker interference. In Zen3 +and Zen4, this is accomplished by creating a BTB alias between the +untraining function srso_untrain_ret_alias() and the safe return +function srso_safe_ret_alias() which results in evicting a potentially +poisoned BTB entry and using that safe one for all function returns. + +In older Zen1 and Zen2, this is accomplished using a reinterpretation +technique similar to the Retbleed one: srso_untrain_ret() and +srso_safe_ret(). diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 93bab61aaf38..d8582d7e5984 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -5325,6 +5325,17 @@ Not specifying this option is equivalent to spectre_v2_user=auto.
+ spec_rstack_overflow= + [X86] Control RAS overflow mitigation on AMD Zen CPUs + + off - Disable mitigation + microcode - Enable microcode mitigation only + safe-ret - Enable sw-only safe RET mitigation (default) + ibpb - Enable mitigation by issuing IBPB on + kernel entry + ibpb-vmexit - Issue IBPB only on VMEXIT + (cloud-specific mitigation) + spec_store_bypass_disable= [HW] Control Speculative Store Bypass (SSB) Disable mitigation (Speculative Store Bypass vulnerability) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 0baa452efb5e..12a6ff5814c6 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -2537,6 +2537,13 @@ config CPU_IBRS_ENTRY This mitigates both spectre_v2 and retbleed at great cost to performance.
+config CPU_SRSO + bool "Mitigate speculative RAS overflow on AMD" + depends on CPU_SUP_AMD && X86_64 && RETHUNK + default y + help + Enable the SRSO mitigation needed on AMD Zen1-4 machines. + config SLS bool "Mitigate Straight-Line-Speculation" depends on CC_HAS_SLS && X86_64 diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 3fda3d27bed5..beb792123441 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -323,6 +323,9 @@ #define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */ #define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
+#define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */ +#define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */ + /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */ #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */ @@ -475,4 +478,6 @@ #define X86_BUG_GDS X86_BUG(30) /* CPU is affected by Gather Data Sampling */ #define X86_BUG_DIV0 X86_BUG(31) /* AMD DIV0 speculation bug */
+/* BUG word 2 */ +#define X86_BUG_SRSO X86_BUG(1*32 + 0) /* AMD SRSO bug */ #endif /* _ASM_X86_CPUFEATURES_H */ diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 020f5ad84ff7..65db2a7d39e1 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -112,7 +112,7 @@ * eventually turn into it's own annotation. */ .macro ANNOTATE_UNRET_END -#ifdef CONFIG_DEBUG_ENTRY +#if (defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_SRSO)) ANNOTATE_RETPOLINE_SAFE nop #endif @@ -179,6 +179,11 @@ CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \ "call entry_ibpb", X86_FEATURE_ENTRY_IBPB #endif + +#ifdef CONFIG_CPU_SRSO + ALTERNATIVE_2 "", "call srso_untrain_ret", X86_FEATURE_SRSO, \ + "call srso_untrain_ret_alias", X86_FEATURE_SRSO_ALIAS +#endif .endm
#else /* __ASSEMBLY__ */ @@ -191,6 +196,8 @@
extern void __x86_return_thunk(void); extern void zen_untrain_ret(void); +extern void srso_untrain_ret(void); +extern void srso_untrain_ret_alias(void); extern void entry_ibpb(void);
#ifdef CONFIG_RETPOLINE diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 479a5cc39879..8b3f8b406dbf 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -835,10 +835,12 @@ extern u16 get_llc_id(unsigned int cpu); extern u16 amd_get_nb_id(int cpu); extern u32 amd_get_nodes_per_socket(void); extern void amd_clear_divider(void); +extern bool cpu_has_ibpb_brtype_microcode(void); #else static inline u16 amd_get_nb_id(int cpu) { return 0; } static inline u32 amd_get_nodes_per_socket(void) { return 0; } static inline void amd_clear_divider(void) { } +static inline bool cpu_has_ibpb_brtype_microcode(void) { return false; } #endif
static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 82e9fd11b364..e98db8844a5b 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -678,7 +678,9 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes) { int i = 0;
- if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) + if (cpu_feature_enabled(X86_FEATURE_RETHUNK) || + cpu_feature_enabled(X86_FEATURE_SRSO) || + cpu_feature_enabled(X86_FEATURE_SRSO_ALIAS)) return -1;
bytes[i++] = RET_INSN_OPCODE; diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index 6df68f2afd20..3a8ca21c5b64 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -1283,6 +1283,20 @@ void set_dr_addr_mask(unsigned long mask, int dr) } }
+bool cpu_has_ibpb_brtype_microcode(void) +{ + u8 fam = boot_cpu_data.x86; + + if (fam == 0x17) { + /* Zen1/2 IBPB flushes branch type predictions too. */ + return boot_cpu_has(X86_FEATURE_AMD_IBPB); + } else if (fam == 0x19) { + return false; + } + + return false; +} + static void zenbleed_check_cpu(void *unused) { struct cpuinfo_x86 *c = &cpu_data(smp_processor_id()); diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index b142595a01d3..332f4fa70bb3 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -49,6 +49,7 @@ static void __init taa_select_mitigation(void); static void __init mmio_select_mitigation(void); static void __init srbds_select_mitigation(void); static void __init gds_select_mitigation(void); +static void __init srso_select_mitigation(void);
/* The base value of the SPEC_CTRL MSR without task-specific bits set */ u64 x86_spec_ctrl_base; @@ -153,6 +154,7 @@ void __init check_bugs(void) md_clear_select_mitigation(); srbds_select_mitigation(); gds_select_mitigation(); + srso_select_mitigation();
arch_smt_update();
@@ -2267,6 +2269,95 @@ static int __init l1tf_cmdline(char *str) } early_param("l1tf", l1tf_cmdline);
+#undef pr_fmt +#define pr_fmt(fmt) "Speculative Return Stack Overflow: " fmt + +enum srso_mitigation { + SRSO_MITIGATION_NONE, + SRSO_MITIGATION_MICROCODE, + SRSO_MITIGATION_SAFE_RET, +}; + +enum srso_mitigation_cmd { + SRSO_CMD_OFF, + SRSO_CMD_MICROCODE, + SRSO_CMD_SAFE_RET, +}; + +static const char * const srso_strings[] = { + [SRSO_MITIGATION_NONE] = "Vulnerable", + [SRSO_MITIGATION_MICROCODE] = "Mitigation: microcode", + [SRSO_MITIGATION_SAFE_RET] = "Mitigation: safe RET", +}; + +static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE; +static enum srso_mitigation_cmd srso_cmd __ro_after_init = SRSO_CMD_SAFE_RET; + +static int __init srso_parse_cmdline(char *str) +{ + if (!str) + return -EINVAL; + + if (!strcmp(str, "off")) + srso_cmd = SRSO_CMD_OFF; + else if (!strcmp(str, "microcode")) + srso_cmd = SRSO_CMD_MICROCODE; + else if (!strcmp(str, "safe-ret")) + srso_cmd = SRSO_CMD_SAFE_RET; + else + pr_err("Ignoring unknown SRSO option (%s).", str); + + return 0; +} +early_param("spec_rstack_overflow", srso_parse_cmdline); + +#define SRSO_NOTICE "WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options." + +static void __init srso_select_mitigation(void) +{ + bool has_microcode; + + if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off()) + return; + + has_microcode = cpu_has_ibpb_brtype_microcode(); + if (!has_microcode) { + pr_warn("IBPB-extending microcode not applied!\n"); + pr_warn(SRSO_NOTICE); + } + + switch (srso_cmd) { + case SRSO_CMD_OFF: + return; + + case SRSO_CMD_MICROCODE: + if (has_microcode) { + srso_mitigation = SRSO_MITIGATION_MICROCODE; + pr_warn(SRSO_NOTICE); + } + break; + + case SRSO_CMD_SAFE_RET: + if (IS_ENABLED(CONFIG_CPU_SRSO)) { + if (boot_cpu_data.x86 == 0x19) + setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS); + else + setup_force_cpu_cap(X86_FEATURE_SRSO); + srso_mitigation = SRSO_MITIGATION_SAFE_RET; + } else { + pr_err("WARNING: kernel not compiled with CPU_SRSO.\n"); + return; + } + break; + + default: + break; + + } + + pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode")); +} + #undef pr_fmt #define pr_fmt(fmt) fmt
@@ -2470,6 +2561,13 @@ static ssize_t gds_show_state(char *buf) return sysfs_emit(buf, "%s\n", gds_strings[gds_mitigation]); }
+static ssize_t srso_show_state(char *buf) +{ + return sysfs_emit(buf, "%s%s\n", + srso_strings[srso_mitigation], + (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode")); +} + static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, char *buf, unsigned int bug) { @@ -2522,6 +2620,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr case X86_BUG_GDS: return gds_show_state(buf);
+ case X86_BUG_SRSO: + return srso_show_state(buf); + default: break; } @@ -2591,4 +2692,9 @@ ssize_t cpu_show_gds(struct device *dev, struct device_attribute *attr, char *bu { return cpu_show_common(dev, attr, buf, X86_BUG_GDS); } + +ssize_t cpu_show_spec_rstack_overflow(struct device *dev, struct device_attribute *attr, char *buf) +{ + return cpu_show_common(dev, attr, buf, X86_BUG_SRSO); +} #endif diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index ae66bc84df81..5dc78d88d932 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1131,6 +1131,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { #define SMT_RSB BIT(4) /* CPU is affected by GDS */ #define GDS BIT(5) +/* CPU is affected by SRSO */ +#define SRSO BIT(6)
static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS), @@ -1164,8 +1166,9 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
VULNBL_AMD(0x15, RETBLEED), VULNBL_AMD(0x16, RETBLEED), - VULNBL_AMD(0x17, RETBLEED | SMT_RSB), + VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO), VULNBL_HYGON(0x18, RETBLEED | SMT_RSB), + VULNBL_AMD(0x19, SRSO), {} };
@@ -1296,6 +1299,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) boot_cpu_has(X86_FEATURE_AVX)) setup_force_cpu_bug(X86_BUG_GDS);
+ if (cpu_matches(cpu_vuln_blacklist, SRSO)) + setup_force_cpu_bug(X86_BUG_SRSO); + if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) return;
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index a21cd2381fa8..4955cb5cc001 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -133,7 +133,20 @@ SECTIONS LOCK_TEXT KPROBES_TEXT ALIGN_ENTRY_TEXT_BEGIN +#ifdef CONFIG_CPU_SRSO + *(.text.__x86.rethunk_untrain) +#endif + ENTRY_TEXT + +#ifdef CONFIG_CPU_SRSO + /* + * See the comment above srso_untrain_ret_alias()'s + * definition. + */ + . = srso_untrain_ret_alias | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20); + *(.text.__x86.rethunk_safe) +#endif ALIGN_ENTRY_TEXT_END SOFTIRQENTRY_TEXT STATIC_CALL_TEXT @@ -142,13 +155,15 @@ SECTIONS
#ifdef CONFIG_RETPOLINE __indirect_thunk_start = .; - *(.text.__x86.*) + *(.text.__x86.indirect_thunk) + *(.text.__x86.return_thunk) __indirect_thunk_end = .; #endif } :text =0xcccc
/* End of text section, which should occupy whole number of pages */ _etext = .; + . = ALIGN(PAGE_SIZE);
X86_ALIGN_RODATA_BEGIN @@ -502,6 +517,21 @@ INIT_PER_CPU(irq_stack_backing_store); "fixed_percpu_data is not at start of per-cpu area"); #endif
+#ifdef CONFIG_RETHUNK +. = ASSERT((__ret & 0x3f) == 0, "__ret not cacheline-aligned"); +. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned"); +#endif + +#ifdef CONFIG_CPU_SRSO +/* + * GNU ld cannot do XOR so do: (A | B) - (A & B) in order to compute the XOR + * of the two function addresses: + */ +. = ASSERT(((srso_untrain_ret_alias | srso_safe_ret_alias) - + (srso_untrain_ret_alias & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)), + "SRSO function pair won't alias"); +#endif + #endif /* CONFIG_X86_32 */
#ifdef CONFIG_KEXEC_CORE diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 1221bb099afb..5f7eed97487e 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -9,6 +9,7 @@ #include <asm/nospec-branch.h> #include <asm/unwind_hints.h> #include <asm/frame.h> +#include <asm/nops.h>
.section .text.__x86.indirect_thunk
@@ -73,6 +74,45 @@ SYM_CODE_END(__x86_indirect_thunk_array) */ #ifdef CONFIG_RETHUNK
+/* + * srso_untrain_ret_alias() and srso_safe_ret_alias() are placed at + * special addresses: + * + * - srso_untrain_ret_alias() is 2M aligned + * - srso_safe_ret_alias() is also in the same 2M page but bits 2, 8, 14 + * and 20 in its virtual address are set (while those bits in the + * srso_untrain_ret_alias() function are cleared). + * + * This guarantees that those two addresses will alias in the branch + * target buffer of Zen3/4 generations, leading to any potential + * poisoned entries at that BTB slot to get evicted. + * + * As a result, srso_safe_ret_alias() becomes a safe return. + */ +#ifdef CONFIG_CPU_SRSO + .section .text.__x86.rethunk_untrain + +SYM_START(srso_untrain_ret_alias, SYM_L_GLOBAL, SYM_A_NONE) + ASM_NOP2 + lfence + jmp __x86_return_thunk +SYM_FUNC_END(srso_untrain_ret_alias) +__EXPORT_THUNK(srso_untrain_ret_alias) + + .section .text.__x86.rethunk_safe +#endif + +/* Needs a definition for the __x86_return_thunk alternative below. */ +SYM_START(srso_safe_ret_alias, SYM_L_GLOBAL, SYM_A_NONE) +#ifdef CONFIG_CPU_SRSO + add $8, %_ASM_SP + UNWIND_HINT_FUNC +#endif + ANNOTATE_UNRET_SAFE + ret + int3 +SYM_FUNC_END(srso_safe_ret_alias) + .section .text.__x86.return_thunk
/* @@ -85,7 +125,7 @@ SYM_CODE_END(__x86_indirect_thunk_array) * from re-poisioning the BTB prediction. */ .align 64 - .skip 63, 0xcc + .skip 64 - (__ret - zen_untrain_ret), 0xcc SYM_FUNC_START_NOALIGN(zen_untrain_ret);
/* @@ -117,10 +157,10 @@ SYM_FUNC_START_NOALIGN(zen_untrain_ret); * evicted, __x86_return_thunk will suffer Straight Line Speculation * which will be contained safely by the INT3. */ -SYM_INNER_LABEL(__x86_return_thunk, SYM_L_GLOBAL) +SYM_INNER_LABEL(__ret, SYM_L_GLOBAL) ret int3 -SYM_CODE_END(__x86_return_thunk) +SYM_CODE_END(__ret)
/* * Ensure the TEST decoding / BTB invalidation is complete. @@ -131,11 +171,44 @@ SYM_CODE_END(__x86_return_thunk) * Jump back and execute the RET in the middle of the TEST instruction. * INT3 is for SLS protection. */ - jmp __x86_return_thunk + jmp __ret int3 SYM_FUNC_END(zen_untrain_ret) __EXPORT_THUNK(zen_untrain_ret)
+/* + * SRSO untraining sequence for Zen1/2, similar to zen_untrain_ret() + * above. On kernel entry, srso_untrain_ret() is executed which is a + * + * movabs $0xccccccc308c48348,%rax + * + * and when the return thunk executes the inner label srso_safe_ret() + * later, it is a stack manipulation and a RET which is mispredicted and + * thus a "safe" one to use. + */ + .align 64 + .skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc +SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE) + .byte 0x48, 0xb8 + +SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL) + add $8, %_ASM_SP + ret + int3 + int3 + int3 + lfence + call srso_safe_ret + int3 +SYM_CODE_END(srso_safe_ret) +SYM_FUNC_END(srso_untrain_ret) +__EXPORT_THUNK(srso_untrain_ret) + +SYM_FUNC_START(__x86_return_thunk) + ALTERNATIVE_2 "jmp __ret", "call srso_safe_ret", X86_FEATURE_SRSO, \ + "call srso_safe_ret_alias", X86_FEATURE_SRSO_ALIAS + int3 +SYM_CODE_END(__x86_return_thunk) EXPORT_SYMBOL(__x86_return_thunk)
#endif /* CONFIG_RETHUNK */ diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c index 81743cd47dc8..7968e169617c 100644 --- a/drivers/base/cpu.c +++ b/drivers/base/cpu.c @@ -584,6 +584,12 @@ ssize_t __weak cpu_show_gds(struct device *dev, return sysfs_emit(buf, "Not affected\n"); }
+ssize_t __weak cpu_show_spec_rstack_overflow(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return sysfs_emit(buf, "Not affected\n"); +} + static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL); @@ -596,6 +602,7 @@ static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL); static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL); static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL); static DEVICE_ATTR(gather_data_sampling, 0444, cpu_show_gds, NULL); +static DEVICE_ATTR(spec_rstack_overflow, 0444, cpu_show_spec_rstack_overflow, NULL);
static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_meltdown.attr, @@ -610,6 +617,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_mmio_stale_data.attr, &dev_attr_retbleed.attr, &dev_attr_gather_data_sampling.attr, + &dev_attr_spec_rstack_overflow.attr, NULL };
diff --git a/include/linux/cpu.h b/include/linux/cpu.h index 224a3acc2b66..e1ea2680b8e4 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -72,6 +72,8 @@ extern ssize_t cpu_show_retbleed(struct device *dev, struct device_attribute *attr, char *buf); extern ssize_t cpu_show_gds(struct device *dev, struct device_attribute *attr, char *buf); +extern ssize_t cpu_show_spec_rstack_overflow(struct device *dev, + struct device_attribute *attr, char *buf);
extern __printf(4, 5) struct device *cpu_device_create(struct device *parent, void *drvdata, diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index 828d902f482b..19f5b2e37869 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -661,5 +661,8 @@ bool arch_is_retpoline(struct symbol *sym)
bool arch_is_rethunk(struct symbol *sym) { - return !strcmp(sym->name, "__x86_return_thunk"); + return !strcmp(sym->name, "__x86_return_thunk") || + !strcmp(sym->name, "srso_untrain_ret") || + !strcmp(sym->name, "srso_safe_ret") || + !strcmp(sym->name, "__ret"); }
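For completeness, the spec_rstack_overflow sysfs file added by this patch can be read like any other vulnerability file; a small illustrative reader (not part of the series):

    #include <stdio.h>

    int main(void)
    {
            char buf[128];
            FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow", "r");

            if (!f) {
                    perror("fopen");        /* file absent on pre-SRSO kernels */
                    return 1;
            }
            if (fgets(buf, sizeof(buf), f))
                    fputs(buf, stdout);     /* e.g. "Mitigation: safe RET" */
            fclose(f);
            return 0;
    }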
From: "Borislav Petkov (AMD)" bp@alien8.de
stable inclusion
from stable-v5.10.189
commit df76a59feba549825f426cb1586bfa86b49c08fa
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
CVE: CVE-2023-20569
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
Upstream commit: 79113e4060aba744787a81edb9014f2865193854
Add support for the synthetic CPUID flag whose definition reads: "if this bit is 1, it indicates that MSR 49h (PRED_CMD) bit 0 (IBPB) flushes all branch type predictions from the CPU branch predictor."
This flag is there so that this capability in guests can be detected easily (otherwise one would have to track microcode revisions, which is impossible for guests).
It is also needed only for Zen3 and -4. The other two (Zen1 and -2) always flush branch type predictions by default.
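Given the definitions below (feature word 19 maps to CPUID leaf 0x80000021 EAX, and IBPB_BRTYPE is bit 28 there), a guest can probe the bit directly; a hedged userspace sketch (illustrative only, the helper name is made up):

    #include <cpuid.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* IBPB_BRTYPE: CPUID leaf 0x80000021, EAX bit 28 (word 19 in the kernel). */
    static bool has_ibpb_brtype(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(0x80000000, &eax, &ebx, &ecx, &edx) ||
                eax < 0x80000021)
                    return false;

            __get_cpuid(0x80000021, &eax, &ebx, &ecx, &edx);
            return eax & (1U << 28);
    }

    int main(void)
    {
            printf("IBPB flushes branch type predictions: %s\n",
                   has_ibpb_brtype() ? "yes" : "no");
            return 0;
    }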
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Conflicts:
	arch/x86/include/asm/cpufeatures.h

Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
(cherry picked from commit 74ec6951a699b064017d8ad03f6d56154b9136fc)
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
---
 arch/x86/include/asm/cpufeatures.h |  2 ++
 arch/x86/kernel/cpu/bugs.c         | 12 +++++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index beb792123441..7e28276a1007 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -434,6 +434,8 @@ #define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* "" Speculative Store Bypass Disable */
+#define X86_FEATURE_IBPB_BRTYPE (19*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */ + /* * BUG word(s) */ diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 332f4fa70bb3..5a9e923ab83f 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -2320,10 +2320,20 @@ static void __init srso_select_mitigation(void) if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off()) return;
- has_microcode = cpu_has_ibpb_brtype_microcode(); + /* + * The first check is for the kernel running as a guest in order + * for guests to verify whether IBPB is a viable mitigation. + */ + has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) || cpu_has_ibpb_brtype_microcode(); if (!has_microcode) { pr_warn("IBPB-extending microcode not applied!\n"); pr_warn(SRSO_NOTICE); + } else { + /* + * Enable the synthetic (even if in a real CPUID leaf) + * flag for guests. + */ + setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE); }
switch (srso_cmd) {
From: "Borislav Petkov (AMD)" bp@alien8.de
stable inclusion
from stable-v5.10.189
commit e47af0c255aed7da91202f26250558a8e34e1c26
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
CVE: CVE-2023-20569
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
Upstream commit: 1b5277c0ea0b247393a9c426769fde18cff5e2f6
Add support for the CPUID flag which denotes that the CPU is not affected by SRSO.
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Conflicts:
	arch/x86/include/asm/cpufeatures.h

Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
(cherry picked from commit f4a2459b03a28b1b89eec1402579fcad7020f431)
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
---
 arch/x86/include/asm/cpufeatures.h   |  2 ++
 arch/x86/include/asm/msr-index.h     |  1 +
 arch/x86/include/asm/nospec-branch.h |  6 +++---
 arch/x86/kernel/cpu/amd.c            | 12 ++++++------
 arch/x86/kernel/cpu/bugs.c           | 24 ++++++++++++++++++++----
 arch/x86/kernel/cpu/common.c         |  6 ++++--
 arch/x86/kvm/cpuid.c                 |  3 +++
 7 files changed, 39 insertions(+), 15 deletions(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 7e28276a1007..447086e89114 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -434,7 +434,9 @@ #define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* "" Speculative Store Bypass Disable */
+#define X86_FEATURE_SBPB (19*32+27) /* "" Selective Branch Prediction Barrier */ #define X86_FEATURE_IBPB_BRTYPE (19*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */ +#define X86_FEATURE_SRSO_NO (19*32+29) /* "" CPU is not affected by SRSO */
/* * BUG word(s) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index ec39d6d9967b..62e8681d0e3f 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -56,6 +56,7 @@
#define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */ #define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */ +#define PRED_CMD_SBPB BIT(7) /* Selective Branch Prediction Barrier */
#define MSR_PPIN_CTL 0x0000004e #define MSR_PPIN 0x0000004f diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 65db2a7d39e1..888256d2f1c6 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -314,11 +314,11 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature) : "memory"); }
+extern u64 x86_pred_cmd; + static inline void indirect_branch_prediction_barrier(void) { - u64 val = PRED_CMD_IBPB; - - alternative_msr_write(MSR_IA32_PRED_CMD, val, X86_FEATURE_USE_IBPB); + alternative_msr_write(MSR_IA32_PRED_CMD, x86_pred_cmd, X86_FEATURE_USE_IBPB); }
/* The Intel SPEC CTRL MSR base value cache */ diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index 3a8ca21c5b64..402979952fc7 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -1287,14 +1287,14 @@ bool cpu_has_ibpb_brtype_microcode(void) { u8 fam = boot_cpu_data.x86;
- if (fam == 0x17) { - /* Zen1/2 IBPB flushes branch type predictions too. */ + /* Zen1/2 IBPB flushes branch type predictions too. */ + if (fam == 0x17) return boot_cpu_has(X86_FEATURE_AMD_IBPB); - } else if (fam == 0x19) { + /* Poke the MSR bit on Zen3/4 to check its presence. */ + else if (fam == 0x19) + return !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB); + else return false; - } - - return false; }
static void zenbleed_check_cpu(void *unused) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 5a9e923ab83f..f2a7d9491955 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -59,6 +59,9 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base); DEFINE_PER_CPU(u64, x86_spec_ctrl_current); EXPORT_SYMBOL_GPL(x86_spec_ctrl_current);
+u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB; +EXPORT_SYMBOL_GPL(x86_pred_cmd); + static DEFINE_MUTEX(spec_ctrl_mutex);
/* @@ -2318,7 +2321,7 @@ static void __init srso_select_mitigation(void) bool has_microcode;
if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off()) - return; + goto pred_cmd;
/* * The first check is for the kernel running as a guest in order @@ -2331,9 +2334,18 @@ static void __init srso_select_mitigation(void) } else { /* * Enable the synthetic (even if in a real CPUID leaf) - * flag for guests. + * flags for guests. */ setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE); + setup_force_cpu_cap(X86_FEATURE_SBPB); + + /* + * Zen1/2 with SMT off aren't vulnerable after the right + * IBPB microcode has been applied. + */ + if ((boot_cpu_data.x86 < 0x19) && + (cpu_smt_control == CPU_SMT_DISABLED)) + setup_force_cpu_cap(X86_FEATURE_SRSO_NO); }
switch (srso_cmd) { @@ -2356,16 +2368,20 @@ static void __init srso_select_mitigation(void) srso_mitigation = SRSO_MITIGATION_SAFE_RET; } else { pr_err("WARNING: kernel not compiled with CPU_SRSO.\n"); - return; + goto pred_cmd; } break;
default: break; - }
pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode")); + +pred_cmd: + if (boot_cpu_has(X86_FEATURE_SRSO_NO) || + srso_cmd == SRSO_CMD_OFF) + x86_pred_cmd = PRED_CMD_SBPB; }
#undef pr_fmt diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 5dc78d88d932..3ce12037320d 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1299,8 +1299,10 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) boot_cpu_has(X86_FEATURE_AVX)) setup_force_cpu_bug(X86_BUG_GDS);
- if (cpu_matches(cpu_vuln_blacklist, SRSO)) - setup_force_cpu_bug(X86_BUG_SRSO); + if (!cpu_has(c, X86_FEATURE_SRSO_NO)) { + if (cpu_matches(cpu_vuln_blacklist, SRSO)) + setup_force_cpu_bug(X86_BUG_SRSO); + }
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) return; diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index aab185562803..7dad1cf6ed21 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -559,6 +559,9 @@ void kvm_set_cpu_caps(void) !boot_cpu_has(X86_FEATURE_AMD_SSBD)) kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
+ if (cpu_feature_enabled(X86_FEATURE_SRSO_NO)) + kvm_cpu_cap_set(X86_FEATURE_SRSO_NO); + /* * Hide all SVM features by default, SVM will set the cap bits for * features it emulates and/or exposes for L1.
From: "Borislav Petkov (AMD)" bp@alien8.de
stable inclusion
from stable-v5.10.189
commit 4acaea47e3bcb7cd55cc56c7fd4e5fb60eebdada
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
CVE: CVE-2023-20569
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
Upstream commit: 233d6f68b98d480a7c42ebe78c38f79d44741ca9
Add the option to mitigate using IBPB on kernel entry. Pull in the Retbleed alternative so that the IBPB call from there can be used. Also, if the Retbleed mitigation is done using IBPB, the same mitigation can and must be used here.
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
(cherry picked from commit a960084bb5d6804197483726e280e16f3e37aeda)
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
---
 arch/x86/include/asm/nospec-branch.h |  3 ++-
 arch/x86/kernel/cpu/bugs.c           | 23 +++++++++++++++++++++++
 2 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 888256d2f1c6..d03fd4258aa9 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -173,7 +173,8 @@ * where we have a stack but before any RET instruction. */ .macro UNTRAIN_RET -#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) +#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \ + defined(CONFIG_CPU_SRSO) ANNOTATE_UNRET_END ALTERNATIVE_2 "", \ CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \ diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index f2a7d9491955..c6f6d5b98f4e 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -2279,18 +2279,21 @@ enum srso_mitigation { SRSO_MITIGATION_NONE, SRSO_MITIGATION_MICROCODE, SRSO_MITIGATION_SAFE_RET, + SRSO_MITIGATION_IBPB, };
enum srso_mitigation_cmd { SRSO_CMD_OFF, SRSO_CMD_MICROCODE, SRSO_CMD_SAFE_RET, + SRSO_CMD_IBPB, };
static const char * const srso_strings[] = { [SRSO_MITIGATION_NONE] = "Vulnerable", [SRSO_MITIGATION_MICROCODE] = "Mitigation: microcode", [SRSO_MITIGATION_SAFE_RET] = "Mitigation: safe RET", + [SRSO_MITIGATION_IBPB] = "Mitigation: IBPB", };
static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE; @@ -2307,6 +2310,8 @@ static int __init srso_parse_cmdline(char *str) srso_cmd = SRSO_CMD_MICROCODE; else if (!strcmp(str, "safe-ret")) srso_cmd = SRSO_CMD_SAFE_RET; + else if (!strcmp(str, "ibpb")) + srso_cmd = SRSO_CMD_IBPB; else pr_err("Ignoring unknown SRSO option (%s).", str);
@@ -2348,6 +2353,14 @@ static void __init srso_select_mitigation(void) setup_force_cpu_cap(X86_FEATURE_SRSO_NO); }
+ if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) { + if (has_microcode) { + pr_err("Retbleed IBPB mitigation enabled, using same for SRSO\n"); + srso_mitigation = SRSO_MITIGATION_IBPB; + goto pred_cmd; + } + } + switch (srso_cmd) { case SRSO_CMD_OFF: return; @@ -2372,6 +2385,16 @@ static void __init srso_select_mitigation(void) } break;
+ case SRSO_CMD_IBPB: + if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) { + if (has_microcode) { + setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB); + srso_mitigation = SRSO_MITIGATION_IBPB; + } + } else { + pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n"); + goto pred_cmd; + } default: break; }
From: "Borislav Petkov (AMD)" bp@alien8.de
stable inclusion
from stable-v5.10.189
commit 384d41bea948a18288aff668b7bdf3b522b7bf73
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
CVE: CVE-2023-20569
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
Upstream commit: d893832d0e1ef41c72cdae444268c1d64a2be8ad
Add the option to flush IBPB only on VMEXIT in order to protect from malicious guests when one otherwise trusts the software that runs on the hypervisor.
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit 141aaec428fae34b92f19c1555a40d27dcb348f5) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/kernel/cpu/bugs.c | 19 +++++++++++++++++++ arch/x86/kvm/svm/svm.c | 4 +++- arch/x86/kvm/svm/vmenter.S | 3 +++ 4 files changed, 26 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 447086e89114..35c24e7a7407 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -325,6 +325,7 @@
#define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */ #define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */ +#define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* "" Issue an IBPB only on VMEXIT */
/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */ diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index c6f6d5b98f4e..f5a393c26f25 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -2280,6 +2280,7 @@ enum srso_mitigation { SRSO_MITIGATION_MICROCODE, SRSO_MITIGATION_SAFE_RET, SRSO_MITIGATION_IBPB, + SRSO_MITIGATION_IBPB_ON_VMEXIT, };
enum srso_mitigation_cmd { @@ -2287,6 +2288,7 @@ enum srso_mitigation_cmd { SRSO_CMD_MICROCODE, SRSO_CMD_SAFE_RET, SRSO_CMD_IBPB, + SRSO_CMD_IBPB_ON_VMEXIT, };
static const char * const srso_strings[] = { @@ -2294,6 +2296,7 @@ static const char * const srso_strings[] = { [SRSO_MITIGATION_MICROCODE] = "Mitigation: microcode", [SRSO_MITIGATION_SAFE_RET] = "Mitigation: safe RET", [SRSO_MITIGATION_IBPB] = "Mitigation: IBPB", + [SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only" };
static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE; @@ -2312,6 +2315,8 @@ static int __init srso_parse_cmdline(char *str) srso_cmd = SRSO_CMD_SAFE_RET; else if (!strcmp(str, "ibpb")) srso_cmd = SRSO_CMD_IBPB; + else if (!strcmp(str, "ibpb-vmexit")) + srso_cmd = SRSO_CMD_IBPB_ON_VMEXIT; else pr_err("Ignoring unknown SRSO option (%s).", str);
@@ -2395,6 +2400,20 @@ static void __init srso_select_mitigation(void) pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n"); goto pred_cmd; } + break; + + case SRSO_CMD_IBPB_ON_VMEXIT: + if (IS_ENABLED(CONFIG_CPU_SRSO)) { + if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) { + setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT); + srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT; + } + } else { + pr_err("WARNING: kernel not compiled with CPU_SRSO.\n"); + goto pred_cmd; + } + break; + default: break; } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 3ad12d07160a..a09cd8102611 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -1399,7 +1399,9 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
if (sd->current_vmcb != svm->vmcb) { sd->current_vmcb = svm->vmcb; - indirect_branch_prediction_barrier(); + + if (!cpu_feature_enabled(X86_FEATURE_IBPB_ON_VMEXIT)) + indirect_branch_prediction_barrier(); } avic_vcpu_load(vcpu, cpu); } diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S index c18d812d00cd..a8859c173258 100644 --- a/arch/x86/kvm/svm/vmenter.S +++ b/arch/x86/kvm/svm/vmenter.S @@ -137,6 +137,9 @@ SYM_FUNC_START(__svm_vcpu_run) */ UNTRAIN_RET
+ /* SRSO */ + ALTERNATIVE "", "call entry_ibpb", X86_FEATURE_IBPB_ON_VMEXIT + /* * Clear all general purpose registers except RSP and RAX to prevent * speculative use of the guest's values, even those that are reloaded
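A side note on the svm_vcpu_load() hunk above: once every VMEXIT already issues an IBPB, the extra barrier on a VMCB switch is redundant and can be skipped. A small user-space sketch of that decision, with the flag and helper as stand-ins for cpu_feature_enabled() and the real barrier:

        #include <stdbool.h>
        #include <stdio.h>

        static bool ibpb_on_vmexit;     /* stand-in for X86_FEATURE_IBPB_ON_VMEXIT */

        static void indirect_branch_prediction_barrier(void)
        {
                puts("IBPB issued");
        }

        /* Mirrors the patched svm_vcpu_load() logic: skip the barrier on a
         * VMCB switch when the same barrier is issued on every VMEXIT anyway. */
        static void on_vmcb_switch(void)
        {
                if (!ibpb_on_vmexit)
                        indirect_branch_prediction_barrier();
        }

        int main(void)
        {
                on_vmcb_switch();               /* prints: IBPB issued */
                ibpb_on_vmexit = true;
                on_vmcb_switch();               /* barrier deferred to VMEXIT */
                return 0;
        }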
From: Josh Poimboeuf jpoimboe@kernel.org
stable inclusion from stable-v5.10.189 commit 4873939c0e1cec2fd04a38ddf2c03a05e4eeb7ef category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67 CVE: CVE-2023-20569
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
Upstream commit: 238ec850b95a02dcdff3edc86781aa913549282f
Set X86_FEATURE_RETHUNK when enabling the SRSO mitigation so that generated code (e.g., ftrace, static call, eBPF) generates "jmp __x86_return_thunk" instead of RET.
[ bp: Add a comment. ]
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Signed-off-by: Josh Poimboeuf jpoimboe@kernel.org Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit 117e1b91ff94e8c38c65adf6ec7697603842e8fe) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- arch/x86/kernel/alternative.c | 4 +--- arch/x86/kernel/cpu/bugs.c | 6 ++++++ 2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index e98db8844a5b..82e9fd11b364 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -678,9 +678,7 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes) { int i = 0;
- if (cpu_feature_enabled(X86_FEATURE_RETHUNK) || - cpu_feature_enabled(X86_FEATURE_SRSO) || - cpu_feature_enabled(X86_FEATURE_SRSO_ALIAS)) + if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) return -1;
bytes[i++] = RET_INSN_OPCODE; diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index f5a393c26f25..dd6bcbead1df 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -2379,6 +2379,12 @@ static void __init srso_select_mitigation(void)
case SRSO_CMD_SAFE_RET: if (IS_ENABLED(CONFIG_CPU_SRSO)) { + /* + * Enable the return thunk for generated code + * like ftrace, static_call, etc. + */ + setup_force_cpu_cap(X86_FEATURE_RETHUNK); + if (boot_cpu_data.x86 == 0x19) setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS); else
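The effect of forcing X86_FEATURE_RETHUNK is easiest to see in the patch_return() hunk above: once the flag is set, the patcher declines to inline a bare RET, so generated code keeps its jump to the thunk. A stand-alone sketch modeled on that hunk (buffer length and the boolean flag are illustrative; opcode values per the x86 ISA):

        #include <stdbool.h>
        #include <stdio.h>

        #define RET_INSN_OPCODE         0xc3
        #define INT3_INSN_OPCODE        0xcc

        static bool rethunk_enabled = true;     /* as forced by srso_select_mitigation() */

        /* Modeled on the patched patch_return(): with RETHUNK in force the
         * patcher bails out (-1) and generated code keeps "jmp
         * __x86_return_thunk"; otherwise the site is rewritten to a bare RET
         * padded with INT3 against straight-line speculation. */
        static int patch_return(unsigned char *bytes, int len)
        {
                int i = 0;

                if (rethunk_enabled)
                        return -1;

                bytes[i++] = RET_INSN_OPCODE;
                while (i < len)
                        bytes[i++] = INT3_INSN_OPCODE;
                return i;
        }

        int main(void)
        {
                unsigned char site[5];

                printf("patched bytes: %d\n", patch_return(site, sizeof(site)));
                return 0;
        }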
From: "Borislav Petkov (AMD)" bp@alien8.de
stable inclusion from stable-v5.10.189 commit 8457fb5740b14311a8941044ff4eb5a3945de9b2 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67 CVE: CVE-2023-20569
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 5a15d8348881e9371afdf9f5357a135489496955 upstream.
The SBPB bit in MSR_IA32_PRED_CMD is supported only after a microcode patch has been applied, so set X86_FEATURE_SBPB only then. Otherwise, guests would attempt to set that bit and #GP on the MSR write.
While at it, make SMT detection more robust as some guests - depending on how and which CPUID leaves they report - end up with cpu_smt_control set to CPU_SMT_NOT_SUPPORTED, but SRSO_NO should be set for any guest incarnation where one simply cannot do SMT, for whatever reason.
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Reported-by: Konrad Rzeszutek Wilk konrad.wilk@oracle.com Reported-by: Salvatore Bonaccorso carnil@debian.org Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit f8cc99ac0b3a57045c146e15113a2c674364518b) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- arch/x86/kernel/cpu/amd.c | 19 ++++++++++++------- arch/x86/kernel/cpu/bugs.c | 7 +++---- 2 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index 402979952fc7..53cd34ac89c4 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -1285,16 +1285,21 @@ void set_dr_addr_mask(unsigned long mask, int dr)
bool cpu_has_ibpb_brtype_microcode(void) { - u8 fam = boot_cpu_data.x86; - + switch (boot_cpu_data.x86) { /* Zen1/2 IBPB flushes branch type predictions too. */ - if (fam == 0x17) + case 0x17: return boot_cpu_has(X86_FEATURE_AMD_IBPB); - /* Poke the MSR bit on Zen3/4 to check its presence. */ - else if (fam == 0x19) - return !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB); - else + case 0x19: + /* Poke the MSR bit on Zen3/4 to check its presence. */ + if (!wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) { + setup_force_cpu_cap(X86_FEATURE_SBPB); + return true; + } else { + return false; + } + default: return false; + } }
static void zenbleed_check_cpu(void *unused) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index dd6bcbead1df..7764381f86f3 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -2347,14 +2347,13 @@ static void __init srso_select_mitigation(void) * flags for guests. */ setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE); - setup_force_cpu_cap(X86_FEATURE_SBPB);
/* * Zen1/2 with SMT off aren't vulnerable after the right * IBPB microcode has been applied. */ if ((boot_cpu_data.x86 < 0x19) && - (cpu_smt_control == CPU_SMT_DISABLED)) + (!cpu_smt_possible() || (cpu_smt_control == CPU_SMT_DISABLED))) setup_force_cpu_cap(X86_FEATURE_SRSO_NO); }
@@ -2427,8 +2426,8 @@ static void __init srso_select_mitigation(void) pr_info("%s%s\n", srso_strings[srso_mitigation], (has_microcode ? "" : ", no microcode"));
pred_cmd: - if (boot_cpu_has(X86_FEATURE_SRSO_NO) || - srso_cmd == SRSO_CMD_OFF) + if ((boot_cpu_has(X86_FEATURE_SRSO_NO) || srso_cmd == SRSO_CMD_OFF) && + boot_cpu_has(X86_FEATURE_SBPB)) x86_pred_cmd = PRED_CMD_SBPB; }
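To make the ordering this commit enforces concrete, here is a user-space sketch of the reworked probe. wrmsrl_safe_stub(), the globals, and main() are stand-ins for kernel machinery (the MSR number and bit value are as given in msr-index.h); the point is that SBPB is only recorded once the probing write has actually succeeded:

        #include <stdbool.h>
        #include <stdio.h>

        #define MSR_IA32_PRED_CMD       0x49
        #define PRED_CMD_SBPB           (1u << 7)

        static bool has_sbpb;   /* stand-in for setup_force_cpu_cap(X86_FEATURE_SBPB) */

        /* Stand-in for wrmsrl_safe(): 0 on success, nonzero when the write
         * would #GP because the bit is unsupported. Hardcoded to succeed
         * purely so the sketch runs. */
        static int wrmsrl_safe_stub(unsigned int msr, unsigned long long val)
        {
                (void)msr; (void)val;
                return 0;
        }

        /* Mirrors the patched cpu_has_ibpb_brtype_microcode(): the SBPB
         * capability is set only after the probe succeeds, so guests never
         * see the bit advertised without the microcode behind it. */
        static bool probe_ibpb_brtype(unsigned int family)
        {
                switch (family) {
                case 0x17:      /* Zen1/2: IBPB flushes branch type predictions too */
                        return true;
                case 0x19:      /* Zen3/4: poke the MSR bit to check its presence */
                        if (!wrmsrl_safe_stub(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
                                has_sbpb = true;
                                return true;
                        }
                        return false;
                default:
                        return false;
                }
        }

        int main(void)
        {
                bool brtype = probe_ibpb_brtype(0x19);

                printf("brtype: %d, sbpb: %d\n", brtype, has_sbpb);
                return 0;
        }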
From: Nick Desaulniers ndesaulniers@google.com
stable inclusion from stable-v5.10.191 commit cb1eefc0463491ee09ce79acc44dc764e5269224 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit cbe8ded48b939b9d55d2c5589ab56caa7b530709 upstream.
The assertion added to verify the difference in bits set of the addresses of srso_untrain_ret_alias() and srso_safe_ret_alias() would fail to link in LLVM's ld.lld linker with the following error:
ld.lld: error: ./arch/x86/kernel/vmlinux.lds:210: at least one side of the expression must be absolute ld.lld: error: ./arch/x86/kernel/vmlinux.lds:211: at least one side of the expression must be absolute
Use ABSOLUTE to evaluate the expression referring to at least one of the symbols so that LLD can evaluate the linker script.
Also, add linker version info to the comment about XOR being unsupported in either ld.bfd or ld.lld until somewhat recently.
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Closes: https://lore.kernel.org/llvm/CA+G9fYsdUeNu-gwbs0+T6XHi4hYYk=Y9725-wFhZ7gJMsp... Reported-by: Nathan Chancellor nathan@kernel.org Reported-by: Daniel Kolesa daniel@octaforge.org Reported-by: Naresh Kamboju naresh.kamboju@linaro.org Suggested-by: Sven Volkinsfeld thyrc@gmx.net Signed-off-by: Nick Desaulniers ndesaulniers@google.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://github.com/ClangBuiltLinux/linux/issues/1907 Link: https://lore.kernel.org/r/20230809-gds-v1-1-eaac90b0cbcc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit 35122999234b37f4f65064205fbb170956a2cdba) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- arch/x86/kernel/vmlinux.lds.S | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 4955cb5cc001..72ba175cb9d4 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -524,11 +524,17 @@ INIT_PER_CPU(irq_stack_backing_store);
#ifdef CONFIG_CPU_SRSO /* - * GNU ld cannot do XOR so do: (A | B) - (A & B) in order to compute the XOR + * GNU ld cannot do XOR until 2.41. + * https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=f6f78318fca803c490... + * + * LLVM lld cannot do XOR until lld-17. + * https://github.com/llvm/llvm-project/commit/fae96104d4378166cbe5c875ef8ed808... + * + * Instead do: (A | B) - (A & B) in order to compute the XOR * of the two function addresses: */ -. = ASSERT(((srso_untrain_ret_alias | srso_safe_ret_alias) - - (srso_untrain_ret_alias & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)), +. = ASSERT(((ABSOLUTE(srso_untrain_ret_alias) | srso_safe_ret_alias) - + (ABSOLUTE(srso_untrain_ret_alias) & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)), "SRSO function pair won't alias"); #endif
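The identity the linker script relies on is worth checking once by hand: (A | B) - (A & B) equals A ^ B for any A and B, because A | B is the disjoint sum of A ^ B and A & B. A small self-contained check, with a hypothetical address pair standing in for the two SRSO functions:

        #include <assert.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                /* The four bits the linker script expects to differ. */
                uint64_t mask = (1u << 2) | (1u << 8) | (1u << 14) | (1u << 20);
                /* Hypothetical pair of function addresses in the same 2M page. */
                uint64_t a = 0xffffffff81a00000ull;
                uint64_t b = a | mask;

                /* (A | B) - (A & B) == A ^ B, since the OR is the disjoint
                 * sum of the XOR and the AND. */
                assert(((a | b) - (a & b)) == (a ^ b));
                assert((a ^ b) == mask);
                printf("pair aliases as required: %#llx\n",
                       (unsigned long long)(a ^ b));
                return 0;
        }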
From: Josh Poimboeuf jpoimboe@redhat.com
stable inclusion from stable-v5.10.192 commit bbbe1b23c7e61e6dd6cd1389cac704ad14cd9d4a category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
[ Upstream commit e028c4f7ac7ca8c96126fe46c54ab3d56ffe6a66 ]
Add a CONFIG_FRAME_POINTER-specific version of STACK_FRAME_NON_STANDARD() for the case where a function is intentionally missing frame pointer setup, but otherwise needs objtool/ORC coverage when frame pointers are disabled.
Link: https://lkml.kernel.org/r/163163047364.489837.17377799909553689661.stgit@dev...
Signed-off-by: Josh Poimboeuf jpoimboe@redhat.com Reviewed-by: Masami Hiramatsu mhiramat@kernel.org Tested-by: Masami Hiramatsu mhiramat@kernel.org Signed-off-by: Masami Hiramatsu mhiramat@kernel.org Signed-off-by: Steven Rostedt (VMware) rostedt@goodmis.org Stable-dep-of: c8c301abeae5 ("x86/ibt: Add ANNOTATE_NOENDBR") Signed-off-by: Sasha Levin sashal@kernel.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit fe65c1082a4e8e6479000922ebce4b002781ce67) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- include/linux/objtool.h | 12 ++++++++++++ tools/include/linux/objtool.h | 12 ++++++++++++ 2 files changed, 24 insertions(+)
diff --git a/include/linux/objtool.h b/include/linux/objtool.h index 662f19374bd9..ce33290a4e70 100644 --- a/include/linux/objtool.h +++ b/include/linux/objtool.h @@ -71,6 +71,17 @@ struct unwind_hint { static void __used __section(".discard.func_stack_frame_non_standard") \ *__func_stack_frame_non_standard_##func = func
+/* + * STACK_FRAME_NON_STANDARD_FP() is a frame-pointer-specific function ignore + * for the case where a function is intentionally missing frame pointer setup, + * but otherwise needs objtool/ORC coverage when frame pointers are disabled. + */ +#ifdef CONFIG_FRAME_POINTER +#define STACK_FRAME_NON_STANDARD_FP(func) STACK_FRAME_NON_STANDARD(func) +#else +#define STACK_FRAME_NON_STANDARD_FP(func) +#endif + #else /* __ASSEMBLY__ */
/* @@ -126,6 +137,7 @@ struct unwind_hint { #define UNWIND_HINT(sp_reg, sp_offset, type, end) \ "\n\t" #define STACK_FRAME_NON_STANDARD(func) +#define STACK_FRAME_NON_STANDARD_FP(func) #else #define ANNOTATE_INTRA_FUNCTION_CALL .macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0 diff --git a/tools/include/linux/objtool.h b/tools/include/linux/objtool.h index 662f19374bd9..ce33290a4e70 100644 --- a/tools/include/linux/objtool.h +++ b/tools/include/linux/objtool.h @@ -71,6 +71,17 @@ struct unwind_hint { static void __used __section(".discard.func_stack_frame_non_standard") \ *__func_stack_frame_non_standard_##func = func
+/* + * STACK_FRAME_NON_STANDARD_FP() is a frame-pointer-specific function ignore + * for the case where a function is intentionally missing frame pointer setup, + * but otherwise needs objtool/ORC coverage when frame pointers are disabled. + */ +#ifdef CONFIG_FRAME_POINTER +#define STACK_FRAME_NON_STANDARD_FP(func) STACK_FRAME_NON_STANDARD(func) +#else +#define STACK_FRAME_NON_STANDARD_FP(func) +#endif + #else /* __ASSEMBLY__ */
/* @@ -126,6 +137,7 @@ struct unwind_hint { #define UNWIND_HINT(sp_reg, sp_offset, type, end) \ "\n\t" #define STACK_FRAME_NON_STANDARD(func) +#define STACK_FRAME_NON_STANDARD_FP(func) #else #define ANNOTATE_INTRA_FUNCTION_CALL .macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0
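To see the new macro's dispatch in isolation, here is a compilable user-space mock; the kernel's real STACK_FRAME_NON_STANDARD() records the function address in a discard section for objtool, which the mock only imitates with a used static pointer. Build with and without -DCONFIG_FRAME_POINTER to see the two expansions:

        #include <stdio.h>

        /* Mock of the kernel macro: keep a reference to the function that
         * objtool-like tooling could find. */
        #define STACK_FRAME_NON_STANDARD(func) \
                static void *__attribute__((used)) __fsfns_##func = (void *)func

        #ifdef CONFIG_FRAME_POINTER
        /* Frame-pointer build: the function must be ignored explicitly. */
        #define STACK_FRAME_NON_STANDARD_FP(func) STACK_FRAME_NON_STANDARD(func)
        #else
        /* ORC build: objtool coverage is wanted, so annotate nothing. */
        #define STACK_FRAME_NON_STANDARD_FP(func)
        #endif

        static void trampoline_without_frame(void) { }
        STACK_FRAME_NON_STANDARD_FP(trampoline_without_frame);

        int main(void)
        {
                trampoline_without_frame();
                puts("annotation present only under CONFIG_FRAME_POINTER");
                return 0;
        }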
From: Peter Zijlstra peterz@infradead.org
stable inclusion from stable-v5.10.192 commit 20e24c8b4c2ab49571457340039d4686515f7f33 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
[ Upstream commit c8c301abeae58ec756b8fcb2178a632bd3c9e284 ]
In order to have objtool warn about code references to !ENDBR instructions, we need an annotation to allow this for non-control-flow instances -- consider text range checks, text patching, or return trampolines etc.
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Reviewed-by: Kees Cook keescook@chromium.org Acked-by: Josh Poimboeuf jpoimboe@redhat.com Link: https://lore.kernel.org/r/20220308154317.578968224@infradead.org Signed-off-by: Sasha Levin sashal@kernel.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit 94babac2da231d81a76eee9c0f862bfba2975136) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- include/linux/objtool.h | 16 ++++++++++++++++ tools/include/linux/objtool.h | 16 ++++++++++++++++ 2 files changed, 32 insertions(+)
diff --git a/include/linux/objtool.h b/include/linux/objtool.h index ce33290a4e70..d17fa7f4001d 100644 --- a/include/linux/objtool.h +++ b/include/linux/objtool.h @@ -82,6 +82,12 @@ struct unwind_hint { #define STACK_FRAME_NON_STANDARD_FP(func) #endif
+#define ANNOTATE_NOENDBR \ + "986: \n\t" \ + ".pushsection .discard.noendbr\n\t" \ + _ASM_PTR " 986b\n\t" \ + ".popsection\n\t" + #else /* __ASSEMBLY__ */
/* @@ -128,6 +134,13 @@ struct unwind_hint { .popsection .endm
+.macro ANNOTATE_NOENDBR +.Lhere_@: + .pushsection .discard.noendbr + .quad .Lhere_@ + .popsection +.endm + #endif /* __ASSEMBLY__ */
#else /* !CONFIG_STACK_VALIDATION */ @@ -138,10 +151,13 @@ struct unwind_hint { "\n\t" #define STACK_FRAME_NON_STANDARD(func) #define STACK_FRAME_NON_STANDARD_FP(func) +#define ANNOTATE_NOENDBR #else #define ANNOTATE_INTRA_FUNCTION_CALL .macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0 .endm +.macro ANNOTATE_NOENDBR +.endm #endif
#endif /* CONFIG_STACK_VALIDATION */ diff --git a/tools/include/linux/objtool.h b/tools/include/linux/objtool.h index ce33290a4e70..d17fa7f4001d 100644 --- a/tools/include/linux/objtool.h +++ b/tools/include/linux/objtool.h @@ -82,6 +82,12 @@ struct unwind_hint { #define STACK_FRAME_NON_STANDARD_FP(func) #endif
+#define ANNOTATE_NOENDBR \ + "986: \n\t" \ + ".pushsection .discard.noendbr\n\t" \ + _ASM_PTR " 986b\n\t" \ + ".popsection\n\t" + #else /* __ASSEMBLY__ */
/* @@ -128,6 +134,13 @@ struct unwind_hint { .popsection .endm
+.macro ANNOTATE_NOENDBR +.Lhere_@: + .pushsection .discard.noendbr + .quad .Lhere_@ + .popsection +.endm + #endif /* __ASSEMBLY__ */
#else /* !CONFIG_STACK_VALIDATION */ @@ -138,10 +151,13 @@ struct unwind_hint { "\n\t" #define STACK_FRAME_NON_STANDARD(func) #define STACK_FRAME_NON_STANDARD_FP(func) +#define ANNOTATE_NOENDBR #else #define ANNOTATE_INTRA_FUNCTION_CALL .macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0 .endm +.macro ANNOTATE_NOENDBR +.endm #endif
#endif /* CONFIG_STACK_VALIDATION */
From: Peter Zijlstra peterz@infradead.org
stable inclusion from stable-v5.10.192 commit d5b3c88d153cf10404dc9efb6b17ec2ff5e1dc67 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 77f67119004296a9b2503b377d610e08b08afc2a upstream.
Commit
fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
reimplemented __x86_return_thunk with a mix of SYM_FUNC_START and SYM_CODE_END; this is not a sane combination.
Since nothing should ever actually 'CALL' this, make it consistently CODE.
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20230814121148.571027074@infradead.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit 6daad183bce92139b8521015cae58832f839a180) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- arch/x86/lib/retpoline.S | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 5f7eed97487e..cb04a2b17d36 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -204,7 +204,9 @@ SYM_CODE_END(srso_safe_ret) SYM_FUNC_END(srso_untrain_ret) __EXPORT_THUNK(srso_untrain_ret)
-SYM_FUNC_START(__x86_return_thunk) +SYM_CODE_START(__x86_return_thunk) + UNWIND_HINT_FUNC + ANNOTATE_NOENDBR ALTERNATIVE_2 "jmp __ret", "call srso_safe_ret", X86_FEATURE_SRSO, \ "call srso_safe_ret_alias", X86_FEATURE_SRSO_ALIAS int3
From: Peter Zijlstra peterz@infradead.org
stable inclusion from stable-v5.10.192 commit 043d3bfe0a72a7b164d84bc19ef8fc1c69fcfb79 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit af023ef335f13c8b579298fc432daeef609a9e60 upstream.
vmlinux.o: warning: objtool: srso_untrain_ret() falls through to next function __x86_return_skl() vmlinux.o: warning: objtool: __x86_return_thunk() falls through to next function __x86_return_skl()
This is because these functions (can) end with CALL, which objtool does not consider a terminating instruction. Therefore, replace the INT3 instruction (which is a non-fatal trap) with UD2 (which is a fatal trap).
This indicates execution will not continue past this point.
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20230814121148.637802730@infradead.org Signed-off-by: Sasha Levin sashal@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit cda9fda640b18b49221a746c5bb7c7472d54db31) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- arch/x86/lib/retpoline.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index cb04a2b17d36..f6b65faa266a 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -199,7 +199,7 @@ SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL) int3 lfence call srso_safe_ret - int3 + ud2 SYM_CODE_END(srso_safe_ret) SYM_FUNC_END(srso_untrain_ret) __EXPORT_THUNK(srso_untrain_ret) @@ -209,7 +209,7 @@ SYM_CODE_START(__x86_return_thunk) ANNOTATE_NOENDBR ALTERNATIVE_2 "jmp __ret", "call srso_safe_ret", X86_FEATURE_SRSO, \ "call srso_safe_ret_alias", X86_FEATURE_SRSO_ALIAS - int3 + ud2 SYM_CODE_END(__x86_return_thunk) EXPORT_SYMBOL(__x86_return_thunk)
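A quick aside on UD2 for readers less familiar with it: gcc and clang lower __builtin_trap() to the same opcode on x86-64, and like objtool they treat it as terminating, so nothing is assumed to execute past it. A tiny illustration, not taken from the patch:

        #include <stdio.h>

        static int checked_divide(int a, int b)
        {
                if (b == 0)
                        __builtin_trap();       /* emits ud2; no fall-through past here */
                return a / b;
        }

        int main(void)
        {
                printf("%d\n", checked_divide(42, 6));
                return 0;
        }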
From: Peter Zijlstra peterz@infradead.org
stable inclusion from stable-v5.10.192 commit bd3d12e6fda0a7911a0f56f223924bb075b92d1e category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 095b8303f3835c68ac4a8b6d754ca1c3b6230711 upstream.
There is infrastructure to rewrite return thunks to point to any random thunk one desires; unwrap that from CALL_THUNKS, which up to now was its sole user.
[ bp: Make the thunks visible on 32-bit and add ifdeffery for the 32-bit builds. ]
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20230814121148.775293785@infradead.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit 3c6cbc57967291bd9f810a2b8a2931685cb6f868) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
Conflicts: arch/x86/kernel/cpu/bugs.c --- arch/x86/include/asm/nospec-branch.h | 5 +++++ arch/x86/kernel/cpu/bugs.c | 2 ++ 2 files changed, 7 insertions(+)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index d03fd4258aa9..73281db56d38 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -195,7 +195,12 @@ _ASM_PTR " 999b\n\t" \ ".popsection\n\t"
+#ifdef CONFIG_RETHUNK extern void __x86_return_thunk(void); +#else +static inline void __x86_return_thunk(void) {} +#endif + extern void zen_untrain_ret(void); extern void srso_untrain_ret(void); extern void srso_untrain_ret_alias(void); diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 7764381f86f3..8977ecc07514 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -64,6 +64,8 @@ EXPORT_SYMBOL_GPL(x86_pred_cmd);
static DEFINE_MUTEX(spec_ctrl_mutex);
+void (*x86_return_thunk)(void) __ro_after_init = &__x86_return_thunk; + /* * Keep track of the SPEC_CTRL MSR value for the current task, which may differ * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update().
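The hunk above makes x86_return_thunk a boot-time-settable function pointer. A user-space sketch of the same pattern (the names mirror the kernel; everything else is illustrative):

        #include <stdio.h>

        static void __x86_return_thunk(void) { puts("default return thunk"); }
        static void srso_return_thunk(void)  { puts("SRSO return thunk"); }

        /* Like the kernel's __ro_after_init pointer: starts at the default
         * thunk and is retargeted at most once during mitigation selection. */
        static void (*x86_return_thunk)(void) = __x86_return_thunk;

        static void srso_select_mitigation(int safe_ret)
        {
                if (safe_ret)
                        x86_return_thunk = srso_return_thunk;
        }

        int main(void)
        {
                x86_return_thunk();             /* default */
                srso_select_mitigation(1);
                x86_return_thunk();             /* retargeted */
                return 0;
        }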
From: Peter Zijlstra peterz@infradead.org
stable inclusion from stable-v5.10.192 commit 8b0ff83e8ad3daaeab583189ff7504383c854f24 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit d43490d0ab824023e11d0b57d0aeec17a6e0ca13 upstream.
Use the existing configurable return thunk. There is absolutely no justification for having created this __x86_return_thunk alternative.
To clarify, the whole thing looks like:
Zen3/4 does:
srso_alias_untrain_ret:
        nop2
        lfence
        jmp srso_alias_return_thunk
        int3

srso_alias_safe_ret: // aliases srso_alias_untrain_ret just so
        add $8, %rsp
        ret
        int3

srso_alias_return_thunk:
        call srso_alias_safe_ret
        ud2
While Zen1/2 does:
srso_untrain_ret:
        movabs $foo, %rax
        lfence
        call srso_safe_ret (jmp srso_return_thunk ?)
        int3

srso_safe_ret: // embedded in movabs instruction
        add $8, %rsp
        ret
        int3

srso_return_thunk:
        call srso_safe_ret
        ud2
While retbleed does:
zen_untrain_ret:
        test $0xcc, %bl
        lfence
        jmp zen_return_thunk
        int3

zen_return_thunk: // embedded in the test instruction
        ret
        int3
Where Zen1/2 flush the BTB entry using the instruction decoder trick (test, movabs), Zen3/4 use BTB aliasing. SRSO adds a return sequence (srso_safe_ret()) which forces the function return instruction to speculate into a trap (UD2). This RET will then mispredict and execution will continue at the return site read from the top of the stack.

Pick one of three options at boot (every function can only ever return once).
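The Zen1/2 instruction-decoder trick described above can be checked arithmetically: the 8 immediate bytes of the movabs are themselves the stack-adjust/RET/INT3 sequence that executes when control re-enters two bytes in at srso_safe_ret. A small sketch using the pre-LEA encoding quoted later in this series (byte values per the x86 ISA; the annotations are mine):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                /* movabs $0xccccccc308c48348, %rax -- and as re-decoded from +2: */
                const uint8_t seq[] = {
                        0x48, 0xb8,                     /* REX.W + movabs opcode */
                        0x48, 0x83, 0xc4, 0x08,         /* add $8, %rsp */
                        0xc3,                           /* ret */
                        0xcc, 0xcc, 0xcc,               /* int3 padding */
                };
                uint64_t imm;

                memcpy(&imm, seq + 2, sizeof(imm));     /* the immediate, little-endian */
                printf("imm64 = %#llx\n", (unsigned long long)imm);
                return 0;
        }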
[ bp: Fixup commit message uarch details and add them in a comment in the code too. Add a comment about the srso_select_mitigation() dependency on retbleed_select_mitigation(). Add moar ifdeffery for 32-bit builds. Add a dummy srso_untrain_ret_alias() definition for 32-bit alternatives needing the symbol. ]
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20230814121148.842775684@infradead.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Conflicts: arch/x86/kernel/cpu/bugs.c
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit ae5bdfdf707b1fe4e72be85446356e953a5b8294) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- arch/x86/include/asm/nospec-branch.h | 5 +++ arch/x86/kernel/cpu/bugs.c | 17 ++++++-- arch/x86/kernel/vmlinux.lds.S | 2 +- arch/x86/lib/retpoline.S | 58 ++++++++++++++++++++-------- tools/objtool/arch/x86/decode.c | 2 +- 5 files changed, 63 insertions(+), 21 deletions(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 73281db56d38..7462998b7130 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -201,9 +201,14 @@ extern void __x86_return_thunk(void); static inline void __x86_return_thunk(void) {} #endif
+extern void zen_return_thunk(void); +extern void srso_return_thunk(void); +extern void srso_alias_return_thunk(void); + extern void zen_untrain_ret(void); extern void srso_untrain_ret(void); extern void srso_untrain_ret_alias(void); + extern void entry_ibpb(void);
#ifdef CONFIG_RETPOLINE diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 8977ecc07514..c00d3a571d0d 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -158,8 +158,13 @@ void __init check_bugs(void) l1tf_select_mitigation(); md_clear_select_mitigation(); srbds_select_mitigation(); - gds_select_mitigation(); + + /* + * srso_select_mitigation() depends and must run after + * retbleed_select_mitigation(). + */ srso_select_mitigation(); + gds_select_mitigation();
arch_smt_update();
@@ -1012,6 +1017,9 @@ static void __init retbleed_select_mitigation(void) setup_force_cpu_cap(X86_FEATURE_RETHUNK); setup_force_cpu_cap(X86_FEATURE_UNRET);
+ if (IS_ENABLED(CONFIG_RETHUNK)) + x86_return_thunk = zen_return_thunk; + if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD && boot_cpu_data.x86_vendor != X86_VENDOR_HYGON) pr_err(RETBLEED_UNTRAIN_MSG); @@ -2386,10 +2394,13 @@ static void __init srso_select_mitigation(void) */ setup_force_cpu_cap(X86_FEATURE_RETHUNK);
- if (boot_cpu_data.x86 == 0x19) + if (boot_cpu_data.x86 == 0x19) { setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS); - else + x86_return_thunk = srso_alias_return_thunk; + } else { setup_force_cpu_cap(X86_FEATURE_SRSO); + x86_return_thunk = srso_return_thunk; + } srso_mitigation = SRSO_MITIGATION_SAFE_RET; } else { pr_err("WARNING: kernel not compiled with CPU_SRSO.\n"); diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 72ba175cb9d4..560fa84161a2 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -518,7 +518,7 @@ INIT_PER_CPU(irq_stack_backing_store); #endif
#ifdef CONFIG_RETHUNK -. = ASSERT((__ret & 0x3f) == 0, "__ret not cacheline-aligned"); +. = ASSERT((zen_return_thunk & 0x3f) == 0, "zen_return_thunk not cacheline-aligned"); . = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned"); #endif
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index f6b65faa266a..0d0de750b87e 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -93,21 +93,26 @@ SYM_CODE_END(__x86_indirect_thunk_array) .section .text.__x86.rethunk_untrain
SYM_START(srso_untrain_ret_alias, SYM_L_GLOBAL, SYM_A_NONE) + UNWIND_HINT_FUNC ASM_NOP2 lfence - jmp __x86_return_thunk + jmp srso_alias_return_thunk SYM_FUNC_END(srso_untrain_ret_alias) __EXPORT_THUNK(srso_untrain_ret_alias)
.section .text.__x86.rethunk_safe +#else +/* dummy definition for alternatives */ +SYM_START(srso_untrain_ret_alias, SYM_L_GLOBAL, SYM_A_NONE) + ANNOTATE_UNRET_SAFE + ret + int3 +SYM_FUNC_END(srso_untrain_ret_alias) #endif
-/* Needs a definition for the __x86_return_thunk alternative below. */ SYM_START(srso_safe_ret_alias, SYM_L_GLOBAL, SYM_A_NONE) -#ifdef CONFIG_CPU_SRSO add $8, %_ASM_SP UNWIND_HINT_FUNC -#endif ANNOTATE_UNRET_SAFE ret int3 @@ -115,9 +120,16 @@ SYM_FUNC_END(srso_safe_ret_alias)
.section .text.__x86.return_thunk
+SYM_CODE_START(srso_alias_return_thunk) + UNWIND_HINT_FUNC + ANNOTATE_NOENDBR + call srso_safe_ret_alias + ud2 +SYM_CODE_END(srso_alias_return_thunk) + /* * Safety details here pertain to the AMD Zen{1,2} microarchitecture: - * 1) The RET at __x86_return_thunk must be on a 64 byte boundary, for + * 1) The RET at zen_return_thunk must be on a 64 byte boundary, for * alignment within the BTB. * 2) The instruction at zen_untrain_ret must contain, and not * end with, the 0xc3 byte of the RET. @@ -125,7 +137,7 @@ SYM_FUNC_END(srso_safe_ret_alias) * from re-poisioning the BTB prediction. */ .align 64 - .skip 64 - (__ret - zen_untrain_ret), 0xcc + .skip 64 - (zen_return_thunk - zen_untrain_ret), 0xcc SYM_FUNC_START_NOALIGN(zen_untrain_ret);
/* @@ -133,16 +145,16 @@ SYM_FUNC_START_NOALIGN(zen_untrain_ret); * * TEST $0xcc, %bl * LFENCE - * JMP __x86_return_thunk + * JMP zen_return_thunk * * Executing the TEST instruction has a side effect of evicting any BTB * prediction (potentially attacker controlled) attached to the RET, as - * __x86_return_thunk + 1 isn't an instruction boundary at the moment. + * zen_return_thunk + 1 isn't an instruction boundary at the moment. */ .byte 0xf6
/* - * As executed from __x86_return_thunk, this is a plain RET. + * As executed from zen_return_thunk, this is a plain RET. * * As part of the TEST above, RET is the ModRM byte, and INT3 the imm8. * @@ -154,13 +166,13 @@ SYM_FUNC_START_NOALIGN(zen_untrain_ret); * With SMT enabled and STIBP active, a sibling thread cannot poison * RET's prediction to a type of its choice, but can evict the * prediction due to competitive sharing. If the prediction is - * evicted, __x86_return_thunk will suffer Straight Line Speculation + * evicted, zen_return_thunk will suffer Straight Line Speculation * which will be contained safely by the INT3. */ -SYM_INNER_LABEL(__ret, SYM_L_GLOBAL) +SYM_INNER_LABEL(zen_return_thunk, SYM_L_GLOBAL) ret int3 -SYM_CODE_END(__ret) +SYM_CODE_END(zen_return_thunk)
/* * Ensure the TEST decoding / BTB invalidation is complete. @@ -171,7 +183,7 @@ SYM_CODE_END(__ret) * Jump back and execute the RET in the middle of the TEST instruction. * INT3 is for SLS protection. */ - jmp __ret + jmp zen_return_thunk int3 SYM_FUNC_END(zen_untrain_ret) __EXPORT_THUNK(zen_untrain_ret) @@ -191,12 +203,19 @@ __EXPORT_THUNK(zen_untrain_ret) SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE) .byte 0x48, 0xb8
+/* + * This forces the function return instruction to speculate into a trap + * (UD2 in srso_return_thunk() below). This RET will then mispredict + * and execution will continue at the return site read from the top of + * the stack. + */ SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL) add $8, %_ASM_SP ret int3 int3 int3 + /* end of movabs */ lfence call srso_safe_ret ud2 @@ -204,12 +223,19 @@ SYM_CODE_END(srso_safe_ret) SYM_FUNC_END(srso_untrain_ret) __EXPORT_THUNK(srso_untrain_ret)
-SYM_CODE_START(__x86_return_thunk) +SYM_CODE_START(srso_return_thunk) UNWIND_HINT_FUNC ANNOTATE_NOENDBR - ALTERNATIVE_2 "jmp __ret", "call srso_safe_ret", X86_FEATURE_SRSO, \ - "call srso_safe_ret_alias", X86_FEATURE_SRSO_ALIAS + call srso_safe_ret ud2 +SYM_CODE_END(srso_return_thunk) + +SYM_CODE_START(__x86_return_thunk) + UNWIND_HINT_FUNC + ANNOTATE_NOENDBR + ANNOTATE_UNRET_SAFE + ret + int3 SYM_CODE_END(__x86_return_thunk) EXPORT_SYMBOL(__x86_return_thunk)
diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index 19f5b2e37869..877d6d4978ce 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -664,5 +664,5 @@ bool arch_is_rethunk(struct symbol *sym) return !strcmp(sym->name, "__x86_return_thunk") || !strcmp(sym->name, "srso_untrain_ret") || !strcmp(sym->name, "srso_safe_ret") || - !strcmp(sym->name, "__ret"); + !strcmp(sym->name, "zen_return_thunk"); }
From: Peter Zijlstra peterz@infradead.org
stable inclusion from stable-v5.10.192 commit 0676a392539be5ace44588d68dc13483fedea08c category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit d025b7bac07a6e90b6b98b487f88854ad9247c39 upstream.
Rename the original retbleed return thunk and untrain_ret to retbleed_return_thunk() and retbleed_untrain_ret().
No functional changes.
Suggested-by: Josh Poimboeuf jpoimboe@kernel.org Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20230814121148.909378169@infradead.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit 2e8d625ec4396b87a450249390849e7096d79adf) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- arch/x86/include/asm/nospec-branch.h | 8 ++++---- arch/x86/kernel/cpu/bugs.c | 2 +- arch/x86/kernel/vmlinux.lds.S | 2 +- arch/x86/lib/retpoline.S | 30 ++++++++++++++-------------- tools/objtool/arch/x86/decode.c | 2 +- tools/objtool/check.c | 2 +- 6 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 7462998b7130..51e3e189ec59 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -156,7 +156,7 @@ .endm
#ifdef CONFIG_CPU_UNRET_ENTRY -#define CALL_ZEN_UNTRAIN_RET "call zen_untrain_ret" +#define CALL_ZEN_UNTRAIN_RET "call retbleed_untrain_ret" #else #define CALL_ZEN_UNTRAIN_RET "" #endif @@ -166,7 +166,7 @@ * return thunk isn't mapped into the userspace tables (then again, AMD * typically has NO_MELTDOWN). * - * While zen_untrain_ret() doesn't clobber anything but requires stack, + * While retbleed_untrain_ret() doesn't clobber anything but requires stack, * entry_ibpb() will clobber AX, CX, DX. * * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point @@ -201,11 +201,11 @@ extern void __x86_return_thunk(void); static inline void __x86_return_thunk(void) {} #endif
-extern void zen_return_thunk(void); +extern void retbleed_return_thunk(void); extern void srso_return_thunk(void); extern void srso_alias_return_thunk(void);
-extern void zen_untrain_ret(void); +extern void retbleed_untrain_ret(void); extern void srso_untrain_ret(void); extern void srso_untrain_ret_alias(void);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index c00d3a571d0d..699feac18486 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1018,7 +1018,7 @@ static void __init retbleed_select_mitigation(void) setup_force_cpu_cap(X86_FEATURE_UNRET);
if (IS_ENABLED(CONFIG_RETHUNK)) - x86_return_thunk = zen_return_thunk; + x86_return_thunk = retbleed_return_thunk;
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD && boot_cpu_data.x86_vendor != X86_VENDOR_HYGON) diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 560fa84161a2..226c72f94c3a 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -518,7 +518,7 @@ INIT_PER_CPU(irq_stack_backing_store); #endif
#ifdef CONFIG_RETHUNK -. = ASSERT((zen_return_thunk & 0x3f) == 0, "zen_return_thunk not cacheline-aligned"); +. = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned"); . = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned"); #endif
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 0d0de750b87e..889882f8c3c8 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -129,32 +129,32 @@ SYM_CODE_END(srso_alias_return_thunk)
/* * Safety details here pertain to the AMD Zen{1,2} microarchitecture: - * 1) The RET at zen_return_thunk must be on a 64 byte boundary, for + * 1) The RET at retbleed_return_thunk must be on a 64 byte boundary, for * alignment within the BTB. - * 2) The instruction at zen_untrain_ret must contain, and not + * 2) The instruction at retbleed_untrain_ret must contain, and not * end with, the 0xc3 byte of the RET. * 3) STIBP must be enabled, or SMT disabled, to prevent the sibling thread * from re-poisioning the BTB prediction. */ .align 64 - .skip 64 - (zen_return_thunk - zen_untrain_ret), 0xcc -SYM_FUNC_START_NOALIGN(zen_untrain_ret); + .skip 64 - (retbleed_return_thunk - retbleed_untrain_ret), 0xcc +SYM_FUNC_START_NOALIGN(retbleed_untrain_ret);
/* - * As executed from zen_untrain_ret, this is: + * As executed from retbleed_untrain_ret, this is: * * TEST $0xcc, %bl * LFENCE - * JMP zen_return_thunk + * JMP retbleed_return_thunk * * Executing the TEST instruction has a side effect of evicting any BTB * prediction (potentially attacker controlled) attached to the RET, as - * zen_return_thunk + 1 isn't an instruction boundary at the moment. + * retbleed_return_thunk + 1 isn't an instruction boundary at the moment. */ .byte 0xf6
/* - * As executed from zen_return_thunk, this is a plain RET. + * As executed from retbleed_return_thunk, this is a plain RET. * * As part of the TEST above, RET is the ModRM byte, and INT3 the imm8. * @@ -166,13 +166,13 @@ SYM_FUNC_START_NOALIGN(zen_untrain_ret); * With SMT enabled and STIBP active, a sibling thread cannot poison * RET's prediction to a type of its choice, but can evict the * prediction due to competitive sharing. If the prediction is - * evicted, zen_return_thunk will suffer Straight Line Speculation + * evicted, retbleed_return_thunk will suffer Straight Line Speculation * which will be contained safely by the INT3. */ -SYM_INNER_LABEL(zen_return_thunk, SYM_L_GLOBAL) +SYM_INNER_LABEL(retbleed_return_thunk, SYM_L_GLOBAL) ret int3 -SYM_CODE_END(zen_return_thunk) +SYM_CODE_END(retbleed_return_thunk)
/* * Ensure the TEST decoding / BTB invalidation is complete. @@ -183,13 +183,13 @@ SYM_CODE_END(zen_return_thunk) * Jump back and execute the RET in the middle of the TEST instruction. * INT3 is for SLS protection. */ - jmp zen_return_thunk + jmp retbleed_return_thunk int3 -SYM_FUNC_END(zen_untrain_ret) -__EXPORT_THUNK(zen_untrain_ret) +SYM_FUNC_END(retbleed_untrain_ret) +__EXPORT_THUNK(retbleed_untrain_ret)
/* - * SRSO untraining sequence for Zen1/2, similar to zen_untrain_ret() + * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret() * above. On kernel entry, srso_untrain_ret() is executed which is a * * movabs $0xccccccc308c48348,%rax diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index 877d6d4978ce..791263c4eea8 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -664,5 +664,5 @@ bool arch_is_rethunk(struct symbol *sym) return !strcmp(sym->name, "__x86_return_thunk") || !strcmp(sym->name, "srso_untrain_ret") || !strcmp(sym->name, "srso_safe_ret") || - !strcmp(sym->name, "zen_return_thunk"); + !strcmp(sym->name, "retbleed_return_thunk"); } diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 654ffb51e1dd..26d9cb21a357 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1149,7 +1149,7 @@ static int add_jump_destinations(struct objtool_file *file) continue;
/* - * This is a special case for zen_untrain_ret(). + * This is a special case for retbleed_untrain_ret(). * It jumps to __x86_return_thunk(), but objtool * can't find the thunk's starting RET * instruction, because the RET is also in the
From: Peter Zijlstra peterz@infradead.org
stable inclusion from stable-v5.10.192 commit e0f50b0e4186489f9abc8d7cd3c654f47eefc792 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 42be649dd1f2eee6b1fb185f1a231b9494cf095f upstream.
For a more consistent namespace.
[ bp: Fixup names in the doc too. ]
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20230814121148.976236447@infradead.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit 01aee51c18afda58706d1ba2abdaa2a8b8669984) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- Documentation/admin-guide/hw-vuln/srso.rst | 4 ++-- arch/x86/include/asm/nospec-branch.h | 4 ++-- arch/x86/kernel/vmlinux.lds.S | 8 +++---- arch/x86/lib/retpoline.S | 26 +++++++++++----------- 4 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst index 2f923c805802..f79cb11b080f 100644 --- a/Documentation/admin-guide/hw-vuln/srso.rst +++ b/Documentation/admin-guide/hw-vuln/srso.rst @@ -124,8 +124,8 @@ sequence. To ensure the safety of this mitigation, the kernel must ensure that the safe return sequence is itself free from attacker interference. In Zen3 and Zen4, this is accomplished by creating a BTB alias between the -untraining function srso_untrain_ret_alias() and the safe return -function srso_safe_ret_alias() which results in evicting a potentially +untraining function srso_alias_untrain_ret() and the safe return +function srso_alias_safe_ret() which results in evicting a potentially poisoned BTB entry and using that safe one for all function returns.
In older Zen1 and Zen2, this is accomplished using a reinterpretation diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 51e3e189ec59..07967d80d14e 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -183,7 +183,7 @@
#ifdef CONFIG_CPU_SRSO ALTERNATIVE_2 "", "call srso_untrain_ret", X86_FEATURE_SRSO, \ - "call srso_untrain_ret_alias", X86_FEATURE_SRSO_ALIAS + "call srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS #endif .endm
@@ -207,7 +207,7 @@ extern void srso_alias_return_thunk(void);
extern void retbleed_untrain_ret(void); extern void srso_untrain_ret(void); -extern void srso_untrain_ret_alias(void); +extern void srso_alias_untrain_ret(void);
extern void entry_ibpb(void);
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 226c72f94c3a..3f3e31a457d4 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -141,10 +141,10 @@ SECTIONS
#ifdef CONFIG_CPU_SRSO /* - * See the comment above srso_untrain_ret_alias()'s + * See the comment above srso_alias_untrain_ret()'s * definition. */ - . = srso_untrain_ret_alias | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20); + . = srso_alias_untrain_ret | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20); *(.text.__x86.rethunk_safe) #endif ALIGN_ENTRY_TEXT_END @@ -533,8 +533,8 @@ INIT_PER_CPU(irq_stack_backing_store); * Instead do: (A | B) - (A & B) in order to compute the XOR * of the two function addresses: */ -. = ASSERT(((ABSOLUTE(srso_untrain_ret_alias) | srso_safe_ret_alias) - - (ABSOLUTE(srso_untrain_ret_alias) & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)), +. = ASSERT(((ABSOLUTE(srso_alias_untrain_ret) | srso_alias_safe_ret) - + (ABSOLUTE(srso_alias_untrain_ret) & srso_alias_safe_ret)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)), "SRSO function pair won't alias"); #endif
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 889882f8c3c8..b5697ba1ca71 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -75,55 +75,55 @@ SYM_CODE_END(__x86_indirect_thunk_array) #ifdef CONFIG_RETHUNK
/* - * srso_untrain_ret_alias() and srso_safe_ret_alias() are placed at + * srso_alias_untrain_ret() and srso_alias_safe_ret() are placed at * special addresses: * - * - srso_untrain_ret_alias() is 2M aligned - * - srso_safe_ret_alias() is also in the same 2M page but bits 2, 8, 14 + * - srso_alias_untrain_ret() is 2M aligned + * - srso_alias_safe_ret() is also in the same 2M page but bits 2, 8, 14 * and 20 in its virtual address are set (while those bits in the - * srso_untrain_ret_alias() function are cleared). + * srso_alias_untrain_ret() function are cleared). * * This guarantees that those two addresses will alias in the branch * target buffer of Zen3/4 generations, leading to any potential * poisoned entries at that BTB slot to get evicted. * - * As a result, srso_safe_ret_alias() becomes a safe return. + * As a result, srso_alias_safe_ret() becomes a safe return. */ #ifdef CONFIG_CPU_SRSO .section .text.__x86.rethunk_untrain
-SYM_START(srso_untrain_ret_alias, SYM_L_GLOBAL, SYM_A_NONE) +SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE) UNWIND_HINT_FUNC ASM_NOP2 lfence jmp srso_alias_return_thunk -SYM_FUNC_END(srso_untrain_ret_alias) -__EXPORT_THUNK(srso_untrain_ret_alias) +SYM_FUNC_END(srso_alias_untrain_ret) +__EXPORT_THUNK(srso_alias_untrain_ret)
.section .text.__x86.rethunk_safe #else /* dummy definition for alternatives */ -SYM_START(srso_untrain_ret_alias, SYM_L_GLOBAL, SYM_A_NONE) +SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE) ANNOTATE_UNRET_SAFE ret int3 -SYM_FUNC_END(srso_untrain_ret_alias) +SYM_FUNC_END(srso_alias_untrain_ret) #endif
-SYM_START(srso_safe_ret_alias, SYM_L_GLOBAL, SYM_A_NONE) +SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE) add $8, %_ASM_SP UNWIND_HINT_FUNC ANNOTATE_UNRET_SAFE ret int3 -SYM_FUNC_END(srso_safe_ret_alias) +SYM_FUNC_END(srso_alias_safe_ret)
.section .text.__x86.return_thunk
SYM_CODE_START(srso_alias_return_thunk) UNWIND_HINT_FUNC ANNOTATE_NOENDBR - call srso_safe_ret_alias + call srso_alias_safe_ret ud2 SYM_CODE_END(srso_alias_return_thunk)
From: Peter Zijlstra peterz@infradead.org
stable inclusion from stable-v5.10.192 commit 06597b650beb49bffc61e077f41e39b830d72128 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit e7c25c441e9e0fa75b4c83e0b26306b702cfe90d upstream.
Since there can only be one active return_thunk, there only needs to be one (matching) untrain_ret. It fundamentally doesn't make sense to allow multiple untrain_ret at the same time.
Fold all three untrain methods into a single (temporary) helper stub.
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Link: https://lore.kernel.org/r/20230814121149.042774962@infradead.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit 1b30d6fdea936a576ddc4d2554cb162613714994) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- arch/x86/include/asm/nospec-branch.h | 12 ++++-------- arch/x86/kernel/cpu/bugs.c | 1 + arch/x86/lib/retpoline.S | 7 +++++++ 3 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 07967d80d14e..181cda568eff 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -156,9 +156,9 @@ .endm
#ifdef CONFIG_CPU_UNRET_ENTRY -#define CALL_ZEN_UNTRAIN_RET "call retbleed_untrain_ret" +#define CALL_UNTRAIN_RET "call entry_untrain_ret" #else -#define CALL_ZEN_UNTRAIN_RET "" +#define CALL_UNTRAIN_RET "" #endif
/* @@ -177,14 +177,9 @@ defined(CONFIG_CPU_SRSO) ANNOTATE_UNRET_END ALTERNATIVE_2 "", \ - CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \ + CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \ "call entry_ibpb", X86_FEATURE_ENTRY_IBPB #endif - -#ifdef CONFIG_CPU_SRSO - ALTERNATIVE_2 "", "call srso_untrain_ret", X86_FEATURE_SRSO, \ - "call srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS -#endif .endm
#else /* __ASSEMBLY__ */ @@ -209,6 +204,7 @@ extern void retbleed_untrain_ret(void); extern void srso_untrain_ret(void); extern void srso_alias_untrain_ret(void);
+extern void entry_untrain_ret(void); extern void entry_ibpb(void);
#ifdef CONFIG_RETPOLINE diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 699feac18486..858c8dff0315 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -2393,6 +2393,7 @@ static void __init srso_select_mitigation(void) * like ftrace, static_call, etc. */ setup_force_cpu_cap(X86_FEATURE_RETHUNK); + setup_force_cpu_cap(X86_FEATURE_UNRET);
if (boot_cpu_data.x86 == 0x19) { setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS); diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index b5697ba1ca71..e807df6e56a7 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -230,6 +230,13 @@ SYM_CODE_START(srso_return_thunk) ud2 SYM_CODE_END(srso_return_thunk)
+SYM_FUNC_START(entry_untrain_ret) + ALTERNATIVE_2 "jmp retbleed_untrain_ret", \ + "jmp srso_untrain_ret", X86_FEATURE_SRSO, \ + "jmp srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS +SYM_FUNC_END(entry_untrain_ret) +__EXPORT_THUNK(entry_untrain_ret) + SYM_CODE_START(__x86_return_thunk) UNWIND_HINT_FUNC ANNOTATE_NOENDBR
From: Sean Christopherson seanjc@google.com
stable inclusion from stable-v5.10.192 commit 62ebfeb0dcf7f107f0b08bc98c1f9b6619f4f212 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit ba5ca5e5e6a1d55923e88b4a83da452166f5560e upstream.
Use LEA instead of ADD when adjusting %rsp in srso_safe_ret{,_alias}() so as to avoid clobbering flags. Drop one of the INT3 instructions to account for the LEA consuming one more byte than the ADD.
KVM's emulator makes indirect calls into a jump table of sorts, where the destination of each call is a small blob of code that performs fast emulation by executing the target instruction with fixed operands.
E.g. to emulate ADC, fastop() invokes adcb_al_dl():
adcb_al_dl:
   <+0>:  adc    %dl,%al
   <+2>:  jmp    <__x86_return_thunk>
A major motivation for doing fast emulation is to leverage the CPU to handle consumption and manipulation of arithmetic flags, i.e. RFLAGS is both an input and output to the target of the call. fastop() collects the RFLAGS result by pushing RFLAGS onto the stack and popping them back into a variable (held in %rdi in this case):
asm("push %[flags]; popf; " CALL_NOSPEC " ; pushf; pop %[flags]\n"
   <+71>: mov    0xc0(%r8),%rdx
   <+78>: mov    0x100(%r8),%rcx
   <+85>: push   %rdi
   <+86>: popf
   <+87>: call   *%rsi
   <+89>: nop
   <+90>: nop
   <+91>: nop
   <+92>: pushf
   <+93>: pop    %rdi
and then propagating the arithmetic flags into the vCPU's emulator state:
ctxt->eflags = (ctxt->eflags & ~EFLAGS_MASK) | (flags & EFLAGS_MASK);
   <+64>:  and    $0xfffffffffffff72a,%r9
   <+94>:  and    $0x8d5,%edi
   <+109>: or     %rdi,%r9
   <+122>: mov    %r9,0x10(%r8)
The failures can be most easily reproduced by running the "emulator" test in KVM-Unit-Tests.
If you're feeling a bit of deja vu, see commit b63f20a778c8 ("x86/retpoline: Don't clobber RFLAGS during CALL_NOSPEC on i386").
In addition, this breaks booting of a clang-compiled guest on a gcc-compiled host where the host contains the %rsp-modifying SRSO mitigations.
[ bp: Massage commit message, extend, remove addresses. ]
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Closes: https://lore.kernel.org/all/de474347-122d-54cd-eabf-9dcc95ab9eae@amd.com Reported-by: Srikanth Aithal sraithal@amd.com Reported-by: Nathan Chancellor nathan@kernel.org Signed-off-by: Sean Christopherson seanjc@google.com Signed-off-by: Borislav Petkov (AMD) bp@alien8.de Tested-by: Nathan Chancellor nathan@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/20230810013334.GA5354@dev-arch.thelio-3990X/ Link: https://lore.kernel.org/r/20230811155255.250835-1-seanjc@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Jialin Zhang zhangjialin11@huawei.com (cherry picked from commit d39ba984d80791506bee4051533fffb06cfd0d06) Signed-off-by: Jialin Zhang zhangjialin11@huawei.com --- arch/x86/lib/retpoline.S | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index e807df6e56a7..8d82da010784 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -111,7 +111,7 @@ SYM_FUNC_END(srso_alias_untrain_ret) #endif
SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE) - add $8, %_ASM_SP + lea 8(%_ASM_SP), %_ASM_SP UNWIND_HINT_FUNC ANNOTATE_UNRET_SAFE ret @@ -192,7 +192,7 @@ __EXPORT_THUNK(retbleed_untrain_ret) * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret() * above. On kernel entry, srso_untrain_ret() is executed which is a * - * movabs $0xccccccc308c48348,%rax + * movabs $0xccccc30824648d48,%rax * * and when the return thunk executes the inner label srso_safe_ret() * later, it is a stack manipulation and a RET which is mispredicted and @@ -210,11 +210,10 @@ SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE) * the stack. */ SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL) - add $8, %_ASM_SP + lea 8(%_ASM_SP), %_ASM_SP ret int3 int3 - int3 /* end of movabs */ lfence call srso_safe_ret
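The flags-preservation property the fix relies on is observable from user space: LEA performs the same address arithmetic as ADD but leaves RFLAGS alone. A gcc/clang x86-64 demo (not from the patch) that sets ZF, runs one instruction or the other on a scratch register, and reads the flag back:

        #include <stdio.h>

        /* Returns ZF after running either ADD or LEA on a scratch register
         * whose ZF was deliberately set beforehand. */
        static unsigned long zf_after(int use_lea)
        {
                unsigned long rflags, scratch = 1;

                if (use_lea)
                        asm volatile("cmp %1, %1\n\t"   /* ZF := 1 */
                                     "lea 8(%1), %1\n\t"/* flags untouched */
                                     "pushfq\n\t"
                                     "pop %0"
                                     : "=r" (rflags), "+r" (scratch) : : "cc");
                else
                        asm volatile("cmp %1, %1\n\t"   /* ZF := 1 */
                                     "add $8, %1\n\t"   /* rewrites arithmetic flags */
                                     "pushfq\n\t"
                                     "pop %0"
                                     : "=r" (rflags), "+r" (scratch) : : "cc");

                return (rflags >> 6) & 1;       /* bit 6 of RFLAGS is ZF */
        }

        int main(void)
        {
                printf("ZF after lea: %lu\n", zf_after(1));     /* 1: preserved */
                printf("ZF after add: %lu\n", zf_after(0));     /* 0: clobbered */
                return 0;
        }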
From: "Borislav Petkov (AMD)" bp@alien8.de
stable inclusion from stable-v5.10.192 commit 88e16ce7f8a6b1e1024f70f1729b93e5612da7b2 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit e9fbc47b818b964ddff5df5b2d5c0f5f32f4a147 upstream.
Skip the srso command-line parsing, which is not needed on Zen1/2 with SMT disabled and the proper microcode applied (the latter should be the case anyway), as those configurations are not affected.
Fixes: 5a15d8348881 ("x86/srso: Tie SBPB bit setting to microcode patch detection")
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de
Link: https://lore.kernel.org/r/20230813104517.3346-1-bp@alien8.de
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
(cherry picked from commit 870d257c050931c8a9b4630d03b152412124738f)
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
---
 arch/x86/kernel/cpu/bugs.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 858c8dff0315..f3620e725004 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2363,8 +2363,10 @@ static void __init srso_select_mitigation(void)
 		 * IBPB microcode has been applied.
 		 */
 		if ((boot_cpu_data.x86 < 0x19) &&
-		    (!cpu_smt_possible() || (cpu_smt_control == CPU_SMT_DISABLED)))
+		    (!cpu_smt_possible() || (cpu_smt_control == CPU_SMT_DISABLED))) {
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
+			return;
+		}
 	}
 
 	if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
@@ -2650,6 +2652,9 @@ static ssize_t gds_show_state(char *buf)
 
 static ssize_t srso_show_state(char *buf)
 {
+	if (boot_cpu_has(X86_FEATURE_SRSO_NO))
+		return sysfs_emit(buf, "Not affected\n");
+
 	return sysfs_emit(buf, "%s%s\n",
 			  srso_strings[srso_mitigation],
 			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
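The resulting state string is exported through sysfs; a quick user-space check (editor's sketch, not part of the patch; the path is documented in Documentation/admin-guide/hw-vuln/srso.rst):

#include <stdio.h>

int main(void)
{
	char buf[128];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow", "r");

	if (!f) {
		perror("spec_rstack_overflow");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("SRSO: %s", buf);  /* e.g. "Not affected" after this patch */
	fclose(f);
	return 0;
}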
From: Peter Zijlstra peterz@infradead.org
stable inclusion
from stable-v5.10.192
commit 23e59874657c6396abb82544f6f60d100d84ee6a
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit dbf46008775516f7f25c95b7760041c286299783 upstream.
For stack-validation of a frame-pointer build, objtool validates that every CALL instruction is preceded by a frame-setup. The new SRSO return thunks violate this with their RSB stuffing trickery.
Extend the __fentry__ exception to also cover the embedded_insn case used for this. This cures:
vmlinux.o: warning: objtool: srso_untrain_ret+0xd: call without frame pointer save/setup
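For context, this is the shape objtool's frame-pointer validation normally insists on before a CALL (illustrative prologue, not from the patch):

func:
	push %rbp          # save the caller's frame pointer
	mov  %rsp, %rbp    # establish this function's frame
	...
	call other_func    # OK: a frame has been set up

The RSB-stuffing "call srso_safe_ret" inside srso_untrain_ret() deliberately has no such prologue, so it needs the same kind of exemption that __fentry__ calls already get.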
Fixes: 4ae68b26c3ab ("objtool/x86: Fix SRSO mess")
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de
Acked-by: Josh Poimboeuf jpoimboe@kernel.org
Link: https://lore.kernel.org/r/20230816115921.GH980931@hirez.programming.kicks-as...
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
(cherry picked from commit b2f4d82ac59780d86ff189b50c5dda6e8d1f775f)
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
---
 tools/objtool/check.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 26d9cb21a357..88da69af6f5e 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -2063,12 +2063,17 @@ static int decode_sections(struct objtool_file *file)
 	return 0;
 }
 
-static bool is_fentry_call(struct instruction *insn)
+static bool is_special_call(struct instruction *insn)
 {
-	if (insn->type == INSN_CALL &&
-	    insn->call_dest &&
-	    insn->call_dest->fentry)
-		return true;
+	if (insn->type == INSN_CALL) {
+		struct symbol *dest = insn->call_dest;
+
+		if (!dest)
+			return false;
+
+		if (dest->fentry)
+			return true;
+	}
 
 	return false;
 }
@@ -2942,7 +2947,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
 		if (ret)
 			return ret;
 
-		if (!no_fp && func && !is_fentry_call(insn) &&
+		if (!no_fp && func && !is_special_call(insn) &&
 		    !has_valid_stack_frame(&state)) {
 			WARN_FUNC("call without frame pointer save/setup",
 				  sec, insn->offset);
From: "Borislav Petkov (AMD)" bp@alien8.de
stable inclusion
from stable-v5.10.192
commit 0e8139f92304cef82ac769cfb51dda341a17fb1c
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 6405b72e8d17bd1875a56ae52d23ec3cd51b9d66 upstream.
Specify how SRSO is mitigated when SMT is disabled, and correct the SMT check for that: CPU_SMT_DISABLED can be re-enabled at runtime via sysfs, so only !cpu_smt_possible() (SMT force-disabled or unsupported) guarantees SMT stays off.
Fixes: e9fbc47b818b ("x86/srso: Disable the mitigation on unaffected configurations")
Suggested-by: Josh Poimboeuf jpoimboe@kernel.org
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de
Acked-by: Josh Poimboeuf jpoimboe@kernel.org
Link: https://lore.kernel.org/r/20230814200813.p5czl47zssuej7nv@treble
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
(cherry picked from commit d3b2e775575120d687f87b93d2afc9c8b764cbee)
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
---
 arch/x86/kernel/cpu/bugs.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f3620e725004..8bbc88149f0c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2362,8 +2362,7 @@ static void __init srso_select_mitigation(void)
 		 * Zen1/2 with SMT off aren't vulnerable after the right
 		 * IBPB microcode has been applied.
 		 */
-		if ((boot_cpu_data.x86 < 0x19) &&
-		    (!cpu_smt_possible() || (cpu_smt_control == CPU_SMT_DISABLED))) {
+		if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
 			setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
 			return;
 		}
@@ -2653,7 +2652,7 @@ static ssize_t gds_show_state(char *buf)
 static ssize_t srso_show_state(char *buf)
 {
 	if (boot_cpu_has(X86_FEATURE_SRSO_NO))
-		return sysfs_emit(buf, "Not affected\n");
+		return sysfs_emit(buf, "Mitigation: SMT disabled\n");
 
 	return sysfs_emit(buf, "%s%s\n",
 			  srso_strings[srso_mitigation],
From: Peter Zijlstra peterz@infradead.org
stable inclusion
from stable-v5.10.193
commit c6aecc29d29eeda6b963f1a5ea0adae2e9fd7c3d
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit 4ae68b26c3ab5a82aa271e6e9fc9b1a06e1d6b40 upstream.
Objtool --rethunk does two things:
- it collects all (tail) calls of __x86_return_thunk and places them into .return_sites. These are typically compiler generated, but assembly code's RET macro emits the same thing.
- it fudges the validation of the __x86_return_thunk symbol; because this symbol is inside another instruction, it can't actually find the instruction pointed to by the symbol offset and gets upset.
Because these two things pertained to the same symbol, there was no pressing need to separate them.
However, along came SRSO with more crazy things to deal with.
The SRSO patch itself added the following symbol names to identify as rethunk:
'srso_untrain_ret', 'srso_safe_ret' and '__ret'
Where '__ret' is the old retbleed return thunk, 'srso_safe_ret' is a new similarly embedded return thunk, and 'srso_untrain_ret' is completely unrelated to anything the above does (and was only included because of that INT3 vs UD2 issue fixed previously).
Clear things up by adding a second category for the embedded instruction thing.
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Signed-off-by: Borislav Petkov (AMD) bp@alien8.de
Link: https://lore.kernel.org/r/20230814121148.704502245@infradead.org
Signed-off-by: Josh Poimboeuf jpoimboe@kernel.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
(cherry picked from commit 0ceea87e7fb16cbe1973d470c45a44c8fb191a74)
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
---
 tools/objtool/arch.h            |  1 +
 tools/objtool/arch/x86/decode.c | 11 +++++++----
 tools/objtool/check.c           | 22 +++++++++++++++++++++-
 tools/objtool/elf.h             |  1 +
 4 files changed, 30 insertions(+), 5 deletions(-)
diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h
index 580ce1857585..75840291b393 100644
--- a/tools/objtool/arch.h
+++ b/tools/objtool/arch.h
@@ -90,6 +90,7 @@ int arch_decode_hint_reg(u8 sp_reg, int *base);
 
 bool arch_is_retpoline(struct symbol *sym);
 bool arch_is_rethunk(struct symbol *sym);
+bool arch_is_embedded_insn(struct symbol *sym);
 
 int arch_rewrite_retpolines(struct objtool_file *file);
 
diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c
index 791263c4eea8..7e464a95ba55 100644
--- a/tools/objtool/arch/x86/decode.c
+++ b/tools/objtool/arch/x86/decode.c
@@ -661,8 +661,11 @@ bool arch_is_retpoline(struct symbol *sym)
 
 bool arch_is_rethunk(struct symbol *sym)
 {
-	return !strcmp(sym->name, "__x86_return_thunk") ||
-	       !strcmp(sym->name, "srso_untrain_ret") ||
-	       !strcmp(sym->name, "srso_safe_ret") ||
-	       !strcmp(sym->name, "retbleed_return_thunk");
+	return !strcmp(sym->name, "__x86_return_thunk");
+}
+
+bool arch_is_embedded_insn(struct symbol *sym)
+{
+	return !strcmp(sym->name, "retbleed_return_thunk") ||
+	       !strcmp(sym->name, "srso_safe_ret");
 }
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 88da69af6f5e..ffd82ee6ee13 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -930,16 +930,33 @@ static int add_ignore_alternatives(struct objtool_file *file)
 	return 0;
 }
 
+/*
+ * Symbols that replace INSN_CALL_DYNAMIC, every (tail) call to such a symbol
+ * will be added to the .retpoline_sites section.
+ */
 __weak bool arch_is_retpoline(struct symbol *sym)
 {
 	return false;
 }
 
+/*
+ * Symbols that replace INSN_RETURN, every (tail) call to such a symbol
+ * will be added to the .return_sites section.
+ */
 __weak bool arch_is_rethunk(struct symbol *sym)
 {
 	return false;
 }
 
+/*
+ * Symbols that are embedded inside other instructions, because sometimes crazy
+ * code exists. These are mostly ignored for validation purposes.
+ */
+__weak bool arch_is_embedded_insn(struct symbol *sym)
+{
+	return false;
+}
+
 #define NEGATIVE_RELOC ((void *)-1L)
 
 static struct reloc *insn_reloc(struct objtool_file *file, struct instruction *insn)
@@ -1156,7 +1173,7 @@ static int add_jump_destinations(struct objtool_file *file)
 			 * middle of another instruction.  Objtool only
 			 * knows about the outer instruction.
 			 */
-			if (sym && sym->return_thunk) {
+			if (sym && sym->embedded_insn) {
 				add_return_call(file, insn, false);
 				continue;
 			}
@@ -1955,6 +1972,9 @@ static int classify_symbols(struct objtool_file *file)
 		if (arch_is_rethunk(func))
 			func->return_thunk = true;
 
+		if (arch_is_embedded_insn(func))
+			func->embedded_insn = true;
+
 		if (!strcmp(func->name, "__fentry__"))
 			func->fentry = true;
 
diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h
index a1863eb35fbb..19446d911244 100644
--- a/tools/objtool/elf.h
+++ b/tools/objtool/elf.h
@@ -61,6 +61,7 @@ struct symbol {
 	u8 return_thunk : 1;
 	u8 fentry : 1;
 	u8 kcov : 1;
+	u8 embedded_insn : 1;
 };
 
 struct reloc {
From: Arnaldo Carvalho de Melo acme@redhat.com
mainline inclusion
from mainline-v5.11-rc1
commit fde668244d1d8d490b5b9daf53fe4f92a6751773
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
--------------------------------
To pick up the changes in:
Fixes: 69372cf01290b958 ("x86/cpu: Add VM page flush MSR availablility as a CPUID feature")
That cause these changes in tooling:
$ tools/perf/trace/beauty/tracepoints/x86_msr.sh > before
$ cp arch/x86/include/asm/msr-index.h tools/arch/x86/include/asm/msr-index.h
$ tools/perf/trace/beauty/tracepoints/x86_msr.sh > after
$ diff -u before after
--- before	2020-12-21 09:09:05.593005003 -0300
+++ after	2020-12-21 09:12:48.436994802 -0300
@@ -21,7 +21,7 @@
 	[0x0000004f] = "PPIN",
 	[0x00000060] = "LBR_CORE_TO",
 	[0x00000079] = "IA32_UCODE_WRITE",
-	[0x0000008b] = "IA32_UCODE_REV",
+	[0x0000008b] = "AMD64_PATCH_LEVEL",
 	[0x0000008C] = "IA32_SGXLEPUBKEYHASH0",
 	[0x0000008D] = "IA32_SGXLEPUBKEYHASH1",
 	[0x0000008E] = "IA32_SGXLEPUBKEYHASH2",
@@ -286,6 +286,7 @@
 	[0xc0010114 - x86_AMD_V_KVM_MSRs_offset] = "VM_CR",
 	[0xc0010115 - x86_AMD_V_KVM_MSRs_offset] = "VM_IGNNE",
 	[0xc0010117 - x86_AMD_V_KVM_MSRs_offset] = "VM_HSAVE_PA",
+	[0xc001011e - x86_AMD_V_KVM_MSRs_offset] = "AMD64_VM_PAGE_FLUSH",
 	[0xc001011f - x86_AMD_V_KVM_MSRs_offset] = "AMD64_VIRT_SPEC_CTRL",
 	[0xc0010130 - x86_AMD_V_KVM_MSRs_offset] = "AMD64_SEV_ES_GHCB",
 	[0xc0010131 - x86_AMD_V_KVM_MSRs_offset] = "AMD64_SEV",
$
The new MSR matches a pattern that previously had to be filtered out: all AMD64_-prefixed strings were excluded to avoid a clash with IA32_UCODE_REV (MSR_AMD64_PATCH_LEVEL shares address 0x8b with it). Change the regex to exclude IA32_UCODE_REV instead, preferring the more relevant AMD64_-prefixed strings and thereby catching the new AMD64_VM_PAGE_FLUSH MSR.
Which causes these parts of tools/perf/ to be rebuilt:
  CC       /tmp/build/perf/trace/beauty/tracepoints/x86_msr.o
  LD       /tmp/build/perf/trace/beauty/tracepoints/perf-in.o
  LD       /tmp/build/perf/trace/beauty/perf-in.o
  LD       /tmp/build/perf/perf-in.o
  LINK     /tmp/build/perf/perf
This addresses this perf tools build warning:
diff -u tools/arch/x86/include/asm/msr-index.h arch/x86/include/asm/msr-index.h
Warning: Kernel ABI header at 'tools/arch/x86/include/asm/msr-index.h' differs from latest version at 'arch/x86/include/asm/msr-index.h'
Cc: Adrian Hunter adrian.hunter@intel.com
Cc: Ian Rogers irogers@google.com
Cc: Jiri Olsa jolsa@kernel.org
Cc: Namhyung Kim namhyung@kernel.org
Cc: Paolo Bonzini pbonzini@redhat.com
Cc: Tom Lendacky thomas.lendacky@amd.com
Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
(cherry picked from commit fbb8a6de699700d49a8e1c50897cc439d29d9348)
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
---
 tools/arch/x86/include/asm/msr-index.h         | 1 +
 tools/perf/trace/beauty/tracepoints/x86_msr.sh | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
index b2ab83d8bdcd..2bd1aa8c68f2 100644
--- a/tools/arch/x86/include/asm/msr-index.h
+++ b/tools/arch/x86/include/asm/msr-index.h
@@ -516,6 +516,7 @@
 #define MSR_AMD64_ICIBSEXTDCTL		0xc001103c
 #define MSR_AMD64_IBSOPDATA4		0xc001103d
 #define MSR_AMD64_IBS_REG_COUNT_MAX	8 /* includes MSR_AMD64_IBSBRTARGET */
+#define MSR_AMD64_VM_PAGE_FLUSH		0xc001011e
 #define MSR_AMD64_SEV_ES_GHCB		0xc0010130
 #define MSR_AMD64_SEV			0xc0010131
 #define MSR_AMD64_SEV_ENABLED_BIT	0
diff --git a/tools/perf/trace/beauty/tracepoints/x86_msr.sh b/tools/perf/trace/beauty/tracepoints/x86_msr.sh
index 831c02cf0586..27ee1ea1fe94 100755
--- a/tools/perf/trace/beauty/tracepoints/x86_msr.sh
+++ b/tools/perf/trace/beauty/tracepoints/x86_msr.sh
@@ -15,7 +15,7 @@ x86_msr_index=${arch_x86_header_dir}/msr-index.h
 
 printf "static const char *x86_MSRs[] = {\n"
 regex='^[[:space:]]*#[[:space:]]*define[[:space:]]+MSR_([[:alnum:]][[:alnum:]_]+)[[:space:]]+(0x00000[[:xdigit:]]+)[[:space:]]*.*'
-egrep $regex ${x86_msr_index} | egrep -v 'MSR_(ATOM|P[46]|AMD64|IA32_TSCDEADLINE|IDT_FCR4)' | \
+egrep $regex ${x86_msr_index} | egrep -v 'MSR_(ATOM|P[46]|IA32_(TSCDEADLINE|UCODE_REV)|IDT_FCR4)' | \
 	sed -r "s/$regex/\2 \1/g" | sort -n | \
 	xargs printf "\t[%s] = \"%s\",\n"
 printf "};\n\n"
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
--------------------------------
Fix the ABI breakage in the same way as the previous solution:
commit ac376dd8c1af ("x86/cpufeatures: Fix abi breakage caused by NCAPINTS in cpufeature header file.")
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
Signed-off-by: Yanan Wang wangyanan55@huawei.com
Signed-off-by: Kunkun Jiang jiangkunkun@huawei.com
(cherry picked from commit a9efa75d809979bc1daf3ec0169be74a29dfa8ed)
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
---
 arch/x86/include/asm/cpufeature.h        |  7 ++-----
 arch/x86/include/asm/cpufeatures.h       | 11 ++++++-----
 arch/x86/include/asm/disabled-features.h |  3 +--
 arch/x86/include/asm/required-features.h |  3 +--
 arch/x86/kernel/cpu/common.c             |  3 ---
 arch/x86/kernel/cpu/scattered.c          |  3 +++
 arch/x86/kvm/cpuid.c                     |  4 ++++
 arch/x86/kvm/cpuid.h                     | 13 +++++++++++++
 8 files changed, 30 insertions(+), 17 deletions(-)
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 5efb04544612..f4cbc01c0bc4 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -31,7 +31,6 @@ enum cpuid_leafs
 	CPUID_7_ECX,
 	CPUID_8000_0007_EBX,
 	CPUID_7_EDX,
-	CPUID_8000_0021_EAX,
 };
 
 #ifdef CONFIG_X86_FEATURE_NAMES
@@ -90,9 +89,8 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
 	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 16, feature_bit) ||	\
 	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 17, feature_bit) ||	\
 	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) ||	\
-	   CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 19, feature_bit) ||	\
 	   REQUIRED_MASK_CHECK					  ||	\
-	   BUILD_BUG_ON_ZERO(NCAPINTS != 20))
+	   BUILD_BUG_ON_ZERO(NCAPINTS != 19))
 
 #define DISABLED_MASK_BIT_SET(feature_bit)				\
 	( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK,  0, feature_bit) ||	\
@@ -114,9 +112,8 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
 	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 16, feature_bit) ||	\
 	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 17, feature_bit) ||	\
 	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) ||	\
-	   CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 19, feature_bit) ||	\
 	   DISABLED_MASK_CHECK					  ||	\
-	   BUILD_BUG_ON_ZERO(NCAPINTS != 20))
+	   BUILD_BUG_ON_ZERO(NCAPINTS != 19))
 
 #define cpu_has(c, bit)							\
 	(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 :	\
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 35c24e7a7407..4b80bb75d56d 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -13,7 +13,7 @@
 /*
  * Defines x86 CPU feature bits
  */
-#define NCAPINTS			20	/* N 32-bit words worth of info */
+#define NCAPINTS			19	/* N 32-bit words worth of info */
 #define NBUGINTS			2	/* N 32-bit bug flags */
 
 /*
@@ -404,6 +404,11 @@
 #define X86_FEATURE_SUCCOR		(17*32+ 1) /* Uncorrectable error containment and recovery */
 #define X86_FEATURE_SMCA		(17*32+ 3) /* Scalable MCA */
 
+/* AMD-defined SRSO vulnerability features, CPUID level 0x80000021 (EAX), word 20 */
+#define X86_FEATURE_SBPB		(17*32+24)
+#define X86_FEATURE_IBPB_BRTYPE		(17*32+25)
+#define X86_FEATURE_SRSO_NO		(17*32+26)
+
 /* AMD-defined memory encryption features, CPUID level 0x8000001f (EAX), word 19 */
 #define X86_FEATURE_SME			(17*32+ 27) /* AMD Secure Memory Encryption */
 #define X86_FEATURE_SEV			(17*32+ 28) /* AMD Secure Encrypted Virtualization */
@@ -435,10 +440,6 @@
 #define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
 
-#define X86_FEATURE_SBPB		(19*32+27) /* "" Selective Branch Prediction Barrier */
-#define X86_FEATURE_IBPB_BRTYPE		(19*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */
-#define X86_FEATURE_SRSO_NO		(19*32+29) /* "" CPU is not affected by SRSO */
-
 /*
  * BUG word(s)
  */
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index f7be189e9723..fb51349e45a7 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -110,7 +110,6 @@
 			 DISABLE_ENQCMD)
 #define DISABLED_MASK17	0
 #define DISABLED_MASK18	0
-#define DISABLED_MASK19	0
-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
+#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
 
 #endif /* _ASM_X86_DISABLED_FEATURES_H */
diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h
index b2d504f11937..3ff0d48469f2 100644
--- a/arch/x86/include/asm/required-features.h
+++ b/arch/x86/include/asm/required-features.h
@@ -101,7 +101,6 @@
 #define REQUIRED_MASK16	0
 #define REQUIRED_MASK17	0
 #define REQUIRED_MASK18	0
-#define REQUIRED_MASK19	0
-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 20)
+#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
 
 #endif /* _ASM_X86_REQUIRED_FEATURES_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 3ce12037320d..ba3d7c3c0eaf 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -963,9 +963,6 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
 	if (c->extended_cpuid_level >= 0x8000000a)
 		c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
 
-	if (c->extended_cpuid_level >= 0x80000021)
-		c->x86_capability[CPUID_8000_0021_EAX] = cpuid_eax(0x80000021);
-
 	init_scattered_cpuid_features(c);
 	init_speculation_control(c);
 
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index b8c775d3c7d3..07f147435919 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -49,6 +49,9 @@ static const struct cpuid_bit cpuid_bits[] = {
 	{ X86_FEATURE_VM_PAGE_FLUSH,	CPUID_EAX,  2, 0x8000001f, 0 },
 	{ X86_FEATURE_SEV_ES,		CPUID_EAX,  3, 0x8000001f, 0 },
 	{ X86_FEATURE_SME_COHERENT,	CPUID_EAX, 10, 0x8000001f, 0 },
+	{ X86_FEATURE_SBPB,		CPUID_EAX, 27, 0x80000021, 0 },
+	{ X86_FEATURE_IBPB_BRTYPE,	CPUID_EAX, 28, 0x80000021, 0 },
+	{ X86_FEATURE_SRSO_NO,		CPUID_EAX, 29, 0x80000021, 0 },
 	{ 0, 0, 0, 0, 0 }
 };
 
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 7dad1cf6ed21..a2590cde1b0a 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -559,6 +559,10 @@ void kvm_set_cpu_caps(void)
 	    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
 		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
 
+	if (cpu_feature_enabled(X86_FEATURE_SBPB))
+		kvm_cpu_cap_set(X86_FEATURE_SBPB);
+	if (cpu_feature_enabled(X86_FEATURE_IBPB_BRTYPE))
+		kvm_cpu_cap_set(X86_FEATURE_IBPB_BRTYPE);
 	if (cpu_feature_enabled(X86_FEATURE_SRSO_NO))
 		kvm_cpu_cap_set(X86_FEATURE_SRSO_NO);
 
diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index ed6aabb27424..b10e830283e3 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -14,6 +14,7 @@
  */
 enum kvm_only_cpuid_leafs {
 	CPUID_12_EAX	 = NCAPINTS,
+	CPUID_8000_0021_EAX,
 	NR_KVM_CPU_CAPS,
 
 	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
@@ -25,6 +26,11 @@ enum kvm_only_cpuid_leafs {
 #define KVM_X86_FEATURE_SGX1		KVM_X86_FEATURE(CPUID_12_EAX, 0)
 #define KVM_X86_FEATURE_SGX2		KVM_X86_FEATURE(CPUID_12_EAX, 1)
 
+/* AMD-defined SRSO vulnerability features, CPUID level 0x80000021 (EAX), word 20 */
+#define KVM_X86_FEATURE_SBPB		KVM_X86_FEATURE(CPUID_8000_0021_EAX, 27)
+#define KVM_X86_FEATURE_IBPB_BRTYPE	KVM_X86_FEATURE(CPUID_8000_0021_EAX, 28)
+#define KVM_X86_FEATURE_SRSO_NO		KVM_X86_FEATURE(CPUID_8000_0021_EAX, 29)
+
 extern u32 kvm_cpu_caps[NR_KVM_CPU_CAPS] __read_mostly;
 void kvm_set_cpu_caps(void);
 
@@ -116,6 +122,13 @@ static __always_inline u32 __feature_translate(int x86_feature)
 	else if (x86_feature == X86_FEATURE_SGX2)
 		return KVM_X86_FEATURE_SGX2;
 
+	if (x86_feature == X86_FEATURE_SBPB)
+		return KVM_X86_FEATURE_SBPB;
+	else if (x86_feature == X86_FEATURE_IBPB_BRTYPE)
+		return KVM_X86_FEATURE_IBPB_BRTYPE;
+	else if (x86_feature == X86_FEATURE_SRSO_NO)
+		return KVM_X86_FEATURE_SRSO_NO;
+
 	return x86_feature;
 }
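For context, a scattered feature bit is one without a dedicated x86_capability word: init_scattered_cpuid_features() queries the CPUID leaf listed in cpuid_bits[] and sets the synthetic bit by hand. A simplified user-space model of that lookup (editor's sketch with a stubbed CPUID; names are illustrative, not kernel API):

#include <stdio.h>
#include <stdint.h>

struct cpuid_bit {
	const char *name;
	uint8_t bit;    /* bit within the leaf's EAX */
	uint32_t leaf;  /* CPUID leaf to query */
};

/* Stub: pretend leaf 0x80000021 EAX advertises only IBPB_BRTYPE (bit 28). */
static uint32_t cpuid_eax(uint32_t leaf)
{
	return leaf == 0x80000021 ? (1u << 28) : 0;
}

int main(void)
{
	static const struct cpuid_bit bits[] = {
		{ "SBPB",        27, 0x80000021 },
		{ "IBPB_BRTYPE", 28, 0x80000021 },
		{ "SRSO_NO",     29, 0x80000021 },
	};

	for (size_t i = 0; i < sizeof(bits) / sizeof(bits[0]); i++)
		if (cpuid_eax(bits[i].leaf) & (1u << bits[i].bit))
			printf("feature %s present\n", bits[i].name);
	return 0;
}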
From: Nikolay Borisov nik.borisov@suse.com
suse inclusion
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://github.com/SUSE/kernel/commit/9aee1654a4063ca19b8f4bda22b157b465bf9b...
--------------------------------
suse-commit: fc75ce0587a9184fc130351acb6474d84203ca3e
Signed-off-by: Nikolay Borisov nik.borisov@suse.com
Conflicts:
	arch/x86/include/asm/cpufeature.h
	arch/x86/include/asm/cpufeatures.h
	tools/arch/x86/include/asm/cpufeatures.h
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
(cherry picked from commit 4c54158e06e6386723b7ab21532ae0a3ddc66a3d)
Signed-off-by: Jialin Zhang zhangjialin11@huawei.com
---
 arch/x86/include/asm/cpufeature.h        | 23 +++++++++++++++++++----
 arch/x86/include/asm/cpufeatures.h       |  4 ++--
 arch/x86/include/asm/processor.h         |  5 ++++-
 arch/x86/kernel/alternative.c            |  2 +-
 arch/x86/kernel/cpu/common.c             |  7 ++++++-
 arch/x86/kernel/cpu/mkcapflags.sh        |  2 +-
 arch/x86/kernel/cpu/proc.c               |  2 +-
 tools/arch/x86/include/asm/cpufeatures.h |  2 +-
 8 files changed, 35 insertions(+), 12 deletions(-)
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index f4cbc01c0bc4..39cbea8f06da 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -47,10 +47,14 @@ extern const char * const x86_power_flags[32];
  * In order to save room, we index into this array by doing
  * X86_BUG_<name> - NCAPINTS*32.
  */
-extern const char * const x86_bug_flags[NBUGINTS*32];
+extern const char * const x86_bug_flags[(NBUGINTS+NEXTBUGINTS)*32];
+
+#define IS_EXT_BUGBIT(bit) ((bit>>5) >= (NCAPINTS+NBUGINTS))
 
 #define test_cpu_cap(c, bit)						\
-	 arch_test_bit(bit, (unsigned long *)((c)->x86_capability))
+	(IS_EXT_BUGBIT(bit) ? arch_test_bit((bit) - ((NCAPINTS+NBUGINTS)*32),	\
+			(unsigned long *)((c)->x86_ext_capability)) :		\
+	 arch_test_bit(bit, (unsigned long *)((c)->x86_capability)))
 
 /*
  * There are 32 bits/features in each mask word.  The high bits
@@ -137,14 +141,25 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
 
 #define boot_cpu_has(bit)	cpu_has(&boot_cpu_data, bit)
 
-#define set_cpu_cap(c, bit)	set_bit(bit, (unsigned long *)((c)->x86_capability))
+#define set_cpu_cap(c, bit) do {					\
+	if (IS_EXT_BUGBIT(bit))						\
+		set_bit(bit - ((NCAPINTS+NBUGINTS))*32,			\
+			(unsigned long *)((c)->x86_ext_capability));	\
+	else								\
+		set_bit(bit, (unsigned long *)((c)->x86_capability));	\
+} while (0)
+
 
 extern void setup_clear_cpu_cap(unsigned int bit);
 extern void clear_cpu_cap(struct cpuinfo_x86 *c, unsigned int bit);
 
 #define setup_force_cpu_cap(bit) do {					\
 	set_cpu_cap(&boot_cpu_data, bit);				\
-	set_bit(bit, (unsigned long *)cpu_caps_set);			\
+	if (IS_EXT_BUGBIT(bit))						\
+		set_bit(bit - ((NCAPINTS+NBUGINTS))*32,			\
+			(unsigned long *)&cpu_caps_set[NCAPINTS+NBUGINTS]); \
+	else								\
+		set_bit(bit, (unsigned long *)cpu_caps_set);		\
 } while (0)
 
 #define setup_force_cpu_bug(bit) setup_force_cpu_cap(bit)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 4b80bb75d56d..a0621cae5ac2 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -14,8 +14,8 @@
  * Defines x86 CPU feature bits
  */
 #define NCAPINTS			19	/* N 32-bit words worth of info */
-#define NBUGINTS			2	/* N 32-bit bug flags */
-
+#define NBUGINTS			1	/* N 32-bit bug flags */
+#define NEXTBUGINTS			1	/* N 32-bit extended bug flags */
 /*
  * Note: If the comment begins with a quoted string, that string is used
  * in /proc/cpuinfo instead of the macro name.  If the string is "",
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 8b3f8b406dbf..33cf8d30e19e 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -141,6 +141,9 @@ struct cpuinfo_x86 {
 	/* Address space bits used by the cache internally */
 	u8			x86_cache_bits;
 	unsigned		initialized : 1;
+#ifndef __GENKSYMS__
+	__u32			x86_ext_capability[NEXTBUGINTS];
+#endif
 } __randomize_layout;
 
 struct extra_cpuinfo_x86 {
@@ -182,7 +185,7 @@ extern struct cpuinfo_x86	new_cpu_data;
 extern struct extra_cpuinfo_x86 extra_boot_cpu_data;
 
 extern __u32			cpu_caps_cleared[NCAPINTS + NBUGINTS];
-extern __u32			cpu_caps_set[NCAPINTS + NBUGINTS];
+extern __u32			cpu_caps_set[NCAPINTS + NBUGINTS + NEXTBUGINTS];
 
 #ifdef CONFIG_SMP
 DECLARE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 82e9fd11b364..293c8d4a4328 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -441,7 +441,7 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 		instr = (u8 *)&a->instr_offset + a->instr_offset;
 		replacement = (u8 *)&a->repl_offset + a->repl_offset;
 		BUG_ON(a->instrlen > sizeof(insn_buff));
-		BUG_ON(feature >= (NCAPINTS + NBUGINTS) * 32);
+		BUG_ON(feature >= (NCAPINTS + NBUGINTS + NEXTBUGINTS) * 32);
 
 		/*
 		 * Patch if either:
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index ba3d7c3c0eaf..9608633f49ff 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -592,7 +592,7 @@ static const char *table_lookup_model(struct cpuinfo_x86 *c)
 
 /* Aligned to unsigned long to avoid split lock in atomic bitmap ops */
 __u32 cpu_caps_cleared[NCAPINTS + NBUGINTS] __aligned(sizeof(unsigned long));
-__u32 cpu_caps_set[NCAPINTS + NBUGINTS] __aligned(sizeof(unsigned long));
+__u32 cpu_caps_set[NCAPINTS + NBUGINTS + NEXTBUGINTS] __aligned(sizeof(unsigned long));
 
 void load_percpu_segment(int cpu)
 {
@@ -855,6 +855,11 @@ static void apply_forced_caps(struct cpuinfo_x86 *c)
 		c->x86_capability[i] &= ~cpu_caps_cleared[i];
 		c->x86_capability[i] |= cpu_caps_set[i];
 	}
+
+	for (i = 0; i < NEXTBUGINTS; i++) {
+		c->x86_ext_capability[i] |= cpu_caps_set[NCAPINTS+NBUGINTS + i];
+	}
+
 }
 
 static void init_speculation_control(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/cpu/mkcapflags.sh b/arch/x86/kernel/cpu/mkcapflags.sh
index b030882d2f6f..a07865f0e622 100644
--- a/arch/x86/kernel/cpu/mkcapflags.sh
+++ b/arch/x86/kernel/cpu/mkcapflags.sh
@@ -60,7 +60,7 @@ trap 'rm "$OUT"' EXIT
 	dump_array "x86_cap_flags" "NCAPINTS*32" "X86_FEATURE_" "" $2
 	echo ""
 
-	dump_array "x86_bug_flags" "NBUGINTS*32" "X86_BUG_" "NCAPINTS*32" $2
+	dump_array "x86_bug_flags" "(NBUGINTS+NEXTBUGINTS)*32" "X86_BUG_" "NCAPINTS*32" $2
 	echo ""
 
 	echo "#ifdef CONFIG_X86_VMX_FEATURE_NAMES"
diff --git a/arch/x86/kernel/cpu/proc.c b/arch/x86/kernel/cpu/proc.c
index 7810c75abdd4..daa078c9e208 100644
--- a/arch/x86/kernel/cpu/proc.c
+++ b/arch/x86/kernel/cpu/proc.c
@@ -126,7 +126,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 #endif
 
 	seq_puts(m, "\nbugs\t\t:");
-	for (i = 0; i < 32*NBUGINTS; i++) {
+	for (i = 0; i < 32*(NBUGINTS+NEXTBUGINTS); i++) {
 		unsigned int bug_bit = 32*NCAPINTS + i;
 
 		if (cpu_has_bug(c, bug_bit) && x86_bug_flags[i])
diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
index 285e490696f4..7b66d6c2448e 100644
--- a/tools/arch/x86/include/asm/cpufeatures.h
+++ b/tools/arch/x86/include/asm/cpufeatures.h
@@ -14,7 +14,7 @@
  * Defines x86 CPU feature bits
  */
 #define NCAPINTS			19	/* N 32-bit words worth of info */
-#define NBUGINTS			2	/* N 32-bit bug flags */
+#define NBUGINTS			1	/* N 32-bit bug flags */
 
 /*
  * Note: If the comment begins with a quoted string, that string is used
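To make the new bit layout concrete: with NCAPINTS=19, NBUGINTS=1 and NEXTBUGINTS=1, bit numbers below (NCAPINTS+NBUGINTS)*32 = 640 keep living in x86_capability[], while bits 640..671 land in the kABI-invisible x86_ext_capability[]. A stand-alone demo of the IS_EXT_BUGBIT() arithmetic (editor's sketch; the example bit number is hypothetical):

#include <stdio.h>

#define NCAPINTS    19
#define NBUGINTS     1
#define NEXTBUGINTS  1
#define IS_EXT_BUGBIT(bit) (((bit) >> 5) >= (NCAPINTS + NBUGINTS))

int main(void)
{
	unsigned int bit = (NCAPINTS + NBUGINTS) * 32 + 3;  /* 643 */

	if (IS_EXT_BUGBIT(bit))
		printf("bit %u -> x86_ext_capability[%u], bit %u\n", bit,
		       (bit - (NCAPINTS + NBUGINTS) * 32) >> 5, bit & 31);
	else
		printf("bit %u -> x86_capability[%u], bit %u\n",
		       bit, bit >> 5, bit & 31);
	return 0;
}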
FeedBack: The patch(es) which you have sent to kernel@openeuler.org mailing list has been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/5314
Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/F...