From: Sean Christopherson <seanjc@google.com>
stable inclusion
from stable-v5.10.192
commit 62ebfeb0dcf7f107f0b08bc98c1f9b6619f4f212
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7RQ67
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
commit ba5ca5e5e6a1d55923e88b4a83da452166f5560e upstream.
Use LEA instead of ADD when adjusting %rsp in srso_safe_ret{,_alias}() so as to avoid clobbering flags. Drop one of the INT3 instructions to account for the LEA consuming one more byte than the ADD.
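ADD writes the arithmetic flags as a side effect of the addition, while LEA performs only the address computation. An illustrative fragment (not part of the patch itself; %_ASM_SP expands to %rsp on 64-bit):

	add  $8, %_ASM_SP		/* %rsp += 8, rewrites CF/PF/AF/ZF/SF/OF */
	lea  8(%_ASM_SP), %_ASM_SP	/* %rsp += 8, RFLAGS left untouched      */

Both forms advance the stack pointer by eight bytes; only the LEA form is safe in a path where the caller's flags are live.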
KVM's emulator makes indirect calls into a jump table of sorts, where the destination of each call is a small blob of code that performs fast emulation by executing the target instruction with fixed operands.
E.g. to emulate ADC, fastop() invokes adcb_al_dl():
  adcb_al_dl:
    <+0>: adc    %dl,%al
    <+2>: jmp    <__x86_return_thunk>
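Blobs of this shape exist for every instruction and operand-size combination the emulator fast-paths; schematically, the 16-bit variant looks like the following (label extrapolated from the naming pattern above, not quoted from the kernel source):

  adcw_ax_dx:
    adc    %dx,%ax
    jmp    <__x86_return_thunk>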
A major motivation for doing fast emulation is to leverage the CPU to handle consumption and manipulation of arithmetic flags, i.e. RFLAGS is both an input and output to the target of the call. fastop() collects the RFLAGS result by pushing RFLAGS onto the stack and popping them back into a variable (held in %rdi in this case):
asm("push %[flags]; popf; " CALL_NOSPEC " ; pushf; pop %[flags]\n"
  <+71>:  mov    0xc0(%r8),%rdx
  <+78>:  mov    0x100(%r8),%rcx
  <+85>:  push   %rdi
  <+86>:  popf
  <+87>:  call   *%rsi
  <+89>:  nop
  <+90>:  nop
  <+91>:  nop
  <+92>:  pushf
  <+93>:  pop    %rdi
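The fragile window is everything between the POPF at <+86> and the PUSHF at <+92>: the indirect call, and whatever return thunk the blob jumps through, must not touch RFLAGS. A simplified C sketch of the pattern (hypothetical names; the real code is fastop() in arch/x86/kvm/emulate.c and uses CALL_NOSPEC rather than a bare indirect call):

  static unsigned long run_fastop(void (*fop)(void), unsigned long flags)
  {
  	/*
  	 * Illustrative only: the blob called through fop touches just its
  	 * fixed operand registers and RFLAGS, so the flags loaded by POPF
  	 * must survive untouched until PUSHF reads them back.
  	 */
  	asm("push %[flags]; popf; "	/* load input arithmetic flags */
  	    "call *%[fop]; "		/* run e.g. adcb_al_dl         */
  	    "pushf; pop %[flags]"	/* collect the resulting flags */
  	    : [flags] "+r" (flags), [fop] "+S" (fop)
  	    :
  	    : "rax", "rcx", "rdx", "cc", "memory");
  	return flags;
  }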
and then propagating the arithmetic flags into the vCPU's emulator state:
ctxt->eflags = (ctxt->eflags & ~EFLAGS_MASK) | (flags & EFLAGS_MASK);
  <+64>:  and    $0xfffffffffffff72a,%r9
  <+94>:  and    $0x8d5,%edi
  <+109>: or     %rdi,%r9
  <+122>: mov    %r9,0x10(%r8)
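As a sanity check on those constants (flag bit values per the x86 architecture): EFLAGS_MASK = OF|SF|ZF|AF|PF|CF = 0x800|0x80|0x40|0x10|0x4|0x1 = 0x8d5, and its 64-bit complement is 0xfffffffffffff72a, so the AND at <+64> clears exactly the arithmetic-flag bits that the AND at <+94> keeps from the fastop result.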
The failures can be most easily reproduced by running the "emulator" test in KVM-Unit-Tests.
If you're feeling a bit of deja vu, see commit b63f20a778c8 ("x86/retpoline: Don't clobber RFLAGS during CALL_NOSPEC on i386").
In addition, this breaks booting of a clang-compiled guest on a gcc-compiled host where the host contains the %rsp-modifying SRSO mitigations.
[ bp: Massage commit message, extend, remove addresses. ]
Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Closes: https://lore.kernel.org/all/de474347-122d-54cd-eabf-9dcc95ab9eae@amd.com
Reported-by: Srikanth Aithal <sraithal@amd.com>
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/20230810013334.GA5354@dev-arch.thelio-3990X/
Link: https://lore.kernel.org/r/20230811155255.250835-1-seanjc@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
(cherry picked from commit d39ba984d80791506bee4051533fffb06cfd0d06)
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com>
---
 arch/x86/lib/retpoline.S | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index e807df6e56a7..8d82da010784 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -111,7 +111,7 @@ SYM_FUNC_END(srso_alias_untrain_ret)
 #endif
 
 SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
-	add $8, %_ASM_SP
+	lea 8(%_ASM_SP), %_ASM_SP
 	UNWIND_HINT_FUNC
 	ANNOTATE_UNRET_SAFE
 	ret
@@ -192,7 +192,7 @@ __EXPORT_THUNK(retbleed_untrain_ret)
  * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
  * above. On kernel entry, srso_untrain_ret() is executed which is a
  *
- * movabs $0xccccccc308c48348,%rax
+ * movabs $0xccccc30824648d48,%rax
  *
  * and when the return thunk executes the inner label srso_safe_ret()
  * later, it is a stack manipulation and a RET which is mispredicted and
@@ -210,11 +210,10 @@ SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	 * the stack.
 	 */
 SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
-	add $8, %_ASM_SP
+	lea 8(%_ASM_SP), %_ASM_SP
 	ret
 	int3
 	int3
-	int3
 	/* end of movabs */
 	lfence
 	call srso_safe_ret