Borislav Petkov (1):
  x86/cpu: Init AP exception handling from cpu_init_secondary()

Daniel Sneddon (3):
  x86/speculation: Add force option to GDS mitigation
  x86/speculation: Add Kconfig option for GDS
  KVM: Add GDS_NO support to KVM

Dave Hansen (1):
  Documentation/x86: Fix backwards on/off logic about YMM support

Juergen Gross (2):
  x86/xen: Fix secondary processors' FPU initialization
  x86/mm: fix poking_init() for Xen PV guests

Peter Zijlstra (3):
  x86/mm: Use mm_alloc() in poking_init()
  mm: Move mm_cachep initialization to mm_init()
  x86/mm: Initialize text poking earlier

Thomas Gleixner (9):
  init: Provide arch_cpu_finalize_init()
  x86/cpu: Switch to arch_cpu_finalize_init()
  ARM: cpu: Switch to arch_cpu_finalize_init()
  init: Remove check_bugs() leftovers
  init: Invoke arch_cpu_finalize_init() earlier
  init, x86: Move mem_encrypt_init() into arch_cpu_finalize_init()
  x86/fpu: Remove cpuinfo argument from init functions
  x86/fpu: Mark init functions __init
  x86/fpu: Move FPU initialization into arch_cpu_finalize_init()
 .../hw-vuln/gather_data_sampling.rst   |  18 ++-
 .../admin-guide/kernel-parameters.txt  |   8 +-
 arch/Kconfig                           |   3 +
 arch/alpha/include/asm/bugs.h          |  20 ----
 arch/arm/Kconfig                       |   1 +
 arch/arm/include/asm/bugs.h            |   4 -
 arch/arm/kernel/bugs.c                 |   3 +-
 arch/parisc/include/asm/bugs.h         |  20 ----
 arch/powerpc/include/asm/bugs.h        |  15 ---
 arch/x86/Kconfig                       |  20 ++++
 arch/x86/include/asm/bugs.h            |   2 -
 arch/x86/include/asm/fpu/api.h         |   2 +-
 arch/x86/include/asm/mem_encrypt.h     |   7 +-
 arch/x86/include/asm/processor.h       |   1 +
 arch/x86/kernel/cpu/bugs.c             |  82 ++++++--------
 arch/x86/kernel/cpu/common.c           | 104 +++++++++++++++---
 arch/x86/kernel/cpu/cpu.h              |   1 +
 arch/x86/kernel/fpu/init.c             |   8 +-
 arch/x86/kernel/smpboot.c              |   3 +-
 arch/x86/kernel/traps.c                |   4 +-
 arch/x86/kvm/x86.c                     |   7 +-
 arch/x86/mm/init.c                     |   7 +-
 arch/x86/xen/smp_pv.c                  |   2 +
 arch/xtensa/include/asm/bugs.h         |  18 ---
 include/asm-generic/bugs.h             |  11 --
 include/linux/cpu.h                    |   6 +
 include/linux/sched/task.h             |   2 +-
 init/main.c                            |  21 +---
 kernel/fork.c                          |  37 +++----
 29 files changed, 222 insertions(+), 215 deletions(-)
 delete mode 100644 arch/alpha/include/asm/bugs.h
 delete mode 100644 arch/parisc/include/asm/bugs.h
 delete mode 100644 arch/powerpc/include/asm/bugs.h
 delete mode 100644 arch/xtensa/include/asm/bugs.h
 delete mode 100644 include/asm-generic/bugs.h
--
2.25.1
From: Thomas Gleixner <tglx@linutronix.de>

mainline inclusion
from mainline-v6.5-rc1
commit 7725acaa4f0c04fbefb0e0d342635b967bb7d414
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 7725acaa4f0c04fbefb0e0d342635b967bb7d414 upstream
check_bugs() has become a dumping ground for all sorts of activities to finalize the CPU initialization before running the rest of the init code.
Most are empty, a few do actual bug checks, some do alternative patching and some cobble a CPU advertisement string together....
Aside from that, the current implementation requires duplicated function declarations and mostly empty header files for them.
Provide a new function arch_cpu_finalize_init(). Provide a generic declaration if CONFIG_ARCH_HAS_CPU_FINALIZE_INIT is selected and a stub inline otherwise.
This requires a temporary #ifdef in start_kernel() which will be removed along with check_bugs() once the architectures are converted over.
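As a rough illustration (hypothetical architecture, not part of this patch): once this infrastructure is in place, an architecture opts in by selecting ARCH_HAS_CPU_FINALIZE_INIT in its Kconfig and overriding the generic stub, along these lines:

/* hypothetical arch/<arch>/kernel/bugs.c, shown only to illustrate the hook */
#include <linux/cpu.h>
#include <linux/init.h>

void __init arch_cpu_finalize_init(void)
{
        /* whatever this architecture used to do in check_bugs() */
}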
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230613224544.957805717@linutronix.de
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/Kconfig        | 3 +++
 include/linux/cpu.h | 6 ++++++
 init/main.c         | 5 +++++
 3 files changed, 14 insertions(+)
diff --git a/arch/Kconfig b/arch/Kconfig index b0319fa3c3ee..0fc9c6d591b8 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -313,6 +313,9 @@ config ARCH_HAS_DMA_SET_UNCACHED config ARCH_HAS_DMA_CLEAR_UNCACHED bool
+config ARCH_HAS_CPU_FINALIZE_INIT + bool + # Select if arch init_task must go in the __init_task_data section config ARCH_TASK_STRUCT_ON_STACK bool diff --git a/include/linux/cpu.h b/include/linux/cpu.h index 224a3acc2b66..56e8599afd5f 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -190,6 +190,12 @@ void arch_cpu_idle_enter(void); void arch_cpu_idle_exit(void); void arch_cpu_idle_dead(void);
+#ifdef CONFIG_ARCH_HAS_CPU_FINALIZE_INIT +void arch_cpu_finalize_init(void); +#else +static inline void arch_cpu_finalize_init(void) { } +#endif + int cpu_report_state(int cpu); int cpu_check_up_prepare(int cpu); void cpu_set_state_online(int cpu); diff --git a/init/main.c b/init/main.c index 21b65f18ba83..e48df2a5f31b 100644 --- a/init/main.c +++ b/init/main.c @@ -1074,7 +1074,12 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void) delayacct_init();
poking_init(); + + arch_cpu_finalize_init(); + /* Temporary conditional until everything has been converted */ +#ifndef CONFIG_ARCH_HAS_CPU_FINALIZE_INIT check_bugs(); +#endif
acpi_subsystem_init(); arch_post_acpi_subsys_init();
From: Thomas Gleixner <tglx@linutronix.de>

mainline inclusion
from mainline-v6.5-rc1
commit 7c7077a72674402654f3291354720cd73cdf649e
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 7c7077a72674402654f3291354720cd73cdf649e upstream
check_bugs() is a dumping ground for finalizing the CPU bringup. Only parts of it have to do with actual CPU bugs.
Split it apart into arch_cpu_finalize_init() and cpu_select_mitigations().
Fixup the bogus 32bit comments while at it.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230613224545.019583869@linutronix.de
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
conflict:
  arch/x86/kernel/cpu/bugs.c
  arch/x86/kernel/cpu/common.c
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/Kconfig             |  1 +
 arch/x86/include/asm/bugs.h  |  2 --
 arch/x86/kernel/cpu/bugs.c   | 51 +---------------------------------
 arch/x86/kernel/cpu/common.c | 53 ++++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/cpu.h    |  1 +
 5 files changed, 56 insertions(+), 52 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 5bdf36c0d789..9eadce534c74 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -61,6 +61,7 @@ config X86 select ARCH_32BIT_OFF_T if X86_32 select ARCH_CLOCKSOURCE_INIT select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI + select ARCH_HAS_CPU_FINALIZE_INIT select ARCH_HAS_DEBUG_VIRTUAL select ARCH_HAS_DEBUG_VM_PGTABLE if !X86_PAE select ARCH_HAS_DEVMEM_IS_ALLOWED diff --git a/arch/x86/include/asm/bugs.h b/arch/x86/include/asm/bugs.h index 92ae28389940..f25ca2d709d4 100644 --- a/arch/x86/include/asm/bugs.h +++ b/arch/x86/include/asm/bugs.h @@ -4,8 +4,6 @@
#include <asm/processor.h>
-extern void check_bugs(void); - #if defined(CONFIG_CPU_SUP_INTEL) && defined(CONFIG_X86_32) int ppro_with_ram_bug(void); #else diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 812ac979c871..89b22439620c 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -9,7 +9,6 @@ * - Andrew D. Balsa (code cleanup). */ #include <linux/init.h> -#include <linux/utsname.h> #include <linux/cpu.h> #include <linux/module.h> #include <linux/nospec.h> @@ -27,8 +26,6 @@ #include <asm/msr.h> #include <asm/vmx.h> #include <asm/paravirt.h> -#include <asm/alternative.h> -#include <asm/set_memory.h> #include <asm/intel-family.h> #include <asm/e820/api.h> #include <asm/hypervisor.h> @@ -117,21 +114,8 @@ EXPORT_SYMBOL_GPL(mds_idle_clear); DEFINE_STATIC_KEY_FALSE(mmio_stale_data_clear); EXPORT_SYMBOL_GPL(mmio_stale_data_clear);
-void __init check_bugs(void) +void __init cpu_select_mitigations(void) { - identify_boot_cpu(); - - /* - * identify_boot_cpu() initialized SMT support information, let the - * core code know. - */ - cpu_smt_check_topology(); - - if (!IS_ENABLED(CONFIG_SMP)) { - pr_info("CPU: "); - print_cpu_info(&boot_cpu_data); - } - /* * Read the SPEC_CTRL MSR to account for reserved bits which may * have unknown values. AMD64_LS_CFG MSR is cached in the early AMD @@ -160,39 +144,6 @@ void __init check_bugs(void) md_clear_select_mitigation(); srbds_select_mitigation(); gds_select_mitigation(); - - arch_smt_update(); - -#ifdef CONFIG_X86_32 - /* - * Check whether we are able to run this kernel safely on SMP. - * - * - i386 is no longer supported. - * - In order to run on anything without a TSC, we need to be - * compiled for a i486. - */ - if (boot_cpu_data.x86 < 4) - panic("Kernel requires i486+ for 'invlpg' and other features"); - - init_utsname()->machine[1] = - '0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86); - alternative_instructions(); - - fpu__init_check_bugs(); -#else /* CONFIG_X86_64 */ - alternative_instructions(); - - /* - * Make sure the first 2MB area is not mapped by huge pages - * There are typically fixed size MTRRs in there and overlapping - * MTRRs into large pages causes slow downs. - * - * Right now we don't do that with gbpages because there seems - * very little benefit for that case. - */ - if (!direct_gbpages) - set_memory_4k((unsigned long)__va(0), 1); -#endif }
/* diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index abba125765e4..cb5dd020ee24 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -19,10 +19,13 @@ #include <linux/kprobes.h> #include <linux/kgdb.h> #include <linux/smp.h> +#include <linux/cpu.h> #include <linux/io.h> #include <linux/syscore_ops.h> #include <linux/pgtable.h> +#include <linux/utsname.h>
+#include <asm/alternative.h> #include <asm/cmdline.h> #include <asm/stackprotector.h> #include <asm/perf_event.h> @@ -59,6 +62,7 @@ #include <asm/cpu_device_id.h> #include <asm/uv/uv.h> #include <asm/sigframe.h> +#include <asm/set_memory.h>
#include "cpu.h"
@@ -2202,3 +2206,52 @@ void arch_smt_update(void) /* Check whether IPI broadcasting can be enabled */ apic_smt_update(); } + +void __init arch_cpu_finalize_init(void) +{ + identify_boot_cpu(); + + /* + * identify_boot_cpu() initialized SMT support information, let the + * core code know. + */ + cpu_smt_check_topology(); + + if (!IS_ENABLED(CONFIG_SMP)) { + pr_info("CPU: "); + print_cpu_info(&boot_cpu_data); + } + + cpu_select_mitigations(); + + arch_smt_update(); + + if (IS_ENABLED(CONFIG_X86_32)) { + /* + * Check whether this is a real i386 which is not longer + * supported and fixup the utsname. + */ + if (boot_cpu_data.x86 < 4) + panic("Kernel requires i486+ for 'invlpg' and other features"); + + init_utsname()->machine[1] = + '0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86); + } + + alternative_instructions(); + + if (IS_ENABLED(CONFIG_X86_64)) { + /* + * Make sure the first 2MB area is not mapped by huge pages + * There are typically fixed size MTRRs in there and overlapping + * MTRRs into large pages causes slow downs. + * + * Right now we don't do that with gbpages because there seems + * very little benefit for that case. + */ + if (!direct_gbpages) + set_memory_4k((unsigned long)__va(0), 1); + } else { + fpu__init_check_bugs(); + } +} diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h index 4e1da316879a..66e19380a899 100644 --- a/arch/x86/kernel/cpu/cpu.h +++ b/arch/x86/kernel/cpu/cpu.h @@ -78,6 +78,7 @@ extern void detect_ht(struct cpuinfo_x86 *c); extern void check_null_seg_clears_base(struct cpuinfo_x86 *c);
unsigned int aperfmperf_get_khz(int cpu); +void cpu_select_mitigations(void);
extern void x86_spec_ctrl_setup_ap(void); extern void update_srbds_msr(void);
From: Thomas Gleixner <tglx@linutronix.de>

mainline inclusion
from mainline-v6.5-rc1
commit ee31bb0524a2e7c99b03f50249a411cc1eaa411f
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit ee31bb0524a2e7c99b03f50249a411cc1eaa411f upstream
check_bugs() is about to be phased out. Switch over to the new arch_cpu_finalize_init() implementation.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230613224545.078124882@linutronix.de
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/arm/Kconfig            | 1 +
 arch/arm/include/asm/bugs.h | 4 ----
 arch/arm/kernel/bugs.c      | 3 ++-
 3 files changed, 3 insertions(+), 5 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 400d5373659c..370b9048ed56 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -4,6 +4,7 @@ config ARM default y select ARCH_32BIT_OFF_T select ARCH_HAS_BINFMT_FLAT + select ARCH_HAS_CPU_FINALIZE_INIT if MMU select ARCH_HAS_DEBUG_VIRTUAL if MMU select ARCH_HAS_DEVMEM_IS_ALLOWED select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h index 97a312ba0840..fe385551edec 100644 --- a/arch/arm/include/asm/bugs.h +++ b/arch/arm/include/asm/bugs.h @@ -1,7 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * arch/arm/include/asm/bugs.h - * * Copyright (C) 1995-2003 Russell King */ #ifndef __ASM_BUGS_H @@ -10,10 +8,8 @@ extern void check_writebuffer_bugs(void);
#ifdef CONFIG_MMU -extern void check_bugs(void); extern void check_other_bugs(void); #else -#define check_bugs() do { } while (0) #define check_other_bugs() do { } while (0) #endif
diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c index 14c8dbbb7d2d..087bce6ec8e9 100644 --- a/arch/arm/kernel/bugs.c +++ b/arch/arm/kernel/bugs.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 #include <linux/init.h> +#include <linux/cpu.h> #include <asm/bugs.h> #include <asm/proc-fns.h>
@@ -11,7 +12,7 @@ void check_other_bugs(void) #endif }
-void __init check_bugs(void) +void __init arch_cpu_finalize_init(void) { check_writebuffer_bugs(); check_other_bugs();
From: Thomas Gleixner <tglx@linutronix.de>

mainline inclusion
from mainline-v6.5-rc1
commit 61235b24b9cb37c13fcad5b9596d59a1afdcec30
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 61235b24b9cb37c13fcad5b9596d59a1afdcec30 upstream
Everything is converted over to arch_cpu_finalize_init(). Remove the check_bugs() leftovers including the empty stubs in asm-generic, alpha, parisc, powerpc and xtensa.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20230613224545.553215951@linutronix.de
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/alpha/include/asm/bugs.h   | 20 --------------------
 arch/parisc/include/asm/bugs.h  | 20 --------------------
 arch/powerpc/include/asm/bugs.h | 15 ---------------
 arch/xtensa/include/asm/bugs.h  | 18 ------------------
 include/asm-generic/bugs.h      | 11 -----------
 init/main.c                     |  5 -----
 6 files changed, 89 deletions(-)
 delete mode 100644 arch/alpha/include/asm/bugs.h
 delete mode 100644 arch/parisc/include/asm/bugs.h
 delete mode 100644 arch/powerpc/include/asm/bugs.h
 delete mode 100644 arch/xtensa/include/asm/bugs.h
 delete mode 100644 include/asm-generic/bugs.h
diff --git a/arch/alpha/include/asm/bugs.h b/arch/alpha/include/asm/bugs.h deleted file mode 100644 index 78030d1c7e7e..000000000000 --- a/arch/alpha/include/asm/bugs.h +++ /dev/null @@ -1,20 +0,0 @@ -/* - * include/asm-alpha/bugs.h - * - * Copyright (C) 1994 Linus Torvalds - */ - -/* - * This is included by init/main.c to check for architecture-dependent bugs. - * - * Needs: - * void check_bugs(void); - */ - -/* - * I don't know of any alpha bugs yet.. Nice chip - */ - -static void check_bugs(void) -{ -} diff --git a/arch/parisc/include/asm/bugs.h b/arch/parisc/include/asm/bugs.h deleted file mode 100644 index 0a7f9db6bd1c..000000000000 --- a/arch/parisc/include/asm/bugs.h +++ /dev/null @@ -1,20 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * include/asm-parisc/bugs.h - * - * Copyright (C) 1999 Mike Shaver - */ - -/* - * This is included by init/main.c to check for architecture-dependent bugs. - * - * Needs: - * void check_bugs(void); - */ - -#include <asm/processor.h> - -static inline void check_bugs(void) -{ -// identify_cpu(&boot_cpu_data); -} diff --git a/arch/powerpc/include/asm/bugs.h b/arch/powerpc/include/asm/bugs.h deleted file mode 100644 index 01b8f6ca4dbb..000000000000 --- a/arch/powerpc/include/asm/bugs.h +++ /dev/null @@ -1,15 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -#ifndef _ASM_POWERPC_BUGS_H -#define _ASM_POWERPC_BUGS_H - -/* - */ - -/* - * This file is included by 'init/main.c' to check for - * architecture-dependent bugs. - */ - -static inline void check_bugs(void) { } - -#endif /* _ASM_POWERPC_BUGS_H */ diff --git a/arch/xtensa/include/asm/bugs.h b/arch/xtensa/include/asm/bugs.h deleted file mode 100644 index 69b29d198249..000000000000 --- a/arch/xtensa/include/asm/bugs.h +++ /dev/null @@ -1,18 +0,0 @@ -/* - * include/asm-xtensa/bugs.h - * - * This is included by init/main.c to check for architecture-dependent bugs. - * - * Xtensa processors don't have any bugs. :) - * - * This file is subject to the terms and conditions of the GNU General - * Public License. See the file "COPYING" in the main directory of - * this archive for more details. - */ - -#ifndef _XTENSA_BUGS_H -#define _XTENSA_BUGS_H - -static void check_bugs(void) { } - -#endif /* _XTENSA_BUGS_H */ diff --git a/include/asm-generic/bugs.h b/include/asm-generic/bugs.h deleted file mode 100644 index 69021830f078..000000000000 --- a/include/asm-generic/bugs.h +++ /dev/null @@ -1,11 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef __ASM_GENERIC_BUGS_H -#define __ASM_GENERIC_BUGS_H -/* - * This file is included by 'init/main.c' to check for - * architecture-dependent bugs. - */ - -static inline void check_bugs(void) { } - -#endif /* __ASM_GENERIC_BUGS_H */ diff --git a/init/main.c b/init/main.c index e48df2a5f31b..71a984aac540 100644 --- a/init/main.c +++ b/init/main.c @@ -102,7 +102,6 @@ #include <linux/randomize_kstack.h>
#include <asm/io.h> -#include <asm/bugs.h> #include <asm/setup.h> #include <asm/sections.h> #include <asm/cacheflush.h> @@ -1076,10 +1075,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void) poking_init();
arch_cpu_finalize_init(); - /* Temporary conditional until everything has been converted */ -#ifndef CONFIG_ARCH_HAS_CPU_FINALIZE_INIT - check_bugs(); -#endif
acpi_subsystem_init(); arch_post_acpi_subsys_init();
From: Thomas Gleixner <tglx@linutronix.de>

mainline inclusion
from mainline-v6.5-rc1
commit 9df9d2f0471b4c4702670380b8d8a45b40b23a7d
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 9df9d2f0471b4c4702670380b8d8a45b40b23a7d upstream
X86 is reworking the boot process so that initializations which are not required during early boot can be moved into the late boot process and out of the fragile and restricted initial boot phase.
arch_cpu_finalize_init() is the obvious place to do such initializations, but arch_cpu_finalize_init() is invoked too late in start_kernel() e.g. for initializing the FPU completely. fork_init() requires that the FPU is initialized as the size of task_struct on X86 depends on the size of the required FPU register buffer.
Fortunately none of the init calls between calibrate_delay() and arch_cpu_finalize_init() is relevant for the functionality of arch_cpu_finalize_init().
Invoke it right after calibrate_delay() where everything which is relevant for arch_cpu_finalize_init() has been set up already.
No functional change intended.
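For readers unfamiliar with the fork_init() dependency mentioned above, the sketch below (illustration only, plain userspace C, not kernel code) shows the shape of the problem: the task_struct object ends in a variably sized FPU register buffer, so the slab cache for it cannot be created until FPU initialization has determined that size.

#include <stddef.h>
#include <stdio.h>

/* Stand-in for task_struct: fixed state plus a boot-sized FPU buffer. */
struct fake_task {
        long other_state[16];
        unsigned char fpu_regs[];       /* size decided during "FPU init" */
};

static size_t fpu_buffer_size;          /* filled in by "FPU init" */

/* Stand-in for fork_init() computing the slab object size. */
static size_t task_object_size(void)
{
        return sizeof(struct fake_task) + fpu_buffer_size;
}

int main(void)
{
        fpu_buffer_size = 2048;         /* pretend XSAVE needs 2KiB */
        printf("task object size: %zu bytes\n", task_object_size());
        return 0;
}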
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Link: https://lore.kernel.org/r/20230613224545.612182854@linutronix.de
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 init/main.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/init/main.c b/init/main.c index 71a984aac540..73f7c575990e 100644 --- a/init/main.c +++ b/init/main.c @@ -1046,6 +1046,9 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void) late_time_init(); sched_clock_init(); calibrate_delay(); + + arch_cpu_finalize_init(); + pid_idr_init(); anon_vma_init(); #ifdef CONFIG_X86 @@ -1074,8 +1077,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
poking_init();
- arch_cpu_finalize_init(); - acpi_subsystem_init(); arch_post_acpi_subsys_init(); sfi_init_late();
From: Thomas Gleixner <tglx@linutronix.de>

mainline inclusion
from mainline-v6.5-rc1
commit 439e17576eb47f26b78c5bbc72e344d4206d2327
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 439e17576eb47f26b78c5bbc72e344d4206d2327 upstream
Invoke the X86ism mem_encrypt_init() from X86 arch_cpu_finalize_init() and remove the weak fallback from the core code.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230613224545.670360645@linutronix.de
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/include/asm/mem_encrypt.h |  7 ++++---
 arch/x86/kernel/cpu/common.c       | 11 +++++++++++
 init/main.c                        | 11 -----------
 3 files changed, 15 insertions(+), 14 deletions(-)
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h index 8f3fa55d43ce..4406d58cf7cb 100644 --- a/arch/x86/include/asm/mem_encrypt.h +++ b/arch/x86/include/asm/mem_encrypt.h @@ -47,14 +47,13 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
void __init mem_encrypt_free_decrypted_mem(void);
-/* Architecture __weak replacement functions */ -void __init mem_encrypt_init(void); - void __init sev_es_init_vc_handling(void); bool sme_active(void); bool sev_active(void); bool sev_es_active(void);
+void __init mem_encrypt_init(void); + #define __bss_decrypted __section(".bss..decrypted")
#else /* !CONFIG_AMD_MEM_ENCRYPT */ @@ -86,6 +85,8 @@ early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0;
static inline void mem_encrypt_free_decrypted_mem(void) { }
+static inline void mem_encrypt_init(void) { } + #define __bss_decrypted
#endif /* CONFIG_AMD_MEM_ENCRYPT */ diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index cb5dd020ee24..27b8d0fcf09e 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -18,6 +18,7 @@ #include <linux/init.h> #include <linux/kprobes.h> #include <linux/kgdb.h> +#include <linux/mem_encrypt.h> #include <linux/smp.h> #include <linux/cpu.h> #include <linux/io.h> @@ -2254,4 +2255,14 @@ void __init arch_cpu_finalize_init(void) } else { fpu__init_check_bugs(); } + + /* + * This needs to be called before any devices perform DMA + * operations that might use the SWIOTLB bounce buffers. It will + * mark the bounce buffers as decrypted so that their usage will + * not cause "plain-text" data to be decrypted when accessed. It + * must be called after late_time_init() so that Hyper-V x86/x64 + * hypercalls work when the SWIOTLB bounce buffers are decrypted. + */ + mem_encrypt_init(); } diff --git a/init/main.c b/init/main.c index 73f7c575990e..8cccd0687a93 100644 --- a/init/main.c +++ b/init/main.c @@ -96,7 +96,6 @@ #include <linux/cache.h> #include <linux/rodata_test.h> #include <linux/jump_label.h> -#include <linux/mem_encrypt.h> #include <linux/kcsan.h> #include <linux/init_syscalls.h> #include <linux/randomize_kstack.h> @@ -777,8 +776,6 @@ void __init __weak thread_stack_cache_init(void) } #endif
-void __init __weak mem_encrypt_init(void) { } - void __init __weak poking_init(void) { }
void __init __weak pgtable_cache_init(void) { } @@ -1022,14 +1019,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void) */ locking_selftest();
- /* - * This needs to be called before any devices perform DMA - * operations that might use the SWIOTLB bounce buffers. It will - * mark the bounce buffers as decrypted so that their usage will - * not cause "plain-text" data to be decrypted when accessed. - */ - mem_encrypt_init(); - #ifdef CONFIG_BLK_DEV_INITRD if (initrd_start && !initrd_below_start_ok && page_to_pfn(virt_to_page((void *)initrd_start)) < min_low_pfn) {
From: Thomas Gleixner <tglx@linutronix.de>

mainline inclusion
from mainline-v6.5-rc1
commit 1f34bb2a24643e0087652d81078e4f616562738d
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 1f34bb2a24643e0087652d81078e4f616562738d upstream
Nothing in the call chain requires it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230613224545.783704297@linutronix.de
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
conflicts:
  arch/x86/include/asm/fpu/api.h
  arch/x86/include/asm/fpu/internal.h
  arch/x86/kernel/cpu/common.c
  arch/x86/kernel/fpu/init.c
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/include/asm/fpu/api.h | 2 +-
 arch/x86/kernel/cpu/common.c   | 2 +-
 arch/x86/kernel/fpu/init.c     | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h index 1b37f1d3ab8b..e34d4b508de0 100644 --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -88,7 +88,7 @@ extern void fpu_reset_from_exception_fixup(void);
/* Boot, hotplug and resume */ extern void fpu__init_cpu(void); -extern void fpu__init_system(struct cpuinfo_x86 *c); +extern void fpu__init_system(void); extern void fpu__init_check_bugs(void); extern void fpu__resume_cpu(void);
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 27b8d0fcf09e..352c0266e0b4 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1439,7 +1439,7 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
sld_setup(c);
- fpu__init_system(c); + fpu__init_system();
init_sigframe_size();
diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c index 851eb13edc01..5001df943828 100644 --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -71,7 +71,7 @@ static bool fpu__probe_without_cpuid(void) return fsw == 0 && (fcw & 0x103f) == 0x003f; }
-static void fpu__init_system_early_generic(struct cpuinfo_x86 *c) +static void fpu__init_system_early_generic(void) { if (!boot_cpu_has(X86_FEATURE_CPUID) && !test_bit(X86_FEATURE_FPU, (unsigned long *)cpu_caps_cleared)) { @@ -211,10 +211,10 @@ static void __init fpu__init_system_xstate_size_legacy(void) * Called on the boot CPU once per system bootup, to set up the initial * FPU state that is later cloned into all processes: */ -void __init fpu__init_system(struct cpuinfo_x86 *c) +void __init fpu__init_system(void) { fpstate_reset(¤t->thread.fpu); - fpu__init_system_early_generic(c); + fpu__init_system_early_generic();
/* * The FPU has to be operational for some of the
From: Thomas Gleixner <tglx@linutronix.de>

mainline inclusion
from mainline-v6.5-rc1
commit 1703db2b90c91b2eb2d699519fc505fe431dde0e
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 1703db2b90c91b2eb2d699519fc505fe431dde0e upstream
No point in keeping them around.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230613224545.841685728@linutronix.de
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/kernel/fpu/init.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c index 5001df943828..998a08f17e33 100644 --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -53,7 +53,7 @@ void fpu__init_cpu(void) fpu__init_cpu_xstate(); }
-static bool fpu__probe_without_cpuid(void) +static bool __init fpu__probe_without_cpuid(void) { unsigned long cr0; u16 fsw, fcw; @@ -71,7 +71,7 @@ static bool fpu__probe_without_cpuid(void) return fsw == 0 && (fcw & 0x103f) == 0x003f; }
-static void fpu__init_system_early_generic(void) +static void __init fpu__init_system_early_generic(void) { if (!boot_cpu_has(X86_FEATURE_CPUID) && !test_bit(X86_FEATURE_FPU, (unsigned long *)cpu_caps_cleared)) {
From: Borislav Petkov <bp@suse.de>

stable inclusion
from stable-v5.10.173
commit 2e3bd75f64d2f844a0f8c7b2d80eba17d1959677
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I6PMQI
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=...
--------------------------------
[ Upstream commit b1efd0ff4bd16e8bb8607ba566b03f2024a830bb ]
SEV-ES guests require a properly set up task register, with which the TSS descriptor in the GDT can be located, so that the IST-type #VC exception handler they need in order to function properly can be executed.
This setup needs to happen before attempting to load microcode in ucode_cpu_init() on secondary CPUs which can cause such #VC exceptions.
Simplify the machinery by running that exception setup from a new function cpu_init_secondary() and explicitly call cpu_init_exception_handling() for the boot CPU before cpu_init(). The latter prepares for fixing and simplifying the exception/IST setup on the boot CPU.
There should be no functional changes resulting from this patch.
[ tglx: Reworked it so cpu_init_exception_handling() stays separate ]
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Lai Jiangshan <laijs@linux.alibaba.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/87k0o6gtvu.ffs@nanos.tec.linutronix.de
Stable-dep-of: c0dd9245aa9e ("x86/microcode: Check CPU capabilities after late microcode update correctly")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/include/asm/processor.h |  1 +
 arch/x86/kernel/cpu/common.c     | 28 +++++++++++++++-------------
 arch/x86/kernel/smpboot.c        |  3 +--
 arch/x86/kernel/traps.c          |  4 +---
 4 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index 69c1993603ad..aa259a80a9f8 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -707,6 +707,7 @@ extern void load_direct_gdt(int); extern void load_fixmap_gdt(int); extern void load_percpu_segment(int); extern void cpu_init(void); +extern void cpu_init_secondary(void); extern void cpu_init_exception_handling(void); extern void cr4_init(void);
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 352c0266e0b4..9c7a376d636e 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -2087,13 +2087,12 @@ void cpu_init_exception_handling(void)
/* * cpu_init() initializes state that is per-CPU. Some data is already - * initialized (naturally) in the bootstrap process, such as the GDT - * and IDT. We reload them nevertheless, this function acts as a - * 'CPU state barrier', nothing should get across. + * initialized (naturally) in the bootstrap process, such as the GDT. We + * reload it nevertheless, this function acts as a 'CPU state barrier', + * nothing should get across. */ void cpu_init(void) { - struct tss_struct *tss = this_cpu_ptr(&cpu_tss_rw); struct task_struct *cur = current; int cpu = raw_smp_processor_id();
@@ -2106,8 +2105,6 @@ void cpu_init(void) early_cpu_to_node(cpu) != NUMA_NO_NODE) set_numa_node(early_cpu_to_node(cpu)); #endif - setup_getcpu(cpu); - pr_debug("Initializing CPU#%d\n", cpu);
if (IS_ENABLED(CONFIG_X86_64) || cpu_feature_enabled(X86_FEATURE_VME) || @@ -2119,7 +2116,6 @@ void cpu_init(void) * and set up the GDT descriptor: */ switch_to_new_gdt(cpu); - load_current_idt();
if (IS_ENABLED(CONFIG_X86_64)) { loadsegment(fs, 0); @@ -2139,12 +2135,6 @@ void cpu_init(void) initialize_tlbstate_and_flush(); enter_lazy_tlb(&init_mm, cur);
- /* Initialize the TSS. */ - tss_setup_ist(tss); - tss_setup_io_bitmap(tss); - set_tss_desc(cpu, &get_cpu_entry_area(cpu)->tss.x86_tss); - - load_TR_desc(); /* * sp0 points to the entry trampoline stack regardless of what task * is running. @@ -2166,6 +2156,18 @@ void cpu_init(void) load_fixmap_gdt(cpu); }
+#ifdef CONFIG_SMP +void cpu_init_secondary(void) +{ + /* + * Relies on the BP having set-up the IDT tables, which are loaded + * on this CPU in cpu_init_exception_handling(). + */ + cpu_init_exception_handling(); + cpu_init(); +} +#endif + /* * The microcode loader calls this upon late microcode load to recheck features, * only when microcode has been updated. Caller holds microcode_mutex and CPU diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c index 3b92be65ed68..31fe93269fba 100644 --- a/arch/x86/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -237,8 +237,7 @@ static void notrace start_secondary(void *unused) load_cr3(swapper_pg_dir); __flush_tlb_all(); #endif - cpu_init_exception_handling(); - cpu_init(); + cpu_init_secondary(); rcu_cpu_starting(raw_smp_processor_id()); x86_cpuinit.early_percpu_clock_init(); smp_callin(); diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index 035fd6b89ecc..88ed44e01eaa 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -1303,9 +1303,7 @@ void __init trap_init(void)
idt_setup_traps();
- /* - * Should be a barrier for any external CPU state: - */ + cpu_init_exception_handling(); cpu_init();
idt_setup_ist_traps();
From: Thomas Gleixner <tglx@linutronix.de>

mainline inclusion
from mainline-v6.5-rc1
commit b81fac906a8f9e682e513ddd95697ec7a20878d4
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit b81fac906a8f9e682e513ddd95697ec7a20878d4 upstream
Initializing the FPU during the early boot process is a pointless exercise. Early boot is convoluted and fragile enough.
Nothing requires that the FPU is set up early. It has to be initialized before fork_init() because the task_struct size depends on the FPU register buffer size.
Move the initialization to arch_cpu_finalize_init() which is the perfect place to do so.
No functional change.
This allows removing quite a bit of the custom early command line parsing, but that is the subject of the next installment.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230613224545.902376621@linutronix.de
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
conflicts:
  arch/x86/kernel/cpu/common.c
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/kernel/cpu/common.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 9c7a376d636e..c1e607d2098b 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1439,8 +1439,6 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
sld_setup(c);
- fpu__init_system(); - init_sigframe_size();
#ifdef CONFIG_X86_32 @@ -2148,8 +2146,6 @@ void cpu_init(void)
doublefault_init_cpu_tss();
- fpu__init_cpu(); - if (is_uv_system()) uv_cpu_init();
@@ -2165,6 +2161,7 @@ void cpu_init_secondary(void) */ cpu_init_exception_handling(); cpu_init(); + fpu__init_cpu(); } #endif
@@ -2241,6 +2238,13 @@ void __init arch_cpu_finalize_init(void) '0' + (boot_cpu_data.x86 > 6 ? 6 : boot_cpu_data.x86); }
+ /* + * Must be before alternatives because it might set or clear + * feature bits. + */ + fpu__init_system(); + fpu__init_cpu(); + alternative_instructions();
if (IS_ENABLED(CONFIG_X86_64)) {
From: Daniel Sneddon <daniel.sneddon@linux.intel.com>

mainline inclusion
from mainline-v6.5-rc6
commit 553a5c03e90a6087e88f8ff878335ef0621536fb
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 553a5c03e90a6087e88f8ff878335ef0621536fb upstream
The Gather Data Sampling (GDS) vulnerability allows malicious software to infer stale data previously stored in vector registers. This may include sensitive data such as cryptographic keys. GDS is mitigated in microcode, and systems with up-to-date microcode are protected by default. However, any affected system that is running with older microcode will still be vulnerable to GDS attacks.
Since the gather instructions used by the attacker are part of the AVX2 and AVX512 extensions, disabling these extensions prevents gather instructions from being executed, thereby mitigating GDS on the system. Disabling AVX2 alone would be sufficient, but there is no control with that granularity: clearing XCR0[2] disables AVX entirely, with no option to disable just AVX2.
Add a kernel parameter gather_data_sampling=force that will enable the microcode mitigation if available, otherwise it will disable AVX on affected systems.
This option will be ignored if cmdline mitigations=off.
This is a *big* hammer. It is known to break buggy userspace that uses incomplete, buggy AVX enumeration. Unfortunately, such userspace does exist in the wild:
https://www.mail-archive.com/bug-coreutils@gnu.org/msg33046.html
[ dhansen: add some more ominous warnings about disabling AVX ]
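To make the "proper AVX enumeration" point concrete, here is an illustrative userspace check (not part of the patch): CPUID still advertises AVX even when the kernel no longer enables the YMM state component in XCR0, so only the XGETBV step notices the difference.

#include <cpuid.h>
#include <stdbool.h>
#include <stdio.h>

static bool avx_usable(void)
{
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                return false;

        /* CPUID.1:ECX bit 27 = OSXSAVE, bit 28 = AVX */
        if (!(ecx & (1u << 27)) || !(ecx & (1u << 28)))
                return false;

        /* XGETBV(0) returns XCR0; SSE (bit 1) and YMM (bit 2) must both be set */
        asm volatile("xgetbv" : "=a"(eax), "=d"(edx) : "c"(0));
        return (eax & 0x6) == 0x6;
}

int main(void)
{
        printf("AVX usable: %s\n", avx_usable() ? "yes" : "no");
        return 0;
}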
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 .../hw-vuln/gather_data_sampling.rst   | 18 +++++++++++++----
 .../admin-guide/kernel-parameters.txt  |  8 +++++++-
 arch/x86/kernel/cpu/bugs.c             | 20 ++++++++++++++++++-
 3 files changed, 40 insertions(+), 6 deletions(-)
diff --git a/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst index 74dab6af7fe1..40b7a6260010 100644 --- a/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst +++ b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst @@ -60,14 +60,21 @@ bits: ================================ === ============================
GDS can also be mitigated on systems that don't have updated microcode by -disabling AVX. This can be done by setting "clearcpuid=avx" on the kernel -command-line. +disabling AVX. This can be done by setting gather_data_sampling="force" or +"clearcpuid=avx" on the kernel command-line. + +If used, these options will disable AVX use by turning on XSAVE YMM support. +However, the processor will still enumerate AVX support. Userspace that +does not follow proper AVX enumeration to check both AVX *and* XSAVE YMM +support will break.
Mitigation control on the kernel command line --------------------------------------------- The mitigation can be disabled by setting "gather_data_sampling=off" or -"mitigations=off" on the kernel command line. Not specifying either will -default to the mitigation being enabled. +"mitigations=off" on the kernel command line. Not specifying either will default +to the mitigation being enabled. Specifying "gather_data_sampling=force" will +use the microcode mitigation when available or disable AVX on affected systems +where the microcode hasn't been updated to include the mitigation.
GDS System Information ------------------------ @@ -83,6 +90,9 @@ The possible values contained in this file are: Vulnerable Processor vulnerable and mitigation disabled. Vulnerable: No microcode Processor vulnerable and microcode is missing mitigation. + Mitigation: AVX disabled, + no microcode Processor is vulnerable and microcode is missing + mitigation. AVX disabled as mitigation. Mitigation: Microcode Processor is vulnerable and mitigation is in effect. Mitigation: Microcode (locked) Processor is vulnerable and mitigation is in diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 6f2edd63d64b..71567aa7eca9 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -1487,7 +1487,13 @@
This issue is mitigated by default in updated microcode. The mitigation may have a performance impact but can be - disabled. + disabled. On systems without the microcode mitigation + disabling AVX serves as a mitigation. + + force: Disable AVX to mitigate systems without + microcode mitigation. No effect if the microcode + mitigation is present. Known to cause crashes in + userspace with buggy AVX enumeration.
off: Disable GDS mitigation.
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 89b22439620c..37edc17514ff 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -607,6 +607,7 @@ early_param("srbds", srbds_parse_cmdline); enum gds_mitigations { GDS_MITIGATION_OFF, GDS_MITIGATION_UCODE_NEEDED, + GDS_MITIGATION_FORCE, GDS_MITIGATION_FULL, GDS_MITIGATION_FULL_LOCKED, GDS_MITIGATION_HYPERVISOR, @@ -617,6 +618,7 @@ static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL static const char * const gds_strings[] = { [GDS_MITIGATION_OFF] = "Vulnerable", [GDS_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode", + [GDS_MITIGATION_FORCE] = "Mitigation: AVX disabled, no microcode", [GDS_MITIGATION_FULL] = "Mitigation: Microcode", [GDS_MITIGATION_FULL_LOCKED] = "Mitigation: Microcode (locked)", [GDS_MITIGATION_HYPERVISOR] = "Unknown: Dependent on hypervisor status", @@ -642,6 +644,7 @@ void update_gds_msr(void) rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl); mcu_ctrl &= ~GDS_MITG_DIS; break; + case GDS_MITIGATION_FORCE: case GDS_MITIGATION_UCODE_NEEDED: case GDS_MITIGATION_HYPERVISOR: return; @@ -676,10 +679,23 @@ static void __init gds_select_mitigation(void)
/* No microcode */ if (!(x86_read_arch_cap_msr() & ARCH_CAP_GDS_CTRL)) { - gds_mitigation = GDS_MITIGATION_UCODE_NEEDED; + if (gds_mitigation == GDS_MITIGATION_FORCE) { + /* + * This only needs to be done on the boot CPU so do it + * here rather than in update_gds_msr() + */ + setup_clear_cpu_cap(X86_FEATURE_AVX); + pr_warn("Microcode update needed! Disabling AVX as mitigation.\n"); + } else { + gds_mitigation = GDS_MITIGATION_UCODE_NEEDED; + } goto out; }
+ /* Microcode has mitigation, use it */ + if (gds_mitigation == GDS_MITIGATION_FORCE) + gds_mitigation = GDS_MITIGATION_FULL; + rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl); if (mcu_ctrl & GDS_MITG_LOCKED) { if (gds_mitigation == GDS_MITIGATION_OFF) @@ -710,6 +726,8 @@ static int __init gds_parse_cmdline(char *str)
if (!strcmp(str, "off")) gds_mitigation = GDS_MITIGATION_OFF; + else if (!strcmp(str, "force")) + gds_mitigation = GDS_MITIGATION_FORCE;
return 0; }
From: Daniel Sneddon <daniel.sneddon@linux.intel.com>

mainline inclusion
from mainline-v6.5-rc6
commit 53cf5797f114ba2bd86d23a862302119848eff19
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 53cf5797f114ba2bd86d23a862302119848eff19 upstream
Gather Data Sampling (GDS) is mitigated in microcode. However, on systems that haven't received the updated microcode, disabling AVX can act as a mitigation. Add a Kconfig option that uses the microcode mitigation if available and disables AVX otherwise. Setting this option has no effect on systems not affected by GDS. This is the equivalent of setting gather_data_sampling=force.
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/Kconfig           | 19 +++++++++++++++++++
 arch/x86/kernel/cpu/bugs.c |  4 ++++
 2 files changed, 23 insertions(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 9eadce534c74..3f561abb1108 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -2555,6 +2555,25 @@ config SLS against straight line speculation. The kernel image might be slightly larger.
+config GDS_FORCE_MITIGATION + bool "Force GDS Mitigation" + depends on CPU_SUP_INTEL + default n + help + Gather Data Sampling (GDS) is a hardware vulnerability which allows + unprivileged speculative access to data which was previously stored in + vector registers. + + This option is equivalent to setting gather_data_sampling=force on the + command line. The microcode mitigation is used if present, otherwise + AVX is disabled as a mitigation. On affected systems that are missing + the microcode any userspace code that unconditionally uses AVX will + break with this option set. + + Setting this option on systems not vulnerable to GDS has no effect. + + If in doubt, say N. + endif
config ARCH_HAS_ADD_PAGES diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 37edc17514ff..5d88db695c2a 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -613,7 +613,11 @@ enum gds_mitigations { GDS_MITIGATION_HYPERVISOR, };
+#if IS_ENABLED(CONFIG_GDS_FORCE_MITIGATION) +static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FORCE; +#else static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL; +#endif
static const char * const gds_strings[] = { [GDS_MITIGATION_OFF] = "Vulnerable",
From: Daniel Sneddon <daniel.sneddon@linux.intel.com>

mainline inclusion
from mainline-v6.5-rc6
commit 81ac7e5d741742d650b4ed6186c4826c1a0631a7
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 81ac7e5d741742d650b4ed6186c4826c1a0631a7 upstream
Gather Data Sampling (GDS) is a transient execution attack using gather instructions from the AVX2 and AVX512 extensions. This attack allows malicious code to infer data that was previously stored in vector registers. Systems that are not vulnerable to GDS will set the GDS_NO bit of the IA32_ARCH_CAPABILITIES MSR. This is useful for VM guests that may think they are on vulnerable systems that are, in fact, not affected. Guests that are running on affected hosts where the mitigation is enabled are protected as if they were running on an unaffected system.
On all hosts that are not affected or that are mitigated, set the GDS_NO bit.
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/kernel/cpu/bugs.c | 7 +++++++
 arch/x86/kvm/x86.c         | 7 ++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 5d88db695c2a..a21ace652a61 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -628,6 +628,13 @@ static const char * const gds_strings[] = { [GDS_MITIGATION_HYPERVISOR] = "Unknown: Dependent on hypervisor status", };
+bool gds_ucode_mitigated(void) +{ + return (gds_mitigation == GDS_MITIGATION_FULL || + gds_mitigation == GDS_MITIGATION_FULL_LOCKED); +} +EXPORT_SYMBOL_GPL(gds_ucode_mitigated); + void update_gds_msr(void) { u64 mcu_ctrl_after; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 2ecc6a37df14..32f98ffae8a5 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -314,6 +314,8 @@ EXPORT_SYMBOL_GPL(supported_xcr0);
static struct kmem_cache *x86_emulator_cache;
+extern bool gds_ucode_mitigated(void); + /* * When called, it means the previous get/set msr reached an invalid msr. * Return true if we want to ignore/silent this failed msr access. @@ -1459,7 +1461,7 @@ static unsigned int num_msr_based_features; ARCH_CAP_SKIP_VMENTRY_L1DFLUSH | ARCH_CAP_SSB_NO | ARCH_CAP_MDS_NO | \ ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \ ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \ - ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO) + ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | ARCH_CAP_GDS_NO)
static u64 kvm_get_arch_capabilities(void) { @@ -1516,6 +1518,9 @@ static u64 kvm_get_arch_capabilities(void) */ }
+ if (!boot_cpu_has_bug(X86_BUG_GDS) || gds_ucode_mitigated()) + data |= ARCH_CAP_GDS_NO; + return data; }
From: Juergen Gross <jgross@suse.com>

mainline inclusion
from mainline-v6.5-rc1
commit fe3e0a13e597c1c8617814bf9b42ab732db5c26e
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit fe3e0a13e597c1c8617814bf9b42ab732db5c26e upstream.
Moving the call of fpu__init_cpu() from cpu_init() to start_secondary() broke Xen PV guests, as those don't call start_secondary() for APs.
Call fpu__init_cpu() in Xen's cpu_bringup(), which is the Xen PV replacement of start_secondary().
Fixes: b81fac906a8f ("x86/fpu: Move FPU initialization into arch_cpu_finalize_init()")
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230703130032.22916-1-jgross@suse.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/xen/smp_pv.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c index 64873937cd1d..755e939db3ed 100644 --- a/arch/x86/xen/smp_pv.c +++ b/arch/x86/xen/smp_pv.c @@ -30,6 +30,7 @@ #include <asm/desc.h> #include <asm/cpu.h> #include <asm/io_apic.h> +#include <asm/fpu/internal.h>
#include <xen/interface/xen.h> #include <xen/interface/vcpu.h> @@ -63,6 +64,7 @@ static void cpu_bringup(void)
cr4_init(); cpu_init(); + fpu__init_cpu(); touch_softlockup_watchdog(); preempt_disable();
From: Juergen Gross <jgross@suse.com>

mainline inclusion
from mainline-v6.2-rc4
commit 26ce6ec364f18d2915923bc05784084e54a5c4cc
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 26ce6ec364f18d2915923bc05784084e54a5c4cc upstream.
Commit 3f4c8211d982 ("x86/mm: Use mm_alloc() in poking_init()") broke the kernel for running as Xen PV guest.
It seems as if the new address space is never activated before being used, resulting in Xen refusing to accept the new CR3 value (the PGD isn't pinned).
Fix that by adding the now missing call of paravirt_arch_dup_mmap() to poking_init(). That call was previously done by dup_mm()->dup_mmap() and it is a NOP for all cases but for Xen PV, where it is just doing the pinning of the PGD.
Fixes: 3f4c8211d982 ("x86/mm: Use mm_alloc() in poking_init()")
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230109150922.10578-1-jgross@suse.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/mm/init.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index 6d14fcaf4c30..1dc6286c7fc2 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -26,6 +26,7 @@ #include <asm/pti.h> #include <asm/text-patching.h> #include <asm/memtype.h> +#include <asm/paravirt.h>
/* * We need to define the tracepoints somewhere, and tlb.c @@ -782,6 +783,9 @@ void __init poking_init(void) poking_mm = copy_init_mm(); BUG_ON(!poking_mm);
+ /* Xen PV guests need the PGD to be pinned. */ + paravirt_arch_dup_mmap(NULL, poking_mm); + /* * Randomize the poking address, but make sure that the following page * will be mapped at the same PMD. We need 2 pages, so find space for 3,
From: Peter Zijlstra <peterz@infradead.org>

mainline inclusion
from mainline-v6.2-rc1
commit 3f4c8211d982099be693be9aa7d6fc4607dff290
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 3f4c8211d982099be693be9aa7d6fc4607dff290 upstream.
Instead of duplicating init_mm, allocate a fresh mm. The advantage is that mm_alloc() has much simpler dependencies. Additionally it makes more conceptual sense: init_mm has no (and must not have) user state to duplicate.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221025201057.816175235@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflict: arch/x86/mm/init.c
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/x86/mm/init.c         | 3 ++-
 include/linux/sched/task.h | 1 -
 kernel/fork.c              | 5 -----
 3 files changed, 2 insertions(+), 7 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index 1dc6286c7fc2..19c531c77810 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -7,6 +7,7 @@ #include <linux/swapops.h> #include <linux/kmemleak.h> #include <linux/sched/task.h> +#include <linux/sched/mm.h>
#include <asm/set_memory.h> #include <asm/e820/api.h> @@ -780,7 +781,7 @@ void __init poking_init(void) spinlock_t *ptl; pte_t *ptep;
- poking_mm = copy_init_mm(); + poking_mm = mm_alloc(); BUG_ON(!poking_mm);
/* Xen PV guests need the PGD to be pinned. */ diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h index 5629761d9790..bc76171fa6d7 100644 --- a/include/linux/sched/task.h +++ b/include/linux/sched/task.h @@ -88,7 +88,6 @@ extern void exit_itimers(struct task_struct *); extern pid_t kernel_clone(struct kernel_clone_args *kargs); struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node); struct task_struct *fork_idle(int); -struct mm_struct *copy_init_mm(void); extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags); extern long kernel_wait4(pid_t, int __user *, int, struct rusage *); int kernel_wait(pid_t pid, int *stat); diff --git a/kernel/fork.c b/kernel/fork.c index 2547c6a6e5b1..d50609760c26 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2529,11 +2529,6 @@ struct task_struct * __init fork_idle(int cpu) return task; }
-struct mm_struct *copy_init_mm(void) -{ - return dup_mm(NULL, &init_mm); -} - /* * This is like kernel_clone(), but shaved down and tailored to just * creating io_uring workers. It returns a created task, or an error pointer.
From: Peter Zijlstra <peterz@infradead.org>

mainline inclusion
from mainline-v6.2-rc1
commit af80602799681c78f14fbe20b6185a56020dedee
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit af80602799681c78f14fbe20b6185a56020dedee upstream.
In order to allow using mm_alloc() much earlier, move initializing mm_cachep into mm_init().
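For context, a simplified sketch of why this ordering matters (paraphrased from memory, so treat the details as approximate): mm_alloc() hands out objects from mm_cachep, so the cache has to exist before the first early caller, which after this series is poking_init().

/* Simplified sketch of mm_alloc(), not the literal kernel/fork.c code. */
struct mm_struct *mm_alloc(void)
{
        struct mm_struct *mm = kmem_cache_alloc(mm_cachep, GFP_KERNEL);

        if (!mm)
                return NULL;

        memset(mm, 0, sizeof(*mm));
        return mm_init(mm, current, current_user_ns());
}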
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221025201057.751153381@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
conflicts:
  kernel/fork.c
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 include/linux/sched/task.h |  1 +
 init/main.c                |  1 +
 kernel/fork.c              | 32 ++++++++++++++++++--------------
 3 files changed, 20 insertions(+), 14 deletions(-)
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h index bc76171fa6d7..41757b291204 100644 --- a/include/linux/sched/task.h +++ b/include/linux/sched/task.h @@ -62,6 +62,7 @@ extern void sched_dead(struct task_struct *p);
void __noreturn do_task_dead(void);
+extern void mm_cache_init(void); extern void proc_caches_init(void);
extern void fork_init(void); diff --git a/init/main.c b/init/main.c index 8cccd0687a93..342b6226ac4e 100644 --- a/init/main.c +++ b/init/main.c @@ -841,6 +841,7 @@ static void __init mm_init(void) init_espfix_bsp(); /* Should be run after espfix64 is set up. */ pti_init(); + mm_cache_init(); }
#ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET diff --git a/kernel/fork.c b/kernel/fork.c index d50609760c26..40c96a559a9f 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2923,10 +2923,27 @@ static void sighand_ctor(void *data) init_waitqueue_head(&sighand->signalfd_wqh); }
-void __init proc_caches_init(void) +void __init mm_cache_init(void) { unsigned int mm_size;
+ /* + * The mm_cpumask is located at the end of mm_struct, and is + * dynamically sized based on the maximum CPU number this system + * can have, taking hotplug into account (nr_cpu_ids). + */ + mm_size = MM_STRUCT_SIZE; + + mm_cachep = kmem_cache_create_usercopy("mm_struct", + mm_size, ARCH_MIN_MMSTRUCT_ALIGN, + SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT, + OFFSET_OF_MM_SAVED_AUXV, + SIZE_OF_MM_SAVED_AUXV, + NULL); +} + +void __init proc_caches_init(void) +{ sighand_cachep = kmem_cache_create("sighand_cache", sizeof(struct sighand_struct), 0, SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU| @@ -2944,19 +2961,6 @@ void __init proc_caches_init(void) SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT, NULL);
- /* - * The mm_cpumask is located at the end of mm_struct, and is - * dynamically sized based on the maximum CPU number this system - * can have, taking hotplug into account (nr_cpu_ids). - */ - mm_size = MM_STRUCT_SIZE; - - mm_cachep = kmem_cache_create_usercopy("mm_struct", - mm_size, ARCH_MIN_MMSTRUCT_ALIGN, - SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT, - OFFSET_OF_MM_SAVED_AUXV, - SIZE_OF_MM_SAVED_AUXV, - NULL); vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT); mmap_init(); nsproxy_cache_init();
From: Peter Zijlstra <peterz@infradead.org>

mainline inclusion
from mainline-v6.2-rc1
commit 5b93a83649c7cba3a15eb7e8959b250841acb1b1
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 5b93a83649c7cba3a15eb7e8959b250841acb1b1 upstream.
Move poking_init() up a bunch; specifically move it right after mm_init() which is right before ftrace_init().
This will allow simplifying ftrace text poking which currently has a bunch of exceptions for early boot.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221025201057.881703081@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 init/main.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/init/main.c b/init/main.c index 342b6226ac4e..f06fbe79a84a 100644 --- a/init/main.c +++ b/init/main.c @@ -929,7 +929,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void) sort_main_extable(); trap_init(); mm_init(); - + poking_init(); ftrace_init();
/* trace_printk can be enabled here */ @@ -1065,8 +1065,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void) taskstats_init_early(); delayacct_init();
- poking_init(); - acpi_subsystem_init(); arch_post_acpi_subsys_init(); sfi_init_late();
From: Dave Hansen <dave.hansen@linux.intel.com>

mainline inclusion
from mainline-v6.5-rc6
commit 1b0fc0345f2852ffe54fb9ae0e12e2ee69ad6a20
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7XLNT
CVE: CVE-2022-40982
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
---------------------------
commit 1b0fc0345f2852ffe54fb9ae0e12e2ee69ad6a20 upstream
These options clearly turn *off* XSAVE YMM support. Correct the typo.
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Fixes: 553a5c03e90a ("x86/speculation: Add force option to GDS mitigation")
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 Documentation/admin-guide/hw-vuln/gather_data_sampling.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst index 40b7a6260010..264bfa937f7d 100644 --- a/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst +++ b/Documentation/admin-guide/hw-vuln/gather_data_sampling.rst @@ -63,7 +63,7 @@ GDS can also be mitigated on systems that don't have updated microcode by disabling AVX. This can be done by setting gather_data_sampling="force" or "clearcpuid=avx" on the kernel command-line.
-If used, these options will disable AVX use by turning on XSAVE YMM support. +If used, these options will disable AVX use by turning off XSAVE YMM support. However, the processor will still enumerate AVX support. Userspace that does not follow proper AVX enumeration to check both AVX *and* XSAVE YMM support will break.
FeedBack: The patch(es) which you have sent to kernel@openeuler.org mailing list has been converted to a pull request successfully! Pull request link: https://gitee.com/openeuler/kernel/pulls/1956 Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/T...