Firstly, this series improves function return address protection for the arm64 kernel by compiling the kernel with ARMv8.3 Pointer Authentication instructions (referred to as ptrauth hereafter). This should help protect the kernel against attacks using return-oriented programming. Secondly, Suzuki's patches optimize the checking and enabling of CPU capabilities. Thirdly, three kernel-config patches enable ptrauth at compile time and fix compile warnings. Finally, the patches from Mark Rutland and Marc Zyngier are necessary simplifications and cleanups.
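To make the mechanism concrete before the patches: the compiler emits paciasp/autiasp around each non-leaf function so the return address is signed on entry and authenticated before ret, with the Pointer Authentication Code (PAC) stored in the otherwise-unused upper bits of the pointer. Below is a minimal userspace sketch of what "stripping" a PAC from a kernel address means; the 48-bit VA assumption and all names are illustrative, not the kernel's actual code.

/* Hedged sketch: where the PAC lives in a kernel pointer, and what
 * stripping it means. Assumes 48-bit virtual addresses; the kernel
 * computes the real mask from the configured VA size.
 */
#include <stdint.h>
#include <stdio.h>

#define VA_BITS		48
#define PAC_MASK	(~0ULL << VA_BITS)	/* bits 63:48 carry the PAC */

static uint64_t strip_pac_kernel(uint64_t ptr)
{
	/* Kernel pointers are all-ones in the top bits; restore them. */
	return ptr | PAC_MASK;
}

int main(void)
{
	uint64_t lr = 0x2a1effff12345678ULL;	/* a PAC-signed link register */

	printf("stripped: 0x%016llx\n",
	       (unsigned long long)strip_pac_kernel(lr));
	return 0;
}

An attacker overwriting a saved return address on the stack must therefore also forge the PAC bits, which depend on a per-task secret key.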
Amit Daniel Kachhap (9):
  arm64: cpufeature: Fix meta-capability cpufeature check
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: cpufeature: Move cpu capability helpers inside C file
  arm64: initialize ptrauth keys for kernel booting task
  arm64: mask PAC bits of __builtin_return_address
  arm64: __show_regs: strip PAC from lr in printk
  arm64: suspend: restore the kernel ptrauth keys
  lkdtm: arm64: test kernel pointer authentication
  arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch

Catalin Marinas (1):
  kbuild: Add support for 'as-instr' to be used in Kconfig files

Kristina Martsenko (7):
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: rename ptrauth key structures to be user-specific
  arm64: install user ptrauth keys at kernel exit time
  arm64: cpufeature: handle conflicts based on capability
  arm64: enable ptrauth earlier
  arm64: initialize and switch ptrauth kernel keys
  arm64: compile the kernel with ptrauth return address signing

Marc Zyngier (1):
  arm64: Drop unnecessary include from asm/smp.h

Mark Rutland (3):
  arm64: unwind: strip PAC from kernel addresses
  arm64: remove ptrauth_keys_install_kernel sync arg
  arm64: simplify ptrauth initialization

Nick Desaulniers (1):
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH

Suzuki K Poulose (4):
  arm64: capabilities: Speed up capability lookup
  arm64: capabilities: Optimize this_cpu_has_cap
  arm64: capabilities: Use linear array for detection and verification
  arm64: capabilities: Batch cpu_enable callbacks

Vincenzo Frascino (1):
  kconfig: Add support for 'as-option'

Zheng Zengkai (1):
  config: enable ARM64 pointer authentication configs by default
 arch/arm64/Kconfig                        |  38 +++-
 arch/arm64/Makefile                       |  11 +
 arch/arm64/configs/hulk_defconfig         |   7 +
 arch/arm64/configs/openeuler_defconfig    |   7 +
 arch/arm64/include/asm/asm_pointer_auth.h |  98 ++++++++
 arch/arm64/include/asm/compiler.h         |  19 ++
 arch/arm64/include/asm/cpucaps.h          |   4 +-
 arch/arm64/include/asm/cpufeature.h       |  42 ++--
 arch/arm64/include/asm/pointer_auth.h     |  50 ++---
 arch/arm64/include/asm/processor.h        |   3 +-
 arch/arm64/include/asm/stackprotector.h   |   5 +
 arch/arm64/kernel/asm-offsets.c           |  13 ++
 arch/arm64/kernel/cpufeature.c            | 258 ++++++++++++++--------
 arch/arm64/kernel/entry.S                 |   6 +
 arch/arm64/kernel/head.S                  |  10 +
 arch/arm64/kernel/pointer_auth.c          |   7 +-
 arch/arm64/kernel/process.c               |   5 +-
 arch/arm64/kernel/ptrace.c                |  16 +-
 arch/arm64/kernel/sleep.S                 |   1 +
 arch/arm64/kernel/stacktrace.c            |   5 +-
 arch/arm64/mm/proc.S                      |  27 ++-
 drivers/misc/lkdtm/bugs.c                 |  36 +++
 drivers/misc/lkdtm/core.c                 |   1 +
 drivers/misc/lkdtm/lkdtm.h                |   1 +
 include/linux/stackprotector.h            |   2 +-
 scripts/Kconfig.include                   |  10 +
 26 files changed, 513 insertions(+), 169 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h
From: Suzuki K Poulose <suzuki.poulose@arm.com>
mainline inclusion
from v5.0-rc1
commit 82a3a21b236f
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
We maintain two separate tables of capabilities, errata and features, which decide the system capabilities. We iterate over each of these tables for various operations (e.g., detection, verification, etc.). We do not have a way to map a system "capability" to its entry (i.e., cap -> struct arm64_cpu_capabilities), which is needed for this_cpu_has_cap(). So we iterate over the table entry by entry to find a given capability and then perform the operation. Also, this prevents us from optimizing the way we "enable" the capabilities on the CPUs, where we now issue a stop_machine() for each available capability.

One solution is to merge the two tables into a single table, sorted by the capability. But this has the following disadvantages:
 - We lose the "classification" of an errata vs. feature
 - It is quite easy to make a mistake when adding an entry, unless
   we sort the table at runtime.

So we maintain a list of pointers to the capability entries, sorted by the "cap number", in a separate array, initialized at boot time. The only restriction is that we can have one "entry" per capability. While at it, remove the duplicate declaration of the arm64_errata table.
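A stand-alone sketch of the data structure this patch introduces (the types and hooks are simplified stand-ins for the kernel's arm64_cpu_capabilities machinery; names only mirror the real ones):

/* One pointer slot per capability number, filled once at boot by
 * walking the features and errata tables.
 */
#define NCAPS 64			/* stand-in for ARM64_NCAPS */

struct cap_entry {
	int capability;			/* unique capability number */
	int (*matches)(const struct cap_entry *cap, int scope);
};

static const struct cap_entry *cap_ptrs[NCAPS];

static void init_indirect_list(const struct cap_entry *caps)
{
	for (; caps->matches; caps++)	/* tables are {}-terminated */
		cap_ptrs[caps->capability] = caps;	/* one entry per cap */
}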
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/kernel/cpufeature.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 8df58d0859e81..98c32a9035d45 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -50,6 +50,7 @@ unsigned int compat_elf_hwcap2 __read_mostly;
 DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 EXPORT_SYMBOL(cpu_hwcaps);
+static struct arm64_cpu_capabilities const __ro_after_init *cpu_hwcaps_ptrs[ARM64_NCAPS];
/* Need also bit for ARM64_CB_PATCH */ DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE); @@ -540,6 +541,29 @@ static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new) }
extern const struct arm64_cpu_capabilities arm64_errata[]; +static const struct arm64_cpu_capabilities arm64_features[]; + +static void __init +init_cpu_hwcaps_indirect_list_from_array(const struct arm64_cpu_capabilities *caps) +{ + for (; caps->matches; caps++) { + if (WARN(caps->capability >= ARM64_NCAPS, + "Invalid capability %d\n", caps->capability)) + continue; + if (WARN(cpu_hwcaps_ptrs[caps->capability], + "Duplicate entry for capability %d\n", + caps->capability)) + continue; + cpu_hwcaps_ptrs[caps->capability] = caps; + } +} + +static void __init init_cpu_hwcaps_indirect_list(void) +{ + init_cpu_hwcaps_indirect_list_from_array(arm64_features); + init_cpu_hwcaps_indirect_list_from_array(arm64_errata); +} + static void __init setup_boot_cpu_capabilities(void);
void __init init_cpu_features(struct cpuinfo_arm64 *info) @@ -585,6 +609,12 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info) sve_init_vq_map(); }
+ /* + * Initialize the indirect array of CPU hwcaps capabilities pointers + * before we handle the boot CPU below. + */ + init_cpu_hwcaps_indirect_list(); + /* * Detect and enable early CPU capabilities based on the boot CPU, * after we have initialised the CPU feature infrastructure. @@ -2047,8 +2077,6 @@ static void __init mark_const_caps_ready(void) static_branch_enable(&arm64_const_caps_ready); }
-extern const struct arm64_cpu_capabilities arm64_errata[]; - bool this_cpu_has_cap(unsigned int cap) { return (__this_cpu_has_cap(arm64_features, cap) ||
From: Suzuki K Poulose <suzuki.poulose@arm.com>
mainline inclusion
from v5.0-rc1
commit f7bfc14a0819
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
Make use of the sorted capability list to access the capability entry in this_cpu_has_cap() to avoid iterating over the two tables.
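In the simplified terms of the sketch shown under the previous patch, the lookup collapses to a bounds check plus one array index; the real function additionally refuses to run preemptibly, and SCOPE_LOCAL_CPU is assumed defined here:

#include <stdbool.h>

/* Sketch only: cap_ptrs[]/cap_entry/NCAPS as in the earlier sketch. */
bool this_cpu_has_cap_sketch(unsigned int n)
{
	const struct cap_entry *cap = (n < NCAPS) ? cap_ptrs[n] : NULL;

	return cap ? cap->matches(cap, SCOPE_LOCAL_CPU) : false;
}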
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/kernel/cpufeature.c | 31 +++++++++----------------------
 1 file changed, 9 insertions(+), 22 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 98c32a9035d45..30dd8f97bd9f4 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1806,25 +1806,6 @@ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps) cap_set_elf_hwcap(hwcaps); }
-/* - * Check if the current CPU has a given feature capability. - * Should be called from non-preemptible context. - */ -static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array, - unsigned int cap) -{ - const struct arm64_cpu_capabilities *caps; - - if (WARN_ON(preemptible())) - return false; - - for (caps = cap_array; caps->matches; caps++) - if (caps->capability == cap) - return caps->matches(caps, SCOPE_LOCAL_CPU); - - return false; -} - static void __update_cpu_capabilities(const struct arm64_cpu_capabilities *caps, u16 scope_mask, const char *info) { @@ -2077,10 +2058,16 @@ static void __init mark_const_caps_ready(void) static_branch_enable(&arm64_const_caps_ready); }
-bool this_cpu_has_cap(unsigned int cap) +bool this_cpu_has_cap(unsigned int n) { - return (__this_cpu_has_cap(arm64_features, cap) || - __this_cpu_has_cap(arm64_errata, cap)); + if (!WARN_ON(preemptible()) && n < ARM64_NCAPS) { + const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n]; + + if (cap) + return cap->matches(cap, SCOPE_LOCAL_CPU); + } + + return false; }
static void __init setup_system_capabilities(void)
From: Suzuki K Poulose <suzuki.poulose@arm.com>
mainline inclusion
from v5.0-rc1
commit 606f8e7b27bf
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
Use the sorted list of capability entries for the detection and verification.
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/kernel/cpufeature.c | 42 ++++++++++++++--------------------
 1 file changed, 17 insertions(+), 25 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 30dd8f97bd9f4..664e72caef7fd 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1806,17 +1806,21 @@ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps) cap_set_elf_hwcap(hwcaps); }
-static void __update_cpu_capabilities(const struct arm64_cpu_capabilities *caps, - u16 scope_mask, const char *info) +static void update_cpu_capabilities(u16 scope_mask) { + int i; + const struct arm64_cpu_capabilities *caps; + scope_mask &= ARM64_CPUCAP_SCOPE_MASK; - for (; caps->matches; caps++) { - if (!(caps->type & scope_mask) || + for (i = 0; i < ARM64_NCAPS; i++) { + caps = cpu_hwcaps_ptrs[i]; + if (!caps || !(caps->type & scope_mask) || + cpus_have_cap(caps->capability) || !caps->matches(caps, cpucap_default_scope(caps))) continue;
- if (!cpus_have_cap(caps->capability) && caps->desc) - pr_info("%s %s\n", info, caps->desc); + if (caps->desc) + pr_info("detected: %s\n", caps->desc); cpus_set_cap(caps->capability);
if ((scope_mask & SCOPE_BOOT_CPU) && (caps->type & SCOPE_BOOT_CPU)) @@ -1824,13 +1828,6 @@ static void __update_cpu_capabilities(const struct arm64_cpu_capabilities *caps, } }
-static void update_cpu_capabilities(u16 scope_mask) -{ - __update_cpu_capabilities(arm64_errata, scope_mask, - "enabling workaround for"); - __update_cpu_capabilities(arm64_features, scope_mask, "detected:"); -} - static int __enable_cpu_capability(void *arg) { const struct arm64_cpu_capabilities *cap = arg; @@ -1894,16 +1891,17 @@ static void __init enable_cpu_capabilities(u16 scope_mask) * * Returns "false" on conflicts. */ -static bool -__verify_local_cpu_caps(const struct arm64_cpu_capabilities *caps, - u16 scope_mask) +static bool verify_local_cpu_caps(u16 scope_mask) { + int i; bool cpu_has_cap, system_has_cap; + const struct arm64_cpu_capabilities *caps;
scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
- for (; caps->matches; caps++) { - if (!(caps->type & scope_mask)) + for (i = 0; i < ARM64_NCAPS; i++) { + caps = cpu_hwcaps_ptrs[i]; + if (!caps || !(caps->type & scope_mask)) continue;
cpu_has_cap = caps->matches(caps, SCOPE_LOCAL_CPU); @@ -1934,7 +1932,7 @@ __verify_local_cpu_caps(const struct arm64_cpu_capabilities *caps, } }
- if (caps->matches) { + if (i < ARM64_NCAPS) { pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n", smp_processor_id(), caps->capability, caps->desc, system_has_cap, cpu_has_cap); @@ -1944,12 +1942,6 @@ __verify_local_cpu_caps(const struct arm64_cpu_capabilities *caps, return true; }
-static bool verify_local_cpu_caps(u16 scope_mask) -{ - return __verify_local_cpu_caps(arm64_errata, scope_mask) && - __verify_local_cpu_caps(arm64_features, scope_mask); -} - /* * Check for CPU features that are used in early boot * based on the Boot CPU value.
From: Suzuki K Poulose <suzuki.poulose@arm.com>
mainline inclusion
from v5.0-rc1
commit 0b587c84e421
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
We currently issue a stop_machine() call for each available capability in order to enable it on all the CPUs available at boot time. Instead, we can batch the cpu_enable() callbacks into a single stop_machine() call to save some time.
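In outline (an illustrative sketch extending the earlier one with type and cpu_enable fields; stop_machine() and the flag value stand in for their kernel counterparts), the change replaces one rendezvous per capability with a single callback that enables everything:

#define SCOPE_BOOT_CPU	(1U << 0)	/* illustrative flag value */

struct cap {
	unsigned int type;
	void (*cpu_enable)(const struct cap *cap);
};

static const struct cap *cap_ptrs[64];	/* as built by the earlier patch */

/* Before: one stop_machine(enable_one, cap, online_mask) per capability.
 * After: a single stop_machine(enable_all_non_boot_caps, NULL, online_mask).
 */
static int enable_all_non_boot_caps(void *unused)
{
	(void)unused;
	for (int i = 0; i < 64; i++) {
		const struct cap *c = cap_ptrs[i];

		if (c && c->cpu_enable && !(c->type & SCOPE_BOOT_CPU))
			c->cpu_enable(c);	/* runs once on every online CPU */
	}
	return 0;
}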
Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Conflicts:
	arch/arm64/include/asm/cpufeature.h
[Zheng Zengkai: adjust context]

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/include/asm/cpufeature.h |  3 ++
 arch/arm64/kernel/cpufeature.c      | 70 ++++++++++++++++++-----------
 2 files changed, 47 insertions(+), 26 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 69e0d834ac9d6..93cf6982bb376 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -398,6 +398,9 @@ extern struct static_key_false arm64_const_caps_ready; #define ARM64_NPATCHABLE (ARM64_NCAPS + 1) extern DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
+#define for_each_available_cap(cap)	\
+	for_each_set_bit(cap, cpu_hwcaps, ARM64_NCAPS)
+
 bool this_cpu_has_cap(unsigned int cap);
static inline bool cpu_have_feature(unsigned int num) diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 664e72caef7fd..aa4e4621d7c27 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1828,11 +1828,27 @@ static void update_cpu_capabilities(u16 scope_mask) } }
-static int __enable_cpu_capability(void *arg) +/* + * Enable all the available capabilities on this CPU. The capabilities + * with BOOT_CPU scope are handled separately and hence skipped here. + */ +static int cpu_enable_non_boot_scope_capabilities(void *__unused) { - const struct arm64_cpu_capabilities *cap = arg; + int i; + u16 non_boot_scope = SCOPE_ALL & ~SCOPE_BOOT_CPU; + + for_each_available_cap(i) { + const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[i]; + + if (WARN_ON(!cap)) + continue; + + if (!(cap->type & non_boot_scope)) + continue;
- cap->cpu_enable(cap); + if (cap->cpu_enable) + cap->cpu_enable(cap); + } return 0; }
@@ -1840,21 +1856,29 @@ static int __enable_cpu_capability(void *arg) * Run through the enabled capabilities and enable() it on all active * CPUs */ -static void __init -__enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps, - u16 scope_mask) +static void __init enable_cpu_capabilities(u16 scope_mask) { + int i; + const struct arm64_cpu_capabilities *caps; + bool boot_scope; + scope_mask &= ARM64_CPUCAP_SCOPE_MASK; - for (; caps->matches; caps++) { - unsigned int num = caps->capability; + boot_scope = !!(scope_mask & SCOPE_BOOT_CPU); + + for (i = 0; i < ARM64_NCAPS; i++) { + unsigned int num;
- if (!(caps->type & scope_mask) || !cpus_have_cap(num)) + caps = cpu_hwcaps_ptrs[i]; + if (!caps || !(caps->type & scope_mask)) + continue; + num = caps->capability; + if (!cpus_have_cap(num)) continue;
/* Ensure cpus_have_const_cap(num) works */ static_branch_enable(&cpu_hwcap_keys[num]);
- if (caps->cpu_enable) { + if (boot_scope && caps->cpu_enable) /* * Capabilities with SCOPE_BOOT_CPU scope are finalised * before any secondary CPU boots. Thus, each secondary @@ -1863,25 +1887,19 @@ __enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps, * the boot CPU, for which the capability must be * enabled here. This approach avoids costly * stop_machine() calls for this case. - * - * Otherwise, use stop_machine() as it schedules the - * work allowing us to modify PSTATE, instead of - * on_each_cpu() which uses an IPI, giving us a PSTATE - * that disappears when we return. */ - if (scope_mask & SCOPE_BOOT_CPU) - caps->cpu_enable(caps); - else - stop_machine(__enable_cpu_capability, - (void *)caps, cpu_online_mask); - } + caps->cpu_enable(caps); } -}
-static void __init enable_cpu_capabilities(u16 scope_mask) -{ - __enable_cpu_capabilities(arm64_errata, scope_mask); - __enable_cpu_capabilities(arm64_features, scope_mask); + /* + * For all non-boot scope capabilities, use stop_machine() + * as it schedules the work allowing us to modify PSTATE, + * instead of on_each_cpu() which uses an IPI, giving us a + * PSTATE that disappears when we return. + */ + if (!boot_scope) + stop_machine(cpu_enable_non_boot_scope_capabilities, + NULL, cpu_online_mask); }
/*
From: Amit Daniel Kachhap <amit.kachhap@arm.com>
mainline inclusion
from v5.7-rc1
commit 3ff047f6971d
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
Some existing and future meta cpucaps need the presence of individual cpucaps to be checked in their match handlers. Currently the individual cpucaps are checked via an array-based flag, which introduces a dependency on the order of the array entries. This limitation exists only for system-scope cpufeatures.

This patch introduces an internal helper function (__system_matches_cap) to invoke the match handler with system scope. This helper must only be used during a narrow window when:
 - the system-wide safe registers have been set up from all the SMP CPUs, and
 - the SYSTEM_FEATURE cpu_hwcaps may not yet have been set.

Normal users should use the existing cpus_have_{const_}cap() global functions.
Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Conflicts:
	arch/arm64/kernel/cpufeature.c
[Zheng Zengkai: fix conflicts caused by skipping the following commit.
 aec0bff757 arm64: HWCAP: encapsulate elf_hwcap]

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/kernel/cpufeature.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index aa4e4621d7c27..604cac7803113 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -121,6 +121,8 @@ static bool __maybe_unused cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused);
+static bool __system_matches_cap(unsigned int n); + /* * NOTE: Any changes to the visibility of features should be kept in * sync with the documentation of the CPU feature register ABI. @@ -2080,6 +2082,23 @@ bool this_cpu_has_cap(unsigned int n) return false; }
+/* + * This helper function is used in a narrow window when, + * - The system wide safe registers are set with all the SMP CPUs and, + * - The SYSTEM_FEATURE cpu_hwcaps may not have been set. + * In all other cases cpus_have_{const_}cap() should be used. + */ +static bool __system_matches_cap(unsigned int n) +{ + if (n < ARM64_NCAPS) { + const struct arm64_cpu_capabilities *cap = cpu_hwcaps_ptrs[n]; + + if (cap) + return cap->matches(cap, SCOPE_SYSTEM); + } + return false; +} + static void __init setup_system_capabilities(void) { /* @@ -2124,7 +2143,7 @@ void __init setup_cpu_features(void) static bool __maybe_unused cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused) { - return (cpus_have_const_cap(ARM64_HAS_PAN) && !cpus_have_const_cap(ARM64_HAS_UAO)); + return (__system_matches_cap(ARM64_HAS_PAN) && !__system_matches_cap(ARM64_HAS_UAO)); }
/*
From: Kristina Martsenko <kristina.martsenko@arm.com>
mainline inclusion
from v5.7-rc1
commit cfef06bd0686
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
To enable pointer auth for the kernel, we're going to need to check for the presence of address auth and generic auth using alternative_if. We currently have two cpucaps for each, but alternative_if needs to check a single cpucap. So define meta-capabilities that are present when either of the current two capabilities is present.
Leave the existing four cpucaps in place, as they are still needed to check for mismatched systems where one CPU has the architected algorithm but another has the IMP DEF algorithm.
Note, the meta-capabilities were present before but were removed in commit a56005d32105 ("arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4") and commit 1e013d06120c ("arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches"), as they were not needed then. Note, unlike before, the current patch checks the cpucap values directly, instead of reading the CPU ID register value.
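For context, alternative_if patches kernel text on exactly one cpucap, which is why the meta-capability is needed; later patches in this series use it along these lines (illustrative placement):

alternative_if ARM64_HAS_ADDRESS_AUTH
	// code that is only safe once address auth is enabled
alternative_else_nop_endif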
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: commit message and macro rebase, use __system_matches_cap]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Conflicts:
	arch/arm64/include/asm/cpucaps.h
	arch/arm64/kernel/cpufeature.c
[Zheng Zengkai: adjust context and fix conflicts caused by skipping the
 following commits.
 3e6c69a arm64: Add initial support for E0PD
 1a50ec0 arm64: Implement archrandom.h for ARMv8.5-RNG
 e85d68f arm64: Rename WORKAROUND_1165522 to SPECULATIVE_AT_VHE
 db0d46a arm64: Rename WORKAROUND_1319367 to SPECULATIVE_AT_NVHE
 9405447 arm64: Avoid Cavium TX2 erratum 219 when switching TTBR
 a532508 arm64: Handle erratum 1418040 as a superset of erratum 1188873
 d3ec3a0 arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set]

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/include/asm/cpucaps.h    |  4 +++-
 arch/arm64/include/asm/cpufeature.h |  6 ++----
 arch/arm64/kernel/cpufeature.c      | 25 ++++++++++++++++++++++++-
 3 files changed, 29 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h index a02b97d1ca39c..1f9fc9c6d516f 100644 --- a/arch/arm64/include/asm/cpucaps.h +++ b/arch/arm64/include/asm/cpucaps.h @@ -63,7 +63,9 @@ #define ARM64_HAS_ADDRESS_AUTH_IMP_DEF 42 #define ARM64_HAS_GENERIC_AUTH_ARCH 43 #define ARM64_HAS_GENERIC_AUTH_IMP_DEF 44 +#define ARM64_HAS_ADDRESS_AUTH 45 +#define ARM64_HAS_GENERIC_AUTH 46
-#define ARM64_NCAPS				45
+#define ARM64_NCAPS				47
#endif /* __ASM_CPUCAPS_H */ diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 93cf6982bb376..99d0c345624f6 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -566,15 +566,13 @@ static inline bool system_has_prio_mask_debugging(void) static inline bool system_supports_address_auth(void) { return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) && - (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) || - cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF)); + cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH); }
static inline bool system_supports_generic_auth(void) { return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) && - (cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH) || - cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF)); + cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH); }
#define ARM64_SSBD_UNKNOWN -1 diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 604cac7803113..38ebbd5bc3a3a 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1320,6 +1320,20 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap) sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | SCTLR_ELx_ENDA | SCTLR_ELx_ENDB); } + +static bool has_address_auth(const struct arm64_cpu_capabilities *entry, + int __unused) +{ + return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) || + __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF); +} + +static bool has_generic_auth(const struct arm64_cpu_capabilities *entry, + int __unused) +{ + return __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) || + __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF); +} #endif /* CONFIG_ARM64_PTR_AUTH */
static const struct arm64_cpu_capabilities arm64_features[] = { @@ -1583,7 +1597,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .field_pos = ID_AA64ISAR1_APA_SHIFT, .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED, .matches = has_cpuid_feature, - .cpu_enable = cpu_enable_address_auth, }, { .desc = "Address authentication (IMP DEF algorithm)", @@ -1594,6 +1607,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .field_pos = ID_AA64ISAR1_API_SHIFT, .min_field_value = ID_AA64ISAR1_API_IMP_DEF, .matches = has_cpuid_feature, + }, + { + .capability = ARM64_HAS_ADDRESS_AUTH, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .matches = has_address_auth, .cpu_enable = cpu_enable_address_auth, }, { @@ -1616,6 +1634,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .min_field_value = ID_AA64ISAR1_GPI_IMP_DEF, .matches = has_cpuid_feature, }, + { + .capability = ARM64_HAS_GENERIC_AUTH, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .matches = has_generic_auth, + }, #endif /* CONFIG_ARM64_PTR_AUTH */ {}, };
From: Kristina Martsenko <kristina.martsenko@arm.com>
mainline inclusion
from v5.7-rc1
commit 91a1b6ccff3
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
We currently enable ptrauth for userspace, but do not use it within the kernel. We're going to enable it for the kernel, and will need to manage a separate set of ptrauth keys for the kernel.
We currently keep all 5 keys in struct ptrauth_keys. However, as the kernel will only need to use 1 key, it is a bit wasteful to allocate a whole ptrauth_keys struct for every thread.
Therefore, a subsequent patch will define a separate struct, with only 1 key, for the kernel. In preparation for that, rename the existing struct (and associated macros and functions) to reflect that they are specific to userspace.
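For reference, the kernel-side struct that the subsequent patch introduces is roughly this shape (a sketch; it holds only the instruction A key, since the kernel signs return addresses with APIA alone):

struct ptrauth_keys_kernel {
	struct ptrauth_key apia;	/* the only key the kernel needs */
};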
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-positioned the patch to reduce the diff]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/include/asm/pointer_auth.h | 12 ++++++------
 arch/arm64/include/asm/processor.h    |  2 +-
 arch/arm64/kernel/pointer_auth.c      |  8 ++++----
 arch/arm64/kernel/ptrace.c            | 16 ++++++++--------
 4 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h index 2a42d40fafa1c..3d46c07596b4b 100644 --- a/arch/arm64/include/asm/pointer_auth.h +++ b/arch/arm64/include/asm/pointer_auth.h @@ -22,7 +22,7 @@ struct ptrauth_key { * We give each process its own keys, which are shared by all threads. The keys * are inherited upon fork(), and reinitialised upon exec*(). */ -struct ptrauth_keys { +struct ptrauth_keys_user { struct ptrauth_key apia; struct ptrauth_key apib; struct ptrauth_key apda; @@ -30,7 +30,7 @@ struct ptrauth_keys { struct ptrauth_key apga; };
-static inline void ptrauth_keys_init(struct ptrauth_keys *keys) +static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys) { if (system_supports_address_auth()) { get_random_bytes(&keys->apia, sizeof(keys->apia)); @@ -50,7 +50,7 @@ do { \ write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \ } while (0)
-static inline void ptrauth_keys_switch(struct ptrauth_keys *keys) +static inline void ptrauth_keys_switch_user(struct ptrauth_keys_user *keys) { if (system_supports_address_auth()) { __ptrauth_key_install(APIA, keys->apia); @@ -80,12 +80,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr) #define ptrauth_thread_init_user(tsk) \ do { \ struct task_struct *__ptiu_tsk = (tsk); \ - ptrauth_keys_init(&__ptiu_tsk->thread.keys_user); \ - ptrauth_keys_switch(&__ptiu_tsk->thread.keys_user); \ + ptrauth_keys_init_user(&__ptiu_tsk->thread.keys_user); \ + ptrauth_keys_switch_user(&__ptiu_tsk->thread.keys_user); \ } while (0)
#define ptrauth_thread_switch(tsk) \ - ptrauth_keys_switch(&(tsk)->thread.keys_user) + ptrauth_keys_switch_user(&(tsk)->thread.keys_user)
#else /* CONFIG_ARM64_PTR_AUTH */ #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL) diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 926981d4e0489..e25269c47d70c 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -144,7 +144,7 @@ struct thread_struct { unsigned long fault_code; /* ESR_EL1 value */ struct debug_info debug; /* debugging */ #ifdef CONFIG_ARM64_PTR_AUTH - struct ptrauth_keys keys_user; + struct ptrauth_keys_user keys_user; #endif };
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c index c507b584259d0..af5a638207f8d 100644 --- a/arch/arm64/kernel/pointer_auth.c +++ b/arch/arm64/kernel/pointer_auth.c @@ -9,7 +9,7 @@
int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) { - struct ptrauth_keys *keys = &tsk->thread.keys_user; + struct ptrauth_keys_user *keys = &tsk->thread.keys_user; unsigned long addr_key_mask = PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY; unsigned long key_mask = addr_key_mask | PR_PAC_APGAKEY; @@ -18,8 +18,8 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) return -EINVAL;
if (!arg) { - ptrauth_keys_init(keys); - ptrauth_keys_switch(keys); + ptrauth_keys_init_user(keys); + ptrauth_keys_switch_user(keys); return 0; }
@@ -41,7 +41,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) if (arg & PR_PAC_APGAKEY) get_random_bytes(&keys->apga, sizeof(keys->apga));
- ptrauth_keys_switch(keys); + ptrauth_keys_switch_user(keys);
return 0; } diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 52b11e80ca698..9d1fe72485a1d 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -1012,7 +1012,7 @@ static struct ptrauth_key pac_key_from_user(__uint128_t ukey) }
static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys, - const struct ptrauth_keys *keys) + const struct ptrauth_keys_user *keys) { ukeys->apiakey = pac_key_to_user(&keys->apia); ukeys->apibkey = pac_key_to_user(&keys->apib); @@ -1020,7 +1020,7 @@ static void pac_address_keys_to_user(struct user_pac_address_keys *ukeys, ukeys->apdbkey = pac_key_to_user(&keys->apdb); }
-static void pac_address_keys_from_user(struct ptrauth_keys *keys, +static void pac_address_keys_from_user(struct ptrauth_keys_user *keys, const struct user_pac_address_keys *ukeys) { keys->apia = pac_key_from_user(ukeys->apiakey); @@ -1034,7 +1034,7 @@ static int pac_address_keys_get(struct task_struct *target, unsigned int pos, unsigned int count, void *kbuf, void __user *ubuf) { - struct ptrauth_keys *keys = &target->thread.keys_user; + struct ptrauth_keys_user *keys = &target->thread.keys_user; struct user_pac_address_keys user_keys;
if (!system_supports_address_auth()) @@ -1051,7 +1051,7 @@ static int pac_address_keys_set(struct task_struct *target, unsigned int pos, unsigned int count, const void *kbuf, const void __user *ubuf) { - struct ptrauth_keys *keys = &target->thread.keys_user; + struct ptrauth_keys_user *keys = &target->thread.keys_user; struct user_pac_address_keys user_keys; int ret;
@@ -1069,12 +1069,12 @@ static int pac_address_keys_set(struct task_struct *target, }
static void pac_generic_keys_to_user(struct user_pac_generic_keys *ukeys, - const struct ptrauth_keys *keys) + const struct ptrauth_keys_user *keys) { ukeys->apgakey = pac_key_to_user(&keys->apga); }
-static void pac_generic_keys_from_user(struct ptrauth_keys *keys, +static void pac_generic_keys_from_user(struct ptrauth_keys_user *keys, const struct user_pac_generic_keys *ukeys) { keys->apga = pac_key_from_user(ukeys->apgakey); @@ -1085,7 +1085,7 @@ static int pac_generic_keys_get(struct task_struct *target, unsigned int pos, unsigned int count, void *kbuf, void __user *ubuf) { - struct ptrauth_keys *keys = &target->thread.keys_user; + struct ptrauth_keys_user *keys = &target->thread.keys_user; struct user_pac_generic_keys user_keys;
if (!system_supports_generic_auth()) @@ -1102,7 +1102,7 @@ static int pac_generic_keys_set(struct task_struct *target, unsigned int pos, unsigned int count, const void *kbuf, const void __user *ubuf) { - struct ptrauth_keys *keys = &target->thread.keys_user; + struct ptrauth_keys_user *keys = &target->thread.keys_user; struct user_pac_generic_keys user_keys; int ret;
From: Kristina Martsenko <kristina.martsenko@arm.com>
mainline inclusion
from v5.7-rc1
commit be1298425665
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
As we're going to enable pointer auth within the kernel and use a different APIAKey for the kernel itself, move the user APIAKey switch to EL0 exception return.
The other 4 keys could remain switched during task switch, but are also moved to keep things consistent.
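A note on the mechanics: since the install now happens in assembly, the C struct offsets must be exported through asm-offsets.c; each DEFINE() in the diff below becomes a plain '#define NAME value' in the generated asm-offsets.h, which is what lets the entry code do loads like the following (register choice illustrative, mirroring the new macro):

	mov	x1, #THREAD_KEYS_USER		// offset of thread.keys_user
	add	x1, x0, x1			// x0 = current task_struct
	ldp	x2, x3, [x1, #PTRAUTH_USER_KEY_APIA]
	msr_s	SYS_APIAKEYLO_EL1, x2
	msr_s	SYS_APIAKEYHI_EL1, x3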
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: James Morse <james.morse@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: commit msg, re-positioned the patch, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Conflicts:
	arch/arm64/kernel/entry.S
[Zheng Zengkai: fix conflicts caused by skipping the following commits.
 a532508 arm64: Handle erratum 1418040 as a superset of erratum 1188873
 1cf24a2 arm64/module: deal with ambiguity in PRELxx relocation ranges]

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/include/asm/asm_pointer_auth.h | 49 +++++++++++++++++++++++
 arch/arm64/include/asm/pointer_auth.h     | 23 +----------
 arch/arm64/kernel/asm-offsets.c           | 11 +++++
 arch/arm64/kernel/entry.S                 |  3 ++
 arch/arm64/kernel/pointer_auth.c          |  3 --
 arch/arm64/kernel/process.c               |  1 -
 6 files changed, 64 insertions(+), 26 deletions(-)
 create mode 100644 arch/arm64/include/asm/asm_pointer_auth.h
diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h new file mode 100644 index 0000000000000..3482348ec07ff --- /dev/null +++ b/arch/arm64/include/asm/asm_pointer_auth.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_ASM_POINTER_AUTH_H +#define __ASM_ASM_POINTER_AUTH_H + +#include <asm/alternative.h> +#include <asm/asm-offsets.h> +#include <asm/cpufeature.h> +#include <asm/sysreg.h> + +#ifdef CONFIG_ARM64_PTR_AUTH +/* + * thread.keys_user.ap* as offset exceeds the #imm offset range + * so use the base value of ldp as thread.keys_user and offset as + * thread.keys_user.ap*. + */ + .macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3 + mov \tmp1, #THREAD_KEYS_USER + add \tmp1, \tsk, \tmp1 +alternative_if_not ARM64_HAS_ADDRESS_AUTH + b .Laddr_auth_skip_@ +alternative_else_nop_endif + ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA] + msr_s SYS_APIAKEYLO_EL1, \tmp2 + msr_s SYS_APIAKEYHI_EL1, \tmp3 + ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB] + msr_s SYS_APIBKEYLO_EL1, \tmp2 + msr_s SYS_APIBKEYHI_EL1, \tmp3 + ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA] + msr_s SYS_APDAKEYLO_EL1, \tmp2 + msr_s SYS_APDAKEYHI_EL1, \tmp3 + ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB] + msr_s SYS_APDBKEYLO_EL1, \tmp2 + msr_s SYS_APDBKEYHI_EL1, \tmp3 +.Laddr_auth_skip_@: +alternative_if ARM64_HAS_GENERIC_AUTH + ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA] + msr_s SYS_APGAKEYLO_EL1, \tmp2 + msr_s SYS_APGAKEYHI_EL1, \tmp3 +alternative_else_nop_endif + .endm + +#else /* CONFIG_ARM64_PTR_AUTH */ + + .macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3 + .endm + +#endif /* CONFIG_ARM64_PTR_AUTH */ + +#endif /* __ASM_ASM_POINTER_AUTH_H */ diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h index 3d46c07596b4b..404276bebbbec 100644 --- a/arch/arm64/include/asm/pointer_auth.h +++ b/arch/arm64/include/asm/pointer_auth.h @@ -50,19 +50,6 @@ do { \ write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \ } while (0)
-static inline void ptrauth_keys_switch_user(struct ptrauth_keys_user *keys) -{ - if (system_supports_address_auth()) { - __ptrauth_key_install(APIA, keys->apia); - __ptrauth_key_install(APIB, keys->apib); - __ptrauth_key_install(APDA, keys->apda); - __ptrauth_key_install(APDB, keys->apdb); - } - - if (system_supports_generic_auth()) - __ptrauth_key_install(APGA, keys->apga); -} - extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
/* @@ -78,20 +65,12 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr) }
#define ptrauth_thread_init_user(tsk) \ -do { \ - struct task_struct *__ptiu_tsk = (tsk); \ - ptrauth_keys_init_user(&__ptiu_tsk->thread.keys_user); \ - ptrauth_keys_switch_user(&__ptiu_tsk->thread.keys_user); \ -} while (0) - -#define ptrauth_thread_switch(tsk) \ - ptrauth_keys_switch_user(&(tsk)->thread.keys_user) + ptrauth_keys_init_user(&(tsk)->thread.keys_user)
#else /* CONFIG_ARM64_PTR_AUTH */ #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL) #define ptrauth_strip_insn_pac(lr) (lr) #define ptrauth_thread_init_user(tsk) -#define ptrauth_thread_switch(tsk) #endif /* CONFIG_ARM64_PTR_AUTH */
#endif /* __ASM_POINTER_AUTH_H */ diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index f7776492164be..c6df491da71bd 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -48,6 +48,9 @@ int main(void) DEFINE(TSK_STACK, offsetof(struct task_struct, stack)); BLANK(); DEFINE(THREAD_CPU_CONTEXT, offsetof(struct task_struct, thread.cpu_context)); +#ifdef CONFIG_ARM64_PTR_AUTH + DEFINE(THREAD_KEYS_USER, offsetof(struct task_struct, thread.keys_user)); +#endif BLANK(); DEFINE(S_X0, offsetof(struct pt_regs, regs[0])); DEFINE(S_X1, offsetof(struct pt_regs, regs[1])); @@ -171,6 +174,14 @@ int main(void) #ifdef CONFIG_ARM_SDE_INTERFACE DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs)); DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority)); +#endif +#ifdef CONFIG_ARM64_PTR_AUTH + DEFINE(PTRAUTH_USER_KEY_APIA, offsetof(struct ptrauth_keys_user, apia)); + DEFINE(PTRAUTH_USER_KEY_APIB, offsetof(struct ptrauth_keys_user, apib)); + DEFINE(PTRAUTH_USER_KEY_APDA, offsetof(struct ptrauth_keys_user, apda)); + DEFINE(PTRAUTH_USER_KEY_APDB, offsetof(struct ptrauth_keys_user, apdb)); + DEFINE(PTRAUTH_USER_KEY_APGA, offsetof(struct ptrauth_keys_user, apga)); + BLANK(); #endif return 0; } diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 54092ae9968ad..65457545b3b61 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -25,6 +25,7 @@ #include <asm/alternative.h> #include <asm/assembler.h> #include <asm/asm-offsets.h> +#include <asm/asm_pointer_auth.h> #include <asm/cpufeature.h> #include <asm/errno.h> #include <asm/esr.h> @@ -339,6 +340,8 @@ alternative_if ARM64_WORKAROUND_845719 alternative_else_nop_endif #endif 3: + ptrauth_keys_install_user tsk, x0, x1, x2 + apply_ssbd 0, x0, x1 .endif
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c index af5a638207f8d..1e77736a4f66e 100644 --- a/arch/arm64/kernel/pointer_auth.c +++ b/arch/arm64/kernel/pointer_auth.c @@ -19,7 +19,6 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
if (!arg) { ptrauth_keys_init_user(keys); - ptrauth_keys_switch_user(keys); return 0; }
@@ -41,7 +40,5 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg) if (arg & PR_PAC_APGAKEY) get_random_bytes(&keys->apga, sizeof(keys->apga));
- ptrauth_keys_switch_user(keys); - return 0; } diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index acf9a7b6ecec7..878e5bf8a43e8 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -516,7 +516,6 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, contextidr_thread_switch(next); entry_task_switch(next); uao_thread_switch(next); - ptrauth_thread_switch(next); ssbs_thread_switch(next);
/*
From: Amit Daniel Kachhap <amit.kachhap@arm.com>
mainline inclusion
from v5.7-rc1
commit df3551011b81
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
This patch allows __cpu_setup to be invoked with one of these flags, ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_SECONDARY or ARM64_CPU_RUNTIME. This is required as some cpufeatures need different handling during different scenarios.
The input parameter in x0 is preserved until the end of the function so that it can be used there.

There should be no functional change from this patch; it is groundwork for the subsequent ptrauth patch, which makes use of the flag. Some upcoming arm cpufeatures can also utilize these flags.
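Concretely, each boot path now announces itself in x0 before calling __cpu_setup, and because x0 is preserved, later patches can branch on it inside the function, along these lines (simplified sketch, not verbatim from the series):

	mov	x0, #ARM64_CPU_BOOT_PRIMARY
	bl	__cpu_setup			// x0 kept intact to the end

	/* inside __cpu_setup, a follow-up patch can then test: */
	cmp	x0, #ARM64_CPU_BOOT_SECONDARY
	b.eq	1f				// e.g. skip primary-only setup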
Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/include/asm/smp.h |  8 ++++++++
 arch/arm64/kernel/head.S     |  2 ++
 arch/arm64/kernel/sleep.S    |  2 ++
 arch/arm64/mm/proc.S         | 26 +++++++++++++++-----------
 4 files changed, 27 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h index 403c22f62b580..fc93ea608d1af 100644 --- a/arch/arm64/include/asm/smp.h +++ b/arch/arm64/include/asm/smp.h @@ -27,6 +27,14 @@ /* Fatal system error detected by secondary CPU, crash the system */ #define CPU_PANIC_KERNEL (3)
+/* Possible options for __cpu_setup */
+/* Option to setup primary cpu */
+#define ARM64_CPU_BOOT_PRIMARY		(1)
+/* Option to setup secondary cpus */
+#define ARM64_CPU_BOOT_SECONDARY	(2)
+/* Option to setup cpus for different cpu run time services */
+#define ARM64_CPU_RUNTIME		(3)
+
 #ifndef __ASSEMBLY__
#include <asm/percpu.h> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index d22ab8d9edc95..c8a64e5bb9c11 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -126,6 +126,7 @@ ENTRY(stext) * On return, the CPU will be ready for the MMU to be turned on and * the TCR will have been set. */ + mov x0, #ARM64_CPU_BOOT_PRIMARY bl __cpu_setup // initialise processor b __primary_switch ENDPROC(stext) @@ -704,6 +705,7 @@ secondary_startup: * Common entry point for secondary CPUs. */ bl __cpu_secondary_check52bitva + mov x0, #ARM64_CPU_BOOT_SECONDARY bl __cpu_setup // initialise processor bl __enable_mmu ldr x8, =__secondary_switched diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S index bebec8ef9372a..8eee57d97281a 100644 --- a/arch/arm64/kernel/sleep.S +++ b/arch/arm64/kernel/sleep.S @@ -3,6 +3,7 @@ #include <linux/linkage.h> #include <asm/asm-offsets.h> #include <asm/assembler.h> +#include <asm/smp.h>
.text /* @@ -99,6 +100,7 @@ ENDPROC(__cpu_suspend_enter) .pushsection ".idmap.text", "awx" ENTRY(cpu_resume) bl el2_setup // if in EL2 drop to EL1 cleanly + mov x0, #ARM64_CPU_RUNTIME bl __cpu_setup /* enable the MMU early - so we can access sleep_save_stash by va */ bl __enable_mmu diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 9ff213bb584e4..c52f89762e199 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -396,21 +396,25 @@ ENDPROC(idmap_kpti_install_ng_mappings) /* * __cpu_setup * - * Initialise the processor for turning the MMU on. Return in x0 the - * value of the SCTLR_EL1 register. + * Initialise the processor for turning the MMU on. + * + * Input: + * x0 with a flag ARM64_CPU_BOOT_PRIMARY/ARM64_CPU_BOOT_SECONDARY/ARM64_CPU_RUNTIME. + * Output: + * Return in x0 the value of the SCTLR_EL1 register. */ .pushsection ".idmap.text", "awx" ENTRY(__cpu_setup) tlbi vmalle1 // Invalidate local TLB dsb nsh
- mov x0, #3 << 20 - msr cpacr_el1, x0 // Enable FP/ASIMD - mov x0, #1 << 12 // Reset mdscr_el1 and disable - msr mdscr_el1, x0 // access to the DCC from EL0 + mov x1, #3 << 20 + msr cpacr_el1, x1 // Enable FP/ASIMD + mov x1, #1 << 12 // Reset mdscr_el1 and disable + msr mdscr_el1, x1 // access to the DCC from EL0 isb // Unmask debug exceptions now, enable_dbg // since this is per-cpu - reset_pmuserenr_el0 x0 // Disable PMU access from EL0 + reset_pmuserenr_el0 x1 // Disable PMU access from EL0 /* * Memory region attributes for LPAE: * @@ -430,10 +434,6 @@ ENTRY(__cpu_setup) MAIR(0xff, MT_NORMAL) | \ MAIR(0xbb, MT_NORMAL_WT) msr mair_el1, x5 - /* - * Prepare SCTLR - */ - mov_q x0, SCTLR_EL1_SET /* * Set/prepare TCR and TTBR. We use 512GB (39-bit) address range for * both user and kernel. @@ -460,5 +460,9 @@ ENTRY(__cpu_setup) 1: #endif /* CONFIG_ARM64_HW_AFDBM */ msr tcr_el1, x10 + /* + * Prepare SCTLR + */ + mov_q x0, SCTLR_EL1_SET ret // return to head.S ENDPROC(__cpu_setup)
From: Amit Daniel Kachhap <amit.kachhap@arm.com>
mainline inclusion
from v5.7-rc1
commit 8c176e1625a6
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
These helpers are used only by functions inside cpufeature.c, so it makes sense to move them from cpufeature.h to cpufeature.c, as they are not expected to be used globally.

This change helps reduce the header file size and allows future cpu capability types to be added without confusion: a cpu capability type macro alone is sufficient to expose those capabilities globally.
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Conflicts:
	arch/arm64/kernel/cpufeature.c
[Zheng Zengkai: fix conflicts caused by skipping the following commits.
 b90d2b22a arm64: cpufeature: Add cpufeature for IRQ priority masking
 bc3c03ccb arm64: Enable the support of pseudo-NMIs
 3e6c69a05 arm64: Add initial support for E0PD]

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/include/asm/cpufeature.h | 12 ------------
 arch/arm64/kernel/cpufeature.c      | 13 +++++++++++++
 2 files changed, 13 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 99d0c345624f6..2c7fb746f2c86 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -345,18 +345,6 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap) return cap->type & ARM64_CPUCAP_SCOPE_MASK; }
-static inline bool -cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap) -{ - return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU); -} - -static inline bool -cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap) -{ - return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU); -} - /* * Generic helper for handling capabilties with multiple (match,enable) pairs * of call backs, sharing the same capability bit. diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 38ebbd5bc3a3a..ebe1f4a9825d9 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1336,6 +1336,19 @@ static bool has_generic_auth(const struct arm64_cpu_capabilities *entry, } #endif /* CONFIG_ARM64_PTR_AUTH */
+/* Internal helper functions to match cpu capability type */ +static bool +cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap) +{ + return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU); +} + +static bool +cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap) +{ + return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU); +} + static const struct arm64_cpu_capabilities arm64_features[] = { { .desc = "GIC system register CPU interface",
From: Kristina Martsenko <kristina.martsenko@arm.com>
mainline inclusion
from v5.7-rc1
commit deeaac5175a5
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
Each system capability can be of either boot, local, or system scope, depending on when the state of the capability is finalized. When we detect a conflict on a late CPU, we either offline the CPU or panic the system. We currently always panic if the conflict is caused by a boot scope capability, and offline the CPU if the conflict is caused by a local or system scope capability.
We're going to want to add a new capability (for pointer authentication) which needs to be boot scope but doesn't need to panic the system when a conflict is detected. So add a new flag to specify whether the capability requires the system to panic or not. Current boot scope capabilities are updated to set the flag, so there should be no functional change as a result of this patch.
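The verification path then reduces to a single decision point on a conflicting late CPU, roughly (mirroring the diff below):

	if (cpucap_panic_on_conflict(caps))
		cpu_panic_kernel();	/* boot-critical cap: cannot continue */
	else
		cpu_die_early();	/* otherwise just park this late CPU */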
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/include/asm/cpufeature.h | 12 ++++++++++--
 arch/arm64/kernel/cpufeature.c      | 29 +++++++++++++++--------------
 2 files changed, 25 insertions(+), 16 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 2c7fb746f2c86..a845640c41b4c 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -220,6 +220,10 @@ int arm64_cpu_ftr_regs_traverse(int (*op)(u32, u64, void *), void *argp); * In some non-typical cases either both (a) and (b), or neither, * should be permitted. This can be described by including neither * or both flags in the capability's type field. + * + * In case of a conflict, the CPU is prevented from booting. If the + * ARM64_CPUCAP_PANIC_ON_CONFLICT flag is specified for the capability, + * then a kernel panic is triggered. */
@@ -252,6 +256,8 @@ int arm64_cpu_ftr_regs_traverse(int (*op)(u32, u64, void *), void *argp); #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU ((u16)BIT(4)) /* Is it safe for a late CPU to miss this capability when system has it */ #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU ((u16)BIT(5)) +/* Panic when a conflict is detected */ +#define ARM64_CPUCAP_PANIC_ON_CONFLICT ((u16)BIT(6))
/* * CPU errata workarounds that need to be enabled at boot time if one or @@ -291,9 +297,11 @@ int arm64_cpu_ftr_regs_traverse(int (*op)(u32, u64, void *), void *argp);
/* * CPU feature used early in the boot based on the boot CPU. All secondary - * CPUs must match the state of the capability as detected by the boot CPU. + * CPUs must match the state of the capability as detected by the boot CPU. In + * case of a conflict, a kernel panic is triggered. */ -#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU +#define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE \ + (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
struct arm64_cpu_capabilities { const char *desc; diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index ebe1f4a9825d9..66bb94dca935d 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1349,6 +1349,12 @@ cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap) return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU); }
+static bool +cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap) +{ + return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT); +} + static const struct arm64_cpu_capabilities arm64_features[] = { { .desc = "GIC system register CPU interface", @@ -1944,10 +1950,8 @@ static void __init enable_cpu_capabilities(u16 scope_mask) * Run through the list of capabilities to check for conflicts. * If the system has already detected a capability, take necessary * action on this CPU. - * - * Returns "false" on conflicts. */ -static bool verify_local_cpu_caps(u16 scope_mask) +static void verify_local_cpu_caps(u16 scope_mask) { int i; bool cpu_has_cap, system_has_cap; @@ -1992,10 +1996,12 @@ static bool verify_local_cpu_caps(u16 scope_mask) pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n", smp_processor_id(), caps->capability, caps->desc, system_has_cap, cpu_has_cap); - return false; - }
- return true; + if (cpucap_panic_on_conflict(caps)) + cpu_panic_kernel(); + else + cpu_die_early(); + } }
/* @@ -2005,12 +2011,8 @@ static bool verify_local_cpu_caps(u16 scope_mask) static void check_early_cpu_features(void) { verify_cpu_asid_bits(); - /* - * Early features are used by the kernel already. If there - * is a conflict, we cannot proceed further. - */ - if (!verify_local_cpu_caps(SCOPE_BOOT_CPU)) - cpu_panic_kernel(); + + verify_local_cpu_caps(SCOPE_BOOT_CPU); }
static void @@ -2058,8 +2060,7 @@ static void verify_local_cpu_capabilities(void) * check_early_cpu_features(), as they need to be verified * on all secondary CPUs. */ - if (!verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU)) - cpu_die_early(); + verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU);
verify_local_elf_hwcaps(arm64_elf_hwcaps);
From: Kristina Martsenko <kristina.martsenko@arm.com>
mainline inclusion
from v5.7-rc1
commit 6982934e19f8
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
When the kernel is compiled with pointer auth instructions, the boot CPU needs to start using address auth very early, so change the cpucap to account for this.
Pointer auth must be enabled before we call C functions, because it is not possible to enter a function with pointer auth disabled and exit it with pointer auth enabled. Note, mismatches between architected and IMPDEF algorithms will still be caught by the cpufeature framework (the separate *_ARCH and *_IMP_DEF cpucaps).
Note the change in behavior: if the boot CPU has address auth and a late CPU does not, then the late CPU is parked by the cpufeature framework. This is possible because the kernel will only contain NOP-space instructions for PAC, so such a mismatched late CPU will silently ignore those instructions in C functions. Also, if the boot CPU does not have address auth and a late CPU does, then the late CPU will still boot, but with the ptrauth feature disabled.
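The "NOP space" point works because the PAC instructions the compiler emits live in the HINT encoding space, which pre-PAC CPUs execute as NOPs; a typical signed prologue/epilogue looks like:

	paciasp				// HINT #25: sign LR, or NOP without PAC
	stp	x29, x30, [sp, #-16]!
	...
	ldp	x29, x30, [sp], #16
	autiasp				// HINT #29: authenticate LR, or NOP without PAC
	ret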
Leave generic authentication as a "system scope" cpucap for now, since initially the kernel will only use address authentication.
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
[Amit: Re-worked ptrauth setup logic, comments]
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Conflicts:
	arch/arm64/Kconfig
[Zheng Zengkai: fix conflicts caused by skipping the following commits.
 384b40caa8af KVM: arm/arm64: Context-switch ptrauth registers
 3e6c69a058de arm64: Add initial support for E0PD
 1a50ec0b3b2e arm64: Implement archrandom.h for ARMv8.5-RNG]

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/Kconfig                  |  6 ++++++
 arch/arm64/include/asm/cpufeature.h |  9 +++++++++
 arch/arm64/kernel/cpufeature.c      | 13 +++----------
 arch/arm64/mm/proc.S                | 31 +++++++++++++++++++++++++++++
 4 files changed, 49 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 6fb5cc4fd6079..0a79eb17c15c8 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1252,6 +1252,12 @@ config ARM64_PTR_AUTH hardware it will not be advertised to userspace nor will it be enabled.
+ If the feature is present on the boot CPU but not on a late CPU, then + the late CPU will be parked. Also, if the boot CPU does not have + address auth and the late CPU has then the late CPU will still boot + but with the feature disabled. On such a system, this option should + not be selected. + endmenu
config ARM64_SVE diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index a845640c41b4c..1145c68a27c87 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -303,6 +303,15 @@ int arm64_cpu_ftr_regs_traverse(int (*op)(u32, u64, void *), void *argp); #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE \ (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PANIC_ON_CONFLICT)
+/* + * CPU feature used early in the boot based on the boot CPU. It is safe for a + * late CPU to have this feature even though the boot CPU hasn't enabled it, + * although the feature will not be used by Linux in this case. If the boot CPU + * has enabled this feature already, then every late CPU must have it. + */ +#define ARM64_CPUCAP_BOOT_CPU_FEATURE \ + (ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU) + struct arm64_cpu_capabilities { const char *desc; u16 capability; diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 66bb94dca935d..f05253ca79c5b 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1315,12 +1315,6 @@ static bool can_clearpage_use_stnp(const struct arm64_cpu_capabilities *entry, }
#ifdef CONFIG_ARM64_PTR_AUTH -static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap) -{ - sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | - SCTLR_ELx_ENDA | SCTLR_ELx_ENDB); -} - static bool has_address_auth(const struct arm64_cpu_capabilities *entry, int __unused) { @@ -1610,7 +1604,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { { .desc = "Address authentication (architected algorithm)", .capability = ARM64_HAS_ADDRESS_AUTH_ARCH, - .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .type = ARM64_CPUCAP_BOOT_CPU_FEATURE, .sys_reg = SYS_ID_AA64ISAR1_EL1, .sign = FTR_UNSIGNED, .field_pos = ID_AA64ISAR1_APA_SHIFT, @@ -1620,7 +1614,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { { .desc = "Address authentication (IMP DEF algorithm)", .capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF, - .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .type = ARM64_CPUCAP_BOOT_CPU_FEATURE, .sys_reg = SYS_ID_AA64ISAR1_EL1, .sign = FTR_UNSIGNED, .field_pos = ID_AA64ISAR1_API_SHIFT, @@ -1629,9 +1623,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = { }, { .capability = ARM64_HAS_ADDRESS_AUTH, - .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .type = ARM64_CPUCAP_BOOT_CPU_FEATURE, .matches = has_address_auth, - .cpu_enable = cpu_enable_address_auth, }, { .desc = "Generic authentication (architected algorithm)", diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index c52f89762e199..5a7c3a2b8e27c 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -27,6 +27,7 @@ #include <asm/pgtable-hwdef.h> #include <asm/cpufeature.h> #include <asm/alternative.h> +#include <asm/smp.h>
#ifdef CONFIG_ARM64_64K_PAGES #define TCR_TG_FLAGS TCR_TG0_64K | TCR_TG1_64K @@ -460,9 +461,39 @@ ENTRY(__cpu_setup) 1: #endif /* CONFIG_ARM64_HW_AFDBM */ msr tcr_el1, x10 + mov x1, x0 /* * Prepare SCTLR */ mov_q x0, SCTLR_EL1_SET + +#ifdef CONFIG_ARM64_PTR_AUTH + /* No ptrauth setup for run time cpus */ + cmp x1, #ARM64_CPU_RUNTIME + b.eq 3f + + /* Check if the CPU supports ptrauth */ + mrs x2, id_aa64isar1_el1 + ubfx x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8 + cbz x2, 3f + + msr_s SYS_APIAKEYLO_EL1, xzr + msr_s SYS_APIAKEYHI_EL1, xzr + + /* Just enable ptrauth for primary cpu */ + cmp x1, #ARM64_CPU_BOOT_PRIMARY + b.eq 2f + + /* if !system_supports_address_auth() then skip enable */ +alternative_if_not ARM64_HAS_ADDRESS_AUTH + b 3f +alternative_else_nop_endif + +2: /* Enable ptrauth instructions */ + ldr x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \ + SCTLR_ELx_ENDA | SCTLR_ELx_ENDB + orr x0, x0, x2 +3: +#endif ret // return to head.S ENDPROC(__cpu_setup)
From: Kristina Martsenko kristina.martsenko@arm.com
mainline inclusion from v5.7-rc1 commit 33e45234987e category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
Set up keys to use pointer authentication within the kernel. The kernel will be compiled with APIAKey instructions; the other keys are currently unused. Each task is given its own APIAKey, which is initialized during fork. The key is changed during context switch and on kernel entry from EL0.
The keys for idle threads need to be set before calling any C functions, because it is not possible to enter and exit a function with different keys.
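As a toy illustration of this constraint, here is a host-side C model (not kernel code; the XOR-based "PAC" is purely illustrative, the real PAC is a keyed MAC over the pointer, the SP and the key): a return address signed under one key fails to authenticate once the key has changed mid-function.

  #include <assert.h>
  #include <stdint.h>

  static uint64_t apia_key;      /* models the per-task APIAKey */

  /* toy stand-ins for PACIASP/AUTIASP */
  static uint64_t sign_lr(uint64_t lr) { return lr ^ (apia_key << 48); }
  static uint64_t auth_lr(uint64_t lr) { return lr ^ (apia_key << 48); }

  int main(void)
  {
          uint64_t lr = 0xffff8000100e7c60ULL;
          uint64_t signed_lr;

          apia_key = 0x2a;                  /* key at function entry */
          signed_lr = sign_lr(lr);

          apia_key = 0x2b;                  /* key changed mid-function */
          assert(auth_lr(signed_lr) != lr); /* authentication now fails */
          return 0;
  }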
Reviewed-by: Kees Cook keescook@chromium.org Reviewed-by: Catalin Marinas catalin.marinas@arm.com Reviewed-by: Vincenzo Frascino Vincenzo.Frascino@arm.com Signed-off-by: Kristina Martsenko kristina.martsenko@arm.com [Amit: Modified secondary cores key structure, comments] Signed-off-by: Amit Daniel Kachhap amit.kachhap@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Conflicts: arch/arm64/kernel/entry.S arch/arm64/kernel/smp.c [Zheng Zengkai: fix conflicts caused by skipping the following commit. a532508 arm64: Handle erratum 1418040 as a superset of erratum 1188873 0f80cad arm64: Restrict ARM64_ERRATUM_1188873 mitigation to AArch32 5b1cfe3 arm64: smp: Don't enter kernel with NULL stack pointer or task struct]
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/include/asm/asm_pointer_auth.h | 14 ++++++++++++++ arch/arm64/include/asm/pointer_auth.h | 13 +++++++++++++ arch/arm64/include/asm/processor.h | 1 + arch/arm64/include/asm/smp.h | 4 ++++ arch/arm64/kernel/asm-offsets.c | 5 +++++ arch/arm64/kernel/entry.S | 3 +++ arch/arm64/kernel/process.c | 2 ++ arch/arm64/kernel/smp.c | 8 ++++++++ arch/arm64/mm/proc.S | 12 ++++++++++++ 9 files changed, 62 insertions(+)
diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h index 3482348ec07ff..d3f4aee42851d 100644 --- a/arch/arm64/include/asm/asm_pointer_auth.h +++ b/arch/arm64/include/asm/asm_pointer_auth.h @@ -39,11 +39,25 @@ alternative_if ARM64_HAS_GENERIC_AUTH alternative_else_nop_endif .endm
+ .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3 +alternative_if ARM64_HAS_ADDRESS_AUTH + mov \tmp1, #THREAD_KEYS_KERNEL + add \tmp1, \tsk, \tmp1 + ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA] + msr_s SYS_APIAKEYLO_EL1, \tmp2 + msr_s SYS_APIAKEYHI_EL1, \tmp3 + isb +alternative_else_nop_endif + .endm + #else /* CONFIG_ARM64_PTR_AUTH */
.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3 .endm
+ .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3 + .endm + #endif /* CONFIG_ARM64_PTR_AUTH */
#endif /* __ASM_ASM_POINTER_AUTH_H */ diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h index 404276bebbbec..5848e739b0cc0 100644 --- a/arch/arm64/include/asm/pointer_auth.h +++ b/arch/arm64/include/asm/pointer_auth.h @@ -30,6 +30,10 @@ struct ptrauth_keys_user { struct ptrauth_key apga; };
+struct ptrauth_keys_kernel { + struct ptrauth_key apia; +}; + static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys) { if (system_supports_address_auth()) { @@ -50,6 +54,12 @@ do { \ write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \ } while (0)
+static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys) +{ + if (system_supports_address_auth()) + get_random_bytes(&keys->apia, sizeof(keys->apia)); +} + extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
/* @@ -66,11 +76,14 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
#define ptrauth_thread_init_user(tsk) \ ptrauth_keys_init_user(&(tsk)->thread.keys_user) +#define ptrauth_thread_init_kernel(tsk) \ + ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
#else /* CONFIG_ARM64_PTR_AUTH */ #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL) #define ptrauth_strip_insn_pac(lr) (lr) #define ptrauth_thread_init_user(tsk) +#define ptrauth_thread_init_kernel(tsk) #endif /* CONFIG_ARM64_PTR_AUTH */
#endif /* __ASM_POINTER_AUTH_H */ diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index e25269c47d70c..3aa17494ee1ee 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -145,6 +145,7 @@ struct thread_struct { struct debug_info debug; /* debugging */ #ifdef CONFIG_ARM64_PTR_AUTH struct ptrauth_keys_user keys_user; + struct ptrauth_keys_kernel keys_kernel; #endif };
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h index fc93ea608d1af..0faa0f62f4033 100644 --- a/arch/arm64/include/asm/smp.h +++ b/arch/arm64/include/asm/smp.h @@ -42,6 +42,7 @@ #include <linux/threads.h> #include <linux/cpumask.h> #include <linux/thread_info.h> +#include <asm/pointer_auth.h>
DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
@@ -93,6 +94,9 @@ asmlinkage void secondary_start_kernel(void); struct secondary_data { void *stack; struct task_struct *task; +#ifdef CONFIG_ARM64_PTR_AUTH + struct ptrauth_keys_kernel ptrauth_key; +#endif long status; };
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index c6df491da71bd..35f1b16956a88 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -50,6 +50,7 @@ int main(void) DEFINE(THREAD_CPU_CONTEXT, offsetof(struct task_struct, thread.cpu_context)); #ifdef CONFIG_ARM64_PTR_AUTH DEFINE(THREAD_KEYS_USER, offsetof(struct task_struct, thread.keys_user)); + DEFINE(THREAD_KEYS_KERNEL, offsetof(struct task_struct, thread.keys_kernel)); #endif BLANK(); DEFINE(S_X0, offsetof(struct pt_regs, regs[0])); @@ -137,6 +138,9 @@ int main(void) BLANK(); DEFINE(CPU_BOOT_STACK, offsetof(struct secondary_data, stack)); DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task)); +#ifdef CONFIG_ARM64_PTR_AUTH + DEFINE(CPU_BOOT_PTRAUTH_KEY, offsetof(struct secondary_data, ptrauth_key)); +#endif BLANK(); #ifdef CONFIG_KVM_ARM_HOST DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt)); @@ -181,6 +185,7 @@ int main(void) DEFINE(PTRAUTH_USER_KEY_APDA, offsetof(struct ptrauth_keys_user, apda)); DEFINE(PTRAUTH_USER_KEY_APDB, offsetof(struct ptrauth_keys_user, apdb)); DEFINE(PTRAUTH_USER_KEY_APGA, offsetof(struct ptrauth_keys_user, apga)); + DEFINE(PTRAUTH_KERNEL_KEY_APIA, offsetof(struct ptrauth_keys_kernel, apia)); BLANK(); #endif return 0; diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 65457545b3b61..2eb1b657de2fb 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -184,6 +184,7 @@ alternative_cb_end
apply_ssbd 1, x22, x23
+ ptrauth_keys_install_kernel tsk, x20, x22, x23 .else add x21, sp, #S_FRAME_SIZE get_thread_info tsk @@ -340,6 +341,7 @@ alternative_if ARM64_WORKAROUND_845719 alternative_else_nop_endif #endif 3: + /* No kernel C function calls after this as user keys are set. */ ptrauth_keys_install_user tsk, x0, x1, x2
apply_ssbd 0, x0, x1 @@ -1161,6 +1163,7 @@ ENTRY(cpu_switch_to) ldr lr, [x8] mov sp, x9 msr sp_el0, x1 + ptrauth_keys_install_kernel x1, x8, x9, x10 ret ENDPROC(cpu_switch_to) NOKPROBE(cpu_switch_to) diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 878e5bf8a43e8..a5f219c1296fa 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -381,6 +381,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start, */ fpsimd_flush_task_state(p);
+ ptrauth_thread_init_kernel(p); + if (likely(!(p->flags & PF_KTHREAD))) { *childregs = *current_pt_regs(); childregs->regs[0] = 0; diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 5235b9aa05241..ce0e7055dbf7e 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -125,6 +125,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle) */ secondary_data.task = idle; secondary_data.stack = task_stack_page(idle) + THREAD_SIZE; +#if defined(CONFIG_ARM64_PTR_AUTH) + secondary_data.ptrauth_key.apia.lo = idle->thread.keys_kernel.apia.lo; + secondary_data.ptrauth_key.apia.hi = idle->thread.keys_kernel.apia.hi; +#endif update_cpu_boot_status(CPU_MMU_OFF); __flush_dcache_area(&secondary_data, sizeof(secondary_data));
@@ -155,6 +159,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
secondary_data.task = NULL; secondary_data.stack = NULL; +#if defined(CONFIG_ARM64_PTR_AUTH) + secondary_data.ptrauth_key.apia.lo = 0; + secondary_data.ptrauth_key.apia.hi = 0; +#endif status = READ_ONCE(secondary_data.status); if (ret && status) {
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 5a7c3a2b8e27c..df178bed25edf 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -477,6 +477,10 @@ ENTRY(__cpu_setup) ubfx x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8 cbz x2, 3f
+ /* + * The primary cpu keys are reset here and can be + * re-initialised with some proper values later. + */ msr_s SYS_APIAKEYLO_EL1, xzr msr_s SYS_APIAKEYHI_EL1, xzr
@@ -489,6 +493,14 @@ alternative_if_not ARM64_HAS_ADDRESS_AUTH b 3f alternative_else_nop_endif
+ /* Install ptrauth key for secondary cpus */ + adr_l x2, secondary_data + ldr x3, [x2, #CPU_BOOT_TASK] // get secondary_data.task + cbz x3, 2f // check for slow booting cpus + ldp x3, x4, [x2, #CPU_BOOT_PTRAUTH_KEY] + msr_s SYS_APIAKEYLO_EL1, x3 + msr_s SYS_APIAKEYHI_EL1, x4 + 2: /* Enable ptrauth instructions */ ldr x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \ SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
From: Amit Daniel Kachhap amit.kachhap@arm.com
mainline inclusion from v5.7-rc1 commit 28321582334c category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
This patch uses the existing boot_init_stack_canary() arch function, which the generic start_kernel() already invokes early on the boot CPU, to initialize the ptrauth keys for the booting task on the primary core. The requirements here are that the function is always inlined and that its caller never returns, since changing the keys invalidates any return address signed under the old ones.
As pointer authentication also detects a subset of stack corruption, it makes sense to place this code here.
Both the pointer authentication and stack canary code are guarded by their respective config options.
Suggested-by: Ard Biesheuvel ardb@kernel.org Signed-off-by: Amit Daniel Kachhap amit.kachhap@arm.com Reviewed-by: Vincenzo Frascino Vincenzo.Frascino@arm.com Reviewed-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Conflicts: arch/arm64/include/asm/stackprotector.h [Zheng Zengkai: fix conflicts caused by skipping the following commit. 0a1213fa74327 arm64: enable per-task stack canaries]
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/include/asm/pointer_auth.h | 11 ++++++++++- arch/arm64/include/asm/stackprotector.h | 5 +++++ include/linux/stackprotector.h | 2 +- 3 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h index 5848e739b0cc0..041762fd4ad65 100644 --- a/arch/arm64/include/asm/pointer_auth.h +++ b/arch/arm64/include/asm/pointer_auth.h @@ -54,12 +54,18 @@ do { \ write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \ } while (0)
-static inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys) +static __always_inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys) { if (system_supports_address_auth()) get_random_bytes(&keys->apia, sizeof(keys->apia)); }
+static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kernel *keys) +{ + if (system_supports_address_auth()) + __ptrauth_key_install(APIA, keys->apia); +} + extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
/* @@ -78,12 +84,15 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr) ptrauth_keys_init_user(&(tsk)->thread.keys_user) #define ptrauth_thread_init_kernel(tsk) \ ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel) +#define ptrauth_thread_switch_kernel(tsk) \ + ptrauth_keys_switch_kernel(&(tsk)->thread.keys_kernel)
#else /* CONFIG_ARM64_PTR_AUTH */ #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL) #define ptrauth_strip_insn_pac(lr) (lr) #define ptrauth_thread_init_user(tsk) #define ptrauth_thread_init_kernel(tsk) +#define ptrauth_thread_switch_kernel(tsk) #endif /* CONFIG_ARM64_PTR_AUTH */
#endif /* __ASM_POINTER_AUTH_H */ diff --git a/arch/arm64/include/asm/stackprotector.h b/arch/arm64/include/asm/stackprotector.h index 58d15be11c4d8..7181f1b772944 100644 --- a/arch/arm64/include/asm/stackprotector.h +++ b/arch/arm64/include/asm/stackprotector.h @@ -15,6 +15,7 @@
#include <linux/random.h> #include <linux/version.h> +#include <asm/pointer_auth.h>
extern unsigned long __stack_chk_guard;
@@ -26,6 +27,7 @@ extern unsigned long __stack_chk_guard; */ static __always_inline void boot_init_stack_canary(void) { +#if defined(CONFIG_STACKPROTECTOR) unsigned long canary;
/* Try to get a semi random initial value. */ @@ -35,6 +37,9 @@ static __always_inline void boot_init_stack_canary(void)
current->stack_canary = canary; __stack_chk_guard = current->stack_canary; +#endif + ptrauth_thread_init_kernel(current); + ptrauth_thread_switch_kernel(current); }
#endif /* _ASM_STACKPROTECTOR_H */ diff --git a/include/linux/stackprotector.h b/include/linux/stackprotector.h index 6b792d080eee8..4c678c4fec58e 100644 --- a/include/linux/stackprotector.h +++ b/include/linux/stackprotector.h @@ -6,7 +6,7 @@ #include <linux/sched.h> #include <linux/random.h>
-#ifdef CONFIG_STACKPROTECTOR +#if defined(CONFIG_STACKPROTECTOR) || defined(CONFIG_ARM64_PTR_AUTH) # include <asm/stackprotector.h> #else static inline void boot_init_stack_canary(void)
From: Amit Daniel Kachhap amit.kachhap@arm.com
mainline inclusion from v5.7-rc1 commit 689eae42afd7 category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
Functions like vmap() record how much memory has been allocated by their callers, and callers are identified using __builtin_return_address(). Once the kernel is using pointer-auth the return address will be signed. This means it will not match any kernel symbol, and will vary between threads even for the same caller.
The output of /proc/vmallocinfo in this case may look like, 0x(____ptrval____)-0x(____ptrval____) 20480 0x86e28000100e7c60 pages=4 vmalloc N0=4 0x(____ptrval____)-0x(____ptrval____) 20480 0x86e28000100e7c60 pages=4 vmalloc N0=4 0x(____ptrval____)-0x(____ptrval____) 20480 0xc5c78000100e7c60 pages=4 vmalloc N0=4
The above three 64-bit values should all resolve to the same symbol name, not appear as three different LR values.
Use the preprocessor to add logic that clears the PAC from the value __builtin_return_address() hands to its callers. This patch adds a new file, asm/compiler.h, which is transitively included via include/compiler_types.h on the compiler command line, so it is guaranteed to be loaded and users of this macro will never pick up a wrong version.
Helper macros ptrauth_kernel_pac_mask/ptrauth_clear_pac are created for this purpose and added to this file. The existing macro ptrauth_user_pac_mask is moved here from asm/pointer_auth.h.
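For illustration, a minimal standalone C model of the new helpers (not kernel code; it assumes VA_BITS == 48 and, as in this patch, uses bit 55 to select between the user and kernel address ranges):

  #include <stdint.h>
  #include <stdio.h>

  #define VA_BITS 48
  #define GENMASK_ULL(h, l) \
          (((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

  /* PAC bits, excluding bit 55 which selects the address range */
  #define user_pac_mask()   GENMASK_ULL(54, VA_BITS)
  #define kernel_pac_mask() GENMASK_ULL(63, VA_BITS)

  static uint64_t clear_pac(uint64_t ptr)
  {
          return (ptr & (1ULL << 55)) ? (ptr | kernel_pac_mask())
                                      : (ptr & ~user_pac_mask());
  }

  int main(void)
  {
          /* one of the signed kernel LRs from the vmallocinfo output */
          printf("%#llx\n",
                 (unsigned long long)clear_pac(0x86e28000100e7c60ULL));
          return 0;
  }

Fed the three signed LR values from the vmallocinfo output above, this restores all of them to the same canonical address (0xffff8000100e7c60), which is what lets a kallsyms lookup succeed again.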
Signed-off-by: Amit Daniel Kachhap amit.kachhap@arm.com Reviewed-by: James Morse james.morse@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Conflicts: arch/arm64/include/asm/compiler.h arch/arm64/include/asm/pointer_auth.h [Zheng Zengkai: use VA_BITS instead of vabits_actual to fix conflicts in pointer_auth.h and fix conflicts in compiler.h caused by skipping the following commit. 9376b1e7b6 arm64: remove unused asm/compiler.h header file]
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/Kconfig | 1 + arch/arm64/include/asm/compiler.h | 19 +++++++++++++++++++ arch/arm64/include/asm/pointer_auth.h | 9 +-------- 3 files changed, 21 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 0a79eb17c15c8..124e23a3098b9 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -104,6 +104,7 @@ config ARM64 select HAVE_ALIGNED_STRUCT_PAGE if SLUB select HAVE_ARCH_AUDITSYSCALL select HAVE_ARCH_BITREVERSE + select HAVE_ARCH_COMPILER_H select HAVE_ARCH_HUGE_VMAP select HAVE_ARCH_JUMP_LABEL select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48) diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h index ee35fd0f2236c..b9c630287012b 100644 --- a/arch/arm64/include/asm/compiler.h +++ b/arch/arm64/include/asm/compiler.h @@ -27,4 +27,23 @@ */ #define __asmeq(x, y) ".ifnc " x "," y " ; .err ; .endif\n\t"
+#if defined(CONFIG_ARM64_PTR_AUTH) + +/* + * The EL0/EL1 pointer bits used by a pointer authentication code. + * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply. + */ +#define ptrauth_user_pac_mask() GENMASK_ULL(54, VA_BITS) +#define ptrauth_kernel_pac_mask() GENMASK_ULL(63, VA_BITS) + +/* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */ +#define ptrauth_clear_pac(ptr) \ + ((ptr & BIT_ULL(55)) ? (ptr | ptrauth_kernel_pac_mask()) : \ + (ptr & ~ptrauth_user_pac_mask())) + +#define __builtin_return_address(val) \ + (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val))) + +#endif /* CONFIG_ARM64_PTR_AUTH */ + #endif /* __ASM_COMPILER_H */ diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h index 041762fd4ad65..330bd06e74499 100644 --- a/arch/arm64/include/asm/pointer_auth.h +++ b/arch/arm64/include/asm/pointer_auth.h @@ -68,16 +68,9 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
-/* - * The EL0 pointer bits used by a pointer authentication code. - * This is dependent on TBI0 being enabled, or bits 63:56 would also apply. - */ -#define ptrauth_user_pac_mask() GENMASK(54, VA_BITS) - -/* Only valid for EL0 TTBR0 instruction pointers */ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr) { - return ptr & ~ptrauth_user_pac_mask(); + return ptrauth_clear_pac(ptr); }
#define ptrauth_thread_init_user(tsk) \
From: Mark Rutland mark.rutland@arm.com
mainline inclusion from v5.7-rc1 commit 04ad99a0b160 category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
When we enable pointer authentication in the kernel, LR values saved to the stack will have a PAC which we must strip in order to retrieve the real return address.
Strip PACs when unwinding the stack in order to account for this.
When the function graph tracer is used with patchable-function-entry, return_to_handler will also carry PAC bits, so strip those too.
Reviewed-by: Kees Cook keescook@chromium.org Acked-by: Catalin Marinas catalin.marinas@arm.com Reviewed-by: James Morse james.morse@arm.com Signed-off-by: Mark Rutland mark.rutland@arm.com Signed-off-by: Kristina Martsenko kristina.martsenko@arm.com [Amit: Re-position ptrauth_strip_insn_pac, comment] Signed-off-by: Amit Daniel Kachhap amit.kachhap@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Conflicts: arch/arm64/kernel/stacktrace.c [Zheng Zengkai: fix conflicts caused by skipping the following commit. a44827 arm64: Use ftrace_graph_get_ret_stack() instead of curr_ret_stack 421d10 arm64: function_graph: Remove use of FTRACE_NOTRACE_DEPTH]
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/kernel/stacktrace.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c index bb482ec044b61..7f266b8d1b2b0 100644 --- a/arch/arm64/kernel/stacktrace.c +++ b/arch/arm64/kernel/stacktrace.c @@ -25,6 +25,7 @@ #include <linux/stacktrace.h>
#include <asm/irq.h> +#include <asm/pointer_auth.h> #include <asm/stack_pointer.h> #include <asm/stacktrace.h>
@@ -59,7 +60,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER if (tsk->ret_stack && - (frame->pc == (unsigned long)return_to_handler)) { + (ptrauth_strip_insn_pac(frame->pc) == (unsigned long)return_to_handler)) { if (WARN_ON_ONCE(frame->graph == -1)) return -EINVAL; if (frame->graph < -1) @@ -75,6 +76,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame) } #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+ frame->pc = ptrauth_strip_insn_pac(frame->pc); + /* * Frames created upon entry from EL0 have NULL FP and PC values, so * don't bother reporting these. Frames created by __noreturn functions
From: Amit Daniel Kachhap amit.kachhap@arm.com
mainline inclusion from v5.7-rc1 commit cdcb61ae4c56 category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
lr is printed with %pS, which tries to find an entry in kallsyms. After enabling pointer authentication, this lookup will fail due to the PAC present in the lr.

Strip the PAC from the lr to display the correct symbol name.
Suggested-by: James Morse james.morse@arm.com Signed-off-by: Amit Daniel Kachhap amit.kachhap@arm.com Reviewed-by: Vincenzo Frascino Vincenzo.Frascino@arm.com Acked-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/kernel/process.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index a5f219c1296fa..7da480ed83f4e 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -274,7 +274,7 @@ void __show_regs(struct pt_regs *regs)
if (!user_mode(regs)) { printk("pc : %pS\n", (void *)regs->pc); - printk("lr : %pS\n", (void *)lr); + printk("lr : %pS\n", (void *)ptrauth_strip_insn_pac(lr)); } else { printk("pc : %016llx\n", regs->pc); printk("lr : %016llx\n", lr);
From: Amit Daniel Kachhap amit.kachhap@arm.com
mainline inclusion from v5.7-rc1 commit e51f5f56dd69 category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
This patch restores the kernel ptrauth keys from the current task during CPU resume, after the MMU is turned on and ptrauth is enabled.
A flag is added to the ptrauth_keys_install_kernel macro to indicate whether an isb instruction needs to be executed.
Signed-off-by: Amit Daniel Kachhap amit.kachhap@arm.com Reviewed-by: Vincenzo Frascino Vincenzo.Frascino@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/include/asm/asm_pointer_auth.h | 6 ++++-- arch/arm64/kernel/entry.S | 4 ++-- arch/arm64/mm/proc.S | 2 ++ 3 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h index d3f4aee42851d..ce2a8486992bb 100644 --- a/arch/arm64/include/asm/asm_pointer_auth.h +++ b/arch/arm64/include/asm/asm_pointer_auth.h @@ -39,14 +39,16 @@ alternative_if ARM64_HAS_GENERIC_AUTH alternative_else_nop_endif .endm
- .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3 + .macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3 alternative_if ARM64_HAS_ADDRESS_AUTH mov \tmp1, #THREAD_KEYS_KERNEL add \tmp1, \tsk, \tmp1 ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA] msr_s SYS_APIAKEYLO_EL1, \tmp2 msr_s SYS_APIAKEYHI_EL1, \tmp3 + .if \sync == 1 isb + .endif alternative_else_nop_endif .endm
@@ -55,7 +57,7 @@ alternative_else_nop_endif .macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3 .endm
- .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3 + .macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3 .endm
#endif /* CONFIG_ARM64_PTR_AUTH */ diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 2eb1b657de2fb..09a65f718364f 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -184,7 +184,7 @@ alternative_cb_end
apply_ssbd 1, x22, x23
- ptrauth_keys_install_kernel tsk, x20, x22, x23 + ptrauth_keys_install_kernel tsk, 1, x20, x22, x23 .else add x21, sp, #S_FRAME_SIZE get_thread_info tsk @@ -1163,7 +1163,7 @@ ENTRY(cpu_switch_to) ldr lr, [x8] mov sp, x9 msr sp_el0, x1 - ptrauth_keys_install_kernel x1, x8, x9, x10 + ptrauth_keys_install_kernel x1, 1, x8, x9, x10 ret ENDPROC(cpu_switch_to) NOKPROBE(cpu_switch_to) diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index df178bed25edf..e2f62d7d771d8 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -22,6 +22,7 @@ #include <linux/linkage.h> #include <asm/assembler.h> #include <asm/asm-offsets.h> +#include <asm/asm_pointer_auth.h> #include <asm/hwcap.h> #include <asm/pgtable.h> #include <asm/pgtable-hwdef.h> @@ -135,6 +136,7 @@ alternative_if ARM64_HAS_RAS_EXTN msr_s SYS_DISR_EL1, xzr alternative_else_nop_endif
+ ptrauth_keys_install_kernel x14, 0, x1, x2, x3 isb ret ENDPROC(cpu_do_resume)
From: Catalin Marinas catalin.marinas@arm.com
mainline inclusion from v5.6-rc1 commit 42d519e3d0c0 category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
Similar to 'cc-option' or 'ld-option', it is occasionally necessary to check whether the assembler supports certain ISA extensions. In the arm64 code we currently do this in the Makefile with an additional define:
lseinstr := $(call as-instr,.arch_extension lse,-DCONFIG_AS_LSE=1)
Add the 'as-instr' option so that it can be used in Kconfig directly:
def_bool $(as-instr,.arch_extension lse)
Acked-by: Masahiro Yamada masahiroy@kernel.org Reviewed-by: Vladimir Murzin vladimir.murzin@arm.com Tested-by: Vladimir Murzin vladimir.murzin@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Will Deacon will@kernel.org
Conflicts: scripts/Kconfig.include [Zheng Zengkai: fix conflicts caused by skipping the following commit. 75959d44f9 kbuild: Fail if gold linker is detected 902a6898bf kbuild: terminate Kconfig when $(CC) or $(LD) is missing]
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- scripts/Kconfig.include | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include index 79455ad6b3863..6db0e7cd0310b 100644 --- a/scripts/Kconfig.include +++ b/scripts/Kconfig.include @@ -26,5 +26,9 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -S -x c /dev/null -o /de # Return y if the linker supports <flag>, n otherwise ld-option = $(success,$(LD) -v $(1))
+# $(as-instr,<instr>) +# Return y if the assembler supports <instr>, n otherwise +as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -) + # gcc version including patch level gcc-version := $(shell,$(srctree)/scripts/gcc-version.sh -p $(CC) | sed 's/^0*//')
From: Vincenzo Frascino vincenzo.frascino@arm.com
mainline inclusion from v5.7-rc1 commit c2d920bf1fff category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
Currently Kconfig has no mechanism to detect whether the assembler in use supports a specific compilation option.
Introduce 'as-option' to serve this purpose in the context of Kconfig:
config X def_bool $(as-option,...)
Signed-off-by: Amit Daniel Kachhap amit.kachhap@arm.com Signed-off-by: Vincenzo Frascino vincenzo.frascino@arm.com Acked-by: Masahiro Yamada masahiroy@kernel.org Cc: linux-kbuild@vger.kernel.org Cc: Masahiro Yamada yamada.masahiro@socionext.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- scripts/Kconfig.include | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/scripts/Kconfig.include b/scripts/Kconfig.include index 6db0e7cd0310b..8717a60ca125a 100644 --- a/scripts/Kconfig.include +++ b/scripts/Kconfig.include @@ -26,6 +26,12 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -S -x c /dev/null -o /de # Return y if the linker supports <flag>, n otherwise ld-option = $(success,$(LD) -v $(1))
+# $(as-option,<flag>) +# /dev/zero is used as output instead of /dev/null as some assembler cribs when +# both input and output are same. Also both of them have same write behaviour so +# can be easily substituted. +as-option = $(success, $(CC) $(CLANG_FLAGS) $(1) -c -x assembler /dev/null -o /dev/zero) + # $(as-instr,<instr>) # Return y if the assembler supports <instr>, n otherwise as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
From: Kristina Martsenko kristina.martsenko@arm.com
mainline inclusion from v5.7-rc1 commit 74afda4016a7 category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
Compile all functions with two ptrauth instructions: PACIASP in the prologue to sign the return address, and AUTIASP in the epilogue to authenticate the return address (from the stack). If authentication fails, the return will cause an instruction abort to be taken, followed by an oops and killing the task.
This should help protect the kernel against attacks using return-oriented programming. As ptrauth protects the return address, it can also serve as a replacement for CONFIG_STACKPROTECTOR, although note that it does not protect other parts of the stack.
The new instructions are in the HINT encoding space, so on a system without ptrauth they execute as NOPs.
CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM guests, but also automatically builds the kernel with ptrauth instructions if the compiler supports it. If there is no compiler support, we do not warn that the kernel was built without ptrauth instructions.
GCC 7 and 8 support the -msign-return-address option, while GCC 9 deprecates that option and replaces it with -mbranch-protection. Support both options.
Clang uses an external assembler, so this patch makes sure that the correct parameter (-march=armv8.3-a) is passed down to help it recognize the ptrauth instructions.
The ftrace function tracer works properly with ptrauth only when the patchable-function-entry feature is present, which is ensured by the Kconfig dependency.
Cc: Catalin Marinas catalin.marinas@arm.com Cc: Will Deacon will@kernel.org Cc: Masahiro Yamada yamada.masahiro@socionext.com Reviewed-by: Kees Cook keescook@chromium.org Reviewed-by: Vincenzo Frascino Vincenzo.Frascino@arm.com # not co-dev parts Co-developed-by: Vincenzo Frascino vincenzo.frascino@arm.com Signed-off-by: Vincenzo Frascino vincenzo.frascino@arm.com Signed-off-by: Kristina Martsenko kristina.martsenko@arm.com [Amit: Cover leaf function, comments, Ftrace Kconfig] Signed-off-by: Amit Daniel Kachhap amit.kachhap@arm.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Conflicts: arch/arm64/Kconfig arch/arm64/Makefile [Zheng Zengkai: fix conflicts caused by skipping the following commit. 384b40caa8af KVM: arm/arm64: Context-switch ptrauth registers 0a1213fa7432 arm64: enable per-task stack canaries]
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/Kconfig | 24 +++++++++++++++++++++++- arch/arm64/Makefile | 11 +++++++++++ 2 files changed, 34 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 124e23a3098b9..39eda6ec0cbaf 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1237,6 +1237,8 @@ menu "ARMv8.3 architectural features" config ARM64_PTR_AUTH bool "Enable support for pointer authentication" default y + depends on (CC_HAS_SIGN_RETURN_ADDRESS || CC_HAS_BRANCH_PROT_PAC_RET) && AS_HAS_PAC + depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS) help Pointer authentication (part of the ARMv8.3 Extensions) provides instructions for signing and authenticating pointers against secret @@ -1244,11 +1246,17 @@ config ARM64_PTR_AUTH and other attacks.
This option enables these instructions at EL0 (i.e. for userspace). - Choosing this option will cause the kernel to initialise secret keys for each process at exec() time, with these keys being context-switched along with the process.
+ If the compiler supports the -mbranch-protection or + -msign-return-address flag (e.g. GCC 7 or later), then this option + will also cause the kernel itself to be compiled with return address + protection. In this case, and if the target hardware is known to + support pointer authentication, then CONFIG_STACKPROTECTOR can be + disabled with minimal loss of protection. + The feature is detected at runtime. If the feature is not present in hardware it will not be advertised to userspace nor will it be enabled. @@ -1259,6 +1267,20 @@ config ARM64_PTR_AUTH but with the feature disabled. On such a system, this option should not be selected.
+ This feature works with FUNCTION_GRAPH_TRACER option only if + DYNAMIC_FTRACE_WITH_REGS is enabled. + +config CC_HAS_BRANCH_PROT_PAC_RET + # GCC 9 or later, clang 8 or later + def_bool $(cc-option,-mbranch-protection=pac-ret+leaf) + +config CC_HAS_SIGN_RETURN_ADDRESS + # GCC 7, 8 + def_bool $(cc-option,-msign-return-address=all) + +config AS_HAS_PAC + def_bool $(as-option,-Wa$(comma)-march=armv8.3-a) + endmenu
config ARM64_SVE diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 9a5e281412116..75205ff038a0d 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -57,6 +57,17 @@ KBUILD_AFLAGS += $(lseinstr) $(brokengasinst) KBUILD_CFLAGS += $(call cc-option,-mabi=lp64) KBUILD_AFLAGS += $(call cc-option,-mabi=lp64)
+ifeq ($(CONFIG_ARM64_PTR_AUTH),y) +branch-prot-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all +branch-prot-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf +# -march=armv8.3-a enables the non-nops instructions for PAC, to avoid the +# compiler to generate them and consequently to break the single image contract +# we pass it only to the assembler. This option is utilized only in case of non +# integrated assemblers. +branch-prot-flags-$(CONFIG_AS_HAS_PAC) += -Wa,-march=armv8.3-a +KBUILD_CFLAGS += $(branch-prot-flags-y) +endif + ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) KBUILD_CPPFLAGS += -mbig-endian CHECKFLAGS += -D__AARCH64EB__
From: Amit Daniel Kachhap amit.kachhap@arm.com
mainline inclusion from v5.7-rc1 commit 6cb6982f42cb category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
This test is specific to arm64. When the in-kernel pointer authentication config is enabled, the return address stored on the stack is signed. This feature helps mitigate ROP-style attacks. If any of the parameters used to generate the PAC (<key, sp, lr>) is modified, the authentication stage will fail and lead to an abort.
This test changes the APIA kernel key input parameter to cause an abort. The PAC computed from the new key can be the same as the previous one due to a hash collision, so the change is retried a few times, as there is no reliable way to compare the PACs. Even if this test does not abort after the retries, the key change may still cause an authentication failure at a later stage, when earlier functions return.
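As a back-of-the-envelope estimate of why a small retry count suffices (assuming VA_BITS == 48 and no tagging for kernel pointers, so the PAC occupies bits 63:48 except bit 55, i.e. 15 bits): a single re-key reproduces the previous PAC with probability of roughly 2^-15, so the chance that all CORRUPT_PAC_ITERATE (10) attempts collide is about 2^-150, which is negligible.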
This test can be invoked as, echo CORRUPT_PAC > /sys/kernel/debug/provoke-crash/DIRECT
or as below if inserted as a module, insmod lkdtm.ko cpoint_name=DIRECT cpoint_type=CORRUPT_PAC cpoint_count=1
[ 13.118166] lkdtm: Performing direct entry CORRUPT_PAC [ 13.118298] lkdtm: Clearing PAC from the return address [ 13.118466] Unable to handle kernel paging request at virtual address bfff8000108648ec [ 13.118626] Mem abort info: [ 13.118666] ESR = 0x86000004 [ 13.118866] EC = 0x21: IABT (current EL), IL = 32 bits [ 13.118966] SET = 0, FnV = 0 [ 13.119117] EA = 0, S1PTW = 0
Signed-off-by: Amit Daniel Kachhap amit.kachhap@arm.com Acked-by: Catalin Marinas catalin.marinas@arm.com Cc: Kees Cook keescook@chromium.org Signed-off-by: Catalin Marinas catalin.marinas@arm.com
Conflicts: drivers/misc/lkdtm/bugs.c drivers/misc/lkdtm/core.c drivers/misc/lkdtm/lkdtm.h [Zheng Zengkai: fix conflicts caused by skipping the following commit. b09511c253e5 lkdtm: Add a DOUBLE_FAULT crash type on x86 06b32fdb0309 lkdtm: Check for SMEP clearing protections]
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- drivers/misc/lkdtm/bugs.c | 36 ++++++++++++++++++++++++++++++++++++ drivers/misc/lkdtm/core.c | 1 + drivers/misc/lkdtm/lkdtm.h | 1 + 3 files changed, 38 insertions(+)
diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c index 7eebbdfbcacd0..4095a458ab1a3 100644 --- a/drivers/misc/lkdtm/bugs.c +++ b/drivers/misc/lkdtm/bugs.c @@ -255,3 +255,39 @@ void lkdtm_STACK_GUARD_PAGE_TRAILING(void)
pr_err("FAIL: accessed page after stack!\n"); } + +#ifdef CONFIG_ARM64_PTR_AUTH +static noinline void change_pac_parameters(void) +{ + /* Reset the keys of current task */ + ptrauth_thread_init_kernel(current); + ptrauth_thread_switch_kernel(current); +} + +#define CORRUPT_PAC_ITERATE 10 +noinline void lkdtm_CORRUPT_PAC(void) +{ + int i; + + if (!system_supports_address_auth()) { + pr_err("FAIL: arm64 pointer authentication feature not present\n"); + return; + } + + pr_info("Change the PAC parameters to force function return failure\n"); + /* + * Pac is a hash value computed from input keys, return address and + * stack pointer. As pac has fewer bits so there is a chance of + * collision, so iterate few times to reduce the collision probability. + */ + for (i = 0; i < CORRUPT_PAC_ITERATE; i++) + change_pac_parameters(); + + pr_err("FAIL: %s test failed. Kernel may be unstable from here\n", __func__); +} +#else /* !CONFIG_ARM64_PTR_AUTH */ +noinline void lkdtm_CORRUPT_PAC(void) +{ + pr_err("FAIL: arm64 pointer authentication config disabled\n"); +} +#endif diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c index 07caaa2cfe1e4..5999bc838a561 100644 --- a/drivers/misc/lkdtm/core.c +++ b/drivers/misc/lkdtm/core.c @@ -136,6 +136,7 @@ static const struct crashtype crashtypes[] = { CRASHTYPE(CORRUPT_STACK_STRONG), CRASHTYPE(STACK_GUARD_PAGE_LEADING), CRASHTYPE(STACK_GUARD_PAGE_TRAILING), + CRASHTYPE(CORRUPT_PAC), CRASHTYPE(UNALIGNED_LOAD_STORE_WRITE), CRASHTYPE(OVERWRITE_ALLOCATION), CRASHTYPE(WRITE_AFTER_FREE), diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h index 8c3f2e6af256c..7060744a0f60d 100644 --- a/drivers/misc/lkdtm/lkdtm.h +++ b/drivers/misc/lkdtm/lkdtm.h @@ -26,6 +26,7 @@ void lkdtm_CORRUPT_LIST_DEL(void); void lkdtm_CORRUPT_USER_DS(void); void lkdtm_STACK_GUARD_PAGE_LEADING(void); void lkdtm_STACK_GUARD_PAGE_TRAILING(void); +void lkdtm_CORRUPT_PAC(void);
/* lkdtm_heap.c */ void lkdtm_OVERWRITE_ALLOCATION(void);
From: Nick Desaulniers ndesaulniers@google.com
mainline inclusion from v5.7-rc1 commit 3b446c7d27ddd category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
Clang relies on GNU as from binutils to assemble the Linux kernel, currently. A recent patch to enable the armv8.3-a extension for pointer authentication checked for compiler support of the relevant flags. Everything works with binutils 2.34+, but for older versions we observe assembler errors:
/tmp/vgettimeofday-36a54b.s: Assembler messages: /tmp/vgettimeofday-36a54b.s:40: Error: unknown pseudo-op: `.cfi_negate_ra_state'
When compiling with Clang, require the assembler to support .cfi_negate_ra_state directives, in order to support CONFIG_ARM64_PTR_AUTH.
Link: https://github.com/ClangBuiltLinux/linux/issues/938 Signed-off-by: Nick Desaulniers ndesaulniers@google.com Signed-off-by: Catalin Marinas catalin.marinas@arm.com Reviewed-by: Nathan Chancellor natechancellor@gmail.com Tested-by: Nathan Chancellor natechancellor@gmail.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/Kconfig | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 39eda6ec0cbaf..a9cee77dbfaa4 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1238,6 +1238,7 @@ config ARM64_PTR_AUTH bool "Enable support for pointer authentication" default y depends on (CC_HAS_SIGN_RETURN_ADDRESS || CC_HAS_BRANCH_PROT_PAC_RET) && AS_HAS_PAC + depends on CC_IS_GCC || (CC_IS_CLANG && AS_HAS_CFI_NEGATE_RA_STATE) depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS) help Pointer authentication (part of the ARMv8.3 Extensions) provides @@ -1281,6 +1282,9 @@ config CC_HAS_SIGN_RETURN_ADDRESS config AS_HAS_PAC def_bool $(as-option,-Wa$(comma)-march=armv8.3-a)
+config AS_HAS_CFI_NEGATE_RA_STATE + def_bool $(as-instr,.cfi_startproc\n.cfi_negate_ra_state\n.cfi_endproc\n) + endmenu
config ARM64_SVE
From: Amit Daniel Kachhap amit.kachhap@arm.com
mainline inclusion from v5.7-rc1 commit 15cd0e675f3f category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
The recent addition of ARM64_PTR_AUTH exposed a mismatch issue with binutils. GCC 9.1 and later insert a .note.gnu.property section note, but it is only handled properly by binutils versions newer than 2.33.1. If an older binutils is used, the following warnings are generated:
aarch64-linux-ld: warning: arch/arm64/kernel/vdso/vgettimeofday.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000 aarch64-linux-objdump: warning: arch/arm64/lib/csum.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000 aarch64-linux-nm: warning: .tmp_vmlinux1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
This patch enables ARM64_PTR_AUTH only when the gcc and binutils versions are compatible with each other (LD_VERSION 233010000 in the check below corresponds to binutils 2.33.1). Older gcc versions which do not insert such a section continue to work as before.
This scenario should not occur with clang, as the recent commit 3b446c7d27ddd06 ("arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH") already masks out binutils versions older than 2.34.
Reported-by: kbuild test robot lkp@intel.com Suggested-by: Vincenzo Frascino Vincenzo.Frascino@arm.com Signed-off-by: Amit Daniel Kachhap amit.kachhap@arm.com [catalin.marinas@arm.com: slight adjustment to the comment] Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/Kconfig | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index a9cee77dbfaa4..2b6326901cd3c 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1238,7 +1238,10 @@ config ARM64_PTR_AUTH bool "Enable support for pointer authentication" default y depends on (CC_HAS_SIGN_RETURN_ADDRESS || CC_HAS_BRANCH_PROT_PAC_RET) && AS_HAS_PAC - depends on CC_IS_GCC || (CC_IS_CLANG && AS_HAS_CFI_NEGATE_RA_STATE) + # GCC 9.1 and later inserts a .note.gnu.property section note for PAC + # which is only understood by binutils starting with version 2.33.1. + depends on !CC_IS_GCC || GCC_VERSION < 90100 || LD_VERSION >= 233010000 + depends on !CC_IS_CLANG || AS_HAS_CFI_NEGATE_RA_STATE depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS) help Pointer authentication (part of the ARMv8.3 Extensions) provides
From: Mark Rutland mark.rutland@arm.com
mainline inclusion from v5.8-rc1 commit d0055da5266a category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
The 'sync' argument to the ptrauth_keys_install_kernel macro is somewhat opaque at call sites, so instead let's have regular and _nosync variants of the macro to make this a little more obvious.
Signed-off-by: Mark Rutland mark.rutland@arm.com Cc: Amit Daniel Kachhap amit.kachhap@arm.com Cc: Catalin Marinas catalin.marinas@arm.com Cc: Will Deacon will@kernel.org Link: https://lore.kernel.org/r/20200423101606.37601-2-mark.rutland@arm.com Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/include/asm/asm_pointer_auth.h | 21 ++++++++++++++++----- arch/arm64/kernel/entry.S | 4 ++-- arch/arm64/mm/proc.S | 2 +- 3 files changed, 19 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h index ce2a8486992bb..c85540a911d3c 100644 --- a/arch/arm64/include/asm/asm_pointer_auth.h +++ b/arch/arm64/include/asm/asm_pointer_auth.h @@ -39,16 +39,24 @@ alternative_if ARM64_HAS_GENERIC_AUTH alternative_else_nop_endif .endm
- .macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3 -alternative_if ARM64_HAS_ADDRESS_AUTH + .macro __ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3 mov \tmp1, #THREAD_KEYS_KERNEL add \tmp1, \tsk, \tmp1 ldp \tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA] msr_s SYS_APIAKEYLO_EL1, \tmp2 msr_s SYS_APIAKEYHI_EL1, \tmp3 - .if \sync == 1 + .endm + + .macro ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3 +alternative_if ARM64_HAS_ADDRESS_AUTH + __ptrauth_keys_install_kernel_nosync \tsk, \tmp1, \tmp2, \tmp3 +alternative_else_nop_endif + .endm + + .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3 +alternative_if ARM64_HAS_ADDRESS_AUTH + __ptrauth_keys_install_kernel_nosync \tsk, \tmp1, \tmp2, \tmp3 isb - .endif alternative_else_nop_endif .endm
@@ -57,7 +65,10 @@ alternative_else_nop_endif .macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3 .endm
- .macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3 + .macro ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3 + .endm + + .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3 .endm
#endif /* CONFIG_ARM64_PTR_AUTH */ diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 09a65f718364f..2eb1b657de2fb 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -184,7 +184,7 @@ alternative_cb_end
apply_ssbd 1, x22, x23
- ptrauth_keys_install_kernel tsk, 1, x20, x22, x23 + ptrauth_keys_install_kernel tsk, x20, x22, x23 .else add x21, sp, #S_FRAME_SIZE get_thread_info tsk @@ -1163,7 +1163,7 @@ ENTRY(cpu_switch_to) ldr lr, [x8] mov sp, x9 msr sp_el0, x1 - ptrauth_keys_install_kernel x1, 1, x8, x9, x10 + ptrauth_keys_install_kernel x1, x8, x9, x10 ret ENDPROC(cpu_switch_to) NOKPROBE(cpu_switch_to) diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index e2f62d7d771d8..d6e0886fa1e94 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -136,7 +136,7 @@ alternative_if ARM64_HAS_RAS_EXTN msr_s SYS_DISR_EL1, xzr alternative_else_nop_endif
- ptrauth_keys_install_kernel x14, 0, x1, x2, x3 + ptrauth_keys_install_kernel_nosync x14, x1, x2, x3 isb ret ENDPROC(cpu_do_resume)
From: Mark Rutland mark.rutland@arm.com
mainline inclusion from v5.8-rc1 commit 62a679cb2825 category: feature bugzilla: 27615 CVE: NA
-------------------------------------------------
Currently __cpu_setup conditionally initializes the address authentication keys and enables them in SCTLR_EL1, doing so differently for the primary CPU and secondary CPUs, and skipping this work for CPUs returning from an idle state. For the latter case, cpu_do_resume restores the keys and SCTLR_EL1 value after the MMU has been enabled.
This flow is rather difficult to follow, so instead let's move the primary and secondary CPU initialization into their respective boot paths. By following the example of cpu_do_resume and doing so once the MMU is enabled, we can always initialize the keys from the values in thread_struct, and avoid the machinery necessary to pass the keys in secondary_data or open-coding initialization for the boot CPU.
This means we perform an additional RMW of SCTLR_EL1, but we already do this in the cpu_do_resume path, and for other features in cpufeature.c, so this isn't a major concern in a bringup path. Note that even while the enable bits are clear, the key registers are accessible.
As this now renders the argument to __cpu_setup redundant, let's also remove that entirely. Future extensions can follow a similar approach to initialize values that differ for primary/secondary CPUs.
Signed-off-by: Mark Rutland mark.rutland@arm.com Tested-by: Amit Daniel Kachhap amit.kachhap@arm.com Reviewed-by: Amit Daniel Kachhap amit.kachhap@arm.com Cc: Amit Daniel Kachhap amit.kachhap@arm.com Cc: Catalin Marinas catalin.marinas@arm.com Cc: James Morse james.morse@arm.com Cc: Suzuki K Poulose suzuki.poulose@arm.com Cc: Will Deacon will@kernel.org Link: https://lore.kernel.org/r/20200423101606.37601-3-mark.rutland@arm.com Signed-off-by: Will Deacon will@kernel.org
Conflicts: arch/arm64/kernel/smp.c [Zheng Zengkai: fix conflicts caused by skipping the following commit. 5b1cfe3a arm64: smp: Don't enter kernel with NULL stack pointer or task struct]
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com --- arch/arm64/include/asm/asm_pointer_auth.h | 22 ++++++++++++ arch/arm64/include/asm/smp.h | 11 ------ arch/arm64/kernel/asm-offsets.c | 3 -- arch/arm64/kernel/head.S | 12 +++++-- arch/arm64/kernel/sleep.S | 1 - arch/arm64/kernel/smp.c | 8 ----- arch/arm64/mm/proc.S | 44 ----------------------- 7 files changed, 32 insertions(+), 69 deletions(-)
diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h index c85540a911d3c..52dead2a8640d 100644 --- a/arch/arm64/include/asm/asm_pointer_auth.h +++ b/arch/arm64/include/asm/asm_pointer_auth.h @@ -60,6 +60,28 @@ alternative_if ARM64_HAS_ADDRESS_AUTH alternative_else_nop_endif .endm
+ .macro __ptrauth_keys_init_cpu tsk, tmp1, tmp2, tmp3 + mrs \tmp1, id_aa64isar1_el1 + ubfx \tmp1, \tmp1, #ID_AA64ISAR1_APA_SHIFT, #8 + cbz \tmp1, .Lno_addr_auth@ + mov_q \tmp1, (SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \ + SCTLR_ELx_ENDA | SCTLR_ELx_ENDB) + mrs \tmp2, sctlr_el1 + orr \tmp2, \tmp2, \tmp1 + msr sctlr_el1, \tmp2 + __ptrauth_keys_install_kernel_nosync \tsk, \tmp1, \tmp2, \tmp3 + isb +.Lno_addr_auth@: + .endm + + .macro ptrauth_keys_init_cpu tsk, tmp1, tmp2, tmp3 +alternative_if_not ARM64_HAS_ADDRESS_AUTH + b .Lno_addr_auth@ +alternative_else_nop_endif + __ptrauth_keys_init_cpu \tsk, \tmp1, \tmp2, \tmp3 +.Lno_addr_auth@: + .endm + #else /* CONFIG_ARM64_PTR_AUTH */
.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3 diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h index 0faa0f62f4033..65d29eb316277 100644 --- a/arch/arm64/include/asm/smp.h +++ b/arch/arm64/include/asm/smp.h @@ -27,14 +27,6 @@ /* Fatal system error detected by secondary CPU, crash the system */ #define CPU_PANIC_KERNEL (3)
-/* Possible options for __cpu_setup */ -/* Option to setup primary cpu */ -#define ARM64_CPU_BOOT_PRIMARY (1) -/* Option to setup secondary cpus */ -#define ARM64_CPU_BOOT_SECONDARY (2) -/* Option to setup cpus for different cpu run time services */ -#define ARM64_CPU_RUNTIME (3) - #ifndef __ASSEMBLY__
#include <asm/percpu.h> @@ -94,9 +86,6 @@ asmlinkage void secondary_start_kernel(void); struct secondary_data { void *stack; struct task_struct *task; -#ifdef CONFIG_ARM64_PTR_AUTH - struct ptrauth_keys_kernel ptrauth_key; -#endif long status; };
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 35f1b16956a88..a792b4d9b89eb 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -138,9 +138,6 @@ int main(void) BLANK(); DEFINE(CPU_BOOT_STACK, offsetof(struct secondary_data, stack)); DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task)); -#ifdef CONFIG_ARM64_PTR_AUTH - DEFINE(CPU_BOOT_PTRAUTH_KEY, offsetof(struct secondary_data, ptrauth_key)); -#endif BLANK(); #ifdef CONFIG_KVM_ARM_HOST DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt)); diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index c8a64e5bb9c11..dbee08c9d7d1e 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -24,6 +24,7 @@ #include <linux/init.h> #include <linux/irqchip/arm-gic-v3.h>
+#include <asm/asm_pointer_auth.h>
 #include <asm/assembler.h>
 #include <asm/boot.h>
 #include <asm/ptrace.h>
@@ -126,7 +127,6 @@ ENTRY(stext)
 	 * On return, the CPU will be ready for the MMU to be turned on and
 	 * the TCR will have been set.
 	 */
-	mov	x0, #ARM64_CPU_BOOT_PRIMARY
 	bl	__cpu_setup			// initialise processor
 	b	__primary_switch
 ENDPROC(stext)
@@ -411,6 +411,10 @@ __primary_switched:
 	adr_l	x5, init_task
 	msr	sp_el0, x5			// Save thread_info
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+	__ptrauth_keys_init_cpu	x5, x6, x7, x8
+#endif
+
 	adr_l	x8, vectors			// load VBAR_EL1 with virtual
 	msr	vbar_el1, x8			// vector table address
 	isb
@@ -705,7 +709,6 @@ secondary_startup:
 	/*
 	 * Common entry point for secondary CPUs.
 	 */
 	bl	__cpu_secondary_check52bitva
-	mov	x0, #ARM64_CPU_BOOT_SECONDARY
 	bl	__cpu_setup			// initialise processor
 	bl	__enable_mmu
 	ldr	x8, =__secondary_switched
@@ -724,6 +727,11 @@ __secondary_switched:
 	msr	sp_el0, x2
 	mov	x29, #0
 	mov	x30, #0
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+	ptrauth_keys_init_cpu	x2, x3, x4, x5
+#endif
+
 	b	secondary_start_kernel
 ENDPROC(__secondary_switched)
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 8eee57d97281a..f7193aef2cc83 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -100,7 +100,6 @@ ENDPROC(__cpu_suspend_enter)
 	.pushsection ".idmap.text", "awx"
 ENTRY(cpu_resume)
 	bl	el2_setup		// if in EL2 drop to EL1 cleanly
-	mov	x0, #ARM64_CPU_RUNTIME
 	bl	__cpu_setup
 	/* enable the MMU early - so we can access sleep_save_stash by va */
 	bl	__enable_mmu
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index ce0e7055dbf7e..5235b9aa05241 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -125,10 +125,6 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	 */
 	secondary_data.task = idle;
 	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
-#if defined(CONFIG_ARM64_PTR_AUTH)
-	secondary_data.ptrauth_key.apia.lo = idle->thread.keys_kernel.apia.lo;
-	secondary_data.ptrauth_key.apia.hi = idle->thread.keys_kernel.apia.hi;
-#endif
 	update_cpu_boot_status(CPU_MMU_OFF);
 	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
@@ -159,10 +155,6 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	secondary_data.task = NULL;
 	secondary_data.stack = NULL;
-#if defined(CONFIG_ARM64_PTR_AUTH)
-	secondary_data.ptrauth_key.apia.lo = 0;
-	secondary_data.ptrauth_key.apia.hi = 0;
-#endif
 	status = READ_ONCE(secondary_data.status);
 	if (ret && status) {
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index d6e0886fa1e94..9bb8df527ccb0 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -401,8 +401,6 @@ ENDPROC(idmap_kpti_install_ng_mappings)
  *
  *	Initialise the processor for turning the MMU on.
  *
- * Input:
- *	x0 with a flag ARM64_CPU_BOOT_PRIMARY/ARM64_CPU_BOOT_SECONDARY/ARM64_CPU_RUNTIME.
  * Output:
  *	Return in x0 the value of the SCTLR_EL1 register.
  */
@@ -463,51 +461,9 @@ ENTRY(__cpu_setup)
 1:
 #endif	/* CONFIG_ARM64_HW_AFDBM */
 	msr	tcr_el1, x10
-	mov	x1, x0
 	/*
 	 * Prepare SCTLR
 	 */
 	mov_q	x0, SCTLR_EL1_SET
-
-#ifdef CONFIG_ARM64_PTR_AUTH
-	/* No ptrauth setup for run time cpus */
-	cmp	x1, #ARM64_CPU_RUNTIME
-	b.eq	3f
-
-	/* Check if the CPU supports ptrauth */
-	mrs	x2, id_aa64isar1_el1
-	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
-	cbz	x2, 3f
-
-	/*
-	 * The primary cpu keys are reset here and can be
-	 * re-initialised with some proper values later.
-	 */
-	msr_s	SYS_APIAKEYLO_EL1, xzr
-	msr_s	SYS_APIAKEYHI_EL1, xzr
-
-	/* Just enable ptrauth for primary cpu */
-	cmp	x1, #ARM64_CPU_BOOT_PRIMARY
-	b.eq	2f
-
-	/* if !system_supports_address_auth() then skip enable */
-alternative_if_not ARM64_HAS_ADDRESS_AUTH
-	b	3f
-alternative_else_nop_endif
-
-	/* Install ptrauth key for secondary cpus */
-	adr_l	x2, secondary_data
-	ldr	x3, [x2, #CPU_BOOT_TASK]	// get secondary_data.task
-	cbz	x3, 2f				// check for slow booting cpus
-	ldp	x3, x4, [x2, #CPU_BOOT_PTRAUTH_KEY]
-	msr_s	SYS_APIAKEYLO_EL1, x3
-	msr_s	SYS_APIAKEYHI_EL1, x4
-
-2:	/* Enable ptrauth instructions */
-	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
-		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
-	orr	x0, x0, x2
-3:
-#endif
 	ret					// return to head.S
 ENDPROC(__cpu_setup)
From: Marc Zyngier maz@kernel.org
mainline inclusion
from v5.8
commit 835d1c3a9879
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
asm/pointer_auth.h is not needed anymore in asm/smp.h, as 62a679cb2825 ("arm64: simplify ptrauth initialization") removed the keys from the secondary_data structure.
This also cures a compilation issue introduced by f227e3ec3b5c ("random32: update the net random state on interrupt and activity").
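For reference, 62a679cb2825 leaves secondary_data with only the three fields visible in the asm/smp.h hunk of the previous patch, none of which requires asm/pointer_auth.h. A minimal C sketch of the resulting layout (illustrative only, not code added by this patch):

/*
 * Sketch of struct secondary_data after 62a679cb2825; see the
 * asm/smp.h hunk above for the authoritative definition.
 */
struct secondary_data {
	void *stack;			/* stack for the booting CPU */
	struct task_struct *task;	/* idle task for the booting CPU */
	long status;			/* boot status, e.g. CPU_MMU_OFF */
};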
Fixes: 62a679cb2825 ("arm64: simplify ptrauth initialization")
Fixes: f227e3ec3b5c ("random32: update the net random state on interrupt and activity")
Acked-by: Catalin Marinas catalin.marinas@arm.com
Signed-off-by: Marc Zyngier maz@kernel.org
Signed-off-by: Will Deacon will@kernel.org
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Reviewed-by: Hanjun Guo guohanjun@huawei.com
Signed-off-by: Yang Yingliang yangyingliang@huawei.com
---
 arch/arm64/include/asm/smp.h | 1 -
 1 file changed, 1 deletion(-)
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 65d29eb316277..403c22f62b580 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -34,7 +34,6 @@
 #include <linux/threads.h>
 #include <linux/cpumask.h>
 #include <linux/thread_info.h>
-#include <asm/pointer_auth.h>
DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
From: Zheng Zengkai zhengzengkai@huawei.com
hulk inclusion
category: feature
bugzilla: 27615
CVE: NA
-------------------------------------------------
Enable CONFIG_ARM64_PTR_AUTH, CONFIG_CC_HAS_SIGN_RETURN_ADDRESS, and CONFIG_AS_HAS_PAC in hulk_defconfig and openeuler_defconfig by default.
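For context, CC_HAS_SIGN_RETURN_ADDRESS and AS_HAS_PAC are toolchain-capability symbols that gate ARM64_PTR_AUTH; recording them in the defconfigs simply reflects that the build toolchain passed those checks. A hedged Kconfig sketch of the relationship (the authoritative dependency expression lives in this series' arch/arm64/Kconfig changes and is not reproduced verbatim here):

# Illustrative sketch only; see the ARM64_PTR_AUTH entry in
# arch/arm64/Kconfig for the real prompt and dependencies.
config ARM64_PTR_AUTH
	bool "Enable support for pointer authentication"
	default y
	depends on CC_HAS_SIGN_RETURN_ADDRESS && AS_HAS_PAC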
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Reviewed-by: Hanjun Guo guohanjun@huawei.com
Signed-off-by: Yang Yingliang yangyingliang@huawei.com
---
 arch/arm64/configs/hulk_defconfig      | 7 +++++++
 arch/arm64/configs/openeuler_defconfig | 7 +++++++
 2 files changed, 14 insertions(+)
diff --git a/arch/arm64/configs/hulk_defconfig b/arch/arm64/configs/hulk_defconfig
index 9bcbc79bef166..02090f911f17a 100644
--- a/arch/arm64/configs/hulk_defconfig
+++ b/arch/arm64/configs/hulk_defconfig
@@ -480,6 +480,13 @@ CONFIG_ASCEND_OOM=y
 CONFIG_ASCEND_IOPF_HIPRI=y
 CONFIG_ASCEND_CHARGE_MIGRATE_HUGEPAGES=y
 
+#
+# ARMv8.3 architectural features
+#
+CONFIG_ARM64_PTR_AUTH=y
+CONFIG_CC_HAS_SIGN_RETURN_ADDRESS=y
+CONFIG_AS_HAS_PAC=y
+
 #
 # Boot options
 #
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 201d26100ce94..49fd2e5c20185 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -477,6 +477,13 @@ CONFIG_RELOCATABLE=y
 CONFIG_RANDOMIZE_BASE=y
 CONFIG_RANDOMIZE_MODULE_REGION_FULL=y
 
+#
+# ARMv8.3 architectural features
+#
+CONFIG_ARM64_PTR_AUTH=y
+CONFIG_CC_HAS_SIGN_RETURN_ADDRESS=y
+CONFIG_AS_HAS_PAC=y
+
 #
 # Boot options
 #