
03 Nov '23
v1 -> v2: modified patch names
---
 kernel.spec | 8 +-
...rm64-HWCAP-add-support-for-AT_HWCAP2.patch | 463 ++++++++++++++++++
...4-Expose-SVE2-features-for-userspace.patch | 275 +++++++++++
...-Fix-missing-ZFR0-in-__read_sysreg_b.patch | 55 +++
...-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch | 67 +++
series.conf | 4 +
 6 files changed, 871 insertions(+), 1 deletion(-)
create mode 100644 patches/0097-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
create mode 100644 patches/0098-arm64-Expose-SVE2-features-for-userspace.patch
create mode 100644 patches/0099-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
create mode 100644 patches/0100-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
diff --git a/kernel.spec b/kernel.spec
index db1c158ac117..3047a91b59bb 100644
--- a/kernel.spec
+++ b/kernel.spec
@@ -32,7 +32,7 @@
Name: kernel
Version: 4.19.90
-Release: %{hulkrelease}.0232
+Release: %{hulkrelease}.0233
Summary: Linux Kernel
License: GPLv2
URL: http://www.kernel.org/
@@ -832,6 +832,12 @@ fi
 %changelog
+* Fri Nov 3 2023 Yu Liao <liaoyu15@huawei.com> - 4.19.90-2310.4.0.0233
+- arm64: HWCAP: add support for AT_HWCAP2
+- arm64: Expose SVE2 features for userspace
+- arm64: cpufeature: Fix missing ZFR0 in __read_sysreg_by_encoding()
+- arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
+
* Thu Nov 2 2023 hongrongxuan <hongrongxuan@huawei.com> - 4.19.90-2311.1.0.0232
- remove linux-kernel-test.patch
diff --git a/patches/0097-arm64-HWCAP-add-support-for-AT_HWCAP2.patch b/patches/0097-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
new file mode 100644
index 000000000000..46effba895fb
--- /dev/null
+++ b/patches/0097-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
@@ -0,0 +1,463 @@
+From a97497b283894653e53f7eb83b5825f5564d1614 Mon Sep 17 00:00:00 2001
+From: Andrew Murray <andrew.murray@arm.com>
+Date: Tue, 9 Apr 2019 10:52:40 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 1/4] arm64: HWCAP: add support for
+ AT_HWCAP2
+
+mainline inclusion
+from mainline-v5.2-rc1
+commit 06a916feca2b262ab0c1a2aeb68882f4b1108a07
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+As we will exhaust the first 32 bits of AT_HWCAP let's start
+exposing AT_HWCAP2 to userspace to give us up to 64 caps.
+
+Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
+prefer to expand into AT_HWCAP2 in order to provide a consistent
+view to userspace between ILP32 and LP64. However internal to the
+kernel we prefer to continue to use the full space of elf_hwcap.
+
+To reduce complexity and allow for future expansion, we now
+represent hwcaps in the kernel as ordinals and use a
+KERNEL_HWCAP_ prefix. This allows us to support automatic feature
+based module loading for all our hwcaps.
+
+We introduce cpu_set_feature to set hwcaps which complements the
+existing cpu_have_feature helper. These helpers allow us to clean
+up existing direct uses of elf_hwcap and reduce any future effort
+required to move beyond 64 caps.
+
+For convenience we also introduce cpu_{have,set}_named_feature which
+makes use of the cpu_feature macro to allow providing a hwcap name
+without a {KERNEL_}HWCAP_ prefix.
+
+Signed-off-by: Andrew Murray <andrew.murray@arm.com>
+[will: use const_ilog2() and tweak documentation]
+Signed-off-by: Will Deacon <will.deacon@arm.com>
+
+Conflicts:
+ Documentation/arm64/elf_hwcaps.txt
+ arch/arm64/crypto/chacha-neon-glue.c
+ arch/arm64/crypto/crct10dif-ce-glue.c
+ arch/arm64/crypto/ghash-ce-glue.c
+ arch/arm64/crypto/nhpoly1305-neon-glue.c
+ arch/arm64/kernel/cpufeature.c
+ drivers/clocksource/arm_arch_timer.c
+
+Signed-off-by: Yu Liao <liaoyu15@huawei.com>
+---
+ Documentation/arm64/elf_hwcaps.txt | 13 ++++--
+ arch/arm64/crypto/aes-ce-ccm-glue.c | 2 +-
+ arch/arm64/crypto/aes-neonbs-glue.c | 2 +-
+ arch/arm64/crypto/chacha20-neon-glue.c | 2 +-
+ arch/arm64/crypto/ghash-ce-glue.c | 6 +--
+ arch/arm64/crypto/sha256-glue.c | 4 +-
+ arch/arm64/include/asm/cpufeature.h | 22 +++++-----
+ arch/arm64/include/asm/hwcap.h | 49 ++++++++++++++++++++-
+ arch/arm64/include/uapi/asm/hwcap.h | 2 +-
+ arch/arm64/kernel/cpufeature.c | 60 +++++++++++++-------------
+ arch/arm64/kernel/cpuinfo.c | 2 +-
+ arch/arm64/kernel/fpsimd.c | 4 +-
+ drivers/clocksource/arm_arch_timer.c | 8 ++++
+ 13 files changed, 120 insertions(+), 56 deletions(-)
+
+diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
+index 6feaffe90e22..186feb16e2f2 100644
+--- a/Documentation/arm64/elf_hwcaps.txt
++++ b/Documentation/arm64/elf_hwcaps.txt
+@@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The
+ kernel exposes the presence of these features to userspace through a set
+ of flags called hwcaps, exposed in the auxilliary vector.
+
+-Userspace software can test for features by acquiring the AT_HWCAP entry
+-of the auxilliary vector, and testing whether the relevant flags are
+-set, e.g.
++Userspace software can test for features by acquiring the AT_HWCAP or
++AT_HWCAP2 entry of the auxiliary vector, and testing whether the relevant
++flags are set, e.g.
+
+ bool floating_point_is_present(void)
+ {
+@@ -182,3 +182,10 @@ HWCAP_FLAGM
+ HWCAP_SSBS
+
+ Functionality implied by ID_AA64PFR1_EL1.SSBS == 0b0010.
++
++
++4. Unused AT_HWCAP bits
++-----------------------
++
++For interoperation with userspace, the kernel guarantees that bits 62
++and 63 of AT_HWCAP will always be returned as 0.
+diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
+index 5fc6f51908fd..036ea77f83bc 100644
+--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
++++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
+@@ -372,7 +372,7 @@ static struct aead_alg ccm_aes_alg = {
+
+ static int __init aes_mod_init(void)
+ {
+- if (!(elf_hwcap & HWCAP_AES))
++ if (!cpu_have_named_feature(AES))
+ return -ENODEV;
+ return crypto_register_aead(&ccm_aes_alg);
+ }
+diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
+index 5cc248967387..742359801559 100644
+--- a/arch/arm64/crypto/aes-neonbs-glue.c
++++ b/arch/arm64/crypto/aes-neonbs-glue.c
+@@ -442,7 +442,7 @@ static int __init aes_init(void)
+ int err;
+ int i;
+
+- if (!(elf_hwcap & HWCAP_ASIMD))
++ if (!cpu_have_named_feature(ASIMD))
+ return -ENODEV;
+
+ err = crypto_register_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
+diff --git a/arch/arm64/crypto/chacha20-neon-glue.c b/arch/arm64/crypto/chacha20-neon-glue.c
+index 727579c93ded..bb3314905bee 100644
+--- a/arch/arm64/crypto/chacha20-neon-glue.c
++++ b/arch/arm64/crypto/chacha20-neon-glue.c
+@@ -114,7 +114,7 @@ static struct skcipher_alg alg = {
+
+ static int __init chacha20_simd_mod_init(void)
+ {
+- if (!(elf_hwcap & HWCAP_ASIMD))
++ if (!cpu_have_named_feature(ASIMD))
+ return -ENODEV;
+
+ return crypto_register_skcipher(&alg);
+diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
+index 1ed227bf6106..cd9d743cb40f 100644
+--- a/arch/arm64/crypto/ghash-ce-glue.c
++++ b/arch/arm64/crypto/ghash-ce-glue.c
+@@ -648,10 +648,10 @@ static int __init ghash_ce_mod_init(void)
+ {
+ int ret;
+
+- if (!(elf_hwcap & HWCAP_ASIMD))
++ if (!cpu_have_named_feature(ASIMD))
+ return -ENODEV;
+
+- if (elf_hwcap & HWCAP_PMULL)
++ if (cpu_have_named_feature(PMULL))
+ pmull_ghash_update = pmull_ghash_update_p64;
+
+ else
+@@ -661,7 +661,7 @@ static int __init ghash_ce_mod_init(void)
+ if (ret)
+ return ret;
+
+- if (elf_hwcap & HWCAP_PMULL) {
++ if (cpu_have_named_feature(PMULL)) {
+ ret = crypto_register_aead(&gcm_aes_alg);
+ if (ret)
+ crypto_unregister_shash(&ghash_alg);
+diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
+index 4aedeaefd61f..0cccdb9cc2c0 100644
+--- a/arch/arm64/crypto/sha256-glue.c
++++ b/arch/arm64/crypto/sha256-glue.c
+@@ -173,7 +173,7 @@ static int __init sha256_mod_init(void)
+ if (ret)
+ return ret;
+
+- if (elf_hwcap & HWCAP_ASIMD) {
++ if (cpu_have_named_feature(ASIMD)) {
+ ret = crypto_register_shashes(neon_algs, ARRAY_SIZE(neon_algs));
+ if (ret)
+ crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
+@@ -183,7 +183,7 @@ static int __init sha256_mod_init(void)
+
+ static void __exit sha256_mod_fini(void)
+ {
+- if (elf_hwcap & HWCAP_ASIMD)
++ if (cpu_have_named_feature(ASIMD))
+ crypto_unregister_shashes(neon_algs, ARRAY_SIZE(neon_algs));
+ crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
+ }
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index ffb0a1ec0088..eef5a9c9b823 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -14,15 +14,8 @@
+ #include <asm/hwcap.h>
+ #include <asm/sysreg.h>
+
+-/*
+- * In the arm64 world (as in the ARM world), elf_hwcap is used both internally
+- * in the kernel and for user space to keep track of which optional features
+- * are supported by the current system. So let's map feature 'x' to HWCAP_x.
+- * Note that HWCAP_x constants are bit fields so we need to take the log.
+- */
+-
+-#define MAX_CPU_FEATURES (8 * sizeof(elf_hwcap))
+-#define cpu_feature(x) ilog2(HWCAP_ ## x)
++#define MAX_CPU_FEATURES 64
++#define cpu_feature(x) KERNEL_HWCAP_ ## x
+
+ #ifndef __ASSEMBLY__
+
+@@ -372,10 +365,19 @@ extern bool set_cap_spectre_bhb;
+
+ bool this_cpu_has_cap(unsigned int cap);
+
++static inline void cpu_set_feature(unsigned int num)
++{
++ WARN_ON(num >= MAX_CPU_FEATURES);
++ elf_hwcap |= BIT(num);
++}
++#define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
++
+ static inline bool cpu_have_feature(unsigned int num)
+ {
+- return elf_hwcap & (1UL << num);
++ WARN_ON(num >= MAX_CPU_FEATURES);
++ return elf_hwcap & BIT(num);
+ }
++#define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
+
+ /* System capability check for constant caps */
+ static __always_inline bool __cpus_have_const_cap(int num)
+diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
+index 428b745b5386..458ff2d7ece3 100644
+--- a/arch/arm64/include/asm/hwcap.h
++++ b/arch/arm64/include/asm/hwcap.h
+@@ -40,11 +40,58 @@
+ #define COMPAT_HWCAP2_CRC32 (1 << 4)
+
+ #ifndef __ASSEMBLY__
++#include <linux/kernel.h>
++#include <linux/log2.h>
++
++/*
++ * For userspace we represent hwcaps as a collection of HWCAP{,2}_x bitfields
++ * as described in uapi/asm/hwcap.h. For the kernel we represent hwcaps as
++ * natural numbers (in a single range of size MAX_CPU_FEATURES) defined here
++ * with prefix KERNEL_HWCAP_ mapped to their HWCAP{,2}_x counterpart.
++ *
++ * Hwcaps should be set and tested within the kernel via the
++ * cpu_{set,have}_named_feature(feature) where feature is the unique suffix
++ * of KERNEL_HWCAP_{feature}.
++ */
++#define __khwcap_feature(x) const_ilog2(HWCAP_ ## x)
++#define KERNEL_HWCAP_FP __khwcap_feature(FP)
++#define KERNEL_HWCAP_ASIMD __khwcap_feature(ASIMD)
++#define KERNEL_HWCAP_EVTSTRM __khwcap_feature(EVTSTRM)
++#define KERNEL_HWCAP_AES __khwcap_feature(AES)
++#define KERNEL_HWCAP_PMULL __khwcap_feature(PMULL)
++#define KERNEL_HWCAP_SHA1 __khwcap_feature(SHA1)
++#define KERNEL_HWCAP_SHA2 __khwcap_feature(SHA2)
++#define KERNEL_HWCAP_CRC32 __khwcap_feature(CRC32)
++#define KERNEL_HWCAP_ATOMICS __khwcap_feature(ATOMICS)
++#define KERNEL_HWCAP_FPHP __khwcap_feature(FPHP)
++#define KERNEL_HWCAP_ASIMDHP __khwcap_feature(ASIMDHP)
++#define KERNEL_HWCAP_CPUID __khwcap_feature(CPUID)
++#define KERNEL_HWCAP_ASIMDRDM __khwcap_feature(ASIMDRDM)
++#define KERNEL_HWCAP_JSCVT __khwcap_feature(JSCVT)
++#define KERNEL_HWCAP_FCMA __khwcap_feature(FCMA)
++#define KERNEL_HWCAP_LRCPC __khwcap_feature(LRCPC)
++#define KERNEL_HWCAP_DCPOP __khwcap_feature(DCPOP)
++#define KERNEL_HWCAP_SHA3 __khwcap_feature(SHA3)
++#define KERNEL_HWCAP_SM3 __khwcap_feature(SM3)
++#define KERNEL_HWCAP_SM4 __khwcap_feature(SM4)
++#define KERNEL_HWCAP_ASIMDDP __khwcap_feature(ASIMDDP)
++#define KERNEL_HWCAP_SHA512 __khwcap_feature(SHA512)
++#define KERNEL_HWCAP_SVE __khwcap_feature(SVE)
++#define KERNEL_HWCAP_ASIMDFHM __khwcap_feature(ASIMDFHM)
++#define KERNEL_HWCAP_DIT __khwcap_feature(DIT)
++#define KERNEL_HWCAP_USCAT __khwcap_feature(USCAT)
++#define KERNEL_HWCAP_ILRCPC __khwcap_feature(ILRCPC)
++#define KERNEL_HWCAP_FLAGM __khwcap_feature(FLAGM)
++#define KERNEL_HWCAP_SSBS __khwcap_feature(SSBS)
++
++#define __khwcap2_feature(x) (const_ilog2(HWCAP2_ ## x) + 32)
++
+ /*
+ * This yields a mask that user programs can use to figure out what
+ * instruction set this cpu supports.
+ */
+-#define ELF_HWCAP (elf_hwcap)
++#define ELF_HWCAP lower_32_bits(elf_hwcap)
++#define ELF_HWCAP2 upper_32_bits(elf_hwcap)
+
+ #ifdef CONFIG_AARCH32_EL0
+ extern unsigned int a32_elf_hwcap, a32_elf_hwcap2;
+diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
+index 2bcd6e4f3474..602158a55554 100644
+--- a/arch/arm64/include/uapi/asm/hwcap.h
++++ b/arch/arm64/include/uapi/asm/hwcap.h
+@@ -18,7 +18,7 @@
+ #define _UAPI__ASM_HWCAP_H
+
+ /*
+- * HWCAP flags - for elf_hwcap (in kernel) and AT_HWCAP
++ * HWCAP flags - for AT_HWCAP
+ */
+ #define HWCAP_FP (1 << 0)
+ #define HWCAP_ASIMD (1 << 1)
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 1c93cc3f7692..3a0e7e10f2d7 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1553,35 +1553,35 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
+ }
+
+ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_PMULL),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_AES),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA1),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA2),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_SHA512),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_CRC32),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ATOMICS),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RDM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDRDM),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA3),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM3),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM4),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDDP),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDFHM),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FLAGM),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_FP),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_ASIMDHP),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_DIT),
+- HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_DCPOP),
+- HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_JSCVT),
+- HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
+- HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
+- HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ILRCPC),
+- HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_USCAT),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_PMULL),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AES),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA1),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA2),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_SHA512),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_CRC32),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ATOMICS),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RDM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDRDM),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA3),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SM3),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SM4),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDDP),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDFHM),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FLAGM),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_FP),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FPHP),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_ASIMD),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DCPOP),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_JSCVT),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FCMA),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_LRCPC),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ILRCPC),
++ HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT),
+ #ifdef CONFIG_ARM64_SVE
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, KERNEL_HWCAP_SVE),
+ #endif
+- HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, HWCAP_SSBS),
++ HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
+ {},
+ };
+
+@@ -1627,7 +1627,7 @@ static void __init cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap)
+ {
+ switch (cap->hwcap_type) {
+ case CAP_HWCAP:
+- elf_hwcap |= cap->hwcap;
++ cpu_set_feature(cap->hwcap);
+ break;
+ #ifdef CONFIG_AARCH32_EL0
+ case CAP_COMPAT_HWCAP:
+@@ -1650,7 +1650,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
+
+ switch (cap->hwcap_type) {
+ case CAP_HWCAP:
+- rc = (elf_hwcap & cap->hwcap) != 0;
++ rc = cpu_have_feature(cap->hwcap);
+ break;
+ #ifdef CONFIG_AARCH32_EL0
+ case CAP_COMPAT_HWCAP:
+@@ -1671,7 +1671,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
+ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
+ {
+ /* We support emulation of accesses to CPU ID feature registers */
+- elf_hwcap |= HWCAP_CPUID;
++ cpu_set_named_feature(CPUID);
+ for (; hwcaps->matches; hwcaps++)
+ if (hwcaps->matches(hwcaps, cpucap_default_scope(hwcaps)))
+ cap_set_elf_hwcap(hwcaps);
+diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
+index 005d88db1082..bfe3bb8f05fe 100644
+--- a/arch/arm64/kernel/cpuinfo.c
++++ b/arch/arm64/kernel/cpuinfo.c
+@@ -164,7 +164,7 @@ static int c_show(struct seq_file *m, void *v)
+ #endif /* CONFIG_AARCH32_EL0 */
+ } else {
+ for (j = 0; hwcap_str[j]; j++)
+- if (elf_hwcap & (1 << j))
++ if (cpu_have_feature(j))
+ seq_printf(m, " %s", hwcap_str[j]);
+ }
+ seq_puts(m, "\n");
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index bb048144c3bd..6972de5681ec 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -1302,14 +1302,14 @@ static inline void fpsimd_hotplug_init(void) { }
+ */
+ static int __init fpsimd_init(void)
+ {
+- if (elf_hwcap & HWCAP_FP) {
++ if (cpu_have_named_feature(FP)) {
+ fpsimd_pm_init();
+ fpsimd_hotplug_init();
+ } else {
+ pr_notice("Floating-point is not implemented\n");
+ }
+
+- if (!(elf_hwcap & HWCAP_ASIMD))
++ if (!cpu_have_named_feature(ASIMD))
+ pr_notice("Advanced SIMD is not implemented\n");
+
+ return sve_sysctl_init();
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index 58863fd9c91b..fbfc81932dea 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -825,7 +825,11 @@ static void arch_timer_evtstrm_enable(int divider)
+ cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
+ | ARCH_TIMER_VIRT_EVT_EN;
+ arch_timer_set_cntkctl(cntkctl);
++#ifdef CONFIG_ARM64
++ cpu_set_named_feature(EVTSTRM);
++#else
+ elf_hwcap |= HWCAP_EVTSTRM;
++#endif
+ #ifdef CONFIG_AARCH32_EL0
+ a32_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
+ #endif
+@@ -1059,7 +1063,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
+ } else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
+ arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
+
++#ifdef CONFIG_ARM64
++ if (cpu_have_named_feature(EVTSTRM))
++#else
+ if (elf_hwcap & HWCAP_EVTSTRM)
++#endif
+ cpumask_set_cpu(smp_processor_id(), &evtstrm_available);
+ }
+ return NOTIFY_OK;
+--
+2.25.1
+
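For reviewers who want to sanity-check the new aux vector entry once this
patch is applied, a minimal userspace sketch follows (not part of the
series; getauxval() needs glibc >= 2.16, and the HWCAP bit values are
defined locally in case the installed uapi headers predate this backport):

/* Probe AT_HWCAP/AT_HWCAP2 from userspace. getauxval() returns 0 for
 * entries the running kernel does not provide, so this degrades
 * gracefully on kernels without AT_HWCAP2. */
#include <stdio.h>
#include <sys/auxv.h>

#ifndef AT_HWCAP2
#define AT_HWCAP2	26
#endif
#ifndef HWCAP_SVE
#define HWCAP_SVE	(1UL << 22)	/* arch/arm64/include/uapi/asm/hwcap.h */
#endif
#ifndef HWCAP2_SVE2
#define HWCAP2_SVE2	(1UL << 1)	/* added by patch 2/4 below */
#endif

int main(void)
{
	unsigned long hwcap = getauxval(AT_HWCAP);
	unsigned long hwcap2 = getauxval(AT_HWCAP2);

	printf("sve : %s\n", (hwcap & HWCAP_SVE) ? "yes" : "no");
	printf("sve2: %s\n", (hwcap2 & HWCAP2_SVE2) ? "yes" : "no");
	return 0;
}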
diff --git a/patches/0098-arm64-Expose-SVE2-features-for-userspace.patch b/patches/0098-arm64-Expose-SVE2-features-for-userspace.patch
new file mode 100644
index 000000000000..45f709f3fab0
--- /dev/null
+++ b/patches/0098-arm64-Expose-SVE2-features-for-userspace.patch
@@ -0,0 +1,275 @@
+From 2ba00283ddd367afa75f72e3b4de15f80b4a97a7 Mon Sep 17 00:00:00 2001
+From: Dave Martin <Dave.Martin@arm.com>
+Date: Thu, 18 Apr 2019 18:41:38 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 2/4] arm64: Expose SVE2 features for
+ userspace
+
+mainline inclusion
+from mainline-v5.2-rc1
+commit 06a916feca2b262ab0c1a2aeb68882f4b1108a07
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+This patch provides support for reporting the presence of SVE2 and
+its optional features to userspace.
+
+This will also enable visibility of SVE2 for guests, when KVM
+support for SVE-enabled guests is available.
+
+Signed-off-by: Dave Martin <Dave.Martin@arm.com>
+Signed-off-by: Will Deacon <will.deacon@arm.com>
+
+Conflicts:
+ arch/arm64/include/asm/hwcap.h
+ arch/arm64/include/uapi/asm/hwcap.h
+ arch/arm64/kernel/cpuinfo.c
+
+Signed-off-by: Yu Liao <liaoyu15@huawei.com>
+---
+ Documentation/arm64/cpu-feature-registers.txt | 16 +++++++++++++
+ Documentation/arm64/elf_hwcaps.txt | 24 +++++++++++++++++++
+ Documentation/arm64/sve.txt | 17 +++++++++++++
+ arch/arm64/Kconfig | 3 +++
+ arch/arm64/include/asm/hwcap.h | 6 +++++
+ arch/arm64/include/asm/sysreg.h | 14 +++++++++++
+ arch/arm64/include/uapi/asm/hwcap.h | 10 ++++++++
+ arch/arm64/kernel/cpufeature.c | 17 ++++++++++++-
+ arch/arm64/kernel/cpuinfo.c | 10 ++++++++
+ 9 files changed, 116 insertions(+), 1 deletion(-)
+
+diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
+index 7964f03846b1..fcd2e1deb886 100644
+--- a/Documentation/arm64/cpu-feature-registers.txt
++++ b/Documentation/arm64/cpu-feature-registers.txt
+@@ -201,6 +201,22 @@ infrastructure:
+ | AT | [35-32] | y |
+ x--------------------------------------------------x
+
++ 6) ID_AA64ZFR0_EL1 - SVE feature ID register 0
++
++ x--------------------------------------------------x
++ | Name | bits | visible |
++ |--------------------------------------------------|
++ | SM4 | [43-40] | y |
++ |--------------------------------------------------|
++ | SHA3 | [35-32] | y |
++ |--------------------------------------------------|
++ | BitPerm | [19-16] | y |
++ |--------------------------------------------------|
++ | AES | [7-4] | y |
++ |--------------------------------------------------|
++ | SVEVer | [3-0] | y |
++ x--------------------------------------------------x
++
+ Appendix I: Example
+ ---------------------------
+
+diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
+index 186feb16e2f2..e2ce14dfccf2 100644
+--- a/Documentation/arm64/elf_hwcaps.txt
++++ b/Documentation/arm64/elf_hwcaps.txt
+@@ -159,6 +159,30 @@ HWCAP_SVE
+
+ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001.
+
++HWCAP2_SVE2
++
++ Functionality implied by ID_AA64ZFR0_EL1.SVEVer == 0b0001.
++
++HWCAP2_SVEAES
++
++ Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0001.
++
++HWCAP2_SVEPMULL
++
++ Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0010.
++
++HWCAP2_SVEBITPERM
++
++ Functionality implied by ID_AA64ZFR0_EL1.BitPerm == 0b0001.
++
++HWCAP2_SVESHA3
++
++ Functionality implied by ID_AA64ZFR0_EL1.SHA3 == 0b0001.
++
++HWCAP2_SVESM4
++
++ Functionality implied by ID_AA64ZFR0_EL1.SM4 == 0b0001.
++
+ HWCAP_ASIMDFHM
+
+ Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
+diff --git a/Documentation/arm64/sve.txt b/Documentation/arm64/sve.txt
+index 2001d84384ca..5689fc9a976a 100644
+--- a/Documentation/arm64/sve.txt
++++ b/Documentation/arm64/sve.txt
+@@ -34,6 +34,23 @@ model features for SVE is included in Appendix A.
+ following sections: software that needs to verify that those interfaces are
+ present must check for HWCAP_SVE instead.
+
++* On hardware that supports the SVE2 extensions, HWCAP2_SVE2 will also
++ be reported in the AT_HWCAP2 aux vector entry. In addition to this,
++ optional extensions to SVE2 may be reported by the presence of:
++
++ HWCAP2_SVE2
++ HWCAP2_SVEAES
++ HWCAP2_SVEPMULL
++ HWCAP2_SVEBITPERM
++ HWCAP2_SVESHA3
++ HWCAP2_SVESM4
++
++ This list may be extended over time as the SVE architecture evolves.
++
++ These extensions are also reported via the CPU ID register ID_AA64ZFR0_EL1,
++ which userspace can read using an MRS instruction. See elf_hwcaps.txt and
++ cpu-feature-registers.txt for details.
++
+ * Debuggers should restrict themselves to interacting with the target via the
+ NT_ARM_SVE regset. The recommended way of detecting support for this regset
+ is to connect to a target process first and then attempt a
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 88b8031a93b2..f7398a1904a2 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1316,6 +1316,9 @@ config ARM64_SVE
+
+ To enable use of this extension on CPUs that implement it, say Y.
+
++ On CPUs that support the SVE2 extensions, this option will enable
++ those too.
++
+ Note that for architectural reasons, firmware _must_ implement SVE
+ support when running on SVE capable hardware. The required support
+ is present in:
+diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
+index 458ff2d7ece3..08315a3bf387 100644
+--- a/arch/arm64/include/asm/hwcap.h
++++ b/arch/arm64/include/asm/hwcap.h
+@@ -85,6 +85,12 @@
+ #define KERNEL_HWCAP_SSBS __khwcap_feature(SSBS)
+
+ #define __khwcap2_feature(x) (const_ilog2(HWCAP2_ ## x) + 32)
++#define KERNEL_HWCAP_SVE2 __khwcap2_feature(SVE2)
++#define KERNEL_HWCAP_SVEAES __khwcap2_feature(SVEAES)
++#define KERNEL_HWCAP_SVEPMULL __khwcap2_feature(SVEPMULL)
++#define KERNEL_HWCAP_SVEBITPERM __khwcap2_feature(SVEBITPERM)
++#define KERNEL_HWCAP_SVESHA3 __khwcap2_feature(SVESHA3)
++#define KERNEL_HWCAP_SVESM4 __khwcap2_feature(SVESM4)
+
+ /*
+ * This yields a mask that user programs can use to figure out what
+diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
+index 0fd51d253648..69618e602ed8 100644
+--- a/arch/arm64/include/asm/sysreg.h
++++ b/arch/arm64/include/asm/sysreg.h
+@@ -564,6 +564,20 @@
+ #define ID_AA64PFR1_SSBS_PSTATE_ONLY 1
+ #define ID_AA64PFR1_SSBS_PSTATE_INSNS 2
+
++/* id_aa64zfr0 */
++#define ID_AA64ZFR0_SM4_SHIFT 40
++#define ID_AA64ZFR0_SHA3_SHIFT 32
++#define ID_AA64ZFR0_BITPERM_SHIFT 16
++#define ID_AA64ZFR0_AES_SHIFT 4
++#define ID_AA64ZFR0_SVEVER_SHIFT 0
++
++#define ID_AA64ZFR0_SM4 0x1
++#define ID_AA64ZFR0_SHA3 0x1
++#define ID_AA64ZFR0_BITPERM 0x1
++#define ID_AA64ZFR0_AES 0x1
++#define ID_AA64ZFR0_AES_PMULL 0x2
++#define ID_AA64ZFR0_SVEVER_SVE2 0x1
++
+ /* id_aa64mmfr0 */
+ #define ID_AA64MMFR0_TGRAN4_SHIFT 28
+ #define ID_AA64MMFR0_TGRAN64_SHIFT 24
+diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
+index 602158a55554..fea93415b493 100644
+--- a/arch/arm64/include/uapi/asm/hwcap.h
++++ b/arch/arm64/include/uapi/asm/hwcap.h
+@@ -50,4 +50,14 @@
+ #define HWCAP_FLAGM (1 << 27)
+ #define HWCAP_SSBS (1 << 28)
+
++/*
++ * HWCAP2 flags - for AT_HWCAP2
++ */
++#define HWCAP2_SVE2 (1 << 1)
++#define HWCAP2_SVEAES (1 << 2)
++#define HWCAP2_SVEPMULL (1 << 3)
++#define HWCAP2_SVEBITPERM (1 << 4)
++#define HWCAP2_SVESHA3 (1 << 5)
++#define HWCAP2_SVESM4 (1 << 6)
++
+ #endif /* _UAPI__ASM_HWCAP_H */
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 3a0e7e10f2d7..4f384bbd86c7 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -183,6 +183,15 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+ ARM64_FTR_END,
+ };
+
++static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
++ ARM64_FTR_END,
++};
++
+ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
+ /*
+ * We already refuse to boot CPUs that don't support our configured
+@@ -399,7 +408,7 @@ static const struct __ftr_reg_entry {
+ /* Op1 = 0, CRn = 0, CRm = 4 */
+ ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
+ ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1),
+- ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_raz),
++ ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0),
+
+ /* Op1 = 0, CRn = 0, CRm = 5 */
+ ARM64_FTR_REG(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0),
+@@ -1580,6 +1589,12 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT),
+ #ifdef CONFIG_ARM64_SVE
+ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, KERNEL_HWCAP_SVE),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SVEVER_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SVEVER_SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_AES_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_AES, CAP_HWCAP, KERNEL_HWCAP_SVEAES),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_AES_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_AES_PMULL, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_BITPERM_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_BITPERM, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SHA3_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SHA3, CAP_HWCAP, KERNEL_HWCAP_SVESHA3),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SM4_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SM4, CAP_HWCAP, KERNEL_HWCAP_SVESM4),
+ #endif
+ HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
+ {},
+diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
+index bfe3bb8f05fe..c8e4ddd23f0c 100644
+--- a/arch/arm64/kernel/cpuinfo.c
++++ b/arch/arm64/kernel/cpuinfo.c
+@@ -82,6 +82,16 @@ static const char *const hwcap_str[] = {
+ "ilrcpc",
+ "flagm",
+ "ssbs",
++ "sb",
++ "paca",
++ "pacg",
++ "dcpodp",
++ "sve2",
++ "sveaes",
++ "svepmull",
++ "svebitperm",
++ "svesha3",
++ "svesm4",
+ NULL
+ };
+
+--
+2.25.1
+
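As a companion to the sve.txt note above that userspace can read
ID_AA64ZFR0_EL1 with an MRS instruction, here is a hedged sketch of that
probe. It assumes the kernel advertises HWCAP_CPUID (i.e. ID register
emulation is available); the generic S3_0_C0_C4_4 name encodes
op0=3, op1=0, CRn=0, CRm=4, op2=4 so assemblers without SVE support
accept it:

/* Userspace read of ID_AA64ZFR0_EL1: the EL0 access traps and is
 * emulated by the kernel, which returns only the fields marked
 * visible in ftr_id_aa64zfr0[] above. Check
 * getauxval(AT_HWCAP) & HWCAP_CPUID before relying on this. */
static inline unsigned long read_id_aa64zfr0(void)
{
	unsigned long val;

	asm volatile("mrs %0, S3_0_C0_C4_4" : "=r" (val));	/* ID_AA64ZFR0_EL1 */
	return val;
}

/* SVEVer lives in bits [3:0]; 0b0001 (ID_AA64ZFR0_SVEVER_SVE2) means SVE2. */
#define ZFR0_SVEVER(zfr0)	((zfr0) & 0xf)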
diff --git a/patches/0099-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch b/patches/0099-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
new file mode 100644
index 000000000000..4ce008cecf19
--- /dev/null
+++ b/patches/0099-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
@@ -0,0 +1,55 @@
+From 9f8dff634365e7bfa0c764ccd31b54a4f0992bc8 Mon Sep 17 00:00:00 2001
+From: Dave Martin <Dave.Martin@arm.com>
+Date: Mon, 3 Jun 2019 16:35:02 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 3/4] arm64: cpufeature: Fix missing
+ ZFR0 in __read_sysreg_by_encoding()
+
+mainline inclusion
+from mainline-v5.2-rc4
+commit 78ed70bf3a923f1965e3c19f544677d418397108
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+In commit 06a916feca2b ("arm64: Expose SVE2 features for
+userspace"), new hwcaps are added that are detected via fields in
+the SVE-specific ID register ID_AA64ZFR0_EL1.
+
+In order to check compatibility of secondary cpus with the hwcaps
+established at boot, the cpufeatures code uses
+__read_sysreg_by_encoding() to read this ID register based on the
+sys_reg field of the arm64_elf_hwcaps[] table.
+
+This leads to a kernel splat if an hwcap uses an ID register that
+__read_sysreg_by_encoding() doesn't explicitly handle, as now
+happens when exercising cpu hotplug on an SVE2-capable platform.
+
+So fix it by adding the required case in there.
+
+Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace")
+Signed-off-by: Dave Martin <Dave.Martin@arm.com>
+Signed-off-by: Will Deacon <will.deacon@arm.com>
+Signed-off-by: Yu Liao <liaoyu15@huawei.com>
+---
+ arch/arm64/kernel/cpufeature.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 4f384bbd86c7..8e7473df2660 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -828,6 +828,7 @@ static u64 __read_sysreg_by_encoding(u32 sys_id)
+
+ read_sysreg_case(SYS_ID_AA64PFR0_EL1);
+ read_sysreg_case(SYS_ID_AA64PFR1_EL1);
++ read_sysreg_case(SYS_ID_AA64ZFR0_EL1);
+ read_sysreg_case(SYS_ID_AA64DFR0_EL1);
+ read_sysreg_case(SYS_ID_AA64DFR1_EL1);
+ read_sysreg_case(SYS_ID_AA64MMFR0_EL1);
+--
+2.25.1
+
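For context, the switch in __read_sysreg_by_encoding() has roughly the
shape sketched below (abridged, not verbatim 4.19 code): any ID register
named in arm64_elf_hwcaps[] but missing a case falls through to BUG()
when a late-onlined CPU is verified, which is the splat described above.

#define read_sysreg_case(r)	\
	case r:		return read_sysreg_s(r)

static u64 __read_sysreg_by_encoding(u32 sys_id)
{
	switch (sys_id) {
	read_sysreg_case(SYS_ID_AA64PFR0_EL1);
	read_sysreg_case(SYS_ID_AA64ZFR0_EL1);	/* the case this patch adds */
	/* ... remaining ID registers elided ... */
	default:
		BUG();			/* unhandled encoding -> kernel splat */
	}
}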
diff --git a/patches/0100-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch b/patches/0100-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
new file mode 100644
index 000000000000..7df40531adda
--- /dev/null
+++ b/patches/0100-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
@@ -0,0 +1,67 @@
+From 515c2917ae3bc768e8793dac6b27ea4dff36b40c Mon Sep 17 00:00:00 2001
+From: Julien Grall <julien.grall@arm.com>
+Date: Mon, 14 Oct 2019 11:21:13 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 4/4] arm64: cpufeature: Treat
+ ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
+
+mainline inclusion
+from mainline-v5.4-rc4
+commit ec52c7134b1fcef0edfc56d55072fd4f261ef198
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+If CONFIG_ARM64_SVE=n then we fail to report ID_AA64ZFR0_EL1 as 0 when
+read by userspace, despite being required by the architecture. Although
+this is theoretically a change in ABI, userspace will first check for
+the presence of SVE via the HWCAP or the ID_AA64PFR0_EL1.SVE field
+before probing the ID_AA64ZFR0_EL1 register. Given that these are
+reported correctly for this configuration, we can safely tighten up the
+current behaviour.
+
+Ensure ID_AA64ZFR0_EL1 is treated as RAZ when CONFIG_ARM64_SVE=n.
+
+Signed-off-by: Julien Grall <julien.grall@arm.com>
+Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
+Reviewed-by: Mark Rutland <mark.rutland@arm.com>
+Reviewed-by: Dave Martin <dave.martin@arm.com>
+Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace")
+Signed-off-by: Will Deacon <will@kernel.org>
+Signed-off-by: Yu Liao <liaoyu15@huawei.com>
+---
+ arch/arm64/kernel/cpufeature.c | 15 ++++++++++-----
+ 1 file changed, 10 insertions(+), 5 deletions(-)
+
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 8e7473df2660..98a8b2703f84 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -184,11 +184,16 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+ };
+
+ static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
+ ARM64_FTR_END,
+ };
+
+--
+2.25.1
+
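The visibility helper used in the hunk above is defined in
arch/arm64/include/asm/cpufeature.h along these lines; with
CONFIG_ARM64_SVE=n every ID_AA64ZFR0_EL1 field becomes FTR_HIDDEN, so
the emulated userspace read of the register yields zero, i.e. it is
treated as RAZ:

#define FTR_VISIBLE_IF_IS_ENABLED(config)		\
	(IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)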
diff --git a/series.conf b/series.conf
index 4a8fef62b657..b4c8a39443d8 100644
--- a/series.conf
+++ b/series.conf
@@ -97,3 +97,7 @@ patches/0093-Revert-perf-smmuv3_pmu-Enable-HiSilicon-Erratum-1620.patch
patches/0094-perf-smmuv3-Enable-HiSilicon-Erratum-162001800-quirk.patch
patches/0095-perf-smmuv3-Enable-HiSilicon-Erratum-162001900-quirk.patch
patches/0096-perf-smmuv3-Add-MODULE_ALIAS-for-module-auto-loading.patch
+patches/0097-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
+patches/0098-arm64-Expose-SVE2-features-for-userspace.patch
+patches/0099-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
+patches/0100-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
--
2.25.1
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_LRCPC),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ILRCPC),
++ HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT),
+ #ifdef CONFIG_ARM64_SVE
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, KERNEL_HWCAP_SVE),
+ #endif
+- HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, HWCAP_SSBS),
++ HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
+ {},
+ };
+
+@@ -1627,7 +1627,7 @@ static void __init cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap)
+ {
+ switch (cap->hwcap_type) {
+ case CAP_HWCAP:
+- elf_hwcap |= cap->hwcap;
++ cpu_set_feature(cap->hwcap);
+ break;
+ #ifdef CONFIG_AARCH32_EL0
+ case CAP_COMPAT_HWCAP:
+@@ -1650,7 +1650,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
+
+ switch (cap->hwcap_type) {
+ case CAP_HWCAP:
+- rc = (elf_hwcap & cap->hwcap) != 0;
++ rc = cpu_have_feature(cap->hwcap);
+ break;
+ #ifdef CONFIG_AARCH32_EL0
+ case CAP_COMPAT_HWCAP:
+@@ -1671,7 +1671,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
+ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
+ {
+ /* We support emulation of accesses to CPU ID feature registers */
+- elf_hwcap |= HWCAP_CPUID;
++ cpu_set_named_feature(CPUID);
+ for (; hwcaps->matches; hwcaps++)
+ if (hwcaps->matches(hwcaps, cpucap_default_scope(hwcaps)))
+ cap_set_elf_hwcap(hwcaps);
+diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
+index 005d88db1082..bfe3bb8f05fe 100644
+--- a/arch/arm64/kernel/cpuinfo.c
++++ b/arch/arm64/kernel/cpuinfo.c
+@@ -164,7 +164,7 @@ static int c_show(struct seq_file *m, void *v)
+ #endif /* CONFIG_AARCH32_EL0 */
+ } else {
+ for (j = 0; hwcap_str[j]; j++)
+- if (elf_hwcap & (1 << j))
++ if (cpu_have_feature(j))
+ seq_printf(m, " %s", hwcap_str[j]);
+ }
+ seq_puts(m, "\n");
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index bb048144c3bd..6972de5681ec 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -1302,14 +1302,14 @@ static inline void fpsimd_hotplug_init(void) { }
+ */
+ static int __init fpsimd_init(void)
+ {
+- if (elf_hwcap & HWCAP_FP) {
++ if (cpu_have_named_feature(FP)) {
+ fpsimd_pm_init();
+ fpsimd_hotplug_init();
+ } else {
+ pr_notice("Floating-point is not implemented\n");
+ }
+
+- if (!(elf_hwcap & HWCAP_ASIMD))
++ if (!cpu_have_named_feature(ASIMD))
+ pr_notice("Advanced SIMD is not implemented\n");
+
+ return sve_sysctl_init();
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index 58863fd9c91b..fbfc81932dea 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -825,7 +825,11 @@ static void arch_timer_evtstrm_enable(int divider)
+ cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
+ | ARCH_TIMER_VIRT_EVT_EN;
+ arch_timer_set_cntkctl(cntkctl);
++#ifdef CONFIG_ARM64
++ cpu_set_named_feature(EVTSTRM);
++#else
+ elf_hwcap |= HWCAP_EVTSTRM;
++#endif
+ #ifdef CONFIG_AARCH32_EL0
+ a32_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
+ #endif
+@@ -1059,7 +1063,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
+ } else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
+ arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
+
++#ifdef CONFIG_ARM64
++ if (cpu_have_named_feature(EVTSTRM))
++#else
+ if (elf_hwcap & HWCAP_EVTSTRM)
++#endif
+ cpumask_set_cpu(smp_processor_id(), &evtstrm_available);
+ }
+ return NOTIFY_OK;
+--
+2.25.1
+
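For reference, the consumer side of this change is a plain auxiliary
vector lookup, as the elf_hwcaps.txt hunk above describes. A minimal
sketch, not part of the series: it assumes glibc's getauxval()
(available since glibc 2.16) and defines AT_HWCAP2 locally in case the
toolchain's <elf.h> predates it. On kernels without this patch,
getauxval(AT_HWCAP2) simply returns 0.

/* build: gcc -o hwcaps hwcaps.c (arm64) */
#include <stdio.h>
#include <sys/auxv.h>

#ifndef AT_HWCAP2
#define AT_HWCAP2 26	/* provided by <elf.h> on newer toolchains */
#endif

int main(void)
{
	/* Raw hwcap words; individual HWCAP{,2}_* bits come from
	 * uapi/asm/hwcap.h. */
	printf("AT_HWCAP  = %#lx\n", getauxval(AT_HWCAP));
	printf("AT_HWCAP2 = %#lx\n", getauxval(AT_HWCAP2));
	return 0;
}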
diff --git a/patches/0098-arm64-Expose-SVE2-features-for-userspace.patch b/patches/0098-arm64-Expose-SVE2-features-for-userspace.patch
new file mode 100644
index 000000000000..45f709f3fab0
--- /dev/null
+++ b/patches/0098-arm64-Expose-SVE2-features-for-userspace.patch
@@ -0,0 +1,275 @@
+From 2ba00283ddd367afa75f72e3b4de15f80b4a97a7 Mon Sep 17 00:00:00 2001
+From: Dave Martin <Dave.Martin(a)arm.com>
+Date: Thu, 18 Apr 2019 18:41:38 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 2/4] arm64: Expose SVE2 features for
+ userspace
+
+mainline inclusion
+from mainline-v5.2-rc1
+commit 06a916feca2b262ab0c1a2aeb68882f4b1108a07
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+This patch provides support for reporting the presence of SVE2 and
+its optional features to userspace.
+
+This will also enable visibility of SVE2 for guests, when KVM
+support for SVE-enabled guests is available.
+
+Signed-off-by: Dave Martin <Dave.Martin(a)arm.com>
+Signed-off-by: Will Deacon <will.deacon(a)arm.com>
+
+Conflicts:
+ arch/arm64/include/asm/hwcap.h
+ arch/arm64/include/uapi/asm/hwcap.h
+ arch/arm64/kernel/cpuinfo.c
+
+Signed-off-by: Yu Liao <liaoyu15(a)huawei.com>
+---
+ Documentation/arm64/cpu-feature-registers.txt | 16 +++++++++++++
+ Documentation/arm64/elf_hwcaps.txt | 24 +++++++++++++++++++
+ Documentation/arm64/sve.txt | 17 +++++++++++++
+ arch/arm64/Kconfig | 3 +++
+ arch/arm64/include/asm/hwcap.h | 6 +++++
+ arch/arm64/include/asm/sysreg.h | 14 +++++++++++
+ arch/arm64/include/uapi/asm/hwcap.h | 10 ++++++++
+ arch/arm64/kernel/cpufeature.c | 17 ++++++++++++-
+ arch/arm64/kernel/cpuinfo.c | 10 ++++++++
+ 9 files changed, 116 insertions(+), 1 deletion(-)
+
+diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
+index 7964f03846b1..fcd2e1deb886 100644
+--- a/Documentation/arm64/cpu-feature-registers.txt
++++ b/Documentation/arm64/cpu-feature-registers.txt
+@@ -201,6 +201,22 @@ infrastructure:
+ | AT | [35-32] | y |
+ x--------------------------------------------------x
+
++ 6) ID_AA64ZFR0_EL1 - SVE feature ID register 0
++
++ x--------------------------------------------------x
++ | Name | bits | visible |
++ |--------------------------------------------------|
++ | SM4 | [43-40] | y |
++ |--------------------------------------------------|
++ | SHA3 | [35-32] | y |
++ |--------------------------------------------------|
++ | BitPerm | [19-16] | y |
++ |--------------------------------------------------|
++ | AES | [7-4] | y |
++ |--------------------------------------------------|
++ | SVEVer | [3-0] | y |
++ x--------------------------------------------------x
++
+ Appendix I: Example
+ ---------------------------
+
+diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
+index 186feb16e2f2..e2ce14dfccf2 100644
+--- a/Documentation/arm64/elf_hwcaps.txt
++++ b/Documentation/arm64/elf_hwcaps.txt
+@@ -159,6 +159,30 @@ HWCAP_SVE
+
+ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001.
+
++HWCAP2_SVE2
++
++ Functionality implied by ID_AA64ZFR0_EL1.SVEVer == 0b0001.
++
++HWCAP2_SVEAES
++
++ Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0001.
++
++HWCAP2_SVEPMULL
++
++ Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0010.
++
++HWCAP2_SVEBITPERM
++
++ Functionality implied by ID_AA64ZFR0_EL1.BitPerm == 0b0001.
++
++HWCAP2_SVESHA3
++
++ Functionality implied by ID_AA64ZFR0_EL1.SHA3 == 0b0001.
++
++HWCAP2_SVESM4
++
++ Functionality implied by ID_AA64ZFR0_EL1.SM4 == 0b0001.
++
+ HWCAP_ASIMDFHM
+
+ Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
+diff --git a/Documentation/arm64/sve.txt b/Documentation/arm64/sve.txt
+index 2001d84384ca..5689fc9a976a 100644
+--- a/Documentation/arm64/sve.txt
++++ b/Documentation/arm64/sve.txt
+@@ -34,6 +34,23 @@ model features for SVE is included in Appendix A.
+ following sections: software that needs to verify that those interfaces are
+ present must check for HWCAP_SVE instead.
+
++* On hardware that supports the SVE2 extensions, HWCAP2_SVE2 will also
++ be reported in the AT_HWCAP2 aux vector entry. In addition to this,
++ optional extensions to SVE2 may be reported by the presence of:
++
++ HWCAP2_SVE2
++ HWCAP2_SVEAES
++ HWCAP2_SVEPMULL
++ HWCAP2_SVEBITPERM
++ HWCAP2_SVESHA3
++ HWCAP2_SVESM4
++
++ This list may be extended over time as the SVE architecture evolves.
++
++ These extensions are also reported via the CPU ID register ID_AA64ZFR0_EL1,
++ which userspace can read using an MRS instruction. See elf_hwcaps.txt and
++ cpu-feature-registers.txt for details.
++
+ * Debuggers should restrict themselves to interacting with the target via the
+ NT_ARM_SVE regset. The recommended way of detecting support for this regset
+ is to connect to a target process first and then attempt a
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 88b8031a93b2..f7398a1904a2 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1316,6 +1316,9 @@ config ARM64_SVE
+
+ To enable use of this extension on CPUs that implement it, say Y.
+
++ On CPUs that support the SVE2 extensions, this option will enable
++ those too.
++
+ Note that for architectural reasons, firmware _must_ implement SVE
+ support when running on SVE capable hardware. The required support
+ is present in:
+diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
+index 458ff2d7ece3..08315a3bf387 100644
+--- a/arch/arm64/include/asm/hwcap.h
++++ b/arch/arm64/include/asm/hwcap.h
+@@ -85,6 +85,12 @@
+ #define KERNEL_HWCAP_SSBS __khwcap_feature(SSBS)
+
+ #define __khwcap2_feature(x) (const_ilog2(HWCAP2_ ## x) + 32)
++#define KERNEL_HWCAP_SVE2 __khwcap2_feature(SVE2)
++#define KERNEL_HWCAP_SVEAES __khwcap2_feature(SVEAES)
++#define KERNEL_HWCAP_SVEPMULL __khwcap2_feature(SVEPMULL)
++#define KERNEL_HWCAP_SVEBITPERM __khwcap2_feature(SVEBITPERM)
++#define KERNEL_HWCAP_SVESHA3 __khwcap2_feature(SVESHA3)
++#define KERNEL_HWCAP_SVESM4 __khwcap2_feature(SVESM4)
+
+ /*
+ * This yields a mask that user programs can use to figure out what
+diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
+index 0fd51d253648..69618e602ed8 100644
+--- a/arch/arm64/include/asm/sysreg.h
++++ b/arch/arm64/include/asm/sysreg.h
+@@ -564,6 +564,20 @@
+ #define ID_AA64PFR1_SSBS_PSTATE_ONLY 1
+ #define ID_AA64PFR1_SSBS_PSTATE_INSNS 2
+
++/* id_aa64zfr0 */
++#define ID_AA64ZFR0_SM4_SHIFT 40
++#define ID_AA64ZFR0_SHA3_SHIFT 32
++#define ID_AA64ZFR0_BITPERM_SHIFT 16
++#define ID_AA64ZFR0_AES_SHIFT 4
++#define ID_AA64ZFR0_SVEVER_SHIFT 0
++
++#define ID_AA64ZFR0_SM4 0x1
++#define ID_AA64ZFR0_SHA3 0x1
++#define ID_AA64ZFR0_BITPERM 0x1
++#define ID_AA64ZFR0_AES 0x1
++#define ID_AA64ZFR0_AES_PMULL 0x2
++#define ID_AA64ZFR0_SVEVER_SVE2 0x1
++
+ /* id_aa64mmfr0 */
+ #define ID_AA64MMFR0_TGRAN4_SHIFT 28
+ #define ID_AA64MMFR0_TGRAN64_SHIFT 24
+diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
+index 602158a55554..fea93415b493 100644
+--- a/arch/arm64/include/uapi/asm/hwcap.h
++++ b/arch/arm64/include/uapi/asm/hwcap.h
+@@ -50,4 +50,14 @@
+ #define HWCAP_FLAGM (1 << 27)
+ #define HWCAP_SSBS (1 << 28)
+
++/*
++ * HWCAP2 flags - for AT_HWCAP2
++ */
++#define HWCAP2_SVE2 (1 << 1)
++#define HWCAP2_SVEAES (1 << 2)
++#define HWCAP2_SVEPMULL (1 << 3)
++#define HWCAP2_SVEBITPERM (1 << 4)
++#define HWCAP2_SVESHA3 (1 << 5)
++#define HWCAP2_SVESM4 (1 << 6)
++
+ #endif /* _UAPI__ASM_HWCAP_H */
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 3a0e7e10f2d7..4f384bbd86c7 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -183,6 +183,15 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+ ARM64_FTR_END,
+ };
+
++static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
++ ARM64_FTR_END,
++};
++
+ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
+ /*
+ * We already refuse to boot CPUs that don't support our configured
+@@ -399,7 +408,7 @@ static const struct __ftr_reg_entry {
+ /* Op1 = 0, CRn = 0, CRm = 4 */
+ ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
+ ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1),
+- ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_raz),
++ ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0),
+
+ /* Op1 = 0, CRn = 0, CRm = 5 */
+ ARM64_FTR_REG(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0),
+@@ -1580,6 +1589,12 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT),
+ #ifdef CONFIG_ARM64_SVE
+ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, KERNEL_HWCAP_SVE),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SVEVER_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SVEVER_SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_AES_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_AES, CAP_HWCAP, KERNEL_HWCAP_SVEAES),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_AES_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_AES_PMULL, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_BITPERM_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_BITPERM, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SHA3_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SHA3, CAP_HWCAP, KERNEL_HWCAP_SVESHA3),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SM4_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SM4, CAP_HWCAP, KERNEL_HWCAP_SVESM4),
+ #endif
+ HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
+ {},
+diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
+index bfe3bb8f05fe..c8e4ddd23f0c 100644
+--- a/arch/arm64/kernel/cpuinfo.c
++++ b/arch/arm64/kernel/cpuinfo.c
+@@ -82,6 +82,16 @@ static const char *const hwcap_str[] = {
+ "ilrcpc",
+ "flagm",
+ "ssbs",
++ "sb",
++ "paca",
++ "pacg",
++ "dcpodp",
++ "sve2",
++ "sveaes",
++ "svepmull",
++ "svebitperm",
++ "svesha3",
++ "svesm4",
+ NULL
+ };
+
+--
+2.25.1
+
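The sve.txt hunk above also mentions the MRS route: when the kernel
advertises HWCAP_CPUID, EL0 reads of ID_AA64ZFR0_EL1 trap into the
kernel and are emulated using the visible-field mask built from
ftr_id_aa64zfr0[]. A compile-only sketch under that assumption
(aarch64 only; S3_0_C0_C4_4 is the explicit encoding of
ID_AA64ZFR0_EL1 for assemblers that do not know the name, and the
field offsets are taken from the sysreg.h hunk above):

#include <stdint.h>
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_CPUID
#define HWCAP_CPUID (1 << 11)	/* from uapi/asm/hwcap.h */
#endif

int main(void)
{
	uint64_t zfr0;

	/* Only safe when CPU ID register emulation is advertised. */
	if (!(getauxval(AT_HWCAP) & HWCAP_CPUID))
		return 1;

	asm volatile("mrs %0, S3_0_C0_C4_4" : "=r" (zfr0));	/* ID_AA64ZFR0_EL1 */

	printf("SVEVer=%u AES=%u BitPerm=%u SHA3=%u SM4=%u\n",
	       (unsigned)(zfr0 & 0xf), (unsigned)((zfr0 >> 4) & 0xf),
	       (unsigned)((zfr0 >> 16) & 0xf), (unsigned)((zfr0 >> 32) & 0xf),
	       (unsigned)((zfr0 >> 40) & 0xf));
	return 0;
}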
diff --git a/patches/0099-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch b/patches/0099-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
new file mode 100644
index 000000000000..4ce008cecf19
--- /dev/null
+++ b/patches/0099-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
@@ -0,0 +1,55 @@
+From 9f8dff634365e7bfa0c764ccd31b54a4f0992bc8 Mon Sep 17 00:00:00 2001
+From: Dave Martin <Dave.Martin(a)arm.com>
+Date: Mon, 3 Jun 2019 16:35:02 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 3/4] arm64: cpufeature: Fix missing
+ ZFR0 in __read_sysreg_by_encoding()
+
+mainline inclusion
+from mainline-v5.2-rc4
+commit 78ed70bf3a923f1965e3c19f544677d418397108
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+In commit 06a916feca2b ("arm64: Expose SVE2 features for
+userspace"), new hwcaps are added that are detected via fields in
+the SVE-specific ID register ID_AA64ZFR0_EL1.
+
+In order to check compatibility of secondary cpus with the hwcaps
+established at boot, the cpufeatures code uses
+__read_sysreg_by_encoding() to read this ID register based on the
+sys_reg field of the arm64_elf_hwcaps[] table.
+
+This leads to a kernel splat if an hwcap uses an ID register that
+__read_sysreg_by_encoding() doesn't explicitly handle, as now
+happens when exercising cpu hotplug on an SVE2-capable platform.
+
+So fix it by adding the required case in there.
+
+Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace")
+Signed-off-by: Dave Martin <Dave.Martin(a)arm.com>
+Signed-off-by: Will Deacon <will.deacon(a)arm.com>
+Signed-off-by: Yu Liao <liaoyu15(a)huawei.com>
+---
+ arch/arm64/kernel/cpufeature.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 4f384bbd86c7..8e7473df2660 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -828,6 +828,7 @@ static u64 __read_sysreg_by_encoding(u32 sys_id)
+
+ read_sysreg_case(SYS_ID_AA64PFR0_EL1);
+ read_sysreg_case(SYS_ID_AA64PFR1_EL1);
++ read_sysreg_case(SYS_ID_AA64ZFR0_EL1);
+ read_sysreg_case(SYS_ID_AA64DFR0_EL1);
+ read_sysreg_case(SYS_ID_AA64DFR1_EL1);
+ read_sysreg_case(SYS_ID_AA64MMFR0_EL1);
+--
+2.25.1
+
diff --git a/patches/0100-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch b/patches/0100-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
new file mode 100644
index 000000000000..7df40531adda
--- /dev/null
+++ b/patches/0100-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
@@ -0,0 +1,67 @@
+From 515c2917ae3bc768e8793dac6b27ea4dff36b40c Mon Sep 17 00:00:00 2001
+From: Julien Grall <julien.grall(a)arm.com>
+Date: Mon, 14 Oct 2019 11:21:13 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 4/4] arm64: cpufeature: Treat
+ ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
+
+mainline inclusion
+from mainline-v5.4-rc4
+commit ec52c7134b1fcef0edfc56d55072fd4f261ef198
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+If CONFIG_ARM64_SVE=n then we fail to report ID_AA64ZFR0_EL1 as 0 when
+read by userspace, despite being required by the architecture. Although
+this is theoretically a change in ABI, userspace will first check for
+the presence of SVE via the HWCAP or the ID_AA64PFR0_EL1.SVE field
+before probing the ID_AA64ZFR0_EL1 register. Given that these are
+reported correctly for this configuration, we can safely tighten up the
+current behaviour.
+
+Ensure ID_AA64ZFR0_EL1 is treated as RAZ when CONFIG_ARM64_SVE=n.
+
+Signed-off-by: Julien Grall <julien.grall(a)arm.com>
+Reviewed-by: Suzuki K Poulose <suzuki.poulose(a)arm.com>
+Reviewed-by: Mark Rutland <mark.rutland(a)arm.com>
+Reviewed-by: Dave Martin <dave.martin(a)arm.com>
+Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace")
+Signed-off-by: Will Deacon <will(a)kernel.org>
+Signed-off-by: Yu Liao <liaoyu15(a)huawei.com>
+---
+ arch/arm64/kernel/cpufeature.c | 15 ++++++++++-----
+ 1 file changed, 10 insertions(+), 5 deletions(-)
+
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 8e7473df2660..98a8b2703f84 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -184,11 +184,16 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+ };
+
+ static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
+ ARM64_FTR_END,
+ };
+
+--
+2.25.1
+
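For context on the mechanism patch 4/4 relies on:
FTR_VISIBLE_IF_IS_ENABLED() resolves visibility at build time, so with
CONFIG_ARM64_SVE=n every ID_AA64ZFR0_EL1 field becomes FTR_HIDDEN and
the register emulated for userspace reads as zero. In mainline the
helper lives in arch/arm64/include/asm/cpufeature.h along these lines
(shown for reference; the 4.19 tree must already carry it or the hunk
above would not compile):

#define FTR_VISIBLE_IF_IS_ENABLED(config)		\
	(IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)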
diff --git a/series.conf b/series.conf
index 987dcc1adad1..acdead12c1e2 100644
--- a/series.conf
+++ b/series.conf
@@ -98,3 +98,7 @@ patches/0093-Revert-perf-smmuv3_pmu-Enable-HiSilicon-Erratum-1620.patch
patches/0094-perf-smmuv3-Enable-HiSilicon-Erratum-162001800-quirk.patch
patches/0095-perf-smmuv3-Enable-HiSilicon-Erratum-162001900-quirk.patch
patches/0096-perf-smmuv3-Add-MODULE_ALIAS-for-module-auto-loading.patch
+patches/0097-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
+patches/0098-arm64-Expose-SVE2-features-for-userspace.patch
+patches/0099-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
+patches/0100-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
--
2.25.1
---
kernel.spec | 8 +-
...rm64-HWCAP-add-support-for-AT_HWCAP2.patch | 463 ++++++++++++++++++
...4-Expose-SVE2-features-for-userspace.patch | 275 +++++++++++
...-Fix-missing-ZFR0-in-__read_sysreg_b.patch | 55 +++
...-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch | 67 +++
series.conf | 4 +
6 files changed, 871 insertions(+), 1 deletion(-)
create mode 100644 patches/0042-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
create mode 100644 patches/0043-arm64-Expose-SVE2-features-for-userspace.patch
create mode 100644 patches/0044-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
create mode 100644 patches/0045-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
diff --git a/kernel.spec b/kernel.spec
index 1a8ad37c95d8..e3d408f7b6b4 100644
--- a/kernel.spec
+++ b/kernel.spec
@@ -32,7 +32,7 @@
Name: kernel
Version: 4.19.90
-Release: %{hulkrelease}.0229
+Release: %{hulkrelease}.0230
Summary: Linux Kernel
License: GPLv2
URL: http://www.kernel.org/
@@ -832,6 +832,12 @@ fi
%changelog
+* Wed Nov 1 2023 Yu Liao <liaoyu15(a)huawei.com> - 4.19.90-2310.4.0.0230
+- arm64: HWCAP: add support for AT_HWCAP2
+- arm64: Expose SVE2 features for userspace
+- arm64: cpufeature: Fix missing ZFR0 in __read_sysreg_by_encoding()
+- arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
+
* Tue Oct 31 2023 Yu Liao <liaoyu15(a)huawei.com> - 4.19.90-2310.4.0.0229
- add new line at the end of series.conf
diff --git a/patches/0042-arm64-HWCAP-add-support-for-AT_HWCAP2.patch b/patches/0042-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
new file mode 100644
index 000000000000..46effba895fb
--- /dev/null
+++ b/patches/0042-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
@@ -0,0 +1,463 @@
+From a97497b283894653e53f7eb83b5825f5564d1614 Mon Sep 17 00:00:00 2001
+From: Andrew Murray <andrew.murray(a)arm.com>
+Date: Tue, 9 Apr 2019 10:52:40 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 1/4] arm64: HWCAP: add support for
+ AT_HWCAP2
+
+mainline inclusion
+from mainline-v5.2-rc1
+commit 06a916feca2b262ab0c1a2aeb68882f4b1108a07
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+As we will exhaust the first 32 bits of AT_HWCAP let's start
+exposing AT_HWCAP2 to userspace to give us up to 64 caps.
+
+Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
+prefer to expand into AT_HWCAP2 in order to provide a consistent
+view to userspace between ILP32 and LP64. However internal to the
+kernel we prefer to continue to use the full space of elf_hwcap.
+
+To reduce complexity and allow for future expansion, we now
+represent hwcaps in the kernel as ordinals and use a
+KERNEL_HWCAP_ prefix. This allows us to support automatic feature
+based module loading for all our hwcaps.
+
+We introduce cpu_set_feature to set hwcaps which complements the
+existing cpu_have_feature helper. These helpers allow us to clean
+up existing direct uses of elf_hwcap and reduce any future effort
+required to move beyond 64 caps.
+
+For convenience we also introduce cpu_{have,set}_named_feature which
+makes use of the cpu_feature macro to allow providing a hwcap name
+without a {KERNEL_}HWCAP_ prefix.
+
+Signed-off-by: Andrew Murray <andrew.murray(a)arm.com>
+[will: use const_ilog2() and tweak documentation]
+Signed-off-by: Will Deacon <will.deacon(a)arm.com>
+
+Conflicts:
+ Documentation/arm64/elf_hwcaps.txt
+ arch/arm64/crypto/chacha-neon-glue.c
+ arch/arm64/crypto/crct10dif-ce-glue.c
+ arch/arm64/crypto/ghash-ce-glue.c
+ arch/arm64/crypto/nhpoly1305-neon-glue.c
+ arch/arm64/kernel/cpufeature.c
+ drivers/clocksource/arm_arch_timer.c
+
+Signed-off-by: Yu Liao <liaoyu15(a)huawei.com>
+---
+ Documentation/arm64/elf_hwcaps.txt | 13 ++++--
+ arch/arm64/crypto/aes-ce-ccm-glue.c | 2 +-
+ arch/arm64/crypto/aes-neonbs-glue.c | 2 +-
+ arch/arm64/crypto/chacha20-neon-glue.c | 2 +-
+ arch/arm64/crypto/ghash-ce-glue.c | 6 +--
+ arch/arm64/crypto/sha256-glue.c | 4 +-
+ arch/arm64/include/asm/cpufeature.h | 22 +++++-----
+ arch/arm64/include/asm/hwcap.h | 49 ++++++++++++++++++++-
+ arch/arm64/include/uapi/asm/hwcap.h | 2 +-
+ arch/arm64/kernel/cpufeature.c | 60 +++++++++++++-------------
+ arch/arm64/kernel/cpuinfo.c | 2 +-
+ arch/arm64/kernel/fpsimd.c | 4 +-
+ drivers/clocksource/arm_arch_timer.c | 8 ++++
+ 13 files changed, 120 insertions(+), 56 deletions(-)
+
+diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
+index 6feaffe90e22..186feb16e2f2 100644
+--- a/Documentation/arm64/elf_hwcaps.txt
++++ b/Documentation/arm64/elf_hwcaps.txt
+@@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The
+ kernel exposes the presence of these features to userspace through a set
+ of flags called hwcaps, exposed in the auxilliary vector.
+
+-Userspace software can test for features by acquiring the AT_HWCAP entry
+-of the auxilliary vector, and testing whether the relevant flags are
+-set, e.g.
++Userspace software can test for features by acquiring the AT_HWCAP or
++AT_HWCAP2 entry of the auxiliary vector, and testing whether the relevant
++flags are set, e.g.
+
+ bool floating_point_is_present(void)
+ {
+@@ -182,3 +182,10 @@ HWCAP_FLAGM
+ HWCAP_SSBS
+
+ Functionality implied by ID_AA64PFR1_EL1.SSBS == 0b0010.
++
++
++4. Unused AT_HWCAP bits
++-----------------------
++
++For interoperation with userspace, the kernel guarantees that bits 62
++and 63 of AT_HWCAP will always be returned as 0.
+diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
+index 5fc6f51908fd..036ea77f83bc 100644
+--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
++++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
+@@ -372,7 +372,7 @@ static struct aead_alg ccm_aes_alg = {
+
+ static int __init aes_mod_init(void)
+ {
+- if (!(elf_hwcap & HWCAP_AES))
++ if (!cpu_have_named_feature(AES))
+ return -ENODEV;
+ return crypto_register_aead(&ccm_aes_alg);
+ }
+diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
+index 5cc248967387..742359801559 100644
+--- a/arch/arm64/crypto/aes-neonbs-glue.c
++++ b/arch/arm64/crypto/aes-neonbs-glue.c
+@@ -442,7 +442,7 @@ static int __init aes_init(void)
+ int err;
+ int i;
+
+- if (!(elf_hwcap & HWCAP_ASIMD))
++ if (!cpu_have_named_feature(ASIMD))
+ return -ENODEV;
+
+ err = crypto_register_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
+diff --git a/arch/arm64/crypto/chacha20-neon-glue.c b/arch/arm64/crypto/chacha20-neon-glue.c
+index 727579c93ded..bb3314905bee 100644
+--- a/arch/arm64/crypto/chacha20-neon-glue.c
++++ b/arch/arm64/crypto/chacha20-neon-glue.c
+@@ -114,7 +114,7 @@ static struct skcipher_alg alg = {
+
+ static int __init chacha20_simd_mod_init(void)
+ {
+- if (!(elf_hwcap & HWCAP_ASIMD))
++ if (!cpu_have_named_feature(ASIMD))
+ return -ENODEV;
+
+ return crypto_register_skcipher(&alg);
+diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
+index 1ed227bf6106..cd9d743cb40f 100644
+--- a/arch/arm64/crypto/ghash-ce-glue.c
++++ b/arch/arm64/crypto/ghash-ce-glue.c
+@@ -648,10 +648,10 @@ static int __init ghash_ce_mod_init(void)
+ {
+ int ret;
+
+- if (!(elf_hwcap & HWCAP_ASIMD))
++ if (!cpu_have_named_feature(ASIMD))
+ return -ENODEV;
+
+- if (elf_hwcap & HWCAP_PMULL)
++ if (cpu_have_named_feature(PMULL))
+ pmull_ghash_update = pmull_ghash_update_p64;
+
+ else
+@@ -661,7 +661,7 @@ static int __init ghash_ce_mod_init(void)
+ if (ret)
+ return ret;
+
+- if (elf_hwcap & HWCAP_PMULL) {
++ if (cpu_have_named_feature(PMULL)) {
+ ret = crypto_register_aead(&gcm_aes_alg);
+ if (ret)
+ crypto_unregister_shash(&ghash_alg);
+diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
+index 4aedeaefd61f..0cccdb9cc2c0 100644
+--- a/arch/arm64/crypto/sha256-glue.c
++++ b/arch/arm64/crypto/sha256-glue.c
+@@ -173,7 +173,7 @@ static int __init sha256_mod_init(void)
+ if (ret)
+ return ret;
+
+- if (elf_hwcap & HWCAP_ASIMD) {
++ if (cpu_have_named_feature(ASIMD)) {
+ ret = crypto_register_shashes(neon_algs, ARRAY_SIZE(neon_algs));
+ if (ret)
+ crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
+@@ -183,7 +183,7 @@ static int __init sha256_mod_init(void)
+
+ static void __exit sha256_mod_fini(void)
+ {
+- if (elf_hwcap & HWCAP_ASIMD)
++ if (cpu_have_named_feature(ASIMD))
+ crypto_unregister_shashes(neon_algs, ARRAY_SIZE(neon_algs));
+ crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
+ }
+diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
+index ffb0a1ec0088..eef5a9c9b823 100644
+--- a/arch/arm64/include/asm/cpufeature.h
++++ b/arch/arm64/include/asm/cpufeature.h
+@@ -14,15 +14,8 @@
+ #include <asm/hwcap.h>
+ #include <asm/sysreg.h>
+
+-/*
+- * In the arm64 world (as in the ARM world), elf_hwcap is used both internally
+- * in the kernel and for user space to keep track of which optional features
+- * are supported by the current system. So let's map feature 'x' to HWCAP_x.
+- * Note that HWCAP_x constants are bit fields so we need to take the log.
+- */
+-
+-#define MAX_CPU_FEATURES (8 * sizeof(elf_hwcap))
+-#define cpu_feature(x) ilog2(HWCAP_ ## x)
++#define MAX_CPU_FEATURES 64
++#define cpu_feature(x) KERNEL_HWCAP_ ## x
+
+ #ifndef __ASSEMBLY__
+
+@@ -372,10 +365,19 @@ extern bool set_cap_spectre_bhb;
+
+ bool this_cpu_has_cap(unsigned int cap);
+
++static inline void cpu_set_feature(unsigned int num)
++{
++ WARN_ON(num >= MAX_CPU_FEATURES);
++ elf_hwcap |= BIT(num);
++}
++#define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
++
+ static inline bool cpu_have_feature(unsigned int num)
+ {
+- return elf_hwcap & (1UL << num);
++ WARN_ON(num >= MAX_CPU_FEATURES);
++ return elf_hwcap & BIT(num);
+ }
++#define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
+
+ /* System capability check for constant caps */
+ static __always_inline bool __cpus_have_const_cap(int num)
+diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
+index 428b745b5386..458ff2d7ece3 100644
+--- a/arch/arm64/include/asm/hwcap.h
++++ b/arch/arm64/include/asm/hwcap.h
+@@ -40,11 +40,58 @@
+ #define COMPAT_HWCAP2_CRC32 (1 << 4)
+
+ #ifndef __ASSEMBLY__
++#include <linux/kernel.h>
++#include <linux/log2.h>
++
++/*
++ * For userspace we represent hwcaps as a collection of HWCAP{,2}_x bitfields
++ * as described in uapi/asm/hwcap.h. For the kernel we represent hwcaps as
++ * natural numbers (in a single range of size MAX_CPU_FEATURES) defined here
++ * with prefix KERNEL_HWCAP_ mapped to their HWCAP{,2}_x counterpart.
++ *
++ * Hwcaps should be set and tested within the kernel via the
++ * cpu_{set,have}_named_feature(feature) where feature is the unique suffix
++ * of KERNEL_HWCAP_{feature}.
++ */
++#define __khwcap_feature(x) const_ilog2(HWCAP_ ## x)
++#define KERNEL_HWCAP_FP __khwcap_feature(FP)
++#define KERNEL_HWCAP_ASIMD __khwcap_feature(ASIMD)
++#define KERNEL_HWCAP_EVTSTRM __khwcap_feature(EVTSTRM)
++#define KERNEL_HWCAP_AES __khwcap_feature(AES)
++#define KERNEL_HWCAP_PMULL __khwcap_feature(PMULL)
++#define KERNEL_HWCAP_SHA1 __khwcap_feature(SHA1)
++#define KERNEL_HWCAP_SHA2 __khwcap_feature(SHA2)
++#define KERNEL_HWCAP_CRC32 __khwcap_feature(CRC32)
++#define KERNEL_HWCAP_ATOMICS __khwcap_feature(ATOMICS)
++#define KERNEL_HWCAP_FPHP __khwcap_feature(FPHP)
++#define KERNEL_HWCAP_ASIMDHP __khwcap_feature(ASIMDHP)
++#define KERNEL_HWCAP_CPUID __khwcap_feature(CPUID)
++#define KERNEL_HWCAP_ASIMDRDM __khwcap_feature(ASIMDRDM)
++#define KERNEL_HWCAP_JSCVT __khwcap_feature(JSCVT)
++#define KERNEL_HWCAP_FCMA __khwcap_feature(FCMA)
++#define KERNEL_HWCAP_LRCPC __khwcap_feature(LRCPC)
++#define KERNEL_HWCAP_DCPOP __khwcap_feature(DCPOP)
++#define KERNEL_HWCAP_SHA3 __khwcap_feature(SHA3)
++#define KERNEL_HWCAP_SM3 __khwcap_feature(SM3)
++#define KERNEL_HWCAP_SM4 __khwcap_feature(SM4)
++#define KERNEL_HWCAP_ASIMDDP __khwcap_feature(ASIMDDP)
++#define KERNEL_HWCAP_SHA512 __khwcap_feature(SHA512)
++#define KERNEL_HWCAP_SVE __khwcap_feature(SVE)
++#define KERNEL_HWCAP_ASIMDFHM __khwcap_feature(ASIMDFHM)
++#define KERNEL_HWCAP_DIT __khwcap_feature(DIT)
++#define KERNEL_HWCAP_USCAT __khwcap_feature(USCAT)
++#define KERNEL_HWCAP_ILRCPC __khwcap_feature(ILRCPC)
++#define KERNEL_HWCAP_FLAGM __khwcap_feature(FLAGM)
++#define KERNEL_HWCAP_SSBS __khwcap_feature(SSBS)
++
++#define __khwcap2_feature(x) (const_ilog2(HWCAP2_ ## x) + 32)
++
+ /*
+ * This yields a mask that user programs can use to figure out what
+ * instruction set this cpu supports.
+ */
+-#define ELF_HWCAP (elf_hwcap)
++#define ELF_HWCAP lower_32_bits(elf_hwcap)
++#define ELF_HWCAP2 upper_32_bits(elf_hwcap)
+
+ #ifdef CONFIG_AARCH32_EL0
+ extern unsigned int a32_elf_hwcap, a32_elf_hwcap2;
+diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
+index 2bcd6e4f3474..602158a55554 100644
+--- a/arch/arm64/include/uapi/asm/hwcap.h
++++ b/arch/arm64/include/uapi/asm/hwcap.h
+@@ -18,7 +18,7 @@
+ #define _UAPI__ASM_HWCAP_H
+
+ /*
+- * HWCAP flags - for elf_hwcap (in kernel) and AT_HWCAP
++ * HWCAP flags - for AT_HWCAP
+ */
+ #define HWCAP_FP (1 << 0)
+ #define HWCAP_ASIMD (1 << 1)
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 1c93cc3f7692..3a0e7e10f2d7 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1553,35 +1553,35 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
+ }
+
+ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_PMULL),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_AES),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA1),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA2),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_SHA512),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_CRC32),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ATOMICS),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RDM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDRDM),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA3),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM3),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM4),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDDP),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDFHM),
+- HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FLAGM),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_FP),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_ASIMDHP),
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_DIT),
+- HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_DCPOP),
+- HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_JSCVT),
+- HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
+- HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
+- HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ILRCPC),
+- HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_USCAT),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_PMULL),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AES),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA1),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA2),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_SHA512),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_CRC32),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ATOMICS),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RDM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDRDM),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA3),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SM3),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SM4),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDDP),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDFHM),
++ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FLAGM),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_FP),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FPHP),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_ASIMD),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DCPOP),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_JSCVT),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FCMA),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_LRCPC),
++ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ILRCPC),
++ HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT),
+ #ifdef CONFIG_ARM64_SVE
+- HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
++ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, KERNEL_HWCAP_SVE),
+ #endif
+- HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, HWCAP_SSBS),
++ HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
+ {},
+ };
+
+@@ -1627,7 +1627,7 @@ static void __init cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap)
+ {
+ switch (cap->hwcap_type) {
+ case CAP_HWCAP:
+- elf_hwcap |= cap->hwcap;
++ cpu_set_feature(cap->hwcap);
+ break;
+ #ifdef CONFIG_AARCH32_EL0
+ case CAP_COMPAT_HWCAP:
+@@ -1650,7 +1650,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
+
+ switch (cap->hwcap_type) {
+ case CAP_HWCAP:
+- rc = (elf_hwcap & cap->hwcap) != 0;
++ rc = cpu_have_feature(cap->hwcap);
+ break;
+ #ifdef CONFIG_AARCH32_EL0
+ case CAP_COMPAT_HWCAP:
+@@ -1671,7 +1671,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
+ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
+ {
+ /* We support emulation of accesses to CPU ID feature registers */
+- elf_hwcap |= HWCAP_CPUID;
++ cpu_set_named_feature(CPUID);
+ for (; hwcaps->matches; hwcaps++)
+ if (hwcaps->matches(hwcaps, cpucap_default_scope(hwcaps)))
+ cap_set_elf_hwcap(hwcaps);
+diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
+index 005d88db1082..bfe3bb8f05fe 100644
+--- a/arch/arm64/kernel/cpuinfo.c
++++ b/arch/arm64/kernel/cpuinfo.c
+@@ -164,7 +164,7 @@ static int c_show(struct seq_file *m, void *v)
+ #endif /* CONFIG_AARCH32_EL0 */
+ } else {
+ for (j = 0; hwcap_str[j]; j++)
+- if (elf_hwcap & (1 << j))
++ if (cpu_have_feature(j))
+ seq_printf(m, " %s", hwcap_str[j]);
+ }
+ seq_puts(m, "\n");
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index bb048144c3bd..6972de5681ec 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -1302,14 +1302,14 @@ static inline void fpsimd_hotplug_init(void) { }
+ */
+ static int __init fpsimd_init(void)
+ {
+- if (elf_hwcap & HWCAP_FP) {
++ if (cpu_have_named_feature(FP)) {
+ fpsimd_pm_init();
+ fpsimd_hotplug_init();
+ } else {
+ pr_notice("Floating-point is not implemented\n");
+ }
+
+- if (!(elf_hwcap & HWCAP_ASIMD))
++ if (!cpu_have_named_feature(ASIMD))
+ pr_notice("Advanced SIMD is not implemented\n");
+
+ return sve_sysctl_init();
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index 58863fd9c91b..fbfc81932dea 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -825,7 +825,11 @@ static void arch_timer_evtstrm_enable(int divider)
+ cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
+ | ARCH_TIMER_VIRT_EVT_EN;
+ arch_timer_set_cntkctl(cntkctl);
++#ifdef CONFIG_ARM64
++ cpu_set_named_feature(EVTSTRM);
++#else
+ elf_hwcap |= HWCAP_EVTSTRM;
++#endif
+ #ifdef CONFIG_AARCH32_EL0
+ a32_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
+ #endif
+@@ -1059,7 +1063,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
+ } else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
+ arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
+
++#ifdef CONFIG_ARM64
++ if (cpu_have_named_feature(EVTSTRM))
++#else
+ if (elf_hwcap & HWCAP_EVTSTRM)
++#endif
+ cpumask_set_cpu(smp_processor_id(), &evtstrm_available);
+ }
+ return NOTIFY_OK;
+--
+2.25.1
+
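The arithmetic behind the KERNEL_HWCAP_ ordinals can be checked outside
the kernel. A hypothetical standalone demo with plain-C stand-ins for
const_ilog2(), BIT() and lower/upper_32_bits() (not kernel code; the
two flag values are copied from uapi/asm/hwcap.h as patched in this
series):

#include <assert.h>
#include <stdint.h>

#define HWCAP_SSBS	(1u << 28)
#define HWCAP2_SVE2	(1u << 1)

static int ilog2_u32(uint32_t x)	/* stand-in for const_ilog2() */
{
	return 31 - __builtin_clz(x);
}

int main(void)
{
	uint64_t elf_hwcap = 0;
	int kernel_hwcap_ssbs = ilog2_u32(HWCAP_SSBS);		/* ordinal 28 */
	int kernel_hwcap_sve2 = ilog2_u32(HWCAP2_SVE2) + 32;	/* ordinal 33 */

	elf_hwcap |= 1ULL << kernel_hwcap_ssbs;	/* cpu_set_feature() */
	elf_hwcap |= 1ULL << kernel_hwcap_sve2;

	assert((uint32_t)elf_hwcap == HWCAP_SSBS);		/* ELF_HWCAP */
	assert((uint32_t)(elf_hwcap >> 32) == HWCAP2_SVE2);	/* ELF_HWCAP2 */
	return 0;
}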
diff --git a/patches/0043-arm64-Expose-SVE2-features-for-userspace.patch b/patches/0043-arm64-Expose-SVE2-features-for-userspace.patch
new file mode 100644
index 000000000000..45f709f3fab0
--- /dev/null
+++ b/patches/0043-arm64-Expose-SVE2-features-for-userspace.patch
@@ -0,0 +1,275 @@
+From 2ba00283ddd367afa75f72e3b4de15f80b4a97a7 Mon Sep 17 00:00:00 2001
+From: Dave Martin <Dave.Martin(a)arm.com>
+Date: Thu, 18 Apr 2019 18:41:38 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 2/4] arm64: Expose SVE2 features for
+ userspace
+
+mainline inclusion
+from mainline-v5.2-rc1
+commit 06a916feca2b262ab0c1a2aeb68882f4b1108a07
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+This patch provides support for reporting the presence of SVE2 and
+its optional features to userspace.
+
+This will also enable visibility of SVE2 for guests, when KVM
+support for SVE-enabled guests is available.
+
+Signed-off-by: Dave Martin <Dave.Martin(a)arm.com>
+Signed-off-by: Will Deacon <will.deacon(a)arm.com>
+
+Conflicts:
+ arch/arm64/include/asm/hwcap.h
+ arch/arm64/include/uapi/asm/hwcap.h
+ arch/arm64/kernel/cpuinfo.c
+
+Signed-off-by: Yu Liao <liaoyu15(a)huawei.com>
+---
+ Documentation/arm64/cpu-feature-registers.txt | 16 +++++++++++++
+ Documentation/arm64/elf_hwcaps.txt | 24 +++++++++++++++++++
+ Documentation/arm64/sve.txt | 17 +++++++++++++
+ arch/arm64/Kconfig | 3 +++
+ arch/arm64/include/asm/hwcap.h | 6 +++++
+ arch/arm64/include/asm/sysreg.h | 14 +++++++++++
+ arch/arm64/include/uapi/asm/hwcap.h | 10 ++++++++
+ arch/arm64/kernel/cpufeature.c | 17 ++++++++++++-
+ arch/arm64/kernel/cpuinfo.c | 10 ++++++++
+ 9 files changed, 116 insertions(+), 1 deletion(-)
+
+diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
+index 7964f03846b1..fcd2e1deb886 100644
+--- a/Documentation/arm64/cpu-feature-registers.txt
++++ b/Documentation/arm64/cpu-feature-registers.txt
+@@ -201,6 +201,22 @@ infrastructure:
+ | AT | [35-32] | y |
+ x--------------------------------------------------x
+
++ 6) ID_AA64ZFR0_EL1 - SVE feature ID register 0
++
++ x--------------------------------------------------x
++ | Name | bits | visible |
++ |--------------------------------------------------|
++ | SM4 | [43-40] | y |
++ |--------------------------------------------------|
++ | SHA3 | [35-32] | y |
++ |--------------------------------------------------|
++ | BitPerm | [19-16] | y |
++ |--------------------------------------------------|
++ | AES | [7-4] | y |
++ |--------------------------------------------------|
++ | SVEVer | [3-0] | y |
++ x--------------------------------------------------x
++
+ Appendix I: Example
+ ---------------------------
+
+diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
+index 186feb16e2f2..e2ce14dfccf2 100644
+--- a/Documentation/arm64/elf_hwcaps.txt
++++ b/Documentation/arm64/elf_hwcaps.txt
+@@ -159,6 +159,30 @@ HWCAP_SVE
+
+ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001.
+
++HWCAP2_SVE2
++
++ Functionality implied by ID_AA64ZFR0_EL1.SVEVer == 0b0001.
++
++HWCAP2_SVEAES
++
++ Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0001.
++
++HWCAP2_SVEPMULL
++
++ Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0010.
++
++HWCAP2_SVEBITPERM
++
++ Functionality implied by ID_AA64ZFR0_EL1.BitPerm == 0b0001.
++
++HWCAP2_SVESHA3
++
++ Functionality implied by ID_AA64ZFR0_EL1.SHA3 == 0b0001.
++
++HWCAP2_SVESM4
++
++ Functionality implied by ID_AA64ZFR0_EL1.SM4 == 0b0001.
++
+ HWCAP_ASIMDFHM
+
+ Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
+diff --git a/Documentation/arm64/sve.txt b/Documentation/arm64/sve.txt
+index 2001d84384ca..5689fc9a976a 100644
+--- a/Documentation/arm64/sve.txt
++++ b/Documentation/arm64/sve.txt
+@@ -34,6 +34,23 @@ model features for SVE is included in Appendix A.
+ following sections: software that needs to verify that those interfaces are
+ present must check for HWCAP_SVE instead.
+
++* On hardware that supports the SVE2 extensions, HWCAP2_SVE2 will also
++ be reported in the AT_HWCAP2 aux vector entry. In addition to this,
++ optional extensions to SVE2 may be reported by the presence of:
++
++ HWCAP2_SVE2
++ HWCAP2_SVEAES
++ HWCAP2_SVEPMULL
++ HWCAP2_SVEBITPERM
++ HWCAP2_SVESHA3
++ HWCAP2_SVESM4
++
++ This list may be extended over time as the SVE architecture evolves.
++
++ These extensions are also reported via the CPU ID register ID_AA64ZFR0_EL1,
++ which userspace can read using an MRS instruction. See elf_hwcaps.txt and
++ cpu-feature-registers.txt for details.
++
+ * Debuggers should restrict themselves to interacting with the target via the
+ NT_ARM_SVE regset. The recommended way of detecting support for this regset
+ is to connect to a target process first and then attempt a
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 88b8031a93b2..f7398a1904a2 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1316,6 +1316,9 @@ config ARM64_SVE
+
+ To enable use of this extension on CPUs that implement it, say Y.
+
++ On CPUs that support the SVE2 extensions, this option will enable
++ those too.
++
+ Note that for architectural reasons, firmware _must_ implement SVE
+ support when running on SVE capable hardware. The required support
+ is present in:
+diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
+index 458ff2d7ece3..08315a3bf387 100644
+--- a/arch/arm64/include/asm/hwcap.h
++++ b/arch/arm64/include/asm/hwcap.h
+@@ -85,6 +85,12 @@
+ #define KERNEL_HWCAP_SSBS __khwcap_feature(SSBS)
+
+ #define __khwcap2_feature(x) (const_ilog2(HWCAP2_ ## x) + 32)
++#define KERNEL_HWCAP_SVE2 __khwcap2_feature(SVE2)
++#define KERNEL_HWCAP_SVEAES __khwcap2_feature(SVEAES)
++#define KERNEL_HWCAP_SVEPMULL __khwcap2_feature(SVEPMULL)
++#define KERNEL_HWCAP_SVEBITPERM __khwcap2_feature(SVEBITPERM)
++#define KERNEL_HWCAP_SVESHA3 __khwcap2_feature(SVESHA3)
++#define KERNEL_HWCAP_SVESM4 __khwcap2_feature(SVESM4)
+
+ /*
+ * This yields a mask that user programs can use to figure out what
+diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
+index 0fd51d253648..69618e602ed8 100644
+--- a/arch/arm64/include/asm/sysreg.h
++++ b/arch/arm64/include/asm/sysreg.h
+@@ -564,6 +564,20 @@
+ #define ID_AA64PFR1_SSBS_PSTATE_ONLY 1
+ #define ID_AA64PFR1_SSBS_PSTATE_INSNS 2
+
++/* id_aa64zfr0 */
++#define ID_AA64ZFR0_SM4_SHIFT 40
++#define ID_AA64ZFR0_SHA3_SHIFT 32
++#define ID_AA64ZFR0_BITPERM_SHIFT 16
++#define ID_AA64ZFR0_AES_SHIFT 4
++#define ID_AA64ZFR0_SVEVER_SHIFT 0
++
++#define ID_AA64ZFR0_SM4 0x1
++#define ID_AA64ZFR0_SHA3 0x1
++#define ID_AA64ZFR0_BITPERM 0x1
++#define ID_AA64ZFR0_AES 0x1
++#define ID_AA64ZFR0_AES_PMULL 0x2
++#define ID_AA64ZFR0_SVEVER_SVE2 0x1
++
+ /* id_aa64mmfr0 */
+ #define ID_AA64MMFR0_TGRAN4_SHIFT 28
+ #define ID_AA64MMFR0_TGRAN64_SHIFT 24
+diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
+index 602158a55554..fea93415b493 100644
+--- a/arch/arm64/include/uapi/asm/hwcap.h
++++ b/arch/arm64/include/uapi/asm/hwcap.h
+@@ -50,4 +50,14 @@
+ #define HWCAP_FLAGM (1 << 27)
+ #define HWCAP_SSBS (1 << 28)
+
++/*
++ * HWCAP2 flags - for AT_HWCAP2
++ */
++#define HWCAP2_SVE2 (1 << 1)
++#define HWCAP2_SVEAES (1 << 2)
++#define HWCAP2_SVEPMULL (1 << 3)
++#define HWCAP2_SVEBITPERM (1 << 4)
++#define HWCAP2_SVESHA3 (1 << 5)
++#define HWCAP2_SVESM4 (1 << 6)
++
+ #endif /* _UAPI__ASM_HWCAP_H */
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 3a0e7e10f2d7..4f384bbd86c7 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -183,6 +183,15 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+ ARM64_FTR_END,
+ };
+
++static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
++ ARM64_FTR_END,
++};
++
+ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
+ /*
+ * We already refuse to boot CPUs that don't support our configured
+@@ -399,7 +408,7 @@ static const struct __ftr_reg_entry {
+ /* Op1 = 0, CRn = 0, CRm = 4 */
+ ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
+ ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1),
+- ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_raz),
++ ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0),
+
+ /* Op1 = 0, CRn = 0, CRm = 5 */
+ ARM64_FTR_REG(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0),
+@@ -1580,6 +1589,12 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT),
+ #ifdef CONFIG_ARM64_SVE
+ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, KERNEL_HWCAP_SVE),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SVEVER_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SVEVER_SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_AES_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_AES, CAP_HWCAP, KERNEL_HWCAP_SVEAES),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_AES_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_AES_PMULL, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_BITPERM_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_BITPERM, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SHA3_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SHA3, CAP_HWCAP, KERNEL_HWCAP_SVESHA3),
++ HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SM4_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SM4, CAP_HWCAP, KERNEL_HWCAP_SVESM4),
+ #endif
+ HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
+ {},
+diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
+index bfe3bb8f05fe..c8e4ddd23f0c 100644
+--- a/arch/arm64/kernel/cpuinfo.c
++++ b/arch/arm64/kernel/cpuinfo.c
+@@ -82,6 +82,16 @@ static const char *const hwcap_str[] = {
+ "ilrcpc",
+ "flagm",
+ "ssbs",
++ "sb",
++ "paca",
++ "pacg",
++ "dcpodp",
++ "sve2",
++ "sveaes",
++ "svepmull",
++ "svebitperm",
++ "svesha3",
++ "svesm4",
+ NULL
+ };
+
+--
+2.25.1
+
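Because c_show() above prints hwcap_str[j] for every set ordinal, the
new capabilities also surface as tokens on the Features line of
/proc/cpuinfo. A rough fallback parser for tools that cannot call
getauxval() (sketch only; the token spellings come from the hwcap_str[]
hunk above):

#include <stdio.h>
#include <string.h>

static int cpuinfo_has(const char *feat)
{
	char line[1024];
	FILE *f = fopen("/proc/cpuinfo", "r");
	int found = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (strncmp(line, "Features", 8))
			continue;
		for (char *tok = strtok(line + 8, " \t:\n"); tok;
		     tok = strtok(NULL, " \t:\n"))
			if (!strcmp(tok, feat))
				found = 1;
		break;
	}
	fclose(f);
	return found;
}

int main(void)
{
	printf("sve2: %d\n", cpuinfo_has("sve2"));
	return 0;
}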
diff --git a/patches/0044-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch b/patches/0044-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
new file mode 100644
index 000000000000..4ce008cecf19
--- /dev/null
+++ b/patches/0044-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
@@ -0,0 +1,55 @@
+From 9f8dff634365e7bfa0c764ccd31b54a4f0992bc8 Mon Sep 17 00:00:00 2001
+From: Dave Martin <Dave.Martin(a)arm.com>
+Date: Mon, 3 Jun 2019 16:35:02 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 3/4] arm64: cpufeature: Fix missing
+ ZFR0 in __read_sysreg_by_encoding()
+
+mainline inclusion
+from mainline-v5.2-rc4
+commit 78ed70bf3a923f1965e3c19f544677d418397108
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+In commit 06a916feca2b ("arm64: Expose SVE2 features for
+userspace"), new hwcaps are added that are detected via fields in
+the SVE-specific ID register ID_AA64ZFR0_EL1.
+
+In order to check compatibility of secondary cpus with the hwcaps
+established at boot, the cpufeatures code uses
+__read_sysreg_by_encoding() to read this ID register based on the
+sys_reg field of the arm64_elf_hwcaps[] table.
+
+This leads to a kernel splat if an hwcap uses an ID register that
+__read_sysreg_by_encoding() doesn't explicitly handle, as now
+happens when exercising cpu hotplug on an SVE2-capable platform.
+
+So fix it by adding the required case in there.
+
+Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace")
+Signed-off-by: Dave Martin <Dave.Martin(a)arm.com>
+Signed-off-by: Will Deacon <will.deacon(a)arm.com>
+Signed-off-by: Yu Liao <liaoyu15(a)huawei.com>
+---
+ arch/arm64/kernel/cpufeature.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 4f384bbd86c7..8e7473df2660 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -828,6 +828,7 @@ static u64 __read_sysreg_by_encoding(u32 sys_id)
+
+ read_sysreg_case(SYS_ID_AA64PFR0_EL1);
+ read_sysreg_case(SYS_ID_AA64PFR1_EL1);
++ read_sysreg_case(SYS_ID_AA64ZFR0_EL1);
+ read_sysreg_case(SYS_ID_AA64DFR0_EL1);
+ read_sysreg_case(SYS_ID_AA64DFR1_EL1);
+ read_sysreg_case(SYS_ID_AA64MMFR0_EL1);
+--
+2.25.1
+
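
For reference, the function being patched dispatches on the register encoding and hits BUG() for any encoding it does not list, which is exactly the splat described in the commit message. A minimal sketch of that pattern, simplified from arch/arm64/kernel/cpufeature.c (not the verbatim 4.19 source):

#define read_sysreg_case(reg)			\
	case reg: return read_sysreg_s(reg)

static u64 __read_sysreg_by_encoding(u32 sys_id)
{
	switch (sys_id) {
	read_sysreg_case(SYS_ID_AA64PFR0_EL1);
	read_sysreg_case(SYS_ID_AA64ZFR0_EL1);	/* the case added by this patch */
	/* ... the other ID registers handled by the switch ... */
	default:
		BUG();	/* unhandled encoding: the hotplug-time splat */
		return 0;
	}
}

Any hwcap entry whose sys_reg field names a register without a matching case lands in the default branch when a secondary CPU is checked, hence the one-line fix below.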
diff --git a/patches/0045-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch b/patches/0045-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
new file mode 100644
index 000000000000..7df40531adda
--- /dev/null
+++ b/patches/0045-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
@@ -0,0 +1,67 @@
+From 515c2917ae3bc768e8793dac6b27ea4dff36b40c Mon Sep 17 00:00:00 2001
+From: Julien Grall <julien.grall(a)arm.com>
+Date: Mon, 14 Oct 2019 11:21:13 +0100
+Subject: [PATCH openEuler-20.03-LTS-SP4 4/4] arm64: cpufeature: Treat
+ ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
+
+mainline inclusion
+from mainline-v5.4-rc4
+commit ec52c7134b1fcef0edfc56d55072fd4f261ef198
+category: feature
+bugzilla: https://gitee.com/openeuler/kernel/issues/I8B82O
+CVE: NA
+
+Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
+
+--------------------------------
+
+If CONFIG_ARM64_SVE=n then we fail to report ID_AA64ZFR0_EL1 as 0 when
+read by userspace, despite being required by the architecture. Although
+this is theoretically a change in ABI, userspace will first check for
+the presence of SVE via the HWCAP or the ID_AA64PFR0_EL1.SVE field
+before probing the ID_AA64ZFR0_EL1 register. Given that these are
+reported correctly for this configuration, we can safely tighten up the
+current behaviour.
+
+Ensure ID_AA64ZFR0_EL1 is treated as RAZ when CONFIG_ARM64_SVE=n.
+
+Signed-off-by: Julien Grall <julien.grall(a)arm.com>
+Reviewed-by: Suzuki K Poulose <suzuki.poulose(a)arm.com>
+Reviewed-by: Mark Rutland <mark.rutland(a)arm.com>
+Reviewed-by: Dave Martin <dave.martin(a)arm.com>
+Fixes: 06a916feca2b ("arm64: Expose SVE2 features for userspace")
+Signed-off-by: Will Deacon <will(a)kernel.org>
+Signed-off-by: Yu Liao <liaoyu15(a)huawei.com>
+---
+ arch/arm64/kernel/cpufeature.c | 15 ++++++++++-----
+ 1 file changed, 10 insertions(+), 5 deletions(-)
+
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 8e7473df2660..98a8b2703f84 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -184,11 +184,16 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+ };
+
+ static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
++ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
+ ARM64_FTR_END,
+ };
+
+--
+2.25.1
+
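
The FTR_VISIBLE_IF_IS_ENABLED() helper used above selects the field's visibility at build time; its definition in arch/arm64/include/asm/cpufeature.h amounts to:

#define FTR_VISIBLE_IF_IS_ENABLED(config)		\
	(IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)

With CONFIG_ARM64_SVE=n every ID_AA64ZFR0_EL1 field is therefore FTR_HIDDEN, and the sanitised value exposed to userspace reads as zero (RAZ), as the commit message requires.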
diff --git a/series.conf b/series.conf
index 1470060a870a..32716ced5856 100644
--- a/series.conf
+++ b/series.conf
@@ -43,3 +43,7 @@ patches/0038-perf-arm-spe-Add-more-sub-classes-for-operation-pack.patch
patches/0039-perf-arm_spe-Decode-memory-tagging-properties.patch
patches/0040-perf-arm-spe-Add-support-for-ARMv8.3-SPE.patch
patches/0041-drivers-perf-Add-support-for-ARMv8.3-SPE.patch
+patches/0042-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
+patches/0043-arm64-Expose-SVE2-features-for-userspace.patch
+patches/0044-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
+patches/0045-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
--
2.25.1
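
Taken together, the four patches give userspace the probing order that patch 4/4 relies on: confirm SVE via AT_HWCAP before consulting the SVE2 bits in AT_HWCAP2. A minimal detection sketch in C, with the bit values taken from the patched arch/arm64/include/uapi/asm/hwcap.h (the fallback defines are only needed when building against headers that predate this series):

#include <stdio.h>
#include <sys/auxv.h>

#ifndef AT_HWCAP2
#define AT_HWCAP2	26		/* ELF auxv type, for older headers */
#endif
#ifndef HWCAP_SVE
#define HWCAP_SVE	(1UL << 22)	/* AT_HWCAP */
#endif
#ifndef HWCAP2_SVE2
#define HWCAP2_SVE2	(1UL << 1)	/* AT_HWCAP2 */
#endif

int main(void)
{
	unsigned long hwcap  = getauxval(AT_HWCAP);
	unsigned long hwcap2 = getauxval(AT_HWCAP2);

	/* Check for SVE first; only then are the SVE2 bits meaningful. */
	if (!(hwcap & HWCAP_SVE)) {
		puts("SVE not supported");
		return 0;
	}
	puts((hwcap2 & HWCAP2_SVE2) ? "SVE2 supported" : "SVE only");
	return 0;
}

On a kernel without this series, getauxval(AT_HWCAP2) simply returns 0, so the check degrades safely to "SVE only".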

31 Oct '23
---
kernel.spec | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel.spec b/kernel.spec
index c0bd82f1e791..1a8ad37c95d8 100644
--- a/kernel.spec
+++ b/kernel.spec
@@ -32,7 +32,7 @@
Name: kernel
Version: 4.19.90
-Release: %{hulkrelease}.0228
+Release: %{hulkrelease}.0229
Summary: Linux Kernel
License: GPLv2
URL: http://www.kernel.org/
@@ -241,11 +241,12 @@ ignores_for_main="CONFIG_DESCRIPTION,FILE_PATH_CHANGES,GERRIT_CHANGE_ID,GIT_COMM
Checkpatches() {
local SERIESCONF=$1
local PATCH_DIR=$2
+ echo "" >> $SERIESCONF
sed -i '/^#/d' $SERIESCONF
sed -i '/^[\s]*$/d' $SERIESCONF
set +e
- while IFS= read -r patch; do
+ while read patch; do
output=$(scripts/checkpatch.pl --ignore $ignores_for_main $PATCH_DIR/$patch)
if echo "$output" | grep -q "ERROR:"; then
echo "checkpatch $patch failed"
@@ -830,6 +831,10 @@ fi
%endif
%changelog
+
+* Tue Oct 31 2023 Yu Liao <liaoyu15(a)huawei.com> - 4.19.90-2310.4.0.0229
+- add new line at the end of series.conf
+
* Tue Oct 31 2023 hongrongxuan <hongrongxuan(a)huawei.com> - 4.19.90-2310.4.0.0228
- drivers/perf: Add support for ARMv8.3-SPE
- perf arm-spe: Add support for ARMv8.3-SPE
--
2.33.0
---
kernel.spec | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/kernel.spec b/kernel.spec
index 5bcf9476f107..3c9e31996699 100644
--- a/kernel.spec
+++ b/kernel.spec
@@ -32,7 +32,7 @@
Name: kernel
Version: 4.19.90
-Release: %{hulkrelease}.0225
+Release: %{hulkrelease}.0226
Summary: Linux Kernel
License: GPLv2
URL: http://www.kernel.org/
@@ -236,6 +236,28 @@ if [ ! -d patches ];then
mv ../patches .
fi
+ignores_for_main="CONFIG_DESCRIPTION,FILE_PATH_CHANGES,GERRIT_CHANGE_ID,GIT_COMMIT_ID,UNKNOWN_COMMIT_ID,FROM_SIGN_OFF_MISMATCH,REPEATED_WORD,COMMIT_COMMENT_SYMBOL,BLOCK_COMMENT_STYLE,AVOID_EXTERNS,AVOID_BUG"
+
+Checkpatches() {
+ local SERIESCONF=$1
+ local PATCH_DIR=$2
+ sed -i '/^#/d' $SERIESCONF
+ sed -i '/^[\s]*$/d' $SERIESCONF
+
+ set +e
+ while IFS= read -r patch; do
+ output=$(scripts/checkpatch.pl --ignore $ignores_for_main $PATCH_DIR/$patch)
+ if echo "$output" | grep -q "ERROR:"; then
+ echo "checkpatch $patch failed"
+ set -e
+ return 1
+ fi
+ done < "$SERIESCONF"
+
+ set -e
+ return 0
+}
+
Applypatches()
{
set -e
@@ -252,6 +274,7 @@ Applypatches()
) | sh
}
+Checkpatches series.conf %{_builddir}/kernel-%{version}/linux-%{KernelVer}
Applypatches series.conf %{_builddir}/kernel-%{version}/linux-%{KernelVer}
%endif
@@ -807,6 +830,9 @@ fi
%endif
%changelog
+* Fri Oct 30 2023 Yu Liao <liaoyu15(a)huawei.com> - 4.19.90-2310.4.0.0226
+- Add checkpatch check
+
* Sat Oct 28 2023 YunYi Yang <yangyunyi2(a)huawei.com> - 4.19.90-2310.4.0.0225
- config: arm64: Enable config of hisi ptt
- hwtracing: hisi_ptt: Add dummy callback pmu::read()
--
2.33.0

28 Oct '23
v1 -> v2: add new bugfix patch
Wei Li (1):
Add series.conf
Yu Liao (1):
Expose SVE2 features for userspace
kernel.spec | 2 +-
...rm64-HWCAP-add-support-for-AT_HWCAP2.patch | 463 ++++++++++++++++++
...4-Expose-SVE2-features-for-userspace.patch | 275 +++++++++++
...-Fix-missing-ZFR0-in-__read_sysreg_b.patch | 55 +++
...-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch | 67 +++
series.conf | 7 +
6 files changed, 868 insertions(+), 1 deletion(-)
create mode 100644 patches/0001-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
create mode 100644 patches/0002-arm64-Expose-SVE2-features-for-userspace.patch
create mode 100644 patches/0003-arm64-cpufeature-Fix-missing-ZFR0-in-__read_sysreg_b.patch
create mode 100644 patches/0004-arm64-cpufeature-Treat-ID_AA64ZFR0_EL1-as-RAZ-when-S.patch
create mode 100644 series.conf
--
2.33.0

27 Oct '23
Wei Li (1):
Add series.conf
Yu Liao (1):
Expose SVE2 features for userspace
kernel.spec | 2 +-
...rm64-HWCAP-add-support-for-AT_HWCAP2.patch | 462 ++++++++++++++++++
...4-Expose-SVE2-features-for-userspace.patch | 275 +++++++++++
series.conf | 5 +
4 files changed, 743 insertions(+), 1 deletion(-)
create mode 100644 patches/0001-arm64-HWCAP-add-support-for-AT_HWCAP2.patch
create mode 100644 patches/0002-arm64-Expose-SVE2-features-for-userspace.patch
create mode 100644 series.conf
--
2.33.0
Xiongfeng Wang (2):
cpufreq: change '.set_boost' to act on one policy
cpufreq: CPPC: add SW BOOST support
drivers/cpufreq/acpi-cpufreq.c | 14 ++++----
drivers/cpufreq/cppc_cpufreq.c | 35 ++++++++++++++++++--
drivers/cpufreq/cpufreq.c | 58 +++++++++++++++++++---------------
include/linux/cpufreq.h | 2 +-
4 files changed, 75 insertions(+), 34 deletions(-)
--
2.25.1