[PATCH OLK-6.6 0/5] arm64: Add support for FEAT_{LS64, LS64_V}

From: Hongye Lin <linhongye@h-partners.com>

driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IC1F41

----------------------------------------------------------------------

arm64: Add support for FEAT_{LS64, LS64_V}

Yicong Yang (5):
  arm64: Provide basic EL2 setup for FEAT_{LS64, LS64_V} usage at EL0/1
  arm64: Add support for FEAT_{LS64, LS64_V}
  kselftest/arm64: Add HWCAP test for FEAT_{LS64, LS64_V}
  arm64: Add ESR.DFSC definition of unsupported exclusive or atomic access
  KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory

 Documentation/arch/arm64/booting.rst      | 12 +++
 Documentation/arch/arm64/elf_hwcaps.rst   |  6 ++
 arch/arm64/include/asm/el2_setup.h        | 11 +++
 arch/arm64/include/asm/esr.h              |  8 ++
 arch/arm64/include/asm/hwcap.h            |  3 +
 arch/arm64/include/asm/kvm_emulate.h      |  1 +
 arch/arm64/include/uapi/asm/hwcap.h       |  6 ++
 arch/arm64/kernel/cpufeature.c            | 51 +++++++++++++
 arch/arm64/kernel/cpuinfo.c               |  2 +
 arch/arm64/kvm/inject_fault.c             | 35 +++++++++
 arch/arm64/kvm/mmu.c                      | 22 +++++-
 arch/arm64/tools/cpucaps                  |  4 +-
 tools/testing/selftests/arm64/abi/hwcap.c | 90 +++++++++++++++++++++++
 13 files changed, 248 insertions(+), 3 deletions(-)

-- 
2.33.0

From: Yicong Yang <yangyicong@hisilicon.com>
Subject: [PATCH OLK-6.6 1/5] arm64: Provide basic EL2 setup for FEAT_{LS64, LS64_V} usage at EL0/1

driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IC1F41

----------------------------------------------------------------------

The instructions introduced by FEAT_{LS64, LS64_V} are controlled by
HCRX_EL2.{EnALS, EnASR}. Configure both of these to allow usage at
EL0/1.

This doesn't mean these instructions are always available to EL0/1
when implemented; the hypervisor still has control at runtime.

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Hongye Lin <linhongye@h-partners.com>
---
 arch/arm64/include/asm/el2_setup.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index e7cae1452098..1eaaa5a83fd2 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -27,6 +27,17 @@
 	ubfx	x0, x0, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4
 	cbz	x0, .Lskip_hcrx_\@
 	mov_q	x0, HCRX_HOST_FLAGS
+
+	/* Enable LS64, LS64_V if supported */
+	mrs_s	x1, SYS_ID_AA64ISAR1_EL1
+	ubfx	x1, x1, #ID_AA64ISAR1_EL1_LS64_SHIFT, #4
+	cbz	x1, .Lset_hcrx_\@
+	orr	x0, x0, #HCRX_EL2_EnALS
+	cmp	x1, #ID_AA64ISAR1_EL1_LS64_LS64_V
+	b.lt	.Lset_hcrx_\@
+	orr	x0, x0, #HCRX_EL2_EnASR
+
+.Lset_hcrx_\@:
 	msr_s	SYS_HCRX_EL2, x0
 .Lskip_hcrx_\@:
 .endm

-- 
2.33.0
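For readers who don't follow the assembly, the check above boils down to: read ID_AA64ISAR1_EL1, extract the LS64 field, set HCRX_EL2.EnALS when FEAT_LS64 is present, and additionally set HCRX_EL2.EnASR when FEAT_LS64_V is present. A minimal C sketch of that decision follows; it is illustrative only, not kernel code, and the field position (bits [63:60]) and bit numbers are assumptions taken from the Arm ARM and the booting.rst text added later in this series.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative constants (assumed, not the kernel's generated sysreg
 * definitions): ID_AA64ISAR1_EL1.LS64 lives in bits [63:60];
 * HCRX_EL2.EnALS is bit 1 and HCRX_EL2.EnASR is bit 2.
 */
#define ID_AA64ISAR1_LS64_SHIFT	60
#define LS64_FIELD_LS64		1	/* FEAT_LS64 */
#define LS64_FIELD_LS64_V	2	/* FEAT_LS64_V */
#define HCRX_EL2_EnALS		(1ULL << 1)
#define HCRX_EL2_EnASR		(1ULL << 2)

/* Hypothetical helper mirroring the el2_setup logic above. */
static uint64_t ls64_hcrx_bits(uint64_t id_aa64isar1)
{
	uint64_t ls64 = (id_aa64isar1 >> ID_AA64ISAR1_LS64_SHIFT) & 0xf;
	uint64_t bits = 0;

	if (ls64 >= LS64_FIELD_LS64)
		bits |= HCRX_EL2_EnALS;		/* allow LD64B/ST64B */
	if (ls64 >= LS64_FIELD_LS64_V)
		bits |= HCRX_EL2_EnASR;		/* additionally allow ST64BV */

	return bits;
}

int main(void)
{
	/* Example: a CPU reporting LS64 == 0b0010 (FEAT_LS64_V) */
	uint64_t isar1 = (uint64_t)LS64_FIELD_LS64_V << ID_AA64ISAR1_LS64_SHIFT;

	printf("HCRX bits to set: %#llx\n",
	       (unsigned long long)ls64_hcrx_bits(isar1));
	return 0;
}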

From: Yicong Yang <yangyicong@hisilicon.com>
Subject: [PATCH OLK-6.6 2/5] arm64: Add support for FEAT_{LS64, LS64_V}

driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IC1F41

----------------------------------------------------------------------

Armv8.7 introduces single-copy atomic 64-byte load/store instructions
and their variants, named FEAT_{LS64, LS64_V}. These features are
identified by ID_AA64ISAR1_EL1.LS64, and the use of such instructions
in userspace (EL0) can be trapped. In order to support the use of the
corresponding instructions in userspace:
 - Make ID_AA64ISAR1_EL1.LS64 visible to userspace
 - Add identification and enablement in the cpufeature list
 - Expose support for these features to userspace through HWCAP3 and
   cpuinfo

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Hongye Lin <linhongye@h-partners.com>
---
 Documentation/arch/arm64/booting.rst    | 12 ++++++
 Documentation/arch/arm64/elf_hwcaps.rst |  6 +++
 arch/arm64/include/asm/hwcap.h          |  3 ++
 arch/arm64/include/uapi/asm/hwcap.h     |  6 +++
 arch/arm64/kernel/cpufeature.c          | 51 +++++++++++++++++++++++
 arch/arm64/kernel/cpuinfo.c             |  2 +
 arch/arm64/tools/cpucaps                |  4 +-
 7 files changed, 82 insertions(+), 2 deletions(-)

diff --git a/Documentation/arch/arm64/booting.rst b/Documentation/arch/arm64/booting.rst
index 408d2e27b641..b540e0933dde 100644
--- a/Documentation/arch/arm64/booting.rst
+++ b/Documentation/arch/arm64/booting.rst
@@ -438,6 +438,18 @@ Before jumping into the kernel, the following conditions must be met:

    - HCRX_EL2.TALLINT must be initialised to 0b0.

+  For CPUs support for 64-byte loads and stores without status (FEAT_LS64):
+
+  - If the kernel is entered at EL1 and EL2 is present:
+
+    - HCRX_EL2.EnALS (bit 1) must be initialised to 0b1.
+
+  For CPUs support for 64-byte loads and stores with status (FEAT_LS64_V):
+
+  - If the kernel is entered at EL1 and EL2 is present:
+
+    - HCRX_EL2.EnASR (bit 2) must be initialised to 0b1.
+
  The requirements described above for CPU mode, caches, MMUs, architected
  timers, coherency and system registers apply to all CPUs.  All CPUs must
  enter the kernel in the same exception level.  Where the values documented

diff --git a/Documentation/arch/arm64/elf_hwcaps.rst b/Documentation/arch/arm64/elf_hwcaps.rst
index f88a24d621dd..129d2e9862b8 100644
--- a/Documentation/arch/arm64/elf_hwcaps.rst
+++ b/Documentation/arch/arm64/elf_hwcaps.rst
@@ -320,6 +320,12 @@ HWCAP2_MOPS
 HWCAP2_HBC
     Functionality implied by ID_AA64ISAR2_EL1.BC == 0b0001.

+HWCAP3_LS64
+    Functionality implied by ID_AA64ISAR1_EL1.LS64 == 0b0001.
+
+HWCAP3_LS64_V
+    Functionality implied by ID_AA64ISAR1_EL1.LS64 == 0b0010.
+
 4. Unused AT_HWCAP bits
 -----------------------

diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
index 2e914609db93..d68a3f2f6bbb 100644
--- a/arch/arm64/include/asm/hwcap.h
+++ b/arch/arm64/include/asm/hwcap.h
@@ -140,6 +140,9 @@
 #define KERNEL_HWCAP_MOPS		__khwcap2_feature(MOPS)
 #define KERNEL_HWCAP_HBC		__khwcap2_feature(HBC)

+#define KERNEL_HWCAP_LS64		__khwcap3_feature(LS64)
+#define KERNEL_HWCAP_LS64_V		__khwcap3_feature(LS64_V)
+
 /*
  * This yields a mask that user programs can use to figure out what
  * instruction set this cpu supports.

diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 53026f45a509..13a8ad7bc492 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -105,4 +105,10 @@
 #define HWCAP2_MOPS		(1UL << 43)
 #define HWCAP2_HBC		(1UL << 44)

+/*
+ * HWCAP3 flags - for AT_HWCAP3
+ */
+#define HWCAP3_LS64		(1UL << 0)
+#define HWCAP3_LS64_V		(1UL << 1)
+
 #endif /* _UAPI__ASM_HWCAP_H */

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c49a76ad747f..16b335b918fd 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -200,6 +200,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
 };

 static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_LS64_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_I8MM_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_DGH_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_BF16_SHIFT, 4, 0),
@@ -2133,6 +2134,38 @@ static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
 static bool enable_pseudo_nmi;
 #endif

+static bool has_ls64(const struct arm64_cpu_capabilities *entry, int __unused)
+{
+	u64 ls64;
+
+	ls64 = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
+					   entry->field_pos, entry->sign);
+
+	if (ls64 == ID_AA64ISAR1_EL1_LS64_NI ||
+	    ls64 > ID_AA64ISAR1_EL1_LS64_LS64_ACCDATA)
+		return false;
+
+	if (entry->capability == ARM64_HAS_LS64 &&
+	    ls64 >= ID_AA64ISAR1_EL1_LS64_LS64)
+		return true;
+
+	if (entry->capability == ARM64_HAS_LS64_V &&
+	    ls64 >= ID_AA64ISAR1_EL1_LS64_LS64_V)
+		return true;
+
+	return false;
+}
+
+static void cpu_enable_ls64(struct arm64_cpu_capabilities const *cap)
+{
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_EnALS, SCTLR_EL1_EnALS);
+}
+
+static void cpu_enable_ls64_v(struct arm64_cpu_capabilities const *cap)
+{
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_EnASR, SCTLR_EL1_EnASR);
+}
+
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 static int __init early_enable_pseudo_nmi(char *p)
 {
@@ -2972,6 +3005,22 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_xint_support,
 	},
 #endif
+	{
+		.desc = "LS64",
+		.capability = ARM64_HAS_LS64,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_ls64,
+		.cpu_enable = cpu_enable_ls64,
+		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, LS64, LS64)
+	},
+	{
+		.desc = "LS64_V",
+		.capability = ARM64_HAS_LS64_V,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_ls64,
+		.cpu_enable = cpu_enable_ls64_v,
+		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, LS64, LS64_V)
+	},
 	{},
 };

@@ -3080,6 +3129,8 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 	HWCAP_CAP(ID_AA64ISAR1_EL1, BF16, EBF16, CAP_HWCAP, KERNEL_HWCAP_EBF16),
 	HWCAP_CAP(ID_AA64ISAR1_EL1, DGH, IMP, CAP_HWCAP, KERNEL_HWCAP_DGH),
 	HWCAP_CAP(ID_AA64ISAR1_EL1, I8MM, IMP, CAP_HWCAP, KERNEL_HWCAP_I8MM),
+	HWCAP_CAP(ID_AA64ISAR1_EL1, LS64, LS64, CAP_HWCAP, KERNEL_HWCAP_LS64),
+	HWCAP_CAP(ID_AA64ISAR1_EL1, LS64, LS64_V, CAP_HWCAP, KERNEL_HWCAP_LS64_V),
 	HWCAP_CAP(ID_AA64MMFR2_EL1, AT, IMP, CAP_HWCAP, KERNEL_HWCAP_USCAT),
 #ifdef CONFIG_ARM64_SVE
 	HWCAP_CAP(ID_AA64PFR0_EL1, SVE, IMP, CAP_HWCAP, KERNEL_HWCAP_SVE),

diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 7466b6066d87..dade66047478 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -82,6 +82,8 @@ static const char *const hwcap_str[] = {
 	[KERNEL_HWCAP_SB]		= "sb",
 	[KERNEL_HWCAP_PACA]		= "paca",
 	[KERNEL_HWCAP_PACG]		= "pacg",
+	[KERNEL_HWCAP_LS64]		= "ls64",
+	[KERNEL_HWCAP_LS64_V]		= "ls64_v",
 	[KERNEL_HWCAP_DCPODP]		= "dcpodp",
 	[KERNEL_HWCAP_SVE2]		= "sve2",
 	[KERNEL_HWCAP_SVEAES]		= "sveaes",

diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index f2ddced689b5..27d93050e5da 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -110,8 +110,8 @@ WORKAROUND_HISI_HIP08_RU_PREFETCH
 WORKAROUND_HISILICON_1980005
 HAS_XCALL
 HAS_XINT
-KABI_RESERVE_3
-KABI_RESERVE_4
+HAS_LS64
+HAS_LS64_V
 KABI_RESERVE_5
 KABI_RESERVE_6
 KABI_RESERVE_7

-- 
2.33.0
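With these hwcaps in place, userspace can probe support through AT_HWCAP3 before issuing any LS64 instruction, as the selftest in the next patch does. A minimal detection sketch follows; the AT_HWCAP3 and HWCAP3_* fallback values are assumptions for toolchains whose headers predate these uapi additions.

#include <stdio.h>
#include <sys/auxv.h>

/* Fallbacks mirroring the uapi additions above; values are assumed in
 * case the toolchain headers do not carry them yet. */
#ifndef AT_HWCAP3
#define AT_HWCAP3	29
#endif
#ifndef HWCAP3_LS64
#define HWCAP3_LS64	(1UL << 0)
#endif
#ifndef HWCAP3_LS64_V
#define HWCAP3_LS64_V	(1UL << 1)
#endif

int main(void)
{
	unsigned long hwcap3 = getauxval(AT_HWCAP3);

	/* getauxval() returns 0 for an unknown type, so this degrades
	 * gracefully on kernels without HWCAP3 support. */
	printf("FEAT_LS64:   %s\n", (hwcap3 & HWCAP3_LS64) ? "yes" : "no");
	printf("FEAT_LS64_V: %s\n", (hwcap3 & HWCAP3_LS64_V) ? "yes" : "no");

	return 0;
}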

From: Yicong Yang <yangyicong@hisilicon.com>
Subject: [PATCH OLK-6.6 3/5] kselftest/arm64: Add HWCAP test for FEAT_{LS64, LS64_V}

driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IC1F41

----------------------------------------------------------------------

Add tests for FEAT_{LS64, LS64_V}. Issue the related instructions when
the feature is present; no SIGILL should be received. Since these
instructions have requirements on the target memory type
(Device/non-cacheable without FEAT_LS64WB) and on completer support,
we may receive a SIGBUS during the test. Just ignore it, since we only
test whether the instructions themselves can be issued as expected on
platforms declaring support for these features.

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Hongye Lin <linhongye@h-partners.com>
---
 tools/testing/selftests/arm64/abi/hwcap.c | 90 +++++++++++++++++++++++
 1 file changed, 90 insertions(+)

diff --git a/tools/testing/selftests/arm64/abi/hwcap.c b/tools/testing/selftests/arm64/abi/hwcap.c
index e3d262831d91..d5cc55e80438 100644
--- a/tools/testing/selftests/arm64/abi/hwcap.c
+++ b/tools/testing/selftests/arm64/abi/hwcap.c
@@ -11,6 +11,8 @@
 #include <stdlib.h>
 #include <string.h>
 #include <unistd.h>
+#include <linux/auxvec.h>
+#include <linux/compiler.h>
 #include <sys/auxv.h>
 #include <sys/prctl.h>
 #include <asm/hwcap.h>
@@ -289,6 +291,78 @@ static void uscat_sigbus(void)
 	asm volatile(".inst 0xb820003f" : : : );
 }

+static void ignore_signal(int sig, siginfo_t *info, void *context)
+{
+	ucontext_t *uc = context;
+
+	uc->uc_mcontext.pc += 4;
+}
+
+static void ls64_sigill(void)
+{
+	struct sigaction ign, old;
+	char src[64] __aligned(64) = { 1 };
+
+	/*
+	 * LS64, LS64_V require target memory to be Device/Non-cacheable (if
+	 * FEAT_LS64WB not supported) and the completer supports these
+	 * instructions, otherwise we'll receive a SIGBUS. Since we are only
+	 * testing the ABI here, so just ignore the SIGBUS and see if we can
+	 * execute the instructions without receiving a SIGILL. Restore the
+	 * handler of SIGBUS after this test.
+	 */
+	ign.sa_sigaction = ignore_signal;
+	ign.sa_flags = SA_SIGINFO | SA_RESTART;
+	sigemptyset(&ign.sa_mask);
+	sigaction(SIGBUS, &ign, &old);
+
+	register void *xn asm ("x8") = src;
+	register u64 xt_1 asm ("x0");
+	register u64 __maybe_unused xt_2 asm ("x1");
+	register u64 __maybe_unused xt_3 asm ("x2");
+	register u64 __maybe_unused xt_4 asm ("x3");
+	register u64 __maybe_unused xt_5 asm ("x4");
+	register u64 __maybe_unused xt_6 asm ("x5");
+	register u64 __maybe_unused xt_7 asm ("x6");
+	register u64 __maybe_unused xt_8 asm ("x7");
+
+	/* LD64B x0, [x8] */
+	asm volatile(".inst 0xf83fd100" : "=r" (xt_1) : "r" (xn));
+
+	/* ST64B x0, [x8] */
+	asm volatile(".inst 0xf83f9100" : : "r" (xt_1), "r" (xn));
+
+	sigaction(SIGBUS, &old, NULL);
+}
+
+static void ls64_v_sigill(void)
+{
+	struct sigaction ign, old;
+	char dst[64] __aligned(64);
+
+	/* See comment in ls64_sigill() */
+	ign.sa_sigaction = ignore_signal;
+	ign.sa_flags = SA_SIGINFO | SA_RESTART;
+	sigemptyset(&ign.sa_mask);
+	sigaction(SIGBUS, &ign, &old);
+
+	register void *xn asm ("x8") = dst;
+	register u64 xt_1 asm ("x0") = 1;
+	register u64 __maybe_unused xt_2 asm ("x1") = 2;
+	register u64 __maybe_unused xt_3 asm ("x2") = 3;
+	register u64 __maybe_unused xt_4 asm ("x3") = 4;
+	register u64 __maybe_unused xt_5 asm ("x4") = 5;
+	register u64 __maybe_unused xt_6 asm ("x5") = 6;
+	register u64 __maybe_unused xt_7 asm ("x6") = 7;
+	register u64 __maybe_unused xt_8 asm ("x7") = 8;
+	register u64 st asm ("x9");
+
+	/* ST64BV x9, x0, [x8] */
+	asm volatile(".inst 0xf829b100" : "=r" (st) : "r" (xt_1), "r" (xn));
+
+	sigaction(SIGBUS, &old, NULL);
+}
+
 static const struct hwcap_data {
 	const char *name;
 	unsigned long at_hwcap;
@@ -563,6 +637,22 @@ static const struct hwcap_data {
 		.sigill_fn = hbc_sigill,
 		.sigill_reliable = true,
 	},
+	{
+		.name = "LS64",
+		.at_hwcap = AT_HWCAP3,
+		.hwcap_bit = HWCAP3_LS64,
+		.cpuinfo = "ls64",
+		.sigill_fn = ls64_sigill,
+		.sigill_reliable = true,
+	},
+	{
+		.name = "LS64_V",
+		.at_hwcap = AT_HWCAP3,
+		.hwcap_bit = HWCAP3_LS64_V,
+		.cpuinfo = "ls64_v",
+		.sigill_fn = ls64_v_sigill,
+		.sigill_reliable = true,
+	},
 };

 typedef void (*sighandler_fn)(int, siginfo_t *, void *);

-- 
2.33.0

From: Yicong Yang <yangyicong@hisilicon.com>
Subject: [PATCH OLK-6.6 4/5] arm64: Add ESR.DFSC definition of unsupported exclusive or atomic access

driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IC1F41

----------------------------------------------------------------------

DFSC 0x35 indicates an IMPLEMENTATION DEFINED fault for unsupported
Exclusive or Atomic access. Add the ESR_ELx_FSC definition and a
corresponding wrapper.

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Hongye Lin <linhongye@h-partners.com>
---
 arch/arm64/include/asm/esr.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 276eb39ed9a7..1def4e552d37 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -130,6 +130,7 @@
 #define ESR_ELx_FSC_SECC_TTW1	(0x1d)
 #define ESR_ELx_FSC_SECC_TTW2	(0x1e)
 #define ESR_ELx_FSC_SECC_TTW3	(0x1f)
+#define ESR_ELx_FSC_EXCL_ATOMIC	(0x35)

 /* ISS field definitions for Data Aborts */
 #define ESR_ELx_ISV_SHIFT	(24)
@@ -398,6 +399,13 @@ static inline bool esr_is_data_abort(unsigned long esr)
 	return ec == ESR_ELx_EC_DABT_LOW || ec == ESR_ELx_EC_DABT_CUR;
 }

+static inline bool esr_fsc_is_excl_atomic_fault(unsigned long esr)
+{
+	esr = esr & ESR_ELx_FSC;
+
+	return esr == ESR_ELx_FSC_EXCL_ATOMIC;
+}
+
 const char *esr_get_class_string(unsigned long esr);

 #endif /* __ASSEMBLY */

-- 
2.33.0
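For completeness, the new wrapper simply masks the fault status code out of ESR and compares it with 0x35. The standalone sketch below mirrors that behaviour; the ESR_ELx_FSC mask value (0x3f) and the sample ESR are assumptions for illustration only.

#include <stdbool.h>
#include <stdio.h>

#define ESR_ELx_FSC		(0x3f)	/* fault status code mask, assumed per the ARM ARM */
#define ESR_ELx_FSC_EXCL_ATOMIC	(0x35)

/* Same check as the kernel helper added above. */
static bool esr_fsc_is_excl_atomic_fault(unsigned long esr)
{
	return (esr & ESR_ELx_FSC) == ESR_ELx_FSC_EXCL_ATOMIC;
}

int main(void)
{
	/* Example ESR value for a data abort whose DFSC is 0x35 */
	unsigned long esr = 0x96000035UL;

	printf("excl/atomic fault: %d\n", esr_fsc_is_excl_atomic_fault(esr));
	return 0;
}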

From: Yicong Yang <yangyicong@hisilicon.com>
Subject: [PATCH OLK-6.6 5/5] KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory

driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IC1F41

----------------------------------------------------------------------

If FEAT_LS64WB is not supported, the FEAT_LS64* instructions can only
access Device/Uncacheable memory; otherwise a data abort for
unsupported Exclusive or atomic access (DFSC 0x35) is generated per
the spec. It is IMPLEMENTATION DEFINED to which exception level the
fault is routed, and it may be routed to EL2 on a VHE VM. Per
DDI0487K.a Section C3.2.12.2 "Single-copy atomic 64-byte load/store":

  The check is performed against the resulting memory type after all
  enabled stages of translation. In this case the fault is reported at
  the final enabled stage of translation.

If the implementation generates the DABT at the final enabled stage of
translation (stage-2), inject a DABT into the guest and let it handle
the fault.

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Hongye Lin <linhongye@h-partners.com>
---
 arch/arm64/include/asm/kvm_emulate.h |  1 +
 arch/arm64/kvm/inject_fault.c        | 35 ++++++++++++++++++++++++++++
 arch/arm64/kvm/mmu.c                 | 22 ++++++++++++++++-
 3 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index f0b10cb2c87d..131faad7f015 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -45,6 +45,7 @@ void kvm_skip_instr32(struct kvm_vcpu *vcpu);
 void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 void kvm_inject_vabt(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
+void kvm_inject_dabt_excl_atomic(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_size_fault(struct kvm_vcpu *vcpu);

diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 0bd93a5f21ce..d5b98ec3f8ea 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -171,6 +171,41 @@ void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr)
 	inject_abt64(vcpu, false, addr);
 }

+/**
+ * kvm_inject_dabt_excl_atomic - inject a data abort for unsupported exclusive
+ *				 or atomic access
+ * @vcpu: The VCPU to receive the data abort
+ * @addr: The address to report in the DFAR
+ *
+ * It is assumed that this code is called from the VCPU thread and that the
+ * VCPU therefore is not currently executing guest code.
+ */
+void kvm_inject_dabt_excl_atomic(struct kvm_vcpu *vcpu, unsigned long addr)
+{
+	unsigned long cpsr = *vcpu_cpsr(vcpu);
+	u64 esr = 0;
+
+	pend_sync_exception(vcpu);
+
+	if (kvm_vcpu_trap_il_is32bit(vcpu))
+		esr |= ESR_ELx_IL;
+
+	if ((cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
+		esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;
+	else
+		esr |= ESR_ELx_EC_DABT_CUR << ESR_ELx_EC_SHIFT;
+
+	esr |= ESR_ELx_FSC_EXCL_ATOMIC;
+
+	if (match_target_el(vcpu, unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC))) {
+		vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
+		vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
+	} else {
+		vcpu_write_sys_reg(vcpu, addr, FAR_EL2);
+		vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
+	}
+}
+
 /**
  * kvm_inject_pabt - inject a prefetch abort into the guest
  * @vcpu: The VCPU to receive the prefetch abort

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 3830aa0b07a0..5c65189d28be 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1549,6 +1549,25 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (exec_fault && device)
 		return -ENOEXEC;

+	if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(vcpu))) {
+		/*
+		 * Target address is normal memory on the Host. We come here
+		 * because:
+		 * 1) Guest map it as device memory and perform LS64 operations
+		 * 2) VMM report it as device memory mistakenly
+		 * Warn the VMM and inject the DABT back to the guest.
+		 */
+		if (!device)
+			kvm_err("memory attributes maybe incorrect for hva 0x%lx\n", hva);
+
+		/*
+		 * Otherwise it's a piece of device memory on the Host.
+		 * Inject the DABT back to the guest since the mapping
+		 * is wrong.
+		 */
+		kvm_inject_dabt_excl_atomic(vcpu, kvm_vcpu_get_hfar(vcpu));
+	}
+
 	read_lock(&kvm->mmu_lock);
 	pgt = vcpu->arch.hw_mmu->pgt;
 	if (mmu_invalidate_retry(kvm, mmu_seq))
@@ -1719,7 +1738,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	/* Check the stage-2 fault is trans. fault or write fault */
 	if (fault_status != ESR_ELx_FSC_FAULT &&
 	    fault_status != ESR_ELx_FSC_PERM &&
-	    fault_status != ESR_ELx_FSC_ACCESS) {
+	    fault_status != ESR_ELx_FSC_ACCESS &&
+	    !esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(vcpu))) {
 		kvm_err("Unsupported FSC: EC=%#x xFSC=%#lx ESR_EL2=%#lx\n",
 			kvm_vcpu_trap_get_class(vcpu),
 			(unsigned long)kvm_vcpu_trap_get_fault(vcpu),

-- 
2.33.0

FeedBack:
The patch(es) you sent to the kernel@openeuler.org mailing list have been
converted to a pull request successfully!
Pull request link:
https://gitee.com/openeuler/kernel/pulls/15873
Mailing list address:
https://mailweb.openeuler.org/archives/list/kernel@openeuler.org/message/UJQ...