hulk inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/7894

--------------------------------

Currently, the MPAM implementation on Linux applies isolation control
only to tasks running in user space, while resource allocation for
execution in kernel mode follows the root group's (partid = 0) policy.
With MPAM adding L2 cache support, and given the small L2 capacity,
frequently switching partition policies between kernel and user space
would cause L2 thrashing and degrade performance.

Set MPAM1_EL1 to the same value as MPAM0_EL1 rather than keeping its
default value. The advantages of this are:

1. It is closer to the x86 model, where the closid applies globally.
2. All partids are usable from user space, and user space can't bypass
   MPAM controls by doing the work in the kernel.
3. It supports MPAM isolation for kernel-space execution: kernel space
   adopts the same partition policy as user space.

However:

1. This causes some priority inversion, where a high-priority task
   waits to take a mutex held by another task whose resources are
   restricted by MPAM.
2. It also adds some extra isb()s.
Signed-off-by: Zeng Heng <zengheng4@huawei.com>
---
 arch/arm64/include/asm/mpam.h  |  3 +++
 arch/arm64/kernel/cpufeature.c | 10 +++-------
 arch/arm64/kernel/mpam.c       |  4 +++-
 3 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
index 8caac70e22ed..5f1ac30ea470 100644
--- a/arch/arm64/include/asm/mpam.h
+++ b/arch/arm64/include/asm/mpam.h
@@ -161,6 +161,9 @@ static inline void mpam_thread_switch(struct task_struct *tsk)
 		return;
 
 	/* Synchronising this write is left until the ERET to EL0 */
+	write_sysreg_s(regval, SYS_MPAM1_EL1);
+	isb();
+
 	write_sysreg_s(regval, SYS_MPAM0_EL1);
 	WRITE_ONCE(per_cpu(arm64_mpam_current, cpu), regval);
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 414b9d28ecd1..a1928cf3c887 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2419,13 +2419,9 @@ cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
 	if (idr & MPAMIDR_HAS_HCR)
 		write_sysreg_s(0, SYS_MPAMHCR_EL2);
 
-	/*
-	 * Access by the kernel (at EL1) should use the reserved PARTID
-	 * which is configured unrestricted. This avoids priority-inversion
-	 * where latency sensitive tasks have to wait for a task that has
-	 * been throttled to release the lock.
-	 */
-	write_sysreg_s(0, SYS_MPAM1_EL1);
+	write_sysreg_s(regval, SYS_MPAM1_EL1);
+	isb();
+	write_sysreg_s(regval, SYS_MPAM0_EL1);
 }
diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
index 3f070cbab420..b7c974076ac5 100644
--- a/arch/arm64/kernel/mpam.c
+++ b/arch/arm64/kernel/mpam.c
@@ -40,7 +40,9 @@ static int mpam_pm_notifier(struct notifier_block *self,
 	 * value has changed under our feet.
 	 */
 	regval = READ_ONCE(per_cpu(arm64_mpam_current, cpu));
-	write_sysreg_s(0, SYS_MPAM1_EL1);
+	write_sysreg_s(regval, SYS_MPAM1_EL1);
+	isb();
+	write_sysreg_s(regval, SYS_MPAM0_EL1);
 
 	return NOTIFY_OK;
-- 
2.25.1