From: Kefeng Wang <wangkefeng.wang@huawei.com>

maillist inclusion
category: performance
bugzilla: https://gitee.com/openeuler/kernel/issues/I9E07B
CVE: NA
Reference: https://lore.kernel.org/linux-mm/20240403083805.1818160-1-wangkefeng.wang@hu...
--------------------------------
The vma's vm_flags are already checked under the per-VMA lock, so if the access is bad, set fault to VM_FAULT_BADACCESS and handle the error directly; there is no need to retry with mmap_lock. This reduces latency by 34% in the 'lat_sig -P 1 prot lat_sig' test from the lmbench suite.
Since the page fault is handled under the per-VMA lock, count it as a VMA_LOCK_SUCCESS vma lock event.
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
 arch/arm64/mm/fault.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 7d304e0132b7..e5b38a26bc73 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -622,7 +622,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 
 	if (!(vma->vm_flags & vm_flags)) {
 		vma_end_read(vma);
-		goto lock_mmap;
+		fault = VM_FAULT_BADACCESS;
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		goto done;
 	}
 	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
 	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
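
For clarity, this is roughly how the per-VMA-lock fast path in do_page_fault() reads once the hunk above is applied. It is a sketch reconstructed from the diff context, with the surrounding code abbreviated; the lock_vma_under_rcu() setup shown before the hunk is an assumption based on the usual arm64 fault-handler structure, not part of this patch.

	/* Try the lockless per-VMA path first (setup assumed, not in this hunk). */
	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		goto lock_mmap;

	if (!(vma->vm_flags & vm_flags)) {
		vma_end_read(vma);
		/* Bad access: fail with VM_FAULT_BADACCESS here, no mmap_lock retry. */
		fault = VM_FAULT_BADACCESS;
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
		goto done;
	}

	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);

With this change the mmap_lock fallback is taken only when the per-VMA lock path genuinely cannot resolve the fault; a permission failure is now reported entirely under the per-VMA lock, which is where the measured latency win comes from.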