[PATCH OLK-5.10 0/2] two bugfixes for arm64 HVO support

two bugfixes for arm64 HVO support

Nanyong Sun (2):
  mm: HVO: fix hard lockup in split_vmemmap_huge_pmd under x86
  arm64: mm: HVO: fix deadlock in split vmemmap pmd

 arch/arm64/include/asm/pgtable.h |  2 ++
 mm/sparse-vmemmap.c              | 18 ++++++++++++++----
 2 files changed, 16 insertions(+), 4 deletions(-)

-- 
2.34.1

FeedBack:
The patch(es) which you have sent to kernel@openeuler.org mailing list has been converted to a pull request successfully!
Pull request link:
https://gitee.com/openeuler/kernel/pulls/17603
Mailing list address:
https://mailweb.openeuler.org/archives/list/kernel@openeuler.org/message/6KF...

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/ICTXL2

-------------------------------

Commit 94238c5ff554 ("[Huawei] arm64: mm: HVO: make spin_lock irq safe")
is improper for x86 because it disables irqs before the TLB flush in
split_vmemmap_huge_pmd(), and that flush needs to send IPIs on x86. The
comment in smp_call_function_many_cond() points out that a deadlock can
happen when it is called with interrupts disabled. Revert to spin_lock
for architectures other than arm64.

Fixes: 4529b88488e4 ("arm64: mm: HVO: support BBM of vmemmap pgtable safely")
Signed-off-by: Nanyong Sun <sunnanyong@huawei.com>
---
 arch/arm64/include/asm/pgtable.h |  2 ++
 mm/sparse-vmemmap.c              | 12 ++++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index f914c30b7487..91e2c5a1bf29 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1043,6 +1043,8 @@ void vmemmap_update_pmd(unsigned long addr, pmd_t *pmdp, pte_t *ptep);
 #define vmemmap_update_pmd vmemmap_update_pmd
 void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte);
 #define vmemmap_update_pte vmemmap_update_pte
+#define vmemmap_split_lock(lock)	spin_lock_irq(lock)
+#define vmemmap_split_unlock(lock)	spin_unlock_irq(lock)
 #endif
 
 #endif /* !__ASSEMBLY__ */
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index a47b027af1f7..04dd9133aae7 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -70,6 +70,14 @@ static inline void vmemmap_update_pte(unsigned long addr,
 }
 #endif
 
+#ifndef vmemmap_split_lock
+#define vmemmap_split_lock(lock)	spin_lock(lock)
+#endif
+
+#ifndef vmemmap_split_unlock
+#define vmemmap_split_unlock(lock)	spin_unlock(lock)
+#endif
+
 #ifndef vmemmap_flush_tlb_all
 static inline void vmemmap_flush_tlb_all(void)
 {
@@ -107,7 +115,7 @@ static int __split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 
-	spin_lock_irq(&init_mm.page_table_lock);
+	vmemmap_split_lock(&init_mm.page_table_lock);
 	if (likely(pmd_leaf(*pmd))) {
 		/*
 		 * Higher order allocations from buddy allocator must be able to
@@ -124,7 +132,7 @@ static int __split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
 	} else {
 		pte_free_kernel(&init_mm, pgtable);
 	}
-	spin_unlock_irq(&init_mm.page_table_lock);
+	vmemmap_split_unlock(&init_mm.page_table_lock);
 
 	return 0;
 }
-- 
2.34.1
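[Editor's note, not part of the patch: the net effect of the override pair above is easier to see side by side. The snippet below is an illustrative condensation of the two hunks; the macro names and their placement come from the diff, while the comments are added here for explanation.]

/* arch/arm64/include/asm/pgtable.h: arm64 keeps the irq-safe variant
 * introduced by commit 94238c5ff554.
 */
#define vmemmap_split_lock(lock)	spin_lock_irq(lock)
#define vmemmap_split_unlock(lock)	spin_unlock_irq(lock)

/* mm/sparse-vmemmap.c: generic fallback for all other architectures,
 * including x86. Interrupts stay enabled, so the IPIs needed by the
 * TLB flush performed under this lock can still be serviced.
 */
#ifndef vmemmap_split_lock
#define vmemmap_split_lock(lock)	spin_lock(lock)
#endif
#ifndef vmemmap_split_unlock
#define vmemmap_split_unlock(lock)	spin_unlock(lock)
#endif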

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/ICTXL2

-------------------------------

If task A holds zone->lock and then triggers a page fault by touching a
struct page whose vmemmap is in the middle of break-before-make (BBM) by
another task, an ABBA deadlock can happen:

CPU0                                    CPU1
----------------------------------------------------------------
free_pages()
  got zone->lock
touch struct page in BBM
__do_kernel_fault()
  check spurious PF
  vmemmap_handle_page_fault()           __split_vmemmap_huge_pmd()
                                          got init_mm.page_table_lock
                                        pte_free_kernel()
    want init_mm.page_table_lock          want zone->lock
                      <--- DEAD LOCK

Fix this by moving pte_free_kernel() out of the init_mm.page_table_lock
critical section. The probability of hitting this issue should be very
low because in most cases the page table has already been restored by
the time the spurious page fault check runs.

Fixes: 4529b88488e4 ("arm64: mm: HVO: support BBM of vmemmap pgtable safely")
Signed-off-by: Nanyong Sun <sunnanyong@huawei.com>
---
 mm/sparse-vmemmap.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 04dd9133aae7..b2536a3e5e2d 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -129,11 +129,13 @@ static int __split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
 		smp_wmb();
 		vmemmap_update_pmd(start, pmd, pgtable);
 		vmemmap_flush_tlb_range(start, start + PMD_SIZE);
-	} else {
-		pte_free_kernel(&init_mm, pgtable);
+		pgtable = NULL;
 	}
 	vmemmap_split_unlock(&init_mm.page_table_lock);
 
+	if (unlikely(pgtable))
+		pte_free_kernel(&init_mm, pgtable);
+
 	return 0;
 }
 
-- 
2.34.1
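[Editor's note, not part of the patch: to make the lock-ordering change easier to follow, here is a rough sketch of the tail of __split_vmemmap_huge_pmd() as it reads with both patches applied. It is condensed from the hunks above; the PTE setup loop and the original comment block are omitted, and the comments below are explanatory additions, not taken from the source.]

	vmemmap_split_lock(&init_mm.page_table_lock);
	if (likely(pmd_leaf(*pmd))) {
		/* Publish the new pte table and flush the stale huge-PMD
		 * TLB entries while holding the lock.
		 */
		smp_wmb();
		vmemmap_update_pmd(start, pmd, pgtable);
		vmemmap_flush_tlb_range(start, start + PMD_SIZE);
		pgtable = NULL;	/* consumed by the PMD, must not be freed */
	}
	vmemmap_split_unlock(&init_mm.page_table_lock);

	/* Free the unused pte page only after dropping the lock:
	 * pte_free_kernel() may need zone->lock, and taking zone->lock
	 * while holding init_mm.page_table_lock is one half of the ABBA
	 * deadlock shown above.
	 */
	if (unlikely(pgtable))
		pte_free_kernel(&init_mm, pgtable);

	return 0;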