[PATCH OLK-6.6 0/2] two bugfixes for arm64 HVO support

Two bugfixes for arm64 HVO support.

Nanyong Sun (2):
  mm: HVO: fix hard lockup in split_vmemmap_huge_pmd under x86
  arm64: mm: HVO: fix deadlock in split vmemmap pmd

 arch/arm64/include/asm/pgtable.h |  2 ++
 mm/hugetlb_vmemmap.c             | 18 ++++++++++++++----
 2 files changed, 16 insertions(+), 4 deletions(-)

-- 
2.34.1

FeedBack: The patch(es) you sent to the kernel@openeuler.org mailing list have been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/17600
Mailing list address: https://mailweb.openeuler.org/archives/list/kernel@openeuler.org/message/PIV...

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IBZIRG

-------------------------------

Commit 9f779dc0a09c ("arm64: mm: HVO: support BBM of vmemmap pgtable
safely") is improper for x86 because it disables interrupts before the
TLB flush in split_vmemmap_huge_pmd(), and flushing the TLB requires
sending IPIs on x86. The comments of smp_call_function_many_cond()
point out that a deadlock can happen when it is called with interrupts
disabled. Revert to plain spin_lock() for architectures other than
arm64.

Fixes: 9f779dc0a09c ("arm64: mm: HVO: support BBM of vmemmap pgtable safely")
Signed-off-by: Nanyong Sun <sunnanyong@huawei.com>
---
 arch/arm64/include/asm/pgtable.h |  2 ++
 mm/hugetlb_vmemmap.c             | 12 ++++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 626e43967e0a..2b66aab73dbc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1515,6 +1515,8 @@ void vmemmap_update_pmd(unsigned long addr, pmd_t *pmdp, pte_t *ptep);
 #define vmemmap_update_pmd vmemmap_update_pmd
 void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte);
 #define vmemmap_update_pte vmemmap_update_pte
+#define vmemmap_split_lock(lock)	spin_lock_irq(lock)
+#define vmemmap_split_unlock(lock)	spin_unlock_irq(lock)
 #endif
 
 #endif /* !__ASSEMBLY__ */
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 149ab629855c..427cfd08069e 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -69,6 +69,14 @@ static inline void vmemmap_flush_tlb_range(unsigned long start,
 }
 #endif
 
+#ifndef vmemmap_split_lock
+#define vmemmap_split_lock(lock)	spin_lock(lock)
+#endif
+
+#ifndef vmemmap_split_unlock
+#define vmemmap_split_unlock(lock)	spin_unlock(lock)
+#endif
+
 static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
 {
 	pmd_t __pmd;
@@ -99,7 +107,7 @@ static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 
-	spin_lock_irq(&init_mm.page_table_lock);
+	vmemmap_split_lock(&init_mm.page_table_lock);
 	if (likely(pmd_leaf(*pmd))) {
 		/*
 		 * Higher order allocations from buddy allocator must be able to
@@ -116,7 +124,7 @@ static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
 	} else {
 		pte_free_kernel(&init_mm, pgtable);
 	}
-	spin_unlock_irq(&init_mm.page_table_lock);
+	vmemmap_split_unlock(&init_mm.page_table_lock);
 
 	return 0;
 }
-- 
2.34.1

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IBZIRG

-------------------------------

If task A holds zone->lock and then triggers a page fault by touching a
struct page that another task is remapping with break-before-make (BBM),
an ABBA deadlock can happen:

CPU0                                  CPU1
----------------------------------------------------------------
free_pages()
  got zone->lock
touch struct page in BBM
__do_kernel_fault()
  check spurious PF
  vmemmap_handle_page_fault()
    split_vmemmap_huge_pmd()          got init_mm.page_table_lock
                                      pte_free_kernel()
      want init_mm.page_table_lock      want zone->lock

                                    <--- DEAD LOCK

Fix this by moving pte_free_kernel() out of the init_mm.page_table_lock
critical section. The probability of encountering this issue should be
very low because, in most cases, the page table has already been
restored by the time the spurious page-fault check runs.

Fixes: 9f779dc0a09c ("arm64: mm: HVO: support BBM of vmemmap pgtable safely")
Signed-off-by: Nanyong Sun <sunnanyong@huawei.com>
---
 mm/hugetlb_vmemmap.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 427cfd08069e..2bde429b2ea3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -121,11 +121,13 @@ static int split_vmemmap_huge_pmd(pmd_t *pmd, unsigned long start)
 		smp_wmb();
 		vmemmap_update_pmd(start, pmd, pgtable);
 		vmemmap_flush_tlb_range(start, start + PMD_SIZE);
-	} else {
-		pte_free_kernel(&init_mm, pgtable);
+		pgtable = NULL;
 	}
 	vmemmap_split_unlock(&init_mm.page_table_lock);
 
+	if (unlikely(pgtable))
+		pte_free_kernel(&init_mm, pgtable);
+
 	return 0;
 }
-- 
2.34.1
participants (2)
- Nanyong Sun
- patchwork bot