[PATCH kernel-4.19 1/2] mm: vmalloc: prevent use after free in _vm_unmap_aliases

From: Vijayanand Jitta <vjitta@codeaurora.org>

mainline inclusion
from mainline-5.13-rc1
commit ad216c0316ad6391d90f4de0a7f59396b2925a06
category: feature
bugzilla: NA
CVE: NA

---------------------------

A potential use after free can occur in _vm_unmap_aliases where an
already freed vmap_area could be accessed. Consider the following
scenario:

Process 1                                       Process 2

__vm_unmap_aliases                              __vm_unmap_aliases
  purge_fragmented_blocks_allcpus                 rcu_read_lock()
    rcu_read_lock()
      list_del_rcu(&vb->free_list)
                                                list_for_each_entry_rcu(vb .. )
  __purge_vmap_area_lazy
    kmem_cache_free(va)
                                                va_start = vb->va->va_start

Here Process 1 is in the purge path: it does list_del_rcu on the
vmap_block and later frees the vmap_area. Since Process 2 was holding
the rcu lock at this time, the vmap_block is still visible to it, so
Process 2 accesses the vmap_area of that vmap_block, which was already
freed by Process 1, resulting in a use after free.

Fix this by checking vb->dirty before accessing the vmap_area
structure: vb->dirty is set to VMAP_BBMAP_BITS in the purge path, so
checking for that value prevents the use after free.

Link: https://lkml.kernel.org/r/1616062105-23263-1-git-send-email-vjitta@codeauror...
Signed-off-by: Vijayanand Jitta <vjitta@codeaurora.org>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Rui Xiang <rui.xiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/vmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8c70131e0b078..011a84ebec04d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1953,7 +1953,7 @@ void vm_unmap_aliases(void)
 	rcu_read_lock();
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 		spin_lock(&vb->lock);
-		if (vb->dirty) {
+		if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 			unsigned long va_start = vb->va->va_start;
 			unsigned long s, e;
-- 
2.25.1
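To make the sentinel logic concrete, here is a minimal single-threaded
userspace sketch (simplified struct layouts, a made-up VMAP_BBMAP_BITS
value, and hypothetical purge_block()/flush_block() stand-ins for the
kernel paths — not the kernel's actual code). It shows why a reader
that still sees a block must treat dirty == VMAP_BBMAP_BITS as "fully
purged" and never dereference vb->va:

#include <stdio.h>
#include <stdlib.h>

#define VMAP_BBMAP_BITS 1024      /* sentinel: block is fully dirty/purged
                                   * (arbitrary stand-in value here) */

struct vmap_area  { unsigned long va_start; };
struct vmap_block {
	struct vmap_area *va;     /* freed by the purge path */
	unsigned long dirty;      /* dirty pages; VMAP_BBMAP_BITS == purged */
};

/* Models the purge path (purge_fragmented_blocks + __purge_vmap_area_lazy):
 * mark the block fully dirty *before* freeing its vmap_area, so any
 * concurrent RCU reader that still sees the block knows to skip it. */
static void purge_block(struct vmap_block *vb)
{
	vb->dirty = VMAP_BBMAP_BITS;  /* published before the free */
	free(vb->va);
	vb->va = NULL;                /* the real code can't rely on this;
	                               * an RCU reader may hold a stale view */
}

/* Models the fixed _vm_unmap_aliases loop: only dereference vb->va for
 * blocks that are dirty but not fully purged. */
static void flush_block(struct vmap_block *vb)
{
	if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS)
		printf("flushing block at va_start=%#lx\n", vb->va->va_start);
	else
		printf("skipping block (dirty=%lu)\n", vb->dirty);
}

int main(void)
{
	struct vmap_area *va = malloc(sizeof(*va));
	struct vmap_block vb = { .va = va, .dirty = 3 };

	va->va_start = 0xffffc90000000000UL;
	flush_block(&vb);   /* dirty but live: safe to flush */
	purge_block(&vb);
	flush_block(&vb);   /* purged: the check prevents the UAF */
	return 0;
}

The real fix gets the same effect under rcu_read_lock(): the purge path
writes vb->dirty before kmem_cache_free(va), so the added check filters
out purged blocks before their vmap_area can be touched.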

[PATCH kernel-4.19 2/2] mm/ioremap: fix iomap_max_page_shift

From: Christophe Leroy <christophe.leroy@csgroup.eu>

mainline inclusion
from mainline-5.13-rc2
commit 86d0c164272536c732853e19391de5159f860701
category: feature
bugzilla: NA
CVE: NA

---------------------------

iomap_max_page_shift is expected to contain a page shift, so it can't
be a 'bool'; it has to be an 'unsigned int'.

Also fix the default values, which were swapped: PAGE_SHIFT is the
value that disables huge iomap, while huge iomap is what P4D_SHIFT
allows. However, on some architectures (eg: powerpc book3s/64),
P4D_SHIFT is not a constant, so it can't be used to initialise a
static variable. So, initialise iomap_max_page_shift with the maximum
shift supported by the architecture; it is gated by P4D_SHIFT in
vmap_try_huge_p4d() anyway.

Link: https://lkml.kernel.org/r/ad2d366015794a9f21320dcbdd0a8eb98979e9df.162089811...
Fixes: bbc180a5adb0 ("mm: HUGE_VMAP arch support cleanup")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Rui Xiang <rui.xiang@huawei.com>
Reviewed-by: Tang Yizhou <tangyizhou@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/ioremap.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/ioremap.c b/mm/ioremap.c
index 8b1905bf03d26..53827cbf4c2cb 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -15,16 +15,16 @@
 #include <asm/pgtable.h>
 
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-static bool __ro_after_init iomap_max_page_shift = PAGE_SHIFT;
+static unsigned int __ro_after_init iomap_max_page_shift = BITS_PER_LONG - 1;
 
 static int __init set_nohugeiomap(char *str)
 {
-	iomap_max_page_shift = P4D_SHIFT;
+	iomap_max_page_shift = PAGE_SHIFT;
 	return 0;
 }
 early_param("nohugeiomap", set_nohugeiomap);
 #else /* CONFIG_HAVE_ARCH_HUGE_VMAP */
-static const bool iomap_max_page_shift = PAGE_SHIFT;
+static const unsigned int iomap_max_page_shift = PAGE_SHIFT;
 #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
 int ioremap_page_range(unsigned long addr,
-- 
2.25.1
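To see why the cap must carry a real shift, here is a toy userspace
model (a hypothetical try_shift() helper with hard-coded x86-64-style
shift values; the kernel's actual gating lives in the
vmap_try_huge_*() helpers): with the cap at BITS_PER_LONG - 1 every
supported mapping size passes, while nohugeiomap's PAGE_SHIFT cap
forces base-page mappings — a distinction a bool cannot express.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12   /* 4K base pages */
#define PMD_SHIFT  21   /* 2M mappings   */
#define PUD_SHIFT  30   /* 1G mappings   */

/* Mirrors the shape of the kernel's gating: a mapping of size
 * (1 << shift) is usable only if the cap allows it and the virtual
 * range and physical address are size-aligned and large enough. */
static bool try_shift(unsigned long addr, unsigned long end,
		      unsigned long phys, unsigned int shift,
		      unsigned int max_page_shift)
{
	unsigned long size = 1UL << shift;

	if (max_page_shift < shift)
		return false;         /* capped, e.g. by nohugeiomap */
	if ((end - addr) < size)
		return false;         /* range too small */
	if ((addr | phys) & (size - 1))
		return false;         /* misaligned */
	return true;
}

int main(void)
{
	unsigned long addr = 0xffff800000000000UL;
	unsigned long phys = 0x40000000UL;       /* 1G-aligned */
	unsigned long end  = addr + (1UL << 30);

	unsigned int shifts[] = { PUD_SHIFT, PMD_SHIFT, PAGE_SHIFT };
	unsigned int caps[]   = { 63 /* BITS_PER_LONG - 1 */, PAGE_SHIFT };

	for (int c = 0; c < 2; c++)
		for (int s = 0; s < 3; s++)
			printf("cap=%-2u shift=%-2u -> %s\n",
			       caps[c], shifts[s],
			       try_shift(addr, end, phys, shifts[s], caps[c])
					? "huge ok" : "fall back");
	return 0;
}

Had the variable stayed a bool, both PAGE_SHIFT and P4D_SHIFT would
collapse to true, so the comparison against candidate shifts could
never distinguish "huge allowed" from "base pages only".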