From: Thomas Gleixner <tglx@linutronix.de>
mainline inclusion
from mainline-v5.10-rc1
commit 23357b61f8062a8a8c9c84c0252056cd6d849ec8
category: bugfix
bugzilla: 43219
CVE: NA
------------------------------------------------
Dereferencing irq_data before checking it for NULL is suboptimal: if irq_domain_get_irq_data() returns NULL, irqd_cfg() dereferences the NULL pointer before the check can catch it. Only call irqd_cfg() once irq_data is known to be valid.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Conflicts:
	drivers/iommu/amd/iommu.c
Signed-off-by: Chen Huang <chenhuang5@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 drivers/iommu/amd_iommu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 8148f240aac31..d7a288dd15667 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -4254,8 +4254,8 @@ static int irq_remapping_alloc(struct irq_domain *domain, unsigned int virq,
 	for (i = 0; i < nr_irqs; i++) {
 		irq_data = irq_domain_get_irq_data(domain, virq + i);
-		cfg = irqd_cfg(irq_data);
-		if (!irq_data || !cfg) {
+		cfg = irq_data ? irqd_cfg(irq_data) : NULL;
+		if (!cfg) {
 			ret = -EINVAL;
 			goto out_free_data;
 		}
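For readers outside the kernel tree, a minimal self-contained C sketch of the bug class this hunk fixes; the struct and function names are illustrative stand-ins, not the kernel's:

#include <stddef.h>
#include <stdio.h>

struct irq_cfg_stub { int vector; };
struct irq_data_stub { struct irq_cfg_stub cfg; };

/* Stand-in for irqd_cfg(): unconditionally dereferences its argument. */
static struct irq_cfg_stub *cfg_of(struct irq_data_stub *d)
{
	return &d->cfg;
}

static int setup_one(struct irq_data_stub *d)
{
	/*
	 * Before the fix, the helper ran first and the "!d" test came
	 * too late; the patched shape only calls the helper once d is
	 * known to be non-NULL.
	 */
	struct irq_cfg_stub *cfg = d ? cfg_of(d) : NULL;

	if (!cfg)
		return -1;	/* mirrors the -EINVAL bail-out */
	return cfg->vector;
}

int main(void)
{
	struct irq_data_stub d = { .cfg = { .vector = 42 } };
	printf("%d %d\n", setup_one(&d), setup_one(NULL));
	return 0;
}

Beyond readability, ordering matters here: compilers may delete a NULL check that follows a dereference of the same pointer, since the dereference already lets them assume the pointer is non-NULL.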
From: Jianqun Xu <jay.xu@rock-chips.com>
mainline inclusion
from mainline-v5.9-rc1
commit 835832ba01bb444c7e45139e4b807527c119dafc
category: bugfix
bugzilla: 41397
CVE: NA
-------------------------------------------------
In some cases the CMA area cannot be activated, but cma_alloc() may still be called on it; the kernel then crashes with a NULL pointer dereference.
Add a bitmap validity check in cma_alloc() to avoid this issue.
Signed-off-by: Jianqun Xu <jay.xu@rock-chips.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Link: http://lkml.kernel.org/r/20200615010123.15596-1-jay.xu@rock-chips.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/cma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/cma.c b/mm/cma.c
index 4c2864270a39b..f4df1bcbaf3ba 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -430,7 +430,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	struct page *page = NULL;
 	int ret = -ENOMEM;
 
-	if (!cma || !cma->count)
+	if (!cma || !cma->count || !cma->bitmap)
 		return NULL;
 
 	pr_debug("%s(cma %p, count %zu, align %d)\n", __func__, (void *)cma,
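A minimal self-contained C sketch of the guard this hunk adds; the struct and function names are illustrative stand-ins for the kernel's struct cma and cma_alloc(), not the real definitions:

#include <stddef.h>

/* Only the fields the check touches are modelled here. */
struct cma_area {
	unsigned long count;	/* size of the area in pages */
	unsigned long *bitmap;	/* NULL when activation failed or never ran */
};

static void *cma_alloc_sketch(struct cma_area *cma)
{
	/*
	 * Mirrors the patched guard: bail out early when the area has
	 * no bitmap, instead of crashing in the bitmap scan below.
	 */
	if (!cma || !cma->count || !cma->bitmap)
		return NULL;

	/* ... the real function scans cma->bitmap for a free range ... */
	return cma->bitmap;
}

int main(void)
{
	struct cma_area dead = { .count = 64, .bitmap = NULL };
	return cma_alloc_sketch(&dead) == NULL ? 0 : 1;	/* guard trips */
}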
From: Qian Cai <cai@lca.pw>
mainline inclusion
from mainline-v5.9-rc1
commit a1f459354a0f011a7baf52e5f16cd9ca95f1e6f2
category: bugfix
bugzilla: 41543
CVE: NA
-------------------------------------------------
The struct list_lru_one member l.nr_items could be accessed concurrently, as noticed by KCSAN:
BUG: KCSAN: data-race in list_lru_count_one / list_lru_isolate_move
write to 0xffffa102789c4510 of 8 bytes by task 823 on cpu 39:
 list_lru_isolate_move+0xf9/0x130
 list_lru_isolate_move at mm/list_lru.c:180
 inode_lru_isolate+0x12b/0x2a0
 __list_lru_walk_one+0x122/0x3d0
 list_lru_walk_one+0x75/0xa0
 prune_icache_sb+0x8b/0xc0
 super_cache_scan+0x1b8/0x250
 do_shrink_slab+0x256/0x6d0
 shrink_slab+0x41b/0x4a0
 shrink_node+0x35c/0xd80
 balance_pgdat+0x652/0xd90
 kswapd+0x396/0x8d0
 kthread+0x1e0/0x200
 ret_from_fork+0x27/0x50
read to 0xffffa102789c4510 of 8 bytes by task 6345 on cpu 56:
 list_lru_count_one+0x116/0x2f0
 list_lru_count_one at mm/list_lru.c:193
 super_cache_count+0xe8/0x170
 do_shrink_slab+0x95/0x6d0
 shrink_slab+0x41b/0x4a0
 shrink_node+0x35c/0xd80
 do_try_to_free_pages+0x1f7/0xa10
 try_to_free_pages+0x26c/0x5e0
 __alloc_pages_slowpath+0x458/0x1290
 __alloc_pages_nodemask+0x3bb/0x450
 alloc_pages_vma+0x8a/0x2c0
 do_anonymous_page+0x170/0x700
 __handle_mm_fault+0xc9f/0xd00
 handle_mm_fault+0xfc/0x2f0
 do_page_fault+0x263/0x6f9
 page_fault+0x34/0x40
Reported by Kernel Concurrency Sanitizer on:
CPU: 56 PID: 6345 Comm: oom01 Tainted: G W L 5.5.0-next-20200205+ #4
Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
A torn read of l.nr_items could affect the shrinker behaviour due to the data race. Fix it by adding READ_ONCE() for the read. Since the writes are aligned and at most word-sized, assume they are safe from data races, which avoids the readability cost of writing WRITE_ONCE(var, var + val).
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Marco Elver <elver@google.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Link: http://lkml.kernel.org/r/1581114679-5488-1-git-send-email-cai@lca.pw
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/list_lru.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 2961b87ccae72..24e76a91902fd 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -189,7 +189,7 @@ unsigned long list_lru_count_one(struct list_lru *lru,
 
 	rcu_read_lock();
 	l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
-	count = l->nr_items;
+	count = READ_ONCE(l->nr_items);
 	rcu_read_unlock();
 
 	return count;
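A userspace model of why the marked access matters, assuming this stand-in definition of READ_ONCE() (the kernel's real one lives in include/linux/compiler.h):

#include <pthread.h>
#include <stdio.h>

#define READ_ONCE(x)  (*(const volatile __typeof__(x) *)&(x))

static long nr_items;			/* written only under lock */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *writer(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000; i++) {
		pthread_mutex_lock(&lock);
		nr_items++;		/* aligned, word-sized write */
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

static long count_items(void)		/* lockless reader */
{
	/*
	 * A plain "return nr_items;" is a data race the compiler may
	 * split, fuse, or re-read; the volatile access forces exactly
	 * one whole load, matching the patched read.
	 */
	return READ_ONCE(nr_items);
}

int main(void)
{
	pthread_t t;
	pthread_create(&t, NULL, writer, NULL);
	printf("snapshot: %ld\n", count_items());	/* racy but whole */
	pthread_join(&t, NULL);
	printf("final: %ld\n", count_items());
	return 0;
}

The asymmetry in the patch follows the commit message's reasoning: the locked, aligned, word-sized writes stay plain, and only the one lockless read is marked.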
From: Qian Cai <cai@lca.pw>
mainline inclusion
from mainline-v5.9-rc1
commit abe1de4209f670ca81ba0d96f64fa9285c05a5ad
category: bugfix
bugzilla: 41548
CVE: NA
-------------------------------------------------
The mempool_t member pool.curr_nr could be accessed concurrently, as noticed by KCSAN:
BUG: KCSAN: data-race in mempool_free / remove_element
write to 0xffffffffa937638c of 4 bytes by task 6359 on cpu 113:
 remove_element+0x4a/0x1c0
 remove_element at mm/mempool.c:132
 mempool_alloc+0x102/0x210
 (inlined by) mempool_alloc at mm/mempool.c:399
 bio_alloc_bioset+0x106/0x2c0
 get_swap_bio+0x49/0x230
 __swap_writepage+0x680/0xc30
 swap_writepage+0x9c/0xf0
 pageout+0x33e/0xae0
 shrink_page_list+0x1f57/0x2870
 shrink_inactive_list+0x316/0x880
 shrink_lruvec+0x8dc/0x1380
 shrink_node+0x317/0xd80
 do_try_to_free_pages+0x1f7/0xa10
 try_to_free_pages+0x26c/0x5e0
 __alloc_pages_slowpath+0x458/0x1290
 <snip>
read to 0xffffffffa937638c of 4 bytes by interrupt on cpu 64:
 mempool_free+0x3e/0x150
 mempool_free at mm/mempool.c:492
 bio_free+0x192/0x280
 bio_put+0x91/0xd0
 end_swap_bio_write+0x1d8/0x280
 bio_endio+0x2c2/0x5b0
 dec_pending+0x22b/0x440 [dm_mod]
 clone_endio+0xe4/0x2c0 [dm_mod]
 bio_endio+0x2c2/0x5b0
 blk_update_request+0x217/0x940
 scsi_end_request+0x6b/0x4d0
 scsi_io_completion+0xb7/0x7e0
 scsi_finish_command+0x223/0x310
 scsi_softirq_done+0x1d5/0x210
 blk_mq_complete_request+0x224/0x250
 scsi_mq_done+0xc2/0x250
 pqi_raid_io_complete+0x5a/0x70 [smartpqi]
 pqi_irq_handler+0x150/0x1410 [smartpqi]
 __handle_irq_event_percpu+0x90/0x540
 handle_irq_event_percpu+0x49/0xd0
 handle_irq_event+0x85/0xca
 handle_edge_irq+0x13f/0x3e0
 do_IRQ+0x86/0x190
 <snip>
The write is done under pool->lock, but the read is lockless. Even though commit 5b990546e334 ("mempool: fix and document synchronization and memory barrier usage") introduced an smp_wmb() and smp_rmb() pair to improve the situation, that is inadequate to protect the read from a data race which could lead to a logic bug, so fix it by adding READ_ONCE() for the read.
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Marco Elver <elver@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: http://lkml.kernel.org/r/1581446384-2131-1-git-send-email-cai@lca.pw
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/mempool.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/mempool.c b/mm/mempool.c
index 0ef8cc8d16022..189104e3aae1f 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -481,7 +481,7 @@ void mempool_free(void *element, mempool_t *pool)
 	 * ensures that there will be frees which return elements to the
 	 * pool waking up the waiters.
 	 */
-	if (unlikely(pool->curr_nr < pool->min_nr)) {
+	if (unlikely(READ_ONCE(pool->curr_nr) < pool->min_nr)) {
 		spin_lock_irqsave(&pool->lock, flags);
 		if (likely(pool->curr_nr < pool->min_nr)) {
 			add_element(pool, element);
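The patched check is the classic lockless-fast-path plus locked-recheck shape. A self-contained C sketch using the same stand-in READ_ONCE() as above; the pool layout and function name are illustrative, not the kernel's:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define READ_ONCE(x)  (*(const volatile __typeof__(x) *)&(x))

struct pool {
	pthread_mutex_t lock;
	int curr_nr;	/* written only under lock */
	int min_nr;	/* constant after initialization */
};

static bool free_to_pool(struct pool *p)
{
	/*
	 * Fast path: one marked, lockless load. A stale value is
	 * harmless; it only decides whether to try the slow path,
	 * and the locked re-check below decides for real.
	 */
	if (READ_ONCE(p->curr_nr) >= p->min_nr)
		return false;	/* pool already looks full enough */

	pthread_mutex_lock(&p->lock);
	if (p->curr_nr < p->min_nr) {	/* authoritative re-check */
		p->curr_nr++;		/* stands in for add_element() */
		pthread_mutex_unlock(&p->lock);
		return true;
	}
	pthread_mutex_unlock(&p->lock);
	return false;
}

int main(void)
{
	struct pool p = { PTHREAD_MUTEX_INITIALIZER, 1, 4 };
	printf("%d\n", free_to_pool(&p));	/* 1: element kept */
	return 0;
}

Only the unlocked load needs READ_ONCE(); the second comparison runs under pool->lock, where plain accesses are already race-free.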