From: Tang Yizhou <tangyizhou@huawei.com>
ascend inclusion
category: bugfix
bugzilla: 47462
CVE: NA
-------------------------------------------------
KASAN report:
[  127.094921] BUG: KASAN: use-after-free in rb_next+0x18/0xa8
[  127.095591] Read of size 8 at addr ffff8000cffb0130 by task cat/642
[  127.096169]
[  127.096935] CPU: 1 PID: 642 Comm: cat Tainted: G           OE 4.19.170+ #168
[  127.097499] Hardware name: linux,dummy-virt (DT)
[  127.098200] Call trace:
[  127.098508]  dump_backtrace+0x0/0x268
[  127.098885]  show_stack+0x24/0x30
[  127.099241]  dump_stack+0x104/0x15c
[  127.099754]  print_address_description+0x68/0x278
[  127.100317]  kasan_report+0x208/0x328
[  127.100683]  __asan_load8+0x84/0xa8
[  127.101035]  rb_next+0x18/0xa8
[  127.101355]  spa_stat_show+0x148/0x378
[  127.101746]  seq_read+0x160/0x730
[  127.102106]  proc_reg_read+0xac/0x100
[  127.102492]  do_iter_read+0x248/0x290
[  127.102860]  vfs_readv+0xe4/0x140
[  127.103220]  default_file_splice_read+0x298/0x4e0
[  127.103765]  do_splice_to+0xa8/0xe0
[  127.104179]  splice_direct_to_actor+0x180/0x3d8
[  127.104603]  do_splice_direct+0x100/0x178
[  127.104991]  do_sendfile+0x2ec/0x520
[  127.105363]  __arm64_sys_sendfile64+0x204/0x250
[  127.105792]  el0_svc_common+0xb0/0x2d0
[  127.106168]  el0_svc_handler+0x40/0x90
[  127.106523]  el0_svc+0x10/0x248
The reason is that __sp_area_drop_locked(spa) may free spa together with its
rbtree node, so the rb_next(node) performed by the loop increment then reads
from freed memory, i.e. a use-after-free.
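In simplified form, the old loop in rb_spa_stat_show() did roughly the
following (a sketch reconstructed from the hunk context below; the
seq_printf() calls that print the spa fields are elided):

	spin_lock(&sp_area_lock);
	for (node = rb_first(&sp_area_root); node; node = rb_next(node)) {
		spa = rb_entry(node, struct sp_area, rb_node);
		atomic_inc(&spa->use_count);
		spin_unlock(&sp_area_lock);

		/* ... seq_printf() of spa fields without the lock held ... */

		spin_lock(&sp_area_lock);
		__sp_area_drop_locked(spa);	/* may free spa and its rb_node */
	}
	spin_unlock(&sp_area_lock);

If the drop frees spa, the loop increment rb_next(node) walks from a freed
node on the next pass. Fix it by deferring the drop of each spa to the start
of the next iteration, after rb_next() has already been evaluated, and by
dropping the last spa once the loop exits.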
Signed-off-by: Tang Yizhou <tangyizhou@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/share_pool.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/share_pool.c b/mm/share_pool.c
index a14e8c678bbc3..e7926581d9e16 100644
--- a/mm/share_pool.c
+++ b/mm/share_pool.c
@@ -2513,12 +2513,15 @@ int proc_sp_group_state(struct seq_file *m, struct pid_namespace *ns,
 static void rb_spa_stat_show(struct seq_file *seq)
 {
 	struct rb_node *node;
-	struct sp_area *spa;
+	struct sp_area *spa, *prev = NULL;
 
 	spin_lock(&sp_area_lock);
 
 	for (node = rb_first(&sp_area_root); node; node = rb_next(node)) {
+		__sp_area_drop_locked(prev);
+
 		spa = rb_entry(node, struct sp_area, rb_node);
+		prev = spa;
 		atomic_inc(&spa->use_count);
 		spin_unlock(&sp_area_lock);
 
@@ -2557,9 +2560,8 @@ static void rb_spa_stat_show(struct seq_file *seq)
 		seq_printf(seq, "%-10d\n", atomic_read(&spa->use_count));
 
 		spin_lock(&sp_area_lock);
-		__sp_area_drop_locked(spa);
 	}
-
+	__sp_area_drop_locked(prev);
 	spin_unlock(&sp_area_lock);
 }