From: Li Lingfeng <lilingfeng3@huawei.com>
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5SR8X
CVE: NA
--------------------------------
======================================================
WARNING: possible circular locking dependency detected
4.18.0+ #4 Tainted: G ---------r- -
------------------------------------------------------
dmsetup/923 is trying to acquire lock:
000000008d8170dd (kn->count#184){++++}, at: kernfs_remove+0x24/0x40 fs/kernfs/dir.c:1354

but task is already holding lock:
000000003377330b (slab_mutex){+.+.}, at: kmem_cache_destroy+0xec/0x320 mm/slab_common.c:928
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (slab_mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:925 [inline]
       __mutex_lock+0x105/0x11a0 kernel/locking/mutex.c:1072
       slab_attr_store+0x6d/0xe0 mm/slub.c:5526
       sysfs_kf_write+0x10f/0x170 fs/sysfs/file.c:139
       kernfs_fop_write+0x290/0x440 fs/kernfs/file.c:316
       __vfs_write+0x81/0x100 fs/read_write.c:485
       vfs_write+0x184/0x4c0 fs/read_write.c:549
       ksys_write+0xc6/0x1a0 fs/read_write.c:598
       do_syscall_64+0xca/0x5a0 arch/x86/entry/common.c:298
       entry_SYSCALL_64_after_hwframe+0x6a/0xdf
-> #0 (kn->count#184){++++}:
       lock_acquire+0x10f/0x340 kernel/locking/lockdep.c:3868
       kernfs_drain fs/kernfs/dir.c:467 [inline]
       __kernfs_remove fs/kernfs/dir.c:1320 [inline]
       __kernfs_remove+0x6d0/0x890 fs/kernfs/dir.c:1279
       kernfs_remove+0x24/0x40 fs/kernfs/dir.c:1354
       sysfs_remove_dir+0xb6/0xf0 fs/sysfs/dir.c:99
       kobject_del.part.1+0x35/0xe0 lib/kobject.c:573
       kobject_del+0x1b/0x30 lib/kobject.c:569
       shutdown_cache+0x17f/0x310 mm/slab_common.c:592
       kmem_cache_destroy+0x263/0x320 mm/slab_common.c:943
       bio_put_slab block/bio.c:152 [inline]
       bioset_exit+0x20d/0x330 block/bio.c:1916
       cleanup_mapped_device+0x64/0x360 drivers/md/dm.c:1903
       free_dev+0xbc/0x240 drivers/md/dm.c:2058
       __dm_destroy+0x317/0x490 drivers/md/dm.c:2426
       dm_hash_remove_all+0x8f/0x250 drivers/md/dm-ioctl.c:314
       remove_all+0x4d/0x90 drivers/md/dm-ioctl.c:471
       ctl_ioctl+0x426/0x910 drivers/md/dm-ioctl.c:1870
       dm_ctl_ioctl+0x23/0x30 drivers/md/dm-ioctl.c:1892
       vfs_ioctl fs/ioctl.c:46 [inline]
       file_ioctl fs/ioctl.c:509 [inline]
       do_vfs_ioctl+0x1a5/0x1100 fs/ioctl.c:696
       ksys_ioctl+0x7c/0xa0 fs/ioctl.c:713
       __do_sys_ioctl fs/ioctl.c:720 [inline]
       __se_sys_ioctl fs/ioctl.c:718 [inline]
       __x64_sys_ioctl+0x74/0xb0 fs/ioctl.c:718
       do_syscall_64+0xca/0x5a0 arch/x86/entry/common.c:298
       entry_SYSCALL_64_after_hwframe+0x6a/0xdf
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(slab_mutex);
                               lock(kn->count#184);
                               lock(slab_mutex);
  lock(kn->count#184);
A potential deadlock can occur when a slab attribute file in
/sys/kernel/slab/xxx/ is removed and written at the same time.
The lock sequence in the remove path is:
  slab_mutex --> kn->count
The lock sequence in the write path is:
  kn->count --> slab_mutex
Fix this by replacing mutex_lock() with mutex_trylock() in
slab_attr_store().
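For illustration, here is a minimal userspace sketch of the same back-off
pattern, written with pthreads rather than kernel APIs. The names lock_a,
lock_b, remover() and writer() are hypothetical stand-ins for slab_mutex,
the kernfs active reference (kn->count, modeled as a plain mutex for
simplicity), the sysfs removal path and slab_attr_store():

/*
 * Minimal sketch of the ABBA back-off pattern (illustrative names only).
 * lock_a stands in for slab_mutex, lock_b for kn->count.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Remove path: takes the locks in the order slab_mutex --> kn->count. */
static void *remover(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_a);
	pthread_mutex_lock(&lock_b);
	/* ... tear down the sysfs entry ... */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/*
 * Write path: already holds kn->count, so blocking on slab_mutex could
 * deadlock against the remover.  Trylock and bail out with EBUSY instead,
 * mirroring what slab_attr_store() does after the patch.
 */
static void *writer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_b);
	if (pthread_mutex_trylock(&lock_a) != 0) {
		pthread_mutex_unlock(&lock_b);
		fprintf(stderr, "busy, caller may retry\n");
		return (void *)(long)EBUSY;
	}
	/* ... propagate the attribute to child caches ... */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, remover, NULL);
	pthread_create(&t2, NULL, writer, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

The trade-off of this approach is that a write racing with cache
destruction now fails fast with -EBUSY instead of waiting, so callers
are expected to retry.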
Signed-off-by: Li Lingfeng <lilingfeng3@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
---
 mm/slub.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 4bc29bcd0d5d..f9b39a3718d0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5560,7 +5560,10 @@ static ssize_t slab_attr_store(struct kobject *kobj,
 	if (slab_state >= FULL && err >= 0 && is_root_cache(s)) {
 		struct kmem_cache *c;
 
-		mutex_lock(&slab_mutex);
+		if (!mutex_trylock(&slab_mutex)) {
+			pr_warn("slab file is busy\n");
+			return -EBUSY;
+		}
 		if (s->max_attr_size < len)
 			s->max_attr_size = len;