From: Vlastimil Babka <vbabka@suse.cz>
mainline inclusion
from mainline-v5.12-rc1
commit 59450bbc12bee1c4e5dd25e6aa5d6a45a7bd6e81
category: bugfix
bugzilla: 175589
CVE: NA
-------------------------------------------------

SLAB has been using get/put_online_cpus() around creating, destroying and
shrinking kmem caches since 95402b382901 ("cpu-hotplug: replace per-subsystem
mutexes with get_online_cpus()") in 2008, which was supposed to replace a
private mutex (cache_chain_mutex, called slab_mutex today) with a system-wide
mechanism, but in the case of SLAB it is in fact used in addition to the
existing mutex, without explanation why.
SLUB appears to have avoided the cpu hotplug lock initially, but gained it due to common code unification, such as 20cea9683ecc ("mm, sl[aou]b: Move kmem_cache_create mutex handling to common code").
Regardless of the history, checking if the hotplug lock is actually needed today suggests that it's not, and therefore it's better to avoid this system-wide lock and the ordering this imposes wrt other locks (such as slab_mutex).
Specifically, in SLAB we have for_each_online_cpu() in do_tune_cpucache()
protected by slab_mutex, and cpu hotplug callbacks that also take the
slab_mutex, which is also taken by the common slab functions that currently
also take the hotplug lock. Thus the slab_mutex protection should be
sufficient. Also, per-cpu array caches are allocated for each possible cpu, so
they are not affected by their online/offline state.
In SLUB we have for_each_online_cpu() in functions that show statistics and are already unprotected today, as racing with hotplug is not harmful. Otherwise SLUB relies on percpu allocator. The slub_cpu_dead() hotplug callback takes the slab_mutex.
To sum up, this patch removes get/put_online_cpus() calls from slab as it should be safe without further adjustments.
Link: https://lkml.kernel.org/r/20210113131634.3671-4-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Qian Cai <cai@redhat.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chengyang Fan <cy.fan@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/slab_common.c | 12 ------------
 1 file changed, 12 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 5e3dc1e9eaf09..cc7e830c0be73 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -459,7 +459,6 @@ kmem_cache_create_usercopy(const char *name,
 	const char *cache_name;
 	int err;
 
-	get_online_cpus();
 	memcg_get_cache_ids();
 
 	mutex_lock(&slab_mutex);
@@ -511,7 +510,6 @@ kmem_cache_create_usercopy(const char *name,
 	mutex_unlock(&slab_mutex);
 
 	memcg_put_cache_ids();
-	put_online_cpus();
 
 	if (err) {
 		if (flags & SLAB_PANIC)
@@ -914,8 +912,6 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	if (unlikely(!s))
 		return;
 
-	get_online_cpus();
-
 	mutex_lock(&slab_mutex);
 
 	s->refcount--;
@@ -927,12 +923,8 @@ void kmem_cache_destroy(struct kmem_cache *s)
 
 	mutex_unlock(&slab_mutex);
 
-	put_online_cpus();
-
 	flush_memcg_workqueue(s);
 
-	get_online_cpus();
-
 	mutex_lock(&slab_mutex);
 
 	/*
@@ -957,8 +949,6 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	}
 out_unlock:
 	mutex_unlock(&slab_mutex);
-
-	put_online_cpus();
 }
 EXPORT_SYMBOL(kmem_cache_destroy);
 
@@ -973,12 +963,10 @@ int kmem_cache_shrink(struct kmem_cache *cachep)
 {
 	int ret;
 
-	get_online_cpus();
 	kasan_cache_shrink(cachep);
 	ret = __kmem_cache_shrink(cachep);
-	put_online_cpus();
 
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_shrink);