From: "Paul E. McKenney" <paulmck@linux.ibm.com>
mainline inclusion
from mainline-5.0-rc1
commit 6564a25e6c185e65ca3148ed6e18f80882f6798f
category: bugfix
bugzilla: 34611
CVE: NA
-------------------------------------------------

Now that synchronize_rcu() waits for preempt-disable regions of code as
well as RCU read-side critical sections, synchronize_sched() can be
replaced by synchronize_rcu().  This commit therefore makes this change.
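To illustrate the pattern this commit relies on, here is a hypothetical
update-side sketch (not code from this patch; gp, gp_lock, and
do_something() are made-up names). Since the RCU flavor consolidation,
a preempt-disabled region counts as an RCU read-side critical section,
so synchronize_rcu() now also waits for readers that only disable
preemption:

	/* Hypothetical reader: preemption disabled, no rcu_read_lock(). */
	preempt_disable();
	p = rcu_dereference_sched(gp);
	if (p)
		do_something(p);
	preempt_enable();

	/* Hypothetical updater: before the consolidation this required
	 * synchronize_sched(); synchronize_rcu() now waits for the
	 * preempt-disabled region above as well. */
	old = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
	rcu_assign_pointer(gp, NULL);
	synchronize_rcu();	/* was: synchronize_sched() */
	kfree(old);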
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
(cherry picked from commit 6564a25e6c185e65ca3148ed6e18f80882f6798f)
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/slab.c        | 4 ++--
 mm/slab_common.c | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index c96657204508..9a80511afb47 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -965,10 +965,10 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
 	 * To protect lockless access to n->shared during irq disabled context.
 	 * If n->shared isn't NULL in irq disabled context, accessing to it is
 	 * guaranteed to be valid until irq is re-enabled, because it will be
-	 * freed after synchronize_sched().
+	 * freed after synchronize_rcu().
 	 */
 	if (old_shared && force_change)
-		synchronize_sched();
+		synchronize_rcu();
 
 fail:
 	kfree(old_shared);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index a3105880e86d..5bb1afcce3d2 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -739,7 +739,7 @@ void slab_deactivate_memcg_cache_rcu_sched(struct kmem_cache *s,
 	css_get(&s->memcg_params.memcg->css);
 
 	s->memcg_params.deact_fn = deact_fn;
-	call_rcu_sched(&s->memcg_params.deact_rcu_head, kmemcg_deactivate_rcufn);
+	call_rcu(&s->memcg_params.deact_rcu_head, kmemcg_deactivate_rcufn);
 unlock:
 	spin_unlock_irq(&memcg_kmem_wq_lock);
 }
@@ -859,11 +859,11 @@ static void memcg_set_kmem_cache_dying(struct kmem_cache *s)
 
 static void flush_memcg_workqueue(struct kmem_cache *s)
 {
 	/*
-	 * SLUB deactivates the kmem_caches through call_rcu_sched. Make
+	 * SLUB deactivates the kmem_caches through call_rcu. Make
 	 * sure all registered rcu callbacks have been invoked.
 	 */
 	if (IS_ENABLED(CONFIG_SLUB))
-		rcu_barrier_sched();
+		rcu_barrier();
 
 	/*
 	 * SLAB and SLUB create memcg kmem_caches through workqueue and SLUB