From: Miaohe Lin <linmiaohe@huawei.com>
Subject: [PATCH] mm, slub: fix mismatch between reconstructed freelist depth and cnt
stable inclusion
from linux-4.19.214
commit 21e23e4c02fc9aa1f4b037f30e47d215d1a27f21
--------------------------------
commit 899447f669da76cc3605665e1a95ee877bc464cc upstream.
If an object's reuse is delayed, it is excluded from the reconstructed freelist. But we forgot to adjust cnt accordingly, so the reconstructed freelist depth and cnt no longer match. This mismatch leads to free_debug_processing() complaining about the freelist count, or to an incorrect slub inuse count.
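To illustrate the invariant outside of kernel context, here is a minimal standalone C sketch (all names here are hypothetical simplifications, not slub code): every object the hook quarantines is dropped from the rebuilt freelist, so leaving the caller's count untouched produces exactly the depth/cnt mismatch described above.

	#include <stdbool.h>
	#include <stdio.h>

	/* Stand-in for KASAN quarantine: pretend every third object's
	 * reuse is delayed. */
	static bool reuse_is_delayed(int obj)
	{
		return obj % 3 == 0;
	}

	/*
	 * Toy model of the freelist hook: rebuild a freelist of n objects,
	 * skipping delayed ones, and shrink *cnt for each object skipped.
	 * Returns the depth of the rebuilt list.
	 */
	static int rebuild_freelist(int n, int *cnt)
	{
		int depth = 0;

		for (int obj = 0; obj < n; obj++) {
			if (!reuse_is_delayed(obj))
				depth++;	/* object moved to the new freelist */
			else
				--(*cnt);	/* the fix: keep cnt in sync */
		}
		return depth;
	}

	int main(void)
	{
		int cnt = 10;
		int depth = rebuild_freelist(10, &cnt);

		/* Without the --(*cnt) above, depth stays below cnt, which is
		 * the inconsistency free_debug_processing() trips over. */
		printf("freelist depth %d, cnt %d -> %s\n", depth, cnt,
		       depth == cnt ? "consistent" : "MISMATCH");
		return 0;
	}

Running this prints "freelist depth 7, cnt 7 -> consistent"; deleting the --(*cnt) line reproduces the MISMATCH case.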
Link: https://lkml.kernel.org/r/20210916123920.48704-3-linmiaohe@huawei.com
Fixes: c3895391df38 ("kasan, slub: fix handling of kasan_slab_free hook")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Bharata B Rao <bharata@linux.ibm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Faiyaz Mohammed <faiyazm@codeaurora.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 mm/slub.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 04aa35f650877..7b5630ca92746 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1398,7 +1398,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
 }
 
 static inline bool slab_free_freelist_hook(struct kmem_cache *s,
-					   void **head, void **tail)
+					   void **head, void **tail,
+					   int *cnt)
 {
 /*
  * Compiler cannot detect this function can be removed if slab_free_hook()
@@ -1427,6 +1428,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 			*head = object;
 			if (!*tail)
 				*tail = object;
+		} else {
+			/*
+			 * Adjust the reconstructed freelist depth
+			 * accordingly if object's reuse is delayed.
+			 */
+			--(*cnt);
 		}
 	} while (object != old_tail);
 
@@ -2994,7 +3001,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
 	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
 	 * to remove objects, whose reuse must be delayed.
 	 */
-	if (slab_free_freelist_hook(s, &head, &tail))
+	if (slab_free_freelist_hook(s, &head, &tail, &cnt))
 		do_slab_free(s, page, head, tail, cnt, addr);
 }