From: Alexander Potapenko <glider@google.com>
mainline inclusion
from mainline-v5.12-rc1
commit 2b8305260fb37fc20e13f71e13073304d0a031c8
category: feature
bugzilla: 181005 https://gitee.com/openeuler/kernel/issues/I4EUY7
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
-----------------------------------------------
Make KFENCE compatible with KASAN. Currently this helps test KFENCE itself, where KASAN can catch potential corruptions to KFENCE state, or other corruptions that may be a result of freepointer corruptions in the main allocators.
[akpm@linux-foundation.org: merge fixup]
[andreyknvl@google.com: untag addresses for KFENCE]
  Link: https://lkml.kernel.org/r/9dc196006921b191d25d10f6e611316db7da2efc.161194615...
Link: https://lkml.kernel.org/r/20201103175841.3495947-7-elver@google.com
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Jann Horn <jannh@google.com>
Co-developed-by: Marco Elver <elver@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hillf Danton <hdanton@sina.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joern Engel <joern@purestorage.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: SeongJae Park <sjpark@amazon.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
	mm/kasan/kasan.h
	mm/kasan/shadow.c
[Peng Liu: cherry-pick from 2b8305260fb37fc20e13f71e13073304d0a031c8]
Signed-off-by: Peng Liu <liupeng256@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Yingjie Shang <1415317271@qq.com>
Reviewed-by: Bixuan Cui <cuibixuan@huawei.com>
---
 lib/Kconfig.kfence |  2 +-
 mm/kasan/common.c  | 18 ++++++++++++++++++
 mm/kasan/generic.c |  3 ++-
 3 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/lib/Kconfig.kfence b/lib/Kconfig.kfence
index b88ac9d6b2e6..edfecb5d6165 100644
--- a/lib/Kconfig.kfence
+++ b/lib/Kconfig.kfence
@@ -5,7 +5,7 @@ config HAVE_ARCH_KFENCE

 menuconfig KFENCE
 	bool "KFENCE: low-overhead sampling-based memory safety error detector"
-	depends on HAVE_ARCH_KFENCE && !KASAN && (SLAB || SLUB)
+	depends on HAVE_ARCH_KFENCE && (SLAB || SLUB)
 	select STACKTRACE
 	help
 	  KFENCE is a low-overhead sampling-based detector of heap out-of-bounds
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 950fd372a07e..6c8fa5aed54c 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -124,6 +124,10 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value)
 	 */
 	address = reset_tag(address);

+	/* Skip KFENCE memory if called explicitly outside of sl*b. */
+	if (is_kfence_address(address))
+		return;
+
 	shadow_start = kasan_mem_to_shadow(address);
 	shadow_end = kasan_mem_to_shadow(address + size);

@@ -141,6 +145,14 @@ void kasan_unpoison_shadow(const void *address, size_t size)
 	 */
 	address = reset_tag(address);

+	/*
+	 * Skip KFENCE memory if called explicitly outside of sl*b. Also note
+	 * that calls to ksize(), where size is not a multiple of machine-word
+	 * size, would otherwise poison the invalid portion of the word.
+	 */
+	if (is_kfence_address(address))
+		return;
+
 	kasan_poison_shadow(address, size, tag);

 	if (size & KASAN_SHADOW_MASK) {
@@ -396,6 +408,9 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 	tagged_object = object;
 	object = reset_tag(object);

+	if (is_kfence_address(object))
+		return false;
+
 	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
 	    object)) {
 		kasan_report_invalid_free(tagged_object, ip);
@@ -444,6 +459,9 @@ static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
 	if (unlikely(object == NULL))
 		return NULL;

+	if (is_kfence_address(kasan_reset_tag(object)))
+		return (void *)object;
+
 	redzone_start = round_up((unsigned long)(object + size),
 				KASAN_SHADOW_SCALE_SIZE);
 	redzone_end = round_up((unsigned long)object + cache->object_size,
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 2efc48444e77..c4c56ec8a472 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -21,6 +21,7 @@
 #include <linux/init.h>
 #include <linux/kasan.h>
 #include <linux/kernel.h>
+#include <linux/kfence.h>
 #include <linux/kmemleak.h>
 #include <linux/linkage.h>
 #include <linux/memblock.h>
@@ -332,7 +333,7 @@ void kasan_record_aux_stack(void *addr)
 	struct kasan_alloc_meta *alloc_info;
 	void *object;

-	if (!(page && PageSlab(page)))
+	if (is_kfence_address(addr) || !(page && PageSlab(page)))
 		return;

 	cache = page->slab_cache;