This patchset introduces the following two features:
1. The number of KFENCE objects is a key factor in balancing bug detection capability against performance overhead, and it often needs to be tuned for the actual workload. The number of objects can now be configured dynamically via the kfence.num_objects command-line parameter, so a tuning change no longer requires releasing a new kernel build, which reduces cost.
2. On arm64, enabling KFENCE dynamically after system startup would normally require turning all kernel memblock memory into page-granularity mappings, which increases page table memory overhead. To save that memory, only the KFENCE pool is mapped at page granularity. The dynamic-enable capability is activated by setting kfence.sample_interval=-1 on the kernel command line; in that case the KFENCE pool is allocated at boot by default, and subsequent dynamic enabling and disabling consumes no additional memory. An example command line is shown below.
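For example, assuming both new options are enabled in the kernel config, the two parameters could be combined on the boot command line as follows (1023 is just an illustrative value within the supported range):

    kfence.num_objects=1023 kfence.sample_interval=-1

This reserves the KFENCE pool for 1023 objects at boot while leaving KFENCE disabled until it is switched on at runtime.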
These features have been tested on the ARM64, x86_64 and ARM architectures, and the KFENCE KUnit test results are consistent with the native ones.
ChangeLog:
- Add config KFENCE_MUST_EARLY_INIT to isolate the new code paths; no functional change when it is disabled.
- Minor documentation changes; no functional changes.
Ze Zuo (2):
  kfence: Add a module parameter to adjust kfence objects
  arm64: kfence: scale sample_interval to support early init for kfence
 Documentation/dev-tools/kfence.rst |   8 +-
 arch/arm64/include/asm/kfence.h    |   3 +
 arch/arm64/mm/mmu.c                |   5 +
 include/linux/kfence.h             |  13 +-
 lib/Kconfig.kfence                 |  28 +++++
 mm/kfence/core.c                   | 189 +++++++++++++++++++++++++----
 mm/kfence/kfence.h                 |   4 +-
 mm/kfence/kfence_test.c            |   2 +-
 8 files changed, 222 insertions(+), 30 deletions(-)
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8Q3P9
--------------------------------
KFENCE is designed to be enabled in production kernels, but it can also be useful in some debugging situations. On machines with limited memory and CPU resources, KASAN is really hard to run; fortunately, KFENCE can be a suitable candidate. For KFENCE running on a single machine, the probability of discovering existing bugs increases with the number of KFENCE objects, but so does the memory cost. To balance bug detection against memory cost, the number of KFENCE objects needs to be adjusted to the memory resources of the machine running a given compiled kernel image. Adding a module parameter to adjust the number of KFENCE objects lets the same kernel image be used on different machines.
In short, the following reasons motivate us to add this parameter:
1) In some debugging situations, it makes KFENCE more flexible.
2) For production machines with different memory and CPU sizes, it reduces the burden of maintaining multiple kernel image versions.
The main change is to replace CONFIG_KFENCE_NUM_OBJECTS with the variable kfence_num_objects so that the value can be configured dynamically. To keep this compatible, kfence_metadata and alloc_covered are now allocated with memblock_alloc.
Unfortunately, dynamic configuration requires the KFENCE pool size to be a variable, which adds instructions (e.g. a load) to the fast path of memory allocation, so performance degrades. To avoid this cost on production machines that do not need it, an admittedly ugly macro is used to isolate the changes, as sketched below.
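In essence, the isolation introduced in include/linux/kfence.h (see the diff below) boils down to:

    #ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS
    extern int kfence_num_objects;
    #define KFENCE_NR_OBJECTS kfence_num_objects        /* runtime value: adds a load */
    #else
    #define KFENCE_NR_OBJECTS CONFIG_KFENCE_NUM_OBJECTS /* compile-time constant */
    #endif

    #define KFENCE_POOL_SIZE ((KFENCE_NR_OBJECTS + 1) * 2 * PAGE_SIZE)

With CONFIG_KFENCE_DYNAMIC_OBJECTS disabled, KFENCE_POOL_SIZE stays a compile-time constant and the allocation fast path is unchanged.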
Signed-off-by: Ze Zuo <zuoze1@huawei.com>
---
 Documentation/dev-tools/kfence.rst |   8 +-
 include/linux/kfence.h             |   9 +-
 lib/Kconfig.kfence                 |  12 +++
 mm/kfence/core.c                   | 146 ++++++++++++++++++++++++++---
 mm/kfence/kfence.h                 |   4 +-
 mm/kfence/kfence_test.c            |   2 +-
 6 files changed, 163 insertions(+), 18 deletions(-)
diff --git a/Documentation/dev-tools/kfence.rst b/Documentation/dev-tools/kfence.rst index 936f6aaa75c8..1790698d8f3d 100644 --- a/Documentation/dev-tools/kfence.rst +++ b/Documentation/dev-tools/kfence.rst @@ -53,13 +53,19 @@ configurable via the Kconfig option ``CONFIG_KFENCE_DEFERRABLE``. The KUnit test suite is very likely to fail when using a deferrable timer since it currently causes very unpredictable sample intervals.
-The KFENCE memory pool is of fixed size, and if the pool is exhausted, no +If ``CONFIG_KFENCE_DYNAMIC_OBJECTS`` is disabled, +the KFENCE memory pool is of fixed size, and if the pool is exhausted, no further KFENCE allocations occur. With ``CONFIG_KFENCE_NUM_OBJECTS`` (default 255), the number of available guarded objects can be controlled. Each object requires 2 pages, one for the object itself and the other one used as a guard page; object pages are interleaved with guard pages, and every object page is therefore surrounded by two guard pages.
+If ``CONFIG_KFENCE_DYNAMIC_OBJECTS`` is enabled, +the KFENCE memory pool size could be set via the kernel boot parameter +``kfence.num_objects``. Note, the performance will degrade due to additional +instructions(eg, load) added to the fast path of the memory allocation. + The total memory dedicated to the KFENCE memory pool can be computed as::
( #objects + 1 ) * 2 * PAGE_SIZE diff --git a/include/linux/kfence.h b/include/linux/kfence.h index 401af4757514..d228e0a4676d 100644 --- a/include/linux/kfence.h +++ b/include/linux/kfence.h @@ -19,12 +19,19 @@
extern unsigned long kfence_sample_interval;
+#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS +extern int kfence_num_objects; +#define KFENCE_NR_OBJECTS kfence_num_objects +#else +#define KFENCE_NR_OBJECTS CONFIG_KFENCE_NUM_OBJECTS +#endif + /* * We allocate an even number of pages, as it simplifies calculations to map * address to metadata indices; effectively, the very first page serves as an * extended guard page, but otherwise has no special purpose. */ -#define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE) +#define KFENCE_POOL_SIZE ((KFENCE_NR_OBJECTS + 1) * 2 * PAGE_SIZE) extern char *__kfence_pool;
DECLARE_STATIC_KEY_FALSE(kfence_allocation_key); diff --git a/lib/Kconfig.kfence b/lib/Kconfig.kfence index 459dda9ef619..999be97173f9 100644 --- a/lib/Kconfig.kfence +++ b/lib/Kconfig.kfence @@ -57,6 +57,18 @@ config KFENCE_DEFERRABLE
Say N if you are unsure.
+config KFENCE_DYNAMIC_OBJECTS + bool "Support dynamic configuration of the number of guarded objects" + default n + help + Enable dynamic configuration of the number of KFENCE guarded objects. + If this config is enabled, the number of KFENCE guarded objects could + be overridden via boot parameter "kfence.num_objects". Note that the + performance will degrade due to additional instructions(eg, load) + added to the fast path of the memory allocation. + + Say N if you are unsure. + config KFENCE_STATIC_KEYS bool "Use static keys to set up allocations" if EXPERT depends on JUMP_LABEL diff --git a/mm/kfence/core.c b/mm/kfence/core.c index 3872528d0963..d39ebe647670 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -111,11 +111,79 @@ module_param_named(check_on_panic, kfence_check_on_panic, bool, 0444); char *__kfence_pool __read_mostly; EXPORT_SYMBOL(__kfence_pool); /* Export for test modules. */
+#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS +/* + * The number of kfence objects will affect performance and bug detection + * accuracy. The initial value of this global parameter is determined by + * compiling settings. + */ +int kfence_num_objects = CONFIG_KFENCE_NUM_OBJECTS; +EXPORT_SYMBOL(kfence_num_objects); /* Export for test modules. */ + +#define MIN_KFENCE_OBJECTS 1 +#define MAX_KFENCE_OBJECTS 65535 + +static int param_set_num_objects(const char *val, const struct kernel_param *kp) +{ + int num; + + if (system_state != SYSTEM_BOOTING) + return -EINVAL; /* Cannot adjust KFENCE objects number on-the-fly. */ + + if (kstrtoint(val, 0, &num) < 0) + return -EINVAL; + + if (num < MIN_KFENCE_OBJECTS || num > MAX_KFENCE_OBJECTS) { + pr_warn("kfence_num_objects = %d is not in valid range [%d, %d]\n", + num, MIN_KFENCE_OBJECTS, MAX_KFENCE_OBJECTS); + return -EINVAL; + } + + *((unsigned long *)kp->arg) = num; + return 0; +} + +static int param_get_num_objects(char *buffer, const struct kernel_param *kp) +{ + if (!READ_ONCE(kfence_enabled)) + return sprintf(buffer, "0\n"); + + return param_get_int(buffer, kp); +} + +static const struct kernel_param_ops num_objects_param_ops = { + .set = param_set_num_objects, + .get = param_get_num_objects, +}; +module_param_cb(num_objects, &num_objects_param_ops, &kfence_num_objects, 0600); + +#ifdef CONFIG_ARM64 +static int __init parse_num_objects(char *str) +{ + int num; + + if (kstrtoint(str, 0, &num) < 0) + return 0; + if (num < MIN_KFENCE_OBJECTS || num > MAX_KFENCE_OBJECTS) + return 0; + kfence_num_objects = num; + return 0; +} +early_param("kfence.num_objects", parse_num_objects); +#endif +#endif + +#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS +#define ILOG2(x) (ilog2((x))) +#else +#define ILOG2(x) (const_ilog2((x))) + /* * Per-object metadata, with one-to-one mapping of object metadata to * backing pages (in __kfence_pool). */ -static_assert(CONFIG_KFENCE_NUM_OBJECTS > 0); +static_assert(KFENCE_NR_OBJECTS > 0); +#endif struct kfence_metadata *kfence_metadata __read_mostly;
/* @@ -150,11 +218,16 @@ atomic_t kfence_allocation_gate = ATOMIC_INIT(1); * P(alloc_traces) = (1 - e^(-HNUM * (alloc_traces / SIZE)) ^ HNUM */ #define ALLOC_COVERED_HNUM 2 -#define ALLOC_COVERED_ORDER (const_ilog2(CONFIG_KFENCE_NUM_OBJECTS) + 2) +#define ALLOC_COVERED_ORDER (ILOG2(KFENCE_NR_OBJECTS) + 2) #define ALLOC_COVERED_SIZE (1 << ALLOC_COVERED_ORDER) #define ALLOC_COVERED_HNEXT(h) hash_32(h, ALLOC_COVERED_ORDER) #define ALLOC_COVERED_MASK (ALLOC_COVERED_SIZE - 1) +#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS +static atomic_t *alloc_covered; +static phys_addr_t covered_size; +#else static atomic_t alloc_covered[ALLOC_COVERED_SIZE]; +#endif
/* Stack depth used to determine uniqueness of an allocation. */ #define UNIQUE_ALLOC_STACK_DEPTH ((size_t)8) @@ -194,7 +267,7 @@ static_assert(ARRAY_SIZE(counter_names) == KFENCE_COUNTER_COUNT);
static inline bool should_skip_covered(void) { - unsigned long thresh = (CONFIG_KFENCE_NUM_OBJECTS * kfence_skip_covered_thresh) / 100; + unsigned long thresh = (KFENCE_NR_OBJECTS * kfence_skip_covered_thresh) / 100;
return atomic_long_read(&counters[KFENCE_COUNTER_ALLOCATED]) > thresh; } @@ -256,7 +329,7 @@ static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *m
/* Only call with a pointer into kfence_metadata. */ if (KFENCE_WARN_ON(meta < kfence_metadata || - meta >= kfence_metadata + CONFIG_KFENCE_NUM_OBJECTS)) + meta >= kfence_metadata + KFENCE_NR_OBJECTS)) return 0;
/* @@ -567,6 +640,36 @@ static void rcu_guarded_free(struct rcu_head *h) kfence_guarded_free((void *)meta->addr, meta, false); }
+#ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS +static int __ref kfence_dynamic_init(void) +{ + covered_size = sizeof(atomic_t) * ALLOC_COVERED_SIZE; + + if (system_state < SYSTEM_RUNNING) + alloc_covered = memblock_alloc(covered_size, PAGE_SIZE); + else + alloc_covered = kzalloc(covered_size, GFP_KERNEL); + if (!alloc_covered) { + pr_err("failed to allocate covered\n"); + return -ENOMEM; + } + + return 0; +} + +static void __ref kfence_dynamic_destroy(void) +{ + if (system_state < SYSTEM_RUNNING) + memblock_free(alloc_covered, covered_size); + else + kfree(alloc_covered); + alloc_covered = NULL; +} +#else +static int __init kfence_dynamic_init(void) { return 0; } +static void __init kfence_dynamic_destroy(void) { } +#endif + /* * Initialization of the KFENCE pool after its allocation. * Returns 0 on success; otherwise returns the address up to @@ -618,7 +721,7 @@ static unsigned long kfence_init_pool(void) addr += PAGE_SIZE; }
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) { + for (i = 0; i < KFENCE_NR_OBJECTS; i++) { struct kfence_metadata *meta = &kfence_metadata_init[i];
/* Initialize metadata. */ @@ -691,6 +794,7 @@ static bool __init kfence_init_pool_early(void) memblock_free_late(__pa(kfence_metadata_init), KFENCE_METADATA_SIZE); kfence_metadata_init = NULL;
+ kfence_dynamic_destroy(); return false; }
@@ -715,7 +819,7 @@ DEFINE_SHOW_ATTRIBUTE(stats); */ static void *start_object(struct seq_file *seq, loff_t *pos) { - if (*pos < CONFIG_KFENCE_NUM_OBJECTS) + if (*pos < KFENCE_NR_OBJECTS) return (void *)((long)*pos + 1); return NULL; } @@ -727,7 +831,7 @@ static void stop_object(struct seq_file *seq, void *v) static void *next_object(struct seq_file *seq, void *v, loff_t *pos) { ++*pos; - if (*pos < CONFIG_KFENCE_NUM_OBJECTS) + if (*pos < KFENCE_NR_OBJECTS) return (void *)((long)*pos + 1); return NULL; } @@ -774,7 +878,7 @@ static void kfence_check_all_canary(void) { int i;
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) { + for (i = 0; i < KFENCE_NR_OBJECTS; i++) { struct kfence_metadata *meta = &kfence_metadata[i];
if (meta->state == KFENCE_OBJECT_ALLOCATED) @@ -845,6 +949,8 @@ void __init kfence_alloc_pool_and_metadata(void) if (!kfence_sample_interval) return;
+ if (kfence_dynamic_init()) + return; /* * If the pool has already been initialized by arch, there is no need to * re-allocate the memory pool. @@ -854,6 +960,7 @@ void __init kfence_alloc_pool_and_metadata(void)
if (!__kfence_pool) { pr_err("failed to allocate pool\n"); + kfence_dynamic_destroy(); return; }
@@ -863,6 +970,7 @@ void __init kfence_alloc_pool_and_metadata(void) pr_err("failed to allocate metadata\n"); memblock_free(__kfence_pool, KFENCE_POOL_SIZE); __kfence_pool = NULL; + kfence_dynamic_destroy(); } }
@@ -883,7 +991,7 @@ static void kfence_init_enable(void) queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE, - CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool, + KFENCE_NR_OBJECTS, (void *)__kfence_pool, (void *)(__kfence_pool + KFENCE_POOL_SIZE)); }
@@ -913,11 +1021,18 @@ static int kfence_init_late(void)
#ifdef CONFIG_CONTIG_ALLOC struct page *pages; +#endif + + if (kfence_dynamic_init()) + return -ENOMEM;
+#ifdef CONFIG_CONTIG_ALLOC pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL, first_online_node, NULL); - if (!pages) + if (!pages) { + kfence_dynamic_destroy(); return -ENOMEM; + }
__kfence_pool = page_to_virt(pages); pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL, first_online_node, @@ -932,8 +1047,10 @@ static int kfence_init_late(void) }
__kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL); - if (!__kfence_pool) + if (!__kfence_pool) { + kfence_dynamic_destroy(); return -ENOMEM; + }
kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE, GFP_KERNEL); #endif @@ -977,6 +1094,9 @@ static int kfence_enable_late(void)
WRITE_ONCE(kfence_enabled, true); queue_delayed_work(system_unbound_wq, &kfence_timer, 0); + pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE, + KFENCE_NR_OBJECTS, (void *)__kfence_pool, + (void *)(__kfence_pool + KFENCE_POOL_SIZE)); pr_info("re-enabled\n"); return 0; } @@ -991,7 +1111,7 @@ void kfence_shutdown_cache(struct kmem_cache *s) if (!smp_load_acquire(&kfence_metadata)) return;
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) { + for (i = 0; i < KFENCE_NR_OBJECTS; i++) { bool in_use;
meta = &kfence_metadata[i]; @@ -1030,7 +1150,7 @@ void kfence_shutdown_cache(struct kmem_cache *s) } }
- for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) { + for (i = 0; i < KFENCE_NR_OBJECTS; i++) { meta = &kfence_metadata[i];
/* See above. */ diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h index f46fbb03062b..2165208d64de 100644 --- a/mm/kfence/kfence.h +++ b/mm/kfence/kfence.h @@ -103,7 +103,7 @@ struct kfence_metadata { };
#define KFENCE_METADATA_SIZE PAGE_ALIGN(sizeof(struct kfence_metadata) * \ - CONFIG_KFENCE_NUM_OBJECTS) + KFENCE_NR_OBJECTS)
extern struct kfence_metadata *kfence_metadata;
@@ -122,7 +122,7 @@ static inline struct kfence_metadata *addr_to_metadata(unsigned long addr) * error. */ index = (addr - (unsigned long)__kfence_pool) / (PAGE_SIZE * 2) - 1; - if (index < 0 || index >= CONFIG_KFENCE_NUM_OBJECTS) + if (index < 0 || index >= KFENCE_NR_OBJECTS) return NULL;
return &kfence_metadata[index]; diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c index 95b2b84c296d..9cef21127da8 100644 --- a/mm/kfence/kfence_test.c +++ b/mm/kfence/kfence_test.c @@ -624,7 +624,7 @@ static void test_gfpzero(struct kunit *test) break; test_free(buf2);
- if (kthread_should_stop() || (i == CONFIG_KFENCE_NUM_OBJECTS)) { + if (kthread_should_stop() || (i == KFENCE_NR_OBJECTS)) { kunit_warn(test, "giving up ... cannot get same object back\n"); return; }
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8Q3P9
--------------------------------
The KFENCE pool requires the linear map to be mapped at page granularity, and on arm64 this must be done very early. To support late initialisation of KFENCE on arm64, all mappings would have to be turned into page-level mappings, which increases memory consumption. To save page table memory, arm64 can instead map only the pages of the KFENCE pool itself at page granularity; as a consequence, the KFENCE pool can no longer be allocated from the buddy system.
To keep KFENCE flexible, setting "kfence.sample_interval" to -1 makes the KFENCE pool be allocated from early (memblock) memory while leaving KFENCE disabled by default. After system startup, KFENCE can be enabled by setting "kfence.sample_interval" to a non-zero value, and turned off again by setting it to 0 or -1. Note that disabling KFENCE does not free the memory associated with it.
Note: on non-ARM64 architectures, setting kfence.sample_interval to -1 behaves the same as setting it to 0, regardless of whether CONFIG_KFENCE_MUST_EARLY_INIT is enabled.
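For example, after booting with kfence.sample_interval=-1, a later runtime toggle would look roughly like this (a sketch, assuming the standard module parameter sysfs path):

    # enable KFENCE with a 100 ms sample interval
    echo 100 > /sys/module/kfence/parameters/sample_interval

    # disable again; the pool memory reserved at boot is not freed
    echo 0 > /sys/module/kfence/parameters/sample_interval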
Signed-off-by: Ze Zuo <zuoze1@huawei.com>
---
 arch/arm64/include/asm/kfence.h |  3 +++
 arch/arm64/mm/mmu.c             |  5 ++++
 include/linux/kfence.h          |  4 +++
 lib/Kconfig.kfence              | 16 ++++++++++++
 mm/kfence/core.c                | 45 +++++++++++++++++++++++----------
 5 files changed, 60 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h index a81937fae9f6..36052893433f 100644 --- a/arch/arm64/include/asm/kfence.h +++ b/arch/arm64/include/asm/kfence.h @@ -23,6 +23,9 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect) extern bool kfence_early_init; static inline bool arm64_kfence_can_set_direct_map(void) { + if (IS_ENABLED(CONFIG_KFENCE_MUST_EARLY_INIT)) + return false; + return !kfence_early_init; } #else /* CONFIG_KFENCE */ diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 47781bec6171..58d228de4808 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -520,6 +520,11 @@ static int __init parse_kfence_early_init(char *arg)
if (get_option(&arg, &val)) kfence_early_init = !!val; + +#if IS_ENABLED(CONFIG_KFENCE_MUST_EARLY_INIT) + kfence_must_early_init = (val == -1) ? true : false; +#endif + return 0; } early_param("kfence.sample_interval", parse_kfence_early_init); diff --git a/include/linux/kfence.h b/include/linux/kfence.h index d228e0a4676d..62dad350b906 100644 --- a/include/linux/kfence.h +++ b/include/linux/kfence.h @@ -19,6 +19,10 @@
extern unsigned long kfence_sample_interval;
+#if IS_ENABLED(CONFIG_KFENCE_MUST_EARLY_INIT) +extern bool __ro_after_init kfence_must_early_init; +#endif + #ifdef CONFIG_KFENCE_DYNAMIC_OBJECTS extern int kfence_num_objects; #define KFENCE_NR_OBJECTS kfence_num_objects diff --git a/lib/Kconfig.kfence b/lib/Kconfig.kfence index 999be97173f9..f40df4b11ed3 100644 --- a/lib/Kconfig.kfence +++ b/lib/Kconfig.kfence @@ -69,6 +69,22 @@ config KFENCE_DYNAMIC_OBJECTS
Say N if you are unsure.
+config KFENCE_MUST_EARLY_INIT + bool "Require kfence_pool to be pre-allocated on arm64." + depends on ARM64 + help + To support KFENCE late init, arm64 will convert block mapping to + page-level mappings, which leads to performance degradation and + increased memory consumption. + + If this config is enabled, only KFENCE memory early init for arm64 + is supported, extending sample_interval to implement late enable. When + "kfence.sample_interval" is set to -1 or 0, KFENCE will not be enabled. + Only when "kfence.sample_interval" is set to -1, it can be enabled by + setting it to a non-zero value. + + Say N if you are unsure. + config KFENCE_STATIC_KEYS bool "Use static keys to set up allocations" if EXPERT depends on JUMP_LABEL diff --git a/mm/kfence/core.c b/mm/kfence/core.c index d39ebe647670..def02ef3625e 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -50,6 +50,11 @@
static bool kfence_enabled __read_mostly; static bool disabled_by_warn __read_mostly; +#if IS_ENABLED(CONFIG_KFENCE_MUST_EARLY_INIT) +bool __ro_after_init kfence_must_early_init; +#else +#define kfence_must_early_init 0 +#endif
unsigned long kfence_sample_interval __read_mostly = CONFIG_KFENCE_SAMPLE_INTERVAL; EXPORT_SYMBOL_GPL(kfence_sample_interval); /* Export for test modules. */ @@ -62,19 +67,28 @@ EXPORT_SYMBOL_GPL(kfence_sample_interval); /* Export for test modules. */ static int kfence_enable_late(void); static int param_set_sample_interval(const char *val, const struct kernel_param *kp) { - unsigned long num; - int ret = kstrtoul(val, 0, &num); + long num; + int ret = kstrtol(val, 0, &num);
if (ret < 0) return ret;
+ if (num < -1) + return -ERANGE; + + /* + * For architecture that don't require early allocation, always support + * re-enabling. So only need to set num to 0 if num < 0. + */ + num = max_t(long, 0, num); + /* Using 0 to indicate KFENCE is disabled. */ if (!num && READ_ONCE(kfence_enabled)) { pr_info("disabled\n"); WRITE_ONCE(kfence_enabled, false); }
- *((unsigned long *)kp->arg) = num; + *((unsigned long *)kp->arg) = (unsigned long)num;
if (num && !READ_ONCE(kfence_enabled) && system_state != SYSTEM_BOOTING) return disabled_by_warn ? -EINVAL : kfence_enable_late(); @@ -861,7 +875,7 @@ static int kfence_debugfs_init(void) { struct dentry *kfence_dir;
- if (!READ_ONCE(kfence_enabled)) + if (!READ_ONCE(kfence_enabled) && !kfence_must_early_init) return 0;
kfence_dir = debugfs_create_dir("kfence", NULL); @@ -946,7 +960,7 @@ static void toggle_allocation_gate(struct work_struct *work)
void __init kfence_alloc_pool_and_metadata(void) { - if (!kfence_sample_interval) + if (!kfence_sample_interval && !kfence_must_early_init) return;
if (kfence_dynamic_init()) @@ -987,12 +1001,13 @@ static void kfence_init_enable(void) if (kfence_check_on_panic) atomic_notifier_chain_register(&panic_notifier_list, &kfence_check_canary_notifier);
- WRITE_ONCE(kfence_enabled, true); - queue_delayed_work(system_unbound_wq, &kfence_timer, 0); - - pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE, - KFENCE_NR_OBJECTS, (void *)__kfence_pool, - (void *)(__kfence_pool + KFENCE_POOL_SIZE)); + if (!kfence_must_early_init) { + WRITE_ONCE(kfence_enabled, true); + queue_delayed_work(system_unbound_wq, &kfence_timer, 0); + pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE, + KFENCE_NR_OBJECTS, (void *)__kfence_pool, + (void *)(__kfence_pool + KFENCE_POOL_SIZE)); + } }
void __init kfence_init(void) @@ -1000,7 +1015,7 @@ void __init kfence_init(void) stack_hash_seed = get_random_u32();
/* Setting kfence_sample_interval to 0 on boot disables KFENCE. */ - if (!kfence_sample_interval) + if (!kfence_sample_interval && !kfence_must_early_init) return;
if (!kfence_init_pool_early()) { @@ -1089,8 +1104,12 @@ static int kfence_init_late(void)
static int kfence_enable_late(void) { - if (!__kfence_pool) + if (!__kfence_pool) { + if (IS_ENABLED(CONFIG_KFENCE_MUST_EARLY_INIT)) + return 0; + return kfence_init_late(); + }
WRITE_ONCE(kfence_enabled, true); queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
FeedBack: The patch(es) which you have sent to kernel@openeuler.org mailing list has been converted to a pull request successfully! Pull request link: https://gitee.com/openeuler/kernel/pulls/3897 Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/W...