hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8MGE6
CVE: NA
--------------------------------
Livepatch wo_ftrace and kprobe conflict with each other, because a kprobe may modify the instructions anywhere in a function.
So it is dangerous to patch or unpatch a function while kprobes are registered on it. Restrict this situation.
We should hold kprobe_mutex in klp_check_patch_kprobed(), but it is static and cannot be exported, so instead run klp_check_patch_kprobed() inside the stop_machine() callback, which prevents kprobes from being registered concurrently while patching.
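For illustration, a simplified sketch of the check that the core.c hunk below adds as klp_check_patch_kprobed(): every address inside each old function is looked up with get_kprobe(), and because this runs in the stop_machine() callback no other CPU can be midway through (un)registering a kprobe, so the lookup needs no kprobe_mutex:

	/* illustrative sketch only; the real code is in the core.c hunk below */
	static struct kprobe *check_patch_kprobed(struct klp_patch *patch)
	{
		struct klp_object *obj;
		struct klp_func *func;
		int i;

		klp_for_each_object(patch, obj)
			klp_for_each_func(obj, func)
				for (i = 0; i < func->old_size; i++) {
					/* get_kprobe() returns the probe at this exact address, if any */
					struct kprobe *kp = get_kprobe(func->old_func + i);

					if (kp)
						return kp;	/* conflict: refuse to (un)patch */
				}
		return NULL;
	}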
We do nothing about (un)registering kprobes on an (old) function that has already been patched, because some engineers need this. It will not lead to hangs, but it is not recommended.
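As a usage illustration (module and symbol names below are hypothetical, not part of this patch): with CONFIG_LIVEPATCH_RESTRICT_KPROBE=y, a probe like the following, registered on a function that a wo_ftrace livepatch also replaces, makes enabling or disabling that patch fail with -EINVAL until the probe is unregistered. Registering it after the function has already been patched is still allowed, as described above.

	#include <linux/module.h>
	#include <linux/kprobes.h>

	/* hypothetical probe target: any function that a livepatch also replaces */
	static struct kprobe kp = {
		.symbol_name = "meminfo_proc_show",
	};

	static int __init kp_demo_init(void)
	{
		/* while this probe is registered, (un)patching the function is rejected */
		return register_kprobe(&kp);
	}

	static void __exit kp_demo_exit(void)
	{
		unregister_kprobe(&kp);
	}

	module_init(kp_demo_init);
	module_exit(kp_demo_exit);
	MODULE_LICENSE("GPL");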
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Signed-off-by: Dong Kai <dongkai11@huawei.com>
Signed-off-by: Ye Weihua <yeweihua4@huawei.com>
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
---
 kernel/livepatch/Kconfig | 11 ++++++++++
 kernel/livepatch/core.c  | 47 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)
diff --git a/kernel/livepatch/Kconfig b/kernel/livepatch/Kconfig
index d67a3e59a372..ad15685dfd53 100644
--- a/kernel/livepatch/Kconfig
+++ b/kernel/livepatch/Kconfig
@@ -76,4 +76,15 @@ config LIVEPATCH_STACK
 	help
 	  Say N here if you want to remove the patch stacking principle.
 
+config LIVEPATCH_RESTRICT_KPROBE
+	bool "Enforce livepatch and kprobe restriction check"
+	depends on LIVEPATCH_WO_FTRACE
+	depends on KPROBES
+	default y
+	help
+	  Livepatch without ftrace and kprobe conflict with each other.
+	  We should not patch a function on which kprobes are registered,
+	  and vice versa.
+	  Say Y here if you want this check enforced.
+
 endmenu
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index fef672d7b4b6..0e745f489cee 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -31,6 +31,9 @@
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
 #include <linux/stop_machine.h>
+#ifdef CONFIG_LIVEPATCH_RESTRICT_KPROBE
+#include <linux/kprobes.h>
+#endif /* CONFIG_LIVEPATCH_RESTRICT_KPROBE */
 #endif /* CONFIG_LIVEPATCH_FTRACE */
 
 /*
@@ -1908,6 +1911,40 @@ static int klp_mem_prepare(struct klp_patch *patch)
 	return 0;
 }
 
+#ifdef CONFIG_LIVEPATCH_RESTRICT_KPROBE
+/*
+ * Check whether a function has been registered with kprobes before being
+ * patched. We cannot patch this function until the kprobes are unregistered.
+ */
+static struct kprobe *klp_check_patch_kprobed(struct klp_patch *patch)
+{
+	struct klp_object *obj;
+	struct klp_func *func;
+	struct kprobe *kp;
+	int i;
+
+	klp_for_each_object(patch, obj) {
+		klp_for_each_func(obj, func) {
+			for (i = 0; i < func->old_size; i++) {
+				kp = get_kprobe(func->old_func + i);
+				if (kp) {
+					pr_err("func %s has been probed, (un)patch failed\n",
+					       func->old_name);
+					return kp;
+				}
+			}
+		}
+	}
+
+	return NULL;
+}
+#else
+static inline struct kprobe *klp_check_patch_kprobed(struct klp_patch *patch)
+{
+	return NULL;
+}
+#endif /* CONFIG_LIVEPATCH_RESTRICT_KPROBE */
+
 void __weak arch_klp_unpatch_func(struct klp_func *func)
 {
 }
@@ -2028,6 +2065,11 @@ static int klp_try_disable_patch(void *data)
 	if (atomic_inc_return(&pd->cpu_count) == 1) {
 		struct klp_patch *patch = pd->patch;
 
+		if (klp_check_patch_kprobed(patch)) {
+			atomic_inc(&pd->cpu_count);
+			return -EINVAL;
+		}
+
 		ret = klp_check_calltrace(patch, 0);
 		if (ret) {
 			atomic_inc(&pd->cpu_count);
@@ -2124,6 +2166,11 @@ static int klp_try_enable_patch(void *data)
 	if (atomic_inc_return(&pd->cpu_count) == 1) {
 		struct klp_patch *patch = pd->patch;
 
+		if (klp_check_patch_kprobed(patch)) {
+			atomic_inc(&pd->cpu_count);
+			return -EINVAL;
+		}
+
 		ret = klp_check_calltrace(patch, 1);
 		if (ret) {
 			atomic_inc(&pd->cpu_count);