hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9R2TB
--------------------------------
Previously, task calltrace checking was assumed to be performed under stop_machine, so all tasks being checked were guaranteed not to be running. Under that assumption there is one case where an old function need not be checked: preemption is disabled, the 'force' field is not KLP_NORMAL_FORCE, and there are no 'call' instructions in the livepatch replace area.
But when the breakpoint optimization is used without stop_machine, tasks may be running, so the above case can no longer be exempted from the check.
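For reference, the resulting condition (common to all five architectures patched below) is sketched here in abbreviated form; the full context is in the per-arch hunks:

	/*
	 * Sketch only: with the new condition, a function is added to the
	 * calltrace check list whenever any of these hold, so it can be
	 * skipped only when preemption is off, the no-stop_machine
	 * breakpoint mode is not enabled, 'force' is not KLP_NORMAL_FORCE,
	 * and the replace area contains no 'call' instructions.
	 */
	if (IS_ENABLED(CONFIG_PREEMPTION) ||
	    IS_ENABLED(CONFIG_LIVEPATCH_BREAKPOINT_NO_STOP_MACHINE) ||
	    (func->force == KLP_NORMAL_FORCE) ||
	    check_jump_insn(func_addr)) {
		/* func must be checked against every task's calltrace */
		ret = add_func(func_list, &pcheck, ...);
	}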
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
---
 arch/arm/kernel/livepatch.c        | 1 +
 arch/arm64/kernel/livepatch.c      | 1 +
 arch/powerpc/kernel/livepatch_32.c | 1 +
 arch/powerpc/kernel/livepatch_64.c | 1 +
 arch/x86/kernel/livepatch.c        | 1 +
 5 files changed, 5 insertions(+)
diff --git a/arch/arm/kernel/livepatch.c b/arch/arm/kernel/livepatch.c
index a8e9088ca778..41ee14992b59 100644
--- a/arch/arm/kernel/livepatch.c
+++ b/arch/arm/kernel/livepatch.c
@@ -138,6 +138,7 @@ int arch_klp_check_activeness_func(struct klp_patch *patch, int enable,
 			 * complete.
 			 */
 			if (IS_ENABLED(CONFIG_PREEMPTION) ||
+			    IS_ENABLED(CONFIG_LIVEPATCH_BREAKPOINT_NO_STOP_MACHINE) ||
 			    (func->force == KLP_NORMAL_FORCE) ||
 			    check_jump_insn(func_addr)) {
 				ret = add_func(func_list, &pcheck,
diff --git a/arch/arm64/kernel/livepatch.c b/arch/arm64/kernel/livepatch.c
index a303fd4a69c9..90932ac5f3f5 100644
--- a/arch/arm64/kernel/livepatch.c
+++ b/arch/arm64/kernel/livepatch.c
@@ -127,6 +127,7 @@ int arch_klp_check_activeness_func(struct klp_patch *patch, int enable,
 			 * complete.
 			 */
 			if (IS_ENABLED(CONFIG_PREEMPTION) ||
+			    IS_ENABLED(CONFIG_LIVEPATCH_BREAKPOINT_NO_STOP_MACHINE) ||
 			    (func->force == KLP_NORMAL_FORCE) ||
 			    check_jump_insn(func_addr)) {
 				ret = add_func(func_list, &pcheck,
diff --git a/arch/powerpc/kernel/livepatch_32.c b/arch/powerpc/kernel/livepatch_32.c
index 5be4398474ce..7feb8a04f2e3 100644
--- a/arch/powerpc/kernel/livepatch_32.c
+++ b/arch/powerpc/kernel/livepatch_32.c
@@ -128,6 +128,7 @@ int arch_klp_check_activeness_func(struct klp_patch *patch, int enable,
 			 * complete.
 			 */
 			if (IS_ENABLED(CONFIG_PREEMPTION) ||
+			    IS_ENABLED(CONFIG_LIVEPATCH_BREAKPOINT_NO_STOP_MACHINE) ||
 			    (func->force == KLP_NORMAL_FORCE) ||
 			    check_jump_insn(func_addr)) {
 				ret = add_func(func_list, &pcheck,
diff --git a/arch/powerpc/kernel/livepatch_64.c b/arch/powerpc/kernel/livepatch_64.c
index 8bd1f8ca17b8..7f7c5fcb6e92 100644
--- a/arch/powerpc/kernel/livepatch_64.c
+++ b/arch/powerpc/kernel/livepatch_64.c
@@ -131,6 +131,7 @@ int arch_klp_check_activeness_func(struct klp_patch *patch, int enable,
 			 * complete.
 			 */
 			if (IS_ENABLED(CONFIG_PREEMPTION) ||
+			    IS_ENABLED(CONFIG_LIVEPATCH_BREAKPOINT_NO_STOP_MACHINE) ||
 			    (func->force == KLP_NORMAL_FORCE) ||
 			    check_jump_insn(func_addr)) {
 				ret = add_func(func_list, &pcheck,
diff --git a/arch/x86/kernel/livepatch.c b/arch/x86/kernel/livepatch.c
index b6803cd7e0af..0675357bd16c 100644
--- a/arch/x86/kernel/livepatch.c
+++ b/arch/x86/kernel/livepatch.c
@@ -124,6 +124,7 @@ int arch_klp_check_activeness_func(struct klp_patch *patch, int enable,
 			 * complete.
 			 */
 			if (IS_ENABLED(CONFIG_PREEMPTION) ||
+			    IS_ENABLED(CONFIG_LIVEPATCH_BREAKPOINT_NO_STOP_MACHINE) ||
 			    (func->force == KLP_NORMAL_FORCE) ||
 			    check_jump_insn(func_addr)) {
 				ret = add_func(func_list, &pcheck,