From: Xu Kuohai <xukuohai@huawei.com>
mainline inclusion
from mainline-v6.0-rc1
commit aada476655461a9ab491d8298a415430cdd10278
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9FGRE
Reference: https://github.com/torvalds/linux/commit/aada47665546
--------------------------------
The sparse tool complains as follows:
arch/arm64/net/bpf_jit_comp.c:1684:16: warning: incorrect type in assignment (different base types)
arch/arm64/net/bpf_jit_comp.c:1684:16:    expected unsigned int [usertype] *branch
arch/arm64/net/bpf_jit_comp.c:1684:16:    got restricted __le32 [usertype] *
arch/arm64/net/bpf_jit_comp.c:1700:52: error: subtraction of different types can't work (different base types)
arch/arm64/net/bpf_jit_comp.c:1734:29: warning: incorrect type in assignment (different base types)
arch/arm64/net/bpf_jit_comp.c:1734:29:    expected unsigned int [usertype] *
arch/arm64/net/bpf_jit_comp.c:1734:29:    got restricted __le32 [usertype] *
arch/arm64/net/bpf_jit_comp.c:1918:52: error: subtraction of different types can't work (different base types)
This is because the variable branch in function invoke_bpf_prog and the variable branches in function prepare_trampoline are defined as type u32 *, which conflicts with ctx->image's type __le32 *, so sparse complains when assignments or arithmetic operations are performed between these two variables and ctx->image.
Since arm64 instructions are always little-endian, change the type of these two variables to __le32 * and call cpu_to_le32() to convert the instruction to little-endian before writing it to memory. This is also in line with emit(), which does cpu_to_le32() internally.
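For illustration only (not part of the patch), below is a minimal kernel-style sketch of the pattern described above. The helpers write_insn() and backpatch() and their parameters are hypothetical and do not exist in the JIT code; they only mirror the shape of the fixup in prepare_trampoline:

	#include <linux/types.h>	/* u32, __le32 */
	#include <asm/byteorder.h>	/* cpu_to_le32() */

	/*
	 * Hypothetical helper: store one CPU-endian instruction encoding
	 * into a little-endian instruction slot.
	 */
	static void write_insn(__le32 *slot, u32 insn)
	{
		/* Convert before the store, the same way emit() does internally. */
		*slot = cpu_to_le32(insn);
	}

	/*
	 * Hypothetical backpatch loop over saved instruction slots. Because
	 * both 'image' and the saved 'branches[i]' pointers are __le32 *,
	 * the pointer subtraction below is well typed and yields a distance
	 * in instructions.
	 */
	static void backpatch(__le32 *image, int cur_idx, __le32 **branches,
			      int nr, u32 (*encode)(int offset))
	{
		int i;

		for (i = 0; i < nr; i++) {
			int offset = &image[cur_idx] - branches[i];

			write_insn(branches[i], encode(offset));
		}
	}

The key point is that every store through a __le32 * slot goes through cpu_to_le32(), so the generated image is little-endian regardless of host byte order and sparse no longer sees mixed base types.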
Fixes: efc9909fdce0 ("bpf, arm64: Add bpf trampoline for arm64")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/bpf/20220808040735.1232002-1-xukuohai@huawei.com
Conflicts:
	arch/arm64/net/bpf_jit_comp.c
Signed-off-by: Pu Lehui <pulehui@huawei.com>
---
 arch/arm64/net/bpf_jit_comp.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index a5241345ef42..04f1cbd63f5b 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1562,7 +1562,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
 	struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT];
 	struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
 	bool save_ret;
-	u32 **branches = NULL;
+	__le32 **branches = NULL;
 
 	/* trampoline stack layout:
 	 *                  [ parent ip         ]
@@ -1639,7 +1639,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
 				retval_off, flags & BPF_TRAMP_F_RET_FENTRY_RET);
 
 	if (fmod_ret->nr_progs) {
-		branches = kcalloc(fmod_ret->nr_progs, sizeof(u32 *),
+		branches = kcalloc(fmod_ret->nr_progs, sizeof(__le32 *),
 				   GFP_KERNEL);
 		if (!branches)
 			return -ENOMEM;
@@ -1662,7 +1662,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
 	/* update the branches saved in invoke_bpf_mod_ret with cbnz */
 	for (i = 0; i < fmod_ret->nr_progs && ctx->image != NULL; i++) {
 		int offset = &ctx->image[ctx->idx] - branches[i];
-		*branches[i] = A64_CBNZ(1, A64_R(10), offset);
+		*branches[i] = cpu_to_le32(A64_CBNZ(1, A64_R(10), offset));
 	}
 
 	for (i = 0; i < fexit->nr_progs; i++)