From: 沈子俊 <shenzijun@kylinos.cn>
mainline inclusion
from mainline-v5.15
commit f8690a4b5a1b64f74ae5c4f7c4ea880d8a8e1a0d
category: bugfix
bugzilla: NA
CVE: NA
--------------------------------------------------------------
This fixes the following warning:
vmlinux.o: warning: objtool: elf_update: invalid section entry size
The rodata section is 164 bytes. Using an entry_size of 164 bytes directly causes errors with some versions of the gcc compiler, while using an entry_size of 16 bytes without padding causes errors with the clang compiler. This patch fixes it by padding the size of the rodata section up to a 16-byte boundary.
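As an illustration only (not part of the diff below), here is a minimal, hypothetical GNU assembler sketch of the constraint involved: a section created with the "M" (mergeable) flag is treated as an array of fixed-size entries, so its total size must be a multiple of the declared entry size. A 164-byte constant table declared with a 16-byte entry size therefore needs 12 filler bytes to reach 176 bytes (11 * 16). The section name, labels, and fill value below are invented for the example:

  /* Hypothetical example, not taken from the patch itself. */
  .section .rodata.cst16.example, "aM", @progbits, 16
  .align 16

  .Lexample_table:
  	/* 164 bytes of constants: 164 = 10 * 16 + 4, so without padding
  	 * the section would end in the middle of a 16-byte entry. */
  	.fill 164, 1, 0x5a

  .Lexample_padding:
  	/* 12 filler bytes round the section up to 176 bytes (11 * 16),
  	 * a multiple of the 16-byte entry size. */
  	.long 0xdeadbeef, 0xdeadbeef, 0xdeadbeef

The .Lpadding_deadbeef words added below serve the same purpose for the real constant tables: they exist only to round the section size up to a multiple of 16.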
Fixes: a7ee22ee1445 ("crypto: x86/sm4 - add AES-NI/AVX/x86_64 implementation")
Fixes: 5b2efa2bb865 ("crypto: x86/sm4 - add AES-NI/AVX2/x86_64 implementation")
Reported-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Tested-by: Heyuan Shi <heyuan@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: 沈子俊 <shenzijun@kylinos.cn>
Reviewed-by: weiyang wang <wangweiyang2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
---
 arch/x86/crypto/sm4-aesni-avx-asm_64.S  | 6 +++++-
 arch/x86/crypto/sm4-aesni-avx2-asm_64.S | 6 +++++-
 2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/x86/crypto/sm4-aesni-avx-asm_64.S b/arch/x86/crypto/sm4-aesni-avx-asm_64.S
index 18d2f5199194..1cc72b4804fa 100644
--- a/arch/x86/crypto/sm4-aesni-avx-asm_64.S
+++ b/arch/x86/crypto/sm4-aesni-avx-asm_64.S
@@ -78,7 +78,7 @@
 	vpxor tmp0, x, x;
 
 
-.section	.rodata.cst164, "aM", @progbits, 164
+.section	.rodata.cst16, "aM", @progbits, 16
 .align 16
 
 /*
@@ -133,6 +133,10 @@
 .L0f0f0f0f:
 	.long 0x0f0f0f0f
 
+/* 12 bytes, only for padding */
+.Lpadding_deadbeef:
+	.long 0xdeadbeef, 0xdeadbeef, 0xdeadbeef
+
 
 .text
 .align 16
diff --git a/arch/x86/crypto/sm4-aesni-avx2-asm_64.S b/arch/x86/crypto/sm4-aesni-avx2-asm_64.S
index d2ffd7f76ee2..9c5d3f3ad45a 100644
--- a/arch/x86/crypto/sm4-aesni-avx2-asm_64.S
+++ b/arch/x86/crypto/sm4-aesni-avx2-asm_64.S
@@ -93,7 +93,7 @@
 	vpxor tmp0, x, x;
 
 
-.section	.rodata.cst164, "aM", @progbits, 164
+.section	.rodata.cst16, "aM", @progbits, 16
 .align 16
 
 /*
@@ -148,6 +148,10 @@
 .L0f0f0f0f:
 	.long 0x0f0f0f0f
 
+/* 12 bytes, only for padding */
+.Lpadding_deadbeef:
+	.long 0xdeadbeef, 0xdeadbeef, 0xdeadbeef
+
 
 .text
 .align 16