From: Felix Fu <fuzhen5@huawei.com>
Support the KASLR feature on the arm32 platform; set CONFIG_RANDOMIZE_BASE=y to enable it.
Ard Biesheuvel (15):
  asm-generic: add .data.rel.ro sections to __ro_after_init
  arm-soc: exynos: replace open coded VA->PA conversions
  arm-soc: mvebu: replace open coded VA->PA conversion
  arm-soc: various: replace open coded VA->PA calculation of pen_release
  ARM: kernel: switch to relative exception tables
  ARM: kernel: make vmlinux buildable as a PIE executable
  ARM: kernel: use PC-relative symbol references in MMU switch code
  ARM: kernel: use PC relative symbol references in suspend/resume code
  ARM: mm: export default vmalloc base address
  ARM: kernel: refer to swapper_pg_dir via its symbol
  arm: vectors: use local symbol names for vector entry points
  ARM: kernel: implement randomization of the kernel load address
  ARM: decompressor: explicitly map decompressor binary cacheable
  ARM: decompressor: add KASLR support
Cui GaoSheng (6):
  arm32: kaslr: Add missing sections about relocatable
  arm32: kaslr: Fix the bug of module install failure
  arm32: kaslr: Fix the bug of hidden symbols when decompressing code is compiled
  arm32: kaslr: Adapt dts files of multiple memory nodes
  arm32: kaslr: Fix the bug of symbols relocation
  arm32: kaslr: Print the real kaslr offset when kernel panic
Ye Bin (6):
  arm32: kaslr: Add missing sections about relocatable
  arm: kaslr: Fix memtop calculate, when there is no memory top info, we can't use zero instead it.
  arm32: kaslr: When boot with vxboot, we must adjust dtb address before kaslr_early_init, and store dtb address after init.
  arm32: kaslr: pop visibility when compile decompress boot code as we need relocate BSS by GOT.
  arm32: kaslr: print kaslr offset when kernel panic
  arm32: kaslr: Fix clock_gettime and gettimeofday performance degradation when configure CONFIG_RANDOMIZE_BASE
 arch/arm/Kconfig                      |  17 +
 arch/arm/Makefile                     |   6 +
 arch/arm/boot/compressed/Makefile     |  17 +-
 arch/arm/boot/compressed/head.S       | 139 +++++++-
 arch/arm/boot/compressed/kaslr.c      | 461 ++++++++++++++++++++++++++
 arch/arm/boot/compressed/misc.c       |   1 -
 arch/arm/include/asm/Kbuild           |   1 -
 arch/arm/include/asm/assembler.h      |  18 +-
 arch/arm/include/asm/extable.h        |  47 +++
 arch/arm/include/asm/futex.h          |   6 +-
 arch/arm/include/asm/memory.h         |  14 +
 arch/arm/include/asm/pgtable.h        |   1 +
 arch/arm/include/asm/uaccess.h        |  17 +-
 arch/arm/include/asm/vmlinux.lds.h    |  11 +-
 arch/arm/include/asm/word-at-a-time.h |   6 +-
 arch/arm/kernel/entry-armv.S          |  40 +--
 arch/arm/kernel/head-common.S         |  60 +---
 arch/arm/kernel/head.S                | 107 +++++-
 arch/arm/kernel/setup.c               |  31 ++
 arch/arm/kernel/sleep.S               |   7 +-
 arch/arm/kernel/swp_emulate.c         |   8 +-
 arch/arm/kernel/vmlinux.lds.S         |  19 ++
 arch/arm/lib/backtrace.S              |  13 +-
 arch/arm/lib/getuser.S                |  24 +-
 arch/arm/lib/putuser.S                |  15 +-
 arch/arm/mach-exynos/headsmp.S        |   9 +-
 arch/arm/mach-exynos/sleep.S          |  26 +-
 arch/arm/mach-mvebu/coherency_ll.S    |   8 +-
 arch/arm/mach-spear/headsmp.S         |  11 +-
 arch/arm/mach-versatile/headsmp.S     |   9 +-
 arch/arm/mm/alignment.c               |  24 +-
 arch/arm/mm/extable.c                 |   2 +-
 arch/arm/nwfpe/entry.S                |   6 +-
 arch/arm/vdso/vgettimeofday.c         |   5 +
 include/asm-generic/vmlinux.lds.h     |   2 +-
 scripts/module.lds.S                  |   1 +
 scripts/sorttable.c                   |   2 +-
 37 files changed, 956 insertions(+), 235 deletions(-)
 create mode 100644 arch/arm/boot/compressed/kaslr.c
 create mode 100644 arch/arm/include/asm/extable.h
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
maillist inclusion
commit 857ddf520d76d6516d5cdca396461141b7ca921b
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
When running in PIC mode, the compiler will emit const structures containing runtime relocatable quantities into .data.rel.ro.* sections, so that the linker can be smart about placing them together in a segment that is read-write initially, and is remapped read-only afterwards. This is exactly what __ro_after_init aims to provide, so move these sections together.
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Signed-off-by: Felix Fu <fuzhen5@huawei.com>
---
 include/asm-generic/vmlinux.lds.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h index 67d8dd2f1bde..5896b7bc981f 100644 --- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -428,7 +428,7 @@ #define RO_AFTER_INIT_DATA \ . = ALIGN(8); \ __start_ro_after_init = .; \ - *(.data..ro_after_init) \ + *(.data..ro_after_init .data.rel.ro.*) \ JUMP_TABLE_DATA \ STATIC_CALL_DATA \ __end_ro_after_init = .;
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
maillist inclusion
commit 6b12f315331362a8ec9e8fe3f97d9ae09e43fd28
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
This replaces a couple of open coded calculations to obtain the physical address of a far symbol with calls to the new adr_l etc macros.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Signed-off-by: Felix Fu <fuzhen5@huawei.com>
---
 arch/arm/mach-exynos/headsmp.S |  9 +--------
 arch/arm/mach-exynos/sleep.S   | 26 +++++---------------------
 2 files changed, 6 insertions(+), 29 deletions(-)
diff --git a/arch/arm/mach-exynos/headsmp.S b/arch/arm/mach-exynos/headsmp.S index 0ac2cb9a7355..be7cd0eebe1d 100644 --- a/arch/arm/mach-exynos/headsmp.S +++ b/arch/arm/mach-exynos/headsmp.S @@ -19,10 +19,7 @@ ENTRY(exynos4_secondary_startup) ARM_BE8(setend be) mrc p15, 0, r0, c0, c0, 5 and r0, r0, #15 - adr r4, 1f - ldmia r4, {r5, r6} - sub r4, r4, r5 - add r6, r6, r4 + adr_l r6, exynos_pen_release pen: ldr r7, [r6] cmp r7, r0 bne pen @@ -33,7 +30,3 @@ pen: ldr r7, [r6] */ b secondary_startup ENDPROC(exynos4_secondary_startup) - - .align 2 -1: .long . - .long exynos_pen_release diff --git a/arch/arm/mach-exynos/sleep.S b/arch/arm/mach-exynos/sleep.S index ed93f91853b8..ed27515a4458 100644 --- a/arch/arm/mach-exynos/sleep.S +++ b/arch/arm/mach-exynos/sleep.S @@ -8,6 +8,7 @@
#include <linux/linkage.h> #include <asm/asm-offsets.h> +#include <asm/assembler.h> #include <asm/hardware/cache-l2x0.h> #include "smc.h"
@@ -54,19 +55,13 @@ ENTRY(exynos_cpu_resume_ns) cmp r0, r1 bne skip_cp15
- adr r0, _cp15_save_power - ldr r1, [r0] - ldr r1, [r0, r1] - adr r0, _cp15_save_diag - ldr r2, [r0] - ldr r2, [r0, r2] + ldr_l r1, cp15_save_power + ldr_l r2, cp15_save_diag mov r0, #SMC_CMD_C15RESUME dsb smc #0 #ifdef CONFIG_CACHE_L2X0 - adr r0, 1f - ldr r2, [r0] - add r0, r2, r0 + adr_l r0, l2x0_saved_regs
/* Check that the address has been initialised. */ ldr r1, [r0, #L2X0_R_PHY_BASE] @@ -85,9 +80,7 @@ ENTRY(exynos_cpu_resume_ns) smc #0
/* Reload saved regs pointer because smc corrupts registers. */ - adr r0, 1f - ldr r2, [r0] - add r0, r2, r0 + adr_l r0, l2x0_saved_regs
ldr r1, [r0, #L2X0_R_PWR_CTRL] ldr r2, [r0, #L2X0_R_AUX_CTRL] @@ -106,15 +99,6 @@ skip_cp15: b cpu_resume ENDPROC(exynos_cpu_resume_ns)
- .align -_cp15_save_power: - .long cp15_save_power - . -_cp15_save_diag: - .long cp15_save_diag - . -#ifdef CONFIG_CACHE_L2X0 -1: .long l2x0_saved_regs - . -#endif /* CONFIG_CACHE_L2X0 */ - .data .align 2 .globl cp15_save_diag
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
maillist inclusion
commit 59dee05a68727a7fc3c62240542f8753797e38d6
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
This replaces an open coded calculation to obtain the physical address of a far symbol with a call to the new ldr_l etc macro.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Signed-off-by: Felix Fu <fuzhen5@huawei.com>
---
 arch/arm/mach-mvebu/coherency_ll.S | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/arm/mach-mvebu/coherency_ll.S b/arch/arm/mach-mvebu/coherency_ll.S index 35930e03d9c6..b81266a22a6d 100644 --- a/arch/arm/mach-mvebu/coherency_ll.S +++ b/arch/arm/mach-mvebu/coherency_ll.S @@ -35,9 +35,7 @@ ENTRY(ll_get_coherency_base) * MMU is disabled, use the physical address of the coherency * base address, (or 0x0 if the coherency fabric is not mapped) */ - adr r1, 3f - ldr r3, [r1] - ldr r1, [r1, r3] + ldr_l r1, coherency_phys_base b 2f 1: /* @@ -153,7 +151,3 @@ ENTRY(ll_disable_coherency) dsb ret lr ENDPROC(ll_disable_coherency) - - .align 2 -3: - .long coherency_phys_base - .
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
maillist inclusion
commit e2aa765c4eb9bbcdd3046744e6f73050d1175138
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
This replaces a few copies of the open coded calculations of the physical address of 'pen_release' in the secondary startup code of a couple of platforms. This ensures these quantities are invariant under runtime relocation.
Conflicts:
	arch/arm/plat-versatile/headsmp.S
	arch/arm/mach-prima2/headsmp.S
Cc: Russell King <linux@armlinux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Signed-off-by: Felix Fu <fuzhen5@huawei.com>
---
 arch/arm/mach-spear/headsmp.S     | 11 +++--------
 arch/arm/mach-versatile/headsmp.S |  9 +--------
 2 files changed, 4 insertions(+), 16 deletions(-)
diff --git a/arch/arm/mach-spear/headsmp.S b/arch/arm/mach-spear/headsmp.S index 96f89436ccf6..32ffc75ff332 100644 --- a/arch/arm/mach-spear/headsmp.S +++ b/arch/arm/mach-spear/headsmp.S @@ -10,6 +10,8 @@ #include <linux/linkage.h> #include <linux/init.h>
+#include <asm/assembler.h> + __INIT
/* @@ -20,10 +22,7 @@ ENTRY(spear13xx_secondary_startup) mrc p15, 0, r0, c0, c0, 5 and r0, r0, #15 - adr r4, 1f - ldmia r4, {r5, r6} - sub r4, r4, r5 - add r6, r6, r4 + adr_l r6, spear_pen_release pen: ldr r7, [r6] cmp r7, r0 bne pen @@ -37,8 +36,4 @@ pen: ldr r7, [r6] * should now contain the SVC stack for this core */ b secondary_startup - - .align -1: .long . - .long spear_pen_release ENDPROC(spear13xx_secondary_startup) diff --git a/arch/arm/mach-versatile/headsmp.S b/arch/arm/mach-versatile/headsmp.S index 99c32db412ae..ce925e9059c5 100644 --- a/arch/arm/mach-versatile/headsmp.S +++ b/arch/arm/mach-versatile/headsmp.S @@ -16,10 +16,7 @@ ENTRY(versatile_secondary_startup) ARM_BE8(setend be) mrc p15, 0, r0, c0, c0, 5 bic r0, #0xff000000 - adr r4, 1f - ldmia r4, {r5, r6} - sub r4, r4, r5 - add r6, r6, r4 + adr_l r6, versatile_cpu_release pen: ldr r7, [r6] cmp r7, r0 bne pen @@ -29,8 +26,4 @@ pen: ldr r7, [r6] * should now contain the SVC stack for this core */ b secondary_startup - - .align -1: .long . - .long versatile_cpu_release ENDPROC(versatile_secondary_startup)
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
maillist inclusion
commit ccb456783dd71f474e5783a81d7f18c2cd4dda81
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
To avoid having to relocate the contents of extable entries at runtime when running with KASLR enabled, wire up the existing support for emitting them as relative references. This ensures these quantities are invariant under runtime relocation.
Conflicts:
	arch/arm/kernel/entry-armv.S
	arch/arm/include/asm/Kbuild
	arch/arm/lib/backtrace.S
	scripts/sorttable.c.rej
	arch/arm/nwfpe/entry.S
Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Signed-off-by: Felix Fu <fuzhen5@huawei.com>
---
 arch/arm/include/asm/Kbuild           |  1 -
 arch/arm/include/asm/assembler.h      | 16 +++------
 arch/arm/include/asm/extable.h        | 47 +++++++++++++++++++++++++++
 arch/arm/include/asm/futex.h          |  6 ++--
 arch/arm/include/asm/uaccess.h        | 17 +++-------
 arch/arm/include/asm/word-at-a-time.h |  6 ++--
 arch/arm/kernel/swp_emulate.c         |  8 ++---
 arch/arm/lib/backtrace.S              | 13 +++-----
 arch/arm/lib/getuser.S                | 24 +++++++-------
 arch/arm/lib/putuser.S                | 15 ++++-----
 arch/arm/mm/alignment.c               | 24 +++++---------
 arch/arm/mm/extable.c                 |  2 +-
 arch/arm/nwfpe/entry.S                |  6 ++--
 scripts/sorttable.c                   |  2 +-
 14 files changed, 98 insertions(+), 89 deletions(-)
 create mode 100644 arch/arm/include/asm/extable.h
diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild index 03657ff8fbe3..90c2964fd1bc 100644 --- a/arch/arm/include/asm/Kbuild +++ b/arch/arm/include/asm/Kbuild @@ -1,6 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 generic-y += early_ioremap.h -generic-y += extable.h generic-y += flat.h generic-y += parport.h
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index aebe2c8f6a68..e8d48e3c390b 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -18,6 +18,7 @@ #endif
#include <asm/ptrace.h> +#include <asm/extable.h> #include <asm/opcodes-virt.h> #include <asm/asm-offsets.h> #include <asm/page.h> @@ -246,10 +247,7 @@ THUMB( fpreg .req r7 )
#define USERL(l, x...) \ 9999: x; \ - .pushsection __ex_table,"a"; \ - .align 3; \ - .long 9999b,l; \ - .popsection + ex_entry 9999b,l;
#define USER(x...) USERL(9001f, x)
@@ -476,10 +474,7 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) .error "Unsupported inc macro argument" .endif
- .pushsection __ex_table,"a" - .align 3 - .long 9999b, \abort - .popsection + ex_entry 9999b, \abort .endm
.macro usracc, instr, reg, ptr, inc, cond, rept, abort @@ -517,10 +512,7 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) .error "Unsupported inc macro argument" .endif
- .pushsection __ex_table,"a" - .align 3 - .long 9999b, \abort - .popsection + ex_entry 9999b, \abort .endr .endm
diff --git a/arch/arm/include/asm/extable.h b/arch/arm/include/asm/extable.h new file mode 100644 index 000000000000..175d42247b96 --- /dev/null +++ b/arch/arm/include/asm/extable.h @@ -0,0 +1,47 @@ +#ifndef __ASM_EXTABLE_H +#define __ASM_EXTABLE_H + +#ifndef __ASSEMBLY__ + +/* + * The exception table consists of pairs of relative offsets: the first + * is the relative offset to an instruction that is allowed to fault, + * and the second is the relative offset at which the program should + * continue. No registers are modified, so it is entirely up to the + * continuation code to figure out what to do. + */ + +struct exception_table_entry { + int insn, fixup; +}; + +#define ARCH_HAS_RELATIVE_EXTABLE + +extern int fixup_exception(struct pt_regs *regs); + + /* + * ex_entry - place-relative extable entry + */ +asm( ".macro ex_entry, insn, fixup \n" + ".pushsection __ex_table, "a", %progbits \n" + ".align 3 \n" + ".long \insn - . \n" + ".long \fixup - . \n" + ".popsection \n" + ".endm \n"); + +#else + + /* + * ex_entry - place-relative extable entry + */ + .macro ex_entry, insn, fixup + .pushsection __ex_table, "a", %progbits + .align 3 + .long \insn - . + .long \fixup - . + .popsection + .endm + +#endif +#endif diff --git a/arch/arm/include/asm/futex.h b/arch/arm/include/asm/futex.h index a9151884bc85..6921c58c6c7c 100644 --- a/arch/arm/include/asm/futex.h +++ b/arch/arm/include/asm/futex.h @@ -10,10 +10,8 @@
#define __futex_atomic_ex_table(err_reg) \ "3:\n" \ - " .pushsection __ex_table,"a"\n" \ - " .align 3\n" \ - " .long 1b, 4f, 2b, 4f\n" \ - " .popsection\n" \ + " ex_entry 1b, 4f\n" \ + " ex_entry 2b, 4f\n" \ " .pushsection .text.fixup,"ax"\n" \ " .align 2\n" \ "4: mov %0, " err_reg "\n" \ diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h index bb5c81823117..2162ebc6c77a 100644 --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -288,10 +288,7 @@ do { \ " mov %1, #0\n" \ " b 2b\n" \ " .popsection\n" \ - " .pushsection __ex_table,"a"\n" \ - " .align 3\n" \ - " .long 1b, 3b\n" \ - " .popsection" \ + " ex_entry 1b, 3b\n" \ : "+r" (err), "=&r" (x) \ : "r" (addr), "i" (-EFAULT) \ : "cc") @@ -390,10 +387,7 @@ do { \ "3: mov %0, %3\n" \ " b 2b\n" \ " .popsection\n" \ - " .pushsection __ex_table,"a"\n" \ - " .align 3\n" \ - " .long 1b, 3b\n" \ - " .popsection" \ + " ex_entry 1b, 3b\n" \ : "+r" (err) \ : "r" (x), "r" (__pu_addr), "i" (-EFAULT) \ : "cc") @@ -449,11 +443,8 @@ do { \ "4: mov %0, %3\n" \ " b 3b\n" \ " .popsection\n" \ - " .pushsection __ex_table,"a"\n" \ - " .align 3\n" \ - " .long 1b, 4b\n" \ - " .long 2b, 4b\n" \ - " .popsection" \ + " ex_entry 1b, 4b\n" \ + " ex_entry 2b, 4b\n" \ : "+r" (err), "+r" (__pu_addr) \ : "r" (x), "i" (-EFAULT) \ : "cc") diff --git a/arch/arm/include/asm/word-at-a-time.h b/arch/arm/include/asm/word-at-a-time.h index 352ab213520d..a440ec1cd85b 100644 --- a/arch/arm/include/asm/word-at-a-time.h +++ b/arch/arm/include/asm/word-at-a-time.h @@ -9,6 +9,7 @@ * Heavily based on the x86 algorithm. */ #include <linux/kernel.h> +#include <asm/extable.h>
struct word_at_a_time { const unsigned long one_bits, high_bits; @@ -85,10 +86,7 @@ static inline unsigned long load_unaligned_zeropad(const void *addr) #endif " b 2b\n" " .popsection\n" - " .pushsection __ex_table,"a"\n" - " .align 3\n" - " .long 1b, 3b\n" - " .popsection" + " ex_entry 1b, 3b\n" : "=&r" (ret), "=&r" (offset) : "r" (addr), "Qo" (*(unsigned long *)addr));
diff --git a/arch/arm/kernel/swp_emulate.c b/arch/arm/kernel/swp_emulate.c index fdce83c95acb..c10bb1161ea2 100644 --- a/arch/arm/kernel/swp_emulate.c +++ b/arch/arm/kernel/swp_emulate.c @@ -24,6 +24,7 @@ #include <linux/syscalls.h> #include <linux/perf_event.h>
+#include <asm/extable.h> #include <asm/opcodes.h> #include <asm/system_info.h> #include <asm/traps.h> @@ -46,11 +47,8 @@ "3: mov %0, %5\n" \ " b 2b\n" \ " .previous\n" \ - " .section __ex_table,"a"\n" \ - " .align 3\n" \ - " .long 0b, 3b\n" \ - " .long 1b, 3b\n" \ - " .previous" \ + " ex_entry 0b, 3b\n" \ + " ex_entry 1b, 3b\n" \ : "=&r" (res), "+r" (data), "=&r" (temp) \ : "r" (addr), "i" (-EAGAIN), "i" (-EFAULT) \ : "cc", "memory") diff --git a/arch/arm/lib/backtrace.S b/arch/arm/lib/backtrace.S index 293a2716bd20..d51679c84c1f 100644 --- a/arch/arm/lib/backtrace.S +++ b/arch/arm/lib/backtrace.S @@ -114,14 +114,11 @@ for_each_frame: tst frame, mask @ Check for address exceptions bl _printk no_frame: ldmfd sp!, {r4 - r9, pc} ENDPROC(c_backtrace) - - .pushsection __ex_table,"a" - .align 3 - .long 1001b, 1006b - .long 1002b, 1006b - .long 1003b, 1006b - .long 1004b, 1006b - .popsection + + ex_entry 1001b, 1006b + ex_entry 1002b, 1006b + ex_entry 1003b, 1006b + ex_entry 1004b, 1006b
.Lbad: .asciz "%sBacktrace aborted due to bad frame pointer <%p>\n" .align diff --git a/arch/arm/lib/getuser.S b/arch/arm/lib/getuser.S index c5e420750c48..d120e0223a8c 100644 --- a/arch/arm/lib/getuser.S +++ b/arch/arm/lib/getuser.S @@ -27,6 +27,7 @@ #include <linux/linkage.h> #include <asm/assembler.h> #include <asm/errno.h> +#include <asm/extable.h> #include <asm/domain.h>
ENTRY(__get_user_1) @@ -149,19 +150,18 @@ _ASM_NOKPROBE(__get_user_bad) _ASM_NOKPROBE(__get_user_bad8)
.pushsection __ex_table, "a" - .long 1b, __get_user_bad - .long 2b, __get_user_bad + ex_entry 1b, __get_user_bad + ex_entry 2b, __get_user_bad #if __LINUX_ARM_ARCH__ < 6 - .long 3b, __get_user_bad + ex_entry 3b, __get_user_bad #endif - .long 4b, __get_user_bad - .long 5b, __get_user_bad8 - .long 6b, __get_user_bad8 + ex_entry 4b, __get_user_bad + ex_entry 5b, __get_user_bad8 + ex_entry 6b, __get_user_bad8 #ifdef __ARMEB__ - .long 7b, __get_user_bad - .long 8b, __get_user_bad8 - .long 9b, __get_user_bad8 - .long 10b, __get_user_bad8 - .long 11b, __get_user_bad8 + ex_entry 7b, __get_user_bad + ex_entry 8b, __get_user_bad8 + ex_entry 9b, __get_user_bad8 + ex_entry 10b, __get_user_bad8 + ex_entry 11b, __get_user_bad8 #endif -.popsection diff --git a/arch/arm/lib/putuser.S b/arch/arm/lib/putuser.S index bdd8836dc5c2..1bb85192ba60 100644 --- a/arch/arm/lib/putuser.S +++ b/arch/arm/lib/putuser.S @@ -27,6 +27,7 @@ #include <linux/linkage.h> #include <asm/assembler.h> #include <asm/errno.h> +#include <asm/extable.h> #include <asm/domain.h>
ENTRY(__put_user_1) @@ -83,13 +84,11 @@ __put_user_bad: ret lr ENDPROC(__put_user_bad)
-.pushsection __ex_table, "a" - .long 1b, __put_user_bad - .long 2b, __put_user_bad + ex_entry 1b, __put_user_bad + ex_entry 2b, __put_user_bad #if __LINUX_ARM_ARCH__ < 6 - .long 3b, __put_user_bad + ex_entry 3b, __put_user_bad #endif - .long 4b, __put_user_bad - .long 5b, __put_user_bad - .long 6b, __put_user_bad -.popsection + ex_entry 4b, __put_user_bad + ex_entry 5b, __put_user_bad + ex_entry 6b, __put_user_bad diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c index f8dd0b3cc8e0..e788963ec4c8 100644 --- a/arch/arm/mm/alignment.c +++ b/arch/arm/mm/alignment.c @@ -21,6 +21,7 @@ #include <linux/uaccess.h>
#include <asm/cp15.h> +#include <asm/extable.h> #include <asm/system_info.h> #include <asm/unaligned.h> #include <asm/opcodes.h> @@ -204,10 +205,7 @@ union offset_union { "3: mov %0, #1\n" \ " b 2b\n" \ " .popsection\n" \ - " .pushsection __ex_table,"a"\n" \ - " .align 3\n" \ - " .long 1b, 3b\n" \ - " .popsection\n" \ + " ex_entry 1b, 3b\n" \ : "=r" (err), "=&r" (val), "=r" (addr) \ : "0" (err), "2" (addr))
@@ -264,11 +262,8 @@ union offset_union { "4: mov %0, #1\n" \ " b 3b\n" \ " .popsection\n" \ - " .pushsection __ex_table,"a"\n" \ - " .align 3\n" \ - " .long 1b, 4b\n" \ - " .long 2b, 4b\n" \ - " .popsection\n" \ + " ex_entry 1b, 4b\n" \ + " ex_entry 2b, 4b\n" \ : "=r" (err), "=&r" (v), "=&r" (a) \ : "0" (err), "1" (v), "2" (a)); \ if (err) \ @@ -304,13 +299,10 @@ union offset_union { "6: mov %0, #1\n" \ " b 5b\n" \ " .popsection\n" \ - " .pushsection __ex_table,"a"\n" \ - " .align 3\n" \ - " .long 1b, 6b\n" \ - " .long 2b, 6b\n" \ - " .long 3b, 6b\n" \ - " .long 4b, 6b\n" \ - " .popsection\n" \ + " ex_entry 1b, 6b\n" \ + " ex_entry 2b, 6b\n" \ + " ex_entry 3b, 6b\n" \ + " ex_entry 4b, 6b\n" \ : "=r" (err), "=&r" (v), "=&r" (a) \ : "0" (err), "1" (v), "2" (a)); \ if (err) \ diff --git a/arch/arm/mm/extable.c b/arch/arm/mm/extable.c index fc33564597b8..46c4a8a7f5da 100644 --- a/arch/arm/mm/extable.c +++ b/arch/arm/mm/extable.c @@ -11,7 +11,7 @@ int fixup_exception(struct pt_regs *regs)
fixup = search_exception_tables(instruction_pointer(regs)); if (fixup) { - regs->ARM_pc = fixup->fixup; + regs->ARM_pc = (unsigned long)&fixup->fixup + fixup->fixup; #ifdef CONFIG_THUMB2_KERNEL /* Clear the IT state to avoid nasty surprises in the fixup */ regs->ARM_cpsr &= ~PSR_IT_MASK; diff --git a/arch/arm/nwfpe/entry.S b/arch/arm/nwfpe/entry.S index 354d297a193b..366422cdb882 100644 --- a/arch/arm/nwfpe/entry.S +++ b/arch/arm/nwfpe/entry.S @@ -9,6 +9,7 @@ */ #include <linux/linkage.h> #include <asm/assembler.h> +#include <asm/extable.h> #include <asm/opcodes.h>
/* This is the kernel's entry point into the floating point emulator. @@ -109,10 +110,7 @@ next: .Lfix: ret r9 @ let the user eat segfaults .popsection
- .pushsection __ex_table,"a" - .align 3 - .long .Lx1, .Lfix - .popsection + ex_entry .Lx1, .Lfix
@ @ Check whether the instruction is a co-processor instruction. diff --git a/scripts/sorttable.c b/scripts/sorttable.c index 83cdb843d92f..09a6e53b2199 100644 --- a/scripts/sorttable.c +++ b/scripts/sorttable.c @@ -312,12 +312,12 @@ static int do_file(char const *const fname, void *addr) break; case EM_PARISC: case EM_PPC: + case EM_ARM: case EM_PPC64: custom_sort = sort_relative_table; break; case EM_ARCOMPACT: case EM_ARCV2: - case EM_ARM: case EM_MICROBLAZE: case EM_MIPS: case EM_XTENSA:
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
maillist inclusion
commit 04be01192973461cdd00ab47908a78f0e2f55ef8
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
Make RELOCATABLE depend on !JUMP_LABEL in Kconfig, to resolve the compilation conflict between -fpic and JUMP_LABEL.
-------------------------------------------------
Update the build flags and linker script to allow vmlinux to be built as a PIE binary, which retains relocation information about absolute symbol references so that they can be fixed up at runtime. This will be used for implementing KASLR.
Conflicts:
	arch/arm/include/asm/assembler.h
Cc: Russell King <linux@armlinux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Signed-off-by: Felix Fu <fuzhen5@huawei.com>
---
 arch/arm/Kconfig                   | 5 +++++
 arch/arm/Makefile                  | 5 +++++
 arch/arm/include/asm/assembler.h   | 2 +-
 arch/arm/include/asm/vmlinux.lds.h | 6 +++++-
 arch/arm/kernel/vmlinux.lds.S      | 6 ++++++
 scripts/module.lds.S               | 1 +
 6 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 9557808e8937..a29613b7ea8b 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1452,6 +1452,11 @@ config STACKPROTECTOR_PER_TASK Enable this option to switch to a different method that uses a different canary value for each task.
+config RELOCATABLE + bool + depends on !XIP_KERNEL && !JUMP_LABEL + select HAVE_ARCH_PREL32_RELOCATIONS + endmenu
menu "Boot options" diff --git a/arch/arm/Makefile b/arch/arm/Makefile index 547e5856eaa0..43e159617f9b 100644 --- a/arch/arm/Makefile +++ b/arch/arm/Makefile @@ -51,6 +51,11 @@ CHECKFLAGS += -D__ARMEL__ KBUILD_LDFLAGS += -EL endif
+ifeq ($(CONFIG_RELOCATABLE),y) +KBUILD_CFLAGS += -fpic -include $(srctree)/include/linux/hidden.h +LDFLAGS_vmlinux += -pie -shared -Bsymbolic +endif + # # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and # later may result in code being generated that handles signed short and signed diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index e8d48e3c390b..0e4a952f0104 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -625,7 +625,7 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) * mov_l - move a constant value or [relocated] address into a register */ .macro mov_l, dst:req, imm:req, cond - .if __LINUX_ARM_ARCH__ < 7 + .if CONFIG_RELOCATABLE == 1 || __LINUX_ARM_ARCH__ < 7 ldr\cond \dst, =\imm .else movw\cond \dst, #:lower16:\imm diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h index 4c8632d5c432..579becda9453 100644 --- a/arch/arm/include/asm/vmlinux.lds.h +++ b/arch/arm/include/asm/vmlinux.lds.h @@ -63,7 +63,11 @@ EXIT_CALL \ ARM_MMU_DISCARD(*(.text.fixup)) \ ARM_MMU_DISCARD(*(__ex_table)) \ - COMMON_DISCARDS + COMMON_DISCARDS \ + *(.ARM.exidx.discard.text) \ + *(.interp .dynamic) \ + *(.dynsym .dynstr .hash) +
/* * Sections that should stay zero sized, which is safer to explicitly diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S index bd9127c4b451..d6ccf647eef7 100644 --- a/arch/arm/kernel/vmlinux.lds.S +++ b/arch/arm/kernel/vmlinux.lds.S @@ -114,6 +114,12 @@ SECTIONS __smpalt_end = .; } #endif + .rel.dyn : ALIGN(8) { + __rel_begin = .; + *(.rel .rel.* .rel.dyn) + } + __rel_end = ADDR(.rel.dyn) + SIZEOF(.rel.dyn); + .init.pv_table : { __pv_table_begin = .; *(.pv_table) diff --git a/scripts/module.lds.S b/scripts/module.lds.S index bf5bcf2836d8..25a85dbae205 100644 --- a/scripts/module.lds.S +++ b/scripts/module.lds.S @@ -13,6 +13,7 @@ SECTIONS { /DISCARD/ : { *(.discard) *(.discard.*) + *(*.discard.*) }
__ksymtab 0 : { *(SORT(___ksymtab+*)) }
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
maillist inclusion
commit 7e279c05992a88d0517df371a48e72d060b2ca21
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
To prepare for adding support for KASLR, which relocates all absolute symbol references at runtime after the caches have been enabled, update the MMU switch code to avoid using absolute symbol references where possible. This ensures these quantities are invariant under runtime relocation.
Conflicts:
	arch/arm/kernel/head-common.S
Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Signed-off-by: Felix Fu <fuzhen5@huawei.com>
---
 arch/arm/kernel/head-common.S | 60 ++++++++++-------------------------
 1 file changed, 17 insertions(+), 43 deletions(-)
diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S index 42cae73fcc19..44f4aa6c9acc 100644 --- a/arch/arm/kernel/head-common.S +++ b/arch/arm/kernel/head-common.S @@ -80,27 +80,28 @@ __mmap_switched: mov r8, r2 mov r10, r0
- adr r4, __mmap_switched_data mov fp, #0
#if defined(CONFIG_XIP_DEFLATED_DATA) - ARM( ldr sp, [r4], #4 ) - THUMB( ldr sp, [r4] ) - THUMB( add r4, #4 ) + adr_l r4, __bss_stop + mov sp, r4 @ sp (temporary stack in .bss) bl __inflate_kernel_data @ decompress .data to RAM teq r0, #0 bne __error #elif defined(CONFIG_XIP_KERNEL) - ARM( ldmia r4!, {r0, r1, r2, sp} ) - THUMB( ldmia r4!, {r0, r1, r2, r3} ) - THUMB( mov sp, r3 ) + adr_l r0, _sdata + adr_l r1, __data_loc + adr_l r2, _edata_loc + adr_l r3, __bss_stop + mov sp, r3 @ sp (temporary stack in .bss) sub r2, r2, r1 bl __memcpy @ copy .data to RAM #endif
- ARM( ldmia r4!, {r0, r1, sp} ) - THUMB( ldmia r4!, {r0, r1, r3} ) - THUMB( mov sp, r3 ) + adr_l r0, __bss_start + adr_l r1, __bss_stop + adr_l r3, init_thread_union + THREAD_START_SP + mov sp, r3 sub r2, r1, r0 mov r1, #0 bl __memset @ clear .bss @@ -108,46 +109,19 @@ __mmap_switched: adr_l r0, init_task @ get swapper task_struct set_current r0, r1
- ldmia r4, {r0, r1, r2, r3} - str r9, [r0] @ Save processor ID - str r7, [r1] @ Save machine type - str r8, [r2] @ Save atags pointer - cmp r3, #0 - strne r10, [r3] @ Save control register values #ifdef CONFIG_KASAN bl kasan_early_init +#endif + str_l r9, processor_id, r4 @ Save processor ID + str_l r7, __machine_arch_type, r4 @ Save machine type + str_l r8, __atags_pointer, r4 @ Save atags pointer +#ifdef CONFIG_CPU_CP15 + str_l r10, cr_alignment, r4 @ Save control register values #endif mov lr, #0 b start_kernel ENDPROC(__mmap_switched)
- .align 2 - .type __mmap_switched_data, %object -__mmap_switched_data: -#ifdef CONFIG_XIP_KERNEL -#ifndef CONFIG_XIP_DEFLATED_DATA - .long _sdata @ r0 - .long __data_loc @ r1 - .long _edata_loc @ r2 -#endif - .long __bss_stop @ sp (temporary stack in .bss) -#endif - - .long __bss_start @ r0 - .long __bss_stop @ r1 - .long init_thread_union + THREAD_START_SP @ sp - - .long processor_id @ r0 - .long __machine_arch_type @ r1 - .long __atags_pointer @ r2 -#ifdef CONFIG_CPU_CP15 - .long cr_alignment @ r3 -#else -M_CLASS(.long exc_ret) @ r3 -AR_CLASS(.long 0) @ r3 -#endif - .size __mmap_switched_data, . - __mmap_switched_data - __FINIT .text
From: Ard Biesheuvel ard.biesheuvel@linaro.org
maillist inclusion
commit 2c7e6b4d7cbff417ff96a24c243508e16168f90c
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
Replace some unnecessary absolute references with relative ones.
Conflicts:
	arch/arm/kernel/sleep.S
Cc: Russell King linux@armlinux.org.uk
Signed-off-by: Ard Biesheuvel ard.biesheuvel@linaro.org
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com
Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Signed-off-by: Felix Fu fuzhen5@huawei.com
---
 arch/arm/kernel/sleep.S | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/arm/kernel/sleep.S b/arch/arm/kernel/sleep.S
index a86a1d4f3461..ff8722c955f2 100644
--- a/arch/arm/kernel/sleep.S
+++ b/arch/arm/kernel/sleep.S
@@ -61,10 +61,9 @@ ENTRY(__cpu_suspend)
 	stmfd	sp!, {r4 - r11, lr}
 #ifdef MULTI_CPU
-	ldr	r10, =processor
-	ldr	r4, [r10, #CPU_SLEEP_SIZE] @ size of CPU sleep state
+	ldr_l	r4, processor + CPU_SLEEP_SIZE @ size of CPU sleep state
 #else
-	ldr	r4, =cpu_suspend_size
+	adr_l	r4, cpu_suspend_size
 #endif
 	mov	r5, sp			@ current virtual SP
 #ifdef CONFIG_VMAP_STACK
@@ -75,7 +74,7 @@ ENTRY(__cpu_suspend)
 #endif
 	add	r4, r4, #12		@ Space for pgd, virt sp, phys resume fn
 	sub	sp, sp, r4		@ allocate CPU state on stack
-	ldr	r3, =sleep_save_sp
+	adr_l	r3, sleep_save_sp
 	stmfd	sp!, {r0, r1}		@ save suspend func arg and pointer
 	ldr	r3, [r3, #SLEEP_SAVE_SP_VIRT]
 	ALT_SMP(W(nop))			@ don't use adr_l inside ALT_SMP()
From: Ard Biesheuvel ard.biesheuvel@linaro.org
maillist inclusion
commit c3ae0029ea41f4a26a40f592062155412d1b6d07
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
In order for the EFI stub to be able to decide over what range to randomize the load address of the kernel, expose the definition of the default vmalloc base address as VMALLOC_DEFAULT_BASE.
Conflicts:
	arch/arm/mm/mmu.c
Cc: Russell King linux@armlinux.org.uk
Acked-by: Nicolas Pitre nico@linaro.org
Signed-off-by: Ard Biesheuvel ard.biesheuvel@linaro.org
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com
Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Signed-off-by: Felix Fu fuzhen5@huawei.com
---
 arch/arm/include/asm/pgtable.h | 1 +
 1 file changed, 1 insertion(+)
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 16b02f44c7d3..93ffc943c87d 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -50,6 +50,7 @@ extern struct page *empty_zero_page;
 #define VMALLOC_OFFSET		(8*1024*1024)
 #define VMALLOC_START		(((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
 #define VMALLOC_END		0xff800000UL
+#define VMALLOC_DEFAULT_BASE	(VMALLOC_END - (240 << 20) - VMALLOC_OFFSET)
 
 #define LIBRARY_TEXT_START	0x0c000000
From: Ard Biesheuvel ard.biesheuvel@linaro.org
maillist inclusion
commit fe64d7efe89877bc52454f9f2bc9ab0ce01ae8fc
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
The location of swapper_pg_dir is relative to the kernel, not to PAGE_OFFSET or PHYS_OFFSET. So define the symbol relative to the start of the kernel image, and refer to it via its name.
Conflicts:
	arch/arm/kernel/head.S
Cc: Russell King linux@armlinux.org.uk
Acked-by: Nicolas Pitre nico@linaro.org
Signed-off-by: Ard Biesheuvel ard.biesheuvel@linaro.org
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com
Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Signed-off-by: Felix Fu fuzhen5@huawei.com
---
 arch/arm/kernel/head.S | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 1ec35f065617..0375154c1b70 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -90,6 +90,9 @@ kernel_sec_end:
 	.arm

 	__HEAD
+	.globl	swapper_pg_dir
+	.equ	swapper_pg_dir, . - PG_DIR_SIZE
+
 ENTRY(stext)
 ARM_BE8(setend	be )			@ ensure we are in BE8 mode

@@ -185,7 +188,7 @@ ENDPROC(stext)
  * r4  = physical page table address
  */
 __create_page_tables:
-	pgtbl	r4, r8				@ page table address
+	adr_l	r4, swapper_pg_dir		@ page table address

 	/*
	 * Clear the swapper page table
From: Ard Biesheuvel ard.biesheuvel@linaro.org
maillist inclusion
commit 11f8bbc5b0d4d76b3d7114bf9af1805607a20372
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
The location of the ARM vector table in virtual memory is not a compile time constant, and so the virtual addresses of the various entry points are rather meaningless (although they are most likely to reside at the offsets below)
ffff1004 t vector_rst ffff1020 t vector_irq ffff10a0 t vector_dabt ffff1120 t vector_pabt ffff11a0 t vector_und ffff1220 t vector_addrexcptn ffff1240 T vector_fiq
However, when running with KASLR enabled, the virtual addresses are subject to runtime relocation, which means we should avoid taking absolute references to these symbols, not only directly (by taking the address in C code), but also via /proc/kallsyms or other kernel facilities that deal with ELF symbols. For instance, /proc/kallsyms will list their addresses as
0abf1004 t vector_rst 0abf1020 t vector_irq 0abf10a0 t vector_dabt 0abf1120 t vector_pabt 0abf11a0 t vector_und 0abf1220 t vector_addrexcptn 0abf1240 T vector_fiq
when running randomized, which may confuse tools like perf that may use /proc/kallsyms to annotate stack traces.
So use .L prefixes for these symbols. This will prevent them from being visible at all outside the assembler source.
Conflicts:
	arch/arm/include/asm/vmlinux.lds.h
	arch/arm/kernel/entry-armv.S
Signed-off-by: Ard Biesheuvel ard.biesheuvel@linaro.org
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com
Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Signed-off-by: Felix Fu fuzhen5@huawei.com
---
 arch/arm/include/asm/vmlinux.lds.h |  2 --
 arch/arm/kernel/entry-armv.S       | 40 ++++++++++++++++--------------
 2 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
index 579becda9453..825af9c65db2 100644
--- a/arch/arm/include/asm/vmlinux.lds.h
+++ b/arch/arm/include/asm/vmlinux.lds.h
@@ -152,8 +152,6 @@
 	ARM_LMA(__stubs, .stubs);					\
 	. = __stubs_lma + SIZEOF(.stubs);				\
 									\
-	PROVIDE(vector_fiq_offset = vector_fiq - ADDR(.vectors));
-
 #define ARM_TCM								\
 	__itcm_start = ALIGN(4);					\
 	.text_itcm ITCM_OFFSET : AT(__itcm_start - LOAD_OFFSET) {	\
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 6150a716828c..10b84539d83a 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -853,7 +853,7 @@ vector_bhb_bpiall_\name:
 	@ which gives a "context synchronisation".
 #endif

-vector_\name:
+.Lvector_\name:
 	.if \correction
 	sub	lr, lr, #\correction
 	.endif
@@ -882,7 +882,7 @@ vector_\name:
 	mov	r0, sp
 ARM(	ldr	lr, [pc, lr, lsl #2]	)
 	movs	pc, lr			@ branch to handler in SVC mode
-ENDPROC(vector_\name)
+ENDPROC(.Lvector_\name)

 #ifdef CONFIG_HARDEN_BRANCH_HISTORY
 	.subsection 1
@@ -914,6 +914,10 @@ ENDPROC(vector_bhb_loop8_\name)
 	.endm

 	.section .stubs, "ax", %progbits
+#ifdef CONFIG_FIQ
+	.global	vector_fiq_offset
+	.set	vector_fiq_offset, .Lvector_fiq - . + 0x1000
+#endif
 @ These need to remain at the start of the section so that
 @ they are in range of the 'SWI' entries in the vector tables
 @ located 4k down.
@@ -926,11 +930,11 @@ ENDPROC(vector_bhb_loop8_\name)
 	.word	vector_bhb_bpiall_swi
 #endif

-vector_rst:
+.Lvector_rst:
 ARM(	swi	SYS_ERROR0	)
 THUMB(	svc	#0		)
 THUMB(	nop			)
-	b	vector_und
+	b	.Lvector_und

 /*
  * Interrupt dispatcher
@@ -1032,8 +1036,8 @@ vector_rst:
  * (they're not supposed to happen, and won't happen in 32-bit data mode).
  */

-vector_addrexcptn:
-	b	vector_addrexcptn
+.Lvector_addrexcptn:
+	b	.Lvector_addrexcptn

 /*=============================================================================
  * FIQ "NMI" handler
@@ -1062,42 +1066,40 @@ vector_addrexcptn:
 	.long	__fiq_svc			@  e
 	.long	__fiq_svc			@  f

-	.globl	vector_fiq
-
 	.section .vectors, "ax", %progbits
-	W(b)	vector_rst
-	W(b)	vector_und
+	W(b)	.Lvector_rst
+	W(b)	.Lvector_und
 ARM(	.reloc	., R_ARM_LDR_PC_G0, .L__vector_swi		)
 THUMB(	.reloc	., R_ARM_THM_PC12, .L__vector_swi		)
 	W(ldr)	pc, .
-	W(b)	vector_pabt
-	W(b)	vector_dabt
-	W(b)	vector_addrexcptn
-	W(b)	vector_irq
-	W(b)	vector_fiq
+	W(b)	.Lvector_pabt
+	W(b)	.Lvector_dabt
+	W(b)	.Lvector_addrexcptn
+	W(b)	.Lvector_irq
+	W(b)	.Lvector_fiq

 #ifdef CONFIG_HARDEN_BRANCH_HISTORY
 	.section .vectors.bhb.loop8, "ax", %progbits
-	W(b)	vector_rst
+	W(b)	.Lvector_rst
 	W(b)	vector_bhb_loop8_und
 ARM(	.reloc	., R_ARM_LDR_PC_G0, .L__vector_bhb_loop8_swi	)
 THUMB(	.reloc	., R_ARM_THM_PC12, .L__vector_bhb_loop8_swi	)
 	W(ldr)	pc, .
 	W(b)	vector_bhb_loop8_pabt
 	W(b)	vector_bhb_loop8_dabt
-	W(b)	vector_addrexcptn
+	W(b)	.Lvector_addrexcptn
 	W(b)	vector_bhb_loop8_irq
 	W(b)	vector_bhb_loop8_fiq

 	.section .vectors.bhb.bpiall, "ax", %progbits
-	W(b)	vector_rst
+	W(b)	.Lvector_rst
 	W(b)	vector_bhb_bpiall_und
 ARM(	.reloc	., R_ARM_LDR_PC_G0, .L__vector_bhb_bpiall_swi	)
 THUMB(	.reloc	., R_ARM_THM_PC12, .L__vector_bhb_bpiall_swi	)
 	W(ldr)	pc, .
 	W(b)	vector_bhb_bpiall_pabt
 	W(b)	vector_bhb_bpiall_dabt
-	W(b)	vector_addrexcptn
+	W(b)	.Lvector_addrexcptn
 	W(b)	vector_bhb_bpiall_irq
 	W(b)	vector_bhb_bpiall_fiq
 #endif
From: Ard Biesheuvel ard.biesheuvel@linaro.org
maillist inclusion
commit c11744cd7b351b0fbc5233c04c32822544c96fc1
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
Update the Kconfig so that RANDOMIZE_BASE depends on !JUMP_LABEL, to resolve the compilation conflict between -fpic and JUMP_LABEL.
Update PMD_ORDER to PMD_ENTRY_ORDER in OLK-6.6
-------------------------------------------------
This implements randomization of the placement of the kernel image inside the lowmem region. It is intended to work together with the decompressor to place the kernel at an offset in physical memory that is a multiple of 2 MB, and to take the same offset into account when creating the virtual mapping.
This uses runtime relocation of the kernel built as a PIE binary, to fix up all absolute symbol references to refer to their runtime virtual address. The physical-to-virtual mapping remains unchanged.
In order to allow the decompressor to hand over to the core kernel without making assumptions that are not guaranteed to hold when invoking the core kernel directly using bootloaders that are not KASLR aware, the KASLR offset is expected to be placed in r3 when entering the kernel 4 bytes past the entry point, skipping the first instruction.
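The runtime relocation step described above amounts to walking the REL section and adding the KASLR offset to every word covered by an R_ARM_RELATIVE entry. A minimal user-space model of that fixup loop (the struct and function names are hypothetical, not part of the patch; the real loop in head.S also special-cases the vector page):

```c
#include <stdint.h>
#include <stddef.h>

#define R_ARM_RELATIVE 23	/* relocation type tested for in head.S */

struct elf32_rel {
	uint32_t r_offset;	/* address of the word to fix up */
	uint32_t r_info;	/* relocation type in the low byte */
};

/* Model of the fixup loop in __primary_switch: for every R_ARM_RELATIVE
 * entry, add the KASLR offset to the word at r_offset. 'image' stands in
 * for the kernel mapping and is indexed by r_offset for simplicity. */
static void apply_relative_relocs(uint32_t *image, const struct elf32_rel *rel,
				  size_t count, uint32_t kaslr_offset)
{
	size_t i;

	for (i = 0; i < count; i++) {
		if ((rel[i].r_info & 0xff) != R_ARM_RELATIVE)
			continue;	/* skip other relocation types */
		image[rel[i].r_offset / 4] += kaslr_offset;
	}
}
```

Only R_ARM_RELATIVE entries are processed because a PIE kernel's absolute pointers all degenerate to base + addend; no symbol lookup is needed at boot.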
Conflicts:
	arch/arm/kernel/head.S
Cc: Russell King linux@armlinux.org.uk
Signed-off-by: Ard Biesheuvel ard.biesheuvel@linaro.org
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com
Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Signed-off-by: Felix Fu fuzhen5@huawei.com
---
 arch/arm/Kconfig       |  12 +++++
 arch/arm/kernel/head.S | 102 +++++++++++++++++++++++++++++++++++++----
 2 files changed, 105 insertions(+), 9 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index a29613b7ea8b..4d23dc5d7867 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1457,6 +1457,18 @@ config RELOCATABLE
 	depends on !XIP_KERNEL && !JUMP_LABEL
 	select HAVE_ARCH_PREL32_RELOCATIONS

+config RANDOMIZE_BASE
+	bool "Randomize the address of the kernel image"
+	depends on MMU && AUTO_ZRELADDR
+	depends on !XIP_KERNEL && !ZBOOT_ROM && !JUMP_LABEL
+	select RELOCATABLE
+	select ARM_MODULE_PLTS if MODULES
+	select MODULE_REL_CRCS if MODVERSIONS
+	help
+	  Randomizes the virtual and physical address at which the kernel
+	  image is loaded, as a security feature that deters exploit attempts
+	  relying on knowledge of the location of kernel internals.
+
 endmenu

 menu "Boot options"
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 0375154c1b70..e54e0edc36d3 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -69,6 +69,28 @@ kernel_sec_end:
 	sub	\rd, \rd, #PG_DIR_SIZE
 	.endm

+	.macro	get_kaslr_offset, reg
+#ifdef CONFIG_RANDOMIZE_BASE
+	ldr_l	\reg, __kaslr_offset
+#else
+	mov	\reg, #0
+#endif
+	.endm
+
+	.macro	add_kaslr_offset, reg, tmp
+#ifdef CONFIG_RANDOMIZE_BASE
+	get_kaslr_offset \tmp
+	add	\reg, \reg, \tmp
+#endif
+	.endm
+
+	.macro	sub_kaslr_offset, reg, tmp
+#ifdef CONFIG_RANDOMIZE_BASE
+	get_kaslr_offset \tmp
+	sub	\reg, \reg, \tmp
+#endif
+	.endm
+
 /*
  * Kernel startup entry point.
  * ---------------------------
@@ -94,6 +116,7 @@ kernel_sec_end:
 	.equ	swapper_pg_dir, . - PG_DIR_SIZE

 ENTRY(stext)
+	mov	r3, #0			@ normal entry point - clear r3
 ARM_BE8(setend	be )			@ ensure we are in BE8 mode

 THUMB(	badr	r9, 1f		)	@ Kernel is always entered in ARM.
@@ -101,6 +124,16 @@ ENTRY(stext)
 THUMB(	.thumb			)	@ switch to Thumb now.
 THUMB(1:			)

+#ifdef CONFIG_RANDOMIZE_BASE
+	str_l	r3, __kaslr_offset, r9	@ offset in r3 if entered via kaslr ep
+
+	.section ".bss", "aw", %nobits
+	.align	2
+__kaslr_offset:
+	.long	0			@ will be wiped before entering C code
+	.previous
+#endif
+
 #ifdef CONFIG_ARM_VIRT_EXT
 	bl	__hyp_stub_install
 #endif
@@ -124,6 +157,7 @@ ENTRY(stext)
 #ifndef CONFIG_XIP_KERNEL
 	adr_l	r8, _text			@ __pa(_text)
 	sub	r8, r8, #TEXT_OFFSET		@ PHYS_OFFSET
+	sub_kaslr_offset r8, r12
 #else
 	ldr	r8, =PLAT_PHYS_OFFSET		@ always constant in this case
 #endif
@@ -160,8 +194,8 @@ ENTRY(stext)
 	 * r0 will hold the CPU control register value, r1, r2, r4, and
 	 * r9 will be preserved.  r5 will also be preserved if LPAE.
 	 */
-	ldr	r13, =__mmap_switched		@ address to jump to after
-						@ mmu has been enabled
+	adr_l	lr, __primary_switch		@ address to jump to after
+	mov	r13, lr				@ mmu has been enabled
 	badr	lr, 1f				@ return (PIC) address
 #ifdef CONFIG_ARM_LPAE
 	mov	r5, #0				@ high TTBR0
@@ -172,7 +206,8 @@ ENTRY(stext)
 	ldr	r12, [r10, #PROCINFO_INITFUNC]
 	add	r12, r12, r10
 	ret	r12
-1:	b	__enable_mmu
+1:	get_kaslr_offset r12		@ get before turning MMU on
+	b	__enable_mmu
 ENDPROC(stext)
 	.ltorg

@@ -253,15 +288,20 @@ __create_page_tables:
 	 * set two variables to indicate the physical start and end of the
 	 * kernel.
 	 */
-	add	r0, r4, #KERNEL_OFFSET >> (SECTION_SHIFT - PMD_ENTRY_ORDER)
-	ldr	r6, =(_end - 1)
+	get_kaslr_offset r3
+	add	r0, r3, #PAGE_OFFSET
+	add	r0, r4, r0, lsr #(SECTION_SHIFT - PMD_ENTRY_ORDER)
+	adr_l	r6, _end - 1
+	sub	r6, r6, r8
+	add	r6, r6, #PAGE_OFFSET
+	add	r3, r3, r8
 	adr_l	r5, kernel_sec_start	@ _pa(kernel_sec_start)
 #if defined CONFIG_CPU_ENDIAN_BE8 || defined CONFIG_CPU_ENDIAN_BE32
 	str	r8, [r5, #4]		@ Save physical start of kernel (BE)
 #else
 	str	r8, [r5]		@ Save physical start of kernel (LE)
 #endif
-	orr	r3, r8, r7		@ Add the MMU flags
+	orr	r3, r3, r7		@ Add the MMU flags
 	add	r6, r4, r6, lsr #(SECTION_SHIFT - PMD_ENTRY_ORDER)
 1:	str	r3, [r0], #1 << PMD_ENTRY_ORDER
 	add	r3, r3, #1 << SECTION_SHIFT
@@ -411,7 +451,7 @@ ENTRY(secondary_startup)
 	 * Use the page tables supplied from  __cpu_up.
 	 */
 	adr_l	r3, secondary_data
-	mov_l	r12, __secondary_switched
+	mov_l	r12, __secondary_switch
 	ldrd	r4, r5, [r3, #0]	@ get secondary_data.pgdir
 ARM_BE8(eor	r4, r4, r5)		@ Swap r5 and r4 in BE:
 ARM_BE8(eor	r5, r4, r5)		@ it can be done in 3 steps
@@ -457,6 +497,7 @@ ENDPROC(__secondary_switched)
  * r4  = TTBR pointer (low word)
  * r5  = TTBR pointer (high word if LPAE)
  * r9  = processor ID
+ * r12 = KASLR offset
  * r13 = *virtual* address to jump to upon completion
  */
 __enable_mmu:
@@ -494,6 +535,7 @@ ENDPROC(__enable_mmu)
  * r1 = machine ID
  * r2 = atags or dtb pointer
  * r9 = processor ID
+ * r12 = KASLR offset
  * r13 = *virtual* address to jump to upon completion
  *
  * other registers depend on the function called upon completion
@@ -509,10 +551,52 @@ ENTRY(__turn_mmu_on)
 	mov	r3, r3
 	mov	r3, r13
 	ret	r3
-__turn_mmu_on_end:
 ENDPROC(__turn_mmu_on)
-	.popsection

+__primary_switch:
+#ifdef CONFIG_RELOCATABLE
+	adr_l	r7, _text		@ r7 := __pa(_text)
+	sub	r7, r7, #TEXT_OFFSET	@ r7 := PHYS_OFFSET
+
+	adr_l	r5, __rel_begin
+	adr_l	r6, __rel_end
+	sub	r5, r5, r7
+	sub	r6, r6, r7
+
+	add	r5, r5, #PAGE_OFFSET
+	add	r6, r6, #PAGE_OFFSET
+	add	r5, r5, r12
+	add	r6, r6, r12
+
+	adr_l	r3, __stubs_start	@ __pa(__stubs_start)
+	sub	r3, r3, r7		@ offset of __stubs_start
+	add	r3, r3, #PAGE_OFFSET	@ __va(__stubs_start)
+	sub	r3, r3, #0xffff1000	@ subtract VA of stubs section
+
+0:	cmp	r5, r6
+	bge	1f
+	ldm	r5!, {r7, r8}		@ load next relocation entry
+	cmp	r8, #23			@ R_ARM_RELATIVE
+	bne	0b
+	cmp	r7, #0xff000000		@ vector page?
+	addgt	r7, r7, r3		@ fix up VA offset
+	ldr	r8, [r7, r12]
+	add	r8, r8, r12
+	str	r8, [r7, r12]
+	b	0b
+1:
+#endif
+	ldr	pc, =__mmap_switched
+ENDPROC(__primary_switch)
+
+#ifdef CONFIG_SMP
+__secondary_switch:
+	ldr	pc, =__secondary_switched
+ENDPROC(__secondary_switch)
+#endif
+	.ltorg
+__turn_mmu_on_end:
+	.popsection

 #ifdef CONFIG_SMP_ON_UP
 	__HEAD
From: Ard Biesheuvel ard.biesheuvel@linaro.org
maillist inclusion
commit a58cdcfbee11974669a651e3ce049ef729e81411
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
-------------------------------------------------
When randomizing the kernel load address, there may be a large distance in memory between the decompressor binary and its payload and the destination area in memory. Ensure that the decompressor itself is mapped cacheable in this case, by tweaking the existing routine that takes care of this for XIP decompressors.
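The tweaked routine works on 1 MiB section entries: a flat first-level table slot is selected by the top 12 address bits, and one descriptor per megabyte is written from the decompressor's PC up to the end of the image. A rough user-space model of the extended loop (assumes the short-descriptor section format; the function name and direct array indexing are illustrative only):

```c
#include <stdint.h>

/* Model of the extended __setup_mmu loop: fill one 1 MiB section entry
 * per megabyte from 'start' (the decompressor's PC) up to and including
 * 'end' (_end of the image), instead of the single entry mapped before.
 * 'flags' carries the cacheable/bufferable attribute bits. */
static void map_image_cacheable(uint32_t *pgd, uint32_t start, uint32_t end,
				uint32_t flags)
{
	uint32_t idx = start >> 20;		/* first 1 MiB section */
	uint32_t last = end >> 20;		/* last 1 MiB section */
	uint32_t desc = (idx << 20) | flags;	/* section descriptor */

	for (; idx <= last; idx++) {
		pgd[idx] = desc;		/* identity-map this MiB */
		desc += 1 << 20;		/* advance to next megabyte */
	}
}
```

The inclusive upper bound mirrors the `bls 0b` in the assembly, so the megabyte containing `_end` is always covered even when the image is not megabyte-aligned.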
Cc: Russell King linux@armlinux.org.uk
Acked-by: Nicolas Pitre nico@linaro.org
Signed-off-by: Ard Biesheuvel ard.biesheuvel@linaro.org
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com
Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Signed-off-by: Felix Fu fuzhen5@huawei.com
---
 arch/arm/boot/compressed/head.S | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 9f406e9c0ea6..2021baee35ad 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -814,20 +814,24 @@ __setup_mmu:	sub	r3, r4, #16384		@ Page directory size
 		teq	r0, r2
 		bne	1b
 /*
- * If ever we are running from Flash, then we surely want the cache
- * to be enabled also for our execution instance...  We map 2MB of it
- * so there is no map overlap problem for up to 1 MB compressed kernel.
- * If the execution is in RAM then we would only be duplicating the above.
+ * Make sure our entire executable image (including payload) is mapped
+ * cacheable, in case it is located outside the region we covered above.
+ * (This may be the case if running from flash or with randomization enabled)
+ * If the regions happen to overlap, we just duplicate some of the above.
 */
 		orr	r1, r6, #0x04		@ ensure B is set for this
 		orr	r1, r1, #3 << 10
 		mov	r2, pc
+		adr_l	r9, _end
 		mov	r2, r2, lsr #20
+		mov	r9, r9, lsr #20
 		orr	r1, r1, r2, lsl #20
 		add	r0, r3, r2, lsl #2
-		str	r1, [r0], #4
+		add	r9, r3, r9, lsl #2
+0:		str	r1, [r0], #4
 		add	r1, r1, #1048576
-		str	r1, [r0]
+		cmp	r0, r9
+		bls	0b
 		mov	pc, lr
 ENDPROC(__setup_mmu)
From: Ard Biesheuvel ard.biesheuvel@linaro.org
maillist inclusion
commit b152e5c5054c3937211a541be50d8a7c98a59974
category: feature
feature: ARM kaslr support
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
Update the <generated/utsversion.h> dependency in OLK-6.6
-------------------------------------------------
Add support to the decompressor to load the kernel at a randomized offset, and invoke the kernel proper while passing on the information about the offset at which the kernel was loaded.
This implementation will extract some pseudo-randomness from the low bits of the generic timer (if available), and use CRC-16 to combine it with the build ID string and the device tree binary (which ideally has a /chosen/kaslr-seed property, but may also have other properties that differ between boots). This seed is used to select one of the candidate offsets in the lowmem region that don't overlap the zImage itself, the DTB, the initrd and /memreserve/s and/or /reserved-memory nodes that should be left alone.
When booting via the UEFI stub, it is left up to the firmware to supply a suitable seed and select an offset.
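The seed stirring relies on CRC-16 with the reflected polynomial 0xa001, the same algorithm as lib/crc16.c. A byte-at-a-time C model for reference (the assembly version added by this patch processes 32-bit words, but produces the same result for inputs whose size is a multiple of 4 bytes):

```c
#include <stdint.h>
#include <stddef.h>

/* CRC-16 with polynomial 0xa001 (reflected 0x8005), init value passed in
 * by the caller; matches lib/crc16.c. The decompressor uses this to fold
 * the DTB contents and the build ID string into the KASLR seed. */
static uint16_t crc16(uint16_t crc, const uint8_t *buf, size_t len)
{
	size_t i;
	int bit;

	for (i = 0; i < len; i++) {
		crc ^= buf[i];
		for (bit = 0; bit < 8; bit++)
			crc = (crc & 1) ? (crc >> 1) ^ 0xa001 : crc >> 1;
	}
	return crc;
}
```

With an init value of 0 this is CRC-16/ARC; the standard check input "123456789" yields 0xbb3d, which is a convenient way to validate any reimplementation against the assembly version.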
Conflicts:
	arch/arm/boot/compressed/head.S
	arch/arm/boot/compressed/Makefile
Cc: Russell King linux@armlinux.org.uk
Signed-off-by: Ard Biesheuvel ard.biesheuvel@linaro.org
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com
Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Signed-off-by: Felix Fu fuzhen5@huawei.com
---
 arch/arm/boot/compressed/Makefile |   9 +-
 arch/arm/boot/compressed/head.S   |  88 ++++++
 arch/arm/boot/compressed/kaslr.c  | 442 ++++++++++++++++++++++++++++++
 3 files changed, 538 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm/boot/compressed/kaslr.c
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index 726ecabcef09..3e029d7f288c 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -84,10 +84,17 @@ compress-$(CONFIG_KERNEL_LZ4)  = lz4_with_size

 libfdt_objs := fdt_rw.o fdt_ro.o fdt_wip.o fdt.o

+ifneq ($(CONFIG_ARM_ATAG_DTB_COMPAT)$(CONFIG_RANDOMIZE_BASE),)
+OBJS	+= $(libfdt_objs)
 ifeq ($(CONFIG_ARM_ATAG_DTB_COMPAT),y)
 CFLAGS_REMOVE_atags_to_fdt.o += -Wframe-larger-than=${CONFIG_FRAME_WARN}
 CFLAGS_atags_to_fdt.o += -Wframe-larger-than=1280
-OBJS	+= $(libfdt_objs) atags_to_fdt.o
+OBJS	+= atags_to_fdt.o
+endif
+ifeq ($(CONFIG_RANDOMIZE_BASE),y)
+OBJS	+= kaslr.o
+CFLAGS_kaslr.o := -I $(srctree)/scripts/dtc/libfdt
+endif
 endif
 ifeq ($(CONFIG_USE_OF),y)
 OBJS	+= $(libfdt_objs) fdt_check_mem_start.o
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index 2021baee35ad..79d88fb10714 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -174,6 +174,25 @@
 #endif
 	.endm
+	.macro	record_seed
+#ifdef CONFIG_RANDOMIZE_BASE
+	sub	ip, r1, ip, ror #1	@ poor man's kaslr seed, will
+	sub	ip, r2, ip, ror #2	@ be superseded by kaslr-seed
+	sub	ip, r3, ip, ror #3	@ from /chosen if present
+	sub	ip, r4, ip, ror #5
+	sub	ip, r5, ip, ror #8
+	sub	ip, r6, ip, ror #13
+	sub	ip, r7, ip, ror #21
+	sub	ip, r8, ip, ror #3
+	sub	ip, r9, ip, ror #24
+	sub	ip, r10, ip, ror #27
+	sub	ip, r11, ip, ror #19
+	sub	ip, r13, ip, ror #14
+	sub	ip, r14, ip, ror #2
+	str_l	ip, __kaslr_seed, r9
+#endif
+	.endm
+
 		.section ".start", "ax"
 /*
  * sort out different calling conventions
@@ -222,6 +241,7 @@ start:
 		__EFI_HEADER
 1:
 ARM_BE8(	setend	be )		@ go BE8 if compiled for BE8
+		record_seed
 AR_CLASS(	mrs	r9, cpsr	)
 #ifdef CONFIG_ARM_VIRT_EXT
 		bl	__hyp_stub_install	@ get into SVC mode, reversibly
@@ -446,6 +466,38 @@ restart:	adr	r0, LC1
 dtb_check_done:
 #endif
+#ifdef CONFIG_RANDOMIZE_BASE
+	ldr	r1, __kaslr_offset	@ check if the kaslr_offset is
+	cmp	r1, #0			@ already set
+	bne	1f
+
+	stmfd	sp!, {r0-r3, ip, lr}
+	adr_l	r2, _text		@ start of zImage
+	stmfd	sp!, {r2, r8, r10}	@ pass stack arguments
+
+	ldr_l	r3, __kaslr_seed
+#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K) || defined(CONFIG_CPU_V7)
+	/*
+	 * Get some pseudo-entropy from the low bits of the generic
+	 * timer if it is implemented.
+	 */
+	mrc	p15, 0, r1, c0, c1, 1	@ read ID_PFR1 register
+	tst	r1, #0x10000		@ have generic timer?
+	mrrcne	p15, 1, r3, r1, c14	@ read CNTVCT
+#endif
+	adr_l	r0, __kaslr_offset	@ pass &__kaslr_offset in r0
+	mov	r1, r4			@ pass base address
+	mov	r2, r9			@ pass decompressed image size
+	eor	r3, r3, r3, ror #16	@ pass pseudorandom seed
+	bl	kaslr_early_init
+	add	sp, sp, #12
+	cmp	r0, #0
+	addne	r4, r4, r0		@ add offset to base address
+	ldmfd	sp!, {r0-r3, ip, lr}
+	bne	restart
+1:
+#endif
+
 /*
  * Check to see if we will overwrite ourselves.
  *   r4  = final kernel address (possibly with LSB set)
@@ -1439,10 +1491,46 @@ __enter_kernel:
 		mov	r0, #0			@ must be 0
 		mov	r1, r7			@ restore architecture number
 		mov	r2, r8			@ restore atags pointer
+#ifdef CONFIG_RANDOMIZE_BASE
+		ldr	r3, __kaslr_offset
+		add	r4, r4, #4		@ skip first instruction
+#endif
 ARM(		mov	pc, r4		)	@ call kernel
 M_CLASS(	add	r4, r4, #1	)	@ enter in Thumb mode for M class
 THUMB(		bx	r4	)		@ entry point is always ARM for A/R classes
+#ifdef CONFIG_RANDOMIZE_BASE
+		/*
+		 * Minimal implementation of CRC-16 that does not use a
+		 * lookup table and uses 32-bit wide loads, so it still
+		 * performs reasonably well with the D-cache off. Equivalent
+		 * to lib/crc16.c for input sizes that are 4 byte multiples.
+		 */
+ENTRY(__crc16)
+		push	{r4, lr}
+		ldr	r3, =0xa001		@ CRC-16 polynomial
+0:		subs	r2, r2, #4
+		popmi	{r4, pc}
+		ldr	r4, [r1], #4
+#ifdef __ARMEB__
+		eor	ip, r4, r4, ror #16	@ endian swap
+		bic	ip, ip, #0x00ff0000
+		mov	r4, r4, ror #8
+		eor	r4, r4, ip, lsr #8
+#endif
+		eor	r0, r0, r4
+		.rept	32
+		lsrs	r0, r0, #1
+		eorcs	r0, r0, r3
+		.endr
+		b	0b
+ENDPROC(__crc16)
+
+		.align	2
+__kaslr_seed:	.long	0
+__kaslr_offset:	.long	0
+#endif
+
 reloc_code_end:
#ifdef CONFIG_EFI_STUB diff --git a/arch/arm/boot/compressed/kaslr.c b/arch/arm/boot/compressed/kaslr.c new file mode 100644 index 000000000000..df078679b3f6 --- /dev/null +++ b/arch/arm/boot/compressed/kaslr.c @@ -0,0 +1,442 @@ +/* + * Copyright (C) 2017 Linaro Ltd; ard.biesheuvel@linaro.org + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include <linux/libfdt_env.h> +#include <libfdt.h> +#include <linux/types.h> +#include <generated/compile.h> +#include <generated/utsrelease.h> +#include <generated/utsversion.h> +#include <asm/pgtable.h> + +#include CONFIG_UNCOMPRESS_INCLUDE + +struct regions { + u32 pa_start; + u32 pa_end; + u32 image_size; + u32 zimage_start; + u32 zimage_size; + u32 dtb_start; + u32 dtb_size; + u32 initrd_start; + u32 initrd_size; + int reserved_mem; + int reserved_mem_addr_cells; + int reserved_mem_size_cells; +}; + +extern u32 __crc16(u32 crc, u32 const input[], int byte_count); + +static u32 __memparse(const char *val, const char **retptr) +{ + int base = 10; + u32 ret = 0; + + if (*val == '0') { + val++; + if (*val == 'x' || *val == 'X') { + val++; + base = 16; + } else { + base = 8; + } + } + + while (*val != ',' && *val != ' ' && *val != '\0') { + char c = *val++; + + switch (c) { + case '0' ... '9': + ret = ret * base + (c - '0'); + continue; + case 'a' ... 'f': + ret = ret * base + (c - 'a' + 10); + continue; + case 'A' ... 
'F': + ret = ret * base + (c - 'A' + 10); + continue; + case 'g': + case 'G': + ret <<= 10; + case 'm': + case 'M': + ret <<= 10; + case 'k': + case 'K': + ret <<= 10; + break; + default: + if (retptr) + *retptr = NULL; + return 0; + } + } + if (retptr) + *retptr = val; + return ret; +} + +static bool regions_intersect(u32 s1, u32 e1, u32 s2, u32 e2) +{ + return e1 >= s2 && e2 >= s1; +} + +static bool intersects_reserved_region(const void *fdt, u32 start, + u32 end, struct regions *regions) +{ + int subnode, len, i; + u64 base, size; + + /* check for overlap with /memreserve/ entries */ + for (i = 0; i < fdt_num_mem_rsv(fdt); i++) { + if (fdt_get_mem_rsv(fdt, i, &base, &size) < 0) + continue; + if (regions_intersect(start, end, base, base + size)) + return true; + } + + if (regions->reserved_mem < 0) + return false; + + /* check for overlap with static reservations in /reserved-memory */ + for (subnode = fdt_first_subnode(fdt, regions->reserved_mem); + subnode >= 0; + subnode = fdt_next_subnode(fdt, subnode)) { + const fdt32_t *reg; + + len = 0; + reg = fdt_getprop(fdt, subnode, "reg", &len); + while (len >= (regions->reserved_mem_addr_cells + + regions->reserved_mem_size_cells)) { + + base = fdt32_to_cpu(reg[0]); + if (regions->reserved_mem_addr_cells == 2) + base = (base << 32) | fdt32_to_cpu(reg[1]); + + reg += regions->reserved_mem_addr_cells; + len -= 4 * regions->reserved_mem_addr_cells; + + size = fdt32_to_cpu(reg[0]); + if (regions->reserved_mem_size_cells == 2) + size = (size << 32) | fdt32_to_cpu(reg[1]); + + reg += regions->reserved_mem_size_cells; + len -= 4 * regions->reserved_mem_size_cells; + + if (base >= regions->pa_end) + continue; + + if (regions_intersect(start, end, base, + min(base + size, (u64)U32_MAX))) + return true; + } + } + return false; +} + +static bool intersects_occupied_region(const void *fdt, u32 start, + u32 end, struct regions *regions) +{ + if (regions_intersect(start, end, regions->zimage_start, + regions->zimage_start + 
regions->zimage_size)) + return true; + + if (regions_intersect(start, end, regions->initrd_start, + regions->initrd_start + regions->initrd_size)) + return true; + + if (regions_intersect(start, end, regions->dtb_start, + regions->dtb_start + regions->dtb_size)) + return true; + + return intersects_reserved_region(fdt, start, end, regions); +} + +static u32 count_suitable_regions(const void *fdt, struct regions *regions, + u32 *bitmap) +{ + u32 pa, i = 0, ret = 0; + + for (pa = regions->pa_start; pa < regions->pa_end; pa += SZ_2M, i++) { + if (!intersects_occupied_region(fdt, pa, + pa + regions->image_size, + regions)) { + ret++; + } else { + /* set 'occupied' bit */ + bitmap[i >> 5] |= BIT(i & 0x1f); + } + } + return ret; +} + +static u32 get_region_number(u32 num, u32 *bitmap) +{ + u32 i; + + for (i = 0; num > 0; i++) + if (!(bitmap[i >> 5] & BIT(i & 0x1f))) + num--; + return i; +} + +static void get_cell_sizes(const void *fdt, int node, int *addr_cells, + int *size_cells) +{ + const int *prop; + int len; + + /* + * Retrieve the #address-cells and #size-cells properties + * from the 'node', or use the default if not provided. + */ + *addr_cells = *size_cells = 1; + + prop = fdt_getprop(fdt, node, "#address-cells", &len); + if (len == 4) + *addr_cells = fdt32_to_cpu(*prop); + prop = fdt_getprop(fdt, node, "#size-cells", &len); + if (len == 4) + *size_cells = fdt32_to_cpu(*prop); +} + +static u32 get_memory_end(const void *fdt) +{ + int mem_node, address_cells, size_cells, len; + const fdt32_t *reg; + u64 memory_end = 0; + + /* Look for a node called "memory" at the lowest level of the tree */ + mem_node = fdt_path_offset(fdt, "/memory"); + if (mem_node <= 0) + return 0; + + get_cell_sizes(fdt, 0, &address_cells, &size_cells); + + /* + * Now find the 'reg' property of the /memory node, and iterate over + * the base/size pairs. 
+ */ + len = 0; + reg = fdt_getprop(fdt, mem_node, "reg", &len); + while (len >= 4 * (address_cells + size_cells)) { + u64 base, size; + + base = fdt32_to_cpu(reg[0]); + if (address_cells == 2) + base = (base << 32) | fdt32_to_cpu(reg[1]); + + reg += address_cells; + len -= 4 * address_cells; + + size = fdt32_to_cpu(reg[0]); + if (size_cells == 2) + size = (size << 32) | fdt32_to_cpu(reg[1]); + + reg += size_cells; + len -= 4 * size_cells; + + memory_end = max(memory_end, base + size); + } + return min(memory_end, (u64)U32_MAX); +} + +static char *__strstr(const char *s1, const char *s2, int l2) +{ + int l1; + + l1 = strlen(s1); + while (l1 >= l2) { + l1--; + if (!memcmp(s1, s2, l2)) + return (char *)s1; + s1++; + } + return NULL; +} + +static const char *get_cmdline_param(const char *cmdline, const char *param, + int param_size) +{ + static const char default_cmdline[] = CONFIG_CMDLINE; + const char *p; + + if (!IS_ENABLED(CONFIG_CMDLINE_FORCE) && cmdline != NULL) { + p = __strstr(cmdline, param, param_size); + if (p == cmdline || + (p > cmdline && *(p - 1) == ' ')) + return p; + } + + if (IS_ENABLED(CONFIG_CMDLINE_FORCE) || + IS_ENABLED(CONFIG_CMDLINE_EXTEND)) { + p = __strstr(default_cmdline, param, param_size); + if (p == default_cmdline || + (p > default_cmdline && *(p - 1) == ' ')) + return p; + } + return NULL; +} + +static void __puthex32(const char *name, u32 val) +{ + int i; + + while (*name) + putc(*name++); + putc(':'); + for (i = 28; i >= 0; i -= 4) { + char c = (val >> i) & 0xf; + + if (c < 10) + putc(c + '0'); + else + putc(c + 'a' - 10); + } + putc('\r'); + putc('\n'); +} +#define puthex32(val) __puthex32(#val, (val)) + +u32 kaslr_early_init(u32 *kaslr_offset, u32 image_base, u32 image_size, + u32 seed, u32 zimage_start, const void *fdt, + u32 zimage_end) +{ + static const char __aligned(4) build_id[] = UTS_VERSION UTS_RELEASE; + u32 bitmap[(VMALLOC_END - PAGE_OFFSET) / SZ_2M / 32] = {}; + struct regions regions; + const char *command_line; + const 
char *p; + int chosen, len; + u32 lowmem_top, count, num; + + if (IS_ENABLED(CONFIG_EFI_STUB)) { + extern u32 __efi_kaslr_offset; + + if (__efi_kaslr_offset == U32_MAX) + return 0; + } + + if (fdt_check_header(fdt)) + return 0; + + chosen = fdt_path_offset(fdt, "/chosen"); + if (chosen < 0) + return 0; + + command_line = fdt_getprop(fdt, chosen, "bootargs", &len); + + /* check the command line for the presence of 'nokaslr' */ + p = get_cmdline_param(command_line, "nokaslr", sizeof("nokaslr") - 1); + if (p != NULL) + return 0; + + /* check the command line for the presence of 'vmalloc=' */ + p = get_cmdline_param(command_line, "vmalloc=", sizeof("vmalloc=") - 1); + if (p != NULL) + lowmem_top = VMALLOC_END - __memparse(p + 8, NULL) - + VMALLOC_OFFSET; + else + lowmem_top = VMALLOC_DEFAULT_BASE; + + regions.image_size = image_base % SZ_128M + round_up(image_size, SZ_2M); + regions.pa_start = round_down(image_base, SZ_128M); + regions.pa_end = lowmem_top - PAGE_OFFSET + regions.pa_start; + regions.zimage_start = zimage_start; + regions.zimage_size = zimage_end - zimage_start; + regions.dtb_start = (u32)fdt; + regions.dtb_size = fdt_totalsize(fdt); + + /* + * Stir up the seed a bit by taking the CRC of the DTB: + * hopefully there's a /chosen/kaslr-seed in there. + */ + seed = __crc16(seed, fdt, regions.dtb_size); + + /* stir a bit more using data that changes between builds */ + seed = __crc16(seed, (u32 *)build_id, sizeof(build_id)); + + /* check for initrd on the command line */ + regions.initrd_start = regions.initrd_size = 0; + p = get_cmdline_param(command_line, "initrd=", sizeof("initrd=") - 1); + if (p != NULL) { + regions.initrd_start = __memparse(p + 7, &p); + if (*p++ == ',') + regions.initrd_size = __memparse(p, NULL); + if (regions.initrd_size == 0) + regions.initrd_start = 0; + } + + /* ... 
or in /chosen */ + if (regions.initrd_size == 0) { + const fdt32_t *prop; + u64 start = 0, end = 0; + + prop = fdt_getprop(fdt, chosen, "linux,initrd-start", &len); + if (prop) { + start = fdt32_to_cpu(prop[0]); + if (len == 8) + start = (start << 32) | fdt32_to_cpu(prop[1]); + } + + prop = fdt_getprop(fdt, chosen, "linux,initrd-end", &len); + if (prop) { + end = fdt32_to_cpu(prop[0]); + if (len == 8) + end = (end << 32) | fdt32_to_cpu(prop[1]); + } + if (start != 0 && end != 0 && start < U32_MAX) { + regions.initrd_start = start; + regions.initrd_size = min_t(u64, end, U32_MAX) - start; + } + } + + /* check the memory nodes for the size of the lowmem region */ + regions.pa_end = min(regions.pa_end, get_memory_end(fdt)) - + regions.image_size; + + puthex32(regions.image_size); + puthex32(regions.pa_start); + puthex32(regions.pa_end); + puthex32(regions.zimage_start); + puthex32(regions.zimage_size); + puthex32(regions.dtb_start); + puthex32(regions.dtb_size); + puthex32(regions.initrd_start); + puthex32(regions.initrd_size); + + /* check for a reserved-memory node and record its cell sizes */ + regions.reserved_mem = fdt_path_offset(fdt, "/reserved-memory"); + if (regions.reserved_mem >= 0) + get_cell_sizes(fdt, regions.reserved_mem, + &regions.reserved_mem_addr_cells, + &regions.reserved_mem_size_cells); + + /* + * Iterate over the physical memory range covered by the lowmem region + * in 2 MB increments, and count each offset at which we don't overlap + * with any of the reserved regions for the zImage itself, the DTB, + * the initrd and any regions described as reserved in the device tree. + * If the region does overlap, set the respective bit in the bitmap[]. + * Using this random value, we go over the bitmap and count zero bits + * until we counted enough iterations, and return the offset we ended + * up at.
+ */ + count = count_suitable_regions(fdt, &regions, bitmap); + puthex32(count); + + num = ((u16)seed * count) >> 16; + puthex32(num); + + *kaslr_offset = get_region_number(num, bitmap) * SZ_2M; + puthex32(*kaslr_offset); + + return *kaslr_offset; +}
From: Ard Biesheuvel ard.biesheuvel@linaro.org
maillist inclusion commit b4fa1dbef0cac754a6daec1dec575540967dc240 category: feature feature: ARM kaslr support bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm...
The GCC flag '-fvisibility=hidden' sets the visibility attribute for entities with external linkage in object files. Visibility can also be set selectively for individual entities by using pairs of the #pragma GCC visibility push and #pragma GCC visibility pop compiler directives throughout a source program. When we include hidden.h, __bss_start and __bss_stop change from global symbols to local symbols, so we need to modify the regular expression to accommodate this change.
-------------------------------------------------
Conflicts: arch/arm/boot/compressed/Makefile
Signed-off-by: Ard Biesheuvel ard.biesheuvel@linaro.org Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/boot/compressed/Makefile | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile index 3e029d7f288c..b618ae8c1062 100644 --- a/arch/arm/boot/compressed/Makefile +++ b/arch/arm/boot/compressed/Makefile @@ -115,8 +115,8 @@ asflags-y := -DZIMAGE
# Supply kernel BSS size to the decompressor via a linker symbol. KBSS_SZ = $(shell echo $$(($$($(NM) vmlinux | \ - sed -n -e 's/^([^ ]*) [ABD] __bss_start$$/-0x\1/p' \ - -e 's/^([^ ]*) [ABD] __bss_stop$$/+0x\1/p') )) ) + sed -n -e 's/^([^ ]*) [ABDb] __bss_start$$/-0x\1/p' \ + -e 's/^([^ ]*) [ABDb] __bss_stop$$/+0x\1/p') )) ) LDFLAGS_vmlinux = --defsym _kernel_bss_size=$(KBSS_SZ) # Supply ZRELADDR to the decompressor via a linker symbol. ifneq ($(CONFIG_AUTO_ZRELADDR),y)
From: Ye Bin yebin10@huawei.com
maillist inclusion from mainline-v5.7-rc1 commit 32830a0534700f86366f371b150b17f0f0d140d7 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 DTS: NA CVE: NA
------------------------------------------------------------------------
Fix the following warnings: arm-linux-gnueabihf-ld: warning: orphan section `.data.rel.local' from `net/sunrpc/xprt.o' being placed in section `.data.rel.local'. ...... arm-linux-gnueabihf-ld: warning: orphan section `.got.plt' from `arch/arm/kernel/head.o' being placed in section `.got.plt'. arm-linux-gnueabihf-ld: warning: orphan section `.plt' from `arch/arm/kernel/head.o' being placed in section `.plt'. arm-linux-gnueabihf-ld: warning: orphan section `.data.rel.ro' from `arch/arm/kernel/head.o' being placed in section `.data.rel.ro'. ......
Conflicts: arch/arm/boot/compressed/kaslr.c arch/arm/kernel/vmlinux.lds.S
Fixes:("ARM: kernel: make vmlinux buildable as a PIE executable") Signed-off-by: Ye Bin yebin10@huawei.com Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/boot/compressed/kaslr.c | 4 +++- arch/arm/include/asm/vmlinux.lds.h | 3 ++- arch/arm/kernel/vmlinux.lds.S | 11 ++++++++++- 3 files changed, 15 insertions(+), 3 deletions(-)
diff --git a/arch/arm/boot/compressed/kaslr.c b/arch/arm/boot/compressed/kaslr.c index df078679b3f6..cd9915ff038b 100644 --- a/arch/arm/boot/compressed/kaslr.c +++ b/arch/arm/boot/compressed/kaslr.c @@ -13,7 +13,7 @@ #include <generated/compile.h> #include <generated/utsrelease.h> #include <generated/utsversion.h> -#include <asm/pgtable.h> +#include <linux/pgtable.h>
#include CONFIG_UNCOMPRESS_INCLUDE
@@ -65,9 +65,11 @@ static u32 __memparse(const char *val, const char **retptr) case 'g': case 'G': ret <<= 10; + fallthrough; case 'm': case 'M': ret <<= 10; + fallthrough; case 'k': case 'K': ret <<= 10; diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h index 825af9c65db2..3ae285cdd2ce 100644 --- a/arch/arm/include/asm/vmlinux.lds.h +++ b/arch/arm/include/asm/vmlinux.lds.h @@ -75,7 +75,7 @@ */ #define ARM_ASSERTS \ .plt : { \ - *(.iplt) *(.rel.iplt) *(.iplt) *(.igot.plt) \ + *(.iplt) *(.rel.iplt) *(.iplt) *(.igot.plt) *(.plt) \ } \ ASSERT(SIZEOF(.plt) == 0, \ "Unexpected run-time procedure linkages detected!") @@ -105,6 +105,7 @@ ARM_STUBS_TEXT \ . = ALIGN(4); \ *(.got) /* Global offset table */ \ + *(.got.plt) \ ARM_CPU_KEEP(PROC_INFO)
/* Stack unwinding tables */ diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S index d6ccf647eef7..4a45f8e6cb4d 100644 --- a/arch/arm/kernel/vmlinux.lds.S +++ b/arch/arm/kernel/vmlinux.lds.S @@ -116,7 +116,7 @@ SECTIONS #endif .rel.dyn : ALIGN(8) { __rel_begin = .; - *(.rel .rel.* .rel.dyn) + *(.rel .rel.* .rel.dyn .rel*) } __rel_end = ADDR(.rel.dyn) + SIZEOF(.rel.dyn);
@@ -149,6 +149,15 @@ SECTIONS
_sdata = .; RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN) + + .data.rel.local : { + *(.data.rel.local) + } + + .data.rel.ro : { + *(.data.rel.ro) + } + _edata = .;
BSS_SECTION(0, 0, 0)
From: Ye Bin yebin10@huawei.com
hulk inclusion commit 88bf5c03832d56c68fac61e4ae97158b3332bd63 category: feature feature: ARM kaslr support bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
-------------------------------------------------
Fix the memtop calculation: when the device tree provides no memory-top information, we must not substitute zero for it.
Signed-off-by: Ye Bin yebin10@huawei.com Reviewed-by: Jing Xiangfeng jingxiangfeng@huawei.com Signed-off-by: zhangyi (F) yi.zhang@huawei.com Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/boot/compressed/kaslr.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/arm/boot/compressed/kaslr.c b/arch/arm/boot/compressed/kaslr.c index cd9915ff038b..caefec2b4219 100644 --- a/arch/arm/boot/compressed/kaslr.c +++ b/arch/arm/boot/compressed/kaslr.c @@ -317,7 +317,7 @@ u32 kaslr_early_init(u32 *kaslr_offset, u32 image_base, u32 image_size, const char *command_line; const char *p; int chosen, len; - u32 lowmem_top, count, num; + u32 lowmem_top, count, num, mem_fdt;
if (IS_ENABLED(CONFIG_EFI_STUB)) { extern u32 __efi_kaslr_offset; @@ -401,8 +401,11 @@ u32 kaslr_early_init(u32 *kaslr_offset, u32 image_base, u32 image_size, }
/* check the memory nodes for the size of the lowmem region */ - regions.pa_end = min(regions.pa_end, get_memory_end(fdt)) - - regions.image_size; + mem_fdt = get_memory_end(fdt); + if (mem_fdt) + regions.pa_end = min(regions.pa_end, mem_fdt) - regions.image_size; + else + regions.pa_end = regions.pa_end - regions.image_size;
puthex32(regions.image_size); puthex32(regions.pa_start);
From: Ye Bin yebin10@huawei.com
hulk inclusion commit b20bc6211469919f2022884e9a1634d8e576c281 category: feature feature: ARM kaslr support bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
-------------------------------------------------
When booting with vxboot, we must adjust the DTB address before kaslr_early_init(), and store the DTB address after init.
Signed-off-by: Ye Bin yebin10@huawei.com Reviewed-by: Jing Xiangfeng jingxiangfeng@huawei.com Signed-off-by: zhangyi (F) yi.zhang@huawei.com Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/boot/compressed/head.S | 31 +++++++++++++++++++++++++++++++ 1 file changed, 31 insertions(+)
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S index 79d88fb10714..f3a99e6aef19 100644 --- a/arch/arm/boot/compressed/head.S +++ b/arch/arm/boot/compressed/head.S @@ -472,6 +472,29 @@ dtb_check_done: bne 1f
stmfd sp!, {r0-r3, ip, lr} +#ifdef CONFIG_ARCH_HISI +#ifdef CONFIG_ARM_APPENDED_DTB +#ifdef CONFIG_START_MEM_2M_ALIGN + mov r0, r4 +#ifdef CONFIG_CORTEX_A9 + lsr r0, r0, #20 + lsl r0, r0, #20 +#else + lsr r0, r0, #21 + lsl r0, r0, #21 +#endif + add r0, r0, #0x1000 + ldr r1, [r0] +#ifndef __ARMEB__ + ldr r2, =0xedfe0dd0 @ sig is 0xd00dfeed big endian +#else + ldr r2, =0xd00dfeed +#endif + cmp r1, r2 + moveq r8, r0 +#endif +#endif +#endif adr_l r2, _text @ start of zImage stmfd sp!, {r2, r8, r10} @ pass stack arguments
@@ -493,6 +516,14 @@ dtb_check_done: add sp, sp, #12 cmp r0, #0 addne r4, r4, r0 @ add offset to base address +#ifdef CONFIG_VXBOOT +#ifdef CONFIG_START_MEM_2M_ALIGN +#ifdef CONFIG_CORTEX_A9 + adr r1, vx_edata + strne r6, [r1] +#endif +#endif +#endif ldmfd sp!, {r0-r3, ip, lr} bne restart 1:
From: Ye Bin yebin10@huawei.com
hulk inclusion commit 6337511516862e5a4d2d5a96481510e4a7a12b1b category: feature feature: ARM kaslr support bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
-------------------------------------------------
If we do not hide the GOT, inserting a module that references a global variable fails with the error "Unknown symbol _GLOBAL_OFFSET_TABLE_ (err 0)".
Signed-off-by: Ye Bin yebin10@huawei.com Reviewed-by: Jing Xiangfeng jingxiangfeng@huawei.com Signed-off-by: zhangyi (F) yi.zhang@huawei.com Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/boot/compressed/decompress.c | 4 ++++ arch/arm/boot/compressed/misc.c | 3 +++ 2 files changed, 7 insertions(+)
diff --git a/arch/arm/boot/compressed/decompress.c b/arch/arm/boot/compressed/decompress.c index 0669851394f0..29cf68dce7a0 100644 --- a/arch/arm/boot/compressed/decompress.c +++ b/arch/arm/boot/compressed/decompress.c @@ -1,6 +1,10 @@ // SPDX-License-Identifier: GPL-2.0 #define _LINUX_STRING_H_
+#ifdef CONFIG_RANDOMIZE_BASE +#pragma GCC visibility pop +#endif + #include <linux/compiler.h> /* for inline */ #include <linux/types.h> /* for size_t */ #include <linux/stddef.h> /* for NULL */ diff --git a/arch/arm/boot/compressed/misc.c b/arch/arm/boot/compressed/misc.c index 6b4baa6a9a50..047ec4069ea7 100644 --- a/arch/arm/boot/compressed/misc.c +++ b/arch/arm/boot/compressed/misc.c @@ -16,6 +16,9 @@ * which should point to addresses in RAM and cleared to 0 on start. * This allows for a much quicker boot time. */ +#ifdef CONFIG_RANDOMIZE_BASE +#pragma GCC visibility pop +#endif
unsigned int __machine_arch_type;
From: Ye Bin yebin10@huawei.com
hulk inclusion commit 76bbe667ce4ea3f02bd325ca8e8c999c15034079 category: feature feature: ARM kaslr support bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
-------------------------------------------------
Conflicts: arch/arm/include/asm/memory.h
Signed-off-by: Ye Bin yebin10@huawei.com Reviewed-by: Jason Yan yanaijie@huawei.com Signed-off-by: yangerkun yangerkun@huawei.com Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/include/asm/memory.h | 14 ++++++++++++++ arch/arm/kernel/head.S | 2 +- arch/arm/kernel/setup.c | 31 +++++++++++++++++++++++++++++++ 3 files changed, 46 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h index ef2aa79ece5a..11bf7814fb93 100644 --- a/arch/arm/include/asm/memory.h +++ b/arch/arm/include/asm/memory.h @@ -171,6 +171,20 @@ extern unsigned long vectors_base; extern u64 kernel_sec_start; extern u64 kernel_sec_end;
+#ifdef CONFIG_RANDOMIZE_BASE +extern unsigned long __kaslr_offset; + +static inline unsigned long kaslr_offset(void) +{ + return __kaslr_offset; +} +#else +static inline unsigned long kaslr_offset(void) +{ + return 0; +} +#endif + /* * Physical vs virtual RAM address space conversion. These are * private definitions which should NOT be used outside memory.h diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S index e54e0edc36d3..fd37084d80ba 100644 --- a/arch/arm/kernel/head.S +++ b/arch/arm/kernel/head.S @@ -129,7 +129,7 @@ ENTRY(stext)
.section ".bss", "aw", %nobits .align 2 -__kaslr_offset: +ENTRY(__kaslr_offset) .long 0 @ will be wiped before entering C code .previous #endif diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c index c66b560562b3..5cfc9c5056a7 100644 --- a/arch/arm/kernel/setup.c +++ b/arch/arm/kernel/setup.c @@ -60,6 +60,7 @@ #include <asm/memblock.h> #include <asm/virt.h> #include <asm/kasan.h> +#include <linux/panic_notifier.h>
#include "atags.h"
@@ -1359,3 +1360,33 @@ const struct seq_operations cpuinfo_op = { .stop = c_stop, .show = c_show }; + +/* + * Dump out kernel offset information on panic. + */ +static int dump_kernel_offset(struct notifier_block *self, unsigned long v, + void *p) +{ + const unsigned long offset = kaslr_offset(); + + if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && offset > 0) { + pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n", + offset, PAGE_OFFSET); + + } else { + pr_emerg("Kernel Offset: disabled\n"); + } + return 0; +} + +static struct notifier_block kernel_offset_notifier = { + .notifier_call = dump_kernel_offset +}; + +static int __init register_kernel_offset_dumper(void) +{ + atomic_notifier_chain_register(&panic_notifier_list, + &kernel_offset_notifier); + return 0; +} +__initcall(register_kernel_offset_dumper);
From: Cui GaoSheng cuigaosheng1@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
------------------------------------------------------------------------
Fix the following warning: armeb-linux-gnueabi-ld: warning: orphan section `.gnu.hash' from `arch/arm/kernel/head.o' being placed in section `.gnu.hash'
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/kernel/vmlinux.lds.S | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S index 4a45f8e6cb4d..b659b4b69e01 100644 --- a/arch/arm/kernel/vmlinux.lds.S +++ b/arch/arm/kernel/vmlinux.lds.S @@ -69,6 +69,10 @@ SECTIONS #endif _etext = .; /* End of text section */
+ .gnu.hash : { + *(.gnu.hash) + } + RO_DATA(PAGE_SIZE)
. = ALIGN(4);
From: Cui GaoSheng cuigaosheng1@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
------------------------------------------------------------------------
Linux can't compile modules with -fpic, because modules have their own relocation tables and can't use the GOT for symbol addressing.
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/Makefile | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/arm/Makefile b/arch/arm/Makefile index 43e159617f9b..01e8995b9b64 100644 --- a/arch/arm/Makefile +++ b/arch/arm/Makefile @@ -53,6 +53,7 @@ endif
ifeq ($(CONFIG_RELOCATABLE),y) KBUILD_CFLAGS += -fpic -include $(srctree)/include/linux/hidden.h +CFLAGS_MODULE += -fno-pic LDFLAGS_vmlinux += -pie -shared -Bsymbolic endif
From: Cui GaoSheng cuigaosheng1@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
------------------------------------------------------------------------
Fix the bug of hidden symbols when the decompressor code is compiled: we can't enable the hidden-visibility cflags there, because the decompressor code needs to support symbol relocation.
Conflicts: arch/arm/boot/compressed/Makefile
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/boot/compressed/Makefile | 4 ++++ arch/arm/boot/compressed/decompress.c | 4 ---- arch/arm/boot/compressed/misc.c | 4 ---- 3 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile index b618ae8c1062..dc0243ed9733 100644 --- a/arch/arm/boot/compressed/Makefile +++ b/arch/arm/boot/compressed/Makefile @@ -104,6 +104,10 @@ OBJS += lib1funcs.o ashldi3.o bswapsdi2.o
targets := vmlinux vmlinux.lds piggy_data piggy.o \ head.o $(OBJS) +ifeq ($(CONFIG_RELOCATABLE),y) +HIDDEN_STR := -include $(srctree)/include/linux/hidden.h +KBUILD_CFLAGS := $(subst $(HIDDEN_STR), , $(KBUILD_CFLAGS)) +endif
KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
diff --git a/arch/arm/boot/compressed/decompress.c b/arch/arm/boot/compressed/decompress.c index 29cf68dce7a0..0669851394f0 100644 --- a/arch/arm/boot/compressed/decompress.c +++ b/arch/arm/boot/compressed/decompress.c @@ -1,10 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 #define _LINUX_STRING_H_
-#ifdef CONFIG_RANDOMIZE_BASE -#pragma GCC visibility pop -#endif - #include <linux/compiler.h> /* for inline */ #include <linux/types.h> /* for size_t */ #include <linux/stddef.h> /* for NULL */ diff --git a/arch/arm/boot/compressed/misc.c b/arch/arm/boot/compressed/misc.c index 047ec4069ea7..65bda6811065 100644 --- a/arch/arm/boot/compressed/misc.c +++ b/arch/arm/boot/compressed/misc.c @@ -16,10 +16,6 @@ * which should point to addresses in RAM and cleared to 0 on start. * This allows for a much quicker boot time. */ -#ifdef CONFIG_RANDOMIZE_BASE -#pragma GCC visibility pop -#endif - unsigned int __machine_arch_type;
#include <linux/compiler.h> /* for inline */
From: Cui GaoSheng cuigaosheng1@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
------------------------------------------------------------------------
The dts files of some boards may have multiple memory nodes, so when calculating the KASLR offset we need to take the memory layout into account and choose the memory node in which the zImage is located.
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/boot/compressed/kaslr.c | 64 +++++++++++++++++++------------- 1 file changed, 39 insertions(+), 25 deletions(-)
diff --git a/arch/arm/boot/compressed/kaslr.c b/arch/arm/boot/compressed/kaslr.c index caefec2b4219..37f9e6ef8060 100644 --- a/arch/arm/boot/compressed/kaslr.c +++ b/arch/arm/boot/compressed/kaslr.c @@ -209,11 +209,15 @@ static void get_cell_sizes(const void *fdt, int node, int *addr_cells, *size_cells = fdt32_to_cpu(*prop); }
-static u32 get_memory_end(const void *fdt) +/* + * Original method only consider the first memory node in dtb, + * but there may be more than one memory nodes, we only consider + * the memory node zImage exists. + */ +static u32 get_memory_end(const void *fdt, u32 zimage_start) { int mem_node, address_cells, size_cells, len; const fdt32_t *reg; - u64 memory_end = 0;
/* Look for a node called "memory" at the lowest level of the tree */ mem_node = fdt_path_offset(fdt, "/memory"); @@ -222,32 +226,38 @@ static u32 get_memory_end(const void *fdt)
get_cell_sizes(fdt, 0, &address_cells, &size_cells);
- /* - * Now find the 'reg' property of the /memory node, and iterate over - * the base/size pairs. - */ - len = 0; - reg = fdt_getprop(fdt, mem_node, "reg", &len); - while (len >= 4 * (address_cells + size_cells)) { - u64 base, size; - - base = fdt32_to_cpu(reg[0]); - if (address_cells == 2) - base = (base << 32) | fdt32_to_cpu(reg[1]); + while(mem_node >= 0) { + /* + * Now find the 'reg' property of the /memory node, and iterate over + * the base/size pairs. + */ + len = 0; + reg = fdt_getprop(fdt, mem_node, "reg", &len); + while (len >= 4 * (address_cells + size_cells)) { + u64 base, size; + base = fdt32_to_cpu(reg[0]); + if (address_cells == 2) + base = (base << 32) | fdt32_to_cpu(reg[1]);
- reg += address_cells; - len -= 4 * address_cells; + reg += address_cells; + len -= 4 * address_cells;
- size = fdt32_to_cpu(reg[0]); - if (size_cells == 2) - size = (size << 32) | fdt32_to_cpu(reg[1]); + size = fdt32_to_cpu(reg[0]); + if (size_cells == 2) + size = (size << 32) | fdt32_to_cpu(reg[1]);
- reg += size_cells; - len -= 4 * size_cells; + reg += size_cells; + len -= 4 * size_cells;
- memory_end = max(memory_end, base + size); + /* Get the base and size of the zimage memory node */ + if (zimage_start >= base && zimage_start < base + size) + return base + size; + } + /* If current memory node is not the one zImage exists, then traverse next memory node. */ + mem_node = fdt_node_offset_by_prop_value(fdt, mem_node, "device_type", "memory", sizeof("memory")); } - return min(memory_end, (u64)U32_MAX); + + return 0; }
static char *__strstr(const char *s1, const char *s2, int l2) @@ -400,8 +410,12 @@ u32 kaslr_early_init(u32 *kaslr_offset, u32 image_base, u32 image_size, } }
- /* check the memory nodes for the size of the lowmem region */ - mem_fdt = get_memory_end(fdt); + /* + * check the memory nodes for the size of the lowmem region, traverse + * all memory nodes to find the node in which zImage exists, we + * randomize kernel only in the one zImage exists. + */ + mem_fdt = get_memory_end(fdt, zimage_start); if (mem_fdt) regions.pa_end = min(regions.pa_end, mem_fdt) - regions.image_size; else
From: Cui GaoSheng cuigaosheng1@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
------------------------------------------------------------------------
Use the adr_l macro instead of the adr instruction for symbol references, because adr's PC-relative addressing has range restrictions.
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/boot/compressed/head.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S index f3a99e6aef19..028b3f3fdb50 100644 --- a/arch/arm/boot/compressed/head.S +++ b/arch/arm/boot/compressed/head.S @@ -154,7 +154,7 @@ * in little-endian form. */ .macro get_inflated_image_size, res:req, tmp1:req, tmp2:req - adr \res, .Linflated_image_size_offset + adr_l \res, .Linflated_image_size_offset ldr \tmp1, [\res] add \tmp1, \tmp1, \res @ address of inflated image size
@@ -348,7 +348,7 @@ not_angel: orrcc r4, r4, #1 @ remember we skipped cache_on blcs cache_on
-restart: adr r0, LC1 +restart: adr_l r0, LC1 ldr sp, [r0] ldr r6, [r0, #4] add sp, sp, r0
From: Cui GaoSheng cuigaosheng1@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
------------------------------------------------------------------------
The .bss section is cleared when the kernel starts; since the __kaslr_offset variable was located in .bss, it was reset to zero. So move __kaslr_offset from the .bss section to the .data section.
Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/kernel/head.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S index fd37084d80ba..ba813f84fb09 100644 --- a/arch/arm/kernel/head.S +++ b/arch/arm/kernel/head.S @@ -127,11 +127,11 @@ ENTRY(stext) #ifdef CONFIG_RANDOMIZE_BASE str_l r3, __kaslr_offset, r9 @ offset in r3 if entered via kaslr ep
- .section ".bss", "aw", %nobits + .pushsection .data @ data in bss will be cleared .align 2 ENTRY(__kaslr_offset) .long 0 @ will be wiped before entering C code - .previous + .popsection #endif
#ifdef CONFIG_ARM_VIRT_EXT
From: Ye Bin yebin10@huawei.com
hulk inclusion category: feature feature: ARM kaslr support bugzilla: https://gitee.com/openeuler/kernel/issues/I8KNA9 CVE: NA
-----------------------------------------------
When we configure CONFIG_RANDOMIZE_BASE we find that: [XX]$arm-linux-gnueabihf-readelf -s ./arch/arm/vdso/vdso.so Symbol table '.dynsym' contains 5 entries: Num: Value Size Type Bind Vis Ndx Name 0: 00000000 0 NOTYPE LOCAL DEFAULT UND 1: 00000278 0 SECTION LOCAL DEFAULT 8 2: 00000000 0 OBJECT GLOBAL DEFAULT ABS LINUX_2.6
The __vdso_gettimeofday and __vdso_clock_gettime symbols are missing, so calls to clock_gettime() and gettimeofday() fall back to system calls. This results in performance degradation.
Conflicts: arch/arm/vdso/vgettimeofday.c
Signed-off-by: Ye Bin yebin10@huawei.com Reviewed-by: Jason Yan yanaijie@huawei.com Signed-off-by: yangerkun yangerkun@huawei.com Signed-off-by: Cui GaoSheng cuigaosheng1@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Felix Fu fuzhen5@huawei.com --- arch/arm/vdso/vgettimeofday.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c index a003beacac76..3842e5fbc196 100644 --- a/arch/arm/vdso/vgettimeofday.c +++ b/arch/arm/vdso/vgettimeofday.c @@ -4,6 +4,11 @@ * * Copyright 2015 Mentor Graphics Corporation. */ + +#ifdef CONFIG_RANDOMIZE_BASE +#pragma GCC visibility pop +#endif + #include <linux/time.h> #include <linux/types.h> #include <asm/vdso.h>