Backport the ILP32 for ARM64 patch series from openEuler-23.09 to OLK-6.6.
Andrew Pinski (3):
  arm64: rename COMPAT to AARCH32_EL0
  arm64: uapi: set __BITS_PER_LONG correctly for ILP32 and LP64
  arm64:ilp32: add ARM64_ILP32 to Kconfig

Chen Jiahao (4):
  arm64: fix AUDIT_ARCH_AARCH64ILP32 bug on audit subsystem
  arm64: fix address limit problem with TASK_SIZE_MAX
  arm64: set 32-bit compatible TASK_SIZE_MAX to fix U32 libc_write_01 error
  arm64: fix image size inflation with CONFIG_COMPAT_TASK_SIZE

Dave Martin (1):
  arm64: signal: Make parse_user_sigframe() independent of rt_sigframe layout

James Morse (1):
  ptrace: Add compat PTRACE_{G,S}ETSIGMASK handlers

Philipp Tomsich (1):
  arm64:ilp32: add vdso-ilp32 and use for signal return

Xiongfeng Wang (6):
  arm64: rename functions that reference compat term
  arm64: secomp: fix the secure computing mode 1 syscall check for ilp32
  ilp32: avoid clearing upper 32 bits of syscall return value for ilp32
  ilp32: skip ARM erratum 1418040 for ilp32 application
  arm64: fix abi change caused by ILP32
  ilp32: fix compile problem when ARM64_ILP32 and UBSAN are both enabled

Yury Norov (14):
  thread: move thread bits accessors to separated file
  arm64: ilp32: add documentation on the ILP32 ABI for ARM64
  arm64: introduce is_a32_compat_{task,thread} for AArch32 compat
  arm64: ilp32: add is_ilp32_compat_{task,thread} and TIF_32BIT_AARCH64
  arm64: introduce AUDIT_ARCH_AARCH64ILP32 for ilp32
  arm64: introduce binfmt_elf32.c
  arm64: change compat_elf_hwcap and compat_elf_hwcap2 prefix to a32
  arm64: ilp32: introduce binfmt_ilp32.c
  arm64: ilp32: share aarch32 syscall handlers
  arm64: ilp32: introduce syscall table for ILP32
  arm64: signal: share lp64 signal structures and routines to ilp32
  arm64: signal32: move ilp32 and aarch32 common code to separated file
  arm64: ilp32: introduce ilp32-specific sigframe and ucontext
  arm64: ptrace: handle ptrace_request differently for aarch32 and ilp32

Zhen Lei (1):
  arm64: replace is_compat_task() with is_ilp32_compat_task() in TASK_SIZE_MAX
 Documentation/arm64/ilp32.txt | 52 +++
 arch/arm64/Kconfig | 26 +-
 arch/arm64/Makefile | 3 +
 arch/arm64/include/asm/arch_timer.h | 4 +-
 arch/arm64/include/asm/compat.h | 18 +-
 arch/arm64/include/asm/elf.h | 32 +-
 arch/arm64/include/asm/fpsimd.h | 2 +-
 arch/arm64/include/asm/ftrace.h | 2 +-
 arch/arm64/include/asm/hwcap.h | 8 +-
 arch/arm64/include/asm/is_compat.h | 83 ++++
 arch/arm64/include/asm/processor.h | 24 +-
 arch/arm64/include/asm/ptrace.h | 16 +-
 arch/arm64/include/asm/seccomp.h | 32 +-
 arch/arm64/include/asm/signal32.h | 31 +-
 arch/arm64/include/asm/signal32_common.h | 13 +
 arch/arm64/include/asm/signal_common.h | 391 +++++++++++++++++
 arch/arm64/include/asm/signal_ilp32.h | 23 +
 arch/arm64/include/asm/syscall.h | 14 +-
 arch/arm64/include/asm/thread_info.h | 4 +-
 arch/arm64/include/asm/unistd.h | 5 +
 arch/arm64/include/asm/vdso.h | 9 +
 arch/arm64/include/uapi/asm/bitsperlong.h | 9 +-
 arch/arm64/include/uapi/asm/unistd.h | 15 +-
 arch/arm64/kernel/Makefile | 8 +-
 arch/arm64/kernel/armv8_deprecated.c | 14 +-
 arch/arm64/kernel/asm-offsets.c | 13 +-
 arch/arm64/kernel/binfmt_elf32.c | 27 ++
 arch/arm64/kernel/binfmt_ilp32.c | 89 ++++
 arch/arm64/kernel/compat_alignment.c | 2 +-
 arch/arm64/kernel/cpufeature.c | 36 +-
 arch/arm64/kernel/cpuinfo.c | 18 +-
 arch/arm64/kernel/debug-monitors.c | 4 +-
 arch/arm64/kernel/entry-common.c | 6 +-
 arch/arm64/kernel/hw_breakpoint.c | 8 +-
 arch/arm64/kernel/perf_callchain.c | 28 +-
 arch/arm64/kernel/perf_regs.c | 4 +-
 arch/arm64/kernel/process.c | 15 +-
 arch/arm64/kernel/proton-pack.c | 2 +-
 arch/arm64/kernel/ptrace.c | 38 +-
 arch/arm64/kernel/signal.c | 397 +++---------------
 arch/arm64/kernel/signal32.c | 97 ++---
 arch/arm64/kernel/signal32_common.c | 37 ++
 arch/arm64/kernel/signal_ilp32.c | 67 +++
 arch/arm64/kernel/sys32.c | 104 +----
 arch/arm64/kernel/sys32_common.c | 106 +++++
 arch/arm64/kernel/sys_compat.c | 12 +-
 arch/arm64/kernel/sys_ilp32.c | 82 ++++
 arch/arm64/kernel/syscall.c | 37 +-
 arch/arm64/kernel/traps.c | 5 +-
 arch/arm64/kernel/vdso-ilp32/.gitignore | 2 +
 arch/arm64/kernel/vdso-ilp32/Makefile | 111 +++++
 arch/arm64/kernel/vdso-ilp32/vdso-ilp32.S | 22 +
 arch/arm64/kernel/vdso-ilp32/vdso-ilp32.lds.S | 88 ++++
 arch/arm64/kernel/vdso.c | 46 +-
 arch/arm64/mm/fault.c | 2 +-
 include/linux/sched.h | 1 +
 include/linux/thread_bits.h | 87 ++++
 include/linux/thread_info.h | 73 +---
 kernel/ptrace.c | 52 ++-
 59 files changed, 1790 insertions(+), 766 deletions(-)
 create mode 100644 Documentation/arm64/ilp32.txt
 create mode 100644 arch/arm64/include/asm/is_compat.h
 create mode 100644 arch/arm64/include/asm/signal32_common.h
 create mode 100644 arch/arm64/include/asm/signal_common.h
 create mode 100644 arch/arm64/include/asm/signal_ilp32.h
 create mode 100644 arch/arm64/kernel/binfmt_elf32.c
 create mode 100644 arch/arm64/kernel/binfmt_ilp32.c
 create mode 100644 arch/arm64/kernel/signal32_common.c
 create mode 100644 arch/arm64/kernel/signal_ilp32.c
 create mode 100644 arch/arm64/kernel/sys32_common.c
 create mode 100644 arch/arm64/kernel/sys_ilp32.c
 create mode 100644 arch/arm64/kernel/vdso-ilp32/.gitignore
 create mode 100644 arch/arm64/kernel/vdso-ilp32/Makefile
 create mode 100644 arch/arm64/kernel/vdso-ilp32/vdso-ilp32.S
 create mode 100644 arch/arm64/kernel/vdso-ilp32/vdso-ilp32.lds.S
 create mode 100644 include/linux/thread_bits.h
From: Dave Martin <Dave.Martin@arm.com>
maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3
CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
ILP32 uses the same struct sigcontext as the native ABI (i.e., LP64), but a different layout for the rest of the signal frame (since siginfo_t and ucontext_t are both ABI-dependent).
Since the purpose of parse_user_sigframe() is really to parse sigcontext and not the whole signal frame, the function does not need to depend on the layout of rt_sigframe -- the only purpose of the rt_sigframe pointer is for use as a base to measure the signal frame size.
So, this patch renames the function to make it clear that only the sigcontext is really being parsed, and makes the sigframe base pointer generic. A macro is defined to provide a suitable duck-typed interface that can be used with both sigframe definitions.
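For illustration only, the following standalone C sketch shows the duck-typed interface described above: the macro only requires that a frame type expose uc.uc_mcontext, so the same parser can be pointed at the native rt_sigframe or at a hypothetical ILP32 frame with a different surrounding layout. The struct definitions below are simplified stand-ins invented for the example, not the kernel's.

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's sigcontext */
struct sigcontext { unsigned long fault_address; unsigned long regs[31]; };

/* LP64-style frame: ucontext embeds sigcontext directly */
struct rt_sigframe {
	struct { struct sigcontext uc_mcontext; } uc;
};

/* Hypothetical ILP32 frame: different surrounding layout, same sigcontext */
struct ilp32_rt_sigframe {
	unsigned int siginfo_words[4];  /* placeholder for a compat siginfo */
	struct { unsigned int uc_flags; struct sigcontext uc_mcontext; } uc;
};

/* Mirrors the patch: take the sigcontext and the frame base separately */
static int __parse_user_sigcontext(const struct sigcontext *sc, const void *base)
{
	printf("sigcontext at offset %td from frame base\n",
	       (const char *)sc - (const char *)base);
	return 0;
}

/* Duck-typed wrapper: any frame type with uc.uc_mcontext will do */
#define parse_user_sigcontext(sf) \
	__parse_user_sigcontext(&(sf)->uc.uc_mcontext, (sf))

int main(void)
{
	struct rt_sigframe lp64 = { 0 };
	struct ilp32_rt_sigframe ilp32 = { 0 };

	parse_user_sigcontext(&lp64);   /* works for the native frame ... */
	parse_user_sigcontext(&ilp32);  /* ... and for the hypothetical ILP32 one */
	return 0;
}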
Suggested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Yury Norov <ynorov@marvell.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/kernel/signal.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 0e8beb3349ea..bbd316f7bef6 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -574,16 +574,16 @@ extern int restore_zt_context(struct user_ctxs *user);
#endif /* ! CONFIG_ARM64_SME */
-static int parse_user_sigframe(struct user_ctxs *user, - struct rt_sigframe __user *sf) +static int __parse_user_sigcontext(struct user_ctxs *user, + struct sigcontext __user const *sc, + void __user const *sigframe_base) { - struct sigcontext __user *const sc = &sf->uc.uc_mcontext; struct _aarch64_ctx __user *head; char __user *base = (char __user *)&sc->__reserved; size_t offset = 0; size_t limit = sizeof(sc->__reserved); bool have_extra_context = false; - char const __user *const sfp = (char const __user *)sf; + char const __user *const sfp = (char const __user *)sigframe_base;
user->fpsimd = NULL; user->sve = NULL; @@ -766,6 +766,9 @@ static int parse_user_sigframe(struct user_ctxs *user, return -EINVAL; }
+#define parse_user_sigcontext(user, sf) \ + __parse_user_sigcontext(user, &(sf)->uc.uc_mcontext, sf) + static int restore_sigframe(struct pt_regs *regs, struct rt_sigframe __user *sf) { @@ -791,7 +794,7 @@ static int restore_sigframe(struct pt_regs *regs,
err |= !valid_user_regs(®s->user_regs, current); if (err == 0) - err = parse_user_sigframe(&user, sf); + err = parse_user_sigcontext(&user, sf);
if (err == 0 && system_supports_fpsimd()) { if (!user.fpsimd)
From: James Morse <james.morse@arm.com>
maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3
CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
compat_ptrace_request() lacks handlers for PTRACE_{G,S}ETSIGMASK and instead falls through to those in ptrace_request(). The compat variant should read a compat_sigset_t from userspace instead of ptrace_request()'s sigset_t.
While compat_sigset_t is the same size as sigset_t, it is defined as 2xu32, instead of a single u64. On a big-endian CPU this means that compat_sigset_t is passed to user-space using middle-endianness, where the least-significant u32 is written most significant byte first.
If ptrace_request()'s code is used, userspace will read the most significant u32 where it expected the least significant.
Instead of duplicating ptrace_request()'s code as a special case in the arch code, handle it here.
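A standalone (non-kernel) sketch of the word-ordering problem described above, using simplified stand-ins for sigset_t and compat_sigset_t: the raw copy that ptrace_request() effectively performs and the explicit low/high split that put_compat_sigset() provides agree on a little-endian host but disagree on a big-endian one.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for the kernel types discussed above */
typedef struct { uint64_t sig[1]; } ksigset_demo_t;       /* native sigset_t: 1 x u64 */
typedef struct { uint32_t sig[2]; } compat_sigset_demo_t; /* compat sigset_t: 2 x u32 */

int main(void)
{
	ksigset_demo_t native = { .sig = { 0x00000001000080ccULL } };
	compat_sigset_demo_t compat;

	/* What ptrace_request()'s path effectively does: a raw 8-byte copy.
	 * On big-endian this puts the *high* word into sig[0]. */
	memcpy(&compat, &native, sizeof(compat));
	printf("raw copy : sig[0]=%#x sig[1]=%#x\n", compat.sig[0], compat.sig[1]);

	/* What the compat conversion guarantees: sig[0] is always the low word */
	compat.sig[0] = (uint32_t)native.sig[0];
	compat.sig[1] = (uint32_t)(native.sig[0] >> 32);
	printf("converted: sig[0]=%#x sig[1]=%#x\n", compat.sig[0], compat.sig[1]);
	return 0;
}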
Fixes: 29000caecbe8 ("ptrace: add ability to get/set signal-blocked mask")
CC: Andrey Vagin <avagin@openvz.org>
Signed-off-by: James Morse <james.morse@arm.com>
Yury: Replace sigset_{to,from}_compat() with new {get,put}_compat_sigset()

Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Yury Norov <ynorov@marvell.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Patchwork Links: http://patchwork.huawei.com/project/hulk5.10/list/?series=12937
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 kernel/ptrace.c | 52 ++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 38 insertions(+), 14 deletions(-)
diff --git a/kernel/ptrace.c b/kernel/ptrace.c index 443057bee87c..a8682842c840 100644 --- a/kernel/ptrace.c +++ b/kernel/ptrace.c @@ -1028,6 +1028,24 @@ ptrace_get_syscall_info(struct task_struct *child, unsigned long user_size, } #endif /* CONFIG_HAVE_ARCH_TRACEHOOK */
+static int ptrace_setsigmask(struct task_struct *child, sigset_t *new_set) +{ + sigdelsetmask(new_set, sigmask(SIGKILL)|sigmask(SIGSTOP)); + + /* + * Every thread does recalc_sigpending() after resume, so + * retarget_shared_pending() and recalc_sigpending() are not + * called here. + */ + spin_lock_irq(&child->sighand->siglock); + child->blocked = *new_set; + spin_unlock_irq(&child->sighand->siglock); + + clear_tsk_restore_sigmask(child); + + return 0; +} + int ptrace_request(struct task_struct *child, long request, unsigned long addr, unsigned long data) { @@ -1106,20 +1124,7 @@ int ptrace_request(struct task_struct *child, long request, break; }
- sigdelsetmask(&new_set, sigmask(SIGKILL)|sigmask(SIGSTOP)); - - /* - * Every thread does recalc_sigpending() after resume, so - * retarget_shared_pending() and recalc_sigpending() are not - * called here. - */ - spin_lock_irq(&child->sighand->siglock); - child->blocked = new_set; - spin_unlock_irq(&child->sighand->siglock); - - clear_tsk_restore_sigmask(child); - - ret = 0; + ret = ptrace_setsigmask(child, &new_set); break; }
@@ -1341,6 +1346,7 @@ int compat_ptrace_request(struct task_struct *child, compat_long_t request, { compat_ulong_t __user *datap = compat_ptr(data); compat_ulong_t word; + sigset_t new_set; kernel_siginfo_t siginfo; int ret;
@@ -1380,6 +1386,24 @@ int compat_ptrace_request(struct task_struct *child, compat_long_t request, if (!ret) ret = ptrace_setsiginfo(child, &siginfo); break; + case PTRACE_GETSIGMASK: + if (addr != sizeof(compat_sigset_t)) + return -EINVAL; + + ret = put_compat_sigset((compat_sigset_t __user *) datap, + &child->blocked, sizeof(compat_sigset_t)); + break; + case PTRACE_SETSIGMASK: + if (addr != sizeof(compat_sigset_t)) + return -EINVAL; + + ret = get_compat_sigset(&new_set, + (compat_sigset_t __user *) datap); + if (ret) + break; + + ret = ptrace_setsigmask(child, &new_set); + break; #ifdef CONFIG_HAVE_ARCH_TRACEHOOK case PTRACE_GETREGSET: case PTRACE_SETREGSET:
From: Yury Norov <ynorov@caviumnetworks.com>
maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3
CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
The thread flag accessors may be needed by low-level code, so isolating them in a separate header avoids circular dependencies between header files.
The exact reason for the circular dependency is the WARN_ON() macro added in commit edd63a27 ("set_restore_sigmask() is never called without SIGPENDING (and never should be)").
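As a rough standalone sketch of what the moved accessors do (atomic bit operations on the thread_info flags word), with __atomic builtins standing in for the kernel's set_bit()/test_bit() and illustrative TIF_* values; code that only needs these helpers can now include <linux/thread_bits.h> without pulling in the rest of <linux/thread_info.h>.

#include <stdio.h>

/* Minimal stand-in for struct thread_info; only the flags word matters here */
struct thread_info_demo { unsigned long flags; };

#define TIF_SIGPENDING 0
#define TIF_32BIT      22   /* illustrative values, not the arm64 definitions */

/* Same shape as the moved accessors: atomic bit ops on ti->flags */
static inline void set_ti_flag(struct thread_info_demo *ti, int flag)
{
	__atomic_fetch_or(&ti->flags, 1UL << flag, __ATOMIC_RELAXED);
}

static inline int test_ti_flag(struct thread_info_demo *ti, int flag)
{
	return (__atomic_load_n(&ti->flags, __ATOMIC_RELAXED) >> flag) & 1;
}

int main(void)
{
	struct thread_info_demo ti = { 0 };

	set_ti_flag(&ti, TIF_SIGPENDING);
	printf("SIGPENDING=%d 32BIT=%d\n",
	       test_ti_flag(&ti, TIF_SIGPENDING), test_ti_flag(&ti, TIF_32BIT));
	return 0;
}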
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Yury Norov <ynorov@marvell.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 include/linux/sched.h | 1 +
 include/linux/thread_bits.h | 87 +++++++++++++++++++++++++++++++++++++
 include/linux/thread_info.h | 73 +------------------------------
 3 files changed, 89 insertions(+), 72 deletions(-)
 create mode 100644 include/linux/thread_bits.h
diff --git a/include/linux/sched.h b/include/linux/sched.h index 77f01ac385f7..4412f8818386 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -38,6 +38,7 @@ #include <linux/rv.h> #include <linux/livepatch_sched.h> #include <asm/kmap_size.h> +#include <linux/thread_bits.h>
/* task_struct member predeclarations (sorted alphabetically): */ struct audit_context; diff --git a/include/linux/thread_bits.h b/include/linux/thread_bits.h new file mode 100644 index 000000000000..0f6fe55744f1 --- /dev/null +++ b/include/linux/thread_bits.h @@ -0,0 +1,87 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* Common low-level thread bits accessors */ + +#ifndef _LINUX_THREAD_BITS_H +#define _LINUX_THREAD_BITS_H + +#ifndef __ASSEMBLY__ + +/* + * For per-arch arch_within_stack_frames() implementations, defined in + * asm/thread_info.h. + */ +enum { + BAD_STACK = -1, + NOT_STACK = 0, + GOOD_FRAME, + GOOD_STACK, +}; + +#ifdef CONFIG_THREAD_INFO_IN_TASK +/* + * For CONFIG_THREAD_INFO_IN_TASK kernels we need <asm/current.h> for the + * definition of current, but for !CONFIG_THREAD_INFO_IN_TASK kernels, + * including <asm/current.h> can cause a circular dependency on some platforms. + */ +#include <asm/current.h> +#define current_thread_info() ((struct thread_info *)current) +#endif + +#include <linux/bitops.h> +#include <asm/thread_info.h> + +/* + * flag set/clear/test wrappers + * - pass TIF_xxxx constants to these functions + */ + +static inline void set_ti_thread_flag(struct thread_info *ti, int flag) +{ + set_bit(flag, (unsigned long *)&ti->flags); +} + +static inline void clear_ti_thread_flag(struct thread_info *ti, int flag) +{ + clear_bit(flag, (unsigned long *)&ti->flags); +} + +static inline void update_ti_thread_flag(struct thread_info *ti, int flag, + bool value) +{ + if (value) + set_ti_thread_flag(ti, flag); + else + clear_ti_thread_flag(ti, flag); +} + +static inline int test_and_set_ti_thread_flag(struct thread_info *ti, int flag) +{ + return test_and_set_bit(flag, (unsigned long *)&ti->flags); +} + +static inline int test_and_clear_ti_thread_flag(struct thread_info *ti, int flag) +{ + return test_and_clear_bit(flag, (unsigned long *)&ti->flags); +} + +static inline int test_ti_thread_flag(struct thread_info *ti, int flag) +{ + return test_bit(flag, (unsigned long *)&ti->flags); +} + +#define set_thread_flag(flag) \ + set_ti_thread_flag(current_thread_info(), flag) +#define clear_thread_flag(flag) \ + clear_ti_thread_flag(current_thread_info(), flag) +#define update_thread_flag(flag, value) \ + update_ti_thread_flag(current_thread_info(), flag, value) +#define test_and_set_thread_flag(flag) \ + test_and_set_ti_thread_flag(current_thread_info(), flag) +#define test_and_clear_thread_flag(flag) \ + test_and_clear_ti_thread_flag(current_thread_info(), flag) +#define test_thread_flag(flag) \ + test_ti_thread_flag(current_thread_info(), flag) + +#endif /* !__ASSEMBLY__ */ +#endif /* _LINUX_THREAD_BITS_H */ diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h index 9ea0b28068f4..74d9fe3609bc 100644 --- a/include/linux/thread_info.h +++ b/include/linux/thread_info.h @@ -13,30 +13,10 @@ #include <linux/bug.h> #include <linux/restart_block.h> #include <linux/errno.h> - -#ifdef CONFIG_THREAD_INFO_IN_TASK -/* - * For CONFIG_THREAD_INFO_IN_TASK kernels we need <asm/current.h> for the - * definition of current, but for !CONFIG_THREAD_INFO_IN_TASK kernels, - * including <asm/current.h> can cause a circular dependency on some platforms. - */ -#include <asm/current.h> -#define current_thread_info() ((struct thread_info *)current) -#endif +#include <linux/thread_bits.h>
#include <linux/bitops.h>
-/* - * For per-arch arch_within_stack_frames() implementations, defined in - * asm/thread_info.h. - */ -enum { - BAD_STACK = -1, - NOT_STACK = 0, - GOOD_FRAME, - GOOD_STACK, -}; - #ifdef CONFIG_GENERIC_ENTRY enum syscall_work_bit { SYSCALL_WORK_BIT_SECCOMP, @@ -79,45 +59,6 @@ static inline long set_restart_fn(struct restart_block *restart,
#define THREADINFO_GFP (GFP_KERNEL_ACCOUNT | __GFP_ZERO)
-/* - * flag set/clear/test wrappers - * - pass TIF_xxxx constants to these functions - */ - -static inline void set_ti_thread_flag(struct thread_info *ti, int flag) -{ - set_bit(flag, (unsigned long *)&ti->flags); -} - -static inline void clear_ti_thread_flag(struct thread_info *ti, int flag) -{ - clear_bit(flag, (unsigned long *)&ti->flags); -} - -static inline void update_ti_thread_flag(struct thread_info *ti, int flag, - bool value) -{ - if (value) - set_ti_thread_flag(ti, flag); - else - clear_ti_thread_flag(ti, flag); -} - -static inline int test_and_set_ti_thread_flag(struct thread_info *ti, int flag) -{ - return test_and_set_bit(flag, (unsigned long *)&ti->flags); -} - -static inline int test_and_clear_ti_thread_flag(struct thread_info *ti, int flag) -{ - return test_and_clear_bit(flag, (unsigned long *)&ti->flags); -} - -static inline int test_ti_thread_flag(struct thread_info *ti, int flag) -{ - return test_bit(flag, (unsigned long *)&ti->flags); -} - /* * This may be used in noinstr code, and needs to be __always_inline to prevent * inadvertent instrumentation. @@ -127,18 +68,6 @@ static __always_inline unsigned long read_ti_thread_flags(struct thread_info *ti return READ_ONCE(ti->flags); }
-#define set_thread_flag(flag) \ - set_ti_thread_flag(current_thread_info(), flag) -#define clear_thread_flag(flag) \ - clear_ti_thread_flag(current_thread_info(), flag) -#define update_thread_flag(flag, value) \ - update_ti_thread_flag(current_thread_info(), flag, value) -#define test_and_set_thread_flag(flag) \ - test_and_set_ti_thread_flag(current_thread_info(), flag) -#define test_and_clear_thread_flag(flag) \ - test_and_clear_ti_thread_flag(current_thread_info(), flag) -#define test_thread_flag(flag) \ - test_ti_thread_flag(current_thread_info(), flag) #define read_thread_flags() \ read_ti_thread_flags(current_thread_info())
From: Yury Norov <ynorov@caviumnetworks.com>
maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3
CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
Based on Andrew Pinski's patch series.
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Yury Norov <ynorov@marvell.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 Documentation/arm64/ilp32.txt | 52 +++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)
 create mode 100644 Documentation/arm64/ilp32.txt
diff --git a/Documentation/arm64/ilp32.txt b/Documentation/arm64/ilp32.txt
new file mode 100644
index 000000000000..5f01a61c92af
--- /dev/null
+++ b/Documentation/arm64/ilp32.txt
@@ -0,0 +1,52 @@
+ILP32 AARCH64 SYSCALL ABI
+=========================
+
+This document describes the ILP32 syscall ABI and where it differs
+from the generic compat linux syscall interface.
+
+ILP32 is an acronym for the memory model in which "Integers, Longs and
+Pointers are 32-bit". The main purpose of ILP32 in the Linux kernel is providing
+compatibility with 32-bit legacy code. Also, ILP32 binaries perform better in some
+benchmarks. ARM's AN490 document covers ILP32 details for the ARM64
+platform:
+http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0490a/ar01s01...
+
+AARCH64/ILP32 userspace may pass garbage in the top half of the w0-w7 registers
+(syscall arguments), so the top 32 bits are zeroed for them.
+
+Compared to AARCH32, AARCH64/ILP32 uses a 64-bit length for the following types:
+ino_t is u64 type.
+off_t is s64 type.
+blkcnt_t is s64 type.
+fsblkcnt_t is u64 type.
+fsfilcnt_t is u64 type.
+rlim_t is u64 type.
+
+The AARCH64/ILP32 ABI uses the standard syscall table, which can be found at
+include/uapi/asm-generic/unistd.h, with the exceptions listed below.
+
+Syscalls which pass 64-bit values are handled by the code shared with
+AARCH32 and pass that value as a pair. The following syscalls are affected:
+fadvise64_64()
+fallocate()
+ftruncate64()
+pread64()
+pwrite64()
+readahead()
+sync_file_range()
+truncate64()
+
+The ptrace() syscall is handled by the compat version.
+
+The shmat() syscall is handled by the non-compat handler, as aarch64/ilp32 has no
+4-page alignment limitation for shared memory.
+
+statfs() and fstatfs() take the size of struct statfs as an argument.
+It is calculated differently in kernel and user space, so the AARCH32 handlers
+are used to handle it.
+
+struct rt_sigframe is redefined and contains struct compat_siginfo,
+as compat syscalls expect, and struct ilp32_ucontext, to handle
+the AARCH64 register set and the 32-bit userspace register representation.
+
+elf_gregset_t is taken from lp64 to handle registers properly.
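As a standalone illustration of the "pass 64-bit values as a pair" convention mentioned in the document above; the exact lo/hi argument order is ABI- and endianness-dependent, and the helper name below is invented for the example, not a kernel function.

#include <stdio.h>
#include <stdint.h>

/* Reassemble a 64-bit syscall argument from two 32-bit halves, as a shared
 * "pair" handler would. Only the merge itself is shown here. */
static uint64_t merge_u64(uint32_t lo, uint32_t hi)
{
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	/* e.g. a 6 GiB file offset split by a 32-bit libc into two registers */
	uint64_t off = 6ULL << 30;
	uint32_t lo = (uint32_t)off, hi = (uint32_t)(off >> 32);

	printf("merged offset = %llu\n", (unsigned long long)merge_u64(lo, hi));
	return 0;
}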
From: Andrew Pinski <apinski@cavium.com>
maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3
CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
This patchset adds ILP32 ABI support. In addition to AARCH32, which is binary-compatible with ARM, ILP32 is (mostly) ABI-compatible.
From now on, the AARCH32_EL0 config option (formerly COMPAT) means support for AArch32 userspace, and ARM64_ILP32 means support for the ILP32 ABI (see the following patches). COMPAT indicates that one or both of them are enabled.
Where needed, CONFIG_COMPAT is replaced with CONFIG_AARCH32_EL0.
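A host-side sketch of the intended relationship between the config symbols as described above; treating COMPAT as a derived symbol that is set whenever at least one 32-bit ABI is enabled is an assumption based on this commit text, not the final Kconfig wording.

#include <stdio.h>

#define CONFIG_AARCH32_EL0 1       /* AArch32 EL0 support, the renamed COMPAT      */
/* #define CONFIG_ARM64_ILP32 1 */ /* ILP32 support, added later in the series     */

#if defined(CONFIG_AARCH32_EL0) || defined(CONFIG_ARM64_ILP32)
#define CONFIG_COMPAT 1            /* "one or both of them are enabled"            */
#endif

int main(void)
{
#ifdef CONFIG_COMPAT
	printf("COMPAT set: generic 32-bit compat paths are built\n");
#endif
#ifdef CONFIG_AARCH32_EL0
	printf("AARCH32_EL0 set: AArch32-specific paths are built\n");
#endif
	return 0;
}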
Reviewed-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Andrew Pinski <Andrew.Pinski@caviumnetworks.com>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Philipp Tomsich <philipp.tomsich@theobroma-systems.com>
Signed-off-by: Christoph Muellner <christoph.muellner@theobroma-systems.com>
Signed-off-by: Bamvor Jian Zhang <bamv2005@gmail.com>
Signed-off-by: Yury Norov <ynorov@marvell.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Conflicts:
	arch/arm64/Kconfig
	arch/arm64/kernel/Makefile
	arch/arm64/kernel/cpuinfo.c
[ruanjinjie: simple context conflicts]
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/Kconfig | 11 ++++++++---
 arch/arm64/include/asm/arch_timer.h | 2 +-
 arch/arm64/include/asm/fpsimd.h | 2 +-
 arch/arm64/include/asm/hwcap.h | 4 ++--
 arch/arm64/include/asm/processor.h | 4 ++--
 arch/arm64/include/asm/ptrace.h | 2 +-
 arch/arm64/include/asm/seccomp.h | 2 +-
 arch/arm64/include/asm/signal32.h | 4 ++--
 arch/arm64/include/asm/syscall.h | 2 +-
 arch/arm64/include/asm/unistd.h | 2 +-
 arch/arm64/kernel/Makefile | 4 ++--
 arch/arm64/kernel/asm-offsets.c | 2 +-
 arch/arm64/kernel/cpufeature.c | 10 +++++-----
 arch/arm64/kernel/cpuinfo.c | 8 ++++----
 arch/arm64/kernel/entry-common.c | 6 +++---
 arch/arm64/kernel/perf_callchain.c | 6 +++---
 arch/arm64/kernel/ptrace.c | 10 ++++++----
 arch/arm64/kernel/syscall.c | 4 ++--
 arch/arm64/kernel/vdso.c | 4 ++--
 19 files changed, 48 insertions(+), 41 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 78f20e632712..23ac6dbf3856 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -583,7 +583,7 @@ config ARM64_ERRATUM_1742098
config ARM64_ERRATUM_845719 bool "Cortex-A53: 845719: a load might read incorrect data" - depends on COMPAT + depends on AARCH32_EL0 default y help This option adds an alternative code sequence to work around ARM @@ -1595,7 +1595,7 @@ config ARM64_TAGGED_ADDR_ABI to system calls as pointer arguments. For details, see Documentation/arch/arm64/tagged-address-abi.rst.
-menuconfig COMPAT +menuconfig AARCH32_EL0 bool "Kernel support for 32-bit EL0" depends on ARM64_4K_PAGES || EXPERT select HAVE_UID16 @@ -1613,7 +1613,7 @@ menuconfig COMPAT
If you want to execute 32-bit userspace applications, say Y.
-if COMPAT +if AARCH32_EL0
config KUSER_HELPERS bool "Enable kuser helpers page for 32-bit applications" @@ -1669,6 +1669,7 @@ config COMPAT_ALIGNMENT_FIXUPS
menuconfig ARMV8_DEPRECATED bool "Emulate deprecated/obsolete ARMv8 instructions" + depends on AARCH32_EL0 depends on SYSCTL help Legacy software support may require certain instructions @@ -2285,6 +2286,10 @@ config DMI
endmenu # "Boot options"
+config COMPAT + def_bool y + depends on AARCH32_EL0 + menu "Power management options"
source "kernel/power/Kconfig" diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h index 934c658ee947..fdf5ee2ffa3a 100644 --- a/arch/arm64/include/asm/arch_timer.h +++ b/arch/arm64/include/asm/arch_timer.h @@ -217,7 +217,7 @@ static inline int arch_timer_arch_init(void) static inline void arch_timer_set_evtstrm_feature(void) { cpu_set_named_feature(EVTSTRM); -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM; #endif } diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 8df46f186c64..fbbcaaf376be 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -21,7 +21,7 @@ #include <linux/stddef.h> #include <linux/types.h>
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 /* Masks for extracting the FPSR and FPCR from the FPSCR */ #define VFP_FPSCR_STAT_MASK 0xf800009f #define VFP_FPSCR_CTRL_MASK 0x07f79f00 diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h index 521267478d18..93ee19669b73 100644 --- a/arch/arm64/include/asm/hwcap.h +++ b/arch/arm64/include/asm/hwcap.h @@ -147,7 +147,7 @@ #define ELF_HWCAP cpu_get_elf_hwcap() #define ELF_HWCAP2 cpu_get_elf_hwcap2()
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 #define COMPAT_ELF_HWCAP (compat_elf_hwcap) #define COMPAT_ELF_HWCAP2 (compat_elf_hwcap2) extern unsigned int compat_elf_hwcap, compat_elf_hwcap2; @@ -155,7 +155,7 @@ extern unsigned int compat_elf_hwcap, compat_elf_hwcap2;
enum { CAP_HWCAP = 1, -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 CAP_COMPAT_HWCAP, CAP_COMPAT_HWCAP2, #endif diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index e5bc54522e71..04cf99bf760f 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -256,7 +256,7 @@ static inline void arch_thread_struct_whitelist(unsigned long *offset, *size = sizeof_field(struct thread_struct, uw); }
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 #define task_user_tls(t) \ ({ \ unsigned long *__tls; \ @@ -297,7 +297,7 @@ static inline void start_thread(struct pt_regs *regs, unsigned long pc, regs->sp = sp; }
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc, unsigned long sp) { diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h index 47ec58031f11..1ff9909e5169 100644 --- a/arch/arm64/include/asm/ptrace.h +++ b/arch/arm64/include/asm/ptrace.h @@ -217,7 +217,7 @@ static inline void forget_syscall(struct pt_regs *regs)
#define arch_has_single_step() (1)
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 #define compat_thumb_mode(regs) \ (((regs)->pstate & PSR_AA32_T_BIT)) #else diff --git a/arch/arm64/include/asm/seccomp.h b/arch/arm64/include/asm/seccomp.h index 30256233788b..0f4cc9322eb4 100644 --- a/arch/arm64/include/asm/seccomp.h +++ b/arch/arm64/include/asm/seccomp.h @@ -10,7 +10,7 @@
#include <asm/unistd.h>
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 #define __NR_seccomp_read_32 __NR_compat_read #define __NR_seccomp_write_32 __NR_compat_write #define __NR_seccomp_exit_32 __NR_compat_exit diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h index 7e9f163d02ec..48ba6c7ab53e 100644 --- a/arch/arm64/include/asm/signal32.h +++ b/arch/arm64/include/asm/signal32.h @@ -5,7 +5,7 @@ #ifndef __ASM_SIGNAL32_H #define __ASM_SIGNAL32_H
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 #include <linux/compat.h>
struct compat_sigcontext { @@ -77,5 +77,5 @@ static inline int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t static inline void compat_setup_restart_syscall(struct pt_regs *regs) { } -#endif /* CONFIG_COMPAT */ +#endif /* CONFIG_AARCH32_EL0 */ #endif /* __ASM_SIGNAL32_H */ diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h index ab8e14b96f68..d40b854705db 100644 --- a/arch/arm64/include/asm/syscall.h +++ b/arch/arm64/include/asm/syscall.h @@ -13,7 +13,7 @@ typedef long (*syscall_fn_t)(const struct pt_regs *regs);
extern const syscall_fn_t sys_call_table[];
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 extern const syscall_fn_t compat_sys_call_table[]; #endif
diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h index bd77253b62e0..8de22f4f3a21 100644 --- a/arch/arm64/include/asm/unistd.h +++ b/arch/arm64/include/asm/unistd.h @@ -2,7 +2,7 @@ /* * Copyright (C) 2012 ARM Ltd. */ -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 #define __ARCH_WANT_COMPAT_STAT #define __ARCH_WANT_COMPAT_STAT64 #define __ARCH_WANT_SYS_GETHOSTNAME diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index d95b3d6b471a..8125fe7067c9 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -36,9 +36,9 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \ syscall.o proton-pack.o idreg-override.o idle.o \ patching.o
-obj-$(CONFIG_COMPAT) += sys32.o signal32.o \ +obj-$(CONFIG_AARCH32_EL0) += sys32.o signal32.o \ sys_compat.o -obj-$(CONFIG_COMPAT) += sigreturn32.o +obj-$(CONFIG_AARCH32_EL0) += sigreturn32.o obj-$(CONFIG_COMPAT_ALIGNMENT_FIXUPS) += compat_alignment.o obj-$(CONFIG_KUSER_HELPERS) += kuser32.o obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 5ff1942b04fc..ccb5c28697e9 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -99,7 +99,7 @@ int main(void) DEFINE(FREGS_SIZE, sizeof(struct ftrace_regs)); BLANK(); #endif -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 DEFINE(COMPAT_SIGFRAME_REGS_OFFSET, offsetof(struct compat_sigframe, uc.uc_mcontext.arm_r0)); DEFINE(COMPAT_RT_SIGFRAME_REGS_OFFSET, offsetof(struct compat_rt_sigframe, sig.uc.uc_mcontext.arm_r0)); BLANK(); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 444a73c2e638..a4aa453fed4b 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -95,7 +95,7 @@ /* Kernel representation of AT_HWCAP and AT_HWCAP2 */ static DECLARE_BITMAP(elf_hwcap, MAX_CPU_FEATURES) __read_mostly;
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 #define COMPAT_ELF_HWCAP_DEFAULT \ (COMPAT_HWCAP_HALF|COMPAT_HWCAP_THUMB|\ COMPAT_HWCAP_FAST_MULT|COMPAT_HWCAP_EDSP|\ @@ -2868,7 +2868,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = { {}, };
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 static bool compat_has_neon(const struct arm64_cpu_capabilities *cap, int scope) { /* @@ -2891,7 +2891,7 @@ static bool compat_has_neon(const struct arm64_cpu_capabilities *cap, int scope) #endif
static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = { -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 HWCAP_CAP_MATCH(compat_has_neon, CAP_COMPAT_HWCAP, COMPAT_HWCAP_NEON), HWCAP_CAP(MVFR1_EL1, SIMDFMAC, IMP, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv4), /* Arm v8 mandates MVFR0.FPDP == {0, 2}. So, piggy back on this for the presence of VFP support */ @@ -2920,7 +2920,7 @@ static void cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap) case CAP_HWCAP: cpu_set_feature(cap->hwcap); break; -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 case CAP_COMPAT_HWCAP: compat_elf_hwcap |= (u32)cap->hwcap; break; @@ -2943,7 +2943,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap) case CAP_HWCAP: rc = cpu_have_feature(cap->hwcap); break; -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 case CAP_COMPAT_HWCAP: rc = (compat_elf_hwcap & (u32)cap->hwcap) != 0; break; diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c index 98fda8500535..60ff939c7785 100644 --- a/arch/arm64/kernel/cpuinfo.c +++ b/arch/arm64/kernel/cpuinfo.c @@ -129,7 +129,7 @@ static const char *const hwcap_str[] = { [KERNEL_HWCAP_HBC] = "hbc", };
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 #define COMPAT_KERNEL_HWCAP(x) const_ilog2(COMPAT_HWCAP_ ## x) static const char *const compat_hwcap_str[] = { [COMPAT_KERNEL_HWCAP(SWP)] = "swp", @@ -172,7 +172,7 @@ static const char *const compat_hwcap2_str[] = { [COMPAT_KERNEL_HWCAP2(SB)] = "sb", [COMPAT_KERNEL_HWCAP2(SSBS)] = "ssbs", }; -#endif /* CONFIG_COMPAT */ +#endif /* CONFIG_AARCH32_EL0 */
static int c_show(struct seq_file *m, void *v) { @@ -205,7 +205,7 @@ static int c_show(struct seq_file *m, void *v) */ seq_puts(m, "Features\t:"); if (compat) { -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 for (j = 0; j < ARRAY_SIZE(compat_hwcap_str); j++) { if (compat_elf_hwcap & (1 << j)) { /* @@ -222,7 +222,7 @@ static int c_show(struct seq_file *m, void *v) for (j = 0; j < ARRAY_SIZE(compat_hwcap2_str); j++) if (compat_elf_hwcap2 & (1 << j)) seq_printf(m, " %s", compat_hwcap2_str[j]); -#endif /* CONFIG_COMPAT */ +#endif /* CONFIG_AARCH32_EL0 */ } else { for (j = 0; j < ARRAY_SIZE(hwcap_str); j++) if (cpu_have_feature(j)) diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c index 0fc94207e69a..8a460bffbd0c 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -801,7 +801,7 @@ asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs) __el0_error_handler_common(regs); }
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr) { enter_from_user_mode(regs); @@ -877,12 +877,12 @@ asmlinkage void noinstr el0t_32_error_handler(struct pt_regs *regs) { __el0_error_handler_common(regs); } -#else /* CONFIG_COMPAT */ +#else /* CONFIG_AARCH32_EL0 */ UNHANDLED(el0t, 32, sync) UNHANDLED(el0t, 32, irq) UNHANDLED(el0t, 32, fiq) UNHANDLED(el0t, 32, error) -#endif /* CONFIG_COMPAT */ +#endif /* CONFIG_AARCH32_EL0 */
#ifdef CONFIG_VMAP_STACK asmlinkage void noinstr __noreturn handle_bad_stack(struct pt_regs *regs) diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c index 6d157f32187b..d5cddb150d39 100644 --- a/arch/arm64/kernel/perf_callchain.c +++ b/arch/arm64/kernel/perf_callchain.c @@ -52,7 +52,7 @@ user_backtrace(struct frame_tail __user *tail, return buftail.fp; }
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 /* * The registers we're interested in are at the end of the variable * length saved register structure. The fp points at the end of this @@ -97,7 +97,7 @@ compat_user_backtrace(struct compat_frame_tail __user *tail,
return (struct compat_frame_tail __user *)compat_ptr(buftail.fp) - 1; } -#endif /* CONFIG_COMPAT */ +#endif /* CONFIG_AARCH32_EL0 */
void perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs) @@ -119,7 +119,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry, tail && !((unsigned long)tail & 0x7)) tail = user_backtrace(tail, entry); } else { -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 /* AARCH32 compat mode */ struct compat_frame_tail __user *tail;
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 20d7ef82de90..6d00c4d326b6 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -173,7 +173,7 @@ static void ptrace_hbptriggered(struct perf_event *bp, struct arch_hw_breakpoint *bkpt = counter_arch_bp(bp); const char *desc = "Hardware breakpoint trap (ptrace)";
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 if (is_compat_task()) { int si_errno = 0; int i; @@ -1594,7 +1594,9 @@ static const struct user_regset_view user_aarch64_view = { .regsets = aarch64_regsets, .n = ARRAY_SIZE(aarch64_regsets) };
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 +#include <linux/compat.h> + enum compat_regset { REGSET_COMPAT_GPR, REGSET_COMPAT_VFP, @@ -2108,11 +2110,11 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
return ret; } -#endif /* CONFIG_COMPAT */ +#endif /* CONFIG_AARCH32_EL0 */
const struct user_regset_view *task_user_regset_view(struct task_struct *task) { -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 /* * Core dumping of 32-bit tasks or compat ptrace requests must use the * user_aarch32_view compatible with arm32. Native ptrace requests on diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c index 9a70d9746b66..9e760784426b 100644 --- a/arch/arm64/kernel/syscall.c +++ b/arch/arm64/kernel/syscall.c @@ -20,7 +20,7 @@ long sys_ni_syscall(void);
static long do_ni_syscall(struct pt_regs *regs, int scno) { -#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 long ret; if (is_compat_task()) { ret = compat_arm_syscall(regs, scno); @@ -155,7 +155,7 @@ void do_el0_svc(struct pt_regs *regs) el0_svc_common(regs, regs->regs[8], __NR_syscalls, sys_call_table); }
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 void do_el0_svc_compat(struct pt_regs *regs) { el0_svc_common(regs, regs->regs[7], __NR_compat_syscalls, diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c index d9e1355730ef..47c2eb75f591 100644 --- a/arch/arm64/kernel/vdso.c +++ b/arch/arm64/kernel/vdso.c @@ -231,7 +231,7 @@ static int __setup_additional_pages(enum vdso_abi abi, return PTR_ERR(ret); }
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 /* * Create and map the vectors page for AArch32 tasks. */ @@ -409,7 +409,7 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) mmap_write_unlock(mm); return ret; } -#endif /* CONFIG_COMPAT */ +#endif /* CONFIG_AARCH32_EL0 */
enum aarch64_map { AA64_MAP_VVAR,
From: Xiongfeng Wang <wangxiongfeng2@huawei.com>
maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3
CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
The ILP32 for ARM64 patch series introduces another 'compat' mode in addition to AARCH32_EL0. To avoid confusion, the aarch32-only functions are renamed accordingly.
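A standalone sketch of the naming split implied here and by the later patch titles (is_a32_compat_task()/is_ilp32_compat_task()): helpers that are AArch32-specific gain an a32_ prefix, while the generic compat predicate covers both 32-bit ABIs. The task structure and predicates below are illustrative stand-ins, not the kernel definitions.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative stand-in: which 32-bit ABI, if any, a task uses */
struct demo_task { bool aarch32; bool ilp32; };

static bool is_a32_compat_task(const struct demo_task *t)   { return t->aarch32; }
static bool is_ilp32_compat_task(const struct demo_task *t) { return t->ilp32;  }

/* Generic predicate: true for either 32-bit ABI */
static bool is_compat_task(const struct demo_task *t)
{
	return is_a32_compat_task(t) || is_ilp32_compat_task(t);
}

int main(void)
{
	struct demo_task a32 = { .aarch32 = true }, ilp32 = { .ilp32 = true };

	printf("a32 task: compat=%d  ilp32 task: compat=%d\n",
	       is_compat_task(&a32), is_compat_task(&ilp32));
	return 0;
}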
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Yury Norov <ynorov@marvell.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 arch/arm64/include/asm/ptrace.h | 10 ++--
 arch/arm64/include/asm/signal32.h | 27 +++++-----
 arch/arm64/include/asm/syscall.h | 2 +-
 arch/arm64/kernel/armv8_deprecated.c | 14 +++---
 arch/arm64/kernel/asm-offsets.c | 4 +-
 arch/arm64/kernel/compat_alignment.c | 2 +-
 arch/arm64/kernel/cpufeature.c | 10 ++--
 arch/arm64/kernel/debug-monitors.c | 4 +-
 arch/arm64/kernel/perf_callchain.c | 22 ++++-----
 arch/arm64/kernel/perf_regs.c | 2 +-
 arch/arm64/kernel/process.c | 4 +-
 arch/arm64/kernel/proton-pack.c | 2 +-
 arch/arm64/kernel/signal.c | 8 +--
 arch/arm64/kernel/signal32.c | 74 ++++++++++++++--------------
 arch/arm64/kernel/sys32.c | 2 +-
 arch/arm64/kernel/sys_compat.c | 12 ++---
 arch/arm64/kernel/syscall.c | 6 +--
 arch/arm64/kernel/traps.c | 4 +-
 arch/arm64/mm/fault.c | 2 +-
 19 files changed, 106 insertions(+), 105 deletions(-)
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h index 1ff9909e5169..992b04efe7f8 100644 --- a/arch/arm64/include/asm/ptrace.h +++ b/arch/arm64/include/asm/ptrace.h @@ -218,16 +218,16 @@ static inline void forget_syscall(struct pt_regs *regs) #define arch_has_single_step() (1)
#ifdef CONFIG_AARCH32_EL0 -#define compat_thumb_mode(regs) \ +#define a32_thumb_mode(regs) \ (((regs)->pstate & PSR_AA32_T_BIT)) #else -#define compat_thumb_mode(regs) (0) +#define a32_thumb_mode(regs) (0) #endif
#define user_mode(regs) \ (((regs)->pstate & PSR_MODE_MASK) == PSR_MODE_EL0t)
-#define compat_user_mode(regs) \ +#define a32_user_mode(regs) \ (((regs)->pstate & (PSR_MODE32_BIT | PSR_MODE_MASK)) == \ (PSR_MODE32_BIT | PSR_MODE_EL0t))
@@ -247,7 +247,7 @@ static inline void forget_syscall(struct pt_regs *regs)
static inline unsigned long user_stack_pointer(struct pt_regs *regs) { - if (compat_user_mode(regs)) + if (a32_user_mode(regs)) return regs->compat_sp; return regs->sp; } @@ -327,7 +327,7 @@ static inline unsigned long regs_return_value(struct pt_regs *regs) * syscall_get_return_value(). Apply the same sign-extension here until * audit is updated to use syscall_get_return_value(). */ - if (compat_user_mode(regs)) + if (a32_user_mode(regs)) val = sign_extend64(val, 31);
return val; diff --git a/arch/arm64/include/asm/signal32.h b/arch/arm64/include/asm/signal32.h index 48ba6c7ab53e..dbffee2b4414 100644 --- a/arch/arm64/include/asm/signal32.h +++ b/arch/arm64/include/asm/signal32.h @@ -8,7 +8,7 @@ #ifdef CONFIG_AARCH32_EL0 #include <linux/compat.h>
-struct compat_sigcontext { +struct a32_sigcontext { /* We always set these two fields to 0 */ compat_ulong_t trap_no; compat_ulong_t error_code; @@ -34,47 +34,48 @@ struct compat_sigcontext { compat_ulong_t fault_address; };
-struct compat_ucontext { +struct a32_ucontext { compat_ulong_t uc_flags; compat_uptr_t uc_link; compat_stack_t uc_stack; - struct compat_sigcontext uc_mcontext; + struct a32_sigcontext uc_mcontext; compat_sigset_t uc_sigmask; int __unused[32 - (sizeof(compat_sigset_t) / sizeof(int))]; compat_ulong_t uc_regspace[128] __attribute__((__aligned__(8))); };
-struct compat_sigframe { - struct compat_ucontext uc; +struct a32_sigframe { + struct a32_ucontext uc; compat_ulong_t retcode[2]; };
-struct compat_rt_sigframe { +struct a32_rt_sigframe { struct compat_siginfo info; - struct compat_sigframe sig; + struct a32_sigframe sig; };
-int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set, +int a32_setup_frame(int usig, struct ksignal *ksig, sigset_t *set, struct pt_regs *regs); -int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, + +int a32_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, struct pt_regs *regs);
-void compat_setup_restart_syscall(struct pt_regs *regs); +void a32_setup_restart_syscall(struct pt_regs *regs); #else
-static inline int compat_setup_frame(int usid, struct ksignal *ksig, +static inline int a32_setup_frame(int usid, struct ksignal *ksig, sigset_t *set, struct pt_regs *regs) { return -ENOSYS; }
-static inline int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, +static inline int a32_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, struct pt_regs *regs) { return -ENOSYS; }
-static inline void compat_setup_restart_syscall(struct pt_regs *regs) +static inline void a32_setup_restart_syscall(struct pt_regs *regs) { } #endif /* CONFIG_AARCH32_EL0 */ diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h index d40b854705db..4974c19def43 100644 --- a/arch/arm64/include/asm/syscall.h +++ b/arch/arm64/include/asm/syscall.h @@ -14,7 +14,7 @@ typedef long (*syscall_fn_t)(const struct pt_regs *regs); extern const syscall_fn_t sys_call_table[];
#ifdef CONFIG_AARCH32_EL0 -extern const syscall_fn_t compat_sys_call_table[]; +extern const syscall_fn_t a32_sys_call_table[]; #endif
static inline int syscall_get_nr(struct task_struct *task, diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c index e459cfd33711..fd0f291e215e 100644 --- a/arch/arm64/kernel/armv8_deprecated.c +++ b/arch/arm64/kernel/armv8_deprecated.c @@ -233,7 +233,7 @@ static int swp_handler(struct pt_regs *regs, u32 instr) static bool try_emulate_swp(struct pt_regs *regs, u32 insn) { /* SWP{B} only exists in ARM state and does not exist in Thumb */ - if (!compat_user_mode(regs) || compat_thumb_mode(regs)) + if (!a32_user_mode(regs) || a32_thumb_mode(regs)) return false;
if ((insn & 0x0fb00ff0) != 0x01000090) @@ -315,7 +315,7 @@ static int cp15_barrier_set_hw_mode(bool enable)
static bool try_emulate_cp15_barrier(struct pt_regs *regs, u32 insn) { - if (!compat_user_mode(regs) || compat_thumb_mode(regs)) + if (!a32_user_mode(regs) || a32_thumb_mode(regs)) return false;
if ((insn & 0x0fff0fdf) == 0x0e070f9a) @@ -348,7 +348,7 @@ static int setend_set_hw_mode(bool enable) return 0; }
-static int compat_setend_handler(struct pt_regs *regs, u32 big_endian) +static int __a32_setend_handler(struct pt_regs *regs, u32 big_endian) { char *insn;
@@ -371,25 +371,25 @@ static int compat_setend_handler(struct pt_regs *regs, u32 big_endian)
static int a32_setend_handler(struct pt_regs *regs, u32 instr) { - int rc = compat_setend_handler(regs, (instr >> 9) & 1); + int rc = __a32_setend_handler(regs, (instr >> 9) & 1); arm64_skip_faulting_instruction(regs, 4); return rc; }
static int t16_setend_handler(struct pt_regs *regs, u32 instr) { - int rc = compat_setend_handler(regs, (instr >> 3) & 1); + int rc = __a32_setend_handler(regs, (instr >> 3) & 1); arm64_skip_faulting_instruction(regs, 2); return rc; }
static bool try_emulate_setend(struct pt_regs *regs, u32 insn) { - if (compat_thumb_mode(regs) && + if (a32_thumb_mode(regs) && (insn & 0xfffffff7) == 0x0000b650) return t16_setend_handler(regs, insn) == 0;
- if (compat_user_mode(regs) && + if (a32_user_mode(regs) && (insn & 0xfffffdff) == 0xf1010000) return a32_setend_handler(regs, insn) == 0;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index ccb5c28697e9..acf922704a41 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -100,8 +100,8 @@ int main(void) BLANK(); #endif #ifdef CONFIG_AARCH32_EL0 - DEFINE(COMPAT_SIGFRAME_REGS_OFFSET, offsetof(struct compat_sigframe, uc.uc_mcontext.arm_r0)); - DEFINE(COMPAT_RT_SIGFRAME_REGS_OFFSET, offsetof(struct compat_rt_sigframe, sig.uc.uc_mcontext.arm_r0)); + DEFINE(COMPAT_SIGFRAME_REGS_OFFSET, offsetof(struct a32_sigframe, uc.uc_mcontext.arm_r0)); + DEFINE(COMPAT_RT_SIGFRAME_REGS_OFFSET, offsetof(struct a32_rt_sigframe, sig.uc.uc_mcontext.arm_r0)); BLANK(); #endif DEFINE(MM_CONTEXT_ID, offsetof(struct mm_struct, context.id.counter)); diff --git a/arch/arm64/kernel/compat_alignment.c b/arch/arm64/kernel/compat_alignment.c index deff21bfa680..934cea87cfd5 100644 --- a/arch/arm64/kernel/compat_alignment.c +++ b/arch/arm64/kernel/compat_alignment.c @@ -319,7 +319,7 @@ int do_compat_alignment_fixup(unsigned long addr, struct pt_regs *regs)
instrptr = instruction_pointer(regs);
- if (compat_thumb_mode(regs)) { + if (a32_thumb_mode(regs)) { __le16 __user *ptr = (__le16 __user *)(instrptr & ~1); u16 tinstr, tinst2;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index a4aa453fed4b..374ead145a8a 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2890,7 +2890,7 @@ static bool compat_has_neon(const struct arm64_cpu_capabilities *cap, int scope) } #endif
-static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = { +static const struct arm64_cpu_capabilities a32_elf_hwcaps[] = { #ifdef CONFIG_AARCH32_EL0 HWCAP_CAP_MATCH(compat_has_neon, CAP_COMPAT_HWCAP, COMPAT_HWCAP_NEON), HWCAP_CAP(MVFR1_EL1, SIMDFMAC, IMP, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv4), @@ -3148,7 +3148,7 @@ static void verify_local_elf_hwcaps(void) __verify_local_elf_hwcaps(arm64_elf_hwcaps);
if (id_aa64pfr0_32bit_el0(read_cpuid(ID_AA64PFR0_EL1))) - __verify_local_elf_hwcaps(compat_elf_hwcaps); + __verify_local_elf_hwcaps(a32_elf_hwcaps); }
static void verify_sve_features(void) @@ -3348,7 +3348,7 @@ void __init setup_cpu_features(void) setup_elf_hwcaps(arm64_elf_hwcaps);
if (system_supports_32bit_el0()) { - setup_elf_hwcaps(compat_elf_hwcaps); + setup_elf_hwcaps(a32_elf_hwcaps); elf_hwcap_fixup(); }
@@ -3399,7 +3399,7 @@ static int enable_mismatched_32bit_el0(unsigned int cpu) lucky_winner = cpu_32bit ? cpu : cpumask_any_and(cpu_32bit_el0_mask, cpu_active_mask); get_cpu_device(lucky_winner)->offline_disabled = true; - setup_elf_hwcaps(compat_elf_hwcaps); + setup_elf_hwcaps(a32_elf_hwcaps); elf_hwcap_fixup(); pr_info("Asymmetric 32-bit EL0 support detected on CPU %u; CPU hot-unplug disabled on CPU %u\n", cpu, lucky_winner); @@ -3503,7 +3503,7 @@ bool try_emulate_mrs(struct pt_regs *regs, u32 insn) { u32 sys_reg, rt;
- if (compat_user_mode(regs) || !aarch64_insn_is_mrs(insn)) + if (a32_user_mode(regs) || !aarch64_insn_is_mrs(insn)) return false;
/* diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c index 64f2ecbdfe5c..745aefddd9a3 100644 --- a/arch/arm64/kernel/debug-monitors.c +++ b/arch/arm64/kernel/debug-monitors.c @@ -346,10 +346,10 @@ int aarch32_break_handler(struct pt_regs *regs) bool bp = false; void __user *pc = (void __user *)instruction_pointer(regs);
- if (!compat_user_mode(regs)) + if (!a32_user_mode(regs)) return -EFAULT;
- if (compat_thumb_mode(regs)) { + if (a32_thumb_mode(regs)) { /* get 16-bit Thumb instruction */ __le16 instr; get_user(instr, (__le16 __user *)pc); diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c index d5cddb150d39..aad6190e03ca 100644 --- a/arch/arm64/kernel/perf_callchain.c +++ b/arch/arm64/kernel/perf_callchain.c @@ -57,21 +57,21 @@ user_backtrace(struct frame_tail __user *tail, * The registers we're interested in are at the end of the variable * length saved register structure. The fp points at the end of this * structure so the address of this struct is: - * (struct compat_frame_tail *)(xxx->fp)-1 + * (struct a32_frame_tail *)(xxx->fp)-1 * * This code has been adapted from the ARM OProfile support. */ -struct compat_frame_tail { - compat_uptr_t fp; /* a (struct compat_frame_tail *) in compat mode */ +struct a32_frame_tail { + compat_uptr_t fp; /* a (struct a32_frame_tail *) in compat mode */ u32 sp; u32 lr; } __attribute__((packed));
-static struct compat_frame_tail __user * -compat_user_backtrace(struct compat_frame_tail __user *tail, +static struct a32_frame_tail __user * +compat_user_backtrace(struct a32_frame_tail __user *tail, struct perf_callchain_entry_ctx *entry) { - struct compat_frame_tail buftail; + struct a32_frame_tail buftail; unsigned long err;
/* Also check accessibility of one struct frame_tail beyond */ @@ -91,11 +91,11 @@ compat_user_backtrace(struct compat_frame_tail __user *tail, * Frame pointers should strictly progress back up the stack * (towards higher addresses). */ - if (tail + 1 >= (struct compat_frame_tail __user *) + if (tail + 1 >= (struct a32_frame_tail __user *) compat_ptr(buftail.fp)) return NULL;
- return (struct compat_frame_tail __user *)compat_ptr(buftail.fp) - 1; + return (struct a32_frame_tail __user *)compat_ptr(buftail.fp) - 1; } #endif /* CONFIG_AARCH32_EL0 */
@@ -109,7 +109,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
perf_callchain_store(entry, regs->pc);
- if (!compat_user_mode(regs)) { + if (!a32_user_mode(regs)) { /* AARCH64 mode */ struct frame_tail __user *tail;
@@ -121,9 +121,9 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry, } else { #ifdef CONFIG_AARCH32_EL0 /* AARCH32 compat mode */ - struct compat_frame_tail __user *tail; + struct a32_frame_tail __user *tail;
- tail = (struct compat_frame_tail __user *)regs->compat_fp - 1; + tail = (struct a32_frame_tail __user *)regs->compat_fp - 1;
while ((entry->nr < entry->max_stack) && tail && !((unsigned long)tail & 0x3)) diff --git a/arch/arm64/kernel/perf_regs.c b/arch/arm64/kernel/perf_regs.c index b4eece3eb17d..1497f1b3e2fb 100644 --- a/arch/arm64/kernel/perf_regs.c +++ b/arch/arm64/kernel/perf_regs.c @@ -54,7 +54,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx) * At the time we make a sample, we don't know whether the consumer is * 32-bit or 64-bit, so we have to cater for both possibilities. */ - if (compat_user_mode(regs)) { + if (a32_user_mode(regs)) { if ((u32)idx == PERF_REG_ARM64_SP) return regs->compat_sp; if ((u32)idx == PERF_REG_ARM64_LR) diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 0fcc4eb1a7ab..a48d2f596c07 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -159,7 +159,7 @@ static void print_pstate(struct pt_regs *regs) { u64 pstate = regs->pstate;
- if (compat_user_mode(regs)) { + if (a32_user_mode(regs)) { printk("pstate: %08llx (%c%c%c%c %c %s %s %c%c%c %cDIT %cSSBS)\n", pstate, pstate & PSR_AA32_N_BIT ? 'N' : 'n', @@ -202,7 +202,7 @@ void __show_regs(struct pt_regs *regs) int i, top_reg; u64 lr, sp;
- if (compat_user_mode(regs)) { + if (a32_user_mode(regs)) { lr = regs->compat_lr; sp = regs->compat_sp; top_reg = 12; diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c index 05f40c4e18fd..58a97861bfc5 100644 --- a/arch/arm64/kernel/proton-pack.c +++ b/arch/arm64/kernel/proton-pack.c @@ -643,7 +643,7 @@ void spectre_v4_enable_mitigation(const struct arm64_cpu_capabilities *__unused)
static void __update_pstate_ssbs(struct pt_regs *regs, bool state) { - u64 bit = compat_user_mode(regs) ? PSR_AA32_SSBS_BIT : PSR_SSBS_BIT; + u64 bit = a32_user_mode(regs) ? PSR_AA32_SSBS_BIT : PSR_SSBS_BIT;
if (state) regs->pstate |= bit; diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index bbd316f7bef6..06c5d2e66f6d 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -1164,7 +1164,7 @@ static int setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, static void setup_restart_syscall(struct pt_regs *regs) { if (is_compat_task()) - compat_setup_restart_syscall(regs); + a32_setup_restart_syscall(regs); else regs->regs[8] = __NR_restart_syscall; } @@ -1185,9 +1185,9 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs) */ if (is_compat_task()) { if (ksig->ka.sa.sa_flags & SA_SIGINFO) - ret = compat_setup_rt_frame(usig, ksig, oldset, regs); + ret = a32_setup_rt_frame(usig, ksig, oldset, regs); else - ret = compat_setup_frame(usig, ksig, oldset, regs); + ret = a32_setup_frame(usig, ksig, oldset, regs); } else { ret = setup_rt_frame(usig, ksig, oldset, regs); } @@ -1222,7 +1222,7 @@ static void do_signal(struct pt_regs *regs) */ if (syscall) { continue_addr = regs->pc; - restart_addr = continue_addr - (compat_thumb_mode(regs) ? 2 : 4); + restart_addr = continue_addr - (a32_thumb_mode(regs) ? 2 : 4); retval = regs->regs[0];
/* diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c index bbd542704730..3853a62ab839 100644 --- a/arch/arm64/kernel/signal32.c +++ b/arch/arm64/kernel/signal32.c @@ -20,7 +20,7 @@ #include <asm/unistd.h> #include <asm/vdso.h>
-struct compat_vfp_sigframe { +struct a32_vfp_sigframe { compat_ulong_t magic; compat_ulong_t size; struct compat_user_vfp { @@ -35,12 +35,12 @@ struct compat_vfp_sigframe { } __attribute__((__aligned__(8)));
#define VFP_MAGIC 0x56465001 -#define VFP_STORAGE_SIZE sizeof(struct compat_vfp_sigframe) +#define VFP_STORAGE_SIZE sizeof(struct a32_vfp_sigframe)
#define FSR_WRITE_SHIFT (11)
-struct compat_aux_sigframe { - struct compat_vfp_sigframe vfp; +struct a32_aux_sigframe { + struct a32_vfp_sigframe vfp;
/* Something that isn't a valid magic number for any coprocessor. */ unsigned long end_magic; @@ -72,7 +72,7 @@ static inline int get_sigset_t(sigset_t *set, * VFP save/restore code. * * We have to be careful with endianness, since the fpsimd context-switch - * code operates on 128-bit (Q) register values whereas the compat ABI + * code operates on 128-bit (Q) register values whereas the a32 ABI * uses an array of 64-bit (D) registers. Consequently, we need to swap * the two halves of each Q register when running on a big-endian CPU. */ @@ -89,7 +89,7 @@ union __fpsimd_vreg { }; };
-static int compat_preserve_vfp_context(struct compat_vfp_sigframe __user *frame) +static int a32_preserve_vfp_context(struct a32_vfp_sigframe __user *frame) { struct user_fpsimd_state const *fpsimd = ¤t->thread.uw.fpsimd_state; @@ -139,7 +139,7 @@ static int compat_preserve_vfp_context(struct compat_vfp_sigframe __user *frame) return err ? -EFAULT : 0; }
-static int compat_restore_vfp_context(struct compat_vfp_sigframe __user *frame) +static int a32_restore_vfp_context(struct a32_vfp_sigframe __user *frame) { struct user_fpsimd_state fpsimd; compat_ulong_t magic = VFP_MAGIC; @@ -179,12 +179,12 @@ static int compat_restore_vfp_context(struct compat_vfp_sigframe __user *frame) return err ? -EFAULT : 0; }
-static int compat_restore_sigframe(struct pt_regs *regs, - struct compat_sigframe __user *sf) +static int a32_restore_sigframe(struct pt_regs *regs, + struct a32_sigframe __user *sf) { int err; sigset_t set; - struct compat_aux_sigframe __user *aux; + struct a32_aux_sigframe __user *aux; unsigned long psr;
err = get_sigset_t(&set, &sf->uc.uc_sigmask); @@ -218,9 +218,9 @@ static int compat_restore_sigframe(struct pt_regs *regs,
err |= !valid_user_regs(®s->user_regs, current);
- aux = (struct compat_aux_sigframe __user *) sf->uc.uc_regspace; + aux = (struct a32_aux_sigframe __user *) sf->uc.uc_regspace; if (err == 0 && system_supports_fpsimd()) - err |= compat_restore_vfp_context(&aux->vfp); + err |= a32_restore_vfp_context(&aux->vfp);
return err; } @@ -228,7 +228,7 @@ static int compat_restore_sigframe(struct pt_regs *regs, COMPAT_SYSCALL_DEFINE0(sigreturn) { struct pt_regs *regs = current_pt_regs(); - struct compat_sigframe __user *frame; + struct a32_sigframe __user *frame;
/* Always make any pending restarted system calls return -EINTR */ current->restart_block.fn = do_no_restart_syscall; @@ -241,12 +241,12 @@ COMPAT_SYSCALL_DEFINE0(sigreturn) if (regs->compat_sp & 7) goto badframe;
- frame = (struct compat_sigframe __user *)regs->compat_sp; + frame = (struct a32_sigframe __user *)regs->compat_sp;
if (!access_ok(frame, sizeof (*frame))) goto badframe;
- if (compat_restore_sigframe(regs, frame)) + if (a32_restore_sigframe(regs, frame)) goto badframe;
return regs->regs[0]; @@ -259,7 +259,7 @@ COMPAT_SYSCALL_DEFINE0(sigreturn) COMPAT_SYSCALL_DEFINE0(rt_sigreturn) { struct pt_regs *regs = current_pt_regs(); - struct compat_rt_sigframe __user *frame; + struct a32_rt_sigframe __user *frame;
/* Always make any pending restarted system calls return -EINTR */ current->restart_block.fn = do_no_restart_syscall; @@ -272,12 +272,12 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn) if (regs->compat_sp & 7) goto badframe;
- frame = (struct compat_rt_sigframe __user *)regs->compat_sp; + frame = (struct a32_rt_sigframe __user *)regs->compat_sp;
if (!access_ok(frame, sizeof (*frame))) goto badframe;
- if (compat_restore_sigframe(regs, &frame->sig)) + if (a32_restore_sigframe(regs, &frame->sig)) goto badframe;
if (compat_restore_altstack(&frame->sig.uc.uc_stack)) @@ -290,7 +290,7 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn) return 0; }
-static void __user *compat_get_sigframe(struct ksignal *ksig, +static void __user *a32_get_sigframe(struct ksignal *ksig, struct pt_regs *regs, int framesize) { @@ -311,7 +311,7 @@ static void __user *compat_get_sigframe(struct ksignal *ksig, return frame; }
-static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka, +static void a32_setup_return(struct pt_regs *regs, struct k_sigaction *ka, compat_ulong_t __user *rc, void __user *frame, int usig) { @@ -354,10 +354,10 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka, regs->pstate = spsr; }
-static int compat_setup_sigframe(struct compat_sigframe __user *sf, +static int a32_setup_sigframe(struct a32_sigframe __user *sf, struct pt_regs *regs, sigset_t *set) { - struct compat_aux_sigframe __user *aux; + struct a32_aux_sigframe __user *aux; unsigned long psr = pstate_to_compat_psr(regs->pstate); int err = 0;
@@ -380,7 +380,7 @@ static int compat_setup_sigframe(struct compat_sigframe __user *sf, __put_user_error(psr, &sf->uc.uc_mcontext.arm_cpsr, err);
__put_user_error((compat_ulong_t)0, &sf->uc.uc_mcontext.trap_no, err); - /* set the compat FSR WnR */ + /* set the aarch32 FSR WnR */ __put_user_error(!!(current->thread.fault_code & ESR_ELx_WNR) << FSR_WRITE_SHIFT, &sf->uc.uc_mcontext.error_code, err); __put_user_error(current->thread.fault_address, &sf->uc.uc_mcontext.fault_address, err); @@ -388,25 +388,25 @@ static int compat_setup_sigframe(struct compat_sigframe __user *sf,
err |= put_sigset_t(&sf->uc.uc_sigmask, set);
- aux = (struct compat_aux_sigframe __user *) sf->uc.uc_regspace; + aux = (struct a32_aux_sigframe __user *) sf->uc.uc_regspace;
if (err == 0 && system_supports_fpsimd()) - err |= compat_preserve_vfp_context(&aux->vfp); + err |= a32_preserve_vfp_context(&aux->vfp); __put_user_error(0, &aux->end_magic, err);
return err; }
/* - * 32-bit signal handling routines called from signal.c + * aarch32-bit signal handling routines called from signal.c */ -int compat_setup_rt_frame(int usig, struct ksignal *ksig, +int a32_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, struct pt_regs *regs) { - struct compat_rt_sigframe __user *frame; + struct a32_rt_sigframe __user *frame; int err = 0;
- frame = compat_get_sigframe(ksig, regs, sizeof(*frame)); + frame = a32_get_sigframe(ksig, regs, sizeof(*frame));
if (!frame) return 1; @@ -418,10 +418,10 @@ int compat_setup_rt_frame(int usig, struct ksignal *ksig,
err |= __compat_save_altstack(&frame->sig.uc.uc_stack, regs->compat_sp);
- err |= compat_setup_sigframe(&frame->sig, regs, set); + err |= a32_setup_sigframe(&frame->sig, regs, set);
if (err == 0) { - compat_setup_return(regs, &ksig->ka, frame->sig.retcode, frame, usig); + a32_setup_return(regs, &ksig->ka, frame->sig.retcode, frame, usig); regs->regs[1] = (compat_ulong_t)(unsigned long)&frame->info; regs->regs[2] = (compat_ulong_t)(unsigned long)&frame->sig.uc; } @@ -429,27 +429,27 @@ int compat_setup_rt_frame(int usig, struct ksignal *ksig, return err; }
-int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set, +int a32_setup_frame(int usig, struct ksignal *ksig, sigset_t *set, struct pt_regs *regs) { - struct compat_sigframe __user *frame; + struct a32_sigframe __user *frame; int err = 0;
- frame = compat_get_sigframe(ksig, regs, sizeof(*frame)); + frame = a32_get_sigframe(ksig, regs, sizeof(*frame));
if (!frame) return 1;
__put_user_error(0x5ac3c35a, &frame->uc.uc_flags, err);
- err |= compat_setup_sigframe(frame, regs, set); + err |= a32_setup_sigframe(frame, regs, set); if (err == 0) - compat_setup_return(regs, &ksig->ka, frame->retcode, frame, usig); + a32_setup_return(regs, &ksig->ka, frame->retcode, frame, usig);
return err; }
-void compat_setup_restart_syscall(struct pt_regs *regs) +void a32_setup_restart_syscall(struct pt_regs *regs) { regs->regs[7] = __NR_compat_restart_syscall; } diff --git a/arch/arm64/kernel/sys32.c b/arch/arm64/kernel/sys32.c index fc40386afb1b..3c3c13a9dde3 100644 --- a/arch/arm64/kernel/sys32.c +++ b/arch/arm64/kernel/sys32.c @@ -129,7 +129,7 @@ COMPAT_SYSCALL_DEFINE6(aarch32_fallocate, int, fd, int, mode, #undef __SYSCALL #define __SYSCALL(nr, sym) [nr] = __arm64_##sym,
-const syscall_fn_t compat_sys_call_table[__NR_compat_syscalls] = { +const syscall_fn_t a32_sys_call_table[__NR_compat_syscalls] = { [0 ... __NR_compat_syscalls - 1] = __arm64_sys_ni_syscall, #include <asm/unistd32.h> }; diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c index df14336c3a29..62022c403bec 100644 --- a/arch/arm64/kernel/sys_compat.c +++ b/arch/arm64/kernel/sys_compat.c @@ -21,7 +21,7 @@ #include <asm/unistd.h>
static long -__do_compat_cache_op(unsigned long start, unsigned long end) +__do_a32_cache_op(unsigned long start, unsigned long end) { long ret;
@@ -52,7 +52,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end) }
static inline long -do_compat_cache_op(unsigned long start, unsigned long end, int flags) +do_a32_cache_op(unsigned long start, unsigned long end, int flags) { if (end < start || flags) return -EINVAL; @@ -60,12 +60,12 @@ do_compat_cache_op(unsigned long start, unsigned long end, int flags) if (!access_ok((const void __user *)start, end - start)) return -EFAULT;
- return __do_compat_cache_op(start, end); + return __do_a32_cache_op(start, end); } /* * Handle all unrecognised system calls. */ -long compat_arm_syscall(struct pt_regs *regs, int scno) +long a32_arm_syscall(struct pt_regs *regs, int scno) { unsigned long addr;
@@ -85,7 +85,7 @@ long compat_arm_syscall(struct pt_regs *regs, int scno) * the specified region). */ case __ARM_NR_compat_cacheflush: - return do_compat_cache_op(regs->regs[0], regs->regs[1], regs->regs[2]); + return do_a32_cache_op(regs->regs[0], regs->regs[1], regs->regs[2]);
case __ARM_NR_compat_set_tls: current->thread.uw.tp_value = regs->regs[0]; @@ -110,7 +110,7 @@ long compat_arm_syscall(struct pt_regs *regs, int scno) break; }
- addr = instruction_pointer(regs) - (compat_thumb_mode(regs) ? 2 : 4); + addr = instruction_pointer(regs) - (a32_thumb_mode(regs) ? 2 : 4);
arm64_notify_die("Oops - bad compat syscall(2)", regs, SIGILL, ILL_ILLTRP, addr, 0); diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c index 9e760784426b..f6da6b0afcf7 100644 --- a/arch/arm64/kernel/syscall.c +++ b/arch/arm64/kernel/syscall.c @@ -15,7 +15,7 @@ #include <asm/thread_info.h> #include <asm/unistd.h>
-long compat_arm_syscall(struct pt_regs *regs, int scno); +long a32_arm_syscall(struct pt_regs *regs, int scno); long sys_ni_syscall(void);
static long do_ni_syscall(struct pt_regs *regs, int scno) @@ -23,7 +23,7 @@ static long do_ni_syscall(struct pt_regs *regs, int scno) #ifdef CONFIG_AARCH32_EL0 long ret; if (is_compat_task()) { - ret = compat_arm_syscall(regs, scno); + ret = a32_arm_syscall(regs, scno); if (ret != -ENOSYS) return ret; } @@ -159,6 +159,6 @@ void do_el0_svc(struct pt_regs *regs) void do_el0_svc_compat(struct pt_regs *regs) { el0_svc_common(regs, regs->regs[7], __NR_compat_syscalls, - compat_sys_call_table); + a32_sys_call_table); } #endif diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c index 8b70759cdbb9..dad89b667c29 100644 --- a/arch/arm64/kernel/traps.c +++ b/arch/arm64/kernel/traps.c @@ -368,7 +368,7 @@ void arm64_skip_faulting_instruction(struct pt_regs *regs, unsigned long size) if (user_mode(regs)) user_fastforward_single_step(current);
- if (compat_user_mode(regs)) + if (a32_user_mode(regs)) advance_itstate(regs); else regs->pstate &= ~PSR_BTYPE_MASK; @@ -379,7 +379,7 @@ static int user_insn_read(struct pt_regs *regs, u32 *insnp) u32 instr; unsigned long pc = instruction_pointer(regs);
- if (compat_thumb_mode(regs)) { + if (a32_thumb_mode(regs)) { /* 16-bit Thumb instruction */ __le16 instr_le; if (get_user(instr_le, (__le16 __user *)pc)) diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 2e5d1e238af9..2b6dda379b4c 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -717,7 +717,7 @@ static int do_alignment_fault(unsigned long far, unsigned long esr, struct pt_regs *regs) { if (IS_ENABLED(CONFIG_COMPAT_ALIGNMENT_FIXUPS) && - compat_user_mode(regs)) + a32_user_mode(regs)) return do_compat_alignment_fixup(far, regs); do_bad_area(far, esr, regs); return 0;
From: Andrew Pinski apinski@cavium.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
Define __BITS_PER_LONG depending on the ABI used (i.e. check whether __ILP32__ or __LP64__ is defined). This is necessary for glibc to determine the appropriate type definitions for the system call interface.
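For illustration (not part of the patch), a userspace translation unit built against the installed uapi headers with either the LP64 or the ILP32 compiler flags could sanity-check the definition; the _Static_assert below is a hypothetical check, not something the patch adds:

    /* Illustrative only: fails to compile if the uapi value and the ABI disagree. */
    #include <asm/bitsperlong.h>
    #include <limits.h>

    _Static_assert(__BITS_PER_LONG == CHAR_BIT * sizeof(long),
                   "__BITS_PER_LONG does not match the ABI in use");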
Signed-off-by: Andrew Pinski apinski@cavium.com Signed-off-by: Philipp Tomsich philipp.tomsich@theobroma-systems.com Signed-off-by: Christoph Muellner christoph.muellner@theobroma-systems.com Signed-off-by: Yury Norov ynorov@caviumnetworks.com Reviewed-by: David Daney ddaney@caviumnetworks.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/uapi/asm/bitsperlong.h | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/uapi/asm/bitsperlong.h b/arch/arm64/include/uapi/asm/bitsperlong.h index 485d60bee26c..9a05a9659e76 100644 --- a/arch/arm64/include/uapi/asm/bitsperlong.h +++ b/arch/arm64/include/uapi/asm/bitsperlong.h @@ -17,7 +17,14 @@ #ifndef __ASM_BITSPERLONG_H #define __ASM_BITSPERLONG_H
-#define __BITS_PER_LONG 64 +#if defined(__LP64__) +/* Assuming __LP64__ will be defined for native ELF64's and not for ILP32. */ +# define __BITS_PER_LONG 64 +#elif defined(__ILP32__) +# define __BITS_PER_LONG 32 +#else +# error "Neither LP64 nor ILP32: unsupported ABI in asm/bitsperlong.h" +#endif
#include <asm-generic/bitsperlong.h>
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
Based on a patch by Andrew Pinski.
This patch introduces is_a32_compat_task and is_a32_compat_thread so it is easier to say whether this is an AArch32-specific thread or a generic compat thread/task. The corresponding functions are located in <asm/is_compat.h> to avoid a mess in the headers.
Some files include both <linux/compat.h> and <asm/compat.h>, which is redundant because <linux/compat.h> already includes <asm/compat.h>. This is fixed here as well.
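For reference, a hedged sketch of the intended split at call sites (the example_* helpers are illustrative and not part of the patch): generic code keeps using is_compat_task(), while strictly AArch32 paths switch to the new helper.

    /* Illustrative only: generic "any 32-bit ABI" check vs. AArch32-specific check. */
    static inline unsigned long example_task_size(void)
    {
            if (is_compat_task())           /* any 32-bit ABI (AArch32 now, ILP32 later) */
                    return TASK_SIZE_32;
            return TASK_SIZE_64;
    }

    static inline int example_uses_a32_sigframe(void)
    {
            return is_a32_compat_task();    /* strictly AArch32; ILP32 will not match */
    }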
Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Andrew Pinski Andrew.Pinski@caviumnetworks.com Signed-off-by: Bamvor Jian Zhang bamv2005@gmail.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/compat.h | 18 +--------- arch/arm64/include/asm/elf.h | 10 +++--- arch/arm64/include/asm/ftrace.h | 2 +- arch/arm64/include/asm/is_compat.h | 52 ++++++++++++++++++++++++++++ arch/arm64/include/asm/processor.h | 11 +++--- arch/arm64/include/asm/syscall.h | 2 +- arch/arm64/include/asm/thread_info.h | 2 +- arch/arm64/kernel/hw_breakpoint.c | 8 ++--- arch/arm64/kernel/perf_regs.c | 2 +- arch/arm64/kernel/process.c | 7 ++-- arch/arm64/kernel/ptrace.c | 11 +++--- arch/arm64/kernel/signal.c | 4 +-- arch/arm64/kernel/syscall.c | 2 +- arch/arm64/kernel/traps.c | 1 + 14 files changed, 84 insertions(+), 48 deletions(-) create mode 100644 arch/arm64/include/asm/is_compat.h
diff --git a/arch/arm64/include/asm/compat.h b/arch/arm64/include/asm/compat.h index ae904a1ad529..e45d73cbfbd5 100644 --- a/arch/arm64/include/asm/compat.h +++ b/arch/arm64/include/asm/compat.h @@ -27,6 +27,7 @@ typedef u16 compat_ipc_pid_t; #include <linux/types.h> #include <linux/sched.h> #include <linux/sched/task_stack.h> +#include <asm/is_compat.h>
#ifdef __AARCH64EB__ #define COMPAT_UTS_MACHINE "armv8b\0\0" @@ -86,24 +87,7 @@ struct compat_statfs { #define compat_user_stack_pointer() (user_stack_pointer(task_pt_regs(current))) #define COMPAT_MINSIGSTKSZ 2048
-static inline int is_compat_task(void) -{ - return test_thread_flag(TIF_32BIT); -} - -static inline int is_compat_thread(struct thread_info *thread) -{ - return test_ti_thread_flag(thread, TIF_32BIT); -} - long compat_arm_syscall(struct pt_regs *regs, int scno);
-#else /* !CONFIG_COMPAT */ - -static inline int is_compat_thread(struct thread_info *thread) -{ - return 0; -} - #endif /* CONFIG_COMPAT */ #endif /* __ASM_COMPAT_H */ diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h index 97932fbf973d..8119fab9bffb 100644 --- a/arch/arm64/include/asm/elf.h +++ b/arch/arm64/include/asm/elf.h @@ -5,6 +5,10 @@ #ifndef __ASM_ELF_H #define __ASM_ELF_H
+#ifndef __ASSEMBLY__ +#include <linux/compat.h> +#endif + #include <asm/hwcap.h>
/* @@ -187,13 +191,9 @@ extern int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp);
/* 1GB of VA */ -#ifdef CONFIG_COMPAT -#define STACK_RND_MASK (test_thread_flag(TIF_32BIT) ? \ +#define STACK_RND_MASK (is_compat_task() ? \ 0x7ff >> (PAGE_SHIFT - 12) : \ 0x3ffff >> (PAGE_SHIFT - 12)) -#else -#define STACK_RND_MASK (0x3ffff >> (PAGE_SHIFT - 12)) -#endif
#ifdef __AARCH64EB__ #define COMPAT_ELF_PLATFORM ("v8b") diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h index ab158196480c..0f06410b5497 100644 --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -175,7 +175,7 @@ static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, #define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs) { - return is_compat_task(); + return is_a32_compat_task(); }
#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME diff --git a/arch/arm64/include/asm/is_compat.h b/arch/arm64/include/asm/is_compat.h new file mode 100644 index 000000000000..d8534d9ec1d0 --- /dev/null +++ b/arch/arm64/include/asm/is_compat.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef __ASM_IS_COMPAT_H +#define __ASM_IS_COMPAT_H +#ifndef __ASSEMBLY__ + +#include <linux/thread_bits.h> + +#ifdef CONFIG_AARCH32_EL0 + +static inline int is_a32_compat_task(void) +{ + return test_thread_flag(TIF_32BIT); +} + +static inline int is_a32_compat_thread(struct thread_info *thread) +{ + return test_ti_thread_flag(thread, TIF_32BIT); +} + +#else + +static inline int is_a32_compat_task(void) + +{ + return 0; +} + +static inline int is_a32_compat_thread(struct thread_info *thread) +{ + return 0; +} + +#endif /* CONFIG_AARCH32_EL0 */ + +#ifdef CONFIG_COMPAT + +static inline int is_compat_task(void) +{ + return is_a32_compat_task(); +} + +#endif /* CONFIG_COMPAT */ + +static inline int is_compat_thread(struct thread_info *thread) +{ + return is_a32_compat_thread(thread); +} + + +#endif /* !__ASSEMBLY__ */ +#endif /* __ASM_IS_COMPAT_H */ diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 04cf99bf760f..7d444cd882ce 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -36,6 +36,7 @@
#include <asm/alternative.h> #include <asm/cpufeature.h> +#include <asm/is_compat.h> #include <asm/hw_breakpoint.h> #include <asm/kasan.h> #include <asm/lse.h> @@ -64,11 +65,11 @@ #else #define TASK_SIZE_32 (UL(0x100000000) - PAGE_SIZE) #endif /* CONFIG_ARM64_64K_PAGES */ -#define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \ +#define TASK_SIZE (is_compat_task() ? \ TASK_SIZE_32 : TASK_SIZE_64) -#define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \ +#define TASK_SIZE_OF(tsk) (is_compat_thread(tsk) ? \ TASK_SIZE_32 : TASK_SIZE_64) -#define DEFAULT_MAP_WINDOW (test_thread_flag(TIF_32BIT) ? \ +#define DEFAULT_MAP_WINDOW (is_compat_task() ? \ TASK_SIZE_32 : DEFAULT_MAP_WINDOW_64) #else #define TASK_SIZE TASK_SIZE_64 @@ -85,7 +86,7 @@
#ifdef CONFIG_COMPAT #define AARCH32_VECTORS_BASE 0xffff0000 -#define STACK_TOP (test_thread_flag(TIF_32BIT) ? \ +#define STACK_TOP (is_compat_task() ? \ AARCH32_VECTORS_BASE : STACK_TOP_MAX) #else #define STACK_TOP STACK_TOP_MAX @@ -260,7 +261,7 @@ static inline void arch_thread_struct_whitelist(unsigned long *offset, #define task_user_tls(t) \ ({ \ unsigned long *__tls; \ - if (is_compat_thread(task_thread_info(t))) \ + if (is_a32_compat_thread(task_thread_info(t))) \ __tls = &(t)->thread.uw.tp2_value; \ else \ __tls = &(t)->thread.uw.tp_value; \ diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h index 4974c19def43..55ca02e00b21 100644 --- a/arch/arm64/include/asm/syscall.h +++ b/arch/arm64/include/asm/syscall.h @@ -79,7 +79,7 @@ static inline void syscall_get_arguments(struct task_struct *task, */ static inline int syscall_get_arch(struct task_struct *task) { - if (is_compat_thread(task_thread_info(task))) + if (is_a32_compat_thread(task_thread_info(task))) return AUDIT_ARCH_ARM;
return AUDIT_ARCH_AARCH64; diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h index 553d1bc559c6..1dcca52f249f 100644 --- a/arch/arm64/include/asm/thread_info.h +++ b/arch/arm64/include/asm/thread_info.h @@ -73,7 +73,7 @@ void arch_setup_new_exec(void); #define TIF_FREEZE 19 #define TIF_RESTORE_SIGMASK 20 #define TIF_SINGLESTEP 21 -#define TIF_32BIT 22 /* 32bit process */ +#define TIF_32BIT 22 /* AARCH32 process */ #define TIF_SVE 23 /* Scalable Vector Extension in use */ #define TIF_SVE_VL_INHERIT 24 /* Inherit SVE vl_onexec across exec */ #define TIF_SSBD 25 /* Wants SSB mitigation */ diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c index 35225632d70a..d39a8787edf2 100644 --- a/arch/arm64/kernel/hw_breakpoint.c +++ b/arch/arm64/kernel/hw_breakpoint.c @@ -157,7 +157,7 @@ enum hw_breakpoint_ops { HW_BREAKPOINT_RESTORE };
-static int is_compat_bp(struct perf_event *bp) +static int is_a32_compat_bp(struct perf_event *bp) { struct task_struct *tsk = bp->hw.target;
@@ -168,7 +168,7 @@ static int is_compat_bp(struct perf_event *bp) * deprecated behaviour if we use unaligned watchpoints in * AArch64 state. */ - return tsk && is_compat_thread(task_thread_info(tsk)); + return tsk && is_a32_compat_thread(task_thread_info(tsk)); }
/** @@ -467,7 +467,7 @@ static int arch_build_bp_info(struct perf_event *bp, * Watchpoints can be of length 1, 2, 4 or 8 bytes. */ if (hw->ctrl.type == ARM_BREAKPOINT_EXECUTE) { - if (is_compat_bp(bp)) { + if (is_a32_compat_bp(bp)) { if (hw->ctrl.len != ARM_BREAKPOINT_LEN_2 && hw->ctrl.len != ARM_BREAKPOINT_LEN_4) return -EINVAL; @@ -525,7 +525,7 @@ int hw_breakpoint_arch_parse(struct perf_event *bp, * AArch32 tasks expect some simple alignment fixups, so emulate * that here. */ - if (is_compat_bp(bp)) { + if (is_a32_compat_bp(bp)) { if (hw->ctrl.len == ARM_BREAKPOINT_LEN_8) alignment_mask = 0x7; else diff --git a/arch/arm64/kernel/perf_regs.c b/arch/arm64/kernel/perf_regs.c index 1497f1b3e2fb..49cde242c086 100644 --- a/arch/arm64/kernel/perf_regs.c +++ b/arch/arm64/kernel/perf_regs.c @@ -92,7 +92,7 @@ int perf_reg_validate(u64 mask)
u64 perf_reg_abi(struct task_struct *task) { - if (is_compat_thread(task_thread_info(task))) + if (is_a32_compat_thread(task_thread_info(task))) return PERF_SAMPLE_REGS_ABI_32; else return PERF_SAMPLE_REGS_ABI_64; diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index a48d2f596c07..cf7a1f7807d3 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -43,7 +43,6 @@ #include <linux/stacktrace.h>
#include <asm/alternative.h> -#include <asm/compat.h> #include <asm/cpufeature.h> #include <asm/cacheflush.h> #include <asm/exec.h> @@ -252,7 +251,7 @@ static void tls_thread_flush(void) if (system_supports_tpidr2()) write_sysreg_s(0, SYS_TPIDR2_EL0);
- if (is_compat_task()) { + if (is_a32_compat_task()) { current->thread.uw.tp_value = 0;
/* @@ -375,7 +374,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) p->thread.tpidr2_el0 = read_sysreg_s(SYS_TPIDR2_EL0);
if (stack_start) { - if (is_compat_thread(task_thread_info(p))) + if (is_a32_compat_thread(task_thread_info(p))) childregs->compat_sp = stack_start; else childregs->sp = stack_start; @@ -427,7 +426,7 @@ static void tls_thread_switch(struct task_struct *next) { tls_preserve_current_state();
- if (is_compat_thread(task_thread_info(next))) + if (is_a32_compat_thread(task_thread_info(next))) write_sysreg(next->thread.uw.tp_value, tpidrro_el0); else if (!arm64_kernel_unmapped_at_el0()) write_sysreg(0, tpidrro_el0); diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 6d00c4d326b6..b87ebdaced90 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -29,7 +29,6 @@ #include <linux/regset.h> #include <linux/elf.h>
-#include <asm/compat.h> #include <asm/cpufeature.h> #include <asm/debug-monitors.h> #include <asm/fpsimd.h> @@ -174,7 +173,7 @@ static void ptrace_hbptriggered(struct perf_event *bp, const char *desc = "Hardware breakpoint trap (ptrace)";
#ifdef CONFIG_AARCH32_EL0 - if (is_compat_task()) { + if (is_a32_compat_task()) { int si_errno = 0; int i;
@@ -2121,9 +2120,9 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task) * 32-bit children use an extended user_aarch32_ptrace_view to allow * access to the TLS register. */ - if (is_compat_task()) + if (is_a32_compat_task()) return &user_aarch32_view; - else if (is_compat_thread(task_thread_info(task))) + else if (is_a32_compat_thread(task_thread_info(task))) return &user_aarch32_ptrace_view; #endif return &user_aarch64_view; @@ -2167,7 +2166,7 @@ static void report_syscall(struct pt_regs *regs, enum ptrace_syscall_dir dir) * - Syscall stops behave differently to seccomp and pseudo-step traps * (the latter do not nobble any registers). */ - regno = (is_compat_task() ? 12 : 7); + regno = (is_a32_compat_task() ? 12 : 7); saved_reg = regs->regs[regno]; regs->regs[regno] = dir;
@@ -2303,7 +2302,7 @@ int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task) /* https://lore.kernel.org/lkml/20191118131525.GA4180@willie-the-truck */ user_regs_reset_single_step(regs, task);
- if (is_compat_thread(task_thread_info(task))) + if (is_a32_compat_thread(task_thread_info(task))) return valid_compat_regs(regs); else return valid_native_regs(regs); diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 06c5d2e66f6d..c08b47c066bb 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -1163,7 +1163,7 @@ static int setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
static void setup_restart_syscall(struct pt_regs *regs) { - if (is_compat_task()) + if (is_a32_compat_task()) a32_setup_restart_syscall(regs); else regs->regs[8] = __NR_restart_syscall; @@ -1183,7 +1183,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs) /* * Set up the stack frame */ - if (is_compat_task()) { + if (is_a32_compat_task()) { if (ksig->ka.sa.sa_flags & SA_SIGINFO) ret = a32_setup_rt_frame(usig, ksig, oldset, regs); else diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c index f6da6b0afcf7..aab954921a87 100644 --- a/arch/arm64/kernel/syscall.c +++ b/arch/arm64/kernel/syscall.c @@ -22,7 +22,7 @@ static long do_ni_syscall(struct pt_regs *regs, int scno) { #ifdef CONFIG_AARCH32_EL0 long ret; - if (is_compat_task()) { + if (is_a32_compat_task()) { ret = a32_arm_syscall(regs, scno); if (ret != -ENOSYS) return ret; diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c index dad89b667c29..b7b7afb4a8c7 100644 --- a/arch/arm64/kernel/traps.c +++ b/arch/arm64/kernel/traps.c @@ -8,6 +8,7 @@
#include <linux/bug.h> #include <linux/context_tracking.h> +#include <linux/compat.h> #include <linux/signal.h> #include <linux/kallsyms.h> #include <linux/kprobes.h>
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
ILP32 tasks need to be distinguished from LP64 and AArch32 ones. This patch adds the helper functions is_ilp32_compat_{task,thread} and the thread flag TIF_32BIT_AARCH64 to address that. This is a preparation for the following patches in the ILP32 patchset.
For consistency, SET_PERSONALITY is changed here accordingly.
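A hedged sketch (illustrative helper, not part of the patch) of how the two thread flags now partition the three ABIs:

    /* Illustrative only: TIF_32BIT marks AArch32, TIF_32BIT_AARCH64 marks ILP32. */
    static inline const char *example_abi_name(void)
    {
            if (is_a32_compat_task())       /* TIF_32BIT */
                    return "aarch32";
            if (is_ilp32_compat_task())     /* TIF_32BIT_AARCH64 */
                    return "aarch64:ilp32";
            return "aarch64 (lp64)";
    }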
Signed-off-by: Andrew Pinski Andrew.Pinski@caviumnetworks.com Signed-off-by: Philipp Tomsich philipp.tomsich@theobroma-systems.com Signed-off-by: Christoph Muellner christoph.muellner@theobroma-systems.com Signed-off-by: Yury Norov ynorov@caviumnetworks.com Reviewed-by: David Daney ddaney@caviumnetworks.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com
Conflicts: arch/arm64/include/asm/compat.h
Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/elf.h | 2 ++ arch/arm64/include/asm/is_compat.h | 30 ++++++++++++++++++++++++++-- arch/arm64/include/asm/thread_info.h | 2 ++ 3 files changed, 32 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h index 8119fab9bffb..2f7b8f1f8a34 100644 --- a/arch/arm64/include/asm/elf.h +++ b/arch/arm64/include/asm/elf.h @@ -164,6 +164,7 @@ typedef struct user_fpsimd_state elf_fpregset_t;
#define SET_PERSONALITY(ex) \ ({ \ + clear_thread_flag(TIF_32BIT_AARCH64); \ clear_thread_flag(TIF_32BIT); \ current->personality &= ~READ_IMPLIES_EXEC; \ }) @@ -223,6 +224,7 @@ int compat_elf_check_arch(const struct elf32_hdr *); */ #define COMPAT_SET_PERSONALITY(ex) \ ({ \ + clear_thread_flag(TIF_32BIT_AARCH64); \ set_thread_flag(TIF_32BIT); \ }) #ifdef CONFIG_COMPAT_VDSO diff --git a/arch/arm64/include/asm/is_compat.h b/arch/arm64/include/asm/is_compat.h index d8534d9ec1d0..2c2d1f4c26bd 100644 --- a/arch/arm64/include/asm/is_compat.h +++ b/arch/arm64/include/asm/is_compat.h @@ -33,18 +33,44 @@ static inline int is_a32_compat_thread(struct thread_info *thread)
#endif /* CONFIG_AARCH32_EL0 */
+#ifdef CONFIG_ARM64_ILP32 + +static inline int is_ilp32_compat_task(void) +{ + return test_thread_flag(TIF_32BIT_AARCH64); +} + +static inline int is_ilp32_compat_thread(struct thread_info *thread) +{ + return test_ti_thread_flag(thread, TIF_32BIT_AARCH64); +} + +#else + +static inline int is_ilp32_compat_task(void) +{ + return 0; +} + +static inline int is_ilp32_compat_thread(struct thread_info *thread) +{ + return 0; +} + +#endif /* CONFIG_ARM64_ILP32 */ + #ifdef CONFIG_COMPAT
static inline int is_compat_task(void) { - return is_a32_compat_task(); + return is_a32_compat_task() || is_ilp32_compat_task(); }
#endif /* CONFIG_COMPAT */
static inline int is_compat_thread(struct thread_info *thread) { - return is_a32_compat_thread(thread); + return is_a32_compat_thread(thread) || is_ilp32_compat_thread(thread); }
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h index 1dcca52f249f..889a7a43792d 100644 --- a/arch/arm64/include/asm/thread_info.h +++ b/arch/arm64/include/asm/thread_info.h @@ -80,6 +80,7 @@ void arch_setup_new_exec(void); #define TIF_TAGGED_ADDR 26 /* Allow tagged user addresses */ #define TIF_SME 27 /* SME in use */ #define TIF_SME_VL_INHERIT 28 /* Inherit SME vl_onexec across exec */ +#define TIF_32BIT_AARCH64 29 /* 32 bit process on AArch64(ILP32) */
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING) #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED) @@ -96,6 +97,7 @@ void arch_setup_new_exec(void); #define _TIF_SVE (1 << TIF_SVE) #define _TIF_MTE_ASYNC_FAULT (1 << TIF_MTE_ASYNC_FAULT) #define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL) +#define _TIF_32BIT_AARCH64 (1 << TIF_32BIT_AARCH64)
#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \ _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
From: Yury Norov ynorov@marvell.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
syscall_get_arch() currently does not distinguish between arm64 and arm64/ilp32. Fix this by adding AUDIT_ARCH_AARCH64ILP32.
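One practical consumer of the new constant is seccomp/audit filtering in userspace. A minimal classic-BPF sketch follows (illustrative only; it assumes uapi headers that already carry AUDIT_ARCH_AARCH64ILP32, and installing the filter with seccomp(2) is omitted):

    #include <stddef.h>
    #include <linux/audit.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    /* Illustrative only: allow syscalls solely from AUDIT_ARCH_AARCH64ILP32 tasks. */
    static struct sock_filter ilp32_only[] = {
            /* load seccomp_data.arch */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, arch)),
            /* if arch matches, fall through to ALLOW; otherwise skip to KILL */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_AARCH64ILP32, 0, 1),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
    };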
Reported-by: Andy Lutomirski luto@amacapital.net Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/syscall.h | 3 +++ include/uapi/linux/audit.h | 1 + 2 files changed, 4 insertions(+)
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h index 55ca02e00b21..8d6a211a0c07 100644 --- a/arch/arm64/include/asm/syscall.h +++ b/arch/arm64/include/asm/syscall.h @@ -82,6 +82,9 @@ static inline int syscall_get_arch(struct task_struct *task) if (is_a32_compat_thread(task_thread_info(task))) return AUDIT_ARCH_ARM;
+ else if (is_ilp32_compat_task()) + return AUDIT_ARCH_AARCH64ILP32; + return AUDIT_ARCH_AARCH64; }
diff --git a/include/uapi/linux/audit.h b/include/uapi/linux/audit.h index d676ed2b246e..bafc9a2ac2db 100644 --- a/include/uapi/linux/audit.h +++ b/include/uapi/linux/audit.h @@ -386,6 +386,7 @@ enum { #define __AUDIT_ARCH_LE 0x40000000
#define AUDIT_ARCH_AARCH64 (EM_AARCH64|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE) +#define AUDIT_ARCH_AARCH64ILP32 (EM_AARCH64|__AUDIT_ARCH_LE) #define AUDIT_ARCH_ALPHA (EM_ALPHA|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE) #define AUDIT_ARCH_ARCOMPACT (EM_ARCOMPACT|__AUDIT_ARCH_LE) #define AUDIT_ARCH_ARCOMPACTBE (EM_ARCOMPACT)
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
As we now support more than one compat format, it is preferable not to use fs/compat_binfmt_elf.c directly. A custom binfmt_elf32.c lets us move the aarch32-specific definitions there and makes the code more maintainable and readable.
Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com
Conflicts: arch/arm64/kernel/Makefile
Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/elf.h | 22 ++++++---------------- arch/arm64/include/asm/hwcap.h | 2 -- arch/arm64/kernel/Makefile | 2 +- arch/arm64/kernel/binfmt_elf32.c | 27 +++++++++++++++++++++++++++ arch/arm64/kernel/process.c | 2 +- 5 files changed, 35 insertions(+), 20 deletions(-) create mode 100644 arch/arm64/kernel/binfmt_elf32.c
diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h index 2f7b8f1f8a34..de3ad90894ec 100644 --- a/arch/arm64/include/asm/elf.h +++ b/arch/arm64/include/asm/elf.h @@ -206,7 +206,9 @@ extern int arch_setup_additional_pages(struct linux_binprm *bprm,
/* PIE load location for compat arm. Must match ARM ELF_ET_DYN_BASE. */ #define COMPAT_ELF_ET_DYN_BASE 0x000400000UL +#endif /*CONFIG_COMPAT */
+#ifdef CONFIG_AARCH32_EL0 /* AArch32 registers. */ #define COMPAT_ELF_NGREG 18 typedef unsigned int compat_elf_greg_t; @@ -215,18 +217,8 @@ typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG]; /* AArch32 EABI. */ #define EF_ARM_EABI_MASK 0xff000000 int compat_elf_check_arch(const struct elf32_hdr *); -#define compat_elf_check_arch compat_elf_check_arch #define compat_start_thread compat_start_thread -/* - * Unlike the native SET_PERSONALITY macro, the compat version maintains - * READ_IMPLIES_EXEC across an execve() since this is the behaviour on - * arch/arm/. - */ -#define COMPAT_SET_PERSONALITY(ex) \ -({ \ - clear_thread_flag(TIF_32BIT_AARCH64); \ - set_thread_flag(TIF_32BIT); \ - }) + #ifdef CONFIG_COMPAT_VDSO #define COMPAT_ARCH_DLINFO \ do { \ @@ -242,12 +234,10 @@ do { \ #else #define COMPAT_ARCH_DLINFO #endif -extern int aarch32_setup_additional_pages(struct linux_binprm *bprm, - int uses_interp); -#define compat_arch_setup_additional_pages \ - aarch32_setup_additional_pages
-#endif /* CONFIG_COMPAT */ +extern int aarch32_setup_additional_pages(struct linux_binprm *bprm, + int uses_interp); +#endif /* CONFIG_AARCH32_EL0 */
struct arch_elf_state { int flags; diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h index 93ee19669b73..995179ab2b7d 100644 --- a/arch/arm64/include/asm/hwcap.h +++ b/arch/arm64/include/asm/hwcap.h @@ -148,8 +148,6 @@ #define ELF_HWCAP2 cpu_get_elf_hwcap2()
#ifdef CONFIG_AARCH32_EL0 -#define COMPAT_ELF_HWCAP (compat_elf_hwcap) -#define COMPAT_ELF_HWCAP2 (compat_elf_hwcap2) extern unsigned int compat_elf_hwcap, compat_elf_hwcap2; #endif
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 8125fe7067c9..bd91cc07f720 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -36,7 +36,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \ syscall.o proton-pack.o idreg-override.o idle.o \ patching.o
-obj-$(CONFIG_AARCH32_EL0) += sys32.o signal32.o \ +obj-$(CONFIG_AARCH32_EL0) += binfmt_elf32.o sys32.o signal32.o \ sys_compat.o obj-$(CONFIG_AARCH32_EL0) += sigreturn32.o obj-$(CONFIG_COMPAT_ALIGNMENT_FIXUPS) += compat_alignment.o diff --git a/arch/arm64/kernel/binfmt_elf32.c b/arch/arm64/kernel/binfmt_elf32.c new file mode 100644 index 000000000000..e839a8753144 --- /dev/null +++ b/arch/arm64/kernel/binfmt_elf32.c @@ -0,0 +1,27 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Support for AArch32 Linux ELF binaries. + */ + +/* AArch32 EABI. */ +#define compat_start_thread compat_start_thread +/* + * Unlike the native SET_PERSONALITY macro, the compat version inherits + * READ_IMPLIES_EXEC across a fork() since this is the behaviour on + * arch/arm/. + */ +#define COMPAT_SET_PERSONALITY(ex) \ +({ \ + clear_thread_flag(TIF_32BIT_AARCH64); \ + set_thread_flag(TIF_32BIT); \ +}) + +#define COMPAT_ARCH_DLINFO +#define COMPAT_ELF_HWCAP (compat_elf_hwcap) +#define COMPAT_ELF_HWCAP2 (compat_elf_hwcap2) + +#define compat_arch_setup_additional_pages \ + aarch32_setup_additional_pages + +#include "../../../fs/compat_binfmt_elf.c" diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index cf7a1f7807d3..82f2754ed4b5 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -597,7 +597,7 @@ unsigned long arch_align_stack(unsigned long sp) return sp & ~0xf; }
-#ifdef CONFIG_COMPAT +#ifdef CONFIG_AARCH32_EL0 int compat_elf_check_arch(const struct elf32_hdr *hdr) { if (!system_supports_32bit_el0())
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
The ILP32 patch series introduces a new type of binary which is also compat. Renaming the existing aarch32 compat_elf_hwcap variables helps to avoid confusion.
Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com
Conflicts: arch/arm64/kernel/cpufeature.c
Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/arch_timer.h | 2 +- arch/arm64/include/asm/hwcap.h | 2 +- arch/arm64/kernel/binfmt_elf32.c | 4 ++-- arch/arm64/kernel/cpufeature.c | 16 ++++++++-------- arch/arm64/kernel/cpuinfo.c | 10 +++++----- 5 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h index fdf5ee2ffa3a..379a98176681 100644 --- a/arch/arm64/include/asm/arch_timer.h +++ b/arch/arm64/include/asm/arch_timer.h @@ -218,7 +218,7 @@ static inline void arch_timer_set_evtstrm_feature(void) { cpu_set_named_feature(EVTSTRM); #ifdef CONFIG_AARCH32_EL0 - compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM; + a32_elf_hwcap |= COMPAT_HWCAP_EVTSTRM; #endif }
diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h index 995179ab2b7d..2e914609db93 100644 --- a/arch/arm64/include/asm/hwcap.h +++ b/arch/arm64/include/asm/hwcap.h @@ -148,7 +148,7 @@ #define ELF_HWCAP2 cpu_get_elf_hwcap2()
#ifdef CONFIG_AARCH32_EL0 -extern unsigned int compat_elf_hwcap, compat_elf_hwcap2; +extern unsigned int a32_elf_hwcap, a32_elf_hwcap2; #endif
enum { diff --git a/arch/arm64/kernel/binfmt_elf32.c b/arch/arm64/kernel/binfmt_elf32.c index e839a8753144..e4335c7ba8be 100644 --- a/arch/arm64/kernel/binfmt_elf32.c +++ b/arch/arm64/kernel/binfmt_elf32.c @@ -18,8 +18,8 @@ })
#define COMPAT_ARCH_DLINFO -#define COMPAT_ELF_HWCAP (compat_elf_hwcap) -#define COMPAT_ELF_HWCAP2 (compat_elf_hwcap2) +#define COMPAT_ELF_HWCAP (a32_elf_hwcap) +#define COMPAT_ELF_HWCAP2 (a32_elf_hwcap2)
#define compat_arch_setup_additional_pages \ aarch32_setup_additional_pages diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 374ead145a8a..c806a117dbcc 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -96,13 +96,13 @@ static DECLARE_BITMAP(elf_hwcap, MAX_CPU_FEATURES) __read_mostly;
#ifdef CONFIG_AARCH32_EL0 -#define COMPAT_ELF_HWCAP_DEFAULT \ +#define AARCH32_EL0_ELF_HWCAP_DEFAULT \ (COMPAT_HWCAP_HALF|COMPAT_HWCAP_THUMB|\ COMPAT_HWCAP_FAST_MULT|COMPAT_HWCAP_EDSP|\ COMPAT_HWCAP_TLS|COMPAT_HWCAP_IDIV|\ COMPAT_HWCAP_LPAE) -unsigned int compat_elf_hwcap __read_mostly = COMPAT_ELF_HWCAP_DEFAULT; -unsigned int compat_elf_hwcap2 __read_mostly; +unsigned int a32_elf_hwcap __read_mostly = AARCH32_EL0_ELF_HWCAP_DEFAULT; +unsigned int a32_elf_hwcap2 __read_mostly; #endif
DECLARE_BITMAP(system_cpucaps, ARM64_NCAPS); @@ -2194,7 +2194,7 @@ static void elf_hwcap_fixup(void) { #ifdef CONFIG_ARM64_ERRATUM_1742098 if (cpus_have_const_cap(ARM64_WORKAROUND_1742098)) - compat_elf_hwcap2 &= ~COMPAT_HWCAP2_AES; + a32_elf_hwcap2 &= ~COMPAT_HWCAP2_AES; #endif /* ARM64_ERRATUM_1742098 */ }
@@ -2922,10 +2922,10 @@ static void cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap) break; #ifdef CONFIG_AARCH32_EL0 case CAP_COMPAT_HWCAP: - compat_elf_hwcap |= (u32)cap->hwcap; + a32_elf_hwcap |= (u32)cap->hwcap; break; case CAP_COMPAT_HWCAP2: - compat_elf_hwcap2 |= (u32)cap->hwcap; + a32_elf_hwcap2 |= (u32)cap->hwcap; break; #endif default: @@ -2945,10 +2945,10 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap) break; #ifdef CONFIG_AARCH32_EL0 case CAP_COMPAT_HWCAP: - rc = (compat_elf_hwcap & (u32)cap->hwcap) != 0; + rc = (a32_elf_hwcap & (u32)cap->hwcap) != 0; break; case CAP_COMPAT_HWCAP2: - rc = (compat_elf_hwcap2 & (u32)cap->hwcap) != 0; + rc = (a32_elf_hwcap2 & (u32)cap->hwcap) != 0; break; #endif default: diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c index 60ff939c7785..49f2627e9d9a 100644 --- a/arch/arm64/kernel/cpuinfo.c +++ b/arch/arm64/kernel/cpuinfo.c @@ -177,7 +177,7 @@ static const char *const compat_hwcap2_str[] = { static int c_show(struct seq_file *m, void *v) { int i, j; - bool compat = personality(current->personality) == PER_LINUX32; + bool aarch32 = personality(current->personality) == PER_LINUX32;
for_each_online_cpu(i) { struct cpuinfo_arm64 *cpuinfo = &per_cpu(cpu_data, i); @@ -189,7 +189,7 @@ static int c_show(struct seq_file *m, void *v) * "processor". Give glibc what it expects. */ seq_printf(m, "processor\t: %d\n", i); - if (compat) + if (aarch32) seq_printf(m, "model name\t: ARMv8 Processor rev %d (%s)\n", MIDR_REVISION(midr), COMPAT_ELF_PLATFORM);
@@ -204,10 +204,10 @@ static int c_show(struct seq_file *m, void *v) * software which does already (at least for 32-bit). */ seq_puts(m, "Features\t:"); - if (compat) { + if (aarch32) { #ifdef CONFIG_AARCH32_EL0 for (j = 0; j < ARRAY_SIZE(compat_hwcap_str); j++) { - if (compat_elf_hwcap & (1 << j)) { + if (a32_elf_hwcap & (1 << j)) { /* * Warn once if any feature should not * have been present on arm64 platform. @@ -220,7 +220,7 @@ static int c_show(struct seq_file *m, void *v) }
for (j = 0; j < ARRAY_SIZE(compat_hwcap2_str); j++) - if (compat_elf_hwcap2 & (1 << j)) + if (a32_elf_hwcap2 & (1 << j)) seq_printf(m, " %s", compat_hwcap2_str[j]); #endif /* CONFIG_AARCH32_EL0 */ } else {
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
Like binfmt_elf32.c for AARCH32, binfmt_ilp32.c is needed to handle ILP32 binaries.
Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Bamvor Jian Zhang bamv2005@gmail.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com
Conflicts: arch/arm64/kernel/Makefile
Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/kernel/Makefile | 1 + arch/arm64/kernel/binfmt_ilp32.c | 89 ++++++++++++++++++++++++++++++++ 2 files changed, 90 insertions(+) create mode 100644 arch/arm64/kernel/binfmt_ilp32.c
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index bd91cc07f720..2ba7be181efb 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -41,6 +41,7 @@ obj-$(CONFIG_AARCH32_EL0) += binfmt_elf32.o sys32.o signal32.o \ obj-$(CONFIG_AARCH32_EL0) += sigreturn32.o obj-$(CONFIG_COMPAT_ALIGNMENT_FIXUPS) += compat_alignment.o obj-$(CONFIG_KUSER_HELPERS) += kuser32.o +obj-$(CONFIG_ARM64_ILP32) += binfmt_ilp32.o obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o obj-$(CONFIG_MODULES) += module.o module-plts.o obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o diff --git a/arch/arm64/kernel/binfmt_ilp32.c b/arch/arm64/kernel/binfmt_ilp32.c new file mode 100644 index 000000000000..17784c7f1163 --- /dev/null +++ b/arch/arm64/kernel/binfmt_ilp32.c @@ -0,0 +1,89 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Support for ILP32 Linux/aarch64 ELF binaries. + */ +#undef CONFIG_AARCH32_EL0 +#define compat_elf_gregset_t elf_gregset_t + +#include <linux/elfcore-compat.h> +#include <linux/time.h> +#include <asm/cpufeature.h> + +#undef ELF_CLASS +#define ELF_CLASS ELFCLASS32 + +#undef elfhdr +#undef elf_phdr +#undef elf_shdr +#undef elf_note +#undef elf_addr_t +#define elfhdr elf32_hdr +#define elf_phdr elf32_phdr +#define elf_shdr elf32_shdr +#define elf_note elf32_note +#define elf_addr_t Elf32_Addr + +/* + * Some data types as stored in coredump. + */ +#define user_long_t compat_long_t +#define user_siginfo_t compat_siginfo_t +#define copy_siginfo_to_external copy_siginfo_to_external32 + +/* + * The machine-dependent core note format types are defined in elfcore-compat.h, + * which requires asm/elf.h to define compat_elf_gregset_t et al. + */ +#define elf_prstatus compat_elf_prstatus +#define elf_prpsinfo compat_elf_prpsinfo +#define elf_prstatus_common compat_elf_prstatus_common + +/* AARCH64 ILP32 EABI. */ +#undef elf_check_arch +#define elf_check_arch(x) (((x)->e_machine == EM_AARCH64) \ + && (x)->e_ident[EI_CLASS] == ELFCLASS32) + +#undef SET_PERSONALITY +#define SET_PERSONALITY(ex) \ +do { \ + set_bit(TIF_32BIT, ¤t->mm->context.flags); \ + set_thread_flag(TIF_32BIT_AARCH64); \ + clear_thread_flag(TIF_32BIT); \ +} while (0) + +#undef ARCH_DLINFO +#define ARCH_DLINFO \ +do { \ + NEW_AUX_ENT(AT_SYSINFO_EHDR, \ + (elf_addr_t)(long)current->mm->context.vdso); \ +} while (0) + +#undef ELF_PLATFORM +#ifdef __AARCH64EB__ +#define ELF_PLATFORM ("aarch64_be:ilp32") +#else +#define ELF_PLATFORM ("aarch64:ilp32") +#endif + +#undef ELF_ET_DYN_BASE +#define ELF_ET_DYN_BASE COMPAT_ELF_ET_DYN_BASE + +#undef ELF_HWCAP +#undef ELF_HWCAP2 +#define ELF_HWCAP cpu_get_elf_hwcap() +#define ELF_HWCAP2 cpu_get_elf_hwcap2() + +/* + * Rename a few of the symbols that binfmt_elf.c will define. + * These are all local so the names don't really matter, but it + * might make some debugging less confusing not to duplicate them. + */ +#define elf_format compat_elf_format +#define init_elf_binfmt init_compat_elf_binfmt +#define exit_elf_binfmt exit_compat_elf_binfmt + +#undef ns_to_kernel_old_timeval +#define ns_to_kernel_old_timeval ns_to_old_timeval32 + +#include "../../../fs/binfmt_elf.c"
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
According to the userspace/kernel ABI, the userspace off_t is passed in a register pair, just like on aarch32. In this patch the corresponding aarch32 handlers are shared with the ilp32 code.
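For clarity, a hedged userspace-side sketch of the pairing convention those handlers undo with arg_u64(); the wrapper name is illustrative and little-endian register order is assumed:

    #include <unistd.h>
    #include <sys/syscall.h>

    /* Illustrative only: an ILP32 caller splits the 64-bit offset into a lo/hi
     * pair; the shared aarch32 pread64 handler rebuilds it as (hi << 32) | lo. */
    static long example_pread64(int fd, void *buf, unsigned long count,
                                unsigned long long off)
    {
            return syscall(__NR_pread64, fd, buf, count, 0UL,
                           (unsigned long)(unsigned int)off,           /* pos_lo */
                           (unsigned long)(unsigned int)(off >> 32));  /* pos_hi */
    }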
Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com
Conflicts: arch/arm64/kernel/Makefile
Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/kernel/Makefile | 1 + arch/arm64/kernel/sys32.c | 102 ----------------------------- arch/arm64/kernel/sys32_common.c | 106 +++++++++++++++++++++++++++++++ 3 files changed, 107 insertions(+), 102 deletions(-) create mode 100644 arch/arm64/kernel/sys32_common.c
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 2ba7be181efb..09c28a29309b 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -42,6 +42,7 @@ obj-$(CONFIG_AARCH32_EL0) += sigreturn32.o obj-$(CONFIG_COMPAT_ALIGNMENT_FIXUPS) += compat_alignment.o obj-$(CONFIG_KUSER_HELPERS) += kuser32.o obj-$(CONFIG_ARM64_ILP32) += binfmt_ilp32.o +obj-$(CONFIG_COMPAT) += sys32_common.o obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o obj-$(CONFIG_MODULES) += module.o module-plts.o obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o diff --git a/arch/arm64/kernel/sys32.c b/arch/arm64/kernel/sys32.c index 3c3c13a9dde3..7577acaae911 100644 --- a/arch/arm64/kernel/sys32.c +++ b/arch/arm64/kernel/sys32.c @@ -20,108 +20,6 @@ asmlinkage long compat_sys_sigreturn(void); asmlinkage long compat_sys_rt_sigreturn(void);
-COMPAT_SYSCALL_DEFINE3(aarch32_statfs64, const char __user *, pathname, - compat_size_t, sz, struct compat_statfs64 __user *, buf) -{ - /* - * 32-bit ARM applies an OABI compatibility fixup to statfs64 and - * fstatfs64 regardless of whether OABI is in use, and therefore - * arbitrary binaries may rely upon it, so we must do the same. - * For more details, see commit: - * - * 713c481519f19df9 ("[ARM] 3108/2: old ABI compat: statfs64 and - * fstatfs64") - */ - if (sz == 88) - sz = 84; - - return kcompat_sys_statfs64(pathname, sz, buf); -} - -COMPAT_SYSCALL_DEFINE3(aarch32_fstatfs64, unsigned int, fd, compat_size_t, sz, - struct compat_statfs64 __user *, buf) -{ - /* see aarch32_statfs64 */ - if (sz == 88) - sz = 84; - - return kcompat_sys_fstatfs64(fd, sz, buf); -} - -/* - * Note: off_4k is always in units of 4K. If we can't do the - * requested offset because it is not page-aligned, we return -EINVAL. - */ -COMPAT_SYSCALL_DEFINE6(aarch32_mmap2, unsigned long, addr, unsigned long, len, - unsigned long, prot, unsigned long, flags, - unsigned long, fd, unsigned long, off_4k) -{ - if (off_4k & (~PAGE_MASK >> 12)) - return -EINVAL; - - off_4k >>= (PAGE_SHIFT - 12); - - return ksys_mmap_pgoff(addr, len, prot, flags, fd, off_4k); -} - -#ifdef CONFIG_CPU_BIG_ENDIAN -#define arg_u32p(name) u32, name##_hi, u32, name##_lo -#else -#define arg_u32p(name) u32, name##_lo, u32, name##_hi -#endif - -#define arg_u64(name) (((u64)name##_hi << 32) | name##_lo) - -COMPAT_SYSCALL_DEFINE6(aarch32_pread64, unsigned int, fd, char __user *, buf, - size_t, count, u32, __pad, arg_u32p(pos)) -{ - return ksys_pread64(fd, buf, count, arg_u64(pos)); -} - -COMPAT_SYSCALL_DEFINE6(aarch32_pwrite64, unsigned int, fd, - const char __user *, buf, size_t, count, u32, __pad, - arg_u32p(pos)) -{ - return ksys_pwrite64(fd, buf, count, arg_u64(pos)); -} - -COMPAT_SYSCALL_DEFINE4(aarch32_truncate64, const char __user *, pathname, - u32, __pad, arg_u32p(length)) -{ - return ksys_truncate(pathname, arg_u64(length)); -} - -COMPAT_SYSCALL_DEFINE4(aarch32_ftruncate64, unsigned int, fd, u32, __pad, - arg_u32p(length)) -{ - return ksys_ftruncate(fd, arg_u64(length)); -} - -COMPAT_SYSCALL_DEFINE5(aarch32_readahead, int, fd, u32, __pad, - arg_u32p(offset), size_t, count) -{ - return ksys_readahead(fd, arg_u64(offset), count); -} - -COMPAT_SYSCALL_DEFINE6(aarch32_fadvise64_64, int, fd, int, advice, - arg_u32p(offset), arg_u32p(len)) -{ - return ksys_fadvise64_64(fd, arg_u64(offset), arg_u64(len), advice); -} - -COMPAT_SYSCALL_DEFINE6(aarch32_sync_file_range2, int, fd, unsigned int, flags, - arg_u32p(offset), arg_u32p(nbytes)) -{ - return ksys_sync_file_range(fd, arg_u64(offset), arg_u64(nbytes), - flags); -} - -COMPAT_SYSCALL_DEFINE6(aarch32_fallocate, int, fd, int, mode, - arg_u32p(offset), arg_u32p(len)) -{ - return ksys_fallocate(fd, mode, arg_u64(offset), arg_u64(len)); -} - #undef __SYSCALL #define __SYSCALL(nr, sym) asmlinkage long __arm64_##sym(const struct pt_regs *); #include <asm/unistd32.h> diff --git a/arch/arm64/kernel/sys32_common.c b/arch/arm64/kernel/sys32_common.c new file mode 100644 index 000000000000..bbf81d8991ba --- /dev/null +++ b/arch/arm64/kernel/sys32_common.c @@ -0,0 +1,106 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include <linux/compat.h> +#include <linux/syscalls.h> + +COMPAT_SYSCALL_DEFINE3(aarch32_statfs64, const char __user *, pathname, + compat_size_t, sz, struct compat_statfs64 __user *, buf) +{ + /* + * 32-bit ARM applies an OABI compatibility fixup to statfs64 and + * fstatfs64 regardless of whether OABI 
is in use, and therefore + * arbitrary binaries may rely upon it, so we must do the same. + * For more details, see commit: + * + * 713c481519f19df9 ("[ARM] 3108/2: old ABI compat: statfs64 and + * fstatfs64") + */ + if (sz == 88) + sz = 84; + + return kcompat_sys_statfs64(pathname, sz, buf); +} + +COMPAT_SYSCALL_DEFINE3(aarch32_fstatfs64, unsigned int, fd, compat_size_t, sz, + struct compat_statfs64 __user *, buf) +{ + /* see aarch32_statfs64 */ + if (sz == 88) + sz = 84; + + return kcompat_sys_fstatfs64(fd, sz, buf); +} + +/* + * Note: off_4k is always in units of 4K. If we can't do the + * requested offset because it is not page-aligned, we return -EINVAL. + */ +COMPAT_SYSCALL_DEFINE6(aarch32_mmap2, unsigned long, addr, unsigned long, len, + unsigned long, prot, unsigned long, flags, + unsigned long, fd, unsigned long, off_4k) +{ + if (off_4k & (~PAGE_MASK >> 12)) + return -EINVAL; + + off_4k >>= (PAGE_SHIFT - 12); + + return ksys_mmap_pgoff(addr, len, prot, flags, fd, off_4k); +} + +#ifdef CONFIG_CPU_BIG_ENDIAN +#define arg_u32p(name) u32, name##_hi, u32, name##_lo +#else +#define arg_u32p(name) u32, name##_lo, u32, name##_hi +#endif + +#define arg_u64(name) (((u64)name##_hi << 32) | name##_lo) + +COMPAT_SYSCALL_DEFINE6(aarch32_pread64, unsigned int, fd, char __user *, buf, + size_t, count, u32, __pad, arg_u32p(pos)) +{ + return ksys_pread64(fd, buf, count, arg_u64(pos)); +} + +COMPAT_SYSCALL_DEFINE6(aarch32_pwrite64, unsigned int, fd, + const char __user *, buf, size_t, count, u32, __pad, + arg_u32p(pos)) +{ + return ksys_pwrite64(fd, buf, count, arg_u64(pos)); +} + +COMPAT_SYSCALL_DEFINE4(aarch32_truncate64, const char __user *, pathname, + u32, __pad, arg_u32p(length)) +{ + return ksys_truncate(pathname, arg_u64(length)); +} + +COMPAT_SYSCALL_DEFINE4(aarch32_ftruncate64, unsigned int, fd, u32, __pad, + arg_u32p(length)) +{ + return ksys_ftruncate(fd, arg_u64(length)); +} + +COMPAT_SYSCALL_DEFINE5(aarch32_readahead, int, fd, u32, __pad, + arg_u32p(offset), size_t, count) +{ + return ksys_readahead(fd, arg_u64(offset), count); +} + +COMPAT_SYSCALL_DEFINE6(aarch32_fadvise64_64, int, fd, int, advice, + arg_u32p(offset), arg_u32p(len)) +{ + return ksys_fadvise64_64(fd, arg_u64(offset), arg_u64(len), advice); +} + +COMPAT_SYSCALL_DEFINE6(aarch32_sync_file_range2, int, fd, unsigned int, flags, + arg_u32p(offset), arg_u32p(nbytes)) +{ + return ksys_sync_file_range(fd, arg_u64(offset), arg_u64(nbytes), + flags); +} + +COMPAT_SYSCALL_DEFINE6(aarch32_fallocate, int, fd, int, mode, + arg_u32p(offset), arg_u32p(len)) +{ + return ksys_fallocate(fd, mode, arg_u64(offset), arg_u64(len)); +}
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
Depending on the personality of the task, syscalls have to be dispatched to either the aarch64, aarch32 or aarch64/ilp32 syscall handlers. This series adds support for ILP32 mode, so introduce the corresponding syscall table.
Some system calls are wired to aarch32 syscall handlers, as listed in arch/arm64/kernel/sys_ilp32.c.
For aarch64/ilp32, the top halves of syscall arguments are meaningless although they are not zeroed by hardware. Zero them in the delouse_pt_regs() routine so that garbage passed by userspace is not handed on to the syscall handlers.
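A minimal sketch of the scrubbing described above (illustrative only; the real delouse_pt_regs() in the full patch may differ in naming and in exactly which registers it touches):

    /* Illustrative only: clear the undefined upper 32 bits of ILP32 syscall args. */
    static inline void example_delouse_pt_regs(struct pt_regs *regs)
    {
            int i;

            for (i = 0; i < 7; i++)         /* x0..x6 carry the syscall arguments */
                    regs->regs[i] &= 0xffffffffULL;
    }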
Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com
Conflicts: arch/arm64/kernel/Makefile arch/arm64/kernel/syscall.c
Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/syscall.h | 4 ++ arch/arm64/include/asm/unistd.h | 7 ++- arch/arm64/include/uapi/asm/unistd.h | 15 +++++- arch/arm64/kernel/Makefile | 2 +- arch/arm64/kernel/sys_ilp32.c | 76 ++++++++++++++++++++++++++++ arch/arm64/kernel/syscall.c | 25 ++++++++- 6 files changed, 125 insertions(+), 4 deletions(-) create mode 100644 arch/arm64/kernel/sys_ilp32.c
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h index 8d6a211a0c07..9a3d99c8aa80 100644 --- a/arch/arm64/include/asm/syscall.h +++ b/arch/arm64/include/asm/syscall.h @@ -17,6 +17,10 @@ extern const syscall_fn_t sys_call_table[]; extern const syscall_fn_t a32_sys_call_table[]; #endif
+#ifdef CONFIG_ARM64_ILP32 +extern const syscall_fn_t ilp32_sys_call_table[]; +#endif + static inline int syscall_get_nr(struct task_struct *task, struct pt_regs *regs) { diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h index 8de22f4f3a21..d98414a0d01e 100644 --- a/arch/arm64/include/asm/unistd.h +++ b/arch/arm64/include/asm/unistd.h @@ -2,9 +2,14 @@ /* * Copyright (C) 2012 ARM Ltd. */ -#ifdef CONFIG_AARCH32_EL0 + +#ifdef CONFIG_COMPAT #define __ARCH_WANT_COMPAT_STAT #define __ARCH_WANT_COMPAT_STAT64 +#define __ARCH_WANT_SYS_LLSEEK +#endif + +#ifdef CONFIG_AARCH32_EL0 #define __ARCH_WANT_SYS_GETHOSTNAME #define __ARCH_WANT_SYS_PAUSE #define __ARCH_WANT_SYS_GETPGRP diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h index ce2ee8f1e361..079139c04b14 100644 --- a/arch/arm64/include/uapi/asm/unistd.h +++ b/arch/arm64/include/uapi/asm/unistd.h @@ -15,9 +15,22 @@ * along with this program. If not, see http://www.gnu.org/licenses/. */
+/* + * AARCH32 interface for ILP32 syscalls. + */ +#if defined(__ILP32__) || defined(__SYSCALL_COMPAT) +#define __ARCH_WANT_SYNC_FILE_RANGE2 +#endif + +/* + * AARCH64/ILP32 is introduced after the following syscalls were deprecated. + */ +#if !(defined(__ILP32__) || defined(__SYSCALL_COMPAT)) #define __ARCH_WANT_RENAMEAT -#define __ARCH_WANT_NEW_STAT #define __ARCH_WANT_SET_GET_RLIMIT +#endif + +#define __ARCH_WANT_NEW_STAT #define __ARCH_WANT_TIME32_SYSCALLS #define __ARCH_WANT_SYS_CLONE3 #define __ARCH_WANT_MEMFD_SECRET diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 09c28a29309b..0dc8de7d78e9 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -41,7 +41,7 @@ obj-$(CONFIG_AARCH32_EL0) += binfmt_elf32.o sys32.o signal32.o \ obj-$(CONFIG_AARCH32_EL0) += sigreturn32.o obj-$(CONFIG_COMPAT_ALIGNMENT_FIXUPS) += compat_alignment.o obj-$(CONFIG_KUSER_HELPERS) += kuser32.o -obj-$(CONFIG_ARM64_ILP32) += binfmt_ilp32.o +obj-$(CONFIG_ARM64_ILP32) += binfmt_ilp32.o sys_ilp32.o obj-$(CONFIG_COMPAT) += sys32_common.o obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o obj-$(CONFIG_MODULES) += module.o module-plts.o diff --git a/arch/arm64/kernel/sys_ilp32.c b/arch/arm64/kernel/sys_ilp32.c new file mode 100644 index 000000000000..05eca957a18d --- /dev/null +++ b/arch/arm64/kernel/sys_ilp32.c @@ -0,0 +1,76 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * AArch64- ILP32 specific system calls implementation + * Copyright (C) 2018 Marvell. + */ + +#define __SYSCALL_COMPAT + +#include <linux/compat.h> +#include <linux/compiler.h> +#include <linux/syscalls.h> + +#include <asm/syscall.h> + +/* + * AARCH32 requires 4-page alignment for shared memory, + * but AARCH64 - only 1 page. This is the only difference + * between compat and native sys_shmat(). So ILP32 just pick + * AARCH64 version. + */ +#define __arm64_compat_sys_shmat __arm64_sys_shmat + +/* + * ILP32 needs special handling for some ptrace requests. + */ +#define __arm64_sys_ptrace __arm64_compat_sys_ptrace + +/* + * Using AARCH32 interface for syscalls that take 64-bit + * parameters in registers. + */ +#define __arm64_compat_sys_fadvise64_64 __arm64_compat_sys_aarch32_fadvise64_64 +#define __arm64_compat_sys_fallocate __arm64_compat_sys_aarch32_fallocate +#define __arm64_compat_sys_ftruncate64 __arm64_compat_sys_aarch32_ftruncate64 +#define __arm64_compat_sys_pread64 __arm64_compat_sys_aarch32_pread64 +#define __arm64_compat_sys_pwrite64 __arm64_compat_sys_aarch32_pwrite64 +#define __arm64_compat_sys_readahead __arm64_compat_sys_aarch32_readahead +#define __arm64_compat_sys_sync_file_range2 __arm64_compat_sys_aarch32_sync_file_range2 +#define __arm64_compat_sys_truncate64 __arm64_compat_sys_aarch32_truncate64 +#define __arm64_sys_mmap2 __arm64_compat_sys_aarch32_mmap2 + +/* + * Using AARCH32 interface for syscalls that take the size of + * struct statfs as an argument, as it's calculated differently + * in kernel and user spaces. + */ +#define __arm64_compat_sys_fstatfs64 __arm64_compat_sys_aarch32_fstatfs64 +#define __arm64_compat_sys_statfs64 __arm64_compat_sys_aarch32_statfs64 + +/* + * Using old interface for IPC syscalls that should handle IPC_64 flag. + */ +#define __arm64_compat_sys_semctl __arm64_compat_sys_old_semctl +#define __arm64_compat_sys_msgctl __arm64_compat_sys_old_msgctl +#define __arm64_compat_sys_shmctl __arm64_compat_sys_old_shmctl + +/* + * Wrappers to pass the pt_regs argument. 
+ */ +#define sys_personality sys_arm64_personality + +asmlinkage long sys_ni_syscall(const struct pt_regs *); +#define __arm64_sys_ni_syscall sys_ni_syscall + +#undef __SYSCALL +#define __SYSCALL(nr, sym) asmlinkage long __arm64_##sym(const struct pt_regs *); +#include <asm/unistd.h> + +#undef __SYSCALL +#define __SYSCALL(nr, sym) [nr] = (syscall_fn_t)__arm64_##sym, + +const syscall_fn_t ilp32_sys_call_table[__NR_syscalls] = { + [0 ... __NR_syscalls - 1] = __arm64_sys_ni_syscall, +#include <asm/unistd.h> +}; diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c index aab954921a87..d434c589cc09 100644 --- a/arch/arm64/kernel/syscall.c +++ b/arch/arm64/kernel/syscall.c @@ -150,9 +150,32 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr, syscall_trace_exit(regs); }
+#ifdef CONFIG_ARM64_ILP32 +static inline void delouse_pt_regs(struct pt_regs *regs) +{ + regs->regs[0] &= UINT_MAX; + regs->regs[1] &= UINT_MAX; + regs->regs[2] &= UINT_MAX; + regs->regs[3] &= UINT_MAX; + regs->regs[4] &= UINT_MAX; + regs->regs[5] &= UINT_MAX; + regs->regs[6] &= UINT_MAX; + regs->regs[7] &= UINT_MAX; +} +#endif + void do_el0_svc(struct pt_regs *regs) { - el0_svc_common(regs, regs->regs[8], __NR_syscalls, sys_call_table); + const syscall_fn_t *t = sys_call_table; + +#ifdef CONFIG_ARM64_ILP32 + if (is_ilp32_compat_task()) { + t = ilp32_sys_call_table; + delouse_pt_regs(regs); + } +#endif + + el0_svc_common(regs, regs->regs[8], __NR_syscalls, t); }
#ifdef CONFIG_AARCH32_EL0
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
Following patches of the series introduce ILP32-specific structures and handlers for the signal subsystem. In this patch, functions and structures that are common to LP64 and ILP32 are moved to arch/arm64/include/asm/signal_common.h so that the ILP32 code can reuse them. Some functions work with struct rt_sigframe, which differs for ILP32; to let the ILP32 build generate correct code for its layout, the bodies of those functions are moved into arch/arm64/include/asm/signal_common.h. The others are only declared in the new header.
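A reduced, self-contained sketch of the idea, using stand-in types rather than the real kernel definitions: because a helper's body lives in a header and is static, it is compiled in the including translation unit against whichever struct rt_sigframe that file has defined.

#include <stddef.h>
#include <stdio.h>

/* Stand-in for the layout the including file defines; signal.c keeps the
 * LP64 layout, signal_ilp32.c will later define its own. */
struct rt_sigframe {
	long info[16];
	long uc[32];
};

/* The kind of helper whose body moves into signal_common.h: it picks up
 * the includer's rt_sigframe definition at compile time. */
static size_t sigframe_size_of(void)
{
	return sizeof(struct rt_sigframe);
}

int main(void)
{
	printf("frame size for this ABI: %zu bytes\n", sigframe_size_of());
	return 0;
}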
Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/signal_common.h | 391 +++++++++++++++++++++++++ arch/arm64/kernel/signal.c | 379 ++++-------------------- 2 files changed, 448 insertions(+), 322 deletions(-) create mode 100644 arch/arm64/include/asm/signal_common.h
diff --git a/arch/arm64/include/asm/signal_common.h b/arch/arm64/include/asm/signal_common.h new file mode 100644 index 000000000000..460efa29d5a6 --- /dev/null +++ b/arch/arm64/include/asm/signal_common.h @@ -0,0 +1,391 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* + * Copyright (C) 1995-2009 Russell King + * Copyright (C) 2012 ARM Ltd. + * Copyright (C) 2018 Cavium Networks. + */ + +#ifndef __ASM_SIGNAL_COMMON_H +#define __ASM_SIGNAL_COMMON_H + +#include <linux/uaccess.h> +#include <asm/fpsimd.h> +#include <asm/traps.h> + +#define EXTRA_CONTEXT_SIZE round_up(sizeof(struct extra_context), 16) +#define TERMINATOR_SIZE round_up(sizeof(struct _aarch64_ctx), 16) +#define SIGCONTEXT_RESERVED_SIZE sizeof(((struct sigcontext *)0)->__reserved) +#define RT_SIGFRAME_RESERVED_OFFSET \ + offsetof(struct rt_sigframe, uc.uc_mcontext.__reserved) + +/* + * Sanity limit on the approximate maximum size of signal frame we'll + * try to generate. Stack alignment padding and the frame record are + * not taken into account. This limit is not a guarantee and is + * NOT ABI. + */ +#define SIGFRAME_MAXSZ SZ_64K + +struct rt_sigframe_user_layout { + struct rt_sigframe __user *sigframe; + struct frame_record __user *next_frame; + + unsigned long size; /* size of allocated sigframe data */ + unsigned long limit; /* largest allowed size */ + + unsigned long fpsimd_offset; + unsigned long esr_offset; + unsigned long sve_offset; + unsigned long tpidr2_offset; + unsigned long za_offset; + unsigned long zt_offset; + unsigned long extra_offset; + unsigned long end_offset; +}; + +struct user_ctxs { + struct fpsimd_context __user *fpsimd; + u32 fpsimd_size; + struct sve_context __user *sve; + u32 sve_size; + struct tpidr2_context __user *tpidr2; + u32 tpidr2_size; + struct za_context __user *za; + u32 za_size; + struct zt_context __user *zt; + u32 zt_size; +}; + +struct frame_record { + u64 fp; + u64 lr; +}; + +void __user *apply_user_offset(struct rt_sigframe_user_layout const *user, + unsigned long offset); + +int setup_sigframe_layout(struct rt_sigframe_user_layout *user, bool add_all); +int setup_extra_context(char __user *sfp, unsigned long sf_size, + char __user *exprap); +int __parse_user_sigcontext(struct user_ctxs *user, + struct sigcontext __user const *sc, + void __user const *sigframe_base); +#define parse_user_sigcontext(user, sf) \ + __parse_user_sigcontext(user, &(sf)->uc.uc_mcontext, sf) + +int preserve_fpsimd_context(struct fpsimd_context __user *ctx); +int restore_fpsimd_context(struct user_ctxs *user); + +#ifdef CONFIG_ARM64_SVE +int preserve_sve_context(struct sve_context __user *ctx); +int restore_sve_fpsimd_context(struct user_ctxs *user); +#else /* ! CONFIG_ARM64_SVE */ + +/* Turn any non-optimised out attempts to use these into a link error: */ +extern int preserve_sve_context(void __user *ctx); +extern int restore_sve_fpsimd_context(struct user_ctxs *user); + +#endif /* ! CONFIG_ARM64_SVE */ + + +#ifdef CONFIG_ARM64_SME +int preserve_tpidr2_context(struct tpidr2_context __user *ctx); +int restore_tpidr2_context(struct user_ctxs *user); +int preserve_za_context(struct za_context __user *ctx); +int restore_za_context(struct user_ctxs *user); +int preserve_zt_context(struct zt_context __user *ctx); +int restore_zt_context(struct user_ctxs *user); +#else /* ! 
CONFIG_ARM64_SME */ + +/* Turn any non-optimised out attempts to use these into a link error: */ +extern int preserve_tpidr2_context(void __user *ctx); +extern int restore_tpidr2_context(struct user_ctxs *user); +extern int preserve_za_context(void __user *ctx); +extern int restore_za_context(struct user_ctxs *user); +extern int preserve_zt_context(void __user *ctx); +extern int restore_zt_context(struct user_ctxs *user); + +#endif /* ! CONFIG_ARM64_SME */ + +int sigframe_alloc(struct rt_sigframe_user_layout *user, + unsigned long *offset, size_t size); +int sigframe_alloc_end(struct rt_sigframe_user_layout *user); + +void __setup_return(struct pt_regs *regs, struct k_sigaction *ka, + struct rt_sigframe_user_layout *user, int usig); + +static void init_user_layout(struct rt_sigframe_user_layout *user) +{ + const size_t reserved_size = + sizeof(user->sigframe->uc.uc_mcontext.__reserved); + + memset(user, 0, sizeof(*user)); + user->size = offsetof(struct rt_sigframe, uc.uc_mcontext.__reserved); + + user->limit = user->size + reserved_size; + + user->limit -= TERMINATOR_SIZE; + user->limit -= EXTRA_CONTEXT_SIZE; + /* Reserve space for extension and terminator ^ */ +} + +static size_t sigframe_size(struct rt_sigframe_user_layout const *user) +{ + return round_up(max(user->size, sizeof(struct rt_sigframe)), 16); +} + +static int get_sigframe(struct rt_sigframe_user_layout *user, + struct ksignal *ksig, struct pt_regs *regs) +{ + unsigned long sp, sp_top; + int err; + + init_user_layout(user); + err = setup_sigframe_layout(user, false); + if (err) + return err; + + sp = sp_top = sigsp(regs->sp, ksig); + + sp = round_down(sp - sizeof(struct frame_record), 16); + user->next_frame = (struct frame_record __user *)sp; + + sp = round_down(sp, 16) - sigframe_size(user); + user->sigframe = (void __user *)sp; + + /* + * Check that we can actually write to the signal frame. + */ + if (!access_ok(VERIFY_WRITE, user->sigframe, sp_top - sp)) + return -EFAULT; + + return 0; +} + +static int restore_sigframe(struct pt_regs *regs, + struct rt_sigframe __user *sf) +{ + sigset_t set; + int i, err; + struct user_ctxs user; + + err = __copy_from_user(&set, &sf->uc.uc_sigmask, sizeof(set)); + if (err == 0) + set_current_blocked(&set); + + for (i = 0; i < 31; i++) + __get_user_error(regs->regs[i], &sf->uc.uc_mcontext.regs[i], + err); + __get_user_error(regs->sp, &sf->uc.uc_mcontext.sp, err); + __get_user_error(regs->pc, &sf->uc.uc_mcontext.pc, err); + __get_user_error(regs->pstate, &sf->uc.uc_mcontext.pstate, err); + + /* + * Avoid sys_rt_sigreturn() restarting. 
+ */ + forget_syscall(regs); + + err |= !valid_user_regs(®s->user_regs, current); + if (err == 0) + err = parse_user_sigcontext(&user, sf); + + if (err == 0 && system_supports_fpsimd()) { + if (!user.fpsimd) + return -EINVAL; + + if (user.sve) + err = restore_sve_fpsimd_context(&user); + else + err = restore_fpsimd_context(&user); + } + + if (err == 0 && system_supports_tpidr2() && user.tpidr2) + err = restore_tpidr2_context(&user); + + if (err == 0 && system_supports_sme() && user.za) + err = restore_za_context(&user); + + if (err == 0 && system_supports_sme2() && user.zt) + err = restore_zt_context(&user); + + return err; +} + +static int setup_sigframe(struct rt_sigframe_user_layout *user, + struct pt_regs *regs, sigset_t *set) +{ + int i, err = 0; + struct rt_sigframe __user *sf = user->sigframe; + + /* set up the stack frame for unwinding */ + __put_user_error(regs->regs[29], &user->next_frame->fp, err); + __put_user_error(regs->regs[30], &user->next_frame->lr, err); + + for (i = 0; i < 31; i++) + __put_user_error(regs->regs[i], &sf->uc.uc_mcontext.regs[i], + err); + __put_user_error(regs->sp, &sf->uc.uc_mcontext.sp, err); + __put_user_error(regs->pc, &sf->uc.uc_mcontext.pc, err); + __put_user_error(regs->pstate, &sf->uc.uc_mcontext.pstate, err); + + __put_user_error(current->thread.fault_address, &sf->uc.uc_mcontext.fault_address, err); + + err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(*set)); + + if (err == 0 && system_supports_fpsimd()) { + struct fpsimd_context __user *fpsimd_ctx = + apply_user_offset(user, user->fpsimd_offset); + err |= preserve_fpsimd_context(fpsimd_ctx); + } + + /* fault information, if valid */ + if (err == 0 && user->esr_offset) { + struct esr_context __user *esr_ctx = + apply_user_offset(user, user->esr_offset); + + __put_user_error(ESR_MAGIC, &esr_ctx->head.magic, err); + __put_user_error(sizeof(*esr_ctx), &esr_ctx->head.size, err); + __put_user_error(current->thread.fault_code, &esr_ctx->esr, err); + } + + /* Scalable Vector Extension state (including streaming), if present */ + if ((system_supports_sve() || system_supports_sme()) && + err == 0 && user->sve_offset) { + struct sve_context __user *sve_ctx = + apply_user_offset(user, user->sve_offset); + err |= preserve_sve_context(sve_ctx); + } + + /* TPIDR2 if supported */ + if (system_supports_tpidr2() && err == 0) { + struct tpidr2_context __user *tpidr2_ctx = + apply_user_offset(user, user->tpidr2_offset); + err |= preserve_tpidr2_context(tpidr2_ctx); + } + + /* ZA state if present */ + if (system_supports_sme() && err == 0 && user->za_offset) { + struct za_context __user *za_ctx = + apply_user_offset(user, user->za_offset); + err |= preserve_za_context(za_ctx); + } + + /* ZT state if present */ + if (system_supports_sme2() && err == 0 && user->zt_offset) { + struct zt_context __user *zt_ctx = + apply_user_offset(user, user->zt_offset); + err |= preserve_zt_context(zt_ctx); + } + + if (err == 0 && user->extra_offset) { + char __user *sfp = (char __user *)user->sigframe; + char __user *userp = + apply_user_offset(user, user->extra_offset); + + struct extra_context __user *extra; + struct _aarch64_ctx __user *end; + u64 extra_datap; + u32 extra_size; + + extra = (struct extra_context __user *)userp; + userp += EXTRA_CONTEXT_SIZE; + + end = (struct _aarch64_ctx __user *)userp; + userp += TERMINATOR_SIZE; + + /* + * extra_datap is just written to the signal frame. + * The value gets cast back to a void __user * + * during sigreturn. 
+ */ + extra_datap = (__force u64)userp; + extra_size = sfp + round_up(user->size, 16) - userp; + + __put_user_error(EXTRA_MAGIC, &extra->head.magic, err); + __put_user_error(EXTRA_CONTEXT_SIZE, &extra->head.size, err); + __put_user_error(extra_datap, &extra->datap, err); + __put_user_error(extra_size, &extra->size, err); + + /* Add the terminator */ + __put_user_error(0, &end->magic, err); + __put_user_error(0, &end->size, err); + } + + /* set the "end" magic */ + if (err == 0) { + struct _aarch64_ctx __user *end = + apply_user_offset(user, user->end_offset); + + __put_user_error(0, &end->magic, err); + __put_user_error(0, &end->size, err); + } + + return err; +} + +static long __sys_rt_sigreturn(struct pt_regs *regs) +{ + struct rt_sigframe __user *frame; + + /* Always make any pending restarted system calls return -EINTR */ + current->restart_block.fn = do_no_restart_syscall; + + /* + * Since we stacked the signal on a 128-bit boundary, then 'sp' should + * be word aligned here. + */ + if (regs->sp & 15) + goto badframe; + + frame = (struct rt_sigframe __user *)regs->sp; + + if (!access_ok(VERIFY_READ, frame, sizeof(*frame))) + goto badframe; + + if (restore_sigframe(regs, frame)) + goto badframe; + + if (restore_altstack(&frame->uc.uc_stack)) + goto badframe; + + return regs->regs[0]; + +badframe: + arm64_notify_segfault(regs->sp); + return 0; +} + +static int __setup_rt_frame(int usig, struct ksignal *ksig, + sigset_t *set, struct pt_regs *regs) +{ + struct rt_sigframe_user_layout user; + struct rt_sigframe __user *frame; + int err = 0; + + fpsimd_signal_preserve_current_state(); + + if (get_sigframe(&user, ksig, regs)) + return 1; + + frame = user.sigframe; + + __put_user_error(0, &frame->uc.uc_flags, err); + __put_user_error((typeof(frame->uc.uc_link)) 0, + &frame->uc.uc_link, err); + + err |= __save_altstack(&frame->uc.uc_stack, regs->sp); + err |= setup_sigframe(&user, regs, set); + if (err == 0) { + setup_return(regs, &ksig->ka, &user, usig); + if (ksig->ka.sa.sa_flags & SA_SIGINFO) { + err |= copy_siginfo_to_user(&frame->info, &ksig->info); + regs->regs[1] = (unsigned long)&frame->info; + regs->regs[2] = (unsigned long)&frame->uc; + } + } + + return err; +} + +#endif /* __ASM_SIGNAL_COMMON_H */ diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index c08b47c066bb..a502d485b530 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -34,6 +34,9 @@ #include <asm/traps.h> #include <asm/vdso.h>
+#define get_sigset(s, m) __copy_from_user(s, m, sizeof(*s)) +#define put_sigset(s, m) __copy_to_user(m, s, sizeof(*s)) + /* * Do a signal return; undo the signal stack. These are aligned to 128-bit. */ @@ -41,60 +44,12 @@ struct rt_sigframe { struct siginfo info; struct ucontext uc; }; +struct rt_sigframe_user_layout;
-struct frame_record { - u64 fp; - u64 lr; -}; - -struct rt_sigframe_user_layout { - struct rt_sigframe __user *sigframe; - struct frame_record __user *next_frame; - - unsigned long size; /* size of allocated sigframe data */ - unsigned long limit; /* largest allowed size */ - - unsigned long fpsimd_offset; - unsigned long esr_offset; - unsigned long sve_offset; - unsigned long tpidr2_offset; - unsigned long za_offset; - unsigned long zt_offset; - unsigned long extra_offset; - unsigned long end_offset; -}; - -#define BASE_SIGFRAME_SIZE round_up(sizeof(struct rt_sigframe), 16) -#define TERMINATOR_SIZE round_up(sizeof(struct _aarch64_ctx), 16) -#define EXTRA_CONTEXT_SIZE round_up(sizeof(struct extra_context), 16) - -static void init_user_layout(struct rt_sigframe_user_layout *user) -{ - const size_t reserved_size = - sizeof(user->sigframe->uc.uc_mcontext.__reserved); - - memset(user, 0, sizeof(*user)); - user->size = offsetof(struct rt_sigframe, uc.uc_mcontext.__reserved); - - user->limit = user->size + reserved_size; +static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, + struct rt_sigframe_user_layout *user, int usig);
- user->limit -= TERMINATOR_SIZE; - user->limit -= EXTRA_CONTEXT_SIZE; - /* Reserve space for extension and terminator ^ */ -} - -static size_t sigframe_size(struct rt_sigframe_user_layout const *user) -{ - return round_up(max(user->size, sizeof(struct rt_sigframe)), 16); -} - -/* - * Sanity limit on the approximate maximum size of signal frame we'll - * try to generate. Stack alignment padding and the frame record are - * not taken into account. This limit is not a guarantee and is - * NOT ABI. - */ -#define SIGFRAME_MAXSZ SZ_256K +#include <asm/signal_common.h>
static int __sigframe_alloc(struct rt_sigframe_user_layout *user, unsigned long *offset, size_t size, bool extend) @@ -139,14 +94,14 @@ static int __sigframe_alloc(struct rt_sigframe_user_layout *user, * signal frame. The offset from the signal frame base address to the * allocated block is assigned to *offset. */ -static int sigframe_alloc(struct rt_sigframe_user_layout *user, +int sigframe_alloc(struct rt_sigframe_user_layout *user, unsigned long *offset, size_t size) { return __sigframe_alloc(user, offset, size, true); }
/* Allocate the null terminator record and prevent further allocations */ -static int sigframe_alloc_end(struct rt_sigframe_user_layout *user) +int sigframe_alloc_end(struct rt_sigframe_user_layout *user) { int ret;
@@ -163,7 +118,7 @@ static int sigframe_alloc_end(struct rt_sigframe_user_layout *user) return 0; }
-static void __user *apply_user_offset( +void __user *apply_user_offset( struct rt_sigframe_user_layout const *user, unsigned long offset) { char __user *base = (char __user *)user->sigframe; @@ -171,20 +126,7 @@ static void __user *apply_user_offset( return base + offset; }
-struct user_ctxs { - struct fpsimd_context __user *fpsimd; - u32 fpsimd_size; - struct sve_context __user *sve; - u32 sve_size; - struct tpidr2_context __user *tpidr2; - u32 tpidr2_size; - struct za_context __user *za; - u32 za_size; - struct zt_context __user *zt; - u32 zt_size; -}; - -static int preserve_fpsimd_context(struct fpsimd_context __user *ctx) +int preserve_fpsimd_context(struct fpsimd_context __user *ctx) { struct user_fpsimd_state const *fpsimd = ¤t->thread.uw.fpsimd_state; @@ -202,7 +144,7 @@ static int preserve_fpsimd_context(struct fpsimd_context __user *ctx) return err ? -EFAULT : 0; }
-static int restore_fpsimd_context(struct user_ctxs *user) +int restore_fpsimd_context(struct user_ctxs *user) { struct user_fpsimd_state fpsimd; int err = 0; @@ -230,7 +172,7 @@ static int restore_fpsimd_context(struct user_ctxs *user)
#ifdef CONFIG_ARM64_SVE
-static int preserve_sve_context(struct sve_context __user *ctx) +int preserve_sve_context(struct sve_context __user *ctx) { int err = 0; u16 reserved[ARRAY_SIZE(ctx->__reserved)]; @@ -270,7 +212,7 @@ static int preserve_sve_context(struct sve_context __user *ctx) return err ? -EFAULT : 0; }
-static int restore_sve_fpsimd_context(struct user_ctxs *user) +int restore_sve_fpsimd_context(struct user_ctxs *user) { int err = 0; unsigned int vl, vq; @@ -363,20 +305,17 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
#else /* ! CONFIG_ARM64_SVE */
-static int restore_sve_fpsimd_context(struct user_ctxs *user) +int restore_sve_fpsimd_context(struct user_ctxs *user) { WARN_ON_ONCE(1); return -EINVAL; }
-/* Turn any non-optimised out attempts to use this into a link error: */ -extern int preserve_sve_context(void __user *ctx); - #endif /* ! CONFIG_ARM64_SVE */
#ifdef CONFIG_ARM64_SME
-static int preserve_tpidr2_context(struct tpidr2_context __user *ctx) +int preserve_tpidr2_context(struct tpidr2_context __user *ctx) { int err = 0;
@@ -389,7 +328,7 @@ static int preserve_tpidr2_context(struct tpidr2_context __user *ctx) return err; }
-static int restore_tpidr2_context(struct user_ctxs *user) +int restore_tpidr2_context(struct user_ctxs *user) { u64 tpidr2_el0; int err = 0; @@ -404,7 +343,7 @@ static int restore_tpidr2_context(struct user_ctxs *user) return err; }
-static int preserve_za_context(struct za_context __user *ctx) +int preserve_za_context(struct za_context __user *ctx) { int err = 0; u16 reserved[ARRAY_SIZE(ctx->__reserved)]; @@ -439,7 +378,7 @@ static int preserve_za_context(struct za_context __user *ctx) return err ? -EFAULT : 0; }
-static int restore_za_context(struct user_ctxs *user) +int restore_za_context(struct user_ctxs *user) { int err = 0; unsigned int vq; @@ -495,7 +434,7 @@ static int restore_za_context(struct user_ctxs *user) return 0; }
-static int preserve_zt_context(struct zt_context __user *ctx) +int preserve_zt_context(struct zt_context __user *ctx) { int err = 0; u16 reserved[ARRAY_SIZE(ctx->__reserved)]; @@ -524,7 +463,7 @@ static int preserve_zt_context(struct zt_context __user *ctx) return err ? -EFAULT : 0; }
-static int restore_zt_context(struct user_ctxs *user) +int restore_zt_context(struct user_ctxs *user) { int err; u16 nregs; @@ -574,7 +513,7 @@ extern int restore_zt_context(struct user_ctxs *user);
#endif /* ! CONFIG_ARM64_SME */
-static int __parse_user_sigcontext(struct user_ctxs *user, +int __parse_user_sigcontext(struct user_ctxs *user, struct sigcontext __user const *sc, void __user const *sigframe_base) { @@ -766,89 +705,10 @@ static int __parse_user_sigcontext(struct user_ctxs *user, return -EINVAL; }
-#define parse_user_sigcontext(user, sf) \ - __parse_user_sigcontext(user, &(sf)->uc.uc_mcontext, sf) - -static int restore_sigframe(struct pt_regs *regs, - struct rt_sigframe __user *sf) -{ - sigset_t set; - int i, err; - struct user_ctxs user; - - err = __copy_from_user(&set, &sf->uc.uc_sigmask, sizeof(set)); - if (err == 0) - set_current_blocked(&set); - - for (i = 0; i < 31; i++) - __get_user_error(regs->regs[i], &sf->uc.uc_mcontext.regs[i], - err); - __get_user_error(regs->sp, &sf->uc.uc_mcontext.sp, err); - __get_user_error(regs->pc, &sf->uc.uc_mcontext.pc, err); - __get_user_error(regs->pstate, &sf->uc.uc_mcontext.pstate, err); - - /* - * Avoid sys_rt_sigreturn() restarting. - */ - forget_syscall(regs); - - err |= !valid_user_regs(®s->user_regs, current); - if (err == 0) - err = parse_user_sigcontext(&user, sf); - - if (err == 0 && system_supports_fpsimd()) { - if (!user.fpsimd) - return -EINVAL; - - if (user.sve) - err = restore_sve_fpsimd_context(&user); - else - err = restore_fpsimd_context(&user); - } - - if (err == 0 && system_supports_tpidr2() && user.tpidr2) - err = restore_tpidr2_context(&user); - - if (err == 0 && system_supports_sme() && user.za) - err = restore_za_context(&user); - - if (err == 0 && system_supports_sme2() && user.zt) - err = restore_zt_context(&user); - - return err; -} - SYSCALL_DEFINE0(rt_sigreturn) { struct pt_regs *regs = current_pt_regs(); - struct rt_sigframe __user *frame; - - /* Always make any pending restarted system calls return -EINTR */ - current->restart_block.fn = do_no_restart_syscall; - - /* - * Since we stacked the signal on a 128-bit boundary, then 'sp' should - * be word aligned here. - */ - if (regs->sp & 15) - goto badframe; - - frame = (struct rt_sigframe __user *)regs->sp; - - if (!access_ok(frame, sizeof (*frame))) - goto badframe; - - if (restore_sigframe(regs, frame)) - goto badframe; - - if (restore_altstack(&frame->uc.uc_stack)) - goto badframe; - - return regs->regs[0]; - -badframe: - arm64_notify_segfault(regs->sp); - return 0; + return __sys_rt_sigreturn(regs); }
/* @@ -858,8 +718,7 @@ SYSCALL_DEFINE0(rt_sigreturn) * this task; otherwise, generates a layout for the current state * of the task. */ -static int setup_sigframe_layout(struct rt_sigframe_user_layout *user, - bool add_all) +int setup_sigframe_layout(struct rt_sigframe_user_layout *user, bool add_all) { int err;
@@ -934,144 +793,48 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user, return sigframe_alloc_end(user); }
-static int setup_sigframe(struct rt_sigframe_user_layout *user, - struct pt_regs *regs, sigset_t *set) +int setup_extra_context(char __user *sfp, unsigned long sf_size, + char __user *extrap) { - int i, err = 0; - struct rt_sigframe __user *sf = user->sigframe; - - /* set up the stack frame for unwinding */ - __put_user_error(regs->regs[29], &user->next_frame->fp, err); - __put_user_error(regs->regs[30], &user->next_frame->lr, err); - - for (i = 0; i < 31; i++) - __put_user_error(regs->regs[i], &sf->uc.uc_mcontext.regs[i], - err); - __put_user_error(regs->sp, &sf->uc.uc_mcontext.sp, err); - __put_user_error(regs->pc, &sf->uc.uc_mcontext.pc, err); - __put_user_error(regs->pstate, &sf->uc.uc_mcontext.pstate, err); - - __put_user_error(current->thread.fault_address, &sf->uc.uc_mcontext.fault_address, err); - - err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(*set)); - - if (err == 0 && system_supports_fpsimd()) { - struct fpsimd_context __user *fpsimd_ctx = - apply_user_offset(user, user->fpsimd_offset); - err |= preserve_fpsimd_context(fpsimd_ctx); - } - - /* fault information, if valid */ - if (err == 0 && user->esr_offset) { - struct esr_context __user *esr_ctx = - apply_user_offset(user, user->esr_offset); - - __put_user_error(ESR_MAGIC, &esr_ctx->head.magic, err); - __put_user_error(sizeof(*esr_ctx), &esr_ctx->head.size, err); - __put_user_error(current->thread.fault_code, &esr_ctx->esr, err); - } - - /* Scalable Vector Extension state (including streaming), if present */ - if ((system_supports_sve() || system_supports_sme()) && - err == 0 && user->sve_offset) { - struct sve_context __user *sve_ctx = - apply_user_offset(user, user->sve_offset); - err |= preserve_sve_context(sve_ctx); - } - - /* TPIDR2 if supported */ - if (system_supports_tpidr2() && err == 0) { - struct tpidr2_context __user *tpidr2_ctx = - apply_user_offset(user, user->tpidr2_offset); - err |= preserve_tpidr2_context(tpidr2_ctx); - } - - /* ZA state if present */ - if (system_supports_sme() && err == 0 && user->za_offset) { - struct za_context __user *za_ctx = - apply_user_offset(user, user->za_offset); - err |= preserve_za_context(za_ctx); - } - - /* ZT state if present */ - if (system_supports_sme2() && err == 0 && user->zt_offset) { - struct zt_context __user *zt_ctx = - apply_user_offset(user, user->zt_offset); - err |= preserve_zt_context(zt_ctx); - } - - if (err == 0 && user->extra_offset) { - char __user *sfp = (char __user *)user->sigframe; - char __user *userp = - apply_user_offset(user, user->extra_offset); - - struct extra_context __user *extra; - struct _aarch64_ctx __user *end; - u64 extra_datap; - u32 extra_size; + int err = 0; + struct extra_context __user *extra; + struct _aarch64_ctx __user *end; + u64 extra_datap; + u32 extra_size;
- extra = (struct extra_context __user *)userp; - userp += EXTRA_CONTEXT_SIZE; + extra = (struct extra_context __user *)extrap; + extrap += EXTRA_CONTEXT_SIZE;
- end = (struct _aarch64_ctx __user *)userp; - userp += TERMINATOR_SIZE; + end = (struct _aarch64_ctx __user *)extrap; + extrap += TERMINATOR_SIZE;
- /* - * extra_datap is just written to the signal frame. - * The value gets cast back to a void __user * - * during sigreturn. - */ - extra_datap = (__force u64)userp; - extra_size = sfp + round_up(user->size, 16) - userp; - - __put_user_error(EXTRA_MAGIC, &extra->head.magic, err); - __put_user_error(EXTRA_CONTEXT_SIZE, &extra->head.size, err); - __put_user_error(extra_datap, &extra->datap, err); - __put_user_error(extra_size, &extra->size, err); - - /* Add the terminator */ - __put_user_error(0, &end->magic, err); - __put_user_error(0, &end->size, err); - } + /* + * extra_datap is just written to the signal frame. + * The value gets cast back to a void __user * + * during sigreturn. + */ + extra_datap = (__force u64)extrap; + extra_size = sfp + round_up(sf_size, 16) - extrap;
- /* set the "end" magic */ - if (err == 0) { - struct _aarch64_ctx __user *end = - apply_user_offset(user, user->end_offset); + __put_user_error(EXTRA_MAGIC, &extra->head.magic, err); + __put_user_error(EXTRA_CONTEXT_SIZE, &extra->head.size, err); + __put_user_error(extra_datap, &extra->datap, err); + __put_user_error(extra_size, &extra->size, err);
- __put_user_error(0, &end->magic, err); - __put_user_error(0, &end->size, err); - } + /* Add the terminator */ + __put_user_error(0, &end->magic, err); + __put_user_error(0, &end->size, err);
return err; }
-static int get_sigframe(struct rt_sigframe_user_layout *user, - struct ksignal *ksig, struct pt_regs *regs) +void __setup_return(struct pt_regs *regs, struct k_sigaction *ka, + struct rt_sigframe_user_layout *user, int usig) { - unsigned long sp, sp_top; - int err; - - init_user_layout(user); - err = setup_sigframe_layout(user, false); - if (err) - return err; - - sp = sp_top = sigsp(regs->sp, ksig); - - sp = round_down(sp - sizeof(struct frame_record), 16); - user->next_frame = (struct frame_record __user *)sp; - - sp = round_down(sp, 16) - sigframe_size(user); - user->sigframe = (struct rt_sigframe __user *)sp; - - /* - * Check that we can actually write to the signal frame. - */ - if (!access_ok(user->sigframe, sp_top - sp)) - return -EFAULT; - - return 0; + regs->regs[0] = usig; + regs->sp = (unsigned long)user->sigframe; + regs->regs[29] = (unsigned long)&user->next_frame->fp; + regs->pc = (unsigned long)ka->sa.sa_handler; }
static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, @@ -1079,10 +842,7 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, { __sigrestore_t sigtramp;
- regs->regs[0] = usig; - regs->sp = (unsigned long)user->sigframe; - regs->regs[29] = (unsigned long)&user->next_frame->fp; - regs->pc = (unsigned long)ka->sa.sa_handler; + __setup_return(regs, ka, user, usig);
/* * Signal delivery is a (wacky) indirect function call in @@ -1133,32 +893,7 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, static int setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, struct pt_regs *regs) { - struct rt_sigframe_user_layout user; - struct rt_sigframe __user *frame; - int err = 0; - - fpsimd_signal_preserve_current_state(); - - if (get_sigframe(&user, ksig, regs)) - return 1; - - frame = user.sigframe; - - __put_user_error(0, &frame->uc.uc_flags, err); - __put_user_error(NULL, &frame->uc.uc_link, err); - - err |= __save_altstack(&frame->uc.uc_stack, regs->sp); - err |= setup_sigframe(&user, regs, set); - if (err == 0) { - setup_return(regs, &ksig->ka, &user, usig); - if (ksig->ka.sa.sa_flags & SA_SIGINFO) { - err |= copy_siginfo_to_user(&frame->info, &ksig->info); - regs->regs[1] = (unsigned long)&frame->info; - regs->regs[2] = (unsigned long)&frame->uc; - } - } - - return err; + return __setup_rt_frame(usig, ksig, set, regs); }
static void setup_restart_syscall(struct pt_regs *regs)
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
ILP32 needs to mix a 32-bit struct siginfo and a 64-bit sigframe for its signal handlers. Move the existing compat code for copying siginfo to user space and manipulating signal masks into signal32_common.c so it can be used to deliver both aarch32 and ilp32 signals.
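For illustration only, a small standalone program showing the split and recombination that the shared sigset helpers perform between a 64-bit sigset_t word and the two 32-bit compat words (the mask value is arbitrary):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t set = 0x0000000180004002ULL;	/* arbitrary 64-bit signal mask */
	uint32_t lo = set & 0xffffffffu;	/* compat word 0 */
	uint32_t hi = set >> 32;		/* compat word 1 */
	uint64_t back = (uint64_t)lo | ((uint64_t)hi << 32);

	printf("lo=%#x hi=%#x roundtrip ok=%d\n", lo, hi, back == set);
	return 0;
}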
Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com
Conflicts: arch/arm64/kernel/Makefile
Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/signal32_common.h | 13 +++++++++ arch/arm64/include/asm/signal_common.h | 4 +-- arch/arm64/kernel/Makefile | 2 +- arch/arm64/kernel/signal32.c | 23 +-------------- arch/arm64/kernel/signal32_common.c | 37 ++++++++++++++++++++++++ 5 files changed, 54 insertions(+), 25 deletions(-) create mode 100644 arch/arm64/include/asm/signal32_common.h create mode 100644 arch/arm64/kernel/signal32_common.c
diff --git a/arch/arm64/include/asm/signal32_common.h b/arch/arm64/include/asm/signal32_common.h new file mode 100644 index 000000000000..4b365b31d37e --- /dev/null +++ b/arch/arm64/include/asm/signal32_common.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef __ASM_SIGNAL32_COMMON_H +#define __ASM_SIGNAL32_COMMON_H + +#ifdef CONFIG_COMPAT + +int put_sigset_t(compat_sigset_t __user *uset, sigset_t *set); +int get_sigset_t(sigset_t *set, const compat_sigset_t __user *uset); + +#endif /* CONFIG_COMPAT*/ + +#endif /* __ASM_SIGNAL32_COMMON_H */ diff --git a/arch/arm64/include/asm/signal_common.h b/arch/arm64/include/asm/signal_common.h index 460efa29d5a6..3a00144c6952 100644 --- a/arch/arm64/include/asm/signal_common.h +++ b/arch/arm64/include/asm/signal_common.h @@ -157,7 +157,7 @@ static int get_sigframe(struct rt_sigframe_user_layout *user, /* * Check that we can actually write to the signal frame. */ - if (!access_ok(VERIFY_WRITE, user->sigframe, sp_top - sp)) + if (!access_ok(user->sigframe, sp_top - sp)) return -EFAULT;
return 0; @@ -340,7 +340,7 @@ static long __sys_rt_sigreturn(struct pt_regs *regs)
frame = (struct rt_sigframe __user *)regs->sp;
- if (!access_ok(VERIFY_READ, frame, sizeof(*frame))) + if (!access_ok(frame, sizeof(*frame))) goto badframe;
if (restore_sigframe(regs, frame)) diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 0dc8de7d78e9..843fbd1f8ed7 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -42,7 +42,7 @@ obj-$(CONFIG_AARCH32_EL0) += sigreturn32.o obj-$(CONFIG_COMPAT_ALIGNMENT_FIXUPS) += compat_alignment.o obj-$(CONFIG_KUSER_HELPERS) += kuser32.o obj-$(CONFIG_ARM64_ILP32) += binfmt_ilp32.o sys_ilp32.o -obj-$(CONFIG_COMPAT) += sys32_common.o +obj-$(CONFIG_COMPAT) += sys32_common.o signal32_common.o obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o obj-$(CONFIG_MODULES) += module.o module-plts.o obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c index 3853a62ab839..8a0de714cb8e 100644 --- a/arch/arm64/kernel/signal32.c +++ b/arch/arm64/kernel/signal32.c @@ -16,6 +16,7 @@ #include <asm/fpsimd.h> #include <asm/signal32.h> #include <asm/traps.h> +#include <asm/signal32_common.h> #include <linux/uaccess.h> #include <asm/unistd.h> #include <asm/vdso.h> @@ -46,28 +47,6 @@ struct a32_aux_sigframe { unsigned long end_magic; } __attribute__((__aligned__(8)));
-static inline int put_sigset_t(compat_sigset_t __user *uset, sigset_t *set) -{ - compat_sigset_t cset; - - cset.sig[0] = set->sig[0] & 0xffffffffull; - cset.sig[1] = set->sig[0] >> 32; - - return copy_to_user(uset, &cset, sizeof(*uset)); -} - -static inline int get_sigset_t(sigset_t *set, - const compat_sigset_t __user *uset) -{ - compat_sigset_t s32; - - if (copy_from_user(&s32, uset, sizeof(*uset))) - return -EFAULT; - - set->sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32); - return 0; -} - /* * VFP save/restore code. * diff --git a/arch/arm64/kernel/signal32_common.c b/arch/arm64/kernel/signal32_common.c new file mode 100644 index 000000000000..4844d2c5fd89 --- /dev/null +++ b/arch/arm64/kernel/signal32_common.c @@ -0,0 +1,37 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Based on arch/arm/kernel/signal.c + * + * Copyright (C) 1995-2009 Russell King + * Copyright (C) 2012 ARM Ltd. + * Modified by Will Deacon will.deacon@arm.com + */ + +#include <linux/compat.h> +#include <linux/signal.h> +#include <linux/uaccess.h> + +#include <asm/signal32_common.h> +#include <asm/unistd.h> + +int put_sigset_t(compat_sigset_t __user *uset, sigset_t *set) +{ + compat_sigset_t cset; + + cset.sig[0] = set->sig[0] & 0xffffffffull; + cset.sig[1] = set->sig[0] >> 32; + + return copy_to_user(uset, &cset, sizeof(*uset)); +} + +int get_sigset_t(sigset_t *set, const compat_sigset_t __user *uset) +{ + compat_sigset_t s32; + + if (copy_from_user(&s32, uset, sizeof(*uset))) + return -EFAULT; + + set->sig[0] = s32.sig[0] | (((long)s32.sig[1]) << 32); + return 0; +}
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
ILP32 uses the AARCH32 compat structures and syscall handlers for signals, but the ILP32 rt_sigframe and ucontext structures differ from both LP64 and AARCH32. From a software point of view, ILP32 is a typical 32-bit compat ABI; from a hardware point of view, it is just like LP64.
The struct rt_sigframe defined by this patch in arch/arm64/kernel/signal_ilp32.c takes the place of the one in arch/arm64/kernel/signal.c for the ILP32 build, and the functions located in arch/arm64/include/asm/signal_common.h pick up the new structure to generate code suitable for ILP32.
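A standalone size check of the padding idea used by struct ilp32_ucontext, with a stand-in for compat_sigset_t (the 128-byte total corresponds to the 1024-bit mask size glibc expects):

#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t sig[2]; } demo_compat_sigset_t;	/* stand-in */

struct demo_ucontext_tail {
	demo_compat_sigset_t uc_sigmask;
	uint8_t __unused[1024 / 8 - sizeof(demo_compat_sigset_t)];
};

int main(void)
{
	printf("sigmask + padding = %zu bytes (expect 128)\n",
	       sizeof(struct demo_ucontext_tail));
	return 0;
}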
Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com
Conflicts: arch/arm64/kernel/Makefile
Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/signal_ilp32.h | 23 +++++++++ arch/arm64/kernel/Makefile | 3 +- arch/arm64/kernel/signal.c | 3 ++ arch/arm64/kernel/signal_ilp32.c | 67 +++++++++++++++++++++++++++ arch/arm64/kernel/sys_ilp32.c | 6 +++ 5 files changed, 101 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/include/asm/signal_ilp32.h create mode 100644 arch/arm64/kernel/signal_ilp32.c
diff --git a/arch/arm64/include/asm/signal_ilp32.h b/arch/arm64/include/asm/signal_ilp32.h new file mode 100644 index 000000000000..64333dfaeaa2 --- /dev/null +++ b/arch/arm64/include/asm/signal_ilp32.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef __ASM_SIGNAL_ILP32_H +#define __ASM_SIGNAL_ILP32_H + +#ifdef CONFIG_ARM64_ILP32 + +#include <linux/compat.h> + +int ilp32_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, + struct pt_regs *regs); + +#else + +static inline int ilp32_setup_rt_frame(int usig, struct ksignal *ksig, + sigset_t *set, struct pt_regs *regs) +{ + return -ENOSYS; +} + +#endif /* CONFIG_ARM64_ILP32 */ + +#endif /* __ASM_SIGNAL_ILP32_H */ diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 843fbd1f8ed7..3b51762934a4 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -41,7 +41,8 @@ obj-$(CONFIG_AARCH32_EL0) += binfmt_elf32.o sys32.o signal32.o \ obj-$(CONFIG_AARCH32_EL0) += sigreturn32.o obj-$(CONFIG_COMPAT_ALIGNMENT_FIXUPS) += compat_alignment.o obj-$(CONFIG_KUSER_HELPERS) += kuser32.o -obj-$(CONFIG_ARM64_ILP32) += binfmt_ilp32.o sys_ilp32.o +obj-$(CONFIG_ARM64_ILP32) += binfmt_ilp32.o sys_ilp32.o \ + signal_ilp32.o obj-$(CONFIG_COMPAT) += sys32_common.o signal32_common.o obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o obj-$(CONFIG_MODULES) += module.o module-plts.o diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index a502d485b530..44f1ba6d9952 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -33,6 +33,7 @@ #include <asm/signal32.h> #include <asm/traps.h> #include <asm/vdso.h> +#include <asm/signal_ilp32.h>
#define get_sigset(s, m) __copy_from_user(s, m, sizeof(*s)) #define put_sigset(s, m) __copy_to_user(m, s, sizeof(*s)) @@ -923,6 +924,8 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs) ret = a32_setup_rt_frame(usig, ksig, oldset, regs); else ret = a32_setup_frame(usig, ksig, oldset, regs); + } else if (is_ilp32_compat_task()) { + ret = ilp32_setup_rt_frame(usig, ksig, oldset, regs); } else { ret = setup_rt_frame(usig, ksig, oldset, regs); } diff --git a/arch/arm64/kernel/signal_ilp32.c b/arch/arm64/kernel/signal_ilp32.c new file mode 100644 index 000000000000..64090e2addb2 --- /dev/null +++ b/arch/arm64/kernel/signal_ilp32.c @@ -0,0 +1,67 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (C) 1995-2009 Russell King + * Copyright (C) 2012 ARM Ltd. + * Copyright (C) 2018 Cavium Networks. + * Yury Norov ynorov@caviumnetworks.com + */ + +#include <linux/compat.h> +#include <linux/signal.h> +#include <linux/syscalls.h> + +#include <asm/fpsimd.h> +#include <asm/unistd.h> +#include <asm/ucontext.h> +#include <asm/vdso.h> + +#include <asm/signal_ilp32.h> +#include <asm/signal32_common.h> + +#define get_sigset(s, m) get_sigset_t(s, m) +#define put_sigset(s, m) put_sigset_t(m, s) + +#define restore_altstack(stack) compat_restore_altstack(stack) +#define __save_altstack(stack, sp) __compat_save_altstack(stack, sp) +#define copy_siginfo_to_user(frame_info, ksig_info) \ + copy_siginfo_to_user32(frame_info, ksig_info) + +#define setup_return(regs, ka, user_layout, usig) \ +{ \ + __setup_return(regs, ka, user_layout, usig); \ + regs->regs[30] = \ + (unsigned long)VDSO_SYMBOL(current->mm->context.vdso, \ + sigtramp_ilp32); \ +} + +struct ilp32_ucontext { + u32 uc_flags; + u32 uc_link; + compat_stack_t uc_stack; + compat_sigset_t uc_sigmask; + /* glibc uses a 1024-bit sigset_t */ + __u8 __unused[1024 / 8 - sizeof(compat_sigset_t)]; + /* last for future expansion */ + struct sigcontext uc_mcontext; +}; + +struct rt_sigframe { + struct compat_siginfo info; + struct ilp32_ucontext uc; +}; + +#include <asm/signal_common.h> + +COMPAT_SYSCALL_DEFINE0(ilp32_rt_sigreturn) +{ + struct pt_regs *regs = current_pt_regs(); + + return __sys_rt_sigreturn(regs); +} + +int ilp32_setup_rt_frame(int usig, struct ksignal *ksig, + sigset_t *set, struct pt_regs *regs) +{ + return __setup_rt_frame(usig, ksig, set, regs); +} diff --git a/arch/arm64/kernel/sys_ilp32.c b/arch/arm64/kernel/sys_ilp32.c index 05eca957a18d..1c8db8f8341a 100644 --- a/arch/arm64/kernel/sys_ilp32.c +++ b/arch/arm64/kernel/sys_ilp32.c @@ -55,6 +55,12 @@ #define __arm64_compat_sys_msgctl __arm64_compat_sys_old_msgctl #define __arm64_compat_sys_shmctl __arm64_compat_sys_old_shmctl
+/* + * Using custom wrapper for rt_sigreturn() to handle custom + * struct rt_sigframe. + */ +#define __arm64_compat_sys_rt_sigreturn __arm64_compat_sys_ilp32_rt_sigreturn + /* * Wrappers to pass the pt_regs argument. */
From: Yury Norov ynorov@caviumnetworks.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
ILP32 has context-related structures that differ from both aarch32 and aarch64/lp64. In this patch, compat_arch_ptrace() is renamed to compat_a32_ptrace(), and the new compat_arch_ptrace() only chooses between compat_a32_ptrace() and the ILP32 path.
The ILP32 path calls the generic compat_ptrace_request() for all requests; PTRACE_GETSIGMASK and PTRACE_SETSIGMASK, which need special handling, are dealt with by the compat handlers added earlier in the series.
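For reference, a hedged user-space sketch of issuing PTRACE_GETSIGMASK from a 64-bit tracer, the request whose mask layout the compat path has to care about. It assumes glibc exposes the request macro and otherwise falls back to the raw UAPI number; error handling is minimal:

#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>

#ifndef PTRACE_GETSIGMASK
#define PTRACE_GETSIGMASK 0x420a	/* value from include/uapi/linux/ptrace.h */
#endif

int dump_sigmask(pid_t pid)
{
	sigset_t mask;

	/* addr carries the buffer size, data points to the buffer */
	if (ptrace(PTRACE_GETSIGMASK, pid, sizeof(mask), &mask) == -1)
		return -1;
	printf("SIGINT blocked: %d\n", sigismember(&mask, SIGINT));
	return 0;
}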
Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Bamvor Jian Zhang bamv2005@gmail.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/kernel/ptrace.c | 21 +++++++++++++++++++-- 1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index b87ebdaced90..26f6888d29ac 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -1593,9 +1593,11 @@ static const struct user_regset_view user_aarch64_view = { .regsets = aarch64_regsets, .n = ARRAY_SIZE(aarch64_regsets) };
-#ifdef CONFIG_AARCH32_EL0 +#ifdef CONFIG_COMPAT #include <linux/compat.h> +#endif
+#ifdef CONFIG_AARCH32_EL0 enum compat_regset { REGSET_COMPAT_GPR, REGSET_COMPAT_VFP, @@ -2032,7 +2034,7 @@ static int compat_ptrace_sethbpregs(struct task_struct *tsk, compat_long_t num, } #endif /* CONFIG_HAVE_HW_BREAKPOINT */
-long compat_arch_ptrace(struct task_struct *child, compat_long_t request, +static long compat_a32_ptrace(struct task_struct *child, compat_long_t request, compat_ulong_t caddr, compat_ulong_t cdata) { unsigned long addr = caddr; @@ -2109,8 +2111,23 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
return ret; } + +#else +#define compat_a32_ptrace(child, request, caddr, cdata) (0) #endif /* CONFIG_AARCH32_EL0 */
+#ifdef CONFIG_COMPAT +long compat_arch_ptrace(struct task_struct *child, compat_long_t request, + compat_ulong_t caddr, compat_ulong_t cdata) +{ + if (is_a32_compat_task()) + return compat_a32_ptrace(child, request, caddr, cdata); + + /* ILP32 */ + return compat_ptrace_request(child, request, caddr, cdata); +} +#endif + const struct user_regset_view *task_user_regset_view(struct task_struct *task) { #ifdef CONFIG_AARCH32_EL0
From: Philipp Tomsich philipp.tomsich@theobroma-systems.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
The ILP32 vDSO exports the following symbols: __kernel_rt_sigreturn, __kernel_gettimeofday, __kernel_clock_gettime and __kernel_clock_getres.
The kernel selects which shared object to use based on the result of is_ilp32_compat_task() in arch/arm64/kernel/vdso.c, so the correct pages and special mapping are installed for each ABI.
Adjusted to move the data page before code pages in sync with commit 601255ae3c98 ("arm64: vdso: move data page before code pages")
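As a user-space aside (not part of the patch), a process can locate whichever vDSO image the kernel mapped for it through the auxiliary vector; the ILP32 image exports the same symbol names as the LP64 one:

#include <elf.h>
#include <stdio.h>
#include <sys/auxv.h>

int main(void)
{
	unsigned long vdso = getauxval(AT_SYSINFO_EHDR);

	printf("vDSO ELF image mapped at %#lx\n", vdso);
	return vdso == 0;
}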
Signed-off-by: Philipp Tomsich philipp.tomsich@theobroma-systems.com Signed-off-by: Christoph Muellner christoph.muellner@theobroma-systems.com Signed-off-by: Yury Norov ynorov@caviumnetworks.com Signed-off-by: Bamvor Jian Zhang bamv2005@gmail.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/Makefile | 3 + arch/arm64/include/asm/vdso.h | 9 ++ arch/arm64/kernel/Makefile | 1 + arch/arm64/kernel/asm-offsets.c | 7 ++ arch/arm64/kernel/vdso-ilp32/.gitignore | 2 + arch/arm64/kernel/vdso-ilp32/Makefile | 108 ++++++++++++++++++ arch/arm64/kernel/vdso-ilp32/vdso-ilp32.S | 22 ++++ arch/arm64/kernel/vdso-ilp32/vdso-ilp32.lds.S | 88 ++++++++++++++ arch/arm64/kernel/vdso.c | 42 ++++++- 9 files changed, 281 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/kernel/vdso-ilp32/.gitignore create mode 100644 arch/arm64/kernel/vdso-ilp32/Makefile create mode 100644 arch/arm64/kernel/vdso-ilp32/vdso-ilp32.S create mode 100644 arch/arm64/kernel/vdso-ilp32/vdso-ilp32.lds.S
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 2d49aea0ff67..efd4b861fb86 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -203,6 +203,9 @@ ifdef CONFIG_COMPAT_VDSO $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 \ include/generated/vdso32-offsets.h arch/arm64/kernel/vdso32/vdso.so endif +ifeq ($(CONFIG_ARM64_ILP32), y) + $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso-ilp32 include/generated/vdso-ilp32-offsets.h +endif endif
include $(srctree)/scripts/Makefile.defconf diff --git a/arch/arm64/include/asm/vdso.h b/arch/arm64/include/asm/vdso.h index b4ae32109932..0cedfa1cce8a 100644 --- a/arch/arm64/include/asm/vdso.h +++ b/arch/arm64/include/asm/vdso.h @@ -21,6 +21,12 @@ #include <generated/vdso32-offsets.h> #endif
+#ifdef CONFIG_ARM64_ILP32 +#include <generated/vdso-ilp32-offsets.h> +#else +#define vdso_offset_sigtramp_ilp32 ({ BUILD_BUG(); 0; }) +#endif + #define VDSO_SYMBOL(base, name) \ ({ \ (void *)(vdso_offset_##name - VDSO_LBASE + (unsigned long)(base)); \ @@ -28,6 +34,9 @@
extern char vdso_start[], vdso_end[]; extern char vdso32_start[], vdso32_end[]; +#ifdef CONFIG_ARM64_ILP32 +extern char vdso_ilp32_start[], vdso_ilp32_end[]; +#endif
#endif /* !__ASSEMBLY__ */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 3b51762934a4..d2dda4caf90b 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -75,6 +75,7 @@ obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o obj-$(CONFIG_ARM64_MTE) += mte.o obj-y += vdso-wrap.o obj-$(CONFIG_COMPAT_VDSO) += vdso32-wrap.o +obj-$(CONFIG_ARM64_ILP32) += vdso-ilp32/ obj-$(CONFIG_UNWIND_PATCH_PAC_INTO_SCS) += patch-scs.o CFLAGS_patch-scs.o += -mbranch-protection=none
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index acf922704a41..e997ad275afb 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -120,6 +120,13 @@ int main(void) DEFINE(SOFTIRQ_SHIFT, SOFTIRQ_SHIFT); DEFINE(IRQ_CPUSTAT_SOFTIRQ_PENDING, offsetof(irq_cpustat_t, __softirq_pending)); BLANK(); +#ifdef CONFIG_COMPAT + DEFINE(COMPAT_TVAL_TV_SEC, offsetof(struct old_timeval32, tv_sec)); + DEFINE(COMPAT_TVAL_TV_USEC, offsetof(struct old_timeval32, tv_usec)); + DEFINE(COMPAT_TSPEC_TV_SEC, offsetof(struct old_timespec32, tv_sec)); + DEFINE(COMPAT_TSPEC_TV_NSEC, offsetof(struct old_timespec32, tv_nsec)); + BLANK(); +#endif DEFINE(CPU_BOOT_TASK, offsetof(struct secondary_data, task)); BLANK(); DEFINE(FTR_OVR_VAL_OFFSET, offsetof(struct arm64_ftr_override, val)); diff --git a/arch/arm64/kernel/vdso-ilp32/.gitignore b/arch/arm64/kernel/vdso-ilp32/.gitignore new file mode 100644 index 000000000000..61806c3fd68b --- /dev/null +++ b/arch/arm64/kernel/vdso-ilp32/.gitignore @@ -0,0 +1,2 @@ +vdso-ilp32.lds +vdso-ilp32-offsets.h diff --git a/arch/arm64/kernel/vdso-ilp32/Makefile b/arch/arm64/kernel/vdso-ilp32/Makefile new file mode 100644 index 000000000000..9a5bbe313769 --- /dev/null +++ b/arch/arm64/kernel/vdso-ilp32/Makefile @@ -0,0 +1,108 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# Building a vDSO image for AArch64. +# +# Author: Will Deacon will.deacon@arm.com +# Heavily based on the vDSO Makefiles for other archs. +# + +# Absolute relocation type $(ARCH_REL_TYPE_ABS) needs to be defined before +# the inclusion of generic Makefile. +ARCH_REL_TYPE_ABS := R_AARCH64_JUMP_SLOT|R_AARCH64_GLOB_DAT|R_AARCH64_ABS64 +include $(srctree)/lib/vdso/Makefile + +obj-ilp32-vdso := vgettimeofday-ilp32.o note-ilp32.o sigreturn-ilp32.o + +# Build rules +targets := $(obj-ilp32-vdso) vdso-ilp32.so vdso-ilp32.so.dbg +obj-ilp32-vdso := $(addprefix $(obj)/, $(obj-ilp32-vdso)) + +btildflags-$(CONFIG_ARM64_BTI_KERNEL) += -z force-bti + +# -Bsymbolic has been added for consistency with arm, the compat vDSO and +# potential future proofing if we end up with internal calls to the exported +# routines, as x86 does (see 6f121e548f83 ("x86, vdso: Reimplement vdso.so +# preparation in build-time C")). 
+ldflags-y := -shared -nostdlib -soname=linux-ilp32-vdso.so.1 --hash-style=sysv \ + -Bsymbolic $(call ld-option, --no-eh-frame-hdr) --build-id -n \ + $(btildflags-y) -T + +ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18 +ccflags-y += -DDISABLE_BRANCH_PROFILING +#ccflags-y += -nostdlib +ccflags-y += -nostdlib -Wl,-soname=linux-ilp32-vdso.so.1 \ + $(call cc-ldoption, -Wl$(comma)--hash-style=sysv) + +CFLAGS_REMOVE_vgettimeofday-ilp32.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) $(GCC_PLUGINS_CFLAGS) +KBUILD_CFLAGS += $(DISABLE_LTO) +KASAN_SANITIZE := n +UBSAN_SANITIZE := n +OBJECT_FILES_NON_STANDARD := y +KCOV_INSTRUMENT := n + +CFLAGS_vgettimeofday-ilp32.o = -O2 -mcmodel=tiny -fasynchronous-unwind-tables -mabi=ilp32 + +ifneq ($(c-gettimeofday-y),) + CFLAGS_vgettimeofday-ilp32.o += -include $(c-gettimeofday-y) +endif + +# Clang versions less than 8 do not support -mcmodel=tiny +ifeq ($(CONFIG_CC_IS_CLANG), y) + ifeq ($(shell test $(CONFIG_CLANG_VERSION) -lt 80000; echo $$?),0) + CFLAGS_REMOVE_vgettimeofday-ilp32.o += -mcmodel=tiny + endif +endif + +# Disable gcov profiling for VDSO code +GCOV_PROFILE := n + +obj-y += vdso-ilp32.o +extra-y += vdso-ilp32.lds +CPPFLAGS_vdso-ilp32.lds += -P -C -U$(ARCH) -mabi=ilp32 + +# Force dependency (incbin is bad) +$(obj)/vdso-ilp32.o : $(obj)/vdso-ilp32.so + +# Link rule for the .so file, .lds has to be first +$(obj)/vdso-ilp32.so.dbg: $(obj)/vdso-ilp32.lds $(obj-ilp32-vdso) + $(call if_changed,vdso-ilp32ld_and_vdso_check) + +# Strip rule for the .so file +$(obj)/%.so: OBJCOPYFLAGS := -S +$(obj)/%.so: $(obj)/%.so.dbg FORCE + $(call if_changed,objcopy) + +# Generate VDSO offsets using helper script +gen-vdsosym := $(srctree)/$(src)/../vdso/gen_vdso_offsets.sh +quiet_cmd_vdsosym = VDSOSYM $@ + cmd_vdsosym = $(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@ + +include/generated/vdso-ilp32-offsets.h: $(obj)/vdso-ilp32.so.dbg FORCE + $(call if_changed,vdsosym) + +$(obj)/vgettimeofday-ilp32.o: $(src)/../vdso/vgettimeofday.c + $(call if_changed_dep,vdso-ilp32cc) + +$(obj)/note-ilp32.o: $(src)/../vdso/note.S + $(call if_changed_dep,vdso-ilp32as) + +$(obj)/sigreturn-ilp32.o: $(src)/../vdso/sigreturn.S + $(call if_changed_dep,vdso-ilp32as) + +# Actual build commands +quiet_cmd_vdso-ilp32ld_and_vdso_check = LD $@ + cmd_vdso-ilp32ld_and_vdso_check = $(CC) $(c_flags) -mabi=ilp32 -Wl,-n -Wl,-T $^ -o $@ +quiet_cmd_vdso-ilp32cc = VDSOILP32C $@ + cmd_vdso-ilp32cc= $(CC) $(c_flags) -mabi=ilp32 -c -o $@ $< +quiet_cmd_vdso-ilp32as = VDSOILP32A $@ + cmd_vdso-ilp32as = $(CC) $(a_flags) -mabi=ilp32 -c -o $@ $< + +# Install commands for the unstripped file +quiet_cmd_vdso_install = INSTALL $@ + cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@ + +vdso-ilp32.so: $(obj)/vdso-ilp32.so.dbg + @mkdir -p $(MODLIB)/vdso + $(call cmd,vdso_install) + +vdso_install: vdso-ilp32.so diff --git a/arch/arm64/kernel/vdso-ilp32/vdso-ilp32.S b/arch/arm64/kernel/vdso-ilp32/vdso-ilp32.S new file mode 100644 index 000000000000..52509a507d26 --- /dev/null +++ b/arch/arm64/kernel/vdso-ilp32/vdso-ilp32.S @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* + * Copyright (C) 2012 ARM Limited + * Author: Will Deacon will.deacon@arm.com + */ + +#include <linux/init.h> +#include <linux/linkage.h> +#include <linux/const.h> +#include <asm/page.h> + + __PAGE_ALIGNED_DATA + + .globl vdso_ilp32_start, vdso_ilp32_end + .balign PAGE_SIZE +vdso_ilp32_start: + .incbin "arch/arm64/kernel/vdso-ilp32/vdso-ilp32.so" + .balign PAGE_SIZE +vdso_ilp32_end: + + .previous diff --git 
a/arch/arm64/kernel/vdso-ilp32/vdso-ilp32.lds.S b/arch/arm64/kernel/vdso-ilp32/vdso-ilp32.lds.S new file mode 100644 index 000000000000..831a68a8d7b3 --- /dev/null +++ b/arch/arm64/kernel/vdso-ilp32/vdso-ilp32.lds.S @@ -0,0 +1,88 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* + * GNU linker script for the VDSO library. + * + * Copyright (C) 2012 ARM Limited + * Author: Will Deacon will.deacon@arm.com + * Heavily based on the vDSO linker scripts for other archs. + */ + +#include <linux/const.h> +#include <asm/page.h> +#include <asm/vdso.h> + +SECTIONS +{ + PROVIDE(_vdso_data = . - PAGE_SIZE); + PROVIDE(_vdso_data = . - __VVAR_PAGES * PAGE_SIZE); +#ifdef CONFIG_TIME_NS + PROVIDE(_timens_data = _vdso_data + PAGE_SIZE); +#endif + . = VDSO_LBASE + SIZEOF_HEADERS; + + .hash : { *(.hash) } :text + .gnu.hash : { *(.gnu.hash) } + .dynsym : { *(.dynsym) } + .dynstr : { *(.dynstr) } + .gnu.version : { *(.gnu.version) } + .gnu.version_d : { *(.gnu.version_d) } + .gnu.version_r : { *(.gnu.version_r) } + + .note : { *(.note.*) } :text :note + + . = ALIGN(16); + + .text : { *(.text*) } :text =0xd503201f + PROVIDE (__etext = .); + PROVIDE (_etext = .); + PROVIDE (etext = .); + + .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr + .eh_frame : { KEEP (*(.eh_frame)) } :text + + .dynamic : { *(.dynamic) } :text :dynamic + + .rodata : { *(.rodata*) } :text + + _end = .; + PROVIDE(end = .); + + /DISCARD/ : { + *(.note.GNU-stack) + *(.data .data.* .gnu.linkonce.d.* .sdata*) + *(.bss .sbss .dynbss .dynsbss) + } +} + +/* + * We must supply the ELF program headers explicitly to get just one + * PT_LOAD segment, and set the flags explicitly to make segments read-only. + */ +PHDRS +{ + text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */ + dynamic PT_DYNAMIC FLAGS(4); /* PF_R */ + note PT_NOTE FLAGS(4); /* PF_R */ + eh_frame_hdr PT_GNU_EH_FRAME; +} + +/* + * This controls what symbols we export from the DSO. + */ +VERSION +{ + LINUX_4.12 { + global: + __kernel_rt_sigreturn; + __kernel_gettimeofday; + __kernel_clock_gettime; + __kernel_clock_getres; + local: *; + }; +} + +/* + * Make the sigreturn code visible to the kernel. + */ +VDSO_sigtramp_ilp32 = __kernel_rt_sigreturn; diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c index 47c2eb75f591..fff989a09f75 100644 --- a/arch/arm64/kernel/vdso.c +++ b/arch/arm64/kernel/vdso.c @@ -32,6 +32,9 @@ enum vdso_abi { VDSO_ABI_AA64, VDSO_ABI_AA32, +#ifdef CONFIG_ARM64_ILP32 + VDSO_ABI_ILP32 +#endif };
enum vvar_pages { @@ -64,6 +67,13 @@ static struct vdso_abi_info vdso_info[] __ro_after_init = { .vdso_code_end = vdso32_end, }, #endif /* CONFIG_COMPAT_VDSO */ +#ifdef CONFIG_ARM64_ILP32 + [VDSO_ABI_ILP32] = { + .name = "vdso", + .vdso_code_start = vdso_ilp32_start, + .vdso_code_end = vdso_ilp32_end, + }, +#endif };
/* @@ -427,6 +437,19 @@ static struct vm_special_mapping aarch64_vdso_maps[] __ro_after_init = { }, };
+#ifdef CONFIG_ARM64_ILP32 +static struct vm_special_mapping ilp32_vdso_maps[] __ro_after_init = { + [AA64_MAP_VVAR] = { + .name = "[vvar]", + .fault = vvar_fault, + }, + [AA64_MAP_VDSO] = { + .name = "[vdso]", + .mremap = vdso_mremap, + }, +}; +#endif + static int __init vdso_init(void) { vdso_info[VDSO_ABI_AA64].dm = &aarch64_vdso_maps[AA64_MAP_VVAR]; @@ -436,15 +459,32 @@ static int __init vdso_init(void) } arch_initcall(vdso_init);
+#ifdef CONFIG_ARM64_ILP32 +static int __init vdso_ilp32_init(void) +{ + vdso_info[VDSO_ABI_ILP32].dm = &ilp32_vdso_maps[AA64_MAP_VVAR]; + vdso_info[VDSO_ABI_ILP32].cm = &ilp32_vdso_maps[AA64_MAP_VDSO]; + + return __vdso_init(VDSO_ABI_ILP32); +} +arch_initcall(vdso_ilp32_init); +#endif + int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) { struct mm_struct *mm = current->mm; + enum vdso_abi abi = VDSO_ABI_AA64; int ret;
if (mmap_write_lock_killable(mm)) return -EINTR;
- ret = __setup_additional_pages(VDSO_ABI_AA64, mm, bprm, uses_interp); +#ifdef CONFIG_ARM64_ILP32 + if (is_ilp32_compat_task()) + abi = VDSO_ABI_ILP32; +#endif + ret = __setup_additional_pages(abi, mm, bprm, uses_interp); + mmap_write_unlock(mm);
return ret;
From: Andrew Pinski apinski@cavium.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
Reference: https://github.com/norov/linux/commits/ilp32-5.2
--------------------------------
This patch adds the config option for ILP32.
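For reference, a minimal probe (not part of this series; the toolchain invocation is an assumption and needs an ILP32-capable AArch64 GCC and sysroot) showing what the new ABI looks like from user space once CONFIG_ARM64_ILP32=y:

/* Built e.g. with: gcc -mabi=ilp32 probe.c -o probe
 * On ILP32 both sizes print as 4; on LP64 they print as 8,
 * while the program still runs on the AArch64 instruction set. */
#include <stdio.h>

int main(void)
{
	printf("sizeof(long)=%zu sizeof(void *)=%zu\n",
	       sizeof(long), sizeof(void *));
	return 0;
}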
Signed-off-by: Andrew Pinski Andrew.Pinski@caviumnetworks.com Signed-off-by: Philipp Tomsich philipp.tomsich@theobroma-systems.com Signed-off-by: Christoph Muellner christoph.muellner@theobroma-systems.com Signed-off-by: Yury Norov ynorov@caviumnetworks.com Reviewed-by: David Daney ddaney@caviumnetworks.com Signed-off-by: Yury Norov ynorov@marvell.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com
Conflicts: arch/arm64/Kconfig
Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/Kconfig | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 23ac6dbf3856..920f43df3da1 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1595,6 +1595,14 @@ config ARM64_TAGGED_ADDR_ABI to system calls as pointer arguments. For details, see Documentation/arch/arm64/tagged-address-abi.rst.
+config ARM64_ILP32 + bool "Kernel support for ILP32" + depends on !ARM64_PTR_AUTH + help + This option enables support for AArch64 ILP32 user space. ILP32 + is an ABI where long and pointers are 32bits but it uses the AARCH64 + instruction set. + menuconfig AARCH32_EL0 bool "Kernel support for 32-bit EL0" depends on ARM64_4K_PAGES || EXPERT @@ -2288,7 +2296,7 @@ endmenu # "Boot options"
config COMPAT def_bool y - depends on AARCH32_EL0 + depends on AARCH32_EL0 || ARM64_ILP32
menu "Power management options"
From: Xiongfeng Wang wangxiongfeng2@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
--------------------------------
An ILP32 application is a compat application, but its syscall numbers differ from those of a traditional compat a32 application: they are the same as the lp64 syscall numbers. So we need to fix the secure computing mode 1 syscall check for ilp32.
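As a reminder of the user-visible contract that the mode 1 table has to honour, here is a minimal strict-mode sketch (plain user-space C, not taken from this series):

/* After PR_SET_SECCOMP the task may only issue read(), write(),
 * _exit() and sigreturn; any other syscall is fatal. For an ILP32
 * task these must be matched against the lp64-style syscall numbers. */
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void)
{
	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0))
		return 1;

	write(1, "still alive\n", 12);	/* allowed */
	_exit(0);			/* allowed; e.g. getpid() here would be fatal */
}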
Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Signed-off-by: Yury Norov ynorov@caviumnetworks.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/seccomp.h | 30 ++++++++++++++++++++++++++++++ 1 file changed, 30 insertions(+)
diff --git a/arch/arm64/include/asm/seccomp.h b/arch/arm64/include/asm/seccomp.h index 0f4cc9322eb4..a6be48b9225a 100644 --- a/arch/arm64/include/asm/seccomp.h +++ b/arch/arm64/include/asm/seccomp.h @@ -17,6 +17,36 @@ #define __NR_seccomp_sigreturn_32 __NR_compat_rt_sigreturn #endif /* CONFIG_COMPAT */
+#ifdef CONFIG_COMPAT +#ifndef __COMPAT_SYSCALL_NR + +static inline const int *get_compat_mode1_syscalls(void) +{ +#ifdef CONFIG_AARCH32_EL0 + static const int mode1_syscalls_a32[] = { + __NR_compat_read, __NR_compat_write, + __NR_compat_exit, __NR_compat_sigreturn, + 0, /* null terminated */ + }; +#endif + static const int mode1_syscalls_ilp32[] = { + __NR_read, __NR_write, + __NR_exit, __NR_rt_sigreturn, + 0, /* null terminated */ + }; + +#ifdef CONFIG_AARCH32_EL0 + if (is_a32_compat_task()) + return mode1_syscalls_a32; +#endif + return mode1_syscalls_ilp32; +} + +#define get_compat_mode1_syscalls get_compat_mode1_syscalls + +#endif +#endif + #include <asm-generic/seccomp.h>
#define SECCOMP_ARCH_NATIVE AUDIT_ARCH_AARCH64
From: Xiongfeng Wang wangxiongfeng2@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
--------------------------------
Commit 15956689a0e60 ("arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return") clears the upper 32 bits of x0 on syscall return for compat applications. That is only suitable for A32 applications; it is not correct for ILP32 applications.
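To make the difference concrete, here is a user-space sketch (illustrative only, not kernel code) of the two fixups applied to a syscall return value; sign-extending from bit 31 is what an A32 task expects for negative errno values, but it mangles a legitimate ILP32 value whose bit 31 happens to be set:

#include <stdint.h>
#include <stdio.h>

static uint64_t sign_extend_from_bit31(uint64_t v)
{
	return (uint64_t)(int64_t)(int32_t)v;	/* replicate bit 31 upwards */
}

static uint64_t lower_32_bits(uint64_t v)
{
	return v & 0xffffffffULL;
}

int main(void)
{
	uint64_t ret = 0xffff1000;	/* plausible high 32-bit address */

	/* A32 treatment: 0xffffffffffff1000, indistinguishable from -errno. */
	printf("sign-extended: %#llx\n",
	       (unsigned long long)sign_extend_from_bit31(ret));
	/* ILP32 must keep the value intact (upper bits are already zero). */
	printf("truncated:     %#llx\n",
	       (unsigned long long)lower_32_bits(ret));
	return 0;
}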
Fixes: 15956689a0e60 ("arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return") Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/syscall.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h index 9a3d99c8aa80..37161bd87f57 100644 --- a/arch/arm64/include/asm/syscall.h +++ b/arch/arm64/include/asm/syscall.h @@ -38,7 +38,7 @@ static inline long syscall_get_return_value(struct task_struct *task, { unsigned long val = regs->regs[0];
- if (is_compat_thread(task_thread_info(task))) + if (is_a32_compat_thread(task_thread_info(task))) val = sign_extend64(val, 31);
return val; @@ -59,7 +59,7 @@ static inline void syscall_set_return_value(struct task_struct *task, if (error) val = error;
- if (is_compat_thread(task_thread_info(task))) + if (is_a32_compat_thread(task_thread_info(task))) val = lower_32_bits(val);
regs->regs[0] = val;
From: Xiongfeng Wang wangxiongfeng2@huawei.com
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
--------------------------------
ARM erratum 1418040 only affects AArch32 tasks; ILP32 is not affected.
Signed-off-by: Xiongfeng Wang wangxiongfeng2@hauwei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/kernel/process.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 82f2754ed4b5..489810e3a7a1 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -484,7 +484,7 @@ static void erratum_1418040_thread_switch(struct task_struct *next) !this_cpu_has_cap(ARM64_WORKAROUND_1418040)) return;
- if (is_compat_thread(task_thread_info(next))) + if (is_a32_compat_thread(task_thread_info(next))) sysreg_clear_set(cntkctl_el1, ARCH_TIMER_USR_VCT_ACCESS_EN, 0); else sysreg_clear_set(cntkctl_el1, 0, ARCH_TIMER_USR_VCT_ACCESS_EN);
From: Chen Jiahao chenjiahao16@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
-------------------------------
On the arm64be_ilp32 platform, audit fails to record logs in some cases because syscall_get_arch() in arch/arm64 returns AUDIT_ARCH_AARCH64ILP32, which looks like an arm32 arch value. Audit therefore gets a wrong arch and mismatches some syscall numbers in audit_match_perm().
This patch fixes it by using AUDIT_ARCH_AARCH64, which matches all syscall numbers on the arm64be_ilp32 platform.
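For context, the value returned by syscall_get_arch() is the same AUDIT_ARCH_* constant that user space sees in seccomp filters and audit rules. A conventional classic-BPF arch-check prologue (a sketch, not part of this series) looks like the following, and with this fix ILP32 tasks pass it the same way LP64 tasks do:

/* Kill the process unless the reported arch is AUDIT_ARCH_AARCH64.
 * The filter would be installed with
 * prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog). */
#include <stddef.h>
#include <linux/audit.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

static struct sock_filter arch_check[] = {
	BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
		 offsetof(struct seccomp_data, arch)),
	BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_AARCH64, 1, 0),
	BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
	BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
};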
Fixes: 0fe4141ba63a ("[Backport] arm64: introduce AUDIT_ARCH_AARCH64ILP32 for ilp32") Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Reviewed-by: Liao Chang liaochang1@huawei.com Signed-off-by: Chen Jun chenjun102@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/syscall.h | 3 --- include/uapi/linux/audit.h | 1 - 2 files changed, 4 deletions(-)
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h index 37161bd87f57..3b9788d121f6 100644 --- a/arch/arm64/include/asm/syscall.h +++ b/arch/arm64/include/asm/syscall.h @@ -86,9 +86,6 @@ static inline int syscall_get_arch(struct task_struct *task) if (is_a32_compat_thread(task_thread_info(task))) return AUDIT_ARCH_ARM;
- else if (is_ilp32_compat_task()) - return AUDIT_ARCH_AARCH64ILP32; - return AUDIT_ARCH_AARCH64; }
diff --git a/include/uapi/linux/audit.h b/include/uapi/linux/audit.h index bafc9a2ac2db..d676ed2b246e 100644 --- a/include/uapi/linux/audit.h +++ b/include/uapi/linux/audit.h @@ -386,7 +386,6 @@ enum { #define __AUDIT_ARCH_LE 0x40000000
#define AUDIT_ARCH_AARCH64 (EM_AARCH64|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE) -#define AUDIT_ARCH_AARCH64ILP32 (EM_AARCH64|__AUDIT_ARCH_LE) #define AUDIT_ARCH_ALPHA (EM_ALPHA|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE) #define AUDIT_ARCH_ARCOMPACT (EM_ARCOMPACT|__AUDIT_ARCH_LE) #define AUDIT_ARCH_ARCOMPACTBE (EM_ARCOMPACT)
From: Xiongfeng Wang wangxiongfeng2@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
----------------------------------------
One of the ILP32 patches renames 'compat_user_mode' and 'compat_thumb_mode' to 'a32_user_mode' and 'a32_thumb_mode'. But these two macros are used by some open-source userspace applications. To keep compatibility, redefine the old names as aliases of the new ones.
Fixes: 0f47a7a7b1ea ("arm64: rename functions that reference compat term") Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Reviewed-by: liwei liwei391@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/ptrace.h | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h index 992b04efe7f8..9cb0fd362ac4 100644 --- a/arch/arm64/include/asm/ptrace.h +++ b/arch/arm64/include/asm/ptrace.h @@ -224,6 +224,8 @@ static inline void forget_syscall(struct pt_regs *regs) #define a32_thumb_mode(regs) (0) #endif
+#define compat_thumb_mode(regs) a32_thumb_mode(regs) + #define user_mode(regs) \ (((regs)->pstate & PSR_MODE_MASK) == PSR_MODE_EL0t)
@@ -231,6 +233,8 @@ static inline void forget_syscall(struct pt_regs *regs) (((regs)->pstate & (PSR_MODE32_BIT | PSR_MODE_MASK)) == \ (PSR_MODE32_BIT | PSR_MODE_EL0t))
+#define compat_user_mode(regs) a32_user_mode(regs) + #define processor_mode(regs) \ ((regs)->pstate & PSR_MODE_MASK)
From: Xiongfeng Wang wangxiongfeng2@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
----------------------------------------
When CONFIG_ARM64_ILP32 and CONFIG_UBSAN are both enabled, the build fails with the errors below. We need to disable UBSAN for 'vdso-ilp32', as commit ab2a69eee74d ("Fix compile problem when CONFIG_KASAN and CONFIG_UBSAN were on") did.
`.data' referenced in section `.text' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o: defined in discarded section `.data' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o `.data' referenced in section `.text' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o: defined in discarded section `.data' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o `.data' referenced in section `.text' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o: defined in discarded section `.data' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o `.data' referenced in section `.text' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o: defined in discarded section `.data' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o `.data' referenced in section `.text' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o: defined in discarded section `.data' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o `.data' referenced in section `.text' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o: defined in discarded section `.data' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o `.data' referenced in section `.text' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o: defined in discarded section `.data' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o `.data' referenced in section `.text' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o: defined in discarded section `.data' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o `.data' referenced in section `.text' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o: defined in discarded section `.data' of arch/arm64/kernel/vdso-ilp32/gettimeofday-ilp32.o
Signed-off-by: Wei Li liwei391@huawei.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Yang Yingliang yangyingliang@huawei.com Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Reviewed-by: Cheng Jian cj.chengjian@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/kernel/vdso-ilp32/Makefile | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/arch/arm64/kernel/vdso-ilp32/Makefile b/arch/arm64/kernel/vdso-ilp32/Makefile index 9a5bbe313769..088ba0a7237d 100644 --- a/arch/arm64/kernel/vdso-ilp32/Makefile +++ b/arch/arm64/kernel/vdso-ilp32/Makefile @@ -55,6 +55,9 @@ endif
# Disable gcov profiling for VDSO code GCOV_PROFILE := n +KASAN_SANITIZE := n +UBSAN_SANITIZE := n +KCOV_INSTRUMENT := n
obj-y += vdso-ilp32.o extra-y += vdso-ilp32.lds
From: Chen Jiahao chenjiahao16@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
-------------------------------------------------------------------------
In commit e29beeac53c8 ("arm64: uaccess: remove set_fs()"), thread_info->addr_limit and the USER_DS macro were removed and replaced by the TASK_SIZE_MAX macro. However, the address limit set by TASK_SIZE_MAX is incorrect in compat mode; see commit 2ef73d5148e ("[Huawei] arm64: fix current_thread_info()->addr_limit setup") for details.
Fix the problem by modifying the TASK_SIZE_MAX definition in compat mode.
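To make the effect concrete, here is a simplified model of the range check that TASK_SIZE_MAX feeds into (a sketch only; the real check is __access_ok() in the uaccess headers, and the VA_BITS value below is just an example):

#include <stdbool.h>
#include <stdint.h>

static bool range_ok(uint64_t addr, uint64_t size, uint64_t task_size_max)
{
	uint64_t end;

	if (__builtin_add_overflow(addr, size, &end))
		return false;
	return end <= task_size_max;
}

/*
 * Example: addr = 0x80000000, size = 0xffffffff (a 32-bit task passing
 * (size_t)-1). Against UL(1) << 48 the sum 0x17fffffff is accepted and the
 * copy only faults later; against UL(0x100000000) it is rejected up front
 * and the syscall returns -EFAULT.
 */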
Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Zhen Lei thunder.leizhen@huawei.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Reviewed-by: Chang Liao liaochang1@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/processor.h | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 7d444cd882ce..abb704640577 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -53,9 +53,10 @@
#define DEFAULT_MAP_WINDOW_64 (UL(1) << VA_BITS_MIN) #define TASK_SIZE_64 (UL(1) << vabits_actual) -#define TASK_SIZE_MAX (UL(1) << VA_BITS)
#ifdef CONFIG_COMPAT +#define TASK_SIZE_MAX (is_compat_task() ? \ + UL(0x100000000) : (UL(1) << VA_BITS)) #if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS) /* * With CONFIG_ARM64_64K_PAGES enabled, the last page is occupied @@ -72,6 +73,7 @@ #define DEFAULT_MAP_WINDOW (is_compat_task() ? \ TASK_SIZE_32 : DEFAULT_MAP_WINDOW_64) #else +#define TASK_SIZE_MAX (UL(1) << VA_BITS) #define TASK_SIZE TASK_SIZE_64 #define DEFAULT_MAP_WINDOW DEFAULT_MAP_WINDOW_64 #endif /* CONFIG_COMPAT */
From: Zhen Lei thunder.leizhen@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
-------------------------------------------------------------------------
access_ok() preliminarily checks 'uaddr' to avoid unnecessary page faults caused by invalid input; the page fault path then performs the accurate address verification based on the task's mm. It was also used to check against get_fs(); see the comments of __access_ok().
But support for get_fs() on arm64 has since been removed by commit edf84200127a ("arm64: uaccess: remove set_fs()"), so access_ok() no longer needs to perform such strict checks for compat tasks.
Removing the is_compat_task() check improves syscall performance: for example, all test items of libMicro improve by 4.89% on average.
The next patch will avoid calling is_ilp32_compat_task() by default by turning off its build option, because ILP32 has specific requirements.
Signed-off-by: Zhen Lei thunder.leizhen@huawei.com Reviewed-by: Cheng Jian cj.chengjian@huawei.com Reviewed-by: Liu Chao liuchao173@huawei.com Reviewed-by: Xiongfeng Wang wangxiongfeng2@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/processor.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index abb704640577..10aae8d9c667 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -55,7 +55,7 @@ #define TASK_SIZE_64 (UL(1) << vabits_actual)
#ifdef CONFIG_COMPAT -#define TASK_SIZE_MAX (is_compat_task() ? \ +#define TASK_SIZE_MAX (is_ilp32_compat_task() ? \ UL(0x100000000) : (UL(1) << VA_BITS)) #if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS) /*
From: Chen Jiahao chenjiahao16@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
-------------------------------
In U32 mode, the testcase libc_write_01 failed as below:
[INFO][libc_write_01.c][29][main]:ret=4093 [INFO][libc_write_01.c][30][main]:size_max=-1 libc_write_01_u32: libc_write_01.c:31: main: Assertion `ret==-1' failed. Aborted
The failure occurs because the __range_ok() check of "addr + size <= TASK_SIZE_MAX" does not behave as expected.
For the U32 testcase libc_write_01, the specified "addr + size" exceeds the 32-bit limit and should yield -EFAULT, but TASK_SIZE_MAX is still defined as UL(1) << VA_BITS in U32 mode, which is much greater than "addr + size" and therefore cannot catch the overflow.
Fix the testcase failure above by defining TASK_SIZE_MAX as the 32-bit limit. Since the is_compat_task() check reduces libMicro performance by 4.89% on average, the fix is wrapped in CONFIG_COMPAT_TASK_SIZE, which defaults to n. Performance is not affected unless this config is enabled manually.
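A reduced user-space sketch of what the testcase asserts (the real libc_write_01 source is not part of this series; this is reconstructed from the log above):

#include <assert.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	/* A U32 task passing (size_t)-1: with the 32-bit TASK_SIZE_MAX the
	 * range check rejects addr + size and write() fails with EFAULT
	 * instead of returning a short count such as 4093. */
	ssize_t ret = write(STDOUT_FILENO, buf, (size_t)-1);

	assert(ret == -1 && errno == EFAULT);
	return 0;
}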
Fixes: cb478b93dc44 ("arm64: replace is_compat_task() with is_ilp32_compat_task() in TASK_SIZE_MAX") Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/Kconfig | 7 +++++++ arch/arm64/include/asm/processor.h | 5 +++++ 2 files changed, 12 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 920f43df3da1..a18106994c98 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1675,6 +1675,13 @@ config THUMB2_COMPAT_VDSO config COMPAT_ALIGNMENT_FIXUPS bool "Fix up misaligned multi-word loads and stores in user space"
+config COMPAT_TASK_SIZE + bool "Set 32-bit compatible task size" + default n + help + Set the task size with 32-bit limit, to be compatible with + 32-bit EL0 tasks. + menuconfig ARMV8_DEPRECATED bool "Emulate deprecated/obsolete ARMv8 instructions" depends on AARCH32_EL0 diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 10aae8d9c667..44cada63ed08 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -55,8 +55,13 @@ #define TASK_SIZE_64 (UL(1) << vabits_actual)
#ifdef CONFIG_COMPAT +#ifdef CONFIG_COMPAT_TASK_SIZE +#define TASK_SIZE_MAX (is_compat_task() ? \ + UL(0x100000000) : (UL(1) << VA_BITS)) +#else #define TASK_SIZE_MAX (is_ilp32_compat_task() ? \ UL(0x100000000) : (UL(1) << VA_BITS)) +#endif #if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS) /* * With CONFIG_ARM64_64K_PAGES enabled, the last page is occupied
From: Chen Jiahao chenjiahao16@huawei.com
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I8JVJ3 CVE: NA
-------------------------------
The previous patch 605330751290 ("[Huawei] arm64: set 32-bit compatible TASK_SIZE_MAX to fix U32 libc_write_01 error") fixed the libc_write_01 testcase failure in U32 mode.
However, that patch inflated the image size when CONFIG_ARM64_ILP32 and CONFIG_AARCH32_EL0 are both set. Fix the problem by testing current_thread_info()->flags against (_TIF_32BIT | _TIF_32BIT_AARCH64) in a single operation, rather than calling test_thread_flag() twice.
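The shape of the saving, as a stand-alone sketch (the flag bit positions below are placeholders, not the kernel's definitions): the combined mask reads the flags word once and performs one test, whereas the two-call form mirrors two separate test_thread_flag() invocations:

#include <stdbool.h>

#define _TIF_32BIT		(1UL << 22)	/* placeholder value */
#define _TIF_32BIT_AARCH64	(1UL << 23)	/* placeholder value */

/* Stand-in for READ_ONCE(current_thread_info()->flags). */
extern volatile unsigned long current_flags;

static bool is_compat_two_calls(void)
{
	return (current_flags & _TIF_32BIT) ||
	       (current_flags & _TIF_32BIT_AARCH64);
}

static bool is_compat_one_test(void)
{
	return current_flags & (_TIF_32BIT | _TIF_32BIT_AARCH64);
}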
Fixes: 605330751290 ("[Huawei] arm64: set 32-bit compatible TASK_SIZE_MAX to fix U32 libc_write_01 error") Signed-off-by: Chen Jiahao chenjiahao16@huawei.com Signed-off-by: Jinjie Ruan ruanjinjie@huawei.com --- arch/arm64/include/asm/is_compat.h | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/arch/arm64/include/asm/is_compat.h b/arch/arm64/include/asm/is_compat.h index 2c2d1f4c26bd..3b870e4bb2fb 100644 --- a/arch/arm64/include/asm/is_compat.h +++ b/arch/arm64/include/asm/is_compat.h @@ -63,7 +63,12 @@ static inline int is_ilp32_compat_thread(struct thread_info *thread)
static inline int is_compat_task(void) { +#if defined(CONFIG_ARM64_ILP32) && defined(CONFIG_AARCH32_EL0) + return READ_ONCE(current_thread_info()->flags) & + (_TIF_32BIT | _TIF_32BIT_AARCH64); +#else return is_a32_compat_task() || is_ilp32_compat_task(); +#endif }
#endif /* CONFIG_COMPAT */