From: Mark Rutland <mark.rutland@arm.com>
mainline inclusion
from mainline-v5.5-rc1
commit bd8b21d3dd661658addc1cd4cc869bab11d28596
category: bugfix
bugzilla: 25285
CVE: NA
--------------------------------
When we load a module, we have to perform some special work for a couple of named sections. To do this, we iterate over all of the module's sections, and perform work for each section we recognize.
To make it easier to handle the unexpected absence of a section, and to make the section-specific logic easier to read, let's factor the section search into a helper. Similar is already done in the core module loader, and other architectures (and ideally we'd unify these in future).
If we expect a module to have an ftrace trampoline section, but it doesn't have one, we'll now reject loading the module. When ARM64_MODULE_PLTS is selected, any correctly built module should have one (and this is assumed by arm64's ftrace PLT code) and the absence of such a section implies something has gone wrong at build time.
Subsequent patches will make use of the new helper.
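For illustration only (not part of the patch), here is a standalone sketch of the same section-search pattern, written against the userspace <elf.h> types rather than the kernel's Elf_* aliases; the helper in the diff below is the authoritative version:

#include <elf.h>
#include <stddef.h>
#include <string.h>

static const Elf64_Shdr *find_section(const Elf64_Ehdr *hdr,
				      const Elf64_Shdr *sechdrs,
				      const char *name)
{
	/* Section names live in the string table indexed by e_shstrndx. */
	const char *secstrs = (const char *)hdr +
			      sechdrs[hdr->e_shstrndx].sh_offset;
	const Elf64_Shdr *s, *se;

	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
		if (strcmp(name, secstrs + s->sh_name) == 0)
			return s;
	}

	return NULL;	/* caller decides whether absence is fatal */
}

(Given an ELF file mapped at ehdr, sechdrs here would be (const Elf64_Shdr *)((const char *)ehdr + ehdr->e_shoff); in the kernel's module loader the equivalent pointers are handed to module_finalize() directly.)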
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Pu Lehui <pulehui@huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/kernel/module.c | 35 ++++++++++++++++++++++++++---------
 1 file changed, 26 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 834ecf108b53..55b00bfb704f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -484,22 +484,39 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 	return -ENOEXEC;
 }
 
-int module_finalize(const Elf_Ehdr *hdr,
-		    const Elf_Shdr *sechdrs,
-		    struct module *me)
+static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
+				    const Elf_Shdr *sechdrs,
+				    const char *name)
 {
 	const Elf_Shdr *s, *se;
 	const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
 
 	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
-		if (strcmp(".altinstructions", secstrs + s->sh_name) == 0)
-			apply_alternatives_module((void *)s->sh_addr, s->sh_size);
+		if (strcmp(name, secstrs + s->sh_name) == 0)
+			return s;
+	}
+
+	return NULL;
+}
+
+int module_finalize(const Elf_Ehdr *hdr,
+		    const Elf_Shdr *sechdrs,
+		    struct module *me)
+{
+	const Elf_Shdr *s;
+
+	s = find_section(hdr, sechdrs, ".altinstructions");
+	if (s)
+		apply_alternatives_module((void *)s->sh_addr, s->sh_size);
+
 #ifdef CONFIG_ARM64_MODULE_PLTS
-		if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) &&
-		    !strcmp(".text.ftrace_trampoline", secstrs + s->sh_name))
-			me->arch.ftrace_trampoline = (void *)s->sh_addr;
-#endif
+	if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE)) {
+		s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
+		if (!s)
+			return -ENOEXEC;
+		me->arch.ftrace_trampoline = (void *)s->sh_addr;
 	}
+#endif
 
 	return 0;
 }
From: Mark Rutland <mark.rutland@arm.com>
mainline inclusion
from mainline-v5.5-rc1
commit f1a54ae9af0da4d76239256ed640a93ab3aadac0
category: bugfix
bugzilla: 25285
CVE: NA
--------------------------------
Currently we lazily-initialize a module's ftrace PLT at runtime when we install the first ftrace call. To do so we have to apply a number of sanity checks, transiently mark the module text as RW, and perform an IPI as part of handling Neoverse-N1 erratum #1542419.
We only expect the ftrace trampoline to point at ftrace_caller() (AKA FTRACE_ADDR), so let's simplify all of this by initializing the PLT at module load time, before the module loader marks the module RO and performs the initial I-cache maintenance for the module.
Thus we can rely on the module having been correctly initialized, and can simplify the runtime work necessary to install an ftrace call in a module. This will also allow for the removal of module_disable_ro().
Tested by forcing ftrace_make_call() to use the module PLT, and then loading up a module after setting up ftrace with:
| echo ":mod:<module-name>" > set_ftrace_filter;
| echo function > current_tracer;
| modprobe <module-name>
Since FTRACE_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is selected, we wrap its use along with most of module_init_ftrace_plt() with ifdeffery rather than using IS_ENABLED().
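To illustrate the point (hypothetical CONFIG_FOO/FOO_ADDR names, not from the patch): IS_ENABLED() only yields a compile-time constant condition, so the compiler still parses the dead branch, and a reference to a symbol that is undefined under the disabled config breaks the build; only the preprocessor can remove the reference entirely.

#ifdef CONFIG_FOO
#define FOO_ADDR 0x1000UL
#endif

static unsigned long get_foo_addr(void)
{
	/*
	 * Would NOT compile with CONFIG_FOO off, even though the
	 * branch is never taken:
	 *
	 *	if (IS_ENABLED(CONFIG_FOO))
	 *		return FOO_ADDR;
	 */
#ifdef CONFIG_FOO
	return FOO_ADDR;	/* reference removed by the preprocessor */
#else
	return 0;
#endif
}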
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Torsten Duwe <duwe@suse.de>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Tested-by: Torsten Duwe <duwe@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Conflicts:
	arch/arm64/kernel/ftrace.c
	arch/arm64/kernel/module.c
Signed-off-by: Pu Lehui <pulehui@huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/kernel/ftrace.c | 50 +++++++++++---------------------------
 arch/arm64/kernel/module.c | 32 +++++++++++++++---------
 2 files changed, 35 insertions(+), 47 deletions(-)
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index b6618391be8c..4e511d0663b6 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -76,9 +76,21 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 	if (offset < -SZ_128M || offset >= SZ_128M) {
 #ifdef CONFIG_ARM64_MODULE_PLTS
-		struct plt_entry trampoline, *dst;
 		struct module *mod;
 
+		/*
+		 * There is only one ftrace trampoline per module. For now,
+		 * this is not a problem since on arm64, all dynamic ftrace
+		 * invocations are routed via ftrace_caller(). This will need
+		 * to be revisited if support for multiple ftrace entry points
+		 * is added in the future, but for now, the pr_err() below
+		 * deals with a theoretical issue only.
+		 */
+		if (addr != FTRACE_ADDR) {
+			pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
+			return -EINVAL;
+		}
+
 		/*
 		 * On kernels that support module PLTs, the offset between the
 		 * branch instruction and its target may legally exceed the
 		 * range of an ordinary relative 'bl' opcode. In this case, we
@@ -96,41 +108,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 		if (WARN_ON(!mod))
 			return -EINVAL;
 
-		/*
-		 * There is only one ftrace trampoline per module. For now,
-		 * this is not a problem since on arm64, all dynamic ftrace
-		 * invocations are routed via ftrace_caller(). This will need
-		 * to be revisited if support for multiple ftrace entry points
-		 * is added in the future, but for now, the pr_err() below
-		 * deals with a theoretical issue only.
-		 */
-		dst = mod->arch.ftrace_trampoline;
-		trampoline = get_plt_entry(addr);
-		if (!plt_entries_equal(dst, &trampoline)) {
-			if (!plt_entries_equal(dst, &(struct plt_entry){})) {
-				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
-				return -EINVAL;
-			}
-
-			/* point the trampoline to our ftrace entry point */
-			module_disable_ro(mod);
-			*dst = trampoline;
-			module_enable_ro(mod, true);
-
-			/*
-			 * Ensure updated trampoline is visible to instruction
-			 * fetch before we patch in the branch. Although the
-			 * architecture doesn't require an IPI in this case,
-			 * Neoverse-N1 erratum #1542419 does require one
-			 * if the TLB maintenance in module_enable_ro() is
-			 * skipped due to rodata_enabled. It doesn't seem worth
-			 * it to make it conditional given that this is
-			 * certainly not a fast-path.
-			 */
-			flush_icache_range((unsigned long)&dst[0],
-					   (unsigned long)&dst[1]);
-		}
-		addr = (unsigned long)dst;
+		addr = (unsigned long)mod->arch.ftrace_trampoline;
 #else /* CONFIG_ARM64_MODULE_PLTS */
 		return -EINVAL;
 #endif /* CONFIG_ARM64_MODULE_PLTS */
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 55b00bfb704f..81e1eb1eae53 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -20,6 +20,7 @@
 
 #include <linux/bitops.h>
 #include <linux/elf.h>
+#include <linux/ftrace.h>
 #include <linux/gfp.h>
 #include <linux/kasan.h>
 #include <linux/kernel.h>
@@ -499,24 +500,33 @@ static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
 	return NULL;
 }
 
+static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
+				  const Elf_Shdr *sechdrs,
+				  struct module *mod)
+{
+#if defined(CONFIG_ARM64_MODULE_PLTS) && defined(CONFIG_DYNAMIC_FTRACE)
+	const Elf_Shdr *s;
+	struct plt_entry *plt;
+
+	s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
+	if (!s)
+		return -ENOEXEC;
+
+	plt = (void *)s->sh_addr;
+	*plt = get_plt_entry(FTRACE_ADDR);
+	mod->arch.ftrace_trampoline = plt;
+#endif
+	return 0;
+}
+
 int module_finalize(const Elf_Ehdr *hdr,
 		    const Elf_Shdr *sechdrs,
 		    struct module *me)
 {
 	const Elf_Shdr *s;
-
 	s = find_section(hdr, sechdrs, ".altinstructions");
 	if (s)
 		apply_alternatives_module((void *)s->sh_addr, s->sh_size);
 
-#ifdef CONFIG_ARM64_MODULE_PLTS
-	if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE)) {
-		s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
-		if (!s)
-			return -ENOEXEC;
-		me->arch.ftrace_trampoline = (void *)s->sh_addr;
-	}
-#endif
-
-	return 0;
+	return module_init_ftrace_plt(hdr, sechdrs, me);
 }
From: Wang ShaoBo <bobo.shaobowang@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 34278
CVE: NA
-------------------------------------------------
This fixes two problems:
1) when a cpu goes offline, we should clear it from the cpu mask of every associated resctrl group, not only the default group.
2) when a cpu comes online, we should set it in the default group's cpu mask and, if cdp is on, update the default group's cpus to the default state; this operation fills the code and data fields of the mpam sysregs with appropriate values.
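A sketch of how the two fixes are expected to be exercised from the cpuhp path (the sketch_* handler names are hypothetical; the real hook-up lives outside this diff):

static int sketch_cpu_online(unsigned int cpu)
{
	/*
	 * fix 2: the cpu joins the default group; with cdp on, the
	 * per-cpu default closid and MPAMx_ELx sysregs are refreshed.
	 */
	return mpam_resctrl_set_default_cpu(cpu);
}

static int sketch_cpu_offline(unsigned int cpu)
{
	/*
	 * fix 1: the cpu is removed from every resctrl group's mask,
	 * not just the default group's.
	 */
	mpam_resctrl_clear_default_cpu(cpu);
	return 0;
}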
Fixes: 2e2c511ff49d ("arm64/mpam: resctrl: Handle cpuhp and resctrl_dom allocation")
Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/include/asm/resctrl.h      | 25 +++++++++++++++++++++++++
 arch/arm64/kernel/mpam/mpam_resctrl.c | 15 ++++++++++++---
 fs/resctrlfs.c                        | 22 ----------------------
 3 files changed, 37 insertions(+), 25 deletions(-)
diff --git a/arch/arm64/include/asm/resctrl.h b/arch/arm64/include/asm/resctrl.h
index 95148e1bb232..a31df7fdc477 100644
--- a/arch/arm64/include/asm/resctrl.h
+++ b/arch/arm64/include/asm/resctrl.h
@@ -447,6 +447,31 @@ int parse_rdtgroupfs_options(char *data);
 
 int resctrl_group_add_files(struct kernfs_node *kn, unsigned long fflags);
 
+static inline void resctrl_cdp_update_cpus_state(struct resctrl_group *rdtgrp)
+{
+	int cpu;
+
+	/*
+	 * If cdp on, tasks in resctrl default group with closid=0
+	 * and rmid=0 don't know how to fill proper partid_i/pmg_i
+	 * and partid_d/pmg_d into MPAMx_ELx sysregs by mpam_sched_in()
+	 * called by __switch_to(), it's because current cpu's default
+	 * closid and rmid are also equal to 0 and make the operation
+	 * modifying configuration passed. Update per cpu default closid
+	 * of none-zero value, call update_closid_rmid() to update each
+	 * cpu's mpam proper MPAMx_ELx sysregs for setting partid and
+	 * pmg when mounting resctrl sysfs, which is a practical method;
+	 * Besides, to support cpu online and offline we should set
+	 * cur_closid to 0.
+	 */
+	for_each_cpu(cpu, &rdtgrp->cpu_mask) {
+		per_cpu(pqr_state.default_closid, cpu) = ~0;
+		per_cpu(pqr_state.cur_closid, cpu) = 0;
+	}
+
+	update_closid_rmid(&rdtgrp->cpu_mask, NULL);
+}
+
 #define RESCTRL_MAX_CBM 32
 
 /*
diff --git a/arch/arm64/kernel/mpam/mpam_resctrl.c b/arch/arm64/kernel/mpam/mpam_resctrl.c
index fb5e7c8e59da..872de67b8ab9 100644
--- a/arch/arm64/kernel/mpam/mpam_resctrl.c
+++ b/arch/arm64/kernel/mpam/mpam_resctrl.c
@@ -85,15 +85,24 @@ static bool resctrl_cdp_enabled;
 
 int mpam_resctrl_set_default_cpu(unsigned int cpu)
 {
-	/* The cpu is set in default rdtgroup after online. */
+	/* The cpu is set in default rdtgroup after online. */
 	cpumask_set_cpu(cpu, &resctrl_group_default.cpu_mask);
+
+	/* Update CPU mpam sysregs' default setting when cdp enabled */
+	if (resctrl_cdp_enabled)
+		resctrl_cdp_update_cpus_state(&resctrl_group_default);
+
 	return 0;
 }
 
 void mpam_resctrl_clear_default_cpu(unsigned int cpu)
 {
-	/* The cpu is set in default rdtgroup after online. */
-	cpumask_clear_cpu(cpu, &resctrl_group_default.cpu_mask);
+	struct resctrl_group *rdtgrp;
+
+	list_for_each_entry(rdtgrp, &resctrl_all_groups, resctrl_group_list) {
+		/* The cpu is clear in associated rdtgroup after offline. */
+		cpumask_clear_cpu(cpu, &rdtgrp->cpu_mask);
+	}
 }
 
 bool is_resctrl_cdp_enabled(void)
diff --git a/fs/resctrlfs.c b/fs/resctrlfs.c
index c9ad4121e376..190ad509714d 100644
--- a/fs/resctrlfs.c
+++ b/fs/resctrlfs.c
@@ -317,28 +317,6 @@ static int mkdir_mondata_all(struct kernfs_node *parent_kn,
 	return ret;
 }
 
-static void resctrl_cdp_update_cpus_state(struct resctrl_group *r)
-{
-	int cpu;
-
-	/*
-	 * If cdp on, tasks in resctrl default group with closid=0
-	 * and rmid=0 don't know how to fill proper partid_i/pmg_i
-	 * and partid_d/pmg_d into MPAMx_ELx sysregs by mpam_sched_in()
-	 * called by __switch_to(), it's because current cpu's default
-	 * closid and rmid are also equal to 0 and to make the operation
-	 * modifying configuration passed. Update per cpu default closid
-	 * of none-zero value, call update_closid_rmid() to update each
-	 * cpu's mpam proper MPAMx_ELx sysregs for setting partid and
-	 * pmg when mounting resctrl sysfs, it looks like a practical
-	 * method.
-	 */
-	for_each_cpu(cpu, &r->cpu_mask)
-		per_cpu(pqr_state.default_closid, cpu) = ~0;
-
-	update_closid_rmid(&r->cpu_mask, NULL);
-}
-
 static struct dentry *resctrl_mount(struct file_system_type *fs_type,
 				    int flags, const char *unused_dev_name,
 				    void *data)
From: Wang ShaoBo <bobo.shaobowang@huawei.com>
hulk inclusion
category: bugfix
bugzilla: 34278
CVE: NA
-------------------------------------------------
When a cpu comes online, domains inserted into the resctrl_resource structure's domains list may be out of order, so insert them sorted by domain id, as sketched below.
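The insertion in the diff keeps the list ordered by remembering the last node whose id is smaller than the new domain's and linking the new node right after it. A minimal self-contained sketch of the same idea (struct item is hypothetical; the list API is the kernel's <linux/list.h>):

#include <linux/list.h>

/* Hypothetical element type standing in for struct rdt_domain. */
struct item {
	int id;
	struct list_head list;
};

/*
 * Insert @new into @head keeping ids in ascending order: @tmp tracks
 * the node after which @new must go (initially the list head itself,
 * then the last entry whose id is smaller than @new's).
 */
static void insert_sorted(struct item *new, struct list_head *head)
{
	struct list_head *tmp = head;
	struct item *it;

	list_for_each_entry(it, head, list) {
		if (new->id > it->id)
			tmp = &it->list;	/* keep walking past smaller ids */
		if (new->id < it->id)
			break;			/* first larger id: stop */
	}

	/* list_add() links @new immediately after @tmp. */
	list_add(&new->list, tmp);
}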
Fixes: 2e2c511ff49d ("arm64/mpam: resctrl: Handle cpuhp and resctrl_dom allocation")
Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/kernel/mpam/mpam_setup.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/mpam/mpam_setup.c b/arch/arm64/kernel/mpam/mpam_setup.c
index 51817091f119..8fd76dc38d89 100644
--- a/arch/arm64/kernel/mpam/mpam_setup.c
+++ b/arch/arm64/kernel/mpam/mpam_setup.c
@@ -59,12 +59,14 @@ mpam_get_domain_from_cpu(int cpu, struct mpam_resctrl_res *res)
 static int mpam_resctrl_setup_domain(unsigned int cpu,
 				     struct mpam_resctrl_res *res)
 {
+	struct rdt_domain *d;
 	struct mpam_resctrl_dom *dom;
 	struct mpam_class *class = res->class;
 	struct mpam_component *comp_iter, *comp;
 	u32 num_partid;
 	u32 **ctrlval_ptr;
 	enum resctrl_ctrl_type type;
+	struct list_head *tmp;
 
 	num_partid = mpam_sysprops_num_partid();
 
@@ -99,8 +101,17 @@ static int mpam_resctrl_setup_domain(unsigned int cpu,
 		}
 	}
 
-	/* TODO: this list should be sorted */
-	list_add_tail(&dom->resctrl_dom.list, &res->resctrl_res.domains);
+	tmp = &res->resctrl_res.domains;
+	/* insert domains in id ascending order */
+	list_for_each_entry(d, &res->resctrl_res.domains, list) {
+		/* find the last domain with id smaller than this domain */
+		if (dom->resctrl_dom.id > d->id)
+			tmp = &d->list;
+		if (dom->resctrl_dom.id < d->id)
+			break;
+	}
+	list_add(&dom->resctrl_dom.list, tmp);
+
 	res->resctrl_res.dom_num++;
 
 	return 0;