mailweb.openeuler.org

Kernel

kernel@openeuler.org

January 2024

  • 73 participants
  • 654 discussions
[PATCH OLK-6.6 v2] PCI/sysfs: Take reference on device to be removed
by Xiongfeng Wang 22 Jan '24

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8XVM8
CVE: NA

------------------------------------

When I do some aer-inject and sysfs remove stress tests, I got the
following use-after-free calltrace:

==================================================================
BUG: KASAN: use-after-free in pci_stop_bus_device+0x174/0x178
Read of size 8 at addr fffffc3e2e402218 by task bash/26311

CPU: 38 PID: 26311 Comm: bash Tainted: G W 4.19.105+ #82
Hardware name: Huawei TaiShan 2280 V2/BC82AMDC, BIOS 2280-V2 CS V5.B161.01 06/10/2021
Call trace:
 dump_backtrace+0x0/0x360
 show_stack+0x24/0x30
 dump_stack+0x130/0x164
 print_address_description+0x68/0x278
 kasan_report+0x204/0x330
 __asan_report_load8_noabort+0x30/0x40
 pci_stop_bus_device+0x174/0x178
 pci_stop_and_remove_bus_device_locked+0x24/0x40
 remove_store+0x1c8/0x1e0
 dev_attr_store+0x60/0x80
 sysfs_kf_write+0x104/0x170
 kernfs_fop_write+0x23c/0x430
 __vfs_write+0xec/0x4e0
 vfs_write+0x12c/0x3d0
 ksys_write+0xe8/0x208
 __arm64_sys_write+0x70/0xa0
 el0_svc_common+0x10c/0x450
 el0_svc_handler+0x50/0xc0
 el0_svc+0x10/0x14

Allocated by task 684:
 kasan_kmalloc+0xe0/0x190
 kmem_cache_alloc_trace+0x110/0x240
 pci_alloc_dev+0x4c/0x110
 pci_scan_single_device+0x100/0x218
 pci_scan_slot+0x8c/0x2d8
 pci_scan_child_bus_extend+0x90/0x628
 pci_scan_child_bus+0x24/0x30
 pci_scan_bridge_extend+0x3b8/0xb28
 pci_scan_child_bus_extend+0x350/0x628
 pci_rescan_bus+0x24/0x48
 pcie_do_fatal_recovery+0x390/0x4b0
 handle_error_source+0x124/0x158
 aer_isr+0x5a0/0x800
 process_one_work+0x598/0x1250
 worker_thread+0x384/0xf08
 kthread+0x2a4/0x320
 ret_from_fork+0x10/0x18

Freed by task 685:
 __kasan_slab_free+0x120/0x228
 kasan_slab_free+0x10/0x18
 kfree+0x88/0x218
 pci_release_dev+0xb4/0xd8
 device_release+0x6c/0x1c0
 kobject_put+0x12c/0x400
 put_device+0x24/0x30
 pci_dev_put+0x24/0x30
 handle_error_source+0x12c/0x158
 aer_isr+0x5a0/0x800
 process_one_work+0x598/0x1250
 worker_thread+0x384/0xf08
 kthread+0x2a4/0x320
 ret_from_fork+0x10/0x18

The buggy address belongs to the object at fffffc3e2e402200
 which belongs to the cache kmalloc-4096 of size 4096
The buggy address is located 24 bytes inside of
 4096-byte region [fffffc3e2e402200, fffffc3e2e403200)
The buggy address belongs to the page:
page:ffff7ff0f8b90000 count:1 mapcount:0 mapping:ffffdc365f016e00 index:0x0 compound_mapcount: 0
flags: 0x6ffffe0000008100(slab|head)
raw: 6ffffe0000008100 ffff7f70d83aae00 0000000300000003 ffffdc365f016e00
raw: 0000000000000000 0000000080070007 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 fffffc3e2e402100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 fffffc3e2e402180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>fffffc3e2e402200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 fffffc3e2e402280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 fffffc3e2e402300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================

It is caused by the following race condition:

    CPU0                                        CPU1
    remove_store()                              aer_isr()
      device_remove_file_self()                   handle_error_source()
      pci_stop_and_remove_bus_device_locked         pcie_do_fatal_recovery()
        (blocked)                                     pci_lock_rescan_remove()  #CPU1 acquire the lock
                                                      pci_stop_and_remove_bus_device()
                                                      pci_unlock_rescan_remove()  #CPU1 release the lock
        pci_lock_rescan_remove()  #CPU0 acquire the lock
                                                      pci_dev_put()  #free pci_dev
        pci_stop_and_remove_bus_device()
          pci_stop_bus_device()  #use-after-free
        pci_unlock_rescan_remove()

An AER interrupt is triggered on CPU1 and CPU1 starts to process it. A
work item, aer_isr(), is scheduled on CPU1. It calls into
pcie_do_fatal_recovery() and acquires the lock 'pci_rescan_remove_lock'.
Before it removes the sysfs entries corresponding to the failing PCI
device, a sysfs remove operation is executed on CPU0. CPU0 uses
device_remove_file_self() to remove the sysfs directory and waits for
the lock to be released. After CPU1 finishes
pci_stop_and_remove_bus_device(), it releases the lock and frees the
'pci_dev' in pci_dev_put(). CPU0 then acquires the lock and accesses
the freed 'pci_dev', triggering the use-after-free.

To fix this issue, take a reference on the device in remove_store()
before removing it and drop the reference at the end.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Jialin Zhang <zhangjialin11(a)huawei.com>

Conflicts:
	drivers/pci/pci-sysfs.c

Signed-off-by: Xiongfeng Wang <wangxiongfeng2(a)huawei.com>
---
 drivers/pci/pci-sysfs.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
index 3317b9354716..e3373cdc5244 100644
--- a/drivers/pci/pci-sysfs.c
+++ b/drivers/pci/pci-sysfs.c
@@ -483,12 +483,17 @@ static ssize_t remove_store(struct device *dev, struct device_attribute *attr,
 			    const char *buf, size_t count)
 {
 	unsigned long val;
+	struct pci_dev *pdev = to_pci_dev(dev);
 
 	if (kstrtoul(buf, 0, &val) < 0)
 		return -EINVAL;
 
-	if (val && device_remove_file_self(dev, attr))
-		pci_stop_and_remove_bus_device_locked(to_pci_dev(dev));
+	if (val) {
+		pci_dev_get(pdev);
+		if (device_remove_file_self(dev, attr))
+			pci_stop_and_remove_bus_device_locked(pdev);
+		pci_dev_put(pdev);
+	}
 	return count;
 }
 static DEVICE_ATTR_IGNORE_LOCKDEP(remove, 0220, NULL,
-- 
2.20.1
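Editor's note: the fix is the standard "pin the object before a blocking, possibly racing section" idiom. A minimal sketch of that idiom follows, with hypothetical object and helper names (struct obj, obj_remove, the blocking comment); the real fix uses pci_dev_get()/pci_dev_put() around device_remove_file_self(), as shown in the diff above.

    #include <linux/kref.h>
    #include <linux/slab.h>

    /* Hypothetical counted object, only to illustrate the idiom. */
    struct obj {
    	struct kref ref;
    };

    static void obj_release(struct kref *ref)
    {
    	kfree(container_of(ref, struct obj, ref));
    }

    static void obj_remove(struct obj *o)
    {
    	kref_get(&o->ref);	/* pin the object before blocking */
    	/*
    	 * ... block here; a concurrent path may drop its own reference
    	 * and would otherwise free the object under our feet ...
    	 */
    	kref_put(&o->ref, obj_release);	/* safe: our pin kept it alive */
    }

The pin turns the racing pci_dev_put() on the other CPU into "drop one of two references" instead of "free the object", so the blocked path can still dereference it after it finally gets the lock.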
[PATCH OLK-6.6 v2] arm64: Add the arm64.nolse command line option
by Wei Li 22 Jan '24

From: Maria Yu <quic_aiquny(a)quicinc.com>

maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4JBSJ
CVE: NA

Reference: https://lore.kernel.org/all/20230710055955.36551-1-quic_aiquny@quicinc.com/

--------------------------------

In order to be able to disable LSE atomics even if the CPU supports
them, most likely because the memory controller cannot handle the LSE
atomic instructions, add a new idreg override to deal with it.

Signed-off-by: Maria Yu <quic_aiquny(a)quicinc.com>
[liwei: Rename "arm64.nolse_atomics" to "arm64.nolse"]
Signed-off-by: Wei Li <liwei391(a)huawei.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  2 ++
 arch/arm64/include/asm/cpufeature.h             |  1 +
 arch/arm64/kernel/cpufeature.c                  |  4 +++-
 arch/arm64/kernel/idreg-override.c              | 11 +++++++++++
 4 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 53c15bb5b977..4bf7d0ba2495 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -418,6 +418,8 @@
 	arm64.nobti	[ARM64] Unconditionally disable Branch Target
 			Identification support
 
+	arm64.nolse	[ARM64] Unconditionally disable LSE Atomic support
+
 	arm64.nomops	[ARM64] Unconditionally disable Memory Copy and Memory
 			Set instructions support
 
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 5bba39376055..edc84c0bc16a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -916,6 +916,7 @@ extern struct arm64_ftr_override id_aa64pfr0_override;
 extern struct arm64_ftr_override id_aa64pfr1_override;
 extern struct arm64_ftr_override id_aa64zfr0_override;
 extern struct arm64_ftr_override id_aa64smfr0_override;
+extern struct arm64_ftr_override id_aa64isar0_override;
 extern struct arm64_ftr_override id_aa64isar1_override;
 extern struct arm64_ftr_override id_aa64isar2_override;
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 455e72ce080a..75757aad5d99 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -670,6 +670,7 @@ struct arm64_ftr_override __ro_after_init id_aa64pfr0_override;
 struct arm64_ftr_override __ro_after_init id_aa64pfr1_override;
 struct arm64_ftr_override __ro_after_init id_aa64zfr0_override;
 struct arm64_ftr_override __ro_after_init id_aa64smfr0_override;
+struct arm64_ftr_override __ro_after_init id_aa64isar0_override;
 struct arm64_ftr_override __ro_after_init id_aa64isar1_override;
 struct arm64_ftr_override __ro_after_init id_aa64isar2_override;
 
@@ -722,7 +723,8 @@ static const struct __ftr_reg_entry {
 	ARM64_FTR_REG(SYS_ID_AA64DFR1_EL1, ftr_raz),
 
 	/* Op1 = 0, CRn = 0, CRm = 6 */
-	ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0),
+	ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0,
+			       &id_aa64isar0_override),
 	ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1,
 			       &id_aa64isar1_override),
 	ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2,
diff --git a/arch/arm64/kernel/idreg-override.c b/arch/arm64/kernel/idreg-override.c
index 3addc09f8746..19a730db3e37 100644
--- a/arch/arm64/kernel/idreg-override.c
+++ b/arch/arm64/kernel/idreg-override.c
@@ -105,6 +105,15 @@ static const struct ftr_set_desc pfr1 __initconst = {
 	},
 };
 
+static const struct ftr_set_desc isar0 __initconst = {
+	.name		= "id_aa64isar0",
+	.override	= &id_aa64isar0_override,
+	.fields		= {
+		FIELD("atomic", ID_AA64ISAR0_EL1_ATOMIC_SHIFT, NULL),
+		{}
+	},
+};
+
 static const struct ftr_set_desc isar1 __initconst = {
 	.name		= "id_aa64isar1",
 	.override	= &id_aa64isar1_override,
@@ -163,6 +172,7 @@ static const struct ftr_set_desc * const regs[] __initconst = {
 	&mmfr1,
 	&pfr0,
 	&pfr1,
+	&isar0,
 	&isar1,
 	&isar2,
 	&smfr0,
@@ -185,6 +195,7 @@ static const struct {
 	{ "arm64.nomops",	"id_aa64isar2.mops=0" },
 	{ "arm64.nomte",	"id_aa64pfr1.mte=0" },
 	{ "nokaslr",		"arm64_sw.nokaslr=1" },
+	{ "arm64.nolse",	"id_aa64isar0.atomic=0" },
 };
 
 static int __init parse_nokaslr(char *unused)
-- 
2.25.1
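Editor's note on usage: as the alias table added in idreg-override.c shows, once this patch is applied, appending "arm64.nolse" to the kernel command line is shorthand for the raw feature-register override "id_aa64isar0.atomic=0". The Atomic field of ID_AA64ISAR0_EL1 is then reported as 0 to the cpufeature code, so the kernel behaves as if FEAT_LSE were absent and keeps using the LL/SC atomic sequences instead of LSE instructions. (How to append boot parameters depends on the boot loader; that part is not covered by the patch.)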
[PATCH openEuler-1.0-LTS] netfilter: nf_tables: fix pointer math issue in nft_byteorder_eval()
by Liu Jian 21 Jan '24

From: Dan Carpenter <dan.carpenter(a)linaro.org>

mainline inclusion
from mainline-v6.7-rc2
commit c301f0981fdd3fd1ffac6836b423c4d7a8e0eb63
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I8WQRG
CVE: CVE-2024-0607

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

---------------------------

The problem is in nft_byteorder_eval() where we are iterating through
a loop and writing to dst[0], dst[1], dst[2] and so on... On each
iteration we are writing 8 bytes. But dst[] is an array of u32 so each
element only has space for 4 bytes. That means that every iteration
overwrites part of the previous element.

I spotted this bug while reviewing commit caf3ef7468f7 ("netfilter:
nf_tables: prevent OOB access in nft_byteorder_eval") which is a
related issue. I think that the reason we have not detected this bug
in testing is that most of time we only write one element.

Fixes: ce1e7989d989 ("netfilter: nft_byteorder: provide 64bit le/be conversion")
Signed-off-by: Dan Carpenter <dan.carpenter(a)linaro.org>
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
Signed-off-by: Liu Jian <liujian56(a)huawei.com>

Conflicts:
	include/net/netfilter/nf_tables.h
	net/netfilter/nft_byteorder.c
	net/netfilter/nft_meta.c
---
 net/netfilter/nft_byteorder.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c
index 46a8f894717c..ba65e1e8732b 100644
--- a/net/netfilter/nft_byteorder.c
+++ b/net/netfilter/nft_byteorder.c
@@ -41,19 +41,20 @@ static void nft_byteorder_eval(const struct nft_expr *expr,
 	switch (priv->size) {
 	case 8: {
+		u64 *dst64 = (void *)dst;
 		u64 src64;
 
 		switch (priv->op) {
 		case NFT_BYTEORDER_NTOH:
 			for (i = 0; i < priv->len / 8; i++) {
 				src64 = get_unaligned((u64 *)&src[i]);
-				put_unaligned_be64(src64, &dst[i]);
+				put_unaligned_be64(src64, &dst64[i]);
 			}
 			break;
 		case NFT_BYTEORDER_HTON:
 			for (i = 0; i < priv->len / 8; i++) {
 				src64 = get_unaligned_be64(&src[i]);
-				put_unaligned(src64, (u64 *)&dst[i]);
+				put_unaligned(src64, (u64 *)&dst64[i]);
 			}
 			break;
 		}
-- 
2.34.1
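Editor's note: the "pointer math" problem is easiest to see outside the kernel. A minimal userspace sketch (plain C, not kernel code; variable names are made up) of why indexing a u32 array while storing 64-bit values corrupts the previous element, and why recasting the base pointer to u64 *, as the patch does, fixes the stride:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
    	uint32_t dst[4] = {0};
    	uint64_t a = 0xAAAAAAAAAAAAAAAAULL, b = 0xBBBBBBBBBBBBBBBBULL;

    	/* Buggy stride: &dst[i] advances only 4 bytes per i, but each
    	 * store writes 8, so the second store clobbers half of 'a'. */
    	memcpy(&dst[0], &a, sizeof(a));
    	memcpy(&dst[1], &b, sizeof(b));
    	printf("buggy: %08x %08x %08x %08x\n", dst[0], dst[1], dst[2], dst[3]);

    	/* Fixed stride, as in the patch: view the same buffer as u64 slots,
    	 * so &dst64[i] advances 8 bytes per i and the stores do not overlap. */
    	uint64_t *dst64 = (void *)dst;
    	memcpy(&dst64[0], &a, sizeof(a));
    	memcpy(&dst64[1], &b, sizeof(b));
    	printf("fixed: %08x %08x %08x %08x\n", dst[0], dst[1], dst[2], dst[3]);
    	return 0;
    }

The first printf shows part of 'a' overwritten by 'b'; the second shows both values intact, which is exactly the difference between &dst[i] and &dst64[i] in nft_byteorder_eval().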
[PATCH OLK-5.10] netfilter: nf_tables: fix pointer math issue in nft_byteorder_eval()
by Liu Jian 21 Jan '24

From: Dan Carpenter <dan.carpenter(a)linaro.org>

mainline inclusion
from mainline-v6.7-rc2
commit c301f0981fdd3fd1ffac6836b423c4d7a8e0eb63
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I8WQRG
CVE: CVE-2024-0607

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

---------------------------

The problem is in nft_byteorder_eval() where we are iterating through
a loop and writing to dst[0], dst[1], dst[2] and so on... On each
iteration we are writing 8 bytes. But dst[] is an array of u32 so each
element only has space for 4 bytes. That means that every iteration
overwrites part of the previous element.

I spotted this bug while reviewing commit caf3ef7468f7 ("netfilter:
nf_tables: prevent OOB access in nft_byteorder_eval") which is a
related issue. I think that the reason we have not detected this bug
in testing is that most of time we only write one element.

Fixes: ce1e7989d989 ("netfilter: nft_byteorder: provide 64bit le/be conversion")
Signed-off-by: Dan Carpenter <dan.carpenter(a)linaro.org>
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
Signed-off-by: Liu Jian <liujian56(a)huawei.com>

Conflicts:
	include/net/netfilter/nf_tables.h
	net/netfilter/nft_byteorder.c
---
 include/net/netfilter/nf_tables.h | 4 ++--
 net/netfilter/nft_byteorder.c     | 5 +++--
 net/netfilter/nft_meta.c          | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
index ea893a6d9b36..7550328080bf 100644
--- a/include/net/netfilter/nf_tables.h
+++ b/include/net/netfilter/nf_tables.h
@@ -132,9 +132,9 @@ static inline u16 nft_reg_load16(const u32 *sreg)
 	return *(u16 *)sreg;
 }
 
-static inline void nft_reg_store64(u32 *dreg, u64 val)
+static inline void nft_reg_store64(u64 *dreg, u64 val)
 {
-	put_unaligned(val, (u64 *)dreg);
+	put_unaligned(val, dreg);
 }
 
 static inline u64 nft_reg_load64(const u32 *sreg)
diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c
index 7b0b8fecb220..9d250bd60bb8 100644
--- a/net/netfilter/nft_byteorder.c
+++ b/net/netfilter/nft_byteorder.c
@@ -38,20 +38,21 @@ void nft_byteorder_eval(const struct nft_expr *expr,
 	switch (priv->size) {
 	case 8: {
+		u64 *dst64 = (void *)dst;
 		u64 src64;
 
 		switch (priv->op) {
 		case NFT_BYTEORDER_NTOH:
 			for (i = 0; i < priv->len / 8; i++) {
 				src64 = nft_reg_load64(&src[i]);
-				nft_reg_store64(&dst[i], be64_to_cpu(src64));
+				nft_reg_store64(&dst64[i], be64_to_cpu(src64));
 			}
 			break;
 		case NFT_BYTEORDER_HTON:
 			for (i = 0; i < priv->len / 8; i++) {
 				src64 = (__force __u64)
 					cpu_to_be64(nft_reg_load64(&src[i]));
-				nft_reg_store64(&dst[i], src64);
+				nft_reg_store64(&dst64[i], src64);
 			}
 			break;
 		}
diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c
index 44d9b38e5f90..cb5bb0e21b66 100644
--- a/net/netfilter/nft_meta.c
+++ b/net/netfilter/nft_meta.c
@@ -63,7 +63,7 @@ nft_meta_get_eval_time(enum nft_meta_keys key,
 {
 	switch (key) {
 	case NFT_META_TIME_NS:
-		nft_reg_store64(dest, ktime_get_real_ns());
+		nft_reg_store64((u64 *)dest, ktime_get_real_ns());
 		break;
 	case NFT_META_TIME_DAY:
 		nft_reg_store8(dest, nft_meta_weekday());
-- 
2.34.1
[PATCH OLK-6.6] sched: programmable: Fix is_cpu_allowed build error
by Cheng Yu 20 Jan '24

From: Guan Jing <guanjing6(a)huawei.com>

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8OIT1

--------------------------------

Patch fixes build errors like this:

kernel/sched/sched.h:3625:13: error: inlining failed in call to
'always_inline' 'is_cpu_allowed': function body not available

Signed-off-by: Guan Jing <guanjing6(a)huawei.com>
---
 kernel/sched/core.c  | 11 +++++++----
 kernel/sched/fair.c  |  2 +-
 kernel/sched/sched.h |  2 +-
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f49884275d02..d4f19e578341 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2469,11 +2469,7 @@ static inline bool rq_has_pinned_tasks(struct rq *rq)
  * Per-CPU kthreads are allowed to run on !active && online CPUs, see
  * __set_cpus_allowed_ptr() and select_fallback_rq().
  */
-#ifdef CONFIG_BPF_SCHED
-inline bool is_cpu_allowed(struct task_struct *p, int cpu)
-#else
 static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
-#endif
 {
 	/* When not in the task's cpumask, no point in looking further. */
 	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
@@ -2499,6 +2495,13 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
 	return cpu_online(cpu);
 }
 
+#ifdef CONFIG_BPF_SCHED
+bool bpf_sched_is_cpu_allowed(struct task_struct *p, int cpu)
+{
+	return is_cpu_allowed(p, cpu);
+}
+#endif
+
 /*
  * This is how migration works:
  *
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b0f65c68f6bb..8e180ede46d3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8523,7 +8523,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 		ctx.select_idle_mask = this_cpu_cpumask_var_ptr(select_idle_mask);
 
 		ret = bpf_sched_cfs_select_rq(&ctx);
-		if (ret >= 0 && is_cpu_allowed(p, ret)) {
+		if (ret >= 0 && bpf_sched_is_cpu_allowed(p, ret)) {
 			rcu_read_unlock();
 			return ret;
 		}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b7ba69337453..150c65128adb 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3622,7 +3622,7 @@ extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
 extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
 
 #ifdef CONFIG_BPF_SCHED
-inline bool is_cpu_allowed(struct task_struct *p, int cpu);
+bool bpf_sched_is_cpu_allowed(struct task_struct *p, int cpu);
 #endif
 
 #endif /* _KERNEL_SCHED_SCHED_H */
-- 
2.34.1
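Editor's note: the quoted error appears because is_cpu_allowed() is defined in kernel/sched/core.c but was only declared inline in kernel/sched/sched.h, so a caller in another translation unit (the BPF scheduler hook in fair.c) sees neither an inlinable body nor a guaranteed out-of-line symbol to link against. This posting fixes it by exporting a real, non-inline wrapper, bpf_sched_is_cpu_allowed(), from core.c; the follow-up version below takes the other route and moves the whole function body into sched.h as static inline so every includer gets its own copy.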
[PATCH OLK-6.6] sched: programmable: Fix is_cpu_allowed build error
by Cheng Yu 20 Jan '24

From: Guan Jing <guanjing6(a)huawei.com>

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I8OIT1

--------------------------------

Patch fixes build errors like this:

kernel/sched/sched.h:3625:13: error: inlining failed in call to
'always_inline' 'is_cpu_allowed': function body not available

Signed-off-by: Guan Jing <guanjing6(a)huawei.com>
---
 kernel/sched/core.c  | 34 ----------------------------------
 kernel/sched/sched.h | 33 ++++++++++++++++++++++++++++++---
 2 files changed, 30 insertions(+), 37 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f49884275d02..fab7d434bd9b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2465,40 +2465,6 @@ static inline bool rq_has_pinned_tasks(struct rq *rq)
 	return rq->nr_pinned;
 }
 
-/*
- * Per-CPU kthreads are allowed to run on !active && online CPUs, see
- * __set_cpus_allowed_ptr() and select_fallback_rq().
- */
-#ifdef CONFIG_BPF_SCHED
-inline bool is_cpu_allowed(struct task_struct *p, int cpu)
-#else
-static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
-#endif
-{
-	/* When not in the task's cpumask, no point in looking further. */
-	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-		return false;
-
-	/* migrate_disabled() must be allowed to finish. */
-	if (is_migration_disabled(p))
-		return cpu_online(cpu);
-
-	/* Non kernel threads are not allowed during either online or offline. */
-	if (!(p->flags & PF_KTHREAD))
-		return cpu_active(cpu) && task_cpu_possible(cpu, p);
-
-	/* KTHREAD_IS_PER_CPU is always allowed. */
-	if (kthread_is_per_cpu(p))
-		return cpu_online(cpu);
-
-	/* Regular kernel threads don't get to stay during offline. */
-	if (cpu_dying(cpu))
-		return false;
-
-	/* But are allowed during online. */
-	return cpu_online(cpu);
-}
-
 /*
  * This is how migration works:
  *
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b7ba69337453..f23550fa4b73 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -68,6 +68,7 @@
 #include <linux/wait_api.h>
 #include <linux/wait_bit.h>
 #include <linux/workqueue_api.h>
+#include <linux/mmu_context.h>
 
 #include <trace/events/power.h>
 #include <trace/events/sched.h>
@@ -3621,8 +3622,34 @@ static inline void init_sched_mm_cid(struct task_struct *t) { }
 extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
 extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
 
-#ifdef CONFIG_BPF_SCHED
-inline bool is_cpu_allowed(struct task_struct *p, int cpu);
-#endif
+/*
+ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
+ * __set_cpus_allowed_ptr() and select_fallback_rq().
+ */
+static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
+{
+	/* When not in the task's cpumask, no point in looking further. */
+	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+		return false;
+
+	/* migrate_disabled() must be allowed to finish. */
+	if (is_migration_disabled(p))
+		return cpu_online(cpu);
+
+	/* Non kernel threads are not allowed during either online or offline. */
+	if (!(p->flags & PF_KTHREAD))
+		return cpu_active(cpu) && task_cpu_possible(cpu, p);
+
+	/* KTHREAD_IS_PER_CPU is always allowed. */
+	if (kthread_is_per_cpu(p))
+		return cpu_online(cpu);
+
+	/* Regular kernel threads don't get to stay during offline. */
+	if (cpu_dying(cpu))
+		return false;
+
+	/* But are allowed during online. */
+	return cpu_online(cpu);
+}
 
 #endif /* _KERNEL_SCHED_SCHED_H */
-- 
2.34.1
[PATCH OLK-5.10 v2 0/2] x86/quirks: Add parameter to clear MSIs early
by Zheng Zengkai 19 Jan '24

A crash-kernel boot failure was reported on the openEuler 5.10 series
kernel (in an x86 virtual machine with a Hi1822 virtual NIC). The issue
was eventually narrowed down to an IRQ vector conflict. Backport the
following patchset from the kernel mailing list to fix it.

https://lore.kernel.org/linux-pci/20181018183721.27467-3-gpiccoli@canonical…

Guilherme G. Piccoli (2):
  x86/PCI: Export find_cap() to be used in early PCI code
  x86/quirks: Add parameter to clear MSIs early on boot

 .../admin-guide/kernel-parameters.txt |  6 ++++
 arch/x86/include/asm/pci-direct.h     |  2 ++
 arch/x86/kernel/aperture_64.c         | 30 ++-----------------
 arch/x86/kernel/early-quirks.c        | 32 +++++++++++++++++++
 arch/x86/pci/common.c                 |  4 +++
 arch/x86/pci/early.c                  | 25 +++++++++++++++
 6 files changed, 71 insertions(+), 28 deletions(-)

-- 
2.20.1
[PATCH OLK-5.10 V1] ida: Fix crash in ida_free when the bitmap is empty
by Cheng Yu 19 Jan '24

From: "Matthew Wilcox (Oracle)" <willy(a)infradead.org>

mainline inclusion
from mainline-v6.7-rc7
commit af73483f4e8b6f5c68c9aa63257bdd929a9c194a
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I8WBGZ
CVE: CVE-2023-6915

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

The IDA usually detects double-frees, but that detection failed to
consider the case when there are no nearby IDs allocated and so we have
a NULL bitmap rather than simply having a clear bit.

Add some tests to the test-suite to be sure we don't inadvertently
reintroduce this problem. Unfortunately they're quite noisy so include
a message to disregard the warnings.

Reported-by: Zhenghan Wang <wzhmmmmm(a)gmail.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Cheng Yu <serein.chengyu(a)huawei.com>
---
 lib/idr.c      |  2 +-
 lib/test_ida.c | 40 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/lib/idr.c b/lib/idr.c
index 7ecdfdb5309e..8331b44dd39e 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -508,7 +508,7 @@ void ida_free(struct ida *ida, unsigned int id)
 			goto delete;
 		xas_store(&xas, xa_mk_value(v));
 	} else {
-		if (!test_bit(bit, bitmap->bitmap))
+		if (!bitmap || !test_bit(bit, bitmap->bitmap))
 			goto err;
 		__clear_bit(bit, bitmap->bitmap);
 		xas_set_mark(&xas, XA_FREE_MARK);
diff --git a/lib/test_ida.c b/lib/test_ida.c
index b06880625961..55105baa19da 100644
--- a/lib/test_ida.c
+++ b/lib/test_ida.c
@@ -150,6 +150,45 @@ static void ida_check_conv(struct ida *ida)
 	IDA_BUG_ON(ida, !ida_is_empty(ida));
 }
 
+/*
+ * Check various situations where we attempt to free an ID we don't own.
+ */
+static void ida_check_bad_free(struct ida *ida)
+{
+	unsigned long i;
+
+	printk("vvv Ignore \"not allocated\" warnings\n");
+	/* IDA is empty; all of these will fail */
+	ida_free(ida, 0);
+	for (i = 0; i < 31; i++)
+		ida_free(ida, 1 << i);
+
+	/* IDA contains a single value entry */
+	IDA_BUG_ON(ida, ida_alloc_min(ida, 3, GFP_KERNEL) != 3);
+	ida_free(ida, 0);
+	for (i = 0; i < 31; i++)
+		ida_free(ida, 1 << i);
+
+	/* IDA contains a single bitmap */
+	IDA_BUG_ON(ida, ida_alloc_min(ida, 1023, GFP_KERNEL) != 1023);
+	ida_free(ida, 0);
+	for (i = 0; i < 31; i++)
+		ida_free(ida, 1 << i);
+
+	/* IDA contains a tree */
+	IDA_BUG_ON(ida, ida_alloc_min(ida, (1 << 20) - 1, GFP_KERNEL) != (1 << 20) - 1);
+	ida_free(ida, 0);
+	for (i = 0; i < 31; i++)
+		ida_free(ida, 1 << i);
+	printk("^^^ \"not allocated\" warnings over\n");
+
+	ida_free(ida, 3);
+	ida_free(ida, 1023);
+	ida_free(ida, (1 << 20) - 1);
+
+	IDA_BUG_ON(ida, !ida_is_empty(ida));
+}
+
 static DEFINE_IDA(ida);
 
 static int ida_checks(void)
@@ -162,6 +201,7 @@ static int ida_checks(void)
 	ida_check_leaf(&ida, 1024 * 64);
 	ida_check_max(&ida);
 	ida_check_conv(&ida);
+	ida_check_bad_free(&ida);
 
 	printk("IDA: %u of %u tests passed\n", tests_passed, tests_run);
 	return (tests_run != tests_passed) ? 0 : -EINVAL;
-- 
2.25.1
[PATCH OLK-5.10 V1] ida: Fix crash in ida_free when the bitmap is empty
by Cheng Yu 19 Jan '24

From: "Matthew Wilcox (Oracle)" <willy(a)infradead.org>

mainline inclusion
from mainline-v6.7-rc7
commit af73483f4e8b6f5c68c9aa63257bdd929a9c194a
category: bugfix
bugzilla: 189479
CVE: CVE-2023-6915

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

The IDA usually detects double-frees, but that detection failed to
consider the case when there are no nearby IDs allocated and so we have
a NULL bitmap rather than simply having a clear bit.

Add some tests to the test-suite to be sure we don't inadvertently
reintroduce this problem. Unfortunately they're quite noisy so include
a message to disregard the warnings.

Reported-by: Zhenghan Wang <wzhmmmmm(a)gmail.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Cheng Yu <serein.chengyu(a)huawei.com>
---
 lib/idr.c      |  2 +-
 lib/test_ida.c | 40 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/lib/idr.c b/lib/idr.c
index 7ecdfdb5309e..8331b44dd39e 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -508,7 +508,7 @@ void ida_free(struct ida *ida, unsigned int id)
 			goto delete;
 		xas_store(&xas, xa_mk_value(v));
 	} else {
-		if (!test_bit(bit, bitmap->bitmap))
+		if (!bitmap || !test_bit(bit, bitmap->bitmap))
 			goto err;
 		__clear_bit(bit, bitmap->bitmap);
 		xas_set_mark(&xas, XA_FREE_MARK);
diff --git a/lib/test_ida.c b/lib/test_ida.c
index b06880625961..55105baa19da 100644
--- a/lib/test_ida.c
+++ b/lib/test_ida.c
@@ -150,6 +150,45 @@ static void ida_check_conv(struct ida *ida)
 	IDA_BUG_ON(ida, !ida_is_empty(ida));
 }
 
+/*
+ * Check various situations where we attempt to free an ID we don't own.
+ */
+static void ida_check_bad_free(struct ida *ida)
+{
+	unsigned long i;
+
+	printk("vvv Ignore \"not allocated\" warnings\n");
+	/* IDA is empty; all of these will fail */
+	ida_free(ida, 0);
+	for (i = 0; i < 31; i++)
+		ida_free(ida, 1 << i);
+
+	/* IDA contains a single value entry */
+	IDA_BUG_ON(ida, ida_alloc_min(ida, 3, GFP_KERNEL) != 3);
+	ida_free(ida, 0);
+	for (i = 0; i < 31; i++)
+		ida_free(ida, 1 << i);
+
+	/* IDA contains a single bitmap */
+	IDA_BUG_ON(ida, ida_alloc_min(ida, 1023, GFP_KERNEL) != 1023);
+	ida_free(ida, 0);
+	for (i = 0; i < 31; i++)
+		ida_free(ida, 1 << i);
+
+	/* IDA contains a tree */
+	IDA_BUG_ON(ida, ida_alloc_min(ida, (1 << 20) - 1, GFP_KERNEL) != (1 << 20) - 1);
+	ida_free(ida, 0);
+	for (i = 0; i < 31; i++)
+		ida_free(ida, 1 << i);
+	printk("^^^ \"not allocated\" warnings over\n");
+
+	ida_free(ida, 3);
+	ida_free(ida, 1023);
+	ida_free(ida, (1 << 20) - 1);
+
+	IDA_BUG_ON(ida, !ida_is_empty(ida));
+}
+
 static DEFINE_IDA(ida);
 
 static int ida_checks(void)
@@ -162,6 +201,7 @@ static int ida_checks(void)
 	ida_check_leaf(&ida, 1024 * 64);
 	ida_check_max(&ida);
 	ida_check_conv(&ida);
+	ida_check_bad_free(&ida);
 
 	printk("IDA: %u of %u tests passed\n", tests_passed, tests_run);
 	return (tests_run != tests_passed) ? 0 : -EINVAL;
-- 
2.25.1