[PATCH OLK-5.10 0/5] zcopy: A zero-copy data transfer mechanism for HPC
This patchset provides a PAGEATTACH mechanism, a zero-copy data transfer
solution optimized for High Performance Computing (HPC) workloads. It allows
direct sharing of physical memory pages between distinct processes by mapping
source pages into a target address space, eliminating redundant data copying
and improving large data transfer efficiency.

Key features:
- Supports zero-copy communication between intra-node processes.
- Handles both PTE-level small pages and PMD-level huge pages (2MB).

Liu Mingrui (5):
  zcopy: Initialize zcopy module
  zcopy: Introduce the pageattach interface
  zcopy: Extend PMD trans hugepage mapping ability
  zcopy: Add tracepoint for PageAttach
  zcopy: Add debug interface dump_pagetable

 MAINTAINERS                   |   6 +
 drivers/Kconfig               |   2 +
 drivers/Makefile              |   3 +
 drivers/zcopy/Kconfig         |  32 ++
 drivers/zcopy/Makefile        |   2 +
 drivers/zcopy/zcopy.c         | 693 ++++++++++++++++++++++++++++++++++
 include/trace/events/attach.h | 157 ++++++++
 7 files changed, 895 insertions(+)
 create mode 100644 drivers/zcopy/Kconfig
 create mode 100644 drivers/zcopy/Makefile
 create mode 100644 drivers/zcopy/zcopy.c
 create mode 100644 include/trace/events/attach.h

-- 
2.25.1
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/release-management/issues/ID3TGE -------------------------------- Initialize the zcopy driver module framework Signed-off-by: Liu Mingrui <liumingrui@huawei.com> --- MAINTAINERS | 5 ++ drivers/Kconfig | 2 + drivers/Makefile | 3 ++ drivers/zcopy/Kconfig | 32 ++++++++++++ drivers/zcopy/Makefile | 2 + drivers/zcopy/zcopy.c | 108 +++++++++++++++++++++++++++++++++++++++++ 6 files changed, 152 insertions(+) create mode 100644 drivers/zcopy/Kconfig create mode 100644 drivers/zcopy/Makefile create mode 100644 drivers/zcopy/zcopy.c diff --git a/MAINTAINERS b/MAINTAINERS index 3eb4130ce6d0..fa863a4ab198 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -19649,6 +19649,11 @@ L: linux-mm@kvack.org S: Maintained F: mm/zswap.c +ZCOPY DRIVER +M: Mingrui Liu <liumingrui@huawei.com> +S: Maintained +F: drivers/zcopy/ + THE REST M: Linus Torvalds <torvalds@linux-foundation.org> L: linux-kernel@vger.kernel.org diff --git a/drivers/Kconfig b/drivers/Kconfig index 840137d90167..c9de7ebdc7b8 100644 --- a/drivers/Kconfig +++ b/drivers/Kconfig @@ -246,4 +246,6 @@ source "drivers/cpuinspect/Kconfig" source "drivers/arm/Kconfig" +source "drivers/zcopy/Kconfig" + endmenu diff --git a/drivers/Makefile b/drivers/Makefile index 3f0e6f169501..25e965c82d72 100644 --- a/drivers/Makefile +++ b/drivers/Makefile @@ -196,3 +196,6 @@ obj-$(CONFIG_MOST) += most/ obj-$(CONFIG_ROH) += roh/ obj-$(CONFIG_UB) += ub/ obj-$(CONFIG_ARM_SPE_MEM_SAMPLING) += arm/spe/ + +# PAGE_ATTACH zero-copy communication between processes +obj-$(CONFIG_PAGEATTACH) += zcopy/ diff --git a/drivers/zcopy/Kconfig b/drivers/zcopy/Kconfig new file mode 100644 index 000000000000..bfc34912dbe7 --- /dev/null +++ b/drivers/zcopy/Kconfig @@ -0,0 +1,32 @@ +# SPDX-License-Identifier: GPL-2.0-only +config PAGEATTACH + tristate "PAGEATTACH: A zero-copy data transfer mechanism for HPC" + depends on MMU && ARM64 + help + This option enables the PAGEATTACH mechanism, a zero-copy 
data transfer + solution optimized for High Performance Computing (HPC) workloads. It + allows direct sharing of physical memory pages between distinct processes + by mapping source pages to a target address space, eliminating redundant + data copying and improving large data transfer efficiency. + + Key features: + - Supports zero-copy communication between intra-node processes. + - Handles both PTE-level small pages and PMD-level huge pages (2MB). + - Preserves the read/write permissions of the source page in the target address space. + + Important constraints and requirements: + 1. Callers must ensure the source address is already mapped to physical pages, + while the destination address is unused (no existing mappings). + 2. No internal locking is implemented in the PageAttach interface; userspace + must manage memory mapping and release order to avoid race conditions. + 3. Source and destination must be different processes (not threads of the same process). + 4. Only user-space addresses are supported; kernel addresses cannot be mapped. + 5. Both source and destination processes must remain alive during the mapping operation. + 6. PUD-level huge pages are not supported in current implementation. + 7. The start address and size of both source and destination must be 4KB-aligned. + 8. Callers are responsible for ensuring safe access to mapped pages after attachment, + as permissions are inherited from the source. + + This mechanism is intended for HPC applications requiring high-speed inter-process + data sharing. If your use case does not meet the above constraints or you are unsure, + disable this option by saying N. 
diff --git a/drivers/zcopy/Makefile b/drivers/zcopy/Makefile new file mode 100644 index 000000000000..60a6909da314 --- /dev/null +++ b/drivers/zcopy/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_PAGEATTACH) += zcopy.o \ No newline at end of file diff --git a/drivers/zcopy/zcopy.c b/drivers/zcopy/zcopy.c new file mode 100644 index 000000000000..9f224eac4e39 --- /dev/null +++ b/drivers/zcopy/zcopy.c @@ -0,0 +1,108 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright (C) 2025. Huawei Technologies Co., Ltd */ + +#include <linux/init.h> +#include <linux/module.h> +#include <linux/ioctl.h> +#include <linux/fs.h> +#include <linux/cdev.h> +#include <linux/uaccess.h> + +struct zcopy_cdev { + struct cdev chrdev; + dev_t dev; + int major; + struct class *dev_class; + struct device *dev_device; +}; + +static struct zcopy_cdev z_cdev; + +long zcopy_ioctl(struct file *file, unsigned int type, unsigned long ptr) +{ + return 0; +} + +static const struct file_operations zcopy_fops = { + .owner = THIS_MODULE, + .unlocked_ioctl = zcopy_ioctl, +}; + +int register_device_zcopy(void) +{ + int ret; + + ret = alloc_chrdev_region(&z_cdev.dev, 0, 1, "zcopy"); + if (ret < 0) { + pr_err("alloc chrdev failed\n"); + goto err_out; + } + + z_cdev.major = MAJOR(z_cdev.dev); + + cdev_init(&z_cdev.chrdev, &zcopy_fops); + ret = cdev_add(&z_cdev.chrdev, z_cdev.dev, 1); + if (ret < 0) { + pr_err("add char device failed\n"); + goto err_unregister_chrdev; + } + + z_cdev.dev_class = class_create(THIS_MODULE, "zcopy"); + if (IS_ERR(z_cdev.dev_class)) { + pr_err("class create error\n"); + ret = PTR_ERR(z_cdev.dev_class); + goto err_cdev_del; + } + + z_cdev.dev_device = device_create(z_cdev.dev_class, NULL, + MKDEV(z_cdev.major, 0), NULL, "zdax"); + if (IS_ERR(z_cdev.dev_device)) { + pr_err("device create error\n"); + ret = PTR_ERR(z_cdev.dev_device); + goto err_class_destroy; + } + + return 0; + +err_class_destroy: + class_destroy(z_cdev.dev_class); +err_cdev_del: 
+ cdev_del(&z_cdev.chrdev); +err_unregister_chrdev: + unregister_chrdev_region(z_cdev.dev, 1); +err_out: + return ret; +} + +void unregister_device_zcopy(void) +{ + device_destroy(z_cdev.dev_class, MKDEV(z_cdev.major, 0)); + class_destroy(z_cdev.dev_class); + cdev_del(&z_cdev.chrdev); + unregister_chrdev_region(z_cdev.dev, 1); +} + +static int __init zcopy_init(void) +{ + int ret; + + ret = register_device_zcopy(); + if (ret) { + pr_err("register_device_zcopy failed\n"); + return -1; + } + + return 0; +} + +static void __exit zcopy_exit(void) +{ + unregister_device_zcopy(); +} + +module_init(zcopy_init); +module_exit(zcopy_exit); + +MODULE_AUTHOR("liumingrui <liumingrui@huawei.com>"); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("PAGEATTACH: A zero-copy data transfer mechanism"); -- 2.25.1
On 2025/11/14 9:50, Liu Mingrui wrote:
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/release-management/issues/ID3TGE

--------------------------------

Initialize the zcopy driver module framework

Signed-off-by: Liu Mingrui <liumingrui@huawei.com>
---
 MAINTAINERS            |   5 ++
 drivers/Kconfig        |   2 +
 drivers/Makefile       |   3 ++
 drivers/zcopy/Kconfig  |  32 ++++++++++++
 drivers/zcopy/Makefile |   2 +
 drivers/zcopy/zcopy.c  | 108 +++++++++++++++++++++++++++++++++++++++++
 6 files changed, 152 insertions(+)
 create mode 100644 drivers/zcopy/Kconfig
 create mode 100644 drivers/zcopy/Makefile
 create mode 100644 drivers/zcopy/zcopy.c
diff --git a/MAINTAINERS b/MAINTAINERS index 3eb4130ce6d0..fa863a4ab198 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -19649,6 +19649,11 @@ L: linux-mm@kvack.org S: Maintained F: mm/zswap.c
+ZCOPY DRIVER +M: Mingrui Liu <liumingrui@huawei.com> +S: Maintained +F: drivers/zcopy/ +
Move this under drivers/misc?
THE REST M: Linus Torvalds <torvalds@linux-foundation.org> L: linux-kernel@vger.kernel.org diff --git a/drivers/Kconfig b/drivers/Kconfig index 840137d90167..c9de7ebdc7b8 100644 --- a/drivers/Kconfig +++ b/drivers/Kconfig @@ -246,4 +246,6 @@ source "drivers/cpuinspect/Kconfig"
source "drivers/arm/Kconfig"
+source "drivers/zcopy/Kconfig" + endmenu diff --git a/drivers/Makefile b/drivers/Makefile index 3f0e6f169501..25e965c82d72 100644 --- a/drivers/Makefile +++ b/drivers/Makefile @@ -196,3 +196,6 @@ obj-$(CONFIG_MOST) += most/ obj-$(CONFIG_ROH) += roh/ obj-$(CONFIG_UB) += ub/ obj-$(CONFIG_ARM_SPE_MEM_SAMPLING) += arm/spe/ + +# PAGE_ATTACH zero-copy communication between processes +obj-$(CONFIG_PAGEATTACH) += zcopy/ diff --git a/drivers/zcopy/Kconfig b/drivers/zcopy/Kconfig new file mode 100644 index 000000000000..bfc34912dbe7 --- /dev/null +++ b/drivers/zcopy/Kconfig @@ -0,0 +1,32 @@ +# SPDX-License-Identifier: GPL-2.0-only +config PAGEATTACH + tristate "PAGEATTACH: A zero-copy data transfer mechanism for HPC" + depends on MMU && ARM64
Many of the later assumptions are that a PMD is 2MB, but as generic code, a PMD is not necessarily 2MB.
+ help + This option enables the PAGEATTACH mechanism, a zero-copy data transfer + solution optimized for High Performance Computing (HPC) workloads. It + allows direct sharing of physical memory pages between distinct processes + by mapping source pages to a target address space, eliminating redundant + data copying and improving large data transfer efficiency. + + Key features: + - Supports zero-copy communication between intra-node processes. + - Handles both PTE-level small pages and PMD-level huge pages (2MB). + - Preserves the read/write permissions of the source page in the target address space. + + Important constraints and requirements: + 1. Callers must ensure the source address is already mapped to physical pages, + while the destination address is unused (no existing mappings). + 2. No internal locking is implemented in the PageAttach interface; userspace + must manage memory mapping and release order to avoid race conditions. + 3. Source and destination must be different processes (not threads of the same process). + 4. Only user-space addresses are supported; kernel addresses cannot be mapped. + 5. Both source and destination processes must remain alive during the mapping operation. + 6. PUD-level huge pages are not supported in current implementation. + 7. The start address and size of both source and destination must be 4KB-aligned. + 8. Callers are responsible for ensuring safe access to mapped pages after attachment, + as permissions are inherited from the source. + + This mechanism is intended for HPC applications requiring high-speed inter-process + data sharing. If your use case does not meet the above constraints or you are unsure, + disable this option by saying N. 
diff --git a/drivers/zcopy/Makefile b/drivers/zcopy/Makefile new file mode 100644 index 000000000000..60a6909da314 --- /dev/null +++ b/drivers/zcopy/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_PAGEATTACH) += zcopy.o \ No newline at end of file diff --git a/drivers/zcopy/zcopy.c b/drivers/zcopy/zcopy.c new file mode 100644 index 000000000000..9f224eac4e39 --- /dev/null +++ b/drivers/zcopy/zcopy.c @@ -0,0 +1,108 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright (C) 2025. Huawei Technologies Co., Ltd */ + +#include <linux/init.h> +#include <linux/module.h> +#include <linux/ioctl.h> +#include <linux/fs.h> +#include <linux/cdev.h> +#include <linux/uaccess.h> + +struct zcopy_cdev { + struct cdev chrdev; + dev_t dev; + int major; + struct class *dev_class; + struct device *dev_device; +}; + +static struct zcopy_cdev z_cdev; + +long zcopy_ioctl(struct file *file, unsigned int type, unsigned long ptr) +{ + return 0; +} + +static const struct file_operations zcopy_fops = { + .owner = THIS_MODULE, + .unlocked_ioctl = zcopy_ioctl, +}; + +int register_device_zcopy(void) +{ + int ret; + + ret = alloc_chrdev_region(&z_cdev.dev, 0, 1, "zcopy"); + if (ret < 0) { + pr_err("alloc chrdev failed\n"); + goto err_out; + } + + z_cdev.major = MAJOR(z_cdev.dev); + + cdev_init(&z_cdev.chrdev, &zcopy_fops); + ret = cdev_add(&z_cdev.chrdev, z_cdev.dev, 1); + if (ret < 0) { + pr_err("add char device failed\n"); + goto err_unregister_chrdev; + } + + z_cdev.dev_class = class_create(THIS_MODULE, "zcopy"); + if (IS_ERR(z_cdev.dev_class)) { + pr_err("class create error\n"); + ret = PTR_ERR(z_cdev.dev_class); + goto err_cdev_del; + } + + z_cdev.dev_device = device_create(z_cdev.dev_class, NULL, + MKDEV(z_cdev.major, 0), NULL, "zdax"); + if (IS_ERR(z_cdev.dev_device)) { + pr_err("device create error\n"); + ret = PTR_ERR(z_cdev.dev_device); + goto err_class_destroy; + } + + return 0; + +err_class_destroy: + class_destroy(z_cdev.dev_class); +err_cdev_del: 
+ cdev_del(&z_cdev.chrdev); +err_unregister_chrdev: + unregister_chrdev_region(z_cdev.dev, 1); +err_out: + return ret; +} + +void unregister_device_zcopy(void) +{ + device_destroy(z_cdev.dev_class, MKDEV(z_cdev.major, 0)); + class_destroy(z_cdev.dev_class); + cdev_del(&z_cdev.chrdev); + unregister_chrdev_region(z_cdev.dev, 1); +} + +static int __init zcopy_init(void) +{ + int ret; + + ret = register_device_zcopy(); + if (ret) { + pr_err("register_device_zcopy failed\n"); + return -1; Use a standard error code here. + } + + return 0; +} + +static void __exit zcopy_exit(void) +{ + unregister_device_zcopy(); +} + +module_init(zcopy_init); +module_exit(zcopy_exit); + +MODULE_AUTHOR("liumingrui <liumingrui@huawei.com>"); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("PAGEATTACH: A zero-copy data transfer mechanism");
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/release-management/issues/ID3TGE -------------------------------- Provide an efficient intra-node data transfer interface, enabling applications to map the pages associated with a specified virtual memory address space in the source process to the virtual address space in the destination process. Signed-off-by: Liu Mingrui <liumingrui@huawei.com> --- drivers/zcopy/zcopy.c | 474 +++++++++++++++++++++++++++++++++++++++++- 1 file changed, 469 insertions(+), 5 deletions(-) diff --git a/drivers/zcopy/zcopy.c b/drivers/zcopy/zcopy.c index 9f224eac4e39..b718e14fcff3 100644 --- a/drivers/zcopy/zcopy.c +++ b/drivers/zcopy/zcopy.c @@ -7,6 +7,41 @@ #include <linux/fs.h> #include <linux/cdev.h> #include <linux/uaccess.h> +#include <linux/kallsyms.h> +#include <linux/mm.h> +#include <linux/kprobes.h> +#include <linux/huge_mm.h> +#include <linux/mm_types.h> +#include <linux/mm_types_task.h> +#include <linux/rmap.h> +#include <linux/sched/mm.h> +#include <linux/pgtable.h> +#include <asm-generic/pgalloc.h> +#include <asm/tlbflush.h> +#include <asm/pgtable-hwdef.h> + +#ifndef PUD_SHIFT +#define ARM64_HW_PGTABLE_LEVEL_SHIFT(n) ((PAGE_SHIFT - 3) * (4 - (n)) + 3) +#define PUD_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(1) +#endif + +enum pgt_entry { + NORMAL_PMD, + HPAGE_PMD, +}; + +enum { + IO_ATTACH = 1, + IO_MAX +}; + +struct zcopy_ioctl_pswap { + unsigned long src_addr; + unsigned long dst_addr; + int src_pid; + int dst_pid; + unsigned long size; +}; struct zcopy_cdev { struct cdev chrdev; @@ -18,17 +53,439 @@ struct zcopy_cdev { static struct zcopy_cdev z_cdev; -long zcopy_ioctl(struct file *file, unsigned int type, unsigned long ptr) +static int (*__zcopy_pte_alloc)(struct mm_struct *, pmd_t *); +static int (*__zcopy_pmd_alloc)(struct mm_struct *, pud_t *, unsigned long); +static int (*__zcopy_pud_alloc)(struct mm_struct *, p4d_t *, unsigned long); +static unsigned long (*kallsyms_lookup_name_funcp)(const char *); + 
+static struct kretprobe __kretprobe; + +static unsigned long __kprobe_lookup_name(const char *symbol_name) +{ + int ret; + void *addr; + + __kretprobe.kp.symbol_name = symbol_name; + ret = register_kretprobe(&__kretprobe); + if (ret < 0) { + pr_err("register_kprobe failed, returned %d\n", ret); + return 0; + } + pr_info("Planted %s kprobe at %pK\n", symbol_name, __kretprobe.kp.addr); + addr = __kretprobe.kp.addr; + unregister_kretprobe(&__kretprobe); + return (unsigned long)addr; +} + +static inline unsigned long __kallsyms_lookup_name(const char *symbol_name) +{ + if (kallsyms_lookup_name_funcp == NULL) + return 0; + return kallsyms_lookup_name_funcp(symbol_name); +} + +static inline pud_t *zcopy_pud_alloc(struct mm_struct *mm, p4d_t *p4d, + unsigned long address) +{ + return (unlikely(p4d_none(*p4d)) && + __zcopy_pud_alloc(mm, p4d, address)) ? NULL : pud_offset(p4d, address); +} + +static inline pmd_t *zcopy_pmd_alloc(struct mm_struct *mm, pud_t *pud, + unsigned long address) +{ + return (unlikely(pud_none(*pud)) && + __zcopy_pmd_alloc(mm, pud, address)) ? 
NULL : pmd_offset(pud, address); +} + +static inline bool zcopy_pte_alloc(struct mm_struct *mm, pmd_t *pmd) +{ + return unlikely(pmd_none(*pmd)) && __zcopy_pte_alloc(mm, pmd); +} + +static pud_t *zcopy_get_pud(struct mm_struct *mm, unsigned long addr) +{ + pgd_t *pgd; + p4d_t *p4d; + pud_t *pud; + + pgd = pgd_offset(mm, addr); + if (pgd_none(*pgd)) + return NULL; + + p4d = p4d_offset(pgd, addr); + if (p4d_none(*p4d)) + return NULL; + + pud = pud_offset(p4d, addr); + if (pud_none(*pud)) + return NULL; + + return pud; +} + +static pmd_t *zcopy_get_pmd(struct mm_struct *mm, unsigned long addr) +{ + pud_t *pud; + pmd_t *pmd; + + pud = zcopy_get_pud(mm, addr); + if (!pud) + return NULL; + + pmd = pmd_offset(pud, addr); + if (pmd_none(*pmd)) + return NULL; + + return pmd; +} + +static pud_t *zcopy_alloc_new_pud(struct mm_struct *mm, unsigned long addr) +{ + pgd_t *pgd; + p4d_t *p4d; + + pgd = pgd_offset(mm, addr); + p4d = p4d_alloc(mm, pgd, addr); + if (!p4d) + return NULL; + + return zcopy_pud_alloc(mm, p4d, addr); +} + +static pmd_t *zcopy_alloc_pmd(struct mm_struct *mm, unsigned long addr) +{ + pud_t *pud; + pmd_t *pmd; + + pud = zcopy_alloc_new_pud(mm, addr); + if (!pud) + return NULL; + + pmd = zcopy_pmd_alloc(mm, pud, addr); + if (!pmd) + return NULL; + + return pmd; +} + +static inline void zcopy_add_mm_counter(struct mm_struct *mm, int member, long value) { + atomic_long_add(value, &mm->rss_stat.count[member]); +} + +static inline void zcopy_add_mm_rss_vec(struct mm_struct *mm, int *rss) +{ + int i; + + for (i = 0; i < NR_MM_COUNTERS; i++) + if (rss[i]) + zcopy_add_mm_counter(mm, i, rss[i]); +} + +static __always_inline unsigned long get_extent(enum pgt_entry entry, + unsigned long old_addr, unsigned long old_end, + unsigned long new_addr) +{ + unsigned long next, extent, mask, size; + + switch (entry) { + case HPAGE_PMD: + case NORMAL_PMD: + mask = PMD_MASK; + size = PMD_SIZE; + break; + default: + BUILD_BUG(); + break; + } + + next = (old_addr + size) & mask; + 
/* even if next overflowed, extent below will be ok */ + extent = next - old_addr; + if (extent > old_end - old_addr) + extent = old_end - old_addr; + next = (new_addr + size) & mask; + if (extent > next - new_addr) + extent = next - new_addr; + return extent; +} + +static int attach_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, + unsigned long dst_addr, unsigned long src_addr, pmd_t *dst_pmdp, + pmd_t *src_pmdp, unsigned long len) +{ + struct mm_struct *dst_mm = dst_vma->vm_mm; + pte_t *src_ptep, *dst_ptep, pte, orig_pte; + struct page *src_page, *orig_page; + spinlock_t *dst_ptl; + int rss[NR_MM_COUNTERS]; + unsigned long src_addr_end = src_addr + len; + + memset(rss, 0, sizeof(int) * NR_MM_COUNTERS); + + src_ptep = pte_offset_map(src_pmdp, src_addr); + dst_ptep = pte_offset_map(dst_pmdp, dst_addr); + dst_ptl = pte_lockptr(dst_mm, dst_pmdp); + spin_lock_nested(dst_ptl, SINGLE_DEPTH_NESTING); + + for (; src_addr < src_addr_end; src_ptep++, src_addr += PAGE_SIZE, + dst_ptep++, dst_addr += PAGE_SIZE) { + /* + * For special pte, there may not be corresponding page. Hence, + * we skip this situation. + */ + pte = ptep_get(src_ptep); + if (pte_none(*src_ptep) || pte_special(*src_ptep) || !pte_present(pte)) + continue; + + src_page = pte_page(pte); + atomic_inc(&src_page->_refcount); + atomic_inc(&src_page->_mapcount); + rss[MM_ANONPAGES]++; + + /* + * If dst virtual addr has page mapping, before setup the new mapping. + * we should decrease the orig page mapcount and refcount. 
+ */ + orig_pte = *dst_ptep; + if (!pte_none(orig_pte)) { + orig_page = pte_page(orig_pte); + atomic_dec(&orig_page->_refcount); + atomic_dec(&orig_page->_mapcount); + rss[MM_ANONPAGES]--; + } + set_pte_at(dst_mm, dst_addr, dst_ptep, pte); + } + + flush_tlb_range(dst_vma, dst_addr, dst_addr + len); + zcopy_add_mm_rss_vec(dst_mm, rss); + spin_unlock(dst_ptl); + return 0; } +static int attach_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm, + unsigned long dst_addr, unsigned long src_addr, unsigned long size) +{ + struct vm_area_struct *src_vma, *dst_vma; + unsigned long extent, src_addr_end; + pmd_t *src_pmd, *dst_pmd; + int ret = 0; + + src_addr_end = src_addr + size; + src_vma = find_vma(src_mm, src_addr); + dst_vma = find_vma(dst_mm, dst_addr); + /* Check the vma has not been freed again.*/ + if (!src_vma || !dst_vma) + return -ENOENT; + + for (; src_addr < src_addr_end; src_addr += extent, dst_addr += extent) { + cond_resched(); + + extent = get_extent(NORMAL_PMD, src_addr, src_addr_end, dst_addr); + src_pmd = zcopy_get_pmd(src_mm, src_addr); + if (!src_pmd) + continue; + dst_pmd = zcopy_alloc_pmd(dst_mm, dst_addr); + if (!dst_pmd) { + ret = -ENOMEM; + break; + } + + if (pmd_trans_huge(*src_pmd)) { + /* Not support hugepage mapping */ + ret = -EOPNOTSUPP; + break; + } else if (is_swap_pmd(*src_pmd) || pmd_devmap(*src_pmd)) { + ret = -EOPNOTSUPP; + break; + } + + if (zcopy_pte_alloc(dst_mm, dst_pmd)) { + ret = -ENOMEM; + break; + } + + ret = attach_ptes(dst_vma, src_vma, dst_addr, src_addr, dst_pmd, + src_pmd, extent); + if (ret < 0) + break; + } + + return ret; +} + +static int attach_pages(unsigned long dst_addr, unsigned long src_addr, + int dst_pid, int src_pid, unsigned long size) +{ + struct mm_struct *dst_mm, *src_mm; + struct task_struct *src_task, *dst_task; + struct page **process_pages; + unsigned long nr_pages; + unsigned int flags = 0; + int pinned_pages; + int locked = 1; + int ret; + + ret = -EINVAL; + if (size <= 0) + goto out; + + 
if ((src_addr & (PAGE_SIZE-1)) != 0 || + (dst_addr & (PAGE_SIZE-1)) != 0 || + (size & (PAGE_SIZE-1)) != 0) { + pr_err("Not aligned with PAGE_SIZE\n"); + goto out; + } + + /* Check that the addresses are in userspace; we do not allow kernel addresses. */ + if (!is_ttbr0_addr(dst_addr) || !is_ttbr0_addr(src_addr)) { + pr_err("Kernel-space addresses are not allowed\n"); + goto out; + } + + ret = -ESRCH; + src_task = find_get_task_by_vpid(src_pid); + if (!src_task) + goto out; + + src_mm = mm_access(src_task, PTRACE_MODE_ATTACH_REALCREDS); + if (!src_mm || IS_ERR(src_mm)) { + ret = IS_ERR(src_mm) ? PTR_ERR(src_mm) : -ESRCH; + if (ret == -EACCES) + ret = -EPERM; + goto put_src_task; + } + + dst_task = find_get_task_by_vpid(dst_pid); + if (!dst_task) + goto put_src_mm; + + dst_mm = mm_access(dst_task, PTRACE_MODE_ATTACH_REALCREDS); + if (!dst_mm || IS_ERR(dst_mm)) { + ret = IS_ERR(dst_mm) ? PTR_ERR(dst_mm) : -ESRCH; + if (ret == -EACCES) + ret = -EPERM; + goto put_dst_task; + } + + if (src_mm == dst_mm) { + ret = -EINVAL; + pr_err("Attach is not allowed within the same address space\n"); + goto put_dst_mm; + } + + nr_pages = (src_addr + size - 1) / PAGE_SIZE - src_addr / PAGE_SIZE + 1; + process_pages = kvmalloc_array(nr_pages, sizeof(struct page *), GFP_KERNEL); + if (!process_pages) { + ret = -ENOMEM; + goto put_dst_mm; + } + + mmap_read_lock(src_mm); + pinned_pages = pin_user_pages_remote(src_mm, src_addr, nr_pages, + flags, process_pages, + NULL, &locked); + if (locked) + mmap_read_unlock(src_mm); + + if (pinned_pages <= 0) { + ret = -EFAULT; + goto free_pages_array; + } + + ret = attach_page_range(dst_mm, src_mm, dst_addr, src_addr, size); + + unpin_user_pages_dirty_lock(process_pages, pinned_pages, 0); + +free_pages_array: + kvfree(process_pages); +put_dst_mm: + mmput(dst_mm); +put_dst_task: + put_task_struct(dst_task); +put_src_mm: + mmput(src_mm); +put_src_task: + put_task_struct(src_task); +out: + return ret; +} + +static long zcopy_ioctl(struct file *file, unsigned int type, unsigned long ptr) +{ + long ret
= 0; + + switch (type) { + case IO_ATTACH: + { + struct zcopy_ioctl_pswap ctx; + + if (copy_from_user((void *)&ctx, (void *)ptr, + sizeof(struct zcopy_ioctl_pswap))) { + pr_err("copy from user for attach failed\n"); + ret = -EFAULT; + break; + } + ret = attach_pages(ctx.dst_addr, ctx.src_addr, ctx.dst_pid, + ctx.src_pid, ctx.size); + break; + } + default: + break; + } + + return ret; +} + static const struct file_operations zcopy_fops = { .owner = THIS_MODULE, .unlocked_ioctl = zcopy_ioctl, }; -int register_device_zcopy(void) +#define REGISTER_CHECK(_var, _errstr) ({ \ + int __ret = 0; \ + if (!(_var)) { \ + pr_warn("Not found %s\n", _errstr); \ + __ret = -ENOENT; \ + } \ + __ret; \ +}) + +static int register_unexport_func(void) +{ + int ret; + + kallsyms_lookup_name_funcp + = (unsigned long (*)(const char *))__kprobe_lookup_name("kallsyms_lookup_name"); + ret = REGISTER_CHECK(kallsyms_lookup_name_funcp, "kallsyms_lookup_name"); + if (ret) + goto out; + + __zcopy_pte_alloc + = (int (*)(struct mm_struct *, pmd_t *))__kallsyms_lookup_name("__pte_alloc"); + ret = REGISTER_CHECK(__zcopy_pte_alloc, "__pte_alloc"); + if (ret) + goto out; + + __zcopy_pmd_alloc + = (int (*)(struct mm_struct *, pud_t *, unsigned long)) + __kallsyms_lookup_name("__pmd_alloc"); + ret = REGISTER_CHECK(__zcopy_pmd_alloc, "__pmd_alloc"); + if (ret) + goto out; + + __zcopy_pud_alloc + = (int (*)(struct mm_struct *, p4d_t *, unsigned long)) + __kallsyms_lookup_name("__pud_alloc"); + ret = REGISTER_CHECK(__zcopy_pud_alloc, "__pud_alloc"); + +out: + return ret; +} + +static int register_device_zcopy(void) { int ret; @@ -74,7 +531,7 @@ int register_device_zcopy(void) return ret; } -void unregister_device_zcopy(void) +static void unregister_device_zcopy(void) { device_destroy(z_cdev.dev_class, MKDEV(z_cdev.major, 0)); class_destroy(z_cdev.dev_class); @@ -86,13 +543,20 @@ static int __init zcopy_init(void) { int ret; + ret = register_unexport_func(); + if (ret) { + pr_err("register_unexport_func 
failed\n"); + goto out; + } + ret = register_device_zcopy(); if (ret) { pr_err("register_device_zcopy failed\n"); - return -1; + goto out; } - return 0; +out: + return ret; } static void __exit zcopy_exit(void) -- 2.25.1
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/release-management/issues/ID3TGE -------------------------------- It extends support to 2MB Transparent Huge Page (THP) mapping, building on the existing small page mapping capability. Signed-off-by: Liu Mingrui <liumingrui@huawei.com> --- drivers/zcopy/zcopy.c | 89 +++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 86 insertions(+), 3 deletions(-) diff --git a/drivers/zcopy/zcopy.c b/drivers/zcopy/zcopy.c index b718e14fcff3..272cd6b8ff3f 100644 --- a/drivers/zcopy/zcopy.c +++ b/drivers/zcopy/zcopy.c @@ -211,6 +211,82 @@ static __always_inline unsigned long get_extent(enum pgt_entry entry, return extent; } +static void zcopy_pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp, + pgtable_t pgtable) +{ + assert_spin_locked(pmd_lockptr(mm, pmdp)); + + /* FIFO */ + if (!pmd_huge_pte(mm, pmdp)) + INIT_LIST_HEAD(&pgtable->lru); + else + list_add(&pgtable->lru, &pmd_huge_pte(mm, pmdp)->lru); + pmd_huge_pte(mm, pmdp) = pgtable; +} + +int attach_huge_pmd(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, + unsigned long dst_addr, unsigned long src_addr, pmd_t *dst_pmdp, pmd_t *src_pmdp) +{ + struct mm_struct *dst_mm, *src_mm; + spinlock_t *src_ptl, *dst_ptl; + struct page *src_thp_page, *orig_thp_page; + pmd_t pmd, orig_pmd; + pgtable_t pgtable; + + + if (!vma_is_anonymous(dst_vma)) { + pr_err("dst_vma is %d\n", + vma_is_anonymous(dst_vma)); + return -EINVAL; + } + + dst_mm = dst_vma->vm_mm; + src_mm = src_vma->vm_mm; + + /* alloc a pgtable for new pmdp */ + pgtable = pte_alloc_one(dst_mm); + if (unlikely(!pgtable)) { + pr_err("pte_alloc_one failed\n"); + return -ENOMEM; + } + + src_ptl = pmd_lockptr(src_mm, src_pmdp); + dst_ptl = pmd_lockptr(dst_mm, dst_pmdp); + + spin_lock(src_ptl); + pmd = *src_pmdp; + src_thp_page = pmd_page(pmd); + if (unlikely(!PageHead(src_thp_page))) { + pr_err("VM assertion failed: it is not a head page\n"); + spin_unlock(src_ptl); + return 
-EINVAL; + } + + get_page(src_thp_page); + atomic_inc(compound_mapcount_ptr(src_thp_page)); + spin_unlock(src_ptl); + + spin_lock_nested(dst_ptl, SINGLE_DEPTH_NESTING); + orig_pmd = *dst_pmdp; + /* umap the old page mappings */ + if (!pmd_none(orig_pmd)) { + orig_thp_page = pmd_page(orig_pmd); + put_page(orig_thp_page); + atomic_dec(compound_mapcount_ptr(orig_thp_page)); + zcopy_add_mm_counter(dst_mm, MM_ANONPAGES, -HPAGE_PMD_NR); + mm_dec_nr_ptes(dst_mm); + } + + zcopy_add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR); + mm_inc_nr_ptes(dst_mm); + zcopy_pgtable_trans_huge_deposit(dst_mm, dst_pmdp, pgtable); + set_pmd_at(dst_mm, dst_addr, dst_pmdp, pmd); + flush_tlb_range(dst_vma, dst_addr, dst_addr + HPAGE_PMD_SIZE); + spin_unlock(dst_ptl); + + return 0; +} + static int attach_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, unsigned long dst_addr, unsigned long src_addr, pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long len) @@ -294,9 +370,16 @@ static int attach_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm, } if (pmd_trans_huge(*src_pmd)) { - /* Not support hugepage mapping */ - ret = -EOPNOTSUPP; - break; + if (extent == HPAGE_PMD_SIZE) { + ret = attach_huge_pmd(dst_vma, src_vma, dst_addr, src_addr, + dst_pmd, src_pmd); + if (ret) + return ret; + continue; + } else { + ret = -EOPNOTSUPP; + break; + } } else if (is_swap_pmd(*src_pmd) || pmd_devmap(*src_pmd)) { ret = -EOPNOTSUPP; break; -- 2.25.1
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/release-management/issues/ID3TGE -------------------------------- In order to debug mfs, we provide the some basic tracepoint in PageAttach, which are: - trace_attach_extent_start - trace_attach_extent_end - trace_attach_page_range_start - trace_attach_page_range_end Signed-off-by: Liu Mingrui <liumingrui@huawei.com> --- MAINTAINERS | 1 + drivers/zcopy/zcopy.c | 13 +++ include/trace/events/attach.h | 157 ++++++++++++++++++++++++++++++++++ 3 files changed, 171 insertions(+) create mode 100644 include/trace/events/attach.h diff --git a/MAINTAINERS b/MAINTAINERS index fa863a4ab198..3b402b41ee63 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -19653,6 +19653,7 @@ ZCOPY DRIVER M: Mingrui Liu <liumingrui@huawei.com> S: Maintained F: drivers/zcopy/ +F: include/trace/events/attach.h THE REST M: Linus Torvalds <torvalds@linux-foundation.org> diff --git a/drivers/zcopy/zcopy.c b/drivers/zcopy/zcopy.c index 272cd6b8ff3f..32ece33eb3b4 100644 --- a/drivers/zcopy/zcopy.c +++ b/drivers/zcopy/zcopy.c @@ -20,6 +20,9 @@ #include <asm/tlbflush.h> #include <asm/pgtable-hwdef.h> +#define CREATE_TRACE_POINTS +#include <trace/events/attach.h> + #ifndef PUD_SHIFT #define ARM64_HW_PGTABLE_LEVEL_SHIFT(n) ((PAGE_SHIFT - 3) * (4 - (n)) + 3) #define PUD_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(1) @@ -371,8 +374,12 @@ static int attach_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm, if (pmd_trans_huge(*src_pmd)) { if (extent == HPAGE_PMD_SIZE) { + trace_attach_extent_start(dst_mm, src_mm, dst_addr, src_addr, + dst_pmd, src_pmd, extent); ret = attach_huge_pmd(dst_vma, src_vma, dst_addr, src_addr, dst_pmd, src_pmd); + trace_attach_extent_end(dst_mm, src_mm, dst_addr, src_addr, + dst_pmd, src_pmd, ret); if (ret) return ret; continue; @@ -390,8 +397,12 @@ static int attach_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm, break; } + trace_attach_extent_start(dst_mm, src_mm, dst_addr, src_addr, dst_pmd, + 
+					src_pmd, extent);
 	ret = attach_ptes(dst_vma, src_vma, dst_addr, src_addr, dst_pmd,
 			src_pmd, extent);
+	trace_attach_extent_end(dst_mm, src_mm, dst_addr, src_addr, dst_pmd,
+				src_pmd, ret);
 	if (ret < 0)
 		break;
 }
@@ -478,7 +489,9 @@ static int attach_pages(unsigned long dst_addr, unsigned long src_addr,
 		goto free_pages_array;
 	}
 
+	trace_attach_page_range_start(dst_mm, src_mm, dst_addr, src_addr, size);
 	ret = attach_page_range(dst_mm, src_mm, dst_addr, src_addr, size);
+	trace_attach_page_range_end(dst_mm, src_mm, dst_addr, src_addr, ret);
 
 	unpin_user_pages_dirty_lock(process_pages, pinned_pages, 0);
diff --git a/include/trace/events/attach.h b/include/trace/events/attach.h
new file mode 100644
index 000000000000..1a38cbeba747
--- /dev/null
+++ b/include/trace/events/attach.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM attach
+
+#if !defined(_TRACE_ATTACH_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_ATTACH_H
+
+#include <linux/types.h>
+#include <linux/tracepoint.h>
+#include <linux/mm_types.h>
+
+TRACE_EVENT(attach_page_range_start,
+
+	TP_PROTO(struct mm_struct *dst_mm,
+		 struct mm_struct *src_mm,
+		 unsigned long dst_addr,
+		 unsigned long src_addr,
+		 unsigned long size),
+
+	TP_ARGS(dst_mm, src_mm, dst_addr, src_addr, size),
+
+	TP_STRUCT__entry(
+		__field(struct mm_struct *, dst_mm)
+		__field(struct mm_struct *, src_mm)
+		__field(unsigned long, dst_addr)
+		__field(unsigned long, src_addr)
+		__field(unsigned long, size)
+	),
+
+	TP_fast_assign(
+		__entry->dst_mm = dst_mm;
+		__entry->src_mm = src_mm;
+		__entry->dst_addr = dst_addr;
+		__entry->src_addr = src_addr;
+		__entry->size = size;
+	),
+
+	TP_printk("dst_mm=%p src_mm=%p dst_addr=0x%lx src_addr=0x%lx size=%ld",
+		__entry->dst_mm, __entry->src_mm,
+		__entry->dst_addr, __entry->src_addr,
+		__entry->size)
+);
+
+TRACE_EVENT(attach_page_range_end,
+
+	TP_PROTO(struct mm_struct *dst_mm,
+		 struct mm_struct *src_mm,
+		 unsigned long dst_addr,
+		 unsigned
+		 long src_addr,
+		 int ret),
+
+	TP_ARGS(dst_mm, src_mm, dst_addr, src_addr, ret),
+
+	TP_STRUCT__entry(
+		__field(struct mm_struct *, dst_mm)
+		__field(struct mm_struct *, src_mm)
+		__field(unsigned long, dst_addr)
+		__field(unsigned long, src_addr)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->dst_mm = dst_mm;
+		__entry->src_mm = src_mm;
+		__entry->dst_addr = dst_addr;
+		__entry->src_addr = src_addr;
+		__entry->ret = ret;
+	),
+
+	TP_printk("dst_mm=%p src_mm=%p dst_addr=0x%lx src_addr=0x%lx ret=%d",
+		__entry->dst_mm, __entry->src_mm,
+		__entry->dst_addr, __entry->src_addr,
+		__entry->ret)
+);
+
+TRACE_EVENT(attach_extent_start,
+
+	TP_PROTO(struct mm_struct *dst_mm,
+		 struct mm_struct *src_mm,
+		 unsigned long dst_addr,
+		 unsigned long src_addr,
+		 pmd_t *new_pmd,
+		 pmd_t *old_pmd,
+		 unsigned long extent),
+
+	TP_ARGS(dst_mm, src_mm, dst_addr, src_addr, new_pmd, old_pmd, extent),
+
+	TP_STRUCT__entry(
+		__field(struct mm_struct *, dst_mm)
+		__field(struct mm_struct *, src_mm)
+		__field(unsigned long, dst_addr)
+		__field(unsigned long, src_addr)
+		__field(pmd_t *, new_pmd)
+		__field(pmd_t *, old_pmd)
+		__field(unsigned long, extent)
+	),
+
+	TP_fast_assign(
+		__entry->dst_mm = dst_mm;
+		__entry->src_mm = src_mm;
+		__entry->dst_addr = dst_addr;
+		__entry->src_addr = src_addr;
+		__entry->new_pmd = new_pmd;
+		__entry->old_pmd = old_pmd;
+		__entry->extent = extent;
+	),
+
+	TP_printk("dst_mm=%p src_mm=%p dst_addr=0x%lx src_addr=0x%lx new_pmd=%016llx old_pmd=%016llx extent=%ld",
+		__entry->dst_mm, __entry->src_mm,
+		__entry->dst_addr, __entry->src_addr,
+		pmd_val(*__entry->new_pmd), pmd_val(*__entry->old_pmd),
+		__entry->extent)
+);
+
+TRACE_EVENT(attach_extent_end,
+
+	TP_PROTO(struct mm_struct *dst_mm,
+		 struct mm_struct *src_mm,
+		 unsigned long dst_addr,
+		 unsigned long src_addr,
+		 pmd_t *new_pmd,
+		 pmd_t *old_pmd,
+		 int ret),
+
+	TP_ARGS(dst_mm, src_mm, dst_addr, src_addr, new_pmd, old_pmd, ret),
+
+	TP_STRUCT__entry(
+		__field(struct mm_struct *, dst_mm)
+		__field(struct mm_struct *, src_mm)
+		__field(unsigned long, dst_addr)
+		__field(unsigned long, src_addr)
+		__field(pmd_t *, new_pmd)
+		__field(pmd_t *, old_pmd)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->dst_mm = dst_mm;
+		__entry->src_mm = src_mm;
+		__entry->dst_addr = dst_addr;
+		__entry->src_addr = src_addr;
+		__entry->new_pmd = new_pmd;
+		__entry->old_pmd = old_pmd;
+		__entry->ret = ret;
+	),
+
+	TP_printk("dst_mm=%p src_mm=%p dst_addr=0x%lx src_addr=0x%lx new_pmd=%016llx old_pmd=%016llx ret=%d",
+		__entry->dst_mm, __entry->src_mm,
+		__entry->dst_addr, __entry->src_addr,
+		pmd_val(*__entry->new_pmd), pmd_val(*__entry->old_pmd),
+		__entry->ret)
+);
+
+#endif /* _TRACE_ATTACH_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
-- 
2.25.1
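Since the header declares `TRACE_SYSTEM attach`, the new events land in the `attach` group under tracefs and can be toggled at runtime. A usage sketch (a config fragment, assuming a kernel built with this patchset and tracefs mounted at the default `/sys/kernel/tracing`):

```shell
cd /sys/kernel/tracing
# Enable the whole "attach" event group at once...
echo 1 > events/attach/enable
# ...or pick individual events:
echo 1 > events/attach/attach_page_range_start/enable
echo 1 > events/attach/attach_extent_end/enable
# Stream events while PAGEATTACH ioctls run:
cat trace_pipe
```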
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/release-management/issues/ID3TGE

--------------------------------

Add a new debug interface, dump_pagetable, which users can use to dump
the page tables of a virtual address.

Signed-off-by: Liu Mingrui <liumingrui@huawei.com>
---
 drivers/zcopy/zcopy.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/zcopy/zcopy.c b/drivers/zcopy/zcopy.c
index 32ece33eb3b4..d800eaba8e91 100644
--- a/drivers/zcopy/zcopy.c
+++ b/drivers/zcopy/zcopy.c
@@ -35,6 +35,7 @@ enum pgt_entry {
 
 enum {
 	IO_ATTACH = 1,
+	IO_DUMP = 3,
 	IO_MAX
 };
 
@@ -46,6 +47,11 @@ struct zcopy_ioctl_pswap {
 	unsigned long size;
 };
 
+struct zcopy_ioctl_dump {
+	unsigned long size;
+	unsigned long addr;
+};
+
 struct zcopy_cdev {
 	struct cdev chrdev;
 	dev_t dev;
@@ -60,6 +66,7 @@ static int (*__zcopy_pte_alloc)(struct mm_struct *, pmd_t *);
 static int (*__zcopy_pmd_alloc)(struct mm_struct *, pud_t *, unsigned long);
 static int (*__zcopy_pud_alloc)(struct mm_struct *, p4d_t *, unsigned long);
 static unsigned long (*kallsyms_lookup_name_funcp)(const char *);
+static void (*dump_pagetable)(unsigned long addr);
 
 static struct kretprobe __kretprobe;
 
@@ -528,6 +535,19 @@ static long zcopy_ioctl(struct file *file, unsigned int type, unsigned long ptr)
 			ctx.src_pid, ctx.size);
 		break;
 	}
+	case IO_DUMP:
+	{
+		struct zcopy_ioctl_dump param;
+
+		if (copy_from_user((void *)&param, (void *)ptr,
+				sizeof(struct zcopy_ioctl_dump))) {
+			pr_err("copy from user for dump failed\n");
+			ret = -EFAULT;
+			break;
+		}
+		dump_pagetable(param.addr);
+		break;
+	}
 	default:
 		break;
 	}
@@ -576,6 +596,11 @@ static int register_unexport_func(void)
 		= (int (*)(struct mm_struct *, p4d_t *, unsigned long))
 			__kallsyms_lookup_name("__pud_alloc");
 	ret = REGISTER_CHECK(__zcopy_pud_alloc, "__pud_alloc");
+	if (ret)
+		goto out;
+
+	dump_pagetable = (void (*)(unsigned long))__kallsyms_lookup_name("show_pte");
+	ret = REGISTER_CHECK(dump_pagetable, "show_pte");
 
 out:
 	return ret;
-- 
2.25.1
FeedBack: The patch(es) which you have sent to kernel@openeuler.org mailing list has been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/18972
Mailing list address: https://mailweb.openeuler.org/archives/list/kernel@openeuler.org/message/V42...
participants (3):
- Kefeng Wang
- Liu Mingrui
- patchwork bot