From: Zhou Guanghui <zhouguanghui1@huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I931MI
CVE: NA
-----------------------------------------------------------------
In the SVA scenario, the master device of the SMMU can access the address
space of a process by binding to that process. The process page table is
then shared by the MMU and the SMMU. In this case, if some address range
must not be accessed through the MMU, that range needs to be isolated
from the process page table.

This isolation could be achieved with the default domain of the SMMU.
However, unlike the process address space, the address space created by
the default domain cannot be shared by multiple SMMUs.

Therefore, add a shadow virtual space of the process (SVSP). This address
space is used only by the SMMU.
Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
---
 arch/arm64/Kconfig | 15 +++++++++++++++
 kernel/fork.c      |  1 +
 mm/mmap.c          |  1 +
 3 files changed, 17 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 67bc2ad13453..3e84b2572827 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2455,6 +2455,21 @@ config DMI
 	  However, even with this option, the resultant kernel should
 	  continue to boot on existing non-UEFI platforms.
 
+config ASCEND_SVSP
+	bool "Enable support for SVSP"
+	default n
+	depends on ARM_SMMU_V3_SVA
+	help
+	  SVSP means Shadow Virtual Space of Process, which is an IO space
+	  used by the master device of the SMMU.
+
+	  This space is used only by the master device, since the target
+	  address after translation by the MMU is not an accessible physical
+	  address but can be accessed by the SMMU.
+
+	  Therefore, in the SVA scenario, an SVSP is created for each target
+	  process address space.
+
 endmenu # "Boot options"
 
 config COMPAT
diff --git a/kernel/fork.c b/kernel/fork.c
index 00e5886c9e04..735bc670b49e 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1373,6 +1373,7 @@ struct mm_struct *mm_alloc(void)
 	memset(mm, 0, sizeof(*mm));
 	return mm_init(mm, current, current_user_ns());
 }
+EXPORT_SYMBOL_GPL(mm_alloc);
 
 static inline void __mmput(struct mm_struct *mm)
 {
diff --git a/mm/mmap.c b/mm/mmap.c
index 21d1fc39bf21..5d78ebaeed60 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2684,6 +2684,7 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 
 	return do_vmi_munmap(&vmi, mm, start, len, uf, false);
 }
+EXPORT_SYMBOL(do_munmap);
 
 static unsigned long __mmap_region(struct mm_struct *mm, struct file *file,
 		unsigned long addr,
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I931MI
CVE: NA
-----------------------------------------------------------------
SVSP requires that the PASID of the svsp_mm equal the PASID of the normal
mm with its highest bit set. This commit introduces the
IOMMU_DEV_FEAT_SVSP feature, which lets the user pass the PASID of the
normal mm so that the PASID of the svsp_mm can be set according to this
rule.
Signed-off-by: Yuan Can <yuancan@huawei.com>
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 33 +++++++++++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 14 ++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  8 +++++
 include/linux/iommu.h                       |  3 ++
 4 files changed, 58 insertions(+)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index 5bb805ca3e5d..3f643a4e4cc0 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -597,3 +597,36 @@ struct iommu_domain *arm_smmu_sva_domain_alloc(void)
 
 	return domain;
 }
+
+#ifdef CONFIG_ASCEND_SVSP
+bool arm_smmu_master_svsp_supported(struct arm_smmu_master *master)
+{
+	/* svsp depends on sva */
+	if (!arm_smmu_master_sva_supported(master))
+		return false;
+
+	return true;
+}
+
+int arm_smmu_master_enable_svsp(struct arm_smmu_master *master)
+{
+	if (arm_smmu_master_sva_enabled(master)) {
+		master->svsp_enabled = true;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+int arm_smmu_master_disable_svsp(struct arm_smmu_master *master)
+{
+	master->svsp_enabled = false;
+	return 0;
+}
+
+unsigned int svm_svsp_extract_ssid_bits(struct arm_smmu_master *master)
+{
+	return master->ssid_bits;
+}
+EXPORT_SYMBOL(svm_svsp_extract_ssid_bits);
+#endif
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 09dc2d690de1..77ef1e2efb27 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3269,6 +3269,14 @@ static int arm_smmu_dev_enable_feature(struct device *dev,
 		if (arm_smmu_master_sva_enabled(master))
 			return -EBUSY;
 		return arm_smmu_master_enable_sva(master);
+#ifdef CONFIG_ASCEND_SVSP
+	case IOMMU_DEV_FEAT_SVSP:
+		if (!arm_smmu_master_svsp_supported(master))
+			return -EINVAL;
+		if (master->svsp_enabled)
+			return -EBUSY;
+		return arm_smmu_master_enable_svsp(master);
+#endif
 	default:
 		return -EINVAL;
 	}
@@ -3294,6 +3302,12 @@ static int arm_smmu_dev_disable_feature(struct device *dev,
 		if (!arm_smmu_master_sva_enabled(master))
 			return -EINVAL;
 		return arm_smmu_master_disable_sva(master);
+#ifdef CONFIG_ASCEND_SVSP
+	case IOMMU_DEV_FEAT_SVSP:
+		if (!master->svsp_enabled)
+			return -EINVAL;
+		return arm_smmu_master_disable_svsp(master);
+#endif
 	default:
 		return -EINVAL;
 	}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index f04d10223f6e..13b09da7c902 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -767,6 +767,9 @@ struct arm_smmu_master {
 	bool				stall_enabled;
 	bool				sva_enabled;
 	bool				iopf_enabled;
+#ifdef CONFIG_ASCEND_SVSP
+	bool				svsp_enabled;
+#endif
 	struct list_head		bonds;
 	unsigned int			ssid_bits;
 };
@@ -831,6 +834,11 @@ void arm_smmu_sva_notifier_synchronize(void);
 struct iommu_domain *arm_smmu_sva_domain_alloc(void);
 void arm_smmu_sva_remove_dev_pasid(struct iommu_domain *domain,
 				   struct device *dev, ioasid_t id);
+#ifdef CONFIG_ASCEND_SVSP
+bool arm_smmu_master_svsp_supported(struct arm_smmu_master *master);
+int arm_smmu_master_enable_svsp(struct arm_smmu_master *master);
+int arm_smmu_master_disable_svsp(struct arm_smmu_master *master);
+#endif
 #else /* CONFIG_ARM_SMMU_V3_SVA */
 static inline bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index a9f9b8bb7540..a1be18a687b6 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -200,6 +200,9 @@ struct iommu_iort_rmr_data {
 enum iommu_dev_features {
 	IOMMU_DEV_FEAT_SVA,
 	IOMMU_DEV_FEAT_IOPF,
+#ifdef CONFIG_ASCEND_SVSP
+	IOMMU_DEV_FEAT_SVSP,
+#endif
 };
 
 #define IOMMU_NO_PASID	(0U) /* Reserved for DMA w/o PASID */
From: Zhou Guanghui <zhouguanghui1@huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I931MI
CVE: NA
-----------------------------------------------------------------
Reserve a range of the user address space so that the process cannot
obtain mappings in this range through normal allocation.
Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
---
 fs/hugetlbfs/inode.c                   | 13 ++++++++
 include/linux/mman.h                   | 43 ++++++++++++++++++++++++++
 include/uapi/asm-generic/mman-common.h |  1 +
 mm/mmap.c                              | 33 ++++++++++++++++++++
 4 files changed, 90 insertions(+)
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index f001dfe52a14..f151370afdc8 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -197,6 +197,9 @@ hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr,
 	info.high_limit = arch_get_mmap_end(addr, len, flags);
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
 	info.align_offset = 0;
+
+	svsp_mmap_get_area(&info, flags);
+
 	return vm_unmapped_area(&info);
 }
 
@@ -213,6 +216,9 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
 	info.high_limit = arch_get_mmap_base(addr, current->mm->mmap_base);
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
 	info.align_offset = 0;
+
+	svsp_mmap_get_area(&info, flags);
+
 	addr = vm_unmapped_area(&info);
 
 	/*
@@ -226,6 +232,9 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
 		info.flags = 0;
 		info.low_limit = current->mm->mmap_base;
 		info.high_limit = arch_get_mmap_end(addr, len, flags);
+
+		svsp_mmap_get_area(&info, flags);
+
 		addr = vm_unmapped_area(&info);
 	}
 
@@ -255,6 +264,10 @@ generic_hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 
 	if (addr) {
 		addr = ALIGN(addr, huge_page_size(h));
+
+		if (svsp_mmap_check(addr, len, flags))
+			return -ENOMEM;
+
 		vma = find_vma(mm, addr);
 		if (mmap_end - len >= addr &&
 		    (!vma || addr + len <= vm_start_gap(vma)))
diff --git a/include/linux/mman.h b/include/linux/mman.h
index 40d94411d492..9f0250993ef7 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -8,6 +8,49 @@
 #include <linux/atomic.h>
 #include <uapi/linux/mman.h>
 
+extern int enable_mmap_svsp;
+
+#ifdef CONFIG_ASCEND_SVSP
+#define SVSP_MMAP_BASE	0x180000000000UL
+#define SVSP_MMAP_SIZE	0x080000000000UL
+
+static inline int svsp_mmap_check(unsigned long addr, unsigned long len,
+				  unsigned long flags)
+{
+	if (enable_mmap_svsp && (flags & MAP_SVSP) &&
+	    (addr < (SVSP_MMAP_BASE + SVSP_MMAP_SIZE)) &&
+	    (addr > SVSP_MMAP_BASE))
+		return -EINVAL;
+	else
+		return 0;
+}
+
+static inline void svsp_mmap_get_area(struct vm_unmapped_area_info *info,
+				      unsigned long flags)
+{
+	if (enable_mmap_svsp && (flags & MAP_SVSP)) {
+		info->low_limit = SVSP_MMAP_BASE;
+		info->high_limit = SVSP_MMAP_BASE + SVSP_MMAP_SIZE;
+	} else {
+		info->low_limit = max(info->low_limit, TASK_UNMAPPED_BASE);
+		info->high_limit = min(info->high_limit, SVSP_MMAP_BASE);
+	}
+}
+
+#else
+#define SVSP_MMAP_BASE	0
+#define SVSP_MMAP_SIZE	0
+static inline int svsp_mmap_check(unsigned long addr, unsigned long len,
+				  unsigned long flags)
+{
+	return 0;
+}
+
+static inline void svsp_mmap_get_area(struct vm_unmapped_area_info *info,
+				      unsigned long flags)
+{ }
+#endif
+
 /*
  * Arrange for legacy / undefined architecture specific flags to be
  * ignored by mmap handling code.
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 14e5498efd7a..096c9018d2d5 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -30,6 +30,7 @@
 #define MAP_SYNC		0x080000 /* perform synchronous page faults for the mapping */
 #define MAP_FIXED_NOREPLACE	0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */
+#define MAP_SVSP		0x400000
 #define MAP_UNINITIALIZED 0x4000000	/* For anonymous mmap, memory could be
 					 * uninitialized */
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 5d78ebaeed60..94077c5356eb 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1728,6 +1728,10 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 
 	if (addr) {
 		addr = PAGE_ALIGN(addr);
+
+		if (svsp_mmap_check(addr, len, flags))
+			return -ENOMEM;
+
 		vma = find_vma_prev(mm, addr, &prev);
 		if (mmap_end - len >= addr && addr >= mmap_min_addr &&
 		    (!vma || addr + len <= vm_start_gap(vma)) &&
@@ -1741,6 +1745,9 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 	info.high_limit = mmap_end;
 	info.align_mask = 0;
 	info.align_offset = 0;
+
+	svsp_mmap_get_area(&info, flags);
+
 	return vm_unmapped_area(&info);
 }
 
@@ -1781,6 +1788,10 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 	/* requesting a specific address */
 	if (addr) {
 		addr = PAGE_ALIGN(addr);
+
+		if (svsp_mmap_check(addr, len, flags))
+			return -ENOMEM;
+
 		vma = find_vma_prev(mm, addr, &prev);
 		if (mmap_end - len >= addr && addr >= mmap_min_addr &&
 		    (!vma || addr + len <= vm_start_gap(vma)) &&
@@ -1794,6 +1805,9 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
 	info.align_mask = 0;
 	info.align_offset = 0;
+
+	svsp_mmap_get_area(&info, flags);
+
 	addr = vm_unmapped_area(&info);
 
 	/*
@@ -1807,6 +1821,9 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
 		info.high_limit = mmap_end;
+
+		svsp_mmap_get_area(&info, flags);
+
 		addr = vm_unmapped_area(&info);
 	}
 
@@ -3946,3 +3963,19 @@ static int __meminit init_reserve_notifier(void)
 	return 0;
 }
 subsys_initcall(init_reserve_notifier);
+
+int enable_mmap_svsp __read_mostly;
+
+#ifdef CONFIG_ASCEND_SVSP
+
+static int __init ascend_enable_mmap_svsp(char *s)
+{
+	enable_mmap_svsp = 1;
+
+	pr_info("Ascend enable svsp mmap features\n");
+
+	return 1;
+}
+__setup("enable_mmap_svsp", ascend_enable_mmap_svsp);
+
+#endif
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I931MI
CVE: NA
-----------------------------------------------------------------
The svsp_mm may still be used by hardware (aicore/dvpp) after the process
has exited, just like the normal mm, so it should be released only when
the normal mm is actually released.
Signed-off-by: Yuan Can <yuancan@huawei.com>
---
 include/linux/mm_types.h |  3 +++
 kernel/fork.c            | 12 ++++++++++++
 2 files changed, 15 insertions(+)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4e22c878309a..5c04fddc8621 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -950,6 +950,9 @@ struct mm_struct {
 #endif
 #if IS_ENABLED(CONFIG_ETMEM) && IS_ENABLED(CONFIG_KVM)
 		struct kvm *kvm;
+#endif
+#if defined(CONFIG_ASCEND_SVSP)
+		struct mm_struct *svsp_mm;
 #endif
 	} __randomize_layout;
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 735bc670b49e..af283cec3444 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1322,6 +1322,10 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	mm_init_uprobes_state(mm);
 	hugetlb_count_init(mm);
 
+#ifdef CONFIG_ASCEND_SVSP
+	mm->svsp_mm = NULL;
+#endif
+
 	if (current->mm) {
 		mm->flags = mmf_init_flags(current->mm->flags);
 		mm->def_flags = current->mm->def_flags & VM_INIT_DEF_MASK;
@@ -1379,6 +1383,14 @@ static inline void __mmput(struct mm_struct *mm)
 {
 	VM_BUG_ON(atomic_read(&mm->mm_users));
 
+#ifdef CONFIG_ASCEND_SVSP
+	if (mm->svsp_mm) {
+		exit_mmap(mm->svsp_mm);
+		mmdrop(mm->svsp_mm);
+		mm->svsp_mm = NULL;
+	}
+#endif
+
 	uprobe_clear_state(mm);
 	exit_aio(mm);
 	ksm_exit(mm);
From: Jian Zhang <zhangjian210@huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I931MI
CVE: NA
-----------------------------------------------------------------
When using svm_svsp_bind_core, the expected PASID is the PASID of the
original mm with its highest bit set, so check for and reserve that
expected PASID instead of allocating a new one.
Signed-off-by: Jian Zhang <zhangjian210@huawei.com>
Signed-off-by: Yuan Can <yuancan@huawei.com>
---
 drivers/iommu/iommu-sva.c |  4 ++++
 drivers/iommu/iommu.c     | 36 +++++++++++++++++++++++++++++++++++-
 include/linux/iommu.h     |  3 +++
 3 files changed, 42 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
index b78671a8a914..99b2977ff708 100644
--- a/drivers/iommu/iommu-sva.c
+++ b/drivers/iommu/iommu-sva.c
@@ -25,6 +25,10 @@ static int iommu_sva_alloc_pasid(struct mm_struct *mm, struct device *dev)
 	if (mm_valid_pasid(mm)) {
 		if (mm->pasid >= dev->iommu->max_pasids)
 			ret = -EOVERFLOW;
+#ifdef CONFIG_ASCEND_SVSP
+		else
+			ret = iommu_reserve_svsp_pasid(mm, dev);
+#endif
 		goto out;
 	}
 
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 32f0f6c461b3..594d9e984d57 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3697,20 +3697,54 @@ struct iommu_domain *iommu_sva_domain_alloc(struct device *dev,
 
 	return domain;
 }
 
+#ifdef CONFIG_ASCEND_SVSP
+int iommu_reserve_svsp_pasid(struct mm_struct *mm, struct device *dev)
+{
+	int ret;
+	int max_pasids = dev->iommu->max_pasids;
+
+	/* This should never happen. */
+	if (!max_pasids)
+		return -ENOSPC;
+
+	/* Do nothing if the given PASID not in SVSP pasid range. */
+	if (mm->pasid < max_pasids >> 1)
+		return 0;
+
+	/* Reserve the given SVSP pasid. */
+	ret = ida_alloc_range(&iommu_global_pasid_ida, mm->pasid, mm->pasid,
+			      GFP_KERNEL);
+	if (ret == -ENOMEM)
+		return -ENOSPC;
+
+	return 0;
+}
+#endif
+
 ioasid_t iommu_alloc_global_pasid(struct device *dev)
 {
 	int ret;
+	int max_pasids;
 
 	/* max_pasids == 0 means that the device does not support PASID */
 	if (!dev->iommu->max_pasids)
 		return IOMMU_PASID_INVALID;
 
+	max_pasids = dev->iommu->max_pasids - 1;
+#ifdef CONFIG_ASCEND_SVSP
+	/*
+	 * When SVSP feature is enabled, the pasid space is divided into two
+	 * equal parts, the normal pasids are distributed in the range with
+	 * smaller value.
+	 */
+	max_pasids = max_pasids >> 1;
+#endif
 	/*
 	 * max_pasids is set up by vendor driver based on number of PASID bits
 	 * supported but the IDA allocation is inclusive.
 	 */
 	ret = ida_alloc_range(&iommu_global_pasid_ida, IOMMU_FIRST_GLOBAL_PASID,
-			      dev->iommu->max_pasids - 1, GFP_KERNEL);
+			      max_pasids, GFP_KERNEL);
 	return ret < 0 ? IOMMU_PASID_INVALID : ret;
 }
 EXPORT_SYMBOL_GPL(iommu_alloc_global_pasid);
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index a1be18a687b6..e6f22dfa0205 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -833,6 +833,9 @@ iommu_get_domain_for_dev_pasid(struct device *dev, ioasid_t pasid,
 			       unsigned int type);
 ioasid_t iommu_alloc_global_pasid(struct device *dev);
 void iommu_free_global_pasid(ioasid_t pasid);
+#ifdef CONFIG_ASCEND_SVSP
+int iommu_reserve_svsp_pasid(struct mm_struct *mm, struct device *dev);
+#endif
 #else /* CONFIG_IOMMU_API */
 
 struct iommu_ops {};
FeedBack: The patch(es) you sent to kernel@openeuler.org could not be converted to a PR!
Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/2...
Failed Reason: the cover letter of the patch set is missing
Suggested Solution: please add the cover letter and resend the patch set to the mailing list
driver/svm: shouldn't this subject prefix be changed?
On 2024/2/22 21:25, Yuan Can wrote:
Indeed, I will change it in the next version.
On 2024/2/22 21:52, zhangzekun (A) wrote:
driver/svm: shouldn't this subject prefix be changed?