Jiankang Chen (1):
      iort: Read ACPI configure to get streamid.

Lijun Fang (2):
      svm: add support for allocing memory which is within 4G physical address in svm_mmap
      arm64/ascend: Enable DvPP mmap features for Ascend Platform

Navid Emamdoost (1):
      nbd_genl_status: null check for nla_nest_start

Weilong Chen (4):
      arm64: Add MIDR encoding for HiSilicon Taishan CPUs
      ACPI / APEI: Notify all ras err to driver
      cache: Workaround HiSilicon Taishan DC CVAU
      arm64: Fix conflict for capability when cpu hotplug

Xu Qiang (4):
      serial: amba-pl011: Fix serial port discard interrupt when interrupt signal line of serial port is connected to mbigen.
      irq-gic-v3: Add support to init ts core GICR
      irq-gic-v3-its: It can't be initialized when the GICR had been cut
      irq-gic-v3: Fix too large cpu_count

Zhang Jian (3):
      mm: export collect_procs()
      cpu-feature: Enable Taisan IDC feature for Taishan core version
      iommu: bugfix for missing symbols when build arm_smmu_v3.ko
 Documentation/arch/arm64/silicon-errata.rst |   2 +
 arch/alpha/include/uapi/asm/mman.h          |   1 +
 arch/arm64/Kconfig                          |  38 +++
 arch/arm64/include/asm/cache.h              |   9 +
 arch/arm64/include/asm/cputype.h            |   2 +
 arch/arm64/kernel/cpu_errata.c              |  34 +++
 arch/mips/include/uapi/asm/mman.h           |   1 +
 arch/parisc/include/uapi/asm/mman.h         |   1 +
 arch/powerpc/include/uapi/asm/mman.h        |   1 +
 arch/sparc/include/uapi/asm/mman.h          |   1 +
 arch/xtensa/include/uapi/asm/mman.h         |   1 +
 drivers/acpi/apei/ghes.c                    |   4 +-
 drivers/block/nbd.c                         |   6 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  14 ++
 drivers/irqchip/Kconfig                     |  10 +
 drivers/irqchip/irq-gic-v3-its.c            | 254 +++++++++++++++++++-
 drivers/irqchip/irq-gic-v3.c                | 101 +++++++-
 drivers/tty/serial/Kconfig                  |  18 ++
 drivers/tty/serial/amba-pl011.c             |  66 +++++
 fs/hugetlbfs/inode.c                        |  16 ++
 include/linux/irqchip/arm-gic-v3.h          |   5 +
 include/linux/mm.h                          |   3 +
 include/linux/mman.h                        |  64 +++++
 include/uapi/asm-generic/mman.h             |   1 +
 mm/memory-failure.c                         |   3 +-
 mm/mmap.c                                   |  44 ++++
 26 files changed, 690 insertions(+), 10 deletions(-)
From: Weilong Chen <chenweilong@huawei.com>
ascend inclusion
category: feature
bugzilla: 46922, https://gitee.com/openeuler/kernel/issues/I41AUQ
-------------------------------------
Add the MIDR encoding for HiSilicon Taishan v200 CPUs, which are used in Kunpeng ARM64 server SoCs. TSV200 is the abbreviation of Taishan v200. There are two variants of TSV200: variant 0 and variant 1.
Signed-off-by: Weilong Chen <chenweilong@huawei.com>
---
 arch/arm64/include/asm/cputype.h | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 7c7493cb571f..861d8dac250a 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -119,6 +119,7 @@
 #define FUJITSU_CPU_PART_A64FX		0x001

 #define HISI_CPU_PART_TSV110		0xD01
+#define HISI_CPU_PART_TSV200		0xD02

 #define APPLE_CPU_PART_M1_ICESTORM	0x022
 #define APPLE_CPU_PART_M1_FIRESTORM	0x023
@@ -180,6 +181,7 @@
 #define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL)
 #define MIDR_FUJITSU_A64FX MIDR_CPU_MODEL(ARM_CPU_IMP_FUJITSU, FUJITSU_CPU_PART_A64FX)
 #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110)
+#define MIDR_HISI_TSV200 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV200)
 #define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM)
 #define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM)
 #define MIDR_APPLE_M1_ICESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_PRO)
From: Navid Emamdoost <navid.emamdoost@gmail.com>
maillist inclusion
category: bugfix
bugzilla: 185744 https://gitee.com/openeuler/kernel/issues/I4DDEL
CVE: CVE-2019-16089
Reference: https://lore.kernel.org/lkml/20190911164013.27364-1-navid.emamdoost@gmail.co...
---------------------------
nla_nest_start() may fail and return NULL. Insert the missing NULL check; the errno is chosen to match other call sites in the same file.

Update: removed extra newline.
v3 Update: added release of the reply; thanks to Michal Kubecek for pointing it out.
Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com>
Reviewed-by: Michal Kubecek <mkubecek@suse.cz>
---
 drivers/block/nbd.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 855fdf5c3b4e..b99c169890d5 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -2408,6 +2408,12 @@ static int nbd_genl_status(struct sk_buff *skb, struct genl_info *info)
 	}

 	dev_list = nla_nest_start_noflag(reply, NBD_ATTR_DEVICE_LIST);
+	if (!dev_list) {
+		nlmsg_free(reply);
+		ret = -EMSGSIZE;
+		goto out;
+	}
+
 	if (index == -1) {
 		ret = idr_for_each(&nbd_index_idr, &status_cb, reply);
 		if (ret) {
From: Lijun Fang <fanglijun3@huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4JMM0
CVE: NA
-------------------------------------------------
Add memory alloc and release functions to svm; the physical address of the allocated memory is guaranteed to be below 4 GB.
For example:

	/* alloc */
	fd = open("/dev/svm0", ...);
	mmap(0, ALLOC_SIZE, ..., MAP_PA32BIT, fd, 0);

	/* free */
	ioctl(fd, SVM_IOCTL_RELEASE_PHYS32, ...);
	close(fd);
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
---
 arch/alpha/include/uapi/asm/mman.h   | 1 +
 arch/mips/include/uapi/asm/mman.h    | 1 +
 arch/parisc/include/uapi/asm/mman.h  | 1 +
 arch/powerpc/include/uapi/asm/mman.h | 1 +
 arch/sparc/include/uapi/asm/mman.h   | 1 +
 arch/xtensa/include/uapi/asm/mman.h  | 1 +
 include/linux/mm.h                   | 1 +
 include/uapi/asm-generic/mman.h      | 1 +
 mm/mmap.c                            | 4 ++++
 9 files changed, 12 insertions(+)
diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h index 763929e814e9..85fcf44e7cc5 100644 --- a/arch/alpha/include/uapi/asm/mman.h +++ b/arch/alpha/include/uapi/asm/mman.h @@ -31,6 +31,7 @@ #define MAP_STACK 0x80000 /* give out an address that is best suited for process/thread stacks */ #define MAP_HUGETLB 0x100000 /* create a huge page mapping */ #define MAP_FIXED_NOREPLACE 0x200000/* MAP_FIXED which doesn't unmap underlying mapping */ +#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
#define MS_ASYNC 1 /* sync memory asynchronously */ #define MS_SYNC 2 /* synchronous memory sync */ diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h index c6e1fc77c996..31f92152bd17 100644 --- a/arch/mips/include/uapi/asm/mman.h +++ b/arch/mips/include/uapi/asm/mman.h @@ -49,6 +49,7 @@ #define MAP_STACK 0x40000 /* give out an address that is best suited for process/thread stacks */ #define MAP_HUGETLB 0x80000 /* create a huge page mapping */ #define MAP_FIXED_NOREPLACE 0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */ +#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
/* * Flags for msync diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h index 68c44f99bc93..4eaa9f206c59 100644 --- a/arch/parisc/include/uapi/asm/mman.h +++ b/arch/parisc/include/uapi/asm/mman.h @@ -26,6 +26,7 @@ #define MAP_HUGETLB 0x80000 /* create a huge page mapping */ #define MAP_FIXED_NOREPLACE 0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */ #define MAP_UNINITIALIZED 0 /* uninitialized anonymous mmap */ +#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
#define MS_SYNC 1 /* synchronous memory sync */ #define MS_ASYNC 2 /* sync memory asynchronously */ diff --git a/arch/powerpc/include/uapi/asm/mman.h b/arch/powerpc/include/uapi/asm/mman.h index c0c737215b00..f0eb04780148 100644 --- a/arch/powerpc/include/uapi/asm/mman.h +++ b/arch/powerpc/include/uapi/asm/mman.h @@ -25,6 +25,7 @@ #define MCL_CURRENT 0x2000 /* lock all currently mapped pages */ #define MCL_FUTURE 0x4000 /* lock all additions to address space */ #define MCL_ONFAULT 0x8000 /* lock all pages that are faulted in */ +#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
/* Override any generic PKEY permission defines */ #define PKEY_DISABLE_EXECUTE 0x4 diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h index cec9f4109687..8caf19c604d0 100644 --- a/arch/sparc/include/uapi/asm/mman.h +++ b/arch/sparc/include/uapi/asm/mman.h @@ -21,5 +21,6 @@ #define MCL_CURRENT 0x2000 /* lock all currently mapped pages */ #define MCL_FUTURE 0x4000 /* lock all additions to address space */ #define MCL_ONFAULT 0x8000 /* lock all pages that are faulted in */ +#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
#endif /* _UAPI__SPARC_MMAN_H__ */ diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h index 1ff0c858544f..b288b521ae1c 100644 --- a/arch/xtensa/include/uapi/asm/mman.h +++ b/arch/xtensa/include/uapi/asm/mman.h @@ -56,6 +56,7 @@ #define MAP_STACK 0x40000 /* give out an address that is best suited for process/thread stacks */ #define MAP_HUGETLB 0x80000 /* create a huge page mapping */ #define MAP_FIXED_NOREPLACE 0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */ +#define MAP_PA32BIT 0x400000 /* physical address is within 4G */ #define MAP_UNINITIALIZED 0x4000000 /* For anonymous mmap, memory could be * uninitialized */
diff --git a/include/linux/mm.h b/include/linux/mm.h index 8cf86b56aba5..35ead37aa0f9 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -312,6 +312,7 @@ extern unsigned int kobjsize(const void *objp); #define VM_HUGEPAGE 0x20000000 /* MADV_HUGEPAGE marked this vma */ #define VM_NOHUGEPAGE 0x40000000 /* MADV_NOHUGEPAGE marked this vma */ #define VM_MERGEABLE 0x80000000 /* KSM may merge identical pages */ +#define VM_PA32BIT 0x400000000 /* Physical address is within 4G */
#ifdef CONFIG_ARCH_USES_HIGH_VMA_FLAGS #define VM_HIGH_ARCH_BIT_0 32 /* bit only usable on 64-bit architectures */ diff --git a/include/uapi/asm-generic/mman.h b/include/uapi/asm-generic/mman.h index 57e8195d0b53..344bb9b090a7 100644 --- a/include/uapi/asm-generic/mman.h +++ b/include/uapi/asm-generic/mman.h @@ -9,6 +9,7 @@ #define MAP_EXECUTABLE 0x1000 /* mark it as an executable */ #define MAP_LOCKED 0x2000 /* pages are locked */ #define MAP_NORESERVE 0x4000 /* don't check for reservations */ +#define MAP_PA32BIT 0x400000 /* physical address is within 4G */
/* * Bits [26:31] are reserved, see asm-generic/hugetlb_encode.h diff --git a/mm/mmap.c b/mm/mmap.c index 9e018d8dd7d6..469f2c0ec43f 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1262,6 +1262,10 @@ unsigned long do_mmap(struct file *file, unsigned long addr, pkey = 0; }
+ /* Physical address is within 4G */ + if (flags & MAP_PA32BIT) + vm_flags |= VM_PA32BIT; + /* Do simple checking here so the lower-level routines won't have * to. we assume access permissions have been handled by the open * of the memory object, so we don't do any here.
From: Weilong Chen <chenweilong@huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4CMAR
CVE: NA
-------------------------------------------------
Customization: deliver all error types to the driver, since the driver needs to process the errors in process context.
Signed-off-by: Weilong Chen <chenweilong@huawei.com>
---
 drivers/acpi/apei/ghes.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 63ad0541db38..49c9d1b35065 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -693,11 +693,13 @@ static bool ghes_do_proc(struct ghes *ghes,
 		} else {
 			void *err = acpi_hest_get_payload(gdata);

-			ghes_defer_non_standard_event(gdata, sev);
 			log_non_standard_event(sec_type, fru_id, fru_text,
 					       sec_sev, err,
 					       gdata->error_data_length);
 		}
+
+		/* Customization deliver all types error to driver. */
+		ghes_defer_non_standard_event(gdata, sev);
 	}

 	return queued;
From: Lijun Fang <fanglijun3@huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4M24Q
CVE: NA
-------------------------------------------------
DvPP means Davinci Video Pre-Processor. Add new config options ASCEND_FEATURES and ASCEND_DVPP_MMAP to enable the DvPP features on the Ascend platform.

The DvPP can only use a limited range of virtual addresses; for example, the Ascend310/910 can only use a 4 GB range. Add a new mmap flag named MAP_DVPP so the DvPP processor can be used via the mmap syscall; the new flag is only valid on the Ascend platform.
Memory for the DvPP should be allocated like this:
addr = mmap(NULL, length, PROT_READ, MAP_ANONYMOUS | MAP_DVPP, -1, 0);
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
---
 arch/arm64/Kconfig   | 29 ++++++++++++++++++++
 fs/hugetlbfs/inode.c | 16 +++++++++++
 include/linux/mman.h | 64 ++++++++++++++++++++++++++++++++++++++++++++
 mm/mmap.c            | 40 +++++++++++++++++++++++++++
 4 files changed, 149 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 6062a52a084f..e36340d804dd 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -2206,6 +2206,35 @@ config UNWIND_PATCH_PAC_INTO_SCS select UNWIND_TABLES select DYNAMIC_SCS
+menuconfig ASCEND_FEATURES + bool "Support Ascend Features" + depends on ARM64 + help + The Ascend chip use the Hisilicon DaVinci architecture, and mainly + focus on AI and machine leanring area, contains many external features. + + Enable this config to enable selective list of these features. + + If unsure, say Y + +if ASCEND_FEATURES + +config ASCEND_DVPP_MMAP + bool "Enable support for the DvPP mmap" + default y + help + The DvPP means Davinci Video Pre-Processor, are mainly consist of VDEC + (Video Decode), VENC(Video Encode), JPEG D/E (Decode/Encode), PNGD + (PNG Decode) and VPC (Video Process) processors. + + The DvPP could only use a limit range of virtual address, just like the + Ascend310/910 could only use the limit range of virtual address (default + 4 GB), so add a new mmap flag which is named MAP_DVPP to allocate the + special memory for DvPP processor, the new flag is only valid for Ascend + platform. + +endif + endmenu # "Kernel Features"
menu "Boot options" diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c index b0edf5fe8132..ce11ecbafc82 100644 --- a/fs/hugetlbfs/inode.c +++ b/fs/hugetlbfs/inode.c @@ -196,6 +196,10 @@ hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr, info.high_limit = arch_get_mmap_end(addr, len, flags); info.align_mask = PAGE_MASK & ~huge_page_mask(h); info.align_offset = 0; + + if (enable_mmap_dvpp) + dvpp_mmap_get_area(&info, flags); + return vm_unmapped_area(&info); }
@@ -212,6 +216,10 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr, info.high_limit = arch_get_mmap_base(addr, current->mm->mmap_base); info.align_mask = PAGE_MASK & ~huge_page_mask(h); info.align_offset = 0; + + if (enable_mmap_dvpp) + dvpp_mmap_get_area(&info, flags); + addr = vm_unmapped_area(&info);
/* @@ -225,6 +233,10 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr, info.flags = 0; info.low_limit = current->mm->mmap_base; info.high_limit = arch_get_mmap_end(addr, len, flags); + + if (enable_mmap_dvpp) + dvpp_mmap_get_area(&info, flags); + addr = vm_unmapped_area(&info); }
@@ -254,6 +266,10 @@ generic_hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
if (addr) { addr = ALIGN(addr, huge_page_size(h)); + + if (dvpp_mmap_check(addr, len, flags)) + return -ENOMEM; + vma = find_vma(mm, addr); if (mmap_end - len >= addr && (!vma || addr + len <= vm_start_gap(vma))) diff --git a/include/linux/mman.h b/include/linux/mman.h index 40d94411d492..e76d887094f7 100644 --- a/include/linux/mman.h +++ b/include/linux/mman.h @@ -8,6 +8,70 @@ #include <linux/atomic.h> #include <uapi/linux/mman.h>
+extern int enable_mmap_dvpp; +/* + * Enable MAP_32BIT for Ascend Platform + */ +#ifdef CONFIG_ASCEND_DVPP_MMAP + +#define MAP_DVPP 0x200 + +#define DVPP_MMAP_SIZE (0x100000000UL) +#define DVPP_MMAP_BASE (TASK_SIZE - DVPP_MMAP_SIZE) + +static inline int dvpp_mmap_check(unsigned long addr, unsigned long len, + unsigned long flags) +{ + if (enable_mmap_dvpp && (flags & MAP_DVPP) && + (addr < DVPP_MMAP_BASE + DVPP_MMAP_SIZE) && + (addr > DVPP_MMAP_BASE)) + return -EINVAL; + else + return 0; +} + +static inline void dvpp_mmap_get_area(struct vm_unmapped_area_info *info, + unsigned long flags) +{ + if (flags & MAP_DVPP) { + info->low_limit = DVPP_MMAP_BASE; + info->high_limit = DVPP_MMAP_BASE + DVPP_MMAP_SIZE; + } else { + info->low_limit = max(info->low_limit, TASK_UNMAPPED_BASE); + info->high_limit = min(info->high_limit, DVPP_MMAP_BASE); + } +} + +static inline int dvpp_mmap_zone(unsigned long addr) +{ + if (addr >= DVPP_MMAP_BASE) + return 1; + else + return 0; +} +#else + +#define MAP_DVPP (0) + +static inline int dvpp_mmap_check(unsigned long addr, unsigned long len, + unsigned long flags) +{ + return 0; +} + +static inline void dvpp_mmap_get_area(struct vm_unmapped_area_info *info, + unsigned long flags) +{ +} + +static inline int dvpp_mmap_zone(unsigned long addr) { return 0; } + +#define DVPP_MMAP_BASE (0) + +#define DVPP_MMAP_SIZE (0) + +#endif + /* * Arrange for legacy / undefined architecture specific flags to be * ignored by mmap handling code. diff --git a/mm/mmap.c b/mm/mmap.c index 469f2c0ec43f..f1ce05935c6e 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1709,6 +1709,10 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
if (addr) { addr = PAGE_ALIGN(addr); + + if (dvpp_mmap_check(addr, len, flags)) + return -ENOMEM; + vma = find_vma_prev(mm, addr, &prev); if (mmap_end - len >= addr && addr >= mmap_min_addr && (!vma || addr + len <= vm_start_gap(vma)) && @@ -1722,6 +1726,10 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr, info.high_limit = mmap_end; info.align_mask = 0; info.align_offset = 0; + + if (enable_mmap_dvpp) + dvpp_mmap_get_area(&info, flags); + return vm_unmapped_area(&info); }
@@ -1759,6 +1767,10 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr, /* requesting a specific address */ if (addr) { addr = PAGE_ALIGN(addr); + + if (dvpp_mmap_check(addr, len, flags)) + return -ENOMEM; + vma = find_vma_prev(mm, addr, &prev); if (mmap_end - len >= addr && addr >= mmap_min_addr && (!vma || addr + len <= vm_start_gap(vma)) && @@ -1772,6 +1784,10 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr, info.high_limit = arch_get_mmap_base(addr, mm->mmap_base); info.align_mask = 0; info.align_offset = 0; + + if (enable_mmap_dvpp) + dvpp_mmap_get_area(&info, flags); + addr = vm_unmapped_area(&info);
/* @@ -1785,6 +1801,10 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr, info.flags = 0; info.low_limit = TASK_UNMAPPED_BASE; info.high_limit = mmap_end; + + if (enable_mmap_dvpp) + dvpp_mmap_get_area(&info, flags); + addr = vm_unmapped_area(&info); }
@@ -3913,3 +3933,23 @@ static int __meminit init_reserve_notifier(void) return 0; } subsys_initcall(init_reserve_notifier); + + +/* + * Enable the MAP_32BIT (mmaps and hugetlb). + */ +int enable_mmap_dvpp __read_mostly; + +#ifdef CONFIG_ASCEND_DVPP_MMAP + +static int __init ascend_enable_mmap_dvpp(char *s) +{ + enable_mmap_dvpp = 1; + + pr_info("Ascend enable dvpp mmap features\n"); + + return 1; +} +__setup("enable_mmap_dvpp", ascend_enable_mmap_dvpp); + +#endif
From: Xu Qiang <xuqiang36@huawei.com>
ascend inclusion
category: bugfix
Bugzilla: https://gitee.com/openeuler/kernel/issues/I4K2U5
CVE: N/A
---------------------------------------
When designing the Ascend chip, HiSilicon connected the serial port interrupt signal lines to the mbigen device; the mbigen triggers the SPI interrupt by writing the GICD_SETSPI_NSR register. This can cause the serial port to drop interrupts.
Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
---
 drivers/tty/serial/Kconfig      | 18 +++++++++
 drivers/tty/serial/amba-pl011.c | 66 +++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)
diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig index bdc568a4ab66..bfabe47e3eb5 100644 --- a/drivers/tty/serial/Kconfig +++ b/drivers/tty/serial/Kconfig @@ -85,6 +85,24 @@ config SERIAL_EARLYCON_SEMIHOST This is enabled with "earlycon=smh" on the kernel command line. The console is enabled when early_param is processed.
+if ASCEND_FEATURES + +config SERIAL_ATTACHED_MBIGEN + bool "Serial port interrupt signal lines connected to the mbigen" + depends on SERIAL_AMBA_PL011=y + default n + help + Say Y here when the interrupt signal line of the serial port is + connected to the mbigne. The mbigen device has the function of + clearing interrupts automatically. However, the interrupt processing + function of the serial port driver may process multiple interrupts + at a time. The mbigen device cannot adapt to this scenario. + As a result, interrupts are lost.Because it maybe discard interrupt. + + If unsure, say N. + +endif + config SERIAL_EARLYCON_RISCV_SBI bool "Early console using RISC-V SBI" depends on RISCV_SBI_V01 diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c index 3dc9b0fcab1c..49d4aadb04e4 100644 --- a/drivers/tty/serial/amba-pl011.c +++ b/drivers/tty/serial/amba-pl011.c @@ -1548,6 +1548,65 @@ static void check_apply_cts_event_workaround(struct uart_amba_port *uap) pl011_read(uap, REG_ICR); }
+#ifdef CONFIG_SERIAL_ATTACHED_MBIGEN +struct workaround_oem_info { + char oem_id[ACPI_OEM_ID_SIZE + 1]; + char oem_table_id[ACPI_OEM_TABLE_ID_SIZE + 1]; + u32 oem_revision; +}; + +static bool pl011_enable_hisi_wkrd; +static struct workaround_oem_info pl011_wkrd_info[] = { + { + .oem_id = "HISI ", + .oem_table_id = "HIP08 ", + .oem_revision = 0x300, + }, { + .oem_id = "HISI ", + .oem_table_id = "HIP08 ", + .oem_revision = 0x301, + }, { + .oem_id = "HISI ", + .oem_table_id = "HIP08 ", + .oem_revision = 0x400, + }, { + .oem_id = "HISI ", + .oem_table_id = "HIP08 ", + .oem_revision = 0x401, + }, { + .oem_id = "HISI ", + .oem_table_id = "HIP08 ", + .oem_revision = 0x402, + } +}; + +static void pl011_check_hisi_workaround(void) +{ + struct acpi_table_header *tbl; + acpi_status status = AE_OK; + int i; + + status = acpi_get_table(ACPI_SIG_MADT, 0, &tbl); + if (ACPI_FAILURE(status) || !tbl) + return; + + for (i = 0; i < ARRAY_SIZE(pl011_wkrd_info); i++) { + if (!memcmp(pl011_wkrd_info[i].oem_id, tbl->oem_id, ACPI_OEM_ID_SIZE) && + !memcmp(pl011_wkrd_info[i].oem_table_id, tbl->oem_table_id, ACPI_OEM_TABLE_ID_SIZE) && + pl011_wkrd_info[i].oem_revision == tbl->oem_revision) { + pl011_enable_hisi_wkrd = true; + break; + } + } +} + +#else + +#define pl011_enable_hisi_wkrd 0 +static inline void pl011_check_hisi_workaround(void){ } + +#endif + static irqreturn_t pl011_int(int irq, void *dev_id) { struct uart_amba_port *uap = dev_id; @@ -1585,6 +1644,11 @@ static irqreturn_t pl011_int(int irq, void *dev_id) handled = 1; }
+ if (pl011_enable_hisi_wkrd) { + pl011_write(0, uap, REG_IMSC); + pl011_write(uap->im, uap, REG_IMSC); + } + spin_unlock_irqrestore(&uap->port.lock, flags);
return IRQ_RETVAL(handled); @@ -1762,6 +1826,8 @@ static int pl011_hwinit(struct uart_port *port) if (plat->init) plat->init(); } + + pl011_check_hisi_workaround(); return 0; }
From: Zhang Jian <zhangjian210@huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I53VVE
CVE: NA
-------------------------------------------------
Collect the processes that have the page mapped via collect_procs().

@page: if the page is part of a hugepage/compound page, compound_head() must be used to find its head page (to prevent a kernel panic), and the page must be locked.

@to_kill: the function returns a linked list; once we are done with the list, we must kfree() it.

@force_early: to find all processes, set this to true; if it is false, the function only returns processes that have the PF_MCE_PROCESS or PF_MCE_EARLY flag set.

Limits: if force_early is true, sysctl_memory_failure_early_kill has no effect. If force_early is false, no process has the PF_MCE_PROCESS or PF_MCE_EARLY flag set, and sysctl_memory_failure_early_kill is enabled, the function returns all tasks regardless of whether they have the PF_MCE_PROCESS or PF_MCE_EARLY flag.
Signed-off-by: Zhang Jian <zhangjian210@huawei.com>
---
 include/linux/mm.h  | 2 ++
 mm/memory-failure.c | 3 ++-
 2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h index 35ead37aa0f9..1876363c0622 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3843,6 +3843,8 @@ extern int unpoison_memory(unsigned long pfn); extern void shake_page(struct page *p); extern atomic_long_t num_poisoned_pages __read_mostly; extern int soft_offline_page(unsigned long pfn, int flags); +extern void collect_procs(struct page *page, struct list_head *tokill, + int force_early); #ifdef CONFIG_MEMORY_FAILURE /* * Sysfs entries for memory failure handling statistics. diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 4d6e43c88489..fa0173db16c7 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -704,7 +704,7 @@ static void collect_procs_fsdax(struct page *page, /* * Collect the processes who have the corrupted page mapped to kill. */ -static void collect_procs(struct page *page, struct list_head *tokill, +void collect_procs(struct page *page, struct list_head *tokill, int force_early) { if (!page->mapping) @@ -716,6 +716,7 @@ static void collect_procs(struct page *page, struct list_head *tokill, else collect_procs_file(page, tokill, force_early); } +EXPORT_SYMBOL_GPL(collect_procs);
struct hwpoison_walk { struct to_kill tk;
From: Xu Qiang <xuqiang36@huawei.com>
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I554T5
CVE: NA
------------
On the Ascend platform, GICRs that are not managed by the OS also need to be initialized by the OS.
Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
---
 drivers/irqchip/Kconfig            |  10 ++
 drivers/irqchip/irq-gic-v3-its.c   | 217 ++++++++++++++++++++++++++++-
 drivers/irqchip/irq-gic-v3.c       | 101 +++++++++++++-
 include/linux/irqchip/arm-gic-v3.h |   5 +
 4 files changed, 325 insertions(+), 8 deletions(-)
diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig index f7149d0f3d45..3ad905633b8c 100644 --- a/drivers/irqchip/Kconfig +++ b/drivers/irqchip/Kconfig @@ -159,6 +159,16 @@ config HISILICON_IRQ_MBIGEN select ARM_GIC_V3 select ARM_GIC_V3_ITS
+if ASCEND_FEATURES + +config ASCEND_INIT_ALL_GICR + bool "Enable init all GICR for Ascend" + depends on ARM_GIC_V3 + depends on ARM_GIC_V3_ITS + default n + +endif + config IMGPDC_IRQ bool select GENERIC_IRQ_CHIP diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c index a8c89df1a997..3e270f7d000e 100644 --- a/drivers/irqchip/irq-gic-v3-its.c +++ b/drivers/irqchip/irq-gic-v3-its.c @@ -193,6 +193,14 @@ static DEFINE_RAW_SPINLOCK(vmovp_lock);
static DEFINE_IDA(its_vpeid_ida);
+#ifdef CONFIG_ASCEND_INIT_ALL_GICR +static bool init_all_gicr; +static int nr_gicr; +#else +#define init_all_gicr false +#define nr_gicr 0 +#endif + #define gic_data_rdist() (raw_cpu_ptr(gic_rdists->rdist)) #define gic_data_rdist_cpu(cpu) (per_cpu_ptr(gic_rdists->rdist, cpu)) #define gic_data_rdist_rd_base() (gic_data_rdist()->rd_base) @@ -1665,6 +1673,26 @@ static int its_select_cpu(struct irq_data *d, return cpu; }
+#ifdef CONFIG_ASCEND_INIT_ALL_GICR +static int its_select_cpu_other(const struct cpumask *mask_val) +{ + int cpu; + + if (!init_all_gicr) + return -EINVAL; + + cpu = find_first_bit(cpumask_bits(mask_val), NR_CPUS); + if (cpu >= nr_gicr) + cpu = -EINVAL; + return cpu; +} +#else +static int its_select_cpu_other(const struct cpumask *mask_val) +{ + return -EINVAL; +} +#endif + static int its_set_affinity(struct irq_data *d, const struct cpumask *mask_val, bool force) { @@ -1686,6 +1714,9 @@ static int its_set_affinity(struct irq_data *d, const struct cpumask *mask_val, cpu = cpumask_pick_least_loaded(d, mask_val);
if (cpu < 0 || cpu >= nr_cpu_ids) + cpu = its_select_cpu_other(mask_val); + + if (cpu < 0) goto err;
/* don't set the affinity when the target cpu is same as current one */ @@ -2951,8 +2982,12 @@ static int allocate_vpe_l1_table(void) static int its_alloc_collections(struct its_node *its) { int i; + int cpu_nr = nr_cpu_ids; + + if (init_all_gicr) + cpu_nr = CONFIG_NR_CPUS;
- its->collections = kcalloc(nr_cpu_ids, sizeof(*its->collections), + its->collections = kcalloc(cpu_nr, sizeof(*its->collections), GFP_KERNEL); if (!its->collections) return -ENOMEM; @@ -3263,6 +3298,186 @@ static void its_cpu_init_collections(void) raw_spin_unlock(&its_lock); }
+#ifdef CONFIG_ASCEND_INIT_ALL_GICR +void its_set_gicr_nr(int nr) +{ + nr_gicr = nr; +} + +static int __init its_enable_init_all_gicr(char *str) +{ + init_all_gicr = true; + return 1; +} + +__setup("init_all_gicr", its_enable_init_all_gicr); + +bool its_init_all_gicr(void) +{ + return init_all_gicr; +} + +static void its_cpu_init_lpis_others(void __iomem *rbase, int cpu) +{ + struct page *pend_page; + phys_addr_t paddr; + u64 val, tmp; + + if (!init_all_gicr) + return; + + val = readl_relaxed(rbase + GICR_CTLR); + if ((gic_rdists->flags & RDIST_FLAGS_RD_TABLES_PREALLOCATED) && + (val & GICR_CTLR_ENABLE_LPIS)) { + /* + * Check that we get the same property table on all + * RDs. If we don't, this is hopeless. + */ + paddr = gicr_read_propbaser(rbase + GICR_PROPBASER); + paddr &= GENMASK_ULL(51, 12); + if (WARN_ON(gic_rdists->prop_table_pa != paddr)) + add_taint(TAINT_CRAP, LOCKDEP_STILL_OK); + + paddr = gicr_read_pendbaser(rbase + GICR_PENDBASER); + paddr &= GENMASK_ULL(51, 16); + + WARN_ON(!gic_check_reserved_range(paddr, LPI_PENDBASE_SZ)); + + goto out; + } + + /* If we didn't allocate the pending table yet, do it now */ + pend_page = its_allocate_pending_table(GFP_NOWAIT); + if (!pend_page) { + pr_err("Failed to allocate PENDBASE for GICR:%p\n", rbase); + return; + } + + paddr = page_to_phys(pend_page); + pr_info("GICR:%p using LPI pending table @%pa\n", + rbase, &paddr); + + WARN_ON(gic_reserve_range(paddr, LPI_PENDBASE_SZ)); + + /* Disable LPIs */ + val = readl_relaxed(rbase + GICR_CTLR); + val &= ~GICR_CTLR_ENABLE_LPIS; + writel_relaxed(val, rbase + GICR_CTLR); + + /* + * Make sure any change to the table is observable by the GIC. 
+ */ + dsb(sy); + + /* set PROPBASE */ + val = (gic_rdists->prop_table_pa | + GICR_PROPBASER_InnerShareable | + GICR_PROPBASER_RaWaWb | + ((LPI_NRBITS - 1) & GICR_PROPBASER_IDBITS_MASK)); + + gicr_write_propbaser(val, rbase + GICR_PROPBASER); + tmp = gicr_read_propbaser(rbase + GICR_PROPBASER); + + if ((tmp ^ val) & GICR_PROPBASER_SHAREABILITY_MASK) { + if (!(tmp & GICR_PROPBASER_SHAREABILITY_MASK)) { + /* + * The HW reports non-shareable, we must + * remove the cacheability attributes as + * well. + */ + val &= ~(GICR_PROPBASER_SHAREABILITY_MASK | + GICR_PROPBASER_CACHEABILITY_MASK); + val |= GICR_PROPBASER_nC; + gicr_write_propbaser(val, rbase + GICR_PROPBASER); + } + pr_info_once("GIC: using cache flushing for LPI property table\n"); + gic_rdists->flags |= RDIST_FLAGS_PROPBASE_NEEDS_FLUSHING; + } + + /* set PENDBASE */ + val = (page_to_phys(pend_page) | + GICR_PENDBASER_InnerShareable | + GICR_PENDBASER_RaWaWb); + + gicr_write_pendbaser(val, rbase + GICR_PENDBASER); + tmp = gicr_read_pendbaser(rbase + GICR_PENDBASER); + + if (!(tmp & GICR_PENDBASER_SHAREABILITY_MASK)) { + /* + * The HW reports non-shareable, we must remove the + * cacheability attributes as well. + */ + val &= ~(GICR_PENDBASER_SHAREABILITY_MASK | + GICR_PENDBASER_CACHEABILITY_MASK); + val |= GICR_PENDBASER_nC; + gicr_write_pendbaser(val, rbase + GICR_PENDBASER); + } + + /* Enable LPIs */ + val = readl_relaxed(rbase + GICR_CTLR); + val |= GICR_CTLR_ENABLE_LPIS; + writel_relaxed(val, rbase + GICR_CTLR); + + /* Make sure the GIC has seen the above */ + dsb(sy); +out: + pr_info("GICv3: CPU%d: using %s LPI pending table @%pa\n", + cpu, pend_page ? 
"allocated" : "reserved", &paddr); +} + +static void its_cpu_init_collection_others(void __iomem *rbase, + phys_addr_t phys_base, int cpu) +{ + struct its_node *its; + + if (!init_all_gicr) + return; + + raw_spin_lock(&its_lock); + + list_for_each_entry(its, &its_nodes, entry) { + u64 target; + + /* + * We now have to bind each collection to its target + * redistributor. + */ + if (gic_read_typer(its->base + GITS_TYPER) & GITS_TYPER_PTA) { + /* + * This ITS wants the physical address of the + * redistributor. + */ + target = phys_base; + } else { + /* + * This ITS wants a linear CPU number. + */ + target = gic_read_typer(rbase + GICR_TYPER); + target = GICR_TYPER_CPU_NUMBER(target) << 16; + } + + /* Perform collection mapping */ + its->collections[cpu].target_address = target; + its->collections[cpu].col_id = cpu; + + its_send_mapc(its, &its->collections[cpu], 1); + its_send_invall(its, &its->collections[cpu]); + } + + raw_spin_unlock(&its_lock); +} + +int its_cpu_init_others(void __iomem *base, phys_addr_t phys_base, int cpu) +{ + if (!list_empty(&its_nodes)) { + its_cpu_init_lpis_others(base, cpu); + its_cpu_init_collection_others(base, phys_base, cpu); + } + + return 0; +} +#endif + static struct its_device *its_find_device(struct its_node *its, u32 dev_id) { struct its_device *its_dev = NULL, *tmp; diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c index f59ac9586b7b..003043bb0d68 100644 --- a/drivers/irqchip/irq-gic-v3.c +++ b/drivers/irqchip/irq-gic-v3.c @@ -281,17 +281,11 @@ static u64 __maybe_unused gic_read_iar(void) } #endif
-static void gic_enable_redist(bool enable)
+static void __gic_enable_redist(void __iomem *rbase, bool enable)
 {
-	void __iomem *rbase;
 	u32 count = 1000000;	/* 1s! */
 	u32 val;

-	if (gic_data.flags & FLAGS_WORKAROUND_GICR_WAKER_MSM8996)
-		return;
-
-	rbase = gic_data_rdist_rd_base();
-
 	val = readl_relaxed(rbase + GICR_WAKER);
 	if (enable)
 		/* Wake up this CPU redistributor */
@@ -318,6 +312,14 @@ static void gic_enable_redist(bool enable)
 			enable ? "wakeup" : "sleep");
 }

+static void gic_enable_redist(bool enable)
+{
+	if (gic_data.flags & FLAGS_WORKAROUND_GICR_WAKER_MSM8996)
+		return;
+
+	__gic_enable_redist(gic_data_rdist_rd_base(), enable);
+}
+
 /*
  * Routines to disable, enable, EOI and route interrupts
  */
@@ -1288,6 +1290,89 @@ static void gic_cpu_init(void)
 	gic_cpu_sys_reg_init();
 }

+#ifdef CONFIG_ASCEND_INIT_ALL_GICR
+static int __gic_compute_nr_gicr(struct redist_region *region, void __iomem *ptr)
+{
+	static int gicr_nr = 0;
+
+	its_set_gicr_nr(++gicr_nr);
+
+	return 1;
+}
+
+static void gic_compute_nr_gicr(void)
+{
+	gic_iterate_rdists(__gic_compute_nr_gicr);
+}
+
+static int gic_rdist_cpu(void __iomem *ptr, unsigned int cpu)
+{
+	unsigned long mpidr = cpu_logical_map(cpu);
+	u64 typer;
+	u32 aff;
+
+	/*
+	 * Convert affinity to a 32bit value that can be matched to
+	 * GICR_TYPER bits [63:32].
+	 */
+	aff = (MPIDR_AFFINITY_LEVEL(mpidr, 3) << 24 |
+	       MPIDR_AFFINITY_LEVEL(mpidr, 2) << 16 |
+	       MPIDR_AFFINITY_LEVEL(mpidr, 1) << 8 |
+	       MPIDR_AFFINITY_LEVEL(mpidr, 0));
+
+	typer = gic_read_typer(ptr + GICR_TYPER);
+	if ((typer >> 32) == aff)
+		return 0;
+
+	return 1;
+}
+
+static int gic_rdist_cpus(void __iomem *ptr)
+{
+	unsigned int i;
+
+	for (i = 0; i < nr_cpu_ids; i++) {
+		if (gic_rdist_cpu(ptr, i) == 0)
+			return 0;
+	}
+
+	return 1;
+}
+
+static int gic_cpu_init_other(struct redist_region *region, void __iomem *ptr)
+{
+	u64 offset;
+	phys_addr_t phys_base;
+	static int cpu = 0;
+
+	if (cpu == 0)
+		cpu = nr_cpu_ids;
+
+	if (gic_rdist_cpus(ptr) == 1) {
+		offset = ptr - region->redist_base;
+		phys_base = region->phys_base + offset;
+		__gic_enable_redist(ptr, true);
+		if (gic_dist_supports_lpis())
+			its_cpu_init_others(ptr, phys_base, cpu);
+		cpu++;
+	}
+
+	return 1;
+}
+
+static void gic_cpu_init_others(void)
+{
+	if (!its_init_all_gicr())
+		return;
+
+	gic_iterate_rdists(gic_cpu_init_other);
+}
+#else
+static inline void gic_compute_nr_gicr(void) {}
+
+static inline void gic_cpu_init_others(void) {}
+#endif
+
 #ifdef CONFIG_SMP

 #define MPIDR_TO_SGI_RS(mpidr)	(MPIDR_RS(mpidr) << ICC_SGI1R_RS_SHIFT)
@@ -2052,6 +2137,7 @@ static int __init gic_init_bases(phys_addr_t dist_phys_base,
 		gic_data.rdists.has_direct_lpi = true;
 		gic_data.rdists.has_vpend_valid_dirty = true;
 	}
+	gic_compute_nr_gicr();

 	if (WARN_ON(!gic_data.domain) || WARN_ON(!gic_data.rdists.rdist)) {
 		err = -ENOMEM;
@@ -2087,6 +2173,7 @@ static int __init gic_init_bases(phys_addr_t dist_phys_base,
 	}

 	gic_enable_nmi_support();
+	gic_cpu_init_others();

 	return 0;

diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
index 728691365464..33e098c70952 100644
--- a/include/linux/irqchip/arm-gic-v3.h
+++ b/include/linux/irqchip/arm-gic-v3.h
@@ -637,6 +637,11 @@ struct irq_domain;
 struct fwnode_handle;
 int __init its_lpi_memreserve_init(void);
 int its_cpu_init(void);
+#ifdef CONFIG_ASCEND_INIT_ALL_GICR
+void its_set_gicr_nr(int nr);
+bool its_init_all_gicr(void);
+int its_cpu_init_others(void __iomem *base, phys_addr_t phys_base, int idx);
+#endif
 int its_init(struct fwnode_handle *handle, struct rdists *rdists,
	     struct irq_domain *domain);
 int mbi_init(struct fwnode_handle *fwnode, struct irq_domain *parent);
From: Xu Qiang <xuqiang36@huawei.com>

ascend inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I554T5
CVE: NA
------------
On the FPGA platform, we need to check whether the GICR has been cut; if it has, it cannot be initialized.
Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
---
 drivers/irqchip/irq-gic-v3-its.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 3e270f7d000e..b118af43157f 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -3428,6 +3428,7 @@ static void its_cpu_init_lpis_others(void __iomem *rbase, int cpu)
 static void its_cpu_init_collection_others(void __iomem *rbase,
					    phys_addr_t phys_base, int cpu)
 {
+	u32 count;
 	struct its_node *its;

 	if (!init_all_gicr)
@@ -3456,6 +3457,32 @@ static void its_cpu_init_collection_others(void __iomem *rbase,
 			target = GICR_TYPER_CPU_NUMBER(target) << 16;
 		}

+		dsb(sy);
+
+		/*
+		 * In FPGA, we need to check if the GICR has been cut,
+		 * and if it is, it can't be initialized.
+		 */
+		count = 2000;
+		while (1) {
+			if (readl_relaxed(rbase + GICR_SYNCR) == 0)
+				break;
+
+			count--;
+			if (!count) {
+				pr_err("this gicr does not exist, or it's abnormal:%pK\n",
+				       &phys_base);
+				break;
+			}
+			cpu_relax();
+			udelay(1);
+		}
+
+		if (count == 0)
+			break;
+
+		pr_info("its init other collection table, ITS:%pK, GICR:%pK, coreId:%u\n",
+			&its->phys_base, &phys_base, cpu);
+
 		/* Perform collection mapping */
 		its->collections[cpu].target_address = target;
 		its->collections[cpu].col_id = cpu;
From: Xu Qiang <xuqiang36@huawei.com>

ascend inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I612UG
CVE: NA
--------------------------------
Fix the case where the CPU value passed to its_inc_lpi_count() is too large (it can exceed nr_cpu_ids for the extra Ascend GICRs).
Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
---
 drivers/irqchip/irq-gic-v3-its.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index b118af43157f..c38a214daf24 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -1566,6 +1566,11 @@ static __maybe_unused u32 its_read_lpi_count(struct irq_data *d, int cpu)

 static void its_inc_lpi_count(struct irq_data *d, int cpu)
 {
+#ifdef CONFIG_ASCEND_INIT_ALL_GICR
+	if (cpu >= nr_cpu_ids)
+		return;
+#endif
+
 	if (irqd_affinity_is_managed(d))
 		atomic_inc(&per_cpu_ptr(&cpu_lpi_count, cpu)->managed);
 	else
@@ -1574,6 +1579,11 @@ static void its_inc_lpi_count(struct irq_data *d, int cpu)

 static void its_dec_lpi_count(struct irq_data *d, int cpu)
 {
+#ifdef CONFIG_ASCEND_INIT_ALL_GICR
+	if (cpu >= nr_cpu_ids)
+		return;
+#endif
+
 	if (irqd_affinity_is_managed(d))
 		atomic_dec(&per_cpu_ptr(&cpu_lpi_count, cpu)->managed);
 	else
From: Weilong Chen <chenweilong@huawei.com>

ascend inclusion
category: feature
bugzilla: 46922
CVE: NA
-------------------------------------
Taishan's L1/L2 caches are inclusive and the data is kept consistent: a change in L1 does not require a DC operation to push the cache line from L1 to L2. It is therefore safe to skip cleaning the data cache by address to the point of unification.

Without the IDC feature, the kernel must flush the icache as well as the dcache, which causes performance degradation.

The flaw applies to TSV110/TSV200 variant 1.
Signed-off-by: Weilong Chen <chenweilong@huawei.com>
---
 Documentation/arch/arm64/silicon-errata.rst |  2 ++
 arch/arm64/Kconfig                          |  9 ++++++
 arch/arm64/kernel/cpu_errata.c              | 32 +++++++++++++++++++++
 3 files changed, 43 insertions(+)
diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
index f47f63bcf67c..7e76c6866ea4 100644
--- a/Documentation/arch/arm64/silicon-errata.rst
+++ b/Documentation/arch/arm64/silicon-errata.rst
@@ -203,6 +203,8 @@ stable kernels.
 | Hisilicon      | Hip08 SMMU PMCG | #162001900      | N/A                         |
 |                | Hip09 SMMU PMCG |                 |                             |
 +----------------+-----------------+-----------------+-----------------------------+
+| Hisilicon      | TSV{110,200}    | #1980005        | HISILICON_ERRATUM_1980005   |
++----------------+-----------------+-----------------+-----------------------------+
 +----------------+-----------------+-----------------+-----------------------------+
 | Qualcomm Tech. | Kryo/Falkor v1  | E1003           | QCOM_FALKOR_ERRATUM_1003    |
 +----------------+-----------------+-----------------+-----------------------------+
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e36340d804dd..93dd8dca4594 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1158,6 +1158,15 @@ config HISILICON_ERRATUM_161600802

	  If unsure, say Y.

+config HISILICON_ERRATUM_1980005
+	bool "Hisilicon erratum IDC support"
+	default n
+	help
+	  The HiSilicon TSV110/TSV200 SoCs support IDC but report the wrong
+	  value to the kernel.
+
+	  If unsure, say N.
+
 config QCOM_FALKOR_ERRATUM_1003
	bool "Falkor E1003: Incorrect translation due to ASID change"
	default y
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 5706e74c5578..5214d22dcf28 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -55,6 +55,29 @@ is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
 	return model == entry->midr_range.model;
 }

+#ifdef CONFIG_HISILICON_ERRATUM_1980005
+static bool
+hisilicon_1980005_match(const struct arm64_cpu_capabilities *entry,
+			int scope)
+{
+	static const struct midr_range idc_support_list[] = {
+		MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
+		MIDR_REV(MIDR_HISI_TSV200, 1, 0),
+		{ /* sentinel */ }
+	};
+
+	return is_midr_in_range_list(read_cpuid_id(), idc_support_list);
+}
+
+static void
+hisilicon_1980005_enable(const struct arm64_cpu_capabilities *__unused)
+{
+	__set_bit(ARM64_HAS_CACHE_IDC, system_cpucaps);
+	arm64_ftr_reg_ctrel0.sys_val |= BIT(CTR_EL0_IDC_SHIFT);
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);
+}
+#endif
+
 static bool
 has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
			   int scope)
@@ -506,6 +529,15 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
+#ifdef CONFIG_HISILICON_ERRATUM_1980005
+	{
+		.desc = "Taishan IDC coherence workaround",
+		.capability = ARM64_WORKAROUND_HISILICON_1980005,
+		.matches = hisilicon_1980005_match,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.cpu_enable = hisilicon_1980005_enable,
+	},
+#endif
 #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
 	{
 		.desc = "Qualcomm Technologies Falkor/Kryo erratum 1003",
From: Weilong Chen <chenweilong@huawei.com>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4LGV4
CVE: NA
---------------------------
Patch "cache: Workaround HiSilicon Taishan DC CVAU" breaks the verification of CPU capabilities when hot-plugging CPUs: the capability is turned on at system scope while the local CPU capability is still off. This patch fixes it in two steps:
1. Clear the CTR_EL0.IDC bit from strict_mask so the check is skipped.
2. Special-case the affected cores in read_cpuid_effective_cachetype().
Signed-off-by: Weilong Chen <chenweilong@huawei.com>
---
 arch/arm64/include/asm/cache.h | 9 +++++++++
 arch/arm64/kernel/cpu_errata.c | 1 +
 2 files changed, 10 insertions(+)
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index ceb368d33bf4..e0a60e4c8b42 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -112,6 +112,15 @@ int cache_line_size(void);
 static inline u32 __attribute_const__ read_cpuid_effective_cachetype(void)
 {
 	u32 ctr = read_cpuid_cachetype();
+#ifdef CONFIG_HISILICON_ERRATUM_1980005
+	static const struct midr_range idc_support_list[] = {
+		MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
+		MIDR_REV(MIDR_HISI_TSV200, 1, 0),
+		{ /* sentinel */ }
+	};
+
+	if (is_midr_in_range_list(read_cpuid_id(), idc_support_list))
+		ctr |= BIT(CTR_EL0_IDC_SHIFT);
+#endif

 	if (!(ctr & BIT(CTR_EL0_IDC_SHIFT))) {
 		u64 clidr = read_sysreg(clidr_el1);
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 5214d22dcf28..ce7ba44a8b23 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -74,6 +74,7 @@ hisilicon_1980005_enable(const struct arm64_cpu_capabilities *__unused)
 {
 	__set_bit(ARM64_HAS_CACHE_IDC, system_cpucaps);
 	arm64_ftr_reg_ctrel0.sys_val |= BIT(CTR_EL0_IDC_SHIFT);
+	arm64_ftr_reg_ctrel0.strict_mask &= ~BIT(CTR_EL0_IDC_SHIFT);
 	sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);
 }
 #endif
From: Zhang Jian <zhangjian210@huawei.com>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I63SDZ
-------------------------------
Some Taishan cores report sub-version (revision) 2, so add this new version to the Taishan core match list.
Signed-off-by: Zhang Jian <zhangjian210@huawei.com>
---
 arch/arm64/kernel/cpu_errata.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index ce7ba44a8b23..6759acf19611 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -63,6 +63,7 @@ hisilicon_1980005_match(const struct arm64_cpu_capabilities *entry,
 	static const struct midr_range idc_support_list[] = {
 		MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
 		MIDR_REV(MIDR_HISI_TSV200, 1, 0),
+		MIDR_REV(MIDR_HISI_TSV200, 1, 2),
 		{ /* sentinel */ }
 	};
From: Jiankang Chen <chenjiankang1@huawei.com>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I63SDZ
-------------------
For the Ascend feature, the SMMU's SID is read from the "streamid" entry in the ACPI tables. To enable the SMMU, we must get the SID from the ACPI table and set stall_enabled to true.
Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index bd0a596f9863..788310f92642 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2657,6 +2657,9 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 	struct arm_smmu_device *smmu;
 	struct arm_smmu_master *master;
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+#ifdef CONFIG_ASCEND_FEATURES
+	u32 sid;
+#endif

 	if (!fwspec || fwspec->ops != &arm_smmu_ops)
 		return ERR_PTR(-ENODEV);
@@ -2703,6 +2706,15 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 	    smmu->features & ARM_SMMU_FEAT_STALL_FORCE)
 		master->stall_enabled = true;

+#ifdef CONFIG_ASCEND_FEATURES
+	if (!acpi_dev_prop_read_single(ACPI_COMPANION(dev),
+				       "streamid", DEV_PROP_U32, &sid)) {
+		if (iommu_fwspec_add_ids(dev, &sid, 1))
+			dev_info(dev, "failed to add ids\n");
+		master->stall_enabled = true;
+		master->ssid_bits = 0x10;
+	}
+#endif
 	return &smmu->iommu;
err_free_master:
From: Zhang Jian <zhangjian210@huawei.com>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I63SDZ
CVE: NA
-------------------------------
For the Ascend feature, when CONFIG_ARM_SMMU_V3 is set to M and arm-smmu-v3.ko is built as a module, the acpi_dev_prop_read_single symbol is missing (it is not exported to modules). So use acpi_dev_get_property to replace the old interface.
Signed-off-by: Zhang Jian <zhangjian210@huawei.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 788310f92642..4381876977b2 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2659,6 +2659,7 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
 #ifdef CONFIG_ASCEND_FEATURES
 	u32 sid;
+	const union acpi_object *obj = NULL;
 #endif

 	if (!fwspec || fwspec->ops != &arm_smmu_ops)
@@ -2707,8 +2708,9 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 		master->stall_enabled = true;

 #ifdef CONFIG_ASCEND_FEATURES
-	if (!acpi_dev_prop_read_single(ACPI_COMPANION(dev),
-				       "streamid", DEV_PROP_U32, &sid)) {
+	if (!acpi_dev_get_property(ACPI_COMPANION(dev),
+				   "streamid", ACPI_TYPE_INTEGER, &obj) && obj) {
+		sid = obj->integer.value;
 		if (iommu_fwspec_add_ids(dev, &sid, 1))
 			dev_info(dev, "failed to add ids\n");
 		master->stall_enabled = true;
FeedBack: The patch(es) which you have sent to kernel@openeuler.org mailing list has been converted to a pull request successfully! Pull request link: https://gitee.com/openeuler/kernel/pulls/3236 Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/4...