From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA
----------------------------------------------------------------
When the RTC divider is changed from reset to an operating time base, the first update cycle should begin 500ms later. On some Zhaoxin SOCs, however, this first update cycle begins one second later, so setting the RTC time on these SOCs introduces a 500ms delay. Fix this by skipping the RTC divider reset on these SOCs in mc146818_set_time().
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Reviewed-by: Xiongfeng Wang wangxiongfeng2@huawei.com
---
 drivers/rtc/rtc-mc146818-lib.c | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
index 2ecd8752b088..033dcc0645a5 100644
--- a/drivers/rtc/rtc-mc146818-lib.c
+++ b/drivers/rtc/rtc-mc146818-lib.c
@@ -8,6 +8,22 @@
 #include <linux/acpi.h>
 #endif
 
+#ifdef CONFIG_X86
+static inline bool follow_mc146818_divider_reset(void)
+{
+	if ((boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR ||
+	     boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN) &&
+	    (boot_cpu_data.x86 <= 7 && boot_cpu_data.x86_model <= 59))
+		return false;
+	return true;
+}
+#else
+static inline bool follow_mc146818_divider_reset(void)
+{
+	return true;
+}
+#endif
+
 /*
  * Returns true if a clock update is in progress
  */
@@ -171,8 +187,11 @@ int mc146818_set_time(struct rtc_time *time)
 
 	save_control = CMOS_READ(RTC_CONTROL);
 	CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
-	save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
-	CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
+	if (follow_mc146818_divider_reset()) {
+		save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
+		CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
+	}
+
 #ifdef CONFIG_MACH_DECSTATION
 	CMOS_WRITE(real_yrs, RTC_DEC_YEAR);
@@ -190,7 +209,8 @@ int mc146818_set_time(struct rtc_time *time)
 #endif
 
 	CMOS_WRITE(save_control, RTC_CONTROL);
-	CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
+	if (follow_mc146818_divider_reset())
+		CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
 
 	spin_unlock_irqrestore(&rtc_lock, flags);
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA
----------------------------------------------------------------
When a device is unplugged from an xHCI port with runtime suspend enabled, two things happen. On the one hand, the driver disconnects the device and sends a Disable Slot command to the xHCI controller. On the other hand, with no device connected, the PM core calls the xHCI suspend function to put the controller into D3 to save power. This creates a race on the xHCI lock between the Disable Slot command completion interrupt and xHCI suspend. If the suspend function takes the lock first, it clears the xHCI command ring, and the driver then reads an erroneous command TRB when it handles the Disable Slot command interrupt. The driver treats this as a fatal error and tears down the xHCI controller, so any device subsequently connected to this port is no longer recognized.

To fix this, let the Disable Slot command interrupt handler take the xHCI lock first by adding a delay in the xHCI suspend function before it acquires the lock (sketched below).
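For reference, xhci_suspend() consumes this quirk just before taking the lock; a minimal sketch of that handling, modeled on the existing mainline XHCI_SUSPEND_DELAY logic (the sleep range follows mainline and is an assumption for this backport):

	/* Give a pending Disable Slot completion time to win the race
	 * for xhci->lock before the command ring is frozen.
	 */
	if (xhci->quirks & XHCI_SUSPEND_DELAY)
		usleep_range(1000, 1500);

	spin_lock_irq(&xhci->lock);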
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Reviewed-by: Jackie Liu liuyun01@kylinos.cn
Reviewed-by: Xiongfeng Wang wangxiongfeng2@huawei.com
---
 drivers/usb/host/xhci-pci.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index d2dddc6e8a53..490ce38ae93d 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -286,6 +286,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
 		xhci->quirks |= XHCI_LPM_SUPPORT;
 		xhci->quirks |= XHCI_ZHAOXIN_HOST;
 	}
+	if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN)
+		xhci->quirks |= XHCI_SUSPEND_DELAY;
 
 	/* See https://bugzilla.kernel.org/show_bug.cgi?id=79511 */
 	if (pdev->vendor == PCI_VENDOR_ID_VIA &&
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA
----------------------------------------------------------------
When a LS/FS device is plugged into an EHCI USB2 port, a wakeup signal is latched inside the EHCI controller. This is an EHCI bug on some Zhaoxin projects. With EHCI runtime suspend enabled and no device attached, the PM core puts EHCI into D3 to save power. However, once EHCI enters D3, it releases the wakeup signal that was latched while a device was connected to the port during S0. This generates an SCI interrupt and brings EHCI back to D0. With no device connected, EHCI then goes back to D3 again, producing a suspend-resume loop that generates SCI interrupts continuously.

To fix this, clear the wakeup signal latched in the EHCI controller when the EHCI suspend function is called.
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Reviewed-by: Xiongfeng Wang wangxiongfeng2@huawei.com
---
 drivers/pci/pci-driver.c    |  6 +++++-
 drivers/usb/host/ehci-hcd.c | 21 +++++++++++++++++++++
 drivers/usb/host/ehci-pci.c |  4 ++++
 drivers/usb/host/ehci.h     |  1 +
 4 files changed, 31 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 8b587fc97f7b..79aa5a4b61c5 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -518,7 +518,11 @@ static int pci_restore_standard_config(struct pci_dev *pci_dev)
 	}
 
 	pci_restore_state(pci_dev);
-	pci_pme_restore(pci_dev);
+	if (!(pci_dev->vendor == PCI_VENDOR_ID_ZHAOXIN &&
+	      pci_dev->device == 0x3104 &&
+	      (pci_dev->revision & 0xf0) == 0x90 &&
+	      pci_dev->class == PCI_CLASS_SERIAL_USB_EHCI))
+		pci_pme_restore(pci_dev);
 	return 0;
 }
 
diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
index 8aff19ff8e8f..586e2735d6e0 100644
--- a/drivers/usb/host/ehci-hcd.c
+++ b/drivers/usb/host/ehci-hcd.c
@@ -1142,6 +1142,27 @@ int ehci_suspend(struct usb_hcd *hcd, bool do_wakeup)
 		return -EBUSY;
 	}
 
+	/*clear wakeup signal locked in S0 state when device plug in*/
+	if (ehci->zx_wakeup_clear == 1) {
+		u32 __iomem *reg = &ehci->regs->port_status[4];
+		u32 t1 = ehci_readl(ehci, reg);
+
+		t1 &= (u32)~0xf0000;
+		t1 |= PORT_TEST_FORCE;
+		ehci_writel(ehci, t1, reg);
+		t1 = ehci_readl(ehci, reg);
+		usleep_range(1000, 2000);
+		t1 &= (u32)~0xf0000;
+		ehci_writel(ehci, t1, reg);
+		usleep_range(1000, 2000);
+		t1 = ehci_readl(ehci, reg);
+		ehci_writel(ehci, t1 | PORT_CSC, reg);
+		udelay(500);
+		t1 = ehci_readl(ehci, &ehci->regs->status);
+		ehci_writel(ehci, t1 & STS_PCD, &ehci->regs->status);
+		ehci_readl(ehci, &ehci->regs->status);
+	}
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(ehci_suspend);
diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c
index e87cf3a00fa4..a5e27deda83a 100644
--- a/drivers/usb/host/ehci-pci.c
+++ b/drivers/usb/host/ehci-pci.c
@@ -222,6 +222,10 @@ static int ehci_pci_setup(struct usb_hcd *hcd)
 			ehci->has_synopsys_hc_bug = 1;
 		}
 		break;
+	case PCI_VENDOR_ID_ZHAOXIN:
+		if (pdev->device == 0x3104 && (pdev->revision & 0xf0) == 0x90)
+			ehci->zx_wakeup_clear = 1;
+		break;
 	}
 
 	/* optional debug port, normally in the first BAR */
diff --git a/drivers/usb/host/ehci.h b/drivers/usb/host/ehci.h
index 59fd523c55f3..8da080a54668 100644
--- a/drivers/usb/host/ehci.h
+++ b/drivers/usb/host/ehci.h
@@ -219,6 +219,7 @@ struct ehci_hcd {			/* one per controller */
 	unsigned		need_oc_pp_cycle:1; /* MPC834X port power */
 	unsigned		imx28_write_fix:1; /* For Freescale i.MX28 */
 	unsigned		is_aspeed:1;
+	unsigned		zx_wakeup_clear:1;
 
 	/* required for usb32 quirk */
 	#define OHCI_CTRL_HCFS		(3 << 6)
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA
----------------------------------------------------------------
The kernel driver currently disables the port interrupt when a port is suspended, so hotplug does not work. Hotplug can be made to work by keeping the plug-in interrupt enabled while the port is in a power-managed state. Add pm_request_resume() to resume the port when a plug-in event or a PME signal wakes up a controller that is in D3.

When the AHCI controller frequently enters and leaves D3, the IDENTIFY command may time out while the controller resumes and re-establishes the connection with the device. A 10ms delay between controller resume and port resume is effective, giving the link a smooth transition.

With non-power-management requests and power-management requests competing in the queue, block I/O is often found to hang for 120s while the system disk is suspending or resuming. It is now guaranteed that PM requests enter the queue regardless of whether other non-PM requests are waiting: the pm_only counter is increased before checking whether any non-PM blk_queue_enter() calls are in progress. Additionally, the new blk_pm_request_resume() call must run when a request is assigned to a queue while the device is suspended (see the sketch below).
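The block-layer half of this description is not visible in the diff below; for reference, a minimal sketch of the helper it refers to, following the mainline block/blk-pm.h approach (treat the details as assumptions for this backport):

/* When a request is being assigned to a queue whose backing device is
 * suspended or suspending, kick an asynchronous runtime resume so the
 * request can make progress instead of waiting behind PM.
 */
static inline void blk_pm_request_resume(struct request_queue *q)
{
	if (q->dev && (q->rpm_status == RPM_SUSPENDED ||
		       q->rpm_status == RPM_SUSPENDING))
		pm_request_resume(q->dev);
}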
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com
Reviewed-by: Xiongfeng Wang wangxiongfeng2@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 drivers/ata/ahci.c    | 13 +++++++++++++
 drivers/ata/libahci.c | 15 +++++++++++++++
 2 files changed, 28 insertions(+)
diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index ff2add0101fe..8a7140ee88b5 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -869,6 +869,19 @@ static int ahci_pci_device_runtime_resume(struct device *dev)
 	if (rc)
 		return rc;
 	ahci_pci_init_controller(host);
+
+	/* Port resume for Zhaoxin platform */
+	if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
+		if (pdev->revision == 0x01)
+			ata_msleep(NULL, 10);
+
+		for (rc = 0; rc < host->n_ports; rc++) {
+			struct ata_port *ap = host->ports[rc];
+
+			pm_request_resume(&ap->tdev);
+		}
+	}
+
 	return 0;
 }
 
diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
index fec2e9754aed..4514f3f28b7c 100644
--- a/drivers/ata/libahci.c
+++ b/drivers/ata/libahci.c
@@ -823,9 +823,15 @@ static int ahci_set_lpm(struct ata_link *link, enum ata_lpm_policy policy,
 static void ahci_power_down(struct ata_port *ap)
 {
 	struct ahci_host_priv *hpriv = ap->host->private_data;
+	struct pci_dev *pdev;
 	void __iomem *port_mmio = ahci_port_base(ap);
 	u32 cmd, scontrol;
 
+	/* port suspended enable Plugin intr for Zhaoxin platform */
+	pdev = (ap->dev && dev_is_pci(ap->dev)) ? to_pci_dev(ap->dev) : NULL;
+	if ((pdev && pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) && !ap->link.device->sdev)
+		writel(PORT_IRQ_CONNECT, port_mmio + PORT_IRQ_MASK);
+
 	if (!(hpriv->cap & HOST_CAP_SSS))
 		return;
 
@@ -1701,6 +1707,7 @@ static void ahci_error_intr(struct ata_port *ap, u32 irq_stat)
 	struct ata_link *link = NULL;
 	struct ata_queued_cmd *active_qc;
 	struct ata_eh_info *active_ehi;
+	struct pci_dev *pdev;
 	bool fbs_need_dec = false;
 	u32 serror;
 
@@ -1791,6 +1798,14 @@ static void ahci_error_intr(struct ata_port *ap, u32 irq_stat)
 		ata_ehi_push_desc(host_ehi, "%s",
 				  irq_stat & PORT_IRQ_CONNECT ?
 				  "connection status changed" : "PHY RDY changed");
+
+		/* When plugin intr happen, now resume suspended port for Zhaoxin platform */
+		pdev = (ap->dev && dev_is_pci(ap->dev)) ? to_pci_dev(ap->dev) : NULL;
+		if ((pdev && pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) &&
+		    (ap->pflags & ATA_PFLAG_SUSPENDED)) {
+			pm_request_resume(&ap->tdev);
+			return;
+		}
 	}
 
 	/* okay, let's hand over to EH */
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA
----------------------------------------------------------------
The AHCI spec states that the PhyRdy Change Interrupt and Link Power Management (LPM) do not coexist. However, before enabling LPM, the driver did not check whether the host actually supports LPM and simply disabled the PhyRdy Change Interrupt. Add a check for actual LPM support, and disable the PhyRdy Change Interrupt only when LPM is supported.
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Acked-by: Jackie Liu liuyun01@kylinos.cn
Reviewed-by: Jason Yan yanaijie@huawei.com
---
 drivers/ata/ahci.c      | 11 +++++++++++
 drivers/ata/libata-eh.c |  7 +++++++
 include/linux/libata.h  |  4 ++++
 3 files changed, 22 insertions(+)
diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index 8a7140ee88b5..abc2df4a96be 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -1881,6 +1881,17 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	else
 		dev_info(&pdev->dev, "SSS flag set, parallel bus scan disabled\n");
 
+	if (pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
+		if (hpriv->cap & HOST_CAP_PART)
+			host->flags |= ATA_HOST_PART;
+
+		if (hpriv->cap & HOST_CAP_SSC)
+			host->flags |= ATA_HOST_SSC;
+
+		if (hpriv->cap2 & HOST_CAP2_SDS)
+			host->flags |= ATA_HOST_DEVSLP;
+	}
+
 	if (pi.flags & ATA_FLAG_EM)
 		ahci_reset_em(host);
 
diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
index 018ed8736a64..271c3c76f4af 100644
--- a/drivers/ata/libata-eh.c
+++ b/drivers/ata/libata-eh.c
@@ -3237,6 +3237,8 @@ static int ata_eh_set_lpm(struct ata_link *link, enum ata_lpm_policy policy,
 			  struct ata_device **r_failed_dev)
 {
 	struct ata_port *ap = ata_is_host_link(link) ? link->ap : NULL;
+	struct device *device = ap->host->dev;
+	struct pci_dev *pdev = (!device || !dev_is_pci(device)) ? NULL : to_pci_dev(device);
 	struct ata_eh_context *ehc = &link->eh_context;
 	struct ata_device *dev, *link_dev = NULL, *lpm_dev = NULL;
 	enum ata_lpm_policy old_policy = link->lpm_policy;
@@ -3245,6 +3247,11 @@ static int ata_eh_set_lpm(struct ata_link *link, enum ata_lpm_policy policy,
 	unsigned int err_mask;
 	int rc;
 
+	/* if controller does not support lpm, then sets no LPM flags*/
+	if ((pdev && pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) &&
+	    !(ap->host->flags & (ATA_HOST_PART | ATA_HOST_SSC | ATA_HOST_DEVSLP)))
+		link->flags |= ATA_LFLAG_NO_LPM;
+
 	/* if the link or host doesn't do LPM, noop */
 	if (!IS_ENABLED(CONFIG_SATA_HOST) ||
 	    (link->flags & ATA_LFLAG_NO_LPM) || (ap && !ap->ops->set_lpm))
diff --git a/include/linux/libata.h b/include/linux/libata.h
index d3600fc7f7c4..bbce7a124286 100644
--- a/include/linux/libata.h
+++ b/include/linux/libata.h
@@ -260,6 +260,10 @@ enum {
 	ATA_HOST_PARALLEL_SCAN	= (1 << 2),	/* Ports on this host can be scanned in parallel */
 	ATA_HOST_IGNORE_ATA	= (1 << 3),	/* Ignore ATA devices on this host. */
 
+	ATA_HOST_PART		= (1 << 4),	/* Host support partial.*/
+	ATA_HOST_SSC		= (1 << 5),	/* Host support slumber.*/
+	ATA_HOST_DEVSLP		= (1 << 6),	/* Host support devslp.*/
+
 	/* bits 24:31 of host->flags are reserved for LLD specific flags */
 
 	/* various lengths of time */
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA
----------------------------------------------------------------
The AHCI spec states that when PxCMD.PSC/SSC is cleared to '0', the PxSCTL.IPM field of each port must be programmed to disallow device-initiated Partial/Slumber requests.
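For reference, the SControl (PxSCTL) IPM field occupies bits 11:8, and a set bit disallows the corresponding device-initiated transition; the change below programs it like this (a reading aid summarizing the SATA spec encoding):

	scontrol |= (0x1 << 8);	/* disallow Partial when PxCMD.PSC == 0 */
	scontrol |= (0x2 << 8);	/* disallow Slumber when PxCMD.SSC == 0 */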
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Acked-by: Jackie Liu liuyun01@kylinos.cn
Reviewed-by: Jason Yan yanaijie@huawei.com
---
 drivers/ata/libata-sata.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/drivers/ata/libata-sata.c b/drivers/ata/libata-sata.c
index c16423e44525..5a722c64b7dd 100644
--- a/drivers/ata/libata-sata.c
+++ b/drivers/ata/libata-sata.c
@@ -8,6 +8,7 @@
  */
 
 #include <linux/kernel.h>
+#include <linux/pci.h>
 #include <linux/module.h>
 #include <scsi/scsi_cmnd.h>
 #include <scsi/scsi_device.h>
@@ -368,6 +369,10 @@ int sata_link_scr_lpm(struct ata_link *link, enum ata_lpm_policy policy,
 		      bool spm_wakeup)
 {
 	struct ata_eh_context *ehc = &link->eh_context;
+	struct ata_port *ap = ata_is_host_link(link) ? link->ap : NULL;
+	struct device *dev = ap ? ap->host->dev : NULL;
+	struct pci_dev *pdev = (!dev || !dev_is_pci(dev)) ? NULL : to_pci_dev(dev);
+
 	bool woken_up = false;
 	u32 scontrol;
 	int rc;
@@ -394,10 +399,21 @@ int sata_link_scr_lpm(struct ata_link *link, enum ata_lpm_policy policy,
 	case ATA_LPM_MED_POWER_WITH_DIPM:
 	case ATA_LPM_MIN_POWER_WITH_PARTIAL:
 	case ATA_LPM_MIN_POWER:
-		if (ata_link_nr_enabled(link) > 0)
+		if (ata_link_nr_enabled(link) > 0) {
 			/* no restrictions on LPM transitions */
 			scontrol &= ~(0x7 << 8);
-		else {
+
+			/* if controller does not support partial, then disallows it,
+			 * the same for slumber
+			 */
+			if (pdev && pdev->vendor == PCI_VENDOR_ID_ZHAOXIN) {
+				if (!(link->ap->host->flags & ATA_HOST_PART))
+					scontrol |= (0x1 << 8);
+
+				if (!(link->ap->host->flags & ATA_HOST_SSC))
+					scontrol |= (0x2 << 8);
+			}
+		} else {
 			/* empty port, power off */
 			scontrol &= ~0xf;
 			scontrol |= (0x1 << 2);
From: LeoLiu-oc LeoLiu-oc@zhaoxin.com
zhaoxin inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA
----------------------------------------------------------------
When resuming from S4 on Zhaoxin two-package platforms that support X86_FEATURE_TSC_ADJUST, the following warning messages appear:

[ 327.445302] [Firmware Bug]: TSC ADJUST differs: CPU15 45960750 --> 78394770. Restoring
[ 329.209120] [Firmware Bug]: TSC ADJUST differs: CPU14 45960750 --> 78394770. Restoring
[ 329.209128] [Firmware Bug]: TSC ADJUST differs: CPU13 45960750 --> 78394770. Restoring
[ 329.209138] [Firmware Bug]: TSC ADJUST differs: CPU12 45960750 --> 78394770. Restoring
[ 329.209151] [Firmware Bug]: TSC ADJUST differs: CPU11 45960750 --> 78394770. Restoring
[ 329.209160] [Firmware Bug]: TSC ADJUST differs: CPU10 45960750 --> 78394770. Restoring
[ 329.209169] [Firmware Bug]: TSC ADJUST differs: CPU9 45960750 --> 78394770. Restoring

The reason is:

Step 1: Bring up. TSC is synchronized after bring-up with the following settings:

                         MSR 0x3b    cur->adjusted
Package#0 CPU 0-7        0           0
Package#1 first CPU      value1      value1
Package#1 non-first CPU  value1      value1

Step 2: Suspend to S4. The settings from Step 1 are not changed in this step.

Step 3: Bring up caused by the S4 wakeup event. TSC is synchronized during bring-up with the following settings:

                         MSR 0x3b    cur->adjusted
Package#0 CPU 0-7        0           0
Package#1 first CPU      value2      value2
Package#1 non-first CPU  value2      value2

Step 4: Resume from S4. When resuming from S4, the current TSC synchronization mechanism leaves the following settings:

                         MSR 0x3b    cur->adjusted
Package#0 CPU 0-7        0           0
Package#1 first CPU      value2      value2
Package#1 non-first CPU  value2      value1

In these steps, value1 != 0 and value2 != value1.

In Step 4, as tsc_store_and_check_tsc_adjust() works today, when the MSR 0x3b value on a non-first online CPU in package#1 equals the cur->adjusted value of the first online CPU in the same package, the cur->adjusted value on that non-first online CPU keeps the old value1. This causes tsc_verify_tsc_adjust() to set MSR 0x3b on the non-first online CPUs in package#1 back to the old value1 and to print the warning messages shown above.

Fix this by setting the cur->adjusted value on a non-first online CPU in a package to the value of MSR 0x3b on the same CPU when the two are not equal.
Signed-off-by: LeoLiu-oc LeoLiu-oc@zhaoxin.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
Reviewed-by: Jackie Liu liuyun01@kylinos.cn
Reviewed-by: Xiongfeng Wang wangxiongfeng2@huawei.com
---
 arch/x86/kernel/tsc_sync.c | 5 +++++
 1 file changed, 5 insertions(+)
diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
index 923660057f36..bb59e6af2cde 100644
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -231,6 +231,11 @@ bool tsc_store_and_check_tsc_adjust(bool bootcpu)
 	if (bootval != ref->adjusted) {
 		cur->adjusted = ref->adjusted;
 		wrmsrl(MSR_IA32_TSC_ADJUST, ref->adjusted);
+	} else if (cur->adjusted != bootval) {
+		if (boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR ||
+		    boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN) {
+			cur->adjusted = bootval;
+		}
 	}
 	/*
 	 * We have the TSCs forced to be in sync on this package. Skip sync
From: Yanling Song songyl@ramaxel.com
Ramaxel inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4TPKM
CVE: NA
-----------------------------------------------
Replace hw_link_mode with link_mode in ethtool get_link_ksettings.
Signed-off-by: Yanling Song songyl@ramaxel.com
Reviewed-by: wenliang wenliang@ramaxel.com
Acked-by: Xie XiuQi xiexiuqi@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 drivers/net/ethernet/ramaxel/spnic/spnic_ethtool_stats.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/ramaxel/spnic/spnic_ethtool_stats.c b/drivers/net/ethernet/ramaxel/spnic/spnic_ethtool_stats.c
index 8a5073a20f0b..5a830e3454d4 100644
--- a/drivers/net/ethernet/ramaxel/spnic/spnic_ethtool_stats.c
+++ b/drivers/net/ethernet/ramaxel/spnic/spnic_ethtool_stats.c
@@ -745,9 +745,9 @@ spnic_add_ethtool_link_mode(struct cmd_link_settings *link_settings, u32 hw_link
 	for (link_mode = 0; link_mode < LINK_MODE_MAX_NUMBERS; link_mode++) {
 		if (hw_link_mode & BIT(link_mode)) {
 			if (name == GET_SUPPORTED_MODE)
-				ETHTOOL_ADD_SUPPORTED_SPEED_LINK_MODE(link_settings, hw_link_mode);
+				ETHTOOL_ADD_SUPPORTED_SPEED_LINK_MODE(link_settings, link_mode);
 			else
-				ETHTOOL_ADD_ADVERTISED_SPEED_LINK_MODE(link_settings, hw_link_mode);
+				ETHTOOL_ADD_ADVERTISED_SPEED_LINK_MODE(link_settings, link_mode);
 		}
 	}
 }
From: Yanling Song songyl@ramaxel.com
Ramaxel inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4TPKM
CVE: NA
-----------------------------------------------
Fix ethtool loopback command failure: spnic_set_loopback_mode() passed MGMT_MSG_CMD_OP_GET where MGMT_MSG_CMD_OP_SET was intended.
Signed-off-by: Yanling Song songyl@ramaxel.com
Reviewed-by: wenliang wenliang@ramaxel.com
Acked-by: Xie XiuQi xiexiuqi@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 drivers/net/ethernet/ramaxel/spnic/spnic_mag_cfg.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/ramaxel/spnic/spnic_mag_cfg.c b/drivers/net/ethernet/ramaxel/spnic/spnic_mag_cfg.c
index 2f73f4e6cbfa..d0448d1a6bd3 100644
--- a/drivers/net/ethernet/ramaxel/spnic/spnic_mag_cfg.c
+++ b/drivers/net/ethernet/ramaxel/spnic/spnic_mag_cfg.c
@@ -178,7 +178,7 @@ int spnic_set_loopback_mode(void *hwdev, u8 mode, u8 enable)
 		return -EINVAL;
 	}
 
-	return spnic_cfg_loopback_mode(nic_cfg, MGMT_MSG_CMD_OP_GET, &mode, &enable);
+	return spnic_cfg_loopback_mode(nic_cfg, MGMT_MSG_CMD_OP_SET, &mode, &enable);
 }
 
 int spnic_set_led_status(void *hwdev, enum mag_led_type type, enum mag_led_mode mode)
From: Yanling Song songyl@ramaxel.com
Ramaxel inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4TPKM
CVE: NA
-----------------------------------------------
The hardware calculates the XOR checksum over 4-byte-aligned data. When sending a message whose length is not 4-byte-aligned to the firmware, the length has to be rounded up to a 4-byte multiple.
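For context, the checksum helper consumes the buffer in u32 words, which is why the length handed to it must be a 4-byte multiple; a minimal sketch (the helper name matches this driver, its body and seed constant are assumptions):

static u32 mbox_dma_msg_xor(u32 *data, u16 len)
{
	u32 xor = MBOX_DMA_MSG_INIT_XOR_VAL;	/* seed constant (assumed) */
	u16 i;

	/* Whole u32 words only: trailing 1-3 bytes would be skipped
	 * unless len is first rounded up with ALIGN(len, sizeof(u32)).
	 */
	for (i = 0; i < len / sizeof(u32); i++)
		xor ^= data[i];

	return xor;
}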
Signed-off-by: Yanling Song songyl@ramaxel.com
Reviewed-by: wenliang wenliang@ramaxel.com
Acked-by: Xie XiuQi xiexiuqi@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c b/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
index 8abb47422bdd..7631b17a2944 100644
--- a/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
+++ b/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
@@ -860,6 +860,7 @@ static int mbox_prepare_dma_entry(struct sphw_mbox *func_to_func, struct mbox_dm
 				  struct mbox_dma_msg *dma_msg, void *msg, u16 msg_len)
 {
 	u64 dma_addr, offset;
+	void *dma_vaddr;
 
 	if (IS_MSG_QUEUE_FULL(mq)) {
 		sdk_err(func_to_func->hwdev->dev_hdl, "Mbox sync message queue is busy, pi: %u, ci: %u\n",
@@ -869,12 +870,13 @@ static int mbox_prepare_dma_entry(struct sphw_mbox *func_to_func, struct mbox_dm
 
 	/* copy data to DMA buffer */
 	offset = mq->prod_idx * MBOX_MAX_BUF_SZ;
-	memcpy((u8 *)mq->dma_buff_vaddr + offset, msg, msg_len);
+	dma_vaddr = (u8 *)mq->dma_buff_vaddr + offset;
+	memcpy(dma_vaddr, msg, msg_len);
 	dma_addr = mq->dma_buff_paddr + offset;
 	dma_msg->dma_addr_high = upper_32_bits(dma_addr);
 	dma_msg->dma_addr_low = lower_32_bits(dma_addr);
 	dma_msg->msg_len = msg_len;
-	dma_msg->xor = mbox_dma_msg_xor(msg, msg_len);
+	dma_msg->xor = mbox_dma_msg_xor(dma_vaddr, ALIGN(msg_len, sizeof(u32)));
 
 	mq->prod_idx++;
 	mq->prod_idx = MQ_ID_MASK(mq, mq->prod_idx);
From: Yanling Song songyl@ramaxel.com
Ramaxel inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4TPKM
CVE: NA
-----------------------------------------------
spnic_pf_event_handler and spnic_pf_mag_event_handler should be registered even when there are no VFs.
Signed-off-by: Yanling Song songyl@ramaxel.com
Reviewed-by: wenliang wenliang@ramaxel.com
Acked-by: Xie XiuQi xiexiuqi@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 .../ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c | 49 ++++++++++++-------
 1 file changed, 30 insertions(+), 19 deletions(-)
diff --git a/drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c b/drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c
index 53410b2423c8..634f82153dff 100644
--- a/drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c
+++ b/drivers/net/ethernet/ramaxel/spnic/spnic_nic_cfg_vf.c
@@ -523,14 +523,26 @@ static int pf_init_vf_infos(struct spnic_nic_cfg *nic_cfg)
 	int err;
 	u16 i;
 
+	err = sphw_register_mgmt_msg_cb(nic_cfg->hwdev, SPHW_MOD_L2NIC, nic_cfg,
+					spnic_pf_event_handler);
+	if (err)
+		return err;
+
+	err = sphw_register_mgmt_msg_cb(nic_cfg->hwdev, SPHW_MOD_HILINK, nic_cfg,
+					spnic_pf_mag_event_handler);
+	if (err)
+		goto register_mag_mgmt_cb_err;
+
 	nic_cfg->max_vfs = sphw_func_max_vf(nic_cfg->hwdev);
 	size = sizeof(*nic_cfg->vf_infos) * nic_cfg->max_vfs;
 	if (!size)
 		return 0;
 
 	nic_cfg->vf_infos = kzalloc(size, GFP_KERNEL);
-	if (!nic_cfg->vf_infos)
-		return -ENOMEM;
+	if (!nic_cfg->vf_infos) {
+		err = -ENOMEM;
+		goto alloc_vf_infos_err;
+	}
 
 	for (i = 0; i < nic_cfg->max_vfs; i++) {
 		err = spnic_init_vf_infos(nic_cfg, i);
@@ -538,36 +550,31 @@ static int pf_init_vf_infos(struct spnic_nic_cfg *nic_cfg)
 			goto init_vf_infos_err;
 	}
 
-	err = sphw_register_mgmt_msg_cb(nic_cfg->hwdev, SPHW_MOD_L2NIC, nic_cfg,
-					spnic_pf_event_handler);
-	if (err)
-		goto register_mgmt_cb_err;
-
 	err = sphw_register_pf_mbox_cb(nic_cfg->hwdev, SPHW_MOD_L2NIC, nic_cfg,
 				       spnic_pf_mbox_handler);
 	if (err)
-		goto register_pf_mbox_cb_err;
-
-	err = sphw_register_mgmt_msg_cb(nic_cfg->hwdev, SPHW_MOD_HILINK, nic_cfg,
-					spnic_pf_mag_event_handler);
-	if (err)
-		goto register_mgmt_cb_err;
+		goto register_nic_mbox_cb_err;
 
 	err = sphw_register_pf_mbox_cb(nic_cfg->hwdev, SPHW_MOD_HILINK, nic_cfg,
 				       spnic_pf_mag_mbox_handler);
 	if (err)
-		goto register_pf_mag_mbox_cb_err;
+		goto register_mag_mbox_cb_err;
 
 	return 0;
 
-register_pf_mag_mbox_cb_err:
+register_mag_mbox_cb_err:
 	sphw_unregister_pf_mbox_cb(nic_cfg->hwdev, SPHW_MOD_L2NIC);
-register_pf_mbox_cb_err:
-	sphw_unregister_mgmt_msg_cb(nic_cfg->hwdev, SPHW_MOD_L2NIC);
-register_mgmt_cb_err:
+
+register_nic_mbox_cb_err:
 init_vf_infos_err:
 	kfree(nic_cfg->vf_infos);
 
+alloc_vf_infos_err:
+	sphw_unregister_mgmt_msg_cb(nic_cfg->hwdev, SPHW_MOD_HILINK);
+
+register_mag_mgmt_cb_err:
+	sphw_unregister_mgmt_msg_cb(nic_cfg->hwdev, SPHW_MOD_L2NIC);
+
 	return err;
 }
 
@@ -596,13 +603,17 @@ void spnic_vf_func_free(struct spnic_nic_cfg *nic_cfg)
 				err, unregister.msg_head.status, out_size);
 
 		sphw_unregister_vf_mbox_cb(nic_cfg->hwdev, SPHW_MOD_L2NIC);
+		sphw_unregister_vf_mbox_cb(nic_cfg->hwdev, SPHW_MOD_HILINK);
 	} else {
 		if (nic_cfg->vf_infos) {
-			sphw_unregister_mgmt_msg_cb(nic_cfg->hwdev, SPHW_MOD_L2NIC);
 			sphw_unregister_pf_mbox_cb(nic_cfg->hwdev, SPHW_MOD_L2NIC);
+			sphw_unregister_pf_mbox_cb(nic_cfg->hwdev, SPHW_MOD_HILINK);
 			spnic_clear_vfs_info(nic_cfg->hwdev);
 			kfree(nic_cfg->vf_infos);
 		}
+
+		sphw_unregister_mgmt_msg_cb(nic_cfg->hwdev, SPHW_MOD_L2NIC);
+		sphw_unregister_mgmt_msg_cb(nic_cfg->hwdev, SPHW_MOD_HILINK);
 	}
 }
From: Yanling Song songyl@ramaxel.com
Ramaxel inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4TPKM
CVE: NA
-----------------------------------------------
Reduce the timeout of the channel between driver and firmware
Signed-off-by: Yanling Song songyl@ramaxel.com
Reviewed-by: wenliang wenliang@ramaxel.com
Acked-by: Xie XiuQi xiexiuqi@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c | 2 +-
 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c | 4 ++--
 drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c b/drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c
index 5421b813d484..9ebff6f8ac97 100644
--- a/drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c
+++ b/drivers/net/ethernet/ramaxel/spnic/hw/sphw_cmdq.c
@@ -30,7 +30,7 @@
 #define SPHW_CMDQ_MAX_DATA_SIZE		\
 		(SPHW_CMDQ_BUF_SIZE - SPHW_CMDQ_BUF_HW_RSVD)
 
-#define CMDQ_CMD_TIMEOUT	300000 /* millisecond */
+#define CMDQ_CMD_TIMEOUT	5000 /* millisecond */
 
 #define UPPER_8_BITS(data)	(((data) >> 8) & 0xFF)
 #define LOWER_8_BITS(data)	((data) & 0xFF)
diff --git a/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c b/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
index 7631b17a2944..694463ca0a93 100644
--- a/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
+++ b/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mbox.c
@@ -61,8 +61,8 @@ enum sphw_mbox_tx_status {
 
 #define MBOX_SEGLEN_MASK	SPHW_MSG_HEADER_SET(SPHW_MSG_HEADER_SEG_LEN_MASK, SEG_LEN)
 
-#define MBOX_MSG_POLLING_TIMEOUT	300000
-#define SPHW_MBOX_COMP_TIME		300000U
+#define MBOX_MSG_POLLING_TIMEOUT	8000
+#define SPHW_MBOX_COMP_TIME		25000U
 
 #define MBOX_MAX_BUF_SZ		2048U
 #define MBOX_HEADER_SZ		8
diff --git a/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c b/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c
index 01e88535a0ab..a66f40635963 100644
--- a/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c
+++ b/drivers/net/ethernet/ramaxel/spnic/hw/sphw_mgmt.c
@@ -50,7 +50,7 @@
 
 #define MSG_NO_RESP		0xFFFF
 
-#define MGMT_MSG_TIMEOUT	300000 /* millisecond */
+#define MGMT_MSG_TIMEOUT	20000 /* millisecond */
 
 #define SYNC_MSG_ID(pf_to_mgmt)	((pf_to_mgmt)->sync_msg_id)
From: He Ying heying24@huawei.com
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4U2S3
CVE: NA
-------------------------------------------------
The following KASAN warning was reported in our kernel.
BUG: KASAN: stack-out-of-bounds in get_wchan+0x188/0x250
Read of size 4 at addr d216f958 by task ps/14437

CPU: 3 PID: 14437 Comm: ps Tainted: G O 5.10.0 #1
Call Trace:
[daa63858] [c0654348] dump_stack+0x9c/0xe4 (unreliable)
[daa63888] [c035cf0c] print_address_description.constprop.3+0x8c/0x570
[daa63908] [c035d6bc] kasan_report+0x1ac/0x218
[daa63948] [c00496e8] get_wchan+0x188/0x250
[daa63978] [c0461ec8] do_task_stat+0xce8/0xe60
[daa63b98] [c0455ac8] proc_single_show+0x98/0x170
[daa63bc8] [c03cab8c] seq_read_iter+0x1ec/0x900
[daa63c38] [c03cb47c] seq_read+0x1dc/0x290
[daa63d68] [c037fc94] vfs_read+0x164/0x510
[daa63ea8] [c03808e4] ksys_read+0x144/0x1d0
[daa63f38] [c005b1dc] ret_from_syscall+0x0/0x38
--- interrupt: c00 at 0x8fa8f4
LR = 0x8fa8cc

The buggy address belongs to the page:
page:98ebcdd2 refcount:0 mapcount:0 mapping:00000000 index:0x2 pfn:0x1216f
flags: 0x0()
raw: 00000000 00000000 01010122 00000000 00000002 00000000 ffffffff 00000000
raw: 00000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 d216f800: 00 00 00 00 00 f1 f1 f1 f1 00 00 00 00 00 00 00
 d216f880: f2 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>d216f900: 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00
                                            ^
 d216f980: f2 f2 f2 f2 f2 f2 f2 00 00 00 00 00 00 00 00 00
 d216fa00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
After looking into this issue, I found that the buggy address belongs to the task's stack region, so KASAN's complaint appears to be spurious. Looking at the __get_wchan() code on x86, the same issue was resolved there by commit f7d27c35ddff ("x86/mm, kasan: Silence KASAN warnings in get_wchan()"). The same solution can be applied to the powerpc architecture too.
As Andrey Ryabinin said, get_wchan() is racy by design, it may access volatile stack of running task, thus it may access redzone in a stack frame and cause KASAN to warn about this.
Use READ_ONCE_NOCHECK() to silence these warnings.
Reported-by: Wanming Hu huwanming@huaweil.com
Signed-off-by: He Ying heying24@huawei.com
Signed-off-by: Chen Jingwen chenjingwen6@huawei.com
Reviewed-by: Liao Chang liaochang1@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 arch/powerpc/kernel/process.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 3064694afea1..cfb8fd76afb4 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2108,12 +2108,12 @@ static unsigned long __get_wchan(struct task_struct *p)
 		return 0;
 
 	do {
-		sp = *(unsigned long *)sp;
+		sp = READ_ONCE_NOCHECK(*(unsigned long *)sp);
 		if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD) ||
 		    p->state == TASK_RUNNING)
 			return 0;
 		if (count > 0) {
-			ip = ((unsigned long *)sp)[STACK_FRAME_LR_SAVE];
+			ip = READ_ONCE_NOCHECK(((unsigned long *)sp)[STACK_FRAME_LR_SAVE]);
 			if (!in_sched_functions(ip))
 				return ip;
 		}
From: Peter Zijlstra peterz@infradead.org
mainline inclusion
from mainline-v5.11-rc1
commit 7a9f50a05843fee8366bd3a65addbebaa7cf7f07
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4U05V
CVE: NA
-------------------------------------------------------------------------
Get rid of the __call_single_node union and clean up the API a little to avoid external code relying on the structure layout as much.
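For reference, the shared node type this cleanup converges on is struct __call_single_node from include/linux/smp_types.h; its v5.10-era layout is roughly:

struct __call_single_node {
	struct llist_node	llist;
	union {
		unsigned int	u_flags;	/* for static initialization */
		atomic_t	a_flags;	/* for atomic access */
	};
};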
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Reviewed-by: Frederic Weisbecker frederic@kernel.org
Signed-off-by: Zhen Lei thunder.leizhen@huawei.com
Reviewed-by: Cheng Jian cj.chengjian@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 drivers/gpu/drm/i915/i915_request.c |  4 ++--
 include/linux/irq_work.h            | 33 ++++++++++++++++++-----------
 include/linux/irqflags.h            |  4 ++--
 kernel/bpf/stackmap.c               |  2 +-
 kernel/irq_work.c                   | 18 ++++++++--------
 kernel/printk/printk.c              |  6 ++----
 kernel/rcu/tree.c                   |  3 +--
 kernel/time/tick-sched.c            |  6 ++----
 kernel/trace/bpf_trace.c            |  2 +-
 9 files changed, 41 insertions(+), 37 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 896389f93029..79d835bcb8b7 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -197,7 +197,7 @@ __notify_execute_cb(struct i915_request *rq, bool (*fn)(struct irq_work *wrk))
 	llist_for_each_entry_safe(cb, cn,
 				  llist_del_all(&rq->execute_cb),
-				  work.llnode)
+				  work.node.llist)
 		fn(&cb->work);
 }
@@ -460,7 +460,7 @@ __await_execution(struct i915_request *rq,
 	 * callback first, then checking the ACTIVE bit, we serialise with
 	 * the completed/retired request.
 	 */
-	if (llist_add(&cb->work.llnode, &signal->execute_cb)) {
+	if (llist_add(&cb->work.node.llist, &signal->execute_cb)) {
 		if (i915_request_is_active(signal) ||
 		    __request_in_flight(signal))
 			__notify_execute_cb_imm(signal);
diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index 30823780c192..ec2a47a81e42 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -14,28 +14,37 @@
  */
 
 struct irq_work {
-	union {
-		struct __call_single_node node;
-		struct {
-			struct llist_node llnode;
-			atomic_t flags;
-		};
-	};
+	struct __call_single_node node;
 	void (*func)(struct irq_work *);
 };
 
+#define __IRQ_WORK_INIT(_func, _flags) (struct irq_work){	\
+	.node = { .u_flags = (_flags), },			\
+	.func = (_func),					\
+}
+
+#define IRQ_WORK_INIT(_func) __IRQ_WORK_INIT(_func, 0)
+#define IRQ_WORK_INIT_LAZY(_func) __IRQ_WORK_INIT(_func, IRQ_WORK_LAZY)
+#define IRQ_WORK_INIT_HARD(_func) __IRQ_WORK_INIT(_func, IRQ_WORK_HARD_IRQ)
+
+#define DEFINE_IRQ_WORK(name, _f)	\
+	struct irq_work name = IRQ_WORK_INIT(_f)
+
 static inline
 void init_irq_work(struct irq_work *work, void (*func)(struct irq_work *))
 {
-	atomic_set(&work->flags, 0);
-	work->func = func;
+	*work = IRQ_WORK_INIT(func);
 }
 
-#define DEFINE_IRQ_WORK(name, _f) struct irq_work name = {	\
-	.flags = ATOMIC_INIT(0),				\
-	.func  = (_f)						\
+static inline bool irq_work_is_pending(struct irq_work *work)
+{
+	return atomic_read(&work->node.a_flags) & IRQ_WORK_PENDING;
 }
 
+static inline bool irq_work_is_busy(struct irq_work *work)
+{
+	return atomic_read(&work->node.a_flags) & IRQ_WORK_BUSY;
+}
 
 bool irq_work_queue(struct irq_work *work);
 bool irq_work_queue_on(struct irq_work *work, int cpu);
diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
index 3ed4e8771b64..fef2d43a7a1d 100644
--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -109,12 +109,12 @@ do {						\
 # define lockdep_irq_work_enter(__work)					\
 	  do {								\
-		  if (!(atomic_read(&__work->flags) & IRQ_WORK_HARD_IRQ))\
+		  if (!(atomic_read(&__work->node.a_flags) & IRQ_WORK_HARD_IRQ))\
 			current->irq_config = 1;			\
 	  } while (0)
 # define lockdep_irq_work_exit(__work)					\
 	  do {								\
-		  if (!(atomic_read(&__work->flags) & IRQ_WORK_HARD_IRQ))\
+		  if (!(atomic_read(&__work->node.a_flags) & IRQ_WORK_HARD_IRQ))\
 			current->irq_config = 0;			\
 	  } while (0)
 
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 4b5b390e22ea..92b38a5da61a 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -301,7 +301,7 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs,
 	if (irqs_disabled()) {
 		if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
 			work = this_cpu_ptr(&up_read_work);
-			if (atomic_read(&work->irq_work.flags) & IRQ_WORK_BUSY) {
+			if (irq_work_is_busy(&work->irq_work)) {
 				/* cannot queue more up_read, fallback */
 				irq_work_busy = true;
 			}
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index eca83965b631..fbff25adb574 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -31,7 +31,7 @@ static bool irq_work_claim(struct irq_work *work)
 {
 	int oflags;
 
-	oflags = atomic_fetch_or(IRQ_WORK_CLAIMED | CSD_TYPE_IRQ_WORK, &work->flags);
+	oflags = atomic_fetch_or(IRQ_WORK_CLAIMED | CSD_TYPE_IRQ_WORK, &work->node.a_flags);
 	/*
 	 * If the work is already pending, no need to raise the IPI.
 	 * The pairing atomic_fetch_andnot() in irq_work_run() makes sure
@@ -53,12 +53,12 @@ void __weak arch_irq_work_raise(void)
 static void __irq_work_queue_local(struct irq_work *work)
 {
 	/* If the work is "lazy", handle it from next tick if any */
-	if (atomic_read(&work->flags) & IRQ_WORK_LAZY) {
-		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
+	if (atomic_read(&work->node.a_flags) & IRQ_WORK_LAZY) {
+		if (llist_add(&work->node.llist, this_cpu_ptr(&lazy_list)) &&
 		    tick_nohz_tick_stopped())
 			arch_irq_work_raise();
 	} else {
-		if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
+		if (llist_add(&work->node.llist, this_cpu_ptr(&raised_list)))
 			arch_irq_work_raise();
 	}
 }
@@ -102,7 +102,7 @@ bool irq_work_queue_on(struct irq_work *work, int cpu)
 	if (cpu != smp_processor_id()) {
 		/* Arch remote IPI send/receive backend aren't NMI safe */
 		WARN_ON_ONCE(in_nmi());
-		__smp_call_single_queue(cpu, &work->llnode);
+		__smp_call_single_queue(cpu, &work->node.llist);
 	} else {
 		__irq_work_queue_local(work);
 	}
@@ -142,7 +142,7 @@ void irq_work_single(void *arg)
 	 * to claim that work don't rely on us to handle their data
 	 * while we are in the middle of the func.
 	 */
-	flags = atomic_fetch_andnot(IRQ_WORK_PENDING, &work->flags);
+	flags = atomic_fetch_andnot(IRQ_WORK_PENDING, &work->node.a_flags);
 
 	lockdep_irq_work_enter(work);
 	work->func(work);
@@ -152,7 +152,7 @@ void irq_work_single(void *arg)
 	 * no-one else claimed it meanwhile.
 	 */
 	flags &= ~IRQ_WORK_PENDING;
-	(void)atomic_cmpxchg(&work->flags, flags, flags & ~IRQ_WORK_BUSY);
+	(void)atomic_cmpxchg(&work->node.a_flags, flags, flags & ~IRQ_WORK_BUSY);
 }
 
 static void irq_work_run_list(struct llist_head *list)
@@ -166,7 +166,7 @@ static void irq_work_run_list(struct llist_head *list)
 		return;
 
 	llnode = llist_del_all(list);
-	llist_for_each_entry_safe(work, tmp, llnode, llnode)
+	llist_for_each_entry_safe(work, tmp, llnode, node.llist)
 		irq_work_single(work);
 }
 
@@ -198,7 +198,7 @@ void irq_work_sync(struct irq_work *work)
 {
 	lockdep_assert_irqs_enabled();
 
-	while (atomic_read(&work->flags) & IRQ_WORK_BUSY)
+	while (irq_work_is_busy(work))
 		cpu_relax();
 }
 EXPORT_SYMBOL_GPL(irq_work_sync);
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index e237ac1a6533..43f8f2573eac 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -3078,10 +3078,8 @@ static void wake_up_klogd_work_func(struct irq_work *irq_work)
 		wake_up_interruptible(&log_wait);
 }
 
-static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
-	.func = wake_up_klogd_work_func,
-	.flags = ATOMIC_INIT(IRQ_WORK_LAZY),
-};
+static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) =
+	IRQ_WORK_INIT_LAZY(wake_up_klogd_work_func);
 
 void wake_up_klogd(void)
 {
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b74e7ace4376..018a77b64a2e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1319,8 +1319,6 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
 	if (IS_ENABLED(CONFIG_IRQ_WORK) && !rdp->rcu_iw_pending &&
 	    rdp->rcu_iw_gp_seq != rnp->gp_seq &&
 	    (rnp->ffmask & rdp->grpmask)) {
-		init_irq_work(&rdp->rcu_iw, rcu_iw_handler);
-		atomic_set(&rdp->rcu_iw.flags, IRQ_WORK_HARD_IRQ);
 		rdp->rcu_iw_pending = true;
 		rdp->rcu_iw_gp_seq = rnp->gp_seq;
 		irq_work_queue_on(&rdp->rcu_iw, rdp->cpu);
@@ -4008,6 +4006,7 @@ int rcutree_prepare_cpu(unsigned int cpu)
 	rdp->cpu_no_qs.b.norm = true;
 	rdp->core_needs_qs = false;
 	rdp->rcu_iw_pending = false;
+	rdp->rcu_iw = IRQ_WORK_INIT_HARD(rcu_iw_handler);
 	rdp->rcu_iw_gp_seq = rdp->gp_seq - 1;
 	trace_rcu_grace_period(rcu_state.name, rdp->gp_seq, TPS("cpuonl"));
 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 49423dffdfc3..2d7899700b7a 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -271,10 +271,8 @@ static void nohz_full_kick_func(struct irq_work *work)
 	/* Empty, the tick restart happens on tick_nohz_irq_exit() */
 }
 
-static DEFINE_PER_CPU(struct irq_work, nohz_full_kick_work) = {
-	.func = nohz_full_kick_func,
-	.flags = ATOMIC_INIT(IRQ_WORK_HARD_IRQ),
-};
+static DEFINE_PER_CPU(struct irq_work, nohz_full_kick_work) =
+	IRQ_WORK_INIT_HARD(nohz_full_kick_func);
 
 /*
  * Kick this CPU if it's full dynticks in order to force it to
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index ba644760f507..1228e56853e0 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1081,7 +1081,7 @@ static int bpf_send_signal_common(u32 sig, enum pid_type type)
 		return -EINVAL;
 
 	work = this_cpu_ptr(&send_signal_work);
-	if (atomic_read(&work->irq_work.flags) & IRQ_WORK_BUSY)
+	if (irq_work_is_busy(&work->irq_work))
 		return -EBUSY;
 
 	/* Add the current task, which is the target of sending signal,
From: Frederic Weisbecker frederic@kernel.org
mainline inclusion
from mainline-v5.12-rc1
commit f8bb5cae9616224a39cbb399de382d36ac41df10
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4U05V
CVE: NA
-------------------------------------------------------------------------
Entering RCU idle mode may cause a deferred wake up of an RCU NOCB_GP kthread (rcuog) to be serviced.
Unfortunately the call to rcu_user_enter() is already past the last rescheduling opportunity before we resume to userspace or to guest mode. We may escape there with the woken task ignored.
The ultimate resort to fix every callsite is to trigger a self-IPI (nohz_full depends on the arch to implement arch_irq_work_raise()) that will trigger a reschedule on IRQ tail or guest exit.

Eventually every site that wants a saner treatment will need to carefully place a call to rcu_nocb_flush_deferred_wakeup() before the last explicit need_resched() check upon resume.
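For reference, that helper is a thin wrapper around the deferred-wakeup path; a sketch of its mainline shape (details may differ in this backport):

void rcu_nocb_flush_deferred_wakeup(void)
{
	do_nocb_deferred_wakeup(this_cpu_ptr(&rcu_data));
}
EXPORT_SYMBOL_GPL(rcu_nocb_flush_deferred_wakeup);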
Fixes: 96d3fd0d315a ("rcu: Break call_rcu() deadlock involving scheduler and perf")
Reported-by: Paul E. McKenney paulmck@kernel.org
Signed-off-by: Frederic Weisbecker frederic@kernel.org
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Signed-off-by: Ingo Molnar mingo@kernel.org
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20210131230548.32970-4-frederic@kernel.org
Signed-off-by: Zhen Lei thunder.leizhen@huawei.com
Reviewed-by: Cheng Jian cj.chengjian@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 kernel/rcu/tree.c        | 21 ++++++++++++++++++++-
 kernel/rcu/tree.h        |  2 +-
 kernel/rcu/tree_plugin.h | 25 ++++++++++++++++---------
 3 files changed, 37 insertions(+), 11 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 018a77b64a2e..25721617cdea 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -669,6 +669,18 @@ void rcu_idle_enter(void)
 EXPORT_SYMBOL_GPL(rcu_idle_enter);
 
 #ifdef CONFIG_NO_HZ_FULL
+
+/*
+ * An empty function that will trigger a reschedule on
+ * IRQ tail once IRQs get re-enabled on userspace resume.
+ */
+static void late_wakeup_func(struct irq_work *work)
+{
+}
+
+static DEFINE_PER_CPU(struct irq_work, late_wakeup_work) =
+	IRQ_WORK_INIT(late_wakeup_func);
+
 /**
  * rcu_user_enter - inform RCU that we are resuming userspace.
  *
@@ -686,12 +698,19 @@ noinstr void rcu_user_enter(void)
 
 	lockdep_assert_irqs_disabled();
 
+	/*
+	 * We may be past the last rescheduling opportunity in the entry code.
+	 * Trigger a self IPI that will fire and reschedule once we resume to
+	 * user/guest mode.
+	 */
 	instrumentation_begin();
-	do_nocb_deferred_wakeup(rdp);
+	if (do_nocb_deferred_wakeup(rdp) && need_resched())
+		irq_work_queue(this_cpu_ptr(&late_wakeup_work));
 	instrumentation_end();
 
 	rcu_eqs_enter(true);
 }
+
 #endif /* CONFIG_NO_HZ_FULL */
 
 /**
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index e4f66b8f7c47..0ec2b1f66b13 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -431,7 +431,7 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
 static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_empty,
 				 unsigned long flags);
 static int rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp);
-static void do_nocb_deferred_wakeup(struct rcu_data *rdp);
+static bool do_nocb_deferred_wakeup(struct rcu_data *rdp);
 static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp);
 static void rcu_spawn_cpu_nocb_kthread(int cpu);
 static void __init rcu_spawn_nocb_kthreads(void);
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 6ed153f226b3..8de3b2679be6 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1632,8 +1632,8 @@ bool rcu_is_nocb_cpu(int cpu)
  * Kick the GP kthread for this NOCB group.  Caller holds ->nocb_lock
  * and this function releases it.
  */
-static void wake_nocb_gp(struct rcu_data *rdp, bool force,
-			 unsigned long flags)
+static bool wake_nocb_gp(struct rcu_data *rdp, bool force,
+			 unsigned long flags)
 	__releases(rdp->nocb_lock)
 {
 	bool needwake = false;
@@ -1644,7 +1644,7 @@ static void wake_nocb_gp(struct rcu_data *rdp, bool force,
 		trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
 				    TPS("AlreadyAwake"));
 		rcu_nocb_unlock_irqrestore(rdp, flags);
-		return;
+		return false;
 	}
 	del_timer(&rdp->nocb_timer);
 	rcu_nocb_unlock_irqrestore(rdp, flags);
@@ -1657,6 +1657,8 @@ static void wake_nocb_gp(struct rcu_data *rdp, bool force,
 	raw_spin_unlock_irqrestore(&rdp_gp->nocb_gp_lock, flags);
 	if (needwake)
 		wake_up_process(rdp_gp->nocb_gp_kthread);
+
+	return needwake;
 }
 
 /*
@@ -2153,20 +2155,23 @@ static int rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp)
 }
 
 /* Do a deferred wakeup of rcu_nocb_kthread(). */
-static void do_nocb_deferred_wakeup_common(struct rcu_data *rdp)
+static bool do_nocb_deferred_wakeup_common(struct rcu_data *rdp)
 {
 	unsigned long flags;
 	int ndw;
+	int ret;
 
 	rcu_nocb_lock_irqsave(rdp, flags);
 	if (!rcu_nocb_need_deferred_wakeup(rdp)) {
 		rcu_nocb_unlock_irqrestore(rdp, flags);
-		return;
+		return false;
 	}
 	ndw = READ_ONCE(rdp->nocb_defer_wakeup);
 	WRITE_ONCE(rdp->nocb_defer_wakeup, RCU_NOCB_WAKE_NOT);
-	wake_nocb_gp(rdp, ndw == RCU_NOCB_WAKE_FORCE, flags);
+	ret = wake_nocb_gp(rdp, ndw == RCU_NOCB_WAKE_FORCE, flags);
 	trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("DeferredWake"));
+
+	return ret;
 }
 
 /* Do a deferred wakeup of rcu_nocb_kthread() from a timer handler. */
@@ -2182,10 +2187,11 @@ static void do_nocb_deferred_wakeup_timer(struct timer_list *t)
  * This means we do an inexact common-case check.  Note that if
  * we miss, ->nocb_timer will eventually clean things up.
  */
-static void do_nocb_deferred_wakeup(struct rcu_data *rdp)
+static bool do_nocb_deferred_wakeup(struct rcu_data *rdp)
 {
 	if (rcu_nocb_need_deferred_wakeup(rdp))
-		do_nocb_deferred_wakeup_common(rdp);
+		return do_nocb_deferred_wakeup_common(rdp);
+	return false;
 }
 
 void rcu_nocb_flush_deferred_wakeup(void)
@@ -2524,8 +2530,9 @@ static int rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp)
 	return false;
 }
 
-static void do_nocb_deferred_wakeup(struct rcu_data *rdp)
+static bool do_nocb_deferred_wakeup(struct rcu_data *rdp)
 {
+	return false;
 }
 
 static void rcu_spawn_cpu_nocb_kthread(int cpu)
From: Frederic Weisbecker frederic@kernel.org
mainline inclusion
from mainline-v5.12-rc1
commit 47b8ff194c1fd73d58dc339b597d466fe48c8958
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4U05V
CVE: NA
-------------------------------------------------------------------------
Following the idle loop model, cleanly check for a pending rcuog wakeup before the last rescheduling point when resuming to user mode. This way we can avoid doing it from rcu_user_enter() with the last-resort self-IPI hack that enforces rescheduling.
Signed-off-by: Frederic Weisbecker frederic@kernel.org
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Signed-off-by: Ingo Molnar mingo@kernel.org
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20210131230548.32970-5-frederic@kernel.org
Conflicts:
	kernel/entry/common.c
Signed-off-by: Zhen Lei thunder.leizhen@huawei.com
Reviewed-by: Cheng Jian cj.chengjian@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 kernel/entry/common.c |  7 +++++++
 kernel/rcu/tree.c     | 12 +++++++-----
 2 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index df3c534dc138..18a29ca01bfe 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -178,6 +178,10 @@ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
 		 * enabled above.
 		 */
 		local_irq_disable_exit_to_user();
+
+		/* Check if any of the above work has queued a deferred wakeup */
+		rcu_nocb_flush_deferred_wakeup();
+
 		ti_work = READ_ONCE(current_thread_info()->flags);
 	}
 
@@ -191,6 +195,9 @@ static void exit_to_user_mode_prepare(struct pt_regs *regs)
 
 	lockdep_assert_irqs_disabled();
 
+	/* Flush pending rcuog wakeup before the last need_resched() check */
+	rcu_nocb_flush_deferred_wakeup();
+
 	if (unlikely((ti_work & EXIT_TO_USER_MODE_WORK) ||
 		     sched_qos_cpu_overload()))
 		ti_work = exit_to_user_mode_loop(regs, ti_work);
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 25721617cdea..1f6c78aa7bfd 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -699,13 +699,15 @@ noinstr void rcu_user_enter(void)
 	lockdep_assert_irqs_disabled();
 
 	/*
-	 * We may be past the last rescheduling opportunity in the entry code.
-	 * Trigger a self IPI that will fire and reschedule once we resume to
-	 * user/guest mode.
+	 * Other than generic entry implementation, we may be past the last
+	 * rescheduling opportunity in the entry code. Trigger a self IPI
+	 * that will fire and reschedule once we resume in user/guest mode.
 	 */
 	instrumentation_begin();
-	if (do_nocb_deferred_wakeup(rdp) && need_resched())
-		irq_work_queue(this_cpu_ptr(&late_wakeup_work));
+	if (!IS_ENABLED(CONFIG_GENERIC_ENTRY) || (current->flags & PF_VCPU)) {
+		if (do_nocb_deferred_wakeup(rdp) && need_resched())
+			irq_work_queue(this_cpu_ptr(&late_wakeup_work));
+	}
 	instrumentation_end();
 
 	rcu_eqs_enter(true);
From: "Paul E. McKenney" paulmck@kernel.org
mainline inclusion
from mainline-v5.14-rc1
commit cf868c2af244417ed276ba7f716b980841a71340
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4U05V
CVE: NA
-------------------------------------------------------------------------
Heavy networking load can cause a CPU to execute continuously and indefinitely within ksoftirqd, in which case there will be no voluntary task switches and thus no RCU-tasks quiescent states. This commit therefore causes the existing rcu_softirq_qs() to provide an RCU-tasks quiescent state.
This of course means that __do_softirq() and its callers cannot be invoked from within a tracing trampoline.
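For context, rcu_tasks_qs() reports the quiescent state by clearing the task's holdout flag; simplified from the v5.10-era include/linux/rcupdate.h (the full macro also forwards to the RCU-tasks-trace flavor, so treat this as a sketch):

# define rcu_tasks_classic_qs(t, preempt)				\
	do {								\
		if (!(preempt) && READ_ONCE((t)->rcu_tasks_holdout))	\
			WRITE_ONCE((t)->rcu_tasks_holdout, false);	\
	} while (0)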
Reported-by: Toke Høiland-Jørgensen toke@redhat.com
Tested-by: Toke Høiland-Jørgensen toke@redhat.com
Reviewed-by: Masami Hiramatsu mhiramat@kernel.org
Signed-off-by: Paul E. McKenney paulmck@kernel.org
Cc: Steven Rostedt rostedt@goodmis.org
Cc: Masami Hiramatsu mhiramat@kernel.org
Signed-off-by: Zhen Lei thunder.leizhen@huawei.com
Reviewed-by: Cheng Jian cj.chengjian@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 kernel/rcu/tree.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 1f6c78aa7bfd..b9c45b2d7690 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -236,6 +236,7 @@ void rcu_softirq_qs(void)
 {
 	rcu_qs();
 	rcu_preempt_deferred_qs(current);
+	rcu_tasks_qs(current, false);
 }
 
 /*
From: "Paul E. McKenney" paulmck@kernel.org
mainline inclusion
from mainline-v5.12-rc1
commit bfb3aa735f82c8d98b32a669934ee7d6b346264d
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4UAEG
CVE: NA
-------------------------------------------------------------------------
An outgoing CPU is marked offline in a stop-machine handler and most of that CPU's services stop at that point, including IRQ work queues. However, that CPU must take another pass through the scheduler and through a number of CPU-hotplug notifiers, many of which contain RCU readers. In the past, these readers were not a problem because the outgoing CPU has interrupts disabled, so that rcu_read_unlock_special() would not be invoked, and thus RCU would never attempt to queue IRQ work on the outgoing CPU.
This changed with the advent of the CONFIG_RCU_STRICT_GRACE_PERIOD Kconfig option, in which rcu_read_unlock_special() is invoked upon exit from almost all RCU read-side critical sections. Worse yet, because interrupts are disabled, rcu_read_unlock_special() cannot immediately report a quiescent state and will therefore attempt to defer this reporting, for example, by queueing IRQ work. Which fails with a splat because the CPU is already marked as being offline.
But it turns out that there is no need to report this quiescent state because rcu_report_dead() will do this job shortly after the outgoing CPU makes its final dive into the idle loop. This commit therefore makes rcu_read_unlock_special() refrain from queuing IRQ work onto outgoing CPUs.
Fixes: 44bad5b3cca2 ("rcu: Do full report for .need_qs for strict GPs")
Signed-off-by: Paul E. McKenney paulmck@kernel.org
Cc: Jann Horn jannh@google.com
Signed-off-by: Zhen Lei thunder.leizhen@huawei.com
Reviewed-by: Cheng Jian cj.chengjian@huawei.com
Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com
---
 kernel/rcu/tree_plugin.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 8de3b2679be6..a222faffbafa 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -628,7 +628,7 @@ static void rcu_read_unlock_special(struct task_struct *t)
 			set_tsk_need_resched(current);
 			set_preempt_need_resched();
 			if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled &&
-			    !rdp->defer_qs_iw_pending && exp) {
+			    !rdp->defer_qs_iw_pending && exp && cpu_online(rdp->cpu)) {
 				// Get scheduler to re-evaluate and call hooks.
 				// If !IRQ_WORK, FQS scan will eventually IPI.
 				init_irq_work(&rdp->defer_qs_iw,
From: Jan Kara jack@suse.cz
mainline inclusion
from mainline-v5.17-rc2
commit 7fc3b7c2981bbd1047916ade327beccb90994eee
category: bugfix
bugzilla: 186269 https://gitee.com/openeuler/kernel/issues/I4UATL?from=project-issue
CVE: CVE-2022-0617
-----------------------------------------------
udf_expand_file_adinicb() calls directly ->writepage to write data expanded into a page. This however misses to setup inode for writeback properly and so we can crash on inode->i_wb dereference when submitting page for IO like:
BUG: kernel NULL pointer dereference, address: 0000000000000158
#PF: supervisor read access in kernel mode
...
 <TASK>
 __folio_start_writeback+0x2ac/0x350
 __block_write_full_page+0x37d/0x490
 udf_expand_file_adinicb+0x255/0x400 [udf]
 udf_file_write_iter+0xbe/0x1b0 [udf]
 new_sync_write+0x125/0x1c0
 vfs_write+0x28e/0x400
Fix the problem by marking the page dirty and going through the standard writeback path to write the page. Strictly speaking we would not even have to write the page but we want to catch e.g. ENOSPC errors early.
Reported-by: butt3rflyh4ck butterflyhuangxx@gmail.com CC: stable@vger.kernel.org Fixes: 52ebea749aae ("writeback: make backing_dev_info host cgroup-specific bdi_writebacks") Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Zhang Wensheng zhangwensheng5@huawei.com Reviewed-by: Zhang Yi yi.zhang@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com --- fs/udf/inode.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/fs/udf/inode.c b/fs/udf/inode.c index 0dd2f93ac048..e89403546f38 100644 --- a/fs/udf/inode.c +++ b/fs/udf/inode.c @@ -257,10 +257,6 @@ int udf_expand_file_adinicb(struct inode *inode) char *kaddr; struct udf_inode_info *iinfo = UDF_I(inode); int err; - struct writeback_control udf_wbc = { - .sync_mode = WB_SYNC_NONE, - .nr_to_write = 1, - };
WARN_ON_ONCE(!inode_is_locked(inode)); if (!iinfo->i_lenAlloc) { @@ -304,8 +300,10 @@ int udf_expand_file_adinicb(struct inode *inode) iinfo->i_alloc_type = ICBTAG_FLAG_AD_LONG; /* from now on we have normal address_space methods */ inode->i_data.a_ops = &udf_aops; + set_page_dirty(page); + unlock_page(page); up_write(&iinfo->i_data_sem); - err = inode->i_data.a_ops->writepage(page, &udf_wbc); + err = filemap_fdatawrite(inode->i_mapping); if (err) { /* Restore everything back so that we don't lose data... */ lock_page(page);
From: Jan Kara jack@suse.cz
mainline inclusion from mainline-v5.17-rc2 commit ea8569194b43f0f01f0a84c689388542c7254a1f category: bugfix bugzilla: 186269 https://gitee.com/openeuler/kernel/issues/I4UATL?from=project-issue CVE: CVE-2022-0617
--------------------------------
When we fail to expand an inode from inline format to the normal format, we restore the inode to its original inline formatting but forget to set i_lenAlloc back. The resulting mismatch between i_lenAlloc and i_size then causes further problems, such as warnings and lost data down the line.
Reported-by: butt3rflyh4ck butterflyhuangxx@gmail.com CC: stable@vger.kernel.org Fixes: 7e49b6f2480c ("udf: Convert UDF to new truncate calling sequence") Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Zhang Wensheng zhangwensheng5@huawei.com Reviewed-by: Zhang Yi yi.zhang@huawei.com Reviewed-by: Xiu Jianfeng xiujianfeng@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com --- fs/udf/inode.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/fs/udf/inode.c b/fs/udf/inode.c index e89403546f38..d32b836f6ca7 100644 --- a/fs/udf/inode.c +++ b/fs/udf/inode.c @@ -314,6 +314,7 @@ int udf_expand_file_adinicb(struct inode *inode) unlock_page(page); iinfo->i_alloc_type = ICBTAG_FLAG_AD_IN_ICB; inode->i_data.a_ops = &udf_adinicb_aops; + iinfo->i_lenAlloc = inode->i_size; up_write(&iinfo->i_data_sem); } put_page(page);
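For inline (ICB) files the data lives in the allocation-descriptor area, so the in-core invariant on the rollback path is i_lenAlloc == i_size; the fixed error path, as a sketch:

	/*
	 * Rollback to in-ICB format after a failed expansion: restore the
	 * address-space ops *and* the allocation length, or later code sees
	 * an i_lenAlloc/i_size mismatch.
	 */
	iinfo->i_alloc_type = ICBTAG_FLAG_AD_IN_ICB;
	inode->i_data.a_ops = &udf_adinicb_aops;
	iinfo->i_lenAlloc = inode->i_size;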
From: Chao Liu liuchao173@huawei.com
euler inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4UP2Q CVE: NA
--------------------------------
Enable CONFIG_NTB_INTEL in openeuler_defconfig for x86 to support Intel NTB on capable Xeon and Atom hardware.
Signed-off-by: Chao Liu liuchao173@huawei.com Reviewed-by: Kai Liu kai.liu@suse.com Reviewed-by: Liu Sirui liusirui@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com --- arch/x86/configs/openeuler_defconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig index b3657bc5fccb..cfb6ad605eef 100644 --- a/arch/x86/configs/openeuler_defconfig +++ b/arch/x86/configs/openeuler_defconfig @@ -7122,7 +7122,7 @@ CONFIG_NTB=m # CONFIG_NTB_MSI is not set # CONFIG_NTB_AMD is not set # CONFIG_NTB_IDT is not set -# CONFIG_NTB_INTEL is not set +CONFIG_NTB_INTEL=m # CONFIG_NTB_SWITCHTEC is not set # CONFIG_NTB_PINGPONG is not set # CONFIG_NTB_TOOL is not set
From: Shengwei Luo luoshengwei@huawei.com
kunpeng inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4OUGN?from=project-issue CVE: NA
--------------------------------
The original arm_event trace code traces out only the ARM processor error information data, which is not enough for users to take appropriate action.
According to the UEFI 2.9 specification, chapter N2.4.4, the ARM processor error section includes several ARM processor error information structures, several ARM processor context information structures, and several vendor-specific error information structures. In addition, the record carries the error severity and the CPU logical index for the event. Report all of this information to userspace via the perf interface, so that users can, for example, isolate a CPU core based on the error severity and the other fields (see the layout sketch after the diff below).
Original-Author: Jason Tian jason@os.amperecomputing.com Signed-off-by: Shengwei Luo luoshengwei@huawei.com Reviewed-by: Lv Ying lvying6@huawei.com Reviewed-by: Tan Xiaofei tanxiaofei@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com --- drivers/acpi/apei/ghes.c | 3 +-- drivers/ras/ras.c | 46 ++++++++++++++++++++++++++++++++++++-- include/linux/ras.h | 15 +++++++++++-- include/ras/ras_event.h | 48 +++++++++++++++++++++++++++++++++++----- 4 files changed, 101 insertions(+), 11 deletions(-)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c index fc816c902394..9ac1e45dbba0 100644 --- a/drivers/acpi/apei/ghes.c +++ b/drivers/acpi/apei/ghes.c @@ -493,9 +493,8 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int s int sec_sev, i; char *p;
- log_arm_hw_error(err); - sec_sev = ghes_severity(gdata->error_severity); + log_arm_hw_error(err, sec_sev); if (sev != GHES_SEV_RECOVERABLE || sec_sev != GHES_SEV_RECOVERABLE) return false;
diff --git a/drivers/ras/ras.c b/drivers/ras/ras.c index 95540ea8dd9d..2a7f424d59b9 100644 --- a/drivers/ras/ras.c +++ b/drivers/ras/ras.c @@ -21,9 +21,51 @@ void log_non_standard_event(const guid_t *sec_type, const guid_t *fru_id, trace_non_standard_event(sec_type, fru_id, fru_text, sev, err, len); }
-void log_arm_hw_error(struct cper_sec_proc_arm *err) +void log_arm_hw_error(struct cper_sec_proc_arm *err, const u8 sev) { - trace_arm_event(err); + u32 pei_len; + u32 ctx_len = 0; + s32 vsei_len; + u8 *pei_err; + u8 *ctx_err; + u8 *ven_err_data; + struct cper_arm_err_info *err_info; + struct cper_arm_ctx_info *ctx_info; + int n, sz; + int cpu; + + pei_len = sizeof(struct cper_arm_err_info) * err->err_info_num; + pei_err = (u8 *)err + sizeof(struct cper_sec_proc_arm); + + err_info = (struct cper_arm_err_info *)(err + 1); + ctx_info = (struct cper_arm_ctx_info *)(err_info + err->err_info_num); + ctx_err = (u8 *)ctx_info; + for (n = 0; n < err->context_info_num; n++) { + sz = sizeof(struct cper_arm_ctx_info) + ctx_info->size; + ctx_info = (struct cper_arm_ctx_info *)((long)ctx_info + sz); + ctx_len += sz; + } + + vsei_len = err->section_length - (sizeof(struct cper_sec_proc_arm) + + pei_len + ctx_len); + if (vsei_len < 0) { + pr_warn(FW_BUG + "section length: %d\n", err->section_length); + pr_warn(FW_BUG + "section length is too small\n"); + pr_warn(FW_BUG + "firmware-generated error record is incorrect\n"); + vsei_len = 0; + } + ven_err_data = (u8 *)ctx_info; + + cpu = GET_LOGICAL_INDEX(err->mpidr); + /* when return value is invalid, set cpu index to -1 */ + if (cpu < 0) + cpu = -1; + + trace_arm_event(err, pei_err, pei_len, ctx_err, ctx_len, + ven_err_data, (u32)vsei_len, sev, cpu); }
static int __init ras_init(void) diff --git a/include/linux/ras.h b/include/linux/ras.h index 1f4048bf2674..4529775374d0 100644 --- a/include/linux/ras.h +++ b/include/linux/ras.h @@ -24,7 +24,7 @@ int __init parse_cec_param(char *str); void log_non_standard_event(const guid_t *sec_type, const guid_t *fru_id, const char *fru_text, const u8 sev, const u8 *err, const u32 len); -void log_arm_hw_error(struct cper_sec_proc_arm *err); +void log_arm_hw_error(struct cper_sec_proc_arm *err, const u8 sev); #else static inline void log_non_standard_event(const guid_t *sec_type, @@ -32,7 +32,18 @@ log_non_standard_event(const guid_t *sec_type, const u8 sev, const u8 *err, const u32 len) { return; } static inline void -log_arm_hw_error(struct cper_sec_proc_arm *err) { return; } +log_arm_hw_error(struct cper_sec_proc_arm *err, const u8 sev) { return; } #endif
+#if defined(CONFIG_ARM) || defined(CONFIG_ARM64) +#include <asm/smp_plat.h> +/* + * Include ARM specific SMP header which provides a function mapping mpidr to + * cpu logical index. + */ +#define GET_LOGICAL_INDEX(mpidr) get_logical_index(mpidr & MPIDR_HWID_BITMASK) +#else +#define GET_LOGICAL_INDEX(mpidr) -EINVAL +#endif /* CONFIG_ARM || CONFIG_ARM64 */ + #endif /* __RAS_H__ */ diff --git a/include/ras/ras_event.h b/include/ras/ras_event.h index 0bdbc0d17d2f..d8890dcc0e7c 100644 --- a/include/ras/ras_event.h +++ b/include/ras/ras_event.h @@ -168,11 +168,24 @@ TRACE_EVENT(mc_event, * This event is generated when hardware detects an ARM processor error * has occurred. UEFI 2.6 spec section N.2.4.4. */ +#define APEIL "ARM Processor Err Info data len" +#define APEID "ARM Processor Err Info raw data" +#define APECIL "ARM Processor Err Context Info data len" +#define APECID "ARM Processor Err Context Info raw data" +#define VSEIL "Vendor Specific Err Info data len" +#define VSEID "Vendor Specific Err Info raw data" TRACE_EVENT(arm_event,
- TP_PROTO(const struct cper_sec_proc_arm *proc), + TP_PROTO(const struct cper_sec_proc_arm *proc, const u8 *pei_err, + const u32 pei_len, + const u8 *ctx_err, + const u32 ctx_len, + const u8 *oem, + const u32 oem_len, + u8 sev, + int cpu),
- TP_ARGS(proc), + TP_ARGS(proc, pei_err, pei_len, ctx_err, ctx_len, oem, oem_len, sev, cpu),
TP_STRUCT__entry( __field(u64, mpidr) @@ -180,6 +193,14 @@ TRACE_EVENT(arm_event, __field(u32, running_state) __field(u32, psci_state) __field(u8, affinity) + __field(u32, pei_len) + __dynamic_array(u8, buf, pei_len) + __field(u32, ctx_len) + __dynamic_array(u8, buf1, ctx_len) + __field(u32, oem_len) + __dynamic_array(u8, buf2, oem_len) + __field(u8, sev) + __field(int, cpu) ),
TP_fast_assign( @@ -199,12 +220,29 @@ TRACE_EVENT(arm_event, __entry->running_state = ~0; __entry->psci_state = ~0; } + __entry->pei_len = pei_len; + memcpy(__get_dynamic_array(buf), pei_err, pei_len); + __entry->ctx_len = ctx_len; + memcpy(__get_dynamic_array(buf1), ctx_err, ctx_len); + __entry->oem_len = oem_len; + memcpy(__get_dynamic_array(buf2), oem, oem_len); + __entry->sev = sev; + __entry->cpu = cpu; ),
- TP_printk("affinity level: %d; MPIDR: %016llx; MIDR: %016llx; " - "running state: %d; PSCI state: %d", + TP_printk("cpu: %d; error: %d; affinity level: %d; MPIDR: %016llx; MIDR: %016llx; " + "running state: %d; PSCI state: %d; " + "%s: %d; %s: %s; %s: %d; %s: %s; %s: %d; %s: %s", + __entry->cpu, + __entry->sev, __entry->affinity, __entry->mpidr, __entry->midr, - __entry->running_state, __entry->psci_state) + __entry->running_state, __entry->psci_state, + APEIL, __entry->pei_len, APEID, + __print_hex(__get_dynamic_array(buf), __entry->pei_len), + APECIL, __entry->ctx_len, APECID, + __print_hex(__get_dynamic_array(buf1), __entry->ctx_len), + VSEIL, __entry->oem_len, VSEID, + __print_hex(__get_dynamic_array(buf2), __entry->oem_len)) );
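For orientation, the section layout the new log_arm_hw_error() walks (per UEFI 2.9 N2.4.4, using the struct names from include/linux/cper.h):

	/*
	 * +------------------------------+  struct cper_sec_proc_arm (fixed
	 * |  section header              |  header: err_info_num,
	 * +------------------------------+  context_info_num, section_length,
	 * |  err_info_num fixed-size     |  mpidr, ...)
	 * |  struct cper_arm_err_info    |  pei_len = num * sizeof(entry)
	 * +------------------------------+
	 * |  context_info_num entries,   |  struct cper_arm_ctx_info header
	 * |  each variable-sized         |  followed by ctx_info->size bytes
	 * +------------------------------+
	 * |  vendor-specific data        |  vsei_len = section_length -
	 * |  (whatever remains)          |  (sizeof(header) + pei_len + ctx_len)
	 * +------------------------------+
	 */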
/*
From: Qi Liu liuqi115@huawei.com
mainline inclusion from mainline-v5.11-rc1 commit e72550928ff0 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4UA33 CVE: NA
Reference: https://lore.kernel.org/r/20201208182651.1597945-4-mathieu.poirier@linaro.or...
-----------------------------------------
The ETM device can't keep up with the core pipeline when the CPU core is running at full speed. This may cause an overflow within the core and its ETM. This is a common phenomenon on ETM devices.
On the HiSilicon Hip08 platform, a specific feature is added to control the core pipeline, so the commit rate can be reduced manually to avoid ETM overflow.
Reviewed-by: Suzuki K Poulose suzuki.poulose@arm.com Signed-off-by: Qi Liu liuqi115@huawei.com [Modified changelog title and Kconfig description] Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Link: https://lore.kernel.org/r/20201208182651.1597945-4-mathieu.poirier@linaro.or... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Qi Liu liuqi115@huawei.com Reviewed-by: Jay Fang f.fangjian@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com --- drivers/hwtracing/coresight/Kconfig | 8 ++ .../coresight/coresight-etm4x-core.c | 98 +++++++++++++++++++ drivers/hwtracing/coresight/coresight-etm4x.h | 8 ++ 3 files changed, 114 insertions(+)
diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig index c1198245461d..7b44ba22cbe1 100644 --- a/drivers/hwtracing/coresight/Kconfig +++ b/drivers/hwtracing/coresight/Kconfig @@ -110,6 +110,14 @@ config CORESIGHT_SOURCE_ETM4X To compile this driver as a module, choose M here: the module will be called coresight-etm4x.
+config ETM4X_IMPDEF_FEATURE + bool "Control implementation defined overflow support in ETM 4.x driver" + depends on CORESIGHT_SOURCE_ETM4X + help + This control provides implementation define control for CoreSight + ETM 4.x tracer module that can't reduce commit rate automatically. + This avoids overflow between the ETM tracer module and the cpu core. + config CORESIGHT_STM tristate "CoreSight System Trace Macrocell driver" depends on (ARM && !(CPU_32v3 || CPU_32v4 || CPU_32v4T)) || ARM64 diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c index 74d3e2fe43d4..02d0b92cf510 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x-core.c +++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c @@ -3,6 +3,7 @@ * Copyright (c) 2014, The Linux Foundation. All rights reserved. */
+#include <linux/bitops.h> #include <linux/kernel.h> #include <linux/moduleparam.h> #include <linux/init.h> @@ -28,7 +29,9 @@ #include <linux/perf_event.h> #include <linux/pm_runtime.h> #include <linux/property.h> + #include <asm/sections.h> +#include <asm/sysreg.h> #include <asm/local.h> #include <asm/virt.h>
@@ -103,6 +106,97 @@ struct etm4_enable_arg { int rc; };
+#ifdef CONFIG_ETM4X_IMPDEF_FEATURE + +#define HISI_HIP08_AMBA_ID 0x000b6d01 +#define ETM4_AMBA_MASK 0xfffff +#define HISI_HIP08_CORE_COMMIT_MASK 0x3000 +#define HISI_HIP08_CORE_COMMIT_SHIFT 12 +#define HISI_HIP08_CORE_COMMIT_FULL 0b00 +#define HISI_HIP08_CORE_COMMIT_LVL_1 0b01 +#define HISI_HIP08_CORE_COMMIT_REG sys_reg(3, 1, 15, 2, 5) + +struct etm4_arch_features { + void (*arch_callback)(bool enable); +}; + +static bool etm4_hisi_match_pid(unsigned int id) +{ + return (id & ETM4_AMBA_MASK) == HISI_HIP08_AMBA_ID; +} + +static void etm4_hisi_config_core_commit(bool enable) +{ + u8 commit = enable ? HISI_HIP08_CORE_COMMIT_LVL_1 : + HISI_HIP08_CORE_COMMIT_FULL; + u64 val; + + /* + * bit 12 and 13 of HISI_HIP08_CORE_COMMIT_REG are used together + * to set core-commit, 2'b00 means cpu is at full speed, 2'b01, + * 2'b10, 2'b11 mean reduce pipeline speed, and 2'b01 means level-1 + * speed(minimun value). So bit 12 and 13 should be cleared together. + */ + val = read_sysreg_s(HISI_HIP08_CORE_COMMIT_REG); + val &= ~HISI_HIP08_CORE_COMMIT_MASK; + val |= commit << HISI_HIP08_CORE_COMMIT_SHIFT; + write_sysreg_s(val, HISI_HIP08_CORE_COMMIT_REG); +} + +static struct etm4_arch_features etm4_features[] = { + [ETM4_IMPDEF_HISI_CORE_COMMIT] = { + .arch_callback = etm4_hisi_config_core_commit, + }, + {}, +}; + +static void etm4_enable_arch_specific(struct etmv4_drvdata *drvdata) +{ + struct etm4_arch_features *ftr; + int bit; + + for_each_set_bit(bit, drvdata->arch_features, ETM4_IMPDEF_FEATURE_MAX) { + ftr = &etm4_features[bit]; + + if (ftr->arch_callback) + ftr->arch_callback(true); + } +} + +static void etm4_disable_arch_specific(struct etmv4_drvdata *drvdata) +{ + struct etm4_arch_features *ftr; + int bit; + + for_each_set_bit(bit, drvdata->arch_features, ETM4_IMPDEF_FEATURE_MAX) { + ftr = &etm4_features[bit]; + + if (ftr->arch_callback) + ftr->arch_callback(false); + } +} + +static void etm4_check_arch_features(struct etmv4_drvdata *drvdata, + unsigned int id) +{ + if (etm4_hisi_match_pid(id)) + set_bit(ETM4_IMPDEF_HISI_CORE_COMMIT, drvdata->arch_features); +} +#else +static void etm4_enable_arch_specific(struct etmv4_drvdata *drvdata) +{ +} + +static void etm4_disable_arch_specific(struct etmv4_drvdata *drvdata) +{ +} + +static void etm4_check_arch_features(struct etmv4_drvdata *drvdata, + unsigned int id) +{ +} +#endif /* CONFIG_ETM4X_IMPDEF_FEATURE */ + static int etm4_enable_hw(struct etmv4_drvdata *drvdata) { int i, rc; @@ -110,6 +204,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata) struct device *etm_dev = &drvdata->csdev->dev;
CS_UNLOCK(drvdata->base); + etm4_enable_arch_specific(drvdata);
etm4_os_unlock(drvdata);
@@ -480,6 +575,7 @@ static void etm4_disable_hw(void *info) int i;
CS_UNLOCK(drvdata->base); + etm4_disable_arch_specific(drvdata);
if (!drvdata->skip_power_up) { /* power can be removed from the trace unit now */ @@ -1563,6 +1659,8 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id) drvdata->boot_enable = true; }
+ etm4_check_arch_features(drvdata, id->id); + return 0; }
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h index eefc7371c6c4..3dd3e0633328 100644 --- a/drivers/hwtracing/coresight/coresight-etm4x.h +++ b/drivers/hwtracing/coresight/coresight-etm4x.h @@ -8,6 +8,7 @@
#include <asm/local.h> #include <linux/spinlock.h> +#include <linux/types.h> #include "coresight-priv.h"
/* @@ -203,6 +204,11 @@ /* Interpretation of resource numbers change at ETM v4.3 architecture */ #define ETM4X_ARCH_4V3 0x43
+enum etm_impdef_type { + ETM4_IMPDEF_HISI_CORE_COMMIT, + ETM4_IMPDEF_FEATURE_MAX, +}; + /** * struct etmv4_config - configuration information related to an ETMv4 * @mode: Controls various modes supported by this ETM. @@ -415,6 +421,7 @@ struct etmv4_save_state { * @state_needs_restore: True when there is context to restore after PM exit * @skip_power_up: Indicates if an implementation can skip powering up * the trace unit. + * @arch_features: Bitmap of arch features of etmv4 devices. */ struct etmv4_drvdata { void __iomem *base; @@ -463,6 +470,7 @@ struct etmv4_drvdata { struct etmv4_save_state *save_state; bool state_needs_restore; bool skip_power_up; + DECLARE_BITMAP(arch_features, ETM4_IMPDEF_FEATURE_MAX); };
/* Address comparator access types */
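The bitmap-plus-callback pattern makes further quirks easy to slot in; as a purely hypothetical sketch (the EXAMPLE_QUIRK names below do not exist in the driver):

	enum etm_impdef_type {
		ETM4_IMPDEF_HISI_CORE_COMMIT,
		ETM4_IMPDEF_EXAMPLE_QUIRK,	/* hypothetical new entry */
		ETM4_IMPDEF_FEATURE_MAX,
	};

	static void etm4_example_quirk(bool enable)
	{
		/* vendor-specific sysreg write to throttle/unthrottle tracing */
	}

	static struct etm4_arch_features etm4_features[] = {
		[ETM4_IMPDEF_HISI_CORE_COMMIT] = {
			.arch_callback = etm4_hisi_config_core_commit,
		},
		[ETM4_IMPDEF_EXAMPLE_QUIRK] = {
			.arch_callback = etm4_example_quirk,
		},
		{},
	};

	/* etm4_check_arch_features() would then set the new bit for the
	 * matching AMBA part ID, exactly as done for HISI_CORE_COMMIT. */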
From: Qi Liu liuqi115@huawei.com
driver inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4UA33
-----------------------------------------
This patch adds a driver for the Ultrasoc SMB (System Memory Buffer) device. The SMB provides a way to buffer messages from the ETM and store the traced CPU instructions in system memory.
The SMB was developed by Ultrasoc technology, which has since been acquired by Siemens; we still use "Ultrasoc" to name the driver.
Signed-off-by: Qi Liu liuqi115@huawei.com Tested-by: JunHao He hejunhao2@hisilicon.com Reviewed-by: Jay Fang f.fangjian@huawei.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com --- drivers/hwtracing/coresight/Kconfig | 12 + drivers/hwtracing/coresight/Makefile | 1 + drivers/hwtracing/coresight/ultrasoc-smb.c | 602 +++++++++++++++++++++ drivers/hwtracing/coresight/ultrasoc-smb.h | 100 ++++ 4 files changed, 715 insertions(+) create mode 100644 drivers/hwtracing/coresight/ultrasoc-smb.c create mode 100644 drivers/hwtracing/coresight/ultrasoc-smb.h
diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig index 7b44ba22cbe1..17c1fa4fcf70 100644 --- a/drivers/hwtracing/coresight/Kconfig +++ b/drivers/hwtracing/coresight/Kconfig @@ -173,4 +173,16 @@ config CORESIGHT_CTI_INTEGRATION_REGS CTI trigger connections between this and other devices.These registers are not used in normal operation and can leave devices in an inconsistent state. + +config ULTRASOC_SMB + tristate "Ultrasoc system memory buffer drivers" + depends on ARM64 && CORESIGHT_LINKS_AND_SINKS + help + This driver provides support for the Ultrasoc system memory buffer (SMB). + SMB is responsible for receiving the trace data from Coresight ETM devices + and storing them to a system buffer. + + To compile this driver as a module, choose M here: the module will be + called ultrasoc-smb. + endif diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile index f20e357758d1..28acf9600f83 100644 --- a/drivers/hwtracing/coresight/Makefile +++ b/drivers/hwtracing/coresight/Makefile @@ -23,3 +23,4 @@ obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o coresight-cti-y := coresight-cti-core.o coresight-cti-platform.o \ coresight-cti-sysfs.o +obj-$(CONFIG_ULTRASOC_SMB) += ultrasoc-smb.o diff --git a/drivers/hwtracing/coresight/ultrasoc-smb.c b/drivers/hwtracing/coresight/ultrasoc-smb.c new file mode 100644 index 000000000000..47adcf8a8d7b --- /dev/null +++ b/drivers/hwtracing/coresight/ultrasoc-smb.c @@ -0,0 +1,602 @@ +// SPDX-License-Identifier: MIT/GPL +/* + * Siemens System Memory Buffer driver. + * Copyright(c) 2021, HiSilicon Limited. + */ + +#include <linux/acpi.h> +#include <linux/circ_buf.h> +#include <linux/err.h> +#include <linux/module.h> +#include <linux/mod_devicetable.h> +#include <linux/platform_device.h> + +#include "ultrasoc-smb.h" + +DEFINE_CORESIGHT_DEVLIST(sink_devs, "smb"); + +static bool smb_buffer_is_empty(struct smb_drv_data *drvdata) +{ + u32 buf_status = readl(drvdata->base + SMB_LB_INT_STS); + + return buf_status & BIT(0) ? false : true; +} + +static bool smb_buffer_cmp_pointer(struct smb_drv_data *drvdata) +{ + u32 wr_offset, rd_offset; + + wr_offset = readl(drvdata->base + SMB_LB_WR_ADDR); + rd_offset = readl(drvdata->base + SMB_LB_RD_ADDR); + return wr_offset == rd_offset; +} + +static void smb_reset_buffer_status(struct smb_drv_data *drvdata) +{ + writel(0xf, drvdata->base + SMB_LB_INT_STS); +} + +/* Purge data remaining in hardware path to SMB. 
*/ +static void smb_purge_data(struct smb_drv_data *drvdata) +{ + writel(0x1, drvdata->base + SMB_LB_PURGE); +} + +static void smb_update_data_size(struct smb_drv_data *drvdata) +{ + struct smb_data_buffer *sdb = &drvdata->sdb; + u32 write_offset; + + smb_purge_data(drvdata); + if (smb_buffer_cmp_pointer(drvdata)) { + if (smb_buffer_is_empty(drvdata)) + sdb->data_size = 0; + else + sdb->data_size = sdb->buf_size; + return; + } + + write_offset = readl(drvdata->base + SMB_LB_WR_ADDR) - sdb->start_addr; + sdb->data_size = CIRC_CNT(write_offset, sdb->rd_offset, sdb->buf_size); +} + +static int smb_open(struct inode *inode, struct file *file) +{ + struct smb_drv_data *drvdata = container_of(file->private_data, + struct smb_drv_data, miscdev); + + if (local_cmpxchg(&drvdata->reading, 0, 1)) + return -EBUSY; + + return 0; +} + +static ssize_t smb_read(struct file *file, char __user *data, size_t len, loff_t *ppos) +{ + struct smb_drv_data *drvdata = container_of(file->private_data, + struct smb_drv_data, miscdev); + struct smb_data_buffer *sdb = &drvdata->sdb; + struct device *dev = &drvdata->csdev->dev; + unsigned long flags; + int to_copy = 0; + + spin_lock_irqsave(&drvdata->spinlock, flags); + + if (!sdb->data_size) { + smb_update_data_size(drvdata); + if (!sdb->data_size) + goto out; + } + + if (atomic_read(drvdata->csdev->refcnt)) { + to_copy = -EBUSY; + goto out; + } + + to_copy = min(sdb->data_size, len); + + /* Copy parts of trace data when the read pointer will wrap around SMB buffer. */ + if (sdb->rd_offset + to_copy > sdb->buf_size) + to_copy = sdb->buf_size - sdb->rd_offset; + + if (copy_to_user(data, (void *)sdb->buf_base + sdb->rd_offset, to_copy)) { + dev_dbg(dev, "Failed to copy data to user.\n"); + to_copy = -EFAULT; + goto out; + } + + *ppos += to_copy; + sdb->data_size -= to_copy; + sdb->rd_offset += to_copy; + sdb->rd_offset %= sdb->buf_size; + writel(sdb->start_addr + sdb->rd_offset, drvdata->base + SMB_LB_RD_ADDR); + dev_dbg(dev, "%d bytes copied.\n", to_copy); +out: + if (!sdb->data_size) + smb_reset_buffer_status(drvdata); + spin_unlock_irqrestore(&drvdata->spinlock, flags); + return to_copy; +} + +static int smb_release(struct inode *inode, struct file *file) +{ + struct smb_drv_data *drvdata = container_of(file->private_data, + struct smb_drv_data, miscdev); + local_set(&drvdata->reading, 0); + return 0; +} + +static const struct file_operations smb_fops = { + .owner = THIS_MODULE, + .open = smb_open, + .read = smb_read, + .release = smb_release, + .llseek = no_llseek, +}; + +smb_reg(read_pos, SMB_LB_RD_ADDR); +smb_reg(write_pos, SMB_LB_WR_ADDR); +smb_reg(buf_status, SMB_LB_INT_STS); + +static ssize_t buf_size_show(struct device *dev, struct device_attribute *attr, char *buf) +{ + struct smb_drv_data *drvdata = dev_get_drvdata(dev->parent); + + return sysfs_emit(buf, "0x%lx\n", drvdata->sdb.buf_size); +} +static DEVICE_ATTR_RO(buf_size); + +static struct attribute *smb_sink_attrs[] = { + &dev_attr_read_pos.attr, + &dev_attr_write_pos.attr, + &dev_attr_buf_status.attr, + &dev_attr_buf_size.attr, + NULL, +}; + +static const struct attribute_group smb_sink_group = { + .attrs = smb_sink_attrs, + .name = "status", +}; + +static const struct attribute_group *smb_sink_groups[] = { + &smb_sink_group, + NULL, +}; + +static int smb_set_perf_buffer(struct perf_output_handle *handle) +{ + struct cs_buffers *buf = etm_perf_sink_config(handle); + u32 head; + + if (!buf) + return -EINVAL; + + /* Wrap head around to the amount of space we have */ + head = handle->head & ((buf->nr_pages 
<< PAGE_SHIFT) - 1); + + /* Find the page to write to and offset within that page */ + buf->cur = head / PAGE_SIZE; + buf->offset = head % PAGE_SIZE; + + local_set(&buf->data_size, 0); + + return 0; +} + +static void smb_enable_hw(struct smb_drv_data *drvdata) +{ + writel(0x1, drvdata->base + SMB_GLOBAL_EN); +} + +static void smb_disable_hw(struct smb_drv_data *drvdata) +{ + writel(0x0, drvdata->base + SMB_GLOBAL_EN); +} + +static int smb_enable_sysfs(struct smb_drv_data *drvdata) +{ + if (drvdata->mode == CS_MODE_PERF) + return -EBUSY; + + if (drvdata->mode == CS_MODE_SYSFS) + return 0; + + smb_enable_hw(drvdata); + drvdata->mode = CS_MODE_SYSFS; + return 0; +} + +static int smb_enable_perf(struct smb_drv_data *drvdata, void *data) +{ + struct device *dev = &drvdata->csdev->dev; + struct perf_output_handle *handle = data; + pid_t pid; + + if (drvdata->mode == CS_MODE_SYSFS) { + dev_err(dev, "Device is already in used by sysfs.\n"); + return -EBUSY; + } + + /* Get a handle on the pid of the target process*/ + pid = task_pid_nr(handle->event->owner); + if (drvdata->pid != -1 && drvdata->pid != pid) { + dev_err(dev, "Device is already in used by other session.\n"); + return -EBUSY; + } + /* The sink is already enabled by this session. */ + if (drvdata->pid == pid) + return 0; + + if (smb_set_perf_buffer(handle)) + return -EINVAL; + + smb_enable_hw(drvdata); + drvdata->pid = pid; + drvdata->mode = CS_MODE_PERF; + + return 0; +} + +static int smb_enable(struct coresight_device *csdev, u32 mode, void *data) +{ + struct smb_drv_data *drvdata = dev_get_drvdata(csdev->dev.parent); + unsigned long flags; + int ret = -EINVAL; + + /* Do nothing if trace data is reading by other interface now. */ + if (local_read(&drvdata->reading)) + return -EBUSY; + + spin_lock_irqsave(&drvdata->spinlock, flags); + + if (mode == CS_MODE_SYSFS) + ret = smb_enable_sysfs(drvdata); + + if (mode == CS_MODE_PERF) + ret = smb_enable_perf(drvdata, data); + + spin_unlock_irqrestore(&drvdata->spinlock, flags); + + if (ret) + return ret; + + atomic_inc(csdev->refcnt); + dev_dbg(&csdev->dev, "Ultrasoc SMB enabled.\n"); + + return 0; +} + +static int smb_disable(struct coresight_device *csdev) +{ + struct smb_drv_data *drvdata = dev_get_drvdata(csdev->dev.parent); + unsigned long flags; + + spin_lock_irqsave(&drvdata->spinlock, flags); + + if (atomic_dec_return(csdev->refcnt)) { + spin_unlock_irqrestore(&drvdata->spinlock, flags); + return -EBUSY; + } + + WARN_ON_ONCE(drvdata->mode == CS_MODE_DISABLED); + smb_disable_hw(drvdata); + + /* + * Data remaining in hardware path will be sent to SMB after purge, so needs to + * synchronize the read pointer to write pointer in perf mode. + */ + smb_purge_data(drvdata); + if (drvdata->mode == CS_MODE_PERF) + writel(readl(drvdata->base + SMB_LB_WR_ADDR), drvdata->base + SMB_LB_RD_ADDR); + + /* Dissociate from the target process. */ + drvdata->pid = -1; + drvdata->mode = CS_MODE_DISABLED; + spin_unlock_irqrestore(&drvdata->spinlock, flags); + + dev_dbg(&csdev->dev, "Ultrasoc SMB disabled.\n"); + return 0; +} + +static void *smb_alloc_buffer(struct coresight_device *csdev, + struct perf_event *event, void **pages, + int nr_pages, bool overwrite) +{ + struct cs_buffers *buf; + int node; + + node = (event->cpu == -1) ? 
NUMA_NO_NODE : cpu_to_node(event->cpu); + buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node); + if (!buf) + return NULL; + + buf->snapshot = overwrite; + buf->nr_pages = nr_pages; + buf->data_pages = pages; + + return buf; +} + +static void smb_free_buffer(void *config) +{ + struct cs_buffers *buf = config; + + kfree(buf); +} + +static void smb_sync_perf_buffer(struct smb_drv_data *drvdata, + struct cs_buffers *buf, unsigned long data_size) +{ + struct smb_data_buffer *sdb = &drvdata->sdb; + char **dst_pages = (char **)buf->data_pages; + unsigned long buf_offset = buf->offset; + unsigned int cur = buf->cur; + unsigned long to_copy; + + while (data_size) { + /* Copy parts of trace data when the read pointer will wrap around SMB buffer. */ + if (sdb->rd_offset + PAGE_SIZE - buf_offset > sdb->buf_size) + to_copy = sdb->buf_size - sdb->rd_offset; + else + to_copy = min(data_size, PAGE_SIZE - buf_offset); + + memcpy_fromio(dst_pages[cur] + buf_offset, sdb->buf_base + sdb->rd_offset, to_copy); + + buf_offset += to_copy; + if (buf_offset >= PAGE_SIZE) { + buf_offset = 0; + cur++; + cur %= buf->nr_pages; + } + data_size -= to_copy; + /* ensure memcpy finished before update the read pointer */ + sdb->rd_offset += to_copy; + sdb->rd_offset %= sdb->buf_size; + } + + sdb->data_size = 0; + writel(sdb->start_addr + sdb->rd_offset, drvdata->base + SMB_LB_RD_ADDR); + smb_reset_buffer_status(drvdata); +} + +static unsigned long smb_update_buffer(struct coresight_device *csdev, + struct perf_output_handle *handle, + void *sink_config) +{ + struct smb_drv_data *drvdata = dev_get_drvdata(csdev->dev.parent); + struct smb_data_buffer *sdb = &drvdata->sdb; + struct cs_buffers *buf = sink_config; + unsigned long data_size = 0; + unsigned long flags; + bool lost = false; + + if (!buf) + return 0; + + spin_lock_irqsave(&drvdata->spinlock, flags); + + /* Don't do anything if another tracer is using this sink. 
*/ + if (atomic_read(csdev->refcnt) != 1) + goto out; + + smb_update_data_size(drvdata); + data_size = sdb->data_size; + if (data_size > handle->size) { + sdb->rd_offset += data_size - handle->size; + sdb->rd_offset %= sdb->buf_size; + data_size = handle->size; + lost = true; + } + + smb_sync_perf_buffer(drvdata, buf, data_size); + if (!buf->snapshot && lost) + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); + +out: + spin_unlock_irqrestore(&drvdata->spinlock, flags); + return data_size; +} + +static const struct coresight_ops_sink smb_cs_ops = { + .enable = smb_enable, + .disable = smb_disable, + .alloc_buffer = smb_alloc_buffer, + .free_buffer = smb_free_buffer, + .update_buffer = smb_update_buffer, +}; + +static const struct coresight_ops cs_ops = { + .sink_ops = &smb_cs_ops, +}; + +static int smb_init_data_buffer(struct platform_device *pdev, struct smb_data_buffer *sdb) +{ + struct resource *res; + void __iomem *base; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); + if (IS_ERR(res)) { + dev_err(&pdev->dev, "SMB device failed to get resource.\n"); + return -EINVAL; + } + + sdb->start_addr = res->start & SMB_BASE_LOW_MASK; + sdb->buf_size = resource_size(res); + if (sdb->buf_size == 0) + return -EINVAL; + + base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(base)) + return PTR_ERR(base); + + sdb->buf_base = base; + + return 0; +} + +static void smb_init_hw(struct smb_drv_data *drvdata) +{ + u32 value; + + /* First disable smb and clear the status of SMB buffer */ + smb_reset_buffer_status(drvdata); + smb_disable_hw(drvdata); + smb_purge_data(drvdata); + + /* Using smb in single-end mode, and set other configures default */ + value = SMB_BUF_CFG_STREAMING | SMB_BUF_SINGLE_END | SMB_BUF_EN; + writel(value, drvdata->base + SMB_LB_CFG_LO); + value = SMB_MSG_FILTER(0x0, 0xff); + writel(value, drvdata->base + SMB_LB_CFG_HI); + + writel(SMB_GLOBAL_CFG, drvdata->base + SMB_CFG_REG); + writel(SMB_GLB_INT_CFG, drvdata->base + SMB_GLOBAL_INT); + writel(SMB_BUF_INT_CFG, drvdata->base + SMB_LB_INT_CTRL); +} + +static int smb_register_sink(struct platform_device *pdev, struct smb_drv_data *drvdata) +{ + struct coresight_platform_data *pdata = NULL; + struct coresight_desc desc = { 0 }; + int ret; + + pdata = coresight_get_platform_data(&pdev->dev); + if (IS_ERR(pdata)) + return PTR_ERR(pdata); + + desc.type = CORESIGHT_DEV_TYPE_SINK; + desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER; + desc.ops = &cs_ops; + desc.pdata = pdata; + desc.dev = &pdev->dev; + desc.groups = smb_sink_groups; + desc.name = coresight_alloc_device_name(&sink_devs, &pdev->dev); + if (!desc.name) { + dev_err(&pdev->dev, "Failed to alloc coresight device name."); + return -ENOMEM; + } + + drvdata->csdev = coresight_register(&desc); + if (IS_ERR(drvdata->csdev)) + return PTR_ERR(drvdata->csdev); + + drvdata->miscdev.name = desc.name; + drvdata->miscdev.minor = MISC_DYNAMIC_MINOR; + drvdata->miscdev.fops = &smb_fops; + ret = misc_register(&drvdata->miscdev); + if (ret) { + coresight_unregister(drvdata->csdev); + dev_err(&pdev->dev, "Failed to register misc, ret=%d.\n", ret); + } + + return ret; +} + +static void smb_unregister_sink(struct smb_drv_data *drvdata) +{ + misc_deregister(&drvdata->miscdev); + coresight_unregister(drvdata->csdev); +} + +/* + * Send ultrasoc messge to control hardwares on the tracing path, + * using DSM calls to avoid exposing ultrasoc message format. + */ +static int smb_config_inport(struct device *dev, bool enable) +{ + u32 flag = enable ? 
1 : 0; + union acpi_object *obj; + guid_t guid; + + if (guid_parse("82ae1283-7f6a-4cbe-aa06-53e8fb24db18", &guid)) { + dev_err(dev, "Get GUID failed.\n"); + return -EINVAL; + } + + obj = acpi_evaluate_dsm(ACPI_HANDLE(dev), &guid, 0, flag, NULL); + if (!obj) + dev_err(dev, "ACPI handle failed!\n"); + else + ACPI_FREE(obj); + + return 0; +} + +static int smb_probe(struct platform_device *pdev) +{ + struct smb_drv_data *drvdata; + int ret; + + drvdata = devm_kzalloc(&pdev->dev, sizeof(*drvdata), GFP_KERNEL); + if (!drvdata) + return -ENOMEM; + + drvdata->base = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(drvdata->base)) { + dev_err(&pdev->dev, "Failed to ioremap resource.\n"); + return PTR_ERR(drvdata->base); + } + + ret = smb_init_data_buffer(pdev, &drvdata->sdb); + if (ret) { + dev_err(&pdev->dev, "Failed to init buffer, ret = %d.\n", ret); + return ret; + } + + smb_init_hw(drvdata); + spin_lock_init(&drvdata->spinlock); + drvdata->pid = -1; + + ret = smb_register_sink(pdev, drvdata); + if (ret) { + dev_err(&pdev->dev, "Failed to register smb sink.\n"); + return ret; + } + + ret = smb_config_inport(&pdev->dev, true); + if (ret) { + smb_unregister_sink(drvdata); + return ret; + } + + platform_set_drvdata(pdev, drvdata); + return 0; +} + +static int smb_remove(struct platform_device *pdev) +{ + struct smb_drv_data *drvdata = platform_get_drvdata(pdev); + int ret; + + ret = smb_config_inport(&pdev->dev, false); + if (ret) + return ret; + + smb_unregister_sink(drvdata); + return 0; +} + +#ifdef CONFIG_ACPI +static const struct acpi_device_id ultrasoc_smb_acpi_match[] = { + {"HISI03A1", 0}, + {}, +}; +MODULE_DEVICE_TABLE(acpi, ultrasoc_smb_acpi_match); +#endif + +static struct platform_driver smb_driver = { + .driver = { + .name = "ultrasoc-smb", + .acpi_match_table = ACPI_PTR(ultrasoc_smb_acpi_match), + .suppress_bind_attrs = true, + }, + .probe = smb_probe, + .remove = smb_remove, +}; +module_platform_driver(smb_driver); + +MODULE_DESCRIPTION("Ultrasoc smb driver"); +MODULE_LICENSE("Dual MIT/GPL"); +MODULE_AUTHOR("Jonathan Zhou jonathan.zhouwen@huawei.com"); +MODULE_AUTHOR("Qi Liu liuqi115@huawei.com"); diff --git a/drivers/hwtracing/coresight/ultrasoc-smb.h b/drivers/hwtracing/coresight/ultrasoc-smb.h new file mode 100644 index 000000000000..672ba32d642c --- /dev/null +++ b/drivers/hwtracing/coresight/ultrasoc-smb.h @@ -0,0 +1,100 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Siemens System Memory Buffer driver. + * Copyright(c) 2021, HiSilicon Limited. 
+ */ + +#ifndef _ULTRASOC_SMB_H +#define _ULTRASOC_SMB_H + +#include <linux/coresight.h> +#include <linux/list.h> +#include <linux/miscdevice.h> + +#include "coresight-etm-perf.h" +#include "coresight-priv.h" + +/* Offset of SMB logical buffer registers */ +#define SMB_CFG_REG 0X0 +#define SMB_GLOBAL_EN 0X4 +#define SMB_GLOBAL_INT 0X8 +#define SMB_LB_CFG_LO 0X40 +#define SMB_LB_CFG_HI 0X44 +#define SMB_LB_INT_CTRL 0X48 +#define SMB_LB_INT_STS 0X4C +#define SMB_LB_LIMIT 0X58 +#define SMB_LB_RD_ADDR 0X5C +#define SMB_LB_WR_ADDR 0X60 +#define SMB_LB_PURGE 0X64 + +/* Set SMB_CFG_REG register */ +#define SMB_IDLE_PRD(period) (((period - 216) & 0xf) << 12) +#define SMB_MEM_WR(credit, rate) (((credit) << 16) | ((rate) << 18)) +#define SMB_MEM_RD(credit, rate) (((credit) << 22) | ((rate) << 24)) +#define SMB_BURST_LEN(len) ((len - 1) << 4) +#define SMB_GLOBAL_CFG (SMB_IDLE_PRD(231) | SMB_MEM_WR(0x3, 0x0) | \ + SMB_MEM_RD(0x3, 0x6) | SMB_BURST_LEN(16)) + +/* Set SMB_GLOBAL_INT register */ +#define SMB_INT_EN BIT(0) +#define SMB_INT_TYPE_PULSE BIT(1) +#define SMB_INT_POLARITY_HIGH BIT(2) +#define SMB_GLB_INT_CFG (SMB_INT_EN | SMB_INT_TYPE_PULSE | SMB_INT_POLARITY_HIGH) + +/* Set SMB_LB_CFG_LO register */ +#define SMB_BUF_EN BIT(0) +#define SMB_BUF_SINGLE_END BIT(1) +#define SMB_BUF_INIT BIT(8) +#define SMB_BUF_CONTINUOUS BIT(11) +#define SMB_FLOW_MASK GENMASK(19, 16) +#define SMB_BUF_CFG_STREAMING (SMB_BUF_INIT | SMB_BUF_CONTINUOUS | SMB_FLOW_MASK) +#define SMB_BASE_LOW_MASK GENMASK(31, 0) + +/* Set SMB_LB_CFG_HI register */ +#define SMB_MSG_FILTER(lower, upper) ((lower & 0xff) | ((upper & 0xff) << 8)) +#define SMB_BUF_INT_EN BIT(0) +#define SMB_BUF_NOTE_MASK GENMASK(11, 8) +#define SMB_BUF_INT_CFG (SMB_BUF_INT_EN | SMB_BUF_NOTE_MASK) + +/** + * struct smb_data_buffer - Details of the buffer used by SMB + * @buf_base : Memory mapped base address of SMB. + * @start_addr : SMB buffer start Physical address. + * @buf_size : Size of the buffer. + * @data_size : Size of Trace data copy to userspace. + * @rd_offset : Offset of the read pointer in the buffer. + */ +struct smb_data_buffer { + void __iomem *buf_base; + u32 start_addr; + unsigned long buf_size; + unsigned long data_size; + unsigned long rd_offset; +}; + +/** + * struct smb_drv_data - specifics associated to an SMB component + * @base: Memory mapped base address for SMB component. + * @csdev: Component vitals needed by the framework. + * @sdb: Data buffer for SMB. + * @miscdev: Specifics to handle "/dev/xyz.smb" entry. + * @spinlock: Only one at a time pls. + * @reading: Synchronise user space access to SMB buffer. + * @pid: Process ID of the process being monitored by the session + * that is using this component. + * @mode: how this SMB is being used, perf mode or sysfs mode. + */ +struct smb_drv_data { + void __iomem *base; + struct coresight_device *csdev; + struct smb_data_buffer sdb; + struct miscdevice miscdev; + spinlock_t spinlock; + local_t reading; + pid_t pid; + u32 mode; +}; + +#define smb_reg(name, offset) coresight_simple_reg32(struct smb_drv_data, name, offset) + +#endif
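A minimal userspace sketch for draining the buffer through the misc device after a sysfs-mode session; the node name below is an assumption (the driver allocates names as "smb<N>" via DEFINE_CORESIGHT_DEVLIST):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		ssize_t n;
		int fd = open("/dev/smb0", O_RDONLY);	/* assumed node name */

		if (fd < 0)
			return 1;	/* open fails with -EBUSY if another reader holds it */
		/* read() returns -EBUSY while the sink is still enabled, and 0
		 * once the SMB buffer has been fully drained */
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			fwrite(buf, 1, n, stdout);	/* raw trace bytes */
		close(fd);
		return 0;
	}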
From: Qi Liu liuqi115@huawei.com
driver inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4UA33
-----------------------------------------
Enable the config options for the Ultrasoc SMB driver.
Signed-off-by: Qi Liu liuqi115@huawei.com Reviewed-by: Jay Fang f.fangjian@huawei.com Reviewed-by: Kai Liu kai.liu@suse.com Acked-by: Xie XiuQi xiexiuqi@huawei.com Signed-off-by: Zheng Zengkai zhengzengkai@huawei.com --- arch/arm64/configs/openeuler_defconfig | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig index 7bc72dfe9d05..3a415daa4e43 100644 --- a/arch/arm64/configs/openeuler_defconfig +++ b/arch/arm64/configs/openeuler_defconfig @@ -7223,7 +7223,18 @@ CONFIG_IO_STRICT_DEVMEM=y # CONFIG_PID_IN_CONTEXTIDR is not set # CONFIG_DEBUG_EFI is not set # CONFIG_ARM64_RELOC_TEST is not set -# CONFIG_CORESIGHT is not set +CONFIG_CORESIGHT=m +CONFIG_CORESIGHT_LINKS_AND_SINKS=m +# CONFIG_CORESIGHT_LINK_AND_SINK_TMC is not set +# CONFIG_CORESIGHT_SINK_TPIU is not set +# CONFIG_CORESIGHT_SINK_ETBV10 is not set +CONFIG_CORESIGHT_SOURCE_ETM4X=m +CONFIG_ETM4X_IMPDEF_FEATURE=y +# CONFIG_CORESIGHT_STM is not set +# CONFIG_CORESIGHT_CPU_DEBUG is not set +# CONFIG_CORESIGHT_CTI is not set +# CONFIG_CORESIGHT_TRBE is not set +CONFIG_ULTRASOC_SMB=m # end of arm64 Debugging
#