mailweb.openeuler.org


Kernel

kernel@openeuler.org

  • 28 participants
  • 18563 discussions
[PATCH kernel-4.19] USB: fix some clerical mistakes
by LeoLiuoc 11 Aug '21

Fix some clerical mistakes in the previous patch, commit
4d6c910e0825f397ea3b7cdfd639bdc12d878d27.

Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
 drivers/usb/core/hcd-pci.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
index c3cddaab708d..d4234f81b791 100644
--- a/drivers/usb/core/hcd-pci.c
+++ b/drivers/usb/core/hcd-pci.c
@@ -67,8 +67,8 @@ static void for_each_companion(struct pci_dev *pdev, struct usb_hcd *hcd,
             continue;
         if (strncmp(drv->name, "uhci_hcd", sizeof("uhci_hcd") - 1) &&
-            strncmp(drv->name, "ooci_hcd", sizeof("uhci_hcd") - 1) &&
-            strncmp(drv->name, "ehci_hcd", sizeof("uhci_hcd") - 1))
+            strncmp(drv->name, "ohci-pci", sizeof("ohci-pci") - 1) &&
+            strncmp(drv->name, "ehci-pci", sizeof("ehci-pci") - 1))
             continue;

         /*
--
2.20.1
[PATCH kernel-4.19] livepatch/x86: Ignore return code of save_stack_trace_tsk_reliable()
by Yang Yingliang 10 Aug '21

From: Wang ShaoBo <bobo.shaobowang(a)huawei.com>

hulk inclusion
category: bugfix
bugzilla: 175666
CVE: NA

---------------------------

Checking the return code of save_stack_trace_tsk_reliable() can be
relaxed for safety validation when applying a livepatch, so ignore its
return code in klp_check_stack().

Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Jian Cheng <cj.chengjian(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 arch/x86/kernel/livepatch.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kernel/livepatch.c b/arch/x86/kernel/livepatch.c
index e927624d5016f..4d5599e443884 100644
--- a/arch/x86/kernel/livepatch.c
+++ b/arch/x86/kernel/livepatch.c
@@ -210,9 +210,8 @@ static int klp_check_stack(struct task_struct *task,
     ret = save_stack_trace_tsk_reliable(task, &trace);
     WARN_ON_ONCE(ret == -ENOSYS);
     if (ret) {
-        pr_info("%s: %s:%d has an unreliable stack\n",
+        pr_debug("%s: %s:%d has an unreliable stack\n",
             __func__, task->comm, task->pid);
-        return ret;
     }

     klp_for_each_object(patch, obj) {
--
2.25.1
[PATCH openEuler-1.0-LTS 1/3] arm64: mm: account for hotplug memory when randomizing the linear region
by Yang Yingliang 10 Aug '21

From: Ard Biesheuvel <ardb(a)kernel.org>

mainline inclusion
from mainline-v5.11-rc1
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e
category: bugfix
bugzilla: 175103
CVE: NA

-----------------------------------------------

As a hardening measure, we currently randomize the placement of
physical memory inside the linear region when KASLR is in effect.
Since the random offset at which to place the available physical
memory inside the linear region is chosen early at boot, it is based
on the memblock description of memory, which does not cover hotplug
memory. The consequence of this is that the randomization offset may
be chosen such that any hotplugged memory located above
memblock_end_of_DRAM() that appears later is pushed off the end of the
linear region, where it cannot be accessed.

So let's limit this randomization of the linear region to ensure that
this can no longer happen, by using the CPU's addressable PA range
instead. As it is guaranteed that no hotpluggable memory will appear
that falls outside of that range, we can safely put this PA range
sized window anywhere in the linear region.

Signed-off-by: Ard Biesheuvel <ardb(a)kernel.org>
Cc: Anshuman Khandual <anshuman.khandual(a)arm.com>
Cc: Will Deacon <will(a)kernel.org>
Cc: Steven Price <steven.price(a)arm.com>
Cc: Robin Murphy <robin.murphy(a)arm.com>
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Peng Liu <liupeng256(a)huawei.com>
Reviewed-by: wangkefeng 00584194 <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 arch/arm64/mm/init.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 554824ce0f286..5449ae2d26bee 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -683,15 +683,18 @@ void __init arm64_memblock_init(void)
     if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
         extern u16 memstart_offset_seed;
-        u64 range = linear_region_size -
-                (memblock_end_of_DRAM() - memblock_start_of_DRAM());
+        u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+        int parange = cpuid_feature_extract_unsigned_field(
+                    mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
+        s64 range = linear_region_size -
+                BIT(id_aa64mmfr0_parange_to_phys_shift(parange));

         /*
          * If the size of the linear region exceeds, by a sufficient
-         * margin, the size of the region that the available physical
-         * memory spans, randomize the linear region as well.
+         * margin, the size of the region that the physical memory can
+         * span, randomize the linear region as well.
          */
-        if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
+        if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
             range /= ARM64_MEMSTART_ALIGN;
             memstart_addr -= ARM64_MEMSTART_ALIGN *
                      ((range * memstart_offset_seed) >> 16);
--
2.25.1
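To put rough numbers on this (illustrative only; the exact sizes depend on the configured VA bits and on what the CPU reports): with a 48-bit VA configuration the arm64 linear region spans about 2^47 bytes (128 TiB), while a CPU advertising a 44-bit PA range can only ever present memory within 2^44 bytes (16 TiB). The patch therefore slides the linear mapping within a window of roughly 128 TiB - 16 TiB = 112 TiB, and any memory hotplugged later, which by definition lies inside that 44-bit PA range, still lands inside the linear region no matter which offset was chosen at boot.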
[PATCH kernel-4.19 1/3] arm64: mm: account for hotplug memory when randomizing the linear region
by Yang Yingliang 10 Aug '21

From: Ard Biesheuvel <ardb(a)kernel.org>

mainline inclusion
from mainline-v5.11-rc1
commit 97d6786e0669daa5c2f2d07a057f574e849dfd3e
category: bugfix
bugzilla: 175103
CVE: NA

-----------------------------------------------

As a hardening measure, we currently randomize the placement of
physical memory inside the linear region when KASLR is in effect.
Since the random offset at which to place the available physical
memory inside the linear region is chosen early at boot, it is based
on the memblock description of memory, which does not cover hotplug
memory. The consequence of this is that the randomization offset may
be chosen such that any hotplugged memory located above
memblock_end_of_DRAM() that appears later is pushed off the end of the
linear region, where it cannot be accessed.

So let's limit this randomization of the linear region to ensure that
this can no longer happen, by using the CPU's addressable PA range
instead. As it is guaranteed that no hotpluggable memory will appear
that falls outside of that range, we can safely put this PA range
sized window anywhere in the linear region.

Signed-off-by: Ard Biesheuvel <ardb(a)kernel.org>
Cc: Anshuman Khandual <anshuman.khandual(a)arm.com>
Cc: Will Deacon <will(a)kernel.org>
Cc: Steven Price <steven.price(a)arm.com>
Cc: Robin Murphy <robin.murphy(a)arm.com>
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Peng Liu <liupeng256(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 arch/arm64/mm/init.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 957cc293226aa..2399a257eaf33 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -683,15 +683,18 @@ void __init arm64_memblock_init(void)
     if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
         extern u16 memstart_offset_seed;
-        u64 range = linear_region_size -
-                (memblock_end_of_DRAM() - memblock_start_of_DRAM());
+        u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+        int parange = cpuid_feature_extract_unsigned_field(
+                    mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
+        s64 range = linear_region_size -
+                BIT(id_aa64mmfr0_parange_to_phys_shift(parange));

         /*
          * If the size of the linear region exceeds, by a sufficient
-         * margin, the size of the region that the available physical
-         * memory spans, randomize the linear region as well.
+         * margin, the size of the region that the physical memory can
+         * span, randomize the linear region as well.
          */
-        if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
+        if (memstart_offset_seed > 0 && range >= (s64)ARM64_MEMSTART_ALIGN) {
             range /= ARM64_MEMSTART_ALIGN;
             memstart_addr -= ARM64_MEMSTART_ALIGN *
                      ((range * memstart_offset_seed) >> 16);
--
2.25.1
[PATCH openEuler-21.09 1/4] spi: Add HiSilicon SPI Controller Driver for Kunpeng SoCs
by Zheng Zengkai 09 Aug '21

From: Jay Fang <f.fangjian(a)huawei.com> mainline inclusion from mainline-v5.13-rc1 commit c770d8631e1810d8f1ce21b18ad5dd67eeb39e5c category: feature bugzilla: 175249 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?… ---------------------------------------------------------------------- This driver supports SPI Controller for HiSilicon Kunpeng SoCs. This driver supports SPI operations using FIFO mode of transfer. DMA is not supported, and we just use IRQ mode for operation completion notification. Only ACPI firmware is supported. Signed-off-by: Jay Fang <f.fangjian(a)huawei.com> Link: https://lore.kernel.org/r/1616836200-45827-1-git-send-email-f.fangjian@huaw… Signed-off-by: Mark Brown <broonie(a)kernel.org> Reviewed-by: Chengwen Feng <fengchengwen(a)huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com> --- MAINTAINERS | 7 + drivers/spi/Kconfig | 10 + drivers/spi/Makefile | 1 + drivers/spi/spi-hisi-kunpeng.c | 505 +++++++++++++++++++++++++++++++++ 4 files changed, 523 insertions(+) create mode 100644 drivers/spi/spi-hisi-kunpeng.c diff --git a/MAINTAINERS b/MAINTAINERS index a032b00f1380..a12ecc71e1e7 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -8007,6 +8007,13 @@ F: drivers/crypto/hisilicon/sec2/sec_crypto.c F: drivers/crypto/hisilicon/sec2/sec_crypto.h F: drivers/crypto/hisilicon/sec2/sec_main.c +HISILICON SPI Controller DRIVER FOR KUNPENG SOCS +M: Jay Fang <f.fangjian(a)huawei.com> +L: linux-spi(a)vger.kernel.org +S: Maintained +W: http://www.hisilicon.com +F: drivers/spi/spi-hisi-kunpeng.c + HISILICON STAGING DRIVERS FOR HIKEY 960/970 M: Mauro Carvalho Chehab <mchehab+huawei(a)kernel.org> S: Maintained diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig index aadaea052f51..2e284cae97e3 100644 --- a/drivers/spi/Kconfig +++ b/drivers/spi/Kconfig @@ -339,6 +339,16 @@ config SPI_FSL_QUADSPI This controller does not support generic SPI messages. It only supports the high-level SPI memory interface. +config SPI_HISI_KUNPENG + tristate "HiSilicon SPI Controller for Kunpeng SoCs" + depends on (ARM64 && ACPI) || COMPILE_TEST + help + This enables support for HiSilicon SPI controller found on + Kunpeng SoCs. + + This driver can also be built as a module. If so, the module + will be called hisi-kunpeng-spi. + config SPI_HISI_SFC_V3XX tristate "HiSilicon SPI NOR Flash Controller for Hi16XX chipsets" depends on (ARM64 && ACPI) || COMPILE_TEST diff --git a/drivers/spi/Makefile b/drivers/spi/Makefile index 6fea5821662e..04291ff89e16 100644 --- a/drivers/spi/Makefile +++ b/drivers/spi/Makefile @@ -54,6 +54,7 @@ obj-$(CONFIG_SPI_FSL_LPSPI) += spi-fsl-lpspi.o obj-$(CONFIG_SPI_FSL_QUADSPI) += spi-fsl-qspi.o obj-$(CONFIG_SPI_FSL_SPI) += spi-fsl-spi.o obj-$(CONFIG_SPI_GPIO) += spi-gpio.o +obj-$(CONFIG_SPI_HISI_KUNPENG) += spi-hisi-kunpeng.o obj-$(CONFIG_SPI_HISI_SFC_V3XX) += spi-hisi-sfc-v3xx.o obj-$(CONFIG_SPI_IMG_SPFI) += spi-img-spfi.o obj-$(CONFIG_SPI_IMX) += spi-imx.o diff --git a/drivers/spi/spi-hisi-kunpeng.c b/drivers/spi/spi-hisi-kunpeng.c new file mode 100644 index 000000000000..abc0cd54eee6 --- /dev/null +++ b/drivers/spi/spi-hisi-kunpeng.c @@ -0,0 +1,505 @@ +// SPDX-License-Identifier: GPL-2.0-only +// +// HiSilicon SPI Controller Driver for Kunpeng SoCs +// +// Copyright (c) 2021 HiSilicon Technologies Co., Ltd. +// Author: Jay Fang <f.fangjian(a)huawei.com> +// +// This code is based on spi-dw-core.c. 
+ +#include <linux/acpi.h> +#include <linux/bitfield.h> +#include <linux/delay.h> +#include <linux/err.h> +#include <linux/interrupt.h> +#include <linux/module.h> +#include <linux/property.h> +#include <linux/platform_device.h> +#include <linux/slab.h> +#include <linux/spi/spi.h> + +/* Register offsets */ +#define HISI_SPI_CSCR 0x00 /* cs control register */ +#define HISI_SPI_CR 0x04 /* spi common control register */ +#define HISI_SPI_ENR 0x08 /* spi enable register */ +#define HISI_SPI_FIFOC 0x0c /* fifo level control register */ +#define HISI_SPI_IMR 0x10 /* interrupt mask register */ +#define HISI_SPI_DIN 0x14 /* data in register */ +#define HISI_SPI_DOUT 0x18 /* data out register */ +#define HISI_SPI_SR 0x1c /* status register */ +#define HISI_SPI_RISR 0x20 /* raw interrupt status register */ +#define HISI_SPI_ISR 0x24 /* interrupt status register */ +#define HISI_SPI_ICR 0x28 /* interrupt clear register */ +#define HISI_SPI_VERSION 0xe0 /* version register */ + +/* Bit fields in HISI_SPI_CR */ +#define CR_LOOP_MASK GENMASK(1, 1) +#define CR_CPOL_MASK GENMASK(2, 2) +#define CR_CPHA_MASK GENMASK(3, 3) +#define CR_DIV_PRE_MASK GENMASK(11, 4) +#define CR_DIV_POST_MASK GENMASK(19, 12) +#define CR_BPW_MASK GENMASK(24, 20) +#define CR_SPD_MODE_MASK GENMASK(25, 25) + +/* Bit fields in HISI_SPI_FIFOC */ +#define FIFOC_TX_MASK GENMASK(5, 3) +#define FIFOC_RX_MASK GENMASK(11, 9) + +/* Bit fields in HISI_SPI_IMR, 4 bits */ +#define IMR_RXOF BIT(0) /* Receive Overflow */ +#define IMR_RXTO BIT(1) /* Receive Timeout */ +#define IMR_RX BIT(2) /* Receive */ +#define IMR_TX BIT(3) /* Transmit */ +#define IMR_MASK (IMR_RXOF | IMR_RXTO | IMR_RX | IMR_TX) + +/* Bit fields in HISI_SPI_SR, 5 bits */ +#define SR_TXE BIT(0) /* Transmit FIFO empty */ +#define SR_TXNF BIT(1) /* Transmit FIFO not full */ +#define SR_RXNE BIT(2) /* Receive FIFO not empty */ +#define SR_RXF BIT(3) /* Receive FIFO full */ +#define SR_BUSY BIT(4) /* Busy Flag */ + +/* Bit fields in HISI_SPI_ISR, 4 bits */ +#define ISR_RXOF BIT(0) /* Receive Overflow */ +#define ISR_RXTO BIT(1) /* Receive Timeout */ +#define ISR_RX BIT(2) /* Receive */ +#define ISR_TX BIT(3) /* Transmit */ +#define ISR_MASK (ISR_RXOF | ISR_RXTO | ISR_RX | ISR_TX) + +/* Bit fields in HISI_SPI_ICR, 2 bits */ +#define ICR_RXOF BIT(0) /* Receive Overflow */ +#define ICR_RXTO BIT(1) /* Receive Timeout */ +#define ICR_MASK (ICR_RXOF | ICR_RXTO) + +#define DIV_POST_MAX 0xFF +#define DIV_POST_MIN 0x00 +#define DIV_PRE_MAX 0xFE +#define DIV_PRE_MIN 0x02 +#define CLK_DIV_MAX ((1 + DIV_POST_MAX) * DIV_PRE_MAX) +#define CLK_DIV_MIN ((1 + DIV_POST_MIN) * DIV_PRE_MIN) + +#define DEFAULT_NUM_CS 1 + +#define HISI_SPI_WAIT_TIMEOUT_MS 10UL + +enum hisi_spi_rx_level_trig { + HISI_SPI_RX_1, + HISI_SPI_RX_4, + HISI_SPI_RX_8, + HISI_SPI_RX_16, + HISI_SPI_RX_32, + HISI_SPI_RX_64, + HISI_SPI_RX_128 +}; + +enum hisi_spi_tx_level_trig { + HISI_SPI_TX_1_OR_LESS, + HISI_SPI_TX_4_OR_LESS, + HISI_SPI_TX_8_OR_LESS, + HISI_SPI_TX_16_OR_LESS, + HISI_SPI_TX_32_OR_LESS, + HISI_SPI_TX_64_OR_LESS, + HISI_SPI_TX_128_OR_LESS +}; + +enum hisi_spi_frame_n_bytes { + HISI_SPI_N_BYTES_NULL, + HISI_SPI_N_BYTES_U8, + HISI_SPI_N_BYTES_U16, + HISI_SPI_N_BYTES_U32 = 4 +}; + +/* Slave spi_dev related */ +struct hisi_chip_data { + u32 cr; + u32 speed_hz; /* baud rate */ + u16 clk_div; /* baud rate divider */ + + /* clk_div = (1 + div_post) * div_pre */ + u8 div_post; /* value from 0 to 255 */ + u8 div_pre; /* value from 2 to 254 (even only!) 
*/ +}; + +struct hisi_spi { + struct device *dev; + + void __iomem *regs; + int irq; + u32 fifo_len; /* depth of the FIFO buffer */ + + /* Current message transfer state info */ + const void *tx; + unsigned int tx_len; + void *rx; + unsigned int rx_len; + u8 n_bytes; /* current is a 1/2/4 bytes op */ +}; + +static u32 hisi_spi_busy(struct hisi_spi *hs) +{ + return readl(hs->regs + HISI_SPI_SR) & SR_BUSY; +} + +static u32 hisi_spi_rx_not_empty(struct hisi_spi *hs) +{ + return readl(hs->regs + HISI_SPI_SR) & SR_RXNE; +} + +static u32 hisi_spi_tx_not_full(struct hisi_spi *hs) +{ + return readl(hs->regs + HISI_SPI_SR) & SR_TXNF; +} + +static void hisi_spi_flush_fifo(struct hisi_spi *hs) +{ + unsigned long limit = loops_per_jiffy << 1; + + do { + while (hisi_spi_rx_not_empty(hs)) + readl(hs->regs + HISI_SPI_DOUT); + } while (hisi_spi_busy(hs) && limit--); +} + +/* Disable the controller and all interrupts */ +static void hisi_spi_disable(struct hisi_spi *hs) +{ + writel(0, hs->regs + HISI_SPI_ENR); + writel(IMR_MASK, hs->regs + HISI_SPI_IMR); + writel(ICR_MASK, hs->regs + HISI_SPI_ICR); +} + +static u8 hisi_spi_n_bytes(struct spi_transfer *transfer) +{ + if (transfer->bits_per_word <= 8) + return HISI_SPI_N_BYTES_U8; + else if (transfer->bits_per_word <= 16) + return HISI_SPI_N_BYTES_U16; + else + return HISI_SPI_N_BYTES_U32; +} + +static void hisi_spi_reader(struct hisi_spi *hs) +{ + u32 max = min_t(u32, hs->rx_len, hs->fifo_len); + u32 rxw; + + while (hisi_spi_rx_not_empty(hs) && max--) { + rxw = readl(hs->regs + HISI_SPI_DOUT); + /* Check the transfer's original "rx" is not null */ + if (hs->rx) { + switch (hs->n_bytes) { + case HISI_SPI_N_BYTES_U8: + *(u8 *)(hs->rx) = rxw; + break; + case HISI_SPI_N_BYTES_U16: + *(u16 *)(hs->rx) = rxw; + break; + case HISI_SPI_N_BYTES_U32: + *(u32 *)(hs->rx) = rxw; + break; + } + hs->rx += hs->n_bytes; + } + --hs->rx_len; + } +} + +static void hisi_spi_writer(struct hisi_spi *hs) +{ + u32 max = min_t(u32, hs->tx_len, hs->fifo_len); + u32 txw = 0; + + while (hisi_spi_tx_not_full(hs) && max--) { + /* Check the transfer's original "tx" is not null */ + if (hs->tx) { + switch (hs->n_bytes) { + case HISI_SPI_N_BYTES_U8: + txw = *(u8 *)(hs->tx); + break; + case HISI_SPI_N_BYTES_U16: + txw = *(u16 *)(hs->tx); + break; + case HISI_SPI_N_BYTES_U32: + txw = *(u32 *)(hs->tx); + break; + } + hs->tx += hs->n_bytes; + } + writel(txw, hs->regs + HISI_SPI_DIN); + --hs->tx_len; + } +} + +static void __hisi_calc_div_reg(struct hisi_chip_data *chip) +{ + chip->div_pre = DIV_PRE_MAX; + while (chip->div_pre >= DIV_PRE_MIN) { + if (chip->clk_div % chip->div_pre == 0) + break; + + chip->div_pre -= 2; + } + + if (chip->div_pre > chip->clk_div) + chip->div_pre = chip->clk_div; + + chip->div_post = (chip->clk_div / chip->div_pre) - 1; +} + +static u32 hisi_calc_effective_speed(struct spi_controller *master, + struct hisi_chip_data *chip, u32 speed_hz) +{ + u32 effective_speed; + + /* Note clock divider doesn't support odd numbers */ + chip->clk_div = DIV_ROUND_UP(master->max_speed_hz, speed_hz) + 1; + chip->clk_div &= 0xfffe; + if (chip->clk_div > CLK_DIV_MAX) + chip->clk_div = CLK_DIV_MAX; + + effective_speed = master->max_speed_hz / chip->clk_div; + if (chip->speed_hz != effective_speed) { + __hisi_calc_div_reg(chip); + chip->speed_hz = effective_speed; + } + + return effective_speed; +} + +static u32 hisi_spi_prepare_cr(struct spi_device *spi) +{ + u32 cr = FIELD_PREP(CR_SPD_MODE_MASK, 1); + + cr |= FIELD_PREP(CR_CPHA_MASK, (spi->mode & SPI_CPHA) ? 
1 : 0); + cr |= FIELD_PREP(CR_CPOL_MASK, (spi->mode & SPI_CPOL) ? 1 : 0); + cr |= FIELD_PREP(CR_LOOP_MASK, (spi->mode & SPI_LOOP) ? 1 : 0); + + return cr; +} + +static void hisi_spi_hw_init(struct hisi_spi *hs) +{ + hisi_spi_disable(hs); + + /* FIFO default config */ + writel(FIELD_PREP(FIFOC_TX_MASK, HISI_SPI_TX_64_OR_LESS) | + FIELD_PREP(FIFOC_RX_MASK, HISI_SPI_RX_16), + hs->regs + HISI_SPI_FIFOC); + + hs->fifo_len = 256; +} + +static irqreturn_t hisi_spi_irq(int irq, void *dev_id) +{ + struct spi_controller *master = dev_id; + struct hisi_spi *hs = spi_controller_get_devdata(master); + u32 irq_status = readl(hs->regs + HISI_SPI_ISR) & ISR_MASK; + + if (!irq_status) + return IRQ_NONE; + + if (!master->cur_msg) + return IRQ_HANDLED; + + /* Error handling */ + if (irq_status & ISR_RXOF) { + dev_err(hs->dev, "interrupt_transfer: fifo overflow\n"); + master->cur_msg->status = -EIO; + goto finalize_transfer; + } + + /* + * Read data from the Rx FIFO every time. If there is + * nothing left to receive, finalize the transfer. + */ + hisi_spi_reader(hs); + if (!hs->rx_len) + goto finalize_transfer; + + /* Send data out when Tx FIFO IRQ triggered */ + if (irq_status & ISR_TX) + hisi_spi_writer(hs); + + return IRQ_HANDLED; + +finalize_transfer: + hisi_spi_disable(hs); + spi_finalize_current_transfer(master); + return IRQ_HANDLED; +} + +static int hisi_spi_transfer_one(struct spi_controller *master, + struct spi_device *spi, struct spi_transfer *transfer) +{ + struct hisi_spi *hs = spi_controller_get_devdata(master); + struct hisi_chip_data *chip = spi_get_ctldata(spi); + u32 cr = chip->cr; + + /* Update per transfer options for speed and bpw */ + transfer->effective_speed_hz = + hisi_calc_effective_speed(master, chip, transfer->speed_hz); + cr |= FIELD_PREP(CR_DIV_PRE_MASK, chip->div_pre); + cr |= FIELD_PREP(CR_DIV_POST_MASK, chip->div_post); + cr |= FIELD_PREP(CR_BPW_MASK, transfer->bits_per_word - 1); + writel(cr, hs->regs + HISI_SPI_CR); + + hisi_spi_flush_fifo(hs); + + hs->n_bytes = hisi_spi_n_bytes(transfer); + hs->tx = transfer->tx_buf; + hs->tx_len = transfer->len / hs->n_bytes; + hs->rx = transfer->rx_buf; + hs->rx_len = hs->tx_len; + + /* + * Ensure that the transfer data above has been updated + * before the interrupt to start. + */ + smp_mb(); + + /* Enable all interrupts and the controller */ + writel(~IMR_MASK, hs->regs + HISI_SPI_IMR); + writel(1, hs->regs + HISI_SPI_ENR); + + return 1; +} + +static void hisi_spi_handle_err(struct spi_controller *master, + struct spi_message *msg) +{ + struct hisi_spi *hs = spi_controller_get_devdata(master); + + hisi_spi_disable(hs); + + /* + * Wait for interrupt handler that is + * already in timeout to complete. 
+ */ + msleep(HISI_SPI_WAIT_TIMEOUT_MS); +} + +static int hisi_spi_setup(struct spi_device *spi) +{ + struct hisi_chip_data *chip; + + /* Only alloc on first setup */ + chip = spi_get_ctldata(spi); + if (!chip) { + chip = kzalloc(sizeof(*chip), GFP_KERNEL); + if (!chip) + return -ENOMEM; + spi_set_ctldata(spi, chip); + } + + chip->cr = hisi_spi_prepare_cr(spi); + + return 0; +} + +static void hisi_spi_cleanup(struct spi_device *spi) +{ + struct hisi_chip_data *chip = spi_get_ctldata(spi); + + kfree(chip); + spi_set_ctldata(spi, NULL); +} + +static int hisi_spi_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct spi_controller *master; + struct hisi_spi *hs; + int ret, irq; + + irq = platform_get_irq(pdev, 0); + if (irq < 0) + return irq; + + master = devm_spi_alloc_master(dev, sizeof(*hs)); + if (!master) + return -ENOMEM; + + platform_set_drvdata(pdev, master); + + hs = spi_controller_get_devdata(master); + hs->dev = dev; + hs->irq = irq; + + hs->regs = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(hs->regs)) + return PTR_ERR(hs->regs); + + /* Specify maximum SPI clocking speed (master only) by firmware */ + ret = device_property_read_u32(dev, "spi-max-frequency", + &master->max_speed_hz); + if (ret) { + dev_err(dev, "failed to get max SPI clocking speed, ret=%d\n", + ret); + return -EINVAL; + } + + ret = device_property_read_u16(dev, "num-cs", + &master->num_chipselect); + if (ret) + master->num_chipselect = DEFAULT_NUM_CS; + + master->use_gpio_descriptors = true; + master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP; + master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); + master->bus_num = pdev->id; + master->setup = hisi_spi_setup; + master->cleanup = hisi_spi_cleanup; + master->transfer_one = hisi_spi_transfer_one; + master->handle_err = hisi_spi_handle_err; + master->dev.fwnode = dev->fwnode; + + hisi_spi_hw_init(hs); + + ret = devm_request_irq(dev, hs->irq, hisi_spi_irq, 0, dev_name(dev), + master); + if (ret < 0) { + dev_err(dev, "failed to get IRQ=%d, ret=%d\n", hs->irq, ret); + return ret; + } + + ret = spi_register_controller(master); + if (ret) { + dev_err(dev, "failed to register spi master, ret=%d\n", ret); + return ret; + } + + dev_info(dev, "hw version:0x%x max-freq:%u kHz\n", + readl(hs->regs + HISI_SPI_VERSION), + master->max_speed_hz / 1000); + + return 0; +} + +static int hisi_spi_remove(struct platform_device *pdev) +{ + struct spi_controller *master = platform_get_drvdata(pdev); + + spi_unregister_controller(master); + + return 0; +} + +static const struct acpi_device_id hisi_spi_acpi_match[] = { + {"HISI03E1", 0}, + {} +}; +MODULE_DEVICE_TABLE(acpi, hisi_spi_acpi_match); + +static struct platform_driver hisi_spi_driver = { + .probe = hisi_spi_probe, + .remove = hisi_spi_remove, + .driver = { + .name = "hisi-kunpeng-spi", + .acpi_match_table = hisi_spi_acpi_match, + }, +}; +module_platform_driver(hisi_spi_driver); + +MODULE_AUTHOR("Jay Fang <f.fangjian(a)huawei.com>"); +MODULE_DESCRIPTION("HiSilicon SPI Controller Driver for Kunpeng SoCs"); +MODULE_LICENSE("GPL v2"); -- 2.20.1
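To make the divider scheme in hisi_calc_effective_speed() and __hisi_calc_div_reg() concrete, here is an illustrative walk-through assuming a hypothetical 200 MHz controller input clock (the real value comes from the firmware's spi-max-frequency property) and a device requesting 10 MHz: clk_div = DIV_ROUND_UP(200 MHz, 10 MHz) + 1 = 21, which is masked down to the even value 20, giving an effective speed of 200 MHz / 20 = 10 MHz; the register fields are then div_pre = 20 (the largest even value not above 254 that divides clk_div) and div_post = 20 / 20 - 1 = 0, which satisfies clk_div = (1 + div_post) * div_pre.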
[PATCH openEuler-1.0-LTS] mm/vmscan: setup drop_caches_loop_limit in cmdline
by Yang Yingliang 06 Aug '21

From: Liu Shixin <liushixin2(a)huawei.com>

hulk inclusion
category: bugfix
bugzilla: 175105
CVE: NA

-------------------------------------------------

When !CONFIG_SYSCTL, the drop_caches_loop_limit knob is not available.
Add the command-line parameter "drop_caches_loop_limit=" so it can
still be set. This parameter limits the number of loops per node in
drop_slab_node().

Fixes: 90394d30702e ("mm/vmscan: add drop_caches_loop_limit to break loop in drop_slab_node")
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 fs/drop_caches.c   |  1 -
 include/linux/mm.h |  2 +-
 kernel/sysctl.c    |  2 +-
 mm/vmscan.c        | 19 +++++++++++++------
 4 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/fs/drop_caches.c b/fs/drop_caches.c
index 1f866b32cd150..dc1a1d5d825b4 100644
--- a/fs/drop_caches.c
+++ b/fs/drop_caches.c
@@ -13,7 +13,6 @@

 /* A global variable is a bit ugly, but it keeps the code simple */
 int sysctl_drop_caches;
-unsigned int sysctl_drop_caches_loop_limit __read_mostly;

 static void drop_pagecache_sb(struct super_block *sb, void *unused)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index db53f49e13b91..d3d62cd3ee07c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2769,7 +2769,7 @@ extern bool process_shares_mm(struct task_struct *p, struct mm_struct *mm);

 #ifdef CONFIG_SYSCTL
 extern int sysctl_drop_caches;
-extern unsigned int sysctl_drop_caches_loop_limit;
+extern unsigned int drop_caches_loop_limit;
 int drop_caches_sysctl_handler(struct ctl_table *, int, void __user *,
                size_t *, loff_t *);
 #endif
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 1733b8c71b117..91d4fe5b2770f 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1479,7 +1479,7 @@ static struct ctl_table vm_table[] = {
     },
     {
         .procname   = "drop_caches_loop_limit",
-        .data       = &sysctl_drop_caches_loop_limit,
+        .data       = &drop_caches_loop_limit,
         .maxlen     = sizeof(unsigned int),
         .mode       = 0644,
         .proc_handler   = proc_douintvec,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3d7716e2e2c66..b5f5a366d155f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -723,12 +723,21 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
     return freed;
 }

+unsigned int drop_caches_loop_limit __read_mostly;
+static int __init drop_caches_loop_limit_setup(char *s)
+{
+    int ret = kstrtouint(s, 10, &drop_caches_loop_limit);
+
+    if (ret)
+        pr_info("Parse drop_caches_loop_limit failed: ret: %d\n", ret);
+    return 1;
+}
+__setup("drop_caches_loop_limit=", drop_caches_loop_limit_setup);
+
 void drop_slab_node(int nid)
 {
     unsigned long freed;
-#ifdef CONFIG_SYSCTL
     unsigned int counts = 0;
-#endif

     do {
         struct mem_cgroup *memcg = NULL;
@@ -742,17 +751,15 @@ void drop_slab_node(int nid)
             freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
         } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);

-#ifdef CONFIG_SYSCTL
-        if (unlikely(sysctl_drop_caches_loop_limit)) {
+        if (unlikely(drop_caches_loop_limit)) {
             counts++;
-            if (counts >= sysctl_drop_caches_loop_limit) {
+            if (counts >= drop_caches_loop_limit) {
                 pr_info("%s (%d): drop_caches early break: %u loops\n",
                     current->comm, task_pid_nr(current), counts);
                 return;
             }
         }
-#endif
     } while (freed > 10);
 }
--
2.25.1
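A minimal usage sketch (the limit of 10 is an arbitrary value chosen for illustration):

    # on the kernel command line -- works even when CONFIG_SYSCTL is disabled
    drop_caches_loop_limit=10

    # when CONFIG_SYSCTL is enabled, the same knob remains available at runtime
    echo 10 > /proc/sys/vm/drop_caches_loop_limit

    # a later slab drop then bails out after at most 10 memcg loops per node
    echo 3 > /proc/sys/vm/drop_caches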
[PATCH kernel-4.19] mm/vmscan: setup drop_caches_loop_limit in cmdline
by Yang Yingliang 06 Aug '21

From: Liu Shixin <liushixin2(a)huawei.com>

hulk inclusion
category: bugfix
bugzilla: 175105
CVE: NA

-------------------------------------------------

When !CONFIG_SYSCTL, the drop_caches_loop_limit knob is not available.
Add the command-line parameter "drop_caches_loop_limit=" so it can
still be set. This parameter limits the number of loops per node in
drop_slab_node().

Fixes: 90394d30702e ("mm/vmscan: add drop_caches_loop_limit to break loop in drop_slab_node")
Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 fs/drop_caches.c   |  1 -
 include/linux/mm.h |  2 +-
 kernel/sysctl.c    |  2 +-
 mm/vmscan.c        | 19 +++++++++++++------
 4 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/fs/drop_caches.c b/fs/drop_caches.c
index 1f866b32cd150..dc1a1d5d825b4 100644
--- a/fs/drop_caches.c
+++ b/fs/drop_caches.c
@@ -13,7 +13,6 @@

 /* A global variable is a bit ugly, but it keeps the code simple */
 int sysctl_drop_caches;
-unsigned int sysctl_drop_caches_loop_limit __read_mostly;

 static void drop_pagecache_sb(struct super_block *sb, void *unused)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6d457e38fec7f..8b5f9b8fb1a34 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2790,7 +2790,7 @@ extern bool process_shares_mm(struct task_struct *p, struct mm_struct *mm);

 #ifdef CONFIG_SYSCTL
 extern int sysctl_drop_caches;
-extern unsigned int sysctl_drop_caches_loop_limit;
+extern unsigned int drop_caches_loop_limit;
 int drop_caches_sysctl_handler(struct ctl_table *, int, void __user *,
                size_t *, loff_t *);
 #endif
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 60d899cb8b4e6..0be7d2747888c 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1489,7 +1489,7 @@ static struct ctl_table vm_table[] = {
     },
     {
         .procname   = "drop_caches_loop_limit",
-        .data       = &sysctl_drop_caches_loop_limit,
+        .data       = &drop_caches_loop_limit,
         .maxlen     = sizeof(unsigned int),
         .mode       = 0644,
         .proc_handler   = proc_douintvec,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 79b5ab1c06f68..226bd89f2d00e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -723,12 +723,21 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
     return freed;
 }

+unsigned int drop_caches_loop_limit __read_mostly;
+static int __init drop_caches_loop_limit_setup(char *s)
+{
+    int ret = kstrtouint(s, 10, &drop_caches_loop_limit);
+
+    if (ret)
+        pr_info("Parse drop_caches_loop_limit failed: ret: %d\n", ret);
+    return 1;
+}
+__setup("drop_caches_loop_limit=", drop_caches_loop_limit_setup);
+
 void drop_slab_node(int nid)
 {
     unsigned long freed;
-#ifdef CONFIG_SYSCTL
     unsigned int counts = 0;
-#endif

     do {
         struct mem_cgroup *memcg = NULL;
@@ -742,17 +751,15 @@ void drop_slab_node(int nid)
             freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
         } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);

-#ifdef CONFIG_SYSCTL
-        if (unlikely(sysctl_drop_caches_loop_limit)) {
+        if (unlikely(drop_caches_loop_limit)) {
             counts++;
-            if (counts >= sysctl_drop_caches_loop_limit) {
+            if (counts >= drop_caches_loop_limit) {
                 pr_info("%s (%d): drop_caches early break: %u loops\n",
                     current->comm, task_pid_nr(current), counts);
                 return;
             }
         }
-#endif
     } while (freed > 10);
 }
--
2.25.1
[PATCH kernel-4.19] mm/memcg: optimize memory.numa_stat like memory.stat
by Yang Yingliang 06 Aug '21

From: Shakeel Butt <shakeelb(a)google.com>

mainline inclusion
from mainline-v5.8-rc1
commit dd8657b6c1cb5e65b13445b4a038736e81cf80ea
category: bugfix
CVE: NA

--------------------------------

Currently reading memory.numa_stat traverses the underlying memcg tree
multiple times to accumulate the stats to present the hierarchical view
of the memcg tree. However, the kernel already maintains the
hierarchical view of the stats and uses it in memory.stat. Just use the
same mechanism in memory.numa_stat as well.

I ran a simple benchmark which reads root_mem_cgroup's memory.numa_stat
file in the presence of 10000 memcgs. The results are:

Without the patch:
$ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
real    0m0.700s
user    0m0.001s
sys     0m0.697s

With the patch:
$ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
real    0m0.001s
user    0m0.001s
sys     0m0.000s

[akpm(a)linux-foundation.org: avoid forcing out-of-line code generation]
Signed-off-by: Shakeel Butt <shakeelb(a)google.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Reviewed-by: Andrew Morton <akpm(a)linux-foundation.org>
Acked-by: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: Roman Gushchin <guro(a)fb.com>
Cc: Michal Hocko <mhocko(a)kernel.org>
Link: http://lkml.kernel.org/r/20200304022058.248270-1-shakeelb@google.com
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>

Conflicts:
    mm/memcontrol.c

Signed-off-by: Jing Xiangfeng <jingxiangfeng(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 mm/memcontrol.c | 51 +++++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 25 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ef742a42f105f..3f824fd2b6609 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3825,7 +3825,7 @@ int sysctl_memcg_qos_handler(struct ctl_table *table, int write,
 #define LRU_ALL ((1 << NR_LRU_LISTS) - 1)

 static unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
-                    int nid, unsigned int lru_mask)
+                    int nid, unsigned int lru_mask, bool tree)
 {
     struct lruvec *lruvec = mem_cgroup_lruvec(NODE_DATA(nid), memcg);
     unsigned long nr = 0;
@@ -3836,13 +3836,17 @@ static unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
     for_each_lru(lru) {
         if (!(BIT(lru) & lru_mask))
             continue;
-        nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
+        if (tree)
+            nr += lruvec_page_state(lruvec, NR_LRU_BASE + lru);
+        else
+            nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
     }
     return nr;
 }

 static unsigned long mem_cgroup_nr_lru_pages(struct mem_cgroup *memcg,
-                         unsigned int lru_mask)
+                         unsigned int lru_mask,
+                         bool tree)
 {
     unsigned long nr = 0;
     enum lru_list lru;
@@ -3850,7 +3854,10 @@ static unsigned long mem_cgroup_nr_lru_pages(struct mem_cgroup *memcg,
     for_each_lru(lru) {
         if (!(BIT(lru) & lru_mask))
             continue;
-        nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
+        if (tree)
+            nr += memcg_page_state(memcg, NR_LRU_BASE + lru);
+        else
+            nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
     }
     return nr;
 }
@@ -3870,34 +3877,28 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
     };
     const struct numa_stat *stat;
     int nid;
-    unsigned long nr;
     struct mem_cgroup *memcg = mem_cgroup_from_seq(m);

     for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
-        nr = mem_cgroup_nr_lru_pages(memcg, stat->lru_mask);
-        seq_printf(m, "%s=%lu", stat->name, nr);
-        for_each_node_state(nid, N_MEMORY) {
-            nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
-                              stat->lru_mask);
-            seq_printf(m, " N%d=%lu", nid, nr);
-        }
+        seq_printf(m, "%s=%lu", stat->name,
+               mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
+                           false));
+        for_each_node_state(nid, N_MEMORY)
+            seq_printf(m, " N%d=%lu", nid,
+                   mem_cgroup_node_nr_lru_pages(memcg, nid,
+                            stat->lru_mask, false));
         seq_putc(m, '\n');
     }

     for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
-        struct mem_cgroup *iter;
-
-        nr = 0;
-        for_each_mem_cgroup_tree(iter, memcg)
-            nr += mem_cgroup_nr_lru_pages(iter, stat->lru_mask);
-        seq_printf(m, "hierarchical_%s=%lu", stat->name, nr);
-        for_each_node_state(nid, N_MEMORY) {
-            nr = 0;
-            for_each_mem_cgroup_tree(iter, memcg)
-                nr += mem_cgroup_node_nr_lru_pages(
-                    iter, nid, stat->lru_mask);
-            seq_printf(m, " N%d=%lu", nid, nr);
-        }
+
+        seq_printf(m, "hierarchical_%s=%lu", stat->name,
+               mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
+                           true));
+        for_each_node_state(nid, N_MEMORY)
+            seq_printf(m, " N%d=%lu", nid,
+                   mem_cgroup_node_nr_lru_pages(memcg, nid,
+                            stat->lru_mask, true));
         seq_putc(m, '\n');
     }
--
2.25.1
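For reference, the user-visible format of memory.numa_stat is unchanged by this patch; only the way the counts are accumulated differs. Each statistic is printed as a global value followed by per-node values, with the hierarchical_* lines produced by the second loop above. A sketch of the shape of the output (the numbers are invented, and the exact stat names come from the stats[] table that is outside this hunk):

    total=2048 N0=1024 N1=1024
    ...
    hierarchical_total=8192 N0=4096 N1=4096
    ...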
[PATCH openEuler-1.0-LTS] mm/memcg: optimize memory.numa_stat like memory.stat
by Yang Yingliang 06 Aug '21

From: Shakeel Butt <shakeelb(a)google.com>

mainline inclusion
from mainline-v5.8-rc1
commit dd8657b6c1cb5e65b13445b4a038736e81cf80ea
CVE: NA

--------------------------------

Currently reading memory.numa_stat traverses the underlying memcg tree
multiple times to accumulate the stats to present the hierarchical view
of the memcg tree. However, the kernel already maintains the
hierarchical view of the stats and uses it in memory.stat. Just use the
same mechanism in memory.numa_stat as well.

I ran a simple benchmark which reads root_mem_cgroup's memory.numa_stat
file in the presence of 10000 memcgs. The results are:

Without the patch:
$ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
real    0m0.700s
user    0m0.001s
sys     0m0.697s

With the patch:
$ time cat /dev/cgroup/memory/memory.numa_stat > /dev/null
real    0m0.001s
user    0m0.001s
sys     0m0.000s

[akpm(a)linux-foundation.org: avoid forcing out-of-line code generation]
Signed-off-by: Shakeel Butt <shakeelb(a)google.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Reviewed-by: Andrew Morton <akpm(a)linux-foundation.org>
Acked-by: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: Roman Gushchin <guro(a)fb.com>
Cc: Michal Hocko <mhocko(a)kernel.org>
Link: http://lkml.kernel.org/r/20200304022058.248270-1-shakeelb@google.com
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>

Conflicts:
    mm/memcontrol.c

Signed-off-by: Jing Xiangfeng <jingxiangfeng(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 mm/memcontrol.c | 51 +++++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 25 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e55b46d5d0fcb..0bccf2b8cc599 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3742,7 +3742,7 @@ int sysctl_memcg_qos_handler(struct ctl_table *table, int write,
 #define LRU_ALL ((1 << NR_LRU_LISTS) - 1)

 static unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
-                    int nid, unsigned int lru_mask)
+                    int nid, unsigned int lru_mask, bool tree)
 {
     struct lruvec *lruvec = mem_cgroup_lruvec(NODE_DATA(nid), memcg);
     unsigned long nr = 0;
@@ -3753,13 +3753,17 @@ static unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg,
     for_each_lru(lru) {
         if (!(BIT(lru) & lru_mask))
             continue;
-        nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
+        if (tree)
+            nr += lruvec_page_state(lruvec, NR_LRU_BASE + lru);
+        else
+            nr += lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
     }
     return nr;
 }

 static unsigned long mem_cgroup_nr_lru_pages(struct mem_cgroup *memcg,
-                         unsigned int lru_mask)
+                         unsigned int lru_mask,
+                         bool tree)
 {
     unsigned long nr = 0;
     enum lru_list lru;
@@ -3767,7 +3771,10 @@ static unsigned long mem_cgroup_nr_lru_pages(struct mem_cgroup *memcg,
     for_each_lru(lru) {
         if (!(BIT(lru) & lru_mask))
             continue;
-        nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
+        if (tree)
+            nr += memcg_page_state(memcg, NR_LRU_BASE + lru);
+        else
+            nr += memcg_page_state_local(memcg, NR_LRU_BASE + lru);
     }
     return nr;
 }
@@ -3787,34 +3794,28 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
     };
     const struct numa_stat *stat;
     int nid;
-    unsigned long nr;
     struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m));

     for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
-        nr = mem_cgroup_nr_lru_pages(memcg, stat->lru_mask);
-        seq_printf(m, "%s=%lu", stat->name, nr);
-        for_each_node_state(nid, N_MEMORY) {
-            nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
-                              stat->lru_mask);
-            seq_printf(m, " N%d=%lu", nid, nr);
-        }
+        seq_printf(m, "%s=%lu", stat->name,
+               mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
+                           false));
+        for_each_node_state(nid, N_MEMORY)
+            seq_printf(m, " N%d=%lu", nid,
+                   mem_cgroup_node_nr_lru_pages(memcg, nid,
+                            stat->lru_mask, false));
         seq_putc(m, '\n');
     }

     for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
-        struct mem_cgroup *iter;
-
-        nr = 0;
-        for_each_mem_cgroup_tree(iter, memcg)
-            nr += mem_cgroup_nr_lru_pages(iter, stat->lru_mask);
-        seq_printf(m, "hierarchical_%s=%lu", stat->name, nr);
-        for_each_node_state(nid, N_MEMORY) {
-            nr = 0;
-            for_each_mem_cgroup_tree(iter, memcg)
-                nr += mem_cgroup_node_nr_lru_pages(
-                    iter, nid, stat->lru_mask);
-            seq_printf(m, " N%d=%lu", nid, nr);
-        }
+
+        seq_printf(m, "hierarchical_%s=%lu", stat->name,
+               mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
+                           true));
+        for_each_node_state(nid, N_MEMORY)
+            seq_printf(m, " N%d=%lu", nid,
+                   mem_cgroup_node_nr_lru_pages(memcg, nid,
+                            stat->lru_mask, true));
         seq_putc(m, '\n');
     }
--
2.25.1
[PATCH openEuler-21.03] net: mdiobus: get rid of a BUG_ON()
by wangqing 06 Aug '21

From: Dan Carpenter <dan.carpenter(a)oracle.com>

stable inclusion
from stable-v5.10.44
commit be23c4af3d8a1b986fe9b43b8966797653a76ca4
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=341
CVE: NA

--------------------------------

[ Upstream commit 1dde47a66d4fb181830d6fa000e5ea86907b639e ]

We spotted a bug recently during a review where a driver was
unregistering a bus that wasn't registered, which would trigger this
BUG_ON(). Let's handle that situation more gracefully, and just print
a warning and return.

Reported-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel(a)armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew(a)lunn.ch>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: wangqing <wangqing(a)uniontech.com>
---
 drivers/net/phy/mdio_bus.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
index 757e950fb745..b848439fa837 100644
--- a/drivers/net/phy/mdio_bus.c
+++ b/drivers/net/phy/mdio_bus.c
@@ -608,7 +608,8 @@ void mdiobus_unregister(struct mii_bus *bus)
     struct mdio_device *mdiodev;
     int i;

-    BUG_ON(bus->state != MDIOBUS_REGISTERED);
+    if (WARN_ON_ONCE(bus->state != MDIOBUS_REGISTERED))
+        return;
     bus->state = MDIOBUS_UNREGISTERED;

     for (i = 0; i < PHY_MAX_ADDR; i++) {
--
2.23.0

Powered by HyperKitty