Kernel

kernel@openeuler.org

  • 56 participants
  • 18796 discussions
[PATCH OLK-5.10] fs: Fix error checking for d_hash_and_lookup()
by Zizhi Wo 04 Dec '23

From: Wang Ming <machel(a)vivo.com>

mainline inclusion
from mainline-v6.6-rc1
commit 0d5a4f8f775ff990142cdc810a84eae078589d27
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8LNLY?from=project-issue
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

The d_hash_and_lookup() function returns error pointers or NULL.
Most incorrect error checks were fixed, but the one in int path_pts()
was forgotten.

Fixes: eedf265aa003 ("devpts: Make each mount of devpts an independent filesystem.")
Signed-off-by: Wang Ming <machel(a)vivo.com>
Message-Id: <20230713120555.7025-1-machel(a)vivo.com>
Signed-off-by: Christian Brauner <brauner(a)kernel.org>
Signed-off-by: Zizhi Wo <wozizhi(a)huawei.com>
---
 fs/namei.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/namei.c b/fs/namei.c
index ff140e2943a0..3588e12d6130 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -2647,7 +2647,7 @@ int path_pts(struct path *path)
     dput(path->dentry);
     path->dentry = parent;
     child = d_hash_and_lookup(parent, &this);
-    if (!child)
+    if (IS_ERR_OR_NULL(child))
         return -ENOENT;
     path->dentry = child;
--
2.39.2
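The fix above hinges on d_hash_and_lookup() having two distinct failure modes: it can return NULL (not found) or an error pointer. The following standalone userspace C sketch re-implements the kernel's error-pointer helpers purely for illustration (the stub names and values are assumptions, not kernel code) and shows why a bare NULL check lets an ERR_PTR() return slip through:

#include <errno.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel's error-pointer helpers. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
static inline int IS_ERR_OR_NULL(const void *ptr)
{
    return !ptr || IS_ERR(ptr);
}

/* Stand-in for a lookup like d_hash_and_lookup(): may return a valid
 * pointer, NULL, or an error pointer. */
static int valid_object;
static void *lookup_stub(int mode)
{
    if (mode == 0)
        return NULL;                 /* not found */
    if (mode == 1)
        return ERR_PTR(-ENOMEM);     /* internal failure: non-NULL! */
    return &valid_object;            /* success */
}

int main(void)
{
    for (int mode = 0; mode < 3; mode++) {
        void *child = lookup_stub(mode);

        /* A bare "if (!child)" would treat ERR_PTR(-ENOMEM) as success. */
        if (IS_ERR_OR_NULL(child))
            printf("mode %d: lookup failed (%ld)\n",
                   mode, IS_ERR(child) ? PTR_ERR(child) : (long)-ENOENT);
        else
            printf("mode %d: lookup succeeded\n", mode);
    }
    return 0;
}

Built with any C compiler, the mode 1 case reports a failure even though the returned pointer is non-NULL; that is exactly the case path_pts() previously mistook for success.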
[PATCH openEuler-1.0-LTS] net: prevent rewrite of msg_name in sock_sendmsg()
by Lu Wei 04 Dec '23

From: Jordan Rife <jrife(a)google.com>

stable inclusion
from stable-v4.19.297
commit 3d62f2577c7414e5b94e4baad6eb621165960f30
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I8EZY4
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

[ Upstream commit 86a7e0b69bd5b812e48a20c66c2161744f3caa16 ]

Callers of sock_sendmsg(), and similarly kernel_sendmsg(), in kernel
space may observe their value of msg_name change in cases where BPF
sendmsg hooks rewrite the send address. This has been confirmed to break
NFS mounts running in UDP mode and has the potential to break other
systems.

This patch:

1) Creates a new function called __sock_sendmsg() with same logic as
   the old sock_sendmsg() function.
2) Replaces calls to sock_sendmsg() made by __sys_sendto() and
   __sys_sendmsg() with __sock_sendmsg() to avoid an unnecessary copy,
   as these system calls are already protected.
3) Modifies sock_sendmsg() so that it makes a copy of msg_name if
   present before passing it down the stack to insulate callers from
   changes to the send address.

Link: https://lore.kernel.org/netdev/20230912013332.2048422-1-jrife@google.com/
Fixes: 1cedee13d25a ("bpf: Hooks for sys_sendmsg")
Cc: stable(a)vger.kernel.org
Reviewed-by: Willem de Bruijn <willemb(a)google.com>
Signed-off-by: Jordan Rife <jrife(a)google.com>
Reviewed-by: Simon Horman <horms(a)kernel.org>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>

conflict:
  net/socket.c

Signed-off-by: Lu Wei <luwei32(a)huawei.com>
---
 net/socket.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/net/socket.c b/net/socket.c
index 3d3632fe4a34..0cc52ba0b0e2 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -637,6 +637,14 @@ void __sock_tx_timestamp(__u16 tsflags, __u8 *tx_flags)
 }
 EXPORT_SYMBOL(__sock_tx_timestamp);

+static int __sock_sendmsg(struct socket *sock, struct msghdr *msg)
+{
+    int err = security_socket_sendmsg(sock, msg,
+                                      msg_data_left(msg));
+
+    return err ?: sock_sendmsg_nosec(sock, msg);
+}
+
 /**
  * sock_sendmsg - send a message through @sock
  * @sock: socket
@@ -655,10 +663,19 @@ static inline int sock_sendmsg_nosec(struct socket *sock, struct msghdr *msg)

 int sock_sendmsg(struct socket *sock, struct msghdr *msg)
 {
-    int err = security_socket_sendmsg(sock, msg,
-                                      msg_data_left(msg));
+    struct sockaddr_storage *save_addr = (struct sockaddr_storage *)msg->msg_name;
+    struct sockaddr_storage address;
+    int ret;

-    return err ?: sock_sendmsg_nosec(sock, msg);
+    if (msg->msg_name) {
+        memcpy(&address, msg->msg_name, msg->msg_namelen);
+        msg->msg_name = &address;
+    }
+
+    ret = __sock_sendmsg(sock, msg);
+    msg->msg_name = save_addr;
+
+    return ret;
 }
 EXPORT_SYMBOL(sock_sendmsg);

@@ -963,7 +980,7 @@ static ssize_t sock_write_iter(struct kiocb *iocb, struct iov_iter *from)
     if (sock->type == SOCK_SEQPACKET)
         msg.msg_flags |= MSG_EOR;

-    res = sock_sendmsg(sock, &msg);
+    res = __sock_sendmsg(sock, &msg);
     *from = msg.msg_iter;
     return res;
 }
@@ -1940,7 +1957,7 @@ int __sys_sendto(int fd, void __user *buff, size_t len, unsigned int flags,
     if (sock->file->f_flags & O_NONBLOCK)
         flags |= MSG_DONTWAIT;
     msg.msg_flags = flags;
-    err = sock_sendmsg(sock, &msg);
+    err = __sock_sendmsg(sock, &msg);

 out_put:
     fput_light(sock->file, fput_needed);
@@ -2272,7 +2289,7 @@ static int ____sys_sendmsg(struct socket *sock, struct msghdr *msg_sys,
         err = sock_sendmsg_nosec(sock, msg_sys);
         goto out_freectl;
     }
-    err = sock_sendmsg(sock, msg_sys);
+    err = __sock_sendmsg(sock, msg_sys);
     /*
      * If this is sendmmsg() and sending to current destination address was
      * successful, remember it.
--
2.34.1
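Item 3) above is the heart of the fix: lower layers (for example a BPF sendmsg hook) may legitimately rewrite the destination they are handed, so sock_sendmsg() now gives them a private copy and restores the caller's pointer afterwards. A minimal userspace sketch of that insulation pattern, with purely illustrative types and names (not the kernel implementation), looks like this:

#include <stdio.h>
#include <string.h>

struct msg {
    char *name;       /* caller-owned destination, may be NULL */
    size_t name_len;
};

/* Lower layer that may rewrite the destination it is given,
 * the way a BPF sendmsg hook can rewrite msg_name. */
static int lower_send(struct msg *m)
{
    if (m->name)
        strcpy(m->name, "rewritten");
    return 0;
}

/* Copy the address into a local scratch buffer, let lower layers
 * scribble on the copy, then restore the caller's pointer. */
static int safe_send(struct msg *m)
{
    char *saved = m->name;
    char scratch[64];
    int ret;

    if (m->name) {
        memcpy(scratch, m->name, m->name_len);
        m->name = scratch;
    }
    ret = lower_send(m);
    m->name = saved;
    return ret;
}

int main(void)
{
    char dest[64] = "original";
    struct msg m = { .name = dest, .name_len = sizeof(dest) };

    safe_send(&m);
    printf("caller still sees: %s\n", m.name);   /* prints "original" */
    return 0;
}

After safe_send() returns, the caller's pointer and buffer are exactly as it left them, which is the property the UDP NFS mounts mentioned above depend on.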
[PATCH OLK-6.6 0/2] arm64: errata: add option to disable cache readunique prefetch on HIP08
by Yang Yingliang 04 Dec '23

Random performance decreases appear in Hackbench cases that test pipe or
socket communication among multiple threads on the HiSilicon HIP08 SoC.
Cache sharing caused by the change of the data layout and the cache
readunique prefetch mechanism both lead to this problem. The readunique
mechanism, which may be triggered by store operations, invalidates
cachelines on other cores during the data fetching stage, so cacheline
invalidation happens frequently when data is shared across cores.
Disabling cache readunique prefetch tackles this problem.

Test cases are like:

for i in 20;do
    echo "--------pipe thread num=$i----------"
    for j in $(seq 1 10);do
        ./hackbench -pipe $i thread 1000
    done
done

We disable readunique prefetch only in EL2, because disabling readunique
prefetch in EL1 may cause a panic due to the lack of the related priority,
which is often set in BIOS.

Introduce CONFIG_HISILICON_ERRATUM_HIP08_RU_PREFETCH and disable RU
prefetch using the boot cmdline 'readunique_prefetch=off'.

Kai Shen (1):
  arm64: errata: add option to disable cache readunique prefetch on HIP08

Xie XiuQi (1):
  arm64: errata: enable HISILICON_ERRATUM_HIP08_RU_PREFETCH

 arch/arm64/Kconfig                     | 18 +++++++++
 arch/arm64/configs/openeuler_defconfig |  2 +
 arch/arm64/kernel/cpu_errata.c         | 56 ++++++++++++++++++++++++++
 arch/arm64/tools/cpucaps               |  1 +
 4 files changed, 77 insertions(+)

--
2.25.1
[PATCH openEuler-1.0-LTS] mm: don't let userspace spam allocations warnings
by Peng Zhang 04 Dec '23

From: Daniel Vetter <daniel.vetter(a)ffwll.ch>

mainline inclusion
from mainline-v5.0-rc8
commit 6c8fcc096be9d02f478c508052a41a4430506ab3
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8LMVO
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

memdump_user usually gets fed unchecked userspace input. Blasting a full
backtrace into dmesg every time is a bit excessive - I'm not sure on the
kernel rule in general, but at least in drm we're trying not to let
unpriviledge userspace spam the logs freely. Definitely not entire
warning backtraces. It also means more filtering for our CI, because our
testsuite exercises these corner cases and so hits these a lot.

Link: http://lkml.kernel.org/r/20190220204058.11676-1-daniel.vetter@ffwll.ch
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Reviewed-by: Andrew Morton <akpm(a)linux-foundation.org>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Reviewed-by: Kees Cook <keescook(a)chromium.org>
Cc: Mike Rapoport <rppt(a)linux.vnet.ibm.com>
Cc: Roman Gushchin <guro(a)fb.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Jan Stancek <jstancek(a)redhat.com>
Cc: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Cc: "Michael S. Tsirkin" <mst(a)redhat.com>
Cc: Huang Ying <ying.huang(a)intel.com>
Cc: Bartosz Golaszewski <brgl(a)bgdev.pl>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: ZhangPeng <zhangpeng362(a)huawei.com>
---
 mm/util.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/util.c b/mm/util.c
index 2b04ff4675d6..3a69fd0a15ca 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -160,7 +160,7 @@ void *memdup_user(const void __user *src, size_t len)
 {
     void *p;

-    p = kmalloc_track_caller(len, GFP_USER);
+    p = kmalloc_track_caller(len, GFP_USER | __GFP_NOWARN);
     if (!p)
         return ERR_PTR(-ENOMEM);
--
2.25.1
[PATCH OLK-6.6] mm: Add sysctl to clear free list pages
by Yu Liao 04 Dec '23

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SK3S
CVE: NA

--------------------------------

This patch adds a sysctl to clear the pages in the free lists of each
NUMA node. For each NUMA node, every page in its free lists is cleared,
and the work is scheduled on a random CPU of that node.

When KASAN is enabled and the pages are free, the shadow memory is
filled with 0xFF, so writing these free pages would trigger a UAF
report; KASAN is therefore disabled for clear_freelist.

On systems with large memory, clear_freelist may hold the zone lock for
a long time. As a result, other processes may be blocked until the
clear_freelist work exits, which can cause the system to be reset by the
watchdog. Provide a mechanism to stop the clear_freelist workers when
the elapsed time exceeds cfp_timeout, which can be set via
module_param().

Signed-off-by: Yu Liao <liaoyu15(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
---
 .../admin-guide/kernel-parameters.txt   |   3 +
 Documentation/admin-guide/sysctl/vm.rst |  13 ++
 mm/Kconfig                              |  13 ++
 mm/Makefile                             |   2 +
 mm/clear_freelist_page.c                | 187 ++++++++++++++++++
 5 files changed, 218 insertions(+)
 create mode 100644 mm/clear_freelist_page.c

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 41644336e358..97c91370b01c 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -618,6 +618,9 @@
             Also note the kernel might malfunction if you disable
             some critical bits.

+    clear_freelist
+            Enable clear_freelist feature.
+
     clk_ignore_unused [CLK]
             Prevents the clock framework from automatically gating

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 45ba1f4dc004..cc15ea0fc406 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -25,6 +25,7 @@ files can be found in mm/swap.c.
 Currently, these files are in /proc/sys/vm:

 - admin_reserve_kbytes
+- clear_freelist_pages
 - compact_memory
 - compaction_proactiveness
 - compact_unevictable_allowed
@@ -106,6 +107,18 @@
 On x86_64 this is about 128MB.

 Changing this takes effect whenever an application requests memory.

+clear_freelist_pages
+====================
+
+Available only when CONFIG_CLEAR_FREELIST_PAGE is set. When 1 is written to the
+file, all pages in free lists will be written with 0.
+
+Zone lock is held during clear_freelist_pages, if the execution time is too
+long, RCU CPU Stall warnings will be print. For each NUMA node,
+clear_freelist_pages is performed on a "random" CPU of the NUMA node.
+The time consuming is related to the hardware.
+
+
 compact_memory
 ==============

diff --git a/mm/Kconfig b/mm/Kconfig
index ece4f2847e2b..e652d267e87e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1269,6 +1269,19 @@ config LOCK_MM_AND_FIND_VMA
     bool
     depends on !STACK_GROWSUP

+config CLEAR_FREELIST_PAGE
+    bool "Support for clear free list pages"
+    depends on MMU && SYSCTL
+    default n
+    help
+      Say y here to enable the clear free list pages feature. When
+      writing to clear_freelist, trigger to clean up the free memory
+      of the buddy system.
+
+      To enable this feature, kernel parameter "clear_freelist" also
+      needs to be added.
+
 source "mm/damon/Kconfig"

 endmenu

diff --git a/mm/Makefile b/mm/Makefile
index ec65984e2ade..1dc5d64a54f0 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -7,6 +7,7 @@ KASAN_SANITIZE_slab_common.o := n
 KASAN_SANITIZE_slab.o := n
 KASAN_SANITIZE_slub.o := n
 KCSAN_SANITIZE_kmemleak.o := n
+KASAN_SANITIZE_clear_freelist_page.o := n

 # These produce frequent data race reports: most of them are due to races on
 # the same word but accesses to different bits of that word. Re-enable KCSAN
@@ -138,3 +139,4 @@ obj-$(CONFIG_IO_MAPPING) += io-mapping.o
 obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o
 obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
+obj-$(CONFIG_CLEAR_FREELIST_PAGE) += clear_freelist_page.o

diff --git a/mm/clear_freelist_page.c b/mm/clear_freelist_page.c
new file mode 100644
index 000000000000..50b7ec918bfb
--- /dev/null
+++ b/mm/clear_freelist_page.c
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Support for clear free list pages.
+ */
+
+#include <linux/mmzone.h>
+#include <linux/mm_types.h>
+#include <linux/mm.h>
+#include <linux/sysctl.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <linux/sched.h>
+#include <linux/atomic.h>
+#include <linux/nmi.h>
+#include <linux/sched/clock.h>
+#include <linux/module.h>
+
+#define CFP_DEFAULT_TIMEOUT 2000
+#define for_each_populated_zone_pgdat(pgdat, zone) \
+    for (zone = pgdat->node_zones; \
+         zone; \
+         zone = next_pgdat_zone(zone)) \
+        if (!populated_zone(zone)) \
+            ; /* do nothing */ \
+        else
+
+struct pgdat_entry {
+    struct pglist_data *pgdat;
+    struct work_struct work;
+};
+
+static DECLARE_WAIT_QUEUE_HEAD(clear_freelist_wait);
+static DEFINE_MUTEX(clear_freelist_lock);
+static atomic_t clear_freelist_workers;
+static atomic_t clear_pages_num;
+static ulong cfp_timeout_ms = CFP_DEFAULT_TIMEOUT;
+
+/*
+ * next_pgdat_zone - helper magic for for_each_populated_zone_pgdat()
+ */
+static struct zone *next_pgdat_zone(struct zone *zone)
+{
+    pg_data_t *pgdat = zone->zone_pgdat;
+
+    if (zone < pgdat->node_zones + MAX_NR_ZONES - 1)
+        zone++;
+    else
+        zone = NULL;
+    return zone;
+}
+
+static void clear_pgdat_freelist_pages(struct work_struct *work)
+{
+    struct pgdat_entry *entry = container_of(work, struct pgdat_entry, work);
+    u64 cfp_timeout_ns = cfp_timeout_ms * NSEC_PER_MSEC;
+    struct pglist_data *pgdat = entry->pgdat;
+    unsigned long flags, order, t;
+    struct page *page;
+    struct zone *zone;
+    u64 start, now;
+
+    start = sched_clock();
+
+    for_each_populated_zone_pgdat(pgdat, zone) {
+        spin_lock_irqsave(&zone->lock, flags);
+        for_each_migratetype_order(order, t) {
+            list_for_each_entry(page, &zone->free_area[order].free_list[t], lru) {
+                now = sched_clock();
+                if (unlikely(now - start > cfp_timeout_ns)) {
+                    spin_unlock_irqrestore(&zone->lock, flags);
+                    goto out;
+                }
+
+#ifdef CONFIG_KMAP_LOCAL
+                int i;
+
+                /* Clear highmem by clear_highpage() */
+                for (i = 0; i < (1 << order); i++)
+                    clear_highpage(page + i);
+#else
+                memset(page_address(page), 0, (1 << order) * PAGE_SIZE);
+#endif
+                touch_nmi_watchdog();
+                atomic_add(1 << order, &clear_pages_num);
+            }
+        }
+        spin_unlock_irqrestore(&zone->lock, flags);
+
+        cond_resched();
+    }
+
+out:
+    kfree(entry);
+
+    if (atomic_dec_and_test(&clear_freelist_workers))
+        wake_up(&clear_freelist_wait);
+}
+
+static void init_clear_freelist_work(struct pglist_data *pgdat)
+{
+    struct pgdat_entry *entry;
+
+    entry = kzalloc(sizeof(struct pgdat_entry), GFP_KERNEL);
+    if (!entry)
+        return;
+
+    entry->pgdat = pgdat;
+    INIT_WORK(&entry->work, clear_pgdat_freelist_pages);
+    queue_work_node(pgdat->node_id, system_unbound_wq, &entry->work);
+}
+
+static void clear_freelist_pages(void)
+{
+    struct pglist_data *pgdat;
+
+    mutex_lock(&clear_freelist_lock);
+    drain_all_pages(NULL);
+
+    for_each_online_pgdat(pgdat) {
+        atomic_inc(&clear_freelist_workers);
+        init_clear_freelist_work(pgdat);
+    }
+
+    wait_event(clear_freelist_wait, atomic_read(&clear_freelist_workers) == 0);
+
+    pr_debug("Cleared pages %d\nFree pages %lu\n", atomic_read(&clear_pages_num),
+         global_zone_page_state(NR_FREE_PAGES));
+    atomic_set(&clear_pages_num, 0);
+
+    mutex_unlock(&clear_freelist_lock);
+}
+
+static int sysctl_clear_freelist_handler(struct ctl_table *table, int write,
+        void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+    int ret;
+    int val;
+
+    table->data = &val;
+    ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+
+    if (!ret && write)
+        clear_freelist_pages();
+
+    return ret;
+}
+
+static struct ctl_table clear_freelist_table[] = {
+    {
+        .procname = "clear_freelist_pages",
+        .data = NULL,
+        .maxlen = sizeof(int),
+        .mode = 0200,
+        .proc_handler = &sysctl_clear_freelist_handler,
+        .extra1 = SYSCTL_ONE,
+        .extra2 = SYSCTL_ONE,
+    },
+    { }
+};
+
+static struct ctl_table sys_ctl_table[] = {
+    {
+        .procname = "vm",
+        .mode = 0555,
+        .child = clear_freelist_table,
+    },
+    { }
+};
+
+static bool clear_freelist_enabled;
+static int __init setup_clear_freelist(char *str)
+{
+    clear_freelist_enabled = true;
+    return 1;
+}
+__setup("clear_freelist", setup_clear_freelist);
+
+static int __init clear_freelist_init(void)
+{
+    if (clear_freelist_enabled)
+        register_sysctl_table(sys_ctl_table);
+
+    return 0;
+}
+module_init(clear_freelist_init);
+module_param(cfp_timeout_ms, ulong, 0644);
--
2.33.0
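For context, the knob introduced by this patch is only reachable on a kernel built with CONFIG_CLEAR_FREELIST_PAGE and booted with the clear_freelist parameter, and the sysctl file is write-only (mode 0200) and accepts only the value 1. A small userspace C sketch of how a privileged process would trigger it (the /proc path follows from the patch; the surrounding program is an assumption, not part of the series):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Path created by the patch; mode 0200, so root is required. */
    const char *path = "/proc/sys/vm/clear_freelist_pages";
    int fd = open(path, O_WRONLY);

    if (fd < 0) {
        /* Kernel lacks the patch, the config option, or the boot parameter. */
        perror("open");
        return 1;
    }
    /* The handler only accepts 1 (extra1 == extra2 == SYSCTL_ONE). */
    if (write(fd, "1", 1) != 1)
        perror("write");
    close(fd);
    return 0;
}

This is equivalent to running 'sysctl -w vm.clear_freelist_pages=1'; the cfp_timeout_ms module parameter bounds how long each per-node worker may keep clearing before it bails out.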
[PATCH OLK-6.6 0/2] xfs: fix intent item leak during recovery
by Long Li 04 Dec '23

This patch set fixes an intent item leak during recovery when intent
recovery fails.

Long Li (2):
  xfs: factor out xfs_defer_pending_abort
  xfs: abort intent items when recovery intents fail

 fs/xfs/libxfs/xfs_defer.c | 28 ++++++++++++++++++----------
 fs/xfs/libxfs/xfs_defer.h |  2 +-
 fs/xfs/xfs_log_recover.c  |  2 +-
 3 files changed, 20 insertions(+), 12 deletions(-)

--
2.31.1
[PATCH openEuler-22.03-LTS 0/6] backport Broadcom NIC driver patches
by Dapeng Yu 04 Dec '23

Backport the latest patches from upstream stable branch linux-5.10.y for
the Broadcom NIC driver.

Florian Fainelli (2):
  net: bcmgenet: Remove phy_stop() from bcmgenet_netif_stop()
  net: bcmgenet: Restore phy_stop() depending upon suspend/close

Somnath Kotur (2):
  bnxt_en: Query default VLAN before VNIC setup on a VF
  bnxt_en: Implement .set_port / .unset_port UDP tunnel callbacks

Sreekanth Reddy (1):
  bnxt_en: Don't issue AP reset during ethtool's reset operation

Tobias Heider (1):
  Add MODULE_FIRMWARE() for FIRMWARE_TG357766.

 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 28 ++++++++++++++-----
 .../net/ethernet/broadcom/bnxt/bnxt_ethtool.c |  2 +-
 .../net/ethernet/broadcom/genet/bcmgenet.c    |  9 +++---
 drivers/net/ethernet/broadcom/tg3.c           |  1 +
 4 files changed, 28 insertions(+), 12 deletions(-)

--
2.43.0
[PATCH openEuler-22.03-LTS 1/6] net: bcmgenet: Remove phy_stop() from bcmgenet_netif_stop()
by Dapeng Yu 04 Dec '23

From: Florian Fainelli <f.fainelli(a)gmail.com>

stable inclusion
from stable-v5.10.181
commit e92727ed9e8b6798efc1e7ae44bbe7c44c453670
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8L4MY
CVE: NA
backport: OLK-5.10, openEuler-22.03-LTS, openEuler-22.03-LTS-SP1, openEuler-22.03-LTS-SP2
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

[ Upstream commit 93e0401e0fc0c54b0ac05b687cd135c2ac38187c ]

The call to phy_stop() races with the later call to phy_disconnect(),
resulting in concurrent phy_suspend() calls being run from different
CPUs. The final call to phy_disconnect() ensures that the PHY is
stopped and suspended, too.

Fixes: c96e731c93ff ("net: bcmgenet: connect and disconnect from the PHY state machine")
Signed-off-by: Florian Fainelli <f.fainelli(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Dapeng Yu <dapeng.yu(a)windriver.com>
---
 drivers/net/ethernet/broadcom/genet/bcmgenet.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index 6ee6c34f28f2..52c93701a9e4 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -3408,7 +3408,6 @@ static void bcmgenet_netif_stop(struct net_device *dev)
     /* Disable MAC transmit. TX DMA disabled must be done before this */
     umac_enable_set(priv, CMD_TX_EN, false);

-    phy_stop(dev->phydev);
     bcmgenet_disable_rx_napi(priv);
     bcmgenet_intr_disable(priv);
--
2.43.0
[PATCH OLK-6.6] arm64: errata: add option to disable cache readunique prefetch on HIP08
by Yang Yingliang 04 Dec '23

From: Kai Shen <shenkai8(a)huawei.com>

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZFV2
CVE: NA

-----------------------------------------------------------

Random performance decreases appear in Hackbench cases that test pipe or
socket communication among multiple threads on the HiSilicon HIP08 SoC.
Cache sharing caused by the change of the data layout and the cache
readunique prefetch mechanism both lead to this problem. The readunique
mechanism, which may be triggered by store operations, invalidates
cachelines on other cores during the data fetching stage, so cacheline
invalidation happens frequently when data is shared across cores.
Disabling cache readunique prefetch tackles this problem.

Test cases are like:

for i in 20;do
    echo "--------pipe thread num=$i----------"
    for j in $(seq 1 10);do
        ./hackbench -pipe $i thread 1000
    done
done

We disable readunique prefetch only in EL2, because disabling readunique
prefetch in EL1 may cause a panic due to the lack of the related priority,
which is often set in BIOS.

Introduce CONFIG_HISILICON_ERRATUM_HIP08_RU_PREFETCH and disable RU
prefetch using the boot cmdline 'readunique_prefetch=off'.

Signed-off-by: Kai Shen <shenkai8(a)huawei.com>
Signed-off-by: Hanjun Guo <guohanjun(a)huawei.com>
[XQ: adjusted context]
Signed-off-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Reviewed-by: Hanjun Guo <guohanjun(a)huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian(a)huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 arch/arm64/Kconfig             | 18 +++++++++++
 arch/arm64/kernel/cpu_errata.c | 56 ++++++++++++++++++++++++++++++++++
 arch/arm64/tools/cpucaps       |  1 +
 3 files changed, 75 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 78f20e632712..bbe85ca456c9 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1229,6 +1229,24 @@ config SOCIONEXT_SYNQUACER_PREITS

       If unsure, say Y.

+config HISILICON_ERRATUM_HIP08_RU_PREFETCH
+    bool "HIP08 RU: HiSilicon HIP08 cache readunique might cause performance drop"
+    default y
+    help
+      The HiSilicon HIP08 cache readunique might compromise performance,
+      use cmdline "readunique_prefetch_disable" to disable RU prefetch.
+
+      If unsure, say Y.
+
+config HISILICON_HIP08_RU_PREFETCH_DEFAULT_OFF
+    bool "HIP08 RU: disable HiSilicon HIP08 cache readunique by default"
+    depends on HISILICON_ERRATUM_HIP08_RU_PREFETCH
+    default n
+    help
+      Disable HiSilicon HIP08 cache readunique by default.
+
+      If unsure, say N.
+
 endmenu # "ARM errata workarounds via the alternatives framework"

 choice

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 5706e74c5578..6a2ae67e85f7 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -13,6 +13,11 @@
 #include <asm/cpufeature.h>
 #include <asm/kvm_asm.h>
 #include <asm/smp_plat.h>
+#ifdef CONFIG_HISILICON_ERRATUM_HIP08_RU_PREFETCH
+#include <asm/ptrace.h>
+#include <asm/sysreg.h>
+#include <linux/smp.h>
+#endif

 static bool __maybe_unused
 is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
@@ -121,6 +126,48 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
     sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCI, 0);
 }

+#ifdef CONFIG_HISILICON_ERRATUM_HIP08_RU_PREFETCH
+# ifdef CONFIG_HISILICON_HIP08_RU_PREFETCH_DEFAULT_OFF
+static bool readunique_prefetch_enabled;
+# else
+static bool readunique_prefetch_enabled = true;
+# endif
+static int __init readunique_prefetch_switch(char *data)
+{
+    if (!data)
+        return -EINVAL;
+
+    if (strcmp(data, "off") == 0)
+        readunique_prefetch_enabled = false;
+    else if (strcmp(data, "on") == 0)
+        readunique_prefetch_enabled = true;
+    else
+        return -EINVAL;
+
+    return 0;
+}
+early_param("readunique_prefetch", readunique_prefetch_switch);
+
+static bool
+should_disable_hisi_hip08_ru_prefetch(const struct arm64_cpu_capabilities *entry, int unused)
+{
+    u64 el;
+
+    if (readunique_prefetch_enabled)
+        return false;
+
+    el = read_sysreg(CurrentEL);
+    return el == CurrentEL_EL2;
+}
+
+#define CTLR_HISI_HIP08_RU_PREFETCH (1L << 40)
+static void __maybe_unused
+hisi_hip08_ru_prefetch_disable(const struct arm64_cpu_capabilities *__unused)
+{
+    sysreg_clear_set(S3_1_c15_c6_4, 0, CTLR_HISI_HIP08_RU_PREFETCH);
+}
+#endif
+
 static DEFINE_RAW_SPINLOCK(reg_user_mask_modification);
 static void __maybe_unused
 cpu_clear_bf16_from_user_emulation(const struct arm64_cpu_capabilities *__unused)
@@ -744,6 +791,15 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
         .capability = ARM64_WORKAROUND_AMPERE_AC03_CPU_38,
         ERRATA_MIDR_ALL_VERSIONS(MIDR_AMPERE1),
     },
+#endif
+#ifdef CONFIG_HISILICON_ERRATUM_HIP08_RU_PREFETCH
+    {
+        .desc = "HiSilicon HIP08 Cache Readunique Prefetch Disable",
+        .capability = ARM64_WORKAROUND_HISI_HIP08_RU_PREFETCH,
+        ERRATA_MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
+        .matches = should_disable_hisi_hip08_ru_prefetch,
+        .cpu_enable = hisi_hip08_ru_prefetch_disable,
+    },
 #endif
     {
     }

diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index dea3dc89234b..bbb97036074b 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -100,3 +100,4 @@ WORKAROUND_NVIDIA_CARMEL_CNP
 WORKAROUND_QCOM_FALKOR_E1003
 WORKAROUND_REPEAT_TLBI
 WORKAROUND_SPECULATIVE_AT
+WORKAROUND_HISI_HIP08_RU_PREFETCH
--
2.25.1
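Since the workaround is driven entirely by the readunique_prefetch= early parameter parsed above, a quick sanity check on a running system is to inspect /proc/cmdline. The sketch below is plain userspace C, not part of the patch; it merely mirrors the strings accepted by readunique_prefetch_switch():

#include <stdio.h>
#include <string.h>

int main(void)
{
    char cmdline[4096] = { 0 };
    FILE *f = fopen("/proc/cmdline", "r");

    if (!f) {
        perror("fopen");
        return 1;
    }
    if (!fgets(cmdline, sizeof(cmdline), f)) {
        fclose(f);
        return 1;
    }
    fclose(f);

    /* Same strings the early_param() handler in the patch accepts. */
    if (strstr(cmdline, "readunique_prefetch=off"))
        printf("RU prefetch explicitly disabled via cmdline\n");
    else if (strstr(cmdline, "readunique_prefetch=on"))
        printf("RU prefetch explicitly enabled via cmdline\n");
    else
        printf("readunique_prefetch not set; build-time default applies\n");
    return 0;
}

Whether the capability actually fires additionally requires a HIP08 (TSV110) CPU booted at EL2, as should_disable_hisi_hip08_ru_prefetch() shows.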
[PATCH OLK-5.10 0/3] scsi: scsi_device_gets returns failure
by Li Lingfeng 04 Dec '23

If scsi is compiled as a module, scsi_device_gets returns failure when
the module is NULL.

Li Lingfeng (2):
  scsi: don't fail if hostt->module is NULL
  scsi: fix kabi broken in struct Scsi_Host

Zhong Jinghua (1):
  scsi: scsi_device_gets returns failure when the module is NULL.

 drivers/scsi/hosts.c     | 3 +++
 drivers/scsi/scsi.c      | 6 +++++-
 include/scsi/scsi_host.h | 2 +-
 3 files changed, 9 insertions(+), 2 deletions(-)

--
2.31.1