mailweb.openeuler.org

Kernel

kernel@openeuler.org

  • 26 participants
  • 18031 discussions
[Meeting Notice] openEuler kernel technical sharing session #7 and biweekly regular meeting. Time: 2021-06-25 14:00-18:00
by Meeting Book 23 Jun '21

[PATCH openEuler-1.0-LTS 1/4] RDMA/ucma: Put a lock around every call to the rdma_cm layer
by Yang Yingliang 23 Jun '21

From: Jason Gunthorpe <jgg(a)mellanox.com>

stable inclusion
from linux-4.19.115
commit abc4ea7f1345398261295345fd9b30243e4f4f8e
prepare for fixing CVE-2021-32078

--------------------------------

commit 7c11910783a1ea17e88777552ef146cace607b3c upstream.

The rdma_cm must be used single threaded. This appears to be a bug in the
design, as it does have lots of locking that seems like it should allow
concurrency. However, when it is all said and done every single place that
uses the cma_exch() scheme is broken, and all the unlocked reads from the
ucma of the cm_id data are wrong too.

syzkaller has been finding endless bugs related to this. Fixing this in any
elegant way is some enormous amount of work. Take a very big hammer and put
a mutex around everything to do with the ucma_context at the top of every
syscall.

Fixes: 75216638572f ("RDMA/cma: Export rdma cm interface to userspace")
Link: https://lore.kernel.org/r/20200218210432.GA31966@ziepe.ca
Reported-by: syzbot+adb15cf8c2798e4e0db4(a)syzkaller.appspotmail.com
Reported-by: syzbot+e5579222b6a3edd96522(a)syzkaller.appspotmail.com
Reported-by: syzbot+4b628fcc748474003457(a)syzkaller.appspotmail.com
Reported-by: syzbot+29ee8f76017ce6cf03da(a)syzkaller.appspotmail.com
Reported-by: syzbot+6956235342b7317ec564(a)syzkaller.appspotmail.com
Reported-by: syzbot+b358909d8d01556b790b(a)syzkaller.appspotmail.com
Reported-by: syzbot+6b46b135602a3f3ac99e(a)syzkaller.appspotmail.com
Reported-by: syzbot+8458d13b13562abf6b77(a)syzkaller.appspotmail.com
Reported-by: syzbot+bd034f3fdc0402e942ed(a)syzkaller.appspotmail.com
Reported-by: syzbot+c92378b32760a4eef756(a)syzkaller.appspotmail.com
Reported-by: syzbot+68b44a1597636e0b342c(a)syzkaller.appspotmail.com
Signed-off-by: Jason Gunthorpe <jgg(a)mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 drivers/infiniband/core/ucma.c | 49 ++++++++++++++++++++++++++++++++--
 1 file changed, 47 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
index 01d68ed46c1b6..2acc30c3d5b2d 100644
--- a/drivers/infiniband/core/ucma.c
+++ b/drivers/infiniband/core/ucma.c
@@ -89,6 +89,7 @@ struct ucma_context {
     struct ucma_file *file;
     struct rdma_cm_id *cm_id;
+    struct mutex mutex;
     u64 uid;
     struct list_head list;
@@ -215,6 +216,7 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
     init_completion(&ctx->comp);
     INIT_LIST_HEAD(&ctx->mc_list);
     ctx->file = file;
+    mutex_init(&ctx->mutex);
     mutex_lock(&mut);
     ctx->id = idr_alloc(&ctx_idr, ctx, 0, 0, GFP_KERNEL);
@@ -596,6 +598,7 @@ static int ucma_free_ctx(struct ucma_context *ctx)
     }
     events_reported = ctx->events_reported;
+    mutex_destroy(&ctx->mutex);
     kfree(ctx);
     return events_reported;
 }
@@ -665,7 +668,10 @@ static ssize_t ucma_bind_ip(struct ucma_file *file, const char __user *inbuf,
     if (IS_ERR(ctx))
         return PTR_ERR(ctx);
+    mutex_lock(&ctx->mutex);
     ret = rdma_bind_addr(ctx->cm_id, (struct sockaddr *) &cmd.addr);
+    mutex_unlock(&ctx->mutex);
+
     ucma_put_ctx(ctx);
     return ret;
 }
@@ -688,7 +694,9 @@ static ssize_t ucma_bind(struct ucma_file *file, const char __user *inbuf,
     if (IS_ERR(ctx))
         return PTR_ERR(ctx);
+    mutex_lock(&ctx->mutex);
     ret = rdma_bind_addr(ctx->cm_id, (struct sockaddr *) &cmd.addr);
+    mutex_unlock(&ctx->mutex);
     ucma_put_ctx(ctx);
     return ret;
 }
@@ -712,8 +720,10 @@ static ssize_t ucma_resolve_ip(struct ucma_file *file,
     if (IS_ERR(ctx))
         return PTR_ERR(ctx);
+    mutex_lock(&ctx->mutex);
     ret = rdma_resolve_addr(ctx->cm_id, (struct sockaddr *) &cmd.src_addr,
                             (struct sockaddr *) &cmd.dst_addr, cmd.timeout_ms);
+    mutex_unlock(&ctx->mutex);
     ucma_put_ctx(ctx);
     return ret;
 }
@@ -738,8 +748,10 @@ static ssize_t ucma_resolve_addr(struct ucma_file *file,
     if (IS_ERR(ctx))
         return PTR_ERR(ctx);
+    mutex_lock(&ctx->mutex);
     ret = rdma_resolve_addr(ctx->cm_id, (struct sockaddr *) &cmd.src_addr,
                             (struct sockaddr *) &cmd.dst_addr, cmd.timeout_ms);
+    mutex_unlock(&ctx->mutex);
     ucma_put_ctx(ctx);
     return ret;
 }
@@ -759,7 +771,9 @@ static ssize_t ucma_resolve_route(struct ucma_file *file,
     if (IS_ERR(ctx))
         return PTR_ERR(ctx);
+    mutex_lock(&ctx->mutex);
     ret = rdma_resolve_route(ctx->cm_id, cmd.timeout_ms);
+    mutex_unlock(&ctx->mutex);
     ucma_put_ctx(ctx);
     return ret;
 }
@@ -848,6 +862,7 @@ static ssize_t ucma_query_route(struct ucma_file *file,
     if (IS_ERR(ctx))
         return PTR_ERR(ctx);
+    mutex_lock(&ctx->mutex);
     memset(&resp, 0, sizeof resp);
     addr = (struct sockaddr *) &ctx->cm_id->route.addr.src_addr;
     memcpy(&resp.src_addr, addr, addr->sa_family == AF_INET ?
@@ -871,6 +886,7 @@ static ssize_t ucma_query_route(struct ucma_file *file,
         ucma_copy_iw_route(&resp, &ctx->cm_id->route);
 out:
+    mutex_unlock(&ctx->mutex);
     if (copy_to_user(u64_to_user_ptr(cmd.response), &resp, sizeof(resp)))
         ret = -EFAULT;
@@ -1022,6 +1038,7 @@ static ssize_t ucma_query(struct ucma_file *file,
     if (IS_ERR(ctx))
         return PTR_ERR(ctx);
+    mutex_lock(&ctx->mutex);
     switch (cmd.option) {
     case RDMA_USER_CM_QUERY_ADDR:
         ret = ucma_query_addr(ctx, response, out_len);
@@ -1036,6 +1053,7 @@ static ssize_t ucma_query(struct ucma_file *file,
         ret = -ENOSYS;
         break;
     }
+    mutex_unlock(&ctx->mutex);
     ucma_put_ctx(ctx);
     return ret;
@@ -1076,7 +1094,9 @@ static ssize_t ucma_connect(struct ucma_file *file, const char __user *inbuf,
         return PTR_ERR(ctx);
     ucma_copy_conn_param(ctx->cm_id, &conn_param, &cmd.conn_param);
+    mutex_lock(&ctx->mutex);
     ret = rdma_connect(ctx->cm_id, &conn_param);
+    mutex_unlock(&ctx->mutex);
     ucma_put_ctx(ctx);
     return ret;
 }
@@ -1097,7 +1117,9 @@ static ssize_t ucma_listen(struct ucma_file *file, const char __user *inbuf,
     ctx->backlog = cmd.backlog > 0 && cmd.backlog < max_backlog ?
                    cmd.backlog : max_backlog;
+    mutex_lock(&ctx->mutex);
     ret = rdma_listen(ctx->cm_id, ctx->backlog);
+    mutex_unlock(&ctx->mutex);
     ucma_put_ctx(ctx);
     return ret;
 }
@@ -1120,13 +1142,17 @@ static ssize_t ucma_accept(struct ucma_file *file, const char __user *inbuf,
     if (cmd.conn_param.valid) {
         ucma_copy_conn_param(ctx->cm_id, &conn_param, &cmd.conn_param);
         mutex_lock(&file->mut);
+        mutex_lock(&ctx->mutex);
         ret = __rdma_accept(ctx->cm_id, &conn_param, NULL);
+        mutex_unlock(&ctx->mutex);
         if (!ret)
             ctx->uid = cmd.uid;
         mutex_unlock(&file->mut);
-    } else
+    } else {
+        mutex_lock(&ctx->mutex);
         ret = __rdma_accept(ctx->cm_id, NULL, NULL);
-
+        mutex_unlock(&ctx->mutex);
+    }
     ucma_put_ctx(ctx);
     return ret;
 }
@@ -1145,7 +1171,9 @@ static ssize_t ucma_reject(struct ucma_file *file, const char __user *inbuf,
     if (IS_ERR(ctx))
         return PTR_ERR(ctx);
+    mutex_lock(&ctx->mutex);
     ret = rdma_reject(ctx->cm_id, cmd.private_data, cmd.private_data_len);
+    mutex_unlock(&ctx->mutex);
     ucma_put_ctx(ctx);
     return ret;
 }
@@ -1164,7 +1192,9 @@ static ssize_t ucma_disconnect(struct ucma_file *file, const char __user *inbuf,
     if (IS_ERR(ctx))
         return PTR_ERR(ctx);
+    mutex_lock(&ctx->mutex);
     ret = rdma_disconnect(ctx->cm_id);
+    mutex_unlock(&ctx->mutex);
     ucma_put_ctx(ctx);
     return ret;
 }
@@ -1195,7 +1225,9 @@ static ssize_t ucma_init_qp_attr(struct ucma_file *file,
     resp.qp_attr_mask = 0;
     memset(&qp_attr, 0, sizeof qp_attr);
     qp_attr.qp_state = cmd.qp_state;
+    mutex_lock(&ctx->mutex);
     ret = rdma_init_qp_attr(ctx->cm_id, &qp_attr, &resp.qp_attr_mask);
+    mutex_unlock(&ctx->mutex);
     if (ret)
         goto out;
@@ -1274,9 +1306,13 @@ static int ucma_set_ib_path(struct ucma_context *ctx,
         struct sa_path_rec opa;
         sa_convert_path_ib_to_opa(&opa, &sa_path);
+        mutex_lock(&ctx->mutex);
         ret = rdma_set_ib_path(ctx->cm_id, &opa);
+        mutex_unlock(&ctx->mutex);
     } else {
+        mutex_lock(&ctx->mutex);
         ret = rdma_set_ib_path(ctx->cm_id, &sa_path);
+        mutex_unlock(&ctx->mutex);
     }
     if (ret)
         return ret;
@@ -1309,7 +1345,9 @@ static int ucma_set_option_level(struct ucma_context *ctx, int level,
     switch (level) {
     case RDMA_OPTION_ID:
+        mutex_lock(&ctx->mutex);
         ret = ucma_set_option_id(ctx, optname, optval, optlen);
+        mutex_unlock(&ctx->mutex);
         break;
     case RDMA_OPTION_IB:
         ret = ucma_set_option_ib(ctx, optname, optval, optlen);
@@ -1369,8 +1407,10 @@ static ssize_t ucma_notify(struct ucma_file *file, const char __user *inbuf,
     if (IS_ERR(ctx))
         return PTR_ERR(ctx);
+    mutex_lock(&ctx->mutex);
     if (ctx->cm_id->device)
         ret = rdma_notify(ctx->cm_id, (enum ib_event_type)cmd.event);
+    mutex_unlock(&ctx->mutex);
     ucma_put_ctx(ctx);
     return ret;
@@ -1413,8 +1453,10 @@ static ssize_t ucma_process_join(struct ucma_file *file,
     mc->join_state = join_state;
     mc->uid = cmd->uid;
     memcpy(&mc->addr, addr, cmd->addr_size);
+    mutex_lock(&ctx->mutex);
     ret = rdma_join_multicast(ctx->cm_id, (struct sockaddr *)&mc->addr,
                               join_state, mc);
+    mutex_unlock(&ctx->mutex);
     if (ret)
         goto err2;
@@ -1518,7 +1560,10 @@ static ssize_t ucma_leave_multicast(struct ucma_file *file,
         goto out;
     }
+    mutex_lock(&mc->ctx->mutex);
     rdma_leave_multicast(mc->ctx->cm_id, (struct sockaddr *) &mc->addr);
+    mutex_unlock(&mc->ctx->mutex);
+
     mutex_lock(&mc->ctx->file->mut);
     ucma_cleanup_mc_events(mc);
     list_del(&mc->list);
--
2.25.1
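The locking pattern in this patch is easier to see in isolation: every entry point takes a per-context lock before calling into a layer that must be used single threaded. Below is a minimal userspace sketch of that idea using pthreads; the names (struct ctx, ctx_connect, backend_connect) are illustrative only and are not the kernel or rdma_cm API.

    /* Minimal userspace analogue of the per-context "big hammer" lock.
     * All names here are hypothetical, for illustration only. */
    #include <pthread.h>
    #include <stdio.h>

    struct ctx {
        pthread_mutex_t lock;   /* plays the role of ucma_context::mutex */
        int state;              /* stands in for the rdma_cm state */
    };

    /* the "backend layer": assumed not safe for concurrent callers */
    static void backend_connect(struct ctx *c) { c->state = 1; }

    static void ctx_connect(struct ctx *c)
    {
        pthread_mutex_lock(&c->lock);   /* serialize every call per context */
        backend_connect(c);
        pthread_mutex_unlock(&c->lock);
    }

    int main(void)
    {
        struct ctx c = { .state = 0 };

        pthread_mutex_init(&c.lock, NULL);
        ctx_connect(&c);
        printf("state=%d\n", c.state);
        pthread_mutex_destroy(&c.lock);
        return 0;
    }

The trade-off is the same one the commit message names: coarse per-context serialization is simple and correct, at the cost of any concurrency the finer-grained locking might have allowed.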
[PATCH kernel-4.19] mm/memory-failure: make sure wait for page writeback in memory_failure
by Yang Yingliang 22 Jun '21

From: yangerkun <yangerkun(a)huawei.com>

mainline inclusion
from mainline-5.13-rc7
commit e8675d291ac007e1c636870db880f837a9ea112a
category: bugfix
bugzilla: 51807
CVE: NA

---------------------------

Our syzkaller trigger the "BUG_ON(!list_empty(&inode->i_wb_list))" in
clear_inode:

kernel BUG at fs/inode.c:519!
Internal error: Oops - BUG: 0 [#1] SMP
Modules linked in:
Process syz-executor.0 (pid: 249, stack limit = 0x00000000a12409d7)
CPU: 1 PID: 249 Comm: syz-executor.0 Not tainted 4.19.95
Hardware name: linux,dummy-virt (DT)
pstate: 80000005 (Nzcv daif -PAN -UAO)
pc : clear_inode+0x280/0x2a8
lr : clear_inode+0x280/0x2a8
Call trace:
 clear_inode+0x280/0x2a8
 ext4_clear_inode+0x38/0xe8
 ext4_free_inode+0x130/0xc68
 ext4_evict_inode+0xb20/0xcb8
 evict+0x1a8/0x3c0
 iput+0x344/0x460
 do_unlinkat+0x260/0x410
 __arm64_sys_unlinkat+0x6c/0xc0
 el0_svc_common+0xdc/0x3b0
 el0_svc_handler+0xf8/0x160
 el0_svc+0x10/0x218
Kernel panic - not syncing: Fatal exception

A crash dump of this problem show that someone called __munlock_pagevec to
clear page LRU without lock_page: do_mmap -> mmap_region -> do_munmap ->
munlock_vma_pages_range -> __munlock_pagevec.

As a result memory_failure will call identify_page_state without
wait_on_page_writeback. And after truncate_error_page clear the mapping of
this page. end_page_writeback won't call sb_clear_inode_writeback to clear
inode->i_wb_list. That will trigger BUG_ON in clear_inode!

Fix it by checking PageWriteback too to help determine should we skip
wait_on_page_writeback.

Link: https://lkml.kernel.org/r/20210604084705.3729204-1-yangerkun@huawei.com
Fixes: 0bc1f8b0682c ("hwpoison: fix the handling path of the victimized page frame that belong to non-LRU")
Signed-off-by: yangerkun <yangerkun(a)huawei.com>
Acked-by: Naoya Horiguchi <naoya.horiguchi(a)nec.com>
Cc: Jan Kara <jack(a)suse.cz>
Cc: Theodore Ts'o <tytso(a)mit.edu>
Cc: Oscar Salvador <osalvador(a)suse.de>
Cc: Yu Kuai <yukuai3(a)huawei.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: yangerkun <yangerkun(a)huawei.com>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 mm/memory-failure.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 6fc0695f70e7c..145262227bb08 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1387,7 +1387,12 @@ int memory_failure(unsigned long pfn, int flags)
         return 0;
     }
-    if (!PageTransTail(p) && !PageLRU(p))
+    /*
+     * __munlock_pagevec may clear a writeback page's LRU flag without
+     * page_lock. We need wait writeback completion for this page or it
+     * may trigger vfs BUG while evict inode.
+     */
+    if (!PageTransTail(p) && !PageLRU(p) && !PageWriteback(p))
         goto identify_page_state;
     /*
--
2.25.1
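The one-line change works because the shortcut (skipping wait_on_page_writeback) is only taken when none of the page states that require waiting are set. A small standalone illustration of that guard pattern follows; the flag macros are invented for the example and are not the kernel's real page flags.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative state bits only, not the kernel's page flags. */
    #define F_TRANS_TAIL  (1u << 0)
    #define F_LRU         (1u << 1)
    #define F_WRITEBACK   (1u << 2)

    /* Only skip the expensive wait when *none* of the states that
     * require it are set; adding F_WRITEBACK mirrors the patch above. */
    static bool can_skip_wait(unsigned int flags)
    {
        return !(flags & F_TRANS_TAIL) && !(flags & F_LRU) &&
               !(flags & F_WRITEBACK);
    }

    int main(void)
    {
        printf("%d\n", can_skip_wait(F_WRITEBACK)); /* 0: must wait */
        printf("%d\n", can_skip_wait(0));           /* 1: safe to skip */
        return 0;
    }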
[PATCH kernel-4.19] can: bcm: fix infoleak in struct bcm_msg_head
by Yang Yingliang 21 Jun '21

From: Norbert Slusarek <nslusarek(a)gmx.net>

mainline inclusion
from mainline-v5.13-rc7
commit 5e87ddbe3942e27e939bdc02deb8579b0cbd8ecc
category: bugfix
bugzilla: NA
CVE: CVE-2021-34693

--------------------------------

On 64-bit systems, struct bcm_msg_head has an added padding of 4 bytes
between struct members count and ival1. Even though all struct members are
initialized, the 4-byte hole will contain data from the kernel stack.

This patch zeroes out struct bcm_msg_head before usage, preventing infoleaks
to userspace.

Fixes: ffd980f976e7 ("[CAN]: Add broadcast manager (bcm) protocol")
Link: https://lore.kernel.org/r/trinity-7c1b2e82-e34f-4885-8060-2cd7a13769ce-1623…
Cc: linux-stable <stable(a)vger.kernel.org>
Signed-off-by: Norbert Slusarek <nslusarek(a)gmx.net>
Acked-by: Oliver Hartkopp <socketcan(a)hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl(a)pengutronix.de>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Reviewed-by: Yue Haibing <yuehaibing(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
---
 net/can/bcm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/net/can/bcm.c b/net/can/bcm.c
index f5f97d9c17d20..0f09838ebef6e 100644
--- a/net/can/bcm.c
+++ b/net/can/bcm.c
@@ -393,6 +393,7 @@ static void bcm_tx_timeout_tsklet(unsigned long data)
     if (!op->count && (op->flags & TX_COUNTEVT)) {
         /* create notification to user */
+        memset(&msg_head, 0, sizeof(msg_head));
         msg_head.opcode = TX_EXPIRED;
         msg_head.flags = op->flags;
         msg_head.count = op->count;
@@ -440,6 +441,7 @@ static void bcm_rx_changed(struct bcm_op *op, struct canfd_frame *data)
     /* this element is not throttled anymore */
     data->flags &= (BCM_CAN_FLAGS_MASK|RX_RECV);
+    memset(&head, 0, sizeof(head));
     head.opcode = RX_CHANGED;
     head.flags = op->flags;
     head.count = op->count;
@@ -554,6 +556,7 @@ static void bcm_rx_timeout_tsklet(unsigned long data)
     struct bcm_msg_head msg_head;
     /* create notification to user */
+    memset(&msg_head, 0, sizeof(msg_head));
     msg_head.opcode = RX_TIMEOUT;
     msg_head.flags = op->flags;
     msg_head.count = op->count;
--
2.25.1
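The 4-byte hole described above is ordinary C struct padding: a 32-bit member followed by a 64-bit member is padded so the second is naturally aligned, and assigning every named member does not clear that padding. A small userspace demonstration of why an explicit memset is the safe choice (hypothetical struct, not the real bcm_msg_head layout):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: a u32 followed by a u64 leaves a 4-byte hole
     * on typical 64-bit ABIs, like count/ival1 in bcm_msg_head. */
    struct msg {
        uint32_t count;
        uint64_t ival1;
    };

    int main(void)
    {
        struct msg m;

        /* Without this memset, the padding bytes between count and
         * ival1 keep whatever was previously on the stack, even though
         * every named member below is assigned. */
        memset(&m, 0, sizeof(m));
        m.count = 1;
        m.ival1 = 2;

        printf("sizeof = %zu, named members add up to %zu\n",
               sizeof(m), sizeof(m.count) + sizeof(m.ival1));
        return 0;
    }

The same reasoning applies to any struct copied across a trust boundary: clear the whole object, not just its members.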
[PATCH openEuler-1.0-LTS v2] iommu/vt-d: Add support for ACPI device use physical node as pci device to establish identity mapping
by LeoLiuoc 21 Jun '21

When using ACPI device to create identity mapping, if the physical node of
the ACPI device is a pci device, domain_1 will be created for these two
devices. But if the pci device and other pci devices belong to another
domain_2, this will happen conflict and destroy another domain. Such as PCI
devices under the PCIE-to-PCI bridge.

Therefore, when the physical node of the ACPI device is a PCI device, this
patch uses the PCI device to create the domain. In this way, there is only
one domain.

Signed-off-by: LeoLiu-oc <LeoLiu-oc(a)zhaoxin.com>
---
v1->v2: fix some logic mistake

 drivers/iommu/intel-iommu.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index e111e5d6d269..453ec0eb3a4f 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2736,12 +2736,33 @@ static int domain_prepare_identity_map(struct device *dev,
     return iommu_domain_identity_map(domain, start, end);
 }
+static struct device *acpi_dev_find_pci_dev(struct device *dev)
+{
+    struct acpi_device_physical_node *pn;
+    struct acpi_device *adev;
+
+    if (dev->bus == &acpi_bus_type) {
+        adev = to_acpi_device(dev);
+        mutex_lock(&adev->physical_node_lock);
+        list_for_each_entry(pn, &adev->physical_node_list, node) {
+            if (dev_is_pci(pn->dev)) {
+                mutex_unlock(&adev->physical_node_lock);
+                return pn->dev;
+            }
+        }
+        mutex_unlock(&adev->physical_node_lock);
+    }
+
+    return dev;
+}
+
 static int iommu_prepare_identity_map(struct device *dev,
                                       unsigned long long start,
                                       unsigned long long end)
 {
     struct dmar_domain *domain;
     int ret;
+    dev = acpi_dev_find_pci_dev(dev);
     domain = get_domain_for_dev(dev, DEFAULT_DOMAIN_ADDRESS_WIDTH);
     if (!domain)
--
2.20.1
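The new helper is a "prefer a better handle, otherwise fall back to the original device" lookup over the ACPI physical-node list. The same shape, reduced to a plain linked-list walk with invented types rather than the kernel's ACPI structures:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative stand-ins for the ACPI physical-node list. */
    struct node {
        struct node *next;
        bool is_pci;
        const char *name;
    };

    /* Return the first PCI node if one exists, otherwise the caller's
     * own device; the same fallback the patch applies before creating
     * the identity-mapping domain. */
    static const char *find_pci_or_self(const struct node *list, const char *self)
    {
        for (const struct node *n = list; n; n = n->next)
            if (n->is_pci)
                return n->name;
        return self;
    }

    int main(void)
    {
        struct node pci  = { .next = NULL, .is_pci = true,  .name = "0000:00:1f.0" };
        struct node plat = { .next = &pci, .is_pci = false, .name = "platform-dev" };

        printf("%s\n", find_pci_or_self(&plat, "acpi-dev")); /* 0000:00:1f.0 */
        printf("%s\n", find_pci_or_self(NULL, "acpi-dev"));  /* acpi-dev */
        return 0;
    }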
[PATCH openEuler-21.03] neighbour: allow NUD_NOARP entries to be forced GCed
by qingyouzijin 19 Jun '21

From: David Ahern <dsahern(a)kernel.org>

stable inclusion
from stable-5.12.10
commit ddf088d7aaaaacfc836104f2e632b29b1d383cfc
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=326
CVE: NA

--------------------------------

commit 7a6b1ab7475fd6478eeaf5c9d1163e7a18125c8f upstream.

IFF_POINTOPOINT interfaces use NUD_NOARP entries for IPv6. It's possible to
fill up the neighbour table with enough entries that it will overflow for
valid connections after that.

This behaviour is more prevalent after commit 58956317c8de ("neighbor:
Improve garbage collection") is applied, as it prevents removal from entries
that are not NUD_FAILED, unless they are more than 5s old.

Fixes: 58956317c8de (neighbor: Improve garbage collection)
Reported-by: Kasper Dupont <kasperd(a)gjkwv.06.feb.2021.kasperd.net>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo(a)canonical.com>
Signed-off-by: David Ahern <dsahern(a)kernel.org>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: qingyouzijin <2645753614(a)qq.com>
---
 net/core/neighbour.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 2fe4bbb6b80c..d7f61d532371 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -235,6 +235,7 @@ static int neigh_forced_gc(struct neigh_table *tbl)
     write_lock(&n->lock);
     if ((n->nud_state == NUD_FAILED) ||
+        (n->nud_state == NUD_NOARP) ||
         (tbl->is_multicast && tbl->is_multicast(n->primary_key)) ||
         time_after(tref, n->updated))
--
2.23.0
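The patch widens the set of neighbour states that forced garbage collection may reclaim. Roughly, the eviction predicate now behaves like the sketch below; the entry type and state names are simplified stand-ins, not the kernel's struct neighbour.

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Simplified stand-ins for neighbour states and entries. */
    enum nud_state { NUD_FAILED, NUD_NOARP, NUD_REACHABLE };

    struct entry {
        enum nud_state nud_state;
        time_t updated;
    };

    /* Forced GC may evict FAILED entries, NOARP entries (the newly
     * allowed case), or anything older than the reference timestamp. */
    static bool forced_gc_may_evict(const struct entry *n, time_t tref)
    {
        return n->nud_state == NUD_FAILED ||
               n->nud_state == NUD_NOARP ||
               n->updated < tref;
    }

    int main(void)
    {
        struct entry e = { .nud_state = NUD_NOARP, .updated = 200 };

        printf("%d\n", forced_gc_may_evict(&e, 100)); /* 1: now reclaimable */
        return 0;
    }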
[PATCH openEuler-21.03] mt76: mt76x0e: fix device hang during suspend/resume stable inclusion from stable-5.12.10 commit d207c1e8e3fb82aca5ffd14c33d0848b6b820efc bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=205 CVE: NA
by moyu 19 Jun '21

From: Lorenzo Bianconi <lorenzo(a)kernel.org>

-----------------------------------

[ Upstream commit 509559c35bcd23d5a046624b225cb3e99a9f1481 ]

Similar to usb device, re-initialize mt76x0e device after resume in order to
fix mt7630e hang during suspend/resume

Reported-by: Luca Trombin <luca.trombin(a)gmail.com>
Fixes: c2a4d9fbabfb9 ("mt76x0: inital split between pci and usb")
Signed-off-by: Lorenzo Bianconi <lorenzo(a)kernel.org>
Signed-off-by: Kalle Valo <kvalo(a)codeaurora.org>
Link: https://lore.kernel.org/r/4812f9611624b34053c1592fd9c175b67d4ffcb4.16204060…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: guodongxiu <xiahua.991116(a)qq.com>
---
 .../net/wireless/mediatek/mt76/mt76x0/pci.c | 81 ++++++++++++++++++-
 1 file changed, 77 insertions(+), 4 deletions(-)

diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
index b87d8e136cb9..a594ae11b1f7 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
@@ -87,7 +87,7 @@ static const struct ieee80211_ops mt76x0e_ops = {
     .reconfig_complete = mt76x02_reconfig_complete,
 };
-static int mt76x0e_register_device(struct mt76x02_dev *dev)
+static int mt76x0e_init_hardware(struct mt76x02_dev *dev, bool resume)
 {
     int err;
@@ -100,9 +100,11 @@ static int mt76x0e_register_device(struct mt76x02_dev *dev)
     if (err < 0)
         return err;
-    err = mt76x02_dma_init(dev);
-    if (err < 0)
-        return err;
+    if (!resume) {
+        err = mt76x02_dma_init(dev);
+        if (err < 0)
+            return err;
+    }
     err = mt76x0_init_hardware(dev);
     if (err < 0)
@@ -123,6 +125,17 @@ static int mt76x0e_register_device(struct mt76x02_dev *dev)
     mt76_clear(dev, 0x110, BIT(9));
     mt76_set(dev, MT_MAX_LEN_CFG, BIT(13));
+    return 0;
+}
+
+static int mt76x0e_register_device(struct mt76x02_dev *dev)
+{
+    int err;
+
+    err = mt76x0e_init_hardware(dev, false);
+    if (err < 0)
+        return err;
+
     err = mt76x0_register_device(dev);
     if (err < 0)
         return err;
@@ -167,6 +180,8 @@ mt76x0e_probe(struct pci_dev *pdev, const struct pci_device_id *id)
     if (ret)
         return ret;
+    mt76_pci_disable_aspm(pdev);
+
     mdev = mt76_alloc_device(&pdev->dev, sizeof(*dev), &mt76x0e_ops, &drv_ops);
     if (!mdev)
@@ -220,6 +235,60 @@ mt76x0e_remove(struct pci_dev *pdev)
     mt76_free_device(mdev);
 }
+#ifdef CONFIG_PM
+static int mt76x0e_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+    struct mt76_dev *mdev = pci_get_drvdata(pdev);
+    struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76);
+    int i;
+
+    mt76_worker_disable(&mdev->tx_worker);
+    for (i = 0; i < ARRAY_SIZE(mdev->phy.q_tx); i++)
+        mt76_queue_tx_cleanup(dev, mdev->phy.q_tx[i], true);
+    for (i = 0; i < ARRAY_SIZE(mdev->q_mcu); i++)
+        mt76_queue_tx_cleanup(dev, mdev->q_mcu[i], true);
+    napi_disable(&mdev->tx_napi);
+
+    mt76_for_each_q_rx(mdev, i)
+        napi_disable(&mdev->napi[i]);
+
+    mt76x02_dma_disable(dev);
+    mt76x02_mcu_cleanup(dev);
+    mt76x0_chip_onoff(dev, false, false);
+
+    pci_enable_wake(pdev, pci_choose_state(pdev, state), true);
+    pci_save_state(pdev);
+
+    return pci_set_power_state(pdev, pci_choose_state(pdev, state));
+}
+
+static int mt76x0e_resume(struct pci_dev *pdev)
+{
+    struct mt76_dev *mdev = pci_get_drvdata(pdev);
+    struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76);
+    int err, i;
+
+    err = pci_set_power_state(pdev, PCI_D0);
+    if (err)
+        return err;
+
+    pci_restore_state(pdev);
+
+    mt76_worker_enable(&mdev->tx_worker);
+
+    mt76_for_each_q_rx(mdev, i) {
+        mt76_queue_rx_reset(dev, i);
+        napi_enable(&mdev->napi[i]);
+        napi_schedule(&mdev->napi[i]);
+    }
+
+    napi_enable(&mdev->tx_napi);
+    napi_schedule(&mdev->tx_napi);
+
+    return mt76x0e_init_hardware(dev, true);
+}
+#endif /* CONFIG_PM */
+
 static const struct pci_device_id mt76x0e_device_table[] = {
     { PCI_DEVICE(0x14c3, 0x7610) },
     { PCI_DEVICE(0x14c3, 0x7630) },
@@ -237,6 +306,10 @@ static struct pci_driver mt76x0e_driver = {
     .id_table = mt76x0e_device_table,
     .probe = mt76x0e_probe,
     .remove = mt76x0e_remove,
+#ifdef CONFIG_PM
+    .suspend = mt76x0e_suspend,
+    .resume = mt76x0e_resume,
+#endif /* CONFIG_PM */
 };
 module_pci_driver(mt76x0e_driver);
--
2.23.0
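The core refactor above splits one-time setup (DMA allocation) from hardware bring-up that is safe to repeat, so the resume path can rerun only the latter. A stripped-down sketch of that split, with invented names rather than the real mt76 functions:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical device state for illustration only. */
    struct dev { bool dma_ready; int hw_inits; };

    /* One-time resources are allocated only on the first pass; the rest
     * of the bring-up is safe to repeat on every resume. */
    static int init_hardware(struct dev *d, bool resume)
    {
        if (!resume)
            d->dma_ready = true;   /* stands in for mt76x02_dma_init() */
        if (!d->dma_ready)
            return -1;
        d->hw_inits++;             /* stands in for mt76x0_init_hardware() */
        return 0;
    }

    int main(void)
    {
        struct dev d = { 0 };

        init_hardware(&d, false);  /* probe path */
        init_hardware(&d, true);   /* resume path: skips DMA allocation */
        printf("hw_inits=%d dma_ready=%d\n", d.hw_inits, d.dma_ready);
        return 0;
    }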
[PATCH openEuler-21.03] mt76: mt76x0e: fix device hang during suspend/resume stable inclusion from stable-5.12.10 commit d207c1e8e3fb82aca5ffd14c33d0848b6b820efc bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=205 CVE: NA [ Upstream commit 509559c35bcd23d5a046624b225cb3e99a9f1481 ]
by guodongxiu 19 Jun '21

From: Lorenzo Bianconi <lorenzo(a)kernel.org>

Similar to usb device, re-initialize mt76x0e device after resume in order to
fix mt7630e hang during suspend/resume

Reported-by: Luca Trombin <luca.trombin(a)gmail.com>
Fixes: c2a4d9fbabfb9 ("mt76x0: inital split between pci and usb")
Signed-off-by: Lorenzo Bianconi <lorenzo(a)kernel.org>
Signed-off-by: Kalle Valo <kvalo(a)codeaurora.org>
Link: https://lore.kernel.org/r/4812f9611624b34053c1592fd9c175b67d4ffcb4.16204060…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: guodongxiu <xiahua.991116(a)qq.com>
---
 .../net/wireless/mediatek/mt76/mt76x0/pci.c | 81 ++++++++++++++++++-
 1 file changed, 77 insertions(+), 4 deletions(-)

diff --git a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
index b87d8e136cb9..a594ae11b1f7 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
@@ -87,7 +87,7 @@ static const struct ieee80211_ops mt76x0e_ops = {
     .reconfig_complete = mt76x02_reconfig_complete,
 };
-static int mt76x0e_register_device(struct mt76x02_dev *dev)
+static int mt76x0e_init_hardware(struct mt76x02_dev *dev, bool resume)
 {
     int err;
@@ -100,9 +100,11 @@ static int mt76x0e_register_device(struct mt76x02_dev *dev)
     if (err < 0)
         return err;
-    err = mt76x02_dma_init(dev);
-    if (err < 0)
-        return err;
+    if (!resume) {
+        err = mt76x02_dma_init(dev);
+        if (err < 0)
+            return err;
+    }
     err = mt76x0_init_hardware(dev);
     if (err < 0)
@@ -123,6 +125,17 @@ static int mt76x0e_register_device(struct mt76x02_dev *dev)
     mt76_clear(dev, 0x110, BIT(9));
     mt76_set(dev, MT_MAX_LEN_CFG, BIT(13));
+    return 0;
+}
+
+static int mt76x0e_register_device(struct mt76x02_dev *dev)
+{
+    int err;
+
+    err = mt76x0e_init_hardware(dev, false);
+    if (err < 0)
+        return err;
+
     err = mt76x0_register_device(dev);
     if (err < 0)
         return err;
@@ -167,6 +180,8 @@ mt76x0e_probe(struct pci_dev *pdev, const struct pci_device_id *id)
     if (ret)
         return ret;
+    mt76_pci_disable_aspm(pdev);
+
     mdev = mt76_alloc_device(&pdev->dev, sizeof(*dev), &mt76x0e_ops, &drv_ops);
     if (!mdev)
@@ -220,6 +235,60 @@ mt76x0e_remove(struct pci_dev *pdev)
     mt76_free_device(mdev);
 }
+#ifdef CONFIG_PM
+static int mt76x0e_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+    struct mt76_dev *mdev = pci_get_drvdata(pdev);
+    struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76);
+    int i;
+
+    mt76_worker_disable(&mdev->tx_worker);
+    for (i = 0; i < ARRAY_SIZE(mdev->phy.q_tx); i++)
+        mt76_queue_tx_cleanup(dev, mdev->phy.q_tx[i], true);
+    for (i = 0; i < ARRAY_SIZE(mdev->q_mcu); i++)
+        mt76_queue_tx_cleanup(dev, mdev->q_mcu[i], true);
+    napi_disable(&mdev->tx_napi);
+
+    mt76_for_each_q_rx(mdev, i)
+        napi_disable(&mdev->napi[i]);
+
+    mt76x02_dma_disable(dev);
+    mt76x02_mcu_cleanup(dev);
+    mt76x0_chip_onoff(dev, false, false);
+
+    pci_enable_wake(pdev, pci_choose_state(pdev, state), true);
+    pci_save_state(pdev);
+
+    return pci_set_power_state(pdev, pci_choose_state(pdev, state));
+}
+
+static int mt76x0e_resume(struct pci_dev *pdev)
+{
+    struct mt76_dev *mdev = pci_get_drvdata(pdev);
+    struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76);
+    int err, i;
+
+    err = pci_set_power_state(pdev, PCI_D0);
+    if (err)
+        return err;
+
+    pci_restore_state(pdev);
+
+    mt76_worker_enable(&mdev->tx_worker);
+
+    mt76_for_each_q_rx(mdev, i) {
+        mt76_queue_rx_reset(dev, i);
+        napi_enable(&mdev->napi[i]);
+        napi_schedule(&mdev->napi[i]);
+    }
+
+    napi_enable(&mdev->tx_napi);
+    napi_schedule(&mdev->tx_napi);
+
+    return mt76x0e_init_hardware(dev, true);
+}
+#endif /* CONFIG_PM */
+
 static const struct pci_device_id mt76x0e_device_table[] = {
     { PCI_DEVICE(0x14c3, 0x7610) },
     { PCI_DEVICE(0x14c3, 0x7630) },
@@ -237,6 +306,10 @@ static struct pci_driver mt76x0e_driver = {
     .id_table = mt76x0e_device_table,
     .probe = mt76x0e_probe,
     .remove = mt76x0e_remove,
+#ifdef CONFIG_PM
+    .suspend = mt76x0e_suspend,
+    .resume = mt76x0e_resume,
+#endif /* CONFIG_PM */
 };
 module_pci_driver(mt76x0e_driver);
--
2.23.0
[PATCH openEuler-21.03] netfilter: nf_tables: missing error reporting for not selected expressions table inclusion from stable-21.03 commit 2682184605848ae28866bf71c7d4774bda7badfc bugzilla:https://bugzilla.openeuler.org/show_bug.cgi?id=324 CVE: NA
by fanxingin 19 Jun '21

From: Pablo Neira Ayuso <pablo(a)netfilter.org>

--------------------------------

commit c781471d67a56d7d4c113669a11ede0463b5c719 upstream.

Sometimes users forget to turn on nftables extensions from Kconfig that they
need. In such case, the error reporting from userspace is misleading:

$ sudo nft add rule x y counter
Error: Could not process rule: No such file or directory
add rule x y counter
^^^^^^^^^^^^^^^^^^^^

Add missing NL_SET_BAD_ATTR() to provide a hint:

$ nft add rule x y counter
Error: Could not process rule: No such file or directory
add rule x y counter
^^^^^^^

Fixes: 83d9dcba06c5 ("netfilter: nf_tables: extended netlink error reporting for expressions")
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: fanxingin <fanxingin(a)qq.com>
---
 net/netfilter/nf_tables_api.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index 9a080767667b..168c87c1ac75 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -3263,8 +3263,10 @@ static int nf_tables_newrule(struct net *net, struct sock *nlsk,
         if (n == NFT_RULE_MAXEXPRS)
             goto err1;
         err = nf_tables_expr_parse(&ctx, tmp, &info[n]);
-        if (err < 0)
+        if (err < 0) {
+            NL_SET_BAD_ATTR(extack, tmp);
             goto err1;
+        }
         size += info[n].ops->size;
         n++;
     }
--
2.23.0
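NL_SET_BAD_ATTR() records which netlink attribute triggered the failure so userspace tools can underline the offending token. The general error-reporting shape, reduced to a userspace sketch with an invented extack-like structure:

    #include <stdio.h>

    /* Invented stand-in for netlink extended ACK reporting. */
    struct extack { const char *bad_attr; };

    static int parse_expr(const char *attr)
    {
        (void)attr;
        return -1; /* pretend the expression module is not available */
    }

    static int parse_rule(const char *attrs[], int n, struct extack *ext)
    {
        for (int i = 0; i < n; i++) {
            int err = parse_expr(attrs[i]);
            if (err < 0) {
                ext->bad_attr = attrs[i];  /* the hint the patch adds */
                return err;
            }
        }
        return 0;
    }

    int main(void)
    {
        const char *attrs[] = { "counter" };
        struct extack ext = { 0 };

        if (parse_rule(attrs, 1, &ext) < 0)
            printf("could not process rule: bad attribute '%s'\n", ext.bad_attr);
        return 0;
    }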
[PATCH openEuler-21.03] net: hns3: fix incorrect resp_msg issue
by lovelonelytime 19 Jun '21

From: Jiaran Zhang <zhangjiaran(a)huawei.com>

stable inclusion
from stable-21.03
commit d207c1e8e3fb82aca5ffd14c33d0848b6b820efc
bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=205
CVE: NA

[ Upstream commit a710b9ffbebaf713f7dbd4dbd9524907e5d66f33 ]

In hclge_mbx_handler(), if there are two consecutive mailbox messages that
requires resp_msg, the resp_msg is not cleared after processing the first
message, which will cause the resp_msg data of second message incorrect.

Fix it by clearing the resp_msg before processing every mailbox message.

Fixes: bb5790b71bad ("net: hns3: refactor mailbox response scheme between PF and VF")
Signed-off-by: Jiaran Zhang <zhangjiaran(a)huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong(a)huawei.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: lovelonelytime <1390751361(a)qq.com>
---
 drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
index 3ab6db2588d3..324d150d2b22 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
@@ -657,7 +657,6 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
     unsigned int flag;
     int ret = 0;
-    memset(&resp_msg, 0, sizeof(resp_msg));
     /* handle all the mailbox requests in the queue */
     while (!hclge_cmd_crq_empty(&hdev->hw)) {
         if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) {
@@ -685,6 +684,9 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
         trace_hclge_pf_mbx_get(hdev, req);
+        /* clear the resp_msg before processing every mailbox message */
+        memset(&resp_msg, 0, sizeof(resp_msg));
+
         switch (req->msg.code) {
         case HCLGE_MBX_MAP_RING_TO_VECTOR:
             ret = hclge_map_unmap_ring_to_vf_vector(vport, true,
--
2.23.0
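The underlying bug class is reusing a response buffer across loop iterations without resetting it, so fields from message N leak into message N+1. A minimal standalone sketch of the fix, clearing the buffer at the top of every iteration (the message layout is invented for the example):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical response buffer, for illustration only. */
    struct resp { int len; int data; };

    int main(void)
    {
        struct resp resp_msg;
        int lens[] = { 4, 0 };   /* the second message sets no length */

        for (int i = 0; i < 2; i++) {
            /* Clear the response before handling every message, as the
             * patch does; moving this memset outside the loop would leave
             * len stuck at 4 for the second message. */
            memset(&resp_msg, 0, sizeof(resp_msg));
            if (lens[i])
                resp_msg.len = lens[i];
            printf("msg %d -> resp len %d\n", i, resp_msg.len);
        }
        return 0;
    }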
