Kernel

kernel@openeuler.org

  • 47 participants
  • 18241 discussions
[PATCH openEuler-1.0-LTS] HID: intel_ish-hid: Add check for ishtp_dma_tx_map
by Yongqiang Liu 27 Jun '23

From: Jiasheng Jiang <jiasheng(a)iscas.ac.cn>

stable inclusion
from stable-v4.19.272
commit cc906a3a4432da143ab3d2e894f99ddeff500cd3
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7FCLX
CVE: CVE-2023-3358
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

[ Upstream commit b3d40c3ec3dc4ad78017de6c3a38979f57aaaab8 ]

As the kcalloc may return NULL pointer, it should be better to check the
ishtp_dma_tx_map before use in order to avoid NULL pointer dereference.

Fixes: 3703f53b99e4 ("HID: intel_ish-hid: ISH Transport layer")
Signed-off-by: Jiasheng Jiang <jiasheng(a)iscas.ac.cn>
Acked-by: Srinivas Pandruvada <srinivas.pandruvada(a)linux.intel.com>
Signed-off-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Cai Xinchen <caixinchen1(a)huawei.com>
Reviewed-by: GUO Zihua <guozihua(a)huawei.com>
Reviewed-by: GONG, Ruiqi <gongruiqi1(a)huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2(a)huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13(a)huawei.com>
---
 drivers/hid/intel-ish-hid/ishtp/dma-if.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/hid/intel-ish-hid/ishtp/dma-if.c b/drivers/hid/intel-ish-hid/ishtp/dma-if.c
index 2783f3666114..ff4419c8ed4f 100644
--- a/drivers/hid/intel-ish-hid/ishtp/dma-if.c
+++ b/drivers/hid/intel-ish-hid/ishtp/dma-if.c
@@ -113,6 +113,11 @@ void *ishtp_cl_get_dma_send_buf(struct ishtp_device *dev,
 	int required_slots = (size / DMA_SLOT_SIZE)
 		+ 1 * (size % DMA_SLOT_SIZE != 0);
 
+	if (!dev->ishtp_dma_tx_map) {
+		dev_err(dev->devc, "Fail to allocate Tx map\n");
+		return NULL;
+	}
+
 	spin_lock_irqsave(&dev->ishtp_dma_tx_lock, flags);
 	for (i = 0; i <= (dev->ishtp_dma_num_slots - required_slots); i++) {
 		free = 1;
@@ -159,6 +164,11 @@ void ishtp_cl_release_dma_acked_mem(struct ishtp_device *dev,
 		return;
 	}
 
+	if (!dev->ishtp_dma_tx_map) {
+		dev_err(dev->devc, "Fail to allocate Tx map\n");
+		return;
+	}
+
 	i = (msg_addr - dev->ishtp_host_dma_tx_buf) / DMA_SLOT_SIZE;
 	spin_lock_irqsave(&dev->ishtp_dma_tx_lock, flags);
 	for (j = 0; j < acked_slots; j++) {
--
2.25.1
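The fix above is an instance of a very common defensive pattern: an allocation made once at init time may fail, so every later consumer must verify the pointer before dereferencing it. A minimal userspace sketch of that pattern follows; the names (tx_map, NUM_SLOTS, claim_slot) are illustrative and not taken from the driver.

#include <stdio.h>
#include <stdlib.h>

/*
 * Userspace analogue of the patch: the allocation (kcalloc in the driver,
 * calloc here) may fail, so users of the map check for NULL instead of
 * dereferencing a pointer that was never set up.
 */
#define NUM_SLOTS 256

static unsigned char *tx_map;   /* set up once, used by many callers */

static int setup_map(void)
{
	tx_map = calloc(NUM_SLOTS, sizeof(*tx_map));
	return tx_map ? 0 : -1;  /* callers may ignore this, as the driver did */
}

static int claim_slot(int slot)
{
	if (!tx_map) {           /* the added guard: fail loudly, not by crashing */
		fprintf(stderr, "tx map was never allocated\n");
		return -1;
	}
	tx_map[slot] = 1;
	return 0;
}

int main(void)
{
	setup_map();
	if (claim_slot(3) == 0)
		printf("slot 3 claimed\n");
	free(tx_map);
	return 0;
}

Compiled with any C99 compiler, the program only reports a claimed slot when the allocation actually succeeded; otherwise it prints an error instead of dereferencing NULL.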
[PATCH openEuler-1.0-LTS] media: saa7134: fix use after free bug in saa7134_finidev due to race condition
by Yongqiang Liu 27 Jun '23

From: Zheng Wang <zyytlz.wz(a)163.com>

stable inclusion
from stable-v4.19.283
commit 95e684340470a95ff4957cb9a536ec7a0461c75b
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7ERIV
CVE: CVE-2023-3327

--------------------------------

[ Upstream commit 30cf57da176cca80f11df0d9b7f71581fe601389 ]

saa7134_initdev calls saa7134_hwinit1, which in turn invokes three
functions: saa7134_video_init1, saa7134_ts_init1 and saa7134_vbi_init1.
All of them init a timer with the same callback. Take saa7134_video_init1
as an example: it binds &dev->video_q.timeout to saa7134_buffer_timeout,
and in buffer_activate the timer is started.

If we remove the module or device, saa7134_finidev performs the cleanup,
but there may still be unfinished timer work. The possible sequence is as
follows, which causes a typical UAF bug. Fix it by canceling the timers
before cleanup in saa7134_finidev.

CPU0                      CPU1
                          |saa7134_buffer_timeout
saa7134_finidev           |
  kfree(dev);             |
                          |
                          | saa7134_buffer_next
                          |   //use dev

Fixes: 1e7126b4a86a ("media: saa7134: Convert timers to use timer_setup()")
Signed-off-by: Zheng Wang <zyytlz.wz(a)163.com>
Signed-off-by: Hans Verkuil <hverkuil-cisco(a)xs4all.nl>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Longlong Xia <xialonglong1(a)huawei.com>
Reviewed-by: Nanyong Sun <sunnanyong(a)huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng(a)huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13(a)huawei.com>
---
 drivers/media/pci/saa7134/saa7134-ts.c    | 1 +
 drivers/media/pci/saa7134/saa7134-vbi.c   | 1 +
 drivers/media/pci/saa7134/saa7134-video.c | 1 +
 3 files changed, 3 insertions(+)

diff --git a/drivers/media/pci/saa7134/saa7134-ts.c b/drivers/media/pci/saa7134/saa7134-ts.c
index 2be703617e29..e7adcd4f9962 100644
--- a/drivers/media/pci/saa7134/saa7134-ts.c
+++ b/drivers/media/pci/saa7134/saa7134-ts.c
@@ -309,6 +309,7 @@ int saa7134_ts_start(struct saa7134_dev *dev)
 
 int saa7134_ts_fini(struct saa7134_dev *dev)
 {
+	del_timer_sync(&dev->ts_q.timeout);
 	saa7134_pgtable_free(dev->pci, &dev->ts_q.pt);
 	return 0;
 }
diff --git a/drivers/media/pci/saa7134/saa7134-vbi.c b/drivers/media/pci/saa7134/saa7134-vbi.c
index 57bea543c39b..559db500b19c 100644
--- a/drivers/media/pci/saa7134/saa7134-vbi.c
+++ b/drivers/media/pci/saa7134/saa7134-vbi.c
@@ -194,6 +194,7 @@ int saa7134_vbi_init1(struct saa7134_dev *dev)
 int saa7134_vbi_fini(struct saa7134_dev *dev)
 {
 	/* nothing */
+	del_timer_sync(&dev->vbi_q.timeout);
 	return 0;
 }
 
diff --git a/drivers/media/pci/saa7134/saa7134-video.c b/drivers/media/pci/saa7134/saa7134-video.c
index 1a50ec9d084f..103cbc8c1345 100644
--- a/drivers/media/pci/saa7134/saa7134-video.c
+++ b/drivers/media/pci/saa7134/saa7134-video.c
@@ -2213,6 +2213,7 @@ int saa7134_video_init1(struct saa7134_dev *dev)
 
 void saa7134_video_fini(struct saa7134_dev *dev)
 {
+	del_timer_sync(&dev->video_q.timeout);
 	/* free stuff */
 	vb2_queue_release(&dev->video_vbq);
 	saa7134_pgtable_free(dev->pci, &dev->video_q.pt);
--
2.25.1
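The race diagrammed in the commit message is not specific to timers: any deferred callback that touches an object can outlive that object unless teardown synchronizes with it first, which is what del_timer_sync() provides in the patch. Below is a small userspace analogue using a pthread in place of the kernel timer; struct dev_state and timeout_worker are invented for the illustration and are not driver code.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct dev_state {
	pthread_t worker;
	bool stop;
	int frames;
};

/* Stands in for the timer callback (saa7134_buffer_timeout). */
static void *timeout_worker(void *arg)
{
	struct dev_state *dev = arg;

	while (!__atomic_load_n(&dev->stop, __ATOMIC_ACQUIRE)) {
		dev->frames++;          /* "use dev", like saa7134_buffer_next() */
		usleep(1000);
	}
	return NULL;
}

static void dev_fini(struct dev_state *dev)
{
	/*
	 * Analogue of del_timer_sync(): stop the callback and wait for it to
	 * finish *before* freeing dev. Freeing first would reproduce the
	 * use-after-free described above.
	 */
	__atomic_store_n(&dev->stop, true, __ATOMIC_RELEASE);
	pthread_join(dev->worker, NULL);
	free(dev);
}

int main(void)
{
	struct dev_state *dev = calloc(1, sizeof(*dev));

	if (!dev)
		return 1;
	pthread_create(&dev->worker, NULL, timeout_worker, dev);
	usleep(10000);
	dev_fini(dev);
	puts("teardown completed without touching freed memory");
	return 0;
}

Build with cc -pthread; moving free(dev) before pthread_join() would recreate the same use-after-free window the patch closes.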
[PATCH openEuler-1.0-LTS] config: enable CONFIG_QOS_SCHED_SMART_GRID by default
by Zhang Changzhong 27 Jun '23

From: Wang ShaoBo <bobo.shaobowang(a)huawei.com>

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I7G6SW
CVE: NA

--------------------------------

Enable CONFIG_QOS_SCHED_SMART_GRID by default in the arm64
openeuler_defconfig.

Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
Reviewed-by: Wei Li <liwei391(a)huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi(a)huawei.com>
Reviewed-by: Chao Liu <liuchao173(a)huawei.com>
Signed-off-by: Zhang Changzhong <zhangchangzhong(a)huawei.com>
---
 arch/arm64/configs/openeuler_defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 95225e2..60b1db8 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -6004,6 +6004,7 @@ CONFIG_IO_STRICT_DEVMEM=y
 # CONFIG_ARM64_RELOC_TEST is not set
 # CONFIG_CORESIGHT is not set
 CONFIG_SMMU_BYPASS_DEV=y
+CONFIG_QOS_SCHED_SMART_GRID=y
 CONFIG_ETMEM_SCAN=m
 CONFIG_ETMEM_SWAP=m
 CONFIG_STAGING=y
--
2.9.5
[PATCH openEuler-22.03-LTS 0/2] hugetlb: Fix some incorrect behavior
by Liu Shixin 27 Jun '23

Fix two hugetlb bugs: 1) invalid use of nr_online_nodes; 2) inconsistency
between 1G hugepages and 2M hugepages.

Peng Liu (2):
  hugetlb: fix wrong use of nr_online_nodes
  hugetlb: fix hugepages_setup when deal with pernode

 mm/hugetlb.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

--
2.25.1
[PATCH OLK-5.10] HID: intel_ish-hid: Add check for ishtp_dma_tx_map
by Cai Xinchen 27 Jun '23

From: Jiasheng Jiang <jiasheng(a)iscas.ac.cn>

stable inclusion
from stable-v5.10.166
commit 7b4516ba56f1fcb13ffc91912f3074e28362228d
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7FCLX
CVE: CVE-2023-3358
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

----------------------------------------

[ Upstream commit b3d40c3ec3dc4ad78017de6c3a38979f57aaaab8 ]

As the kcalloc may return NULL pointer, it should be better to check the
ishtp_dma_tx_map before use in order to avoid NULL pointer dereference.

Fixes: 3703f53b99e4 ("HID: intel_ish-hid: ISH Transport layer")
Signed-off-by: Jiasheng Jiang <jiasheng(a)iscas.ac.cn>
Acked-by: Srinivas Pandruvada <srinivas.pandruvada(a)linux.intel.com>
Signed-off-by: Jiri Kosina <jkosina(a)suse.cz>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Cai Xinchen <caixinchen1(a)huawei.com>
---
 drivers/hid/intel-ish-hid/ishtp/dma-if.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/hid/intel-ish-hid/ishtp/dma-if.c b/drivers/hid/intel-ish-hid/ishtp/dma-if.c
index 40554c8daca0..00046cbfd4ed 100644
--- a/drivers/hid/intel-ish-hid/ishtp/dma-if.c
+++ b/drivers/hid/intel-ish-hid/ishtp/dma-if.c
@@ -104,6 +104,11 @@ void *ishtp_cl_get_dma_send_buf(struct ishtp_device *dev,
 	int required_slots = (size / DMA_SLOT_SIZE)
 		+ 1 * (size % DMA_SLOT_SIZE != 0);
 
+	if (!dev->ishtp_dma_tx_map) {
+		dev_err(dev->devc, "Fail to allocate Tx map\n");
+		return NULL;
+	}
+
 	spin_lock_irqsave(&dev->ishtp_dma_tx_lock, flags);
 	for (i = 0; i <= (dev->ishtp_dma_num_slots - required_slots); i++) {
 		free = 1;
@@ -150,6 +155,11 @@ void ishtp_cl_release_dma_acked_mem(struct ishtp_device *dev,
 		return;
 	}
 
+	if (!dev->ishtp_dma_tx_map) {
+		dev_err(dev->devc, "Fail to allocate Tx map\n");
+		return;
+	}
+
 	i = (msg_addr - dev->ishtp_host_dma_tx_buf) / DMA_SLOT_SIZE;
 	spin_lock_irqsave(&dev->ishtp_dma_tx_lock, flags);
 	for (j = 0; j < acked_slots; j++) {
--
2.17.1
[PATCH OLK-5.10] mm/hugetlb_vmemmap: remap head page to newly allocated page
by Liu Shixin 26 Jun '23

From: Joao Martins <joao.m.martins(a)oracle.com>

mainline inclusion
from mainline-v6.2-rc1
commit 11aad2631bf74b3c811dee76154702aab855a323
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I6SROX
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

Today with `hugetlb_free_vmemmap=on` the struct page memory that is freed
back to the page allocator is as follows: for a 2M hugetlb page it will
reuse the first 4K vmemmap page to remap the remaining 7 vmemmap pages,
and for a 1G hugetlb page it will remap the remaining 4095 vmemmap pages.
Essentially, that means it breaks the first 4K of a potentially contiguous
chunk of memory of 32K (for 2M hugetlb pages) or 16M (for 1G hugetlb
pages). For this reason the memory that is freed back to the page
allocator cannot be used by hugetlb to allocate huge pages of the same
size, but rather only of a smaller huge page size:

Trying to assign a 64G node to hugetlb (on a 128G 2-node guest, each node
having 64G):

* Before allocation:

Free pages count per migrate type at order    0      1      2      3   4   5   6   7   8   9     10
...
Node 0, zone Normal, type Movable           340    100     32     15   1   2   0   0   0   1  15558

$ echo 32768 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
$ cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
31987

* After:

Node 0, zone Normal, type Movable         30893  32006  31515      7   0   0   0   0   0   0      0

Notice how the memory freed back is put into the 4K / 8K / 16K page
pools, and only 31987 pages (63974M) are allocated in total.

To fix this behaviour, rather than remapping the second vmemmap page
(thus breaking the contiguous block of memory backing the struct pages),
repopulate the first vmemmap page with a new one. We allocate and copy
from the currently mapped vmemmap page, and then remap it later on. The
same algorithm works if there's a pre-initialized walk::reuse_page; the
head page doesn't need to be skipped and instead we remap it when the
@addr being changed is the @reuse_addr.

The new head page is allocated in vmemmap_remap_free() given that on
restore there's no need for functional change. Note that, because right
now one hugepage is remapped at a time, only one free 4K page at a time
is needed to remap the head page. Should it fail to allocate said new
page, it reuses the one that's already mapped, just like before. As a
result, for every 64G of contiguous hugepages it can give back 1G more of
contiguous memory per 64G, while needing in total 128M of new 4K pages
(for 2M hugetlb) or 256K (for 1G hugetlb).

After the changes, try to assign a 64G node to hugetlb (on a 128G 2-node
guest, each node with 64G):

* Before allocation:

Free pages count per migrate type at order    0   1   2    3   4   5   6   7   8   9     10
...
Node 0, zone Normal, type Movable             1   1   1    0   0   1   0   0   1   1  15564

$ echo 32768 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
$ cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
32394

* After:

Node 0, zone Normal, type Movable             0  50  97  108  96  81  70  46  18   0      0

In the example above, 407 more hugetlb 2M pages are allocated, i.e. 814M
out of the 32394 (64788M) allocated. So the memory freed back is indeed
being used by hugetlb, and there is no massive amount of order-0..order-2
pages accumulated unused.

[joao.m.martins(a)oracle.com: v3]
  Link: https://lkml.kernel.org/r/20221109200623.96867-1-joao.m.martins@oracle.com
[joao.m.martins(a)oracle.com: add smp_wmb() to ensure page contents are visible prior to PTE write]
  Link: https://lkml.kernel.org/r/20221110121214.6297-1-joao.m.martins@oracle.com
Link: https://lkml.kernel.org/r/20221107153922.77094-1-joao.m.martins@oracle.com
Signed-off-by: Joao Martins <joao.m.martins(a)oracle.com>
Reviewed-by: Muchun Song <songmuchun(a)bytedance.com>
Cc: Mike Kravetz <mike.kravetz(a)oracle.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>

Conflicts:
	mm/hugetlb_vmemmap.c

Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
---
 mm/sparse-vmemmap.c | 41 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 34 insertions(+), 7 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index a5c2efcf59e2..6803c89c5d21 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -221,12 +221,7 @@ static int vmemmap_remap_range(unsigned long start, unsigned long end,
 			return ret;
 	} while (pgd++, addr = next, addr != end);
 
-	/*
-	 * We only change the mapping of the vmemmap virtual address range
-	 * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
-	 * belongs to the range.
-	 */
-	flush_tlb_kernel_range(start + PAGE_SIZE, end);
+	flush_tlb_kernel_range(start, end);
 
 	return 0;
 }
@@ -264,9 +259,23 @@ static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
 	 * to the tail pages.
 	 */
 	pgprot_t pgprot = PAGE_KERNEL_RO;
-	pte_t entry = mk_pte(walk->reuse_page, pgprot);
 	struct page *page = pte_page(*pte);
+	pte_t entry;
 
+	/* Remapping the head page requires r/w */
+	if (unlikely(addr == walk->reuse_addr)) {
+		pgprot = PAGE_KERNEL;
+		list_del(&walk->reuse_page->lru);
+
+		/*
+		 * Makes sure that preceding stores to the page contents from
+		 * vmemmap_remap_free() become visible before the set_pte_at()
+		 * write.
+		 */
+		smp_wmb();
+	}
+
+	entry = mk_pte(walk->reuse_page, pgprot);
 	list_add_tail(&page->lru, walk->vmemmap_pages);
 	set_pte_at(&init_mm, addr, pte, entry);
 }
@@ -331,6 +340,24 @@ int vmemmap_remap_free(unsigned long start, unsigned long end,
 		.reuse_addr	= reuse,
 		.vmemmap_pages	= &vmemmap_pages,
 	};
+	int nid = page_to_nid((struct page *)start);
+	gfp_t gfp_mask = GFP_KERNEL | __GFP_THISNODE | __GFP_NORETRY |
+			__GFP_NOWARN;
+
+	/*
+	 * Allocate a new head vmemmap page to avoid breaking a contiguous
+	 * block of struct page memory when freeing it back to page allocator
+	 * in free_vmemmap_page_list(). This will allow the likely contiguous
+	 * struct page backing memory to be kept contiguous and allowing for
+	 * more allocations of hugepages. Fallback to the currently
+	 * mapped head page in case should it fail to allocate.
+	 */
+	walk.reuse_page = alloc_pages_node(nid, gfp_mask, 0);
+	if (walk.reuse_page) {
+		copy_page(page_to_virt(walk.reuse_page),
+			  (void *)walk.reuse_addr);
+		list_add(&walk.reuse_page->lru, &vmemmap_pages);
+	}
 
 	/*
 	 * In order to make remapping routine most efficient for the huge pages,
--
2.25.1
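The sizes quoted in the commit message (8 vmemmap pages per 2M hugepage, 128M of replacement head pages and 1G of recovered contiguity per 64G, 256K for 1G hugepages) follow from the usual 4K base page and 64-byte struct page. The short program below just redoes that arithmetic so the numbers can be checked; it is an editor-added sketch, not part of the patch.

#include <stdio.h>

/*
 * Re-derivation of the numbers in the commit message, assuming 4K base
 * pages and a 64-byte struct page (the common x86_64/arm64 layout).
 */
#define BASE_PAGE	4096ULL
#define STRUCT_PAGE	64ULL

static void report(const char *name, unsigned long long huge_size)
{
	unsigned long long base_pages  = huge_size / BASE_PAGE;
	unsigned long long vmemmap_len = base_pages * STRUCT_PAGE;   /* bytes */
	unsigned long long vmemmap_pgs = vmemmap_len / BASE_PAGE;
	unsigned long long per_64g     = (64ULL << 30) / huge_size;  /* hugepages */

	printf("%s hugepage: %llu vmemmap pages (%llu KiB), %llu freed per hugepage\n",
	       name, vmemmap_pgs, vmemmap_len >> 10, vmemmap_pgs - 1);
	printf("  per 64G: %llu new head pages (%llu KiB), %llu MiB returned contiguous\n",
	       per_64g, per_64g * BASE_PAGE >> 10, per_64g * vmemmap_len >> 20);
}

int main(void)
{
	report("2M", 2ULL << 20);
	report("1G", 1ULL << 30);
	return 0;
}

Running it prints 8 vmemmap pages (32 KiB) per 2M hugepage with 32768 new head pages (131072 KiB, i.e. 128M) and 1024 MiB kept contiguous per 64G, and 4096 vmemmap pages per 1G hugepage with 64 new head pages (256 KiB), matching the figures above.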
[PATCH OLK-5.10 0/2] hugetlb: Fix some incorrect behavior
by Liu Shixin 26 Jun '23

Fix two hugetlb bugs: 1) invalid use of nr_online_nodes; 2) inconsistency
between 1G hugepages and 2M hugepages.

Peng Liu (2):
  hugetlb: fix wrong use of nr_online_nodes
  hugetlb: fix hugepages_setup when deal with pernode

 mm/hugetlb.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

--
2.25.1
[PATCH openEuler-22.03-LTS-SP2 0/2] set the iova rcache global
by Zhang Zekun 26 Jun '23

The iova rcache has a performance problem; set the iova rcache global mag
max size to 128 to fix it.

Zhang Zekun (2):
  iommu/iova: increase the iova_rcache depot max size
  config: enable set the max iova mag size to 128

 arch/arm64/configs/openeuler_defconfig |  1 +
 drivers/iommu/Kconfig                  | 10 ++++++++++
 include/linux/iova.h                   |  2 +-
 3 files changed, 12 insertions(+), 1 deletion(-)

--
2.17.1
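For readers who have not looked at the iova rcache before: it is a magazine-style cache with per-CPU magazines backed by a global depot, and the knob this series raises is how many magazines that depot may hold before freed entries fall through to the underlying allocator. The sketch below is a deliberately simplified, single-threaded model of that depot behaviour; MAG_SIZE, MAX_GLOBAL_MAGS and the helpers are illustrative and do not reproduce the kernel's iova.c implementation.

#include <stdio.h>
#include <stdlib.h>

#define MAG_SIZE        128  /* entries per magazine */
#define MAX_GLOBAL_MAGS 32   /* the limit the series raises to 128 */

struct magazine {
	unsigned long pfns[MAG_SIZE];
	int used;
};

static struct magazine *depot[MAX_GLOBAL_MAGS];
static int depot_mags;

/* Called when a CPU's local magazine fills up with freed entries. */
static int depot_push(struct magazine *mag)
{
	if (depot_mags == MAX_GLOBAL_MAGS) {
		free(mag);  /* depot full: entries go back to the allocator */
		return 0;
	}
	depot[depot_mags++] = mag;
	return 1;
}

/* Called when a CPU runs out of locally cached entries. */
static struct magazine *depot_pop(void)
{
	return depot_mags ? depot[--depot_mags] : NULL;
}

int main(void)
{
	int cached = 0, released = 0;

	for (int i = 0; i < 64; i++) {
		struct magazine *mag = calloc(1, sizeof(*mag));

		if (!mag)
			return 1;
		mag->used = MAG_SIZE;
		if (depot_push(mag))
			cached++;
		else
			released++;
	}
	printf("cached %d magazines, released %d past the depot limit\n",
	       cached, released);
	while (depot_mags)
		free(depot_pop());
	return 0;
}

With the small limit, half of the freed magazines in this toy run are released immediately; a larger depot keeps more of them cached for reuse, which is the effect the series is after.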
[PATCH openEuler-1.0-LTS] mm: oom: move memcg_print_bad_task() out of mem_cgroup_scan_tasks()
by Yongqiang Liu 26 Jun '23

From: Kang Chen <void0red(a)hust.edu.cn>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I6NYW4
CVE: NA

--------------------------------

Raw call flow:

  oom_kill_process
    -> mem_cgroup_scan_tasks(.., .., message)
      -> memcg_print_bad_task(message, ..)

message is of type "const char *" and is incorrectly cast to
"oom_control *" in memcg_print_bad_task. Fix it by moving
memcg_print_bad_task out of mem_cgroup_scan_tasks and calling it in
select_bad_process and dump_tasks. Furthermore, use struct oom_control *
directly and remove the useless parameter `ret`.

Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Kang Chen <void0red(a)hust.edu.cn>
(cherry picked from commit 789038c7436e6dc0efb144114992134a52233dbd)

Conflicts:
	include/linux/memcontrol.h [Add declaration of memcg_print_bad_task()]
	mm/memcontrol.c
	mm/oom_kill.c

Signed-off-by: Liu Shixin <liushixin2(a)huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13(a)huawei.com>
---
 include/linux/memcontrol.h | 10 +++++++++-
 mm/memcontrol.c            | 15 +++++++++------
 mm/oom_kill.c              | 14 ++++++++------
 3 files changed, 26 insertions(+), 13 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 23db8eec0755..1cb695fb83c7 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -35,6 +35,7 @@ struct mem_cgroup;
 struct page;
 struct mm_struct;
 struct kmem_cache;
+struct oom_control;
 
 /* Cgroup-specific page state, on top of universal node page state */
 enum memcg_stat_item {
@@ -354,9 +355,11 @@ DECLARE_STATIC_KEY_FALSE(memcg_qos_stat_key);
 
 bool memcg_low_priority_scan_tasks(int (*)(struct task_struct *, void *),
 				   void *);
-void memcg_print_bad_task(void *arg, int ret);
+void memcg_print_bad_task(struct oom_control *oc);
 extern int sysctl_memcg_qos_handler(struct ctl_table *table, int write,
 		void __user *buffer, size_t *length, loff_t *ppos);
+#else
+void memcg_print_bad_task(struct oom_control *oc);
 #endif
 
 /*
@@ -1132,6 +1135,11 @@ static inline void count_memcg_event_mm(struct mm_struct *mm,
 					enum vm_event_item idx)
 {
 }
+
+static inline void memcg_print_bad_task(struct oom_control *oc)
+{
+}
+
 #endif /* CONFIG_MEMCG */
 
 /* idx can be of type enum memcg_stat_item or node_stat_item */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index aff8c05b2c72..1b11bc13e1aa 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1246,9 +1246,6 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 			break;
 		}
 	}
-#ifdef CONFIG_MEMCG_QOS
-	memcg_print_bad_task(arg, ret);
-#endif
 	return ret;
 }
 
@@ -3747,16 +3744,15 @@ bool memcg_low_priority_scan_tasks(int (*fn)(struct task_struct *, void *),
 	return oc->chosen ? true : false;
 }
 
-void memcg_print_bad_task(void *arg, int ret)
+void memcg_print_bad_task(struct oom_control *oc)
 {
-	struct oom_control *oc = arg;
 	struct mem_cgroup *memcg;
 	struct mem_cgroup_extension *memcg_ext;
 
 	if (!static_branch_likely(&memcg_qos_stat_key))
 		return;
 
-	if (!ret && oc->chosen) {
+	if (oc->chosen) {
 		memcg = mem_cgroup_from_task(oc->chosen);
 		memcg_ext = to_memcg_ext(memcg);
 		if (memcg_ext->memcg_priority)
@@ -3786,6 +3782,13 @@ int sysctl_memcg_qos_handler(struct ctl_table *table, int write,
 
 	return ret;
 }
+
+#else
+
+void memcg_print_bad_task(struct oom_control *oc)
+{
+}
+
 #endif
 
 #ifdef CONFIG_NUMA
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index bcd08df6d577..4ad05a72bb8c 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -415,9 +415,10 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
  */
 static void select_bad_process(struct oom_control *oc)
 {
-	if (is_memcg_oom(oc))
-		mem_cgroup_scan_tasks(oc->memcg, oom_evaluate_task, oc);
-	else {
+	if (is_memcg_oom(oc)) {
+		if (!mem_cgroup_scan_tasks(oc->memcg, oom_evaluate_task, oc))
+			memcg_print_bad_task(oc);
+	} else {
 		struct task_struct *p;
 
 #ifdef CONFIG_MEMCG_QOS
@@ -512,9 +513,10 @@ static void dump_tasks(struct oom_control *oc)
 		pr_info("[  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name\n");
 	}
 
-	if (is_memcg_oom(oc))
-		mem_cgroup_scan_tasks(oc->memcg, dump_task, oc);
-	else {
+	if (is_memcg_oom(oc)) {
+		if (!mem_cgroup_scan_tasks(oc->memcg, dump_task, oc))
+			memcg_print_bad_task(oc);
+	} else {
 		struct task_struct *p;
 
 		rcu_read_lock();
--
2.25.1
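The underlying bug class is a void * callback argument that invites a mismatched cast: the old code received a const char * message through void * and reinterpreted it as a struct oom_control *. The userspace sketch below shows why giving the helper an explicitly typed parameter, as the patch does, turns that misuse into a compile-time error; struct oom_ctl and both helpers are invented for the example.

#include <stdio.h>

struct oom_ctl {
	const char *message;
	int chosen_pid;
};

/* Before: takes void *, so any pointer silently "fits". */
static void print_bad_task_untyped(void *arg)
{
	struct oom_ctl *oc = arg;	/* wrong if arg is really a string */

	printf("chosen pid: %d\n", oc->chosen_pid);
}

/* After: the compiler enforces that callers pass the right type. */
static void print_bad_task(struct oom_ctl *oc)
{
	printf("chosen pid: %d\n", oc->chosen_pid);
}

int main(void)
{
	struct oom_ctl oc = { .message = "Out of memory", .chosen_pid = 1234 };

	print_bad_task_untyped(&oc);	/* fine only because the right object was passed */
	/* print_bad_task_untyped("Out of memory");  compiles, then reads garbage */
	print_bad_task(&oc);		/* passing a string here fails to compile */
	return 0;
}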
[PATCH OLK-5.10] config: enable set the max iova mag size to 128
by Zhang Zekun 26 Jun '23

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I7ASVH
CVE: NA

---------------------------------------

Set the max iova mag size to 128 to support more concurrency in iova
allocation and to fix the problem described in the bugzilla.

Signed-off-by: Zhang Zekun <zhangzekun11(a)huawei.com>
---
 arch/arm64/configs/openeuler_defconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 730078e475d3..2e4880c485be 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -5947,6 +5947,7 @@ CONFIG_ARM_SMMU_V3_PM=y
 # CONFIG_QCOM_IOMMU is not set
 # CONFIG_VIRTIO_IOMMU is not set
 CONFIG_SMMU_BYPASS_DEV=y
+CONFIG_IOVA_MAX_GLOBAL_MAGS=128
 
 #
 # Remoteproc drivers
--
2.17.1