mailweb.openeuler.org
Kernel

kernel@openeuler.org

  • 40 participants
  • 19026 discussions
[PATCH OLK-6.6 v4 0/2] mm/mem_sampling: fix some mem sampling issues
by Ze Zuo 23 Jun '25

This patch set fixes issues in mem_sampling:

- Prevent mem_sampling from being enabled if SPE initialization fails.
- Fix inaccurate sampling during NUMA balancing and DAMON.

These changes improve memory sampling accuracy and stability on ARM systems.

Changes since v3:
-- fix bugzilla issue.

Ze Zuo (2):
  mm/mem_sampling: Prevent mem_sampling from being enabled if SPE init failed
  mm/mem_sampling: Fix inaccurate sampling for NUMA balancing and DAMON

 drivers/arm/mm_monitor/mm_spe.c |  6 +++---
 include/linux/mem_sampling.h    |  7 ++++---
 mm/mem_sampling.c               | 11 +++++++++--
 3 files changed, 16 insertions(+), 8 deletions(-)

--
2.25.1
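The first fix in this series amounts to gating a feature on successful hardware initialization. As a rough illustration of that idea only (the names below are hypothetical; none of the real mem_sampling/SPE interfaces appear on this page), a minimal standalone C sketch:

/* Generic illustration of "don't enable the feature if init failed".
 * All identifiers here are made up for the example; the real
 * mem_sampling/SPE code is not shown in this thread.
 */
#include <stdbool.h>
#include <stdio.h>

static bool spe_ready;          /* set only when the probe succeeds */

static int spe_init(bool hw_present)
{
        if (!hw_present)
                return -1;      /* probe failed, leave spe_ready false */
        spe_ready = true;
        return 0;
}

static int mem_sampling_enable(void)
{
        if (!spe_ready) {       /* the guard this series adds, conceptually */
                fprintf(stderr, "mem_sampling: SPE not initialized, refusing to enable\n");
                return -1;
        }
        puts("mem_sampling enabled");
        return 0;
}

int main(void)
{
        spe_init(false);        /* simulate a failed SPE probe */
        mem_sampling_enable();  /* correctly refuses to enable */
        return 0;
}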
[PATCH OLK-6.6 v2] mm/mem_sampling: add trace event for spe based damon record
by Ze Zuo 23 Jun '25

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/ICH7AS
CVE: NA

--------------------------------

This patch adds a new DAMON access tracking mechanism using ARM
Statistical Profiling Extension (SPE). By parsing memory access samples
from SPE, DAMON can infer access patterns with low overhead and higher
precision on supported ARM platforms.

Signed-off-by: Ze Zuo <zuoze1(a)huawei.com>
---
changes since v1:
-- fix bugzilla issues.

 include/trace/events/kmem.h | 21 +++++++++++++++++++++
 mm/mem_sampling.c           |  1 +
 2 files changed, 22 insertions(+)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 28b9679c474d..3e78e6bd6e18 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -523,6 +523,27 @@ TRACE_EVENT(mm_mem_sampling_access_record,
 		  __entry->cpuid, __entry->pid)
 );
 #endif /* CONFIG_NUMABALANCING_MEM_SAMPLING */
+
+#ifdef CONFIG_DAMON_MEM_SAMPLING
+TRACE_EVENT(mm_mem_sampling_damon_record,
+
+	TP_PROTO(u64 vaddr, int pid),
+
+	TP_ARGS(vaddr, pid),
+
+	TP_STRUCT__entry(
+		__field(u64, vaddr)
+		__field(int, pid)
+	),
+
+	TP_fast_assign(
+		__entry->vaddr = vaddr;
+		__entry->pid = pid;
+	),
+
+	TP_printk("vaddr=%llx pid=%d", __entry->vaddr, __entry->pid)
+);
+#endif /* CONFIG_DAMON_MEM_SAMPLING */
 #endif /* _TRACE_KMEM_H */

 /* This part must be outside protection */
diff --git a/mm/mem_sampling.c b/mm/mem_sampling.c
index 9ee68e15d1f6..8d79e83e64f0 100644
--- a/mm/mem_sampling.c
+++ b/mm/mem_sampling.c
@@ -316,6 +316,7 @@ static void damon_mem_sampling_record_cb(struct mem_sampling_record *record)
 	mmput(mm);

 	domon_record.vaddr = record->virt_addr;
+	trace_mm_mem_sampling_damon_record(record->virt_addr, (pid_t)record->context_id);

 	/* only the proc under monitor now has damon_fifo */
 	if (damon_fifo) {
--
2.25.1
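For readers who want to see the event fire, a tracepoint like this is normally consumed through tracefs. The sketch below is a minimal userspace reader; the tracefs mount point, the event path, and the need for root privileges are assumptions about a typical kernel configuration, not something this thread specifies.

/* Minimal sketch of consuming the new tracepoint from userspace.
 * Assumes tracefs is mounted at /sys/kernel/tracing and that the event
 * name matches the one added in this patch (an assumption about the
 * running kernel build, not guaranteed by this page).
 */
#include <stdio.h>
#include <string.h>

static int write_str(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(val, f);
        fclose(f);
        return 0;
}

int main(void)
{
        char line[4096];
        FILE *pipe;

        /* Enable the event added by the patch (path is assumed). */
        if (write_str("/sys/kernel/tracing/events/kmem/"
                      "mm_mem_sampling_damon_record/enable", "1")) {
                perror("enable event");
                return 1;
        }

        /* Stream records; each matching line carries "vaddr=%llx pid=%d". */
        pipe = fopen("/sys/kernel/tracing/trace_pipe", "r");
        if (!pipe) {
                perror("trace_pipe");
                return 1;
        }
        while (fgets(line, sizeof(line), pipe))
                if (strstr(line, "mm_mem_sampling_damon_record"))
                        fputs(line, stdout);
        fclose(pipe);
        return 0;
}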
[PATCH OLK-6.6 v3 0/2] mm/mem_sampling: fix some mem sampling issues
by Ze Zuo 23 Jun '25

This patch set fixes issues in mem_sampling:

- Prevent mem_sampling from being enabled if SPE initialization fails.
- Fix inaccurate sampling during NUMA balancing and DAMON.

These changes improve memory sampling accuracy and stability on ARM systems.

Changes since v2:
-- remove the feature patch.
-- fix bugzilla number.

Ze Zuo (2):
  mm/mem_sampling: Prevent mem_sampling from being enabled if SPE init failed
  mm/mem_sampling: Fix inaccurate sampling for NUMA balancing and DAMON

 drivers/arm/mm_monitor/mm_spe.c |  6 +++---
 include/linux/mem_sampling.h    |  7 ++++---
 mm/mem_sampling.c               | 11 +++++++++--
 3 files changed, 16 insertions(+), 8 deletions(-)

--
2.25.1
[PATCH] mm/mem_sampling: add trace event for spe based damon record
by Ze Zuo 23 Jun '25

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/ICH7AS
CVE: NA

--------------------------------

This patch adds a new DAMON access tracking mechanism using ARM
Statistical Profiling Extension (SPE). By parsing memory access samples
from SPE, DAMON can infer access patterns with low overhead and higher
precision on supported ARM platforms.

Signed-off-by: Ze Zuo <zuoze1(a)huawei.com>
---
 include/trace/events/kmem.h | 21 +++++++++++++++++++++
 mm/mem_sampling.c           |  1 +
 2 files changed, 22 insertions(+)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 28b9679c474d..3e78e6bd6e18 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -523,6 +523,27 @@ TRACE_EVENT(mm_mem_sampling_access_record,
 		  __entry->cpuid, __entry->pid)
 );
 #endif /* CONFIG_NUMABALANCING_MEM_SAMPLING */
+
+#ifdef CONFIG_DAMON_MEM_SAMPLING
+TRACE_EVENT(mm_mem_sampling_damon_record,
+
+	TP_PROTO(u64 vaddr, int pid),
+
+	TP_ARGS(vaddr, pid),
+
+	TP_STRUCT__entry(
+		__field(u64, vaddr)
+		__field(int, pid)
+	),
+
+	TP_fast_assign(
+		__entry->vaddr = vaddr;
+		__entry->pid = pid;
+	),
+
+	TP_printk("vaddr=%llx pid=%d", __entry->vaddr, __entry->pid)
+);
+#endif /* CONFIG_DAMON_MEM_SAMPLING */
 #endif /* _TRACE_KMEM_H */

 /* This part must be outside protection */
diff --git a/mm/mem_sampling.c b/mm/mem_sampling.c
index 9ee68e15d1f6..8d79e83e64f0 100644
--- a/mm/mem_sampling.c
+++ b/mm/mem_sampling.c
@@ -316,6 +316,7 @@ static void damon_mem_sampling_record_cb(struct mem_sampling_record *record)
 	mmput(mm);

 	domon_record.vaddr = record->virt_addr;
+	trace_mm_mem_sampling_damon_record(record->virt_addr, (pid_t)record->context_id);

 	/* only the proc under monitor now has damon_fifo */
 	if (damon_fifo) {
--
2.25.1
[PATCH OLK-6.6] drm/radeon: fix uninitialized size issue in radeon_vce_cs_parse()
by Zicheng Qu 23 Jun '25

From: Nikita Zhandarovich <n.zhandarovich(a)fintech.ru>

stable inclusion
from stable-v6.6.85
commit 3ce08215cad55c10a6eeeb33d3583b6cfffe3ab8
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IBYOFP
CVE: CVE-2025-21996

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

commit dd8689b52a24807c2d5ce0a17cb26dc87f75235c upstream.

On the off chance that command stream passed from userspace via ioctl()
call to radeon_vce_cs_parse() is weirdly crafted and first command to
execute is to encode (case 0x03000001), the function in question will
attempt to call radeon_vce_cs_reloc() with size argument that has not
been properly initialized. Specifically, 'size' will point to 'tmp'
variable before the latter had a chance to be assigned any value.

Play it safe and init 'tmp' with 0, thus ensuring that
radeon_vce_cs_reloc() will catch an early error in cases like these.

Found by Linux Verification Center (linuxtesting.org) with static
analysis tool SVACE.

Fixes: 2fc5703abda2 ("drm/radeon: check VCE relocation buffer range v3")
Signed-off-by: Nikita Zhandarovich <n.zhandarovich(a)fintech.ru>
Signed-off-by: Alex Deucher <alexander.deucher(a)amd.com>
(cherry picked from commit 2d52de55f9ee7aaee0e09ac443f77855989c6b68)
Cc: stable(a)vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Zicheng Qu <quzicheng(a)huawei.com>
---
 drivers/gpu/drm/radeon/radeon_vce.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_vce.c b/drivers/gpu/drm/radeon/radeon_vce.c
index d84b780e318c..2eb1636c560e 100644
--- a/drivers/gpu/drm/radeon/radeon_vce.c
+++ b/drivers/gpu/drm/radeon/radeon_vce.c
@@ -561,7 +561,7 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
 {
 	int session_idx = -1;
 	bool destroyed = false, created = false, allocated = false;
-	uint32_t tmp, handle = 0;
+	uint32_t tmp = 0, handle = 0;
 	uint32_t *size = &tmp;
 	int i, r = 0;

--
2.34.1
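The underlying bug class is easy to reproduce outside the driver: a pointer aliases a stack variable that only some code paths assign, and a callee reads through the pointer first. The standalone sketch below uses hypothetical names (it is not the radeon code) to show the pattern and why the zero-initialization makes the bad path fail safely instead of consuming garbage.

/* Standalone illustration of the uninitialized-size pattern fixed in
 * radeon_vce_cs_parse(): 'size' aliases 'tmp', and a callee may read
 * *size before any command has assigned it. Names are made up.
 */
#include <stdint.h>
#include <stdio.h>

/* Stand-in for a range check like radeon_vce_cs_reloc(): rejects
 * obviously bad sizes.
 */
static int check_reloc(const uint32_t *size)
{
        if (*size == 0 || *size > (64u << 20))
                return -1;      /* caught: size was never set sensibly */
        return 0;
}

static int parse(int first_cmd_is_encode)
{
        uint32_t tmp = 0;       /* the fix: *size is never indeterminate */
        uint32_t *size = &tmp;

        if (first_cmd_is_encode)
                return check_reloc(size);      /* would read garbage without the init */

        *size = 4096;           /* the path that normally assigns it */
        return check_reloc(size);
}

int main(void)
{
        printf("encode-first command stream: %d\n", parse(1));
        printf("normal command stream:       %d\n", parse(0));
        return 0;
}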
[PATCH OLK-5.10] cachefiles: Fix the potential ABBA deadlock issue
by Zizhi Wo 23 Jun '25

From: Zizhi Wo <wozizhi(a)huaweicloud.com>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/ICFXTV
CVE: NA

--------------------------------

The current calling process may trigger a potential ABBA deadlock. The
calling stack is as follows:

do_mkdirat
  user_path_create
    filename_create
      mnt_want_write                                             --lock a
      inode_lock_nested(path->dentry->d_inode, I_MUTEX_PARENT)   --lock b

vfs_write
  cachefiles_daemon_write
    cachefiles_daemon_cull
      cachefiles_cull
        cachefiles_check_active
          inode_lock_nested(d_inode(dir), I_MUTEX_PARENT)        --lock b
        cachefiles_remove_object_xattr
          mnt_want_write                                         --lock a
          vfs_removexattr

This is because there is a problem with the lock order. mnt_want_write()
should be called first, and then inode_lock_nested(). Fix the lock order.
And delete the redundant code in cachefiles_check_old_object_xattr(),
because this part of the process cannot be triggered.

Fixes: 2a0beff2d223 ("cachefiles: Fix non-taking of sb_writers around set/removexattr")
Signed-off-by: Zizhi Wo <wozizhi(a)huaweicloud.com>
---
 fs/cachefiles/namei.c |  9 ++++++++-
 fs/cachefiles/xattr.c | 20 ++++----------------
 2 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
index 6eeef666c609..6fc8c504ae1b 100644
--- a/fs/cachefiles/namei.c
+++ b/fs/cachefiles/namei.c
@@ -990,9 +990,15 @@ int cachefiles_cull(struct cachefiles_cache *cache, struct dentry *dir,

 	_enter(",%pd/,%s", dir, filename);

+	ret = mnt_want_write(cache->mnt);
+	if (ret < 0)
+		return ret;
+
 	victim = cachefiles_check_active(cache, dir, filename);
-	if (IS_ERR(victim))
+	if (IS_ERR(victim)) {
+		mnt_drop_write(cache->mnt);
 		return PTR_ERR(victim);
+	}

 	_debug("victim -> %p %s",
 	       victim, d_backing_inode(victim) ? "positive" : "negative");
@@ -1003,6 +1009,7 @@ int cachefiles_cull(struct cachefiles_cache *cache, struct dentry *dir,
 	_debug("victim is cullable");

 	ret = cachefiles_remove_object_xattr(cache, victim);
+	mnt_drop_write(cache->mnt);
 	if (ret < 0)
 		goto error_unlock;

diff --git a/fs/cachefiles/xattr.c b/fs/cachefiles/xattr.c
index bac55fc7359e..646a8002d0cf 100644
--- a/fs/cachefiles/xattr.c
+++ b/fs/cachefiles/xattr.c
@@ -243,7 +243,6 @@ int cachefiles_check_old_object_xattr(struct cachefiles_object *object,
 				      struct cachefiles_xattr *auxdata)
 {
 	struct cachefiles_xattr *auxbuf;
-	struct cachefiles_cache *cache;
 	unsigned int len = sizeof(struct cachefiles_xattr) + 512;
 	struct dentry *dentry = object->dentry;
 	int ret;
@@ -301,17 +300,10 @@ int cachefiles_check_old_object_xattr(struct cachefiles_object *object,
 			BUG();
 		}

-		cache = container_of(object->fscache.cache,
-				     struct cachefiles_cache, cache);
-
 		/* update the current label */
-		ret = mnt_want_write(cache->mnt);
-		if (ret == 0) {
-			ret = vfs_setxattr(dentry, cachefiles_xattr_cache,
-					   &auxdata->type, auxdata->len,
-					   XATTR_REPLACE);
-			mnt_drop_write(cache->mnt);
-		}
+		ret = vfs_setxattr(dentry, cachefiles_xattr_cache,
+				   &auxdata->type, auxdata->len,
+				   XATTR_REPLACE);
 		if (ret < 0) {
 			cachefiles_io_error_obj(object,
 						"Can't update xattr on %lu"
@@ -393,11 +385,7 @@ int cachefiles_remove_object_xattr(struct cachefiles_cache *cache,
 {
 	int ret;

-	ret = mnt_want_write(cache->mnt);
-	if (ret == 0) {
-		ret = vfs_removexattr(dentry, cachefiles_xattr_cache);
-		mnt_drop_write(cache->mnt);
-	}
+	ret = vfs_removexattr(dentry, cachefiles_xattr_cache);
 	if (ret < 0) {
 		if (ret == -ENOENT || ret == -ENODATA)
 			ret = 0;
--
2.39.2
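The inversion described above is the classic ABBA pattern: one path takes lock A then B, the other takes B then A, and each can end up waiting on the lock the other holds. A minimal userspace illustration follows; it uses pthread mutexes purely as stand-ins for mnt_want_write() and the inode lock, and all names are hypothetical.

/* Minimal ABBA illustration: with REPRODUCE_ABBA set to 1 the two
 * threads take the locks in opposite orders and can deadlock; with 0
 * they agree on one order, which is the essence of the cachefiles fix
 * (take mnt_want_write() before inode_lock_nested()).
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define REPRODUCE_ABBA 0

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* ~ mnt_want_write */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* ~ inode lock */

static void *mkdir_like_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock_a);
        usleep(1000);                   /* widen the race window */
        pthread_mutex_lock(&lock_b);
        puts("mkdir-like path: got A then B");
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
}

static void *cull_like_path(void *arg)
{
        (void)arg;
#if REPRODUCE_ABBA
        pthread_mutex_lock(&lock_b);    /* old order: B then A, may deadlock */
        usleep(1000);
        pthread_mutex_lock(&lock_a);
#else
        pthread_mutex_lock(&lock_a);    /* fixed order: A then B */
        pthread_mutex_lock(&lock_b);
#endif
        puts("cull-like path: locks acquired");
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, mkdir_like_path, NULL);
        pthread_create(&t2, NULL, cull_like_path, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}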
[PATCH OLK-5.10] bpf: Fix prog_array_map_poke_run map poke update
by Tengda Wu 23 Jun '25

From: Jiri Olsa <jolsa(a)kernel.org>

mainline inclusion
from mainline-v6.7-rc5
commit 4b7de801606e504e69689df71475d27e35336fb3
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/ICH20Y

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

Lee pointed out issue found by syscaller [0] hitting BUG in prog array
map poke update in prog_array_map_poke_run function due to error value
returned from bpf_arch_text_poke function.

There's race window where bpf_arch_text_poke can fail due to missing
bpf program kallsym symbols, which is accounted for with check for
-EINVAL in that BUG_ON call.

The problem is that in such case we won't update the tail call jump
and cause imbalance for the next tail call update check which will
fail with -EBUSY in bpf_arch_text_poke.

I'm hitting following race during the program load:

  CPU 0                            CPU 1

  bpf_prog_load
    bpf_check
      do_misc_fixups
        prog_array_map_poke_track
                                   map_update_elem
                                     bpf_fd_array_map_update_elem
                                       prog_array_map_poke_run
                                         bpf_arch_text_poke returns -EINVAL
    bpf_prog_kallsyms_add

After bpf_arch_text_poke (CPU 1) fails to update the tail call jump,
the next poke update fails on expected jump instruction check in
bpf_arch_text_poke with -EBUSY and triggers the BUG_ON in
prog_array_map_poke_run.

Similar race exists on the program unload.

Fixing this by moving the update to bpf_arch_poke_desc_update function
which makes sure we call __bpf_arch_text_poke that skips the bpf
address check.

Each architecture has slightly different approach wrt looking up bpf
address in bpf_arch_text_poke, so instead of splitting the function or
adding new 'checkip' argument in previous version, it seems best to
move the whole map_poke_run update as arch specific code.

[0] https://syzkaller.appspot.com/bug?extid=97a4fe20470e9bc30810

Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Reported-by: syzbot+97a4fe20470e9bc30810(a)syzkaller.appspotmail.com
Signed-off-by: Jiri Olsa <jolsa(a)kernel.org>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Acked-by: Yonghong Song <yonghong.song(a)linux.dev>
Cc: Lee Jones <lee(a)kernel.org>
Cc: Maciej Fijalkowski <maciej.fijalkowski(a)intel.com>
Link: https://lore.kernel.org/bpf/20231206083041.1306660-2-jolsa@kernel.org

Conflicts:
	arch/x86/net/bpf_jit_comp.c
	include/linux/bpf.h
[There are some context conflicts, and the cleanup patch 1022a5498f6f
("bpf, x86_64: Use bpf_jit_binary_pack_alloc") does not need to be merged]

Signed-off-by: Tengda Wu <wutengda2(a)huawei.com>
---
 arch/x86/net/bpf_jit_comp.c | 46 +++++++++++++++++++++++++++++
 include/linux/bpf.h         |  3 ++
 kernel/bpf/arraymap.c       | 58 +++++++------------------------------
 3 files changed, 59 insertions(+), 48 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 987b02b857bc..dc6f077e2a2f 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2169,3 +2169,49 @@ u64 bpf_arch_uaddress_limit(void)
 {
 	return 0;
 }
+
+void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
+			       struct bpf_prog *new, struct bpf_prog *old)
+{
+	u8 *old_addr, *new_addr, *old_bypass_addr;
+	int ret;
+
+	old_bypass_addr = old ? NULL : poke->bypass_addr;
+	old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
+	new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
+
+	/*
+	 * On program loading or teardown, the program's kallsym entry
+	 * might not be in place, so we use __bpf_arch_text_poke to skip
+	 * the kallsyms check.
+	 */
+	if (new) {
+		ret = __bpf_arch_text_poke(poke->tailcall_target,
+					   BPF_MOD_JUMP,
+					   old_addr, new_addr, true);
+		BUG_ON(ret < 0);
+		if (!old) {
+			ret = __bpf_arch_text_poke(poke->tailcall_bypass,
+						   BPF_MOD_JUMP,
+						   poke->bypass_addr,
+						   NULL, true);
+			BUG_ON(ret < 0);
+		}
+	} else {
+		ret = __bpf_arch_text_poke(poke->tailcall_bypass,
+					   BPF_MOD_JUMP,
+					   old_bypass_addr,
+					   poke->bypass_addr, true);
+		BUG_ON(ret < 0);
+		/* let other CPUs finish the execution of program
+		 * so that it will not possible to expose them
+		 * to invalid nop, stack unwind, nop state
+		 */
+		if (!ret)
+			synchronize_rcu();
+		ret = __bpf_arch_text_poke(poke->tailcall_target,
+					   BPF_MOD_JUMP,
+					   old_addr, NULL, true);
+		BUG_ON(ret < 0);
+	}
+}
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f2e1633c5c82..f0db30991f68 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2144,6 +2144,9 @@ enum bpf_text_poke_type {
 int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t, void *addr1,
 		       void *addr2);

+void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
+			       struct bpf_prog *new, struct bpf_prog *old);
+
 struct btf_id_set;
 bool btf_id_set_contains(const struct btf_id_set *set, u32 id);

diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index b5b8b57dc212..dfabf842f830 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -916,11 +916,16 @@ static void prog_array_map_poke_untrack(struct bpf_map *map,
 	mutex_unlock(&aux->poke_mutex);
 }

+void __weak bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
+				      struct bpf_prog *new, struct bpf_prog *old)
+{
+	WARN_ON_ONCE(1);
+}
+
 static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
 				    struct bpf_prog *old,
 				    struct bpf_prog *new)
 {
-	u8 *old_addr, *new_addr, *old_bypass_addr;
 	struct prog_poke_elem *elem;
 	struct bpf_array_aux *aux;

@@ -929,7 +934,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,

 	list_for_each_entry(elem, &aux->poke_progs, list) {
 		struct bpf_jit_poke_descriptor *poke;
-		int i, ret;
+		int i;

 		for (i = 0; i < elem->aux->size_poke_tab; i++) {
 			poke = &elem->aux->poke_tab[i];
@@ -948,21 +953,10 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
 			 *    activated, so tail call updates can arrive from here
 			 *    while JIT is still finishing its final fixup for
 			 *    non-activated poke entries.
-			 * 3) On program teardown, the program's kallsym entry gets
-			 *    removed out of RCU callback, but we can only untrack
-			 *    from sleepable context, therefore bpf_arch_text_poke()
-			 *    might not see that this is in BPF text section and
-			 *    bails out with -EINVAL. As these are unreachable since
-			 *    RCU grace period already passed, we simply skip them.
-			 * 4) Also programs reaching refcount of zero while patching
+			 * 3) Also programs reaching refcount of zero while patching
 			 *    is in progress is okay since we're protected under
 			 *    poke_mutex and untrack the programs before the JIT
-			 *    buffer is freed. When we're still in the middle of
-			 *    patching and suddenly kallsyms entry of the program
-			 *    gets evicted, we just skip the rest which is fine due
-			 *    to point 3).
-			 * 5) Any other error happening below from bpf_arch_text_poke()
-			 *    is a unexpected bug.
+			 *    buffer is freed.
 			 */
 			if (!READ_ONCE(poke->tailcall_target_stable))
 				continue;
@@ -972,39 +966,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
 			    poke->tail_call.key != key)
 				continue;

-			old_bypass_addr = old ? NULL : poke->bypass_addr;
-			old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
-			new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
-
-			if (new) {
-				ret = bpf_arch_text_poke(poke->tailcall_target,
-							 BPF_MOD_JUMP,
-							 old_addr, new_addr);
-				BUG_ON(ret < 0 && ret != -EINVAL);
-				if (!old) {
-					ret = bpf_arch_text_poke(poke->tailcall_bypass,
-								 BPF_MOD_JUMP,
-								 poke->bypass_addr,
-								 NULL);
-					BUG_ON(ret < 0 && ret != -EINVAL);
-				}
-			} else {
-				ret = bpf_arch_text_poke(poke->tailcall_bypass,
-							 BPF_MOD_JUMP,
-							 old_bypass_addr,
-							 poke->bypass_addr);
-				BUG_ON(ret < 0 && ret != -EINVAL);
-				/* let other CPUs finish the execution of program
-				 * so that it will not possible to expose them
-				 * to invalid nop, stack unwind, nop state
-				 */
-				if (!ret)
-					synchronize_rcu();
-				ret = bpf_arch_text_poke(poke->tailcall_target,
-							 BPF_MOD_JUMP,
-							 old_addr, NULL);
-				BUG_ON(ret < 0 && ret != -EINVAL);
-			}
+			bpf_arch_poke_desc_update(poke, new, old);
 		}
 	}
 }
--
2.34.1
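The refactor leans on a common kernel linkage trick: a __weak default definition in generic code that only warns, overridden at link time by a strong arch-specific definition when the architecture provides one. The sketch below isolates just that weak-symbol mechanism with made-up function names; it is not the bpf code.

/* Isolated illustration of the __weak override pattern used by the
 * patch. The weak fallback here plays the role of the generic
 * bpf_arch_poke_desc_update() in kernel/bpf/arraymap.c: it only warns.
 * If another object file linked into the program supplies a non-weak
 * poke_desc_update(), the linker picks that one instead, which is how
 * the x86 version in arch/x86/net/bpf_jit_comp.c takes over.
 */
#include <stdio.h>

__attribute__((weak)) void poke_desc_update(int key)
{
        fprintf(stderr, "fallback: no arch-specific implementation (key=%d)\n", key);
}

int main(void)
{
        /* Built as a single file, this resolves to the weak fallback. */
        poke_desc_update(42);
        return 0;
}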
[PATCH OLK-6.6] mm:userswap: change VM_USWAP_BIT to bit 61
by Wupeng Ma 23 Jun '25

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8KESX

--------------------------------

During the introduction of userswap to 6.6, the VM_USWAP bit was changed
to 62 by mistake. This differs from the existing bit in 5.10. Change the
bit to 61 to fix the problem.

Fixes: ec6250211515 ("mm/userswap: add VM_USWAP and SWP_USERSWAP_ENTRY")
Signed-off-by: Wupeng Ma <mawupeng1(a)huawei.com>
---
 include/linux/mm.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d24d6115a9bf0..77a7d7c4c88c5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -323,7 +323,7 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_MERGEABLE	0x80000000	/* KSM may merge identical pages */

 #ifdef CONFIG_USERSWAP
-# define VM_USWAP_BIT	62
+# define VM_USWAP_BIT	61
 #define VM_USWAP	BIT(VM_USWAP_BIT)
 #else /* !CONFIG_USERSWAP */
 #define VM_USWAP	VM_NONE
--
2.43.0
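Since VM_USWAP is derived as BIT(VM_USWAP_BIT), the flag's actual mask follows the bit index, which is why the backport has to keep bit 61 to stay compatible with the 5.10 layout. A tiny standalone sketch of that relationship follows; BIT() is redefined locally for illustration (in the kernel it comes from <linux/bits.h>).

/* Shows how the flag mask follows the bit index. */
#include <stdint.h>
#include <stdio.h>

#define BIT(nr)         (1ULL << (nr))

#define VM_USWAP_BIT    61
#define VM_USWAP        BIT(VM_USWAP_BIT)

int main(void)
{
        uint64_t vm_flags = 0;

        vm_flags |= VM_USWAP;
        printf("VM_USWAP mask with bit 61: 0x%016llx\n",
               (unsigned long long)VM_USWAP);
        printf("mask if the bit were 62:   0x%016llx\n",
               (unsigned long long)BIT(62));
        printf("flag set? %s\n", (vm_flags & VM_USWAP) ? "yes" : "no");
        return 0;
}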
[PATCH OLK-6.6 v2 0/3] mm/mem_sampling: add ARM SPE support and fix sampling issues
by Ze Zuo 23 Jun '25

This patch set adds support for ARM SPE to track memory accesses and
fixes issues in mem_sampling:

- Add ARM SPE support for memory access tracking.
- Prevent mem_sampling from being enabled if SPE initialization fails.
- Fix inaccurate sampling during NUMA balancing and DAMON.

These changes improve memory sampling accuracy and stability on ARM systems.

Changes since v1:
-- add Fixes tags to the commit messages.

Ze Zuo (3):
  mm/mem_sampling: add trace event for spe based damon record
  mm/mem_sampling: Prevent mem_sampling from being enabled if SPE init failed
  mm/mem_sampling: Fix inaccurate sampling for NUMA balancing and DAMON

 drivers/arm/mm_monitor/mm_spe.c |  6 +++---
 include/linux/mem_sampling.h    |  7 ++++---
 include/trace/events/kmem.h     | 21 +++++++++++++++++++++
 mm/mem_sampling.c               | 12 ++++++++++--
 4 files changed, 38 insertions(+), 8 deletions(-)

--
2.25.1
[PATCH OLK-6.6 0/3] mm/mem_sampling: add ARM SPE support and fix sampling issues
by Ze Zuo 22 Jun '25

This patch set adds support for ARM SPE to track memory accesses and
fixes issues in mem_sampling:

- Add ARM SPE support for memory access tracking.
- Prevent mem_sampling from being enabled if SPE initialization fails.
- Fix inaccurate sampling during NUMA balancing and DAMON.

These changes improve memory sampling accuracy and stability on ARM systems.

Ze Zuo (3):
  mm/mem_sampling: add trace event for spe based damon record
  mm/mem_sampling: Prevent mem_sampling from being enabled if SPE init failed
  mm/mem_sampling: Fix inaccurate sampling for NUMA balancing and DAMON

 drivers/arm/mm_monitor/mm_spe.c |  6 +++---
 include/linux/mem_sampling.h    |  7 ++++---
 include/trace/events/kmem.h     | 21 +++++++++++++++++++++
 mm/mem_sampling.c               | 12 ++++++++++--
 4 files changed, 38 insertions(+), 8 deletions(-)

--
2.25.1