mailweb.openeuler.org

Kernel

kernel@openeuler.org

  • 17 participants
  • 23165 discussions
[PATCH OLK-6.6] ext4: fix memory leak in ext4_ext_shift_extents()
by Yongjian Sun 26 Mar '26

From: Zilin Guan <zilin(a)seu.edu.cn>

mainline inclusion
from mainline-v7.0-rc1
commit ca81109d4a8f192dc1cbad4a1ee25246363c2833
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13898
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

In ext4_ext_shift_extents(), if the extent is NULL in the while loop,
the function returns immediately without releasing the path obtained
via ext4_find_extent(), leading to a memory leak.

Fix this by jumping to the out label to ensure the path is properly
released.

Fixes: a18ed359bdddc ("ext4: always check ext4_ext_find_extent result")
Signed-off-by: Zilin Guan <zilin(a)seu.edu.cn>
Reviewed-by: Zhang Yi <yi.zhang(a)huawei.com>
Reviewed-by: Baokun Li <libaokun1(a)huawei.com>
Link: https://patch.msgid.link/20251225084800.905701-1-zilin@seu.edu.cn
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Cc: stable(a)kernel.org
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 fs/ext4/extents.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 6126f98bac63..19eae32fc128 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -5281,7 +5281,8 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
 		if (!extent) {
 			EXT4_ERROR_INODE(inode, "unexpected hole at %lu",
 					 (unsigned long) *iterator);
-			return -EFSCORRUPTED;
+			ret = -EFSCORRUPTED;
+			goto out;
 		}
 		if (SHIFT == SHIFT_LEFT && *iterator >
 		    le32_to_cpu(extent->ee_block)) {
--
2.39.2
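The leak class fixed above — an early `return` on an error branch that skips the function's shared cleanup label — can be modeled in plain userspace C. This is a hypothetical sketch, not the ext4 code; the names (`acquire_path`, `release_path`, `shift_extents_*`) and the `-117` error value (glibc's `EUCLEAN`, which `EFSCORRUPTED` aliases) are illustrative:

```c
#include <assert.h>
#include <stdlib.h>

/* Counts live allocations so a leak is observable. */
static int live_allocs;

static void *acquire_path(void) { live_allocs++; return malloc(16); }
static void release_path(void *p) { live_allocs--; free(p); }

/* Buggy shape: the early return skips the cleanup, leaking the path. */
static int shift_extents_buggy(int hit_hole)
{
	void *path = acquire_path();

	if (hit_hole)
		return -117;		/* models -EFSCORRUPTED; path leaked */
	release_path(path);
	return 0;
}

/* Fixed shape, as in the patch: record the error and goto the out label,
 * so every exit passes through the single release point. */
static int shift_extents_fixed(int hit_hole)
{
	int ret = 0;
	void *path = acquire_path();

	if (hit_hole) {
		ret = -117;
		goto out;
	}
out:
	release_path(path);
	return ret;
}
```

The single `out:` label is the idiom the patch restores: one release point that both the success and error paths fall through.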
[PATCH OLK-6.6] blktrace: fix __this_cpu_read/write in preemptible context
by Yongjian Sun 26 Mar '26

From: Chaitanya Kulkarni <kch(a)nvidia.com>

mainline inclusion
from mainline-v7.0-rc3
commit da46b5dfef48658d03347cda21532bcdbb521e67
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/14005
CVE: CVE-2026-23374
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

tracing_record_cmdline() internally uses __this_cpu_read() and
__this_cpu_write() on the per-CPU variable trace_cmdline_save, and
trace_save_cmdline() explicitly asserts preemption is disabled via
lockdep_assert_preemption_disabled(). These operations are only safe
when preemption is off, as they were designed to be called from the
scheduler context (probe_wakeup_sched_switch() / probe_wakeup()).

__blk_add_trace() was calling tracing_record_cmdline(current) early in
the blk_tracer path, before ring buffer reservation, from process
context where preemption is fully enabled. This triggers the following
using blktests/blktrace/002:

blktrace/002 (blktrace ftrace corruption with sysfs trace)   [failed]
    runtime  0.367s  ...  0.437s
    something found in dmesg:
    [   81.211018] run blktests blktrace/002 at 2026-02-25 22:24:33
    [   81.239580] null_blk: disk nullb1 created
    [   81.357294] BUG: using __this_cpu_read() in preemptible [00000000] code: dd/2516
    [   81.362842] caller is tracing_record_cmdline+0x10/0x40
    [   81.362872] CPU: 16 UID: 0 PID: 2516 Comm: dd Tainted: G N 7.0.0-rc1lblk+ #84 PREEMPT(full)
    [   81.362877] Tainted: [N]=TEST
    [   81.362878] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.17.0-0-gb52ca86e094d-prebuilt.qemu.org 04/01/2014
    [   81.362881] Call Trace:
    [   81.362884]  <TASK>
    [   81.362886]  dump_stack_lvl+0x8d/0xb0
    ...
    (See '/mnt/sda/blktests/results/nodev/blktrace/002.dmesg' for the entire message)

[   81.211018] run blktests blktrace/002 at 2026-02-25 22:24:33
[   81.239580] null_blk: disk nullb1 created
[   81.357294] BUG: using __this_cpu_read() in preemptible [00000000] code: dd/2516
[   81.362842] caller is tracing_record_cmdline+0x10/0x40
[   81.362872] CPU: 16 UID: 0 PID: 2516 Comm: dd Tainted: G N 7.0.0-rc1lblk+ #84 PREEMPT(full)
[   81.362877] Tainted: [N]=TEST
[   81.362878] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.17.0-0-gb52ca86e094d-prebuilt.qemu.org 04/01/2014
[   81.362881] Call Trace:
[   81.362884]  <TASK>
[   81.362886]  dump_stack_lvl+0x8d/0xb0
[   81.362895]  check_preemption_disabled+0xce/0xe0
[   81.362902]  tracing_record_cmdline+0x10/0x40
[   81.362923]  __blk_add_trace+0x307/0x5d0
[   81.362934]  ? lock_acquire+0xe0/0x300
[   81.362940]  ? iov_iter_extract_pages+0x101/0xa30
[   81.362959]  blk_add_trace_bio+0x106/0x1e0
[   81.362968]  submit_bio_noacct_nocheck+0x24b/0x3a0
[   81.362979]  ? lockdep_init_map_type+0x58/0x260
[   81.362988]  submit_bio_wait+0x56/0x90
[   81.363009]  __blkdev_direct_IO_simple+0x16c/0x250
[   81.363026]  ? __pfx_submit_bio_wait_endio+0x10/0x10
[   81.363038]  ? rcu_read_lock_any_held+0x73/0xa0
[   81.363051]  blkdev_read_iter+0xc1/0x140
[   81.363059]  vfs_read+0x20b/0x330
[   81.363083]  ksys_read+0x67/0xe0
[   81.363090]  do_syscall_64+0xbf/0xf00
[   81.363102]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[   81.363106] RIP: 0033:0x7f281906029d
[   81.363111] Code: 31 c0 e9 c6 fe ff ff 50 48 8d 3d 66 63 0a 00 e8 59 ff 01 00 66 0f 1f 84 00 00 00 00 00 80 3d 41 33 0e 00 00 74 17 31 c0 0f 0c
[   81.363113] RSP: 002b:00007ffca127dd48 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[   81.363120] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f281906029d
[   81.363122] RDX: 0000000000001000 RSI: 0000559f8bfae000 RDI: 0000000000000000
[   81.363123] RBP: 0000000000001000 R08: 0000002863a10a81 R09: 00007f281915f000
[   81.363124] R10: 00007f2818f77b60 R11: 0000000000000246 R12: 0000559f8bfae000
[   81.363126] R13: 0000000000000000 R14: 0000000000000000 R15: 000000000000000a
[   81.363142]  </TASK>

The same BUG fires from blk_add_trace_plug(), blk_add_trace_unplug(),
and blk_add_trace_rq() paths as well.

The purpose of tracing_record_cmdline() is to cache the task->comm for
a given PID so that the trace can later resolve it. It is only
meaningful when a trace event is actually being recorded. Ring buffer
reservation via ring_buffer_lock_reserve() disables preemption, and
preemption remains disabled until the event is committed :-

__blk_add_trace()
  __trace_buffer_lock_reserve()
    __trace_buffer_lock_reserve()
      ring_buffer_lock_reserve()
        preempt_disable_notrace(); <---

With this fix blktests for blktrace pass:

blktests (master) # ./check blktrace
blktrace/001 (blktrace zone management command tracing)      [passed]
    runtime  3.650s  ...  3.647s
blktrace/002 (blktrace ftrace corruption with sysfs trace)   [passed]
    runtime  0.411s  ...  0.384s

Fixes: 7ffbd48d5cab ("tracing: Cache comms only after an event occurred")
Reported-by: Shinichiro Kawasaki <shinichiro.kawasaki(a)wdc.com>
Suggested-by: Steven Rostedt <rostedt(a)goodmis.org>
Signed-off-by: Chaitanya Kulkarni <kch(a)nvidia.com>
Reviewed-by: Steven Rostedt (Google) <rostedt(a)goodmis.org>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>

Conflicts:
	kernel/trace/blktrace.c
[mainline e472eca538358 & 48886b9d668 not applied]
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 kernel/trace/blktrace.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index d5d94510afd3..ce797d8dd451 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -251,8 +251,6 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
 	cpu = raw_smp_processor_id();

 	if (blk_tracer) {
-		tracing_record_cmdline(current);
-
 		buffer = blk_tr->array_buffer.buffer;
 		trace_ctx = tracing_gen_ctx_flags(0);
 		event = trace_buffer_lock_reserve(buffer, TRACE_BLK,
@@ -260,6 +258,8 @@ static void __blk_add_trace(struct blk_trace *bt, sector_t sector, int bytes,
 				  trace_ctx);
 		if (!event)
 			return;
+
+		tracing_record_cmdline(current);
 		t = ring_buffer_event_data(event);
 		goto record_it;
 	}
--
2.39.2
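The ordering argument in the patch above — record the cmdline only after the ring-buffer reservation has disabled preemption — can be sketched as a hypothetical userspace model. All names here (`reserve_event`, `record_cmdline`, `blk_add_trace_fixed`, the `preempt_count` flag) are illustrative stand-ins, not the kernel API:

```c
#include <assert.h>

/* Models preempt_count: record_cmdline() is only legal when it is > 0,
 * mirroring lockdep_assert_preemption_disabled(). */
static int preempt_count;
static int cmdline_recorded;

static void record_cmdline(void)
{
	assert(preempt_count > 0);	/* would be the lockdep BUG otherwise */
	cmdline_recorded = 1;
}

/* Models ring_buffer_lock_reserve(): on success it disables preemption. */
static int reserve_event(int fail)
{
	if (fail)
		return 0;
	preempt_count++;
	return 1;
}

/* Models the commit, which re-enables preemption. */
static void commit_event(void)
{
	preempt_count--;
}

/* Fixed ordering from the patch: reserve first, record second, so the
 * per-CPU access always runs with preemption off, and is skipped
 * entirely when no event is actually recorded. */
static int blk_add_trace_fixed(int reserve_fails)
{
	if (!reserve_event(reserve_fails))
		return -1;
	record_cmdline();
	commit_event();
	return 0;
}
```

Note the secondary benefit the commit message mentions: when reservation fails, the cmdline cache is never touched at all, matching the original intent of caching comms only after an event occurred.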
[PATCH OLK-5.10] ext4: fix dirtyclusters double decrement on fs shutdown
by Yongjian Sun 26 Mar '26

From: Brian Foster <bfoster(a)redhat.com>

mainline inclusion
from mainline-v7.0-rc1
commit 94a8cea54cd935c54fa2fba70354757c0fc245e3
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13898
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

fstests test generic/388 occasionally reproduces a warning in
ext4_put_super() associated with the dirty clusters count:

WARNING: CPU: 7 PID: 76064 at fs/ext4/super.c:1324 ext4_put_super+0x48c/0x590 [ext4]

Tracing the failure shows that the warning fires due to an
s_dirtyclusters_counter value of -1. IOW, this appears to be a spurious
decrement as opposed to some sort of leak. Further tracing of the dirty
cluster count deltas and an LLM scan of the resulting output identified
the cause as a double decrement in the error path between
ext4_mb_mark_diskspace_used() and the caller ext4_mb_new_blocks().

First, note that generic/388 is a shutdown vs. fsstress test and so
produces a random set of operations and shutdown injections. In the
problematic case, the shutdown triggers an error return from the
ext4_handle_dirty_metadata() call(s) made from ext4_mb_mark_context().
The changed value is non-zero at this point, so
ext4_mb_mark_diskspace_used() does not exit after the error bubbles up
from ext4_mb_mark_context(). Instead, the former decrements both
cluster counters and returns the error up to ext4_mb_new_blocks(). The
latter falls into the !ar->len out path which decrements the dirty
clusters counter a second time, creating the inconsistency.

To avoid this problem and simplify ownership of the cluster reservation
in this codepath, lift the counter reduction to a single place in the
caller. This makes it more clear that ext4_mb_new_blocks() is
responsible for acquiring cluster reservation (via
ext4_claim_free_clusters()) in the !delalloc case as well as releasing
it, regardless of whether it ends up consumed or returned due to
failure.

Fixes: 0087d9fb3f29 ("ext4: Fix s_dirty_blocks_counter if block allocation failed with nodelalloc")
Signed-off-by: Brian Foster <bfoster(a)redhat.com>
Reviewed-by: Baokun Li <libaokun1(a)huawei.com>
Link: https://patch.msgid.link/20260113171905.118284-1-bfoster@redhat.com
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Cc: stable(a)kernel.org

Conflicts:
	fs/ext4/mballoc-test.c
	fs/ext4/mballoc.c
[mainline 2f94711b098b & 7c9fa399a369 not applied]
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 fs/ext4/mballoc.c | 21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 9d4e8e3c74e2..683f1e220645 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -3300,8 +3300,7 @@ void ext4_exit_mballoc(void)
  * Returns 0 if success or error code
  */
 static noinline_for_stack int
-ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
-				handle_t *handle, unsigned int reserv_clstrs)
+ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac, handle_t *handle)
 {
 	struct buffer_head *bitmap_bh = NULL;
 	struct ext4_group_desc *gdp;
@@ -3388,13 +3387,6 @@ ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
 	ext4_unlock_group(sb, ac->ac_b_ex.fe_group);
 	percpu_counter_sub(&sbi->s_freeclusters_counter, ac->ac_b_ex.fe_len);
-	/*
-	 * Now reduce the dirty block count also. Should not go negative
-	 */
-	if (!(ac->ac_flags & EXT4_MB_DELALLOC_RESERVED))
-		/* release all the reserved blocks if non delalloc */
-		percpu_counter_sub(&sbi->s_dirtyclusters_counter,
-				   reserv_clstrs);

 	if (sbi->s_log_groups_per_flex) {
 		ext4_group_t flex_group = ext4_flex_group(sbi,
@@ -5272,7 +5264,7 @@ ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
 		ext4_mb_pa_free(ac);
 	}
 	if (likely(ac->ac_status == AC_STATUS_FOUND)) {
-		*errp = ext4_mb_mark_diskspace_used(ac, handle, reserv_clstrs);
+		*errp = ext4_mb_mark_diskspace_used(ac, handle);
 		if (*errp) {
 			ext4_discard_allocated_blocks(ac);
 			goto errout;
@@ -5304,12 +5296,9 @@ ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
 	kmem_cache_free(ext4_ac_cachep, ac);
 	if (inquota && ar->len < inquota)
 		dquot_free_block(ar->inode, EXT4_C2B(sbi, inquota - ar->len));
-	if (!ar->len) {
-		if ((ar->flags & EXT4_MB_DELALLOC_RESERVED) == 0)
-			/* release all the reserved blocks if non delalloc */
-			percpu_counter_sub(&sbi->s_dirtyclusters_counter,
-					   reserv_clstrs);
-	}
+	/* release any reserved blocks */
+	if (reserv_clstrs)
+		percpu_counter_sub(&sbi->s_dirtyclusters_counter, reserv_clstrs);

 	trace_ext4_allocate_blocks(ar, (unsigned long long)block);
--
2.39.2
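The "double decrement" the commit message describes reduces to a simple accounting invariant: a reservation must be released exactly once, whichever path the allocation takes. A hypothetical userspace model (the `alloc_buggy`/`alloc_fixed` names and the single-counter bookkeeping are illustrative, not the mballoc code):

```c
#include <assert.h>

/* Buggy ownership: on failure, BOTH the callee's error path and the
 * caller's !ar->len out path release the same reservation, driving the
 * dirty counter negative — the -1 seen in the ext4_put_super() warning. */
static long alloc_buggy(int fail, long resv)
{
	long dirty = resv;		/* reservation taken at entry */

	if (fail) {
		dirty -= resv;		/* callee error path releases */
		dirty -= resv;		/* caller !ar->len path releases again */
	} else {
		dirty -= resv;		/* success path releases once */
	}
	return dirty;			/* -resv on failure */
}

/* Fixed ownership, as in the patch: the caller releases the reservation
 * in exactly one place, regardless of success or failure. */
static long alloc_fixed(int fail, long resv)
{
	long dirty = resv;

	(void)fail;			/* outcome no longer affects release */
	if (resv)
		dirty -= resv;		/* single release point */
	return dirty;			/* always 0 */
}
```

Lifting the release into one spot in the caller is the whole fix: the callee no longer needs to know about the reservation at all, which is why `reserv_clstrs` disappears from `ext4_mb_mark_diskspace_used()`'s signature.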
[PATCH OLK-5.10] ext4: don't set EXT4_GET_BLOCKS_CONVERT when splitting before submitting I/O
by Yongjian Sun 26 Mar '26

From: Zhang Yi <yi.zhang(a)huawei.com>

mainline inclusion
from mainline-v7.0-rc1
commit feaf2a80e78f89ee8a3464126077ba8683b62791
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13898
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

When allocating blocks during within-EOF DIO and writeback with
dioread_nolock enabled, EXT4_GET_BLOCKS_PRE_IO was set to split an
existing large unwritten extent. However, EXT4_GET_BLOCKS_CONVERT was
set when calling ext4_split_convert_extents(), which may potentially
result in stale data issues.

Assume we have an unwritten extent, and then DIO writes the second
half.

  [UUUUUUUUUUUUUUUU]      on-disk extent       U: unwritten extent
  [UUUUUUUUUUUUUUUU]      extent status tree
           |<-  ->|
            ----> dio write this range

First, ext4_iomap_alloc() call ext4_map_blocks() with
EXT4_GET_BLOCKS_PRE_IO, EXT4_GET_BLOCKS_UNWRIT_EXT and
EXT4_GET_BLOCKS_CREATE flags set. ext4_map_blocks() find this extent
and call ext4_split_convert_extents() with EXT4_GET_BLOCKS_CONVERT and
the above flags set. Then, ext4_split_convert_extents() calls
ext4_split_extent() with EXT4_EXT_MAY_ZEROOUT, EXT4_EXT_MARK_UNWRIT2
and EXT4_EXT_DATA_VALID2 flags set, and it calls
ext4_split_extent_at() to split the second half with
EXT4_EXT_DATA_VALID2, EXT4_EXT_MARK_UNWRIT1, EXT4_EXT_MAY_ZEROOUT and
EXT4_EXT_MARK_UNWRIT2 flags set.

However, ext4_split_extent_at() failed to insert extent since a
temporary lack -ENOSPC. It zeroes out the first half but convert the
entire on-disk extent to written since the EXT4_EXT_DATA_VALID2 flag
set, but left the second half as unwritten in the extent status tree.

  [0000000000SSSSSS]      data                 S: stale data, 0: zeroed
  [WWWWWWWWWWWWWWWW]      on-disk extent       W: written extent
  [WWWWWWWWWWUUUUUU]      extent status tree

Finally, if the DIO failed to write data to the disk, the stale data in
the second half will be exposed once the cached extent entry is gone.

Fix this issue by not passing EXT4_GET_BLOCKS_CONVERT when splitting an
unwritten extent before submitting I/O, and make
ext4_split_convert_extents() zero out the entire extent range for this
case, and also mark the extent in the extent status tree for
consistency.

Fixes: b8a8684502a0 ("ext4: Introduce FALLOC_FL_ZERO_RANGE flag for fallocate")
Signed-off-by: Zhang Yi <yi.zhang(a)huawei.com>
Reviewed-by: Ojaswin Mujoo <ojaswin(a)linux.ibm.com>
Reviewed-by: Baokun Li <libaokun1(a)huawei.com>
Cc: stable(a)kernel.org
Message-ID: <20251129103247.686136-4-yi.zhang(a)huaweicloud.com>
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>

Conflicts:
	fs/ext4/extents.c
[mainline commit dac092195b6a not applied]
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 fs/ext4/extents.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 786701524736..4c77284be84d 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -3714,11 +3714,15 @@ static int ext4_split_convert_extents(handle_t *handle,
 	/* Convert to unwritten */
 	if (flags & EXT4_GET_BLOCKS_CONVERT_UNWRITTEN) {
 		split_flag |= EXT4_EXT_DATA_VALID1;
-	/* Convert to initialized */
-	} else if (flags & EXT4_GET_BLOCKS_CONVERT) {
+	/* Split the existing unwritten extent */
+	} else if (flags & (EXT4_GET_BLOCKS_UNWRIT_EXT |
+			    EXT4_GET_BLOCKS_CONVERT)) {
 		split_flag |= ee_block + ee_len <= eof_block ?
 			      EXT4_EXT_MAY_ZEROOUT : 0;
-		split_flag |= (EXT4_EXT_MARK_UNWRIT2 | EXT4_EXT_DATA_VALID2);
+		split_flag |= EXT4_EXT_MARK_UNWRIT2;
+		/* Convert to initialized */
+		if (flags & EXT4_GET_BLOCKS_CONVERT)
+			split_flag |= EXT4_EXT_DATA_VALID2;
 	}
 	flags |= EXT4_GET_BLOCKS_PRE_IO;
 	return ext4_split_extent(handle, inode, ppath, map, split_flag, flags);
@@ -3883,7 +3887,7 @@ ext4_ext_handle_unwritten_extents(handle_t *handle, struct inode *inode,
 	/* get_block() before submitting IO, split the extent */
 	if (flags & EXT4_GET_BLOCKS_PRE_IO) {
 		ret = ext4_split_convert_extents(handle, inode, map, ppath,
-						 flags | EXT4_GET_BLOCKS_CONVERT);
+						 flags);
 		if (ret < 0) {
 			err = ret;
 			goto out2;
--
2.39.2
[PATCH OLK-5.10] fat: avoid parent link count underflow in rmdir
by Yongjian Sun 26 Mar '26

From: Zhiyu Zhang <zhiyuzhang999(a)gmail.com>

mainline inclusion
from mainline-v7.0-rc1
commit 8cafcb881364af5ef3a8b9fed4db254054033d8a
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13898
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

Corrupted FAT images can leave a directory inode with an incorrect
i_nlink (e.g. 2 even though subdirectories exist). rmdir then
unconditionally calls drop_nlink(dir) and can drive i_nlink to 0,
triggering the WARN_ON in drop_nlink().

Add a sanity check in vfat_rmdir() and msdos_rmdir(): only drop the
parent link count when it is at least 3, otherwise report a filesystem
error.

Link: https://lkml.kernel.org/r/20260101111148.1437-1-zhiyuzhang999@gmail.com
Fixes: 9a53c3a783c2 ("[PATCH] r/o bind mounts: unlink: monitor i_nlink")
Signed-off-by: Zhiyu Zhang <zhiyuzhang999(a)gmail.com>
Reported-by: Zhiyu Zhang <zhiyuzhang999(a)gmail.com>
Closes: https://lore.kernel.org/linux-fsdevel/aVN06OKsKxZe6-Kv@casper.infradead.org…
Tested-by: Zhiyu Zhang <zhiyuzhang999(a)gmail.com>
Acked-by: OGAWA Hirofumi <hirofumi(a)mail.parknet.co.jp>
Cc: Al Viro <viro(a)zeniv.linux.org.uk>
Cc: Christian Brauner <brauner(a)kernel.org>
Cc: Jan Kara <jack(a)suse.cz>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Yongjian Sun <sunyongjian1(a)huawei.com>
---
 fs/fat/namei_msdos.c | 7 ++++++-
 fs/fat/namei_vfat.c  | 7 ++++++-
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/fs/fat/namei_msdos.c b/fs/fat/namei_msdos.c
index 9d062886fbc1..63a323be9179 100644
--- a/fs/fat/namei_msdos.c
+++ b/fs/fat/namei_msdos.c
@@ -325,7 +325,12 @@ static int msdos_rmdir(struct inode *dir, struct dentry *dentry)
 	err = fat_remove_entries(dir, &sinfo);	/* and releases bh */
 	if (err)
 		goto out;
-	drop_nlink(dir);
+	if (dir->i_nlink >= 3)
+		drop_nlink(dir);
+	else {
+		fat_fs_error(sb, "parent dir link count too low (%u)",
+			     dir->i_nlink);
+	}

 	clear_nlink(inode);
 	fat_truncate_time(inode, NULL, S_CTIME);
diff --git a/fs/fat/namei_vfat.c b/fs/fat/namei_vfat.c
index 0cdd0fb9f742..db9a88fb6e2d 100644
--- a/fs/fat/namei_vfat.c
+++ b/fs/fat/namei_vfat.c
@@ -808,7 +808,12 @@ static int vfat_rmdir(struct inode *dir, struct dentry *dentry)
 	err = fat_remove_entries(dir, &sinfo);	/* and releases bh */
 	if (err)
 		goto out;
-	drop_nlink(dir);
+	if (dir->i_nlink >= 3)
+		drop_nlink(dir);
+	else {
+		fat_fs_error(sb, "parent dir link count too low (%u)",
+			     dir->i_nlink);
+	}

 	clear_nlink(inode);
 	fat_truncate_time(inode, NULL, S_ATIME|S_MTIME);
--
2.39.2
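The sanity check above encodes a directory invariant: a parent with at least one subdirectory has i_nlink >= 3 (its own "." entry, the link from its parent, and the child's ".." entry), so only then can a link safely be dropped. A hypothetical userspace model of the check (the function name and the reported-error flag are illustrative):

```c
#include <assert.h>

/* Set when the model's stand-in for fat_fs_error() fires. */
static int fs_error_reported;

/* Models the vfat_rmdir()/msdos_rmdir() fix: a parent losing a
 * subdirectory should have nlink >= 3; anything lower means the image
 * is corrupt, so report it instead of underflowing toward 0. */
static unsigned int rmdir_drop_parent_link(unsigned int nlink)
{
	if (nlink >= 3)
		return nlink - 1;	/* drop_nlink(dir) */
	fs_error_reported = 1;		/* fat_fs_error(...) */
	return nlink;			/* leave the count untouched */
}
```

The key design point is that the corrupt case returns the count unchanged: never "fix up" on-disk corruption by letting an in-memory counter hit the WARN_ON in drop_nlink().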
[PATCH OLK-6.6] scsi: core: Fix refcount leak for tagset_refcnt
by Li Lingfeng 26 Mar '26

From: Junxiao Bi <junxiao.bi(a)oracle.com>

stable inclusion
from stable-v6.6.130
commit 7c01b680beaf4d3143866b062b8e770e8b237fb8
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13999
CVE: CVE-2026-23296
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

commit 1ac22c8eae81366101597d48360718dff9b9d980 upstream.

This leak will cause a hang when tearing down the SCSI host. For
example, iscsid hangs with the following call trace:

[130120.652718] scsi_alloc_sdev: Allocation failure during SCSI scanning, some SCSI devices might not be configured

PID: 2528     TASK: ffff9d0408974e00  CPU: 3    COMMAND: "iscsid"
 #0 [ffffb5b9c134b9e0] __schedule at ffffffff860657d4
 #1 [ffffb5b9c134ba28] schedule at ffffffff86065c6f
 #2 [ffffb5b9c134ba40] schedule_timeout at ffffffff86069fb0
 #3 [ffffb5b9c134bab0] __wait_for_common at ffffffff8606674f
 #4 [ffffb5b9c134bb10] scsi_remove_host at ffffffff85bfe84b
 #5 [ffffb5b9c134bb30] iscsi_sw_tcp_session_destroy at ffffffffc03031c4 [iscsi_tcp]
 #6 [ffffb5b9c134bb48] iscsi_if_recv_msg at ffffffffc0292692 [scsi_transport_iscsi]
 #7 [ffffb5b9c134bb98] iscsi_if_rx at ffffffffc02929c2 [scsi_transport_iscsi]
 #8 [ffffb5b9c134bbf0] netlink_unicast at ffffffff85e551d6
 #9 [ffffb5b9c134bc38] netlink_sendmsg at ffffffff85e554ef

Fixes: 8fe4ce5836e9 ("scsi: core: Fix a use-after-free")
Cc: stable(a)vger.kernel.org
Signed-off-by: Junxiao Bi <junxiao.bi(a)oracle.com>
Reviewed-by: Mike Christie <michael.christie(a)oracle.com>
Reviewed-by: Bart Van Assche <bvanassche(a)acm.org>
Link: https://patch.msgid.link/20260223232728.93350-1-junxiao.bi@oracle.com
Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Li Lingfeng <lilingfeng3(a)huawei.com>
---
 drivers/scsi/scsi_scan.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 8ee74dddef16..e5b62e36761f 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -354,6 +354,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
 	 * since we use this queue depth most of times.
 	 */
 	if (scsi_realloc_sdev_budget_map(sdev, depth)) {
+		kref_put(&sdev->host->tagset_refcnt, scsi_mq_free_tags);
 		put_device(&starget->dev);
 		kfree(sdev);
 		goto out;
--
2.52.0
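The one-line fix above is an instance of a general refcount rule: every reference taken before an error branch must be dropped on that branch, or the final teardown (here, scsi_remove_host() waiting on the tagset refcount) blocks forever. A hypothetical userspace model — the names (`alloc_sdev_model`, `kref_get_model`) are illustrative, not the SCSI midlayer API:

```c
#include <assert.h>

/* Models host->tagset_refcnt: teardown can only finish at zero. */
static int tagset_refs;

static void kref_get_model(void) { tagset_refs++; }
static void kref_put_model(void) { tagset_refs--; }

/* Models scsi_alloc_sdev(): a reference is taken early, then the budget
 * map reallocation may fail. `fixed` toggles the added kref_put(). */
static int alloc_sdev_model(int budget_map_fails, int fixed)
{
	kref_get_model();		/* taken during sdev setup */
	if (budget_map_fails) {
		if (fixed)
			kref_put_model();	/* the patch's added put */
		return -1;		/* goto out: allocation abandoned */
	}
	return 0;			/* success: ref released at destroy */
}
```

Without the put, a single allocation failure during scanning leaves the count permanently elevated, which is exactly the hang the commit's iscsid backtrace shows.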
[PATCH OLK-5.10] bonding: fix use-after-free due to enslave fail after slave array update
by Dong Chenchen 26 Mar '26

From: Nikolay Aleksandrov <razor(a)blackwall.org>

mainline inclusion
from mainline-v6.19-rc8
commit e9acda52fd2ee0cdca332f996da7a95c5fd25294
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/13721
CVE: CVE-2026-23171
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

Fix a use-after-free which happens due to enslave failure after the new
slave has been added to the array. Since the new slave can be used for
Tx immediately, we can use it after it has been freed by the enslave
error cleanup path which frees the allocated slave memory. Slave update
array is supposed to be called last when further enslave failures are
not expected. Move it after xdp setup to avoid any problems.

It is very easy to reproduce the problem with a simple xdp_pass prog:

 ip l add bond1 type bond mode balance-xor
 ip l set bond1 up
 ip l set dev bond1 xdp object xdp_pass.o sec xdp_pass
 ip l add dumdum type dummy

Then run in parallel:

 while :; do ip l set dumdum master bond1 1>/dev/null 2>&1; done;
 mausezahn bond1 -a own -b rand -A rand -B 1.1.1.1 -c 0 -t tcp "dp=1-1023, flags=syn"

The crash happens almost immediately:

[  605.602850] Oops: general protection fault, probably for non-canonical address 0xe0e6fc2460000137: 0000 [#1] SMP KASAN NOPTI
[  605.602916] KASAN: maybe wild-memory-access in range [0x07380123000009b8-0x07380123000009bf]
[  605.602946] CPU: 0 UID: 0 PID: 2445 Comm: mausezahn Kdump: loaded Tainted: G B 6.19.0-rc6+ #21 PREEMPT(voluntary)
[  605.602979] Tainted: [B]=BAD_PAGE
[  605.602998] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[  605.603032] RIP: 0010:netdev_core_pick_tx+0xcd/0x210
[  605.603063] Code: 48 89 fa 48 c1 ea 03 80 3c 02 00 0f 85 3e 01 00 00 48 b8 00 00 00 00 00 fc ff df 4c 8b 6b 08 49 8d 7d 30 48 89 fa 48 c1 ea 03 <80> 3c 02 00 0f 85 25 01 00 00 49 8b 45 30 4c 89 e2 48 89 ee 48 89
[  605.603111] RSP: 0018:ffff88817b9af348 EFLAGS: 00010213
[  605.603145] RAX: dffffc0000000000 RBX: ffff88817d28b420 RCX: 0000000000000000
[  605.603172] RDX: 00e7002460000137 RSI: 0000000000000008 RDI: 07380123000009be
[  605.603199] RBP: ffff88817b541a00 R08: 0000000000000001 R09: fffffbfff3ed8c0c
[  605.603226] R10: ffffffff9f6c6067 R11: 0000000000000001 R12: 0000000000000000
[  605.603253] R13: 073801230000098e R14: ffff88817d28b448 R15: ffff88817b541a84
[  605.603286] FS:  00007f6570ef67c0(0000) GS:ffff888221dfa000(0000) knlGS:0000000000000000
[  605.603319] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  605.603343] CR2: 00007f65712fae40 CR3: 000000011371b000 CR4: 0000000000350ef0
[  605.603373] Call Trace:
[  605.603392]  <TASK>
[  605.603410]  __dev_queue_xmit+0x448/0x32a0
[  605.603434]  ? __pfx_vprintk_emit+0x10/0x10
[  605.603461]  ? __pfx_vprintk_emit+0x10/0x10
[  605.603484]  ? __pfx___dev_queue_xmit+0x10/0x10
[  605.603507]  ? bond_start_xmit+0xbfb/0xc20 [bonding]
[  605.603546]  ? _printk+0xcb/0x100
[  605.603566]  ? __pfx__printk+0x10/0x10
[  605.603589]  ? bond_start_xmit+0xbfb/0xc20 [bonding]
[  605.603627]  ? add_taint+0x5e/0x70
[  605.603648]  ? add_taint+0x2a/0x70
[  605.603670]  ? end_report.cold+0x51/0x75
[  605.603693]  ? bond_start_xmit+0xbfb/0xc20 [bonding]
[  605.603731]  bond_start_xmit+0x623/0xc20 [bonding]

Fixes: 9e2ee5c7e7c3 ("net, bonding: Add XDP support to the bonding driver")
Signed-off-by: Nikolay Aleksandrov <razor(a)blackwall.org>
Reported-by: Chen Zhen <chenzhen126(a)huawei.com>
Closes: https://lore.kernel.org/netdev/fae17c21-4940-5605-85b2-1d5e17342358@huawei.…
CC: Jussi Maki <joamaki(a)gmail.com>
CC: Daniel Borkmann <daniel(a)iogearbox.net>
Acked-by: Daniel Borkmann <daniel(a)iogearbox.net>
Link: https://patch.msgid.link/20260123120659.571187-1-razor@blackwall.org
Signed-off-by: Paolo Abeni <pabeni(a)redhat.com>

Conflicts:
	drivers/net/bonding/bond_main.c
[commit e0caeb24f538 and cb9e6e584d584 are not backport]
Signed-off-by: Dong Chenchen <dongchenchen2(a)huawei.com>
---
 drivers/net/bonding/bond_main.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index cd64d7cc65aa..9deea4439b79 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -2139,9 +2139,6 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
 		unblock_netpoll_tx();
 	}

-	if (bond_mode_can_use_xmit_hash(bond))
-		bond_update_slave_arr(bond, NULL);
-
 	if (!slave_dev->netdev_ops->ndo_bpf ||
 	    !slave_dev->netdev_ops->ndo_xdp_xmit) {
@@ -2178,6 +2175,9 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
 			bpf_prog_inc(bond->xdp_prog);
 	}

+	if (bond_mode_can_use_xmit_hash(bond))
+		bond_update_slave_arr(bond, NULL);
+
 	slave_info(bond_dev, slave_dev, "Enslaving as %s interface with %s link\n",
 		   bond_is_active_slave(new_slave) ? "an active" : "a backup",
 		   new_slave->link != BOND_LINK_DOWN ? "an up" : "a down");
--
2.25.1
[PATCH OLK-6.6] perf: HiSilicon: Support uncore ITS PMU
by Yushan Wang 26 Mar '26

driver inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8741

-----------------------------------------------

Support the uncore ITS PMU, which provides the capability of counting
interrupts routed to the ITS by interrupt category, as well as their
latency. It also supports ITS internal structure latency and DDR
access events. The driver adapts to the HiSilicon uncore PMU
framework.

The ITS PMU does not support overflow interrupts, the same as the NoC
PMU, so a few interrupt-related callbacks are left as empty stubs.

Signed-off-by: Yushan Wang <wangyushan12(a)huawei.com>
Signed-off-by: Ying Jiang <jiangying44(a)h-partners.com>
---
 drivers/perf/hisilicon/hisi_uncore_its_pmu.c | 364 +++++++++++++++++++
 1 file changed, 364 insertions(+)
 create mode 100644 drivers/perf/hisilicon/hisi_uncore_its_pmu.c

diff --git a/drivers/perf/hisilicon/hisi_uncore_its_pmu.c b/drivers/perf/hisilicon/hisi_uncore_its_pmu.c
new file mode 100644
index 000000000000..9d9035207e78
--- /dev/null
+++ b/drivers/perf/hisilicon/hisi_uncore_its_pmu.c
@@ -0,0 +1,364 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Driver for HiSilicon Uncore ITS PMU device
+ *
+ * Copyright (c) 2025 HiSilicon Technologies Co., Ltd.
+ * Author: Yushan Wang <wangyushan12(a)huawei.com>
+ */
+#include <linux/bitops.h>
+#include <linux/cpuhotplug.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/property.h>
+#include <linux/sysfs.h>
+
+#include "hisi_uncore_pmu.h"
+
+#define ITS_PMU_VERSION			0x21000
+#define ITS_PMU_GLOBAL_CTRL		0x21004
+#define ITS_PMU_GLOBAL_CTRL_PMU_EN	BIT(0)
+#define ITS_PMU_COUNTER_CTRL		0x21008
+#define ITS_PMU_EVENT_CTRL		0x2100c
+#define ITS_PMU_COUNTER0		0x21010
+
+#define ITS_PMU_INT_ID_MASK		0x20008
+#define ITS_PMU_INT_ID_CTRL		0x20084
+
+#define ITS_PMU_NR_COUNTERS		4
+
+#define ITS_PMU_EVENT_CNTRn(cntr0, n)	((cntr0) + 8 * (n))
+#define ITS_PMU_CNTR_CTRL_MASK(n)	GENMASK(8 * ((n) + 1) - 1, 8 * (n))
+#define ITS_PMU_CNTR_EVENT_CFG(n, e)	((e) << ((n) * 8))
+#define ITS_PMU_EVENT_CTRL_TYPE		GENMASK(12, 0)
+
+HISI_PMU_EVENT_ATTR_EXTRACTOR(int_id, config1, 31, 0);
+
+/* Dynamic CPU hotplug state used by this PMU driver */
+static enum cpuhp_state hisi_its_pmu_cpuhp_state;
+
+struct hisi_its_pmu_regs {
+	u32 version;
+	u32 pmu_ctrl;
+	u32 event_ctrl0;
+	u32 event_cntr0;
+	u32 cntr_ctrl;
+};
+
+static void hisi_its_pmu_write_evtype(struct hisi_pmu *its_pmu, int idx, u32 type)
+{
+	struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private;
+	u32 reg;
+
+	reg = readl(its_pmu->base + reg_info->event_ctrl0);
+	reg &= ~ITS_PMU_CNTR_CTRL_MASK(idx);
+	reg |= ITS_PMU_CNTR_EVENT_CFG(idx, type);
+	writel(reg, its_pmu->base + reg_info->event_ctrl0);
+}
+
+static u64 hisi_its_pmu_read_counter(struct hisi_pmu *its_pmu,
+				     struct hw_perf_event *hwc)
+{
+	struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private;
+
+	return readq(its_pmu->base + ITS_PMU_EVENT_CNTRn(reg_info->event_cntr0, hwc->idx));
+}
+
+static void hisi_its_pmu_write_counter(struct hisi_pmu *its_pmu,
+				       struct hw_perf_event *hwc, u64 val)
+{
+	struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private;
+
+	writeq(val, its_pmu->base + ITS_PMU_EVENT_CNTRn(reg_info->event_cntr0, hwc->idx));
+}
+
+static void hisi_its_pmu_enable_counter(struct hisi_pmu *its_pmu,
+					struct hw_perf_event *hwc)
+{
+	struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private;
+	u32 reg;
+
+	reg = readl(its_pmu->base + reg_info->cntr_ctrl);
+	reg |= BIT(hwc->idx);
+	writel(reg, its_pmu->base + reg_info->cntr_ctrl);
+}
+
+static void hisi_its_pmu_disable_counter(struct hisi_pmu *its_pmu,
+					 struct hw_perf_event *hwc)
+{
+	struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private;
+	u32 reg;
+
+	reg = readl(its_pmu->base + reg_info->cntr_ctrl);
+	reg &= ~BIT(hwc->idx);
+	writel(reg, its_pmu->base + reg_info->cntr_ctrl);
+}
+
+static void hisi_its_pmu_enable_counter_int(struct hisi_pmu *its_pmu,
+					    struct hw_perf_event *hwc)
+{
+	/* We don't support interrupt, so a stub here. */
+}
+
+static void hisi_its_pmu_disable_counter_int(struct hisi_pmu *its_pmu,
+					     struct hw_perf_event *hwc)
+{
+}
+
+static void hisi_its_pmu_start_counters(struct hisi_pmu *its_pmu)
+{
+	struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private;
+	u32 reg;
+
+	reg = readl(its_pmu->base + reg_info->pmu_ctrl);
+	reg |= ITS_PMU_GLOBAL_CTRL_PMU_EN;
+	writel(reg, its_pmu->base + reg_info->pmu_ctrl);
+}
+
+static void hisi_its_pmu_stop_counters(struct hisi_pmu *its_pmu)
+{
+	struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private;
+	u32 reg;
+
+	reg = readl(its_pmu->base + reg_info->pmu_ctrl);
+	reg &= ~ITS_PMU_GLOBAL_CTRL_PMU_EN;
+	writel(reg, its_pmu->base + reg_info->pmu_ctrl);
+}
+
+static void hisi_its_pmu_enable_filter(struct perf_event *event)
+{
+	struct hisi_pmu *its_pmu = to_hisi_pmu(event->pmu);
+	u32 int_id = hisi_get_int_id(event);
+	u32 reg = int_id ? 0 : -1U;
+
+	if (int_id)
+		writel(int_id, its_pmu->base + ITS_PMU_INT_ID_CTRL);
+
+	writel(reg, its_pmu->base + ITS_PMU_INT_ID_MASK);
+}
+
+static void hisi_its_pmu_disable_filter(struct perf_event *event)
+{
+	struct hisi_pmu *its_pmu = to_hisi_pmu(event->pmu);
+	u32 int_id = hisi_get_int_id(event);
+
+	if (bitmap_weight(its_pmu->pmu_events.used_mask, its_pmu->num_counters) > 1)
+		return;
+
+	if (int_id) {
+		writel(0, its_pmu->base + ITS_PMU_INT_ID_CTRL);
+		writel(-1U, its_pmu->base + ITS_PMU_INT_ID_MASK);
+	}
+}
+
+static const struct hisi_uncore_ops hisi_uncore_its_ops = {
+	.write_evtype		= hisi_its_pmu_write_evtype,
+	.get_event_idx		= hisi_uncore_pmu_get_event_idx,
+	.read_counter		= hisi_its_pmu_read_counter,
+	.write_counter		= hisi_its_pmu_write_counter,
+	.enable_counter		= hisi_its_pmu_enable_counter,
+	.disable_counter	= hisi_its_pmu_disable_counter,
+	.enable_counter_int	= hisi_its_pmu_enable_counter_int,
+	.disable_counter_int	= hisi_its_pmu_disable_counter_int,
+	.start_counters		= hisi_its_pmu_start_counters,
+	.stop_counters		= hisi_its_pmu_stop_counters,
+	.enable_filter		= hisi_its_pmu_enable_filter,
+	.disable_filter		= hisi_its_pmu_disable_filter,
+};
+
+static struct attribute *hisi_its_pmu_format_attrs[] = {
+	HISI_PMU_FORMAT_ATTR(event, "config:0-16"),
+	HISI_PMU_FORMAT_ATTR(int_id, "config1:0-31"),
+	NULL
+};
+
+static const struct attribute_group hisi_its_pmu_format_group = {
+	.name = "format",
+	.attrs = hisi_its_pmu_format_attrs,
+};
+
+static struct attribute *hisi_its_pmu_events_attrs[] = {
+	HISI_PMU_EVENT_ATTR(lpi_num,		0xc0),
+	HISI_PMU_EVENT_ATTR(lpi_time,		0x80),
+	HISI_PMU_EVENT_ATTR(sgi_num,		0xc1),
+	HISI_PMU_EVENT_ATTR(sgi_time,		0x81),
+	HISI_PMU_EVENT_ATTR(ppi_num,		0xc2),
+	HISI_PMU_EVENT_ATTR(ppi_time,		0x82),
+	HISI_PMU_EVENT_ATTR(sl3_lpi_num,	0xc3),
+	HISI_PMU_EVENT_ATTR(sl3_sgi_num,	0xc4),
+	HISI_PMU_EVENT_ATTR(sl3_ppi_num,	0xc5),
+	HISI_PMU_EVENT_ATTR(sl0_ddr_read,	0xc9),
+	HISI_PMU_EVENT_ATTR(sl0_ddr_time,	0x89),
+	HISI_PMU_EVENT_ATTR(sl1_ddr_read,	0xca),
+	HISI_PMU_EVENT_ATTR(sl1_ddr_time,	0x8a),
+	HISI_PMU_EVENT_ATTR(sl2_ddr_read,	0xcb),
+	HISI_PMU_EVENT_ATTR(sl2_ddr_time,	0x8b),
+	HISI_PMU_EVENT_ATTR(cycles,		0xcc),
+	NULL
+};
+
+static const struct attribute_group hisi_its_pmu_events_group = {
+	.name = "events",
+	.attrs = hisi_its_pmu_events_attrs,
+};
+
+static const struct attribute_group *hisi_its_pmu_attr_groups[] = {
+	&hisi_its_pmu_format_group,
+	&hisi_its_pmu_events_group,
+	&hisi_pmu_cpumask_attr_group,
+	&hisi_pmu_identifier_group,
+	NULL
+};
+
+static int hisi_its_pmu_dev_init(struct platform_device *pdev, struct hisi_pmu *its_pmu)
+{
+	struct hisi_its_pmu_regs *reg_info;
+
+	hisi_uncore_pmu_init_topology(its_pmu, &pdev->dev);
+
+	if (its_pmu->topo.scl_id < 0)
+		return dev_err_probe(&pdev->dev, -EINVAL, "failed to get scl-id\n");
+
+	if (its_pmu->topo.index_id < 0)
+		return dev_err_probe(&pdev->dev, -EINVAL, "failed to get idx-id\n");
+
+	its_pmu->base = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(its_pmu->base))
+		return dev_err_probe(&pdev->dev, PTR_ERR(its_pmu->base),
+				     "fail to remap io memory\n");
+
+	its_pmu->dev_info = device_get_match_data(&pdev->dev);
+	if (!its_pmu->dev_info)
+		return -ENODEV;
+
+	its_pmu->pmu_events.attr_groups = its_pmu->dev_info->attr_groups;
+	its_pmu->counter_bits = its_pmu->dev_info->counter_bits;
+	its_pmu->check_event = its_pmu->dev_info->check_event;
+	its_pmu->num_counters = ITS_PMU_NR_COUNTERS;
+	its_pmu->ops = &hisi_uncore_its_ops;
+	its_pmu->dev = &pdev->dev;
+	its_pmu->on_cpu = -1;
+
+	reg_info = its_pmu->dev_info->private;
+	its_pmu->identifier = readl(its_pmu->base + reg_info->version);
+
+	return 0;
+}
+
+static void hisi_its_pmu_remove_cpuhp_instance(void *hotplug_node)
+{
+	cpuhp_state_remove_instance_nocalls(hisi_its_pmu_cpuhp_state, hotplug_node);
+}
+
+static void hisi_its_pmu_unregister_pmu(void *pmu)
+{
+	perf_pmu_unregister(pmu);
+}
+
+static int hisi_its_pmu_probe(struct platform_device *pdev)
+{
+
struct device *dev = &pdev->dev; + struct hisi_pmu *its_pmu; + char *name; + int ret; + + its_pmu = devm_kzalloc(dev, sizeof(*its_pmu), GFP_KERNEL); + if (!its_pmu) + return -ENOMEM; + + /* + * HiSilicon Uncore PMU framework needs to get common hisi_pmu device + * from device's drvdata. + */ + platform_set_drvdata(pdev, its_pmu); + + ret = hisi_its_pmu_dev_init(pdev, its_pmu); + if (ret) + return ret; + + ret = cpuhp_state_add_instance(hisi_its_pmu_cpuhp_state, &its_pmu->node); + if (ret) + return dev_err_probe(dev, ret, "Fail to register cpuhp instance\n"); + + ret = devm_add_action_or_reset(dev, hisi_its_pmu_remove_cpuhp_instance, + &its_pmu->node); + if (ret) + return ret; + + hisi_pmu_init(its_pmu, THIS_MODULE); + + name = devm_kasprintf(dev, GFP_KERNEL, "hisi_scl%d_its%d", + its_pmu->topo.scl_id, its_pmu->topo.index_id); + if (!name) + return -ENOMEM; + + ret = perf_pmu_register(&its_pmu->pmu, name, -1); + if (ret) + return dev_err_probe(dev, ret, "Fail to register PMU\n"); + + return devm_add_action_or_reset(dev, hisi_its_pmu_unregister_pmu, + &its_pmu->pmu); +} + +static struct hisi_its_pmu_regs hisi_its_v1_pmu_regs = { + .version = ITS_PMU_VERSION, + .pmu_ctrl = ITS_PMU_GLOBAL_CTRL, + .event_ctrl0 = ITS_PMU_EVENT_CTRL, + .event_cntr0 = ITS_PMU_COUNTER0, + .cntr_ctrl = ITS_PMU_COUNTER_CTRL, +}; + +static const struct hisi_pmu_dev_info hisi_its_v1 = { + .attr_groups = hisi_its_pmu_attr_groups, + .counter_bits = 48, + .check_event = ITS_PMU_EVENT_CTRL_TYPE, + .private = &hisi_its_v1_pmu_regs, +}; + +static const struct acpi_device_id hisi_its_pmu_ids[] = { + { "HISI0591", (kernel_ulong_t) &hisi_its_v1 }, + { } +}; +MODULE_DEVICE_TABLE(acpi, hisi_its_pmu_ids); + +static struct platform_driver hisi_its_pmu_driver = { + .driver = { + .name = "hisi_its_pmu", + .acpi_match_table = hisi_its_pmu_ids, + .suppress_bind_attrs = true, + }, + .probe = hisi_its_pmu_probe, +}; + +static int __init hisi_its_pmu_module_init(void) +{ + int ret = 
cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, + "perf/hisi/its:online", + hisi_uncore_pmu_online_cpu, + hisi_uncore_pmu_offline_cpu); + if (ret < 0) { + pr_err("hisi_its_pmu: Fail to setup cpuhp callbacks, ret = %d\n", ret); + return ret; + } + hisi_its_pmu_cpuhp_state = ret; + + ret = platform_driver_register(&hisi_its_pmu_driver); + if (ret) + cpuhp_remove_multi_state(hisi_its_pmu_cpuhp_state); + + return ret; +} +module_init(hisi_its_pmu_module_init); + +static void __exit hisi_its_pmu_module_exit(void) +{ + platform_driver_unregister(&hisi_its_pmu_driver); + cpuhp_remove_multi_state(hisi_its_pmu_cpuhp_state); +} +module_exit(hisi_its_pmu_module_exit); + +MODULE_IMPORT_NS(HISI_PMU); +MODULE_DESCRIPTION("HiSilicon SoC Uncore ITS PMU driver"); +MODULE_LICENSE("GPL"); -- 2.33.0
2 1
0 0
[PATCH OLK-6.6] drivers/perf: hisi: Add new function for HiSilicon MN PMU driver
by Yifan Wu 26 Mar '26

26 Mar '26
driver inclusion category: feature bugzilla: https://atomgit.com/openeuler/kernel/issues/8741 ----------------------------------------------- MN (Miscellaneous Node) is a hybrid node in ARM CHI. The MN PMU driver uses the HiSilicon uncore PMU framework. On the HiSilicon Hip13 platform, the cycle event is supported on the MN PMU. The cycle event is exposed directly in the driver, and some variables are given a version suffix to distinguish them. Signed-off-by: Yifan Wu <wuyifan50(a)huawei.com> Signed-off-by: Ying Jiang <jiangying44(a)h-partners.com> --- drivers/perf/hisilicon/hisi_uncore_mn_pmu.c | 61 +++++++++++++++++++-- 1 file changed, 55 insertions(+), 6 deletions(-) diff --git a/drivers/perf/hisilicon/hisi_uncore_mn_pmu.c b/drivers/perf/hisilicon/hisi_uncore_mn_pmu.c index 38a72c95fb6d..5baebb63af2a 100644 --- a/drivers/perf/hisilicon/hisi_uncore_mn_pmu.c +++ b/drivers/perf/hisilicon/hisi_uncore_mn_pmu.c @@ -192,7 +192,7 @@ static const struct attribute_group hisi_mn_pmu_format_group = { .attrs = hisi_mn_pmu_format_attr, }; -static struct attribute *hisi_mn_pmu_events_attr[] = { +static struct attribute *hisi_mn_pmu_events_attr_v1[] = { HISI_PMU_EVENT_ATTR(req_eobarrier_num, 0x00), HISI_PMU_EVENT_ATTR(req_ecbarrier_num, 0x01), HISI_PMU_EVENT_ATTR(req_dvmop_num, 0x02), @@ -219,14 +219,55 @@ static struct attribute *hisi_mn_pmu_events_attr[] = { NULL }; -static const struct attribute_group hisi_mn_pmu_events_group = { +static const struct attribute_group hisi_mn_pmu_events_group_v1 = { .name = "events", - .attrs = hisi_mn_pmu_events_attr, + .attrs = hisi_mn_pmu_events_attr_v1, }; -static const struct attribute_group *hisi_mn_pmu_attr_groups[] = { +static const struct attribute_group *hisi_mn_pmu_attr_groups_v1[] = { &hisi_mn_pmu_format_group, - &hisi_mn_pmu_events_group, + &hisi_mn_pmu_events_group_v1, &hisi_pmu_cpumask_attr_group, &hisi_pmu_identifier_group, NULL +}; + +static struct attribute *hisi_mn_pmu_events_attr_v2[] = { + HISI_PMU_EVENT_ATTR(req_eobarrier_num, 
0x00), + HISI_PMU_EVENT_ATTR(req_ecbarrier_num, 0x01), + HISI_PMU_EVENT_ATTR(req_dvmop_num, 0x02), + HISI_PMU_EVENT_ATTR(req_dvmsync_num, 0x03), + HISI_PMU_EVENT_ATTR(req_retry_num, 0x04), + HISI_PMU_EVENT_ATTR(req_writenosnp_num, 0x05), + HISI_PMU_EVENT_ATTR(req_readnosnp_num, 0x06), + HISI_PMU_EVENT_ATTR(snp_dvm_num, 0x07), + HISI_PMU_EVENT_ATTR(snp_dvmsync_num, 0x08), + HISI_PMU_EVENT_ATTR(l3t_req_dvm_num, 0x09), + HISI_PMU_EVENT_ATTR(l3t_req_dvmsync_num, 0x0A), + HISI_PMU_EVENT_ATTR(mn_req_dvm_num, 0x0B), + HISI_PMU_EVENT_ATTR(mn_req_dvmsync_num, 0x0C), + HISI_PMU_EVENT_ATTR(pa_req_dvm_num, 0x0D), + HISI_PMU_EVENT_ATTR(pa_req_dvmsync_num, 0x0E), + HISI_PMU_EVENT_ATTR(cycles, 0x0F), + HISI_PMU_EVENT_ATTR(snp_dvm_latency, 0x80), + HISI_PMU_EVENT_ATTR(snp_dvmsync_latency, 0x81), + HISI_PMU_EVENT_ATTR(l3t_req_dvm_latency, 0x82), + HISI_PMU_EVENT_ATTR(l3t_req_dvmsync_latency, 0x83), + HISI_PMU_EVENT_ATTR(mn_req_dvm_latency, 0x84), + HISI_PMU_EVENT_ATTR(mn_req_dvmsync_latency, 0x85), + HISI_PMU_EVENT_ATTR(pa_req_dvm_latency, 0x86), + HISI_PMU_EVENT_ATTR(pa_req_dvmsync_latency, 0x87), + NULL +}; + +static const struct attribute_group hisi_mn_pmu_events_group_v2 = { + .name = "events", + .attrs = hisi_mn_pmu_events_attr_v2, +}; + +static const struct attribute_group *hisi_mn_pmu_attr_groups_v2[] = { + &hisi_mn_pmu_format_group, + &hisi_mn_pmu_events_group_v2, &hisi_pmu_cpumask_attr_group, &hisi_pmu_identifier_group, NULL @@ -351,7 +392,14 @@ static struct hisi_mn_pmu_regs hisi_mn_v1_pmu_regs = { }; static const struct hisi_pmu_dev_info hisi_mn_v1 = { - .attr_groups = hisi_mn_pmu_attr_groups, + .attr_groups = hisi_mn_pmu_attr_groups_v1, + .counter_bits = 48, + .check_event = HISI_MN_EVTYPE_MASK, + .private = &hisi_mn_v1_pmu_regs, +}; + +static const struct hisi_pmu_dev_info hisi_mn_v2 = { + .attr_groups = hisi_mn_pmu_attr_groups_v2, .counter_bits = 48, .check_event = HISI_MN_EVTYPE_MASK, .private = &hisi_mn_v1_pmu_regs, @@ -359,6 +407,7 @@ static const struct 
hisi_pmu_dev_info hisi_mn_v1 = { static const struct acpi_device_id hisi_mn_pmu_acpi_match[] = { { "HISI0222", (kernel_ulong_t) &hisi_mn_v1 }, + { "HISI0224", (kernel_ulong_t) &hisi_mn_v2 }, { } }; MODULE_DEVICE_TABLE(acpi, hisi_mn_pmu_acpi_match); -- 2.33.0
2 1
0 0
[PATCH OLK-6.6 1/1] perf: HiSilicon: Support uncore ITS PMU
by Yushan Wang 26 Mar '26

26 Mar '26
driver inclusion category: feature bugzilla: https://atomgit.com/openeuler/kernel/issues/8741 ----------------------------------------------- Support the uncore ITS PMU, which provides the capability of counting interrupts routed to the ITS by interrupt category, along with their latency. It also supports ITS internal structure latency and DDR access events. The driver adapts to the HiSilicon uncore PMU framework. The ITS PMU does not support overflow interrupts, the same as the NoC PMU, so the interrupt-handling callbacks are left as empty stubs. Signed-off-by: Yushan Wang <wangyushan12(a)huawei.com> Signed-off-by: Ying Jiang <jiangying44(a)h-partners.com> --- drivers/perf/hisilicon/hisi_uncore_its_pmu.c | 364 +++++++++++++++++++ 1 file changed, 364 insertions(+) create mode 100644 drivers/perf/hisilicon/hisi_uncore_its_pmu.c diff --git a/drivers/perf/hisilicon/hisi_uncore_its_pmu.c b/drivers/perf/hisilicon/hisi_uncore_its_pmu.c new file mode 100644 index 000000000000..9d9035207e78 --- /dev/null +++ b/drivers/perf/hisilicon/hisi_uncore_its_pmu.c @@ -0,0 +1,364 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Driver for HiSilicon Uncore ITS PMU device + * + * Copyright (c) 2025 HiSilicon Technologies Co., Ltd. 
+ * Author: Yushan Wang <wangyushan12(a)huawei.com> + */ +#include <linux/bitops.h> +#include <linux/cpuhotplug.h> +#include <linux/device.h> +#include <linux/io.h> +#include <linux/mod_devicetable.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/property.h> +#include <linux/sysfs.h> + +#include "hisi_uncore_pmu.h" + +#define ITS_PMU_VERSION 0x21000 +#define ITS_PMU_GLOBAL_CTRL 0x21004 +#define ITS_PMU_GLOBAL_CTRL_PMU_EN BIT(0) +#define ITS_PMU_COUNTER_CTRL 0x21008 +#define ITS_PMU_EVENT_CTRL 0x2100c +#define ITS_PMU_COUNTER0 0x21010 + +#define ITS_PMU_INT_ID_MASK 0x20008 +#define ITS_PMU_INT_ID_CTRL 0x20084 + +#define ITS_PMU_NR_COUNTERS 4 + +#define ITS_PMU_EVENT_CNTRn(cntr0, n) ((cntr0) + 8 * (n)) +#define ITS_PMU_CNTR_CTRL_MASK(n) GENMASK(8 * ((n) + 1) - 1, 8 * (n)) +#define ITS_PMU_CNTR_EVENT_CFG(n, e) ((e) << ((n) * 8)) +#define ITS_PMU_EVENT_CTRL_TYPE GENMASK(12, 0) + +HISI_PMU_EVENT_ATTR_EXTRACTOR(int_id, config1, 31, 0); + +/* Dynamic CPU hotplug state used by this PMU driver */ +static enum cpuhp_state hisi_its_pmu_cpuhp_state; + +struct hisi_its_pmu_regs { + u32 version; + u32 pmu_ctrl; + u32 event_ctrl0; + u32 event_cntr0; + u32 cntr_ctrl; +}; + +static void hisi_its_pmu_write_evtype(struct hisi_pmu *its_pmu, int idx, u32 type) +{ + struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private; + u32 reg; + + reg = readl(its_pmu->base + reg_info->event_ctrl0); + reg &= ~ITS_PMU_CNTR_CTRL_MASK(idx); + reg |= ITS_PMU_CNTR_EVENT_CFG(idx, type); + writel(reg, its_pmu->base + reg_info->event_ctrl0); +} + +static u64 hisi_its_pmu_read_counter(struct hisi_pmu *its_pmu, + struct hw_perf_event *hwc) +{ + struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private; + + return readq(its_pmu->base + ITS_PMU_EVENT_CNTRn(reg_info->event_cntr0, hwc->idx)); +} + +static void hisi_its_pmu_write_counter(struct hisi_pmu *its_pmu, + struct hw_perf_event *hwc, u64 val) +{ + struct hisi_its_pmu_regs *reg_info = 
its_pmu->dev_info->private; + + writeq(val, its_pmu->base + ITS_PMU_EVENT_CNTRn(reg_info->event_cntr0, hwc->idx)); +} + +static void hisi_its_pmu_enable_counter(struct hisi_pmu *its_pmu, + struct hw_perf_event *hwc) +{ + struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private; + u32 reg; + + reg = readl(its_pmu->base + reg_info->cntr_ctrl); + reg |= BIT(hwc->idx); + writel(reg, its_pmu->base + reg_info->cntr_ctrl); +} + +static void hisi_its_pmu_disable_counter(struct hisi_pmu *its_pmu, + struct hw_perf_event *hwc) +{ + struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private; + u32 reg; + + reg = readl(its_pmu->base + reg_info->cntr_ctrl); + reg &= ~BIT(hwc->idx); + writel(reg, its_pmu->base + reg_info->cntr_ctrl); +} + +static void hisi_its_pmu_enable_counter_int(struct hisi_pmu *its_pmu, + struct hw_perf_event *hwc) +{ + /* We don't support interrupt, so a stub here. */ +} + +static void hisi_its_pmu_disable_counter_int(struct hisi_pmu *its_pmu, + struct hw_perf_event *hwc) +{ +} + +static void hisi_its_pmu_start_counters(struct hisi_pmu *its_pmu) +{ + struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private; + u32 reg; + + reg = readl(its_pmu->base + reg_info->pmu_ctrl); + reg |= ITS_PMU_GLOBAL_CTRL_PMU_EN; + writel(reg, its_pmu->base + reg_info->pmu_ctrl); +} + +static void hisi_its_pmu_stop_counters(struct hisi_pmu *its_pmu) +{ + struct hisi_its_pmu_regs *reg_info = its_pmu->dev_info->private; + u32 reg; + + reg = readl(its_pmu->base + reg_info->pmu_ctrl); + reg &= ~ITS_PMU_GLOBAL_CTRL_PMU_EN; + writel(reg, its_pmu->base + reg_info->pmu_ctrl); +} + +static void hisi_its_pmu_enable_filter(struct perf_event *event) +{ + struct hisi_pmu *its_pmu = to_hisi_pmu(event->pmu); + u32 int_id = hisi_get_int_id(event); + u32 reg = int_id ? 
0 : -1U; + + if (int_id) + writel(int_id, its_pmu->base + ITS_PMU_INT_ID_CTRL); + + writel(reg, its_pmu->base + ITS_PMU_INT_ID_MASK); +} + +static void hisi_its_pmu_disable_filter(struct perf_event *event) +{ + struct hisi_pmu *its_pmu = to_hisi_pmu(event->pmu); + u32 int_id = hisi_get_int_id(event); + + if (bitmap_weight(its_pmu->pmu_events.used_mask, its_pmu->num_counters) > 1) + return; + + if (int_id) { + writel(0, its_pmu->base + ITS_PMU_INT_ID_CTRL); + writel(-1U, its_pmu->base + ITS_PMU_INT_ID_MASK); + } +} + +static const struct hisi_uncore_ops hisi_uncore_its_ops = { + .write_evtype = hisi_its_pmu_write_evtype, + .get_event_idx = hisi_uncore_pmu_get_event_idx, + .read_counter = hisi_its_pmu_read_counter, + .write_counter = hisi_its_pmu_write_counter, + .enable_counter = hisi_its_pmu_enable_counter, + .disable_counter = hisi_its_pmu_disable_counter, + .enable_counter_int = hisi_its_pmu_enable_counter_int, + .disable_counter_int = hisi_its_pmu_disable_counter_int, + .start_counters = hisi_its_pmu_start_counters, + .stop_counters = hisi_its_pmu_stop_counters, + .enable_filter = hisi_its_pmu_enable_filter, + .disable_filter = hisi_its_pmu_disable_filter, +}; + +static struct attribute *hisi_its_pmu_format_attrs[] = { + HISI_PMU_FORMAT_ATTR(event, "config:0-16"), + HISI_PMU_FORMAT_ATTR(int_id, "config1:0-31"), + NULL +}; + +static const struct attribute_group hisi_its_pmu_format_group = { + .name = "format", + .attrs = hisi_its_pmu_format_attrs, +}; + +static struct attribute *hisi_its_pmu_events_attrs[] = { + HISI_PMU_EVENT_ATTR(lpi_num, 0xc0), + HISI_PMU_EVENT_ATTR(lpi_time, 0x80), + HISI_PMU_EVENT_ATTR(sgi_num, 0xc1), + HISI_PMU_EVENT_ATTR(sgi_time, 0x81), + HISI_PMU_EVENT_ATTR(ppi_num, 0xc2), + HISI_PMU_EVENT_ATTR(ppi_time, 0x82), + HISI_PMU_EVENT_ATTR(sl3_lpi_num, 0xc3), + HISI_PMU_EVENT_ATTR(sl3_sgi_num, 0xc4), + HISI_PMU_EVENT_ATTR(sl3_ppi_num, 0xc5), + HISI_PMU_EVENT_ATTR(sl0_ddr_read, 0xc9), + HISI_PMU_EVENT_ATTR(sl0_ddr_time, 0x89), + 
HISI_PMU_EVENT_ATTR(sl1_ddr_read, 0xca), + HISI_PMU_EVENT_ATTR(sl1_ddr_time, 0x8a), + HISI_PMU_EVENT_ATTR(sl2_ddr_read, 0xcb), + HISI_PMU_EVENT_ATTR(sl2_ddr_time, 0x8b), + HISI_PMU_EVENT_ATTR(cycles, 0xcc), + NULL +}; + +static const struct attribute_group hisi_its_pmu_events_group = { + .name = "events", + .attrs = hisi_its_pmu_events_attrs, +}; + +static const struct attribute_group *hisi_its_pmu_attr_groups[] = { + &hisi_its_pmu_format_group, + &hisi_its_pmu_events_group, + &hisi_pmu_cpumask_attr_group, + &hisi_pmu_identifier_group, + NULL +}; + +static int hisi_its_pmu_dev_init(struct platform_device *pdev, struct hisi_pmu *its_pmu) +{ + struct hisi_its_pmu_regs *reg_info; + + hisi_uncore_pmu_init_topology(its_pmu, &pdev->dev); + + if (its_pmu->topo.scl_id < 0) + return dev_err_probe(&pdev->dev, -EINVAL, "failed to get scl-id\n"); + + if (its_pmu->topo.index_id < 0) + return dev_err_probe(&pdev->dev, -EINVAL, "failed to get idx-id\n"); + + its_pmu->base = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(its_pmu->base)) + return dev_err_probe(&pdev->dev, PTR_ERR(its_pmu->base), + "fail to remap io memory\n"); + + its_pmu->dev_info = device_get_match_data(&pdev->dev); + if (!its_pmu->dev_info) + return -ENODEV; + + its_pmu->pmu_events.attr_groups = its_pmu->dev_info->attr_groups; + its_pmu->counter_bits = its_pmu->dev_info->counter_bits; + its_pmu->check_event = its_pmu->dev_info->check_event; + its_pmu->num_counters = ITS_PMU_NR_COUNTERS; + its_pmu->ops = &hisi_uncore_its_ops; + its_pmu->dev = &pdev->dev; + its_pmu->on_cpu = -1; + + reg_info = its_pmu->dev_info->private; + its_pmu->identifier = readl(its_pmu->base + reg_info->version); + + return 0; +} + +static void hisi_its_pmu_remove_cpuhp_instance(void *hotplug_node) +{ + cpuhp_state_remove_instance_nocalls(hisi_its_pmu_cpuhp_state, hotplug_node); +} + +static void hisi_its_pmu_unregister_pmu(void *pmu) +{ + perf_pmu_unregister(pmu); +} + +static int hisi_its_pmu_probe(struct platform_device *pdev) +{ + 
struct device *dev = &pdev->dev; + struct hisi_pmu *its_pmu; + char *name; + int ret; + + its_pmu = devm_kzalloc(dev, sizeof(*its_pmu), GFP_KERNEL); + if (!its_pmu) + return -ENOMEM; + + /* + * HiSilicon Uncore PMU framework needs to get common hisi_pmu device + * from device's drvdata. + */ + platform_set_drvdata(pdev, its_pmu); + + ret = hisi_its_pmu_dev_init(pdev, its_pmu); + if (ret) + return ret; + + ret = cpuhp_state_add_instance(hisi_its_pmu_cpuhp_state, &its_pmu->node); + if (ret) + return dev_err_probe(dev, ret, "Fail to register cpuhp instance\n"); + + ret = devm_add_action_or_reset(dev, hisi_its_pmu_remove_cpuhp_instance, + &its_pmu->node); + if (ret) + return ret; + + hisi_pmu_init(its_pmu, THIS_MODULE); + + name = devm_kasprintf(dev, GFP_KERNEL, "hisi_scl%d_its%d", + its_pmu->topo.scl_id, its_pmu->topo.index_id); + if (!name) + return -ENOMEM; + + ret = perf_pmu_register(&its_pmu->pmu, name, -1); + if (ret) + return dev_err_probe(dev, ret, "Fail to register PMU\n"); + + return devm_add_action_or_reset(dev, hisi_its_pmu_unregister_pmu, + &its_pmu->pmu); +} + +static struct hisi_its_pmu_regs hisi_its_v1_pmu_regs = { + .version = ITS_PMU_VERSION, + .pmu_ctrl = ITS_PMU_GLOBAL_CTRL, + .event_ctrl0 = ITS_PMU_EVENT_CTRL, + .event_cntr0 = ITS_PMU_COUNTER0, + .cntr_ctrl = ITS_PMU_COUNTER_CTRL, +}; + +static const struct hisi_pmu_dev_info hisi_its_v1 = { + .attr_groups = hisi_its_pmu_attr_groups, + .counter_bits = 48, + .check_event = ITS_PMU_EVENT_CTRL_TYPE, + .private = &hisi_its_v1_pmu_regs, +}; + +static const struct acpi_device_id hisi_its_pmu_ids[] = { + { "HISI0591", (kernel_ulong_t) &hisi_its_v1 }, + { } +}; +MODULE_DEVICE_TABLE(acpi, hisi_its_pmu_ids); + +static struct platform_driver hisi_its_pmu_driver = { + .driver = { + .name = "hisi_its_pmu", + .acpi_match_table = hisi_its_pmu_ids, + .suppress_bind_attrs = true, + }, + .probe = hisi_its_pmu_probe, +}; + +static int __init hisi_its_pmu_module_init(void) +{ + int ret = 
cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, + "perf/hisi/its:online", + hisi_uncore_pmu_online_cpu, + hisi_uncore_pmu_offline_cpu); + if (ret < 0) { + pr_err("hisi_its_pmu: Fail to setup cpuhp callbacks, ret = %d\n", ret); + return ret; + } + hisi_its_pmu_cpuhp_state = ret; + + ret = platform_driver_register(&hisi_its_pmu_driver); + if (ret) + cpuhp_remove_multi_state(hisi_its_pmu_cpuhp_state); + + return ret; +} +module_init(hisi_its_pmu_module_init); + +static void __exit hisi_its_pmu_module_exit(void) +{ + platform_driver_unregister(&hisi_its_pmu_driver); + cpuhp_remove_multi_state(hisi_its_pmu_cpuhp_state); +} +module_exit(hisi_its_pmu_module_exit); + +MODULE_IMPORT_NS(HISI_PMU); +MODULE_DESCRIPTION("HiSilicon SoC Uncore ITS PMU driver"); +MODULE_LICENSE("GPL"); -- 2.33.0
2 1
0 0

Powered by HyperKitty