Kernel

kernel@openeuler.org

  • 56 participants
  • 18794 discussions
[PATCH openEuler-1.0-LTS v6 0/2] CVE-2021-47110
by liwei 25 Mar '24

CVE-2021-47110

Kirill A. Shutemov (1):
  x86/kvm: Do not try to disable kvmclock if it was not enabled

Vitaly Kuznetsov (1):
  x86/kvm: Disable kvmclock on all CPUs on shutdown

 arch/x86/include/asm/kvm_para.h |  4 ++--
 arch/x86/kernel/kvm.c           |  1 +
 arch/x86/kernel/kvmclock.c      | 17 +++++++++--------
 3 files changed, 12 insertions(+), 10 deletions(-)

--
2.25.1

[PATCH openEuler-1.0-LTS v5 0/2] CVE-2021-47110
by liwei 25 Mar '24

CVE-2021-47110

Kirill A. Shutemov (1):
  x86/kvm: Do not try to disable kvmclock if it was not enabled

Vitaly Kuznetsov (1):
  x86/kvm: Disable kvmclock on all CPUs on shutdown

 arch/x86/include/asm/kvm_para.h |  4 ++--
 arch/x86/kernel/kvm.c           |  1 +
 arch/x86/kernel/kvmclock.c      | 17 +++++++++--------
 3 files changed, 12 insertions(+), 10 deletions(-)

--
2.25.1

[PATCH OLK-5.10] bus: mhi: host: Drop chan lock before queuing buffers
by Guan Jing 25 Mar '24

From: Qiang Yu <quic_qianyu(a)quicinc.com>

stable inclusion
from stable-v5.10.209
commit 20a6dea2d1c68d4e03c6bb50bc12e72e226b5c0e
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I97NHA
CVE: CVE-2023-52493
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

Ensure read and write locks for the channel are not taken in succession by
dropping the read lock from parse_xfer_event() such that a callback given
to client can potentially queue buffers and acquire the write lock in that
process. Any queueing of buffers should be done without channel read lock
acquired as it can result in multiple locks and a soft lockup.

Cc: <stable(a)vger.kernel.org> # 5.7
Fixes: 1d3173a3bae7 ("bus: mhi: core: Add support for processing events from client device")
Signed-off-by: Qiang Yu <quic_qianyu(a)quicinc.com>
Reviewed-by: Jeffrey Hugo <quic_jhugo(a)quicinc.com>
Tested-by: Jeffrey Hugo <quic_jhugo(a)quicinc.com>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam(a)linaro.org>
Link: https://lore.kernel.org/r/1702276972-41296-3-git-send-email-quic_qianyu@qui…
[mani: added fixes tag and cc'ed stable]
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam(a)linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Guan Jing <guanjing6(a)huawei.com>
---
 drivers/bus/mhi/host/main.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 82d6e9e8bddf..b0a83a725069 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -570,6 +570,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
         mhi_del_ring_element(mhi_cntrl, tre_ring);
         local_rp = tre_ring->rp;
+        read_unlock_bh(&mhi_chan->lock);
+
         /* notify client */
         mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
@@ -592,6 +594,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
                 kfree(buf_info->cb_buf);
             }
         }
+
+        read_lock_bh(&mhi_chan->lock);
     }
     break;
 } /* CC_EOT */
--
2.34.1

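The locking change above can be illustrated outside the kernel. Below is a minimal userspace sketch (pthreads; hypothetical names, not the MHI driver code) of the same pattern: the read lock is dropped around a client callback because the callback may need the write lock on the same rwlock, and taking both in succession would lock up.

/* Illustrative userspace analogue, not the MHI driver code: a callback
 * invoked while the read lock is held cannot take the write lock on the
 * same rwlock, so the read lock is dropped around the call. */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t chan_lock = PTHREAD_RWLOCK_INITIALIZER;

static void client_callback(void)
{
    /* The client may queue buffers, which needs the write lock. */
    pthread_rwlock_wrlock(&chan_lock);
    puts("callback queued a buffer under the write lock");
    pthread_rwlock_unlock(&chan_lock);
}

static void process_event(void)
{
    pthread_rwlock_rdlock(&chan_lock);
    /* ... ring bookkeeping under the read lock ... */

    pthread_rwlock_unlock(&chan_lock);  /* drop the read lock, as the patch does */
    client_callback();                  /* safe: no lock held here */
    pthread_rwlock_rdlock(&chan_lock);  /* reacquire before continuing */

    /* ... remaining processing ... */
    pthread_rwlock_unlock(&chan_lock);
}

int main(void)
{
    process_event();
    return 0;
}
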
[PATCH OLK-6.6] ext4: Validate inode pa before using preallocation blocks
by Zhihao Cheng 25 Mar '24

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I97HJA
CVE: NA

--------------------------------

In ext4 continue & no-journal mode, physical blocks could be allocated
more than once (caused by writing extent entries failed & reclaiming
extent cache) in preallocation process, which could trigger a BUG_ON
(pa->pa_free < len) in ext4_mb_use_inode_pa().

 kernel BUG at fs/ext4/mballoc.c:4681!
 invalid opcode: 0000 [#1] PREEMPT SMP
 CPU: 3 PID: 97 Comm: kworker/u8:3 Not tainted 6.8.0-rc7
 RIP: 0010:ext4_mb_use_inode_pa+0x1b6/0x1e0
 Call Trace:
  ext4_mb_use_preallocated.constprop.0+0x19e/0x540
  ext4_mb_new_blocks+0x220/0x1f30
  ext4_ext_map_blocks+0xf3c/0x2900
  ext4_map_blocks+0x264/0xa40
  ext4_do_writepages+0xb15/0x1400
  do_writepages+0x8c/0x260
  writeback_sb_inodes+0x224/0x720
  wb_writeback+0xd8/0x580
  wb_workfn+0x148/0x820

Details are shown as following:
 0. Given a file with i_size=4096 with one mapped block
 1. Write block no 1, blocks 1~3 are preallocated.
    ext4_ext_map_blocks
      ext4_mb_normalize_request
        size = 16 * 1024
        size = end - start // Allocate 3 blocks (bs = 4096)
      ext4_mb_regular_allocator
      ext4_mb_regular_allocator
      ext4_mb_regular_allocator
      ext4_mb_use_inode_pa
        pa->pa_free -= len // 3 - 1 = 2
 2. Extent buffer head is written failed, es cache and buffer head are
    reclaimed.
 3. Write blocks 1~3
    ext4_ext_map_blocks
      newex.ee_len = 3
      ext4_ext_check_overlap // Find nothing, there should have been block 1
      allocated = map->m_len // 3
      ext4_mb_new_blocks
        ext4_mb_use_preallocated
          ext4_mb_use_inode_pa
            BUG_ON(pa->pa_free < len) // 2 < 3!

Fix it by adding validation checking for inode pa. If invalid pa is
detected, stop using inode preallocation, drop invalid pa to avoid it
being used again, mark group block bitmap as corrupted to avoid
allocating from the erroneous group. Fetch a reproducer in Link.

Cc: stable(a)vger.kernel.org
Link: https://bugzilla.kernel.org/show_bug.cgi?id=218576
Signed-off-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Signed-off-by: Zhang Yi <yi.zhang(a)huawei.com>
---
 fs/ext4/mballoc.c | 128 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 98 insertions(+), 30 deletions(-)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index ea5ac2636632..82aef3072162 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -422,6 +422,9 @@ static void ext4_mb_new_preallocation(struct ext4_allocation_context *ac);
 static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
                 ext4_group_t group, enum criteria cr);

+static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
+        struct super_block *sb, struct ext4_prealloc_space *pa);
+
 static int ext4_try_to_trim_range(struct super_block *sb,
         struct ext4_buddy *e4b, ext4_grpblk_t start,
         ext4_grpblk_t max, ext4_grpblk_t minblocks);
@@ -4783,6 +4786,79 @@ ext4_mb_pa_goal_check(struct ext4_allocation_context *ac,
     return true;
 }

+/*
+ * check if found pa is valid
+ */
+static bool ext4_mb_pa_is_valid(struct ext4_allocation_context *ac,
+                struct ext4_prealloc_space *pa)
+{
+    struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+    ext4_fsblk_t start;
+    ext4_fsblk_t end;
+    int len;
+
+    if (unlikely(pa->pa_free == 0)) {
+        /*
+         * We found a valid overlapping pa but couldn't use it because
+         * it had no free blocks. This should ideally never happen
+         * because:
+         *
+         * 1. When a new inode pa is added to rbtree it must have
+         *    pa_free > 0 since otherwise we won't actually need
+         *    preallocation.
+         *
+         * 2. An inode pa that is in the rbtree can only have it's
+         *    pa_free become zero when another thread calls:
+         *      ext4_mb_new_blocks
+         *       ext4_mb_use_preallocated
+         *        ext4_mb_use_inode_pa
+         *
+         * 3. Further, after the above calls make pa_free == 0, we will
+         *    immediately remove it from the rbtree in:
+         *      ext4_mb_new_blocks
+         *       ext4_mb_release_context
+         *        ext4_mb_put_pa
+         *
+         * 4. Since the pa_free becoming 0 and pa_free getting removed
+         *    from tree both happen in ext4_mb_new_blocks, which is always
+         *    called with i_data_sem held for data allocations, we can be
+         *    sure that another process will never see a pa in rbtree with
+         *    pa_free == 0.
+         */
+        ext4_msg(ac->ac_sb, KERN_ERR, "invalid pa, free is 0");
+        return false;
+    }
+
+    start = pa->pa_pstart + (ac->ac_o_ex.fe_logical - pa->pa_lstart);
+    end = min(pa->pa_pstart + EXT4_C2B(sbi, pa->pa_len),
+          start + EXT4_C2B(sbi, ac->ac_o_ex.fe_len));
+    len = EXT4_NUM_B2C(sbi, end - start);
+
+    if (unlikely(start < pa->pa_pstart)) {
+        ext4_msg(ac->ac_sb, KERN_ERR,
+             "invalid pa, start(%llu) < pa_pstart(%llu)",
+             start, pa->pa_pstart);
+        return false;
+    }
+    if (unlikely(end > pa->pa_pstart + EXT4_C2B(sbi, pa->pa_len))) {
+        ext4_msg(ac->ac_sb, KERN_ERR,
+             "invalid pa, end(%llu) > pa_pstart(%llu) + pa_len(%d)",
+             end, pa->pa_pstart, EXT4_C2B(sbi, pa->pa_len));
+        return false;
+    }
+    if (unlikely(pa->pa_free < len)) {
+        ext4_msg(ac->ac_sb, KERN_ERR,
+             "invalid pa, pa_free(%d) < len(%d)", pa->pa_free, len);
+        return false;
+    }
+    if (unlikely(len <= 0)) {
+        ext4_msg(ac->ac_sb, KERN_ERR, "invalid pa, len(%d) <= 0", len);
+        return false;
+    }
+
+    return true;
+}
+
 /*
  * search goal blocks in preallocated space
  */
@@ -4906,45 +4982,37 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
         goto try_group_pa;
     }

-    if (tmp_pa->pa_free && likely(ext4_mb_pa_goal_check(ac, tmp_pa))) {
+    if (unlikely(!ext4_mb_pa_is_valid(ac, tmp_pa))) {
+        ext4_group_t group;
+
+        tmp_pa->pa_free = 0;
+        atomic_inc(&tmp_pa->pa_count);
+        spin_unlock(&tmp_pa->pa_lock);
+        read_unlock(&ei->i_prealloc_lock);
+
+        ext4_mb_put_pa(ac, ac->ac_sb, tmp_pa);
+        group = ext4_get_group_number(ac->ac_sb, tmp_pa->pa_pstart);
+        ext4_lock_group(ac->ac_sb, group);
+        ext4_mark_group_bitmap_corrupted(ac->ac_sb, group,
+                EXT4_GROUP_INFO_BBITMAP_CORRUPT);
+        ext4_unlock_group(ac->ac_sb, group);
+        ext4_error(ac->ac_sb, "drop pa and mark group %u block bitmap corrupted",
+               group);
+        WARN_ON_ONCE(1);
+        goto try_group_pa_unlocked;
+    }
+
+    if (likely(ext4_mb_pa_goal_check(ac, tmp_pa))) {
         atomic_inc(&tmp_pa->pa_count);
         ext4_mb_use_inode_pa(ac, tmp_pa);
         spin_unlock(&tmp_pa->pa_lock);
         read_unlock(&ei->i_prealloc_lock);
         return true;
-    } else {
-        /*
-         * We found a valid overlapping pa but couldn't use it because
-         * it had no free blocks. This should ideally never happen
-         * because:
-         *
-         * 1. When a new inode pa is added to rbtree it must have
-         *    pa_free > 0 since otherwise we won't actually need
-         *    preallocation.
-         *
-         * 2. An inode pa that is in the rbtree can only have it's
-         *    pa_free become zero when another thread calls:
-         *      ext4_mb_new_blocks
-         *       ext4_mb_use_preallocated
-         *        ext4_mb_use_inode_pa
-         *
-         * 3. Further, after the above calls make pa_free == 0, we will
-         *    immediately remove it from the rbtree in:
-         *      ext4_mb_new_blocks
-         *       ext4_mb_release_context
-         *        ext4_mb_put_pa
-         *
-         * 4. Since the pa_free becoming 0 and pa_free getting removed
-         *    from tree both happen in ext4_mb_new_blocks, which is always
-         *    called with i_data_sem held for data allocations, we can be
-         *    sure that another process will never see a pa in rbtree with
-         *    pa_free == 0.
-         */
-        WARN_ON_ONCE(tmp_pa->pa_free == 0);
     }
     spin_unlock(&tmp_pa->pa_lock);
 try_group_pa:
     read_unlock(&ei->i_prealloc_lock);
+try_group_pa_unlocked:

     /* can we use group allocation? */
     if (!(ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC))
--
2.31.1

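For readers following the arithmetic in the commit message (pa_free drops to 2, the second write then asks for 3 blocks), here is a small standalone sketch, with simplified types and hypothetical names, of the range/length validation that ext4_mb_pa_is_valid() performs; it is an illustration, not the mballoc code itself.

/* Standalone sketch of the added check (simplified: block size == cluster
 * size); the BUG_ON(pa->pa_free < len) case now just reports "invalid". */
#include <stdbool.h>
#include <stdio.h>

struct demo_pa {                       /* stand-in for struct ext4_prealloc_space */
    unsigned long long pa_pstart;      /* physical start of the preallocation */
    unsigned long long pa_lstart;      /* logical start */
    int pa_len;                        /* preallocated blocks */
    int pa_free;                       /* blocks still unused */
};

static bool demo_pa_is_valid(const struct demo_pa *pa,
                             unsigned long long req_logical, int req_len)
{
    unsigned long long start = pa->pa_pstart + (req_logical - pa->pa_lstart);
    unsigned long long pa_end = pa->pa_pstart + pa->pa_len;
    unsigned long long end = pa_end < start + req_len ? pa_end : start + req_len;
    int len = (int)(end - start);

    if (start < pa->pa_pstart || end > pa_end || len <= 0)
        return false;
    if (pa->pa_free < len)             /* the condition that used to hit BUG_ON() */
        return false;
    return true;
}

int main(void)
{
    /* pa_free already dropped to 2, but the second allocation asks for 3. */
    struct demo_pa pa = { .pa_pstart = 100, .pa_lstart = 1,
                          .pa_len = 3, .pa_free = 2 };
    printf("valid: %d\n", demo_pa_is_valid(&pa, 1, 3)); /* prints 0 */
    return 0;
}
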
[PATCH openEuler-1.0-LTS] ext4: Validate inode pa before using preallocation blocks
by Zhihao Cheng 25 Mar '24

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I97HJA
CVE: NA

--------------------------------

In ext4 continue & no-journal mode, physical blocks could be allocated
more than once (caused by writing extent entries failed & reclaiming
extent cache) in preallocation process, which could trigger a BUG_ON
(pa->pa_free < len) in ext4_mb_use_inode_pa().

 kernel BUG at fs/ext4/mballoc.c:4681!
 invalid opcode: 0000 [#1] PREEMPT SMP
 CPU: 3 PID: 97 Comm: kworker/u8:3 Not tainted 6.8.0-rc7
 RIP: 0010:ext4_mb_use_inode_pa+0x1b6/0x1e0
 Call Trace:
  ext4_mb_use_preallocated.constprop.0+0x19e/0x540
  ext4_mb_new_blocks+0x220/0x1f30
  ext4_ext_map_blocks+0xf3c/0x2900
  ext4_map_blocks+0x264/0xa40
  ext4_do_writepages+0xb15/0x1400
  do_writepages+0x8c/0x260
  writeback_sb_inodes+0x224/0x720
  wb_writeback+0xd8/0x580
  wb_workfn+0x148/0x820

Details are shown as following:
 0. Given a file with i_size=4096 with one mapped block
 1. Write block no 1, blocks 1~3 are preallocated.
    ext4_ext_map_blocks
      ext4_mb_normalize_request
        size = 16 * 1024
        size = end - start // Allocate 3 blocks (bs = 4096)
      ext4_mb_regular_allocator
      ext4_mb_regular_allocator
      ext4_mb_regular_allocator
      ext4_mb_use_inode_pa
        pa->pa_free -= len // 3 - 1 = 2
 2. Extent buffer head is written failed, es cache and buffer head are
    reclaimed.
 3. Write blocks 1~3
    ext4_ext_map_blocks
      newex.ee_len = 3
      ext4_ext_check_overlap // Find nothing, there should have been block 1
      allocated = map->m_len // 3
      ext4_mb_new_blocks
        ext4_mb_use_preallocated
          ext4_mb_use_inode_pa
            BUG_ON(pa->pa_free < len) // 2 < 3!

Fix it by adding validation checking for inode pa. If invalid pa is
detected, stop using inode preallocation, drop invalid pa to avoid it
being used again, mark group block bitmap as corrupted to avoid
allocating from the erroneous group. Fetch a reproducer in Link.

Cc: stable(a)vger.kernel.org
Link: https://bugzilla.kernel.org/show_bug.cgi?id=218576
Signed-off-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Signed-off-by: Zhang Yi <yi.zhang(a)huawei.com>
---
 fs/ext4/mballoc.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index a40990da0b62..c07289164c12 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -356,6 +356,8 @@ static void ext4_mb_generate_from_pa(struct super_block *sb, void *bitmap,
                     ext4_group_t group);
 static void ext4_mb_generate_from_freelist(struct super_block *sb, void *bitmap,
                     ext4_group_t group);
+static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
+        struct super_block *sb, struct ext4_prealloc_space *pa);

 static inline void *mb_correct_addr_and_bit(int *bit, void *addr)
 {
@@ -3364,6 +3366,47 @@ static void ext4_discard_allocated_blocks(struct ext4_allocation_context *ac)
     pa->pa_free += ac->ac_b_ex.fe_len;
 }

+/*
+ * check if found pa is valid
+ */
+static bool ext4_mb_pa_is_valid(struct ext4_allocation_context *ac,
+                struct ext4_prealloc_space *pa)
+{
+    struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+    ext4_fsblk_t start;
+    ext4_fsblk_t end;
+    int len;
+
+    start = pa->pa_pstart + (ac->ac_o_ex.fe_logical - pa->pa_lstart);
+    end = min(pa->pa_pstart + EXT4_C2B(sbi, pa->pa_len),
+          start + EXT4_C2B(sbi, ac->ac_o_ex.fe_len));
+    len = EXT4_NUM_B2C(sbi, end - start);
+
+    if (unlikely(start < pa->pa_pstart)) {
+        ext4_msg(ac->ac_sb, KERN_ERR,
+             "invalid pa, start(%llu) < pa_pstart(%llu)",
+             start, pa->pa_pstart);
+        return false;
+    }
+    if (unlikely(end > pa->pa_pstart + EXT4_C2B(sbi, pa->pa_len))) {
+        ext4_msg(ac->ac_sb, KERN_ERR,
+             "invalid pa, end(%llu) > pa_pstart(%llu) + pa_len(%d)",
+             end, pa->pa_pstart, EXT4_C2B(sbi, pa->pa_len));
+        return false;
+    }
+    if (unlikely(pa->pa_free < len)) {
+        ext4_msg(ac->ac_sb, KERN_ERR,
+             "invalid pa, pa_free(%d) < len(%d)", pa->pa_free, len);
+        return false;
+    }
+    if (unlikely(len <= 0)) {
+        ext4_msg(ac->ac_sb, KERN_ERR, "invalid pa, len(%d) <= 0", len);
+        return false;
+    }
+
+    return true;
+}
+
 /*
  * use blocks preallocated to inode
  */
@@ -3483,6 +3526,23 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)

         /* found preallocated blocks, use them */
         spin_lock(&pa->pa_lock);
+        if (unlikely(!ext4_mb_pa_is_valid(ac, pa))) {
+            ext4_group_t group;
+
+            pa->pa_free = 0;
+            atomic_inc(&pa->pa_count);
+            spin_unlock(&pa->pa_lock);
+            rcu_read_unlock();
+            ext4_mb_put_pa(ac, ac->ac_sb, pa);
+            group = ext4_get_group_number(ac->ac_sb, pa->pa_pstart);
+            ext4_lock_group(ac->ac_sb, group);
+            ext4_mark_group_bitmap_corrupted(ac->ac_sb, group,
+                    EXT4_GROUP_INFO_BBITMAP_CORRUPT);
+            ext4_unlock_group(ac->ac_sb, group);
+            ext4_error(ac->ac_sb, "drop pa and mark group %u block bitmap corrupted",
+                    group);
+            goto try_group_pa;
+        }
         if (pa->pa_deleted == 0 && pa->pa_free) {
             atomic_inc(&pa->pa_count);
             ext4_mb_use_inode_pa(ac, pa);
@@ -3495,6 +3555,7 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
     }
     rcu_read_unlock();

+try_group_pa:
     /* can we use group allocation? */
     if (!(ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC))
         return 0;
--
2.31.1

[PATCH OLK-5.10] ext4: Validate inode pa before using preallocation blocks
by Zhihao Cheng 25 Mar '24

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I97HJA
CVE: NA

--------------------------------

In ext4 continue & no-journal mode, physical blocks could be allocated
more than once (caused by writing extent entries failed & reclaiming
extent cache) in preallocation process, which could trigger a BUG_ON
(pa->pa_free < len) in ext4_mb_use_inode_pa().

 kernel BUG at fs/ext4/mballoc.c:4681!
 invalid opcode: 0000 [#1] PREEMPT SMP
 CPU: 3 PID: 97 Comm: kworker/u8:3 Not tainted 6.8.0-rc7
 RIP: 0010:ext4_mb_use_inode_pa+0x1b6/0x1e0
 Call Trace:
  ext4_mb_use_preallocated.constprop.0+0x19e/0x540
  ext4_mb_new_blocks+0x220/0x1f30
  ext4_ext_map_blocks+0xf3c/0x2900
  ext4_map_blocks+0x264/0xa40
  ext4_do_writepages+0xb15/0x1400
  do_writepages+0x8c/0x260
  writeback_sb_inodes+0x224/0x720
  wb_writeback+0xd8/0x580
  wb_workfn+0x148/0x820

Details are shown as following:
 0. Given a file with i_size=4096 with one mapped block
 1. Write block no 1, blocks 1~3 are preallocated.
    ext4_ext_map_blocks
      ext4_mb_normalize_request
        size = 16 * 1024
        size = end - start // Allocate 3 blocks (bs = 4096)
      ext4_mb_regular_allocator
      ext4_mb_regular_allocator
      ext4_mb_regular_allocator
      ext4_mb_use_inode_pa
        pa->pa_free -= len // 3 - 1 = 2
 2. Extent buffer head is written failed, es cache and buffer head are
    reclaimed.
 3. Write blocks 1~3
    ext4_ext_map_blocks
      newex.ee_len = 3
      ext4_ext_check_overlap // Find nothing, there should have been block 1
      allocated = map->m_len // 3
      ext4_mb_new_blocks
        ext4_mb_use_preallocated
          ext4_mb_use_inode_pa
            BUG_ON(pa->pa_free < len) // 2 < 3!

Fix it by adding validation checking for inode pa. If invalid pa is
detected, stop using inode preallocation, drop invalid pa to avoid it
being used again, mark group block bitmap as corrupted to avoid
allocating from the erroneous group. Fetch a reproducer in Link.

Cc: stable(a)vger.kernel.org
Link: https://bugzilla.kernel.org/show_bug.cgi?id=218576
Signed-off-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Signed-off-by: Zhang Yi <yi.zhang(a)huawei.com>
---
 fs/ext4/mballoc.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index c4f018dc0b59..2d33c0123b72 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -352,6 +352,9 @@ static void ext4_mb_generate_from_freelist(struct super_block *sb,
                         void *bitmap, ext4_group_t group);
 static void ext4_mb_new_preallocation(struct ext4_allocation_context *ac);

+static void ext4_mb_put_pa(struct ext4_allocation_context *ac,
+        struct super_block *sb, struct ext4_prealloc_space *pa);
+
 /*
  * The algorithm using this percpu seq counter goes below:
  * 1. We sample the percpu discard_pa_seq counter before trying for block
@@ -3812,6 +3815,47 @@ static void ext4_discard_allocated_blocks(struct ext4_allocation_context *ac)
     pa->pa_free += ac->ac_b_ex.fe_len;
 }

+/*
+ * check if found pa is valid
+ */
+static bool ext4_mb_pa_is_valid(struct ext4_allocation_context *ac,
+                struct ext4_prealloc_space *pa)
+{
+    struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
+    ext4_fsblk_t start;
+    ext4_fsblk_t end;
+    int len;
+
+    start = pa->pa_pstart + (ac->ac_o_ex.fe_logical - pa->pa_lstart);
+    end = min(pa->pa_pstart + EXT4_C2B(sbi, pa->pa_len),
+          start + EXT4_C2B(sbi, ac->ac_o_ex.fe_len));
+    len = EXT4_NUM_B2C(sbi, end - start);
+
+    if (unlikely(start < pa->pa_pstart)) {
+        ext4_msg(ac->ac_sb, KERN_ERR,
+             "invalid pa, start(%llu) < pa_pstart(%llu)",
+             start, pa->pa_pstart);
+        return false;
+    }
+    if (unlikely(end > pa->pa_pstart + EXT4_C2B(sbi, pa->pa_len))) {
+        ext4_msg(ac->ac_sb, KERN_ERR,
+             "invalid pa, end(%llu) > pa_pstart(%llu) + pa_len(%d)",
+             end, pa->pa_pstart, EXT4_C2B(sbi, pa->pa_len));
+        return false;
+    }
+    if (unlikely(pa->pa_free < len)) {
+        ext4_msg(ac->ac_sb, KERN_ERR,
+             "invalid pa, pa_free(%d) < len(%d)", pa->pa_free, len);
+        return false;
+    }
+    if (unlikely(len <= 0)) {
+        ext4_msg(ac->ac_sb, KERN_ERR, "invalid pa, len(%d) <= 0", len);
+        return false;
+    }
+
+    return true;
+}
+
 /*
  * use blocks preallocated to inode
  */
@@ -3932,6 +3976,23 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)

         /* found preallocated blocks, use them */
         spin_lock(&pa->pa_lock);
+        if (unlikely(!ext4_mb_pa_is_valid(ac, pa))) {
+            ext4_group_t group;
+
+            pa->pa_free = 0;
+            atomic_inc(&pa->pa_count);
+            spin_unlock(&pa->pa_lock);
+            rcu_read_unlock();
+            ext4_mb_put_pa(ac, ac->ac_sb, pa);
+            group = ext4_get_group_number(ac->ac_sb, pa->pa_pstart);
+            ext4_lock_group(ac->ac_sb, group);
+            ext4_mark_group_bitmap_corrupted(ac->ac_sb, group,
+                    EXT4_GROUP_INFO_BBITMAP_CORRUPT);
+            ext4_unlock_group(ac->ac_sb, group);
+            ext4_error(ac->ac_sb, "drop pa and mark group %u block bitmap corrupted",
+                    group);
+            goto try_group_pa;
+        }
         if (pa->pa_deleted == 0 && pa->pa_free) {
             atomic_inc(&pa->pa_count);
             ext4_mb_use_inode_pa(ac, pa);
@@ -3944,6 +4005,7 @@ ext4_mb_use_preallocated(struct ext4_allocation_context *ac)
     }
     rcu_read_unlock();

+try_group_pa:
     /* can we use group allocation? */
     if (!(ac->ac_flags & EXT4_MB_HINT_GROUP_ALLOC))
         return false;
--
2.31.1

[PATCH OLK-5.10 0/7] ext4: dio: Put endio under irq context for overwrite
by Zhihao Cheng 25 Mar '24

Christoph Hellwig (2):
  iomap: rename the flags variable in __iomap_dio_rw
  iomap: pass a flags argument to iomap_dio_rw

Jens Axboe (3):
  iomap: cleanup up iomap_dio_bio_end_io()
  iomap: use an unsigned type for IOMAP_DIO_* defines
  iomap: add IOMAP_DIO_INLINE_COMP

Zhihao Cheng (2):
  iomap: Add a IOMAP_DIO_MAY_INLINE_COMP flag
  ext4: Optimize endio process for DIO overwrites

 fs/btrfs/inode.c      |  4 +--
 fs/ext4/file.c        | 14 +++++---
 fs/gfs2/file.c        |  7 ++--
 fs/iomap/direct-io.c  | 84 ++++++++++++++++++++++++++++---------------
 fs/xfs/xfs_file.c     |  5 ++-
 fs/zonefs/super.c     |  4 +--
 include/linux/iomap.h | 16 +++++++--
 7 files changed, 87 insertions(+), 47 deletions(-)

--
2.31.1

[PATCH openEuler-1.0-LTS] pstore/ram: Fix crash when setting number of cpus to an odd number
by Zeng Heng 25 Mar '24

From: Weichen Chen <weichen.chen(a)mediatek.com>

mainline inclusion
from mainline-v6.8-rc1
commit d49270a04623ce3c0afddbf3e984cb245aa48e9c
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I99JQV
CVE: CVE-2023-52619
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

When the number of cpu cores is adjusted to 7 or other odd numbers,
the zone size will become an odd number.
The address of the zone will become:
  addr of zone0 = BASE
  addr of zone1 = BASE + zone_size
  addr of zone2 = BASE + zone_size*2
  ...
The address of zone1/3/5/7 will be mapped to non-alignment va.
Eventually crashes will occur when accessing these va.

So, use ALIGN_DOWN() to make sure the zone size is even to avoid this bug.

Fixes: de83209249d6 ("pstore: Make ramoops_init_przs generic for other prz arrays")
Signed-off-by: Weichen Chen <weichen.chen(a)mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg(a)gmail.com>
Tested-by: "Guilherme G. Piccoli" <gpiccoli(a)igalia.com>
Link: https://lore.kernel.org/r/20230224023632.6840-1-weichen.chen@mediatek.com
Signed-off-by: Kees Cook <keescook(a)chromium.org>
Signed-off-by: Zeng Heng <zengheng4(a)huawei.com>
---
 fs/pstore/ram.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
index 5907d081fa13..07a510a06e7a 100644
--- a/fs/pstore/ram.c
+++ b/fs/pstore/ram.c
@@ -590,6 +590,7 @@ static int ramoops_init_przs(const char *name,
     }

     zone_sz = mem_sz / *cnt;
+    zone_sz = ALIGN_DOWN(zone_sz, 2);
     if (!zone_sz) {
         dev_err(dev, "%s zone size == 0\n", name);
         goto fail;
--
2.25.1

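A standalone illustration of the alignment arithmetic described above; the base address, region size, and zone count below are made up for the example, and the ALIGN_DOWN macro here is a simplified stand-in for the kernel's.

/* With an odd zone size, every other zone starts at an odd address;
 * rounding the size down to an even value keeps the zones aligned. */
#include <stdio.h>

#define ALIGN_DOWN(x, a) ((x) - ((x) % (a)))   /* simplified illustration */

int main(void)
{
    unsigned long base = 0x80000000UL;   /* hypothetical ramoops base */
    unsigned long mem_sz = 0x38007UL;    /* hypothetical region size */
    unsigned long cnt = 7;               /* e.g. one zone per CPU, 7 CPUs */
    unsigned long zone_sz = mem_sz / cnt;          /* 0x8001: odd */

    printf("odd  zone_sz=0x%lx  zone1=0x%lx\n", zone_sz, base + zone_sz);

    zone_sz = ALIGN_DOWN(zone_sz, 2);              /* the fix: 0x8000, even */
    printf("even zone_sz=0x%lx  zone1=0x%lx\n", zone_sz, base + zone_sz);
    return 0;
}
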
[PATCH OLK-5.10] pstore/ram: Fix crash when setting number of cpus to an odd number
by Zeng Heng 25 Mar '24

From: Weichen Chen <weichen.chen(a)mediatek.com>

mainline inclusion
from mainline-v6.8-rc1
commit d49270a04623ce3c0afddbf3e984cb245aa48e9c
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I99JQV
CVE: CVE-2023-52619
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

When the number of cpu cores is adjusted to 7 or other odd numbers,
the zone size will become an odd number.
The address of the zone will become:
  addr of zone0 = BASE
  addr of zone1 = BASE + zone_size
  addr of zone2 = BASE + zone_size*2
  ...
The address of zone1/3/5/7 will be mapped to non-alignment va.
Eventually crashes will occur when accessing these va.

So, use ALIGN_DOWN() to make sure the zone size is even to avoid this bug.

Signed-off-by: Weichen Chen <weichen.chen(a)mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg(a)gmail.com>
Tested-by: "Guilherme G. Piccoli" <gpiccoli(a)igalia.com>
Link: https://lore.kernel.org/r/20230224023632.6840-1-weichen.chen@mediatek.com
Signed-off-by: Kees Cook <keescook(a)chromium.org>
Signed-off-by: Zeng Heng <zengheng4(a)huawei.com>
---
 fs/pstore/ram.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
index 98e579ce0d63..44fc3b396288 100644
--- a/fs/pstore/ram.c
+++ b/fs/pstore/ram.c
@@ -519,6 +519,7 @@ static int ramoops_init_przs(const char *name,
     }

     zone_sz = mem_sz / *cnt;
+    zone_sz = ALIGN_DOWN(zone_sz, 2);
     if (!zone_sz) {
         dev_err(dev, "%s zone size == 0\n", name);
         goto fail;
--
2.25.1

[openeuler:OLK-6.6 3892/6859] arch/loongarch/kernel/machine_kexec.c:97:12-25: WARNING: casting value returned by memory allocation function to (unsigned long *) is useless.
by kernel test robot 25 Mar '24

tree:     https://gitee.com/openeuler/kernel.git OLK-6.6
head:     1bf66e081b92a539d54b723c424bb38130f10c11
commit:   3706d0fb92bc3da98bec285ff17ac82405c2d26e [3892/6859] LoongArch: kexec: Add compatibility with old interfaces
config:   loongarch-randconfig-r063-20240325 (https://download.01.org/0day-ci/archive/20240325/202403251417.bkjC4PDS-lkp@…)
compiler: loongarch64-linux-gcc (GCC) 13.2.0

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202403251417.bkjC4PDS-lkp@intel.com/

cocci warnings: (new ones prefixed by >>)
>> arch/loongarch/kernel/machine_kexec.c:97:12-25: WARNING: casting value returned by memory allocation function to (unsigned long *) is useless.

vim +97 arch/loongarch/kernel/machine_kexec.c

    64
    65  int machine_kexec_prepare(struct kimage *kimage)
    66  {
    67      int i;
    68      char *bootloader = "kexec";
    69      void *cmdline_ptr = (void *)KEXEC_CMDLINE_ADDR;
    70
    71      kexec_image_info(kimage);
    72
    73      kimage->arch.efi_boot = fw_arg0;
    74      kimage->arch.systable_ptr = fw_arg2;
    75
    76      if (!fw_arg2)
    77          pr_err("Small fdt mode is not supported!\n");
    78
    79      /* Find the command line */
    80      for (i = 0; i < kimage->nr_segments; i++) {
    81          if (!strncmp(bootloader, (char __user *)kimage->segment[i].buf, strlen(bootloader))) {
    82              if (fw_arg0 < 2) {
    83                  /* New firmware */
    84                  if (!copy_from_user(cmdline_ptr, kimage->segment[i].buf, COMMAND_LINE_SIZE))
    85                      kimage->arch.cmdline_ptr = (unsigned long)cmdline_ptr;
    86              } else {
    87                  /* Old firmware */
    88                  int argc = 0;
    89                  long offt;
    90                  char *ptr, *str;
    91                  unsigned long *argv;
    92
    93                  /*
    94                   * convert command line string to array
    95                   * of parameters (as bootloader does).
    96                   */
  > 97                  argv = (unsigned long *)kmalloc(KEXEC_CMDLINE_SIZE, GFP_KERNEL);
    98                  argv[argc++] = (unsigned long)(KEXEC_CMDLINE_ADDR + KEXEC_CMDLINE_SIZE/2);
    99                  str = (char *)argv + KEXEC_CMDLINE_SIZE/2;
   100
   101                  if (copy_from_user(str, kimage->segment[i].buf, KEXEC_CMDLINE_SIZE/2))
   102                      return -EINVAL;
   103
   104                  ptr = strchr(str, ' ');
   105
   106                  while (ptr && (argc < MAX_ARGS)) {
   107                      *ptr = '\0';
   108                      if (ptr[1] != ' ') {
   109                          offt = (long)(ptr - str + 1);
   110                          argv[argc++] = (unsigned long)argv + KEXEC_CMDLINE_SIZE/2 + offt;
   111                      }
   112                      ptr = strchr(ptr + 1, ' ');
   113                  }
   114
   115                  kimage->arch.efi_boot = argc;
   116                  kimage->arch.cmdline_ptr = (unsigned long)argv;
   117                  break;
   118              }
   119              break;
   120          }
   121      }
   122
   123      if (!kimage->arch.cmdline_ptr) {
   124          pr_err("Command line not included in the provided image\n");
   125          return -EINVAL;
   126      }
   127
   128      /* kexec/kdump need a safe page to save reboot_code_buffer */
   129      kimage->control_code_page = virt_to_page((void *)KEXEC_CONTROL_CODE);
   130
   131      reboot_code_buffer = (unsigned long)page_address(kimage->control_code_page);
   132      memcpy((void *)reboot_code_buffer, relocate_new_kernel, relocate_new_kernel_size);
   133

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

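The coccinelle warning is about style rather than behaviour: kmalloc() returns void *, which converts implicitly to any object pointer type in C, so the cast on line 97 adds nothing. A small userspace analogue using malloc() (hypothetical names, not the kexec code) shows the flagged pattern and the cast-free equivalent.

/* Why the warning fires: void * needs no cast in C. Userspace analogue
 * with malloc(); the same reasoning applies to kmalloc(). */
#include <stdlib.h>

#define DEMO_CMDLINE_SIZE 4096   /* stand-in for KEXEC_CMDLINE_SIZE */

int main(void)
{
    /* Flagged style: the cast is redundant. */
    unsigned long *argv_cast = (unsigned long *)malloc(DEMO_CMDLINE_SIZE);

    /* Equivalent without the cast; also check the result before use. */
    unsigned long *argv = malloc(DEMO_CMDLINE_SIZE);

    if (!argv_cast || !argv) {
        free(argv_cast);
        free(argv);
        return 1;
    }

    free(argv_cast);
    free(argv);
    return 0;
}
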