
Kernel

kernel@openeuler.org

  • 17 participants
  • 18899 discussions
[PATCH OLK-6.10] ALSA: pcm: Fix race of buffer access at PCM OSS layer
by Luo Gengkun 23 Jun '25

From: Takashi Iwai <tiwai(a)suse.de>

stable inclusion
from stable-v6.6.93
commit 74d90875f3d43f3eff0e9861c4701418795d3455
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICGACK
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

commit 93a81ca0657758b607c3f4ba889ae806be9beb73 upstream.

The PCM OSS layer tries to clear the buffer with the silence data at
initialization (or reconfiguration) of a stream with the explicit call of
snd_pcm_format_set_silence() with runtime->dma_area. But this may lead to
a UAF because the accessed runtime->dma_area might be freed concurrently,
as it's performed outside the PCM ops.

To avoid it, move the code into the PCM core and perform it inside the
buffer access lock, so that it won't be changed during the operation.

Reported-by: syzbot+32d4647f551007595173(a)syzkaller.appspotmail.com
Closes: https://lore.kernel.org/68164d8e.050a0220.11da1b.0019.GAE@google.com
Cc: <stable(a)vger.kernel.org>
Link: https://patch.msgid.link/20250516080817.20068-1-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Luo Gengkun <luogengkun2(a)huawei.com>
---
 include/sound/pcm.h      |  2 ++
 sound/core/oss/pcm_oss.c |  3 +--
 sound/core/pcm_native.c  | 11 +++++++++++
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/include/sound/pcm.h b/include/sound/pcm.h
index 2a815373dac1..ed4449cbdf80 100644
--- a/include/sound/pcm.h
+++ b/include/sound/pcm.h
@@ -1427,6 +1427,8 @@ int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream, struct vm_area_s
 #define snd_pcm_lib_mmap_iomem NULL
 #endif
 
+void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime);
+
 /**
  * snd_pcm_limit_isa_dma_size - Get the max size fitting with ISA DMA transfer
  * @dma: DMA number
diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
index 728c211142d1..471de2d1b37a 100644
--- a/sound/core/oss/pcm_oss.c
+++ b/sound/core/oss/pcm_oss.c
@@ -1085,8 +1085,7 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
         runtime->oss.params = 0;
         runtime->oss.prepare = 1;
         runtime->oss.buffer_used = 0;
-        if (runtime->dma_area)
-                snd_pcm_format_set_silence(runtime->format, runtime->dma_area, bytes_to_samples(runtime, runtime->dma_bytes));
+        snd_pcm_runtime_buffer_set_silence(runtime);
 
         runtime->oss.period_frames = snd_pcm_alsa_frames(substream, oss_period_size);
 
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index e40de64ec85c..31fc20350fd9 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -703,6 +703,17 @@ static void snd_pcm_buffer_access_unlock(struct snd_pcm_runtime *runtime)
         atomic_inc(&runtime->buffer_accessing);
 }
 
+/* fill the PCM buffer with the current silence format; called from pcm_oss.c */
+void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime)
+{
+        snd_pcm_buffer_access_lock(runtime);
+        if (runtime->dma_area)
+                snd_pcm_format_set_silence(runtime->format, runtime->dma_area,
+                                           bytes_to_samples(runtime, runtime->dma_bytes));
+        snd_pcm_buffer_access_unlock(runtime);
+}
+EXPORT_SYMBOL_GPL(snd_pcm_runtime_buffer_set_silence);
+
 #if IS_ENABLED(CONFIG_SND_PCM_OSS)
 #define is_oss_stream(substream)        ((substream)->oss.oss)
 #else
-- 
2.34.1
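The fix is an instance of a general pattern: a buffer that can be torn down concurrently must only be touched under the same lock that the teardown path takes. Below is a minimal user-space sketch of that pattern, using pthread mutexes and hypothetical names rather than the kernel's snd_pcm_* primitives; build with cc -pthread.

/* Sketch of the "touch the buffer only under its access lock" pattern.
 * Hypothetical names; the kernel code uses snd_pcm_buffer_access_lock()
 * and snd_pcm_format_set_silence() instead of a pthread mutex and memset. */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct runtime {
        pthread_mutex_t buffer_lock;  /* serializes buffer clear vs. teardown */
        unsigned char *dma_area;      /* may be freed/reallocated at any time */
        size_t dma_bytes;
};

/* Safe: dma_area cannot be freed while buffer_lock is held. */
static void runtime_buffer_set_silence(struct runtime *rt)
{
        pthread_mutex_lock(&rt->buffer_lock);
        if (rt->dma_area)
                memset(rt->dma_area, 0, rt->dma_bytes);
        pthread_mutex_unlock(&rt->buffer_lock);
}

/* The concurrent path that made an unlocked clear a use-after-free. */
static void runtime_free_buffer(struct runtime *rt)
{
        pthread_mutex_lock(&rt->buffer_lock);
        free(rt->dma_area);
        rt->dma_area = NULL;
        rt->dma_bytes = 0;
        pthread_mutex_unlock(&rt->buffer_lock);
}

int main(void)
{
        struct runtime rt;

        pthread_mutex_init(&rt.buffer_lock, NULL);
        rt.dma_area = malloc(4096);
        rt.dma_bytes = 4096;

        runtime_buffer_set_silence(&rt);  /* clears under the lock */
        runtime_free_buffer(&rt);         /* the racing path in the report */
        runtime_buffer_set_silence(&rt);  /* now a no-op: dma_area is NULL */

        pthread_mutex_destroy(&rt.buffer_lock);
        return 0;
}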
[PATCH OLK-5.10] ALSA: pcm: Fix race of buffer access at PCM OSS layer
by Luo Gengkun 23 Jun '25

From: Takashi Iwai <tiwai(a)suse.de>

stable inclusion
from stable-v5.10.238
commit 8170d8ec4efd0be352c14cb61f374e30fb0c2a25
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/ICGACK
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

commit 93a81ca0657758b607c3f4ba889ae806be9beb73 upstream.

The PCM OSS layer tries to clear the buffer with the silence data at
initialization (or reconfiguration) of a stream with the explicit call of
snd_pcm_format_set_silence() with runtime->dma_area. But this may lead to
a UAF because the accessed runtime->dma_area might be freed concurrently,
as it's performed outside the PCM ops.

To avoid it, move the code into the PCM core and perform it inside the
buffer access lock, so that it won't be changed during the operation.

Reported-by: syzbot+32d4647f551007595173(a)syzkaller.appspotmail.com
Closes: https://lore.kernel.org/68164d8e.050a0220.11da1b.0019.GAE@google.com
Cc: <stable(a)vger.kernel.org>
Link: https://patch.msgid.link/20250516080817.20068-1-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Luo Gengkun <luogengkun2(a)huawei.com>
---
 include/sound/pcm.h      |  2 ++
 sound/core/oss/pcm_oss.c |  3 +--
 sound/core/pcm_native.c  | 11 +++++++++++
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/include/sound/pcm.h b/include/sound/pcm.h
index 6554a9f71c62..c573c6a7da12 100644
--- a/include/sound/pcm.h
+++ b/include/sound/pcm.h
@@ -1334,6 +1334,8 @@ int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream, struct vm_area_s
 #define snd_pcm_lib_mmap_iomem NULL
 #endif
 
+void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime);
+
 /**
  * snd_pcm_limit_isa_dma_size - Get the max size fitting with ISA DMA transfer
  * @dma: DMA number
diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
index de6f94bee50b..8eb5fef41dbe 100644
--- a/sound/core/oss/pcm_oss.c
+++ b/sound/core/oss/pcm_oss.c
@@ -1078,8 +1078,7 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
         runtime->oss.params = 0;
         runtime->oss.prepare = 1;
         runtime->oss.buffer_used = 0;
-        if (runtime->dma_area)
-                snd_pcm_format_set_silence(runtime->format, runtime->dma_area, bytes_to_samples(runtime, runtime->dma_bytes));
+        snd_pcm_runtime_buffer_set_silence(runtime);
 
         runtime->oss.period_frames = snd_pcm_alsa_frames(substream, oss_period_size);
 
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index 9425fcd30c4c..98bd6fe850d3 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -685,6 +685,17 @@ static void snd_pcm_buffer_access_unlock(struct snd_pcm_runtime *runtime)
         atomic_inc(&runtime->buffer_accessing);
 }
 
+/* fill the PCM buffer with the current silence format; called from pcm_oss.c */
+void snd_pcm_runtime_buffer_set_silence(struct snd_pcm_runtime *runtime)
+{
+        snd_pcm_buffer_access_lock(runtime);
+        if (runtime->dma_area)
+                snd_pcm_format_set_silence(runtime->format, runtime->dma_area,
+                                           bytes_to_samples(runtime, runtime->dma_bytes));
+        snd_pcm_buffer_access_unlock(runtime);
+}
+EXPORT_SYMBOL_GPL(snd_pcm_runtime_buffer_set_silence);
+
 #if IS_ENABLED(CONFIG_SND_PCM_OSS)
 #define is_oss_stream(substream)        ((substream)->oss.oss)
 #else
-- 
2.34.1
[openeuler:OLK-5.10 2978/2978] drivers/net/ethernet/huawei/hinic3/hinic3_irq.c:22:5: warning: no previous prototype for 'hinic3_poll'
by kernel test robot 23 Jun '25

tree:   https://gitee.com/openeuler/kernel.git OLK-5.10
head:   04dbc107e40be138cc70f7d15d50779f5538f412
commit: ebcedbe6ddb7bfcb756769994b3a796c771b43f5 [2978/2978] net/hinic3: Add Huawei Intelligent Network Card Driver: hinic3
config: x86_64-buildonly-randconfig-002-20250623 (https://download.01.org/0day-ci/archive/20250623/202506231925.HQ2gIERd-lkp@…)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250623/202506231925.HQ2gIERd-lkp@…)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506231925.HQ2gIERd-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/net/ethernet/huawei/hinic3/hinic3_irq.c:22:5: warning: no previous prototype for 'hinic3_poll' [-Wmissing-prototypes]
      22 | int hinic3_poll(struct napi_struct *napi, int budget)
         |     ^~~~~~~~~~~
--
>> drivers/net/ethernet/huawei/hinic3/hw/hinic3_lld.c:76:6: warning: no previous prototype for 'hinic3_uld_lock_init' [-Wmissing-prototypes]
      76 | void hinic3_uld_lock_init(void)
         |      ^~~~~~~~~~~~~~~~~~~~
--
>> drivers/net/ethernet/huawei/hinic3/hinic3_rx.c:740:5: warning: no previous prototype for 'hinic3_run_xdp' [-Wmissing-prototypes]
     740 | int hinic3_run_xdp(struct hinic3_rxq *rxq, u32 pkt_len)
         |     ^~~~~~~~~~~~~~
>> drivers/net/ethernet/huawei/hinic3/hinic3_rx.c:1156:5: warning: no previous prototype for 'rxq_restore' [-Wmissing-prototypes]
    1156 | int rxq_restore(struct hinic3_nic_dev *nic_dev, u16 q_id, u16 hw_ci)
         |     ^~~~~~~~~~~
   drivers/net/ethernet/huawei/hinic3/hinic3_rx.c:1254:6: warning: no previous prototype for 'rxq_is_normal' [-Wmissing-prototypes]
    1254 | bool rxq_is_normal(struct hinic3_rxq *rxq, struct rxq_check_info rxq_info)
         |      ^~~~~~~~~~~~~
--
   drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c:346:6: warning: no previous prototype for 'hinic3_flush_rx_flow_rule' [-Wmissing-prototypes]
     346 | void hinic3_flush_rx_flow_rule(struct hinic3_nic_dev *nic_dev)
         |      ^~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c:785:5: warning: no previous prototype for 'hinic3_ethtool_flow_replace' [-Wmissing-prototypes]
     785 | int hinic3_ethtool_flow_replace(struct hinic3_nic_dev *nic_dev,
         |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c:830:5: warning: no previous prototype for 'hinic3_ethtool_flow_remove' [-Wmissing-prototypes]
     830 | int hinic3_ethtool_flow_remove(struct hinic3_nic_dev *nic_dev, u32 location)
         |     ^~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c:852:5: warning: no previous prototype for 'hinic3_ethtool_get_flow' [-Wmissing-prototypes]
     852 | int hinic3_ethtool_get_flow(const struct hinic3_nic_dev *nic_dev,
         |     ^~~~~~~~~~~~~~~~~~~~~~~
   drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c:875:5: warning: no previous prototype for 'hinic3_ethtool_get_all_flows' [-Wmissing-prototypes]
     875 | int hinic3_ethtool_get_all_flows(const struct hinic3_nic_dev *nic_dev,
         |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/net/ethernet/huawei/hinic3/hinic3_ntuple.c:893:6: warning: no previous prototype for 'hinic3_validate_channel_setting_in_ntuple' [-Wmissing-prototypes]
     893 | bool hinic3_validate_channel_setting_in_ntuple(const struct hinic3_nic_dev *nic_dev, u32 q_num)
         |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--
>> drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c:19:5: warning: no previous prototype for 'hinic3_dbg_get_wqe_info' [-Wmissing-prototypes]
      19 | int hinic3_dbg_get_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
         |     ^~~~~~~~~~~~~~~~~~~~~~~
   drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c:60:5: warning: no previous prototype for 'hinic3_dbg_get_sq_info' [-Wmissing-prototypes]
      60 | int hinic3_dbg_get_sq_info(void *hwdev, u16 q_id, struct nic_sq_info *sq_info,
         |     ^~~~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/huawei/hinic3/hinic3_nic_dbg.c:103:5: warning: no previous prototype for 'hinic3_dbg_get_rq_info' [-Wmissing-prototypes]
     103 | int hinic3_dbg_get_rq_info(void *hwdev, u16 q_id, struct nic_rq_info *rq_info,
         |     ^~~~~~~~~~~~~~~~~~~~~~

vim +/hinic3_poll +22 drivers/net/ethernet/huawei/hinic3/hinic3_irq.c

    21  
  > 22  int hinic3_poll(struct napi_struct *napi, int budget)
    23  {
    24          int tx_pkts, rx_pkts;
    25          struct hinic3_irq *irq_cfg =
    26                  container_of(napi, struct hinic3_irq, napi);
    27          struct hinic3_nic_dev *nic_dev = netdev_priv(irq_cfg->netdev);
    28  
    29          rx_pkts = hinic3_rx_poll(irq_cfg->rxq, budget);
    30  
    31          tx_pkts = hinic3_tx_poll(irq_cfg->txq, budget);
    32          if (tx_pkts >= budget || rx_pkts >= budget)
    33                  return budget;
    34  
    35          napi_complete(napi);
    36  
    37          hinic3_set_msix_state(nic_dev->hwdev, irq_cfg->msix_entry_idx,
    38                                HINIC3_MSIX_ENABLE);
    39  
    40          return max(tx_pkts, rx_pkts);
    41  }
    42  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
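Every warning above is the same W=1 diagnostic: a function with external linkage is defined before any declaration of it has been seen. A small stand-alone sketch of the warning and the two usual fixes, with hypothetical names that are not part of the hinic3 driver:

/* demo.c -- compile with: gcc -Wall -Wmissing-prototypes -o demo demo.c
 * Hypothetical example, not hinic3 code. */
#include <stdio.h>

/* Warns: "no previous prototype for 'poll_once'" -- external linkage,
 * but no prior declaration. Fix 1 is to declare it in a shared header
 * that both the definition and all callers include:
 *     int poll_once(int budget);
 */
int poll_once(int budget)
{
        return budget > 0 ? budget - 1 : 0;
}

/* Fix 2: if the function is only used in this file, make it static;
 * internal linkage needs no separate prototype. */
static int poll_twice(int budget)
{
        return poll_once(poll_once(budget));
}

int main(void)
{
        printf("%d\n", poll_twice(3));
        return 0;
}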
[PATCH OLK-6.6 v4 0/2] mm/mem_sampling: fix some mem sampling issues
by Ze Zuo 23 Jun '25

This patch set fixes issues in mem_sampling:

- Prevent mem_sampling from enabling if SPE initialization fails.
- Fix inaccurate sampling during NUMA balancing and DAMON.

These changes improve memory sampling accuracy and stability on ARM systems.

changes since v3:
-- fix bugzilla issue.

Ze Zuo (2):
  mm/mem_sampling: Prevent mem_sampling from being enabled if SPE init failed
  mm/mem_sampling: Fix inaccurate sampling for NUMA balancing and DAMON

 drivers/arm/mm_monitor/mm_spe.c |  6 +++---
 include/linux/mem_sampling.h    |  7 ++++---
 mm/mem_sampling.c               | 11 +++++++++--
 3 files changed, 16 insertions(+), 8 deletions(-)

-- 
2.25.1
[PATCH OLK-6.6 v2] mm/mem_sampling: add trace event for spe based damon record
by Ze Zuo 23 Jun '25

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/ICH7AS
CVE: NA

--------------------------------

This patch adds a new DAMON access tracking mechanism using ARM Statistical
Profiling Extension (SPE). By parsing memory access samples from SPE, DAMON
can infer access patterns with low overhead and higher precision on
supported ARM platforms.

Signed-off-by: Ze Zuo <zuoze1(a)huawei.com>
---
changes since v1:
-- fix bugzilla issues.

 include/trace/events/kmem.h | 21 +++++++++++++++++++++
 mm/mem_sampling.c           |  1 +
 2 files changed, 22 insertions(+)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 28b9679c474d..3e78e6bd6e18 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -523,6 +523,27 @@ TRACE_EVENT(mm_mem_sampling_access_record,
                   __entry->cpuid, __entry->pid)
 );
 #endif /* CONFIG_NUMABALANCING_MEM_SAMPLING */
+
+#ifdef CONFIG_DAMON_MEM_SAMPLING
+TRACE_EVENT(mm_mem_sampling_damon_record,
+
+        TP_PROTO(u64 vaddr, int pid),
+
+        TP_ARGS(vaddr, pid),
+
+        TP_STRUCT__entry(
+                __field(u64, vaddr)
+                __field(int, pid)
+        ),
+
+        TP_fast_assign(
+                __entry->vaddr = vaddr;
+                __entry->pid = pid;
+        ),
+
+        TP_printk("vaddr=%llx pid=%d", __entry->vaddr, __entry->pid)
+);
+#endif /* CONFIG_DAMON_MEM_SAMPLING */
 #endif /* _TRACE_KMEM_H */
 
 /* This part must be outside protection */
diff --git a/mm/mem_sampling.c b/mm/mem_sampling.c
index 9ee68e15d1f6..8d79e83e64f0 100644
--- a/mm/mem_sampling.c
+++ b/mm/mem_sampling.c
@@ -316,6 +316,7 @@ static void damon_mem_sampling_record_cb(struct mem_sampling_record *record)
         mmput(mm);
 
         domon_record.vaddr = record->virt_addr;
+        trace_mm_mem_sampling_damon_record(record->virt_addr, (pid_t)record->context_id);
 
         /* only the proc under monitor now has damon_fifo */
         if (damon_fifo) {
-- 
2.25.1
[PATCH OLK-6.6 v3 0/2] mm/mem_sampling: fix some mem sampling issues
by Ze Zuo 23 Jun '25

This patch set fixes issues in mem_sampling:

- Prevent mem_sampling from enabling if SPE initialization fails.
- Fix inaccurate sampling during NUMA balancing and DAMON.

These changes improve memory sampling accuracy and stability on ARM systems.

changes since v2:
-- remove the feature patch.
-- fix bugzilla number.

Ze Zuo (2):
  mm/mem_sampling: Prevent mem_sampling from being enabled if SPE init failed
  mm/mem_sampling: Fix inaccurate sampling for NUMA balancing and DAMON

 drivers/arm/mm_monitor/mm_spe.c |  6 +++---
 include/linux/mem_sampling.h    |  7 ++++---
 mm/mem_sampling.c               | 11 +++++++++--
 3 files changed, 16 insertions(+), 8 deletions(-)

-- 
2.25.1
[PATCH] mm/mem_sampling: add trace event for spe based damon record
by Ze Zuo 23 Jun '25

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/ICH7AS
CVE: NA

--------------------------------

This patch adds a new DAMON access tracking mechanism using ARM Statistical
Profiling Extension (SPE). By parsing memory access samples from SPE, DAMON
can infer access patterns with low overhead and higher precision on
supported ARM platforms.

Signed-off-by: Ze Zuo <zuoze1(a)huawei.com>
---
 include/trace/events/kmem.h | 21 +++++++++++++++++++++
 mm/mem_sampling.c           |  1 +
 2 files changed, 22 insertions(+)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 28b9679c474d..3e78e6bd6e18 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -523,6 +523,27 @@ TRACE_EVENT(mm_mem_sampling_access_record,
                   __entry->cpuid, __entry->pid)
 );
 #endif /* CONFIG_NUMABALANCING_MEM_SAMPLING */
+
+#ifdef CONFIG_DAMON_MEM_SAMPLING
+TRACE_EVENT(mm_mem_sampling_damon_record,
+
+        TP_PROTO(u64 vaddr, int pid),
+
+        TP_ARGS(vaddr, pid),
+
+        TP_STRUCT__entry(
+                __field(u64, vaddr)
+                __field(int, pid)
+        ),
+
+        TP_fast_assign(
+                __entry->vaddr = vaddr;
+                __entry->pid = pid;
+        ),
+
+        TP_printk("vaddr=%llx pid=%d", __entry->vaddr, __entry->pid)
+);
+#endif /* CONFIG_DAMON_MEM_SAMPLING */
 #endif /* _TRACE_KMEM_H */
 
 /* This part must be outside protection */
diff --git a/mm/mem_sampling.c b/mm/mem_sampling.c
index 9ee68e15d1f6..8d79e83e64f0 100644
--- a/mm/mem_sampling.c
+++ b/mm/mem_sampling.c
@@ -316,6 +316,7 @@ static void damon_mem_sampling_record_cb(struct mem_sampling_record *record)
         mmput(mm);
 
         domon_record.vaddr = record->virt_addr;
+        trace_mm_mem_sampling_damon_record(record->virt_addr, (pid_t)record->context_id);
 
         /* only the proc under monitor now has damon_fifo */
         if (damon_fifo) {
-- 
2.25.1
[PATCH OLK-6.6] drm/radeon: fix uninitialized size issue in radeon_vce_cs_parse()
by Zicheng Qu 23 Jun '25

From: Nikita Zhandarovich <n.zhandarovich(a)fintech.ru>

stable inclusion
from stable-v6.6.85
commit 3ce08215cad55c10a6eeeb33d3583b6cfffe3ab8
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IBYOFP
CVE: CVE-2025-21996
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

commit dd8689b52a24807c2d5ce0a17cb26dc87f75235c upstream.

On the off chance that the command stream passed from userspace via an
ioctl() call to radeon_vce_cs_parse() is weirdly crafted and the first
command to execute is an encode (case 0x03000001), the function in
question will attempt to call radeon_vce_cs_reloc() with a size argument
that has not been properly initialized. Specifically, 'size' will point to
the 'tmp' variable before the latter had a chance to be assigned any value.

Play it safe and init 'tmp' with 0, thus ensuring that
radeon_vce_cs_reloc() will catch an early error in cases like these.

Found by Linux Verification Center (linuxtesting.org) with static analysis
tool SVACE.

Fixes: 2fc5703abda2 ("drm/radeon: check VCE relocation buffer range v3")
Signed-off-by: Nikita Zhandarovich <n.zhandarovich(a)fintech.ru>
Signed-off-by: Alex Deucher <alexander.deucher(a)amd.com>
(cherry picked from commit 2d52de55f9ee7aaee0e09ac443f77855989c6b68)
Cc: stable(a)vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Zicheng Qu <quzicheng(a)huawei.com>
---
 drivers/gpu/drm/radeon/radeon_vce.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_vce.c b/drivers/gpu/drm/radeon/radeon_vce.c
index d84b780e318c..2eb1636c560e 100644
--- a/drivers/gpu/drm/radeon/radeon_vce.c
+++ b/drivers/gpu/drm/radeon/radeon_vce.c
@@ -561,7 +561,7 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
 {
         int session_idx = -1;
         bool destroyed = false, created = false, allocated = false;
-        uint32_t tmp, handle = 0;
+        uint32_t tmp = 0, handle = 0;
         uint32_t *size = &tmp;
         int i, r = 0;
 
-- 
2.34.1
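The underlying hazard is easy to reproduce outside the kernel: a pointer to an uninitialized local is handed to a callee that reads through it before anything has been stored. A hypothetical sketch (the names are not from the radeon parser) of why the "= 0" matters:

/* Hypothetical sketch of the uninitialized-through-a-pointer hazard. */
#include <stdint.h>
#include <stdio.h>

/* The callee validates *size; without the caller's zero-initialization
 * it would be checking stack garbage. */
static int check_reloc(const uint32_t *size)
{
        if (*size == 0)
                return -1;      /* early error for the "never assigned" case */
        return 0;
}

static int parse(int first_cmd_is_encode)
{
        uint32_t tmp = 0;       /* the fix: without "= 0", tmp is indeterminate
                                 * when the encode command runs first */
        uint32_t *size = &tmp;

        if (first_cmd_is_encode)
                return check_reloc(size);

        tmp = 4096;             /* normally an earlier command sets the size */
        return check_reloc(size);
}

int main(void)
{
        printf("%d\n", parse(1));       /* -1: caught instead of reading garbage */
        return 0;
}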
[PATCH OLK-5.10] cachefiles: Fix the potential ABBA deadlock issue
by Zizhi Wo 23 Jun '25

From: Zizhi Wo <wozizhi(a)huaweicloud.com>

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/ICFXTV
CVE: NA

--------------------------------

The current calling process may trigger a potential ABBA deadlock. The
calling stack is as follows:

do_mkdirat
  user_path_create
    filename_create
      mnt_want_write                                            --lock a
      inode_lock_nested(path->dentry->d_inode, I_MUTEX_PARENT)  --lock b

vfs_write
  cachefiles_daemon_write
    cachefiles_daemon_cull
      cachefiles_cull
        cachefiles_check_active
          inode_lock_nested(d_inode(dir), I_MUTEX_PARENT)       --lock b
        cachefiles_remove_object_xattr
          mnt_want_write                                        --lock a
          vfs_removexattr

This is because there is a problem with the lock order: mnt_want_write()
should be called first, and then inode_lock_nested(). Fix the lock order,
and delete the redundant code in cachefiles_check_old_object_xattr(),
because that part of the process cannot be triggered.

Fixes: 2a0beff2d223 ("cachefiles: Fix non-taking of sb_writers around set/removexattr")
Signed-off-by: Zizhi Wo <wozizhi(a)huaweicloud.com>
---
 fs/cachefiles/namei.c |  9 ++++++++-
 fs/cachefiles/xattr.c | 20 ++++----------------
 2 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
index 6eeef666c609..6fc8c504ae1b 100644
--- a/fs/cachefiles/namei.c
+++ b/fs/cachefiles/namei.c
@@ -990,9 +990,15 @@ int cachefiles_cull(struct cachefiles_cache *cache, struct dentry *dir,
 
         _enter(",%pd/,%s", dir, filename);
 
+        ret = mnt_want_write(cache->mnt);
+        if (ret < 0)
+                return ret;
+
         victim = cachefiles_check_active(cache, dir, filename);
-        if (IS_ERR(victim))
+        if (IS_ERR(victim)) {
+                mnt_drop_write(cache->mnt);
                 return PTR_ERR(victim);
+        }
 
         _debug("victim -> %p %s",
                victim, d_backing_inode(victim) ? "positive" : "negative");
@@ -1003,6 +1009,7 @@ int cachefiles_cull(struct cachefiles_cache *cache, struct dentry *dir,
         _debug("victim is cullable");
 
         ret = cachefiles_remove_object_xattr(cache, victim);
+        mnt_drop_write(cache->mnt);
         if (ret < 0)
                 goto error_unlock;
 
diff --git a/fs/cachefiles/xattr.c b/fs/cachefiles/xattr.c
index bac55fc7359e..646a8002d0cf 100644
--- a/fs/cachefiles/xattr.c
+++ b/fs/cachefiles/xattr.c
@@ -243,7 +243,6 @@ int cachefiles_check_old_object_xattr(struct cachefiles_object *object,
                                       struct cachefiles_xattr *auxdata)
 {
         struct cachefiles_xattr *auxbuf;
-        struct cachefiles_cache *cache;
         unsigned int len = sizeof(struct cachefiles_xattr) + 512;
         struct dentry *dentry = object->dentry;
         int ret;
@@ -301,17 +300,10 @@ int cachefiles_check_old_object_xattr(struct cachefiles_object *object,
                 BUG();
         }
 
-        cache = container_of(object->fscache.cache,
-                             struct cachefiles_cache, cache);
-
         /* update the current label */
-        ret = mnt_want_write(cache->mnt);
-        if (ret == 0) {
-                ret = vfs_setxattr(dentry, cachefiles_xattr_cache,
-                                   &auxdata->type, auxdata->len,
-                                   XATTR_REPLACE);
-                mnt_drop_write(cache->mnt);
-        }
+        ret = vfs_setxattr(dentry, cachefiles_xattr_cache,
+                           &auxdata->type, auxdata->len,
+                           XATTR_REPLACE);
         if (ret < 0) {
                 cachefiles_io_error_obj(object,
                                         "Can't update xattr on %lu"
@@ -393,11 +385,7 @@ int cachefiles_remove_object_xattr(struct cachefiles_cache *cache,
 {
         int ret;
 
-        ret = mnt_want_write(cache->mnt);
-        if (ret == 0) {
-                ret = vfs_removexattr(dentry, cachefiles_xattr_cache);
-                mnt_drop_write(cache->mnt);
-        }
+        ret = vfs_removexattr(dentry, cachefiles_xattr_cache);
         if (ret < 0) {
                 if (ret == -ENOENT || ret == -ENODATA)
                         ret = 0;
-- 
2.39.2
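The bug class is the textbook ABBA deadlock: two paths take the same pair of locks in opposite order. A minimal user-space sketch with pthread mutexes and hypothetical names (standing in for mnt_want_write() and inode_lock_nested()) shows the shape of the fix, which is to impose one acquisition order on every path; build with cc -pthread.

/* ABBA deadlock in miniature; hypothetical locks, not the kernel's. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* "mnt_want_write" */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* "inode_lock"     */

/* mkdir-like path: always A then B. */
static void *path_mkdir(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);
        puts("mkdir path holds A then B");
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
}

/* cull-like path: before the fix it took B then A, which can deadlock
 * against the path above; after the fix it also takes A first. */
static void *path_cull(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock_a);    /* the fix: acquire A before B */
        pthread_mutex_lock(&lock_b);
        puts("cull path holds A then B");
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, path_mkdir, NULL);
        pthread_create(&t2, NULL, path_cull, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}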
[PATCH OLK-5.10] bpf: Fix prog_array_map_poke_run map poke update
by Tengda Wu 23 Jun '25

From: Jiri Olsa <jolsa(a)kernel.org>

mainline inclusion
from mainline-v6.7-rc5
commit 4b7de801606e504e69689df71475d27e35336fb3
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/ICH20Y
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

Lee pointed out an issue found by syzkaller [0]: hitting a BUG in the prog
array map poke update in the prog_array_map_poke_run function due to an
error value returned from the bpf_arch_text_poke function.

There's a race window where bpf_arch_text_poke can fail due to missing bpf
program kallsym symbols, which is accounted for with the check for -EINVAL
in that BUG_ON call.

The problem is that in such a case we won't update the tail call jump and
cause an imbalance for the next tail call update check, which will fail
with -EBUSY in bpf_arch_text_poke.

I'm hitting the following race during program load:

  CPU 0                              CPU 1

  bpf_prog_load
    bpf_check
      do_misc_fixups
        prog_array_map_poke_track
                                     map_update_elem
                                       bpf_fd_array_map_update_elem
                                         prog_array_map_poke_run

                                         bpf_arch_text_poke returns -EINVAL

    bpf_prog_kallsyms_add

After bpf_arch_text_poke (CPU 1) fails to update the tail call jump, the
next poke update fails on the expected jump instruction check in
bpf_arch_text_poke with -EBUSY and triggers the BUG_ON in
prog_array_map_poke_run.

A similar race exists on program unload.

Fix this by moving the update to the bpf_arch_poke_desc_update function,
which makes sure we call __bpf_arch_text_poke, which skips the bpf address
check.

Each architecture has a slightly different approach wrt looking up the bpf
address in bpf_arch_text_poke, so instead of splitting the function or
adding a new 'checkip' argument as in the previous version, it seems best
to move the whole map_poke_run update into arch-specific code.

[0] https://syzkaller.appspot.com/bug?extid=97a4fe20470e9bc30810

Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Reported-by: syzbot+97a4fe20470e9bc30810(a)syzkaller.appspotmail.com
Signed-off-by: Jiri Olsa <jolsa(a)kernel.org>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Acked-by: Yonghong Song <yonghong.song(a)linux.dev>
Cc: Lee Jones <lee(a)kernel.org>
Cc: Maciej Fijalkowski <maciej.fijalkowski(a)intel.com>
Link: https://lore.kernel.org/bpf/20231206083041.1306660-2-jolsa@kernel.org

Conflicts:
  arch/x86/net/bpf_jit_comp.c
  include/linux/bpf.h
[There are some context conflicts, and a clean-up patch 1022a5498f6f
("bpf, x86_64: Use bpf_jit_binary_pack_alloc") that does not need to be
merged]

Signed-off-by: Tengda Wu <wutengda2(a)huawei.com>
---
 arch/x86/net/bpf_jit_comp.c | 46 +++++++++++++++++++++++++++++
 include/linux/bpf.h         |  3 ++
 kernel/bpf/arraymap.c       | 58 +++++++------------------------------
 3 files changed, 59 insertions(+), 48 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 987b02b857bc..dc6f077e2a2f 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2169,3 +2169,49 @@ u64 bpf_arch_uaddress_limit(void)
 {
         return 0;
 }
+
+void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
+                               struct bpf_prog *new, struct bpf_prog *old)
+{
+        u8 *old_addr, *new_addr, *old_bypass_addr;
+        int ret;
+
+        old_bypass_addr = old ? NULL : poke->bypass_addr;
+        old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
+        new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
+
+        /*
+         * On program loading or teardown, the program's kallsym entry
+         * might not be in place, so we use __bpf_arch_text_poke to skip
+         * the kallsyms check.
+         */
+        if (new) {
+                ret = __bpf_arch_text_poke(poke->tailcall_target,
+                                           BPF_MOD_JUMP,
+                                           old_addr, new_addr, true);
+                BUG_ON(ret < 0);
+                if (!old) {
+                        ret = __bpf_arch_text_poke(poke->tailcall_bypass,
+                                                   BPF_MOD_JUMP,
+                                                   poke->bypass_addr,
+                                                   NULL, true);
+                        BUG_ON(ret < 0);
+                }
+        } else {
+                ret = __bpf_arch_text_poke(poke->tailcall_bypass,
+                                           BPF_MOD_JUMP,
+                                           old_bypass_addr,
+                                           poke->bypass_addr, true);
+                BUG_ON(ret < 0);
+                /* let other CPUs finish the execution of program
+                 * so that it will not possible to expose them
+                 * to invalid nop, stack unwind, nop state
+                 */
+                if (!ret)
+                        synchronize_rcu();
+                ret = __bpf_arch_text_poke(poke->tailcall_target,
+                                           BPF_MOD_JUMP,
+                                           old_addr, NULL, true);
+                BUG_ON(ret < 0);
+        }
+}
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f2e1633c5c82..f0db30991f68 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2144,6 +2144,9 @@ enum bpf_text_poke_type {
 int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t, void *addr1,
                        void *addr2);
 
+void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
+                               struct bpf_prog *new, struct bpf_prog *old);
+
 struct btf_id_set;
 bool btf_id_set_contains(const struct btf_id_set *set, u32 id);
 
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index b5b8b57dc212..dfabf842f830 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -916,11 +916,16 @@ static void prog_array_map_poke_untrack(struct bpf_map *map,
         mutex_unlock(&aux->poke_mutex);
 }
 
+void __weak bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
+                                      struct bpf_prog *new, struct bpf_prog *old)
+{
+        WARN_ON_ONCE(1);
+}
+
 static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
                                     struct bpf_prog *old,
                                     struct bpf_prog *new)
 {
-        u8 *old_addr, *new_addr, *old_bypass_addr;
         struct prog_poke_elem *elem;
         struct bpf_array_aux *aux;
 
@@ -929,7 +934,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
         list_for_each_entry(elem, &aux->poke_progs, list) {
                 struct bpf_jit_poke_descriptor *poke;
-                int i, ret;
+                int i;
 
                 for (i = 0; i < elem->aux->size_poke_tab; i++) {
                         poke = &elem->aux->poke_tab[i];
 
@@ -948,21 +953,10 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
                          *    activated, so tail call updates can arrive from here
                          *    while JIT is still finishing its final fixup for
                          *    non-activated poke entries.
-                         * 3) On program teardown, the program's kallsym entry gets
-                         *    removed out of RCU callback, but we can only untrack
-                         *    from sleepable context, therefore bpf_arch_text_poke()
-                         *    might not see that this is in BPF text section and
-                         *    bails out with -EINVAL. As these are unreachable since
-                         *    RCU grace period already passed, we simply skip them.
-                         * 4) Also programs reaching refcount of zero while patching
+                         * 3) Also programs reaching refcount of zero while patching
                          *    is in progress is okay since we're protected under
                          *    poke_mutex and untrack the programs before the JIT
-                         *    buffer is freed. When we're still in the middle of
-                         *    patching and suddenly kallsyms entry of the program
-                         *    gets evicted, we just skip the rest which is fine due
-                         *    to point 3).
-                         * 5) Any other error happening below from bpf_arch_text_poke()
-                         *    is a unexpected bug.
+                         *    buffer is freed.
                          */
                         if (!READ_ONCE(poke->tailcall_target_stable))
                                 continue;
@@ -972,39 +966,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
                             poke->tail_call.key != key)
                                 continue;
 
-                        old_bypass_addr = old ? NULL : poke->bypass_addr;
-                        old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
-                        new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
-
-                        if (new) {
-                                ret = bpf_arch_text_poke(poke->tailcall_target,
-                                                         BPF_MOD_JUMP,
-                                                         old_addr, new_addr);
-                                BUG_ON(ret < 0 && ret != -EINVAL);
-                                if (!old) {
-                                        ret = bpf_arch_text_poke(poke->tailcall_bypass,
-                                                                 BPF_MOD_JUMP,
-                                                                 poke->bypass_addr,
-                                                                 NULL);
-                                        BUG_ON(ret < 0 && ret != -EINVAL);
-                                }
-                        } else {
-                                ret = bpf_arch_text_poke(poke->tailcall_bypass,
-                                                         BPF_MOD_JUMP,
-                                                         old_bypass_addr,
-                                                         poke->bypass_addr);
-                                BUG_ON(ret < 0 && ret != -EINVAL);
-                                /* let other CPUs finish the execution of program
-                                 * so that it will not possible to expose them
-                                 * to invalid nop, stack unwind, nop state
-                                 */
-                                if (!ret)
-                                        synchronize_rcu();
-                                ret = bpf_arch_text_poke(poke->tailcall_target,
-                                                         BPF_MOD_JUMP,
-                                                         old_addr, NULL);
-                                BUG_ON(ret < 0 && ret != -EINVAL);
-                        }
+                        bpf_arch_poke_desc_update(poke, new, old);
                 }
         }
 }
-- 
2.34.1
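The backport keeps upstream's structure: a __weak fallback of bpf_arch_poke_desc_update() lives in kernel/bpf/arraymap.c, and the x86 JIT supplies the real definition, which the linker prefers over the weak one. A small stand-alone sketch of that weak-symbol override pattern, with hypothetical file and function names (build: gcc -o demo generic.c arch.c main.c; leaving arch.c out of the link falls back to the weak default):

/* ---- generic.c: weak default used when no architecture override exists ---- */
#include <stdio.h>

void __attribute__((weak)) poke_desc_update(int key)
{
        printf("generic fallback, key=%d (no arch override linked)\n", key);
}

/* ---- arch.c: strong definition; the linker picks it over the weak one ---- */
#include <stdio.h>

void poke_desc_update(int key)
{
        printf("arch-specific update, key=%d\n", key);
}

/* ---- main.c: common code calls one name, unaware of which copy it gets ---- */
void poke_desc_update(int key);

int main(void)
{
        poke_desc_update(42);   /* prints the arch version when arch.c is linked */
        return 0;
}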