Kernel

kernel@openeuler.org

  • 56 participants
  • 22224 discussions
[PATCH OLK-6.6 v2] bpf: account for current allocated stack depth in widen_imprecise_scalars()
by Pu Lehui, 23 Dec '25
From: Eduard Zingerman <eddyz87(a)gmail.com>

stable inclusion
from stable-v6.6.117
commit 64b12dca2b0abcb5fc0542887d18b926ea5cf711
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/11581
CVE: CVE-2025-68208

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

[ Upstream commit b0c8e6d3d866b6a7f73877f71968dbffd27b7785 ]

The usage pattern for widen_imprecise_scalars() looks as follows:

  prev_st = find_prev_entry(env, ...);
  queued_st = push_stack(...);
  widen_imprecise_scalars(env, prev_st, queued_st);

Where prev_st is an ancestor of the queued_st in the explored states
tree. This ancestor is not guaranteed to have the same allocated stack
depth as queued_st. E.g. in the following case:

  def main():
      for i in 1..2:
          foo(i)              // same callsite, different param

  def foo(i):
      if i == 1:
          use 128 bytes of stack
      iterator based loop

Here, for the second 'foo' call, prev_st->allocated_stack is 128, while
queued_st->allocated_stack is much smaller. widen_imprecise_scalars()
needs to take this into account and avoid accessing
bpf_verifier_state->frame[*]->stack out of bounds.

Fixes: 2793a8b015f7 ("bpf: exact states comparison for iterator convergence checks")
Reported-by: Emil Tsalapatis <emil(a)etsalapatis.com>
Signed-off-by: Eduard Zingerman <eddyz87(a)gmail.com>
Link: https://lore.kernel.org/r/20251114025730.772723-1-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
 kernel/bpf/verifier.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 26086c893dfb..ce1f5a9bdd9a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7918,7 +7918,7 @@ static int widen_imprecise_scalars(struct bpf_verifier_env *env,
                                    struct bpf_verifier_state *cur)
 {
     struct bpf_func_state *fold, *fcur;
-    int i, fr;
+    int i, fr, num_slots;
 
     reset_idmap_scratch(env);
     for (fr = old->curframe; fr >= 0; fr--) {
@@ -7931,7 +7931,9 @@ static int widen_imprecise_scalars(struct bpf_verifier_env *env,
                      &fcur->regs[i],
                      &env->idmap_scratch);
 
-        for (i = 0; i < fold->allocated_stack / BPF_REG_SIZE; i++) {
+        num_slots = min(fold->allocated_stack / BPF_REG_SIZE,
+                fcur->allocated_stack / BPF_REG_SIZE);
+        for (i = 0; i < num_slots; i++) {
             if (!is_spilled_reg(&fold->stack[i]) ||
                 !is_spilled_reg(&fcur->stack[i]))
                 continue;
-- 
2.34.1
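The crux of the fix is bounding the stack-slot walk by the smaller of the
two frames. A minimal standalone sketch of that clamp (plain C with
illustrative names, not the verifier code itself):

  #include <stdio.h>

  #define BPF_REG_SIZE 8

  struct frame {
      int allocated_stack;   /* bytes, multiple of BPF_REG_SIZE */
  };

  static int min_slots(const struct frame *a, const struct frame *b)
  {
      int sa = a->allocated_stack / BPF_REG_SIZE;
      int sb = b->allocated_stack / BPF_REG_SIZE;

      return sa < sb ? sa : sb;
  }

  int main(void)
  {
      struct frame old = { .allocated_stack = 128 };  /* ancestor state */
      struct frame cur = { .allocated_stack = 32 };   /* queued state   */

      /* iterating old's 16 slots would overrun cur's 4-slot stack;
       * the clamped bound visits only slots both frames have */
      printf("slots compared: %d\n", min_slots(&old, &cur));
      return 0;
  }

With the ancestor at 128 bytes and the queued state at 32, the clamped
walk visits 4 slots instead of 16, which is exactly what prevents the
out-of-bounds access on the shorter stack.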
[PATCH OLK-6.6 v2] bpf: Add bpf_prog_run_data_pointers()
by Pu Lehui, 23 Dec '25
From: Eric Dumazet <edumazet(a)google.com>

stable inclusion
from stable-v6.6.117
commit baa61dcaa50b7141048c8d2aede7fe9ed8f21d11
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/11594
CVE: CVE-2025-68200

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

[ Upstream commit 4ef92743625818932b9c320152b58274c05e5053 ]

syzbot found that cls_bpf_classify() is able to change
tc_skb_cb(skb)->drop_reason, triggering a warning in
sk_skb_reason_drop().

WARNING: CPU: 0 PID: 5965 at net/core/skbuff.c:1192 __sk_skb_reason_drop net/core/skbuff.c:1189 [inline]
WARNING: CPU: 0 PID: 5965 at net/core/skbuff.c:1192 sk_skb_reason_drop+0x76/0x170 net/core/skbuff.c:1214

struct tc_skb_cb has been added in commit ec624fe740b4 ("net/sched:
Extend qdisc control block with tc control block"), which added a wrong
interaction with db58ba459202 ("bpf: wire in data and data_end for
cls_act_bpf"). drop_reason was added later.

Add bpf_prog_run_data_pointers() helper to save/restore the net_sched
storage colliding with BPF data_meta/data_end.

Fixes: ec624fe740b4 ("net/sched: Extend qdisc control block with tc control block")
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Closes: https://lore.kernel.org/netdev/6913437c.a70a0220.22f260.013b.GAE@google.com/
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Signed-off-by: Martin KaFai Lau <martin.lau(a)kernel.org>
Reviewed-by: Victor Nogueira <victor(a)mojatatu.com>
Acked-by: Jamal Hadi Salim <jhs(a)mojatatu.com>
Link: https://patch.msgid.link/20251112125516.1563021-1-edumazet@google.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>

Conflicts:
  net/sched/act_bpf.c
  net/sched/cls_bpf.c
[ctx conflicts]

Signed-off-by: Pu Lehui <pulehui(a)huawei.com>
---
 include/linux/filter.h | 20 ++++++++++++++++++++
 net/sched/act_bpf.c    |  6 ++----
 net/sched/cls_bpf.c    |  6 ++----
 3 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index a7c0caa8b7ad..4ae423d8533f 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -685,6 +685,26 @@ static inline void bpf_compute_data_pointers(struct sk_buff *skb)
     cb->data_end = skb->data + skb_headlen(skb);
 }
 
+static inline int bpf_prog_run_data_pointers(
+    const struct bpf_prog *prog,
+    struct sk_buff *skb)
+{
+    struct bpf_skb_data_end *cb = (struct bpf_skb_data_end *)skb->cb;
+    void *save_data_meta, *save_data_end;
+    int res;
+
+    save_data_meta = cb->data_meta;
+    save_data_end = cb->data_end;
+
+    bpf_compute_data_pointers(skb);
+    res = bpf_prog_run(prog, skb);
+
+    cb->data_meta = save_data_meta;
+    cb->data_end = save_data_end;
+
+    return res;
+}
+
 /* Similar to bpf_compute_data_pointers(), except that save orginal
  * data in cb->data and cb->meta_data for restore.
  */
diff --git a/net/sched/act_bpf.c b/net/sched/act_bpf.c
index b0455fda7d0b..223cc157312a 100644
--- a/net/sched/act_bpf.c
+++ b/net/sched/act_bpf.c
@@ -47,12 +47,10 @@ TC_INDIRECT_SCOPE int tcf_bpf_act(struct sk_buff *skb,
     filter = rcu_dereference(prog->filter);
     if (at_ingress) {
         __skb_push(skb, skb->mac_len);
-        bpf_compute_data_pointers(skb);
-        filter_res = bpf_prog_run(filter, skb);
+        filter_res = bpf_prog_run_data_pointers(filter, skb);
         __skb_pull(skb, skb->mac_len);
     } else {
-        bpf_compute_data_pointers(skb);
-        filter_res = bpf_prog_run(filter, skb);
+        filter_res = bpf_prog_run_data_pointers(filter, skb);
     }
     if (unlikely(!skb->tstamp && skb->mono_delivery_time))
         skb->mono_delivery_time = 0;
diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
index 382c7a71f81f..05f718dd09a6 100644
--- a/net/sched/cls_bpf.c
+++ b/net/sched/cls_bpf.c
@@ -97,12 +97,10 @@ TC_INDIRECT_SCOPE int cls_bpf_classify(struct sk_buff *skb,
     } else if (at_ingress) {
         /* It is safe to push/pull even if skb_shared() */
         __skb_push(skb, skb->mac_len);
-        bpf_compute_data_pointers(skb);
-        filter_res = bpf_prog_run(prog->filter, skb);
+        filter_res = bpf_prog_run_data_pointers(prog->filter, skb);
         __skb_pull(skb, skb->mac_len);
     } else {
-        bpf_compute_data_pointers(skb);
-        filter_res = bpf_prog_run(prog->filter, skb);
+        filter_res = bpf_prog_run_data_pointers(prog->filter, skb);
     }
     if (unlikely(!skb->tstamp && skb->mono_delivery_time))
         skb->mono_delivery_time = 0;
-- 
2.34.1
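The underlying collision is easy to reproduce outside the kernel: tc and
BPF overlay different structs on the same skb->cb[] scratch bytes, so one
layer's write silently lands in the other's field. A hedged userspace
sketch of the save/restore idiom the new helper applies (fake_skb, tc_cb
and bpf_cb are illustrative stand-ins, not the real kernel layouts):

  #include <stdio.h>

  struct fake_skb { char cb[48] __attribute__((aligned(8))); };

  struct tc_cb  { int drop_reason; };                 /* tc's view of cb[]  */
  struct bpf_cb { void *data_meta; void *data_end; }; /* BPF's view of cb[] */

  static void run_bpf_with_saved_pointers(struct fake_skb *skb)
  {
      struct bpf_cb *cb = (struct bpf_cb *)skb->cb;
      void *save_meta = cb->data_meta, *save_end = cb->data_end;

      cb->data_meta = (void *)0x1;  /* stand-in for bpf_compute_data_pointers(); */
      cb->data_end  = (void *)0x2;  /* these writes land on tc's fields */
      /* a bpf_prog_run() stand-in would execute here */

      cb->data_meta = save_meta;    /* restore tc's view of cb[] */
      cb->data_end  = save_end;
  }

  int main(void)
  {
      struct fake_skb skb = {{0}};
      struct tc_cb *tc = (struct tc_cb *)skb.cb;

      tc->drop_reason = 2;          /* tc writes its field first */
      run_bpf_with_saved_pointers(&skb);
      printf("drop_reason survived: %d\n", tc->drop_reason); /* prints 2 */
      return 0;
  }

Restoring the two saved pointers puts back every byte the BPF view may
have clobbered, which is why drop_reason survives the program run.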
[PATCH OLK-6.6] [Backport] nvmet-fc: avoid scheduling association deletion twice
by Chen Jinghuang, 23 Dec '25
From: Daniel Wagner <wagi(a)kernel.org>

stable inclusion
from stable-v6.6.117
commit 601ed47b2363c24d948d7bac0c23abc8bd459570
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IDBQL1
CVE: CVE-2025-40343
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

----------------------------------------------------------------------

[ Upstream commit f2537be4f8421f6495edfa0bc284d722f253841d ]

When forcefully shutting down a port via the configfs interface,
nvmet_port_subsys_drop_link() first calls nvmet_port_del_ctrls() and
then nvmet_disable_port(). Both functions will eventually schedule all
remaining associations for deletion.

The current implementation checks whether an association is about to be
removed, but only after the work item has already been scheduled. As a
result, it is possible for the first scheduled work item to free all
resources, and then for the same work item to be scheduled again for
deletion.

Because the association list is an RCU list, it is not possible to take
a lock and remove the list entry directly, so it cannot be looked up
again. Instead, a flag (terminating) must be used to determine whether
the association is already in the process of being deleted.

Reported-by: Shinichiro Kawasaki <shinichiro.kawasaki(a)wdc.com>
Closes: https://lore.kernel.org/all/rsdinhafrtlguauhesmrrzkybpnvwantwmyfq2ih5areggh…
Reviewed-by: Hannes Reinecke <hare(a)suse.de>
Signed-off-by: Daniel Wagner <wagi(a)kernel.org>
Signed-off-by: Keith Busch <kbusch(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Chen Jinghuang <chenjinghuang2(a)huawei.com>
---
 drivers/nvme/target/fc.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index a15e764bae35..188b9f1bdaca 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -1090,6 +1090,14 @@ nvmet_fc_delete_assoc_work(struct work_struct *work)
 static void
 nvmet_fc_schedule_delete_assoc(struct nvmet_fc_tgt_assoc *assoc)
 {
+    int terminating;
+
+    terminating = atomic_xchg(&assoc->terminating, 1);
+
+    /* if already terminating, do nothing */
+    if (terminating)
+        return;
+
     nvmet_fc_tgtport_get(assoc->tgtport);
     if (!queue_work(nvmet_wq, &assoc->del_work))
         nvmet_fc_tgtport_put(assoc->tgtport);
@@ -1209,13 +1217,7 @@ nvmet_fc_delete_target_assoc(struct nvmet_fc_tgt_assoc *assoc)
 {
     struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
     unsigned long flags;
-    int i, terminating;
-
-    terminating = atomic_xchg(&assoc->terminating, 1);
-
-    /* if already terminating, do nothing */
-    if (terminating)
-        return;
+    int i;
 
     spin_lock_irqsave(&tgtport->lock, flags);
     list_del_rcu(&assoc->a_list);
-- 
2.34.1
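The schedule-once idiom moved here relies on atomic_xchg() returning the
flag's previous value: only the first caller observes 0 and proceeds, so
the deletion work can never be queued twice. A minimal C11 sketch of the
same pattern (userspace stand-in, not the nvmet code):

  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_int terminating;

  static void schedule_delete(void)
  {
      /* atomic_exchange returns the old value: only the first caller
       * sees 0, so the deletion work is queued exactly once */
      if (atomic_exchange(&terminating, 1))
          return;                 /* already terminating, do nothing */

      printf("deletion work queued\n");
  }

  int main(void)
  {
      schedule_delete();          /* queues the work */
      schedule_delete();          /* no-op, flag already set */
      return 0;
  }

Moving the exchange from the work function into the scheduling function
is what closes the window where the same item could be queued again
after it had already freed the association.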
[PATCH OLK-5.10] scsi/hiraid: Support New Raid feature
by LinKun, 23 Dec '25
From: 岳智超 <yuezhichao1(a)h-partners.com>

driver inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8290
CVE: NA

--------------------------------

Add threaded irq for io queue.

Signed-off-by: 岳智超 <yuezhichao1(a)h-partners.com>
---
 drivers/scsi/hisi_raid/hiraid.h      |  1 +
 drivers/scsi/hisi_raid/hiraid_main.c | 60 ++++++++++++++++++++++++++--
 2 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/hisi_raid/hiraid.h b/drivers/scsi/hisi_raid/hiraid.h
index 04b2e25..b786066 100644
--- a/drivers/scsi/hisi_raid/hiraid.h
+++ b/drivers/scsi/hisi_raid/hiraid.h
@@ -686,6 +686,7 @@ struct hiraid_queue {
     atomic_t inflight;
     void *sense_buffer_virt;
     dma_addr_t sense_buffer_phy;
+    s32 pci_irq;
     struct dma_pool *prp_small_pool;
 };
 
diff --git a/drivers/scsi/hisi_raid/hiraid_main.c b/drivers/scsi/hisi_raid/hiraid_main.c
index 2f33339..ee5cb10 100644
--- a/drivers/scsi/hisi_raid/hiraid_main.c
+++ b/drivers/scsi/hisi_raid/hiraid_main.c
@@ -107,6 +107,13 @@ static u32 log_debug_switch;
 module_param(log_debug_switch, uint, 0644);
 MODULE_PARM_DESC(log_debug_switch, "set log state, default zero for switch off");
 
+static bool threaded_irq = true;
+module_param(threaded_irq, bool, 0444);
+MODULE_PARM_DESC(threaded_irq, "use threaded irq for io queue, default on");
+
+static u32 poll_delay_min = 9;
+static u32 poll_delay_max = 19;
+
 static int extra_pool_num_set(const char *val, const struct kernel_param *kp)
 {
     u8 n = 0;
@@ -152,7 +159,7 @@ static struct workqueue_struct *work_queue;
         __func__, ##__VA_ARGS__); \
 } while (0)
 
-#define HIRAID_DRV_VERSION "1.1.0.1"
+#define HIRAID_DRV_VERSION "1.1.0.2"
 
 #define ADMIN_TIMEOUT (admin_tmout * HZ)
 #define USRCMD_TIMEOUT (180 * HZ)
@@ -1305,6 +1312,7 @@ static int hiraid_alloc_queue(struct hiraid_dev *hdev, u16 qid, u16 depth)
     hiraidq->q_depth = depth;
     hiraidq->qid = qid;
     hiraidq->cq_vector = -1;
+    hiraidq->pci_irq = -1;
     hdev->queue_count++;
 
     return 0;
@@ -1631,6 +1639,39 @@ static irqreturn_t hiraid_handle_irq(int irq, void *data)
     return ret;
 }
 
+static irqreturn_t hiraid_io_poll(int irq, void *data)
+{
+    struct hiraid_queue *hiraidq = data;
+    irqreturn_t ret = IRQ_NONE;
+    u16 start, end;
+
+    do {
+        spin_lock(&hiraidq->cq_lock);
+        hiraid_process_cq(hiraidq, &start, &end, -1);
+        hiraidq->last_cq_head = hiraidq->cq_head;
+        spin_unlock(&hiraidq->cq_lock);
+
+        if (start != end) {
+            hiraid_complete_cqes(hiraidq, start, end);
+            ret = IRQ_HANDLED;
+        }
+        usleep_range(poll_delay_min, poll_delay_max);
+    } while (start != end);
+    enable_irq(hiraidq->pci_irq);
+    return ret;
+}
+
+static irqreturn_t hiraid_io_irq(int irq, void *data)
+{
+    struct hiraid_queue *q = data;
+
+    if (hiraid_cqe_pending(q)) {
+        disable_irq_nosync(q->pci_irq);
+        return IRQ_WAKE_THREAD;
+    }
+    return IRQ_NONE;
+}
+
 static int hiraid_setup_admin_queue(struct hiraid_dev *hdev)
 {
     struct hiraid_queue *adminq = &hdev->queues[0];
@@ -1666,9 +1707,11 @@ static int hiraid_setup_admin_queue(struct hiraid_dev *hdev)
                   adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
     if (ret) {
         adminq->cq_vector = -1;
+        adminq->pci_irq = -1;
         return ret;
     }
 
+    adminq->pci_irq = pci_irq_vector(hdev->pdev, adminq->cq_vector);
     hiraid_init_queue(adminq, 0);
 
     dev_info(hdev->dev, "setup admin queue success, queuecount[%d] online[%d] pagesize[%d]\n",
@@ -1937,14 +1980,23 @@ static int hiraid_create_queue(struct hiraid_queue *hiraidq, u16 qid)
         goto delete_cq;
 
     hiraidq->cq_vector = cq_vector;
-    ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_handle_irq, NULL,
-                  hiraidq, "hiraid%d_q%d", hdev->instance, qid);
+
+    if (threaded_irq)
+        ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_io_irq,
+                      hiraid_io_poll, hiraidq, "hiraid%d_q%d",
+                      hdev->instance, qid);
+    else
+        ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_handle_irq,
+                      NULL, hiraidq, "hiraid%d_q%d",
+                      hdev->instance, qid);
     if (ret) {
         hiraidq->cq_vector = -1;
+        hiraidq->pci_irq = -1;
        dev_err(hdev->dev, "request queue[%d] irq failed\n", qid);
         goto delete_sq;
     }
 
+    hiraidq->pci_irq = pci_irq_vector(hdev->pdev, hiraidq->cq_vector);
     hiraid_init_queue(hiraidq, qid);
 
     return 0;
@@ -2094,10 +2146,12 @@ static int hiraid_setup_io_queues(struct hiraid_dev *hdev)
                   adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
     if (ret) {
         dev_err(hdev->dev, "request admin irq failed\n");
+        adminq->pci_irq = -1;
         adminq->cq_vector = -1;
         return ret;
     }
 
+    adminq->pci_irq = pci_irq_vector(hdev->pdev, adminq->cq_vector);
     hdev->online_queues++;
 
     for (i = hdev->queue_count; i <= hdev->max_qid; i++) {
-- 
2.45.1.windows.1
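The patch follows the standard Linux threaded-IRQ split: a small hard
handler masks the line and defers to a sleeping thread, which drains the
completion queue and then unmasks. A hedged kernel-style sketch of the
pattern (the foo_* types and helpers are placeholders, not hiraid
symbols, so this fragment will not build on its own):

  #include <linux/interrupt.h>
  #include <linux/delay.h>

  struct foo_queue { int dummy; };  /* placeholder completion-queue state */

  static bool foo_cqe_pending(struct foo_queue *q); /* phase-bit check, driver-provided */
  static bool foo_reap_cqes(struct foo_queue *q);   /* true while entries were reaped  */

  static irqreturn_t foo_hard_irq(int irq, void *data)
  {
      struct foo_queue *q = data;

      if (!foo_cqe_pending(q))
          return IRQ_NONE;          /* spurious interrupt, not ours */

      disable_irq_nosync(irq);      /* mask the line until drained */
      return IRQ_WAKE_THREAD;       /* defer the heavy work to the thread */
  }

  static irqreturn_t foo_irq_thread(int irq, void *data)
  {
      struct foo_queue *q = data;

      while (foo_reap_cqes(q))
          usleep_range(9, 19);      /* brief sleep lets completions batch */

      enable_irq(irq);              /* unmask once the queue is empty */
      return IRQ_HANDLED;
  }

  /* registration pairs both handlers on one vector, e.g.:
   * ret = request_threaded_irq(irq, foo_hard_irq, foo_irq_thread,
   *                            0, "foo_q0", q);
   */

Sleeping between polls is what trades a little completion latency for
far fewer hard interrupts under load, the same trade the
poll_delay_min/poll_delay_max module values make above.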
[PATCH OLK-6.6 0/1] iommu: set the default iommu-dma mode as non-strict
by Qinxin Xia, 23 Dec '25
driver inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8292

----------------------------------------------------------------------

The non-strict smmu mode has significant performance gains and can
resolve the nvme soft lockup problem, so we enable it by default.

Many peripherals are faster than before. For example, the top speed of
an older netcard was 10Gb/s, while now it is more than 25Gb/s. But with
iommu page-table mapping enabled, it is hard to reach that top speed in
strict mode because of the frequent map and unmap operations. To keep
abreast of the times, it is better to make non-strict the default.

Below is our iperf performance data on a 25Gb netcard:

  strict mode:     18-20 Gb/s
  non-strict mode: 23.5 Gb/s

Qinxin Xia (1):
  iommu: set the default iommu-dma mode as non-strict

 Documentation/admin-guide/kernel-parameters.txt | 2 +-
 arch/arm64/configs/openeuler_defconfig          | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

-- 
2.33.0
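For context, both knobs involved already exist upstream; the non-strict
choice this series argues for would look roughly like the following
defconfig fragment and boot-time parameter (shown as assumed examples,
not the exact diff in the patch):

  # build-time default (Kconfig)
  CONFIG_IOMMU_DEFAULT_DMA_LAZY=y
  # CONFIG_IOMMU_DEFAULT_DMA_STRICT is not set

  # per-boot override on the kernel command line
  iommu.strict=0    # lazy (non-strict): batched, deferred TLB invalidation
  iommu.strict=1    # strict: invalidate on every unmap

The performance gap comes from the invalidation policy: lazy mode batches
IOTLB flushes instead of paying one synchronous flush per unmap, at the
cost of a short window where freed IOVAs remain device-accessible.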
[PATCH openEuler-1.0-LTS] scsi/hiraid: Support New Raid feature
by LinKun, 23 Dec '25
From: 岳智超 <yuezhichao1(a)h-partners.com>

driver inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8291
CVE: NA

--------------------------------

Add threaded irq for io queue.

Signed-off-by: 岳智超 <yuezhichao1(a)h-partners.com>
---
 drivers/scsi/hisi_raid/hiraid.h      |  1 +
 drivers/scsi/hisi_raid/hiraid_main.c | 61 ++++++++++++++++++++++++++--
 2 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/hisi_raid/hiraid.h b/drivers/scsi/hisi_raid/hiraid.h
index 1ebc3dd..bc4e05a 100644
--- a/drivers/scsi/hisi_raid/hiraid.h
+++ b/drivers/scsi/hisi_raid/hiraid.h
@@ -683,6 +683,7 @@ struct hiraid_queue {
     atomic_t inflight;
     void *sense_buffer_virt;
     dma_addr_t sense_buffer_phy;
+    s32 pci_irq;
     struct dma_pool *prp_small_pool;
 };
 
diff --git a/drivers/scsi/hisi_raid/hiraid_main.c b/drivers/scsi/hisi_raid/hiraid_main.c
index f84182f..ee25893 100644
--- a/drivers/scsi/hisi_raid/hiraid_main.c
+++ b/drivers/scsi/hisi_raid/hiraid_main.c
@@ -107,6 +107,13 @@ static u32 log_debug_switch;
 module_param(log_debug_switch, uint, 0644);
 MODULE_PARM_DESC(log_debug_switch, "set log state, default zero for switch off");
 
+static bool threaded_irq = true;
+module_param(threaded_irq, bool, 0444);
+MODULE_PARM_DESC(threaded_irq, "use threaded irq for io queue, default on");
+
+static u32 poll_delay_min = 9;
+static u32 poll_delay_max = 19;
+
 static int extra_pool_num_set(const char *val, const struct kernel_param *kp)
 {
     u8 n = 0;
@@ -153,7 +160,7 @@ static struct workqueue_struct *work_queue;
         __func__, ##__VA_ARGS__); \
 } while (0)
 
-#define HIRAID_DRV_VERSION "1.1.0.0"
+#define HIRAID_DRV_VERSION "1.1.0.1"
 
 #define ADMIN_TIMEOUT (admin_tmout * HZ)
 #define USRCMD_TIMEOUT (180 * HZ)
@@ -1305,6 +1312,7 @@ static int hiraid_alloc_queue(struct hiraid_dev *hdev, u16 qid, u16 depth)
     hiraidq->q_depth = depth;
     hiraidq->qid = qid;
     hiraidq->cq_vector = -1;
+    hiraidq->pci_irq = -1;
     hdev->queue_count++;
 
     return 0;
@@ -1646,6 +1654,39 @@ static irqreturn_t hiraid_handle_irq(int irq, void *data)
     return ret;
 }
 
+static irqreturn_t hiraid_io_poll(int irq, void *data)
+{
+    struct hiraid_queue *hiraidq = data;
+    irqreturn_t ret = IRQ_NONE;
+    u16 start, end;
+
+    do {
+        spin_lock(&hiraidq->cq_lock);
+        hiraid_process_cq(hiraidq, &start, &end, -1);
+        hiraidq->last_cq_head = hiraidq->cq_head;
+        spin_unlock(&hiraidq->cq_lock);
+
+        if (start != end) {
+            hiraid_complete_cqes(hiraidq, start, end);
+            ret = IRQ_HANDLED;
+        }
+        usleep_range(poll_delay_min, poll_delay_max);
+    } while (start != end);
+    enable_irq(hiraidq->pci_irq);
+    return ret;
+}
+
+static irqreturn_t hiraid_io_irq(int irq, void *data)
+{
+    struct hiraid_queue *q = data;
+
+    if (hiraid_cqe_pending(q)) {
+        disable_irq_nosync(q->pci_irq);
+        return IRQ_WAKE_THREAD;
+    }
+    return IRQ_NONE;
+}
+
 static int hiraid_setup_admin_queue(struct hiraid_dev *hdev)
 {
     struct hiraid_queue *adminq = &hdev->queues[0];
@@ -1681,9 +1722,11 @@ static int hiraid_setup_admin_queue(struct hiraid_dev *hdev)
                   NULL, adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
     if (ret) {
         adminq->cq_vector = -1;
+        adminq->pci_irq = -1;
         return ret;
     }
 
+    adminq->pci_irq = pci_irq_vector(hdev->pdev, adminq->cq_vector);
     hiraid_init_queue(adminq, 0);
 
     dev_info(hdev->dev, "setup admin queue success, queuecount[%d] online[%d] pagesize[%d]\n",
@@ -1958,14 +2001,23 @@ static int hiraid_create_queue(struct hiraid_queue *hiraidq, u16 qid)
         goto delete_cq;
 
     hiraidq->cq_vector = cq_vector;
-    ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_handle_irq,
-                  NULL, hiraidq, "hiraid%d_q%d", hdev->instance, qid);
+    if (threaded_irq)
+        ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_io_irq,
+                      hiraid_io_poll, hiraidq, "hiraid%d_q%d",
+                      hdev->instance, qid);
+    else
+        ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_handle_irq,
+                      NULL, hiraidq, "hiraid%d_q%d",
+                      hdev->instance, qid);
+
     if (ret) {
         hiraidq->cq_vector = -1;
+        hiraidq->pci_irq = -1;
         dev_err(hdev->dev, "request queue[%d] irq failed\n", qid);
         goto delete_sq;
     }
 
+    hiraidq->pci_irq = pci_irq_vector(hdev->pdev, hiraidq->cq_vector);
     hiraid_init_queue(hiraidq, qid);
 
     return 0;
@@ -2122,10 +2174,11 @@ static int hiraid_setup_io_queues(struct hiraid_dev *hdev)
                   adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
     if (ret) {
         dev_err(hdev->dev, "request admin irq failed\n");
+        adminq->pci_irq = -1;
         adminq->cq_vector = -1;
         return ret;
     }
-
+    adminq->pci_irq = pci_irq_vector(hdev->pdev, adminq->cq_vector);
     hdev->online_queues++;
 
     for (i = hdev->queue_count; i <= hdev->max_qid; i++) {
-- 
2.45.1.windows.1
[openeuler:OLK-6.6 2/2] drivers/platform/mpam/mpam_devices.c:247:11: error: implicit declaration of function '__acpi_get_mem_attribute'
by kernel test robot, 23 Dec '25
Hi James,

FYI, the error/warning still remains.

tree:   https://gitee.com/openeuler/kernel.git OLK-6.6
head:   c098fa18c07cc52100a52db8fd0c2900461888c9
commit: 3e9e723f3bf92a19e5e15dda89bbb136ce463294 [2/2] arm_mpam: Add probe/remove for mpam msc driver and kbuild boiler plate
config: arm64-randconfig-004-20251223 (https://download.01.org/0day-ci/archive/20251223/202512231755.XRc0hf4e-lkp@…)
compiler: aarch64-linux-gcc (GCC) 9.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251223/202512231755.XRc0hf4e-lkp@…)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202512231755.XRc0hf4e-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from include/linux/mmzone.h:8,
                    from include/linux/gfp.h:7,
                    from include/linux/slab.h:16,
                    from include/linux/resource_ext.h:11,
                    from include/linux/acpi.h:13,
                    from drivers/platform/mpam/mpam_devices.c:6:
   drivers/platform/mpam/mpam_devices.c: In function 'mpam_msc_drv_probe':
   drivers/platform/mpam/mpam_devices.c:212:24: error: 'struct mpam_msc' has no member named 'mon_sel_lock'; did you mean 'part_sel_lock'?
     212 |   spin_lock_init(&msc->mon_sel_lock);
         |                        ^~~~~~~~~~~~
   include/linux/spinlock.h:335:38: note: in definition of macro 'spin_lock_init'
     335 |   __raw_spin_lock_init(spinlock_check(lock), \
         |                                       ^~~~
>> drivers/platform/mpam/mpam_devices.c:247:11: error: implicit declaration of function '__acpi_get_mem_attribute' [-Werror=implicit-function-declaration]
     247 |    prot = __acpi_get_mem_attribute(msc->pcc_chan->shmem_base_addr);
         |           ^~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/platform/mpam/mpam_devices.c:247:11: error: incompatible types when assigning to type 'pgprot_t' {aka 'struct <anonymous>'} from type 'int'
   cc1: some warnings being treated as errors


vim +/__acpi_get_mem_attribute +247 drivers/platform/mpam/mpam_devices.c

   170	
   171	static int mpam_msc_drv_probe(struct platform_device *pdev)
   172	{
   173		int err;
   174		pgprot_t prot;
   175		void * __iomem io;
   176		struct mpam_msc *msc;
   177		struct resource *msc_res;
   178		void *plat_data = pdev->dev.platform_data;
   179	
   180		mutex_lock(&mpam_list_lock);
   181		do {
   182			msc = devm_kzalloc(&pdev->dev, sizeof(*msc), GFP_KERNEL);
   183			if (!msc) {
   184				err = -ENOMEM;
   185				break;
   186			}
   187	
   188			INIT_LIST_HEAD_RCU(&msc->glbl_list);
   189			msc->pdev = pdev;
   190	
   191			err = device_property_read_u32(&pdev->dev, "arm,not-ready-us",
   192						       &msc->nrdy_usec);
   193			if (err) {
   194				/* This will prevent CSU monitors being usable */
   195				msc->nrdy_usec = 0;
   196			}
   197	
   198			err = get_msc_affinity(msc);
   199			if (err)
   200				break;
   201			if (cpumask_empty(&msc->accessibility)) {
   202				pr_err_once("msc:%u is not accessible from any CPU!",
   203					    msc->id);
   204				err = -EINVAL;
   205				break;
   206			}
   207	
   208			mutex_init(&msc->lock);
   209			msc->id = mpam_num_msc++;
   210			INIT_LIST_HEAD_RCU(&msc->ris);
   211			spin_lock_init(&msc->part_sel_lock);
   212			spin_lock_init(&msc->mon_sel_lock);
   213	
   214			if (device_property_read_u32(&pdev->dev, "pcc-channel",
   215						     &msc->pcc_subspace_id))
   216				msc->iface = MPAM_IFACE_MMIO;
   217			else
   218				msc->iface = MPAM_IFACE_PCC;
   219	
   220			if (msc->iface == MPAM_IFACE_MMIO) {
   221				io = devm_platform_get_and_ioremap_resource(pdev, 0,
   222									    &msc_res);
   223				if (IS_ERR(io)) {
   224					pr_err("Failed to map MSC base address\n");
   225					devm_kfree(&pdev->dev, msc);
   226					err = PTR_ERR(io);
   227					break;
   228				}
   229				msc->mapped_hwpage_sz = msc_res->end - msc_res->start;
   230				msc->mapped_hwpage = io;
   231			} else if (msc->iface == MPAM_IFACE_PCC) {
   232				msc->pcc_cl.dev = &pdev->dev;
   233				msc->pcc_cl.rx_callback = mpam_pcc_rx_callback;
   234				msc->pcc_cl.tx_block = false;
   235				msc->pcc_cl.tx_tout = 1000; /* 1s */
   236				msc->pcc_cl.knows_txdone = false;
   237	
   238				msc->pcc_chan = pcc_mbox_request_channel(&msc->pcc_cl,
   239									 msc->pcc_subspace_id);
   240				if (IS_ERR(msc->pcc_chan)) {
   241					pr_err("Failed to request MSC PCC channel\n");
   242					devm_kfree(&pdev->dev, msc);
   243					err = PTR_ERR(msc->pcc_chan);
   244					break;
   245				}
   246	
 > 247				prot = __acpi_get_mem_attribute(msc->pcc_chan->shmem_base_addr);
   248				io = ioremap_prot(msc->pcc_chan->shmem_base_addr,
   249						  msc->pcc_chan->shmem_size, pgprot_val(prot));
   250				if (IS_ERR(io)) {
   251					pr_err("Failed to map MSC base address\n");
   252					pcc_mbox_free_channel(msc->pcc_chan);
   253					devm_kfree(&pdev->dev, msc);
   254					err = PTR_ERR(io);
   255					break;
   256				}
   257	
   258				/* TODO: issue a read to update the registers */
   259	
   260				msc->mapped_hwpage_sz = msc->pcc_chan->shmem_size;
   261				msc->mapped_hwpage = io + sizeof(struct acpi_pcct_shared_memory);
   262			}
   263	
   264			list_add_rcu(&msc->glbl_list, &mpam_all_msc);
   265			platform_set_drvdata(pdev, msc);
   266		} while (0);
   267		mutex_unlock(&mpam_list_lock);
   268	
   269		if (!err) {
   270			/* Create RIS entries described by firmware */
   271			if (!acpi_disabled)
   272				err = acpi_mpam_parse_resources(msc, plat_data);
   273			else
   274				err = mpam_dt_parse_resources(msc, plat_data);
   275		}
   276	
   277		if (!err && fw_num_msc == mpam_num_msc)
   278			mpam_discovery_complete();
   279	
   280		return err;
   281	}
   282	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
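The second error at line 247 is a direct consequence of the first: once
GCC falls back to an implicit declaration, the call is assumed to return
int, and that int cannot be assigned to the struct-wrapped pgprot_t. A
deliberately non-compiling sketch that reproduces the same error pair
outside the kernel tree (undeclared_helper() and my_pgprot_t are
hypothetical):

  /* this fragment is meant to fail compilation, mirroring the report */
  typedef struct { unsigned long pgprot; } my_pgprot_t;

  my_pgprot_t lookup_prot(unsigned long addr)
  {
      my_pgprot_t prot;

      /* no prototype in scope: GCC reports an implicit declaration and
       * assumes an int return, which cannot convert to a struct type */
      prot = undeclared_helper(addr);
      return prot;
  }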
[PATCH OLK-6.6] ftrace: Fix softlockup in ftrace_module_enable
by Tengda Wu, 23 Dec '25
From: Vladimir Riabchun <ferr.lambarginio(a)gmail.com>

stable inclusion
from stable-v6.6.119
commit e81e6d6d99b16dae11adbeda5c996317942a940c
category: bugfix
bugzilla: http://atomgit.com/src-openeuler/kernel/issues/11609
CVE: CVE-2025-68173

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

[ Upstream commit 4099b98203d6b33d990586542fa5beee408032a3 ]

A soft lockup was observed when loading the amdgpu module. If a module
has a lot of traceable functions, multiple calls to kallsyms_lookup can
spend too much time in an RCU critical section with preemption
disabled, causing a kernel panic.

This is the same issue that was fixed in commit d0b24b4e91fc
("ftrace: Prevent RCU stall on PREEMPT_VOLUNTARY kernels") and commit
42ea22e754ba ("ftrace: Add cond_resched() to ftrace_graph_set_hash()").
Fix it the same way by adding cond_resched() in ftrace_module_enable.

Link: https://lore.kernel.org/aMQD9_lxYmphT-up@vova-pc
Signed-off-by: Vladimir Riabchun <ferr.lambarginio(a)gmail.com>
Signed-off-by: Steven Rostedt (Google) <rostedt(a)goodmis.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Tengda Wu <wutengda2(a)huawei.com>
---
 kernel/trace/ftrace.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 15785a729a0c..398992597685 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6873,6 +6873,8 @@ void ftrace_module_enable(struct module *mod)
         if (!within_module(rec->ip, mod))
             break;
 
+        cond_resched();
+
         /* Weak functions should still be ignored */
         if (!test_for_valid_rec(rec)) {
             /* Clear all other flags. Should not be enabled anyway */
-- 
2.34.1
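The fix is the usual cond_resched() pattern for long kernel loops on
non-preemptible configurations. A hedged kernel-style sketch of the
pattern (record and handle_one are placeholders, not ftrace symbols, so
this fragment will not build on its own):

  #include <linux/sched.h>

  struct record { unsigned long ip; };         /* placeholder */

  static void handle_one(struct record *rec);  /* placeholder */

  static void process_many_records(struct record *recs, unsigned long n)
  {
      unsigned long i;

      for (i = 0; i < n; i++) {
          /* on PREEMPT_VOLUNTARY/none kernels, yield here so a walk
           * over tens of thousands of records cannot soft-lock a CPU */
          cond_resched();
          handle_one(&recs[i]);
      }
  }

On fully preemptible kernels cond_resched() is close to a no-op, so the
call costs little where it is not needed.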
[PATCH openEuler-26.09 0/2] memcg: enable asynchronous reclaim for cgroup-v2
by Chen Ridong, 23 Dec '25
memcg: enable asynchronous reclaim for cgroup-v2

Chen Ridong (2):
  memcg: change CONFIG_MEMCG_V1_RECLAIM to CONFIG_MEMCG_QOS
  memcg: enable asynchronous reclaim for cgroup-v2

 arch/arm64/configs/openeuler_defconfig | 2 +-
 arch/riscv/configs/openeuler_defconfig | 2 +-
 arch/x86/configs/openeuler_defconfig   | 2 +-
 include/linux/memcontrol.h             | 4 ++--
 init/Kconfig                           | 10 +++++++--
 mm/memcontrol.c                        | 30 ++++++++++++++++----------
 6 files changed, 32 insertions(+), 18 deletions(-)

-- 
2.34.1
2 3
0 0