mailweb.openeuler.org

Kernel

kernel@openeuler.org

  • 61 participants
  • 22282 discussions
[PATCH OLK-6.6] [Backport] nvmet-fc: avoid scheduling association deletion twice
by Chen Jinghuang 23 Dec '25

From: Daniel Wagner <wagi(a)kernel.org>

stable inclusion
from stable-v6.6.117
commit 601ed47b2363c24d948d7bac0c23abc8bd459570
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IDBQL1
CVE: CVE-2025-40343
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

----------------------------------------------------------------------

[ Upstream commit f2537be4f8421f6495edfa0bc284d722f253841d ]

When forcefully shutting down a port via the configfs interface,
nvmet_port_subsys_drop_link() first calls nvmet_port_del_ctrls() and
then nvmet_disable_port(). Both functions will eventually schedule all
remaining associations for deletion.

The current implementation checks whether an association is about to be
removed, but only after the work item has already been scheduled. As a
result, it is possible for the first scheduled work item to free all
resources, and then for the same work item to be scheduled again for
deletion.

Because the association list is an RCU list, it is not possible to take
a lock and remove the list entry directly, so it cannot be looked up
again. Instead, a flag (terminating) must be used to determine whether
the association is already in the process of being deleted.

Reported-by: Shinichiro Kawasaki <shinichiro.kawasaki(a)wdc.com>
Closes: https://lore.kernel.org/all/rsdinhafrtlguauhesmrrzkybpnvwantwmyfq2ih5areggh…
Reviewed-by: Hannes Reinecke <hare(a)suse.de>
Signed-off-by: Daniel Wagner <wagi(a)kernel.org>
Signed-off-by: Keith Busch <kbusch(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Chen Jinghuang <chenjinghuang2(a)huawei.com>
---
 drivers/nvme/target/fc.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index a15e764bae35..188b9f1bdaca 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -1090,6 +1090,14 @@ nvmet_fc_delete_assoc_work(struct work_struct *work)
 static void
 nvmet_fc_schedule_delete_assoc(struct nvmet_fc_tgt_assoc *assoc)
 {
+	int terminating;
+
+	terminating = atomic_xchg(&assoc->terminating, 1);
+
+	/* if already terminating, do nothing */
+	if (terminating)
+		return;
+
 	nvmet_fc_tgtport_get(assoc->tgtport);
 	if (!queue_work(nvmet_wq, &assoc->del_work))
 		nvmet_fc_tgtport_put(assoc->tgtport);
@@ -1209,13 +1217,7 @@ nvmet_fc_delete_target_assoc(struct nvmet_fc_tgt_assoc *assoc)
 {
 	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
 	unsigned long flags;
-	int i, terminating;
-
-	terminating = atomic_xchg(&assoc->terminating, 1);
-
-	/* if already terminating, do nothing */
-	if (terminating)
-		return;
+	int i;
 
 	spin_lock_irqsave(&tgtport->lock, flags);
 	list_del_rcu(&assoc->a_list);
-- 
2.34.1
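The key idea in the patch above is that atomic_xchg() sets the terminating flag and returns its previous value in a single step, so only the first caller of nvmet_fc_schedule_delete_assoc() ever queues the deletion work. Below is a minimal userspace sketch of the same schedule-once guard, using C11 atomics rather than the kernel API; the struct and helper names are illustrative only, not taken from the driver.

#include <stdatomic.h>
#include <stdio.h>

/* Illustrative stand-in for the association object. */
struct assoc {
	atomic_int terminating;	/* 0 = live, 1 = deletion already scheduled */
};

/* Hypothetical stand-in for queue_work(): runs the deletion exactly once. */
static void queue_deletion_work(struct assoc *a)
{
	(void)a;
	printf("deletion work queued\n");
}

static void schedule_deletion(struct assoc *a)
{
	/* atomic_exchange returns the old value; only the 0 -> 1 caller proceeds. */
	if (atomic_exchange(&a->terminating, 1))
		return;	/* already terminating, do nothing */

	queue_deletion_work(a);
}

int main(void)
{
	struct assoc a = { .terminating = 0 };

	schedule_deletion(&a);	/* queues the work */
	schedule_deletion(&a);	/* second call is a no-op */
	return 0;
}

Moving the terminating check into nvmet_fc_schedule_delete_assoc(), as the hunk above does, applies this guard before the work is queued rather than after the work has run, which is what prevents the association from being scheduled for deletion twice.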
[PATCH OLK-5.10] scsi/hiraid: Support New Raid feature
by LinKun 23 Dec '25

From: 岳智超 <yuezhichao1(a)h-partners.com>

driver inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8290
CVE: NA

--------------------------------

Add thread irq for io queue

Signed-off-by: 岳智超 <yuezhichao1(a)h-partners.com>
---
 drivers/scsi/hisi_raid/hiraid.h      |  1 +
 drivers/scsi/hisi_raid/hiraid_main.c | 60 ++++++++++++++++++++++++++--
 2 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/hisi_raid/hiraid.h b/drivers/scsi/hisi_raid/hiraid.h
index 04b2e25..b786066 100644
--- a/drivers/scsi/hisi_raid/hiraid.h
+++ b/drivers/scsi/hisi_raid/hiraid.h
@@ -686,6 +686,7 @@ struct hiraid_queue {
 	atomic_t inflight;
 	void *sense_buffer_virt;
 	dma_addr_t sense_buffer_phy;
+	s32 pci_irq;
 	struct dma_pool *prp_small_pool;
 };

diff --git a/drivers/scsi/hisi_raid/hiraid_main.c b/drivers/scsi/hisi_raid/hiraid_main.c
index 2f33339..ee5cb10 100644
--- a/drivers/scsi/hisi_raid/hiraid_main.c
+++ b/drivers/scsi/hisi_raid/hiraid_main.c
@@ -107,6 +107,13 @@ static u32 log_debug_switch;
 module_param(log_debug_switch, uint, 0644);
 MODULE_PARM_DESC(log_debug_switch, "set log state, default zero for switch off");
 
+static bool threaded_irq = true;
+module_param(threaded_irq, bool, 0444);
+MODULE_PARM_DESC(threaded_irq, "use threaded irq for io queue, default on");
+
+static u32 poll_delay_min = 9;
+static u32 poll_delay_max = 19;
+
 static int extra_pool_num_set(const char *val, const struct kernel_param *kp)
 {
 	u8 n = 0;
@@ -152,7 +159,7 @@ static struct workqueue_struct *work_queue;
 		__func__, ##__VA_ARGS__);	\
 } while (0)
 
-#define HIRAID_DRV_VERSION "1.1.0.1"
+#define HIRAID_DRV_VERSION "1.1.0.2"
 
 #define ADMIN_TIMEOUT		(admin_tmout * HZ)
 #define USRCMD_TIMEOUT		(180 * HZ)
@@ -1305,6 +1312,7 @@ static int hiraid_alloc_queue(struct hiraid_dev *hdev, u16 qid, u16 depth)
 	hiraidq->q_depth = depth;
 	hiraidq->qid = qid;
 	hiraidq->cq_vector = -1;
+	hiraidq->pci_irq = -1;
 	hdev->queue_count++;
 
 	return 0;
@@ -1631,6 +1639,39 @@ static irqreturn_t hiraid_handle_irq(int irq, void *data)
 	return ret;
 }
 
+static irqreturn_t hiraid_io_poll(int irq, void *data)
+{
+	struct hiraid_queue *hiraidq = data;
+	irqreturn_t ret = IRQ_NONE;
+	u16 start, end;
+
+	do {
+		spin_lock(&hiraidq->cq_lock);
+		hiraid_process_cq(hiraidq, &start, &end, -1);
+		hiraidq->last_cq_head = hiraidq->cq_head;
+		spin_unlock(&hiraidq->cq_lock);
+
+		if (start != end) {
+			hiraid_complete_cqes(hiraidq, start, end);
+			ret = IRQ_HANDLED;
+		}
+		usleep_range(poll_delay_min, poll_delay_max);
+	} while (start != end);
+	enable_irq(hiraidq->pci_irq);
+	return ret;
+}
+
+static irqreturn_t hiraid_io_irq(int irq, void *data)
+{
+	struct hiraid_queue *q = data;
+
+	if (hiraid_cqe_pending(q)) {
+		disable_irq_nosync(q->pci_irq);
+		return IRQ_WAKE_THREAD;
+	}
+	return IRQ_NONE;
+}
+
 static int hiraid_setup_admin_queue(struct hiraid_dev *hdev)
 {
 	struct hiraid_queue *adminq = &hdev->queues[0];
@@ -1666,9 +1707,11 @@ static int hiraid_setup_admin_queue(struct hiraid_dev *hdev)
 			adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
 	if (ret) {
 		adminq->cq_vector = -1;
+		adminq->pci_irq = -1;
 		return ret;
 	}
 
+	adminq->pci_irq = pci_irq_vector(hdev->pdev, adminq->cq_vector);
 	hiraid_init_queue(adminq, 0);
 
 	dev_info(hdev->dev, "setup admin queue success, queuecount[%d] online[%d] pagesize[%d]\n",
@@ -1937,14 +1980,23 @@ static int hiraid_create_queue(struct hiraid_queue *hiraidq, u16 qid)
 		goto delete_cq;
 
 	hiraidq->cq_vector = cq_vector;
-	ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_handle_irq, NULL,
-			hiraidq, "hiraid%d_q%d", hdev->instance, qid);
+
+	if (threaded_irq)
+		ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_io_irq,
+				hiraid_io_poll, hiraidq, "hiraid%d_q%d",
+				hdev->instance, qid);
+	else
+		ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_handle_irq,
+				NULL, hiraidq, "hiraid%d_q%d",
+				hdev->instance, qid);
 	if (ret) {
 		hiraidq->cq_vector = -1;
+		hiraidq->pci_irq = -1;
 		dev_err(hdev->dev, "request queue[%d] irq failed\n", qid);
 		goto delete_sq;
 	}
 
+	hiraidq->pci_irq = pci_irq_vector(hdev->pdev, hiraidq->cq_vector);
 	hiraid_init_queue(hiraidq, qid);
 
 	return 0;
@@ -2094,10 +2146,12 @@ static int hiraid_setup_io_queues(struct hiraid_dev *hdev)
 			adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
 	if (ret) {
 		dev_err(hdev->dev, "request admin irq failed\n");
+		adminq->pci_irq = -1;
 		adminq->cq_vector = -1;
 		return ret;
 	}
 
+	adminq->pci_irq = pci_irq_vector(hdev->pdev, adminq->cq_vector);
 	hdev->online_queues++;
 
 	for (i = hdev->queue_count; i <= hdev->max_qid; i++) {
-- 
2.45.1.windows.1
[PATCH OLK-6.6 0/1] iommu: set the default iommu-dma mode as non-strict
by Qinxin Xia 23 Dec '25

driver inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8292

----------------------------------------------------------------------

The non-strict smmu mode has significant performance gains and can
resolve the nvme soft lockup problem, so we enable it by default.

Many peripherals are faster than before. For example, the top speed of
an older netcard was 10Gb/s, and now it is more than 25Gb/s. But when
iommu page-table mapping is enabled, it is hard to reach the top speed
in strict mode because of the frequent map and unmap operations. To
keep up with the times, I think it is better to set non-strict as the
default.

Below is our iperf performance data for a 25Gb netcard:

  strict mode:     18-20 Gb/s
  non-strict mode: 23.5  Gb/s

Qinxin Xia (1):
  iommu: set the default iommu-dma mode as non-strict

 Documentation/admin-guide/kernel-parameters.txt | 2 +-
 arch/arm64/configs/openeuler_defconfig          | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

-- 
2.33.0
[PATCH openEuler-1.0-LTS] scsi/hiraid: Support New Raid feature
by LinKun 23 Dec '25

From: 岳智超 <yuezhichao1(a)h-partners.com>

driver inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8291
CVE: NA

--------------------------------

Add thread irq for io queue

Signed-off-by: 岳智超 <yuezhichao1(a)h-partners.com>
---
 drivers/scsi/hisi_raid/hiraid.h      |  1 +
 drivers/scsi/hisi_raid/hiraid_main.c | 61 ++++++++++++++++++++++++++--
 2 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/hisi_raid/hiraid.h b/drivers/scsi/hisi_raid/hiraid.h
index 1ebc3dd..bc4e05a 100644
--- a/drivers/scsi/hisi_raid/hiraid.h
+++ b/drivers/scsi/hisi_raid/hiraid.h
@@ -683,6 +683,7 @@ struct hiraid_queue {
 	atomic_t inflight;
 	void *sense_buffer_virt;
 	dma_addr_t sense_buffer_phy;
+	s32 pci_irq;
 	struct dma_pool *prp_small_pool;
 };

diff --git a/drivers/scsi/hisi_raid/hiraid_main.c b/drivers/scsi/hisi_raid/hiraid_main.c
index f84182f..ee25893 100644
--- a/drivers/scsi/hisi_raid/hiraid_main.c
+++ b/drivers/scsi/hisi_raid/hiraid_main.c
@@ -107,6 +107,13 @@ static u32 log_debug_switch;
 module_param(log_debug_switch, uint, 0644);
 MODULE_PARM_DESC(log_debug_switch, "set log state, default zero for switch off");
 
+static bool threaded_irq = true;
+module_param(threaded_irq, bool, 0444);
+MODULE_PARM_DESC(threaded_irq, "use threaded irq for io queue, default on");
+
+static u32 poll_delay_min = 9;
+static u32 poll_delay_max = 19;
+
 static int extra_pool_num_set(const char *val, const struct kernel_param *kp)
 {
 	u8 n = 0;
@@ -153,7 +160,7 @@ static struct workqueue_struct *work_queue;
 		__func__, ##__VA_ARGS__);	\
 } while (0)
 
-#define HIRAID_DRV_VERSION "1.1.0.0"
+#define HIRAID_DRV_VERSION "1.1.0.1"
 
 #define ADMIN_TIMEOUT		(admin_tmout * HZ)
 #define USRCMD_TIMEOUT		(180 * HZ)
@@ -1305,6 +1312,7 @@ static int hiraid_alloc_queue(struct hiraid_dev *hdev, u16 qid, u16 depth)
 	hiraidq->q_depth = depth;
 	hiraidq->qid = qid;
 	hiraidq->cq_vector = -1;
+	hiraidq->pci_irq = -1;
 	hdev->queue_count++;
 
 	return 0;
@@ -1646,6 +1654,39 @@ static irqreturn_t hiraid_handle_irq(int irq, void *data)
 	return ret;
 }
 
+static irqreturn_t hiraid_io_poll(int irq, void *data)
+{
+	struct hiraid_queue *hiraidq = data;
+	irqreturn_t ret = IRQ_NONE;
+	u16 start, end;
+
+	do {
+		spin_lock(&hiraidq->cq_lock);
+		hiraid_process_cq(hiraidq, &start, &end, -1);
+		hiraidq->last_cq_head = hiraidq->cq_head;
+		spin_unlock(&hiraidq->cq_lock);
+
+		if (start != end) {
+			hiraid_complete_cqes(hiraidq, start, end);
+			ret = IRQ_HANDLED;
+		}
+		usleep_range(poll_delay_min, poll_delay_max);
+	} while (start != end);
+	enable_irq(hiraidq->pci_irq);
+	return ret;
+}
+
+static irqreturn_t hiraid_io_irq(int irq, void *data)
+{
+	struct hiraid_queue *q = data;
+
+	if (hiraid_cqe_pending(q)) {
+		disable_irq_nosync(q->pci_irq);
+		return IRQ_WAKE_THREAD;
+	}
+	return IRQ_NONE;
+}
+
 static int hiraid_setup_admin_queue(struct hiraid_dev *hdev)
 {
 	struct hiraid_queue *adminq = &hdev->queues[0];
@@ -1681,9 +1722,11 @@ static int hiraid_setup_admin_queue(struct hiraid_dev *hdev)
 			NULL, adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
 	if (ret) {
 		adminq->cq_vector = -1;
+		adminq->pci_irq = -1;
 		return ret;
 	}
 
+	adminq->pci_irq = pci_irq_vector(hdev->pdev, adminq->cq_vector);
 	hiraid_init_queue(adminq, 0);
 
 	dev_info(hdev->dev, "setup admin queue success, queuecount[%d] online[%d] pagesize[%d]\n",
@@ -1958,14 +2001,23 @@ static int hiraid_create_queue(struct hiraid_queue *hiraidq, u16 qid)
 		goto delete_cq;
 
 	hiraidq->cq_vector = cq_vector;
-	ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_handle_irq, NULL,
-			hiraidq, "hiraid%d_q%d", hdev->instance, qid);
+	if (threaded_irq)
+		ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_io_irq,
+				hiraid_io_poll, hiraidq, "hiraid%d_q%d",
+				hdev->instance, qid);
+	else
+		ret = pci_request_irq(hdev->pdev, cq_vector, hiraid_handle_irq,
+				NULL, hiraidq, "hiraid%d_q%d",
+				hdev->instance, qid);
+
 	if (ret) {
 		hiraidq->cq_vector = -1;
+		hiraidq->pci_irq = -1;
 		dev_err(hdev->dev, "request queue[%d] irq failed\n", qid);
 		goto delete_sq;
 	}
 
+	hiraidq->pci_irq = pci_irq_vector(hdev->pdev, hiraidq->cq_vector);
 	hiraid_init_queue(hiraidq, qid);
 
 	return 0;
@@ -2122,10 +2174,11 @@ static int hiraid_setup_io_queues(struct hiraid_dev *hdev)
 			adminq, "hiraid%d_q%d", hdev->instance, adminq->qid);
 	if (ret) {
 		dev_err(hdev->dev, "request admin irq failed\n");
+		adminq->pci_irq = -1;
 		adminq->cq_vector = -1;
 		return ret;
 	}
-
+	adminq->pci_irq = pci_irq_vector(hdev->pdev, adminq->cq_vector);
 	hdev->online_queues++;
 
 	for (i = hdev->queue_count; i <= hdev->max_qid; i++) {
-- 
2.45.1.windows.1
[openeuler:OLK-6.6 2/2] drivers/platform/mpam/mpam_devices.c:247:11: error: implicit declaration of function '__acpi_get_mem_attribute'
by kernel test robot 23 Dec '25

Hi James,

FYI, the error/warning still remains.

tree:   https://gitee.com/openeuler/kernel.git OLK-6.6
head:   c098fa18c07cc52100a52db8fd0c2900461888c9
commit: 3e9e723f3bf92a19e5e15dda89bbb136ce463294 [2/2] arm_mpam: Add probe/remove for mpam msc driver and kbuild boiler plate
config: arm64-randconfig-004-20251223 (https://download.01.org/0day-ci/archive/20251223/202512231755.XRc0hf4e-lkp@…)
compiler: aarch64-linux-gcc (GCC) 9.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251223/202512231755.XRc0hf4e-lkp@…)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202512231755.XRc0hf4e-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from include/linux/mmzone.h:8,
                    from include/linux/gfp.h:7,
                    from include/linux/slab.h:16,
                    from include/linux/resource_ext.h:11,
                    from include/linux/acpi.h:13,
                    from drivers/platform/mpam/mpam_devices.c:6:
   drivers/platform/mpam/mpam_devices.c: In function 'mpam_msc_drv_probe':
   drivers/platform/mpam/mpam_devices.c:212:24: error: 'struct mpam_msc' has no member named 'mon_sel_lock'; did you mean 'part_sel_lock'?
     212 |         spin_lock_init(&msc->mon_sel_lock);
         |                              ^~~~~~~~~~~~
   include/linux/spinlock.h:335:38: note: in definition of macro 'spin_lock_init'
     335 |         __raw_spin_lock_init(spinlock_check(lock), \
         |                                             ^~~~
>> drivers/platform/mpam/mpam_devices.c:247:11: error: implicit declaration of function '__acpi_get_mem_attribute' [-Werror=implicit-function-declaration]
     247 |    prot = __acpi_get_mem_attribute(msc->pcc_chan->shmem_base_addr);
         |           ^~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/platform/mpam/mpam_devices.c:247:11: error: incompatible types when assigning to type 'pgprot_t' {aka 'struct <anonymous>'} from type 'int'
   cc1: some warnings being treated as errors

vim +/__acpi_get_mem_attribute +247 drivers/platform/mpam/mpam_devices.c

   170	
   171	static int mpam_msc_drv_probe(struct platform_device *pdev)
   172	{
   173		int err;
   174		pgprot_t prot;
   175		void * __iomem io;
   176		struct mpam_msc *msc;
   177		struct resource *msc_res;
   178		void *plat_data = pdev->dev.platform_data;
   179	
   180		mutex_lock(&mpam_list_lock);
   181		do {
   182			msc = devm_kzalloc(&pdev->dev, sizeof(*msc), GFP_KERNEL);
   183			if (!msc) {
   184				err = -ENOMEM;
   185				break;
   186			}
   187	
   188			INIT_LIST_HEAD_RCU(&msc->glbl_list);
   189			msc->pdev = pdev;
   190	
   191			err = device_property_read_u32(&pdev->dev, "arm,not-ready-us",
   192						       &msc->nrdy_usec);
   193			if (err) {
   194				/* This will prevent CSU monitors being usable */
   195				msc->nrdy_usec = 0;
   196			}
   197	
   198			err = get_msc_affinity(msc);
   199			if (err)
   200				break;
   201			if (cpumask_empty(&msc->accessibility)) {
   202				pr_err_once("msc:%u is not accessible from any CPU!",
   203					    msc->id);
   204				err = -EINVAL;
   205				break;
   206			}
   207	
   208			mutex_init(&msc->lock);
   209			msc->id = mpam_num_msc++;
   210			INIT_LIST_HEAD_RCU(&msc->ris);
   211			spin_lock_init(&msc->part_sel_lock);
   212			spin_lock_init(&msc->mon_sel_lock);
   213	
   214			if (device_property_read_u32(&pdev->dev, "pcc-channel",
   215						     &msc->pcc_subspace_id))
   216				msc->iface = MPAM_IFACE_MMIO;
   217			else
   218				msc->iface = MPAM_IFACE_PCC;
   219	
   220			if (msc->iface == MPAM_IFACE_MMIO) {
   221				io = devm_platform_get_and_ioremap_resource(pdev, 0,
   222									    &msc_res);
   223				if (IS_ERR(io)) {
   224					pr_err("Failed to map MSC base address\n");
   225					devm_kfree(&pdev->dev, msc);
   226					err = PTR_ERR(io);
   227					break;
   228				}
   229				msc->mapped_hwpage_sz = msc_res->end - msc_res->start;
   230				msc->mapped_hwpage = io;
   231			} else if (msc->iface == MPAM_IFACE_PCC) {
   232				msc->pcc_cl.dev = &pdev->dev;
   233				msc->pcc_cl.rx_callback = mpam_pcc_rx_callback;
   234				msc->pcc_cl.tx_block = false;
   235				msc->pcc_cl.tx_tout = 1000; /* 1s */
   236				msc->pcc_cl.knows_txdone = false;
   237	
   238				msc->pcc_chan = pcc_mbox_request_channel(&msc->pcc_cl,
   239									 msc->pcc_subspace_id);
   240				if (IS_ERR(msc->pcc_chan)) {
   241					pr_err("Failed to request MSC PCC channel\n");
   242					devm_kfree(&pdev->dev, msc);
   243					err = PTR_ERR(msc->pcc_chan);
   244					break;
   245				}
   246	
 > 247				prot = __acpi_get_mem_attribute(msc->pcc_chan->shmem_base_addr);
   248				io = ioremap_prot(msc->pcc_chan->shmem_base_addr,
   249						  msc->pcc_chan->shmem_size, pgprot_val(prot));
   250				if (IS_ERR(io)) {
   251					pr_err("Failed to map MSC base address\n");
   252					pcc_mbox_free_channel(msc->pcc_chan);
   253					devm_kfree(&pdev->dev, msc);
   254					err = PTR_ERR(io);
   255					break;
   256				}
   257	
   258				/* TODO: issue a read to update the registers */
   259	
   260				msc->mapped_hwpage_sz = msc->pcc_chan->shmem_size;
   261				msc->mapped_hwpage = io + sizeof(struct acpi_pcct_shared_memory);
   262			}
   263	
   264			list_add_rcu(&msc->glbl_list, &mpam_all_msc);
   265			platform_set_drvdata(pdev, msc);
   266		} while (0);
   267		mutex_unlock(&mpam_list_lock);
   268	
   269		if (!err) {
   270			/* Create RIS entries described by firmware */
   271			if (!acpi_disabled)
   272				err = acpi_mpam_parse_resources(msc, plat_data);
   273			else
   274				err = mpam_dt_parse_resources(msc, plat_data);
   275		}
   276	
   277		if (!err && fw_num_msc == mpam_num_msc)
   278			mpam_discovery_complete();
   279	
   280		return err;
   281	}
   282	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
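The two errors at line 247 are linked: without a visible declaration for __acpi_get_mem_attribute(), GCC 9 falls back to an implicit declaration that is assumed to return int, and an int value cannot then be assigned to the struct-typed pgprot_t, which produces the second diagnostic. A small self-contained illustration of the same error pair follows; the names (fake_pgprot_t, translate_attr) are invented for the example and are not part of the mpam code.

#include <stdio.h>

/*
 * If translate_attr() below were called without a visible declaration,
 * GCC 9 would emit the same two errors as above: an implicit declaration
 * assumed to return int, followed by "incompatible types when assigning
 * to type 'fake_pgprot_t' ... from type 'int'". Declaring it before use
 * (or including the header that declares it) resolves both.
 */
typedef struct { unsigned long val; } fake_pgprot_t;	/* stand-in for pgprot_t */

static fake_pgprot_t translate_attr(unsigned long addr)	/* declaration visible here */
{
	fake_pgprot_t p = { .val = addr & ~0xfffUL };
	return p;
}

int main(void)
{
	fake_pgprot_t prot = translate_attr(0x123456);

	printf("prot.val = 0x%lx\n", prot.val);
	return 0;
}

The real fix in the tree presumably involves making the declaration of __acpi_get_mem_attribute() visible for this configuration (or restricting the PCC path to configurations that provide it); the snippet only demonstrates why the implicit declaration and the type error appear together.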
[PATCH OLK-6.6] ftrace: Fix softlockup in ftrace_module_enable
by Tengda Wu 23 Dec '25

From: Vladimir Riabchun <ferr.lambarginio(a)gmail.com>

stable inclusion
from stable-v6.6.119
commit e81e6d6d99b16dae11adbeda5c996317942a940c
category: bugfix
bugzilla: http://atomgit.com/src-openeuler/kernel/issues/11609
CVE: CVE-2025-68173
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

[ Upstream commit 4099b98203d6b33d990586542fa5beee408032a3 ]

A soft lockup was observed when loading the amdgpu module. If a module
has a lot of traceable functions, multiple calls to kallsyms_lookup can
spend too much time in an RCU critical section with preemption
disabled, causing a kernel panic.

This is the same issue that was fixed in commit d0b24b4e91fc ("ftrace:
Prevent RCU stall on PREEMPT_VOLUNTARY kernels") and commit
42ea22e754ba ("ftrace: Add cond_resched() to ftrace_graph_set_hash()").
Fix it the same way by adding cond_resched() in ftrace_module_enable.

Link: https://lore.kernel.org/aMQD9_lxYmphT-up@vova-pc
Signed-off-by: Vladimir Riabchun <ferr.lambarginio(a)gmail.com>
Signed-off-by: Steven Rostedt (Google) <rostedt(a)goodmis.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Tengda Wu <wutengda2(a)huawei.com>
---
 kernel/trace/ftrace.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 15785a729a0c..398992597685 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6873,6 +6873,8 @@ void ftrace_module_enable(struct module *mod)
 		if (!within_module(rec->ip, mod))
 			break;
 
+		cond_resched();
+
 		/* Weak functions should still be ignored */
 		if (!test_for_valid_rec(rec)) {
 			/* Clear all other flags. Should not be enabled anyway */
-- 
2.34.1
[PATCH openEuler-26.09 0/2] memcg: enable asynchronous reclaim for cgroup-v2
by Chen Ridong 23 Dec '25

memcg: enable asynchronous reclaim for cgroup-v2

Chen Ridong (2):
  memcg: change CONFIG_MEMCG_V1_RECLAIM to CONFIG_MEMCG_QOS
  memcg: enable asynchronous reclaim for cgroup-v2

 arch/arm64/configs/openeuler_defconfig |  2 +-
 arch/riscv/configs/openeuler_defconfig |  2 +-
 arch/x86/configs/openeuler_defconfig   |  2 +-
 include/linux/memcontrol.h             |  4 ++--
 init/Kconfig                           | 10 +++++++--
 mm/memcontrol.c                        | 30 ++++++++++++++++----------
 6 files changed, 32 insertions(+), 18 deletions(-)

-- 
2.34.1
[PATCH OLK-5.10] [Backport] ima: Handle error code returned by ima_filter_rule_match()
by Zhao Yipeng 23 Dec '25

mainline inclusion
from mainline-v6.19-rc1
commit 738c9738e690f5cea24a3ad6fd2d9a323cf614f6
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/7810
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

In ima_match_rules(), if ima_filter_rule_match() returns -ENOENT due to
the rule being NULL, the function incorrectly skips the 'if (!rc)' check
and sets 'result = true'. The LSM rule is considered a match, causing
extra files to be measured by IMA.

This issue can be reproduced in the following scenario: After unloading
the SELinux policy module via 'semodule -d', if an IMA measurement is
triggered before ima_lsm_rules is updated, in ima_match_rules(), the
first call to ima_filter_rule_match() returns -ESTALE. This causes the
code to enter the 'if (rc == -ESTALE && !rule_reinitialized)' block,
perform ima_lsm_copy_rule() and retry. In ima_lsm_copy_rule(), since
the SELinux module has been removed, the rule becomes NULL, and the
second call to ima_filter_rule_match() returns -ENOENT. This bypasses
the 'if (!rc)' check and results in a false match.

Call trace:
 selinux_audit_rule_match+0x310/0x3b8
 security_audit_rule_match+0x60/0xa0
 ima_match_rules+0x2e4/0x4a0
 ima_match_policy+0x9c/0x1e8
 ima_get_action+0x48/0x60
 process_measurement+0xf8/0xa98
 ima_bprm_check+0x98/0xd8
 security_bprm_check+0x5c/0x78
 search_binary_handler+0x6c/0x318
 exec_binprm+0x58/0x1b8
 bprm_execve+0xb8/0x130
 do_execveat_common.isra.0+0x1a8/0x258
 __arm64_sys_execve+0x48/0x68
 invoke_syscall+0x50/0x128
 el0_svc_common.constprop.0+0xc8/0xf0
 do_el0_svc+0x24/0x38
 el0_svc+0x44/0x200
 el0t_64_sync_handler+0x100/0x130
 el0t_64_sync+0x3c8/0x3d0

Fix this by changing 'if (!rc)' to 'if (rc <= 0)' to ensure that error
codes like -ENOENT do not bypass the check and accidentally result in a
successful match.

Fixes: 4af4662fa4a9d ("integrity: IMA policy")
Signed-off-by: Zhao Yipeng <zhaoyipeng5(a)huawei.com>
Reviewed-by: Roberto Sassu <roberto.sassu(a)huawei.com>
Signed-off-by: Mimi Zohar <zohar(a)linux.ibm.com>
Signed-off-by: Zhao Yipeng <zhaoyipeng5(a)huawei.com>
---
 security/integrity/ima/ima_policy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index fa9942e126b1..e2c617a824a5 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -640,7 +640,7 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode,
 				goto retry;
 			}
 		}
-		if (!rc) {
+		if (rc <= 0) {
 			result = false;
 			goto out;
 		}
-- 
2.34.1
[PATCH OLK-6.6] [Backport] ima: Handle error code returned by ima_filter_rule_match()
by Zhao Yipeng 23 Dec '25

mainline inclusion
from mainline-v6.19-rc1
commit 738c9738e690f5cea24a3ad6fd2d9a323cf614f6
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/7810
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

--------------------------------

In ima_match_rules(), if ima_filter_rule_match() returns -ENOENT due to
the rule being NULL, the function incorrectly skips the 'if (!rc)' check
and sets 'result = true'. The LSM rule is considered a match, causing
extra files to be measured by IMA.

This issue can be reproduced in the following scenario: After unloading
the SELinux policy module via 'semodule -d', if an IMA measurement is
triggered before ima_lsm_rules is updated, in ima_match_rules(), the
first call to ima_filter_rule_match() returns -ESTALE. This causes the
code to enter the 'if (rc == -ESTALE && !rule_reinitialized)' block,
perform ima_lsm_copy_rule() and retry. In ima_lsm_copy_rule(), since
the SELinux module has been removed, the rule becomes NULL, and the
second call to ima_filter_rule_match() returns -ENOENT. This bypasses
the 'if (!rc)' check and results in a false match.

Call trace:
 selinux_audit_rule_match+0x310/0x3b8
 security_audit_rule_match+0x60/0xa0
 ima_match_rules+0x2e4/0x4a0
 ima_match_policy+0x9c/0x1e8
 ima_get_action+0x48/0x60
 process_measurement+0xf8/0xa98
 ima_bprm_check+0x98/0xd8
 security_bprm_check+0x5c/0x78
 search_binary_handler+0x6c/0x318
 exec_binprm+0x58/0x1b8
 bprm_execve+0xb8/0x130
 do_execveat_common.isra.0+0x1a8/0x258
 __arm64_sys_execve+0x48/0x68
 invoke_syscall+0x50/0x128
 el0_svc_common.constprop.0+0xc8/0xf0
 do_el0_svc+0x24/0x38
 el0_svc+0x44/0x200
 el0t_64_sync_handler+0x100/0x130
 el0t_64_sync+0x3c8/0x3d0

Fix this by changing 'if (!rc)' to 'if (rc <= 0)' to ensure that error
codes like -ENOENT do not bypass the check and accidentally result in a
successful match.

Fixes: 4af4662fa4a9d ("integrity: IMA policy")
Signed-off-by: Zhao Yipeng <zhaoyipeng5(a)huawei.com>
Reviewed-by: Roberto Sassu <roberto.sassu(a)huawei.com>
Signed-off-by: Mimi Zohar <zohar(a)linux.ibm.com>
Signed-off-by: Zhao Yipeng <zhaoyipeng5(a)huawei.com>
---
 security/integrity/ima/ima_policy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index a01f30d2d8cc..ba4f617366a5 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -729,7 +729,7 @@ static bool ima_match_rules(struct ima_rule_entry *rule,
 				goto retry;
 			}
 		}
-		if (!rc) {
+		if (rc <= 0) {
 			result = false;
 			goto out;
 		}
-- 
2.34.1
