mailweb.openeuler.org
kernel@openeuler.org

August 2023

  • 55 participants
  • 222 discussions
[openEuler-1.0-LTS 1/5] workqueue: Rename "delayed" (delayed by active management) to "inactive"
by Zeng Heng, 03 Aug '23
From: Lai Jiangshan <laijs(a)linux.alibaba.com>

mainline inclusion
from mainline-v5.15-rc1
commit f97a4a1a3f8769e3452885967955e21c88f3f263
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7LRJF
DTS: DTS2023072811781
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…

---------------------------

There are two kinds of "delayed" work items in the workqueue subsystem.
One is timer-delayed work items, which are visible to workqueue users.
The other is work items delayed by active management, which are not
directly visible to workqueue users. Using the word "delayed" for both
kinds caused some ambiguity.

This patch renames the latter kind (delayed by active management) to
"inactive", because it is used for workqueue active management and most
of its related symbols are named with "active" or "activate".

Every "delayed" and "DELAYED" was carefully checked and renamed one by
one to avoid accidentally touching the other, timer-delayed kind.

No functional change intended.

Signed-off-by: Lai Jiangshan <laijs(a)linux.alibaba.com>
Signed-off-by: Tejun Heo <tj(a)kernel.org>

conflict:
        kernel/workqueue.c

Signed-off-by: Zeng Heng <zengheng4(a)huawei.com>
---
 include/linux/workqueue.h |  4 +--
 kernel/workqueue.c        | 58 +++++++++++++++++++--------------------
 2 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 6f2b042fc44c..3ac0fa0c822d 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -30,7 +30,7 @@ void delayed_work_timer_fn(struct timer_list *t);
 enum {
        WORK_STRUCT_PENDING_BIT = 0,    /* work item is pending execution */
-       WORK_STRUCT_DELAYED_BIT = 1,    /* work item is delayed */
+       WORK_STRUCT_INACTIVE_BIT= 1,    /* work item is inactive */
        WORK_STRUCT_PWQ_BIT     = 2,    /* data points to pwq */
        WORK_STRUCT_LINKED_BIT  = 3,    /* next work is linked to this one */
 #ifdef CONFIG_DEBUG_OBJECTS_WORK
@@ -43,7 +43,7 @@ enum {
        WORK_STRUCT_COLOR_BITS  = 4,
        WORK_STRUCT_PENDING     = 1 << WORK_STRUCT_PENDING_BIT,
-       WORK_STRUCT_DELAYED     = 1 << WORK_STRUCT_DELAYED_BIT,
+       WORK_STRUCT_INACTIVE    = 1 << WORK_STRUCT_INACTIVE_BIT,
        WORK_STRUCT_PWQ         = 1 << WORK_STRUCT_PWQ_BIT,
        WORK_STRUCT_LINKED      = 1 << WORK_STRUCT_LINKED_BIT,
 #ifdef CONFIG_DEBUG_OBJECTS_WORK
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 04b558a267ae..fe3c4a292909 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -207,7 +207,7 @@ struct pool_workqueue {
                                        /* L: nr of in_flight works */
        int             nr_active;      /* L: nr of active works */
        int             max_active;     /* L: max active works */
-       struct list_head delayed_works; /* L: delayed works */
+       struct list_head inactive_works; /* L: inactive works */
        struct list_head pwqs_node;     /* WR: node on wq->pwqs */
        struct list_head mayday_node;   /* MD: node on wq->maydays */
@@ -1105,7 +1105,7 @@ static void put_pwq_unlocked(struct pool_workqueue *pwq)
        }
 }
-static void pwq_activate_delayed_work(struct work_struct *work)
+static void pwq_activate_inactive_work(struct work_struct *work)
 {
        struct pool_workqueue *pwq = get_work_pwq(work);
@@ -1113,16 +1113,16 @@ static void pwq_activate_delayed_work(struct work_struct *work)
        if (list_empty(&pwq->pool->worklist))
                pwq->pool->watchdog_ts = jiffies;
        move_linked_works(work, &pwq->pool->worklist, NULL);
-       __clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
+       __clear_bit(WORK_STRUCT_INACTIVE_BIT, work_data_bits(work));
        pwq->nr_active++;
 }
-static void pwq_activate_first_delayed(struct pool_workqueue *pwq)
+static void pwq_activate_first_inactive(struct pool_workqueue *pwq)
 {
-       struct work_struct *work = list_first_entry(&pwq->delayed_works,
+       struct work_struct *work = list_first_entry(&pwq->inactive_works,
                                                    struct work_struct, entry);
-       pwq_activate_delayed_work(work);
+       pwq_activate_inactive_work(work);
 }
 /**
@@ -1145,10 +1145,10 @@ static void pwq_dec_nr_in_flight(struct pool_workqueue *pwq, int color)
        pwq->nr_in_flight[color]--;
        pwq->nr_active--;
-       if (!list_empty(&pwq->delayed_works)) {
-               /* one down, submit a delayed one */
+       if (!list_empty(&pwq->inactive_works)) {
+               /* one down, submit an inactive one */
                if (pwq->nr_active < pwq->max_active)
-                       pwq_activate_first_delayed(pwq);
+                       pwq_activate_first_inactive(pwq);
        }
        /* is flush in progress and are we at the flushing tip? */
@@ -1246,14 +1246,14 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
                debug_work_deactivate(work);
                /*
-                * A delayed work item cannot be grabbed directly because
+                * An inactive work item cannot be grabbed directly because
                 * it might have linked NO_COLOR work items which, if left
-                * on the delayed_list, will confuse pwq->nr_active
+                * on the inactive_works list, will confuse pwq->nr_active
                 * management later on and cause stall. Make sure the work
                 * item is activated before grabbing.
                 */
-               if (*work_data_bits(work) & WORK_STRUCT_DELAYED)
-                       pwq_activate_delayed_work(work);
+               if (*work_data_bits(work) & WORK_STRUCT_INACTIVE)
+                       pwq_activate_inactive_work(work);
                list_del_init(&work->entry);
                pwq_dec_nr_in_flight(pwq, get_work_color(work));
@@ -1451,8 +1451,8 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
                if (list_empty(worklist))
                        pwq->pool->watchdog_ts = jiffies;
        } else {
-               work_flags |= WORK_STRUCT_DELAYED;
-               worklist = &pwq->delayed_works;
+               work_flags |= WORK_STRUCT_INACTIVE;
+               worklist = &pwq->inactive_works;
        }
        debug_work_activate(work);
@@ -2496,7 +2496,7 @@ static int rescuer_thread(void *__rescuer)
                        /*
                         * The above execution of rescued work items could
                         * have created more to rescue through
-                        * pwq_activate_first_delayed() or chained
+                        * pwq_activate_first_inactive() or chained
                         * queueing. Let's put @pwq back on mayday list so
                         * that such back-to-back work items, which may be
                         * being used to relieve memory pressure, don't
@@ -2922,7 +2922,7 @@ void drain_workqueue(struct workqueue_struct *wq)
                bool drained;
                spin_lock_irq(&pwq->pool->lock);
-               drained = !pwq->nr_active && list_empty(&pwq->delayed_works);
+               drained = !pwq->nr_active && list_empty(&pwq->inactive_works);
                spin_unlock_irq(&pwq->pool->lock);
                if (drained)
@@ -3703,7 +3703,7 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
  * @pwq: target pool_workqueue
  *
  * If @pwq isn't freezing, set @pwq->max_active to the associated
- * workqueue's saved_max_active and activate delayed work items
+ * workqueue's saved_max_active and activate inactive work items
  * accordingly. If @pwq is freezing, clear @pwq->max_active to zero.
  */
 static void pwq_adjust_max_active(struct pool_workqueue *pwq)
@@ -3732,9 +3732,9 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
                pwq->max_active = wq->saved_max_active;
-               while (!list_empty(&pwq->delayed_works) &&
+               while (!list_empty(&pwq->inactive_works) &&
                       pwq->nr_active < pwq->max_active) {
-                       pwq_activate_first_delayed(pwq);
+                       pwq_activate_first_inactive(pwq);
                        kick = true;
                }
@@ -3765,7 +3765,7 @@ static void init_pwq(struct pool_workqueue *pwq, struct workqueue_struct *wq,
        pwq->wq = wq;
        pwq->flush_color = -1;
        pwq->refcnt = 1;
-       INIT_LIST_HEAD(&pwq->delayed_works);
+       INIT_LIST_HEAD(&pwq->inactive_works);
        INIT_LIST_HEAD(&pwq->pwqs_node);
        INIT_LIST_HEAD(&pwq->mayday_node);
        INIT_WORK(&pwq->unbound_release_work, pwq_unbound_release_workfn);
@@ -4386,7 +4386,7 @@ void destroy_workqueue(struct workqueue_struct *wq)
                if (WARN_ON((pwq != wq->dfl_pwq) && (pwq->refcnt > 1)) ||
                    WARN_ON(pwq->nr_active) ||
-                   WARN_ON(!list_empty(&pwq->delayed_works))) {
+                   WARN_ON(!list_empty(&pwq->inactive_works))) {
                        mutex_unlock(&wq->mutex);
                        show_workqueue_state();
                        return;
@@ -4527,7 +4527,7 @@ bool workqueue_congested(int cpu, struct workqueue_struct *wq)
        else
                pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
-       ret = !list_empty(&pwq->delayed_works);
+       ret = !list_empty(&pwq->inactive_works);
        rcu_read_unlock_sched();
        return ret;
@@ -4722,11 +4722,11 @@ static void show_pwq(struct pool_workqueue *pwq)
                pr_cont("\n");
        }
-       if (!list_empty(&pwq->delayed_works)) {
+       if (!list_empty(&pwq->inactive_works)) {
                bool comma = false;
-               pr_info("    delayed:");
-               list_for_each_entry(work, &pwq->delayed_works, entry) {
+               pr_info("    inactive:");
+               list_for_each_entry(work, &pwq->inactive_works, entry) {
                        pr_cont_work(comma, work);
                        comma = !(*work_data_bits(work) & WORK_STRUCT_LINKED);
                }
@@ -4756,7 +4756,7 @@ void show_workqueue_state(void)
                bool idle = true;
                for_each_pwq(pwq, wq) {
-                       if (pwq->nr_active || !list_empty(&pwq->delayed_works)) {
+                       if (pwq->nr_active || !list_empty(&pwq->inactive_works)) {
                                idle = false;
                                break;
                        }
@@ -4768,7 +4768,7 @@ void show_workqueue_state(void)
                for_each_pwq(pwq, wq) {
                        spin_lock_irqsave(&pwq->pool->lock, flags);
-                       if (pwq->nr_active || !list_empty(&pwq->delayed_works))
+                       if (pwq->nr_active || !list_empty(&pwq->inactive_works))
                                show_pwq(pwq);
                        spin_unlock_irqrestore(&pwq->pool->lock, flags);
                        /*
@@ -5143,7 +5143,7 @@ EXPORT_SYMBOL_GPL(work_on_cpu_safe);
  * freeze_workqueues_begin - begin freezing workqueues
  *
  * Start freezing workqueues. After this function returns, all freezable
- * workqueues will queue new works to their delayed_works list instead of
+ * workqueues will queue new works to their inactive_works list instead of
  * pool->worklist.
  *
  * CONTEXT:
--
2.25.1
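The mechanism behind the rename is easiest to see in miniature: a pool_workqueue admits at most max_active items to the worker pool, parks the overflow on a separate list, and promotes one parked item each time an active one finishes. Below is a minimal, runnable userspace C model of that throttling; all names and numbers are illustrative, not the kernel's.

#include <stdio.h>

#define MAX_ACTIVE 2
#define QUEUE_CAP  8

/* Illustrative model of a pool_workqueue: items beyond max_active
 * are held on an "inactive" FIFO ring instead of being handed to
 * workers right away. */
struct pwq_model {
        int nr_active;           /* currently runnable items */
        int inactive[QUEUE_CAP]; /* parked items */
        int head, tail, nr_inactive;
};

static void run_work(int id)
{
        printf("executing work %d\n", id);
}

static void queue_work_model(struct pwq_model *pwq, int id)
{
        if (pwq->nr_active < MAX_ACTIVE) {
                pwq->nr_active++;
                run_work(id);
        } else {
                /* Analogous to setting WORK_STRUCT_INACTIVE and
                 * parking the item on pwq->inactive_works. */
                pwq->inactive[pwq->tail++ % QUEUE_CAP] = id;
                pwq->nr_inactive++;
                printf("work %d parked as inactive\n", id);
        }
}

static void work_done_model(struct pwq_model *pwq)
{
        pwq->nr_active--;
        /* Mirrors pwq_dec_nr_in_flight(): one down, activate an
         * inactive one. */
        if (pwq->nr_inactive > 0) {
                int id = pwq->inactive[pwq->head++ % QUEUE_CAP];

                pwq->nr_inactive--;
                pwq->nr_active++;
                run_work(id);
        }
}

int main(void)
{
        struct pwq_model pwq = {0};

        for (int i = 1; i <= 4; i++)
                queue_work_model(&pwq, i); /* items 3 and 4 parked */
        work_done_model(&pwq);             /* activates item 3 */
        work_done_model(&pwq);             /* ... then item 4 */
        return 0;
}

Running it shows items 3 and 4 being parked and then activated in FIFO order as earlier items complete, which is exactly the state the WORK_STRUCT_INACTIVE bit tracks.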
[PATCH OLK-5.10] Revert "arm64/mpam: Fix mpam corrupt when cpu online"
by Wang ShaoBo, 03 Aug '23
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7PN0A
CVE: NA

-------------------------------------------------

BUG reported when setting up the MPAM driver:

[Thu Jul 27 12:15:54 2023] BUG: sleeping function called from invalid context at include/linux/percpu-rwsem.h:49
[Thu Jul 27 12:15:54 2023] in_atomic(): 0, irqs_disabled(): 128, non_block: 0, pid: 593, name: kworker/72:1
[Thu Jul 27 12:15:54 2023] CPU: 72 PID: 593 Comm: kworker/72:1 Not tainted 5.10.0-03467-g02e1abb0f821-dirty #1
[Thu Jul 27 12:15:54 2023] Hardware name: Huawei TaiShan 2280 V2/BC82AMDDA, BIOS 1.79 08/21/2021
[Thu Jul 27 12:15:54 2023] Loaded X.509 cert 'Build time autogenerated kernel key: 9ae2dc86231d0b23cd114b5ed4089cac17566b0f'
[Thu Jul 27 12:15:54 2023] Workqueue: events mpam_enable
[Thu Jul 27 12:15:54 2023] Load PGP public keys
[Thu Jul 27 12:15:54 2023] Call trace:
[Thu Jul 27 12:15:54 2023]  dump_backtrace+0x0/0x30c
[Thu Jul 27 12:15:54 2023]  show_stack+0x20/0x30
[Thu Jul 27 12:15:54 2023]  dump_stack+0x11c/0x174
[Thu Jul 27 12:15:54 2023]  ___might_sleep+0x15c/0x1a0
[Thu Jul 27 12:15:54 2023]  __might_sleep+0x7c/0x100
[Thu Jul 27 12:15:54 2023]  cpus_read_lock+0x3c/0x110
[Thu Jul 27 12:15:54 2023]  __cpuhp_setup_state+0x3c/0x80
[Thu Jul 27 12:15:54 2023]  mpam_enable+0x148/0x3a4
[Thu Jul 27 12:15:54 2023]  process_one_work+0x3cc/0x984
[Thu Jul 27 12:15:54 2023]  worker_thread+0x2b0/0x71c
[Thu Jul 27 12:15:54 2023]  kthread+0x1e0/0x220
[Thu Jul 27 12:15:54 2023]  ret_from_fork+0x10/0x18
[Thu Jul 27 12:15:54 2023] kmemleak: Kernel memory leak detector initialized (mem pool available: 11650)
[Thu Jul 27 12:15:54 2023] kmemleak: Automatic memory scanning thread started
[Thu Jul 27 12:15:54 2023] cryptd: max_cpu_qlen set to 1000
[Thu Jul 27 12:15:54 2023] Key type encrypted registered
[Thu Jul 27 12:15:54 2023] AppArmor: AppArmor sha1 policy hashing enabled
[Thu Jul 27 12:15:54 2023] integrity: Loading X.509 certificate: UEFI:db

Patch bc9e3f9895ef2 ("arm64/mpam: Fix mpam corrupt when cpu online")
addressed a 'Bad PC' BUG but missed the real conclusion: disabling irqs
before calling cpuhp_setup_state() only changes the probability of
reproducing it. The 'Bad PC' BUG was triggered because mpam_enable()
was an __init function that could be scheduled out after calling
__cpuhp_setup_state() -> __might_sleep(), so the memory holding
mpam_enable() might already have been released by the time it was
scheduled back. Since mpam_enable() has been changed to a non-init
function, we can revert commit
bc9e3f9895ef257b76601291d99c87c13c7c31df and solve both problems.

Fixes: bc9e3f9895ef2 ("arm64/mpam: Fix mpam corrupt when cpu online")
Signed-off-by: Wang ShaoBo <bobo.shaobowang(a)huawei.com>
---
 arch/arm64/kernel/mpam/mpam_device.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/arm64/kernel/mpam/mpam_device.c b/arch/arm64/kernel/mpam/mpam_device.c
index c59f7f09308dc..bb88db115a86c 100644
--- a/arch/arm64/kernel/mpam/mpam_device.c
+++ b/arch/arm64/kernel/mpam/mpam_device.c
@@ -596,11 +596,9 @@ static void mpam_enable(struct work_struct *work)
                pr_err("Failed to setup/init resctrl\n");
        mutex_unlock(&mpam_devices_lock);
-       local_irq_disable();
        mpam_cpuhp_state = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
                                "mpam:online", mpam_cpu_online,
                                mpam_cpu_offline);
-       local_irq_enable();
        if (mpam_cpuhp_state <= 0)
                pr_err("Failed to re-register 'dyn' cpuhp callbacks");
        mutex_unlock(&mpam_cpuhp_lock);
--
2.25.1
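The rule the revert relies on: cpuhp_setup_state() takes cpus_read_lock(), a percpu-rwsem that may sleep, which is why calling it between local_irq_disable()/local_irq_enable() trips the might_sleep() check in the splat above. A small, runnable userspace sketch of that debug check follows; it is illustrative only, the real logic lives in the kernel's ___might_sleep().

#include <stdio.h>
#include <stdlib.h>

/* Userspace model of the kernel's might_sleep() debug check:
 * sleeping primitives assert that the caller has not disabled
 * interrupts. All names here are illustrative. */
static int irqs_disabled_flag;

static void might_sleep_check(const char *caller)
{
        if (irqs_disabled_flag) {
                fprintf(stderr,
                        "BUG: sleeping function called from invalid context: %s\n",
                        caller);
                exit(1);
        }
}

static void cpus_read_lock_model(void)
{
        might_sleep_check("cpus_read_lock"); /* percpu-rwsem may sleep */
}

static int cpuhp_setup_state_model(void)
{
        cpus_read_lock_model();
        /* ... register online/offline callbacks ... */
        return 0;
}

int main(void)
{
        /* The reverted code did the equivalent of: */
        irqs_disabled_flag = 1;       /* local_irq_disable() */
        /* cpuhp_setup_state_model(); would abort with the BUG above */
        irqs_disabled_flag = 0;       /* local_irq_enable() */

        /* The fixed code simply calls it in sleepable context: */
        return cpuhp_setup_state_model();
}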
[PATCH openEuler-22.03-LTS] nvme-pci: clear the prp2 field when not used
by Yong Hu, 03 Aug '23
From: Lei Rao <lei.rao(a)intel.com>

stable inclusion
from stable-v5.10.188
commit 74b139c63f0775cf79266e9d9546c62b73fb3385
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7PZZC
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

[ Upstream commit a56ea6147facce4ac1fc38675455f9733d96232b ]

If the prp2 field is not filled in nvme_setup_prp_simple(), it contains
garbage data. According to the NVMe spec, prp2 is reserved if the data
transfer does not cross a memory page boundary, so clear it to zero when
it is not used.

Signed-off-by: Lei Rao <lei.rao(a)intel.com>
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yong Hu <yong.hu(a)windriver.com>
---
 drivers/nvme/host/pci.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c09ee462b048..fbbbfdea076a 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -817,6 +817,8 @@ static blk_status_t nvme_setup_prp_simple(struct nvme_dev *dev,
        cmnd->dptr.prp1 = cpu_to_le64(iod->first_dma);
        if (bv->bv_len > first_prp_len)
                cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma + first_prp_len);
+       else
+               cmnd->dptr.prp2 = 0;
        return BLK_STS_OK;
 }
--
2.34.1
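For context on why a stale prp2 matters: in the NVMe PRP scheme, PRP1 carries the buffer address and PRP2 is only consulted when the transfer crosses one controller page boundary; otherwise the field is reserved. Below is a minimal, runnable sketch of that decision, mirroring the fixed logic; the types and page size are illustrative, not the driver's.

#include <stdint.h>
#include <stdio.h>

#define CTRL_PAGE_SIZE 4096u

/* Illustrative PRP setup for a transfer spanning at most two pages:
 * PRP1 points at the buffer; PRP2 only matters when the buffer
 * crosses one page boundary and must otherwise be zero (reserved). */
static void setup_prp_simple(uint64_t dma_addr, uint32_t len,
                             uint64_t *prp1, uint64_t *prp2)
{
        uint32_t offset = dma_addr & (CTRL_PAGE_SIZE - 1);
        uint32_t first_prp_len = CTRL_PAGE_SIZE - offset;

        *prp1 = dma_addr;
        if (len > first_prp_len)
                *prp2 = dma_addr + first_prp_len; /* second page */
        else
                *prp2 = 0; /* reserved: must be cleared, not left stale */
}

int main(void)
{
        uint64_t prp1, prp2;

        /* 200 bytes starting 3840 bytes into a page: fits, prp2 = 0 */
        setup_prp_simple(0x1000f00, 200, &prp1, &prp2);
        printf("prp1=%#llx prp2=%#llx\n",
               (unsigned long long)prp1, (unsigned long long)prp2);

        /* 4096 bytes from the same offset: crosses into the next page */
        setup_prp_simple(0x1000f00, 4096, &prp1, &prp2);
        printf("prp1=%#llx prp2=%#llx\n",
               (unsigned long long)prp1, (unsigned long long)prp2);
        return 0;
}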
[PATCH OLK-5.10] nvme-pci: clear the prp2 field when not used
by Yong Hu, 03 Aug '23
From: Lei Rao <lei.rao(a)intel.com>

stable inclusion
from stable-v5.10.188
commit 74b139c63f0775cf79266e9d9546c62b73fb3385
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I7PZZC
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…

--------------------------------

[ Upstream commit a56ea6147facce4ac1fc38675455f9733d96232b ]

If the prp2 field is not filled in nvme_setup_prp_simple(), it contains
garbage data. According to the NVMe spec, prp2 is reserved if the data
transfer does not cross a memory page boundary, so clear it to zero when
it is not used.

Signed-off-by: Lei Rao <lei.rao(a)intel.com>
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Yong Hu <yong.hu(a)windriver.com>
---
 drivers/nvme/host/pci.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c30ab2cf2533..8965ea20f5ef 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -817,6 +817,8 @@ static blk_status_t nvme_setup_prp_simple(struct nvme_dev *dev,
        cmnd->dptr.prp1 = cpu_to_le64(iod->first_dma);
        if (bv->bv_len > first_prp_len)
                cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma + first_prp_len);
+       else
+               cmnd->dptr.prp2 = 0;
        return BLK_STS_OK;
 }
--
2.33.0
[PATCH OLK-5.10] sched: Add feature 'UTIL_TASKGROUP' for dynamic affinity
by Hui Tang, 03 Aug '23
hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I526XC

--------------------------------

If the feature is enabled, the util_avg of the bottom-level task group
is used; otherwise, the util_avg of the rq's cfs_rq is used.

Signed-off-by: Hui Tang <tanghui20(a)huawei.com>
---
 kernel/sched/fair.c     | 19 ++++++++++++++-----
 kernel/sched/features.h |  7 +++++++
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5c8b48150862..f6068f1c28bc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7097,6 +7097,17 @@ static inline bool prefer_cpus_valid(struct task_struct *p)
                cpumask_subset(p->prefer_cpus, p->cpus_ptr);
 }
+static inline unsigned long taskgroup_cpu_util(struct task_group *tg,
+                                              int cpu)
+{
+#ifdef CONFIG_FAIR_GROUP_SCHED
+       if (tg->se[cpu] && sched_feat(DA_UTIL_TASKGROUP))
+               return tg->se[cpu]->avg.util_avg;
+#endif
+
+       return cpu_util(cpu);
+}
+
 /*
  * set_task_select_cpus: select the cpu range for task
  * @p: the task whose available cpu range will to set
@@ -7127,13 +7138,11 @@ static void set_task_select_cpus(struct task_struct *p, int *idlest_cpu,
        rcu_read_lock();
        tg = task_group(p);
        for_each_cpu(cpu, p->prefer_cpus) {
-               if (unlikely(!tg->se[cpu]))
-                       continue;
-
                if (idlest_cpu && available_idle_cpu(cpu)) {
                        *idlest_cpu = cpu;
                } else if (idlest_cpu) {
-                       spare = (long)(capacity_of(cpu) - tg->se[cpu]->avg.util_avg);
+                       spare = (long)(capacity_of(cpu) -
+                                      taskgroup_cpu_util(tg, cpu));
                        if (spare > min_util) {
                                min_util = spare;
                                *idlest_cpu = cpu;
@@ -7148,7 +7157,7 @@ static void set_task_select_cpus(struct task_struct *p, int *idlest_cpu,
                        return;
                }
-               util_avg_sum += tg->se[cpu]->avg.util_avg;
+               util_avg_sum += taskgroup_cpu_util(tg, cpu);
                tg_capacity += capacity_of(cpu);
        }
        rcu_read_unlock();
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index fef48f5be2fa..76fade025c4b 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -106,3 +106,10 @@ SCHED_FEAT(UTIL_EST_FASTUP, true)
 SCHED_FEAT(ALT_PERIOD, true)
 SCHED_FEAT(BASE_SLICE, true)
+
+#ifdef CONFIG_QOS_SCHED_DYNAMIC_AFFINITY
+/*
+ * Use util_avg of bottom-level task group
+ */
+SCHED_FEAT(DA_UTIL_TASKGROUP, true)
+#endif
--
2.17.1
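The patch feeds one of two utilization views into the same spare-capacity scan: capacity_of(cpu) minus either the bottom-level task group's util_avg or the CPU-wide utilization. A compact, runnable userspace model of that scan follows; all figures are invented for illustration.

#include <stdio.h>

#define NR_CPUS 4

/* Illustrative per-CPU numbers: a capacity and two utilization views,
 * mirroring the choice between taskgroup util_avg and rq-wide util. */
static const long capacity[NR_CPUS] = {1024, 1024, 512, 512};
static const long tg_util[NR_CPUS]  = { 800,  100, 300, 500};
static const long rq_util[NR_CPUS]  = { 900,  200, 350, 500};

static int da_util_taskgroup = 1; /* models sched_feat(DA_UTIL_TASKGROUP) */

static long util_of(int cpu)
{
        return da_util_taskgroup ? tg_util[cpu] : rq_util[cpu];
}

/* Pick the preferred CPU with the largest spare capacity. */
static int find_idlest(void)
{
        long best_spare = -1;
        int best = -1;

        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                long spare = capacity[cpu] - util_of(cpu);

                if (spare > best_spare) {
                        best_spare = spare;
                        best = cpu;
                }
        }
        return best;
}

int main(void)
{
        printf("idlest cpu: %d\n", find_idlest()); /* cpu 1: 1024 - 100 */
        return 0;
}

Flipping da_util_taskgroup to 0 changes only which utilization column is subtracted, not the scan itself; that is the same split the sched_feat() toggle gives the kernel.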
[PATCH openEuler-1.0-LTS 0/2] can: raw: fix receiver memory leak
by Ziyang Xuan, 03 Aug '23
Backport can/raw receiver memory leak fix commits.

Eric Dumazet (1):
  can: raw: fix lockdep issue in raw_release()

Ziyang Xuan (1):
  can: raw: fix receiver memory leak

 net/can/raw.c | 68 ++++++++++++++++++++++-----------------------------
 1 file changed, 29 insertions(+), 39 deletions(-)

--
2.25.1
[PATCH openEuler-1.0-LTS v2 0/2] Fix host zero page refcount overflow caused by kvm
by Lei Chen, 03 Aug '23
v2:
1. use the full commit id instead of the abbreviation
2. fix the wrong mainline tag

Sean Christopherson (1):
  KVM: Don't set Accessed/Dirty bits for ZERO_PAGE

Zhuang Yanying (1):
  KVM: fix overflow of zero page refcount with ksm running

 virt/kvm/kvm_main.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

--
2.34.1
[PATCH] coresight: etm4x: Don't bother the user with nonessential log message
by Tian Tao, 03 Aug '23
Each CPU prints the following log when initializing ETM:
"coresight etm1: CPU1: etm v4.5 initialized". With many CPUs, e.g. 128,
this fills a whole screen. Replace dev_info with dev_dbg so the message
is printed only when needed.

Signed-off-by: Tian Tao <tiantao6(a)hisilicon.com>
---
 drivers/hwtracing/coresight/coresight-etm4x-core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index 7e307022303a..7b51e8594fd5 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -2033,7 +2033,7 @@ static int etm4_add_coresight_dev(struct etm4_init_arg *init_arg)
        etmdrvdata[drvdata->cpu] = drvdata;
-       dev_info(&drvdata->csdev->dev, "CPU%d: %s v%d.%d initialized\n",
+       dev_dbg(&drvdata->csdev->dev, "CPU%d: %s v%d.%d initialized\n",
                 drvdata->cpu, type_name, major, minor);
        if (boot_enable) {
--
2.33.0
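Note that demoting the message to dev_dbg does not lose it entirely: assuming the kernel is built with CONFIG_DYNAMIC_DEBUG, the print can be re-enabled at runtime by writing a match for the file (for example "file coresight-etm4x-core.c +p") to /sys/kernel/debug/dynamic_debug/control; without dynamic debug, dev_dbg calls compile out unless DEBUG is defined for the file.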
[PATCH] coresight: etm4x: Don't bother the user with nonessential log message
by Tian Tao, 02 Aug '23
Each CPU prints the following log when initializing ETM:
"coresight etm1: CPU1: etm v4.5 initialized". With many CPUs, e.g. 128,
this fills a whole screen. Replace dev_info with dev_dbg so the message
is printed only when needed.

Signed-off-by: Tian Tao <tiantao6(a)hisilicon.com>
---
 drivers/hwtracing/coresight/coresight-etm4x-core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index 7e307022303a..7b51e8594fd5 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -2033,7 +2033,7 @@ static int etm4_add_coresight_dev(struct etm4_init_arg *init_arg)
        etmdrvdata[drvdata->cpu] = drvdata;
-       dev_info(&drvdata->csdev->dev, "CPU%d: %s v%d.%d initialized\n",
+       dev_dbg(&drvdata->csdev->dev, "CPU%d: %s v%d.%d initialized\n",
                 drvdata->cpu, type_name, major, minor);
        if (boot_enable) {
--
2.33.0