Support haltpoll feature to improve performance
Ankur Arora (10):
  asm-generic: add barrier smp_cond_load_relaxed_timeout()
  cpuidle/poll_state: poll via smp_cond_load_relaxed_timeout()
  cpuidle: rename ARCH_HAS_CPU_RELAX to ARCH_HAS_OPTIMIZED_POLL
  arm64: barrier: add support for smp_cond_relaxed_timeout()
  arm64: add support for polling in idle
  cpuidle-haltpoll: condition on ARCH_CPUIDLE_HALTPOLL
  arm64: idle: export arch_cpu_idle
  arm64: support cpuidle-haltpoll
  arm64/delay: add some constants to a separate header
  arm64: support WFET in smp_cond_relaxed_timeout()
Joao Martins (4):
  Kconfig: move ARCH_HAS_OPTIMIZED_POLL to arch/Kconfig
  arm64: define TIF_POLLING_NRFLAG
  cpuidle-haltpoll: define arch_haltpoll_want()
  governors/haltpoll: drop kvm_para_available() check
Lifeng Zheng (1):
  ACPI: processor_idle: Support polling state for LPI
lishusen (1):
  cpuidle: edit cpuidle-haltpoll driver module parameter
 arch/Kconfig                              |   3 +
 arch/arm64/Kconfig                        |   7 ++
 arch/arm64/configs/openeuler_defconfig    |   3 +
 arch/arm64/include/asm/barrier.h          |  62 ++++++++++++-
 arch/arm64/include/asm/cmpxchg.h          |  26 ++++--
 arch/arm64/include/asm/cpuidle_haltpoll.h |  20 +++++
 arch/arm64/include/asm/delay-const.h      |  15 ++++
 arch/arm64/include/asm/thread_info.h      |   2 +
 arch/arm64/kernel/idle.c                  |   1 +
 arch/x86/Kconfig                          |   5 +-
 arch/x86/include/asm/cpuidle_haltpoll.h   |   1 +
 arch/x86/kernel/kvm.c                     |  13 +++
 drivers/acpi/processor_idle.c             |  43 +++++++--
 drivers/cpuidle/Kconfig                   |   5 +-
 drivers/cpuidle/Makefile                  |   2 +-
 drivers/cpuidle/cpuidle-haltpoll.c        | 105 +++++++++++++++++++---
 drivers/cpuidle/governors/haltpoll.c      |   6 +-
 drivers/cpuidle/poll_state.c              |  31 +++----
 drivers/idle/Kconfig                      |   1 +
 include/asm-generic/barrier.h             |  42 +++++++++
 include/linux/cpuidle.h                   |   2 +-
 include/linux/cpuidle_haltpoll.h          |   5 ++
 22 files changed, 336 insertions(+), 64 deletions(-)
 create mode 100644 arch/arm64/include/asm/cpuidle_haltpoll.h
 create mode 100644 arch/arm64/include/asm/delay-const.h
From: Ankur Arora ankur.a.arora@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
Add a timed variant of smp_cond_load_relaxed().
This is useful because arm64 can poll a condition variable by waiting directly on its cacheline instead of spin-waiting for the condition to change.

However, such an implementation can block forever -- unless there is an explicit timeout, or some other out-of-band mechanism that lets it come out of the wait state periodically.
smp_cond_load_relaxed_timeout() supports these semantics by specifying a time-check expression and an associated time-limit.
However, note that for the generic spin-wait implementation we want to minimize the number of instructions executed in each iteration. So, limit how often the time-check expression is evaluated by doing it only once every smp_cond_time_check_count iterations.

The inner loop in poll_idle() has a substantially similar structure and constraints to those of smp_cond_load_relaxed_timeout(), so define smp_cond_time_check_count to the same value used in poll_idle().
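For illustration, a usage sketch of the new macro -- the caller, the flag variable and the 1 ms budget are invented for this example; only smp_cond_load_relaxed_timeout() itself comes from the patch below:

#include <linux/sched/clock.h>	/* local_clock_noinstr() */
#include <linux/time64.h>	/* NSEC_PER_MSEC */
#include <asm/barrier.h>	/* smp_cond_load_relaxed_timeout() */

/* Hypothetical caller: wait up to ~1 ms for another CPU to set *flag. */
static bool wait_for_flag(unsigned long *flag)
{
	u64 start = local_clock_noinstr();
	unsigned long val;

	/*
	 * VAL names the freshly loaded *flag inside the condition; on the
	 * generic spin-wait path the deadline is only re-checked once every
	 * smp_cond_time_check_count iterations.
	 */
	val = smp_cond_load_relaxed_timeout(flag, VAL != 0,
					    local_clock_noinstr(),
					    start + NSEC_PER_MSEC);

	return val != 0;	/* false: timed out with *flag still clear */
}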
Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- include/asm-generic/barrier.h | 42 +++++++++++++++++++++++++++++++++++ 1 file changed, 42 insertions(+)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h index 1985c22d90ca..f5d6c8444d33 100644 --- a/include/asm-generic/barrier.h +++ b/include/asm-generic/barrier.h @@ -275,6 +275,48 @@ do { \ }) #endif
+#ifndef smp_cond_time_check_count +/* + * Limit how often smp_cond_load_relaxed_timeout() evaluates time_expr_ns. + * This helps reduce the number of instructions executed while spin-waiting. + */ +#define smp_cond_time_check_count 200 +#endif + +/** + * smp_cond_load_relaxed_timeout() - (Spin) wait for cond with no ordering + * guarantees until a timeout expires. + * @ptr: pointer to the variable to wait on + * @cond: boolean expression to wait for + * @time_expr_ns: evaluates to the current time + * @time_limit_ns: compared against time_expr_ns + * + * Equivalent to using READ_ONCE() on the condition variable. + * + * Due to C lacking lambda expressions we load the value of *ptr into a + * pre-named variable @VAL to be used in @cond. + */ +#ifndef smp_cond_load_relaxed_timeout +#define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_expr_ns, \ + time_limit_ns) ({ \ + typeof(ptr) __PTR = (ptr); \ + __unqual_scalar_typeof(*ptr) VAL; \ + unsigned int __count = 0; \ + for (;;) { \ + VAL = READ_ONCE(*__PTR); \ + if (cond_expr) \ + break; \ + cpu_relax(); \ + if (__count++ < smp_cond_time_check_count) \ + continue; \ + if ((time_expr_ns) >= time_limit_ns) \ + break; \ + __count = 0; \ + } \ + (typeof(*ptr))VAL; \ +}) +#endif + /* * pmem_wmb() ensures that all stores for which the modification * are written to persistent storage by preceding instructions have
From: Ankur Arora ankur.a.arora@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
The inner loop in poll_idle() polls to see if the thread's TIF_NEED_RESCHED bit is set. The loop exits once the condition is met, or if the poll time limit has been exceeded.
To minimize the number of instructions executed in each iteration, the time check is rate-limited. In addition, each loop iteration executes cpu_relax(), which on some platforms hints to the pipeline that the loop is busy-waiting and lets the processor reduce power consumption.

However, cpu_relax() is defined optimally only on x86. On arm64, for instance, it is implemented as a YIELD, which merely hints that the CPU may prioritize a different hardware thread if one is available. arm64 does, however, expose a more efficient polling mechanism via smp_cond_load_relaxed_timeout(), which uses LDXR/WFE to wait until a store to a specified region or until a timeout.
These semantics are essentially identical to what we want from poll_idle(). So, restructure the loop to use smp_cond_load_relaxed_timeout() instead.
The generated code remains close to the original version.
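For clarity, the restructured inner block then reads roughly as follows (a sketch mirroring the diff below, which remains authoritative):

	raw_local_irq_enable();
	if (!current_set_polling_and_test()) {
		unsigned long flags;
		u64 time_start = local_clock_noinstr();
		u64 limit = cpuidle_poll_time(drv, dev);

		/* Poll TIF_NEED_RESCHED; on arm64 this can wait in WFE. */
		flags = smp_cond_load_relaxed_timeout(&current_thread_info()->flags,
						      VAL & _TIF_NEED_RESCHED,
						      local_clock_noinstr(),
						      time_start + limit);

		/* No need-resched bit set means we left on the time limit. */
		dev->poll_time_limit = !(flags & _TIF_NEED_RESCHED);
	}
	raw_local_irq_disable();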
Suggested-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- drivers/cpuidle/poll_state.c | 31 ++++++++++--------------------- 1 file changed, 10 insertions(+), 21 deletions(-)
diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c index 9b6d90a72601..0b42971393c9 100644 --- a/drivers/cpuidle/poll_state.c +++ b/drivers/cpuidle/poll_state.c @@ -8,35 +8,24 @@ #include <linux/sched/clock.h> #include <linux/sched/idle.h>
-#define POLL_IDLE_RELAX_COUNT 200 - static int __cpuidle poll_idle(struct cpuidle_device *dev, struct cpuidle_driver *drv, int index) { - u64 time_start; - - time_start = local_clock_noinstr();
dev->poll_time_limit = false;
raw_local_irq_enable(); if (!current_set_polling_and_test()) { - unsigned int loop_count = 0; - u64 limit; - - limit = cpuidle_poll_time(drv, dev); - - while (!need_resched()) { - cpu_relax(); - if (loop_count++ < POLL_IDLE_RELAX_COUNT) - continue; - - loop_count = 0; - if (local_clock_noinstr() - time_start > limit) { - dev->poll_time_limit = true; - break; - } - } + unsigned long flags; + u64 time_start = local_clock_noinstr(); + u64 limit = cpuidle_poll_time(drv, dev); + + flags = smp_cond_load_relaxed_timeout(¤t_thread_info()->flags, + VAL & _TIF_NEED_RESCHED, + local_clock_noinstr(), + time_start + limit); + + dev->poll_time_limit = !(flags & _TIF_NEED_RESCHED); } raw_local_irq_disable();
From: Ankur Arora ankur.a.arora@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
ARCH_HAS_CPU_RELAX is defined on architectures that provide a primitive (via cpu_relax()) that can be used as part of a polling mechanism -- one that is cheaper than spinning in a tight loop.

However, recent changes to poll_idle() mean that a higher-level primitive -- smp_cond_load_relaxed_timeout() -- is now used for polling. This in turn uses cpu_relax() or an architecture-specific implementation. On arm64 in particular it turns into a WFE, which waits for a store to the cacheline instead of busy-polling.

Accordingly, condition the polling drivers on ARCH_HAS_OPTIMIZED_POLL instead of ARCH_HAS_CPU_RELAX. While at it, make both intel_idle and cpuidle-haltpoll explicitly depend on ARCH_HAS_OPTIMIZED_POLL.
Suggested-by: Will Deacon will@kernel.org Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/x86/Kconfig | 2 +- drivers/acpi/processor_idle.c | 4 ++-- drivers/cpuidle/Kconfig | 2 +- drivers/cpuidle/Makefile | 2 +- drivers/idle/Kconfig | 1 + include/linux/cpuidle.h | 2 +- 6 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 9be87b701f0f..0ee350a21f3a 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -370,7 +370,7 @@ config ARCH_MAY_HAVE_PC_FDC config GENERIC_CALIBRATE_DELAY def_bool y
-config ARCH_HAS_CPU_RELAX +config ARCH_HAS_OPTIMIZED_POLL def_bool y
config ARCH_HIBERNATION_POSSIBLE diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c index 831fa4a12159..44096406d65d 100644 --- a/drivers/acpi/processor_idle.c +++ b/drivers/acpi/processor_idle.c @@ -35,7 +35,7 @@ #include <asm/cpu.h> #endif
-#define ACPI_IDLE_STATE_START (IS_ENABLED(CONFIG_ARCH_HAS_CPU_RELAX) ? 1 : 0) +#define ACPI_IDLE_STATE_START (IS_ENABLED(CONFIG_ARCH_HAS_OPTIMIZED_POLL) ? 1 : 0)
static unsigned int max_cstate __read_mostly = ACPI_PROCESSOR_MAX_POWER; module_param(max_cstate, uint, 0400); @@ -782,7 +782,7 @@ static int acpi_processor_setup_cstates(struct acpi_processor *pr) if (max_cstate == 0) max_cstate = 1;
- if (IS_ENABLED(CONFIG_ARCH_HAS_CPU_RELAX)) { + if (IS_ENABLED(CONFIG_ARCH_HAS_OPTIMIZED_POLL)) { cpuidle_poll_state_init(drv); count = 1; } else { diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig index cac5997dca50..75f6e176bbc8 100644 --- a/drivers/cpuidle/Kconfig +++ b/drivers/cpuidle/Kconfig @@ -73,7 +73,7 @@ endmenu
config HALTPOLL_CPUIDLE tristate "Halt poll cpuidle driver" - depends on X86 && KVM_GUEST + depends on X86 && KVM_GUEST && ARCH_HAS_OPTIMIZED_POLL select CPU_IDLE_GOV_HALTPOLL default y help diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile index d103342b7cfc..f29dfd1525b0 100644 --- a/drivers/cpuidle/Makefile +++ b/drivers/cpuidle/Makefile @@ -7,7 +7,7 @@ obj-y += cpuidle.o driver.o governor.o sysfs.o governors/ obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o obj-$(CONFIG_DT_IDLE_STATES) += dt_idle_states.o obj-$(CONFIG_DT_IDLE_GENPD) += dt_idle_genpd.o -obj-$(CONFIG_ARCH_HAS_CPU_RELAX) += poll_state.o +obj-$(CONFIG_ARCH_HAS_OPTIMIZED_POLL) += poll_state.o obj-$(CONFIG_HALTPOLL_CPUIDLE) += cpuidle-haltpoll.o
################################################################################## diff --git a/drivers/idle/Kconfig b/drivers/idle/Kconfig index 6707d2539fc4..6f9b1d48fede 100644 --- a/drivers/idle/Kconfig +++ b/drivers/idle/Kconfig @@ -4,6 +4,7 @@ config INTEL_IDLE depends on CPU_IDLE depends on X86 depends on CPU_SUP_INTEL + depends on ARCH_HAS_OPTIMIZED_POLL help Enable intel_idle, a cpuidle driver that includes knowledge of native Intel hardware idle features. The acpi_idle driver diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h index 3183aeb7f5b4..7e7e58a17b07 100644 --- a/include/linux/cpuidle.h +++ b/include/linux/cpuidle.h @@ -275,7 +275,7 @@ static inline void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, } #endif
-#if defined(CONFIG_CPU_IDLE) && defined(CONFIG_ARCH_HAS_CPU_RELAX) +#if defined(CONFIG_CPU_IDLE) && defined(CONFIG_ARCH_HAS_OPTIMIZED_POLL) void cpuidle_poll_state_init(struct cpuidle_driver *drv); #else static inline void cpuidle_poll_state_init(struct cpuidle_driver *drv) {}
From: Joao Martins joao.m.martins@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
ARCH_HAS_OPTIMIZED_POLL gates selection of polling while idle in poll_idle(). Move the configuration option to arch/Kconfig to allow non-x86 architectures to select it.
Note that ARCH_HAS_OPTIMIZED_POLL should probably be exclusive with GENERIC_IDLE_POLL_SETUP (which controls the generic polling logic in cpu_idle_poll()). However, that would remove boot options (hlt=, nohlt=). So, leave it untouched for now.
Signed-off-by: Joao Martins joao.m.martins@oracle.com Signed-off-by: Mihai Carabas mihai.carabas@oracle.com Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/Kconfig | 3 +++ arch/x86/Kconfig | 4 +--- 2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig index 98116fbfcff6..b30284a2397d 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -264,6 +264,9 @@ config HAVE_ARCH_TRACEHOOK config HAVE_DMA_CONTIGUOUS bool
+config ARCH_HAS_OPTIMIZED_POLL + bool + config GENERIC_SMP_IDLE_THREAD bool
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 0ee350a21f3a..f3c2d2933240 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -133,6 +133,7 @@ config X86 select ARCH_WANTS_NO_INSTR select ARCH_WANT_GENERAL_HUGETLB select ARCH_WANT_HUGE_PMD_SHARE + select ARCH_HAS_OPTIMIZED_POLL select ARCH_WANT_LD_ORPHAN_WARN select ARCH_WANT_OPTIMIZE_DAX_VMEMMAP if X86_64 select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP if X86_64 @@ -370,9 +371,6 @@ config ARCH_MAY_HAVE_PC_FDC config GENERIC_CALIBRATE_DELAY def_bool y
-config ARCH_HAS_OPTIMIZED_POLL - def_bool y - config ARCH_HIBERNATION_POSSIBLE def_bool y
From: Ankur Arora ankur.a.arora@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
Support a waited variant of polling on a condition variable via smp_cond_load_relaxed_timeout().

This uses the __cmpwait_relaxed() primitive to do the actual waiting when the wait can be guaranteed not to block forever (even if there are no stores to the waited-for cacheline). For that guarantee we depend on the event stream, which periodically generates wakeup events and so bounds how long a WFE can stall.

For cases where the event stream is unavailable, we fall back to a spin-wait implementation identical to the generic variant.
Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/arm64/include/asm/barrier.h | 54 ++++++++++++++++++++++++++++++++ 1 file changed, 54 insertions(+)
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h index 1ca947d5c939..ab2515ecd6ca 100644 --- a/arch/arm64/include/asm/barrier.h +++ b/arch/arm64/include/asm/barrier.h @@ -216,6 +216,60 @@ do { \ (typeof(*ptr))VAL; \ })
+#define __smp_cond_load_timeout_spin(ptr, cond_expr, \ + time_expr_ns, time_limit_ns) \ +({ \ + typeof(ptr) __PTR = (ptr); \ + __unqual_scalar_typeof(*ptr) VAL; \ + unsigned int __count = 0; \ + for (;;) { \ + VAL = READ_ONCE(*__PTR); \ + if (cond_expr) \ + break; \ + cpu_relax(); \ + if (__count++ < smp_cond_time_check_count) \ + continue; \ + if ((time_expr_ns) >= time_limit_ns) \ + break; \ + __count = 0; \ + } \ + (typeof(*ptr))VAL; \ +}) + +#define __smp_cond_load_timeout_wait(ptr, cond_expr, \ + time_expr_ns, time_limit_ns) \ +({ \ + typeof(ptr) __PTR = (ptr); \ + __unqual_scalar_typeof(*ptr) VAL; \ + for (;;) { \ + VAL = READ_ONCE(*__PTR); \ + if (cond_expr) \ + break; \ + __cmpwait_relaxed(__PTR, VAL); \ + if ((time_expr_ns) >= time_limit_ns) \ + break; \ + } \ + (typeof(*ptr))VAL; \ +}) + +#define smp_cond_load_relaxed_timeout(ptr, cond_expr, \ + time_expr_ns, time_limit_ns) \ +({ \ + __unqual_scalar_typeof(*ptr) _val; \ + \ + int __wfe = arch_timer_evtstrm_available(); \ + if (likely(__wfe)) \ + _val = __smp_cond_load_timeout_wait(ptr, cond_expr, \ + time_expr_ns, \ + time_limit_ns); \ + else \ + _val = __smp_cond_load_timeout_spin(ptr, cond_expr, \ + time_expr_ns, \ + time_limit_ns); \ + (typeof(*ptr))_val; \ +}) + + #include <asm-generic/barrier.h>
#endif /* __ASSEMBLY__ */
From: Joao Martins joao.m.martins@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
Commit 842514849a61 ("arm64: Remove TIF_POLLING_NRFLAG") removed TIF_POLLING_NRFLAG because arm64 only supported non-polling idle via cpu_do_idle().
To add support for polling via cpuidle-haltpoll, we want to use the standard poll_idle() interface, which sets TIF_POLLING_NRFLAG while polling.
Reuse the same bit to define TIF_POLLING_NRFLAG.
Signed-off-by: Joao Martins joao.m.martins@oracle.com Signed-off-by: Mihai Carabas mihai.carabas@oracle.com Reviewed-by: Christoph Lameter cl@linux.com Acked-by: Will Deacon will@kernel.org Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/arm64/include/asm/thread_info.h | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h index 2dd890c8e4f8..379d24059f5b 100644 --- a/arch/arm64/include/asm/thread_info.h +++ b/arch/arm64/include/asm/thread_info.h @@ -76,6 +76,7 @@ void arch_setup_new_exec(void); #define TIF_SYSCALL_TRACEPOINT 10 /* syscall tracepoint for ftrace */ #define TIF_SECCOMP 11 /* syscall secure computing */ #define TIF_SYSCALL_EMU 12 /* syscall emulation active */ +#define TIF_POLLING_NRFLAG 16 /* set while polling in poll_idle() */ #define TIF_MEMDIE 18 /* is terminating due to OOM killer */ #define TIF_FREEZE 19 #define TIF_RESTORE_SIGMASK 20 @@ -98,6 +99,7 @@ void arch_setup_new_exec(void); #define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT) #define _TIF_SECCOMP (1 << TIF_SECCOMP) #define _TIF_SYSCALL_EMU (1 << TIF_SYSCALL_EMU) +#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) #define _TIF_UPROBE (1 << TIF_UPROBE) #define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP) #define _TIF_32BIT (1 << TIF_32BIT)
From: Ankur Arora ankur.a.arora@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
Polling in idle with poll_idle() needs TIF_POLLING_NRFLAG support, and a cheap mechanism to do the actual polling via smp_cond_load_relaxed_timeout().
Both of these are present on arm64. So, select ARCH_HAS_OPTIMIZED_POLL to enable it.
Enabling this should help reduce the cost of remote wakeups: if the target CPU has TIF_POLLING_NRFLAG set (as it does while polling in idle), the scheduler can wake it simply by setting the need-resched bit, instead of sending an IPI and incurring the cost of handling the interrupt on the receiver.
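The idea, in a simplified sketch (an illustration of the mechanism only, not the scheduler's exact code, which additionally performs the check-and-set atomically):

#include <linux/thread_info.h>
#include <linux/smp.h>

/* Illustrative only: wake a remote idle CPU without an IPI if it is polling. */
static void wake_idle_cpu_sketch(int cpu, struct thread_info *ti)
{
	if (test_ti_thread_flag(ti, TIF_POLLING_NRFLAG)) {
		/* Target polls its flags word in poll_idle(); a store suffices. */
		set_ti_thread_flag(ti, TIF_NEED_RESCHED);
	} else {
		/* Target may be in WFI/halt and needs an interrupt to notice. */
		smp_send_reschedule(cpu);
	}
}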
Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/arm64/Kconfig | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index c11aa218c08e..350c431d0fc1 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -35,6 +35,7 @@ config ARM64 select ARCH_HAS_MEMBARRIER_SYNC_CORE select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE + select ARCH_HAS_OPTIMIZED_POLL select ARCH_HAS_NONLEAF_PMD_YOUNG if ARM64_HAFT select ARCH_HAS_PTE_DEVMAP select ARCH_HAS_PTE_SPECIAL
From: Lifeng Zheng zhenglifeng1@huawei.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
Initialize an optional polling state alongside the LPI states.

Add a wrapper around the enter method so that the reported index correctly reflects the actually entered state when the polling state is enabled.
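A sketch of the resulting state layout and index translation when CONFIG_ARCH_HAS_OPTIMIZED_POLL is set (illustrative; the actual wrapper is in the diff below):

/*
 * drv->states[0]       poll state (installed by cpuidle_poll_state_init())
 * drv->states[1..N]    LPI-0 .. LPI-(N-1)
 *
 * acpi_idle_lpi_enter() indexes pr->power.lpi_states[] directly, so the
 * wrapper shifts the index down by one on the way in and back up by one
 * on the way out, keeping the returned value in the driver's numbering.
 */
static int lpi_enter_with_poll_state_sketch(struct cpuidle_device *dev,
					    struct cpuidle_driver *drv, int index)
{
	int ret = acpi_idle_lpi_enter(dev, drv, index - 1);

	return ret < 0 ? ret : ret + 1;
}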
Signed-off-by: Lifeng Zheng zhenglifeng1@huawei.com Reviewed-by: Jie Zhan zhanjie9@hisilicon.com Signed-off-by: lishusen lishusen2@huawei.com --- drivers/acpi/processor_idle.c | 39 ++++++++++++++++++++++++++++++----- 1 file changed, 34 insertions(+), 5 deletions(-)
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c index 44096406d65d..d154b5d77328 100644 --- a/drivers/acpi/processor_idle.c +++ b/drivers/acpi/processor_idle.c @@ -1194,20 +1194,46 @@ static int acpi_idle_lpi_enter(struct cpuidle_device *dev, return -EINVAL; }
+/* To correctly reflect the entered state if the poll state is enabled. */ +static int acpi_idle_lpi_enter_with_poll_state(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index) +{ + int entered_state; + + if (unlikely(index < 1)) + return -EINVAL; + + entered_state = acpi_idle_lpi_enter(dev, drv, index - 1); + if (entered_state < 0) + return entered_state; + + return entered_state + 1; +} + static int acpi_processor_setup_lpi_states(struct acpi_processor *pr) { - int i; + int i, count; struct acpi_lpi_state *lpi; struct cpuidle_state *state; struct cpuidle_driver *drv = &acpi_idle_driver; + typeof(state->enter) enter_method;
if (!pr->flags.has_lpi) return -EOPNOTSUPP;
+ if (IS_ENABLED(CONFIG_ARCH_HAS_OPTIMIZED_POLL)) { + cpuidle_poll_state_init(drv); + count = 1; + enter_method = acpi_idle_lpi_enter_with_poll_state; + } else { + count = 0; + enter_method = acpi_idle_lpi_enter; + } + for (i = 0; i < pr->power.count && i < CPUIDLE_STATE_MAX; i++) { lpi = &pr->power.lpi_states[i];
- state = &drv->states[i]; + state = &drv->states[count]; snprintf(state->name, CPUIDLE_NAME_LEN, "LPI-%d", i); strscpy(state->desc, lpi->desc, CPUIDLE_DESC_LEN); state->exit_latency = lpi->wake_latency; @@ -1215,11 +1241,14 @@ static int acpi_processor_setup_lpi_states(struct acpi_processor *pr) state->flags |= arch_get_idle_state_flags(lpi->arch_flags); if (i != 0 && lpi->entry_method == ACPI_CSTATE_FFH) state->flags |= CPUIDLE_FLAG_RCU_IDLE; - state->enter = acpi_idle_lpi_enter; - drv->safe_state_index = i; + state->enter = enter_method; + drv->safe_state_index = count; + count++; + if (count == CPUIDLE_STATE_MAX) + break; }
- drv->state_count = i; + drv->state_count = count;
return 0; }
From: Joao Martins joao.m.martins@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
While initializing haltpoll we check if KVM supports the realtime hint and if idle is overridden at boot.
Both of these checks are x86 specific. So, in pursuit of making cpuidle-haltpoll architecture independent, move these checks out of common code.
Signed-off-by: Joao Martins joao.m.martins@oracle.com Signed-off-by: Mihai Carabas mihai.carabas@oracle.com Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/x86/include/asm/cpuidle_haltpoll.h | 1 + arch/x86/kernel/kvm.c | 13 +++++++++++++ drivers/cpuidle/cpuidle-haltpoll.c | 12 +----------- include/linux/cpuidle_haltpoll.h | 5 +++++ 4 files changed, 20 insertions(+), 11 deletions(-)
diff --git a/arch/x86/include/asm/cpuidle_haltpoll.h b/arch/x86/include/asm/cpuidle_haltpoll.h index c8b39c6716ff..8a0a12769c2e 100644 --- a/arch/x86/include/asm/cpuidle_haltpoll.h +++ b/arch/x86/include/asm/cpuidle_haltpoll.h @@ -4,5 +4,6 @@
void arch_haltpoll_enable(unsigned int cpu); void arch_haltpoll_disable(unsigned int cpu); +bool arch_haltpoll_want(bool force);
#endif diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index b8ab9ee5896c..4d45b1db340a 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -1149,4 +1149,17 @@ void arch_haltpoll_disable(unsigned int cpu) smp_call_function_single(cpu, kvm_enable_host_haltpoll, NULL, 1); } EXPORT_SYMBOL_GPL(arch_haltpoll_disable); + +bool arch_haltpoll_want(bool force) +{ + /* Do not load haltpoll if idle= is passed */ + if (boot_option_idle_override != IDLE_NO_OVERRIDE) + return false; + + if (!kvm_para_available()) + return false; + + return kvm_para_has_hint(KVM_HINTS_REALTIME) || force; +} +EXPORT_SYMBOL_GPL(arch_haltpoll_want); #endif diff --git a/drivers/cpuidle/cpuidle-haltpoll.c b/drivers/cpuidle/cpuidle-haltpoll.c index d8515d5c0853..d47906632ce3 100644 --- a/drivers/cpuidle/cpuidle-haltpoll.c +++ b/drivers/cpuidle/cpuidle-haltpoll.c @@ -15,7 +15,6 @@ #include <linux/cpuidle.h> #include <linux/module.h> #include <linux/sched/idle.h> -#include <linux/kvm_para.h> #include <linux/cpuidle_haltpoll.h>
static bool force __read_mostly; @@ -93,21 +92,12 @@ static void haltpoll_uninit(void) haltpoll_cpuidle_devices = NULL; }
-static bool haltpoll_want(void) -{ - return kvm_para_has_hint(KVM_HINTS_REALTIME) || force; -} - static int __init haltpoll_init(void) { int ret; struct cpuidle_driver *drv = &haltpoll_driver;
- /* Do not load haltpoll if idle= is passed */ - if (boot_option_idle_override != IDLE_NO_OVERRIDE) - return -ENODEV; - - if (!kvm_para_available() || !haltpoll_want()) + if (!arch_haltpoll_want(force)) return -ENODEV;
cpuidle_poll_state_init(drv); diff --git a/include/linux/cpuidle_haltpoll.h b/include/linux/cpuidle_haltpoll.h index d50c1e0411a2..68eb7a757120 100644 --- a/include/linux/cpuidle_haltpoll.h +++ b/include/linux/cpuidle_haltpoll.h @@ -12,5 +12,10 @@ static inline void arch_haltpoll_enable(unsigned int cpu) static inline void arch_haltpoll_disable(unsigned int cpu) { } + +static inline bool arch_haltpoll_want(bool force) +{ + return false; +} #endif #endif
From: Joao Martins joao.m.martins@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
The haltpoll governor is selected either by the cpuidle-haltpoll driver, or explicitly by the user. In particular, it is never selected by default since it has the lowest rating of all governors (menu=20, teo=19, ladder=10/25, haltpoll=9).
So, we can safely forgo the kvm_para_available() check. This also allows cpuidle-haltpoll to be tested on bare metal.
Signed-off-by: Joao Martins joao.m.martins@oracle.com Signed-off-by: Mihai Carabas mihai.carabas@oracle.com Acked-by: Rafael J. Wysocki rafael@kernel.org Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- drivers/cpuidle/governors/haltpoll.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/cpuidle/governors/haltpoll.c b/drivers/cpuidle/governors/haltpoll.c index 1dff3a52917d..e7b1c602ed08 100644 --- a/drivers/cpuidle/governors/haltpoll.c +++ b/drivers/cpuidle/governors/haltpoll.c @@ -18,7 +18,6 @@ #include <linux/tick.h> #include <linux/sched.h> #include <linux/module.h> -#include <linux/kvm_para.h> #include <trace/events/power.h>
static unsigned int guest_halt_poll_ns __read_mostly = 200000; @@ -143,10 +142,7 @@ static struct cpuidle_governor haltpoll_governor = {
static int __init init_haltpoll(void) { - if (kvm_para_available()) - return cpuidle_register_governor(&haltpoll_governor); - - return 0; + return cpuidle_register_governor(&haltpoll_governor); }
postcore_initcall(init_haltpoll);
From: Ankur Arora ankur.a.arora@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
The cpuidle-haltpoll driver and its namesake governor are selected under KVM_GUEST on x86. KVM_GUEST in turn selects ARCH_CPUIDLE_HALTPOLL and defines the requisite arch_haltpoll_{enable,disable}() functions.
So remove the explicit dependence of HALTPOLL_CPUIDLE on KVM_GUEST, and instead use ARCH_CPUIDLE_HALTPOLL as proxy for architectural support for haltpoll.
Also change "halt poll" to "haltpoll" in one of the summary clauses, since the second form is used everywhere else.
Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/x86/Kconfig | 1 + drivers/cpuidle/Kconfig | 5 ++--- 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index f3c2d2933240..df023e1cb5dd 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -839,6 +839,7 @@ config KVM_GUEST
config ARCH_CPUIDLE_HALTPOLL def_bool n + depends on KVM_GUEST prompt "Disable host haltpoll when loading haltpoll driver" help If virtualized under KVM, disable host haltpoll. diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig index 75f6e176bbc8..c1bebadf22bc 100644 --- a/drivers/cpuidle/Kconfig +++ b/drivers/cpuidle/Kconfig @@ -35,7 +35,6 @@ config CPU_IDLE_GOV_TEO
config CPU_IDLE_GOV_HALTPOLL bool "Haltpoll governor (for virtualized systems)" - depends on KVM_GUEST help This governor implements haltpoll idle state selection, to be used in conjunction with the haltpoll cpuidle driver, allowing @@ -72,8 +71,8 @@ source "drivers/cpuidle/Kconfig.riscv" endmenu
config HALTPOLL_CPUIDLE - tristate "Halt poll cpuidle driver" - depends on X86 && KVM_GUEST && ARCH_HAS_OPTIMIZED_POLL + tristate "Haltpoll cpuidle driver" + depends on ARCH_CPUIDLE_HALTPOLL && ARCH_HAS_OPTIMIZED_POLL select CPU_IDLE_GOV_HALTPOLL default y help
From: Ankur Arora ankur.a.arora@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
Needed for cpuidle-haltpoll.
Acked-by: Will Deacon will@kernel.org Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/arm64/kernel/idle.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/idle.c b/arch/arm64/kernel/idle.c index c1125753fe9b..3a0b59aa12e2 100644 --- a/arch/arm64/kernel/idle.c +++ b/arch/arm64/kernel/idle.c @@ -43,3 +43,4 @@ void noinstr arch_cpu_idle(void) */ cpu_do_idle(); } +EXPORT_SYMBOL_GPL(arch_cpu_idle);
From: Ankur Arora ankur.a.arora@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
Add architectural support for the cpuidle-haltpoll driver by defining arch_haltpoll_*(). Also define ARCH_CPUIDLE_HALTPOLL to allow cpuidle-haltpoll to be selected.
Tested-by: Haris Okanovic harisokn@amazon.com Tested-by: Misono Tomohiro misono.tomohiro@fujitsu.com Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/arm64/Kconfig | 6 ++++++ arch/arm64/include/asm/cpuidle_haltpoll.h | 20 ++++++++++++++++++++ 2 files changed, 26 insertions(+) create mode 100644 arch/arm64/include/asm/cpuidle_haltpoll.h
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 350c431d0fc1..57b4f89e7d36 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -2618,6 +2618,12 @@ config ARCH_HIBERNATION_HEADER config ARCH_SUSPEND_POSSIBLE def_bool y
+config ARCH_CPUIDLE_HALTPOLL + bool "Enable selection of the cpuidle-haltpoll driver" + help + cpuidle-haltpoll allows for adaptive polling based on + current load before entering the idle state. + endmenu # "Power management options"
menu "CPU Power Management" diff --git a/arch/arm64/include/asm/cpuidle_haltpoll.h b/arch/arm64/include/asm/cpuidle_haltpoll.h new file mode 100644 index 000000000000..aa01ae9ad5dd --- /dev/null +++ b/arch/arm64/include/asm/cpuidle_haltpoll.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef _ARCH_HALTPOLL_H +#define _ARCH_HALTPOLL_H + +static inline void arch_haltpoll_enable(unsigned int cpu) { } +static inline void arch_haltpoll_disable(unsigned int cpu) { } + +static inline bool arch_haltpoll_want(bool force) +{ + /* + * Enabling haltpoll requires KVM support for arch_haltpoll_enable(), + * arch_haltpoll_disable(). + * + * Given that that's missing right now, only allow force loading for + * haltpoll. + */ + return force; +} +#endif
From: Ankur Arora ankur.a.arora@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
Add some constants and a helper macro related to xloops/cycles computation in a new header.
No functional change.
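A quick sanity check of the math (the 25 MHz timer rate below is only an assumed example; on arm64 systems that calibrate the delay loop from the architected timer, loops_per_jiffy * HZ equals the timer frequency):

/*
 * NSECS_TO_CYCLES(ns) = ((ns * 5) * loops_per_jiffy * HZ) >> 32
 *                    ~= ns * timer_freq * (5 / 2^32)
 *
 * Example, assuming a 25 MHz timer (loops_per_jiffy * HZ == 25000000),
 * for a 1 ms budget:
 *   NSECS_TO_CYCLES(1000000)
 *     = ((1000000 * 5) * 25000000) >> 32
 *     = 125000000000000 >> 32
 *     = 29103 cycles (an exact conversion would give 25000)
 *
 * Using 5 rather than 2^32 / 1e9 (~4.295) deliberately rounds up, matching
 * the conservative rounding the arm64 delay code already uses.
 */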
Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/arm64/include/asm/delay-const.h | 15 +++++++++++++++ 1 file changed, 15 insertions(+) create mode 100644 arch/arm64/include/asm/delay-const.h
diff --git a/arch/arm64/include/asm/delay-const.h b/arch/arm64/include/asm/delay-const.h new file mode 100644 index 000000000000..610283ba8712 --- /dev/null +++ b/arch/arm64/include/asm/delay-const.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef _ASM_DELAY_CONST_H +#define _ASM_DELAY_CONST_H + +#include <asm/param.h> /* For HZ */ + +/* 2**32 / 1000000000 (rounded up) */ +#define __nsecs_to_xloops_mult 0x5UL + +extern unsigned long loops_per_jiffy; + +#define NSECS_TO_CYCLES(time_nsecs) \ + ((((time_nsecs) * __nsecs_to_xloops_mult) * loops_per_jiffy * HZ) >> 32) + +#endif /* _ASM_DELAY_CONST_H */
From: Ankur Arora ankur.a.arora@oracle.com
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
Support a WFET-based implementation of the waited variant of smp_cond_load_relaxed_timeout(). WFET behaves like WFE but additionally wakes once the timeout programmed via its register operand expires, so the wait stays bounded even when the event stream is disabled.
Signed-off-by: Ankur Arora ankur.a.arora@oracle.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/arm64/include/asm/barrier.h | 12 ++++++++---- arch/arm64/include/asm/cmpxchg.h | 26 +++++++++++++++++--------- 2 files changed, 25 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h index ab2515ecd6ca..8f7861ccab7e 100644 --- a/arch/arm64/include/asm/barrier.h +++ b/arch/arm64/include/asm/barrier.h @@ -12,6 +12,7 @@ #include <linux/kasan-checks.h>
#include <asm/alternative-macros.h> +#include <asm/delay-const.h>
#define __nops(n) ".rept " #n "\nnop\n.endr\n" #define nops(n) asm volatile(__nops(n)) @@ -198,7 +199,7 @@ do { \ VAL = READ_ONCE(*__PTR); \ if (cond_expr) \ break; \ - __cmpwait_relaxed(__PTR, VAL); \ + __cmpwait_relaxed(__PTR, VAL, ~0UL); \ } \ (typeof(*ptr))VAL; \ }) @@ -211,7 +212,7 @@ do { \ VAL = smp_load_acquire(__PTR); \ if (cond_expr) \ break; \ - __cmpwait_relaxed(__PTR, VAL); \ + __cmpwait_relaxed(__PTR, VAL, ~0UL); \ } \ (typeof(*ptr))VAL; \ }) @@ -241,11 +242,13 @@ do { \ ({ \ typeof(ptr) __PTR = (ptr); \ __unqual_scalar_typeof(*ptr) VAL; \ + const unsigned long __time_limit_cycles = \ + NSECS_TO_CYCLES(time_limit_ns); \ for (;;) { \ VAL = READ_ONCE(*__PTR); \ if (cond_expr) \ break; \ - __cmpwait_relaxed(__PTR, VAL); \ + __cmpwait_relaxed(__PTR, VAL, __time_limit_cycles); \ if ((time_expr_ns) >= time_limit_ns) \ break; \ } \ @@ -257,7 +260,8 @@ do { \ ({ \ __unqual_scalar_typeof(*ptr) _val; \ \ - int __wfe = arch_timer_evtstrm_available(); \ + int __wfe = arch_timer_evtstrm_available() || \ + alternative_has_cap_unlikely(ARM64_HAS_WFXT); \ if (likely(__wfe)) \ _val = __smp_cond_load_timeout_wait(ptr, cond_expr, \ time_expr_ns, \ diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h index d7a540736741..bb842dab5d0e 100644 --- a/arch/arm64/include/asm/cmpxchg.h +++ b/arch/arm64/include/asm/cmpxchg.h @@ -210,7 +210,8 @@ __CMPXCHG_GEN(_mb)
#define __CMPWAIT_CASE(w, sfx, sz) \ static inline void __cmpwait_case_##sz(volatile void *ptr, \ - unsigned long val) \ + unsigned long val, \ + unsigned long time_limit_cycles) \ { \ unsigned long tmp; \ \ @@ -220,10 +221,12 @@ static inline void __cmpwait_case_##sz(volatile void *ptr, \ " ldxr" #sfx "\t%" #w "[tmp], %[v]\n" \ " eor %" #w "[tmp], %" #w "[tmp], %" #w "[val]\n" \ " cbnz %" #w "[tmp], 1f\n" \ - " wfe\n" \ + ALTERNATIVE("wfe\n", \ + "msr s0_3_c1_c0_0, %[time_limit_cycles]\n", \ + ARM64_HAS_WFXT) \ "1:" \ : [tmp] "=&r" (tmp), [v] "+Q" (*(u##sz *)ptr) \ - : [val] "r" (val)); \ + : [val] "r" (val), [time_limit_cycles] "r" (time_limit_cycles));\ }
__CMPWAIT_CASE(w, b, 8); @@ -236,17 +239,22 @@ __CMPWAIT_CASE( , , 64); #define __CMPWAIT_GEN(sfx) \ static __always_inline void __cmpwait##sfx(volatile void *ptr, \ unsigned long val, \ + unsigned long time_limit_cycles, \ int size) \ { \ switch (size) { \ case 1: \ - return __cmpwait_case##sfx##_8(ptr, (u8)val); \ + return __cmpwait_case##sfx##_8(ptr, (u8)val, \ + time_limit_cycles); \ case 2: \ - return __cmpwait_case##sfx##_16(ptr, (u16)val); \ + return __cmpwait_case##sfx##_16(ptr, (u16)val, \ + time_limit_cycles); \ case 4: \ - return __cmpwait_case##sfx##_32(ptr, val); \ + return __cmpwait_case##sfx##_32(ptr, val, \ + time_limit_cycles); \ case 8: \ - return __cmpwait_case##sfx##_64(ptr, val); \ + return __cmpwait_case##sfx##_64(ptr, val, \ + time_limit_cycles); \ default: \ BUILD_BUG(); \ } \ @@ -258,7 +266,7 @@ __CMPWAIT_GEN()
#undef __CMPWAIT_GEN
-#define __cmpwait_relaxed(ptr, val) \ - __cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr))) +#define __cmpwait_relaxed(ptr, val, time_limit_cycles) \ + __cmpwait((ptr), (unsigned long)(val), time_limit_cycles, sizeof(*(ptr)))
#endif /* __ASM_CMPXCHG_H */
virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7PU3
CVE: NA
----------------------------------------------------
To preserve energy efficiency, haltpoll is disabled by default. In performance-sensitive scenarios it can be enabled at run time as follows:
echo Y > /sys/module/cpuidle_haltpoll/parameters/force
Signed-off-by: Xiangyou Xie xiexiangyou@huawei.com Signed-off-by: lishusen lishusen2@huawei.com --- arch/arm64/configs/openeuler_defconfig | 3 + drivers/cpuidle/cpuidle-haltpoll.c | 95 +++++++++++++++++++++++++- 2 files changed, 96 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig index 41fba69b84e2..c86512eabc7d 100644 --- a/arch/arm64/configs/openeuler_defconfig +++ b/arch/arm64/configs/openeuler_defconfig @@ -789,6 +789,9 @@ CONFIG_TRACE_IRQFLAGS_SUPPORT=y CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y CONFIG_HAVE_ARCH_TRACEHOOK=y CONFIG_HAVE_DMA_CONTIGUOUS=y +CONFIG_ARCH_HAS_OPTIMIZED_POLL=y +CONFIG_ARCH_CPUIDLE_HALTPOLL=y +CONFIG_HALTPOLL_CPUIDLE=y CONFIG_GENERIC_SMP_IDLE_THREAD=y CONFIG_GENERIC_IDLE_POLL_SETUP=y CONFIG_ARCH_HAS_FORTIFY_SOURCE=y diff --git a/drivers/cpuidle/cpuidle-haltpoll.c b/drivers/cpuidle/cpuidle-haltpoll.c index d47906632ce3..24f379d72b04 100644 --- a/drivers/cpuidle/cpuidle-haltpoll.c +++ b/drivers/cpuidle/cpuidle-haltpoll.c @@ -17,9 +17,16 @@ #include <linux/sched/idle.h> #include <linux/cpuidle_haltpoll.h>
-static bool force __read_mostly; -module_param(force, bool, 0444); +static bool force; MODULE_PARM_DESC(force, "Load unconditionally"); +static int enable_haltpoll_driver(const char *val, const struct kernel_param *kp); + +static const struct kernel_param_ops enable_haltpoll_ops = { + .set = enable_haltpoll_driver, + .get = param_get_bool, +}; +module_param_cb(force, &enable_haltpoll_ops, &force, 0644); +
static struct cpuidle_device __percpu *haltpoll_cpuidle_devices; static enum cpuhp_state haltpoll_hp_state; @@ -129,6 +136,90 @@ static void __exit haltpoll_exit(void) haltpoll_uninit(); }
+#ifdef CONFIG_ARM64 +static int register_haltpoll_driver(void) +{ + int ret; + struct cpuidle_driver *drv = &haltpoll_driver; + +#ifdef CONFIG_X86 + /* Do not load haltpoll if idle= is passed */ + if (boot_option_idle_override != IDLE_NO_OVERRIDE) + return -ENODEV; +#endif + + cpuidle_poll_state_init(drv); + + ret = cpuidle_register_driver(drv); + if (ret < 0) + return ret; + + haltpoll_cpuidle_devices = alloc_percpu(struct cpuidle_device); + if (haltpoll_cpuidle_devices == NULL) { + cpuidle_unregister_driver(drv); + return -ENOMEM; + } + + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "cpuidle/haltpoll:online", + haltpoll_cpu_online, haltpoll_cpu_offline); + if (ret < 0) { + haltpoll_uninit(); + } else { + haltpoll_hp_state = ret; + ret = 0; + } + + return ret; +} + +static void unregister_haltpoll_driver(void) +{ + if (haltpoll_hp_state) + cpuhp_remove_state(haltpoll_hp_state); + cpuidle_unregister_driver(&haltpoll_driver); + + free_percpu(haltpoll_cpuidle_devices); + haltpoll_cpuidle_devices = NULL; + +} + +static int enable_haltpoll_driver(const char *val, const struct kernel_param *kp) +{ + int ret; + bool do_enable; + + if (!val) + return 0; + + ret = strtobool(val, &do_enable); + + if (ret || force == do_enable) + return ret; + + if (do_enable) { + ret = register_haltpoll_driver(); + + if (!ret) { + pr_info("Enable haltpoll driver.\n"); + force = 1; + } else { + pr_err("Fail to enable haltpoll driver.\n"); + } + } else { + unregister_haltpoll_driver(); + force = 0; + pr_info("Unregister haltpoll driver.\n"); + } + + return ret; +} +#else +static int enable_haltpoll_driver(const char *val, const struct kernel_param *kp) +{ + return -1; +} +#endif + module_init(haltpoll_init); module_exit(haltpoll_exit); MODULE_LICENSE("GPL");
Feedback: The patch(es) you sent to the kernel@openeuler.org mailing list have been converted to a pull request successfully! Pull request link: https://gitee.com/openeuler/kernel/pulls/13975 Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/L...