From: Qian Cai <cai@lca.pw>
mainline inclusion
from mainline-v5.8-rc5
commit 345d52c184dc7de98cff63f1bfa6f90e9db19809
category: bugfix
bugzilla: NA
CVE: NA
--------------------------------
The commit f5bfdc8e3947 ("locking/osq: Use optimized spinning loop
for arm64") introduced a warning from Clang because
vcpu_is_preempted() is compiled away:

kernel/locking/osq_lock.c:25:19: warning: unused function 'node_cpu'
[-Wunused-function]
static inline int node_cpu(struct optimistic_spin_node *node)
                  ^
1 warning generated.
Fix it by converting vcpu_is_preempted() to a static inline function.
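
For reference, the mechanism can be reproduced outside the kernel: a
function-like macro discards its argument expression at preprocessing
time, so a helper used only inside that argument ends up unreferenced,
while a static inline keeps the reference alive even though the result
still constant-folds. A minimal standalone sketch (hypothetical names,
not the kernel sources):

/* sketch.c -- hypothetical example, not kernel code */
#include <stdbool.h>

static inline int helper(int x)		/* stands in for node_cpu() */
{
	return x + 1;
}

#ifdef USE_MACRO
/*
 * Macro version: is_preempted(helper(0)) expands to plain `false`,
 * the helper(0) argument is dropped by the preprocessor, and Clang
 * reports helper() as unused under -Wunused-function.
 */
#define is_preempted(cpu) false
#else
/*
 * Inline version: the call expression survives preprocessing, so
 * helper() stays referenced even though the compiler still folds
 * the whole expression to a constant.
 */
static inline bool is_preempted(int cpu)
{
	return false;
}
#endif

int main(void)
{
	return is_preempted(helper(0));
}

Building with `clang -Wunused-function -DUSE_MACRO sketch.c` reproduces
the warning; dropping -DUSE_MACRO makes it go away. The added
`#define vcpu_is_preempted vcpu_is_preempted` line follows the usual
kernel idiom: the generic fallback in include/linux/sched.h is guarded
by #ifndef vcpu_is_preempted, so the define tells it that the
architecture supplies its own implementation.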
Fixes: f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for arm64")
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Waiman Long <longman@redhat.com>
Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
---
 arch/arm64/include/asm/spinlock.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 1210e34b4c08..a9dec081beca 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -29,6 +29,10 @@
  * See:
  * https://lore.kernel.org/lkml/20200110100612.GC2827@hirez.programming.kicks-a...
  */
-#define vcpu_is_preempted(cpu)	false
+#define vcpu_is_preempted	vcpu_is_preempted
+static inline bool vcpu_is_preempted(int cpu)
+{
+	return false;
+}
 
 #endif /* __ASM_SPINLOCK_H */