Reviewed-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
-----Original Message-----
From: yezengruan
Sent: Tuesday, August 4, 2020 3:41 PM
To: Xiexiuqi <xiexiuqi@huawei.com>; Guohanjun (Hanjun Guo) <guohanjun@huawei.com>
Cc: Wanghaibin (D) <wanghaibin.wang@huawei.com>; Fanhenglong <fanhenglong@huawei.com>; yezengruan <yezengruan@huawei.com>; Zhanghailiang <zhang.zhanghailiang@huawei.com>; kernel.openeuler <kernel.openeuler@huawei.com>; Chenzhendong (alex) <alex.chen@huawei.com>; virt@openeuler.org; Xiexiangyou <xiexiangyou@huawei.com>; yuzenghui <yuzenghui@huawei.com>
Subject: [PATCH hulk-4.19-next] KVM: Check preempted_in_kernel for involuntary preemption
From: Wanpeng Li <wanpengli@tencent.com>
mainline inclusion
from mainline-v5.8-rc5
commit 046ddeed0461b5d270470c253cbb321103d048b6
category: feature
bugzilla: NA
DTS: NA
CVE: NA
preempted_in_kernel is updated in the preempt notifier when involuntary preemption occurs, so it can be stale by the time the kvm_vcpu_on_spin() loop also considers voluntarily preempted vCPUs. This patch makes the loop check vcpu->preempted first, so that preempted_in_kernel is consulted only for involuntary preemption.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 virt/kvm/kvm_main.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 887f3b0c2b60..ed061d8a457c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2508,7 +2508,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool yield_to_kernel_mode)
 			continue;
 		if (swait_active(&vcpu->wq) && !kvm_arch_vcpu_runnable(vcpu))
 			continue;
-		if (yield_to_kernel_mode && !kvm_arch_vcpu_in_kernel(vcpu))
+		if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
+		    !kvm_arch_vcpu_in_kernel(vcpu))
 			continue;
 		if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
 			continue;
@@ -4205,7 +4206,7 @@ static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
 {
 	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
 
-	vcpu->preempted = false;
+	WRITE_ONCE(vcpu->preempted, false);
 	WRITE_ONCE(vcpu->ready, false);
 
 	kvm_arch_sched_in(vcpu, cpu);
@@ -4219,7 +4220,7 @@ static void kvm_sched_out(struct preempt_notifier *pn,
 	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
 
 	if (current->state == TASK_RUNNING) {
-		vcpu->preempted = true;
+		WRITE_ONCE(vcpu->preempted, true);
 		WRITE_ONCE(vcpu->ready, true);
 	}
 	kvm_arch_vcpu_put(vcpu);
-- 
2.19.1