Paolo Bonzini (2):
  kvm: avoid speculation-based attacks from out-of-range memslot accesses
  kvm: fix previous commit for 32-bit builds

 include/linux/kvm_host.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
From: Paolo Bonzini <pbonzini@redhat.com>
stable inclusion
from stable-v4.19.195
commit 22b87fb17a28d37331bb9c1110737627b17f6781
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I9R4K5
CVE: CVE-2021-47277
--------------------------------
commit da27a83fd6cc7780fea190e1f5c19e87019da65c upstream.
KVM's mechanism for accessing guest memory translates a guest physical address (gpa) to a host virtual address (hva) using the right-shifted gpa (also known as gfn) and a struct kvm_memory_slot. The translation is performed in __gfn_to_hva_memslot using the following formula:
hva = slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE
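To make the arithmetic concrete, here is a minimal userspace sketch of the translation; the struct below is a cut-down stand-in for the kernel's struct kvm_memory_slot, and PAGE_SHIFT, the field subset, and the sample values are assumptions for illustration only:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Cut-down stand-in for the kernel's struct kvm_memory_slot. */
struct memslot {
	uint64_t      base_gfn;        /* first guest frame covered by the slot */
	unsigned long npages;          /* slot size in pages */
	unsigned long userspace_addr;  /* hva of the slot's backing memory */
};

/* gpa -> gfn -> hva, per the formula above; note: no bounds check here. */
static unsigned long gpa_to_hva(const struct memslot *slot, uint64_t gpa)
{
	uint64_t gfn = gpa >> PAGE_SHIFT;  /* the "right-shifted gpa" */
	return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
}

int main(void)
{
	struct memslot slot = {
		.base_gfn = 0x100, .npages = 16, .userspace_addr = 0x70000000UL
	};
	uint64_t gpa = (uint64_t)0x102 << PAGE_SHIFT;   /* 2 pages into the slot */
	printf("hva = %#lx\n", gpa_to_hva(&slot, gpa)); /* prints 0x70002000 */
	return 0;
}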
It is expected that gfn falls within the boundaries of the guest's physical memory. However, a guest can access invalid physical addresses in such a way that the gfn is invalid.
__gfn_to_hva_memslot is called from kvm_vcpu_gfn_to_hva_prot, which first retrieves a memslot through __gfn_to_memslot. While __gfn_to_memslot does check that the gfn falls within the boundaries of the guest's physical memory, a CPU can speculate the result of the check and continue execution speculatively using an illegal gfn. The speculation can result in calculating an out-of-bounds hva. If the resulting host virtual address is used to load another guest physical address, this is effectively a Spectre gadget consisting of two consecutive reads, the second of which is data dependent on the first.
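Sketched as self-contained C, the gadget has roughly this shape (hypothetical illustration, not code from KVM; gadget, probe_array and sink are invented names standing in for attacker-reachable code and memory):

#include <stdint.h>

#define PAGE_SIZE 4096UL

struct memslot {
	uint64_t      base_gfn;
	unsigned long npages;
	unsigned long userspace_addr;
};

/*
 * If the bounds check is mispredicted as "in range", both loads still
 * execute speculatively: the first reads from an attacker-chosen
 * out-of-bounds hva, and the second's address depends on the data the
 * first returned -- the two data-dependent reads described above.
 */
void gadget(const struct memslot *slot, uint64_t gfn,
	    const uint8_t *probe_array, volatile uint8_t *sink)
{
	if (gfn - slot->base_gfn < slot->npages) {  /* predicted taken */
		unsigned long hva = slot->userspace_addr +
				    (gfn - slot->base_gfn) * PAGE_SIZE;
		uint8_t secret = *(const uint8_t *)hva;    /* 1st read */
		*sink = probe_array[(size_t)secret * 64];  /* 2nd read,
							      data-dependent */
	}
}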
Right now it's not clear if there are any cases in which this is exploitable. One interesting case was reported by the original author of this patch, and involves visiting guest page tables on x86. Right now these are not vulnerable because the hva read goes through get_user(), which contains an LFENCE speculation barrier. However, there are patches in progress for x86 uaccess.h to mask kernel addresses instead of using LFENCE; once these land, a guest could use speculation to read from the VMM's ring 3 address space. Other architectures such as ARM already use the address masking method, and would be susceptible to this same kind of data-dependent access gadgets. Therefore, this patch proactively protects from these attacks by masking out-of-bounds gfns in __gfn_to_hva_memslot, which blocks speculation of invalid hvas.
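Conceptually, array_index_nospec() clamps an out-of-range index to 0 using branch-free mask arithmetic, so even a mispredicting CPU cannot form an out-of-bounds address from it. A userspace sketch of that masking idea, modeled on the generic fallback in include/linux/nospec.h but without the kernel's compiler barrier:

#include <limits.h>
#include <stdio.h>

#define BITS_PER_LONG ((int)(sizeof(long) * CHAR_BIT))

/*
 * ~0UL when index < size, 0UL otherwise, computed without a branch.
 * Like the kernel's version, this relies on an arithmetic right shift
 * of a signed value and on size never exceeding LONG_MAX.
 */
static unsigned long index_mask_nospec(unsigned long index, unsigned long size)
{
	return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);
}

static unsigned long clamp_index_nospec(unsigned long index, unsigned long size)
{
	return index & index_mask_nospec(index, size);
}

int main(void)
{
	printf("%lu\n", clamp_index_nospec(5UL, 16UL));   /* 5: in range */
	printf("%lu\n", clamp_index_nospec(100UL, 16UL)); /* 0: clamped  */
	return 0;
}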
Sean Christopherson noted that this patch does not cover kvm_read_guest_offset_cached. This however is limited to a few bytes past the end of the cache, and therefore it is unlikely to be useful in the context of building a chain of data dependent accesses.
Reported-by: Artemiy Margaritov <artemiy.margaritov@gmail.com>
Co-developed-by: Artemiy Margaritov <artemiy.margaritov@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
---
 include/linux/kvm_host.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6346d0123077..43ea5585452e 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1044,7 +1044,15 @@ __gfn_to_memslot(struct kvm_memslots *slots, gfn_t gfn)
 static inline unsigned long
 __gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
+	/*
+	 * The index was checked originally in search_memslots.  To avoid
+	 * that a malicious guest builds a Spectre gadget out of e.g. page
+	 * table walks, do not let the processor speculate loads outside
+	 * the guest's registered memslots.
+	 */
+	unsigned long offset = array_index_nospec(gfn - slot->base_gfn,
+						  slot->npages);
+	return slot->userspace_addr + offset * PAGE_SIZE;
 }
 
 static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
From: Paolo Bonzini <pbonzini@redhat.com>
stable inclusion
from stable-v4.19.195
commit 270dadd7ea31c1bc43a0b1de118a30c2238171a8
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I9R4K5
CVE: CVE-2021-47277
--------------------------------
commit 4422829e8053068e0225e4d0ef42dc41ea7c9ef5 upstream.
array_index_nospec does not work for uint64_t on 32-bit builds. However, the size of a memory slot must be less than 20 bits wide on those systems, since the memory slot must fit in the user address space (at most 2^32 bytes, i.e. 2^20 pages). So just store the offset in an unsigned long.
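In other words: gfn_t is u64, so the difference gfn - slot->base_gfn is 64 bits wide even on 32-bit, where unsigned long is only 32 bits and the masking behind array_index_nospec() cannot cover the full index. Narrowing the difference to unsigned long first is safe, because any truncated value that still passes the npages mask lands inside the slot. A minimal sketch of that reasoning (slot_offset, the repeated clamp helper, and the sample values are illustrative assumptions, not KVM code):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;  /* the kernel's gfn_t is u64 on all builds */

#define BITS_PER_LONG ((int)(sizeof(long) * CHAR_BIT))

/* Branch-free clamp from the earlier sketch, repeated for completeness. */
static unsigned long clamp_index_nospec(unsigned long index, unsigned long size)
{
	return index &
	       (unsigned long)(~(long)(index | (size - 1UL - index)) >>
			       (BITS_PER_LONG - 1));
}

/*
 * The fix in miniature: do the 64-bit subtraction, then explicitly
 * narrow to unsigned long before masking.  On a 32-bit build a
 * difference >= 2^32 truncates, but any truncated value small enough
 * to pass the npages mask still lands inside the slot, so even a
 * speculating CPU cannot form an out-of-bounds address.
 */
static unsigned long slot_offset(gfn_t gfn, gfn_t base_gfn, unsigned long npages)
{
	unsigned long offset = gfn - base_gfn;  /* intentional narrowing */
	return clamp_index_nospec(offset, npages);
}

int main(void)
{
	printf("%lu\n", slot_offset(0x105, 0x100, 16));  /* 5: in range */
	printf("%lu\n", slot_offset(1ULL << 40, 0, 16)); /* 0: out of range
							    (or truncated) */
	return 0;
}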
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
---
 include/linux/kvm_host.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 43ea5585452e..f34f0989e453 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1050,8 +1050,8 @@ __gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
 	 * table walks, do not let the processor speculate loads outside
 	 * the guest's registered memslots.
 	 */
-	unsigned long offset = array_index_nospec(gfn - slot->base_gfn,
-						  slot->npages);
+	unsigned long offset = gfn - slot->base_gfn;
+	offset = array_index_nospec(offset, slot->npages);
 	return slot->userspace_addr + offset * PAGE_SIZE;
 }
 
Feedback: The patch(es) you sent to the kernel@openeuler.org mailing list have been converted to a pull request successfully!
Pull request link: https://gitee.com/openeuler/kernel/pulls/8354
Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/I...