kvm-83-164.el5_5.9.src.rpm

From dc65b53573d9c43c174f37332a2f9886400e321f Mon Sep 17 00:00:00 2001
From: Marcelo Tosatti <mtosatti@redhat.com>
Date: Thu, 10 Dec 2009 20:10:29 -0200
Subject: [PATCH] KVM: MMU: remove prefault from invlpg handler

RH-Author: Marcelo Tosatti <mtosatti@redhat.com>
Message-id: <20091210201029.GB8498@amt.cnet>
Patchwork-id: 3916
O-Subject: [RHEL 5.5 5.4.Z PATCH] KVM: MMU: remove prefault from invlpg handler
Bugzilla: 531887
RH-Acked-by: Avi Kivity <avi@redhat.com>
RH-Acked-by: Gleb Natapov <gleb@redhat.com>
RH-Acked-by: Juan Quintela <quintela@redhat.com>

commit a7e5d6f238c9184e0ce8011ff27210d977137dea
Author: Marcelo Tosatti <mtosatti@redhat.com>
Date:   Sat Dec 5 12:34:11 2009 -0200

KVM: MMU: remove prefault from invlpg handler

The invlpg prefault optimization occasionally breaks Windows 2008 R2.

The visible symptom is that the invlpg handler instantiates a shadow pte
from a guest pte which is, microseconds later, overwritten with a
different gfn by another vcpu.

The guest OS may have mechanisms of its own, invisible to the hypervisor,
that prevent a present translation from being used.

While the documentation states that the CPU is at liberty to prefetch TLB
entries, it appears that guests do not take this into account, so remove
the TLB prefetch from invlpg.
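
Schematically (an illustrative interleaving, not a captured trace;
"handler" is FNAME(invlpg) in the diff below):

    vcpu0                                vcpu1
    -----                                -----
    guest executes invlpg
    handler zaps the shadow pte
    handler re-reads the guest pte
                                         guest stores a new gfn into
                                         that same guest pte
    handler prefaults a shadow pte
    from the now-stale value
    => a stale translation is present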

Cc: stable@kernel.org
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Bugzilla: 531887

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
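For reference, a straight-line sketch of the prefault tail this patch
removes, reconstructed from the hunks below. The helper name is
hypothetical; the calls and checks are taken verbatim from the removed
lines:

    /* Hypothetical helper equivalent to the removed tail of FNAME(invlpg). */
    static void FNAME(invlpg_prefault)(struct kvm_vcpu *vcpu,
                                       struct shadow_walker *walker)
    {
        pt_element_t gpte;

        if (walker->pte_gpa == -1)
            return;
        /*
         * Race window: between this atomic read of the guest pte and
         * kvm_mmu_pte_write() below, another vcpu can store a new gfn
         * into the same guest pte.
         */
        if (kvm_read_guest_atomic(vcpu->kvm, walker->pte_gpa, &gpte,
                                  sizeof(pt_element_t)))
            return;
        if (is_present_pte(gpte) && (gpte & PT_ACCESSED_MASK)) {
            if (mmu_topup_memory_caches(vcpu))
                return;
            /* Instantiate a shadow pte from the (possibly stale) gpte. */
            kvm_mmu_pte_write(vcpu, walker->pte_gpa, (const u8 *)&gpte,
                              sizeof(pt_element_t), 0);
        }
    }
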
 arch/x86/kvm/paging_tmpl.h |   18 +-----------------
 1 files changed, 1 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 5140910..e114b37 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -476,11 +476,6 @@ static int FNAME(shadow_invlpg_entry)(struct kvm_shadow_walk *_sw,
 	/* FIXME: properly handle invlpg on large guest pages */
 	if (level == PT_PAGE_TABLE_LEVEL ||
 	    ((level == PT_DIRECTORY_LEVEL) && is_large_pte(*sptep))) {
-		struct kvm_mmu_page *sp = page_header(__pa(sptep));
-
-		sw->pte_gpa = (sp->gfn << PAGE_SHIFT);
-		sw->pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
-
 		if (is_shadow_present_pte(*sptep)) {
 			rmap_remove(vcpu->kvm, sptep);
 			if (is_large_pte(*sptep))
@@ -497,7 +492,6 @@ static int FNAME(shadow_invlpg_entry)(struct kvm_shadow_walk *_sw,
 
 static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 {
-	pt_element_t gpte;
 	struct shadow_walker walker = {
 		.walker = { .entry = FNAME(shadow_invlpg_entry), },
 		.pte_gpa = -1,
@@ -509,17 +503,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 	if (walker.need_flush)
 		kvm_flush_remote_tlbs(vcpu->kvm);
 	spin_unlock(&vcpu->kvm->mmu_lock);
-	if (walker.pte_gpa == -1)
-		return;
-	if (kvm_read_guest_atomic(vcpu->kvm, walker.pte_gpa, &gpte,
-				  sizeof(pt_element_t)))
-		return;
-	if (is_present_pte(gpte) && (gpte & PT_ACCESSED_MASK)) {
-		if (mmu_topup_memory_caches(vcpu))
-			return;
-		kvm_mmu_pte_write(vcpu, walker.pte_gpa, (const u8 *)&gpte,
-				  sizeof(pt_element_t), 0);
-	}
+	return;
 }
 
 static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, gva_t vaddr)
-- 
1.6.3.rc4.29.g8146