kvm-83-270.el5_11.src.rpm

From 7208dbcac244fe86251e1a18f2519a8a2395da10 Mon Sep 17 00:00:00 2001
From: Zachary Amsden <zamsden@redhat.com>
Date: Tue, 30 Nov 2010 19:10:18 -0200
Subject: [PATCH] KVM: fix AMD initial TSC offset problems (additional fix)

RH-Author: Zachary Amsden <zamsden@redhat.com>
Message-id: <4CF54C1A.4060107@redhat.com>
Patchwork-id: 14083
O-Subject: [PATCH RHEL 5.6 1/1] KVM: fix AMD initial TSC offset problems
	(additional fix)
Bugzilla: 642659
RH-Acked-by: Prarit Bhargava <prarit@redhat.com>
RH-Acked-by: Rik van Riel <riel@redhat.com>
RH-Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
RH-Acked-by: Glauber Costa <glommer@redhat.com>
RH-Acked-by: Jes Sorensen <Jes.Sorensen@redhat.com>

QE found a bug in the compensation code, caused by a missing backport
of an upstream change.

Upstream, we have the entire TSC backwards-proof check in vendor
independent code, as follows:

         if (unlikely(vcpu->cpu != cpu) || check_tsc_unstable()) {
                 /* Make sure TSC doesn't go backwards */
                 s64 tsc_delta = !vcpu->arch.last_host_tsc ? 0 :
                                 native_read_tsc() - vcpu->arch.last_host_tsc;
                 if (tsc_delta < 0)
                         mark_tsc_unstable("KVM discovered backwards TSC");
                 if (check_tsc_unstable()) {
                         kvm_x86_ops->adjust_tsc_offset(vcpu, -tsc_delta);

However, the test (!vcpu->arch.last_host_tsc) is missing from the
vendor-specific AMD code in 5.6, which performs nearly the same task:

         if (unlikely(cpu != vcpu->cpu)) {
                 u64 tsc_this, delta;

                 /*
                  * Make sure that the guest sees a monotonically
                  * increasing TSC.
                  */
                 rdtscll(tsc_this);
                 delta = vcpu->arch.host_tsc - tsc_this;
                 svm->vmcb->control.tsc_offset += delta;

The fix is straightforward: we need to test host_tsc (the equivalent
of the upstream last_host_tsc variable, which was renamed) for zero.
If it is zero, the VCPU has never been run, and so no compensation
should be done.
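
To illustrate the failure mode (a sketch with made-up values, not part
of the patch; u64 arithmetic wraps modulo 2^64): on the very first
svm_vcpu_load(), host_tsc is still zero, so the unguarded subtraction
produces a huge bogus delta:

        /* hypothetical first svm_vcpu_load(), before this fix */
        u64 host_tsc = 0;                 /* VCPU has never been run */
        u64 tsc_this = 5000000000ULL;     /* made-up current host TSC */
        u64 delta = host_tsc - tsc_this;  /* wraps to 2^64 - 5000000000 */
        /*
         * tsc_offset += delta then effectively subtracts tsc_this
         * (mod 2^64), clobbering the initial TSC offset that was set
         * when the VCPU was created.
         */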

Note that the order of subtraction is reversed: the SVM code (below)
adds delta, whereas the upstream code adds (-tsc_delta). The upstream
code more clearly indicates what is going on.
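
Spelled out with the variable names from the two snippets above (with
host_tsc playing the role of last_host_tsc, and tsc_this the role of
native_read_tsc()), both paths add the same quantity:

        /*
         * upstream: tsc_delta = native_read_tsc() - last_host_tsc;
         *           adjust_tsc_offset(vcpu, -tsc_delta)
         *           => offset += last_host_tsc - native_read_tsc()
         *
         * SVM 5.6:  delta = host_tsc - tsc_this;
         *           tsc_offset += delta
         *           => offset += host_tsc - tsc_this
         */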

And yes, the old code really is that horrible: when a given VCPU
switches physical CPUs on SVM, the TSC cycles that elapsed across the
switch are lost. This fairly quickly destabilizes SMP VMs, even on a
system with a stable TSC.
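
A made-up timeline (assuming host_tsc was recorded by the preceding
vcpu_put(), and that the two CPUs have synchronized TSCs) shows the
loss:

        /*
         * t0: vcpu_put() on CPU0 records host_tsc = 1000
         * t1: 500 cycles later, svm_vcpu_load() on CPU1 reads
         *     tsc_this = 1500; delta = 1000 - 1500 = -500 is added
         *     to tsc_offset.
         * The guest TSC now reads what it read at t0: the 500 cycles
         * that elapsed while the VCPU was scheduled out have vanished.
         */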

However, fixing that requires bringing back too much code for
5.6 / 5.5.z.

Don't attempt to erase host TSC drift on first entry to svm_vcpu_load.

Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
 arch/x86/kvm/svm.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 67d01fc..9e02ab0 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -712,7 +712,8 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		 * increasing TSC.
 		 */
 		rdtscll(tsc_this);
-		delta = vcpu->arch.host_tsc - tsc_this;
+		delta = !vcpu->arch.host_tsc ? 0 :
+			vcpu->arch.host_tsc - tsc_this;
 		svm->vmcb->control.tsc_offset += delta;
 		vcpu->cpu = cpu;
 		kvm_migrate_timers(vcpu);
-- 
1.7.3.2