kernel-2.6.18-194.11.1.el5.src.rpm

From: Rik van Riel <riel@redhat.com>
Date: Thu, 7 May 2009 11:02:34 -0400
Subject: [xen] introduce 'no missed-tick accounting'
Message-id: 20090507110234.64db75bc@cuia.bos.redhat.com
O-Subject: Re: [RHEL 5.4 PATCH 3/5] introduce "no missed-tick accounting" (bz 449346)
Bugzilla: 449346
RH-Acked-by: Chris Lalancette <clalance@redhat.com>
RH-Acked-by: Don Dutile <ddutile@redhat.com>
RH-Acked-by: Justin M. Forbes <jforbes@redhat.com>

# HG changeset patch
# User Keir Fraser <keir.fraser@citrix.com>
# Date 1198146352 0
# Node ID 0f3055da442efe4c60f99af4ab67d35f02ca4f63
# Parent  54e15994c5bec5acda7496f70cb22276e464416d
hvm: Fix mistake in timer cleanup.
Spotted by Dexuan Cui <dexuan.cui@intel.com>
Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
xen-unstable changeset:   16601:2ebced8f8bafe196b5c6e7097d98d77e93e254af
xen-unstable date:        Thu Dec 13 09:29:21 2007 +0000

hvm: Reduce vpt.c dependencies on external timer details.
Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
xen-unstable changeset:   16600:4553bc1087d9f73e5c27f5511c1d4c724b4dbccf
xen-unstable date:        Wed Dec 12 15:41:20 2007 +0000

hvm: Fix destroy_periodic_time() to not race destruction of one-shot
timers.

This bug was tracked down by Dexuan Cui <dexuan.cui@intel.com>

Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
xen-unstable changeset:   16595:f2f7c92bf1c15206aa7072f6a4e470a132d528e2
xen-unstable date:        Wed Dec 12 11:08:21 2007 +0000

hvm: Split no_missed_tick_accounting into two modes:
 * no_missed_ticks_pending ('SYNC')
 * one_missed_tick_pending ('MIXED')

This is based on a patch by Dave Winchell <dwinchell@virtualiron.com>

Signed-off-by: Keir Fraser <keir.fraser@citrix.com>
xen-unstable changeset:   16545:0f9b5ab59579e8b980e231bfd3fdf5ab8a74e005
xen-unstable date:        Thu Dec 06 11:56:51 2007 +0000
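
A minimal, self-contained sketch of the delivery-side bookkeeping the two
modes imply, mirroring the pt_intr_post() hunk in the diff below; the
sketch_* names and plain C types are illustrative stand-ins rather than the
hypervisor's own structures:

    struct sketch_pt {
        unsigned int       pending_intr_nr;  /* missed ticks still owed          */
        unsigned long long last_plt_gtime;   /* guest time of last delivered tick */
        unsigned long long period_cycles;    /* tick period in guest TSC cycles   */
    };

    static void sketch_intr_post(struct sketch_pt *pt,
                                 int one_missed_tick_pending,
                                 unsigned long long now_gtime)
    {
        if ( one_missed_tick_pending )
        {
            /* 'MIXED': collapse all missed ticks into the one late tick just
             * delivered; guest time keeps tracking wall time. */
            pt->last_plt_gtime  = now_gtime;
            pt->pending_intr_nr = 0;
        }
        else
        {
            /* 'SYNC' and the older modes: deliver ticks one by one, each
             * advancing the timer's guest-time bookkeeping by one period. */
            pt->last_plt_gtime += pt->period_cycles;
            pt->pending_intr_nr--;
        }
    }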

x86, hvm: Clean up periodic timer code a little. This leads naturally
to a no-missed-tick-accounting mode in which ticks are delivered
'off beat' immediately upon re-scheduling when ticks have been missed,
after which delivery reverts to the usual 'on beat' schedule.
Signed-off-by: Keir Fraser <keir@xensource.com>
xen-unstable changeset:   16341:8ff5bb70136dbb8ae4a725400334f4bff3643ba8
xen-unstable date:        Thu Nov 08 10:33:18 2007 +0000

x86, hvm: Fix typo in no-missed-tick-accounting timer mode.
From: Dave Winchell <dwinchell@virtualiron.com>
Signed-off-by: Keir Fraser <keir@xensource.com>
xen-unstable changeset:   16334:644e7577f6ee00f746a63a63ca16284cc31f9ee8
xen-unstable date:        Wed Nov 07 14:53:32 2007 +0000

x86, hvm: More fixes to no-missed-tick-accounting mode.
Signed-off-by: Keir Fraser <keir@xensource.com>
xen-unstable changeset:   16315:070da619e65e87b69b2d99794840d84998fdf083
xen-unstable date:        Mon Nov 05 10:09:10 2007 +0000

hvm: Timer fixes:
 1. Do not record more than one pending interrupt in
 no-missed-tick-accounting mode. We do not stack up missed interrupts
 in this timer mode.
 2. Always record all missed ticks when we are in a
 missed-tick-accounting mode. Do not have a ceiling for this as it
 simply causes guests to lose track of wall time.
 3. General bits of cleanup and simplification.
From: Dave Winchell <dwinchell@virtualiron.com>
Signed-off-by: Keir Fraser <keir@xensource.com>
xen-unstable changeset:   16312:838e77a41a3c53a54428e642cb0440a8a6f8912b
xen-unstable date:        Fri Nov 02 16:34:54 2007 +0000
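
For reference, a standalone sketch of the arithmetic item 2 describes,
mirroring the accumulating path of pt_process_missed_ticks() in the diff
below; the function and variable names here are simplified stand-ins:

    typedef long long s_time_t;              /* nanoseconds, as in Xen */

    static void sketch_missed_ticks(s_time_t now, s_time_t period,
                                    s_time_t *scheduled,
                                    unsigned int *pending_intr_nr)
    {
        s_time_t missed = now - *scheduled;

        if ( missed <= 0 )
            return;                          /* next tick is still ahead of us */

        /* Whole periods we slipped by, plus the overdue tick itself. */
        missed = missed / period + 1;

        /* Record every missed tick -- no ceiling, otherwise the guest
         * silently loses track of wall time. */
        *pending_intr_nr += missed;

        /* Keep the next deadline aligned to the original tick boundary. */
        *scheduled += missed * period;
    }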

x86, hvm: Fix 'no_missed_tick_accounting' timer mode.
From: Haitao Shan <haitao.shan@intel.com>
Signed-off-by: Keir Fraser <keir@xensource.com>
xen-unstable changeset:   16277:eaa8014ef7796d267d6b9e9f05a64025b7b16118
xen-unstable date:        Wed Oct 31 09:14:49 2007 +0000

x86, hvm: New timer mode 'no missed-tick accounting'.
From: Haitao Shan <haitao.shan@intel.com>
Signed-off-by: Keir Fraser <keir@xensource.com>
xen-unstable changeset:   16274:44dde35cb2a663aef3fc27cc326766276cb393e8
xen-unstable date:        Tue Oct 30 16:11:47 2007 +0000

hvm, x86: Allow virtual timer mode to be specified.

In HVM config file:
timer_mode=0 # Default: virtual time is delayed when timer ticks are
             # missed due to preemption
timer_mode=1 # Virtual time always equals wall time, even while missed
             # ticks are pending

From: Haitao Shan <haitao.shan@intel.com>
Signed-off-by: Keir Fraser <keir@xensource.com>
xen-unstable changeset:   16237:b5a2cbca39308bc28c0c27cc9fd5375e3b41ad13
xen-unstable date:        Fri Oct 26 09:56:54 2007 +0100
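
With the later changesets above applied, two further values exist for this
parameter (HVMPTM_no_missed_ticks_pending = 2 and
HVMPTM_one_missed_tick_pending = 3; see include/public/hvm/params.h in the
diff below). Assuming the guest configuration's timer_mode value is passed
through unchanged to HVM_PARAM_TIMER_MODE, the corresponding settings would
be:

timer_mode=2 # 'SYNC': no missed ticks are held pending; guest time tracks
             # wall time
timer_mode=3 # 'MIXED': missed ticks are collapsed and delivered as one late
             # tick; guest time tracks wall time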

diff --git a/arch/x86/domain.c b/arch/x86/domain.c
index 84aaee4..389c418 100644
--- a/arch/x86/domain.c
+++ b/arch/x86/domain.c
@@ -1329,7 +1329,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     local_irq_disable();
 
     if ( is_hvm_vcpu(prev) && !list_empty(&prev->arch.hvm_vcpu.tm_list) )
-        pt_freeze_time(prev);
+        pt_save_timer(prev);
 
     set_current(next);
 
diff --git a/arch/x86/hvm/hvm.c b/arch/x86/hvm/hvm.c
index 6b414a6..227b3d8 100644
--- a/arch/x86/hvm/hvm.c
+++ b/arch/x86/hvm/hvm.c
@@ -107,17 +107,17 @@ void hvm_stts(struct vcpu *v)
         hvm_funcs.stts(v);
 }
 
-void hvm_set_guest_time(struct vcpu *v, u64 gtime)
+void hvm_set_guest_tsc(struct vcpu *v, u64 guest_tsc)
 {
     u64 host_tsc;
 
     rdtscll(host_tsc);
 
-    v->arch.hvm_vcpu.cache_tsc_offset = gtime - host_tsc;
+    v->arch.hvm_vcpu.cache_tsc_offset = guest_tsc - host_tsc;
     hvm_funcs.set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset);
 }
 
-u64 hvm_get_guest_time(struct vcpu *v)
+u64 hvm_get_guest_tsc(struct vcpu *v)
 {
     u64 host_tsc;
 
@@ -138,7 +138,7 @@ void hvm_do_resume(struct vcpu *v)
 
     hvm_stts(v);
 
-    pt_thaw_time(v);
+    pt_restore_timer(v);
 
     /* NB. Optimised for common case (p->state == STATE_IOREQ_NONE). */
     p = &get_ioreq(v)->vp_ioreq;
@@ -1132,6 +1132,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE(void) arg)
                 hvm_set_callback_via(d, a.value);
                 hvm_latch_shinfo_size(d);
                 break;
+            case HVM_PARAM_TIMER_MODE:
+                rc = -EINVAL;
+                if ( a.value > HVMPTM_one_missed_tick_pending )
+                    goto param_fail;
+                break;
             }
             d->arch.hvm_domain.params[a.index] = a.value;
             rc = 0;
diff --git a/arch/x86/hvm/i8254.c b/arch/x86/hvm/i8254.c
index 2aa3c1a..202a757 100644
--- a/arch/x86/hvm/i8254.c
+++ b/arch/x86/hvm/i8254.c
@@ -517,6 +517,7 @@ void pit_init(struct vcpu *v, unsigned long cpu_khz)
         s->mode = 0xff; /* the init mode */
         s->gate = (i != 2);
         pit_load_count(pit, i, 0);
+        pit->pt[i].source = PTSRC_isa;
     }
 
     spin_unlock(&pit->lock);
diff --git a/arch/x86/hvm/irq.c b/arch/x86/hvm/irq.c
index 25ada54..53a4b98 100644
--- a/arch/x86/hvm/irq.c
+++ b/arch/x86/hvm/irq.c
@@ -326,29 +326,6 @@ int cpu_get_interrupt(struct vcpu *v, int *type)
     return -1;
 }
 
-int get_isa_irq_vector(struct vcpu *v, int isa_irq, int type)
-{
-    unsigned int gsi = hvm_isa_irq_to_gsi(isa_irq);
-
-    if ( type == APIC_DM_EXTINT )
-        return (v->domain->arch.hvm_domain.vpic[isa_irq >> 3].irq_base
-                + (isa_irq & 7));
-
-    return domain_vioapic(v->domain)->redirtbl[gsi].fields.vector;
-}
-
-int is_isa_irq_masked(struct vcpu *v, int isa_irq)
-{
-    unsigned int gsi = hvm_isa_irq_to_gsi(isa_irq);
-
-    if ( is_lvtt(v, isa_irq) )
-        return !is_lvtt_enabled(v);
-
-    return ((v->domain->arch.hvm_domain.vpic[isa_irq >> 3].imr &
-             (1 << (isa_irq & 7))) &&
-            domain_vioapic(v->domain)->redirtbl[gsi].fields.mask);
-}
-
 int hvm_local_events_need_delivery(struct vcpu *v)
 {
     int pending = cpu_has_pending_irq(v);
diff --git a/arch/x86/hvm/rtc.c b/arch/x86/hvm/rtc.c
index 469eb34..a7c3913 100644
--- a/arch/x86/hvm/rtc.c
+++ b/arch/x86/hvm/rtc.c
@@ -42,14 +42,6 @@ static void rtc_periodic_cb(struct vcpu *v, void *opaque)
     spin_unlock(&s->lock);
 }
 
-int is_rtc_periodic_irq(void *opaque)
-{
-    RTCState *s = opaque;
-
-    return !(s->hw.cmos_data[RTC_REG_C] & RTC_AF || 
-             s->hw.cmos_data[RTC_REG_C] & RTC_UF);
-}
-
 /* Enable/configure/disable the periodic timer based on the RTC_PIE and
  * RTC_RATE_SELECT settings */
 static void rtc_timer_update(RTCState *s)
@@ -489,6 +481,8 @@ void rtc_init(struct vcpu *v, int base)
 
     spin_lock_init(&s->lock);
 
+    s->pt.source = PTSRC_isa;
+
     s->hw.cmos_data[RTC_REG_A] = RTC_REF_CLCK_32KHZ | 6; /* ~1kHz */
     s->hw.cmos_data[RTC_REG_B] = RTC_24H;
     s->hw.cmos_data[RTC_REG_C] = 0;
diff --git a/arch/x86/hvm/vlapic.c b/arch/x86/hvm/vlapic.c
index f4b3b8d..b039bcf 100644
--- a/arch/x86/hvm/vlapic.c
+++ b/arch/x86/hvm/vlapic.c
@@ -67,9 +67,6 @@ static unsigned int vlapic_lvt_mask[VLAPIC_LVT_NUM] =
 #define APIC_DEST_NOSHORT                0x0
 #define APIC_DEST_MASK                   0x800
 
-#define vlapic_lvt_enabled(vlapic, lvt_type)                    \
-    (!(vlapic_get_reg(vlapic, lvt_type) & APIC_LVT_MASKED))
-
 #define vlapic_lvt_vector(vlapic, lvt_type)                     \
     (vlapic_get_reg(vlapic, lvt_type) & APIC_VECTOR_MASK)
 
@@ -430,7 +427,7 @@ static uint32_t vlapic_get_tmcct(struct vlapic *vlapic)
     uint32_t tmcct, tmict = vlapic_get_reg(vlapic, APIC_TMICT);
     uint64_t counter_passed;
 
-    counter_passed = (hvm_get_guest_time(v) - vlapic->pt.last_plt_gtime) // TSC
+    counter_passed = (hvm_get_guest_time(v) - vlapic->timer_last_update) // TSC
                      * 1000000000ULL / ticks_per_sec(v) // NS
                      / APIC_BUS_CYCLE_NS / vlapic->hw.timer_divisor;
     tmcct = tmict - counter_passed;
@@ -531,6 +528,11 @@ static unsigned long vlapic_read(struct vcpu *v, unsigned long address,
     return 0;
 }
 
+void vlapic_pt_cb(struct vcpu *v, void *data)
+{
+    *(s_time_t *)data = hvm_get_guest_time(v);
+}
+
 static void vlapic_write(struct vcpu *v, unsigned long address,
                          unsigned long len, unsigned long val)
 {
@@ -660,7 +662,8 @@ static void vlapic_write(struct vcpu *v, unsigned long address,
 
         vlapic_set_reg(vlapic, APIC_TMICT, val);
         create_periodic_time(current, &vlapic->pt, period, vlapic->pt.irq,
-                             !vlapic_lvtt_period(vlapic), NULL, vlapic);
+                             !vlapic_lvtt_period(vlapic), vlapic_pt_cb,
+                             &vlapic->timer_last_update);
         vlapic->timer_last_update = vlapic->pt.last_plt_gtime;
 
         HVM_DBG_LOG(DBG_LEVEL_VLAPIC,
@@ -820,7 +823,8 @@ static void lapic_rearm(struct vlapic *s)
 
         s->pt.irq = lvtt & APIC_VECTOR_MASK;
         create_periodic_time(vlapic_vcpu(s), &s->pt, period, s->pt.irq,
-                             !vlapic_lvtt_period(s), NULL, s);
+                             !vlapic_lvtt_period(s), vlapic_pt_cb,
+                             &s->timer_last_update);
         s->timer_last_update = s->pt.last_plt_gtime;
 
         printk("lapic_load to rearm the actimer:"
@@ -917,6 +921,8 @@ int vlapic_init(struct vcpu *v)
 
     HVM_DBG_LOG(DBG_LEVEL_VLAPIC, "%d", v->vcpu_id);
 
+    vlapic->pt.source = PTSRC_lapic;
+
     vlapic->regs_page = alloc_domheap_page(NULL);
     if ( vlapic->regs_page == NULL )
     {
@@ -953,15 +959,3 @@ void vlapic_destroy(struct vcpu *v)
     unmap_domain_page_global(vlapic->regs);
     free_domheap_page(vlapic->regs_page);
 }
-
-int is_lvtt(struct vcpu *v, int vector)
-{
-    return (pt_active(&vcpu_vlapic(v)->pt) &&
-            (vector == vlapic_lvt_vector(vcpu_vlapic(v), APIC_LVTT)));
-}
-
-int is_lvtt_enabled(struct vcpu *v)
-{
-    return (vlapic_enabled(vcpu_vlapic(v)) &&
-            vlapic_lvt_enabled(vcpu_vlapic(v), APIC_LVTT));
-}
diff --git a/arch/x86/hvm/vpt.c b/arch/x86/hvm/vpt.c
index 18dc6ee..b2c3839 100644
--- a/arch/x86/hvm/vpt.c
+++ b/arch/x86/hvm/vpt.c
@@ -15,7 +15,6 @@
  * You should have received a copy of the GNU General Public License along with
  * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
  * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
  */
 
 #include <xen/time.h>
@@ -23,6 +22,48 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 
+#define mode_is(d, name) \
+    ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
+
+static int pt_irq_vector(struct periodic_time *pt, int type)
+{
+    struct vcpu *v = pt->vcpu;
+    unsigned int gsi, isa_irq;
+
+    if ( pt->source == PTSRC_lapic )
+        return pt->irq;
+
+    isa_irq = pt->irq;
+    gsi = hvm_isa_irq_to_gsi(isa_irq);
+
+    if ( type == APIC_DM_EXTINT )
+        return (v->domain->arch.hvm_domain.vpic[isa_irq >> 3].irq_base
+                + (isa_irq & 7));
+
+    return domain_vioapic(v->domain)->redirtbl[gsi].fields.vector;
+}
+
+static int pt_irq_masked(struct periodic_time *pt)
+{
+    struct vcpu *v = pt->vcpu;
+    unsigned int gsi, isa_irq;
+    uint8_t pic_imr;
+
+    if ( pt->source == PTSRC_lapic )
+    {
+        struct vlapic *vlapic = vcpu_vlapic(v);
+        return (!vlapic_enabled(vlapic) ||
+                (vlapic_get_reg(vlapic, APIC_LVTT) & APIC_LVT_MASKED));
+    }
+
+    isa_irq = pt->irq;
+    gsi = hvm_isa_irq_to_gsi(isa_irq);
+    pic_imr = v->domain->arch.hvm_domain.vpic[isa_irq >> 3].imr;
+
+    return (((pic_imr & (1 << (isa_irq & 7))) || !vlapic_accept_pic_intr(v)) &&
+            domain_vioapic(v->domain)->redirtbl[gsi].fields.mask);
+}
+
 static void pt_lock(struct periodic_time *pt)
 {
     struct vcpu *v;
@@ -42,32 +83,46 @@ static void pt_unlock(struct periodic_time *pt)
     spin_unlock(&pt->vcpu->arch.hvm_vcpu.tm_lock);
 }
 
-static void missed_ticks(struct periodic_time *pt)
+static void pt_process_missed_ticks(struct periodic_time *pt)
 {
-    s_time_t missed_ticks;
+    s_time_t missed_ticks, now = NOW();
 
     if ( pt->one_shot )
         return;
 
-    missed_ticks = NOW() - pt->scheduled;
+    missed_ticks = now - pt->scheduled;
     if ( missed_ticks <= 0 )
         return;
 
     missed_ticks = missed_ticks / (s_time_t) pt->period + 1;
-    if ( missed_ticks > 1000 )
-    {
-        /* TODO: Adjust guest time together */
-        pt->pending_intr_nr++;
-    }
+    if ( mode_is(pt->vcpu->domain, no_missed_ticks_pending) )
+        pt->do_not_freeze = !pt->pending_intr_nr;
     else
-    {
         pt->pending_intr_nr += missed_ticks;
-    }
-
     pt->scheduled += missed_ticks * pt->period;
 }
 
-void pt_freeze_time(struct vcpu *v)
+static void pt_freeze_time(struct vcpu *v)
+{
+    if ( !mode_is(v->domain, delay_for_missed_ticks) )
+        return;
+
+    v->arch.hvm_vcpu.guest_time = hvm_get_guest_time(v);
+}
+
+static void pt_thaw_time(struct vcpu *v)
+{
+    if ( !mode_is(v->domain, delay_for_missed_ticks) )
+        return;
+
+    if ( v->arch.hvm_vcpu.guest_time == 0 )
+         return;
+
+    hvm_set_guest_time(v, v->arch.hvm_vcpu.guest_time);
+    v->arch.hvm_vcpu.guest_time = 0;
+}
+
+void pt_save_timer(struct vcpu *v)
 {
     struct list_head *head = &v->arch.hvm_vcpu.tm_list;
     struct periodic_time *pt;
@@ -77,33 +132,30 @@ void pt_freeze_time(struct vcpu *v)
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
-    v->arch.hvm_vcpu.guest_time = hvm_get_guest_time(v);
-
     list_for_each_entry ( pt, head, list )
-        stop_timer(&pt->timer);
+        if ( !pt->do_not_freeze )
+            stop_timer(&pt->timer);
+
+    pt_freeze_time(v);
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 }
 
-void pt_thaw_time(struct vcpu *v)
+void pt_restore_timer(struct vcpu *v)
 {
     struct list_head *head = &v->arch.hvm_vcpu.tm_list;
     struct periodic_time *pt;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
-    if ( v->arch.hvm_vcpu.guest_time )
+    list_for_each_entry ( pt, head, list )
     {
-        hvm_set_guest_time(v, v->arch.hvm_vcpu.guest_time);
-        v->arch.hvm_vcpu.guest_time = 0;
-
-        list_for_each_entry ( pt, head, list )
-        {
-            missed_ticks(pt);
-            set_timer(&pt->timer, pt->scheduled);
-        }
+        pt_process_missed_ticks(pt);
+        set_timer(&pt->timer, pt->scheduled);
     }
 
+    pt_thaw_time(v);
+
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 }
 
@@ -118,7 +170,7 @@ static void pt_timer_fn(void *data)
     if ( !pt->one_shot )
     {
         pt->scheduled += pt->period;
-        missed_ticks(pt);
+        pt_process_missed_ticks(pt);
         set_timer(&pt->timer, pt->scheduled);
     }
 
@@ -130,29 +182,39 @@ static void pt_timer_fn(void *data)
 void pt_update_irq(struct vcpu *v)
 {
     struct list_head *head = &v->arch.hvm_vcpu.tm_list;
-    struct periodic_time *pt;
+    struct periodic_time *pt, *earliest_pt = NULL;
     uint64_t max_lag = -1ULL;
-    int irq = -1;
+    int irq, is_lapic;
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
     list_for_each_entry ( pt, head, list )
     {
-        if ( !is_isa_irq_masked(v, pt->irq) && pt->pending_intr_nr &&
+        if ( !pt_irq_masked(pt) && pt->pending_intr_nr &&
              ((pt->last_plt_gtime + pt->period_cycles) < max_lag) )
         {
             max_lag = pt->last_plt_gtime + pt->period_cycles;
-            irq = pt->irq;
+            earliest_pt = pt;
         }
     }
 
+    if ( earliest_pt == NULL )
+    {
+        spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+        return;
+    }
+
+    earliest_pt->irq_issued = 1;
+    irq = earliest_pt->irq;
+    is_lapic = (earliest_pt->source == PTSRC_lapic);
+
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
-    if ( is_lvtt(v, irq) )
+    if ( is_lapic )
     {
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
     }
-    else if ( irq >= 0 )
+    else
     {
         hvm_isa_irq_deassert(v->domain, irq);
         hvm_isa_irq_assert(v->domain, irq);
@@ -163,28 +225,12 @@ static struct periodic_time *is_pt_irq(struct vcpu *v, int vector, int type)
 {
     struct list_head *head = &v->arch.hvm_vcpu.tm_list;
     struct periodic_time *pt;
-    struct RTCState *rtc = &v->domain->arch.hvm_domain.pl_time.vrtc;
-    int vec;
 
     list_for_each_entry ( pt, head, list )
     {
-        if ( !pt->pending_intr_nr )
-            continue;
-
-        if ( is_lvtt(v, pt->irq) )
-        {
-            if ( pt->irq != vector )
-                continue;
+        if ( pt->pending_intr_nr && pt->irq_issued &&
+             (vector == pt_irq_vector(pt, type)) )
             return pt;
-        }
-
-        vec = get_isa_irq_vector(v, pt->irq, type);
-
-        /* RTC irq need special care */
-        if ( (vector != vec) || (pt->irq == 8 && !is_rtc_periodic_irq(rtc)) )
-            continue;
-
-        return pt;
     }
 
     return NULL;
@@ -205,6 +251,9 @@ void pt_intr_post(struct vcpu *v, int vector, int type)
         return;
     }
 
+    pt->do_not_freeze = 0;
+    pt->irq_issued = 0;
+
     if ( pt->one_shot )
     {
         if ( pt->on_list )
@@ -213,11 +262,20 @@ void pt_intr_post(struct vcpu *v, int vector, int type)
     }
     else
     {
-        pt->pending_intr_nr--;
-        pt->last_plt_gtime += pt->period_cycles;
+        if ( mode_is(v->domain, one_missed_tick_pending) )
+        {
+            pt->last_plt_gtime = hvm_get_guest_time(v);
+            pt->pending_intr_nr = 0; /* 'collapse' all missed ticks */
+        }
+        else
+        {
+            pt->last_plt_gtime += pt->period_cycles;
+            pt->pending_intr_nr--;
+        }
     }
 
-    if ( hvm_get_guest_time(v) < pt->last_plt_gtime )
+    if ( mode_is(v->domain, delay_for_missed_ticks) &&
+         (hvm_get_guest_time(v) < pt->last_plt_gtime) )
         hvm_set_guest_time(v, pt->last_plt_gtime);
 
     cb = pt->cb;
@@ -264,11 +322,15 @@ void create_periodic_time(
     struct vcpu *v, struct periodic_time *pt, uint64_t period,
     uint8_t irq, char one_shot, time_cb *cb, void *data)
 {
+    ASSERT(pt->source != 0);
+
     destroy_periodic_time(pt);
 
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
     pt->pending_intr_nr = 0;
+    pt->do_not_freeze = 0;
+    pt->irq_issued = 0;
 
     /* Periodic timer must be at least 0.9ms. */
     if ( (period < 900000) && !one_shot )
diff --git a/include/asm-x86/hvm/hvm.h b/include/asm-x86/hvm/hvm.h
index 9089965..6d78b57 100644
--- a/include/asm-x86/hvm/hvm.h
+++ b/include/asm-x86/hvm/hvm.h
@@ -185,8 +185,10 @@ hvm_load_cpu_guest_regs(struct vcpu *v, struct cpu_user_regs *r)
     hvm_funcs.load_cpu_guest_regs(v, r);
 }
 
-void hvm_set_guest_time(struct vcpu *v, u64 gtime);
-u64 hvm_get_guest_time(struct vcpu *v);
+void hvm_set_guest_tsc(struct vcpu *v, u64 guest_tsc);
+u64 hvm_get_guest_tsc(struct vcpu *v);
+#define hvm_set_guest_time(vcpu, gtime) hvm_set_guest_tsc(vcpu, gtime)
+#define hvm_get_guest_time(vcpu)        hvm_get_guest_tsc(vcpu)
 
 static inline int
 hvm_paging_enabled(struct vcpu *v)
diff --git a/include/asm-x86/hvm/irq.h b/include/asm-x86/hvm/irq.h
index 781b845..12c721d 100644
--- a/include/asm-x86/hvm/irq.h
+++ b/include/asm-x86/hvm/irq.h
@@ -122,8 +122,6 @@ void hvm_set_callback_via(struct domain *d, uint64_t via);
 
 int cpu_get_interrupt(struct vcpu *v, int *type);
 int cpu_has_pending_irq(struct vcpu *v);
-int get_isa_irq_vector(struct vcpu *vcpu, int irq, int type);
-int is_isa_irq_masked(struct vcpu *v, int isa_irq);
 
 /*
  * Currently IA64 Xen doesn't support MSI. So for x86, we define this macro
diff --git a/include/asm-x86/hvm/vlapic.h b/include/asm-x86/hvm/vlapic.h
index 80cd32e..2f83365 100644
--- a/include/asm-x86/hvm/vlapic.h
+++ b/include/asm-x86/hvm/vlapic.h
@@ -94,7 +94,4 @@ struct vlapic *apic_round_robin(
 
 int vlapic_match_logical_addr(struct vlapic *vlapic, uint8_t mda);
 
-int is_lvtt(struct vcpu *v, int vector);
-int is_lvtt_enabled(struct vcpu *v);
-
 #endif /* __ASM_X86_HVM_VLAPIC_H__ */
diff --git a/include/asm-x86/hvm/vpt.h b/include/asm-x86/hvm/vpt.h
index 20e4fbb..38d4aae 100644
--- a/include/asm-x86/hvm/vpt.h
+++ b/include/asm-x86/hvm/vpt.h
@@ -57,6 +57,11 @@ struct periodic_time {
     struct list_head list;
     bool_t on_list;
     bool_t one_shot;
+    bool_t do_not_freeze;
+    bool_t irq_issued;
+#define PTSRC_isa    1 /* ISA time source */
+#define PTSRC_lapic  2 /* LAPIC time source */
+    u8 source;                  /* PTSRC_ */
     u8 irq;
     struct vcpu *vcpu;          /* vcpu timer interrupt delivers to */
     u32 pending_intr_nr;        /* pending timer interrupts */
@@ -116,8 +121,8 @@ struct pl_time {    /* platform time */
 
 #define ticks_per_sec(v) (v->domain->arch.hvm_domain.tsc_frequency)
 
-void pt_freeze_time(struct vcpu *v);
-void pt_thaw_time(struct vcpu *v);
+void pt_save_timer(struct vcpu *v);
+void pt_restore_timer(struct vcpu *v);
 void pt_update_irq(struct vcpu *v);
 void pt_intr_post(struct vcpu *v, int vector, int type);
 void pt_reset(struct vcpu *v);
@@ -128,8 +133,10 @@ void pt_migrate(struct vcpu *v);
 
 /*
  * Create/destroy a periodic (or one-shot!) timer.
- * The given periodic timer structure must be initialised with zero bytes or
- * have been initialised by a previous invocation of create_periodic_time().
+ * The given periodic timer structure must be initialised with zero bytes,
+ * except for the 'source' field which must be initialised with the
+ * correct PTSRC_ value. The initialised timer structure can then be passed
+ * to {create,destroy}_periodic_time() any number of times and in any order.
  * Note that, for a given periodic timer, invocations of these functions MUST
  * be serialised.
  */
@@ -145,7 +152,6 @@ void pit_deinit(struct domain *d);
 void rtc_init(struct vcpu *v, int base);
 void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
-int is_rtc_periodic_irq(void *opaque);
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);
 
diff --git a/include/public/hvm/params.h b/include/public/hvm/params.h
index 9657654..622088f 100644
--- a/include/public/hvm/params.h
+++ b/include/public/hvm/params.h
@@ -52,9 +52,33 @@
 
 #ifdef __ia64__
 #define HVM_PARAM_NVRAM_FD     7
-#define HVM_NR_PARAMS          8
-#else
-#define HVM_NR_PARAMS          7
 #endif
 
+/*
+ * Set mode for virtual timers (currently x86 only):
+ *  delay_for_missed_ticks (default):
+ *   Do not advance a vcpu's time beyond the correct delivery time for
+ *   interrupts that have been missed due to preemption. Deliver missed
+ *   interrupts when the vcpu is rescheduled and advance the vcpu's virtual
+ *   time stepwise for each one.
+ *  no_delay_for_missed_ticks:
+ *   As above, missed interrupts are delivered, but guest time always tracks
+ *   wallclock (i.e., real) time while doing so.
+ *  no_missed_ticks_pending:
+ *   No missed interrupts are held pending. Instead, to ensure ticks are
+ *   delivered at some non-zero rate, if we detect missed ticks then the
+ *   internal tick alarm is not disabled if the VCPU is preempted during the
+ *   next tick period.
+ *  one_missed_tick_pending:
+ *   Missed interrupts are collapsed together and delivered as one 'late tick'.
+ *   Guest time always tracks wallclock (i.e., real) time.
+ */
+#define HVM_PARAM_TIMER_MODE   10
+#define HVMPTM_delay_for_missed_ticks    0
+#define HVMPTM_no_delay_for_missed_ticks 1
+#define HVMPTM_no_missed_ticks_pending   2
+#define HVMPTM_one_missed_tick_pending   3
+
+#define HVM_NR_PARAMS          11
+
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */