From fb4f05ca0a66a18d26628f5f97dc0a53dbaf0153 Mon Sep 17 00:00:00 2001
From: Rik van Riel <riel@redhat.com>
Date: Tue, 13 Apr 2010 22:38:03 -0300
Subject: [PATCH] EPT accessed bit emulation

RH-Author: Rik van Riel <riel@redhat.com>
Message-id: <20100413183803.1954291a@annuminas.surriel.com>
Patchwork-id: 8600
O-Subject: [PATCH RHEL5] EPT accessed bit emulation
Bugzilla: 582038
RH-Acked-by: Andrea Arcangeli <aarcange@redhat.com>
RH-Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
RH-Acked-by: Juan Quintela <quintela@redhat.com>

Trivial backport of the upstream changeset. Fixes bug 582038.

commit 6316e1c8c6af6ccb55ff8564231710660608f46c
Author: Rik van Riel <riel@redhat.com>
Date:   Wed Feb 3 16:11:03 2010 -0500

    KVM: VMX: emulate accessed bit for EPT

    Currently KVM pretends that pages with EPT mappings never got
    accessed. This has some side effects in the VM, like swapping out
    actively used guest pages and needlessly breaking up actively used
    hugepages.

    We can avoid those very costly side effects by emulating the
    accessed bit for EPT PTEs, which should only be slightly costly
    because pages pass through page_referenced infrequently.

    TLB flushing is taken care of by kvm_mmu_notifier_clear_flush_young().

    This seems to help prevent KVM guests from being swapped out when
    they should not on my system.

    Signed-off-by: Rik van Riel <riel@redhat.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
 arch/x86/kvm/mmu.c |   10 ++++++++--
 1 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 1b101e2..65f1938 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -800,9 +800,15 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp, void *param)
 	u64 *spte;
 	int young = 0;
 
-	/* always return old for EPT */
+	/*
+	 * Emulate the accessed bit for EPT, by checking if this page has
+	 * an EPT mapping, and clearing it if it does. On the next access,
+	 * a new EPT mapping will be established.
+	 * This has some overhead, but not as much as the cost of swapping
+	 * out actively used pages or breaking up actively used hugepages.
+	 */
 	if (!shadow_accessed_mask)
-		return 0;
+		return kvm_unmap_rmapp(kvm, rmapp);
 
 	spte = rmap_next(kvm, rmapp, NULL);
 	while (spte) {
-- 
1.7.0.3