KVM: MMU: increase per-vcpu rmap cache alloc size
author	Marcelo Tosatti <mtosatti@redhat.com>
Thu, 6 Aug 2009 17:39:55 +0000 (14:39 -0300)
committer	Greg Kroah-Hartman <gregkh@suse.de>
Wed, 9 Sep 2009 03:17:14 +0000 (20:17 -0700)
(cherry picked from commit c41ef344de212bd918f7765af21b5008628c03e0)

The page fault path can use two rmap_desc structures if:

- walk_addr's dirty pte update allocates one rmap_desc.
- mmu_lock is dropped, sptes are zapped resulting in rmap_desc being
  freed.
- fetch->mmu_set_spte allocates another rmap_desc.

Increase the pre-allocated rmap_desc count to 4 for safety.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
arch/x86/kvm/mmu.c

index 19b0d69..0376c8e 100644
@@ -298,7 +298,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
        if (r)
                goto out;
        r = mmu_topup_memory_cache(&vcpu->arch.mmu_rmap_desc_cache,
-                                  rmap_desc_cache, 1);
+                                  rmap_desc_cache, 4);
        if (r)
                goto out;
        r = mmu_topup_memory_cache_page(&vcpu->arch.mmu_page_cache, 8);
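
A minimal sketch of the topup pattern the hunk above tunes, assuming the usual
pre-allocation scheme: the struct layout, array size, and the names
mmu_memory_cache_sketch and topup_cache_sketch are illustrative, not the exact
arch/x86/kvm definitions; only the call site in the hunk confirms the
(cache, base_cache, min) argument order.

#include <linux/slab.h>
#include <linux/kernel.h>

/* Illustrative per-vcpu stash; field names and array size are assumptions. */
struct mmu_memory_cache_sketch {
	int nobjs;            /* objects currently pre-allocated        */
	void *objects[8];     /* filled here, consumed under mmu_lock   */
};

/*
 * Pre-fill the cache until it holds at least 'min' objects.  The page
 * fault path later pops entries from this stash while mmu_lock is held,
 * so it never has to call the allocator in a context that must not sleep.
 * Raising 'min' from 1 to 4 leaves headroom when a single fault consumes
 * more than one rmap_desc, as in the scenario described above.
 */
static int topup_cache_sketch(struct mmu_memory_cache_sketch *cache,
			      struct kmem_cache *base_cache, int min)
{
	void *obj;

	if (cache->nobjs >= min)
		return 0;
	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
		obj = kmem_cache_zalloc(base_cache, GFP_KERNEL);
		if (!obj)
			return -ENOMEM;
		cache->objects[cache->nobjs++] = obj;
	}
	return 0;
}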