KVM: MMU: make kvm_mmu_available_pages robust against n_used_mmu_pages > n_max_mmu_pages
author     Marcelo Tosatti <mtosatti@redhat.com>
Wed, 13 Mar 2013 01:36:43 +0000 (22:36 -0300)
committer  Gleb Natapov <gleb@redhat.com>
Wed, 13 Mar 2013 09:46:09 +0000 (11:46 +0200)
As noticed by Ulrich Obergfell <uobergfe@redhat.com>, the mmu
counters are for beancounting purposes only, so the accounting of
n_used_mmu_pages and n_max_mmu_pages can be relaxed (as it was, for
example, before commit f0f5933a1626c8df7b), which can result in
n_used_mmu_pages > n_max_mmu_pages.

Make code robust against n_used_mmu_pages > n_max_mmu_pages.

Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
arch/x86/kvm/mmu.h

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6987108..3b1ad00 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -57,8 +57,11 @@ int kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
 
 static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
 {
-       return kvm->arch.n_max_mmu_pages -
-               kvm->arch.n_used_mmu_pages;
+       if (kvm->arch.n_max_mmu_pages > kvm->arch.n_used_mmu_pages)
+               return kvm->arch.n_max_mmu_pages -
+                       kvm->arch.n_used_mmu_pages;
+
+       return 0;
 }
 
 static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
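
The change amounts to a saturating subtraction: because both counters are
unsigned, n_max_mmu_pages - n_used_mmu_pages would wrap to a huge value
whenever used exceeds max, and callers would see "plenty of pages
available". Below is a minimal, standalone user-space sketch of the same
pattern, not KVM code; struct mmu_counters and available_pages() are
hypothetical names used only for illustration.

#include <stdio.h>

/* Hypothetical stand-in for the two beancounting fields in struct kvm. */
struct mmu_counters {
	unsigned int n_max_mmu_pages;
	unsigned int n_used_mmu_pages;
};

/*
 * Saturating subtraction: report 0 rather than letting the unsigned
 * result wrap around when used > max.
 */
static unsigned int available_pages(const struct mmu_counters *c)
{
	if (c->n_max_mmu_pages > c->n_used_mmu_pages)
		return c->n_max_mmu_pages - c->n_used_mmu_pages;

	return 0;
}

int main(void)
{
	struct mmu_counters c = {
		.n_max_mmu_pages  = 100,
		.n_used_mmu_pages = 103,	/* overshoot allowed by the relaxed accounting */
	};

	/* Plain subtraction would print 4294967293 here; the check yields 0. */
	printf("available: %u\n", available_pages(&c));
	return 0;
}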