pandora-kernel.git
12 years agoKVM: mmu_notifier: Flush TLBs before releasing mmu_lock
Takuya Yoshikawa [Fri, 10 Feb 2012 06:28:31 +0000 (15:28 +0900)]
KVM: mmu_notifier: Flush TLBs before releasing mmu_lock

Other threads may process the same page in that small window, skip the
TLB flush, and then return before these functions do the flush.

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Introduce kvm_memory_slot::arch and move lpage_info into it
Takuya Yoshikawa [Wed, 8 Feb 2012 04:02:18 +0000 (13:02 +0900)]
KVM: Introduce kvm_memory_slot::arch and move lpage_info into it

Some members of kvm_memory_slot are not used by every architecture.

This patch is the first step toward making this difference clear by
introducing kvm_memory_slot::arch; lpage_info is moved into it.
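
A hedged sketch of the layering (x86 shown for illustration; only the
member names mentioned in this log are taken as given):

    /* defined per architecture */
    struct kvm_arch_memory_slot {
            struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
    };

    struct kvm_memory_slot {
            gfn_t base_gfn;
            unsigned long npages;
            unsigned long *rmap;
            /* ... other generic members ... */
            struct kvm_arch_memory_slot arch;
    };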

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Simplify ifndef conditional usage in __kvm_set_memory_region()
Takuya Yoshikawa [Wed, 8 Feb 2012 04:01:09 +0000 (13:01 +0900)]
KVM: Simplify ifndef conditional usage in __kvm_set_memory_region()

Narrow down the text controlled by the conditional so that it
covers only the lpage_info and rmap code.

For this we change the way we check whether the slot is being created
from "if (npages && !new.rmap)" to "if (npages && !old.npages)".

We also stop checking if lpage_info is NULL when we create lpage_info
because we do it from inside the slot creation code block.

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Split lpage_info creation out from __kvm_set_memory_region()
Takuya Yoshikawa [Wed, 8 Feb 2012 04:00:13 +0000 (13:00 +0900)]
KVM: Split lpage_info creation out from __kvm_set_memory_region()

This makes it easy to make lpage_info architecture specific.

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Introduce gfn_to_index() which returns the index for a given level
Takuya Yoshikawa [Wed, 8 Feb 2012 03:59:10 +0000 (12:59 +0900)]
KVM: Introduce gfn_to_index() which returns the index for a given level

This patch cleans up the code and removes the "(void)level;" warning
suppressor.

Note that we can also use this for PT_PAGE_TABLE_LEVEL to treat every
level uniformly later.
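
A minimal sketch of the helper (this mirrors the x86 lpage_info
indexing; KVM_HPAGE_GFN_SHIFT() is the existing per-level shift macro):

    static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
    {
            /* KVM_HPAGE_GFN_SHIFT(PT_PAGE_TABLE_LEVEL) must be 0. */
            return (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
                   (base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
    }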

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: provide control registers via kvm_run
Christian Borntraeger [Mon, 6 Feb 2012 09:59:07 +0000 (10:59 +0100)]
KVM: s390: provide control registers via kvm_run

There are several cases where userspace needs the control registers.
Let's also provide those in kvm_run.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: add stop_on_stop flag when doing stop and store
Jens Freimann [Mon, 6 Feb 2012 09:59:06 +0000 (10:59 +0100)]
KVM: s390: add stop_on_stop flag when doing stop and store

When we do a stop and store status we need to pass ACTION_STOP_ON_STOP
flag to __sigp_stop().

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: ignore sigp stop overinitiative
Jens Freimann [Mon, 6 Feb 2012 09:59:05 +0000 (10:59 +0100)]
KVM: s390: ignore sigp stop overinitiative

In __inject_sigp_stop() do nothing when the CPU is already in stopped
state.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: make sigp restart return busy when stop pending
Jens Freimann [Wed, 8 Feb 2012 07:28:29 +0000 (08:28 +0100)]
KVM: s390: make sigp restart return busy when stop pending

On reboot the guest sends, in smp_send_stop(), a sigp stop to all CPUs
except for the current CPU.  Then the guest switches to the IPL CPU by
sending a restart to it, followed by a sigp stop to the current CPU.
Since restart is handled by userspace, it's possible that the restart
is delivered before the old stop.  This means that the IPL CPU isn't
restarted and we have no running CPUs.  So let's make sure that there
is no stop action pending when we do the restart.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: do store status after handling STOP_ON_STOP bit
Jens Freimann [Mon, 6 Feb 2012 09:59:03 +0000 (10:59 +0100)]
KVM: s390: do store status after handling STOP_ON_STOP bit

In handle_stop() handle the stop bit before doing the store status as
described for "Stop and Store Status" in the Principles of Operation.
We have to give up the local_int.lock before calling kvm store status
since it calls gmap_fault() which might sleep. Since local_int.lock
only protects local_int.* and not guest memory we can give up the lock.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: Sanitize fpc registers for KVM_SET_FPU
Christian Borntraeger [Mon, 6 Feb 2012 09:59:02 +0000 (10:59 +0100)]
KVM: s390: Sanitize fpc registers for KVM_SET_FPU

commit 7eef87dc99e419b1cc051e4417c37e4744d7b661 (KVM: s390: fix
register setting) added a load of the floating point control register
to the KVM_SET_FPU path. Let's make sure that the fpc is valid.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Fix write protection race during dirty logging
Takuya Yoshikawa [Sun, 5 Feb 2012 11:42:41 +0000 (20:42 +0900)]
KVM: Fix write protection race during dirty logging

This patch fixes a race introduced by:

  commit 95d4c16ce78cb6b7549a09159c409d52ddd18dae
  KVM: Optimize dirty logging by rmap_write_protect()

During protecting pages for dirty logging, other threads may also try
to protect a page in mmu_sync_children() or kvm_mmu_get_page().

In such a case, because get_dirty_log releases mmu_lock before flushing
TLBs, the following race condition can happen:

  A (get_dirty_log)     B (another thread)

  lock(mmu_lock)
  clear pte.w
  unlock(mmu_lock)
                        lock(mmu_lock)
                        pte.w is already cleared
                        unlock(mmu_lock)
                        skip TLB flush
                        return
  ...
  TLB flush

Though thread B assumes the page has already been protected when it
returns, the remaining TLB entry will break that assumption.

This patch fixes this problem by making get_dirty_log hold the mmu_lock
until it flushes the TLBs.
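
A minimal sketch of the resulting shape (the write-protection helper
name here is an assumption, not the kernel's exact function):

    spin_lock(&kvm->mmu_lock);
    /* clear pte.w on the sptes backing the slot's dirty pages */
    write_protected = write_protect_dirty_pages(kvm, memslot);
    if (write_protected)
            kvm_flush_remote_tlbs(kvm);     /* flush before dropping the lock */
    spin_unlock(&kvm->mmu_lock);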

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: VMX: remove yield_on_hlt
Raghavendra K T [Tue, 7 Feb 2012 17:49:20 +0000 (23:19 +0530)]
KVM: VMX: remove yield_on_hlt

yield_on_hlt was introduced for CPU bandwidth capping. Now it is
redundant with the CFS hard limit.

yield_on_hlt also complicates scenarios in paravirtualized environments
that need to trap halt, e.g. paravirtualized ticket spinlocks.

Acked-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Track TSC synchronization in generations
Zachary Amsden [Fri, 3 Feb 2012 17:43:57 +0000 (15:43 -0200)]
KVM: Track TSC synchronization in generations

This allows us to track the original nanosecond and counter values
at each phase of TSC writing by the guest.  This gets us perfect
offset matching for stable TSC systems, and perfect software
computed TSC matching for machines with unstable TSC.

Signed-off-by: Zachary Amsden <zamsden@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Don't mark TSC unstable due to S4 suspend
Zachary Amsden [Fri, 3 Feb 2012 17:43:56 +0000 (15:43 -0200)]
KVM: Don't mark TSC unstable due to S4 suspend

During a host suspend, the TSC may go backwards, which KVM interprets
as an unstable TSC.  Technically, KVM should not be marking the TSC
unstable, which causes the TSC clocksource to go bad, but we do need
to adjust the TSC offsets in such a case.

Dealing with this issue is a little tricky as the only place we
can reliably do it is before much of the timekeeping infrastructure
is up and running.  On top of this, we are not in a KVM thread
context, so we may not be able to safely access VCPU fields.
Instead, we compute our best known hardware offset at power-up and
stash it to be applied to all VCPUs when they actually start running.

Signed-off-by: Zachary Amsden <zamsden@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Allow adjust_tsc_offset to be in host or guest cycles
Marcelo Tosatti [Fri, 3 Feb 2012 17:43:55 +0000 (15:43 -0200)]
KVM: Allow adjust_tsc_offset to be in host or guest cycles

Redefine the API to take a parameter indicating whether an
adjustment is in host or guest cycles.

Signed-off-by: Zachary Amsden <zamsden@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Add last_host_tsc tracking back to KVM
Zachary Amsden [Fri, 3 Feb 2012 17:43:54 +0000 (15:43 -0200)]
KVM: Add last_host_tsc tracking back to KVM

The variable last_host_tsc was removed from upstream code.  I am adding
it back for two reasons.  First, it is unnecessary to use guest TSC
computation to conclude information about the host TSC.  The guest may
set the TSC backwards (a case handled by the previous patch), but the
computation of the guest TSC (and fetching an MSR) is significantly more
work and complexity than simply reading the hardware counter.  In addition,
we don't actually need the guest TSC for any part of the computation;
by always recomputing the offset, we can eliminate the need to deal with
the current offset and any scaling factors that may apply.

The second reason is that later on, we are going to be using the host
TSC value to restore TSC offsets after a host S4 suspend, so we need to
be reading the host values, not the guest values here.

Signed-off-by: Zachary Amsden <zamsden@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Fix last_guest_tsc / tsc_offset semantics
Zachary Amsden [Fri, 3 Feb 2012 17:43:53 +0000 (15:43 -0200)]
KVM: Fix last_guest_tsc / tsc_offset semantics

The variable last_guest_tsc was being used as an ad-hoc indicator
that the guest TSC has been initialized and recorded correctly.  However,
it may not have been; it could be that the guest TSC has been set to some
large value, then back to a small value (by, say, a software reboot).

This defeats the logic and causes KVM to falsely assume that the
guest TSC has gone backwards, marking the host TSC unstable, which
is undesirable behavior.

In addition, rather than try to compute an offset adjustment for the
TSC on unstable platforms, just recompute the whole offset.  This
allows us to get rid of one callsite for adjust_tsc_offset, which
is problematic because the units it takes are in guest units, but
here, the computation was originally being done in host units.

Doing this, and also recording last_guest_tsc when the TSC is written,
allows us to remove the tricky logic which depended on last_guest_tsc
being zero to indicate a reset or an uninitialized value.

Instead, we now have the guarantee that the guest TSC offset is
always at least something which will get us last_guest_tsc.

Signed-off-by: Zachary Amsden <zamsden@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Leave TSC synchronization window open with each new sync
Zachary Amsden [Fri, 3 Feb 2012 17:43:52 +0000 (15:43 -0200)]
KVM: Leave TSC synchronization window open with each new sync

Currently, when the TSC is written by the guest, the variable
ns is updated to force the current write to appear to have taken
place at the time of the first write in this sync phase.  This
leaves a cliff at the end of the match window where updates will
fall off the end.  There are two scenarios where this can be a
problem in practice - first, on a system with a large number of
VCPUs, the sync period may last for an extended period of time.

The second way this can happen is if the VM reboots very rapidly
and we catch a VCPU TSC synchronization just around the edge.
We may be unaware of the reboot, and thus the first VCPU might
synchronize with an old set of the timer (at, say 0.97 seconds
ago, when first powered on).  The second VCPU can come in 0.04
seconds later to try to synchronize, but it misses the window
because it is just over the threshold.

Instead, stop doing this artificial setback of the ns variable
and just update it with every write of the TSC.

It may be observed that doing so causes values computed by
compute_guest_tsc to diverge slightly across CPUs - note that
the last_tsc_ns and last_tsc_write variables are used here, and
now last_tsc_ns will be different for each VCPU, reflecting
the actual time of the update.

However, compute_guest_tsc is used only for guests which already
have TSC stability issues, and further, note that the previous
patch has caused last_tsc_write to be incremented by the difference
in nanoseconds, converted back into guest cycles.  As such, only
boundary rounding errors should be visible, which given the
resolution in nanoseconds, is going to only be a few cycles and
only visible in cross-CPU consistency tests.  The problem can be
fixed by adding a new set of variables to track the start offset
and start write value for the current sync cycle.
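
A hedged before/after sketch of the ns handling in kvm_write_tsc()
(variable names follow this log; the surrounding matching logic is
elided, and this is not the kernel's verbatim code):

    /* before: pin this write back to the first write of the phase */
    ns = kvm->arch.last_tsc_nsec;

    /* after: record the actual time of every write */
    kvm->arch.last_tsc_nsec = ns;
    kvm->arch.last_tsc_write = data;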

Signed-off-by: Zachary Amsden <zamsden@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Improve TSC offset matching
Zachary Amsden [Fri, 3 Feb 2012 17:43:51 +0000 (15:43 -0200)]
KVM: Improve TSC offset matching

There are a few improvements that can be made to the TSC offset
matching code.  First, we don't need to call the 128-bit multiply
(especially on a constant number); the code is much nicer doing the
computation in nanosecond units.

Second, the way everything is setup with software TSC rate scaling,
we currently have per-cpu rates.  Obviously this isn't too desirable
to use in practice, but if for some reason we do change the rate of
all VCPUs at runtime, then reset the TSCs, we will only want to
match offsets for VCPUs running at the same rate.

Finally, for the case where we have an unstable host TSC, but
rate scaling is being done in hardware, we should call the platform
code to compute the TSC offset, so the math is reorganized to recompute
the base instead, then transform the base into an offset using the
existing API.

[avi: fix 64-bit division on i386]

Signed-off-by: Zachary Amsden <zamsden@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
KVM: Fix 64-bit division in kvm_write_tsc()

Breaks i386 build.

Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Infrastructure for software and hardware based TSC rate scaling
Zachary Amsden [Fri, 3 Feb 2012 17:43:50 +0000 (15:43 -0200)]
KVM: Infrastructure for software and hardware based TSC rate scaling

This requires some restructuring; rather than use 'virtual_tsc_khz'
to indicate whether hardware rate scaling is in effect, we consider
each VCPU to always have a virtual TSC rate.  Instead, there is new
logic above the vendor-specific hardware scaling that decides whether
it is even necessary to use and updates all rate variables used by
common code.  This means we can simply query the virtual rate at
any point, which is needed for software rate scaling.

There is also now a threshold added to the TSC rate scaling; minor
differences and variations of measured TSC rate can accidentally
provoke rate scaling to be used when it is not needed.  Instead,
we have a tolerance variable called tsc_tolerance_ppm, which is
the maximum variation from user requested rate at which scaling
will be used.  The default is 250ppm, which is half the threshold
for NTP adjustment, allowing for some hardware variation.

In the event that hardware rate scaling is not available, we can
kludge a bit by forcing TSC catchup to turn on when a faster than
hardware speed has been requested, but there is nothing available
yet for the reverse case; this requires a trap and emulate software
implementation for RDTSC, which is still forthcoming.

[avi: fix 64-bit division on i386]

Signed-off-by: Zachary Amsden <zamsden@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: x86: increase recommended max vcpus to 160
Marcelo Tosatti [Fri, 3 Feb 2012 14:28:31 +0000 (12:28 -0200)]
KVM: x86: increase recommended max vcpus to 160

Increase recommended max vcpus from 64 to 160 (tested internally
at Red Hat).

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agox86: Introduce x86_cpuinit.early_percpu_clock_init hook
Igor Mammedov [Tue, 7 Feb 2012 14:52:44 +0000 (15:52 +0100)]
x86: Introduce x86_cpuinit.early_percpu_clock_init hook

When a kvm guest uses kvmclock, it may hang on vcpu hot-plug.
This is caused by an overflow in pvclock_get_nsec_offset,

    u64 delta = tsc - shadow->tsc_timestamp;

which in turn is caused by undefined values from the percpu
hv_clock that hasn't been initialized yet.
The uninitialized clock on the CPU being booted is accessed from
   start_secondary
    -> smp_callin
      ->  smp_store_cpu_info
        -> identify_secondary_cpu
          -> mtrr_ap_init
            -> mtrr_restore
              -> stop_machine_from_inactive_cpu
                -> queue_stop_cpus_work
                  ...
                    -> sched_clock
                      -> kvm_clock_read
which is well before the x86_cpuinit.setup_percpu_clockev call in
start_secondary, where the percpu clock is initialized.

This patch introduces a hook that allows setting up/initializing the
per_cpu clock early (sketched below) and avoids overflow due to reading
  - undefined values
  - old values if the cpu was offlined and then onlined again
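
A hedged sketch of the wiring (the default hook is a no-op; the
kvmclock override function name should be treated as an assumption):

    /* arch/x86/kernel/x86_init.c (sketch) */
    struct x86_cpuinit_ops x86_cpuinit = {
            .early_percpu_clock_init = x86_init_noop,
            .setup_percpu_clockev    = setup_secondary_APIC_clock,
    };

    /* arch/x86/kernel/kvmclock.c (sketch) */
    x86_cpuinit.early_percpu_clock_init = kvm_setup_secondary_clock;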

Another possible early user of this clock source is ftrace, which
accesses it to get timestamps for ring buffer entries. So if
mtrr_ap_init is moved from identify_secondary_cpu to past
x86_cpuinit.setup_percpu_clockev in start_secondary, ftrace
may cause the same overflow/hang on cpu hot-plug anyway.

More complete description of the problem:
  https://lkml.org/lkml/2012/2/2/101

Credits to Marcelo Tosatti <mtosatti@redhat.com> for hook idea.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: x86: reset edge sense circuit of i8259 on init
Gleb Natapov [Tue, 24 Jan 2012 13:06:05 +0000 (15:06 +0200)]
KVM: x86: reset edge sense circuit of i8259 on init

The spec says that during initialization "The edge sense circuit is
reset which means that following initialization an interrupt request
(IR) input must make a low-to-high transition to generate an interrupt",
but currently, if an edge-triggered interrupt is in the IRR, it is
delivered after i8259 initialization.
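
A minimal sketch of the fix (the i8259 emulation remembers previously
seen IRR edges in last_irr, so initialization must clear it):

    /* in the ICW1/init handling of the emulated i8259 (sketch) */
    s->last_irr = 0;        /* require a new low-to-high transition */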

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Add HPT preallocator
Alexander Graf [Mon, 16 Jan 2012 18:12:11 +0000 (19:12 +0100)]
KVM: PPC: Add HPT preallocator

We're currently allocating 16MB of linear memory on demand when creating
a guest. That works sometimes, but finding 16MB of linear memory
available in the system at runtime is definitely not a given.

So let's add another command line option, similar to the RMA preallocator,
that we can use to keep a pool of page tables around. Now, when a guest
gets created, it has a pretty low chance of hitting an OOM.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Initialize linears with zeros
Alexander Graf [Tue, 17 Jan 2012 14:11:28 +0000 (15:11 +0100)]
KVM: PPC: Initialize linears with zeros

RMAs and HPT preallocated spaces should be zeroed, so we don't accidentally
leak information from previous VM executions.

Signed-off-by: Alexander Graf <agraf@suse.de>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Convert RMA allocation into generic code
Alexander Graf [Mon, 16 Jan 2012 15:50:10 +0000 (16:50 +0100)]
KVM: PPC: Convert RMA allocation into generic code

We have code to allocate big chunks of linear memory on bootup for later use.
This code is currently used for RMA allocation, but can be useful beyond that
extent.

Make it generic so we can reuse it for other stuff later.

Signed-off-by: Alexander Graf <agraf@suse.de>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: E500: Fail init when not on e500v2
Alexander Graf [Wed, 18 Jan 2012 23:23:46 +0000 (00:23 +0100)]
KVM: PPC: E500: Fail init when not on e500v2

When enabling the current KVM code on e500mc, I get the following oops:

    Oops: Exception in kernel mode, sig: 4 [#1]
    SMP NR_CPUS=8 P2041 RDB
    Modules linked in:
    NIP: c067df4c LR: c067df44 CTR: 00000000
    REGS: ee055ed0 TRAP: 0700   Not tainted  (3.2.0-10391-g36c5afe)
    MSR: 00029002 <CE,EE,ME>  CR: 24042022  XER: 00000000
    TASK = ee0429b0[1] 'swapper/0' THREAD: ee054000 CPU: 2
    GPR00: c067df44 ee055f80 ee0429b0 00000000 00000058 0000003f ee211600 60c6b864
    GPR08: 7cc903a6 0000002c 00000000 00000001 44042082 2d180088 00000000 00000000
    GPR16: c0000a00 00000014 3fffffff 03fe9000 00000015 7ff3be68 c06e0000 00000000
    GPR24: 00000000 00000000 00001720 c067df1c c06e0000 00000000 ee054000 c06ab51c
    NIP [c067df4c] kvmppc_e500_init+0x30/0xf8
    LR [c067df44] kvmppc_e500_init+0x28/0xf8
    Call Trace:
    [ee055f80] [c067df44] kvmppc_e500_init+0x28/0xf8 (unreliable)
    [ee055fb0] [c0001d30] do_one_initcall+0x50/0x1f0
    [ee055fe0] [c06721dc] kernel_init+0xa4/0x14c
    [ee055ff0] [c000e910] kernel_thread+0x4c/0x68
    Instruction dump:
    9421ffd0 7c0802a6 93410018 9361001c 90010034 93810020 93a10024 93c10028
    93e1002c 4bfffe7d 2c030000 408200a4 <7c1082a6> 90010008 7c1182a6 9001000c
    ---[ end trace b8ef4903fcbf9dd3 ]---

Since it doesn't make sense to run the init function on any non-supported
platform, we can just call our "is this platform supported?" function and
bail out of init() if it's not.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Move gfn_to_memslot() to kvm_host.h
Paul Mackerras [Thu, 12 Jan 2012 20:09:51 +0000 (20:09 +0000)]
KVM: Move gfn_to_memslot() to kvm_host.h

This moves __gfn_to_memslot() and search_memslots() from kvm_main.c to
kvm_host.h to reduce the code duplication caused by the need for
non-modular code in arch/powerpc/kvm/book3s_hv_rm_mmu.c to call
gfn_to_memslot() in real mode.

Rather than putting gfn_to_memslot() itself in a header, which would
lead to increased code size, this puts __gfn_to_memslot() in a header.
Then, the non-modular uses of gfn_to_memslot() are changed to call
__gfn_to_memslot() instead.  This way there is only one place in the
source code that needs to be changed should the gfn_to_memslot()
implementation need to be modified.

On powerpc, the Book3S HV style of KVM has code that is called from
real mode which needs to call gfn_to_memslot() and thus needs this.
(Module code is allocated in the vmalloc region, which can't be
accessed in real mode.)

With this, we can remove builtin_gfn_to_memslot() from book3s_hv_rm_mmu.c.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: x86 emulator: reject SYSENTER in compatibility mode on AMD guests
Avi Kivity [Wed, 1 Feb 2012 10:23:21 +0000 (12:23 +0200)]
KVM: x86 emulator: reject SYSENTER in compatibility mode on AMD guests

If the guest thinks it's an AMD, it will not have prepared the SYSENTER MSRs,
and if the guest executes SYSENTER in compatibility mode, it will fail.

Detect this condition and #UD instead, like the spec says.
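
A hedged sketch of the emulator check (compatibility mode appears as
32-bit protected mode with EFER.LMA set; vendor_intel() is assumed to
be a CPUID-vendor helper):

    /* in em_sysenter() (sketch) */
    if (ctxt->mode == X86EMUL_MODE_PROT32 && (efer & EFER_LMA) &&
        !vendor_intel(ctxt))
            return emulate_ud(ctxt);        /* #UD, as real AMD parts do */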

Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Don't mistreat edge-triggered INIT IPI as INIT de-assert. (LAPIC)
Julian Stecklina [Mon, 16 Jan 2012 13:02:20 +0000 (14:02 +0100)]
KVM: Don't mistreat edge-triggered INIT IPI as INIT de-assert. (LAPIC)

If the guest programs an IPI with level=0 (de-assert) and trig_mode=0 (edge),
it is erroneously treated as INIT de-assert and ignored, but to quote the
spec: "For this delivery mode [INIT de-assert], the level flag must be set to
0 and trigger mode flag to 1."
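
A minimal sketch of the corrected delivery check in the local APIC
emulation (delivery details elided):

    case APIC_DM_INIT:
            if (!trig_mode || level) {
                    /* asserted or edge-triggered: deliver a real INIT */
            } else {
                    /* level-triggered de-assert: ignore, per the spec */
            }
            break;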

Signed-off-by: Julian Stecklina <js@alien8.de>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: fix error handling for out of range irq
Michael S. Tsirkin [Wed, 18 Jan 2012 18:07:09 +0000 (20:07 +0200)]
KVM: fix error handling for out of range irq

find_index_from_host_irq returns 0 on error,
but callers assume < 0 on error. This should
not matter much: an out-of-range irq should never happen, since
the irq handler was registered with this irq number,
and even if it does happen we just get a spurious MSI-X irq in the guest,
and typically nothing terrible happens.

Still, better to make it consistent.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: SVM: comment nested paging and virtualization module parameters
Davidlohr Bueso [Tue, 17 Jan 2012 13:09:50 +0000 (14:09 +0100)]
KVM: SVM: comment nested paging and virtualization module parameters

Also use true instead of 1 for enabling by default.

Signed-off-by: Davidlohr Bueso <dave@gnu.org>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: MMU: Remove unused kvm parameter from rmap_next()
Takuya Yoshikawa [Tue, 17 Jan 2012 10:52:15 +0000 (19:52 +0900)]
KVM: MMU: Remove unused kvm parameter from rmap_next()

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: MMU: Remove unused kvm parameter from __gfn_to_rmap()
Takuya Yoshikawa [Tue, 17 Jan 2012 10:51:20 +0000 (19:51 +0900)]
KVM: MMU: Remove unused kvm parameter from __gfn_to_rmap()

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: MMU: Remove unused kvm_pte_chain
Takuya Yoshikawa [Tue, 17 Jan 2012 10:50:08 +0000 (19:50 +0900)]
KVM: MMU: Remove unused kvm_pte_chain

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: x86 emulator: Remove byte-sized MOVSX/MOVZX hack
Avi Kivity [Mon, 16 Jan 2012 13:08:45 +0000 (15:08 +0200)]
KVM: x86 emulator: Remove byte-sized MOVSX/MOVZX hack

Currently we treat MOVSX/MOVZX with a byte source as a byte instruction,
and change the destination operand size with a hack.  Change it to be
a word instruction, so the destination receives its natural size, and
change the source to be SrcMem8.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
12 years agoKVM: x86 emulator: add 8-bit memory operands
Avi Kivity [Mon, 16 Jan 2012 13:08:44 +0000 (15:08 +0200)]
KVM: x86 emulator: add 8-bit memory operands

Useful for MOVSX/MOVZX.
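
A minimal sketch of the operand decode (SrcMem8 maps to an OpMem8 case
in the emulator's operand-type switch; surrounding cases elided):

    case OpMem8:
            ctxt->memop.bytes = 1;  /* force a byte-sized memory operand */
            goto mem_common;        /* the rest is shared with OpMem */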

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
12 years agoKVM: PPC: refer to paravirt docs in header file
Scott Wood [Wed, 11 Jan 2012 13:37:35 +0000 (13:37 +0000)]
KVM: PPC: refer to paravirt docs in header file

Instead of keeping separate copies of struct kvm_vcpu_arch_shared (one in
the code, one in the docs) that inevitably fail to be kept in sync
(already sr[] is missing from the doc version), just point to the header
file as the source of documentation on the contents of the magic page.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Rename MMIO register identifiers
Alexander Graf [Sat, 7 Jan 2012 01:07:38 +0000 (02:07 +0100)]
KVM: PPC: Rename MMIO register identifiers

We need the KVM_REG namespace for generic register settings now, so
let's rename the existing users to something different, enabling
us to reuse the namespace for more visible interfaces.

While at it, also move these private constants to a private header.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Move kvm_vcpu_ioctl_[gs]et_one_reg down to platform-specific code
Paul Mackerras [Mon, 12 Dec 2011 12:26:50 +0000 (12:26 +0000)]
KVM: PPC: Move kvm_vcpu_ioctl_[gs]et_one_reg down to platform-specific code

This moves the get/set_one_reg implementation down from powerpc.c into
booke.c, book3s_pr.c and book3s_hv.c.  This avoids #ifdefs in C code,
but more importantly, it fixes a bug on Book3s HV where we were
accessing beyond the end of the kvm_vcpu struct (via the to_book3s()
macro) and corrupting memory, causing random crashes and file corruption.

On Book3s HV we only accept setting the HIOR to zero, since the guest
runs in supervisor mode and its vectors are never offset from zero.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
[agraf update to apply on top of changed ONE_REG patches]
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Add support for explicit HIOR setting
Alexander Graf [Wed, 14 Sep 2011 19:45:23 +0000 (21:45 +0200)]
KVM: PPC: Add support for explicit HIOR setting

Until now, we always set HIOR based on the PVR, but this is just wrong.
Instead, we should be setting HIOR explicitly, so user space can decide
what the initial HIOR value is - just like on real hardware.

We keep the old PVR based way around for backwards compatibility, but
once user space uses the SET_ONE_REG based method, we drop the PVR logic.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Add generic single register ioctls
Alexander Graf [Wed, 14 Sep 2011 08:02:41 +0000 (10:02 +0200)]
KVM: PPC: Add generic single register ioctls

Right now we transfer a static struct every time we want to get or set
registers. Unfortunately, over time we realize that there are more of
these than we thought of before and the extensibility and flexibility of
transferring a full struct every time is limited.

So this is a new approach to the problem. With these new ioctls, we can
get and set a single register that is identified by an ID. This allows for
very precise and limited transmittal of data. When we later realize that
it's a better idea to shove over multiple registers at once, we can reuse
most of the infrastructure and simply implement a GET_MANY_REGS / SET_MANY_REGS
interface.
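
A hedged sketch of the userspace side (the register ID shown is the
HIOR one added by the patch above; error handling kept minimal):

    struct kvm_one_reg reg = {
            .id   = KVM_REG_PPC_HIOR,
            .addr = (__u64)(uintptr_t)&hior,
    };

    if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
            perror("KVM_SET_ONE_REG");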

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Use the vcpu kmem_cache when allocating new VCPUs
Sasha Levin [Wed, 7 Dec 2011 08:24:56 +0000 (10:24 +0200)]
KVM: PPC: Use the vcpu kmem_cache when allocating new VCPUs

Currently the code kzalloc()s new VCPUs instead of using the kmem_cache
which is created when KVM is initialized.

Modify it to allocate VCPUs from that kmem_cache.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: booke: Add booke206 TLB trace
Liu Yu [Tue, 20 Dec 2011 14:42:56 +0000 (14:42 +0000)]
KVM: PPC: booke: Add booke206 TLB trace

The existing kvm_stlb_write/kvm_gtlb_write were a poor match for
the e500/book3e MMU -- mas1 was passed as "tid", mas2 was limited
to "unsigned int" which will be a problem on 64-bit, mas3/7 got
split up rather than treated as a single 64-bit word, etc.

Signed-off-by: Liu Yu <yu.liu@freescale.com>
[scottwood@freescale.com: made mas2 64-bit, and added mas8 init]
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit
Paul Mackerras [Thu, 15 Dec 2011 02:03:22 +0000 (02:03 +0000)]
KVM: PPC: Book3s HV: Implement get_dirty_log using hardware changed bit

This changes the implementation of kvm_vm_ioctl_get_dirty_log() for
Book3s HV guests to use the hardware C (changed) bits in the guest
hashed page table.  Since this makes the implementation quite different
from the Book3s PR case, this moves the existing implementation from
book3s.c to book3s_pr.c and creates a new implementation in book3s_hv.c.
That implementation calls kvmppc_hv_get_dirty_log() to do the actual
work by calling kvm_test_clear_dirty on each page.  It iterates over
the HPTEs, clearing the C bit if set, and returns 1 if any C bit was
set (including the saved C bit in the rmap entry).

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Book3S HV: Use the hardware referenced bit for kvm_age_hva
Paul Mackerras [Thu, 15 Dec 2011 02:02:47 +0000 (02:02 +0000)]
KVM: PPC: Book3S HV: Use the hardware referenced bit for kvm_age_hva

This uses the host view of the hardware R (referenced) bit to speed
up kvm_age_hva() and kvm_test_age_hva().  Instead of removing all
the relevant HPTEs in kvm_age_hva(), we now just reset their R bits
if set.  Also, kvm_test_age_hva() now scans the relevant HPTEs to
see if any of them have R set.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Book3s HV: Maintain separate guest and host views of R and C bits
Paul Mackerras [Thu, 15 Dec 2011 02:02:02 +0000 (02:02 +0000)]
KVM: PPC: Book3s HV: Maintain separate guest and host views of R and C bits

This allows both the guest and the host to use the referenced (R) and
changed (C) bits in the guest hashed page table.  The guest has a view
of R and C that is maintained in the guest_rpte field of the revmap
entry for the HPTE, and the host has a view that is maintained in the
rmap entry for the associated gfn.

Both views are updated from the guest HPT.  If a bit (R or C) is zero
in either view, it will be initially set to zero in the HPTE (or HPTEs),
until set to 1 by hardware.  When an HPTE is removed for any reason,
the R and C bits from the HPTE are ORed into both views.  We have to
be careful to read the R and C bits from the HPTE after invalidating
it, but before unlocking it, in case of any late updates by the hardware.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Book3S HV: Keep HPTE locked when invalidating
Paul Mackerras [Thu, 15 Dec 2011 02:01:10 +0000 (02:01 +0000)]
KVM: PPC: Book3S HV: Keep HPTE locked when invalidating

This reworks the implementations of the H_REMOVE and H_BULK_REMOVE
hcalls to make sure that we keep the HPTE locked and in the reverse-
mapping chain until we have finished invalidating it.  Previously
we would remove it from the chain and unlock it before invalidating
it, leaving a tiny window when the guest could access the page even
though we believe we have removed it from the guest (e.g.,
kvm_unmap_hva() has been called for the page and has found no HPTEs
in the chain).  In addition, we'll need this for future patches where
we will need to read the R and C bits in the HPTE after invalidating
it.

Doing this required restructuring kvmppc_h_bulk_remove() substantially.
Since we want to batch up the tlbies, we now need to keep several
HPTEs locked simultaneously.  In order to avoid possible deadlocks,
we don't spin on the HPTE bitlock for any except the first HPTE in
a batch.  If we can't acquire the HPTE bitlock for the second or
subsequent HPTE, we terminate the batch at that point, do the tlbies
that we have accumulated so far, unlock those HPTEs, and then start
a new batch to do the remaining invalidations.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Add KVM_CAP_NR_VCPUS and KVM_CAP_MAX_VCPUS
Matt Evans [Wed, 7 Dec 2011 16:55:57 +0000 (16:55 +0000)]
KVM: PPC: Add KVM_CAP_NR_VCPUS and KVM_CAP_MAX_VCPUS

PPC KVM lacks these two capabilities, and as such a userland system must assume
a max of 4 VCPUs (following api.txt).  With these, a userland can determine
a more realistic limit.

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Fix vcpu_create dereference before validity check.
Matt Evans [Tue, 6 Dec 2011 21:19:42 +0000 (21:19 +0000)]
KVM: PPC: Fix vcpu_create dereference before validity check.

Fix usage of vcpu struct before check that it's actually valid.

Signed-off-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Allow for read-only pages backing a Book3S HV guest
Paul Mackerras [Mon, 12 Dec 2011 12:38:51 +0000 (12:38 +0000)]
KVM: PPC: Allow for read-only pages backing a Book3S HV guest

With this, if a guest does an H_ENTER with a read/write HPTE on a page
which is currently read-only, we make the actual HPTE inserted be a
read-only version of the HPTE.  We now intercept protection faults as
well as HPTE not found faults, and for a protection fault we work out
whether it should be reflected to the guest (e.g. because the guest HPTE
didn't allow write access to usermode) or handled by switching to
kernel context and calling kvmppc_book3s_hv_page_fault, which will then
request write access to the page and update the actual HPTE.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: Add barriers to allow mmu_notifier_retry to be used locklessly
Paul Mackerras [Mon, 12 Dec 2011 12:37:21 +0000 (12:37 +0000)]
KVM: Add barriers to allow mmu_notifier_retry to be used locklessly

This adds an smp_wmb in kvm_mmu_notifier_invalidate_range_end() and an
smp_rmb in mmu_notifier_retry() so that mmu_notifier_retry() will give
the correct answer when called without kvm->mmu_lock being held.
PowerPC Book3S HV KVM wants to use a bitlock per guest page rather than
a single global spinlock in order to improve the scalability of updates
to the guest MMU hashed page table, and so needs this.
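
A minimal sketch of the pairing (simplified from the kvm_host.h
helper; not verbatim kernel code):

    static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
    {
            if (unlikely(kvm->mmu_notifier_count))
                    return 1;
            /*
             * Pairs with the smp_wmb() in
             * kvm_mmu_notifier_invalidate_range_end(): order the read of
             * mmu_notifier_count against the read of mmu_notifier_seq.
             */
            smp_rmb();
            if (kvm->mmu_notifier_seq != mmu_seq)
                    return 1;
            return 0;
    }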

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Implement MMU notifiers for Book3S HV guests
Paul Mackerras [Mon, 12 Dec 2011 12:38:05 +0000 (12:38 +0000)]
KVM: PPC: Implement MMU notifiers for Book3S HV guests

This adds the infrastructure to enable us to page out pages underneath
a Book3S HV guest, on processors that support virtualized partition
memory, that is, POWER7.  Instead of pinning all the guest's pages,
we now look in the host userspace Linux page tables to find the
mapping for a given guest page.  Then, if the userspace Linux PTE
gets invalidated, kvm_unmap_hva() gets called for that address, and
we replace all the guest HPTEs that refer to that page with absent
HPTEs, i.e. ones with the valid bit clear and the HPTE_V_ABSENT bit
set, which will cause an HDSI when the guest tries to access them.
Finally, the page fault handler is extended to reinstantiate the
guest HPTE when the guest tries to access a page which has been paged
out.

Since we can't intercept the guest DSI and ISI interrupts on PPC970,
we still have to pin all the guest pages on PPC970.  We have a new flag,
kvm->arch.using_mmu_notifiers, that indicates whether we can page
guest pages out.  If it is not set, the MMU notifier callbacks do
nothing and everything operates as before.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Implement MMIO emulation support for Book3S HV guests
Paul Mackerras [Mon, 12 Dec 2011 12:36:37 +0000 (12:36 +0000)]
KVM: PPC: Implement MMIO emulation support for Book3S HV guests

This provides the low-level support for MMIO emulation in Book3S HV
guests.  When the guest tries to map a page which is not covered by
any memslot, that page is taken to be an MMIO emulation page.  Instead
of inserting a valid HPTE, we insert an HPTE that has the valid bit
clear but another hypervisor software-use bit set, which we call
HPTE_V_ABSENT, to indicate that this is an absent page.  An
absent page is treated much like a valid page as far as guest hcalls
(H_ENTER, H_REMOVE, H_READ etc.) are concerned, except of course that
an absent HPTE doesn't need to be invalidated with tlbie since it
was never valid as far as the hardware is concerned.

When the guest accesses a page for which there is an absent HPTE, it
will take a hypervisor data storage interrupt (HDSI) since we now set
the VPM1 bit in the LPCR.  Our HDSI handler for HPTE-not-present faults
looks up the hash table and if it finds an absent HPTE mapping the
requested virtual address, will switch to kernel mode and handle the
fault in kvmppc_book3s_hv_page_fault(), which at present just calls
kvmppc_hv_emulate_mmio() to set up the MMIO emulation.

This is based on an earlier patch by Benjamin Herrenschmidt, but since
heavily reworked.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Maintain a doubly-linked list of guest HPTEs for each gfn
Paul Mackerras [Mon, 12 Dec 2011 12:33:07 +0000 (12:33 +0000)]
KVM: PPC: Maintain a doubly-linked list of guest HPTEs for each gfn

This expands the reverse mapping array to contain two links for each
HPTE which are used to link together HPTEs that correspond to the
same guest logical page.  Each circular list of HPTEs is pointed to
by the rmap array entry for the guest logical page, pointed to by
the relevant memslot.  Links are 32-bit HPT entry indexes rather than
full 64-bit pointers, to save space.  We use 3 of the remaining 32
bits in the rmap array entries as a lock bit, a referenced bit and
a present bit (the present bit is needed since HPTE index 0 is valid).
The bit lock for the rmap chain nests inside the HPTE lock bit.
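
A hedged sketch of the rmap-entry layout described above (flag names
follow the kernel's KVMPPC_RMAP_* convention; the exact bit positions
here are assumptions):

    #define KVMPPC_RMAP_LOCK_BIT    63              /* chain bit lock */
    #define KVMPPC_RMAP_REF_BIT     33              /* referenced */
    #define KVMPPC_RMAP_PRESENT     0x100000000ul   /* index field is valid */
    #define KVMPPC_RMAP_INDEX       0xfffffffful    /* HPT entry index */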

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Allow I/O mappings in memory slots
Paul Mackerras [Mon, 12 Dec 2011 12:32:27 +0000 (12:32 +0000)]
KVM: PPC: Allow I/O mappings in memory slots

This provides for the case where userspace maps an I/O device into the
address range of a memory slot using a VM_PFNMAP mapping.  In that
case, we work out the pfn from vma->vm_pgoff, and record the cache
enable bits from vma->vm_page_prot in two low-order bits in the
slot_phys array entries.  Then, in kvmppc_h_enter() we check that the
cache bits in the HPTE that the guest wants to insert match the cache
bits in the slot_phys array entry.  However, we do allow the guest to
create what it thinks is a non-cacheable or write-through mapping to
memory that is actually cacheable, so that we can use normal system
memory as part of an emulated device later on.  In that case the actual
HPTE we insert is a cacheable HPTE.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Allow use of small pages to back Book3S HV guests
Paul Mackerras [Mon, 12 Dec 2011 12:31:41 +0000 (12:31 +0000)]
KVM: PPC: Allow use of small pages to back Book3S HV guests

This relaxes the requirement that the guest memory be provided as
16MB huge pages, allowing it to be provided as normal memory, i.e.
in pages of PAGE_SIZE bytes (4k or 64k).  To allow this, we index
the kvm->arch.slot_phys[] arrays with a small page index, even if
huge pages are being used, and use the low-order 5 bits of each
entry to store the order of the enclosing page with respect to
normal pages, i.e. log_2(enclosing_page_size / PAGE_SIZE).
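
A minimal sketch of decoding such an entry (the mask matches the 5-bit
order field described above; the function name is illustrative):

    #define KVMPPC_PAGE_ORDER_MASK  0x1f

    /* size, in bytes, of the page enclosing this slot_phys[] entry */
    static inline unsigned long slot_page_size(unsigned long entry)
    {
            return 1ul << (PAGE_SHIFT + (entry & KVMPPC_PAGE_ORDER_MASK));
    }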

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Only get pages when actually needed, not in prepare_memory_region()
Paul Mackerras [Mon, 12 Dec 2011 12:31:00 +0000 (12:31 +0000)]
KVM: PPC: Only get pages when actually needed, not in prepare_memory_region()

This removes the code from kvmppc_core_prepare_memory_region() that
looked up the VMA for the region being added and called hva_to_page
to get the pfns for the memory.  We have no guarantee that there will
be anything mapped there at the time of the KVM_SET_USER_MEMORY_REGION
ioctl call; userspace can do that ioctl and then map memory into the
region later.

Instead we defer looking up the pfn for each memory page until it is
needed, which generally means when the guest does an H_ENTER hcall on
the page.  Since we can't call get_user_pages in real mode, if we don't
already have the pfn for the page, kvmppc_h_enter() will return
H_TOO_HARD and we then call kvmppc_virtmode_h_enter() once we get back
to kernel context.  That calls kvmppc_get_guest_page() to get the pfn
for the page, and then calls back to kvmppc_h_enter() to redo the HPTE
insertion.

When the first vcpu starts executing, we need to have the RMO or VRMA
region mapped so that the guest's real mode accesses will work.  Thus
we now have a check in kvmppc_vcpu_run() to see if the RMO/VRMA is set
up and if not, call kvmppc_hv_setup_rma().  It checks if the memslot
starting at guest physical 0 now has RMO memory mapped there; if so it
sets it up for the guest, otherwise on POWER7 it sets up the VRMA.
The function that does that, kvmppc_map_vrma, is now a bit simpler,
as it calls kvmppc_virtmode_h_enter instead of creating the HPTE itself.

Since we are now potentially updating entries in the slot_phys[]
arrays from multiple vcpu threads, we now have a spinlock protecting
those updates to ensure that we don't lose track of any references
to pages.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Make the H_ENTER hcall more reliable
Paul Mackerras [Mon, 12 Dec 2011 12:30:16 +0000 (12:30 +0000)]
KVM: PPC: Make the H_ENTER hcall more reliable

At present, our implementation of H_ENTER only makes one try at locking
each slot that it looks at, and doesn't even retry the ldarx/stdcx.
atomic update sequence that it uses to attempt to lock the slot.  Thus
it can return the H_PTEG_FULL error unnecessarily, particularly when
the H_EXACT flag is set, meaning that the caller wants a specific PTEG
slot.

This improves the situation by making a second pass when no free HPTE
slot is found, where we spin until we succeed in locking each slot in
turn and then check whether it is full while we hold the lock.  If the
second pass fails, then we return H_PTEG_FULL.

This also moves lock_hpte to a header file (since later commits in this
series will need to use it from other source files) and renames it to
try_lock_hpte, which is a somewhat less misleading name.
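
A hedged sketch of the second pass (HPTE_V_HVLOCK is the hypervisor
lock bit; barriers, tlbie handling and error paths are elided):

    for (i = 0; i < 8; ++i, hpte += 2) {
            while (!try_lock_hpte(hpte, HPTE_V_HVLOCK))
                    cpu_relax();            /* spin until the slot is ours */
            if (!(hpte[0] & HPTE_V_VALID))
                    break;                  /* free slot found, still locked */
            hpte[0] &= ~HPTE_V_HVLOCK;      /* occupied: unlock, try the next */
    }
    if (i == 8)
            return H_PTEG_FULL;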

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Add an interface for pinning guest pages in Book3s HV guests
Paul Mackerras [Mon, 12 Dec 2011 12:28:55 +0000 (12:28 +0000)]
KVM: PPC: Add an interface for pinning guest pages in Book3s HV guests

This adds two new functions, kvmppc_pin_guest_page() and
kvmppc_unpin_guest_page(), and uses them to pin the guest pages where
the guest has registered areas of memory for the hypervisor to update,
(i.e. the per-cpu virtual processor areas, SLB shadow buffers and
dispatch trace logs) and then unpin them when they are no longer
required.

Although it is not strictly necessary to pin the pages at this point,
since all guest pages are already pinned, later commits in this series
will mean that guest pages aren't all pinned.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Keep page physical addresses in per-slot arrays
Paul Mackerras [Mon, 12 Dec 2011 12:28:21 +0000 (12:28 +0000)]
KVM: PPC: Keep page physical addresses in per-slot arrays

This allocates an array for each memory slot that is added to store
the physical addresses of the pages in the slot.  This array is
vmalloc'd and accessed in kvmppc_h_enter using real_vmalloc_addr().
This allows us to remove the ram_pginfo field from the kvm_arch
struct, and removes the 64GB guest RAM limit that we had.

We use the low-order bits of the array entries to store a flag
indicating that we have done get_page on the corresponding page,
and therefore need to call put_page when we are finished with the
page.  Currently this is set for all pages except those in our
special RMO regions.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Keep a record of HV guest view of hashed page table entries
Paul Mackerras [Mon, 12 Dec 2011 12:27:39 +0000 (12:27 +0000)]
KVM: PPC: Keep a record of HV guest view of hashed page table entries

This adds an array that parallels the guest hashed page table (HPT),
that is, it has one entry per HPTE, used to store the guest's view
of the second doubleword of the corresponding HPTE.  The first
doubleword in the HPTE is the same as the guest's idea of it, so we
don't need to store a copy, but the second doubleword in the HPTE has
the real page number rather than the guest's logical page number.
This allows us to remove the back_translate() and reverse_xlate()
functions.

This "reverse mapping" array is vmalloc'd, meaning that to access it
in real mode we have to walk the kernel's page tables explicitly.
That is done by the new real_vmalloc_addr() function.  (In fact this
returns an address in the linear mapping, so the result is usable
both in real mode and in virtual mode.)

There are also some minor cleanups here: moving the definitions of
HPT_ORDER etc. to a header file and defining HPT_NPTE for HPT_NPTEG << 3.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Make wakeups work again for Book3S HV guests
Paul Mackerras [Mon, 12 Dec 2011 12:24:48 +0000 (12:24 +0000)]
KVM: PPC: Make wakeups work again for Book3S HV guests

When commit f43fdc15fa ("KVM: PPC: booke: Improve timer register
emulation") factored out some code in arch/powerpc/kvm/powerpc.c
into a new helper function, kvm_vcpu_kick(), an error crept in
which causes Book3s HV guest vcpus to stall.  This fixes it.
On POWER7 machines, guest vcpus are grouped together into virtual
CPU cores that share a single waitqueue, so it's important to use
vcpu->arch.wqp rather than &vcpu->wq.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Avoid patching paravirt template code
Liu Yu-B13201 [Thu, 1 Dec 2011 20:22:53 +0000 (20:22 +0000)]
KVM: PPC: Avoid patching paravirt template code

Currently we patch all of the code, including the paravirt template
code.  This isn't safe for the scratch area and has an impact on performance.

Signed-off-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: e500: use hardware hint when loading TLB0 entries
Scott Wood [Tue, 29 Nov 2011 10:40:23 +0000 (10:40 +0000)]
KVM: PPC: e500: use hardware hint when loading TLB0 entries

The hardware maintains a per-set next victim hint.  Using this
reduces conflicts, especially on e500v2 where a single guest
TLB entry is mapped to two shadow TLB entries (user and kernel).
We want those two entries to go to different TLB ways.

sesel is now only used for TLB1.

Reported-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: e500: Fix TLBnCFG in KVM_CONFIG_TLB
Scott Wood [Mon, 28 Nov 2011 15:20:02 +0000 (15:20 +0000)]
KVM: PPC: e500: Fix TLBnCFG in KVM_CONFIG_TLB

The associativity, not just total size, can differ from the host
hardware.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Book3S: PR: Fix signal check race
Alexander Graf [Mon, 19 Dec 2011 12:36:55 +0000 (13:36 +0100)]
KVM: PPC: Book3S: PR: Fix signal check race

As Scott put it:

> If we get a signal after the check, we want to be sure that we don't
> receive the reschedule IPI until after we're in the guest, so that it
> will cause another signal check.

we need to have interrupts disabled from the point we do signal_check()
all the way until we actually enter the guest.

This patch fixes potential signal loss races.

Reported-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: align vcpu_kick with x86
Alexander Graf [Fri, 9 Dec 2011 14:20:46 +0000 (15:20 +0100)]
KVM: PPC: align vcpu_kick with x86

Our vcpu kick implementation differs a bit from x86's, which resulted in us
not disabling preemption during the kick. Get it a bit closer to what x86 does.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Use get/set for to_svcpu to help preemption
Alexander Graf [Fri, 9 Dec 2011 13:44:13 +0000 (14:44 +0100)]
KVM: PPC: Use get/set for to_svcpu to help preemption

When running the 64-bit Book3s PR code without CONFIG_PREEMPT_NONE, we were
doing a few things wrong, most notably accessing PACA fields without making
sure that the pointers stay stable across the access (preempt_disable()).

This patch moves to_svcpu towards a get/put model which allows us to disable
preemption while accessing the shadow vcpu fields in the PACA. That way we
can run preemptible and everyone's happy!

Reported-by: Jörg Sommer <joerg@alea.gnuu.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Book3s: PR: No irq_disable in vcpu_run
Alexander Graf [Fri, 9 Dec 2011 14:47:53 +0000 (15:47 +0100)]
KVM: PPC: Book3s: PR: No irq_disable in vcpu_run

Somewhere during merges we went from

  local_irq_enable()
  foo();
  local_irq_disable()

to always keeping irqs enabled during that part. However, we now
have the following code:

  foo();
  local_irq_disable()

which disables interrupts without the surrounding code enabling them
again! So let's remove that disable and be happy.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Book3s: PR: Disable preemption in vcpu_run
Alexander Graf [Fri, 9 Dec 2011 14:46:21 +0000 (15:46 +0100)]
KVM: PPC: Book3s: PR: Disable preemption in vcpu_run

When entering the guest, we want to make sure we're not getting preempted
away, so let's disable preemption on entry, but enable it again while handling
guest exits.

Reported-by: Jörg Sommer <joerg@alea.gnuu.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: booke: Improve timer register emulation
Scott Wood [Thu, 17 Nov 2011 12:39:59 +0000 (12:39 +0000)]
KVM: PPC: booke: Improve timer register emulation

Decrementers are now properly driven by TCR/TSR, and the guest
has full read/write access to these registers.

The decrementer keeps ticking (and setting the TSR bit) regardless of
whether decrementer interrupts are enabled via TCR.

The decrementer stops at zero, rather than going negative.

Decrementers (and FITs, once implemented) are delivered as
level-triggered interrupts -- dequeued when the TSR bit is cleared, not
on delivery.
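
In sketch form, the level-triggered behaviour boils down to the following
(bit names per the booke spec; the helper names are made up for
illustration):

  static void sync_dec_interrupt(struct kvm_vcpu *vcpu)
  {
      if ((vcpu->arch.tsr & TSR_DIS) && (vcpu->arch.tcr & TCR_DIE))
          queue_dec_interrupt(vcpu);    /* level is asserted */
      else
          dequeue_dec_interrupt(vcpu);  /* lowered when TSR bit clears */
  }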

Signed-off-by: Liu Yu <yu.liu@freescale.com>
[scottwood@freescale.com: significant changes]
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Paravirtualize SPRG4-7, ESR, PIR, MASn
Scott Wood [Wed, 9 Nov 2011 00:23:30 +0000 (18:23 -0600)]
KVM: PPC: Paravirtualize SPRG4-7, ESR, PIR, MASn

This allows additional registers to be accessed by the guest
in PR-mode KVM without trapping.

SPRG4-7 are readable from userspace.  On booke, KVM will sync
these registers when it enters the guest, so that accesses from
guest userspace will work.  The guest kernel, OTOH, must consistently
use either the real registers or the shared area between exits.  This
also applies to the already-paravirted SPRG3.

On non-booke, it's not clear to what extent SPRG4-7 are supported
(they're not architected for book3s, but exist on at least some classic
chips).  They are copied in the get/set regs ioctls, but I do not see any
non-booke emulation.  I also do not see any syncing with real registers
(in PR-mode) including the user-readable SPRG3.  This patch should not
make that situation any worse.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: booke: Paravirtualize wrtee
Scott Wood [Wed, 9 Nov 2011 00:23:28 +0000 (18:23 -0600)]
KVM: PPC: booke: Paravirtualize wrtee

Also fix wrteei 1 paravirt to check for a pending interrupt.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: booke: Fix int_pending calculation for MSR[EE] paravirt
Scott Wood [Wed, 9 Nov 2011 00:23:27 +0000 (18:23 -0600)]
KVM: PPC: booke: Fix int_pending calculation for MSR[EE] paravirt

int_pending was only being lowered if a bit in pending_exceptions
was cleared during exception delivery -- but for interrupts, we clear
it during IACK/TSR emulation.  This caused paravirt for enabling
MSR[EE] to be ineffective.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: booke: Check for MSR[WE] in prepare_to_enter
Scott Wood [Wed, 9 Nov 2011 00:23:25 +0000 (18:23 -0600)]
KVM: PPC: booke: Check for MSR[WE] in prepare_to_enter

This prevents us from inappropriately blocking in a KVM_SET_REGS
ioctl -- the MSR[WE] will take effect when the guest is next entered.

It also causes SRR1[WE] to be set when we enter the guest's interrupt
handler, which is what e500 hardware is documented to do.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Move prepare_to_enter call site into subarch code
Scott Wood [Wed, 9 Nov 2011 00:23:23 +0000 (18:23 -0600)]
KVM: PPC: Move prepare_to_enter call site into subarch code

This function should be called with interrupts disabled, to avoid
a race where an exception is delivered after we check, but the
resched kick is received before we disable interrupts (and thus doesn't
actually trigger the exit code that would recheck exceptions).

booke already does this properly in the lightweight exit case, but
not on initial entry.

For now, move the call of prepare_to_enter into subarch-specific code so
that booke can do the right thing here.  Ideally book3s would do the same
thing, but I'm having a hard time seeing where it does any interrupt
disabling of this sort (plus it has several additional call sites), so
I'm deferring the book3s fix to someone more familiar with that code.
book3s behavior should be unchanged by this patch.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Rename deliver_interrupts to prepare_to_enter
Scott Wood [Wed, 9 Nov 2011 00:23:20 +0000 (18:23 -0600)]
KVM: PPC: Rename deliver_interrupts to prepare_to_enter

This function also updates paravirt int_pending, so rename it to make it
more obvious that this is a collection of checks run prior to
(re)entering a guest.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: booke: check for signals in kvmppc_vcpu_run
Scott Wood [Tue, 8 Nov 2011 22:11:59 +0000 (16:11 -0600)]
KVM: PPC: booke: check for signals in kvmppc_vcpu_run

Currently we check prior to returning from a lightweight exit,
but not prior to initial entry.

book3s already does a similar test.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: booke: Do not start decrementer when SPRN_DEC is set to 0
Bharat Bhushan [Mon, 31 Oct 2011 08:52:08 +0000 (14:22 +0530)]
KVM: PPC: booke: Do not start decrementer when SPRN_DEC is set to 0

As per the specification, the decrementer interrupt does not happen when DEC
is written with 0. Also, when DEC is zero, no decrementer is running, so we
should not start the hrtimer for the decrementer when DEC = 0.
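
In the emulation path this is just an early-out, roughly (the helper name
is illustrative):

  vcpu->arch.dec = new_dec;
  if (new_dec == 0)
      return;                   /* DEC=0: the decrementer is stopped */
  start_dec_hrtimer(vcpu);      /* otherwise arm the timer as before */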

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: Fix DEC truncation for greater than 0xffff_ffff/1000
Bharat Bhushan [Wed, 19 Oct 2011 04:16:06 +0000 (09:46 +0530)]
KVM: PPC: Fix DEC truncation for greater than 0xffff_ffff/1000

kvmppc_emulate_dec() uses dec_nsec of type unsigned long and does the
calculation below:

        dec_nsec = vcpu->arch.dec;
        dec_nsec *= 1000;

This truncates if the DEC value "vcpu->arch.dec" is greater than
0xffff_ffff/1000. For example, with tb_ticks_per_usec = 4a we cannot set a
decrementer of more than ~58ms.
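
A standalone illustration of the overflow (the value below is just an
example above the 0xffff_ffff/1000 threshold):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t dec = 5000000;                    /* > 0xffffffff/1000 */
      uint32_t wrapped = dec * 1000u;            /* 32-bit math wraps */
      uint64_t widened = (uint64_t)dec * 1000u;  /* fix: widen first */

      printf("wrapped=%u widened=%llu\n",
             wrapped, (unsigned long long)widened);
      return 0;
  }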

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Acked-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoPPC: Fix race in mtmsr paravirt implementation
Bharat Bhushan [Thu, 13 Oct 2011 09:47:08 +0000 (15:17 +0530)]
PPC: Fix race in mtmsr paravirt implementation

The current implementations of mtmsr and mtmsrd are racy in that they do:

  * check (int_pending == 0)
  ---> host sets int_pending = 1 <---
  * write shared page
  * done

while instead we should check for int_pending after the shared page is written.
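
The fixed ordering, for comparison:

  * write shared page
  * check (int_pending == 0)
  * done

so an int_pending set by the host while we update the shared page is
still seen before we continue.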

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: E500: Support hugetlbfs
Alexander Graf [Mon, 19 Sep 2011 23:31:48 +0000 (01:31 +0200)]
KVM: PPC: E500: Support hugetlbfs

With hugetlbfs support emerging on e500, we should also support KVM
backing its guest memory with it.

This patch adds support for hugetlbfs into the e500 shadow mmu code.

Signed-off-by: Alexander Graf <agraf@suse.de>
Acked-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: e500: Don't hardcode PIR=0
Scott Wood [Fri, 2 Sep 2011 22:39:37 +0000 (17:39 -0500)]
KVM: PPC: e500: Don't hardcode PIR=0

The hardcoded behavior prevents proper SMP support.

User space should specify the vcpu's PIR as the vcpu id.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: e500: tlbsx: fix tlb0 esel
Scott Wood [Thu, 18 Aug 2011 20:25:23 +0000 (15:25 -0500)]
KVM: PPC: e500: tlbsx: fix tlb0 esel

It should contain the way, not the absolute TLB0 index.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: e500: MMU API
Scott Wood [Thu, 18 Aug 2011 20:25:21 +0000 (15:25 -0500)]
KVM: PPC: e500: MMU API

This implements a shared-memory API for giving host userspace access to
the guest's TLB.
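
Userspace wires this up along the following lines (struct layout and cap
name as later documented for KVM_CAP_SW_TLB; treat the details here as
best-effort, not authoritative):

  struct kvm_config_tlb cfg = {
      .mmu_type  = KVM_MMU_FSL_BOOKE_NOHV,
      .array     = (uintptr_t)tlb_array,   /* shared guest-TLB array */
      .array_len = tlb_array_size,
      .params    = (uintptr_t)&params,     /* per-mmu-type parameters */
  };
  struct kvm_enable_cap cap = {
      .cap     = KVM_CAP_SW_TLB,
      .args[0] = (uintptr_t)&cfg,
  };
  ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);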

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: e500: clear up confusion between host and guest entries
Scott Wood [Thu, 18 Aug 2011 20:25:18 +0000 (15:25 -0500)]
KVM: PPC: e500: clear up confusion between host and guest entries

Split out the portions of tlbe_priv that should be associated with host
entries into tlbe_ref.  Base victim selection on the number of hardware
entries, not guest entries.

For TLB1, where one guest entry can be mapped by multiple host entries,
we use the host tlbe_ref for tracking page references.  For the guest
TLB0 entries, we still track it with gtlb_priv, to avoid having to
retranslate if the entry is evicted from the host TLB but not the
guest TLB.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: e500: Eliminate preempt_disable in local_sid_destroy_all
Scott Wood [Thu, 18 Aug 2011 20:25:16 +0000 (15:25 -0500)]
KVM: PPC: e500: Eliminate preempt_disable in local_sid_destroy_all

The only place it makes sense to call this function already needs
to have preemption disabled.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: PPC: e500: don't translate gfn to pfn with preemption disabled
Scott Wood [Thu, 18 Aug 2011 20:25:14 +0000 (15:25 -0500)]
KVM: PPC: e500: don't translate gfn to pfn with preemption disabled

Delay allocation of the shadow pid until we're ready to disable
preemption and write the entry.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: provide guest access registers via kvm_run
Christian Borntraeger [Wed, 11 Jan 2012 10:20:33 +0000 (11:20 +0100)]
KVM: s390: provide guest access registers via kvm_run

This patch adds the access registers to the kvm_run structure.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: provide general purpose guest registers via kvm_run
Christian Borntraeger [Wed, 11 Jan 2012 10:20:32 +0000 (11:20 +0100)]
KVM: s390: provide general purpose guest registers via kvm_run

This patch adds the general purpose registers to the kvm_run structure.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: provide the prefix register via kvm_run
Christian Borntraeger [Wed, 11 Jan 2012 10:20:31 +0000 (11:20 +0100)]
KVM: s390: provide the prefix register via kvm_run

Add the prefix register to the synced register field in kvm_run.
While we need the prefix register read-only most of the time, this
patch also adds handling for guest dirtying of the prefix register.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: provide synchronous registers in kvm_run
Christian Borntraeger [Wed, 11 Jan 2012 10:20:30 +0000 (11:20 +0100)]
KVM: provide synchronous registers in kvm_run

On some cpus the overhead of virtualization instructions is in the same
range as that of a system call. Having to call multiple ioctls to get and
set registers makes certain userspace-handled exits more expensive than
necessary. Let's provide a section in kvm_run that works as a shared save
area for guest registers.

We also provide two 64-bit flag fields (architecture specific) that
specify:

1. which parts of these fields are valid.
2. which registers were modified by userspace.

Each bit in these flag fields defines a group of registers (like
general purpose) or a single register.
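
In struct kvm_run this ends up looking roughly like the fragment below;
the names follow the upstream uapi, but the padding size is an assumption:

  /* appended to struct kvm_run */
  __u64 kvm_valid_regs;   /* which synced fields the kernel filled in */
  __u64 kvm_dirty_regs;   /* which fields userspace modified */
  union {
      struct kvm_sync_regs regs;  /* per-architecture register block */
      char padding[1024];
  } s;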

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: rework code that sets the prefix
Christian Borntraeger [Wed, 11 Jan 2012 10:19:32 +0000 (11:19 +0100)]
KVM: s390: rework code that sets the prefix

There are several places in the kvm module that set the prefix register.
Since we need to flush the cpu, let's combine this operation into a helper
function. This helper will also explicitly mask out the unused bits.
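
A minimal sketch of such a helper (the sie_block field names and the
exact mask are assumptions here):

  static inline void kvm_s390_set_prefix(struct kvm_vcpu *vcpu, u32 prefix)
  {
      vcpu->arch.sie_block->prefix = prefix & 0x7fffe000u; /* drop unused bits */
      vcpu->arch.sie_block->ihcpu  = 0xffff;               /* flush the cpu */
  }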

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: SVM: Add support for AMD's OSVW feature in guests
Boris Ostrovsky [Mon, 9 Jan 2012 19:00:35 +0000 (14:00 -0500)]
KVM: SVM: Add support for AMD's OSVW feature in guests

In some cases guests should not provide workarounds for errata even when the
physical processor is affected. For example, because of erratum 400 on family
10h processors a Linux guest will read an MSR (resulting in a VMEXIT) before
going to idle in order to avoid getting stuck in a non-C0 state. This is not
necessary: HLT and IO instructions are intercepted, so there is no reason
for the erratum 400 workaround in the guest.

This patch allows us to present a guest with certain errata as fixed,
regardless of the state of actual hardware.
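
Guests probe OSVW via two MSRs (register numbers per AMD documentation);
below is a sketch of the guest-visible read side, with the
vcpu->arch.osvw bookkeeping being an assumption about the shape:

  #define MSR_AMD64_OSVW_ID_LENGTH 0xc0010140
  #define MSR_AMD64_OSVW_STATUS    0xc0010141

  static int get_osvw_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
  {
      switch (msr) {
      case MSR_AMD64_OSVW_ID_LENGTH:
          *data = vcpu->arch.osvw.length;   /* assumed field */
          return 0;
      case MSR_AMD64_OSVW_STATUS:
          *data = vcpu->arch.osvw.status;   /* assumed field */
          return 0;
      }
      return 1;  /* not an OSVW MSR */
  }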

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: MMU: unnecessary NX state assignment
Davidlohr Bueso [Fri, 6 Jan 2012 14:06:18 +0000 (15:06 +0100)]
KVM: MMU: unnecessary NX state assignment

We can remove the first ->nx state assignment since it is assigned again afterwards anyway.

Signed-off-by: Davidlohr Bueso <dave@gnu.org>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: Fix return code for unknown ioctl numbers
Carsten Otte [Wed, 4 Jan 2012 09:25:30 +0000 (10:25 +0100)]
KVM: s390: Fix return code for unknown ioctl numbers

This patch fixes the return code of kvm_arch_vcpu_ioctl in case of an
unknown ioctl number.
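
The conventional errno for an unrecognized ioctl number is ENOTTY rather
than EINVAL, so the default case presumably ends up as:

  default:
      r = -ENOTTY;  /* unknown ioctl number */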

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: ucontrol: announce capability for user controlled vms
Carsten Otte [Wed, 4 Jan 2012 09:25:29 +0000 (10:25 +0100)]
KVM: s390: ucontrol: announce capability for user controlled vms

This patch announces a new capability KVM_CAP_S390_UCONTROL that
indicates that kvm can now support virtual machines that are
controlled by userspace.
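
Userspace can gate on the capability in the usual way; the
KVM_VM_S390_UCONTROL vm type used below is an assumption about how such
a vm is then created:

  if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_S390_UCONTROL) > 0)
      vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, KVM_VM_S390_UCONTROL);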

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
12 years agoKVM: s390: fix assumption for KVM_MAX_VCPUS
Carsten Otte [Wed, 4 Jan 2012 09:25:28 +0000 (10:25 +0100)]
KVM: s390: fix assumption for KVM_MAX_VCPUS

This patch fixes the definitions of the idle_mask and the local_int array
in kvm_s390_float_interrupt. The previous definitions hardcoded a maximum
of 64 cpus instead of using KVM_MAX_VCPUS.
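
The corrected definitions look roughly like this (the idle_mask sizing
expression is an assumption about how the bitmask is dimensioned):

  struct kvm_s390_float_interrupt {
      /* ... other fields ... */
      unsigned long idle_mask[(KVM_MAX_VCPUS + sizeof(long) - 1)
                              / sizeof(long)];
      struct kvm_s390_local_interrupt *local_int[KVM_MAX_VCPUS];
  };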

Signed-off-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>