Merge branch 'drm-intel-next' of git://people.freedesktop.org/~danvet/drm-intel into...
author    Dave Airlie <airlied@redhat.com>
          Thu, 12 Apr 2012 09:27:01 +0000 (10:27 +0100)
committer Dave Airlie <airlied@redhat.com>
          Thu, 12 Apr 2012 09:27:01 +0000 (10:27 +0100)
Daniel Vetter wrote:
"First pull request for 3.5-next, slightly larger than usual because new
things kept coming in since the last pull for 3.4.
Highlights:
- first batch of hw enablement for vlv (Jesse et al) and hsw (Eugeni). pci
 ids are not yet added, and there's still quite a few patches to merge
 (mostly modesetting). To make QA easier I've decided to merge this stuff
 in pieces.
- loads of cleanups and prep patches spurred by the above. Especially vlv
 is a real frankenstein chip, but also hsw is stretching our driver's
 code design. Expect more to come in this area for 3.5.
- more gmbus fixes, cleanups and improvements by Daniel Kurtz. Again,
 there are more patches needed (and some already queued up), but I wanted
 to split this a bit for better testing.
- pwrite/pread rework and retuning. This series has been in the works for
 a few months already and a lot of i-g-t tests have been created for it.
 Now it's finally ready to be merged.  Note that one patch in this series
 touches include/pagemap.h; that patch is acked-by akpm.
- mappable-pressure reduction and relocation throughput improvements from
 Chris.
- mmap offset exhaustion mitigation by Chris Wilson.
- a start at figuring out which codepaths in our messy dri1/ums+gem/kms
 driver we actually need to support by bailing out of unsupported cases.
 The driver now refuses to load without kms on gen6+ and disallows a few
 ioctls that userspace never used in certain cases. More of this will
 definitely come.
- More decoupling of global gtt and ppgtt.
- Improved dual-link lvds detection by Takashi Iwai.
- Shut up the compiler, plus fix the fallout (Ben)
- Inverted panel brightness handling (it's mostly Acer that manages to
 break things this way).
- Small fixlets, adjustments, and some minor things to help debugging.

Regression-wise QA reported quite a few issues on ivb, but all of them
turned out to be hw stability issues which are already fixed in
drm-intel-fixes (QA runs the nightly regression tests on -next alone,
without -fixes automatically merged in). There's still one issue open on
snb: it looks like occlusion query writes are not quite as cache coherent
as we've expected. With some of the pwrite adjustments we can now
reliably hit this. A kernel workaround for it is in the works."

* 'drm-intel-next' of git://people.freedesktop.org/~danvet/drm-intel: (101 commits)
  drm/i915: VCS is not the last ring
  drm/i915: Add a dual link lvds quirk for MacBook Pro 8,2
  drm/i915: make quirks more verbose
  drm/i915: dump the DMA fetch addr register on pre-gen6
  drm/i915/sdvo: Include YRPB as an additional TV output type
  drm/i915: disallow gem init ioctl on ilk
  drm/i915: refuse to load on gen6+ without kms
  drm/i915: extract gt interrupt handler
  drm/i915: use render gen to switch ring irq functions
  drm/i915: rip out old HWSTAM missed irq WA for vlv
  drm/i915: open code gen6+ ring irqs
  drm/i915: ring irq cleanups
  drm/i915: add SFUSE_STRAP registers for digital port detection
  drm/i915: add WM_LINETIME registers
  drm/i915: add WRPLL clocks
  drm/i915: add LCPLL control registers
  drm/i915: add SSC offsets for SBI access
  drm/i915: add port clock selection support for HSW
  drm/i915: add S PLL control
  drm/i915: add PIXCLK_GATE register
  ...

Conflicts:
drivers/char/agp/intel-agp.h
drivers/char/agp/intel-gtt.c
drivers/gpu/drm/i915/i915_debugfs.c

19 files changed:
Documentation/kernel-parameters.txt
drivers/char/agp/intel-agp.h
drivers/char/agp/intel-gtt.c
drivers/gpu/drm/drm_cache.c
drivers/gpu/drm/i915/i915_debugfs.c
drivers/gpu/drm/i915/i915_dma.c
drivers/gpu/drm/i915/i915_drv.c
drivers/gpu/drm/i915/i915_drv.h
drivers/gpu/drm/i915/i915_gem.c
drivers/gpu/drm/i915/i915_gem_execbuffer.c
drivers/gpu/drm/i915/i915_gem_gtt.c
drivers/gpu/drm/i915/i915_reg.h
drivers/gpu/drm/i915/intel_bios.c
drivers/gpu/drm/i915/intel_display.c
drivers/gpu/drm/i915/intel_drv.h
drivers/gpu/drm/i915/intel_lvds.c
drivers/gpu/drm/i915/intel_modes.c
drivers/gpu/drm/i915/intel_ringbuffer.c
include/drm/drmP.h

@@@ -713,21 -713,6 +713,21 @@@ bytes respectively. Such letter suffixe
                        The filter can be disabled or changed to another
                        driver later using sysfs.
  
 +      drm_kms_helper.edid_firmware=[<connector>:]<file>
 +                      Broken monitors, graphic adapters and KVMs may
 +                      send no or incorrect EDID data sets. This parameter
 +                      allows specifying an EDID data set in the
 +                      /lib/firmware directory that is used instead.
 +                      Generic built-in EDID data sets are used, if one of
 +                      edid/1024x768.bin, edid/1280x1024.bin,
 +                      edid/1680x1050.bin, or edid/1920x1080.bin is given
 +                      and no file with the same name exists. Details and
 +                      instructions on how to build your own EDID data are
 +                      available in Documentation/EDID/HOWTO.txt. An EDID
 +                      data set will only be used for a particular connector
 +                      if its name and a colon are prepended to the EDID
 +                      name.
 +
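A quick illustrative example (the connector name DVI-I-1 is an assumption;
use the name your kernel reports): booting with

        drm_kms_helper.edid_firmware=DVI-I-1:edid/1280x1024.bin

makes only that connector use the built-in 1280x1024 EDID data set.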
        dscc4.setup=    [NET]
  
        earlycon=       [KNL] Output early console device and options.
                             controller
        i8042.nopnp     [HW] Don't use ACPIPnP / PnPBIOS to discover KBD/AUX
                             controllers
 -      i8042.notimeout [HW] Ignore timeout condition signalled by conroller
 +      i8042.notimeout [HW] Ignore timeout condition signalled by controller
        i8042.reset     [HW] Reset the controller during init and cleanup
        i8042.unlock    [HW] Unlock (ignore) the keylock
  
        i8k.restricted  [HW] Allow controlling fans only if SYS_ADMIN
                        capability is set.
  
+       i915.invert_brightness=
+                       [DRM] Invert the sense of the variable that is used to
+                       set the brightness of the panel backlight. Normally a
+                       brightness value of 0 indicates backlight switched off,
+                       and the maximum of the brightness value sets the backlight
+                       to maximum brightness. If this parameter is set to 0
+                       (default) and the machine requires it, or this parameter
+                       is set to 1, a brightness value of 0 sets the backlight
+                       to maximum brightness, and the maximum of the brightness
+                       value switches the backlight off.
+                       -1 -- never invert brightness
+                        0 -- machine default
+                        1 -- force brightness inversion
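A minimal sketch of how a driver can honour this option when computing the
backlight value (the helper and variable names here are assumptions for
illustration, not necessarily the merged code):

        /* Sketch: apply i915.invert_brightness before touching the hw.
         * QUIRK_INVERT_BRIGHTNESS would be set per-machine via a quirk
         * table, covering the parameter's "machine default" case. */
        static u32 compute_brightness(struct drm_device *dev, u32 val)
        {
                struct drm_i915_private *dev_priv = dev->dev_private;

                if (i915_invert_brightness > 0 ||
                    (i915_invert_brightness == 0 &&
                     (dev_priv->quirks & QUIRK_INVERT_BRIGHTNESS)))
                        return get_max_brightness(dev) - val; /* hypothetical helper */

                return val;
        }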
        icn=            [HW,ISDN]
                        Format: <io>[,<membase>[,<icn_id>[,<icn_id2>]]]
  
                        no_x2apic_optout
                                BIOS x2APIC opt-out request will be ignored
  
 -      inttest=        [IA-64]
 -
        iomem=          Disable strict checking of access to MMIO memory
                strict  regions from userspace.
                relaxed
                        of returning the full 64-bit number.
                        The default is to return 64-bit inode numbers.
  
 +      nfs.max_session_slots=
 +                      [NFSv4.1] Sets the maximum number of session slots
 +                      the client will attempt to negotiate with the server.
 +                      This limits the number of simultaneous RPC requests
 +                      that the client can send to the NFSv4.1 server.
 +                      Note that there is little point in setting this
 +                      value higher than the max_tcp_slot_table_limit.
 +
        nfs.nfs4_disable_idmapping=
                        [NFSv4] When set to the default of '1', this option
                        ensures that both the RPC level authentication
                        back to using the idmapper.
                        To turn off this behaviour, set the value to '0'.
  
 +      nfs.send_implementation_id =
 +                      [NFSv4.1] Send client implementation identification
 +                      information in exchange_id requests.
 +                      If zero, no implementation identification information
 +                      will be sent.
 +                      The default is to send the implementation identification
 +                      information.
 +
 +      nfsd.nfs4_disable_idmapping=
 +                      [NFSv4] When set to the default of '1', the NFSv4
 +                      server will return only numeric uids and gids to
 +                      clients using auth_sys, and will accept numeric uids
 +                      and gids from such clients.  This is intended to ease
 +                      migration from NFSv2/v3.
 +
 +      objlayoutdriver.osd_login_prog=
 +                      [NFS] [OBJLAYOUT] sets the pathname to the program which
 +                      is used to automatically discover and log in to new
 +                      osd-targets. Please see:
 +                      Documentation/filesystems/pnfs.txt for more explanations
 +
        nmi_debug=      [KNL,AVR32,SH] Specify one or more actions to take
                        when a NMI is triggered.
                        Format: [state][,regs][,debounce][,die]
                        shutdown the other cpus.  Instead use the REBOOT_VECTOR
                        irq.
  
 +      nomodule        Disable module load
 +
        nopat           [X86] Disable PAT (page attribute table extension of
                        pagetables) support.
  
                                the default.
                                off: Turn ECRC off
                                on: Turn ECRC on.
 -              realloc         reallocate PCI resources if allocations done by BIOS
 -                              are erroneous.
 +              realloc=        Enable/disable reallocating PCI bridge resources
 +                              if allocations done by BIOS are too small to
 +                              accommodate resources required by all child
 +                              devices.
 +                              off: Turn realloc off
 +                              on: Turn realloc on
 +              realloc         same as realloc=on
 +              noari           do not use PCIe ARI.
  
        pcie_aspm=      [PCIE] Forcibly enable or disable PCIe Active State Power
                        Management.
                force   Enable ASPM even on devices that claim not to support it.
                        WARNING: Forcing ASPM on may cause system lockups.
  
 +      pcie_hp=        [PCIE] PCI Express Hotplug driver options:
 +              nomsi   Do not use MSI for PCI Express Native Hotplug (this
 +                      makes all PCIe ports use INTx for hotplug services).
 +
        pcie_ports=     [PCIE] PCIe ports handling:
                auto    Ask the BIOS whether or not to use native PCIe services
                        associated with PCIe ports (PME, hot-plug, AER).  Use
  
                        default: off.
  
 +      printk.always_kmsg_dump=
 +                      Trigger kmsg_dump for cases other than kernel oops or
 +                      panics
 +                      Format: <bool>  (1/Y/y=enable, 0/N/n=disable)
 +                      default: disabled
 +
        printk.time=    Show timing data prefixed to each printk message line
                        Format: <bool>  (1/Y/y=enable, 0/N/n=disable)
  
                        For more information see Documentation/vm/slub.txt.
  
        slub_min_order= [MM, SLUB]
 -                      Determines the mininum page order for slabs. Must be
 +                      Determines the minimum page order for slabs. Must be
                        lower than slub_max_order.
                        For more information see Documentation/vm/slub.txt.
  
  
        threadirqs      [KNL]
                        Force threading of all interrupt handlers except those
 -                      marked explicitely IRQF_NO_THREAD.
 +                      marked explicitly IRQF_NO_THREAD.
  
        topology=       [S390]
                        Format: {off | on}
                        to facilitate early boot debugging.
                        See also Documentation/trace/events.txt
  
 +      transparent_hugepage=
 +                      [KNL]
 +                      Format: [always|madvise|never]
 +                      Can be used to control the default behavior of the system
 +                      with respect to transparent hugepages.
 +                      See Documentation/vm/transhuge.txt for more details.
 +
        tsc=            Disable clocksource stability checks for TSC.
                        Format: <string>
                        [x86] reliable: mark tsc clocksource as reliable, this
@@@ -96,6 -96,7 +96,7 @@@
  #define G4x_GMCH_SIZE_VT_2M   (G4x_GMCH_SIZE_2M | G4x_GMCH_SIZE_VT_EN)
  
  #define GFX_FLSH_CNTL         0x2170 /* 915+ */
+ #define GFX_FLSH_CNTL_VLV     0x101008
  
  #define I810_DRAM_CTL         0x3000
  #define I810_DRAM_ROW_0               0x00000001
  #define PCI_DEVICE_ID_INTEL_IVYBRIDGE_M_GT2_IG                0x0166
  #define PCI_DEVICE_ID_INTEL_IVYBRIDGE_S_HB            0x0158  /* Server */
  #define PCI_DEVICE_ID_INTEL_IVYBRIDGE_S_GT1_IG                0x015A
 +#define PCI_DEVICE_ID_INTEL_IVYBRIDGE_S_GT2_IG                0x016A
+ #define PCI_DEVICE_ID_INTEL_VALLEYVIEW_HB             0x0F00 /* VLV1 */
+ #define PCI_DEVICE_ID_INTEL_VALLEYVIEW_IG             0x0F30
+ #define PCI_DEVICE_ID_INTEL_HASWELL_HB                                0x0400 /* Desktop */
+ #define PCI_DEVICE_ID_INTEL_HASWELL_D_GT1_IG          0x0402
+ #define PCI_DEVICE_ID_INTEL_HASWELL_D_GT2_IG          0x0412
+ #define PCI_DEVICE_ID_INTEL_HASWELL_M_HB                      0x0404 /* Mobile */
+ #define PCI_DEVICE_ID_INTEL_HASWELL_M_GT1_IG          0x0406
+ #define PCI_DEVICE_ID_INTEL_HASWELL_M_GT2_IG          0x0416
+ #define PCI_DEVICE_ID_INTEL_HASWELL_S_HB                      0x0408 /* Server */
+ #define PCI_DEVICE_ID_INTEL_HASWELL_S_GT1_IG          0x040a
+ #define PCI_DEVICE_ID_INTEL_HASWELL_S_GT2_IG          0x041a
+ #define PCI_DEVICE_ID_INTEL_HASWELL_SDV               0x0c16 /* SDV */
+ #define PCI_DEVICE_ID_INTEL_HASWELL_E_HB                      0x0c04
  
  int intel_gmch_probe(struct pci_dev *pdev,
                               struct agp_bridge_data *bridge);
@@@ -1179,6 -1179,20 +1179,20 @@@ static void gen6_write_entry(dma_addr_
        writel(addr | pte_flags, intel_private.gtt + entry);
  }
  
+ static void valleyview_write_entry(dma_addr_t addr, unsigned int entry,
+                                  unsigned int flags)
+ {
+       u32 pte_flags;
+       pte_flags = GEN6_PTE_UNCACHED | I810_PTE_VALID;
+       /* gen6 has bit11-4 for physical addr bit39-32 */
+       addr |= (addr >> 28) & 0xff0;
+       writel(addr | pte_flags, intel_private.gtt + entry);
+       writel(1, intel_private.registers + GFX_FLSH_CNTL_VLV);
+ }
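To make the address packing above concrete, a worked example (illustrative
numbers only):

        /*
         * Example: addr = 0x100000000 (bit 32 set, i.e. a >4GiB physical
         * address).  (addr >> 28) moves addr bits 39:32 down to bits 11:4,
         * and the 0xff0 mask keeps exactly those PTE bits:
         *   (0x100000000 >> 28) & 0xff0 == 0x010
         * so the PTE holds the low address bits OR'ed with 0x010, plus
         * the valid/caching flags.
         */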
  static void gen6_cleanup(void)
  {
  }
@@@ -1190,6 -1204,7 +1204,6 @@@ static inline int needs_idle_maps(void
  {
  #ifdef CONFIG_INTEL_IOMMU
        const unsigned short gpu_devid = intel_private.pcidev->device;
 -      extern int intel_iommu_gfx_mapped;
  
        /* Query intel_iommu to see if we need the workaround. Presumably that
         * was loaded first.
  static int i9xx_setup(void)
  {
        u32 reg_addr;
+       int size = KB(512);
  
        pci_read_config_dword(intel_private.pcidev, I915_MMADDR, &reg_addr);
  
        reg_addr &= 0xfff80000;
  
-       intel_private.registers = ioremap(reg_addr, 128 * 4096);
+       if (INTEL_GTT_GEN >= 7)
+               size = MB(2);
+       intel_private.registers = ioremap(reg_addr, size);
        if (!intel_private.registers)
                return -ENOMEM;
  
@@@ -1354,6 -1373,15 +1372,15 @@@ static const struct intel_gtt_driver sa
        .check_flags = gen6_check_flags,
        .chipset_flush = i9xx_chipset_flush,
  };
+ static const struct intel_gtt_driver valleyview_gtt_driver = {
+       .gen = 7,
+       .setup = i9xx_setup,
+       .cleanup = gen6_cleanup,
+       .write_entry = valleyview_write_entry,
+       .dma_mask_size = 40,
+       .check_flags = gen6_check_flags,
+       .chipset_flush = i9xx_chipset_flush,
+ };
  
  /* Table to describe Intel GMCH and AGP/PCIE GART drivers.  At least one of
   * driver and gmch_driver must be non-null, and find_gmch will determine
@@@ -1458,8 -1486,22 +1485,24 @@@ static const struct intel_gtt_driver_de
            "Ivybridge", &sandybridge_gtt_driver },
        { PCI_DEVICE_ID_INTEL_IVYBRIDGE_S_GT1_IG,
            "Ivybridge", &sandybridge_gtt_driver },
 +      { PCI_DEVICE_ID_INTEL_IVYBRIDGE_S_GT2_IG,
 +          "Ivybridge", &sandybridge_gtt_driver },
+       { PCI_DEVICE_ID_INTEL_VALLEYVIEW_IG,
+           "ValleyView", &valleyview_gtt_driver },
+       { PCI_DEVICE_ID_INTEL_HASWELL_D_GT1_IG,
+           "Haswell", &sandybridge_gtt_driver },
+       { PCI_DEVICE_ID_INTEL_HASWELL_D_GT2_IG,
+           "Haswell", &sandybridge_gtt_driver },
+       { PCI_DEVICE_ID_INTEL_HASWELL_M_GT1_IG,
+           "Haswell", &sandybridge_gtt_driver },
+       { PCI_DEVICE_ID_INTEL_HASWELL_M_GT2_IG,
+           "Haswell", &sandybridge_gtt_driver },
+       { PCI_DEVICE_ID_INTEL_HASWELL_S_GT1_IG,
+           "Haswell", &sandybridge_gtt_driver },
+       { PCI_DEVICE_ID_INTEL_HASWELL_S_GT2_IG,
+           "Haswell", &sandybridge_gtt_driver },
+       { PCI_DEVICE_ID_INTEL_HASWELL_SDV,
+           "Haswell", &sandybridge_gtt_driver },
        { 0, NULL, NULL }
  };
  
@@@ -41,10 -41,10 +41,10 @@@ drm_clflush_page(struct page *page
        if (unlikely(page == NULL))
                return;
  
 -      page_virtual = kmap_atomic(page, KM_USER0);
 +      page_virtual = kmap_atomic(page);
        for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size)
                clflush(page_virtual + i);
 -      kunmap_atomic(page_virtual, KM_USER0);
 +      kunmap_atomic(page_virtual);
  }
  
  static void drm_cache_flush_clflush(struct page *pages[],
@@@ -87,10 -87,10 +87,10 @@@ drm_clflush_pages(struct page *pages[]
                if (unlikely(page == NULL))
                        continue;
  
 -              page_virtual = kmap_atomic(page, KM_USER0);
 +              page_virtual = kmap_atomic(page);
                flush_dcache_range((unsigned long)page_virtual,
                                   (unsigned long)page_virtual + PAGE_SIZE);
 -              kunmap_atomic(page_virtual, KM_USER0);
 +              kunmap_atomic(page_virtual);
        }
  #else
        printk(KERN_ERR "Architecture has no drm_cache.c support\n");
  #endif
  }
  EXPORT_SYMBOL(drm_clflush_pages);
+ void
+ drm_clflush_virt_range(char *addr, unsigned long length)
+ {
+ #if defined(CONFIG_X86)
+       if (cpu_has_clflush) {
+               char *end = addr + length;
+               mb();
+               for (; addr < end; addr += boot_cpu_data.x86_clflush_size)
+                       clflush(addr);
+               clflush(end - 1);
+               mb();
+               return;
+       }
+       if (on_each_cpu(drm_clflush_ipi_handler, NULL, 1) != 0)
+               printk(KERN_ERR "Timed out waiting for cache flush.\n");
+ #else
+       printk(KERN_ERR "Architecture has no drm_cache.c support\n");
+       WARN_ON_ONCE(1);
+ #endif
+ }
+ EXPORT_SYMBOL(drm_clflush_virt_range);
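A hedged usage sketch for the new export (the buffer names are illustrative,
not from this series): after CPU writes into a kernel-mapped buffer that the
GPU will read through a non-coherent path, flush just the dirtied range:

        memcpy(vaddr + offset, src, len);            /* CPU fills the buffer */
        drm_clflush_virt_range(vaddr + offset, len); /* push it out of cache */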
@@@ -468,7 -468,45 +468,45 @@@ static int i915_interrupt_info(struct s
        if (ret)
                return ret;
  
-       if (!HAS_PCH_SPLIT(dev)) {
+       if (IS_VALLEYVIEW(dev)) {
+               seq_printf(m, "Display IER:\t%08x\n",
+                          I915_READ(VLV_IER));
+               seq_printf(m, "Display IIR:\t%08x\n",
+                          I915_READ(VLV_IIR));
+               seq_printf(m, "Display IIR_RW:\t%08x\n",
+                          I915_READ(VLV_IIR_RW));
+               seq_printf(m, "Display IMR:\t%08x\n",
+                          I915_READ(VLV_IMR));
+               for_each_pipe(pipe)
+                       seq_printf(m, "Pipe %c stat:\t%08x\n",
+                                  pipe_name(pipe),
+                                  I915_READ(PIPESTAT(pipe)));
+               seq_printf(m, "Master IER:\t%08x\n",
+                          I915_READ(VLV_MASTER_IER));
+               seq_printf(m, "Render IER:\t%08x\n",
+                          I915_READ(GTIER));
+               seq_printf(m, "Render IIR:\t%08x\n",
+                          I915_READ(GTIIR));
+               seq_printf(m, "Render IMR:\t%08x\n",
+                          I915_READ(GTIMR));
+               seq_printf(m, "PM IER:\t\t%08x\n",
+                          I915_READ(GEN6_PMIER));
+               seq_printf(m, "PM IIR:\t\t%08x\n",
+                          I915_READ(GEN6_PMIIR));
+               seq_printf(m, "PM IMR:\t\t%08x\n",
+                          I915_READ(GEN6_PMIMR));
+               seq_printf(m, "Port hotplug:\t%08x\n",
+                          I915_READ(PORT_HOTPLUG_EN));
+               seq_printf(m, "DPFLIPSTAT:\t%08x\n",
+                          I915_READ(VLV_DPFLIPSTAT));
+               seq_printf(m, "DPINVGTT:\t%08x\n",
+                          I915_READ(DPINVGTT));
+       } else if (!HAS_PCH_SPLIT(dev)) {
                seq_printf(m, "Interrupt enable:    %08x\n",
                           I915_READ(IER));
                seq_printf(m, "Interrupt identity:  %08x\n",
@@@ -704,6 -742,7 +742,7 @@@ static void i915_ring_error_state(struc
                                  struct drm_i915_error_state *error,
                                  unsigned ring)
  {
+       BUG_ON(ring >= I915_NUM_RINGS); /* shut up confused gcc */
        seq_printf(m, "%s command stream:\n", ring_str(ring));
        seq_printf(m, "  HEAD: 0x%08x\n", error->head[ring]);
        seq_printf(m, "  TAIL: 0x%08x\n", error->tail[ring]);
        if (INTEL_INFO(dev)->gen >= 4)
                seq_printf(m, "  INSTPS: 0x%08x\n", error->instps[ring]);
        seq_printf(m, "  INSTPM: 0x%08x\n", error->instpm[ring]);
+       seq_printf(m, "  FADDR: 0x%08x\n", error->faddr[ring]);
        if (INTEL_INFO(dev)->gen >= 6) {
-               seq_printf(m, "  FADDR: 0x%08x\n", error->faddr[ring]);
                seq_printf(m, "  FAULT_REG: 0x%08x\n", error->fault_reg[ring]);
                seq_printf(m, "  SYNC_0: 0x%08x\n",
                           error->semaphore_mboxes[ring][0]);
@@@ -1502,6 -1541,61 +1541,53 @@@ static int i915_ppgtt_info(struct seq_f
        return 0;
  }
  
 -static int
 -i915_debugfs_common_open(struct inode *inode,
 -                       struct file *filp)
 -{
 -      filp->private_data = inode->i_private;
 -      return 0;
 -}
 -
+ static int i915_dpio_info(struct seq_file *m, void *data)
+ {
+       struct drm_info_node *node = (struct drm_info_node *) m->private;
+       struct drm_device *dev = node->minor->dev;
+       struct drm_i915_private *dev_priv = dev->dev_private;
+       int ret;
+       if (!IS_VALLEYVIEW(dev)) {
+               seq_printf(m, "unsupported\n");
+               return 0;
+       }
+       ret = mutex_lock_interruptible(&dev->mode_config.mutex);
+       if (ret)
+               return ret;
+       seq_printf(m, "DPIO_CTL: 0x%08x\n", I915_READ(DPIO_CTL));
+       seq_printf(m, "DPIO_DIV_A: 0x%08x\n",
+                  intel_dpio_read(dev_priv, _DPIO_DIV_A));
+       seq_printf(m, "DPIO_DIV_B: 0x%08x\n",
+                  intel_dpio_read(dev_priv, _DPIO_DIV_B));
+       seq_printf(m, "DPIO_REFSFR_A: 0x%08x\n",
+                  intel_dpio_read(dev_priv, _DPIO_REFSFR_A));
+       seq_printf(m, "DPIO_REFSFR_B: 0x%08x\n",
+                  intel_dpio_read(dev_priv, _DPIO_REFSFR_B));
+       seq_printf(m, "DPIO_CORE_CLK_A: 0x%08x\n",
+                  intel_dpio_read(dev_priv, _DPIO_CORE_CLK_A));
+       seq_printf(m, "DPIO_CORE_CLK_B: 0x%08x\n",
+                  intel_dpio_read(dev_priv, _DPIO_CORE_CLK_B));
+       seq_printf(m, "DPIO_LFP_COEFF_A: 0x%08x\n",
+                  intel_dpio_read(dev_priv, _DPIO_LFP_COEFF_A));
+       seq_printf(m, "DPIO_LFP_COEFF_B: 0x%08x\n",
+                  intel_dpio_read(dev_priv, _DPIO_LFP_COEFF_B));
+       seq_printf(m, "DPIO_FASTCLK_DISABLE: 0x%08x\n",
+                  intel_dpio_read(dev_priv, DPIO_FASTCLK_DISABLE));
+       mutex_unlock(&dev->mode_config.mutex);
+       return 0;
+ }
  static ssize_t
  i915_wedged_read(struct file *filp,
                 char __user *ubuf,
@@@ -1552,7 -1646,7 +1638,7 @@@ i915_wedged_write(struct file *filp
  
  static const struct file_operations i915_wedged_fops = {
        .owner = THIS_MODULE,
 -      .open = i915_debugfs_common_open,
 +      .open = simple_open,
        .read = i915_wedged_read,
        .write = i915_wedged_write,
        .llseek = default_llseek,
@@@ -1614,7 -1708,7 +1700,7 @@@ i915_max_freq_write(struct file *filp
  
  static const struct file_operations i915_max_freq_fops = {
        .owner = THIS_MODULE,
 -      .open = i915_debugfs_common_open,
 +      .open = simple_open,
        .read = i915_max_freq_read,
        .write = i915_max_freq_write,
        .llseek = default_llseek,
@@@ -1685,7 -1779,7 +1771,7 @@@ i915_cache_sharing_write(struct file *f
  
  static const struct file_operations i915_cache_sharing_fops = {
        .owner = THIS_MODULE,
 -      .open = i915_debugfs_common_open,
 +      .open = simple_open,
        .read = i915_cache_sharing_read,
        .write = i915_cache_sharing_write,
        .llseek = default_llseek,
@@@ -1836,6 -1930,7 +1922,7 @@@ static struct drm_info_list i915_debugf
        {"i915_gen6_forcewake_count", i915_gen6_forcewake_count_info, 0},
        {"i915_swizzle_info", i915_swizzle_info, 0},
        {"i915_ppgtt_info", i915_ppgtt_info, 0},
+       {"i915_dpio", i915_dpio_info, 0},
  };
  #define I915_DEBUGFS_ENTRIES ARRAY_SIZE(i915_debugfs_list)
  
@@@ -26,6 -26,8 +26,8 @@@
   *
   */
  
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
  #include "drmP.h"
  #include "drm.h"
  #include "drm_crtc_helper.h"
@@@ -43,6 -45,7 +45,7 @@@
  #include <linux/slab.h>
  #include <linux/module.h>
  #include <acpi/video.h>
+ #include <asm/pat.h>
  
  static void i915_write_hws_pga(struct drm_device *dev)
  {
@@@ -787,6 -790,9 +790,9 @@@ static int i915_getparam(struct drm_dev
        case I915_PARAM_HAS_LLC:
                value = HAS_LLC(dev);
                break;
+       case I915_PARAM_HAS_ALIASING_PPGTT:
+               value = dev_priv->mm.aliasing_ppgtt ? 1 : 0;
+               break;
        default:
                DRM_DEBUG_DRIVER("Unknown parameter %d\n",
                                 param->param);
@@@ -1158,14 -1164,14 +1164,14 @@@ static void i915_switcheroo_set_state(s
        struct drm_device *dev = pci_get_drvdata(pdev);
        pm_message_t pmm = { .event = PM_EVENT_SUSPEND };
        if (state == VGA_SWITCHEROO_ON) {
-               printk(KERN_INFO "i915: switched on\n");
+               pr_info("switched on\n");
                dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
                /* i915 resume handler doesn't set to D0 */
                pci_set_power_state(dev->pdev, PCI_D0);
                i915_resume(dev);
                dev->switch_power_state = DRM_SWITCH_POWER_ON;
        } else {
-               printk(KERN_ERR "i915: switched off\n");
+               pr_err("switched off\n");
                dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
                i915_suspend(dev, pmm);
                dev->switch_power_state = DRM_SWITCH_POWER_OFF;
@@@ -1183,21 -1189,6 +1189,21 @@@ static bool i915_switcheroo_can_switch(
        return can_switch;
  }
  
 +static bool
 +intel_enable_ppgtt(struct drm_device *dev)
 +{
 +      if (i915_enable_ppgtt >= 0)
 +              return i915_enable_ppgtt;
 +
 +#ifdef CONFIG_INTEL_IOMMU
 +      /* Disable ppgtt on SNB if VT-d is on. */
 +      if (INTEL_INFO(dev)->gen == 6 && intel_iommu_gfx_mapped)
 +              return false;
 +#endif
 +
 +      return true;
 +}
 +
  static int i915_load_gem_init(struct drm_device *dev)
  {
        struct drm_i915_private *dev_priv = dev->dev_private;
        drm_mm_init(&dev_priv->mm.stolen, 0, prealloc_size);
  
        mutex_lock(&dev->struct_mutex);
 -      if (i915_enable_ppgtt && HAS_ALIASING_PPGTT(dev)) {
 +      if (intel_enable_ppgtt(dev) && HAS_ALIASING_PPGTT(dev)) {
                /* PPGTT pdes are stolen from global gtt ptes, so shrink the
                 * aperture accordingly when using aliasing ppgtt. */
                gtt_size -= I915_PPGTT_PD_ENTRIES*PAGE_SIZE;
-               /* For paranoia keep the guard page in between. */
-               gtt_size -= PAGE_SIZE;
  
-               i915_gem_do_init(dev, 0, mappable_size, gtt_size);
+               i915_gem_init_global_gtt(dev, 0, mappable_size, gtt_size);
  
                ret = i915_gem_init_aliasing_ppgtt(dev);
 -              if (ret)
 +              if (ret) {
 +                      mutex_unlock(&dev->struct_mutex);
                        return ret;
 +              }
        } else {
                /* Let GEM Manage all of the aperture.
                 *
                 * should be enough to keep any prefetching inside of the
                 * aperture.
                 */
-               i915_gem_do_init(dev, 0, mappable_size, gtt_size - PAGE_SIZE);
+               i915_gem_init_global_gtt(dev, 0, mappable_size,
+                                        gtt_size);
        }
  
        ret = i915_gem_init_hw(dev);
@@@ -1931,6 -1919,29 +1936,29 @@@ ips_ping_for_i915_load(void
        }
  }
  
+ static void
+ i915_mtrr_setup(struct drm_i915_private *dev_priv, unsigned long base,
+               unsigned long size)
+ {
+       dev_priv->mm.gtt_mtrr = -1;
+ #if defined(CONFIG_X86_PAT)
+       if (cpu_has_pat)
+               return;
+ #endif
+       /* Set up a WC MTRR for non-PAT systems.  This is more common than
+        * one would think, because the kernel disables PAT on first
+        * generation Core chips because WC PAT gets overridden by a UC
+        * MTRR if present.  Even if a UC MTRR isn't present.
+        */
+       dev_priv->mm.gtt_mtrr = mtrr_add(base, size, MTRR_TYPE_WRCOMB, 1);
+       if (dev_priv->mm.gtt_mtrr < 0) {
+               DRM_INFO("MTRR allocation failed.  Graphics "
+                        "performance may suffer.\n");
+       }
+ }
  /**
   * i915_driver_load - setup chip and create an initial config
   * @dev: DRM device
  int i915_driver_load(struct drm_device *dev, unsigned long flags)
  {
        struct drm_i915_private *dev_priv;
+       struct intel_device_info *info;
        int ret = 0, mmio_bar;
-       uint32_t agp_size;
+       uint32_t aperture_size;
+       info = (struct intel_device_info *) flags;
+       /* Refuse to load on gen6+ without kms enabled. */
+       if (info->gen >= 6 && !drm_core_check_feature(dev, DRIVER_MODESET))
+               return -ENODEV;
  
        /* i915 has 4 more counters */
        dev->counters += 4;
  
        dev->dev_private = (void *)dev_priv;
        dev_priv->dev = dev;
-       dev_priv->info = (struct intel_device_info *) flags;
+       dev_priv->info = info;
  
        if (i915_get_bridge_dev(dev)) {
                ret = -EIO;
                goto out_rmmap;
        }
  
-       agp_size = dev_priv->mm.gtt->gtt_mappable_entries << PAGE_SHIFT;
+       aperture_size = dev_priv->mm.gtt->gtt_mappable_entries << PAGE_SHIFT;
  
        dev_priv->mm.gtt_mapping =
-               io_mapping_create_wc(dev->agp->base, agp_size);
+               io_mapping_create_wc(dev->agp->base, aperture_size);
        if (dev_priv->mm.gtt_mapping == NULL) {
                ret = -EIO;
                goto out_rmmap;
        }
  
-       /* Set up a WC MTRR for non-PAT systems.  This is more common than
-        * one would think, because the kernel disables PAT on first
-        * generation Core chips because WC PAT gets overridden by a UC
-        * MTRR if present.  Even if a UC MTRR isn't present.
-        */
-       dev_priv->mm.gtt_mtrr = mtrr_add(dev->agp->base,
-                                        agp_size,
-                                        MTRR_TYPE_WRCOMB, 1);
-       if (dev_priv->mm.gtt_mtrr < 0) {
-               DRM_INFO("MTRR allocation failed.  Graphics "
-                        "performance may suffer.\n");
-       }
+       i915_mtrr_setup(dev_priv, dev->agp->base, aperture_size);
  
        /* The i915 workqueue is primarily used for batched retirement of
         * requests (and thus managing bo) once the task has been completed
@@@ -2272,7 -2280,7 +2297,7 @@@ int i915_driver_open(struct drm_device 
   * mode setting case, we want to restore the kernel's initial mode (just
   * in case the last client left us in a bad state).
   *
-  * Additionally, in the non-mode setting case, we'll tear down the AGP
+  * Additionally, in the non-mode setting case, we'll tear down the GTT
  * and DMA structures, since the kernel won't be using them, and clean
   * up any GEM state.
   */
@@@ -2350,16 -2358,10 +2375,10 @@@ struct drm_ioctl_desc i915_ioctls[] = 
  
  int i915_max_ioctl = DRM_ARRAY_SIZE(i915_ioctls);
  
- /**
-  * Determine if the device really is AGP or not.
-  *
-  * All Intel graphics chipsets are treated as AGP, even if they are really
-  * PCI-e.
-  *
-  * \param dev   The device to be tested.
-  *
-  * \returns
-  * A value of 1 is always retured to indictate every i9x5 is AGP.
+ /*
+  * This is really ugly: Because old userspace abused the linux agp interface to
+  * manage the gtt, we need to claim that all intel devices are agp.  For
+  * otherwise the drm core refuses to initialize the agp support code.
   */
  int i915_driver_device_is_agp(struct drm_device * dev)
  {
@@@ -66,11 -66,7 +66,11 @@@ MODULE_PARM_DESC(semaphores
  int i915_enable_rc6 __read_mostly = -1;
  module_param_named(i915_enable_rc6, i915_enable_rc6, int, 0600);
  MODULE_PARM_DESC(i915_enable_rc6,
 -              "Enable power-saving render C-state 6 (default: -1 (use per-chip default)");
 +              "Enable power-saving render C-state 6. "
 +              "Different stages can be selected via bitmask values "
 +              "(0 = disable; 1 = enable rc6; 2 = enable deep rc6; 4 = enable deepest rc6). "
 +              "For example, 3 would enable rc6 and deep rc6, and 7 would enable everything. "
 +              "default: -1 (use per-chip default)");
  
  int i915_enable_fbc __read_mostly = -1;
  module_param_named(i915_enable_fbc, i915_enable_fbc, int, 0600);
@@@ -84,6 -80,12 +84,12 @@@ MODULE_PARM_DESC(lvds_downclock
                "Use panel (LVDS/eDP) downclocking for power savings "
                "(default: false)");
  
+ int i915_lvds_channel_mode __read_mostly;
+ module_param_named(lvds_channel_mode, i915_lvds_channel_mode, int, 0600);
+ MODULE_PARM_DESC(lvds_channel_mode,
+                "Specify LVDS channel mode "
+                "(0=probe BIOS [default], 1=single-channel, 2=dual-channel)");
  int i915_panel_use_ssc __read_mostly = -1;
  module_param_named(lvds_use_ssc, i915_panel_use_ssc, int, 0600);
  MODULE_PARM_DESC(lvds_use_ssc,
@@@ -93,8 -95,8 +99,8 @@@
  int i915_vbt_sdvo_panel_type __read_mostly = -1;
  module_param_named(vbt_sdvo_panel_type, i915_vbt_sdvo_panel_type, int, 0600);
  MODULE_PARM_DESC(vbt_sdvo_panel_type,
-               "Override selection of SDVO panel mode in the VBT "
-               "(default: auto)");
+               "Override/Ignore selection of SDVO panel mode in the VBT "
+               "(-2=ignore, -1=auto [default], index in VBT BIOS table)");
  
  static bool i915_try_reset __read_mostly = true;
  module_param_named(reset, i915_try_reset, bool, 0600);
@@@ -107,8 -109,8 +113,8 @@@ MODULE_PARM_DESC(enable_hangcheck
                "WARNING: Disabling this can cause system wide hangs. "
                "(default: true)");
  
 -bool i915_enable_ppgtt __read_mostly = 1;
 -module_param_named(i915_enable_ppgtt, i915_enable_ppgtt, bool, 0600);
 +int i915_enable_ppgtt __read_mostly = -1;
 +module_param_named(i915_enable_ppgtt, i915_enable_ppgtt, int, 0600);
  MODULE_PARM_DESC(i915_enable_ppgtt,
                "Enable PPGTT (default: true)");
  
@@@ -209,6 -211,7 +215,7 @@@ static const struct intel_device_info i
        .gen = 5,
        .need_gfx_hws = 1, .has_hotplug = 1,
        .has_bsd_ring = 1,
+       .has_pch_split = 1,
  };
  
  static const struct intel_device_info intel_ironlake_m_info = {
        .need_gfx_hws = 1, .has_hotplug = 1,
        .has_fbc = 1,
        .has_bsd_ring = 1,
+       .has_pch_split = 1,
  };
  
  static const struct intel_device_info intel_sandybridge_d_info = {
        .has_bsd_ring = 1,
        .has_blt_ring = 1,
        .has_llc = 1,
+       .has_pch_split = 1,
  };
  
  static const struct intel_device_info intel_sandybridge_m_info = {
        .has_bsd_ring = 1,
        .has_blt_ring = 1,
        .has_llc = 1,
+       .has_pch_split = 1,
  };
  
  static const struct intel_device_info intel_ivybridge_d_info = {
        .has_bsd_ring = 1,
        .has_blt_ring = 1,
        .has_llc = 1,
+       .has_pch_split = 1,
  };
  
  static const struct intel_device_info intel_ivybridge_m_info = {
        .has_bsd_ring = 1,
        .has_blt_ring = 1,
        .has_llc = 1,
+       .has_pch_split = 1,
+ };
+ static const struct intel_device_info intel_valleyview_m_info = {
+       .gen = 7, .is_mobile = 1,
+       .need_gfx_hws = 1, .has_hotplug = 1,
+       .has_fbc = 0,
+       .has_bsd_ring = 1,
+       .has_blt_ring = 1,
+       .is_valleyview = 1,
+ };
+ static const struct intel_device_info intel_valleyview_d_info = {
+       .gen = 7,
+       .need_gfx_hws = 1, .has_hotplug = 1,
+       .has_fbc = 0,
+       .has_bsd_ring = 1,
+       .has_blt_ring = 1,
+       .is_valleyview = 1,
+ };
+ static const struct intel_device_info intel_haswell_d_info = {
+       .is_haswell = 1, .gen = 7,
+       .need_gfx_hws = 1, .has_hotplug = 1,
+       .has_bsd_ring = 1,
+       .has_blt_ring = 1,
+       .has_llc = 1,
+       .has_pch_split = 1,
+ };
+ static const struct intel_device_info intel_haswell_m_info = {
+       .is_haswell = 1, .gen = 7, .is_mobile = 1,
+       .need_gfx_hws = 1, .has_hotplug = 1,
+       .has_bsd_ring = 1,
+       .has_blt_ring = 1,
+       .has_llc = 1,
+       .has_pch_split = 1,
  };
  
  static const struct pci_device_id pciidlist[] = {             /* aka */
        INTEL_VGA_DEVICE(0x0152, &intel_ivybridge_d_info), /* GT1 desktop */
        INTEL_VGA_DEVICE(0x0162, &intel_ivybridge_d_info), /* GT2 desktop */
        INTEL_VGA_DEVICE(0x015a, &intel_ivybridge_d_info), /* GT1 server */
 +      INTEL_VGA_DEVICE(0x016a, &intel_ivybridge_d_info), /* GT2 server */
        {0, 0, 0}
  };
  
@@@ -308,6 -351,7 +356,7 @@@ MODULE_DEVICE_TABLE(pci, pciidlist)
  #define INTEL_PCH_IBX_DEVICE_ID_TYPE  0x3b00
  #define INTEL_PCH_CPT_DEVICE_ID_TYPE  0x1c00
  #define INTEL_PCH_PPT_DEVICE_ID_TYPE  0x1e00
+ #define INTEL_PCH_LPT_DEVICE_ID_TYPE  0x8c00
  
  void intel_detect_pch(struct drm_device *dev)
  {
                                /* PantherPoint is CPT compatible */
                                dev_priv->pch_type = PCH_CPT;
                                DRM_DEBUG_KMS("Found PantherPoint PCH\n");
+                       } else if (id == INTEL_PCH_LPT_DEVICE_ID_TYPE) {
+                               dev_priv->pch_type = PCH_LPT;
+                               DRM_DEBUG_KMS("Found LynxPoint PCH\n");
                        }
                }
                pci_dev_put(pch);
@@@ -446,6 -493,31 +498,31 @@@ int __gen6_gt_wait_for_fifo(struct drm_
        return ret;
  }
  
+ void vlv_force_wake_get(struct drm_i915_private *dev_priv)
+ {
+       int count;
+       count = 0;
+       /* Already awake? */
+       if ((I915_READ(0x130094) & 0xa1) == 0xa1)
+               return;
+       I915_WRITE_NOTRACE(FORCEWAKE_VLV, 0xffffffff);
+       POSTING_READ(FORCEWAKE_VLV);
+       count = 0;
+       while (count++ < 50 && (I915_READ_NOTRACE(FORCEWAKE_ACK_VLV) & 1) == 0)
+               udelay(10);
+ }
+ void vlv_force_wake_put(struct drm_i915_private *dev_priv)
+ {
+       I915_WRITE_NOTRACE(FORCEWAKE_VLV, 0xffff0000);
+       /* FIXME: confirm VLV behavior with Punit folks */
+       POSTING_READ(FORCEWAKE_VLV);
+ }
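Callers on ValleyView are expected to bracket GT register access explicitly,
roughly like this (a sketch; GEN6_RP_CONTROL is just an example GT register):

        vlv_force_wake_get(dev_priv);
        val = I915_READ(GEN6_RP_CONTROL);
        vlv_force_wake_put(dev_priv);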
  static int i915_drm_freeze(struct drm_device *dev)
  {
        struct drm_i915_private *dev_priv = dev->dev_private;
        /* Modeset on resume, not lid events */
        dev_priv->modeset_on_lid = 0;
  
 +      console_lock();
 +      intel_fbdev_set_suspend(dev, 1);
 +      console_unlock();
 +
        return 0;
  }
  
@@@ -538,9 -606,7 +615,9 @@@ static int i915_drm_thaw(struct drm_dev
                drm_irq_install(dev);
  
                /* Resume the modeset for every activated CRTC */
 +              mutex_lock(&dev->mode_config.mutex);
                drm_helper_resume_force_mode(dev);
 +              mutex_unlock(&dev->mode_config.mutex);
  
                if (IS_IRONLAKE_M(dev))
                        ironlake_enable_rc6(dev);
  
        dev_priv->modeset_on_lid = 0;
  
 +      console_lock();
 +      intel_fbdev_set_suspend(dev, 0);
 +      console_unlock();
        return error;
  }
  
@@@ -993,6 -1056,13 +1070,13 @@@ MODULE_AUTHOR(DRIVER_AUTHOR)
  MODULE_DESCRIPTION(DRIVER_DESC);
  MODULE_LICENSE("GPL and additional rights");
  
+ /* We give fast paths for the really cool registers */
+ #define NEEDS_FORCE_WAKE(dev_priv, reg) \
+        (((dev_priv)->info->gen >= 6) && \
+         ((reg) < 0x40000) &&            \
+         ((reg) != FORCEWAKE)) && \
+        (!IS_VALLEYVIEW((dev_priv)->dev))
  #define __i915_read(x, y) \
  u##x i915_read##x(struct drm_i915_private *dev_priv, u32 reg) { \
        u##x val = 0; \
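The hunk ends mid-macro; a simplified sketch of how the body plausibly
continues (assumption: the real code also takes dev_priv->gt_lock and
refcounts forcewake, which is omitted here):

        if (NEEDS_FORCE_WAKE((dev_priv), (reg))) { \
                gen6_gt_force_wake_get(dev_priv); \
                val = read##y(dev_priv->regs + (reg)); \
                gen6_gt_force_wake_put(dev_priv); \
        } else \
                val = read##y(dev_priv->regs + (reg)); \
        return val; \
}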
@@@ -63,6 -63,16 +63,16 @@@ enum plane 
  };
  #define plane_name(p) ((p) + 'A')
  
+ enum port {
+       PORT_A = 0,
+       PORT_B,
+       PORT_C,
+       PORT_D,
+       PORT_E,
+       I915_MAX_PORTS
+ };
+ #define port_name(p) ((p) + 'A')
  #define I915_GEM_GPU_DOMAINS  (~(I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT))
  
  #define for_each_pipe(p) for ((p) = 0; (p) < dev_priv->num_pipe; (p)++)
@@@ -255,6 -265,9 +265,9 @@@ struct intel_device_info 
        u8 is_broadwater:1;
        u8 is_crestline:1;
        u8 is_ivybridge:1;
+       u8 is_valleyview:1;
+       u8 has_pch_split:1;
+       u8 is_haswell:1;
        u8 has_fbc:1;
        u8 has_pipe_cxsr:1;
        u8 has_hotplug:1;
@@@ -291,10 -304,12 +304,12 @@@ enum no_fbc_reason 
  enum intel_pch {
        PCH_IBX,        /* Ibexpeak PCH */
        PCH_CPT,        /* Cougarpoint PCH */
+       PCH_LPT,        /* Lynxpoint PCH */
  };
  
  #define QUIRK_PIPEA_FORCE (1<<0)
  #define QUIRK_LVDS_SSC_DISABLE (1<<1)
+ #define QUIRK_INVERT_BRIGHTNESS (1<<2)
  
  struct intel_fbdev;
  struct intel_fbc_work;
  struct intel_gmbus {
        struct i2c_adapter adapter;
        bool force_bit;
-       bool has_gpio;
        u32 reg0;
        u32 gpio_reg;
        struct i2c_algo_bit_data bit_algo;
@@@ -326,12 -340,17 +340,17 @@@ typedef struct drm_i915_private 
        /** gt_lock is also taken in irq contexts. */
        struct spinlock gt_lock;
  
-       struct intel_gmbus *gmbus;
+       struct intel_gmbus gmbus[GMBUS_NUM_PORTS];
  
        /** gmbus_mutex protects against concurrent usage of the single hw gmbus
         * controller on different i2c buses. */
        struct mutex gmbus_mutex;
  
+       /**
+        * Base address of the gmbus and gpio block.
+        */
+       uint32_t gpio_mmio_base;
        struct pci_dev *bridge_dev;
        struct intel_ring_buffer ring[I915_NUM_RINGS];
        uint32_t next_seqno;
  
        /* protects the irq masks */
        spinlock_t irq_lock;
+       /* DPIO indirect register protection */
+       spinlock_t dpio_lock;
        /** Cached value of IMR to avoid reads in updating the bitfield */
        u32 pipestat[2];
        u32 irq_mask;
        unsigned int lvds_use_ssc:1;
        unsigned int display_clock_mode:1;
        int lvds_ssc_freq;
+       unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */
+       unsigned int lvds_val; /* used for checking LVDS channel mode */
        struct {
                int rate;
                int lanes;
@@@ -881,6 -906,7 +906,7 @@@ struct drm_i915_gem_object 
        unsigned int cache_level:2;
  
        unsigned int has_aliasing_ppgtt_mapping:1;
+       unsigned int has_global_gtt_mapping:1;
  
        struct page **pages;
  
        /** Record of address bit 17 of each page at last unbind. */
        unsigned long *bit_17;
  
-       /**
-        * If present, while GEM_DOMAIN_CPU is in the read domain this array
-        * flags which individual pages are valid.
-        */
-       uint8_t *page_cpu_valid;
        /** User space pin count and filp owning the pin */
        uint32_t user_pin_count;
        struct drm_file *pin_filp;
@@@ -1001,6 -1020,8 +1020,8 @@@ struct drm_i915_file_private 
  #define IS_IRONLAKE_D(dev)    ((dev)->pci_device == 0x0042)
  #define IS_IRONLAKE_M(dev)    ((dev)->pci_device == 0x0046)
  #define IS_IVYBRIDGE(dev)     (INTEL_INFO(dev)->is_ivybridge)
+ #define IS_VALLEYVIEW(dev)    (INTEL_INFO(dev)->is_valleyview)
+ #define IS_HASWELL(dev)       (INTEL_INFO(dev)->is_haswell)
  #define IS_MOBILE(dev)                (INTEL_INFO(dev)->is_mobile)
  
  /*
  #define HAS_PIPE_CXSR(dev) (INTEL_INFO(dev)->has_pipe_cxsr)
  #define I915_HAS_FBC(dev) (INTEL_INFO(dev)->has_fbc)
  
- #define HAS_PCH_SPLIT(dev) (IS_GEN5(dev) || IS_GEN6(dev) || IS_IVYBRIDGE(dev))
+ #define HAS_PCH_SPLIT(dev) (INTEL_INFO(dev)->has_pch_split)
  #define HAS_PIPE_CONTROL(dev) (INTEL_INFO(dev)->gen >= 5)
  
  #define INTEL_PCH_TYPE(dev) (((struct drm_i915_private *)(dev)->dev_private)->pch_type)
+ #define HAS_PCH_LPT(dev) (INTEL_PCH_TYPE(dev) == PCH_LPT)
  #define HAS_PCH_CPT(dev) (INTEL_PCH_TYPE(dev) == PCH_CPT)
  #define HAS_PCH_IBX(dev) (INTEL_PCH_TYPE(dev) == PCH_IBX)
  
  #include "i915_trace.h"
  
 +/**
 + * RC6 is a special power stage which allows the GPU to enter a very
 + * low-voltage mode when idle, using down to 0V while at this stage.  This
 + * stage is entered automatically when the GPU is idle if RC6 support is
 + * enabled, and as soon as a new workload arrives the GPU wakes up automatically as well.
 + *
 + * There are different RC6 modes available in Intel GPUs, which differ from
 + * each other in the latency required to enter and leave RC6, and in the
 + * voltage consumed by the GPU in different states.
 + *
 + * The combination of the following flags define which states GPU is allowed
 + * to enter, while RC6 is the normal RC6 state, RC6p is the deep RC6, and
 + * RC6pp is the deepest RC6. Hardware support for them varies according to the
 + * GPU, BIOS, chipset and platform. RC6 is usually the safest one and the one
 + * which brings the most power savings; deeper states save more power, but
 + * require higher latency to switch to and wake up.
 + */
 +#define INTEL_RC6_ENABLE                      (1<<0)
 +#define INTEL_RC6p_ENABLE                     (1<<1)
 +#define INTEL_RC6pp_ENABLE                    (1<<2)
 +
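A sketch of how these flags map onto the i915_enable_rc6 bitmask described
in the module parameter above (the GEN6_RC_CTL_* bit names follow gen6
conventions and are assumed here, not taken from this diff):

        u32 rc6_mask = 0;
        if (i915_enable_rc6 & INTEL_RC6_ENABLE)
                rc6_mask |= GEN6_RC_CTL_RC6_ENABLE;
        if (i915_enable_rc6 & INTEL_RC6p_ENABLE)
                rc6_mask |= GEN6_RC_CTL_RC6p_ENABLE;
        if (i915_enable_rc6 & INTEL_RC6pp_ENABLE)
                rc6_mask |= GEN6_RC_CTL_RC6pp_ENABLE;
        /* e.g. i915_enable_rc6=3 enables RC6 and deep RC6 (RC6p) */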
  extern struct drm_ioctl_desc i915_ioctls[];
  extern int i915_max_ioctl;
  extern unsigned int i915_fbpercrtc __always_unused;
@@@ -1081,12 -1082,13 +1103,13 @@@ extern int i915_panel_ignore_lid __read
  extern unsigned int i915_powersave __read_mostly;
  extern int i915_semaphores __read_mostly;
  extern unsigned int i915_lvds_downclock __read_mostly;
+ extern int i915_lvds_channel_mode __read_mostly;
  extern int i915_panel_use_ssc __read_mostly;
  extern int i915_vbt_sdvo_panel_type __read_mostly;
  extern int i915_enable_rc6 __read_mostly;
  extern int i915_enable_fbc __read_mostly;
  extern bool i915_enable_hangcheck __read_mostly;
 -extern bool i915_enable_ppgtt __read_mostly;
 +extern int i915_enable_ppgtt __read_mostly;
  
  extern int i915_suspend(struct drm_device *dev, pm_message_t state);
  extern int i915_resume(struct drm_device *dev);
@@@ -1264,10 -1266,6 +1287,6 @@@ int __must_check i915_gem_init_hw(struc
  void i915_gem_init_swizzling(struct drm_device *dev);
  void i915_gem_init_ppgtt(struct drm_device *dev);
  void i915_gem_cleanup_ringbuffer(struct drm_device *dev);
- void i915_gem_do_init(struct drm_device *dev,
-                     unsigned long start,
-                     unsigned long mappable_end,
-                     unsigned long end);
  int __must_check i915_gpu_idle(struct drm_device *dev, bool do_retire);
  int __must_check i915_gem_idle(struct drm_device *dev);
  int __must_check i915_add_request(struct intel_ring_buffer *ring,
@@@ -1281,6 -1279,8 +1300,8 @@@ int __must_chec
  i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj,
                                  bool write);
  int __must_check
+ i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj, bool write);
+ int __must_check
  i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
                                     u32 alignment,
                                     struct intel_ring_buffer *pipelined);
@@@ -1311,10 -1311,15 +1332,15 @@@ void i915_ppgtt_unbind_object(struct i9
                              struct drm_i915_gem_object *obj);
  
  void i915_gem_restore_gtt_mappings(struct drm_device *dev);
- int __must_check i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj);
- void i915_gem_gtt_rebind_object(struct drm_i915_gem_object *obj,
+ int __must_check i915_gem_gtt_prepare_object(struct drm_i915_gem_object *obj);
+ void i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj,
                                enum i915_cache_level cache_level);
  void i915_gem_gtt_unbind_object(struct drm_i915_gem_object *obj);
+ void i915_gem_gtt_finish_object(struct drm_i915_gem_object *obj);
+ void i915_gem_init_global_gtt(struct drm_device *dev,
+                             unsigned long start,
+                             unsigned long mappable_end,
+                             unsigned long end);
  
  /* i915_gem_evict.c */
  int __must_check i915_gem_evict_something(struct drm_device *dev, int min_size,
@@@ -1357,6 -1362,13 +1383,13 @@@ extern int i915_restore_state(struct dr
  /* intel_i2c.c */
  extern int intel_setup_gmbus(struct drm_device *dev);
  extern void intel_teardown_gmbus(struct drm_device *dev);
+ extern inline bool intel_gmbus_is_port_valid(unsigned port)
+ {
+       return (port >= GMBUS_PORT_SSC && port <= GMBUS_PORT_DPD);
+ }
+ extern struct i2c_adapter *intel_gmbus_get_adapter(
+               struct drm_i915_private *dev_priv, unsigned port);
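A usage sketch for the new adapter lookup (GMBUS_PORT_PANEL is assumed to
be one of the valid gmbus port defines):

        struct i2c_adapter *adapter = NULL;

        if (intel_gmbus_is_port_valid(GMBUS_PORT_PANEL))
                adapter = intel_gmbus_get_adapter(dev_priv, GMBUS_PORT_PANEL);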
  extern void intel_gmbus_set_speed(struct i2c_adapter *adapter, int speed);
  extern void intel_gmbus_force_bit(struct i2c_adapter *adapter, bool force_bit);
  extern inline bool intel_gmbus_is_forced_bit(struct i2c_adapter *adapter)
@@@ -1409,6 -1421,9 +1442,9 @@@ extern void __gen6_gt_force_wake_mt_get
  extern void __gen6_gt_force_wake_put(struct drm_i915_private *dev_priv);
  extern void __gen6_gt_force_wake_mt_put(struct drm_i915_private *dev_priv);
  
+ extern void vlv_force_wake_get(struct drm_i915_private *dev_priv);
+ extern void vlv_force_wake_put(struct drm_i915_private *dev_priv);
  /* overlay */
  #ifdef CONFIG_DEBUG_FS
  extern struct intel_overlay_error_state *intel_overlay_capture_error_state(struct drm_device *dev);
@@@ -1450,12 -1465,6 +1486,6 @@@ void gen6_gt_force_wake_get(struct drm_
  void gen6_gt_force_wake_put(struct drm_i915_private *dev_priv);
  int __gen6_gt_wait_for_fifo(struct drm_i915_private *dev_priv);
  
- /* We give fast paths for the really cool registers */
- #define NEEDS_FORCE_WAKE(dev_priv, reg) \
-       (((dev_priv)->info->gen >= 6) && \
-        ((reg) < 0x40000) &&            \
-        ((reg) != FORCEWAKE))
  #define __i915_read(x, y) \
        u##x i915_read##x(struct drm_i915_private *dev_priv, u32 reg);
  
  static __must_check int i915_gem_object_flush_gpu_write_domain(struct drm_i915_gem_object *obj);
  static void i915_gem_object_flush_gtt_write_domain(struct drm_i915_gem_object *obj);
  static void i915_gem_object_flush_cpu_write_domain(struct drm_i915_gem_object *obj);
- static __must_check int i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj,
-                                                         bool write);
- static __must_check int i915_gem_object_set_cpu_read_domain_range(struct drm_i915_gem_object *obj,
-                                                                 uint64_t offset,
-                                                                 uint64_t size);
- static void i915_gem_object_set_to_full_cpu_read_domain(struct drm_i915_gem_object *obj);
  static __must_check int i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
                                                    unsigned alignment,
                                                    bool map_and_fenceable);
@@@ -125,25 -119,6 +119,6 @@@ i915_gem_object_is_inactive(struct drm_
        return obj->gtt_space && !obj->active && obj->pin_count == 0;
  }
  
- void i915_gem_do_init(struct drm_device *dev,
-                     unsigned long start,
-                     unsigned long mappable_end,
-                     unsigned long end)
- {
-       drm_i915_private_t *dev_priv = dev->dev_private;
-       drm_mm_init(&dev_priv->mm.gtt_space, start, end - start);
-       dev_priv->mm.gtt_start = start;
-       dev_priv->mm.gtt_mappable_end = mappable_end;
-       dev_priv->mm.gtt_end = end;
-       dev_priv->mm.gtt_total = end - start;
-       dev_priv->mm.mappable_gtt_total = min(end, mappable_end) - start;
-       /* Take over this portion of the GTT */
-       intel_gtt_clear_range(start / PAGE_SIZE, (end-start) / PAGE_SIZE);
- }
  int
  i915_gem_init_ioctl(struct drm_device *dev, void *data,
                    struct drm_file *file)
            (args->gtt_end | args->gtt_start) & (PAGE_SIZE - 1))
                return -EINVAL;
  
+       /* GEM with user mode setting was never supported on ilk and later. */
+       if (INTEL_INFO(dev)->gen >= 5)
+               return -ENODEV;
        mutex_lock(&dev->struct_mutex);
-       i915_gem_do_init(dev, args->gtt_start, args->gtt_end, args->gtt_end);
+       i915_gem_init_global_gtt(dev, args->gtt_start,
+                                args->gtt_end, args->gtt_end);
        mutex_unlock(&dev->struct_mutex);
  
        return 0;
@@@ -259,66 -239,6 +239,6 @@@ static int i915_gem_object_needs_bit17_
                obj->tiling_mode != I915_TILING_NONE;
  }
  
- /**
-  * This is the fast shmem pread path, which attempts to copy_from_user directly
-  * from the backing pages of the object to the user's address space.  On a
-  * fault, it fails so we can fall back to i915_gem_shmem_pwrite_slow().
-  */
- static int
- i915_gem_shmem_pread_fast(struct drm_device *dev,
-                         struct drm_i915_gem_object *obj,
-                         struct drm_i915_gem_pread *args,
-                         struct drm_file *file)
- {
-       struct address_space *mapping = obj->base.filp->f_path.dentry->d_inode->i_mapping;
-       ssize_t remain;
-       loff_t offset;
-       char __user *user_data;
-       int page_offset, page_length;
-       user_data = (char __user *) (uintptr_t) args->data_ptr;
-       remain = args->size;
-       offset = args->offset;
-       while (remain > 0) {
-               struct page *page;
-               char *vaddr;
-               int ret;
-               /* Operation in this page
-                *
-                * page_offset = offset within page
-                * page_length = bytes to copy for this page
-                */
-               page_offset = offset_in_page(offset);
-               page_length = remain;
-               if ((page_offset + remain) > PAGE_SIZE)
-                       page_length = PAGE_SIZE - page_offset;
-               page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT);
-               if (IS_ERR(page))
-                       return PTR_ERR(page);
-               vaddr = kmap_atomic(page);
-               ret = __copy_to_user_inatomic(user_data,
-                                             vaddr + page_offset,
-                                             page_length);
-               kunmap_atomic(vaddr);
-               mark_page_accessed(page);
-               page_cache_release(page);
-               if (ret)
-                       return -EFAULT;
-               remain -= page_length;
-               user_data += page_length;
-               offset += page_length;
-       }
-       return 0;
- }
  static inline int
  __copy_to_user_swizzled(char __user *cpu_vaddr,
                        const char *gpu_vaddr, int gpu_offset,
@@@ -371,37 -291,121 +291,121 @@@ __copy_from_user_swizzled(char __user *
        return 0;
  }
  
- /**
-  * This is the fallback shmem pread path, which allocates temporary storage
-  * in kernel space to copy_to_user into outside of the struct_mutex, so we
-  * can copy out of the object's backing pages while holding the struct mutex
-  * and not take page faults.
-  */
+ /* Per-page copy function for the shmem pread fastpath.
+  * Flushes invalid cachelines before reading the target if
+  * needs_clflush is set. */
  static int
- i915_gem_shmem_pread_slow(struct drm_device *dev,
-                         struct drm_i915_gem_object *obj,
-                         struct drm_i915_gem_pread *args,
-                         struct drm_file *file)
+ shmem_pread_fast(struct page *page, int shmem_page_offset, int page_length,
+                char __user *user_data,
+                bool page_do_bit17_swizzling, bool needs_clflush)
+ {
+       char *vaddr;
+       int ret;
+       if (unlikely(page_do_bit17_swizzling))
+               return -EINVAL;
+       vaddr = kmap_atomic(page);
+       if (needs_clflush)
+               drm_clflush_virt_range(vaddr + shmem_page_offset,
+                                      page_length);
+       ret = __copy_to_user_inatomic(user_data,
+                                     vaddr + shmem_page_offset,
+                                     page_length);
+       kunmap_atomic(vaddr);
+       return ret;
+ }
+ static void
+ shmem_clflush_swizzled_range(char *addr, unsigned long length,
+                            bool swizzled)
+ {
+       if (unlikely(swizzled)) {
+               unsigned long start = (unsigned long) addr;
+               unsigned long end = (unsigned long) addr + length;
+               /* For swizzling simply ensure that we always flush both
+                * channels. Lame, but simple and it works. Swizzled
+                * pwrite/pread is far from a hotpath - current userspace
+                * doesn't use it at all. */
+               start = round_down(start, 128);
+               end = round_up(end, 128);
+               drm_clflush_virt_range((void *)start, end - start);
+       } else {
+               drm_clflush_virt_range(addr, length);
+       }
+ }
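
Aside on shmem_clflush_swizzled_range(): widening the flush to 128-byte alignment guarantees that both 64-byte cachelines of a bit-17-swizzled pair get flushed, without working out which one actually holds the data. The rounding is plain power-of-two masking; a minimal stand-alone sketch, where flush_range() is a hypothetical stand-in for drm_clflush_virt_range():

#include <stddef.h>
#include <stdint.h>

#define SWIZZLE_SPAN 128	/* covers both cachelines of a swizzled pair */

extern void flush_range(void *addr, size_t len);	/* assumed primitive */

static void flush_swizzled(void *addr, size_t len)
{
	uintptr_t start = (uintptr_t)addr & ~(uintptr_t)(SWIZZLE_SPAN - 1);
	uintptr_t end = ((uintptr_t)addr + len + SWIZZLE_SPAN - 1) &
			~(uintptr_t)(SWIZZLE_SPAN - 1);

	flush_range((void *)start, end - start);
}
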
+ /* Only difference to the fast-path function is that this can handle bit17
+  * and uses non-atomic copy and kmap functions. */
+ static int
+ shmem_pread_slow(struct page *page, int shmem_page_offset, int page_length,
+                char __user *user_data,
+                bool page_do_bit17_swizzling, bool needs_clflush)
+ {
+       char *vaddr;
+       int ret;
+       vaddr = kmap(page);
+       if (needs_clflush)
+               shmem_clflush_swizzled_range(vaddr + shmem_page_offset,
+                                            page_length,
+                                            page_do_bit17_swizzling);
+       if (page_do_bit17_swizzling)
+               ret = __copy_to_user_swizzled(user_data,
+                                             vaddr, shmem_page_offset,
+                                             page_length);
+       else
+               ret = __copy_to_user(user_data,
+                                    vaddr + shmem_page_offset,
+                                    page_length);
+       kunmap(page);
+       return ret;
+ }
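
The fast/slow pair above follows a simple contract: the fast helper declines, with a non-zero return, anything it cannot do without sleeping (bit17 swizzling, a faulting copy), and the caller falls back to the kmap()-based variant. Schematically, with invented names:

#include <errno.h>
#include <stddef.h>

/* Assumed non-faulting copy with the kernel convention of returning
 * the number of bytes NOT copied. */
extern size_t copy_inatomic(void *dst, const void *src, size_t len);

/* Fast path: refuses anything it cannot do atomically, so the caller
 * treats any non-zero return as "take the slow path". */
static int pread_fast(void *dst, const void *src, size_t len, int swizzled)
{
	if (swizzled)
		return -EINVAL;	/* swizzling needs the sleeping variant */

	return copy_inatomic(dst, src, len) ? -EFAULT : 0;
}
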
+ static int
+ i915_gem_shmem_pread(struct drm_device *dev,
+                    struct drm_i915_gem_object *obj,
+                    struct drm_i915_gem_pread *args,
+                    struct drm_file *file)
  {
        struct address_space *mapping = obj->base.filp->f_path.dentry->d_inode->i_mapping;
        char __user *user_data;
        ssize_t remain;
        loff_t offset;
-       int shmem_page_offset, page_length, ret;
+       int shmem_page_offset, page_length, ret = 0;
        int obj_do_bit17_swizzling, page_do_bit17_swizzling;
+       int hit_slowpath = 0;
+       int prefaulted = 0;
+       int needs_clflush = 0;
+       int release_page;
  
        user_data = (char __user *) (uintptr_t) args->data_ptr;
        remain = args->size;
  
        obj_do_bit17_swizzling = i915_gem_object_needs_bit17_swizzle(obj);
  
-       offset = args->offset;
+       if (!(obj->base.read_domains & I915_GEM_DOMAIN_CPU)) {
+               /* If we're not in the cpu read domain, set ourselves into the gtt
+                * read domain and manually flush cachelines (if required). This
+                * optimizes for the case when the gpu will dirty the data
+                * anyway again before the next pread happens. */
+               if (obj->cache_level == I915_CACHE_NONE)
+                       needs_clflush = 1;
+               ret = i915_gem_object_set_to_gtt_domain(obj, false);
+               if (ret)
+                       return ret;
+       }
  
-       mutex_unlock(&dev->struct_mutex);
+       offset = args->offset;
  
        while (remain > 0) {
                struct page *page;
-               char *vaddr;
  
                /* Operation in this page
                 *
                if ((shmem_page_offset + page_length) > PAGE_SIZE)
                        page_length = PAGE_SIZE - shmem_page_offset;
  
-               page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT);
-               if (IS_ERR(page)) {
-                       ret = PTR_ERR(page);
-                       goto out;
+               if (obj->pages) {
+                       page = obj->pages[offset >> PAGE_SHIFT];
+                       release_page = 0;
+               } else {
+                       page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT);
+                       if (IS_ERR(page)) {
+                               ret = PTR_ERR(page);
+                               goto out;
+                       }
+                       release_page = 1;
                }
  
                page_do_bit17_swizzling = obj_do_bit17_swizzling &&
                        (page_to_phys(page) & (1 << 17)) != 0;
  
-               vaddr = kmap(page);
-               if (page_do_bit17_swizzling)
-                       ret = __copy_to_user_swizzled(user_data,
-                                                     vaddr, shmem_page_offset,
-                                                     page_length);
-               else
-                       ret = __copy_to_user(user_data,
-                                            vaddr + shmem_page_offset,
-                                            page_length);
-               kunmap(page);
+               ret = shmem_pread_fast(page, shmem_page_offset, page_length,
+                                      user_data, page_do_bit17_swizzling,
+                                      needs_clflush);
+               if (ret == 0)
+                       goto next_page;
+               hit_slowpath = 1;
+               page_cache_get(page);
+               mutex_unlock(&dev->struct_mutex);
+               if (!prefaulted) {
+                       ret = fault_in_multipages_writeable(user_data, remain);
+                       /* Userspace is tricking us, but we've already clobbered
+                        * its pages with the prefault and promised to write the
+                        * data up to the first fault. Hence ignore any errors
+                        * and just continue. */
+                       (void)ret;
+                       prefaulted = 1;
+               }
  
-               mark_page_accessed(page);
+               ret = shmem_pread_slow(page, shmem_page_offset, page_length,
+                                      user_data, page_do_bit17_swizzling,
+                                      needs_clflush);
+               mutex_lock(&dev->struct_mutex);
                page_cache_release(page);
+ next_page:
+               mark_page_accessed(page);
+               if (release_page)
+                       page_cache_release(page);
  
                if (ret) {
                        ret = -EFAULT;
        }
  
  out:
-       mutex_lock(&dev->struct_mutex);
-       /* Fixup: Kill any reinstated backing storage pages */
-       if (obj->madv == __I915_MADV_PURGED)
-               i915_gem_object_truncate(obj);
+       if (hit_slowpath) {
+               /* Fixup: Kill any reinstated backing storage pages */
+               if (obj->madv == __I915_MADV_PURGED)
+                       i915_gem_object_truncate(obj);
+       }
  
        return ret;
  }
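
The loop above is the heart of the pread rework: the atomic copy is attempted while struct_mutex is held, and only on failure does it pin the page, drop the lock, prefault the destination once, and retry with the sleeping copy. A compilable sketch of that shape, with a pthread mutex standing in for struct_mutex and invented copy primitives (kernel convention: they return bytes not copied):

#include <errno.h>
#include <pthread.h>
#include <stddef.h>

extern pthread_mutex_t dev_lock;	/* stands in for struct_mutex */
extern size_t copy_atomic(void *dst, const void *src, size_t len);
extern size_t copy_sleeping(void *dst, const void *src, size_t len);
extern void prefault_writeable(void *buf, size_t len);	/* assumed helper */

static int read_chunk(void *user_buf, const void *obj_page, size_t len,
		      int *prefaulted)
{
	if (copy_atomic(user_buf, obj_page, len) == 0)
		return 0;		/* done, never dropped the lock */

	/* Slow path: the driver pins the backing page first, then
	 * drops struct_mutex so the copy may take page faults. */
	pthread_mutex_unlock(&dev_lock);
	if (!*prefaulted) {
		prefault_writeable(user_buf, len);	/* pay this once */
		*prefaulted = 1;
	}
	len = copy_sleeping(user_buf, obj_page, len);
	pthread_mutex_lock(&dev_lock);

	return len ? -EFAULT : 0;
}
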
@@@ -476,11 -504,6 +504,6 @@@ i915_gem_pread_ioctl(struct drm_device 
                       args->size))
                return -EFAULT;
  
-       ret = fault_in_pages_writeable((char __user *)(uintptr_t)args->data_ptr,
-                                      args->size);
-       if (ret)
-               return -EFAULT;
        ret = i915_mutex_lock_interruptible(dev);
        if (ret)
                return ret;
  
        trace_i915_gem_object_pread(obj, args->offset, args->size);
  
-       ret = i915_gem_object_set_cpu_read_domain_range(obj,
-                                                       args->offset,
-                                                       args->size);
-       if (ret)
-               goto out;
-       ret = -EFAULT;
-       if (!i915_gem_object_needs_bit17_swizzle(obj))
-               ret = i915_gem_shmem_pread_fast(dev, obj, args, file);
-       if (ret == -EFAULT)
-               ret = i915_gem_shmem_pread_slow(dev, obj, args, file);
+       ret = i915_gem_shmem_pread(dev, obj, args, file);
  
  out:
        drm_gem_object_unreference(&obj->base);
@@@ -539,30 -552,6 +552,6 @@@ fast_user_write(struct io_mapping *mapp
        return unwritten;
  }
  
- /* Here's the write path which can sleep for
-  * page faults
-  */
- static inline void
- slow_kernel_write(struct io_mapping *mapping,
-                 loff_t gtt_base, int gtt_offset,
-                 struct page *user_page, int user_offset,
-                 int length)
- {
-       char __iomem *dst_vaddr;
-       char *src_vaddr;
-       dst_vaddr = io_mapping_map_wc(mapping, gtt_base);
-       src_vaddr = kmap(user_page);
-       memcpy_toio(dst_vaddr + gtt_offset,
-                   src_vaddr + user_offset,
-                   length);
-       kunmap(user_page);
-       io_mapping_unmap(dst_vaddr);
- }
  /**
   * This is the fast pwrite path, where we copy the data directly from the
   * user into the GTT, uncached.
@@@ -577,7 -566,19 +566,19 @@@ i915_gem_gtt_pwrite_fast(struct drm_dev
        ssize_t remain;
        loff_t offset, page_base;
        char __user *user_data;
-       int page_offset, page_length;
+       int page_offset, page_length, ret;
+       ret = i915_gem_object_pin(obj, 0, true);
+       if (ret)
+               goto out;
+       ret = i915_gem_object_set_to_gtt_domain(obj, true);
+       if (ret)
+               goto out_unpin;
+       ret = i915_gem_object_put_fence(obj);
+       if (ret)
+               goto out_unpin;
  
        user_data = (char __user *) (uintptr_t) args->data_ptr;
        remain = args->size;
                 * retry in the slow path.
                 */
                if (fast_user_write(dev_priv->mm.gtt_mapping, page_base,
-                                   page_offset, user_data, page_length))
-                       return -EFAULT;
+                                   page_offset, user_data, page_length)) {
+                       ret = -EFAULT;
+                       goto out_unpin;
+               }
  
                remain -= page_length;
                user_data += page_length;
                offset += page_length;
        }
  
-       return 0;
+ out_unpin:
+       i915_gem_object_unpin(obj);
+ out:
+       return ret;
  }
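
Note that i915_gem_gtt_pwrite_fast() now owns its whole pin/domain/fence setup, unwound with the usual kernel goto idiom: each acquisition gets a label, and every error jumps to the first label that undoes everything taken so far. Reduced to stubs:

extern int pin(void);
extern void unpin(void);
extern int set_domain(void);
extern int put_fence(void);
extern int do_write(void);

static int write_fast(void)
{
	int ret;

	ret = pin();
	if (ret)
		goto out;		/* nothing to undo yet */
	ret = set_domain();
	if (ret)
		goto out_unpin;
	ret = put_fence();
	if (ret)
		goto out_unpin;

	ret = do_write();		/* failures also land on out_unpin */
out_unpin:
	unpin();
out:
	return ret;
}
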
  
- /**
-  * This is the fallback GTT pwrite path, which uses get_user_pages to pin
-  * the memory and maps it using kmap_atomic for copying.
-  *
-  * This code resulted in x11perf -rgb10text consuming about 10% more CPU
-  * than using i915_gem_gtt_pwrite_fast on a G45 (32-bit).
-  */
+ /* Per-page copy function for the shmem pwrite fastpath.
+  * Flushes invalid cachelines before writing to the target if
+  * needs_clflush_before is set and flushes out any written cachelines after
+  * writing if needs_clflush is set. */
  static int
- i915_gem_gtt_pwrite_slow(struct drm_device *dev,
-                        struct drm_i915_gem_object *obj,
-                        struct drm_i915_gem_pwrite *args,
-                        struct drm_file *file)
+ shmem_pwrite_fast(struct page *page, int shmem_page_offset, int page_length,
+                 char __user *user_data,
+                 bool page_do_bit17_swizzling,
+                 bool needs_clflush_before,
+                 bool needs_clflush_after)
  {
-       drm_i915_private_t *dev_priv = dev->dev_private;
-       ssize_t remain;
-       loff_t gtt_page_base, offset;
-       loff_t first_data_page, last_data_page, num_pages;
-       loff_t pinned_pages, i;
-       struct page **user_pages;
-       struct mm_struct *mm = current->mm;
-       int gtt_page_offset, data_page_offset, data_page_index, page_length;
+       char *vaddr;
        int ret;
-       uint64_t data_ptr = args->data_ptr;
-       remain = args->size;
-       /* Pin the user pages containing the data.  We can't fault while
-        * holding the struct mutex, and all of the pwrite implementations
-        * want to hold it while dereferencing the user data.
-        */
-       first_data_page = data_ptr / PAGE_SIZE;
-       last_data_page = (data_ptr + args->size - 1) / PAGE_SIZE;
-       num_pages = last_data_page - first_data_page + 1;
-       user_pages = drm_malloc_ab(num_pages, sizeof(struct page *));
-       if (user_pages == NULL)
-               return -ENOMEM;
-       mutex_unlock(&dev->struct_mutex);
-       down_read(&mm->mmap_sem);
-       pinned_pages = get_user_pages(current, mm, (uintptr_t)args->data_ptr,
-                                     num_pages, 0, 0, user_pages, NULL);
-       up_read(&mm->mmap_sem);
-       mutex_lock(&dev->struct_mutex);
-       if (pinned_pages < num_pages) {
-               ret = -EFAULT;
-               goto out_unpin_pages;
-       }
-       ret = i915_gem_object_set_to_gtt_domain(obj, true);
-       if (ret)
-               goto out_unpin_pages;
-       ret = i915_gem_object_put_fence(obj);
-       if (ret)
-               goto out_unpin_pages;
-       offset = obj->gtt_offset + args->offset;
-       while (remain > 0) {
-               /* Operation in this page
-                *
-                * gtt_page_base = page offset within aperture
-                * gtt_page_offset = offset within page in aperture
-                * data_page_index = page number in get_user_pages return
-                * data_page_offset = offset with data_page_index page.
-                * page_length = bytes to copy for this page
-                */
-               gtt_page_base = offset & PAGE_MASK;
-               gtt_page_offset = offset_in_page(offset);
-               data_page_index = data_ptr / PAGE_SIZE - first_data_page;
-               data_page_offset = offset_in_page(data_ptr);
-               page_length = remain;
-               if ((gtt_page_offset + page_length) > PAGE_SIZE)
-                       page_length = PAGE_SIZE - gtt_page_offset;
-               if ((data_page_offset + page_length) > PAGE_SIZE)
-                       page_length = PAGE_SIZE - data_page_offset;
  
-               slow_kernel_write(dev_priv->mm.gtt_mapping,
-                                 gtt_page_base, gtt_page_offset,
-                                 user_pages[data_page_index],
-                                 data_page_offset,
-                                 page_length);
-               remain -= page_length;
-               offset += page_length;
-               data_ptr += page_length;
-       }
+       if (unlikely(page_do_bit17_swizzling))
+               return -EINVAL;
  
- out_unpin_pages:
-       for (i = 0; i < pinned_pages; i++)
-               page_cache_release(user_pages[i]);
-       drm_free_large(user_pages);
+       vaddr = kmap_atomic(page);
+       if (needs_clflush_before)
+               drm_clflush_virt_range(vaddr + shmem_page_offset,
+                                      page_length);
+       ret = __copy_from_user_inatomic_nocache(vaddr + shmem_page_offset,
+                                               user_data,
+                                               page_length);
+       if (needs_clflush_after)
+               drm_clflush_virt_range(vaddr + shmem_page_offset,
+                                      page_length);
+       kunmap_atomic(vaddr);
  
        return ret;
  }
  
- /**
-  * This is the fast shmem pwrite path, which attempts to directly
-  * copy_from_user into the kmapped pages backing the object.
-  */
+ /* Only difference to the fast-path function is that this can handle bit17
+  * and uses non-atomic copy and kmap functions. */
  static int
- i915_gem_shmem_pwrite_fast(struct drm_device *dev,
-                          struct drm_i915_gem_object *obj,
-                          struct drm_i915_gem_pwrite *args,
-                          struct drm_file *file)
+ shmem_pwrite_slow(struct page *page, int shmem_page_offset, int page_length,
+                 char __user *user_data,
+                 bool page_do_bit17_swizzling,
+                 bool needs_clflush_before,
+                 bool needs_clflush_after)
  {
-       struct address_space *mapping = obj->base.filp->f_path.dentry->d_inode->i_mapping;
-       ssize_t remain;
-       loff_t offset;
-       char __user *user_data;
-       int page_offset, page_length;
-       user_data = (char __user *) (uintptr_t) args->data_ptr;
-       remain = args->size;
-       offset = args->offset;
-       obj->dirty = 1;
-       while (remain > 0) {
-               struct page *page;
-               char *vaddr;
-               int ret;
-               /* Operation in this page
-                *
-                * page_offset = offset within page
-                * page_length = bytes to copy for this page
-                */
-               page_offset = offset_in_page(offset);
-               page_length = remain;
-               if ((page_offset + remain) > PAGE_SIZE)
-                       page_length = PAGE_SIZE - page_offset;
-               page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT);
-               if (IS_ERR(page))
-                       return PTR_ERR(page);
+       char *vaddr;
+       int ret;
  
-               vaddr = kmap_atomic(page);
-               ret = __copy_from_user_inatomic(vaddr + page_offset,
+       vaddr = kmap(page);
+       if (unlikely(needs_clflush_before || page_do_bit17_swizzling))
+               shmem_clflush_swizzled_range(vaddr + shmem_page_offset,
+                                            page_length,
+                                            page_do_bit17_swizzling);
+       if (page_do_bit17_swizzling)
+               ret = __copy_from_user_swizzled(vaddr, shmem_page_offset,
                                                user_data,
                                                page_length);
-               kunmap_atomic(vaddr);
-               set_page_dirty(page);
-               mark_page_accessed(page);
-               page_cache_release(page);
-               /* If we get a fault while copying data, then (presumably) our
-                * source page isn't available.  Return the error and we'll
-                * retry in the slow path.
-                */
-               if (ret)
-                       return -EFAULT;
-               remain -= page_length;
-               user_data += page_length;
-               offset += page_length;
-       }
+       else
+               ret = __copy_from_user(vaddr + shmem_page_offset,
+                                      user_data,
+                                      page_length);
+       if (needs_clflush_after)
+               shmem_clflush_swizzled_range(vaddr + shmem_page_offset,
+                                            page_length,
+                                            page_do_bit17_swizzling);
+       kunmap(page);
  
-       return 0;
+       return ret;
  }
  
- /**
-  * This is the fallback shmem pwrite path, which uses get_user_pages to pin
-  * the memory and maps it using kmap_atomic for copying.
-  *
-  * This avoids taking mmap_sem for faulting on the user's address while the
-  * struct_mutex is held.
-  */
  static int
- i915_gem_shmem_pwrite_slow(struct drm_device *dev,
-                          struct drm_i915_gem_object *obj,
-                          struct drm_i915_gem_pwrite *args,
-                          struct drm_file *file)
+ i915_gem_shmem_pwrite(struct drm_device *dev,
+                     struct drm_i915_gem_object *obj,
+                     struct drm_i915_gem_pwrite *args,
+                     struct drm_file *file)
  {
        struct address_space *mapping = obj->base.filp->f_path.dentry->d_inode->i_mapping;
        ssize_t remain;
        loff_t offset;
        char __user *user_data;
-       int shmem_page_offset, page_length, ret;
+       int shmem_page_offset, page_length, ret = 0;
        int obj_do_bit17_swizzling, page_do_bit17_swizzling;
+       int hit_slowpath = 0;
+       int needs_clflush_after = 0;
+       int needs_clflush_before = 0;
+       int release_page;
  
        user_data = (char __user *) (uintptr_t) args->data_ptr;
        remain = args->size;
  
        obj_do_bit17_swizzling = i915_gem_object_needs_bit17_swizzle(obj);
  
+       if (obj->base.write_domain != I915_GEM_DOMAIN_CPU) {
+               /* If we're not in the cpu write domain, set ourselves into the gtt
+                * write domain and manually flush cachelines (if required). This
+                * optimizes for the case when the gpu will use the data
+                * right away and we therefore have to clflush anyway. */
+               if (obj->cache_level == I915_CACHE_NONE)
+                       needs_clflush_after = 1;
+               ret = i915_gem_object_set_to_gtt_domain(obj, true);
+               if (ret)
+                       return ret;
+       }
+       /* Same trick applies to invalidating partially written cachelines
+        * before writing. */
+       if (!(obj->base.read_domains & I915_GEM_DOMAIN_CPU)
+           && obj->cache_level == I915_CACHE_NONE)
+               needs_clflush_before = 1;
        offset = args->offset;
        obj->dirty = 1;
  
-       mutex_unlock(&dev->struct_mutex);
        while (remain > 0) {
                struct page *page;
-               char *vaddr;
+               int partial_cacheline_write;
  
                /* Operation in this page
                 *
                if ((shmem_page_offset + page_length) > PAGE_SIZE)
                        page_length = PAGE_SIZE - shmem_page_offset;
  
-               page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT);
-               if (IS_ERR(page)) {
-                       ret = PTR_ERR(page);
-                       goto out;
+               /* If we don't overwrite a cacheline completely we need to be
+                * careful to have up-to-date data by first clflushing. Don't
+                * overcomplicate things and flush the entire page. */
+               partial_cacheline_write = needs_clflush_before &&
+                       ((shmem_page_offset | page_length)
+                               & (boot_cpu_data.x86_clflush_size - 1));
+               if (obj->pages) {
+                       page = obj->pages[offset >> PAGE_SHIFT];
+                       release_page = 0;
+               } else {
+                       page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT);
+                       if (IS_ERR(page)) {
+                               ret = PTR_ERR(page);
+                               goto out;
+                       }
+                       release_page = 1;
                }
  
                page_do_bit17_swizzling = obj_do_bit17_swizzling &&
                        (page_to_phys(page) & (1 << 17)) != 0;
  
-               vaddr = kmap(page);
-               if (page_do_bit17_swizzling)
-                       ret = __copy_from_user_swizzled(vaddr, shmem_page_offset,
-                                                       user_data,
-                                                       page_length);
-               else
-                       ret = __copy_from_user(vaddr + shmem_page_offset,
-                                              user_data,
-                                              page_length);
-               kunmap(page);
+               ret = shmem_pwrite_fast(page, shmem_page_offset, page_length,
+                                       user_data, page_do_bit17_swizzling,
+                                       partial_cacheline_write,
+                                       needs_clflush_after);
+               if (ret == 0)
+                       goto next_page;
+               hit_slowpath = 1;
+               page_cache_get(page);
+               mutex_unlock(&dev->struct_mutex);
+               ret = shmem_pwrite_slow(page, shmem_page_offset, page_length,
+                                       user_data, page_do_bit17_swizzling,
+                                       partial_cacheline_write,
+                                       needs_clflush_after);
  
+               mutex_lock(&dev->struct_mutex);
+               page_cache_release(page);
+ next_page:
                set_page_dirty(page);
                mark_page_accessed(page);
-               page_cache_release(page);
+               if (release_page)
+                       page_cache_release(page);
  
                if (ret) {
                        ret = -EFAULT;
        }
  
  out:
-       mutex_lock(&dev->struct_mutex);
-       /* Fixup: Kill any reinstated backing storage pages */
-       if (obj->madv == __I915_MADV_PURGED)
-               i915_gem_object_truncate(obj);
-       /* and flush dirty cachelines in case the object isn't in the cpu write
-        * domain anymore. */
-       if (obj->base.write_domain != I915_GEM_DOMAIN_CPU) {
-               i915_gem_clflush_object(obj);
-               intel_gtt_chipset_flush();
+       if (hit_slowpath) {
+               /* Fixup: Kill any reinstated backing storage pages */
+               if (obj->madv == __I915_MADV_PURGED)
+                       i915_gem_object_truncate(obj);
+               /* and flush dirty cachelines in case the object isn't in the cpu write
+                * domain anymore. */
+               if (obj->base.write_domain != I915_GEM_DOMAIN_CPU) {
+                       i915_gem_clflush_object(obj);
+                       intel_gtt_chipset_flush();
+               }
        }
  
+       if (needs_clflush_after)
+               intel_gtt_chipset_flush();
        return ret;
  }
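
The partial_cacheline_write test in the loop above is compact enough to deserve spelling out: (offset | length) masked with (clflush_size - 1) is non-zero exactly when either the start or the end of the write is not cacheline aligned, i.e. when some cacheline would be only partially overwritten and so must carry up-to-date data first. A self-checking example with a 64-byte line size:

#include <assert.h>

/* Non-zero iff a write of 'len' bytes at 'offset' leaves some
 * cacheline only partially overwritten. */
static int partial_cacheline_write(unsigned offset, unsigned len,
				   unsigned clflush_size)
{
	return (offset | len) & (clflush_size - 1);
}

int main(void)
{
	assert(!partial_cacheline_write(0, 128, 64));	/* fully aligned */
	assert(partial_cacheline_write(32, 128, 64));	/* unaligned start */
	assert(partial_cacheline_write(0, 96, 64));	/* unaligned end */
	return 0;
}
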
  
@@@ -892,8 -838,8 +838,8 @@@ i915_gem_pwrite_ioctl(struct drm_devic
                       args->size))
                return -EFAULT;
  
-       ret = fault_in_pages_readable((char __user *)(uintptr_t)args->data_ptr,
-                                     args->size);
+       ret = fault_in_multipages_readable((char __user *)(uintptr_t)args->data_ptr,
+                                          args->size);
        if (ret)
                return -EFAULT;
  
  
        trace_i915_gem_object_pwrite(obj, args->offset, args->size);
  
+       ret = -EFAULT;
        /* We can only do the GTT pwrite on untiled buffers, as otherwise
         * it would end up going through the fenced access, and we'll get
         * different detiling behavior between reading and writing.
        }
  
        if (obj->gtt_space &&
+           obj->cache_level == I915_CACHE_NONE &&
+           obj->map_and_fenceable &&
            obj->base.write_domain != I915_GEM_DOMAIN_CPU) {
-               ret = i915_gem_object_pin(obj, 0, true);
-               if (ret)
-                       goto out;
-               ret = i915_gem_object_set_to_gtt_domain(obj, true);
-               if (ret)
-                       goto out_unpin;
-               ret = i915_gem_object_put_fence(obj);
-               if (ret)
-                       goto out_unpin;
                ret = i915_gem_gtt_pwrite_fast(dev, obj, args, file);
-               if (ret == -EFAULT)
-                       ret = i915_gem_gtt_pwrite_slow(dev, obj, args, file);
- out_unpin:
-               i915_gem_object_unpin(obj);
-               if (ret != -EFAULT)
-                       goto out;
-               /* Fall through to the shmfs paths because the gtt paths might
-                * fail with non-page-backed user pointers (e.g. gtt mappings
-                * when moving data between textures). */
+               /* Note that the gtt paths might fail with non-page-backed user
+                * pointers (e.g. gtt mappings when moving data between
+                * textures). Fall back to the shmem path in that case. */
        }
  
-       ret = i915_gem_object_set_to_cpu_domain(obj, 1);
-       if (ret)
-               goto out;
-       ret = -EFAULT;
-       if (!i915_gem_object_needs_bit17_swizzle(obj))
-               ret = i915_gem_shmem_pwrite_fast(dev, obj, args, file);
        if (ret == -EFAULT)
-               ret = i915_gem_shmem_pwrite_slow(dev, obj, args, file);
+               ret = i915_gem_shmem_pwrite(dev, obj, args, file);
  
  out:
        drm_gem_object_unreference(&obj->base);
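
The pwrite ioctl thus collapses into one dispatcher: seed ret with -EFAULT, try the GTT fast path only when it can possibly work (uncached, mappable, not already in the CPU write domain; tiled buffers were excluded earlier), and route any -EFAULT to the unified shmem path. In outline, with the predicate and helpers as stand-ins rather than the driver's actual functions:

#include <errno.h>

struct obj;	/* opaque for the sketch */

extern int gtt_path_possible(struct obj *o);	/* assumed predicate */
extern int gtt_pwrite_fast(struct obj *o);
extern int shmem_pwrite(struct obj *o);

static int pwrite_dispatch(struct obj *o)
{
	int ret = -EFAULT;

	if (gtt_path_possible(o))
		ret = gtt_pwrite_fast(o);
	/* The gtt path reports -EFAULT for non-page-backed user
	 * pointers; only then do we retry via shmem. */
	if (ret == -EFAULT)
		ret = shmem_pwrite(o);

	return ret;
}
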
@@@ -1153,6 -1075,9 +1075,9 @@@ int i915_gem_fault(struct vm_area_struc
                        goto unlock;
        }
  
+       if (!obj->has_global_gtt_mapping)
+               i915_gem_gtt_bind_object(obj, obj->cache_level);
        if (obj->tiling_mode == I915_TILING_NONE)
                ret = i915_gem_object_put_fence(obj);
        else
@@@ -1472,19 -1397,16 +1397,19 @@@ i915_gem_object_move_to_active(struct d
        list_move_tail(&obj->ring_list, &ring->active_list);
  
        obj->last_rendering_seqno = seqno;
 -      if (obj->fenced_gpu_access) {
 -              struct drm_i915_fence_reg *reg;
 -
 -              BUG_ON(obj->fence_reg == I915_FENCE_REG_NONE);
  
 +      if (obj->fenced_gpu_access) {
                obj->last_fenced_seqno = seqno;
                obj->last_fenced_ring = ring;
  
 -              reg = &dev_priv->fence_regs[obj->fence_reg];
 -              list_move_tail(&reg->lru_list, &dev_priv->mm.fence_list);
 +              /* Bump MRU to take account of the delayed flush */
 +              if (obj->fence_reg != I915_FENCE_REG_NONE) {
 +                      struct drm_i915_fence_reg *reg;
 +
 +                      reg = &dev_priv->fence_regs[obj->fence_reg];
 +                      list_move_tail(&reg->lru_list,
 +                                     &dev_priv->mm.fence_list);
 +              }
        }
  }
  
@@@ -1546,6 -1468,9 +1471,9 @@@ i915_gem_object_truncate(struct drm_i91
        inode = obj->base.filp->f_path.dentry->d_inode;
        shmem_truncate_range(inode, 0, (loff_t)-1);
  
+       if (obj->base.map_list.map)
+               drm_gem_free_mmap_offset(&obj->base);
        obj->madv = __I915_MADV_PURGED;
  }
  
@@@ -1954,6 -1879,8 +1882,8 @@@ i915_wait_request(struct intel_ring_buf
        if (!i915_seqno_passed(ring->get_seqno(ring), seqno)) {
                if (HAS_PCH_SPLIT(ring->dev))
                        ier = I915_READ(DEIER) | I915_READ(GTIER);
+               else if (IS_VALLEYVIEW(ring->dev))
+                       ier = I915_READ(GTIER) | I915_READ(VLV_IER);
                else
                        ier = I915_READ(IER);
                if (!ier) {
@@@ -2100,11 -2027,13 +2030,13 @@@ i915_gem_object_unbind(struct drm_i915_
  
        trace_i915_gem_object_unbind(obj);
  
-       i915_gem_gtt_unbind_object(obj);
+       if (obj->has_global_gtt_mapping)
+               i915_gem_gtt_unbind_object(obj);
        if (obj->has_aliasing_ppgtt_mapping) {
                i915_ppgtt_unbind_object(dev_priv->mm.aliasing_ppgtt, obj);
                obj->has_aliasing_ppgtt_mapping = 0;
        }
+       i915_gem_gtt_finish_object(obj);
  
        i915_gem_object_put_pages_gtt(obj);
  
@@@ -2749,7 -2678,7 +2681,7 @@@ i915_gem_object_bind_to_gtt(struct drm_
                return ret;
        }
  
-       ret = i915_gem_gtt_bind_object(obj);
+       ret = i915_gem_gtt_prepare_object(obj);
        if (ret) {
                i915_gem_object_put_pages_gtt(obj);
                drm_mm_put_block(obj->gtt_space);
                goto search_free;
        }
  
+       if (!dev_priv->mm.aliasing_ppgtt)
+               i915_gem_gtt_bind_object(obj, obj->cache_level);
        list_add_tail(&obj->gtt_list, &dev_priv->mm.gtt_list);
        list_add_tail(&obj->mm_list, &dev_priv->mm.inactive_list);
  
@@@ -2953,7 -2885,8 +2888,8 @@@ int i915_gem_object_set_cache_level(str
                                return ret;
                }
  
-               i915_gem_gtt_rebind_object(obj, cache_level);
+               if (obj->has_global_gtt_mapping)
+                       i915_gem_gtt_bind_object(obj, cache_level);
                if (obj->has_aliasing_ppgtt_mapping)
                        i915_ppgtt_bind_object(dev_priv->mm.aliasing_ppgtt,
                                               obj, cache_level);
@@@ -3082,7 -3015,7 +3018,7 @@@ i915_gem_object_finish_gpu(struct drm_i
   * This function returns when the move is complete, including waiting on
   * flushes to occur.
   */
- static int
+ int
  i915_gem_object_set_to_cpu_domain(struct drm_i915_gem_object *obj, bool write)
  {
        uint32_t old_write_domain, old_read_domains;
  
        i915_gem_object_flush_gtt_write_domain(obj);
  
-       /* If we have a partially-valid cache of the object in the CPU,
-        * finish invalidating it and free the per-page flags.
-        */
-       i915_gem_object_set_to_full_cpu_read_domain(obj);
        old_write_domain = obj->base.write_domain;
        old_read_domains = obj->base.read_domains;
  
        return 0;
  }
  
- /**
-  * Moves the object from a partially CPU read to a full one.
-  *
-  * Note that this only resolves i915_gem_object_set_cpu_read_domain_range(),
-  * and doesn't handle transitioning from !(read_domains & I915_GEM_DOMAIN_CPU).
-  */
- static void
- i915_gem_object_set_to_full_cpu_read_domain(struct drm_i915_gem_object *obj)
- {
-       if (!obj->page_cpu_valid)
-               return;
-       /* If we're partially in the CPU read domain, finish moving it in.
-        */
-       if (obj->base.read_domains & I915_GEM_DOMAIN_CPU) {
-               int i;
-               for (i = 0; i <= (obj->base.size - 1) / PAGE_SIZE; i++) {
-                       if (obj->page_cpu_valid[i])
-                               continue;
-                       drm_clflush_pages(obj->pages + i, 1);
-               }
-       }
-       /* Free the page_cpu_valid mappings which are now stale, whether
-        * or not we've got I915_GEM_DOMAIN_CPU.
-        */
-       kfree(obj->page_cpu_valid);
-       obj->page_cpu_valid = NULL;
- }
- /**
-  * Set the CPU read domain on a range of the object.
-  *
-  * The object ends up with I915_GEM_DOMAIN_CPU in its read flags although it's
-  * not entirely valid.  The page_cpu_valid member of the object flags which
-  * pages have been flushed, and will be respected by
-  * i915_gem_object_set_to_cpu_domain() if it's called on to get a valid mapping
-  * of the whole object.
-  *
-  * This function returns when the move is complete, including waiting on
-  * flushes to occur.
-  */
- static int
- i915_gem_object_set_cpu_read_domain_range(struct drm_i915_gem_object *obj,
-                                         uint64_t offset, uint64_t size)
- {
-       uint32_t old_read_domains;
-       int i, ret;
-       if (offset == 0 && size == obj->base.size)
-               return i915_gem_object_set_to_cpu_domain(obj, 0);
-       ret = i915_gem_object_flush_gpu_write_domain(obj);
-       if (ret)
-               return ret;
-       ret = i915_gem_object_wait_rendering(obj);
-       if (ret)
-               return ret;
-       i915_gem_object_flush_gtt_write_domain(obj);
-       /* If we're already fully in the CPU read domain, we're done. */
-       if (obj->page_cpu_valid == NULL &&
-           (obj->base.read_domains & I915_GEM_DOMAIN_CPU) != 0)
-               return 0;
-       /* Otherwise, create/clear the per-page CPU read domain flag if we're
-        * newly adding I915_GEM_DOMAIN_CPU
-        */
-       if (obj->page_cpu_valid == NULL) {
-               obj->page_cpu_valid = kzalloc(obj->base.size / PAGE_SIZE,
-                                             GFP_KERNEL);
-               if (obj->page_cpu_valid == NULL)
-                       return -ENOMEM;
-       } else if ((obj->base.read_domains & I915_GEM_DOMAIN_CPU) == 0)
-               memset(obj->page_cpu_valid, 0, obj->base.size / PAGE_SIZE);
-       /* Flush the cache on any pages that are still invalid from the CPU's
-        * perspective.
-        */
-       for (i = offset / PAGE_SIZE; i <= (offset + size - 1) / PAGE_SIZE;
-            i++) {
-               if (obj->page_cpu_valid[i])
-                       continue;
-               drm_clflush_pages(obj->pages + i, 1);
-               obj->page_cpu_valid[i] = 1;
-       }
-       /* It should now be out of any other write domains, and we can update
-        * the domain values for our changes.
-        */
-       BUG_ON((obj->base.write_domain & ~I915_GEM_DOMAIN_CPU) != 0);
-       old_read_domains = obj->base.read_domains;
-       obj->base.read_domains |= I915_GEM_DOMAIN_CPU;
-       trace_i915_gem_object_change_domain(obj,
-                                           old_read_domains,
-                                           obj->base.write_domain);
-       return 0;
- }
  /* Throttle our rendering by waiting until the ring has completed our requests
   * emitted over 20 msec ago.
   *
@@@ -3343,6 -3164,9 +3167,9 @@@ i915_gem_object_pin(struct drm_i915_gem
                        return ret;
        }
  
+       if (!obj->has_global_gtt_mapping && map_and_fenceable)
+               i915_gem_gtt_bind_object(obj, obj->cache_level);
        if (obj->pin_count++ == 0) {
                if (!obj->active)
                        list_move_tail(&obj->mm_list,
@@@ -3664,7 -3488,6 +3491,6 @@@ static void i915_gem_free_object_tail(s
        drm_gem_object_release(&obj->base);
        i915_gem_info_remove_obj(dev_priv, obj->base.size);
  
-       kfree(obj->page_cpu_valid);
        kfree(obj->bit_17);
        kfree(obj);
  }
@@@ -3757,32 -3580,12 +3583,32 @@@ void i915_gem_init_ppgtt(struct drm_dev
        drm_i915_private_t *dev_priv = dev->dev_private;
        uint32_t pd_offset;
        struct intel_ring_buffer *ring;
 +      struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
 +      uint32_t __iomem *pd_addr;
 +      uint32_t pd_entry;
        int i;
  
        if (!dev_priv->mm.aliasing_ppgtt)
                return;
  
 -      pd_offset = dev_priv->mm.aliasing_ppgtt->pd_offset;
 +
 +      pd_addr = dev_priv->mm.gtt->gtt + ppgtt->pd_offset/sizeof(uint32_t);
 +      for (i = 0; i < ppgtt->num_pd_entries; i++) {
 +              dma_addr_t pt_addr;
 +
 +              if (dev_priv->mm.gtt->needs_dmar)
 +                      pt_addr = ppgtt->pt_dma_addr[i];
 +              else
 +                      pt_addr = page_to_phys(ppgtt->pt_pages[i]);
 +
 +              pd_entry = GEN6_PDE_ADDR_ENCODE(pt_addr);
 +              pd_entry |= GEN6_PDE_VALID;
 +
 +              writel(pd_entry, pd_addr + i);
 +      }
 +      readl(pd_addr);
 +
 +      pd_offset = ppgtt->pd_offset;
        pd_offset /= 64; /* in cachelines, */
        pd_offset <<= 16;
  
@@@ -266,6 -266,12 +266,12 @@@ eb_destroy(struct eb_objects *eb
        kfree(eb);
  }
  
+ static inline int use_cpu_reloc(struct drm_i915_gem_object *obj)
+ {
+       return (obj->base.write_domain == I915_GEM_DOMAIN_CPU ||
+               obj->cache_level != I915_CACHE_NONE);
+ }
  static int
  i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
                                   struct eb_objects *eb,
  {
        struct drm_device *dev = obj->base.dev;
        struct drm_gem_object *target_obj;
+       struct drm_i915_gem_object *target_i915_obj;
        uint32_t target_offset;
        int ret = -EINVAL;
  
        if (unlikely(target_obj == NULL))
                return -ENOENT;
  
-       target_offset = to_intel_bo(target_obj)->gtt_offset;
+       target_i915_obj = to_intel_bo(target_obj);
+       target_offset = target_i915_obj->gtt_offset;
  
        /* The target buffer should have appeared before us in the
         * exec_object list, so it should have a GTT space bound by now.
                return ret;
        }
  
+       /* We can't wait for rendering with pagefaults disabled */
+       if (obj->active && in_atomic())
+               return -EFAULT;
        reloc->delta += target_offset;
-       if (obj->base.write_domain == I915_GEM_DOMAIN_CPU) {
+       if (use_cpu_reloc(obj)) {
                uint32_t page_offset = reloc->offset & ~PAGE_MASK;
                char *vaddr;
  
+               ret = i915_gem_object_set_to_cpu_domain(obj, 1);
+               if (ret)
+                       return ret;
                vaddr = kmap_atomic(obj->pages[reloc->offset >> PAGE_SHIFT]);
                *(uint32_t *)(vaddr + page_offset) = reloc->delta;
                kunmap_atomic(vaddr);
                uint32_t __iomem *reloc_entry;
                void __iomem *reloc_page;
  
-               /* We can't wait for rendering with pagefaults disabled */
-               if (obj->active && in_atomic())
-                       return -EFAULT;
                ret = i915_gem_object_set_to_gtt_domain(obj, 1);
                if (ret)
                        return ret;
                io_mapping_unmap_atomic(reloc_page);
        }
  
+       /* Sandybridge PPGTT errata: We need a global gtt mapping for MI and
+        * pipe_control writes because the gpu doesn't properly redirect them
+        * through the ppgtt for non_secure batchbuffers. */
+       if (unlikely(IS_GEN6(dev) &&
+           reloc->write_domain == I915_GEM_DOMAIN_INSTRUCTION &&
+           !target_i915_obj->has_global_gtt_mapping)) {
+               i915_gem_gtt_bind_object(target_i915_obj,
+                                        target_i915_obj->cache_level);
+       }
        /* and update the user's relocation entry */
        reloc->presumed_offset = target_offset;
  
@@@ -393,30 -415,46 +415,46 @@@ static in
  i915_gem_execbuffer_relocate_object(struct drm_i915_gem_object *obj,
                                    struct eb_objects *eb)
  {
+ #define N_RELOC(x) ((x) / sizeof(struct drm_i915_gem_relocation_entry))
+       struct drm_i915_gem_relocation_entry stack_reloc[N_RELOC(512)];
        struct drm_i915_gem_relocation_entry __user *user_relocs;
        struct drm_i915_gem_exec_object2 *entry = obj->exec_entry;
-       int i, ret;
+       int remain, ret;
  
        user_relocs = (void __user *)(uintptr_t)entry->relocs_ptr;
-       for (i = 0; i < entry->relocation_count; i++) {
-               struct drm_i915_gem_relocation_entry reloc;
  
-               if (__copy_from_user_inatomic(&reloc,
-                                             user_relocs+i,
-                                             sizeof(reloc)))
+       remain = entry->relocation_count;
+       while (remain) {
+               struct drm_i915_gem_relocation_entry *r = stack_reloc;
+               int count = remain;
+               if (count > ARRAY_SIZE(stack_reloc))
+                       count = ARRAY_SIZE(stack_reloc);
+               remain -= count;
+               if (__copy_from_user_inatomic(r, user_relocs, count*sizeof(r[0])))
                        return -EFAULT;
  
-               ret = i915_gem_execbuffer_relocate_entry(obj, eb, &reloc);
-               if (ret)
-                       return ret;
+               do {
+                       u64 offset = r->presumed_offset;
  
-               if (__copy_to_user_inatomic(&user_relocs[i].presumed_offset,
-                                           &reloc.presumed_offset,
-                                           sizeof(reloc.presumed_offset)))
-                       return -EFAULT;
+                       ret = i915_gem_execbuffer_relocate_entry(obj, eb, r);
+                       if (ret)
+                               return ret;
+                       if (r->presumed_offset != offset &&
+                           __copy_to_user_inatomic(&user_relocs->presumed_offset,
+                                                   &r->presumed_offset,
+                                                   sizeof(r->presumed_offset))) {
+                               return -EFAULT;
+                       }
+                       user_relocs++;
+                       r++;
+               } while (--count);
        }
  
        return 0;
+ #undef N_RELOC
  }
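
Instead of one __copy_from_user_inatomic() per relocation, the loop above slurps entries in bounded chunks through a 512-byte stack buffer and writes presumed_offset back only when it changed. Stripped of the GEM details, the chunking skeleton looks like this (portable C; memcpy stands in for the inatomic user copy, and the entry type is a stand-in):

#include <stddef.h>
#include <string.h>

struct reloc { unsigned long long presumed_offset; /* ... */ };

#define STACK_BYTES 512
#define N_RELOC (STACK_BYTES / sizeof(struct reloc))

extern int process_one(struct reloc *r);	/* assumed per-entry work */

static int process_all(const struct reloc *user_relocs, size_t total)
{
	struct reloc stack[N_RELOC];

	while (total) {
		size_t count = total < N_RELOC ? total : N_RELOC;
		size_t i;

		/* one bulk copy per chunk instead of one per entry */
		memcpy(stack, user_relocs, count * sizeof(stack[0]));
		for (i = 0; i < count; i++) {
			int ret = process_one(&stack[i]);
			if (ret)
				return ret;
		}
		user_relocs += count;
		total -= count;
	}
	return 0;
}

The 512-byte buffer keeps the stack frame small while still amortizing the user-copy overhead over many entries per pass.
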
  
  static int
@@@ -464,6 -502,13 +502,13 @@@ i915_gem_execbuffer_relocate(struct drm
  
  #define  __EXEC_OBJECT_HAS_FENCE (1<<31)
  
+ static int
+ need_reloc_mappable(struct drm_i915_gem_object *obj)
+ {
+       struct drm_i915_gem_exec_object2 *entry = obj->exec_entry;
+       return entry->relocation_count && !use_cpu_reloc(obj);
+ }
  static int
  pin_and_fence_object(struct drm_i915_gem_object *obj,
                     struct intel_ring_buffer *ring)
                has_fenced_gpu_access &&
                entry->flags & EXEC_OBJECT_NEEDS_FENCE &&
                obj->tiling_mode != I915_TILING_NONE;
-       need_mappable =
-               entry->relocation_count ? true : need_fence;
+       need_mappable = need_fence || need_reloc_mappable(obj);
  
        ret = i915_gem_object_pin(obj, entry->alignment, need_mappable);
        if (ret)
                                if (ret)
                                        goto err_unpin;
                        }
 +                      obj->pending_fenced_gpu_access = true;
                }
 -              obj->pending_fenced_gpu_access = need_fence;
        }
  
        entry->offset = obj->gtt_offset;
@@@ -535,8 -579,7 +579,7 @@@ i915_gem_execbuffer_reserve(struct inte
                        has_fenced_gpu_access &&
                        entry->flags & EXEC_OBJECT_NEEDS_FENCE &&
                        obj->tiling_mode != I915_TILING_NONE;
-               need_mappable =
-                       entry->relocation_count ? true : need_fence;
+               need_mappable = need_fence || need_reloc_mappable(obj);
  
                if (need_mappable)
                        list_move(&obj->exec_list, &ordered_objects);
                                has_fenced_gpu_access &&
                                entry->flags & EXEC_OBJECT_NEEDS_FENCE &&
                                obj->tiling_mode != I915_TILING_NONE;
-                       need_mappable =
-                               entry->relocation_count ? true : need_fence;
+                       need_mappable = need_fence || need_reloc_mappable(obj);
  
                        if ((entry->alignment && obj->gtt_offset & (entry->alignment - 1)) ||
                            (need_mappable && !obj->map_and_fenceable))
@@@ -955,7 -997,7 +997,7 @@@ validate_exec_list(struct drm_i915_gem_
                if (!access_ok(VERIFY_WRITE, ptr, length))
                        return -EFAULT;
  
-               if (fault_in_pages_readable(ptr, length))
+               if (fault_in_multipages_readable(ptr, length))
                        return -EFAULT;
        }
  
@@@ -65,7 -65,9 +65,7 @@@ int i915_gem_init_aliasing_ppgtt(struc
  {
        struct drm_i915_private *dev_priv = dev->dev_private;
        struct i915_hw_ppgtt *ppgtt;
 -      uint32_t pd_entry;
        unsigned first_pd_entry_in_global_pt;
 -      uint32_t __iomem *pd_addr;
        int i;
        int ret = -ENOMEM;
  
                        goto err_pt_alloc;
        }
  
 -      pd_addr = dev_priv->mm.gtt->gtt + first_pd_entry_in_global_pt;
        for (i = 0; i < ppgtt->num_pd_entries; i++) {
                dma_addr_t pt_addr;
                if (dev_priv->mm.gtt->needs_dmar) {
                        ppgtt->pt_dma_addr[i] = pt_addr;
                } else
                        pt_addr = page_to_phys(ppgtt->pt_pages[i]);
 -
 -              pd_entry = GEN6_PDE_ADDR_ENCODE(pt_addr);
 -              pd_entry |= GEN6_PDE_VALID;
 -
 -              writel(pd_entry, pd_addr + i);
        }
 -      readl(pd_addr);
  
        ppgtt->scratch_page_dma_addr = dev_priv->mm.gtt->scratch_page_dma;
  
@@@ -346,42 -355,28 +346,28 @@@ void i915_gem_restore_gtt_mappings(stru
  
        list_for_each_entry(obj, &dev_priv->mm.gtt_list, gtt_list) {
                i915_gem_clflush_object(obj);
-               i915_gem_gtt_rebind_object(obj, obj->cache_level);
+               i915_gem_gtt_bind_object(obj, obj->cache_level);
        }
  
        intel_gtt_chipset_flush();
  }
  
- int i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj)
+ int i915_gem_gtt_prepare_object(struct drm_i915_gem_object *obj)
  {
        struct drm_device *dev = obj->base.dev;
        struct drm_i915_private *dev_priv = dev->dev_private;
-       unsigned int agp_type = cache_level_to_agp_type(dev, obj->cache_level);
-       int ret;
  
-       if (dev_priv->mm.gtt->needs_dmar) {
-               ret = intel_gtt_map_memory(obj->pages,
-                                          obj->base.size >> PAGE_SHIFT,
-                                          &obj->sg_list,
-                                          &obj->num_sg);
-               if (ret != 0)
-                       return ret;
-               intel_gtt_insert_sg_entries(obj->sg_list,
-                                           obj->num_sg,
-                                           obj->gtt_space->start >> PAGE_SHIFT,
-                                           agp_type);
-       } else
-               intel_gtt_insert_pages(obj->gtt_space->start >> PAGE_SHIFT,
-                                      obj->base.size >> PAGE_SHIFT,
-                                      obj->pages,
-                                      agp_type);
-       return 0;
+       if (dev_priv->mm.gtt->needs_dmar)
+               return intel_gtt_map_memory(obj->pages,
+                                           obj->base.size >> PAGE_SHIFT,
+                                           &obj->sg_list,
+                                           &obj->num_sg);
+       else
+               return 0;
  }
  
- void i915_gem_gtt_rebind_object(struct drm_i915_gem_object *obj,
-                               enum i915_cache_level cache_level)
+ void i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj,
+                             enum i915_cache_level cache_level)
  {
        struct drm_device *dev = obj->base.dev;
        struct drm_i915_private *dev_priv = dev->dev_private;
                                       obj->base.size >> PAGE_SHIFT,
                                       obj->pages,
                                       agp_type);
+       obj->has_global_gtt_mapping = 1;
  }
  
  void i915_gem_gtt_unbind_object(struct drm_i915_gem_object *obj)
+ {
+       intel_gtt_clear_range(obj->gtt_space->start >> PAGE_SHIFT,
+                             obj->base.size >> PAGE_SHIFT);
+       obj->has_global_gtt_mapping = 0;
+ }
+ void i915_gem_gtt_finish_object(struct drm_i915_gem_object *obj)
  {
        struct drm_device *dev = obj->base.dev;
        struct drm_i915_private *dev_priv = dev->dev_private;
  
        interruptible = do_idling(dev_priv);
  
-       intel_gtt_clear_range(obj->gtt_space->start >> PAGE_SHIFT,
-                             obj->base.size >> PAGE_SHIFT);
        if (obj->sg_list) {
                intel_gtt_unmap_memory(obj->sg_list, obj->num_sg);
                obj->sg_list = NULL;
  
        undo_idling(dev_priv, interruptible);
  }
+ void i915_gem_init_global_gtt(struct drm_device *dev,
+                             unsigned long start,
+                             unsigned long mappable_end,
+                             unsigned long end)
+ {
+       drm_i915_private_t *dev_priv = dev->dev_private;
+       /* Subtract the guard page ... */
+       drm_mm_init(&dev_priv->mm.gtt_space, start, end - start - PAGE_SIZE);
+       dev_priv->mm.gtt_start = start;
+       dev_priv->mm.gtt_mappable_end = mappable_end;
+       dev_priv->mm.gtt_end = end;
+       dev_priv->mm.gtt_total = end - start;
+       dev_priv->mm.mappable_gtt_total = min(end, mappable_end) - start;
+       /* ... but ensure that we clear the entire range. */
+       intel_gtt_clear_range(start / PAGE_SIZE, (end-start) / PAGE_SIZE);
+ }
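
Note the deliberate asymmetry: the drm_mm allocator is sized one page short, so nothing can ever be bound into the very last GTT page (the guard page), while the PTE clear still covers the full range. The arithmetic, checked for a hypothetical 256 MiB GTT:

#include <assert.h>

#define PAGE_SIZE 4096ul

int main(void)
{
	unsigned long start = 0, end = 256ul << 20;	/* 256 MiB GTT */

	unsigned long mm_size = end - start - PAGE_SIZE;	/* allocator */
	unsigned long cleared = (end - start) / PAGE_SIZE;	/* PTEs */

	assert(mm_size / PAGE_SIZE == cleared - 1);	/* one guard page */
	return 0;
}
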
@@@ -27,6 -27,8 +27,8 @@@
  
  #define _PIPE(pipe, a, b) ((a) + (pipe)*((b)-(a)))
  
+ #define _PORT(port, a, b) ((a) + (port)*((b)-(a)))
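
_PORT() is the same linear-stride trick as _PIPE(): given the addresses of instances 0 and 1, instance n sits at a + n*(b - a), so one macro indexes any evenly strided register bank (and silently extrapolates past instance 1). A self-checking example with invented addresses:

#include <assert.h>

#define _PORT(port, a, b) ((a) + (port)*((b)-(a)))

int main(void)
{
	/* hypothetical bank: instance 0 at 0x64000, stride 0x100 */
	assert(_PORT(0, 0x64000, 0x64100) == 0x64000);
	assert(_PORT(1, 0x64000, 0x64100) == 0x64100);
	assert(_PORT(2, 0x64000, 0x64100) == 0x64200);	/* extrapolated */
	return 0;
}
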
  /*
   * The Bridge device's PCI config space has information about the
   * fb aperture size and the amount of pre-reserved memory.
  #define  DEBUG_RESET_RENDER           (1<<8)
  #define  DEBUG_RESET_DISPLAY          (1<<9)
  
+ /*
+  * DPIO - a special bus for various display related registers to hide behind:
+  *  0x800c: m1, m2, n, p1, p2, k dividers
+  *  0x8014: REF and SFR select
+  *  0x8014: N divider, VCO select
+  *  0x801c/3c: core clock bits
+  *  0x8048/68: low pass filter coefficients
+  *  0x8100: fast clock controls
+  */
+ #define DPIO_PKT                      0x2100
+ #define  DPIO_RID                     (0<<24)
+ #define  DPIO_OP_WRITE                        (1<<16)
+ #define  DPIO_OP_READ                 (0<<16)
+ #define  DPIO_PORTID                  (0x12<<8)
+ #define  DPIO_BYTE                    (0xf<<4)
+ #define  DPIO_BUSY                    (1<<0) /* status only */
+ #define DPIO_DATA                     0x2104
+ #define DPIO_REG                      0x2108
+ #define DPIO_CTL                      0x2110
+ #define  DPIO_MODSEL1                 (1<<3) /* if ref clk b == 27 */
+ #define  DPIO_MODSEL0                 (1<<2) /* if ref clk a == 27 */
+ #define  DPIO_SFR_BYPASS              (1<<1)
+ #define  DPIO_RESET                   (1<<0)
+ #define _DPIO_DIV_A                   0x800c
+ #define   DPIO_POST_DIV_SHIFT         (28) /* 3 bits */
+ #define   DPIO_K_SHIFT                        (24) /* 4 bits */
+ #define   DPIO_P1_SHIFT                       (21) /* 3 bits */
+ #define   DPIO_P2_SHIFT                       (16) /* 5 bits */
+ #define   DPIO_N_SHIFT                        (12) /* 4 bits */
+ #define   DPIO_ENABLE_CALIBRATION     (1<<11)
+ #define   DPIO_M1DIV_SHIFT            (8) /* 3 bits */
+ #define   DPIO_M2DIV_MASK             0xff
+ #define _DPIO_DIV_B                   0x802c
+ #define DPIO_DIV(pipe) _PIPE(pipe, _DPIO_DIV_A, _DPIO_DIV_B)
+ #define _DPIO_REFSFR_A                        0x8014
+ #define   DPIO_REFSEL_OVERRIDE                27
+ #define   DPIO_PLL_MODESEL_SHIFT      24 /* 3 bits */
+ #define   DPIO_BIAS_CURRENT_CTL_SHIFT 21 /* 3 bits, always 0x7 */
+ #define   DPIO_PLL_REFCLK_SEL_SHIFT   16 /* 2 bits */
+ #define   DPIO_DRIVER_CTL_SHIFT               12 /* always set to 0x8 */
+ #define   DPIO_CLK_BIAS_CTL_SHIFT     8 /* always set to 0x5 */
+ #define _DPIO_REFSFR_B                        0x8034
+ #define DPIO_REFSFR(pipe) _PIPE(pipe, _DPIO_REFSFR_A, _DPIO_REFSFR_B)
+ #define _DPIO_CORE_CLK_A              0x801c
+ #define _DPIO_CORE_CLK_B              0x803c
+ #define DPIO_CORE_CLK(pipe) _PIPE(pipe, _DPIO_CORE_CLK_A, _DPIO_CORE_CLK_B)
+ #define _DPIO_LFP_COEFF_A             0x8048
+ #define _DPIO_LFP_COEFF_B             0x8068
+ #define DPIO_LFP_COEFF(pipe) _PIPE(pipe, _DPIO_LFP_COEFF_A, _DPIO_LFP_COEFF_B)
+ #define DPIO_FASTCLK_DISABLE          0x8100
  
  /*
   * Fence registers
  #define INSTDONE      0x02090
  #define NOPID         0x02094
  #define HWSTAM                0x02098
+ #define DMA_FADD_I8XX 0x020d0
  
  #define ERROR_GEN6    0x040a0
  
  #define IIR           0x020a4
  #define IMR           0x020a8
  #define ISR           0x020ac
+ #define VLV_IIR_RW    0x182084
+ #define VLV_IER               0x1820a0
+ #define VLV_IIR               0x1820a4
+ #define VLV_IMR               0x1820a8
+ #define VLV_ISR               0x1820ac
  #define   I915_PIPE_CONTROL_NOTIFY_INTERRUPT          (1<<18)
  #define   I915_DISPLAY_PORT_INTERRUPT                 (1<<17)
  #define   I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT  (1<<15)
  #define   ECO_GATING_CX_ONLY  (1<<3)
  #define   ECO_FLIP_DONE               (1<<0)
  
- /* GEN6 interrupt control */
+ #define CACHE_MODE_1          0x7004 /* IVB+ */
+ #define   PIXEL_SUBSPAN_COLLECT_OPT_DISABLE (1<<6)
+ /* GEN6 interrupt control
+  * Note that the per-ring interrupt bits do alias with the global interrupt bits
+  * in GTIMR. */
  #define GEN6_RENDER_HWSTAM    0x2098
  #define GEN6_RENDER_IMR               0x20a8
  #define   GEN6_RENDER_CONTEXT_SWITCH_INTERRUPT                (1 << 8)
  #define   GMBUS_PORT_PANEL    3
  #define   GMBUS_PORT_DPC      4 /* HDMIC */
  #define   GMBUS_PORT_DPB      5 /* SDVO, HDMIB */
                                /* 6 reserved */
- #define   GMBUS_PORT_DPD      7 /* HDMID */
- #define   GMBUS_NUM_PORTS       8
+ #define   GMBUS_PORT_DPD      6 /* HDMID */
+ #define   GMBUS_PORT_RESERVED 7 /* 7 reserved */
+ #define   GMBUS_NUM_PORTS     (GMBUS_PORT_DPD - GMBUS_PORT_SSC + 1)
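(With GMBUS_PORT_SSC being pin 1, as defined outside this hunk, the expression
above works out to 6 usable ports, pins 1 through 6.)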
  #define GMBUS1                        0x5104 /* command/status */
  #define   GMBUS_SW_CLR_INT    (1<<31)
  #define   GMBUS_SW_RDY                (1<<30)
  #define DPLL(pipe) _PIPE(pipe, _DPLL_A, _DPLL_B)
  #define   DPLL_VCO_ENABLE             (1 << 31)
  #define   DPLL_DVO_HIGH_SPEED         (1 << 30)
+ #define   DPLL_EXT_BUFFER_ENABLE_VLV  (1 << 30)
  #define   DPLL_SYNCLOCK_ENABLE                (1 << 29)
+ #define   DPLL_REFA_CLK_ENABLE_VLV    (1 << 29)
  #define   DPLL_VGA_MODE_DIS           (1 << 28)
  #define   DPLLB_MODE_DAC_SERIAL               (1 << 26) /* i915 */
  #define   DPLLB_MODE_LVDS             (2 << 26) /* i915 */
  #define   DPLL_P2_CLOCK_DIV_MASK      0x03000000 /* i915 */
  #define   DPLL_FPA01_P1_POST_DIV_MASK 0x00ff0000 /* i915 */
  #define   DPLL_FPA01_P1_POST_DIV_MASK_PINEVIEW        0x00ff8000 /* Pineview */
+ #define   DPLL_INTEGRATED_CLOCK_VLV   (1<<13)
  
  #define SRX_INDEX             0x3c4
  #define SRX_DATA              0x3c5
  #define   DPLL_MD_VGA_UDI_MULTIPLIER_SHIFT    0
  #define _DPLL_B_MD 0x06020 /* 965+ only */
  #define DPLL_MD(pipe) _PIPE(pipe, _DPLL_A_MD, _DPLL_B_MD)
  #define _FPA0 0x06040
  #define _FPA1 0x06044
  #define _FPB0 0x06048
  #define RAMCLK_GATE_D         0x6210          /* CRL only */
  #define DEUC                  0x6214          /* CRL only */
  
+ #define FW_BLC_SELF_VLV               0x6500
+ #define  FW_CSPWRDWNEN                (1<<15)
  /*
   * Palette regs
   */
  #define   PIPECONF_DISABLE    0
  #define   PIPECONF_DOUBLE_WIDE        (1<<30)
  #define   I965_PIPECONF_ACTIVE        (1<<30)
 +#define   PIPECONF_FRAME_START_DELAY_MASK (3<<27)
  #define   PIPECONF_SINGLE_WIDE        0
  #define   PIPECONF_PIPE_UNLOCKED 0
  #define   PIPECONF_PIPE_LOCKED        (1<<25)
  #define   PIPECONF_DITHER_TYPE_TEMP (3<<2)
  #define _PIPEASTAT            0x70024
  #define   PIPE_FIFO_UNDERRUN_STATUS           (1UL<<31)
+ #define   SPRITE1_FLIPDONE_INT_EN_VLV         (1UL<<30)
  #define   PIPE_CRC_ERROR_ENABLE                       (1UL<<29)
  #define   PIPE_CRC_DONE_ENABLE                        (1UL<<28)
  #define   PIPE_GMBUS_EVENT_ENABLE             (1UL<<27)
+ #define   PLANE_FLIP_DONE_INT_EN_VLV          (1UL<<26)
  #define   PIPE_HOTPLUG_INTERRUPT_ENABLE               (1UL<<26)
  #define   PIPE_VSYNC_INTERRUPT_ENABLE         (1UL<<25)
  #define   PIPE_DISPLAY_LINE_COMPARE_ENABLE    (1UL<<24)
  #define   PIPE_DPST_EVENT_ENABLE              (1UL<<23)
+ #define   SPRITE0_FLIP_DONE_INT_EN_VLV                (1UL<<22)
  #define   PIPE_LEGACY_BLC_EVENT_ENABLE                (1UL<<22)
  #define   PIPE_ODD_FIELD_INTERRUPT_ENABLE     (1UL<<21)
  #define   PIPE_EVEN_FIELD_INTERRUPT_ENABLE    (1UL<<20)
  #define   PIPE_HOTPLUG_TV_INTERRUPT_ENABLE    (1UL<<18) /* pre-965 */
  #define   PIPE_START_VBLANK_INTERRUPT_ENABLE  (1UL<<18) /* 965 or later */
  #define   PIPE_VBLANK_INTERRUPT_ENABLE                (1UL<<17)
+ #define   PIPEA_HBLANK_INT_EN_VLV             (1UL<<16)
  #define   PIPE_OVERLAY_UPDATED_ENABLE         (1UL<<16)
+ #define   SPRITE1_FLIPDONE_INT_STATUS_VLV     (1UL<<15)
+ #define   SPRITE0_FLIPDONE_INT_STATUS_VLV     (1UL<<14)
  #define   PIPE_CRC_ERROR_INTERRUPT_STATUS     (1UL<<13)
  #define   PIPE_CRC_DONE_INTERRUPT_STATUS      (1UL<<12)
  #define   PIPE_GMBUS_INTERRUPT_STATUS         (1UL<<11)
+ #define   PLANE_FLIPDONE_INT_STATUS_VLV               (1UL<<10)
  #define   PIPE_HOTPLUG_INTERRUPT_STATUS               (1UL<<10)
  #define   PIPE_VSYNC_INTERRUPT_STATUS         (1UL<<9)
  #define   PIPE_DISPLAY_LINE_COMPARE_STATUS    (1UL<<8)
  #define PIPEFRAMEPIXEL(pipe)  _PIPE(pipe, _PIPEAFRAMEPIXEL, _PIPEBFRAMEPIXEL)
  #define PIPESTAT(pipe) _PIPE(pipe, _PIPEASTAT, _PIPEBSTAT)
  
+ #define VLV_DPFLIPSTAT                                0x70028
+ #define   PIPEB_LINE_COMPARE_STATUS           (1<<29)
+ #define   PIPEB_HLINE_INT_EN                  (1<<28)
+ #define   PIPEB_VBLANK_INT_EN                 (1<<27)
+ #define   SPRITED_FLIPDONE_INT_EN             (1<<26)
+ #define   SPRITEC_FLIPDONE_INT_EN             (1<<25)
+ #define   PLANEB_FLIPDONE_INT_EN              (1<<24)
+ #define   PIPEA_LINE_COMPARE_STATUS           (1<<21)
+ #define   PIPEA_HLINE_INT_EN                  (1<<20)
+ #define   PIPEA_VBLANK_INT_EN                 (1<<19)
+ #define   SPRITEB_FLIPDONE_INT_EN             (1<<18)
+ #define   SPRITEA_FLIPDONE_INT_EN             (1<<17)
+ #define   PLANEA_FLIPDONE_INT_EN              (1<<16)
+ #define DPINVGTT                              0x7002c /* VLV only */
+ #define   CURSORB_INVALID_GTT_INT_EN          (1<<23)
+ #define   CURSORA_INVALID_GTT_INT_EN          (1<<22)
+ #define   SPRITED_INVALID_GTT_INT_EN          (1<<21)
+ #define   SPRITEC_INVALID_GTT_INT_EN          (1<<20)
+ #define   PLANEB_INVALID_GTT_INT_EN           (1<<19)
+ #define   SPRITEB_INVALID_GTT_INT_EN          (1<<18)
+ #define   SPRITEA_INVALID_GTT_INT_EN          (1<<17)
+ #define   PLANEA_INVALID_GTT_INT_EN           (1<<16)
+ #define   DPINVGTT_EN_MASK                    0xff0000
+ #define   CURSORB_INVALID_GTT_STATUS          (1<<7)
+ #define   CURSORA_INVALID_GTT_STATUS          (1<<6)
+ #define   SPRITED_INVALID_GTT_STATUS          (1<<5)
+ #define   SPRITEC_INVALID_GTT_STATUS          (1<<4)
+ #define   PLANEB_INVALID_GTT_STATUS           (1<<3)
+ #define   SPRITEB_INVALID_GTT_STATUS          (1<<2)
+ #define   SPRITEA_INVALID_GTT_STATUS          (1<<1)
+ #define   PLANEA_INVALID_GTT_STATUS           (1<<0)
+ #define   DPINVGTT_STATUS_MASK                        0xff
  #define DSPARB                        0x70030
  #define   DSPARB_CSTART_MASK  (0x7f << 7)
  #define   DSPARB_CSTART_SHIFT 7
  #define   DSPFW_HPLL_CURSOR_MASK      (0x3f<<16)
  #define   DSPFW_HPLL_SR_MASK          (0x1ff)
  
+ /* drain latency register values */
+ #define DRAIN_LATENCY_PRECISION_32    32
+ #define DRAIN_LATENCY_PRECISION_16    16
+ #define VLV_DDL1                      0x70050
+ #define DDL_CURSORA_PRECISION_32      (1<<31)
+ #define DDL_CURSORA_PRECISION_16      (0<<31)
+ #define DDL_CURSORA_SHIFT             24
+ #define DDL_PLANEA_PRECISION_32               (1<<7)
+ #define DDL_PLANEA_PRECISION_16               (0<<7)
+ #define VLV_DDL2                      0x70054
+ #define DDL_CURSORB_PRECISION_32      (1<<31)
+ #define DDL_CURSORB_PRECISION_16      (0<<31)
+ #define DDL_CURSORB_SHIFT             24
+ #define DDL_PLANEB_PRECISION_32               (1<<7)
+ #define DDL_PLANEB_PRECISION_16               (0<<7)
  /* FIFO watermark sizes etc */
  #define G4X_FIFO_LINE_SIZE    64
  #define I915_FIFO_LINE_SIZE   64
  #define I830_FIFO_LINE_SIZE   32
  
+ #define VALLEYVIEW_FIFO_SIZE  255
  #define G4X_FIFO_SIZE         127
  #define I965_FIFO_SIZE                512
  #define I945_FIFO_SIZE                127
  #define I855GM_FIFO_SIZE      127 /* In cachelines */
  #define I830_FIFO_SIZE                95
  
+ #define VALLEYVIEW_MAX_WM     0xff
  #define G4X_MAX_WM            0x3f
  #define I915_MAX_WM           0x3f
  
  #define PINEVIEW_CURSOR_DFT_WM        0
  #define PINEVIEW_CURSOR_GUARD_WM      5
  
+ #define VALLEYVIEW_CURSOR_MAX_WM 64
  #define I965_CURSOR_FIFO      64
  #define I965_CURSOR_MAX_WM    32
  #define I965_CURSOR_DFT_WM    8
  #define   DVS_FORMAT_RGBX888  (2<<25)
  #define   DVS_FORMAT_RGBX161616       (3<<25)
  #define   DVS_SOURCE_KEY      (1<<22)
 -#define   DVS_RGB_ORDER_RGBX  (1<<20)
 +#define   DVS_RGB_ORDER_XBGR  (1<<20)
  #define   DVS_YUV_BYTE_ORDER_MASK (3<<16)
  #define   DVS_YUV_ORDER_YUYV  (0<<16)
  #define   DVS_YUV_ORDER_UYVY  (1<<16)
  #define DE_PIPEB_VBLANK_IVB           (1<<5)
  #define DE_PIPEA_VBLANK_IVB           (1<<0)
  
+ #define VLV_MASTER_IER                        0x4400c /* Gunit master IER */
+ #define   MASTER_INTERRUPT_ENABLE     (1<<31)
  #define DEISR   0x44000
  #define DEIMR   0x44004
  #define DEIIR   0x44008
  #define DEIER   0x4400c
  
- /* GT interrupt */
- #define GT_PIPE_NOTIFY                (1 << 4)
- #define GT_SYNC_STATUS          (1 << 2)
- #define GT_USER_INTERRUPT       (1 << 0)
- #define GT_BSD_USER_INTERRUPT   (1 << 5)
- #define GT_GEN6_BSD_USER_INTERRUPT    (1 << 12)
- #define GT_BLT_USER_INTERRUPT (1 << 22)
+ /* GT interrupt.
+  * Note that for gen6+ the ring-specific interrupt bits do alias with the
+  * corresponding bits in the per-ring interrupt control registers. */
+ #define GT_GEN6_BLT_FLUSHDW_NOTIFY_INTERRUPT  (1 << 26)
+ #define GT_GEN6_BLT_CS_ERROR_INTERRUPT                (1 << 25)
+ #define GT_GEN6_BLT_USER_INTERRUPT            (1 << 22)
+ #define GT_GEN6_BSD_CS_ERROR_INTERRUPT                (1 << 15)
+ #define GT_GEN6_BSD_USER_INTERRUPT            (1 << 12)
+ #define GT_BSD_USER_INTERRUPT                 (1 << 5) /* ilk only */
+ #define GT_GEN7_L3_PARITY_ERROR_INTERRUPT     (1 << 5)
+ #define GT_PIPE_NOTIFY                                (1 << 4)
+ #define GT_RENDER_CS_ERROR_INTERRUPT          (1 << 3)
+ #define GT_SYNC_STATUS                                (1 << 2)
+ #define GT_USER_INTERRUPT                     (1 << 0)
  
  #define GTISR   0x44010
  #define GTIMR   0x44014
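Because the ring bits alias, enabling a ring's user interrupt means clearing
the same bit position in both the per-ring IMR and the global GTIMR. A minimal
sketch of such an enable path, assuming the driver's gt_irq_mask bookkeeping
field (the gen6_enable_render_irq() helper is hypothetical and locking is
elided):

static void gen6_enable_render_irq(struct drm_i915_private *dev_priv)
{
	/* unmask in the render ring's own IMR... */
	I915_WRITE(GEN6_RENDER_IMR, ~GT_USER_INTERRUPT);
	/* ...and clear the aliased bit in the global GT mask */
	dev_priv->gt_irq_mask &= ~GT_USER_INTERRUPT;
	I915_WRITE(GTIMR, dev_priv->gt_irq_mask);
	POSTING_READ(GTIMR);
}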
  #define TVIDEO_DIP_DATA(pipe) _PIPE(pipe, _VIDEO_DIP_DATA_A, _VIDEO_DIP_DATA_B)
  #define TVIDEO_DIP_GCP(pipe) _PIPE(pipe, _VIDEO_DIP_GCP_A, _VIDEO_DIP_GCP_B)
  
+ #define VLV_VIDEO_DIP_CTL_A           0x60220
+ #define VLV_VIDEO_DIP_DATA_A          0x60208
+ #define VLV_VIDEO_DIP_GDCP_PAYLOAD_A  0x60210
+ #define VLV_VIDEO_DIP_CTL_B           0x61170
+ #define VLV_VIDEO_DIP_DATA_B          0x61174
+ #define VLV_VIDEO_DIP_GDCP_PAYLOAD_B  0x61178
+ #define VLV_TVIDEO_DIP_CTL(pipe) \
+        _PIPE(pipe, VLV_VIDEO_DIP_CTL_A, VLV_VIDEO_DIP_CTL_B)
+ #define VLV_TVIDEO_DIP_DATA(pipe) \
+        _PIPE(pipe, VLV_VIDEO_DIP_DATA_A, VLV_VIDEO_DIP_DATA_B)
+ #define VLV_TVIDEO_DIP_GCP(pipe) \
+       _PIPE(pipe, VLV_VIDEO_DIP_GDCP_PAYLOAD_A, VLV_VIDEO_DIP_GDCP_PAYLOAD_B)
  #define _TRANS_HTOTAL_B          0xe1000
  #define _TRANS_HBLANK_B          0xe1004
  #define _TRANS_HSYNC_B           0xe1008
  #define  ADPA_CRT_HOTPLUG_FORCE_TRIGGER (1<<16)
  
  /* or SDVOB */
+ #define VLV_HDMIB 0x61140
  #define HDMIB   0xe1140
  #define  PORT_ENABLE    (1 << 31)
  #define  TRANSCODER(pipe)       ((pipe) << 30)
  #define  EDP_LINK_TRAIN_VOL_EMP_MASK_IVB      (0x3f<<22)
  
  #define  FORCEWAKE                            0xA18C
+ #define  FORCEWAKE_VLV                                0x1300b0
+ #define  FORCEWAKE_ACK_VLV                    0x1300b4
  #define  FORCEWAKE_ACK                                0x130090
  #define  FORCEWAKE_MT                         0xa188 /* multi-threaded */
  #define  FORCEWAKE_MT_ACK                     0x130040
  #define   AUD_CONFIG_PIXEL_CLOCK_HDMI         (0xf << 16)
  #define   AUD_CONFIG_DISABLE_NCTS             (1 << 3)
  
+ /* HSW Power Wells */
+ #define HSW_PWR_WELL_CTL1             0x45400         /* BIOS */
+ #define HSW_PWR_WELL_CTL2             0x45404         /* Driver */
+ #define HSW_PWR_WELL_CTL3             0x45408         /* KVMR */
+ #define HSW_PWR_WELL_CTL4             0x4540C         /* Debug */
+ #define   HSW_PWR_WELL_ENABLE                         (1<<31)
+ #define   HSW_PWR_WELL_STATE                          (1<<30)
+ #define HSW_PWR_WELL_CTL5             0x45410
+ #define   HSW_PWR_WELL_ENABLE_SINGLE_STEP     (1<<31)
+ #define   HSW_PWR_WELL_PWR_GATE_OVERRIDE      (1<<20)
+ #define   HSW_PWR_WELL_FORCE_ON                               (1<<19)
+ #define HSW_PWR_WELL_CTL6             0x45414
+ /* Per-pipe DDI Function Control */
+ #define PIPE_DDI_FUNC_CTL_A                   0x60400
+ #define PIPE_DDI_FUNC_CTL_B                   0x61400
+ #define PIPE_DDI_FUNC_CTL_C                   0x62400
+ #define PIPE_DDI_FUNC_CTL_EDP         0x6F400
+ #define DDI_FUNC_CTL(pipe) _PIPE(pipe, \
+                                       PIPE_DDI_FUNC_CTL_A, \
+                                       PIPE_DDI_FUNC_CTL_B)
+ #define  PIPE_DDI_FUNC_ENABLE         (1<<31)
+ /* Those bits are ignored by pipe EDP since it can only connect to DDI A */
+ #define  PIPE_DDI_PORT_MASK                           (0xf<<28)
+ #define  PIPE_DDI_SELECT_PORT(x)              ((x)<<28)
+ #define  PIPE_DDI_MODE_SELECT_HDMI            (0<<24)
+ #define  PIPE_DDI_MODE_SELECT_DVI             (1<<24)
+ #define  PIPE_DDI_MODE_SELECT_DP_SST  (2<<24)
+ #define  PIPE_DDI_MODE_SELECT_DP_MST  (3<<24)
+ #define  PIPE_DDI_MODE_SELECT_FDI             (4<<24)
+ #define  PIPE_DDI_BPC_8                                       (0<<20)
+ #define  PIPE_DDI_BPC_10                              (1<<20)
+ #define  PIPE_DDI_BPC_6                                       (2<<20)
+ #define  PIPE_DDI_BPC_12                              (3<<20)
+ #define  PIPE_DDI_BFI_ENABLE                  (1<<4)
+ #define  PIPE_DDI_PORT_WIDTH_X1                       (0<<1)
+ #define  PIPE_DDI_PORT_WIDTH_X2                       (1<<1)
+ #define  PIPE_DDI_PORT_WIDTH_X4                       (3<<1)
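The fields above combine into a single per-pipe routing word. A hypothetical
composition for a pipe driving HDMI at 8 bpc over a x4 link follows; the
helper name and the port index used for DDI B are assumptions, not code from
this series:

static void hsw_route_pipe_to_ddi_b(struct drm_i915_private *dev_priv,
				    enum pipe pipe)
{
	u32 temp = PIPE_DDI_FUNC_ENABLE |
		   PIPE_DDI_SELECT_PORT(1) | /* assuming DDI B == index 1 */
		   PIPE_DDI_MODE_SELECT_HDMI |
		   PIPE_DDI_BPC_8 |
		   PIPE_DDI_PORT_WIDTH_X4;

	I915_WRITE(DDI_FUNC_CTL(pipe), temp);
}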
+ /* DisplayPort Transport Control */
+ #define DP_TP_CTL_A                   0x64040
+ #define DP_TP_CTL_B                   0x64140
+ #define DP_TP_CTL(port) _PORT(port, \
+                                       DP_TP_CTL_A, \
+                                       DP_TP_CTL_B)
+ #define  DP_TP_CTL_ENABLE             (1<<31)
+ #define  DP_TP_CTL_MODE_SST   (0<<27)
+ #define  DP_TP_CTL_MODE_MST   (1<<27)
+ #define  DP_TP_CTL_ENHANCED_FRAME_ENABLE      (1<<18)
+ #define  DP_TP_CTL_FDI_AUTOTRAIN      (1<<15)
+ #define  DP_TP_CTL_LINK_TRAIN_MASK            (7<<8)
+ #define  DP_TP_CTL_LINK_TRAIN_PAT1            (0<<8)
+ #define  DP_TP_CTL_LINK_TRAIN_PAT2            (1<<8)
+ #define  DP_TP_CTL_LINK_TRAIN_NORMAL  (3<<8)
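The training patterns are stepped through in order before the link carries
live pixels. A simplified sketch of that progression (illustrative only; real
training also polls sink-side DPCD status between steps, which is elided
here):

static void hsw_dp_link_train_sketch(struct drm_i915_private *dev_priv,
				     int port)
{
	u32 val = DP_TP_CTL_ENABLE | DP_TP_CTL_MODE_SST |
		  DP_TP_CTL_LINK_TRAIN_PAT1;

	I915_WRITE(DP_TP_CTL(port), val);	/* pattern 1: clock recovery */
	val = (val & ~DP_TP_CTL_LINK_TRAIN_MASK) | DP_TP_CTL_LINK_TRAIN_PAT2;
	I915_WRITE(DP_TP_CTL(port), val);	/* pattern 2: channel eq */
	val = (val & ~DP_TP_CTL_LINK_TRAIN_MASK) | DP_TP_CTL_LINK_TRAIN_NORMAL;
	I915_WRITE(DP_TP_CTL(port), val);	/* normal: send frames */
}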
+ /* DisplayPort Transport Status */
+ #define DP_TP_STATUS_A                        0x64044
+ #define DP_TP_STATUS_B                        0x64144
+ #define DP_TP_STATUS(port) _PORT(port, \
+                                       DP_TP_STATUS_A, \
+                                       DP_TP_STATUS_B)
+ #define  DP_TP_STATUS_AUTOTRAIN_DONE  (1<<12)
+ /* DDI Buffer Control */
+ #define DDI_BUF_CTL_A                         0x64000
+ #define DDI_BUF_CTL_B                         0x64100
+ #define DDI_BUF_CTL(port) _PORT(port, \
+                                       DDI_BUF_CTL_A, \
+                                       DDI_BUF_CTL_B)
+ #define  DDI_BUF_CTL_ENABLE                           (1<<31)
+ #define  DDI_BUF_EMP_400MV_0DB_HSW            (0<<24)   /* Sel0 */
+ #define  DDI_BUF_EMP_400MV_3_5DB_HSW  (1<<24)   /* Sel1 */
+ #define  DDI_BUF_EMP_400MV_6DB_HSW            (2<<24)   /* Sel2 */
+ #define  DDI_BUF_EMP_400MV_9_5DB_HSW  (3<<24)   /* Sel3 */
+ #define  DDI_BUF_EMP_600MV_0DB_HSW            (4<<24)   /* Sel4 */
+ #define  DDI_BUF_EMP_600MV_3_5DB_HSW  (5<<24)   /* Sel5 */
+ #define  DDI_BUF_EMP_600MV_6DB_HSW            (6<<24)   /* Sel6 */
+ #define  DDI_BUF_EMP_800MV_0DB_HSW            (7<<24)   /* Sel7 */
+ #define  DDI_BUF_EMP_800MV_3_5DB_HSW  (8<<24)   /* Sel8 */
+ #define  DDI_BUF_EMP_MASK                             (0xf<<24)
+ #define  DDI_BUF_IS_IDLE                              (1<<7)
+ #define  DDI_PORT_WIDTH_X1                            (0<<1)
+ #define  DDI_PORT_WIDTH_X2                            (1<<1)
+ #define  DDI_PORT_WIDTH_X4                            (3<<1)
+ #define  DDI_INIT_DISPLAY_DETECTED            (1<<0)
+ /* DDI Buffer Translations */
+ #define DDI_BUF_TRANS_A                               0x64E00
+ #define DDI_BUF_TRANS_B                               0x64E60
+ #define DDI_BUF_TRANS(port) _PORT(port, \
+                                       DDI_BUF_TRANS_A, \
+                                       DDI_BUF_TRANS_B)
+ /* Sideband Interface (SBI) is programmed indirectly, via
+  * SBI_ADDR, which contains the register offset; and SBI_DATA,
+  * which contains the payload */
+ #define SBI_ADDR                              0xC6000
+ #define SBI_DATA                              0xC6004
+ #define SBI_CTL_STAT                  0xC6008
+ #define  SBI_CTL_OP_CRRD              (0x6<<8)
+ #define  SBI_CTL_OP_CRWR              (0x7<<8)
+ #define  SBI_RESPONSE_FAIL            (0x1<<1)
+ #define  SBI_RESPONSE_SUCCESS (0x0<<1)
+ #define  SBI_BUSY                             (0x1<<0)
+ #define  SBI_READY                            (0x0<<0)
+ /* SBI offsets */
+ #define  SBI_SSCDIVINTPHASE6          0x0600
+ #define   SBI_SSCDIVINTPHASE_DIVSEL_MASK      ((0x7f)<<1)
+ #define   SBI_SSCDIVINTPHASE_DIVSEL(x)                ((x)<<1)
+ #define   SBI_SSCDIVINTPHASE_INCVAL_MASK      ((0x7f)<<8)
+ #define   SBI_SSCDIVINTPHASE_INCVAL(x)                ((x)<<8)
+ #define   SBI_SSCDIVINTPHASE_DIR(x)                   ((x)<<15)
+ #define   SBI_SSCDIVINTPHASE_PROPAGATE                (1<<0)
+ #define  SBI_SSCCTL                                   0x020c
+ #define  SBI_SSCCTL6                          0x060C
+ #define   SBI_SSCCTL_DISABLE          (1<<0)
+ #define  SBI_SSCAUXDIV6                               0x0610
+ #define   SBI_SSCAUXDIV_FINALDIV2SEL(x)               ((x)<<4)
+ #define  SBI_DBUFF0                                   0x2a00
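The handshake behind these bits: wait for SBI_BUSY to clear, write the target
offset into the high half of SBI_ADDR, kick off a CRRD (read) or CRWR (write)
op via SBI_CTL_STAT, wait for completion, then move the payload through
SBI_DATA. A minimal read-side sketch (the intel_sbi_read() helper is
hypothetical here; locking and precise timeouts are elided):

static u32 intel_sbi_read(struct drm_i915_private *dev_priv, u16 reg)
{
	if (wait_for((I915_READ(SBI_CTL_STAT) & SBI_BUSY) == 0, 100)) {
		DRM_ERROR("timeout waiting for SBI to become ready\n");
		return 0;
	}

	I915_WRITE(SBI_ADDR, (u32)reg << 16);	/* e.g. SBI_SSCCTL6 */
	I915_WRITE(SBI_CTL_STAT, SBI_CTL_OP_CRRD | SBI_BUSY);

	if (wait_for((I915_READ(SBI_CTL_STAT) &
		      (SBI_BUSY | SBI_RESPONSE_FAIL)) == 0, 100)) {
		DRM_ERROR("SBI read did not complete\n");
		return 0;
	}

	return I915_READ(SBI_DATA);
}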
+ /* LPT PIXCLK_GATE */
+ #define PIXCLK_GATE                           0xC6020
+ #define  PIXCLK_GATE_UNGATE           (1<<0)
+ #define  PIXCLK_GATE_GATE             (0<<0)
+ /* SPLL */
+ #define SPLL_CTL                              0x46020
+ #define  SPLL_PLL_ENABLE              (1<<31)
+ #define  SPLL_PLL_SCC                 (1<<28)
+ #define  SPLL_PLL_NON_SCC             (2<<28)
+ #define  SPLL_PLL_FREQ_810MHz (0<<26)
+ #define  SPLL_PLL_FREQ_1350MHz        (1<<26)
+ /* WRPLL */
+ #define WRPLL_CTL1                            0x46040
+ #define WRPLL_CTL2                            0x46060
+ #define  WRPLL_PLL_ENABLE                             (1<<31)
+ #define  WRPLL_PLL_SELECT_SSC                 (0x01<<28)
+ #define  WRPLL_PLL_SELECT_NON_SCC             (0x02<<28)
+ #define  WRPLL_PLL_SELECT_LCPLL_2700  (0x03<<28)
+ /* Port clock selection */
+ #define PORT_CLK_SEL_A                        0x46100
+ #define PORT_CLK_SEL_B                        0x46104
+ #define PORT_CLK_SEL(port) _PORT(port, \
+                                       PORT_CLK_SEL_A, \
+                                       PORT_CLK_SEL_B)
+ #define  PORT_CLK_SEL_LCPLL_2700      (0<<29)
+ #define  PORT_CLK_SEL_LCPLL_1350      (1<<29)
+ #define  PORT_CLK_SEL_LCPLL_810               (2<<29)
+ #define  PORT_CLK_SEL_SPLL                    (3<<29)
+ #define  PORT_CLK_SEL_WRPLL1          (4<<29)
+ #define  PORT_CLK_SEL_WRPLL2          (5<<29)
+ /* Pipe clock selection */
+ #define PIPE_CLK_SEL_A                        0x46140
+ #define PIPE_CLK_SEL_B                        0x46144
+ #define PIPE_CLK_SEL(pipe) _PIPE(pipe, \
+                                       PIPE_CLK_SEL_A, \
+                                       PIPE_CLK_SEL_B)
+ /* For each pipe, we need to select the corresponding port clock */
+ #define  PIPE_CLK_SEL_DISABLED        (0x0<<29)
+ #define  PIPE_CLK_SEL_PORT(x) (((x)+1)<<29)
+ /* LCPLL Control */
+ #define LCPLL_CTL                             0x130040
+ #define  LCPLL_PLL_DISABLE            (1<<31)
+ #define  LCPLL_PLL_LOCK                       (1<<30)
+ #define  LCPLL_CD_CLOCK_DISABLE       (1<<25)
+ #define  LCPLL_CD2X_CLOCK_DISABLE     (1<<23)
+ /* Pipe WM_LINETIME - watermark line time */
+ #define PIPE_WM_LINETIME_A            0x45270
+ #define PIPE_WM_LINETIME_B            0x45274
+ #define PIPE_WM_LINETIME(pipe) _PIPE(pipe, \
+                                       PIPE_WM_LINETIME_A, \
+                                       PIPE_WM_LINETIME_B)
+ #define   PIPE_WM_LINETIME_MASK               (0x1ff)
+ #define   PIPE_WM_LINETIME_TIME(x)                    ((x))
+ #define   PIPE_WM_LINETIME_IPS_LINETIME_MASK  (0x1ff<<16)
+ #define   PIPE_WM_LINETIME_IPS_LINETIME(x)            ((x)<<16)
+ /* SFUSE_STRAP */
+ #define SFUSE_STRAP                           0xc2014
+ #define  SFUSE_STRAP_DDIB_DETECTED    (1<<2)
+ #define  SFUSE_STRAP_DDIC_DETECTED    (1<<1)
+ #define  SFUSE_STRAP_DDID_DETECTED    (1<<0)
  #endif /* _I915_REG_H_ */
@@@ -24,7 -24,6 +24,7 @@@
   *    Eric Anholt <eric@anholt.net>
   *
   */
 +#include <linux/dmi.h>
  #include <drm/drm_dp_helper.h>
  #include "drmP.h"
  #include "drm.h"
@@@ -174,6 -173,28 +174,28 @@@ get_lvds_dvo_timing(const struct bdb_lv
        return (struct lvds_dvo_timing *)(entry + dvo_timing_offset);
  }
  
+ /* get lvds_fp_timing entry
+  * this function may return NULL if the corresponding entry is invalid
+  */
+ static const struct lvds_fp_timing *
+ get_lvds_fp_timing(const struct bdb_header *bdb,
+                  const struct bdb_lvds_lfp_data *data,
+                  const struct bdb_lvds_lfp_data_ptrs *ptrs,
+                  int index)
+ {
+       size_t data_ofs = (const u8 *)data - (const u8 *)bdb;
+       u16 data_size = ((const u16 *)data)[-1]; /* stored in header */
+       size_t ofs;
+       if (index >= ARRAY_SIZE(ptrs->ptr))
+               return NULL;
+       ofs = ptrs->ptr[index].fp_timing_offset;
+       if (ofs < data_ofs ||
+           ofs + sizeof(struct lvds_fp_timing) > data_ofs + data_size)
+               return NULL;
+       return (const struct lvds_fp_timing *)((const u8 *)bdb + ofs);
+ }
  /* Try to find integrated panel data */
  static void
  parse_lfp_panel_data(struct drm_i915_private *dev_priv,
        const struct bdb_lvds_lfp_data *lvds_lfp_data;
        const struct bdb_lvds_lfp_data_ptrs *lvds_lfp_data_ptrs;
        const struct lvds_dvo_timing *panel_dvo_timing;
+       const struct lvds_fp_timing *fp_timing;
        struct drm_display_mode *panel_fixed_mode;
        int i, downclock;
  
                              "Normal Clock %dKHz, downclock %dKHz\n",
                              panel_fixed_mode->clock, 10*downclock);
        }
+       fp_timing = get_lvds_fp_timing(bdb, lvds_lfp_data,
+                                      lvds_lfp_data_ptrs,
+                                      lvds_options->panel_type);
+       if (fp_timing) {
+               /* check the resolution, just to be sure */
+               if (fp_timing->x_res == panel_fixed_mode->hdisplay &&
+                   fp_timing->y_res == panel_fixed_mode->vdisplay) {
+                       dev_priv->bios_lvds_val = fp_timing->lvds_reg_val;
+                       DRM_DEBUG_KMS("VBT initial LVDS value %x\n",
+                                     dev_priv->bios_lvds_val);
+               }
+       }
  }
  
  /* Try to find sdvo panel data */
@@@ -256,6 -291,11 +292,11 @@@ parse_sdvo_panel_data(struct drm_i915_p
        int index;
  
        index = i915_vbt_sdvo_panel_type;
+       if (index == -2) {
+               DRM_DEBUG_KMS("Ignore SDVO panel mode from BIOS VBT tables.\n");
+               return;
+       }
        if (index == -1) {
                struct bdb_sdvo_lvds_options *sdvo_lvds_options;
  
@@@ -332,11 -372,11 +373,11 @@@ parse_general_definitions(struct drm_i9
                if (block_size >= sizeof(*general)) {
                        int bus_pin = general->crt_ddc_gmbus_pin;
                        DRM_DEBUG_KMS("crt_ddc_bus_pin: %d\n", bus_pin);
-                       if (bus_pin >= 1 && bus_pin <= 6)
+                       if (intel_gmbus_is_port_valid(bus_pin))
                                dev_priv->crt_ddc_pin = bus_pin;
                } else {
                        DRM_DEBUG_KMS("BDB_GD too small (%d). Invalid.\n",
-                                 block_size);
+                                     block_size);
                }
        }
  }
@@@ -622,26 -662,6 +663,26 @@@ init_vbt_defaults(struct drm_i915_priva
        dev_priv->edp.bpp = 18;
  }
  
 +static int __init intel_no_opregion_vbt_callback(const struct dmi_system_id *id)
 +{
 +      DRM_DEBUG_KMS("Falling back to manually reading VBT from "
 +                    "VBIOS ROM for %s\n",
 +                    id->ident);
 +      return 1;
 +}
 +
 +static const struct dmi_system_id intel_no_opregion_vbt[] = {
 +      {
 +              .callback = intel_no_opregion_vbt_callback,
 +              .ident = "ThinkCentre A57",
 +              .matches = {
 +                      DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
 +                      DMI_MATCH(DMI_PRODUCT_NAME, "97027RG"),
 +              },
 +      },
 +      { }
 +};
 +
  /**
   * intel_parse_bios - find VBT and initialize settings from the BIOS
   * @dev: DRM device
@@@ -662,7 -682,7 +703,7 @@@ intel_parse_bios(struct drm_device *dev
        init_vbt_defaults(dev_priv);
  
        /* XXX Should this validation be moved to intel_opregion.c? */
 -      if (dev_priv->opregion.vbt) {
 +      if (!dmi_check_system(intel_no_opregion_vbt) && dev_priv->opregion.vbt) {
                struct vbt_header *vbt = dev_priv->opregion.vbt;
                if (memcmp(vbt->signature, "$VBT", 4) == 0) {
                        DRM_DEBUG_KMS("Using VBT from OpRegion: %20s\n",
@@@ -24,6 -24,7 +24,7 @@@
   *    Eric Anholt <eric@anholt.net>
   */
  
+ #include <linux/dmi.h>
  #include <linux/cpufreq.h>
  #include <linux/module.h>
  #include <linux/input.h>
@@@ -360,6 -361,110 +361,110 @@@ static const intel_limit_t intel_limits
        .find_pll = intel_find_pll_ironlake_dp,
  };
  
+ u32 intel_dpio_read(struct drm_i915_private *dev_priv, int reg)
+ {
+       unsigned long flags;
+       u32 val = 0;
+       spin_lock_irqsave(&dev_priv->dpio_lock, flags);
+       if (wait_for_atomic_us((I915_READ(DPIO_PKT) & DPIO_BUSY) == 0, 100)) {
+               DRM_ERROR("DPIO idle wait timed out\n");
+               goto out_unlock;
+       }
+       I915_WRITE(DPIO_REG, reg);
+       I915_WRITE(DPIO_PKT, DPIO_RID | DPIO_OP_READ | DPIO_PORTID |
+                  DPIO_BYTE);
+       if (wait_for_atomic_us((I915_READ(DPIO_PKT) & DPIO_BUSY) == 0, 100)) {
+               DRM_ERROR("DPIO read wait timed out\n");
+               goto out_unlock;
+       }
+       val = I915_READ(DPIO_DATA);
+ out_unlock:
+       spin_unlock_irqrestore(&dev_priv->dpio_lock, flags);
+       return val;
+ }
+ static void intel_dpio_write(struct drm_i915_private *dev_priv, int reg,
+                            u32 val)
+ {
+       unsigned long flags;
+       spin_lock_irqsave(&dev_priv->dpio_lock, flags);
+       if (wait_for_atomic_us((I915_READ(DPIO_PKT) & DPIO_BUSY) == 0, 100)) {
+               DRM_ERROR("DPIO idle wait timed out\n");
+               goto out_unlock;
+       }
+       I915_WRITE(DPIO_DATA, val);
+       I915_WRITE(DPIO_REG, reg);
+       I915_WRITE(DPIO_PKT, DPIO_RID | DPIO_OP_WRITE | DPIO_PORTID |
+                  DPIO_BYTE);
+       if (wait_for_atomic_us((I915_READ(DPIO_PKT) & DPIO_BUSY) == 0, 100))
+               DRM_ERROR("DPIO write wait timed out\n");
+ out_unlock:
+       spin_unlock_irqrestore(&dev_priv->dpio_lock, flags);
+ }
+ static void vlv_init_dpio(struct drm_device *dev)
+ {
+       struct drm_i915_private *dev_priv = dev->dev_private;
+       /* Reset the DPIO config */
+       I915_WRITE(DPIO_CTL, 0);
+       POSTING_READ(DPIO_CTL);
+       I915_WRITE(DPIO_CTL, 1);
+       POSTING_READ(DPIO_CTL);
+ }
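Callers would then reach the hidden PLL registers through these accessors
rather than plain MMIO. A hypothetical fragment, not code from this series:

static void vlv_enable_pll_calibration(struct drm_i915_private *dev_priv,
				       enum pipe pipe)
{
	u32 div = intel_dpio_read(dev_priv, DPIO_DIV(pipe));

	intel_dpio_write(dev_priv, DPIO_DIV(pipe),
			 div | DPIO_ENABLE_CALIBRATION);
}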
+ static int intel_dual_link_lvds_callback(const struct dmi_system_id *id)
+ {
+       DRM_INFO("Forcing lvds to dual link mode on %s\n", id->ident);
+       return 1;
+ }
+ static const struct dmi_system_id intel_dual_link_lvds[] = {
+       {
+               .callback = intel_dual_link_lvds_callback,
+               .ident = "Apple MacBook Pro (Core i5/i7 Series)",
+               .matches = {
+                       DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
+                       DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro8,2"),
+               },
+       },
+       { }     /* terminating entry */
+ };
+ static bool is_dual_link_lvds(struct drm_i915_private *dev_priv,
+                             unsigned int reg)
+ {
+       unsigned int val;
+       /* use the module option value if specified */
+       if (i915_lvds_channel_mode > 0)
+               return i915_lvds_channel_mode == 2;
+       if (dmi_check_system(intel_dual_link_lvds))
+               return true;
+       if (dev_priv->lvds_val)
+               val = dev_priv->lvds_val;
+       else {
+               /* BIOS should set the proper LVDS register value at boot, but
+                * in reality, it doesn't set the value when the lid is closed;
+                * we need to check "the value to be set" in VBT when LVDS
+                * register is uninitialized.
+                */
+               val = I915_READ(reg);
+               if (!(val & ~LVDS_DETECTED))
+                       val = dev_priv->bios_lvds_val;
+               dev_priv->lvds_val = val;
+       }
+       return (val & LVDS_CLKB_POWER_MASK) == LVDS_CLKB_POWER_UP;
+ }
  static const intel_limit_t *intel_ironlake_limit(struct drm_crtc *crtc,
                                                int refclk)
  {
        const intel_limit_t *limit;
  
        if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
-               if ((I915_READ(PCH_LVDS) & LVDS_CLKB_POWER_MASK) ==
-                   LVDS_CLKB_POWER_UP) {
+               if (is_dual_link_lvds(dev_priv, PCH_LVDS)) {
                        /* LVDS dual channel */
                        if (refclk == 100000)
                                limit = &intel_limits_ironlake_dual_lvds_100m;
@@@ -397,8 -501,7 +501,7 @@@ static const intel_limit_t *intel_g4x_l
        const intel_limit_t *limit;
  
        if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
-               if ((I915_READ(LVDS) & LVDS_CLKB_POWER_MASK) ==
-                   LVDS_CLKB_POWER_UP)
+               if (is_dual_link_lvds(dev_priv, LVDS))
                        /* LVDS with dual channel */
                        limit = &intel_limits_g4x_dual_channel_lvds;
                else
@@@ -536,8 -639,7 +639,7 @@@ intel_find_best_PLL(const intel_limit_
                 * reliably set up different single/dual channel state, if we
                 * even can.
                 */
-               if ((I915_READ(LVDS) & LVDS_CLKB_POWER_MASK) ==
-                   LVDS_CLKB_POWER_UP)
+               if (is_dual_link_lvds(dev_priv, LVDS))
                        clock.p2 = limit->p2.p2_fast;
                else
                        clock.p2 = limit->p2.p2_slow;
@@@ -2537,7 -2639,7 +2639,7 @@@ static void gen6_fdi_link_train(struct 
        struct drm_i915_private *dev_priv = dev->dev_private;
        struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
        int pipe = intel_crtc->pipe;
-       u32 reg, temp, i;
+       u32 reg, temp, i, retry;
  
        /* Train 1: umask FDI RX Interrupt symbol_lock and bit_lock bit
           for train result */
                POSTING_READ(reg);
                udelay(500);
  
-               reg = FDI_RX_IIR(pipe);
-               temp = I915_READ(reg);
-               DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
-               if (temp & FDI_RX_BIT_LOCK) {
-                       I915_WRITE(reg, temp | FDI_RX_BIT_LOCK);
-                       DRM_DEBUG_KMS("FDI train 1 done.\n");
-                       break;
+               for (retry = 0; retry < 5; retry++) {
+                       reg = FDI_RX_IIR(pipe);
+                       temp = I915_READ(reg);
+                       DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
+                       if (temp & FDI_RX_BIT_LOCK) {
+                               I915_WRITE(reg, temp | FDI_RX_BIT_LOCK);
+                               DRM_DEBUG_KMS("FDI train 1 done.\n");
+                               break;
+                       }
+                       udelay(50);
                }
+               if (retry < 5)
+                       break;
        }
        if (i == 4)
                DRM_ERROR("FDI train 1 fail!\n");
                POSTING_READ(reg);
                udelay(500);
  
-               reg = FDI_RX_IIR(pipe);
-               temp = I915_READ(reg);
-               DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
-               if (temp & FDI_RX_SYMBOL_LOCK) {
-                       I915_WRITE(reg, temp | FDI_RX_SYMBOL_LOCK);
-                       DRM_DEBUG_KMS("FDI train 2 done.\n");
-                       break;
+               for (retry = 0; retry < 5; retry++) {
+                       reg = FDI_RX_IIR(pipe);
+                       temp = I915_READ(reg);
+                       DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
+                       if (temp & FDI_RX_SYMBOL_LOCK) {
+                               I915_WRITE(reg, temp | FDI_RX_SYMBOL_LOCK);
+                               DRM_DEBUG_KMS("FDI train 2 done.\n");
+                               break;
+                       }
+                       udelay(50);
                }
+               if (retry < 5)
+                       break;
        }
        if (i == 4)
                DRM_ERROR("FDI train 2 fail!\n");
@@@ -3457,6 -3567,11 +3567,11 @@@ static bool intel_crtc_mode_fixup(struc
        return true;
  }
  
+ static int valleyview_get_display_clock_speed(struct drm_device *dev)
+ {
+       return 400000; /* FIXME */
+ }
  static int i945_get_display_clock_speed(struct drm_device *dev)
  {
        return 400000;
@@@ -3606,6 -3721,20 +3721,20 @@@ static const struct intel_watermark_par
        2,
        G4X_FIFO_LINE_SIZE,
  };
+ static const struct intel_watermark_params valleyview_wm_info = {
+       VALLEYVIEW_FIFO_SIZE,
+       VALLEYVIEW_MAX_WM,
+       VALLEYVIEW_MAX_WM,
+       2,
+       G4X_FIFO_LINE_SIZE,
+ };
+ static const struct intel_watermark_params valleyview_cursor_wm_info = {
+       I965_CURSOR_FIFO,
+       VALLEYVIEW_CURSOR_MAX_WM,
+       I965_CURSOR_DFT_WM,
+       2,
+       G4X_FIFO_LINE_SIZE,
+ };
  static const struct intel_watermark_params i965_cursor_wm_info = {
        I965_CURSOR_FIFO,
        I965_CURSOR_MAX_WM,
@@@ -4128,8 -4257,134 +4257,134 @@@ static bool g4x_compute_srwm(struct drm
                              display, cursor);
  }
  
+ static bool vlv_compute_drain_latency(struct drm_device *dev,
+                                    int plane,
+                                    int *plane_prec_mult,
+                                    int *plane_dl,
+                                    int *cursor_prec_mult,
+                                    int *cursor_dl)
+ {
+       struct drm_crtc *crtc;
+       int clock, pixel_size;
+       int entries;
+       crtc = intel_get_crtc_for_plane(dev, plane);
+       if (crtc->fb == NULL || !crtc->enabled)
+               return false;
+       clock = crtc->mode.clock;       /* VESA DOT Clock */
+       pixel_size = crtc->fb->bits_per_pixel / 8;      /* BPP */
+       entries = (clock / 1000) * pixel_size;
+       *plane_prec_mult = (entries > 256) ?
+               DRAIN_LATENCY_PRECISION_32 : DRAIN_LATENCY_PRECISION_16;
+       *plane_dl = (64 * (*plane_prec_mult) * 4) / ((clock / 1000) *
+                                                    pixel_size);
+       entries = (clock / 1000) * 4;   /* BPP is always 4 for cursor */
+       *cursor_prec_mult = (entries > 256) ?
+               DRAIN_LATENCY_PRECISION_32 : DRAIN_LATENCY_PRECISION_16;
+       *cursor_dl = (64 * (*cursor_prec_mult) * 4) / ((clock / 1000) * 4);
+       return true;
+ }
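To make the precision selection concrete: with a 148500 kHz dot clock and a
32bpp framebuffer (pixel_size = 4), entries = 148 * 4 = 592 > 256, so the 32x
precision multiplier is chosen and plane_dl = (64 * 32 * 4) / (148 * 4) = 13
in integer arithmetic. The clock value is only an example.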
+ /*
+  * Update drain latency registers of memory arbiter
+  *
+  * Valleyview SoC has a new memory arbiter and needs drain latency registers
+  * to be programmed. Each plane has a drain latency multiplier and a drain
+  * latency value.
+  */
+ static void vlv_update_drain_latency(struct drm_device *dev)
+ {
+       struct drm_i915_private *dev_priv = dev->dev_private;
+       int planea_prec, planea_dl, planeb_prec, planeb_dl;
+       int cursora_prec, cursora_dl, cursorb_prec, cursorb_dl;
+       int plane_prec_mult, cursor_prec_mult; /* Precision multiplier is
+                                                       either 16 or 32 */
+       /* For plane A, Cursor A */
+       if (vlv_compute_drain_latency(dev, 0, &plane_prec_mult, &planea_dl,
+                                     &cursor_prec_mult, &cursora_dl)) {
+               cursora_prec = (cursor_prec_mult == DRAIN_LATENCY_PRECISION_32) ?
+                       DDL_CURSORA_PRECISION_32 : DDL_CURSORA_PRECISION_16;
+               planea_prec = (plane_prec_mult == DRAIN_LATENCY_PRECISION_32) ?
+                       DDL_PLANEA_PRECISION_32 : DDL_PLANEA_PRECISION_16;
+               I915_WRITE(VLV_DDL1, cursora_prec |
+                               (cursora_dl << DDL_CURSORA_SHIFT) |
+                               planea_prec | planea_dl);
+       }
+       /* For plane B, Cursor B */
+       if (vlv_compute_drain_latency(dev, 1, &plane_prec_mult, &planeb_dl,
+                                     &cursor_prec_mult, &cursorb_dl)) {
+               cursorb_prec = (cursor_prec_mult == DRAIN_LATENCY_PRECISION_32) ?
+                       DDL_CURSORB_PRECISION_32 : DDL_CURSORB_PRECISION_16;
+               planeb_prec = (plane_prec_mult == DRAIN_LATENCY_PRECISION_32) ?
+                       DDL_PLANEB_PRECISION_32 : DDL_PLANEB_PRECISION_16;
+               I915_WRITE(VLV_DDL2, cursorb_prec |
+                               (cursorb_dl << DDL_CURSORB_SHIFT) |
+                               planeb_prec | planeb_dl);
+       }
+ }
  #define single_plane_enabled(mask) is_power_of_2(mask)
  
+ static void valleyview_update_wm(struct drm_device *dev)
+ {
+       static const int sr_latency_ns = 12000;
+       struct drm_i915_private *dev_priv = dev->dev_private;
+       int planea_wm, planeb_wm, cursora_wm, cursorb_wm;
+       int plane_sr, cursor_sr;
+       unsigned int enabled = 0;
+       vlv_update_drain_latency(dev);
+       if (g4x_compute_wm0(dev, 0,
+                           &valleyview_wm_info, latency_ns,
+                           &valleyview_cursor_wm_info, latency_ns,
+                           &planea_wm, &cursora_wm))
+               enabled |= 1;
+       if (g4x_compute_wm0(dev, 1,
+                           &valleyview_wm_info, latency_ns,
+                           &valleyview_cursor_wm_info, latency_ns,
+                           &planeb_wm, &cursorb_wm))
+               enabled |= 2;
+       plane_sr = cursor_sr = 0;
+       if (single_plane_enabled(enabled) &&
+           g4x_compute_srwm(dev, ffs(enabled) - 1,
+                            sr_latency_ns,
+                            &valleyview_wm_info,
+                            &valleyview_cursor_wm_info,
+                            &plane_sr, &cursor_sr))
+               I915_WRITE(FW_BLC_SELF_VLV, FW_CSPWRDWNEN);
+       else
+               I915_WRITE(FW_BLC_SELF_VLV,
+                          I915_READ(FW_BLC_SELF_VLV) & ~FW_CSPWRDWNEN);
+       DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, B: plane=%d, cursor=%d, SR: plane=%d, cursor=%d\n",
+                     planea_wm, cursora_wm,
+                     planeb_wm, cursorb_wm,
+                     plane_sr, cursor_sr);
+       I915_WRITE(DSPFW1,
+                  (plane_sr << DSPFW_SR_SHIFT) |
+                  (cursorb_wm << DSPFW_CURSORB_SHIFT) |
+                  (planeb_wm << DSPFW_PLANEB_SHIFT) |
+                  planea_wm);
+       I915_WRITE(DSPFW2,
+                  (I915_READ(DSPFW2) & DSPFW_CURSORA_MASK) |
+                  (cursora_wm << DSPFW_CURSORA_SHIFT));
+       I915_WRITE(DSPFW3,
+                  (I915_READ(DSPFW3) | (cursor_sr << DSPFW_CURSOR_SR_SHIFT)));
+ }
  static void g4x_update_wm(struct drm_device *dev)
  {
        static const int sr_latency_ns = 12000;
@@@ -4737,17 -4992,8 +4992,17 @@@ sandybridge_compute_sprite_srwm(struct 
  
        crtc = intel_get_crtc_for_plane(dev, plane);
        clock = crtc->mode.clock;
 +      if (!clock) {
 +              *sprite_wm = 0;
 +              return false;
 +      }
  
        line_time_us = (sprite_width * 1000) / clock;
 +      if (!line_time_us) {
 +              *sprite_wm = 0;
 +              return false;
 +      }
 +
        line_count = (latency_ns / line_time_us + 1000) / 1000;
        line_size = sprite_width * pixel_size;
  
@@@ -5113,6 -5359,233 +5368,233 @@@ static void i9xx_update_pll_dividers(st
        }
  }
  
+ static void intel_update_lvds(struct drm_crtc *crtc, intel_clock_t *clock,
+                             struct drm_display_mode *adjusted_mode)
+ {
+       struct drm_device *dev = crtc->dev;
+       struct drm_i915_private *dev_priv = dev->dev_private;
+       struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+       int pipe = intel_crtc->pipe;
+       u32 temp, lvds_sync = 0;
+       temp = I915_READ(LVDS);
+       temp |= LVDS_PORT_EN | LVDS_A0A2_CLKA_POWER_UP;
+       if (pipe == 1) {
+               temp |= LVDS_PIPEB_SELECT;
+       } else {
+               temp &= ~LVDS_PIPEB_SELECT;
+       }
+       /* set the corresponding LVDS_BORDER bit */
+       temp |= dev_priv->lvds_border_bits;
+       /* Set the B0-B3 data pairs corresponding to whether we're going to
+        * set the DPLLs for dual-channel mode or not.
+        */
+       if (clock->p2 == 7)
+               temp |= LVDS_B0B3_POWER_UP | LVDS_CLKB_POWER_UP;
+       else
+               temp &= ~(LVDS_B0B3_POWER_UP | LVDS_CLKB_POWER_UP);
+       /* It would be nice to set 24 vs 18-bit mode (LVDS_A3_POWER_UP)
+        * appropriately here, but we need to look more thoroughly into how
+        * panels behave in the two modes.
+        */
+       /* set the dithering flag on LVDS as needed */
+       if (INTEL_INFO(dev)->gen >= 4) {
+               if (dev_priv->lvds_dither)
+                       temp |= LVDS_ENABLE_DITHER;
+               else
+                       temp &= ~LVDS_ENABLE_DITHER;
+       }
+       if (adjusted_mode->flags & DRM_MODE_FLAG_NHSYNC)
+               lvds_sync |= LVDS_HSYNC_POLARITY;
+       if (adjusted_mode->flags & DRM_MODE_FLAG_NVSYNC)
+               lvds_sync |= LVDS_VSYNC_POLARITY;
+       if ((temp & (LVDS_HSYNC_POLARITY | LVDS_VSYNC_POLARITY))
+           != lvds_sync) {
+               char flags[2] = "-+";
+               DRM_INFO("Changing LVDS panel from "
+                        "(%chsync, %cvsync) to (%chsync, %cvsync)\n",
+                        flags[!(temp & LVDS_HSYNC_POLARITY)],
+                        flags[!(temp & LVDS_VSYNC_POLARITY)],
+                        flags[!(lvds_sync & LVDS_HSYNC_POLARITY)],
+                        flags[!(lvds_sync & LVDS_VSYNC_POLARITY)]);
+               temp &= ~(LVDS_HSYNC_POLARITY | LVDS_VSYNC_POLARITY);
+               temp |= lvds_sync;
+       }
+       I915_WRITE(LVDS, temp);
+ }
+ static void i9xx_update_pll(struct drm_crtc *crtc,
+                           struct drm_display_mode *mode,
+                           struct drm_display_mode *adjusted_mode,
+                           intel_clock_t *clock, intel_clock_t *reduced_clock,
+                           int num_connectors)
+ {
+       struct drm_device *dev = crtc->dev;
+       struct drm_i915_private *dev_priv = dev->dev_private;
+       struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+       int pipe = intel_crtc->pipe;
+       u32 dpll;
+       bool is_sdvo;
+       is_sdvo = intel_pipe_has_type(crtc, INTEL_OUTPUT_SDVO) ||
+               intel_pipe_has_type(crtc, INTEL_OUTPUT_HDMI);
+       dpll = DPLL_VGA_MODE_DIS;
+       if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS))
+               dpll |= DPLLB_MODE_LVDS;
+       else
+               dpll |= DPLLB_MODE_DAC_SERIAL;
+       if (is_sdvo) {
+               int pixel_multiplier = intel_mode_get_pixel_multiplier(adjusted_mode);
+               if (pixel_multiplier > 1) {
+                       if (IS_I945G(dev) || IS_I945GM(dev) || IS_G33(dev))
+                               dpll |= (pixel_multiplier - 1) << SDVO_MULTIPLIER_SHIFT_HIRES;
+               }
+               dpll |= DPLL_DVO_HIGH_SPEED;
+       }
+       if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT))
+               dpll |= DPLL_DVO_HIGH_SPEED;
+       /* compute bitmask from p1 value */
+       if (IS_PINEVIEW(dev))
+               dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT_PINEVIEW;
+       else {
+               dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
+               if (IS_G4X(dev) && reduced_clock)
+                       dpll |= (1 << (reduced_clock->p1 - 1)) << DPLL_FPA1_P1_POST_DIV_SHIFT;
+       }
+       switch (clock->p2) {
+       case 5:
+               dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_5;
+               break;
+       case 7:
+               dpll |= DPLLB_LVDS_P2_CLOCK_DIV_7;
+               break;
+       case 10:
+               dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_10;
+               break;
+       case 14:
+               dpll |= DPLLB_LVDS_P2_CLOCK_DIV_14;
+               break;
+       }
+       if (INTEL_INFO(dev)->gen >= 4)
+               dpll |= (6 << PLL_LOAD_PULSE_PHASE_SHIFT);
+       if (is_sdvo && intel_pipe_has_type(crtc, INTEL_OUTPUT_TVOUT))
+               dpll |= PLL_REF_INPUT_TVCLKINBC;
+       else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_TVOUT))
+               /* XXX: just matching BIOS for now */
+               /*      dpll |= PLL_REF_INPUT_TVCLKINBC; */
+               dpll |= 3;
+       else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) &&
+                intel_panel_use_ssc(dev_priv) && num_connectors < 2)
+               dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
+       else
+               dpll |= PLL_REF_INPUT_DREFCLK;
+       dpll |= DPLL_VCO_ENABLE;
+       I915_WRITE(DPLL(pipe), dpll & ~DPLL_VCO_ENABLE);
+       POSTING_READ(DPLL(pipe));
+       udelay(150);
+       /* The LVDS pin pair needs to be on before the DPLLs are enabled.
+        * This is an exception to the general rule that mode_set doesn't turn
+        * things on.
+        */
+       if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS))
+               intel_update_lvds(crtc, clock, adjusted_mode);
+       if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT))
+               intel_dp_set_m_n(crtc, mode, adjusted_mode);
+       I915_WRITE(DPLL(pipe), dpll);
+       /* Wait for the clocks to stabilize. */
+       POSTING_READ(DPLL(pipe));
+       udelay(150);
+       if (INTEL_INFO(dev)->gen >= 4) {
+               u32 temp = 0;
+               if (is_sdvo) {
+                       temp = intel_mode_get_pixel_multiplier(adjusted_mode);
+                       if (temp > 1)
+                               temp = (temp - 1) << DPLL_MD_UDI_MULTIPLIER_SHIFT;
+                       else
+                               temp = 0;
+               }
+               I915_WRITE(DPLL_MD(pipe), temp);
+       } else {
+               /* The pixel multiplier can only be updated once the
+                * DPLL is enabled and the clocks are stable.
+                *
+                * So write it again.
+                */
+               I915_WRITE(DPLL(pipe), dpll);
+       }
+ }
+ static void i8xx_update_pll(struct drm_crtc *crtc,
+                           struct drm_display_mode *adjusted_mode,
+                           intel_clock_t *clock,
+                           int num_connectors)
+ {
+       struct drm_device *dev = crtc->dev;
+       struct drm_i915_private *dev_priv = dev->dev_private;
+       struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+       int pipe = intel_crtc->pipe;
+       u32 dpll;
+       dpll = DPLL_VGA_MODE_DIS;
+       if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS)) {
+               dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
+       } else {
+               if (clock->p1 == 2)
+                       dpll |= PLL_P1_DIVIDE_BY_TWO;
+               else
+                       dpll |= (clock->p1 - 2) << DPLL_FPA01_P1_POST_DIV_SHIFT;
+               if (clock->p2 == 4)
+                       dpll |= PLL_P2_DIVIDE_BY_4;
+       }
+       if (intel_pipe_has_type(crtc, INTEL_OUTPUT_TVOUT))
+               /* XXX: just matching BIOS for now */
+               /*      dpll |= PLL_REF_INPUT_TVCLKINBC; */
+               dpll |= 3;
+       else if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) &&
+                intel_panel_use_ssc(dev_priv) && num_connectors < 2)
+               dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
+       else
+               dpll |= PLL_REF_INPUT_DREFCLK;
+       dpll |= DPLL_VCO_ENABLE;
+       I915_WRITE(DPLL(pipe), dpll & ~DPLL_VCO_ENABLE);
+       POSTING_READ(DPLL(pipe));
+       udelay(150);
+       I915_WRITE(DPLL(pipe), dpll);
+       /* Wait for the clocks to stabilize. */
+       POSTING_READ(DPLL(pipe));
+       udelay(150);
+       /* The LVDS pin pair needs to be on before the DPLLs are enabled.
+        * This is an exception to the general rule that mode_set doesn't turn
+        * things on.
+        */
+       if (intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS))
+               intel_update_lvds(crtc, clock, adjusted_mode);
+       /* The pixel multiplier can only be updated once the
+        * DPLL is enabled and the clocks are stable.
+        *
+        * So write it again.
+        */
+       I915_WRITE(DPLL(pipe), dpll);
+ }
  static int i9xx_crtc_mode_set(struct drm_crtc *crtc,
                              struct drm_display_mode *mode,
                              struct drm_display_mode *adjusted_mode,
        int plane = intel_crtc->plane;
        int refclk, num_connectors = 0;
        intel_clock_t clock, reduced_clock;
-       u32 dpll, dspcntr, pipeconf, vsyncshift;
-       bool ok, has_reduced_clock = false, is_sdvo = false, is_dvo = false;
-       bool is_crt = false, is_lvds = false, is_tv = false, is_dp = false;
+       u32 dspcntr, pipeconf, vsyncshift;
+       bool ok, has_reduced_clock = false, is_sdvo = false;
+       bool is_lvds = false, is_tv = false, is_dp = false;
        struct drm_mode_config *mode_config = &dev->mode_config;
        struct intel_encoder *encoder;
        const intel_limit_t *limit;
        int ret;
-       u32 temp;
-       u32 lvds_sync = 0;
  
        list_for_each_entry(encoder, &mode_config->encoder_list, base.head) {
                if (encoder->base.crtc != crtc)
                        if (encoder->needs_tv_clock)
                                is_tv = true;
                        break;
-               case INTEL_OUTPUT_DVO:
-                       is_dvo = true;
-                       break;
                case INTEL_OUTPUT_TVOUT:
                        is_tv = true;
                        break;
-               case INTEL_OUTPUT_ANALOG:
-                       is_crt = true;
-                       break;
                case INTEL_OUTPUT_DISPLAYPORT:
                        is_dp = true;
                        break;
        i9xx_update_pll_dividers(crtc, &clock, has_reduced_clock ?
                                 &reduced_clock : NULL);
  
-       dpll = DPLL_VGA_MODE_DIS;
-       if (!IS_GEN2(dev)) {
-               if (is_lvds)
-                       dpll |= DPLLB_MODE_LVDS;
-               else
-                       dpll |= DPLLB_MODE_DAC_SERIAL;
-               if (is_sdvo) {
-                       int pixel_multiplier = intel_mode_get_pixel_multiplier(adjusted_mode);
-                       if (pixel_multiplier > 1) {
-                               if (IS_I945G(dev) || IS_I945GM(dev) || IS_G33(dev))
-                                       dpll |= (pixel_multiplier - 1) << SDVO_MULTIPLIER_SHIFT_HIRES;
-                       }
-                       dpll |= DPLL_DVO_HIGH_SPEED;
-               }
-               if (is_dp)
-                       dpll |= DPLL_DVO_HIGH_SPEED;
-               /* compute bitmask from p1 value */
-               if (IS_PINEVIEW(dev))
-                       dpll |= (1 << (clock.p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT_PINEVIEW;
-               else {
-                       dpll |= (1 << (clock.p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
-                       if (IS_G4X(dev) && has_reduced_clock)
-                               dpll |= (1 << (reduced_clock.p1 - 1)) << DPLL_FPA1_P1_POST_DIV_SHIFT;
-               }
-               switch (clock.p2) {
-               case 5:
-                       dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_5;
-                       break;
-               case 7:
-                       dpll |= DPLLB_LVDS_P2_CLOCK_DIV_7;
-                       break;
-               case 10:
-                       dpll |= DPLL_DAC_SERIAL_P2_CLOCK_DIV_10;
-                       break;
-               case 14:
-                       dpll |= DPLLB_LVDS_P2_CLOCK_DIV_14;
-                       break;
-               }
-               if (INTEL_INFO(dev)->gen >= 4)
-                       dpll |= (6 << PLL_LOAD_PULSE_PHASE_SHIFT);
-       } else {
-               if (is_lvds) {
-                       dpll |= (1 << (clock.p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
-               } else {
-                       if (clock.p1 == 2)
-                               dpll |= PLL_P1_DIVIDE_BY_TWO;
-                       else
-                               dpll |= (clock.p1 - 2) << DPLL_FPA01_P1_POST_DIV_SHIFT;
-                       if (clock.p2 == 4)
-                               dpll |= PLL_P2_DIVIDE_BY_4;
-               }
-       }
-       if (is_sdvo && is_tv)
-               dpll |= PLL_REF_INPUT_TVCLKINBC;
-       else if (is_tv)
-               /* XXX: just matching BIOS for now */
-               /*      dpll |= PLL_REF_INPUT_TVCLKINBC; */
-               dpll |= 3;
-       else if (is_lvds && intel_panel_use_ssc(dev_priv) && num_connectors < 2)
-               dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
+       if (IS_GEN2(dev))
+               i8xx_update_pll(crtc, adjusted_mode, &clock, num_connectors);
        else
-               dpll |= PLL_REF_INPUT_DREFCLK;
+               i9xx_update_pll(crtc, mode, adjusted_mode, &clock,
+                               has_reduced_clock ? &reduced_clock : NULL,
+                               num_connectors);
  
        /* setup pipeconf */
        pipeconf = I915_READ(PIPECONF(pipe));
                }
        }
  
-       dpll |= DPLL_VCO_ENABLE;
        DRM_DEBUG_KMS("Mode for pipe %c:\n", pipe == 0 ? 'A' : 'B');
        drm_mode_debug_printmodeline(mode);
  
-       I915_WRITE(DPLL(pipe), dpll & ~DPLL_VCO_ENABLE);
-       POSTING_READ(DPLL(pipe));
-       udelay(150);
-       /* The LVDS pin pair needs to be on before the DPLLs are enabled.
-        * This is an exception to the general rule that mode_set doesn't turn
-        * things on.
-        */
-       if (is_lvds) {
-               temp = I915_READ(LVDS);
-               temp |= LVDS_PORT_EN | LVDS_A0A2_CLKA_POWER_UP;
-               if (pipe == 1) {
-                       temp |= LVDS_PIPEB_SELECT;
-               } else {
-                       temp &= ~LVDS_PIPEB_SELECT;
-               }
-               /* set the corresponsding LVDS_BORDER bit */
-               temp |= dev_priv->lvds_border_bits;
-               /* Set the B0-B3 data pairs corresponding to whether we're going to
-                * set the DPLLs for dual-channel mode or not.
-                */
-               if (clock.p2 == 7)
-                       temp |= LVDS_B0B3_POWER_UP | LVDS_CLKB_POWER_UP;
-               else
-                       temp &= ~(LVDS_B0B3_POWER_UP | LVDS_CLKB_POWER_UP);
-               /* It would be nice to set 24 vs 18-bit mode (LVDS_A3_POWER_UP)
-                * appropriately here, but we need to look more thoroughly into how
-                * panels behave in the two modes.
-                */
-               /* set the dithering flag on LVDS as needed */
-               if (INTEL_INFO(dev)->gen >= 4) {
-                       if (dev_priv->lvds_dither)
-                               temp |= LVDS_ENABLE_DITHER;
-                       else
-                               temp &= ~LVDS_ENABLE_DITHER;
-               }
-               if (adjusted_mode->flags & DRM_MODE_FLAG_NHSYNC)
-                       lvds_sync |= LVDS_HSYNC_POLARITY;
-               if (adjusted_mode->flags & DRM_MODE_FLAG_NVSYNC)
-                       lvds_sync |= LVDS_VSYNC_POLARITY;
-               if ((temp & (LVDS_HSYNC_POLARITY | LVDS_VSYNC_POLARITY))
-                   != lvds_sync) {
-                       char flags[2] = "-+";
-                       DRM_INFO("Changing LVDS panel from "
-                                "(%chsync, %cvsync) to (%chsync, %cvsync)\n",
-                                flags[!(temp & LVDS_HSYNC_POLARITY)],
-                                flags[!(temp & LVDS_VSYNC_POLARITY)],
-                                flags[!(lvds_sync & LVDS_HSYNC_POLARITY)],
-                                flags[!(lvds_sync & LVDS_VSYNC_POLARITY)]);
-                       temp &= ~(LVDS_HSYNC_POLARITY | LVDS_VSYNC_POLARITY);
-                       temp |= lvds_sync;
-               }
-               I915_WRITE(LVDS, temp);
-       }
-       if (is_dp) {
-               intel_dp_set_m_n(crtc, mode, adjusted_mode);
-       }
-       I915_WRITE(DPLL(pipe), dpll);
-       /* Wait for the clocks to stabilize. */
-       POSTING_READ(DPLL(pipe));
-       udelay(150);
-       if (INTEL_INFO(dev)->gen >= 4) {
-               temp = 0;
-               if (is_sdvo) {
-                       temp = intel_mode_get_pixel_multiplier(adjusted_mode);
-                       if (temp > 1)
-                               temp = (temp - 1) << DPLL_MD_UDI_MULTIPLIER_SHIFT;
-                       else
-                               temp = 0;
-               }
-               I915_WRITE(DPLL_MD(pipe), temp);
-       } else {
-               /* The pixel multiplier can only be updated once the
-                * DPLL is enabled and the clocks are stable.
-                *
-                * So write it again.
-                */
-               I915_WRITE(DPLL(pipe), dpll);
-       }
        if (HAS_PIPE_CXSR(dev)) {
                if (intel_crtc->lowfreq_avail) {
                        DRM_DEBUG_KMS("enabling CxSR downclocking\n");
@@@ -5539,8 -5857,7 +5866,8 @@@ void ironlake_init_pch_refclk(struct dr
                if (intel_panel_use_ssc(dev_priv) && can_ssc) {
                        DRM_DEBUG_KMS("Using SSC on panel\n");
                        temp |= DREF_SSC1_ENABLE;
 -              }
 +              } else
 +                      temp &= ~DREF_SSC1_ENABLE;
  
                /* Get SSC going before enabling the outputs */
                I915_WRITE(PCH_DREF_CONTROL, temp);
@@@ -6278,7 -6595,7 +6605,7 @@@ void intel_crtc_load_lut(struct drm_crt
        int i;
  
        /* The clocks have to be on to load the palette. */
 -      if (!crtc->enabled)
 +      if (!crtc->enabled || !intel_crtc->active)
                return;
  
        /* use legacy palette for Ironlake */
@@@ -6664,7 -6981,7 +6991,7 @@@ intel_framebuffer_create_for_mode(struc
        mode_cmd.height = mode->vdisplay;
        mode_cmd.pitches[0] = intel_framebuffer_pitch_for_width(mode_cmd.width,
                                                                bpp);
 -      mode_cmd.pixel_format = 0;
 +      mode_cmd.pixel_format = drm_mode_legacy_fb_format(bpp, depth);
  
        return intel_framebuffer_create(dev, &mode_cmd, obj);
  }
@@@ -7581,12 -7898,6 +7908,12 @@@ static void intel_sanitize_modesetting(
        struct drm_i915_private *dev_priv = dev->dev_private;
        u32 reg, val;
  
 +      /* Clear any frame start delays used for debugging left by the BIOS */
 +      for_each_pipe(pipe) {
 +              reg = PIPECONF(pipe);
 +              I915_WRITE(reg, I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
 +      }
 +
        if (HAS_PCH_SPLIT(dev))
                return;
  
@@@ -7796,7 -8107,7 +8123,7 @@@ static void intel_setup_outputs(struct 
  
                if (I915_READ(HDMIB) & PORT_DETECTED) {
                        /* PCH SDVOB multiplex with HDMIB */
-                       found = intel_sdvo_init(dev, PCH_SDVOB);
+                       found = intel_sdvo_init(dev, PCH_SDVOB, true);
                        if (!found)
                                intel_hdmi_init(dev, HDMIB);
                        if (!found && (I915_READ(PCH_DP_B) & DP_DETECTED))
  
                if (I915_READ(SDVOB) & SDVO_DETECTED) {
                        DRM_DEBUG_KMS("probing SDVOB\n");
-                       found = intel_sdvo_init(dev, SDVOB);
+                       found = intel_sdvo_init(dev, SDVOB, true);
                        if (!found && SUPPORTS_INTEGRATED_HDMI(dev)) {
                                DRM_DEBUG_KMS("probing HDMI on SDVOB\n");
                                intel_hdmi_init(dev, SDVOB);
  
                if (I915_READ(SDVOB) & SDVO_DETECTED) {
                        DRM_DEBUG_KMS("probing SDVOC\n");
-                       found = intel_sdvo_init(dev, SDVOC);
+                       found = intel_sdvo_init(dev, SDVOC, false);
                }
  
                if (!found && (I915_READ(SDVOC) & SDVO_DETECTED)) {
@@@ -7917,7 -8228,6 +8244,7 @@@ int intel_framebuffer_init(struct drm_d
        case DRM_FORMAT_RGB332:
        case DRM_FORMAT_RGB565:
        case DRM_FORMAT_XRGB8888:
 +      case DRM_FORMAT_XBGR8888:
        case DRM_FORMAT_ARGB8888:
        case DRM_FORMAT_XRGB2101010:
        case DRM_FORMAT_ARGB2101010:
@@@ -8222,7 -8532,7 +8549,7 @@@ void intel_init_emon(struct drm_device 
        dev_priv->corr = (lcfuse & LCFUSE_HIV_MASK);
  }
  
 -static bool intel_enable_rc6(struct drm_device *dev)
 +static int intel_enable_rc6(struct drm_device *dev)
  {
        /*
         * Respect the kernel parameter if it is set
         * Disable rc6 on Sandybridge
         */
        if (INTEL_INFO(dev)->gen == 6) {
 -              DRM_DEBUG_DRIVER("Sandybridge: RC6 disabled\n");
 -              return 0;
 +              DRM_DEBUG_DRIVER("Sandybridge: deep RC6 disabled\n");
 +              return INTEL_RC6_ENABLE;
        }
 -      DRM_DEBUG_DRIVER("RC6 enabled\n");
 -      return 1;
 +      DRM_DEBUG_DRIVER("RC6 and deep RC6 enabled\n");
 +      return (INTEL_RC6_ENABLE | INTEL_RC6p_ENABLE);
  }
  
  void gen6_enable_rps(struct drm_i915_private *dev_priv)
        u32 pcu_mbox, rc6_mask = 0;
        u32 gtfifodbg;
        int cur_freq, min_freq, max_freq;
 +      int rc6_mode;
        int i;
  
        /* Here begins a magic sequence of register writes to enable
        I915_WRITE(GEN6_RC6p_THRESHOLD, 100000);
        I915_WRITE(GEN6_RC6pp_THRESHOLD, 64000); /* unused */
  
 -      if (intel_enable_rc6(dev_priv->dev))
 -              rc6_mask = GEN6_RC_CTL_RC6p_ENABLE |
 -                      GEN6_RC_CTL_RC6_ENABLE;
 +      rc6_mode = intel_enable_rc6(dev_priv->dev);
 +      if (rc6_mode & INTEL_RC6_ENABLE)
 +              rc6_mask |= GEN6_RC_CTL_RC6_ENABLE;
 +
 +      if (rc6_mode & INTEL_RC6p_ENABLE)
 +              rc6_mask |= GEN6_RC_CTL_RC6p_ENABLE;
 +
 +      if (rc6_mode & INTEL_RC6pp_ENABLE)
 +              rc6_mask |= GEN6_RC_CTL_RC6pp_ENABLE;
 +
 +      DRM_INFO("Enabling RC6 states: RC6 %s, RC6p %s, RC6pp %s\n",
 +                      (rc6_mode & INTEL_RC6_ENABLE) ? "on" : "off",
 +                      (rc6_mode & INTEL_RC6p_ENABLE) ? "on" : "off",
 +                      (rc6_mode & INTEL_RC6pp_ENABLE) ? "on" : "off");
  
        I915_WRITE(GEN6_RC_CONTROL,
                   rc6_mask |
@@@ -8583,32 -8881,12 +8910,32 @@@ static void ivybridge_init_clock_gating
        I915_WRITE(WM2_LP_ILK, 0);
        I915_WRITE(WM1_LP_ILK, 0);
  
 +      /* According to the spec, bit 13 (RCZUNIT) must be set on IVB.
 +       * This implements the WaDisableRCZUnitClockGating workaround.
 +       */
 +      I915_WRITE(GEN6_UCGCTL2, GEN6_RCZUNIT_CLOCK_GATE_DISABLE);
 +
        I915_WRITE(ILK_DSPCLK_GATE, IVB_VRHUNIT_CLK_GATE);
  
        I915_WRITE(IVB_CHICKEN3,
                   CHICKEN3_DGMG_REQ_OUT_FIX_DISABLE |
                   CHICKEN3_DGMG_DONE_FIX_DISABLE);
  
 +      /* Apply the WaDisableRHWOOptimizationForRenderHang workaround. */
 +      I915_WRITE(GEN7_COMMON_SLICE_CHICKEN1,
 +                 GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC);
 +
 +      /* WaApplyL3ControlAndL3ChickenMode requires those two on Ivy Bridge */
 +      I915_WRITE(GEN7_L3CNTLREG1,
 +                      GEN7_WA_FOR_GEN7_L3_CONTROL);
 +      I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER,
 +                      GEN7_WA_L3_CHICKEN_MODE);
 +
 +      /* This is required by WaCatErrorRejectionIssue */
 +      I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG,
 +                      I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
 +                      GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
 +
        for_each_pipe(pipe) {
                I915_WRITE(DSPCNTR(pipe),
                           I915_READ(DSPCNTR(pipe)) |
        }
  }
  
+ static void valleyview_init_clock_gating(struct drm_device *dev)
+ {
+       struct drm_i915_private *dev_priv = dev->dev_private;
+       int pipe;
+       uint32_t dspclk_gate = VRHUNIT_CLOCK_GATE_DISABLE;
+
+       I915_WRITE(PCH_DSPCLK_GATE_D, dspclk_gate);
+
+       I915_WRITE(WM3_LP_ILK, 0);
+       I915_WRITE(WM2_LP_ILK, 0);
+       I915_WRITE(WM1_LP_ILK, 0);
+
+       /* According to the spec, bit 13 (RCZUNIT) must be set on IVB.
+        * This implements the WaDisableRCZUnitClockGating workaround.
+        */
+       I915_WRITE(GEN6_UCGCTL2, GEN6_RCZUNIT_CLOCK_GATE_DISABLE);
+
+       I915_WRITE(ILK_DSPCLK_GATE, IVB_VRHUNIT_CLK_GATE);
+
+       I915_WRITE(IVB_CHICKEN3,
+                  CHICKEN3_DGMG_REQ_OUT_FIX_DISABLE |
+                  CHICKEN3_DGMG_DONE_FIX_DISABLE);
+
+       /* Apply the WaDisableRHWOOptimizationForRenderHang workaround. */
+       I915_WRITE(GEN7_COMMON_SLICE_CHICKEN1,
+                  GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC);
+
+       /* WaApplyL3ControlAndL3ChickenMode requires those two on Ivy Bridge */
+       I915_WRITE(GEN7_L3CNTLREG1, GEN7_WA_FOR_GEN7_L3_CONTROL);
+       I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER, GEN7_WA_L3_CHICKEN_MODE);
+
+       /* This is required by WaCatErrorRejectionIssue */
+       I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG,
+                  I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
+                  GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB);
+
+       for_each_pipe(pipe) {
+               I915_WRITE(DSPCNTR(pipe),
+                          I915_READ(DSPCNTR(pipe)) |
+                          DISPPLANE_TRICKLE_FEED_DISABLE);
+               intel_flush_display_plane(dev_priv, pipe);
+       }
+
+       I915_WRITE(CACHE_MODE_1, I915_READ(CACHE_MODE_1) |
+                  (PIXEL_SUBSPAN_COLLECT_OPT_DISABLE << 16) |
+                  PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
+ }
+
  static void g4x_init_clock_gating(struct drm_device *dev)
  {
        struct drm_i915_private *dev_priv = dev->dev_private;
@@@ -8871,7 -9197,10 +9246,10 @@@ static void intel_init_display(struct d
        }
  
        /* Returns the core display clock speed */
-       if (IS_I945G(dev) || (IS_G33(dev) && !IS_PINEVIEW_M(dev)))
+       if (IS_VALLEYVIEW(dev))
+               dev_priv->display.get_display_clock_speed =
+                       valleyview_get_display_clock_speed;
+       else if (IS_I945G(dev) || (IS_G33(dev) && !IS_PINEVIEW_M(dev)))
                dev_priv->display.get_display_clock_speed =
                        i945_get_display_clock_speed;
        else if (IS_I915G(dev))
                        dev_priv->display.write_eld = ironlake_write_eld;
                } else
                        dev_priv->display.update_wm = NULL;
+       } else if (IS_VALLEYVIEW(dev)) {
+               dev_priv->display.update_wm = valleyview_update_wm;
+               dev_priv->display.init_clock_gating =
+                       valleyview_init_clock_gating;
+               dev_priv->display.force_wake_get = vlv_force_wake_get;
+               dev_priv->display.force_wake_put = vlv_force_wake_put;
        } else if (IS_PINEVIEW(dev)) {
                if (!intel_get_cxsr_latency(IS_PINEVIEW_G(dev),
                                            dev_priv->is_ddr3,
@@@ -9049,7 -9384,7 +9433,7 @@@ static void quirk_pipea_force(struct dr
        struct drm_i915_private *dev_priv = dev->dev_private;
  
        dev_priv->quirks |= QUIRK_PIPEA_FORCE;
-       DRM_DEBUG_DRIVER("applying pipe a force quirk\n");
+       DRM_INFO("applying pipe a force quirk\n");
  }
  
  /*
@@@ -9059,6 -9394,18 +9443,18 @@@ static void quirk_ssc_force_disable(str
  {
        struct drm_i915_private *dev_priv = dev->dev_private;
        dev_priv->quirks |= QUIRK_LVDS_SSC_DISABLE;
+       DRM_INFO("applying lvds SSC disable quirk\n");
+ }
+
+ /*
+  * A machine (e.g. Acer Aspire 5734Z) may need to invert the panel backlight
+  * brightness value
+  */
+ static void quirk_invert_brightness(struct drm_device *dev)
+ {
+       struct drm_i915_private *dev_priv = dev->dev_private;
+       dev_priv->quirks |= QUIRK_INVERT_BRIGHTNESS;
+       DRM_INFO("applying inverted panel brightness quirk\n");
  }
  
  struct intel_quirk {
@@@ -9093,6 -9440,9 +9489,9 @@@ struct intel_quirk intel_quirks[] = 
  
        /* Sony Vaio Y cannot use SSC on LVDS */
        { 0x0046, 0x104d, 0x9076, quirk_ssc_force_disable },
+
+       /* Acer Aspire 5734Z must invert backlight brightness */
+       { 0x2a42, 0x1025, 0x0459, quirk_invert_brightness },
  };
  
  static void intel_init_quirks(struct drm_device *dev)
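The hunk stops at the function header; for orientation, a quirk table like this is typically walked by matching the PCI device ID plus subsystem IDs and firing the hook on a match. The following body is a sketch of that pattern (assumed, not shown in this diff):

        struct pci_dev *d = dev->pdev;
        int i;

        for (i = 0; i < ARRAY_SIZE(intel_quirks); i++) {
                struct intel_quirk *q = &intel_quirks[i];

                /* Match device ID exactly; subsystem IDs may wildcard. */
                if (d->device == q->device &&
                    (d->subsystem_vendor == q->subsystem_vendor ||
                     q->subsystem_vendor == PCI_ANY_ID) &&
                    (d->subsystem_device == q->subsystem_device ||
                     q->subsystem_device == PCI_ANY_ID))
                        q->hook(dev);
        }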
@@@ -9236,6 -9586,9 +9635,9 @@@ void intel_modeset_cleanup(struct drm_d
        if (IS_IRONLAKE_M(dev))
                ironlake_disable_rc6(dev);
  
+       if (IS_VALLEYVIEW(dev))
+               vlv_init_dpio(dev);
+
        mutex_unlock(&dev->struct_mutex);
  
        /* Disable the irq before mode object teardown, for the irq might
        ret__;                                                          \
  })
  
+ #define wait_for_atomic_us(COND, US) ({ \
+       int i, ret__ = -ETIMEDOUT;      \
+       for (i = 0; i < (US); i++) {    \
+               if ((COND)) {           \
+                       ret__ = 0;      \
+                       break;          \
+               }                       \
+               udelay(1);              \
+       }                               \
+       ret__;                          \
+ })
+
  #define wait_for(COND, MS) _wait_for(COND, MS, 1)
  #define wait_for_atomic(COND, MS) _wait_for(COND, MS, 0)
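wait_for() may sleep between polls, so it cannot be used in atomic context; the new wait_for_atomic_us() instead spins with udelay(1) for short, microsecond-scale waits. A minimal usage sketch (FOO_STATUS and FOO_READY are hypothetical names, not registers from this patch):

        /* Spin for at most 500us on a made-up status bit; the macro
         * returns 0 on success and -ETIMEDOUT on timeout. */
        if (wait_for_atomic_us(I915_READ(FOO_STATUS) & FOO_READY, 500))
                DRM_ERROR("FOO unit never became ready\n");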
  
@@@ -293,7 -305,8 +305,8 @@@ extern void intel_attach_broadcast_rgb_
  extern void intel_crt_init(struct drm_device *dev);
  extern void intel_hdmi_init(struct drm_device *dev, int sdvox_reg);
  void intel_dip_infoframe_csum(struct dip_infoframe *avi_if);
- extern bool intel_sdvo_init(struct drm_device *dev, int output_device);
+ extern bool intel_sdvo_init(struct drm_device *dev, uint32_t sdvo_reg,
+                           bool is_sdvob);
  extern void intel_dvo_init(struct drm_device *dev);
  extern void intel_tv_init(struct drm_device *dev);
  extern void intel_mark_busy(struct drm_device *dev,
@@@ -382,7 -395,7 +395,7 @@@ extern int intel_framebuffer_init(struc
                                  struct drm_i915_gem_object *obj);
  extern int intel_fbdev_init(struct drm_device *dev);
  extern void intel_fbdev_fini(struct drm_device *dev);
 -
 +extern void intel_fbdev_set_suspend(struct drm_device *dev, int state);
  extern void intel_prepare_page_flip(struct drm_device *dev, int plane);
  extern void intel_finish_page_flip(struct drm_device *dev, int pipe);
  extern void intel_finish_page_flip_plane(struct drm_device *dev, int plane);
@@@ -419,4 -432,6 +432,6 @@@ extern int intel_sprite_set_colorkey(st
  extern int intel_sprite_get_colorkey(struct drm_device *dev, void *data,
                                     struct drm_file *file_priv);
  
+ extern u32 intel_dpio_read(struct drm_i915_private *dev_priv, int reg);
+
  #endif /* __INTEL_DRV_H__ */
@@@ -474,7 -474,7 +474,7 @@@ static int intel_lvds_get_modes(struct 
  
  static int intel_no_modeset_on_lid_dmi_callback(const struct dmi_system_id *id)
  {
-       DRM_DEBUG_KMS("Skipping forced modeset for %s\n", id->ident);
+       DRM_INFO("Skipping forced modeset for %s\n", id->ident);
        return 1;
  }
  
@@@ -622,7 -622,7 +622,7 @@@ static const struct drm_encoder_funcs i
  
  static int __init intel_no_lvds_dmi_callback(const struct dmi_system_id *id)
  {
-       DRM_DEBUG_KMS("Skipping LVDS initialization for %s\n", id->ident);
+       DRM_INFO("Skipping LVDS initialization for %s\n", id->ident);
        return 1;
  }
  
@@@ -755,14 -755,6 +755,14 @@@ static const struct dmi_system_id intel
                        DMI_MATCH(DMI_BOARD_NAME, "hp st5747"),
                },
        },
 +      {
 +              .callback = intel_no_lvds_dmi_callback,
 +              .ident = "MSI Wind Box DC500",
 +              .matches = {
 +                      DMI_MATCH(DMI_BOARD_VENDOR, "MICRO-STAR INTERNATIONAL CO., LTD"),
 +                      DMI_MATCH(DMI_BOARD_NAME, "MS-7469"),
 +              },
 +      },
  
        { }     /* terminating entry */
  };
@@@ -845,8 -837,8 +845,8 @@@ static bool lvds_is_present_in_vbt(stru
                    child->device_type != DEVICE_TYPE_LFP)
                        continue;
  
-               if (child->i2c_pin)
-                   *i2c_pin = child->i2c_pin;
+               if (intel_gmbus_is_port_valid(child->i2c_pin))
+                       *i2c_pin = child->i2c_pin;
  
                /* However, we cannot trust the BIOS writers to populate
                 * the VBT correctly.  Since LVDS requires additional
@@@ -987,7 -979,8 +987,8 @@@ bool intel_lvds_init(struct drm_device 
         * preferred mode is the right one.
         */
        intel_lvds->edid = drm_get_edid(connector,
-                                       &dev_priv->gmbus[pin].adapter);
+                                       intel_gmbus_get_adapter(dev_priv,
+                                                               pin));
        if (intel_lvds->edid) {
                if (drm_add_edid_modes(connector,
                                       intel_lvds->edid)) {
@@@ -28,7 -28,6 +28,7 @@@
  #include <linux/fb.h>
  #include <drm/drm_edid.h>
  #include "drmP.h"
 +#include "drm_edid.h"
  #include "intel_drv.h"
  #include "i915_drv.h"
  
@@@ -43,20 -42,21 +43,21 @@@ bool intel_ddc_probe(struct intel_encod
        u8 buf[2];
        struct i2c_msg msgs[] = {
                {
 -                      .addr = 0x50,
 +                      .addr = DDC_ADDR,
                        .flags = 0,
                        .len = 1,
                        .buf = out_buf,
                },
                {
 -                      .addr = 0x50,
 +                      .addr = DDC_ADDR,
                        .flags = I2C_M_RD,
                        .len = 1,
                        .buf = buf,
                }
        };
  
-       return i2c_transfer(&dev_priv->gmbus[ddc_bus].adapter, msgs, 2) == 2;
+       return i2c_transfer(intel_gmbus_get_adapter(dev_priv, ddc_bus),
+                           msgs, 2) == 2;
  }
  
  /**
@@@ -287,12 -287,12 +287,12 @@@ static int init_ring_common(struct inte
  
        I915_WRITE_CTL(ring,
                        ((ring->size - PAGE_SIZE) & RING_NR_PAGES)
 -                      | RING_REPORT_64K | RING_VALID);
 +                      | RING_VALID);
  
        /* If the head is still not zero, the ring is dead */
-       if ((I915_READ_CTL(ring) & RING_VALID) == 0 ||
-           I915_READ_START(ring) != obj->gtt_offset ||
-           (I915_READ_HEAD(ring) & HEAD_ADDR) != 0) {
+       if (wait_for((I915_READ_CTL(ring) & RING_VALID) != 0 &&
+                    I915_READ_START(ring) == obj->gtt_offset &&
+                    (I915_READ_HEAD(ring) & HEAD_ADDR) == 0, 50)) {
                DRM_ERROR("%s initialization failed "
                                "ctl %08x head %08x tail %08x start %08x\n",
                                ring->name,
@@@ -626,7 -626,7 +626,7 @@@ gen6_ring_get_seqno(struct intel_ring_b
        /* Workaround to force correct ordering between irq and seqno writes on
         * ivb (and maybe also on snb) by reading from a CS register (like
         * ACTHD) before reading the status page. */
 -      if (IS_GEN7(dev))
 +      if (IS_GEN6(dev) || IS_GEN7(dev))
                intel_ring_get_active_head(ring);
        return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
  }
@@@ -687,7 -687,7 +687,7 @@@ render_ring_get_irq(struct intel_ring_b
  
        spin_lock(&ring->irq_lock);
        if (ring->irq_refcount++ == 0) {
-               if (HAS_PCH_SPLIT(dev))
+               if (INTEL_INFO(dev)->gen >= 5)
                        ironlake_enable_irq(dev_priv,
                                            GT_PIPE_NOTIFY | GT_USER_INTERRUPT);
                else
@@@ -706,7 -706,7 +706,7 @@@ render_ring_put_irq(struct intel_ring_b
  
        spin_lock(&ring->irq_lock);
        if (--ring->irq_refcount == 0) {
-               if (HAS_PCH_SPLIT(dev))
+               if (INTEL_INFO(dev)->gen >= 5)
                        ironlake_disable_irq(dev_priv,
                                             GT_USER_INTERRUPT |
                                             GT_PIPE_NOTIFY);
@@@ -788,10 -788,11 +788,11 @@@ ring_add_request(struct intel_ring_buff
  }
  
  static bool
- gen6_ring_get_irq(struct intel_ring_buffer *ring, u32 gflag, u32 rflag)
+ gen6_ring_get_irq(struct intel_ring_buffer *ring)
  {
        struct drm_device *dev = ring->dev;
        drm_i915_private_t *dev_priv = dev->dev_private;
+       u32 mask = ring->irq_enable;
  
        if (!dev->irq_enabled)
               return false;
  
        spin_lock(&ring->irq_lock);
        if (ring->irq_refcount++ == 0) {
-               ring->irq_mask &= ~rflag;
+               ring->irq_mask &= ~mask;
                I915_WRITE_IMR(ring, ring->irq_mask);
-               ironlake_enable_irq(dev_priv, gflag);
+               ironlake_enable_irq(dev_priv, mask);
        }
        spin_unlock(&ring->irq_lock);
  
        return true;
  }
  
  static void
- gen6_ring_put_irq(struct intel_ring_buffer *ring, u32 gflag, u32 rflag)
+ gen6_ring_put_irq(struct intel_ring_buffer *ring)
  {
        struct drm_device *dev = ring->dev;
        drm_i915_private_t *dev_priv = dev->dev_private;
+       u32 mask = ring->irq_enable;
  
        spin_lock(&ring->irq_lock);
        if (--ring->irq_refcount == 0) {
-               ring->irq_mask |= rflag;
+               ring->irq_mask |= mask;
                I915_WRITE_IMR(ring, ring->irq_mask);
-               ironlake_disable_irq(dev_priv, gflag);
+               ironlake_disable_irq(dev_priv, mask);
        }
        spin_unlock(&ring->irq_lock);
  
@@@ -1191,6 -1193,18 +1193,6 @@@ int intel_wait_ring_buffer(struct intel
        struct drm_i915_private *dev_priv = dev->dev_private;
        unsigned long end;
        int ret;
 -      u32 head;
 -
 -      /* If the reported head position has wrapped or hasn't advanced,
 -       * fallback to the slow and accurate path.
 -       */
 -      head = intel_read_status_page(ring, 4);
 -      if (head > ring->head) {
 -              ring->head = head;
 -              ring->space = ring_space(ring);
 -              if (ring->space >= n)
 -                      return 0;
 -      }
  
        ret = intel_ring_wait_request(ring, n);
        if (ret != -ENOSPC)
@@@ -1361,38 -1375,6 +1363,6 @@@ gen6_ring_dispatch_execbuffer(struct in
        return 0;
  }
  
- static bool
- gen6_render_ring_get_irq(struct intel_ring_buffer *ring)
- {
-       return gen6_ring_get_irq(ring,
-                                GT_USER_INTERRUPT,
-                                GEN6_RENDER_USER_INTERRUPT);
- }
- static void
- gen6_render_ring_put_irq(struct intel_ring_buffer *ring)
- {
-       return gen6_ring_put_irq(ring,
-                                GT_USER_INTERRUPT,
-                                GEN6_RENDER_USER_INTERRUPT);
- }
- static bool
- gen6_bsd_ring_get_irq(struct intel_ring_buffer *ring)
- {
-       return gen6_ring_get_irq(ring,
-                                GT_GEN6_BSD_USER_INTERRUPT,
-                                GEN6_BSD_USER_INTERRUPT);
- }
- static void
- gen6_bsd_ring_put_irq(struct intel_ring_buffer *ring)
- {
-       return gen6_ring_put_irq(ring,
-                                GT_GEN6_BSD_USER_INTERRUPT,
-                                GEN6_BSD_USER_INTERRUPT);
- }
  /* ring buffer for Video Codec for Gen6+ */
  static const struct intel_ring_buffer gen6_bsd_ring = {
        .name                   = "gen6 bsd ring",
        .flush                  = gen6_ring_flush,
        .add_request            = gen6_add_request,
        .get_seqno              = gen6_ring_get_seqno,
-       .irq_get                = gen6_bsd_ring_get_irq,
-       .irq_put                = gen6_bsd_ring_put_irq,
+       .irq_enable             = GEN6_BSD_USER_INTERRUPT,
+       .irq_get                = gen6_ring_get_irq,
+       .irq_put                = gen6_ring_put_irq,
        .dispatch_execbuffer    = gen6_ring_dispatch_execbuffer,
        .sync_to                = gen6_bsd_ring_sync_to,
        .semaphore_register     = {MI_SEMAPHORE_SYNC_VR,
  
  /* Blitter support (SandyBridge+) */
  
- static bool
- blt_ring_get_irq(struct intel_ring_buffer *ring)
- {
-       return gen6_ring_get_irq(ring,
-                                GT_BLT_USER_INTERRUPT,
-                                GEN6_BLITTER_USER_INTERRUPT);
- }
- static void
- blt_ring_put_irq(struct intel_ring_buffer *ring)
- {
-       gen6_ring_put_irq(ring,
-                         GT_BLT_USER_INTERRUPT,
-                         GEN6_BLITTER_USER_INTERRUPT);
- }
  static int blt_ring_flush(struct intel_ring_buffer *ring,
                          u32 invalidate, u32 flush)
  {
@@@ -1463,8 -1430,9 +1418,9 @@@ static const struct intel_ring_buffer g
        .flush                  = blt_ring_flush,
        .add_request            = gen6_add_request,
        .get_seqno              = gen6_ring_get_seqno,
-       .irq_get                = blt_ring_get_irq,
-       .irq_put                = blt_ring_put_irq,
+       .irq_get                = gen6_ring_get_irq,
+       .irq_put                = gen6_ring_put_irq,
+       .irq_enable             = GEN6_BLITTER_USER_INTERRUPT,
        .dispatch_execbuffer    = gen6_ring_dispatch_execbuffer,
        .sync_to                = gen6_blt_ring_sync_to,
        .semaphore_register     = {MI_SEMAPHORE_SYNC_BR,
@@@ -1482,8 -1450,9 +1438,9 @@@ int intel_init_render_ring_buffer(struc
        if (INTEL_INFO(dev)->gen >= 6) {
                ring->add_request = gen6_add_request;
                ring->flush = gen6_render_ring_flush;
-               ring->irq_get = gen6_render_ring_get_irq;
-               ring->irq_put = gen6_render_ring_put_irq;
+               ring->irq_get = gen6_ring_get_irq;
+               ring->irq_put = gen6_ring_put_irq;
+               ring->irq_enable = GT_USER_INTERRUPT;
                ring->get_seqno = gen6_ring_get_seqno;
        } else if (IS_GEN5(dev)) {
                ring->add_request = pc_render_add_request;
@@@ -1506,8 -1475,9 +1463,9 @@@ int intel_render_ring_init_dri(struct d
        *ring = render_ring;
        if (INTEL_INFO(dev)->gen >= 6) {
                ring->add_request = gen6_add_request;
-               ring->irq_get = gen6_render_ring_get_irq;
-               ring->irq_put = gen6_render_ring_put_irq;
+               ring->irq_get = gen6_ring_get_irq;
+               ring->irq_put = gen6_ring_put_irq;
+               ring->irq_enable = GT_USER_INTERRUPT;
        } else if (IS_GEN5(dev)) {
                ring->add_request = pc_render_add_request;
                ring->get_seqno = pc_render_get_seqno;
diff --combined include/drm/drmP.h
@@@ -91,7 -91,6 +91,7 @@@ struct drm_device
  #define DRM_UT_CORE           0x01
  #define DRM_UT_DRIVER         0x02
  #define DRM_UT_KMS            0x04
 +#define DRM_UT_PRIME          0x08
  /*
   * Four debug categories are defined:
   * drm_core, drm_driver, drm_kms and drm_prime.
@@@ -151,7 -150,6 +151,7 @@@ int drm_err(const char *func, const cha
  #define DRIVER_IRQ_VBL2    0x800
  #define DRIVER_GEM         0x1000
  #define DRIVER_MODESET     0x2000
 +#define DRIVER_PRIME       0x4000
  
  #define DRIVER_BUS_PCI 0x1
  #define DRIVER_BUS_PLATFORM 0x2
                drm_ut_debug_printk(DRM_UT_KMS, DRM_NAME,               \
                                         __func__, fmt, ##args);        \
        } while (0)
 +#define DRM_DEBUG_PRIME(fmt, args...)                                 \
 +      do {                                                            \
 +              drm_ut_debug_printk(DRM_UT_PRIME, DRM_NAME,             \
 +                                      __func__, fmt, ##args);         \
 +      } while (0)
  #define DRM_LOG(fmt, args...)                                         \
        do {                                                            \
                drm_ut_debug_printk(DRM_UT_CORE, NULL,                  \
  #else
  #define DRM_DEBUG_DRIVER(fmt, args...) do { } while (0)
  #define DRM_DEBUG_KMS(fmt, args...)   do { } while (0)
 +#define DRM_DEBUG_PRIME(fmt, args...) do { } while (0)
  #define DRM_DEBUG(fmt, arg...)                 do { } while (0)
  #define DRM_LOG(fmt, arg...)          do { } while (0)
  #define DRM_LOG_KMS(fmt, args...) do { } while (0)
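drm.debug is a bitmask of the DRM_UT_* categories, so the new PRIME messages are emitted only when bit 0x08 is set, e.g. booting with drm.debug=0x08. A usage sketch (handle is a placeholder variable):

        /* Silent unless drm.debug includes DRM_UT_PRIME (0x08). */
        DRM_DEBUG_PRIME("exported dma-buf for GEM handle %u\n", handle);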
@@@ -418,12 -410,6 +418,12 @@@ struct drm_pending_event 
        void (*destroy)(struct drm_pending_event *event);
  };
  
 +/* initial implementation using a linked list - TODO: hashtab */
 +struct drm_prime_file_private {
 +      struct list_head head;
 +      struct mutex lock;
 +};
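The list is meant to map each imported dma_buf back to the GEM handle it produced for this file, so repeated imports can hand back the same handle. One plausible shape for the per-buffer entry (a sketch; the real definition belongs in drm_prime.c, not this header):

struct drm_prime_member {
        struct list_head entry;   /* links into drm_prime_file_private.head */
        struct dma_buf *dma_buf;  /* the imported buffer */
        uint32_t handle;          /* GEM handle it maps to for this file */
};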
 +
  /** File private data */
  struct drm_file {
        int authenticated;
        wait_queue_head_t event_wait;
        struct list_head event_list;
        int event_space;
 +
 +      struct drm_prime_file_private prime;
  };
  
  /** Wait queue */
@@@ -668,12 -652,6 +668,12 @@@ struct drm_gem_object 
        uint32_t pending_write_domain;
  
        void *driver_private;
 +
 +      /* dma buf exported from this GEM object */
 +      struct dma_buf *export_dma_buf;
 +
 +      /* dma buf attachment backing this object */
 +      struct dma_buf_attachment *import_attach;
  };
  
  #include "drm_crtc.h"
@@@ -912,20 -890,6 +912,20 @@@ struct drm_driver 
        int (*gem_open_object) (struct drm_gem_object *, struct drm_file *);
        void (*gem_close_object) (struct drm_gem_object *, struct drm_file *);
  
 +      /* prime: */
 +      /* export handle -> fd (see drm_gem_prime_handle_to_fd() helper) */
 +      int (*prime_handle_to_fd)(struct drm_device *dev, struct drm_file *file_priv,
 +                              uint32_t handle, uint32_t flags, int *prime_fd);
 +      /* import fd -> handle (see drm_gem_prime_fd_to_handle() helper) */
 +      int (*prime_fd_to_handle)(struct drm_device *dev, struct drm_file *file_priv,
 +                              int prime_fd, uint32_t *handle);
 +      /* export GEM -> dmabuf */
 +      struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
 +                              struct drm_gem_object *obj, int flags);
 +      /* import dmabuf -> GEM */
 +      struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
 +                              struct dma_buf *dma_buf);
 +
        /* vga arb irq handler */
        void (*vgaarb_irq)(struct drm_device *dev, bool state);
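A driver opts in by setting DRIVER_PRIME in its feature flags and filling these hooks; the fd<->handle pair can usually point straight at the generic helpers declared later in this header. A sketch for a hypothetical driver (the foo_* callbacks are placeholders):

static struct drm_driver foo_driver = {
        .driver_features    = DRIVER_GEM | DRIVER_PRIME,
        /* generic fd<->handle plumbing */
        .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
        .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
        /* driver-specific GEM<->dma-buf conversion */
        .gem_prime_export   = foo_gem_prime_export,
        .gem_prime_import   = foo_gem_prime_import,
        /* ... remaining ops elided ... */
};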
  
@@@ -1206,8 -1170,6 +1206,8 @@@ struct drm_device 
        struct idr object_name_idr;
        /*@} */
        int switch_power_state;
 +
 +      atomic_t unplugged; /* device has been unplugged or gone away */
  };
  
  #define DRM_SWITCH_POWER_ON 0
@@@ -1273,19 -1235,6 +1273,19 @@@ static inline int drm_mtrr_del(int hand
  }
  #endif
  
 +static inline void drm_device_set_unplugged(struct drm_device *dev)
 +{
 +      smp_wmb();
 +      atomic_set(&dev->unplugged, 1);
 +}
 +
 +static inline int drm_device_is_unplugged(struct drm_device *dev)
 +{
 +      int ret = atomic_read(&dev->unplugged);
 +      smp_rmb();
 +      return ret;
 +}
 +
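The smp_wmb()/smp_rmb() pair orders the flag against the teardown stores around it; entry points such as ioctls, opens and faults are expected to test the flag and bail out. A consumer-side sketch (foo_ioctl is a hypothetical example, not from this patch):

static int foo_ioctl(struct drm_device *dev, void *data,
                     struct drm_file *file_priv)
{
        /* Refuse new work once the underlying device is gone. */
        if (drm_device_is_unplugged(dev))
                return -ENODEV;

        /* ... normal ioctl work ... */
        return 0;
}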
  /******************************************************************/
  /** \name Internal function definitions */
  /*@{*/
@@@ -1315,6 -1264,11 +1315,6 @@@ extern unsigned int drm_poll(struct fil
  
                                /* Memory management support (drm_memory.h) */
  #include "drm_memory.h"
 -extern void drm_mem_init(void);
 -extern int drm_mem_info(char *buf, char **start, off_t offset,
 -                      int request, int *eof, void *data);
 -extern void *drm_realloc(void *oldpt, size_t oldsize, size_t size, int area);
 -
  extern void drm_free_agp(DRM_AGP_MEM * handle, int pages);
  extern int drm_bind_agp(DRM_AGP_MEM * handle, unsigned int start);
  extern DRM_AGP_MEM *drm_agp_bind_pages(struct drm_device *dev,
@@@ -1378,6 -1332,7 +1378,7 @@@ extern int drm_remove_magic(struct drm_
  
  /* Cache management (drm_cache.c) */
  void drm_clflush_pages(struct page *pages[], unsigned long num_pages);
+ void drm_clflush_virt_range(char *addr, unsigned long length);
  
                                /* Locking IOCTL support (drm_lock.h) */
  extern int drm_lock(struct drm_device *dev, void *data,
@@@ -1429,8 -1384,12 +1430,8 @@@ extern void drm_core_reclaim_buffers(st
                                /* IRQ support (drm_irq.h) */
  extern int drm_control(struct drm_device *dev, void *data,
                       struct drm_file *file_priv);
 -extern irqreturn_t drm_irq_handler(DRM_IRQ_ARGS);
  extern int drm_irq_install(struct drm_device *dev);
  extern int drm_irq_uninstall(struct drm_device *dev);
 -extern void drm_driver_irq_preinstall(struct drm_device *dev);
 -extern void drm_driver_irq_postinstall(struct drm_device *dev);
 -extern void drm_driver_irq_uninstall(struct drm_device *dev);
  
  extern int drm_vblank_init(struct drm_device *dev, int num_crtcs);
  extern int drm_wait_vblank(struct drm_device *dev, void *data,
@@@ -1506,7 -1465,6 +1507,7 @@@ extern void drm_master_put(struct drm_m
  
  extern void drm_put_dev(struct drm_device *dev);
  extern int drm_put_minor(struct drm_minor **minor);
 +extern void drm_unplug_dev(struct drm_device *dev);
  extern unsigned int drm_debug;
  
  extern unsigned int drm_vblank_offdelay;
@@@ -1545,32 -1503,6 +1546,32 @@@ extern int drm_vblank_info(struct seq_f
  extern int drm_clients_info(struct seq_file *m, void* data);
  extern int drm_gem_name_info(struct seq_file *m, void *data);
  
 +
 +extern int drm_gem_prime_handle_to_fd(struct drm_device *dev,
 +              struct drm_file *file_priv, uint32_t handle, uint32_t flags,
 +              int *prime_fd);
 +extern int drm_gem_prime_fd_to_handle(struct drm_device *dev,
 +              struct drm_file *file_priv, int prime_fd, uint32_t *handle);
 +
 +extern int drm_prime_handle_to_fd_ioctl(struct drm_device *dev, void *data,
 +                                      struct drm_file *file_priv);
 +extern int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
 +                                      struct drm_file *file_priv);
 +
 +extern struct sg_table *drm_prime_pages_to_sg(struct page **pages, int nr_pages);
 +extern void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg);
 +
 +
 +void drm_prime_init_file_private(struct drm_prime_file_private *prime_fpriv);
 +void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv);
 +int drm_prime_add_imported_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t handle);
 +int drm_prime_lookup_imported_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t *handle);
 +void drm_prime_remove_imported_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf);
 +
 +int drm_prime_add_dma_buf(struct drm_device *dev, struct drm_gem_object *obj);
 +int drm_prime_lookup_obj(struct drm_device *dev, struct dma_buf *buf,
 +                       struct drm_gem_object **obj);
 +
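Together these back the two PRIME ioctls: the exporter turns a GEM handle into a dma-buf fd, passes the fd to another process (typically over a unix socket), and the importer turns it back into a handle in its own namespace. A userspace-side sketch (assumes struct drm_prime_handle and the ioctl numbers from the uapi drm.h plus <sys/ioctl.h>; the fds and handle are placeholders, error handling elided):

        struct drm_prime_handle args = {
                .handle = handle,       /* GEM handle to share */
                .flags  = DRM_CLOEXEC,
        };
        ioctl(exporter_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args);
        /* ... send args.fd to the importing process ... */

        struct drm_prime_handle in = { .fd = received_fd };
        ioctl(importer_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &in);
        /* in.handle now names the shared buffer in the importer */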
  #if DRM_DEBUG_CODE
  extern int drm_vma_info(struct seq_file *m, void *data);
  #endif