sh: Ensure all PG_dcache_dirty pages are written back.
author	Markus Pietrek <Markus.Pietrek@emtrion.de>
Thu, 24 Dec 2009 06:12:02 +0000 (15:12 +0900)
committer	Paul Mundt <lethal@linux-sh.org>
Thu, 24 Dec 2009 06:12:02 +0000 (15:12 +0900)
With some of the cache rework an address aliasing optimization was added,
but it fails on certain mappings, resulting in pages with PG_dcache_dirty
set never writing back their dcache lines. This patch reverts to the
earlier behaviour of simply always writing back when the dirty bit is set.

Signed-off-by: Markus Pietrek <Markus.Pietrek@emtrion.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
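
For reference, a sketch of how __update_cache() reads once the hunk below is
applied. Only the body inside the pfn_valid() check is taken from the diff;
the function signature, the pfn extraction, and the n_aliases early return are
assumptions about the surrounding context in arch/sh/mm/cache.c and may differ
slightly from the actual file.

void __update_cache(struct vm_area_struct *vma,
		    unsigned long address, pte_t pte)
{
	struct page *page;
	unsigned long pfn = pte_pfn(pte);

	/* Assumed preamble: CPUs without dcache aliases need no work here. */
	if (!boot_cpu_data.dcache.n_aliases)
		return;

	page = pfn_to_page(pfn);
	if (pfn_valid(pfn)) {
		int dirty = test_and_clear_bit(PG_dcache_dirty, &page->flags);

		/*
		 * Purge the kernel mapping whenever the page was marked
		 * dirty; the pages_do_alias() short-circuit is gone, so
		 * the dcache lines are always written back.
		 */
		if (dirty)
			__flush_purge_region(page_address(page), PAGE_SIZE);
	}
}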
arch/sh/mm/cache.c

index e9415d3..b8607fa 100644
@@ -133,12 +133,8 @@ void __update_cache(struct vm_area_struct *vma,
        page = pfn_to_page(pfn);
        if (pfn_valid(pfn)) {
                int dirty = test_and_clear_bit(PG_dcache_dirty, &page->flags);
-               if (dirty) {
-                       unsigned long addr = (unsigned long)page_address(page);
-
-                       if (pages_do_alias(addr, address & PAGE_MASK))
-                               __flush_purge_region((void *)addr, PAGE_SIZE);
-               }
+               if (dirty)
+                       __flush_purge_region(page_address(page), PAGE_SIZE);
        }
 }