X-Git-Url: https://git.openpandora.org/cgi-bin/gitweb.cgi?a=blobdiff_plain;f=Documentation%2Fcachetlb.txt;h=da42ab414c4864db4bb9d35b0cca9005b27670d2;hb=5a86102248592e178a9023359ccf7f0e489d8e35;hp=73e794f0ff0924e2432e305cb5520345f776160b;hpb=fb7665544dd60e016494cd5531f5b65ddae22ddc;p=pandora-kernel.git

diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index 73e794f0ff09..da42ab414c48 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -87,30 +87,7 @@ changes occur:
 
 	This is used primarily during fault processing.
 
-5) void flush_tlb_pgtables(struct mm_struct *mm,
-			   unsigned long start, unsigned long end)
-
-   The software page tables for address space 'mm' for virtual
-   addresses in the range 'start' to 'end-1' are being torn down.
-
-   Some platforms cache the lowest level of the software page tables
-   in a linear virtually mapped array, to make TLB miss processing
-   more efficient.  On such platforms, since the TLB is caching the
-   software page table structure, it needs to be flushed when parts
-   of the software page table tree are unlinked/freed.
-
-   Sparc64 is one example of a platform which does this.
-
-   Usually, when munmap()'ing an area of user virtual address
-   space, the kernel leaves the page table parts around and just
-   marks the individual pte's as invalid.  However, if very large
-   portions of the address space are unmapped, the kernel frees up
-   those portions of the software page tables to prevent potential
-   excessive kernel memory usage caused by erratic mmap/mmunmap
-   sequences.  It is at these times that flush_tlb_pgtables will
-   be invoked.
-
-6) void update_mmu_cache(struct vm_area_struct *vma,
+5) void update_mmu_cache(struct vm_area_struct *vma,
 			 unsigned long address, pte_t pte)
 
 	At the end of every page fault, this routine is invoked to
@@ -123,7 +100,7 @@ changes occur:
 	translations for software managed TLB configurations.
 	The sparc64 port currently does this.
 
-7) void tlb_migrate_finish(struct mm_struct *mm)
+6) void tlb_migrate_finish(struct mm_struct *mm)
 
 	This interface is called at the end of an explicit
 	process migration. This interface provides a hook
@@ -133,12 +110,6 @@ changes occur:
 	The ia64 sn2 platform is one example of a platform
 	that uses this interface.
 
-8) void lazy_mmu_prot_update(pte_t pte)
-	This interface is called whenever the protection on
-	any user PTEs change.  This interface provides a notification
-	to architecture specific code to take appropriate action.
-
-
 Next, we have the cache flushing interfaces.  In general, when Linux
 is changing an existing virtual-->physical mapping to a new value,
 the sequence will be in one of the following forms:
@@ -253,7 +224,7 @@ Here are the routines, one by one:
 
 	The first of these two routines is invoked after map_vm_area()
 	has installed the page table entries.  The second is invoked
-	before unmap_vm_area() deletes the page table entries.
+	before unmap_kernel_range() deletes the page table entries.
 
 There exists another whole class of cpu cache issues which currently
 require a whole different set of interfaces to handle properly.
@@ -373,14 +344,15 @@ maps this page at its virtual address.
 	likely that you will need to flush the instruction cache
 	for copy_to_user_page().
 
-	void flush_anon_page(struct page *page, unsigned long vmaddr)
+	void flush_anon_page(struct vm_area_struct *vma, struct page *page,
+			     unsigned long vmaddr)
 	  When the kernel needs to access the contents of an anonymous
 	  page, it calls this function (currently only
 	  get_user_pages()).  Note: flush_dcache_page() deliberately
 	  doesn't work for an anonymous page.  The default
 	  implementation is a nop (and should remain so for all coherent
 	  architectures).  For incoherent architectures, it should flush
-	  the cache of the page at vmaddr in the current user process.
+	  the cache of the page at vmaddr.
 
 	void flush_kernel_dcache_page(struct page *page)
 	  When the kernel needs to modify a user page is has obtained
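
As a rough sketch of how the renumbered update_mmu_cache() hook is
used: architectures whose TLB is refilled by hardware page-table walks
typically define the hook away entirely, while a software-managed TLB
port can use it to preload the freshly installed translation so the
faulting access does not immediately miss again.  Everything below is
illustrative only; tlb_preload() is a hypothetical helper standing in
for whatever TLB-write primitive a real port provides, not any actual
port's code.

	/* Hardware-walked TLBs need no action here; a no-op stub is
	 * common.  A software-TLB port might do something like this. */
	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t pte)
	{
		/* Preloading only makes sense for the active mm. */
		if (vma->vm_mm != current->active_mm)
			return;

		/* Hypothetical primitive: write the new translation
		 * straight into the TLB. */
		tlb_preload(address, pte);
	}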
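
The new three-argument flush_anon_page() prototype in the last hunk
matches the generic fallback that coherent architectures rely on; in
kernels of this vintage it lives in include/linux/highmem.h and is
simply a nop unless the architecture opts in:

	/* Generic fallback: coherent architectures do nothing.  An
	 * incoherent architecture defines ARCH_HAS_FLUSH_ANON_PAGE and
	 * supplies a real flush of the user mapping at vmaddr. */
	#ifndef ARCH_HAS_FLUSH_ANON_PAGE
	static inline void flush_anon_page(struct vm_area_struct *vma,
					   struct page *page,
					   unsigned long vmaddr)
	{
	}
	#endif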