kernel-2.6.18-238.el5.src.rpm

From: Jiri Pirko <jpirko@redhat.com>
Date: Thu, 19 Aug 2010 11:28:35 -0400
Subject: [mm] fix page table unmap for stack guard page properly
Message-id: <1282217317-11853-4-git-send-email-jpirko@redhat.com>
Patchwork-id: 27715
O-Subject: [PATCH RHEL5.6 3/5] mm: fix page table unmap for stack guard page
	properly
Bugzilla: 607858
CVE: CVE-2010-2240
RH-Acked-by: Rik van Riel <riel@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>

    We do in fact need to unmap the page table _before_ doing the whole
    stack guard page logic, because if it is needed (mainly 32-bit x86 with
    PAE and CONFIG_HIGHPTE, but other architectures may use it too) then it
    will do a kmap_atomic/kunmap_atomic.

    And those kmaps will create an atomic region that we cannot do
    allocations in.  However, the whole stack expand code will need to do
    anon_vma_prepare() and vma_lock_anon_vma() and they cannot do that in an
    atomic region.

    Now, a better model might actually be to do the anon_vma_prepare() when
    _creating_ a VM_GROWSDOWN segment, and not have to worry about any of
    this at page fault time.  But in the meantime, this is the
    straightforward fix for the issue.

    See https://bugzilla.kernel.org/show_bug.cgi?id=16588 for details.

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
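
As a rough sketch of the constraint being fixed (helper names follow the
upstream kernels of this era; the 2.6.18 backport may spell some of them
slightly differently, so treat this as orientation, not drop-in code):

	/*
	 * With CONFIG_HIGHPTE, pte_offset_map() kmap_atomic()s the
	 * page-table page, so everything between map and unmap runs in an
	 * atomic region where sleeping is forbidden.
	 */
	page_table = pte_offset_map(pmd, address);	/* kmap_atomic()   */
	...
	pte_unmap(page_table);				/* kunmap_atomic() */

	/*
	 * Only from here on may we call code that can sleep.  The
	 * guard-page check can end up expanding a VM_GROWSDOWN stack:
	 *
	 *   check_stack_guard_page()
	 *     expand_stack()              faulting at vma->vm_start
	 *       anon_vma_prepare()        may allocate with GFP_KERNEL
	 *       vma_lock_anon_vma()       takes the anon_vma lock
	 */
	if (check_stack_guard_page(vma, address) < 0)
		return VM_FAULT_SIGBUS;

	/* Re-map and lock the pte only when we are ready to install it. */
	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);

pte_offset_map_lock() combines the re-map with taking the pte lock, which is
why the second hunk below drops the separate pte_lockptr()/spin_lock() pair.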

diff --git a/mm/memory.c b/mm/memory.c
index 2be90cb..997afd3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2385,15 +2385,15 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	spinlock_t *ptl;
 	pte_t entry;
 
-	if (check_stack_guard_page(vma, address) < 0) {
-		pte_unmap(page_table);
+	pte_unmap(page_table);
+
+	/* Check if we need to add a guard page to the stack */
+	if (check_stack_guard_page(vma, address) < 0)
 		return VM_FAULT_SIGBUS;
-	}
 
+	/* Use the zero-page for reads */
 	if (write_access) {
 		/* Allocate our own private page. */
-		pte_unmap(page_table);
-
 		if (unlikely(anon_vma_prepare(vma)))
 			goto oom;
 		page = alloc_zeroed_user_highpage(vma, address);
@@ -2415,8 +2415,7 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		page_cache_get(page);
 		entry = mk_pte(page, vma->vm_page_prot);
 
-		ptl = pte_lockptr(mm, pmd);
-		spin_lock(ptl);
+		page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
 		if (!pte_none(*page_table))
 			goto release;
 		inc_mm_counter(mm, file_rss);