kernel-2.6.18-238.el5.src.rpm

From: Jiri Pirko <jpirko@redhat.com>
Date: Thu, 19 Aug 2010 11:28:33 -0400
Subject: [mm] keep a guard page below a grow-down stack segment
Message-id: <1282217317-11853-2-git-send-email-jpirko@redhat.com>
Patchwork-id: 27711
O-Subject: [PATCH RHEL5.6 1/5] mm: keep a guard page below a grow-down stack
	segment
Bugzilla: 607858
CVE: CVE-2010-2240
RH-Acked-by: Rik van Riel <riel@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>

    This is a rather minimally invasive patch to solve the problem of the
    user stack growing into a memory mapped area below it.  Whenever we fill
    the first page of the stack segment, expand the segment down by one
    page.

    Now, admittedly some odd application might _want_ the stack to grow down
    into the preceding memory mapping, and so we may at some point need to
    make this a process tunable (some people might also want to have more
    than a single page of guarding), but let's try the minimal approach
    first.

    Tested with a trivial application that maps a single page just below
    the stack, and then starts recursing.  Without this, we will get a
    SIGSEGV _after_ the stack has smashed the mapping.  With this patch,
    we'll get a nice SIGBUS just as the stack touches the page just above
    the mapping.
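For illustration, a minimal sketch of such a test program follows. It is not
the original test; the GAP_PAGES constant and the /proc/self/maps parsing are
choices made here to place a page a little below the current stack VMA. Build
it without optimization so the recursion is not collapsed into a loop.

/*
 * Sketch of a guard-page test: map one anonymous page slightly below
 * the current bottom of the stack, then recurse until the stack grows
 * down to it.  Unpatched kernels deliver SIGSEGV only after the stack
 * has overwritten the mapping; with the patch the process gets SIGBUS
 * as soon as the stack reaches the page just above the mapping.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define GAP_PAGES 64	/* arbitrary gap so the recursion has room to grow */

/* Return the low address of the [stack] VMA from /proc/self/maps. */
static unsigned long stack_start(void)
{
	char line[256];
	unsigned long start = 0, end;
	FILE *f = fopen("/proc/self/maps", "r");

	if (!f) {
		perror("fopen");
		exit(1);
	}
	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, "[stack]")) {
			sscanf(line, "%lx-%lx", &start, &end);
			break;
		}
	}
	fclose(f);
	return start;
}

/* Recurse with a per-frame buffer so the stack grows quickly. */
static void recurse(int depth)
{
	volatile char pad[1024];

	pad[0] = (char)depth;
	recurse(depth + 1);
}

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	unsigned long bottom = stack_start();
	void *map;

	if (!bottom) {
		fprintf(stderr, "no [stack] entry found\n");
		return 1;
	}

	/* Place one page a little below the current bottom of the stack. */
	map = mmap((void *)(bottom - GAP_PAGES * page), page,
		   PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	recurse(0);	/* grows the stack down toward the mapping */
	return 0;
}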

Signed-off-by: Jiri Pirko <jpirko@redhat.com>

diff --git a/mm/memory.c b/mm/memory.c
index 1ebc53e..fef663d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2353,6 +2353,26 @@ out_nomap:
 }
 
 /*
+ * This is like a special single-page "expand_downwards()",
+ * except we must first make sure that 'address-PAGE_SIZE'
+ * doesn't hit another vma.
+ *
+ * The "find_vma()" will do the right thing even if we wrap
+ */
+static inline int check_stack_guard_page(struct vm_area_struct *vma, unsigned long address)
+{
+	address &= PAGE_MASK;
+	if ((vma->vm_flags & VM_GROWSDOWN) && address == vma->vm_start) {
+		address -= PAGE_SIZE;
+		if (find_vma(vma->vm_mm, address) != vma)
+			return -ENOMEM;
+
+		expand_stack(vma, address);
+	}
+	return 0;
+}
+
+/*
  * We enter with non-exclusive mmap_sem (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
  * We return with mmap_sem still held, but pte unmapped and unlocked.
@@ -2365,6 +2385,9 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	spinlock_t *ptl;
 	pte_t entry;
 
+	if (check_stack_guard_page(vma, address) < 0)
+		return VM_FAULT_SIGBUS;
+
 	if (write_access) {
 		/* Allocate our own private page. */
 		pte_unmap(page_table);