From: Scott Moser <smoser@redhat.com>
Date: Fri, 16 Nov 2007 15:45:05 -0500
Subject: [mm] fix hugepage allocation with memoryless nodes
Message-id: 1195245906265-do-send-email-smoser@redhat.com
O-Subject: [PATCH RHEL5u2] bz239790 Unequal allocation of hugepages on pseries [3/3]
Bugzilla: 239790

RH5.1 backport of:
commit 63b4613c3f0d4b724ba259dc6c201bb68b884e1a
Author: Nishanth Aravamudan <nacc@us.ibm.com>
Date:   Tue Oct 16 01:26:24 2007 -0700

    hugetlb: fix hugepage allocation with memoryless nodes

    Anton found a problem with the hugetlb pool allocation when some nodes have
    no memory (http://marc.info/?l=linux-mm&m=118133042025995&w=2).  Lee worked
    on versions that tried to fix it, but none were accepted.  Christoph has
    created a set of patches which allow for GFP_THISNODE allocations to fail
    if the node has no memory.

    Currently, alloc_fresh_huge_page() returns NULL when it is not able to
    allocate a huge page on the current node, as specified by its custom
    interleave variable.  The callers of this function, though, assume that a
    failure in alloc_fresh_huge_page() indicates no hugepages can be allocated
    on the system at all.  This might not be the case, for instance, if we have
    an uneven NUMA system, and we happen to try to allocate a hugepage on a
    node with less memory and fail, while there is still plenty of free memory
    on the other nodes.

    To correct this, make alloc_fresh_huge_page() search through all online
    nodes before deciding no hugepages can be allocated.  Add a helper function
    for actually allocating the hugepage.  Use a new global nid iterator to
    control which nid to allocate on.

    Note: we expect particular semantics for __GFP_THISNODE, which are now
    enforced even for memoryless nodes.  That is, there should be no
    fallback to other nodes.  Therefore, we rely on the nid passed into
    alloc_pages_node() to be the nid the page comes from.  If this is
    incorrect, accounting will break.
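
    As a rough userspace sketch of this strategy (the node layout, page
    counts, and helper names below are invented for illustration; this is
    a standalone C program, not the kernel code itself), a persistent
    cursor plus at most one full pass over the nodes spreads allocations
    across the nodes that actually have memory, and only reports failure
    when none of them can satisfy the request:

    #include <stdio.h>
    #include <stdbool.h>

    #define NR_NODES 4

    /* Hypothetical layout: nodes 2 and 3 are memoryless. */
    static bool node_has_memory[NR_NODES] = { true, true, false, false };
    static int pages_on_node[NR_NODES];
    static int next_nid;        /* plays the role of hugetlb_next_nid */

    /* Stand-in for alloc_fresh_huge_page_node(): strict per-node
     * allocation with no fallback (the __GFP_THISNODE-style behaviour). */
    static bool alloc_one_page_on(int nid)
    {
            if (!node_has_memory[nid])
                    return false;
            pages_on_node[nid]++;
            return true;
    }

    /* Stand-in for alloc_fresh_huge_page(): try each node at most once,
     * starting at the persistent cursor, and always advance the cursor
     * so the next call starts on the following node. */
    static bool alloc_one_page(void)
    {
            int start_nid = next_nid;
            bool ok = false;

            do {
                    ok = alloc_one_page_on(next_nid);
                    next_nid = (next_nid + 1) % NR_NODES;
            } while (!ok && next_nid != start_nid);

            return ok;
    }

    int main(void)
    {
            int i, nid;

            for (i = 0; i < 100; i++)
                    if (!alloc_one_page())
                            break;  /* every node failed: give up */

            for (nid = 0; nid < NR_NODES; nid++)
                    printf("Node %d pages: %3d\n", nid, pages_on_node[nid]);
            return 0;
    }

    With this made-up layout the program prints a 50/50/0/0 split for a
    pool of 100 pages, the same even distribution shown in the "After"
    figures below.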

    Tested on x86 !NUMA, x86 NUMA, x86_64 NUMA and ppc64 NUMA (with 2
    memoryless nodes).

    Before on the ppc64 box:
    Trying to clear the hugetlb pool
    Done.       0 free
    Trying to resize the pool to 100
    Node 0 HugePages_Free:     25
    Node 1 HugePages_Free:     75
    Node 2 HugePages_Free:      0
    Node 3 HugePages_Free:      0
    Done. Initially     100 free
    Trying to resize the pool to 200
    Node 0 HugePages_Free:     50
    Node 1 HugePages_Free:    150
    Node 2 HugePages_Free:      0
    Node 3 HugePages_Free:      0
    Done.     200 free

    After:
    Trying to clear the hugetlb pool
    Done.       0 free
    Trying to resize the pool to 100
    Node 0 HugePages_Free:     50
    Node 1 HugePages_Free:     50
    Node 2 HugePages_Free:      0
    Node 3 HugePages_Free:      0
    Done. Initially     100 free
    Trying to resize the pool to 200
    Node 0 HugePages_Free:    100
    Node 1 HugePages_Free:    100
    Node 2 HugePages_Free:      0
    Node 3 HugePages_Free:      0
    Done.     200 free
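
    Figures like these can be gathered through the standard hugetlb
    interfaces; a rough sketch of such a test (the /proc and /sys paths
    assume a typical NUMA system, the node count is hard-coded, and this
    is not the actual test program used above) might look like:

    #include <stdio.h>
    #include <string.h>

    #define NR_NODES 4      /* adjust to the machine under test */

    /* Resize the global hugetlb pool, as the test does above. */
    static void resize_pool(int nr_pages)
    {
            FILE *f = fopen("/proc/sys/vm/nr_hugepages", "w");

            if (!f) {
                    perror("/proc/sys/vm/nr_hugepages");
                    return;
            }
            fprintf(f, "%d\n", nr_pages);
            fclose(f);
    }

    /* Print each node's HugePages_Free line from its per-node meminfo. */
    static void show_per_node_free(void)
    {
            char path[64], line[256];
            int nid;

            for (nid = 0; nid < NR_NODES; nid++) {
                    FILE *f;

                    snprintf(path, sizeof(path),
                             "/sys/devices/system/node/node%d/meminfo", nid);
                    f = fopen(path, "r");
                    if (!f)
                            continue;       /* node not present */
                    while (fgets(line, sizeof(line), f))
                            if (strstr(line, "HugePages_Free"))
                                    fputs(line, stdout);
                    fclose(f);
            }
    }

    int main(void)
    {
            resize_pool(0);         /* clear the pool */
            resize_pool(100);
            show_per_node_free();
            resize_pool(200);
            show_per_node_free();
            return 0;
    }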

    Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
    Acked-by: Christoph Lameter <clameter@sgi.com>
    Cc: Adam Litke <agl@us.ibm.com>
    Cc: David Gibson <hermes@gibson.dropbear.id.au>
    Cc: Badari Pulavarty <pbadari@us.ibm.com>
    Cc: Ken Chen <kenchen@google.com>
    Cc: William Lee Irwin III <wli@holomorphy.com>
    Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
    Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

--
 mm/hugetlb.c |   51 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 42 insertions(+), 9 deletions(-)

Acked-by: David Howells <dhowells@redhat.com>

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ec17cc5..0a80f51 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -27,6 +27,7 @@ unsigned long max_huge_pages;
 static struct list_head hugepage_freelists[MAX_NUMNODES];
 static unsigned int nr_huge_pages_node[MAX_NUMNODES];
 static unsigned int free_huge_pages_node[MAX_NUMNODES];
+static int hugetlb_next_nid;
 /*
  * Protects updates to hugepage_freelists, nr_huge_pages, and free_huge_pages
  */
@@ -99,25 +100,55 @@ static void free_huge_page(struct page *page)
 	spin_unlock(&hugetlb_lock);
 }
 
-static int alloc_fresh_huge_page(void)
+static struct page *alloc_fresh_huge_page_node(int nid)
 {
-	static int nid = 0;
 	struct page *page;
-	page = alloc_pages_node(nid, GFP_HIGHUSER|__GFP_COMP|__GFP_NOWARN,
+
+	page = alloc_pages_thisnode(nid, GFP_HIGHUSER|__GFP_COMP|__GFP_NOWARN,
 					HUGETLB_PAGE_ORDER);
-	nid = next_node(nid, node_online_map);
-	if (nid == MAX_NUMNODES)
-		nid = first_node(node_online_map);
 	if (page) {
 		set_compound_page_dtor(page, free_huge_page);
 		spin_lock(&hugetlb_lock);
 		nr_huge_pages++;
-		nr_huge_pages_node[page_to_nid(page)]++;
+		nr_huge_pages_node[nid]++;
 		spin_unlock(&hugetlb_lock);
 		put_page(page); /* free it into the hugepage allocator */
-		return 1;
 	}
-	return 0;
+	return page;
+}
+
+static int alloc_fresh_huge_page(void)
+{
+	struct page *page;
+	int start_nid;
+	int next_nid;
+	int ret = 0;
+
+	start_nid = hugetlb_next_nid;
+
+	do {
+		page = alloc_fresh_huge_page_node(hugetlb_next_nid);
+		if (page)
+			ret = 1;
+		/*
+		 * Use a helper variable to find the next node and then
+		 * copy it back to hugetlb_next_nid afterwards;
+		 * otherwise there's a window in which a racer might
+		 * pass invalid nid MAX_NUMNODES to
+		 * alloc_pages_thisnode. But we don't need to use a
+		 * spinlock here: it really doesn't matter if
+		 * occasionally a racer chooses the same nid as we do.
+		 * Move nid forward in the mask even if we just
+		 * successfully allocated a hugepage so that the next
+		 * caller gets hugepages on the next node.
+		 */
+		next_nid = next_node(hugetlb_next_nid, node_online_map);
+		if (next_nid == MAX_NUMNODES)
+			next_nid = first_node(node_online_map);
+		hugetlb_next_nid = next_nid;
+	} while (!page && hugetlb_next_nid != start_nid);
+
+	return ret;
 }
 
 static struct page *alloc_huge_page(struct vm_area_struct *vma,
@@ -154,6 +185,8 @@ static int __init hugetlb_init(void)
 	for (i = 0; i < MAX_NUMNODES; ++i)
 		INIT_LIST_HEAD(&hugepage_freelists[i]);
 
+	hugetlb_next_nid = first_node(node_online_map);
+
 	for (i = 0; i < max_huge_pages; ++i) {
 		if (!alloc_fresh_huge_page())
 			break;