kernel-2.6.18-238.el5.src.rpm

From: Larry Woodman <lwoodman@redhat.com>
Date: Mon, 19 Jul 2010 14:35:19 -0400
Subject: [mm] fix excessive memory reclaim from zones w/lots free
Message-id: <1279550119.3223.285.camel@dhcp-100-19-198.bos.redhat.com>
Patchwork-id: 26946
O-Subject: [RHEL5 Patch] prevent system from reclaiming too much memory from
	zones with lots of free memory.
Bugzilla: 604779
RH-Acked-by: Rik van Riel <riel@redhat.com>
RH-Acked-by: Danny Feng <dfeng@redhat.com>

We are seeing a customer system running RHEL5 that is paging out heavily
while there are several GB of free memory.

-vmstat-snippet---------------------------------------------------------
1  0    292 3060572   5260 1847753
2  0  73372 3108212   5256 1847068
1  0 171428 3210032   5256 1845956
------------------------------------------------------------------------

The cause of this is a 4GB x86_64 system that has most of its memory in
the DMA32 zone and only a small amount of memory in the Normal zone.
When memory is exhausted it swaps the DMA32 zone heavily because that is
where most of the RAM is, but swaps the Normal zone very lightly because
there is very little RAM there.  Over time it pretty much swaps out the
entire DMA32 zone by the time it satisfies the Normal zone memory
deficit.

-show_mem()-snippet------------------------------------------------------
Active:403572 inactive:1295480 dirty:250 writeback:0 unstable:0
free:40026 slab:258219 mapped-file:26994 mapped-anon:272389
pagetables:4126

Node 0 DMA free:11068kB min:12kB low:12kB high:16kB active:0kB
 inactive:0kB present:10652kB pages_scanned:0 all_unreclaimable? no
 lowmem_reserve[]: 0 3511 4016 4016
Node 0 DMA32 free:12068kB min:5004kB low:6252kB high:7504kB
 active:378352kB inactive:2344112kB present:3596236kB pages_scanned:0
 all_unreclaimable? no
 lowmem_reserve[]: 0 0 505 505
Node 0 Normal free:884kB min:716kB low:892kB high:1072kB active:133332kB
 inactive:161716kB present:517120kB pages_scanned:0 all_unreclaimable?
 lowmem_reserve[]: 0 0 0 0
-------------------------------------------------------------------------
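
For a rough sense of how lopsided the reclaim pressure is, here is a
back-of-the-envelope userspace sketch (not kernel code) using the LRU
sizes from the show_mem() output above.  It assumes scan targets scale
with the size of each zone's active and inactive lists, which is
roughly how shrink_zone() sizes its scan counts in this kernel:

#include <stdio.h>

int main(void)
{
	/* active + inactive in kB, taken from the show_mem() output */
	long dma32_lru  = 378352 + 2344112;	/* ~2.6GB on the LRU */
	long normal_lru = 133332 + 161716;	/* ~288MB on the LRU */

	printf("DMA32 LRU:  %ld kB\n", dma32_lru);
	printf("Normal LRU: %ld kB\n", normal_lru);
	printf("DMA32 takes roughly %ldx the scanning per pass\n",
	       dma32_lru / normal_lru);
	return 0;
}

So every balancing pass pushes roughly nine times as many DMA32 pages
toward swap as Normal pages, which is why the DMA32 zone ends up mostly
swapped out while kswapd keeps chasing the Normal zone's watermark.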

The tested fix is an upstream backport that stops reclaiming memory from
a zone once its free page count exceeds 8 times the high watermark.
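
As a sanity check on where that cutoff lands for this machine, here is a
small userspace sketch (again not kernel code; the real
zone_watermark_ok() check also factors in the allocation order and
lowmem_reserve[]) applying free > 8 * pages_high to the watermarks
quoted above:

#include <stdio.h>

int main(void)
{
	/* free and high watermark in kB, from the show_mem() output */
	struct { const char *name; long free_kb, high_kb; } z[] = {
		{ "DMA",    11068,   16 },
		{ "DMA32",  12068, 7504 },
		{ "Normal",   884, 1072 },
	};

	for (int i = 0; i < 3; i++) {
		long cutoff = 8 * z[i].high_kb;

		printf("%-6s free=%6ldkB  8*high=%6ldkB -> %s\n",
		       z[i].name, z[i].free_kb, cutoff,
		       z[i].free_kb > cutoff ? "skip shrink_zone()"
					     : "keep reclaiming");
	}
	return 0;
}

With the patch applied, kswapd leaves the DMA zone alone right away
(11068kB free vs. a 128kB cutoff) and stops pressuring DMA32 once it has
on the order of 60MB free, instead of continuing to drain it for the
sake of the Normal zone's small deficit.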

Fixes BZ 604779

Signed-off-by: Jarod Wilson <jarod@redhat.com>

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 79be7fd..517023a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1221,7 +1221,13 @@ scan:
 			temp_priority[i] = priority;
 			sc.nr_scanned = 0;
 			note_zone_scanning_priority(zone, priority);
-			shrink_zone(priority, zone, &sc);
+			/*
+			 * We put equal pressure on every zone, unless one
+			 * zone has way too many pages free already.
+			 */
+			if (!zone_watermark_ok(zone, order,
+					8*zone->pages_high, end_zone, 0))
+				shrink_zone(priority, zone, &sc);
 			reclaim_state->reclaimed_slab = 0;
 			nr_slab = shrink_slab(sc.nr_scanned, GFP_KERNEL,
 						lru_pages);