kernel-2.6.18-238.el5.src.rpm

From: David Teigland <teigland@redhat.com>
Date: Fri, 12 Nov 2010 16:53:15 -0500
Subject: [fs] dlm: reduce cond_resched during send
Message-id: <20101112165315.GC11037@redhat.com>
Patchwork-id: 29219
O-Subject: [RHEL5.6 PATCH] dlm: reduce cond_resched during send
Bugzilla: 604139
RH-Acked-by: Steven Whitehouse <swhiteho@redhat.com>
RH-Acked-by: Robert S Peterson <rpeterso@redhat.com>
RH-Acked-by: David S. Miller <davem@redhat.com>
RH-Acked-by: Neil Horman <nhorman@redhat.com>

bz 604139 (patch 2 of 2)

upstream: scheduled for 2.6.38

Calling cond_resched() after every send can unnecessarily
degrade performance.  Go back to an old method of scheduling
after 25 messages.
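
For context only (not part of the patch): below is a minimal userspace sketch of the
throttled-reschedule pattern the patch introduces, with sched_yield() standing in for
the kernel's cond_resched(); send_one_message() and the loop bounds are hypothetical
stand-ins for the real socket send loop in send_to_sock().

	/*
	 * Userspace sketch of the batched-yield pattern: instead of yielding
	 * the CPU after every message, yield only after every
	 * MAX_SEND_MSG_COUNT messages, then reset the counter.
	 */
	#include <sched.h>
	#include <stdio.h>

	#define MAX_SEND_MSG_COUNT 25	/* same batch size as the patch */

	static void send_one_message(int i)
	{
		/* placeholder for the real socket send */
		printf("sent message %d\n", i);
	}

	int main(void)
	{
		int count = 0;
		int i;

		for (i = 0; i < 1000; i++) {
			send_one_message(i);

			/* Don't starve other tasks, but don't yield on every send */
			if (++count >= MAX_SEND_MSG_COUNT) {
				sched_yield();
				count = 0;
			}
		}
		return 0;
	}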


diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 164ab41..3976ea7 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -60,6 +60,9 @@
 
 #define NEEDED_RMEM (4*1024*1024)
 
+/* Number of messages to send before rescheduling */
+#define MAX_SEND_MSG_COUNT 25
+
 struct cbuf {
 	unsigned int base;
 	unsigned int len;
@@ -1281,6 +1284,7 @@ static void send_to_sock(struct connection *con)
 	const int msg_flags = MSG_DONTWAIT | MSG_NOSIGNAL;
 	struct writequeue_entry *e;
 	int len, offset;
+	int count = 0;
 
 	mutex_lock(&con->sock_mutex);
 	if (con->sock == NULL)
@@ -1312,8 +1316,12 @@ static void send_to_sock(struct connection *con)
 			if (ret <= 0)
 				goto send_error;
 		}
+
 		/* Don't starve people filling buffers */
-		cond_resched();
+		if (++count >= MAX_SEND_MSG_COUNT) {
+			cond_resched();
+			count = 0;
+		}
 
 		spin_lock(&con->writequeue_lock);
 		e->offset += ret;