kernel-2.6.18-194.11.1.el5.src.rpm

From: Hideo AOKI <haoki@redhat.com>
Date: Thu, 28 Aug 2008 17:13:30 -0400
Subject: [net] udp: possible recursive locking
Message-id: 48B714FA.4000506@redhat.com
O-Subject: [RHEL 5.3 PATCH] bz458909: net: possible recursive locking in UDP
Bugzilla: 458909
RH-Acked-by: David Miller <davem@redhat.com>
RH-Acked-by: Neil Horman <nhorman@redhat.com>

Hello,

* BZ#
https://bugzilla.redhat.com/show_bug.cgi?id=458909

* Upstream Status
This patch is a backport of the following upstream commit:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=d97106ea52aa57e63ff40d04479016836bbb5a4e

* Detailed description from upstream commit
The socket lock is there to protect the normal UDP receive path.
Encapsulation UDP sockets don't need that protection.  In fact
the locking is deadly for them as they may contain another UDP
packet within, possibly with the same addresses.

Also the nested bit was copied from TCP.  TCP needs it because
of accept(2) spawning sockets.  This simply doesn't apply to UDP
so I've removed it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
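
The "nested" annotation mentioned above only affects lockdep, not the
lock itself.  In this tree the helpers are defined roughly as follows in
include/net/sock.h:

        #define bh_lock_sock(__sk)      spin_lock(&((__sk)->sk_lock.slock))
        #define bh_lock_sock_nested(__sk) \
                spin_lock_nested(&((__sk)->sk_lock.slock), SINGLE_DEPTH_NESTING)
        #define bh_unlock_sock(__sk)    spin_unlock(&((__sk)->sk_lock.slock))

SINGLE_DEPTH_NESTING only tells lockdep that a second acquisition of a
lock in this class is expected; the spinlock itself remains
non-recursive, so the annotation silences the report without making a
genuine recursive acquisition of the same socket lock safe.  That is
why the fix both removes the annotation and stops holding the lock
across the encapsulation calls.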

* Note of backporting
Since the implementation of udp_queue_rcv_skb() in RHEL 5 differs from
upstream, I drop and re-acquire the socket lock around both
encapsulation receive calls in that function, udp_encap_rcv() and
xfrm4_rcv_encap(), to close the gap.
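
To make the hazard concrete: udp_encap_rcv() and xfrm4_rcv_encap() may
find a complete UDP packet inside the payload and hand it back to the
receive path while the outer packet still holds the socket lock.  The
sketch below is a user-space model of that situation, not kernel code:
an error-checking pthread mutex stands in for the non-recursive bh
socket lock so that the inner acquisition reports a deadlock instead of
hanging, and DROP_LOCK_AROUND_ENCAP is a made-up switch between the old
behaviour (0) and the patched behaviour of dropping the lock around the
encapsulation call (1).

        /*
         * Illustrative user-space model only -- not kernel code and not
         * part of the patch.  Build: gcc demo.c -o demo -lpthread
         */
        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>

        static pthread_mutex_t sock_lock;

        /* 0: lock held across the encap handler (old behaviour).
         * 1: lock dropped around the encap handler (patched behaviour). */
        #define DROP_LOCK_AROUND_ENCAP 0

        static void udp_receive(int depth);

        /* Stand-in for the encapsulation handlers: the decapsulated payload
         * may itself be a UDP packet that re-enters the receive path. */
        static void encap_handler(int depth)
        {
                if (depth == 0)
                        udp_receive(depth + 1);
        }

        static void udp_receive(int depth)
        {
                int err = pthread_mutex_lock(&sock_lock);

                if (err) {
                        /* The inner packet tried to take the lock that its
                         * outer invocation already holds. */
                        printf("depth %d: lock failed: %s\n",
                               depth, strerror(err));
                        return;
                }
                printf("depth %d: socket lock taken\n", depth);

                if (DROP_LOCK_AROUND_ENCAP) {
                        pthread_mutex_unlock(&sock_lock);
                        encap_handler(depth);
                        pthread_mutex_lock(&sock_lock);
                } else {
                        encap_handler(depth);   /* lock still held */
                }

                pthread_mutex_unlock(&sock_lock);
        }

        int main(void)
        {
                pthread_mutexattr_t attr;

                pthread_mutexattr_init(&attr);
                pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
                pthread_mutex_init(&sock_lock, &attr);

                udp_receive(0);
                return 0;
        }

With DROP_LOCK_AROUND_ENCAP set to 0 the inner call fails with
"Resource deadlock avoided"; set to 1, both levels complete, which
mirrors what the hunks below do for the kernel path.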

* kABI Status
There is no kABI issue.

* Brew
Built in brew.
https://brewweb.devel.redhat.com/brew/taskinfo?taskID=1443651

* Test Status
I tested the patch on x86_64.

Please review and ACK.

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 7abe930..953cf99 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1034,7 +1034,9 @@ static int udp_queue_rcv_skb(struct sock * sk, struct sk_buff *skb)
 		 */
 		int ret;
 
+		bh_unlock_sock(sk);
 		ret = udp_encap_rcv(sk, skb);
+		bh_lock_sock(sk);
 		if (ret == 0) {
 			/* Eat the packet .. */
 			kfree_skb(skb);
@@ -1042,7 +1044,9 @@ static int udp_queue_rcv_skb(struct sock * sk, struct sk_buff *skb)
 		}
 		if (ret < 0) {
 			/* process the ESP packet */
+			bh_unlock_sock(sk);
 			ret = xfrm4_rcv_encap(skb, up->encap_type);
+			bh_lock_sock(sk);
 			return -ret;
 		}
 		/* FALLTHROUGH -- it's a UDP Packet */
@@ -1095,7 +1099,7 @@ static int udp_v4_mcast_deliver(struct sk_buff *skb, struct udphdr *uh,
 			if(skb1) {
 				int ret = 0;
 
-				bh_lock_sock_nested(sk);
+				bh_lock_sock(sk);
 				if (!sock_owned_by_user(sk))
 					ret = udp_queue_rcv_skb(sk, skb1);
 				else
@@ -1175,7 +1179,7 @@ int udp_rcv(struct sk_buff *skb)
 
 	if (sk != NULL) {
 		int ret = 0;
-		bh_lock_sock_nested(sk);
+		bh_lock_sock(sk);
 		if (!sock_owned_by_user(sk))
 			ret = udp_queue_rcv_skb(sk, skb);
 		else
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index ac58df5..bdbf323 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -436,7 +436,7 @@ static void udpv6_mcast_deliver(struct udphdr *uh,
 					uh->source, saddr, dif))) {
 		struct sk_buff *buff = skb_clone(skb, GFP_ATOMIC);
 		if (buff) {
-			bh_lock_sock_nested(sk2);
+			bh_lock_sock(sk2);
 			if (!sock_owned_by_user(sk2))
 				udpv6_queue_rcv_skb(sk2, buff);
 			else
@@ -444,7 +444,7 @@ static void udpv6_mcast_deliver(struct udphdr *uh,
 			bh_unlock_sock(sk2);
 		}
 	}
-	bh_lock_sock_nested(sk);
+	bh_lock_sock(sk);
 	if (!sock_owned_by_user(sk))
 		udpv6_queue_rcv_skb(sk, skb);
 	else
@@ -534,7 +534,7 @@ static int udpv6_rcv(struct sk_buff **pskb)
 	
 	/* deliver */
 	
-	bh_lock_sock_nested(sk);
+	bh_lock_sock(sk);
 	if (!sock_owned_by_user(sk))
 		udpv6_queue_rcv_skb(sk, skb);
 	else