
From: Oleg Nesterov <oleg@redhat.com>
Date: Sun, 30 May 2010 16:21:08 -0400
Subject: [misc] workqueue: prep flush_cpu_workqueue for additions
Message-id: <20100530162108.GF9577@redhat.com>
Patchwork-id: 25908
O-Subject: [RHEL5 PATCH 5/7] bz#596626:  workqueues: prepare
	flush_cpu_workqueue() for try_to_grab_pending()
Bugzilla: 596626
RH-Acked-by: Stanislaw Gruszka <sgruszka@redhat.com>
RH-Acked-by: Prarit Bhargava <prarit@redhat.com>

(changes the existing code)

We are going to implement try_to_grab_pending(), which can remove a work
from cwq->worklist and must therefore decrement cwq->insert_sequence.

This breaks flush_cpu_workqueue(), which assumes that cwq->remove_sequence
eventually catches up with sequence_needed; this is no longer true if
cancel_() is called after flush_cpu_workqueue() has already cached
sequence_needed.
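
To make the race concrete, here is a minimal sketch of the two counters
(simplified stand-in types and names, not the RHEL5 code):

	/* Simplified model of the counter race; illustrative only. */
	struct cwq_model {
		int insert_sequence;	/* bumped when a work is queued */
		int remove_sequence;	/* bumped when a work completes */
	};

	static void hang_example(struct cwq_model *cwq)
	{
		/* One work queued, not yet run. */
		cwq->insert_sequence = 5;
		cwq->remove_sequence = 4;

		/* flush_cpu_workqueue() caches its target. */
		int sequence_needed = cwq->insert_sequence;	/* 5 */

		/* try_to_grab_pending() steals the work from the list
		 * and undoes its insertion. */
		cwq->insert_sequence--;				/* 4 */

		/* With only the old check, remove_sequence (4) can
		 * never reach sequence_needed (5): nothing is left to
		 * run, so the flusher would sleep forever. */
		while (sequence_needed - cwq->remove_sequence > 0)
			/* hang */;
	}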

Change flush_cpu_workqueue() to check

	(sequence_needed - cwq->remove_sequence > 0)
		&&
	(cwq->insert_sequence != cwq->remove_sequence)

the second, new check is equivalent to

	!list_empty(&cwq->worklist) || (cwq->current_work != NULL)

and it is needed to prevent a hang when cwq->remove_sequence never grows
enough because we raced with cancel.

In other words: if cancel was never called, or it did not remove the work
from the list, the new check has no effect; it is always true as long as
the first one is true.
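
As a sanity check on that equivalence, the invariant can be spelled out
(field names mirror the patch; this helper itself is hypothetical, not
in the tree):

	/*
	 * Every queue bumps insert_sequence, every completion bumps
	 * remove_sequence, and a steal by try_to_grab_pending() undoes
	 * one insertion, so at any instant:
	 *
	 *	insert_sequence - remove_sequence ==
	 *		works on ->worklist + (->current_work != NULL)
	 */
	static int cwq_has_work(struct cpu_workqueue_struct *cwq)
	{
		return cwq->insert_sequence != cwq->remove_sequence;
		/* equivalently:
		 *	!list_empty(&cwq->worklist) ||
		 *		cwq->current_work != NULL
		 */
	}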

Otherwise, flush_cpu_workqueue() waits until cwq->worklist becomes empty
(the second check), or it flushes one more work which might have been
queued after the initial cwq->insert_sequence snapshot.

The latter case is OK: we never guaranteed how many works flush_workqueue()
actually flushes. All we need is that it is never livelocked by new
incoming works.
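
A brief walk-through of the two exit paths under the new condition (the
numbers are illustrative only):

	/*
	 * Case A - cancel stole the only pending work:
	 *	sequence_needed == 5, insert == 4, remove == 4
	 *	The second check fails and the loop exits at once:
	 *	the list is empty and nothing is running.
	 *
	 * Case B - cancel stole it, but a new work was queued:
	 *	sequence_needed == 5, insert == 5, remove == 4
	 *	Both checks hold, so we also flush the new work even
	 *	though it was queued after the snapshot - allowed, and
	 *	bounded by the cached sequence_needed, so no livelock.
	 */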

Signed-off-by: Oleg Nesterov <oleg@redhat.com>

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c267e8e..d720d50 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -322,7 +322,20 @@ static void flush_cpu_workqueue(struct cpu_workqueue_struct *cwq)
 		spin_lock_irq(&cwq->lock);
 		sequence_needed = cwq->insert_sequence;
 
-		while (sequence_needed - cwq->remove_sequence > 0) {
+		/*
+		 * The second check, insert_sequence != remove_sequence,
+		 * is only needed because insert_sequence can go backwards
+		 * if try_to_grab_pending() removes a work from the list.
+		 *
+		 * If new works are queued after that, we may wait a bit
+		 * longer until remove_sequence grows enough (see the
+		 * first check) - this is tolerable.
+		 *
+		 * But otherwise we must never hang once cwq->worklist
+		 * becomes empty; that is why we have the second check.
+		 */
+		while ((sequence_needed - cwq->remove_sequence > 0) &&
+		       (cwq->insert_sequence != cwq->remove_sequence)) {
 			prepare_to_wait(&cwq->work_done, &wait,
 					TASK_UNINTERRUPTIBLE);
 			spin_unlock_irq(&cwq->lock);