kernel-2.6.18-238.el5.src.rpm

From: Oleg Nesterov <oleg@redhat.com>
Date: Sun, 30 May 2010 16:21:28 -0400
Subject: [misc] workqueue: implement try_to_grab_pending
Message-id: <20100530162128.GG9577@redhat.com>
Patchwork-id: 25909
O-Subject: [RHEL5 PATCH 6/7] bz#596626:  workqueues: implement
	try_to_grab_pending()
Bugzilla: 596626
RH-Acked-by: Stanislaw Gruszka <sgruszka@redhat.com>
RH-Acked-by: Prarit Bhargava <prarit@redhat.com>

Implement try_to_grab_pending().

It is very close to the upstream version, but if we remove the work
from the list we must also decrement cwq->insert_sequence to keep the
"insert_sequence - remove_sequence == nr_of_works_in_flight"
invariant, and wake up the threads sleeping on work_done.
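
For reference, the invariant matters because flush_cpu_workqueue() sleeps
on work_done until remove_sequence catches up with insert_sequence.
Roughly (a paraphrase of the 2.6.18-era wait loop, not the exact RHEL5
source):

	DEFINE_WAIT(wait);
	long sequence_needed;

	spin_lock_irq(&cwq->lock);
	sequence_needed = cwq->insert_sequence;
	/*
	 * A stolen work never reaches the worker's remove_sequence++, so
	 * try_to_grab_pending() must subtract it from insert_sequence and
	 * wake this sleeper, or the loop below could wait forever.
	 */
	while (sequence_needed - cwq->remove_sequence > 0) {
		prepare_to_wait(&cwq->work_done, &wait, TASK_UNINTERRUPTIBLE);
		spin_unlock_irq(&cwq->lock);
		schedule();
		spin_lock_irq(&cwq->lock);
	}
	finish_wait(&cwq->work_done, &wait);
	spin_unlock_irq(&cwq->lock);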

We can't race with take_over_work(): it takes both old_cwq->lock
and new_cwq->lock while moving the pending works. It plays with
work->entry in between, but this can't confuse try_to_grab_pending(),
which checks !list_empty(&work->entry) under cwq->lock.
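
As a reminder of why both locks are held (a paraphrase of the 2.6.18-era
take_over_work(), not the exact RHEL5 code):

	static void take_over_work(struct workqueue_struct *wq, unsigned int cpu)
	{
		struct cpu_workqueue_struct *cwq = per_cpu_ptr(wq->cpu_wq, cpu);
		struct list_head list;
		struct work_struct *work;

		spin_lock_irq(&cwq->lock);
		list_replace_init(&cwq->worklist, &list);

		while (!list_empty(&list)) {
			work = list_entry(list.next, struct work_struct, entry);
			list_del(&work->entry);
			/* takes the new cwq->lock while the old cwq->lock is held */
			__queue_work(per_cpu_ptr(wq->cpu_wq, smp_processor_id()), work);
		}
		spin_unlock_irq(&cwq->lock);
	}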

If it sees work->wq_data == old_cwq, it can't take this lock until
take_over_work() completes; after that it must see that ->wq_data has
changed.

If it reads work->wq_data == new_cwq, then when it takes new_cwq->lock
it must either see that ->wq_data is in fact old_cwq (the very unlikely
case, in which it gives up), or it can correctly remove the work because
take_over_work() has already called __queue_work().
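
For clarity, the smp_rmb() added below pairs with the wmb() that
__queue_work() issues between setting ->wq_data and the list_add.
Roughly (the exact RHEL5 form may differ slightly):

	/* queueing side, abridged: */
	spin_lock_irqsave(&cwq->lock, flags);
	work->wq_data = cwq;
	/*
	 * Make sure anyone who observes the list_add_tail() below also
	 * observes the new ->wq_data; pairs with the smp_rmb() in
	 * try_to_grab_pending().
	 */
	smp_wmb();
	list_add_tail(&work->entry, &cwq->worklist);
	cwq->insert_sequence++;
	wake_up(&cwq->more_work);
	spin_unlock_irqrestore(&cwq->lock, flags);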

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
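
Not part of this patch, but as an illustration of how the return value
is meant to be consumed (a hypothetical caller, loosely modelled on the
later upstream cancel-style users):

	static int example_try_cancel(struct work_struct *work)
	{
		int ret = try_to_grab_pending(work);

		if (ret < 0)
			return 0;	/* pending, but we could not steal it */

		/*
		 * ret == 0: the work was idle and we now own bit 0.
		 * ret == 1: we stole it off ->worklist before it ran.
		 * Either way it can't be re-armed until bit 0 is cleared;
		 * a sync cancel would still have to wait for a callback
		 * that is already running before clearing it.
		 */
		clear_bit(0, &work->pending);
		return 1;
	}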

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index d720d50..6d91710 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -380,6 +380,48 @@ void fastcall flush_workqueue(struct workqueue_struct *wq)
 }
 EXPORT_SYMBOL_GPL(flush_workqueue);
 
+/*
+ * Upon a successful return (>= 0), the caller "owns" bit 0,
+ * so this work can't be re-armed in any way.
+ */
+static int try_to_grab_pending(struct work_struct *work)
+{
+	struct cpu_workqueue_struct *cwq;
+	int ret = -1;
+
+	if (!test_and_set_bit(0, &work->pending))
+		return 0;
+
+	/*
+	 * The queueing is in progress, or it is already queued. Try to
+	 * steal it from ->worklist without clearing the "pending" bit.
+	 */
+
+	cwq = get_wq_data(work);
+	if (!cwq)
+		return ret;
+
+	spin_lock_irq(&cwq->lock);
+	if (!list_empty(&work->entry)) {
+		/*
+		 * This work is queued, but perhaps we locked the wrong cwq.
+		 * In that case we must see the new value after rmb(), see
+		 * __queue_work()->wmb().
+		 */
+		smp_rmb();
+		if (cwq == get_wq_data(work)) {
+			list_del_init(&work->entry);
+			/* for flush_cpu_workqueue() */
+			cwq->insert_sequence--;
+			wake_up(&cwq->work_done);
+			ret = 1;
+		}
+	}
+	spin_unlock_irq(&cwq->lock);
+
+	return ret;
+}
+
 static void wait_on_cpu_work(struct cpu_workqueue_struct *cwq,
 				struct work_struct *work)
 {