kernel-2.6.18-238.el5.src.rpm

From: Mikulas Patocka <mpatocka@redhat.com>
Date: Fri, 15 May 2009 02:04:34 -0400
Subject: [md] dm: I/O failures when running dm-over-md with xen
Message-id: Pine.LNX.4.64.0905150156080.10867@hs20-bc2-1.build.redhat.com
O-Subject: [PATCH RHEL 5.4] bz223947: I/O failures when running dm-over-md with xen
Bugzilla: 223947
RH-Acked-by: Rik van Riel <riel@redhat.com>
RH-Acked-by: Mike Snitzer <snitzer@redhat.com>

Hi

This is the patch for bug 223947. When we run dm-over-md and use it as
storage for xen virtual machines, there are I/O errors.
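
For context: the block layer lets a device register a merge_bvec_fn on its
queue; bio_add_page() consults the callback before growing a bio and backs
out if it accepts less than the full bio_vec. md personalities such as
raid0 use it to keep bios from crossing chunk boundaries, but dm cannot
forward the callback of the device stacked underneath it, so bios that md
would have refused get built anyway. A rough sketch of the contract as I
read the 2.6.18 code (the helper name is hypothetical and the generic
limit checks are omitted):

static int would_accept_bvec(request_queue_t *q, struct bio *bio,
			     struct bio_vec *bvec)
{
	/* No callback registered: only the generic queue limits apply. */
	if (!q->merge_bvec_fn)
		return 1;

	/*
	 * The callback returns how many bytes of *bvec the device can
	 * take; bio_add_page() refuses to grow the bio unless the whole
	 * bio_vec fits. An empty bio must always be able to accept one
	 * page, so a bio made of a single bio_vec is safe everywhere.
	 */
	return q->merge_bvec_fn(q, bio, bvec) >= bvec->bv_len;
}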

Testing: I reproduced the setup described in the bugzilla and got I/O
errors similar to the ones posted there. Then I applied this patch,
successfully installed the guest system in a xen virtual machine and got
no I/O errors at all.

Upstream status: the upstream code is different and the bug happens there
under slightly different conditions: you must use an lvm volume type other
than linear (for example stripe, raid1, snapshot...) to trigger the bug.
A different patch that does the same thing was submitted upstream.

Mikulas

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index afac24b..4f1aad9 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -880,6 +880,25 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
 	return &t->targets[(KEYS_PER_NODE * n) + k];
 }
 
+/*
+ * Some of our underlying devices provide a merge_bvec_fn.
+ *
+ * We can't call the device's merge_bvec_fn, so we must be conservative
+ * and not allow creating a bio larger than one page.
+ */
+static int dm_max_one_bvec_entry(request_queue_t *q, struct bio *bio, struct bio_vec *biovec)
+{
+	/* If there is nothing in the bio yet, allow a full page */
+	if (!bio->bi_vcnt)
+		return biovec->bv_len;
+
+	/* If there is just one page and we are appending to it, allow it */
+	if (bio->bi_vcnt == 1 && biovec == &bio->bi_io_vec[0])
+		return biovec->bv_len;
+
+	return 0;
+}
+
 void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q)
 {
 	/*
@@ -887,6 +906,8 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q)
 	 * restrictions.
 	 */
 	blk_queue_max_sectors(q, t->limits.max_sectors);
+	if (t->limits.max_sectors <= PAGE_SIZE >> 9)
+		blk_queue_merge_bvec(q, dm_max_one_bvec_entry);
 	q->max_phys_segments = t->limits.max_phys_segments;
 	q->max_hw_segments = t->limits.max_hw_segments;
 	q->hardsect_size = t->limits.hardsect_size;
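
A note on the two hunks above, with a worked example. PAGE_SIZE >> 9 is
the page size in 512-byte sectors (8 with 4 KiB pages), and >> binds
tighter than <=, so the condition reads max_sectors <= (PAGE_SIZE >> 9).
As far as I can tell, dm_set_device_limits() already caps max_sectors at
one page whenever a device underneath has a merge_bvec_fn, so the test
fires exactly for the tables this patch cares about;
dm_max_one_bvec_entry then makes sure that one page of data also sits in
a single bio_vec. Below is a compilable userspace mock of the decision
logic, with the kernel types stubbed out; it is only an illustration,
not part of the patch:

#include <stdio.h>

/* Minimal stand-ins for the kernel structures the callback looks at. */
struct bio_vec { unsigned int bv_len; };
struct bio { unsigned short bi_vcnt; struct bio_vec bi_io_vec[2]; };

/* Same logic as dm_max_one_bvec_entry, with the queue argument dropped. */
static int max_one_bvec(struct bio *bio, struct bio_vec *biovec)
{
	/* Empty bio: accept the full first bio_vec. */
	if (!bio->bi_vcnt)
		return biovec->bv_len;

	/* One entry, and we are regrowing that same entry: accept it. */
	if (bio->bi_vcnt == 1 && biovec == &bio->bi_io_vec[0])
		return biovec->bv_len;

	/* Anything that would create a second bio_vec: refuse it. */
	return 0;
}

int main(void)
{
	struct bio bio = { .bi_vcnt = 0, .bi_io_vec = { { .bv_len = 4096 } } };

	printf("empty bio:   %d\n", max_one_bvec(&bio, &bio.bi_io_vec[0]));
	bio.bi_vcnt = 1;
	printf("regrow:      %d\n", max_one_bvec(&bio, &bio.bi_io_vec[0]));
	printf("second bvec: %d\n", max_one_bvec(&bio, &bio.bi_io_vec[1]));
	return 0;
}

This prints 4096, 4096 and 0: the first page always fits, the existing
entry may keep growing (max_sectors still caps the total size), and a
second bio_vec entry is never allowed in.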