kernel-2.6.18-128.1.10.el5.src.rpm

From: Brad Peters <bpeters@redhat.com>
Date: Mon, 21 Jul 2008 17:07:03 -0400
Subject: [scsi] ibmvscsi: latest 5.3 fixes and enhancements
Message-id: 4884FA77.4050701@redhat.com
O-Subject: Re: [PATCH RHEL5.3] Update ibmvscsi driver with upstream fixes and enhancements (Corrected - 439487)
Bugzilla: 439487
RH-Acked-by: David Howells <dhowells@redhat.com>

RHBZ#:
======
https://bugzilla.redhat.com/show_bug.cgi?id=439487

Description:
============
Updates the IBM VSCSI driver to the latest upstream version.  Almost a dozen patches are bundled here, all of them upstream.  The fixes included in this patch are listed below.

[POWERPC] atomic_dec_if_positive sign extension fix
Robert Jennings [Wed, 17 Jan 2007 16:50:20]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=434f98c48fc1d2a1f562a28a1562a7b53e940957
This is a prerequisite for the dynamic adjustment of the server request_limit (903f9d8).
On 64-bit machines, if an atomic counter is explicitly set to a negative value,
the atomic_dec_if_positive function will decrement and store the next smallest
value in the atomic counter, contrary to its intended operation. Thus if a
counter is set to -1 it will become -2 after atomic_dec_if_positive rather than
remaining -1.
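
As an illustration, here is a plain, non-atomic C sketch of the intended semantics (not the actual PowerPC assembly, which is in the include/asm-powerpc/atomic.h hunk below):

static int dec_if_positive_sketch(int *v)
{
	int old = *v;

	/* Only decrement when the old value is positive... */
	if (old > 0)
		*v = old - 1;

	/* ...but always return the old value minus 1, so a counter at -1
	 * reports -2 while the stored value stays -1. */
	return old - 1;
}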

[SCSI] ibmvscsi: allow for dynamic adjustment of server request_limit
Robert Jennings [Wed, 28 Mar 2007 17:45:46]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=a897ff2a6386ac4368ba41db18b626afd903f9d8
This addresses problems encountered with setting can_queue from the
request_limit provided by the server. The request limit calculations used
previously on the client failed to mirror the state of the server. Additionally,
when a value < 3 was provided there could be problems setting can_queue and
handling abort and reset commands.  With this patch the client view of request_limit
mirrors that of the server, and can_queue is set independently.  When available
requests are very low, abort and reset requests can still be serviced.
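
In outline (a simplified restatement of the ibmvscsi_send_srp_event hunk in the proposed patch below, not additional code), the last two request slots are reserved for abort/reset:

/* Anything other than a task-management (abort/reset) request must
 * leave the last two server request slots free. */
if (request_status < 2 &&
    evt_struct->iu.srp.cmd.opcode != SRP_TSK_MGMT) {
	int server_limit = request_status;
	struct srp_event_struct *tmp_evt;

	/* The server's view is the remaining limit plus in-flight requests. */
	list_for_each_entry(tmp_evt, &hostdata->sent, list)
		server_limit++;

	if (server_limit > 2)
		goto send_busy;	/* returns SCSI_MLQUEUE_HOST_BUSY */
}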

[SCSI] ibmvscsi: Changeable queue depth
Brian King [Tue, 29 May 2007 20:46:14]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=742d25b819f11dce91b89e6c9ac17402a119f20a
Adds support for a changeable queue depth to ibmvscsi.

[SCSI] ibmvscsi: Remove unnecessary map_sg check
Brian King [Wed, 13 Jun 2007 22:12:11]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2a7309372fe56ae46c499b772d811ad31c501dd9
Since sg_tablesize is set appropriately in the scsi host template, remove the
unnecessary check to make sure it is not exceeded following the dma_map_sg call.

[SCSI] ibmvscsi: Enhanced error logging
Brian King [Wed, 13 Jun 2007 22:12:19]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=6c0a60ec52042ece8bf4904c91ac497188e8d70b
Converts ibmvscsi to use dev_printk and friends to simplify debugging. ibmvscsi
adapter initialization now looks like this:

ibmvscsi 30000005: SRP_VERSION: 16.a
ibmvscsi 30000005: partner initialization complete
ibmvscsi 30000005: sent SRP login
ibmvscsi 30000005: SRP_LOGIN succeeded

Additionally, this patch adds return codes to a couple of the log messages.

[SCSI] ibmvscsi: Add eh_host_reset_handler
Brian King [Wed, 13 Jun 2007 22:12:26]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=3d0e91f7ace12499c4b00088e9a6b1361e1bb0ca
Adds an eh_host_reset_handler to ibmvscsi which resets the connection to the
vscsi server. This patch also adds a timer to internally issued commands to
prevent client hangs in the case of a misbehaving server. Tested by modifying
the VIOS such that it would occasionally drop one or more requests in sequence.

[SCSI] ibmvscsi: Misc. locking fixes
Brian King [Wed, 13 Jun 2007 22:12:33]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=06f923cbf080e22d1ffccbf3fd2cbab0176f6025
Fix a couple of locking bugs discovered during code inspection.
ibmvscsi_send_srp_event needs to be called with the host lock held; this patch
fixes a couple of paths in the code where this wasn't true.

[SCSI] ibmvscsi: Abort path fix
Brian King [Wed, 13 Jun 2007 22:12:40]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=35f51eee99efe88866476300ccb7f206e88f3394
Since it is completely possible for the SCSI core to call an LLDD's eh_abort function
after the command has completed, fix ibmvscsi to return SUCCESS if this is the case.

[SCSI] ibmvscsi: fix timeout bugs
FUJITA Tomonori [Tue, 22 May 2007 04:50:51]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=33874a002d4fdd34e35e8265f9b2e0582385f744
The viosrp_crq timeout field is in seconds, so internal commands now pass
init_timeout (seconds) to init_event_struct instead of init_timeout * HZ (jiffies).

[SCSI] ibmvscsi: Prevent IO during partner login
Robert Jennings [Tue, 30 Oct 2007 16:37:07]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=3c887e8a1a4553ae6263fc9490e33de213e3746f
By setting the request_limit in send_srp_login to 1 we allowed login requests to
be sent to the server adapter.  If this was not an initial login, but a login
after a disconnect from the server, other I/O requests could attempt to be
processed before the login occurred.  These I/O requests would fail, sometimes
resulting in filesystems getting marked read-only.

To address this we can set the request_limit to 0 while doing the login and add
an exception where login requests, along with task management events, are always
passed to the server.

There was also a case where, if the request_limit had already reached 0, all
events would be sent rather than returning SCSI_MLQUEUE_HOST_BUSY; this has
been fixed by this patch as well.

[SCSI] ibmvscsi: requeue while CRQ closed
Robert Jennings [Mon, 12 Nov 2007 15:00:23]
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=860784c8a2b077157b6a51fb8749524d0363cc49

CRQ send errors that return with H_CLOSED should return with
SCSI_MLQUEUE_HOST_BUSY until firmware alerts the client of a CRQ transport
event.  The transport event will either reinitialize and requeue the requests or
fail and return IO with DID_ERROR.

To avoid failing the eh_* functions while re-attaching to the server adapter
this will retry for a period of time while ibmvscsi_send_srp_event returns
SCSI_MLQUEUE_HOST_BUSY.

In ibmvscsi_eh_abort_handler() the loop includes the search of the event list.
The lock on the hostdata is dropped while waiting to try again after failing
ibmvscsi_send_srp_event.  The event could have been purged if a login was in
progress when the function was called.

In ibmvscsi_eh_device_reset_handler() the loop includes the call to
get_event_struct() because a failing call to ibmvscsi_send_srp_event() will have
freed the event struct.
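
As a condensed sketch of the retry pattern used by the eh_* handlers (names follow the driver; the actual hunks appear in the proposed patch below):

wait_switch = jiffies + (init_timeout * HZ);
do {
	/* ... look up / allocate the event under host_lock ... */
	rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
	if (rsp_rc != SCSI_MLQUEUE_HOST_BUSY)
		break;

	/* Drop the lock and wait before retrying while the CRQ is closed. */
	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
	msleep(10);
	spin_lock_irqsave(hostdata->host->host_lock, flags);
} while (time_before(jiffies, wait_switch));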

RHEL Version Found:
===================
RHEL 5.2

kABI Status:
============
No symbols were harmed.

Brew:
=====
Built on all platforms.
http://brewweb.devel.redhat.com/brew/taskinfo?taskID=1370287

Kernel binary rpm available at:
===============================
http://people.redhat.com/bpeters/kernels/kernel-2.6.18-94.el5.94.el5.439487.ppc64.rpm

Test Status:
============
Tested using the 'stress' tool to induce FS stress.  Forked off 10 threads, each writing/reading 1GB chunks of random data.  No corruption or errors were discovered.

7/21/08 <bpeters@redhat.com>

===============================================================

Brad Peters 1-978-392-1000 x 23183
IBM on-site partner.

Proposed Patch:
===============
Patch based on 2.6.18-94.el5

diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
index ac118a7..d554719 100644
--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
+++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
@@ -85,9 +85,7 @@
 static int max_id = 64;
 static int max_channel = 3;
 static int init_timeout = 5;
-static int max_requests = 50;
-/* host data buffer size */
-#define buff_size	4096
+static int max_requests = IBMVSCSI_MAX_REQUESTS_DEFAULT;
 
 #define IBMVSCSI_VERSION "1.5.9"
 
@@ -175,9 +173,8 @@ static void release_event_pool(struct event_pool *pool,
 		}
 	}
 	if (in_use)
-		printk(KERN_WARNING
-		       "ibmvscsi: releasing event pool with %d "
-		       "events still in use?\n", in_use);
+		dev_warn(hostdata->dev, "releasing event pool with %d "
+			 "events still in use?\n", in_use);
 	kfree(pool->events);
 	dma_free_coherent(hostdata->dev,
 			  pool->size * sizeof(*pool->iu_storage),
@@ -212,15 +209,13 @@ static void free_event_struct(struct event_pool *pool,
 				       struct srp_event_struct *evt)
 {
 	if (!valid_event_struct(pool, evt)) {
-		printk(KERN_ERR
-		       "ibmvscsi: Freeing invalid event_struct %p "
-		       "(not in pool %p)\n", evt, pool->events);
+		dev_err(evt->hostdata->dev, "Freeing invalid event_struct %p "
+			"(not in pool %p)\n", evt, pool->events);
 		return;
 	}
 	if (atomic_inc_return(&evt->free) != 1) {
-		printk(KERN_ERR
-		       "ibmvscsi: Freeing event_struct %p "
-		       "which is not in use!\n", evt);
+		dev_err(evt->hostdata->dev, "Freeing event_struct %p "
+			"which is not in use!\n", evt);
 		return;
 	}
 }
@@ -410,13 +405,6 @@ static int map_sg_data(struct scsi_cmnd *cmd,
 		return 1;
 	}
 
-	if (sg_mapped > SG_ALL) {
-		printk(KERN_ERR
-		       "ibmvscsi: More than %d mapped sg entries, got %d\n",
-		       SG_ALL, sg_mapped);
-		return 0;
-	}
-
 	indirect->table_desc.va = 0;
 	indirect->table_desc.len = sg_mapped * sizeof(struct srp_direct_buf);
 	indirect->table_desc.key = 0;
@@ -435,10 +423,9 @@ static int map_sg_data(struct scsi_cmnd *cmd,
 					   SG_ALL * sizeof(struct srp_direct_buf),
 					   &evt_struct->ext_list_token, 0);
 		if (!evt_struct->ext_list) {
-			printk(KERN_ERR
-			       "ibmvscsi: Can't allocate memory for indirect table\n");
+			sdev_printk(KERN_ERR, cmd->device,
+				    "Can't allocate memory for indirect table\n");
 			return 0;
-			
 		}
 	}
 
@@ -473,8 +460,8 @@ static int map_single_data(struct scsi_cmnd *cmd,
 			       cmd->request_bufflen,
 			       DMA_BIDIRECTIONAL);
 	if (dma_mapping_error(data->va)) {
-		printk(KERN_ERR
-		       "ibmvscsi: Unable to map request_buffer for command!\n");
+		sdev_printk(KERN_ERR, cmd->device,
+			    "Unable to map request_buffer for command!\n");
 		return 0;
 	}
 	data->len = cmd->request_bufflen;
@@ -505,13 +492,13 @@ static int map_data_for_srp_cmd(struct scsi_cmnd *cmd,
 	case DMA_NONE:
 		return 1;
 	case DMA_BIDIRECTIONAL:
-		printk(KERN_ERR
-		       "ibmvscsi: Can't map DMA_BIDIRECTIONAL to read/write\n");
+		sdev_printk(KERN_ERR, cmd->device,
+			    "Can't map DMA_BIDIRECTIONAL to read/write\n");
 		return 0;
 	default:
-		printk(KERN_ERR
-		       "ibmvscsi: Unknown data direction 0x%02x; can't map!\n",
-		       cmd->sc_data_direction);
+		sdev_printk(KERN_ERR, cmd->device,
+			    "Unknown data direction 0x%02x; can't map!\n",
+			    cmd->sc_data_direction);
 		return 0;
 	}
 
@@ -522,6 +509,70 @@ static int map_data_for_srp_cmd(struct scsi_cmnd *cmd,
 	return map_single_data(cmd, srp_cmd, dev);
 }
 
+/**
+ * purge_requests: Our virtual adapter just shut down.  purge any sent requests
+ * @hostdata:    the adapter
+ */
+static void purge_requests(struct ibmvscsi_host_data *hostdata, int error_code)
+{
+	struct srp_event_struct *tmp_evt, *pos;
+	unsigned long flags;
+
+	spin_lock_irqsave(hostdata->host->host_lock, flags);
+	list_for_each_entry_safe(tmp_evt, pos, &hostdata->sent, list) {
+		list_del(&tmp_evt->list);
+		del_timer(&tmp_evt->timer);
+		if (tmp_evt->cmnd) {
+			tmp_evt->cmnd->result = (error_code << 16);
+			unmap_cmd_data(&tmp_evt->iu.srp.cmd,
+				       tmp_evt,
+				       tmp_evt->hostdata->dev);
+			if (tmp_evt->cmnd_done)
+				tmp_evt->cmnd_done(tmp_evt->cmnd);
+		} else if (tmp_evt->done)
+			tmp_evt->done(tmp_evt);
+		free_event_struct(&tmp_evt->hostdata->pool, tmp_evt);
+	}
+	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
+}
+
+/**
+ * ibmvscsi_reset_host - Reset the connection to the server
+ * @hostdata:	struct ibmvscsi_host_data to reset
+*/
+static void ibmvscsi_reset_host(struct ibmvscsi_host_data *hostdata)
+{
+	scsi_block_requests(hostdata->host);
+	atomic_set(&hostdata->request_limit, 0);
+
+	purge_requests(hostdata, DID_ERROR);
+	if ((ibmvscsi_reset_crq_queue(&hostdata->queue, hostdata)) ||
+	    (ibmvscsi_send_crq(hostdata, 0xC001000000000000LL, 0)) ||
+	    (vio_enable_interrupts(to_vio_dev(hostdata->dev)))) {
+		atomic_set(&hostdata->request_limit, -1);
+		dev_err(hostdata->dev, "error after reset\n");
+	}
+
+	scsi_unblock_requests(hostdata->host);
+}
+
+/**
+ * ibmvscsi_timeout - Internal command timeout handler
+ * @evt_struct:	struct srp_event_struct that timed out
+ *
+ * Called when an internally generated command times out
+*/
+static void ibmvscsi_timeout(struct srp_event_struct *evt_struct)
+{
+	struct ibmvscsi_host_data *hostdata = evt_struct->hostdata;
+
+	dev_err(hostdata->dev, "Command timed out (%x). Resetting connection\n",
+		evt_struct->iu.srp.cmd.opcode);
+
+	ibmvscsi_reset_host(hostdata);
+}
+
+
 /* ------------------------------------------------------------
  * Routines for sending and receiving SRPs
  */
@@ -529,18 +580,21 @@ static int map_data_for_srp_cmd(struct scsi_cmnd *cmd,
  * ibmvscsi_send_srp_event: - Transforms event to u64 array and calls send_crq()
  * @evt_struct:	evt_struct to be sent
  * @hostdata:	ibmvscsi_host_data of host
+ * @timeout:	timeout in seconds - 0 means do not time command
  *
  * Returns the value returned from ibmvscsi_send_crq(). (Zero for success)
  * Note that this routine assumes that host_lock is held for synchronization
 */
 static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
-				   struct ibmvscsi_host_data *hostdata)
+				   struct ibmvscsi_host_data *hostdata,
+				   unsigned long timeout)
 {
 	u64 *crq_as_u64 = (u64 *) &evt_struct->crq;
-	int request_status;
+	int request_status = 0;
 	int rc;
 
-	/* If we have exhausted our request limit, just fail this request.
+	/* If we have exhausted our request limit, just fail this request,
+	 * unless it is for a reset or abort.
 	 * Note that there are rare cases involving driver generated requests 
 	 * (such as task management requests) that the mid layer may think we
 	 * can handle more requests (can_queue) when we actually can't
@@ -553,11 +607,37 @@ static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
 		 */
 		if (request_status < -1)
 			goto send_error;
-		/* Otherwise, if we have run out of requests return host_busy */
-		/* Unless we have a login request, allow that to be sent. */
-		else if (request_status < 0 &&
+		/* Otherwise, we may have run out of requests. */
+		/* If request limit was 0 when we started the adapter is in the
+		 * process of performing a login with the server adapter, or
+		 * we may have run out of requests.
+		 */
+		else if (request_status == -1 &&
 		         evt_struct->iu.srp.login_req.opcode != SRP_LOGIN_REQ)
 			goto send_busy;
+		/* Abort and reset calls should make it through.
+		 * Nothing except abort and reset should use the last two
+		 * slots unless we had two or less to begin with.
+		 */
+		else if (request_status < 2 &&
+		         evt_struct->iu.srp.cmd.opcode != SRP_TSK_MGMT) {
+			/* In the case that we have less than two requests
+			 * available, check the server limit as a combination
+			 * of the request limit and the number of requests
+			 * in-flight (the size of the send list).  If the
+			 * server limit is greater than 2, return busy so
+			 * that the last two are reserved for reset and abort.
+			 */
+			int server_limit = request_status;
+			struct srp_event_struct *tmp_evt;
+
+			list_for_each_entry(tmp_evt, &hostdata->sent, list) {
+				server_limit++;
+			}
+
+			if (server_limit > 2)
+				goto send_busy;
+		}
 	}
 
 	/* Copy the IU into the transfer area */
@@ -570,9 +650,18 @@ static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
 	 */
 	list_add_tail(&evt_struct->list, &hostdata->sent);
 
+	init_timer(&evt_struct->timer);
+	if (timeout) {
+		evt_struct->timer.data = (unsigned long) evt_struct;
+		evt_struct->timer.expires = jiffies + (timeout * HZ);
+		evt_struct->timer.function = (void (*)(unsigned long))ibmvscsi_timeout;
+		add_timer(&evt_struct->timer);
+	}
+
 	if ((rc =
 	     ibmvscsi_send_crq(hostdata, crq_as_u64[0], crq_as_u64[1])) != 0) {
 		list_del(&evt_struct->list);
+		del_timer(&evt_struct->timer);
 
 		/* If send_crq returns H_CLOSED, return SCSI_MLQUEUE_HOST_BUSY.
 		 * Firmware will send a CRQ with a transport event (0xFF) to
@@ -580,13 +669,12 @@ static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
 		 * will be handled in ibmvscsi_handle_crq()
 		 */
 		if (rc == H_CLOSED) {
-			printk(KERN_WARNING "ibmvscsi: send warning. "
-			       "Receive queue closed, will retry.\n");
+			dev_warn(hostdata->dev, "send warning. "
+			         "Receive queue closed, will retry.\n");
 			goto send_busy;
 		}
-
-		printk(KERN_ERR "ibmvscsi: send error %d\n",
-		       rc);
+		dev_err(hostdata->dev, "send error %d\n", rc);
+		atomic_inc(&hostdata->request_limit);
 		goto send_error;
 	}
 
@@ -596,7 +684,9 @@ static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
 	unmap_cmd_data(&evt_struct->iu.srp.cmd, evt_struct, hostdata->dev);
 
 	free_event_struct(&hostdata->pool, evt_struct);
- 	return SCSI_MLQUEUE_HOST_BUSY;
+	if (request_status != -1)
+		atomic_inc(&hostdata->request_limit);
+	return SCSI_MLQUEUE_HOST_BUSY;
 
  send_error:
 	unmap_cmd_data(&evt_struct->iu.srp.cmd, evt_struct, hostdata->dev);
@@ -625,9 +715,8 @@ static void handle_cmd_rsp(struct srp_event_struct *evt_struct)
 
 	if (unlikely(rsp->opcode != SRP_RSP)) {
 		if (printk_ratelimit())
-			printk(KERN_WARNING 
-			       "ibmvscsi: bad SRP RSP type %d\n",
-			       rsp->opcode);
+			dev_warn(evt_struct->hostdata->dev,
+				 "bad SRP RSP type %d\n", rsp->opcode);
 	}
 	
 	if (cmnd) {
@@ -671,8 +760,7 @@ static int ibmvscsi_queuecommand(struct scsi_cmnd *cmnd,
 	struct srp_cmd *srp_cmd;
 	struct srp_event_struct *evt_struct;
 	struct srp_indirect_buf *indirect;
-	struct ibmvscsi_host_data *hostdata =
-		(struct ibmvscsi_host_data *)&cmnd->device->host->hostdata;
+	struct ibmvscsi_host_data *hostdata = shost_priv(cmnd->device->host);
 	u16 lun = lun_from_dev(cmnd->device);
 	u8 out_fmt, in_fmt;
 
@@ -688,7 +776,7 @@ static int ibmvscsi_queuecommand(struct scsi_cmnd *cmnd,
 	srp_cmd->lun = ((u64) lun) << 48;
 
 	if (!map_data_for_srp_cmd(cmnd, evt_struct, srp_cmd, hostdata->dev)) {
-		printk(KERN_ERR "ibmvscsi: couldn't convert cmd to srp_cmd\n");
+		sdev_printk(KERN_ERR, cmnd->device, "couldn't convert cmd to srp_cmd\n");
 		free_event_struct(&hostdata->pool, evt_struct);
 		return SCSI_MLQUEUE_HOST_BUSY;
 	}
@@ -713,7 +801,7 @@ static int ibmvscsi_queuecommand(struct scsi_cmnd *cmnd,
 			offsetof(struct srp_indirect_buf, desc_list);
 	}
 
-	return ibmvscsi_send_srp_event(evt_struct, hostdata);
+	return ibmvscsi_send_srp_event(evt_struct, hostdata, 0);
 }
 
 /* ------------------------------------------------------------
@@ -735,16 +823,16 @@ static void adapter_info_rsp(struct srp_event_struct *evt_struct)
 			 DMA_BIDIRECTIONAL);
 
 	if (evt_struct->xfer_iu->mad.adapter_info.common.status) {
-		printk("ibmvscsi: error %d getting adapter info\n",
-		       evt_struct->xfer_iu->mad.adapter_info.common.status);
+		dev_err(hostdata->dev, "error %d getting adapter info\n",
+			evt_struct->xfer_iu->mad.adapter_info.common.status);
 	} else {
-		printk("ibmvscsi: host srp version: %s, "
-		       "host partition %s (%d), OS %d, max io %u\n",
-		       hostdata->madapter_info.srp_version,
-		       hostdata->madapter_info.partition_name,
-		       hostdata->madapter_info.partition_number,
-		       hostdata->madapter_info.os_type,
-		       hostdata->madapter_info.port_max_txu[0]);
+		dev_info(hostdata->dev, "host srp version: %s, "
+			 "host partition %s (%d), OS %d, max io %u\n",
+			 hostdata->madapter_info.srp_version,
+			 hostdata->madapter_info.partition_name,
+			 hostdata->madapter_info.partition_number,
+			 hostdata->madapter_info.os_type,
+			 hostdata->madapter_info.port_max_txu[0]);
 		
 		if (hostdata->madapter_info.port_max_txu[0]) 
 			hostdata->host->max_sectors = 
@@ -752,11 +840,10 @@ static void adapter_info_rsp(struct srp_event_struct *evt_struct)
 		
 		if (hostdata->madapter_info.os_type == 3 &&
 		    strcmp(hostdata->madapter_info.srp_version, "1.6a") <= 0) {
-			printk("ibmvscsi: host (Ver. %s) doesn't support large"
-			       "transfers\n",
-			       hostdata->madapter_info.srp_version);
-			printk("ibmvscsi: limiting scatterlists to %d\n",
-			       MAX_INDIRECT_BUFS);
+			dev_err(hostdata->dev, "host (Ver. %s) doesn't support large transfers\n",
+				hostdata->madapter_info.srp_version);
+			dev_err(hostdata->dev, "limiting scatterlists to %d\n",
+				MAX_INDIRECT_BUFS);
 			hostdata->host->sg_tablesize = MAX_INDIRECT_BUFS;
 		}
 	}
@@ -775,19 +862,20 @@ static void send_mad_adapter_info(struct ibmvscsi_host_data *hostdata)
 {
 	struct viosrp_adapter_info *req;
 	struct srp_event_struct *evt_struct;
+	unsigned long flags;
 	dma_addr_t addr;
 
 	evt_struct = get_event_struct(&hostdata->pool);
 	if (!evt_struct) {
-		printk(KERN_ERR "ibmvscsi: couldn't allocate an event "
-		       "for ADAPTER_INFO_REQ!\n");
+		dev_err(hostdata->dev,
+			"couldn't allocate an event for ADAPTER_INFO_REQ!\n");
 		return;
 	}
 
 	init_event_struct(evt_struct,
 			  adapter_info_rsp,
 			  VIOSRP_MAD_FORMAT,
-			  init_timeout * HZ);
+			  init_timeout);
 	
 	req = &evt_struct->iu.mad.adapter_info;
 	memset(req, 0x00, sizeof(*req));
@@ -800,20 +888,20 @@ static void send_mad_adapter_info(struct ibmvscsi_host_data *hostdata)
 					    DMA_BIDIRECTIONAL);
 
 	if (dma_mapping_error(req->buffer)) {
-		printk(KERN_ERR
-		       "ibmvscsi: Unable to map request_buffer "
-		       "for adapter_info!\n");
+		dev_err(hostdata->dev, "Unable to map request_buffer for adapter_info!\n");
 		free_event_struct(&hostdata->pool, evt_struct);
 		return;
 	}
 	
-	if (ibmvscsi_send_srp_event(evt_struct, hostdata)) {
-		printk(KERN_ERR "ibmvscsi: couldn't send ADAPTER_INFO_REQ!\n");
+	spin_lock_irqsave(hostdata->host->host_lock, flags);
+	if (ibmvscsi_send_srp_event(evt_struct, hostdata, init_timeout * 2)) {
+		dev_err(hostdata->dev, "couldn't send ADAPTER_INFO_REQ!\n");
 		dma_unmap_single(hostdata->dev,
 				 addr,
 				 sizeof(hostdata->madapter_info),
 				 DMA_BIDIRECTIONAL);
 	}
+	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
 };
 
 /**
@@ -830,39 +918,31 @@ static void login_rsp(struct srp_event_struct *evt_struct)
 	case SRP_LOGIN_RSP:	/* it worked! */
 		break;
 	case SRP_LOGIN_REJ:	/* refused! */
-		printk(KERN_INFO "ibmvscsi: SRP_LOGIN_REJ reason %u\n",
-		       evt_struct->xfer_iu->srp.login_rej.reason);
+		dev_info(hostdata->dev, "SRP_LOGIN_REJ reason %u\n",
+			 evt_struct->xfer_iu->srp.login_rej.reason);
 		/* Login failed.  */
 		atomic_set(&hostdata->request_limit, -1);
 		return;
 	default:
-		printk(KERN_ERR
-		       "ibmvscsi: Invalid login response typecode 0x%02x!\n",
-		       evt_struct->xfer_iu->srp.login_rsp.opcode);
+		dev_err(hostdata->dev, "Invalid login response typecode 0x%02x!\n",
+			evt_struct->xfer_iu->srp.login_rsp.opcode);
 		/* Login failed.  */
 		atomic_set(&hostdata->request_limit, -1);
 		return;
 	}
 
-	printk(KERN_INFO "ibmvscsi: SRP_LOGIN succeeded\n");
+	dev_info(hostdata->dev, "SRP_LOGIN succeeded\n");
 
-	if (evt_struct->xfer_iu->srp.login_rsp.req_lim_delta >
-	    (max_requests - 2))
-		evt_struct->xfer_iu->srp.login_rsp.req_lim_delta =
-		    max_requests - 2;
+	if (evt_struct->xfer_iu->srp.login_rsp.req_lim_delta < 0)
+		dev_err(hostdata->dev, "Invalid request_limit.\n");
 
-	/* Now we know what the real request-limit is */
+	/* Now we know what the real request-limit is.
+	 * This value is set rather than added to request_limit because
+	 * request_limit could have been set to -1 by this client.
+	 */
 	atomic_set(&hostdata->request_limit,
 		   evt_struct->xfer_iu->srp.login_rsp.req_lim_delta);
 
-	hostdata->host->can_queue =
-	    evt_struct->xfer_iu->srp.login_rsp.req_lim_delta - 2;
-
-	if (hostdata->host->can_queue < 1) {
-		printk(KERN_ERR "ibmvscsi: Invalid request_limit_delta\n");
-		return;
-	}
-
 	/* If we had any pending I/Os, kick them */
 	scsi_unblock_requests(hostdata->host);
 
@@ -883,15 +963,14 @@ static int send_srp_login(struct ibmvscsi_host_data *hostdata)
 	struct srp_login_req *login;
 	struct srp_event_struct *evt_struct = get_event_struct(&hostdata->pool);
 	if (!evt_struct) {
-		printk(KERN_ERR
-		       "ibmvscsi: couldn't allocate an event for login req!\n");
+		dev_err(hostdata->dev, "couldn't allocate an event for login req!\n");
 		return FAILED;
 	}
 
 	init_event_struct(evt_struct,
 			  login_rsp,
 			  VIOSRP_SRP_FORMAT,
-			  init_timeout * HZ);
+			  init_timeout);
 
 	login = &evt_struct->iu.srp.login_req;
 	memset(login, 0x00, sizeof(struct srp_login_req));
@@ -906,9 +985,9 @@ static int send_srp_login(struct ibmvscsi_host_data *hostdata)
 	 */
 	atomic_set(&hostdata->request_limit, 0);
 
-	rc = ibmvscsi_send_srp_event(evt_struct, hostdata);
+	rc = ibmvscsi_send_srp_event(evt_struct, hostdata, init_timeout * 2);
 	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-	printk("ibmvscsic: sent SRP login\n");
+	dev_info(hostdata->dev, "sent SRP login\n");
 	return rc;
 };
 
@@ -933,8 +1012,7 @@ static void sync_completion(struct srp_event_struct *evt_struct)
  */
 static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
 {
-	struct ibmvscsi_host_data *hostdata =
-	    (struct ibmvscsi_host_data *)cmd->device->host->hostdata;
+	struct ibmvscsi_host_data *hostdata = shost_priv(cmd->device->host);
 	struct srp_tsk_mgmt *tsk_mgmt;
 	struct srp_event_struct *evt;
 	struct srp_event_struct *tmp_evt, *found_evt;
@@ -960,20 +1038,20 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
 
 		if (!found_evt) {
 			spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-			return FAILED;
+			return SUCCESS;
 		}
 
 		evt = get_event_struct(&hostdata->pool);
 		if (evt == NULL) {
 			spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-			printk(KERN_ERR "ibmvscsi: failed to allocate abort event\n");
+			sdev_printk(KERN_ERR, cmd->device, "failed to allocate abort event\n");
 			return FAILED;
 		}
 	
 		init_event_struct(evt,
 				  sync_completion,
 				  VIOSRP_SRP_FORMAT,
-				  init_timeout * HZ);
+				  init_timeout);
 
 		tsk_mgmt = &evt->iu.srp.tsk_mgmt;
 	
@@ -987,7 +1065,7 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
 		evt->sync_srp = &srp_rsp;
 
 		init_completion(&evt->comp);
-		rsp_rc = ibmvscsi_send_srp_event(evt, hostdata);
+		rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
 
 		if (rsp_rc != SCSI_MLQUEUE_HOST_BUSY)
 			break;
@@ -1000,21 +1078,22 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
 	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
 
 	if (rsp_rc != 0) {
-		printk(KERN_ERR "ibmvscsi: failed to send abort() event\n");
+		sdev_printk(KERN_ERR, cmd->device,
+		            "failed to send abort() event. rc=%d\n", rsp_rc);
 		return FAILED;
 	}
 
-	printk(KERN_INFO "ibmvscsi: aborting command. lun 0x%lx, tag 0x%lx\n",
-	       (((u64) lun) << 48), (u64) found_evt);
+	sdev_printk(KERN_INFO, cmd->device,
+	            "aborting command. lun 0x%lx, tag 0x%lx\n",
+	            (((u64) lun) << 48), (u64) found_evt);
 
 	wait_for_completion(&evt->comp);
 
 	/* make sure we got a good response */
 	if (unlikely(srp_rsp.srp.rsp.opcode != SRP_RSP)) {
 		if (printk_ratelimit())
-			printk(KERN_WARNING 
-			       "ibmvscsi: abort bad SRP RSP type %d\n",
-			       srp_rsp.srp.rsp.opcode);
+			sdev_printk(KERN_WARNING, cmd->device, "abort bad SRP RSP type %d\n",
+				    srp_rsp.srp.rsp.opcode);
 		return FAILED;
 	}
 
@@ -1025,10 +1104,9 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
 
 	if (rsp_rc) {
 		if (printk_ratelimit())
-			printk(KERN_WARNING 
-			       "ibmvscsi: abort code %d for task tag 0x%lx\n",
-			       rsp_rc,
-			       tsk_mgmt->task_tag);
+			sdev_printk(KERN_WARNING, cmd->device,
+				    "abort code %d for task tag 0x%lx\n",
+				    rsp_rc, tsk_mgmt->task_tag);
 		return FAILED;
 	}
 
@@ -1047,15 +1125,13 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
 
 	if (found_evt == NULL) {
 		spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-		printk(KERN_INFO
-		       "ibmvscsi: aborted task tag 0x%lx completed\n",
-		       tsk_mgmt->task_tag);
+		sdev_printk(KERN_INFO, cmd->device, "aborted task tag 0x%lx completed\n",
+			    tsk_mgmt->task_tag);
 		return SUCCESS;
 	}
 
-	printk(KERN_INFO
-	       "ibmvscsi: successfully aborted task tag 0x%lx\n",
-	       tsk_mgmt->task_tag);
+	sdev_printk(KERN_INFO, cmd->device, "successfully aborted task tag 0x%lx\n",
+		    tsk_mgmt->task_tag);
 
 	cmd->result = (DID_ABORT << 16);
 	list_del(&found_evt->list);
@@ -1074,9 +1150,7 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
  */
 static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
 {
-	struct ibmvscsi_host_data *hostdata =
-	    (struct ibmvscsi_host_data *)cmd->device->host->hostdata;
-
+	struct ibmvscsi_host_data *hostdata = shost_priv(cmd->device->host);
 	struct srp_tsk_mgmt *tsk_mgmt;
 	struct srp_event_struct *evt;
 	struct srp_event_struct *tmp_evt, *pos;
@@ -1092,14 +1166,15 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
 		evt = get_event_struct(&hostdata->pool);
 		if (evt == NULL) {
 			spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-			printk(KERN_ERR "ibmvscsi: failed to allocate reset event\n");
+			sdev_printk(KERN_ERR, cmd->device,
+			            "failed to allocate reset event\n");
 			return FAILED;
 		}
 	
 		init_event_struct(evt,
 				  sync_completion,
 				  VIOSRP_SRP_FORMAT,
-				  init_timeout * HZ);
+				  init_timeout);
 
 		tsk_mgmt = &evt->iu.srp.tsk_mgmt;
 
@@ -1112,7 +1187,7 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
 		evt->sync_srp = &srp_rsp;
 
 		init_completion(&evt->comp);
-		rsp_rc = ibmvscsi_send_srp_event(evt, hostdata);
+		rsp_rc = ibmvscsi_send_srp_event(evt, hostdata, init_timeout * 2);
 
 		if (rsp_rc != SCSI_MLQUEUE_HOST_BUSY)
 			break;
@@ -1125,21 +1200,21 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
 	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
 
 	if (rsp_rc != 0) {
-		printk(KERN_ERR "ibmvscsi: failed to send reset event\n");
+		sdev_printk(KERN_ERR, cmd->device,
+		            "failed to send reset event. rc=%d\n", rsp_rc);
 		return FAILED;
 	}
 
-	printk(KERN_INFO "ibmvscsi: resetting device. lun 0x%lx\n",
-	       (((u64) lun) << 48));
+	sdev_printk(KERN_INFO, cmd->device, "resetting device. lun 0x%lx\n",
+	            (((u64) lun) << 48));
 
 	wait_for_completion(&evt->comp);
 
 	/* make sure we got a good response */
 	if (unlikely(srp_rsp.srp.rsp.opcode != SRP_RSP)) {
 		if (printk_ratelimit())
-			printk(KERN_WARNING 
-			       "ibmvscsi: reset bad SRP RSP type %d\n",
-			       srp_rsp.srp.rsp.opcode);
+			sdev_printk(KERN_WARNING, cmd->device, "reset bad SRP RSP type %d\n",
+				    srp_rsp.srp.rsp.opcode);
 		return FAILED;
 	}
 
@@ -1150,9 +1225,9 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
 
 	if (rsp_rc) {
 		if (printk_ratelimit())
-			printk(KERN_WARNING 
-			       "ibmvscsi: reset code %d for task tag 0x%lx\n",
-			       rsp_rc, tsk_mgmt->task_tag);
+			sdev_printk(KERN_WARNING, cmd->device,
+				    "reset code %d for task tag 0x%lx\n",
+				    rsp_rc, tsk_mgmt->task_tag);
 		return FAILED;
 	}
 
@@ -1181,32 +1256,29 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
 }
 
 /**
- * purge_requests: Our virtual adapter just shut down.  purge any sent requests
- * @hostdata:    the adapter
- */
-static void purge_requests(struct ibmvscsi_host_data *hostdata, int error_code)
+ * ibmvscsi_eh_host_reset_handler - Reset the connection to the server
+ * @cmd:	struct scsi_cmnd having problems
+*/
+static int ibmvscsi_eh_host_reset_handler(struct scsi_cmnd *cmd)
 {
-	struct srp_event_struct *tmp_evt, *pos;
-	unsigned long flags;
+	unsigned long wait_switch = 0;
+	struct ibmvscsi_host_data *hostdata = shost_priv(cmd->device->host);
 
-	spin_lock_irqsave(hostdata->host->host_lock, flags);
-	list_for_each_entry_safe(tmp_evt, pos, &hostdata->sent, list) {
-		list_del(&tmp_evt->list);
-		if (tmp_evt->cmnd) {
-			tmp_evt->cmnd->result = (error_code << 16);
-			unmap_cmd_data(&tmp_evt->iu.srp.cmd, 
-				       tmp_evt,	
-				       tmp_evt->hostdata->dev);
-			if (tmp_evt->cmnd_done)
-				tmp_evt->cmnd_done(tmp_evt->cmnd);
-		} else {
-			if (tmp_evt->done) {
-				tmp_evt->done(tmp_evt);
-			}
-		}
-		free_event_struct(&tmp_evt->hostdata->pool, tmp_evt);
+	dev_err(hostdata->dev, "Resetting connection due to error recovery\n");
+
+	ibmvscsi_reset_host(hostdata);
+
+	for (wait_switch = jiffies + (init_timeout * HZ);
+	     time_before(jiffies, wait_switch) &&
+		     atomic_read(&hostdata->request_limit) < 2;) {
+
+		msleep(10);
 	}
-	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
+
+	if (atomic_read(&hostdata->request_limit) <= 0)
+		return FAILED;
+
+	return SUCCESS;
 }
 
 /**
@@ -1218,6 +1290,7 @@ static void purge_requests(struct ibmvscsi_host_data *hostdata, int error_code)
 void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 			 struct ibmvscsi_host_data *hostdata)
 {
+	long rc;
 	unsigned long flags;
 	struct srp_event_struct *evt_struct =
 	    (struct srp_event_struct *)crq->IU_data_ptr;
@@ -1225,27 +1298,25 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 	case 0xC0:		/* initialization */
 		switch (crq->format) {
 		case 0x01:	/* Initialization message */
-			printk(KERN_INFO "ibmvscsi: partner initialized\n");
+			dev_info(hostdata->dev, "partner initialized\n");
 			/* Send back a response */
-			if (ibmvscsi_send_crq(hostdata,
-					      0xC002000000000000LL, 0) == 0) {
+			if ((rc = ibmvscsi_send_crq(hostdata,
+						    0xC002000000000000LL, 0)) == 0) {
 				/* Now login */
 				send_srp_login(hostdata);
 			} else {
-				printk(KERN_ERR
-				       "ibmvscsi: Unable to send init rsp\n");
+				dev_err(hostdata->dev, "Unable to send init rsp. rc=%ld\n", rc);
 			}
 
 			break;
 		case 0x02:	/* Initialization response */
-			printk(KERN_INFO
-			       "ibmvscsi: partner initialization complete\n");
+			dev_info(hostdata->dev, "partner initialization complete\n");
 
 			/* Now login */
 			send_srp_login(hostdata);
 			break;
 		default:
-			printk(KERN_ERR "ibmvscsi: unknown crq message type\n");
+			dev_err(hostdata->dev, "unknown crq message type: %d\n", crq->format);
 		}
 		return;
 	case 0xFF:	/* Hypervisor telling us the connection is closed */
@@ -1253,8 +1324,7 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 		atomic_set(&hostdata->request_limit, 0);
 		if (crq->format == 0x06) {
 			/* We need to re-setup the interpartition connection */
-			printk(KERN_INFO
-			       "ibmvscsi: Re-enabling adapter!\n");
+			dev_info(hostdata->dev, "Re-enabling adapter!\n");
 			purge_requests(hostdata, DID_REQUEUE);
 			if ((ibmvscsi_reenable_crq_queue(&hostdata->queue,
 							hostdata)) ||
@@ -1262,14 +1332,11 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 					       0xC001000000000000LL, 0))) {
 					atomic_set(&hostdata->request_limit,
 						   -1);
-					printk(KERN_ERR
-					       "ibmvscsi: error after"
-					       " enable\n");
+					dev_err(hostdata->dev, "error after enable\n");
 			}
 		} else {
-			printk(KERN_INFO
-			       "ibmvscsi: Virtual adapter failed rc %d!\n",
-			       crq->format);
+			dev_err(hostdata->dev, "Virtual adapter failed rc %d!\n",
+				crq->format);
 
 			purge_requests(hostdata, DID_ERROR);
 			if ((ibmvscsi_reset_crq_queue(&hostdata->queue,
@@ -1278,8 +1345,7 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 					       0xC001000000000000LL, 0))) {
 					atomic_set(&hostdata->request_limit,
 						   -1);
-					printk(KERN_ERR
-					       "ibmvscsi: error after reset\n");
+					dev_err(hostdata->dev, "error after reset\n");
 			}
 		}
 		scsi_unblock_requests(hostdata->host);
@@ -1287,9 +1353,8 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 	case 0x80:		/* real payload */
 		break;
 	default:
-		printk(KERN_ERR
-		       "ibmvscsi: got an invalid message type 0x%02x\n",
-		       crq->valid);
+		dev_err(hostdata->dev, "got an invalid message type 0x%02x\n",
+			crq->valid);
 		return;
 	}
 
@@ -1298,16 +1363,14 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 	 * actually sent
 	 */
 	if (!valid_event_struct(&hostdata->pool, evt_struct)) {
-		printk(KERN_ERR
-		       "ibmvscsi: returned correlation_token 0x%p is invalid!\n",
+		dev_err(hostdata->dev, "returned correlation_token 0x%p is invalid!\n",
 		       (void *)crq->IU_data_ptr);
 		return;
 	}
 
 	if (atomic_read(&evt_struct->free)) {
-		printk(KERN_ERR
-		       "ibmvscsi: received duplicate  correlation_token 0x%p!\n",
-		       (void *)crq->IU_data_ptr);
+		dev_err(hostdata->dev, "received duplicate correlation_token 0x%p!\n",
+			(void *)crq->IU_data_ptr);
 		return;
 	}
 
@@ -1315,11 +1378,12 @@ void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 		atomic_add(evt_struct->xfer_iu->srp.rsp.req_lim_delta,
 			   &hostdata->request_limit);
 
+	del_timer(&evt_struct->timer);
+
 	if (evt_struct->done)
 		evt_struct->done(evt_struct);
 	else
-		printk(KERN_ERR
-		       "ibmvscsi: returned done() is NULL; not running it!\n");
+		dev_err(hostdata->dev, "returned done() is NULL; not running it!\n");
 
 	/*
 	 * Lock the host_lock before messing with these structures, since we
@@ -1340,20 +1404,20 @@ static int ibmvscsi_do_host_config(struct ibmvscsi_host_data *hostdata,
 {
 	struct viosrp_host_config *host_config;
 	struct srp_event_struct *evt_struct;
+	unsigned long flags;
 	dma_addr_t addr;
 	int rc;
 
 	evt_struct = get_event_struct(&hostdata->pool);
 	if (!evt_struct) {
-		printk(KERN_ERR
-		       "ibmvscsi: could't allocate event for HOST_CONFIG!\n");
+		dev_err(hostdata->dev, "couldn't allocate event for HOST_CONFIG!\n");
 		return -1;
 	}
 
 	init_event_struct(evt_struct,
 			  sync_completion,
 			  VIOSRP_MAD_FORMAT,
-			  init_timeout * HZ);
+			  init_timeout);
 
 	host_config = &evt_struct->iu.mad.host_config;
 
@@ -1366,14 +1430,15 @@ static int ibmvscsi_do_host_config(struct ibmvscsi_host_data *hostdata,
 						    DMA_BIDIRECTIONAL);
 
 	if (dma_mapping_error(host_config->buffer)) {
-		printk(KERN_ERR
-		       "ibmvscsi: dma_mapping error " "getting host config\n");
+		dev_err(hostdata->dev, "dma_mapping error getting host config\n");
 		free_event_struct(&hostdata->pool, evt_struct);
 		return -1;
 	}
 
 	init_completion(&evt_struct->comp);
-	rc = ibmvscsi_send_srp_event(evt_struct, hostdata);
+	spin_lock_irqsave(hostdata->host->host_lock, flags);
+	rc = ibmvscsi_send_srp_event(evt_struct, hostdata, init_timeout * 2);
+	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
 	if (rc == 0)
 		wait_for_completion(&evt_struct->comp);
 	dma_unmap_single(hostdata->dev, addr, length, DMA_BIDIRECTIONAL);
@@ -1404,17 +1469,33 @@ static int ibmvscsi_slave_configure(struct scsi_device *sdev)
 	return 0;
 }
 
+/**
+ * ibmvscsi_change_queue_depth - Change the device's queue depth
+ * @sdev:	scsi device struct
+ * @qdepth:	depth to set
+ *
+ * Return value:
+ * 	actual depth set
+ **/
+static int ibmvscsi_change_queue_depth(struct scsi_device *sdev, int qdepth)
+{
+	if (qdepth > IBMVSCSI_MAX_CMDS_PER_LUN)
+		qdepth = IBMVSCSI_MAX_CMDS_PER_LUN;
+
+	scsi_adjust_queue_depth(sdev, 0, qdepth);
+	return sdev->queue_depth;
+}
+
 /* ------------------------------------------------------------
  * sysfs attributes
  */
 static ssize_t show_host_srp_version(struct class_device *class_dev, char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(class_dev);
-	struct ibmvscsi_host_data *hostdata =
-	    (struct ibmvscsi_host_data *)shost->hostdata;
+	struct ibmvscsi_host_data *hostdata = shost_priv(shost);
 	int len;
 
-	len = snprintf(buf, buff_size, "%s\n",
+	len = snprintf(buf, PAGE_SIZE, "%s\n",
 		       hostdata->madapter_info.srp_version);
 	return len;
 }
@@ -1431,11 +1512,10 @@ static ssize_t show_host_partition_name(struct class_device *class_dev,
 					char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(class_dev);
-	struct ibmvscsi_host_data *hostdata =
-	    (struct ibmvscsi_host_data *)shost->hostdata;
+	struct ibmvscsi_host_data *hostdata = shost_priv(shost);
 	int len;
 
-	len = snprintf(buf, buff_size, "%s\n",
+	len = snprintf(buf, PAGE_SIZE, "%s\n",
 		       hostdata->madapter_info.partition_name);
 	return len;
 }
@@ -1452,11 +1532,10 @@ static ssize_t show_host_partition_number(struct class_device *class_dev,
 					  char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(class_dev);
-	struct ibmvscsi_host_data *hostdata =
-	    (struct ibmvscsi_host_data *)shost->hostdata;
+	struct ibmvscsi_host_data *hostdata = shost_priv(shost);
 	int len;
 
-	len = snprintf(buf, buff_size, "%d\n",
+	len = snprintf(buf, PAGE_SIZE, "%d\n",
 		       hostdata->madapter_info.partition_number);
 	return len;
 }
@@ -1472,11 +1551,10 @@ static struct class_device_attribute ibmvscsi_host_partition_number = {
 static ssize_t show_host_mad_version(struct class_device *class_dev, char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(class_dev);
-	struct ibmvscsi_host_data *hostdata =
-	    (struct ibmvscsi_host_data *)shost->hostdata;
+	struct ibmvscsi_host_data *hostdata = shost_priv(shost);
 	int len;
 
-	len = snprintf(buf, buff_size, "%d\n",
+	len = snprintf(buf, PAGE_SIZE, "%d\n",
 		       hostdata->madapter_info.mad_version);
 	return len;
 }
@@ -1492,11 +1570,10 @@ static struct class_device_attribute ibmvscsi_host_mad_version = {
 static ssize_t show_host_os_type(struct class_device *class_dev, char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(class_dev);
-	struct ibmvscsi_host_data *hostdata =
-	    (struct ibmvscsi_host_data *)shost->hostdata;
+	struct ibmvscsi_host_data *hostdata = shost_priv(shost);
 	int len;
 
-	len = snprintf(buf, buff_size, "%d\n", hostdata->madapter_info.os_type);
+	len = snprintf(buf, PAGE_SIZE, "%d\n", hostdata->madapter_info.os_type);
 	return len;
 }
 
@@ -1511,11 +1588,10 @@ static struct class_device_attribute ibmvscsi_host_os_type = {
 static ssize_t show_host_config(struct class_device *class_dev, char *buf)
 {
 	struct Scsi_Host *shost = class_to_shost(class_dev);
-	struct ibmvscsi_host_data *hostdata =
-	    (struct ibmvscsi_host_data *)shost->hostdata;
+	struct ibmvscsi_host_data *hostdata = shost_priv(shost);
 
 	/* returns null-terminated host config data */
-	if (ibmvscsi_do_host_config(hostdata, buf, buff_size) == 0)
+	if (ibmvscsi_do_host_config(hostdata, buf, PAGE_SIZE) == 0)
 		return strlen(buf);
 	else
 		return 0;
@@ -1549,9 +1625,11 @@ static struct scsi_host_template driver_template = {
 	.queuecommand = ibmvscsi_queuecommand,
 	.eh_abort_handler = ibmvscsi_eh_abort_handler,
 	.eh_device_reset_handler = ibmvscsi_eh_device_reset_handler,
+	.eh_host_reset_handler = ibmvscsi_eh_host_reset_handler,
 	.slave_configure = ibmvscsi_slave_configure,
+	.change_queue_depth = ibmvscsi_change_queue_depth,
 	.cmd_per_lun = 16,
-	.can_queue = 1,		/* Updated after SRP_LOGIN */
+	.can_queue = IBMVSCSI_MAX_REQUESTS_DEFAULT,
 	.this_id = -1,
 	.sg_tablesize = SG_ALL,
 	.use_clustering = ENABLE_CLUSTERING,
@@ -1571,13 +1649,14 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
 
 	vdev->dev.driver_data = NULL;
 
+	driver_template.can_queue = max_requests;
 	host = scsi_host_alloc(&driver_template, sizeof(*hostdata));
 	if (!host) {
-		printk(KERN_ERR "ibmvscsi: couldn't allocate host data\n");
+		dev_err(&vdev->dev, "couldn't allocate host data\n");
 		goto scsi_host_alloc_failed;
 	}
 
-	hostdata = (struct ibmvscsi_host_data *)host->hostdata;
+	hostdata = shost_priv(host);
 	memset(hostdata, 0x00, sizeof(*hostdata));
 	INIT_LIST_HEAD(&hostdata->sent);
 	hostdata->host = host;
@@ -1587,11 +1666,11 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
 
 	rc = ibmvscsi_init_crq_queue(&hostdata->queue, hostdata, max_requests);
 	if (rc != 0 && rc != H_RESOURCE) {
-		printk(KERN_ERR "ibmvscsi: couldn't initialize crq\n");
+		dev_err(&vdev->dev, "couldn't initialize crq. rc=%d\n", rc);
 		goto init_crq_failed;
 	}
 	if (initialize_event_pool(&hostdata->pool, max_requests, hostdata) != 0) {
-		printk(KERN_ERR "ibmvscsi: couldn't initialize event pool\n");
+		dev_err(&vdev->dev, "couldn't initialize event pool\n");
 		goto init_pool_failed;
 	}
 
diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.h b/drivers/scsi/ibmvscsi/ibmvscsi.h
index 5c6d935..b19c2e2 100644
--- a/drivers/scsi/ibmvscsi/ibmvscsi.h
+++ b/drivers/scsi/ibmvscsi/ibmvscsi.h
@@ -44,6 +44,9 @@ struct Scsi_Host;
  */
 #define MAX_INDIRECT_BUFS 10
 
+#define IBMVSCSI_MAX_REQUESTS_DEFAULT 100
+#define IBMVSCSI_MAX_CMDS_PER_LUN 64
+
 /* ------------------------------------------------------------
  * Data Structures
  */
@@ -67,6 +70,7 @@ struct srp_event_struct {
 	union viosrp_iu iu;
 	void (*cmnd_done) (struct scsi_cmnd *);
 	struct completion comp;
+	struct timer_list timer;
 	union viosrp_iu *sync_srp;
 	struct srp_direct_buf *ext_list;
 	dma_addr_t ext_list_token;
diff --git a/drivers/scsi/ibmvscsi/rpa_vscsi.c b/drivers/scsi/ibmvscsi/rpa_vscsi.c
index ed22b96..a9a7f5e 100644
--- a/drivers/scsi/ibmvscsi/rpa_vscsi.c
+++ b/drivers/scsi/ibmvscsi/rpa_vscsi.c
@@ -182,7 +182,7 @@ static void set_adapter_info(struct ibmvscsi_host_data *hostdata)
 	memset(&hostdata->madapter_info, 0x00,
 			sizeof(hostdata->madapter_info));
 
-	printk(KERN_INFO "rpa_vscsi: SPR_VERSION: %s\n", SRP_VERSION);
+	dev_info(hostdata->dev, "SRP_VERSION: %s\n", SRP_VERSION);
 	strcpy(hostdata->madapter_info.srp_version, SRP_VERSION);
 
 	strncpy(hostdata->madapter_info.partition_name, partition_name,
@@ -237,25 +237,24 @@ int ibmvscsi_init_crq_queue(struct crq_queue *queue,
 
 	if (rc == 2) {
 		/* Adapter is good, but other end is not ready */
-		printk(KERN_WARNING "ibmvscsi: Partner adapter not ready\n");
+		dev_warn(hostdata->dev, "Partner adapter not ready\n");
 		retrc = 0;
 	} else if (rc != 0) {
-		printk(KERN_WARNING "ibmvscsi: Error %d opening adapter\n", rc);
+		dev_warn(hostdata->dev, "Error %d opening adapter\n", rc);
 		goto reg_crq_failed;
 	}
 
 	if (request_irq(vdev->irq,
 			ibmvscsi_handle_event,
 			0, "ibmvscsi", (void *)hostdata) != 0) {
-		printk(KERN_ERR "ibmvscsi: couldn't register irq 0x%x\n",
-		       vdev->irq);
+		dev_err(hostdata->dev, "couldn't register irq 0x%x\n",
+			vdev->irq);
 		goto req_irq_failed;
 	}
 
 	rc = vio_enable_interrupts(vdev);
 	if (rc != 0) {
-		printk(KERN_ERR "ibmvscsi:  Error %d enabling interrupts!!!\n",
-		       rc);
+		dev_err(hostdata->dev, "Error %d enabling interrupts!!!\n", rc);
 		goto req_irq_failed;
 	}
 
@@ -299,7 +298,7 @@ int ibmvscsi_reenable_crq_queue(struct crq_queue *queue,
 	} while ((rc == H_IN_PROGRESS) || (rc == H_BUSY) || (H_IS_LONG_BUSY(rc)));
 
 	if (rc)
-		printk(KERN_ERR "ibmvscsi: Error %d enabling adapter\n", rc);
+		dev_err(hostdata->dev, "Error %d enabling adapter\n", rc);
 	return rc;
 }
 
@@ -332,10 +331,9 @@ int ibmvscsi_reset_crq_queue(struct crq_queue *queue,
 				queue->msg_token, PAGE_SIZE);
 	if (rc == 2) {
 		/* Adapter is good, but other end is not ready */
-		printk(KERN_WARNING "ibmvscsi: Partner adapter not ready\n");
+		dev_warn(hostdata->dev, "Partner adapter not ready\n");
 	} else if (rc != 0) {
-		printk(KERN_WARNING
-		       "ibmvscsi: couldn't register crq--rc 0x%x\n", rc);
+		dev_warn(hostdata->dev, "couldn't register crq--rc 0x%x\n", rc);
 	}
 	return rc;
 }
diff --git a/include/asm-powerpc/atomic.h b/include/asm-powerpc/atomic.h
index 53283e2..f038e33 100644
--- a/include/asm-powerpc/atomic.h
+++ b/include/asm-powerpc/atomic.h
@@ -207,7 +207,8 @@ static __inline__ int atomic_add_unless(atomic_t *v, int a, int u)
 
 /*
  * Atomically test *v and decrement if it is greater than 0.
- * The function returns the old value of *v minus 1.
+ * The function returns the old value of *v minus 1, even if
+ * the atomic variable, v, was not decremented.
  */
 static __inline__ int atomic_dec_if_positive(atomic_t *v)
 {
@@ -216,14 +217,15 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v)
 	__asm__ __volatile__(
 	LWSYNC_ON_SMP
 "1:	lwarx	%0,0,%1		# atomic_dec_if_positive\n\
-	addic.	%0,%0,-1\n\
+	cmpwi	%0,1\n\
+	addi	%0,%0,-1\n\
 	blt-	2f\n"
 	PPC405_ERR77(0,%1)
 "	stwcx.	%0,0,%1\n\
 	bne-	1b"
 	ISYNC_ON_SMP
 	"\n\
-2:"	: "=&r" (t)
+2:"	: "=&b" (t)
 	: "r" (&v->counter)
 	: "cc", "memory");