kernel-2.6.18-194.11.1.el5.src.rpm

From: Chip Coldwell <coldwell@redhat.com>
Date: Wed, 5 Dec 2007 16:21:26 -0500
Subject: [scsi] update lpfc driver to 8.2.0.8
Message-id: alpine.LRH.0.9999.0712051426590.5594@bogart.boston.redhat.com
O-Subject: [RHEL-5.2 PATCH] bz252989 Update lpfc driver to 8.2.0.8
Bugzilla: 252989

The patch below comes from Emulex and brings the RHEL-5 lpfc driver up
to date for 5.2.  A subset of this patch is being proposed for a
5.1.z-stream release in bz385351.  The patch touches nearly every part
of the driver, so I have tested it extensively using bonnie++, dt, and
iozone.

The patch is extensive; the ChangeLog from the vendor is below:

Changes from 20070921 to 20071004

	* Changed version number to 8.2.0.8
	* Fix panic in fc_remote_port_delete when unloading driver (CR 26705)
	* NameServer PLOGI errors cause HBA to be left in unusable state
	  (CR 23226)
	* Remove unneeded repeated authentication reject message (CR 26802)
	* Fix build error introduced in r4189
	* Added 8G to list of supported speeds (CR 26800)
	* Fix authentication failure message when security service is stopped.
	  (CR 26799)
	* Loading LPFC driver does not complain when link speed is set to an
	  unsupported value (CR 26771)
	* Added code to re-enable link interrupts after FLOGI failure. (CR 25270)
	* Fix SLI3/HBQ flag handling in lpfcdfc_ct_unsol_event (CR 26511)
	* Added code in sysfs mailbox handler functions to implement pid based
	  contexts. (CR 26600)
	* Illegal State Transition messages display during LIP testing
	* Fix debugfs hbqinfo display for ppc
	* Fix usage of UNUSED list and ndlps
	* Ignore PLOGI responses from WWPName or WWNName of 0. (CR 26579)
	* Remove unused code in switch statement outside of a case.
	* Return EPERM error for mbox commands issued using sysfs when HBA is
	  overheated. (CR 26661)
	* Reject PLOGI from invalid PName or NName of 0.

Changes from 20070823 to 20070921

	* Changed version number to 8.2.0.7
	* Remove nodelist not empty messages on rmmod with 100 vports
	* Fix panic when deleting vports
	* Added support for WRITE_VPARMS mailbox command. (CR 26701)
	* Revert the FCP Fbits offset back to 7
	* Crash while deleting vports while HBA is reset (CR 26700)
	* Fix RPI leak (CR 26549)
	* HBQ buffer_count implemented incorrectly
	* Cannot rcv unsol ELS frames after running HBA resets for a few minutes
	* ndlp left in PLOGI state after link up
	* Clear ADISC flag when unregistering RPI and REMOVE ndlps if in
	  recovery.
	* Fix too many retries in FC-AL mode
	* Add code to handle signals during vport_create (CR 26558)
	* Fix Vport CT flags to only be set when accepted
	* fc swap test led to device going offline and dt threads hanging
	  (CR 26442)
	* Make driver compile on upstream kernel
	* Illegal state transition message seen with San Blaze (CR 26521)
	* Fixed temperature sensor sysfs attribute to report value correctly.
	* Use LPFC_MAX_VPORTS instead of LPFC_MAX_VPI in for loop.
	* Fix 2 minor issues for libdfc event support
	* Fix Cannot issue Register Fabric login problems on link up (CR 26082)
	* Implement new DA_ID CT command. (CR 26288)
	* Fix up lpfc_drop_node() usage
	* Remove annoying ELS messages when driver is unloaded
	* Default rpi cleanup code causes rogue ndlps to remain on the NPR list.

Changes from 20070809 to 20070823

	* Changed version number to 8.2.0.6
	* Make 2 functions static for sparse
	* Fix panic from bad ndlp in lpfc_cmpl_els_rsp after link down
	  (CR 26188)
	* Allow for only one Hash during authentication.
	* Clear security_active when authentication parameter is disabled.
	* Remove unused LPFC_MAX_HBQ #define
	* Set vport for all IOCB requests via ioctl.
	* Clear FC_FABRIC and FC_PUBLIC_LOOP flags on fabric LOGO. (CR 26291)
	* Update authentication config properly.
	* Potential ndlp use after free
	* Remove completion code related to dev_loss_tmo.
	* Increase timeout value for REPORT_LUN to 60 seconds.
	* Split uses of error variable in lpfc_pci_probe_one into retval and
	  error (CR 25832)

Changes from 20070727 to 20070809

	* Changed version number to 8.2.0.5
	* Prevent lock recursion in logo_reglogin_issue
	* Prevent double IOCB free when FDISC fails.
	* Fix use after free of ndlp (CR 26160)
	* Fix warnings from added printfs
	* Added support for saturn temperature sensor. (CR 25273)
	* Remove MBX_STOP_IOCB logic (CR 25907)
	* Add vport and HBQ support to ioctl code. (CR 26203)
	* Fix broken vport parameters (CR 26100)
	* Relieve some mbox congestion on link up with 100 vports
	  (CR 25091)
	* Fixes to integrate DH-CHAP with hbanywhere and PPC
	* Fix link event causing flood of 0108 messages
	* Remove vport params when npiv is disabled (CR 26100)
	* Fix discovery use after free ndlp panics
	* Added remote WWN to the authentication configuration request
	  (CR 25639)

Changes from 20070719 to 20070727

	* Changed version number to 8.2.0.4
	* Fix inconsistent GFP_ flags in lpfc_els_hbq_alloc
	* Remove index-out-of-range error in iocb.
	* Remove restriction on what can be echoed into the
	  lpfc_restart_auth parameter
	* Enhance log message when ELS retries fail
	* Remove "issue_lip" for vports from the fc_host directory. (CR 25951)
	* Fixed memory corruption involving bad node reference
	  counting. (CR 25901)
	* Fix sli_validate_fcp_iocb, sli_sum_iocb, sli_abort_iocb to be
	  vport-aware.
	* Move #ifdefed members of lpfc_vport structure to end
	* Change NPIV config parameter name to enable_npiv
	* Prevent waiting in lpfc_dev_loss_tmo_callbk for the worker
	  thread. (CR 25584)
	* Fix GFF_ID response on Little Endian

Changes from 20070705 to 20070719

	* Changed version number to 8.2.0.3
	* Clean up sparse build errors
	* FDISC retries forever (CR 25900)
	* Send ADISC when rpi is 0
	* Fix lpfc debugfs discovery trace output for ELS rsp cmpl
	* Add enable_auth parameter for vports
	* Fix up HBQ processing
	* Add vport log messages (CR 25843)
	* Added check to validate device pointer in
	  lpfc_sli_validate_fcp_iocb function. (CR 25046)
	* Remove unbalanced hba unlock.
	* Add HBQ information to lpfc debugfs
	* Move security parameter to vport and change to enable_auth
	  (CR 25850)
	* Only show npiv specific parameters when npiv is enabled
	  (CR 25855)
	* Put a reference on the nameserver's ndlp when performing CT
	  traffic.
	* Implement vport_delete wait/fail if in discovery
	* Only call pci_disable_msi if the pci_enable_msi succeeded
	* Clean up all instances of mixed tab-space indentation
	* Fix GFFID type offset to work correctly with big endian
	  structure. (CR 25814)
	* Set port_state the same as in upstream (CR 25799)
	* Fix references to hostdata as a phba

Changes from 20070705 to 20070705

	* Changed version number to 8.2.0.2
	* Fix port_list locking around driver unload.
	* Added support for MBX_SET_DEBUG mailbox command when HBA is
	  online.
	* Move log_verbose cfg params to vport
	* Fix up HBQ initialization
	* Fix problem where driver cannot unload right after link down or
	  rscn.
	* Fix driver unload problem
	* Use the correct flag (load_flag) to check for LOADING setting.
	  (CR 25651)
	* Move vport specific cfg params to vport
	* Fix locking around HBA's port_list (CR 25692)
	* Use lpfc_vport in ioctl path when searching for hba (CR 25658)
	* Fix warnings running on a 64 bit system.
	* Merged DHCHAP functionality
	* Connect ioctl support in driver
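
Before the patch itself, one note on its most visible structural change:
with the NPIV/vport rework, the Scsi_Host private area (hostdata) now
holds a per-port struct lpfc_vport rather than the struct lpfc_hba, each
vport points back to the shared HBA through vport->phba, and the new
lpfc_shost_from_vport() recovers the Scsi_Host from a vport with
container_of().  The short userspace sketch below, which uses simplified
stand-in types and is illustrative only (not code from the patch), shows
that access pattern:

/*
 * Illustrative sketch only; NOT part of the patch.  The structs are
 * reduced to a couple of fields each.  shost->hostdata holds the
 * per-port lpfc_vport, the vport points back to the shared lpfc_hba
 * via ->phba, and container_of() recovers the Scsi_Host from a vport
 * (the idiom the patch adds as lpfc_shost_from_vport()).
 */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

struct lpfc_hba   { int brd_no; };                      /* one per adapter */
struct lpfc_vport { struct lpfc_hba *phba; int vpi; };  /* one per (N)Port */

struct Scsi_Host {                       /* stand-in for the SCSI midlayer */
	int host_no;
	unsigned long hostdata[0];       /* driver private area follows    */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static struct Scsi_Host *shost_from_vport(struct lpfc_vport *vport)
{
	return container_of((void *)vport, struct Scsi_Host, hostdata[0]);
}

int main(void)
{
	static struct lpfc_hba hba = { .brd_no = 0 };
	struct Scsi_Host  *shost;
	struct lpfc_vport *vport;

	/* the midlayer allocates the vport in the hostdata[] tail */
	shost = calloc(1, sizeof(*shost) + sizeof(struct lpfc_vport));
	if (!shost)
		return 1;

	vport = (struct lpfc_vport *)shost->hostdata;
	vport->phba = &hba;              /* all vports share the one HBA */

	/* the two-step lookup the converted sysfs handlers now perform */
	printf("board %d, shost back-pointer ok: %d\n",
	       vport->phba->brd_no, shost_from_vport(vport) == shost);

	free(shost);
	return 0;
}

In the patch, the sysfs handlers in lpfc_attr.c are converted to this
two-step lookup: shost->hostdata yields the lpfc_vport, and vport->phba
yields the lpfc_hba.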

And here is the patch:

diff --git a/drivers/scsi/lpfc/Makefile b/drivers/scsi/lpfc/Makefile
index b65db44..9971360 100644
--- a/drivers/scsi/lpfc/Makefile
+++ b/drivers/scsi/lpfc/Makefile
@@ -1,7 +1,7 @@
 #/*******************************************************************
 # * This file is part of the Emulex Linux Device Driver for         *
 # * Fibre Channel Host Bus Adapters.                                *
-# * Copyright (C) 2004-2005 Emulex.  All rights reserved.           *
+# * Copyright (C) 2004-2007 Emulex.  All rights reserved.           *
 # * EMULEX and SLI are trademarks of Emulex.                        *
 # * www.emulex.com                                                  *
 # *                                                                 *
@@ -24,8 +24,11 @@ ifneq ($(GCOV),)
   EXTRA_CFLAGS += -O0
 endif
 
+CFLAGS += -DNETLINK_FCTRANSPORT=19
+
 obj-$(CONFIG_SCSI_LPFC) := lpfc.o
 
 lpfc-objs := lpfc_mem.o lpfc_sli.o lpfc_ct.o lpfc_els.o lpfc_hbadisc.o	\
 	lpfc_init.o lpfc_mbox.o lpfc_nportdisc.o lpfc_scsi.o lpfc_attr.o \
-	lpfc_ioctl.o
+	lpfc_vport.o lpfc_debugfs.o lpfc_security.o lpfc_auth_access.o \
+	lpfc_auth.o lpfc_ioctl.o
diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
index f29baeb..9b1306a 100644
--- a/drivers/scsi/lpfc/lpfc.h
+++ b/drivers/scsi/lpfc/lpfc.h
@@ -19,8 +19,9 @@
  * included with this package.                                     *
  *******************************************************************/
 
+#include <scsi/scsi_host.h>
+
 struct lpfc_sli2_slim;
-struct lpfcdfc_host;
 
 #define LPFC_MAX_TARGET		256	/* max number of targets supported */
 #define LPFC_MAX_DISC_THREADS	64	/* max outstanding discovery els
@@ -32,6 +33,20 @@ struct lpfcdfc_host;
 #define LPFC_IOCB_LIST_CNT	2250	/* list of IOCBs for fast-path usage. */
 #define LPFC_Q_RAMP_UP_INTERVAL 120     /* lun q_depth ramp up interval */
 
+/*
+ * Following time intervals are used of adjusting SCSI device
+ * queue depths when there are driver resource error or Firmware
+ * resource error.
+ */
+#define QUEUE_RAMP_DOWN_INTERVAL	(1 * HZ)   /* 1 Second */
+#define QUEUE_RAMP_UP_INTERVAL		(300 * HZ) /* 5 minutes */
+
+/* Number of exchanges reserved for discovery to complete */
+#define LPFC_DISC_IOCB_BUFF_COUNT 20
+
+#define LPFC_HB_MBOX_INTERVAL   5	/* Heart beat interval in seconds. */
+#define LPFC_HB_MBOX_TIMEOUT    30	/* Heart beat timeout  in seconds. */
+
 /* Define macros for 64 bit support */
 #define putPaddrLow(addr)    ((uint32_t) (0xffffffff & (u64)(addr)))
 #define putPaddrHigh(addr)   ((uint32_t) (0xffffffff & (((u64)(addr))>>32)))
@@ -61,6 +76,12 @@ struct lpfc_dma_pool {
 	uint32_t    current_count;
 };
 
+struct hbq_dmabuf {
+	struct lpfc_dmabuf dbuf;
+	uint32_t size;
+	uint32_t tag;
+};
+
 /* Priority bit.  Set value to exceed low water mark in lpfc_mem. */
 #define MEM_PRI		0x100
 
@@ -90,6 +111,29 @@ typedef struct lpfc_vpd {
 		uint32_t sli2FwRev;
 		uint8_t sli2FwName[16];
 	} rev;
+	struct {
+#ifdef __BIG_ENDIAN_BITFIELD
+		uint32_t rsvd2  :24;  /* Reserved                             */
+		uint32_t cmv	: 1;  /* Configure Max VPIs                   */
+		uint32_t ccrp   : 1;  /* Config Command Ring Polling          */
+		uint32_t csah   : 1;  /* Configure Synchronous Abort Handling */
+		uint32_t chbs   : 1;  /* Cofigure Host Backing store          */
+		uint32_t cinb   : 1;  /* Enable Interrupt Notification Block  */
+		uint32_t cerbm	: 1;  /* Configure Enhanced Receive Buf Mgmt  */
+		uint32_t cmx	: 1;  /* Configure Max XRIs                   */
+		uint32_t cmr	: 1;  /* Configure Max RPIs                   */
+#else	/*  __LITTLE_ENDIAN */
+		uint32_t cmr	: 1;  /* Configure Max RPIs                   */
+		uint32_t cmx	: 1;  /* Configure Max XRIs                   */
+		uint32_t cerbm	: 1;  /* Configure Enhanced Receive Buf Mgmt  */
+		uint32_t cinb   : 1;  /* Enable Interrupt Notification Block  */
+		uint32_t chbs   : 1;  /* Cofigure Host Backing store          */
+		uint32_t csah   : 1;  /* Configure Synchronous Abort Handling */
+		uint32_t ccrp   : 1;  /* Config Command Ring Polling          */
+		uint32_t cmv	: 1;  /* Configure Max VPIs                   */
+		uint32_t rsvd2  :24;  /* Reserved                             */
+#endif
+	} sli3Feat;
 } lpfc_vpd_t;
 
 struct lpfc_scsi_buf;
@@ -122,6 +166,7 @@ struct lpfc_stats {
 	uint32_t elsRcvRPS;
 	uint32_t elsRcvRPL;
 	uint32_t elsXmitFLOGI;
+	uint32_t elsXmitFDISC;
 	uint32_t elsXmitPLOGI;
 	uint32_t elsXmitPRLI;
 	uint32_t elsXmitADISC;
@@ -163,52 +208,287 @@ struct lpfc_sysfs_mbox {
 	enum sysfs_mbox_state state;
 	size_t                offset;
 	struct lpfcMboxq *    mbox;
+	/* process id of the mgmt application */
+	pid_t		      pid;
+	struct list_head      list;
+};
+
+struct lpfc_hba;
+
+enum fc_vport_state {
+	FC_VPORT_UNKNOWN = 0,
+	FC_VPORT_ACTIVE = 2,
+	FC_VPORT_DISABLED = 3,
+	FC_VPORT_LINKDOWN = 250,
+	FC_VPORT_INITIALIZING = 1,
+	FC_VPORT_NO_FABRIC_SUPP = 251,
+	FC_VPORT_NO_FABRIC_RSCS = 252,
+	FC_VPORT_FABRIC_LOGOUT = 253,
+	FC_VPORT_FABRIC_REJ_WWN = 254,
+	FC_VPORT_FAILED = 255,
+};
+
+enum discovery_state {
+	LPFC_VPORT_UNKNOWN     =  0,    /* vport state is unknown */
+	LPFC_VPORT_FAILED      =  1,    /* vport has failed */
+	LPFC_LOCAL_CFG_LINK    =  6,    /* local NPORT Id configured */
+	LPFC_FLOGI             =  7,    /* FLOGI sent to Fabric */
+	LPFC_FDISC             =  8,    /* FDISC sent for vport */
+	LPFC_FABRIC_CFG_LINK   =  9,    /* Fabric assigned NPORT Id
+				         * configured */
+	LPFC_NS_REG            =  10,   /* Register with NameServer */
+	LPFC_NS_QRY            =  11,   /* Query NameServer for NPort ID list */
+	LPFC_BUILD_DISC_LIST   =  12,   /* Build ADISC and PLOGI lists for
+				         * device authentication / discovery */
+	LPFC_DISC_AUTH         =  13,   /* Processing ADISC list */
+	LPFC_VPORT_READY       =  32,
+};
+
+enum hba_state {
+	LPFC_LINK_UNKNOWN    =   0,   /* HBA state is unknown */
+	LPFC_WARM_START      =   1,   /* HBA state after selective reset */
+	LPFC_INIT_START      =   2,   /* Initial state after board reset */
+	LPFC_INIT_MBX_CMDS   =   3,   /* Initialize HBA with mbox commands */
+	LPFC_LINK_DOWN       =   4,   /* HBA initialized, link is down */
+	LPFC_LINK_UP         =   5,   /* Link is up  - issue READ_LA */
+	LPFC_CLEAR_LA        =   6,   /* authentication cmplt - issue
+				       * CLEAR_LA */
+	LPFC_HBA_READY       =  32,
+	LPFC_HBA_ERROR       =  -1
+};
+
+enum auth_state {
+	LPFC_AUTH_UNKNOWN		=  0,
+	LPFC_AUTH_SUCCESS		=  1,
+	LPFC_AUTH_FAIL			=  2,
+};
+enum auth_msg_state {
+	LPFC_AUTH_NONE			=  0,
+	LPFC_AUTH_REJECT		=  1,	/* Sent a Reject */
+	LPFC_AUTH_NEGOTIATE		=  2,	/* Auth Negotiate */
+	LPFC_DHCHAP_CHALLENGE		=  3,	/* Challenge */
+	LPFC_DHCHAP_REPLY		=  4,	/* Reply */
+	LPFC_DHCHAP_SUCCESS_REPLY	=  5,	/* Success with Reply */
+	LPFC_DHCHAP_SUCCESS		=  6,	/* Success */
+	LPFC_AUTH_DONE			=  7,
+};
+
+struct lpfc_auth {
+	uint8_t auth_mode;
+	uint8_t bidirectional;
+	uint8_t hash_priority[4];
+	uint32_t hash_len;
+	uint8_t dh_group_priority[8];
+	uint32_t dh_group_len;
+	uint32_t reauth_interval;
+
+	uint8_t security_active;
+	uint8_t auth_state;
+	uint8_t auth_msg_state;
+	uint32_t trans_id;              /* current transaction id. Can be set
+					   by incomming transactions as well */
+	uint32_t group_id;
+	uint32_t hash_id;
+
+	uint8_t *challenge;
+	uint32_t challenge_len;
+	uint8_t *dh_pub_key;
+	uint32_t dh_pub_key_len;
+};
+
+struct lpfc_vport {
+	struct list_head listentry;
+	struct lpfc_hba *phba;
+	uint8_t port_type;
+#define LPFC_PHYSICAL_PORT 1
+#define LPFC_NPIV_PORT  2
+#define LPFC_FABRIC_PORT 3
+	enum discovery_state port_state;
+
+	uint16_t vpi;
+
+	uint32_t fc_flag;	/* FC flags */
+/* Several of these flags are HBA centric and should be moved to
+ * phba->link_flag (e.g. FC_PTP, FC_PUBLIC_LOOP)
+ */
+#define FC_PT2PT                0x1	 /* pt2pt with no fabric */
+#define FC_PT2PT_PLOGI          0x2	 /* pt2pt initiate PLOGI */
+#define FC_DISC_TMO             0x4	 /* Discovery timer running */
+#define FC_PUBLIC_LOOP          0x8	 /* Public loop */
+#define FC_LBIT                 0x10	 /* LOGIN bit in loopinit set */
+#define FC_RSCN_MODE            0x20	 /* RSCN cmd rcv'ed */
+#define FC_NLP_MORE             0x40	 /* More node to process in node tbl */
+#define FC_OFFLINE_MODE         0x80	 /* Interface is offline for diag */
+#define FC_FABRIC               0x100	 /* We are fabric attached */
+#define FC_ESTABLISH_LINK       0x200	 /* Reestablish Link */
+#define FC_RSCN_DISCOVERY       0x400	 /* Auth all devices after RSCN */
+#define FC_SCSI_SCAN_TMO        0x4000	 /* scsi scan timer running */
+#define FC_ABORT_DISCOVERY      0x8000	 /* we want to abort discovery */
+#define FC_NDISC_ACTIVE         0x10000	 /* NPort discovery active */
+#define FC_BYPASSED_MODE        0x20000	 /* NPort is in bypassed mode */
+#define FC_VPORT_NEEDS_REG_VPI	0x80000  /* Needs to have its vpi registered */
+#define FC_RSCN_DEFERRED	0x100000 /* A deferred RSCN being processed */
+
+	uint32_t ct_flags;
+#define FC_CT_RFF_ID		0x1	 /* RFF_ID accepted by switch */
+#define FC_CT_RNN_ID		0x2	 /* RNN_ID accepted by switch */
+#define FC_CT_RSNN_NN		0x4	 /* RSNN_NN accepted by switch */
+#define FC_CT_RSPN_ID		0x8	 /* RSPN_ID accepted by switch */
+#define FC_CT_RFT_ID		0x10	 /* RFT_ID accepted by switch */
+
+	struct list_head fc_nodes;
+
+	/* Keep counters for the number of entries in each list. */
+	uint16_t fc_plogi_cnt;
+	uint16_t fc_adisc_cnt;
+	uint16_t fc_reglogin_cnt;
+	uint16_t fc_prli_cnt;
+	uint16_t fc_unmap_cnt;
+	uint16_t fc_map_cnt;
+	uint16_t fc_npr_cnt;
+	uint16_t fc_unused_cnt;
+	struct serv_parm fc_sparam;	/* buffer for our service parameters */
+
+	uint32_t fc_myDID;	/* fibre channel S_ID */
+	uint32_t fc_prevDID;	/* previous fibre channel S_ID */
+
+	int32_t stopped;   /* HBA has not been restarted since last ERATT */
+	uint8_t fc_linkspeed;	/* Link speed after last READ_LA */
+
+	uint32_t num_disc_nodes;	/*in addition to hba_state */
+
+	uint32_t fc_nlp_cnt;	/* outstanding NODELIST requests */
+	uint32_t fc_rscn_id_cnt;	/* count of RSCNs payloads in list */
+	struct lpfc_dmabuf *fc_rscn_id_list[FC_MAX_HOLD_RSCN];
+	struct lpfc_name fc_nodename;	/* fc nodename */
+	struct lpfc_name fc_portname;	/* fc portname */
+
+	struct lpfc_work_evt disc_timeout_evt;
+
+	struct timer_list fc_disctmo;	/* Discovery rescue timer */
+	uint8_t fc_ns_retry;	/* retries for fabric nameserver */
+	uint32_t fc_prli_sent;	/* cntr for outstanding PRLIs */
+
+	spinlock_t work_port_lock;
+	uint32_t work_port_events; /* Timeout to be handled  */
+#define WORKER_DISC_TMO                0x1	/* vport: Discovery timeout */
+#define WORKER_ELS_TMO                 0x2	/* vport: ELS timeout */
+#define WORKER_FDMI_TMO                0x4	/* vport: FDMI timeout */
+
+#define WORKER_MBOX_TMO                0x100	/* hba: MBOX timeout */
+#define WORKER_HB_TMO                  0x200	/* hba: Heart beat timeout */
+#define WORKER_FABRIC_BLOCK_TMO        0x400	/* hba: fabric block timout */
+#define WORKER_RAMP_DOWN_QUEUE         0x800	/* hba: Decrease Q depth */
+#define WORKER_RAMP_UP_QUEUE           0x1000	/* hba: Increase Q depth */
+
+	struct timer_list fc_fdmitmo;
+	struct timer_list els_tmofunc;
+
+	int unreg_vpi_cmpl;
+
+	uint8_t load_flag;
+#define FC_LOADING		0x1	/* HBA in process of loading drvr */
+#define FC_UNLOADING		0x2	/* HBA in process of unloading drvr */
+	char  *vname;		        /* Application assigned name */
+
+	/* Fields used for accessing auth service */
+	struct lpfc_auth auth;
+	uint32_t sc_tran_id;
+	struct list_head sc_response_wait_queue;
+	struct list_head sc_users;
+	struct work_struct sc_online_work;
+	struct work_struct sc_offline_work;
+
+	/* Vport Config Parameters */
+	uint32_t cfg_scan_down;
+	uint32_t cfg_lun_queue_depth;
+	uint32_t cfg_nodev_tmo;
+	uint32_t cfg_devloss_tmo;
+	uint32_t cfg_restrict_login;
+	uint32_t cfg_peer_port_login;
+	uint32_t cfg_fcp_class;
+	uint32_t cfg_use_adisc;
+	uint32_t cfg_fdmi_on;
+	uint32_t cfg_discovery_threads;
+	uint32_t cfg_log_verbose;
+	uint32_t cfg_max_luns;
+	uint32_t cfg_enable_da_id;
+	uint32_t cfg_enable_auth;
+
+	uint32_t dev_loss_tmo_changed;
+
+#ifdef CONFIG_LPFC_DEBUG_FS
+	struct dentry *debug_disc_trc;
+	struct dentry *debug_nodelist;
+	struct dentry *vport_debugfs_root;
+	struct lpfc_debugfs_trc *disc_trc;
+	atomic_t disc_trc_cnt;
+#endif
+};
+
+struct hbq_s {
+	uint16_t entry_count;	  /* Current number of HBQ slots */
+	uint16_t buffer_count;	  /* Current number of buffers posted */
+	uint32_t next_hbqPutIdx;  /* Index to next HBQ slot to use */
+	uint32_t hbqPutIdx;	  /* HBQ slot to use */
+	uint32_t local_hbqGetIdx; /* Local copy of Get index from Port */
+	void    *hbq_virt;	  /* Virtual ptr to this hbq */
+	struct list_head hbq_buffer_list;  /* buffers assigned to this HBQ */
+				  /* Callback for HBQ buffer allocation */
+	struct hbq_dmabuf *(*hbq_alloc_buffer) (struct lpfc_hba *);
+				  /* Callback for HBQ buffer free */
+	void               (*hbq_free_buffer) (struct lpfc_hba *,
+					       struct hbq_dmabuf *);
+};
+
+#define LPFC_MAX_HBQS  4
+/* this matches the position in the lpfc_hbq_defs array */
+#define LPFC_ELS_HBQ	0
+#define LPFC_EXTRA_HBQ	1
+
+enum hba_temp_state {
+	HBA_NORMAL_TEMP,
+	HBA_OVER_TEMP
 };
 
 struct lpfc_hba {
 	struct lpfc_sli sli;
+	uint32_t sli_rev;		/* SLI2 or SLI3 */
+	uint32_t sli3_options;		/* Mask of enabled SLI3 options */
+#define LPFC_SLI3_ENABLED	 0x01
+#define LPFC_SLI3_HBQ_ENABLED	 0x02
+#define LPFC_SLI3_NPIV_ENABLED	 0x04
+#define LPFC_SLI3_VPORT_TEARDOWN 0x08
+	uint32_t iocb_cmd_size;
+	uint32_t iocb_rsp_size;
+
+	enum hba_state link_state;
+	uint32_t link_flag;	/* link state flags */
+#define LS_LOOPBACK_MODE      0x1	/* NPort is in Loopback mode */
+					/* This flag is set while issuing */
+					/* INIT_LINK mailbox command */
+#define LS_NPIV_FAB_SUPPORTED 0x2	/* Fabric supports NPIV */
+#define LS_IGNORE_ERATT       0x3	/* intr handler should ignore ERATT */
+
 	struct lpfc_sli2_slim *slim2p;
+	struct lpfc_dmabuf hbqslimp;
+
 	dma_addr_t slim2p_mapping;
+
 	uint16_t pci_cfg_value;
 
-	int32_t hba_state;
-
-#define LPFC_STATE_UNKNOWN        0    /* HBA state is unknown */
-#define LPFC_WARM_START           1    /* HBA state after selective reset */
-#define LPFC_INIT_START           2    /* Initial state after board reset */
-#define LPFC_INIT_MBX_CMDS        3    /* Initialize HBA with mbox commands */
-#define LPFC_LINK_DOWN            4    /* HBA initialized, link is down */
-#define LPFC_LINK_UP              5    /* Link is up  - issue READ_LA */
-#define LPFC_LOCAL_CFG_LINK       6    /* local NPORT Id configured */
-#define LPFC_FLOGI                7    /* FLOGI sent to Fabric */
-#define LPFC_FABRIC_CFG_LINK      8    /* Fabric assigned NPORT Id
-					   configured */
-#define LPFC_NS_REG               9	/* Register with NameServer */
-#define LPFC_NS_QRY               10	/* Query NameServer for NPort ID list */
-#define LPFC_BUILD_DISC_LIST      11	/* Build ADISC and PLOGI lists for
-					 * device authentication / discovery */
-#define LPFC_DISC_AUTH            12	/* Processing ADISC list */
-#define LPFC_CLEAR_LA             13	/* authentication cmplt - issue
-					   CLEAR_LA */
-#define LPFC_HBA_READY            32
-#define LPFC_HBA_ERROR            -1
+	uint8_t work_found;
+#define LPFC_MAX_WORKER_ITERATION  4
 
-	int32_t stopped;   /* HBA has not been restarted since last ERATT */
 	uint8_t fc_linkspeed;	/* Link speed after last READ_LA */
 
 	uint32_t fc_eventTag;	/* event tag for link attention */
-	uint32_t fc_prli_sent;	/* cntr for outstanding PRLIs */
 
-	uint32_t num_disc_nodes;	/*in addition to hba_state */
 
 	struct timer_list fc_estabtmo;	/* link establishment timer */
-	struct timer_list fc_disctmo;	/* Discovery rescue timer */
-	struct timer_list fc_fdmitmo;	/* fdmi timer */
 	/* These fields used to be binfo */
-	struct lpfc_name fc_nodename;	/* fc nodename */
-	struct lpfc_name fc_portname;	/* fc portname */
 	uint32_t fc_pref_DID;	/* preferred D_ID */
-	uint8_t fc_pref_ALPA;	/* preferred AL_PA */
+	uint8_t  fc_pref_ALPA;	/* preferred AL_PA */
 	uint32_t fc_edtov;	/* E_D_TOV timer value */
 	uint32_t fc_arbtov;	/* ARB_TOV timer value */
 	uint32_t fc_ratov;	/* R_A_TOV timer value */
@@ -216,120 +496,59 @@ struct lpfc_hba {
 	uint32_t fc_altov;	/* AL_TOV timer value */
 	uint32_t fc_crtov;	/* C_R_TOV timer value */
 	uint32_t fc_citov;	/* C_I_TOV timer value */
-	uint32_t fc_myDID;	/* fibre channel S_ID */
-	uint32_t fc_prevDID;	/* previous fibre channel S_ID */
 
-	struct serv_parm fc_sparam;	/* buffer for our service parameters */
 	struct serv_parm fc_fabparam;	/* fabric service parameters buffer */
 	uint8_t alpa_map[128];	/* AL_PA map from READ_LA */
 
-	uint8_t fc_ns_retry;	/* retries for fabric nameserver */
-	uint32_t fc_nlp_cnt;	/* outstanding NODELIST requests */
-	uint32_t fc_rscn_id_cnt;	/* count of RSCNs payloads in list */
-	struct lpfc_dmabuf *fc_rscn_id_list[FC_MAX_HOLD_RSCN];
 	uint32_t lmt;
-	uint32_t fc_flag;	/* FC flags */
-#define FC_PT2PT                0x1	/* pt2pt with no fabric */
-#define FC_PT2PT_PLOGI          0x2	/* pt2pt initiate PLOGI */
-#define FC_DISC_TMO             0x4	/* Discovery timer running */
-#define FC_PUBLIC_LOOP          0x8	/* Public loop */
-#define FC_LBIT                 0x10	/* LOGIN bit in loopinit set */
-#define FC_RSCN_MODE            0x20	/* RSCN cmd rcv'ed */
-#define FC_NLP_MORE             0x40	/* More node to process in node tbl */
-#define FC_OFFLINE_MODE         0x80	/* Interface is offline for diag */
-#define FC_FABRIC               0x100	/* We are fabric attached */
-#define FC_ESTABLISH_LINK       0x200	/* Reestablish Link */
-#define FC_RSCN_DISCOVERY       0x400	/* Authenticate all devices after RSCN*/
-#define FC_BLOCK_MGMT_IO        0x800   /* Don't allow mgmt mbx or iocb cmds */
-#define FC_LOADING		0x1000	/* HBA in process of loading drvr */
-#define FC_UNLOADING		0x2000	/* HBA in process of unloading drvr */
-#define FC_SCSI_SCAN_TMO        0x4000	/* scsi scan timer running */
-#define FC_ABORT_DISCOVERY      0x8000	/* we want to abort discovery */
-#define FC_NDISC_ACTIVE         0x10000	/* NPort discovery active */
-#define FC_BYPASSED_MODE        0x20000	/* NPort is in bypassed mode */
-#define FC_LOOPBACK_MODE        0x40000	/* NPort is in Loopback mode */
-					/* This flag is set while issuing */
-					/* INIT_LINK mailbox command */
-#define FC_IGNORE_ERATT         0x80000	/* intr handler should ignore ERATT */
 
 	uint32_t fc_topology;	/* link topology, from LINK INIT */
 
 	struct lpfc_stats fc_stat;
 
-	/* These are the head/tail pointers for the bind, plogi, adisc, unmap,
-	 *  and map lists.  Their counters are immediately following.
-	 */
-	struct list_head fc_plogi_list;
-	struct list_head fc_adisc_list;
-	struct list_head fc_reglogin_list;
-	struct list_head fc_prli_list;
-	struct list_head fc_nlpunmap_list;
-	struct list_head fc_nlpmap_list;
-	struct list_head fc_npr_list;
-	struct list_head fc_unused_list;
-
-	/* Keep counters for the number of entries in each list. */
-	uint16_t fc_plogi_cnt;
-	uint16_t fc_adisc_cnt;
-	uint16_t fc_reglogin_cnt;
-	uint16_t fc_prli_cnt;
-	uint16_t fc_unmap_cnt;
-	uint16_t fc_map_cnt;
-	uint16_t fc_npr_cnt;
-	uint16_t fc_unused_cnt;
 	struct lpfc_nodelist fc_fcpnodev; /* nodelist entry for no device */
 	uint32_t nport_event_cnt;	/* timestamp for nlplist entry */
 
-	uint32_t wwnn[2];
+	uint8_t  wwnn[8];
+	uint8_t  wwpn[8];
 	uint32_t RandomData[7];
 
-	uint32_t cfg_log_verbose;
-	uint32_t cfg_lun_queue_depth;
-	uint32_t cfg_nodev_tmo;
-	uint32_t cfg_devloss_tmo;
-	uint32_t cfg_hba_queue_depth;
-	uint32_t cfg_fcp_class;
-	uint32_t cfg_use_adisc;
+	/* HBA Config Parameters */
 	uint32_t cfg_ack0;
+	uint32_t cfg_enable_npiv;
 	uint32_t cfg_topology;
-	uint32_t cfg_scan_down;
 	uint32_t cfg_link_speed;
 	uint32_t cfg_cr_delay;
 	uint32_t cfg_cr_count;
 	uint32_t cfg_multi_ring_support;
 	uint32_t cfg_multi_ring_rctl;
 	uint32_t cfg_multi_ring_type;
-	uint32_t cfg_fdmi_on;
-	uint32_t cfg_discovery_threads;
-	uint32_t cfg_max_luns;
 	uint32_t cfg_poll;
 	uint32_t cfg_poll_tmo;
 	uint32_t cfg_use_msi;
+	uint32_t cfg_dev_loss_initiator;
 	uint32_t cfg_sg_seg_cnt;
 	uint32_t cfg_sg_dma_buf_size;
 	uint64_t cfg_soft_wwnn;
 	uint64_t cfg_soft_wwpn;
+	uint32_t cfg_hba_queue_depth;
 
-	uint32_t dev_loss_tmo_changed;
 
 	lpfc_vpd_t vpd;		/* vital product data */
 
-	struct Scsi_Host *host;
 	struct pci_dev *pcidev;
 	struct list_head      work_list;
 	uint32_t              work_ha;      /* Host Attention Bits for WT */
 	uint32_t              work_ha_mask; /* HA Bits owned by WT        */
 	uint32_t              work_hs;      /* HS stored in case of ERRAT */
 	uint32_t              work_status[2]; /* Extra status from SLIM */
-	uint32_t              work_hba_events; /* Timeout to be handled  */
-#define WORKER_DISC_TMO                0x1	/* Discovery timeout */
-#define WORKER_ELS_TMO                 0x2	/* ELS timeout */
-#define WORKER_MBOX_TMO                0x4	/* MBOX timeout */
-#define WORKER_FDMI_TMO                0x8	/* FDMI timeout */
 
 	wait_queue_head_t    *work_wait;
 	struct task_struct   *worker_thread;
 
+	uint32_t hbq_count;	        /* Count of configured HBQs */
+	struct hbq_s hbqs[LPFC_MAX_HBQS]; /* local copy of hbq indicies  */
+
 	unsigned long pci_bar0_map;     /* Physical address for PCI BAR0 */
 	unsigned long pci_bar2_map;     /* Physical address for PCI BAR2 */
 	void __iomem *slim_memmap_p;	/* Kernel memory mapped address for
@@ -344,6 +563,10 @@ struct lpfc_hba {
 					   reg */
 	void __iomem *HCregaddr;	/* virtual address for host ctl reg */
 
+	struct lpfc_hgp __iomem *host_gp; /* Host side get/put pointers */
+	uint32_t __iomem  *hbq_put;     /* Address in SLIM to HBQ put ptrs */
+	uint32_t          *hbq_get;     /* Host mem address of HBQ get ptrs */
+
 	int brd_no;			/* FC board number */
 
 	char SerialNumber[32];		/* adapter Serial Number */
@@ -363,7 +586,6 @@ struct lpfc_hba {
 	uint8_t soft_wwn_enable;
 
 	struct timer_list fcp_poll_timer;
-	struct timer_list els_tmofunc;
 
 	/*
 	 * stat  counters
@@ -372,7 +594,8 @@ struct lpfc_hba {
 	uint64_t fc4OutputRequests;
 	uint64_t fc4ControlRequests;
 
-	struct lpfc_sysfs_mbox sysfs_mbox;
+	/* List of mailbox commands issued through sysfs */
+	struct list_head sysfs_mbox_list;
 
 	/* fastpath list. */
 	spinlock_t scsi_buf_list_lock;
@@ -380,36 +603,89 @@ struct lpfc_hba {
 	uint32_t total_scsi_bufs;
 	struct list_head lpfc_iocb_list;
 	uint32_t total_iocbq_bufs;
+	spinlock_t hbalock;
 
 	/* pci_mem_pools */
 	struct pci_pool *lpfc_scsi_dma_buf_pool;
 	struct pci_pool *lpfc_mbuf_pool;
+	struct pci_pool *lpfc_hbq_pool;
 	struct lpfc_dma_pool lpfc_mbuf_safety_pool;
 
 	mempool_t *mbox_mem_pool;
 	mempool_t *nlp_mem_pool;
-	struct list_head freebufList;
-	struct list_head ctrspbuflist;
-	struct list_head rnidrspbuflist;
 
 	struct fc_host_statistics link_stats;
+	uint8_t using_msi;
 	struct lpfcdfc_host *dfc_host;
-	unsigned long pci_max_read;
+
+	struct list_head port_list;
+	struct lpfc_vport *pport;	/* physical lpfc_vport pointer */
+	uint16_t max_vpi;		/* Maximum virtual nports */
+#define LPFC_MAX_VPI 100		/* Max number of VPI supported */
+#define LPFC_MAX_VPORTS (LPFC_MAX_VPI+1)/* Max number of VPorts supported */
+	unsigned long *vpi_bmask;	/* vpi allocation table */
+
+	/* Data structure used by fabric iocb scheduler */
+	struct list_head fabric_iocb_list;
+	atomic_t fabric_iocb_count;
+	struct timer_list fabric_block_timer;
+	unsigned long bit_flags;
+#define	FABRIC_COMANDS_BLOCKED	0
+	atomic_t num_rsrc_err;
+	atomic_t num_cmd_success;
+	unsigned long last_rsrc_error_time;
+	unsigned long last_ramp_down_time;
+	unsigned long last_ramp_up_time;
+#ifdef CONFIG_LPFC_DEBUG_FS
+	struct dentry *hba_debugfs_root;
+	atomic_t debugfs_vport_count;
+	struct dentry *debug_hbqinfo;
+	struct dentry *debug_dumpslim;
+	struct dentry *debug_slow_ring_trc;
+	struct lpfc_debugfs_trc *slow_ring_trc;
+	atomic_t slow_ring_trc_cnt;
+#endif
+
+	uint8_t temp_sensor_support;
+	/* Fields used for heart beat. */
+	unsigned long last_completion_time;
+	struct timer_list hb_tmofunc;
+	uint8_t hb_outstanding;
+	enum hba_temp_state over_temp_state;
 };
 
+static inline struct Scsi_Host *
+lpfc_shost_from_vport(struct lpfc_vport *vport)
+{
+	return container_of((void *) vport, struct Scsi_Host, hostdata[0]);
+}
+
 static inline void
-lpfc_set_loopback_flag(struct lpfc_hba *phba) {
+lpfc_set_loopback_flag(struct lpfc_hba *phba)
+{
 	if (phba->cfg_topology == FLAGS_LOCAL_LB)
-		phba->fc_flag |= FC_LOOPBACK_MODE;
+		phba->link_flag |= LS_LOOPBACK_MODE;
 	else
-		phba->fc_flag &= ~FC_LOOPBACK_MODE;
+		phba->link_flag &= ~LS_LOOPBACK_MODE;
 }
 
-struct rnidrsp {
-	void *buf;
-	uint32_t uniqueid;
-	struct list_head list;
+static inline int
+lpfc_is_link_up(struct lpfc_hba *phba)
+{
+	return  phba->link_state == LPFC_LINK_UP ||
+		phba->link_state == LPFC_CLEAR_LA ||
+		phba->link_state == LPFC_HBA_READY;
+}
+
+#define FC_REG_DUMP_EVENT		0x10	/* Register for Dump events */
+#define FC_REG_TEMPERATURE_EVENT	0x20    /* Register for temperature
+						   event */
+
+struct temp_event {
+	uint32_t event_type;
+	uint32_t event_code;
 	uint32_t data;
 };
-
-#define FC_REG_DUMP_EVENT	0x10	/* Register for Dump events */
+#define LPFC_CRIT_TEMP		0x1
+#define LPFC_THRESHOLD_TEMP	0x2
+#define LPFC_NORMAL_TEMP	0x3
diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index b6d31d9..972705f 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -39,11 +39,17 @@
 #include "lpfc_version.h"
 #include "lpfc_compat.h"
 #include "lpfc_crtn.h"
+#include "lpfc_vport.h"
+#include "lpfc_auth_access.h"
 
 #define LPFC_DEF_DEVLOSS_TMO 30
 #define LPFC_MIN_DEVLOSS_TMO 1
 #define LPFC_MAX_DEVLOSS_TMO 255
 
+#define LPFC_MAX_LINK_SPEED 8
+#define LPFC_LINK_SPEED_BITMAP 0x00000117
+#define LPFC_LINK_SPEED_STRING "0, 1, 2, 4, 8"
+
 static void
 lpfc_jedec_to_ascii(int incr, char hdw[])
 {
@@ -76,116 +82,162 @@ static ssize_t
 lpfc_info_show(struct class_device *cdev, char *buf)
 {
 	struct Scsi_Host *host = class_to_shost(cdev);
+
 	return snprintf(buf, PAGE_SIZE, "%s\n",lpfc_info(host));
 }
 
 static ssize_t
 lpfc_serialnum_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+
 	return snprintf(buf, PAGE_SIZE, "%s\n",phba->SerialNumber);
 }
 
 static ssize_t
+lpfc_temp_sensor_show(struct class_device *cdev, char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	return snprintf(buf, PAGE_SIZE, "%d\n",phba->temp_sensor_support);
+}
+
+static ssize_t
 lpfc_modeldesc_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+
 	return snprintf(buf, PAGE_SIZE, "%s\n",phba->ModelDesc);
 }
 
 static ssize_t
 lpfc_modelname_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+
 	return snprintf(buf, PAGE_SIZE, "%s\n",phba->ModelName);
 }
 
 static ssize_t
 lpfc_programtype_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+
 	return snprintf(buf, PAGE_SIZE, "%s\n",phba->ProgramType);
 }
 
 static ssize_t
-lpfc_portnum_show(struct class_device *cdev, char *buf)
+lpfc_vportnum_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+
 	return snprintf(buf, PAGE_SIZE, "%s\n",phba->Port);
 }
 
 static ssize_t
 lpfc_fwrev_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	char fwrev[32];
+
 	lpfc_decode_firmware_rev(phba, fwrev, 1);
-	return snprintf(buf, PAGE_SIZE, "%s\n",fwrev);
+	return snprintf(buf, PAGE_SIZE, "%s, sli-%d\n", fwrev, phba->sli_rev);
 }
 
 static ssize_t
 lpfc_hdw_show(struct class_device *cdev, char *buf)
 {
 	char hdw[9];
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	lpfc_vpd_t *vp = &phba->vpd;
+
 	lpfc_jedec_to_ascii(vp->rev.biuRev, hdw);
 	return snprintf(buf, PAGE_SIZE, "%s\n", hdw);
 }
 static ssize_t
 lpfc_option_rom_version_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+
 	return snprintf(buf, PAGE_SIZE, "%s\n", phba->OptionROMVersion);
 }
 static ssize_t
 lpfc_state_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
-	int len = 0;
-	switch (phba->hba_state) {
-	case LPFC_STATE_UNKNOWN:
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	int  len = 0;
+
+	switch (phba->link_state) {
+	case LPFC_LINK_UNKNOWN:
 	case LPFC_WARM_START:
 	case LPFC_INIT_START:
 	case LPFC_INIT_MBX_CMDS:
 	case LPFC_LINK_DOWN:
+	case LPFC_HBA_ERROR:
 		len += snprintf(buf + len, PAGE_SIZE-len, "Link Down\n");
 		break;
 	case LPFC_LINK_UP:
-	case LPFC_LOCAL_CFG_LINK:
-		len += snprintf(buf + len, PAGE_SIZE-len, "Link Up\n");
-		break;
-	case LPFC_FLOGI:
-	case LPFC_FABRIC_CFG_LINK:
-	case LPFC_NS_REG:
-	case LPFC_NS_QRY:
-	case LPFC_BUILD_DISC_LIST:
-	case LPFC_DISC_AUTH:
 	case LPFC_CLEAR_LA:
-		len += snprintf(buf + len, PAGE_SIZE-len,
-				"Link Up - Discovery\n");
-		break;
 	case LPFC_HBA_READY:
-		len += snprintf(buf + len, PAGE_SIZE-len,
-				"Link Up - Ready:\n");
+		len += snprintf(buf + len, PAGE_SIZE-len, "Link Up - ");
+
+		switch (vport->port_state) {
+		case LPFC_LOCAL_CFG_LINK:
+			len += snprintf(buf + len, PAGE_SIZE-len,
+					"Configuring Link\n");
+			break;
+		case LPFC_FDISC:
+		case LPFC_FLOGI:
+		case LPFC_FABRIC_CFG_LINK:
+		case LPFC_NS_REG:
+		case LPFC_NS_QRY:
+		case LPFC_BUILD_DISC_LIST:
+		case LPFC_DISC_AUTH:
+			len += snprintf(buf + len, PAGE_SIZE - len,
+					"Discovery\n");
+			break;
+		case LPFC_VPORT_READY:
+			len += snprintf(buf + len, PAGE_SIZE - len, "Ready\n");
+			break;
+
+		case LPFC_VPORT_FAILED:
+			len += snprintf(buf + len, PAGE_SIZE - len, "Failed\n");
+			break;
+
+		case LPFC_VPORT_UNKNOWN:
+			len += snprintf(buf + len, PAGE_SIZE - len,
+					"Unknown\n");
+			break;
+		}
+
 		if (phba->fc_topology == TOPOLOGY_LOOP) {
-			if (phba->fc_flag & FC_PUBLIC_LOOP)
+			if (vport->fc_flag & FC_PUBLIC_LOOP)
 				len += snprintf(buf + len, PAGE_SIZE-len,
 						"   Public Loop\n");
 			else
 				len += snprintf(buf + len, PAGE_SIZE-len,
 						"   Private Loop\n");
 		} else {
-			if (phba->fc_flag & FC_FABRIC)
+			if (vport->fc_flag & FC_FABRIC)
 				len += snprintf(buf + len, PAGE_SIZE-len,
 						"   Fabric\n");
 			else
@@ -193,29 +245,32 @@ lpfc_state_show(struct class_device *cdev, char *buf)
 						"   Point-2-Point\n");
 		}
 	}
+
 	return len;
 }
 
 static ssize_t
 lpfc_num_discovered_ports_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->fc_map_cnt +
-							phba->fc_unmap_cnt);
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+
+	return snprintf(buf, PAGE_SIZE, "%d\n",
+			vport->fc_map_cnt + vport->fc_unmap_cnt);
 }
 
 
 static int
-lpfc_issue_lip(struct Scsi_Host *host)
+lpfc_issue_lip(struct Scsi_Host *shost)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba *) host->hostdata;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	LPFC_MBOXQ_t *pmboxq;
 	int mbxstatus = MBXERR_ERROR;
 
-	if ((phba->fc_flag & FC_OFFLINE_MODE) ||
-	    (phba->fc_flag & FC_BLOCK_MGMT_IO) ||
-	    (phba->hba_state != LPFC_HBA_READY))
+	if ((vport->fc_flag & FC_OFFLINE_MODE) ||
+	    (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO) ||
+	    (vport->port_state != LPFC_VPORT_READY))
 		return -EPERM;
 
 	pmboxq = mempool_alloc(phba->mbox_mem_pool,GFP_KERNEL);
@@ -238,9 +293,7 @@ lpfc_issue_lip(struct Scsi_Host *host)
 	}
 
 	lpfc_set_loopback_flag(phba);
-	if (mbxstatus == MBX_TIMEOUT)
-		pmboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-	else
+	if (mbxstatus != MBX_TIMEOUT)
 		mempool_free(pmboxq, phba->mbox_mem_pool);
 
 	if (mbxstatus == MBXERR_ERROR)
@@ -277,9 +330,8 @@ lpfc_do_offline(struct lpfc_hba *phba, uint32_t type)
 			if (cnt++ > 3000) {
 				lpfc_printf_log(phba,
 					KERN_WARNING, LOG_INIT,
-					"%d:0466 Outstanding IO when "
-					"bringing Adapter offline\n",
-					phba->brd_no);
+					"0466 Outstanding IO when "
+					"bringing Adapter offline\n");
 				break;
 			}
 		}
@@ -320,8 +372,10 @@ lpfc_selective_reset(struct lpfc_hba *phba)
 static ssize_t
 lpfc_issue_reset(struct class_device *cdev, const char *buf, size_t count)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+
 	int status = -EINVAL;
 
 	if (strncmp(buf, "selective", sizeof("selective") - 1) == 0)
@@ -336,23 +390,26 @@ lpfc_issue_reset(struct class_device *cdev, const char *buf, size_t count)
 static ssize_t
 lpfc_nport_evt_cnt_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+
 	return snprintf(buf, PAGE_SIZE, "%d\n", phba->nport_event_cnt);
 }
 
 static ssize_t
 lpfc_board_mode_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	char  * state;
 
-	if (phba->hba_state == LPFC_HBA_ERROR)
+	if (phba->link_state == LPFC_HBA_ERROR)
 		state = "error";
-	else if (phba->hba_state == LPFC_WARM_START)
+	else if (phba->link_state == LPFC_WARM_START)
 		state = "warm start";
-	else if (phba->hba_state == LPFC_INIT_START)
+	else if (phba->link_state == LPFC_INIT_START)
 		state = "offline";
 	else
 		state = "online";
@@ -363,8 +420,9 @@ lpfc_board_mode_show(struct class_device *cdev, char *buf)
 static ssize_t
 lpfc_board_mode_store(struct class_device *cdev, const char *buf, size_t count)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	struct completion online_compl;
 	int status=0;
 
@@ -380,6 +438,11 @@ lpfc_board_mode_store(struct class_device *cdev, const char *buf, size_t count)
 		status = lpfc_do_offline(phba, LPFC_EVT_WARM_START);
 	else if (strncmp(buf, "error", sizeof("error") - 1) == 0)
 		status = lpfc_do_offline(phba, LPFC_EVT_KILL);
+	else if (strncmp(buf, "remove", sizeof("remove") - 1) == 0 &&
+		 vport != phba->pport) {
+		status = lpfc_vport_delete(shost);
+		complete(&online_compl);
+	}
 	else
 		return -EINVAL;
 
@@ -389,11 +452,166 @@ lpfc_board_mode_store(struct class_device *cdev, const char *buf, size_t count)
 		return -EIO;
 }
 
+static int
+lpfc_get_hba_info(struct lpfc_hba *phba,
+		  uint32_t *mxri, uint32_t *axri,
+		  uint32_t *mrpi, uint32_t *arpi,
+		  uint32_t *mvpi, uint32_t *avpi)
+{
+	struct lpfc_sli   *psli = &phba->sli;
+	LPFC_MBOXQ_t *pmboxq;
+	MAILBOX_t *pmb;
+	int rc = 0;
+
+	/*
+	 * prevent udev from issuing mailbox commands until the port is
+	 * configured.
+	 */
+	if (phba->link_state < LPFC_LINK_DOWN ||
+	    !phba->mbox_mem_pool ||
+	    (phba->sli.sli_flag & LPFC_SLI2_ACTIVE) == 0)
+		return 0;
+
+	if (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO)
+		return 0;
+
+	pmboxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (!pmboxq)
+		return 0;
+	memset(pmboxq, 0, sizeof (LPFC_MBOXQ_t));
+
+	pmb = &pmboxq->mb;
+	pmb->mbxCommand = MBX_READ_CONFIG;
+	pmb->mbxOwner = OWN_HOST;
+	pmboxq->context1 = NULL;
+
+	if ((phba->pport->fc_flag & FC_OFFLINE_MODE) ||
+		(!(psli->sli_flag & LPFC_SLI2_ACTIVE)))
+		rc = MBX_NOT_FINISHED;
+	else
+		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
+
+	if (rc != MBX_SUCCESS) {
+		if (rc != MBX_TIMEOUT)
+			mempool_free(pmboxq, phba->mbox_mem_pool);
+		return 0;
+	}
+
+	if (mrpi)
+		*mrpi = pmb->un.varRdConfig.max_rpi;
+	if (arpi)
+		*arpi = pmb->un.varRdConfig.avail_rpi;
+	if (mxri)
+		*mxri = pmb->un.varRdConfig.max_xri;
+	if (axri)
+		*axri = pmb->un.varRdConfig.avail_xri;
+	if (mvpi)
+		*mvpi = pmb->un.varRdConfig.max_vpi;
+	if (avpi)
+		*avpi = pmb->un.varRdConfig.avail_vpi;
+
+	mempool_free(pmboxq, phba->mbox_mem_pool);
+	return 1;
+}
+
+static ssize_t
+lpfc_max_rpi_show(struct class_device *cdev, char *buf)
+{
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	uint32_t cnt;
+
+	if (lpfc_get_hba_info(phba, NULL, NULL, &cnt, NULL, NULL, NULL))
+		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
+	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+}
+
+static ssize_t
+lpfc_used_rpi_show(struct class_device *cdev, char *buf)
+{
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	uint32_t cnt, acnt;
+
+	if (lpfc_get_hba_info(phba, NULL, NULL, &cnt, &acnt, NULL, NULL))
+		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+}
+
+static ssize_t
+lpfc_max_xri_show(struct class_device *cdev, char *buf)
+{
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	uint32_t cnt;
+
+	if (lpfc_get_hba_info(phba, &cnt, NULL, NULL, NULL, NULL, NULL))
+		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
+	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+}
+
+static ssize_t
+lpfc_used_xri_show(struct class_device *cdev, char *buf)
+{
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	uint32_t cnt, acnt;
+
+	if (lpfc_get_hba_info(phba, &cnt, &acnt, NULL, NULL, NULL, NULL))
+		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+}
+
+static ssize_t
+lpfc_max_vpi_show(struct class_device *cdev, char *buf)
+{
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	uint32_t cnt;
+
+	if (lpfc_get_hba_info(phba, NULL, NULL, NULL, NULL, &cnt, NULL))
+		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
+	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+}
+
+static ssize_t
+lpfc_used_vpi_show(struct class_device *cdev, char *buf)
+{
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	uint32_t cnt, acnt;
+
+	if (lpfc_get_hba_info(phba, NULL, NULL, NULL, NULL, &cnt, &acnt))
+		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+	return snprintf(buf, PAGE_SIZE, "Unknown\n");
+}
+
+static ssize_t
+lpfc_npiv_info_show(struct class_device *cdev, char *buf)
+{
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+
+	if (!(phba->max_vpi))
+		return snprintf(buf, PAGE_SIZE, "NPIV Not Supported\n");
+	if (vport->port_type == LPFC_PHYSICAL_PORT)
+		return snprintf(buf, PAGE_SIZE, "NPIV Physical\n");
+	return snprintf(buf, PAGE_SIZE, "NPIV Virtual (VPI %d)\n", vport->vpi);
+}
+
 static ssize_t
 lpfc_poll_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 
 	return snprintf(buf, PAGE_SIZE, "%#x\n", phba->cfg_poll);
 }
@@ -402,8 +620,9 @@ static ssize_t
 lpfc_poll_store(struct class_device *cdev, const char *buf,
 		size_t count)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	uint32_t creg_val;
 	uint32_t old_val;
 	int val=0;
@@ -417,7 +636,7 @@ lpfc_poll_store(struct class_device *cdev, const char *buf,
 	if ((val & 0x3) != val)
 		return -EINVAL;
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 
 	old_val = phba->cfg_poll;
 
@@ -432,16 +651,16 @@ lpfc_poll_store(struct class_device *cdev, const char *buf,
 			lpfc_poll_start_timer(phba);
 		}
 	} else if (val != 0x0) {
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(&phba->hbalock);
 		return -EINVAL;
 	}
 
 	if (!(val & DISABLE_FCP_RING_INT) &&
 	    (old_val & DISABLE_FCP_RING_INT))
 	{
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(&phba->hbalock);
 		del_timer(&phba->fcp_poll_timer);
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(&phba->hbalock);
 		creg_val = readl(phba->HCregaddr);
 		creg_val |= (HC_R0INT_ENA << LPFC_FCP_RING);
 		writel(creg_val, phba->HCregaddr);
@@ -450,7 +669,7 @@ lpfc_poll_store(struct class_device *cdev, const char *buf,
 
 	phba->cfg_poll = val;
 
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	return strlen(buf);
 }
@@ -459,8 +678,9 @@ lpfc_poll_store(struct class_device *cdev, const char *buf,
 static ssize_t \
 lpfc_##attr##_show(struct class_device *cdev, char *buf) \
 { \
-	struct Scsi_Host *host = class_to_shost(cdev);\
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;\
+	struct Scsi_Host  *shost = class_to_shost(cdev);\
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+	struct lpfc_hba   *phba = vport->phba;\
 	int val = 0;\
 	val = phba->cfg_##attr;\
 	return snprintf(buf, PAGE_SIZE, "%d\n",\
@@ -471,8 +691,9 @@ lpfc_##attr##_show(struct class_device *cdev, char *buf) \
 static ssize_t \
 lpfc_##attr##_show(struct class_device *cdev, char *buf) \
 { \
-	struct Scsi_Host *host = class_to_shost(cdev);\
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;\
+	struct Scsi_Host  *shost = class_to_shost(cdev);\
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+	struct lpfc_hba   *phba = vport->phba;\
 	int val = 0;\
 	val = phba->cfg_##attr;\
 	return snprintf(buf, PAGE_SIZE, "%#x\n",\
@@ -488,9 +709,8 @@ lpfc_##attr##_init(struct lpfc_hba *phba, int val) \
 		return 0;\
 	}\
 	lpfc_printf_log(phba, KERN_ERR, LOG_INIT, \
-			"%d:0449 lpfc_"#attr" attribute cannot be set to %d, "\
-			"allowed range is ["#minval", "#maxval"]\n", \
-			phba->brd_no, val); \
+			"0449 lpfc_"#attr" attribute cannot be set to %d, "\
+			"allowed range is ["#minval", "#maxval"]\n", val); \
 	phba->cfg_##attr = default;\
 	return -EINVAL;\
 }
@@ -504,9 +724,8 @@ lpfc_##attr##_set(struct lpfc_hba *phba, int val) \
 		return 0;\
 	}\
 	lpfc_printf_log(phba, KERN_ERR, LOG_INIT, \
-			"%d:0450 lpfc_"#attr" attribute cannot be set to %d, "\
-			"allowed range is ["#minval", "#maxval"]\n", \
-			phba->brd_no, val); \
+			"0450 lpfc_"#attr" attribute cannot be set to %d, "\
+			"allowed range is ["#minval", "#maxval"]\n", val); \
 	return -EINVAL;\
 }
 
@@ -514,8 +733,9 @@ lpfc_##attr##_set(struct lpfc_hba *phba, int val) \
 static ssize_t \
 lpfc_##attr##_store(struct class_device *cdev, const char *buf, size_t count) \
 { \
-	struct Scsi_Host *host = class_to_shost(cdev);\
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;\
+	struct Scsi_Host  *shost = class_to_shost(cdev);\
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+	struct lpfc_hba   *phba = vport->phba;\
 	int val=0;\
 	if (!isdigit(buf[0]))\
 		return -EINVAL;\
@@ -527,6 +747,75 @@ lpfc_##attr##_store(struct class_device *cdev, const char *buf, size_t count) \
 		return -EINVAL;\
 }
 
+#define lpfc_vport_param_show(attr)	\
+static ssize_t \
+lpfc_##attr##_show(struct class_device *cdev, char *buf) \
+{ \
+	struct Scsi_Host  *shost = class_to_shost(cdev);\
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+	int val = 0;\
+	val = vport->cfg_##attr;\
+	return snprintf(buf, PAGE_SIZE, "%d\n", vport->cfg_##attr);\
+}
+
+#define lpfc_vport_param_hex_show(attr)	\
+static ssize_t \
+lpfc_##attr##_show(struct class_device *cdev, char *buf) \
+{ \
+	struct Scsi_Host  *shost = class_to_shost(cdev);\
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+	int val = 0;\
+	val = vport->cfg_##attr;\
+	return snprintf(buf, PAGE_SIZE, "%#x\n", vport->cfg_##attr);\
+}
+
+#define lpfc_vport_param_init(attr, default, minval, maxval)	\
+static int \
+lpfc_##attr##_init(struct lpfc_vport *vport, int val) \
+{ \
+	if (val >= minval && val <= maxval) {\
+		vport->cfg_##attr = val;\
+		return 0;\
+	}\
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, \
+			 "0449 lpfc_"#attr" attribute cannot be set to %d, "\
+			 "allowed range is ["#minval", "#maxval"]\n", val); \
+	vport->cfg_##attr = default;\
+	return -EINVAL;\
+}
+
+#define lpfc_vport_param_set(attr, default, minval, maxval)	\
+static int \
+lpfc_##attr##_set(struct lpfc_vport *vport, int val) \
+{ \
+	if (val >= minval && val <= maxval) {\
+		vport->cfg_##attr = val;\
+		return 0;\
+	}\
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, \
+			 "0450 lpfc_"#attr" attribute cannot be set to %d, "\
+			 "allowed range is ["#minval", "#maxval"]\n", val); \
+	return -EINVAL;\
+}
+
+#define lpfc_vport_param_store(attr)	\
+static ssize_t \
+lpfc_##attr##_store(struct class_device *cdev, const char *buf, size_t count) \
+{ \
+	struct Scsi_Host  *shost = class_to_shost(cdev);\
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+	int val=0;\
+	if (!isdigit(buf[0]))\
+		return -EINVAL;\
+	if (sscanf(buf, "%i", &val) != 1)\
+		return -EINVAL;\
+	if (lpfc_##attr##_set(vport, val) == 0) \
+		return strlen(buf);\
+	else \
+		return -EINVAL;\
+}
+
+
 #define LPFC_ATTR(name, defval, minval, maxval, desc) \
 static int lpfc_##name = defval;\
 module_param(lpfc_##name, int, 0);\
@@ -571,12 +860,56 @@ lpfc_param_store(name)\
 static CLASS_DEVICE_ATTR(lpfc_##name, S_IRUGO | S_IWUSR,\
 			 lpfc_##name##_show, lpfc_##name##_store)
 
+#define LPFC_VPORT_ATTR(name, defval, minval, maxval, desc) \
+static int lpfc_##name = defval;\
+module_param(lpfc_##name, int, 0);\
+MODULE_PARM_DESC(lpfc_##name, desc);\
+lpfc_vport_param_init(name, defval, minval, maxval)
+
+#define LPFC_VPORT_ATTR_R(name, defval, minval, maxval, desc) \
+static int lpfc_##name = defval;\
+module_param(lpfc_##name, int, 0);\
+MODULE_PARM_DESC(lpfc_##name, desc);\
+lpfc_vport_param_show(name)\
+lpfc_vport_param_init(name, defval, minval, maxval)\
+static CLASS_DEVICE_ATTR(lpfc_##name, S_IRUGO , lpfc_##name##_show, NULL)
+
+#define LPFC_VPORT_ATTR_RW(name, defval, minval, maxval, desc) \
+static int lpfc_##name = defval;\
+module_param(lpfc_##name, int, 0);\
+MODULE_PARM_DESC(lpfc_##name, desc);\
+lpfc_vport_param_show(name)\
+lpfc_vport_param_init(name, defval, minval, maxval)\
+lpfc_vport_param_set(name, defval, minval, maxval)\
+lpfc_vport_param_store(name)\
+static CLASS_DEVICE_ATTR(lpfc_##name, S_IRUGO | S_IWUSR,\
+			 lpfc_##name##_show, lpfc_##name##_store)
+
+#define LPFC_VPORT_ATTR_HEX_R(name, defval, minval, maxval, desc) \
+static int lpfc_##name = defval;\
+module_param(lpfc_##name, int, 0);\
+MODULE_PARM_DESC(lpfc_##name, desc);\
+lpfc_vport_param_hex_show(name)\
+lpfc_vport_param_init(name, defval, minval, maxval)\
+static CLASS_DEVICE_ATTR(lpfc_##name, S_IRUGO , lpfc_##name##_show, NULL)
+
+#define LPFC_VPORT_ATTR_HEX_RW(name, defval, minval, maxval, desc) \
+static int lpfc_##name = defval;\
+module_param(lpfc_##name, int, 0);\
+MODULE_PARM_DESC(lpfc_##name, desc);\
+lpfc_vport_param_hex_show(name)\
+lpfc_vport_param_init(name, defval, minval, maxval)\
+lpfc_vport_param_set(name, defval, minval, maxval)\
+lpfc_vport_param_store(name)\
+static CLASS_DEVICE_ATTR(lpfc_##name, S_IRUGO | S_IWUSR,\
+			 lpfc_##name##_show, lpfc_##name##_store)
+
 static CLASS_DEVICE_ATTR(info, S_IRUGO, lpfc_info_show, NULL);
 static CLASS_DEVICE_ATTR(serialnum, S_IRUGO, lpfc_serialnum_show, NULL);
 static CLASS_DEVICE_ATTR(modeldesc, S_IRUGO, lpfc_modeldesc_show, NULL);
 static CLASS_DEVICE_ATTR(modelname, S_IRUGO, lpfc_modelname_show, NULL);
 static CLASS_DEVICE_ATTR(programtype, S_IRUGO, lpfc_programtype_show, NULL);
-static CLASS_DEVICE_ATTR(portnum, S_IRUGO, lpfc_portnum_show, NULL);
+static CLASS_DEVICE_ATTR(portnum, S_IRUGO, lpfc_vportnum_show, NULL);
 static CLASS_DEVICE_ATTR(fwrev, S_IRUGO, lpfc_fwrev_show, NULL);
 static CLASS_DEVICE_ATTR(hdw, S_IRUGO, lpfc_hdw_show, NULL);
 static CLASS_DEVICE_ATTR(state, S_IRUGO, lpfc_state_show, NULL);
@@ -592,7 +925,120 @@ static CLASS_DEVICE_ATTR(management_version, S_IRUGO, management_version_show,
 static CLASS_DEVICE_ATTR(board_mode, S_IRUGO | S_IWUSR,
 			 lpfc_board_mode_show, lpfc_board_mode_store);
 static CLASS_DEVICE_ATTR(issue_reset, S_IWUSR, NULL, lpfc_issue_reset);
+static CLASS_DEVICE_ATTR(max_vpi, S_IRUGO, lpfc_max_vpi_show, NULL);
+static CLASS_DEVICE_ATTR(used_vpi, S_IRUGO, lpfc_used_vpi_show, NULL);
+static CLASS_DEVICE_ATTR(max_rpi, S_IRUGO, lpfc_max_rpi_show, NULL);
+static CLASS_DEVICE_ATTR(used_rpi, S_IRUGO, lpfc_used_rpi_show, NULL);
+static CLASS_DEVICE_ATTR(max_xri, S_IRUGO, lpfc_max_xri_show, NULL);
+static CLASS_DEVICE_ATTR(used_xri, S_IRUGO, lpfc_used_xri_show, NULL);
+static CLASS_DEVICE_ATTR(npiv_info, S_IRUGO, lpfc_npiv_info_show, NULL);
+static CLASS_DEVICE_ATTR(lpfc_temp_sensor, S_IRUGO, lpfc_temp_sensor_show,
+			 NULL);
+
+static int
+lpfc_parse_wwn(const char *ns, uint8_t *nm)
+{
+	unsigned int i, j;
+	memset(nm, 0, 8);
+
+	/* Validate and store the new name */
+	for (i=0, j=0; i < 16; i++) {
+		if ((*ns >= 'a') && (*ns <= 'f'))
+			j = ((j << 4) | ((*ns++ -'a') + 10));
+		else if ((*ns >= 'A') && (*ns <= 'F'))
+			j = ((j << 4) | ((*ns++ -'A') + 10));
+		else if ((*ns >= '0') && (*ns <= '9'))
+			j = ((j << 4) | (*ns++ -'0'));
+		else
+			return -EINVAL;
+		if (i % 2) {
+			nm[i/2] = j & 0xff;
+			j = 0;
+		}
+	}
 
+	return 0;
+}
+static ssize_t
+lpfc_create_vport(struct class_device *cdev, const char *buf, size_t count)
+{
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	uint8_t wwnn[8];
+	uint8_t wwpn[8];
+	uint8_t stat;
+
+	stat = lpfc_parse_wwn(&buf[0], wwpn);
+	if (stat)
+		return stat;
+	stat = lpfc_parse_wwn(&buf[17], wwnn);
+	if (stat)
+		return stat;
+	if (lpfc_vport_create(shost, wwnn, wwpn))
+		return -EIO;
+	return count;
+}
+
+static CLASS_DEVICE_ATTR(vport_create, S_IWUSR, NULL, lpfc_create_vport);
+
+static ssize_t
+lpfc_delete_vport(struct class_device *cdev, const char *buf, size_t count)
+{
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_hba   *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+	uint8_t stat, match;
+	uint8_t wwnn[8];
+	uint8_t wwpn[8];
+	struct lpfc_vport *vport;
+
+	stat = lpfc_parse_wwn(&buf[0], wwpn);
+	if (stat)
+		return stat;
+	stat = lpfc_parse_wwn(&buf[17], wwnn);
+	if (stat)
+		return stat;
+
+	match = 0;
+	spin_lock_irq(&phba->hbalock);
+	list_for_each_entry(vport, &phba->port_list, listentry) {
+		if ((memcmp(&vport->fc_nodename, &wwnn,
+			    sizeof(struct lpfc_name)) == 0) &&
+		    (memcmp(&vport->fc_portname, &wwpn,
+			    sizeof(struct lpfc_name)) == 0)) {
+			match = 1;
+			break;
+		}
+	}
+	spin_unlock_irq(&phba->hbalock);
+	if (!match)
+		return -ENODEV;
+
+	stat = lpfc_vport_delete(lpfc_shost_from_vport(vport));
+	return stat ? stat : count;
+}
+
+static CLASS_DEVICE_ATTR(vport_delete, S_IWUSR, NULL, lpfc_delete_vport);
+
+static ssize_t
+lpfc_npiv_vports_inuse_show(struct class_device *cdev, char *buf)
+{
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_hba   *phba = ((struct lpfc_vport *) shost->hostdata)->phba;
+	struct lpfc_vport *vport_curr;
+	uint32_t inuse = 0;
+
+	spin_lock_irq(&phba->hbalock);
+	list_for_each_entry(vport_curr, &phba->port_list, listentry) {
+		if (vport_curr == phba->pport)
+			continue;
+		inuse++;
+	}
+	spin_unlock_irq(&phba->hbalock);
+	return snprintf(buf, PAGE_SIZE, "%d\n", inuse);
+}
+
+static CLASS_DEVICE_ATTR(npiv_vports_inuse, S_IRUGO,
+			 lpfc_npiv_vports_inuse_show, NULL);
+static CLASS_DEVICE_ATTR(max_npiv_vports, S_IRUGO, lpfc_max_vpi_show, NULL);
 
 static char *lpfc_soft_wwn_key = "C99G71SL8032A";
 
@@ -600,8 +1046,9 @@ static ssize_t
 lpfc_soft_wwn_enable_store(struct class_device *cdev, const char *buf,
 				size_t count)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	unsigned int cnt = count;
 
 	/*
@@ -634,8 +1081,10 @@ static CLASS_DEVICE_ATTR(lpfc_soft_wwn_enable, S_IWUSR, NULL,
 static ssize_t
 lpfc_soft_wwpn_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+
 	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
 			(unsigned long long)phba->cfg_soft_wwpn);
 }
@@ -644,13 +1093,20 @@ lpfc_soft_wwpn_show(struct class_device *cdev, char *buf)
 static ssize_t
 lpfc_soft_wwpn_store(struct class_device *cdev, const char *buf, size_t count)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	struct completion online_compl;
 	int stat1=0, stat2=0;
 	unsigned int i, j, cnt=count;
 	u8 wwpn[8];
 
+	spin_lock_irq(&phba->hbalock);
+	if (phba->over_temp_state == HBA_OVER_TEMP) {
+		spin_unlock_irq(&phba->hbalock);
+		return -EPERM;
+	}
+	spin_unlock_irq(&phba->hbalock);
 	/* count may include a LF at end of string */
 	if (buf[cnt-1] == '\n')
 		cnt--;
@@ -680,9 +1136,9 @@ lpfc_soft_wwpn_store(struct class_device *cdev, const char *buf, size_t count)
 		}
 	}
 	phba->cfg_soft_wwpn = wwn_to_u64(wwpn);
-	fc_host_port_name(host) = phba->cfg_soft_wwpn;
+	fc_host_port_name(shost) = phba->cfg_soft_wwpn;
 	if (phba->cfg_soft_wwnn)
-		fc_host_node_name(host) = phba->cfg_soft_wwnn;
+		fc_host_node_name(shost) = phba->cfg_soft_wwnn;
 
 	dev_printk(KERN_NOTICE, &phba->pcidev->dev,
 		   "lpfc%d: Reinitializing to use soft_wwpn\n", phba->brd_no);
@@ -690,17 +1146,15 @@ lpfc_soft_wwpn_store(struct class_device *cdev, const char *buf, size_t count)
 	stat1 = lpfc_do_offline(phba, LPFC_EVT_OFFLINE);
 	if (stat1)
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"%d:0463 lpfc_soft_wwpn attribute set failed to reinit "
-			"adapter - %d\n", phba->brd_no, stat1);
-
+				"0463 lpfc_soft_wwpn attribute set failed to "
+				"reinit adapter - %d\n", stat1);
 	init_completion(&online_compl);
 	lpfc_workq_post_event(phba, &stat2, &online_compl, LPFC_EVT_ONLINE);
 	wait_for_completion(&online_compl);
 	if (stat2)
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"%d:0464 lpfc_soft_wwpn attribute set failed to reinit "
-			"adapter - %d\n", phba->brd_no, stat2);
-
+				"0464 lpfc_soft_wwpn attribute set failed to "
+				"reinit adapter - %d\n", stat2);
 	return (stat1 || stat2) ? -EIO : count;
 }
 static CLASS_DEVICE_ATTR(lpfc_soft_wwpn, S_IRUGO | S_IWUSR,\
@@ -709,8 +1163,8 @@ static CLASS_DEVICE_ATTR(lpfc_soft_wwpn, S_IRUGO | S_IWUSR,\
 static ssize_t
 lpfc_soft_wwnn_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
 	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
 			(unsigned long long)phba->cfg_soft_wwnn);
 }
@@ -719,8 +1173,8 @@ lpfc_soft_wwnn_show(struct class_device *cdev, char *buf)
 static ssize_t
 lpfc_soft_wwnn_store(struct class_device *cdev, const char *buf, size_t count)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
 	unsigned int i, j, cnt=count;
 	u8 wwnn[8];
 
@@ -777,6 +1231,21 @@ MODULE_PARM_DESC(lpfc_poll, "FCP ring polling mode control:"
 static CLASS_DEVICE_ATTR(lpfc_poll, S_IRUGO | S_IWUSR,
 			 lpfc_poll_show, lpfc_poll_store);
 
+int  lpfc_sli_mode = 0;
+module_param(lpfc_sli_mode, int, 0);
+MODULE_PARM_DESC(lpfc_sli_mode, "SLI mode selector:"
+		 " 0 - auto (SLI-3 if supported),"
+		 " 2 - select SLI-2 even on SLI-3 capable HBAs,"
+		 " 3 - select SLI-3");
+
+int lpfc_enable_npiv = 0;
+module_param(lpfc_enable_npiv, int, 0);
+MODULE_PARM_DESC(lpfc_enable_npiv, "Enable NPIV functionality");
+lpfc_param_show(enable_npiv);
+lpfc_param_init(enable_npiv, 0, 0, 1);
+static CLASS_DEVICE_ATTR(lpfc_enable_npiv, S_IRUGO,
+			 lpfc_enable_npiv_show, NULL);
+
 /*
 # lpfc_nodev_tmo: If set, it will hold all I/O errors on devices that disappear
 # until the timer expires. Value range is [0,255]. Default value is 30.
@@ -790,105 +1259,116 @@ MODULE_PARM_DESC(lpfc_nodev_tmo,
 static ssize_t
 lpfc_nodev_tmo_show(struct class_device *cdev, char *buf)
 {
-	struct Scsi_Host *host = class_to_shost(cdev);
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
 	int val = 0;
-	val = phba->cfg_devloss_tmo;
-	return snprintf(buf, PAGE_SIZE, "%d\n",
-			phba->cfg_devloss_tmo);
+	val = vport->cfg_devloss_tmo;
+	return snprintf(buf, PAGE_SIZE, "%d\n",	vport->cfg_devloss_tmo);
 }
 
 static int
-lpfc_nodev_tmo_init(struct lpfc_hba *phba, int val)
-{
-	static int warned;
-	if (phba->cfg_devloss_tmo !=  LPFC_DEF_DEVLOSS_TMO) {
-		phba->cfg_nodev_tmo = phba->cfg_devloss_tmo;
-		if (!warned && val != LPFC_DEF_DEVLOSS_TMO) {
-			warned = 1;
-			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-					"%d:0402 Ignoring nodev_tmo module "
-					"parameter because devloss_tmo is"
-					" set.\n",
-					phba->brd_no);
-		}
+lpfc_nodev_tmo_init(struct lpfc_vport *vport, int val)
+{
+	if (vport->cfg_devloss_tmo != LPFC_DEF_DEVLOSS_TMO) {
+		vport->cfg_nodev_tmo = vport->cfg_devloss_tmo;
+		if (val != LPFC_DEF_DEVLOSS_TMO)
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+					 "0402 Ignoring nodev_tmo module "
+					 "parameter because devloss_tmo is "
+					 "set.\n");
 		return 0;
 	}
 
 	if (val >= LPFC_MIN_DEVLOSS_TMO && val <= LPFC_MAX_DEVLOSS_TMO) {
-		phba->cfg_nodev_tmo = val;
-		phba->cfg_devloss_tmo = val;
+		vport->cfg_nodev_tmo = val;
+		vport->cfg_devloss_tmo = val;
 		return 0;
 	}
-	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"%d:0400 lpfc_nodev_tmo attribute cannot be set to %d, "
-			"allowed range is [%d, %d]\n",
-			phba->brd_no, val,
-			LPFC_MIN_DEVLOSS_TMO, LPFC_MAX_DEVLOSS_TMO);
-	phba->cfg_nodev_tmo = LPFC_DEF_DEVLOSS_TMO;
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+			 "0400 lpfc_nodev_tmo attribute cannot be set to"
+			 " %d, allowed range is [%d, %d]\n",
+			 val, LPFC_MIN_DEVLOSS_TMO, LPFC_MAX_DEVLOSS_TMO);
+	vport->cfg_nodev_tmo = LPFC_DEF_DEVLOSS_TMO;
 	return -EINVAL;
 }
 
 static void
-lpfc_update_rport_devloss_tmo(struct lpfc_hba *phba)
+lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport)
 {
+	struct Scsi_Host  *shost;
 	struct lpfc_nodelist  *ndlp;
-	int i;
-	struct list_head *listp, *node_list[7];
-
-	node_list[0] = &phba->fc_npr_list;
-	node_list[1] = &phba->fc_nlpmap_list;
-	node_list[2] = &phba->fc_nlpunmap_list;
-	node_list[3] = &phba->fc_prli_list;
-	node_list[4] = &phba->fc_reglogin_list;
-	node_list[5] = &phba->fc_adisc_list;
-	node_list[6] = &phba->fc_plogi_list;
-
-	spin_lock_irq(phba->host->host_lock);
-	for (i = 0; i < 7; i++) {
-		listp = node_list[i];
-
-		list_for_each_entry(ndlp, listp, nlp_listp) {
-			if (ndlp->rport)
-				ndlp->rport->dev_loss_tmo =
-					phba->cfg_devloss_tmo;
-		}
-	}
-	spin_unlock_irq(phba->host->host_lock);
+
+	shost = lpfc_shost_from_vport(vport);
+	spin_lock_irq(shost->host_lock);
+	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp)
+		if (ndlp->rport)
+			ndlp->rport->dev_loss_tmo = vport->cfg_devloss_tmo;
+	spin_unlock_irq(shost->host_lock);
 }
 
 static int
-lpfc_nodev_tmo_set(struct lpfc_hba *phba, int val)
+lpfc_nodev_tmo_set(struct lpfc_vport *vport, int val)
 {
-	if (phba->dev_loss_tmo_changed ||
-		(lpfc_devloss_tmo != LPFC_DEF_DEVLOSS_TMO)) {
-		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-				"%d:0401 Ignoring change to nodev_tmo "
-				"because devloss_tmo is set.\n",
-				phba->brd_no);
+	if (vport->dev_loss_tmo_changed ||
+	    (lpfc_devloss_tmo != LPFC_DEF_DEVLOSS_TMO)) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+				 "0401 Ignoring change to nodev_tmo "
+				 "because devloss_tmo is set.\n");
 		return 0;
 	}
-
 	if (val >= LPFC_MIN_DEVLOSS_TMO && val <= LPFC_MAX_DEVLOSS_TMO) {
-		phba->cfg_nodev_tmo = val;
-		phba->cfg_devloss_tmo = val;
-		lpfc_update_rport_devloss_tmo(phba);
+		vport->cfg_nodev_tmo = val;
+		vport->cfg_devloss_tmo = val;
+		lpfc_update_rport_devloss_tmo(vport);
 		return 0;
 	}
-
-	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"%d:0403 lpfc_nodev_tmo attribute cannot be set to %d, "
-			"allowed range is [%d, %d]\n",
-			phba->brd_no, val, LPFC_MIN_DEVLOSS_TMO,
-			LPFC_MAX_DEVLOSS_TMO);
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+			 "0403 lpfc_nodev_tmo attribute cannot be set to"
+			 " %d, allowed range is [%d, %d]\n",
+			 val, LPFC_MIN_DEVLOSS_TMO, LPFC_MAX_DEVLOSS_TMO);
 	return -EINVAL;
 }
 
-lpfc_param_store(nodev_tmo)
+lpfc_vport_param_store(nodev_tmo)
 
 static CLASS_DEVICE_ATTR(lpfc_nodev_tmo, S_IRUGO | S_IWUSR,
 			 lpfc_nodev_tmo_show, lpfc_nodev_tmo_store);
 
+
+static ssize_t
+lpfc_restart_auth (struct class_device *cdev, const char *buf, size_t count)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	struct lpfc_nodelist *ndlp;
+	int status;
+
+	if ((vport->fc_flag & FC_OFFLINE_MODE) ||
+		(phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO) ||
+		(!vport->cfg_enable_auth))
+		return -EPERM;
+
+	/* If vport already in the middle of authentication do not restart */
+	if ((vport->auth.auth_msg_state == LPFC_AUTH_NEGOTIATE) ||
+	    (vport->auth.auth_msg_state == LPFC_DHCHAP_CHALLENGE) ||
+	    (vport->auth.auth_msg_state == LPFC_DHCHAP_REPLY))
+		return -EPERM;
+
+	ndlp = lpfc_findnode_did(vport, Fabric_DID);
+	if (!ndlp)
+		return -EPERM;
+	status = lpfc_start_node_authentication(ndlp);
+	if (status)
+		return status;
+
+	return strlen(buf);
+
+}
+
+static CLASS_DEVICE_ATTR(lpfc_restart_auth, S_IRUGO | S_IWUSR,
+			 NULL, lpfc_restart_auth);
+
 /*
 # lpfc_devloss_tmo: If set, it will hold all I/O errors on devices that
 # disappear until the timer expires. Value range is [0,255]. Default
@@ -898,29 +1378,28 @@ module_param(lpfc_devloss_tmo, int, 0);
 MODULE_PARM_DESC(lpfc_devloss_tmo,
 		 "Seconds driver will hold I/O waiting "
 		 "for a device to come back");
-lpfc_param_init(devloss_tmo, LPFC_DEF_DEVLOSS_TMO,
-		LPFC_MIN_DEVLOSS_TMO, LPFC_MAX_DEVLOSS_TMO)
-lpfc_param_show(devloss_tmo)
+lpfc_vport_param_init(devloss_tmo, LPFC_DEF_DEVLOSS_TMO,
+		      LPFC_MIN_DEVLOSS_TMO, LPFC_MAX_DEVLOSS_TMO)
+lpfc_vport_param_show(devloss_tmo)
 static int
-lpfc_devloss_tmo_set(struct lpfc_hba *phba, int val)
+lpfc_devloss_tmo_set(struct lpfc_vport *vport, int val)
 {
 	if (val >= LPFC_MIN_DEVLOSS_TMO && val <= LPFC_MAX_DEVLOSS_TMO) {
-		phba->cfg_nodev_tmo = val;
-		phba->cfg_devloss_tmo = val;
-		phba->dev_loss_tmo_changed = 1;
-		lpfc_update_rport_devloss_tmo(phba);
+		vport->cfg_nodev_tmo = val;
+		vport->cfg_devloss_tmo = val;
+		vport->dev_loss_tmo_changed = 1;
+		lpfc_update_rport_devloss_tmo(vport);
 		return 0;
 	}
 
-	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"%d:0404 lpfc_devloss_tmo attribute cannot be set to"
-			" %d, allowed range is [%d, %d]\n",
-			phba->brd_no, val, LPFC_MIN_DEVLOSS_TMO,
-			LPFC_MAX_DEVLOSS_TMO);
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+			 "0404 lpfc_devloss_tmo attribute cannot be set to"
+			 " %d, allowed range is [%d, %d]\n",
+			 val, LPFC_MIN_DEVLOSS_TMO, LPFC_MAX_DEVLOSS_TMO);
 	return -EINVAL;
 }
 
-lpfc_param_store(devloss_tmo)
+lpfc_vport_param_store(devloss_tmo)
 static CLASS_DEVICE_ATTR(lpfc_devloss_tmo, S_IRUGO | S_IWUSR,
 	lpfc_devloss_tmo_show, lpfc_devloss_tmo_store);
 
@@ -942,14 +1421,22 @@ static CLASS_DEVICE_ATTR(lpfc_devloss_tmo, S_IRUGO | S_IWUSR,
 # LOG_LIBDFC                    0x2000     LIBDFC events
 # LOG_ALL_MSG                   0xffff     LOG all messages
 */
-LPFC_ATTR_HEX_RW(log_verbose, 0x0, 0x0, 0xffff, "Verbose logging bit-mask");
+LPFC_VPORT_ATTR_HEX_RW(log_verbose, 0x0, 0x0, 0xffff,
+		       "Verbose logging bit-mask");
+
+/*
+# lpfc_enable_da_id: This turns on the DA_ID CT command that deregisters
+# objects that have been registered with the nameserver after login.
+*/
+LPFC_VPORT_ATTR_R(enable_da_id, 0, 0, 1,
+		  "Deregister nameserver objects before LOGO");
 
 /*
 # lun_queue_depth:  This parameter is used to limit the number of outstanding
 # commands per FCP LUN. Value range is [1,128]. Default value is 30.
 */
-LPFC_ATTR_R(lun_queue_depth, 30, 1, 128,
-	    "Max number of FCP commands we can queue to a specific LUN");
+LPFC_VPORT_ATTR_R(lun_queue_depth, 30, 1, 128,
+		  "Max number of FCP commands we can queue to a specific LUN");
 
 /*
 # hba_queue_depth:  This parameter is used to limit the number of outstanding
@@ -962,6 +1449,80 @@ LPFC_ATTR_R(hba_queue_depth, 8192, 32, 8192,
 	    "Max number of FCP commands we can queue to a lpfc HBA");
 
 /*
+# peer_port_login:  This parameter allows/prevents logins
+# between peer ports hosted on the same physical port.
+# When this parameter is set to 0, peer ports on the same physical port
+# are not allowed to log in to each other.
+# When this parameter is set to 1, peer ports on the same physical port
+# are allowed to log in to each other.
+# Default value of this parameter is 0.
+*/
+LPFC_VPORT_ATTR_R(peer_port_login, 0, 0, 1,
+		  "Allow peer ports on the same physical port to login to each "
+		  "other.");
+
+/*
+# restrict_login:  This parameter allows/prevents logins
+# between Virtual Ports and remote initiators.
+# When this parameter is not set (0), Virtual Ports will accept PLOGIs from
+# other initiators and will attempt to PLOGI to all remote ports.
+# When this parameter is set (1), Virtual Ports will reject PLOGIs from
+# remote ports and will not attempt to PLOGI to other initiators.
+# This parameter does not apply to the physical port.
+# This parameter does not restrict logins to Fabric-resident remote ports.
+# Default value of this parameter is 1.
+*/
+static int lpfc_restrict_login = 1;
+module_param(lpfc_restrict_login, int, 0);
+MODULE_PARM_DESC(lpfc_restrict_login,
+		 "Restrict virtual ports login to remote initiators.");
+lpfc_vport_param_show(restrict_login);
+
+static int
+lpfc_restrict_login_init(struct lpfc_vport *vport, int val)
+{
+	if (val < 0 || val > 1) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+				 "0449 lpfc_restrict_login attribute cannot "
+				 "be set to %d, allowed range is [0, 1]\n",
+				 val);
+		vport->cfg_restrict_login = 1;
+		return -EINVAL;
+	}
+	if (vport->port_type == LPFC_PHYSICAL_PORT) {
+		vport->cfg_restrict_login = 0;
+		return 0;
+	}
+	vport->cfg_restrict_login = val;
+	return 0;
+}
+
+static int
+lpfc_restrict_login_set(struct lpfc_vport *vport, int val)
+{
+	if (val < 0 || val > 1) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+				 "0450 lpfc_restrict_login attribute cannot "
+				 "be set to %d, allowed range is [0, 1]\n",
+				 val);
+		vport->cfg_restrict_login = 1;
+		return -EINVAL;
+	}
+	if (vport->port_type == LPFC_PHYSICAL_PORT && val != 0) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+				 "0468 lpfc_restrict_login must be 0 for "
+				 "Physical ports.\n");
+		vport->cfg_restrict_login = 0;
+		return 0;
+	}
+	vport->cfg_restrict_login = val;
+	return 0;
+}
+lpfc_vport_param_store(restrict_login);
+static CLASS_DEVICE_ATTR(lpfc_restrict_login, S_IRUGO | S_IWUSR,
+			 lpfc_restrict_login_show, lpfc_restrict_login_store);
+
+/*
 # Some disk devices have a "select ID" or "select Target" capability.
 # From a protocol standpoint "select ID" usually means select the
 # Fibre channel "ALPA".  In the FC-AL Profile there is an "informative
@@ -978,8 +1539,8 @@ LPFC_ATTR_R(hba_queue_depth, 8192, 32, 8192,
 # and will not work across a fabric. Also this parameter will take
 # effect only in the case when ALPA map is not available.)
 */
-LPFC_ATTR_R(scan_down, 1, 0, 1,
-	     "Start scanning for devices from highest ALPA to lowest");
+LPFC_VPORT_ATTR_R(scan_down, 1, 0, 1,
+		  "Start scanning for devices from highest ALPA to lowest");
 
 /*
 # lpfc_topology:  link topology for init link
@@ -1004,21 +1565,40 @@ LPFC_ATTR_RW(topology, 0, 0, 6, "Select Fibre Channel topology");
 #       8  = 8 Gigabaud
 # Value range is [0,8]. Default value is 0.
 */
-LPFC_ATTR_R(link_speed, 0, 0, 8, "Select link speed");
+static int lpfc_link_speed = 0;
+module_param(lpfc_link_speed, int, 0);
+MODULE_PARM_DESC(lpfc_link_speed, "Select link speed");
+lpfc_param_show(link_speed)
+static int
+lpfc_link_speed_init(struct lpfc_hba *phba, int val)
+{
+	if ((val >= 0 && val <= LPFC_MAX_LINK_SPEED)
+		&& (LPFC_LINK_SPEED_BITMAP & (1 << val))) {
+		phba->cfg_link_speed = val;
+		return 0;
+	}
+	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+			"0454 lpfc_link_speed attribute cannot "
+			"be set to %d, allowed values are "
+			"["LPFC_LINK_SPEED_STRING"]\n", val);
+	phba->cfg_link_speed = 0;
+	return -EINVAL;
+}
+static CLASS_DEVICE_ATTR(lpfc_link_speed, S_IRUGO , lpfc_link_speed_show, NULL);
 
 /*
 # lpfc_fcp_class:  Determines FC class to use for the FCP protocol.
 # Value range is [2,3]. Default value is 3.
 */
-LPFC_ATTR_R(fcp_class, 3, 2, 3,
-	     "Select Fibre Channel class of service for FCP sequences");
+LPFC_VPORT_ATTR_R(fcp_class, 3, 2, 3,
+		  "Select Fibre Channel class of service for FCP sequences");
 
 /*
 # lpfc_use_adisc: Use ADISC for FCP rediscovery instead of PLOGI. Value range
 # is [0,1]. Default value is 0.
 */
-LPFC_ATTR_RW(use_adisc, 0, 0, 1,
-	     "Use ADISC on rediscovery to authenticate FCP devices");
+LPFC_VPORT_ATTR_RW(use_adisc, 0, 0, 1,
+		   "Use ADISC on rediscovery to authenticate FCP devices");
 
 /*
 # lpfc_ack0: Use ACK0, instead of ACK1 for class 2 acknowledgement. Value
@@ -1070,13 +1650,13 @@ LPFC_ATTR_R(multi_ring_type, FC_LLC_SNAP, 1,
 #       2 = support FDMI with attribute of hostname
 # Value range [0,2]. Default value is 0.
 */
-LPFC_ATTR_RW(fdmi_on, 0, 0, 2, "Enable FDMI support");
+LPFC_VPORT_ATTR_RW(fdmi_on, 0, 0, 2, "Enable FDMI support");
 
 /*
 # Specifies the maximum number of ELS cmds we can have outstanding (for
 # discovery). Value range is [1,64]. Default value = 32.
 */
-LPFC_ATTR(discovery_threads, 32, 1, 64, "Maximum number of ELS commands "
+LPFC_VPORT_ATTR(discovery_threads, 32, 1, 64, "Maximum number of ELS commands "
 		 "during discovery");
 
 /*
@@ -1084,8 +1664,7 @@ LPFC_ATTR(discovery_threads, 32, 1, 64, "Maximum number of ELS commands "
 # Value range is [0,65535]. Default value is 255.
 # NOTE: The SCSI layer might probe all allowed LUN on some old targets.
 */
-LPFC_ATTR_R(max_luns, 255, 0, 65535,
-	     "Maximum allowed LUN");
+LPFC_VPORT_ATTR_R(max_luns, 255, 0, 65535, "Maximum allowed LUN");
 
 /*
 # lpfc_poll_tmo: .Milliseconds driver will wait between polling FCP ring.
@@ -1103,8 +1682,120 @@ LPFC_ATTR_RW(poll_tmo, 10, 1, 255,
 */
 LPFC_ATTR_R(use_msi, 0, 0, 1, "Use Message Signaled Interrupts, if possible");
 
+/*
+# lpfc_enable_auth: controls FC Authentication.
+#       0 = Authentication OFF
+#       1 = Authentication ON
+# Value range [0,1]. Default value is 0.
+*/
+static int lpfc_enable_auth = 0;
+module_param(lpfc_enable_auth, int, 0);
+MODULE_PARM_DESC(lpfc_enable_auth, "Enable FC Authentication");
+lpfc_vport_param_show(enable_auth);
+lpfc_vport_param_init(enable_auth, 0, 0, 1);
+static int
+lpfc_enable_auth_set(struct lpfc_vport *vport, int val)
+{
+	if (val == vport->cfg_enable_auth)
+		return 0;
+	if (val == 0) {
+		spin_lock_irq(&fc_security_user_lock);
+		list_del(&vport->sc_users);
+		spin_unlock_irq(&fc_security_user_lock);
+		vport->cfg_enable_auth = val;
+		lpfc_fc_queue_security_work(vport,
+					    &vport->sc_offline_work);
+		return 0;
+	} else if (val == 1) {
+		spin_lock_irq(&fc_security_user_lock);
+		list_add_tail(&vport->sc_users, &fc_security_user_list);
+		spin_unlock_irq(&fc_security_user_lock);
+		vport->cfg_enable_auth = val;
+		lpfc_fc_queue_security_work(vport,
+					    &vport->sc_online_work);
+		return 0;
+	}
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+			 "0450 lpfc_enable_auth attribute cannot be set to %d, "
+			 "allowed range is [0, 1]\n", val);
+	return -EINVAL;
+}
+lpfc_vport_param_store(enable_auth);
+static CLASS_DEVICE_ATTR(lpfc_enable_auth, S_IRUGO | S_IWUSR,
+			 lpfc_enable_auth_show, lpfc_enable_auth_store);
+
+/*
+# lpfc_dev_loss_initiator: FC transport layer waits for dev_loss timer to
+#	expire for FC Initiators before calling rport dev_loss callback routine
+#       0  = disabled (dev_loss callback called immediately after rport delete)
+#       1  = enabled  (dev_loss callback is called after dev_loss timer expires)
+# Value range is [0,1]. Default value is 0.
+*/
+LPFC_ATTR_R(dev_loss_initiator, 0, 0, 1,
+		"FC Transport dev_loss behavior for Initiators");
+
+struct class_device_attribute *lpfc_hba_attrs[] = {
+	&class_device_attr_info,
+	&class_device_attr_serialnum,
+	&class_device_attr_modeldesc,
+	&class_device_attr_modelname,
+	&class_device_attr_programtype,
+	&class_device_attr_portnum,
+	&class_device_attr_fwrev,
+	&class_device_attr_hdw,
+	&class_device_attr_option_rom_version,
+	&class_device_attr_state,
+	&class_device_attr_num_discovered_ports,
+	&class_device_attr_lpfc_drvr_version,
+	&class_device_attr_lpfc_temp_sensor,
+	&class_device_attr_lpfc_log_verbose,
+	&class_device_attr_lpfc_lun_queue_depth,
+	&class_device_attr_lpfc_hba_queue_depth,
+	&class_device_attr_lpfc_peer_port_login,
+	&class_device_attr_lpfc_nodev_tmo,
+	&class_device_attr_lpfc_devloss_tmo,
+	&class_device_attr_lpfc_fcp_class,
+	&class_device_attr_lpfc_use_adisc,
+	&class_device_attr_lpfc_ack0,
+	&class_device_attr_lpfc_topology,
+	&class_device_attr_lpfc_scan_down,
+	&class_device_attr_lpfc_link_speed,
+	&class_device_attr_lpfc_cr_delay,
+	&class_device_attr_lpfc_cr_count,
+	&class_device_attr_lpfc_multi_ring_support,
+	&class_device_attr_lpfc_multi_ring_rctl,
+	&class_device_attr_lpfc_multi_ring_type,
+	&class_device_attr_lpfc_fdmi_on,
+	&class_device_attr_lpfc_max_luns,
+	&class_device_attr_lpfc_enable_npiv,
+	&class_device_attr_nport_evt_cnt,
+	&class_device_attr_management_version,
+	&class_device_attr_board_mode,
+	&class_device_attr_max_vpi,
+	&class_device_attr_used_vpi,
+	&class_device_attr_max_rpi,
+	&class_device_attr_used_rpi,
+	&class_device_attr_max_xri,
+	&class_device_attr_used_xri,
+	&class_device_attr_npiv_info,
+	&class_device_attr_issue_reset,
+	&class_device_attr_lpfc_poll,
+	&class_device_attr_lpfc_poll_tmo,
+	&class_device_attr_lpfc_use_msi,
+	&class_device_attr_lpfc_enable_auth,
+	&class_device_attr_lpfc_restart_auth,
+	&class_device_attr_lpfc_dev_loss_initiator,
+	&class_device_attr_npiv_vports_inuse,
+	&class_device_attr_max_npiv_vports,
+	&class_device_attr_vport_delete,
+	&class_device_attr_vport_create,
+	&class_device_attr_lpfc_soft_wwnn,
+	&class_device_attr_lpfc_soft_wwpn,
+	&class_device_attr_lpfc_soft_wwn_enable,
+	NULL,
+};
 
-struct class_device_attribute *lpfc_host_attrs[] = {
+struct class_device_attribute *lpfc_hba_attrs_no_npiv[] = {
 	&class_device_attr_info,
 	&class_device_attr_serialnum,
 	&class_device_attr_modeldesc,
@@ -1117,9 +1808,11 @@ struct class_device_attribute *lpfc_host_attrs[] = {
 	&class_device_attr_state,
 	&class_device_attr_num_discovered_ports,
 	&class_device_attr_lpfc_drvr_version,
+	&class_device_attr_lpfc_temp_sensor,
 	&class_device_attr_lpfc_log_verbose,
 	&class_device_attr_lpfc_lun_queue_depth,
 	&class_device_attr_lpfc_hba_queue_depth,
+	&class_device_attr_lpfc_peer_port_login,
 	&class_device_attr_lpfc_nodev_tmo,
 	&class_device_attr_lpfc_devloss_tmo,
 	&class_device_attr_lpfc_fcp_class,
@@ -1135,26 +1828,64 @@ struct class_device_attribute *lpfc_host_attrs[] = {
 	&class_device_attr_lpfc_multi_ring_type,
 	&class_device_attr_lpfc_fdmi_on,
 	&class_device_attr_lpfc_max_luns,
+	&class_device_attr_lpfc_enable_npiv,
 	&class_device_attr_nport_evt_cnt,
 	&class_device_attr_management_version,
 	&class_device_attr_board_mode,
+	&class_device_attr_max_vpi,
+	&class_device_attr_used_vpi,
+	&class_device_attr_max_rpi,
+	&class_device_attr_used_rpi,
+	&class_device_attr_max_xri,
+	&class_device_attr_used_xri,
+	&class_device_attr_npiv_info,
 	&class_device_attr_issue_reset,
 	&class_device_attr_lpfc_poll,
 	&class_device_attr_lpfc_poll_tmo,
 	&class_device_attr_lpfc_use_msi,
+	&class_device_attr_lpfc_enable_auth,
+	&class_device_attr_lpfc_restart_auth,
+	&class_device_attr_lpfc_dev_loss_initiator,
 	&class_device_attr_lpfc_soft_wwnn,
 	&class_device_attr_lpfc_soft_wwpn,
 	&class_device_attr_lpfc_soft_wwn_enable,
 	NULL,
 };
 
+struct class_device_attribute *lpfc_vport_attrs[] = {
+	&class_device_attr_info,
+	&class_device_attr_state,
+	&class_device_attr_num_discovered_ports,
+	&class_device_attr_lpfc_drvr_version,
+	&class_device_attr_lpfc_enable_auth,
+	&class_device_attr_lpfc_log_verbose,
+	&class_device_attr_lpfc_lun_queue_depth,
+	&class_device_attr_lpfc_nodev_tmo,
+	&class_device_attr_lpfc_devloss_tmo,
+	&class_device_attr_lpfc_hba_queue_depth,
+	&class_device_attr_lpfc_peer_port_login,
+	&class_device_attr_lpfc_restrict_login,
+	&class_device_attr_lpfc_fcp_class,
+	&class_device_attr_lpfc_use_adisc,
+	&class_device_attr_lpfc_fdmi_on,
+	&class_device_attr_lpfc_max_luns,
+	&class_device_attr_nport_evt_cnt,
+	&class_device_attr_management_version,
+	&class_device_attr_npiv_info,
+	&class_device_attr_lpfc_enable_da_id,
+	&class_device_attr_lpfc_dev_loss_initiator,
+	NULL,
+};
+
 static ssize_t
 sysfs_ctlreg_write(struct kobject *kobj, char *buf, loff_t off, size_t count)
 {
 	size_t buf_off;
-	struct Scsi_Host *host = class_to_shost(container_of(kobj,
-					     struct class_device, kobj));
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct class_device *cdev = container_of(kobj, struct class_device,
+						 kobj);
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 
 	if ((off + count) > FF_REG_AREA_SIZE)
 		return -ERANGE;
@@ -1164,18 +1895,16 @@ sysfs_ctlreg_write(struct kobject *kobj, char *buf, loff_t off, size_t count)
 	if (off % 4 || count % 4 || (unsigned long)buf % 4)
 		return -EINVAL;
 
-	spin_lock_irq(phba->host->host_lock);
-
-	if (!(phba->fc_flag & FC_OFFLINE_MODE)) {
-		spin_unlock_irq(phba->host->host_lock);
+	if (!(vport->fc_flag & FC_OFFLINE_MODE)) {
 		return -EPERM;
 	}
 
+	spin_lock_irq(&phba->hbalock);
 	for (buf_off = 0; buf_off < count; buf_off += sizeof(uint32_t))
 		writel(*((uint32_t *)(buf + buf_off)),
 		       phba->ctrl_regs_memmap_p + off + buf_off);
 
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	return count;
 }
@@ -1185,9 +1914,11 @@ sysfs_ctlreg_read(struct kobject *kobj, char *buf, loff_t off, size_t count)
 {
 	size_t buf_off;
 	uint32_t * tmp_ptr;
-	struct Scsi_Host *host = class_to_shost(container_of(kobj,
-					     struct class_device, kobj));
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct class_device *cdev = container_of(kobj, struct class_device,
+						 kobj);
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 
 	if (off > FF_REG_AREA_SIZE)
 		return -ERANGE;
@@ -1200,14 +1931,14 @@ sysfs_ctlreg_read(struct kobject *kobj, char *buf, loff_t off, size_t count)
 	if (off % 4 || count % 4 || (unsigned long)buf % 4)
 		return -EINVAL;
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 
 	for (buf_off = 0; buf_off < count; buf_off += sizeof(uint32_t)) {
 		tmp_ptr = (uint32_t *)(buf + buf_off);
 		*tmp_ptr = readl(phba->ctrl_regs_memmap_p + off + buf_off);
 	}
 
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	return count;
 }
@@ -1223,27 +1954,62 @@ static struct bin_attribute sysfs_ctlreg_attr = {
 	.write = sysfs_ctlreg_write,
 };
 
+static struct lpfc_sysfs_mbox *
+lpfc_get_sysfs_mbox(struct lpfc_hba *phba, uint8_t create)
+{
+	struct lpfc_sysfs_mbox *sysfs_mbox;
+	pid_t pid;
+
+	pid = current->pid;
+
+	spin_lock_irq(&phba->hbalock);
+	list_for_each_entry(sysfs_mbox, &phba->sysfs_mbox_list, list) {
+		if (sysfs_mbox->pid == pid) {
+			spin_unlock_irq(&phba->hbalock);
+			return sysfs_mbox;
+		}
+	}
+	if (!create) {
+		spin_unlock_irq(&phba->hbalock);
+		return NULL;
+	}
+	spin_unlock_irq(&phba->hbalock);
+	sysfs_mbox = kzalloc(sizeof(struct lpfc_sysfs_mbox),
+			GFP_KERNEL);
+	if (!sysfs_mbox)
+		return NULL;
+	sysfs_mbox->state = SMBOX_IDLE;
+	sysfs_mbox->pid = pid;
+	spin_lock_irq(&phba->hbalock);
+	list_add_tail(&sysfs_mbox->list, &phba->sysfs_mbox_list);
+
+	spin_unlock_irq(&phba->hbalock);
+	return sysfs_mbox;
+
+}
 
 static void
-sysfs_mbox_idle (struct lpfc_hba * phba)
+sysfs_mbox_idle(struct lpfc_hba *phba,
+		struct lpfc_sysfs_mbox *sysfs_mbox)
 {
-	phba->sysfs_mbox.state = SMBOX_IDLE;
-	phba->sysfs_mbox.offset = 0;
-
-	if (phba->sysfs_mbox.mbox) {
-		mempool_free(phba->sysfs_mbox.mbox,
+	list_del_init(&sysfs_mbox->list);
+	if (sysfs_mbox->mbox) {
+		mempool_free(sysfs_mbox->mbox,
 			     phba->mbox_mem_pool);
-		phba->sysfs_mbox.mbox = NULL;
 	}
+	kfree(sysfs_mbox);
 }
 
 static ssize_t
 sysfs_mbox_write(struct kobject *kobj, char *buf, loff_t off, size_t count)
 {
-	struct Scsi_Host * host =
-		class_to_shost(container_of(kobj, struct class_device, kobj));
-	struct lpfc_hba * phba = (struct lpfc_hba*)host->hostdata;
-	struct lpfcMboxq * mbox = NULL;
+	struct class_device *cdev = container_of(kobj, struct class_device,
+						 kobj);
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	struct lpfcMboxq  *mbox = NULL;
+	struct lpfc_sysfs_mbox *sysfs_mbox;
 
 	if ((count + off) > MAILBOX_CMD_SIZE)
 		return -ERANGE;
@@ -1261,30 +2027,44 @@ sysfs_mbox_write(struct kobject *kobj, char *buf, loff_t off, size_t count)
 		memset(mbox, 0, sizeof (LPFC_MBOXQ_t));
 	}
 
-	spin_lock_irq(host->host_lock);
+	if (off == 0) {
+		sysfs_mbox = lpfc_get_sysfs_mbox(phba, 1);
+		if (sysfs_mbox == NULL) {
+			mempool_free(mbox, phba->mbox_mem_pool);
+			return -ENOMEM;
+		}
+	} else {
+		sysfs_mbox = lpfc_get_sysfs_mbox(phba, 0);
+		if (sysfs_mbox == NULL) {
+			mempool_free(mbox, phba->mbox_mem_pool);
+			return -EAGAIN;
+		}
+	}
+
+	spin_lock_irq(&phba->hbalock);
 
 	if (off == 0) {
-		if (phba->sysfs_mbox.mbox)
+		if (sysfs_mbox->mbox)
 			mempool_free(mbox, phba->mbox_mem_pool);
 		else
-			phba->sysfs_mbox.mbox = mbox;
-		phba->sysfs_mbox.state = SMBOX_WRITING;
+			sysfs_mbox->mbox = mbox;
+		sysfs_mbox->state = SMBOX_WRITING;
 	} else {
-		if (phba->sysfs_mbox.state  != SMBOX_WRITING ||
-		    phba->sysfs_mbox.offset != off           ||
-		    phba->sysfs_mbox.mbox   == NULL ) {
-			sysfs_mbox_idle(phba);
-			spin_unlock_irq(host->host_lock);
+		if (sysfs_mbox->state  != SMBOX_WRITING ||
+		    sysfs_mbox->offset != off           ||
+		    sysfs_mbox->mbox   == NULL) {
+			sysfs_mbox_idle(phba, sysfs_mbox);
+			spin_unlock_irq(&phba->hbalock);
 			return -EAGAIN;
 		}
 	}
 
-	memcpy((uint8_t *) & phba->sysfs_mbox.mbox->mb + off,
+	memcpy((uint8_t *) & sysfs_mbox->mbox->mb + off,
 	       buf, count);
 
-	phba->sysfs_mbox.offset = off + count;
+	sysfs_mbox->offset = off + count;
 
-	spin_unlock_irq(host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	return count;
 }
@@ -1292,11 +2072,13 @@ sysfs_mbox_write(struct kobject *kobj, char *buf, loff_t off, size_t count)
 static ssize_t
 sysfs_mbox_read(struct kobject *kobj, char *buf, loff_t off, size_t count)
 {
-	struct Scsi_Host *host =
-		class_to_shost(container_of(kobj, struct class_device,
-					    kobj));
-	struct lpfc_hba *phba = (struct lpfc_hba*)host->hostdata;
+	struct class_device *cdev = container_of(kobj, struct class_device,
+						 kobj);
+	struct Scsi_Host  *shost = class_to_shost(cdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	int rc;
+	struct lpfc_sysfs_mbox *sysfs_mbox;
 
 	if (off > MAILBOX_CMD_SIZE)
 		return -ERANGE;
@@ -1310,15 +2092,25 @@ sysfs_mbox_read(struct kobject *kobj, char *buf, loff_t off, size_t count)
 	if (off && count == 0)
 		return 0;
 
-	spin_lock_irq(phba->host->host_lock);
+	sysfs_mbox = lpfc_get_sysfs_mbox(phba, 0);
+
+	if (!sysfs_mbox)
+		return -EPERM;
+
+	spin_lock_irq(&phba->hbalock);
+
+	if (phba->over_temp_state == HBA_OVER_TEMP) {
+		sysfs_mbox_idle(phba, sysfs_mbox);
+		spin_unlock_irq(&phba->hbalock);
+		return  -EPERM;
+	}
 
 	if (off == 0 &&
-	    phba->sysfs_mbox.state  == SMBOX_WRITING &&
-	    phba->sysfs_mbox.offset >= 2 * sizeof(uint32_t)) {
+	    sysfs_mbox->state  == SMBOX_WRITING &&
+	    sysfs_mbox->offset >= 2 * sizeof(uint32_t)) {
 
-		switch (phba->sysfs_mbox.mbox->mb.mbxCommand) {
+		switch (sysfs_mbox->mbox->mb.mbxCommand) {
 			/* Offline only */
-		case MBX_WRITE_NV:
 		case MBX_INIT_LINK:
 		case MBX_DOWN_LINK:
 		case MBX_CONFIG_LINK:
@@ -1332,15 +2124,16 @@ sysfs_mbox_read(struct kobject *kobj, char *buf, loff_t off, size_t count)
 		case MBX_FLASH_WR_ULA:
 		case MBX_SET_MASK:
 		case MBX_SET_SLIM:
-		case MBX_SET_DEBUG:
-			if (!(phba->fc_flag & FC_OFFLINE_MODE)) {
+			if (!(vport->fc_flag & FC_OFFLINE_MODE)) {
 				printk(KERN_WARNING "mbox_read:Command 0x%x "
 				       "is illegal in on-line state\n",
-				       phba->sysfs_mbox.mbox->mb.mbxCommand);
-				sysfs_mbox_idle(phba);
-				spin_unlock_irq(phba->host->host_lock);
+				       sysfs_mbox->mbox->mb.mbxCommand);
+				sysfs_mbox_idle(phba,sysfs_mbox);
+				spin_unlock_irq(&phba->hbalock);
 				return -EPERM;
 			}
+		case MBX_WRITE_NV:
+		case MBX_WRITE_VPARMS:
 		case MBX_LOAD_SM:
 		case MBX_READ_NV:
 		case MBX_READ_CONFIG:
@@ -1357,6 +2150,7 @@ sysfs_mbox_read(struct kobject *kobj, char *buf, loff_t off, size_t count)
 		case MBX_LOAD_EXP_ROM:
 		case MBX_BEACON:
 		case MBX_DEL_LD_ENTRY:
+		case MBX_SET_DEBUG:
 			break;
 		case MBX_READ_SPARM64:
 		case MBX_READ_LA:
@@ -1366,70 +2160,69 @@ sysfs_mbox_read(struct kobject *kobj, char *buf, loff_t off, size_t count)
 		case MBX_CONFIG_PORT:
 		case MBX_RUN_BIU_DIAG:
 			printk(KERN_WARNING "mbox_read: Illegal Command 0x%x\n",
-			       phba->sysfs_mbox.mbox->mb.mbxCommand);
-			sysfs_mbox_idle(phba);
-			spin_unlock_irq(phba->host->host_lock);
+			       sysfs_mbox->mbox->mb.mbxCommand);
+			sysfs_mbox_idle(phba,sysfs_mbox);
+			spin_unlock_irq(&phba->hbalock);
 			return -EPERM;
 		default:
 			printk(KERN_WARNING "mbox_read: Unknown Command 0x%x\n",
-			       phba->sysfs_mbox.mbox->mb.mbxCommand);
-			sysfs_mbox_idle(phba);
-			spin_unlock_irq(phba->host->host_lock);
+			       sysfs_mbox->mbox->mb.mbxCommand);
+			sysfs_mbox_idle(phba,sysfs_mbox);
+			spin_unlock_irq(&phba->hbalock);
 			return -EPERM;
 		}
 
-		if (phba->fc_flag & FC_BLOCK_MGMT_IO) {
-			sysfs_mbox_idle(phba);
-			spin_unlock_irq(host->host_lock);
+		sysfs_mbox->mbox->vport = vport;
+
+		if (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO) {
+			sysfs_mbox_idle(phba,sysfs_mbox);
+			spin_unlock_irq(&phba->hbalock);
 			return  -EAGAIN;
 		}
 
-		if ((phba->fc_flag & FC_OFFLINE_MODE) ||
+		if ((vport->fc_flag & FC_OFFLINE_MODE) ||
 		    (!(phba->sli.sli_flag & LPFC_SLI2_ACTIVE))){
 
-			spin_unlock_irq(phba->host->host_lock);
+			spin_unlock_irq(&phba->hbalock);
 			rc = lpfc_sli_issue_mbox (phba,
-						  phba->sysfs_mbox.mbox,
+						  sysfs_mbox->mbox,
 						  MBX_POLL);
-			spin_lock_irq(phba->host->host_lock);
+			spin_lock_irq(&phba->hbalock);
 
 		} else {
-			spin_unlock_irq(phba->host->host_lock);
+			spin_unlock_irq(&phba->hbalock);
 			rc = lpfc_sli_issue_mbox_wait (phba,
-						       phba->sysfs_mbox.mbox,
+						       sysfs_mbox->mbox,
 				lpfc_mbox_tmo_val(phba,
-				    phba->sysfs_mbox.mbox->mb.mbxCommand) * HZ);
-			spin_lock_irq(phba->host->host_lock);
+				    sysfs_mbox->mbox->mb.mbxCommand) * HZ);
+			spin_lock_irq(&phba->hbalock);
 		}
 
 		if (rc != MBX_SUCCESS) {
 			if (rc == MBX_TIMEOUT) {
-				phba->sysfs_mbox.mbox->mbox_cmpl =
-					lpfc_sli_def_mbox_cmpl;
-				phba->sysfs_mbox.mbox = NULL;
+				sysfs_mbox->mbox = NULL;
 			}
-			sysfs_mbox_idle(phba);
-			spin_unlock_irq(host->host_lock);
+			sysfs_mbox_idle(phba,sysfs_mbox);
+			spin_unlock_irq(&phba->hbalock);
 			return  (rc == MBX_TIMEOUT) ? -ETIME : -ENODEV;
 		}
-		phba->sysfs_mbox.state = SMBOX_READING;
+		sysfs_mbox->state = SMBOX_READING;
 	}
-	else if (phba->sysfs_mbox.offset != off ||
-		 phba->sysfs_mbox.state  != SMBOX_READING) {
-		printk(KERN_WARNING  "mbox_read: Bad State\n");
-		sysfs_mbox_idle(phba);
-		spin_unlock_irq(host->host_lock);
+	else if (sysfs_mbox->offset != off ||
+		 sysfs_mbox->state  != SMBOX_READING) {
+		sysfs_mbox_idle(phba,sysfs_mbox);
+		spin_unlock_irq(&phba->hbalock);
 		return -EAGAIN;
 	}
 
-	memcpy(buf, (uint8_t *) & phba->sysfs_mbox.mbox->mb + off, count);
+	memcpy(buf, (uint8_t *) & sysfs_mbox->mbox->mb + off, count);
 
-	phba->sysfs_mbox.offset = off + count;
+	sysfs_mbox->offset = off + count;
 
-	if (phba->sysfs_mbox.offset == MAILBOX_CMD_SIZE)
-		sysfs_mbox_idle(phba);
+	if (sysfs_mbox->offset == MAILBOX_CMD_SIZE)
+		sysfs_mbox_idle(phba,sysfs_mbox);
 
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	return count;
 }
@@ -1446,35 +2239,35 @@ static struct bin_attribute sysfs_mbox_attr = {
 };
 
 int
-lpfc_alloc_sysfs_attr(struct lpfc_hba *phba)
+lpfc_alloc_sysfs_attr(struct lpfc_vport *vport)
 {
-	struct Scsi_Host *host = phba->host;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 	int error;
 
-	error = sysfs_create_bin_file(&host->shost_classdev.kobj,
-							&sysfs_ctlreg_attr);
+	error = sysfs_create_bin_file(&shost->shost_classdev.kobj,
+				      &sysfs_ctlreg_attr);
 	if (error)
 		goto out;
 
-	error = sysfs_create_bin_file(&host->shost_classdev.kobj,
-							&sysfs_mbox_attr);
+	error = sysfs_create_bin_file(&shost->shost_classdev.kobj,
+				      &sysfs_mbox_attr);
 	if (error)
 		goto out_remove_ctlreg_attr;
 
 	return 0;
 out_remove_ctlreg_attr:
-	sysfs_remove_bin_file(&host->shost_classdev.kobj, &sysfs_ctlreg_attr);
+	sysfs_remove_bin_file(&shost->shost_classdev.kobj, &sysfs_ctlreg_attr);
 out:
 	return error;
 }
 
 void
-lpfc_free_sysfs_attr(struct lpfc_hba *phba)
+lpfc_free_sysfs_attr(struct lpfc_vport *vport)
 {
-	struct Scsi_Host *host = phba->host;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 
-	sysfs_remove_bin_file(&host->shost_classdev.kobj, &sysfs_mbox_attr);
-	sysfs_remove_bin_file(&host->shost_classdev.kobj, &sysfs_ctlreg_attr);
+	sysfs_remove_bin_file(&shost->shost_classdev.kobj, &sysfs_mbox_attr);
+	sysfs_remove_bin_file(&shost->shost_classdev.kobj, &sysfs_ctlreg_attr);
 }
 
 
@@ -1485,26 +2278,30 @@ lpfc_free_sysfs_attr(struct lpfc_hba *phba)
 static void
 lpfc_get_host_port_id(struct Scsi_Host *shost)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba*)shost->hostdata;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+
 	/* note: fc_myDID already in cpu endianness */
-	fc_host_port_id(shost) = phba->fc_myDID;
+	fc_host_port_id(shost) = vport->fc_myDID;
 }
 
 static void
 lpfc_get_host_port_type(struct Scsi_Host *shost)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba*)shost->hostdata;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 
 	spin_lock_irq(shost->host_lock);
 
-	if (phba->hba_state == LPFC_HBA_READY) {
+	if (vport->port_type == LPFC_NPIV_PORT) {
+		fc_host_port_type(shost) = FC_PORTTYPE_NPORT;
+	} else if (lpfc_is_link_up(phba)) {
 		if (phba->fc_topology == TOPOLOGY_LOOP) {
-			if (phba->fc_flag & FC_PUBLIC_LOOP)
+			if (vport->fc_flag & FC_PUBLIC_LOOP)
 				fc_host_port_type(shost) = FC_PORTTYPE_NLPORT;
 			else
 				fc_host_port_type(shost) = FC_PORTTYPE_LPORT;
 		} else {
-			if (phba->fc_flag & FC_FABRIC)
+			if (vport->fc_flag & FC_FABRIC)
 				fc_host_port_type(shost) = FC_PORTTYPE_NPORT;
 			else
 				fc_host_port_type(shost) = FC_PORTTYPE_PTP;
@@ -1518,29 +2315,20 @@ lpfc_get_host_port_type(struct Scsi_Host *shost)
 static void
 lpfc_get_host_port_state(struct Scsi_Host *shost)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba*)shost->hostdata;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 
 	spin_lock_irq(shost->host_lock);
 
-	if (phba->fc_flag & FC_OFFLINE_MODE)
+	if (vport->fc_flag & FC_OFFLINE_MODE)
 		fc_host_port_state(shost) = FC_PORTSTATE_OFFLINE;
 	else {
-		switch (phba->hba_state) {
-		case LPFC_STATE_UNKNOWN:
-		case LPFC_WARM_START:
-		case LPFC_INIT_START:
-		case LPFC_INIT_MBX_CMDS:
+		switch (phba->link_state) {
+		case LPFC_LINK_UNKNOWN:
 		case LPFC_LINK_DOWN:
 			fc_host_port_state(shost) = FC_PORTSTATE_LINKDOWN;
 			break;
 		case LPFC_LINK_UP:
-		case LPFC_LOCAL_CFG_LINK:
-		case LPFC_FLOGI:
-		case LPFC_FABRIC_CFG_LINK:
-		case LPFC_NS_REG:
-		case LPFC_NS_QRY:
-		case LPFC_BUILD_DISC_LIST:
-		case LPFC_DISC_AUTH:
 		case LPFC_CLEAR_LA:
 		case LPFC_HBA_READY:
 			/* Links up, beyond this port_type reports state */
@@ -1561,11 +2349,12 @@ lpfc_get_host_port_state(struct Scsi_Host *shost)
 static void
 lpfc_get_host_speed(struct Scsi_Host *shost)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba*)shost->hostdata;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 
 	spin_lock_irq(shost->host_lock);
 
-	if (phba->hba_state == LPFC_HBA_READY) {
+	if (lpfc_is_link_up(phba)) {
 		switch(phba->fc_linkspeed) {
 			case LA_1GHZ_LINK:
 				fc_host_speed(shost) = FC_PORTSPEED_1GBIT;
@@ -1591,39 +2380,31 @@ lpfc_get_host_speed(struct Scsi_Host *shost)
 static void
 lpfc_get_host_fabric_name (struct Scsi_Host *shost)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba*)shost->hostdata;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	u64 node_name;
 
 	spin_lock_irq(shost->host_lock);
 
-	if ((phba->fc_flag & FC_FABRIC) ||
+	if ((vport->fc_flag & FC_FABRIC) ||
 	    ((phba->fc_topology == TOPOLOGY_LOOP) &&
-	     (phba->fc_flag & FC_PUBLIC_LOOP)))
+	     (vport->fc_flag & FC_PUBLIC_LOOP)))
 		node_name = wwn_to_u64(phba->fc_fabparam.nodeName.u.wwn);
 	else
 		/* fabric is local port if there is no F/FL_Port */
-		node_name = wwn_to_u64(phba->fc_nodename.u.wwn);
+		node_name = wwn_to_u64(vport->fc_nodename.u.wwn);
 
 	spin_unlock_irq(shost->host_lock);
 
 	fc_host_fabric_name(shost) = node_name;
 }
 
-static void
-lpfc_get_host_symbolic_name (struct Scsi_Host *shost)
-{
-	struct lpfc_hba *phba = (struct lpfc_hba*)shost->hostdata;
-
-	spin_lock_irq(shost->host_lock);
-	lpfc_get_hba_sym_node_name(phba, fc_host_symbolic_name(shost));
-	spin_unlock_irq(shost->host_lock);
-}
-
 static struct fc_host_statistics *
 lpfc_get_stats(struct Scsi_Host *shost)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba *)shost->hostdata;
-	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	struct lpfc_sli   *psli = &phba->sli;
 	struct fc_host_statistics *hs = &phba->link_stats;
 	struct lpfc_lnk_stat * lso = &psli->lnk_stat_offsets;
 	LPFC_MBOXQ_t *pmboxq;
@@ -1631,7 +2412,16 @@ lpfc_get_stats(struct Scsi_Host *shost)
 	unsigned long seconds;
 	int rc = 0;
 
-	if (phba->fc_flag & FC_BLOCK_MGMT_IO)
+	/*
+	 * prevent udev from issuing mailbox commands until the port is
+	 * configured.
+	 */
+	if (phba->link_state < LPFC_LINK_DOWN ||
+	    !phba->mbox_mem_pool ||
+	    (phba->sli.sli_flag & LPFC_SLI2_ACTIVE) == 0)
+		return NULL;
+
+	if (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO)
 		return NULL;
 
 	pmboxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
@@ -1643,17 +2433,16 @@ lpfc_get_stats(struct Scsi_Host *shost)
 	pmb->mbxCommand = MBX_READ_STATUS;
 	pmb->mbxOwner = OWN_HOST;
 	pmboxq->context1 = NULL;
+	pmboxq->vport = vport;
 
-	if ((phba->fc_flag & FC_OFFLINE_MODE) ||
+	if ((vport->fc_flag & FC_OFFLINE_MODE) ||
 		(!(psli->sli_flag & LPFC_SLI2_ACTIVE)))
 		rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
 	else
 		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
 
 	if (rc != MBX_SUCCESS) {
-		if (rc == MBX_TIMEOUT)
-			pmboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-		else
+		if (rc != MBX_TIMEOUT)
 			mempool_free(pmboxq, phba->mbox_mem_pool);
 		return NULL;
 	}
@@ -1669,18 +2458,17 @@ lpfc_get_stats(struct Scsi_Host *shost)
 	pmb->mbxCommand = MBX_READ_LNK_STAT;
 	pmb->mbxOwner = OWN_HOST;
 	pmboxq->context1 = NULL;
+	pmboxq->vport = vport;
 
-	if ((phba->fc_flag & FC_OFFLINE_MODE) ||
+	if ((vport->fc_flag & FC_OFFLINE_MODE) ||
 	    (!(psli->sli_flag & LPFC_SLI2_ACTIVE)))
 		rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
 	else
 		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
 
 	if (rc != MBX_SUCCESS) {
-		if (rc == MBX_TIMEOUT)
-			pmboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-		else
-			mempool_free( pmboxq, phba->mbox_mem_pool);
+		if (rc != MBX_TIMEOUT)
+			mempool_free(pmboxq, phba->mbox_mem_pool);
 		return NULL;
 	}
 
@@ -1727,14 +2515,15 @@ lpfc_get_stats(struct Scsi_Host *shost)
 static void
 lpfc_reset_stats(struct Scsi_Host *shost)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba *)shost->hostdata;
-	struct lpfc_sli *psli = &phba->sli;
-	struct lpfc_lnk_stat * lso = &psli->lnk_stat_offsets;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	struct lpfc_sli   *psli = &phba->sli;
+	struct lpfc_lnk_stat *lso = &psli->lnk_stat_offsets;
 	LPFC_MBOXQ_t *pmboxq;
 	MAILBOX_t *pmb;
 	int rc = 0;
 
-	if (phba->fc_flag & FC_BLOCK_MGMT_IO)
+	if (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO)
 		return;
 
 	pmboxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
@@ -1747,17 +2536,16 @@ lpfc_reset_stats(struct Scsi_Host *shost)
 	pmb->mbxOwner = OWN_HOST;
 	pmb->un.varWords[0] = 0x1; /* reset request */
 	pmboxq->context1 = NULL;
+	pmboxq->vport = vport;
 
-	if ((phba->fc_flag & FC_OFFLINE_MODE) ||
+	if ((vport->fc_flag & FC_OFFLINE_MODE) ||
 		(!(psli->sli_flag & LPFC_SLI2_ACTIVE)))
 		rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
 	else
 		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
 
 	if (rc != MBX_SUCCESS) {
-		if (rc == MBX_TIMEOUT)
-			pmboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-		else
+		if (rc != MBX_TIMEOUT)
 			mempool_free(pmboxq, phba->mbox_mem_pool);
 		return;
 	}
@@ -1766,17 +2554,16 @@ lpfc_reset_stats(struct Scsi_Host *shost)
 	pmb->mbxCommand = MBX_READ_LNK_STAT;
 	pmb->mbxOwner = OWN_HOST;
 	pmboxq->context1 = NULL;
+	pmboxq->vport = vport;
 
-	if ((phba->fc_flag & FC_OFFLINE_MODE) ||
+	if ((vport->fc_flag & FC_OFFLINE_MODE) ||
 	    (!(psli->sli_flag & LPFC_SLI2_ACTIVE)))
 		rc = lpfc_sli_issue_mbox(phba, pmboxq, MBX_POLL);
 	else
 		rc = lpfc_sli_issue_mbox_wait(phba, pmboxq, phba->fc_ratov * 2);
 
 	if (rc != MBX_SUCCESS) {
-		if (rc == MBX_TIMEOUT)
-			pmboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-		else
+		if (rc != MBX_TIMEOUT)
 			mempool_free( pmboxq, phba->mbox_mem_pool);
 		return;
 	}
@@ -1801,67 +2588,51 @@ lpfc_reset_stats(struct Scsi_Host *shost)
  * The LPFC driver treats linkdown handling as target loss events so there
  * are no sysfs handlers for link_down_tmo.
  */
-static void
-lpfc_get_starget_port_id(struct scsi_target *starget)
+
+static struct lpfc_nodelist *
+lpfc_get_node_by_target(struct scsi_target *starget)
 {
-	struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
-	struct lpfc_hba *phba = (struct lpfc_hba *) shost->hostdata;
-	uint32_t did = -1;
-	struct lpfc_nodelist *ndlp = NULL;
+	struct Scsi_Host  *shost = dev_to_shost(starget->dev.parent);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_nodelist *ndlp;
 
 	spin_lock_irq(shost->host_lock);
-	/* Search the mapped list for this target ID */
-	list_for_each_entry(ndlp, &phba->fc_nlpmap_list, nlp_listp) {
-		if (starget->id == ndlp->nlp_sid) {
-			did = ndlp->nlp_DID;
-			break;
+	/* Search for this, mapped, target ID */
+	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+		if (ndlp->nlp_state == NLP_STE_MAPPED_NODE &&
+		    starget->id == ndlp->nlp_sid) {
+			spin_unlock_irq(shost->host_lock);
+			return ndlp;
 		}
 	}
 	spin_unlock_irq(shost->host_lock);
+	return NULL;
+}
+
+static void
+lpfc_get_starget_port_id(struct scsi_target *starget)
+{
+	struct lpfc_nodelist *ndlp = lpfc_get_node_by_target(starget);
 
-	fc_starget_port_id(starget) = did;
+	fc_starget_port_id(starget) = ndlp ? ndlp->nlp_DID : -1;
 }
 
 static void
 lpfc_get_starget_node_name(struct scsi_target *starget)
 {
-	struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
-	struct lpfc_hba *phba = (struct lpfc_hba *) shost->hostdata;
-	u64 node_name = 0;
-	struct lpfc_nodelist *ndlp = NULL;
-
-	spin_lock_irq(shost->host_lock);
-	/* Search the mapped list for this target ID */
-	list_for_each_entry(ndlp, &phba->fc_nlpmap_list, nlp_listp) {
-		if (starget->id == ndlp->nlp_sid) {
-			node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn);
-			break;
-		}
-	}
-	spin_unlock_irq(shost->host_lock);
+	struct lpfc_nodelist *ndlp = lpfc_get_node_by_target(starget);
 
-	fc_starget_node_name(starget) = node_name;
+	fc_starget_node_name(starget) =
+		ndlp ? wwn_to_u64(ndlp->nlp_nodename.u.wwn) : 0;
 }
 
 static void
 lpfc_get_starget_port_name(struct scsi_target *starget)
 {
-	struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
-	struct lpfc_hba *phba = (struct lpfc_hba *) shost->hostdata;
-	u64 port_name = 0;
-	struct lpfc_nodelist *ndlp = NULL;
-
-	spin_lock_irq(shost->host_lock);
-	/* Search the mapped list for this target ID */
-	list_for_each_entry(ndlp, &phba->fc_nlpmap_list, nlp_listp) {
-		if (starget->id == ndlp->nlp_sid) {
-			port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn);
-			break;
-		}
-	}
-	spin_unlock_irq(shost->host_lock);
+	struct lpfc_nodelist *ndlp = lpfc_get_node_by_target(starget);
 
-	fc_starget_port_name(starget) = port_name;
+	fc_starget_port_name(starget) =
+		ndlp ? wwn_to_u64(ndlp->nlp_portname.u.wwn) : 0;
 }
 
 static void
@@ -1917,9 +2688,6 @@ struct fc_function_template lpfc_transport_functions = {
 	.get_host_fabric_name = lpfc_get_host_fabric_name,
 	.show_host_fabric_name = 1,
 
-	.get_host_symbolic_name = lpfc_get_host_symbolic_name,
-	.show_host_symbolic_name = 1,
-
 	/*
 	 * The LPFC driver treats linkdown handling as target loss events
 	 * so there are no sysfs handlers for link_down_tmo.
@@ -1947,41 +2715,89 @@ struct fc_function_template lpfc_transport_functions = {
 	.issue_fc_host_lip = lpfc_issue_lip,
 	.dev_loss_tmo_callbk = lpfc_dev_loss_tmo_callbk,
 	.terminate_rport_io = lpfc_terminate_rport_io,
+
+};
+
+struct fc_function_template lpfc_vport_transport_functions = {
+	/* fixed attributes the driver supports */
+	.show_host_node_name = 1,
+	.show_host_port_name = 1,
+	.show_host_supported_classes = 1,
+	.show_host_supported_fc4s = 1,
+	.show_host_supported_speeds = 1,
+	.show_host_maxframe_size = 1,
+
+	/* dynamic attributes the driver supports */
+	.get_host_port_id = lpfc_get_host_port_id,
+	.show_host_port_id = 1,
+
+	.get_host_port_type = lpfc_get_host_port_type,
+	.show_host_port_type = 1,
+
+	.get_host_port_state = lpfc_get_host_port_state,
+	.show_host_port_state = 1,
+
+	/* active_fc4s is shown but doesn't change (thus no get function) */
+	.show_host_active_fc4s = 1,
+
+	.get_host_speed = lpfc_get_host_speed,
+	.show_host_speed = 1,
+
+	.get_host_fabric_name = lpfc_get_host_fabric_name,
+	.show_host_fabric_name = 1,
+
+	/*
+	 * The LPFC driver treats linkdown handling as target loss events
+	 * so there are no sysfs handlers for link_down_tmo.
+	 */
+
+	.get_fc_host_stats = lpfc_get_stats,
+	.reset_fc_host_stats = lpfc_reset_stats,
+
+	.dd_fcrport_size = sizeof(struct lpfc_rport_data),
+	.show_rport_maxframe_size = 1,
+	.show_rport_supported_classes = 1,
+
+	.set_rport_dev_loss_tmo = lpfc_set_rport_loss_tmo,
+	.show_rport_dev_loss_tmo = 1,
+
+	.get_starget_port_id  = lpfc_get_starget_port_id,
+	.show_starget_port_id = 1,
+
+	.get_starget_node_name = lpfc_get_starget_node_name,
+	.show_starget_node_name = 1,
+
+	.get_starget_port_name = lpfc_get_starget_port_name,
+	.show_starget_port_name = 1,
+
+	.dev_loss_tmo_callbk = lpfc_dev_loss_tmo_callbk,
+	.terminate_rport_io = lpfc_terminate_rport_io,
+
 };
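
For context, templates like the two defined above are normally handed to the FC transport class once at module load. The sketch below shows the usual fc_attach_transport() call sequence with error handling trimmed; it is an illustration of the registration pattern, not a copy of the driver's lpfc_init.c code:

/* Illustration only: register both templates and keep the returned
 * scsi_transport_template pointers for later use in the host template. */
#include <linux/init.h>
#include <linux/errno.h>
#include <scsi/scsi_transport_fc.h>

extern struct fc_function_template lpfc_transport_functions;
extern struct fc_function_template lpfc_vport_transport_functions;

static struct scsi_transport_template *lpfc_transport_template;
static struct scsi_transport_template *lpfc_vport_transport_template;

static int __init lpfc_transport_attach_example(void)
{
	lpfc_transport_template =
		fc_attach_transport(&lpfc_transport_functions);
	lpfc_vport_transport_template =
		fc_attach_transport(&lpfc_vport_transport_functions);
	if (!lpfc_transport_template || !lpfc_vport_transport_template)
		return -ENOMEM;
	return 0;
}
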
 
 void
 lpfc_get_cfgparam(struct lpfc_hba *phba)
 {
-	lpfc_log_verbose_init(phba, lpfc_log_verbose);
 	lpfc_cr_delay_init(phba, lpfc_cr_delay);
 	lpfc_cr_count_init(phba, lpfc_cr_count);
 	lpfc_multi_ring_support_init(phba, lpfc_multi_ring_support);
 	lpfc_multi_ring_rctl_init(phba, lpfc_multi_ring_rctl);
 	lpfc_multi_ring_type_init(phba, lpfc_multi_ring_type);
-	lpfc_lun_queue_depth_init(phba, lpfc_lun_queue_depth);
-	lpfc_fcp_class_init(phba, lpfc_fcp_class);
-	lpfc_use_adisc_init(phba, lpfc_use_adisc);
 	lpfc_ack0_init(phba, lpfc_ack0);
 	lpfc_topology_init(phba, lpfc_topology);
-	lpfc_scan_down_init(phba, lpfc_scan_down);
 	lpfc_link_speed_init(phba, lpfc_link_speed);
-	lpfc_fdmi_on_init(phba, lpfc_fdmi_on);
-	lpfc_discovery_threads_init(phba, lpfc_discovery_threads);
-	lpfc_max_luns_init(phba, lpfc_max_luns);
 	lpfc_poll_tmo_init(phba, lpfc_poll_tmo);
+	lpfc_enable_npiv_init(phba, lpfc_enable_npiv);
 	lpfc_use_msi_init(phba, lpfc_use_msi);
-	lpfc_devloss_tmo_init(phba, lpfc_devloss_tmo);
-	lpfc_nodev_tmo_init(phba, lpfc_nodev_tmo);
+	lpfc_dev_loss_initiator_init(phba, lpfc_dev_loss_initiator);
 	phba->cfg_poll = lpfc_poll;
 	phba->cfg_soft_wwnn = 0L;
 	phba->cfg_soft_wwpn = 0L;
-
 	/*
 	 * The total number of segments is the configuration value plus 2
 	 * since the IOCB need a command and response bde.
 	 */
 	phba->cfg_sg_seg_cnt = LPFC_SG_SEG_CNT + 2;
-
 	/*
 	 * Since the sg_tablesize is module parameter, the sg_dma_buf_size
 	 * used to create the sg_dma_buf_pool must be dynamically calculated
@@ -1989,9 +2805,26 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
 	phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd) +
 			sizeof(struct fcp_rsp) +
 			(phba->cfg_sg_seg_cnt * sizeof(struct ulp_bde64));
-
-
 	lpfc_hba_queue_depth_init(phba, lpfc_hba_queue_depth);
+	return;
+}
 
+void
+lpfc_get_vport_cfgparam(struct lpfc_vport *vport)
+{
+	lpfc_log_verbose_init(vport, lpfc_log_verbose);
+	lpfc_lun_queue_depth_init(vport, lpfc_lun_queue_depth);
+	lpfc_devloss_tmo_init(vport, lpfc_devloss_tmo);
+	lpfc_nodev_tmo_init(vport, lpfc_nodev_tmo);
+	lpfc_peer_port_login_init(vport, lpfc_peer_port_login);
+	lpfc_restrict_login_init(vport, lpfc_restrict_login);
+	lpfc_fcp_class_init(vport, lpfc_fcp_class);
+	lpfc_use_adisc_init(vport, lpfc_use_adisc);
+	lpfc_fdmi_on_init(vport, lpfc_fdmi_on);
+	lpfc_discovery_threads_init(vport, lpfc_discovery_threads);
+	lpfc_max_luns_init(vport, lpfc_max_luns);
+	lpfc_scan_down_init(vport, lpfc_scan_down);
+	lpfc_enable_da_id_init(vport, lpfc_enable_da_id);
+	lpfc_enable_auth_init(vport, lpfc_enable_auth);
 	return;
 }
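
The two comments above fully determine the scatter/gather buffer sizing. A small worked sketch of that arithmetic follows; the structure sizes and segment count are stand-in values chosen for illustration, not the driver's real sizeof() results:

/* Worked example of the sg_dma_buf_size sum, with assumed sizes. */
#include <stdio.h>

#define LPFC_SG_SEG_CNT	64	/* assumed configured segment count */
#define FCP_CMND_SZ	32	/* stand-in for sizeof(struct fcp_cmnd) */
#define FCP_RSP_SZ	96	/* stand-in for sizeof(struct fcp_rsp) */
#define ULP_BDE64_SZ	12	/* stand-in for sizeof(struct ulp_bde64) */

int main(void)
{
	/* +2: one BDE each for the FCP command and response */
	int sg_seg_cnt = LPFC_SG_SEG_CNT + 2;
	int sg_dma_buf_size = FCP_CMND_SZ + FCP_RSP_SZ +
			      sg_seg_cnt * ULP_BDE64_SZ;

	printf("sg_seg_cnt=%d sg_dma_buf_size=%d\n",
	       sg_seg_cnt, sg_dma_buf_size);
	return 0;
}
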
diff --git a/drivers/scsi/lpfc/lpfc_auth.c b/drivers/scsi/lpfc/lpfc_auth.c
new file mode 100644
index 0000000..6141c3c
--- /dev/null
+++ b/drivers/scsi/lpfc/lpfc_auth.c
@@ -0,0 +1,826 @@
+/*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for         *
+ * Fibre Channel Host Bus Adapters.                                *
+ * Copyright (C) 2006-2007 Emulex.  All rights reserved.           *
+ * EMULEX and SLI are trademarks of Emulex.                        *
+ * www.emulex.com                                                  *
+ *                                                                 *
+ * This program is free software; you can redistribute it and/or   *
+ * modify it under the terms of version 2 of the GNU General       *
+ * Public License as published by the Free Software Foundation.    *
+ * This program is distributed in the hope that it will be useful. *
+ * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+ * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+ * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+ * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+ * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+ * more details, a copy of which can be found in the file COPYING  *
+ * included with this package.                                     *
+ *******************************************************************/
+
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_tcq.h>
+#include <scsi/scsi_transport_fc.h>
+
+#include "lpfc_hw.h"
+#include "lpfc_sli.h"
+#include "lpfc_disc.h"
+#include "lpfc.h"
+#include "lpfc_crtn.h"
+#include "lpfc_logmsg.h"
+#include "lpfc_auth_access.h"
+#include "lpfc_auth.h"
+
+void
+lpfc_start_authentication(struct lpfc_vport *vport,
+		       struct lpfc_nodelist *ndlp)
+{
+	uint32_t nego_payload_len;
+	uint8_t *nego_payload;
+
+	nego_payload = kmalloc(MAX_AUTH_REQ_SIZE, GFP_KERNEL);
+	if (!nego_payload)
+		return;
+	vport->auth.trans_id++;
+	vport->auth.auth_msg_state = LPFC_AUTH_NEGOTIATE;
+	nego_payload_len = lpfc_build_auth_neg(vport, nego_payload);
+	lpfc_issue_els_auth(vport, ndlp, AUTH_NEGOTIATE,
+			    nego_payload, nego_payload_len);
+	kfree(nego_payload);
+}
+
+void
+lpfc_dhchap_make_challenge(struct Scsi_Host *shost, int status,
+			void *rsp, uint32_t rsp_len)
+{
+	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+	struct lpfc_nodelist *ndlp;
+	uint32_t chal_payload_len;
+	uint8_t *chal_payload;
+	struct fc_auth_rsp *auth_rsp = rsp;
+
+	ndlp = lpfc_findnode_did(vport, Fabric_DID);
+
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_SECURITY,
+			 "1003 Send dhchap challenge local_wwpn "
+			 "%llX remote_wwpn %llX \n",
+			 (unsigned long long)auth_rsp->local_wwpn,
+			 (unsigned long long)auth_rsp->remote_wwpn);
+
+	chal_payload = kmalloc(MAX_AUTH_REQ_SIZE, GFP_KERNEL);
+	if (!chal_payload) {
+		kfree(rsp);
+		return;
+	}
+	vport->auth.auth_msg_state = LPFC_DHCHAP_CHALLENGE;
+	chal_payload_len = lpfc_build_dhchap_challenge(vport,
+				chal_payload, rsp);
+	lpfc_issue_els_auth(vport, ndlp, DHCHAP_CHALLENGE,
+			    chal_payload, chal_payload_len);
+	kfree(chal_payload);
+	kfree(rsp);
+}
+
+
+void
+lpfc_dhchap_make_response(struct Scsi_Host *shost, int status,
+			void *rsp, uint32_t rsp_len)
+{
+	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+	struct lpfc_nodelist *ndlp;
+	uint32_t reply_payload_len;
+	uint8_t *reply_payload;
+	struct fc_auth_rsp *auth_rsp = rsp;
+
+	ndlp = lpfc_findnode_did(vport, Fabric_DID);
+
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_SECURITY,
+			 "1004 Send dhchap reply local_wwpn "
+			 "%llX remote_wwpn %llX \n",
+			 (unsigned long long)auth_rsp->local_wwpn,
+			 (unsigned long long)auth_rsp->remote_wwpn);
+
+	reply_payload = kmalloc(MAX_AUTH_REQ_SIZE, GFP_KERNEL);
+	if (!reply_payload) {
+		kfree(rsp);
+		return;
+	}
+
+	vport->auth.auth_msg_state = LPFC_DHCHAP_REPLY;
+	reply_payload_len = lpfc_build_dhchap_reply(vport, reply_payload, rsp);
+	lpfc_issue_els_auth(vport, ndlp, DHCHAP_REPLY,
+			    reply_payload, reply_payload_len);
+	kfree(reply_payload);
+	kfree(rsp);
+
+}
+
+
+void
+lpfc_dhchap_authenticate(struct Scsi_Host *shost,
+			int status, void *rsp,
+			uint32_t rsp_len)
+{
+	struct fc_auth_rsp *auth_rsp = (struct fc_auth_rsp *)rsp;
+	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+	struct lpfc_nodelist *ndlp;
+
+	ndlp = lpfc_findnode_did(vport, Fabric_DID);
+	if (!ndlp) {
+		kfree(rsp);
+		return;
+	}
+	if (status != 0) {
+		lpfc_issue_els_auth_reject(vport, ndlp,
+			AUTH_ERR, AUTHENTICATION_FAILED);
+		kfree(rsp);
+		return;
+	}
+
+	if (auth_rsp->u.dhchap_success.authenticated) {
+		uint32_t suc_payload_len;
+		uint8_t *suc_payload;
+
+		suc_payload = kmalloc(MAX_AUTH_REQ_SIZE, GFP_KERNEL);
+		if (!suc_payload) {
+			lpfc_issue_els_auth_reject(vport, ndlp,
+				AUTH_ERR, AUTHENTICATION_FAILED);
+			kfree(rsp);
+			return;
+		}
+		suc_payload_len = lpfc_build_dhchap_success(vport,
+				suc_payload, rsp);
+		if (suc_payload_len == sizeof(uint32_t)) {
+			/* Authentication is complete after sending
+			 * this SUCCESS */
+			vport->auth.auth_msg_state = LPFC_DHCHAP_SUCCESS;
+		} else {
+			/* Need to wait for SUCCESS from Auth Initiator */
+			vport->auth.auth_msg_state = LPFC_DHCHAP_SUCCESS_REPLY;
+		}
+		lpfc_issue_els_auth(vport, ndlp, DHCHAP_SUCCESS,
+				    suc_payload, suc_payload_len);
+		kfree(suc_payload);
+	} else {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1005 AUTHENTICATION_FAILURE Nport:x%x\n",
+				 ndlp->nlp_DID);
+		lpfc_issue_els_auth_reject(vport, ndlp,
+					   AUTH_ERR, AUTHENTICATION_FAILED);
+		if (vport->auth.auth_state == LPFC_AUTH_SUCCESS) {
+			lpfc_port_auth_failed(ndlp);
+		}
+	}
+
+	kfree(rsp);
+}
+
+int
+lpfc_unpack_auth_negotiate(struct lpfc_vport *vport, uint8_t *message,
+			   uint8_t *reason, uint8_t *explanation)
+{
+	uint32_t prot_len;
+	uint32_t param_len;
+	int i, j = 0;
+
+	/* Following is the format of the message.
+	 * uint16_t  nameTag;
+	 * uint16_t  nameLength;
+	 * uint8_t   name[8];
+	 * uint32_t  NumberOfAuthProtocols
+	 * uint32_t  AuthProtParameter#1Len
+	 * uint32_t  AuthProtID#1  (DH-CHAP = 0x1)
+
+	 * uint16_t  DH-CHAPParameterTag (HashList = 0x1)
+	 * uint16_t  DH-CHAPParameterWordCount (number of uint32_t entries)
+	 * uint8_t   DH-CHAPParameter[];
+
+	 * uint16_t  DH-CHAPParameterTag (DHgIDList = 0x2)
+	 * uint16_t  DH-CHAPParameterWordCount (number of uint32_t entries)
+	 * uint8_t   DH-CHAPParameter[];
+
+	 * uint32_t  hashIdentifier;
+	 * uint32_t  dhgroupIdentifier;
+	 * uint32_t  challengevalueLen;
+	 * uint8_t   challengeValue[];
+	 * uint32_t  dhvalueLen;
+	 * uint8_t   dhvalue[];
+	 */
+
+	/* Name Tag */
+	if (be16_to_cpu(*(uint16_t *)message) != NAME_TAG) {
+		*reason = AUTH_ERR;
+		*explanation = BAD_PAYLOAD;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1006 Bad Name tag in auth message 0x%x\n",
+				 be16_to_cpu(*(uint16_t *)message));
+		return 1;
+	}
+	message += sizeof(uint16_t);
+
+	/* Name Length */
+	if (be16_to_cpu(*(uint16_t *)message) != NAME_LEN) {
+		*reason = AUTH_ERR;
+		*explanation = BAD_PAYLOAD;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1007 Bad Name length in auth message 0x%x\n",
+				 be16_to_cpu(*(uint16_t *)message));
+		return 1;
+	}
+	message += sizeof(uint16_t);
+
+	/* Remote Port Name */
+	message += NAME_LEN;
+
+	 /* Number of Auth Protocols */
+	if (be32_to_cpu(*(uint32_t *)message) != 1) {
+		*reason = AUTH_ERR;
+		*explanation = BAD_PAYLOAD;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1008 Bad Number of Protocols 0x%x\n",
+				 be32_to_cpu(*(uint32_t *)message));
+		return 1;
+	}
+	message += sizeof(uint32_t);
+
+	/* Protocol Parameter Length */
+	prot_len = be32_to_cpu(*(uint32_t *)message);
+	message += sizeof(uint32_t);
+
+	/* Protocol Parameter type */
+	if (be32_to_cpu(*(uint32_t *)message) != FC_DHCHAP) {
+		*reason = AUTH_ERR;
+		*explanation = BAD_PAYLOAD;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1009 Bad param type 0x%x\n",
+				 be32_to_cpu(*(uint32_t *)message));
+		return 1;
+	}
+	message += sizeof(uint32_t);
+
+	/* Parameter #1 Tag */
+	if (be16_to_cpu(*(uint16_t *)message) != HASH_LIST_TAG) {
+		*reason = AUTH_ERR;
+		*explanation = BAD_PAYLOAD;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1010 Bad Tag 1 0x%x\n",
+				 be16_to_cpu(*(uint16_t *)message));
+		return 1;
+	}
+	message += sizeof(uint16_t);
+
+	/* Parameter #1 Length */
+	param_len =  be16_to_cpu(*(uint16_t *)message);
+	message += sizeof(uint16_t);
+
+	/* Choose a hash function */
+	for (i = 0; i < vport->auth.hash_len; i++) {
+		for (j = 0; j < param_len; j++) {
+			if (vport->auth.hash_priority[i] ==
+			    be32_to_cpu(((uint32_t *)message)[j]))
+				break;
+		}
+		if (j != param_len)
+			break;
+	}
+	if (i == vport->auth.hash_len && j == param_len) {
+		*reason = AUTH_ERR;
+		*explanation = BAD_PAYLOAD;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1011 Auth_neg no hash function chosen.\n");
+		return 1;
+	}
+	vport->auth.hash_id = vport->auth.hash_priority[i];
+	message += sizeof(uint32_t) * param_len;
+
+	/* Parameter #2 Tag*/
+	if (be16_to_cpu(*(uint16_t *)message) != DHGID_LIST_TAG) {
+		*reason = AUTH_ERR;
+		*explanation = BAD_PAYLOAD;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1012 Auth_negotiate Bad Tag 2 0x%x\n",
+				 be16_to_cpu(*(uint16_t *)message));
+		return 1;
+	}
+	message += sizeof(uint16_t);
+
+	/* Parameter #2 Length */
+	param_len =  be16_to_cpu(*(uint16_t *)message);
+	message += sizeof(uint16_t);
+
+	/* Choose a DH Group */
+	for (i = 0; i < vport->auth.dh_group_len; i++) {
+		for (j = 0; j < param_len; j++) {
+			if (vport->auth.dh_group_priority[i] ==
+			    be32_to_cpu(((uint32_t *)message)[j]))
+				break;
+		}
+		if (j != param_len)
+			break;
+	}
+	if (i == vport->auth.dh_group_len && j == param_len) {
+		*reason = AUTH_ERR;
+		*explanation = BAD_PAYLOAD;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1013 Auth_negotiate  no DH_group found. \n");
+		return 1;
+	}
+	vport->auth.group_id = vport->auth.dh_group_priority[i];
+	message += sizeof(uint32_t) * param_len;
+
+	return 0;
+}
+
+int
+lpfc_unpack_dhchap_challenge(struct lpfc_vport *vport, uint8_t *message,
+			     uint8_t *reason, uint8_t *explanation)
+{
+	int i;
+
+	/* Following is the format of the message.
+	 * uint16_t  nameTag;
+	 * uint16_t  nameLength;
+	 * uint8_t   name[8];
+	 * uint32_t  hashIdentifier;
+	 * uint32_t  dhgroupIdentifier;
+	 * uint32_t  challengevalueLen;
+	 * uint8_t   challengeValue[];
+	 * uint32_t  dhvalueLen;
+	 * uint8_t   dhvalue[];
+	 */
+
+	/* Name Tag */
+	if (be16_to_cpu(*(uint16_t *)message) != NAME_TAG) {
+		*reason = AUTH_ERR;
+		*explanation = BAD_PAYLOAD;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1014 dhchap challenge bad name tag 0x%x. \n",
+				 be16_to_cpu(*(uint16_t *)message));
+		return 1;
+	}
+	message += sizeof(uint16_t);
+
+	/* Name Length */
+	if (be16_to_cpu(*(uint16_t *)message) != NAME_LEN) {
+		*reason = AUTH_ERR;
+		*explanation = BAD_PAYLOAD;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1015 dhchap challenge bad name length "
+				 "0x%x.\n", be16_to_cpu(*(uint16_t *)message));
+		return 1;
+	}
+	message += sizeof(uint16_t);
+
+	/* Remote Port Name */
+	message += NAME_LEN;
+
+	/* Hash ID */
+	vport->auth.hash_id = be32_to_cpu(*(uint32_t *)message);  /* Hash id */
+	for (i = 0; i < vport->auth.hash_len; i++) {
+		if (vport->auth.hash_id == vport->auth.hash_priority[i])
+			break;
+	}
+	if (i == vport->auth.hash_len) {
+		*reason = LOGIC_ERR;
+		*explanation = BAD_ALGORITHM;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1016 dhchap challenge Hash ID not Supported "
+				 "0x%x. \n", vport->auth.hash_id);
+		return 1;
+	}
+	message += sizeof(uint32_t);
+
+	vport->auth.group_id =
+		be32_to_cpu(*(uint32_t *)message);  /* DH group id */
+	for (i = 0; i < vport->auth.dh_group_len; i++) {
+		if (vport->auth.group_id == vport->auth.dh_group_priority[i])
+			break;
+	}
+	if (i == vport->auth.dh_group_len) {
+		*reason = LOGIC_ERR;
+		*explanation = BAD_DHGROUP;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1017 dhchap challenge could not find DH "
+				 "Group. \n");
+		return 1;
+	}
+	message += sizeof(uint32_t);
+
+	vport->auth.challenge_len =
+		be32_to_cpu(*(uint32_t *)message);  /* Challenge Len */
+	message += sizeof(uint32_t);
+
+	/* copy challenge to vport */
+	if (vport->auth.challenge != NULL) {
+		kfree(vport->auth.challenge);
+	}
+	vport->auth.challenge = kmalloc(vport->auth.challenge_len, GFP_KERNEL);
+	if (!vport->auth.challenge) {
+		*reason = AUTH_ERR;
+		return 1;
+	}
+	memcpy(vport->auth.challenge, message, vport->auth.challenge_len);
+	message += vport->auth.challenge_len;
+
+	vport->auth.dh_pub_key_len =
+		be32_to_cpu(*(uint32_t *)message);  /* DH Value Len */
+	message += sizeof(uint32_t);
+
+	if (vport->auth.dh_pub_key_len != 0) {
+		if (vport->auth.group_id == DH_GROUP_NULL) {
+			*reason = LOGIC_ERR;
+			*explanation = BAD_DHGROUP;
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+					 "1018 dhchap challenge No Public key "
+					 "for non-NULL DH Group.\n");
+			return 1;
+		}
+
+		/* Copy to the vport to save for authentication */
+		if (vport->auth.dh_pub_key != NULL)
+			kfree(vport->auth.dh_pub_key);
+		vport->auth.dh_pub_key = kmalloc(vport->auth.dh_pub_key_len,
+				GFP_KERNEL);
+		if (!vport->auth.dh_pub_key) {
+			*reason = AUTH_ERR;
+			return 1;
+		}
+		memcpy(vport->auth.dh_pub_key, message,
+			vport->auth.dh_pub_key_len);
+	}
+	return 0;
+}
+
+int
+lpfc_unpack_dhchap_reply(struct lpfc_vport *vport, uint8_t *message,
+			 struct fc_auth_req *fc_req)
+{
+	uint32_t rsp_len;
+	uint32_t dh_len;
+	uint32_t challenge_len;
+
+	/* Following is the format of the message.
+	 * uint32_t	Response Value Length;
+	 * uint8_t	Response Value[];
+	 * uint32_t	DH Value Length;
+	 * uint8_t	DH Value[];
+	 * uint32_t	Challenge Value Length;
+	 * uint8_t	Challenge Value[];
+	 */
+
+	rsp_len = be32_to_cpu(*(uint32_t *)message);   /* Response Len */
+	message += sizeof(uint32_t);
+	memcpy(fc_req->u.dhchap_success.data + vport->auth.challenge_len,
+		message, rsp_len);
+	fc_req->u.dhchap_success.received_response_len = rsp_len;
+	message += rsp_len;
+
+	dh_len = be32_to_cpu(*(uint32_t *)message);   /* DH Len */
+	message += sizeof(uint32_t);
+	memcpy(fc_req->u.dhchap_success.data + vport->auth.challenge_len +
+		rsp_len, message, dh_len);
+	fc_req->u.dhchap_success.received_public_key_len = dh_len;
+	message += dh_len;
+
+	challenge_len = be32_to_cpu(*(uint32_t *)message);   /* Challenge Len */
+	message += sizeof(uint32_t);
+	memcpy(fc_req->u.dhchap_success.data + vport->auth.challenge_len
+		+ rsp_len + dh_len,
+		message, challenge_len);
+	fc_req->u.dhchap_success.received_challenge_len = challenge_len;
+	message += challenge_len;
+
+	return (rsp_len + dh_len + challenge_len);
+}
+
+int
+lpfc_unpack_dhchap_success(struct lpfc_vport *vport, uint8_t *message,
+			   struct fc_auth_req *fc_req)
+{
+	uint32_t rsp_len = 0;
+
+	/*
+	 * uint32_t  responseValueLen;
+	 * uint8_t  response[];
+	 */
+
+	rsp_len = be32_to_cpu(*(uint32_t *)message);   /* Response Len */
+	message += sizeof(uint32_t);
+	memcpy(fc_req->u.dhchap_success.data + vport->auth.challenge_len,
+	       message, rsp_len);
+	fc_req->u.dhchap_success.received_response_len = rsp_len;
+
+	memcpy(fc_req->u.dhchap_success.data +
+		vport->auth.challenge_len + rsp_len,
+		vport->auth.dh_pub_key, vport->auth.dh_pub_key_len);
+
+	fc_req->u.dhchap_success.received_public_key_len =
+		vport->auth.dh_pub_key_len;
+
+	fc_req->u.dhchap_success.received_challenge_len = 0;
+
+	return (vport->auth.challenge_len + rsp_len +
+		vport->auth.dh_pub_key_len);
+}
+
+int
+lpfc_build_auth_neg(struct lpfc_vport *vport, uint8_t *message)
+{
+	uint8_t *message_start = message;
+	uint8_t *params_start;
+	uint32_t *params_len;
+	uint32_t len;
+	int i;
+
+	/* Because some of the fields are not static in length
+	 * and number we will pack on the fly. This will be expanded
+	 * in the future to optionally offer DHCHAP or FCAP or both.
+	 * The packing is done in Big Endian byte order.
+	 *
+	 * uint16_t nameTag;
+	 * uint16_t nameLength;
+	 * uint8_t  name[8];
+	 * uint32_t available;		For now we will only offer one
+	 *				protocol (DHCHAP) for authentication.
+	 * uint32_t protocolParamsLenId#1;
+	 * uint32_t protocolId#1;	1 : DHCHAP. The protocol list is
+	 *					in order of preference.
+	 * uint16_t parameter#1Tag	1 : HashList
+	 * uint16_t parameter#1Len	2 : Count of how many parameter values
+	 *                                  follow in order of preference.
+	 * uint16_t parameter#1value#1	5 : MD5 Hash Function
+	 * uint16_t parameter#1value#2	6 : SHA-1 Hash Function
+	 * uint16_t parameter#2Tag		2 : DHgIDList
+	 * uint16_t parameter#2Len		1 : Only One is supported now
+	 * uint16_t parameter#2value#1	0 : NULL DH-CHAP Algorithm
+	 * uint16_t parameter#2value#2 ...
+	 * uint32_t protocolParamsLenId#2;
+	 * uint32_t protocolId#2;         2 = FCAP
+	 * uint16_t parameter#1Tag
+	 * uint16_t parameter#1Len
+	 * uint16_t parameter#1value#1
+	 * uint16_t parameter#1value#2 ...
+	 * uint16_t parameter#2Tag
+	 * uint16_t parameter#2Len
+	 * uint16_t parameter#2value#1
+	 * uint16_t parameter#2value#2 ...
+	 */
+
+
+	/* Name Tag */
+	*((uint16_t *)message) = cpu_to_be16(NAME_TAG);
+	message += sizeof(uint16_t);
+
+	/* Name Len */
+	*((uint16_t *)message) = cpu_to_be16(NAME_LEN);
+	message += sizeof(uint16_t);
+
+	memcpy(message, vport->fc_portname.u.wwn, sizeof(uint64_t));
+
+	message += sizeof(uint64_t);
+
+	/* Protocols Available */
+	*((uint32_t *)message) = cpu_to_be32(PROTS_NUM);
+	message += sizeof(uint32_t);
+
+	/* First Protocol Params Len */
+	params_len = (uint32_t *)message;
+	message += sizeof(uint32_t);
+
+	/* Start of first Param */
+	params_start = message;
+
+	 /* Protocol Id */
+	*((uint32_t *)message) = cpu_to_be32(FC_DHCHAP);
+	message += sizeof(uint32_t);
+
+	/* Hash List Tag */
+	*((uint16_t *)message) = cpu_to_be16(HASH_LIST_TAG);
+	message += sizeof(uint16_t);
+
+	/* Hash Value Len */
+	*((uint16_t *)message) = cpu_to_be16(vport->auth.hash_len);
+	message += sizeof(uint16_t);
+
+	/* Hash Value each 4 byte words */
+	for (i = 0; i < vport->auth.hash_len; i++) {
+		*((uint32_t *)message) =
+			cpu_to_be32(vport->auth.hash_priority[i]);
+		message += sizeof(uint32_t);
+	}
+
+	/* DHgIDList Tag */
+	*((uint16_t *)message) = cpu_to_be16(DHGID_LIST_TAG);
+	message += sizeof(uint16_t);
+
+	/* DHgIDListValue Len */
+	*((uint16_t *)message) = cpu_to_be16(vport->auth.dh_group_len);
+
+	message += sizeof(uint16_t);
+
+	/* DHgIDList each 4 byte words */
+
+	for (i = 0; i < vport->auth.dh_group_len; i++) {
+		*((uint32_t *)message) =
+			cpu_to_be32(vport->auth.dh_group_priority[i]);
+		message += sizeof(uint32_t);
+	}
+
+	*params_len = cpu_to_be32(message - params_start);
+
+	len = (uint32_t)(message - message_start);
+
+	return len;
+}
+
+int
+lpfc_build_dhchap_challenge(struct lpfc_vport *vport, uint8_t *message,
+			    struct fc_auth_rsp *fc_rsp)
+{
+	uint8_t *message_start = message;
+
+	/* Because some of the fields are not static in length and number
+	 * we will pack on the fly. The packing is done in Big Endian byte
+	 * order.
+	 *
+	 * uint16_t  nameTag;
+	 * uint16_t  nameLength;
+	 * uint8_t   name[8];
+	 * uint32_t  Hash_Identifier;
+	 * uint32_t  DH_Group_Identifier;
+	 * uint32_t  Challenge_Value_Length;
+	 * uint8_t   Challenge_Value[];
+	 * uint32_t  DH_Value_Length;
+	 * uint8_t	  DH_Value[];
+	 */
+
+	/* Name Tag */
+	*((uint16_t *)message) = cpu_to_be16(NAME_TAG);
+	message += sizeof(uint16_t);
+
+	/* Name Len */
+	*((uint16_t *)message) = cpu_to_be16(NAME_LEN);
+	message += sizeof(uint16_t);
+
+	memcpy(message, vport->fc_portname.u.wwn, NAME_LEN);
+	message += NAME_LEN;
+
+	/* Hash Value each 4 byte words */
+	*((uint32_t *)message) = cpu_to_be32(vport->auth.hash_id);
+	message += sizeof(uint32_t);
+
+	/* DH group id each 4 byte words */
+	*((uint32_t *)message) = cpu_to_be32(vport->auth.group_id);
+	message += sizeof(uint32_t);
+
+	/* Challenge Length */
+	*((uint32_t *)message) = cpu_to_be32(fc_rsp->u.
+		dhchap_challenge.our_challenge_len);
+	message += sizeof(uint32_t);
+
+	/* copy challenge to vport to save */
+	if (vport->auth.challenge)
+		kfree(vport->auth.challenge);
+	vport->auth.challenge_len = fc_rsp->u.
+		dhchap_challenge.our_challenge_len;
+	vport->auth.challenge = kmalloc(vport->auth.challenge_len, GFP_KERNEL);
+
+	if (!vport->auth.challenge)
+		return 0;
+
+	memcpy(vport->auth.challenge, fc_rsp->u.dhchap_challenge.data,
+	       fc_rsp->u.dhchap_challenge.our_challenge_len);
+
+	/* Challenge */
+	memcpy(message, fc_rsp->u.dhchap_challenge.data,
+	       fc_rsp->u.dhchap_challenge.our_challenge_len);
+	message += fc_rsp->u.dhchap_challenge.our_challenge_len;
+
+	/* Public Key length */
+	*((uint32_t *)message) = cpu_to_be32(fc_rsp->u.
+		dhchap_challenge.our_public_key_len);
+	message += sizeof(uint32_t);
+
+	/* Public Key */
+	memcpy(message, fc_rsp->u.dhchap_challenge.data +
+	       fc_rsp->u.dhchap_challenge.our_challenge_len,
+	       fc_rsp->u.dhchap_challenge.our_public_key_len);
+	message += fc_rsp->u.dhchap_challenge.our_public_key_len;
+
+	return ((uint32_t)(message - message_start));
+
+}
+
+int
+lpfc_build_dhchap_reply(struct lpfc_vport *vport, uint8_t *message,
+				struct fc_auth_rsp *fc_rsp)
+
+{
+	uint8_t *message_start = message;
+
+	/*
+	 * Because some of the fields are not static in length and
+	 * number we will pack on the fly. The packing is done in
+	 * Big Endian byte order.
+	 *
+	 * uint32_t  ResponseLength;
+	 * uint8_t   ResponseValue[];
+	 * uint32_t  DHLength;
+	 * uint8_t   DHValue[];          Our Public key
+	 * uint32_t  ChallengeLength;    Used for bi-directional authentication
+	 * uint8_t   ChallengeValue[];
+	 *
+	 * The combined key ( g^x mod p )^y mod p is used as the last
+	 * hash of the password.
+	 *
+	 * g is the base 2 or 5.
+	 * y is our private key.
+	 * ( g^y mod p ) is our public key which we send.
+	 * ( g^x mod p ) is their public key which we received.
+	 */
+
+	*((uint32_t *)message) = cpu_to_be32(fc_rsp->u.dhchap_reply.
+		our_challenge_rsp_len);
+
+	message += sizeof(uint32_t);
+
+	memcpy(message, fc_rsp->u.dhchap_reply.data,
+		fc_rsp->u.dhchap_reply.our_challenge_rsp_len);
+
+	message += fc_rsp->u.dhchap_reply.our_challenge_rsp_len;
+
+	*((uint32_t *)message) = cpu_to_be32(fc_rsp->u.dhchap_reply.
+			our_public_key_len);
+
+	message += sizeof(uint32_t);
+
+	memcpy(message, fc_rsp->u.dhchap_reply.data +
+				fc_rsp->u.dhchap_reply.our_challenge_rsp_len,
+				fc_rsp->u.dhchap_reply.our_public_key_len);
+
+	message += fc_rsp->u.dhchap_reply.our_public_key_len;
+
+	if (vport->auth.bidirectional) {
+
+		/* copy to vport to save */
+		if (vport->auth.challenge)
+			kfree(vport->auth.challenge);
+		vport->auth.challenge_len = fc_rsp->u.dhchap_reply.
+			our_challenge_len;
+		vport->auth.challenge = kmalloc(vport->auth.challenge_len,
+			GFP_KERNEL);
+		if (!vport->auth.challenge)
+			return 0;
+
+		memcpy(vport->auth.challenge, fc_rsp->u.dhchap_reply.data +
+		       fc_rsp->u.dhchap_reply.our_challenge_rsp_len +
+		       fc_rsp->u.dhchap_reply.our_public_key_len,
+		       fc_rsp->u.dhchap_reply.our_challenge_len);
+
+		*((uint32_t *)message) = cpu_to_be32(fc_rsp->u.
+			dhchap_reply.our_challenge_len);
+		message += sizeof(uint32_t);
+
+		memcpy(message, fc_rsp->u.dhchap_reply.data +
+			fc_rsp->u.dhchap_reply.our_challenge_rsp_len +
+			fc_rsp->u.dhchap_reply.our_public_key_len,
+			fc_rsp->u.dhchap_reply.our_challenge_len);
+
+		message += fc_rsp->u.dhchap_reply.our_challenge_len;
+
+	} else {
+		/* No bidirectional authentication: Challenge Len is 0 */
+		*((uint32_t *)message) = 0;
+		message += sizeof(uint32_t); /* Challenge Value Not Present */
+	}
+
+	return ((uint32_t)(message - message_start));
+
+}
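
The Diffie-Hellman comment in lpfc_build_dhchap_reply() above can be checked with a toy computation. The numbers below are deliberately tiny and insecure, chosen only to show that both sides derive the same combined key:

/* Toy DH check: (g^x mod p)^y mod p == (g^y mod p)^x mod p. */
#include <stdio.h>
#include <stdint.h>

static uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod)
{
	uint64_t result = 1;

	base %= mod;
	while (exp) {
		if (exp & 1)
			result = (result * base) % mod;
		base = (base * base) % mod;
		exp >>= 1;
	}
	return result;
}

int main(void)
{
	uint64_t p = 23, g = 5;			/* toy modulus and base */
	uint64_t x = 6, y = 15;			/* their/our private keys */
	uint64_t their_pub = modexp(g, x, p);	/* g^x mod p, received */
	uint64_t our_pub = modexp(g, y, p);	/* g^y mod p, sent */

	printf("shared: %llu == %llu\n",
	       (unsigned long long)modexp(their_pub, y, p),
	       (unsigned long long)modexp(our_pub, x, p));
	return 0;
}
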
+
+int
+lpfc_build_dhchap_success(struct lpfc_vport *vport, uint8_t *message,
+			  struct fc_auth_rsp *fc_rsp)
+{
+	uint8_t *message_start = message;
+
+	/*
+	 * Because some of the fields are not static in length and number
+	 * we will pack on the fly. The packing is done in Big Endian byte
+	 * order.
+	 */
+
+	*((uint32_t *)message) = cpu_to_be32(fc_rsp->u.
+			dhchap_success.response_len);
+	message += sizeof(uint32_t);
+
+	memcpy(message, fc_rsp->u.dhchap_success.data,
+	       fc_rsp->u.dhchap_success.response_len);
+	message += fc_rsp->u.dhchap_success.response_len;
+
+	return ((uint32_t)(message - message_start));
+}
+
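
To make the AUTH_Negotiate layout built by lpfc_build_auth_neg() concrete, here is a stand-alone sketch that packs the same tag/length fields in big-endian order. The WWPN, hash list and DH group list are illustrative values only, not anything taken from a real port:

/* Sketch of the AUTH_Negotiate payload packed by the code above. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint8_t *put_be16(uint8_t *p, uint16_t v)
{
	p[0] = v >> 8; p[1] = v & 0xff;
	return p + 2;
}

static uint8_t *put_be32(uint8_t *p, uint32_t v)
{
	p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
	return p + 4;
}

int main(void)
{
	uint8_t buf[256], *p = buf, *params_len, *params_start;
	uint8_t wwpn[8] = { 0x10, 0x00, 0x00, 0x00, 0xc9, 0x11, 0x22, 0x33 };
	uint32_t hashes[2] = { 0x5, 0x6 };	/* MD5, SHA-1 */
	unsigned int i;

	p = put_be16(p, 0x01);			/* NAME_TAG */
	p = put_be16(p, 0x08);			/* NAME_LEN */
	memcpy(p, wwpn, 8); p += 8;		/* local port name */
	p = put_be32(p, 1);			/* one protocol offered */
	params_len = p; p += 4;			/* length patched in below */
	params_start = p;
	p = put_be32(p, 0x1);			/* protocol id: DH-CHAP */
	p = put_be16(p, 0x01);			/* HASH_LIST_TAG */
	p = put_be16(p, 2);			/* two hash ids follow */
	for (i = 0; i < 2; i++)
		p = put_be32(p, hashes[i]);
	p = put_be16(p, 0x02);			/* DHGID_LIST_TAG */
	p = put_be16(p, 1);			/* one DH group follows */
	p = put_be32(p, 0x0);			/* NULL DH group */
	put_be32(params_len, (uint32_t)(p - params_start));

	printf("payload length = %u bytes\n", (unsigned int)(p - buf));
	return 0;
}
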
diff --git a/drivers/scsi/lpfc/lpfc_auth.h b/drivers/scsi/lpfc/lpfc_auth.h
new file mode 100644
index 0000000..c59d37f
--- /dev/null
+++ b/drivers/scsi/lpfc/lpfc_auth.h
@@ -0,0 +1,92 @@
+/*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for         *
+ * Fibre Channel Host Bus Adapters.                                *
+ * Copyright (C) 2006-2007 Emulex.  All rights reserved.           *
+ * EMULEX and SLI are trademarks of Emulex.                        *
+ * www.emulex.com                                                  *
+ *                                                                 *
+ * This program is free software; you can redistribute it and/or   *
+ * modify it under the terms of version 2 of the GNU General       *
+ * Public License as published by the Free Software Foundation.    *
+ * This program is distributed in the hope that it will be useful. *
+ * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+ * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+ * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+ * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+ * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+ * more details, a copy of which can be found in the file COPYING  *
+ * included with this package.                                     *
+ *******************************************************************/
+
+#define N_DH_GROUP            4
+#define ELS_CMD_AUTH_BYTE     0x90
+
+#define AUTH_REJECT           0xA
+#define AUTH_NEGOTIATE        0xB
+#define AUTH_DONE             0xC
+
+#define DHCHAP_CHALLENGE 0x10
+#define DHCHAP_REPLY     0x11
+#define DHCHAP_SUCCESS   0x12
+
+#define FCAP_REQUEST	0x13
+#define FCAP_ACK        0x14
+#define FCAP_CONFIRM    0x15
+
+#define PROTS_NUM	0x01
+
+#define NAME_TAG	0x01
+#define NAME_LEN	0x08
+
+#define HASH_LIST_TAG   0x01
+
+#define DHGID_LIST_TAG  0x02
+
+#define HBA_SECURITY       0x20
+
+#define AUTH_ERR                 0x1
+#define LOGIC_ERR                0x2
+
+#define BAD_DHGROUP              0x2
+#define BAD_ALGORITHM            0x3
+#define AUTHENTICATION_FAILED    0x5
+#define BAD_PAYLOAD              0x6
+#define BAD_PROTOCOL             0x7
+#define RESTART		         0x8
+
+#define AUTH_VERSION	0x1
+
+#define MAX_AUTH_MESSAGE_SIZE 1024
+
+struct lpfc_auth_reject {
+	uint8_t reason;
+	uint8_t explanation;
+	uint8_t reserved[2];
+} __attribute__ ((packed));
+
+struct lpfc_auth_message {		/* Structure is in Big Endian format */
+	uint8_t command_code;
+	uint8_t flags;
+	uint8_t message_code;
+	uint8_t protocol_ver;
+	uint32_t message_len;
+	uint32_t trans_id;
+	uint8_t data[0];
+}  __attribute__ ((packed));
+
+int lpfc_build_auth_neg(struct lpfc_vport *vport, uint8_t *message);
+int lpfc_build_dhchap_challenge(struct lpfc_vport *vport, uint8_t *message,
+				struct fc_auth_rsp *fc_rsp);
+int lpfc_build_dhchap_reply(struct lpfc_vport *vport, uint8_t *message,
+			    struct fc_auth_rsp *fc_rsp);
+int lpfc_build_dhchap_success(struct lpfc_vport *vport, uint8_t *message,
+			      struct fc_auth_rsp *fc_rsp);
+
+int lpfc_unpack_auth_negotiate(struct lpfc_vport *vport, uint8_t *message,
+				 uint8_t *reason, uint8_t *explanation);
+int lpfc_unpack_dhchap_challenge(struct lpfc_vport *vport, uint8_t *message,
+				 uint8_t *reason, uint8_t *explanation);
+int lpfc_unpack_dhchap_reply(struct lpfc_vport *vport, uint8_t *message,
+			     struct fc_auth_req *fc_req);
+int lpfc_unpack_dhchap_success(struct lpfc_vport *vport, uint8_t *message,
+			       struct fc_auth_req *fc_req);
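
The lpfc_auth_message header declared above travels in an ELS AUTH frame with its multi-byte fields in big-endian order. A small userspace sketch of filling such a header follows; it uses a local copy of the structure, the length and transaction id are made-up values, and whether message_len covers the header itself is an assumption here rather than something the source states:

/* Illustration of populating the on-the-wire AUTH ELS header. */
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>	/* htonl() for the big-endian fields */

struct auth_msg_hdr {			/* local copy for the example */
	uint8_t  command_code;		/* ELS_CMD_AUTH_BYTE, 0x90 */
	uint8_t  flags;
	uint8_t  message_code;		/* e.g. AUTH_NEGOTIATE, 0x0B */
	uint8_t  protocol_ver;		/* AUTH_VERSION, 1 */
	uint32_t message_len;		/* payload length (assumed) */
	uint32_t trans_id;		/* echoed back in the reply */
} __attribute__ ((packed));

int main(void)
{
	struct auth_msg_hdr hdr = {
		.command_code = 0x90,
		.flags        = 0,
		.message_code = 0x0B,		/* AUTH_NEGOTIATE */
		.protocol_ver = 0x01,
		.message_len  = htonl(40),	/* made-up payload length */
		.trans_id     = htonl(7),	/* made-up transaction id */
	};

	printf("header size = %zu bytes\n", sizeof(hdr));
	return 0;
}
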
diff --git a/drivers/scsi/lpfc/lpfc_auth_access.c b/drivers/scsi/lpfc/lpfc_auth_access.c
new file mode 100644
index 0000000..26bdf27
--- /dev/null
+++ b/drivers/scsi/lpfc/lpfc_auth_access.c
@@ -0,0 +1,767 @@
+/*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for         *
+ * Fibre Channel Host Bus Adapters.                                *
+ * Copyright (C) 2006-2007 Emulex.  All rights reserved.           *
+ * EMULEX and SLI are trademarks of Emulex.                        *
+ * www.emulex.com                                                  *
+ *                                                                 *
+ * This program is free software; you can redistribute it and/or   *
+ * modify it under the terms of version 2 of the GNU General       *
+ * Public License as published by the Free Software Foundation.    *
+ * This program is distributed in the hope that it will be useful. *
+ * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+ * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+ * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+ * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+ * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+ * more details, a copy of which can be found in the file COPYING  *
+ * included with this package.                                     *
+ *******************************************************************/
+#include <linux/blkdev.h>
+#include <linux/pci.h>
+#include <linux/kthread.h>
+#include <linux/interrupt.h>
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/sched.h>	/* workqueue stuff, HZ */
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport.h>
+#include <scsi/scsi_transport_fc.h>
+#include <scsi/scsi_cmnd.h>
+#include <linux/time.h>
+#include <linux/jiffies.h>
+#include <linux/security.h>
+#include <net/sock.h>
+#include <net/netlink.h>
+
+#include <scsi/scsi.h>
+
+#include "lpfc_hw.h"
+#include "lpfc_sli.h"
+#include "lpfc_disc.h"
+#include "lpfc_scsi.h"
+#include "lpfc.h"
+#include "lpfc_logmsg.h"
+#include "lpfc_crtn.h"
+#include "lpfc_vport.h"
+#include "lpfc_auth_access.h"
+
+/* fc security */
+char security_work_q_name[KOBJ_NAME_LEN];
+struct workqueue_struct *security_work_q = NULL;
+struct sock *fc_nl_sock;
+struct list_head fc_security_user_list;
+int fc_service_state = FC_SC_SERVICESTATE_UNKNOWN;
+static int fc_service_pid;
+DEFINE_SPINLOCK(fc_security_user_lock);
+
+static inline struct lpfc_vport *
+lpfc_fc_find_vport(unsigned long host_no)
+{
+	struct lpfc_vport *vport;
+	struct Scsi_Host *shost;
+
+	list_for_each_entry(vport, &fc_security_user_list, sc_users) {
+		shost = lpfc_shost_from_vport(vport);
+		if (shost && (shost->host_no == host_no))
+			return vport;
+	}
+
+	return NULL;
+}
+
+
+/**
+ * lpfc_fc_sc_add_timer - arm the timeout timer for a security request
+ * @req: outstanding security request
+ * @timeout: timeout in jiffies
+ * @complete: handler invoked if no response arrives in time
+ **/
+
+void
+lpfc_fc_sc_add_timer(struct fc_security_request *req, int timeout,
+		    void (*complete)(struct fc_security_request *))
+{
+
+	init_timer(&req->timer);
+
+
+	req->timer.data = (unsigned long)req;
+	req->timer.expires = jiffies + timeout;
+	req->timer.function = (void (*)(unsigned long)) complete;
+
+	add_timer(&req->timer);
+}
+/**
+ * lpfc_fc_sc_req_times_out - handle a security request that got no response
+ * @req: the timed-out request
+ *
+ * Removes the request from the vport's response wait queue and completes
+ * the original caller with -ETIMEDOUT.
+ **/
+
+void
+lpfc_fc_sc_req_times_out(struct fc_security_request *req)
+{
+	unsigned long flags;
+	int found = 0;
+	struct fc_security_request *fc_sc_req;
+	struct lpfc_vport *vport;
+	struct Scsi_Host *shost;
+
+	if (!req)
+		return;
+
+	vport = req->vport;
+	shost = lpfc_shost_from_vport(vport);
+
+	spin_lock_irqsave(shost->host_lock, flags);
+
+	/* To avoid a completion race check to see if request is on the list */
+
+	list_for_each_entry(fc_sc_req, &vport->sc_response_wait_queue, rlist)
+		if (fc_sc_req == req) {
+			found = 1;
+			break;
+		}
+
+	if (!found) {
+		spin_unlock_irqrestore(shost->host_lock, flags);
+		return;
+	}
+
+	list_del(&fc_sc_req->rlist);
+
+	spin_unlock_irqrestore(shost->host_lock, flags);
+
+	lpfc_printf_vlog(vport, KERN_WARNING, LOG_SECURITY,
+			 "1019 Request tranid %d timed out\n",
+			 fc_sc_req->tran_id);
+
+	switch (fc_sc_req->req_type) {
+
+	case FC_NL_SC_GET_CONFIG_REQ:
+		lpfc_security_config(shost, -ETIMEDOUT,
+			fc_sc_req->data);
+		break;
+
+	case FC_NL_SC_DHCHAP_MAKE_CHALLENGE_REQ:
+		lpfc_dhchap_make_challenge(shost, -ETIMEDOUT,
+			fc_sc_req->data, 0);
+		break;
+
+	case FC_NL_SC_DHCHAP_MAKE_RESPONSE_REQ:
+		lpfc_dhchap_make_response(shost, -ETIMEDOUT,
+			fc_sc_req->data, 0);
+		break;
+
+	case FC_NL_SC_DHCHAP_AUTHENTICATE_REQ:
+		lpfc_dhchap_authenticate(shost, -ETIMEDOUT, fc_sc_req->data, 0);
+		break;
+	}
+
+	kfree(fc_sc_req);
+
+}
+
+
+static inline struct fc_security_request *
+lpfc_fc_find_sc_request(u32 tran_id, u32 type, struct lpfc_vport *vport)
+{
+	struct fc_security_request *fc_sc_req;
+
+	list_for_each_entry(fc_sc_req, &vport->sc_response_wait_queue, rlist)
+		if (fc_sc_req->tran_id == tran_id &&
+			fc_sc_req->req_type == type)
+			return fc_sc_req;
+	return NULL;
+}
+
+
+
+/**
+ * lpfc_fc_sc_request - send an authentication request to the security service
+ * @vport: virtual port issuing the request
+ * @msg_type: FC_NL_SC_* request type
+ * @auth_req: request payload sent over netlink
+ * @auth_req_len: length of @auth_req, including the header
+ * @auth_rsp: buffer that will receive the response data
+ * @auth_rsp_len: length of @auth_rsp, including the header
+ *
+ * Queues the request on the vport's response wait queue, unicasts it to the
+ * registered user-space service and arms a timeout timer.  Returns 0 on
+ * success or a negative errno.
+ **/
+
+int
+lpfc_fc_sc_request(struct lpfc_vport *vport,
+	      u32 msg_type,
+	      struct fc_auth_req *auth_req,
+	      u32 auth_req_len, /* includes length of struct fc_auth_req */
+	      struct fc_auth_rsp *auth_rsp,
+	      u32 auth_rsp_len)	/* includes length of struct fc_auth_rsp */
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct fc_security_request *fc_sc_req;
+	struct sk_buff *skb;
+	struct nlmsghdr	*nlh;
+	struct fc_nl_sc_message *fc_nl_sc_msg;
+	const char *fn;
+	unsigned long flags;
+	u32 len;
+	int err = 0;
+	u32 seq = ++vport->sc_tran_id;
+
+	if (fc_service_state != FC_SC_SERVICESTATE_ONLINE)
+		return -EINVAL;
+
+	if (vport->port_state == FC_PORTSTATE_DELETED)
+		return -EINVAL;
+
+	fc_sc_req = kzalloc(sizeof(struct fc_security_request), GFP_KERNEL);
+
+	if (!fc_sc_req)
+		return -ENOMEM;
+
+	fc_sc_req->req_type = msg_type;
+	fc_sc_req->data = auth_rsp;
+	fc_sc_req->data_len = auth_rsp_len;
+	fc_sc_req->vport = vport;
+
+	len = NLMSG_SPACE(sizeof(struct fc_nl_sc_message) + auth_req_len);
+
+	skb = alloc_skb(len, GFP_KERNEL);
+	if (!skb) {
+		err = -ENOBUFS;
+		fn = "alloc_skb";
+		goto send_fail;
+	}
+
+	nlh = nlmsg_put(skb, fc_service_pid, seq, FC_TRANSPORT_MSG,
+			len - sizeof(*nlh), 0);
+	if (!nlh) {
+		err = -ENOBUFS;
+		fn = "nlmsg_put";
+		goto send_fail;
+	}
+
+	fc_nl_sc_msg = NLMSG_DATA(nlh);
+	fc_nl_sc_msg->snlh.version = SCSI_NL_VERSION;
+	fc_nl_sc_msg->snlh.transport = SCSI_NL_TRANSPORT_FC;
+	fc_nl_sc_msg->snlh.magic = SCSI_NL_MAGIC;
+	fc_nl_sc_msg->snlh.msgtype = msg_type;
+	fc_nl_sc_msg->snlh.msglen = len;
+	fc_nl_sc_msg->data_len = auth_req_len;
+	if (auth_req_len)
+		memcpy(fc_nl_sc_msg->data, auth_req, auth_req_len);
+
+	fc_nl_sc_msg->host_no = shost->host_no;
+	fc_nl_sc_msg->tran_id = seq;
+	fc_sc_req->tran_id = seq;
+
+	spin_lock_irqsave(shost->host_lock, flags);
+	list_add_tail(&fc_sc_req->rlist, &vport->sc_response_wait_queue);
+	spin_unlock_irqrestore(shost->host_lock, flags);
+
+	err = nlmsg_unicast(fc_nl_sock, skb, fc_service_pid);
+	if (err < 0) {
+		fn = "nlmsg_unicast";
+		spin_lock_irqsave(shost->host_lock, flags);
+		list_del(&fc_sc_req->rlist);
+		spin_unlock_irqrestore(shost->host_lock, flags);
+		goto send_fail;
+	}
+	lpfc_fc_sc_add_timer(fc_sc_req, FC_SC_REQ_TIMEOUT,
+			     lpfc_fc_sc_req_times_out);
+
+	return 0;
+
+send_fail:
+
+	kfree(fc_sc_req);
+
+	lpfc_printf_vlog(vport, KERN_WARNING, LOG_SECURITY,
+			 "1020 Dropped Message type %d to PID %d : %s : err "
+			 "%d\n", msg_type, fc_service_pid, fn, err);
+	return err;
+
+}
+
+/**
+ * lpfc_fc_security_get_config - request the port's authentication config
+ * @shost: Scsi_Host whose security configuration is requested
+ * @auth_req: request payload sent to the security service
+ * @auth_req_len: length of @auth_req
+ * @auth_rsp: buffer for the returned configuration
+ * @auth_rsp_len: length of @auth_rsp
+ **/
+
+int
+lpfc_fc_security_get_config(struct Scsi_Host *shost,
+			struct fc_auth_req *auth_req,
+			u32 auth_req_len,
+			struct fc_auth_rsp *auth_rsp,
+			u32 auth_rsp_len)
+{
+
+	return lpfc_fc_sc_request((struct lpfc_vport *) shost->hostdata,
+				FC_NL_SC_GET_CONFIG_REQ, auth_req,
+				auth_req_len, auth_rsp, auth_rsp_len);
+
+}
+EXPORT_SYMBOL(lpfc_fc_security_get_config);
+
+/**
+ * lpfc_fc_security_dhchap_make_challenge - ask the service for a challenge
+ * @shost: Scsi_Host on whose behalf the DH-CHAP challenge is generated
+ * @auth_req: request payload sent to the security service
+ * @auth_req_len: length of @auth_req
+ * @auth_rsp: buffer for the generated challenge
+ * @auth_rsp_len: length of @auth_rsp
+ **/
+
+int
+lpfc_fc_security_dhchap_make_challenge(struct Scsi_Host *shost,
+				  struct fc_auth_req *auth_req,
+				  u32 auth_req_len,
+				  struct fc_auth_rsp *auth_rsp,
+				  u32 auth_rsp_len)
+{
+
+	return lpfc_fc_sc_request((struct lpfc_vport *) shost->hostdata,
+			FC_NL_SC_DHCHAP_MAKE_CHALLENGE_REQ,
+			auth_req, auth_req_len, auth_rsp, auth_rsp_len);
+
+}
+EXPORT_SYMBOL(lpfc_fc_security_dhchap_make_challenge);
+
+/**
+ * lpfc_fc_security_dhchap_make_response - ask the service for a reply
+ * @shost: Scsi_Host that received the DH-CHAP challenge
+ * @auth_req: request payload carrying the received challenge
+ * @auth_req_len: length of @auth_req
+ * @auth_rsp: buffer for the computed reply
+ * @auth_rsp_len: length of @auth_rsp
+ **/
+
+int
+lpfc_fc_security_dhchap_make_response(struct Scsi_Host *shost,
+			struct fc_auth_req *auth_req,
+			u32 auth_req_len,
+			struct fc_auth_rsp *auth_rsp,
+			u32 auth_rsp_len)
+{
+
+	return lpfc_fc_sc_request((struct lpfc_vport *) shost->hostdata,
+			FC_NL_SC_DHCHAP_MAKE_RESPONSE_REQ,
+			auth_req, auth_req_len, auth_rsp, auth_rsp_len);
+
+}
+EXPORT_SYMBOL(lpfc_fc_security_dhchap_make_response);
+
+
+/**
+ * lpfc_fc_security_dhchap_authenticate - verify a received DH-CHAP response
+ * @shost: Scsi_Host performing the verification
+ * @auth_req: request payload carrying the received response
+ * @auth_req_len: length of @auth_req
+ * @auth_rsp: buffer for the verification result
+ * @auth_rsp_len: length of @auth_rsp
+ **/
+
+int
+lpfc_fc_security_dhchap_authenticate(struct Scsi_Host *shost,
+			struct fc_auth_req *auth_req,
+			u32 auth_req_len,
+			struct fc_auth_rsp *auth_rsp,
+			u32 auth_rsp_len)
+{
+
+	return lpfc_fc_sc_request((struct lpfc_vport *) shost->hostdata,
+			FC_NL_SC_DHCHAP_AUTHENTICATE_REQ,
+			auth_req, auth_req_len, auth_rsp, auth_rsp_len);
+
+}
+EXPORT_SYMBOL(lpfc_fc_security_dhchap_authenticate);
+
+/**
+ * lpfc_fc_queue_security_work - Queue work to the fc_host security workqueue.
+ * @shost:	Pointer to Scsi_Host bound to fc_host.
+ * @work:	Work to queue for execution.
+ *
+ * Return value:
+ *	1 - work queued for execution
+ *	0 - work is already queued
+ *	-EINVAL - work queue doesn't exist
+ **/
+int
+lpfc_fc_queue_security_work(struct lpfc_vport *vport, struct work_struct *work)
+{
+	if (unlikely(!security_work_q)) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+			"1021 ERROR: attempted to queue security work, "
+			"when no workqueue created.\n");
+		dump_stack();
+
+		return -EINVAL;
+	}
+
+	return queue_work(security_work_q, work);
+
+}
+
+
+
+/**
+ * lpfc_fc_sc_schedule_notify_all - tell every vport the service state changed
+ * @message: FC_NL_SC_REG or FC_NL_SC_DEREG
+ *
+ * Queues the online or offline work for each vport registered as a
+ * security user.
+ **/
+
+void
+lpfc_fc_sc_schedule_notify_all(int message)
+{
+	struct lpfc_vport *vport;
+	unsigned long flags;
+
+	spin_lock_irqsave(&fc_security_user_lock, flags);
+
+	list_for_each_entry(vport, &fc_security_user_list, sc_users) {
+
+		switch (message) {
+
+		case FC_NL_SC_REG:
+			lpfc_fc_queue_security_work(vport,
+				&vport->sc_online_work);
+			break;
+
+		case FC_NL_SC_DEREG:
+			lpfc_fc_queue_security_work(vport,
+				&vport->sc_offline_work);
+			break;
+		}
+	}
+
+	spin_unlock_irqrestore(&fc_security_user_lock, flags);
+}
+
+
+
+/**
+ * lpfc_fc_sc_security_online - work handler run when the service registers
+ * @work: embedded sc_online_work of the vport
+ **/
+
+void
+lpfc_fc_sc_security_online(struct work_struct *work)
+{
+	struct lpfc_vport *vport = container_of(work, struct lpfc_vport,
+						sc_online_work);
+	lpfc_security_service_online(lpfc_shost_from_vport(vport));
+	return;
+}
+
+/**
+ * lpfc_fc_sc_security_offline - work handler run when the service deregisters
+ * @work: embedded sc_offline_work of the vport
+ **/
+void
+lpfc_fc_sc_security_offline(struct work_struct *work)
+{
+	struct lpfc_vport *vport = container_of(work, struct lpfc_vport,
+						sc_offline_work);
+	lpfc_security_service_offline(lpfc_shost_from_vport(vport));
+	return;
+}
+
+
+/**
+ * lpfc_fc_sc_process_msg - dispatch a completed security response
+ * @work: embedded work of the fc_sc_msg_work_q_wrapper
+ *
+ * Runs on the security workqueue and hands the response data to the
+ * DH-CHAP handler that matches the message type.
+ **/
+static void
+lpfc_fc_sc_process_msg(struct work_struct *work)
+{
+	struct fc_sc_msg_work_q_wrapper *wqw =
+		container_of(work, struct fc_sc_msg_work_q_wrapper, work);
+
+	switch (wqw->msgtype) {
+
+	case FC_NL_SC_GET_CONFIG_RSP:
+		lpfc_security_config(lpfc_shost_from_vport(wqw->fc_sc_req->
+				vport), wqw->status,
+				wqw->fc_sc_req->data);
+		break;
+
+	case FC_NL_SC_DHCHAP_MAKE_CHALLENGE_RSP:
+		lpfc_dhchap_make_challenge(lpfc_shost_from_vport(wqw->
+					fc_sc_req->vport), wqw->status,
+					wqw->fc_sc_req->data, wqw->data_len);
+		break;
+
+	case FC_NL_SC_DHCHAP_MAKE_RESPONSE_RSP:
+		lpfc_dhchap_make_response(lpfc_shost_from_vport(wqw->
+					fc_sc_req->vport), wqw->status,
+					wqw->fc_sc_req->data, wqw->data_len);
+		break;
+
+	case FC_NL_SC_DHCHAP_AUTHENTICATE_RSP:
+		lpfc_dhchap_authenticate(lpfc_shost_from_vport(wqw->fc_sc_req->
+					vport),
+					wqw->status,
+					wqw->fc_sc_req->data, wqw->data_len);
+		break;
+	}
+
+	kfree(wqw->fc_sc_req);
+	kfree(wqw);
+
+	return;
+}
+
+
+/**
+ * lpfc_fc_sc_schedule_msg - match a netlink response to its pending request
+ * @fc_nl_sc_msg: received security service message
+ * @rcvlen: length of the received message
+ *
+ * Looks up the outstanding request by host number, transaction id and
+ * request type, copies in the response data and schedules completion on
+ * the security workqueue.
+ **/
+
+int
+lpfc_fc_sc_schedule_msg(struct fc_nl_sc_message *fc_nl_sc_msg, int rcvlen)
+{
+	struct fc_security_request *fc_sc_req;
+	u32 req_type;
+	struct lpfc_vport *vport = NULL;
+	int err = 0;
+	struct fc_sc_msg_work_q_wrapper *wqw;
+	unsigned long flags;
+	struct Scsi_Host *shost;
+
+	spin_lock_irqsave(&fc_security_user_lock, flags);
+
+	vport = lpfc_fc_find_vport(fc_nl_sc_msg->host_no);
+
+	spin_unlock_irqrestore(&fc_security_user_lock, flags);
+	if (!vport) {
+		printk(KERN_WARNING
+			"%s: Host does not exist for msg type %x.\n",
+			__FUNCTION__, fc_nl_sc_msg->snlh.msgtype);
+		return -EBADR;
+	}
+	shost = lpfc_shost_from_vport(vport);
+
+	if (vport->port_state == FC_PORTSTATE_DELETED) {
+		printk(KERN_WARNING
+		"%s: Host being deleted.\n", __FUNCTION__);
+		return -EBADR;
+	}
+
+	wqw = kzalloc(sizeof(struct fc_sc_msg_work_q_wrapper), GFP_KERNEL);
+
+	if (!wqw)
+		return -ENOMEM;
+
+	switch (fc_nl_sc_msg->snlh.msgtype) {
+	case FC_NL_SC_GET_CONFIG_RSP:
+		req_type = FC_NL_SC_GET_CONFIG_REQ;
+		break;
+
+	case FC_NL_SC_DHCHAP_MAKE_CHALLENGE_RSP:
+		req_type = FC_NL_SC_DHCHAP_MAKE_CHALLENGE_REQ;
+		break;
+
+	case FC_NL_SC_DHCHAP_MAKE_RESPONSE_RSP:
+		req_type = FC_NL_SC_DHCHAP_MAKE_RESPONSE_REQ;
+		break;
+
+	case FC_NL_SC_DHCHAP_AUTHENTICATE_RSP:
+		req_type = FC_NL_SC_DHCHAP_AUTHENTICATE_REQ;
+		break;
+
+	default:
+		kfree(wqw);
+		return -EINVAL;
+	}
+
+	spin_lock_irqsave(shost->host_lock, flags);
+
+	fc_sc_req = lpfc_fc_find_sc_request(fc_nl_sc_msg->tran_id,
+				req_type, vport);
+
+	if (!fc_sc_req) {
+		spin_unlock_irqrestore(shost->host_lock, flags);
+		lpfc_printf_vlog(vport, KERN_WARNING, LOG_SECURITY,
+				 "1022 Security request does not exist.\n");
+		kfree(wqw);
+		return -EBADR;
+	}
+
+	list_del(&fc_sc_req->rlist);
+
+	spin_unlock_irqrestore(shost->host_lock, flags);
+
+	del_singleshot_timer_sync(&fc_sc_req->timer);
+
+	wqw->status = 0;
+	wqw->fc_sc_req = fc_sc_req;
+	wqw->data_len = rcvlen;
+	wqw->msgtype = fc_nl_sc_msg->snlh.msgtype;
+
+	if (!fc_sc_req->data ||
+		(fc_sc_req->data_len < fc_nl_sc_msg->data_len)) {
+		wqw->status = -ENOBUFS;
+		wqw->data_len = 0;
+		lpfc_printf_vlog(vport, KERN_WARNING, LOG_SECURITY,
+				 "1023 Warning - data may have been truncated. "
+				 "data:%p reqdl:%x mesdl:%x\n",
+				 fc_sc_req->data,
+				 fc_sc_req->data_len, fc_nl_sc_msg->data_len);
+	} else {
+		memcpy(fc_sc_req->data, fc_nl_sc_msg->data,
+			fc_nl_sc_msg->data_len);
+	}
+
+	INIT_WORK(&wqw->work, (void(*)(void *))lpfc_fc_sc_process_msg,
+		  &wqw->work);
+	lpfc_fc_queue_security_work(vport, &wqw->work);
+
+	return err;
+}
+
+static int
+lpfc_fc_handle_nl_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, int rcvlen)
+{
+	struct scsi_nl_hdr *snlh = NLMSG_DATA(nlh);
+	int err = 0;
+	int pid;
+
+	pid = nlh->nlmsg_pid;
+
+	switch (snlh->msgtype) {
+
+	case FC_NL_SC_REG:
+
+		fc_service_pid = nlh->nlmsg_pid;
+		fc_service_state = FC_SC_SERVICESTATE_ONLINE;
+		if (nlh->nlmsg_flags & NLM_F_ACK)
+			netlink_ack(skb, nlh, err);
+		skb_pull(skb, rcvlen);
+		kfree_skb(skb);
+		lpfc_fc_sc_schedule_notify_all(FC_NL_SC_REG);
+		break;
+
+	case FC_NL_SC_DEREG:
+
+		fc_service_pid = nlh->nlmsg_pid;
+		fc_service_state = FC_SC_SERVICESTATE_OFFLINE;
+		if (nlh->nlmsg_flags & NLM_F_ACK)
+			netlink_ack(skb, nlh, err);
+		skb_pull(skb, rcvlen);
+		kfree_skb(skb);
+		lpfc_fc_sc_schedule_notify_all(FC_NL_SC_DEREG);
+		break;
+
+	case FC_NL_SC_GET_CONFIG_RSP:
+	case FC_NL_SC_DHCHAP_MAKE_CHALLENGE_RSP:
+	case FC_NL_SC_DHCHAP_MAKE_RESPONSE_RSP:
+	case FC_NL_SC_DHCHAP_AUTHENTICATE_RSP:
+
+		err = lpfc_fc_sc_schedule_msg((struct fc_nl_sc_message *)snlh,
+				rcvlen);
+
+		if ((nlh->nlmsg_flags & NLM_F_ACK) || err)
+			netlink_ack(skb, nlh, err);
+
+		skb_pull(skb, rcvlen);
+		kfree_skb(skb);
+
+		break;
+
+	default:
+		printk(KERN_WARNING "%s: unknown msg type 0x%x len %d\n",
+			 __FUNCTION__, snlh->msgtype, rcvlen);
+		netlink_ack(skb, nlh, -EBADR);
+		skb_pull(skb, rcvlen);
+		kfree_skb(skb);
+		break;
+	}
+
+	return err;
+}
+
+void
+lpfc_fc_nl_rcv(struct sock *sk, int len)
+{
+	struct sk_buff *skb;
+	struct nlmsghdr *nlh;
+	struct scsi_nl_hdr *snlh;
+	uint32_t rlen;
+	int err;
+
+
+	while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL) {
+
+		while (skb->len >= NLMSG_SPACE(0)) {
+			err = 0;
+
+			nlh = (struct nlmsghdr *) skb->data;
+
+
+			if ((nlh->nlmsg_len < (sizeof(*nlh) + sizeof(*snlh))) ||
+			    (skb->len < nlh->nlmsg_len)) {
+				kfree_skb(skb);
+				printk(KERN_WARNING "%s: discarding partial "
+					"skb\n", __FUNCTION__);
+				break;
+			}
+
+			rlen = NLMSG_ALIGN(nlh->nlmsg_len);
+			if (rlen > skb->len) {
+				printk(KERN_WARNING "%s: rlen > skb->len\n",
+					 __FUNCTION__);
+				rlen = skb->len;
+			}
+
+			if (nlh->nlmsg_type != FC_TRANSPORT_MSG) {
+				printk(KERN_WARNING "%s: Not "
+					"FC_TRANSPORT_MSG\n", __FUNCTION__);
+				err = -EBADMSG;
+				goto next_msg;
+			}
+
+			snlh = NLMSG_DATA(nlh);
+			if ((snlh->version != SCSI_NL_VERSION) ||
+			    (snlh->magic != SCSI_NL_MAGIC)) {
+				printk(KERN_WARNING "%s: Bad Version or Magic "
+					"number\n", __FUNCTION__);
+				err = -EPROTOTYPE;
+				goto next_msg;
+			}
+
+			/* FIXME: security_netlink_recv() currently fails
+			 * here, so the permission check is disabled:
+
+			if (security_netlink_recv(skb)) {
+				err = -EPERM;
+				goto next_msg;
+			}
+			*/
+next_msg:
+			if (err) {
+				printk(KERN_WARNING "%s: err %d\n",
+					 __FUNCTION__, err);
+				netlink_ack(skb, nlh, err);
+				skb_pull(skb, rlen);
+				kfree_skb(skb);
+				continue;
+			}
+
+
+			lpfc_fc_handle_nl_rcv_msg(skb, nlh, rlen);
+
+		}
+	}
+}
+
+
+int
+lpfc_fc_nl_rcv_nl_event(struct notifier_block *this,
+			unsigned long event,
+			void *ptr)
+{
+	struct netlink_notify *n = ptr;
+
+	if ((event == NETLINK_URELEASE) &&
+	    (n->protocol == NETLINK_FCTRANSPORT) && (n->pid)) {
+		fc_service_state = FC_SC_SERVICESTATE_OFFLINE;
+		lpfc_fc_sc_schedule_notify_all(FC_NL_SC_DEREG);
+	}
+
+	return NOTIFY_DONE;
+}
+
+struct notifier_block lpfc_fc_netlink_notifier = {
+	.notifier_call  = lpfc_fc_nl_rcv_nl_event,
+};
+
+
+
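
lpfc_fc_sc_request() above sizes its netlink skb as NLMSG_SPACE(sizeof(struct fc_nl_sc_message) + auth_req_len). A minimal sketch of that arithmetic, using the same rounding rules the companion header defines and a stand-in size for the message header:

/* Worked example of the netlink length calculation (stand-in sizes). */
#include <stdio.h>

#define NL_ALIGNTO	4
#define NL_ALIGN(len)	(((len) + NL_ALIGNTO - 1) & ~(NL_ALIGNTO - 1))
#define NL_HDRLEN	NL_ALIGN(16)	/* sizeof(struct nlmsghdr) */
#define NL_LENGTH(len)	((len) + NL_ALIGN(NL_HDRLEN))
#define NL_SPACE(len)	NL_ALIGN(NL_LENGTH(len))

int main(void)
{
	int sc_msg_hdr = 24;	/* stand-in for sizeof(struct fc_nl_sc_message) */
	int auth_req_len = 50;	/* made-up authentication payload length */
	int len = NL_SPACE(sc_msg_hdr + auth_req_len);

	/* 24 + 50 = 74 payload bytes -> +16 nlmsghdr -> 90 -> aligned to 92 */
	printf("skb length = %d bytes\n", len);
	return 0;
}
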
diff --git a/drivers/scsi/lpfc/lpfc_auth_access.h b/drivers/scsi/lpfc/lpfc_auth_access.h
new file mode 100644
index 0000000..9cbe1d5
--- /dev/null
+++ b/drivers/scsi/lpfc/lpfc_auth_access.h
@@ -0,0 +1,266 @@
+/*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for         *
+ * Fibre Channel Host Bus Adapters.                                *
+ * Copyright (C) 2006-2007 Emulex.  All rights reserved.           *
+ * EMULEX and SLI are trademarks of Emulex.                        *
+ * www.emulex.com                                                  *
+ *                                                                 *
+ * This program is free software; you can redistribute it and/or   *
+ * modify it under the terms of version 2 of the GNU General       *
+ * Public License as published by the Free Software Foundation.    *
+ * This program is distributed in the hope that it will be useful. *
+ * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+ * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+ * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+ * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+ * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+ * more details, a copy of which can be found in the file COPYING  *
+ * included with this package.                                     *
+ *******************************************************************/
+
+#define to_fc_internal(tmpl)	container_of(tmpl, struct fc_internal, t)
+
+#define NLMSG_ALIGNTO	4
+#define NLMSG_ALIGN(len) ( ((len)+NLMSG_ALIGNTO-1) & ~(NLMSG_ALIGNTO-1) )
+#define NLMSG_HDRLEN	 ((int) NLMSG_ALIGN(sizeof(struct nlmsghdr)))
+#define NLMSG_LENGTH(len) ((len)+NLMSG_ALIGN(NLMSG_HDRLEN))
+#define NLMSG_SPACE(len) NLMSG_ALIGN(NLMSG_LENGTH(len))
+#define NLMSG_DATA(nlh)  ((void*)(((char*)nlh) + NLMSG_LENGTH(0)))
+#define NLMSG_NEXT(nlh,len)	 ((len) -= NLMSG_ALIGN((nlh)->nlmsg_len), \
+				  (struct nlmsghdr*)(((char*)(nlh)) + \
+				NLMSG_ALIGN((nlh)->nlmsg_len)))
+#define NLMSG_OK(nlh,len) ((len) >= (int)sizeof(struct nlmsghdr) && \
+			   (nlh)->nlmsg_len >= sizeof(struct nlmsghdr) && \
+			   (nlh)->nlmsg_len <= (len))
+#define NLMSG_PAYLOAD(nlh,len) ((nlh)->nlmsg_len - NLMSG_SPACE((len)))
+
+#define NLMSG_NOOP		0x1	/* Nothing.		*/
+#define NLMSG_ERROR		0x2	/* Error		*/
+#define NLMSG_DONE		0x3	/* End of a dump	*/
+#define NLMSG_OVERRUN		0x4	/* Data lost		*/
+
+#define NLMSG_MIN_TYPE		0x10	/* < 0x10: reserved control messages */
+
+/* scsi_nl_hdr->version value */
+#define SCSI_NL_VERSION				1
+
+/* scsi_nl_hdr->magic value */
+#define SCSI_NL_MAGIC				0xA1B2
+
+/* scsi_nl_hdr->transport value */
+#define SCSI_NL_TRANSPORT			0
+#define SCSI_NL_TRANSPORT_FC			1
+#define SCSI_NL_MAX_TRANSPORTS			2
+
+#define FC_NL_GROUP_CNT		0
+
+	/* Note: when specifying vendor_id to fc_host_post_vendor_event()
+	 *   be sure to read the Vendor Type and ID formatting requirements
+	 *   specified in scsi_netlink.h
+	 */
+
+#define FC_SC_REQ_TIMEOUT (60*HZ)
+
+enum fc_sc_service_state {
+	FC_SC_SERVICESTATE_UNKNOWN,
+	FC_SC_SERVICESTATE_ONLINE,
+	FC_SC_SERVICESTATE_OFFLINE,
+	FC_SC_SERVICESTATE_ERROR,
+};
+
+struct fc_security_request {
+	struct list_head rlist;
+	int pid;
+	u32 tran_id;
+	u32 req_type;
+	struct timer_list timer;
+	struct lpfc_vport *vport;
+	u32 data_len;
+	void *data;
+};
+
+struct fc_sc_msg_work_q_wrapper {
+	struct work_struct work;
+	struct fc_security_request *fc_sc_req;
+	u32 data_len;
+	int status;
+	u32 msgtype;
+};
+struct fc_sc_notify_work_q_wrapper {
+	struct work_struct work;
+	struct Scsi_Host *shost;
+	int msg;
+};
+
+#define FC_DHCHAP	1
+#define FC_FCAP		2
+#define FC_FCPAP	3
+#define FC_KERBEROS	4
+
+#define FC_AUTHMODE_UNKNOWN	0
+#define FC_AUTHMODE_NONE	1
+#define FC_AUTHMODE_ACTIVE	2
+#define FC_AUTHMODE_PASSIVE	3
+
+#define FC_SP_HASH_MD5  0x5
+#define FC_SP_HASH_SHA1 0x6
+
+#define DH_GROUP_NULL	0x00
+#define DH_GROUP_1024	0x01
+#define DH_GROUP_1280	0x02
+#define DH_GROUP_1536	0x03
+#define DH_GROUP_2048	0x04
+
+#define MAX_AUTH_REQ_SIZE 1024
+#define MAX_AUTH_RSP_SIZE 1024
+
+#define AUTH_FABRIC_WWN	0xFFFFFFFFFFFFFFFFLL
+
+struct fc_auth_req {
+	uint64_t local_wwpn;
+	uint64_t remote_wwpn;
+	union {
+		struct dhchap_challenge_req {
+			uint32_t transaction_id;
+			uint32_t dh_group_id;
+			uint32_t hash_id;
+		} dhchap_challenge;
+		struct dhchap_reply_req {
+			uint32_t transaction_id;
+			uint32_t dh_group_id;
+			uint32_t hash_id;
+			uint32_t bidirectional;
+			uint32_t received_challenge_len;
+			uint32_t received_public_key_len;
+			uint8_t  data[0];
+		} dhchap_reply;
+		struct dhchap_success_req {
+			uint32_t transaction_id;
+			uint32_t dh_group_id;
+			uint32_t hash_id;
+			uint32_t our_challenge_len;
+			uint32_t received_response_len;
+			uint32_t received_public_key_len;
+			uint32_t received_challenge_len;
+			uint8_t  data[0];
+		} dhchap_success;
+	} u;
+} __attribute__ ((packed));
+
+struct fc_auth_rsp {
+	uint64_t local_wwpn;
+	uint64_t remote_wwpn;
+	union {
+		struct authinfo {
+			uint8_t  auth_mode;
+			uint16_t auth_timeout;
+			uint8_t  bidirectional;
+			uint8_t  type_priority[4];
+			uint16_t type_len;
+			uint8_t  hash_priority[4];
+			uint16_t hash_len;
+			uint8_t  dh_group_priority[8];
+			uint16_t dh_group_len;
+			uint32_t reauth_interval;
+		} dhchap_security_config;
+		struct dhchap_challenge_rsp {
+			uint32_t transaction_id;
+			uint32_t our_challenge_len;
+			uint32_t our_public_key_len;
+			uint8_t  data[0];
+		} dhchap_challenge;
+		struct dhchap_reply_rsp {
+			uint32_t transaction_id;
+			uint32_t our_challenge_rsp_len;
+			uint32_t our_public_key_len;
+			uint32_t our_challenge_len;
+			uint8_t  data[0];
+		} dhchap_reply;
+		struct dhchap_success_rsp {
+			uint32_t transaction_id;
+			uint32_t authenticated;
+			uint32_t response_len;
+			uint8_t  data[0];
+		} dhchap_success;
+	}u;
+}__attribute__ ((packed));
+
+int
+lpfc_fc_security_get_config(struct Scsi_Host *shost,
+			struct fc_auth_req *auth_req,
+			u32 req_len,
+			struct fc_auth_rsp *auth_rsp,
+			u32 rsp_len);
+int
+lpfc_fc_security_dhchap_make_challenge(struct Scsi_Host *shost,
+			struct fc_auth_req *auth_req,
+			u32 req_len,
+			struct fc_auth_rsp *auth_rsp,
+			u32 rsp_len);
+int
+lpfc_fc_security_dhchap_make_response(struct Scsi_Host *shost,
+			struct fc_auth_req *auth_req,
+			u32 req_len,
+			struct fc_auth_rsp *auth_rsp,
+			u32 rsp_len);
+int
+lpfc_fc_security_dhchap_authenticate(struct Scsi_Host *shost,
+			struct fc_auth_req *auth_req,
+			u32 req_len,
+			struct fc_auth_rsp *auth_rsp,
+			u32 rsp_len);
+
+int lpfc_fc_queue_security_work(struct lpfc_vport *,
+		struct work_struct *);
+
+/*
+ * FC Transport Message Types
+ */
+	/* user -> kernel */
+#define FC_NL_EVENTS_REG			0x0001
+#define FC_NL_EVENTS_DEREG			0x0002
+#define FC_NL_SC_REG				0x0003
+#define FC_NL_SC_DEREG				0x0004
+#define FC_NL_SC_GET_CONFIG_RSP			0x0005
+#define FC_NL_SC_SET_CONFIG_RSP			0x0006
+#define FC_NL_SC_DHCHAP_MAKE_CHALLENGE_RSP	0x0007
+#define FC_NL_SC_DHCHAP_MAKE_RESPONSE_RSP	0x0008
+#define FC_NL_SC_DHCHAP_AUTHENTICATE_RSP	0x0009
+	/* kernel -> user */
+#define FC_NL_ASYNC_EVENT			0x0010
+#define FC_NL_SC_GET_CONFIG_REQ			0x0020
+#define FC_NL_SC_SET_CONFIG_REQ			0x0030
+#define FC_NL_SC_DHCHAP_MAKE_CHALLENGE_REQ	0x0040
+#define FC_NL_SC_DHCHAP_MAKE_RESPONSE_REQ	0x0050
+#define FC_NL_SC_DHCHAP_AUTHENTICATE_REQ	0x0060
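+
+/*
+ * The *_REQ types above are sent by the driver to the userspace
+ * authentication service (which registers itself with FC_NL_SC_REG);
+ * the service replies with the corresponding *_RSP message.
+ */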
+
+/*
+ * Message Structures :
+ */
+
+/* macro to round up message lengths to 8byte boundary */
+#define FC_NL_MSGALIGN(len)		(((len) + 7) & ~7)
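+/* e.g. FC_NL_MSGALIGN(13) == 16, FC_NL_MSGALIGN(16) == 16 */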
+
+#define FC_NETLINK_API_VERSION		1
+
+/* Single Netlink Message type to send all FC Transport messages */
+#define FC_TRANSPORT_MSG		NLMSG_MIN_TYPE + 1
+
+/* SCSI_TRANSPORT_MSG event message header */
+/*
+struct scsi_nl_hdr {
+	uint8_t version;
+	uint8_t transport;
+	uint16_t magic;
+	uint16_t msgtype;
+	uint16_t msglen;
+} __attribute__((aligned(sizeof(uint64_t))));
+*/
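+/*
+ * Each security-channel message is carried in a struct fc_nl_sc_message:
+ * snlh identifies the transport and message type, host_no selects the
+ * Scsi_Host, tran_id ties a response back to the request that started it,
+ * and data_len is the size of the variable-length payload in data[].
+ */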
+struct fc_nl_sc_message {
+	struct scsi_nl_hdr snlh;              /* must be 1st element ! */
+	uint32_t host_no;
+	uint32_t tran_id;
+	uint32_t data_len;
+	uint8_t data[0];
+} __attribute__((aligned(sizeof(uint64_t))));
+
diff --git a/drivers/scsi/lpfc/lpfc_compat.h b/drivers/scsi/lpfc/lpfc_compat.h
index 970c2be..8c71cff 100644
--- a/drivers/scsi/lpfc/lpfc_compat.h
+++ b/drivers/scsi/lpfc/lpfc_compat.h
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2004-2007 Emulex.  All rights reserved.           *
+ * Copyright (C) 2004-2005 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *
  * www.emulex.com                                                  *
  *                                                                 *
@@ -32,6 +32,11 @@ using writel() and readl().
  *******************************************************************/
 #include <asm/byteorder.h>
 
+/*
+ * This definition is to support older versions of scsi_transport_fc that
+ * do not have the 8Gig speed definition.
+ */
+
 #ifdef __BIG_ENDIAN
 
 static inline void
diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
index ca8a443..663c263 100644
--- a/drivers/scsi/lpfc/lpfc_crtn.h
+++ b/drivers/scsi/lpfc/lpfc_crtn.h
@@ -18,97 +18,140 @@
  * included with this package.                                     *
  *******************************************************************/
 
+typedef int (*node_filter)(struct lpfc_nodelist *ndlp, void *param);
+
 struct fc_rport;
+int lpfc_issue_els_auth(struct lpfc_vport *, struct lpfc_nodelist *,
+			uint8_t message_code, uint8_t *payload,
+			uint32_t payload_len);
+int lpfc_issue_els_auth_reject(struct lpfc_vport *vport,
+			       struct lpfc_nodelist *ndlp,
+			       uint8_t reason, uint8_t explanation);
 void lpfc_dump_mem(struct lpfc_hba *, LPFC_MBOXQ_t *, uint16_t);
 void lpfc_read_nv(struct lpfc_hba *, LPFC_MBOXQ_t *);
+void lpfc_config_async(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t);
+
+void lpfc_heart_beat(struct lpfc_hba *, LPFC_MBOXQ_t *);
 int lpfc_read_la(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb,
 		 struct lpfc_dmabuf *mp);
 void lpfc_clear_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
+void lpfc_issue_clear_la(struct lpfc_hba *phba, struct lpfc_vport *vport);
 void lpfc_config_link(struct lpfc_hba *, LPFC_MBOXQ_t *);
-int lpfc_read_sparam(struct lpfc_hba *, LPFC_MBOXQ_t *);
+int lpfc_read_sparam(struct lpfc_hba *, LPFC_MBOXQ_t *, int);
 void lpfc_read_config(struct lpfc_hba *, LPFC_MBOXQ_t *);
 void lpfc_read_lnk_stat(struct lpfc_hba *, LPFC_MBOXQ_t *);
-int lpfc_reg_login(struct lpfc_hba *, uint32_t, uint8_t *, LPFC_MBOXQ_t *,
-		   uint32_t);
-void lpfc_unreg_login(struct lpfc_hba *, uint32_t, LPFC_MBOXQ_t *);
-void lpfc_unreg_did(struct lpfc_hba *, uint32_t, LPFC_MBOXQ_t *);
+int lpfc_reg_login(struct lpfc_hba *, uint16_t, uint32_t, uint8_t *,
+		   LPFC_MBOXQ_t *, uint32_t);
+void lpfc_unreg_login(struct lpfc_hba *, uint16_t, uint32_t, LPFC_MBOXQ_t *);
+void lpfc_unreg_did(struct lpfc_hba *, uint16_t, uint32_t, LPFC_MBOXQ_t *);
+void lpfc_reg_vpi(struct lpfc_hba *, uint16_t, uint32_t, LPFC_MBOXQ_t *);
+void lpfc_unreg_vpi(struct lpfc_hba *, uint16_t, LPFC_MBOXQ_t *);
 void lpfc_init_link(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t, uint32_t);
 
-
+struct lpfc_vport *lpfc_find_vport_by_did(struct lpfc_hba *, uint32_t);
+void lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove);
 int lpfc_linkdown(struct lpfc_hba *);
+void lpfc_port_link_failure(struct lpfc_vport *);
 void lpfc_mbx_cmpl_read_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
 
 void lpfc_mbx_cmpl_clear_la(struct lpfc_hba *, LPFC_MBOXQ_t *);
 void lpfc_mbx_cmpl_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
+void lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *, LPFC_MBOXQ_t *);
 void lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
 void lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
 void lpfc_mbx_cmpl_fdmi_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *);
-int lpfc_nlp_list(struct lpfc_hba *, struct lpfc_nodelist *, int);
-void lpfc_set_disctmo(struct lpfc_hba *);
-int lpfc_can_disctmo(struct lpfc_hba *);
-int lpfc_unreg_rpi(struct lpfc_hba *, struct lpfc_nodelist *);
-void lpfc_set_slim(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t,
-		uint32_t value);
+void lpfc_dequeue_node(struct lpfc_vport *, struct lpfc_nodelist *);
+void lpfc_nlp_set_state(struct lpfc_vport *, struct lpfc_nodelist *, int);
+void lpfc_drop_node(struct lpfc_vport *, struct lpfc_nodelist *);
+void lpfc_set_disctmo(struct lpfc_vport *);
+int  lpfc_can_disctmo(struct lpfc_vport *);
+int  lpfc_unreg_rpi(struct lpfc_vport *, struct lpfc_nodelist *);
+void lpfc_unreg_all_rpis(struct lpfc_vport *);
+void lpfc_unreg_default_rpis(struct lpfc_vport *);
+void lpfc_issue_reg_vpi(struct lpfc_hba *, struct lpfc_vport *);
+
 int lpfc_check_sli_ndlp(struct lpfc_hba *, struct lpfc_sli_ring *,
-		    struct lpfc_iocbq *, struct lpfc_nodelist *);
-int lpfc_nlp_remove(struct lpfc_hba *, struct lpfc_nodelist *);
-void lpfc_nlp_init(struct lpfc_hba *, struct lpfc_nodelist *, uint32_t);
-struct lpfc_nodelist *lpfc_setup_disc_node(struct lpfc_hba *, uint32_t);
-void lpfc_disc_list_loopmap(struct lpfc_hba *);
-void lpfc_disc_start(struct lpfc_hba *);
-void lpfc_disc_flush_list(struct lpfc_hba *);
+			struct lpfc_iocbq *, struct lpfc_nodelist *);
+void lpfc_nlp_init(struct lpfc_vport *, struct lpfc_nodelist *, uint32_t);
+struct lpfc_nodelist *lpfc_nlp_get(struct lpfc_nodelist *);
+int  lpfc_nlp_put(struct lpfc_nodelist *);
+int  lpfc_nlp_not_used(struct lpfc_nodelist *ndlp);
+struct lpfc_nodelist *lpfc_setup_disc_node(struct lpfc_vport *, uint32_t);
+void lpfc_disc_list_loopmap(struct lpfc_vport *);
+void lpfc_disc_start(struct lpfc_vport *);
+void lpfc_disc_flush_list(struct lpfc_vport *);
+void lpfc_cleanup_discovery_resources(struct lpfc_vport *);
+void lpfc_cleanup(struct lpfc_vport *);
 void lpfc_disc_timeout(unsigned long);
 
-struct lpfc_nodelist *lpfc_findnode_rpi(struct lpfc_hba * phba, uint16_t rpi);
+struct lpfc_nodelist *__lpfc_findnode_rpi(struct lpfc_vport *, uint16_t);
+struct lpfc_nodelist *lpfc_findnode_rpi(struct lpfc_vport *, uint16_t);
+struct lpfc_nodelist *lpfc_findnode_wwnn(struct lpfc_vport *,
+					 struct lpfc_name *);
 
+void lpfc_port_auth_failed(struct lpfc_nodelist *);
+void lpfc_worker_wake_up(struct lpfc_hba *);
 int lpfc_workq_post_event(struct lpfc_hba *, void *, void *, uint32_t);
 int lpfc_do_work(void *);
-int lpfc_disc_state_machine(struct lpfc_hba *, struct lpfc_nodelist *, void *,
+int lpfc_disc_state_machine(struct lpfc_vport *, struct lpfc_nodelist *, void *,
 			    uint32_t);
 
-int lpfc_check_sparm(struct lpfc_hba *, struct lpfc_nodelist *,
+void lpfc_register_new_vport(struct lpfc_hba *, struct lpfc_vport *,
+			struct lpfc_nodelist *);
+void lpfc_do_scr_ns_plogi(struct lpfc_hba *, struct lpfc_vport *);
+int lpfc_check_sparm(struct lpfc_vport *, struct lpfc_nodelist *,
 		     struct serv_parm *, uint32_t);
-int lpfc_els_abort(struct lpfc_hba *, struct lpfc_nodelist * ndlp);
+int lpfc_els_abort(struct lpfc_hba *, struct lpfc_nodelist *);
+int lpfc_els_chk_latt(struct lpfc_vport *);
+struct lpfc_iocbq *lpfc_prep_els_iocb(struct lpfc_vport *, uint8_t, uint16_t,
+				      uint8_t, struct lpfc_nodelist *, uint32_t,
+				      uint32_t);
 int lpfc_els_abort_flogi(struct lpfc_hba *);
-int lpfc_initial_flogi(struct lpfc_hba *);
-int lpfc_issue_els_plogi(struct lpfc_hba *, uint32_t, uint8_t);
-int lpfc_issue_els_prli(struct lpfc_hba *, struct lpfc_nodelist *, uint8_t);
-int lpfc_issue_els_adisc(struct lpfc_hba *, struct lpfc_nodelist *, uint8_t);
-int lpfc_issue_els_logo(struct lpfc_hba *, struct lpfc_nodelist *, uint8_t);
-int lpfc_issue_els_scr(struct lpfc_hba *, uint32_t, uint8_t);
+int lpfc_initial_flogi(struct lpfc_vport *);
+int lpfc_initial_fdisc(struct lpfc_vport *);
+int lpfc_issue_els_fdisc(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
+int lpfc_issue_els_plogi(struct lpfc_vport *, uint32_t, uint8_t);
+int lpfc_issue_els_prli(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
+int lpfc_issue_els_adisc(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
+int lpfc_issue_els_logo(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
+int lpfc_issue_els_npiv_logo(struct lpfc_vport *, struct lpfc_nodelist *);
+int lpfc_issue_els_scr(struct lpfc_vport *, uint32_t, uint8_t);
 int lpfc_els_free_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
-int lpfc_els_rsp_acc(struct lpfc_hba *, uint32_t, struct lpfc_iocbq *,
-		     struct lpfc_nodelist *, LPFC_MBOXQ_t *, uint8_t);
-int lpfc_els_rsp_reject(struct lpfc_hba *, uint32_t, struct lpfc_iocbq *,
-			struct lpfc_nodelist *);
-int lpfc_els_rsp_adisc_acc(struct lpfc_hba *, struct lpfc_iocbq *,
+int lpfc_ct_free_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
+int lpfc_els_rsp_acc(struct lpfc_vport *, uint32_t, struct lpfc_iocbq *,
+		     struct lpfc_nodelist *, LPFC_MBOXQ_t *);
+int lpfc_els_rsp_reject(struct lpfc_vport *, uint32_t, struct lpfc_iocbq *,
+			struct lpfc_nodelist *, LPFC_MBOXQ_t *);
+int lpfc_els_rsp_adisc_acc(struct lpfc_vport *, struct lpfc_iocbq *,
 			   struct lpfc_nodelist *);
-int lpfc_els_rsp_prli_acc(struct lpfc_hba *, struct lpfc_iocbq *,
+int lpfc_els_rsp_prli_acc(struct lpfc_vport *, struct lpfc_iocbq *,
 			  struct lpfc_nodelist *);
-void lpfc_cancel_retry_delay_tmo(struct lpfc_hba *, struct lpfc_nodelist *);
+void lpfc_cancel_retry_delay_tmo(struct lpfc_vport *, struct lpfc_nodelist *);
 void lpfc_els_retry_delay(unsigned long);
 void lpfc_els_retry_delay_handler(struct lpfc_nodelist *);
+void lpfc_reauth_node(unsigned long);
+void lpfc_reauthentication_handler(struct lpfc_nodelist *);
+void lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *);
 void lpfc_els_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
 			  struct lpfc_iocbq *);
-int lpfc_els_handle_rscn(struct lpfc_hba *);
-int lpfc_els_flush_rscn(struct lpfc_hba *);
-int lpfc_rscn_payload_check(struct lpfc_hba *, uint32_t);
-void lpfc_els_flush_cmd(struct lpfc_hba *);
-int lpfc_els_disc_adisc(struct lpfc_hba *);
-int lpfc_els_disc_plogi(struct lpfc_hba *);
+int lpfc_els_handle_rscn(struct lpfc_vport *);
+void lpfc_els_flush_rscn(struct lpfc_vport *);
+int lpfc_rscn_payload_check(struct lpfc_vport *, uint32_t);
+void lpfc_els_flush_all_cmd(struct lpfc_hba *);
+void lpfc_els_flush_cmd(struct lpfc_vport *);
+int lpfc_els_disc_adisc(struct lpfc_vport *);
+int lpfc_els_disc_plogi(struct lpfc_vport *);
 void lpfc_els_timeout(unsigned long);
-void lpfc_els_timeout_handler(struct lpfc_hba *);
-struct lpfc_iocbq *lpfc_prep_els_iocb(struct lpfc_hba *, uint8_t expectRsp,
-					uint16_t, uint8_t,
-					struct lpfc_nodelist *,
-					uint32_t, uint32_t);
+void lpfc_els_timeout_handler(struct lpfc_vport *);
+void lpfc_hb_timeout(unsigned long);
+void lpfc_hb_timeout_handler(struct lpfc_hba *);
 
 void lpfc_ct_unsol_event(struct lpfc_hba *, struct lpfc_sli_ring *,
 			 struct lpfc_iocbq *);
-int lpfc_ns_cmd(struct lpfc_hba *, struct lpfc_nodelist *, int);
-int lpfc_fdmi_cmd(struct lpfc_hba *, struct lpfc_nodelist *, int);
+int lpfc_ns_cmd(struct lpfc_vport *, int, uint8_t, uint32_t);
+int lpfc_fdmi_cmd(struct lpfc_vport *, struct lpfc_nodelist *, int);
 void lpfc_fdmi_tmo(unsigned long);
-void lpfc_fdmi_tmo_handler(struct lpfc_hba *);
+void lpfc_fdmi_timeout_handler(struct lpfc_vport *vport);
 
 int lpfc_config_port_prep(struct lpfc_hba *);
 int lpfc_config_port_post(struct lpfc_hba *);
@@ -136,16 +179,25 @@ void lpfc_config_port(struct lpfc_hba *, LPFC_MBOXQ_t *);
 void lpfc_kill_board(struct lpfc_hba *, LPFC_MBOXQ_t *);
 void lpfc_mbox_put(struct lpfc_hba *, LPFC_MBOXQ_t *);
 LPFC_MBOXQ_t *lpfc_mbox_get(struct lpfc_hba *);
+void lpfc_mbox_cmpl_put(struct lpfc_hba *, LPFC_MBOXQ_t *);
 int lpfc_mbox_tmo_val(struct lpfc_hba *, int);
 
+void lpfc_config_hbq(struct lpfc_hba *, uint32_t, struct lpfc_hbq_init *,
+	uint32_t , LPFC_MBOXQ_t *);
+struct lpfc_hbq_entry * lpfc_sli_next_hbq_slot(struct lpfc_hba *, uint32_t);
+struct hbq_dmabuf *lpfc_els_hbq_alloc(struct lpfc_hba *);
+void lpfc_els_hbq_free(struct lpfc_hba *, struct hbq_dmabuf *);
+
 int lpfc_mem_alloc(struct lpfc_hba *);
 void lpfc_mem_free(struct lpfc_hba *);
+void lpfc_stop_vport_timers(struct lpfc_vport *);
 
 void lpfc_poll_timeout(unsigned long ptr);
 void lpfc_poll_start_timer(struct lpfc_hba * phba);
 void lpfc_sli_poll_fcp_ring(struct lpfc_hba * hba);
 struct lpfc_iocbq * lpfc_sli_get_iocbq(struct lpfc_hba *);
 void lpfc_sli_release_iocbq(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
+void __lpfc_sli_release_iocbq(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
 uint16_t lpfc_sli_next_iotag(struct lpfc_hba * phba, struct lpfc_iocbq * iocb);
 
 void lpfc_reset_barrier(struct lpfc_hba * phba);
@@ -154,6 +206,7 @@ int lpfc_sli_brdkill(struct lpfc_hba *);
 int lpfc_sli_brdreset(struct lpfc_hba *);
 int lpfc_sli_brdrestart(struct lpfc_hba *);
 int lpfc_sli_hba_setup(struct lpfc_hba *);
+int lpfc_sli_host_down(struct lpfc_vport *);
 int lpfc_sli_hba_down(struct lpfc_hba *);
 int lpfc_sli_issue_mbox(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t);
 int lpfc_sli_handle_mb_event(struct lpfc_hba *);
@@ -164,28 +217,36 @@ void lpfc_sli_def_mbox_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *);
 int lpfc_sli_issue_iocb(struct lpfc_hba *, struct lpfc_sli_ring *,
 			struct lpfc_iocbq *, uint32_t);
 void lpfc_sli_pcimem_bcopy(void *, void *, uint32_t);
-int lpfc_sli_abort_iocb_ring(struct lpfc_hba *, struct lpfc_sli_ring *);
+void lpfc_sli_abort_iocb_ring(struct lpfc_hba *, struct lpfc_sli_ring *);
 int lpfc_sli_ringpostbuf_put(struct lpfc_hba *, struct lpfc_sli_ring *,
 			     struct lpfc_dmabuf *);
 struct lpfc_dmabuf *lpfc_sli_ringpostbuf_get(struct lpfc_hba *,
 					     struct lpfc_sli_ring *,
 					     dma_addr_t);
+int lpfc_sli_hbq_count(void);
+int lpfc_sli_hbqbuf_init_hbqs(struct lpfc_hba *, uint32_t);
+int lpfc_sli_hbqbuf_add_hbqs(struct lpfc_hba *, uint32_t);
+void lpfc_sli_hbqbuf_free_all(struct lpfc_hba *);
+struct hbq_dmabuf *lpfc_sli_hbqbuf_find(struct lpfc_hba *, uint32_t);
+int lpfc_sli_hbq_size(void);
 int lpfc_sli_issue_abort_iotag(struct lpfc_hba *, struct lpfc_sli_ring *,
 			       struct lpfc_iocbq *);
-int lpfc_sli_sum_iocb(struct lpfc_hba *, struct lpfc_sli_ring *, uint16_t,
-			  uint64_t, lpfc_ctx_cmd);
-int lpfc_sli_abort_iocb(struct lpfc_hba *, struct lpfc_sli_ring *, uint16_t,
-			    uint64_t, uint32_t, lpfc_ctx_cmd);
+int lpfc_sli_sum_iocb(struct lpfc_vport *, uint16_t, uint64_t, lpfc_ctx_cmd);
+int lpfc_sli_abort_iocb(struct lpfc_vport *, struct lpfc_sli_ring *, uint16_t,
+			uint64_t, lpfc_ctx_cmd);
 
 void lpfc_mbox_timeout(unsigned long);
 void lpfc_mbox_timeout_handler(struct lpfc_hba *);
 
-struct lpfc_nodelist *lpfc_findnode_did(struct lpfc_hba *, uint32_t, uint32_t);
-struct lpfc_nodelist *lpfc_findnode_wwpn(struct lpfc_hba *, uint32_t,
-					struct lpfc_name *);
+struct lpfc_nodelist *__lpfc_find_node(struct lpfc_vport *, node_filter,
+				       void *);
+struct lpfc_nodelist *lpfc_find_node(struct lpfc_vport *, node_filter, void *);
+struct lpfc_nodelist *lpfc_findnode_did(struct lpfc_vport *, uint32_t);
+struct lpfc_nodelist *lpfc_findnode_wwpn(struct lpfc_vport *,
+					 struct lpfc_name *);
 
 int lpfc_sli_issue_mbox_wait(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq,
-			 uint32_t timeout);
+			     uint32_t timeout);
 
 int lpfc_sli_issue_iocb_wait(struct lpfc_hba * phba,
 			     struct lpfc_sli_ring * pring,
@@ -196,28 +257,80 @@ void lpfc_sli_abort_fcp_cmpl(struct lpfc_hba * phba,
 			     struct lpfc_iocbq * cmdiocb,
 			     struct lpfc_iocbq * rspiocb);
 
+void lpfc_sli_free_hbq(struct lpfc_hba *, struct hbq_dmabuf *);
+
 void *lpfc_mbuf_alloc(struct lpfc_hba *, int, dma_addr_t *);
+void __lpfc_mbuf_free(struct lpfc_hba *, void *, dma_addr_t);
 void lpfc_mbuf_free(struct lpfc_hba *, void *, dma_addr_t);
 
+void lpfc_in_buf_free(struct lpfc_hba *, struct lpfc_dmabuf *);
 /* Function prototypes. */
 const char* lpfc_info(struct Scsi_Host *);
+int lpfc_scan_finished(struct Scsi_Host *, unsigned long);
+
 void lpfc_get_cfgparam(struct lpfc_hba *);
-int lpfc_alloc_sysfs_attr(struct lpfc_hba *);
-void lpfc_free_sysfs_attr(struct lpfc_hba *);
-extern struct class_device_attribute *lpfc_host_attrs[];
+void lpfc_get_vport_cfgparam(struct lpfc_vport *);
+int lpfc_alloc_sysfs_attr(struct lpfc_vport *);
+void lpfc_free_sysfs_attr(struct lpfc_vport *);
+extern struct class_device_attribute *lpfc_hba_attrs[];
+extern struct class_device_attribute *lpfc_vport_attrs[];
 extern struct scsi_host_template lpfc_template;
+extern struct class_device_attribute *lpfc_hba_attrs_no_npiv[];
+extern struct scsi_host_template lpfc_template_no_npiv;
+extern struct scsi_host_template lpfc_vport_template;
 extern struct fc_function_template lpfc_transport_functions;
+extern struct fc_function_template lpfc_vport_transport_functions;
+extern int lpfc_sli_mode;
+extern int lpfc_enable_npiv;
 
-void lpfc_get_hba_sym_node_name(struct lpfc_hba * phba, uint8_t * symbp);
+int  lpfc_vport_symbolic_node_name(struct lpfc_vport *, char *, size_t);
 void lpfc_terminate_rport_io(struct fc_rport *);
 void lpfc_dev_loss_tmo_callbk(struct fc_rport *rport);
 
-/* Initialize/Un-initialize char device */
-int lpfc_cdev_init(void);
-void lpfc_cdev_exit(void);
-void  lpfcdfc_host_del (struct lpfcdfc_host *);
-struct lpfcdfc_host * lpfcdfc_host_add (struct pci_dev *, struct Scsi_Host *,
-					struct lpfc_hba *);
+struct lpfc_vport *lpfc_create_port(struct lpfc_hba *, int, struct device *);
+void lpfc_mbx_unreg_vpi(struct lpfc_vport *);
+void destroy_port(struct lpfc_vport *);
+int lpfc_get_instance(void);
+void lpfc_host_attrib_init(struct Scsi_Host *);
+
+int lpfc_security_wait(void);
+int  lpfc_get_security_enabled(struct Scsi_Host *);
+void lpfc_security_service_online(struct Scsi_Host *);
+void lpfc_security_service_offline(struct Scsi_Host *);
+void lpfc_security_config(struct Scsi_Host *, int status, void *);
+int lpfc_security_config_wait(struct lpfc_vport *vport);
+void lpfc_dhchap_make_challenge(struct Scsi_Host *, int , void *, uint32_t);
+void lpfc_dhchap_make_response(struct Scsi_Host *, int , void *, uint32_t);
+void lpfc_dhchap_authenticate(struct Scsi_Host *, int , void *, uint32_t);
+int lpfc_start_node_authentication(struct lpfc_nodelist *);
+void lpfc_start_discovery(struct lpfc_vport *vport);
+
+void lpfc_start_authentication(struct lpfc_vport *, struct lpfc_nodelist *);
+
+extern void lpfc_debugfs_initialize(struct lpfc_vport *);
+extern void lpfc_debugfs_terminate(struct lpfc_vport *);
+extern void lpfc_debugfs_disc_trc(struct lpfc_vport *, int, char *, uint32_t,
+	uint32_t, uint32_t);
+extern void lpfc_debugfs_slow_ring_trc(struct lpfc_hba *, char *, uint32_t,
+	uint32_t, uint32_t);
+extern struct lpfc_hbq_init *lpfc_hbq_defs[];
+
+extern uint8_t lpfc_security_service_state;
+extern spinlock_t fc_security_user_lock;
+extern struct list_head fc_security_user_list;
+extern int fc_service_state;
+
+/* Interface exported by fabric iocb scheduler */
+int lpfc_issue_fabric_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
+void lpfc_fabric_abort_vport(struct lpfc_vport *);
+void lpfc_fabric_abort_nport(struct lpfc_nodelist *);
+void lpfc_fabric_abort_hba(struct lpfc_hba *);
+void lpfc_fabric_abort_flogi(struct lpfc_hba *);
+void lpfc_fabric_block_timeout(unsigned long);
+void lpfc_unblock_fabric_iocbs(struct lpfc_hba *);
+void lpfc_adjust_queue_depth(struct lpfc_hba *);
+void lpfc_ramp_down_queue_handler(struct lpfc_hba *);
+void lpfc_ramp_up_queue_handler(struct lpfc_hba *);
 
 #define ScsiResult(host_code, scsi_code) (((host_code) << 16) | scsi_code)
 #define HBA_EVENT_RSCN                   5
diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
index 305e441..3b0dd49 100644
--- a/drivers/scsi/lpfc/lpfc_ct.c
+++ b/drivers/scsi/lpfc/lpfc_ct.c
@@ -40,6 +40,8 @@
 #include "lpfc_logmsg.h"
 #include "lpfc_crtn.h"
 #include "lpfc_version.h"
+#include "lpfc_vport.h"
+#include "lpfc_debugfs.h"
 
 #define HBA_PORTSPEED_UNKNOWN               0	/* Unknown - transceiver
 						 * incapable of reporting */
@@ -58,25 +60,69 @@ static char *lpfc_release_version = LPFC_DRIVER_VERSION;
 /*
  * lpfc_ct_unsol_event
  */
+static void
+lpfc_ct_unsol_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
+		     struct lpfc_dmabuf *mp, uint32_t size)
+{
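+	/*
+	 * Unsolicited CT frames are not handled by the driver; just log
+	 * what was received so the caller can free the buffer.
+	 */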
+	if (!mp) {
+		printk(KERN_ERR "%s (%d): Unsolicited CT, no buffer, "
+		       "piocbq = %p, status = x%x, mp = %p, size = %d\n",
+		       __FUNCTION__, __LINE__,
+		       piocbq, piocbq->iocb.ulpStatus, mp, size);
+	}
+
+	printk(KERN_ERR "%s (%d): Ignoring unsolicited CT piocbq = %p, "
+	       "buffer = %p, size = %d, status = x%x\n",
+	       __FUNCTION__, __LINE__,
+	       piocbq, mp, size,
+	       piocbq->iocb.ulpStatus);
+
+}
+
+static void
+lpfc_ct_ignore_hbq_buffer(struct lpfc_hba *phba, struct lpfc_iocbq *piocbq,
+			  struct lpfc_dmabuf *mp, uint32_t size)
+{
+	if (!mp) {
+		printk(KERN_ERR "%s (%d): Unsolicited CT, no "
+		       "HBQ buffer, piocbq = %p, status = x%x\n",
+		       __FUNCTION__, __LINE__,
+		       piocbq, piocbq->iocb.ulpStatus);
+	} else {
+		lpfc_ct_unsol_buffer(phba, piocbq, mp, size);
+		printk(KERN_ERR "%s (%d): Ignoring unsolicited CT "
+		       "piocbq = %p, buffer = %p, size = %d, "
+		       "status = x%x\n",
+		       __FUNCTION__, __LINE__,
+		       piocbq, mp, size, piocbq->iocb.ulpStatus);
+	}
+}
+
 void
-lpfc_ct_unsol_event(struct lpfc_hba * phba,
-		    struct lpfc_sli_ring * pring, struct lpfc_iocbq * piocbq)
+lpfc_ct_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+		    struct lpfc_iocbq *piocbq)
 {
 
-	struct lpfc_iocbq *next_piocbq;
-	struct lpfc_dmabuf *pmbuf = NULL;
-	struct lpfc_dmabuf *matp, *next_matp;
-	uint32_t ctx = 0, size = 0, cnt = 0;
+	struct lpfc_dmabuf *mp = NULL;
 	IOCB_t *icmd = &piocbq->iocb;
-	IOCB_t *save_icmd = icmd;
-	int i, go_exit = 0;
-	struct list_head head;
+	int i;
+	struct lpfc_iocbq *iocbq;
+	dma_addr_t paddr;
+	uint32_t size;
+	struct lpfc_dmabuf *bdeBuf1 = piocbq->context2;
+	struct lpfc_dmabuf *bdeBuf2 = piocbq->context3;
+
+	piocbq->context2 = NULL;
+	piocbq->context3 = NULL;
 
-	if ((icmd->ulpStatus == IOSTAT_LOCAL_REJECT) &&
+	if (unlikely(icmd->ulpStatus == IOSTAT_NEED_BUFFER)) {
+		lpfc_sli_hbqbuf_add_hbqs(phba, LPFC_ELS_HBQ);
+	} else if ((icmd->ulpStatus == IOSTAT_LOCAL_REJECT) &&
 		((icmd->un.ulpWord[4] & 0xff) == IOERR_RCV_BUFFER_WAITING)) {
 		/* Not enough posted buffers; Try posting more buffers */
 		phba->fc_stat.NoRcvBuf++;
-		lpfc_post_buffer(phba, pring, 0, 1);
+		if (!(phba->sli3_options & LPFC_SLI3_HBQ_ENABLED))
+			lpfc_post_buffer(phba, pring, 0, 1);
 		return;
 	}
 
@@ -86,66 +132,56 @@ lpfc_ct_unsol_event(struct lpfc_hba * phba,
 	if (icmd->ulpBdeCount == 0)
 		return;
 
-	INIT_LIST_HEAD(&head);
-	list_add_tail(&head, &piocbq->list);
-
-	list_for_each_entry_safe(piocbq, next_piocbq, &head, list) {
-		icmd = &piocbq->iocb;
-		if (ctx == 0)
-			ctx = (uint32_t) (icmd->ulpContext);
-		if (icmd->ulpBdeCount == 0)
-			continue;
-
-		for (i = 0; i < icmd->ulpBdeCount; i++) {
-			matp = lpfc_sli_ringpostbuf_get(phba, pring,
-							getPaddr(icmd->un.
-								 cont64[i].
-								 addrHigh,
-								 icmd->un.
-								 cont64[i].
-								 addrLow));
-			if (!matp) {
-				/* Insert lpfc log message here */
-				lpfc_post_buffer(phba, pring, cnt, 1);
-				go_exit = 1;
-				goto ct_unsol_event_exit_piocbq;
+	if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
+		list_for_each_entry(iocbq, &piocbq->list, list) {
+			icmd = &iocbq->iocb;
+			if (icmd->ulpBdeCount == 0) {
+				printk(KERN_ERR "%s (%d): Unsolicited CT, no "
+				       "BDE, iocbq = %p, status = x%x\n",
+				       __FUNCTION__, __LINE__,
+				       iocbq, iocbq->iocb.ulpStatus);
+				continue;
 			}
 
-			/* Typically for Unsolicited CT requests */
-			if (!pmbuf) {
-				pmbuf = matp;
-				INIT_LIST_HEAD(&pmbuf->list);
-			} else
-				list_add_tail(&matp->list, &pmbuf->list);
-
-			size += icmd->un.cont64[i].tus.f.bdeSize;
-			cnt++;
+			size  = icmd->un.cont64[0].tus.f.bdeSize;
+			lpfc_ct_ignore_hbq_buffer(phba, piocbq, bdeBuf1, size);
+			lpfc_in_buf_free(phba, bdeBuf1);
+			if (icmd->ulpBdeCount == 2) {
+				lpfc_ct_ignore_hbq_buffer(phba, piocbq, bdeBuf2,
+							  size);
+				lpfc_in_buf_free(phba, bdeBuf2);
+			}
 		}
+	} else {
+		struct lpfc_iocbq  *next;
+
+		list_for_each_entry_safe(iocbq, next, &piocbq->list, list) {
+			icmd = &iocbq->iocb;
+			if (icmd->ulpBdeCount == 0) {
+				printk(KERN_ERR "%s (%d): Unsolicited CT, no "
+				       "BDE, iocbq = %p, status = x%x\n",
+				       __FUNCTION__, __LINE__,
+				       iocbq, iocbq->iocb.ulpStatus);
+				continue;
+			}
 
-		icmd->ulpBdeCount = 0;
-	}
-
-	lpfc_post_buffer(phba, pring, cnt, 1);
-	if (save_icmd->ulpStatus) {
-		go_exit = 1;
-	}
-
-ct_unsol_event_exit_piocbq:
-	list_del(&head);
-	if (pmbuf) {
-		list_for_each_entry_safe(matp, next_matp, &pmbuf->list, list) {
-			lpfc_mbuf_free(phba, matp->virt, matp->phys);
-			list_del(&matp->list);
-			kfree(matp);
+			for (i = 0; i < icmd->ulpBdeCount; i++) {
+				paddr = getPaddr(icmd->un.cont64[i].addrHigh,
+						 icmd->un.cont64[i].addrLow);
+				mp = lpfc_sli_ringpostbuf_get(phba, pring,
+							      paddr);
+				size = icmd->un.cont64[i].tus.f.bdeSize;
+				lpfc_ct_unsol_buffer(phba, piocbq, mp, size);
+				lpfc_in_buf_free(phba, mp);
+			}
+			list_del(&iocbq->list);
+			lpfc_sli_release_iocbq(phba, iocbq);
 		}
-		lpfc_mbuf_free(phba, pmbuf->virt, pmbuf->phys);
-		kfree(pmbuf);
 	}
-	return;
 }
 
 static void
-lpfc_free_ct_rsp(struct lpfc_hba * phba, struct lpfc_dmabuf * mlist)
+lpfc_free_ct_rsp(struct lpfc_hba *phba, struct lpfc_dmabuf *mlist)
 {
 	struct lpfc_dmabuf *mlast, *next_mlast;
 
@@ -160,7 +196,7 @@ lpfc_free_ct_rsp(struct lpfc_hba * phba, struct lpfc_dmabuf * mlist)
 }
 
 static struct lpfc_dmabuf *
-lpfc_alloc_ct_rsp(struct lpfc_hba * phba, int cmdcode, struct ulp_bde64 * bpl,
+lpfc_alloc_ct_rsp(struct lpfc_hba *phba, int cmdcode, struct ulp_bde64 *bpl,
 		  uint32_t size, int *entries)
 {
 	struct lpfc_dmabuf *mlist = NULL;
@@ -181,7 +217,8 @@ lpfc_alloc_ct_rsp(struct lpfc_hba * phba, int cmdcode, struct ulp_bde64 * bpl,
 
 		INIT_LIST_HEAD(&mp->list);
 
-		if (cmdcode == be16_to_cpu(SLI_CTNS_GID_FT))
+		if (cmdcode == be16_to_cpu(SLI_CTNS_GID_FT) ||
+		    cmdcode == be16_to_cpu(SLI_CTNS_GFF_ID))
 			mp->virt = lpfc_mbuf_alloc(phba, MEM_PRI, &(mp->phys));
 		else
 			mp->virt = lpfc_mbuf_alloc(phba, 0, &(mp->phys));
@@ -201,8 +238,8 @@ lpfc_alloc_ct_rsp(struct lpfc_hba * phba, int cmdcode, struct ulp_bde64 * bpl,
 
 		bpl->tus.f.bdeFlags = BUFF_USE_RCV;
 		/* build buffer ptr list for IOCB */
-		bpl->addrLow = le32_to_cpu( putPaddrLow(mp->phys) );
-		bpl->addrHigh = le32_to_cpu( putPaddrHigh(mp->phys) );
+		bpl->addrLow = le32_to_cpu(putPaddrLow(mp->phys));
+		bpl->addrHigh = le32_to_cpu(putPaddrHigh(mp->phys));
 		bpl->tus.f.bdeSize = (uint16_t) cnt;
 		bpl->tus.w = le32_to_cpu(bpl->tus.w);
 		bpl++;
@@ -215,24 +252,53 @@ lpfc_alloc_ct_rsp(struct lpfc_hba * phba, int cmdcode, struct ulp_bde64 * bpl,
 	return mlist;
 }
 
+int
+lpfc_ct_free_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *ctiocb)
+{
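+	/*
+	 * Drop the ndlp reference and free the DMA buffers hung off the
+	 * CT iocb (command, response list and BPL), then return the iocb
+	 * to the pool.
+	 */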
+	struct lpfc_dmabuf *buf_ptr;
+
+	if (ctiocb->context_un.ndlp) {
+		lpfc_nlp_put(ctiocb->context_un.ndlp);
+		ctiocb->context_un.ndlp = NULL;
+	}
+	if (ctiocb->context1) {
+		buf_ptr = (struct lpfc_dmabuf *) ctiocb->context1;
+		lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
+		kfree(buf_ptr);
+		ctiocb->context1 = NULL;
+	}
+	if (ctiocb->context2) {
+		lpfc_free_ct_rsp(phba, (struct lpfc_dmabuf *) ctiocb->context2);
+		ctiocb->context2 = NULL;
+	}
+
+	if (ctiocb->context3) {
+		buf_ptr = (struct lpfc_dmabuf *) ctiocb->context3;
+		lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
+		kfree(buf_ptr);
+		ctiocb->context3 = NULL;
+	}
+	lpfc_sli_release_iocbq(phba, ctiocb);
+	return 0;
+}
+
 static int
-lpfc_gen_req(struct lpfc_hba *phba, struct lpfc_dmabuf *bmp,
+lpfc_gen_req(struct lpfc_vport *vport, struct lpfc_dmabuf *bmp,
 	     struct lpfc_dmabuf *inp, struct lpfc_dmabuf *outp,
 	     void (*cmpl) (struct lpfc_hba *, struct lpfc_iocbq *,
 		     struct lpfc_iocbq *),
 	     struct lpfc_nodelist *ndlp, uint32_t usr_flg, uint32_t num_entry,
-	     uint32_t tmo)
+	     uint32_t tmo, uint8_t retry)
 {
-
-	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_hba  *phba = vport->phba;
+	struct lpfc_sli  *psli = &phba->sli;
 	struct lpfc_sli_ring *pring = &psli->ring[LPFC_ELS_RING];
 	IOCB_t *icmd;
 	struct lpfc_iocbq *geniocb;
+	int rc;
 
 	/* Allocate buffer for  command iocb */
-	spin_lock_irq(phba->host->host_lock);
 	geniocb = lpfc_sli_get_iocbq(phba);
-	spin_unlock_irq(phba->host->host_lock);
 
 	if (geniocb == NULL)
 		return 1;
@@ -252,6 +318,7 @@ lpfc_gen_req(struct lpfc_hba *phba, struct lpfc_dmabuf *bmp,
 	/* Save for completion so we can release these resources */
 	geniocb->context1 = (uint8_t *) inp;
 	geniocb->context2 = (uint8_t *) outp;
+	geniocb->context_un.ndlp = ndlp;
 
 	/* Fill in payload, bp points to frame payload */
 	icmd->ulpCommand = CMD_GEN_REQUEST64_CR;
@@ -272,31 +339,40 @@ lpfc_gen_req(struct lpfc_hba *phba, struct lpfc_dmabuf *bmp,
 	icmd->ulpClass = CLASS3;
 	icmd->ulpContext = ndlp->nlp_rpi;
 
+	if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
+		/* For GEN_REQUEST64_CR, use the RPI */
+		icmd->ulpCt_h = 0;
+		icmd->ulpCt_l = 0;
+	}
+
 	/* Issue GEN REQ IOCB for NPORT <did> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0119 Issue GEN REQ IOCB for NPORT x%x "
-			"Data: x%x x%x\n", phba->brd_no, icmd->un.ulpWord[5],
-			icmd->ulpIoTag, phba->hba_state);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0119 Issue GEN REQ IOCB to NPORT x%x "
+			 "Data: x%x x%x\n",
+			 ndlp->nlp_DID, icmd->ulpIoTag,
+			 vport->port_state);
 	geniocb->iocb_cmpl = cmpl;
 	geniocb->drvrTimeout = icmd->ulpTimeout + LPFC_DRVR_TIMEOUT;
-	spin_lock_irq(phba->host->host_lock);
-	if (lpfc_sli_issue_iocb(phba, pring, geniocb, 0) == IOCB_ERROR) {
+	geniocb->vport = vport;
+	geniocb->retry = retry;
+	rc = lpfc_sli_issue_iocb(phba, pring, geniocb, 0);
+
+	if (rc == IOCB_ERROR) {
 		lpfc_sli_release_iocbq(phba, geniocb);
-		spin_unlock_irq(phba->host->host_lock);
 		return 1;
 	}
-	spin_unlock_irq(phba->host->host_lock);
 
 	return 0;
 }
 
 static int
-lpfc_ct_cmd(struct lpfc_hba *phba, struct lpfc_dmabuf *inmp,
+lpfc_ct_cmd(struct lpfc_vport *vport, struct lpfc_dmabuf *inmp,
 	    struct lpfc_dmabuf *bmp, struct lpfc_nodelist *ndlp,
 	    void (*cmpl) (struct lpfc_hba *, struct lpfc_iocbq *,
 			  struct lpfc_iocbq *),
-	    uint32_t rsp_size)
+	    uint32_t rsp_size, uint8_t retry)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	struct ulp_bde64 *bpl = (struct ulp_bde64 *) bmp->virt;
 	struct lpfc_dmabuf *outmp;
 	int cnt = 0, status;
@@ -310,8 +386,8 @@ lpfc_ct_cmd(struct lpfc_hba *phba, struct lpfc_dmabuf *inmp,
 	if (!outmp)
 		return -ENOMEM;
 
-	status = lpfc_gen_req(phba, bmp, inmp, outmp, cmpl, ndlp, 0,
-			      cnt+1, 0);
+	status = lpfc_gen_req(vport, bmp, inmp, outmp, cmpl, ndlp, 0,
+			      cnt+1, 0, retry);
 	if (status) {
 		lpfc_free_ct_rsp(phba, outmp);
 		return -ENOMEM;
@@ -319,20 +395,37 @@ lpfc_ct_cmd(struct lpfc_hba *phba, struct lpfc_dmabuf *inmp,
 	return 0;
 }
 
+struct lpfc_vport *
+lpfc_find_vport_by_did(struct lpfc_hba *phba, uint32_t did) {
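+	/*
+	 * Walk the HBA's port list under hbalock and return the vport
+	 * whose assigned DID matches, or NULL if none does.
+	 */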
+	struct lpfc_vport *vport_curr;
+	unsigned long flags;
+
+	spin_lock_irqsave(&phba->hbalock, flags);
+	list_for_each_entry(vport_curr, &phba->port_list, listentry) {
+		if ((vport_curr->fc_myDID) && (vport_curr->fc_myDID == did)) {
+			spin_unlock_irqrestore(&phba->hbalock, flags);
+			return vport_curr;
+		}
+	}
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+	return NULL;
+}
+
 static int
-lpfc_ns_rsp(struct lpfc_hba * phba, struct lpfc_dmabuf * mp, uint32_t Size)
+lpfc_ns_rsp(struct lpfc_vport *vport, struct lpfc_dmabuf *mp, uint32_t Size)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	struct lpfc_sli_ct_request *Response =
 		(struct lpfc_sli_ct_request *) mp->virt;
 	struct lpfc_nodelist *ndlp = NULL;
 	struct lpfc_dmabuf *mlast, *next_mp;
 	uint32_t *ctptr = (uint32_t *) & Response->un.gid.PortType;
-	uint32_t Did;
-	uint32_t CTentry;
+	uint32_t Did, CTentry;
 	int Cnt;
 	struct list_head head;
 
-	lpfc_set_disctmo(phba);
+	lpfc_set_disctmo(vport);
+	vport->num_disc_nodes = 0;
 
 
 	list_add_tail(&head, &mp->list);
@@ -350,39 +443,93 @@ lpfc_ns_rsp(struct lpfc_hba * phba, struct lpfc_dmabuf * mp, uint32_t Size)
 
 		/* Loop through entire NameServer list of DIDs */
 		while (Cnt >= sizeof (uint32_t)) {
-
 			/* Get next DID from NameServer List */
 			CTentry = *ctptr++;
 			Did = ((be32_to_cpu(CTentry)) & Mask_DID);
 
 			ndlp = NULL;
-			if (Did != phba->fc_myDID) {
-				/* Check for rscn processing or not */
-				ndlp = lpfc_setup_disc_node(phba, Did);
-			}
-			/* Mark all node table entries that are in the
-			   Nameserver */
-			if (ndlp) {
-				/* NameServer Rsp */
-				lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-						"%d:0238 Process x%x NameServer"
-						" Rsp Data: x%x x%x x%x\n",
-						phba->brd_no,
+
+			/*
+			 * Check for rscn processing or not
+			 * To conserve rpi's, filter out addresses for other
+			 * vports on the same physical HBAs.
+			 */
+			if ((Did != vport->fc_myDID) &&
+			    ((lpfc_find_vport_by_did(phba, Did) == NULL) ||
+			     vport->cfg_peer_port_login)) {
+				if ((vport->port_type != LPFC_NPIV_PORT) ||
+				    (!(vport->ct_flags & FC_CT_RFF_ID)) ||
+				    (!vport->cfg_restrict_login)) {
+					ndlp = lpfc_setup_disc_node(vport, Did);
+					if (ndlp) {
+						lpfc_debugfs_disc_trc(vport,
+						LPFC_DISC_TRC_CT,
+						"Parse GID_FTrsp: "
+						"did:x%x flg:x%x x%x",
 						Did, ndlp->nlp_flag,
-						phba->fc_flag,
-						phba->fc_rscn_id_cnt);
-			} else {
-				/* NameServer Rsp */
-				lpfc_printf_log(phba,
-						KERN_INFO,
-						LOG_DISCOVERY,
-						"%d:0239 Skip x%x NameServer "
-						"Rsp Data: x%x x%x x%x\n",
-						phba->brd_no,
-						Did, Size, phba->fc_flag,
-						phba->fc_rscn_id_cnt);
+						vport->fc_flag);
+
+						lpfc_printf_vlog(vport,
+							KERN_INFO,
+							LOG_DISCOVERY,
+							"0238 Process "
+							"x%x NameServer Rsp "
+							"Data: x%x x%x x%x\n",
+							Did, ndlp->nlp_flag,
+							vport->fc_flag,
+							vport->fc_rscn_id_cnt);
+					} else {
+						lpfc_debugfs_disc_trc(vport,
+						LPFC_DISC_TRC_CT,
+						"Skip1 GID_FTrsp: "
+						"did:x%x flg:x%x cnt:%d",
+						Did, vport->fc_flag,
+						vport->fc_rscn_id_cnt);
+
+						lpfc_printf_vlog(vport,
+							KERN_INFO,
+							LOG_DISCOVERY,
+							"0239 Skip x%x "
+							"NameServer Rsp Data: "
+							"x%x x%x\n",
+							Did, vport->fc_flag,
+							vport->fc_rscn_id_cnt);
+					}
+
+				} else {
+					if (!(vport->fc_flag & FC_RSCN_MODE) ||
+					(lpfc_rscn_payload_check(vport, Did))) {
+						lpfc_debugfs_disc_trc(vport,
+						LPFC_DISC_TRC_CT,
+						"Query GID_FTrsp: "
+						"did:x%x flg:x%x cnt:%d",
+						Did, vport->fc_flag,
+						vport->fc_rscn_id_cnt);
+
+						if (lpfc_ns_cmd(vport,
+							SLI_CTNS_GFF_ID,
+							0, Did) == 0)
+							vport->num_disc_nodes++;
+					}
+					else {
+						lpfc_debugfs_disc_trc(vport,
+						LPFC_DISC_TRC_CT,
+						"Skip2 GID_FTrsp: "
+						"did:x%x flg:x%x cnt:%d",
+						Did, vport->fc_flag,
+						vport->fc_rscn_id_cnt);
+
+						lpfc_printf_vlog(vport,
+							KERN_INFO,
+							LOG_DISCOVERY,
+							"0245 Skip x%x "
+							"NameServer Rsp Data: "
+							"x%x x%x\n",
+							Did, vport->fc_flag,
+							vport->fc_rscn_id_cnt);
+					}
+				}
 			}
-
 			if (CTentry & (be32_to_cpu(SLI_CT_LAST_ENTRY)))
 				goto nsout1;
 			Cnt -= sizeof (uint32_t);
@@ -393,192 +540,467 @@ lpfc_ns_rsp(struct lpfc_hba * phba, struct lpfc_dmabuf * mp, uint32_t Size)
 
 nsout1:
 	list_del(&head);
-
-	/*
- 	 * The driver has cycled through all Nports in the RSCN payload.
- 	 * Complete the handling by cleaning up and marking the
- 	 * current driver state.
- 	 */
-	if (phba->hba_state == LPFC_HBA_READY) {
-		lpfc_els_flush_rscn(phba);
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag |= FC_RSCN_MODE; /* we are still in RSCN mode */
-		spin_unlock_irq(phba->host->host_lock);
-	}
 	return 0;
 }
 
-
-
-
 static void
-lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-			struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_ct_cmd_gid_ft(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			struct lpfc_iocbq *rspiocb)
 {
+	struct lpfc_vport *vport = cmdiocb->vport;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 	IOCB_t *irsp;
-	struct lpfc_sli *psli;
 	struct lpfc_dmabuf *bmp;
-	struct lpfc_dmabuf *inp;
 	struct lpfc_dmabuf *outp;
-	struct lpfc_nodelist *ndlp;
 	struct lpfc_sli_ct_request *CTrsp;
+	struct lpfc_nodelist *ndlp;
+	int rc;
+
+	/* First save ndlp, before we overwrite it */
+	ndlp = cmdiocb->context_un.ndlp;
 
-	psli = &phba->sli;
 	/* we pass cmdiocb to state machine which needs rspiocb as well */
 	cmdiocb->context_un.rsp_iocb = rspiocb;
 
-	inp = (struct lpfc_dmabuf *) cmdiocb->context1;
 	outp = (struct lpfc_dmabuf *) cmdiocb->context2;
 	bmp = (struct lpfc_dmabuf *) cmdiocb->context3;
-
 	irsp = &rspiocb->iocb;
-	if (irsp->ulpStatus) {
-		if ((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) &&
-			((irsp->un.ulpWord[4] == IOERR_SLI_DOWN) ||
-			 (irsp->un.ulpWord[4] == IOERR_SLI_ABORTED))) {
-			goto out;
-		}
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT,
+		 "GID_FT cmpl:     status:x%x/x%x rtry:%d",
+		irsp->ulpStatus, irsp->un.ulpWord[4], vport->fc_ns_retry);
+
+	/* Don't bother processing response if vport is being torn down. */
+	if (vport->load_flag & FC_UNLOADING)
+		goto out;
+
+
+	if (lpfc_els_chk_latt(vport) || lpfc_error_lost_link(irsp)) {
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+				 "0216 Link event during NS query\n");
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+		goto out;
+	}
+
+	if (irsp->ulpStatus) {
 		/* Check for retry */
-		if (phba->fc_ns_retry < LPFC_MAX_NS_RETRY) {
-			phba->fc_ns_retry++;
+		if (vport->fc_ns_retry < LPFC_MAX_NS_RETRY) {
+			if ((irsp->ulpStatus != IOSTAT_LOCAL_REJECT) ||
+				(irsp->un.ulpWord[4] != IOERR_NO_RESOURCES))
+				vport->fc_ns_retry++;
 			/* CT command is being retried */
-			ndlp =
-			    lpfc_findnode_did(phba, NLP_SEARCH_UNMAPPED,
-					      NameServer_DID);
-			if (ndlp) {
-				if (lpfc_ns_cmd(phba, ndlp, SLI_CTNS_GID_FT) ==
-				    0) {
-					goto out;
-				}
-			}
+			rc = lpfc_ns_cmd(vport, SLI_CTNS_GID_FT,
+					 vport->fc_ns_retry, 0);
+			if (rc == 0)
+				goto out;
 		}
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0257 GID_FT Query error: 0x%x 0x%x\n",
+				 irsp->ulpStatus, vport->fc_ns_retry);
 	} else {
 		/* Good status, continue checking */
 		CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
 		if (CTrsp->CommandResponse.bits.CmdRsp ==
 		    be16_to_cpu(SLI_CT_RESPONSE_FS_ACC)) {
-			lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-					"%d:0208 NameServer Rsp "
-					"Data: x%x\n",
-					phba->brd_no,
-					phba->fc_flag);
-			lpfc_ns_rsp(phba, outp,
+			lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+					 "0208 NameServer Rsp Data: x%x\n",
+					 vport->fc_flag);
+			lpfc_ns_rsp(vport, outp,
 				    (uint32_t) (irsp->un.genreq64.bdl.bdeSize));
 		} else if (CTrsp->CommandResponse.bits.CmdRsp ==
 			   be16_to_cpu(SLI_CT_RESPONSE_FS_RJT)) {
 			/* NameServer Rsp Error */
-			lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-					"%d:0240 NameServer Rsp Error "
+			if ((CTrsp->ReasonCode == SLI_CT_UNABLE_TO_PERFORM_REQ)
+			    && (CTrsp->Explanation == SLI_CT_NO_FC4_TYPES)) {
+				lpfc_printf_vlog(vport, KERN_INFO,
+					LOG_DISCOVERY,
+					"0269 No NameServer Entries "
 					"Data: x%x x%x x%x x%x\n",
-					phba->brd_no,
 					CTrsp->CommandResponse.bits.CmdRsp,
 					(uint32_t) CTrsp->ReasonCode,
 					(uint32_t) CTrsp->Explanation,
-					phba->fc_flag);
+					vport->fc_flag);
+
+				lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT,
+				"GID_FT no entry  cmd:x%x rsn:x%x exp:x%x",
+				(uint32_t)CTrsp->CommandResponse.bits.CmdRsp,
+				(uint32_t) CTrsp->ReasonCode,
+				(uint32_t) CTrsp->Explanation);
+			} else {
+				lpfc_printf_vlog(vport, KERN_INFO,
+					LOG_DISCOVERY,
+					"0240 NameServer Rsp Error "
+					"Data: x%x x%x x%x x%x\n",
+					CTrsp->CommandResponse.bits.CmdRsp,
+					(uint32_t) CTrsp->ReasonCode,
+					(uint32_t) CTrsp->Explanation,
+					vport->fc_flag);
+
+				lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT,
+				"GID_FT rsp err1  cmd:x%x rsn:x%x exp:x%x",
+				(uint32_t)CTrsp->CommandResponse.bits.CmdRsp,
+				(uint32_t) CTrsp->ReasonCode,
+				(uint32_t) CTrsp->Explanation);
+			}
+
+
 		} else {
 			/* NameServer Rsp Error */
-			lpfc_printf_log(phba,
-					KERN_INFO,
-					LOG_DISCOVERY,
-					"%d:0241 NameServer Rsp Error "
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+					"0241 NameServer Rsp Error "
 					"Data: x%x x%x x%x x%x\n",
-					phba->brd_no,
 					CTrsp->CommandResponse.bits.CmdRsp,
 					(uint32_t) CTrsp->ReasonCode,
 					(uint32_t) CTrsp->Explanation,
-					phba->fc_flag);
+					vport->fc_flag);
+
+			lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT,
+				"GID_FT rsp err2  cmd:x%x rsn:x%x exp:x%x",
+				(uint32_t)CTrsp->CommandResponse.bits.CmdRsp,
+				(uint32_t) CTrsp->ReasonCode,
+				(uint32_t) CTrsp->Explanation);
 		}
 	}
 	/* Link up / RSCN discovery */
-	lpfc_disc_start(phba);
+	if (vport->num_disc_nodes == 0) {
+		/*
+		 * The driver has cycled through all Nports in the RSCN payload.
+		 * Complete the handling by cleaning up and marking the
+		 * current driver state.
+		 */
+		if (vport->port_state >= LPFC_DISC_AUTH) {
+			if (vport->fc_flag & FC_RSCN_MODE) {
+				lpfc_els_flush_rscn(vport);
+				spin_lock_irq(shost->host_lock);
+				vport->fc_flag |= FC_RSCN_MODE; /* RSCN still */
+				spin_unlock_irq(shost->host_lock);
+			}
+			else
+				lpfc_els_flush_rscn(vport);
+		}
+
+		lpfc_disc_start(vport);
+	}
 out:
-	lpfc_free_ct_rsp(phba, outp);
-	lpfc_mbuf_free(phba, inp->virt, inp->phys);
-	lpfc_mbuf_free(phba, bmp->virt, bmp->phys);
-	kfree(inp);
-	kfree(bmp);
-	spin_lock_irq(phba->host->host_lock);
-	lpfc_sli_release_iocbq(phba, cmdiocb);
-	spin_unlock_irq(phba->host->host_lock);
+	cmdiocb->context_un.ndlp = ndlp; /* Now restore ndlp for free */
+	lpfc_ct_free_iocb(phba, cmdiocb);
 	return;
 }
 
 static void
-lpfc_cmpl_ct_cmd_rft_id(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-			struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			struct lpfc_iocbq *rspiocb)
 {
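+	/*
+	 * GFF_ID reports the remote port's FC-4 features: skip ports that
+	 * are initiator-only, otherwise set up a discovery node for the
+	 * DID.  When the last outstanding query completes, finish RSCN
+	 * handling and restart discovery.
+	 */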
-	struct lpfc_sli *psli;
-	struct lpfc_dmabuf *bmp;
+	struct lpfc_vport *vport = cmdiocb->vport;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	IOCB_t *irsp = &rspiocb->iocb;
+	struct lpfc_dmabuf *inp = (struct lpfc_dmabuf *) cmdiocb->context1;
+	struct lpfc_dmabuf *outp = (struct lpfc_dmabuf *) cmdiocb->context2;
+	struct lpfc_sli_ct_request *CTrsp;
+	int did;
+	uint8_t fbits;
+	struct lpfc_nodelist *ndlp;
+
+	did = ((struct lpfc_sli_ct_request *) inp->virt)->un.gff.PortId;
+	did = be32_to_cpu(did);
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT,
+		"GFF_ID cmpl:     status:x%x/x%x did:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4], did);
+
+	if (irsp->ulpStatus == IOSTAT_SUCCESS) {
+		/* Good status, continue checking */
+		CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
+		fbits = CTrsp->un.gff_acc.fbits[FCP_TYPE_FEATURE_OFFSET];
+
+		if (CTrsp->CommandResponse.bits.CmdRsp ==
+		    be16_to_cpu(SLI_CT_RESPONSE_FS_ACC)) {
+			if ((fbits & FC4_FEATURE_INIT) &&
+			    !(fbits & FC4_FEATURE_TARGET)) {
+				lpfc_printf_vlog(vport, KERN_INFO,
+						 LOG_DISCOVERY,
+						 "0270 Skip x%x GFF "
+						 "NameServer Rsp Data: (init) "
+						 "x%x x%x\n", did, fbits,
+						 vport->fc_rscn_id_cnt);
+				goto out;
+			}
+		}
+	}
+	else {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0267 NameServer GFF Rsp "
+				 "x%x Error (%d %d) Data: x%x x%x\n",
+				 did, irsp->ulpStatus, irsp->un.ulpWord[4],
+				 vport->fc_flag, vport->fc_rscn_id_cnt);
+	}
+
+	/* This is a target port, unregistered port, or the GFF_ID failed */
+	ndlp = lpfc_setup_disc_node(vport, did);
+	if (ndlp) {
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+				 "0242 Process x%x GFF "
+				 "NameServer Rsp Data: x%x x%x x%x\n",
+				 did, ndlp->nlp_flag, vport->fc_flag,
+				 vport->fc_rscn_id_cnt);
+	} else {
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+				 "0243 Skip x%x GFF "
+				 "NameServer Rsp Data: x%x x%x\n", did,
+				 vport->fc_flag, vport->fc_rscn_id_cnt);
+	}
+out:
+	/* Link up / RSCN discovery */
+	if (vport->num_disc_nodes)
+		vport->num_disc_nodes--;
+	if (vport->num_disc_nodes == 0) {
+		/*
+		 * The driver has cycled through all Nports in the RSCN payload.
+		 * Complete the handling by cleaning up and marking the
+		 * current driver state.
+		 */
+		if (vport->port_state >= LPFC_DISC_AUTH) {
+			if (vport->fc_flag & FC_RSCN_MODE) {
+				lpfc_els_flush_rscn(vport);
+				spin_lock_irq(shost->host_lock);
+				vport->fc_flag |= FC_RSCN_MODE; /* RSCN still */
+				spin_unlock_irq(shost->host_lock);
+			}
+			else
+				lpfc_els_flush_rscn(vport);
+		}
+		lpfc_disc_start(vport);
+	}
+	lpfc_ct_free_iocb(phba, cmdiocb);
+	return;
+}
+
+
+static void
+lpfc_cmpl_ct(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+	     struct lpfc_iocbq *rspiocb)
+{
+	struct lpfc_vport *vport = cmdiocb->vport;
 	struct lpfc_dmabuf *inp;
 	struct lpfc_dmabuf *outp;
 	IOCB_t *irsp;
 	struct lpfc_sli_ct_request *CTrsp;
+	struct lpfc_nodelist *ndlp;
+	int cmdcode, rc;
+	uint8_t retry;
+	uint32_t latt;
+
+	/* First save ndlp, before we overwrite it */
+	ndlp = cmdiocb->context_un.ndlp;
 
-	psli = &phba->sli;
 	/* we pass cmdiocb to state machine which needs rspiocb as well */
 	cmdiocb->context_un.rsp_iocb = rspiocb;
 
 	inp = (struct lpfc_dmabuf *) cmdiocb->context1;
 	outp = (struct lpfc_dmabuf *) cmdiocb->context2;
-	bmp = (struct lpfc_dmabuf *) cmdiocb->context3;
 	irsp = &rspiocb->iocb;
 
+	cmdcode = be16_to_cpu(((struct lpfc_sli_ct_request *) inp->virt)->
+					CommandResponse.bits.CmdRsp);
 	CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
 
+	latt = lpfc_els_chk_latt(vport);
+
 	/* RFT request completes status <ulpStatus> CmdRsp <CmdRsp> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-			"%d:0209 RFT request completes ulpStatus x%x "
-			"CmdRsp x%x\n", phba->brd_no, irsp->ulpStatus,
-			CTrsp->CommandResponse.bits.CmdRsp);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0209 CT Request completes, latt %d, "
+			 "ulpStatus x%x CmdRsp x%x, Context x%x, Tag x%x\n",
+			 latt, irsp->ulpStatus,
+			 CTrsp->CommandResponse.bits.CmdRsp,
+			 cmdiocb->iocb.ulpContext, cmdiocb->iocb.ulpIoTag);
 
-	lpfc_free_ct_rsp(phba, outp);
-	lpfc_mbuf_free(phba, inp->virt, inp->phys);
-	lpfc_mbuf_free(phba, bmp->virt, bmp->phys);
-	kfree(inp);
-	kfree(bmp);
-	spin_lock_irq(phba->host->host_lock);
-	lpfc_sli_release_iocbq(phba, cmdiocb);
-	spin_unlock_irq(phba->host->host_lock);
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT,
+		"CT cmd cmpl:     status:x%x/x%x cmd:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4], cmdcode);
+
+	if (irsp->ulpStatus) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0268 NS cmd %x Error (%d %d)\n",
+				 cmdcode, irsp->ulpStatus, irsp->un.ulpWord[4]);
+
+		if ((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) &&
+			((irsp->un.ulpWord[4] == IOERR_SLI_DOWN) ||
+			 (irsp->un.ulpWord[4] == IOERR_SLI_ABORTED)))
+			goto out;
+
+		retry = cmdiocb->retry;
+		if (retry >= LPFC_MAX_NS_RETRY)
+			goto out;
+
+		retry++;
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+				 "0216 Retrying NS cmd %x\n", cmdcode);
+		rc = lpfc_ns_cmd(vport, cmdcode, retry, 0);
+		if (rc == 0)
+			goto out;
+	}
+
+out:
+	cmdiocb->context_un.ndlp = ndlp; /* Now restore ndlp for free */
+	lpfc_ct_free_iocb(phba, cmdiocb);
 	return;
 }
 
 static void
-lpfc_cmpl_ct_cmd_rnn_id(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-			struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_ct_cmd_rft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			struct lpfc_iocbq *rspiocb)
 {
-	lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
+	IOCB_t *irsp = &rspiocb->iocb;
+	struct lpfc_vport *vport = cmdiocb->vport;
+
+	if (irsp->ulpStatus == IOSTAT_SUCCESS) {
+		struct lpfc_dmabuf *outp;
+		struct lpfc_sli_ct_request *CTrsp;
+
+		outp = (struct lpfc_dmabuf *) cmdiocb->context2;
+		CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
+		if (CTrsp->CommandResponse.bits.CmdRsp ==
+		    be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
+			vport->ct_flags |= FC_CT_RFT_ID;
+	}
+	lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
 	return;
 }
 
 static void
-lpfc_cmpl_ct_cmd_rsnn_nn(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-			 struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_ct_cmd_rnn_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			struct lpfc_iocbq *rspiocb)
 {
-	lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
+	IOCB_t *irsp = &rspiocb->iocb;
+	struct lpfc_vport *vport = cmdiocb->vport;
+
+	if (irsp->ulpStatus == IOSTAT_SUCCESS) {
+		struct lpfc_dmabuf *outp;
+		struct lpfc_sli_ct_request *CTrsp;
+
+		outp = (struct lpfc_dmabuf *) cmdiocb->context2;
+		CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
+		if (CTrsp->CommandResponse.bits.CmdRsp ==
+		    be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
+			vport->ct_flags |= FC_CT_RNN_ID;
+	}
+	lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
 	return;
 }
 
 static void
-lpfc_cmpl_ct_cmd_rff_id(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-			 struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_ct_cmd_rspn_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			 struct lpfc_iocbq *rspiocb)
 {
-	lpfc_cmpl_ct_cmd_rft_id(phba, cmdiocb, rspiocb);
+	IOCB_t *irsp = &rspiocb->iocb;
+	struct lpfc_vport *vport = cmdiocb->vport;
+
+	if (irsp->ulpStatus == IOSTAT_SUCCESS) {
+		struct lpfc_dmabuf *outp;
+		struct lpfc_sli_ct_request *CTrsp;
+
+		outp = (struct lpfc_dmabuf *) cmdiocb->context2;
+		CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
+		if (CTrsp->CommandResponse.bits.CmdRsp ==
+		    be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
+			vport->ct_flags |= FC_CT_RSPN_ID;
+	}
+	lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
 	return;
 }
 
-void
-lpfc_get_hba_sym_node_name(struct lpfc_hba * phba, uint8_t * symbp)
+static void
+lpfc_cmpl_ct_cmd_rsnn_nn(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			 struct lpfc_iocbq *rspiocb)
 {
-	char fwrev[16];
+	IOCB_t *irsp = &rspiocb->iocb;
+	struct lpfc_vport *vport = cmdiocb->vport;
+
+	if (irsp->ulpStatus == IOSTAT_SUCCESS) {
+		struct lpfc_dmabuf *outp;
+		struct lpfc_sli_ct_request *CTrsp;
+
+		outp = (struct lpfc_dmabuf *) cmdiocb->context2;
+		CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
+		if (CTrsp->CommandResponse.bits.CmdRsp ==
+		    be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
+			vport->ct_flags |= FC_CT_RSNN_NN;
+	}
+	lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
+	return;
+}
+
+static void
+lpfc_cmpl_ct_cmd_da_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ struct lpfc_iocbq *rspiocb)
+{
+	struct lpfc_vport *vport = cmdiocb->vport;
+
+	/* even if it fails we will act as though it succeeded. */
+	vport->ct_flags = 0;
+	lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
+	return;
+}
+
+static void
+lpfc_cmpl_ct_cmd_rff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			struct lpfc_iocbq *rspiocb)
+{
+	IOCB_t *irsp = &rspiocb->iocb;
+	struct lpfc_vport *vport = cmdiocb->vport;
 
-	lpfc_decode_firmware_rev(phba, fwrev, 0);
+	if (irsp->ulpStatus == IOSTAT_SUCCESS) {
+		struct lpfc_dmabuf *outp;
+		struct lpfc_sli_ct_request *CTrsp;
 
-	sprintf(symbp, "Emulex %s FV%s DV%s", phba->ModelName,
-		fwrev, lpfc_release_version);
+		outp = (struct lpfc_dmabuf *) cmdiocb->context2;
+		CTrsp = (struct lpfc_sli_ct_request *) outp->virt;
+		if (CTrsp->CommandResponse.bits.CmdRsp ==
+		    be16_to_cpu(SLI_CT_RESPONSE_FS_ACC))
+			vport->ct_flags |= FC_CT_RFF_ID;
+	}
+	lpfc_cmpl_ct(phba, cmdiocb, rspiocb);
 	return;
 }
 
+static int
+lpfc_vport_symbolic_port_name(struct lpfc_vport *vport, char *symbol,
+	size_t size)
+{
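+	/*
+	 * Format the symbolic port name as "Emulex PPN-xx:..:xx" and, for
+	 * NPIV ports, append the VPort index and virtual port name;
+	 * returns the resulting length.
+	 */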
+	int n;
+	uint8_t *wwn = vport->phba->wwpn;
+
+	n = snprintf(symbol, size,
+		     "Emulex PPN-%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x",
+		     wwn[0], wwn[1], wwn[2], wwn[3],
+		     wwn[4], wwn[5], wwn[6], wwn[7]);
+
+	if (vport->port_type == LPFC_PHYSICAL_PORT)
+		return n;
+
+	if (n < size)
+		n += snprintf(symbol + n, size - n, " VPort-%d", vport->vpi);
+
+	if (n < size && vport->vname)
+		n += snprintf(symbol + n, size - n, " VName-%s", vport->vname);
+	return n;
+}
+
+int
+lpfc_vport_symbolic_node_name(struct lpfc_vport *vport, char *symbol,
+	size_t size)
+{
+	char fwrev[16];
+	int n;
+
+	lpfc_decode_firmware_rev(vport->phba, fwrev, 0);
+
+	n = snprintf(symbol, size, "Emulex %s FV%s DV%s",
+		vport->phba->ModelName, fwrev, lpfc_release_version);
+	return n;
+}
+
 /*
  * lpfc_ns_cmd
  * Description:
@@ -587,57 +1009,79 @@ lpfc_get_hba_sym_node_name(struct lpfc_hba * phba, uint8_t * symbp)
  *       LI_CTNS_RFT_ID
  */
 int
-lpfc_ns_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
+lpfc_ns_cmd(struct lpfc_vport *vport, int cmdcode,
+	    uint8_t retry, uint32_t context)
 {
+	struct lpfc_nodelist * ndlp;
+	struct lpfc_hba *phba = vport->phba;
 	struct lpfc_dmabuf *mp, *bmp;
 	struct lpfc_sli_ct_request *CtReq;
 	struct ulp_bde64 *bpl;
 	void (*cmpl) (struct lpfc_hba *, struct lpfc_iocbq *,
 		      struct lpfc_iocbq *) = NULL;
 	uint32_t rsp_size = 1024;
+	size_t   size;
+	int rc = 0;
+
+	ndlp = lpfc_findnode_did(vport, NameServer_DID);
+	if (ndlp == NULL || ndlp->nlp_state != NLP_STE_UNMAPPED_NODE) {
+		rc=1;
+		goto ns_cmd_exit;
+	}
 
 	/* fill in BDEs for command */
 	/* Allocate buffer for command payload */
 	mp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
-	if (!mp)
+	if (!mp) {
+		rc=2;
 		goto ns_cmd_exit;
+	}
 
 	INIT_LIST_HEAD(&mp->list);
 	mp->virt = lpfc_mbuf_alloc(phba, MEM_PRI, &(mp->phys));
-	if (!mp->virt)
+	if (!mp->virt) {
+		rc=3;
 		goto ns_cmd_free_mp;
+	}
 
 	/* Allocate buffer for Buffer ptr list */
 	bmp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
-	if (!bmp)
+	if (!bmp) {
+		rc=4;
 		goto ns_cmd_free_mpvirt;
+	}
 
 	INIT_LIST_HEAD(&bmp->list);
 	bmp->virt = lpfc_mbuf_alloc(phba, MEM_PRI, &(bmp->phys));
-	if (!bmp->virt)
+	if (!bmp->virt) {
+		rc = 5;
 		goto ns_cmd_free_bmp;
+	}
 
 	/* NameServer Req */
-	lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_DISCOVERY,
-			"%d:0236 NameServer Req Data: x%x x%x x%x\n",
-			phba->brd_no, cmdcode, phba->fc_flag,
-			phba->fc_rscn_id_cnt);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0236 NameServer Req Data: x%x x%x x%x\n",
+			 cmdcode, vport->fc_flag, vport->fc_rscn_id_cnt);
 
 	bpl = (struct ulp_bde64 *) bmp->virt;
 	memset(bpl, 0, sizeof(struct ulp_bde64));
-	bpl->addrHigh = le32_to_cpu( putPaddrHigh(mp->phys) );
-	bpl->addrLow = le32_to_cpu( putPaddrLow(mp->phys) );
+	bpl->addrHigh = le32_to_cpu(putPaddrHigh(mp->phys));
+	bpl->addrLow = le32_to_cpu(putPaddrLow(mp->phys));
 	bpl->tus.f.bdeFlags = 0;
 	if (cmdcode == SLI_CTNS_GID_FT)
 		bpl->tus.f.bdeSize = GID_REQUEST_SZ;
+	else if (cmdcode == SLI_CTNS_GFF_ID)
+		bpl->tus.f.bdeSize = GFF_REQUEST_SZ;
 	else if (cmdcode == SLI_CTNS_RFT_ID)
 		bpl->tus.f.bdeSize = RFT_REQUEST_SZ;
 	else if (cmdcode == SLI_CTNS_RNN_ID)
 		bpl->tus.f.bdeSize = RNN_REQUEST_SZ;
+	else if (cmdcode == SLI_CTNS_RSPN_ID)
+		bpl->tus.f.bdeSize = RSPN_REQUEST_SZ;
 	else if (cmdcode == SLI_CTNS_RSNN_NN)
 		bpl->tus.f.bdeSize = RSNN_REQUEST_SZ;
+	else if (cmdcode == SLI_CTNS_DA_ID)
+		bpl->tus.f.bdeSize = DA_ID_REQUEST_SZ;
 	else if (cmdcode == SLI_CTNS_RFF_ID)
 		bpl->tus.f.bdeSize = RFF_REQUEST_SZ;
 	else
@@ -656,56 +1100,91 @@ lpfc_ns_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
 		CtReq->CommandResponse.bits.CmdRsp =
 		    be16_to_cpu(SLI_CTNS_GID_FT);
 		CtReq->un.gid.Fc4Type = SLI_CTPT_FCP;
-		if (phba->hba_state < LPFC_HBA_READY)
-			phba->hba_state = LPFC_NS_QRY;
-		lpfc_set_disctmo(phba);
+		if (vport->port_state < LPFC_NS_QRY)
+			vport->port_state = LPFC_NS_QRY;
+		lpfc_set_disctmo(vport);
 		cmpl = lpfc_cmpl_ct_cmd_gid_ft;
 		rsp_size = FC_MAX_NS_RSP;
 		break;
 
+	case SLI_CTNS_GFF_ID:
+		CtReq->CommandResponse.bits.CmdRsp =
+			be16_to_cpu(SLI_CTNS_GFF_ID);
+		CtReq->un.gff.PortId = be32_to_cpu(context);
+		cmpl = lpfc_cmpl_ct_cmd_gff_id;
+		break;
+
 	case SLI_CTNS_RFT_ID:
+		vport->ct_flags &= ~FC_CT_RFT_ID;
 		CtReq->CommandResponse.bits.CmdRsp =
 		    be16_to_cpu(SLI_CTNS_RFT_ID);
-		CtReq->un.rft.PortId = be32_to_cpu(phba->fc_myDID);
+		CtReq->un.rft.PortId = be32_to_cpu(vport->fc_myDID);
 		CtReq->un.rft.fcpReg = 1;
 		cmpl = lpfc_cmpl_ct_cmd_rft_id;
 		break;
 
-	case SLI_CTNS_RFF_ID:
-		CtReq->CommandResponse.bits.CmdRsp =
-			be16_to_cpu(SLI_CTNS_RFF_ID);
-		CtReq->un.rff.PortId = be32_to_cpu(phba->fc_myDID);
-		CtReq->un.rff.feature_res = 0;
-		CtReq->un.rff.feature_tgt = 0;
-		CtReq->un.rff.type_code = FC_FCP_DATA;
-		CtReq->un.rff.feature_init = 1;
-		cmpl = lpfc_cmpl_ct_cmd_rff_id;
-		break;
-
 	case SLI_CTNS_RNN_ID:
+		vport->ct_flags &= ~FC_CT_RNN_ID;
 		CtReq->CommandResponse.bits.CmdRsp =
 		    be16_to_cpu(SLI_CTNS_RNN_ID);
-		CtReq->un.rnn.PortId = be32_to_cpu(phba->fc_myDID);
-		memcpy(CtReq->un.rnn.wwnn,  &phba->fc_nodename,
+		CtReq->un.rnn.PortId = be32_to_cpu(vport->fc_myDID);
+		memcpy(CtReq->un.rnn.wwnn,  &vport->fc_nodename,
 		       sizeof (struct lpfc_name));
 		cmpl = lpfc_cmpl_ct_cmd_rnn_id;
 		break;
 
+	case SLI_CTNS_RSPN_ID:
+		vport->ct_flags &= ~FC_CT_RSPN_ID;
+		CtReq->CommandResponse.bits.CmdRsp =
+		    be16_to_cpu(SLI_CTNS_RSPN_ID);
+		CtReq->un.rspn.PortId = be32_to_cpu(vport->fc_myDID);
+		size = sizeof(CtReq->un.rspn.symbname);
+		CtReq->un.rspn.len =
+			lpfc_vport_symbolic_port_name(vport,
+			CtReq->un.rspn.symbname, size);
+		cmpl = lpfc_cmpl_ct_cmd_rspn_id;
+		break;
 	case SLI_CTNS_RSNN_NN:
+		vport->ct_flags &= ~FC_CT_RSNN_NN;
 		CtReq->CommandResponse.bits.CmdRsp =
 		    be16_to_cpu(SLI_CTNS_RSNN_NN);
-		memcpy(CtReq->un.rsnn.wwnn, &phba->fc_nodename,
+		memcpy(CtReq->un.rsnn.wwnn, &vport->fc_nodename,
 		       sizeof (struct lpfc_name));
-		lpfc_get_hba_sym_node_name(phba, CtReq->un.rsnn.symbname);
-		CtReq->un.rsnn.len = strlen(CtReq->un.rsnn.symbname);
+		size = sizeof(CtReq->un.rsnn.symbname);
+		CtReq->un.rsnn.len =
+			lpfc_vport_symbolic_node_name(vport,
+			CtReq->un.rsnn.symbname, size);
 		cmpl = lpfc_cmpl_ct_cmd_rsnn_nn;
 		break;
+	case SLI_CTNS_DA_ID:
+		/* Implement DA_ID Nameserver request */
+		CtReq->CommandResponse.bits.CmdRsp =
+			be16_to_cpu(SLI_CTNS_DA_ID);
+		CtReq->un.da_id.port_id = be32_to_cpu(vport->fc_myDID);
+		cmpl = lpfc_cmpl_ct_cmd_da_id;
+		break;
+	case SLI_CTNS_RFF_ID:
+		vport->ct_flags &= ~FC_CT_RFF_ID;
+		CtReq->CommandResponse.bits.CmdRsp =
+		    be16_to_cpu(SLI_CTNS_RFF_ID);
+		CtReq->un.rff.PortId = be32_to_cpu(vport->fc_myDID);
+		CtReq->un.rff.fbits = FC4_FEATURE_INIT;
+		CtReq->un.rff.type_code = FC_FCP_DATA;
+		cmpl = lpfc_cmpl_ct_cmd_rff_id;
+		break;
 	}
+	lpfc_nlp_get(ndlp);
 
-	if (!lpfc_ct_cmd(phba, mp, bmp, ndlp, cmpl, rsp_size))
+	if (!lpfc_ct_cmd(vport, mp, bmp, ndlp, cmpl, rsp_size, retry)) {
 		/* On success, The cmpl function will free the buffers */
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT,
+			"Issue CT cmd:    cmd:x%x did:x%x",
+			cmdcode, ndlp->nlp_DID, 0);
 		return 0;
+	}
 
+	rc = 6;
+	lpfc_nlp_put(ndlp);
 	lpfc_mbuf_free(phba, bmp->virt, bmp->phys);
 ns_cmd_free_bmp:
 	kfree(bmp);
@@ -714,14 +1193,16 @@ ns_cmd_free_mpvirt:
 ns_cmd_free_mp:
 	kfree(mp);
 ns_cmd_exit:
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+			 "0266 Issue NameServer Req x%x err %d Data: x%x x%x\n",
+			 cmdcode, rc, vport->fc_flag, vport->fc_rscn_id_cnt);
 	return 1;
 }
 
 static void
-lpfc_cmpl_ct_cmd_fdmi(struct lpfc_hba * phba,
-		      struct lpfc_iocbq * cmdiocb, struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_ct_cmd_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		      struct lpfc_iocbq * rspiocb)
 {
-	struct lpfc_dmabuf *bmp = cmdiocb->context3;
 	struct lpfc_dmabuf *inp = cmdiocb->context1;
 	struct lpfc_dmabuf *outp = cmdiocb->context2;
 	struct lpfc_sli_ct_request *CTrsp = outp->virt;
@@ -729,48 +1210,58 @@ lpfc_cmpl_ct_cmd_fdmi(struct lpfc_hba * phba,
 	struct lpfc_nodelist *ndlp;
 	uint16_t fdmi_cmd = CTcmd->CommandResponse.bits.CmdRsp;
 	uint16_t fdmi_rsp = CTrsp->CommandResponse.bits.CmdRsp;
+	struct lpfc_vport *vport = cmdiocb->vport;
+	IOCB_t *irsp = &rspiocb->iocb;
+	uint32_t latt;
+
+	latt = lpfc_els_chk_latt(vport);
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT,
+		"FDMI cmpl:       status:x%x/x%x latt:%d",
+		irsp->ulpStatus, irsp->un.ulpWord[4], latt);
+
+	if (latt || irsp->ulpStatus) {
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+				 "0229 FDMI cmd %04x failed, latt = %d "
+				 "ulpStatus: x%x, rid x%x\n",
+				 be16_to_cpu(fdmi_cmd), latt, irsp->ulpStatus,
+				 irsp->un.ulpWord[4]);
+		lpfc_ct_free_iocb(phba, cmdiocb);
+		return;
+	}
 
-	ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, FDMI_DID);
+	ndlp = lpfc_findnode_did(vport, FDMI_DID);
 	if (fdmi_rsp == be16_to_cpu(SLI_CT_RESPONSE_FS_RJT)) {
 		/* FDMI rsp failed */
-		lpfc_printf_log(phba,
-			        KERN_INFO,
-			        LOG_DISCOVERY,
-			        "%d:0220 FDMI rsp failed Data: x%x\n",
-			        phba->brd_no,
-			       be16_to_cpu(fdmi_cmd));
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+				 "0220 FDMI rsp failed Data: x%x\n",
+				 be16_to_cpu(fdmi_cmd));
 	}
 
 	switch (be16_to_cpu(fdmi_cmd)) {
 	case SLI_MGMT_RHBA:
-		lpfc_fdmi_cmd(phba, ndlp, SLI_MGMT_RPA);
+		lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_RPA);
 		break;
 
 	case SLI_MGMT_RPA:
 		break;
 
 	case SLI_MGMT_DHBA:
-		lpfc_fdmi_cmd(phba, ndlp, SLI_MGMT_DPRT);
+		lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_DPRT);
 		break;
 
 	case SLI_MGMT_DPRT:
-		lpfc_fdmi_cmd(phba, ndlp, SLI_MGMT_RHBA);
+		lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_RHBA);
 		break;
 	}
-
-	lpfc_free_ct_rsp(phba, outp);
-	lpfc_mbuf_free(phba, inp->virt, inp->phys);
-	lpfc_mbuf_free(phba, bmp->virt, bmp->phys);
-	kfree(inp);
-	kfree(bmp);
-	spin_lock_irq(phba->host->host_lock);
-	lpfc_sli_release_iocbq(phba, cmdiocb);
-	spin_unlock_irq(phba->host->host_lock);
+	lpfc_ct_free_iocb(phba, cmdiocb);
 	return;
 }
+
 int
-lpfc_fdmi_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
+lpfc_fdmi_cmd(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, int cmdcode)
 {
+	struct lpfc_hba *phba = vport->phba;
 	struct lpfc_dmabuf *mp, *bmp;
 	struct lpfc_sli_ct_request *CtReq;
 	struct ulp_bde64 *bpl;
@@ -807,13 +1298,9 @@ lpfc_fdmi_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
 	INIT_LIST_HEAD(&bmp->list);
 
 	/* FDMI request */
-	lpfc_printf_log(phba,
-		        KERN_INFO,
-		        LOG_DISCOVERY,
-		        "%d:0218 FDMI Request Data: x%x x%x x%x\n",
-		        phba->brd_no,
-		       phba->fc_flag, phba->hba_state, cmdcode);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0218 FDMI Request Data: x%x x%x x%x\n",
+			 vport->fc_flag, vport->port_state, cmdcode);
 	CtReq = (struct lpfc_sli_ct_request *) mp->virt;
 
 	memset(CtReq, 0, sizeof(struct lpfc_sli_ct_request));
@@ -835,11 +1322,11 @@ lpfc_fdmi_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
 			    be16_to_cpu(SLI_MGMT_RHBA);
 			CtReq->CommandResponse.bits.Size = 0;
 			rh = (REG_HBA *) & CtReq->un.PortID;
-			memcpy(&rh->hi.PortName, &phba->fc_sparam.portName,
+			memcpy(&rh->hi.PortName, &vport->fc_sparam.portName,
 			       sizeof (struct lpfc_name));
 			/* One entry (port) per adapter */
 			rh->rpl.EntryCnt = be32_to_cpu(1);
-			memcpy(&rh->rpl.pe, &phba->fc_sparam.portName,
+			memcpy(&rh->rpl.pe, &vport->fc_sparam.portName,
 			       sizeof (struct lpfc_name));
 
 			/* point to the HBA attribute block */
@@ -855,7 +1342,7 @@ lpfc_fdmi_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
 			ae->ad.bits.AttrType = be16_to_cpu(NODE_NAME);
 			ae->ad.bits.AttrLen =  be16_to_cpu(FOURBYTES
 						+ sizeof (struct lpfc_name));
-			memcpy(&ae->un.NodeName, &phba->fc_sparam.nodeName,
+			memcpy(&ae->un.NodeName, &vport->fc_sparam.nodeName,
 			       sizeof (struct lpfc_name));
 			ab->EntryCnt++;
 			size += FOURBYTES + sizeof (struct lpfc_name);
@@ -956,7 +1443,8 @@ lpfc_fdmi_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
 			ae = (ATTRIBUTE_ENTRY *) ((uint8_t *) rh + size);
 			ae->ad.bits.AttrType = be16_to_cpu(OS_NAME_VERSION);
 			sprintf(ae->un.OsNameVersion, "%s %s %s",
-				system_utsname.sysname, system_utsname.release,
+				system_utsname.sysname,
+				system_utsname.release,
 				system_utsname.version);
 			len = strlen(ae->un.OsNameVersion);
 			len += (len & 3) ? (4 - (len & 3)) : 4;
@@ -992,7 +1480,7 @@ lpfc_fdmi_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
 			pab = (REG_PORT_ATTRIBUTE *) & CtReq->un.PortID;
 			size = sizeof (struct lpfc_name) + FOURBYTES;
 			memcpy((uint8_t *) & pab->PortName,
-			       (uint8_t *) & phba->fc_sparam.portName,
+			       (uint8_t *) & vport->fc_sparam.portName,
 			       sizeof (struct lpfc_name));
 			pab->ab.EntryCnt = 0;
 
@@ -1054,7 +1542,7 @@ lpfc_fdmi_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
 			ae = (ATTRIBUTE_ENTRY *) ((uint8_t *) pab + size);
 			ae->ad.bits.AttrType = be16_to_cpu(MAX_FRAME_SIZE);
 			ae->ad.bits.AttrLen = be16_to_cpu(FOURBYTES + 4);
-			hsp = (struct serv_parm *) & phba->fc_sparam;
+			hsp = (struct serv_parm *) & vport->fc_sparam;
 			ae->un.MaxFrameSize =
 			    (((uint32_t) hsp->cmn.
 			      bbRcvSizeMsb) << 8) | (uint32_t) hsp->cmn.
@@ -1072,7 +1560,7 @@ lpfc_fdmi_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
 			pab->ab.EntryCnt++;
 			size += FOURBYTES + len;
 
-			if (phba->cfg_fdmi_on == 2) {
+			if (vport->cfg_fdmi_on == 2) {
 				/* #6 Port attribute entry */
 				ae = (ATTRIBUTE_ENTRY *) ((uint8_t *) pab +
 							  size);
@@ -1098,7 +1586,7 @@ lpfc_fdmi_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
 		CtReq->CommandResponse.bits.Size = 0;
 		pe = (PORT_ENTRY *) & CtReq->un.PortID;
 		memcpy((uint8_t *) & pe->PortName,
-		       (uint8_t *) & phba->fc_sparam.portName,
+		       (uint8_t *) & vport->fc_sparam.portName,
 		       sizeof (struct lpfc_name));
 		size = GID_REQUEST_SZ - 4 + sizeof (struct lpfc_name);
 		break;
@@ -1108,24 +1596,26 @@ lpfc_fdmi_cmd(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, int cmdcode)
 		CtReq->CommandResponse.bits.Size = 0;
 		pe = (PORT_ENTRY *) & CtReq->un.PortID;
 		memcpy((uint8_t *) & pe->PortName,
-		       (uint8_t *) & phba->fc_sparam.portName,
+		       (uint8_t *) & vport->fc_sparam.portName,
 		       sizeof (struct lpfc_name));
 		size = GID_REQUEST_SZ - 4 + sizeof (struct lpfc_name);
 		break;
 	}
 
 	bpl = (struct ulp_bde64 *) bmp->virt;
-	bpl->addrHigh = le32_to_cpu( putPaddrHigh(mp->phys) );
-	bpl->addrLow = le32_to_cpu( putPaddrLow(mp->phys) );
+	bpl->addrHigh = le32_to_cpu(putPaddrHigh(mp->phys));
+	bpl->addrLow = le32_to_cpu(putPaddrLow(mp->phys));
 	bpl->tus.f.bdeFlags = 0;
 	bpl->tus.f.bdeSize = size;
 	bpl->tus.w = le32_to_cpu(bpl->tus.w);
 
 	cmpl = lpfc_cmpl_ct_cmd_fdmi;
+	lpfc_nlp_get(ndlp);
 
-	if (!lpfc_ct_cmd(phba, mp, bmp, ndlp, cmpl, FC_MAX_NS_RSP))
+	if (!lpfc_ct_cmd(vport, mp, bmp, ndlp, cmpl, FC_MAX_NS_RSP, 0))
 		return 0;
 
+	lpfc_nlp_put(ndlp);
 	lpfc_mbuf_free(phba, bmp->virt, bmp->phys);
 fdmi_cmd_free_bmp:
 	kfree(bmp);
@@ -1135,49 +1625,50 @@ fdmi_cmd_free_mp:
 	kfree(mp);
 fdmi_cmd_exit:
 	/* Issue FDMI request failed */
-	lpfc_printf_log(phba,
-		        KERN_INFO,
-		        LOG_DISCOVERY,
-		        "%d:0244 Issue FDMI request failed Data: x%x\n",
-		        phba->brd_no,
-			cmdcode);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0244 Issue FDMI request failed Data: x%x\n",
+			 cmdcode);
 	return 1;
 }
 
 void
 lpfc_fdmi_tmo(unsigned long ptr)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba *)ptr;
+	struct lpfc_vport *vport = (struct lpfc_vport *)ptr;
+	struct lpfc_hba   *phba = vport->phba;
 	unsigned long iflag;
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-	if (!(phba->work_hba_events & WORKER_FDMI_TMO)) {
-		phba->work_hba_events |= WORKER_FDMI_TMO;
+	spin_lock_irqsave(&vport->work_port_lock, iflag);
+	if (!(vport->work_port_events & WORKER_FDMI_TMO)) {
+		vport->work_port_events |= WORKER_FDMI_TMO;
+		spin_unlock_irqrestore(&vport->work_port_lock, iflag);
+
+		spin_lock_irqsave(&phba->hbalock, iflag);
 		if (phba->work_wait)
-			wake_up(phba->work_wait);
+			lpfc_worker_wake_up(phba);
+		spin_unlock_irqrestore(&phba->hbalock, iflag);
 	}
-	spin_unlock_irqrestore(phba->host->host_lock,iflag);
+	else
+		spin_unlock_irqrestore(&vport->work_port_lock, iflag);
 }
 
 void
-lpfc_fdmi_tmo_handler(struct lpfc_hba *phba)
+lpfc_fdmi_timeout_handler(struct lpfc_vport *vport)
 {
 	struct lpfc_nodelist *ndlp;
 
-	ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, FDMI_DID);
+	ndlp = lpfc_findnode_did(vport, FDMI_DID);
 	if (ndlp) {
-		if (system_utsname.nodename[0] != '\0') {
-			lpfc_fdmi_cmd(phba, ndlp, SLI_MGMT_DHBA);
-		} else {
-			mod_timer(&phba->fc_fdmitmo, jiffies + HZ * 60);
-		}
+		if (system_utsname.nodename[0] != '\0')
+			lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_DHBA);
+		else
+			mod_timer(&vport->fc_fdmitmo, jiffies + HZ * 60);
 	}
 	return;
 }
 
-
 void
-lpfc_decode_firmware_rev(struct lpfc_hba * phba, char *fwrevision, int flag)
+lpfc_decode_firmware_rev(struct lpfc_hba *phba, char *fwrevision, int flag)
 {
 	struct lpfc_sli *psli = &phba->sli;
 	lpfc_vpd_t *vp = &phba->vpd;
diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
new file mode 100644
index 0000000..0929c0e
--- /dev/null
+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
@@ -0,0 +1,1006 @@
+/*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for         *
+ * Fibre Channel Host Bus Adapters.                                *
+ * Copyright (C) 2007 Emulex.  All rights reserved.                *
+ * EMULEX and SLI are trademarks of Emulex.                        *
+ * www.emulex.com                                                  *
+ *                                                                 *
+ * This program is free software; you can redistribute it and/or   *
+ * modify it under the terms of version 2 of the GNU General       *
+ * Public License as published by the Free Software Foundation.    *
+ * This program is distributed in the hope that it will be useful. *
+ * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+ * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+ * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+ * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+ * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+ * more details, a copy of which can be found in the file COPYING  *
+ * included with this package.                                     *
+ *******************************************************************/
+
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/idr.h>
+#include <linux/interrupt.h>
+#include <linux/kthread.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/ctype.h>
+#include <linux/version.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport_fc.h>
+
+#include "lpfc_hw.h"
+#include "lpfc_sli.h"
+#include "lpfc_disc.h"
+#include "lpfc_scsi.h"
+#include "lpfc.h"
+#include "lpfc_logmsg.h"
+#include "lpfc_crtn.h"
+#include "lpfc_vport.h"
+#include "lpfc_version.h"
+#include "lpfc_vport.h"
+#include "lpfc_debugfs.h"
+
+#ifdef CONFIG_LPFC_DEBUG_FS
+/* debugfs interface
+ *
+ * To access this interface the user should:
+ * # mkdir /debug
+ * # mount -t debugfs none /debug
+ *
+ * The lpfc debugfs directory hierarchy is:
+ * lpfc/lpfcX/vportY
+ * where X is the lpfc hba unique_id
+ * where Y is the vport VPI on that hba
+ *
+ * Debugging services available per vport:
+ * discovery_trace
+ * This is an ASCII readable file that contains a trace of the last
+ * lpfc_debugfs_max_disc_trc events that happened on a specific vport.
+ * See lpfc_debugfs.h for different categories of
+ * discovery events. To enable the discovery trace, the following
+ * module parameters must be set:
+ * lpfc_debugfs_enable=1         Turns on lpfc debugfs filesystem support
+ * lpfc_debugfs_max_disc_trc=X   Where X is the event trace depth for
+ *                               EACH vport. X MUST also be a power of 2.
+ * lpfc_debugfs_mask_disc_trc=Y  Where Y is an event mask as defined in
+ *                               lpfc_debugfs.h.
+ */
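+
+/* Illustrative example (all values are arbitrary): loading the driver with
+ *   modprobe lpfc lpfc_debugfs_enable=1 lpfc_debugfs_max_disc_trc=64 \
+ *            lpfc_debugfs_mask_disc_trc=0x27
+ * traces ELS and discovery CT events (64 is a power of 2, as required);
+ * the trace can then be read with
+ *   cat /debug/lpfc/lpfc0/vport0/discovery_trace
+ */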
+static int lpfc_debugfs_enable = 1;
+module_param(lpfc_debugfs_enable, int, 0);
+MODULE_PARM_DESC(lpfc_debugfs_enable, "Enable debugfs services");
+
+/* This MUST be a power of 2 */
+static int lpfc_debugfs_max_disc_trc = 0;
+module_param(lpfc_debugfs_max_disc_trc, int, 0);
+MODULE_PARM_DESC(lpfc_debugfs_max_disc_trc,
+	"Set debugfs discovery trace depth");
+
+/* This MUST be a power of 2 */
+static int lpfc_debugfs_max_slow_ring_trc = 0;
+module_param(lpfc_debugfs_max_slow_ring_trc, int, 0);
+MODULE_PARM_DESC(lpfc_debugfs_max_slow_ring_trc,
+	"Set debugfs slow ring trace depth");
+
+static int lpfc_debugfs_mask_disc_trc = 0;
+module_param(lpfc_debugfs_mask_disc_trc, int, 0);
+MODULE_PARM_DESC(lpfc_debugfs_mask_disc_trc,
+	"Set debugfs discovery trace mask");
+
+#include <linux/debugfs.h>
+
+/* size of output line, for discovery_trace and slow_ring_trace */
+#define LPFC_DEBUG_TRC_ENTRY_SIZE 100
+
+/* nodelist output buffer size */
+#define LPFC_NODELIST_SIZE 8192
+#define LPFC_NODELIST_ENTRY_SIZE 120
+
+/* dumpslim output buffer size */
+#define LPFC_DUMPSLIM_SIZE 4096
+
+/* hbqinfo output buffer size */
+#define LPFC_HBQINFO_SIZE 8192
+
+struct lpfc_debug {
+	char *buffer;
+	int  len;
+};
+
+static atomic_t lpfc_debugfs_seq_trc_cnt = ATOMIC_INIT(0);
+static unsigned long lpfc_debugfs_start_time = 0L;
+
+static int
+lpfc_debugfs_disc_trc_data(struct lpfc_vport *vport, char *buf, int size)
+{
+	int i, index, len, enable;
+	uint32_t ms;
+	struct lpfc_debugfs_trc *dtp;
+	char buffer[LPFC_DEBUG_TRC_ENTRY_SIZE];
+
+
+	enable = lpfc_debugfs_enable;
+	lpfc_debugfs_enable = 0;
+
+	len = 0;
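+	/* Dump the circular trace buffer oldest entry first: start just past
+	 * the most recently written slot and wrap back to the beginning.
+	 */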
+	index = (atomic_read(&vport->disc_trc_cnt) + 1) &
+		(lpfc_debugfs_max_disc_trc - 1);
+	for (i = index; i < lpfc_debugfs_max_disc_trc; i++) {
+		dtp = vport->disc_trc + i;
+		if (!dtp->fmt)
+			continue;
+		ms = jiffies_to_msecs(dtp->jif - lpfc_debugfs_start_time);
+		snprintf(buffer,
+			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+			dtp->seq_cnt, ms, dtp->fmt);
+		len +=  snprintf(buf+len, size-len, buffer,
+			dtp->data1, dtp->data2, dtp->data3);
+	}
+	for (i = 0; i < index; i++) {
+		dtp = vport->disc_trc + i;
+		if (!dtp->fmt)
+			continue;
+		ms = jiffies_to_msecs(dtp->jif - lpfc_debugfs_start_time);
+		snprintf(buffer,
+			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+			dtp->seq_cnt, ms, dtp->fmt);
+		len +=  snprintf(buf+len, size-len, buffer,
+			dtp->data1, dtp->data2, dtp->data3);
+	}
+
+	lpfc_debugfs_enable = enable;
+	return len;
+}
+
+static int
+lpfc_debugfs_slow_ring_trc_data(struct lpfc_hba *phba, char *buf, int size)
+{
+	int i, index, len, enable;
+	uint32_t ms;
+	struct lpfc_debugfs_trc *dtp;
+	char buffer[LPFC_DEBUG_TRC_ENTRY_SIZE];
+
+
+	enable = lpfc_debugfs_enable;
+	lpfc_debugfs_enable = 0;
+
+	len = 0;
+	index = (atomic_read(&phba->slow_ring_trc_cnt) + 1) &
+		(lpfc_debugfs_max_slow_ring_trc - 1);
+	for (i = index; i < lpfc_debugfs_max_slow_ring_trc; i++) {
+		dtp = phba->slow_ring_trc + i;
+		if (!dtp->fmt)
+			continue;
+		ms = jiffies_to_msecs(dtp->jif - lpfc_debugfs_start_time);
+		snprintf(buffer,
+			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+			dtp->seq_cnt, ms, dtp->fmt);
+		len +=  snprintf(buf+len, size-len, buffer,
+			dtp->data1, dtp->data2, dtp->data3);
+	}
+	for (i = 0; i < index; i++) {
+		dtp = phba->slow_ring_trc + i;
+		if (!dtp->fmt)
+			continue;
+		ms = jiffies_to_msecs(dtp->jif - lpfc_debugfs_start_time);
+		snprintf(buffer,
+			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+			dtp->seq_cnt, ms, dtp->fmt);
+		len +=  snprintf(buf+len, size-len, buffer,
+			dtp->data1, dtp->data2, dtp->data3);
+	}
+
+	lpfc_debugfs_enable = enable;
+	return len;
+}
+
+static int lpfc_debugfs_last_hbq = -1;
+
+static int
+lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+{
+	int len = 0;
+	int cnt, i, j, found, posted, low;
+	uint32_t phys, raw_index, getidx;
+	struct lpfc_hbq_init *hip;
+	struct hbq_s *hbqs;
+	struct lpfc_hbq_entry *hbqe;
+	struct lpfc_dmabuf *d_buf;
+	struct hbq_dmabuf *hbq_buf;
+
+	cnt = LPFC_HBQINFO_SIZE;
+	spin_lock_irq(&phba->hbalock);
+
+	/* toggle between multiple hbqs, if any */
+	i = lpfc_sli_hbq_count();
+	if (i > 1) {
+		lpfc_debugfs_last_hbq++;
+		if (lpfc_debugfs_last_hbq >= i)
+			lpfc_debugfs_last_hbq = 0;
+	} else
+		lpfc_debugfs_last_hbq = 0;
+
+	i = lpfc_debugfs_last_hbq;
+
+	len +=  snprintf(buf+len, size-len, "HBQ %d Info\n", i);
+
+	hbqs =  &phba->hbqs[i];
+	posted = 0;
+	list_for_each_entry(d_buf, &hbqs->hbq_buffer_list, list)
+		posted++;
+
+	hip =  lpfc_hbq_defs[i];
+	len +=  snprintf(buf+len, size-len,
+		"idx:%d prof:%d rn:%d bufcnt:%d icnt:%d acnt:%d posted %d\n",
+		hip->hbq_index, hip->profile, hip->rn,
+		hip->buffer_count, hip->init_count, hip->add_count, posted);
+
+	raw_index = phba->hbq_get[i];
+	getidx = le32_to_cpu(raw_index);
+	len +=  snprintf(buf+len, size-len,
+		"entrys:%d bufcnt:%d Put:%d nPut:%d localGet:%d hbaGet:%d\n",
+		hbqs->entry_count, hbqs->buffer_count, hbqs->hbqPutIdx,
+		hbqs->next_hbqPutIdx, hbqs->local_hbqGetIdx, getidx);
+
+	hbqe = (struct lpfc_hbq_entry *) phba->hbqs[i].hbq_virt;
+	for (j=0; j<hbqs->entry_count; j++) {
+		len +=  snprintf(buf+len, size-len,
+			"%03d: %08x %04x %05x ", j,
+			le32_to_cpu(hbqe->bde.addrLow),
+			le32_to_cpu(hbqe->bde.tus.w),
+			le32_to_cpu(hbqe->buffer_tag));
+		i = 0;
+		found = 0;
+
+		/* First calculate if slot has an associated posted buffer */
+		low = hbqs->hbqPutIdx - posted;
+		if (low >= 0) {
+			if ((j >= hbqs->hbqPutIdx) || (j < low)) {
+				len +=  snprintf(buf+len, size-len, "Unused\n");
+				goto skipit;
+			}
+		} else {
+			if ((j >= hbqs->hbqPutIdx) &&
+				(j < (hbqs->entry_count+low))) {
+				len +=  snprintf(buf+len, size-len, "Unused\n");
+				goto skipit;
+			}
+		}
+
+		/* Get the Buffer info for the posted buffer */
+		list_for_each_entry(d_buf, &hbqs->hbq_buffer_list, list) {
+			hbq_buf = container_of(d_buf, struct hbq_dmabuf, dbuf);
+			phys = ((uint64_t)hbq_buf->dbuf.phys & 0xffffffff);
+			if (phys == le32_to_cpu(hbqe->bde.addrLow)) {
+				len +=  snprintf(buf+len, size-len,
+					"Buf%d: %p %06x\n", i,
+					hbq_buf->dbuf.virt, hbq_buf->tag);
+				found = 1;
+				break;
+			}
+			i++;
+		}
+		if (!found) {
+			len +=  snprintf(buf+len, size-len, "No DMAinfo?\n");
+		}
+skipit:
+		hbqe++;
+		if (len > LPFC_HBQINFO_SIZE - 54)
+			break;
+	}
+	spin_unlock_irq(&phba->hbalock);
+	return len;
+}
+
+static int
+lpfc_debugfs_dumpslim_data(struct lpfc_hba *phba, char *buf, int size)
+{
+	int len = 0;
+	int cnt, i, off;
+	uint32_t word0, word1, word2, word3;
+	uint32_t *ptr;
+	struct lpfc_pgp *pgpp;
+	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_sli_ring *pring;
+
+	cnt = LPFC_DUMPSLIM_SIZE;
+	off = 0;
+	spin_lock_irq(&phba->hbalock);
+
+	len +=  snprintf(buf+len, size-len, "SLIM Mailbox\n");
+	ptr = (uint32_t *)phba->slim2p;
+	i = sizeof(MAILBOX_t);
+	while (i > 0) {
+		len +=  snprintf(buf+len, size-len,
+		"%08x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
+		off, *ptr, *(ptr+1), *(ptr+2), *(ptr+3), *(ptr+4),
+		*(ptr+5), *(ptr+6), *(ptr+7));
+		ptr += 8;
+		i -= (8 * sizeof(uint32_t));
+		off += (8 * sizeof(uint32_t));
+	}
+
+	len +=  snprintf(buf+len, size-len, "SLIM PCB\n");
+	ptr = (uint32_t *)&phba->slim2p->pcb;
+	i = sizeof(PCB_t);
+	while (i > 0) {
+		len +=  snprintf(buf+len, size-len,
+		"%08x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
+		off, *ptr, *(ptr+1), *(ptr+2), *(ptr+3), *(ptr+4),
+		*(ptr+5), *(ptr+6), *(ptr+7));
+		ptr += 8;
+		i -= (8 * sizeof(uint32_t));
+		off += (8 * sizeof(uint32_t));
+	}
+
+	pgpp = (struct lpfc_pgp *)&phba->slim2p->mbx.us.s3_pgp.port;
+	pring = &psli->ring[0];
+	len +=  snprintf(buf+len, size-len,
+		"Ring 0: CMD GetInx:%d (Max:%d Next:%d Local:%d flg:x%x)  "
+		"RSP PutInx:%d Max:%d\n",
+		pgpp->cmdGetInx, pring->numCiocb,
+		pring->next_cmdidx, pring->local_getidx, pring->flag,
+		pgpp->rspPutInx, pring->numRiocb);
+	pgpp++;
+
+	pring = &psli->ring[1];
+	len +=  snprintf(buf+len, size-len,
+		"Ring 1: CMD GetInx:%d (Max:%d Next:%d Local:%d flg:x%x)  "
+		"RSP PutInx:%d Max:%d\n",
+		pgpp->cmdGetInx, pring->numCiocb,
+		pring->next_cmdidx, pring->local_getidx, pring->flag,
+		pgpp->rspPutInx, pring->numRiocb);
+	pgpp++;
+
+	pring = &psli->ring[2];
+	len +=  snprintf(buf+len, size-len,
+		"Ring 2: CMD GetInx:%d (Max:%d Next:%d Local:%d flg:x%x)  "
+		"RSP PutInx:%d Max:%d\n",
+		pgpp->cmdGetInx, pring->numCiocb,
+		pring->next_cmdidx, pring->local_getidx, pring->flag,
+		pgpp->rspPutInx, pring->numRiocb);
+	pgpp++;
+
+	pring = &psli->ring[3];
+	len +=  snprintf(buf+len, size-len,
+		"Ring 3: CMD GetInx:%d (Max:%d Next:%d Local:%d flg:x%x)  "
+		"RSP PutInx:%d Max:%d\n",
+		pgpp->cmdGetInx, pring->numCiocb,
+		pring->next_cmdidx, pring->local_getidx, pring->flag,
+		pgpp->rspPutInx, pring->numRiocb);
+
+
+	ptr = (uint32_t *)&phba->slim2p->mbx.us.s3_pgp.hbq_get;
+	word0 = readl(phba->HAregaddr);
+	word1 = readl(phba->CAregaddr);
+	word2 = readl(phba->HSregaddr);
+	word3 = readl(phba->HCregaddr);
+	len +=  snprintf(buf+len, size-len, "HA:%08x CA:%08x HS:%08x HC:%08x\n",
+	word0, word1, word2, word3);
+	spin_unlock_irq(&phba->hbalock);
+	return len;
+}
+
+static int
+lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+{
+	int len = 0;
+	int cnt;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_nodelist *ndlp;
+	unsigned char *statep, *name;
+
+	cnt = (LPFC_NODELIST_SIZE / LPFC_NODELIST_ENTRY_SIZE);
+
+	spin_lock_irq(shost->host_lock);
+	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+		if (!cnt) {
+			len +=  snprintf(buf+len, size-len,
+				"Missing Nodelist Entries\n");
+			break;
+		}
+		cnt--;
+		switch (ndlp->nlp_state) {
+		case NLP_STE_UNUSED_NODE:
+			statep = "UNUSED";
+			break;
+		case NLP_STE_PLOGI_ISSUE:
+			statep = "PLOGI ";
+			break;
+		case NLP_STE_ADISC_ISSUE:
+			statep = "ADISC ";
+			break;
+		case NLP_STE_REG_LOGIN_ISSUE:
+			statep = "REGLOG";
+			break;
+		case NLP_STE_PRLI_ISSUE:
+			statep = "PRLI  ";
+			break;
+		case NLP_STE_UNMAPPED_NODE:
+			statep = "UNMAP ";
+			break;
+		case NLP_STE_MAPPED_NODE:
+			statep = "MAPPED";
+			break;
+		case NLP_STE_NPR_NODE:
+			statep = "NPR   ";
+			break;
+		default:
+			statep = "UNKNOWN";
+		}
+		len +=  snprintf(buf+len, size-len, "%s DID:x%06x ",
+			statep, ndlp->nlp_DID);
+		name = (unsigned char *)&ndlp->nlp_portname;
+		len +=  snprintf(buf+len, size-len,
+			"WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x ",
+			*name, *(name+1), *(name+2), *(name+3),
+			*(name+4), *(name+5), *(name+6), *(name+7));
+		name = (unsigned char *)&ndlp->nlp_nodename;
+		len +=  snprintf(buf+len, size-len,
+			"WWNN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x ",
+			*name, *(name+1), *(name+2), *(name+3),
+			*(name+4), *(name+5), *(name+6), *(name+7));
+		len +=  snprintf(buf+len, size-len, "RPI:%03d flag:x%08x ",
+			ndlp->nlp_rpi, ndlp->nlp_flag);
+		if (!ndlp->nlp_type)
+			len +=  snprintf(buf+len, size-len, "UNKNOWN_TYPE ");
+		if (ndlp->nlp_type & NLP_FC_NODE)
+			len +=  snprintf(buf+len, size-len, "FC_NODE ");
+		if (ndlp->nlp_type & NLP_FABRIC)
+			len +=  snprintf(buf+len, size-len, "FABRIC ");
+		if (ndlp->nlp_type & NLP_FCP_TARGET)
+			len +=  snprintf(buf+len, size-len, "FCP_TGT sid:%d ",
+				ndlp->nlp_sid);
+		if (ndlp->nlp_type & NLP_FCP_INITIATOR)
+			len +=  snprintf(buf+len, size-len, "FCP_INITIATOR ");
+		len += snprintf(buf+len, size-len, "refcnt:%x",
+			atomic_read(&ndlp->kref.refcount));
+		len +=  snprintf(buf+len, size-len, "\n");
+	}
+	spin_unlock_irq(shost->host_lock);
+	return len;
+}
+#endif
+
+
+inline void
+lpfc_debugfs_disc_trc(struct lpfc_vport *vport, int mask, char *fmt,
+	uint32_t data1, uint32_t data2, uint32_t data3)
+{
+#ifdef CONFIG_LPFC_DEBUG_FS
+	struct lpfc_debugfs_trc *dtp;
+	int index;
+
+	if (!(lpfc_debugfs_mask_disc_trc & mask))
+		return;
+
+	if (!lpfc_debugfs_enable || !lpfc_debugfs_max_disc_trc ||
+		!vport || !vport->disc_trc)
+		return;
+
+	index = atomic_inc_return(&vport->disc_trc_cnt) &
+		(lpfc_debugfs_max_disc_trc - 1);
+	dtp = vport->disc_trc + index;
+	dtp->fmt = fmt;
+	dtp->data1 = data1;
+	dtp->data2 = data2;
+	dtp->data3 = data3;
+	dtp->seq_cnt = atomic_inc_return(&lpfc_debugfs_seq_trc_cnt);
+	dtp->jif = jiffies;
+#endif
+	return;
+}
+
+inline void
+lpfc_debugfs_slow_ring_trc(struct lpfc_hba *phba, char *fmt,
+	uint32_t data1, uint32_t data2, uint32_t data3)
+{
+#ifdef CONFIG_LPFC_DEBUG_FS
+	struct lpfc_debugfs_trc *dtp;
+	int index;
+
+	if (!lpfc_debugfs_enable || !lpfc_debugfs_max_slow_ring_trc ||
+		!phba || !phba->slow_ring_trc)
+		return;
+
+	index = atomic_inc_return(&phba->slow_ring_trc_cnt) &
+		(lpfc_debugfs_max_slow_ring_trc - 1);
+	dtp = phba->slow_ring_trc + index;
+	dtp->fmt = fmt;
+	dtp->data1 = data1;
+	dtp->data2 = data2;
+	dtp->data3 = data3;
+	dtp->seq_cnt = atomic_inc_return(&lpfc_debugfs_seq_trc_cnt);
+	dtp->jif = jiffies;
+#endif
+	return;
+}
+
+#ifdef CONFIG_LPFC_DEBUG_FS
+static int
+lpfc_debugfs_disc_trc_open(struct inode *inode, struct file *file)
+{
+	struct lpfc_vport *vport = inode->i_private;
+	struct lpfc_debug *debug;
+	int size;
+	int rc = -ENOMEM;
+
+	if (!lpfc_debugfs_max_disc_trc) {
+		rc = -ENOSPC;
+		goto out;
+	}
+
+	debug = kmalloc(sizeof(*debug), GFP_KERNEL);
+	if (!debug)
+		goto out;
+
+	/* Round to page boundary */
+	size =  (lpfc_debugfs_max_disc_trc * LPFC_DEBUG_TRC_ENTRY_SIZE);
+	size = PAGE_ALIGN(size);
+
+	debug->buffer = kmalloc(size, GFP_KERNEL);
+	if (!debug->buffer) {
+		kfree(debug);
+		goto out;
+	}
+
+	debug->len = lpfc_debugfs_disc_trc_data(vport, debug->buffer, size);
+	file->private_data = debug;
+
+	rc = 0;
+out:
+	return rc;
+}
+
+static int
+lpfc_debugfs_slow_ring_trc_open(struct inode *inode, struct file *file)
+{
+	struct lpfc_hba *phba = inode->i_private;
+	struct lpfc_debug *debug;
+	int size;
+	int rc = -ENOMEM;
+
+	if (!lpfc_debugfs_max_slow_ring_trc) {
+		rc = -ENOSPC;
+		goto out;
+	}
+
+	debug = kmalloc(sizeof(*debug), GFP_KERNEL);
+	if (!debug)
+		goto out;
+
+	/* Round to page boundary */
+	size =  (lpfc_debugfs_max_slow_ring_trc * LPFC_DEBUG_TRC_ENTRY_SIZE);
+	size = PAGE_ALIGN(size);
+
+	debug->buffer = kmalloc(size, GFP_KERNEL);
+	if (!debug->buffer) {
+		kfree(debug);
+		goto out;
+	}
+
+	debug->len = lpfc_debugfs_slow_ring_trc_data(phba, debug->buffer, size);
+	file->private_data = debug;
+
+	rc = 0;
+out:
+	return rc;
+}
+
+static int
+lpfc_debugfs_hbqinfo_open(struct inode *inode, struct file *file)
+{
+	struct lpfc_hba *phba = inode->i_private;
+	struct lpfc_debug *debug;
+	int rc = -ENOMEM;
+
+	debug = kmalloc(sizeof(*debug), GFP_KERNEL);
+	if (!debug)
+		goto out;
+
+	/* Allocate a fixed-size buffer for the hbqinfo output */
+	debug->buffer = kmalloc(LPFC_HBQINFO_SIZE, GFP_KERNEL);
+	if (!debug->buffer) {
+		kfree(debug);
+		goto out;
+	}
+
+	debug->len = lpfc_debugfs_hbqinfo_data(phba, debug->buffer,
+		LPFC_HBQINFO_SIZE);
+	file->private_data = debug;
+
+	rc = 0;
+out:
+	return rc;
+}
+
+static int
+lpfc_debugfs_dumpslim_open(struct inode *inode, struct file *file)
+{
+	struct lpfc_hba *phba = inode->i_private;
+	struct lpfc_debug *debug;
+	int rc = -ENOMEM;
+
+	debug = kmalloc(sizeof(*debug), GFP_KERNEL);
+	if (!debug)
+		goto out;
+
+	/* Allocate a fixed-size buffer for the SLIM dump output */
+	debug->buffer = kmalloc(LPFC_DUMPSLIM_SIZE, GFP_KERNEL);
+	if (!debug->buffer) {
+		kfree(debug);
+		goto out;
+	}
+
+	debug->len = lpfc_debugfs_dumpslim_data(phba, debug->buffer,
+		LPFC_DUMPSLIM_SIZE);
+	file->private_data = debug;
+
+	rc = 0;
+out:
+	return rc;
+}
+
+static int
+lpfc_debugfs_nodelist_open(struct inode *inode, struct file *file)
+{
+	struct lpfc_vport *vport = inode->i_private;
+	struct lpfc_debug *debug;
+	int rc = -ENOMEM;
+
+	debug = kmalloc(sizeof(*debug), GFP_KERNEL);
+	if (!debug)
+		goto out;
+
+	/* Allocate a fixed-size buffer for the nodelist output */
+	debug->buffer = kmalloc(LPFC_NODELIST_SIZE, GFP_KERNEL);
+	if (!debug->buffer) {
+		kfree(debug);
+		goto out;
+	}
+
+	debug->len = lpfc_debugfs_nodelist_data(vport, debug->buffer,
+		LPFC_NODELIST_SIZE);
+	file->private_data = debug;
+
+	rc = 0;
+out:
+	return rc;
+}
+
+static loff_t
+lpfc_debugfs_lseek(struct file *file, loff_t off, int whence)
+{
+	struct lpfc_debug *debug;
+	loff_t pos = -1;
+
+	debug = file->private_data;
+
+	switch (whence) {
+	case 0:
+		pos = off;
+		break;
+	case 1:
+		pos = file->f_pos + off;
+		break;
+	case 2:
+		pos = debug->len - off;
+	}
+	return (pos < 0 || pos > debug->len) ? -EINVAL : (file->f_pos = pos);
+}
+
+static ssize_t
+lpfc_debugfs_read(struct file *file, char __user *buf,
+		  size_t nbytes, loff_t *ppos)
+{
+	struct lpfc_debug *debug = file->private_data;
+	return simple_read_from_buffer(buf, nbytes, ppos, debug->buffer,
+				       debug->len);
+}
+
+static int
+lpfc_debugfs_release(struct inode *inode, struct file *file)
+{
+	struct lpfc_debug *debug = file->private_data;
+
+	kfree(debug->buffer);
+	kfree(debug);
+
+	return 0;
+}
+
+#undef lpfc_debugfs_op_disc_trc
+static struct file_operations lpfc_debugfs_op_disc_trc = {
+	.owner =        THIS_MODULE,
+	.open =         lpfc_debugfs_disc_trc_open,
+	.llseek =       lpfc_debugfs_lseek,
+	.read =         lpfc_debugfs_read,
+	.release =      lpfc_debugfs_release,
+};
+
+#undef lpfc_debugfs_op_nodelist
+static struct file_operations lpfc_debugfs_op_nodelist = {
+	.owner =        THIS_MODULE,
+	.open =         lpfc_debugfs_nodelist_open,
+	.llseek =       lpfc_debugfs_lseek,
+	.read =         lpfc_debugfs_read,
+	.release =      lpfc_debugfs_release,
+};
+
+#undef lpfc_debugfs_op_hbqinfo
+static struct file_operations lpfc_debugfs_op_hbqinfo = {
+	.owner =        THIS_MODULE,
+	.open =         lpfc_debugfs_hbqinfo_open,
+	.llseek =       lpfc_debugfs_lseek,
+	.read =         lpfc_debugfs_read,
+	.release =      lpfc_debugfs_release,
+};
+
+#undef lpfc_debugfs_op_dumpslim
+static struct file_operations lpfc_debugfs_op_dumpslim = {
+	.owner =        THIS_MODULE,
+	.open =         lpfc_debugfs_dumpslim_open,
+	.llseek =       lpfc_debugfs_lseek,
+	.read =         lpfc_debugfs_read,
+	.release =      lpfc_debugfs_release,
+};
+
+#undef lpfc_debugfs_op_slow_ring_trc
+static struct file_operations lpfc_debugfs_op_slow_ring_trc = {
+	.owner =        THIS_MODULE,
+	.open =         lpfc_debugfs_slow_ring_trc_open,
+	.llseek =       lpfc_debugfs_lseek,
+	.read =         lpfc_debugfs_read,
+	.release =      lpfc_debugfs_release,
+};
+
+static struct dentry *lpfc_debugfs_root = NULL;
+static atomic_t lpfc_debugfs_hba_count;
+#endif
+
+inline void
+lpfc_debugfs_initialize(struct lpfc_vport *vport)
+{
+#ifdef CONFIG_LPFC_DEBUG_FS
+	struct lpfc_hba   *phba = vport->phba;
+	char name[64];
+	uint32_t num, i;
+
+	if (!lpfc_debugfs_enable)
+		return;
+
+	/* Setup lpfc root directory */
+	if (!lpfc_debugfs_root) {
+		lpfc_debugfs_root = debugfs_create_dir("lpfc", NULL);
+		atomic_set(&lpfc_debugfs_hba_count, 0);
+		if (!lpfc_debugfs_root) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+					 "0409 Cannot create debugfs root\n");
+			goto debug_failed;
+		}
+	}
+	if (!lpfc_debugfs_start_time)
+		lpfc_debugfs_start_time = jiffies;
+
+	/* Setup lpfcX directory for specific HBA */
+	snprintf(name, sizeof(name), "lpfc%d", phba->brd_no);
+	if (!phba->hba_debugfs_root) {
+		phba->hba_debugfs_root =
+			debugfs_create_dir(name, lpfc_debugfs_root);
+		if (!phba->hba_debugfs_root) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+					 "0409 Cannot create debugfs hba\n");
+			goto debug_failed;
+		}
+		atomic_inc(&lpfc_debugfs_hba_count);
+		atomic_set(&phba->debugfs_vport_count, 0);
+
+		/* Setup hbqinfo */
+		snprintf(name, sizeof(name), "hbqinfo");
+		phba->debug_hbqinfo =
+			debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
+				 phba->hba_debugfs_root,
+				 phba, &lpfc_debugfs_op_hbqinfo);
+		if (!phba->debug_hbqinfo) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+				"0409 Cannot create debugfs hbqinfo\n");
+			goto debug_failed;
+		}
+
+		/* Setup dumpslim */
+		snprintf(name, sizeof(name), "dumpslim");
+		phba->debug_dumpslim =
+			debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
+				 phba->hba_debugfs_root,
+				 phba, &lpfc_debugfs_op_dumpslim);
+		if (!phba->debug_dumpslim) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+				"0409 Cannot create debugfs dumpslim\n");
+			goto debug_failed;
+		}
+
+		/* Setup slow ring trace */
+		if (lpfc_debugfs_max_slow_ring_trc) {
+			num = lpfc_debugfs_max_slow_ring_trc - 1;
+			if (num & lpfc_debugfs_max_slow_ring_trc) {
+				/* Change to be a power of 2 */
+				num = lpfc_debugfs_max_slow_ring_trc;
+				i = 0;
+				while (num > 1) {
+					num = num >> 1;
+					i++;
+				}
+				lpfc_debugfs_max_slow_ring_trc = (1 << i);
+				printk(KERN_ERR
+				       "lpfc_debugfs_max_slow_ring_trc "
+				       "changed to %d\n",
+				       lpfc_debugfs_max_slow_ring_trc);
+			}
+		}
+
+
+		snprintf(name, sizeof(name), "slow_ring_trace");
+		phba->debug_slow_ring_trc =
+			debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
+				 phba->hba_debugfs_root,
+				 phba, &lpfc_debugfs_op_slow_ring_trc);
+		if (!phba->debug_slow_ring_trc) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+					 "0409 Cannot create debugfs "
+					 "slow_ring_trace\n");
+			goto debug_failed;
+		}
+		if (!phba->slow_ring_trc) {
+			phba->slow_ring_trc = kmalloc(
+				(sizeof(struct lpfc_debugfs_trc) *
+				lpfc_debugfs_max_slow_ring_trc),
+				GFP_KERNEL);
+			if (!phba->slow_ring_trc) {
+				lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+						 "0409 Cannot create debugfs "
+						 "slow_ring buffer\n");
+				goto debug_failed;
+			}
+			atomic_set(&phba->slow_ring_trc_cnt, 0);
+			memset(phba->slow_ring_trc, 0,
+				(sizeof(struct lpfc_debugfs_trc) *
+				lpfc_debugfs_max_slow_ring_trc));
+		}
+	}
+
+	snprintf(name, sizeof(name), "vport%d", vport->vpi);
+	if (!vport->vport_debugfs_root) {
+		vport->vport_debugfs_root =
+			debugfs_create_dir(name, phba->hba_debugfs_root);
+		if (!vport->vport_debugfs_root) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+					 "0409 Cannot create debugfs\n");
+			goto debug_failed;
+		}
+		atomic_inc(&phba->debugfs_vport_count);
+	}
+
+	if (lpfc_debugfs_max_disc_trc) {
+		num = lpfc_debugfs_max_disc_trc - 1;
+		if (num & lpfc_debugfs_max_disc_trc) {
+			/* Change to be a power of 2 */
+			num = lpfc_debugfs_max_disc_trc;
+			i = 0;
+			while (num > 1) {
+				num = num >> 1;
+				i++;
+			}
+			lpfc_debugfs_max_disc_trc = (1 << i);
+			printk(KERN_ERR
+			       "lpfc_debugfs_max_disc_trc changed to %d\n",
+			       lpfc_debugfs_max_disc_trc);
+		}
+	}
+
+	vport->disc_trc = kmalloc(
+		(sizeof(struct lpfc_debugfs_trc) * lpfc_debugfs_max_disc_trc),
+		GFP_KERNEL);
+
+	if (!vport->disc_trc) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+				 "0409 Cannot create debugfs disc trace "
+				 "buffer\n");
+		goto debug_failed;
+	}
+	atomic_set(&vport->disc_trc_cnt, 0);
+	memset(vport->disc_trc, 0,
+		(sizeof(struct lpfc_debugfs_trc) * lpfc_debugfs_max_disc_trc));
+
+	snprintf(name, sizeof(name), "discovery_trace");
+	vport->debug_disc_trc =
+		debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
+				 vport->vport_debugfs_root,
+				 vport, &lpfc_debugfs_op_disc_trc);
+	if (!vport->debug_disc_trc) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+				 "0409 Cannot create debugfs "
+				 "discovery_trace\n");
+		goto debug_failed;
+	}
+	snprintf(name, sizeof(name), "nodelist");
+	vport->debug_nodelist =
+		debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR,
+				 vport->vport_debugfs_root,
+				 vport, &lpfc_debugfs_op_nodelist);
+	if (!vport->debug_nodelist) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
+				 "0409 Cannot create debugfs nodelist\n");
+		goto debug_failed;
+	}
+debug_failed:
+	return;
+#endif
+}
+
+
+inline void
+lpfc_debugfs_terminate(struct lpfc_vport *vport)
+{
+#ifdef CONFIG_LPFC_DEBUG_FS
+	struct lpfc_hba   *phba = vport->phba;
+
+	if (vport->disc_trc) {
+		kfree(vport->disc_trc);
+		vport->disc_trc = NULL;
+	}
+	if (vport->debug_disc_trc) {
+		debugfs_remove(vport->debug_disc_trc); /* discovery_trace */
+		vport->debug_disc_trc = NULL;
+	}
+	if (vport->debug_nodelist) {
+		debugfs_remove(vport->debug_nodelist); /* nodelist */
+		vport->debug_nodelist = NULL;
+	}
+
+	if (vport->vport_debugfs_root) {
+		debugfs_remove(vport->vport_debugfs_root); /* vportX */
+		vport->vport_debugfs_root = NULL;
+		atomic_dec(&phba->debugfs_vport_count);
+	}
+	if (atomic_read(&phba->debugfs_vport_count) == 0) {
+
+		if (phba->debug_hbqinfo) {
+			debugfs_remove(phba->debug_hbqinfo); /* hbqinfo */
+			phba->debug_hbqinfo = NULL;
+		}
+		if (phba->debug_dumpslim) {
+			debugfs_remove(phba->debug_dumpslim); /* dumpslim */
+			phba->debug_dumpslim = NULL;
+		}
+		if (phba->slow_ring_trc) {
+			kfree(phba->slow_ring_trc);
+			phba->slow_ring_trc = NULL;
+		}
+		if (phba->debug_slow_ring_trc) {
+			/* slow_ring_trace */
+			debugfs_remove(phba->debug_slow_ring_trc);
+			phba->debug_slow_ring_trc = NULL;
+		}
+
+		if (phba->hba_debugfs_root) {
+			debugfs_remove(phba->hba_debugfs_root); /* lpfcX */
+			phba->hba_debugfs_root = NULL;
+			atomic_dec(&lpfc_debugfs_hba_count);
+		}
+
+		if (atomic_read(&lpfc_debugfs_hba_count) == 0) {
+			debugfs_remove(lpfc_debugfs_root); /* lpfc */
+			lpfc_debugfs_root = NULL;
+		}
+	}
+#endif
+	return;
+}
+
+
diff --git a/drivers/scsi/lpfc/lpfc_debugfs.h b/drivers/scsi/lpfc/lpfc_debugfs.h
new file mode 100644
index 0000000..31e86a5
--- /dev/null
+++ b/drivers/scsi/lpfc/lpfc_debugfs.h
@@ -0,0 +1,50 @@
+/*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for         *
+ * Fibre Channel Host Bus Adapters.                                *
+ * Copyright (C) 2007 Emulex.  All rights reserved.                *
+ * EMULEX and SLI are trademarks of Emulex.                        *
+ * www.emulex.com                                                  *
+ *                                                                 *
+ * This program is free software; you can redistribute it and/or   *
+ * modify it under the terms of version 2 of the GNU General       *
+ * Public License as published by the Free Software Foundation.    *
+ * This program is distributed in the hope that it will be useful. *
+ * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+ * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+ * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+ * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+ * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+ * more details, a copy of which can be found in the file COPYING  *
+ * included with this package.                                     *
+ *******************************************************************/
+
+#ifndef _H_LPFC_DEBUG_FS
+#define _H_LPFC_DEBUG_FS
+
+#ifdef CONFIG_LPFC_DEBUG_FS
+struct lpfc_debugfs_trc {
+	char *fmt;
+	uint32_t data1;
+	uint32_t data2;
+	uint32_t data3;
+	uint32_t seq_cnt;
+	unsigned long jif;
+};
+#endif
+
+/* Mask for discovery_trace */
+#define LPFC_DISC_TRC_ELS_CMD		0x1	/* Trace ELS commands */
+#define LPFC_DISC_TRC_ELS_RSP		0x2	/* Trace ELS response */
+#define LPFC_DISC_TRC_ELS_UNSOL		0x4	/* Trace ELS rcv'ed   */
+#define LPFC_DISC_TRC_ELS_ALL		0x7	/* Trace ELS */
+#define LPFC_DISC_TRC_MBOX_VPORT	0x8	/* Trace vport MBOXs */
+#define LPFC_DISC_TRC_MBOX		0x10	/* Trace other MBOXs */
+#define LPFC_DISC_TRC_MBOX_ALL		0x18	/* Trace all MBOXs */
+#define LPFC_DISC_TRC_CT		0x20	/* Trace disc CT requests */
+#define LPFC_DISC_TRC_DSM		0x40    /* Trace DSM events */
+#define LPFC_DISC_TRC_RPORT		0x80    /* Trace rport events */
+#define LPFC_DISC_TRC_NODE		0x100   /* Trace ndlp state changes */
+
+#define LPFC_DISC_TRC_DISCOVERY		0xef    /* common mask for general
+						 * discovery */
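+
+/* The masks above may be OR'd together when set via the
+ * lpfc_debugfs_mask_disc_trc module parameter; for example
+ * (LPFC_DISC_TRC_ELS_ALL | LPFC_DISC_TRC_CT) == 0x27 traces all ELS
+ * traffic plus discovery CT requests.
+ */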
+#endif /* H_LPFC_DEBUG_FS */
diff --git a/drivers/scsi/lpfc/lpfc_disc.h b/drivers/scsi/lpfc/lpfc_disc.h
index 7716ffd..dde32cd 100644
--- a/drivers/scsi/lpfc/lpfc_disc.h
+++ b/drivers/scsi/lpfc/lpfc_disc.h
@@ -36,21 +36,24 @@ enum lpfc_work_type {
 	LPFC_EVT_WARM_START,
 	LPFC_EVT_KILL,
 	LPFC_EVT_ELS_RETRY,
+	LPFC_EVT_DEV_LOSS_DELAY,
+	LPFC_EVT_DEV_LOSS,
+	LPFC_EVT_REAUTH,
 };
 
 /* structure used to queue event to the discovery tasklet */
 struct lpfc_work_evt {
 	struct list_head      evt_listp;
-	void                * evt_arg1;
-	void                * evt_arg2;
+	void                 *evt_arg1;
+	void                 *evt_arg2;
 	enum lpfc_work_type   evt;
 };
 
 
 struct lpfc_nodelist {
 	struct list_head nlp_listp;
-	struct lpfc_name nlp_portname;		/* port name */
-	struct lpfc_name nlp_nodename;		/* node name */
+	struct lpfc_name nlp_portname;
+	struct lpfc_name nlp_nodename;
 	uint32_t         nlp_flag;		/* entry  flags */
 	uint32_t         nlp_DID;		/* FC D_ID of entry */
 	uint32_t         nlp_last_elscmd;	/* Last ELS cmd sent */
@@ -69,37 +72,31 @@ struct lpfc_nodelist {
 	uint16_t	nlp_maxframe;		/* Max RCV frame size */
 	uint8_t		nlp_class_sup;		/* Supported Classes */
 	uint8_t         nlp_retry;		/* used for ELS retries */
-	uint8_t         nlp_disc_refcnt;	/* used for DSM */
 	uint8_t         nlp_fcp_info;	        /* class info, bits 0-3 */
 #define NLP_FCP_2_DEVICE   0x10			/* FCP-2 device */
 
 	struct timer_list   nlp_delayfunc;	/* Used for delayed ELS cmds */
+	struct timer_list   nlp_reauth_tmr;	/* Used for re-authentication */
+	struct timer_list   nlp_initiator_tmr;	/* Used with dev_loss */
 	struct fc_rport *rport;			/* Corresponding FC transport
 						   port structure */
-	struct lpfc_hba      *nlp_phba;
+	struct lpfc_vport *vport;
 	struct lpfc_work_evt els_retry_evt;
+	struct lpfc_work_evt els_reauth_evt;
+	struct lpfc_work_evt dev_loss_evt;
 	unsigned long last_ramp_up_time;        /* jiffy of last ramp up */
 	unsigned long last_q_full_time;		/* jiffy of last queue full */
+	struct kref     kref;
 };
 
 /* Defines for nlp_flag (uint32) */
-#define NLP_NO_LIST        0x0		/* Indicates immediately free node */
-#define NLP_UNUSED_LIST    0x1		/* Flg to indicate node will be freed */
-#define NLP_PLOGI_LIST     0x2		/* Flg to indicate sent PLOGI */
-#define NLP_ADISC_LIST     0x3		/* Flg to indicate sent ADISC */
-#define NLP_REGLOGIN_LIST  0x4		/* Flg to indicate sent REG_LOGIN */
-#define NLP_PRLI_LIST      0x5		/* Flg to indicate sent PRLI */
-#define NLP_UNMAPPED_LIST  0x6		/* Node is now unmapped */
-#define NLP_MAPPED_LIST    0x7		/* Node is now mapped */
-#define NLP_NPR_LIST       0x8		/* Node is in NPort Recovery state */
-#define NLP_JUST_DQ        0x9		/* just deque ndlp in lpfc_nlp_list */
-#define NLP_LIST_MASK      0xf		/* mask to see what list node is on */
 #define NLP_PLOGI_SND      0x20		/* sent PLOGI request for this entry */
 #define NLP_PRLI_SND       0x40		/* sent PRLI request for this entry */
 #define NLP_ADISC_SND      0x80		/* sent ADISC request for this entry */
 #define NLP_LOGO_SND       0x100	/* sent LOGO request for this entry */
 #define NLP_RNID_SND       0x400	/* sent RNID request for this entry */
 #define NLP_ELS_SND_MASK   0x7e0	/* sent ELS request for this entry */
+#define NLP_DEFER_RM       0x10000	/* Remove this ndlp if no longer used */
 #define NLP_DELAY_TMO      0x20000	/* delay timeout is running for node */
 #define NLP_NPR_2B_DISC    0x40000	/* node is included in num_disc_nodes */
 #define NLP_RCV_PLOGI      0x80000	/* Rcv'ed PLOGI from remote system */
@@ -109,19 +106,9 @@ struct lpfc_nodelist {
 					   ACC */
 #define NLP_NPR_ADISC      0x2000000	/* Issue ADISC when dq'ed from
 					   NPR list */
-#define NLP_DELAY_REMOVE   0x4000000	/* Defer removal till end of DSM */
+#define NLP_RM_DFLT_RPI    0x4000000	/* need to remove leftover dflt RPI */
 #define NLP_NODEV_REMOVE   0x8000000	/* Defer removal till discovery ends */
-
-/* Defines for list searchs */
-#define NLP_SEARCH_MAPPED    0x1	/* search mapped */
-#define NLP_SEARCH_UNMAPPED  0x2	/* search unmapped */
-#define NLP_SEARCH_PLOGI     0x4	/* search plogi */
-#define NLP_SEARCH_ADISC     0x8	/* search adisc */
-#define NLP_SEARCH_REGLOGIN  0x10	/* search reglogin */
-#define NLP_SEARCH_PRLI      0x20	/* search prli */
-#define NLP_SEARCH_NPR       0x40	/* search npr */
-#define NLP_SEARCH_UNUSED    0x80	/* search mapped */
-#define NLP_SEARCH_ALL       0xff	/* search all lists */
+#define NLP_TARGET_REMOVE  0x10000000   /* Target remove in process */
 
 /* There are 4 different double linked lists nodelist entries can reside on.
  * The Port Login (PLOGI) list and Address Discovery (ADISC) list are used
diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index f0afc22..94138fb 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -35,38 +35,41 @@
 #include "lpfc.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_crtn.h"
+#include "lpfc_vport.h"
+#include "lpfc_debugfs.h"
+#include "lpfc_auth_access.h"
+#include "lpfc_auth.h"
+#include "lpfc_security.h"
 
 static int lpfc_els_retry(struct lpfc_hba *, struct lpfc_iocbq *,
 			  struct lpfc_iocbq *);
+static void lpfc_cmpl_fabric_iocb(struct lpfc_hba *, struct lpfc_iocbq *,
+			struct lpfc_iocbq *);
+
 static int lpfc_max_els_tries = 3;
 
-static int
-lpfc_els_chk_latt(struct lpfc_hba * phba)
+int
+lpfc_els_chk_latt(struct lpfc_vport *vport)
 {
-	struct lpfc_sli *psli;
-	LPFC_MBOXQ_t *mbox;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
 	uint32_t ha_copy;
-	int rc;
-
-	psli = &phba->sli;
 
-	if ((phba->hba_state >= LPFC_HBA_READY) ||
-	    (phba->hba_state == LPFC_LINK_DOWN))
+	if (vport->port_state >= LPFC_VPORT_READY ||
+	    phba->link_state == LPFC_LINK_DOWN)
 		return 0;
 
 	/* Read the HBA Host Attention Register */
-	spin_lock_irq(phba->host->host_lock);
 	ha_copy = readl(phba->HAregaddr);
-	spin_unlock_irq(phba->host->host_lock);
 
 	if (!(ha_copy & HA_LATT))
 		return 0;
 
 	/* Pending Link Event during Discovery */
-	lpfc_printf_log(phba, KERN_WARNING, LOG_DISCOVERY,
-			"%d:0237 Pending Link Event during "
-			"Discovery: State x%x\n",
-			phba->brd_no, phba->hba_state);
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+			 "0237 Pending Link Event during "
+			 "Discovery: State x%x\n",
+			 phba->pport->port_state);
 
 	/* CLEAR_LA should re-enable link attention events and
 	 * we should then immediately take a LATT event. The
@@ -74,48 +77,34 @@ lpfc_els_chk_latt(struct lpfc_hba * phba)
 	 * will cleanup any left over in-progress discovery
 	 * events.
 	 */
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag |= FC_ABORT_DISCOVERY;
-	spin_unlock_irq(phba->host->host_lock);
-
-	if (phba->hba_state != LPFC_CLEAR_LA) {
-		if ((mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL))) {
-			phba->hba_state = LPFC_CLEAR_LA;
-			lpfc_clear_la(phba, mbox);
-			mbox->mbox_cmpl = lpfc_mbx_cmpl_clear_la;
-			rc = lpfc_sli_issue_mbox (phba, mbox,
-						  (MBX_NOWAIT | MBX_STOP_IOCB));
-			if (rc == MBX_NOT_FINISHED) {
-				mempool_free(mbox, phba->mbox_mem_pool);
-				phba->hba_state = LPFC_HBA_ERROR;
-			}
-		}
-	}
+	spin_lock_irq(shost->host_lock);
+	vport->fc_flag |= FC_ABORT_DISCOVERY;
+	spin_unlock_irq(shost->host_lock);
 
-	return 1;
+	if (phba->link_state != LPFC_CLEAR_LA)
+		lpfc_issue_clear_la(phba, vport);
 
+	return 1;
 }
 
 struct lpfc_iocbq *
-lpfc_prep_els_iocb(struct lpfc_hba * phba, uint8_t expectRsp,
-		   uint16_t cmdSize, uint8_t retry, struct lpfc_nodelist * ndlp,
-		   uint32_t did, uint32_t elscmd)
+lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp,
+		   uint16_t cmdSize, uint8_t retry,
+		   struct lpfc_nodelist *ndlp, uint32_t did,
+		   uint32_t elscmd)
 {
-	struct lpfc_sli_ring *pring;
+	struct lpfc_hba  *phba = vport->phba;
 	struct lpfc_iocbq *elsiocb;
 	struct lpfc_dmabuf *pcmd, *prsp, *pbuflist;
 	struct ulp_bde64 *bpl;
 	IOCB_t *icmd;
 
-	pring = &phba->sli.ring[LPFC_ELS_RING];
 
-	if (phba->hba_state < LPFC_LINK_UP)
-		return  NULL;
+	if (!lpfc_is_link_up(phba))
+		return NULL;
 
 	/* Allocate buffer for  command iocb */
-	spin_lock_irq(phba->host->host_lock);
 	elsiocb = lpfc_sli_get_iocbq(phba);
-	spin_unlock_irq(phba->host->host_lock);
 
 	if (elsiocb == NULL)
 		return NULL;
@@ -123,14 +112,12 @@ lpfc_prep_els_iocb(struct lpfc_hba * phba, uint8_t expectRsp,
 
 	/* fill in BDEs for command */
 	/* Allocate buffer for command payload */
-	if (((pcmd = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL)) == 0) ||
+	if (((pcmd = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL)) == 0) ||
 	    ((pcmd->virt = lpfc_mbuf_alloc(phba,
 					   MEM_PRI, &(pcmd->phys))) == 0)) {
 		kfree(pcmd);
 
-		spin_lock_irq(phba->host->host_lock);
 		lpfc_sli_release_iocbq(phba, elsiocb);
-		spin_unlock_irq(phba->host->host_lock);
 		return NULL;
 	}
 
@@ -138,7 +125,7 @@ lpfc_prep_els_iocb(struct lpfc_hba * phba, uint8_t expectRsp,
 
 	/* Allocate buffer for response payload */
 	if (expectRsp) {
-		prsp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
+		prsp = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL);
 		if (prsp)
 			prsp->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
 						     &prsp->phys);
@@ -146,9 +133,7 @@ lpfc_prep_els_iocb(struct lpfc_hba * phba, uint8_t expectRsp,
 			kfree(prsp);
 			lpfc_mbuf_free(phba, pcmd->virt, pcmd->phys);
 			kfree(pcmd);
-			spin_lock_irq(phba->host->host_lock);
 			lpfc_sli_release_iocbq(phba, elsiocb);
-			spin_unlock_irq(phba->host->host_lock);
 			return NULL;
 		}
 		INIT_LIST_HEAD(&prsp->list);
@@ -157,14 +142,12 @@ lpfc_prep_els_iocb(struct lpfc_hba * phba, uint8_t expectRsp,
 	}
 
 	/* Allocate buffer for Buffer ptr list */
-	pbuflist = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
+	pbuflist = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL);
 	if (pbuflist)
-	    pbuflist->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
-					     &pbuflist->phys);
+		pbuflist->virt = lpfc_mbuf_alloc(phba, MEM_PRI,
+						 &pbuflist->phys);
 	if (pbuflist == 0 || pbuflist->virt == 0) {
-		spin_lock_irq(phba->host->host_lock);
 		lpfc_sli_release_iocbq(phba, elsiocb);
-		spin_unlock_irq(phba->host->host_lock);
 		lpfc_mbuf_free(phba, pcmd->virt, pcmd->phys);
 		lpfc_mbuf_free(phba, prsp->virt, prsp->phys);
 		kfree(pcmd);
@@ -178,19 +161,28 @@ lpfc_prep_els_iocb(struct lpfc_hba * phba, uint8_t expectRsp,
 	icmd->un.elsreq64.bdl.addrHigh = putPaddrHigh(pbuflist->phys);
 	icmd->un.elsreq64.bdl.addrLow = putPaddrLow(pbuflist->phys);
 	icmd->un.elsreq64.bdl.bdeFlags = BUFF_TYPE_BDL;
+	icmd->un.elsreq64.remoteID = did;	/* DID */
 	if (expectRsp) {
-		icmd->un.elsreq64.bdl.bdeSize = (2 * sizeof (struct ulp_bde64));
-		icmd->un.elsreq64.remoteID = did;	/* DID */
+		icmd->un.elsreq64.bdl.bdeSize = (2 * sizeof(struct ulp_bde64));
 		icmd->ulpCommand = CMD_ELS_REQUEST64_CR;
+		icmd->ulpTimeout = phba->fc_ratov * 2;
 	} else {
-		icmd->un.elsreq64.bdl.bdeSize = sizeof (struct ulp_bde64);
+		icmd->un.elsreq64.bdl.bdeSize = sizeof(struct ulp_bde64);
 		icmd->ulpCommand = CMD_XMIT_ELS_RSP64_CX;
 	}
-
 	icmd->ulpBdeCount = 1;
 	icmd->ulpLe = 1;
 	icmd->ulpClass = CLASS3;
 
+	if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
+		icmd->un.elsreq64.myID = vport->fc_myDID;
+
+		/* For ELS_REQUEST64_CR, use the VPI by default */
+		icmd->ulpContext = vport->vpi;
+		icmd->ulpCt_h = 0;
+		icmd->ulpCt_l = 1;
+	}
+
 	bpl = (struct ulp_bde64 *) pbuflist->virt;
 	bpl->addrLow = le32_to_cpu(putPaddrLow(pcmd->phys));
 	bpl->addrHigh = le32_to_cpu(putPaddrHigh(pcmd->phys));
@@ -207,48 +199,121 @@ lpfc_prep_els_iocb(struct lpfc_hba * phba, uint8_t expectRsp,
 		bpl->tus.w = le32_to_cpu(bpl->tus.w);
 	}
 
-	/* Save for completion so we can release these resources */
-	elsiocb->context1 = (uint8_t *) ndlp;
-	elsiocb->context2 = (uint8_t *) pcmd;
-	elsiocb->context3 = (uint8_t *) pbuflist;
+	elsiocb->context1 = lpfc_nlp_get(ndlp);
+	elsiocb->context2 = pcmd;
+	elsiocb->context3 = pbuflist;
 	elsiocb->retry = retry;
+	elsiocb->vport = vport;
 	elsiocb->drvrTimeout = (phba->fc_ratov << 1) + LPFC_DRVR_TIMEOUT;
 
 	if (prsp) {
 		list_add(&prsp->list, &pcmd->list);
 	}
-
 	if (expectRsp) {
 		/* Xmit ELS command <elsCmd> to remote NPORT <did> */
-		lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-				"%d:0116 Xmit ELS command x%x to remote "
-				"NPORT x%x I/O tag: x%x, HBA state: x%x\n",
-				phba->brd_no, elscmd,
-				did, elsiocb->iotag, phba->hba_state);
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+				 "0116 Xmit ELS command x%x to remote "
+				 "NPORT x%x I/O tag: x%x, port state: x%x\n",
+				 elscmd, did, elsiocb->iotag,
+				 vport->port_state);
 	} else {
 		/* Xmit ELS response <elsCmd> to remote NPORT <did> */
-		lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-				"%d:0117 Xmit ELS response x%x to remote "
-				"NPORT x%x I/O tag: x%x, size: x%x\n",
-				phba->brd_no, elscmd,
-				ndlp->nlp_DID, elsiocb->iotag, cmdSize);
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+				 "0117 Xmit ELS response x%x to remote "
+				 "NPORT x%x I/O tag: x%x, size: x%x\n",
+				 elscmd, ndlp->nlp_DID, elsiocb->iotag,
+				 cmdSize);
 	}
-
 	return elsiocb;
 }
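
One theme of this hunk (and of the rest of the file) is that raw ndlp pointers stashed in iocb context fields are replaced by counted references: lpfc_prep_els_iocb() now stores lpfc_nlp_get(ndlp) in context1, and the completion and error paths below drop that reference with lpfc_nlp_put() instead of freeing the node back to the mempool. A minimal, self-contained userspace sketch of the get/put idiom follows; node_alloc/node_get/node_put are illustrative names, not driver symbols, and no locking is shown (the real driver serializes reference updates itself).

#include <stdio.h>
#include <stdlib.h>

/* Toy analog of a discovery node whose lifetime is reference counted. */
struct node {
	int refcnt;		/* 1 at allocation, freed when it reaches 0 */
	unsigned int did;	/* stand-in for ndlp->nlp_DID */
};

static struct node *node_alloc(unsigned int did)
{
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return NULL;
	n->refcnt = 1;		/* the caller owns the initial reference */
	n->did = did;
	return n;
}

/* Take an extra reference before saving the pointer somewhere else. */
static struct node *node_get(struct node *n)
{
	if (n)
		n->refcnt++;
	return n;
}

/* Drop one reference; the last put frees the node. */
static void node_put(struct node *n)
{
	if (n && --n->refcnt == 0)
		free(n);
}

int main(void)
{
	struct node *n = node_alloc(0xfffffe);
	struct node *ctx;

	if (!n)
		return 1;
	ctx = node_get(n);	/* e.g. saved as an iocb's context1 */
	printf("did x%x, refcnt %d\n", ctx->did, ctx->refcnt);
	node_put(ctx);		/* the completion handler's put */
	node_put(n);		/* the original owner's put frees it */
	return 0;
}
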
 
 
 static int
-lpfc_cmpl_els_flogi_fabric(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
-		struct serv_parm *sp, IOCB_t *irsp)
+lpfc_issue_fabric_reglogin(struct lpfc_vport *vport)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	LPFC_MBOXQ_t *mbox;
 	struct lpfc_dmabuf *mp;
+	struct lpfc_nodelist *ndlp;
+	struct serv_parm *sp;
 	int rc;
+	int err = 0;
 
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag |= FC_FABRIC;
-	spin_unlock_irq(phba->host->host_lock);
+	sp = &phba->fc_fabparam;
+	ndlp = lpfc_findnode_did(vport, Fabric_DID);
+	if (!ndlp) {
+		err = 1;
+		goto fail;
+	}
+
+	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (!mbox) {
+		err = 2;
+		goto fail;
+	}
+
+	vport->port_state = LPFC_FABRIC_CFG_LINK;
+	lpfc_config_link(phba, mbox);
+	mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+	mbox->vport = vport;
+
+	rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+	if (rc == MBX_NOT_FINISHED) {
+		err = 3;
+		goto fail_free_mbox;
+	}
+
+	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (!mbox) {
+		err = 4;
+		goto fail;
+	}
+	rc = lpfc_reg_login(phba, vport->vpi, Fabric_DID, (uint8_t *)sp, mbox,
+			    0);
+	if (rc) {
+		err = 5;
+		goto fail_free_mbox;
+	}
+
+	mbox->mbox_cmpl = lpfc_mbx_cmpl_fabric_reg_login;
+	mbox->vport = vport;
+	mbox->context2 = lpfc_nlp_get(ndlp);
+
+	rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+	if (rc == MBX_NOT_FINISHED) {
+		err = 6;
+		goto fail_issue_reg_login;
+	}
+
+	return 0;
+
+fail_issue_reg_login:
+	lpfc_nlp_put(ndlp);
+	mp = (struct lpfc_dmabuf *) mbox->context1;
+	lpfc_mbuf_free(phba, mp->virt, mp->phys);
+	kfree(mp);
+fail_free_mbox:
+	mempool_free(mbox, phba->mbox_mem_pool);
+
+fail:
+	lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+		"0249 Cannot issue Register Fabric login: Err %d\n", err);
+	return -ENXIO;
+}
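+
+The new lpfc_issue_fabric_reglogin() above uses a numbered goto error ladder: each failure point records a small step number in err, jumps to the label that unwinds only what has already been set up, and a single message at the bottom ("0249 Cannot issue Register Fabric login: Err %d") reports which step failed. A compact, self-contained sketch of the same pattern follows; the resources and step names here are invented for illustration and are not driver symbols.
+
+#include <stdio.h>
+#include <stdlib.h>
+
+/* Each failure jumps to the label that releases what earlier steps
+ * acquired, in reverse order; err records which step went wrong.
+ */
+static int setup_two_buffers(void)
+{
+	char *first, *second;
+	int err = 0;
+
+	first = malloc(32);
+	if (!first) {
+		err = 1;
+		goto fail;
+	}
+
+	second = malloc(64);
+	if (!second) {
+		err = 2;
+		goto fail_free_first;
+	}
+
+	/* A real setup would hand the buffers off here; the toy just
+	 * releases them and reports success.
+	 */
+	free(second);
+	free(first);
+	return 0;
+
+fail_free_first:
+	free(first);
+fail:
+	fprintf(stderr, "setup failed at step %d\n", err);
+	return -1;
+}
+
+int main(void)
+{
+	return setup_two_buffers() ? EXIT_FAILURE : EXIT_SUCCESS;
+}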
+
+static int
+lpfc_cmpl_els_flogi_fabric(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   struct serv_parm *sp, IOCB_t *irsp)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
+	struct lpfc_nodelist *np;
+	struct lpfc_nodelist *next_np;
+
+	spin_lock_irq(shost->host_lock);
+	vport->fc_flag |= FC_FABRIC;
+	spin_unlock_irq(shost->host_lock);
 
 	phba->fc_edtov = be32_to_cpu(sp->cmn.e_d_tov);
 	if (sp->cmn.edtovResolution)	/* E_D_TOV ticks are in nanoseconds */
@@ -257,20 +322,20 @@ lpfc_cmpl_els_flogi_fabric(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
 	phba->fc_ratov = (be32_to_cpu(sp->cmn.w2.r_a_tov) + 999) / 1000;
 
 	if (phba->fc_topology == TOPOLOGY_LOOP) {
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag |= FC_PUBLIC_LOOP;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
+		vport->fc_flag |= FC_PUBLIC_LOOP;
+		spin_unlock_irq(shost->host_lock);
 	} else {
 		/*
 		 * If we are a N-port connected to a Fabric, fixup sparam's so
 		 * logins to devices on remote loops work.
 		 */
-		phba->fc_sparam.cmn.altBbCredit = 1;
+		vport->fc_sparam.cmn.altBbCredit = 1;
 	}
 
-	phba->fc_myDID = irsp->un.ulpWord[4] & Mask_DID;
+	vport->fc_myDID = irsp->un.ulpWord[4] & Mask_DID;
 	memcpy(&ndlp->nlp_portname, &sp->portName, sizeof(struct lpfc_name));
-	memcpy(&ndlp->nlp_nodename, &sp->nodeName, sizeof (struct lpfc_name));
+	memcpy(&ndlp->nlp_nodename, &sp->nodeName, sizeof(struct lpfc_name));
 	ndlp->nlp_class_sup = 0;
 	if (sp->cls1.classValid)
 		ndlp->nlp_class_sup |= FC_COS_CLASS1;
@@ -284,67 +349,84 @@ lpfc_cmpl_els_flogi_fabric(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
 				sp->cmn.bbRcvSizeLsb;
 	memcpy(&phba->fc_fabparam, sp, sizeof(struct serv_parm));
 
-	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-	if (!mbox)
-		goto fail;
-
-	phba->hba_state = LPFC_FABRIC_CFG_LINK;
-	lpfc_config_link(phba, mbox);
-	mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-
-	rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT | MBX_STOP_IOCB);
-	if (rc == MBX_NOT_FINISHED)
-		goto fail_free_mbox;
-
-	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-	if (!mbox)
-		goto fail;
+	if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
+		if (sp->cmn.response_multiple_NPort) {
+			lpfc_printf_vlog(vport, KERN_WARNING,
+					 LOG_ELS | LOG_VPORT,
+					 "1816 FLOGI NPIV supported, "
+					 "response data 0x%x\n",
+					 sp->cmn.response_multiple_NPort);
+			phba->link_flag |= LS_NPIV_FAB_SUPPORTED;
+		} else {
+			/* Because we asked f/w for NPIV it still expects us
+			to call reg_vnpid atleast for the physcial host */
+			lpfc_printf_vlog(vport, KERN_WARNING,
+					 LOG_ELS | LOG_VPORT,
+					 "1817 Fabric does not support NPIV "
+					 "- configuring single port mode.\n");
+			phba->link_flag &= ~LS_NPIV_FAB_SUPPORTED;
+		}
+	}
 
-	if (lpfc_reg_login(phba, Fabric_DID, (uint8_t *) sp, mbox, 0))
-		goto fail_free_mbox;
+	if ((vport->fc_prevDID != vport->fc_myDID) &&
+		!(vport->fc_flag & FC_VPORT_NEEDS_REG_VPI)) {
 
-	mbox->mbox_cmpl = lpfc_mbx_cmpl_fabric_reg_login;
-	mbox->context2 = ndlp;
+		/* If our NportID changed, we need to ensure all
+		 * remaining NPORTs get unreg_login'ed.
+		 */
+		list_for_each_entry_safe(np, next_np,
+					&vport->fc_nodes, nlp_listp) {
+			if ((np->nlp_state != NLP_STE_NPR_NODE) ||
+				   !(np->nlp_flag & NLP_NPR_ADISC))
+				continue;
+			spin_lock_irq(shost->host_lock);
+			np->nlp_flag &= ~NLP_NPR_ADISC;
+			spin_unlock_irq(shost->host_lock);
+			lpfc_unreg_rpi(vport, np);
+		}
+		if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
+			lpfc_mbx_unreg_vpi(vport);
+			vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
+		}
+	}
 
-	rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT | MBX_STOP_IOCB);
-	if (rc == MBX_NOT_FINISHED)
-		goto fail_issue_reg_login;
+	ndlp->nlp_sid = irsp->un.ulpWord[4] & Mask_DID;
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_REG_LOGIN_ISSUE);
 
+	if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED &&
+	    vport->fc_flag & FC_VPORT_NEEDS_REG_VPI) {
+		lpfc_register_new_vport(phba, vport, ndlp);
+		return 0;
+	}
+	lpfc_issue_fabric_reglogin(vport);
 	return 0;
-
- fail_issue_reg_login:
-	mp = (struct lpfc_dmabuf *) mbox->context1;
-	lpfc_mbuf_free(phba, mp->virt, mp->phys);
-	kfree(mp);
- fail_free_mbox:
-	mempool_free(mbox, phba->mbox_mem_pool);
- fail:
-	return -ENXIO;
 }
 
 /*
  * We FLOGIed into an NPort, initiate pt2pt protocol
  */
 static int
-lpfc_cmpl_els_flogi_nport(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
-		struct serv_parm *sp)
+lpfc_cmpl_els_flogi_nport(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  struct serv_parm *sp)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
 	LPFC_MBOXQ_t *mbox;
 	int rc;
 
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
-	spin_unlock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
+	vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
+	spin_unlock_irq(shost->host_lock);
 
 	phba->fc_edtov = FF_DEF_EDTOV;
 	phba->fc_ratov = FF_DEF_RATOV;
-	rc = memcmp(&phba->fc_portname, &sp->portName,
-			sizeof(struct lpfc_name));
+	rc = memcmp(&vport->fc_portname, &sp->portName,
+		    sizeof(vport->fc_portname));
 	if (rc >= 0) {
 		/* This side will initiate the PLOGI */
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag |= FC_PT2PT_PLOGI;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
+		vport->fc_flag |= FC_PT2PT_PLOGI;
+		spin_unlock_irq(shost->host_lock);
 
 		/*
 		 * N_Port ID cannot be 0, set our to LocalID the other
@@ -353,7 +435,7 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
 
 		/* not equal */
 		if (rc)
-			phba->fc_myDID = PT2PT_LocalID;
+			vport->fc_myDID = PT2PT_LocalID;
 
 		mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 		if (!mbox)
@@ -362,15 +444,15 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
 		lpfc_config_link(phba, mbox);
 
 		mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-		rc = lpfc_sli_issue_mbox(phba, mbox,
-				MBX_NOWAIT | MBX_STOP_IOCB);
+		mbox->vport = vport;
+		rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
 		if (rc == MBX_NOT_FINISHED) {
 			mempool_free(mbox, phba->mbox_mem_pool);
 			goto fail;
 		}
-		mempool_free(ndlp, phba->nlp_mem_pool);
+		lpfc_nlp_put(ndlp);
 
-		ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, PT2PT_RemoteID);
+		ndlp = lpfc_findnode_did(vport, PT2PT_RemoteID);
 		if (!ndlp) {
 			/*
 			 * Cannot find existing Fabric ndlp, so allocate a
@@ -380,76 +462,83 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
 			if (!ndlp)
 				goto fail;
 
-			lpfc_nlp_init(phba, ndlp, PT2PT_RemoteID);
+			lpfc_nlp_init(vport, ndlp, PT2PT_RemoteID);
 		}
 
 		memcpy(&ndlp->nlp_portname, &sp->portName,
-				sizeof(struct lpfc_name));
+		       sizeof(struct lpfc_name));
 		memcpy(&ndlp->nlp_nodename, &sp->nodeName,
-				sizeof(struct lpfc_name));
-		ndlp->nlp_state = NLP_STE_NPR_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
+		       sizeof(struct lpfc_name));
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_NPR_2B_DISC;
+		spin_unlock_irq(shost->host_lock);
 	} else {
 		/* This side will wait for the PLOGI */
-		mempool_free( ndlp, phba->nlp_mem_pool);
+		lpfc_nlp_put(ndlp);
 	}
 
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag |= FC_PT2PT;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
+	vport->fc_flag |= FC_PT2PT;
+	spin_unlock_irq(shost->host_lock);
 
 	/* Start discovery - this should just do CLEAR_LA */
-	lpfc_disc_start(phba);
+	lpfc_disc_start(vport);
 	return 0;
- fail:
+fail:
 	return -ENXIO;
 }
 
 static void
-lpfc_cmpl_els_flogi(struct lpfc_hba * phba,
-		    struct lpfc_iocbq * cmdiocb, struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		    struct lpfc_iocbq *rspiocb)
 {
+	struct lpfc_vport *vport = cmdiocb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
 	IOCB_t *irsp = &rspiocb->iocb;
 	struct lpfc_nodelist *ndlp = cmdiocb->context1;
 	struct lpfc_dmabuf *pcmd = cmdiocb->context2, *prsp;
 	struct serv_parm *sp;
+	struct fc_auth_req auth_req;
+	struct fc_auth_rsp *auth_rsp;
+
 	int rc;
 
 	/* Check to see if link went down during discovery */
-	if (lpfc_els_chk_latt(phba)) {
-		lpfc_nlp_remove(phba, ndlp);
+	if (lpfc_els_chk_latt(vport)) {
+		lpfc_nlp_put(ndlp);
 		goto out;
 	}
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"FLOGI cmpl:      status:x%x/x%x state:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4],
+		vport->port_state);
+
 	if (irsp->ulpStatus) {
 		/* Check for retry */
-		if (lpfc_els_retry(phba, cmdiocb, rspiocb)) {
-			/* ELS command is being retried */
+		if (lpfc_els_retry(phba, cmdiocb, rspiocb))
 			goto out;
-		}
+
 		/* FLOGI failed, so there is no fabric */
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
-		spin_unlock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
+		vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
+		spin_unlock_irq(shost->host_lock);
 
-		/* If private loop, then allow max outstandting els to be
+		/* If private loop, then allow max outstanding els to be
 		 * LPFC_MAX_DISC_THREADS (32). Scanning in the case of no
 		 * alpa map would take too long otherwise.
 		 */
 		if (phba->alpa_map[0] == 0) {
-			phba->cfg_discovery_threads =
-			    LPFC_MAX_DISC_THREADS;
+			vport->cfg_discovery_threads = LPFC_MAX_DISC_THREADS;
 		}
 
 		/* FLOGI failure */
-		lpfc_printf_log(phba,
-				KERN_INFO,
-				LOG_ELS,
-				"%d:0100 FLOGI failure Data: x%x x%x x%x\n",
-				phba->brd_no,
-				irsp->ulpStatus, irsp->un.ulpWord[4],
-				irsp->ulpTimeout);
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+				 "0100 FLOGI failure Data: x%x x%x "
+				 "x%x\n",
+				 irsp->ulpStatus, irsp->un.ulpWord[4],
+				 irsp->ulpTimeout);
 		goto flogifail;
 	}
 
@@ -462,48 +551,113 @@ lpfc_cmpl_els_flogi(struct lpfc_hba * phba,
 	sp = prsp->virt + sizeof(uint32_t);
 
 	/* FLOGI completes successfully */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0101 FLOGI completes sucessfully "
-			"Data: x%x x%x x%x x%x\n",
-			phba->brd_no,
-			irsp->un.ulpWord[4], sp->cmn.e_d_tov,
-			sp->cmn.w2.r_a_tov, sp->cmn.edtovResolution);
-
-	if (phba->hba_state == LPFC_FLOGI) {
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0101 FLOGI completes successfully "
+			 "Data: x%x x%x x%x x%x\n",
+			 irsp->un.ulpWord[4], sp->cmn.e_d_tov,
+			 sp->cmn.w2.r_a_tov, sp->cmn.edtovResolution);
+
+	if (vport->cfg_enable_auth) {
+		auth_req.local_wwpn = wwn_to_u64(vport->fc_portname.u.wwn);
+		auth_req.remote_wwpn = AUTH_FABRIC_WWN;
+		if ((auth_rsp = kmalloc(sizeof(struct fc_auth_rsp),
+			GFP_KERNEL)) == 0) {
+			lpfc_printf_log(vport->phba,
+				KERN_WARNING,
+				LOG_SECURITY,
+				"%d:1030 Security config request: no buffers\n",
+				vport->phba->brd_no);
+			phba->link_state = LPFC_HBA_ERROR;
+			goto flogifail;
+		}
+		vport->auth.auth_mode = FC_AUTHMODE_UNKNOWN;
+		if (lpfc_fc_security_get_config(shost, &auth_req,
+						sizeof(struct fc_auth_req),
+						auth_rsp,
+						sizeof(struct fc_auth_rsp))) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+					 "1052 Unable to get security "
+					 "config.\n");
+			kfree(auth_rsp);
+			goto flogifail;
+		}
+		lpfc_security_config_wait(vport);
+		if (vport->auth.auth_mode == FC_AUTHMODE_ACTIVE) {
+			vport->auth.security_active = 1;
+		} else if (vport->auth.auth_mode == FC_AUTHMODE_PASSIVE) {
+			if (sp->cmn.security)
+				vport->auth.security_active = 1;
+			else {
+				lpfc_printf_vlog(vport, KERN_INFO, LOG_SECURITY,
+						 "1038 Authentication not "
+						 "required by the fabric. "
+						 "Disabled.\n");
+				vport->auth.security_active = 0;
+			}
+		} else {
+			vport->auth.security_active = 0;
+			/*
+			 * If switch require authentication and authentication
+			 * is disabled for this HBA/Fabric port, fail the
+			 * discovery.
+			 */
+			if (sp->cmn.security) {
+				lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+						 "1050 Authentication mode is "
+						 "disabled, but is required by "
+						 "the fabric.\n");
+				goto flogifail;
+			}
+		}
+	} else {
+		vport->auth.security_active = 0;
+		if (sp->cmn.security) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+					 "1055 Authentication parameter is "
+					 "disabled, but is required by "
+					 "the fabric.\n");
+			goto flogifail;
+		}
+	}
+	if (vport->port_state == LPFC_FLOGI) {
 		/*
 		 * If Common Service Parameters indicate Nport
 		 * we are point to point, if Fport we are Fabric.
 		 */
 		if (sp->cmn.fPort)
-			rc = lpfc_cmpl_els_flogi_fabric(phba, ndlp, sp, irsp);
+			rc = lpfc_cmpl_els_flogi_fabric(vport, ndlp, sp, irsp);
 		else
-			rc = lpfc_cmpl_els_flogi_nport(phba, ndlp, sp);
+			rc = lpfc_cmpl_els_flogi_nport(vport, ndlp, sp);
 
 		if (!rc)
 			goto out;
 	}
 
 flogifail:
-	lpfc_nlp_remove(phba, ndlp);
+	lpfc_nlp_put(ndlp);
 
-	if (irsp->ulpStatus != IOSTAT_LOCAL_REJECT ||
-	    (irsp->un.ulpWord[4] != IOERR_SLI_ABORTED &&
-	     irsp->un.ulpWord[4] != IOERR_SLI_DOWN)) {
+	if (!lpfc_error_lost_link(irsp)) {
 		/* FLOGI failed, so just use loop map to make discovery list */
-		lpfc_disc_list_loopmap(phba);
+		lpfc_disc_list_loopmap(vport);
 
 		/* Start discovery */
-		lpfc_disc_start(phba);
+		lpfc_disc_start(vport);
+	} else if (((irsp->ulpStatus != IOSTAT_LOCAL_REJECT) ||
+			((irsp->un.ulpWord[4] != IOERR_SLI_ABORTED) &&
+			(irsp->un.ulpWord[4] != IOERR_SLI_DOWN))) &&
+			(phba->link_state != LPFC_CLEAR_LA)) {
+		/* If FLOGI failed enable link interrupt. */
+		lpfc_issue_clear_la(phba, vport);
 	}
-
 out:
 	lpfc_els_free_iocb(phba, cmdiocb);
 }
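
Several completion handlers in this file (FLOGI, PLOGI, PRLI, ADISC, LOGO) now collapse the open-coded "did we lose the link?" test into lpfc_error_lost_link(irsp). Judging only from the checks being removed in these hunks, IOSTAT_LOCAL_REJECT paired with IOERR_SLI_ABORTED, IOERR_LINK_DOWN or IOERR_SLI_DOWN, the helper presumably reduces to something like the sketch below. This is an inference from the replaced code, not the driver's actual definition; the struct layout and constant values are stand-ins so the sketch compiles on its own.

#include <stdio.h>

/* Arbitrary placeholder values so this compiles standalone; the real
 * constants and the IOCB_t layout live in the driver headers.
 */
#define IOSTAT_LOCAL_REJECT	1
#define IOERR_SLI_ABORTED	2
#define IOERR_LINK_DOWN		3
#define IOERR_SLI_DOWN		4

struct iocb_rsp {		/* stand-in for the relevant bits of IOCB_t */
	unsigned int ulpStatus;
	unsigned int ulpWord4;	/* stands in for irsp->un.ulpWord[4] */
};

/* Inferred shape of the shared lost-link test. */
static int error_lost_link(const struct iocb_rsp *irsp)
{
	return irsp->ulpStatus == IOSTAT_LOCAL_REJECT &&
	       (irsp->ulpWord4 == IOERR_SLI_ABORTED ||
		irsp->ulpWord4 == IOERR_LINK_DOWN ||
		irsp->ulpWord4 == IOERR_SLI_DOWN);
}

int main(void)
{
	struct iocb_rsp rsp = { IOSTAT_LOCAL_REJECT, IOERR_LINK_DOWN };

	printf("lost link: %d\n", error_lost_link(&rsp));
	return 0;
}
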
 
 static int
-lpfc_issue_els_flogi(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
+lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		     uint8_t retry)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	struct serv_parm *sp;
 	IOCB_t *icmd;
 	struct lpfc_iocbq *elsiocb;
@@ -515,9 +669,10 @@ lpfc_issue_els_flogi(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
 
 	pring = &phba->sli.ring[LPFC_ELS_RING];
 
-	cmdsize = (sizeof (uint32_t) + sizeof (struct serv_parm));
-	elsiocb = lpfc_prep_els_iocb(phba, 1, cmdsize, retry, ndlp,
-						 ndlp->nlp_DID, ELS_CMD_FLOGI);
+	cmdsize = (sizeof(uint32_t) + sizeof(struct serv_parm));
+	elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_FLOGI);
+
 	if (!elsiocb)
 		return 1;
 
@@ -526,11 +681,15 @@ lpfc_issue_els_flogi(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
 
 	/* For FLOGI request, remainder of payload is service parameters */
 	*((uint32_t *) (pcmd)) = ELS_CMD_FLOGI;
-	pcmd += sizeof (uint32_t);
-	memcpy(pcmd, &phba->fc_sparam, sizeof (struct serv_parm));
+	pcmd += sizeof(uint32_t);
+	memcpy(pcmd, &vport->fc_sparam, sizeof(struct serv_parm));
 	sp = (struct serv_parm *) pcmd;
 
 	/* Setup CSPs accordingly for Fabric */
+
+	if (vport->cfg_enable_auth)
+		sp->cmn.security = 1;
+
 	sp->cmn.e_d_tov = 0;
 	sp->cmn.w2.r_a_tov = 0;
 	sp->cls1.classValid = 0;
@@ -541,16 +700,32 @@ lpfc_issue_els_flogi(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
 	if (sp->cmn.fcphHigh < FC_PH3)
 		sp->cmn.fcphHigh = FC_PH3;
 
+	if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
+		sp->cmn.request_multiple_Nport = 1;
+
+		/* For FLOGI, Let FLOGI rsp set the NPortID for VPI 0 */
+		icmd->ulpCt_h = 1;
+		icmd->ulpCt_l = 0;
+	}
+
+	if (phba->fc_topology != TOPOLOGY_LOOP) {
+		icmd->un.elsreq64.myID = 0;
+		icmd->un.elsreq64.fl = 1;
+	}
+
 	tmo = phba->fc_ratov;
 	phba->fc_ratov = LPFC_DISC_FLOGI_TMO;
-	lpfc_set_disctmo(phba);
+	lpfc_set_disctmo(vport);
 	phba->fc_ratov = tmo;
 
 	phba->fc_stat.elsXmitFLOGI++;
 	elsiocb->iocb_cmpl = lpfc_cmpl_els_flogi;
-	spin_lock_irq(phba->host->host_lock);
-	rc = lpfc_sli_issue_iocb(phba, pring, elsiocb, 0);
-	spin_unlock_irq(phba->host->host_lock);
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Issue FLOGI:     opt:x%x",
+		phba->sli3_options, 0, 0);
+
+	rc = lpfc_issue_fabric_iocb(phba, elsiocb);
 	if (rc == IOCB_ERROR) {
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
@@ -559,7 +734,7 @@ lpfc_issue_els_flogi(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
 }
 
 int
-lpfc_els_abort_flogi(struct lpfc_hba * phba)
+lpfc_els_abort_flogi(struct lpfc_hba *phba)
 {
 	struct lpfc_sli_ring *pring;
 	struct lpfc_iocbq *iocb, *next_iocb;
@@ -568,8 +743,8 @@ lpfc_els_abort_flogi(struct lpfc_hba * phba)
 
 	/* Abort outstanding I/O on NPort <nlp_DID> */
 	lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-			"%d:0201 Abort outstanding I/O on NPort x%x\n",
-			phba->brd_no, Fabric_DID);
+			"0201 Abort outstanding I/O on NPort x%x\n",
+			Fabric_DID);
 
 	pring = &phba->sli.ring[LPFC_ELS_RING];
 
@@ -577,73 +752,112 @@ lpfc_els_abort_flogi(struct lpfc_hba * phba)
 	 * Check the txcmplq for an iocb that matches the nport the driver is
 	 * searching for.
 	 */
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	list_for_each_entry_safe(iocb, next_iocb, &pring->txcmplq, list) {
 		icmd = &iocb->iocb;
-		if (icmd->ulpCommand == CMD_ELS_REQUEST64_CR) {
+		if (icmd->ulpCommand == CMD_ELS_REQUEST64_CR &&
+		    icmd->un.elsreq64.bdl.ulpIoTag32) {
 			ndlp = (struct lpfc_nodelist *)(iocb->context1);
-			if (ndlp && (ndlp->nlp_DID == Fabric_DID))
+			if (ndlp && (ndlp->nlp_DID == Fabric_DID)) {
 				lpfc_sli_issue_abort_iotag(phba, pring, iocb);
+			}
 		}
 	}
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	return 0;
 }
 
 int
-lpfc_initial_flogi(struct lpfc_hba * phba)
+lpfc_initial_flogi(struct lpfc_vport *vport)
 {
+	struct lpfc_hba *phba = vport->phba;
 	struct lpfc_nodelist *ndlp;
 
+	vport->port_state = LPFC_FLOGI;
+	lpfc_set_disctmo(vport);
+
 	/* First look for the Fabric ndlp */
-	ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, Fabric_DID);
+	ndlp = lpfc_findnode_did(vport, Fabric_DID);
 	if (!ndlp) {
 		/* Cannot find existing Fabric ndlp, so allocate a new one */
 		ndlp = mempool_alloc(phba->nlp_mem_pool, GFP_KERNEL);
 		if (!ndlp)
 			return 0;
-		lpfc_nlp_init(phba, ndlp, Fabric_DID);
+		lpfc_nlp_init(vport, ndlp, Fabric_DID);
 	} else {
-		lpfc_nlp_list(phba, ndlp, NLP_JUST_DQ);
+		lpfc_dequeue_node(vport, ndlp);
 	}
-	if (lpfc_issue_els_flogi(phba, ndlp, 0)) {
-		mempool_free( ndlp, phba->nlp_mem_pool);
+
+	if (lpfc_issue_els_flogi(vport, ndlp, 0)) {
+		lpfc_nlp_put(ndlp);
 	}
 	return 1;
 }
 
+int
+lpfc_initial_fdisc(struct lpfc_vport *vport)
+{
+	struct lpfc_hba *phba = vport->phba;
+	struct lpfc_nodelist *ndlp;
+
+	if (vport->cfg_enable_auth) {
+		lpfc_security_wait();
+		if (lpfc_security_service_state == SECURITY_OFFLINE) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+					 "1049 Security is enabled but no "
+					 "security service!\n");
+			vport->auth.auth_mode = FC_AUTHMODE_UNKNOWN;
+			return 0;
+		}
+	}
+
+	/* First look for the Fabric ndlp */
+	ndlp = lpfc_findnode_did(vport, Fabric_DID);
+	if (!ndlp) {
+		/* Cannot find existing Fabric ndlp, so allocate a new one */
+		ndlp = mempool_alloc(phba->nlp_mem_pool, GFP_KERNEL);
+		if (!ndlp)
+			return 0;
+		lpfc_nlp_init(vport, ndlp, Fabric_DID);
+	} else {
+		lpfc_dequeue_node(vport, ndlp);
+	}
+	if (lpfc_issue_els_fdisc(vport, ndlp, 0)) {
+		lpfc_nlp_put(ndlp);
+	}
+	return 1;
+}
 static void
-lpfc_more_plogi(struct lpfc_hba * phba)
+lpfc_more_plogi(struct lpfc_vport *vport)
 {
 	int sentplogi;
 
-	if (phba->num_disc_nodes)
-		phba->num_disc_nodes--;
+	if (vport->num_disc_nodes)
+		vport->num_disc_nodes--;
 
 	/* Continue discovery with <num_disc_nodes> PLOGIs to go */
-	lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-			"%d:0232 Continue discovery with %d PLOGIs to go "
-			"Data: x%x x%x x%x\n",
-			phba->brd_no, phba->num_disc_nodes, phba->fc_plogi_cnt,
-			phba->fc_flag, phba->hba_state);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0232 Continue discovery with %d PLOGIs to go "
+			 "Data: x%x x%x x%x\n",
+			 vport->num_disc_nodes, vport->fc_plogi_cnt,
+			 vport->fc_flag, vport->port_state);
 	/* Check to see if there are more PLOGIs to be sent */
-	if (phba->fc_flag & FC_NLP_MORE) {
-		/* go thru NPR list and issue any remaining ELS PLOGIs */
-		sentplogi = lpfc_els_disc_plogi(phba);
-	}
+	if (vport->fc_flag & FC_NLP_MORE)
+		/* go thru NPR nodes and issue any remaining ELS PLOGIs */
+		sentplogi = lpfc_els_disc_plogi(vport);
+
 	return;
 }
 
 static struct lpfc_nodelist *
-lpfc_plogi_confirm_nport(struct lpfc_hba * phba, struct lpfc_dmabuf *prsp,
+lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
 			 struct lpfc_nodelist *ndlp)
 {
+	struct lpfc_vport    *vport = ndlp->vport;
 	struct lpfc_nodelist *new_ndlp;
-	uint32_t *lp;
 	struct serv_parm *sp;
-	uint8_t name[sizeof (struct lpfc_name)];
+	uint8_t  name[sizeof(struct lpfc_name)];
 	uint32_t rc;
 
 	/* Fabric nodes can have the same WWPN so we don't bother searching
@@ -652,90 +866,100 @@ lpfc_plogi_confirm_nport(struct lpfc_hba * phba, struct lpfc_dmabuf *prsp,
 	if (ndlp->nlp_type & NLP_FABRIC)
 		return ndlp;
 
-	lp = (uint32_t *) prsp->virt;
-	sp = (struct serv_parm *) ((uint8_t *) lp + sizeof (uint32_t));
-	memset(name, 0, sizeof (struct lpfc_name));
+	sp = (struct serv_parm *) ((uint8_t *) prsp + sizeof(uint32_t));
+	memset(name, 0, sizeof(struct lpfc_name));
 
-	/* Now we to find out if the NPort we are logging into, matches the WWPN
+	/* Now we find out if the NPort we are logging into, matches the WWPN
 	 * we have for that ndlp. If not, we have some work to do.
 	 */
-	new_ndlp = lpfc_findnode_wwpn(phba, NLP_SEARCH_ALL, &sp->portName);
+	new_ndlp = lpfc_findnode_wwpn(vport, &sp->portName);
 
 	if (new_ndlp == ndlp)
 		return ndlp;
 
 	if (!new_ndlp) {
-		rc =
-		   memcmp(&ndlp->nlp_portname, name, sizeof(struct lpfc_name));
+		rc = memcmp(&ndlp->nlp_portname, name,
+			    sizeof(struct lpfc_name));
 		if (!rc)
 			return ndlp;
 		new_ndlp = mempool_alloc(phba->nlp_mem_pool, GFP_ATOMIC);
 		if (!new_ndlp)
 			return ndlp;
 
-		lpfc_nlp_init(phba, new_ndlp, ndlp->nlp_DID);
+		lpfc_nlp_init(vport, new_ndlp, ndlp->nlp_DID);
 	}
 
-	lpfc_unreg_rpi(phba, new_ndlp);
+	lpfc_unreg_rpi(vport, new_ndlp);
 	new_ndlp->nlp_DID = ndlp->nlp_DID;
 	new_ndlp->nlp_prev_state = ndlp->nlp_prev_state;
-	new_ndlp->nlp_state = ndlp->nlp_state;
-	lpfc_nlp_list(phba, new_ndlp, ndlp->nlp_flag & NLP_LIST_MASK);
+	lpfc_nlp_set_state(vport, new_ndlp, ndlp->nlp_state);
 
-	/* Move this back to NPR list */
+	/* Move this back to NPR state */
 	if (memcmp(&ndlp->nlp_portname, name, sizeof(struct lpfc_name)) == 0) {
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+		/* The new_ndlp is replacing ndlp totally, so we need
+		 * to put ndlp on UNUSED list and try to free it.
+		 */
+		lpfc_drop_node(vport, ndlp);
 	}
 	else {
-		lpfc_unreg_rpi(phba, ndlp);
+		lpfc_unreg_rpi(vport, ndlp);
 		ndlp->nlp_DID = 0; /* Two ndlps cannot have the same did */
-		ndlp->nlp_state = NLP_STE_NPR_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
 	}
 	return new_ndlp;
 }
 
 static void
-lpfc_cmpl_els_plogi(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		    struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		    struct lpfc_iocbq *rspiocb)
 {
+	struct lpfc_vport *vport = cmdiocb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
 	IOCB_t *irsp;
 	struct lpfc_nodelist *ndlp;
 	struct lpfc_dmabuf *prsp;
 	int disc, rc, did, type;
 
-
 	/* we pass cmdiocb to state machine which needs rspiocb as well */
 	cmdiocb->context_un.rsp_iocb = rspiocb;
 
 	irsp = &rspiocb->iocb;
-	ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL,
-						irsp->un.elsreq64.remoteID);
-	if (!ndlp)
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"PLOGI cmpl:      status:x%x/x%x did:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4],
+		irsp->un.elsreq64.remoteID);
+
+	ndlp = lpfc_findnode_did(vport, irsp->un.elsreq64.remoteID);
+	if (!ndlp) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0136 PLOGI completes to NPort x%x "
+				 "with no ndlp. Data: x%x x%x x%x\n",
+				 irsp->un.elsreq64.remoteID,
+				 irsp->ulpStatus, irsp->un.ulpWord[4],
+				 irsp->ulpIoTag);
 		goto out;
+	}
 
 	/* Since ndlp can be freed in the disc state machine, note if this node
 	 * is being used during discovery.
 	 */
+	spin_lock_irq(shost->host_lock);
 	disc = (ndlp->nlp_flag & NLP_NPR_2B_DISC);
-	spin_lock_irq(phba->host->host_lock);
 	ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 	rc   = 0;
 
 	/* PLOGI completes to NPort <nlp_DID> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0102 PLOGI completes to NPort x%x "
-			"Data: x%x x%x x%x x%x x%x\n",
-			phba->brd_no, ndlp->nlp_DID, irsp->ulpStatus,
-			irsp->un.ulpWord[4], irsp->ulpTimeout, disc,
-			phba->num_disc_nodes);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0102 PLOGI completes to NPort x%x "
+			 "Data: x%x x%x x%x x%x x%x\n",
+			 ndlp->nlp_DID, irsp->ulpStatus, irsp->un.ulpWord[4],
+			 irsp->ulpTimeout, disc, vport->num_disc_nodes);
 	/* Check to see if link went down during discovery */
-	if (lpfc_els_chk_latt(phba)) {
-		spin_lock_irq(phba->host->host_lock);
+	if (lpfc_els_chk_latt(vport)) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_NPR_2B_DISC;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		goto out;
 	}
 
@@ -748,56 +972,52 @@ lpfc_cmpl_els_plogi(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 		if (lpfc_els_retry(phba, cmdiocb, rspiocb)) {
 			/* ELS command is being retried */
 			if (disc) {
-				spin_lock_irq(phba->host->host_lock);
+				spin_lock_irq(shost->host_lock);
 				ndlp->nlp_flag |= NLP_NPR_2B_DISC;
-				spin_unlock_irq(phba->host->host_lock);
+				spin_unlock_irq(shost->host_lock);
 			}
 			goto out;
 		}
-
 		/* PLOGI failed */
 		/* Do not call DSM for lpfc_els_abort'ed ELS cmds */
-		if ((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) &&
-		   ((irsp->un.ulpWord[4] == IOERR_SLI_ABORTED) ||
-		   (irsp->un.ulpWord[4] == IOERR_LINK_DOWN) ||
-		   (irsp->un.ulpWord[4] == IOERR_SLI_DOWN))) {
+		if (lpfc_error_lost_link(irsp)) {
 			rc = NLP_STE_FREED_NODE;
 		} else {
-			rc = lpfc_disc_state_machine(phba, ndlp, cmdiocb,
-					NLP_EVT_CMPL_PLOGI);
+			rc = lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+						     NLP_EVT_CMPL_PLOGI);
 		}
 	} else {
 		/* Good status, call state machine */
 		prsp = list_entry(((struct lpfc_dmabuf *)
-			cmdiocb->context2)->list.next,
-			struct lpfc_dmabuf, list);
-		ndlp = lpfc_plogi_confirm_nport(phba, prsp, ndlp);
-		rc = lpfc_disc_state_machine(phba, ndlp, cmdiocb,
-					NLP_EVT_CMPL_PLOGI);
+				   cmdiocb->context2)->list.next,
+				  struct lpfc_dmabuf, list);
+		ndlp = lpfc_plogi_confirm_nport(phba, prsp->virt, ndlp);
+		rc = lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+					     NLP_EVT_CMPL_PLOGI);
 	}
 
-	if (disc && phba->num_disc_nodes) {
+	if (disc && vport->num_disc_nodes) {
 		/* Check to see if there are more PLOGIs to be sent */
-		lpfc_more_plogi(phba);
+		lpfc_more_plogi(vport);
 
-		if (phba->num_disc_nodes == 0) {
-			spin_lock_irq(phba->host->host_lock);
-			phba->fc_flag &= ~FC_NDISC_ACTIVE;
-			spin_unlock_irq(phba->host->host_lock);
+		if (vport->num_disc_nodes == 0) {
+			spin_lock_irq(shost->host_lock);
+			vport->fc_flag &= ~FC_NDISC_ACTIVE;
+			spin_unlock_irq(shost->host_lock);
 
-			lpfc_can_disctmo(phba);
-			if (phba->fc_flag & FC_RSCN_MODE) {
+			lpfc_can_disctmo(vport);
+			if (vport->fc_flag & FC_RSCN_MODE) {
 				/*
 				 * Check to see if more RSCNs came in while
 				 * we were processing this one.
 				 */
-				if ((phba->fc_rscn_id_cnt == 0) &&
-			    	(!(phba->fc_flag & FC_RSCN_DISCOVERY))) {
-					spin_lock_irq(phba->host->host_lock);
-					phba->fc_flag &= ~FC_RSCN_MODE;
-					spin_unlock_irq(phba->host->host_lock);
+				if ((vport->fc_rscn_id_cnt == 0) &&
+				    (!(vport->fc_flag & FC_RSCN_DISCOVERY))) {
+					spin_lock_irq(shost->host_lock);
+					vport->fc_flag &= ~FC_RSCN_MODE;
+					spin_unlock_irq(shost->host_lock);
 				} else {
-					lpfc_els_handle_rscn(phba);
+					lpfc_els_handle_rscn(vport);
 				}
 			}
 		}
@@ -809,22 +1029,28 @@ out:
 }
 
 int
-lpfc_issue_els_plogi(struct lpfc_hba * phba, uint32_t did, uint8_t retry)
+lpfc_issue_els_plogi(struct lpfc_vport *vport, uint32_t did, uint8_t retry)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	struct serv_parm *sp;
 	IOCB_t *icmd;
+	struct lpfc_nodelist *ndlp;
 	struct lpfc_iocbq *elsiocb;
 	struct lpfc_sli_ring *pring;
 	struct lpfc_sli *psli;
 	uint8_t *pcmd;
 	uint16_t cmdsize;
+	int ret;
 
 	psli = &phba->sli;
 	pring = &psli->ring[LPFC_ELS_RING];	/* ELS ring */
 
-	cmdsize = (sizeof (uint32_t) + sizeof (struct serv_parm));
-	elsiocb = lpfc_prep_els_iocb(phba, 1, cmdsize, retry, NULL, did,
-								ELS_CMD_PLOGI);
+	ndlp = lpfc_findnode_did(vport, did);
+	/* If ndlp is not NULL, we will bump the reference count on it */
+
+	cmdsize = (sizeof(uint32_t) + sizeof(struct serv_parm));
+	elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp, did,
+				     ELS_CMD_PLOGI);
 	if (!elsiocb)
 		return 1;
 
@@ -833,8 +1059,8 @@ lpfc_issue_els_plogi(struct lpfc_hba * phba, uint32_t did, uint8_t retry)
 
 	/* For PLOGI request, remainder of payload is service parameters */
 	*((uint32_t *) (pcmd)) = ELS_CMD_PLOGI;
-	pcmd += sizeof (uint32_t);
-	memcpy(pcmd, &phba->fc_sparam, sizeof (struct serv_parm));
+	pcmd += sizeof(uint32_t);
+	memcpy(pcmd, &vport->fc_sparam, sizeof(struct serv_parm));
 	sp = (struct serv_parm *) pcmd;
 
 	if (sp->cmn.fcphLow < FC_PH_4_3)
@@ -843,22 +1069,27 @@ lpfc_issue_els_plogi(struct lpfc_hba * phba, uint32_t did, uint8_t retry)
 	if (sp->cmn.fcphHigh < FC_PH3)
 		sp->cmn.fcphHigh = FC_PH3;
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Issue PLOGI:     did:x%x",
+		did, 0, 0);
+
 	phba->fc_stat.elsXmitPLOGI++;
 	elsiocb->iocb_cmpl = lpfc_cmpl_els_plogi;
-	spin_lock_irq(phba->host->host_lock);
-	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
-		spin_unlock_irq(phba->host->host_lock);
+	ret = lpfc_sli_issue_iocb(phba, pring, elsiocb, 0);
+
+	if (ret == IOCB_ERROR) {
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
 	}
-	spin_unlock_irq(phba->host->host_lock);
 	return 0;
 }
 
 static void
-lpfc_cmpl_els_prli(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		   struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		   struct lpfc_iocbq *rspiocb)
 {
+	struct lpfc_vport *vport = cmdiocb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
 	IOCB_t *irsp;
 	struct lpfc_sli *psli;
 	struct lpfc_nodelist *ndlp;
@@ -869,21 +1100,24 @@ lpfc_cmpl_els_prli(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 
 	irsp = &(rspiocb->iocb);
 	ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~NLP_PRLI_SND;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"PRLI cmpl:       status:x%x/x%x did:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4],
+		ndlp->nlp_DID);
 	/* PRLI completes to NPort <nlp_DID> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0103 PRLI completes to NPort x%x "
-			"Data: x%x x%x x%x x%x\n",
-			phba->brd_no, ndlp->nlp_DID, irsp->ulpStatus,
-			irsp->un.ulpWord[4], irsp->ulpTimeout,
-			phba->num_disc_nodes);
-
-	phba->fc_prli_sent--;
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0103 PRLI completes to NPort x%x "
+			 "Data: x%x x%x x%x x%x\n",
+			 ndlp->nlp_DID, irsp->ulpStatus, irsp->un.ulpWord[4],
+			 irsp->ulpTimeout, vport->num_disc_nodes);
+
+	vport->fc_prli_sent--;
 	/* Check to see if link went down during discovery */
-	if (lpfc_els_chk_latt(phba))
+	if (lpfc_els_chk_latt(vport))
 		goto out;
 
 	if (irsp->ulpStatus) {
@@ -894,18 +1128,16 @@ lpfc_cmpl_els_prli(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 		}
 		/* PRLI failed */
 		/* Do not call DSM for lpfc_els_abort'ed ELS cmds */
-		if ((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) &&
-		   ((irsp->un.ulpWord[4] == IOERR_SLI_ABORTED) ||
-		   (irsp->un.ulpWord[4] == IOERR_LINK_DOWN) ||
-		   (irsp->un.ulpWord[4] == IOERR_SLI_DOWN))) {
+		if (lpfc_error_lost_link(irsp)) {
 			goto out;
 		} else {
-			lpfc_disc_state_machine(phba, ndlp, cmdiocb,
-					NLP_EVT_CMPL_PRLI);
+			lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+						NLP_EVT_CMPL_PRLI);
 		}
 	} else {
 		/* Good status, call state machine */
-		lpfc_disc_state_machine(phba, ndlp, cmdiocb, NLP_EVT_CMPL_PRLI);
+		lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+					NLP_EVT_CMPL_PRLI);
 	}
 
 out:
@@ -914,9 +1146,11 @@ out:
 }
 
 int
-lpfc_issue_els_prli(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
+lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		    uint8_t retry)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba *phba = vport->phba;
 	PRLI *npr;
 	IOCB_t *icmd;
 	struct lpfc_iocbq *elsiocb;
@@ -928,9 +1162,9 @@ lpfc_issue_els_prli(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
 	psli = &phba->sli;
 	pring = &psli->ring[LPFC_ELS_RING];	/* ELS ring */
 
-	cmdsize = (sizeof (uint32_t) + sizeof (PRLI));
-	elsiocb = lpfc_prep_els_iocb(phba, 1, cmdsize, retry, ndlp,
-					ndlp->nlp_DID, ELS_CMD_PRLI);
+	cmdsize = (sizeof(uint32_t) + sizeof(PRLI));
+	elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_PRLI);
 	if (!elsiocb)
 		return 1;
 
@@ -938,9 +1172,9 @@ lpfc_issue_els_prli(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
 	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 
 	/* For PRLI request, remainder of payload is service parameters */
-	memset(pcmd, 0, (sizeof (PRLI) + sizeof (uint32_t)));
+	memset(pcmd, 0, (sizeof(PRLI) + sizeof(uint32_t)));
 	*((uint32_t *) (pcmd)) = ELS_CMD_PRLI;
-	pcmd += sizeof (uint32_t);
+	pcmd += sizeof(uint32_t);
 
 	/* For PRLI, remainder of payload is PRLI parameter page */
 	npr = (PRLI *) pcmd;
@@ -960,81 +1194,85 @@ lpfc_issue_els_prli(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
 	npr->prliType = PRLI_FCP_TYPE;
 	npr->initiatorFunc = 1;
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Issue PRLI:      did:x%x",
+			/* Because we asked f/w for NPIV it still expects us
+			 * to call reg_vnpid at least for the physical host */
 	phba->fc_stat.elsXmitPRLI++;
 	elsiocb->iocb_cmpl = lpfc_cmpl_els_prli;
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag |= NLP_PRLI_SND;
+	spin_unlock_irq(shost->host_lock);
 	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag &= ~NLP_PRLI_SND;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
 	}
-	spin_unlock_irq(phba->host->host_lock);
-	phba->fc_prli_sent++;
+	vport->fc_prli_sent++;
 	return 0;
 }
 
 static void
-lpfc_more_adisc(struct lpfc_hba * phba)
+lpfc_more_adisc(struct lpfc_vport *vport)
 {
 	int sentadisc;
 
-	if (phba->num_disc_nodes)
-		phba->num_disc_nodes--;
-
+	if (vport->num_disc_nodes)
+		vport->num_disc_nodes--;
 	/* Continue discovery with <num_disc_nodes> ADISCs to go */
-	lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-			"%d:0210 Continue discovery with %d ADISCs to go "
-			"Data: x%x x%x x%x\n",
-			phba->brd_no, phba->num_disc_nodes, phba->fc_adisc_cnt,
-			phba->fc_flag, phba->hba_state);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0210 Continue discovery with %d ADISCs to go "
+			 "Data: x%x x%x x%x\n",
+			 vport->num_disc_nodes, vport->fc_adisc_cnt,
+			 vport->fc_flag, vport->port_state);
 	/* Check to see if there are more ADISCs to be sent */
-	if (phba->fc_flag & FC_NLP_MORE) {
-		lpfc_set_disctmo(phba);
-
-		/* go thru NPR list and issue any remaining ELS ADISCs */
-		sentadisc = lpfc_els_disc_adisc(phba);
+	if (vport->fc_flag & FC_NLP_MORE) {
+		lpfc_set_disctmo(vport);
+		/* go thru NPR nodes and issue any remaining ELS ADISCs */
+		sentadisc = lpfc_els_disc_adisc(vport);
 	}
 	return;
 }
 
 static void
-lpfc_rscn_disc(struct lpfc_hba * phba)
+lpfc_rscn_disc(struct lpfc_vport *vport)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	lpfc_can_disctmo(vport);
+
 	/* RSCN discovery */
-	/* go thru NPR list and issue ELS PLOGIs */
-	if (phba->fc_npr_cnt) {
-		if (lpfc_els_disc_plogi(phba))
+	/* go thru NPR nodes and issue ELS PLOGIs */
+	if (vport->fc_npr_cnt)
+		if (lpfc_els_disc_plogi(vport))
 			return;
-	}
-	if (phba->fc_flag & FC_RSCN_MODE) {
+
+	if (vport->fc_flag & FC_RSCN_MODE) {
 		/* Check to see if more RSCNs came in while we were
 		 * processing this one.
 		 */
-		if ((phba->fc_rscn_id_cnt == 0) &&
-		    (!(phba->fc_flag & FC_RSCN_DISCOVERY))) {
-			spin_lock_irq(phba->host->host_lock);
-			phba->fc_flag &= ~FC_RSCN_MODE;
-			spin_unlock_irq(phba->host->host_lock);
+		if ((vport->fc_rscn_id_cnt == 0) &&
+		    (!(vport->fc_flag & FC_RSCN_DISCOVERY))) {
+			spin_lock_irq(shost->host_lock);
+			vport->fc_flag &= ~FC_RSCN_MODE;
+			spin_unlock_irq(shost->host_lock);
 		} else {
-			lpfc_els_handle_rscn(phba);
+			lpfc_els_handle_rscn(vport);
 		}
 	}
 }
 
 static void
-lpfc_cmpl_els_adisc(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		    struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_els_adisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		    struct lpfc_iocbq *rspiocb)
 {
+	struct lpfc_vport *vport = cmdiocb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
 	IOCB_t *irsp;
-	struct lpfc_sli *psli;
 	struct lpfc_nodelist *ndlp;
-	LPFC_MBOXQ_t *mbox;
-	int disc, rc;
-
-	psli = &phba->sli;
+	int  disc;
 
 	/* we pass cmdiocb to state machine which needs rspiocb as well */
 	cmdiocb->context_un.rsp_iocb = rspiocb;
@@ -1042,27 +1280,29 @@ lpfc_cmpl_els_adisc(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 	irsp = &(rspiocb->iocb);
 	ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"ADISC cmpl:      status:x%x/x%x did:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4],
+		ndlp->nlp_DID);
+
 	/* Since ndlp can be freed in the disc state machine, note if this node
 	 * is being used during discovery.
 	 */
+	spin_lock_irq(shost->host_lock);
 	disc = (ndlp->nlp_flag & NLP_NPR_2B_DISC);
-	spin_lock_irq(phba->host->host_lock);
 	ndlp->nlp_flag &= ~(NLP_ADISC_SND | NLP_NPR_2B_DISC);
-	spin_unlock_irq(phba->host->host_lock);
-
+	spin_unlock_irq(shost->host_lock);
 	/* ADISC completes to NPort <nlp_DID> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0104 ADISC completes to NPort x%x "
-			"Data: x%x x%x x%x x%x x%x\n",
-			phba->brd_no, ndlp->nlp_DID, irsp->ulpStatus,
-			irsp->un.ulpWord[4], irsp->ulpTimeout, disc,
-			phba->num_disc_nodes);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0104 ADISC completes to NPort x%x "
+			 "Data: x%x x%x x%x x%x x%x\n",
+			 ndlp->nlp_DID, irsp->ulpStatus, irsp->un.ulpWord[4],
+			 irsp->ulpTimeout, disc, vport->num_disc_nodes);
 	/* Check to see if link went down during discovery */
-	if (lpfc_els_chk_latt(phba)) {
-		spin_lock_irq(phba->host->host_lock);
+	if (lpfc_els_chk_latt(vport)) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_NPR_2B_DISC;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		goto out;
 	}
 
@@ -1071,67 +1311,68 @@ lpfc_cmpl_els_adisc(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 		if (lpfc_els_retry(phba, cmdiocb, rspiocb)) {
 			/* ELS command is being retried */
 			if (disc) {
-				spin_lock_irq(phba->host->host_lock);
+				spin_lock_irq(shost->host_lock);
 				ndlp->nlp_flag |= NLP_NPR_2B_DISC;
-				spin_unlock_irq(phba->host->host_lock);
-				lpfc_set_disctmo(phba);
+				spin_unlock_irq(shost->host_lock);
+				lpfc_set_disctmo(vport);
 			}
 			goto out;
 		}
 		/* ADISC failed */
 		/* Do not call DSM for lpfc_els_abort'ed ELS cmds */
-		if ((irsp->ulpStatus != IOSTAT_LOCAL_REJECT) ||
-		   ((irsp->un.ulpWord[4] != IOERR_SLI_ABORTED) &&
-		   (irsp->un.ulpWord[4] != IOERR_LINK_DOWN) &&
-		   (irsp->un.ulpWord[4] != IOERR_SLI_DOWN))) {
-			lpfc_disc_state_machine(phba, ndlp, cmdiocb,
-					NLP_EVT_CMPL_ADISC);
+		if (!lpfc_error_lost_link(irsp)) {
+			lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+						NLP_EVT_CMPL_ADISC);
 		}
 	} else {
 		/* Good status, call state machine */
-		lpfc_disc_state_machine(phba, ndlp, cmdiocb,
+		lpfc_disc_state_machine(vport, ndlp, cmdiocb,
 					NLP_EVT_CMPL_ADISC);
 	}
 
-	if (disc && phba->num_disc_nodes) {
+	if (disc && vport->num_disc_nodes) {
 		/* Check to see if there are more ADISCs to be sent */
-		lpfc_more_adisc(phba);
+		lpfc_more_adisc(vport);
 
 		/* Check to see if we are done with ADISC authentication */
-		if (phba->num_disc_nodes == 0) {
-			lpfc_can_disctmo(phba);
-			/* If we get here, there is nothing left to wait for */
-			if ((phba->hba_state < LPFC_HBA_READY) &&
-			    (phba->hba_state != LPFC_CLEAR_LA)) {
-				/* Link up discovery */
-				if ((mbox = mempool_alloc(phba->mbox_mem_pool,
-							  GFP_KERNEL))) {
-					phba->hba_state = LPFC_CLEAR_LA;
-					lpfc_clear_la(phba, mbox);
-					mbox->mbox_cmpl =
-					    lpfc_mbx_cmpl_clear_la;
-					rc = lpfc_sli_issue_mbox
-						(phba, mbox,
-						 (MBX_NOWAIT | MBX_STOP_IOCB));
-					if (rc == MBX_NOT_FINISHED) {
-						mempool_free(mbox,
-						     phba->mbox_mem_pool);
-						lpfc_disc_flush_list(phba);
-						psli->ring[(psli->extra_ring)].
-						    flag &=
-						    ~LPFC_STOP_IOCB_EVENT;
-						psli->ring[(psli->fcp_ring)].
-						    flag &=
-						    ~LPFC_STOP_IOCB_EVENT;
-						psli->ring[(psli->next_ring)].
-						    flag &=
-						    ~LPFC_STOP_IOCB_EVENT;
-						phba->hba_state =
-						    LPFC_HBA_READY;
+		if (vport->num_disc_nodes == 0) {
+			/* If we get here, there is nothing left to ADISC */
+			/*
+			 * For NPIV, cmpl_reg_vpi will set port_state to READY,
+			 * and continue discovery.
+			 */
+			if ((phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) &&
+			   !(vport->fc_flag & FC_RSCN_MODE)) {
+				lpfc_issue_reg_vpi(phba, vport);
+				goto out;
+			}
+			/*
+			 * For SLI2, we need to set port_state to READY
+			 * and continue discovery.
+			 */
+			if (vport->port_state < LPFC_VPORT_READY) {
+				/* If we get here, there is nothing to ADISC */
+				if (vport->port_type == LPFC_PHYSICAL_PORT)
+					lpfc_issue_clear_la(phba, vport);
+
+				if (!(vport->fc_flag & FC_ABORT_DISCOVERY)) {
+					vport->num_disc_nodes = 0;
+					/* go thru NPR list, issue ELS PLOGIs */
+					if (vport->fc_npr_cnt)
+						lpfc_els_disc_plogi(vport);
+
+					if (!vport->num_disc_nodes) {
+						spin_lock_irq(shost->host_lock);
+						vport->fc_flag &=
+							~FC_NDISC_ACTIVE;
+						spin_unlock_irq(
+							shost->host_lock);
+						lpfc_can_disctmo(vport);
 					}
 				}
+				vport->port_state = LPFC_VPORT_READY;
 			} else {
-				lpfc_rscn_disc(phba);
+				lpfc_rscn_disc(vport);
 			}
 		}
 	}
@@ -1141,23 +1382,22 @@ out:
 }
 
 int
-lpfc_issue_els_adisc(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
+lpfc_issue_els_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		     uint8_t retry)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
 	ADISC *ap;
 	IOCB_t *icmd;
 	struct lpfc_iocbq *elsiocb;
-	struct lpfc_sli_ring *pring;
-	struct lpfc_sli *psli;
+	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_sli_ring *pring = &psli->ring[LPFC_ELS_RING];
 	uint8_t *pcmd;
 	uint16_t cmdsize;
 
-	psli = &phba->sli;
-	pring = &psli->ring[LPFC_ELS_RING];	/* ELS ring */
-
-	cmdsize = (sizeof (uint32_t) + sizeof (ADISC));
-	elsiocb = lpfc_prep_els_iocb(phba, 1, cmdsize, retry, ndlp,
-						ndlp->nlp_DID, ELS_CMD_ADISC);
+	cmdsize = (sizeof(uint32_t) + sizeof(ADISC));
+	elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_ADISC);
 	if (!elsiocb)
 		return 1;
 
@@ -1166,81 +1406,94 @@ lpfc_issue_els_adisc(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
 
 	/* For ADISC request, remainder of payload is service parameters */
 	*((uint32_t *) (pcmd)) = ELS_CMD_ADISC;
-	pcmd += sizeof (uint32_t);
+	pcmd += sizeof(uint32_t);
 
 	/* Fill in ADISC payload */
 	ap = (ADISC *) pcmd;
 	ap->hardAL_PA = phba->fc_pref_ALPA;
-	memcpy(&ap->portName, &phba->fc_portname, sizeof (struct lpfc_name));
-	memcpy(&ap->nodeName, &phba->fc_nodename, sizeof (struct lpfc_name));
-	ap->DID = be32_to_cpu(phba->fc_myDID);
+	memcpy(&ap->portName, &vport->fc_portname, sizeof(struct lpfc_name));
+	memcpy(&ap->nodeName, &vport->fc_nodename, sizeof(struct lpfc_name));
+	ap->DID = be32_to_cpu(vport->fc_myDID);
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Issue ADISC:     did:x%x",
+		ndlp->nlp_DID, 0, 0);
 
 	phba->fc_stat.elsXmitADISC++;
 	elsiocb->iocb_cmpl = lpfc_cmpl_els_adisc;
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag |= NLP_ADISC_SND;
+	spin_unlock_irq(shost->host_lock);
 	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag &= ~NLP_ADISC_SND;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
 	}
-	spin_unlock_irq(phba->host->host_lock);
 	return 0;
 }
 
 static void
-lpfc_cmpl_els_logo(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		   struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		   struct lpfc_iocbq *rspiocb)
 {
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+	struct lpfc_vport *vport = ndlp->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
 	IOCB_t *irsp;
 	struct lpfc_sli *psli;
-	struct lpfc_nodelist *ndlp;
 
 	psli = &phba->sli;
 	/* we pass cmdiocb to state machine which needs rspiocb as well */
 	cmdiocb->context_un.rsp_iocb = rspiocb;
 
 	irsp = &(rspiocb->iocb);
-	ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~NLP_LOGO_SND;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"LOGO cmpl:       status:x%x/x%x did:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4],
+		ndlp->nlp_DID);
 	/* LOGO completes to NPort <nlp_DID> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0105 LOGO completes to NPort x%x "
-			"Data: x%x x%x x%x x%x\n",
-			phba->brd_no, ndlp->nlp_DID, irsp->ulpStatus,
-			irsp->un.ulpWord[4], irsp->ulpTimeout,
-			phba->num_disc_nodes);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0105 LOGO completes to NPort x%x "
+			 "Data: x%x x%x x%x x%x\n",
+			 ndlp->nlp_DID, irsp->ulpStatus, irsp->un.ulpWord[4],
+			 irsp->ulpTimeout, vport->num_disc_nodes);
 	/* Check to see if link went down during discovery */
-	if (lpfc_els_chk_latt(phba))
+	if (lpfc_els_chk_latt(vport))
+		goto out;
+
+	if (ndlp->nlp_flag & NLP_TARGET_REMOVE) {
+	        /* NLP_EVT_DEVICE_RM should unregister the RPI
+		 * which should abort all outstanding IOs.
+		 */
+		lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+					NLP_EVT_DEVICE_RM);
 		goto out;
+	}
 
 	if (irsp->ulpStatus) {
 		/* Check for retry */
-		if (lpfc_els_retry(phba, cmdiocb, rspiocb)) {
+		if (lpfc_els_retry(phba, cmdiocb, rspiocb))
 			/* ELS command is being retried */
 			goto out;
-		}
 		/* LOGO failed */
 		/* Do not call DSM for lpfc_els_abort'ed ELS cmds */
-		if ((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) &&
-		   ((irsp->un.ulpWord[4] == IOERR_SLI_ABORTED) ||
-		   (irsp->un.ulpWord[4] == IOERR_LINK_DOWN) ||
-		   (irsp->un.ulpWord[4] == IOERR_SLI_DOWN))) {
+		if (lpfc_error_lost_link(irsp))
 			goto out;
-		} else {
-			lpfc_disc_state_machine(phba, ndlp, cmdiocb,
-					NLP_EVT_CMPL_LOGO);
-		}
+		else
+			lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+						NLP_EVT_CMPL_LOGO);
 	} else {
 		/* Good status, call state machine.
 		 * This will unregister the rpi if needed.
 		 */
-		lpfc_disc_state_machine(phba, ndlp, cmdiocb, NLP_EVT_CMPL_LOGO);
+		lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+					NLP_EVT_CMPL_LOGO);
 	}
 
 out:
@@ -1249,75 +1502,94 @@ out:
 }
 
 int
-lpfc_issue_els_logo(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
+lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		    uint8_t retry)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
 	IOCB_t *icmd;
 	struct lpfc_iocbq *elsiocb;
 	struct lpfc_sli_ring *pring;
 	struct lpfc_sli *psli;
 	uint8_t *pcmd;
 	uint16_t cmdsize;
+	int rc;
 
 	psli = &phba->sli;
 	pring = &psli->ring[LPFC_ELS_RING];
 
-	cmdsize = (2 * sizeof (uint32_t)) + sizeof (struct lpfc_name);
-	elsiocb = lpfc_prep_els_iocb(phba, 1, cmdsize, retry, ndlp,
-						ndlp->nlp_DID, ELS_CMD_LOGO);
+	spin_lock_irq(shost->host_lock);
+	if (ndlp->nlp_flag & NLP_LOGO_SND) {
+		spin_unlock_irq(shost->host_lock);
+		return 0;
+	}
+	spin_unlock_irq(shost->host_lock);
+
+	cmdsize = (2 * sizeof(uint32_t)) + sizeof(struct lpfc_name);
+	elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_LOGO);
 	if (!elsiocb)
 		return 1;
 
 	icmd = &elsiocb->iocb;
 	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 	*((uint32_t *) (pcmd)) = ELS_CMD_LOGO;
-	pcmd += sizeof (uint32_t);
+	pcmd += sizeof(uint32_t);
 
 	/* Fill in LOGO payload */
-	*((uint32_t *) (pcmd)) = be32_to_cpu(phba->fc_myDID);
-	pcmd += sizeof (uint32_t);
-	memcpy(pcmd, &phba->fc_portname, sizeof (struct lpfc_name));
+	*((uint32_t *) (pcmd)) = be32_to_cpu(vport->fc_myDID);
+	pcmd += sizeof(uint32_t);
+	memcpy(pcmd, &vport->fc_portname, sizeof(struct lpfc_name));
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Issue LOGO:      did:x%x",
+		ndlp->nlp_DID, 0, 0);
 
 	phba->fc_stat.elsXmitLOGO++;
 	elsiocb->iocb_cmpl = lpfc_cmpl_els_logo;
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag |= NLP_LOGO_SND;
-	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
+	spin_unlock_irq(shost->host_lock);
+	rc = lpfc_sli_issue_iocb(phba, pring, elsiocb, 0);
+
+	if (rc == IOCB_ERROR) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag &= ~NLP_LOGO_SND;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
 	}
-	spin_unlock_irq(phba->host->host_lock);
 	return 0;
 }
 
 static void
-lpfc_cmpl_els_cmd(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		  struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_els_cmd(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		  struct lpfc_iocbq *rspiocb)
 {
+	struct lpfc_vport *vport = cmdiocb->vport;
 	IOCB_t *irsp;
 
 	irsp = &rspiocb->iocb;
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"ELS cmd cmpl:    status:x%x/x%x did:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4],
+		irsp->un.elsreq64.remoteID);
 	/* ELS cmd tag <ulpIoTag> completes */
-	lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_ELS,
-			"%d:0106 ELS cmd tag x%x completes Data: x%x x%x x%x\n",
-			phba->brd_no,
-			irsp->ulpIoTag, irsp->ulpStatus,
-			irsp->un.ulpWord[4], irsp->ulpTimeout);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0106 ELS cmd tag x%x completes Data: x%x x%x x%x\n",
+			 irsp->ulpIoTag, irsp->ulpStatus,
+			 irsp->un.ulpWord[4], irsp->ulpTimeout);
 	/* Check to see if link went down during discovery */
-	lpfc_els_chk_latt(phba);
+	lpfc_els_chk_latt(vport);
 	lpfc_els_free_iocb(phba, cmdiocb);
 	return;
 }
 
 int
-lpfc_issue_els_scr(struct lpfc_hba * phba, uint32_t nportid, uint8_t retry)
+lpfc_issue_els_scr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	IOCB_t *icmd;
 	struct lpfc_iocbq *elsiocb;
 	struct lpfc_sli_ring *pring;
@@ -1328,17 +1600,18 @@ lpfc_issue_els_scr(struct lpfc_hba * phba, uint32_t nportid, uint8_t retry)
 
 	psli = &phba->sli;
 	pring = &psli->ring[LPFC_ELS_RING];	/* ELS ring */
-	cmdsize = (sizeof (uint32_t) + sizeof (SCR));
+	cmdsize = (sizeof(uint32_t) + sizeof(SCR));
 	ndlp = mempool_alloc(phba->nlp_mem_pool, GFP_KERNEL);
 	if (!ndlp)
 		return 1;
 
-	lpfc_nlp_init(phba, ndlp, nportid);
+	lpfc_nlp_init(vport, ndlp, nportid);
+
+	elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_SCR);
 
-	elsiocb = lpfc_prep_els_iocb(phba, 1, cmdsize, retry, ndlp,
-						ndlp->nlp_DID, ELS_CMD_SCR);
 	if (!elsiocb) {
-		mempool_free( ndlp, phba->nlp_mem_pool);
+		lpfc_nlp_put(ndlp);
 		return 1;
 	}
 
@@ -1346,29 +1619,31 @@ lpfc_issue_els_scr(struct lpfc_hba * phba, uint32_t nportid, uint8_t retry)
 	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 
 	*((uint32_t *) (pcmd)) = ELS_CMD_SCR;
-	pcmd += sizeof (uint32_t);
+	pcmd += sizeof(uint32_t);
 
 	/* For SCR, remainder of payload is SCR parameter page */
-	memset(pcmd, 0, sizeof (SCR));
+	memset(pcmd, 0, sizeof(SCR));
 	((SCR *) pcmd)->Function = SCR_FUNC_FULL;
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Issue SCR:       did:x%x",
+		ndlp->nlp_DID, 0, 0);
+
 	phba->fc_stat.elsXmitSCR++;
 	elsiocb->iocb_cmpl = lpfc_cmpl_els_cmd;
-	spin_lock_irq(phba->host->host_lock);
 	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
-		spin_unlock_irq(phba->host->host_lock);
-		mempool_free( ndlp, phba->nlp_mem_pool);
+		lpfc_nlp_put(ndlp);
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
 	}
-	spin_unlock_irq(phba->host->host_lock);
-	mempool_free( ndlp, phba->nlp_mem_pool);
+	lpfc_nlp_put(ndlp);
 	return 0;
 }
 
 static int
-lpfc_issue_els_farpr(struct lpfc_hba * phba, uint32_t nportid, uint8_t retry)
+lpfc_issue_els_farpr(struct lpfc_vport *vport, uint32_t nportid, uint8_t retry)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	IOCB_t *icmd;
 	struct lpfc_iocbq *elsiocb;
 	struct lpfc_sli_ring *pring;
@@ -1382,16 +1657,17 @@ lpfc_issue_els_farpr(struct lpfc_hba * phba, uint32_t nportid, uint8_t retry)
 
 	psli = &phba->sli;
 	pring = &psli->ring[LPFC_ELS_RING];	/* ELS ring */
-	cmdsize = (sizeof (uint32_t) + sizeof (FARP));
+	cmdsize = (sizeof(uint32_t) + sizeof(FARP));
 	ndlp = mempool_alloc(phba->nlp_mem_pool, GFP_KERNEL);
 	if (!ndlp)
 		return 1;
-	lpfc_nlp_init(phba, ndlp, nportid);
 
-	elsiocb = lpfc_prep_els_iocb(phba, 1, cmdsize, retry, ndlp,
-						ndlp->nlp_DID, ELS_CMD_RNID);
+	lpfc_nlp_init(vport, ndlp, nportid);
+
+	elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_RNID);
 	if (!elsiocb) {
-		mempool_free( ndlp, phba->nlp_mem_pool);
+		lpfc_nlp_put(ndlp);
 		return 1;
 	}
 
@@ -1399,44 +1675,71 @@ lpfc_issue_els_farpr(struct lpfc_hba * phba, uint32_t nportid, uint8_t retry)
 	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 
 	*((uint32_t *) (pcmd)) = ELS_CMD_FARPR;
-	pcmd += sizeof (uint32_t);
+	pcmd += sizeof(uint32_t);
 
 	/* Fill in FARPR payload */
 	fp = (FARP *) (pcmd);
-	memset(fp, 0, sizeof (FARP));
+	memset(fp, 0, sizeof(FARP));
 	lp = (uint32_t *) pcmd;
 	*lp++ = be32_to_cpu(nportid);
-	*lp++ = be32_to_cpu(phba->fc_myDID);
+	*lp++ = be32_to_cpu(vport->fc_myDID);
 	fp->Rflags = 0;
 	fp->Mflags = (FARP_MATCH_PORT | FARP_MATCH_NODE);
 
-	memcpy(&fp->RportName, &phba->fc_portname, sizeof (struct lpfc_name));
-	memcpy(&fp->RnodeName, &phba->fc_nodename, sizeof (struct lpfc_name));
-	if ((ondlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, nportid))) {
+	memcpy(&fp->RportName, &vport->fc_portname, sizeof(struct lpfc_name));
+	memcpy(&fp->RnodeName, &vport->fc_nodename, sizeof(struct lpfc_name));
+	ondlp = lpfc_findnode_did(vport, nportid);
+	if (ondlp) {
 		memcpy(&fp->OportName, &ondlp->nlp_portname,
-		       sizeof (struct lpfc_name));
+		       sizeof(struct lpfc_name));
 		memcpy(&fp->OnodeName, &ondlp->nlp_nodename,
-		       sizeof (struct lpfc_name));
+		       sizeof(struct lpfc_name));
 	}
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Issue FARPR:     did:x%x",
+		ndlp->nlp_DID, 0, 0);
+
 	phba->fc_stat.elsXmitFARPR++;
 	elsiocb->iocb_cmpl = lpfc_cmpl_els_cmd;
-	spin_lock_irq(phba->host->host_lock);
 	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
-		spin_unlock_irq(phba->host->host_lock);
-		mempool_free( ndlp, phba->nlp_mem_pool);
+		lpfc_nlp_put(ndlp);
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
 	}
-	spin_unlock_irq(phba->host->host_lock);
-	mempool_free( ndlp, phba->nlp_mem_pool);
+	lpfc_nlp_put(ndlp);
 	return 0;
 }
 
+static void
+lpfc_end_rscn(struct lpfc_vport *vport)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	if (vport->fc_flag & FC_RSCN_MODE) {
+		/*
+		 * Check to see if more RSCNs came in while we were
+		 * processing this one.
+		 */
+		if (vport->fc_rscn_id_cnt ||
+		    (vport->fc_flag & FC_RSCN_DISCOVERY) != 0)
+			lpfc_els_handle_rscn(vport);
+		else {
+			spin_lock_irq(shost->host_lock);
+			vport->fc_flag &= ~FC_RSCN_MODE;
+			spin_unlock_irq(shost->host_lock);
+		}
+	}
+}
+
 void
-lpfc_cancel_retry_delay_tmo(struct lpfc_hba *phba, struct lpfc_nodelist * nlp)
+lpfc_cancel_retry_delay_tmo(struct lpfc_vport *vport, struct lpfc_nodelist *nlp)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	spin_lock_irq(shost->host_lock);
 	nlp->nlp_flag &= ~NLP_DELAY_TMO;
+	spin_unlock_irq(shost->host_lock);
 	del_timer_sync(&nlp->nlp_delayfunc);
 	nlp->nlp_last_elscmd = 0;
 
@@ -1444,30 +1747,21 @@ lpfc_cancel_retry_delay_tmo(struct lpfc_hba *phba, struct lpfc_nodelist * nlp)
 		list_del_init(&nlp->els_retry_evt.evt_listp);
 
 	if (nlp->nlp_flag & NLP_NPR_2B_DISC) {
+		spin_lock_irq(shost->host_lock);
 		nlp->nlp_flag &= ~NLP_NPR_2B_DISC;
-		if (phba->num_disc_nodes) {
+		spin_unlock_irq(shost->host_lock);
+		if (vport->num_disc_nodes) {
 			/* Check to see if there are more
 			 * PLOGIs to be sent
 			 */
-			lpfc_more_plogi(phba);
-
-			if (phba->num_disc_nodes == 0) {
-				phba->fc_flag &= ~FC_NDISC_ACTIVE;
-				lpfc_can_disctmo(phba);
-				if (phba->fc_flag & FC_RSCN_MODE) {
-					/*
-					 * Check to see if more RSCNs
-					 * came in while we were
-					 * processing this one.
-					 */
-					if((phba->fc_rscn_id_cnt==0) &&
-					 !(phba->fc_flag & FC_RSCN_DISCOVERY)) {
-						phba->fc_flag &= ~FC_RSCN_MODE;
-					}
-					else {
-						lpfc_els_handle_rscn(phba);
-					}
-				}
+			lpfc_more_plogi(vport);
+
+			if (vport->num_disc_nodes == 0) {
+				spin_lock_irq(shost->host_lock);
+				vport->fc_flag &= ~FC_NDISC_ACTIVE;
+				spin_unlock_irq(shost->host_lock);
+				lpfc_can_disctmo(vport);
+				lpfc_end_rscn(vport);
 			}
 		}
 	}
@@ -1477,18 +1771,19 @@ lpfc_cancel_retry_delay_tmo(struct lpfc_hba *phba, struct lpfc_nodelist * nlp)
 void
 lpfc_els_retry_delay(unsigned long ptr)
 {
-	struct lpfc_nodelist *ndlp;
-	struct lpfc_hba *phba;
-	unsigned long iflag;
-	struct lpfc_work_evt  *evtp;
-
-	ndlp = (struct lpfc_nodelist *)ptr;
-	phba = ndlp->nlp_phba;
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) ptr;
+	struct lpfc_vport *vport = ndlp->vport;
+	struct lpfc_hba   *phba = vport->phba;
+	unsigned long flags;
+	struct lpfc_work_evt  *evtp = &ndlp->els_retry_evt;
+
+	ndlp = (struct lpfc_nodelist *) ptr;
+	phba = ndlp->vport->phba;
 	evtp = &ndlp->els_retry_evt;
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
+	spin_lock_irqsave(&phba->hbalock, flags);
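+	/* Queue an ELS retry event for this node to the worker thread, unless
+	 * one is already pending, and wake the worker to process it.
+	 */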
 	if (!list_empty(&evtp->evt_listp)) {
-		spin_unlock_irqrestore(phba->host->host_lock, iflag);
+		spin_unlock_irqrestore(&phba->hbalock, flags);
 		return;
 	}
 
@@ -1496,33 +1791,31 @@ lpfc_els_retry_delay(unsigned long ptr)
 	evtp->evt       = LPFC_EVT_ELS_RETRY;
 	list_add_tail(&evtp->evt_listp, &phba->work_list);
 	if (phba->work_wait)
-		wake_up(phba->work_wait);
+		lpfc_worker_wake_up(phba);
 
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
+	spin_unlock_irqrestore(&phba->hbalock, flags);
 	return;
 }
 
 void
 lpfc_els_retry_delay_handler(struct lpfc_nodelist *ndlp)
 {
-	struct lpfc_hba *phba;
-	uint32_t cmd;
-	uint32_t did;
-	uint8_t retry;
+	struct lpfc_vport *vport = ndlp->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	uint32_t cmd, did, retry;
 
-	phba = ndlp->nlp_phba;
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	did = ndlp->nlp_DID;
 	cmd = ndlp->nlp_last_elscmd;
 	ndlp->nlp_last_elscmd = 0;
 
 	if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) {
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		return;
 	}
 
 	ndlp->nlp_flag &= ~NLP_DELAY_TMO;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 	/*
 	 * If a discovery event readded nlp_delayfunc after timer
 	 * firing and before processing the timer, cancel the
@@ -1533,61 +1826,55 @@ lpfc_els_retry_delay_handler(struct lpfc_nodelist *ndlp)
 
 	switch (cmd) {
 	case ELS_CMD_FLOGI:
-		lpfc_issue_els_flogi(phba, ndlp, retry);
+		lpfc_issue_els_flogi(vport, ndlp, retry);
 		break;
 	case ELS_CMD_PLOGI:
-		if(!lpfc_issue_els_plogi(phba, ndlp->nlp_DID, retry)) {
+		if (!lpfc_issue_els_plogi(vport, ndlp->nlp_DID, retry)) {
 			ndlp->nlp_prev_state = ndlp->nlp_state;
-			ndlp->nlp_state = NLP_STE_PLOGI_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_PLOGI_LIST);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE);
 		}
 		break;
 	case ELS_CMD_ADISC:
-		if (!lpfc_issue_els_adisc(phba, ndlp, retry)) {
+		if (!lpfc_issue_els_adisc(vport, ndlp, retry)) {
 			ndlp->nlp_prev_state = ndlp->nlp_state;
-			ndlp->nlp_state = NLP_STE_ADISC_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_ADISC_LIST);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_ADISC_ISSUE);
 		}
 		break;
 	case ELS_CMD_PRLI:
-		if (!lpfc_issue_els_prli(phba, ndlp, retry)) {
+		if (!lpfc_issue_els_prli(vport, ndlp, retry)) {
 			ndlp->nlp_prev_state = ndlp->nlp_state;
-			ndlp->nlp_state = NLP_STE_PRLI_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_PRLI_LIST);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_PRLI_ISSUE);
 		}
 		break;
 	case ELS_CMD_LOGO:
-		if (!lpfc_issue_els_logo(phba, ndlp, retry)) {
+		if (!lpfc_issue_els_logo(vport, ndlp, retry)) {
 			ndlp->nlp_prev_state = ndlp->nlp_state;
-			ndlp->nlp_state = NLP_STE_NPR_NODE;
-			lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
 		}
 		break;
+	case ELS_CMD_FDISC:
+		lpfc_issue_els_fdisc(vport, ndlp, retry);
+		break;
 	}
 	return;
 }
 
 static int
-lpfc_els_retry(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-	       struct lpfc_iocbq * rspiocb)
+lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+	       struct lpfc_iocbq *rspiocb)
 {
-	IOCB_t *irsp;
-	struct lpfc_dmabuf *pcmd;
-	struct lpfc_nodelist *ndlp;
+	struct lpfc_vport *vport = cmdiocb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	IOCB_t *irsp = &rspiocb->iocb;
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+	struct lpfc_dmabuf *pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
 	uint32_t *elscmd;
 	struct ls_rjt stat;
-	int retry, maxretry;
-	int delay;
-	uint32_t cmd;
+	int retry = 0, maxretry = lpfc_max_els_tries, delay = 0;
+	int logerr = 0;
+	uint32_t cmd = 0;
 	uint32_t did;
 
-	retry = 0;
-	delay = 0;
-	maxretry = lpfc_max_els_tries;
-	irsp = &rspiocb->iocb;
-	ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
-	pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
-	cmd = 0;
 
 	/* Note: context2 may be 0 for internal driver abort
 	 * of delayed ELS command.
@@ -1598,16 +1885,20 @@ lpfc_els_retry(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 		cmd = *elscmd++;
 	}
 
-	if(ndlp)
+	if (ndlp)
 		did = ndlp->nlp_DID;
 	else {
 		/* We should only hit this case for retrying PLOGI */
 		did = irsp->un.elsreq64.remoteID;
-		ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, did);
+		ndlp = lpfc_findnode_did(vport, did);
 		if (!ndlp && (cmd != ELS_CMD_PLOGI))
 			return 1;
 	}
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Retry ELS:       wd7:x%x wd4:x%x did:x%x",
+		*(((uint32_t *) irsp) + 7), irsp->un.ulpWord[4], ndlp->nlp_DID);
+
 	switch (irsp->ulpStatus) {
 	case IOSTAT_FCP_RSP_ERROR:
 	case IOSTAT_REMOTE_STOP:
@@ -1616,25 +1907,40 @@ lpfc_els_retry(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 	case IOSTAT_LOCAL_REJECT:
 		switch ((irsp->un.ulpWord[4] & 0xff)) {
 		case IOERR_LOOP_OPEN_FAILURE:
-			if (cmd == ELS_CMD_PLOGI) {
-				if (cmdiocb->retry == 0) {
-					delay = 1;
-				}
-			}
+			if (cmd == ELS_CMD_PLOGI && cmdiocb->retry == 0)
+				delay = 1000;
 			retry = 1;
 			break;
 
-		case IOERR_SEQUENCE_TIMEOUT:
-			retry = 1;
+		case IOERR_ILLEGAL_COMMAND:
+			if ((phba->sli3_options & LPFC_SLI3_VPORT_TEARDOWN) &&
+			    (cmd == ELS_CMD_FDISC)) {
+				lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+						 "0124 FDISC failed (3/6) "
+						 "retrying...\n");
+				lpfc_mbx_unreg_vpi(vport);
+				retry = 1;
+				/* FDISC retry policy */
+				maxretry = 48;
+				if (cmdiocb->retry >= 32)
+					delay = 1000;
+			}
 			break;
 
 		case IOERR_NO_RESOURCES:
-			if (cmd == ELS_CMD_PLOGI) {
-				delay = 1;
-			}
+			logerr = 1; /* HBA out of resources */
 			retry = 1;
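+			/* Back off to a 100ms delay once the command has been
+			 * retried more than 100 times, and allow up to 250
+			 * retries in total.
+			 */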
+			if (cmdiocb->retry > 100)
+				delay = 100;
+			maxretry = 250;
 			break;
 
+		case IOERR_ILLEGAL_FRAME:
+			delay = 100;
+			retry = 1;
+			break;
+
+		case IOERR_SEQUENCE_TIMEOUT:
 		case IOERR_INVALID_RPI:
 			retry = 1;
 			break;
@@ -1651,6 +1957,7 @@ lpfc_els_retry(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 
 	case IOSTAT_NPORT_BSY:
 	case IOSTAT_FABRIC_BSY:
+		logerr = 1; /* Fabric / Remote NPort out of resources */
 		retry = 1;
 		break;
 
@@ -1664,27 +1971,59 @@ lpfc_els_retry(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 			if (stat.un.b.lsRjtRsnCodeExp ==
 			    LSEXP_CMD_IN_PROGRESS) {
 				if (cmd == ELS_CMD_PLOGI) {
-					delay = 1;
+					delay = 1000;
 					maxretry = 48;
 				}
 				retry = 1;
 				break;
 			}
 			if (cmd == ELS_CMD_PLOGI) {
-				delay = 1;
+				delay = 1000;
 				maxretry = lpfc_max_els_tries + 1;
 				retry = 1;
 				break;
 			}
+			if ((phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) &&
+			  (cmd == ELS_CMD_FDISC) &&
+			  (stat.un.b.lsRjtRsnCodeExp == LSEXP_OUT_OF_RESOURCE)){
+				lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+						 "0125 FDISC Failed (x%x). "
+						 "Fabric out of resources\n",
+						 stat.un.lsRjtError);
+				lpfc_vport_set_state(vport,
+						     FC_VPORT_NO_FABRIC_RSCS);
+			}
 			break;
 
 		case LSRJT_LOGICAL_BSY:
-			if (cmd == ELS_CMD_PLOGI) {
-				delay = 1;
+			if ((cmd == ELS_CMD_PLOGI) ||
+			    (cmd == ELS_CMD_PRLI)) {
+				delay = 1000;
+				maxretry = 48;
+			} else if (cmd == ELS_CMD_FDISC) {
+				/* FDISC retry policy */
 				maxretry = 48;
+				if (cmdiocb->retry >= 32)
+					delay = 1000;
 			}
 			retry = 1;
 			break;
+
+		case LSRJT_LOGICAL_ERR:
+		case LSRJT_PROTOCOL_ERR:
+			if ((phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) &&
+			  (cmd == ELS_CMD_FDISC) &&
+			  ((stat.un.b.lsRjtRsnCodeExp == LSEXP_INVALID_PNAME) ||
+			  (stat.un.b.lsRjtRsnCodeExp == LSEXP_INVALID_NPORT_ID))
+			  ) {
+				lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+						 "0123 FDISC Failed (x%x). "
+						 "Fabric Detected Bad WWN\n",
+						 stat.un.lsRjtError);
+				lpfc_vport_set_state(vport,
+						     FC_VPORT_FABRIC_REJ_WWN);
+			}
+			break;
 		}
 		break;
 
@@ -1699,26 +2038,40 @@ lpfc_els_retry(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 	if (did == FDMI_DID)
 		retry = 1;
 
+	if ((cmd == ELS_CMD_FLOGI) &&
+	    (phba->fc_topology != TOPOLOGY_LOOP)) {
+		/* FLOGI retry policy */
+		retry = 1;
+		maxretry = 48;
+		if (cmdiocb->retry >= 32)
+			delay = 1000;
+	}
+
 	if ((++cmdiocb->retry) >= maxretry) {
 		phba->fc_stat.elsRetryExceeded++;
 		retry = 0;
 	}
 
+	if ((vport->load_flag & FC_UNLOADING) != 0)
+		retry = 0;
+
 	if (retry) {
 
 		/* Retry ELS command <elsCmd> to remote NPORT <did> */
-		lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-				"%d:0107 Retry ELS command x%x to remote "
-				"NPORT x%x Data: x%x x%x\n",
-				phba->brd_no,
-				cmd, did, cmdiocb->retry, delay);
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+				 "0107 Retry ELS command x%x to remote "
+				 "NPORT x%x Data: x%x x%x\n",
+				 cmd, did, cmdiocb->retry, delay);
+
+		if (((cmd == ELS_CMD_PLOGI) || (cmd == ELS_CMD_ADISC)) &&
+			((irsp->ulpStatus != IOSTAT_LOCAL_REJECT) ||
+			((irsp->un.ulpWord[4] & 0xff) != IOERR_NO_RESOURCES))) {
+			/* Don't reset timer for no resources */
 
-		if ((cmd == ELS_CMD_PLOGI) || (cmd == ELS_CMD_ADISC)) {
 			/* If discovery / RSCN timer is running, reset it */
-			if (timer_pending(&phba->fc_disctmo) ||
-			      (phba->fc_flag & FC_RSCN_MODE)) {
-				lpfc_set_disctmo(phba);
-			}
+			if (timer_pending(&vport->fc_disctmo) ||
+			    (vport->fc_flag & FC_RSCN_MODE))
+				lpfc_set_disctmo(vport);
 		}
 
 		phba->fc_stat.elsXmitRetry++;
@@ -1726,64 +2079,110 @@ lpfc_els_retry(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 			phba->fc_stat.elsDelayRetry++;
 			ndlp->nlp_retry = cmdiocb->retry;
 
-			mod_timer(&ndlp->nlp_delayfunc, jiffies + HZ);
+			/* delay is specified in milliseconds */
+			mod_timer(&ndlp->nlp_delayfunc,
+				jiffies + msecs_to_jiffies(delay));
+			spin_lock_irq(shost->host_lock);
 			ndlp->nlp_flag |= NLP_DELAY_TMO;
+			spin_unlock_irq(shost->host_lock);
 
 			ndlp->nlp_prev_state = ndlp->nlp_state;
-			ndlp->nlp_state = NLP_STE_NPR_NODE;
-			lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
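+			/* A delayed PRLI retry waits in REG_LOGIN_ISSUE state;
+			 * all other delayed retries wait in NPR state.
+			 */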
+			if (cmd == ELS_CMD_PRLI)
+				lpfc_nlp_set_state(vport, ndlp,
+					NLP_STE_REG_LOGIN_ISSUE);
+			else
+				lpfc_nlp_set_state(vport, ndlp,
+					NLP_STE_NPR_NODE);
 			ndlp->nlp_last_elscmd = cmd;
 
 			return 1;
 		}
 		switch (cmd) {
 		case ELS_CMD_FLOGI:
-			lpfc_issue_els_flogi(phba, ndlp, cmdiocb->retry);
+			lpfc_issue_els_flogi(vport, ndlp, cmdiocb->retry);
+			return 1;
+		case ELS_CMD_FDISC:
+			lpfc_issue_els_fdisc(vport, ndlp, cmdiocb->retry);
 			return 1;
 		case ELS_CMD_PLOGI:
 			if (ndlp) {
 				ndlp->nlp_prev_state = ndlp->nlp_state;
-				ndlp->nlp_state = NLP_STE_PLOGI_ISSUE;
-				lpfc_nlp_list(phba, ndlp, NLP_PLOGI_LIST);
+				lpfc_nlp_set_state(vport, ndlp,
+						   NLP_STE_PLOGI_ISSUE);
 			}
-			lpfc_issue_els_plogi(phba, did, cmdiocb->retry);
+			lpfc_issue_els_plogi(vport, did, cmdiocb->retry);
 			return 1;
 		case ELS_CMD_ADISC:
 			ndlp->nlp_prev_state = ndlp->nlp_state;
-			ndlp->nlp_state = NLP_STE_ADISC_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_ADISC_LIST);
-			lpfc_issue_els_adisc(phba, ndlp, cmdiocb->retry);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_ADISC_ISSUE);
+			lpfc_issue_els_adisc(vport, ndlp, cmdiocb->retry);
 			return 1;
 		case ELS_CMD_PRLI:
 			ndlp->nlp_prev_state = ndlp->nlp_state;
-			ndlp->nlp_state = NLP_STE_PRLI_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_PRLI_LIST);
-			lpfc_issue_els_prli(phba, ndlp, cmdiocb->retry);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_PRLI_ISSUE);
+			lpfc_issue_els_prli(vport, ndlp, cmdiocb->retry);
 			return 1;
 		case ELS_CMD_LOGO:
 			ndlp->nlp_prev_state = ndlp->nlp_state;
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+			lpfc_issue_els_logo(vport, ndlp, cmdiocb->retry);
+			return 1;
+		case ELS_CMD_AUTH_NEG:
+		case ELS_CMD_DH_CHA:
+		case ELS_CMD_DH_REP:
+		case ELS_CMD_DH_SUC:
+			ndlp->nlp_prev_state = ndlp->nlp_state;
 			ndlp->nlp_state = NLP_STE_NPR_NODE;
-			lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
-			lpfc_issue_els_logo(phba, ndlp, cmdiocb->retry);
+			lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
+					"0122 Authentication LS_RJT Logical "
+					"busy\n");
+			lpfc_start_authentication(vport, ndlp);
 			return 1;
 		}
 	}
-
 	/* No retry ELS command <elsCmd> to remote NPORT <did> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0108 No retry ELS command x%x to remote NPORT x%x "
-			"Data: x%x\n",
-			phba->brd_no,
-			cmd, did, cmdiocb->retry);
-
+	if (logerr) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+			 "0137 No retry ELS command x%x to remote "
+			 "NPORT x%x: Out of Resources: Error:x%x/%x\n",
+			 cmd, did, irsp->ulpStatus,
+			 irsp->un.ulpWord[4]);
+	} else {
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0108 No retry ELS command x%x to remote "
+			 "NPORT x%x Retried:%d Error:x%x/%x\n",
+			 cmd, did, cmdiocb->retry, irsp->ulpStatus,
+			 irsp->un.ulpWord[4]);
+	}
 	return 0;
 }
 
 int
-lpfc_els_free_iocb(struct lpfc_hba * phba, struct lpfc_iocbq * elsiocb)
+lpfc_els_free_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *elsiocb)
 {
 	struct lpfc_dmabuf *buf_ptr, *buf_ptr1;
+	struct lpfc_nodelist *ndlp;
 
+	ndlp = (struct lpfc_nodelist *)elsiocb->context1;
+	if (ndlp) {
+		if (ndlp->nlp_flag & NLP_DEFER_RM) {
+			lpfc_nlp_put(ndlp);
+
+			/* If the ndlp is not being used by another discovery
+			 * thread, free it.
+			 */
+			if (!lpfc_nlp_not_used(ndlp)) {
+				/* If ndlp is being used by another discovery
+				 * thread, just clear NLP_DEFER_RM
+				 */
+				ndlp->nlp_flag &= ~NLP_DEFER_RM;
+			}
+		} else
+			lpfc_nlp_put(ndlp);
+		elsiocb->context1 = NULL;
+	}
 	/* context2  = cmd,  context2->next = rsp, context3 = bpl */
 	if (elsiocb->context2) {
 		buf_ptr1 = (struct lpfc_dmabuf *) elsiocb->context2;
@@ -1804,106 +2203,156 @@ lpfc_els_free_iocb(struct lpfc_hba * phba, struct lpfc_iocbq * elsiocb)
 		lpfc_mbuf_free(phba, buf_ptr->virt, buf_ptr->phys);
 		kfree(buf_ptr);
 	}
-	spin_lock_irq(phba->host->host_lock);
 	lpfc_sli_release_iocbq(phba, elsiocb);
-	spin_unlock_irq(phba->host->host_lock);
 	return 0;
 }
 
 static void
-lpfc_cmpl_els_logo_acc(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		       struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		       struct lpfc_iocbq *rspiocb)
 {
-	struct lpfc_nodelist *ndlp;
-
-	ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+	struct lpfc_vport *vport = cmdiocb->vport;
+	IOCB_t *irsp;
 
+	irsp = &rspiocb->iocb;
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
+		"ACC LOGO cmpl:   status:x%x/x%x did:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4], ndlp->nlp_DID);
 	/* ACC to LOGO completes to NPort <nlp_DID> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0109 ACC to LOGO completes to NPort x%x "
-			"Data: x%x x%x x%x\n",
-			phba->brd_no, ndlp->nlp_DID, ndlp->nlp_flag,
-			ndlp->nlp_state, ndlp->nlp_rpi);
-
-	switch (ndlp->nlp_state) {
-	case NLP_STE_UNUSED_NODE:	/* node is just allocated */
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
-		break;
-	case NLP_STE_NPR_NODE:		/* NPort Recovery mode */
-		lpfc_unreg_rpi(phba, ndlp);
-		break;
-	default:
-		break;
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0109 ACC to LOGO completes to NPort x%x "
+			 "Data: x%x x%x x%x\n",
+			 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+			 ndlp->nlp_rpi);
+
+	if (ndlp->nlp_state == NLP_STE_NPR_NODE) {
+		/* NPort Recovery mode or node is just allocated */
+		if (!lpfc_nlp_not_used(ndlp)) {
+			/* If the ndlp is being used by another discovery
+			 * thread, just unregister the RPI.
+			 */
+			lpfc_unreg_rpi(vport, ndlp);
+		}
 	}
 	lpfc_els_free_iocb(phba, cmdiocb);
 	return;
 }
 
+void
+lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+{
+	struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1);
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2;
+
+	pmb->context1 = NULL;
+	lpfc_mbuf_free(phba, mp->virt, mp->phys);
+	kfree(mp);
+	mempool_free(pmb, phba->mbox_mem_pool);
+	if (ndlp) {
+		lpfc_nlp_put(ndlp);
+
+		/* This is the end of the default RPI cleanup logic for this
+	 * ndlp. If no other discovery threads are using this ndlp,
+		 * we should free all resources associated with it.
+		 */
+		lpfc_nlp_not_used(ndlp);
+	}
+	return;
+}
+
 static void
-lpfc_cmpl_els_acc(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		  struct lpfc_iocbq * rspiocb)
+lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		  struct lpfc_iocbq *rspiocb)
 {
-	IOCB_t *irsp;
-	struct lpfc_nodelist *ndlp;
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+	struct lpfc_vport *vport = ndlp ? ndlp->vport : NULL;
+	struct Scsi_Host  *shost = vport ? lpfc_shost_from_vport(vport) : NULL;
+	IOCB_t  *irsp;
+	uint8_t *pcmd;
 	LPFC_MBOXQ_t *mbox = NULL;
-	struct lpfc_dmabuf *mp;
+	struct lpfc_dmabuf *mp = NULL;
+	uint32_t ls_rjt = 0;
 
 	irsp = &rspiocb->iocb;
 
-	ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
 	if (cmdiocb->context_un.mbox)
 		mbox = cmdiocb->context_un.mbox;
 
+	/* First determine if this is a LS_RJT cmpl */
+	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) cmdiocb->context2)->virt);
+	if (*((uint32_t *) (pcmd)) == ELS_CMD_LS_RJT) {
+		/* An LS_RJT associated with Default RPI cleanup
+		 * has its own separate code path.
+		 */
+		if (!(ndlp->nlp_flag & NLP_RM_DFLT_RPI))
+			ls_rjt = 1;
+	}
 
 	/* Check to see if link went down during discovery */
-	if ((lpfc_els_chk_latt(phba)) || !ndlp) {
+	if (!ndlp || lpfc_els_chk_latt(vport)) {
 		if (mbox) {
 			mp = (struct lpfc_dmabuf *) mbox->context1;
 			if (mp) {
 				lpfc_mbuf_free(phba, mp->virt, mp->phys);
 				kfree(mp);
 			}
-			mempool_free( mbox, phba->mbox_mem_pool);
+			mempool_free(mbox, phba->mbox_mem_pool);
 		}
+		if (ndlp && (ndlp->nlp_flag & NLP_RM_DFLT_RPI))
+			if (lpfc_nlp_not_used(ndlp))
+				ndlp = NULL;
 		goto out;
 	}
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
+		"ELS rsp cmpl:    status:x%x/x%x did:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4],
+		cmdiocb->iocb.un.elsreq64.remoteID);
 	/* ELS response tag <ulpIoTag> completes */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0110 ELS response tag x%x completes "
-			"Data: x%x x%x x%x x%x x%x x%x x%x\n",
-			phba->brd_no,
-			cmdiocb->iocb.ulpIoTag, rspiocb->iocb.ulpStatus,
-			rspiocb->iocb.un.ulpWord[4], rspiocb->iocb.ulpTimeout,
- 			ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
-			ndlp->nlp_rpi);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0110 ELS response tag x%x completes "
+			 "Data: x%x x%x x%x x%x x%x x%x x%x\n",
+			 cmdiocb->iocb.ulpIoTag, rspiocb->iocb.ulpStatus,
+			 rspiocb->iocb.un.ulpWord[4], rspiocb->iocb.ulpTimeout,
+			 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+			 ndlp->nlp_rpi);
 	if (mbox) {
 		if ((rspiocb->iocb.ulpStatus == 0)
 		    && (ndlp->nlp_flag & NLP_ACC_REGLOGIN)) {
-			lpfc_unreg_rpi(phba, ndlp);
-			mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login;
-			mbox->context2 = ndlp;
-			ndlp->nlp_prev_state = ndlp->nlp_state;
-			ndlp->nlp_state = NLP_STE_REG_LOGIN_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_REGLOGIN_LIST);
-			if (lpfc_sli_issue_mbox(phba, mbox,
-						(MBX_NOWAIT | MBX_STOP_IOCB))
+			lpfc_unreg_rpi(vport, ndlp);
+			mbox->context2 = lpfc_nlp_get(ndlp);
+			mbox->vport = vport;
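+			/* Nodes marked NLP_RM_DFLT_RPI are flagged for
+			 * immediate unreg and complete through the default
+			 * RPI cleanup handler; all others use the normal
+			 * reg_login completion and move to REG_LOGIN_ISSUE.
+			 */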
+			if (ndlp->nlp_flag & NLP_RM_DFLT_RPI) {
+				mbox->mbox_flag |= LPFC_MBX_IMED_UNREG;
+				mbox->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi;
+			} else {
+				mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login;
+				ndlp->nlp_prev_state = ndlp->nlp_state;
+				lpfc_nlp_set_state(vport, ndlp,
+					   NLP_STE_REG_LOGIN_ISSUE);
+			}
+			if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
 			    != MBX_NOT_FINISHED) {
 				goto out;
 			}
-			/* NOTE: we should have messages for unsuccessful
-			   reglogin */
+
+			/* ELS rsp: Cannot issue reg_login for <NPortid> */
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				"0138 ELS rsp: Cannot issue reg_login for x%x "
+				"Data: x%x x%x x%x\n",
+				ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+				ndlp->nlp_rpi);
+
+			if (lpfc_nlp_not_used(ndlp))
+				ndlp = NULL;
 		} else {
-			/* Do not call NO_LIST for lpfc_els_abort'ed ELS cmds */
-			if (!((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) &&
-			      ((irsp->un.ulpWord[4] == IOERR_SLI_ABORTED) ||
-			       (irsp->un.ulpWord[4] == IOERR_LINK_DOWN) ||
-			       (irsp->un.ulpWord[4] == IOERR_SLI_DOWN)))) {
-				if (ndlp->nlp_flag & NLP_ACC_REGLOGIN) {
-					lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+			/* Do not drop node for lpfc_els_abort'ed ELS cmds */
+			if (!lpfc_error_lost_link(irsp) &&
+			    ndlp->nlp_flag & NLP_ACC_REGLOGIN) {
+				if (lpfc_nlp_not_used(ndlp))
 					ndlp = NULL;
-				}
 			}
 		}
 		mp = (struct lpfc_dmabuf *) mbox->context1;
@@ -1915,19 +2364,30 @@ lpfc_cmpl_els_acc(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 	}
 out:
 	if (ndlp) {
-		spin_lock_irq(phba->host->host_lock);
-		ndlp->nlp_flag &= ~NLP_ACC_REGLOGIN;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
+		ndlp->nlp_flag &= ~(NLP_ACC_REGLOGIN | NLP_RM_DFLT_RPI);
+		spin_unlock_irq(shost->host_lock);
+
+		/* If the node is not being used by another discovery thread,
+		 * and we are sending a reject, we are done with it.
+		 * Release driver reference count here and free associated
+		 * resources.
+		 */
+		if (ls_rjt)
+			lpfc_nlp_not_used(ndlp);
 	}
+
 	lpfc_els_free_iocb(phba, cmdiocb);
 	return;
 }
 
 int
-lpfc_els_rsp_acc(struct lpfc_hba * phba, uint32_t flag,
-		 struct lpfc_iocbq * oldiocb, struct lpfc_nodelist * ndlp,
-		 LPFC_MBOXQ_t * mbox, uint8_t newnode)
+lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag,
+		 struct lpfc_iocbq *oldiocb, struct lpfc_nodelist *ndlp,
+		 LPFC_MBOXQ_t *mbox)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
 	IOCB_t *icmd;
 	IOCB_t *oldcmd;
 	struct lpfc_iocbq *elsiocb;
@@ -1944,23 +2404,30 @@ lpfc_els_rsp_acc(struct lpfc_hba * phba, uint32_t flag,
 
 	switch (flag) {
 	case ELS_CMD_ACC:
-		cmdsize = sizeof (uint32_t);
-		elsiocb = lpfc_prep_els_iocb(phba, 0, cmdsize, oldiocb->retry,
-					ndlp, ndlp->nlp_DID, ELS_CMD_ACC);
+		cmdsize = sizeof(uint32_t);
+		elsiocb = lpfc_prep_els_iocb(vport, 0, cmdsize, oldiocb->retry,
+					     ndlp, ndlp->nlp_DID, ELS_CMD_ACC);
 		if (!elsiocb) {
+			spin_lock_irq(shost->host_lock);
 			ndlp->nlp_flag &= ~NLP_LOGO_ACC;
+			spin_unlock_irq(shost->host_lock);
 			return 1;
 		}
+
 		icmd = &elsiocb->iocb;
 		icmd->ulpContext = oldcmd->ulpContext;	/* Xri */
 		pcmd = (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 		*((uint32_t *) (pcmd)) = ELS_CMD_ACC;
-		pcmd += sizeof (uint32_t);
+		pcmd += sizeof(uint32_t);
+
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
+			"Issue ACC:       did:x%x flg:x%x",
+			ndlp->nlp_DID, ndlp->nlp_flag, 0);
 		break;
 	case ELS_CMD_PLOGI:
-		cmdsize = (sizeof (struct serv_parm) + sizeof (uint32_t));
-		elsiocb = lpfc_prep_els_iocb(phba, 0, cmdsize, oldiocb->retry,
-					ndlp, ndlp->nlp_DID, ELS_CMD_ACC);
+		cmdsize = (sizeof(struct serv_parm) + sizeof(uint32_t));
+		elsiocb = lpfc_prep_els_iocb(vport, 0, cmdsize, oldiocb->retry,
+					     ndlp, ndlp->nlp_DID, ELS_CMD_ACC);
 		if (!elsiocb)
 			return 1;
 
@@ -1972,12 +2439,16 @@ lpfc_els_rsp_acc(struct lpfc_hba * phba, uint32_t flag,
 			elsiocb->context_un.mbox = mbox;
 
 		*((uint32_t *) (pcmd)) = ELS_CMD_ACC;
-		pcmd += sizeof (uint32_t);
-		memcpy(pcmd, &phba->fc_sparam, sizeof (struct serv_parm));
+		pcmd += sizeof(uint32_t);
+		memcpy(pcmd, &vport->fc_sparam, sizeof(struct serv_parm));
+
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
+			"Issue ACC PLOGI: did:x%x flg:x%x",
+			ndlp->nlp_DID, ndlp->nlp_flag, 0);
 		break;
 	case ELS_CMD_PRLO:
-		cmdsize = sizeof (uint32_t) + sizeof (PRLO);
-		elsiocb = lpfc_prep_els_iocb(phba, 0, cmdsize, oldiocb->retry,
+		cmdsize = sizeof(uint32_t) + sizeof(PRLO);
+		elsiocb = lpfc_prep_els_iocb(vport, 0, cmdsize, oldiocb->retry,
 					     ndlp, ndlp->nlp_DID, ELS_CMD_PRLO);
 		if (!elsiocb)
 			return 1;
@@ -1987,39 +2458,36 @@ lpfc_els_rsp_acc(struct lpfc_hba * phba, uint32_t flag,
 		pcmd = (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 
 		memcpy(pcmd, ((struct lpfc_dmabuf *) oldiocb->context2)->virt,
-		       sizeof (uint32_t) + sizeof (PRLO));
+		       sizeof(uint32_t) + sizeof(PRLO));
 		*((uint32_t *) (pcmd)) = ELS_CMD_PRLO_ACC;
 		els_pkt_ptr = (ELS_PKT *) pcmd;
 		els_pkt_ptr->un.prlo.acceptRspCode = PRLO_REQ_EXECUTED;
+
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
+			"Issue ACC PRLO:  did:x%x flg:x%x",
+			ndlp->nlp_DID, ndlp->nlp_flag, 0);
 		break;
 	default:
 		return 1;
 	}
-
-	if (newnode)
-		elsiocb->context1 = NULL;
-
 	/* Xmit ELS ACC response tag <ulpIoTag> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0128 Xmit ELS ACC response tag x%x, XRI: x%x, "
-			"DID: x%x, nlp_flag: x%x nlp_state: x%x RPI: x%x\n",
-			phba->brd_no, elsiocb->iotag,
-			elsiocb->iocb.ulpContext, ndlp->nlp_DID,
-			ndlp->nlp_flag, ndlp->nlp_state, ndlp->nlp_rpi);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0128 Xmit ELS ACC response tag x%x, XRI: x%x, "
+			 "DID: x%x, nlp_flag: x%x nlp_state: x%x RPI: x%x\n",
+			 elsiocb->iotag, elsiocb->iocb.ulpContext,
+			 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+			 ndlp->nlp_rpi);
 	if (ndlp->nlp_flag & NLP_LOGO_ACC) {
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag &= ~NLP_LOGO_ACC;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		elsiocb->iocb_cmpl = lpfc_cmpl_els_logo_acc;
 	} else {
-		elsiocb->iocb_cmpl = lpfc_cmpl_els_acc;
+		elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 	}
 
 	phba->fc_stat.elsXmitACC++;
-	spin_lock_irq(phba->host->host_lock);
 	rc = lpfc_sli_issue_iocb(phba, pring, elsiocb, 0);
-	spin_unlock_irq(phba->host->host_lock);
 	if (rc == IOCB_ERROR) {
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
@@ -2028,9 +2496,11 @@ lpfc_els_rsp_acc(struct lpfc_hba * phba, uint32_t flag,
 }
 
 int
-lpfc_els_rsp_reject(struct lpfc_hba * phba, uint32_t rejectError,
-		    struct lpfc_iocbq * oldiocb, struct lpfc_nodelist * ndlp)
+lpfc_els_rsp_reject(struct lpfc_vport *vport, uint32_t rejectError,
+		    struct lpfc_iocbq *oldiocb, struct lpfc_nodelist *ndlp,
+		    LPFC_MBOXQ_t *mbox)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	IOCB_t *icmd;
 	IOCB_t *oldcmd;
 	struct lpfc_iocbq *elsiocb;
@@ -2043,9 +2513,9 @@ lpfc_els_rsp_reject(struct lpfc_hba * phba, uint32_t rejectError,
 	psli = &phba->sli;
 	pring = &psli->ring[LPFC_ELS_RING];	/* ELS ring */
 
-	cmdsize = 2 * sizeof (uint32_t);
-	elsiocb = lpfc_prep_els_iocb(phba, 0, cmdsize, oldiocb->retry,
-					ndlp, ndlp->nlp_DID, ELS_CMD_LS_RJT);
+	cmdsize = 2 * sizeof(uint32_t);
+	elsiocb = lpfc_prep_els_iocb(vport, 0, cmdsize, oldiocb->retry, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_LS_RJT);
 	if (!elsiocb)
 		return 1;
 
@@ -2055,23 +2525,27 @@ lpfc_els_rsp_reject(struct lpfc_hba * phba, uint32_t rejectError,
 	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 
 	*((uint32_t *) (pcmd)) = ELS_CMD_LS_RJT;
-	pcmd += sizeof (uint32_t);
+	pcmd += sizeof(uint32_t);
 	*((uint32_t *) (pcmd)) = rejectError;
 
+	if (mbox)
+		elsiocb->context_un.mbox = mbox;
 	/* Xmit ELS RJT <err> response tag <ulpIoTag> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0129 Xmit ELS RJT x%x response tag x%x "
-			"Data: x%x x%x x%x x%x x%x\n",
-			phba->brd_no,
-			rejectError, elsiocb->iocb.ulpIoTag,
-			elsiocb->iocb.ulpContext, ndlp->nlp_DID,
-			ndlp->nlp_flag, ndlp->nlp_state, ndlp->nlp_rpi);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0129 Xmit ELS RJT x%x response tag x%x "
+			 "xri x%x, did x%x, nlp_flag x%x, nlp_state x%x, "
+			 "rpi x%x\n",
+			 rejectError, elsiocb->iotag,
+			 elsiocb->iocb.ulpContext, ndlp->nlp_DID,
+			 ndlp->nlp_flag, ndlp->nlp_state, ndlp->nlp_rpi);
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
+		"Issue LS_RJT:    did:x%x flg:x%x err:x%x",
+		ndlp->nlp_DID, ndlp->nlp_flag, rejectError);
 
 	phba->fc_stat.elsXmitLSRJT++;
-	elsiocb->iocb_cmpl = lpfc_cmpl_els_acc;
-	spin_lock_irq(phba->host->host_lock);
+	elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 	rc = lpfc_sli_issue_iocb(phba, pring, elsiocb, 0);
-	spin_unlock_irq(phba->host->host_lock);
+
 	if (rc == IOCB_ERROR) {
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
@@ -2080,56 +2554,54 @@ lpfc_els_rsp_reject(struct lpfc_hba * phba, uint32_t rejectError,
 }
 
 int
-lpfc_els_rsp_adisc_acc(struct lpfc_hba * phba,
-		       struct lpfc_iocbq * oldiocb, struct lpfc_nodelist * ndlp)
+lpfc_els_rsp_adisc_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb,
+		       struct lpfc_nodelist *ndlp)
 {
+	struct lpfc_hba  *phba = vport->phba;
+	struct lpfc_sli  *psli = &phba->sli;
+	struct lpfc_sli_ring *pring = &psli->ring[LPFC_ELS_RING];
 	ADISC *ap;
-	IOCB_t *icmd;
-	IOCB_t *oldcmd;
+	IOCB_t *icmd, *oldcmd;
 	struct lpfc_iocbq *elsiocb;
-	struct lpfc_sli_ring *pring;
-	struct lpfc_sli *psli;
 	uint8_t *pcmd;
 	uint16_t cmdsize;
 	int rc;
 
-	psli = &phba->sli;
-	pring = &psli->ring[LPFC_ELS_RING];	/* ELS ring */
-
-	cmdsize = sizeof (uint32_t) + sizeof (ADISC);
-	elsiocb = lpfc_prep_els_iocb(phba, 0, cmdsize, oldiocb->retry,
-					ndlp, ndlp->nlp_DID, ELS_CMD_ACC);
+	cmdsize = sizeof(uint32_t) + sizeof(ADISC);
+	elsiocb = lpfc_prep_els_iocb(vport, 0, cmdsize, oldiocb->retry, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_ACC);
 	if (!elsiocb)
 		return 1;
 
-	/* Xmit ADISC ACC response tag <ulpIoTag> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0130 Xmit ADISC ACC response tag x%x "
-			"Data: x%x x%x x%x x%x x%x\n",
-			phba->brd_no,
-			elsiocb->iocb.ulpIoTag,
-			elsiocb->iocb.ulpContext, ndlp->nlp_DID,
-			ndlp->nlp_flag, ndlp->nlp_state, ndlp->nlp_rpi);
-
 	icmd = &elsiocb->iocb;
 	oldcmd = &oldiocb->iocb;
 	icmd->ulpContext = oldcmd->ulpContext;	/* Xri */
+
+	/* Xmit ADISC ACC response tag <ulpIoTag> */
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0130 Xmit ADISC ACC response iotag x%x xri: "
+			 "x%x, did x%x, nlp_flag x%x, nlp_state x%x rpi x%x\n",
+			 elsiocb->iotag, elsiocb->iocb.ulpContext,
+			 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+			 ndlp->nlp_rpi);
 	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 
 	*((uint32_t *) (pcmd)) = ELS_CMD_ACC;
-	pcmd += sizeof (uint32_t);
+	pcmd += sizeof(uint32_t);
 
 	ap = (ADISC *) (pcmd);
 	ap->hardAL_PA = phba->fc_pref_ALPA;
-	memcpy(&ap->portName, &phba->fc_portname, sizeof (struct lpfc_name));
-	memcpy(&ap->nodeName, &phba->fc_nodename, sizeof (struct lpfc_name));
-	ap->DID = be32_to_cpu(phba->fc_myDID);
+	memcpy(&ap->portName, &vport->fc_portname, sizeof(struct lpfc_name));
+	memcpy(&ap->nodeName, &vport->fc_nodename, sizeof(struct lpfc_name));
+	ap->DID = be32_to_cpu(vport->fc_myDID);
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
+		"Issue ACC ADISC: did:x%x flg:x%x",
+		ndlp->nlp_DID, ndlp->nlp_flag, 0);
 
 	phba->fc_stat.elsXmitACC++;
-	elsiocb->iocb_cmpl = lpfc_cmpl_els_acc;
-	spin_lock_irq(phba->host->host_lock);
+	elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 	rc = lpfc_sli_issue_iocb(phba, pring, elsiocb, 0);
-	spin_unlock_irq(phba->host->host_lock);
 	if (rc == IOCB_ERROR) {
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
@@ -2138,9 +2610,10 @@ lpfc_els_rsp_adisc_acc(struct lpfc_hba * phba,
 }
 
 int
-lpfc_els_rsp_prli_acc(struct lpfc_hba * phba,
-		      struct lpfc_iocbq * oldiocb, struct lpfc_nodelist * ndlp)
+lpfc_els_rsp_prli_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb,
+		      struct lpfc_nodelist *ndlp)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	PRLI *npr;
 	lpfc_vpd_t *vpd;
 	IOCB_t *icmd;
@@ -2155,31 +2628,29 @@ lpfc_els_rsp_prli_acc(struct lpfc_hba * phba,
 	psli = &phba->sli;
 	pring = &psli->ring[LPFC_ELS_RING];	/* ELS ring */
 
-	cmdsize = sizeof (uint32_t) + sizeof (PRLI);
-	elsiocb = lpfc_prep_els_iocb(phba, 0, cmdsize, oldiocb->retry, ndlp,
+	cmdsize = sizeof(uint32_t) + sizeof(PRLI);
+	elsiocb = lpfc_prep_els_iocb(vport, 0, cmdsize, oldiocb->retry, ndlp,
 		ndlp->nlp_DID, (ELS_CMD_ACC | (ELS_CMD_PRLI & ~ELS_RSP_MASK)));
 	if (!elsiocb)
 		return 1;
 
-	/* Xmit PRLI ACC response tag <ulpIoTag> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0131 Xmit PRLI ACC response tag x%x "
-			"Data: x%x x%x x%x x%x x%x\n",
-			phba->brd_no,
-			elsiocb->iocb.ulpIoTag,
-			elsiocb->iocb.ulpContext, ndlp->nlp_DID,
-			ndlp->nlp_flag, ndlp->nlp_state, ndlp->nlp_rpi);
-
 	icmd = &elsiocb->iocb;
 	oldcmd = &oldiocb->iocb;
 	icmd->ulpContext = oldcmd->ulpContext;	/* Xri */
+	/* Xmit PRLI ACC response tag <ulpIoTag> */
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0131 Xmit PRLI ACC response tag x%x xri x%x, "
+			 "did x%x, nlp_flag x%x, nlp_state x%x, rpi x%x\n",
+			 elsiocb->iotag, elsiocb->iocb.ulpContext,
+			 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+			 ndlp->nlp_rpi);
 	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 
 	*((uint32_t *) (pcmd)) = (ELS_CMD_ACC | (ELS_CMD_PRLI & ~ELS_RSP_MASK));
-	pcmd += sizeof (uint32_t);
+	pcmd += sizeof(uint32_t);
 
 	/* For PRLI, remainder of payload is PRLI parameter page */
-	memset(pcmd, 0, sizeof (PRLI));
+	memset(pcmd, 0, sizeof(PRLI));
 
 	npr = (PRLI *) pcmd;
 	vpd = &phba->vpd;
@@ -2201,12 +2672,14 @@ lpfc_els_rsp_prli_acc(struct lpfc_hba * phba,
 	npr->prliType = PRLI_FCP_TYPE;
 	npr->initiatorFunc = 1;
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
+		"Issue ACC PRLI:  did:x%x flg:x%x",
+		ndlp->nlp_DID, ndlp->nlp_flag, 0);
+
 	phba->fc_stat.elsXmitACC++;
-	elsiocb->iocb_cmpl = lpfc_cmpl_els_acc;
+	elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 
-	spin_lock_irq(phba->host->host_lock);
 	rc = lpfc_sli_issue_iocb(phba, pring, elsiocb, 0);
-	spin_unlock_irq(phba->host->host_lock);
 	if (rc == IOCB_ERROR) {
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
@@ -2215,13 +2688,12 @@ lpfc_els_rsp_prli_acc(struct lpfc_hba * phba,
 }
 
 static int
-lpfc_els_rsp_rnid_acc(struct lpfc_hba * phba,
-		      uint8_t format,
-		      struct lpfc_iocbq * oldiocb, struct lpfc_nodelist * ndlp)
+lpfc_els_rsp_rnid_acc(struct lpfc_vport *vport, uint8_t format,
+		      struct lpfc_iocbq *oldiocb, struct lpfc_nodelist *ndlp)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	RNID *rn;
-	IOCB_t *icmd;
-	IOCB_t *oldcmd;
+	IOCB_t *icmd, *oldcmd;
 	struct lpfc_iocbq *elsiocb;
 	struct lpfc_sli_ring *pring;
 	struct lpfc_sli *psli;
@@ -2232,46 +2704,41 @@ lpfc_els_rsp_rnid_acc(struct lpfc_hba * phba,
 	psli = &phba->sli;
 	pring = &psli->ring[LPFC_ELS_RING];
 
-	cmdsize = sizeof (uint32_t) + sizeof (uint32_t)
-		+ (2 * sizeof (struct lpfc_name));
+	cmdsize = sizeof(uint32_t) + sizeof(uint32_t)
+					+ (2 * sizeof(struct lpfc_name));
 	if (format)
-		cmdsize += sizeof (RNID_TOP_DISC);
+		cmdsize += sizeof(RNID_TOP_DISC);
 
-	elsiocb = lpfc_prep_els_iocb(phba, 0, cmdsize, oldiocb->retry,
-					ndlp, ndlp->nlp_DID, ELS_CMD_ACC);
+	elsiocb = lpfc_prep_els_iocb(vport, 0, cmdsize, oldiocb->retry, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_ACC);
 	if (!elsiocb)
 		return 1;
 
-	/* Xmit RNID ACC response tag <ulpIoTag> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0132 Xmit RNID ACC response tag x%x "
-			"Data: x%x\n",
-			phba->brd_no,
-			elsiocb->iocb.ulpIoTag,
-			elsiocb->iocb.ulpContext);
-
 	icmd = &elsiocb->iocb;
 	oldcmd = &oldiocb->iocb;
 	icmd->ulpContext = oldcmd->ulpContext;	/* Xri */
+	/* Xmit RNID ACC response tag <ulpIoTag> */
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0132 Xmit RNID ACC response tag x%x xri x%x\n",
+			 elsiocb->iotag, elsiocb->iocb.ulpContext);
 	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
-
 	*((uint32_t *) (pcmd)) = ELS_CMD_ACC;
-	pcmd += sizeof (uint32_t);
+	pcmd += sizeof(uint32_t);
 
-	memset(pcmd, 0, sizeof (RNID));
+	memset(pcmd, 0, sizeof(RNID));
 	rn = (RNID *) (pcmd);
 	rn->Format = format;
-	rn->CommonLen = (2 * sizeof (struct lpfc_name));
-	memcpy(&rn->portName, &phba->fc_portname, sizeof (struct lpfc_name));
-	memcpy(&rn->nodeName, &phba->fc_nodename, sizeof (struct lpfc_name));
+	rn->CommonLen = (2 * sizeof(struct lpfc_name));
+	memcpy(&rn->portName, &vport->fc_portname, sizeof(struct lpfc_name));
+	memcpy(&rn->nodeName, &vport->fc_nodename, sizeof(struct lpfc_name));
 	switch (format) {
 	case 0:
 		rn->SpecificLen = 0;
 		break;
 	case RNID_TOPOLOGY_DISC:
-		rn->SpecificLen = sizeof (RNID_TOP_DISC);
+		rn->SpecificLen = sizeof(RNID_TOP_DISC);
 		memcpy(&rn->un.topologyDisc.portName,
-		       &phba->fc_portname, sizeof (struct lpfc_name));
+		       &vport->fc_portname, sizeof(struct lpfc_name));
 		rn->un.topologyDisc.unitType = RNID_HBA;
 		rn->un.topologyDisc.physPort = 0;
 		rn->un.topologyDisc.attachedNodes = 0;
@@ -2282,14 +2749,17 @@ lpfc_els_rsp_rnid_acc(struct lpfc_hba * phba,
 		break;
 	}
 
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
+		"Issue ACC RNID:  did:x%x flg:x%x",
+		ndlp->nlp_DID, ndlp->nlp_flag, 0);
+
 	phba->fc_stat.elsXmitACC++;
-	elsiocb->iocb_cmpl = lpfc_cmpl_els_acc;
+	elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
+	lpfc_nlp_put(ndlp);
 	elsiocb->context1 = NULL;  /* Don't need ndlp for cmpl,
 				    * it could be freed */
 
-	spin_lock_irq(phba->host->host_lock);
 	rc = lpfc_sli_issue_iocb(phba, pring, elsiocb, 0);
-	spin_unlock_irq(phba->host->host_lock);
 	if (rc == IOCB_ERROR) {
 		lpfc_els_free_iocb(phba, elsiocb);
 		return 1;
@@ -2298,373 +2768,398 @@ lpfc_els_rsp_rnid_acc(struct lpfc_hba * phba,
 }
 
 int
-lpfc_els_disc_adisc(struct lpfc_hba * phba)
+lpfc_els_disc_adisc(struct lpfc_vport *vport)
 {
-	int sentadisc;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 	struct lpfc_nodelist *ndlp, *next_ndlp;
-
-	sentadisc = 0;
-	/* go thru NPR list and issue any remaining ELS ADISCs */
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_npr_list,
-			nlp_listp) {
-		if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
-			if (ndlp->nlp_flag & NLP_NPR_ADISC) {
-				ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-				ndlp->nlp_prev_state = ndlp->nlp_state;
-				ndlp->nlp_state = NLP_STE_ADISC_ISSUE;
-				lpfc_nlp_list(phba, ndlp,
-					NLP_ADISC_LIST);
-				lpfc_issue_els_adisc(phba, ndlp, 0);
-				sentadisc++;
-				phba->num_disc_nodes++;
-				if (phba->num_disc_nodes >=
-				    phba->cfg_discovery_threads) {
-					spin_lock_irq(phba->host->host_lock);
-					phba->fc_flag |= FC_NLP_MORE;
-					spin_unlock_irq(phba->host->host_lock);
-					break;
-				}
+	int sentadisc = 0;
+
+	/* go thru NPR nodes and issue any remaining ELS ADISCs */
+	list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) {
+		if (ndlp->nlp_state == NLP_STE_NPR_NODE &&
+		    (ndlp->nlp_flag & NLP_NPR_2B_DISC) != 0 &&
+		    (ndlp->nlp_flag & NLP_NPR_ADISC) != 0) {
+			spin_lock_irq(shost->host_lock);
+			ndlp->nlp_flag &= ~NLP_NPR_ADISC;
+			spin_unlock_irq(shost->host_lock);
+			ndlp->nlp_prev_state = ndlp->nlp_state;
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_ADISC_ISSUE);
+			lpfc_issue_els_adisc(vport, ndlp, 0);
+			sentadisc++;
+			vport->num_disc_nodes++;
+			if (vport->num_disc_nodes >=
+			    vport->cfg_discovery_threads) {
+				spin_lock_irq(shost->host_lock);
+				vport->fc_flag |= FC_NLP_MORE;
+				spin_unlock_irq(shost->host_lock);
+				break;
 			}
 		}
 	}
 	if (sentadisc == 0) {
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag &= ~FC_NLP_MORE;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
+		vport->fc_flag &= ~FC_NLP_MORE;
+		spin_unlock_irq(shost->host_lock);
 	}
 	return sentadisc;
 }
 
 int
-lpfc_els_disc_plogi(struct lpfc_hba * phba)
+lpfc_els_disc_plogi(struct lpfc_vport *vport)
 {
-	int sentplogi;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 	struct lpfc_nodelist *ndlp, *next_ndlp;
-
-	sentplogi = 0;
-	/* go thru NPR list and issue any remaining ELS PLOGIs */
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_npr_list,
-				nlp_listp) {
-		if ((ndlp->nlp_flag & NLP_NPR_2B_DISC) &&
-		   (!(ndlp->nlp_flag & NLP_DELAY_TMO))) {
-			if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) {
-				ndlp->nlp_prev_state = ndlp->nlp_state;
-				ndlp->nlp_state = NLP_STE_PLOGI_ISSUE;
-				lpfc_nlp_list(phba, ndlp, NLP_PLOGI_LIST);
-				lpfc_issue_els_plogi(phba, ndlp->nlp_DID, 0);
-				sentplogi++;
-				phba->num_disc_nodes++;
-				if (phba->num_disc_nodes >=
-				    phba->cfg_discovery_threads) {
-					spin_lock_irq(phba->host->host_lock);
-					phba->fc_flag |= FC_NLP_MORE;
-					spin_unlock_irq(phba->host->host_lock);
-					break;
-				}
+	int sentplogi = 0;
+
+	/* go thru NPR nodes and issue any remaining ELS PLOGIs */
+	list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) {
+		if (ndlp->nlp_state == NLP_STE_NPR_NODE &&
+		    (ndlp->nlp_flag & NLP_NPR_2B_DISC) != 0 &&
+		    (ndlp->nlp_flag & NLP_DELAY_TMO) == 0 &&
+		    (ndlp->nlp_flag & NLP_NPR_ADISC) == 0) {
+			ndlp->nlp_prev_state = ndlp->nlp_state;
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE);
+			lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
+			sentplogi++;
+			vport->num_disc_nodes++;
+			if (vport->num_disc_nodes >=
+			    vport->cfg_discovery_threads) {
+				spin_lock_irq(shost->host_lock);
+				vport->fc_flag |= FC_NLP_MORE;
+				spin_unlock_irq(shost->host_lock);
+				break;
 			}
 		}
 	}
 	if (sentplogi == 0) {
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag &= ~FC_NLP_MORE;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
+		vport->fc_flag &= ~FC_NLP_MORE;
+		spin_unlock_irq(shost->host_lock);
 	}
 	return sentplogi;
 }
 
-int
-lpfc_els_flush_rscn(struct lpfc_hba * phba)
+void
+lpfc_els_flush_rscn(struct lpfc_vport *vport)
 {
-	struct lpfc_dmabuf *mp;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
 	int i;
 
-	for (i = 0; i < phba->fc_rscn_id_cnt; i++) {
-		mp = phba->fc_rscn_id_list[i];
-		lpfc_mbuf_free(phba, mp->virt, mp->phys);
-		kfree(mp);
-		phba->fc_rscn_id_list[i] = NULL;
-	}
-	phba->fc_rscn_id_cnt = 0;
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag &= ~(FC_RSCN_MODE | FC_RSCN_DISCOVERY);
-	spin_unlock_irq(phba->host->host_lock);
-	lpfc_can_disctmo(phba);
-	return 0;
+	for (i = 0; i < vport->fc_rscn_id_cnt; i++) {
+		lpfc_in_buf_free(phba, vport->fc_rscn_id_list[i]);
+		vport->fc_rscn_id_list[i] = NULL;
+	}
+	spin_lock_irq(shost->host_lock);
+	vport->fc_rscn_id_cnt = 0;
+	vport->fc_flag &= ~(FC_RSCN_MODE | FC_RSCN_DISCOVERY);
+	spin_unlock_irq(shost->host_lock);
+	lpfc_can_disctmo(vport);
 }
 
 int
-lpfc_rscn_payload_check(struct lpfc_hba * phba, uint32_t did)
+lpfc_rscn_payload_check(struct lpfc_vport *vport, uint32_t did)
 {
 	D_ID ns_did;
 	D_ID rscn_did;
-	struct lpfc_dmabuf *mp;
 	uint32_t *lp;
-	uint32_t payload_len, cmd, i, match;
+	uint32_t payload_len, i;
 
 	ns_did.un.word = did;
-	match = 0;
 
 	/* Never match fabric nodes for RSCNs */
 	if ((did & Fabric_DID_MASK) == Fabric_DID_MASK)
-		return(0);
+		return 0;
 
 	/* If we are doing a FULL RSCN rediscovery, match everything */
-	if (phba->fc_flag & FC_RSCN_DISCOVERY) {
+	if (vport->fc_flag & FC_RSCN_DISCOVERY)
 		return did;
-	}
 
-	for (i = 0; i < phba->fc_rscn_id_cnt; i++) {
-		mp = phba->fc_rscn_id_list[i];
-		lp = (uint32_t *) mp->virt;
-		cmd = *lp++;
-		payload_len = be32_to_cpu(cmd) & 0xffff; /* payload length */
-		payload_len -= sizeof (uint32_t);	/* take off word 0 */
+	for (i = 0; i < vport->fc_rscn_id_cnt; i++) {
+		lp = vport->fc_rscn_id_list[i]->virt;
+		payload_len = be32_to_cpu(*lp++ & ~ELS_CMD_MASK);
+		payload_len -= sizeof(uint32_t);	/* take off word 0 */
 		while (payload_len) {
-			rscn_did.un.word = *lp++;
-			rscn_did.un.word = be32_to_cpu(rscn_did.un.word);
-			payload_len -= sizeof (uint32_t);
+			rscn_did.un.word = be32_to_cpu(*lp++);
+			payload_len -= sizeof(uint32_t);
 			switch (rscn_did.un.b.resv) {
 			case 0:	/* Single N_Port ID effected */
-				if (ns_did.un.word == rscn_did.un.word) {
-					match = did;
-				}
+				if (ns_did.un.word == rscn_did.un.word)
+					return did;
 				break;
 			case 1:	/* Whole N_Port Area effected */
 				if ((ns_did.un.b.domain == rscn_did.un.b.domain)
 				    && (ns_did.un.b.area == rscn_did.un.b.area))
-					{
-						match = did;
-					}
+					return did;
 				break;
 			case 2:	/* Whole N_Port Domain effected */
 				if (ns_did.un.b.domain == rscn_did.un.b.domain)
-					{
-						match = did;
-					}
-				break;
-			case 3:	/* Whole Fabric effected */
-				match = did;
+					return did;
 				break;
 			default:
-				/* Unknown Identifier in RSCN list */
-				lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
-						"%d:0217 Unknown Identifier in "
-						"RSCN payload Data: x%x\n",
-						phba->brd_no, rscn_did.un.word);
-				break;
-			}
-			if (match) {
-				break;
+				/* Unknown Identifier in RSCN node */
+				lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+						 "0217 Unknown Identifier in "
+						 "RSCN payload Data: x%x\n",
+						 rscn_did.un.word);
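+				/* Fall through and treat an unknown address
+				 * format as a match for the whole fabric.
+				 */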
+			case 3:	/* Whole Fabric effected */
+				return did;
 			}
 		}
 	}
-	return match;
+	return 0;
 }
 
 static int
-lpfc_rscn_recovery_check(struct lpfc_hba * phba)
+lpfc_rscn_recovery_check(struct lpfc_vport *vport)
 {
-	struct lpfc_nodelist *ndlp = NULL, *next_ndlp;
-	struct list_head *listp;
-	struct list_head *node_list[7];
-	int i;
+	struct lpfc_nodelist *ndlp = NULL;
 
 	/* Look at all nodes affected by pending RSCNs and move
-	 * them to NPR list.
+	 * them to NPR state.
 	 */
-	node_list[0] = &phba->fc_npr_list;  /* MUST do this list first */
-	node_list[1] = &phba->fc_nlpmap_list;
-	node_list[2] = &phba->fc_nlpunmap_list;
-	node_list[3] = &phba->fc_prli_list;
-	node_list[4] = &phba->fc_reglogin_list;
-	node_list[5] = &phba->fc_adisc_list;
-	node_list[6] = &phba->fc_plogi_list;
-	for (i = 0; i < 7; i++) {
-		listp = node_list[i];
-		if (list_empty(listp))
-			continue;
 
-		list_for_each_entry_safe(ndlp, next_ndlp, listp, nlp_listp) {
-			if (!(lpfc_rscn_payload_check(phba, ndlp->nlp_DID)))
-				continue;
+	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+		if (ndlp->nlp_state == NLP_STE_UNUSED_NODE ||
+		    lpfc_rscn_payload_check(vport, ndlp->nlp_DID) == 0)
+			continue;
 
-			lpfc_disc_state_machine(phba, ndlp, NULL,
-					NLP_EVT_DEVICE_RECOVERY);
+		lpfc_disc_state_machine(vport, ndlp, NULL,
+						NLP_EVT_DEVICE_RECOVERY);
 
-			/* Make sure NLP_DELAY_TMO is NOT running
-			 * after a device recovery event.
-			 */
-			if (ndlp->nlp_flag & NLP_DELAY_TMO)
-				lpfc_cancel_retry_delay_tmo(phba, ndlp);
-		}
+		/*
+		 * Make sure NLP_DELAY_TMO is NOT running after a device
+		 * recovery event.
+		 */
+		if (ndlp->nlp_flag & NLP_DELAY_TMO)
+			lpfc_cancel_retry_delay_tmo(vport, ndlp);
 	}
+
 	return 0;
 }
 
 static int
-lpfc_els_rcv_rscn(struct lpfc_hba * phba,
-		  struct lpfc_iocbq * cmdiocb,
-		  struct lpfc_nodelist * ndlp, uint8_t newnode)
+lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		  struct lpfc_nodelist *ndlp)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
 	struct lpfc_dmabuf *pcmd;
-	uint32_t *lp;
+	uint32_t *lp, *datap;
 	IOCB_t *icmd;
-	uint32_t payload_len, cmd;
+	uint32_t payload_len, length, nportid, *cmd;
+	int rscn_cnt = vport->fc_rscn_id_cnt;
+	int rscn_id = 0, hba_id = 0;
 	int i;
 
 	icmd = &cmdiocb->iocb;
 	pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
 	lp = (uint32_t *) pcmd->virt;
 
-	cmd = *lp++;
-	payload_len = be32_to_cpu(cmd) & 0xffff;	/* payload length */
-	payload_len -= sizeof (uint32_t);	/* take off word 0 */
-	cmd &= ELS_CMD_MASK;
-
+	payload_len = be32_to_cpu(*lp++ & ~ELS_CMD_MASK);
+	payload_len -= sizeof(uint32_t);	/* take off word 0 */
 	/* RSCN received */
-	lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_DISCOVERY,
-			"%d:0214 RSCN received Data: x%x x%x x%x x%x\n",
-			phba->brd_no,
-			phba->fc_flag, payload_len, *lp, phba->fc_rscn_id_cnt);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0214 RSCN received Data: x%x x%x x%x x%x\n",
+			 vport->fc_flag, payload_len, *lp, rscn_cnt);
 	for (i = 0; i < payload_len/sizeof(uint32_t); i++)
-		fc_host_post_event(phba->host, fc_get_event_number(),
+		fc_host_post_event(shost, fc_get_event_number(),
 			FCH_EVT_RSCN, lp[i]);
 
 	/* If we are about to begin discovery, just ACC the RSCN.
 	 * Discovery processing will satisfy it.
 	 */
-	if (phba->hba_state <= LPFC_NS_QRY) {
-		lpfc_els_rsp_acc(phba, ELS_CMD_ACC, cmdiocb, ndlp, NULL,
-								newnode);
+	if (vport->port_state <= LPFC_NS_QRY) {
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV RSCN ignore: did:x%x/ste:x%x flg:x%x",
+			ndlp->nlp_DID, vport->port_state, ndlp->nlp_flag);
+
+		lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
 		return 0;
 	}
 
+	/* If this RSCN just contains NPortIDs for other vports on this HBA,
+	 * just ACC and ignore it.
+	 */
+	if ((phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) &&
+		!(vport->cfg_peer_port_login)) {
+		i = payload_len;
+		datap = lp;
+		while (i > 0) {
+			nportid = *datap++;
+			nportid = ((be32_to_cpu(nportid)) & Mask_DID);
+			i -= sizeof(uint32_t);
+			rscn_id++;
+			if (lpfc_find_vport_by_did(phba, nportid))
+				hba_id++;
+		}
+		if (rscn_id == hba_id) {
+			/* ALL NPortIDs in RSCN are on HBA */
+			lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+					 "0214 Ignore RSCN "
+					 "Data: x%x x%x x%x x%x\n",
+					 vport->fc_flag, payload_len,
+					 *lp, rscn_cnt);
+			lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+				"RCV RSCN vport:  did:x%x/ste:x%x flg:x%x",
+				ndlp->nlp_DID, vport->port_state,
+				ndlp->nlp_flag);
+
+			lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb,
+				ndlp, NULL);
+			return 0;
+		}
+	}
+
 	/* If we are already processing an RSCN, save the received
 	 * RSCN payload buffer, cmdiocb->context2 to process later.
 	 */
-	if (phba->fc_flag & (FC_RSCN_MODE | FC_NDISC_ACTIVE)) {
-		if ((phba->fc_rscn_id_cnt < FC_MAX_HOLD_RSCN) &&
-		    !(phba->fc_flag & FC_RSCN_DISCOVERY)) {
-			spin_lock_irq(phba->host->host_lock);
-			phba->fc_flag |= FC_RSCN_MODE;
-			spin_unlock_irq(phba->host->host_lock);
-			phba->fc_rscn_id_list[phba->fc_rscn_id_cnt++] = pcmd;
-
-			/* If we zero, cmdiocb->context2, the calling
-			 * routine will not try to free it.
-			 */
-			cmdiocb->context2 = NULL;
+	if (vport->fc_flag & (FC_RSCN_MODE | FC_NDISC_ACTIVE)) {
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV RSCN defer:  did:x%x/ste:x%x flg:x%x",
+			ndlp->nlp_DID, vport->port_state, ndlp->nlp_flag);
+
+		vport->fc_flag |= FC_RSCN_DEFERRED;
+		if ((rscn_cnt < FC_MAX_HOLD_RSCN) &&
+		    !(vport->fc_flag & FC_RSCN_DISCOVERY)) {
+			spin_lock_irq(shost->host_lock);
+			vport->fc_flag |= FC_RSCN_MODE;
+			spin_unlock_irq(shost->host_lock);
+			if (rscn_cnt) {
+				cmd = vport->fc_rscn_id_list[rscn_cnt-1]->virt;
+				length = be32_to_cpu(*cmd & ~ELS_CMD_MASK);
+			}
+			if ((rscn_cnt) &&
+			    (payload_len + length <= LPFC_BPL_SIZE)) {
+				*cmd &= ELS_CMD_MASK;
+				*cmd |= be32_to_cpu(payload_len + length);
+				memcpy(((uint8_t *)cmd) + length, lp,
+				       payload_len);
+			} else {
+				vport->fc_rscn_id_list[rscn_cnt] = pcmd;
+				vport->fc_rscn_id_cnt++;
+				/* If we zero, cmdiocb->context2, the calling
+				 * routine will not try to free it.
+				 */
+				cmdiocb->context2 = NULL;
+			}
 
 			/* Deferred RSCN */
-			lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-					"%d:0235 Deferred RSCN "
-					"Data: x%x x%x x%x\n",
-					phba->brd_no, phba->fc_rscn_id_cnt,
-					phba->fc_flag, phba->hba_state);
+			lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+					 "0235 Deferred RSCN "
+					 "Data: x%x x%x x%x\n",
+					 vport->fc_rscn_id_cnt, vport->fc_flag,
+					 vport->port_state);
 		} else {
-			spin_lock_irq(phba->host->host_lock);
-			phba->fc_flag |= FC_RSCN_DISCOVERY;
-			spin_unlock_irq(phba->host->host_lock);
+			spin_lock_irq(shost->host_lock);
+			vport->fc_flag |= FC_RSCN_DISCOVERY;
+			spin_unlock_irq(shost->host_lock);
 			/* ReDiscovery RSCN */
-			lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-					"%d:0234 ReDiscovery RSCN "
-					"Data: x%x x%x x%x\n",
-					phba->brd_no, phba->fc_rscn_id_cnt,
-					phba->fc_flag, phba->hba_state);
+			lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+					 "0234 ReDiscovery RSCN "
+					 "Data: x%x x%x x%x\n",
+					 vport->fc_rscn_id_cnt, vport->fc_flag,
+					 vport->port_state);
 		}
 		/* Send back ACC */
-		lpfc_els_rsp_acc(phba, ELS_CMD_ACC, cmdiocb, ndlp, NULL,
-								newnode);
+		lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
 
 		/* send RECOVERY event for ALL nodes that match RSCN payload */
-		lpfc_rscn_recovery_check(phba);
+		lpfc_rscn_recovery_check(vport);
+		vport->fc_flag &= ~FC_RSCN_DEFERRED;
 		return 0;
 	}
 
-	phba->fc_flag |= FC_RSCN_MODE;
-	phba->fc_rscn_id_list[phba->fc_rscn_id_cnt++] = pcmd;
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+		"RCV RSCN:        did:x%x/ste:x%x flg:x%x",
+		ndlp->nlp_DID, vport->port_state, ndlp->nlp_flag);
+
+	spin_lock_irq(shost->host_lock);
+	vport->fc_flag |= FC_RSCN_MODE;
+	spin_unlock_irq(shost->host_lock);
+	vport->fc_rscn_id_list[vport->fc_rscn_id_cnt++] = pcmd;
 	/*
 	 * If we zero, cmdiocb->context2, the calling routine will
 	 * not try to free it.
 	 */
 	cmdiocb->context2 = NULL;
 
-	lpfc_set_disctmo(phba);
+	lpfc_set_disctmo(vport);
 
 	/* Send back ACC */
-	lpfc_els_rsp_acc(phba, ELS_CMD_ACC, cmdiocb, ndlp, NULL, newnode);
+	lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
 
 	/* send RECOVERY event for ALL nodes that match RSCN payload */
-	lpfc_rscn_recovery_check(phba);
+	lpfc_rscn_recovery_check(vport);
 
-	return lpfc_els_handle_rscn(phba);
+	return lpfc_els_handle_rscn(vport);
 }
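
/*
 * Standalone userspace sketch (illustration only, not driver code) of what
 * the deferred-RSCN handling above does with word 0 of the payload: the ELS
 * command code lives in the top byte and the payload length in the low 16
 * bits, and a newly received RSCN can be appended to an already-deferred
 * buffer as long as the combined length still fits.  The buffer size and the
 * 0xffff length mask here are illustrative stand-ins, not the driver's
 * LPFC_BPL_SIZE/ELS_CMD_MASK definitions.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>		/* ntohl()/htonl() for the big-endian words */

#define RSCN_BUF_SIZE 1024	/* stand-in for LPFC_BPL_SIZE */

/* Word 0 of an RSCN: command code in the top byte, payload length below */
static uint32_t rscn_payload_len(const uint32_t *buf)
{
	return ntohl(buf[0]) & 0xffff;
}

/* Append the affected-address pages of "extra" to the deferred buffer "held" */
static int rscn_coalesce(uint32_t *held, const uint32_t *extra)
{
	uint32_t held_len = rscn_payload_len(held);
	uint32_t add = rscn_payload_len(extra) - sizeof(uint32_t);

	if (held_len + add > RSCN_BUF_SIZE)
		return -1;	/* would not fit: keep it as a separate entry */

	memcpy((uint8_t *)held + held_len, extra + 1, add);
	held[0] = htonl((ntohl(held[0]) & 0xffff0000u) | (held_len + add));
	return 0;
}

int main(void)
{
	/* Two fake RSCNs, each carrying one 4-byte affected-address page */
	uint32_t held[RSCN_BUF_SIZE / 4] = { htonl(0x61000008), htonl(0x00010200) };
	uint32_t extra[2] = { htonl(0x61000008), htonl(0x00010300) };

	if (rscn_coalesce(held, extra) == 0)
		printf("coalesced RSCN payload length: %u bytes\n",
		       (unsigned int)rscn_payload_len(held));
	return 0;
}
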
 
 int
-lpfc_els_handle_rscn(struct lpfc_hba * phba)
+lpfc_els_handle_rscn(struct lpfc_vport *vport)
 {
 	struct lpfc_nodelist *ndlp;
+	struct lpfc_hba *phba = vport->phba;
+
+	/* Ignore RSCN if the port is being torn down. */
+	if (vport->load_flag & FC_UNLOADING) {
+		lpfc_els_flush_rscn(vport);
+		return 0;
+	}
 
 	/* Start timer for RSCN processing */
-	lpfc_set_disctmo(phba);
+	lpfc_set_disctmo(vport);
 
 	/* RSCN processed */
-	lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_DISCOVERY,
-			"%d:0215 RSCN processed Data: x%x x%x x%x x%x\n",
-			phba->brd_no,
-			phba->fc_flag, 0, phba->fc_rscn_id_cnt,
-			phba->hba_state);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0215 RSCN processed Data: x%x x%x x%x x%x\n",
+			 vport->fc_flag, 0, vport->fc_rscn_id_cnt,
+			 vport->port_state);
 
 	/* To process RSCN, first compare RSCN data with NameServer */
-	phba->fc_ns_retry = 0;
-	ndlp = lpfc_findnode_did(phba, NLP_SEARCH_UNMAPPED, NameServer_DID);
-	if (ndlp) {
+	vport->fc_ns_retry = 0;
+	ndlp = lpfc_findnode_did(vport, NameServer_DID);
+	if (ndlp && ndlp->nlp_state == NLP_STE_UNMAPPED_NODE) {
 		/* Good ndlp, issue CT Request to NameServer */
-		if (lpfc_ns_cmd(phba, ndlp, SLI_CTNS_GID_FT) == 0) {
+		if (lpfc_ns_cmd(vport, SLI_CTNS_GID_FT, 0, 0) == 0)
 			/* Wait for NameServer query cmpl before we can
 			   continue */
 			return 1;
-		}
 	} else {
 		/* If login to NameServer does not exist, issue one */
 		/* Good status, issue PLOGI to NameServer */
-		ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, NameServer_DID);
-		if (ndlp) {
+		ndlp = lpfc_findnode_did(vport, NameServer_DID);
+		if (ndlp)
 			/* Wait for NameServer login cmpl before we can
 			   continue */
 			return 1;
-		}
+
 		ndlp = mempool_alloc(phba->nlp_mem_pool, GFP_KERNEL);
 		if (!ndlp) {
-			lpfc_els_flush_rscn(phba);
+			lpfc_els_flush_rscn(vport);
 			return 0;
 		} else {
-			lpfc_nlp_init(phba, ndlp, NameServer_DID);
+			lpfc_nlp_init(vport, ndlp, NameServer_DID);
 			ndlp->nlp_type |= NLP_FABRIC;
 			ndlp->nlp_prev_state = ndlp->nlp_state;
-			ndlp->nlp_state = NLP_STE_PLOGI_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_PLOGI_LIST);
-			lpfc_issue_els_plogi(phba, NameServer_DID, 0);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE);
+			lpfc_issue_els_plogi(vport, NameServer_DID, 0);
 			/* Wait for NameServer login cmpl before we can
 			   continue */
 			return 1;
 		}
 	}
 
-	lpfc_els_flush_rscn(phba);
+	lpfc_els_flush_rscn(vport);
 	return 0;
 }
 
 static int
-lpfc_els_rcv_flogi(struct lpfc_hba * phba,
-		   struct lpfc_iocbq * cmdiocb,
-		   struct lpfc_nodelist * ndlp, uint8_t newnode)
+lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		   struct lpfc_nodelist *ndlp)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
 	struct lpfc_dmabuf *pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
 	uint32_t *lp = (uint32_t *) pcmd->virt;
 	IOCB_t *icmd = &cmdiocb->iocb;
@@ -2679,7 +3174,7 @@ lpfc_els_rcv_flogi(struct lpfc_hba * phba,
 
 	/* FLOGI received */
 
-	lpfc_set_disctmo(phba);
+	lpfc_set_disctmo(vport);
 
 	if (phba->fc_topology == TOPOLOGY_LOOP) {
 		/* We should never receive a FLOGI in loop mode, ignore it */
@@ -2687,67 +3182,70 @@ lpfc_els_rcv_flogi(struct lpfc_hba * phba,
 
 		/* An FLOGI ELS command <elsCmd> was received from DID <did> in
 		   Loop Mode */
-		lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
-				"%d:0113 An FLOGI ELS command x%x was received "
-				"from DID x%x in Loop Mode\n",
-				phba->brd_no, cmd, did);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0113 An FLOGI ELS command x%x was "
+				 "received from DID x%x in Loop Mode\n",
+				 cmd, did);
 		return 1;
 	}
 
 	did = Fabric_DID;
 
-	if ((lpfc_check_sparm(phba, ndlp, sp, CLASS3))) {
+	if ((lpfc_check_sparm(vport, ndlp, sp, CLASS3))) {
 		/* For a FLOGI we accept, then if our portname is greater
 		 * then the remote portname we initiate Nport login.
 		 */
 
-		rc = memcmp(&phba->fc_portname, &sp->portName,
-			    sizeof (struct lpfc_name));
+		rc = memcmp(&vport->fc_portname, &sp->portName,
+			    sizeof(struct lpfc_name));
 
 		if (!rc) {
-			if ((mbox = mempool_alloc(phba->mbox_mem_pool,
-						  GFP_KERNEL)) == 0) {
+			mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+			if (!mbox)
 				return 1;
-			}
+
 			lpfc_linkdown(phba);
 			lpfc_init_link(phba, mbox,
 				       phba->cfg_topology,
 				       phba->cfg_link_speed);
 			mbox->mb.un.varInitLnk.lipsr_AL_PA = 0;
 			mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-			rc = lpfc_sli_issue_mbox
-				(phba, mbox, (MBX_NOWAIT | MBX_STOP_IOCB));
+			mbox->vport = vport;
+			rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
 			lpfc_set_loopback_flag(phba);
 			if (rc == MBX_NOT_FINISHED) {
-				mempool_free( mbox, phba->mbox_mem_pool);
+				mempool_free(mbox, phba->mbox_mem_pool);
 			}
 			return 1;
 		} else if (rc > 0) {	/* greater than */
-			spin_lock_irq(phba->host->host_lock);
-			phba->fc_flag |= FC_PT2PT_PLOGI;
-			spin_unlock_irq(phba->host->host_lock);
+			spin_lock_irq(shost->host_lock);
+			vport->fc_flag |= FC_PT2PT_PLOGI;
+			spin_unlock_irq(shost->host_lock);
 		}
-		phba->fc_flag |= FC_PT2PT;
-		phba->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
+		spin_lock_irq(shost->host_lock);
+		vport->fc_flag |= FC_PT2PT;
+		vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
+		spin_unlock_irq(shost->host_lock);
 	} else {
 		/* Reject this request because invalid parameters */
 		stat.un.b.lsRjtRsvd0 = 0;
 		stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 		stat.un.b.lsRjtRsnCodeExp = LSEXP_SPARM_OPTIONS;
 		stat.un.b.vendorUnique = 0;
-		lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+		lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
+			NULL);
 		return 1;
 	}
 
 	/* Send back ACC */
-	lpfc_els_rsp_acc(phba, ELS_CMD_PLOGI, cmdiocb, ndlp, NULL, newnode);
+	lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, NULL);
 
 	return 0;
 }
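
/*
 * Sketch of the point-to-point arbitration in lpfc_els_rcv_flogi() above:
 * both ports compare port names byte-for-byte with memcmp(); equal names
 * force a link re-init, and the numerically greater name initiates PLOGI.
 * The two WWPNs below are made-up example values, not real hardware names.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
	uint8_t my_wwpn[8]     = { 0x10, 0x00, 0x00, 0x00, 0xc9, 0x3a, 0x00, 0x01 };
	uint8_t remote_wwpn[8] = { 0x10, 0x00, 0x00, 0x00, 0xc9, 0x3a, 0x00, 0x02 };
	int rc = memcmp(my_wwpn, remote_wwpn, sizeof(my_wwpn));

	if (rc == 0)
		printf("names equal: re-initialize the link\n");
	else if (rc > 0)
		printf("local name greater: we send PLOGI (FC_PT2PT_PLOGI)\n");
	else
		printf("remote name greater: wait for its PLOGI\n");
	return 0;
}
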
 
 static int
-lpfc_els_rcv_rnid(struct lpfc_hba * phba,
-		  struct lpfc_iocbq * cmdiocb, struct lpfc_nodelist * ndlp)
+lpfc_els_rcv_rnid(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		  struct lpfc_nodelist *ndlp)
 {
 	struct lpfc_dmabuf *pcmd;
 	uint32_t *lp;
@@ -2770,7 +3268,7 @@ lpfc_els_rcv_rnid(struct lpfc_hba * phba,
 	case 0:
 	case RNID_TOPOLOGY_DISC:
 		/* Send back ACC */
-		lpfc_els_rsp_rnid_acc(phba, rn->Format, cmdiocb, ndlp);
+		lpfc_els_rsp_rnid_acc(vport, rn->Format, cmdiocb, ndlp);
 		break;
 	default:
 		/* Reject this request because format not supported */
@@ -2778,14 +3276,15 @@ lpfc_els_rcv_rnid(struct lpfc_hba * phba,
 		stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 		stat.un.b.lsRjtRsnCodeExp = LSEXP_CANT_GIVE_DATA;
 		stat.un.b.vendorUnique = 0;
-		lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+		lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
+			NULL);
 	}
 	return 0;
 }
 
 static int
-lpfc_els_rcv_lirr(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		 struct lpfc_nodelist * ndlp)
+lpfc_els_rcv_lirr(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		  struct lpfc_nodelist *ndlp)
 {
 	struct ls_rjt stat;
 
@@ -2794,15 +3293,15 @@ lpfc_els_rcv_lirr(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 	stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 	stat.un.b.lsRjtRsnCodeExp = LSEXP_CANT_GIVE_DATA;
 	stat.un.b.vendorUnique = 0;
-	lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+	lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp, NULL);
 	return 0;
 }
 
 static void
-lpfc_els_rsp_rps_acc(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_els_rsp_rps_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
-	struct lpfc_sli *psli;
-	struct lpfc_sli_ring *pring;
+	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_sli_ring *pring = &psli->ring[LPFC_ELS_RING];
 	MAILBOX_t *mb;
 	IOCB_t *icmd;
 	RPS_RSP *rps_rsp;
@@ -2812,8 +3311,6 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 	uint16_t xri, status;
 	uint32_t cmdsize;
 
-	psli = &phba->sli;
-	pring = &psli->ring[LPFC_ELS_RING];
 	mb = &pmb->mb;
 
 	ndlp = (struct lpfc_nodelist *) pmb->context2;
@@ -2822,14 +3319,16 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 	pmb->context2 = NULL;
 
 	if (mb->mbxStatus) {
-		mempool_free( pmb, phba->mbox_mem_pool);
+		mempool_free(pmb, phba->mbox_mem_pool);
 		return;
 	}
 
 	cmdsize = sizeof(RPS_RSP) + sizeof(uint32_t);
-	mempool_free( pmb, phba->mbox_mem_pool);
-	elsiocb = lpfc_prep_els_iocb(phba, 0, cmdsize, lpfc_max_els_tries, ndlp,
-						ndlp->nlp_DID, ELS_CMD_ACC);
+	mempool_free(pmb, phba->mbox_mem_pool);
+	elsiocb = lpfc_prep_els_iocb(phba->pport, 0, cmdsize,
+				     lpfc_max_els_tries, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_ACC);
+	lpfc_nlp_put(ndlp);
 	if (!elsiocb)
 		return;
 
@@ -2838,14 +3337,14 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 
 	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 	*((uint32_t *) (pcmd)) = ELS_CMD_ACC;
-	pcmd += sizeof (uint32_t); /* Skip past command */
+	pcmd += sizeof(uint32_t); /* Skip past command */
 	rps_rsp = (RPS_RSP *)pcmd;
 
 	if (phba->fc_topology != TOPOLOGY_LOOP)
 		status = 0x10;
 	else
 		status = 0x8;
-	if (phba->fc_flag & FC_FABRIC)
+	if (phba->pport->fc_flag & FC_FABRIC)
 		status |= 0x4;
 
 	rps_rsp->rsvd1 = 0;
@@ -2856,28 +3355,25 @@ lpfc_els_rsp_rps_acc(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 	rps_rsp->primSeqErrCnt = be32_to_cpu(mb->un.varRdLnk.primSeqErrCnt);
 	rps_rsp->invalidXmitWord = be32_to_cpu(mb->un.varRdLnk.invalidXmitWord);
 	rps_rsp->crcCnt = be32_to_cpu(mb->un.varRdLnk.crcCnt);
-
 	/* Xmit ELS RPS ACC response tag <ulpIoTag> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0118 Xmit ELS RPS ACC response tag x%x "
-			"Data: x%x x%x x%x x%x x%x\n",
-			phba->brd_no,
-			elsiocb->iocb.ulpIoTag,
-			elsiocb->iocb.ulpContext, ndlp->nlp_DID,
-			ndlp->nlp_flag, ndlp->nlp_state, ndlp->nlp_rpi);
-
-	elsiocb->iocb_cmpl = lpfc_cmpl_els_acc;
+	lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_ELS,
+			 "0118 Xmit ELS RPS ACC response tag x%x xri x%x, "
+			 "did x%x, nlp_flag x%x, nlp_state x%x, rpi x%x\n",
+			 elsiocb->iotag, elsiocb->iocb.ulpContext,
+			 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+			 ndlp->nlp_rpi);
+	elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 	phba->fc_stat.elsXmitACC++;
-	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
+	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR)
 		lpfc_els_free_iocb(phba, elsiocb);
-	}
 	return;
 }
 
 static int
-lpfc_els_rcv_rps(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		 struct lpfc_nodelist * ndlp)
+lpfc_els_rcv_rps(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		 struct lpfc_nodelist *ndlp)
 {
+	struct lpfc_hba *phba = vport->phba;
 	uint32_t *lp;
 	uint8_t flag;
 	LPFC_MBOXQ_t *mbox;
@@ -2891,7 +3387,8 @@ lpfc_els_rcv_rps(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 		stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 		stat.un.b.lsRjtRsnCodeExp = LSEXP_CANT_GIVE_DATA;
 		stat.un.b.vendorUnique = 0;
-		lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+		lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
+			NULL);
 	}
 
 	pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
@@ -2901,19 +3398,25 @@ lpfc_els_rcv_rps(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 
 	if ((flag == 0) ||
 	    ((flag == 1) && (be32_to_cpu(rps->un.portNum) == 0)) ||
-	    ((flag == 2) && (memcmp(&rps->un.portName, &phba->fc_portname,
-			   sizeof (struct lpfc_name)) == 0))) {
-		if ((mbox = mempool_alloc(phba->mbox_mem_pool, GFP_ATOMIC))) {
+	    ((flag == 2) && (memcmp(&rps->un.portName, &vport->fc_portname,
+				    sizeof(struct lpfc_name)) == 0))) {
+
+		printk("Fix me....\n");
+		dump_stack();
+		mbox = mempool_alloc(phba->mbox_mem_pool, GFP_ATOMIC);
+		if (mbox) {
 			lpfc_read_lnk_stat(phba, mbox);
 			mbox->context1 =
-			    (void *)((unsigned long)cmdiocb->iocb.ulpContext);
-			mbox->context2 = ndlp;
+			    (void *)((unsigned long) cmdiocb->iocb.ulpContext);
+			mbox->context2 = lpfc_nlp_get(ndlp);
+			mbox->vport = vport;
 			mbox->mbox_cmpl = lpfc_els_rsp_rps_acc;
-			if (lpfc_sli_issue_mbox (phba, mbox,
-			    (MBX_NOWAIT | MBX_STOP_IOCB)) != MBX_NOT_FINISHED) {
+			if (lpfc_sli_issue_mbox (phba, mbox, MBX_NOWAIT)
+				!= MBX_NOT_FINISHED)
 				/* Mbox completion will send ELS Response */
 				return 0;
-			}
+
+			lpfc_nlp_put(ndlp);
 			mempool_free(mbox, phba->mbox_mem_pool);
 		}
 	}
@@ -2921,27 +3424,25 @@ lpfc_els_rcv_rps(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 	stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 	stat.un.b.lsRjtRsnCodeExp = LSEXP_CANT_GIVE_DATA;
 	stat.un.b.vendorUnique = 0;
-	lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+	lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp, NULL);
 	return 0;
 }
 
 static int
-lpfc_els_rsp_rpl_acc(struct lpfc_hba * phba, uint16_t cmdsize,
-		 struct lpfc_iocbq * oldiocb, struct lpfc_nodelist * ndlp)
+lpfc_els_rsp_rpl_acc(struct lpfc_vport *vport, uint16_t cmdsize,
+		     struct lpfc_iocbq *oldiocb, struct lpfc_nodelist *ndlp)
 {
-	IOCB_t *icmd;
-	IOCB_t *oldcmd;
+	struct lpfc_hba *phba = vport->phba;
+	IOCB_t *icmd, *oldcmd;
 	RPL_RSP rpl_rsp;
 	struct lpfc_iocbq *elsiocb;
-	struct lpfc_sli_ring *pring;
-	struct lpfc_sli *psli;
+	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_sli_ring *pring = &psli->ring[LPFC_ELS_RING];
 	uint8_t *pcmd;
 
-	psli = &phba->sli;
-	pring = &psli->ring[LPFC_ELS_RING];	/* ELS ring */
+	elsiocb = lpfc_prep_els_iocb(vport, 0, cmdsize, oldiocb->retry, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_ACC);
 
-	elsiocb = lpfc_prep_els_iocb(phba, 0, cmdsize, oldiocb->retry,
-					ndlp, ndlp->nlp_DID, ELS_CMD_ACC);
 	if (!elsiocb)
 		return 1;
 
@@ -2951,7 +3452,7 @@ lpfc_els_rsp_rpl_acc(struct lpfc_hba * phba, uint16_t cmdsize,
 
 	pcmd = (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
 	*((uint32_t *) (pcmd)) = ELS_CMD_ACC;
-	pcmd += sizeof (uint16_t);
+	pcmd += sizeof(uint16_t);
 	*((uint16_t *)(pcmd)) = be16_to_cpu(cmdsize);
 	pcmd += sizeof(uint16_t);
 
@@ -2959,24 +3460,19 @@ lpfc_els_rsp_rpl_acc(struct lpfc_hba * phba, uint16_t cmdsize,
 	rpl_rsp.listLen = be32_to_cpu(1);
 	rpl_rsp.index = 0;
 	rpl_rsp.port_num_blk.portNum = 0;
-	rpl_rsp.port_num_blk.portID = be32_to_cpu(phba->fc_myDID);
-	memcpy(&rpl_rsp.port_num_blk.portName, &phba->fc_portname,
+	rpl_rsp.port_num_blk.portID = be32_to_cpu(vport->fc_myDID);
+	memcpy(&rpl_rsp.port_num_blk.portName, &vport->fc_portname,
 	    sizeof(struct lpfc_name));
-
 	memcpy(pcmd, &rpl_rsp, cmdsize - sizeof(uint32_t));
-
-
 	/* Xmit ELS RPL ACC response tag <ulpIoTag> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0120 Xmit ELS RPL ACC response tag x%x "
-			"Data: x%x x%x x%x x%x x%x\n",
-			phba->brd_no,
-			elsiocb->iocb.ulpIoTag,
-			elsiocb->iocb.ulpContext, ndlp->nlp_DID,
-			ndlp->nlp_flag, ndlp->nlp_state, ndlp->nlp_rpi);
-
-	elsiocb->iocb_cmpl = lpfc_cmpl_els_acc;
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0120 Xmit ELS RPL ACC response tag x%x "
+			 "xri x%x, did x%x, nlp_flag x%x, nlp_state x%x, "
+			 "rpi x%x\n",
+			 elsiocb->iotag, elsiocb->iocb.ulpContext,
+			 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+			 ndlp->nlp_rpi);
+	elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp;
 	phba->fc_stat.elsXmitACC++;
 	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
 		lpfc_els_free_iocb(phba, elsiocb);
@@ -2986,8 +3482,8 @@ lpfc_els_rsp_rpl_acc(struct lpfc_hba * phba, uint16_t cmdsize,
 }
 
 static int
-lpfc_els_rcv_rpl(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		 struct lpfc_nodelist * ndlp)
+lpfc_els_rcv_rpl(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		 struct lpfc_nodelist *ndlp)
 {
 	struct lpfc_dmabuf *pcmd;
 	uint32_t *lp;
@@ -3002,7 +3498,8 @@ lpfc_els_rcv_rpl(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 		stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 		stat.un.b.lsRjtRsnCodeExp = LSEXP_CANT_GIVE_DATA;
 		stat.un.b.vendorUnique = 0;
-		lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+		lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
+			NULL);
 	}
 
 	pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
@@ -3019,14 +3516,14 @@ lpfc_els_rcv_rpl(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 	} else {
 		cmdsize = sizeof(uint32_t) + maxsize * sizeof(uint32_t);
 	}
-	lpfc_els_rsp_rpl_acc(phba, cmdsize, cmdiocb, ndlp);
+	lpfc_els_rsp_rpl_acc(vport, cmdsize, cmdiocb, ndlp);
 
 	return 0;
 }
 
 static int
-lpfc_els_rcv_farp(struct lpfc_hba * phba,
-		  struct lpfc_iocbq * cmdiocb, struct lpfc_nodelist * ndlp)
+lpfc_els_rcv_farp(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		  struct lpfc_nodelist *ndlp)
 {
 	struct lpfc_dmabuf *pcmd;
 	uint32_t *lp;
@@ -3041,14 +3538,9 @@ lpfc_els_rcv_farp(struct lpfc_hba * phba,
 
 	cmd = *lp++;
 	fp = (FARP *) lp;
-
 	/* FARP-REQ received from DID <did> */
-	lpfc_printf_log(phba,
-			 KERN_INFO,
-			 LOG_ELS,
-			 "%d:0601 FARP-REQ received from DID x%x\n",
-			 phba->brd_no, did);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0601 FARP-REQ received from DID x%x\n", did);
 	/* We will only support match on WWPN or WWNN */
 	if (fp->Mflags & ~(FARP_MATCH_NODE | FARP_MATCH_PORT)) {
 		return 0;
@@ -3057,15 +3549,15 @@ lpfc_els_rcv_farp(struct lpfc_hba * phba,
 	cnt = 0;
 	/* If this FARP command is searching for my portname */
 	if (fp->Mflags & FARP_MATCH_PORT) {
-		if (memcmp(&fp->RportName, &phba->fc_portname,
-			   sizeof (struct lpfc_name)) == 0)
+		if (memcmp(&fp->RportName, &vport->fc_portname,
+			   sizeof(struct lpfc_name)) == 0)
 			cnt = 1;
 	}
 
 	/* If this FARP command is searching for my nodename */
 	if (fp->Mflags & FARP_MATCH_NODE) {
-		if (memcmp(&fp->RnodeName, &phba->fc_nodename,
-			   sizeof (struct lpfc_name)) == 0)
+		if (memcmp(&fp->RnodeName, &vport->fc_nodename,
+			   sizeof(struct lpfc_name)) == 0)
 			cnt = 1;
 	}
 
@@ -3075,23 +3567,22 @@ lpfc_els_rcv_farp(struct lpfc_hba * phba,
 			/* Log back into the node before sending the FARP. */
 			if (fp->Rflags & FARP_REQUEST_PLOGI) {
 				ndlp->nlp_prev_state = ndlp->nlp_state;
-				ndlp->nlp_state = NLP_STE_PLOGI_ISSUE;
-				lpfc_nlp_list(phba, ndlp, NLP_PLOGI_LIST);
-				lpfc_issue_els_plogi(phba, ndlp->nlp_DID, 0);
+				lpfc_nlp_set_state(vport, ndlp,
+						   NLP_STE_PLOGI_ISSUE);
+				lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
 			}
 
 			/* Send a FARP response to that node */
-			if (fp->Rflags & FARP_REQUEST_FARPR) {
-				lpfc_issue_els_farpr(phba, did, 0);
-			}
+			if (fp->Rflags & FARP_REQUEST_FARPR)
+				lpfc_issue_els_farpr(vport, did, 0);
 		}
 	}
 	return 0;
 }
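
/*
 * Sketch of the FARP matching rules above: the request's Mflags choose
 * whether the responder compares its WWPN, its WWNN, or both, and any one
 * match is enough to answer.  The flag values and example names below are
 * stand-ins for the driver's FARP_MATCH_PORT/FARP_MATCH_NODE and real
 * world-wide names.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define MATCH_PORT 0x01
#define MATCH_NODE 0x02

int main(void)
{
	uint8_t my_wwpn[8]  = { 0x10, 0, 0, 0, 0xc9, 0x3a, 0, 1 };
	uint8_t my_wwnn[8]  = { 0x20, 0, 0, 0, 0xc9, 0x3a, 0, 1 };
	uint8_t req_wwpn[8] = { 0x10, 0, 0, 0, 0xc9, 0x3a, 0, 1 };
	uint8_t req_wwnn[8] = { 0x20, 0, 0, 0, 0xc9, 0x3a, 0, 9 };
	uint8_t mflags = MATCH_PORT | MATCH_NODE;
	int match = 0;

	if ((mflags & MATCH_PORT) && memcmp(req_wwpn, my_wwpn, 8) == 0)
		match = 1;
	if ((mflags & MATCH_NODE) && memcmp(req_wwnn, my_wwnn, 8) == 0)
		match = 1;

	printf("FARP %s\n", match ? "matches this port" : "does not match");
	return 0;
}
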
 
 static int
-lpfc_els_rcv_farpr(struct lpfc_hba * phba,
-		   struct lpfc_iocbq * cmdiocb, struct lpfc_nodelist * ndlp)
+lpfc_els_rcv_farpr(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		   struct lpfc_nodelist  *ndlp)
 {
 	struct lpfc_dmabuf *pcmd;
 	uint32_t *lp;
@@ -3105,21 +3596,17 @@ lpfc_els_rcv_farpr(struct lpfc_hba * phba,
 
 	cmd = *lp++;
 	/* FARP-RSP received from DID <did> */
-	lpfc_printf_log(phba,
-			 KERN_INFO,
-			 LOG_ELS,
-			 "%d:0600 FARP-RSP received from DID x%x\n",
-			 phba->brd_no, did);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0600 FARP-RSP received from DID x%x\n", did);
 	/* ACCEPT the Farp resp request */
-	lpfc_els_rsp_acc(phba, ELS_CMD_ACC, cmdiocb, ndlp, NULL, 0);
+	lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
 
 	return 0;
 }
 
 static int
-lpfc_els_rcv_fan(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-		 struct lpfc_nodelist * fan_ndlp)
+lpfc_els_rcv_fan(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		 struct lpfc_nodelist *fan_ndlp)
 {
 	struct lpfc_dmabuf *pcmd;
 	uint32_t *lp;
@@ -3127,22 +3614,22 @@ lpfc_els_rcv_fan(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 	uint32_t cmd, did;
 	FAN *fp;
 	struct lpfc_nodelist *ndlp, *next_ndlp;
+	struct lpfc_hba *phba = vport->phba;
 
 	/* FAN received */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS, "%d:0265 FAN received\n",
-								phba->brd_no);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0265 FAN received\n");
 	icmd = &cmdiocb->iocb;
 	did = icmd->un.elsreq64.remoteID;
 	pcmd = (struct lpfc_dmabuf *)cmdiocb->context2;
 	lp = (uint32_t *)pcmd->virt;
 
 	cmd = *lp++;
-	fp = (FAN *)lp;
+	fp = (FAN *) lp;
 
 	/* FAN received; Fan does not have a reply sequence */
 
-	if (phba->hba_state == LPFC_LOCAL_CFG_LINK) {
+	if (phba->pport->port_state == LPFC_LOCAL_CFG_LINK) {
 		if ((memcmp(&phba->fc_fabparam.nodeName, &fp->FnodeName,
 			sizeof(struct lpfc_name)) != 0) ||
 		    (memcmp(&phba->fc_fabparam.portName, &fp->FportName,
@@ -3153,44 +3640,46 @@ lpfc_els_rcv_fan(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 			 */
 
 			list_for_each_entry_safe(ndlp, next_ndlp,
-				&phba->fc_npr_list, nlp_listp) {
-
+						 &vport->fc_nodes, nlp_listp) {
+				if (ndlp->nlp_state != NLP_STE_NPR_NODE)
+					continue;
 				if (ndlp->nlp_type & NLP_FABRIC) {
 					/*
 					 * Clean up old Fabric, Nameserver and
 					 * other NLP_FABRIC logins
 					 */
-					lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+					lpfc_drop_node(vport, ndlp);
+
 				} else if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) {
 					/* Fail outstanding I/O now since this
 					 * device is marked for PLOGI
 					 */
-					lpfc_unreg_rpi(phba, ndlp);
+					lpfc_unreg_rpi(vport, ndlp);
 				}
 			}
 
-			phba->hba_state = LPFC_FLOGI;
-			lpfc_set_disctmo(phba);
-			lpfc_initial_flogi(phba);
+			lpfc_initial_flogi(vport);
 			return 0;
 		}
 		/* Discovery not needed,
 		 * move the nodes to their original state.
 		 */
-		list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_npr_list,
-			nlp_listp) {
+		list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes,
+					 nlp_listp) {
+			if (ndlp->nlp_state != NLP_STE_NPR_NODE)
+				continue;
 
 			switch (ndlp->nlp_prev_state) {
 			case NLP_STE_UNMAPPED_NODE:
 				ndlp->nlp_prev_state = NLP_STE_NPR_NODE;
-				ndlp->nlp_state = NLP_STE_UNMAPPED_NODE;
-				lpfc_nlp_list(phba, ndlp, NLP_UNMAPPED_LIST);
+				lpfc_nlp_set_state(vport, ndlp,
+						   NLP_STE_UNMAPPED_NODE);
 				break;
 
 			case NLP_STE_MAPPED_NODE:
 				ndlp->nlp_prev_state = NLP_STE_NPR_NODE;
-				ndlp->nlp_state = NLP_STE_MAPPED_NODE;
-				lpfc_nlp_list(phba, ndlp, NLP_MAPPED_LIST);
+				lpfc_nlp_set_state(vport, ndlp,
+						   NLP_STE_MAPPED_NODE);
 				break;
 
 			default:
@@ -3199,7 +3688,7 @@ lpfc_els_rcv_fan(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 		}
 
 		/* Start discovery - this should just do CLEAR_LA */
-		lpfc_disc_start(phba);
+		lpfc_disc_start(vport);
 	}
 	return 0;
 }
@@ -3207,42 +3696,42 @@ lpfc_els_rcv_fan(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
 void
 lpfc_els_timeout(unsigned long ptr)
 {
-	struct lpfc_hba *phba;
+	struct lpfc_vport *vport = (struct lpfc_vport *) ptr;
+	struct lpfc_hba   *phba = vport->phba;
 	unsigned long iflag;
 
-	phba = (struct lpfc_hba *)ptr;
-	if (phba == 0)
-		return;
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-	if (!(phba->work_hba_events & WORKER_ELS_TMO)) {
-		phba->work_hba_events |= WORKER_ELS_TMO;
+	spin_lock_irqsave(&vport->work_port_lock, iflag);
+	if ((vport->work_port_events & WORKER_ELS_TMO) == 0) {
+		vport->work_port_events |= WORKER_ELS_TMO;
+		spin_unlock_irqrestore(&vport->work_port_lock, iflag);
+
+		spin_lock_irqsave(&phba->hbalock, iflag);
 		if (phba->work_wait)
-			wake_up(phba->work_wait);
+			lpfc_worker_wake_up(phba);
+		spin_unlock_irqrestore(&phba->hbalock, iflag);
 	}
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
+	else
+		spin_unlock_irqrestore(&vport->work_port_lock, iflag);
 	return;
 }
 
 void
-lpfc_els_timeout_handler(struct lpfc_hba *phba)
+lpfc_els_timeout_handler(struct lpfc_vport *vport)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	struct lpfc_sli_ring *pring;
 	struct lpfc_iocbq *tmp_iocb, *piocb;
 	IOCB_t *cmd = NULL;
 	struct lpfc_dmabuf *pcmd;
-	uint32_t *elscmd;
-	uint32_t els_command=0;
+	uint32_t els_command = 0;
 	uint32_t timeout;
-	uint32_t remote_ID;
+	uint32_t remote_ID = 0xffffffff;
 
-	if (phba == 0)
-		return;
-	spin_lock_irq(phba->host->host_lock);
 	/* If the timer is already canceled do nothing */
-	if (!(phba->work_hba_events & WORKER_ELS_TMO)) {
-		spin_unlock_irq(phba->host->host_lock);
+	if ((vport->work_port_events & WORKER_ELS_TMO) == 0) {
 		return;
 	}
+	spin_lock_irq(&phba->hbalock);
 	timeout = (uint32_t)(phba->fc_ratov << 1);
 
 	pring = &phba->sli.ring[LPFC_ELS_RING];
@@ -3250,65 +3739,67 @@ lpfc_els_timeout_handler(struct lpfc_hba *phba)
 	list_for_each_entry_safe(piocb, tmp_iocb, &pring->txcmplq, list) {
 		cmd = &piocb->iocb;
 
-		if ((piocb->iocb_flag & LPFC_IO_LIBDFC) ||
-			(piocb->iocb.ulpCommand == CMD_ABORT_XRI_CN) ||
-			(piocb->iocb.ulpCommand == CMD_CLOSE_XRI_CN)) {
+		if ((piocb->iocb_flag & LPFC_IO_LIBDFC) != 0 ||
+		    piocb->iocb.ulpCommand == CMD_ABORT_XRI_CN ||
+		    piocb->iocb.ulpCommand == CMD_CLOSE_XRI_CN)
 			continue;
-		}
+
+		if (piocb->vport != vport)
+			continue;
+
 		pcmd = (struct lpfc_dmabuf *) piocb->context2;
-		if (pcmd) {
-			elscmd = (uint32_t *) (pcmd->virt);
-			els_command = *elscmd;
-		}
+		if (pcmd)
+			els_command = *(uint32_t *) (pcmd->virt);
 
-		if ((els_command == ELS_CMD_FARP)
-		    || (els_command == ELS_CMD_FARPR)) {
+		if (els_command == ELS_CMD_FARP ||
+		    els_command == ELS_CMD_FARPR ||
+		    els_command == ELS_CMD_FDISC)
+			continue;
+
+		if (vport != piocb->vport)
 			continue;
-		}
 
 		if (piocb->drvrTimeout > 0) {
-			if (piocb->drvrTimeout >= timeout) {
+			if (piocb->drvrTimeout >= timeout)
 				piocb->drvrTimeout -= timeout;
-			} else {
+			else
 				piocb->drvrTimeout = 0;
-			}
 			continue;
 		}
 
-		if (cmd->ulpCommand == CMD_GEN_REQUEST64_CR) {
-			struct lpfc_nodelist *ndlp;
-			spin_unlock_irq(phba->host->host_lock);
-			ndlp = lpfc_findnode_rpi(phba, cmd->ulpContext);
-			spin_lock_irq(phba->host->host_lock);
-			remote_ID = ndlp->nlp_DID;
-		} else {
+		remote_ID = 0xffffffff;
+		if (cmd->ulpCommand != CMD_GEN_REQUEST64_CR)
 			remote_ID = cmd->un.elsreq64.remoteID;
+		else {
+			struct lpfc_nodelist *ndlp;
+			ndlp = __lpfc_findnode_rpi(vport, cmd->ulpContext);
+			if (ndlp)
+				remote_ID = ndlp->nlp_DID;
 		}
-
-		lpfc_printf_log(phba,
-				KERN_ERR,
-				LOG_ELS,
-				"%d:0127 ELS timeout Data: x%x x%x x%x x%x\n",
-				phba->brd_no, els_command,
-				remote_ID, cmd->ulpCommand, cmd->ulpIoTag);
-
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0127 ELS timeout Data: x%x x%x x%x "
+				 "x%x\n", els_command,
+				 remote_ID, cmd->ulpCommand, cmd->ulpIoTag);
 		lpfc_sli_issue_abort_iotag(phba, pring, piocb);
 	}
-	if (phba->sli.ring[LPFC_ELS_RING].txcmplq_cnt)
-		mod_timer(&phba->els_tmofunc, jiffies + HZ * timeout);
+	spin_unlock_irq(&phba->hbalock);
 
-	spin_unlock_irq(phba->host->host_lock);
+	if (phba->sli.ring[LPFC_ELS_RING].txcmplq_cnt)
+		mod_timer(&vport->els_tmofunc, jiffies + HZ * timeout);
 }
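
/*
 * Sketch of the ageing loop in lpfc_els_timeout_handler() above: on each
 * timer pass the ELS timeout window is subtracted from a command's remaining
 * drvrTimeout, and only commands that have already reached zero get their
 * iotag aborted.  The window and per-command values here are arbitrary; the
 * driver uses 2 * fc_ratov seconds.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t timeout = 30;				/* one timeout window */
	uint32_t drvr_timeout[3] = { 60, 30, 0 };	/* remaining time per command */
	int i;

	for (i = 0; i < 3; i++) {
		if (drvr_timeout[i] > 0) {
			drvr_timeout[i] = (drvr_timeout[i] >= timeout) ?
					  drvr_timeout[i] - timeout : 0;
			printf("cmd %d: %u left, keep waiting\n",
			       i, (unsigned int)drvr_timeout[i]);
			continue;
		}
		printf("cmd %d: expired, abort iotag\n", i);
	}
	return 0;
}
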
 
 void
-lpfc_els_flush_cmd(struct lpfc_hba * phba)
+lpfc_els_flush_cmd(struct lpfc_vport *vport)
 {
-	struct lpfc_sli_ring *pring;
+	LIST_HEAD(completions);
+	struct lpfc_hba  *phba = vport->phba;
+	struct lpfc_sli_ring *pring = &phba->sli.ring[LPFC_ELS_RING];
 	struct lpfc_iocbq *tmp_iocb, *piocb;
 	IOCB_t *cmd = NULL;
 
-	pring = &phba->sli.ring[LPFC_ELS_RING];
-	spin_lock_irq(phba->host->host_lock);
+	lpfc_fabric_abort_vport(vport);
+
+	spin_lock_irq(&phba->hbalock);
 	list_for_each_entry_safe(piocb, tmp_iocb, &pring->txq, list) {
 		cmd = &piocb->iocb;
 
@@ -3317,277 +3808,1726 @@ lpfc_els_flush_cmd(struct lpfc_hba * phba)
 		}
 
 		/* Do not flush out the QUE_RING and ABORT/CLOSE iocbs */
-		if ((cmd->ulpCommand == CMD_QUE_RING_BUF_CN) ||
-		    (cmd->ulpCommand == CMD_QUE_RING_BUF64_CN) ||
-		    (cmd->ulpCommand == CMD_CLOSE_XRI_CN) ||
-		    (cmd->ulpCommand == CMD_ABORT_XRI_CN)) {
+		if (cmd->ulpCommand == CMD_QUE_RING_BUF_CN ||
+		    cmd->ulpCommand == CMD_QUE_RING_BUF64_CN ||
+		    cmd->ulpCommand == CMD_CLOSE_XRI_CN ||
+		    cmd->ulpCommand == CMD_ABORT_XRI_CN)
 			continue;
-		}
-
-		list_del(&piocb->list);
-		pring->txq_cnt--;
 
-		cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
-		cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+		if (piocb->vport != vport)
+			continue;
 
-		if (piocb->iocb_cmpl) {
-			spin_unlock_irq(phba->host->host_lock);
-			(piocb->iocb_cmpl) (phba, piocb, piocb);
-			spin_lock_irq(phba->host->host_lock);
-		} else
-			lpfc_sli_release_iocbq(phba, piocb);
+		list_move_tail(&piocb->list, &completions);
+		pring->txq_cnt--;
 	}
 
 	list_for_each_entry_safe(piocb, tmp_iocb, &pring->txcmplq, list) {
-		cmd = &piocb->iocb;
-
 		if (piocb->iocb_flag & LPFC_IO_LIBDFC) {
 			continue;
 		}
 
+		if (piocb->vport != vport)
+			continue;
+
 		lpfc_sli_issue_abort_iotag(phba, pring, piocb);
 	}
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
+
+	while (!list_empty(&completions)) {
+		piocb = list_get_first(&completions, struct lpfc_iocbq, list);
+		cmd = &piocb->iocb;
+		list_del_init(&piocb->list);
+
+		if (!piocb->iocb_cmpl)
+			lpfc_sli_release_iocbq(phba, piocb);
+		else {
+			cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+			cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+			(piocb->iocb_cmpl) (phba, piocb, piocb);
+		}
+	}
+
 	return;
 }
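
/*
 * Userspace sketch (with stand-in types and a pthread mutex instead of
 * phba->hbalock) of the flush pattern lpfc_els_flush_cmd() now uses: entries
 * are unlinked from the shared queue while the lock is held, gathered on a
 * local "completions" list, and their completion handlers are invoked only
 * after the lock has been dropped.
 */
#include <stdio.h>
#include <pthread.h>

struct fake_iocb {
	struct fake_iocb *next;
	int tag;
	void (*cmpl)(struct fake_iocb *);
};

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void aborted_cmpl(struct fake_iocb *iocb)
{
	/* Runs with the lock NOT held, like the IOERR_SLI_ABORTED completions */
	printf("completing aborted iocb %d\n", iocb->tag);
}

int main(void)
{
	struct fake_iocb pool[3] = {
		{ &pool[1], 0, aborted_cmpl },
		{ &pool[2], 1, aborted_cmpl },
		{ NULL,     2, aborted_cmpl },
	};
	struct fake_iocb *txq = &pool[0];	/* shared transmit queue */
	struct fake_iocb *completions = NULL;	/* local list */
	struct fake_iocb *iocb;

	pthread_mutex_lock(&queue_lock);
	while (txq) {				/* move everything off txq */
		iocb = txq;
		txq = iocb->next;
		iocb->next = completions;
		completions = iocb;
	}
	pthread_mutex_unlock(&queue_lock);

	while (completions) {			/* complete outside the lock */
		iocb = completions;
		completions = iocb->next;
		iocb->cmpl(iocb);
	}
	return 0;
}
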
 
 void
-lpfc_els_unsol_event(struct lpfc_hba * phba,
-		     struct lpfc_sli_ring * pring, struct lpfc_iocbq * elsiocb)
+lpfc_els_flush_all_cmd(struct lpfc_hba  *phba)
 {
-	struct lpfc_sli *psli;
-	struct lpfc_nodelist *ndlp;
-	struct lpfc_dmabuf *mp;
-	uint32_t *lp;
-	IOCB_t *icmd;
-	struct ls_rjt stat;
-	uint32_t cmd;
-	uint32_t did;
-	uint32_t newnode;
-	uint32_t drop_cmd = 0;	/* by default do NOT drop received cmd */
-	uint32_t rjt_err = 0;
+	LIST_HEAD(completions);
+	struct lpfc_sli_ring *pring = &phba->sli.ring[LPFC_ELS_RING];
+	struct lpfc_iocbq *tmp_iocb, *piocb;
+	IOCB_t *cmd = NULL;
 
-	psli = &phba->sli;
-	icmd = &elsiocb->iocb;
+	lpfc_fabric_abort_hba(phba);
+	spin_lock_irq(&phba->hbalock);
+	list_for_each_entry_safe(piocb, tmp_iocb, &pring->txq, list) {
+		cmd = &piocb->iocb;
+		if (piocb->iocb_flag & LPFC_IO_LIBDFC)
+			continue;
+		/* Do not flush out the QUE_RING and ABORT/CLOSE iocbs */
+		if (cmd->ulpCommand == CMD_QUE_RING_BUF_CN ||
+		    cmd->ulpCommand == CMD_QUE_RING_BUF64_CN ||
+		    cmd->ulpCommand == CMD_CLOSE_XRI_CN ||
+		    cmd->ulpCommand == CMD_ABORT_XRI_CN)
+			continue;
+		list_move_tail(&piocb->list, &completions);
+		pring->txq_cnt--;
+	}
+	list_for_each_entry_safe(piocb, tmp_iocb, &pring->txcmplq, list) {
+		if (piocb->iocb_flag & LPFC_IO_LIBDFC)
+			continue;
+		lpfc_sli_issue_abort_iotag(phba, pring, piocb);
+	}
+	spin_unlock_irq(&phba->hbalock);
+	while (!list_empty(&completions)) {
+		piocb = list_get_first(&completions, struct lpfc_iocbq, list);
+		cmd = &piocb->iocb;
+		list_del_init(&piocb->list);
+		if (!piocb->iocb_cmpl)
+			lpfc_sli_release_iocbq(phba, piocb);
+		else {
+			cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+			cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+			(piocb->iocb_cmpl) (phba, piocb, piocb);
+		}
+	}
+	return;
+}
 
-	if ((icmd->ulpStatus == IOSTAT_LOCAL_REJECT) &&
-		((icmd->un.ulpWord[4] & 0xff) == IOERR_RCV_BUFFER_WAITING)) {
-		/* Not enough posted buffers; Try posting more buffers */
-		phba->fc_stat.NoRcvBuf++;
-		lpfc_post_buffer(phba, pring, 0, 1);
+static void
+lpfc_els_rcv_auth_neg(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		  struct lpfc_nodelist *ndlp)
+{
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_dmabuf *pcmd = cmdiocb->context2;
+	struct lpfc_auth_message *authcmd;
+	uint8_t reason, explanation;
+	uint32_t message_len;
+	uint32_t trans_id;
+	struct fc_auth_req *fc_req;
+	struct fc_auth_rsp *fc_rsp;
+
+	authcmd = pcmd->virt;
+	message_len = be32_to_cpu(authcmd->message_len);
+	trans_id = be32_to_cpu(authcmd->trans_id);
+
+	lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
+
+	vport->auth.trans_id = trans_id;
+
+	if (lpfc_unpack_auth_negotiate(vport, authcmd->data,
+					&reason, &explanation)) {
+		lpfc_issue_els_auth_reject(vport, ndlp, reason, explanation);
 		return;
 	}
+	lpfc_printf_vlog(vport, KERN_WARNING, LOG_SECURITY,
+			 "1033 Received auth_negotiate from Nport:x%x\n",
+			 ndlp->nlp_DID);
 
-	/* If there are no BDEs associated with this IOCB,
-	 * there is nothing to do.
-	 */
-	if (icmd->ulpBdeCount == 0)
+	fc_req = kzalloc(sizeof(struct fc_auth_req), GFP_KERNEL);
+
+	fc_req->local_wwpn = wwn_to_u64(vport->fc_portname.u.wwn);
+	if (ndlp->nlp_type & NLP_FABRIC)
+		fc_req->remote_wwpn = AUTH_FABRIC_WWN;
+	else
+		fc_req->remote_wwpn = wwn_to_u64(ndlp->nlp_portname.u.wwn);
+	fc_req->u.dhchap_challenge.transaction_id = vport->auth.trans_id;
+	fc_req->u.dhchap_challenge.dh_group_id = vport->auth.group_id;
+	fc_req->u.dhchap_challenge.hash_id = vport->auth.hash_id;
+
+	fc_rsp = kzalloc(MAX_AUTH_RSP_SIZE, GFP_KERNEL);
+
+	if (lpfc_fc_security_dhchap_make_challenge(shost,
+			      fc_req, sizeof(struct fc_auth_req),
+	fc_rsp, MAX_AUTH_RSP_SIZE)) {
+		kfree(fc_rsp);
+		lpfc_issue_els_auth_reject(vport, ndlp, LOGIC_ERR, 0);
+	}
+
+	kfree(fc_req);
+
+}
+
+static void
+lpfc_els_rcv_chap_chal(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		       struct lpfc_nodelist *ndlp)
+{
+
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_dmabuf *pcmd = cmdiocb->context2;
+	struct lpfc_auth_message *authcmd;
+	uint8_t reason, explanation;
+	uint32_t message_len;
+	uint32_t trans_id;
+	struct fc_auth_req *fc_req;
+	struct fc_auth_rsp *fc_rsp;
+	uint32_t fc_req_len;
+
+	authcmd = pcmd->virt;
+	message_len = be32_to_cpu(authcmd->message_len);
+	trans_id = be32_to_cpu(authcmd->trans_id);
+
+	lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
+
+	if (vport->auth.auth_msg_state != LPFC_AUTH_NEGOTIATE) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1034 Not Expecting Challenge - Rejecting "
+				 "Challenge.\n");
+		lpfc_issue_els_auth_reject(vport, ndlp, AUTH_ERR, BAD_PROTOCOL);
 		return;
+	}
 
-	/* type of ELS cmd is first 32bit word in packet */
-	mp = lpfc_sli_ringpostbuf_get(phba, pring, getPaddr(icmd->un.
-							    cont64[0].
-							    addrHigh,
-							    icmd->un.
-							    cont64[0].addrLow));
-	if (mp == 0) {
-		drop_cmd = 1;
-		goto dropit;
+	if (trans_id != vport->auth.trans_id) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1035 Transaction ID does not match - Rejecting "
+				 "Challenge.\n");
+		lpfc_issue_els_auth_reject(vport, ndlp, AUTH_ERR, BAD_PAYLOAD);
+		return;
+	}
+
+	if (lpfc_unpack_dhchap_challenge(vport, authcmd->data,
+					 &reason, &explanation)) {
+		lpfc_issue_els_auth_reject(vport, ndlp, reason, explanation);
+		return;
+	}
+
+	fc_req_len = (sizeof(struct fc_auth_req) +
+		      vport->auth.challenge_len +
+		      vport->auth.dh_pub_key_len);
+	fc_req = kzalloc(fc_req_len, GFP_KERNEL);
+	fc_req->local_wwpn = wwn_to_u64(vport->fc_portname.u.wwn);
+	if (ndlp->nlp_type & NLP_FABRIC)
+		fc_req->remote_wwpn = AUTH_FABRIC_WWN;
+	else
+		fc_req->remote_wwpn = wwn_to_u64(ndlp->nlp_portname.u.wwn);
+	fc_req->u.dhchap_reply.transaction_id = vport->auth.trans_id;
+	fc_req->u.dhchap_reply.dh_group_id = vport->auth.group_id;
+	fc_req->u.dhchap_reply.hash_id = vport->auth.hash_id;
+	fc_req->u.dhchap_reply.bidirectional = vport->auth.bidirectional;
+	fc_req->u.dhchap_reply.received_challenge_len =
+		vport->auth.challenge_len;
+	fc_req->u.dhchap_reply.received_public_key_len =
+			vport->auth.dh_pub_key_len;
+	memcpy (fc_req->u.dhchap_reply.data, vport->auth.challenge,
+		vport->auth.challenge_len);
+	if (vport->auth.group_id != DH_GROUP_NULL) {
+		memcpy (fc_req->u.dhchap_reply.data + vport->auth.challenge_len,
+			vport->auth.dh_pub_key, vport->auth.dh_pub_key_len);
+	}
+
+	fc_rsp = kzalloc(MAX_AUTH_RSP_SIZE, GFP_KERNEL);
+
+	if (lpfc_fc_security_dhchap_make_response(shost,
+			fc_req, fc_req_len,
+			fc_rsp, MAX_AUTH_RSP_SIZE)) {
+		kfree(fc_rsp);
+		lpfc_issue_els_auth_reject(vport, ndlp, LOGIC_ERR, 0);
+	}
+
+	kfree(fc_req);
+
+}
+
+static void
+lpfc_els_rcv_auth_rjt(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		      struct lpfc_nodelist *ndlp)
+{
+
+	struct lpfc_dmabuf *pcmd = cmdiocb->context2;
+	struct lpfc_auth_message *authcmd;
+	uint32_t message_len;
+	uint32_t trans_id;
+	struct lpfc_auth_reject *rjt;
+
+	authcmd = pcmd->virt;
+	rjt = (struct lpfc_auth_reject *)authcmd->data;
+
+	message_len = be32_to_cpu(authcmd->message_len);
+	trans_id = be32_to_cpu(authcmd->trans_id);
+
+	lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
+
+
+	if (vport->auth.auth_state == LPFC_AUTH_SUCCESS) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1036 Authentication transaction reject - "
+				 "re-auth request reason 0x%x exp 0x%x\n",
+				 rjt->reason, rjt->explanation);
+		lpfc_port_auth_failed(ndlp);
+		if (vport->auth.auth_msg_state == LPFC_DHCHAP_SUCCESS) {
+			/* start authentication */
+			lpfc_start_authentication(vport, ndlp);
+		}
+	} else {
+		if (rjt->reason == LOGIC_ERR && rjt->explanation == RESTART) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+					 "1037 Authentication transaction "
+					 "reject - restarting authentication. "
+					 "reason 0x%x exp 0x%x\n",
+					 rjt->reason, rjt->explanation);
+			/* restart auth */
+			lpfc_start_authentication(vport, ndlp);
+		} else {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				"1038 Authentication transaction "
+				"reject. reason 0x%x exp 0x%x\n",
+				rjt->reason, rjt->explanation);
+			vport->auth.auth_msg_state = LPFC_AUTH_REJECT;
+		}
+	}
+}
+
+static void
+lpfc_els_rcv_chap_reply(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		  struct lpfc_nodelist *ndlp)
+{
+
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_dmabuf *pcmd = cmdiocb->context2;
+	struct lpfc_auth_message *authcmd;
+	uint32_t message_len;
+	uint32_t trans_id;
+	struct fc_auth_req *fc_req;
+	struct fc_auth_rsp *fc_rsp;
+	uint32_t data_len;
+
+	authcmd = pcmd->virt;
+	message_len = be32_to_cpu(authcmd->message_len);
+	trans_id = be32_to_cpu(authcmd->trans_id);
+
+	lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
+
+	fc_req = kzalloc(MAX_AUTH_REQ_SIZE, GFP_KERNEL);
+
+	fc_req->local_wwpn = wwn_to_u64(vport->fc_portname.u.wwn);
+	if (ndlp->nlp_type & NLP_FABRIC)
+		fc_req->remote_wwpn = AUTH_FABRIC_WWN;
+	else
+		fc_req->remote_wwpn = wwn_to_u64(ndlp->nlp_portname.u.wwn);
+
+	if (vport->auth.auth_msg_state != LPFC_DHCHAP_CHALLENGE) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1039 Not Expecting Reply - rejecting. State "
+				 "0x%x\n", vport->auth.auth_state);
+
+		lpfc_issue_els_auth_reject(vport, ndlp, AUTH_ERR, BAD_PROTOCOL);
+		return;
+	}
+
+	if (trans_id != vport->auth.trans_id) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1040 Bad Reply trans_id - rejecting. "
+				 "Trans_id: 0x%x Expecting: 0x%x\n",
+				 trans_id, vport->auth.trans_id);
+		lpfc_issue_els_auth_reject(vport, ndlp, AUTH_ERR, BAD_PAYLOAD);
+		return;
+	}
+
+	/* Zero is a valid length to be returned */
+	data_len = lpfc_unpack_dhchap_reply(vport, authcmd->data, fc_req);
+	fc_req->u.dhchap_success.hash_id = vport->auth.hash_id;
+	fc_req->u.dhchap_success.dh_group_id = vport->auth.group_id;
+	fc_req->u.dhchap_success.transaction_id = vport->auth.trans_id;
+	fc_req->u.dhchap_success.our_challenge_len = vport->auth.challenge_len;
+	memcpy (fc_req->u.dhchap_success.data, vport->auth.challenge,
+		vport->auth.challenge_len);
+
+	fc_rsp = kzalloc(MAX_AUTH_RSP_SIZE, GFP_KERNEL);
+
+	if (lpfc_fc_security_dhchap_authenticate(shost, fc_req,
+			(sizeof(struct fc_auth_req) +
+			data_len + vport->auth.challenge_len),
+			fc_rsp, MAX_AUTH_RSP_SIZE)) {
+		kfree(fc_rsp);
+		lpfc_issue_els_auth_reject(vport, ndlp, LOGIC_ERR, 0);
+	}
+
+	kfree(fc_req);
+
+}
+
+static void
+lpfc_els_rcv_chap_suc(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+		  struct lpfc_nodelist *ndlp)
+{
+
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_dmabuf *pcmd = cmdiocb->context2;
+	struct lpfc_auth_message *authcmd;
+	uint32_t message_len;
+	uint32_t trans_id;
+	struct fc_auth_req *fc_req;
+	struct fc_auth_rsp *fc_rsp;
+	uint32_t data_len;
+
+	authcmd = pcmd->virt;
+	message_len = be32_to_cpu(authcmd->message_len);
+	trans_id = be32_to_cpu(authcmd->trans_id);
+
+	lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
+
+	if (vport->auth.auth_msg_state != LPFC_DHCHAP_REPLY &&
+	    vport->auth.auth_msg_state != LPFC_DHCHAP_SUCCESS_REPLY) {
+		lpfc_issue_els_auth_reject(vport, ndlp, AUTH_ERR, BAD_PROTOCOL);
+		return;
+	}
+
+	if (trans_id != vport->auth.trans_id) {
+		lpfc_issue_els_auth_reject(vport, ndlp, AUTH_ERR, BAD_PAYLOAD);
+		return;
 	}
 
+	if (vport->auth.auth_msg_state == LPFC_DHCHAP_REPLY &&
+	    vport->auth.bidirectional) {
+
+		fc_req = kzalloc(MAX_AUTH_REQ_SIZE, GFP_KERNEL);
+		if (!fc_req) {
+			return;
+		}
+
+		fc_req->local_wwpn = wwn_to_u64(vport->fc_portname.u.wwn);
+		if (ndlp->nlp_type & NLP_FABRIC)
+			fc_req->remote_wwpn = AUTH_FABRIC_WWN;
+		else
+			fc_req->remote_wwpn =
+				wwn_to_u64(ndlp->nlp_portname.u.wwn);
+		fc_req->u.dhchap_success.hash_id = vport->auth.hash_id;
+		fc_req->u.dhchap_success.dh_group_id = vport->auth.group_id;
+		fc_req->u.dhchap_success.transaction_id = vport->auth.trans_id;
+		fc_req->u.dhchap_success.our_challenge_len =
+				vport->auth.challenge_len;
+
+		memcpy(fc_req->u.dhchap_success.data, vport->auth.challenge,
+		       vport->auth.challenge_len);
+
+		/* Zero is a valid return length */
+		data_len = lpfc_unpack_dhchap_success(vport,
+						      authcmd->data,
+						      fc_req);
+
+		fc_rsp = kzalloc(MAX_AUTH_RSP_SIZE, GFP_KERNEL);
+		if (!fc_rsp) {
+			return;
+		}
+
+		if (lpfc_fc_security_dhchap_authenticate(shost,
+			fc_req, sizeof(struct fc_auth_req) + data_len,
+			fc_rsp, MAX_AUTH_RSP_SIZE)) {
+			kfree(fc_rsp);
+			lpfc_issue_els_auth_reject(vport, ndlp, LOGIC_ERR, 0);
+		}
+
+		kfree(fc_req);
+
+	} else {
+		vport->auth.auth_msg_state = LPFC_DHCHAP_SUCCESS;
+
+		if (vport->auth.challenge)
+			kfree(vport->auth.challenge);
+		vport->auth.challenge = NULL;
+		vport->auth.challenge_len = 0;
+
+		if (vport->auth.auth_state != LPFC_AUTH_SUCCESS) {
+			vport->auth.auth_state = LPFC_AUTH_SUCCESS;
+			lpfc_printf_vlog(vport, KERN_INFO, LOG_SECURITY,
+					 "1041 Authentication Successful\n");
+
+			lpfc_start_discovery(vport);
+
+		} else {
+			lpfc_printf_vlog(vport, KERN_INFO, LOG_SECURITY,
+				"1042 Re-Authentication Successful\n");
+		}
+		/* If config requires re-authentication start the timer */
+		if (vport->auth.reauth_interval)
+			mod_timer(&ndlp->nlp_reauth_tmr, jiffies +
+				vport->auth.reauth_interval * 60 * HZ);
+	}
+}
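
/*
 * Sketch of the timer arming at the end of the DHCHAP success path above:
 * the configured re-authentication interval, given in minutes, is converted
 * to a jiffies expiry for the per-node re-auth timer.  HZ, the interval and
 * the current jiffies value below are illustrative assumptions.
 */
#include <stdio.h>

#define HZ 250				/* assumed tick rate for the example */

int main(void)
{
	unsigned long jiffies = 100000;		/* pretend current tick count */
	unsigned int reauth_interval = 10;	/* minutes, from the config */
	unsigned long expires = jiffies + reauth_interval * 60UL * HZ;

	printf("re-auth timer expires at jiffy %lu\n", expires);
	return 0;
}
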
+static void
+lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+		      struct lpfc_vport *vport, struct lpfc_iocbq *elsiocb)
+{
+	struct Scsi_Host  *shost;
+	struct lpfc_nodelist *ndlp;
+	struct ls_rjt stat;
+	uint32_t *payload;
+	uint32_t cmd, did, newnode, rjt_err = 0;
+	IOCB_t *icmd = &elsiocb->iocb;
+
+	if (vport == NULL || elsiocb->context2 == NULL)
+		goto dropit;
+
 	newnode = 0;
-	lp = (uint32_t *) mp->virt;
-	cmd = *lp++;
-	lpfc_post_buffer(phba, &psli->ring[LPFC_ELS_RING], 1, 1);
+	payload = ((struct lpfc_dmabuf *)elsiocb->context2)->virt;
+	cmd = *payload;
+	if ((phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) == 0)
+		lpfc_post_buffer(phba, pring, 1, 1);
 
+	did = icmd->un.rcvels.remoteID;
 	if (icmd->ulpStatus) {
-		lpfc_mbuf_free(phba, mp->virt, mp->phys);
-		kfree(mp);
-		drop_cmd = 1;
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV Unsol ELS:  status:x%x/x%x did:x%x",
+			icmd->ulpStatus, icmd->un.ulpWord[4], did);
 		goto dropit;
 	}
 
 	/* Check to see if link went down during discovery */
-	if (lpfc_els_chk_latt(phba)) {
-		lpfc_mbuf_free(phba, mp->virt, mp->phys);
-		kfree(mp);
-		drop_cmd = 1;
+	if (lpfc_els_chk_latt(vport))
 		goto dropit;
-	}
 
-	did = icmd->un.rcvels.remoteID;
-	ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, did);
+	/* Ignore traffic received during vport shutdown. */
+	if (vport->load_flag & FC_UNLOADING)
+		goto dropit;
+
+	ndlp = lpfc_findnode_did(vport, did);
 	if (!ndlp) {
 		/* Cannot find existing Fabric ndlp, so allocate a new one */
 		ndlp = mempool_alloc(phba->nlp_mem_pool, GFP_KERNEL);
-		if (!ndlp) {
-			lpfc_mbuf_free(phba, mp->virt, mp->phys);
-			kfree(mp);
-			drop_cmd = 1;
+		if (!ndlp)
 			goto dropit;
-		}
 
-		lpfc_nlp_init(phba, ndlp, did);
+		lpfc_nlp_init(vport, ndlp, did);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
 		newnode = 1;
 		if ((did & Fabric_DID_MASK) == Fabric_DID_MASK) {
 			ndlp->nlp_type |= NLP_FABRIC;
 		}
-		ndlp->nlp_state = NLP_STE_UNUSED_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_UNUSED_LIST);
+	}
+	else {
+		if (ndlp->nlp_state == NLP_STE_UNUSED_NODE) {
+			/* This is similar to the new node path */
+			lpfc_nlp_get(ndlp);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+			newnode = 1;
+		}
 	}
 
 	phba->fc_stat.elsRcvFrame++;
-	elsiocb->context1 = ndlp;
-	elsiocb->context2 = mp;
+	elsiocb->context1 = lpfc_nlp_get(ndlp);
+	elsiocb->vport = vport;
 
 	if ((cmd & ELS_CMD_MASK) == ELS_CMD_RSCN) {
 		cmd &= ELS_CMD_MASK;
 	}
 	/* ELS command <elsCmd> received from NPORT <did> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
-			"%d:0112 ELS command x%x received from NPORT x%x "
-			"Data: x%x\n", phba->brd_no, cmd, did, phba->hba_state);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0112 ELS command x%x received from NPORT x%x "
+			 "Data: x%x\n", cmd, did, vport->port_state);
 	switch (cmd) {
 	case ELS_CMD_PLOGI:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV PLOGI:       did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvPLOGI++;
-		if (phba->hba_state < LPFC_DISC_AUTH) {
-			rjt_err = 1;
+		ndlp = lpfc_plogi_confirm_nport(phba, payload, ndlp);
+
+		if (vport->port_state < LPFC_DISC_AUTH) {
+			rjt_err = LSRJT_UNABLE_TPC;
 			break;
 		}
-		ndlp = lpfc_plogi_confirm_nport(phba, mp, ndlp);
-		lpfc_disc_state_machine(phba, ndlp, elsiocb, NLP_EVT_RCV_PLOGI);
+
+		shost = lpfc_shost_from_vport(vport);
+		spin_lock_irq(shost->host_lock);
+		ndlp->nlp_flag &= ~NLP_TARGET_REMOVE;
+		spin_unlock_irq(shost->host_lock);
+
+		lpfc_disc_state_machine(vport, ndlp, elsiocb,
+					NLP_EVT_RCV_PLOGI);
+
 		break;
 	case ELS_CMD_FLOGI:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV FLOGI:       did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvFLOGI++;
-		lpfc_els_rcv_flogi(phba, elsiocb, ndlp, newnode);
-		if (newnode) {
-			lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
-		}
+		lpfc_els_rcv_flogi(vport, elsiocb, ndlp);
+		if (newnode)
+			lpfc_nlp_put(ndlp);
 		break;
 	case ELS_CMD_LOGO:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV LOGO:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvLOGO++;
-		if (phba->hba_state < LPFC_DISC_AUTH) {
-			rjt_err = 1;
+		if (vport->port_state < LPFC_DISC_AUTH) {
+			rjt_err = LSRJT_UNABLE_TPC;
 			break;
 		}
-		lpfc_disc_state_machine(phba, ndlp, elsiocb, NLP_EVT_RCV_LOGO);
+		lpfc_disc_state_machine(vport, ndlp, elsiocb, NLP_EVT_RCV_LOGO);
 		break;
 	case ELS_CMD_PRLO:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV PRLO:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvPRLO++;
-		if (phba->hba_state < LPFC_DISC_AUTH) {
-			rjt_err = 1;
+		if (vport->port_state < LPFC_DISC_AUTH) {
+			rjt_err = LSRJT_UNABLE_TPC;
 			break;
 		}
-		lpfc_disc_state_machine(phba, ndlp, elsiocb, NLP_EVT_RCV_PRLO);
+		lpfc_disc_state_machine(vport, ndlp, elsiocb, NLP_EVT_RCV_PRLO);
 		break;
 	case ELS_CMD_RSCN:
 		phba->fc_stat.elsRcvRSCN++;
-		lpfc_els_rcv_rscn(phba, elsiocb, ndlp, newnode);
-		if (newnode) {
-			lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
-		}
+		lpfc_els_rcv_rscn(vport, elsiocb, ndlp);
+		if (newnode)
+			lpfc_nlp_put(ndlp);
 		break;
 	case ELS_CMD_ADISC:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV ADISC:       did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvADISC++;
-		if (phba->hba_state < LPFC_DISC_AUTH) {
-			rjt_err = 1;
+		if (vport->port_state < LPFC_DISC_AUTH) {
+			rjt_err = LSRJT_UNABLE_TPC;
 			break;
 		}
-		lpfc_disc_state_machine(phba, ndlp, elsiocb, NLP_EVT_RCV_ADISC);
+		lpfc_disc_state_machine(vport, ndlp, elsiocb,
+					NLP_EVT_RCV_ADISC);
 		break;
 	case ELS_CMD_PDISC:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV PDISC:       did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvPDISC++;
-		if (phba->hba_state < LPFC_DISC_AUTH) {
-			rjt_err = 1;
+		if (vport->port_state < LPFC_DISC_AUTH) {
+			rjt_err = LSRJT_UNABLE_TPC;
 			break;
 		}
-		lpfc_disc_state_machine(phba, ndlp, elsiocb, NLP_EVT_RCV_PDISC);
+		lpfc_disc_state_machine(vport, ndlp, elsiocb,
+					NLP_EVT_RCV_PDISC);
 		break;
 	case ELS_CMD_FARPR:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV FARPR:       did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvFARPR++;
-		lpfc_els_rcv_farpr(phba, elsiocb, ndlp);
+		lpfc_els_rcv_farpr(vport, elsiocb, ndlp);
 		break;
 	case ELS_CMD_FARP:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV FARP:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvFARP++;
-		lpfc_els_rcv_farp(phba, elsiocb, ndlp);
+		lpfc_els_rcv_farp(vport, elsiocb, ndlp);
 		break;
 	case ELS_CMD_FAN:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV FAN:         did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvFAN++;
-		lpfc_els_rcv_fan(phba, elsiocb, ndlp);
+		lpfc_els_rcv_fan(vport, elsiocb, ndlp);
 		break;
 	case ELS_CMD_PRLI:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV PRLI:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvPRLI++;
-		if (phba->hba_state < LPFC_DISC_AUTH) {
-			rjt_err = 1;
+		if (vport->port_state < LPFC_DISC_AUTH) {
+			rjt_err = LSRJT_UNABLE_TPC;
 			break;
 		}
-		lpfc_disc_state_machine(phba, ndlp, elsiocb, NLP_EVT_RCV_PRLI);
+		lpfc_disc_state_machine(vport, ndlp, elsiocb, NLP_EVT_RCV_PRLI);
 		break;
 	case ELS_CMD_LIRR:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV LIRR:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvLIRR++;
-		lpfc_els_rcv_lirr(phba, elsiocb, ndlp);
-		if (newnode) {
-			lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
-		}
+		lpfc_els_rcv_lirr(vport, elsiocb, ndlp);
+		if (newnode)
+			lpfc_nlp_put(ndlp);
 		break;
 	case ELS_CMD_RPS:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV RPS:         did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvRPS++;
-		lpfc_els_rcv_rps(phba, elsiocb, ndlp);
-		if (newnode) {
-			lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
-		}
+		lpfc_els_rcv_rps(vport, elsiocb, ndlp);
+		if (newnode)
+			lpfc_nlp_put(ndlp);
 		break;
 	case ELS_CMD_RPL:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV RPL:         did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvRPL++;
-		lpfc_els_rcv_rpl(phba, elsiocb, ndlp);
-		if (newnode) {
-			lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
-		}
+		lpfc_els_rcv_rpl(vport, elsiocb, ndlp);
+		if (newnode)
+			lpfc_nlp_put(ndlp);
 		break;
 	case ELS_CMD_RNID:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV RNID:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
 		phba->fc_stat.elsRcvRNID++;
-		lpfc_els_rcv_rnid(phba, elsiocb, ndlp);
-		if (newnode) {
-			lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
-		}
+		lpfc_els_rcv_rnid(vport, elsiocb, ndlp);
+		if (newnode)
+			lpfc_nlp_put(ndlp);
 		break;
+	case ELS_CMD_AUTH_RJT:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV AUTH_RJT:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
+		lpfc_els_rcv_auth_rjt(vport, elsiocb, ndlp);
+		break;
+	case ELS_CMD_AUTH_NEG:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV AUTH_NEG:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
+		lpfc_els_rcv_auth_neg(vport, elsiocb, ndlp);
+		break;
+	case ELS_CMD_DH_CHA:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV DH_CHA:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
+		lpfc_els_rcv_chap_chal(vport, elsiocb, ndlp);
+		break;
+	case ELS_CMD_DH_REP:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV DH_REP:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
+		lpfc_els_rcv_chap_reply(vport, elsiocb, ndlp);
+		break;
+	case ELS_CMD_DH_SUC:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV DH_SUC:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
+		lpfc_els_rcv_chap_suc(vport, elsiocb, ndlp);
+		break;
+
+	case ELS_CMD_AUTH_DONE:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV AUTH_DONE:        did:x%x/ste:x%x flg:x%x",
+			did, vport->port_state, ndlp->nlp_flag);
+
+
 	default:
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL,
+			"RCV ELS cmd:     cmd:x%x did:x%x/ste:x%x",
+			cmd, did, vport->port_state);
+
 		/* Unsupported ELS command, reject */
-		rjt_err = 1;
+		rjt_err = LSRJT_INVALID_CMD;
 
 		/* Unknown ELS command <elsCmd> received from NPORT <did> */
-		lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
-				"%d:0115 Unknown ELS command x%x received from "
-				"NPORT x%x\n", phba->brd_no, cmd, did);
-		if (newnode) {
-			lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
-		}
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0115 Unknown ELS command x%x "
+				 "received from NPORT x%x\n", cmd, did);
+		if (newnode)
+			lpfc_nlp_put(ndlp);
 		break;
 	}
 
 	/* check if need to LS_RJT received ELS cmd */
 	if (rjt_err) {
-		stat.un.b.lsRjtRsvd0 = 0;
-		stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
+		memset(&stat, 0, sizeof(stat));
+		stat.un.b.lsRjtRsnCode = rjt_err;
 		stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
-		stat.un.b.vendorUnique = 0;
-		lpfc_els_rsp_reject(phba, stat.un.lsRjtError, elsiocb, ndlp);
+		lpfc_els_rsp_reject(vport, stat.un.lsRjtError, elsiocb, ndlp,
+			NULL);
 	}
 
-	if (elsiocb->context2) {
-		lpfc_mbuf_free(phba, mp->virt, mp->phys);
-		kfree(mp);
-	}
+	lpfc_nlp_put(elsiocb->context1);
+	elsiocb->context1 = NULL;
+	return;
+
 dropit:
-	/* check if need to drop received ELS cmd */
-	if (drop_cmd == 1) {
+	if (vport && !(vport->load_flag & FC_UNLOADING))
 		lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
-				"%d:0111 Dropping received ELS cmd "
-				"Data: x%x x%x x%x\n", phba->brd_no,
-				icmd->ulpStatus, icmd->un.ulpWord[4],
-				icmd->ulpTimeout);
-		phba->fc_stat.elsRcvDrop++;
+			"(%d):0111 Dropping received ELS cmd "
+			"Data: x%x x%x x%x\n",
+			vport->vpi, icmd->ulpStatus,
+			icmd->un.ulpWord[4], icmd->ulpTimeout);
+	phba->fc_stat.elsRcvDrop++;
+}
+
+static struct lpfc_vport *
+lpfc_find_vport_by_vpid(struct lpfc_hba *phba, uint16_t vpi)
+{
+	struct lpfc_vport *vport;
+	unsigned long flags;
+
+	spin_lock_irqsave(&phba->hbalock, flags);
+	list_for_each_entry(vport, &phba->port_list, listentry) {
+		if (vport->vpi == vpi) {
+			spin_unlock_irqrestore(&phba->hbalock, flags);
+			return vport;
+		}
+	}
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+	return NULL;
+}
+
+void
+lpfc_els_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+		     struct lpfc_iocbq *elsiocb)
+{
+	struct lpfc_vport *vport = phba->pport;
+	IOCB_t *icmd = &elsiocb->iocb;
+	dma_addr_t paddr;
+	struct lpfc_dmabuf *bdeBuf1 = elsiocb->context2;
+	struct lpfc_dmabuf *bdeBuf2 = elsiocb->context3;
+
+	elsiocb->context1 = NULL;
+	elsiocb->context2 = NULL;
+	elsiocb->context3 = NULL;
+
+	if (icmd->ulpStatus == IOSTAT_NEED_BUFFER) {
+		lpfc_sli_hbqbuf_add_hbqs(phba, LPFC_ELS_HBQ);
+	} else if (icmd->ulpStatus == IOSTAT_LOCAL_REJECT &&
+	    (icmd->un.ulpWord[4] & 0xff) == IOERR_RCV_BUFFER_WAITING) {
+		phba->fc_stat.NoRcvBuf++;
+		/* Not enough posted buffers; Try posting more buffers */
+		if (!(phba->sli3_options & LPFC_SLI3_HBQ_ENABLED))
+			lpfc_post_buffer(phba, pring, 0, 1);
+		return;
+	}
+
+	if ((phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) &&
+	    (icmd->ulpCommand == CMD_IOCB_RCV_ELS64_CX ||
+	     icmd->ulpCommand == CMD_IOCB_RCV_SEQ64_CX)) {
+		if (icmd->unsli3.rcvsli3.vpi == 0xffff)
+			vport = phba->pport;
+		else {
+			uint16_t vpi = icmd->unsli3.rcvsli3.vpi;
+			vport = lpfc_find_vport_by_vpid(phba, vpi);
+		}
+	}
+	/* If there are no BDEs associated
+	 * with this IOCB, there is nothing to do.
+	 */
+	if (icmd->ulpBdeCount == 0)
+		return;
+
+	/* type of ELS cmd is first 32bit word
+	 * in packet
+	 */
+	if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
+		elsiocb->context2 = bdeBuf1;
+	} else {
+		paddr = getPaddr(icmd->un.cont64[0].addrHigh,
+				 icmd->un.cont64[0].addrLow);
+		elsiocb->context2 = lpfc_sli_ringpostbuf_get(phba, pring,
+							     paddr);
+	}
+
+	lpfc_els_unsol_buffer(phba, pring, vport, elsiocb);
+	/*
+	 * The different unsolicited event handlers tell us whether they
+	 * are done with "mp" by setting context2 to NULL.
+	 */
+	if (elsiocb->context2) {
+		lpfc_in_buf_free(phba, (struct lpfc_dmabuf *)elsiocb->context2);
+		elsiocb->context2 = NULL;
+	}
+
+	/* RCV_ELS64_CX provide for 2 BDEs - process 2nd if included */
+	if ((phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) &&
+	    icmd->ulpBdeCount == 2) {
+		elsiocb->context2 = bdeBuf2;
+		lpfc_els_unsol_buffer(phba, pring, vport, elsiocb);
+		/* free mp if we are done with it */
+		if (elsiocb->context2) {
+			lpfc_in_buf_free(phba, elsiocb->context2);
+			elsiocb->context2 = NULL;
+		}
+	}
+}
+
+void
+lpfc_do_scr_ns_plogi(struct lpfc_hba *phba, struct lpfc_vport *vport)
+{
+	struct lpfc_nodelist *ndlp, *ndlp_fdmi;
+
+	ndlp = lpfc_findnode_did(vport, NameServer_DID);
+	if (!ndlp) {
+		ndlp = mempool_alloc(phba->nlp_mem_pool, GFP_KERNEL);
+		if (!ndlp) {
+			if (phba->fc_topology == TOPOLOGY_LOOP) {
+				lpfc_disc_start(vport);
+				return;
+			}
+			lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+					 "0251 NameServer login: no memory\n");
+			return;
+		}
+		lpfc_nlp_init(vport, ndlp, NameServer_DID);
+		ndlp->nlp_type |= NLP_FABRIC;
+	}
+
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE);
+
+	if (lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0)) {
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0252 Cannot issue NameServer login\n");
+		return;
+	}
+
+	if (vport->cfg_fdmi_on) {
+		ndlp_fdmi = mempool_alloc(phba->nlp_mem_pool,
+					  GFP_KERNEL);
+		if (ndlp_fdmi) {
+			lpfc_nlp_init(vport, ndlp_fdmi, FDMI_DID);
+			ndlp_fdmi->nlp_type |= NLP_FABRIC;
+			ndlp_fdmi->nlp_state =
+				NLP_STE_PLOGI_ISSUE;
+			lpfc_issue_els_plogi(vport, ndlp_fdmi->nlp_DID,
+					     0);
+		}
+	}
+	return;
+}
+
+static void
+lpfc_cmpl_reg_new_vport(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+{
+	struct lpfc_vport *vport = pmb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2;
+	MAILBOX_t *mb = &pmb->mb;
+
+	vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
+	lpfc_nlp_put(ndlp);
+
+	if (mb->mbxStatus) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+				 "0915 Register VPI failed: 0x%x\n",
+				 mb->mbxStatus);
+
+		switch (mb->mbxStatus) {
+		case 0x11:	/* unsupported feature */
+		case 0x9603:	/* max_vpi exceeded */
+			/* giving up on vport registration */
+			lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+			spin_lock_irq(shost->host_lock);
+			vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
+			spin_unlock_irq(shost->host_lock);
+			lpfc_can_disctmo(vport);
+			break;
+		default:
+			/* Try to recover from this error */
+			lpfc_mbx_unreg_vpi(vport);
+			vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
+			lpfc_initial_fdisc(vport);
+			break;
+		}
+
+	} else {
+		if (vport == phba->pport)
+			lpfc_issue_fabric_reglogin(vport);
+		else if (vport->auth.security_active) {
+			lpfc_start_authentication(vport, ndlp);
+		} else {
+			lpfc_do_scr_ns_plogi(phba, vport);
+		}
+	}
+	mempool_free(pmb, phba->mbox_mem_pool);
+	return;
+}
+
+void
+lpfc_register_new_vport(struct lpfc_hba *phba, struct lpfc_vport *vport,
+			struct lpfc_nodelist *ndlp)
+{
+	LPFC_MBOXQ_t *mbox;
+
+	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (mbox) {
+		lpfc_reg_vpi(phba, vport->vpi, vport->fc_myDID, mbox);
+		mbox->vport = vport;
+		mbox->context2 = lpfc_nlp_get(ndlp);
+		mbox->mbox_cmpl = lpfc_cmpl_reg_new_vport;
+		if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
+		    == MBX_NOT_FINISHED) {
+			mempool_free(mbox, phba->mbox_mem_pool);
+			vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
+
+			lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+				"0253 Register VPI: Can't send mbox\n");
+		}
+	} else {
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+				 "0254 Register VPI: no memory\n");
+
+		vport->fc_flag &= ~FC_VPORT_NEEDS_REG_VPI;
+		lpfc_nlp_put(ndlp);
+	}
+}
+
+static void
+lpfc_cmpl_els_fdisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		    struct lpfc_iocbq *rspiocb)
+{
+	struct lpfc_vport *vport = cmdiocb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+	struct lpfc_nodelist *np;
+	struct lpfc_nodelist *next_np;
+	IOCB_t *irsp = &rspiocb->iocb;
+	struct lpfc_iocbq *piocb;
+	struct lpfc_dmabuf *pcmd = cmdiocb->context2, *prsp;
+	struct serv_parm *sp;
+	struct fc_auth_req auth_req;
+	struct fc_auth_rsp *auth_rsp;
+
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0123 FDISC completes. x%x/x%x prevDID: x%x\n",
+			 irsp->ulpStatus, irsp->un.ulpWord[4],
+			 vport->fc_prevDID);
+	/* Since all FDISCs are single threaded, we must
+	 * reset the discovery timer for ALL vports
+	 * waiting to send an FDISC when one completes.
+	 */
+	list_for_each_entry(piocb, &phba->fabric_iocb_list, list) {
+		lpfc_set_disctmo(piocb->vport);
+	}
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"FDISC cmpl:      status:x%x/x%x prevdid:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4], vport->fc_prevDID);
+
+	if (irsp->ulpStatus) {
+		/* Check for retry */
+		if (lpfc_els_retry(phba, cmdiocb, rspiocb))
+			goto out;
+		/* FDISC failed */
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0124 FDISC failed. (%d/%d)\n",
+				 irsp->ulpStatus, irsp->un.ulpWord[4]);
+		goto fdisc_failed;
+	}
+
+	prsp = list_get_first(&pcmd->list, struct lpfc_dmabuf, list);
+	sp = prsp->virt + sizeof(uint32_t);
+
+	if (vport->cfg_enable_auth) {
+		auth_req.local_wwpn = wwn_to_u64(vport->fc_portname.u.wwn);
+		memcpy(&auth_req.remote_wwpn, &sp->portName,
+			sizeof(struct lpfc_name));
+
+		if ((auth_rsp = kmalloc(sizeof(struct fc_auth_rsp),
+			GFP_KERNEL)) == NULL) {
+			lpfc_printf_log(vport->phba,
+				KERN_WARNING, LOG_SECURITY,
+				"%d:1050 Security config request: no buffers\n",
+				vport->phba->brd_no);
+			goto fdisc_failed;
+		}
+		vport->auth.auth_mode = FC_AUTHMODE_UNKNOWN;
+		if (lpfc_fc_security_get_config(shost, &auth_req,
+			sizeof(struct fc_auth_req),
+			auth_rsp, sizeof(struct fc_auth_rsp))) {
+			kfree(auth_rsp);
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+					 "1053 Unable to get security "
+					 "config.\n");
+			goto fdisc_failed;
+		}
+		lpfc_security_config_wait(vport);
+		if (vport->auth.auth_mode == FC_AUTHMODE_ACTIVE) {
+			vport->auth.security_active = 1;
+		} else if (vport->auth.auth_mode == FC_AUTHMODE_PASSIVE) {
+			if (sp->cmn.security)
+				vport->auth.security_active = 1;
+			else
+				vport->auth.security_active = 0;
+		} else {
+			vport->auth.security_active = 0;
+			if (sp->cmn.security) {
+				lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+						 "1051 Authentication mode is "
+						 "disabled, but is required "
+						 "by the fabric.\n");
+				goto fdisc_failed;
+			}
+		}
+	} else {
+		vport->auth.security_active = 0;
+		if (sp->cmn.security) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+					 "1056 Authentication mode is "
+					 "disabled, but is required "
+					 "by the fabric.\n");
+			goto fdisc_failed;
+		}
+	}
+	spin_lock_irq(shost->host_lock);
+	vport->fc_flag |= FC_FABRIC;
+	if (vport->phba->fc_topology == TOPOLOGY_LOOP)
+		vport->fc_flag |=  FC_PUBLIC_LOOP;
+	spin_unlock_irq(shost->host_lock);
+
+	vport->fc_myDID = irsp->un.ulpWord[4] & Mask_DID;
+	lpfc_vport_set_state(vport, FC_VPORT_ACTIVE);
+	if ((vport->fc_prevDID != vport->fc_myDID) &&
+		!(vport->fc_flag & FC_VPORT_NEEDS_REG_VPI)) {
+		/* If our NportID changed, we need to ensure all
+		 * remaining NPORTs get unreg_login'ed so we can
+		 * issue unreg_vpi.
+		 */
+		list_for_each_entry_safe(np, next_np,
+			&vport->fc_nodes, nlp_listp) {
+			if (np->nlp_state != NLP_STE_NPR_NODE
+			   || !(np->nlp_flag & NLP_NPR_ADISC))
+				continue;
+			spin_lock_irq(shost->host_lock);
+			np->nlp_flag &= ~NLP_NPR_ADISC;
+			spin_unlock_irq(shost->host_lock);
+			lpfc_unreg_rpi(vport, np);
+		}
+		lpfc_mbx_unreg_vpi(vport);
+		vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
+	}
+
+	if (vport->fc_flag & FC_VPORT_NEEDS_REG_VPI)
+		lpfc_register_new_vport(phba, vport, ndlp);
+	else if (vport->auth.security_active) {
+		lpfc_start_authentication(vport, ndlp);
+	} else {
+		lpfc_do_scr_ns_plogi(phba, vport);
+	}
+	lpfc_nlp_put(ndlp); /* Free Fabric ndlp for vports */
+	lpfc_els_free_iocb(phba, cmdiocb);
+	return;
+
+fdisc_failed:
+	lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+
+	lpfc_nlp_put(ndlp);
+	/* Cancel discovery timer */
+	lpfc_can_disctmo(vport);
+out:
+	lpfc_els_free_iocb(phba, cmdiocb);
+	return;
+}
+
+int
+lpfc_issue_els_fdisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+		     uint8_t retry)
+{
+	struct lpfc_hba *phba = vport->phba;
+	IOCB_t *icmd;
+	struct lpfc_iocbq *elsiocb;
+	struct serv_parm *sp;
+	uint8_t *pcmd;
+	uint16_t cmdsize;
+	int did = ndlp->nlp_DID;
+	int rc;
+
+	cmdsize = (sizeof(uint32_t) + sizeof(struct serv_parm));
+	elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp, did,
+				     ELS_CMD_FDISC);
+	if (!elsiocb) {
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0255 Issue FDISC: no IOCB\n");
+		return 1;
+	}
+
+	icmd = &elsiocb->iocb;
+	icmd->un.elsreq64.myID = 0;
+	icmd->un.elsreq64.fl = 1;
+
+	/* For FDISC, Let FDISC rsp set the NPortID for this VPI */
+	icmd->ulpCt_h = 1;
+	icmd->ulpCt_l = 0;
+
+	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+	*((uint32_t *) (pcmd)) = ELS_CMD_FDISC;
+	pcmd += sizeof(uint32_t); /* CSP Word 1 */
+	memcpy(pcmd, &vport->phba->pport->fc_sparam, sizeof(struct serv_parm));
+	sp = (struct serv_parm *) pcmd;
+	/* Setup CSPs accordingly for Fabric */
+	sp->cmn.e_d_tov = 0;
+	sp->cmn.w2.r_a_tov = 0;
+	sp->cls1.classValid = 0;
+	sp->cls2.seqDelivery = 1;
+	sp->cls3.seqDelivery = 1;
+
+	/* Set the security service parameter */
+	if (vport->cfg_enable_auth)
+		sp->cmn.security = 1;
+
+	pcmd += sizeof(uint32_t); /* CSP Word 2 */
+	pcmd += sizeof(uint32_t); /* CSP Word 3 */
+	pcmd += sizeof(uint32_t); /* CSP Word 4 */
+	pcmd += sizeof(uint32_t); /* Port Name */
+	memcpy(pcmd, &vport->fc_portname, 8);
+	pcmd += sizeof(uint32_t); /* Node Name */
+	pcmd += sizeof(uint32_t); /* Node Name */
+	memcpy(pcmd, &vport->fc_nodename, 8);
+
+	lpfc_set_disctmo(vport);
+
+	phba->fc_stat.elsXmitFDISC++;
+	elsiocb->iocb_cmpl = lpfc_cmpl_els_fdisc;
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Issue FDISC:     did:x%x",
+		did, 0, 0);
+
+	rc = lpfc_issue_fabric_iocb(phba, elsiocb);
+	if (rc == IOCB_ERROR) {
+		lpfc_els_free_iocb(phba, elsiocb);
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0256 Issue FDISC: Cannot send IOCB\n");
+		return 1;
+	}
+	lpfc_vport_set_state(vport, FC_VPORT_INITIALIZING);
+	vport->port_state = LPFC_FDISC;
+	return 0;
+}
+
+static void
+lpfc_cmpl_els_npiv_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			struct lpfc_iocbq *rspiocb)
+{
+	struct lpfc_vport *vport = cmdiocb->vport;
+	IOCB_t *irsp;
+
+	irsp = &rspiocb->iocb;
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"LOGO npiv cmpl:  status:x%x/x%x did:x%x",
+		irsp->ulpStatus, irsp->un.ulpWord[4], irsp->un.rcvels.remoteID);
+
+	lpfc_els_free_iocb(phba, cmdiocb);
+	vport->unreg_vpi_cmpl = VPORT_ERROR;
+}
+
+int
+lpfc_issue_els_npiv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
+	struct lpfc_sli_ring *pring = &phba->sli.ring[LPFC_ELS_RING];
+	IOCB_t *icmd;
+	struct lpfc_iocbq *elsiocb;
+	uint8_t *pcmd;
+	uint16_t cmdsize;
+
+	cmdsize = 2 * sizeof(uint32_t) + sizeof(struct lpfc_name);
+	elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, 0, ndlp, ndlp->nlp_DID,
+				     ELS_CMD_LOGO);
+	if (!elsiocb)
+		return 1;
+
+	icmd = &elsiocb->iocb;
+	pcmd = (uint8_t *) (((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+	*((uint32_t *) (pcmd)) = ELS_CMD_LOGO;
+	pcmd += sizeof(uint32_t);
+
+	/* Fill in LOGO payload */
+	*((uint32_t *) (pcmd)) = be32_to_cpu(vport->fc_myDID);
+	pcmd += sizeof(uint32_t);
+	memcpy(pcmd, &vport->fc_portname, sizeof(struct lpfc_name));
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Issue LOGO npiv  did:x%x flg:x%x",
+		ndlp->nlp_DID, ndlp->nlp_flag, 0);
+
+	elsiocb->iocb_cmpl = lpfc_cmpl_els_npiv_logo;
+	spin_lock_irq(shost->host_lock);
+	ndlp->nlp_flag |= NLP_LOGO_SND;
+	spin_unlock_irq(shost->host_lock);
+	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
+		spin_lock_irq(shost->host_lock);
+		ndlp->nlp_flag &= ~NLP_LOGO_SND;
+		spin_unlock_irq(shost->host_lock);
+		lpfc_els_free_iocb(phba, elsiocb);
+		return 1;
 	}
+	return 0;
+}
+
+void
+lpfc_fabric_block_timeout(unsigned long ptr)
+{
+	struct lpfc_hba  *phba = (struct lpfc_hba *) ptr;
+	unsigned long iflags;
+	uint32_t tmo_posted;
+	spin_lock_irqsave(&phba->pport->work_port_lock, iflags);
+	tmo_posted = phba->pport->work_port_events & WORKER_FABRIC_BLOCK_TMO;
+	if (!tmo_posted)
+		phba->pport->work_port_events |= WORKER_FABRIC_BLOCK_TMO;
+	spin_unlock_irqrestore(&phba->pport->work_port_lock, iflags);
+
+	if (!tmo_posted) {
+		spin_lock_irqsave(&phba->hbalock, iflags);
+		if (phba->work_wait)
+			lpfc_worker_wake_up(phba);
+		spin_unlock_irqrestore(&phba->hbalock, iflags);
+	}
+}
+
+static void
+lpfc_resume_fabric_iocbs(struct lpfc_hba *phba)
+{
+	struct lpfc_iocbq *iocb;
+	unsigned long iflags;
+	int ret;
+	struct lpfc_sli_ring *pring = &phba->sli.ring[LPFC_ELS_RING];
+	IOCB_t *cmd;
+
+repeat:
+	iocb = NULL;
+	spin_lock_irqsave(&phba->hbalock, iflags);
+	/* Post any pending iocb to the SLI layer */
+	if (atomic_read(&phba->fabric_iocb_count) == 0) {
+		list_remove_head(&phba->fabric_iocb_list, iocb, typeof(*iocb),
+				 list);
+		if (iocb)
+			atomic_inc(&phba->fabric_iocb_count);
+	}
+	spin_unlock_irqrestore(&phba->hbalock, iflags);
+	if (iocb) {
+		iocb->fabric_iocb_cmpl = iocb->iocb_cmpl;
+		iocb->iocb_cmpl = lpfc_cmpl_fabric_iocb;
+		iocb->iocb_flag |= LPFC_IO_FABRIC;
+
+		lpfc_debugfs_disc_trc(iocb->vport, LPFC_DISC_TRC_ELS_CMD,
+			"Fabric sched1:   ste:x%x",
+			iocb->vport->port_state, 0, 0);
+
+		ret = lpfc_sli_issue_iocb(phba, pring, iocb, 0);
+
+		if (ret == IOCB_ERROR) {
+			iocb->iocb_cmpl = iocb->fabric_iocb_cmpl;
+			iocb->fabric_iocb_cmpl = NULL;
+			iocb->iocb_flag &= ~LPFC_IO_FABRIC;
+			cmd = &iocb->iocb;
+			cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+			cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+			iocb->iocb_cmpl(phba, iocb, iocb);
+
+			atomic_dec(&phba->fabric_iocb_count);
+			goto repeat;
+		}
+	}
+
+	return;
+}
+
+void
+lpfc_unblock_fabric_iocbs(struct lpfc_hba *phba)
+{
+	clear_bit(FABRIC_COMANDS_BLOCKED, &phba->bit_flags);
+
+	lpfc_resume_fabric_iocbs(phba);
+	return;
+}
+
+static void
+lpfc_block_fabric_iocbs(struct lpfc_hba *phba)
+{
+	int blocked;
+
+	blocked = test_and_set_bit(FABRIC_COMANDS_BLOCKED, &phba->bit_flags);
+	/* Start a timer to unblock fabric
+	 * iocbs after 100ms
+	 */
+	if (!blocked)
+		mod_timer(&phba->fabric_block_timer, jiffies + HZ/10);
+
 	return;
 }
+
+static void
+lpfc_cmpl_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+	struct lpfc_iocbq *rspiocb)
+{
+	struct ls_rjt stat;
+
+	if ((cmdiocb->iocb_flag & LPFC_IO_FABRIC) != LPFC_IO_FABRIC)
+		BUG();
+
+	switch (rspiocb->iocb.ulpStatus) {
+		case IOSTAT_NPORT_RJT:
+		case IOSTAT_FABRIC_RJT:
+			if (rspiocb->iocb.un.ulpWord[4] & RJT_UNAVAIL_TEMP) {
+				lpfc_block_fabric_iocbs(phba);
+			}
+			break;
+
+		case IOSTAT_NPORT_BSY:
+		case IOSTAT_FABRIC_BSY:
+			lpfc_block_fabric_iocbs(phba);
+			break;
+
+		case IOSTAT_LS_RJT:
+			stat.un.lsRjtError =
+				be32_to_cpu(rspiocb->iocb.un.ulpWord[4]);
+			if ((stat.un.b.lsRjtRsnCode == LSRJT_UNABLE_TPC) ||
+				(stat.un.b.lsRjtRsnCode == LSRJT_LOGICAL_BSY))
+				lpfc_block_fabric_iocbs(phba);
+			break;
+	}
+
+	if (atomic_read(&phba->fabric_iocb_count) == 0)
+		BUG();
+
+	cmdiocb->iocb_cmpl = cmdiocb->fabric_iocb_cmpl;
+	cmdiocb->fabric_iocb_cmpl = NULL;
+	cmdiocb->iocb_flag &= ~LPFC_IO_FABRIC;
+	cmdiocb->iocb_cmpl(phba, cmdiocb, rspiocb);
+
+	atomic_dec(&phba->fabric_iocb_count);
+	if (!test_bit(FABRIC_COMANDS_BLOCKED, &phba->bit_flags)) {
+		/* Post any pending iocbs to HBA */
+		lpfc_resume_fabric_iocbs(phba);
+	}
+}
+
+int
+lpfc_issue_fabric_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *iocb)
+{
+	unsigned long iflags;
+	struct lpfc_sli_ring *pring = &phba->sli.ring[LPFC_ELS_RING];
+	int ready;
+	int ret;
+
+	if (atomic_read(&phba->fabric_iocb_count) > 1)
+		BUG();
+
+	spin_lock_irqsave(&phba->hbalock, iflags);
+	ready = atomic_read(&phba->fabric_iocb_count) == 0 &&
+		!test_bit(FABRIC_COMANDS_BLOCKED, &phba->bit_flags);
+
+	spin_unlock_irqrestore(&phba->hbalock, iflags);
+	if (ready) {
+		iocb->fabric_iocb_cmpl = iocb->iocb_cmpl;
+		iocb->iocb_cmpl = lpfc_cmpl_fabric_iocb;
+		iocb->iocb_flag |= LPFC_IO_FABRIC;
+
+		lpfc_debugfs_disc_trc(iocb->vport, LPFC_DISC_TRC_ELS_CMD,
+			"Fabric sched2:   ste:x%x",
+			iocb->vport->port_state, 0, 0);
+
+		atomic_inc(&phba->fabric_iocb_count);
+		ret = lpfc_sli_issue_iocb(phba, pring, iocb, 0);
+
+		if (ret == IOCB_ERROR) {
+			iocb->iocb_cmpl = iocb->fabric_iocb_cmpl;
+			iocb->fabric_iocb_cmpl = NULL;
+			iocb->iocb_flag &= ~LPFC_IO_FABRIC;
+			atomic_dec(&phba->fabric_iocb_count);
+		}
+	} else {
+		spin_lock_irqsave(&phba->hbalock, iflags);
+		list_add_tail(&iocb->list, &phba->fabric_iocb_list);
+		spin_unlock_irqrestore(&phba->hbalock, iflags);
+		ret = IOCB_SUCCESS;
+	}
+	return ret;
+}
+
+
+void lpfc_fabric_abort_vport(struct lpfc_vport *vport)
+{
+	LIST_HEAD(completions);
+	struct lpfc_hba  *phba = vport->phba;
+	struct lpfc_iocbq *tmp_iocb, *piocb;
+	IOCB_t *cmd;
+
+	spin_lock_irq(&phba->hbalock);
+	list_for_each_entry_safe(piocb, tmp_iocb, &phba->fabric_iocb_list,
+				 list) {
+
+		if (piocb->vport != vport)
+			continue;
+
+		list_move_tail(&piocb->list, &completions);
+	}
+	spin_unlock_irq(&phba->hbalock);
+
+	while (!list_empty(&completions)) {
+		piocb = list_get_first(&completions, struct lpfc_iocbq, list);
+		list_del_init(&piocb->list);
+
+		cmd = &piocb->iocb;
+		cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+		cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+		(piocb->iocb_cmpl) (phba, piocb, piocb);
+	}
+}
+
+void lpfc_fabric_abort_nport(struct lpfc_nodelist *ndlp)
+{
+	LIST_HEAD(completions);
+	struct lpfc_hba  *phba = ndlp->vport->phba;
+	struct lpfc_iocbq *tmp_iocb, *piocb;
+	struct lpfc_sli_ring *pring = &phba->sli.ring[LPFC_ELS_RING];
+	IOCB_t *cmd;
+
+	spin_lock_irq(&phba->hbalock);
+	list_for_each_entry_safe(piocb, tmp_iocb, &phba->fabric_iocb_list,
+				 list) {
+		if ((lpfc_check_sli_ndlp(phba, pring, piocb, ndlp))) {
+
+			list_move_tail(&piocb->list, &completions);
+		}
+	}
+	spin_unlock_irq(&phba->hbalock);
+
+	while (!list_empty(&completions)) {
+		piocb = list_get_first(&completions, struct lpfc_iocbq, list);
+		list_del_init(&piocb->list);
+
+		cmd = &piocb->iocb;
+		cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+		cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+		(piocb->iocb_cmpl) (phba, piocb, piocb);
+	}
+}
+
+void lpfc_fabric_abort_hba(struct lpfc_hba *phba)
+{
+	LIST_HEAD(completions);
+	struct lpfc_iocbq *piocb;
+	IOCB_t *cmd;
+
+	spin_lock_irq(&phba->hbalock);
+	list_splice_init(&phba->fabric_iocb_list, &completions);
+	spin_unlock_irq(&phba->hbalock);
+
+	while (!list_empty(&completions)) {
+		piocb = list_get_first(&completions, struct lpfc_iocbq, list);
+		list_del_init(&piocb->list);
+
+		cmd = &piocb->iocb;
+		cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+		cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+		(piocb->iocb_cmpl) (phba, piocb, piocb);
+	}
+}
+
+
+void lpfc_fabric_abort_flogi(struct lpfc_hba *phba)
+{
+	LIST_HEAD(completions);
+	struct lpfc_iocbq *tmp_iocb, *piocb;
+	IOCB_t *cmd;
+	struct lpfc_nodelist *ndlp;
+
+	spin_lock_irq(&phba->hbalock);
+	list_for_each_entry_safe(piocb, tmp_iocb, &phba->fabric_iocb_list,
+				 list) {
+
+		cmd = &piocb->iocb;
+		ndlp = (struct lpfc_nodelist *) piocb->context1;
+		if (cmd->ulpCommand == CMD_ELS_REQUEST64_CR &&
+		    ndlp != NULL &&
+		    ndlp->nlp_DID == Fabric_DID)
+			list_move_tail(&piocb->list, &completions);
+	}
+	spin_unlock_irq(&phba->hbalock);
+
+	while (!list_empty(&completions)) {
+		piocb = list_get_first(&completions, struct lpfc_iocbq, list);
+		list_del_init(&piocb->list);
+
+		cmd = &piocb->iocb;
+		cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+		cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+		(piocb->iocb_cmpl) (phba, piocb, piocb);
+	}
+}
+
+
+static void
+lpfc_cmpl_els_auth(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		   struct lpfc_iocbq *rspiocb)
+{
+	IOCB_t *irsp = &rspiocb->iocb;
+	struct lpfc_vport *vport = cmdiocb->vport;
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+
+	if (irsp->ulpStatus) {
+		if (irsp->ulpStatus == IOSTAT_LS_RJT) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+					 "1043 Authentication LS_RJT\n");
+		}
+		/* Check for retry */
+		if (!lpfc_els_retry(phba, cmdiocb, rspiocb)) {
+			if (irsp->ulpStatus != IOSTAT_LS_RJT) {
+				lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+						 "1045 Issue AUTH_NEG failed. "
+						 "Status:%x\n",
+						 irsp->ulpStatus);
+			}
+			if (vport->auth.auth_mode == FC_AUTHMODE_ACTIVE) {
+				lpfc_can_disctmo(vport);
+				lpfc_nlp_set_state(vport, ndlp,
+						   NLP_STE_NPR_NODE);
+				lpfc_issue_els_logo(vport, ndlp, 0);
+			}
+		}
+		lpfc_els_free_iocb(phba, cmdiocb);
+		return;
+	}
+
+	if (vport->auth.auth_msg_state == LPFC_DHCHAP_SUCCESS ||
+	    vport->auth.auth_msg_state == LPFC_DHCHAP_SUCCESS_REPLY) {
+
+		if (vport->auth.challenge)
+			kfree(vport->auth.challenge);
+		vport->auth.challenge = NULL;
+		vport->auth.challenge_len = 0;
+		if (vport->auth.dh_pub_key)
+			kfree(vport->auth.dh_pub_key);
+		vport->auth.dh_pub_key = NULL;
+		vport->auth.dh_pub_key_len = 0;
+
+		if (vport->auth.auth_msg_state == LPFC_DHCHAP_SUCCESS) {
+			if (vport->auth.auth_state != LPFC_AUTH_SUCCESS) {
+				lpfc_printf_vlog(vport, KERN_WARNING,
+						 LOG_SECURITY, "1046 "
+						 "Authentication Successful\n");
+				vport->auth.auth_state = LPFC_AUTH_SUCCESS;
+				lpfc_start_discovery(vport);
+			} else {
+				lpfc_printf_vlog(vport, KERN_WARNING,
+						 LOG_SECURITY,
+						 "1047 Re-Authentication"
+						 " Successful\n");
+			}
+		}
+		/* restart authentication timer */
+		if (vport->auth.reauth_interval)
+			mod_timer(&ndlp->nlp_reauth_tmr,
+				jiffies +
+				vport->auth.reauth_interval * 60 * HZ);
+	}
+	lpfc_els_free_iocb(phba, cmdiocb);
+}
+
+int
+lpfc_issue_els_auth(struct lpfc_vport *vport,
+		    struct lpfc_nodelist *ndlp,
+		    uint8_t message_code,
+		    uint8_t *payload,
+		    uint32_t payload_len)
+{
+	struct lpfc_hba *phba = vport->phba;
+	struct lpfc_iocbq *elsiocb;
+	struct lpfc_auth_message *authreq;
+
+	elsiocb = lpfc_prep_els_iocb(vport, 1,
+			sizeof(struct lpfc_auth_message) + payload_len,
+			0, ndlp, ndlp->nlp_DID, ELS_CMD_AUTH);
+
+	if (!elsiocb)
+		return 1;
+	authreq = (struct lpfc_auth_message *)
+		(((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+	authreq->command_code = ELS_CMD_AUTH_BYTE;
+	authreq->flags = 0;
+	authreq->message_code = message_code;
+	authreq->protocol_ver = AUTH_VERSION;
+	authreq->message_len = cpu_to_be32(payload_len);
+	authreq->trans_id = cpu_to_be32(vport->auth.trans_id);
+	memcpy(authreq->data, payload, payload_len);
+
+	elsiocb->iocb_cmpl = lpfc_cmpl_els_auth;
+
+	if (lpfc_sli_issue_iocb(phba, &phba->sli.ring[LPFC_ELS_RING],
+				elsiocb, 0) == IOCB_ERROR) {
+		lpfc_els_free_iocb(phba, elsiocb);
+		return 1;
+	}
+
+	return 0;
+}
+
+static void
+lpfc_cmpl_els_auth_reject(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		    struct lpfc_iocbq *rspiocb)
+{
+	struct lpfc_vport *vport = cmdiocb->vport;
+	IOCB_t *irsp = &rspiocb->iocb;
+
+	if (irsp->ulpStatus) {
+		/* Check for retry */
+		if (!lpfc_els_retry(phba, cmdiocb, rspiocb)) {
+			lpfc_printf_log(phba, KERN_ERR, LOG_ELS,
+					"1048 Issue AUTH_REJECT failed.\n");
+		}
+	}
+	else
+		vport->port_state = LPFC_VPORT_UNKNOWN;
+
+	lpfc_els_free_iocb(phba, cmdiocb);
+}
+
+int
+lpfc_issue_els_auth_reject(struct lpfc_vport *vport,
+					struct lpfc_nodelist *ndlp,
+					uint8_t reason, uint8_t explanation)
+{
+	struct lpfc_hba *phba = vport->phba;
+	struct lpfc_iocbq *elsiocb;
+	struct lpfc_sli_ring *pring;
+	struct lpfc_sli *psli;
+	struct lpfc_auth_message *authreq;
+	struct lpfc_auth_reject *reject;
+
+	psli = &phba->sli;
+	pring = &psli->ring[LPFC_ELS_RING];
+
+	vport->auth.auth_msg_state = LPFC_AUTH_REJECT;
+
+	elsiocb = lpfc_prep_els_iocb(vport, 1, sizeof(struct lpfc_auth_message)
+				     + sizeof(struct lpfc_auth_reject), 0, ndlp,
+				     ndlp->nlp_DID, ELS_CMD_AUTH);
+
+	if (!elsiocb)
+		return 1;
+
+	authreq = (struct lpfc_auth_message *)
+		(((struct lpfc_dmabuf *) elsiocb->context2)->virt);
+	authreq->command_code = ELS_CMD_AUTH_BYTE;
+	authreq->flags = 0;
+	authreq->message_code = AUTH_REJECT;
+	authreq->protocol_ver = AUTH_VERSION;
+	reject = (struct lpfc_auth_reject *)authreq->data;
+	memset(reject, 0, sizeof(struct lpfc_auth_reject));
+	reject->reason = reason;
+	reject->explanation = explanation;
+
+	authreq->message_len = cpu_to_be32(sizeof(struct lpfc_auth_reject));
+	authreq->trans_id = cpu_to_be32(vport->auth.trans_id);
+	elsiocb->iocb_cmpl = lpfc_cmpl_els_auth_reject;
+
+	if (lpfc_sli_issue_iocb(phba, pring, elsiocb, 0) == IOCB_ERROR) {
+		lpfc_els_free_iocb(phba, elsiocb);
+		return 1;
+	}
+
+	return 0;
+}
diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
index 0499a9a..1caf802 100644
--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
+++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
@@ -36,7 +36,8 @@
 #include "lpfc.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_crtn.h"
-#include "lpfc_ioctl.h"
+#include "lpfc_vport.h"
+#include "lpfc_debugfs.h"
 
 /* AlpaArray for assignment of scsid for scan-down and bind_method */
 static uint8_t lpfcAlpaArray[] = {
@@ -55,7 +56,43 @@ static uint8_t lpfcAlpaArray[] = {
 	0x10, 0x0F, 0x08, 0x04, 0x02, 0x01
 };
 
-static void lpfc_disc_timeout_handler(struct lpfc_hba *);
+static void lpfc_disc_timeout_handler(struct lpfc_vport *);
+
+void
+lpfc_start_discovery(struct lpfc_vport *vport)
+{
+	struct lpfc_hba *phba = vport->phba;
+	struct lpfc_vport **vports;
+	int i;
+
+	if (vport->auth.security_active &&
+	    vport->auth.auth_state != LPFC_AUTH_SUCCESS) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0266 Authentication not complete.\n");
+		return;
+	}
+	if (vport->port_type == LPFC_NPIV_PORT) {
+		lpfc_do_scr_ns_plogi(phba, vport);
+		return;
+	}
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
+			if (vports[i]->port_type == LPFC_PHYSICAL_PORT)
+				continue;
+			if (phba->link_flag & LS_NPIV_FAB_SUPPORTED)
+				lpfc_initial_fdisc(vports[i]);
+			else if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) {
+				lpfc_vport_set_state(vports[i],
+						     FC_VPORT_NO_FABRIC_SUPP);
+				lpfc_printf_vlog(vports[i], KERN_ERR, LOG_ELS,
+						 "0259 No NPIV Fabric "
+						 "support\n");
+			}
+		}
+	lpfc_destroy_vport_work_array(vports);
+	lpfc_do_scr_ns_plogi(phba, vport);
+}
 
 void
 lpfc_terminate_rport_io(struct fc_rport *rport)
@@ -75,15 +112,24 @@ lpfc_terminate_rport_io(struct fc_rport *rport)
 		return;
 	}
 
-	phba = ndlp->nlp_phba;
+	phba  = ndlp->vport->phba;
+
+	lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_RPORT,
+		"rport terminate: sid:x%x did:x%x flg:x%x",
+		ndlp->nlp_sid, ndlp->nlp_DID, ndlp->nlp_flag);
 
-	spin_lock_irq(phba->host->host_lock);
 	if (ndlp->nlp_sid != NLP_NO_SID) {
-		lpfc_sli_abort_iocb(phba, &phba->sli.ring[phba->sli.fcp_ring],
-			ndlp->nlp_sid, 0, 0, LPFC_CTX_TGT);
+		lpfc_sli_abort_iocb(ndlp->vport,
+			&phba->sli.ring[phba->sli.fcp_ring],
+			ndlp->nlp_sid, 0, LPFC_CTX_TGT);
 	}
-	spin_unlock_irq(phba->host->host_lock);
 
+	/*
+	 * A device is normally blocked for rediscovery and unblocked when
+	 * devloss timeout happens.  If a vport is removed or the driver is
+	 * unloaded before the devloss timeout happens, we need to unblock here.
+	 */
+	scsi_target_unblock(&rport->dev);
 	return;
 }
 
@@ -95,103 +141,265 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
 {
 	struct lpfc_rport_data *rdata;
 	struct lpfc_nodelist * ndlp;
-	uint8_t *name;
-	int warn_on = 0;
-	struct lpfc_hba *phba;
+	struct lpfc_vport *vport;
+	struct lpfc_hba   *phba;
+	struct lpfc_work_evt *evtp;
+	int  put_node;
+	int  put_rport;
 
 	rdata = rport->dd_data;
 	ndlp = rdata->pnode;
 
 	if (!ndlp) {
-		if (rport->roles & FC_RPORT_ROLE_FCP_TARGET)
+		if (rport->scsi_target_id != -1) {
 			printk(KERN_ERR "Cannot find remote node"
-			" for rport in dev_loss_tmo_callbk x%x\n",
-			rport->port_id);
+				" for rport in dev_loss_tmo_callbk x%x\n",
+				rport->port_id);
+		}
+		return;
+	}
+
+	vport = ndlp->vport;
+	phba  = vport->phba;
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT,
+		"rport devlosscb: sid:x%x did:x%x flg:x%x",
+		ndlp->nlp_sid, ndlp->nlp_DID, ndlp->nlp_flag);
+
+	/* Don't defer this if we are in the process of deleting the vport
+	 * or unloading the driver. The unload will clean up the node
+	 * appropriately; we just need to clean up the ndlp rport info here.
+	 */
+	if (vport->load_flag & FC_UNLOADING) {
+		put_node = rdata->pnode != NULL;
+		put_rport = ndlp->rport != NULL;
+		rdata->pnode = NULL;
+		ndlp->rport = NULL;
+		if (put_node)
+			lpfc_nlp_put(ndlp);
+		if (put_rport)
+			put_device(&rport->dev);
+		return;
+	}
+
+	if (ndlp->nlp_state == NLP_STE_MAPPED_NODE)
+		return;
+
+	evtp = &ndlp->dev_loss_evt;
+
+	if (!list_empty(&evtp->evt_listp))
+		return;
+
+	spin_lock_irq(&phba->hbalock);
+	evtp->evt_arg1  = ndlp;
+	evtp->evt       = LPFC_EVT_DEV_LOSS;
+	list_add_tail(&evtp->evt_listp, &phba->work_list);
+	if (phba->work_wait)
+		wake_up(phba->work_wait);
+
+	spin_unlock_irq(&phba->hbalock);
+
+	return;
+}
+
+/*
+ * This function is called from the worker thread when dev_loss_tmo
+ * expires.
+ */
+void
+lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
+{
+	struct lpfc_rport_data *rdata;
+	struct fc_rport   *rport;
+	struct lpfc_vport *vport;
+	struct lpfc_hba   *phba;
+	uint8_t *name;
+	int  put_node;
+	int  put_rport;
+	int warn_on = 0;
+
+	rport = ndlp->rport;
+
+	if (!rport)
+		return;
+
+	rdata = rport->dd_data;
+	name = (uint8_t *) &ndlp->nlp_portname;
+	vport = ndlp->vport;
+	phba  = vport->phba;
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT,
+		"rport devlosstmo:did:x%x type:x%x id:x%x",
+		ndlp->nlp_DID, ndlp->nlp_type, rport->scsi_target_id);
+
+	/* Don't defer this if we are in the process of deleting the vport
+	 * or unloading the driver. The unload will clean up the node
+	 * appropriately; we just need to clean up the ndlp rport info here.
+	 */
+	if (vport->load_flag & FC_UNLOADING) {
+		put_node = rdata->pnode != NULL;
+		put_rport = ndlp->rport != NULL;
+		rdata->pnode = NULL;
+		ndlp->rport = NULL;
+		if (put_node)
+			lpfc_nlp_put(ndlp);
+		if (put_rport)
+			put_device(&rport->dev);
 		return;
 	}
 
 	if (ndlp->nlp_state == NLP_STE_MAPPED_NODE)
 		return;
 
-	name = (uint8_t *)&ndlp->nlp_portname;
-	phba = ndlp->nlp_phba;
+	if (ndlp->nlp_type & NLP_FABRIC) {
+		/* We will clean up these Nodes in linkup */
+		put_node = rdata->pnode != NULL;
+		put_rport = ndlp->rport != NULL;
+		rdata->pnode = NULL;
+		ndlp->rport = NULL;
+		if (put_node)
+			lpfc_nlp_put(ndlp);
+		if (put_rport)
+			put_device(&rport->dev);
+		return;
+	}
 
-	spin_lock_irq(phba->host->host_lock);
+	if (!phba->cfg_dev_loss_initiator && rport->scsi_target_id == -1) {
+		/*
+		 * Until scsi_transport_fc provides support for timing out remote
+		 * initiator ports, provide a parameter to prevent initiator
+		 * traffic for CT or IP from failing until dev_loss timeout
+		 * expires.
+		 * This behavior is keyed off the lpfc_dev_loss_initiator
+		 * config parameter.
+		 */
+		put_node = rdata->pnode != NULL;
+		put_rport = ndlp->rport != NULL;
+		rdata->pnode = NULL;
+		ndlp->rport = NULL;
+		if (put_node)
+			lpfc_nlp_put(ndlp);
+		if (put_rport)
+			put_device(&rport->dev);
+		mod_timer(&ndlp->nlp_initiator_tmr,
+			  jiffies + HZ * vport->cfg_devloss_tmo);
+		return;
+	}
 
 	if (ndlp->nlp_sid != NLP_NO_SID) {
 		warn_on = 1;
 		/* flush the target */
-		lpfc_sli_abort_iocb(phba, &phba->sli.ring[phba->sli.fcp_ring],
-			ndlp->nlp_sid, 0, 0, LPFC_CTX_TGT);
+		lpfc_sli_abort_iocb(vport, &phba->sli.ring[phba->sli.fcp_ring],
+				    ndlp->nlp_sid, 0, LPFC_CTX_TGT);
 	}
-	if (phba->fc_flag & FC_UNLOADING)
+	if (vport->load_flag & FC_UNLOADING)
 		warn_on = 0;
 
-	spin_unlock_irq(phba->host->host_lock);
-
 	if (warn_on) {
-		lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
-				"%d:0203 Devloss timeout on "
-				"WWPN %x:%x:%x:%x:%x:%x:%x:%x "
-				"NPort x%x Data: x%x x%x x%x\n",
-				phba->brd_no,
-				*name, *(name+1), *(name+2), *(name+3),
-				*(name+4), *(name+5), *(name+6), *(name+7),
-				ndlp->nlp_DID, ndlp->nlp_flag,
-				ndlp->nlp_state, ndlp->nlp_rpi);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0203 Devloss timeout on "
+				 "WWPN %x:%x:%x:%x:%x:%x:%x:%x "
+				 "NPort x%x Data: x%x x%x x%x\n",
+				 *name, *(name+1), *(name+2), *(name+3),
+				 *(name+4), *(name+5), *(name+6), *(name+7),
+				 ndlp->nlp_DID, ndlp->nlp_flag,
+				 ndlp->nlp_state, ndlp->nlp_rpi);
 	} else {
-		lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-				"%d:0204 Devloss timeout on "
-				"WWPN %x:%x:%x:%x:%x:%x:%x:%x "
-				"NPort x%x Data: x%x x%x x%x\n",
-				phba->brd_no,
-				*name, *(name+1), *(name+2), *(name+3),
-				*(name+4), *(name+5), *(name+6), *(name+7),
-				ndlp->nlp_DID, ndlp->nlp_flag,
-				ndlp->nlp_state, ndlp->nlp_rpi);
-	}
-
-	if (!(phba->fc_flag & FC_UNLOADING) &&
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+				 "0204 Devloss timeout on "
+				 "WWPN %x:%x:%x:%x:%x:%x:%x:%x "
+				 "NPort x%x Data: x%x x%x x%x\n",
+				 *name, *(name+1), *(name+2), *(name+3),
+				 *(name+4), *(name+5), *(name+6), *(name+7),
+				 ndlp->nlp_DID, ndlp->nlp_flag,
+				 ndlp->nlp_state, ndlp->nlp_rpi);
+	}
+
+	put_node = rdata->pnode != NULL;
+	put_rport = ndlp->rport != NULL;
+	rdata->pnode = NULL;
+	ndlp->rport = NULL;
+	if (put_node)
+		lpfc_nlp_put(ndlp);
+	if (put_rport)
+		put_device(&rport->dev);
+
+	if (!(vport->load_flag & FC_UNLOADING) &&
 	    !(ndlp->nlp_flag & NLP_DELAY_TMO) &&
 	    !(ndlp->nlp_flag & NLP_NPR_2B_DISC) &&
-	    (ndlp->nlp_state != NLP_STE_UNMAPPED_NODE))
-		lpfc_disc_state_machine(phba, ndlp, NULL, NLP_EVT_DEVICE_RM);
-	else {
-		rdata->pnode = NULL;
-		ndlp->rport = NULL;
+	    (ndlp->nlp_state != NLP_STE_UNMAPPED_NODE)) {
+		lpfc_disc_state_machine(vport, ndlp, NULL, NLP_EVT_DEVICE_RM);
 	}
+}
+
 
+void
+lpfc_worker_wake_up(struct lpfc_hba *phba)
+{
+	wake_up(phba->work_wait);
 	return;
 }
 
 static void
-lpfc_work_list_done(struct lpfc_hba * phba)
+lpfc_work_list_done(struct lpfc_hba *phba)
 {
 	struct lpfc_work_evt  *evtp = NULL;
 	struct lpfc_nodelist  *ndlp;
+	struct lpfc_vport     *vport;
 	int free_evt;
 
-	spin_lock_irq(phba->host->host_lock);
-	while(!list_empty(&phba->work_list)) {
+	spin_lock_irq(&phba->hbalock);
+	while (!list_empty(&phba->work_list)) {
 		list_remove_head((&phba->work_list), evtp, typeof(*evtp),
 				 evt_listp);
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(&phba->hbalock);
 		free_evt = 1;
 		switch (evtp->evt) {
+		case LPFC_EVT_DEV_LOSS_DELAY:
+			free_evt = 0; /* evt is part of ndlp */
+			ndlp = (struct lpfc_nodelist *) (evtp->evt_arg1);
+			vport = ndlp->vport;
+			if (!vport)
+				break;
+
+			lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT,
+				"rport devlossdly:did:x%x flg:x%x",
+				ndlp->nlp_DID, ndlp->nlp_flag, 0);
+
+			if (!(vport->load_flag & FC_UNLOADING) &&
+			    !(ndlp->nlp_flag & NLP_DELAY_TMO) &&
+			    !(ndlp->nlp_flag & NLP_NPR_2B_DISC) &&
+			    (ndlp->nlp_state != NLP_STE_UNMAPPED_NODE)) {
+				lpfc_disc_state_machine(vport, ndlp, NULL,
+					NLP_EVT_DEVICE_RM);
+			}
+			break;
 		case LPFC_EVT_ELS_RETRY:
-			ndlp = (struct lpfc_nodelist *)(evtp->evt_arg1);
+			ndlp = (struct lpfc_nodelist *) (evtp->evt_arg1);
 			lpfc_els_retry_delay_handler(ndlp);
+			free_evt = 0; /* evt is part of ndlp */
+			break;
+		case LPFC_EVT_REAUTH:
+			ndlp = (struct lpfc_nodelist *) (evtp->evt_arg1);
+			lpfc_reauthentication_handler(ndlp);
+			free_evt = 0; /* evt is part of ndlp */
+			break;
+		case LPFC_EVT_DEV_LOSS:
+			ndlp = (struct lpfc_nodelist *)(evtp->evt_arg1);
+			lpfc_nlp_get(ndlp);
+			lpfc_dev_loss_tmo_handler(ndlp);
 			free_evt = 0;
+			lpfc_nlp_put(ndlp);
 			break;
 		case LPFC_EVT_ONLINE:
-			if (phba->hba_state < LPFC_LINK_DOWN)
-				*(int *)(evtp->evt_arg1)  = lpfc_online(phba);
+			if (phba->link_state < LPFC_LINK_DOWN)
+				*(int *) (evtp->evt_arg1) = lpfc_online(phba);
 			else
-				*(int *)(evtp->evt_arg1)  = 0;
+				*(int *) (evtp->evt_arg1) = 0;
 			complete((struct completion *)(evtp->evt_arg2));
 			break;
 		case LPFC_EVT_OFFLINE_PREP:
-			if (phba->hba_state >= LPFC_LINK_DOWN)
+			if (phba->link_state >= LPFC_LINK_DOWN)
 				lpfc_offline_prep(phba);
 			*(int *)(evtp->evt_arg1) = 0;
 			complete((struct completion *)(evtp->evt_arg2));
@@ -217,33 +425,33 @@ lpfc_work_list_done(struct lpfc_hba * phba)
 		case LPFC_EVT_KILL:
 			lpfc_offline(phba);
 			*(int *)(evtp->evt_arg1)
-				= (phba->stopped) ? 0 : lpfc_sli_brdkill(phba);
+				= (phba->pport->stopped)
+				        ? 0 : lpfc_sli_brdkill(phba);
 			lpfc_unblock_mgmt_io(phba);
 			complete((struct completion *)(evtp->evt_arg2));
 			break;
 		}
 		if (free_evt)
 			kfree(evtp);
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(&phba->hbalock);
 	}
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 }
 
 static void
-lpfc_work_done(struct lpfc_hba * phba)
+lpfc_work_done(struct lpfc_hba *phba)
 {
 	struct lpfc_sli_ring *pring;
+	uint32_t ha_copy, status, control, work_port_events;
+	struct lpfc_vport **vports;
+	struct lpfc_vport *vport;
 	int i;
-	uint32_t ha_copy;
-	uint32_t control;
-	uint32_t work_hba_events;
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	ha_copy = phba->work_ha;
 	phba->work_ha = 0;
-	work_hba_events=phba->work_hba_events;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	if (ha_copy & HA_ERATT)
 		lpfc_handle_eratt(phba);
@@ -253,67 +461,104 @@ lpfc_work_done(struct lpfc_hba * phba)
 
 	if (ha_copy & HA_LATT)
 		lpfc_handle_latt(phba);
-
-	if (work_hba_events & WORKER_DISC_TMO)
-		lpfc_disc_timeout_handler(phba);
-
-	if (work_hba_events & WORKER_ELS_TMO)
-		lpfc_els_timeout_handler(phba);
-
-	if (work_hba_events & WORKER_MBOX_TMO)
-		lpfc_mbox_timeout_handler(phba);
-
-	if (work_hba_events & WORKER_FDMI_TMO)
-		lpfc_fdmi_tmo_handler(phba);
-
-	spin_lock_irq(phba->host->host_lock);
-	phba->work_hba_events &= ~work_hba_events;
-	spin_unlock_irq(phba->host->host_lock);
-
-	for (i = 0; i < phba->sli.num_rings; i++, ha_copy >>= 4) {
-		pring = &phba->sli.ring[i];
-		if ((ha_copy & HA_RXATT)
-		    || (pring->flag & LPFC_DEFERRED_RING_EVENT)) {
-			if (pring->flag & LPFC_STOP_IOCB_MASK) {
-				pring->flag |= LPFC_DEFERRED_RING_EVENT;
-			} else {
-				lpfc_sli_handle_slow_ring_event(phba, pring,
-								(ha_copy &
-								 HA_RXMASK));
-				pring->flag &= ~LPFC_DEFERRED_RING_EVENT;
-			}
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS; i++) {
 			/*
-			 * Turn on Ring interrupts
+			 * We could have no vports in the array if unloading;
+			 * if this happens, just use the pport
 			 */
-			spin_lock_irq(phba->host->host_lock);
-			control = readl(phba->HCregaddr);
-			control |= (HC_R0INT_ENA << i);
+			if (vports[i] == NULL && i == 0)
+				vport = phba->pport;
+			else
+				vport = vports[i];
+			if (vport == NULL)
+				break;
+			work_port_events = vport->work_port_events;
+			if (work_port_events & WORKER_DISC_TMO)
+				lpfc_disc_timeout_handler(vport);
+			if (work_port_events & WORKER_ELS_TMO)
+				lpfc_els_timeout_handler(vport);
+			if (work_port_events & WORKER_HB_TMO)
+				lpfc_hb_timeout_handler(phba);
+			if (work_port_events & WORKER_MBOX_TMO)
+				lpfc_mbox_timeout_handler(phba);
+			if (work_port_events & WORKER_FABRIC_BLOCK_TMO)
+				lpfc_unblock_fabric_iocbs(phba);
+			if (work_port_events & WORKER_FDMI_TMO)
+				lpfc_fdmi_timeout_handler(vport);
+			if (work_port_events & WORKER_RAMP_DOWN_QUEUE)
+				lpfc_ramp_down_queue_handler(phba);
+			if (work_port_events & WORKER_RAMP_UP_QUEUE)
+				lpfc_ramp_up_queue_handler(phba);
+			spin_lock_irq(&vport->work_port_lock);
+			vport->work_port_events &= ~work_port_events;
+			spin_unlock_irq(&vport->work_port_lock);
+		}
+	lpfc_destroy_vport_work_array(vports);
+
+	pring = &phba->sli.ring[LPFC_ELS_RING];
+	status = (ha_copy & (HA_RXMASK  << (4*LPFC_ELS_RING)));
+	status >>= (4*LPFC_ELS_RING);
+	if ((status & HA_RXMASK)
+		|| (pring->flag & LPFC_DEFERRED_RING_EVENT)) {
+		if (pring->flag & LPFC_STOP_IOCB_EVENT) {
+			pring->flag |= LPFC_DEFERRED_RING_EVENT;
+		} else {
+			lpfc_sli_handle_slow_ring_event(phba, pring,
+							(status &
+							 HA_RXMASK));
+			pring->flag &= ~LPFC_DEFERRED_RING_EVENT;
+		}
+		/*
+		 * Turn on Ring interrupts
+		 */
+		spin_lock_irq(&phba->hbalock);
+		control = readl(phba->HCregaddr);
+		if (!(control & (HC_R0INT_ENA << LPFC_ELS_RING))) {
+			lpfc_debugfs_slow_ring_trc(phba,
+				"WRK Enable ring: cntl:x%x hacopy:x%x",
+				control, ha_copy, 0);
+
+			control |= (HC_R0INT_ENA << LPFC_ELS_RING);
 			writel(control, phba->HCregaddr);
 			readl(phba->HCregaddr); /* flush */
-			spin_unlock_irq(phba->host->host_lock);
 		}
+		else {
+			lpfc_debugfs_slow_ring_trc(phba,
+				"WRK Ring ok:     cntl:x%x hacopy:x%x",
+				control, ha_copy, 0);
+		}
+		spin_unlock_irq(&phba->hbalock);
 	}
-
-	lpfc_work_list_done (phba);
-
+	lpfc_work_list_done(phba);
 }
 
 static int
-check_work_wait_done(struct lpfc_hba *phba) {
-
-	spin_lock_irq(phba->host->host_lock);
-	if (phba->work_ha ||
-	    phba->work_hba_events ||
-	    (!list_empty(&phba->work_list)) ||
-	    kthread_should_stop()) {
-		spin_unlock_irq(phba->host->host_lock);
-		return 1;
-	} else {
-		spin_unlock_irq(phba->host->host_lock);
-		return 0;
+check_work_wait_done(struct lpfc_hba *phba)
+{
+	struct lpfc_vport *vport;
+	struct lpfc_sli_ring *pring = &phba->sli.ring[LPFC_ELS_RING];
+	int rc = 0;
+
+	spin_lock_irq(&phba->hbalock);
+	list_for_each_entry(vport, &phba->port_list, listentry) {
+		if (vport->work_port_events) {
+			rc = 1;
+			break;
+		}
 	}
+	if (rc || phba->work_ha || (!list_empty(&phba->work_list)) ||
+	    kthread_should_stop() || pring->flag & LPFC_DEFERRED_RING_EVENT) {
+		rc = 1;
+		phba->work_found++;
+	} else
+		phba->work_found = 0;
+	spin_unlock_irq(&phba->hbalock);
+	return rc;
 }
 
+
 int
 lpfc_do_work(void *p)
 {
@@ -323,11 +568,13 @@ lpfc_do_work(void *p)
 
 	set_user_nice(current, -20);
 	phba->work_wait = &work_waitq;
+	phba->work_found = 0;
 
 	while (1) {
 
 		rc = wait_event_interruptible(work_waitq,
-						check_work_wait_done(phba));
+					      check_work_wait_done(phba));
+
 		BUG_ON(rc);
 
 		if (kthread_should_stop())
@@ -335,6 +582,17 @@ lpfc_do_work(void *p)
 
 		lpfc_work_done(phba);
 
+		/* If there is a lot of slow ring work, like during link up,
+		 * check_work_wait_done() may cause this thread to not give
+		 * up the CPU for very long periods of time. This may cause
+		 * soft lockups or other problems. To avoid these situations,
+		 * give up the CPU here after LPFC_MAX_WORKER_ITERATION
+		 * consecutive iterations.
+		 */
+		if (phba->work_found >= LPFC_MAX_WORKER_ITERATION) {
+			phba->work_found = 0;
+			schedule();
+		}
 	}
 	phba->work_wait = NULL;
 	return 0;
@@ -346,16 +604,17 @@ lpfc_do_work(void *p)
  * embedding it in the IOCB.
  */
 int
-lpfc_workq_post_event(struct lpfc_hba * phba, void *arg1, void *arg2,
+lpfc_workq_post_event(struct lpfc_hba *phba, void *arg1, void *arg2,
 		      uint32_t evt)
 {
 	struct lpfc_work_evt  *evtp;
+	unsigned long flags;
 
 	/*
 	 * All Mailbox completions and LPFC_ELS_RING rcv ring IOCB events will
 	 * be queued to worker thread for processing
 	 */
-	evtp = kmalloc(sizeof(struct lpfc_work_evt), GFP_KERNEL);
+	evtp = kmalloc(sizeof(struct lpfc_work_evt), GFP_ATOMIC);
 	if (!evtp)
 		return 0;
 
@@ -363,162 +622,232 @@ lpfc_workq_post_event(struct lpfc_hba * phba, void *arg1, void *arg2,
 	evtp->evt_arg2  = arg2;
 	evtp->evt       = evt;
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irqsave(&phba->hbalock, flags);
 	list_add_tail(&evtp->evt_listp, &phba->work_list);
 	if (phba->work_wait)
-		wake_up(phba->work_wait);
-	spin_unlock_irq(phba->host->host_lock);
+		lpfc_worker_wake_up(phba);
+	spin_unlock_irqrestore(&phba->hbalock, flags);
 
 	return 1;
 }
 
-int
-lpfc_linkdown(struct lpfc_hba * phba)
+void
+lpfc_cleanup_rpis(struct lpfc_vport *vport, int remove)
 {
-	struct lpfc_sli       *psli;
-	struct lpfc_nodelist  *ndlp, *next_ndlp;
-	struct list_head *listp, *node_list[7];
-	LPFC_MBOXQ_t     *mb;
-	int               rc, i;
+	struct lpfc_hba  *phba = vport->phba;
+	struct lpfc_nodelist *ndlp, *next_ndlp;
+	int  rc;
 
-	psli = &phba->sli;
-	/* sysfs or selective reset may call this routine to clean up */
-	if (phba->hba_state >= LPFC_LINK_DOWN) {
-		if (phba->hba_state == LPFC_LINK_DOWN)
-			return 0;
+	list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) {
+		if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
+			continue;
 
-		spin_lock_irq(phba->host->host_lock);
-		phba->hba_state = LPFC_LINK_DOWN;
-		spin_unlock_irq(phba->host->host_lock);
-	}
+		/* Stop re-authentication timer of all nodes. */
+		del_timer_sync(&ndlp->nlp_reauth_tmr);
 
-	fc_host_post_event(phba->host, fc_get_event_number(),
-			FCH_EVT_LINKDOWN, 0);
+		if ((phba->sli3_options & LPFC_SLI3_VPORT_TEARDOWN) ||
+			((vport->port_type == LPFC_NPIV_PORT) &&
+			(ndlp->nlp_DID == NameServer_DID)))
+			lpfc_unreg_rpi(vport, ndlp);
 
-	/* Clean up any firmware default rpi's */
-	if ((mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL))) {
-		lpfc_unreg_did(phba, 0xffffffff, mb);
-		mb->mbox_cmpl=lpfc_sli_def_mbox_cmpl;
-		if (lpfc_sli_issue_mbox(phba, mb, (MBX_NOWAIT | MBX_STOP_IOCB))
-		    == MBX_NOT_FINISHED) {
-			mempool_free( mb, phba->mbox_mem_pool);
-		}
+		/* Leave Fabric nodes alone on link down */
+		if (!remove && ndlp->nlp_type & NLP_FABRIC)
+			continue;
+		rc = lpfc_disc_state_machine(vport, ndlp, NULL,
+					     remove
+					     ? NLP_EVT_DEVICE_RM
+					     : NLP_EVT_DEVICE_RECOVERY);
 	}
+	if (phba->sli3_options & LPFC_SLI3_VPORT_TEARDOWN) {
+		lpfc_mbx_unreg_vpi(vport);
+		vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
+	}
+}
 
+/*
+ * This function can be called due to physical link failure
+ * or link authentication failure.
+ */
+void
+lpfc_port_link_failure(struct lpfc_vport *vport)
+{
 	/* Cleanup any outstanding RSCN activity */
-	lpfc_els_flush_rscn(phba);
+	lpfc_els_flush_rscn(vport);
 
 	/* Cleanup any outstanding ELS commands */
-	lpfc_els_flush_cmd(phba);
-
-	/* Issue a LINK DOWN event to all nodes */
-	node_list[0] = &phba->fc_npr_list;  /* MUST do this list first */
-	node_list[1] = &phba->fc_nlpmap_list;
-	node_list[2] = &phba->fc_nlpunmap_list;
-	node_list[3] = &phba->fc_prli_list;
-	node_list[4] = &phba->fc_reglogin_list;
-	node_list[5] = &phba->fc_adisc_list;
-	node_list[6] = &phba->fc_plogi_list;
-	for (i = 0; i < 7; i++) {
-		listp = node_list[i];
-		if (list_empty(listp))
-			continue;
+	lpfc_els_flush_cmd(vport);
 
-		list_for_each_entry_safe(ndlp, next_ndlp, listp, nlp_listp) {
+	lpfc_cleanup_rpis(vport, 0);
 
-			rc = lpfc_disc_state_machine(phba, ndlp, NULL,
-					     NLP_EVT_DEVICE_RECOVERY);
+	/* Turn off discovery timer if its running */
+	lpfc_can_disctmo(vport);
+}
 
-		}
+static void
+lpfc_linkdown_port(struct lpfc_vport *vport)
+{
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+
+	fc_host_post_event(shost, fc_get_event_number(), FCH_EVT_LINKDOWN, 0);
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Link Down:       state:x%x rtry:x%x flg:x%x",
+		vport->port_state, vport->fc_ns_retry, vport->fc_flag);
+
+	lpfc_port_link_failure(vport);
+
+	vport->auth.auth_state = LPFC_AUTH_UNKNOWN;
+	vport->auth.auth_msg_state = LPFC_AUTH_NONE;
+}
+
+void
+lpfc_port_auth_failed(struct lpfc_nodelist *ndlp)
+{
+	struct lpfc_vport *vport = ndlp->vport;
+
+	vport->auth.auth_state = LPFC_AUTH_FAIL;
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+	if (ndlp->nlp_type & NLP_FABRIC) {
+		lpfc_port_link_failure(vport);
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+		lpfc_issue_els_logo(vport, ndlp, 0);
 	}
+}
 
-	/* free any ndlp's on unused list */
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_unused_list,
-				nlp_listp) {
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+int
+lpfc_linkdown(struct lpfc_hba *phba)
+{
+	struct lpfc_vport *vport = phba->pport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_vport **vports;
+	LPFC_MBOXQ_t          *mb;
+	int i;
+
+	if (phba->link_state == LPFC_LINK_DOWN) {
+		return 0;
+	}
+	spin_lock_irq(&phba->hbalock);
+	if (phba->link_state > LPFC_LINK_DOWN) {
+		phba->link_state = LPFC_LINK_DOWN;
+		phba->pport->fc_flag &= ~FC_LBIT;
+	}
+	spin_unlock_irq(&phba->hbalock);
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
+			/* Issue a LINK DOWN event to all nodes */
+			lpfc_linkdown_port(vports[i]);
+		}
+	lpfc_destroy_vport_work_array(vports);
+	/* Clean up any firmware default rpi's */
+	mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (mb) {
+		lpfc_unreg_did(phba, 0xffff, 0xffffffff, mb);
+		mb->vport = vport;
+		mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+		if (lpfc_sli_issue_mbox(phba, mb, MBX_NOWAIT)
+		    == MBX_NOT_FINISHED) {
+			mempool_free(mb, phba->mbox_mem_pool);
+		}
 	}
 
 	/* Setup myDID for link up if we are in pt2pt mode */
-	if (phba->fc_flag & FC_PT2PT) {
-		phba->fc_myDID = 0;
-		if ((mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL))) {
+	if (phba->pport->fc_flag & FC_PT2PT) {
+		phba->pport->fc_myDID = 0;
+		mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+		if (mb) {
 			lpfc_config_link(phba, mb);
-			mb->mbox_cmpl=lpfc_sli_def_mbox_cmpl;
-			if (lpfc_sli_issue_mbox
-			    (phba, mb, (MBX_NOWAIT | MBX_STOP_IOCB))
+			mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+			mb->vport = vport;
+			if (lpfc_sli_issue_mbox(phba, mb, MBX_NOWAIT)
 			    == MBX_NOT_FINISHED) {
-				mempool_free( mb, phba->mbox_mem_pool);
+				mempool_free(mb, phba->mbox_mem_pool);
 			}
 		}
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag &= ~(FC_PT2PT | FC_PT2PT_PLOGI);
-		spin_unlock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
+		phba->pport->fc_flag &= ~(FC_PT2PT | FC_PT2PT_PLOGI);
+		spin_unlock_irq(shost->host_lock);
 	}
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag &= ~FC_LBIT;
-	spin_unlock_irq(phba->host->host_lock);
-
-	/* Turn off discovery timer if its running */
-	lpfc_can_disctmo(phba);
 
-	/* Must process IOCBs on all rings to handle ABORTed I/Os */
 	return 0;
 }
 
-static int
-lpfc_linkup(struct lpfc_hba * phba)
+static void
+lpfc_linkup_cleanup_nodes(struct lpfc_vport *vport)
 {
-	struct lpfc_nodelist *ndlp, *next_ndlp;
-	struct list_head *listp, *node_list[7];
-	int i;
+	struct lpfc_nodelist *ndlp;
 
-	fc_host_post_event(phba->host, fc_get_event_number(),
-			FCH_EVT_LINKUP, 0);
-
-	spin_lock_irq(phba->host->host_lock);
-	phba->hba_state = LPFC_LINK_UP;
-	phba->fc_flag &= ~(FC_PT2PT | FC_PT2PT_PLOGI | FC_ABORT_DISCOVERY |
-			   FC_RSCN_MODE | FC_NLP_MORE | FC_RSCN_DISCOVERY);
-	phba->fc_flag |= FC_NDISC_ACTIVE;
-	phba->fc_ns_retry = 0;
-	spin_unlock_irq(phba->host->host_lock);
-
-
-	node_list[0] = &phba->fc_plogi_list;
-	node_list[1] = &phba->fc_adisc_list;
-	node_list[2] = &phba->fc_reglogin_list;
-	node_list[3] = &phba->fc_prli_list;
-	node_list[4] = &phba->fc_nlpunmap_list;
-	node_list[5] = &phba->fc_nlpmap_list;
-	node_list[6] = &phba->fc_npr_list;
-	for (i = 0; i < 7; i++) {
-		listp = node_list[i];
-		if (list_empty(listp))
+	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+		if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
 			continue;
 
-		list_for_each_entry_safe(ndlp, next_ndlp, listp, nlp_listp) {
-			if (phba->fc_flag & FC_LBIT) {
-				if (ndlp->nlp_type & NLP_FABRIC) {
-					/* On Linkup its safe to clean up the
-					 * ndlp from Fabric connections.
-					 */
-					lpfc_nlp_list(phba, ndlp,
-							NLP_UNUSED_LIST);
-				} else if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) {
-					/* Fail outstanding IO now since device
-					 * is marked for PLOGI.
-					 */
-					lpfc_unreg_rpi(phba, ndlp);
-				}
-			}
+		if (ndlp->nlp_type & NLP_FABRIC) {
+			/* On Linkup it's safe to clean up the ndlp
+			 * from Fabric connections.
+			 */
+			if (ndlp->nlp_DID != Fabric_DID)
+				lpfc_unreg_rpi(vport, ndlp);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+		} else if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) {
+			/* Fail outstanding IO now since device is
+			 * marked for PLOGI.
+			 */
+			lpfc_unreg_rpi(vport, ndlp);
 		}
 	}
+}
 
-	/* free any ndlp's on unused list */
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_unused_list,
-				nlp_listp) {
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
-	}
+static void
+lpfc_linkup_port(struct lpfc_vport *vport)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
+
+	if ((vport->load_flag & FC_UNLOADING) != 0)
+		return;
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"Link Up:         top:x%x speed:x%x flg:x%x",
+		phba->fc_topology, phba->fc_linkspeed, phba->link_flag);
+
+	/* If NPIV is not enabled, only bring the physical port up */
+	if (!(phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) &&
+		(vport != phba->pport))
+		return;
+
+	fc_host_post_event(shost, fc_get_event_number(), FCH_EVT_LINKUP, 0);
+
+	spin_lock_irq(shost->host_lock);
+	vport->fc_flag &= ~(FC_PT2PT | FC_PT2PT_PLOGI | FC_ABORT_DISCOVERY |
+			    FC_RSCN_MODE | FC_NLP_MORE | FC_RSCN_DISCOVERY);
+	vport->fc_flag |= FC_NDISC_ACTIVE;
+	vport->fc_ns_retry = 0;
+	spin_unlock_irq(shost->host_lock);
+
+	if (vport->fc_flag & FC_LBIT)
+		lpfc_linkup_cleanup_nodes(vport);
+
+}
+
+static int
+lpfc_linkup(struct lpfc_hba *phba)
+{
+	struct lpfc_vport **vports;
+	int i;
+
+	phba->link_state = LPFC_LINK_UP;
+
+	/* Unblock fabric iocbs if they are blocked */
+	clear_bit(FABRIC_COMANDS_BLOCKED, &phba->bit_flags);
+	del_timer_sync(&phba->fabric_block_timer);
+
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++)
+			lpfc_linkup_port(vports[i]);
+	lpfc_destroy_vport_work_array(vports);
+	if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED)
+		lpfc_issue_clear_la(phba, phba->pport);
 
 	return 0;
 }
@@ -530,14 +859,14 @@ lpfc_linkup(struct lpfc_hba * phba)
  * handed off to the SLI layer.
  */
 void
-lpfc_mbx_cmpl_clear_la(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_mbx_cmpl_clear_la(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
-	struct lpfc_sli *psli;
-	MAILBOX_t *mb;
+	struct lpfc_vport *vport = pmb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_sli   *psli = &phba->sli;
+	MAILBOX_t *mb = &pmb->mb;
 	uint32_t control;
 
-	psli = &phba->sli;
-	mb = &pmb->mb;
 	/* Since we don't do discovery right now, turn these off here */
 	psli->ring[psli->extra_ring].flag &= ~LPFC_STOP_IOCB_EVENT;
 	psli->ring[psli->fcp_ring].flag &= ~LPFC_STOP_IOCB_EVENT;
@@ -546,70 +875,71 @@ lpfc_mbx_cmpl_clear_la(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 	/* Check for error */
 	if ((mb->mbxStatus) && (mb->mbxStatus != 0x1601)) {
 		/* CLEAR_LA mbox error <mbxStatus> state <hba_state> */
-		lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
-				"%d:0320 CLEAR_LA mbxStatus error x%x hba "
-				"state x%x\n",
-				phba->brd_no, mb->mbxStatus, phba->hba_state);
-
-		phba->hba_state = LPFC_HBA_ERROR;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+				 "0320 CLEAR_LA mbxStatus error x%x hba "
+				 "state x%x\n",
+				 mb->mbxStatus, vport->port_state);
+		phba->link_state = LPFC_HBA_ERROR;
 		goto out;
 	}
 
-	if (phba->fc_flag & FC_ABORT_DISCOVERY)
-		goto out;
+	if (vport->port_type == LPFC_PHYSICAL_PORT)
+		phba->link_state = LPFC_HBA_READY;
 
-	phba->num_disc_nodes = 0;
-	/* go thru NPR list and issue ELS PLOGIs */
-	if (phba->fc_npr_cnt) {
-		lpfc_els_disc_plogi(phba);
-	}
+	spin_lock_irq(&phba->hbalock);
+	psli->sli_flag |= LPFC_PROCESS_LA;
+	control = readl(phba->HCregaddr);
+	control |= HC_LAINT_ENA;
+	writel(control, phba->HCregaddr);
+	readl(phba->HCregaddr); /* flush */
+	spin_unlock_irq(&phba->hbalock);
+	return;
 
-	if (!phba->num_disc_nodes) {
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag &= ~FC_NDISC_ACTIVE;
-		spin_unlock_irq(phba->host->host_lock);
+	vport->num_disc_nodes = 0;
+	/* go thru NPR nodes and issue ELS PLOGIs */
+	if (vport->fc_npr_cnt)
+		lpfc_els_disc_plogi(vport);
+
+	if (!vport->num_disc_nodes) {
+		spin_lock_irq(shost->host_lock);
+		vport->fc_flag &= ~FC_NDISC_ACTIVE;
+		spin_unlock_irq(shost->host_lock);
 	}
 
-	phba->hba_state = LPFC_HBA_READY;
+	vport->port_state = LPFC_VPORT_READY;
 
 out:
 	/* Device Discovery completes */
-	lpfc_printf_log(phba,
-			 KERN_INFO,
-			 LOG_DISCOVERY,
-			 "%d:0225 Device Discovery completes\n",
-			 phba->brd_no);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0225 Device Discovery completes\n");
+	mempool_free(pmb, phba->mbox_mem_pool);
 
-	mempool_free( pmb, phba->mbox_mem_pool);
-
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag &= ~FC_ABORT_DISCOVERY;
-	if (phba->fc_flag & FC_ESTABLISH_LINK) {
-		phba->fc_flag &= ~FC_ESTABLISH_LINK;
-	}
-	spin_unlock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
+	vport->fc_flag &= ~(FC_ABORT_DISCOVERY | FC_ESTABLISH_LINK);
+	spin_unlock_irq(shost->host_lock);
 
 	del_timer_sync(&phba->fc_estabtmo);
 
-	lpfc_can_disctmo(phba);
+	lpfc_can_disctmo(vport);
 
 	/* turn on Link Attention interrupts */
-	spin_lock_irq(phba->host->host_lock);
+
+	spin_lock_irq(&phba->hbalock);
 	psli->sli_flag |= LPFC_PROCESS_LA;
 	control = readl(phba->HCregaddr);
 	control |= HC_LAINT_ENA;
 	writel(control, phba->HCregaddr);
 	readl(phba->HCregaddr); /* flush */
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	return;
 }
 
+
 static void
 lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
-	struct lpfc_sli *psli = &phba->sli;
-	int rc;
+	struct lpfc_vport *vport = pmb->vport;
 
 	if (pmb->mb.mbxStatus)
 		goto out;
@@ -617,160 +947,139 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 	mempool_free(pmb, phba->mbox_mem_pool);
 
 	if (phba->fc_topology == TOPOLOGY_LOOP &&
-		phba->fc_flag & FC_PUBLIC_LOOP &&
-		 !(phba->fc_flag & FC_LBIT)) {
+	    vport->fc_flag & FC_PUBLIC_LOOP &&
+	    !(vport->fc_flag & FC_LBIT)) {
 			/* Need to wait for FAN - use discovery timer
-			 * for timeout.  hba_state is identically
+			 * for timeout.  port_state is identically
 			 * LPFC_LOCAL_CFG_LINK while waiting for FAN
 			 */
-			lpfc_set_disctmo(phba);
+			lpfc_set_disctmo(vport);
 			return;
-		}
+	}
 
-	/* Start discovery by sending a FLOGI. hba_state is identically
+	/* Start discovery by sending a FLOGI. port_state is identically
 	 * LPFC_FLOGI while waiting for FLOGI cmpl
 	 */
-	phba->hba_state = LPFC_FLOGI;
-	lpfc_set_disctmo(phba);
-	lpfc_initial_flogi(phba);
+	if (vport->port_state != LPFC_FLOGI) {
+		lpfc_initial_flogi(vport);
+	}
 	return;
 
 out:
-	lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
-			"%d:0306 CONFIG_LINK mbxStatus error x%x "
-			"HBA state x%x\n",
-			phba->brd_no, pmb->mb.mbxStatus, phba->hba_state);
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+			 "0306 CONFIG_LINK mbxStatus error x%x "
+			 "HBA state x%x\n",
+			 pmb->mb.mbxStatus, vport->port_state);
+	mempool_free(pmb, phba->mbox_mem_pool);
 
 	lpfc_linkdown(phba);
 
-	phba->hba_state = LPFC_HBA_ERROR;
-
-	lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
-			"%d:0200 CONFIG_LINK bad hba state x%x\n",
-			phba->brd_no, phba->hba_state);
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+			 "0200 CONFIG_LINK bad hba state x%x\n",
+			 vport->port_state);
 
-	lpfc_clear_la(phba, pmb);
-	pmb->mbox_cmpl = lpfc_mbx_cmpl_clear_la;
-	rc = lpfc_sli_issue_mbox(phba, pmb, (MBX_NOWAIT | MBX_STOP_IOCB));
-	if (rc == MBX_NOT_FINISHED) {
-		mempool_free(pmb, phba->mbox_mem_pool);
-		lpfc_disc_flush_list(phba);
-		psli->ring[(psli->extra_ring)].flag &= ~LPFC_STOP_IOCB_EVENT;
-		psli->ring[(psli->fcp_ring)].flag &= ~LPFC_STOP_IOCB_EVENT;
-		psli->ring[(psli->next_ring)].flag &= ~LPFC_STOP_IOCB_EVENT;
-		phba->hba_state = LPFC_HBA_READY;
-	}
+	lpfc_issue_clear_la(phba, vport);
 	return;
 }
 
 static void
-lpfc_mbx_cmpl_read_sparam(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_mbx_cmpl_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
-	struct lpfc_sli *psli = &phba->sli;
 	MAILBOX_t *mb = &pmb->mb;
 	struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) pmb->context1;
+	struct lpfc_vport  *vport = pmb->vport;
 
 
 	/* Check for error */
 	if (mb->mbxStatus) {
 		/* READ_SPARAM mbox error <mbxStatus> state <hba_state> */
-		lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
-				"%d:0319 READ_SPARAM mbxStatus error x%x "
-				"hba state x%x>\n",
-				phba->brd_no, mb->mbxStatus, phba->hba_state);
-
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+				 "0319 READ_SPARAM mbxStatus error x%x "
+				 "hba state x%x>\n",
+				 mb->mbxStatus, vport->port_state);
 		lpfc_linkdown(phba);
-		phba->hba_state = LPFC_HBA_ERROR;
 		goto out;
 	}
 
-	memcpy((uint8_t *) & phba->fc_sparam, (uint8_t *) mp->virt,
+	memcpy((uint8_t *) &vport->fc_sparam, (uint8_t *) mp->virt,
 	       sizeof (struct serv_parm));
 	if (phba->cfg_soft_wwnn)
-		u64_to_wwn(phba->cfg_soft_wwnn, phba->fc_sparam.nodeName.u.wwn);
+		u64_to_wwn(phba->cfg_soft_wwnn,
+			   vport->fc_sparam.nodeName.u.wwn);
 	if (phba->cfg_soft_wwpn)
-		u64_to_wwn(phba->cfg_soft_wwpn, phba->fc_sparam.portName.u.wwn);
-	memcpy((uint8_t *) & phba->fc_nodename,
-	       (uint8_t *) & phba->fc_sparam.nodeName,
-	       sizeof (struct lpfc_name));
-	memcpy((uint8_t *) & phba->fc_portname,
-	       (uint8_t *) & phba->fc_sparam.portName,
-	       sizeof (struct lpfc_name));
+		u64_to_wwn(phba->cfg_soft_wwpn,
+			   vport->fc_sparam.portName.u.wwn);
+	memcpy(&vport->fc_nodename, &vport->fc_sparam.nodeName,
+	       sizeof(vport->fc_nodename));
+	memcpy(&vport->fc_portname, &vport->fc_sparam.portName,
+	       sizeof(vport->fc_portname));
+	if (vport->port_type == LPFC_PHYSICAL_PORT) {
+		memcpy(&phba->wwnn, &vport->fc_nodename, sizeof(phba->wwnn));
+		memcpy(&phba->wwpn, &vport->fc_portname, sizeof(phba->wwnn));
+	}
+
 	lpfc_mbuf_free(phba, mp->virt, mp->phys);
 	kfree(mp);
-	mempool_free( pmb, phba->mbox_mem_pool);
+	mempool_free(pmb, phba->mbox_mem_pool);
 	return;
 
 out:
 	pmb->context1 = NULL;
 	lpfc_mbuf_free(phba, mp->virt, mp->phys);
 	kfree(mp);
-	if (phba->hba_state != LPFC_CLEAR_LA) {
-		lpfc_clear_la(phba, pmb);
-		pmb->mbox_cmpl = lpfc_mbx_cmpl_clear_la;
-		if (lpfc_sli_issue_mbox(phba, pmb, (MBX_NOWAIT | MBX_STOP_IOCB))
-		    == MBX_NOT_FINISHED) {
-			mempool_free( pmb, phba->mbox_mem_pool);
-			lpfc_disc_flush_list(phba);
-			psli->ring[(psli->extra_ring)].flag &=
-			    ~LPFC_STOP_IOCB_EVENT;
-			psli->ring[(psli->fcp_ring)].flag &=
-			    ~LPFC_STOP_IOCB_EVENT;
-			psli->ring[(psli->next_ring)].flag &=
-			    ~LPFC_STOP_IOCB_EVENT;
-			phba->hba_state = LPFC_HBA_READY;
-		}
-	} else {
-		mempool_free( pmb, phba->mbox_mem_pool);
-	}
+	lpfc_issue_clear_la(phba, vport);
+	mempool_free(pmb, phba->mbox_mem_pool);
 	return;
 }
 
 static void
 lpfc_mbx_process_link_up(struct lpfc_hba *phba, READ_LA_VAR *la)
 {
-	int i;
+	struct lpfc_vport *vport = phba->pport;
 	LPFC_MBOXQ_t *sparam_mbox, *cfglink_mbox;
+	int i;
 	struct lpfc_dmabuf *mp;
 	int rc;
 
 	sparam_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 	cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	switch (la->UlnkSpeed) {
-		case LA_1GHZ_LINK:
-			phba->fc_linkspeed = LA_1GHZ_LINK;
-			break;
-		case LA_2GHZ_LINK:
-			phba->fc_linkspeed = LA_2GHZ_LINK;
-			break;
-		case LA_4GHZ_LINK:
-			phba->fc_linkspeed = LA_4GHZ_LINK;
-			break;
-		case LA_8GHZ_LINK:
-			phba->fc_linkspeed = LA_8GHZ_LINK;
-			break;
-		default:
-			phba->fc_linkspeed = LA_UNKNW_LINK;
-			break;
+	case LA_1GHZ_LINK:
+		phba->fc_linkspeed = LA_1GHZ_LINK;
+		break;
+	case LA_2GHZ_LINK:
+		phba->fc_linkspeed = LA_2GHZ_LINK;
+		break;
+	case LA_4GHZ_LINK:
+		phba->fc_linkspeed = LA_4GHZ_LINK;
+		break;
+	case LA_8GHZ_LINK:
+		phba->fc_linkspeed = LA_8GHZ_LINK;
+		break;
+	default:
+		phba->fc_linkspeed = LA_UNKNW_LINK;
+		break;
 	}
 
 	phba->fc_topology = la->topology;
+	phba->link_flag &= ~LS_NPIV_FAB_SUPPORTED;
 
 	if (phba->fc_topology == TOPOLOGY_LOOP) {
-	/* Get Loop Map information */
+		phba->sli3_options &= ~LPFC_SLI3_NPIV_ENABLED;
 
+		/* Get Loop Map information */
 		if (la->il)
-			phba->fc_flag |= FC_LBIT;
+			vport->fc_flag |= FC_LBIT;
 
-		phba->fc_myDID = la->granted_AL_PA;
+		vport->fc_myDID = la->granted_AL_PA;
 		i = la->un.lilpBde64.tus.f.bdeSize;
 
 		if (i == 0) {
 			phba->alpa_map[0] = 0;
 		} else {
-			if (phba->cfg_log_verbose & LOG_LINK_EVENT) {
+			if (vport->cfg_log_verbose & LOG_LINK_EVENT) {
 				int numalpa, j, k;
 				union {
 					uint8_t pamap[16];
@@ -794,29 +1103,33 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, READ_LA_VAR *la)
 					}
 					/* Link Up Event ALPA map */
 					lpfc_printf_log(phba,
-						KERN_WARNING,
-						LOG_LINK_EVENT,
-						"%d:1304 Link Up Event "
-						"ALPA map Data: x%x "
-						"x%x x%x x%x\n",
-						phba->brd_no,
-						un.pa.wd1, un.pa.wd2,
-						un.pa.wd3, un.pa.wd4);
+							KERN_WARNING,
+							LOG_LINK_EVENT,
+							"1304 Link Up Event "
+							"ALPA map Data: x%x "
+							"x%x x%x x%x\n",
+							un.pa.wd1, un.pa.wd2,
+							un.pa.wd3, un.pa.wd4);
 				}
 			}
 		}
 	} else {
-		phba->fc_myDID = phba->fc_pref_DID;
-		phba->fc_flag |= FC_LBIT;
+		if (!(phba->sli3_options & LPFC_SLI3_NPIV_ENABLED)) {
+			if (phba->max_vpi && phba->cfg_enable_npiv &&
+			   (phba->sli_rev == 3))
+				phba->sli3_options |= LPFC_SLI3_NPIV_ENABLED;
+		}
+		vport->fc_myDID = phba->fc_pref_DID;
+		vport->fc_flag |= FC_LBIT;
 	}
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	lpfc_linkup(phba);
 	if (sparam_mbox) {
-		lpfc_read_sparam(phba, sparam_mbox);
+		lpfc_read_sparam(phba, sparam_mbox, 0);
+		sparam_mbox->vport = vport;
 		sparam_mbox->mbox_cmpl = lpfc_mbx_cmpl_read_sparam;
-		rc = lpfc_sli_issue_mbox(phba, sparam_mbox,
-						(MBX_NOWAIT | MBX_STOP_IOCB));
+		rc = lpfc_sli_issue_mbox(phba, sparam_mbox, MBX_NOWAIT);
 		if (rc == MBX_NOT_FINISHED) {
 			mp = (struct lpfc_dmabuf *) sparam_mbox->context1;
 			lpfc_mbuf_free(phba, mp->virt, mp->phys);
@@ -824,36 +1137,45 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, READ_LA_VAR *la)
 			mempool_free(sparam_mbox, phba->mbox_mem_pool);
 			if (cfglink_mbox)
 				mempool_free(cfglink_mbox, phba->mbox_mem_pool);
-			return;
+			goto out;
 		}
 	}
 
 	if (cfglink_mbox) {
-		phba->hba_state = LPFC_LOCAL_CFG_LINK;
+		vport->port_state = LPFC_LOCAL_CFG_LINK;
 		lpfc_config_link(phba, cfglink_mbox);
+		cfglink_mbox->vport = vport;
 		cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link;
-		rc = lpfc_sli_issue_mbox(phba, cfglink_mbox,
-						(MBX_NOWAIT | MBX_STOP_IOCB));
-		if (rc == MBX_NOT_FINISHED)
-			mempool_free(cfglink_mbox, phba->mbox_mem_pool);
+		rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT);
+		if (rc != MBX_NOT_FINISHED)
+			return;
+		mempool_free(cfglink_mbox, phba->mbox_mem_pool);
 	}
+out:
+	lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+			 "0263 Discovery Mailbox error: state: 0x%x : %p %p\n",
+			 vport->port_state, sparam_mbox, cfglink_mbox);
+	lpfc_issue_clear_la(phba, vport);
+	return;
 }
 
 static void
-lpfc_mbx_issue_link_down(struct lpfc_hba *phba) {
+lpfc_mbx_issue_link_down(struct lpfc_hba *phba)
+{
 	uint32_t control;
 	struct lpfc_sli *psli = &phba->sli;
 
 	lpfc_linkdown(phba);
 
 	/* turn on Link Attention interrupts - no CLEAR_LA needed */
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	psli->sli_flag |= LPFC_PROCESS_LA;
 	control = readl(phba->HCregaddr);
 	control |= HC_LAINT_ENA;
 	writel(control, phba->HCregaddr);
 	readl(phba->HCregaddr); /* flush */
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 }
 
 /*
@@ -863,22 +1185,21 @@ lpfc_mbx_issue_link_down(struct lpfc_hba *phba) {
  * handed off to the SLI layer.
  */
 void
-lpfc_mbx_cmpl_read_la(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_mbx_cmpl_read_la(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
+	struct lpfc_vport *vport = pmb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
 	READ_LA_VAR *la;
 	MAILBOX_t *mb = &pmb->mb;
 	struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1);
 
 	/* Check for error */
 	if (mb->mbxStatus) {
-		lpfc_printf_log(phba,
-				KERN_INFO,
-				LOG_LINK_EVENT,
-				"%d:1307 READ_LA mbox error x%x state x%x\n",
-				phba->brd_no,
-				mb->mbxStatus, phba->hba_state);
+		lpfc_printf_log(phba, KERN_INFO, LOG_LINK_EVENT,
+				"1307 READ_LA mbox error x%x state x%x\n",
+				mb->mbxStatus, vport->port_state);
 		lpfc_mbx_issue_link_down(phba);
-		phba->hba_state = LPFC_HBA_ERROR;
+		phba->link_state = LPFC_HBA_ERROR;
 		goto lpfc_mbx_cmpl_read_la_free_mbuf;
 	}
 
@@ -886,49 +1207,48 @@ lpfc_mbx_cmpl_read_la(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 
 	memcpy(&phba->alpa_map[0], mp->virt, 128);
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	if (la->pb)
-		phba->fc_flag |= FC_BYPASSED_MODE;
+		vport->fc_flag |= FC_BYPASSED_MODE;
 	else
-		phba->fc_flag &= ~FC_BYPASSED_MODE;
-	spin_unlock_irq(phba->host->host_lock);
+		vport->fc_flag &= ~FC_BYPASSED_MODE;
+	spin_unlock_irq(shost->host_lock);
 
 	if (((phba->fc_eventTag + 1) < la->eventTag) ||
-	     (phba->fc_eventTag == la->eventTag)) {
+	    (phba->fc_eventTag == la->eventTag)) {
 		phba->fc_stat.LinkMultiEvent++;
-		if (la->attType == AT_LINK_UP) {
+		if (la->attType == AT_LINK_UP)
 			if (phba->fc_eventTag != 0)
 				lpfc_linkdown(phba);
-		}
 	}
 
 	phba->fc_eventTag = la->eventTag;
 
 	if (la->attType == AT_LINK_UP) {
 		phba->fc_stat.LinkUp++;
-		if (phba->fc_flag & FC_LOOPBACK_MODE) {
+		if (phba->link_flag & LS_LOOPBACK_MODE) {
 			lpfc_printf_log(phba, KERN_INFO, LOG_LINK_EVENT,
-				"%d:1306 Link Up Event in loop back mode "
-				"x%x received Data: x%x x%x x%x x%x\n",
-				phba->brd_no, la->eventTag, phba->fc_eventTag,
-				la->granted_AL_PA, la->UlnkSpeed,
-				phba->alpa_map[0]);
+					"1306 Link Up Event in loop back mode "
+					"x%x received Data: x%x x%x x%x x%x\n",
+					la->eventTag, phba->fc_eventTag,
+					la->granted_AL_PA, la->UlnkSpeed,
+					phba->alpa_map[0]);
 		} else {
 			lpfc_printf_log(phba, KERN_ERR, LOG_LINK_EVENT,
-				"%d:1303 Link Up Event x%x received "
-				"Data: x%x x%x x%x x%x\n",
-				phba->brd_no, la->eventTag, phba->fc_eventTag,
-				la->granted_AL_PA, la->UlnkSpeed,
-				phba->alpa_map[0]);
+					"1303 Link Up Event x%x received "
+					"Data: x%x x%x x%x x%x\n",
+					la->eventTag, phba->fc_eventTag,
+					la->granted_AL_PA, la->UlnkSpeed,
+					phba->alpa_map[0]);
 		}
 		lpfc_mbx_process_link_up(phba, la);
 	} else {
 		phba->fc_stat.LinkDown++;
 		lpfc_printf_log(phba, KERN_ERR, LOG_LINK_EVENT,
-				"%d:1305 Link Down Event x%x received "
+				"1305 Link Down Event x%x received "
 				"Data: x%x x%x x%x\n",
-				phba->brd_no, la->eventTag, phba->fc_eventTag,
-				phba->hba_state, phba->fc_flag);
+				la->eventTag, phba->fc_eventTag,
+				phba->pport->port_state, vport->fc_flag);
 		lpfc_mbx_issue_link_down(phba);
 	}
 
@@ -946,27 +1266,110 @@ lpfc_mbx_cmpl_read_la_free_mbuf:
  * handed off to the SLI layer.
  */
 void
-lpfc_mbx_cmpl_reg_login(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_mbx_cmpl_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
-	struct lpfc_sli *psli;
-	MAILBOX_t *mb;
-	struct lpfc_dmabuf *mp;
-	struct lpfc_nodelist *ndlp;
-
-	psli = &phba->sli;
-	mb = &pmb->mb;
-
-	ndlp = (struct lpfc_nodelist *) pmb->context2;
-	mp = (struct lpfc_dmabuf *) (pmb->context1);
+	struct lpfc_vport  *vport = pmb->vport;
+	struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1);
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2;
 
 	pmb->context1 = NULL;
 
 	/* Good status, call state machine */
-	lpfc_disc_state_machine(phba, ndlp, pmb, NLP_EVT_CMPL_REG_LOGIN);
+	lpfc_disc_state_machine(vport, ndlp, pmb, NLP_EVT_CMPL_REG_LOGIN);
 	lpfc_mbuf_free(phba, mp->virt, mp->phys);
 	kfree(mp);
-	mempool_free( pmb, phba->mbox_mem_pool);
+	mempool_free(pmb, phba->mbox_mem_pool);
+	lpfc_nlp_put(ndlp);
+
+	return;
+}
+
+static void
+lpfc_mbx_cmpl_unreg_vpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+{
+	MAILBOX_t *mb = &pmb->mb;
+	struct lpfc_vport *vport = pmb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+
+	switch (mb->mbxStatus) {
+	case 0x0011:
+	case 0x0020:
+	case 0x9700:
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
+				 "0911 cmpl_unreg_vpi, mb status = 0x%x\n",
+				 mb->mbxStatus);
+		break;
+	}
+	vport->unreg_vpi_cmpl = VPORT_OK;
+	mempool_free(pmb, phba->mbox_mem_pool);
+	/*
+	 * This shost reference might have been taken at the beginning of
+	 * lpfc_vport_delete()
+	 */
+	if (vport->load_flag & FC_UNLOADING)
+		scsi_host_put(shost);
+}
+
+void
+lpfc_mbx_unreg_vpi(struct lpfc_vport *vport)
+{
+	struct lpfc_hba  *phba = vport->phba;
+	LPFC_MBOXQ_t *mbox;
+	int rc;
+
+	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (!mbox)
+		return;
+
+	lpfc_unreg_vpi(phba, vport->vpi, mbox);
+	mbox->vport = vport;
+	mbox->mbox_cmpl = lpfc_mbx_cmpl_unreg_vpi;
+	rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+	if (rc == MBX_NOT_FINISHED) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX | LOG_VPORT,
+				 "1800 Could not issue unreg_vpi\n");
+		mempool_free(mbox, phba->mbox_mem_pool);
+		vport->unreg_vpi_cmpl = VPORT_ERROR;
+	}
+}
+
+static void
+lpfc_mbx_cmpl_reg_vpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+{
+	struct lpfc_vport *vport = pmb->vport;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	MAILBOX_t *mb = &pmb->mb;
+
+	switch (mb->mbxStatus) {
+	case 0x0011:
+	case 0x9601:
+	case 0x9602:
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
+				 "0912 cmpl_reg_vpi, mb status = 0x%x\n",
+				 mb->mbxStatus);
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+		spin_lock_irq(shost->host_lock);
+		vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
+		spin_unlock_irq(shost->host_lock);
+		vport->fc_myDID = 0;
+		goto out;
+	}
 
+	vport->num_disc_nodes = 0;
+	/* go thru NPR list and issue ELS PLOGIs */
+	if (vport->fc_npr_cnt)
+		lpfc_els_disc_plogi(vport);
+
+	if (!vport->num_disc_nodes) {
+		spin_lock_irq(shost->host_lock);
+		vport->fc_flag &= ~FC_NDISC_ACTIVE;
+		spin_unlock_irq(shost->host_lock);
+		lpfc_can_disctmo(vport);
+	}
+	vport->port_state = LPFC_VPORT_READY;
+
+out:
+	mempool_free(pmb, phba->mbox_mem_pool);
 	return;
 }
 
@@ -977,88 +1380,54 @@ lpfc_mbx_cmpl_reg_login(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
  * handed off to the SLI layer.
  */
 void
-lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
-	struct lpfc_sli *psli;
-	MAILBOX_t *mb;
-	struct lpfc_dmabuf *mp;
+	struct lpfc_vport *vport = pmb->vport;
+	MAILBOX_t *mb = &pmb->mb;
+	struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1);
 	struct lpfc_nodelist *ndlp;
-	struct lpfc_nodelist *ndlp_fdmi;
-
-
-	psli = &phba->sli;
-	mb = &pmb->mb;
 
 	ndlp = (struct lpfc_nodelist *) pmb->context2;
-	mp = (struct lpfc_dmabuf *) (pmb->context1);
-
+	pmb->context1 = NULL;
+	pmb->context2 = NULL;
 	if (mb->mbxStatus) {
 		lpfc_mbuf_free(phba, mp->virt, mp->phys);
 		kfree(mp);
-		mempool_free( pmb, phba->mbox_mem_pool);
-		mempool_free( ndlp, phba->nlp_mem_pool);
+		mempool_free(pmb, phba->mbox_mem_pool);
+		lpfc_nlp_put(ndlp);
 
-		/* FLOGI failed, so just use loop map to make discovery list */
-		lpfc_disc_list_loopmap(phba);
+		if (phba->fc_topology == TOPOLOGY_LOOP) {
+			/* FLOGI failed, use loop map to make discovery list */
+			lpfc_disc_list_loopmap(vport);
 
-		/* Start discovery */
-		lpfc_disc_start(phba);
+			/* Start discovery */
+			lpfc_disc_start(vport);
+			return;
+		}
+
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX,
+				 "0258 Register Fabric login error: 0x%x\n",
+				 mb->mbxStatus);
 		return;
 	}
 
-	pmb->context1 = NULL;
-
 	ndlp->nlp_rpi = mb->un.varWords[0];
 	ndlp->nlp_type |= NLP_FABRIC;
-	ndlp->nlp_state = NLP_STE_UNMAPPED_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_UNMAPPED_LIST);
-
-	if (phba->hba_state == LPFC_FABRIC_CFG_LINK) {
-		/* This NPort has been assigned an NPort_ID by the fabric as a
-		 * result of the completed fabric login.  Issue a State Change
-		 * Registration (SCR) ELS request to the fabric controller
-		 * (SCR_DID) so that this NPort gets RSCN events from the
-		 * fabric.
-		 */
-		lpfc_issue_els_scr(phba, SCR_DID, 0);
-
-		ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, NameServer_DID);
-		if (!ndlp) {
-			/* Allocate a new node instance. If the pool is empty,
-			 * start the discovery process and skip the Nameserver
-			 * login process.  This is attempted again later on.
-			 * Otherwise, issue a Port Login (PLOGI) to NameServer.
-			 */
-			ndlp = mempool_alloc(phba->nlp_mem_pool, GFP_ATOMIC);
-			if (!ndlp) {
-				lpfc_disc_start(phba);
-				lpfc_mbuf_free(phba, mp->virt, mp->phys);
-				kfree(mp);
-				mempool_free( pmb, phba->mbox_mem_pool);
-				return;
-			} else {
-				lpfc_nlp_init(phba, ndlp, NameServer_DID);
-				ndlp->nlp_type |= NLP_FABRIC;
-			}
-		}
-		ndlp->nlp_state = NLP_STE_PLOGI_ISSUE;
-		lpfc_nlp_list(phba, ndlp, NLP_PLOGI_LIST);
-		lpfc_issue_els_plogi(phba, NameServer_DID, 0);
-		if (phba->cfg_fdmi_on) {
-			ndlp_fdmi = mempool_alloc(phba->nlp_mem_pool,
-								GFP_KERNEL);
-			if (ndlp_fdmi) {
-				lpfc_nlp_init(phba, ndlp_fdmi, FDMI_DID);
-				ndlp_fdmi->nlp_type |= NLP_FABRIC;
-				ndlp_fdmi->nlp_state = NLP_STE_PLOGI_ISSUE;
-				lpfc_issue_els_plogi(phba, FDMI_DID, 0);
-			}
-		}
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
+
+	lpfc_nlp_put(ndlp);	/* Drop the reference from the mbox */
+
+	if (vport->port_state == LPFC_FABRIC_CFG_LINK) {
+		if (vport->auth.security_active)
+			lpfc_start_authentication(vport, ndlp);
+		else
+			lpfc_start_discovery(vport);
 	}
 
 	lpfc_mbuf_free(phba, mp->virt, mp->phys);
 	kfree(mp);
-	mempool_free( pmb, phba->mbox_mem_pool);
+	mempool_free(pmb, phba->mbox_mem_pool);
 	return;
 }
 
@@ -1069,31 +1438,38 @@ lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
  * handed off to the SLI layer.
  */
 void
-lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
-	struct lpfc_sli *psli;
-	MAILBOX_t *mb;
-	struct lpfc_dmabuf *mp;
-	struct lpfc_nodelist *ndlp;
-
-	psli = &phba->sli;
-	mb = &pmb->mb;
-
-	ndlp = (struct lpfc_nodelist *) pmb->context2;
-	mp = (struct lpfc_dmabuf *) (pmb->context1);
+	MAILBOX_t *mb = &pmb->mb;
+	struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) (pmb->context1);
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2;
+	struct lpfc_vport *vport = pmb->vport;
 
 	if (mb->mbxStatus) {
+out:
+		lpfc_nlp_put(ndlp);
 		lpfc_mbuf_free(phba, mp->virt, mp->phys);
 		kfree(mp);
-		mempool_free( pmb, phba->mbox_mem_pool);
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+		mempool_free(pmb, phba->mbox_mem_pool);
 
-		/* RegLogin failed, so just use loop map to make discovery
-		   list */
-		lpfc_disc_list_loopmap(phba);
+		/* If no other thread is using the ndlp, free it */
+		lpfc_nlp_not_used(ndlp);
 
-		/* Start discovery */
-		lpfc_disc_start(phba);
+		if (phba->fc_topology == TOPOLOGY_LOOP) {
+			/*
+			 * RegLogin failed, use loop map to make discovery
+			 * list
+			 */
+			lpfc_disc_list_loopmap(vport);
+
+			/* Start discovery */
+			lpfc_disc_start(vport);
+			return;
+		}
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0260 Register NameServer error: 0x%x\n",
+				 mb->mbxStatus);
 		return;
 	}
 
@@ -1101,38 +1477,43 @@ lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 
 	ndlp->nlp_rpi = mb->un.varWords[0];
 	ndlp->nlp_type |= NLP_FABRIC;
-	ndlp->nlp_state = NLP_STE_UNMAPPED_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_UNMAPPED_LIST);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
 
-	if (phba->hba_state < LPFC_HBA_READY) {
-		/* Link up discovery requires Fabrib registration. */
-		lpfc_ns_cmd(phba, ndlp, SLI_CTNS_RNN_ID);
-		lpfc_ns_cmd(phba, ndlp, SLI_CTNS_RSNN_NN);
-		lpfc_ns_cmd(phba, ndlp, SLI_CTNS_RFT_ID);
-		lpfc_ns_cmd(phba, ndlp, SLI_CTNS_RFF_ID);
+	if (vport->port_state < LPFC_VPORT_READY) {
+		/* Link up discovery requires Fabric registration. */
+		lpfc_ns_cmd(vport, SLI_CTNS_RFF_ID, 0, 0); /* Do this first! */
+		lpfc_ns_cmd(vport, SLI_CTNS_RNN_ID, 0, 0);
+		lpfc_ns_cmd(vport, SLI_CTNS_RSNN_NN, 0, 0);
+		lpfc_ns_cmd(vport, SLI_CTNS_RSPN_ID, 0, 0);
+		lpfc_ns_cmd(vport, SLI_CTNS_RFT_ID, 0, 0);
+
+		/* Issue SCR just before NameServer GID_FT Query */
+		lpfc_issue_els_scr(vport, SCR_DID, 0);
 	}
 
-	phba->fc_ns_retry = 0;
+	vport->fc_ns_retry = 0;
 	/* Good status, issue CT Request to NameServer */
-	if (lpfc_ns_cmd(phba, ndlp, SLI_CTNS_GID_FT)) {
+	if (lpfc_ns_cmd(vport, SLI_CTNS_GID_FT, 0, 0)) {
 		/* Cannot issue NameServer Query, so finish up discovery */
-		lpfc_disc_start(phba);
+		goto out;
 	}
 
+	lpfc_nlp_put(ndlp);
 	lpfc_mbuf_free(phba, mp->virt, mp->phys);
 	kfree(mp);
-	mempool_free( pmb, phba->mbox_mem_pool);
+	mempool_free(pmb, phba->mbox_mem_pool);
 
 	return;
 }
 
 static void
-lpfc_register_remote_port(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp)
+lpfc_register_remote_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 {
-	struct fc_rport *rport;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct fc_rport  *rport;
 	struct lpfc_rport_data *rdata;
 	struct fc_rport_identifiers rport_ids;
+	struct lpfc_hba  *phba = vport->phba;
 
 	/* Remote port has reappeared. Re-register w/ FC transport */
 	rport_ids.node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn);
@@ -1140,8 +1521,24 @@ lpfc_register_remote_port(struct lpfc_hba * phba,
 	rport_ids.port_id = ndlp->nlp_DID;
 	rport_ids.roles = FC_RPORT_ROLE_UNKNOWN;
 
-	ndlp->rport = rport = fc_remote_port_add(phba->host, 0, &rport_ids);
-	if (!rport) {
+	/*
+	 * We leave our node pointer in rport->dd_data when we unregister a
+	 * FCP target port.  But fc_remote_port_add zeros the space to which
+	 * rport->dd_data points.  So, if we're reusing a previously
+	 * registered port, drop the reference that we took the last time we
+	 * registered the port.
+	 */
+	if (ndlp->rport && ndlp->rport->dd_data &&
+	    ((struct lpfc_rport_data *) ndlp->rport->dd_data)->pnode == ndlp) {
+		lpfc_nlp_put(ndlp);
+	}
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT,
+		"rport add:       did:x%x flg:x%x type x%x",
+		ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_type);
+
+	ndlp->rport = rport = fc_remote_port_add(shost, 0, &rport_ids);
+	if (!rport || !get_device(&rport->dev)) {
 		dev_printk(KERN_WARNING, &phba->pcidev->dev,
 			   "Warning: fc_remote_port_add failed\n");
 		return;
@@ -1151,243 +1548,227 @@ lpfc_register_remote_port(struct lpfc_hba * phba,
 	rport->maxframe_size = ndlp->nlp_maxframe;
 	rport->supported_classes = ndlp->nlp_class_sup;
 	rdata = rport->dd_data;
-	rdata->pnode = ndlp;
+	rdata->pnode = lpfc_nlp_get(ndlp);
 
 	if (ndlp->nlp_type & NLP_FCP_TARGET)
 		rport_ids.roles |= FC_RPORT_ROLE_FCP_TARGET;
 	if (ndlp->nlp_type & NLP_FCP_INITIATOR)
 		rport_ids.roles |= FC_RPORT_ROLE_FCP_INITIATOR;
+	del_timer_sync(&ndlp->nlp_initiator_tmr);
 
 
 	if (rport_ids.roles !=  FC_RPORT_ROLE_UNKNOWN)
 		fc_remote_port_rolechg(rport, rport_ids.roles);
 
 	if ((rport->scsi_target_id != -1) &&
-		(rport->scsi_target_id < LPFC_MAX_TARGET)) {
+	    (rport->scsi_target_id < LPFC_MAX_TARGET)) {
 		ndlp->nlp_sid = rport->scsi_target_id;
 	}
-
 	return;
 }
 
 static void
-lpfc_unregister_remote_port(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp)
+lpfc_unregister_remote_port(struct lpfc_nodelist *ndlp)
 {
 	struct fc_rport *rport = ndlp->rport;
-	struct lpfc_rport_data *rdata = rport->dd_data;
 
-	if (rport->scsi_target_id == -1) {
-		ndlp->rport = NULL;
-		rdata->pnode = NULL;
-	}
+	lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_RPORT,
+		"rport delete:    did:x%x flg:x%x type x%x",
+		ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_type);
 
 	fc_remote_port_delete(rport);
 
 	return;
 }
 
-int
-lpfc_nlp_list(struct lpfc_hba * phba, struct lpfc_nodelist * nlp, int list)
+static void
+lpfc_nlp_counters(struct lpfc_vport *vport, int state, int count)
 {
-	enum { none, unmapped, mapped } rport_add = none, rport_del = none;
-	struct lpfc_sli      *psli;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 
-	psli = &phba->sli;
-	/* Sanity check to ensure we are not moving to / from the same list */
-	if ((nlp->nlp_flag & NLP_LIST_MASK) == list)
-		if (list != NLP_NO_LIST)
-			return 0;
-
-	spin_lock_irq(phba->host->host_lock);
-	switch (nlp->nlp_flag & NLP_LIST_MASK) {
-	case NLP_NO_LIST: /* Not on any list */
-		break;
-	case NLP_UNUSED_LIST:
-		phba->fc_unused_cnt--;
-		list_del(&nlp->nlp_listp);
+	spin_lock_irq(shost->host_lock);
+	switch (state) {
+	case NLP_STE_UNUSED_NODE:
+		vport->fc_unused_cnt += count;
 		break;
-	case NLP_PLOGI_LIST:
-		phba->fc_plogi_cnt--;
-		list_del(&nlp->nlp_listp);
+	case NLP_STE_PLOGI_ISSUE:
+		vport->fc_plogi_cnt += count;
 		break;
-	case NLP_ADISC_LIST:
-		phba->fc_adisc_cnt--;
-		list_del(&nlp->nlp_listp);
+	case NLP_STE_ADISC_ISSUE:
+		vport->fc_adisc_cnt += count;
 		break;
-	case NLP_REGLOGIN_LIST:
-		phba->fc_reglogin_cnt--;
-		list_del(&nlp->nlp_listp);
+	case NLP_STE_REG_LOGIN_ISSUE:
+		vport->fc_reglogin_cnt += count;
 		break;
-	case NLP_PRLI_LIST:
-		phba->fc_prli_cnt--;
-		list_del(&nlp->nlp_listp);
+	case NLP_STE_PRLI_ISSUE:
+		vport->fc_prli_cnt += count;
 		break;
-	case NLP_UNMAPPED_LIST:
-		phba->fc_unmap_cnt--;
-		list_del(&nlp->nlp_listp);
-		nlp->nlp_flag &= ~NLP_TGT_NO_SCSIID;
-		nlp->nlp_type &= ~NLP_FC_NODE;
-		phba->nport_event_cnt++;
-		if (nlp->rport)
-			rport_del = unmapped;
+	case NLP_STE_UNMAPPED_NODE:
+		vport->fc_unmap_cnt += count;
 		break;
-	case NLP_MAPPED_LIST:
-		phba->fc_map_cnt--;
-		list_del(&nlp->nlp_listp);
-		phba->nport_event_cnt++;
-		if (nlp->rport)
-			rport_del = mapped;
+	case NLP_STE_MAPPED_NODE:
+		vport->fc_map_cnt += count;
 		break;
-	case NLP_NPR_LIST:
-		phba->fc_npr_cnt--;
-		list_del(&nlp->nlp_listp);
-		/* Stop delay tmo if taking node off NPR list */
-		if ((nlp->nlp_flag & NLP_DELAY_TMO) &&
-		   (list != NLP_NPR_LIST)) {
-			spin_unlock_irq(phba->host->host_lock);
-			lpfc_cancel_retry_delay_tmo(phba, nlp);
-			spin_lock_irq(phba->host->host_lock);
-		}
+	case NLP_STE_NPR_NODE:
+		vport->fc_npr_cnt += count;
 		break;
 	}
+	spin_unlock_irq(shost->host_lock);
+}
 
-	nlp->nlp_flag &= ~NLP_LIST_MASK;
-
-	/* Add NPort <did> to <num> list */
-	lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_NODE,
-			"%d:0904 Add NPort x%x to %d list Data: x%x\n",
-			phba->brd_no,
-			nlp->nlp_DID, list, nlp->nlp_flag);
-
-	switch (list) {
-	case NLP_NO_LIST: /* No list, just remove it */
-		spin_unlock_irq(phba->host->host_lock);
-		lpfc_nlp_remove(phba, nlp);
-		spin_lock_irq(phba->host->host_lock);
-		/* as node removed - stop further transport calls */
-		rport_del = none;
-		break;
-	case NLP_UNUSED_LIST:
-		nlp->nlp_flag |= list;
-		/* Put it at the end of the unused list */
-		list_add_tail(&nlp->nlp_listp, &phba->fc_unused_list);
-		phba->fc_unused_cnt++;
-		break;
-	case NLP_PLOGI_LIST:
-		nlp->nlp_flag |= list;
-		/* Put it at the end of the plogi list */
-		list_add_tail(&nlp->nlp_listp, &phba->fc_plogi_list);
-		phba->fc_plogi_cnt++;
-		break;
-	case NLP_ADISC_LIST:
-		nlp->nlp_flag |= list;
-		/* Put it at the end of the adisc list */
-		list_add_tail(&nlp->nlp_listp, &phba->fc_adisc_list);
-		phba->fc_adisc_cnt++;
-		break;
-	case NLP_REGLOGIN_LIST:
-		nlp->nlp_flag |= list;
-		/* Put it at the end of the reglogin list */
-		list_add_tail(&nlp->nlp_listp, &phba->fc_reglogin_list);
-		phba->fc_reglogin_cnt++;
-		break;
-	case NLP_PRLI_LIST:
-		nlp->nlp_flag |= list;
-		/* Put it at the end of the prli list */
-		list_add_tail(&nlp->nlp_listp, &phba->fc_prli_list);
-		phba->fc_prli_cnt++;
-		break;
-	case NLP_UNMAPPED_LIST:
-		rport_add = unmapped;
-		/* ensure all vestiges of "mapped" significance are gone */
-		nlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
-		nlp->nlp_flag |= list;
-		/* Put it at the end of the unmap list */
-		list_add_tail(&nlp->nlp_listp, &phba->fc_nlpunmap_list);
-		phba->fc_unmap_cnt++;
-		phba->nport_event_cnt++;
-		nlp->nlp_flag &= ~NLP_NODEV_REMOVE;
-		nlp->nlp_type |= NLP_FC_NODE;
-		break;
-	case NLP_MAPPED_LIST:
-		rport_add = mapped;
-		nlp->nlp_flag |= list;
-		/* Put it at the end of the map list */
-		list_add_tail(&nlp->nlp_listp, &phba->fc_nlpmap_list);
-		phba->fc_map_cnt++;
-		phba->nport_event_cnt++;
-		nlp->nlp_flag &= ~NLP_NODEV_REMOVE;
-		break;
-	case NLP_NPR_LIST:
-		nlp->nlp_flag |= list;
-		/* Put it at the end of the npr list */
-		list_add_tail(&nlp->nlp_listp, &phba->fc_npr_list);
-		phba->fc_npr_cnt++;
+static void
+lpfc_nlp_state_cleanup(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+		       int old_state, int new_state)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 
-		nlp->nlp_flag &= ~NLP_RCV_PLOGI;
-		break;
-	case NLP_JUST_DQ:
-		break;
+	if (new_state == NLP_STE_UNMAPPED_NODE) {
+		ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
+		ndlp->nlp_flag &= ~NLP_NODEV_REMOVE;
+		ndlp->nlp_type |= NLP_FC_NODE;
 	}
+	if (new_state == NLP_STE_MAPPED_NODE)
+		ndlp->nlp_flag &= ~NLP_NODEV_REMOVE;
+	if (new_state == NLP_STE_NPR_NODE)
+		ndlp->nlp_flag &= ~NLP_RCV_PLOGI;
 
-	spin_unlock_irq(phba->host->host_lock);
+	/* Transport interface */
+	if (ndlp->rport && (old_state == NLP_STE_MAPPED_NODE ||
+			    old_state == NLP_STE_UNMAPPED_NODE)) {
+		vport->phba->nport_event_cnt++;
+		lpfc_unregister_remote_port(ndlp);
+	}
 
+	if (new_state == NLP_STE_MAPPED_NODE ||
+	    new_state == NLP_STE_UNMAPPED_NODE) {
+		vport->phba->nport_event_cnt++;
+		/*
+		 * Tell the fc transport about the port, if we haven't
+		 * already. If we have, and it's a scsi entity, be
+		 * sure to unblock any attached scsi devices
+		 */
+		lpfc_register_remote_port(vport, ndlp);
+	}
 	/*
-	 * We make all the calls into the transport after we have
-	 * moved the node between lists. This so that we don't
-	 * release the lock while in-between lists.
+	 * if we added to Mapped list, but the remote port
+	 * registration failed or assigned a target id outside
+	 * our presentable range - move the node to the
+	 * Unmapped List
 	 */
+	if (new_state == NLP_STE_MAPPED_NODE &&
+	    (!ndlp->rport ||
+	     ndlp->rport->scsi_target_id == -1 ||
+	     ndlp->rport->scsi_target_id >= LPFC_MAX_TARGET)) {
+		spin_lock_irq(shost->host_lock);
+		ndlp->nlp_flag |= NLP_TGT_NO_SCSIID;
+		spin_unlock_irq(shost->host_lock);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
+	}
+}
 
-	/* Don't upcall midlayer if we're unloading */
-	if (!(phba->fc_flag & FC_UNLOADING)) {
-		/*
-		 * We revalidate the rport pointer as the "add" function
-		 * may have removed the remote port.
-		 */
-		if ((rport_del != none) && nlp->rport)
-			lpfc_unregister_remote_port(phba, nlp);
+static char *
+lpfc_nlp_state_name(char *buffer, size_t size, int state)
+{
+	static char *states[] = {
+		[NLP_STE_UNUSED_NODE] = "UNUSED",
+		[NLP_STE_PLOGI_ISSUE] = "PLOGI",
+		[NLP_STE_ADISC_ISSUE] = "ADISC",
+		[NLP_STE_REG_LOGIN_ISSUE] = "REGLOGIN",
+		[NLP_STE_PRLI_ISSUE] = "PRLI",
+		[NLP_STE_UNMAPPED_NODE] = "UNMAPPED",
+		[NLP_STE_MAPPED_NODE] = "MAPPED",
+		[NLP_STE_NPR_NODE] = "NPR",
+	};
+
+	if (state < NLP_STE_MAX_STATE && states[state])
+		strlcpy(buffer, states[state], size);
+	else
+		snprintf(buffer, size, "unknown (%d)", state);
+	return buffer;
+}
 
-		if (rport_add != none) {
-			/*
-			 * Tell the fc transport about the port, if we haven't
-			 * already. If we have, and it's a scsi entity, be
-			 * sure to unblock any attached scsi devices
-			 */
-			lpfc_register_remote_port(phba, nlp);
+void
+lpfc_nlp_set_state(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+		   int state)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	int  old_state = ndlp->nlp_state;
+	char name1[16], name2[16];
+
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
+			 "0904 NPort state transition x%06x, %s -> %s\n",
+			 ndlp->nlp_DID,
+			 lpfc_nlp_state_name(name1, sizeof(name1), old_state),
+			 lpfc_nlp_state_name(name2, sizeof(name2), state));
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_NODE,
+		"node statechg    did:x%x old:%d ste:%d",
+		ndlp->nlp_DID, old_state, state);
+
+	if (old_state == NLP_STE_NPR_NODE &&
+	    (ndlp->nlp_flag & NLP_DELAY_TMO) != 0 &&
+	    state != NLP_STE_NPR_NODE)
+		lpfc_cancel_retry_delay_tmo(vport, ndlp);
+	if (old_state == NLP_STE_UNMAPPED_NODE) {
+		ndlp->nlp_flag &= ~NLP_TGT_NO_SCSIID;
+		ndlp->nlp_type &= ~NLP_FC_NODE;
+	}
+
+	if (list_empty(&ndlp->nlp_listp)) {
+		spin_lock_irq(shost->host_lock);
+		list_add_tail(&ndlp->nlp_listp, &vport->fc_nodes);
+		spin_unlock_irq(shost->host_lock);
+	} else if (old_state)
+		lpfc_nlp_counters(vport, old_state, -1);
+
+	ndlp->nlp_state = state;
+	lpfc_nlp_counters(vport, state, 1);
+	lpfc_nlp_state_cleanup(vport, ndlp, old_state, state);
+}
 
-			/*
-			 * if we added to Mapped list, but the remote port
-			 * registration failed or assigned a target id outside
-			 * our presentable range - move the node to the
-			 * Unmapped List
-			 */
-			if ((rport_add == mapped) &&
-			    ((!nlp->rport) ||
-			     (nlp->rport->scsi_target_id == -1) ||
-			     (nlp->rport->scsi_target_id >= LPFC_MAX_TARGET))) {
-				nlp->nlp_state = NLP_STE_UNMAPPED_NODE;
-				spin_lock_irq(phba->host->host_lock);
-				nlp->nlp_flag |= NLP_TGT_NO_SCSIID;
-				spin_unlock_irq(phba->host->host_lock);
-				lpfc_nlp_list(phba, nlp, NLP_UNMAPPED_LIST);
-			}
-		}
-	}
-	return 0;
+void
+lpfc_dequeue_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	if ((ndlp->nlp_flag & NLP_DELAY_TMO) != 0)
+		lpfc_cancel_retry_delay_tmo(vport, ndlp);
+	if (ndlp->nlp_state && !list_empty(&ndlp->nlp_listp))
+		lpfc_nlp_counters(vport, ndlp->nlp_state, -1);
+	spin_lock_irq(shost->host_lock);
+	list_del_init(&ndlp->nlp_listp);
+	spin_unlock_irq(shost->host_lock);
+	lpfc_nlp_state_cleanup(vport, ndlp, ndlp->nlp_state,
+			       NLP_STE_UNUSED_NODE);
+}
+
+void
+lpfc_drop_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+{
+	/*
+	 * Use of lpfc_drop_node and UNUSED list. lpfc_drop_node should
+	 * be used if we wish to issue the "last" lpfc_nlp_put() to remove
+	 * the ndlp from the vport.  The ndlp resides on the UNUSED list
+	 * until ALL other outstanding threads have completed. Thus, if a
+	 * ndlp is on the UNUSED list already, we should never do another
+	 * lpfc_drop_node() on it.
+	 */
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
+	lpfc_nlp_put(ndlp);
+	return;
 }
 
 /*
  * Start / ReStart rescue timer for Discovery / RSCN handling
  */
 void
-lpfc_set_disctmo(struct lpfc_hba * phba)
+lpfc_set_disctmo(struct lpfc_vport *vport)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
 	uint32_t tmo;
 
-	if (phba->hba_state == LPFC_LOCAL_CFG_LINK) {
+	if (vport->port_state == LPFC_LOCAL_CFG_LINK) {
 		/* For FAN, timeout should be greater then edtov */
 		tmo = (((phba->fc_edtov + 999) / 1000) + 1);
 	} else {
@@ -1397,18 +1778,25 @@ lpfc_set_disctmo(struct lpfc_hba * phba)
 		tmo = ((phba->fc_ratov * 3) + 3);
 	}
 
-	mod_timer(&phba->fc_disctmo, jiffies + HZ * tmo);
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag |= FC_DISC_TMO;
-	spin_unlock_irq(phba->host->host_lock);
+
+	if (!timer_pending(&vport->fc_disctmo)) {
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+			"set disc timer:  tmo:x%x state:x%x flg:x%x",
+			tmo, vport->port_state, vport->fc_flag);
+	}
+
+	mod_timer(&vport->fc_disctmo, jiffies + HZ * tmo);
+	spin_lock_irq(shost->host_lock);
+	vport->fc_flag |= FC_DISC_TMO;
+	spin_unlock_irq(shost->host_lock);
 
 	/* Start Discovery Timer state <hba_state> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-			"%d:0247 Start Discovery Timer state x%x "
-			"Data: x%x x%lx x%x x%x\n",
-			phba->brd_no,
-			phba->hba_state, tmo, (unsigned long)&phba->fc_disctmo,
-			phba->fc_plogi_cnt, phba->fc_adisc_cnt);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0247 Start Discovery Timer state x%x "
+			 "Data: x%x x%lx x%x x%x\n",
+			 vport->port_state, tmo,
+			 (unsigned long)&vport->fc_disctmo, vport->fc_plogi_cnt,
+			 vport->fc_adisc_cnt);
 
 	return;
 }
@@ -1417,24 +1805,32 @@ lpfc_set_disctmo(struct lpfc_hba * phba)
  * Cancel rescue timer for Discovery / RSCN handling
  */
 int
-lpfc_can_disctmo(struct lpfc_hba * phba)
+lpfc_can_disctmo(struct lpfc_vport *vport)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	unsigned long iflags;
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"can disc timer:  state:x%x rtry:x%x flg:x%x",
+		vport->port_state, vport->fc_ns_retry, vport->fc_flag);
+
 	/* Turn off discovery timer if its running */
-	if (phba->fc_flag & FC_DISC_TMO) {
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag &= ~FC_DISC_TMO;
-		spin_unlock_irq(phba->host->host_lock);
-		del_timer_sync(&phba->fc_disctmo);
-		phba->work_hba_events &= ~WORKER_DISC_TMO;
+	if (vport->fc_flag & FC_DISC_TMO) {
+		spin_lock_irqsave(shost->host_lock, iflags);
+		vport->fc_flag &= ~FC_DISC_TMO;
+		spin_unlock_irqrestore(shost->host_lock, iflags);
+		del_timer_sync(&vport->fc_disctmo);
+		spin_lock_irqsave(&vport->work_port_lock, iflags);
+		vport->work_port_events &= ~WORKER_DISC_TMO;
+		spin_unlock_irqrestore(&vport->work_port_lock, iflags);
 	}
 
 	/* Cancel Discovery Timer state <hba_state> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-			"%d:0248 Cancel Discovery Timer state x%x "
-			"Data: x%x x%x x%x\n",
-			phba->brd_no, phba->hba_state, phba->fc_flag,
-			phba->fc_plogi_cnt, phba->fc_adisc_cnt);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0248 Cancel Discovery Timer state x%x "
+			 "Data: x%x x%x x%x\n",
+			 vport->port_state, vport->fc_flag,
+			 vport->fc_plogi_cnt, vport->fc_adisc_cnt);
 	return 0;
 }
 
@@ -1443,15 +1839,18 @@ lpfc_can_disctmo(struct lpfc_hba * phba)
  * Return true if iocb matches the specified nport
  */
 int
-lpfc_check_sli_ndlp(struct lpfc_hba * phba,
-		    struct lpfc_sli_ring * pring,
-		    struct lpfc_iocbq * iocb, struct lpfc_nodelist * ndlp)
+lpfc_check_sli_ndlp(struct lpfc_hba *phba,
+		    struct lpfc_sli_ring *pring,
+		    struct lpfc_iocbq *iocb,
+		    struct lpfc_nodelist *ndlp)
 {
-	struct lpfc_sli *psli;
-	IOCB_t *icmd;
+	struct lpfc_sli *psli = &phba->sli;
+	IOCB_t *icmd = &iocb->iocb;
+	struct lpfc_vport    *vport = ndlp->vport;
+
+	if (iocb->vport != vport)
+		return 0;
 
-	psli = &phba->sli;
-	icmd = &iocb->iocb;
 	if (pring->ringno == LPFC_ELS_RING) {
 		switch (icmd->ulpCommand) {
 		case CMD_GEN_REQUEST64_CR:
@@ -1469,7 +1868,7 @@ lpfc_check_sli_ndlp(struct lpfc_hba * phba,
 	} else if (pring->ringno == psli->fcp_ring) {
 		/* Skip match check if waiting to relogin to FCP target */
 		if ((ndlp->nlp_type & NLP_FCP_TARGET) &&
-		  (ndlp->nlp_flag & NLP_DELAY_TMO)) {
+		    (ndlp->nlp_flag & NLP_DELAY_TMO)) {
 			return 0;
 		}
 		if (icmd->ulpContext == (volatile ushort)ndlp->nlp_rpi) {
@@ -1486,14 +1885,17 @@ lpfc_check_sli_ndlp(struct lpfc_hba * phba,
  * associated with nlp_rpi in the LPFC_NODELIST entry.
  */
 static int
-lpfc_no_rpi(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
+lpfc_no_rpi(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
 {
+	LIST_HEAD(completions);
 	struct lpfc_sli *psli;
 	struct lpfc_sli_ring *pring;
 	struct lpfc_iocbq *iocb, *next_iocb;
 	IOCB_t *icmd;
 	uint32_t rpi, i;
 
+	lpfc_fabric_abort_nport(ndlp);
+
 	/*
 	 * Everything that matches on txcmplq will be returned
 	 * by firmware with a no rpi error.
@@ -1505,40 +1907,40 @@ lpfc_no_rpi(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
 		for (i = 0; i < psli->num_rings; i++) {
 			pring = &psli->ring[i];
 
-			spin_lock_irq(phba->host->host_lock);
+			spin_lock_irq(&phba->hbalock);
 			list_for_each_entry_safe(iocb, next_iocb, &pring->txq,
-						list) {
+						 list) {
 				/*
 				 * Check to see if iocb matches the nport we are
 				 * looking for
 				 */
-				if ((lpfc_check_sli_ndlp
-				     (phba, pring, iocb, ndlp))) {
+				if ((lpfc_check_sli_ndlp(phba, pring, iocb,
+							 ndlp))) {
 					/* It matches, so deque and call compl
 					   with an error */
-					list_del(&iocb->list);
+					list_move_tail(&iocb->list,
+						       &completions);
 					pring->txq_cnt--;
-					if (iocb->iocb_cmpl) {
-						icmd = &iocb->iocb;
-						icmd->ulpStatus =
-						    IOSTAT_LOCAL_REJECT;
-						icmd->un.ulpWord[4] =
-						    IOERR_SLI_ABORTED;
-						spin_unlock_irq(phba->host->
-								host_lock);
-						(iocb->iocb_cmpl) (phba,
-								   iocb, iocb);
-						spin_lock_irq(phba->host->
-							      host_lock);
-					} else
-						lpfc_sli_release_iocbq(phba,
-								       iocb);
 				}
 			}
-			spin_unlock_irq(phba->host->host_lock);
+			spin_unlock_irq(&phba->hbalock);
+		}
+	}
+
+	while (!list_empty(&completions)) {
+		iocb = list_get_first(&completions, struct lpfc_iocbq, list);
+		list_del_init(&iocb->list);
 
+		if (!iocb->iocb_cmpl)
+			lpfc_sli_release_iocbq(phba, iocb);
+		else {
+			icmd = &iocb->iocb;
+			icmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+			icmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+			(iocb->iocb_cmpl)(phba, iocb, iocb);
 		}
 	}
+
 	return 0;
 }
 
@@ -1552,19 +1954,21 @@ lpfc_no_rpi(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
  * we are waiting to PLOGI back to the remote NPort.
  */
 int
-lpfc_unreg_rpi(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
+lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 {
-	LPFC_MBOXQ_t *mbox;
+	struct lpfc_hba *phba = vport->phba;
+	LPFC_MBOXQ_t    *mbox;
 	int rc;
 
 	if (ndlp->nlp_rpi) {
-		if ((mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL))) {
-			lpfc_unreg_login(phba, ndlp->nlp_rpi, mbox);
-			mbox->mbox_cmpl=lpfc_sli_def_mbox_cmpl;
-			rc = lpfc_sli_issue_mbox
-				    (phba, mbox, (MBX_NOWAIT | MBX_STOP_IOCB));
+		mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+		if (mbox) {
+			lpfc_unreg_login(phba, vport->vpi, ndlp->nlp_rpi, mbox);
+			mbox->vport = vport;
+			mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+			rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
 			if (rc == MBX_NOT_FINISHED)
-				mempool_free( mbox, phba->mbox_mem_pool);
+				mempool_free(mbox, phba->mbox_mem_pool);
 		}
 		lpfc_no_rpi(phba, ndlp);
 		ndlp->nlp_rpi = 0;
@@ -1573,25 +1977,66 @@ lpfc_unreg_rpi(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
 	return 0;
 }
 
+void
+lpfc_unreg_all_rpis(struct lpfc_vport *vport)
+{
+	struct lpfc_hba  *phba  = vport->phba;
+	LPFC_MBOXQ_t     *mbox;
+	int rc;
+
+	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (mbox) {
+		lpfc_unreg_login(phba, vport->vpi, 0xffff, mbox);
+		mbox->vport = vport;
+		mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+		rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+		if (rc == MBX_NOT_FINISHED) {
+			mempool_free(mbox, phba->mbox_mem_pool);
+		}
+	}
+}
+
+void
+lpfc_unreg_default_rpis(struct lpfc_vport *vport)
+{
+	struct lpfc_hba  *phba  = vport->phba;
+	LPFC_MBOXQ_t     *mbox;
+	int rc;
+
+	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (mbox) {
+		lpfc_unreg_did(phba, vport->vpi, 0xffffffff, mbox);
+		mbox->vport = vport;
+		mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+		rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+		if (rc == MBX_NOT_FINISHED) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX | LOG_VPORT,
+					 "1815 Could not issue "
+					 "unreg_did (default rpis)\n");
+			mempool_free(mbox, phba->mbox_mem_pool);
+		}
+	}
+}
+
 /*
  * Free resources associated with LPFC_NODELIST entry
  * so it can be freed.
  */
 static int
-lpfc_freenode(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
+lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 {
-	LPFC_MBOXQ_t       *mb;
-	LPFC_MBOXQ_t       *nextmb;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
+	LPFC_MBOXQ_t *mb, *nextmb;
 	struct lpfc_dmabuf *mp;
 
 	/* Cleanup node for NPort <nlp_DID> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_NODE,
-			"%d:0900 Cleanup node for NPort x%x "
-			"Data: x%x x%x x%x\n",
-			phba->brd_no, ndlp->nlp_DID, ndlp->nlp_flag,
-			ndlp->nlp_state, ndlp->nlp_rpi);
-
-	lpfc_nlp_list(phba, ndlp, NLP_JUST_DQ);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
+			 "0900 Cleanup node for NPort x%x "
+			 "Data: x%x x%x x%x\n",
+			 ndlp->nlp_DID, ndlp->nlp_flag,
+			 ndlp->nlp_state, ndlp->nlp_rpi);
+	lpfc_dequeue_node(vport, ndlp);
 
 	/* cleanup any ndlp on mbox q waiting for reglogin cmpl */
 	if ((mb = phba->sli.mbox_active)) {
@@ -1602,33 +2047,40 @@ lpfc_freenode(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
 		}
 	}
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	list_for_each_entry_safe(mb, nextmb, &phba->sli.mboxq, list) {
 		if ((mb->mb.mbxCommand == MBX_REG_LOGIN64) &&
-		   (ndlp == (struct lpfc_nodelist *) mb->context2)) {
+		    (ndlp == (struct lpfc_nodelist *) mb->context2)) {
 			mp = (struct lpfc_dmabuf *) (mb->context1);
 			if (mp) {
-				lpfc_mbuf_free(phba, mp->virt, mp->phys);
+				__lpfc_mbuf_free(phba, mp->virt, mp->phys);
 				kfree(mp);
 			}
 			list_del(&mb->list);
 			mempool_free(mb, phba->mbox_mem_pool);
+			lpfc_nlp_put(ndlp);
 		}
 	}
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	lpfc_els_abort(phba,ndlp);
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~NLP_DELAY_TMO;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 
 	ndlp->nlp_last_elscmd = 0;
 	del_timer_sync(&ndlp->nlp_delayfunc);
+	del_timer_sync(&ndlp->nlp_reauth_tmr);
+	del_timer_sync(&ndlp->nlp_initiator_tmr);
 
 	if (!list_empty(&ndlp->els_retry_evt.evt_listp))
 		list_del_init(&ndlp->els_retry_evt.evt_listp);
+	if (!list_empty(&ndlp->dev_loss_evt.evt_listp))
+		list_del_init(&ndlp->dev_loss_evt.evt_listp);
+	if (!list_empty(&ndlp->els_reauth_evt.evt_listp))
+		list_del_init(&ndlp->els_reauth_evt.evt_listp);
 
-	lpfc_unreg_rpi(phba, ndlp);
+	lpfc_unreg_rpi(vport, ndlp);
 
 	return 0;
 }
@@ -1638,39 +2090,61 @@ lpfc_freenode(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
  * If we are in the middle of using the nlp in the discovery state
  * machine, defer the free till we reach the end of the state machine.
  */
-int
-lpfc_nlp_remove(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
+static void
+lpfc_nlp_remove(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	struct lpfc_rport_data *rdata;
+	LPFC_MBOXQ_t *mbox;
+	int rc;
 
 	if (ndlp->nlp_flag & NLP_DELAY_TMO) {
-		lpfc_cancel_retry_delay_tmo(phba, ndlp);
+		lpfc_cancel_retry_delay_tmo(vport, ndlp);
 	}
 
-	if (ndlp->nlp_disc_refcnt) {
-		spin_lock_irq(phba->host->host_lock);
-		ndlp->nlp_flag |= NLP_DELAY_REMOVE;
-		spin_unlock_irq(phba->host->host_lock);
-	} else {
-		lpfc_freenode(phba, ndlp);
-
-		if ((ndlp->rport) && !(phba->fc_flag & FC_UNLOADING)) {
-			rdata = ndlp->rport->dd_data;
-			rdata->pnode = NULL;
-			ndlp->rport = NULL;
+	if (ndlp->nlp_flag & NLP_DEFER_RM && !ndlp->nlp_rpi) {
+		/* For this case we need to cleanup the default rpi
+		 * allocated by the firmware.
+		 */
+		if ((mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL))
+			!= NULL) {
+			rc = lpfc_reg_login(phba, vport->vpi, ndlp->nlp_DID,
+			    (uint8_t *) &vport->fc_sparam, mbox, 0);
+			if (rc) {
+				mempool_free(mbox, phba->mbox_mem_pool);
+			}
+			else {
+				mbox->mbox_flag |= LPFC_MBX_IMED_UNREG;
+				mbox->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi;
+				mbox->vport = vport;
+				mbox->context2 = 0;
+				rc =lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+				if (rc == MBX_NOT_FINISHED) {
+					mempool_free(mbox, phba->mbox_mem_pool);
+				}
+			}
 		}
+	}
+
+	lpfc_cleanup_node(vport, ndlp);
 
-		mempool_free( ndlp, phba->nlp_mem_pool);
+	/*
+	 * We can get here with a non-NULL ndlp->rport because when we
+	 * unregister a rport we don't break the rport/node linkage.  So if we
+	 * do, make sure we don't leave any dangling pointers behind.
+	 */
+	if (ndlp->rport) {
+		rdata = ndlp->rport->dd_data;
+		rdata->pnode = NULL;
+		ndlp->rport = NULL;
 	}
-	return 0;
 }
 
 static int
-lpfc_matchdid(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, uint32_t did)
+lpfc_matchdid(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+	      uint32_t did)
 {
-	D_ID mydid;
-	D_ID ndlpdid;
-	D_ID matchdid;
+	D_ID mydid, ndlpdid, matchdid;
 
 	if (did == Bcast_DID)
 		return 0;
@@ -1684,7 +2158,7 @@ lpfc_matchdid(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, uint32_t did)
 		return 1;
 
 	/* Next check for area/domain identically equals 0 match */
-	mydid.un.word = phba->fc_myDID;
+	mydid.un.word = vport->fc_myDID;
 	if ((mydid.un.b.domain == 0) && (mydid.un.b.area == 0)) {
 		return 0;
 	}
@@ -1715,126 +2189,125 @@ lpfc_matchdid(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp, uint32_t did)
 	return 0;
 }
 
-/* Search for a nodelist entry on a specific list */
-struct lpfc_nodelist *
-lpfc_findnode_did(struct lpfc_hba * phba, uint32_t order, uint32_t did)
+/* Search for a nodelist entry */
+static struct lpfc_nodelist *
+__lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did)
 {
 	struct lpfc_nodelist *ndlp;
-	struct list_head *lists[]={&phba->fc_nlpunmap_list,
-				   &phba->fc_nlpmap_list,
-				   &phba->fc_plogi_list,
-				   &phba->fc_adisc_list,
-				   &phba->fc_reglogin_list,
-				   &phba->fc_prli_list,
-				   &phba->fc_npr_list,
-				   &phba->fc_unused_list};
-	uint32_t search[]={NLP_SEARCH_UNMAPPED,
-			   NLP_SEARCH_MAPPED,
-			   NLP_SEARCH_PLOGI,
-			   NLP_SEARCH_ADISC,
-			   NLP_SEARCH_REGLOGIN,
-			   NLP_SEARCH_PRLI,
-			   NLP_SEARCH_NPR,
-			   NLP_SEARCH_UNUSED};
-	int i;
 	uint32_t data1;
 
-	spin_lock_irq(phba->host->host_lock);
-	for (i = 0; i < ARRAY_SIZE(lists); i++ ) {
-		if (!(order & search[i]))
-			continue;
-		list_for_each_entry(ndlp, lists[i], nlp_listp) {
-			if (lpfc_matchdid(phba, ndlp, did)) {
-				data1 = (((uint32_t) ndlp->nlp_state << 24) |
-					 ((uint32_t) ndlp->nlp_xri << 16) |
-					 ((uint32_t) ndlp->nlp_type << 8) |
-					 ((uint32_t) ndlp->nlp_rpi & 0xff));
-				lpfc_printf_log(phba, KERN_INFO, LOG_NODE,
-						"%d:0929 FIND node DID "
-						" Data: x%p x%x x%x x%x\n",
-						phba->brd_no,
-						ndlp, ndlp->nlp_DID,
-						ndlp->nlp_flag, data1);
-				spin_unlock_irq(phba->host->host_lock);
-				return ndlp;
-			}
+	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+		if (lpfc_matchdid(vport, ndlp, did)) {
+			data1 = (((uint32_t) ndlp->nlp_state << 24) |
+				 ((uint32_t) ndlp->nlp_xri << 16) |
+				 ((uint32_t) ndlp->nlp_type << 8) |
+				 ((uint32_t) ndlp->nlp_rpi & 0xff));
+			lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
+					 "0929 FIND node DID "
+					 "Data: x%p x%x x%x x%x\n",
+					 ndlp, ndlp->nlp_DID,
+					 ndlp->nlp_flag, data1);
+			return ndlp;
 		}
 	}
-	spin_unlock_irq(phba->host->host_lock);
 
 	/* FIND node did <did> NOT FOUND */
-	lpfc_printf_log(phba, KERN_INFO, LOG_NODE,
-			"%d:0932 FIND node did x%x NOT FOUND Data: x%x\n",
-			phba->brd_no, did, order);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
+			 "0932 FIND node did x%x NOT FOUND.\n", did);
 	return NULL;
 }
 
 struct lpfc_nodelist *
-lpfc_setup_disc_node(struct lpfc_hba * phba, uint32_t did)
+lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_nodelist *ndlp;
+
+	spin_lock_irq(shost->host_lock);
+	ndlp = __lpfc_findnode_did(vport, did);
+	spin_unlock_irq(shost->host_lock);
+	return ndlp;
+}
+
+struct lpfc_nodelist *
+lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 	struct lpfc_nodelist *ndlp;
-	uint32_t flg;
 
-	ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, did);
+	ndlp = lpfc_findnode_did(vport, did);
 	if (!ndlp) {
-		if ((phba->fc_flag & FC_RSCN_MODE) &&
-		   ((lpfc_rscn_payload_check(phba, did) == 0)))
+		if ((vport->fc_flag & FC_RSCN_MODE) != 0 &&
+		    lpfc_rscn_payload_check(vport, did) == 0)
 			return NULL;
 		ndlp = (struct lpfc_nodelist *)
-		     mempool_alloc(phba->nlp_mem_pool, GFP_KERNEL);
+		     mempool_alloc(vport->phba->nlp_mem_pool, GFP_KERNEL);
 		if (!ndlp)
 			return NULL;
-		lpfc_nlp_init(phba, ndlp, did);
-		ndlp->nlp_state = NLP_STE_NPR_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
+		lpfc_nlp_init(vport, ndlp, did);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_NPR_2B_DISC;
+		spin_unlock_irq(shost->host_lock);
 		return ndlp;
 	}
-	if (phba->fc_flag & FC_RSCN_MODE) {
-		if (lpfc_rscn_payload_check(phba, did)) {
+	if (vport->fc_flag & FC_RSCN_MODE) {
+		if (lpfc_rscn_payload_check(vport, did)) {
+			/* If we've already received a PLOGI from this NPort
+			 * we don't need to try to discover it again.
+			 */
+			if (ndlp->nlp_flag & NLP_RCV_PLOGI)
+				return NULL;
+
+			spin_lock_irq(shost->host_lock);
 			ndlp->nlp_flag |= NLP_NPR_2B_DISC;
+			spin_unlock_irq(shost->host_lock);
 
 			/* Since this node is marked for discovery,
 			 * delay timeout is not needed.
 			 */
 			if (ndlp->nlp_flag & NLP_DELAY_TMO)
-				lpfc_cancel_retry_delay_tmo(phba, ndlp);
+				lpfc_cancel_retry_delay_tmo(vport, ndlp);
 		} else
 			ndlp = NULL;
 	} else {
-		flg = ndlp->nlp_flag & NLP_LIST_MASK;
-		if ((flg == NLP_ADISC_LIST) || (flg == NLP_PLOGI_LIST))
+		/* If we've already received a PLOGI from this NPort,
+		 * or we are already in the process of discovery on it,
+		 * we don't need to try to discover it again.
+		 */
+		if (ndlp->nlp_state == NLP_STE_ADISC_ISSUE ||
+		    ndlp->nlp_state == NLP_STE_PLOGI_ISSUE ||
+		    ndlp->nlp_flag & NLP_RCV_PLOGI)
 			return NULL;
-		ndlp->nlp_state = NLP_STE_NPR_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_NPR_2B_DISC;
+		spin_unlock_irq(shost->host_lock);
 	}
 	return ndlp;
 }
 
 /* Build a list of nodes to discover based on the loopmap */
 void
-lpfc_disc_list_loopmap(struct lpfc_hba * phba)
+lpfc_disc_list_loopmap(struct lpfc_vport *vport)
 {
+	struct lpfc_hba  *phba = vport->phba;
 	int j;
 	uint32_t alpa, index;
 
-	if (phba->hba_state <= LPFC_LINK_DOWN) {
+	if (!lpfc_is_link_up(phba))
 		return;
-	}
-	if (phba->fc_topology != TOPOLOGY_LOOP) {
+
+	if (phba->fc_topology != TOPOLOGY_LOOP)
 		return;
-	}
 
 	/* Check for loop map present or not */
 	if (phba->alpa_map[0]) {
 		for (j = 1; j <= phba->alpa_map[0]; j++) {
 			alpa = phba->alpa_map[j];
-
-			if (((phba->fc_myDID & 0xff) == alpa) || (alpa == 0)) {
+			if (((vport->fc_myDID & 0xff) == alpa) || (alpa == 0))
 				continue;
-			}
-			lpfc_setup_disc_node(phba, alpa);
+			lpfc_setup_disc_node(vport, alpa);
 		}
 	} else {
 		/* No alpamap, so try all alpa's */
@@ -1842,118 +2315,169 @@ lpfc_disc_list_loopmap(struct lpfc_hba * phba)
 			/* If cfg_scan_down is set, start from highest
 			 * ALPA (0xef) to lowest (0x1).
 			 */
-			if (phba->cfg_scan_down)
+			if (vport->cfg_scan_down)
 				index = j;
 			else
 				index = FC_MAXLOOP - j - 1;
 			alpa = lpfcAlpaArray[index];
-			if ((phba->fc_myDID & 0xff) == alpa) {
+			if ((vport->fc_myDID & 0xff) == alpa)
 				continue;
-			}
-
-			lpfc_setup_disc_node(phba, alpa);
+			lpfc_setup_disc_node(vport, alpa);
 		}
 	}
 	return;
 }
 
-/* Start Link up / RSCN discovery on NPR list */
 void
-lpfc_disc_start(struct lpfc_hba * phba)
+lpfc_issue_clear_la(struct lpfc_hba *phba, struct lpfc_vport *vport)
 {
-	struct lpfc_sli *psli;
 	LPFC_MBOXQ_t *mbox;
-	struct lpfc_nodelist *ndlp, *next_ndlp;
-	uint32_t did_changed, num_sent;
-	uint32_t clear_la_pending;
-	int rc;
-
-	psli = &phba->sli;
+	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_sli_ring *extra_ring = &psli->ring[psli->extra_ring];
+	struct lpfc_sli_ring *fcp_ring   = &psli->ring[psli->fcp_ring];
+	struct lpfc_sli_ring *next_ring  = &psli->ring[psli->next_ring];
+	int  rc;
 
-	if (phba->hba_state <= LPFC_LINK_DOWN) {
+	/*
+	 * If it's not a physical port or if we have already sent
+	 * clear_la, then don't send it again.
+	 */
+	if ((phba->link_state >= LPFC_CLEAR_LA) ||
+	    (vport->port_type != LPFC_PHYSICAL_PORT))
 		return;
+
+	/* Link up discovery */
+	if ((mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL)) != NULL) {
+		phba->link_state = LPFC_CLEAR_LA;
+		lpfc_clear_la(phba, mbox);
+		mbox->mbox_cmpl = lpfc_mbx_cmpl_clear_la;
+		mbox->vport = vport;
+		rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
+		if (rc == MBX_NOT_FINISHED) {
+			mempool_free(mbox, phba->mbox_mem_pool);
+			lpfc_disc_flush_list(vport);
+			extra_ring->flag &= ~LPFC_STOP_IOCB_EVENT;
+			fcp_ring->flag &= ~LPFC_STOP_IOCB_EVENT;
+			next_ring->flag &= ~LPFC_STOP_IOCB_EVENT;
+			phba->link_state = LPFC_HBA_ERROR;
+		}
 	}
-	if (phba->hba_state == LPFC_CLEAR_LA)
+}
+
+/* Reg_vpi to tell firmware to resume normal operations */
+void
+lpfc_issue_reg_vpi(struct lpfc_hba *phba, struct lpfc_vport *vport)
+{
+	LPFC_MBOXQ_t *regvpimbox;
+
+	regvpimbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (regvpimbox) {
+		lpfc_reg_vpi(phba, vport->vpi, vport->fc_myDID, regvpimbox);
+		regvpimbox->mbox_cmpl = lpfc_mbx_cmpl_reg_vpi;
+		regvpimbox->vport = vport;
+		if (lpfc_sli_issue_mbox(phba, regvpimbox, MBX_NOWAIT)
+					== MBX_NOT_FINISHED) {
+			mempool_free(regvpimbox, phba->mbox_mem_pool);
+		}
+	}
+}
+
+/* Start Link up / RSCN discovery on NPR nodes */
+void
+lpfc_disc_start(struct lpfc_vport *vport)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
+	uint32_t num_sent;
+	uint32_t clear_la_pending;
+	int did_changed;
+
+	if (!lpfc_is_link_up(phba))
+		return;
+
+	if (phba->link_state == LPFC_CLEAR_LA)
 		clear_la_pending = 1;
 	else
 		clear_la_pending = 0;
 
-	if (phba->hba_state < LPFC_HBA_READY) {
-		phba->hba_state = LPFC_DISC_AUTH;
-	}
-	lpfc_set_disctmo(phba);
+	if (vport->port_state < LPFC_VPORT_READY)
+		vport->port_state = LPFC_DISC_AUTH;
+
+	lpfc_set_disctmo(vport);
 
-	if (phba->fc_prevDID == phba->fc_myDID) {
+	if (vport->fc_prevDID == vport->fc_myDID)
 		did_changed = 0;
-	} else {
+	else
 		did_changed = 1;
-	}
-	phba->fc_prevDID = phba->fc_myDID;
-	phba->num_disc_nodes = 0;
+
+	vport->fc_prevDID = vport->fc_myDID;
+	vport->num_disc_nodes = 0;
 
 	/* Start Discovery state <hba_state> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-			"%d:0202 Start Discovery hba state x%x "
-			"Data: x%x x%x x%x\n",
-			phba->brd_no, phba->hba_state, phba->fc_flag,
-			phba->fc_plogi_cnt, phba->fc_adisc_cnt);
-
-	/* If our did changed, we MUST do PLOGI */
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_npr_list,
-				nlp_listp) {
-		if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
-			if (did_changed) {
-				spin_lock_irq(phba->host->host_lock);
-				ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-				spin_unlock_irq(phba->host->host_lock);
-			}
-		}
-	}
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0202 Start Discovery hba state x%x "
+			 "Data: x%x x%x x%x\n",
+			 vport->port_state, vport->fc_flag, vport->fc_plogi_cnt,
+			 vport->fc_adisc_cnt);
 
 	/* First do ADISCs - if any */
-	num_sent = lpfc_els_disc_adisc(phba);
+	num_sent = lpfc_els_disc_adisc(vport);
 
 	if (num_sent)
 		return;
 
-	if ((phba->hba_state < LPFC_HBA_READY) && (!clear_la_pending)) {
+	/*
+	 * For SLI3, cmpl_reg_vpi will set port_state to READY, and
+	 * continue discovery.
+	 */
+	if ((phba->sli3_options & LPFC_SLI3_NPIV_ENABLED) &&
+	    !(vport->fc_flag & FC_RSCN_MODE)) {
+		lpfc_issue_reg_vpi(phba, vport);
+		return;
+	}
+
+	/*
+	 * For SLI2, we need to set port_state to READY and continue
+	 * discovery.
+	 */
+	if (vport->port_state < LPFC_VPORT_READY && !clear_la_pending) {
 		/* If we get here, there is nothing to ADISC */
-		if ((mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL))) {
-			phba->hba_state = LPFC_CLEAR_LA;
-			lpfc_clear_la(phba, mbox);
-			mbox->mbox_cmpl = lpfc_mbx_cmpl_clear_la;
-			rc = lpfc_sli_issue_mbox(phba, mbox,
-						 (MBX_NOWAIT | MBX_STOP_IOCB));
-			if (rc == MBX_NOT_FINISHED) {
-				mempool_free( mbox, phba->mbox_mem_pool);
-				lpfc_disc_flush_list(phba);
-				psli->ring[(psli->extra_ring)].flag &=
-					~LPFC_STOP_IOCB_EVENT;
-				psli->ring[(psli->fcp_ring)].flag &=
-					~LPFC_STOP_IOCB_EVENT;
-				psli->ring[(psli->next_ring)].flag &=
-					~LPFC_STOP_IOCB_EVENT;
-				phba->hba_state = LPFC_HBA_READY;
+		if (vport->port_type == LPFC_PHYSICAL_PORT)
+			lpfc_issue_clear_la(phba, vport);
+
+		if (!(vport->fc_flag & FC_ABORT_DISCOVERY)) {
+			vport->num_disc_nodes = 0;
+			/* go thru NPR nodes and issue ELS PLOGIs */
+			if (vport->fc_npr_cnt)
+				lpfc_els_disc_plogi(vport);
+
+			if (!vport->num_disc_nodes) {
+				spin_lock_irq(shost->host_lock);
+				vport->fc_flag &= ~FC_NDISC_ACTIVE;
+				spin_unlock_irq(shost->host_lock);
+				lpfc_can_disctmo(vport);
 			}
 		}
+		vport->port_state = LPFC_VPORT_READY;
 	} else {
 		/* Next do PLOGIs - if any */
-		num_sent = lpfc_els_disc_plogi(phba);
+		num_sent = lpfc_els_disc_plogi(vport);
 
 		if (num_sent)
 			return;
 
-		if (phba->fc_flag & FC_RSCN_MODE) {
+		if (vport->fc_flag & FC_RSCN_MODE) {
 			/* Check to see if more RSCNs came in while we
 			 * were processing this one.
 			 */
-			if ((phba->fc_rscn_id_cnt == 0) &&
-			    (!(phba->fc_flag & FC_RSCN_DISCOVERY))) {
-				spin_lock_irq(phba->host->host_lock);
-				phba->fc_flag &= ~FC_RSCN_MODE;
-				spin_unlock_irq(phba->host->host_lock);
+			if ((vport->fc_rscn_id_cnt == 0) &&
+			    (!(vport->fc_flag & FC_RSCN_DISCOVERY))) {
+				spin_lock_irq(shost->host_lock);
+				vport->fc_flag &= ~FC_RSCN_MODE;
+				spin_unlock_irq(shost->host_lock);
+				lpfc_can_disctmo(vport);
 			} else
-				lpfc_els_handle_rscn(phba);
+				lpfc_els_handle_rscn(vport);
 		}
 	}
 	return;
@@ -1964,13 +2488,13 @@ lpfc_disc_start(struct lpfc_hba * phba)
  *  ring the match the sppecified nodelist.
  */
 static void
-lpfc_free_tx(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
+lpfc_free_tx(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
 {
+	LIST_HEAD(completions);
 	struct lpfc_sli *psli;
 	IOCB_t     *icmd;
 	struct lpfc_iocbq    *iocb, *next_iocb;
 	struct lpfc_sli_ring *pring;
-	struct lpfc_dmabuf   *mp;
 
 	psli = &phba->sli;
 	pring = &psli->ring[LPFC_ELS_RING];
@@ -1978,6 +2502,7 @@ lpfc_free_tx(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
 	/* Error matching iocb on txq or txcmplq
 	 * First check the txq.
 	 */
+	spin_lock_irq(&phba->hbalock);
 	list_for_each_entry_safe(iocb, next_iocb, &pring->txq, list) {
 		if (iocb->context1 != ndlp) {
 			continue;
@@ -1986,9 +2511,8 @@ lpfc_free_tx(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
 		if ((icmd->ulpCommand == CMD_ELS_REQUEST64_CR) ||
 		    (icmd->ulpCommand == CMD_XMIT_ELS_RSP64_CX)) {
 
-			list_del(&iocb->list);
+			list_move_tail(&iocb->list, &completions);
 			pring->txq_cnt--;
-			lpfc_els_free_iocb(phba, iocb);
 		}
 	}
 
@@ -1998,70 +2522,51 @@ lpfc_free_tx(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
 			continue;
 		}
 		icmd = &iocb->iocb;
-		if ((icmd->ulpCommand == CMD_ELS_REQUEST64_CR) ||
-		    (icmd->ulpCommand == CMD_XMIT_ELS_RSP64_CX)) {
+		if (icmd->ulpCommand == CMD_ELS_REQUEST64_CR ||
+		    icmd->ulpCommand == CMD_XMIT_ELS_RSP64_CX) {
+			lpfc_sli_issue_abort_iotag(phba, pring, iocb);
+		}
+	}
+	spin_unlock_irq(&phba->hbalock);
 
-			iocb->iocb_cmpl = NULL;
-			/* context2 = cmd, context2->next = rsp, context3 =
-			   bpl */
-			if (iocb->context2) {
-				/* Free the response IOCB before handling the
-				   command. */
-
-				mp = (struct lpfc_dmabuf *) (iocb->context2);
-				mp = list_get_first(&mp->list,
-						    struct lpfc_dmabuf,
-						    list);
-				if (mp) {
-					/* Delay before releasing rsp buffer to
-					 * give UNREG mbox a chance to take
-					 * effect.
-					 */
-					list_add(&mp->list,
-						&phba->freebufList);
-				}
-				lpfc_mbuf_free(phba,
-					       ((struct lpfc_dmabuf *)
-						iocb->context2)->virt,
-					       ((struct lpfc_dmabuf *)
-						iocb->context2)->phys);
-				kfree(iocb->context2);
-			}
+	while (!list_empty(&completions)) {
+		iocb = list_get_first(&completions, struct lpfc_iocbq, list);
+		list_del_init(&iocb->list);
 
-			if (iocb->context3) {
-				lpfc_mbuf_free(phba,
-					       ((struct lpfc_dmabuf *)
-						iocb->context3)->virt,
-					       ((struct lpfc_dmabuf *)
-						iocb->context3)->phys);
-				kfree(iocb->context3);
-			}
+		if (!iocb->iocb_cmpl)
+			lpfc_sli_release_iocbq(phba, iocb);
+		else {
+			icmd = &iocb->iocb;
+			icmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+			icmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+			(iocb->iocb_cmpl) (phba, iocb, iocb);
 		}
 	}
-
-	return;
 }
 
 void
-lpfc_disc_flush_list(struct lpfc_hba * phba)
+lpfc_disc_flush_list(struct lpfc_vport *vport)
 {
 	struct lpfc_nodelist *ndlp, *next_ndlp;
-
-	if (phba->fc_plogi_cnt) {
-		list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_plogi_list,
-					nlp_listp) {
-			lpfc_free_tx(phba, ndlp);
-			lpfc_nlp_remove(phba, ndlp);
-		}
-	}
-	if (phba->fc_adisc_cnt) {
-		list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_adisc_list,
-					nlp_listp) {
-			lpfc_free_tx(phba, ndlp);
-			lpfc_nlp_remove(phba, ndlp);
+	struct lpfc_hba *phba = vport->phba;
+
+	if (vport->fc_plogi_cnt || vport->fc_adisc_cnt) {
+		list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes,
+					 nlp_listp) {
+			if (ndlp->nlp_state == NLP_STE_PLOGI_ISSUE ||
+			    ndlp->nlp_state == NLP_STE_ADISC_ISSUE) {
+				lpfc_free_tx(phba, ndlp);
+			}
 		}
 	}
-	return;
+}
+
+void
+lpfc_cleanup_discovery_resources(struct lpfc_vport *vport)
+{
+	lpfc_els_flush_rscn(vport);
+	lpfc_els_flush_cmd(vport);
+	lpfc_disc_flush_list(vport);
 }
 
 /*****************************************************************************/
@@ -2082,157 +2587,147 @@ lpfc_disc_flush_list(struct lpfc_hba * phba)
 void
 lpfc_disc_timeout(unsigned long ptr)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba *)ptr;
+	struct lpfc_vport *vport = (struct lpfc_vport *) ptr;
+	struct lpfc_hba   *phba = vport->phba;
 	unsigned long flags = 0;
 
 	if (unlikely(!phba))
 		return;
 
-	spin_lock_irqsave(phba->host->host_lock, flags);
-	if (!(phba->work_hba_events & WORKER_DISC_TMO)) {
-		phba->work_hba_events |= WORKER_DISC_TMO;
+	if ((vport->work_port_events & WORKER_DISC_TMO) == 0) {
+		spin_lock_irqsave(&vport->work_port_lock, flags);
+		vport->work_port_events |= WORKER_DISC_TMO;
+		spin_unlock_irqrestore(&vport->work_port_lock, flags);
+
+		spin_lock_irqsave(&phba->hbalock, flags);
 		if (phba->work_wait)
-			wake_up(phba->work_wait);
+			lpfc_worker_wake_up(phba);
+		spin_unlock_irqrestore(&phba->hbalock, flags);
 	}
-	spin_unlock_irqrestore(phba->host->host_lock, flags);
 	return;
 }
 
 static void
-lpfc_disc_timeout_handler(struct lpfc_hba *phba)
+lpfc_disc_timeout_handler(struct lpfc_vport *vport)
 {
-	struct lpfc_sli *psli;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
+	struct lpfc_sli  *psli = &phba->sli;
 	struct lpfc_nodelist *ndlp, *next_ndlp;
-	LPFC_MBOXQ_t *clearlambox, *initlinkmbox;
+	LPFC_MBOXQ_t *initlinkmbox;
 	int rc, clrlaerr = 0;
 
-	if (unlikely(!phba))
+	if (!(vport->fc_flag & FC_DISC_TMO))
 		return;
 
-	if (!(phba->fc_flag & FC_DISC_TMO))
-		return;
-
-	psli = &phba->sli;
+	spin_lock_irq(shost->host_lock);
+	vport->fc_flag &= ~FC_DISC_TMO;
+	spin_unlock_irq(shost->host_lock);
 
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag &= ~FC_DISC_TMO;
-	spin_unlock_irq(phba->host->host_lock);
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+		"disc timeout:    state:x%x rtry:x%x flg:x%x",
+		vport->port_state, vport->fc_ns_retry, vport->fc_flag);
 
-	switch (phba->hba_state) {
+	switch (vport->port_state) {
 
 	case LPFC_LOCAL_CFG_LINK:
-	/* hba_state is identically LPFC_LOCAL_CFG_LINK while waiting for FAN */
-		/* FAN timeout */
-		lpfc_printf_log(phba,
-				 KERN_WARNING,
-				 LOG_DISCOVERY,
-				 "%d:0221 FAN timeout\n",
-				 phba->brd_no);
-
+	/* port_state is identically LPFC_LOCAL_CFG_LINK while waiting for
+	 * FAN
+	 */
+		/* FAN timeout */
+		lpfc_printf_vlog(vport, KERN_WARNING, LOG_DISCOVERY,
+				 "0221 FAN timeout\n");
 		/* Start discovery by sending FLOGI, clean up old rpis */
-		list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_npr_list,
-					nlp_listp) {
+		list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes,
+					 nlp_listp) {
+			if (ndlp->nlp_state != NLP_STE_NPR_NODE)
+				continue;
 			if (ndlp->nlp_type & NLP_FABRIC) {
 				/* Clean up the ndlp on Fabric connections */
-				lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+				lpfc_drop_node(vport, ndlp);
+
 			} else if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) {
 				/* Fail outstanding IO now since device
 				 * is marked for PLOGI.
 				 */
-				lpfc_unreg_rpi(phba, ndlp);
+				lpfc_unreg_rpi(vport, ndlp);
 			}
 		}
-		phba->hba_state = LPFC_FLOGI;
-		lpfc_set_disctmo(phba);
-		lpfc_initial_flogi(phba);
+		if (vport->port_state != LPFC_FLOGI) {
+			lpfc_initial_flogi(vport);
+		}
 		break;
 
+	case LPFC_FDISC:
 	case LPFC_FLOGI:
-	/* hba_state is identically LPFC_FLOGI while waiting for FLOGI cmpl */
+	/* port_state is identically LPFC_FLOGI while waiting for FLOGI cmpl */
 		/* Initial FLOGI timeout */
-		lpfc_printf_log(phba,
-				 KERN_ERR,
-				 LOG_DISCOVERY,
-				 "%d:0222 Initial FLOGI timeout\n",
-				 phba->brd_no);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0222 Initial %s timeout\n",
+				 vport->vpi ? "FDISC" : "FLOGI");
 
 		/* Assume no Fabric and go on with discovery.
 		 * Check for outstanding ELS FLOGI to abort.
 		 */
 
 		/* FLOGI failed, so just use loop map to make discovery list */
-		lpfc_disc_list_loopmap(phba);
+		lpfc_disc_list_loopmap(vport);
 
 		/* Start discovery */
-		lpfc_disc_start(phba);
+		lpfc_disc_start(vport);
 		break;
 
 	case LPFC_FABRIC_CFG_LINK:
 	/* hba_state is identically LPFC_FABRIC_CFG_LINK while waiting for
 	   NameServer login */
-		lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
-				"%d:0223 Timeout while waiting for NameServer "
-				"login\n", phba->brd_no);
-
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0223 Timeout while waiting for "
+				 "NameServer login\n");
 		/* Next look for NameServer ndlp */
-		ndlp = lpfc_findnode_did(phba, NLP_SEARCH_ALL, NameServer_DID);
+		ndlp = lpfc_findnode_did(vport, NameServer_DID);
 		if (ndlp)
-			lpfc_nlp_remove(phba, ndlp);
-		/* Start discovery */
-		lpfc_disc_start(phba);
-		break;
+			lpfc_els_abort(phba, ndlp);
+
+		/* Restart discovery */
+		goto restart_disc;
 
 	case LPFC_NS_QRY:
 	/* Check for wait for NameServer Rsp timeout */
-		lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
-				"%d:0224 NameServer Query timeout "
-				"Data: x%x x%x\n",
-				phba->brd_no,
-				phba->fc_ns_retry, LPFC_MAX_NS_RETRY);
-
-		ndlp = lpfc_findnode_did(phba, NLP_SEARCH_UNMAPPED,
-								NameServer_DID);
-		if (ndlp) {
-			if (phba->fc_ns_retry < LPFC_MAX_NS_RETRY) {
-				/* Try it one more time */
-				rc = lpfc_ns_cmd(phba, ndlp, SLI_CTNS_GID_FT);
-				if (rc == 0)
-					break;
-			}
-			phba->fc_ns_retry = 0;
-		}
-
-		/* Nothing to authenticate, so CLEAR_LA right now */
-		clearlambox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-		if (!clearlambox) {
-			clrlaerr = 1;
-			lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
-					"%d:0226 Device Discovery "
-					"completion error\n",
-					phba->brd_no);
-			phba->hba_state = LPFC_HBA_ERROR;
-			break;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0224 NameServer Query timeout "
+				 "Data: x%x x%x\n",
+				 vport->fc_ns_retry, LPFC_MAX_NS_RETRY);
+
+		if (vport->fc_ns_retry < LPFC_MAX_NS_RETRY) {
+			/* Try it one more time */
+			vport->fc_ns_retry++;
+			rc = lpfc_ns_cmd(vport, SLI_CTNS_GID_FT,
+					 vport->fc_ns_retry, 0);
+			if (rc == 0)
+				break;
 		}
+		vport->fc_ns_retry = 0;
 
-		phba->hba_state = LPFC_CLEAR_LA;
-		lpfc_clear_la(phba, clearlambox);
-		clearlambox->mbox_cmpl = lpfc_mbx_cmpl_clear_la;
-		rc = lpfc_sli_issue_mbox(phba, clearlambox,
-					 (MBX_NOWAIT | MBX_STOP_IOCB));
-		if (rc == MBX_NOT_FINISHED) {
-			mempool_free(clearlambox, phba->mbox_mem_pool);
-			clrlaerr = 1;
-			break;
+restart_disc:
+		/*
+		 * Discovery is over.
+		 * set port_state to PORT_READY if SLI2.
+		 * cmpl_reg_vpi will set port_state to READY for SLI3.
+		 */
+		if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED)
+			lpfc_issue_reg_vpi(phba, vport);
+		else  {	/* NPIV Not enabled */
+			lpfc_issue_clear_la(phba, vport);
+			vport->port_state = LPFC_VPORT_READY;
 		}
 
 		/* Setup and issue mailbox INITIALIZE LINK command */
 		initlinkmbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 		if (!initlinkmbox) {
-			lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
-					"%d:0206 Device Discovery "
-					"completion error\n",
-					phba->brd_no);
-			phba->hba_state = LPFC_HBA_ERROR;
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+					 "0206 Device Discovery "
+					 "completion error\n");
+			phba->link_state = LPFC_HBA_ERROR;
 			break;
 		}
 
@@ -2240,8 +2735,9 @@ lpfc_disc_timeout_handler(struct lpfc_hba *phba)
 		lpfc_init_link(phba, initlinkmbox, phba->cfg_topology,
 			       phba->cfg_link_speed);
 		initlinkmbox->mb.un.varInitLnk.lipsr_AL_PA = 0;
-		rc = lpfc_sli_issue_mbox(phba, initlinkmbox,
-					 (MBX_NOWAIT | MBX_STOP_IOCB));
+		initlinkmbox->vport = vport;
+		initlinkmbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+		rc = lpfc_sli_issue_mbox(phba, initlinkmbox, MBX_NOWAIT);
 		lpfc_set_loopback_flag(phba);
 		if (rc == MBX_NOT_FINISHED)
 			mempool_free(initlinkmbox, phba->mbox_mem_pool);
@@ -2250,67 +2746,75 @@ lpfc_disc_timeout_handler(struct lpfc_hba *phba)
 
 	case LPFC_DISC_AUTH:
 	/* Node Authentication timeout */
-		lpfc_printf_log(phba,
-				 KERN_ERR,
-				 LOG_DISCOVERY,
-				 "%d:0227 Node Authentication timeout\n",
-				 phba->brd_no);
-		lpfc_disc_flush_list(phba);
-		clearlambox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-		if (!clearlambox) {
-			clrlaerr = 1;
-			lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY,
-					"%d:0207 Device Discovery "
-					"completion error\n",
-					phba->brd_no);
-			phba->hba_state = LPFC_HBA_ERROR;
-			break;
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0227 Node Authentication timeout\n");
+		lpfc_disc_flush_list(vport);
+
+		/*
+		 * set port_state to PORT_READY if SLI2.
+		 * cmpl_reg_vpi will set port_state to READY for SLI3.
+		 */
+		if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED)
+			lpfc_issue_reg_vpi(phba, vport);
+		else {	/* NPIV Not enabled */
+			lpfc_issue_clear_la(phba, vport);
+			vport->port_state = LPFC_VPORT_READY;
 		}
-		phba->hba_state = LPFC_CLEAR_LA;
-		lpfc_clear_la(phba, clearlambox);
-		clearlambox->mbox_cmpl = lpfc_mbx_cmpl_clear_la;
-		rc = lpfc_sli_issue_mbox(phba, clearlambox,
-					 (MBX_NOWAIT | MBX_STOP_IOCB));
-		if (rc == MBX_NOT_FINISHED) {
-			mempool_free(clearlambox, phba->mbox_mem_pool);
-			clrlaerr = 1;
+		break;
+
+	case LPFC_VPORT_READY:
+		if (vport->fc_flag & FC_RSCN_MODE) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+					 "0231 RSCN timeout Data: x%x "
+					 "x%x\n",
+					 vport->fc_ns_retry, LPFC_MAX_NS_RETRY);
+
+			/* Cleanup any outstanding ELS commands */
+			lpfc_els_flush_cmd(vport);
+
+			lpfc_els_flush_rscn(vport);
+			lpfc_disc_flush_list(vport);
 		}
 		break;
 
+	default:
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0229 Unexpected discovery timeout, "
+				 "vport State x%x\n", vport->port_state);
+		break;
+	}
+
+	switch (phba->link_state) {
 	case LPFC_CLEAR_LA:
-	/* CLEAR LA timeout */
-		lpfc_printf_log(phba,
-				 KERN_ERR,
-				 LOG_DISCOVERY,
-				 "%d:0228 CLEAR LA timeout\n",
-				 phba->brd_no);
+		/* CLEAR LA timeout */
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0228 CLEAR LA timeout\n");
 		clrlaerr = 1;
 		break;
 
-	case LPFC_HBA_READY:
-		if (phba->fc_flag & FC_RSCN_MODE) {
-			lpfc_printf_log(phba,
-					KERN_ERR,
-					LOG_DISCOVERY,
-					"%d:0231 RSCN timeout Data: x%x x%x\n",
-					phba->brd_no,
-					phba->fc_ns_retry, LPFC_MAX_NS_RETRY);
-
-			/* Cleanup any outstanding ELS commands */
-			lpfc_els_flush_cmd(phba);
+	case LPFC_LINK_UNKNOWN:
+	case LPFC_WARM_START:
+	case LPFC_INIT_START:
+	case LPFC_INIT_MBX_CMDS:
+	case LPFC_LINK_DOWN:
+	case LPFC_LINK_UP:
+	case LPFC_HBA_ERROR:
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				 "0230 Unexpected timeout, hba link "
+				 "state x%x\n", phba->link_state);
+		clrlaerr = 1;
+		break;
 
-			lpfc_els_flush_rscn(phba);
-			lpfc_disc_flush_list(phba);
-		}
+	case LPFC_HBA_READY:
 		break;
 	}
 
 	if (clrlaerr) {
-		lpfc_disc_flush_list(phba);
+		lpfc_disc_flush_list(vport);
 		psli->ring[(psli->extra_ring)].flag &= ~LPFC_STOP_IOCB_EVENT;
 		psli->ring[(psli->fcp_ring)].flag &= ~LPFC_STOP_IOCB_EVENT;
 		psli->ring[(psli->next_ring)].flag &= ~LPFC_STOP_IOCB_EVENT;
-		phba->hba_state = LPFC_HBA_READY;
+		vport->port_state = LPFC_VPORT_READY;
 	}
 
 	return;
@@ -2323,128 +2827,269 @@ lpfc_disc_timeout_handler(struct lpfc_hba *phba)
  * handed off to the SLI layer.
  */
 void
-lpfc_mbx_cmpl_fdmi_reg_login(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_mbx_cmpl_fdmi_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
-	struct lpfc_sli *psli;
-	MAILBOX_t *mb;
-	struct lpfc_dmabuf *mp;
-	struct lpfc_nodelist *ndlp;
-
-	psli = &phba->sli;
-	mb = &pmb->mb;
-
-	ndlp = (struct lpfc_nodelist *) pmb->context2;
-	mp = (struct lpfc_dmabuf *) (pmb->context1);
+	MAILBOX_t *mb = &pmb->mb;
+	struct lpfc_dmabuf   *mp = (struct lpfc_dmabuf *) (pmb->context1);
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) pmb->context2;
+	struct lpfc_vport    *vport = pmb->vport;
 
 	pmb->context1 = NULL;
 
 	ndlp->nlp_rpi = mb->un.varWords[0];
 	ndlp->nlp_type |= NLP_FABRIC;
-	ndlp->nlp_state = NLP_STE_UNMAPPED_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_UNMAPPED_LIST);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
 
-	/* Start issuing Fabric-Device Management Interface (FDMI)
-	 * command to 0xfffffa (FDMI well known port)
+	/*
+	 * Start issuing Fabric-Device Management Interface (FDMI) command to
+	 * 0xfffffa (FDMI well known port) or delay issuing FDMI command if
+	 * fdmi-on=2 (supporting RPA/hostname)
 	 */
-	if (phba->cfg_fdmi_on == 1) {
-		lpfc_fdmi_cmd(phba, ndlp, SLI_MGMT_DHBA);
-	} else {
-		/*
-		 * Delay issuing FDMI command if fdmi-on=2
-		 * (supporting RPA/hostnmae)
-		 */
-		mod_timer(&phba->fc_fdmitmo, jiffies + HZ * 60);
-	}
 
+	if (vport->cfg_fdmi_on == 1)
+		lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_DHBA);
+	else
+		mod_timer(&vport->fc_fdmitmo, jiffies + HZ * 60);
+
+	/* Mailbox took a reference to the node */
+	lpfc_nlp_put(ndlp);
 	lpfc_mbuf_free(phba, mp->virt, mp->phys);
 	kfree(mp);
-	mempool_free( pmb, phba->mbox_mem_pool);
+	mempool_free(pmb, phba->mbox_mem_pool);
 
 	return;
 }
 
+static int
+lpfc_filter_by_rpi(struct lpfc_nodelist *ndlp, void *param)
+{
+	uint16_t *rpi = param;
+
+	return ndlp->nlp_rpi == *rpi;
+}
+
+static int
+lpfc_filter_by_wwpn(struct lpfc_nodelist *ndlp, void *param)
+{
+	return memcmp(&ndlp->nlp_portname, param,
+		      sizeof(ndlp->nlp_portname)) == 0;
+}
+
+static int
+lpfc_filter_by_wwnn(struct lpfc_nodelist *ndlp, void *param)
+{
+	return memcmp(&ndlp->nlp_nodename, param,
+		      sizeof(ndlp->nlp_nodename)) == 0;
+}
+
+struct lpfc_nodelist *
+__lpfc_find_node(struct lpfc_vport *vport, node_filter filter, void *param)
+{
+	struct lpfc_nodelist *ndlp;
+
+	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+		if (filter(ndlp, param))
+			return ndlp;
+	}
+	return NULL;
+}
+
 /*
- * This routine looks up the ndlp  lists
- * for the given RPI. If rpi found
- * it return the node list pointer
- * else return NULL.
+ * Search node lists for a remote port matching filter criteria
+ * Caller needs to hold host_lock before calling this routine.
  */
 struct lpfc_nodelist *
-lpfc_findnode_rpi(struct lpfc_hba * phba, uint16_t rpi)
+lpfc_find_node(struct lpfc_vport *vport, node_filter filter, void *param)
 {
+	struct Scsi_Host     *shost = lpfc_shost_from_vport(vport);
 	struct lpfc_nodelist *ndlp;
-	struct list_head * lists[]={&phba->fc_nlpunmap_list,
-				    &phba->fc_nlpmap_list,
-				    &phba->fc_plogi_list,
-				    &phba->fc_adisc_list,
-				    &phba->fc_reglogin_list};
-	int i;
 
-	spin_lock_irq(phba->host->host_lock);
-	for (i = 0; i < ARRAY_SIZE(lists); i++ )
-		list_for_each_entry(ndlp, lists[i], nlp_listp)
-			if (ndlp->nlp_rpi == rpi) {
-				spin_unlock_irq(phba->host->host_lock);
-				return ndlp;
-			}
-	spin_unlock_irq(phba->host->host_lock);
-	return NULL;
+	spin_lock_irq(shost->host_lock);
+	ndlp = __lpfc_find_node(vport, filter, param);
+	spin_unlock_irq(shost->host_lock);
+	return ndlp;
 }
 
 /*
- * This routine looks up the ndlp  lists
- * for the given WWPN. If WWPN found
- * it return the node list pointer
- * else return NULL.
+ * This routine looks up the ndlp lists for the given RPI. If the rpi is
+ * found, it returns the nodelist element pointer, else it returns NULL.
  */
 struct lpfc_nodelist *
-lpfc_findnode_wwpn(struct lpfc_hba * phba, uint32_t order,
-		   struct lpfc_name * wwpn)
+__lpfc_findnode_rpi(struct lpfc_vport *vport, uint16_t rpi)
+{
+	return __lpfc_find_node(vport, lpfc_filter_by_rpi, &rpi);
+}
+
+struct lpfc_nodelist *
+lpfc_findnode_rpi(struct lpfc_vport *vport, uint16_t rpi)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 	struct lpfc_nodelist *ndlp;
-	struct list_head * lists[]={&phba->fc_nlpunmap_list,
-				    &phba->fc_nlpmap_list,
-				    &phba->fc_npr_list,
-				    &phba->fc_plogi_list,
-				    &phba->fc_adisc_list,
-				    &phba->fc_reglogin_list,
-				    &phba->fc_prli_list};
-	uint32_t search[]={NLP_SEARCH_UNMAPPED,
-			   NLP_SEARCH_MAPPED,
-			   NLP_SEARCH_NPR,
-			   NLP_SEARCH_PLOGI,
-			   NLP_SEARCH_ADISC,
-			   NLP_SEARCH_REGLOGIN,
-			   NLP_SEARCH_PRLI};
-	int i;
 
-	spin_lock_irq(phba->host->host_lock);
-	for (i = 0; i < ARRAY_SIZE(lists); i++ ) {
-		if (!(order & search[i]))
-			continue;
-		list_for_each_entry(ndlp, lists[i], nlp_listp) {
-			if (memcmp(&ndlp->nlp_portname, wwpn,
-				   sizeof(struct lpfc_name)) == 0) {
-				spin_unlock_irq(phba->host->host_lock);
-				return ndlp;
-			}
-		}
+	spin_lock_irq(shost->host_lock);
+	ndlp = __lpfc_findnode_rpi(vport, rpi);
+	spin_unlock_irq(shost->host_lock);
+	return ndlp;
+}
+
+/*
+ * This routine looks up the ndlp lists for the given WWPN. If the WWPN is
+ * found, it returns the nodelist element pointer, else it returns NULL.
+ */
+struct lpfc_nodelist *
+lpfc_findnode_wwpn(struct lpfc_vport *vport, struct lpfc_name *wwpn)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_nodelist *ndlp;
+
+	spin_lock_irq(shost->host_lock);
+	ndlp = __lpfc_find_node(vport, lpfc_filter_by_wwpn, wwpn);
+	spin_unlock_irq(shost->host_lock);
+	return ndlp;
+}
+
+/*
+ * This routine looks up the ndlp lists for the given WWNN. If the WWNN is
+ * found, it returns the nodelist element pointer, else it returns NULL.
+ */
+struct lpfc_nodelist *
+lpfc_findnode_wwnn(struct lpfc_vport *vport, struct lpfc_name *wwnn)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_nodelist *ndlp;
+
+	spin_lock_irq(shost->host_lock);
+	ndlp = __lpfc_find_node(vport, lpfc_filter_by_wwnn, wwnn);
+	spin_unlock_irq(shost->host_lock);
+	return ndlp;
+}
+
+void
+lpfc_dev_loss_delay(unsigned long ptr)
+{
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) ptr;
+	struct lpfc_vport *vport = ndlp->vport;
+	struct lpfc_hba   *phba = vport->phba;
+	struct lpfc_work_evt  *evtp = &ndlp->dev_loss_evt;
+	unsigned long flags;
+
+	evtp = &ndlp->dev_loss_evt;
+
+	spin_lock_irqsave(&phba->hbalock, flags);
+	if (!list_empty(&evtp->evt_listp)) {
+		spin_unlock_irqrestore(&phba->hbalock, flags);
+		return;
 	}
-	spin_unlock_irq(phba->host->host_lock);
-	return NULL;
+
+	evtp->evt_arg1  = ndlp;
+	evtp->evt       = LPFC_EVT_DEV_LOSS_DELAY;
+	list_add_tail(&evtp->evt_listp, &phba->work_list);
+	if (phba->work_wait)
+		lpfc_worker_wake_up(phba);
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+	return;
 }
 
 void
-lpfc_nlp_init(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
-		 uint32_t did)
+lpfc_nlp_init(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+	      uint32_t did)
 {
 	memset(ndlp, 0, sizeof (struct lpfc_nodelist));
 	INIT_LIST_HEAD(&ndlp->els_retry_evt.evt_listp);
+	INIT_LIST_HEAD(&ndlp->dev_loss_evt.evt_listp);
+	INIT_LIST_HEAD(&ndlp->els_reauth_evt.evt_listp);
+	init_timer(&ndlp->nlp_initiator_tmr);
+	ndlp->nlp_initiator_tmr.function = lpfc_dev_loss_delay;
+	ndlp->nlp_initiator_tmr.data = (unsigned long)ndlp;
 	init_timer(&ndlp->nlp_delayfunc);
 	ndlp->nlp_delayfunc.function = lpfc_els_retry_delay;
 	ndlp->nlp_delayfunc.data = (unsigned long)ndlp;
+
+	init_timer(&ndlp->nlp_reauth_tmr);
+	ndlp->nlp_reauth_tmr.function = lpfc_reauth_node;
+	ndlp->nlp_reauth_tmr.data = (unsigned long)ndlp;
+
 	ndlp->nlp_DID = did;
-	ndlp->nlp_phba = phba;
+	ndlp->vport = vport;
 	ndlp->nlp_sid = NLP_NO_SID;
+	INIT_LIST_HEAD(&ndlp->nlp_listp);
+	kref_init(&ndlp->kref);
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_NODE,
+		"node init:       did:x%x",
+		ndlp->nlp_DID, 0, 0);
+
 	return;
 }
+
+/* This routine releases all resources associated with a specific NPort's ndlp
+ * and frees the nodelist back to the mempool.
+ */
+static void
+lpfc_nlp_release(struct kref *kref)
+{
+	struct lpfc_nodelist *ndlp = container_of(kref, struct lpfc_nodelist,
+						  kref);
+
+	lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE,
+		"node release:    did:x%x flg:x%x type:x%x",
+		ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_type);
+
+	lpfc_nlp_remove(ndlp->vport, ndlp);
+	mempool_free(ndlp, ndlp->vport->phba->nlp_mem_pool);
+}
+
+/* This routine bumps the reference count for a ndlp structure to ensure
+ * that one discovery thread won't free a ndlp while another discovery thread
+ * is using it.
+ */
+struct lpfc_nodelist *
+lpfc_nlp_get(struct lpfc_nodelist *ndlp)
+{
+	if (ndlp) {
+		lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE,
+			"node get:        did:x%x flg:x%x refcnt:x%x",
+			ndlp->nlp_DID, ndlp->nlp_flag,
+			atomic_read(&ndlp->kref.refcount));
+		kref_get(&ndlp->kref);
+	}
+	return ndlp;
+}
+
+
+/* This routine decrements the reference count for a ndlp structure. If the
+ * count goes to 0, this indicates that the associated nodelist should be freed.
+ */
+int
+lpfc_nlp_put(struct lpfc_nodelist *ndlp)
+{
+	if (ndlp) {
+		lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE,
+		"node put:        did:x%x flg:x%x refcnt:x%x",
+			ndlp->nlp_DID, ndlp->nlp_flag,
+			atomic_read(&ndlp->kref.refcount));
+	}
+	return ndlp ? kref_put(&ndlp->kref, lpfc_nlp_release) : 0;
+}
+
+/* This routine frees the specified nodelist if it is not in use
+ * by any other discovery thread. This routine returns 1 if the ndlp
+ * is not being used by anyone and has been freed. A return value of
+ * 0 indicates it is being used by another discovery thread and the
+ * refcount is left unchanged.
+ */
+int
+lpfc_nlp_not_used(struct lpfc_nodelist *ndlp)
+{
+	lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE,
+		"node not used:   did:x%x flg:x%x refcnt:x%x",
+		ndlp->nlp_DID, ndlp->nlp_flag,
+		atomic_read(&ndlp->kref.refcount));
+
+	if (atomic_read(&ndlp->kref.refcount) == 1) {
+		lpfc_nlp_put(ndlp);
+		return 1;
+	}
+	return 0;
+}
+
diff --git a/drivers/scsi/lpfc/lpfc_hw.h b/drivers/scsi/lpfc/lpfc_hw.h
index d2947ff..fddca02 100644
--- a/drivers/scsi/lpfc/lpfc_hw.h
+++ b/drivers/scsi/lpfc/lpfc_hw.h
@@ -59,6 +59,12 @@
 #define SLI2_IOCB_CMD_R3XTRA_ENTRIES 24
 #define SLI2_IOCB_RSP_R3XTRA_ENTRIES 32
 
+#define SLI2_IOCB_CMD_SIZE	32
+#define SLI2_IOCB_RSP_SIZE	32
+#define SLI3_IOCB_CMD_SIZE	128
+#define SLI3_IOCB_RSP_SIZE	64
+
+
 /* Common Transport structures and definitions */
 
 union CtRevisionId {
@@ -79,6 +85,9 @@ union CtCommandResponse {
 	uint32_t word;
 };
 
+#define FC4_FEATURE_INIT 0x2
+#define FC4_FEATURE_TARGET 0x1
+
 struct lpfc_sli_ct_request {
 	/* Structure is in Big Endian format */
 	union CtRevisionId RevisionId;
@@ -121,20 +130,6 @@ struct lpfc_sli_ct_request {
 
 			uint32_t rsvd[7];
 		} rft;
-		struct rff {
-			uint32_t PortId;
-			uint8_t reserved[2];
-#ifdef __BIG_ENDIAN_BITFIELD
-			uint8_t feature_res:6;
-			uint8_t feature_init:1;
-			uint8_t feature_tgt:1;
-#else  /*  __LITTLE_ENDIAN_BITFIELD */
-			uint8_t feature_tgt:1;
-			uint8_t feature_init:1;
-			uint8_t feature_res:6;
-#endif
-			uint8_t type_code;     /* type=8 for FCP */
-		} rff;
 		struct rnn {
 			uint32_t PortId;	/* For RNN_ID requests */
 			uint8_t wwnn[8];
@@ -144,15 +139,47 @@ struct lpfc_sli_ct_request {
 			uint8_t len;
 			uint8_t symbname[255];
 		} rsnn;
+		struct da_id { /* For DA_ID requests */
+			uint32_t port_id;
+		} da_id;
+		struct rspn {	/* For RSPN_ID requests */
+			uint32_t PortId;
+			uint8_t len;
+			uint8_t symbname[255];
+		} rspn;
+		struct gff {
+			uint32_t PortId;
+		} gff;
+		struct gff_acc {
+			uint8_t fbits[128];
+		} gff_acc;
+#define FCP_TYPE_FEATURE_OFFSET 7
+		struct rff {
+			uint32_t PortId;
+			uint8_t reserved[2];
+			uint8_t fbits;
+			uint8_t type_code;     /* type=8 for FCP */
+		} rff;
 	} un;
 };
 
 #define  SLI_CT_REVISION        1
-#define  GID_REQUEST_SZ         (sizeof(struct lpfc_sli_ct_request) - 260)
-#define  RFT_REQUEST_SZ         (sizeof(struct lpfc_sli_ct_request) - 228)
-#define  RFF_REQUEST_SZ         (sizeof(struct lpfc_sli_ct_request) - 235)
-#define  RNN_REQUEST_SZ         (sizeof(struct lpfc_sli_ct_request) - 252)
-#define  RSNN_REQUEST_SZ        (sizeof(struct lpfc_sli_ct_request))
+#define  GID_REQUEST_SZ   (offsetof(struct lpfc_sli_ct_request, un) + \
+			   sizeof(struct gid))
+#define  GFF_REQUEST_SZ   (offsetof(struct lpfc_sli_ct_request, un) + \
+			   sizeof(struct gff))
+#define  RFT_REQUEST_SZ   (offsetof(struct lpfc_sli_ct_request, un) + \
+			   sizeof(struct rft))
+#define  RFF_REQUEST_SZ   (offsetof(struct lpfc_sli_ct_request, un) + \
+			   sizeof(struct rff))
+#define  RNN_REQUEST_SZ   (offsetof(struct lpfc_sli_ct_request, un) + \
+			   sizeof(struct rnn))
+#define  RSNN_REQUEST_SZ  (offsetof(struct lpfc_sli_ct_request, un) + \
+			   sizeof(struct rsnn))
+#define DA_ID_REQUEST_SZ (offsetof(struct lpfc_sli_ct_request, un) + \
+			  sizeof(struct da_id))
+#define  RSPN_REQUEST_SZ  (offsetof(struct lpfc_sli_ct_request, un) + \
+			   sizeof(struct rspn))
 
 /*
  * FsType Definitions
@@ -227,6 +254,7 @@ struct lpfc_sli_ct_request {
 #define  SLI_CTNS_GFT_ID      0x0117
 #define  SLI_CTNS_GSPN_ID     0x0118
 #define  SLI_CTNS_GPT_ID      0x011A
+#define  SLI_CTNS_GFF_ID      0x011F
 #define  SLI_CTNS_GID_PN      0x0121
 #define  SLI_CTNS_GID_NN      0x0131
 #define  SLI_CTNS_GIP_NN      0x0135
@@ -240,9 +268,9 @@ struct lpfc_sli_ct_request {
 #define  SLI_CTNS_RNN_ID      0x0213
 #define  SLI_CTNS_RCS_ID      0x0214
 #define  SLI_CTNS_RFT_ID      0x0217
-#define  SLI_CTNS_RFF_ID      0x021F
 #define  SLI_CTNS_RSPN_ID     0x0218
 #define  SLI_CTNS_RPT_ID      0x021A
+#define  SLI_CTNS_RFF_ID      0x021F
 #define  SLI_CTNS_RIP_NN      0x0235
 #define  SLI_CTNS_RIPA_NN     0x0236
 #define  SLI_CTNS_RSNN_NN     0x0239
@@ -311,9 +339,9 @@ struct csp {
 	uint8_t bbCreditlsb;	/* FC Word 0, byte 3 */
 
 #ifdef __BIG_ENDIAN_BITFIELD
-	uint16_t increasingOffset:1;	/* FC Word 1, bit 31 */
+	uint16_t request_multiple_Nport:1;	/* FC Word 1, bit 31 */
 	uint16_t randomOffset:1;	/* FC Word 1, bit 30 */
-	uint16_t word1Reserved2:1;	/* FC Word 1, bit 29 */
+	uint16_t response_multiple_NPort:1;	/* FC Word 1, bit 29 */
 	uint16_t fPort:1;	/* FC Word 1, bit 28 */
 	uint16_t altBbCredit:1;	/* FC Word 1, bit 27 */
 	uint16_t edtovResolution:1;	/* FC Word 1, bit 26 */
@@ -322,7 +350,8 @@ struct csp {
 
 	uint16_t huntgroup:1;	/* FC Word 1, bit 23 */
 	uint16_t simplex:1;	/* FC Word 1, bit 22 */
-	uint16_t word1Reserved1:3;	/* FC Word 1, bit 21:19 */
+	uint16_t security:1;    /* FC Word 1, bit 21 */
+	uint16_t word1Reserved1:2;	/* FC Word 1, bit 20:19 */
 	uint16_t dhd:1;		/* FC Word 1, bit 18 */
 	uint16_t contIncSeqCnt:1;	/* FC Word 1, bit 17 */
 	uint16_t payloadlength:1;	/* FC Word 1, bit 16 */
@@ -332,14 +361,15 @@ struct csp {
 	uint16_t edtovResolution:1;	/* FC Word 1, bit 26 */
 	uint16_t altBbCredit:1;	/* FC Word 1, bit 27 */
 	uint16_t fPort:1;	/* FC Word 1, bit 28 */
-	uint16_t word1Reserved2:1;	/* FC Word 1, bit 29 */
+	uint16_t response_multiple_NPort:1;	/* FC Word 1, bit 29 */
 	uint16_t randomOffset:1;	/* FC Word 1, bit 30 */
-	uint16_t increasingOffset:1;	/* FC Word 1, bit 31 */
+	uint16_t request_multiple_Nport:1;	/* FC Word 1, bit 31 */
 
 	uint16_t payloadlength:1;	/* FC Word 1, bit 16 */
 	uint16_t contIncSeqCnt:1;	/* FC Word 1, bit 17 */
 	uint16_t dhd:1;		/* FC Word 1, bit 18 */
-	uint16_t word1Reserved1:3;	/* FC Word 1, bit 21:19 */
+	uint16_t word1Reserved1:2;	/* FC Word 1, bit 20:19 */
+	uint16_t security:1;    /* FC Word 1, bit 21 */
 	uint16_t simplex:1;	/* FC Word 1, bit 22 */
 	uint16_t huntgroup:1;	/* FC Word 1, bit 23 */
 #endif
@@ -478,6 +508,17 @@ struct serv_parm {	/* Structure is in Big Endian format */
 #define ELS_CMD_SCR       0x62000000
 #define ELS_CMD_RNID      0x78000000
 #define ELS_CMD_LIRR      0x7A000000
+/*
+ * ELS commands for authentication
+ * ELS_CMD_AUTH<<24 | AUTH_NEGOTIATE<<8 | AUTH_VERSION
+ */
+#define ELS_CMD_AUTH      0x90000000
+#define ELS_CMD_AUTH_RJT  0x90000A01
+#define ELS_CMD_AUTH_NEG  0x90000B01
+#define ELS_CMD_AUTH_DONE 0x90000C01
+#define ELS_CMD_DH_CHA    0x90001001
+#define ELS_CMD_DH_REP    0x90001101
+#define ELS_CMD_DH_SUC    0x90001201
 #else	/*  __LITTLE_ENDIAN_BITFIELD */
 #define ELS_CMD_MASK      0xffff
 #define ELS_RSP_MASK      0xff
@@ -514,6 +555,17 @@ struct serv_parm {	/* Structure is in Big Endian format */
 #define ELS_CMD_SCR       0x62
 #define ELS_CMD_RNID      0x78
 #define ELS_CMD_LIRR      0x7A
+/*
+ * ELS commands for authentication
+ * ELS_CMD_AUTH | AUTH_NEGOTIATE<<16 | AUTH_VERSION<<24
+ */
+#define ELS_CMD_AUTH      0x00000090
+#define ELS_CMD_AUTH_RJT  0x010A0090
+#define ELS_CMD_AUTH_NEG  0x010B0090
+#define ELS_CMD_AUTH_DONE 0x010C0090
+#define ELS_CMD_DH_CHA    0x01100090
+#define ELS_CMD_DH_REP    0x01110090
+#define ELS_CMD_DH_SUC    0x01120090
 #endif
 
 /*
@@ -782,7 +834,7 @@ typedef struct _RNID {		/* Structure is in Big Endian format */
 	} un;
 } RNID;
 
-typedef struct  _RPS {  	/* Structure is in Big Endian format */
+typedef struct  _RPS {		/* Structure is in Big Endian format */
 	union {
 		uint32_t portNum;
 		struct lpfc_name portName;
@@ -800,7 +852,7 @@ typedef struct  _RPS_RSP {	/* Structure is in Big Endian format */
 	uint32_t crcCnt;
 } RPS_RSP;
 
-typedef struct  _RPL {  	/* Structure is in Big Endian format */
+typedef struct  _RPL {		/* Structure is in Big Endian format */
 	uint32_t maxsize;
 	uint32_t index;
 } RPL;
@@ -811,7 +863,7 @@ typedef struct  _PORT_NUM_BLK {
 	struct lpfc_name portName;
 } PORT_NUM_BLK;
 
-typedef struct  _RPL_RSP { 	/* Structure is in Big Endian format */
+typedef struct  _RPL_RSP {	/* Structure is in Big Endian format */
 	uint32_t listLen;
 	uint32_t index;
 	PORT_NUM_BLK port_num_blk;
@@ -1201,7 +1253,8 @@ typedef struct {		/* FireFly BIU registers */
 #define HS_FFER3       0x20000000	/* Bit 29 */
 #define HS_FFER2       0x40000000	/* Bit 30 */
 #define HS_FFER1       0x80000000	/* Bit 31 */
-#define HS_FFERM       0xFF000000	/* Mask for error bits 31:24 */
+#define HS_CRIT_TEMP   0x00000100	/* Bit 8  */
+#define HS_FFERM       0xFF000100	/* Mask for error bits 31:24 and 8 */
 
 /* Host Control Register */
 
@@ -1255,7 +1308,11 @@ typedef struct {		/* FireFly BIU registers */
 #define MBX_KILL_BOARD      0x24
 #define MBX_CONFIG_FARP     0x25
 #define MBX_BEACON          0x2A
+#define MBX_HEARTBEAT       0x31
+#define MBX_WRITE_VPARMS    0x32
+#define MBX_ASYNCEVT_ENABLE 0x33
 
+#define MBX_CONFIG_HBQ	    0x7C
 #define MBX_LOAD_AREA       0x81
 #define MBX_RUN_BIU_DIAG64  0x84
 #define MBX_CONFIG_PORT     0x88
@@ -1263,6 +1320,10 @@ typedef struct {		/* FireFly BIU registers */
 #define MBX_READ_RPI64      0x8F
 #define MBX_REG_LOGIN64     0x93
 #define MBX_READ_LA64       0x95
+#define MBX_REG_VPI	    0x96
+#define MBX_UNREG_VPI	    0x97
+#define MBX_REG_VNPID	    0x96
+#define MBX_UNREG_VNPID	    0x97
 
 #define MBX_FLASH_WR_ULA    0x98
 #define MBX_SET_DEBUG       0x99
@@ -1311,6 +1372,7 @@ typedef struct {		/* FireFly BIU registers */
 
 /*  SLI_2 IOCB Command Set */
 
+#define CMD_ASYNC_STATUS        0x7C
 #define CMD_RCV_SEQUENCE64_CX   0x81
 #define CMD_XMIT_SEQUENCE64_CR  0x82
 #define CMD_XMIT_SEQUENCE64_CX  0x83
@@ -1335,6 +1397,10 @@ typedef struct {		/* FireFly BIU registers */
 #define CMD_FCP_TRECEIVE64_CX   0xA1
 #define CMD_FCP_TRSP64_CX       0xA3
 
+#define CMD_IOCB_RCV_SEQ64_CX	0xB5
+#define CMD_IOCB_RCV_ELS64_CX	0xB7
+#define CMD_IOCB_RCV_CONT64_CX	0xBB
+
 #define CMD_GEN_REQUEST64_CR    0xC2
 #define CMD_GEN_REQUEST64_CX    0xC3
 
@@ -1364,12 +1430,13 @@ typedef struct {		/* FireFly BIU registers */
 #define MBXERR_BAD_RCV_LENGTH       14
 #define MBXERR_DMA_ERROR            15
 #define MBXERR_ERROR                16
-#define MBXERR_UNKNOWN_CMD          18
 #define MBX_NOT_FINISHED           255
 
 #define MBX_BUSY                   0xffffff /* Attempted cmd to busy Mailbox */
 #define MBX_TIMEOUT                0xfffffe /* time-out expired waiting for */
 
+#define TEMPERATURE_OFFSET 0xB0	/* Slim offset for critical temperature event */
+
 /*
  *    Begin Structure Definitions for Mailbox Commands
  */
@@ -1562,6 +1629,7 @@ typedef struct {
 #define FLAGS_TOPOLOGY_MODE_PT_PT    0x02 /* Attempt pt-pt only */
 #define FLAGS_TOPOLOGY_MODE_LOOP     0x04 /* Attempt loop only */
 #define FLAGS_TOPOLOGY_MODE_PT_LOOP  0x06 /* Attempt pt-pt then loop */
+#define	FLAGS_UNREG_LOGIN_ALL	     0x08 /* UNREG_LOGIN all on link down */
 #define FLAGS_LIRP_LILP              0x80 /* LIRP / LILP is disabled */
 
 #define FLAGS_TOPOLOGY_FAILOVER      0x0400	/* Bit 10 */
@@ -1745,8 +1813,6 @@ typedef struct {
 #define LMT_4Gb       0x040
 #define LMT_8Gb       0x080
 #define LMT_10Gb      0x100
-
-
 	uint32_t rsvd2;
 	uint32_t rsvd3;
 	uint32_t max_xri;
@@ -1755,7 +1821,10 @@ typedef struct {
 	uint32_t avail_xri;
 	uint32_t avail_iocb;
 	uint32_t avail_rpi;
-	uint32_t default_rpi;
+	uint32_t max_vpi;
+	uint32_t rsvd4;
+	uint32_t rsvd5;
+	uint32_t avail_vpi;
 } READ_CONFIG_VAR;
 
 /* Structure for MB Command READ_RCONFIG (12) */
@@ -1819,6 +1888,13 @@ typedef struct {
 				      structure */
 		struct ulp_bde64 sp64;
 	} un;
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint16_t rsvd3;
+	uint16_t vpi;
+#else	/*  __LITTLE_ENDIAN_BITFIELD */
+	uint16_t vpi;
+	uint16_t rsvd3;
+#endif
 } READ_SPARM_VAR;
 
 /* Structure for MB Command READ_STATUS (14) */
@@ -1919,11 +1995,17 @@ typedef struct {
 #ifdef __BIG_ENDIAN_BITFIELD
 	uint32_t cv:1;
 	uint32_t rr:1;
-	uint32_t rsvd1:29;
+	uint32_t rsvd2:2;
+	uint32_t v3req:1;
+	uint32_t v3rsp:1;
+	uint32_t rsvd1:25;
 	uint32_t rv:1;
 #else	/*  __LITTLE_ENDIAN_BITFIELD */
 	uint32_t rv:1;
-	uint32_t rsvd1:29;
+	uint32_t rsvd1:25;
+	uint32_t v3rsp:1;
+	uint32_t v3req:1;
+	uint32_t rsvd2:2;
 	uint32_t rr:1;
 	uint32_t cv:1;
 #endif
@@ -1973,8 +2055,8 @@ typedef struct {
 	uint8_t sli1FwName[16];
 	uint32_t sli2FwRev;
 	uint8_t sli2FwName[16];
-	uint32_t rsvd2;
-	uint32_t RandomData[7];
+	uint32_t sli3Feat;
+	uint32_t RandomData[6];
 } READ_REV_VAR;
 
 /* Structure for MB Command READ_LINK_STAT (18) */
@@ -2014,6 +2096,14 @@ typedef struct {
 		struct ulp_bde64 sp64;
 	} un;
 
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint16_t rsvd6;
+	uint16_t vpi;
+#else /* __LITTLE_ENDIAN_BITFIELD */
+	uint16_t vpi;
+	uint16_t rsvd6;
+#endif
+
 } REG_LOGIN_VAR;
 
 /* Word 30 contents for REG_LOGIN */
@@ -2038,16 +2128,78 @@ typedef struct {
 #ifdef __BIG_ENDIAN_BITFIELD
 	uint16_t rsvd1;
 	uint16_t rpi;
+	uint32_t rsvd2;
+	uint32_t rsvd3;
+	uint32_t rsvd4;
+	uint32_t rsvd5;
+	uint16_t rsvd6;
+	uint16_t vpi;
 #else	/*  __LITTLE_ENDIAN_BITFIELD */
 	uint16_t rpi;
 	uint16_t rsvd1;
+	uint32_t rsvd2;
+	uint32_t rsvd3;
+	uint32_t rsvd4;
+	uint32_t rsvd5;
+	uint16_t vpi;
+	uint16_t rsvd6;
 #endif
 } UNREG_LOGIN_VAR;
 
+/* Structure for MB Command REG_VPI (0x96) */
+typedef struct {
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t rsvd1;
+	uint32_t rsvd2:8;
+	uint32_t sid:24;
+	uint32_t rsvd3;
+	uint32_t rsvd4;
+	uint32_t rsvd5;
+	uint16_t rsvd6;
+	uint16_t vpi;
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t rsvd1;
+	uint32_t sid:24;
+	uint32_t rsvd2:8;
+	uint32_t rsvd3;
+	uint32_t rsvd4;
+	uint32_t rsvd5;
+	uint16_t vpi;
+	uint16_t rsvd6;
+#endif
+} REG_VPI_VAR;
+
+/* Structure for MB Command UNREG_VPI (0x97) */
+typedef struct {
+	uint32_t rsvd1;
+	uint32_t rsvd2;
+	uint32_t rsvd3;
+	uint32_t rsvd4;
+	uint32_t rsvd5;
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint16_t rsvd6;
+	uint16_t vpi;
+#else	/*  __LITTLE_ENDIAN */
+	uint16_t vpi;
+	uint16_t rsvd6;
+#endif
+} UNREG_VPI_VAR;
+
 /* Structure for MB Command UNREG_D_ID (0x23) */
 
 typedef struct {
 	uint32_t did;
+	uint32_t rsvd2;
+	uint32_t rsvd3;
+	uint32_t rsvd4;
+	uint32_t rsvd5;
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint16_t rsvd6;
+	uint16_t vpi;
+#else
+	uint16_t vpi;
+	uint16_t rsvd6;
+#endif
 } UNREG_D_ID_VAR;
 
 /* Structure for MB Command READ_LA (21) */
@@ -2145,13 +2297,6 @@ typedef struct {
 	uint32_t rsvd1;
 } CLEAR_LA_VAR;
 
-/* Values needed to set MAX_DMA_LENGTH parameter */
-#define SLIM_VAR_MAX_DMA_LENGTH 0x100506
-#define SLIM_VAL_MAX_DMA_512    0x0
-#define SLIM_VAL_MAX_DMA_1024   0x1
-#define SLIM_VAL_MAX_DMA_2048   0x2
-#define SLIM_VAL_MAX_DMA_4096   0x3
-
 /* Structure for MB Command DUMP */
 
 typedef struct {
@@ -2186,13 +2331,240 @@ typedef struct {
 #define  DMP_RSP_OFFSET          0x14   /* word 5 contains first word of rsp */
 #define  DMP_RSP_SIZE            0x6C   /* maximum of 27 words of rsp data */
 
-/* Structure for MB Command CONFIG_PORT (0x88) */
+struct hbq_mask {
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint8_t tmatch;
+	uint8_t tmask;
+	uint8_t rctlmatch;
+	uint8_t rctlmask;
+#else	/*  __LITTLE_ENDIAN */
+	uint8_t rctlmask;
+	uint8_t rctlmatch;
+	uint8_t tmask;
+	uint8_t tmatch;
+#endif
+};
+
+
+/* Structure for MB Command CONFIG_HBQ (7c) */
+
+struct config_hbq_var {
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t rsvd1      :7;
+	uint32_t recvNotify :1;     /* Receive Notification */
+	uint32_t numMask    :8;     /* # Mask Entries       */
+	uint32_t profile    :8;     /* Selection Profile    */
+	uint32_t rsvd2      :8;
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t rsvd2      :8;
+	uint32_t profile    :8;     /* Selection Profile    */
+	uint32_t numMask    :8;     /* # Mask Entries       */
+	uint32_t recvNotify :1;     /* Receive Notification */
+	uint32_t rsvd1      :7;
+#endif
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t hbqId      :16;
+	uint32_t rsvd3      :12;
+	uint32_t ringMask   :4;
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t ringMask   :4;
+	uint32_t rsvd3      :12;
+	uint32_t hbqId      :16;
+#endif
 
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t entry_count :16;
+	uint32_t rsvd4        :8;
+	uint32_t headerLen    :8;
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t headerLen    :8;
+	uint32_t rsvd4        :8;
+	uint32_t entry_count :16;
+#endif
+
+	uint32_t hbqaddrLow;
+	uint32_t hbqaddrHigh;
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t rsvd5      :31;
+	uint32_t logEntry   :1;
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t logEntry   :1;
+	uint32_t rsvd5      :31;
+#endif
+
+	uint32_t rsvd6;    /* w7 */
+	uint32_t rsvd7;    /* w8 */
+	uint32_t rsvd8;    /* w9 */
+
+	struct hbq_mask hbqMasks[6];
+
+
+	union {
+		uint32_t allprofiles[12];
+
+		struct {
+			#ifdef __BIG_ENDIAN_BITFIELD
+				uint32_t	seqlenoff	:16;
+				uint32_t	maxlen		:16;
+			#else	/*  __LITTLE_ENDIAN */
+				uint32_t	maxlen		:16;
+				uint32_t	seqlenoff	:16;
+			#endif
+			#ifdef __BIG_ENDIAN_BITFIELD
+				uint32_t	rsvd1		:28;
+				uint32_t	seqlenbcnt	:4;
+			#else	/*  __LITTLE_ENDIAN */
+				uint32_t	seqlenbcnt	:4;
+				uint32_t	rsvd1		:28;
+			#endif
+			uint32_t rsvd[10];
+		} profile2;
+
+		struct {
+			#ifdef __BIG_ENDIAN_BITFIELD
+				uint32_t	seqlenoff	:16;
+				uint32_t	maxlen		:16;
+			#else	/*  __LITTLE_ENDIAN */
+				uint32_t	maxlen		:16;
+				uint32_t	seqlenoff	:16;
+			#endif
+			#ifdef __BIG_ENDIAN_BITFIELD
+				uint32_t	cmdcodeoff	:28;
+				uint32_t	rsvd1		:12;
+				uint32_t	seqlenbcnt	:4;
+			#else	/*  __LITTLE_ENDIAN */
+				uint32_t	seqlenbcnt	:4;
+				uint32_t	rsvd1		:12;
+				uint32_t	cmdcodeoff	:28;
+			#endif
+			uint32_t cmdmatch[8];
+
+			uint32_t rsvd[2];
+		} profile3;
+
+		struct {
+			#ifdef __BIG_ENDIAN_BITFIELD
+				uint32_t	seqlenoff	:16;
+				uint32_t	maxlen		:16;
+			#else	/*  __LITTLE_ENDIAN */
+				uint32_t	maxlen		:16;
+				uint32_t	seqlenoff	:16;
+			#endif
+			#ifdef __BIG_ENDIAN_BITFIELD
+				uint32_t	cmdcodeoff	:28;
+				uint32_t	rsvd1		:12;
+				uint32_t	seqlenbcnt	:4;
+			#else	/*  __LITTLE_ENDIAN */
+				uint32_t	seqlenbcnt	:4;
+				uint32_t	rsvd1		:12;
+				uint32_t	cmdcodeoff	:28;
+			#endif
+			uint32_t cmdmatch[8];
+
+			uint32_t rsvd[2];
+		} profile5;
+
+	} profiles;
+
+};
+
+
+
+/* Structure for MB Command CONFIG_PORT (0x88) */
 typedef struct {
-	uint32_t pcbLen;
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t cBE       :  1;
+	uint32_t cET       :  1;
+	uint32_t cHpcb     :  1;
+	uint32_t cMA       :  1;
+	uint32_t sli_mode  :  4;
+	uint32_t pcbLen    : 24;       /* bit 23:0  of memory based port
+					* config block */
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t pcbLen    : 24;       /* bit 23:0  of memory based port
+					* config block */
+	uint32_t sli_mode  :  4;
+	uint32_t cMA       :  1;
+	uint32_t cHpcb     :  1;
+	uint32_t cET       :  1;
+	uint32_t cBE       :  1;
+#endif
+
 	uint32_t pcbLow;       /* bit 31:0  of memory based port config block */
 	uint32_t pcbHigh;      /* bit 63:32 of memory based port config block */
-	uint32_t hbainit[5];
+	uint32_t hbainit[6];
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t rsvd      : 24;  /* Reserved                             */
+	uint32_t cmv	   :  1;  /* Configure Max VPIs                   */
+	uint32_t ccrp      :  1;  /* Config Command Ring Polling          */
+	uint32_t csah      :  1;  /* Configure Synchronous Abort Handling */
+	uint32_t chbs      :  1;  /* Configure Host Backing Store         */
+	uint32_t cinb      :  1;  /* Enable Interrupt Notification Block  */
+	uint32_t cerbm	   :  1;  /* Configure Enhanced Receive Buf Mgmt  */
+	uint32_t cmx	   :  1;  /* Configure Max XRIs                   */
+	uint32_t cmr	   :  1;  /* Configure Max RPIs                   */
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t cmr	   :  1;  /* Configure Max RPIs                   */
+	uint32_t cmx	   :  1;  /* Configure Max XRIs                   */
+	uint32_t cerbm	   :  1;  /* Configure Enhanced Receive Buf Mgmt  */
+	uint32_t cinb      :  1;  /* Enable Interrupt Notification Block  */
+	uint32_t chbs      :  1;  /* Configure Host Backing Store         */
+	uint32_t csah      :  1;  /* Configure Synchronous Abort Handling */
+	uint32_t ccrp      :  1;  /* Config Command Ring Polling          */
+	uint32_t cmv	   :  1;  /* Configure Max VPIs                   */
+	uint32_t rsvd      : 24;  /* Reserved                             */
+#endif
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t rsvd2     : 24;  /* Reserved                             */
+	uint32_t gmv	   :  1;  /* Grant Max VPIs                       */
+	uint32_t gcrp	   :  1;  /* Grant Command Ring Polling           */
+	uint32_t gsah	   :  1;  /* Grant Synchronous Abort Handling     */
+	uint32_t ghbs	   :  1;  /* Grant Host Backing Store             */
+	uint32_t ginb	   :  1;  /* Grant Interrupt Notification Block   */
+	uint32_t gerbm	   :  1;  /* Grant ERBM Request                   */
+	uint32_t gmx	   :  1;  /* Grant Max XRIs                       */
+	uint32_t gmr	   :  1;  /* Grant Max RPIs                       */
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t gmr	   :  1;  /* Grant Max RPIs                       */
+	uint32_t gmx	   :  1;  /* Grant Max XRIs                       */
+	uint32_t gerbm	   :  1;  /* Grant ERBM Request                   */
+	uint32_t ginb	   :  1;  /* Grant Interrupt Notification Block   */
+	uint32_t ghbs	   :  1;  /* Grant Host Backing Store             */
+	uint32_t gsah	   :  1;  /* Grant Synchronous Abort Handling     */
+	uint32_t gcrp	   :  1;  /* Grant Command Ring Polling           */
+	uint32_t gmv	   :  1;  /* Grant Max VPIs                       */
+	uint32_t rsvd2     : 24;  /* Reserved                             */
+#endif
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t max_rpi   : 16;  /* Max RPIs Port should configure       */
+	uint32_t max_xri   : 16;  /* Max XRIs Port should configure       */
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t max_xri   : 16;  /* Max XRIs Port should configure       */
+	uint32_t max_rpi   : 16;  /* Max RPIs Port should configure       */
+#endif
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t max_hbq   : 16;  /* Max HBQs Host expects to configure   */
+	uint32_t rsvd3     : 16;  /* Reserved                             */
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t rsvd3     : 16;  /* Reserved                             */
+	uint32_t max_hbq   : 16;  /* Max HBQs Host expects to configure   */
+#endif
+
+	uint32_t rsvd4;           /* Reserved                             */
+
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t rsvd5      : 16;  /* Reserved                             */
+	uint32_t max_vpi    : 16;  /* Max number of virt N-Ports           */
+#else	/*  __LITTLE_ENDIAN */
+	uint32_t max_vpi    : 16;  /* Max number of virt N-Ports           */
+	uint32_t rsvd5      : 16;  /* Reserved                             */
+#endif
+
 } CONFIG_PORT_VAR;
 
 /* SLI-2 Port Control Block */
@@ -2265,38 +2637,58 @@ typedef struct {
 	uint32_t IPAddress;
 } CONFIG_FARP_VAR;
 
+/* Structure for MB Command MBX_ASYNCEVT_ENABLE (0x33) */
+
+typedef struct {
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint32_t rsvd:30;
+	uint32_t ring:2;	/* Ring for ASYNC_EVENT iocb Bits 0-1*/
+#else /*  __LITTLE_ENDIAN */
+	uint32_t ring:2;	/* Ring for ASYNC_EVENT iocb Bits 0-1*/
+	uint32_t rsvd:30;
+#endif
+} ASYNCEVT_ENABLE_VAR;
+
 /* Union of all Mailbox Command types */
 #define MAILBOX_CMD_WSIZE	32
 #define MAILBOX_CMD_SIZE	(MAILBOX_CMD_WSIZE * sizeof(uint32_t))
 
 typedef union {
-	uint32_t varWords[MAILBOX_CMD_WSIZE - 1];
-	LOAD_SM_VAR varLdSM;	/* cmd =  1 (LOAD_SM)        */
-	READ_NV_VAR varRDnvp;	/* cmd =  2 (READ_NVPARMS)   */
-	WRITE_NV_VAR varWTnvp;	/* cmd =  3 (WRITE_NVPARMS)  */
+	uint32_t varWords[MAILBOX_CMD_WSIZE - 1]; /* first word is type/
+						    * feature/max ring number
+						    */
+	LOAD_SM_VAR varLdSM;		/* cmd =  1 (LOAD_SM)        */
+	READ_NV_VAR varRDnvp;		/* cmd =  2 (READ_NVPARMS)   */
+	WRITE_NV_VAR varWTnvp;		/* cmd =  3 (WRITE_NVPARMS)  */
 	BIU_DIAG_VAR varBIUdiag;	/* cmd =  4 (RUN_BIU_DIAG)   */
 	INIT_LINK_VAR varInitLnk;	/* cmd =  5 (INIT_LINK)      */
 	DOWN_LINK_VAR varDwnLnk;	/* cmd =  6 (DOWN_LINK)      */
-	CONFIG_LINK varCfgLnk;	/* cmd =  7 (CONFIG_LINK)    */
-	PART_SLIM_VAR varSlim;	/* cmd =  8 (PART_SLIM)      */
+	CONFIG_LINK varCfgLnk;		/* cmd =  7 (CONFIG_LINK)    */
+	PART_SLIM_VAR varSlim;		/* cmd =  8 (PART_SLIM)      */
 	CONFIG_RING_VAR varCfgRing;	/* cmd =  9 (CONFIG_RING)    */
 	RESET_RING_VAR varRstRing;	/* cmd = 10 (RESET_RING)     */
 	READ_CONFIG_VAR varRdConfig;	/* cmd = 11 (READ_CONFIG)    */
 	READ_RCONF_VAR varRdRConfig;	/* cmd = 12 (READ_RCONFIG)   */
 	READ_SPARM_VAR varRdSparm;	/* cmd = 13 (READ_SPARM(64)) */
 	READ_STATUS_VAR varRdStatus;	/* cmd = 14 (READ_STATUS)    */
-	READ_RPI_VAR varRdRPI;	/* cmd = 15 (READ_RPI(64))   */
-	READ_XRI_VAR varRdXRI;	/* cmd = 16 (READ_XRI)       */
-	READ_REV_VAR varRdRev;	/* cmd = 17 (READ_REV)       */
-	READ_LNK_VAR varRdLnk;	/* cmd = 18 (READ_LNK_STAT)  */
+	READ_RPI_VAR varRdRPI;		/* cmd = 15 (READ_RPI(64))   */
+	READ_XRI_VAR varRdXRI;		/* cmd = 16 (READ_XRI)       */
+	READ_REV_VAR varRdRev;		/* cmd = 17 (READ_REV)       */
+	READ_LNK_VAR varRdLnk;		/* cmd = 18 (READ_LNK_STAT)  */
 	REG_LOGIN_VAR varRegLogin;	/* cmd = 19 (REG_LOGIN(64))  */
 	UNREG_LOGIN_VAR varUnregLogin;	/* cmd = 20 (UNREG_LOGIN)    */
-	READ_LA_VAR varReadLA;	/* cmd = 21 (READ_LA(64))    */
+	READ_LA_VAR varReadLA;		/* cmd = 21 (READ_LA(64))    */
 	CLEAR_LA_VAR varClearLA;	/* cmd = 22 (CLEAR_LA)       */
-	DUMP_VAR varDmp;	/* Warm Start DUMP mbx cmd   */
-	UNREG_D_ID_VAR varUnregDID; /* cmd = 0x23 (UNREG_D_ID)   */
-	CONFIG_FARP_VAR varCfgFarp; /* cmd = 0x25 (CONFIG_FARP)  NEW_FEATURE */
-	CONFIG_PORT_VAR varCfgPort; /* cmd = 0x88 (CONFIG_PORT)  */
+	DUMP_VAR varDmp;		/* Warm Start DUMP mbx cmd   */
+	UNREG_D_ID_VAR varUnregDID;	/* cmd = 0x23 (UNREG_D_ID)   */
+	CONFIG_FARP_VAR varCfgFarp;	/* cmd = 0x25 (CONFIG_FARP)
+					 * NEW_FEATURE
+					 */
+	struct config_hbq_var varCfgHbq;/* cmd = 0x7c (CONFIG_HBQ)  */
+	CONFIG_PORT_VAR varCfgPort;	/* cmd = 0x88 (CONFIG_PORT)  */
+	REG_VPI_VAR varRegVpi;		/* cmd = 0x96 (REG_VPI) */
+	UNREG_VPI_VAR varUnregVpi;	/* cmd = 0x97 (UNREG_VPI) */
+	ASYNCEVT_ENABLE_VAR varCfgAsyncEvent; /* cmd = 0x33 (CONFIG_ASYNC) */
 } MAILVARIANTS;
 
 /*
@@ -2313,14 +2705,27 @@ struct lpfc_pgp {
 	__le32 rspPutInx;
 };
 
-typedef struct _SLI2_DESC {
-	struct lpfc_hgp host[MAX_RINGS];
+struct sli2_desc {
 	uint32_t unused1[16];
+	struct lpfc_hgp host[MAX_RINGS];
+	struct lpfc_pgp port[MAX_RINGS];
+};
+
+struct sli3_desc {
+	struct lpfc_hgp host[MAX_RINGS];
+	uint32_t reserved[8];
+	uint32_t hbq_put[16];
+};
+
+struct sli3_pgp {
 	struct lpfc_pgp port[MAX_RINGS];
-} SLI2_DESC;
+	uint32_t hbq_get[16];
+};
 
 typedef union {
-	SLI2_DESC s2;
+	struct sli2_desc s2;
+	struct sli3_desc s3;
+	struct sli3_pgp  s3_pgp;
 } SLI_VAR;
 
 typedef struct {
@@ -2626,6 +3031,40 @@ typedef struct {
 	uint32_t fcpt_Length;	/* transfer ready for IWRITE */
 } FCPT_FIELDS64;
 
+/* IOCB Command template for Async Status iocb commands */
+typedef struct {
+	uint32_t rsvd[4];
+	uint32_t param;
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint16_t evt_code;		/* High order bits word 5 */
+	uint16_t sub_ctxt_tag;		/* Low  order bits word 5 */
+#else   /*  __LITTLE_ENDIAN_BITFIELD */
+	uint16_t sub_ctxt_tag;		/* High order bits word 5 */
+	uint16_t evt_code;		/* Low  order bits word 5 */
+#endif
+} ASYNCSTAT_FIELDS;
+#define ASYNC_TEMP_WARN		0x100
+#define ASYNC_TEMP_SAFE		0x101
+
+/* IOCB Command template for CMD_IOCB_RCV_ELS64_CX (0xB7)
+   or CMD_IOCB_RCV_SEQ64_CX (0xB5) */
+
+struct rcv_sli3 {
+	uint32_t word8Rsvd;
+#ifdef __BIG_ENDIAN_BITFIELD
+	uint16_t vpi;
+	uint16_t word9Rsvd;
+#else  /*  __LITTLE_ENDIAN */
+	uint16_t word9Rsvd;
+	uint16_t vpi;
+#endif
+	uint32_t word10Rsvd;
+	uint32_t acc_len;      /* accumulated length */
+	struct ulp_bde64 bde2;
+};
+
+
+
 typedef struct _IOCB {	/* IOCB structure */
 	union {
 		GENERIC_RSP grsp;	/* Generic response */
@@ -2640,14 +3079,15 @@ typedef struct _IOCB {	/* IOCB structure */
 
 		/* SLI-2 structures */
 
-		struct ulp_bde64 cont64[2];	/* up to 2 64 bit continuation
-					   bde_64s */
+		struct ulp_bde64 cont64[2];  /* up to 2 64 bit continuation
+					      * bde_64s */
 		ELS_REQUEST64 elsreq64;	/* ELS_REQUEST template */
 		GEN_REQUEST64 genreq64;	/* GEN_REQUEST template */
 		RCV_ELS_REQ64 rcvels64;	/* RCV_ELS_REQ template */
 		XMT_SEQ_FIELDS64 xseq64;	/* XMIT / BCAST cmd */
 		FCPI_FIELDS64 fcpi64;	/* FCP 64 bit Initiator template */
 		FCPT_FIELDS64 fcpt64;	/* FCP 64 bit target template */
+		ASYNCSTAT_FIELDS asyncstat; /* async_status iocb */
 
 		uint32_t ulpWord[IOCB_WORD_SZ - 2];	/* generic 6 'words' */
 	} un;
@@ -2703,9 +3143,20 @@ typedef struct _IOCB {	/* IOCB structure */
 	uint32_t ulpTimeout:8;
 #endif
 
+	union {
+		struct rcv_sli3 rcvsli3; /* words 8 - 15 */
+		uint32_t sli3Words[24]; /* 96 extra bytes for SLI-3 */
+	} unsli3;
+
+#define ulpCt_h ulpXS
+#define ulpCt_l ulpFCP2Rcvy
+
+#define IOCB_FCP	   1	/* IOCB is used for FCP ELS cmds-ulpRsvByte */
+#define IOCB_IP		   2	/* IOCB is used for IP ELS cmds */
 #define PARM_UNUSED        0	/* PU field (Word 4) not used */
 #define PARM_REL_OFF       1	/* PU field (Word 4) = R. O. */
 #define PARM_READ_CHECK    2	/* PU field (Word 4) = Data Transfer Length */
+#define PARM_NPIV_DID	   3
 #define CLASS1             0	/* Class 1 */
 #define CLASS2             1	/* Class 2 */
 #define CLASS3             2	/* Class 3 */
@@ -2726,39 +3177,51 @@ typedef struct _IOCB {	/* IOCB structure */
 #define IOSTAT_RSVD2           0xC
 #define IOSTAT_RSVD3           0xD
 #define IOSTAT_RSVD4           0xE
-#define IOSTAT_RSVD5           0xF
+#define IOSTAT_NEED_BUFFER     0xF
 #define IOSTAT_DRIVER_REJECT   0x10   /* ulpStatus  - Driver defined */
 #define IOSTAT_DEFAULT         0xF    /* Same as rsvd5 for now */
 #define IOSTAT_CNT             0x11
 
 } IOCB_t;
 
+/* Structure used for a single HBQ entry */
+struct lpfc_hbq_entry {
+	struct ulp_bde64 bde;
+	uint32_t buffer_tag;
+};
+
 
 #define SLI1_SLIM_SIZE   (4 * 1024)
 
 /* Up to 498 IOCBs will fit into 16k
  * 256 (MAILBOX_t) + 140 (PCB_t) + ( 32 (IOCB_t) * 498 ) = < 16384
  */
-#define SLI2_SLIM_SIZE   (16 * 1024)
+#define SLI2_SLIM_SIZE   (64 * 1024)
 
 /* Maximum IOCBs that will fit in SLI2 slim */
 #define MAX_SLI2_IOCB    498
+#define MAX_SLIM_IOCB_SIZE (SLI2_SLIM_SIZE - \
+			    (sizeof(MAILBOX_t) + sizeof(PCB_t)))
+
+/* HBQ entries are 4 words each = 4k */
+#define LPFC_TOTAL_HBQ_SIZE (sizeof(struct lpfc_hbq_entry) *  \
+			     lpfc_sli_hbq_count())
 
 struct lpfc_sli2_slim {
 	MAILBOX_t mbx;
 	PCB_t pcb;
-	IOCB_t IOCBs[MAX_SLI2_IOCB];
+	IOCB_t IOCBs[MAX_SLIM_IOCB_SIZE];
 };
 
-/*******************************************************************
-This macro check PCI device to allow special handling for LC HBAs.
-
-Parameters:
-device : struct pci_dev 's device field
-
-return 1 => TRUE
-       0 => FALSE
- *******************************************************************/
+/*
+ * This function checks PCI device to allow special handling for LC HBAs.
+ *
+ * Parameters:
+ * device : struct pci_dev 's device field
+ *
+ * return 1 => TRUE
+ *        0 => FALSE
+ */
 static inline int
 lpfc_is_LC_HBA(unsigned short device)
 {
@@ -2774,3 +3237,16 @@ lpfc_is_LC_HBA(unsigned short device)
 	else
 		return 0;
 }
+
+/*
+ * Determine if an IOCB failed because of a link event or firmware reset.
+ */
+
+static inline int
+lpfc_error_lost_link(IOCB_t *iocbp)
+{
+	return (iocbp->ulpStatus == IOSTAT_LOCAL_REJECT &&
+		(iocbp->un.ulpWord[4] == IOERR_SLI_ABORTED ||
+		 iocbp->un.ulpWord[4] == IOERR_LINK_DOWN ||
+		 iocbp->un.ulpWord[4] == IOERR_SLI_DOWN));
+}
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index d939513..71c72d6 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -27,6 +27,7 @@
 #include <linux/kthread.h>
 #include <linux/pci.h>
 #include <linux/spinlock.h>
+#include <linux/ctype.h>
 
 #include <scsi/scsi.h>
 #include <scsi/scsi_device.h>
@@ -40,17 +41,38 @@
 #include "lpfc.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_crtn.h"
+#include "lpfc_vport.h"
 #include "lpfc_version.h"
+#include "lpfc_vport.h"
+#include "lpfc_auth_access.h"
 #include "lpfc_ioctl.h"
+#include "lpfc_security.h"
 #include "lpfc_compat.h"
 
+#include <net/sock.h>
+#include <linux/netlink.h>
+
+extern struct notifier_block lpfc_fc_netlink_notifier;
+extern char security_work_q_name[KOBJ_NAME_LEN];
+extern struct workqueue_struct *security_work_q;
+extern struct sock *fc_nl_sock;
+extern struct list_head fc_security_user_list;
+extern int fc_service_state;
+void lpfc_fc_sc_security_online(struct work_struct *work);
+void lpfc_fc_sc_security_offline(struct work_struct *work);
+void lpfc_fc_nl_rcv(struct sock *sk, int len);
+int lpfc_fc_queue_security_work(struct lpfc_vport *, struct work_struct *);
+int lpfc_fc_nl_rcv_nl_event(struct notifier_block *, unsigned long , void *);
 static int lpfc_parse_vpd(struct lpfc_hba *, uint8_t *, int);
 static void lpfc_get_hba_model_desc(struct lpfc_hba *, uint8_t *, uint8_t *);
 static int lpfc_post_rcv_buf(struct lpfc_hba *);
 
 static struct scsi_transport_template *lpfc_transport_template = NULL;
+static struct scsi_transport_template *lpfc_vport_transport_template = NULL;
 static DEFINE_IDR(lpfc_hba_index);
 
+
+
 /************************************************************************/
 /*                                                                      */
 /*    lpfc_config_port_prep                                             */
@@ -63,7 +85,7 @@ static DEFINE_IDR(lpfc_hba_index);
 /*                                                                      */
 /************************************************************************/
 int
-lpfc_config_port_prep(struct lpfc_hba * phba)
+lpfc_config_port_prep(struct lpfc_hba *phba)
 {
 	lpfc_vpd_t *vp = &phba->vpd;
 	int i = 0, rc;
@@ -77,12 +99,12 @@ lpfc_config_port_prep(struct lpfc_hba * phba)
 
 	pmb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 	if (!pmb) {
-		phba->hba_state = LPFC_HBA_ERROR;
+		phba->link_state = LPFC_HBA_ERROR;
 		return -ENOMEM;
 	}
 
 	mb = &pmb->mb;
-	phba->hba_state = LPFC_INIT_MBX_CMDS;
+	phba->link_state = LPFC_INIT_MBX_CMDS;
 
 	if (lpfc_is_LC_HBA(phba->pcidev->device)) {
 		if (init_key) {
@@ -102,36 +124,35 @@ lpfc_config_port_prep(struct lpfc_hba * phba)
 		rc = lpfc_sli_issue_mbox(phba, pmb, MBX_POLL);
 
 		if (rc != MBX_SUCCESS) {
-			lpfc_printf_log(phba,
-					KERN_ERR,
-					LOG_MBOX,
-					"%d:0324 Config Port initialization "
+			lpfc_printf_log(phba, KERN_ERR, LOG_MBOX,
+					"0324 Config Port initialization "
 					"error, mbxCmd x%x READ_NVPARM, "
 					"mbxStatus x%x\n",
-					phba->brd_no,
 					mb->mbxCommand, mb->mbxStatus);
 			mempool_free(pmb, phba->mbox_mem_pool);
 			return -ERESTART;
 		}
 		memcpy(phba->wwnn, (char *)mb->un.varRDnvp.nodename,
-		       sizeof (mb->un.varRDnvp.nodename));
+		       sizeof(phba->wwnn));
+		memcpy(phba->wwpn, (char *)mb->un.varRDnvp.portname,
+		       sizeof(phba->wwpn));
 	}
 
+	phba->sli3_options = 0x0;
+
 	/* Setup and issue mailbox READ REV command */
 	lpfc_read_rev(phba, pmb);
 	rc = lpfc_sli_issue_mbox(phba, pmb, MBX_POLL);
 	if (rc != MBX_SUCCESS) {
-		lpfc_printf_log(phba,
-				KERN_ERR,
-				LOG_INIT,
-				"%d:0439 Adapter failed to init, mbxCmd x%x "
+		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+				"0439 Adapter failed to init, mbxCmd x%x "
 				"READ_REV, mbxStatus x%x\n",
-				phba->brd_no,
 				mb->mbxCommand, mb->mbxStatus);
 		mempool_free( pmb, phba->mbox_mem_pool);
 		return -ERESTART;
 	}
 
+
 	/*
 	 * The value of rr must be 1 since the driver set the cv field to 1.
 	 * This setting requires the FW to set all revision fields.
@@ -139,15 +160,18 @@ lpfc_config_port_prep(struct lpfc_hba * phba)
 	if (mb->un.varRdRev.rr == 0) {
 		vp->rev.rBit = 0;
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-				"%d:0440 Adapter failed to init, READ_REV has "
-				"missing revision information.\n",
-				phba->brd_no);
+				"0440 Adapter failed to init, READ_REV has "
+				"missing revision information.\n");
 		mempool_free(pmb, phba->mbox_mem_pool);
 		return -ERESTART;
 	}
 
+	if (phba->sli_rev == 3 && !mb->un.varRdRev.v3rsp)
+		return -EINVAL;
+
 	/* Save information as VPD data */
 	vp->rev.rBit = 1;
+	memcpy(&vp->sli3Feat, &mb->un.varRdRev.sli3Feat, sizeof(uint32_t));
 	vp->rev.sli1FwRev = mb->un.varRdRev.sli1FwRev;
 	memcpy(vp->rev.sli1FwName, (char*) mb->un.varRdRev.sli1FwName, 16);
 	vp->rev.sli2FwRev = mb->un.varRdRev.sli2FwRev;
@@ -163,6 +187,13 @@ lpfc_config_port_prep(struct lpfc_hba * phba)
 	vp->rev.postKernRev = mb->un.varRdRev.postKernRev;
 	vp->rev.opFwRev = mb->un.varRdRev.opFwRev;
 
+	/* If the sli feature level is less than 9, we must
+	 * tear down all RPIs and VPIs on link down if NPIV
+	 * is enabled.
+	 */
+	if (vp->rev.feaLevelHigh < 9)
+		phba->sli3_options |= LPFC_SLI3_VPORT_TEARDOWN;
+
 	if (lpfc_is_LC_HBA(phba->pcidev->device))
 		memcpy(phba->RandomData, (char *)&mb->un.varWords[24],
 						sizeof (phba->RandomData));
@@ -181,16 +212,15 @@ lpfc_config_port_prep(struct lpfc_hba * phba)
 
 		if (rc != MBX_SUCCESS) {
 			lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
-					"%d:0441 VPD not present on adapter, "
+					"0441 VPD not present on adapter, "
 					"mbxCmd x%x DUMP VPD, mbxStatus x%x\n",
-					phba->brd_no,
 					mb->mbxCommand, mb->mbxStatus);
 			mb->un.varDmp.word_cnt = 0;
 		}
 		if (mb->un.varDmp.word_cnt > DMP_VPD_SIZE - offset)
 			mb->un.varDmp.word_cnt = DMP_VPD_SIZE - offset;
 		lpfc_sli_pcimem_bcopy(pmb->context2, lpfc_vpd_data + offset,
-							mb->un.varDmp.word_cnt);
+				      mb->un.varDmp.word_cnt);
 		offset += mb->un.varDmp.word_cnt;
 	} while (mb->un.varDmp.word_cnt && offset < DMP_VPD_SIZE);
 	lpfc_parse_vpd(phba, lpfc_vpd_data, offset);
@@ -203,6 +233,18 @@ out_free_mbox:
 	return 0;
 }
 
+/* Completion handler for config async event mailbox command. */
+static void
+lpfc_config_async_cmpl(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq)
+{
+	if (pmboxq->mb.mbxStatus == MBX_SUCCESS)
+		phba->temp_sensor_support = 1;
+	else
+		phba->temp_sensor_support = 0;
+	mempool_free(pmboxq, phba->mbox_mem_pool);
+	return;
+}
+
 /************************************************************************/
 /*                                                                      */
 /*    lpfc_config_port_post                                             */
@@ -214,48 +256,42 @@ out_free_mbox:
 /*                                                                      */
 /************************************************************************/
 int
-lpfc_config_port_post(struct lpfc_hba * phba)
+lpfc_config_port_post(struct lpfc_hba *phba)
 {
+	struct lpfc_vport *vport = phba->pport;
 	LPFC_MBOXQ_t *pmb;
 	MAILBOX_t *mb;
 	struct lpfc_dmabuf *mp;
 	struct lpfc_sli *psli = &phba->sli;
 	uint32_t status, timeout;
-	int i, j, rc;
+	int i, j;
+	int rc;
+
+	spin_lock_irq(&phba->hbalock);
+	/*
+	 * If the Config port completed correctly the HBA is not
+	 * over heated any more.
+	 */
+	if (phba->over_temp_state == HBA_OVER_TEMP)
+		phba->over_temp_state = HBA_NORMAL_TEMP;
+	spin_unlock_irq(&phba->hbalock);
 
 	pmb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 	if (!pmb) {
-		phba->hba_state = LPFC_HBA_ERROR;
+		phba->link_state = LPFC_HBA_ERROR;
 		return -ENOMEM;
 	}
 	mb = &pmb->mb;
 
-	lpfc_config_link(phba, pmb);
-	rc = lpfc_sli_issue_mbox(phba, pmb, MBX_POLL);
-	if (rc != MBX_SUCCESS) {
-		lpfc_printf_log(phba,
-				KERN_ERR,
-				LOG_INIT,
-				"%d:0447 Adapter failed init, mbxCmd x%x "
-				"CONFIG_LINK mbxStatus x%x\n",
-				phba->brd_no,
-				mb->mbxCommand, mb->mbxStatus);
-		phba->hba_state = LPFC_HBA_ERROR;
-		mempool_free( pmb, phba->mbox_mem_pool);
-		return -EIO;
-	}
-
 	/* Get login parameters for NID.  */
-	lpfc_read_sparam(phba, pmb);
+	lpfc_read_sparam(phba, pmb, 0);
+	pmb->vport = vport;
 	if (lpfc_sli_issue_mbox(phba, pmb, MBX_POLL) != MBX_SUCCESS) {
-		lpfc_printf_log(phba,
-				KERN_ERR,
-				LOG_INIT,
-				"%d:0448 Adapter failed init, mbxCmd x%x "
+		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+				"0448 Adapter failed init, mbxCmd x%x "
 				"READ_SPARM mbxStatus x%x\n",
-				phba->brd_no,
 				mb->mbxCommand, mb->mbxStatus);
-		phba->hba_state = LPFC_HBA_ERROR;
+		phba->link_state = LPFC_HBA_ERROR;
 		mp = (struct lpfc_dmabuf *) pmb->context1;
 		mempool_free( pmb, phba->mbox_mem_pool);
 		lpfc_mbuf_free(phba, mp->virt, mp->phys);
@@ -265,25 +301,27 @@ lpfc_config_port_post(struct lpfc_hba * phba)
 
 	mp = (struct lpfc_dmabuf *) pmb->context1;
 
-	memcpy(&phba->fc_sparam, mp->virt, sizeof (struct serv_parm));
+	memcpy(&vport->fc_sparam, mp->virt, sizeof (struct serv_parm));
 	lpfc_mbuf_free(phba, mp->virt, mp->phys);
 	kfree(mp);
 	pmb->context1 = NULL;
 
 	if (phba->cfg_soft_wwnn)
-		u64_to_wwn(phba->cfg_soft_wwnn, phba->fc_sparam.nodeName.u.wwn);
+		u64_to_wwn(phba->cfg_soft_wwnn,
+			   vport->fc_sparam.nodeName.u.wwn);
 	if (phba->cfg_soft_wwpn)
-		u64_to_wwn(phba->cfg_soft_wwpn, phba->fc_sparam.portName.u.wwn);
-	memcpy(&phba->fc_nodename, &phba->fc_sparam.nodeName,
+		u64_to_wwn(phba->cfg_soft_wwpn,
+			   vport->fc_sparam.portName.u.wwn);
+	memcpy(&vport->fc_nodename, &vport->fc_sparam.nodeName,
 	       sizeof (struct lpfc_name));
-	memcpy(&phba->fc_portname, &phba->fc_sparam.portName,
+	memcpy(&vport->fc_portname, &vport->fc_sparam.portName,
 	       sizeof (struct lpfc_name));
 	/* If no serial number in VPD data, use low 6 bytes of WWNN */
 	/* This should be consolidated into parse_vpd ? - mr */
 	if (phba->SerialNumber[0] == 0) {
 		uint8_t *outptr;
 
-		outptr = &phba->fc_nodename.u.s.IEEE[0];
+		outptr = &vport->fc_nodename.u.s.IEEE[0];
 		for (i = 0; i < 12; i++) {
 			status = *outptr++;
 			j = ((status & 0xf0) >> 4);
@@ -305,15 +343,13 @@ lpfc_config_port_post(struct lpfc_hba * phba)
 	}
 
 	lpfc_read_config(phba, pmb);
+	pmb->vport = vport;
 	if (lpfc_sli_issue_mbox(phba, pmb, MBX_POLL) != MBX_SUCCESS) {
-		lpfc_printf_log(phba,
-				KERN_ERR,
-				LOG_INIT,
-				"%d:0453 Adapter failed to init, mbxCmd x%x "
+		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+				"0453 Adapter failed to init, mbxCmd x%x "
 				"READ_CONFIG, mbxStatus x%x\n",
-				phba->brd_no,
 				mb->mbxCommand, mb->mbxStatus);
-		phba->hba_state = LPFC_HBA_ERROR;
+		phba->link_state = LPFC_HBA_ERROR;
 		mempool_free( pmb, phba->mbox_mem_pool);
 		return -EIO;
 	}
@@ -340,19 +376,16 @@ lpfc_config_port_post(struct lpfc_hba * phba)
 	    || ((phba->cfg_link_speed == LINK_SPEED_10G)
 		&& !(phba->lmt & LMT_10Gb))) {
 		/* Reset link speed to auto */
-		lpfc_printf_log(phba,
-			KERN_WARNING,
-			LOG_LINK_EVENT,
-			"%d:1302 Invalid speed for this board: "
+		lpfc_printf_log(phba, KERN_WARNING, LOG_LINK_EVENT,
+			"1302 Invalid speed for this board: "
 			"Reset link speed to auto: x%x\n",
-			phba->brd_no,
 			phba->cfg_link_speed);
 			phba->cfg_link_speed = LINK_SPEED_AUTO;
 	}
 
-	phba->hba_state = LPFC_LINK_DOWN;
+	phba->link_state = LPFC_LINK_DOWN;
 
-	/* Only process IOCBs on ring 0 till hba_state is READY */
+	/* Only process IOCBs on ELS ring till hba_state is READY */
 	if (psli->ring[psli->extra_ring].cmdringaddr)
 		psli->ring[psli->extra_ring].flag |= LPFC_STOP_IOCB_EVENT;
 	if (psli->ring[psli->fcp_ring].cmdringaddr)
@@ -361,10 +394,11 @@ lpfc_config_port_post(struct lpfc_hba * phba)
 		psli->ring[psli->next_ring].flag |= LPFC_STOP_IOCB_EVENT;
 
 	/* Post receive buffers for desired rings */
-	lpfc_post_rcv_buf(phba);
+	if (phba->sli_rev != 3)
+		lpfc_post_rcv_buf(phba);
 
 	/* Enable appropriate host interrupts */
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	status = readl(phba->HCregaddr);
 	status |= HC_MBINT_ENA | HC_ERINT_ENA | HC_LAINT_ENA;
 	if (psli->num_rings > 0)
@@ -382,25 +416,40 @@ lpfc_config_port_post(struct lpfc_hba * phba)
 
 	writel(status, phba->HCregaddr);
 	readl(phba->HCregaddr); /* flush */
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	/*
 	 * Setup the ring 0 (els)  timeout handler
 	 */
 	timeout = phba->fc_ratov << 1;
-	mod_timer(&phba->els_tmofunc, jiffies + HZ * timeout);
+	mod_timer(&vport->els_tmofunc, jiffies + HZ * timeout);
+	mod_timer(&phba->hb_tmofunc, jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
+	phba->hb_outstanding = 0;
+	phba->last_completion_time = jiffies;
+
+	if (vport->cfg_enable_auth) {
+		lpfc_security_wait();
+		if (lpfc_security_service_state == SECURITY_OFFLINE) {
+			lpfc_printf_log(vport->phba, KERN_ERR, LOG_SECURITY,
+				"1029 Authentication is enabled but security "
+				"service is not running!\n");
+			vport->auth.auth_mode = FC_AUTHMODE_UNKNOWN;
+			phba->link_state = LPFC_HBA_ERROR;
+			mempool_free( pmb, phba->mbox_mem_pool);
+			return 0;
+		}
+	}
 
 	lpfc_init_link(phba, pmb, phba->cfg_topology, phba->cfg_link_speed);
 	pmb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-	rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
 	lpfc_set_loopback_flag(phba);
+	rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
 	if (rc != MBX_SUCCESS) {
 		lpfc_printf_log(phba,
 				KERN_ERR,
 				LOG_INIT,
-				"%d:0454 Adapter failed to init, mbxCmd x%x "
+				"1031 Adapter failed to init, mbxCmd x%x "
 				"INIT_LINK, mbxStatus x%x\n",
-				phba->brd_no,
 				mb->mbxCommand, mb->mbxStatus);
 
 		/* Clear all interrupt enable conditions */
@@ -410,40 +459,28 @@ lpfc_config_port_post(struct lpfc_hba * phba)
 		writel(0xffffffff, phba->HAregaddr);
 		readl(phba->HAregaddr); /* flush */
 
-		phba->hba_state = LPFC_HBA_ERROR;
+		phba->link_state = LPFC_HBA_ERROR;
 		if (rc != MBX_BUSY)
 			mempool_free(pmb, phba->mbox_mem_pool);
 		return -EIO;
 	}
-	/* MBOX buffer will be freed in mbox compl */
 
-	return (0);
-}
-
-static int
-lpfc_discovery_wait(struct lpfc_hba *phba)
-{
-	int i = 0;
-
-	while ((phba->hba_state != LPFC_HBA_READY) ||
-	       (phba->num_disc_nodes) || (phba->fc_prli_sent) ||
-	       ((phba->fc_map_cnt == 0) && (i<2)) ||
-	       (phba->sli.sli_flag & LPFC_SLI_MBOX_ACTIVE)) {
-		/* Check every second for 30 retries. */
-		i++;
-		if (i > 30) {
-			return -ETIMEDOUT;
-		}
-		if ((i >= 15) && (phba->hba_state <= LPFC_LINK_DOWN)) {
-			/* The link is down.  Set linkdown timeout */
-			return -ETIMEDOUT;
-		}
-
-		/* Delay for 1 second to give discovery time to complete. */
-		msleep(1000);
+	/* MBOX buffer will be freed in mbox compl */
+	pmb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	lpfc_config_async(phba, pmb, LPFC_ELS_RING);
+	pmb->mbox_cmpl = lpfc_config_async_cmpl;
+	pmb->vport = phba->pport;
+	rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
 
+	if ((rc != MBX_BUSY) && (rc != MBX_SUCCESS)) {
+		lpfc_printf_log(phba,
+				KERN_ERR,
+				LOG_INIT,
+				"0456 Adapter failed to issue "
+				"ASYNCEVT_ENABLE mbox status x%x \n.",
+				rc);
+		mempool_free(pmb, phba->mbox_mem_pool);
 	}
-
 	return 0;
 }
 
@@ -458,18 +495,14 @@ lpfc_discovery_wait(struct lpfc_hba *phba)
 /*                                                                      */
 /************************************************************************/
 int
-lpfc_hba_down_prep(struct lpfc_hba * phba)
+lpfc_hba_down_prep(struct lpfc_hba *phba)
 {
 	/* Disable interrupts */
 	writel(0, phba->HCregaddr);
 	readl(phba->HCregaddr); /* flush */
 
-	/* Cleanup potential discovery resources */
-	lpfc_els_flush_rscn(phba);
-	lpfc_els_flush_cmd(phba);
-	lpfc_disc_flush_list(phba);
-
-	return (0);
+	lpfc_cleanup_discovery_resources(phba->pport);
+	return 0;
 }
 
 /************************************************************************/
@@ -482,20 +515,24 @@ lpfc_hba_down_prep(struct lpfc_hba * phba)
 /*                                                                      */
 /************************************************************************/
 int
-lpfc_hba_down_post(struct lpfc_hba * phba)
+lpfc_hba_down_post(struct lpfc_hba *phba)
 {
 	struct lpfc_sli *psli = &phba->sli;
 	struct lpfc_sli_ring *pring;
 	struct lpfc_dmabuf *mp, *next_mp;
 	int i;
 
-	/* Cleanup preposted buffers on the ELS ring */
-	pring = &psli->ring[LPFC_ELS_RING];
-	list_for_each_entry_safe(mp, next_mp, &pring->postbufq, list) {
-		list_del(&mp->list);
-		pring->postbufq_cnt--;
-		lpfc_mbuf_free(phba, mp->virt, mp->phys);
-		kfree(mp);
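+	/* SLI-3 keeps receive buffers in HBQs, so free all HBQ buffers;
+	 * otherwise clean up the buffers preposted on the ELS ring.
+	 */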
+	if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED)
+		lpfc_sli_hbqbuf_free_all(phba);
+	else {
+		/* Cleanup preposted buffers on the ELS ring */
+		pring = &psli->ring[LPFC_ELS_RING];
+		list_for_each_entry_safe(mp, next_mp, &pring->postbufq, list) {
+			list_del(&mp->list);
+			pring->postbufq_cnt--;
+			lpfc_mbuf_free(phba, mp->virt, mp->phys);
+			kfree(mp);
+		}
 	}
 
 	for (i = 0; i < psli->num_rings; i++) {
@@ -506,6 +543,119 @@ lpfc_hba_down_post(struct lpfc_hba * phba)
 	return 0;
 }
 
+/* HBA heart beat timeout handler */
+void
+lpfc_hb_timeout(unsigned long ptr)
+{
+	struct lpfc_hba *phba;
+	unsigned long iflag;
+
+	phba = (struct lpfc_hba *)ptr;
+	spin_lock_irqsave(&phba->pport->work_port_lock, iflag);
+	if (!(phba->pport->work_port_events & WORKER_HB_TMO))
+		phba->pport->work_port_events |= WORKER_HB_TMO;
+	spin_unlock_irqrestore(&phba->pport->work_port_lock, iflag);
+
+	if (phba->work_wait)
+		wake_up(phba->work_wait);
+	return;
+}
+
+static void
+lpfc_hb_mbox_cmpl(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq)
+{
+	unsigned long drvr_flag;
+
+	spin_lock_irqsave(&phba->hbalock, drvr_flag);
+	phba->hb_outstanding = 0;
+	spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
+
+	mempool_free(pmboxq, phba->mbox_mem_pool);
+	if (!(phba->pport->fc_flag & FC_OFFLINE_MODE) &&
+		!(phba->link_state == LPFC_HBA_ERROR) &&
+		!(phba->pport->load_flag & FC_UNLOADING))
+		mod_timer(&phba->hb_tmofunc,
+			jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
+	return;
+}
+
+void
+lpfc_hb_timeout_handler(struct lpfc_hba *phba)
+{
+	LPFC_MBOXQ_t *pmboxq;
+	int retval;
+	struct lpfc_sli *psli = &phba->sli;
+
+	if ((phba->link_state == LPFC_HBA_ERROR) ||
+		(phba->pport->load_flag & FC_UNLOADING) ||
+		(phba->pport->fc_flag & FC_OFFLINE_MODE))
+		return;
+
+	spin_lock_irq(&phba->pport->work_port_lock);
+	/* If the timer is already canceled do nothing */
+	if (!(phba->pport->work_port_events & WORKER_HB_TMO)) {
+		spin_unlock_irq(&phba->pport->work_port_lock);
+		return;
+	}
+
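+	/* If a command completed within the last heartbeat interval, the
+	 * HBA is known to be responsive; skip issuing a heartbeat mailbox
+	 * command and simply rearm the timer.
+	 */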
+	if (time_after(phba->last_completion_time + LPFC_HB_MBOX_INTERVAL * HZ,
+		jiffies)) {
+		spin_unlock_irq(&phba->pport->work_port_lock);
+		if (!phba->hb_outstanding)
+			mod_timer(&phba->hb_tmofunc,
+				jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
+		else
+			mod_timer(&phba->hb_tmofunc,
+				jiffies + HZ * LPFC_HB_MBOX_TIMEOUT);
+		return;
+	}
+	spin_unlock_irq(&phba->pport->work_port_lock);
+
+	/* If there is no heart beat outstanding, issue a heartbeat command */
+	if (!phba->hb_outstanding) {
+		pmboxq = mempool_alloc(phba->mbox_mem_pool,GFP_KERNEL);
+		if (!pmboxq) {
+			mod_timer(&phba->hb_tmofunc,
+				jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
+			return;
+		}
+
+		lpfc_heart_beat(phba, pmboxq);
+		pmboxq->mbox_cmpl = lpfc_hb_mbox_cmpl;
+		pmboxq->vport = phba->pport;
+		retval = lpfc_sli_issue_mbox(phba, pmboxq, MBX_NOWAIT);
+
+		if (retval != MBX_BUSY && retval != MBX_SUCCESS) {
+			mempool_free(pmboxq, phba->mbox_mem_pool);
+			mod_timer(&phba->hb_tmofunc,
+				jiffies + HZ * LPFC_HB_MBOX_INTERVAL);
+			return;
+		}
+		mod_timer(&phba->hb_tmofunc,
+			jiffies + HZ * LPFC_HB_MBOX_TIMEOUT);
+		phba->hb_outstanding = 1;
+		return;
+	} else {
+		/*
+		 * If heart beat timeout called with hb_outstanding set we
+		 * need to take the HBA offline.
+		 */
+		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+				"0459 Adapter heartbeat failure, taking "
+				"this port offline.\n");
+
+		spin_lock_irq(&phba->hbalock);
+		psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
+		spin_unlock_irq(&phba->hbalock);
+
+		lpfc_offline_prep(phba);
+		lpfc_offline(phba);
+		lpfc_unblock_mgmt_io(phba);
+		phba->link_state = LPFC_HBA_ERROR;
+		lpfc_hba_down_post(phba);
+	}
+}
+
 /************************************************************************/
 /*                                                                      */
 /*    lpfc_handle_eratt                                                 */
@@ -515,24 +665,41 @@ lpfc_hba_down_post(struct lpfc_hba * phba)
 /*                                                                      */
 /************************************************************************/
 void
-lpfc_handle_eratt(struct lpfc_hba * phba)
+lpfc_handle_eratt(struct lpfc_hba *phba)
 {
-	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_vport *vport = phba->pport;
+	struct lpfc_sli   *psli = &phba->sli;
 	struct lpfc_sli_ring  *pring;
+	struct lpfc_vport **vports;
 	uint32_t event_data;
+	unsigned long temperature;
+	struct temp_event temp_event_data;
+	struct Scsi_Host  *shost;
+	int i;
+
 
 	if (phba->work_hs & HS_FFER6 ||
 	    phba->work_hs & HS_FFER5) {
 		/* Re-establishing Link */
 		lpfc_printf_log(phba, KERN_INFO, LOG_LINK_EVENT,
-				"%d:1301 Re-establishing Link "
+				"1301 Re-establishing Link "
 				"Data: x%x x%x x%x\n",
-				phba->brd_no, phba->work_hs,
+				phba->work_hs,
 				phba->work_status[0], phba->work_status[1]);
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag |= FC_ESTABLISH_LINK;
+		vports = lpfc_create_vport_work_array(phba);
+		if (vports != NULL)
+			for(i = 0;
+			    i < LPFC_MAX_VPORTS && vports[i] != NULL;
+			    i++){
+				shost = lpfc_shost_from_vport(vports[i]);
+				spin_lock_irq(shost->host_lock);
+				vports[i]->fc_flag |= FC_ESTABLISH_LINK;
+				spin_unlock_irq(shost->host_lock);
+			}
+		lpfc_destroy_vport_work_array(vports);
+		spin_lock_irq(&phba->hbalock);
 		psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(&phba->hbalock);
 
 		/*
 		* Firmware stops when it triggled erratt with HS_FFER6.
@@ -557,27 +724,60 @@ lpfc_handle_eratt(struct lpfc_hba * phba)
 			return;
 		}
 		lpfc_unblock_mgmt_io(phba);
+	} else if (phba->work_hs & HS_CRIT_TEMP) {
+		temperature = readl(phba->MBslimaddr + TEMPERATURE_OFFSET);
+		temp_event_data.event_type = FC_REG_TEMPERATURE_EVENT;
+		temp_event_data.event_code = LPFC_CRIT_TEMP;
+		temp_event_data.data = (uint32_t)temperature;
+
+		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+				"0459 Adapter maximum temperature exceeded "
+				"(%ld), taking this port offline "
+				"Data: x%x x%x x%x\n",
+				temperature, phba->work_hs,
+				phba->work_status[0], phba->work_status[1]);
+
+		shost = lpfc_shost_from_vport(phba->pport);
+		fc_host_post_vendor_event(shost, fc_get_event_number(),
+					  sizeof(temp_event_data),
+					  (char *) &temp_event_data,
+					  SCSI_NL_VID_TYPE_PCI
+					  | PCI_VENDOR_ID_EMULEX);
+
+		spin_lock_irq(&phba->hbalock);
+		psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
+		phba->over_temp_state = HBA_OVER_TEMP;
+		spin_unlock_irq(&phba->hbalock);
+		lpfc_offline_prep(phba);
+		lpfc_offline(phba);
+		lpfc_unblock_mgmt_io(phba);
+		phba->link_state = LPFC_HBA_ERROR;
+		lpfc_hba_down_post(phba);
+
 	} else {
 		/* The if clause above forces this code path when the status
 		 * failure is a value other than FFER6.  Do not call the offline
 		 *  twice. This is the adapter hardware error path.
 		 */
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-				"%d:0457 Adapter Hardware Error "
+				"0457 Adapter Hardware Error "
 				"Data: x%x x%x x%x\n",
-				phba->brd_no, phba->work_hs,
+				phba->work_hs,
 				phba->work_status[0], phba->work_status[1]);
 
 		event_data = FC_REG_DUMP_EVENT;
-		fc_host_post_vendor_event(phba->host, fc_get_event_number(),
+		shost = lpfc_shost_from_vport(vport);
+		fc_host_post_vendor_event(shost, fc_get_event_number(),
 				sizeof(event_data), (char *) &event_data,
 				SCSI_NL_VID_TYPE_PCI | PCI_VENDOR_ID_EMULEX);
 
+		spin_lock_irq(&phba->hbalock);
 		psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
+		spin_unlock_irq(&phba->hbalock);
 		lpfc_offline_prep(phba);
 		lpfc_offline(phba);
 		lpfc_unblock_mgmt_io(phba);
-		phba->hba_state = LPFC_HBA_ERROR;
+		phba->link_state = LPFC_HBA_ERROR;
 		lpfc_hba_down_post(phba);
 	}
 }
@@ -591,9 +791,10 @@ lpfc_handle_eratt(struct lpfc_hba * phba)
 /*                                                                      */
 /************************************************************************/
 void
-lpfc_handle_latt(struct lpfc_hba * phba)
+lpfc_handle_latt(struct lpfc_hba *phba)
 {
-	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_vport *vport = phba->pport;
+	struct lpfc_sli   *psli = &phba->sli;
 	LPFC_MBOXQ_t *pmb;
 	volatile uint32_t control;
 	struct lpfc_dmabuf *mp;
@@ -614,20 +815,21 @@ lpfc_handle_latt(struct lpfc_hba * phba)
 	rc = -EIO;
 
 	/* Cleanup any outstanding ELS commands */
-	lpfc_els_flush_cmd(phba);
+	lpfc_els_flush_all_cmd(phba);
 
 	psli->slistat.link_event++;
 	lpfc_read_la(phba, pmb, mp);
 	pmb->mbox_cmpl = lpfc_mbx_cmpl_read_la;
-	rc = lpfc_sli_issue_mbox (phba, pmb, (MBX_NOWAIT | MBX_STOP_IOCB));
+	pmb->vport = vport;
+	rc = lpfc_sli_issue_mbox (phba, pmb, MBX_NOWAIT);
 	if (rc == MBX_NOT_FINISHED)
 		goto lpfc_handle_latt_free_mbuf;
 
 	/* Clear Link Attention in HA REG */
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	writel(HA_LATT, phba->HAregaddr);
 	readl(phba->HAregaddr); /* flush */
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	return;
 
@@ -639,7 +841,7 @@ lpfc_handle_latt_free_pmb:
 	mempool_free(pmb, phba->mbox_mem_pool);
 lpfc_handle_latt_err_exit:
 	/* Enable Link attention interrupts */
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	psli->sli_flag |= LPFC_PROCESS_LA;
 	control = readl(phba->HCregaddr);
 	control |= HC_LAINT_ENA;
@@ -649,17 +851,14 @@ lpfc_handle_latt_err_exit:
 	/* Clear Link Attention in HA REG */
 	writel(HA_LATT, phba->HAregaddr);
 	readl(phba->HAregaddr); /* flush */
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 	lpfc_linkdown(phba);
-	phba->hba_state = LPFC_HBA_ERROR;
+	phba->link_state = LPFC_HBA_ERROR;
 
 	/* The other case is an error from issue_mbox */
 	if (rc == -ENOMEM)
-		lpfc_printf_log(phba,
-				KERN_WARNING,
-				LOG_MBOX,
-			        "%d:0300 READ_LA: no buffers\n",
-				phba->brd_no);
+		lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX,
+			        "0300 READ_LA: no buffers\n");
 
 	return;
 }
@@ -671,10 +870,10 @@ lpfc_handle_latt_err_exit:
 /*                                                                      */
 /************************************************************************/
 static int
-lpfc_parse_vpd(struct lpfc_hba * phba, uint8_t * vpd, int len)
+lpfc_parse_vpd(struct lpfc_hba *phba, uint8_t *vpd, int len)
 {
 	uint8_t lenlo, lenhi;
-	uint32_t Length;
+	int Length;
 	int i, j;
 	int finished = 0;
 	int index = 0;
@@ -683,11 +882,8 @@ lpfc_parse_vpd(struct lpfc_hba * phba, uint8_t * vpd, int len)
 		return 0;
 
 	/* Vital Product */
-	lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_INIT,
-			"%d:0455 Vital Product Data: x%x x%x x%x x%x\n",
-			phba->brd_no,
+	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+			"0455 Vital Product Data: x%x x%x x%x x%x\n",
 			(uint32_t) vpd[0], (uint32_t) vpd[1], (uint32_t) vpd[2],
 			(uint32_t) vpd[3]);
 	while (!finished && (index < (len - 4))) {
@@ -810,7 +1006,7 @@ lpfc_parse_vpd(struct lpfc_hba * phba, uint8_t * vpd, int len)
 }
 
 static void
-lpfc_get_hba_model_desc(struct lpfc_hba * phba, uint8_t * mdp, uint8_t * descp)
+lpfc_get_hba_model_desc(struct lpfc_hba *phba, uint8_t *mdp, uint8_t *descp)
 {
 	lpfc_vpd_t *vp;
 	uint16_t dev_id = phba->pcidev->device;
@@ -968,7 +1164,7 @@ lpfc_get_hba_model_desc(struct lpfc_hba * phba, uint8_t * mdp, uint8_t * descp)
 /*   Returns the number of buffers NOT posted.    */
 /**************************************************/
 int
-lpfc_post_buffer(struct lpfc_hba * phba, struct lpfc_sli_ring * pring, int cnt,
+lpfc_post_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, int cnt,
 		 int type)
 {
 	IOCB_t *icmd;
@@ -980,9 +1176,7 @@ lpfc_post_buffer(struct lpfc_hba * phba, struct lpfc_sli_ring * pring, int cnt,
 	/* While there are buffers to post */
 	while (cnt > 0) {
 		/* Allocate buffer for  command iocb */
-		spin_lock_irq(phba->host->host_lock);
 		iocb = lpfc_sli_get_iocbq(phba);
-		spin_unlock_irq(phba->host->host_lock);
 		if (iocb == NULL) {
 			pring->missbufcnt = cnt;
 			return cnt;
@@ -997,9 +1191,7 @@ lpfc_post_buffer(struct lpfc_hba * phba, struct lpfc_sli_ring * pring, int cnt,
 						&mp1->phys);
 		if (mp1 == 0 || mp1->virt == 0) {
 			kfree(mp1);
-			spin_lock_irq(phba->host->host_lock);
 			lpfc_sli_release_iocbq(phba, iocb);
-			spin_unlock_irq(phba->host->host_lock);
 			pring->missbufcnt = cnt;
 			return cnt;
 		}
@@ -1015,9 +1207,7 @@ lpfc_post_buffer(struct lpfc_hba * phba, struct lpfc_sli_ring * pring, int cnt,
 				kfree(mp2);
 				lpfc_mbuf_free(phba, mp1->virt, mp1->phys);
 				kfree(mp1);
-				spin_lock_irq(phba->host->host_lock);
 				lpfc_sli_release_iocbq(phba, iocb);
-				spin_unlock_irq(phba->host->host_lock);
 				pring->missbufcnt = cnt;
 				return cnt;
 			}
@@ -1043,7 +1233,6 @@ lpfc_post_buffer(struct lpfc_hba * phba, struct lpfc_sli_ring * pring, int cnt,
 		icmd->ulpCommand = CMD_QUE_RING_BUF64_CN;
 		icmd->ulpLe = 1;
 
-		spin_lock_irq(phba->host->host_lock);
 		if (lpfc_sli_issue_iocb(phba, pring, iocb, 0) == IOCB_ERROR) {
 			lpfc_mbuf_free(phba, mp1->virt, mp1->phys);
 			kfree(mp1);
@@ -1055,14 +1244,11 @@ lpfc_post_buffer(struct lpfc_hba * phba, struct lpfc_sli_ring * pring, int cnt,
 			}
 			lpfc_sli_release_iocbq(phba, iocb);
 			pring->missbufcnt = cnt;
-			spin_unlock_irq(phba->host->host_lock);
 			return cnt;
 		}
-		spin_unlock_irq(phba->host->host_lock);
 		lpfc_sli_ringpostbuf_put(phba, pring, mp1);
-		if (mp2) {
+		if (mp2)
 			lpfc_sli_ringpostbuf_put(phba, pring, mp2);
-		}
 	}
 	pring->missbufcnt = 0;
 	return 0;
@@ -1075,7 +1261,7 @@ lpfc_post_buffer(struct lpfc_hba * phba, struct lpfc_sli_ring * pring, int cnt,
 /*                                                                      */
 /************************************************************************/
 static int
-lpfc_post_rcv_buf(struct lpfc_hba * phba)
+lpfc_post_rcv_buf(struct lpfc_hba *phba)
 {
 	struct lpfc_sli *psli = &phba->sli;
 
@@ -1176,7 +1362,7 @@ lpfc_hba_init(struct lpfc_hba *phba, uint32_t *hbainit)
 {
 	int t;
 	uint32_t *HashWorking;
-	uint32_t *pwwnn = phba->wwnn;
+	uint32_t *pwwnn = (uint32_t *) phba->wwnn;
 
 	HashWorking = kmalloc(80 * sizeof(uint32_t), GFP_KERNEL);
 	if (!HashWorking)
@@ -1194,134 +1380,104 @@ lpfc_hba_init(struct lpfc_hba *phba, uint32_t *hbainit)
 	kfree(HashWorking);
 }
 
-static void
-lpfc_cleanup(struct lpfc_hba * phba)
+void
+lpfc_cleanup(struct lpfc_vport *vport)
 {
+	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_nodelist *ndlp, *next_ndlp;
+	int i = 0;
 
-	/* clean up phba - lpfc specific */
-	lpfc_can_disctmo(phba);
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_nlpunmap_list,
-				nlp_listp) {
-		lpfc_nlp_remove(phba, ndlp);
-	}
-
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_nlpmap_list,
-				 nlp_listp) {
-		lpfc_nlp_remove(phba, ndlp);
-	}
-
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_unused_list,
-				nlp_listp) {
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
-	}
+	if (phba->link_state > LPFC_LINK_DOWN)
+		lpfc_port_link_failure(vport);
 
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_plogi_list,
-				nlp_listp) {
-		lpfc_nlp_remove(phba, ndlp);
+	list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) {
+		if (ndlp->nlp_type & NLP_FABRIC)
+			lpfc_disc_state_machine(vport, ndlp, NULL,
+					NLP_EVT_DEVICE_RECOVERY);
+		lpfc_disc_state_machine(vport, ndlp, NULL,
+					     NLP_EVT_DEVICE_RM);
 	}
 
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_adisc_list,
-				nlp_listp) {
-		lpfc_nlp_remove(phba, ndlp);
-	}
-
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_reglogin_list,
-				nlp_listp) {
-		lpfc_nlp_remove(phba, ndlp);
-	}
+	/* At this point, ALL ndlp's should be gone
+	 * because of the previous NLP_EVT_DEVICE_RM.
+	 * Let's wait for this to happen, if needed.
+	 */
+	while (!list_empty(&vport->fc_nodes)) {
 
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_prli_list,
-				nlp_listp) {
-		lpfc_nlp_remove(phba, ndlp);
-	}
+		if (i++ > 3000) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				"0233 Nodelist not empty\n");
+			break;
+		}
 
-	list_for_each_entry_safe(ndlp, next_ndlp, &phba->fc_npr_list,
-				nlp_listp) {
-		lpfc_nlp_remove(phba, ndlp);
+		/* Wait for any activity on ndlps to settle */
+		msleep(10);
 	}
-
-	INIT_LIST_HEAD(&phba->fc_nlpmap_list);
-	INIT_LIST_HEAD(&phba->fc_nlpunmap_list);
-	INIT_LIST_HEAD(&phba->fc_unused_list);
-	INIT_LIST_HEAD(&phba->fc_plogi_list);
-	INIT_LIST_HEAD(&phba->fc_adisc_list);
-	INIT_LIST_HEAD(&phba->fc_reglogin_list);
-	INIT_LIST_HEAD(&phba->fc_prli_list);
-	INIT_LIST_HEAD(&phba->fc_npr_list);
-
-	phba->fc_map_cnt   = 0;
-	phba->fc_unmap_cnt = 0;
-	phba->fc_plogi_cnt = 0;
-	phba->fc_adisc_cnt = 0;
-	phba->fc_reglogin_cnt = 0;
-	phba->fc_prli_cnt  = 0;
-	phba->fc_npr_cnt   = 0;
-	phba->fc_unused_cnt= 0;
 	return;
 }
 
 static void
 lpfc_establish_link_tmo(unsigned long ptr)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba *)ptr;
+	struct lpfc_hba   *phba = (struct lpfc_hba *) ptr;
+	struct lpfc_vport **vports;
 	unsigned long iflag;
-
+	int i;
 
 	/* Re-establishing Link, timer expired */
 	lpfc_printf_log(phba, KERN_ERR, LOG_LINK_EVENT,
-			"%d:1300 Re-establishing Link, timer expired "
+			"1300 Re-establishing Link, timer expired "
 			"Data: x%x x%x\n",
-			phba->brd_no, phba->fc_flag, phba->hba_state);
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-	phba->fc_flag &= ~FC_ESTABLISH_LINK;
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
+			phba->pport->fc_flag, phba->pport->port_state);
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
+			struct Scsi_Host *shost;
+			shost = lpfc_shost_from_vport(vports[i]);
+			spin_lock_irqsave(shost->host_lock, iflag);
+			vports[i]->fc_flag &= ~FC_ESTABLISH_LINK;
+			spin_unlock_irqrestore(shost->host_lock, iflag);
+		}
+	lpfc_destroy_vport_work_array(vports);
 }
 
-static int
-lpfc_stop_timer(struct lpfc_hba * phba)
+void
+lpfc_stop_vport_timers(struct lpfc_vport *vport)
 {
-	struct lpfc_sli *psli = &phba->sli;
-
-	/* Instead of a timer, this has been converted to a
-	 * deferred procedding list.
-	 */
-	while (!list_empty(&phba->freebufList)) {
-
-		struct lpfc_dmabuf *mp = NULL;
-
-		list_remove_head((&phba->freebufList), mp,
-				 struct lpfc_dmabuf, list);
-		if (mp) {
-			lpfc_mbuf_free(phba, mp->virt, mp->phys);
-			kfree(mp);
-		}
-	}
+	del_timer_sync(&vport->els_tmofunc);
+	del_timer_sync(&vport->fc_fdmitmo);
+	lpfc_can_disctmo(vport);
+	return;
+}
 
+static void
+lpfc_stop_phba_timers(struct lpfc_hba *phba)
+{
 	del_timer_sync(&phba->fcp_poll_timer);
 	del_timer_sync(&phba->fc_estabtmo);
-	del_timer_sync(&phba->fc_disctmo);
-	del_timer_sync(&phba->fc_fdmitmo);
-	del_timer_sync(&phba->els_tmofunc);
-	psli = &phba->sli;
-	del_timer_sync(&psli->mbox_tmo);
-	return(1);
+	lpfc_stop_vport_timers(phba->pport);
+	del_timer_sync(&phba->sli.mbox_tmo);
+	del_timer_sync(&phba->fabric_block_timer);
+	phba->hb_outstanding = 0;
+	del_timer_sync(&phba->hb_tmofunc);
+	return;
 }
 
 int
-lpfc_online(struct lpfc_hba * phba)
+lpfc_online(struct lpfc_hba *phba)
 {
+	struct lpfc_vport *vport = phba->pport;
+	struct lpfc_vport **vports;
+	int i;
+
 	if (!phba)
 		return 0;
 
-	if (!(phba->fc_flag & FC_OFFLINE_MODE))
+	if (!(vport->fc_flag & FC_OFFLINE_MODE))
 		return 0;
 
-	lpfc_printf_log(phba,
-		       KERN_WARNING,
-		       LOG_INIT,
-		       "%d:0458 Bring Adapter online\n",
-		       phba->brd_no);
+	lpfc_printf_log(phba, KERN_WARNING, LOG_INIT,
+			"0458 Bring Adapter online\n");
 
 	lpfc_block_mgmt_io(phba);
 
@@ -1335,9 +1491,18 @@ lpfc_online(struct lpfc_hba * phba)
 		return 1;
 	}
 
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag &= ~FC_OFFLINE_MODE;
-	spin_unlock_irq(phba->host->host_lock);
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
+			struct Scsi_Host *shost;
+			shost = lpfc_shost_from_vport(vports[i]);
+			spin_lock_irq(shost->host_lock);
+			vports[i]->fc_flag &= ~FC_OFFLINE_MODE;
+			if (phba->sli3_options & LPFC_SLI3_NPIV_ENABLED)
+				vports[i]->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
+			spin_unlock_irq(shost->host_lock);
+		}
+	lpfc_destroy_vport_work_array(vports);
 
 	lpfc_unblock_mgmt_io(phba);
 	return 0;
@@ -1348,9 +1513,9 @@ lpfc_block_mgmt_io(struct lpfc_hba * phba)
 {
 	unsigned long iflag;
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-	phba->fc_flag |= FC_BLOCK_MGMT_IO;
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
+	spin_lock_irqsave(&phba->hbalock, iflag);
+	phba->sli.sli_flag |= LPFC_BLOCK_MGMT_IO;
+	spin_unlock_irqrestore(&phba->hbalock, iflag);
 }
 
 void
@@ -1358,71 +1523,93 @@ lpfc_unblock_mgmt_io(struct lpfc_hba * phba)
 {
 	unsigned long iflag;
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-	phba->fc_flag &= ~FC_BLOCK_MGMT_IO;
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
+	spin_lock_irqsave(&phba->hbalock, iflag);
+	phba->sli.sli_flag &= ~LPFC_BLOCK_MGMT_IO;
+	spin_unlock_irqrestore(&phba->hbalock, iflag);
 }
 
 void
 lpfc_offline_prep(struct lpfc_hba * phba)
 {
+	struct lpfc_vport *vport = phba->pport;
 	struct lpfc_nodelist  *ndlp, *next_ndlp;
-	struct list_head *listp, *node_list[7];
+	struct lpfc_vport **vports;
 	int i;
 
-	if (phba->fc_flag & FC_OFFLINE_MODE)
+	if (vport->fc_flag & FC_OFFLINE_MODE)
 		return;
 
 	lpfc_block_mgmt_io(phba);
 
 	lpfc_linkdown(phba);
 
-	/* Issue an unreg_login to all nodes */
-	node_list[0] = &phba->fc_npr_list;  /* MUST do this list first */
-	node_list[1] = &phba->fc_nlpmap_list;
-	node_list[2] = &phba->fc_nlpunmap_list;
-	node_list[3] = &phba->fc_prli_list;
-	node_list[4] = &phba->fc_reglogin_list;
-	node_list[5] = &phba->fc_adisc_list;
-	node_list[6] = &phba->fc_plogi_list;
-	for (i = 0; i < 7; i++) {
-		listp = node_list[i];
-		if (list_empty(listp))
-			continue;
-
-		list_for_each_entry_safe(ndlp, next_ndlp, listp, nlp_listp)
-			lpfc_unreg_rpi(phba, ndlp);
+	/* Issue an unreg_login to all nodes on all vports */
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL) {
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
+			struct Scsi_Host *shost;
+
+			if (vports[i]->load_flag & FC_UNLOADING)
+				continue;
+			shost =	lpfc_shost_from_vport(vports[i]);
+			list_for_each_entry_safe(ndlp, next_ndlp,
+						 &vports[i]->fc_nodes,
+						 nlp_listp) {
+				if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
+					continue;
+				if (ndlp->nlp_type & NLP_FABRIC) {
+					lpfc_disc_state_machine(vports[i], ndlp,
+						NULL, NLP_EVT_DEVICE_RECOVERY);
+					lpfc_disc_state_machine(vports[i], ndlp,
+						NULL, NLP_EVT_DEVICE_RM);
+				}
+				spin_lock_irq(shost->host_lock);
+				ndlp->nlp_flag &= ~NLP_NPR_ADISC;
+				spin_unlock_irq(shost->host_lock);
+				lpfc_unreg_rpi(vports[i], ndlp);
+			}
+		}
 	}
+	lpfc_destroy_vport_work_array(vports);
 
 	lpfc_sli_flush_mbox_queue(phba);
 }
 
 void
-lpfc_offline(struct lpfc_hba * phba)
+lpfc_offline(struct lpfc_hba *phba)
 {
-	unsigned long iflag;
+	struct Scsi_Host  *shost;
+	struct lpfc_vport **vports;
+	int i;
 
-	if (phba->fc_flag & FC_OFFLINE_MODE)
+	if (phba->pport->fc_flag & FC_OFFLINE_MODE)
 		return;
 
 	/* stop all timers associated with this hba */
-	lpfc_stop_timer(phba);
-
-	lpfc_printf_log(phba,
-		       KERN_WARNING,
-		       LOG_INIT,
-		       "%d:0460 Bring Adapter offline\n",
-		       phba->brd_no);
-
+	lpfc_stop_phba_timers(phba);
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++)
+			lpfc_stop_vport_timers(vports[i]);
+	lpfc_destroy_vport_work_array(vports);
+	lpfc_printf_log(phba, KERN_WARNING, LOG_INIT,
+			"0460 Bring Adapter offline\n");
 	/* Bring down the SLI Layer and cleanup.  The HBA is offline
 	   now.  */
 	lpfc_sli_hba_down(phba);
-	lpfc_cleanup(phba);
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-	phba->work_hba_events = 0;
+	spin_lock_irq(&phba->hbalock);
 	phba->work_ha = 0;
-	phba->fc_flag |= FC_OFFLINE_MODE;
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
+	spin_unlock_irq(&phba->hbalock);
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
+			shost = lpfc_shost_from_vport(vports[i]);
+			spin_lock_irq(shost->host_lock);
+			vports[i]->work_port_events = 0;
+			vports[i]->fc_flag |= FC_OFFLINE_MODE;
+			spin_unlock_irq(shost->host_lock);
+		}
+	lpfc_destroy_vport_work_array(vports);
 }
 
 /******************************************************************************
@@ -1432,17 +1619,17 @@ lpfc_offline(struct lpfc_hba * phba)
 *
 ******************************************************************************/
 static int
-lpfc_scsi_free(struct lpfc_hba * phba)
+lpfc_scsi_free(struct lpfc_hba *phba)
 {
 	struct lpfc_scsi_buf *sb, *sb_next;
 	struct lpfc_iocbq *io, *io_next;
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	/* Release all the lpfc_scsi_bufs maintained by this host. */
 	list_for_each_entry_safe(sb, sb_next, &phba->lpfc_scsi_buf_list, list) {
 		list_del(&sb->list);
 		pci_pool_free(phba->lpfc_scsi_dma_buf_pool, sb->data,
-								sb->dma_handle);
+			      sb->dma_handle);
 		kfree(sb);
 		phba->total_scsi_bufs--;
 	}
@@ -1454,124 +1641,290 @@ lpfc_scsi_free(struct lpfc_hba * phba)
 		phba->total_iocbq_bufs--;
 	}
 
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	return 0;
 }
 
-static void
-lpfc_setup_max_dma_length(struct lpfc_hba * phba)
+struct lpfc_vport *
+lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
 {
-	struct pci_dev *pdev = phba->pcidev;
-	struct pci_bus *bus = pdev->bus;
-	uint8_t rev;
+	struct lpfc_vport *vport;
+	struct Scsi_Host  *shost;
+	int error = 0;
 
-	while(bus) {
-		/*
-		 * 0x7450 == PCI_DEVICE_ID_AMD_8131_BRIDGE for 2.6 kernels
-		 * 0x7450 == PCI_DEVICE_ID_AMD_8131_APIC   for 2.4 kernels
-		 */
-		if ( bus->self &&
-			(bus->self->vendor == PCI_VENDOR_ID_AMD) &&
-			(bus->self->device == PCI_DEVICE_ID_AMD_8131_BRIDGE)) {
-				pci_read_config_byte(bus->self, 0x08, &rev);
-				if (rev == 0x13) {
-					phba->pci_max_read = 1024;
-					return;
-				}
-		}
-		bus = bus->parent;
+	if (dev != &phba->pcidev->dev)
+		shost = scsi_host_alloc(&lpfc_vport_template,
+					sizeof(struct lpfc_vport));
+	else
+		if (phba->cfg_enable_npiv)
+			shost = scsi_host_alloc(&lpfc_template,
+						sizeof(struct lpfc_vport));
+		else
+			shost = scsi_host_alloc(&lpfc_template_no_npiv,
+						sizeof(struct lpfc_vport));
+	if (!shost)
+		goto out;
+
+	vport = (struct lpfc_vport *) shost->hostdata;
+	vport->phba = phba;
+
+	vport->load_flag |= FC_LOADING;
+	vport->fc_flag |= FC_VPORT_NEEDS_REG_VPI;
+
+	lpfc_get_vport_cfgparam(vport);
+	shost->unique_id = instance;
+	shost->max_id = LPFC_MAX_TARGET;
+	shost->max_lun = vport->cfg_max_luns;
+	shost->this_id = -1;
+	shost->max_cmd_len = 16;
+	/*
+	 * Set initial can_queue value since 0 is no longer supported and
+	 * scsi_add_host will fail. This will be adjusted later based on the
+	 * max xri value determined in hba setup.
+	 */
+	shost->can_queue = phba->cfg_hba_queue_depth - 10;
+	if (dev != &phba->pcidev->dev) {
+		shost->transportt = lpfc_vport_transport_template;
+		vport->port_type = LPFC_NPIV_PORT;
+	} else {
+		shost->transportt = lpfc_transport_template;
+		vport->port_type = LPFC_PHYSICAL_PORT;
 	}
+
+	/* Initialize all internally managed lists. */
+	INIT_LIST_HEAD(&vport->fc_nodes);
+	spin_lock_init(&vport->work_port_lock);
+
+	init_timer(&vport->fc_disctmo);
+	vport->fc_disctmo.function = lpfc_disc_timeout;
+	vport->fc_disctmo.data = (unsigned long)vport;
+
+	init_timer(&vport->fc_fdmitmo);
+	vport->fc_fdmitmo.function = lpfc_fdmi_tmo;
+	vport->fc_fdmitmo.data = (unsigned long)vport;
+
+	init_timer(&vport->els_tmofunc);
+	vport->els_tmofunc.function = lpfc_els_timeout;
+	vport->els_tmofunc.data = (unsigned long)vport;
+
+	error = scsi_add_host(shost, dev);
+	if (error)
+		goto out_put_shost;
+	if (!shost->shost_classdev.kobj.dentry)
+		goto out_put_shost;
+	vport->auth.challenge = NULL;
+	vport->auth.challenge_len = 0;
+	vport->auth.dh_pub_key = NULL;
+	vport->auth.dh_pub_key_len = 0;
+
+	INIT_WORK(&vport->sc_online_work,
+		  (void(*)(void *))lpfc_fc_sc_security_online,
+		  &vport->sc_online_work);
+	INIT_WORK(&vport->sc_offline_work,
+		  (void(*)(void *))lpfc_fc_sc_security_offline,
+		  &vport->sc_offline_work);
+	INIT_LIST_HEAD(&vport->sc_users);
+	INIT_LIST_HEAD(&vport->sc_response_wait_queue);
+
+	spin_lock_irq(&phba->hbalock);
+	list_add_tail(&vport->listentry, &phba->port_list);
+	spin_unlock_irq(&phba->hbalock);
+	return vport;
+
+out_put_shost:
+	scsi_host_put(shost);
+out:
+	return NULL;
+}
+
+void
+destroy_port(struct lpfc_vport *vport)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
+
+	kfree(vport->vname);
+
+	lpfc_debugfs_terminate(vport);
+	fc_remove_host(shost);
+	scsi_remove_host(shost);
+
+	spin_lock_irq(&phba->hbalock);
+	list_del_init(&vport->listentry);
+	spin_unlock_irq(&phba->hbalock);
+
+	lpfc_cleanup(vport);
 	return;
 }
 
+int
+lpfc_get_instance(void)
+{
+	int instance = 0;
+
+	/* Assign an unused number */
+	if (!idr_pre_get(&lpfc_hba_index, GFP_KERNEL))
+		return -1;
+	if (idr_get_new(&lpfc_hba_index, NULL, &instance))
+		return -1;
+	return instance;
+}
+
+/*
+ * Note: there is no scan_start function as adapter initialization
+ * will have asynchronously kicked off the link initialization.
+ */
+
+int lpfc_scan_finished(struct Scsi_Host *shost, unsigned long time)
+{
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	int stat = 0;
+
+	spin_lock_irq(shost->host_lock);
+
+	if (vport->load_flag & FC_UNLOADING) {
+		stat = 1;
+		goto finished;
+	}
+	if (time >= 30 * HZ) {
+		lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+				"0461 Scanning longer than 30 "
+				"seconds.  Continuing initialization\n");
+		stat = 1;
+		goto finished;
+	}
+	if (time >= 15 * HZ && phba->link_state <= LPFC_LINK_DOWN) {
+		lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+				"0465 Link down longer than 15 "
+				"seconds.  Continuing initialization\n");
+		stat = 1;
+		goto finished;
+	}
+
+	if (vport->port_state != LPFC_VPORT_READY)
+		goto finished;
+	if (vport->num_disc_nodes || vport->fc_prli_sent)
+		goto finished;
+	if (vport->fc_map_cnt == 0 && time < 2 * HZ)
+		goto finished;
+	if ((phba->sli.sli_flag & LPFC_SLI_MBOX_ACTIVE) != 0)
+		goto finished;
+
+	stat = 1;
+
+finished:
+	spin_unlock_irq(shost->host_lock);
+	return stat;
+}
+
+void lpfc_host_attrib_init(struct Scsi_Host *shost)
+{
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	/*
+	 * Set fixed host attributes.  Must be done after lpfc_sli_hba_setup().
+	 */
+
+	fc_host_node_name(shost) = wwn_to_u64(vport->fc_nodename.u.wwn);
+	fc_host_port_name(shost) = wwn_to_u64(vport->fc_portname.u.wwn);
+	fc_host_supported_classes(shost) = FC_COS_CLASS3;
+
+	memset(fc_host_supported_fc4s(shost), 0,
+	       sizeof(fc_host_supported_fc4s(shost)));
+	fc_host_supported_fc4s(shost)[2] = 1;
+	fc_host_supported_fc4s(shost)[7] = 1;
+
+	lpfc_vport_symbolic_node_name(vport, fc_host_symbolic_name(shost),
+				 sizeof fc_host_symbolic_name(shost));
+
+	fc_host_supported_speeds(shost) = 0;
+	if (phba->lmt & LMT_10Gb)
+		fc_host_supported_speeds(shost) |= FC_PORTSPEED_10GBIT;
+	if (phba->lmt & LMT_8Gb)
+		fc_host_supported_speeds(shost) |= FC_PORTSPEED_8GBIT;
+	if (phba->lmt & LMT_4Gb)
+		fc_host_supported_speeds(shost) |= FC_PORTSPEED_4GBIT;
+	if (phba->lmt & LMT_2Gb)
+		fc_host_supported_speeds(shost) |= FC_PORTSPEED_2GBIT;
+	if (phba->lmt & LMT_1Gb)
+		fc_host_supported_speeds(shost) |= FC_PORTSPEED_1GBIT;
+
+	fc_host_maxframe_size(shost) =
+		(((uint32_t) vport->fc_sparam.cmn.bbRcvSizeMsb & 0x0F) << 8) |
+		(uint32_t) vport->fc_sparam.cmn.bbRcvSizeLsb;
+
+	/* This value is also unchanging */
+	memset(fc_host_active_fc4s(shost), 0,
+	       sizeof(fc_host_active_fc4s(shost)));
+	fc_host_active_fc4s(shost)[2] = 1;
+	fc_host_active_fc4s(shost)[7] = 1;
+
+	spin_lock_irq(shost->host_lock);
+	vport->load_flag &= ~FC_LOADING;
+	spin_unlock_irq(shost->host_lock);
+}
+
 static int __devinit
 lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
 {
-	struct Scsi_Host *host;
-	struct lpfc_hba  *phba;
-	struct lpfc_sli  *psli;
+	struct lpfc_vport *vport = NULL;
+	struct lpfc_hba   *phba;
+	struct lpfc_sli   *psli;
 	struct lpfc_iocbq *iocbq_entry = NULL, *iocbq_next = NULL;
+	struct Scsi_Host  *shost = NULL;
+	void *ptr;
 	unsigned long bar0map_len, bar2map_len;
 	int error = -ENODEV, retval;
-	int i;
+	int  i, hbq_count;
 	uint16_t iotag;
+	unsigned long start;
 
 	if (pci_enable_device(pdev))
 		goto out;
 	if (pci_request_regions(pdev, LPFC_DRIVER_NAME))
 		goto out_disable_device;
 
-	host = scsi_host_alloc(&lpfc_template, sizeof (struct lpfc_hba));
-	if (!host)
+	phba = kzalloc(sizeof (struct lpfc_hba), GFP_KERNEL);
+	if (!phba)
 		goto out_release_regions;
 
-	phba = (struct lpfc_hba*)host->hostdata;
-	memset(phba, 0, sizeof (struct lpfc_hba));
-	phba->host = host;
+	spin_lock_init(&phba->hbalock);
 
-	phba->fc_flag |= FC_LOADING;
 	phba->pcidev = pdev;
 
-	/* Check if we need to change the DMA length */
-	lpfc_setup_max_dma_length(phba);
-
 	/* Assign an unused board number */
-	if (!idr_pre_get(&lpfc_hba_index, GFP_KERNEL))
-		goto out_put_host;
-
-	error = idr_get_new(&lpfc_hba_index, NULL, &phba->brd_no);
-	if (error)
-		goto out_put_host;
+	if ((phba->brd_no = lpfc_get_instance()) < 0)
+		goto out_free_phba;
 
-	host->unique_id = phba->brd_no;
-	INIT_LIST_HEAD(&phba->ctrspbuflist);
-	INIT_LIST_HEAD(&phba->rnidrspbuflist);
-	INIT_LIST_HEAD(&phba->freebufList);
+	INIT_LIST_HEAD(&phba->port_list);
+	/*
+	 * Get all the module params for configuring this host and then
+	 * establish the host.
+	 */
+	lpfc_get_cfgparam(phba);
+	phba->max_vpi = LPFC_MAX_VPI;
 
 	/* Initialize timers used by driver */
 	init_timer(&phba->fc_estabtmo);
 	phba->fc_estabtmo.function = lpfc_establish_link_tmo;
 	phba->fc_estabtmo.data = (unsigned long)phba;
-	init_timer(&phba->fc_disctmo);
-	phba->fc_disctmo.function = lpfc_disc_timeout;
-	phba->fc_disctmo.data = (unsigned long)phba;
-
-	init_timer(&phba->fc_fdmitmo);
-	phba->fc_fdmitmo.function = lpfc_fdmi_tmo;
-	phba->fc_fdmitmo.data = (unsigned long)phba;
-	init_timer(&phba->els_tmofunc);
-	phba->els_tmofunc.function = lpfc_els_timeout;
-	phba->els_tmofunc.data = (unsigned long)phba;
+
+	init_timer(&phba->hb_tmofunc);
+	phba->hb_tmofunc.function = lpfc_hb_timeout;
+	phba->hb_tmofunc.data = (unsigned long)phba;
+
 	psli = &phba->sli;
 	init_timer(&psli->mbox_tmo);
 	psli->mbox_tmo.function = lpfc_mbox_timeout;
-	psli->mbox_tmo.data = (unsigned long)phba;
-
+	psli->mbox_tmo.data = (unsigned long) phba;
 	init_timer(&phba->fcp_poll_timer);
 	phba->fcp_poll_timer.function = lpfc_poll_timeout;
-	phba->fcp_poll_timer.data = (unsigned long)phba;
-
-	/*
-	 * Get all the module params for configuring this host and then
-	 * establish the host parameters.
-	 */
-	lpfc_get_cfgparam(phba);
-
-	host->max_id = LPFC_MAX_TARGET;
-	host->max_lun = phba->cfg_max_luns;
-	host->this_id = -1;
-
-	/* Initialize all internally managed lists. */
-	INIT_LIST_HEAD(&phba->fc_nlpmap_list);
-	INIT_LIST_HEAD(&phba->fc_nlpunmap_list);
-	INIT_LIST_HEAD(&phba->fc_unused_list);
-	INIT_LIST_HEAD(&phba->fc_plogi_list);
-	INIT_LIST_HEAD(&phba->fc_adisc_list);
-	INIT_LIST_HEAD(&phba->fc_reglogin_list);
-	INIT_LIST_HEAD(&phba->fc_prli_list);
-	INIT_LIST_HEAD(&phba->fc_npr_list);
-
+	phba->fcp_poll_timer.data = (unsigned long) phba;
+	init_timer(&phba->fabric_block_timer);
+	phba->fabric_block_timer.function = lpfc_fabric_block_timeout;
+	phba->fabric_block_timer.data = (unsigned long) phba;
 
 	pci_set_master(pdev);
 	retval = pci_set_mwi(pdev);
@@ -1619,18 +1972,40 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
 
 	memset(phba->slim2p, 0, SLI2_SLIM_SIZE);
 
+	phba->hbqslimp.virt = dma_alloc_coherent(&phba->pcidev->dev,
+						 lpfc_sli_hbq_size(),
+						 &phba->hbqslimp.phys,
+						 GFP_KERNEL);
+	if (!phba->hbqslimp.virt)
+		goto out_free_slim;
+
+	hbq_count = lpfc_sli_hbq_count();
+	ptr = phba->hbqslimp.virt;
+	for (i = 0; i < hbq_count; ++i) {
+		phba->hbqs[i].hbq_virt = ptr;
+		INIT_LIST_HEAD(&phba->hbqs[i].hbq_buffer_list);
+		ptr += (lpfc_hbq_defs[i]->entry_count *
+			sizeof(struct lpfc_hbq_entry));
+	}
+	phba->hbqs[LPFC_ELS_HBQ].hbq_alloc_buffer = lpfc_els_hbq_alloc;
+	phba->hbqs[LPFC_ELS_HBQ].hbq_free_buffer  = lpfc_els_hbq_free;
+
+	memset(phba->hbqslimp.virt, 0, lpfc_sli_hbq_size());
+
 	/* Initialize the SLI Layer to run with lpfc HBAs. */
 	lpfc_sli_setup(phba);
 	lpfc_sli_queue_setup(phba);
 
-	error = lpfc_mem_alloc(phba);
-	if (error)
-		goto out_free_slim;
+	retval = lpfc_mem_alloc(phba);
+	if (retval) {
+		error = retval;
+		goto out_free_hbqslimp;
+	}
 
 	/* Initialize and populate the iocb list per host.  */
 	INIT_LIST_HEAD(&phba->lpfc_iocb_list);
 	for (i = 0; i < LPFC_IOCB_LIST_CNT; i++) {
-		iocbq_entry = kmalloc(sizeof(struct lpfc_iocbq), GFP_KERNEL);
+		iocbq_entry = kzalloc(sizeof(struct lpfc_iocbq), GFP_KERNEL);
 		if (iocbq_entry == NULL) {
 			printk(KERN_ERR "%s: only allocated %d iocbs of "
 				"expected %d count. Unloading driver.\n",
@@ -1639,7 +2014,6 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
 			goto out_free_iocbq;
 		}
 
-		memset(iocbq_entry, 0, sizeof(struct lpfc_iocbq));
 		iotag = lpfc_sli_next_iotag(phba, iocbq_entry);
 		if (iotag == 0) {
 			kfree (iocbq_entry);
@@ -1649,10 +2023,11 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
 			error = -ENOMEM;
 			goto out_free_iocbq;
 		}
-		spin_lock_irq(phba->host->host_lock);
+
+		spin_lock_irq(&phba->hbalock);
 		list_add(&iocbq_entry->list, &phba->lpfc_iocb_list);
 		phba->total_iocbq_bufs++;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(&phba->hbalock);
 	}
 
 	/* Initialize HBA structure */
@@ -1673,149 +2048,134 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
 		goto out_free_iocbq;
 	}
 
-	/*
-	 * Set initial can_queue value since 0 is no longer supported and
-	 * scsi_add_host will fail. This will be adjusted later based on the
-	 * max xri value determined in hba setup.
-	 */
-	host->can_queue = phba->cfg_hba_queue_depth - 10;
-
-	/* Tell the midlayer we support 16 byte commands */
-	host->max_cmd_len = 16;
-
 	/* Initialize the list of scsi buffers used by driver for scsi IO. */
 	spin_lock_init(&phba->scsi_buf_list_lock);
 	INIT_LIST_HEAD(&phba->lpfc_scsi_buf_list);
 
-	host->transportt = lpfc_transport_template;
-	pci_set_drvdata(pdev, host);
-	error = scsi_add_host(host, &pdev->dev);
-	if (error)
+	/* Initialize list of fabric iocbs */
+	INIT_LIST_HEAD(&phba->fabric_iocb_list);
+
+	/* Initialize list of sysfs mailbox commands */
+	INIT_LIST_HEAD(&phba->sysfs_mbox_list);
+
+	vport = lpfc_create_port(phba, phba->brd_no, &phba->pcidev->dev);
+	if (!vport)
 		goto out_kthread_stop;
 
-	error = lpfc_alloc_sysfs_attr(phba);
-	if (error)
-		goto out_remove_host;
+	shost = lpfc_shost_from_vport(vport);
+
+	if ((lpfc_get_security_enabled)(shost)){
+		int flags;
+		spin_lock_irqsave(&fc_security_user_lock, flags);
+
+		list_add_tail(&vport->sc_users, &fc_security_user_list);
+
+		spin_unlock_irqrestore(&fc_security_user_lock, flags);
+
+		if (fc_service_state == FC_SC_SERVICESTATE_ONLINE) {
+			lpfc_fc_queue_security_work(vport,
+				&vport->sc_online_work);
+		}
+	}
+
+	vport->port_type = LPFC_PHYSICAL_PORT;
+	phba->pport = vport;
+	lpfc_debugfs_initialize(vport);
+
+	pci_set_drvdata(pdev, shost);
 
 	if (phba->cfg_use_msi) {
-		error = pci_enable_msi(phba->pcidev);
-		if (error)
-			lpfc_printf_log(phba, KERN_INFO, LOG_INIT, "%d:0452 "
-					"Enable MSI failed, continuing with "
-					"IRQ\n", phba->brd_no);
+		retval = pci_enable_msi(phba->pcidev);
+		if (!retval)
+			phba->using_msi = 1;
+		else
+			lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+					"0452 Enable MSI failed, continuing "
+					"with IRQ\n");
 	}
 
-	error =	request_irq(phba->pcidev->irq, lpfc_intr_handler, IRQF_SHARED,
-							LPFC_DRIVER_NAME, phba);
-	if (error) {
+	retval = request_irq(phba->pcidev->irq, lpfc_intr_handler, IRQF_SHARED,
+			    LPFC_DRIVER_NAME, phba);
+	if (retval) {
 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"%d:0451 Enable interrupt handler failed\n",
-			phba->brd_no);
-		goto out_free_sysfs_attr;
+			"0451 Enable interrupt handler failed\n");
+		error = retval;
+		goto out_disable_msi;
 	}
+
 	phba->MBslimaddr = phba->slim_memmap_p;
 	phba->HAregaddr = phba->ctrl_regs_memmap_p + HA_REG_OFFSET;
 	phba->CAregaddr = phba->ctrl_regs_memmap_p + CA_REG_OFFSET;
 	phba->HSregaddr = phba->ctrl_regs_memmap_p + HS_REG_OFFSET;
 	phba->HCregaddr = phba->ctrl_regs_memmap_p + HC_REG_OFFSET;
 
-	phba->dfc_host = lpfcdfc_host_add (pdev, host, phba);
+	phba->dfc_host = lpfcdfc_host_add(pdev, shost, phba);
 	if (!phba->dfc_host) {
-		lpfc_printf_log(phba,
-				KERN_ERR,
-				LOG_LIBDFC,
-				"%d:1201 Failed to allocate dfc_host \n",
-				phba->brd_no);
+		lpfc_printf_log(phba, KERN_ERR, LOG_LIBDFC,
+				"1201 Failed to allocate dfc_host \n");
 		error = -ENOMEM;
 		goto out_free_irq;
 	}
 
-	error = lpfc_sli_hba_setup(phba);
-	if (error) {
-		error = -ENODEV;
+	if (lpfc_alloc_sysfs_attr(vport)) {
+		error = -ENOMEM;
 		goto out_free_irq;
 	}
 
+	if (lpfc_sli_hba_setup(phba)) {
+		error = -ENODEV;
+		goto out_remove_device;
+	}
+
 	/*
 	 * hba setup may have changed the hba_queue_depth so we need to adjust
 	 * the value of can_queue.
 	 */
-	host->can_queue = phba->cfg_hba_queue_depth - 10;
+	shost->can_queue = phba->cfg_hba_queue_depth - 10;
 
-	lpfc_discovery_wait(phba);
+	lpfc_host_attrib_init(shost);
 
 	if (phba->cfg_poll & DISABLE_FCP_RING_INT) {
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		lpfc_poll_start_timer(phba);
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 	}
 
-	/*
-	 * set fixed host attributes
-	 * Must done after lpfc_sli_hba_setup()
-	 */
-
-	fc_host_node_name(host) = wwn_to_u64(phba->fc_nodename.u.wwn);
-	fc_host_port_name(host) = wwn_to_u64(phba->fc_portname.u.wwn);
-	fc_host_supported_classes(host) = FC_COS_CLASS3;
-
-	memset(fc_host_supported_fc4s(host), 0,
-		sizeof(fc_host_supported_fc4s(host)));
-	fc_host_supported_fc4s(host)[2] = 1;
-	fc_host_supported_fc4s(host)[7] = 1;
-
-	lpfc_get_hba_sym_node_name(phba, fc_host_symbolic_name(host));
-
-	fc_host_supported_speeds(host) = 0;
-	if (phba->lmt & LMT_10Gb)
-		fc_host_supported_speeds(host) |= FC_PORTSPEED_10GBIT;
-	if (phba->lmt & LMT_8Gb)
-		fc_host_supported_speeds(host) |= FC_PORTSPEED_8GBIT;
-	if (phba->lmt & LMT_4Gb)
-		fc_host_supported_speeds(host) |= FC_PORTSPEED_4GBIT;
-	if (phba->lmt & LMT_2Gb)
-		fc_host_supported_speeds(host) |= FC_PORTSPEED_2GBIT;
-	if (phba->lmt & LMT_1Gb)
-		fc_host_supported_speeds(host) |= FC_PORTSPEED_1GBIT;
+	start = jiffies;
 
-	fc_host_maxframe_size(host) =
-		((((uint32_t) phba->fc_sparam.cmn.bbRcvSizeMsb & 0x0F) << 8) |
-		 (uint32_t) phba->fc_sparam.cmn.bbRcvSizeLsb);
+	while (!lpfc_scan_finished(shost, jiffies - start))
+		msleep(10);
+	scsi_scan_host(shost);
 
-	/* This value is also unchanging */
-	memset(fc_host_active_fc4s(host), 0,
-		sizeof(fc_host_active_fc4s(host)));
-	fc_host_active_fc4s(host)[2] = 1;
-	fc_host_active_fc4s(host)[7] = 1;
-
-	spin_lock_irq(phba->host->host_lock);
-	phba->fc_flag &= ~FC_LOADING;
-	spin_unlock_irq(phba->host->host_lock);
 	return 0;
 
+out_remove_device:
+	lpfc_free_sysfs_attr(vport);
+	spin_lock_irq(shost->host_lock);
+	vport->load_flag |= FC_UNLOADING;
+	spin_unlock_irq(shost->host_lock);
 out_free_irq:
 	if (phba->dfc_host)
 		lpfcdfc_host_del(phba->dfc_host);
-	lpfc_stop_timer(phba);
-	phba->work_hba_events = 0;
+	lpfc_stop_phba_timers(phba);
+	phba->pport->work_port_events = 0;
 	free_irq(phba->pcidev->irq, phba);
-	pci_disable_msi(phba->pcidev);
-out_free_sysfs_attr:
-	lpfc_free_sysfs_attr(phba);
-out_remove_host:
-	fc_remove_host(phba->host);
-	scsi_remove_host(phba->host);
+out_disable_msi:
+	if (phba->using_msi)
+		pci_disable_msi(phba->pcidev);
+	destroy_port(vport);
 out_kthread_stop:
 	kthread_stop(phba->worker_thread);
 out_free_iocbq:
 	list_for_each_entry_safe(iocbq_entry, iocbq_next,
 						&phba->lpfc_iocb_list, list) {
-		spin_lock_irq(phba->host->host_lock);
 		kfree(iocbq_entry);
 		phba->total_iocbq_bufs--;
-		spin_unlock_irq(phba->host->host_lock);
 	}
 	lpfc_mem_free(phba);
+out_free_hbqslimp:
+	dma_free_coherent(&pdev->dev, lpfc_sli_hbq_size(), phba->hbqslimp.virt,
+			  phba->hbqslimp.phys);
 out_free_slim:
 	dma_free_coherent(&pdev->dev, SLI2_SLIM_SIZE, phba->slim2p,
 							phba->slim2p_mapping);
@@ -1825,39 +2185,49 @@ out_iounmap_slim:
 	iounmap(phba->slim_memmap_p);
 out_idr_remove:
 	idr_remove(&lpfc_hba_index, phba->brd_no);
-out_put_host:
-	phba->host = NULL;
-	scsi_host_put(host);
+out_free_phba:
+	kfree(phba);
 out_release_regions:
 	pci_release_regions(pdev);
 out_disable_device:
 	pci_disable_device(pdev);
 out:
 	pci_set_drvdata(pdev, NULL);
+	if (shost)
+		scsi_host_put(shost);
 	return error;
 }
 
 static void __devexit
 lpfc_pci_remove_one(struct pci_dev *pdev)
 {
-	struct Scsi_Host   *host = pci_get_drvdata(pdev);
-	struct lpfc_hba    *phba = (struct lpfc_hba *)host->hostdata;
-	unsigned long iflag;
+	struct Scsi_Host  *shost = pci_get_drvdata(pdev);
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	struct lpfc_vport **vports;
+	int i;
 
 	lpfcdfc_host_del(phba->dfc_host);
 	phba->dfc_host = NULL;
 
-	lpfc_free_sysfs_attr(phba);
-
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-	phba->fc_flag |= FC_UNLOADING;
-
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
+	spin_lock_irq(&phba->hbalock);
+	vport->load_flag |= FC_UNLOADING;
+	spin_unlock_irq(&phba->hbalock);
 
-	fc_remove_host(phba->host);
-	scsi_remove_host(phba->host);
+	kfree(vport->vname);
+	lpfc_free_sysfs_attr(vport);
 
-	kthread_stop(phba->worker_thread);
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
+			if (vports[i]->port_type == LPFC_PHYSICAL_PORT)
+				continue;
+			lpfc_vport_delete(lpfc_shost_from_vport(vports[i]));
+		}
+	lpfc_destroy_vport_work_array(vports);
+	fc_remove_host(shost);
+	scsi_remove_host(shost);
+	lpfc_cleanup(vport);
 
 	/*
 	 * Bring down the SLI Layer. This step disables all interrupts,
@@ -1867,13 +2237,22 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
 	lpfc_sli_hba_down(phba);
 	lpfc_sli_brdrestart(phba);
 
+	lpfc_stop_phba_timers(phba);
+	spin_lock_irq(&phba->hbalock);
+	list_del_init(&vport->listentry);
+	spin_unlock_irq(&phba->hbalock);
+
+	lpfc_debugfs_terminate(vport);
+
+	kthread_stop(phba->worker_thread);
+
 	/* Release the irq reservation */
 	free_irq(phba->pcidev->irq, phba);
-	pci_disable_msi(phba->pcidev);
+	if (phba->using_msi)
+		pci_disable_msi(phba->pcidev);
 
-	lpfc_cleanup(phba);
-	lpfc_stop_timer(phba);
-	phba->work_hba_events = 0;
+	pci_set_drvdata(pdev, NULL);
+	scsi_host_put(shost);
 
 	/*
 	 * Call scsi_free before mem_free since scsi bufs are released to their
@@ -1882,6 +2261,9 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
 	lpfc_scsi_free(phba);
 	lpfc_mem_free(phba);
 
+	dma_free_coherent(&pdev->dev, lpfc_sli_hbq_size(), phba->hbqslimp.virt,
+			  phba->hbqslimp.phys);
+
 	/* Free resources associated with SLI2 interface */
 	dma_free_coherent(&pdev->dev, SLI2_SLIM_SIZE,
 			  phba->slim2p, phba->slim2p_mapping);
@@ -1890,15 +2272,15 @@ lpfc_pci_remove_one(struct pci_dev *pdev)
 	iounmap(phba->ctrl_regs_memmap_p);
 	iounmap(phba->slim_memmap_p);
 
-	pci_release_regions(phba->pcidev);
-	pci_disable_device(phba->pcidev);
-
 	idr_remove(&lpfc_hba_index, phba->brd_no);
-	scsi_host_put(phba->host);
 
-	pci_set_drvdata(pdev, NULL);
+	kfree(phba);
+
+	pci_release_regions(pdev);
+	pci_disable_device(pdev);
 }
 
+
 static struct pci_device_id lpfc_id_table[] = {
 	{PCI_VENDOR_ID_EMULEX, PCI_DEVICE_ID_VIPER,
 		PCI_ANY_ID, PCI_ANY_ID, },
@@ -1971,6 +2353,7 @@ static struct pci_device_id lpfc_id_table[] = {
 
 MODULE_DEVICE_TABLE(pci, lpfc_id_table);
 
+
 static struct pci_driver lpfc_driver = {
 	.name		= LPFC_DRIVER_NAME,
 	.id_table	= lpfc_id_table,
@@ -1988,8 +2371,33 @@ lpfc_init(void)
 
 	lpfc_transport_template =
 				fc_attach_transport(&lpfc_transport_functions);
-	if (!lpfc_transport_template)
+	if (lpfc_transport_template == NULL)
 		return -ENOMEM;
+	if (lpfc_enable_npiv) {
+		lpfc_vport_transport_template =
+			fc_attach_transport(&lpfc_vport_transport_functions);
+		if (lpfc_vport_transport_template == NULL) {
+			fc_release_transport(lpfc_transport_template);
+			return -ENOMEM;
+		}
+	}
+
+	if (netlink_register_notifier(&lpfc_fc_netlink_notifier))
+		return 1;
+	fc_nl_sock = netlink_kernel_create(NETLINK_FCTRANSPORT, FC_NL_GROUP_CNT,
+					   lpfc_fc_nl_rcv, THIS_MODULE);
+	if (!fc_nl_sock) {
+		netlink_unregister_notifier(&lpfc_fc_netlink_notifier);
+		return 1;
+	}
+	snprintf(security_work_q_name, KOBJ_NAME_LEN, "fc_sc_wq");
+	security_work_q = create_singlethread_workqueue(security_work_q_name);
+	if (!security_work_q) {
+		netlink_unregister_notifier(&lpfc_fc_netlink_notifier);
+		sock_release(fc_nl_sock->sk_socket);
+		return 1;
+	}
+	INIT_LIST_HEAD(&fc_security_user_list);
 	error = pci_register_driver(&lpfc_driver);
 	if (error)
 		goto out_release_transport;
@@ -2004,6 +2412,8 @@ out_pci_unregister:
 	pci_unregister_driver(&lpfc_driver);
 out_release_transport:
 	fc_release_transport(lpfc_transport_template);
+	fc_release_transport(lpfc_vport_transport_template);
+
 	return error;
 }
 
@@ -2011,7 +2421,15 @@ static void __exit
 lpfc_exit(void)
 {
 	pci_unregister_driver(&lpfc_driver);
+
+	if (fc_nl_sock)
+		sock_release(fc_nl_sock->sk_socket);
+	netlink_unregister_notifier(&lpfc_fc_netlink_notifier);
+	fc_nl_sock = NULL;
+
 	fc_release_transport(lpfc_transport_template);
+	if (lpfc_enable_npiv)
+		fc_release_transport(lpfc_vport_transport_template);
 	lpfc_cdev_exit();
 }
 
diff --git a/drivers/scsi/lpfc/lpfc_ioctl.c b/drivers/scsi/lpfc/lpfc_ioctl.c
index 788e70e..20f0c2e 100644
--- a/drivers/scsi/lpfc/lpfc_ioctl.c
+++ b/drivers/scsi/lpfc/lpfc_ioctl.c
@@ -82,6 +82,7 @@ struct lpfcdfc_host {
 	struct list_head node;
 	int inst;
 	struct lpfc_hba * phba;
+	struct lpfc_vport *vport;
 	struct Scsi_Host * host;
 	struct pci_dev * dev;
 	void (*base_ct_unsol_event)(struct lpfc_hba *,
@@ -110,10 +111,6 @@ struct lpfc_dmabufext {
 };
 
 
-
-static struct lpfc_nodelist *
-lpfc_findnode_wwnn(struct lpfc_hba *, uint32_t,
-					  struct lpfc_name *);
 static void lpfc_ioctl_timeout_iocb_cmpl(struct lpfc_hba *,
 				  struct lpfc_iocbq *, struct lpfc_iocbq *);
 
@@ -123,7 +120,7 @@ dfc_cmd_data_alloc(struct lpfc_hba *, char *,
 static int dfc_cmd_data_free(struct lpfc_hba *, struct lpfc_dmabufext *);
 static int dfc_rsp_data_copy(struct lpfc_hba *, uint8_t *,
 				struct lpfc_dmabufext *,
-		      		uint32_t);
+				uint32_t);
 static int lpfc_issue_ct_rsp(struct lpfc_hba *, uint32_t, struct lpfc_dmabuf *,
 		      struct lpfc_dmabufext *);
 
@@ -140,7 +137,6 @@ lpfc_ioctl_hba_rnid(struct lpfc_hba * phba,
 			struct lpfcCmdInput * cip,
 			void *dataout)
 {
-
 	struct nport_id idn;
 	struct lpfc_sli *psli;
 	struct lpfc_iocbq *cmdiocbq = NULL;
@@ -152,7 +148,6 @@ lpfc_ioctl_hba_rnid(struct lpfc_hba * phba,
 	struct lpfc_sli_ring *pring;
 	void *context2;
 	int i0;
-	unsigned long iflag;
 	int rtnbfrsiz;
 	struct lpfc_nodelist *pndl;
 	int rc = 0;
@@ -167,13 +162,11 @@ lpfc_ioctl_hba_rnid(struct lpfc_hba * phba,
 	}
 
 	if (idn.idType == LPFC_WWNN_TYPE)
-		pndl = lpfc_findnode_wwnn(phba,
-					NLP_SEARCH_MAPPED | NLP_SEARCH_UNMAPPED,
-					(struct lpfc_name *) idn.wwpn);
+		pndl = lpfc_findnode_wwnn(phba->pport,
+					  (struct lpfc_name *) idn.wwpn);
 	else
-		pndl = lpfc_findnode_wwpn(phba,
-					NLP_SEARCH_MAPPED | NLP_SEARCH_UNMAPPED,
-					(struct lpfc_name *) idn.wwpn);
+		pndl = lpfc_findnode_wwpn(phba->pport,
+					  (struct lpfc_name *) idn.wwpn);
 
 	if (!pndl)
 		return ENODEV;
@@ -189,8 +182,8 @@ lpfc_ioctl_hba_rnid(struct lpfc_hba * phba,
 		return EBUSY;
 	}
 
-	cmdiocbq = lpfc_prep_els_iocb(phba, 1, (2 * sizeof(uint32_t)), 0, pndl,
-				      pndl->nlp_DID, ELS_CMD_RNID);
+	cmdiocbq = lpfc_prep_els_iocb(phba->pport, 1, (2 * sizeof(uint32_t)), 0,
+				      pndl, pndl->nlp_DID, ELS_CMD_RNID);
 	if (!cmdiocbq)
 		return ENOMEM;
 
@@ -203,11 +196,8 @@ lpfc_ioctl_hba_rnid(struct lpfc_hba * phba,
 	 */
 	context2 = cmdiocbq->context2;
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-
 	if ((rspiocbq = lpfc_sli_get_iocbq(phba)) == NULL) {
 		rc = ENOMEM;
-		spin_unlock_irqrestore(phba->host->host_lock, iflag);
 		goto sndrndqwt;
 	}
 	rsp = &(rspiocbq->iocb);
@@ -234,12 +224,9 @@ lpfc_ioctl_hba_rnid(struct lpfc_hba * phba,
 		lpfc_sli_release_iocbq(phba, rspiocbq);
 		cmdiocbq->context1 = NULL;
 		cmdiocbq->iocb_cmpl = lpfc_ioctl_timeout_iocb_cmpl;
-		spin_unlock_irqrestore(phba->host->host_lock, iflag);
 		return EIO;
 	}
 
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
-
 	if (rc != IOCB_SUCCESS) {
 		rc = EIO;
 		goto sndrndqwt;
@@ -288,12 +275,9 @@ sndrndqwt:
 	if (cmdiocbq)
 		lpfc_els_free_iocb(phba, cmdiocbq);
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-
 	if (rspiocbq)
 		lpfc_sli_release_iocbq(phba, rspiocbq);
 
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
 	return rc;
 }
 
@@ -303,16 +287,12 @@ lpfc_ioctl_timeout_iocb_cmpl(struct lpfc_hba * phba,
 			     struct lpfc_iocbq * rsp_iocb_q)
 {
 	struct lpfc_timedout_iocb_ctxt *iocb_ctxt = cmd_iocb_q->context1;
-	unsigned long iflag;
 
 	if (!iocb_ctxt) {
 		if (cmd_iocb_q->context2)
 			lpfc_els_free_iocb(phba, cmd_iocb_q);
-		else {
-			spin_lock_irqsave(phba->host->host_lock, iflag);
+		else
 			lpfc_sli_release_iocbq(phba,cmd_iocb_q);
-			spin_unlock_irqrestore(phba->host->host_lock, iflag);
-		}
 		return;
 	}
 
@@ -336,12 +316,10 @@ lpfc_ioctl_timeout_iocb_cmpl(struct lpfc_hba * phba,
 		kfree(iocb_ctxt->bmp);
 	}
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
 	lpfc_sli_release_iocbq(phba,cmd_iocb_q);
 
 	if (iocb_ctxt->rspiocbq)
 			lpfc_sli_release_iocbq(phba, iocb_ctxt->rspiocbq);
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
 
 	kfree(iocb_ctxt);
 }
@@ -351,6 +329,7 @@ static int
 lpfc_ioctl_send_els(struct lpfc_hba * phba,
 		    struct lpfcCmdInput * cip, void *dataout)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(phba->pport);
 	struct lpfc_sli *psli = &phba->sli;
 	struct lpfc_sli_ring *pring = &psli->ring[LPFC_ELS_RING];
 	struct lpfc_iocbq *cmdiocbq, *rspiocbq;
@@ -377,53 +356,48 @@ lpfc_ioctl_send_els(struct lpfc_hba * phba,
 			   sizeof(struct nport_id)))
 		return EIO;
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-	if ((rspiocbq = lpfc_sli_get_iocbq(phba)) == NULL) {
-		spin_unlock_irqrestore(phba->host->host_lock, iflag);
+	if ((rspiocbq = lpfc_sli_get_iocbq(phba)) == NULL)
 		return ENOMEM;
-	}
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
 
 	rsp = &rspiocbq->iocb;
 
 	if (destID.idType == 0)
-		pndl = lpfc_findnode_wwpn(phba, NLP_SEARCH_ALL,
+		pndl = lpfc_findnode_wwpn(phba->pport,
 					  (struct lpfc_name *)&destID.wwpn);
 	else {
 		destID.d_id = (destID.d_id & Mask_DID);
-		pndl = lpfc_findnode_did(phba, NLP_SEARCH_ALL, destID.d_id);
+		pndl = lpfc_findnode_did(phba->pport, destID.d_id);
 	}
 
 	if (pndl == NULL) {
 		if (destID.idType == 0) {
-			spin_lock_irqsave(phba->host->host_lock, iflag);
+			spin_lock_irqsave(shost->host_lock, iflag);
 			lpfc_sli_release_iocbq(phba, rspiocbq);
-			spin_unlock_irqrestore(phba->host->host_lock, iflag);
+			spin_unlock_irqrestore(shost->host_lock, iflag);
 			return ENODEV;
 		}
 		pndl = kmalloc(sizeof (struct lpfc_nodelist), GFP_KERNEL);
 		if (!pndl) {
-			spin_lock_irqsave(phba->host->host_lock, iflag);
+			spin_lock_irqsave(shost->host_lock, iflag);
 			lpfc_sli_release_iocbq(phba, rspiocbq);
-			spin_unlock_irqrestore(phba->host->host_lock, iflag);
+			spin_unlock_irqrestore(shost->host_lock, iflag);
 			return ENODEV;
 		}
-		lpfc_nlp_init(phba, pndl, destID.d_id);
+		lpfc_nlp_init(phba->pport, pndl, destID.d_id);
+		lpfc_nlp_set_state(phba->pport, pndl, NLP_STE_NPR_NODE);
 		new_pndl = 1;
 	} else
 		rpi = pndl->nlp_rpi;
 
 
-	cmdiocbq = lpfc_prep_els_iocb(phba, 1, cmdsize, 0, pndl,
+	cmdiocbq = lpfc_prep_els_iocb(phba->pport, 1, cmdsize, 0, pndl,
 				      pndl->nlp_DID, elscmd);
 
 	if (new_pndl)
 		kfree(pndl);
 
 	if (cmdiocbq == NULL) {
-		spin_lock_irqsave(phba->host->host_lock, iflag);
 		lpfc_sli_release_iocbq(phba, rspiocbq);
-		spin_unlock_irqrestore(phba->host->host_lock, iflag);
 		return EIO;
 	}
 
@@ -447,9 +421,9 @@ lpfc_ioctl_send_els(struct lpfc_hba * phba,
 					     bpl, cmdsize);
 		if (!pcmdext) {
 			lpfc_els_free_iocb(phba, cmdiocbq);
-			spin_lock_irqsave(phba->host->host_lock, iflag);
+			spin_lock_irqsave(shost->host_lock, iflag);
 			lpfc_sli_release_iocbq(phba, rspiocbq);
-			spin_unlock_irqrestore(phba->host->host_lock, iflag);
+			spin_unlock_irqrestore(shost->host_lock, iflag);
 			return ENOMEM;
 		}
 		bpl += pcmdext->flag;
@@ -457,9 +431,9 @@ lpfc_ioctl_send_els(struct lpfc_hba * phba,
 		if (!prspext) {
 			dfc_cmd_data_free(phba, pcmdext);
 			lpfc_els_free_iocb(phba, cmdiocbq);
-			spin_lock_irqsave(phba->host->host_lock, iflag);
+			spin_lock_irqsave(shost->host_lock, iflag);
 			lpfc_sli_release_iocbq(phba, rspiocbq);
-			spin_unlock_irqrestore(phba->host->host_lock, iflag);
+			spin_unlock_irqrestore(shost->host_lock, iflag);
 			return ENOMEM;
 		}
 	} else {
@@ -468,9 +442,9 @@ lpfc_ioctl_send_els(struct lpfc_hba * phba,
 				   (void __user *) cip->lpfc_arg2,
 				   cmdsize)) {
 			lpfc_els_free_iocb(phba, cmdiocbq);
-			spin_lock_irqsave(phba->host->host_lock, iflag);
+			spin_lock_irqsave(shost->host_lock, iflag);
 			lpfc_sli_release_iocbq(phba, rspiocbq);
-			spin_unlock_irqrestore(phba->host->host_lock, iflag);
+			spin_unlock_irqrestore(shost->host_lock, iflag);
 			return EIO;
 		}
 	}
@@ -480,12 +454,10 @@ lpfc_ioctl_send_els(struct lpfc_hba * phba,
 	cmdiocbq->context1 = NULL;
 	cmdiocbq->context2 = NULL;
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
 	iocb_status = lpfc_sli_issue_iocb_wait(phba, pring, cmdiocbq, rspiocbq,
 				      (phba->fc_ratov*2) + LPFC_DRVR_TIMEOUT);
 	rc = iocb_status;
 
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
 	if (rc == IOCB_SUCCESS) {
 		if (rsp->ulpStatus == IOSTAT_SUCCESS) {
 			if (rspsize < (rsp->un.ulpWord[0] & 0xffffff)) {
@@ -549,9 +521,9 @@ lpfc_ioctl_send_els(struct lpfc_hba * phba,
 	if (iocb_status != IOCB_TIMEDOUT)
 		lpfc_els_free_iocb(phba, cmdiocbq);
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
+	spin_lock_irqsave(shost->host_lock, iflag);
 	lpfc_sli_release_iocbq(phba, rspiocbq);
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
+	spin_unlock_irqrestore(shost->host_lock, iflag);
 	return rc;
 }
 
@@ -559,6 +531,7 @@ static int
 lpfc_ioctl_send_mgmt_rsp(struct lpfc_hba * phba,
 			 struct lpfcCmdInput * cip)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(phba->pport);
 	struct ulp_bde64 *bpl;
 	struct lpfc_dmabuf *bmp = NULL;
 	struct lpfc_dmabufext *indmp = NULL;
@@ -577,9 +550,9 @@ lpfc_ioctl_send_mgmt_rsp(struct lpfc_hba * phba,
 		rc = ENOMEM;
 		goto send_mgmt_rsp_exit;
 	}
-	spin_lock_irqsave(phba->host->host_lock, iflag);
+	spin_lock_irqsave(shost->host_lock, iflag);
 	bmp->virt = lpfc_mbuf_alloc(phba, 0, &bmp->phys);
-	spin_unlock_irqrestore(phba->host->host_lock, iflag); /* remove */
+	spin_unlock_irqrestore(shost->host_lock, iflag); /* remove */
 	if (!bmp->virt) {
 		rc = ENOMEM;
 		goto send_mgmt_rsp_free_bmp;
@@ -614,6 +587,7 @@ static int
 lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 			 struct lpfcCmdInput * cip, void *dataout)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(phba->pport);
 	struct lpfc_nodelist *pndl = NULL;
 	struct ulp_bde64 *bpl = NULL;
 	struct lpfc_name findwwn;
@@ -643,6 +617,10 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 		goto send_mgmt_cmd_exit;
 	}
 
+	if (phba->pport->port_state != LPFC_VPORT_READY) {
+		rc = ENODEV;
+		goto send_mgmt_cmd_exit;
+	}
 
 	if (cip->lpfc_cmd == LPFC_HBA_SEND_MGMT_CMD) {
 		rc = copy_from_user(&findwwn, (void __user *)cip->lpfc_arg3,
@@ -651,14 +629,12 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 			rc = EIO;
 			goto send_mgmt_cmd_exit;
 		}
-		pndl = lpfc_findnode_wwpn(phba, NLP_SEARCH_MAPPED |
-			 NLP_SEARCH_UNMAPPED, &findwwn);
+		pndl = lpfc_findnode_wwpn(phba->pport, &findwwn);
 	} else {
 		finddid = (uint32_t)(unsigned long)cip->lpfc_arg3;
-		pndl = lpfc_findnode_did(phba, NLP_SEARCH_MAPPED |
-					NLP_SEARCH_UNMAPPED, finddid);
+		pndl = lpfc_findnode_did(phba->pport, finddid);
 		if (!pndl) {
-			if (phba->fc_flag & FC_FABRIC) {
+			if (phba->pport->fc_flag & FC_FABRIC) {
 				pndl = kmalloc(sizeof (struct lpfc_nodelist),
 						GFP_KERNEL);
 				if (!pndl) {
@@ -668,12 +644,11 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 
 				memset(pndl, 0, sizeof (struct lpfc_nodelist));
 				pndl->nlp_DID = finddid;
-				lpfc_nlp_init(phba, pndl, finddid);
-				pndl->nlp_state = NLP_STE_PLOGI_ISSUE;
-				lpfc_nlp_list(phba, pndl, NLP_PLOGI_LIST);
-				if (lpfc_issue_els_plogi(phba,
-							pndl->nlp_DID, 0)) {
-					lpfc_nlp_list(phba, pndl, NLP_JUST_DQ);
+				lpfc_nlp_init(phba->pport, pndl, finddid);
+				lpfc_nlp_set_state(phba->pport,
+					pndl, NLP_STE_PLOGI_ISSUE);
+				if (lpfc_issue_els_plogi(phba->pport,
+							 pndl->nlp_DID, 0)) {
 					kfree(pndl);
 					rc = ENODEV;
 					goto send_mgmt_cmd_exit;
@@ -681,10 +656,8 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 
 				/* Allow the node to complete discovery */
 				while ((i0++ < 4) &&
-					! (pndl = lpfc_findnode_did(phba,
-							NLP_SEARCH_MAPPED |
-							NLP_SEARCH_UNMAPPED,
-							 finddid))) {
+					! (pndl = lpfc_findnode_did(phba->pport,
+								    finddid))) {
 					msleep(500);
 				}
 
@@ -710,11 +683,11 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 		goto send_mgmt_cmd_exit;
 	}
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	cmdiocbq = lpfc_sli_get_iocbq(phba);
 	if (!cmdiocbq) {
 		rc = ENOMEM;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		goto send_mgmt_cmd_exit;
 	}
 	cmd = &cmdiocbq->iocb;
@@ -724,31 +697,31 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 		rc = ENOMEM;
 		goto send_mgmt_cmd_free_cmdiocbq;
 	}
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 
 	rsp = &rspiocbq->iocb;
 
 	bmp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
 	if (!bmp) {
 		rc = ENOMEM;
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		goto send_mgmt_cmd_free_rspiocbq;
 	}
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	bmp->virt = lpfc_mbuf_alloc(phba, 0, &bmp->phys);
 	if (!bmp->virt) {
 		rc = ENOMEM;
 		goto send_mgmt_cmd_free_bmp;
 	}
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 
 	INIT_LIST_HEAD(&bmp->list);
 	bpl = (struct ulp_bde64 *) bmp->virt;
 	indmp = dfc_cmd_data_alloc(phba, cip->lpfc_arg1, bpl, reqbfrcnt);
 	if (!indmp) {
 		rc = ENOMEM;
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		goto send_mgmt_cmd_free_bmpvirt;
 	}
 
@@ -758,7 +731,7 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 	outdmp = dfc_cmd_data_alloc(phba, NULL, bpl, snsbfrcnt);
 	if (!outdmp) {
 		rc = ENOMEM;
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		goto send_mgmt_cmd_free_indmp;
 	}
 
@@ -778,6 +751,7 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 	cmd->ulpClass = CLASS3;
 	cmd->ulpContext = pndl->nlp_rpi;
 	cmd->ulpOwner = OWN_CHIP;
+	cmdiocbq->vport = phba->pport;
 	cmdiocbq->context1 = NULL;
 	cmdiocbq->context2 = NULL;
 	cmdiocbq->iocb_flag |= LPFC_IO_LIBDFC;
@@ -789,15 +763,13 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 
 	cmd->ulpTimeout = timeout;
 
-	spin_lock_irq(phba->host->host_lock);
 	rc = lpfc_sli_issue_iocb_wait(phba, pring, cmdiocbq, rspiocbq,
 					timeout + LPFC_DRVR_TIMEOUT);
-	spin_unlock_irq(phba->host->host_lock);
 
 	if (rc == IOCB_TIMEDOUT) {
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		lpfc_sli_release_iocbq(phba, rspiocbq);
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		iocb_ctxt = kmalloc(sizeof(struct lpfc_timedout_iocb_ctxt),
 				    GFP_KERNEL);
 		if (!iocb_ctxt)
@@ -842,12 +814,9 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 	/* Copy back response data */
 	if (outdmp->flag > snsbfrcnt) {
 		rc = ERANGE;
-		lpfc_printf_log(phba,
-				KERN_INFO,
-				LOG_LIBDFC,
-			       "%d:1209 C_CT Request error Data: x%x x%x\n",
-				phba->brd_no,
-			       outdmp->flag, 4096);
+		lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
+				"1209 C_CT Request error Data: x%x x%x\n",
+				outdmp->flag, 4096);
 		goto send_mgmt_cmd_free_outdmp;
 	}
 
@@ -858,7 +827,7 @@ lpfc_ioctl_send_mgmt_cmd(struct lpfc_hba * phba,
 		rc = EIO;
 
 send_mgmt_cmd_free_outdmp:
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	dfc_cmd_data_free(phba, outdmp);
 send_mgmt_cmd_free_indmp:
 	dfc_cmd_data_free(phba, indmp);
@@ -870,7 +839,7 @@ send_mgmt_cmd_free_rspiocbq:
 	lpfc_sli_release_iocbq(phba, rspiocbq);
 send_mgmt_cmd_free_cmdiocbq:
 	lpfc_sli_release_iocbq(phba, cmdiocbq);
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 send_mgmt_cmd_exit:
 	return rc;
 }
@@ -1096,6 +1065,7 @@ static int
 lpfc_ioctl_loopback_mode(struct lpfc_hba *phba,
 		   struct lpfcCmdInput  *cip, void *dataout)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(phba->pport);
 	struct lpfc_sli *psli = &phba->sli;
 	struct lpfc_sli_ring *pring = &psli->ring[LPFC_FCP_RING];
 	uint32_t link_flags = cip->lpfc_arg4;
@@ -1105,18 +1075,18 @@ lpfc_ioctl_loopback_mode(struct lpfc_hba *phba,
 	int i = 0;
 	int rc = 0;
 
-	if ((phba->hba_state == LPFC_HBA_ERROR) ||
-	    (phba->fc_flag & FC_BLOCK_MGMT_IO) ||
+	if ((phba->link_state == LPFC_HBA_ERROR) ||
+	    (psli->sli_flag & LPFC_BLOCK_MGMT_IO) ||
 	    (!(psli->sli_flag & LPFC_SLI2_ACTIVE)))
 		return EACCES;
 
 	if ((pmboxq = mempool_alloc(phba->mbox_mem_pool,GFP_KERNEL)) == 0)
 		return ENOMEM;
 
-	scsi_block_requests(phba->host);
+	scsi_block_requests(shost);
 
 	while (pring->txcmplq_cnt) {
-		if (i++ > 500) 	/* wait up to 5 seconds */
+		if (i++ > 500)	/* wait up to 5 seconds */
 			break;
 
 		mdelay(10);
@@ -1132,7 +1102,7 @@ lpfc_ioctl_loopback_mode(struct lpfc_hba *phba,
 
 		/* wait for link down before proceeding */
 		i = 0;
-		while (phba->hba_state != LPFC_LINK_DOWN) {
+		while (phba->link_state != LPFC_LINK_DOWN) {
 			if (i++ > timeout) {
 				rc = ETIMEDOUT;
 				goto loopback_mode_exit;
@@ -1156,12 +1126,12 @@ lpfc_ioctl_loopback_mode(struct lpfc_hba *phba,
 		if ((mbxstatus != MBX_SUCCESS) || (pmboxq->mb.mbxStatus))
 			rc = ENODEV;
 		else {
-			phba->fc_flag |= FC_LOOPBACK_MODE;
+			phba->link_flag |= LS_LOOPBACK_MODE;
 			/* wait for the link attention interrupt */
 			msleep(100);
 
 			i = 0;
-			while (phba->hba_state != LPFC_HBA_READY) {
+			while (phba->link_state != LPFC_HBA_READY) {
 				if (i++ > timeout) {
 					rc = ETIMEDOUT;
 					break;
@@ -1173,14 +1143,12 @@ lpfc_ioctl_loopback_mode(struct lpfc_hba *phba,
 		rc = ENODEV;
 
 loopback_mode_exit:
-	scsi_unblock_requests(phba->host);
+	scsi_unblock_requests(shost);
 
 	/*
 	 * Let SLI layer release mboxq if mbox command completed after timeout.
 	 */
-	if (mbxstatus == MBX_TIMEOUT)
-		pmboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-	else
+	if (mbxstatus != MBX_TIMEOUT)
 		mempool_free( pmboxq, phba->mbox_mem_pool);
 
 	return rc;
@@ -1196,8 +1164,8 @@ static int lpfcdfc_loop_self_reg(struct lpfc_hba *phba, uint16_t * rpi)
 	if (mbox == NULL)
 		return ENOMEM;
 
-	status = lpfc_reg_login(phba, phba->fc_myDID,
-				(uint8_t *)&phba->fc_sparam, mbox, 0);
+	status = lpfc_reg_login(phba, 0, phba->pport->fc_myDID,
+				(uint8_t *)&phba->pport->fc_sparam, mbox, 0);
 	if (status) {
 		mempool_free(mbox, phba->mbox_mem_pool);
 		return ENOMEM;
@@ -1210,9 +1178,7 @@ static int lpfcdfc_loop_self_reg(struct lpfc_hba *phba, uint16_t * rpi)
 	if ((status != MBX_SUCCESS) || (mbox->mb.mbxStatus)) {
 		lpfc_mbuf_free(phba, dmabuff->virt, dmabuff->phys);
 		kfree(dmabuff);
-		if (status == MBX_TIMEOUT)
-			mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-		else
+		if (status != MBX_TIMEOUT)
 			mempool_free(mbox, phba->mbox_mem_pool);
 		return ENODEV;
 	}
@@ -1236,13 +1202,11 @@ static int lpfcdfc_loop_self_unreg(struct lpfc_hba *phba, uint16_t rpi)
 	if (mbox == NULL)
 		return ENOMEM;
 
-	lpfc_unreg_login(phba, rpi, mbox);
+	lpfc_unreg_login(phba, 0, rpi, mbox);
 	status = lpfc_sli_issue_mbox_wait(phba, mbox, LPFC_MBOX_TMO);
 
 	if ((status != MBX_SUCCESS) || (mbox->mb.mbxStatus)) {
-		if (status == MBX_TIMEOUT)
-			mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-		else
+		if (status != MBX_TIMEOUT)
 			mempool_free(mbox, phba->mbox_mem_pool);
 		return EIO;
 	}
@@ -1284,10 +1248,8 @@ static int lpfcdfc_loop_get_xri(struct lpfc_hba *phba, uint16_t rpi,
 	evt = lpfcdfc_event_new(FC_REG_CT_EVENT, current->pid,
 				SLI_CT_ELX_LOOPBACK);
 
-	spin_lock_irq(phba->host->host_lock);
 	cmdiocbq = lpfc_sli_get_iocbq(phba);
 	rspiocbq = lpfc_sli_get_iocbq(phba);
-	spin_unlock_irq(phba->host->host_lock);
 
 	dmabuf = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
 	if (dmabuf) {
@@ -1348,12 +1310,11 @@ static int lpfcdfc_loop_get_xri(struct lpfc_hba *phba, uint16_t rpi,
 	cmd->ulpContext = rpi;
 
 	cmdiocbq->iocb_flag |= LPFC_IO_LIBDFC;
+	cmdiocbq->vport = phba->pport;
 
-	spin_lock_irq(phba->host->host_lock);
 	ret_val = lpfc_sli_issue_iocb_wait(phba, pring, cmdiocbq, rspiocbq,
 					   (phba->fc_ratov * 2)
 					   + LPFC_DRVR_TIMEOUT);
-	spin_unlock_irq(phba->host->host_lock);
 	if (ret_val) {
 		lpfcdfc_loop_self_unreg(phba, rpi);
 		goto err_get_xri_exit;
@@ -1392,12 +1353,10 @@ err_get_xri_exit:
 		kfree(dmabuf);
 	}
 
-	spin_lock_irq(phba->host->host_lock);
 	if (cmdiocbq && (ret_val != IOCB_TIMEDOUT))
 		lpfc_sli_release_iocbq(phba, cmdiocbq);
 	if (rspiocbq)
 		lpfc_sli_release_iocbq(phba, rspiocbq);
-	spin_unlock_irq(phba->host->host_lock);
 
 	return ret_val;
 }
@@ -1419,10 +1378,7 @@ static int lpfcdfc_loop_post_rxbufs(struct lpfc_hba *phba, uint16_t rxxri,
 	int ret_val = 0;
 	int i = 0;
 
-	spin_lock_irq(phba->host->host_lock);
 	cmdiocbq = lpfc_sli_get_iocbq(phba);
-	spin_unlock_irq(phba->host->host_lock);
-
 	rxbmp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
 	if (rxbmp != NULL) {
 		rxbmp->virt = lpfc_mbuf_alloc(phba, 0, &rxbmp->phys);
@@ -1464,9 +1420,7 @@ static int lpfcdfc_loop_post_rxbufs(struct lpfc_hba *phba, uint16_t rxxri,
 		cmd->ulpClass = CLASS3;
 		cmd->ulpContext = rxxri;
 
-		spin_lock_irq(phba->host->host_lock);
 		ret_val = lpfc_sli_issue_iocb(phba, pring, cmdiocbq, 0);
-		spin_unlock_irq(phba->host->host_lock);
 
 		if (ret_val == IOCB_ERROR) {
 			dfc_cmd_data_free(phba, (struct lpfc_dmabufext *)mp[0]);
@@ -1478,7 +1432,6 @@ static int lpfcdfc_loop_post_rxbufs(struct lpfc_hba *phba, uint16_t rxxri,
 			goto err_post_rxbufs_exit;
 		}
 
-		spin_lock_irq(phba->host->host_lock);
 		lpfc_sli_ringpostbuf_put(phba, pring, mp[0]);
 		if (mp[1]) {
 			lpfc_sli_ringpostbuf_put(phba, pring, mp[1]);
@@ -1489,10 +1442,8 @@ static int lpfcdfc_loop_post_rxbufs(struct lpfc_hba *phba, uint16_t rxxri,
 		if ((cmdiocbq = lpfc_sli_get_iocbq(phba)) == NULL) {
 			dmp = list_entry(next, struct lpfc_dmabuf, list);
 			ret_val = EIO;
-			spin_unlock_irq(phba->host->host_lock);
 			goto err_post_rxbufs_exit;
 		}
-		spin_unlock_irq(phba->host->host_lock);
 		cmd = &cmdiocbq->iocb;
 		i = 0;
 	}
@@ -1506,10 +1457,8 @@ err_post_rxbufs_exit:
 		kfree(rxbmp);
 	}
 
-	spin_lock_irq(phba->host->host_lock);
 	if (cmdiocbq)
 		lpfc_sli_release_iocbq(phba, cmdiocbq);
-	spin_unlock_irq(phba->host->host_lock);
 
 	return ret_val;
 }
@@ -1540,8 +1489,8 @@ lpfc_ioctl_loopback_test(struct lpfc_hba *phba,
 	uint8_t *ptr = NULL, *rx_databuf = NULL;
 	int rc = 0;
 
-	if ((phba->hba_state == LPFC_HBA_ERROR) ||
-	    (phba->fc_flag & FC_BLOCK_MGMT_IO) ||
+	if ((phba->link_state == LPFC_HBA_ERROR) ||
+	    (psli->sli_flag & LPFC_BLOCK_MGMT_IO) ||
 	    (!(psli->sli_flag & LPFC_SLI2_ACTIVE)))
 		return EACCES;
 
@@ -1575,11 +1524,8 @@ lpfc_ioctl_loopback_test(struct lpfc_hba *phba,
 	evt = lpfcdfc_event_new(FC_REG_CT_EVENT, current->pid,
 				SLI_CT_ELX_LOOPBACK);
 
-	spin_lock_irq(phba->host->host_lock);
 	cmdiocbq = lpfc_sli_get_iocbq(phba);
 	rspiocbq = lpfc_sli_get_iocbq(phba);
-	spin_unlock_irq(phba->host->host_lock);
-
 	txbmp = kmalloc(sizeof (struct lpfc_dmabuf), GFP_KERNEL);
 
 	if (txbmp) {
@@ -1655,11 +1601,10 @@ lpfc_ioctl_loopback_test(struct lpfc_hba *phba,
 	cmd->ulpContext = txxri;
 
 	cmdiocbq->iocb_flag |= LPFC_IO_LIBDFC;
+	cmdiocbq->vport = phba->pport;
 
-	spin_lock_irq(phba->host->host_lock);
 	rc = lpfc_sli_issue_iocb_wait(phba, pring, cmdiocbq, rspiocbq,
 				      (phba->fc_ratov * 2) + LPFC_DRVR_TIMEOUT);
-	spin_unlock_irq(phba->host->host_lock);
 
 	if ((rc != IOCB_SUCCESS) || (rsp->ulpStatus != IOCB_SUCCESS)) {
 		rc = EIO;
@@ -1701,13 +1646,11 @@ lpfc_ioctl_loopback_test(struct lpfc_hba *phba,
 err_loopback_test_exit:
 	lpfcdfc_loop_self_unreg(phba, rpi);
 
-	spin_lock_irq(phba->host->host_lock);
 	if ((rc != IOCB_TIMEDOUT) && (cmdiocbq != NULL))
 		lpfc_sli_release_iocbq(phba, cmdiocbq);
 
 	if(rspiocbq != NULL)
 		lpfc_sli_release_iocbq(phba, rspiocbq);
-	spin_unlock_irq(phba->host->host_lock);
 
 	if (txbmp != NULL) {
 		if (txbpl != NULL) {
@@ -1773,10 +1716,8 @@ lpfc_issue_ct_rsp(struct lpfc_hba * phba, uint32_t tag,
 	struct lpfc_iocbq *ctiocb;
 	struct lpfc_sli_ring *pring;
 	uint32_t num_entry;
-	unsigned long iflag;
 	int rc = 0;
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
 	psli = &phba->sli;
 	pring = &psli->ring[LPFC_ELS_RING];
 	num_entry = inp->flag;
@@ -1812,16 +1753,13 @@ lpfc_issue_ct_rsp(struct lpfc_hba * phba, uint32_t tag,
 	icmd->ulpTimeout = phba->fc_ratov * 2;
 
 	/* Xmit CT response on exchange <xid> */
-	lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_ELS,
-			"%d:1200 Xmit CT response on exchange x%x Data: x%x "
-			"x%x\n",
-			phba->brd_no,
-			icmd->ulpContext, icmd->ulpIoTag, phba->hba_state);
+	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
+			"1200 Xmit CT response on exchange x%x Data: x%x x%x\n",
+			icmd->ulpContext, icmd->ulpIoTag, phba->link_state);
 
 	ctiocb->iocb_cmpl = NULL;
 	ctiocb->iocb_flag |= LPFC_IO_LIBDFC;
+	ctiocb->vport = phba->pport;
 	rc = lpfc_sli_issue_iocb_wait(phba, pring, ctiocb, NULL,
 				     phba->fc_ratov * 2 + LPFC_DRVR_TIMEOUT);
 
@@ -1829,7 +1767,6 @@ lpfc_issue_ct_rsp(struct lpfc_hba * phba, uint32_t tag,
 		ctiocb->context1 = NULL;
 		ctiocb->context2 = NULL;
 		ctiocb->iocb_cmpl = lpfc_ioctl_timeout_iocb_cmpl;
-		spin_unlock_irqrestore(phba->host->host_lock, iflag);
 		return rc;
 	}
 
@@ -1839,52 +1776,10 @@ lpfc_issue_ct_rsp(struct lpfc_hba * phba, uint32_t tag,
 
 	lpfc_sli_release_iocbq(phba, ctiocb);
 issue_ct_rsp_exit:
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
 	return rc;
 }
 
 
-
-/* Search for a nodelist entry on a specific list */
-static struct lpfc_nodelist *
-lpfc_findnode_wwnn(struct lpfc_hba * phba, uint32_t order,
-		   struct lpfc_name * wwnn)
-{
-	struct lpfc_nodelist *ndlp;
-	struct list_head * lists[]={&phba->fc_nlpunmap_list,
-				    &phba->fc_nlpmap_list};
-	uint32_t search[]={NLP_SEARCH_UNMAPPED, NLP_SEARCH_MAPPED};
-	uint32_t data1;
-	int i;
-
-	spin_lock_irq(phba->host->host_lock);
-	for (i = 0; i < ARRAY_SIZE(lists); i++ ) {
-		if (!(order & search[i]))
-			continue;
-		list_for_each_entry(ndlp, lists[i], nlp_listp)
-			if (memcmp(&ndlp->nlp_nodename, wwnn,
-				   sizeof(struct lpfc_name)) == 0) {
-				spin_unlock_irq(phba->host->host_lock);
-				data1 = (((uint32_t) ndlp->nlp_state << 24) |
-					 ((uint32_t) ndlp->nlp_xri << 16) |
-					 ((uint32_t) ndlp->nlp_type << 8) |
-					 ((uint32_t) ndlp->nlp_rpi & 0xff));
-				/* FIND node DID unmapped */
-				lpfc_printf_log(phba,
-						KERN_INFO,
-						LOG_NODE,
-						"%d:0911 FIND node by WWNN"
-						" Data: x%p x%x x%x x%x\n",
-						phba->brd_no,
-						ndlp, ndlp->nlp_DID,
-						ndlp->nlp_flag, data1);
-				return ndlp;
-			}
-	}
-	spin_unlock_irq(phba->host->host_lock);
-	return NULL;
-}
-
 static void
 lpfcdfc_ct_unsol_event(struct lpfc_hba * phba,
 			    struct lpfc_sli_ring * pring,
@@ -1904,26 +1799,31 @@ lpfcdfc_ct_unsol_event(struct lpfc_hba * phba,
 	struct ulp_bde64 * bde;
 	dma_addr_t dma_addr;
 	int i;
+	struct lpfc_dmabuf *bdeBuf1 = piocbq->context2;
+	struct lpfc_dmabuf *bdeBuf2 = piocbq->context3;
 
 	BUG_ON(&dfchba->node == &lpfcdfc_hosts);
 	INIT_LIST_HEAD(&head);
 	evt_type = FC_REG_CT_EVENT;
-	if (piocbq->iocb.ulpBdeCount > 0
-	    && piocbq->iocb.un.cont64[0].tus.f.bdeSize > 0)
-	{
+	if (piocbq->iocb.ulpBdeCount == 0 ||
+	    piocbq->iocb.un.cont64[0].tus.f.bdeSize == 0)
+		goto error_unsol_ct_exit;
+
+	if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED)
+		dmabuf = bdeBuf1;
+	else {
 		dma_addr = getPaddr(piocbq->iocb.un.cont64[0].addrHigh,
 				    piocbq->iocb.un.cont64[0].addrLow);
 		dmabuf = lpfc_sli_ringpostbuf_get(phba, pring, dma_addr);
-		BUG_ON(dmabuf == NULL);
-		evt_req_id =
-			((struct lpfc_sli_ct_request *)(dmabuf->virt))->FsType;
-		cmd = ((struct lpfc_sli_ct_request *)
+	}
+	BUG_ON(dmabuf == NULL);
+	evt_req_id = ((struct lpfc_sli_ct_request *)(dmabuf->virt))->FsType;
+	cmd = ((struct lpfc_sli_ct_request *)
 			(dmabuf->virt))->CommandResponse.bits.CmdRsp;
-		len = ((struct lpfc_sli_ct_request *)
+	len = ((struct lpfc_sli_ct_request *)
 			(dmabuf->virt))->CommandResponse.bits.Size;
+	if (!(phba->sli3_options & LPFC_SLI3_HBQ_ENABLED))
 		lpfc_sli_ringpostbuf_put(phba, pring, dmabuf);
-	} else
-		goto error_unsol_ct_exit;
 
 	mutex_lock(&lpfcdfc_lock);
 	list_for_each_entry(evt, &dfchba->ev_waiters, node) {
@@ -1942,7 +1842,6 @@ lpfcdfc_ct_unsol_event(struct lpfc_hba * phba,
 
 		INIT_LIST_HEAD(&head);
 		list_add_tail(&head, &piocbq->list);
-		iocbq = piocbq;
 		list_for_each_entry(iocbq, &head, list)
 			for (i = 0; i < iocbq->iocb.ulpBdeCount; i++)
 				evt_dat->len +=
@@ -1957,14 +1856,23 @@ lpfcdfc_ct_unsol_event(struct lpfc_hba * phba,
 			goto error_unsol_ct_exit;
 		}
 
-		iocbq = piocbq;
 		list_for_each_entry(iocbq, &head, list)
 			for (i = 0; i < iocbq->iocb.ulpBdeCount; i++) {
-				bde = &iocbq->iocb.un.cont64[i];
-				dma_addr = getPaddr(bde->addrHigh,
-						    bde->addrLow);
-				dmabuf = lpfc_sli_ringpostbuf_get(phba, pring,
-								  dma_addr);
+				int size;
+				size = iocbq->iocb.un.cont64[i].tus.f.bdeSize;
+				if (phba->sli3_options &
+				    LPFC_SLI3_HBQ_ENABLED) {
+					if (i == 0)
+						dmabuf = bdeBuf1;
+					else if (i == 1)
+						dmabuf = bdeBuf2;
+				} else {
+					bde = &iocbq->iocb.un.cont64[i];
+					dma_addr = getPaddr(bde->addrHigh,
+							    bde->addrLow);
+					dmabuf = lpfc_sli_ringpostbuf_get(phba,
+							pring, dma_addr);
+				}
 				if (dmabuf == NULL) {
 					kfree (evt_dat->data);
 					kfree (evt_dat);
@@ -1974,9 +1882,11 @@ lpfcdfc_ct_unsol_event(struct lpfc_hba * phba,
 					goto error_unsol_ct_exit;
 				}
 				memcpy ((char *)(evt_dat->data) + offset,
-					dmabuf->virt, bde->tus.f.bdeSize);
-				offset += bde->tus.f.bdeSize;
-				if (evt_req_id != SLI_CT_ELX_LOOPBACK)
+					dmabuf->virt, size);
+				offset += size;
+				if (evt_req_id != SLI_CT_ELX_LOOPBACK &&
+				    !(phba->sli3_options &
+				      LPFC_SLI3_HBQ_ENABLED))
 					lpfc_sli_ringpostbuf_put(phba, pring,
 								 dmabuf);
 				else {
@@ -1988,12 +1898,12 @@ lpfcdfc_ct_unsol_event(struct lpfc_hba * phba,
 						break;
 					case ELX_LOOPBACK_XRI_SETUP:
 					default:
-						lpfc_post_buffer(phba, pring,
-								1, 1);
-						lpfc_mbuf_free(phba,
-								dmabuf->virt,
-								dmabuf->phys);
-						kfree(dmabuf);
+						if (!(phba->sli3_options &
+						      LPFC_SLI3_HBQ_ENABLED))
+							lpfc_post_buffer(phba,
+									 pring,
+									 1, 1);
+						lpfc_in_buf_free(phba, dmabuf);
 						break;
 					};
 				}
@@ -2158,16 +2068,17 @@ lpfcdfc_host_add (struct pci_dev * dev,
 
 	dfchba->inst = phba->brd_no;
 	dfchba->phba = phba;
+	dfchba->vport = phba->pport;
 	dfchba->host = host;
 	dfchba->dev = dev;
 	dfchba->blocked = 0;
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	prt = phba->sli.ring[LPFC_ELS_RING].prt;
 	dfchba->base_ct_unsol_event = prt[2].lpfc_sli_rcv_unsol_event;
 	prt[2].lpfc_sli_rcv_unsol_event = lpfcdfc_ct_unsol_event;
 	prt[3].lpfc_sli_rcv_unsol_event = lpfcdfc_ct_unsol_event;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 	mutex_lock(&lpfcdfc_lock);
 	list_add_tail(&dfchba->node, &lpfcdfc_hosts);
 	INIT_LIST_HEAD(&dfchba->ev_waiters);
@@ -2201,16 +2112,16 @@ lpfcdfc_host_del (struct lpfcdfc_host *  dfchba)
 	if (dfchba->dev->driver) {
 		host = pci_get_drvdata(dfchba->dev);
 		if ((host != NULL) &&
-		    (struct lpfc_hba*)host->hostdata == dfchba->phba) {
+		    (struct lpfc_vport *)host->hostdata == dfchba->vport) {
 			phba = dfchba->phba;
 			mutex_unlock(&lpfcdfc_lock);
-			spin_lock_irq(phba->host->host_lock);
+			spin_lock_irq(&phba->hbalock);
 			prt = phba->sli.ring[LPFC_ELS_RING].prt;
 			prt[2].lpfc_sli_rcv_unsol_event =
 				dfchba->base_ct_unsol_event;
 			prt[3].lpfc_sli_rcv_unsol_event =
 				dfchba->base_ct_unsol_event;
-			spin_unlock_irq(phba->host->host_lock);
+			spin_unlock_irq(&phba->hbalock);
 			mutex_lock(&lpfcdfc_lock);
 		}
 	}
@@ -2236,15 +2147,14 @@ lpfcdfc_get_phba_by_inst(int inst)
 			if (dfchba->dev->driver) {
 				host = pci_get_drvdata(dfchba->dev);
 				if ((host != NULL) &&
-				    (struct lpfc_hba*)host->hostdata ==
-					dfchba->phba) {
+				    (struct lpfc_vport *)host->hostdata ==
+					dfchba->vport) {
 					mutex_unlock(&lpfcdfc_lock);
 					BUG_ON(dfchba->phba->brd_no != inst);
 					return dfchba;
 				}
 			}
 			mutex_unlock(&lpfcdfc_lock);
-			lpfcdfc_host_del (dfchba);
 			return NULL;
 		}
 	}
@@ -2276,15 +2186,10 @@ lpfcdfc_do_ioctl(struct lpfcCmdInput *cip)
 	};
 
 	if (phba)
-		lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_LIBDFC,
-			"%d:1601 libdfc ioctl entry Data: x%x x%lx x%lx x%x\n",
-			phba->brd_no, cip->lpfc_cmd,
-			(unsigned long) cip->lpfc_arg1,
-			(unsigned long) cip->lpfc_arg2,
-			cip->lpfc_outsz);
-
+		lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
+			"1601 libdfc ioctl entry Data: x%x x%lx x%lx x%x\n",
+			cip->lpfc_cmd, (unsigned long) cip->lpfc_arg1,
+			(unsigned long) cip->lpfc_arg2, cip->lpfc_outsz);
 	mutex_lock(&lpfcdfc_lock);
 	if (dfchba && dfchba->blocked) {
 		mutex_unlock(&lpfcdfc_lock);
@@ -2383,15 +2288,9 @@ lpfcdfc_do_ioctl(struct lpfcCmdInput *cip)
 	}
 
 	if (phba)
-		lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_LIBDFC,
-			"%d:1602 libdfc ioctl exit Data: x%x x%x x%lx\n",
-			cip->lpfc_brd,
-			rc,
-			cip->lpfc_outsz,
-			(unsigned long) cip->lpfc_dataout);
-
+		lpfc_printf_log(phba, KERN_INFO, LOG_LIBDFC,
+			"1602 libdfc ioctl exit Data: x%x x%x x%lx\n",
+			rc, cip->lpfc_outsz, (unsigned long) cip->lpfc_dataout);
 	/* Copy data to user space config method */
 	if (rc == 0) {
 		if (cip->lpfc_outsz) {
diff --git a/drivers/scsi/lpfc/lpfc_ioctl.h b/drivers/scsi/lpfc/lpfc_ioctl.h
index a901122..d434d46 100644
--- a/drivers/scsi/lpfc/lpfc_ioctl.h
+++ b/drivers/scsi/lpfc/lpfc_ioctl.h
@@ -30,6 +30,10 @@
 #ifndef FC_REG_DUMP_EVENT
 #define FC_REG_DUMP_EVENT       0x10    /* Register for Dump events */
 #endif
+#ifndef FC_REG_TEMPERATURE_EVENT
+#define FC_REG_TEMPERATURE_EVENT	0x20    /* Register for temperature
+						   event */
+#endif
 
 #define FC_REG_EVENT_MASK       0xff    /* event mask */
 
@@ -58,7 +62,7 @@ struct DfcRevInfo {
 	uint32_t a_Minor;
 } ;
 
-#define LPFC_WWPN_TYPE 		0
+#define LPFC_WWPN_TYPE		0
 #define LPFC_PORTID_TYPE	1
 #define LPFC_WWNN_TYPE		2
 
@@ -173,3 +177,14 @@ struct lpfc_host_event {
 	enum lpfc_host_event_code event_code;
 	uint32_t data;
 };
+
+#ifdef __KERNEL__
+struct lpfcdfc_host;
+
+/* Initialize/Un-initialize char device */
+int lpfc_cdev_init(void);
+void lpfc_cdev_exit(void);
+void lpfcdfc_host_del(struct lpfcdfc_host *);
+struct lpfcdfc_host *lpfcdfc_host_add(struct pci_dev *, struct Scsi_Host *,
+				      struct lpfc_hba *);
+#endif	/* __KERNEL__ */
diff --git a/drivers/scsi/lpfc/lpfc_logmsg.h b/drivers/scsi/lpfc/lpfc_logmsg.h
index 438cbcd..f9b4efa 100644
--- a/drivers/scsi/lpfc/lpfc_logmsg.h
+++ b/drivers/scsi/lpfc/lpfc_logmsg.h
@@ -26,12 +26,21 @@
 #define LOG_IP                        0x20	/* IP traffic history */
 #define LOG_FCP                       0x40	/* FCP traffic history */
 #define LOG_NODE                      0x80	/* Node table events */
+#define LOG_TEMP                      0x100	/* Temperature sensor events */
 #define LOG_MISC                      0x400	/* Miscellaneous events */
 #define LOG_SLI                       0x800	/* SLI events */
 #define LOG_FCP_ERROR                 0x1000	/* log errors, not underruns */
 #define LOG_LIBDFC                    0x2000	/* Libdfc events */
+#define LOG_VPORT                     0x4000	/* NPIV events */
+#define LOG_SECURITY                  0x8000    /* FC Security */
 #define LOG_ALL_MSG                   0xffff	/* LOG all messages */
 
+#define lpfc_printf_vlog(vport, level, mask, fmt, arg...) \
+	{ if (((mask) &(vport)->cfg_log_verbose) || (level[1] <= '3')) \
+		dev_printk(level, &((vport)->phba->pcidev)->dev, "%d:(%d):" \
+			   fmt, (vport)->phba->brd_no, vport->vpi, ##arg); }
+
 #define lpfc_printf_log(phba, level, mask, fmt, arg...) \
-	{ if (((mask) &(phba)->cfg_log_verbose) || (level[1] <= '3')) \
-		dev_printk(level, &((phba)->pcidev)->dev, fmt, ##arg); }
+	{ if (((mask) &(phba)->pport->cfg_log_verbose) || (level[1] <= '3')) \
+		dev_printk(level, &((phba)->pcidev)->dev, "%d:" \
+			   fmt, phba->brd_no, ##arg); }
diff --git a/drivers/scsi/lpfc/lpfc_mbox.c b/drivers/scsi/lpfc/lpfc_mbox.c
index 2315723..872190d 100644
--- a/drivers/scsi/lpfc/lpfc_mbox.c
+++ b/drivers/scsi/lpfc/lpfc_mbox.c
@@ -82,6 +82,40 @@ lpfc_read_nv(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 }
 
 /**********************************************/
+/*  lpfc_config_async  Issue a                */
+/*  MBX_ASYNCEVT_ENABLE mailbox command       */
+/**********************************************/
+void
+lpfc_config_async(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb,
+		uint32_t ring)
+{
+	MAILBOX_t *mb;
+
+	mb = &pmb->mb;
+	memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
+	mb->mbxCommand = MBX_ASYNCEVT_ENABLE;
+	mb->un.varCfgAsyncEvent.ring = ring;
+	mb->mbxOwner = OWN_HOST;
+	return;
+}
+
+/**********************************************/
+/*  lpfc_heart_beat  Issue a HEART_BEAT       */
+/*                mailbox command             */
+/**********************************************/
+void
+lpfc_heart_beat(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+{
+	MAILBOX_t *mb;
+
+	mb = &pmb->mb;
+	memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
+	mb->mbxCommand = MBX_HEARTBEAT;
+	mb->mbxOwner = OWN_HOST;
+	return;
+}
+
+/**********************************************/
 /*  lpfc_read_la  Issue a READ LA             */
 /*                mailbox command             */
 /**********************************************/
@@ -134,6 +168,7 @@ lpfc_clear_la(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 void
 lpfc_config_link(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 {
+	struct lpfc_vport  *vport = phba->pport;
 	MAILBOX_t *mb = &pmb->mb;
 	memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
 
@@ -147,7 +182,7 @@ lpfc_config_link(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 		mb->un.varCfgLnk.cr_count = phba->cfg_cr_count;
 	}
 
-	mb->un.varCfgLnk.myId = phba->fc_myDID;
+	mb->un.varCfgLnk.myId = vport->fc_myDID;
 	mb->un.varCfgLnk.edtov = phba->fc_edtov;
 	mb->un.varCfgLnk.arbtov = phba->fc_arbtov;
 	mb->un.varCfgLnk.ratov = phba->fc_ratov;
@@ -239,7 +274,7 @@ lpfc_init_link(struct lpfc_hba * phba,
 /*                    mailbox command         */
 /**********************************************/
 int
-lpfc_read_sparam(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb, int vpi)
 {
 	struct lpfc_dmabuf *mp;
 	MAILBOX_t *mb;
@@ -258,11 +293,8 @@ lpfc_read_sparam(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 		kfree(mp);
 		mb->mbxCommand = MBX_READ_SPARM64;
 		/* READ_SPARAM: no buffers */
-		lpfc_printf_log(phba,
-			        KERN_WARNING,
-			        LOG_MBOX,
-			        "%d:0301 READ_SPARAM: no buffers\n",
-			        phba->brd_no);
+		lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX,
+			        "0301 READ_SPARAM: no buffers\n");
 		return (1);
 	}
 	INIT_LIST_HEAD(&mp->list);
@@ -270,6 +302,7 @@ lpfc_read_sparam(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 	mb->un.varRdSparm.un.sp64.tus.f.bdeSize = sizeof (struct serv_parm);
 	mb->un.varRdSparm.un.sp64.addrHigh = putPaddrHigh(mp->phys);
 	mb->un.varRdSparm.un.sp64.addrLow = putPaddrLow(mp->phys);
+	mb->un.varRdSparm.vpi = vpi;
 
 	/* save address for completion */
 	pmb->context1 = mp;
@@ -282,7 +315,8 @@ lpfc_read_sparam(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 /*                  mailbox command         */
 /********************************************/
 void
-lpfc_unreg_did(struct lpfc_hba * phba, uint32_t did, LPFC_MBOXQ_t * pmb)
+lpfc_unreg_did(struct lpfc_hba * phba, uint16_t vpi, uint32_t did,
+	       LPFC_MBOXQ_t * pmb)
 {
 	MAILBOX_t *mb;
 
@@ -290,32 +324,13 @@ lpfc_unreg_did(struct lpfc_hba * phba, uint32_t did, LPFC_MBOXQ_t * pmb)
 	memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
 
 	mb->un.varUnregDID.did = did;
+	mb->un.varUnregDID.vpi = vpi;
 
 	mb->mbxCommand = MBX_UNREG_D_ID;
 	mb->mbxOwner = OWN_HOST;
 	return;
 }
 
-/***********************************************/
-/*                  command to write slim      */
-/***********************************************/
-void
-lpfc_set_slim(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb, uint32_t addr,
-		uint32_t value)
-{
-	MAILBOX_t *mb;
-
-	mb = &pmb->mb;
-	memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
-
-	mb->un.varWords[0] = addr;
-	mb->un.varWords[1] = value;
-
-	mb->mbxCommand = MBX_SET_SLIM;
-	mb->mbxOwner = OWN_HOST;
-	return;
-}
-
 /**********************************************/
 /*  lpfc_read_nv  Issue a READ CONFIG         */
 /*                mailbox command             */
@@ -355,19 +370,17 @@ lpfc_read_lnk_stat(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 /*                  mailbox command         */
 /********************************************/
 int
-lpfc_reg_login(struct lpfc_hba * phba,
-	       uint32_t did, uint8_t * param, LPFC_MBOXQ_t * pmb, uint32_t flag)
+lpfc_reg_login(struct lpfc_hba *phba, uint16_t vpi, uint32_t did,
+	       uint8_t *param, LPFC_MBOXQ_t *pmb, uint32_t flag)
 {
+	MAILBOX_t *mb = &pmb->mb;
 	uint8_t *sparam;
 	struct lpfc_dmabuf *mp;
-	MAILBOX_t *mb;
-	struct lpfc_sli *psli;
 
-	psli = &phba->sli;
-	mb = &pmb->mb;
 	memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
 
 	mb->un.varRegLogin.rpi = 0;
+	mb->un.varRegLogin.vpi = vpi;
 	mb->un.varRegLogin.did = did;
 	mb->un.varWords[30] = flag;	/* Set flag to issue action on cmpl */
 
@@ -379,12 +392,9 @@ lpfc_reg_login(struct lpfc_hba * phba,
 		kfree(mp);
 		mb->mbxCommand = MBX_REG_LOGIN64;
 		/* REG_LOGIN: no buffers */
-		lpfc_printf_log(phba,
-			       KERN_WARNING,
-			       LOG_MBOX,
-			       "%d:0302 REG_LOGIN: no buffers Data x%x x%x\n",
-			       phba->brd_no,
-			       (uint32_t) did, (uint32_t) flag);
+		lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX,
+				"0302 REG_LOGIN: no buffers, VPI:%d DID:x%x, "
+				"flag x%x\n", vpi, did, flag);
 		return (1);
 	}
 	INIT_LIST_HEAD(&mp->list);
@@ -409,7 +419,8 @@ lpfc_reg_login(struct lpfc_hba * phba,
 /*                    mailbox command         */
 /**********************************************/
 void
-lpfc_unreg_login(struct lpfc_hba * phba, uint32_t rpi, LPFC_MBOXQ_t * pmb)
+lpfc_unreg_login(struct lpfc_hba *phba, uint16_t vpi, uint32_t rpi,
+		 LPFC_MBOXQ_t * pmb)
 {
 	MAILBOX_t *mb;
 
@@ -418,12 +429,52 @@ lpfc_unreg_login(struct lpfc_hba * phba, uint32_t rpi, LPFC_MBOXQ_t * pmb)
 
 	mb->un.varUnregLogin.rpi = (uint16_t) rpi;
 	mb->un.varUnregLogin.rsvd1 = 0;
+	mb->un.varUnregLogin.vpi = vpi;
 
 	mb->mbxCommand = MBX_UNREG_LOGIN;
 	mb->mbxOwner = OWN_HOST;
 	return;
 }
 
+/**************************************************/
+/*  lpfc_reg_vpi   Issue a REG_VPI                */
+/*                    mailbox command             */
+/**************************************************/
+void
+lpfc_reg_vpi(struct lpfc_hba *phba, uint16_t vpi, uint32_t sid,
+	     LPFC_MBOXQ_t *pmb)
+{
+	MAILBOX_t *mb = &pmb->mb;
+
+	memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
+
+	mb->un.varRegVpi.vpi = vpi;
+	mb->un.varRegVpi.sid = sid;
+
+	mb->mbxCommand = MBX_REG_VPI;
+	mb->mbxOwner = OWN_HOST;
+	return;
+
+}
+
+/**************************************************/
+/*  lpfc_unreg_vpi   Issue a UNREG_VPI            */
+/*                    mailbox command             */
+/**************************************************/
+void
+lpfc_unreg_vpi(struct lpfc_hba *phba, uint16_t vpi, LPFC_MBOXQ_t *pmb)
+{
+	MAILBOX_t *mb = &pmb->mb;
+	memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
+
+	mb->un.varUnregVpi.vpi = vpi;
+
+	mb->mbxCommand = MBX_UNREG_VPI;
+	mb->mbxOwner = OWN_HOST;
+	return;
+
+}
+
 static void
 lpfc_config_pcb_setup(struct lpfc_hba * phba)
 {
@@ -432,14 +483,18 @@ lpfc_config_pcb_setup(struct lpfc_hba * phba)
 	PCB_t *pcbp = &phba->slim2p->pcb;
 	dma_addr_t pdma_addr;
 	uint32_t offset;
-	uint32_t iocbCnt;
+	uint32_t iocbCnt = 0;
 	int i;
 
 	pcbp->maxRing = (psli->num_rings - 1);
 
-	iocbCnt = 0;
 	for (i = 0; i < psli->num_rings; i++) {
 		pring = &psli->ring[i];
+
+		pring->sizeCiocb = phba->sli_rev == 3 ? SLI3_IOCB_CMD_SIZE:
+							SLI2_IOCB_CMD_SIZE;
+		pring->sizeRiocb = phba->sli_rev == 3 ? SLI3_IOCB_RSP_SIZE:
+							SLI2_IOCB_RSP_SIZE;
 		/* A ring MUST have both cmd and rsp entries defined to be
 		   valid */
 		if ((pring->numCiocb == 0) || (pring->numRiocb == 0)) {
@@ -454,20 +509,18 @@ lpfc_config_pcb_setup(struct lpfc_hba * phba)
 			continue;
 		}
 		/* Command ring setup for ring */
-		pring->cmdringaddr =
-		    (void *)&phba->slim2p->IOCBs[iocbCnt];
+		pring->cmdringaddr = (void *) &phba->slim2p->IOCBs[iocbCnt];
 		pcbp->rdsc[i].cmdEntries = pring->numCiocb;
 
-		offset = (uint8_t *)&phba->slim2p->IOCBs[iocbCnt] -
-			 (uint8_t *)phba->slim2p;
+		offset = (uint8_t *) &phba->slim2p->IOCBs[iocbCnt] -
+			 (uint8_t *) phba->slim2p;
 		pdma_addr = phba->slim2p_mapping + offset;
 		pcbp->rdsc[i].cmdAddrHigh = putPaddrHigh(pdma_addr);
 		pcbp->rdsc[i].cmdAddrLow = putPaddrLow(pdma_addr);
 		iocbCnt += pring->numCiocb;
 
 		/* Response ring setup for ring */
-		pring->rspringaddr =
-		    (void *)&phba->slim2p->IOCBs[iocbCnt];
+		pring->rspringaddr = (void *) &phba->slim2p->IOCBs[iocbCnt];
 
 		pcbp->rdsc[i].rspEntries = pring->numRiocb;
 		offset = (uint8_t *)&phba->slim2p->IOCBs[iocbCnt] -
@@ -476,22 +529,126 @@ lpfc_config_pcb_setup(struct lpfc_hba * phba)
 		pcbp->rdsc[i].rspAddrHigh = putPaddrHigh(pdma_addr);
 		pcbp->rdsc[i].rspAddrLow = putPaddrLow(pdma_addr);
 		iocbCnt += pring->numRiocb;
+#if 0
+		printk("lpfc_config_pcb_setup: brdno:%d, Ring #%d:\n"
+			"numCiocb:%d, sizeCiocb:%d,\n"
+			"numRiocb:%d, sizeRiocb:%d,\n"
+			"cmdAddrLow:0x%x, rspAddrLow:0x%x, iocbCnt:0x%x\n\n",
+			phba->brd_no, i, pcbp->rdsc[i].cmdEntries,
+			pring->sizeCiocb, pcbp->rdsc[i].rspEntries,
+			pring->sizeRiocb, pcbp->rdsc[i].cmdAddrLow,
+			pcbp->rdsc[i].rspAddrLow, iocbCnt);
+#endif
 	}
 }
 
 void
 lpfc_read_rev(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 {
-	MAILBOX_t *mb;
-
-	mb = &pmb->mb;
+	MAILBOX_t *mb = &pmb->mb;
 	memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
 	mb->un.varRdRev.cv = 1;
+	mb->un.varRdRev.v3req = 1; /* Request SLI3 info */
 	mb->mbxCommand = MBX_READ_REV;
 	mb->mbxOwner = OWN_HOST;
 	return;
 }
 
+static void
+lpfc_build_hbq_profile2(struct config_hbq_var *hbqmb,
+			struct lpfc_hbq_init  *hbq_desc)
+{
+	hbqmb->profiles.profile2.seqlenbcnt = hbq_desc->seqlenbcnt;
+	hbqmb->profiles.profile2.maxlen     = hbq_desc->maxlen;
+	hbqmb->profiles.profile2.seqlenoff  = hbq_desc->seqlenoff;
+}
+
+static void
+lpfc_build_hbq_profile3(struct config_hbq_var *hbqmb,
+			struct lpfc_hbq_init  *hbq_desc)
+{
+	hbqmb->profiles.profile3.seqlenbcnt = hbq_desc->seqlenbcnt;
+	hbqmb->profiles.profile3.maxlen     = hbq_desc->maxlen;
+	hbqmb->profiles.profile3.cmdcodeoff = hbq_desc->cmdcodeoff;
+	hbqmb->profiles.profile3.seqlenoff  = hbq_desc->seqlenoff;
+	memcpy(&hbqmb->profiles.profile3.cmdmatch, hbq_desc->cmdmatch,
+	       sizeof(hbqmb->profiles.profile3.cmdmatch));
+}
+
+static void
+lpfc_build_hbq_profile5(struct config_hbq_var *hbqmb,
+			struct lpfc_hbq_init  *hbq_desc)
+{
+	hbqmb->profiles.profile5.seqlenbcnt = hbq_desc->seqlenbcnt;
+	hbqmb->profiles.profile5.maxlen     = hbq_desc->maxlen;
+	hbqmb->profiles.profile5.cmdcodeoff = hbq_desc->cmdcodeoff;
+	hbqmb->profiles.profile5.seqlenoff  = hbq_desc->seqlenoff;
+	memcpy(&hbqmb->profiles.profile5.cmdmatch, hbq_desc->cmdmatch,
+	       sizeof(hbqmb->profiles.profile5.cmdmatch));
+}
+
+void
+lpfc_config_hbq(struct lpfc_hba *phba, uint32_t id,
+		 struct lpfc_hbq_init *hbq_desc,
+		uint32_t hbq_entry_index, LPFC_MBOXQ_t *pmb)
+{
+	int i;
+	MAILBOX_t *mb = &pmb->mb;
+	struct config_hbq_var *hbqmb = &mb->un.varCfgHbq;
+
+	memset(pmb, 0, sizeof (LPFC_MBOXQ_t));
+	hbqmb->hbqId = id;
+	hbqmb->entry_count = hbq_desc->entry_count;   /* # entries in HBQ */
+	hbqmb->recvNotify = hbq_desc->rn;             /* Receive
+						       * Notification */
+	hbqmb->numMask    = hbq_desc->mask_count;     /* # R_CTL/TYPE masks
+						       * # in words 0-19 */
+	hbqmb->profile    = hbq_desc->profile;	      /* Selection profile:
+						       * 0 = all,
+						       * 7 = logentry */
+	hbqmb->ringMask   = hbq_desc->ring_mask;      /* Binds HBQ to a ring
+						       * e.g. Ring0=b0001,
+						       * ring2=b0100 */
+	hbqmb->headerLen  = hbq_desc->headerLen;      /* 0 if not profile 4
+						       * or 5 */
+	hbqmb->logEntry   = hbq_desc->logEntry;       /* Set to 1 if this
+						       * HBQ will be used
+						       * for LogEntry
+						       * buffers */
+	hbqmb->hbqaddrLow = putPaddrLow(phba->hbqslimp.phys) +
+		hbq_entry_index * sizeof(struct lpfc_hbq_entry);
+	hbqmb->hbqaddrHigh = putPaddrHigh(phba->hbqslimp.phys);
+
+	mb->mbxCommand = MBX_CONFIG_HBQ;
+	mb->mbxOwner = OWN_HOST;
+
+				/* Copy info for profiles 2, 3 and 5; this
+				 * area is reserved for other profiles.
+				 */
+	if (hbq_desc->profile == 2)
+		lpfc_build_hbq_profile2(hbqmb, hbq_desc);
+	else if (hbq_desc->profile == 3)
+		lpfc_build_hbq_profile3(hbqmb, hbq_desc);
+	else if (hbq_desc->profile == 5)
+		lpfc_build_hbq_profile5(hbqmb, hbq_desc);
+
+	/* Return if no rctl / type masks for this HBQ */
+	if (!hbq_desc->mask_count)
+		return;
+
+	/* Otherwise we setup specific rctl / type masks for this HBQ */
+	for (i = 0; i < hbq_desc->mask_count; i++) {
+		hbqmb->hbqMasks[i].tmatch = hbq_desc->hbqMasks[i].tmatch;
+		hbqmb->hbqMasks[i].tmask  = hbq_desc->hbqMasks[i].tmask;
+		hbqmb->hbqMasks[i].rctlmatch = hbq_desc->hbqMasks[i].rctlmatch;
+		hbqmb->hbqMasks[i].rctlmask  = hbq_desc->hbqMasks[i].rctlmask;
+	}
+
+	return;
+}
+
+
+
 void
 lpfc_config_ring(struct lpfc_hba * phba, int ring, LPFC_MBOXQ_t * pmb)
 {
@@ -534,15 +691,16 @@ lpfc_config_ring(struct lpfc_hba * phba, int ring, LPFC_MBOXQ_t * pmb)
 }
 
 void
-lpfc_config_port(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_config_port(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
+	MAILBOX_t __iomem *mb_slim = (MAILBOX_t __iomem *) phba->MBslimaddr;
 	MAILBOX_t *mb = &pmb->mb;
 	dma_addr_t pdma_addr;
 	uint32_t bar_low, bar_high;
 	size_t offset;
 	struct lpfc_hgp hgp;
-	void __iomem *to_slim;
 	int i;
+	uint32_t pgp_offset;
 
 	memset(pmb, 0, sizeof(LPFC_MBOXQ_t));
 	mb->mbxCommand = MBX_CONFIG_PORT;
@@ -555,12 +713,29 @@ lpfc_config_port(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 	mb->un.varCfgPort.pcbLow = putPaddrLow(pdma_addr);
 	mb->un.varCfgPort.pcbHigh = putPaddrHigh(pdma_addr);
 
+	/* If HBA supports SLI-3, ask for it */
+
+	if (phba->sli_rev == 3 && phba->vpd.sli3Feat.cerbm) {
+		mb->un.varCfgPort.cerbm = 1; /* Request HBQs */
+		mb->un.varCfgPort.max_hbq = lpfc_sli_hbq_count();
+		if (phba->max_vpi && phba->cfg_enable_npiv &&
+		    phba->vpd.sli3Feat.cmv) {
+			mb->un.varCfgPort.max_vpi = phba->max_vpi;
+			mb->un.varCfgPort.cmv = 1;
+			phba->sli3_options |= LPFC_SLI3_NPIV_ENABLED;
+		} else
+			mb->un.varCfgPort.max_vpi = phba->max_vpi = 0;
+	} else
+		phba->sli_rev = 2;
+	mb->un.varCfgPort.sli_mode = phba->sli_rev;
+
 	/* Now setup pcb */
 	phba->slim2p->pcb.type = TYPE_NATIVE_SLI2;
 	phba->slim2p->pcb.feature = FEATURE_INITIAL_SLI2;
 
 	/* Setup Mailbox pointers */
-	phba->slim2p->pcb.mailBoxSize = sizeof(MAILBOX_t);
+	phba->slim2p->pcb.mailBoxSize = offsetof(MAILBOX_t, us) +
+		sizeof(struct sli2_desc);
 	offset = (uint8_t *)&phba->slim2p->mbx - (uint8_t *)phba->slim2p;
 	pdma_addr = phba->slim2p_mapping + offset;
 	phba->slim2p->pcb.mbAddrHigh = putPaddrHigh(pdma_addr);
@@ -588,29 +763,70 @@ lpfc_config_port(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 	pci_read_config_dword(phba->pcidev, PCI_BASE_ADDRESS_0, &bar_low);
 	pci_read_config_dword(phba->pcidev, PCI_BASE_ADDRESS_1, &bar_high);
 
+	/*
+	 * Set up HGP - Port Memory
+	 *
+	 * The port expects the host get/put pointers to reside in memory
+	 * following the "non-diagnostic" mode mailbox (32 words, 0x80 bytes)
+	 * area of SLIM.  In SLI-2 mode, there's an additional 16 reserved
+	 * words (0x40 bytes).  This area is not reserved if HBQs are
+	 * configured in SLI-3.
+	 *
+	 * CR0Put    - SLI2(no HBQs) = 0xc0, With HBQs = 0x80
+	 * RR0Get                      0xc4              0x84
+	 * CR1Put                      0xc8              0x88
+	 * RR1Get                      0xcc              0x8c
+	 * CR2Put                      0xd0              0x90
+	 * RR2Get                      0xd4              0x94
+	 * CR3Put                      0xd8              0x98
+	 * RR3Get                      0xdc              0x9c
+	 *
+	 * Reserved                    0xa0-0xbf
+	 *    If HBQs configured:
+	 *                         HBQ 0 Put ptr  0xc0
+	 *                         HBQ 1 Put ptr  0xc4
+	 *                         HBQ 2 Put ptr  0xc8
+	 *                         ......
+	 *                         HBQ(M-1)Put Pointer 0xc0+(M-1)*4
+	 *
+	 */
+
+	if (phba->sli_rev == 3) {
+		phba->host_gp = &mb_slim->us.s3.host[0];
+		phba->hbq_put = &mb_slim->us.s3.hbq_put[0];
+	} else {
+		phba->host_gp = &mb_slim->us.s2.host[0];
+		phba->hbq_put = NULL;
+	}
 
 	/* mask off BAR0's flag bits 0 - 3 */
 	phba->slim2p->pcb.hgpAddrLow = (bar_low & PCI_BASE_ADDRESS_MEM_MASK) +
-					(SLIMOFF*sizeof(uint32_t));
+		(void __iomem *) phba->host_gp -
+		(void __iomem *)phba->MBslimaddr;
 	if (bar_low & PCI_BASE_ADDRESS_MEM_TYPE_64)
 		phba->slim2p->pcb.hgpAddrHigh = bar_high;
 	else
 		phba->slim2p->pcb.hgpAddrHigh = 0;
 	/* write HGP data to SLIM at the required longword offset */
 	memset(&hgp, 0, sizeof(struct lpfc_hgp));
-	to_slim = phba->MBslimaddr + (SLIMOFF*sizeof (uint32_t));
 
 	for (i=0; i < phba->sli.num_rings; i++) {
-		lpfc_memcpy_to_slim(to_slim, &hgp, sizeof(struct lpfc_hgp));
-		to_slim += sizeof (struct lpfc_hgp);
+		lpfc_memcpy_to_slim(phba->host_gp + i, &hgp,
+				    sizeof(*phba->host_gp));
 	}
 
 	/* Setup Port Group ring pointer */
-	offset = (uint8_t *)&phba->slim2p->mbx.us.s2.port -
-		 (uint8_t *)phba->slim2p;
-	pdma_addr = phba->slim2p_mapping + offset;
+	if (phba->sli_rev == 3)
+		pgp_offset = (uint8_t *)&phba->slim2p->mbx.us.s3_pgp.port -
+			(uint8_t *)phba->slim2p;
+	else
+		pgp_offset = (uint8_t *)&phba->slim2p->mbx.us.s2.port -
+			(uint8_t *)phba->slim2p;
+
+	pdma_addr = phba->slim2p_mapping + pgp_offset;
 	phba->slim2p->pcb.pgpAddrHigh = putPaddrHigh(pdma_addr);
 	phba->slim2p->pcb.pgpAddrLow = putPaddrLow(pdma_addr);
+	phba->hbq_get = &phba->slim2p->mbx.us.s3_pgp.hbq_get[0];
 
 	/* Use callback routine to set up rings in the pcb */
 	lpfc_config_pcb_setup(phba);
@@ -626,11 +842,7 @@ lpfc_config_port(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 
 	/* Swap PCB if needed */
 	lpfc_sli_pcimem_bcopy(&phba->slim2p->pcb, &phba->slim2p->pcb,
-								sizeof (PCB_t));
-
-	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
-		        "%d:0405 Service Level Interface (SLI) 2 selected\n",
-		        phba->brd_no);
+			      sizeof(PCB_t));
 }
 
 void
@@ -664,15 +876,23 @@ lpfc_mbox_get(struct lpfc_hba * phba)
 	LPFC_MBOXQ_t *mbq = NULL;
 	struct lpfc_sli *psli = &phba->sli;
 
-	list_remove_head((&psli->mboxq), mbq, LPFC_MBOXQ_t,
-			 list);
-	if (mbq) {
+	list_remove_head((&psli->mboxq), mbq, LPFC_MBOXQ_t, list);
+	if (mbq)
 		psli->mboxq_cnt--;
-	}
 
 	return mbq;
 }
 
+void
+lpfc_mbox_cmpl_put(struct lpfc_hba * phba, LPFC_MBOXQ_t * mbq)
+{
+	/* This function expects to be called from interrupt context */
+	spin_lock(&phba->hbalock);
+	list_add_tail(&mbq->list, &phba->sli.mboxq_cmpl);
+	spin_unlock(&phba->hbalock);
+	return;
+}
+
 int
 lpfc_mbox_tmo_val(struct lpfc_hba *phba, int cmd)
 {
diff --git a/drivers/scsi/lpfc/lpfc_mem.c b/drivers/scsi/lpfc/lpfc_mem.c
index ec3bbbd..43c3b8a 100644
--- a/drivers/scsi/lpfc/lpfc_mem.c
+++ b/drivers/scsi/lpfc/lpfc_mem.c
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2004-2005 Emulex.  All rights reserved.           *
+ * Copyright (C) 2004-2006 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *
  * www.emulex.com                                                  *
  * Portions Copyright (C) 2004-2005 Christoph Hellwig              *
@@ -38,10 +38,13 @@
 #define LPFC_MBUF_POOL_SIZE     64      /* max elements in MBUF safety pool */
 #define LPFC_MEM_POOL_SIZE      64      /* max elem in non-DMA safety pool */
 
+
+
 int
 lpfc_mem_alloc(struct lpfc_hba * phba)
 {
 	struct lpfc_dma_pool *pool = &phba->lpfc_mbuf_safety_pool;
+	int longs;
 	int i;
 
 	phba->lpfc_scsi_dma_buf_pool = pci_pool_create("lpfc_scsi_dma_buf_pool",
@@ -80,10 +83,27 @@ lpfc_mem_alloc(struct lpfc_hba * phba)
 	if (!phba->nlp_mem_pool)
 		goto fail_free_mbox_pool;
 
+	phba->lpfc_hbq_pool = pci_pool_create("lpfc_hbq_pool",phba->pcidev,
+					      LPFC_BPL_SIZE, 8, 0);
+	if (!phba->lpfc_hbq_pool)
+		goto fail_free_nlp_mem_pool;
+
+	/* vpi zero is reserved for the physical port so add 1 to max */
+	longs = ((phba->max_vpi + 1) + BITS_PER_LONG - 1) / BITS_PER_LONG;
+	phba->vpi_bmask = kzalloc(longs * sizeof(unsigned long), GFP_KERNEL);
+	if (!phba->vpi_bmask)
+		goto fail_free_hbq_pool;
+
 	return 0;
 
+ fail_free_hbq_pool:
+	lpfc_sli_hbqbuf_free_all(phba);
+ fail_free_nlp_mem_pool:
+	mempool_destroy(phba->nlp_mem_pool);
+	phba->nlp_mem_pool = NULL;
  fail_free_mbox_pool:
 	mempool_destroy(phba->mbox_mem_pool);
+	phba->mbox_mem_pool = NULL;
  fail_free_mbuf_pool:
 	while (i--)
 		pci_pool_free(phba->lpfc_mbuf_pool, pool->elements[i].virt,
@@ -91,8 +111,10 @@ lpfc_mem_alloc(struct lpfc_hba * phba)
 	kfree(pool->elements);
  fail_free_lpfc_mbuf_pool:
 	pci_pool_destroy(phba->lpfc_mbuf_pool);
+	phba->lpfc_mbuf_pool = NULL;
  fail_free_dma_buf_pool:
 	pci_pool_destroy(phba->lpfc_scsi_dma_buf_pool);
+	phba->lpfc_scsi_dma_buf_pool = NULL;
  fail:
 	return -ENOMEM;
 }
@@ -106,6 +128,9 @@ lpfc_mem_free(struct lpfc_hba * phba)
 	struct lpfc_dmabuf   *mp;
 	int i;
 
+	kfree(phba->vpi_bmask);
+	lpfc_sli_hbqbuf_free_all(phba);
+
 	list_for_each_entry_safe(mbox, next_mbox, &psli->mboxq, list) {
 		mp = (struct lpfc_dmabuf *) (mbox->context1);
 		if (mp) {
@@ -115,6 +140,15 @@ lpfc_mem_free(struct lpfc_hba * phba)
 		list_del(&mbox->list);
 		mempool_free(mbox, phba->mbox_mem_pool);
 	}
+	list_for_each_entry_safe(mbox, next_mbox, &psli->mboxq_cmpl, list) {
+		mp = (struct lpfc_dmabuf *) (mbox->context1);
+		if (mp) {
+			lpfc_mbuf_free(phba, mp->virt, mp->phys);
+			kfree(mp);
+		}
+		list_del(&mbox->list);
+		mempool_free(mbox, phba->mbox_mem_pool);
+	}
 
 	psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
 	if (psli->mbox_active) {
@@ -132,13 +166,21 @@ lpfc_mem_free(struct lpfc_hba * phba)
 		pci_pool_free(phba->lpfc_mbuf_pool, pool->elements[i].virt,
 						 pool->elements[i].phys);
 	kfree(pool->elements);
+
+	pci_pool_destroy(phba->lpfc_hbq_pool);
 	mempool_destroy(phba->nlp_mem_pool);
 	mempool_destroy(phba->mbox_mem_pool);
 
 	pci_pool_destroy(phba->lpfc_scsi_dma_buf_pool);
 	pci_pool_destroy(phba->lpfc_mbuf_pool);
 
-	/* Free the iocb lookup array */
+	phba->lpfc_hbq_pool = NULL;
+	phba->nlp_mem_pool = NULL;
+	phba->mbox_mem_pool = NULL;
+	phba->lpfc_scsi_dma_buf_pool = NULL;
+	phba->lpfc_mbuf_pool = NULL;
+
+				/* Free the iocb lookup array */
 	kfree(psli->iocbq_lookup);
 	psli->iocbq_lookup = NULL;
 
@@ -148,20 +190,23 @@ void *
 lpfc_mbuf_alloc(struct lpfc_hba *phba, int mem_flags, dma_addr_t *handle)
 {
 	struct lpfc_dma_pool *pool = &phba->lpfc_mbuf_safety_pool;
+	unsigned long iflags;
 	void *ret;
 
 	ret = pci_pool_alloc(phba->lpfc_mbuf_pool, GFP_KERNEL, handle);
 
-	if (!ret && ( mem_flags & MEM_PRI) && pool->current_count) {
+	spin_lock_irqsave(&phba->hbalock, iflags);
+	if (!ret && (mem_flags & MEM_PRI) && pool->current_count) {
 		pool->current_count--;
 		ret = pool->elements[pool->current_count].virt;
 		*handle = pool->elements[pool->current_count].phys;
 	}
+	spin_unlock_irqrestore(&phba->hbalock, iflags);
 	return ret;
 }
 
 void
-lpfc_mbuf_free(struct lpfc_hba * phba, void *virt, dma_addr_t dma)
+__lpfc_mbuf_free(struct lpfc_hba * phba, void *virt, dma_addr_t dma)
 {
 	struct lpfc_dma_pool *pool = &phba->lpfc_mbuf_safety_pool;
 
@@ -174,3 +219,63 @@ lpfc_mbuf_free(struct lpfc_hba * phba, void *virt, dma_addr_t dma)
 	}
 	return;
 }
+
+void
+lpfc_mbuf_free(struct lpfc_hba * phba, void *virt, dma_addr_t dma)
+{
+	unsigned long iflags;
+
+	spin_lock_irqsave(&phba->hbalock, iflags);
+	__lpfc_mbuf_free(phba, virt, dma);
+	spin_unlock_irqrestore(&phba->hbalock, iflags);
+	return;
+}
+
+struct hbq_dmabuf *
+lpfc_els_hbq_alloc(struct lpfc_hba *phba)
+{
+	struct hbq_dmabuf *hbqbp;
+
+	hbqbp = kmalloc(sizeof(struct hbq_dmabuf), GFP_KERNEL);
+	if (!hbqbp)
+		return NULL;
+
+	hbqbp->dbuf.virt = pci_pool_alloc(phba->lpfc_hbq_pool, GFP_KERNEL,
+					  &hbqbp->dbuf.phys);
+	if (!hbqbp->dbuf.virt) {
+		kfree(hbqbp);
+		return NULL;
+	}
+	hbqbp->size = LPFC_BPL_SIZE;
+	return hbqbp;
+}
+
+void
+lpfc_els_hbq_free(struct lpfc_hba *phba, struct hbq_dmabuf *hbqbp)
+{
+	pci_pool_free(phba->lpfc_hbq_pool, hbqbp->dbuf.virt, hbqbp->dbuf.phys);
+	kfree(hbqbp);
+	return;
+}
+
+/* This is ONLY called for the LPFC_ELS_HBQ */
+void
+lpfc_in_buf_free(struct lpfc_hba *phba, struct lpfc_dmabuf *mp)
+{
+	struct hbq_dmabuf *hbq_entry;
+
+	if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
+		hbq_entry = container_of(mp, struct hbq_dmabuf, dbuf);
+		if (hbq_entry->tag == -1) {
+			(phba->hbqs[LPFC_ELS_HBQ].hbq_free_buffer)
+				(phba, hbq_entry);
+		} else {
+			lpfc_sli_free_hbq(phba, hbq_entry);
+		}
+	} else {
+		lpfc_mbuf_free(phba, mp->virt, mp->phys);
+		kfree(mp);
+	}
+	return;
+}
+
diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
index 5fcd545..346999a 100644
--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
+++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
@@ -1,4 +1,4 @@
-/*******************************************************************
+ /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
  * Copyright (C) 2004-2007 Emulex.  All rights reserved.           *
@@ -35,20 +35,22 @@
 #include "lpfc.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_crtn.h"
+#include "lpfc_vport.h"
+#include "lpfc_debugfs.h"
 
 
 /* Called to verify a rcv'ed ADISC was intended for us. */
 static int
-lpfc_check_adisc(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
-		 struct lpfc_name * nn, struct lpfc_name * pn)
+lpfc_check_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+		 struct lpfc_name *nn, struct lpfc_name *pn)
 {
 	/* Compare the ADISC rsp WWNN / WWPN matches our internal node
 	 * table entry for that node.
 	 */
-	if (memcmp(nn, &ndlp->nlp_nodename, sizeof (struct lpfc_name)) != 0)
+	if (memcmp(nn, &ndlp->nlp_nodename, sizeof (struct lpfc_name)))
 		return 0;
 
-	if (memcmp(pn, &ndlp->nlp_portname, sizeof (struct lpfc_name)) != 0)
+	if (memcmp(pn, &ndlp->nlp_portname, sizeof (struct lpfc_name)))
 		return 0;
 
 	/* we match, return success */
@@ -56,11 +58,10 @@ lpfc_check_adisc(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
 }
 
 int
-lpfc_check_sparm(struct lpfc_hba * phba,
-		 struct lpfc_nodelist * ndlp, struct serv_parm * sp,
-		 uint32_t class)
+lpfc_check_sparm(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+		 struct serv_parm * sp, uint32_t class)
 {
-	volatile struct serv_parm *hsp = &phba->fc_sparam;
+	volatile struct serv_parm *hsp = &vport->fc_sparam;
 	uint16_t hsp_value, ssp_value = 0;
 
 	/*
@@ -75,12 +76,14 @@ lpfc_check_sparm(struct lpfc_hba * phba,
 				hsp->cls1.rcvDataSizeLsb;
 		ssp_value = (sp->cls1.rcvDataSizeMsb << 8) |
 				sp->cls1.rcvDataSizeLsb;
+		if (!ssp_value)
+			goto bad_service_param;
 		if (ssp_value > hsp_value) {
 			sp->cls1.rcvDataSizeLsb = hsp->cls1.rcvDataSizeLsb;
 			sp->cls1.rcvDataSizeMsb = hsp->cls1.rcvDataSizeMsb;
 		}
 	} else if (class == CLASS1) {
-		return 0;
+		goto bad_service_param;
 	}
 
 	if (sp->cls2.classValid) {
@@ -88,12 +91,14 @@ lpfc_check_sparm(struct lpfc_hba * phba,
 				hsp->cls2.rcvDataSizeLsb;
 		ssp_value = (sp->cls2.rcvDataSizeMsb << 8) |
 				sp->cls2.rcvDataSizeLsb;
+		if (!ssp_value)
+			goto bad_service_param;
 		if (ssp_value > hsp_value) {
 			sp->cls2.rcvDataSizeLsb = hsp->cls2.rcvDataSizeLsb;
 			sp->cls2.rcvDataSizeMsb = hsp->cls2.rcvDataSizeMsb;
 		}
 	} else if (class == CLASS2) {
-		return 0;
+		goto bad_service_param;
 	}
 
 	if (sp->cls3.classValid) {
@@ -101,12 +106,14 @@ lpfc_check_sparm(struct lpfc_hba * phba,
 				hsp->cls3.rcvDataSizeLsb;
 		ssp_value = (sp->cls3.rcvDataSizeMsb << 8) |
 				sp->cls3.rcvDataSizeLsb;
+		if (!ssp_value)
+			goto bad_service_param;
 		if (ssp_value > hsp_value) {
 			sp->cls3.rcvDataSizeLsb = hsp->cls3.rcvDataSizeLsb;
 			sp->cls3.rcvDataSizeMsb = hsp->cls3.rcvDataSizeMsb;
 		}
 	} else if (class == CLASS3) {
-		return 0;
+		goto bad_service_param;
 	}
 
 	/*
@@ -125,12 +132,22 @@ lpfc_check_sparm(struct lpfc_hba * phba,
 	memcpy(&ndlp->nlp_nodename, &sp->nodeName, sizeof (struct lpfc_name));
 	memcpy(&ndlp->nlp_portname, &sp->portName, sizeof (struct lpfc_name));
 	return 1;
+bad_service_param:
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+			 "0207 Device %x "
+			 "(%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x) sent "
+			 "invalid service parameters.  Ignoring device.\n",
+			 ndlp->nlp_DID,
+			 sp->nodeName.u.wwn[0], sp->nodeName.u.wwn[1],
+			 sp->nodeName.u.wwn[2], sp->nodeName.u.wwn[3],
+			 sp->nodeName.u.wwn[4], sp->nodeName.u.wwn[5],
+			 sp->nodeName.u.wwn[6], sp->nodeName.u.wwn[7]);
+	return 0;
 }
 
 static void *
-lpfc_check_elscmpl_iocb(struct lpfc_hba * phba,
-		      struct lpfc_iocbq *cmdiocb,
-		      struct lpfc_iocbq *rspiocb)
+lpfc_check_elscmpl_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			struct lpfc_iocbq *rspiocb)
 {
 	struct lpfc_dmabuf *pcmd, *prsp;
 	uint32_t *lp;
@@ -168,75 +185,69 @@ lpfc_check_elscmpl_iocb(struct lpfc_hba * phba,
  * routine effectively results in a "software abort".
  */
 int
-lpfc_els_abort(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp)
+lpfc_els_abort(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
 {
-	struct lpfc_sli *psli;
-	struct lpfc_sli_ring *pring;
+	LIST_HEAD(completions);
+	struct lpfc_sli  *psli = &phba->sli;
+	struct lpfc_sli_ring *pring = &psli->ring[LPFC_ELS_RING];
 	struct lpfc_iocbq *iocb, *next_iocb;
-	IOCB_t *icmd;
-	int    found = 0;
+	IOCB_t *cmd;
 
 	/* Abort outstanding I/O on NPort <nlp_DID> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_DISCOVERY,
-			"%d:0205 Abort outstanding I/O on NPort x%x "
-			"Data: x%x x%x x%x\n",
-			phba->brd_no, ndlp->nlp_DID, ndlp->nlp_flag,
-			ndlp->nlp_state, ndlp->nlp_rpi);
+	lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_DISCOVERY,
+			 "0205 Abort outstanding I/O on NPort x%x "
+			 "Data: x%x x%x x%x\n",
+			 ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state,
+			 ndlp->nlp_rpi);
 
-	psli = &phba->sli;
-	pring = &psli->ring[LPFC_ELS_RING];
+	lpfc_fabric_abort_nport(ndlp);
 
 	/* First check the txq */
-	do {
-		found = 0;
-		spin_lock_irq(phba->host->host_lock);
-		list_for_each_entry_safe(iocb, next_iocb, &pring->txq, list) {
-			/* Check to see if iocb matches the nport we are looking
-			   for */
-			if ((lpfc_check_sli_ndlp(phba, pring, iocb, ndlp))) {
-				found = 1;
-				/* It matches, so deque and call compl with an
-				   error */
-				list_del(&iocb->list);
-				pring->txq_cnt--;
-				if (iocb->iocb_cmpl) {
-					icmd = &iocb->iocb;
-					icmd->ulpStatus = IOSTAT_LOCAL_REJECT;
-					icmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
-					spin_unlock_irq(phba->host->host_lock);
-					(iocb->iocb_cmpl) (phba, iocb, iocb);
-					spin_lock_irq(phba->host->host_lock);
-				} else
-					lpfc_sli_release_iocbq(phba, iocb);
-				break;
-			}
+	spin_lock_irq(&phba->hbalock);
+	list_for_each_entry_safe(iocb, next_iocb, &pring->txq, list) {
+		/* Check to see if iocb matches the nport we are looking for */
+		if (lpfc_check_sli_ndlp(phba, pring, iocb, ndlp)) {
+			/* It matches, so dequeue and call compl with an error */
+			list_move_tail(&iocb->list, &completions);
+			pring->txq_cnt--;
 		}
-		spin_unlock_irq(phba->host->host_lock);
-	} while (found);
+	}
 
 	/* Next check the txcmplq */
-	spin_lock_irq(phba->host->host_lock);
 	list_for_each_entry_safe(iocb, next_iocb, &pring->txcmplq, list) {
-		/* Check to see if iocb matches the nport we are looking
-		   for */
-		if ((lpfc_check_sli_ndlp (phba, pring, iocb, ndlp))) {
-			icmd = &iocb->iocb;
+		/* Check to see if iocb matches the nport we are looking for */
+		if (lpfc_check_sli_ndlp(phba, pring, iocb, ndlp)) {
 			lpfc_sli_issue_abort_iotag(phba, pring, iocb);
 		}
 	}
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
+
+	while (!list_empty(&completions)) {
+		iocb = list_get_first(&completions, struct lpfc_iocbq, list);
+		cmd = &iocb->iocb;
+		list_del_init(&iocb->list);
+
+		if (!iocb->iocb_cmpl)
+			lpfc_sli_release_iocbq(phba, iocb);
+		else {
+			cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+			cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
+			(iocb->iocb_cmpl) (phba, iocb, iocb);
+		}
+	}
 
 	/* If we are delaying issuing an ELS command, cancel it */
 	if (ndlp->nlp_flag & NLP_DELAY_TMO)
-		lpfc_cancel_retry_delay_tmo(phba, ndlp);
+		lpfc_cancel_retry_delay_tmo(phba->pport, ndlp);
 	return 0;
 }
 
 static int
-lpfc_rcv_plogi(struct lpfc_hba * phba,
-		      struct lpfc_nodelist * ndlp,
-		      struct lpfc_iocbq *cmdiocb)
+lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+	       struct lpfc_iocbq *cmdiocb)
 {
+	struct Scsi_Host   *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba    *phba = vport->phba;
 	struct lpfc_dmabuf *pcmd;
 	uint32_t *lp;
 	IOCB_t *icmd;
@@ -246,14 +257,14 @@ lpfc_rcv_plogi(struct lpfc_hba * phba,
 	int rc;
 
 	memset(&stat, 0, sizeof (struct ls_rjt));
-	if (phba->hba_state <= LPFC_FLOGI) {
+	if (vport->port_state <= LPFC_FLOGI) {
 		/* Before responding to PLOGI, check for pt2pt mode.
 		 * If we are pt2pt, with an outstanding FLOGI, abort
 		 * the FLOGI and resend it first.
 		 */
-		if (phba->fc_flag & FC_PT2PT) {
-			lpfc_els_abort_flogi(phba);
-		        if (!(phba->fc_flag & FC_PT2PT_PLOGI)) {
+		if (vport->fc_flag & FC_PT2PT) {
+			 lpfc_els_abort_flogi(phba);
+		        if (!(vport->fc_flag & FC_PT2PT_PLOGI)) {
 				/* If the other side is supposed to initiate
 				 * the PLOGI anyway, just ACC it now and
 				 * move on with discovery.
@@ -262,45 +273,59 @@ lpfc_rcv_plogi(struct lpfc_hba * phba,
 				phba->fc_ratov = FF_DEF_RATOV;
 				/* Start discovery - this should just do
 				   CLEAR_LA */
-				lpfc_disc_start(phba);
-			} else {
-				lpfc_initial_flogi(phba);
-			}
+				lpfc_disc_start(vport);
+			} else
+				lpfc_initial_flogi(vport);
 		} else {
 			stat.un.b.lsRjtRsnCode = LSRJT_LOGICAL_BSY;
 			stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
-			lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb,
-					    ndlp);
+			lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb,
+					    ndlp, NULL);
 			return 0;
 		}
 	}
 	pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
 	lp = (uint32_t *) pcmd->virt;
 	sp = (struct serv_parm *) ((uint8_t *) lp + sizeof (uint32_t));
-	if ((lpfc_check_sparm(phba, ndlp, sp, CLASS3) == 0)) {
+	if (wwn_to_u64(sp->portName.u.wwn) == 0) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0140 PLOGI Reject: invalid nname\n");
+		stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
+		stat.un.b.lsRjtRsnCodeExp = LSEXP_INVALID_PNAME;
+		lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
+			NULL);
+		return 0;
+	}
+	if (wwn_to_u64(sp->nodeName.u.wwn) == 0) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0141 PLOGI Reject: invalid pname\n");
+		stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
+		stat.un.b.lsRjtRsnCodeExp = LSEXP_INVALID_NNAME;
+		lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
+			NULL);
+		return 0;
+	}
+	if ((lpfc_check_sparm(vport, ndlp, sp, CLASS3) == 0)) {
 		/* Reject this request because invalid parameters */
 		stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 		stat.un.b.lsRjtRsnCodeExp = LSEXP_SPARM_OPTIONS;
-		lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+		lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
+			NULL);
 		return 0;
 	}
 	icmd = &cmdiocb->iocb;
 
 	/* PLOGI chkparm OK */
-	lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_ELS,
-			"%d:0114 PLOGI chkparm OK Data: x%x x%x x%x x%x\n",
-			phba->brd_no,
-			ndlp->nlp_DID, ndlp->nlp_state, ndlp->nlp_flag,
-			ndlp->nlp_rpi);
-
-	if ((phba->cfg_fcp_class == 2) &&
-	    (sp->cls2.classValid)) {
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0114 PLOGI chkparm OK Data: x%x x%x x%x x%x\n",
+			 ndlp->nlp_DID, ndlp->nlp_state, ndlp->nlp_flag,
+			 ndlp->nlp_rpi);
+
+	if (vport->cfg_fcp_class == 2 && sp->cls2.classValid)
 		ndlp->nlp_fcp_info |= CLASS2;
-	} else {
+	else
 		ndlp->nlp_fcp_info |= CLASS3;
-	}
+
 	ndlp->nlp_class_sup = 0;
 	if (sp->cls1.classValid)
 		ndlp->nlp_class_sup |= FC_COS_CLASS1;
@@ -322,35 +347,36 @@ lpfc_rcv_plogi(struct lpfc_hba * phba,
 	case  NLP_STE_PRLI_ISSUE:
 	case  NLP_STE_UNMAPPED_NODE:
 	case  NLP_STE_MAPPED_NODE:
-		lpfc_els_rsp_acc(phba, ELS_CMD_PLOGI, cmdiocb, ndlp, NULL, 0);
+		lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, NULL);
 		return 1;
 	}
 
-	if ((phba->fc_flag & FC_PT2PT)
-	    && !(phba->fc_flag & FC_PT2PT_PLOGI)) {
+	if ((vport->fc_flag & FC_PT2PT) &&
+	    !(vport->fc_flag & FC_PT2PT_PLOGI)) {
 		/* rcv'ed PLOGI decides what our NPortId will be */
-		phba->fc_myDID = icmd->un.rcvels.parmRo;
+		vport->fc_myDID = icmd->un.rcvels.parmRo;
 		mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 		if (mbox == NULL)
 			goto out;
 		lpfc_config_link(phba, mbox);
 		mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
-		rc = lpfc_sli_issue_mbox
-			(phba, mbox, (MBX_NOWAIT | MBX_STOP_IOCB));
+		mbox->vport = vport;
+		rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT);
 		if (rc == MBX_NOT_FINISHED) {
-			mempool_free( mbox, phba->mbox_mem_pool);
+			mempool_free(mbox, phba->mbox_mem_pool);
 			goto out;
 		}
 
-		lpfc_can_disctmo(phba);
+		lpfc_can_disctmo(vport);
 	}
 	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
-	if (mbox == NULL)
+	if (!mbox)
 		goto out;
 
-	if (lpfc_reg_login(phba, icmd->un.rcvels.remoteID,
-			   (uint8_t *) sp, mbox, 0)) {
-		mempool_free( mbox, phba->mbox_mem_pool);
+	rc = lpfc_reg_login(phba, vport->vpi, icmd->un.rcvels.remoteID,
+			    (uint8_t *) sp, mbox, 0);
+	if (rc) {
+		mempool_free(mbox, phba->mbox_mem_pool);
 		goto out;
 	}
 
@@ -358,8 +384,14 @@ lpfc_rcv_plogi(struct lpfc_hba * phba,
 	 * queue this mbox command to be processed later.
 	 */
 	mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login;
-	mbox->context2  = ndlp;
+	/*
+	 * mbox->context2 = lpfc_nlp_get(ndlp) deferred until mailbox
+	 * command issued in lpfc_cmpl_els_acc().
+	 */
+	mbox->vport = vport;
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag |= (NLP_ACC_REGLOGIN | NLP_RCV_PLOGI);
+	spin_unlock_irq(shost->host_lock);
 
 	/*
 	 * If there is an outstanding PLOGI issued, abort it before
@@ -375,24 +407,62 @@ lpfc_rcv_plogi(struct lpfc_hba * phba,
 		lpfc_els_abort(phba, ndlp);
 	}
 
-	lpfc_els_rsp_acc(phba, ELS_CMD_PLOGI, cmdiocb, ndlp, mbox, 0);
+	if ((vport->port_type == LPFC_NPIV_PORT &&
+	     vport->cfg_restrict_login)) {
+
+		/* In order to preserve RPIs, we want to cleanup
+		 * the default RPI the firmware created to rcv
+		 * this ELS request. The only way to do this is
+		 * to register, then unregister the RPI.
+		 */
+		spin_lock_irq(shost->host_lock);
+		ndlp->nlp_flag |= NLP_RM_DFLT_RPI;
+		spin_unlock_irq(shost->host_lock);
+		stat.un.b.lsRjtRsnCode = LSRJT_INVALID_CMD;
+		stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
+		lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb,
+			ndlp, mbox);
+		return 1;
+	}
+
+	/* If the remote NPort logs into us, before we can initiate
+	 * discovery to them, cleanup the NPort from discovery accordingly.
+	 */
+	if (ndlp->nlp_state == NLP_STE_NPR_NODE) {
+		spin_lock_irq(shost->host_lock);
+		ndlp->nlp_flag &= ~NLP_DELAY_TMO;
+		spin_unlock_irq(shost->host_lock);
+		del_timer_sync(&ndlp->nlp_delayfunc);
+		ndlp->nlp_last_elscmd = 0;
+
+		if (!list_empty(&ndlp->els_retry_evt.evt_listp))
+			list_del_init(&ndlp->els_retry_evt.evt_listp);
+
+		if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
+			spin_lock_irq(shost->host_lock);
+			ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
+			spin_unlock_irq(shost->host_lock);
+		}
+	}
+
+	lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, mbox);
 	return 1;
 
 out:
 	stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 	stat.un.b.lsRjtRsnCodeExp = LSEXP_OUT_OF_RESOURCE;
-	lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+	lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp, NULL);
 	return 0;
 }
 
 static int
-lpfc_rcv_padisc(struct lpfc_hba * phba,
-		struct lpfc_nodelist * ndlp,
+lpfc_rcv_padisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 		struct lpfc_iocbq *cmdiocb)
 {
+	struct Scsi_Host   *shost = lpfc_shost_from_vport(vport);
 	struct lpfc_dmabuf *pcmd;
-	struct serv_parm *sp;
-	struct lpfc_name *pnn, *ppn;
+	struct serv_parm   *sp;
+	struct lpfc_name   *pnn, *ppn;
 	struct ls_rjt stat;
 	ADISC *ap;
 	IOCB_t *icmd;
@@ -414,13 +484,12 @@ lpfc_rcv_padisc(struct lpfc_hba * phba,
 	}
 
 	icmd = &cmdiocb->iocb;
-	if ((icmd->ulpStatus == 0) &&
-	    (lpfc_check_adisc(phba, ndlp, pnn, ppn))) {
+	if (icmd->ulpStatus == 0 && lpfc_check_adisc(vport, ndlp, pnn, ppn)) {
 		if (cmd == ELS_CMD_ADISC) {
-			lpfc_els_rsp_adisc_acc(phba, cmdiocb, ndlp);
+			lpfc_els_rsp_adisc_acc(vport, cmdiocb, ndlp);
 		} else {
-			lpfc_els_rsp_acc(phba, ELS_CMD_PLOGI, cmdiocb, ndlp,
-				NULL, 0);
+			lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp,
+					 NULL);
 		}
 		return 1;
 	}
@@ -429,58 +498,54 @@ lpfc_rcv_padisc(struct lpfc_hba * phba,
 	stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 	stat.un.b.lsRjtRsnCodeExp = LSEXP_SPARM_OPTIONS;
 	stat.un.b.vendorUnique = 0;
-	lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+	lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp, NULL);
 
 	/* 1 sec timeout */
 	mod_timer(&ndlp->nlp_delayfunc, jiffies + HZ);
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag |= NLP_DELAY_TMO;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 	ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
 	ndlp->nlp_prev_state = ndlp->nlp_state;
-	ndlp->nlp_state = NLP_STE_NPR_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
 	return 0;
 }
 
 static int
-lpfc_rcv_logo(struct lpfc_hba * phba,
-		      struct lpfc_nodelist * ndlp,
-		      struct lpfc_iocbq *cmdiocb,
-		      uint32_t els_cmd)
+lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+	      struct lpfc_iocbq *cmdiocb, uint32_t els_cmd)
 {
-	/* Put ndlp on NPR list with 1 sec timeout for plogi, ACC logo */
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	/* Put ndlp in NPR state with 1 sec timeout for plogi, ACC logo */
 	/* Only call LOGO ACC for first LOGO, this avoids sending unnecessary
 	 * PLOGIs during LOGO storms from a device.
 	 */
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag |= NLP_LOGO_ACC;
+	spin_unlock_irq(shost->host_lock);
 	if (els_cmd == ELS_CMD_PRLO)
-		lpfc_els_rsp_acc(phba, ELS_CMD_PRLO, cmdiocb, ndlp, NULL, 0);
+		lpfc_els_rsp_acc(vport, ELS_CMD_PRLO, cmdiocb, ndlp, NULL);
 	else
-		lpfc_els_rsp_acc(phba, ELS_CMD_ACC, cmdiocb, ndlp, NULL, 0);
+		lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
 
 	if (!(ndlp->nlp_type & NLP_FABRIC) ||
-		(ndlp->nlp_state == NLP_STE_ADISC_ISSUE)) {
+	    (ndlp->nlp_state == NLP_STE_ADISC_ISSUE)) {
 		/* Only try to re-login if this is NOT a Fabric Node */
 		mod_timer(&ndlp->nlp_delayfunc, jiffies + HZ * 1);
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_DELAY_TMO;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 
 		ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
-		ndlp->nlp_prev_state = ndlp->nlp_state;
-		ndlp->nlp_state = NLP_STE_NPR_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
-	} else {
-		ndlp->nlp_prev_state = ndlp->nlp_state;
-		ndlp->nlp_state = NLP_STE_UNUSED_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_UNUSED_LIST);
 	}
+	ndlp->nlp_prev_state = ndlp->nlp_state;
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 	/* The driver has to wait until the ACC completes before it continues
 	 * processing the LOGO.  The action will resume in
 	 * lpfc_cmpl_els_logo_acc routine. Since part of processing includes an
@@ -490,9 +555,8 @@ lpfc_rcv_logo(struct lpfc_hba * phba,
 }
 
 static void
-lpfc_rcv_prli(struct lpfc_hba * phba,
-		      struct lpfc_nodelist * ndlp,
-		      struct lpfc_iocbq *cmdiocb)
+lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+	      struct lpfc_iocbq *cmdiocb)
 {
 	struct lpfc_dmabuf *pcmd;
 	uint32_t *lp;
@@ -506,8 +570,7 @@ lpfc_rcv_prli(struct lpfc_hba * phba,
 
 	ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
 	ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE;
-	if ((npr->acceptRspCode == PRLI_REQ_EXECUTED) &&
-	    (npr->prliType == PRLI_FCP_TYPE)) {
+	if (npr->prliType == PRLI_FCP_TYPE) {
 		if (npr->initiatorFunc)
 			ndlp->nlp_type |= NLP_FCP_INITIATOR;
 		if (npr->targetFunc)
@@ -522,191 +585,202 @@ lpfc_rcv_prli(struct lpfc_hba * phba,
 			roles |= FC_RPORT_ROLE_FCP_INITIATOR;
 		if (ndlp->nlp_type & NLP_FCP_TARGET)
 			roles |= FC_RPORT_ROLE_FCP_TARGET;
+
+		lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT,
+			"rport rolechg:   role:x%x did:x%x flg:x%x",
+			roles, ndlp->nlp_DID, ndlp->nlp_flag);
+
 		fc_remote_port_rolechg(rport, roles);
 	}
 }
 
 static uint32_t
-lpfc_disc_set_adisc(struct lpfc_hba * phba,
-		      struct lpfc_nodelist * ndlp)
+lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	if (!ndlp->nlp_rpi) {
+		ndlp->nlp_flag &= ~NLP_NPR_ADISC;
+		return 0;
+	}
+
 	/* Check config parameter use-adisc or FCP-2 */
-	if ((phba->cfg_use_adisc == 0) &&
-		!(phba->fc_flag & FC_RSCN_MODE)) {
-		if (!(ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE))
-			return 0;
+	if ((vport->cfg_use_adisc && (vport->fc_flag & FC_RSCN_MODE)) ||
+	    ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE) {
+		spin_lock_irq(shost->host_lock);
+		ndlp->nlp_flag |= NLP_NPR_ADISC;
+		spin_unlock_irq(shost->host_lock);
+		return 1;
 	}
-	spin_lock_irq(phba->host->host_lock);
-	ndlp->nlp_flag |= NLP_NPR_ADISC;
-	spin_unlock_irq(phba->host->host_lock);
-	return 1;
+	ndlp->nlp_flag &= ~NLP_NPR_ADISC;
+	lpfc_unreg_rpi(vport, ndlp);
+	return 0;
 }
 
 static uint32_t
-lpfc_disc_illegal(struct lpfc_hba * phba,
-		   struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
-{
-	lpfc_printf_log(phba,
-			KERN_ERR,
-			LOG_DISCOVERY,
-			"%d:0253 Illegal State Transition: node x%x event x%x, "
-			"state x%x Data: x%x x%x\n",
-			phba->brd_no,
-			ndlp->nlp_DID, evt, ndlp->nlp_state, ndlp->nlp_rpi,
-			ndlp->nlp_flag);
+lpfc_disc_illegal(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+		  void *arg, uint32_t evt)
+{
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+			 "0253 Illegal State Transition: node x%x "
+			 "event x%x, state x%x Data: x%x x%x\n",
+			 ndlp->nlp_DID, evt, ndlp->nlp_state, ndlp->nlp_rpi,
+			 ndlp->nlp_flag);
 	return ndlp->nlp_state;
 }
 
 /* Start of Discovery State Machine routines */
 
 static uint32_t
-lpfc_rcv_plogi_unused_node(struct lpfc_hba * phba,
-			   struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_plogi_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   void *arg, uint32_t evt)
 {
 	struct lpfc_iocbq *cmdiocb;
 
 	cmdiocb = (struct lpfc_iocbq *) arg;
 
-	if (lpfc_rcv_plogi(phba, ndlp, cmdiocb)) {
-		ndlp->nlp_prev_state = NLP_STE_UNUSED_NODE;
-		ndlp->nlp_state = NLP_STE_UNUSED_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_UNUSED_LIST);
+	if (lpfc_rcv_plogi(vport, ndlp, cmdiocb)) {
 		return ndlp->nlp_state;
 	}
-	lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
 	return NLP_STE_FREED_NODE;
 }
 
 static uint32_t
-lpfc_rcv_els_unused_node(struct lpfc_hba * phba,
-			 struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_els_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
-	lpfc_issue_els_logo(phba, ndlp, 0);
-	lpfc_nlp_list(phba, ndlp, NLP_UNUSED_LIST);
+	lpfc_issue_els_logo(vport, ndlp, 0);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_logo_unused_node(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_logo_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq     *cmdiocb;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag |= NLP_LOGO_ACC;
-	spin_unlock_irq(phba->host->host_lock);
-	lpfc_els_rsp_acc(phba, ELS_CMD_ACC, cmdiocb, ndlp, NULL, 0);
-	lpfc_nlp_list(phba, ndlp, NLP_UNUSED_LIST);
+	spin_unlock_irq(shost->host_lock);
+	lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
 
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_cmpl_logo_unused_node(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_cmpl_logo_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   void *arg, uint32_t evt)
 {
-	lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
 	return NLP_STE_FREED_NODE;
 }
 
 static uint32_t
-lpfc_device_rm_unused_node(struct lpfc_hba * phba,
-			   struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_device_rm_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   void *arg, uint32_t evt)
 {
-	lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
 	return NLP_STE_FREED_NODE;
 }
 
 static uint32_t
-lpfc_rcv_plogi_plogi_issue(struct lpfc_hba * phba, struct lpfc_nodelist * ndlp,
+lpfc_rcv_plogi_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 			   void *arg, uint32_t evt)
 {
+	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_iocbq *cmdiocb = arg;
-	struct lpfc_dmabuf *pcmd;
-	struct serv_parm *sp;
-	uint32_t *lp;
+	struct lpfc_dmabuf *pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
+	uint32_t *lp = (uint32_t *) pcmd->virt;
+	struct serv_parm *sp = (struct serv_parm *) (lp + 1);
 	struct ls_rjt stat;
 	int port_cmp;
 
-	pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
-	lp = (uint32_t *) pcmd->virt;
-	sp = (struct serv_parm *) ((uint8_t *) lp + sizeof (uint32_t));
-
 	memset(&stat, 0, sizeof (struct ls_rjt));
 
 	/* For a PLOGI, we only accept if our portname is less
 	 * than the remote portname.
 	 */
 	phba->fc_stat.elsLogiCol++;
-	port_cmp = memcmp(&phba->fc_portname, &sp->portName,
-			  sizeof (struct lpfc_name));
+	port_cmp = memcmp(&vport->fc_portname, &sp->portName,
+			  sizeof(struct lpfc_name));
 
 	if (port_cmp >= 0) {
 		/* Reject this request because the remote node will accept
 		   ours */
 		stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 		stat.un.b.lsRjtRsnCodeExp = LSEXP_CMD_IN_PROGRESS;
-		lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+		lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp,
+			NULL);
 	} else {
-		lpfc_rcv_plogi(phba, ndlp, cmdiocb);
-	} /* if our portname was less */
+		lpfc_rcv_plogi(vport, ndlp, cmdiocb);
+	} /* If our portname was less */
 
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_logo_plogi_issue(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_prli_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq     *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
+	struct ls_rjt     stat;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	memset(&stat, 0, sizeof (struct ls_rjt));
+	stat.un.b.lsRjtRsnCode = LSRJT_LOGICAL_BSY;
+	stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
+	lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp, NULL);
+	return ndlp->nlp_state;
+}
 
-	/* software abort outstanding PLOGI */
-	lpfc_els_abort(phba, ndlp);
+static uint32_t
+lpfc_rcv_logo_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
+{
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_rcv_logo(phba, ndlp, cmdiocb, ELS_CMD_LOGO);
+				/* software abort outstanding PLOGI */
+	lpfc_els_abort(vport->phba, ndlp);
+
+	lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_LOGO);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_els_plogi_issue(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_els_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq     *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba   *phba = vport->phba;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
 	/* software abort outstanding PLOGI */
 	lpfc_els_abort(phba, ndlp);
 
 	if (evt == NLP_EVT_RCV_LOGO) {
-		lpfc_els_rsp_acc(phba, ELS_CMD_ACC, cmdiocb, ndlp, NULL, 0);
+		lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
 	} else {
-		lpfc_issue_els_logo(phba, ndlp, 0);
+		lpfc_issue_els_logo(vport, ndlp, 0);
 	}
 
-	/* Put ndlp in npr list set plogi timer for 1 sec */
+	/* Put ndlp in npr state and set plogi timer for 1 sec */
 	mod_timer(&ndlp->nlp_delayfunc, jiffies + HZ * 1);
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag |= NLP_DELAY_TMO;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 	ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
 	ndlp->nlp_prev_state = NLP_STE_PLOGI_ISSUE;
-	ndlp->nlp_state = NLP_STE_NPR_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
 
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_cmpl_plogi_plogi_issue(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
+lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport,
+			    struct lpfc_nodelist *ndlp,
+			    void *arg,
 			    uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb, *rspiocb;
+	struct lpfc_hba    *phba = vport->phba;
+	struct lpfc_iocbq  *cmdiocb, *rspiocb;
 	struct lpfc_dmabuf *pcmd, *prsp, *mp;
 	uint32_t *lp;
 	IOCB_t *irsp;
@@ -728,31 +802,28 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_hba * phba,
 
 	pcmd = (struct lpfc_dmabuf *) cmdiocb->context2;
 
-	prsp = list_get_first(&pcmd->list,
-			      struct lpfc_dmabuf,
-			      list);
-	lp = (uint32_t *) prsp->virt;
+	prsp = list_get_first(&pcmd->list, struct lpfc_dmabuf, list);
 
+	lp = (uint32_t *) prsp->virt;
 	sp = (struct serv_parm *) ((uint8_t *) lp + sizeof (uint32_t));
-	if (!lpfc_check_sparm(phba, ndlp, sp, CLASS3))
+	if (wwn_to_u64(sp->portName.u.wwn) == 0 ||
+	    wwn_to_u64(sp->nodeName.u.wwn) == 0) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0142 PLOGI RSP: Invalid WWN.\n");
+		goto out;
+	}
+	if (!lpfc_check_sparm(vport, ndlp, sp, CLASS3))
 		goto out;
-
 	/* PLOGI chkparm OK */
-	lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_ELS,
-			"%d:0121 PLOGI chkparm OK "
-			"Data: x%x x%x x%x x%x\n",
-			phba->brd_no,
-			ndlp->nlp_DID, ndlp->nlp_state,
-			ndlp->nlp_flag, ndlp->nlp_rpi);
-
-	if ((phba->cfg_fcp_class == 2) &&
-	    (sp->cls2.classValid)) {
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+			 "0121 PLOGI chkparm OK Data: x%x x%x x%x x%x\n",
+			 ndlp->nlp_DID, ndlp->nlp_state,
+			 ndlp->nlp_flag, ndlp->nlp_rpi);
+	if (vport->cfg_fcp_class == 2 && (sp->cls2.classValid))
 		ndlp->nlp_fcp_info |= CLASS2;
-	} else {
+	else
 		ndlp->nlp_fcp_info |= CLASS3;
-	}
+
 	ndlp->nlp_class_sup = 0;
 	if (sp->cls1.classValid)
 		ndlp->nlp_class_sup |= FC_COS_CLASS1;
@@ -763,96 +834,125 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_hba * phba,
 	if (sp->cls4.classValid)
 		ndlp->nlp_class_sup |= FC_COS_CLASS4;
 	ndlp->nlp_maxframe =
-		((sp->cmn.bbRcvSizeMsb & 0x0F) << 8) |
-		sp->cmn.bbRcvSizeLsb;
+		((sp->cmn.bbRcvSizeMsb & 0x0F) << 8) | sp->cmn.bbRcvSizeLsb;
 
-	if (!(mbox = mempool_alloc(phba->mbox_mem_pool,
-				   GFP_KERNEL)))
+	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (!mbox) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+			"0133 PLOGI: no memory for reg_login "
+			"Data: x%x x%x x%x x%x\n",
+			ndlp->nlp_DID, ndlp->nlp_state,
+			ndlp->nlp_flag, ndlp->nlp_rpi);
 		goto out;
+	}
 
-	lpfc_unreg_rpi(phba, ndlp);
-	if (lpfc_reg_login
-	    (phba, irsp->un.elsreq64.remoteID,
-	     (uint8_t *) sp, mbox, 0) == 0) {
+	lpfc_unreg_rpi(vport, ndlp);
+
+	if (lpfc_reg_login(phba, vport->vpi, irsp->un.elsreq64.remoteID,
+			   (uint8_t *) sp, mbox, 0) == 0) {
 		switch (ndlp->nlp_DID) {
 		case NameServer_DID:
-			mbox->mbox_cmpl =
-				lpfc_mbx_cmpl_ns_reg_login;
+			mbox->mbox_cmpl = lpfc_mbx_cmpl_ns_reg_login;
 			break;
 		case FDMI_DID:
-			mbox->mbox_cmpl =
-				lpfc_mbx_cmpl_fdmi_reg_login;
+			mbox->mbox_cmpl = lpfc_mbx_cmpl_fdmi_reg_login;
 			break;
 		default:
-			mbox->mbox_cmpl =
-				lpfc_mbx_cmpl_reg_login;
+			mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login;
 		}
-		mbox->context2 = ndlp;
-		if (lpfc_sli_issue_mbox(phba, mbox,
-					(MBX_NOWAIT | MBX_STOP_IOCB))
+		mbox->context2 = lpfc_nlp_get(ndlp);
+		mbox->vport = vport;
+		if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT)
 		    != MBX_NOT_FINISHED) {
-			ndlp->nlp_state =
-				NLP_STE_REG_LOGIN_ISSUE;
-			lpfc_nlp_list(phba, ndlp,
-				      NLP_REGLOGIN_LIST);
+			lpfc_nlp_set_state(vport, ndlp,
+					   NLP_STE_REG_LOGIN_ISSUE);
 			return ndlp->nlp_state;
 		}
-		mp = (struct lpfc_dmabuf *)mbox->context1;
+		lpfc_nlp_put(ndlp);
+		mp = (struct lpfc_dmabuf *) mbox->context1;
 		lpfc_mbuf_free(phba, mp->virt, mp->phys);
 		kfree(mp);
 		mempool_free(mbox, phba->mbox_mem_pool);
+
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0134 PLOGI: cannot issue reg_login "
+				 "Data: x%x x%x x%x x%x\n",
+				 ndlp->nlp_DID, ndlp->nlp_state,
+				 ndlp->nlp_flag, ndlp->nlp_rpi);
 	} else {
 		mempool_free(mbox, phba->mbox_mem_pool);
+
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0135 PLOGI: cannot format reg_login "
+				 "Data: x%x x%x x%x x%x\n",
+				 ndlp->nlp_DID, ndlp->nlp_state,
+				 ndlp->nlp_flag, ndlp->nlp_rpi);
 	}
 
 
- out:
-	/* Free this node since the driver cannot login or has the wrong
-	   sparm */
-	lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+out:
+	if (ndlp->nlp_DID == NameServer_DID) {
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+				 "0261 Cannot Register NameServer login\n");
+	}
+
+	ndlp->nlp_flag |= NLP_DEFER_RM;
 	return NLP_STE_FREED_NODE;
 }
 
 static uint32_t
-lpfc_device_rm_plogi_issue(struct lpfc_hba * phba,
-			   struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_device_rm_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   void *arg, uint32_t evt)
 {
-	if(ndlp->nlp_flag & NLP_NPR_2B_DISC) {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_NODEV_REMOVE;
+		spin_unlock_irq(shost->host_lock);
 		return ndlp->nlp_state;
-	}
-	else {
+	} else {
 		/* software abort outstanding PLOGI */
-		lpfc_els_abort(phba, ndlp);
+		lpfc_els_abort(vport->phba, ndlp);
 
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+		lpfc_drop_node(vport, ndlp);
 		return NLP_STE_FREED_NODE;
 	}
 }
 
 static uint32_t
-lpfc_device_recov_plogi_issue(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_device_recov_plogi_issue(struct lpfc_vport *vport,
+			      struct lpfc_nodelist *ndlp,
+			      void *arg,
+			      uint32_t evt)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
+
+	/* Don't do anything that will mess up processing of the
+	 * previous RSCN.
+	 */
+	if (vport->fc_flag & FC_RSCN_DEFERRED)
+		return ndlp->nlp_state;
+
 	/* software abort outstanding PLOGI */
 	lpfc_els_abort(phba, ndlp);
 
 	ndlp->nlp_prev_state = NLP_STE_PLOGI_ISSUE;
-	ndlp->nlp_state = NLP_STE_NPR_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
-	spin_lock_irq(phba->host->host_lock);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_plogi_adisc_issue(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_plogi_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   void *arg, uint32_t evt)
 {
+	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_iocbq *cmdiocb;
 
 	/* software abort outstanding ADISC */
@@ -860,35 +960,31 @@ lpfc_rcv_plogi_adisc_issue(struct lpfc_hba * phba,
 
 	cmdiocb = (struct lpfc_iocbq *) arg;
 
-	if (lpfc_rcv_plogi(phba, ndlp, cmdiocb)) {
+	if (lpfc_rcv_plogi(vport, ndlp, cmdiocb))
 		return ndlp->nlp_state;
-	}
+
 	ndlp->nlp_prev_state = NLP_STE_ADISC_ISSUE;
-	ndlp->nlp_state = NLP_STE_PLOGI_ISSUE;
-	lpfc_nlp_list(phba, ndlp, NLP_PLOGI_LIST);
-	lpfc_issue_els_plogi(phba, ndlp->nlp_DID, 0);
+	lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE);
 
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prli_adisc_issue(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_prli_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	lpfc_els_rsp_prli_acc(phba, cmdiocb, ndlp);
+	lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_logo_adisc_issue(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_logo_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
+	struct lpfc_hba *phba = vport->phba;
 	struct lpfc_iocbq *cmdiocb;
 
 	cmdiocb = (struct lpfc_iocbq *) arg;
@@ -896,42 +992,43 @@ lpfc_rcv_logo_adisc_issue(struct lpfc_hba * phba,
 	/* software abort outstanding ADISC */
 	lpfc_els_abort(phba, ndlp);
 
-	lpfc_rcv_logo(phba, ndlp, cmdiocb, ELS_CMD_LOGO);
+	lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_LOGO);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_padisc_adisc_issue(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_padisc_adisc_issue(struct lpfc_vport *vport,
+			    struct lpfc_nodelist *ndlp,
+			    void *arg, uint32_t evt)
 {
 	struct lpfc_iocbq *cmdiocb;
 
 	cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_rcv_padisc(phba, ndlp, cmdiocb);
+	lpfc_rcv_padisc(vport, ndlp, cmdiocb);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prlo_adisc_issue(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_prlo_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
 	struct lpfc_iocbq *cmdiocb;
 
 	cmdiocb = (struct lpfc_iocbq *) arg;
 
 	/* Treat like rcv logo */
-	lpfc_rcv_logo(phba, ndlp, cmdiocb, ELS_CMD_PRLO);
+	lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_PRLO);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_cmpl_adisc_adisc_issue(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_cmpl_adisc_adisc_issue(struct lpfc_vport *vport,
+			    struct lpfc_nodelist *ndlp,
+			    void *arg, uint32_t evt)
 {
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_iocbq *cmdiocb, *rspiocb;
 	IOCB_t *irsp;
 	ADISC *ap;
@@ -943,105 +1040,112 @@ lpfc_cmpl_adisc_adisc_issue(struct lpfc_hba * phba,
 	irsp = &rspiocb->iocb;
 
 	if ((irsp->ulpStatus) ||
-		(!lpfc_check_adisc(phba, ndlp, &ap->nodeName, &ap->portName))) {
+	    (!lpfc_check_adisc(vport, ndlp, &ap->nodeName, &ap->portName))) {
 		/* 1 sec timeout */
 		mod_timer(&ndlp->nlp_delayfunc, jiffies + HZ);
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_DELAY_TMO;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
 
-		memset(&ndlp->nlp_nodename, 0, sizeof (struct lpfc_name));
-		memset(&ndlp->nlp_portname, 0, sizeof (struct lpfc_name));
+		memset(&ndlp->nlp_nodename, 0, sizeof(struct lpfc_name));
+		memset(&ndlp->nlp_portname, 0, sizeof(struct lpfc_name));
 
 		ndlp->nlp_prev_state = NLP_STE_ADISC_ISSUE;
-		ndlp->nlp_state = NLP_STE_NPR_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
-		lpfc_unreg_rpi(phba, ndlp);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+		lpfc_unreg_rpi(vport, ndlp);
 		return ndlp->nlp_state;
 	}
 
 	if (ndlp->nlp_type & NLP_FCP_TARGET) {
 		ndlp->nlp_prev_state = NLP_STE_ADISC_ISSUE;
-		ndlp->nlp_state = NLP_STE_MAPPED_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_MAPPED_LIST);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_MAPPED_NODE);
 	} else {
 		ndlp->nlp_prev_state = NLP_STE_ADISC_ISSUE;
-		ndlp->nlp_state = NLP_STE_UNMAPPED_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_UNMAPPED_LIST);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
 	}
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_device_rm_adisc_issue(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_device_rm_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   void *arg, uint32_t evt)
 {
-	if(ndlp->nlp_flag & NLP_NPR_2B_DISC) {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_NODEV_REMOVE;
+		spin_unlock_irq(shost->host_lock);
 		return ndlp->nlp_state;
-	}
-	else {
+	} else {
 		/* software abort outstanding ADISC */
-		lpfc_els_abort(phba, ndlp);
+		lpfc_els_abort(vport->phba, ndlp);
 
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+		lpfc_drop_node(vport, ndlp);
 		return NLP_STE_FREED_NODE;
 	}
 }
 
 static uint32_t
-lpfc_device_recov_adisc_issue(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_device_recov_adisc_issue(struct lpfc_vport *vport,
+			      struct lpfc_nodelist *ndlp,
+			      void *arg,
+			      uint32_t evt)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
+
+	/* Don't do anything that will mess up processing of the
+	 * previous RSCN.
+	 */
+	if (vport->fc_flag & FC_RSCN_DEFERRED)
+		return ndlp->nlp_state;
+
 	/* software abort outstanding ADISC */
 	lpfc_els_abort(phba, ndlp);
 
 	ndlp->nlp_prev_state = NLP_STE_ADISC_ISSUE;
-	ndlp->nlp_state = NLP_STE_NPR_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
-	spin_lock_irq(phba->host->host_lock);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
-	ndlp->nlp_flag |= NLP_NPR_ADISC;
-	spin_unlock_irq(phba->host->host_lock);
-
+	spin_unlock_irq(shost->host_lock);
+	lpfc_disc_set_adisc(vport, ndlp);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_plogi_reglogin_issue(struct lpfc_hba * phba,
-			      struct lpfc_nodelist * ndlp, void *arg,
+lpfc_rcv_plogi_reglogin_issue(struct lpfc_vport *vport,
+			      struct lpfc_nodelist *ndlp,
+			      void *arg,
 			      uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	lpfc_rcv_plogi(phba, ndlp, cmdiocb);
+	lpfc_rcv_plogi(vport, ndlp, cmdiocb);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prli_reglogin_issue(struct lpfc_hba * phba,
-			     struct lpfc_nodelist * ndlp, void *arg,
+lpfc_rcv_prli_reglogin_issue(struct lpfc_vport *vport,
+			     struct lpfc_nodelist *ndlp,
+			     void *arg,
 			     uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_els_rsp_prli_acc(phba, cmdiocb, ndlp);
+	lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_logo_reglogin_issue(struct lpfc_hba * phba,
-			     struct lpfc_nodelist * ndlp, void *arg,
+lpfc_rcv_logo_reglogin_issue(struct lpfc_vport *vport,
+			     struct lpfc_nodelist *ndlp,
+			     void *arg,
 			     uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_hba   *phba = vport->phba;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 	LPFC_MBOXQ_t	  *mb;
 	LPFC_MBOXQ_t	  *nextmb;
 	struct lpfc_dmabuf *mp;
@@ -1052,98 +1156,93 @@ lpfc_rcv_logo_reglogin_issue(struct lpfc_hba * phba,
 	if ((mb = phba->sli.mbox_active)) {
 		if ((mb->mb.mbxCommand == MBX_REG_LOGIN64) &&
 		   (ndlp == (struct lpfc_nodelist *) mb->context2)) {
+			lpfc_nlp_put(ndlp);
 			mb->context2 = NULL;
 			mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
 		}
 	}
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	list_for_each_entry_safe(mb, nextmb, &phba->sli.mboxq, list) {
 		if ((mb->mb.mbxCommand == MBX_REG_LOGIN64) &&
 		   (ndlp == (struct lpfc_nodelist *) mb->context2)) {
 			mp = (struct lpfc_dmabuf *) (mb->context1);
 			if (mp) {
-				lpfc_mbuf_free(phba, mp->virt, mp->phys);
+				__lpfc_mbuf_free(phba, mp->virt, mp->phys);
 				kfree(mp);
 			}
+			lpfc_nlp_put(ndlp);
 			list_del(&mb->list);
 			mempool_free(mb, phba->mbox_mem_pool);
 		}
 	}
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
-	lpfc_rcv_logo(phba, ndlp, cmdiocb, ELS_CMD_LOGO);
+	lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_LOGO);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_padisc_reglogin_issue(struct lpfc_hba * phba,
-			       struct lpfc_nodelist * ndlp, void *arg,
+lpfc_rcv_padisc_reglogin_issue(struct lpfc_vport *vport,
+			       struct lpfc_nodelist *ndlp,
+			       void *arg,
 			       uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_rcv_padisc(phba, ndlp, cmdiocb);
+	lpfc_rcv_padisc(vport, ndlp, cmdiocb);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prlo_reglogin_issue(struct lpfc_hba * phba,
-			     struct lpfc_nodelist * ndlp, void *arg,
+lpfc_rcv_prlo_reglogin_issue(struct lpfc_vport *vport,
+			     struct lpfc_nodelist *ndlp,
+			     void *arg,
 			     uint32_t evt)
 {
 	struct lpfc_iocbq *cmdiocb;
 
 	cmdiocb = (struct lpfc_iocbq *) arg;
-	lpfc_els_rsp_acc(phba, ELS_CMD_PRLO, cmdiocb, ndlp, NULL, 0);
+	lpfc_els_rsp_acc(vport, ELS_CMD_PRLO, cmdiocb, ndlp, NULL);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_hba * phba,
-				  struct lpfc_nodelist * ndlp,
-				  void *arg, uint32_t evt)
+lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport,
+				  struct lpfc_nodelist *ndlp,
+				  void *arg,
+				  uint32_t evt)
 {
-	LPFC_MBOXQ_t *pmb;
-	MAILBOX_t *mb;
-	uint32_t did;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	LPFC_MBOXQ_t *pmb = (LPFC_MBOXQ_t *) arg;
+	MAILBOX_t *mb = &pmb->mb;
+	uint32_t did  = mb->un.varWords[1];
 
-	pmb = (LPFC_MBOXQ_t *) arg;
-	mb = &pmb->mb;
-	did = mb->un.varWords[1];
 	if (mb->mbxStatus) {
 		/* RegLogin failed */
-		lpfc_printf_log(phba,
-				KERN_ERR,
-				LOG_DISCOVERY,
-				"%d:0246 RegLogin failed Data: x%x x%x x%x\n",
-				phba->brd_no,
-				did, mb->mbxStatus, phba->hba_state);
-
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_DISCOVERY,
+				"0246 RegLogin failed Data: x%x x%x x%x\n",
+				did, mb->mbxStatus, vport->port_state);
 		/*
 		 * If RegLogin failed due to lack of HBA resources do not
 		 * retry discovery.
 		 */
 		if (mb->mbxStatus == MBXERR_RPI_FULL) {
-			ndlp->nlp_prev_state = NLP_STE_UNUSED_NODE;
-			ndlp->nlp_state = NLP_STE_UNUSED_NODE;
-			lpfc_nlp_list(phba, ndlp, NLP_UNUSED_LIST);
+			ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
 			return ndlp->nlp_state;
 		}
 
-		/* Put ndlp in npr list set plogi timer for 1 sec */
+		/* Put ndlp in npr state and set plogi timer for 1 sec */
 		mod_timer(&ndlp->nlp_delayfunc, jiffies + HZ * 1);
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_DELAY_TMO;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
 
-		lpfc_issue_els_logo(phba, ndlp, 0);
+		lpfc_issue_els_logo(vport, ndlp, 0);
 		ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
-		ndlp->nlp_state = NLP_STE_NPR_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
 		return ndlp->nlp_state;
 	}
 
@@ -1152,94 +1251,99 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_hba * phba,
 	/* Only if we are not a fabric nport do we issue PRLI */
 	if (!(ndlp->nlp_type & NLP_FABRIC)) {
 		ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
-		ndlp->nlp_state = NLP_STE_PRLI_ISSUE;
-		lpfc_nlp_list(phba, ndlp, NLP_PRLI_LIST);
-		lpfc_issue_els_prli(phba, ndlp, 0);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_PRLI_ISSUE);
+		lpfc_issue_els_prli(vport, ndlp, 0);
 	} else {
 		ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
-		ndlp->nlp_state = NLP_STE_UNMAPPED_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_UNMAPPED_LIST);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
 	}
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_device_rm_reglogin_issue(struct lpfc_hba * phba,
-			      struct lpfc_nodelist * ndlp, void *arg,
+lpfc_device_rm_reglogin_issue(struct lpfc_vport *vport,
+			      struct lpfc_nodelist *ndlp,
+			      void *arg,
 			      uint32_t evt)
 {
-	if(ndlp->nlp_flag & NLP_NPR_2B_DISC) {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_NODEV_REMOVE;
+		spin_unlock_irq(shost->host_lock);
 		return ndlp->nlp_state;
-	}
-	else {
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+	} else {
+		lpfc_drop_node(vport, ndlp);
 		return NLP_STE_FREED_NODE;
 	}
 }
 
 static uint32_t
-lpfc_device_recov_reglogin_issue(struct lpfc_hba * phba,
-			       struct lpfc_nodelist * ndlp, void *arg,
-			       uint32_t evt)
+lpfc_device_recov_reglogin_issue(struct lpfc_vport *vport,
+				 struct lpfc_nodelist *ndlp,
+				 void *arg,
+				 uint32_t evt)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	/* Don't do anything that will mess up processing of the
+	 * previous RSCN.
+	 */
+	if (vport->fc_flag & FC_RSCN_DEFERRED)
+		return ndlp->nlp_state;
+
 	ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
-	ndlp->nlp_state = NLP_STE_NPR_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
-	spin_lock_irq(phba->host->host_lock);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
+	lpfc_disc_set_adisc(vport, ndlp);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_plogi_prli_issue(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_plogi_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
 	struct lpfc_iocbq *cmdiocb;
 
 	cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_rcv_plogi(phba, ndlp, cmdiocb);
+	lpfc_rcv_plogi(vport, ndlp, cmdiocb);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prli_prli_issue(struct lpfc_hba * phba,
-			 struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	lpfc_els_rsp_prli_acc(phba, cmdiocb, ndlp);
+	lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_logo_prli_issue(struct lpfc_hba * phba,
-			 struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_logo_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
 	/* Software abort outstanding PRLI before sending acc */
-	lpfc_els_abort(phba, ndlp);
+	lpfc_els_abort(vport->phba, ndlp);
 
-	lpfc_rcv_logo(phba, ndlp, cmdiocb, ELS_CMD_LOGO);
+	lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_LOGO);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_padisc_prli_issue(struct lpfc_hba * phba,
-			   struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_padisc_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	lpfc_rcv_padisc(phba, ndlp, cmdiocb);
+	lpfc_rcv_padisc(vport, ndlp, cmdiocb);
 	return ndlp->nlp_state;
 }
 
@@ -1249,21 +1353,22 @@ lpfc_rcv_padisc_prli_issue(struct lpfc_hba * phba,
  * NEXT STATE = PRLI_ISSUE
  */
 static uint32_t
-lpfc_rcv_prlo_prli_issue(struct lpfc_hba * phba,
-			 struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_prlo_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-	lpfc_els_rsp_acc(phba, ELS_CMD_PRLO, cmdiocb, ndlp, NULL, 0);
+	lpfc_els_rsp_acc(vport, ELS_CMD_PRLO, cmdiocb, ndlp, NULL);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_cmpl_prli_prli_issue(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
 	struct lpfc_iocbq *cmdiocb, *rspiocb;
+	struct lpfc_hba   *phba = vport->phba;
 	IOCB_t *irsp;
 	PRLI *npr;
 
@@ -1273,9 +1378,12 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_hba * phba,
 
 	irsp = &rspiocb->iocb;
 	if (irsp->ulpStatus) {
+		if ((vport->port_type == LPFC_NPIV_PORT) &&
+		    vport->cfg_restrict_login) {
+			goto out;
+		}
 		ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE;
-		ndlp->nlp_state = NLP_STE_UNMAPPED_NODE;
-		lpfc_nlp_list(phba, ndlp, NLP_UNMAPPED_LIST);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
 		return ndlp->nlp_state;
 	}
 
@@ -1291,327 +1399,329 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_hba * phba,
 		if (npr->Retry)
 			ndlp->nlp_fcp_info |= NLP_FCP_2_DEVICE;
 	}
+	if (!(ndlp->nlp_type & NLP_FCP_TARGET) &&
+	    (vport->port_type == LPFC_NPIV_PORT) &&
+	     vport->cfg_restrict_login) {
+out:
+		spin_lock_irq(shost->host_lock);
+		ndlp->nlp_flag |= NLP_TARGET_REMOVE;
+		spin_unlock_irq(shost->host_lock);
+		lpfc_issue_els_logo(vport, ndlp, 0);
+
+		ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE;
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+		return ndlp->nlp_state;
+	}
 
 	ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE;
-	ndlp->nlp_state = NLP_STE_MAPPED_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_MAPPED_LIST);
+	if (ndlp->nlp_type & NLP_FCP_TARGET)
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_MAPPED_NODE);
+	else
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
 	return ndlp->nlp_state;
 }
 
 /*! lpfc_device_rm_prli_issue
-  *
-  * \pre
-  * \post
-  * \param   phba
-  * \param   ndlp
-  * \param   arg
-  * \param   evt
-  * \return  uint32_t
-  *
-  * \b Description:
-  *    This routine is envoked when we a request to remove a nport we are in the
-  *    process of PRLIing. We should software abort outstanding prli, unreg
-  *    login, send a logout. We will change node state to UNUSED_NODE, put it
-  *    on plogi list so it can be freed when LOGO completes.
-  *
-  */
+ *
+ * \pre
+ * \post
+ * \param   phba
+ * \param   ndlp
+ * \param   arg
+ * \param   evt
+ * \return  uint32_t
+ *
+ * \b Description:
+ *    This routine is invoked when we receive a request to remove an nport we
+ *    are in the process of PRLIing. We should software abort the outstanding
+ *    PRLI, unreg the login, and send a logout. We will change the node state
+ *    to UNUSED_NODE and put it on the plogi list so it can be freed when the
+ *    LOGO completes.
+ *
+ */
+
 static uint32_t
-lpfc_device_rm_prli_issue(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_device_rm_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
-	if(ndlp->nlp_flag & NLP_NPR_2B_DISC) {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_NODEV_REMOVE;
+		spin_unlock_irq(shost->host_lock);
 		return ndlp->nlp_state;
-	}
-	else {
+	} else {
 		/* software abort outstanding PLOGI */
-		lpfc_els_abort(phba, ndlp);
+		lpfc_els_abort(vport->phba, ndlp);
 
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+		lpfc_drop_node(vport, ndlp);
 		return NLP_STE_FREED_NODE;
 	}
 }
 
 
 /*! lpfc_device_recov_prli_issue
-  *
-  * \pre
-  * \post
-  * \param   phba
-  * \param   ndlp
-  * \param   arg
-  * \param   evt
-  * \return  uint32_t
-  *
-  * \b Description:
-  *    The routine is envoked when the state of a device is unknown, like
-  *    during a link down. We should remove the nodelist entry from the
-  *    unmapped list, issue a UNREG_LOGIN, do a software abort of the
-  *    outstanding PRLI command, then free the node entry.
-  */
+ *
+ * \pre
+ * \post
+ * \param   phba
+ * \param   ndlp
+ * \param   arg
+ * \param   evt
+ * \return  uint32_t
+ *
+ * \b Description:
+ *    The routine is invoked when the state of a device is unknown, like
+ *    during a link down. We should remove the nodelist entry from the
+ *    unmapped list, issue a UNREG_LOGIN, do a software abort of the
+ *    outstanding PRLI command, then free the node entry.
+ */
 static uint32_t
-lpfc_device_recov_prli_issue(struct lpfc_hba * phba,
-			   struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_device_recov_prli_issue(struct lpfc_vport *vport,
+			     struct lpfc_nodelist *ndlp,
+			     void *arg,
+			     uint32_t evt)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_hba  *phba = vport->phba;
+
+	/* Don't do anything that will mess up processing of the
+	 * previous RSCN.
+	 */
+	if (vport->fc_flag & FC_RSCN_DEFERRED)
+		return ndlp->nlp_state;
+
 	/* software abort outstanding PRLI */
 	lpfc_els_abort(phba, ndlp);
 
 	ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE;
-	ndlp->nlp_state = NLP_STE_NPR_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
-	spin_lock_irq(phba->host->host_lock);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
+	lpfc_disc_set_adisc(vport, ndlp);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_plogi_unmap_node(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_plogi_unmap_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	lpfc_rcv_plogi(phba, ndlp, cmdiocb);
+	lpfc_rcv_plogi(vport, ndlp, cmdiocb);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prli_unmap_node(struct lpfc_hba * phba,
-			 struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_prli_unmap_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_rcv_prli(phba, ndlp, cmdiocb);
-	lpfc_els_rsp_prli_acc(phba, cmdiocb, ndlp);
+	lpfc_rcv_prli(vport, ndlp, cmdiocb);
+	lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_logo_unmap_node(struct lpfc_hba * phba,
-			 struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_logo_unmap_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_rcv_logo(phba, ndlp, cmdiocb, ELS_CMD_LOGO);
+	lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_LOGO);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_padisc_unmap_node(struct lpfc_hba * phba,
-			   struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_padisc_unmap_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	lpfc_rcv_padisc(phba, ndlp, cmdiocb);
+	lpfc_rcv_padisc(vport, ndlp, cmdiocb);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prlo_unmap_node(struct lpfc_hba * phba,
-			 struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_prlo_unmap_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	lpfc_els_rsp_acc(phba, ELS_CMD_PRLO, cmdiocb, ndlp, NULL, 0);
+	lpfc_els_rsp_acc(vport, ELS_CMD_PRLO, cmdiocb, ndlp, NULL);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_device_recov_unmap_node(struct lpfc_hba * phba,
-			   struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_device_recov_unmap_node(struct lpfc_vport *vport,
+			     struct lpfc_nodelist *ndlp,
+			     void *arg,
+			     uint32_t evt)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
 	ndlp->nlp_prev_state = NLP_STE_UNMAPPED_NODE;
-	ndlp->nlp_state = NLP_STE_NPR_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
-	lpfc_disc_set_adisc(phba, ndlp);
+	spin_unlock_irq(shost->host_lock);
+	lpfc_disc_set_adisc(vport, ndlp);
 
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_plogi_mapped_node(struct lpfc_hba * phba,
-			   struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_plogi_mapped_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_rcv_plogi(phba, ndlp, cmdiocb);
+	lpfc_rcv_plogi(vport, ndlp, cmdiocb);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prli_mapped_node(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_prli_mapped_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_els_rsp_prli_acc(phba, cmdiocb, ndlp);
+	lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_logo_mapped_node(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_logo_mapped_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	lpfc_rcv_logo(phba, ndlp, cmdiocb, ELS_CMD_LOGO);
+	lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_LOGO);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_padisc_mapped_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_padisc_mapped_node(struct lpfc_vport *vport,
+			    struct lpfc_nodelist *ndlp,
+			    void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	lpfc_rcv_padisc(phba, ndlp, cmdiocb);
+	lpfc_rcv_padisc(vport, ndlp, cmdiocb);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prlo_mapped_node(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_rcv_prlo_mapped_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			  void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct lpfc_hba  *phba = vport->phba;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
 	/* flush the target */
-	spin_lock_irq(phba->host->host_lock);
-	lpfc_sli_abort_iocb(phba, &phba->sli.ring[phba->sli.fcp_ring],
-			       ndlp->nlp_sid, 0, 0, LPFC_CTX_TGT);
-	spin_unlock_irq(phba->host->host_lock);
+	lpfc_sli_abort_iocb(vport, &phba->sli.ring[phba->sli.fcp_ring],
+			    ndlp->nlp_sid, 0, LPFC_CTX_TGT);
 
 	/* Treat like rcv logo */
-	lpfc_rcv_logo(phba, ndlp, cmdiocb, ELS_CMD_PRLO);
+	lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_PRLO);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_device_recov_mapped_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_device_recov_mapped_node(struct lpfc_vport *vport,
+			      struct lpfc_nodelist *ndlp,
+			      void *arg,
+			      uint32_t evt)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
 	ndlp->nlp_prev_state = NLP_STE_MAPPED_NODE;
-	ndlp->nlp_state = NLP_STE_NPR_NODE;
-	lpfc_nlp_list(phba, ndlp, NLP_NPR_LIST);
-	spin_lock_irq(phba->host->host_lock);
+	lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
-	spin_unlock_irq(phba->host->host_lock);
-	lpfc_disc_set_adisc(phba, ndlp);
+	spin_unlock_irq(shost->host_lock);
+	lpfc_disc_set_adisc(vport, ndlp);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_plogi_npr_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_plogi_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_iocbq *cmdiocb  = (struct lpfc_iocbq *) arg;
 
 	/* Ignore PLOGI if we have an outstanding LOGO */
-	if (ndlp->nlp_flag & NLP_LOGO_SND) {
+	if (ndlp->nlp_flag & (NLP_LOGO_SND | NLP_LOGO_ACC)) {
 		return ndlp->nlp_state;
 	}
 
-	if (lpfc_rcv_plogi(phba, ndlp, cmdiocb)) {
-		spin_lock_irq(phba->host->host_lock);
+	if (lpfc_rcv_plogi(vport, ndlp, cmdiocb)) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		return ndlp->nlp_state;
 	}
 
 	/* send PLOGI immediately, move to PLOGI issue state */
 	if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) {
 		ndlp->nlp_prev_state = NLP_STE_NPR_NODE;
-		ndlp->nlp_state = NLP_STE_PLOGI_ISSUE;
-		lpfc_nlp_list(phba, ndlp, NLP_PLOGI_LIST);
-		lpfc_issue_els_plogi(phba, ndlp->nlp_DID, 0);
+		lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE);
+		lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
 	}
 
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prli_npr_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_prli_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+		       void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq     *cmdiocb;
-	struct ls_rjt          stat;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct Scsi_Host  *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
+	struct ls_rjt     stat;
 
 	memset(&stat, 0, sizeof (struct ls_rjt));
 	stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
 	stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
-	lpfc_els_rsp_reject(phba, stat.un.lsRjtError, cmdiocb, ndlp);
+	lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp, NULL);
 
 	if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) {
 		if (ndlp->nlp_flag & NLP_NPR_ADISC) {
-			spin_lock_irq(phba->host->host_lock);
+			spin_lock_irq(shost->host_lock);
 			ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-			spin_unlock_irq(phba->host->host_lock);
 			ndlp->nlp_prev_state = NLP_STE_NPR_NODE;
-			ndlp->nlp_state = NLP_STE_ADISC_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_ADISC_LIST);
-			lpfc_issue_els_adisc(phba, ndlp, 0);
+			spin_unlock_irq(shost->host_lock);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_ADISC_ISSUE);
+			lpfc_issue_els_adisc(vport, ndlp, 0);
 		} else {
 			ndlp->nlp_prev_state = NLP_STE_NPR_NODE;
-			ndlp->nlp_state = NLP_STE_PLOGI_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_PLOGI_LIST);
-			lpfc_issue_els_plogi(phba, ndlp->nlp_DID, 0);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE);
+			lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
 		}
-
 	}
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_logo_npr_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_logo_npr_node(struct lpfc_vport *vport,  struct lpfc_nodelist *ndlp,
+		       void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq     *cmdiocb;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	cmdiocb = (struct lpfc_iocbq *) arg;
-
-	lpfc_rcv_logo(phba, ndlp, cmdiocb, ELS_CMD_LOGO);
+	lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_LOGO);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_padisc_npr_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_padisc_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq     *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	lpfc_rcv_padisc(phba, ndlp, cmdiocb);
+	lpfc_rcv_padisc(vport, ndlp, cmdiocb);
 
 	/*
 	 * Do not start discovery if discovery is about to start
@@ -1619,55 +1729,52 @@ lpfc_rcv_padisc_npr_node(struct lpfc_hba * phba,
 	 * here will affect the counting of discovery threads.
 	 */
 	if (!(ndlp->nlp_flag & NLP_DELAY_TMO) &&
-		!(ndlp->nlp_flag & NLP_NPR_2B_DISC)){
+	    !(ndlp->nlp_flag & NLP_NPR_2B_DISC)) {
 		if (ndlp->nlp_flag & NLP_NPR_ADISC) {
+			ndlp->nlp_flag &= ~NLP_NPR_ADISC;
 			ndlp->nlp_prev_state = NLP_STE_NPR_NODE;
-			ndlp->nlp_state = NLP_STE_ADISC_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_ADISC_LIST);
-			lpfc_issue_els_adisc(phba, ndlp, 0);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_ADISC_ISSUE);
+			lpfc_issue_els_adisc(vport, ndlp, 0);
 		} else {
 			ndlp->nlp_prev_state = NLP_STE_NPR_NODE;
-			ndlp->nlp_state = NLP_STE_PLOGI_ISSUE;
-			lpfc_nlp_list(phba, ndlp, NLP_PLOGI_LIST);
-			lpfc_issue_els_plogi(phba, ndlp->nlp_DID, 0);
+			lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE);
+			lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
 		}
 	}
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_rcv_prlo_npr_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_rcv_prlo_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+		       void *arg, uint32_t evt)
 {
-	struct lpfc_iocbq     *cmdiocb;
-
-	cmdiocb = (struct lpfc_iocbq *) arg;
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag |= NLP_LOGO_ACC;
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 
-	lpfc_els_rsp_acc(phba, ELS_CMD_ACC, cmdiocb, ndlp, NULL, 0);
+	lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
 
-	if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) {
+	if ((ndlp->nlp_flag & NLP_DELAY_TMO) == 0) {
 		mod_timer(&ndlp->nlp_delayfunc, jiffies + HZ * 1);
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_DELAY_TMO;
 		ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 		ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
 	} else {
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag &= ~NLP_NPR_ADISC;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(shost->host_lock);
 	}
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_cmpl_plogi_npr_node(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_cmpl_plogi_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
 	struct lpfc_iocbq *cmdiocb, *rspiocb;
 	IOCB_t *irsp;
@@ -1677,15 +1784,15 @@ lpfc_cmpl_plogi_npr_node(struct lpfc_hba * phba,
 
 	irsp = &rspiocb->iocb;
 	if (irsp->ulpStatus) {
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+		ndlp->nlp_flag |= NLP_DEFER_RM;
 		return NLP_STE_FREED_NODE;
 	}
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_cmpl_prli_npr_node(struct lpfc_hba * phba,
-			  struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_cmpl_prli_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			void *arg, uint32_t evt)
 {
 	struct lpfc_iocbq *cmdiocb, *rspiocb;
 	IOCB_t *irsp;
@@ -1695,25 +1802,29 @@ lpfc_cmpl_prli_npr_node(struct lpfc_hba * phba,
 
 	irsp = &rspiocb->iocb;
 	if (irsp->ulpStatus && (ndlp->nlp_flag & NLP_NODEV_REMOVE)) {
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+		lpfc_drop_node(vport, ndlp);
 		return NLP_STE_FREED_NODE;
 	}
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_cmpl_logo_npr_node(struct lpfc_hba * phba,
-		struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
-{
-	lpfc_unreg_rpi(phba, ndlp);
-	/* This routine does nothing, just return the current state */
+lpfc_cmpl_logo_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			void *arg, uint32_t evt)
+{
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+	if (ndlp->nlp_DID == Fabric_DID) {
+		spin_lock_irq(shost->host_lock);
+		vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP);
+		spin_unlock_irq(shost->host_lock);
+	}
+	lpfc_unreg_rpi(vport, ndlp);
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_cmpl_adisc_npr_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_cmpl_adisc_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			 void *arg, uint32_t evt)
 {
 	struct lpfc_iocbq *cmdiocb, *rspiocb;
 	IOCB_t *irsp;
@@ -1723,28 +1834,25 @@ lpfc_cmpl_adisc_npr_node(struct lpfc_hba * phba,
 
 	irsp = &rspiocb->iocb;
 	if (irsp->ulpStatus && (ndlp->nlp_flag & NLP_NODEV_REMOVE)) {
-		lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+		lpfc_drop_node(vport, ndlp);
 		return NLP_STE_FREED_NODE;
 	}
 	return ndlp->nlp_state;
 }
 
 static uint32_t
-lpfc_cmpl_reglogin_npr_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_cmpl_reglogin_npr_node(struct lpfc_vport *vport,
+			    struct lpfc_nodelist *ndlp,
+			    void *arg, uint32_t evt)
 {
-	LPFC_MBOXQ_t *pmb;
-	MAILBOX_t *mb;
-
-	pmb = (LPFC_MBOXQ_t *) arg;
-	mb = &pmb->mb;
+	LPFC_MBOXQ_t *pmb = (LPFC_MBOXQ_t *) arg;
+	MAILBOX_t    *mb = &pmb->mb;
 
 	if (!mb->mbxStatus)
 		ndlp->nlp_rpi = mb->un.varWords[0];
 	else {
 		if (ndlp->nlp_flag & NLP_NODEV_REMOVE) {
-			lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+			lpfc_drop_node(vport, ndlp);
 			return NLP_STE_FREED_NODE;
 		}
 	}
@@ -1752,28 +1860,38 @@ lpfc_cmpl_reglogin_npr_node(struct lpfc_hba * phba,
 }
 
 static uint32_t
-lpfc_device_rm_npr_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_device_rm_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			void *arg, uint32_t evt)
 {
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
 	if (ndlp->nlp_flag & NLP_NPR_2B_DISC) {
+		spin_lock_irq(shost->host_lock);
 		ndlp->nlp_flag |= NLP_NODEV_REMOVE;
+		spin_unlock_irq(shost->host_lock);
 		return ndlp->nlp_state;
 	}
-	lpfc_nlp_list(phba, ndlp, NLP_NO_LIST);
+	lpfc_drop_node(vport, ndlp);
 	return NLP_STE_FREED_NODE;
 }
 
 static uint32_t
-lpfc_device_recov_npr_node(struct lpfc_hba * phba,
-			    struct lpfc_nodelist * ndlp, void *arg,
-			    uint32_t evt)
+lpfc_device_recov_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			   void *arg, uint32_t evt)
 {
-	spin_lock_irq(phba->host->host_lock);
+	struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+
+	/* Don't do anything that will mess up processing of the
+	 * previous RSCN.
+	 */
+	if (vport->fc_flag & FC_RSCN_DEFERRED)
+		return ndlp->nlp_state;
+
+	spin_lock_irq(shost->host_lock);
 	ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC);
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(shost->host_lock);
 	if (ndlp->nlp_flag & NLP_DELAY_TMO) {
-		lpfc_cancel_retry_delay_tmo(phba, ndlp);
+		lpfc_cancel_retry_delay_tmo(vport, ndlp);
 	}
 	return ndlp->nlp_state;
 }
@@ -1836,7 +1954,7 @@ lpfc_device_recov_npr_node(struct lpfc_hba * phba,
  */
 
 static uint32_t (*lpfc_disc_action[NLP_STE_MAX_STATE * NLP_EVT_MAX_EVENT])
-     (struct lpfc_hba *, struct lpfc_nodelist *, void *, uint32_t) = {
+     (struct lpfc_vport *, struct lpfc_nodelist *, void *, uint32_t) = {
 	/* Action routine                  Event       Current State  */
 	lpfc_rcv_plogi_unused_node,	/* RCV_PLOGI   UNUSED_NODE    */
 	lpfc_rcv_els_unused_node,	/* RCV_PRLI        */
@@ -1853,7 +1971,7 @@ static uint32_t (*lpfc_disc_action[NLP_STE_MAX_STATE * NLP_EVT_MAX_EVENT])
 	lpfc_disc_illegal,		/* DEVICE_RECOVERY */
 
 	lpfc_rcv_plogi_plogi_issue,	/* RCV_PLOGI   PLOGI_ISSUE    */
-	lpfc_rcv_els_plogi_issue,	/* RCV_PRLI        */
+	lpfc_rcv_prli_plogi_issue,	/* RCV_PRLI        */
 	lpfc_rcv_logo_plogi_issue,	/* RCV_LOGO        */
 	lpfc_rcv_els_plogi_issue,	/* RCV_ADISC       */
 	lpfc_rcv_els_plogi_issue,	/* RCV_PDISC       */
@@ -1952,48 +2070,39 @@ static uint32_t (*lpfc_disc_action[NLP_STE_MAX_STATE * NLP_EVT_MAX_EVENT])
 };
 
 int
-lpfc_disc_state_machine(struct lpfc_hba * phba,
-			struct lpfc_nodelist * ndlp, void *arg, uint32_t evt)
+lpfc_disc_state_machine(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+			void *arg, uint32_t evt)
 {
 	uint32_t cur_state, rc;
-	uint32_t(*func) (struct lpfc_hba *, struct lpfc_nodelist *, void *,
+	uint32_t(*func) (struct lpfc_vport *, struct lpfc_nodelist *, void *,
 			 uint32_t);
 
-	ndlp->nlp_disc_refcnt++;
+	lpfc_nlp_get(ndlp);
 	cur_state = ndlp->nlp_state;
 
 	/* DSM in event <evt> on NPort <nlp_DID> in state <cur_state> */
-	lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_DISCOVERY,
-			"%d:0211 DSM in event x%x on NPort x%x in state %d "
-			"Data: x%x\n",
-			phba->brd_no,
-			evt, ndlp->nlp_DID, cur_state, ndlp->nlp_flag);
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0211 DSM in event x%x on NPort x%x in "
+			 "state %d Data: x%x\n",
+			 evt, ndlp->nlp_DID, cur_state, ndlp->nlp_flag);
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_DSM,
+		 "DSM in:          evt:%d ste:%d did:x%x",
+		evt, cur_state, ndlp->nlp_DID);
 
 	func = lpfc_disc_action[(cur_state * NLP_EVT_MAX_EVENT) + evt];
-	rc = (func) (phba, ndlp, arg, evt);
+	rc = (func) (vport, ndlp, arg, evt);
 
 	/* DSM out state <rc> on NPort <nlp_DID> */
-	lpfc_printf_log(phba,
-		       KERN_INFO,
-		       LOG_DISCOVERY,
-		       "%d:0212 DSM out state %d on NPort x%x Data: x%x\n",
-		       phba->brd_no,
-		       rc, ndlp->nlp_DID, ndlp->nlp_flag);
-
-	ndlp->nlp_disc_refcnt--;
-
-	/* Check to see if ndlp removal is deferred */
-	if ((ndlp->nlp_disc_refcnt == 0)
-	    && (ndlp->nlp_flag & NLP_DELAY_REMOVE)) {
-		spin_lock_irq(phba->host->host_lock);
-		ndlp->nlp_flag &= ~NLP_DELAY_REMOVE;
-		spin_unlock_irq(phba->host->host_lock);
-		lpfc_nlp_remove(phba, ndlp);
-		return NLP_STE_FREED_NODE;
-	}
-	if (rc == NLP_STE_FREED_NODE)
-		return NLP_STE_FREED_NODE;
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+			 "0212 DSM out state %d on NPort x%x Data: x%x\n",
+			 rc, ndlp->nlp_DID, ndlp->nlp_flag);
+
+	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_DSM,
+		 "DSM out:         ste:%d did:x%x flg:x%x",
+		rc, ndlp->nlp_DID, ndlp->nlp_flag);
+
+	lpfc_nlp_put(ndlp);
+
 	return rc;
 }
diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
index ccb8a72..bb8d833 100644
--- a/drivers/scsi/lpfc/lpfc_scsi.c
+++ b/drivers/scsi/lpfc/lpfc_scsi.c
@@ -37,10 +37,152 @@
 #include "lpfc.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_crtn.h"
+#include "lpfc_vport.h"
 
 #define LPFC_RESET_WAIT  2
 #define LPFC_ABORT_WAIT  2
 
+/*
+ * This function is called with no lock held when there is a resource
+ * error in driver or in firmware.
+ */
+void
+lpfc_adjust_queue_depth(struct lpfc_hba *phba)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&phba->hbalock, flags);
+	atomic_inc(&phba->num_rsrc_err);
+	phba->last_rsrc_error_time = jiffies;
+
+	if ((phba->last_ramp_down_time + QUEUE_RAMP_DOWN_INTERVAL) > jiffies) {
+		spin_unlock_irqrestore(&phba->hbalock, flags);
+		return;
+	}
+
+	phba->last_ramp_down_time = jiffies;
+
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+
+	spin_lock_irqsave(&phba->pport->work_port_lock, flags);
+	if ((phba->pport->work_port_events &
+		WORKER_RAMP_DOWN_QUEUE) == 0) {
+		phba->pport->work_port_events |= WORKER_RAMP_DOWN_QUEUE;
+	}
+	spin_unlock_irqrestore(&phba->pport->work_port_lock, flags);
+
+	spin_lock_irqsave(&phba->hbalock, flags);
+	if (phba->work_wait)
+		wake_up(phba->work_wait);
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+
+	return;
+}
+
+/*
+ * This function is called with no lock held when there is a successful
+ * SCSI command completion.
+ */
+static inline void
+lpfc_rampup_queue_depth(struct lpfc_vport  *vport,
+			struct scsi_device *sdev)
+{
+	unsigned long flags;
+	struct lpfc_hba *phba = vport->phba;
+	atomic_inc(&phba->num_cmd_success);
+
+	if (vport->cfg_lun_queue_depth <= sdev->queue_depth)
+		return;
+	spin_lock_irqsave(&phba->hbalock, flags);
+	if (((phba->last_ramp_up_time + QUEUE_RAMP_UP_INTERVAL) > jiffies) ||
+	 ((phba->last_rsrc_error_time + QUEUE_RAMP_UP_INTERVAL ) > jiffies)) {
+		spin_unlock_irqrestore(&phba->hbalock, flags);
+		return;
+	}
+	phba->last_ramp_up_time = jiffies;
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+
+	spin_lock_irqsave(&phba->pport->work_port_lock, flags);
+	if ((phba->pport->work_port_events &
+		WORKER_RAMP_UP_QUEUE) == 0) {
+		phba->pport->work_port_events |= WORKER_RAMP_UP_QUEUE;
+	}
+	spin_unlock_irqrestore(&phba->pport->work_port_lock, flags);
+
+	spin_lock_irqsave(&phba->hbalock, flags);
+	if (phba->work_wait)
+		wake_up(phba->work_wait);
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+}
+
+void
+lpfc_ramp_down_queue_handler(struct lpfc_hba *phba)
+{
+	struct lpfc_vport **vports;
+	struct Scsi_Host  *shost;
+	struct scsi_device *sdev;
+	unsigned long new_queue_depth;
+	unsigned long num_rsrc_err, num_cmd_success;
+	int i;
+
+	num_rsrc_err = atomic_read(&phba->num_rsrc_err);
+	num_cmd_success = atomic_read(&phba->num_cmd_success);
+
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
+			shost = lpfc_shost_from_vport(vports[i]);
+			shost_for_each_device(sdev, shost) {
+				new_queue_depth =
+					sdev->queue_depth * num_rsrc_err /
+					(num_rsrc_err + num_cmd_success);
+				if (!new_queue_depth)
+					new_queue_depth = sdev->queue_depth - 1;
+				else
+					new_queue_depth = sdev->queue_depth -
+								new_queue_depth;
+				if (sdev->ordered_tags)
+					scsi_adjust_queue_depth(sdev,
+							MSG_ORDERED_TAG,
+							new_queue_depth);
+				else
+					scsi_adjust_queue_depth(sdev,
+							MSG_SIMPLE_TAG,
+							new_queue_depth);
+			}
+		}
+	lpfc_destroy_vport_work_array(vports);
+	atomic_set(&phba->num_rsrc_err, 0);
+	atomic_set(&phba->num_cmd_success, 0);
+}
+
+void
+lpfc_ramp_up_queue_handler(struct lpfc_hba *phba)
+{
+	struct lpfc_vport **vports;
+	struct Scsi_Host  *shost;
+	struct scsi_device *sdev;
+	int i;
+
+	vports = lpfc_create_vport_work_array(phba);
+	if (vports != NULL)
+		for(i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
+			shost = lpfc_shost_from_vport(vports[i]);
+			shost_for_each_device(sdev, shost) {
+				if (sdev->ordered_tags)
+					scsi_adjust_queue_depth(sdev,
+							MSG_ORDERED_TAG,
+							sdev->queue_depth+1);
+				else
+					scsi_adjust_queue_depth(sdev,
+							MSG_SIMPLE_TAG,
+							sdev->queue_depth+1);
+			}
+		}
+	lpfc_destroy_vport_work_array(vports);
+	atomic_set(&phba->num_rsrc_err, 0);
+	atomic_set(&phba->num_cmd_success, 0);
+}
 
 /*
  * This routine allocates a scsi buffer, which contains all the necessary
@@ -51,8 +193,9 @@
  * and the BPL BDE is setup in the IOCB.
  */
 static struct lpfc_scsi_buf *
-lpfc_new_scsi_buf(struct lpfc_hba * phba)
+lpfc_new_scsi_buf(struct lpfc_vport *vport)
 {
+	struct lpfc_hba *phba = vport->phba;
 	struct lpfc_scsi_buf *psb;
 	struct ulp_bde64 *bpl;
 	IOCB_t *iocb;
@@ -63,7 +206,6 @@ lpfc_new_scsi_buf(struct lpfc_hba * phba)
 	if (!psb)
 		return NULL;
 	memset(psb, 0, sizeof (struct lpfc_scsi_buf));
-	psb->scsi_hba = phba;
 
 	/*
 	 * Get memory from the pci pool to map the virt space to pci bus space
@@ -155,7 +297,7 @@ lpfc_get_scsi_buf(struct lpfc_hba * phba)
 }
 
 static void
-lpfc_release_scsi_buf(struct lpfc_hba * phba, struct lpfc_scsi_buf * psb)
+lpfc_release_scsi_buf(struct lpfc_hba *phba, struct lpfc_scsi_buf *psb)
 {
 	unsigned long iflag = 0;
 
@@ -166,13 +308,16 @@ lpfc_release_scsi_buf(struct lpfc_hba * phba, struct lpfc_scsi_buf * psb)
 }
 
 static int
-lpfc_scsi_prep_dma_buf(struct lpfc_hba * phba, struct lpfc_scsi_buf * lpfc_cmd)
+lpfc_scsi_prep_dma_buf(struct lpfc_hba *phba, struct lpfc_scsi_buf *lpfc_cmd)
 {
 	struct scsi_cmnd *scsi_cmnd = lpfc_cmd->pCmd;
 	struct scatterlist *sgel = NULL;
 	struct fcp_cmnd *fcp_cmnd = lpfc_cmd->fcp_cmnd;
 	struct ulp_bde64 *bpl = lpfc_cmd->fcp_bpl;
 	IOCB_t *iocb_cmd = &lpfc_cmd->cur_iocbq.iocb;
+	uint32_t vpi = (lpfc_cmd->cur_iocbq.vport
+			? lpfc_cmd->cur_iocbq.vport->vpi
+			: 0);
 	dma_addr_t physaddr;
 	uint32_t i, num_bde = 0;
 	int datadir = scsi_cmnd->sc_data_direction;
@@ -236,9 +381,9 @@ lpfc_scsi_prep_dma_buf(struct lpfc_hba * phba, struct lpfc_scsi_buf * lpfc_cmd)
 		dma_error = dma_mapping_error(physaddr);
 		if (dma_error) {
 			lpfc_printf_log(phba, KERN_ERR, LOG_FCP,
-				"%d:0718 Unable to dma_map_single "
-				"request_buffer: x%x\n",
-				phba->brd_no, dma_error);
+					"(%d):0718 Unable to dma_map_single "
+					"request_buffer: x%x\n",
+					vpi, dma_error);
 			return 1;
 		}
 
@@ -292,12 +437,12 @@ lpfc_scsi_unprep_dma_buf(struct lpfc_hba * phba, struct lpfc_scsi_buf * psb)
 }
 
 static void
-lpfc_handle_fcp_err(struct lpfc_scsi_buf *lpfc_cmd, struct lpfc_iocbq *rsp_iocb)
+lpfc_handle_fcp_err(struct lpfc_vport *vport, struct lpfc_scsi_buf *lpfc_cmd,
+		    struct lpfc_iocbq *rsp_iocb)
 {
 	struct scsi_cmnd *cmnd = lpfc_cmd->pCmd;
 	struct fcp_cmnd *fcpcmd = lpfc_cmd->fcp_cmnd;
 	struct fcp_rsp *fcprsp = lpfc_cmd->fcp_rsp;
-	struct lpfc_hba *phba = lpfc_cmd->scsi_hba;
 	uint32_t fcpi_parm = rsp_iocb->iocb.un.fcpi.fcpi_parm;
 	uint32_t resp_info = fcprsp->rspStatus2;
 	uint32_t scsi_status = fcprsp->rspStatus3;
@@ -330,15 +475,15 @@ lpfc_handle_fcp_err(struct lpfc_scsi_buf *lpfc_cmd, struct lpfc_iocbq *rsp_iocb)
 	if (!scsi_status && (resp_info & RESID_UNDER))
 		logit = LOG_FCP;
 
-	lpfc_printf_log(phba, KERN_WARNING, logit,
-			"%d:0730 FCP command x%x failed: x%x SNS x%x x%x "
-			"Data: x%x x%x x%x x%x x%x\n",
-			phba->brd_no, cmnd->cmnd[0], scsi_status,
-			be32_to_cpu(*lp), be32_to_cpu(*(lp + 3)), resp_info,
-			be32_to_cpu(fcprsp->rspResId),
-			be32_to_cpu(fcprsp->rspSnsLen),
-			be32_to_cpu(fcprsp->rspRspLen),
-			fcprsp->rspInfo3);
+	lpfc_printf_vlog(vport, KERN_WARNING, logit,
+			 "0730 FCP command x%x failed: x%x SNS x%x x%x "
+			 "Data: x%x x%x x%x x%x x%x\n",
+			 cmnd->cmnd[0], scsi_status,
+			 be32_to_cpu(*lp), be32_to_cpu(*(lp + 3)), resp_info,
+			 be32_to_cpu(fcprsp->rspResId),
+			 be32_to_cpu(fcprsp->rspSnsLen),
+			 be32_to_cpu(fcprsp->rspRspLen),
+			 fcprsp->rspInfo3);
 
 	if (resp_info & RSP_LEN_VALID) {
 		rsplen = be32_to_cpu(fcprsp->rspRspLen);
@@ -353,12 +498,12 @@ lpfc_handle_fcp_err(struct lpfc_scsi_buf *lpfc_cmd, struct lpfc_iocbq *rsp_iocb)
 	if (resp_info & RESID_UNDER) {
 		cmnd->resid = be32_to_cpu(fcprsp->rspResId);
 
-		lpfc_printf_log(phba, KERN_INFO, LOG_FCP,
-				"%d:0716 FCP Read Underrun, expected %d, "
-				"residual %d Data: x%x x%x x%x\n", phba->brd_no,
-				be32_to_cpu(fcpcmd->fcpDl), cmnd->resid,
-				fcpi_parm, cmnd->cmnd[0], cmnd->underflow);
-
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
+				 "0716 FCP Read Underrun, expected %d, "
+				 "residual %d Data: x%x x%x x%x\n",
+				 be32_to_cpu(fcpcmd->fcpDl),
+				 cmnd->resid, fcpi_parm, cmnd->cmnd[0],
+				 cmnd->underflow);
 		/*
 		 * If there is an under run check if under run reported by
 		 * storage array is same as the under run reported by HBA.
@@ -367,13 +512,12 @@ lpfc_handle_fcp_err(struct lpfc_scsi_buf *lpfc_cmd, struct lpfc_iocbq *rsp_iocb)
 		if ((cmnd->sc_data_direction == DMA_FROM_DEVICE) &&
 			fcpi_parm &&
 			(cmnd->resid != fcpi_parm)) {
-			lpfc_printf_log(phba, KERN_WARNING,
-				LOG_FCP | LOG_FCP_ERROR,
-				"%d:0735 FCP Read Check Error and Underrun "
-				"Data: x%x x%x x%x x%x\n", phba->brd_no,
-				be32_to_cpu(fcpcmd->fcpDl),
-				cmnd->resid,
-				fcpi_parm, cmnd->cmnd[0]);
+			lpfc_printf_vlog(vport, KERN_WARNING,
+					 LOG_FCP | LOG_FCP_ERROR,
+					 "0735 FCP Read Check Error "
+					 "and Underrun Data: x%x x%x x%x x%x\n",
+					 be32_to_cpu(fcpcmd->fcpDl),
+					 cmnd->resid, fcpi_parm, cmnd->cmnd[0]);
 			cmnd->resid = cmnd->request_bufflen;
 			host_status = DID_ERROR;
 		}
@@ -386,21 +530,19 @@ lpfc_handle_fcp_err(struct lpfc_scsi_buf *lpfc_cmd, struct lpfc_iocbq *rsp_iocb)
 		if (!(resp_info & SNS_LEN_VALID) &&
 		    (scsi_status == SAM_STAT_GOOD) &&
 		    (cmnd->request_bufflen - cmnd->resid) < cmnd->underflow) {
-			lpfc_printf_log(phba, KERN_INFO, LOG_FCP,
-					"%d:0717 FCP command x%x residual "
-					"underrun converted to error "
-					"Data: x%x x%x x%x\n", phba->brd_no,
-					cmnd->cmnd[0], cmnd->request_bufflen,
-					cmnd->resid, cmnd->underflow);
-
+			lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
+					 "0717 FCP command x%x residual "
+					 "underrun converted to error "
+					 "Data: x%x x%x x%x\n",
+					 cmnd->cmnd[0], cmnd->request_bufflen,
+					 cmnd->resid, cmnd->underflow);
 			host_status = DID_ERROR;
 		}
 	} else if (resp_info & RESID_OVER) {
-		lpfc_printf_log(phba, KERN_WARNING, LOG_FCP,
-				"%d:0720 FCP command x%x residual "
-				"overrun error. Data: x%x x%x \n",
-				phba->brd_no, cmnd->cmnd[0],
-				cmnd->request_bufflen, cmnd->resid);
+		lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+				 "0720 FCP command x%x residual overrun error. "
+				 "Data: x%x x%x \n", cmnd->cmnd[0],
+				 cmnd->request_bufflen, cmnd->resid);
 		host_status = DID_ERROR;
 
 	/*
@@ -409,12 +551,12 @@ lpfc_handle_fcp_err(struct lpfc_scsi_buf *lpfc_cmd, struct lpfc_iocbq *rsp_iocb)
 	 */
 	} else if ((scsi_status == SAM_STAT_GOOD) && fcpi_parm &&
 			(cmnd->sc_data_direction == DMA_FROM_DEVICE)) {
-		lpfc_printf_log(phba, KERN_WARNING, LOG_FCP | LOG_FCP_ERROR,
-			"%d:0734 FCP Read Check Error Data: "
-			"x%x x%x x%x x%x\n", phba->brd_no,
-			be32_to_cpu(fcpcmd->fcpDl),
-			be32_to_cpu(fcprsp->rspResId),
-			fcpi_parm, cmnd->cmnd[0]);
+		lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP | LOG_FCP_ERROR,
+				 "0734 FCP Read Check Error Data: "
+				 "x%x x%x x%x x%x\n",
+				 be32_to_cpu(fcpcmd->fcpDl),
+				 be32_to_cpu(fcprsp->rspResId),
+				 fcpi_parm, cmnd->cmnd[0]);
 		host_status = DID_ERROR;
 		cmnd->resid = cmnd->request_bufflen;
 	}
@@ -429,6 +571,7 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
 {
 	struct lpfc_scsi_buf *lpfc_cmd =
 		(struct lpfc_scsi_buf *) pIocbIn->context1;
+	struct lpfc_vport      *vport = pIocbIn->vport;
 	struct lpfc_rport_data *rdata = lpfc_cmd->rdata;
 	struct lpfc_nodelist *pnode = rdata->pnode;
 	struct scsi_cmnd *cmd = lpfc_cmd->pCmd;
@@ -446,23 +589,32 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
 		else if (lpfc_cmd->status >= IOSTAT_CNT)
 			lpfc_cmd->status = IOSTAT_DEFAULT;
 
-		lpfc_printf_log(phba, KERN_WARNING, LOG_FCP,
-				"%d:0729 FCP cmd x%x failed <%d/%d> status: "
-				"x%x result: x%x Data: x%x x%x\n",
-				phba->brd_no, cmd->cmnd[0], cmd->device->id,
-				cmd->device->lun, lpfc_cmd->status,
-				lpfc_cmd->result, pIocbOut->iocb.ulpContext,
-				lpfc_cmd->cur_iocbq.iocb.ulpIoTag);
+		lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+				 "0729 FCP cmd x%x failed <%d/%d> "
+				 "status: x%x result: x%x Data: x%x x%x\n",
+				 cmd->cmnd[0],
+				 cmd->device ? cmd->device->id : 0xffff,
+				 cmd->device ? cmd->device->lun : 0xffff,
+				 lpfc_cmd->status, lpfc_cmd->result,
+				 pIocbOut->iocb.ulpContext,
+				 lpfc_cmd->cur_iocbq.iocb.ulpIoTag);
 
 		switch (lpfc_cmd->status) {
 		case IOSTAT_FCP_RSP_ERROR:
 			/* Call FCP RSP handler to determine result */
-			lpfc_handle_fcp_err(lpfc_cmd,pIocbOut);
+			lpfc_handle_fcp_err(vport, lpfc_cmd, pIocbOut);
 			break;
 		case IOSTAT_NPORT_BSY:
 		case IOSTAT_FABRIC_BSY:
 			cmd->result = ScsiResult(DID_BUS_BUSY, 0);
 			break;
+		case IOSTAT_LOCAL_REJECT:
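+			/*
+			 * These completion codes are treated as retryable:
+			 * return DID_REQUEUE so the midlayer resubmits the
+			 * command instead of failing it.
+			 */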
+			if (lpfc_cmd->result == RJT_UNAVAIL_PERM ||
+			    lpfc_cmd->result == IOERR_NO_RESOURCES ||
+			    lpfc_cmd->result == RJT_LOGIN_REQUIRED) {
+				cmd->result = ScsiResult(DID_REQUEUE, 0);
+				break;
+			} /* else: fall through */
 		default:
 			cmd->result = ScsiResult(DID_ERROR, 0);
 			break;
@@ -478,12 +630,12 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
 	if (cmd->result || lpfc_cmd->fcp_rsp->rspSnsLen) {
 		uint32_t *lp = (uint32_t *)cmd->sense_buffer;
 
-		lpfc_printf_log(phba, KERN_INFO, LOG_FCP,
-				"%d:0710 Iodone <%d/%d> cmd %p, error x%x "
-				"SNS x%x x%x Data: x%x x%x\n",
-				phba->brd_no, cmd->device->id,
-				cmd->device->lun, cmd, cmd->result,
-				*lp, *(lp + 3), cmd->retries, cmd->resid);
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
+				 "0710 Iodone <%d/%d> cmd %p, error "
+				 "x%x SNS x%x x%x Data: x%x x%x\n",
+				 cmd->device->id, cmd->device->lun, cmd,
+				 cmd->result, *lp, *(lp + 3), cmd->retries,
+				 cmd->resid);
 	}
 
 	result = cmd->result;
@@ -496,14 +648,18 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
 		return;
 	}
 
+
+	if (!result)
+		lpfc_rampup_queue_depth(vport, sdev);
+
 	if (!result && pnode != NULL &&
 	   ((jiffies - pnode->last_ramp_up_time) >
 		LPFC_Q_RAMP_UP_INTERVAL * HZ) &&
 	   ((jiffies - pnode->last_q_full_time) >
 		LPFC_Q_RAMP_UP_INTERVAL * HZ) &&
-	   (phba->cfg_lun_queue_depth > sdev->queue_depth)) {
+	   (vport->cfg_lun_queue_depth > sdev->queue_depth)) {
 		shost_for_each_device(tmp_sdev, sdev->host) {
-			if (phba->cfg_lun_queue_depth > tmp_sdev->queue_depth) {
+			if (vport->cfg_lun_queue_depth > tmp_sdev->queue_depth){
 				if (tmp_sdev->id != sdev->id)
 					continue;
 				if (tmp_sdev->ordered_tags)
@@ -534,7 +690,7 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
 					tmp_sdev->queue_depth - 1);
 		}
 		/*
- 		 * The queue depth cannot be lowered any more.
+		 * The queue depth cannot be lowered any more.
 		 * Modify the returned error code to store
 		 * the final depth value set by
 		 * scsi_track_queue_full.
@@ -543,9 +699,9 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
 			depth = sdev->host->cmd_per_lun;
 
 		if (depth) {
-			lpfc_printf_log(phba, KERN_WARNING, LOG_FCP,
-				"%d:0711 detected queue full - lun queue depth "
-				" adjusted to %d.\n", phba->brd_no, depth);
+			lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+					 "0711 detected queue full - lun queue "
+					 "depth adjusted to %d.\n", depth);
 		}
 	}
 
@@ -553,9 +709,10 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
 }
 
 static void
-lpfc_scsi_prep_cmnd(struct lpfc_hba * phba, struct lpfc_scsi_buf * lpfc_cmd,
-			struct lpfc_nodelist *pnode)
+lpfc_scsi_prep_cmnd(struct lpfc_vport *vport, struct lpfc_scsi_buf *lpfc_cmd,
+		    struct lpfc_nodelist *pnode)
 {
+	struct lpfc_hba *phba = vport->phba;
 	struct scsi_cmnd *scsi_cmnd = lpfc_cmd->pCmd;
 	struct fcp_cmnd *fcp_cmnd = lpfc_cmd->fcp_cmnd;
 	IOCB_t *iocb_cmd = &lpfc_cmd->cur_iocbq.iocb;
@@ -642,15 +799,15 @@ lpfc_scsi_prep_cmnd(struct lpfc_hba * phba, struct lpfc_scsi_buf * lpfc_cmd,
 	piocbq->context1  = lpfc_cmd;
 	piocbq->iocb_cmpl = lpfc_scsi_cmd_iocb_cmpl;
 	piocbq->iocb.ulpTimeout = lpfc_cmd->timeout;
+	piocbq->vport = vport;
 }
 
 static int
-lpfc_scsi_prep_task_mgmt_cmd(struct lpfc_hba *phba,
+lpfc_scsi_prep_task_mgmt_cmd(struct lpfc_vport *vport,
 			     struct lpfc_scsi_buf *lpfc_cmd,
 			     unsigned int lun,
 			     uint8_t task_mgmt_cmd)
 {
-	struct lpfc_sli *psli;
 	struct lpfc_iocbq *piocbq;
 	IOCB_t *piocb;
 	struct fcp_cmnd *fcp_cmnd;
@@ -661,8 +818,9 @@ lpfc_scsi_prep_task_mgmt_cmd(struct lpfc_hba *phba,
 		return 0;
 	}
 
-	psli = &phba->sli;
 	piocbq = &(lpfc_cmd->cur_iocbq);
+	piocbq->vport = vport;
+
 	piocb = &piocbq->iocb;
 
 	fcp_cmnd = lpfc_cmd->fcp_cmnd;
@@ -688,7 +846,7 @@ lpfc_scsi_prep_task_mgmt_cmd(struct lpfc_hba *phba,
 		piocb->ulpTimeout = lpfc_cmd->timeout;
 	}
 
-	return (1);
+	return 1;
 }
 
 static void
@@ -704,10 +862,11 @@ lpfc_tskmgmt_def_cmpl(struct lpfc_hba *phba,
 }
 
 static int
-lpfc_scsi_tgt_reset(struct lpfc_scsi_buf * lpfc_cmd, struct lpfc_hba * phba,
+lpfc_scsi_tgt_reset(struct lpfc_scsi_buf *lpfc_cmd, struct lpfc_vport *vport,
 		    unsigned  tgt_id, unsigned int lun,
 		    struct lpfc_rport_data *rdata)
 {
+	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_iocbq *iocbq;
 	struct lpfc_iocbq *iocbqrsp;
 	int ret;
@@ -716,12 +875,11 @@ lpfc_scsi_tgt_reset(struct lpfc_scsi_buf * lpfc_cmd, struct lpfc_hba * phba,
 		return FAILED;
 
 	lpfc_cmd->rdata = rdata;
-	ret = lpfc_scsi_prep_task_mgmt_cmd(phba, lpfc_cmd, lun,
+	ret = lpfc_scsi_prep_task_mgmt_cmd(vport, lpfc_cmd, lun,
 					   FCP_TARGET_RESET);
 	if (!ret)
 		return FAILED;
 
-	lpfc_cmd->scsi_hba = phba;
 	iocbq = &lpfc_cmd->cur_iocbq;
 	iocbqrsp = lpfc_sli_get_iocbq(phba);
 
@@ -729,12 +887,9 @@ lpfc_scsi_tgt_reset(struct lpfc_scsi_buf * lpfc_cmd, struct lpfc_hba * phba,
 		return FAILED;
 
 	/* Issue Target Reset to TGT <num> */
-	lpfc_printf_log(phba, KERN_INFO, LOG_FCP,
-			"%d:0702 Issue Target Reset to TGT %d "
-			"Data: x%x x%x\n",
-			phba->brd_no, tgt_id, rdata->pnode->nlp_rpi,
-			rdata->pnode->nlp_flag);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
+			 "0702 Issue Target Reset to TGT %d Data: x%x x%x\n",
+			 tgt_id, rdata->pnode->nlp_rpi, rdata->pnode->nlp_flag);
 	ret = lpfc_sli_issue_iocb_wait(phba,
 				       &phba->sli.ring[phba->sli.fcp_ring],
 				       iocbq, iocbqrsp, lpfc_cmd->timeout);
@@ -758,7 +913,8 @@ lpfc_scsi_tgt_reset(struct lpfc_scsi_buf * lpfc_cmd, struct lpfc_hba * phba,
 const char *
 lpfc_info(struct Scsi_Host *host)
 {
-	struct lpfc_hba    *phba = (struct lpfc_hba *) host->hostdata;
+	struct lpfc_vport *vport = (struct lpfc_vport *) host->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	int len;
 	static char  lpfcinfobuf[384];
 
@@ -800,26 +956,22 @@ void lpfc_poll_start_timer(struct lpfc_hba * phba)
 
 void lpfc_poll_timeout(unsigned long ptr)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba *)ptr;
-	unsigned long iflag;
-
-	spin_lock_irqsave(phba->host->host_lock, iflag);
+	struct lpfc_hba *phba = (struct lpfc_hba *) ptr;
 
 	if (phba->cfg_poll & ENABLE_FCP_RING_POLLING) {
 		lpfc_sli_poll_fcp_ring (phba);
 		if (phba->cfg_poll & DISABLE_FCP_RING_INT)
 			lpfc_poll_rearm_timer(phba);
 	}
-
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
 }
 
 static int
 lpfc_queuecommand(struct scsi_cmnd *cmnd, void (*done) (struct scsi_cmnd *))
 {
-	struct lpfc_hba *phba =
-		(struct lpfc_hba *) cmnd->device->host->hostdata;
-	struct lpfc_sli *psli = &phba->sli;
+	struct Scsi_Host  *shost = cmnd->device->host;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	struct lpfc_sli   *psli = &phba->sli;
 	struct lpfc_rport_data *rdata = cmnd->device->hostdata;
 	struct lpfc_nodelist *ndlp = rdata->pnode;
 	struct lpfc_scsi_buf *lpfc_cmd;
@@ -840,11 +992,13 @@ lpfc_queuecommand(struct scsi_cmnd *cmnd, void (*done) (struct scsi_cmnd *))
 		cmnd->result = ScsiResult(DID_BUS_BUSY, 0);
 		goto out_fail_command;
 	}
-	lpfc_cmd = lpfc_get_scsi_buf (phba);
+	lpfc_cmd = lpfc_get_scsi_buf(phba);
 	if (lpfc_cmd == NULL) {
-		lpfc_printf_log(phba, KERN_INFO, LOG_FCP,
-				"%d:0707 driver's buffer pool is empty, "
-				"IO busied\n", phba->brd_no);
+		lpfc_adjust_queue_depth(phba);
+
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
+				 "0707 driver's buffer pool is empty, "
+				 "IO busied\n");
 		goto out_host_busy;
 	}
 
@@ -862,10 +1016,13 @@ lpfc_queuecommand(struct scsi_cmnd *cmnd, void (*done) (struct scsi_cmnd *))
 	if (err)
 		goto out_host_busy_free_buf;
 
-	lpfc_scsi_prep_cmnd(phba, lpfc_cmd, ndlp);
+	lpfc_scsi_prep_cmnd(vport, lpfc_cmd, ndlp);
+
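+	/* With NPIV enabled, give REPORT_LUNS a longer (60 second) EH timeout */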
+	if ((cmnd->cmnd[0] == REPORT_LUNS) && phba->cfg_enable_npiv)
+		mod_timer(&cmnd->eh_timeout, jiffies + 60 * HZ);
 
 	err = lpfc_sli_issue_iocb(phba, &phba->sli.ring[psli->fcp_ring],
-				&lpfc_cmd->cur_iocbq, SLI_IOCB_RET_IOCB);
+				  &lpfc_cmd->cur_iocbq, SLI_IOCB_RET_IOCB);
 	if (err)
 		goto out_host_busy_free_buf;
 
@@ -907,8 +1064,9 @@ lpfc_block_error_handler(struct scsi_cmnd *cmnd)
 static int
 lpfc_abort_handler(struct scsi_cmnd *cmnd)
 {
-	struct Scsi_Host *shost = cmnd->device->host;
-	struct lpfc_hba *phba = (struct lpfc_hba *)shost->hostdata;
+	struct Scsi_Host  *shost = cmnd->device->host;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_sli_ring *pring = &phba->sli.ring[phba->sli.fcp_ring];
 	struct lpfc_iocbq *iocb;
 	struct lpfc_iocbq *abtsiocb;
@@ -918,8 +1076,6 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
 	int ret = SUCCESS;
 
 	lpfc_block_error_handler(cmnd);
-	spin_lock_irq(shost->host_lock);
-
 	lpfc_cmd = (struct lpfc_scsi_buf *)cmnd->host_scribble;
 	BUG_ON(!lpfc_cmd);
 
@@ -956,12 +1112,13 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
 
 	icmd->ulpLe = 1;
 	icmd->ulpClass = cmd->ulpClass;
-	if (phba->hba_state >= LPFC_LINK_UP)
+	if (lpfc_is_link_up(phba))
 		icmd->ulpCommand = CMD_ABORT_XRI_CN;
 	else
 		icmd->ulpCommand = CMD_CLOSE_XRI_CN;
 
 	abtsiocb->iocb_cmpl = lpfc_sli_abort_fcp_cmpl;
+	abtsiocb->vport = vport;
 	if (lpfc_sli_issue_iocb(phba, pring, abtsiocb, 0) == IOCB_ERROR) {
 		lpfc_sli_release_iocbq(phba, abtsiocb);
 		ret = FAILED;
@@ -977,41 +1134,36 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
 		if (phba->cfg_poll & DISABLE_FCP_RING_INT)
 			lpfc_sli_poll_fcp_ring (phba);
 
-		spin_unlock_irq(phba->host->host_lock);
-			schedule_timeout_uninterruptible(LPFC_ABORT_WAIT*HZ);
-		spin_lock_irq(phba->host->host_lock);
+		schedule_timeout_uninterruptible(LPFC_ABORT_WAIT * HZ);
 		if (++loop_count
-		    > (2 * phba->cfg_devloss_tmo)/LPFC_ABORT_WAIT)
+		    > (2 * vport->cfg_devloss_tmo)/LPFC_ABORT_WAIT)
 			break;
 	}
 
 	if (lpfc_cmd->pCmd == cmnd) {
 		ret = FAILED;
-		lpfc_printf_log(phba, KERN_ERR, LOG_FCP,
-				"%d:0748 abort handler timed out waiting for "
-				"abort to complete: ret %#x, ID %d, LUN %d, "
-				"snum %#lx\n",
-				phba->brd_no,  ret, cmnd->device->id,
-				cmnd->device->lun, cmnd->serial_number);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
+				 "0748 abort handler timed out waiting "
+				 "for abort to complete: ret %#x, ID %d, "
+				 "LUN %d, snum %#lx\n",
+				 ret, cmnd->device->id, cmnd->device->lun,
+				 cmnd->serial_number);
 	}
 
  out:
-	lpfc_printf_log(phba, KERN_WARNING, LOG_FCP,
-			"%d:0749 SCSI Layer I/O Abort Request "
-			"Status x%x ID %d LUN %d snum %#lx\n",
-			phba->brd_no, ret, cmnd->device->id,
-			cmnd->device->lun, cmnd->serial_number);
-
-	spin_unlock_irq(shost->host_lock);
-
+	lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+			 "0749 SCSI Layer I/O Abort Request Status x%x ID %d "
+			 "LUN %d snum %#lx\n", ret, cmnd->device->id,
+			 cmnd->device->lun, cmnd->serial_number);
 	return ret;
 }
 
 static int
-lpfc_reset_lun_handler(struct scsi_cmnd *cmnd)
+lpfc_device_reset_handler(struct scsi_cmnd *cmnd)
 {
-	struct Scsi_Host *shost = cmnd->device->host;
-	struct lpfc_hba *phba = (struct lpfc_hba *)shost->hostdata;
+	struct Scsi_Host  *shost = cmnd->device->host;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_scsi_buf *lpfc_cmd;
 	struct lpfc_iocbq *iocbq, *iocbqrsp;
 	struct lpfc_rport_data *rdata = cmnd->device->hostdata;
@@ -1022,28 +1174,25 @@ lpfc_reset_lun_handler(struct scsi_cmnd *cmnd)
 	int cnt, loopcnt;
 
 	lpfc_block_error_handler(cmnd);
-	spin_lock_irq(shost->host_lock);
 	loopcnt = 0;
 	/*
 	 * If target is not in a MAPPED state, delay the reset until
 	 * target is rediscovered or devloss timeout expires.
 	 */
-	while ( 1 ) {
+	while (1) {
 		if (!pnode)
 			goto out;
 
 		if (pnode->nlp_state != NLP_STE_MAPPED_NODE) {
-			spin_unlock_irq(phba->host->host_lock);
 			schedule_timeout_uninterruptible(msecs_to_jiffies(500));
-			spin_lock_irq(phba->host->host_lock);
 			loopcnt++;
 			rdata = cmnd->device->hostdata;
 			if (!rdata ||
-				(loopcnt > ((phba->cfg_devloss_tmo * 2) + 1))) {
-				lpfc_printf_log(phba, KERN_ERR, LOG_FCP,
-		   			"%d:0721 LUN Reset rport failure:"
-					" cnt x%x rdata x%p\n",
-		   			phba->brd_no, loopcnt, rdata);
+				(loopcnt > ((vport->cfg_devloss_tmo * 2) + 1))){
+				lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
+						 "0721 LUN Reset rport "
+						 "failure: cnt x%x rdata x%p\n",
+						 loopcnt, rdata);
 				goto out;
 			}
 			pnode = rdata->pnode;
@@ -1054,16 +1203,15 @@ lpfc_reset_lun_handler(struct scsi_cmnd *cmnd)
 			break;
 	}
 
-	lpfc_cmd = lpfc_get_scsi_buf (phba);
+	lpfc_cmd = lpfc_get_scsi_buf(phba);
 	if (lpfc_cmd == NULL)
 		goto out;
 
 	lpfc_cmd->timeout = 60;
-	lpfc_cmd->scsi_hba = phba;
 	lpfc_cmd->rdata = rdata;
 
-	ret = lpfc_scsi_prep_task_mgmt_cmd(phba, lpfc_cmd, cmnd->device->lun,
-					   FCP_LUN_RESET);
+	ret = lpfc_scsi_prep_task_mgmt_cmd(vport, lpfc_cmd, cmnd->device->lun,
+					   FCP_TARGET_RESET);
 	if (!ret)
 		goto out_free_scsi_buf;
 
@@ -1074,11 +1222,10 @@ lpfc_reset_lun_handler(struct scsi_cmnd *cmnd)
 	if (iocbqrsp == NULL)
 		goto out_free_scsi_buf;
 
-	lpfc_printf_log(phba, KERN_INFO, LOG_FCP,
-			"%d:0703 Issue LUN Reset to TGT %d LUN %d "
-			"Data: x%x x%x\n", phba->brd_no, cmnd->device->id,
-			cmnd->device->lun, pnode->nlp_rpi, pnode->nlp_flag);
-
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
+			 "0703 Issue target reset to TGT %d LUN %d "
+			 "rpi x%x nlp_flag x%x\n", cmnd->device->id,
+			 cmnd->device->lun, pnode->nlp_rpi, pnode->nlp_flag);
 	iocb_status = lpfc_sli_issue_iocb_wait(phba,
 				       &phba->sli.ring[phba->sli.fcp_ring],
 				       iocbq, iocbqrsp, lpfc_cmd->timeout);
@@ -1101,34 +1248,28 @@ lpfc_reset_lun_handler(struct scsi_cmnd *cmnd)
 	 * Unfortunately, some targets do not abide by this forcing the driver
 	 * to double check.
 	 */
-	cnt = lpfc_sli_sum_iocb(phba, &phba->sli.ring[phba->sli.fcp_ring],
-				cmnd->device->id, cmnd->device->lun,
+	cnt = lpfc_sli_sum_iocb(vport, cmnd->device->id, cmnd->device->lun,
 				LPFC_CTX_LUN);
 	if (cnt)
-		lpfc_sli_abort_iocb(phba,
-				    &phba->sli.ring[phba->sli.fcp_ring],
+		lpfc_sli_abort_iocb(vport, &phba->sli.ring[phba->sli.fcp_ring],
 				    cmnd->device->id, cmnd->device->lun,
-				    0, LPFC_CTX_LUN);
+				    LPFC_CTX_LUN);
 	loopcnt = 0;
 	while(cnt) {
-		spin_unlock_irq(phba->host->host_lock);
 		schedule_timeout_uninterruptible(LPFC_RESET_WAIT*HZ);
-		spin_lock_irq(phba->host->host_lock);
 
 		if (++loopcnt
-		    > (2 * phba->cfg_devloss_tmo)/LPFC_RESET_WAIT)
+		    > (2 * vport->cfg_devloss_tmo)/LPFC_RESET_WAIT)
 			break;
 
-		cnt = lpfc_sli_sum_iocb(phba,
-					&phba->sli.ring[phba->sli.fcp_ring],
-					cmnd->device->id, cmnd->device->lun,
-					LPFC_CTX_LUN);
+		cnt = lpfc_sli_sum_iocb(vport, cmnd->device->id,
+					cmnd->device->lun, LPFC_CTX_LUN);
 	}
 
 	if (cnt) {
-		lpfc_printf_log(phba, KERN_ERR, LOG_FCP,
-			"%d:0719 LUN Reset I/O flush failure: cnt x%x\n",
-			phba->brd_no, cnt);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
+				 "0719 device reset I/O flush failure: "
+				 "cnt x%x\n", cnt);
 		ret = FAILED;
 	}
 
@@ -1136,22 +1277,21 @@ out_free_scsi_buf:
 	if (iocb_status != IOCB_TIMEDOUT) {
 		lpfc_release_scsi_buf(phba, lpfc_cmd);
 	}
-	lpfc_printf_log(phba, KERN_ERR, LOG_FCP,
-			"%d:0713 SCSI layer issued LUN reset (%d, %d) "
-			"Data: x%x x%x x%x\n",
-			phba->brd_no, cmnd->device->id,cmnd->device->lun,
-			ret, cmd_status, cmd_result);
-
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
+			 "0713 SCSI layer issued device reset (%d, %d) "
+			 "return x%x status x%x result x%x\n",
+			 cmnd->device->id, cmnd->device->lun, ret,
+			 cmd_status, cmd_result);
 out:
-	spin_unlock_irq(shost->host_lock);
 	return ret;
 }
 
 static int
-lpfc_reset_bus_handler(struct scsi_cmnd *cmnd)
+lpfc_bus_reset_handler(struct scsi_cmnd *cmnd)
 {
-	struct Scsi_Host *shost = cmnd->device->host;
-	struct lpfc_hba *phba = (struct lpfc_hba *)shost->hostdata;
+	struct Scsi_Host  *shost = cmnd->device->host;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_nodelist *ndlp = NULL;
 	int match;
 	int ret = FAILED, i, err_count = 0;
@@ -1159,7 +1299,6 @@ lpfc_reset_bus_handler(struct scsi_cmnd *cmnd)
 	struct lpfc_scsi_buf * lpfc_cmd;
 
 	lpfc_block_error_handler(cmnd);
-	spin_lock_irq(shost->host_lock);
 
 	lpfc_cmd = lpfc_get_scsi_buf(phba);
 	if (lpfc_cmd == NULL)
@@ -1167,7 +1306,6 @@ lpfc_reset_bus_handler(struct scsi_cmnd *cmnd)
 
 	/* The lpfc_cmd storage is reused.  Set all loop invariants. */
 	lpfc_cmd->timeout = 60;
-	lpfc_cmd->scsi_hba = phba;
 
 	/*
 	 * Since the driver manages a single bus device, reset all
@@ -1175,23 +1313,28 @@ lpfc_reset_bus_handler(struct scsi_cmnd *cmnd)
 	 * fail, this routine returns failure to the midlayer.
 	 */
 	for (i = 0; i < LPFC_MAX_TARGET; i++) {
-		/* Search the mapped list for this target ID */
+		/* Search for mapped node by target ID */
 		match = 0;
-		list_for_each_entry(ndlp, &phba->fc_nlpmap_list, nlp_listp) {
-			if ((i == ndlp->nlp_sid) && ndlp->rport) {
+		spin_lock_irq(shost->host_lock);
+		list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+			if (ndlp->nlp_state == NLP_STE_MAPPED_NODE &&
+			    i == ndlp->nlp_sid &&
+			    ndlp->rport) {
 				match = 1;
 				break;
 			}
 		}
+		spin_unlock_irq(shost->host_lock);
 		if (!match)
 			continue;
 
-		ret = lpfc_scsi_tgt_reset(lpfc_cmd, phba, i, cmnd->device->lun,
+		ret = lpfc_scsi_tgt_reset(lpfc_cmd, vport, i,
+					  cmnd->device->lun,
 					  ndlp->rport->dd_data);
 		if (ret != SUCCESS) {
-			lpfc_printf_log(phba, KERN_ERR, LOG_FCP,
-				"%d:0700 Bus Reset on target %d failed\n",
-				phba->brd_no, i);
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
+					 "0700 Bus Reset on target %d failed\n",
+					 i);
 			err_count++;
 			break;
 		}
@@ -1210,47 +1353,39 @@ lpfc_reset_bus_handler(struct scsi_cmnd *cmnd)
 	 * the targets.  Unfortunately, some targets do not abide by
 	 * this forcing the driver to double check.
 	 */
-	cnt = lpfc_sli_sum_iocb(phba, &phba->sli.ring[phba->sli.fcp_ring],
-				0, 0, LPFC_CTX_HOST);
+	cnt = lpfc_sli_sum_iocb(vport, 0, 0, LPFC_CTX_HOST);
 	if (cnt)
-		lpfc_sli_abort_iocb(phba, &phba->sli.ring[phba->sli.fcp_ring],
-				    0, 0, 0, LPFC_CTX_HOST);
+		lpfc_sli_abort_iocb(vport, &phba->sli.ring[phba->sli.fcp_ring],
+				    0, 0, LPFC_CTX_HOST);
 	loopcnt = 0;
 	while(cnt) {
-		spin_unlock_irq(phba->host->host_lock);
 		schedule_timeout_uninterruptible(LPFC_RESET_WAIT*HZ);
-		spin_lock_irq(phba->host->host_lock);
 
 		if (++loopcnt
-		    > (2 * phba->cfg_devloss_tmo)/LPFC_RESET_WAIT)
+		    > (2 * vport->cfg_devloss_tmo)/LPFC_RESET_WAIT)
 			break;
 
-		cnt = lpfc_sli_sum_iocb(phba,
-					&phba->sli.ring[phba->sli.fcp_ring],
-					0, 0, LPFC_CTX_HOST);
+		cnt = lpfc_sli_sum_iocb(vport, 0, 0, LPFC_CTX_HOST);
 	}
 
 	if (cnt) {
-		lpfc_printf_log(phba, KERN_ERR, LOG_FCP,
-		   "%d:0715 Bus Reset I/O flush failure: cnt x%x left x%x\n",
-		   phba->brd_no, cnt, i);
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
+				 "0715 Bus Reset I/O flush failure: "
+				 "cnt x%x left x%x\n", cnt, i);
 		ret = FAILED;
 	}
 
-	lpfc_printf_log(phba,
-			KERN_ERR,
-			LOG_FCP,
-			"%d:0714 SCSI layer issued Bus Reset Data: x%x\n",
-			phba->brd_no, ret);
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
+			 "0714 SCSI layer issued Bus Reset Data: x%x\n", ret);
 out:
-	spin_unlock_irq(shost->host_lock);
 	return ret;
 }
 
 static int
 lpfc_slave_alloc(struct scsi_device *sdev)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba *)sdev->host->hostdata;
+	struct lpfc_vport *vport = (struct lpfc_vport *) sdev->host->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
 	struct lpfc_scsi_buf *scsi_buf = NULL;
 	struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
 	uint32_t total = 0, i;
@@ -1270,28 +1405,32 @@ lpfc_slave_alloc(struct scsi_device *sdev)
 	 * extra.  This list of scsi bufs exists for the lifetime of the driver.
 	 */
 	total = phba->total_scsi_bufs;
-	num_to_alloc = phba->cfg_lun_queue_depth + 2;
-	if (total >= phba->cfg_hba_queue_depth) {
-		lpfc_printf_log(phba, KERN_WARNING, LOG_FCP,
-				"%d:0704 At limitation of %d preallocated "
-				"command buffers\n", phba->brd_no, total);
+	num_to_alloc = vport->cfg_lun_queue_depth + 2;
+
+	/* Allow some exchanges to be available always to complete discovery */
+	if (total >= phba->cfg_hba_queue_depth - LPFC_DISC_IOCB_BUFF_COUNT ) {
+		lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+				 "0704 At limitation of %d preallocated "
+				 "command buffers\n", total);
 		return 0;
-	} else if (total + num_to_alloc > phba->cfg_hba_queue_depth) {
-		lpfc_printf_log(phba, KERN_WARNING, LOG_FCP,
-				"%d:0705 Allocation request of %d command "
-				"buffers will exceed max of %d.  Reducing "
-				"allocation request to %d.\n", phba->brd_no,
-				num_to_alloc, phba->cfg_hba_queue_depth,
-				(phba->cfg_hba_queue_depth - total));
+	/* Allow some exchanges to be available always to complete discovery */
+	} else if (total + num_to_alloc >
+		phba->cfg_hba_queue_depth - LPFC_DISC_IOCB_BUFF_COUNT ) {
+		lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+				 "0705 Allocation request of %d "
+				 "command buffers will exceed max of %d.  "
+				 "Reducing allocation request to %d.\n",
+				 num_to_alloc, phba->cfg_hba_queue_depth,
+				 (phba->cfg_hba_queue_depth - total));
 		num_to_alloc = phba->cfg_hba_queue_depth - total;
 	}
 
 	for (i = 0; i < num_to_alloc; i++) {
-		scsi_buf = lpfc_new_scsi_buf(phba);
+		scsi_buf = lpfc_new_scsi_buf(vport);
 		if (!scsi_buf) {
-			lpfc_printf_log(phba, KERN_ERR, LOG_FCP,
-					"%d:0706 Failed to allocate command "
-					"buffer\n", phba->brd_no);
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_FCP,
+					 "0706 Failed to allocate "
+					 "command buffer\n");
 			break;
 		}
 
@@ -1306,13 +1445,14 @@ lpfc_slave_alloc(struct scsi_device *sdev)
 static int
 lpfc_slave_configure(struct scsi_device *sdev)
 {
-	struct lpfc_hba *phba = (struct lpfc_hba *) sdev->host->hostdata;
-	struct fc_rport *rport = starget_to_rport(sdev->sdev_target);
+	struct lpfc_vport *vport = (struct lpfc_vport *) sdev->host->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	struct fc_rport   *rport = starget_to_rport(sdev->sdev_target);
 
 	if (sdev->tagged_supported)
-		scsi_activate_tcq(sdev, phba->cfg_lun_queue_depth);
+		scsi_activate_tcq(sdev, vport->cfg_lun_queue_depth);
 	else
-		scsi_deactivate_tcq(sdev, phba->cfg_lun_queue_depth);
+		scsi_deactivate_tcq(sdev, vport->cfg_lun_queue_depth);
 
 	/*
 	 * Initialize the fc transport attributes for the target
@@ -1320,7 +1460,7 @@ lpfc_slave_configure(struct scsi_device *sdev)
 	 * target pointer is stored in the starget_data for the
 	 * driver's sysfs entry point functions.
 	 */
-	rport->dev_loss_tmo = phba->cfg_devloss_tmo;
+	rport->dev_loss_tmo = vport->cfg_devloss_tmo;
 
 	if (phba->cfg_poll & ENABLE_FCP_RING_POLLING) {
 		lpfc_sli_poll_fcp_ring(phba);
@@ -1338,14 +1478,53 @@ lpfc_slave_destroy(struct scsi_device *sdev)
 	return;
 }
 
+
 struct scsi_host_template lpfc_template = {
 	.module			= THIS_MODULE,
 	.name			= LPFC_DRIVER_NAME,
 	.info			= lpfc_info,
 	.queuecommand		= lpfc_queuecommand,
 	.eh_abort_handler	= lpfc_abort_handler,
-	.eh_device_reset_handler= lpfc_reset_lun_handler,
-	.eh_bus_reset_handler	= lpfc_reset_bus_handler,
+	.eh_device_reset_handler= lpfc_device_reset_handler,
+	.eh_bus_reset_handler	= lpfc_bus_reset_handler,
+	.slave_alloc		= lpfc_slave_alloc,
+	.slave_configure	= lpfc_slave_configure,
+	.slave_destroy		= lpfc_slave_destroy,
+	.this_id		= -1,
+	.sg_tablesize		= LPFC_SG_SEG_CNT,
+	.cmd_per_lun		= LPFC_CMD_PER_LUN,
+	.use_clustering		= ENABLE_CLUSTERING,
+	.shost_attrs		= lpfc_hba_attrs,
+	.max_sectors		= 0xFFFF,
+};
+
+struct scsi_host_template lpfc_template_no_npiv = {
+	.module			= THIS_MODULE,
+	.name			= LPFC_DRIVER_NAME,
+	.info			= lpfc_info,
+	.queuecommand		= lpfc_queuecommand,
+	.eh_abort_handler	= lpfc_abort_handler,
+	.eh_device_reset_handler= lpfc_device_reset_handler,
+	.eh_bus_reset_handler	= lpfc_bus_reset_handler,
+	.slave_alloc		= lpfc_slave_alloc,
+	.slave_configure	= lpfc_slave_configure,
+	.slave_destroy		= lpfc_slave_destroy,
+	.this_id		= -1,
+	.sg_tablesize		= LPFC_SG_SEG_CNT,
+	.cmd_per_lun		= LPFC_CMD_PER_LUN,
+	.use_clustering		= ENABLE_CLUSTERING,
+	.shost_attrs		= lpfc_hba_attrs_no_npiv,
+	.max_sectors		= 0xFFFF,
+};
+
+struct scsi_host_template lpfc_vport_template = {
+	.module			= THIS_MODULE,
+	.name			= LPFC_DRIVER_NAME,
+	.info			= lpfc_info,
+	.queuecommand		= lpfc_queuecommand,
+	.eh_abort_handler	= lpfc_abort_handler,
+	.eh_device_reset_handler= lpfc_device_reset_handler,
+	.eh_bus_reset_handler	= lpfc_bus_reset_handler,
 	.slave_alloc		= lpfc_slave_alloc,
 	.slave_configure	= lpfc_slave_configure,
 	.slave_destroy		= lpfc_slave_destroy,
@@ -1353,6 +1532,6 @@ struct scsi_host_template lpfc_template = {
 	.sg_tablesize		= LPFC_SG_SEG_CNT,
 	.cmd_per_lun		= LPFC_CMD_PER_LUN,
 	.use_clustering		= ENABLE_CLUSTERING,
-	.shost_attrs		= lpfc_host_attrs,
+	.shost_attrs		= lpfc_vport_attrs,
 	.max_sectors		= 0xFFFF,
 };
diff --git a/drivers/scsi/lpfc/lpfc_scsi.h b/drivers/scsi/lpfc/lpfc_scsi.h
index cdcd253..31787bb 100644
--- a/drivers/scsi/lpfc/lpfc_scsi.h
+++ b/drivers/scsi/lpfc/lpfc_scsi.h
@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2004-2005 Emulex.  All rights reserved.           *
+ * Copyright (C) 2004-2006 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *
  * www.emulex.com                                                  *
  *                                                                 *
@@ -110,7 +110,6 @@ struct fcp_cmnd {
 struct lpfc_scsi_buf {
 	struct list_head list;
 	struct scsi_cmnd *pCmd;
-	struct lpfc_hba *scsi_hba;
 	struct lpfc_rport_data *rdata;
 
 	uint32_t timeout;
diff --git a/drivers/scsi/lpfc/lpfc_security.c b/drivers/scsi/lpfc/lpfc_security.c
new file mode 100644
index 0000000..413b022
--- /dev/null
+++ b/drivers/scsi/lpfc/lpfc_security.c
@@ -0,0 +1,255 @@
+/*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for         *
+ * Fibre Channel Host Bus Adapters.                                *
+ * Copyright (C) 2006-2007 Emulex.  All rights reserved.           *
+ * EMULEX and SLI are trademarks of Emulex.                        *
+ * www.emulex.com                                                  *
+ *                                                                 *
+ * This program is free software; you can redistribute it and/or   *
+ * modify it under the terms of version 2 of the GNU General       *
+ * Public License as published by the Free Software Foundation.    *
+ * This program is distributed in the hope that it will be useful. *
+ * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+ * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+ * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+ * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+ * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+ * more details, a copy of which can be found in the file COPYING  *
+ * included with this package.                                     *
+ *******************************************************************/
+
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+
+#include <scsi/scsi_tcq.h>
+#include <scsi/scsi_transport_fc.h>
+
+#include "lpfc_hw.h"
+#include "lpfc_sli.h"
+#include "lpfc_disc.h"
+#include "lpfc.h"
+#include "lpfc_crtn.h"
+#include "lpfc_logmsg.h"
+#include "lpfc_security.h"
+#include "lpfc_auth_access.h"
+
+uint8_t lpfc_security_service_state = SECURITY_OFFLINE;
+
+void
+lpfc_security_service_online(struct Scsi_Host *shost)
+{
+	lpfc_security_service_state = SECURITY_ONLINE;
+}
+
+void
+lpfc_security_service_offline(struct Scsi_Host *shost)
+{
+	lpfc_security_service_state = SECURITY_OFFLINE;
+}
+
+void
+lpfc_security_config(struct Scsi_Host *shost, int status, void *rsp)
+{
+	struct fc_auth_rsp *auth_rsp = (struct fc_auth_rsp *)rsp;
+	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+	struct lpfc_nodelist *ndlp;
+
+	if (status == 0) {
+		vport->auth.bidirectional =
+			auth_rsp->u.dhchap_security_config.bidirectional;
+		memcpy(&vport->auth.hash_priority[0],
+			&auth_rsp->u.dhchap_security_config.hash_priority[0],
+			sizeof(vport->auth.hash_priority));
+		vport->auth.hash_len =
+			auth_rsp->u.dhchap_security_config.hash_len;
+		memcpy(&vport->auth.dh_group_priority[0],
+			&auth_rsp->u.dhchap_security_config.
+			dh_group_priority[0],
+			sizeof(vport->auth.dh_group_priority));
+		vport->auth.dh_group_len =
+			auth_rsp->u.dhchap_security_config.dh_group_len;
+		vport->auth.reauth_interval =
+			auth_rsp->u.dhchap_security_config.reauth_interval;
+		vport->auth.auth_mode =
+			auth_rsp->u.dhchap_security_config.auth_mode;
+
+		lpfc_printf_vlog(vport, KERN_INFO, LOG_SECURITY,
+			"1025 Received security config local_wwpn:"
+			"%llX remote_wwpn:%llX \nmode:0x%x "
+			"hash(%d):%x:%x:%x:%x bidir:0x%x "
+			"dh_group(%d):%x:%x:%x:%x:%x:%x:%x:%x "
+			"reauth_interval:0x%x\n",
+			(unsigned long long)auth_rsp->local_wwpn,
+			(unsigned long long)auth_rsp->remote_wwpn,
+			auth_rsp->u.dhchap_security_config.auth_mode,
+			auth_rsp->u.dhchap_security_config.hash_len,
+			auth_rsp->u.dhchap_security_config.hash_priority[0],
+			auth_rsp->u.dhchap_security_config.hash_priority[1],
+			auth_rsp->u.dhchap_security_config.hash_priority[2],
+			auth_rsp->u.dhchap_security_config.hash_priority[3],
+			auth_rsp->u.dhchap_security_config.bidirectional,
+			auth_rsp->u.dhchap_security_config.dh_group_len,
+			auth_rsp->u.dhchap_security_config.dh_group_priority[0],
+			auth_rsp->u.dhchap_security_config.dh_group_priority[1],
+			auth_rsp->u.dhchap_security_config.dh_group_priority[2],
+			auth_rsp->u.dhchap_security_config.dh_group_priority[3],
+			auth_rsp->u.dhchap_security_config.dh_group_priority[4],
+			auth_rsp->u.dhchap_security_config.dh_group_priority[5],
+			auth_rsp->u.dhchap_security_config.dh_group_priority[6],
+			auth_rsp->u.dhchap_security_config.dh_group_priority[7],
+			auth_rsp->u.dhchap_security_config.reauth_interval);
+		kfree(auth_rsp);
+	}
+
+	/* re-authenticate whenever we get new configs */
+	ndlp = lpfc_findnode_did(vport, Fabric_DID);
+	if (!ndlp) {
+		lpfc_printf_vlog(vport, KERN_WARNING, LOG_SECURITY,
+				 "1026 Unable to find ndlp. \n");
+		return;
+	}
+	if (vport->port_state == LPFC_VPORT_READY) {
+		lpfc_printf_vlog(vport, KERN_WARNING, LOG_SECURITY,
+				 "1027 Re-Authentication triggered. \n");
+		lpfc_start_authentication(vport, ndlp);
+	}
+}
+
+int
+lpfc_get_security_enabled(struct Scsi_Host *shost)
+{
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+
+	return(vport->cfg_enable_auth);
+}
+
+int
+lpfc_security_wait(void)
+{
+	int i = 0;
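+	/* Poll every half second, up to two minutes, for the service */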
+	while (lpfc_security_service_state == SECURITY_OFFLINE) {
+		i++;
+		if (i > 240) {
+			return -ETIMEDOUT;
+		}
+		/* Delay for half of a second */
+		msleep(500);
+	}
+	return 0;
+}
+
+int
+lpfc_security_config_wait(struct lpfc_vport *vport)
+{
+	int i = 0;
+
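+	/* Poll every half second, up to a minute, for the config to arrive */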
+	while (vport->auth.auth_mode == FC_AUTHMODE_UNKNOWN) {
+		i++;
+		if (i > 120) {
+			return -ETIMEDOUT;
+		}
+		/* Delay for half of a second */
+		msleep(500);
+	}
+	return 0;
+}
+
+void
+lpfc_reauth_node(unsigned long ptr)
+{
+	struct lpfc_nodelist *ndlp = (struct lpfc_nodelist *) ptr;
+	struct lpfc_vport *vport = ndlp->vport;
+	struct lpfc_hba   *phba = vport->phba;
+	unsigned long flags;
+	struct lpfc_work_evt  *evtp = &ndlp->els_reauth_evt;
+
+	ndlp = (struct lpfc_nodelist *) ptr;
+	phba = ndlp->vport->phba;
+
+	spin_lock_irqsave(&phba->hbalock, flags);
+	if (!list_empty(&evtp->evt_listp)) {
+		spin_unlock_irqrestore(&phba->hbalock, flags);
+		return;
+	}
+
+	evtp->evt_arg1  = ndlp;
+	evtp->evt       = LPFC_EVT_REAUTH;
+	list_add_tail(&evtp->evt_listp, &phba->work_list);
+	if (phba->work_wait)
+		lpfc_worker_wake_up(phba);
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+	return;
+}
+
+void
+lpfc_reauthentication_handler(struct lpfc_nodelist *ndlp)
+{
+	struct lpfc_vport *vport = ndlp->vport;
+	if (vport->auth.auth_msg_state != LPFC_DHCHAP_SUCCESS)
+		return;
+
+	if (lpfc_start_node_authentication(ndlp)) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1029 Reauthentication Failure\n");
+		if (vport->auth.auth_state == LPFC_AUTH_SUCCESS)
+			lpfc_port_auth_failed(ndlp);
+	}
+}
+
+/*
+ * This function will kick start authentication for a node.
+ * This is used for re-authentication of a node or a user
+ * initiated node authentication.
+ */
+int
+lpfc_start_node_authentication(struct lpfc_nodelist *ndlp)
+{
+	struct lpfc_vport *vport;
+	struct fc_auth_req auth_req;
+	struct fc_auth_rsp *auth_rsp;
+	struct Scsi_Host   *shost;
+	int ret;
+
+	vport = ndlp->vport;
+	shost = lpfc_shost_from_vport(vport);
+
+	/* If there is authentication timer cancel the timer */
+	del_timer_sync(&ndlp->nlp_reauth_tmr);
+
+	auth_req.local_wwpn = wwn_to_u64(vport->fc_portname.u.wwn);
+	if (ndlp->nlp_type & NLP_FABRIC)
+		auth_req.remote_wwpn = AUTH_FABRIC_WWN;
+	else
+		auth_req.remote_wwpn = wwn_to_u64(ndlp->nlp_portname.u.wwn);
+	if (lpfc_security_service_state == SECURITY_OFFLINE) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1053 Start Authentication: "
+				 "Security service offline.\n");
+		return -EINVAL;
+	}
+	if ((auth_rsp = kmalloc(sizeof(struct fc_auth_rsp),
+			GFP_KERNEL)) == 0) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1028 Start Authentication: No buffers\n");
+		return -ENOMEM;
+	}
+
+	if ((ret = lpfc_fc_security_get_config(shost, &auth_req,
+					       sizeof(struct fc_auth_req),
+					       auth_rsp,
+					       sizeof(struct fc_auth_rsp)))) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1031 Start Authentication: Get config "
+				 "failed.\n");
+		kfree(auth_rsp);
+		return ret;
+	}
+	if ((ret = lpfc_security_config_wait(vport))) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_SECURITY,
+				 "1032 Start Authentication: get config "
+				 "timed out.\n");
+		return ret;
+	}
+	return 0;
+}
diff --git a/drivers/scsi/lpfc/lpfc_security.h b/drivers/scsi/lpfc/lpfc_security.h
new file mode 100644
index 0000000..c86d36d
--- /dev/null
+++ b/drivers/scsi/lpfc/lpfc_security.h
@@ -0,0 +1,22 @@
+/*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for         *
+ * Fibre Channel Host Bus Adapters.                                *
+ * Copyright (C) 2006-2007 Emulex.  All rights reserved.           *
+ * EMULEX and SLI are trademarks of Emulex.                        *
+ * www.emulex.com                                                  *
+ *                                                                 *
+ * This program is free software; you can redistribute it and/or   *
+ * modify it under the terms of version 2 of the GNU General       *
+ * Public License as published by the Free Software Foundation.    *
+ * This program is distributed in the hope that it will be useful. *
+ * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+ * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+ * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+ * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+ * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+ * more details, a copy of which can be found in the file COPYING  *
+ * included with this package.                                     *
+ *******************************************************************/
+
+#define SECURITY_OFFLINE     0x0
+#define SECURITY_ONLINE      0x1
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index be6b3a2..1d807ff 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -38,23 +38,24 @@
 #include "lpfc_crtn.h"
 #include "lpfc_logmsg.h"
 #include "lpfc_compat.h"
+#include "lpfc_debugfs.h"
 
 /*
  * Define macro to log: Mailbox command x%x cannot issue Data
  * This allows multiple uses of lpfc_msgBlk0311
  * w/o perturbing log msg utility.
  */
-#define LOG_MBOX_CANNOT_ISSUE_DATA( phba, mb, psli, flag) \
+#define LOG_MBOX_CANNOT_ISSUE_DATA(phba, pmbox, psli, flag) \
 			lpfc_printf_log(phba, \
 				KERN_INFO, \
 				LOG_MBOX | LOG_SLI, \
-				"%d:0311 Mailbox command x%x cannot issue " \
-				"Data: x%x x%x x%x\n", \
-				phba->brd_no, \
-				mb->mbxCommand,		\
-				phba->hba_state,	\
+				"(%d):0311 Mailbox command x%x cannot " \
+				"issue Data: x%x x%x x%x\n", \
+				pmbox->vport ? pmbox->vport->vpi : 0, \
+				pmbox->mb.mbxCommand,		\
+				phba->pport->port_state,	\
 				psli->sli_flag,	\
-				flag);
+				flag)
 
 
 /* There are only four IOCB completion types. */
@@ -65,8 +66,26 @@ typedef enum _lpfc_iocb_type {
 	LPFC_ABORT_IOCB
 } lpfc_iocb_type;
 
-struct lpfc_iocbq *
-lpfc_sli_get_iocbq(struct lpfc_hba * phba)
+		/* SLI-2/SLI-3 provide different sized iocbs.  Given a pointer
+		 * to the start of the ring, and the slot number of the
+		 * desired iocb entry, calc a pointer to that entry.
+		 */
+static inline IOCB_t *
+lpfc_cmd_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
+{
+	return (IOCB_t *) (((char *) pring->cmdringaddr) +
+			   pring->cmdidx * phba->iocb_cmd_size);
+}
+
+static inline IOCB_t *
+lpfc_resp_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
+{
+	return (IOCB_t *) (((char *) pring->rspringaddr) +
+			   pring->rspidx * phba->iocb_rsp_size);
+}
+
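+/*
+ * __lpfc_sli_get_iocbq() expects the caller to hold hbalock;
+ * lpfc_sli_get_iocbq() below is the locking wrapper around it.
+ */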
+static struct lpfc_iocbq *
+__lpfc_sli_get_iocbq(struct lpfc_hba *phba)
 {
 	struct list_head *lpfc_iocb_list = &phba->lpfc_iocb_list;
 	struct lpfc_iocbq * iocbq = NULL;
@@ -75,10 +94,22 @@ lpfc_sli_get_iocbq(struct lpfc_hba * phba)
 	return iocbq;
 }
 
+struct lpfc_iocbq *
+lpfc_sli_get_iocbq(struct lpfc_hba *phba)
+{
+	struct lpfc_iocbq * iocbq = NULL;
+	unsigned long iflags;
+
+	spin_lock_irqsave(&phba->hbalock, iflags);
+	iocbq = __lpfc_sli_get_iocbq(phba);
+	spin_unlock_irqrestore(&phba->hbalock, iflags);
+	return iocbq;
+}
+
 void
-lpfc_sli_release_iocbq(struct lpfc_hba * phba, struct lpfc_iocbq * iocbq)
+__lpfc_sli_release_iocbq(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
 {
-	size_t start_clean = (size_t)(&((struct lpfc_iocbq *)NULL)->iocb);
+	size_t start_clean = offsetof(struct lpfc_iocbq, iocb);
 
 	/*
 	 * Clean all volatile data fields, preserve iotag and node struct.
@@ -87,6 +118,19 @@ lpfc_sli_release_iocbq(struct lpfc_hba * phba, struct lpfc_iocbq * iocbq)
 	list_add_tail(&iocbq->list, &phba->lpfc_iocb_list);
 }
 
+void
+lpfc_sli_release_iocbq(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
+{
+	unsigned long iflags;
+
+	/*
+	 * Clean all volatile data fields, preserve iotag and node struct.
+	 */
+	spin_lock_irqsave(&phba->hbalock, iflags);
+	__lpfc_sli_release_iocbq(phba, iocbq);
+	spin_unlock_irqrestore(&phba->hbalock, iflags);
+}
+
 /*
  * Translate the iocb command to an iocb command type used to decide the final
  * disposition of each completed IOCB.
@@ -155,6 +199,10 @@ lpfc_sli_iocb_cmd_type(uint8_t iocb_cmnd)
 	case CMD_RCV_ELS_REQ_CX:
 	case CMD_RCV_SEQUENCE64_CX:
 	case CMD_RCV_ELS_REQ64_CX:
+	case CMD_ASYNC_STATUS:
+	case CMD_IOCB_RCV_SEQ64_CX:
+	case CMD_IOCB_RCV_ELS64_CX:
+	case CMD_IOCB_RCV_CONT64_CX:
 		type = LPFC_UNSOL_IOCB;
 		break;
 	default:
@@ -166,73 +214,75 @@ lpfc_sli_iocb_cmd_type(uint8_t iocb_cmnd)
 }
 
 static int
-lpfc_sli_ring_map(struct lpfc_hba * phba, LPFC_MBOXQ_t *pmb)
+lpfc_sli_ring_map(struct lpfc_hba *phba)
 {
 	struct lpfc_sli *psli = &phba->sli;
-	MAILBOX_t *pmbox = &pmb->mb;
-	int i, rc;
+	LPFC_MBOXQ_t *pmb;
+	MAILBOX_t *pmbox;
+	int i, rc, ret = 0;
 
+	pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (!pmb)
+		return -ENOMEM;
+	pmbox = &pmb->mb;
+	phba->link_state = LPFC_INIT_MBX_CMDS;
 	for (i = 0; i < psli->num_rings; i++) {
-		phba->hba_state = LPFC_INIT_MBX_CMDS;
 		lpfc_config_ring(phba, i, pmb);
 		rc = lpfc_sli_issue_mbox(phba, pmb, MBX_POLL);
 		if (rc != MBX_SUCCESS) {
-			lpfc_printf_log(phba,
-					KERN_ERR,
-					LOG_INIT,
-					"%d:0446 Adapter failed to init, "
+			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+					"0446 Adapter failed to init (%d), "
 					"mbxCmd x%x CFG_RING, mbxStatus x%x, "
 					"ring %d\n",
-					phba->brd_no,
-					pmbox->mbxCommand,
-					pmbox->mbxStatus,
-					i);
-			phba->hba_state = LPFC_HBA_ERROR;
-			return -ENXIO;
+					rc, pmbox->mbxCommand,
+					pmbox->mbxStatus, i);
+			phba->link_state = LPFC_HBA_ERROR;
+			ret = -ENXIO;
+			break;
 		}
 	}
-	return 0;
+	mempool_free(pmb, phba->mbox_mem_pool);
+	return ret;
 }
 
 static int
-lpfc_sli_ringtxcmpl_put(struct lpfc_hba * phba,
-			struct lpfc_sli_ring * pring, struct lpfc_iocbq * piocb)
+lpfc_sli_ringtxcmpl_put(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+			struct lpfc_iocbq *piocb)
 {
 	list_add_tail(&piocb->list, &pring->txcmplq);
 	pring->txcmplq_cnt++;
-	if (unlikely(pring->ringno == LPFC_ELS_RING))
-		mod_timer(&phba->els_tmofunc,
-					jiffies + HZ * (phba->fc_ratov << 1));
+	if ((unlikely(pring->ringno == LPFC_ELS_RING)) &&
+	   (piocb->iocb.ulpCommand != CMD_ABORT_XRI_CN) &&
+	   (piocb->iocb.ulpCommand != CMD_CLOSE_XRI_CN)) {
+		if (!piocb->vport)
+			BUG();
+		else
+			mod_timer(&piocb->vport->els_tmofunc,
+				  jiffies + HZ * (phba->fc_ratov << 1));
+	}
+
 
-	return (0);
+	return 0;
 }
 
 static struct lpfc_iocbq *
-lpfc_sli_ringtx_get(struct lpfc_hba * phba, struct lpfc_sli_ring * pring)
+lpfc_sli_ringtx_get(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
 {
-	struct list_head *dlp;
 	struct lpfc_iocbq *cmd_iocb;
 
-	dlp = &pring->txq;
-	cmd_iocb = NULL;
-	list_remove_head((&pring->txq), cmd_iocb,
-			 struct lpfc_iocbq,
-			 list);
-	if (cmd_iocb) {
-		/* If the first ptr is not equal to the list header,
-		 * deque the IOCBQ_t and return it.
-		 */
+	list_remove_head((&pring->txq), cmd_iocb, struct lpfc_iocbq, list);
+	if (cmd_iocb != NULL)
 		pring->txq_cnt--;
-	}
-	return (cmd_iocb);
+	return cmd_iocb;
 }
 
 static IOCB_t *
 lpfc_sli_next_iocb_slot (struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
 {
-	struct lpfc_pgp *pgp = &phba->slim2p->mbx.us.s2.port[pring->ringno];
+	struct lpfc_pgp *pgp = (phba->sli_rev == 3) ?
+		&phba->slim2p->mbx.us.s3_pgp.port[pring->ringno] :
+		&phba->slim2p->mbx.us.s2.port[pring->ringno];
 	uint32_t  max_cmd_idx = pring->numCiocb;
-	IOCB_t *iocb = NULL;
 
 	if ((pring->next_cmdidx == pring->cmdidx) &&
 	   (++pring->next_cmdidx >= max_cmd_idx))
@@ -244,20 +294,22 @@ lpfc_sli_next_iocb_slot (struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
 
 		if (unlikely(pring->local_getidx >= max_cmd_idx)) {
 			lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-					"%d:0315 Ring %d issue: portCmdGet %d "
+					"0315 Ring %d issue: portCmdGet %d "
 					"is bigger then cmd ring %d\n",
-					phba->brd_no, pring->ringno,
+					pring->ringno,
 					pring->local_getidx, max_cmd_idx);
 
-			phba->hba_state = LPFC_HBA_ERROR;
+			phba->link_state = LPFC_HBA_ERROR;
 			/*
 			 * All error attention handlers are posted to
 			 * worker thread
 			 */
 			phba->work_ha |= HA_ERATT;
 			phba->work_hs = HS_FFER3;
+
+			/* hbalock should already be held */
 			if (phba->work_wait)
-				wake_up(phba->work_wait);
+				lpfc_worker_wake_up(phba);
 
 			return NULL;
 		}
@@ -266,39 +318,34 @@ lpfc_sli_next_iocb_slot (struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
 			return NULL;
 	}
 
-	iocb = IOCB_ENTRY(pring->cmdringaddr, pring->cmdidx);
-
-	return iocb;
+	return lpfc_cmd_iocb(phba, pring);
 }
 
 uint16_t
-lpfc_sli_next_iotag(struct lpfc_hba * phba, struct lpfc_iocbq * iocbq)
+lpfc_sli_next_iotag(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
 {
-	struct lpfc_iocbq ** new_arr;
-	struct lpfc_iocbq ** old_arr;
+	struct lpfc_iocbq **new_arr;
+	struct lpfc_iocbq **old_arr;
 	size_t new_len;
 	struct lpfc_sli *psli = &phba->sli;
 	uint16_t iotag;
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	iotag = psli->last_iotag;
 	if(++iotag < psli->iocbq_lookup_len) {
 		psli->last_iotag = iotag;
 		psli->iocbq_lookup[iotag] = iocbq;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(&phba->hbalock);
 		iocbq->iotag = iotag;
 		return iotag;
-	}
-	else if (psli->iocbq_lookup_len < (0xffff
+	} else if (psli->iocbq_lookup_len < (0xffff
 					   - LPFC_IOCBQ_LOOKUP_INCREMENT)) {
 		new_len = psli->iocbq_lookup_len + LPFC_IOCBQ_LOOKUP_INCREMENT;
-		spin_unlock_irq(phba->host->host_lock);
-		new_arr = kmalloc(new_len * sizeof (struct lpfc_iocbq *),
+		spin_unlock_irq(&phba->hbalock);
+		new_arr = kzalloc(new_len * sizeof (struct lpfc_iocbq *),
 				  GFP_KERNEL);
 		if (new_arr) {
-			memset((char *)new_arr, 0,
-			       new_len * sizeof (struct lpfc_iocbq *));
-			spin_lock_irq(phba->host->host_lock);
+			spin_lock_irq(&phba->hbalock);
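+			/*
+			 * hbalock was dropped for the allocation; re-check
+			 * the length in case another thread already grew
+			 * the lookup array in the meantime.
+			 */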
 			old_arr = psli->iocbq_lookup;
 			if (new_len <= psli->iocbq_lookup_len) {
 				/* highly unprobable case */
@@ -307,32 +354,32 @@ lpfc_sli_next_iotag(struct lpfc_hba * phba, struct lpfc_iocbq * iocbq)
 				if(++iotag < psli->iocbq_lookup_len) {
 					psli->last_iotag = iotag;
 					psli->iocbq_lookup[iotag] = iocbq;
-					spin_unlock_irq(phba->host->host_lock);
+					spin_unlock_irq(&phba->hbalock);
 					iocbq->iotag = iotag;
 					return iotag;
 				}
-				spin_unlock_irq(phba->host->host_lock);
+				spin_unlock_irq(&phba->hbalock);
 				return 0;
 			}
 			if (psli->iocbq_lookup)
 				memcpy(new_arr, old_arr,
 				       ((psli->last_iotag  + 1) *
-	 				sizeof (struct lpfc_iocbq *)));
+					sizeof (struct lpfc_iocbq *)));
 			psli->iocbq_lookup = new_arr;
 			psli->iocbq_lookup_len = new_len;
 			psli->last_iotag = iotag;
 			psli->iocbq_lookup[iotag] = iocbq;
-			spin_unlock_irq(phba->host->host_lock);
+			spin_unlock_irq(&phba->hbalock);
 			iocbq->iotag = iotag;
 			kfree(old_arr);
 			return iotag;
 		}
 	} else
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(&phba->hbalock);
 
 	lpfc_printf_log(phba, KERN_ERR,LOG_SLI,
-			"%d:0318 Failed to allocate IOTAG.last IOTAG is %d\n",
-			phba->brd_no, psli->last_iotag);
+			"0318 Failed to allocate IOTAG.last IOTAG is %d\n",
+			psli->last_iotag);
 
 	return 0;
 }
@@ -346,10 +393,18 @@ lpfc_sli_submit_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	 */
 	nextiocb->iocb.ulpIoTag = (nextiocb->iocb_cmpl) ? nextiocb->iotag : 0;
 
+	if (pring->ringno == LPFC_ELS_RING) {
+		lpfc_debugfs_slow_ring_trc(phba,
+			"IOCB cmd ring:   wd4:x%08x wd6:x%08x wd7:x%08x",
+			*(((uint32_t *) &nextiocb->iocb) + 4),
+			*(((uint32_t *) &nextiocb->iocb) + 6),
+			*(((uint32_t *) &nextiocb->iocb) + 7));
+	}
+
 	/*
 	 * Issue iocb command to adapter
 	 */
-	lpfc_sli_pcimem_bcopy(&nextiocb->iocb, iocb, sizeof (IOCB_t));
+	lpfc_sli_pcimem_bcopy(&nextiocb->iocb, iocb, phba->iocb_cmd_size);
 	wmb();
 	pring->stats.iocb_cmd++;
 
@@ -361,20 +416,18 @@ lpfc_sli_submit_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	if (nextiocb->iocb_cmpl)
 		lpfc_sli_ringtxcmpl_put(phba, pring, nextiocb);
 	else
-		lpfc_sli_release_iocbq(phba, nextiocb);
+		__lpfc_sli_release_iocbq(phba, nextiocb);
 
 	/*
 	 * Let the HBA know what IOCB slot will be the next one the
 	 * driver will put a command into.
 	 */
 	pring->cmdidx = pring->next_cmdidx;
-	writel(pring->cmdidx, phba->MBslimaddr
-	       + (SLIMOFF + (pring->ringno * 2)) * 4);
+	writel(pring->cmdidx, &phba->host_gp[pring->ringno].cmdPutInx);
 }
 
 static void
-lpfc_sli_update_full_ring(struct lpfc_hba * phba,
-			  struct lpfc_sli_ring *pring)
+lpfc_sli_update_full_ring(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
 {
 	int ringno = pring->ringno;
 
@@ -393,8 +446,7 @@ lpfc_sli_update_full_ring(struct lpfc_hba * phba,
 }
 
 static void
-lpfc_sli_update_ring(struct lpfc_hba * phba,
-		     struct lpfc_sli_ring *pring)
+lpfc_sli_update_ring(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
 {
 	int ringno = pring->ringno;
 
@@ -407,7 +459,7 @@ lpfc_sli_update_ring(struct lpfc_hba * phba,
 }
 
 static void
-lpfc_sli_resume_iocb(struct lpfc_hba * phba, struct lpfc_sli_ring * pring)
+lpfc_sli_resume_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
 {
 	IOCB_t *iocb;
 	struct lpfc_iocbq *nextiocb;
@@ -420,10 +472,9 @@ lpfc_sli_resume_iocb(struct lpfc_hba * phba, struct lpfc_sli_ring * pring)
 	 *  (d) IOCB processing is not blocked by the outstanding mbox command.
 	 */
 	if (pring->txq_cnt &&
-	    (phba->hba_state > LPFC_LINK_DOWN) &&
+	    lpfc_is_link_up(phba) &&
 	    (pring->ringno != phba->sli.fcp_ring ||
-	     phba->sli.sli_flag & LPFC_PROCESS_LA) &&
-	    !(pring->flag & LPFC_STOP_IOCB_MBX)) {
+	     phba->sli.sli_flag & LPFC_PROCESS_LA)) {
 
 		while ((iocb = lpfc_sli_next_iocb_slot(phba, pring)) &&
 		       (nextiocb = lpfc_sli_ringtx_get(phba, pring)))
@@ -438,24 +489,192 @@ lpfc_sli_resume_iocb(struct lpfc_hba * phba, struct lpfc_sli_ring * pring)
 	return;
 }
 
-/* lpfc_sli_turn_on_ring is only called by lpfc_sli_handle_mb_event below */
-static void
-lpfc_sli_turn_on_ring(struct lpfc_hba * phba, int ringno)
+struct lpfc_hbq_entry *
+lpfc_sli_next_hbq_slot(struct lpfc_hba *phba, uint32_t hbqno)
 {
-	struct lpfc_pgp *pgp = &phba->slim2p->mbx.us.s2.port[ringno];
+	struct hbq_s *hbqp = &phba->hbqs[hbqno];
 
-	/* If the ring is active, flag it */
-	if (phba->sli.ring[ringno].cmdringaddr) {
-		if (phba->sli.ring[ringno].flag & LPFC_STOP_IOCB_MBX) {
-			phba->sli.ring[ringno].flag &= ~LPFC_STOP_IOCB_MBX;
-			/*
-			 * Force update of the local copy of cmdGetInx
-			 */
-			phba->sli.ring[ringno].local_getidx
-				= le32_to_cpu(pgp->cmdGetInx);
-			spin_lock_irq(phba->host->host_lock);
-			lpfc_sli_resume_iocb(phba, &phba->sli.ring[ringno]);
-			spin_unlock_irq(phba->host->host_lock);
+	if (hbqp->next_hbqPutIdx == hbqp->hbqPutIdx &&
+	    ++hbqp->next_hbqPutIdx >= hbqp->entry_count)
+		hbqp->next_hbqPutIdx = 0;
+
+	if (unlikely(hbqp->local_hbqGetIdx == hbqp->next_hbqPutIdx)) {
+		uint32_t raw_index = phba->hbq_get[hbqno];
+		uint32_t getidx = le32_to_cpu(raw_index);
+
+		hbqp->local_hbqGetIdx = getidx;
+
+		if (unlikely(hbqp->local_hbqGetIdx >= hbqp->entry_count)) {
+			lpfc_printf_log(phba, KERN_ERR,
+					LOG_SLI | LOG_VPORT,
+					"1802 HBQ %d: local_hbqGetIdx "
+					"%u is > than hbqp->entry_count %u\n",
+					hbqno, hbqp->local_hbqGetIdx,
+					hbqp->entry_count);
+
+			phba->link_state = LPFC_HBA_ERROR;
+			return NULL;
+		}
+
+		if (hbqp->local_hbqGetIdx == hbqp->next_hbqPutIdx)
+			return NULL;
+	}
+
+	return (struct lpfc_hbq_entry *) phba->hbqs[hbqno].hbq_virt +
+			hbqp->hbqPutIdx;
+}
+
+void
+lpfc_sli_hbqbuf_free_all(struct lpfc_hba *phba)
+{
+	struct lpfc_dmabuf *dmabuf, *next_dmabuf;
+	struct hbq_dmabuf *hbq_buf;
+	int i, hbq_count;
+
+	hbq_count = lpfc_sli_hbq_count();
+	/* Return all memory used by all HBQs */
+	for (i = 0; i < hbq_count; ++i) {
+		list_for_each_entry_safe(dmabuf, next_dmabuf,
+				&phba->hbqs[i].hbq_buffer_list, list) {
+			hbq_buf = container_of(dmabuf, struct hbq_dmabuf, dbuf);
+			list_del(&hbq_buf->dbuf.list);
+			(phba->hbqs[i].hbq_free_buffer)(phba, hbq_buf);
+		}
+		phba->hbqs[i].buffer_count = 0;
+	}
+}
+
+static struct lpfc_hbq_entry *
+lpfc_sli_hbq_to_firmware(struct lpfc_hba *phba, uint32_t hbqno,
+			 struct hbq_dmabuf *hbq_buf)
+{
+	struct lpfc_hbq_entry *hbqe;
+	dma_addr_t physaddr = hbq_buf->dbuf.phys;
+
+	/* Get next HBQ entry slot to use */
+	hbqe = lpfc_sli_next_hbq_slot(phba, hbqno);
+	if (hbqe) {
+		struct hbq_s *hbqp = &phba->hbqs[hbqno];
+
+		hbqe->bde.addrHigh = le32_to_cpu(putPaddrHigh(physaddr));
+		hbqe->bde.addrLow  = le32_to_cpu(putPaddrLow(physaddr));
+		hbqe->bde.tus.f.bdeSize = hbq_buf->size;
+		hbqe->bde.tus.f.bdeFlags = 0;
+		hbqe->bde.tus.w = le32_to_cpu(hbqe->bde.tus.w);
+		hbqe->buffer_tag = le32_to_cpu(hbq_buf->tag);
+				/* Sync SLIM */
+		hbqp->hbqPutIdx = hbqp->next_hbqPutIdx;
+		writel(hbqp->hbqPutIdx, phba->hbq_put + hbqno);
+				/* flush */
+		readl(phba->hbq_put + hbqno);
+		list_add_tail(&hbq_buf->dbuf.list, &hbqp->hbq_buffer_list);
+	}
+	return hbqe;
+}
+
+static struct lpfc_hbq_init lpfc_els_hbq = {
+	.rn = 1,
+	.entry_count = 200,
+	.mask_count = 0,
+	.profile = 0,
+	.ring_mask = (1 << LPFC_ELS_RING),
+	.buffer_count = 0,
+	.init_count = 20,
+	.add_count = 5,
+};
+
+static struct lpfc_hbq_init lpfc_extra_hbq = {
+	.rn = 1,
+	.entry_count = 200,
+	.mask_count = 0,
+	.profile = 0,
+	.ring_mask = (1 << LPFC_EXTRA_RING),
+	.buffer_count = 0,
+	.init_count = 0,
+	.add_count = 5,
+};
+
+struct lpfc_hbq_init *lpfc_hbq_defs[] = {
+	&lpfc_els_hbq,
+	&lpfc_extra_hbq,
+};
+
+static int
+lpfc_sli_hbqbuf_fill_hbqs(struct lpfc_hba *phba, uint32_t hbqno, uint32_t count)
+{
+	uint32_t i, start, end;
+	struct hbq_dmabuf *hbq_buffer;
+
+	if (!phba->hbqs[hbqno].hbq_alloc_buffer) {
+		return 0;
+	}
+
+	start = phba->hbqs[hbqno].buffer_count;
+	end = count + start;
+	if (end > lpfc_hbq_defs[hbqno]->entry_count) {
+		end = lpfc_hbq_defs[hbqno]->entry_count;
+	}
+
+	/* Populate HBQ entries */
+	for (i = start; i < end; i++) {
+		hbq_buffer = (phba->hbqs[hbqno].hbq_alloc_buffer)(phba);
+		if (!hbq_buffer)
+			return 1;
+		hbq_buffer->tag = (i | (hbqno << 16));
+		if (lpfc_sli_hbq_to_firmware(phba, hbqno, hbq_buffer))
+			phba->hbqs[hbqno].buffer_count++;
+		else
+			(phba->hbqs[hbqno].hbq_free_buffer)(phba, hbq_buffer);
+	}
+	return 0;
+}
+
+int
+lpfc_sli_hbqbuf_add_hbqs(struct lpfc_hba *phba, uint32_t qno)
+{
+	return(lpfc_sli_hbqbuf_fill_hbqs(phba, qno,
+					 lpfc_hbq_defs[qno]->add_count));
+}
+
+int
+lpfc_sli_hbqbuf_init_hbqs(struct lpfc_hba *phba, uint32_t qno)
+{
+	return(lpfc_sli_hbqbuf_fill_hbqs(phba, qno,
+					 lpfc_hbq_defs[qno]->init_count));
+}
+
+struct hbq_dmabuf *
+lpfc_sli_hbqbuf_find(struct lpfc_hba *phba, uint32_t tag)
+{
+	struct lpfc_dmabuf *d_buf;
+	struct hbq_dmabuf *hbq_buf;
+	uint32_t hbqno;
+
+	hbqno = tag >> 16;
+	if (hbqno > LPFC_MAX_HBQS)
+		return NULL;
+
+	list_for_each_entry(d_buf, &phba->hbqs[hbqno].hbq_buffer_list, list) {
+		hbq_buf = container_of(d_buf, struct hbq_dmabuf, dbuf);
+		if (hbq_buf->tag == tag) {
+			return hbq_buf;
+		}
+	}
+	lpfc_printf_log(phba, KERN_ERR, LOG_SLI | LOG_VPORT,
+			"1803 Bad hbq tag. Data: x%x x%x\n",
+			tag, phba->hbqs[tag >> 16].buffer_count);
+	return NULL;
+}
+
+void
+lpfc_sli_free_hbq(struct lpfc_hba *phba, struct hbq_dmabuf *hbq_buffer)
+{
+	uint32_t hbqno;
+
+	if (hbq_buffer) {
+		hbqno = hbq_buffer->tag >> 16;
+		if (!lpfc_sli_hbq_to_firmware(phba, hbqno, hbq_buffer)) {
+			(phba->hbqs[hbqno].hbq_free_buffer)(phba, hbq_buffer);
 		}
 	}
 }
@@ -469,6 +688,7 @@ lpfc_sli_chk_mbx_command(uint8_t mbxCommand)
 	case MBX_LOAD_SM:
 	case MBX_READ_NV:
 	case MBX_WRITE_NV:
+	case MBX_WRITE_VPARMS:
 	case MBX_RUN_BIU_DIAG:
 	case MBX_INIT_LINK:
 	case MBX_DOWN_LINK:
@@ -511,32 +731,39 @@ lpfc_sli_chk_mbx_command(uint8_t mbxCommand)
 	case MBX_FLASH_WR_ULA:
 	case MBX_SET_DEBUG:
 	case MBX_LOAD_EXP_ROM:
+	case MBX_ASYNCEVT_ENABLE:
+	case MBX_REG_VPI:
+	case MBX_UNREG_VPI:
+	case MBX_HEARTBEAT:
 		ret = mbxCommand;
 		break;
 	default:
 		ret = MBX_SHUTDOWN;
 		break;
 	}
-	return (ret);
+	return ret;
 }
 static void
-lpfc_sli_wake_mbox_wait(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq)
+lpfc_sli_wake_mbox_wait(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq)
 {
 	wait_queue_head_t *pdone_q;
+	unsigned long drvr_flag;
 
 	/*
 	 * If pdone_q is empty, the driver thread gave up waiting and
 	 * continued running.
 	 */
 	pmboxq->mbox_flag |= LPFC_MBX_WAKE;
+	spin_lock_irqsave(&phba->hbalock, drvr_flag);
 	pdone_q = (wait_queue_head_t *) pmboxq->context1;
 	if (pdone_q)
 		wake_up_interruptible(pdone_q);
+	spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
 	return;
 }
 
 void
-lpfc_sli_def_mbox_cmpl(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
+lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
 {
 	struct lpfc_dmabuf *mp;
 	uint16_t rpi;
@@ -553,131 +780,111 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmb)
 	 * If a REG_LOGIN succeeded  after node is destroyed or node
 	 * is in re-discovery driver need to cleanup the RPI.
 	 */
-	if (!(phba->fc_flag & FC_UNLOADING) &&
-		(pmb->mb.mbxCommand == MBX_REG_LOGIN64) &&
-		(!pmb->mb.mbxStatus)) {
+	if (!(phba->pport->load_flag & FC_UNLOADING) &&
+	    pmb->mb.mbxCommand == MBX_REG_LOGIN64 &&
+	    !pmb->mb.mbxStatus) {
 
 		rpi = pmb->mb.un.varWords[0];
-		lpfc_unreg_login(phba, rpi, pmb);
-		pmb->mbox_cmpl=lpfc_sli_def_mbox_cmpl;
+		lpfc_unreg_login(phba, pmb->mb.un.varRegLogin.vpi, rpi, pmb);
+		pmb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
 		rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
 		if (rc != MBX_NOT_FINISHED)
 			return;
 	}
 
-	mempool_free( pmb, phba->mbox_mem_pool);
+	mempool_free(pmb, phba->mbox_mem_pool);
 	return;
 }
 
 int
-lpfc_sli_handle_mb_event(struct lpfc_hba * phba)
+lpfc_sli_handle_mb_event(struct lpfc_hba *phba)
 {
-	MAILBOX_t *mbox;
 	MAILBOX_t *pmbox;
 	LPFC_MBOXQ_t *pmb;
-	struct lpfc_sli *psli;
-	int i, rc;
-	uint32_t process_next;
-
-	psli = &phba->sli;
-	/* We should only get here if we are in SLI2 mode */
-	if (!(phba->sli.sli_flag & LPFC_SLI2_ACTIVE)) {
-		return (1);
-	}
+	int rc;
+	LIST_HEAD(cmplq);
 
 	phba->sli.slistat.mbox_event++;
 
+	/* Get all completed mailbox buffers into the cmplq */
+	spin_lock_irq(&phba->hbalock);
+	list_splice_init(&phba->sli.mboxq_cmpl, &cmplq);
+	spin_unlock_irq(&phba->hbalock);
+
 	/* Get a Mailbox buffer to setup mailbox commands for callback */
-	if ((pmb = phba->sli.mbox_active)) {
-		pmbox = &pmb->mb;
-		mbox = &phba->slim2p->mbx;
+	do {
+		list_remove_head(&cmplq, pmb, LPFC_MBOXQ_t, list);
+		if (pmb == NULL)
+			break;
 
-		/* First check out the status word */
-		lpfc_sli_pcimem_bcopy(mbox, pmbox, sizeof (uint32_t));
+		pmbox = &pmb->mb;
 
-		/* Sanity check to ensure the host owns the mailbox */
-		if (pmbox->mbxOwner != OWN_HOST) {
-			/* Lets try for a while */
-			for (i = 0; i < 10240; i++) {
-				/* First copy command data */
-				lpfc_sli_pcimem_bcopy(mbox, pmbox,
-							sizeof (uint32_t));
-				if (pmbox->mbxOwner == OWN_HOST)
-					goto mbout;
+		if (pmbox->mbxCommand != MBX_HEARTBEAT) {
+			if (pmb->vport) {
+				lpfc_debugfs_disc_trc(pmb->vport,
+					LPFC_DISC_TRC_MBOX_VPORT,
+					"MBOX cmpl vport: cmd:x%x mb:x%x x%x",
+					(uint32_t)pmbox->mbxCommand,
+					pmbox->un.varWords[0],
+					pmbox->un.varWords[1]);
+			}
+			else {
+				lpfc_debugfs_disc_trc(phba->pport,
+					LPFC_DISC_TRC_MBOX,
+					"MBOX cmpl:       cmd:x%x mb:x%x x%x",
+					(uint32_t)pmbox->mbxCommand,
+					pmbox->un.varWords[0],
+					pmbox->un.varWords[1]);
 			}
-			/* Stray Mailbox Interrupt, mbxCommand <cmd> mbxStatus
-			   <status> */
-			lpfc_printf_log(phba,
-					KERN_WARNING,
-					LOG_MBOX | LOG_SLI,
-					"%d:0304 Stray Mailbox Interrupt "
-					"mbxCommand x%x mbxStatus x%x\n",
-					phba->brd_no,
-					pmbox->mbxCommand,
-					pmbox->mbxStatus);
-
-			spin_lock_irq(phba->host->host_lock);
-			phba->sli.sli_flag |= LPFC_SLI_MBOX_ACTIVE;
-			spin_unlock_irq(phba->host->host_lock);
-			return (1);
 		}
 
-	      mbout:
-		del_timer_sync(&phba->sli.mbox_tmo);
-		phba->work_hba_events &= ~WORKER_MBOX_TMO;
-
 		/*
 		 * It is a fatal error if unknown mbox command completion.
 		 */
 		if (lpfc_sli_chk_mbx_command(pmbox->mbxCommand) ==
 		    MBX_SHUTDOWN) {
-
 			/* Unknow mailbox command compl */
-			lpfc_printf_log(phba,
-				KERN_ERR,
-				LOG_MBOX | LOG_SLI,
-				"%d:0323 Unknown Mailbox command %x Cmpl\n",
-				phba->brd_no,
-				pmbox->mbxCommand);
-			phba->hba_state = LPFC_HBA_ERROR;
+			lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
+					"(%d):0323 Unknown Mailbox command "
+					"%x Cmpl\n",
+					pmb->vport ? pmb->vport->vpi : 0,
+					pmbox->mbxCommand);
+			phba->link_state = LPFC_HBA_ERROR;
 			phba->work_hs = HS_FFER3;
 			lpfc_handle_eratt(phba);
-			return (0);
+			continue;
 		}
 
-		phba->sli.mbox_active = NULL;
 		if (pmbox->mbxStatus) {
 			phba->sli.slistat.mbox_stat_err++;
 			if (pmbox->mbxStatus == MBXERR_NO_RESOURCES) {
 				/* Mbox cmd cmpl error - RETRYing */
-				lpfc_printf_log(phba,
-					KERN_INFO,
-					LOG_MBOX | LOG_SLI,
-					"%d:0305 Mbox cmd cmpl error - "
-					"RETRYing Data: x%x x%x x%x x%x\n",
-					phba->brd_no,
-					pmbox->mbxCommand,
-					pmbox->mbxStatus,
-					pmbox->un.varWords[0],
-					phba->hba_state);
+				lpfc_printf_log(phba, KERN_INFO,
+						LOG_MBOX | LOG_SLI,
+						"(%d):0305 Mbox cmd cmpl "
+						"error - RETRYing Data: x%x "
+						"x%x x%x x%x\n",
+						pmb->vport ? pmb->vport->vpi :0,
+						pmbox->mbxCommand,
+						pmbox->mbxStatus,
+						pmbox->un.varWords[0],
+						pmb->vport->port_state);
 				pmbox->mbxStatus = 0;
 				pmbox->mbxOwner = OWN_HOST;
-				spin_lock_irq(phba->host->host_lock);
+				spin_lock_irq(&phba->hbalock);
 				phba->sli.sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
-				spin_unlock_irq(phba->host->host_lock);
+				spin_unlock_irq(&phba->hbalock);
 				rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
 				if (rc == MBX_SUCCESS)
-					return (0);
+					continue;
 			}
 		}
 
 		/* Mailbox cmd <cmd> Cmpl <cmpl> */
-		lpfc_printf_log(phba,
-				KERN_INFO,
-				LOG_MBOX | LOG_SLI,
-				"%d:0307 Mailbox cmd x%x Cmpl x%p "
+		lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
+				"(%d):0307 Mailbox cmd x%x Cmpl x%p "
 				"Data: x%x x%x x%x x%x x%x x%x x%x x%x x%x\n",
-				phba->brd_no,
+				pmb->vport ? pmb->vport->vpi : 0,
 				pmbox->mbxCommand,
 				pmb->mbox_cmpl,
 				*((uint32_t *) pmbox),
@@ -690,56 +897,41 @@ lpfc_sli_handle_mb_event(struct lpfc_hba * phba)
 				pmbox->un.varWords[6],
 				pmbox->un.varWords[7]);
 
-		if (pmb->mbox_cmpl) {
-			lpfc_sli_pcimem_bcopy(mbox, pmbox, MAILBOX_CMD_SIZE);
+		if (pmb->mbox_cmpl)
 			pmb->mbox_cmpl(phba,pmb);
-		}
-	}
-
-
-	do {
-		process_next = 0;	/* by default don't loop */
-		spin_lock_irq(phba->host->host_lock);
-		phba->sli.sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
-
-		/* Process next mailbox command if there is one */
-		if ((pmb = lpfc_mbox_get(phba))) {
-			spin_unlock_irq(phba->host->host_lock);
-			rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
-			if (rc == MBX_NOT_FINISHED) {
-				pmb->mb.mbxStatus = MBX_NOT_FINISHED;
-				pmb->mbox_cmpl(phba,pmb);
-				process_next = 1;
-				continue;	/* loop back */
-			}
-		} else {
-			spin_unlock_irq(phba->host->host_lock);
-			/* Turn on IOCB processing */
-			for (i = 0; i < phba->sli.num_rings; i++) {
-				lpfc_sli_turn_on_ring(phba, i);
-			}
+	} while (1);
+	return 0;
+}
 
-			/* Free any lpfc_dmabuf's waiting for mbox cmd cmpls */
-			while (!list_empty(&phba->freebufList)) {
-				struct lpfc_dmabuf *mp;
-
-				mp = NULL;
-				list_remove_head((&phba->freebufList),
-						 mp,
-						 struct lpfc_dmabuf,
-						 list);
-				if (mp) {
-					lpfc_mbuf_free(phba, mp->virt,
-						       mp->phys);
-					kfree(mp);
-				}
-			}
-		}
+static struct lpfc_dmabuf *
+lpfc_sli_replace_hbqbuff(struct lpfc_hba *phba, uint32_t tag)
+{
+	struct hbq_dmabuf *hbq_entry, *new_hbq_entry;
+	uint32_t hbqno;
+	void *virt;		/* virtual address ptr */
+	dma_addr_t phys;	/* mapped address */
+
+	hbq_entry = lpfc_sli_hbqbuf_find(phba, tag);
+	if (hbq_entry == NULL)
+		return NULL;
+	list_del(&hbq_entry->dbuf.list);
+
+	hbqno = tag >> 16;
+	new_hbq_entry = (phba->hbqs[hbqno].hbq_alloc_buffer)(phba);
+	if (new_hbq_entry == NULL)
+		return &hbq_entry->dbuf;
+	new_hbq_entry->tag = -1;
+	phys = new_hbq_entry->dbuf.phys;
+	virt = new_hbq_entry->dbuf.virt;
+	new_hbq_entry->dbuf.phys = hbq_entry->dbuf.phys;
+	new_hbq_entry->dbuf.virt = hbq_entry->dbuf.virt;
+	hbq_entry->dbuf.phys = phys;
+	hbq_entry->dbuf.virt = virt;
+	lpfc_sli_free_hbq(phba, hbq_entry);
+	return &new_hbq_entry->dbuf;
+}
 
-	} while (process_next);
 
-	return (0);
-}
 static int
 lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 			    struct lpfc_iocbq *saveq)
@@ -751,8 +943,26 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 
 	match = 0;
 	irsp = &(saveq->iocb);
+
+	if (irsp->ulpCommand == CMD_ASYNC_STATUS) {
+		if (pring->lpfc_sli_rcv_async_status)
+			pring->lpfc_sli_rcv_async_status(phba, pring, saveq);
+		else
+			lpfc_printf_log(phba,
+					KERN_WARNING,
+					LOG_SLI,
+					"0316 Ring %d handler: unexpected "
+					"ASYNC_STATUS iocb received evt_code "
+					"0x%x\n",
+					pring->ringno,
+					irsp->un.asyncstat.evt_code);
+		return 1;
+	}
+
 	if ((irsp->ulpCommand == CMD_RCV_ELS_REQ64_CX)
-	    || (irsp->ulpCommand == CMD_RCV_ELS_REQ_CX)) {
+	    || (irsp->ulpCommand == CMD_RCV_ELS_REQ_CX)
+	    || (irsp->ulpCommand == CMD_IOCB_RCV_ELS64_CX)
+	    || (irsp->ulpCommand == CMD_IOCB_RCV_CONT64_CX)) {
 		Rctl = FC_ELS_REQ;
 		Type = FC_ELS_DATA;
 	} else {
@@ -764,13 +974,24 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 
 		/* Firmware Workaround */
 		if ((Rctl == 0) && (pring->ringno == LPFC_ELS_RING) &&
-			(irsp->ulpCommand == CMD_RCV_SEQUENCE64_CX)) {
+			(irsp->ulpCommand == CMD_RCV_SEQUENCE64_CX ||
+			 irsp->ulpCommand == CMD_IOCB_RCV_SEQ64_CX)) {
 			Rctl = FC_ELS_REQ;
 			Type = FC_ELS_DATA;
 			w5p->hcsw.Rctl = Rctl;
 			w5p->hcsw.Type = Type;
 		}
 	}
+
+	if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
+		if (irsp->ulpBdeCount != 0)
+			saveq->context2 = lpfc_sli_replace_hbqbuff(phba,
+						irsp->un.ulpWord[3]);
+		if (irsp->ulpBdeCount == 2)
+			saveq->context3 = lpfc_sli_replace_hbqbuff(phba,
+						irsp->unsli3.sli3Words[7]);
+	}
+
 	/* unSolicited Responses */
 	if (pring->prt[0].profile) {
 		if (pring->prt[0].lpfc_sli_rcv_unsol_event)
@@ -798,23 +1019,18 @@ lpfc_sli_process_unsol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 		/* Unexpected Rctl / Type received */
 		/* Ring <ringno> handler: unexpected
 		   Rctl <Rctl> Type <Type> received */
-		lpfc_printf_log(phba,
-				KERN_WARNING,
-				LOG_SLI,
-				"%d:0313 Ring %d handler: unexpected Rctl x%x "
-				"Type x%x received \n",
-				phba->brd_no,
-				pring->ringno,
-				Rctl,
-				Type);
+		lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
+				"0313 Ring %d handler: unexpected Rctl x%x "
+				"Type x%x received\n",
+				pring->ringno, Rctl, Type);
 	}
-	return(1);
+	return 1;
 }
 
 static struct lpfc_iocbq *
-lpfc_sli_iocbq_lookup(struct lpfc_hba * phba,
-		      struct lpfc_sli_ring * pring,
-		      struct lpfc_iocbq * prspiocb)
+lpfc_sli_iocbq_lookup(struct lpfc_hba *phba,
+		      struct lpfc_sli_ring *pring,
+		      struct lpfc_iocbq *prspiocb)
 {
 	struct lpfc_iocbq *cmd_iocb = NULL;
 	uint16_t iotag;
@@ -823,31 +1039,32 @@ lpfc_sli_iocbq_lookup(struct lpfc_hba * phba,
 
 	if (iotag != 0 && iotag <= phba->sli.last_iotag) {
 		cmd_iocb = phba->sli.iocbq_lookup[iotag];
-		list_del(&cmd_iocb->list);
+		list_del_init(&cmd_iocb->list);
 		pring->txcmplq_cnt--;
 		return cmd_iocb;
 	}
 
 	lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-			"%d:0317 iotag x%x is out off "
+			"0317 iotag x%x is out off "
 			"range: max iotag x%x wd0 x%x\n",
-			phba->brd_no, iotag,
-			phba->sli.last_iotag,
+			iotag, phba->sli.last_iotag,
 			*(((uint32_t *) &prspiocb->iocb) + 7));
 	return NULL;
 }
 
 static int
-lpfc_sli_process_sol_iocb(struct lpfc_hba * phba, struct lpfc_sli_ring * pring,
+lpfc_sli_process_sol_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 			  struct lpfc_iocbq *saveq)
 {
-	struct lpfc_iocbq * cmdiocbp;
+	struct lpfc_iocbq *cmdiocbp;
 	int rc = 1;
 	unsigned long iflag;
 
 	/* Based on the iotag field, get the cmd IOCB from the txcmplq */
-	spin_lock_irqsave(phba->host->host_lock, iflag);
+	spin_lock_irqsave(&phba->hbalock, iflag);
 	cmdiocbp = lpfc_sli_iocbq_lookup(phba, pring, saveq);
+	spin_unlock_irqrestore(&phba->hbalock, iflag);
+
 	if (cmdiocbp) {
 		if (cmdiocbp->iocb_cmpl) {
 			/*
@@ -863,17 +1080,8 @@ lpfc_sli_process_sol_iocb(struct lpfc_hba * phba, struct lpfc_sli_ring * pring,
 					saveq->iocb.un.ulpWord[4] =
 						IOERR_SLI_ABORTED;
 				}
-				spin_unlock_irqrestore(phba->host->host_lock,
-						       iflag);
-				(cmdiocbp->iocb_cmpl) (phba, cmdiocbp, saveq);
-				spin_lock_irqsave(phba->host->host_lock, iflag);
-			}
-			else {
-				spin_unlock_irqrestore(phba->host->host_lock,
-						       iflag);
-				(cmdiocbp->iocb_cmpl) (phba, cmdiocbp, saveq);
-				spin_lock_irqsave(phba->host->host_lock, iflag);
 			}
+			(cmdiocbp->iocb_cmpl) (phba, cmdiocbp, saveq);
 		} else
 			lpfc_sli_release_iocbq(phba, cmdiocbp);
 	} else {
@@ -887,41 +1095,39 @@ lpfc_sli_process_sol_iocb(struct lpfc_hba * phba, struct lpfc_sli_ring * pring,
 			 * Ring <ringno> handler: unexpected completion IoTag
 			 * <IoTag>
 			 */
-			lpfc_printf_log(phba,
-				KERN_WARNING,
-				LOG_SLI,
-				"%d:0322 Ring %d handler: unexpected "
-				"completion IoTag x%x Data: x%x x%x x%x x%x\n",
-				phba->brd_no,
-				pring->ringno,
-				saveq->iocb.ulpIoTag,
-				saveq->iocb.ulpStatus,
-				saveq->iocb.un.ulpWord[4],
-				saveq->iocb.ulpCommand,
-				saveq->iocb.ulpContext);
+			lpfc_printf_vlog(cmdiocbp->vport, KERN_WARNING, LOG_SLI,
+					 "0322 Ring %d handler: "
+					 "unexpected completion IoTag x%x "
+					 "Data: x%x x%x x%x x%x\n",
+					 pring->ringno,
+					 saveq->iocb.ulpIoTag,
+					 saveq->iocb.ulpStatus,
+					 saveq->iocb.un.ulpWord[4],
+					 saveq->iocb.ulpCommand,
+					 saveq->iocb.ulpContext);
 		}
 	}
 
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
 	return rc;
 }
 
-static void lpfc_sli_rsp_pointers_error(struct lpfc_hba * phba,
-					struct lpfc_sli_ring * pring)
+static void
+lpfc_sli_rsp_pointers_error(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
 {
-	struct lpfc_pgp *pgp = &phba->slim2p->mbx.us.s2.port[pring->ringno];
+	struct lpfc_pgp *pgp = (phba->sli_rev == 3) ?
+		&phba->slim2p->mbx.us.s3_pgp.port[pring->ringno] :
+		&phba->slim2p->mbx.us.s2.port[pring->ringno];
 	/*
 	 * Ring <ringno> handler: portRspPut <portRspPut> is bigger then
 	 * rsp ring <portRspMax>
 	 */
 	lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-			"%d:0312 Ring %d handler: portRspPut %d "
+			"0312 Ring %d handler: portRspPut %d "
 			"is bigger then rsp ring %d\n",
-			phba->brd_no, pring->ringno,
-			le32_to_cpu(pgp->rspPutInx),
+			pring->ringno, le32_to_cpu(pgp->rspPutInx),
 			pring->numRiocb);
 
-	phba->hba_state = LPFC_HBA_ERROR;
+	phba->link_state = LPFC_HBA_ERROR;
 
 	/*
 	 * All error attention handlers are posted to
@@ -929,16 +1135,18 @@ static void lpfc_sli_rsp_pointers_error(struct lpfc_hba * phba,
 	 */
 	phba->work_ha |= HA_ERATT;
 	phba->work_hs = HS_FFER3;
+
+	/* hbalock should already be held */
 	if (phba->work_wait)
-		wake_up(phba->work_wait);
+		lpfc_worker_wake_up(phba);
 
 	return;
 }
 
-void lpfc_sli_poll_fcp_ring(struct lpfc_hba * phba)
+void lpfc_sli_poll_fcp_ring(struct lpfc_hba *phba)
 {
-	struct lpfc_sli      * psli   = &phba->sli;
-	struct lpfc_sli_ring * pring = &psli->ring[LPFC_FCP_RING];
+	struct lpfc_sli      *psli  = &phba->sli;
+	struct lpfc_sli_ring *pring = &psli->ring[LPFC_FCP_RING];
 	IOCB_t *irsp = NULL;
 	IOCB_t *entry = NULL;
 	struct lpfc_iocbq *cmdiocbq = NULL;
@@ -948,13 +1156,15 @@ void lpfc_sli_poll_fcp_ring(struct lpfc_hba * phba)
 	uint32_t portRspPut, portRspMax;
 	int type;
 	uint32_t rsp_cmpl = 0;
-	void __iomem *to_slim;
 	uint32_t ha_copy;
+	unsigned long iflags;
 
 	pring->stats.iocb_event++;
 
-	/* The driver assumes SLI-2 mode */
-	pgp =  &phba->slim2p->mbx.us.s2.port[pring->ringno];
+	pgp = (phba->sli_rev == 3) ?
+		&phba->slim2p->mbx.us.s3_pgp.port[pring->ringno] :
+		&phba->slim2p->mbx.us.s2.port[pring->ringno];
+
 
 	/*
 	 * The next available response entry should never exceed the maximum
@@ -969,15 +1179,13 @@ void lpfc_sli_poll_fcp_ring(struct lpfc_hba * phba)
 
 	rmb();
 	while (pring->rspidx != portRspPut) {
-
-		entry = IOCB_ENTRY(pring->rspringaddr, pring->rspidx);
-
+		entry = lpfc_resp_iocb(phba, pring);
 		if (++pring->rspidx >= portRspMax)
 			pring->rspidx = 0;
 
 		lpfc_sli_pcimem_bcopy((uint32_t *) entry,
 				      (uint32_t *) &rspiocbq.iocb,
-				      sizeof (IOCB_t));
+				      phba->iocb_rsp_size);
 		irsp = &rspiocbq.iocb;
 		type = lpfc_sli_iocb_cmd_type(irsp->ulpCommand & CMD_IOCB_MASK);
 		pring->stats.iocb_rsp++;
@@ -986,9 +1194,9 @@ void lpfc_sli_poll_fcp_ring(struct lpfc_hba * phba)
 		if (unlikely(irsp->ulpStatus)) {
 			/* Rsp ring <ringno> error: IOCB */
 			lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
-					"%d:0326 Rsp Ring %d error: IOCB Data: "
+					"0326 Rsp Ring %d error: IOCB Data: "
 					"x%x x%x x%x x%x x%x x%x x%x x%x\n",
-					phba->brd_no, pring->ringno,
+					pring->ringno,
 					irsp->un.ulpWord[0],
 					irsp->un.ulpWord[1],
 					irsp->un.ulpWord[2],
@@ -1008,15 +1216,17 @@ void lpfc_sli_poll_fcp_ring(struct lpfc_hba * phba)
 			 */
 			if (unlikely(irsp->ulpCommand == CMD_XRI_ABORTED_CX)) {
 				lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
-						"%d:0314 IOCB cmd 0x%x"
-						" processed. Skipping"
-						" completion", phba->brd_no,
+						"0314 IOCB cmd 0x%x "
+						"processed. Skipping "
+						"completion",
 						irsp->ulpCommand);
 				break;
 			}
 
+			spin_lock_irqsave(&phba->hbalock, iflags);
 			cmdiocbq = lpfc_sli_iocbq_lookup(phba, pring,
 							 &rspiocbq);
+			spin_unlock_irqrestore(&phba->hbalock, iflags);
 			if ((cmdiocbq) && (cmdiocbq->iocb_cmpl)) {
 				(cmdiocbq->iocb_cmpl)(phba, cmdiocbq,
 						      &rspiocbq);
@@ -1033,10 +1243,9 @@ void lpfc_sli_poll_fcp_ring(struct lpfc_hba * phba)
 			} else {
 				/* Unknown IOCB command */
 				lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-						"%d:0321 Unknown IOCB command "
+						"0321 Unknown IOCB command "
 						"Data: x%x, x%x x%x x%x x%x\n",
-						phba->brd_no, type,
-						irsp->ulpCommand,
+						type, irsp->ulpCommand,
 						irsp->ulpStatus,
 						irsp->ulpIoTag,
 						irsp->ulpContext);
@@ -1050,9 +1259,7 @@ void lpfc_sli_poll_fcp_ring(struct lpfc_hba * phba)
 		 * been updated, sync the pgp->rspPutInx and fetch the new port
 		 * response put pointer.
 		 */
-		to_slim = phba->MBslimaddr +
-			(SLIMOFF + (pring->ringno * 2) + 1) * 4;
-		writeb(pring->rspidx, to_slim);
+		writel(pring->rspidx, &phba->host_gp[pring->ringno].rspGetInx);
 
 		if (pring->rspidx == portRspPut)
 			portRspPut = le32_to_cpu(pgp->rspPutInx);
@@ -1062,13 +1269,16 @@ void lpfc_sli_poll_fcp_ring(struct lpfc_hba * phba)
 	ha_copy >>= (LPFC_FCP_RING * 4);
 
 	if ((rsp_cmpl > 0) && (ha_copy & HA_R0RE_REQ)) {
+		spin_lock_irqsave(&phba->hbalock, iflags);
 		pring->stats.iocb_rsp_full++;
 		status = ((CA_R0ATT | CA_R0RE_RSP) << (LPFC_FCP_RING * 4));
 		writel(status, phba->CAregaddr);
 		readl(phba->CAregaddr);
+		spin_unlock_irqrestore(&phba->hbalock, iflags);
 	}
 	if ((ha_copy & HA_R0CE_RSP) &&
 	    (pring->flag & LPFC_CALL_RING_AVAILABLE)) {
+		spin_lock_irqsave(&phba->hbalock, iflags);
 		pring->flag &= ~LPFC_CALL_RING_AVAILABLE;
 		pring->stats.iocb_cmd_empty++;
 
@@ -1079,6 +1289,7 @@ void lpfc_sli_poll_fcp_ring(struct lpfc_hba * phba)
 		if ((pring->lpfc_sli_cmd_available))
 			(pring->lpfc_sli_cmd_available) (phba, pring);
 
+		spin_unlock_irqrestore(&phba->hbalock, iflags);
 	}
 
 	return;
@@ -1089,10 +1300,12 @@ void lpfc_sli_poll_fcp_ring(struct lpfc_hba * phba)
  * to check it explicitly.
  */
 static int
-lpfc_sli_handle_fast_ring_event(struct lpfc_hba * phba,
-				struct lpfc_sli_ring * pring, uint32_t mask)
+lpfc_sli_handle_fast_ring_event(struct lpfc_hba *phba,
+				struct lpfc_sli_ring *pring, uint32_t mask)
 {
- 	struct lpfc_pgp *pgp = &phba->slim2p->mbx.us.s2.port[pring->ringno];
+	struct lpfc_pgp *pgp = (phba->sli_rev == 3) ?
+		&phba->slim2p->mbx.us.s3_pgp.port[pring->ringno] :
+		&phba->slim2p->mbx.us.s2.port[pring->ringno];
 	IOCB_t *irsp = NULL;
 	IOCB_t *entry = NULL;
 	struct lpfc_iocbq *cmdiocbq = NULL;
@@ -1103,9 +1316,8 @@ lpfc_sli_handle_fast_ring_event(struct lpfc_hba * phba,
 	lpfc_iocb_type type;
 	unsigned long iflag;
 	uint32_t rsp_cmpl = 0;
-	void __iomem  *to_slim;
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
+	spin_lock_irqsave(&phba->hbalock, iflag);
 	pring->stats.iocb_event++;
 
 	/*
@@ -1116,7 +1328,7 @@ lpfc_sli_handle_fast_ring_event(struct lpfc_hba * phba,
 	portRspPut = le32_to_cpu(pgp->rspPutInx);
 	if (unlikely(portRspPut >= portRspMax)) {
 		lpfc_sli_rsp_pointers_error(phba, pring);
-		spin_unlock_irqrestore(phba->host->host_lock, iflag);
+		spin_unlock_irqrestore(&phba->hbalock, iflag);
 		return 1;
 	}
 
@@ -1127,14 +1339,15 @@ lpfc_sli_handle_fast_ring_event(struct lpfc_hba * phba,
 		 * structure.  The copy involves a byte-swap since the
 		 * network byte order and pci byte orders are different.
 		 */
-		entry = IOCB_ENTRY(pring->rspringaddr, pring->rspidx);
+		entry = lpfc_resp_iocb(phba, pring);
+		phba->last_completion_time = jiffies;
 
 		if (++pring->rspidx >= portRspMax)
 			pring->rspidx = 0;
 
 		lpfc_sli_pcimem_bcopy((uint32_t *) entry,
 				      (uint32_t *) &rspiocbq.iocb,
-				      sizeof (IOCB_t));
+				      phba->iocb_rsp_size);
 		INIT_LIST_HEAD(&(rspiocbq.list));
 		irsp = &rspiocbq.iocb;
 
@@ -1143,16 +1356,30 @@ lpfc_sli_handle_fast_ring_event(struct lpfc_hba * phba,
 		rsp_cmpl++;
 
 		if (unlikely(irsp->ulpStatus)) {
+			/*
+			 * If resource errors reported from HBA, reduce
+			 * queue depths of the SCSI device.
+			 */
+			if ((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) &&
+				(irsp->un.ulpWord[4] == IOERR_NO_RESOURCES)) {
+				spin_unlock_irqrestore(&phba->hbalock, iflag);
+				lpfc_adjust_queue_depth(phba);
+				spin_lock_irqsave(&phba->hbalock, iflag);
+			}
+
 			/* Rsp ring <ringno> error: IOCB */
 			lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
-				"%d:0336 Rsp Ring %d error: IOCB Data: "
-				"x%x x%x x%x x%x x%x x%x x%x x%x\n",
-				phba->brd_no, pring->ringno,
-				irsp->un.ulpWord[0], irsp->un.ulpWord[1],
-				irsp->un.ulpWord[2], irsp->un.ulpWord[3],
-				irsp->un.ulpWord[4], irsp->un.ulpWord[5],
-				*(((uint32_t *) irsp) + 6),
-				*(((uint32_t *) irsp) + 7));
+					"0336 Rsp Ring %d error: IOCB Data: "
+					"x%x x%x x%x x%x x%x x%x x%x x%x\n",
+					pring->ringno,
+					irsp->un.ulpWord[0],
+					irsp->un.ulpWord[1],
+					irsp->un.ulpWord[2],
+					irsp->un.ulpWord[3],
+					irsp->un.ulpWord[4],
+					irsp->un.ulpWord[5],
+					*(((uint32_t *) irsp) + 6),
+					*(((uint32_t *) irsp) + 7));
 		}
 
 		switch (type) {
@@ -1164,9 +1391,9 @@ lpfc_sli_handle_fast_ring_event(struct lpfc_hba * phba,
 			 */
 			if (unlikely(irsp->ulpCommand == CMD_XRI_ABORTED_CX)) {
 				lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
-						"%d:0333 IOCB cmd 0x%x"
+						"0333 IOCB cmd 0x%x"
 						" processed. Skipping"
-						" completion\n", phba->brd_no,
+						" completion\n",
 						irsp->ulpCommand);
 				break;
 			}
@@ -1178,19 +1405,19 @@ lpfc_sli_handle_fast_ring_event(struct lpfc_hba * phba,
 					(cmdiocbq->iocb_cmpl)(phba, cmdiocbq,
 							      &rspiocbq);
 				} else {
-					spin_unlock_irqrestore(
-						phba->host->host_lock, iflag);
+					spin_unlock_irqrestore(&phba->hbalock,
+							       iflag);
 					(cmdiocbq->iocb_cmpl)(phba, cmdiocbq,
 							      &rspiocbq);
-					spin_lock_irqsave(phba->host->host_lock,
+					spin_lock_irqsave(&phba->hbalock,
 							  iflag);
 				}
 			}
 			break;
 		case LPFC_UNSOL_IOCB:
-			spin_unlock_irqrestore(phba->host->host_lock, iflag);
+			spin_unlock_irqrestore(&phba->hbalock, iflag);
 			lpfc_sli_process_unsol_iocb(phba, pring, &rspiocbq);
-			spin_lock_irqsave(phba->host->host_lock, iflag);
+			spin_lock_irqsave(&phba->hbalock, iflag);
 			break;
 		default:
 			if (irsp->ulpCommand == CMD_ADAPTER_MSG) {
@@ -1203,11 +1430,12 @@ lpfc_sli_handle_fast_ring_event(struct lpfc_hba * phba,
 			} else {
 				/* Unknown IOCB command */
 				lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-					"%d:0334 Unknown IOCB command "
-					"Data: x%x, x%x x%x x%x x%x\n",
-					phba->brd_no, type, irsp->ulpCommand,
-					irsp->ulpStatus, irsp->ulpIoTag,
-					irsp->ulpContext);
+						"0334 Unknown IOCB command "
+						"Data: x%x, x%x x%x x%x x%x\n",
+						type, irsp->ulpCommand,
+						irsp->ulpStatus,
+						irsp->ulpIoTag,
+						irsp->ulpContext);
 			}
 			break;
 		}
@@ -1218,9 +1446,7 @@ lpfc_sli_handle_fast_ring_event(struct lpfc_hba * phba,
 		 * been updated, sync the pgp->rspPutInx and fetch the new port
 		 * response put pointer.
 		 */
-		to_slim = phba->MBslimaddr +
-			(SLIMOFF + (pring->ringno * 2) + 1) * 4;
-		writel(pring->rspidx, to_slim);
+		writel(pring->rspidx, &phba->host_gp[pring->ringno].rspGetInx);
 
 		if (pring->rspidx == portRspPut)
 			portRspPut = le32_to_cpu(pgp->rspPutInx);
@@ -1245,31 +1471,31 @@ lpfc_sli_handle_fast_ring_event(struct lpfc_hba * phba,
 
 	}
 
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
+	spin_unlock_irqrestore(&phba->hbalock, iflag);
 	return rc;
 }
 
-
 int
-lpfc_sli_handle_slow_ring_event(struct lpfc_hba * phba,
-			   struct lpfc_sli_ring * pring, uint32_t mask)
+lpfc_sli_handle_slow_ring_event(struct lpfc_hba *phba,
+				struct lpfc_sli_ring *pring, uint32_t mask)
 {
+	struct lpfc_pgp *pgp = (phba->sli_rev == 3) ?
+		&phba->slim2p->mbx.us.s3_pgp.port[pring->ringno] :
+		&phba->slim2p->mbx.us.s2.port[pring->ringno];
 	IOCB_t *entry;
 	IOCB_t *irsp = NULL;
 	struct lpfc_iocbq *rspiocbp = NULL;
 	struct lpfc_iocbq *next_iocb;
 	struct lpfc_iocbq *cmdiocbp;
 	struct lpfc_iocbq *saveq;
-	struct lpfc_pgp *pgp = &phba->slim2p->mbx.us.s2.port[pring->ringno];
 	uint8_t iocb_cmd_type;
 	lpfc_iocb_type type;
 	uint32_t status, free_saveq;
 	uint32_t portRspPut, portRspMax;
 	int rc = 1;
 	unsigned long iflag;
-	void __iomem  *to_slim;
 
-	spin_lock_irqsave(phba->host->host_lock, iflag);
+	spin_lock_irqsave(&phba->hbalock, iflag);
 	pring->stats.iocb_event++;
 
 	/*
@@ -1283,16 +1509,13 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba * phba,
 		 * Ring <ringno> handler: portRspPut <portRspPut> is bigger then
 		 * rsp ring <portRspMax>
 		 */
-		lpfc_printf_log(phba,
-				KERN_ERR,
-				LOG_SLI,
-				"%d:0303 Ring %d handler: portRspPut %d "
+		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+				"0303 Ring %d handler: portRspPut %d "
 				"is bigger then rsp ring %d\n",
-				phba->brd_no,
 				pring->ringno, portRspPut, portRspMax);
 
-		phba->hba_state = LPFC_HBA_ERROR;
-		spin_unlock_irqrestore(phba->host->host_lock, iflag);
+		phba->link_state = LPFC_HBA_ERROR;
+		spin_unlock_irqrestore(&phba->hbalock, iflag);
 
 		phba->work_hs = HS_FFER3;
 		lpfc_handle_eratt(phba);
@@ -1315,23 +1538,32 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba * phba,
 		 * the ulpLe field is set, the entire Command has been
 		 * received.
 		 */
-		entry = IOCB_ENTRY(pring->rspringaddr, pring->rspidx);
-		rspiocbp = lpfc_sli_get_iocbq(phba);
+		entry = lpfc_resp_iocb(phba, pring);
+
+		phba->last_completion_time = jiffies;
+		rspiocbp = __lpfc_sli_get_iocbq(phba);
 		if (rspiocbp == NULL) {
 			printk(KERN_ERR "%s: out of buffers! Failing "
 			       "completion.\n", __FUNCTION__);
 			break;
 		}
 
-		lpfc_sli_pcimem_bcopy(entry, &rspiocbp->iocb, sizeof (IOCB_t));
+		lpfc_sli_pcimem_bcopy(entry, &rspiocbp->iocb,
+				      phba->iocb_rsp_size);
 		irsp = &rspiocbp->iocb;
 
 		if (++pring->rspidx >= portRspMax)
 			pring->rspidx = 0;
 
-		to_slim = phba->MBslimaddr + (SLIMOFF + (pring->ringno * 2)
-					      + 1) * 4;
-		writel(pring->rspidx, to_slim);
+		if (pring->ringno == LPFC_ELS_RING) {
+			lpfc_debugfs_slow_ring_trc(phba,
+			"IOCB rsp ring:   wd4:x%08x wd6:x%08x wd7:x%08x",
+				*(((uint32_t *) irsp) + 4),
+				*(((uint32_t *) irsp) + 6),
+				*(((uint32_t *) irsp) + 7));
+		}
+
+		writel(pring->rspidx, &phba->host_gp[pring->ringno].rspGetInx);
 
 		if (list_empty(&(pring->iocb_continueq))) {
 			list_add(&rspiocbp->list, &(pring->iocb_continueq));
@@ -1355,23 +1587,43 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba * phba,
 
 			pring->stats.iocb_rsp++;
 
+			/*
+			 * If resource errors reported from HBA, reduce
+			 * queuedepths of the SCSI device.
+			 */
+			if ((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) &&
+			     (irsp->un.ulpWord[4] == IOERR_NO_RESOURCES)) {
+				spin_unlock_irqrestore(&phba->hbalock, iflag);
+				lpfc_adjust_queue_depth(phba);
+				spin_lock_irqsave(&phba->hbalock, iflag);
+			}
+
 			if (irsp->ulpStatus) {
 				/* Rsp ring <ringno> error: IOCB */
-				lpfc_printf_log(phba,
-					KERN_WARNING,
-					LOG_SLI,
-					"%d:0328 Rsp Ring %d error: IOCB Data: "
-					"x%x x%x x%x x%x x%x x%x x%x x%x\n",
-					phba->brd_no,
-					pring->ringno,
-					irsp->un.ulpWord[0],
-					irsp->un.ulpWord[1],
-					irsp->un.ulpWord[2],
-					irsp->un.ulpWord[3],
-					irsp->un.ulpWord[4],
-					irsp->un.ulpWord[5],
-					*(((uint32_t *) irsp) + 6),
-					*(((uint32_t *) irsp) + 7));
+				lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
+						"0328 Rsp Ring %d error: "
+						"IOCB Data: "
+						"x%x x%x x%x x%x "
+						"x%x x%x x%x x%x "
+						"x%x x%x x%x x%x "
+						"x%x x%x x%x x%x\n",
+						pring->ringno,
+						irsp->un.ulpWord[0],
+						irsp->un.ulpWord[1],
+						irsp->un.ulpWord[2],
+						irsp->un.ulpWord[3],
+						irsp->un.ulpWord[4],
+						irsp->un.ulpWord[5],
+						*(((uint32_t *) irsp) + 6),
+						*(((uint32_t *) irsp) + 7),
+						*(((uint32_t *) irsp) + 8),
+						*(((uint32_t *) irsp) + 9),
+						*(((uint32_t *) irsp) + 10),
+						*(((uint32_t *) irsp) + 11),
+						*(((uint32_t *) irsp) + 12),
+						*(((uint32_t *) irsp) + 13),
+						*(((uint32_t *) irsp) + 14),
+						*(((uint32_t *) irsp) + 15));
 			}
 
 			/*
@@ -1383,17 +1635,17 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba * phba,
 			iocb_cmd_type = irsp->ulpCommand & CMD_IOCB_MASK;
 			type = lpfc_sli_iocb_cmd_type(iocb_cmd_type);
 			if (type == LPFC_SOL_IOCB) {
-				spin_unlock_irqrestore(phba->host->host_lock,
+				spin_unlock_irqrestore(&phba->hbalock,
 						       iflag);
 				rc = lpfc_sli_process_sol_iocb(phba, pring,
-					saveq);
-				spin_lock_irqsave(phba->host->host_lock, iflag);
+							       saveq);
+				spin_lock_irqsave(&phba->hbalock, iflag);
 			} else if (type == LPFC_UNSOL_IOCB) {
-				spin_unlock_irqrestore(phba->host->host_lock,
+				spin_unlock_irqrestore(&phba->hbalock,
 						       iflag);
 				rc = lpfc_sli_process_unsol_iocb(phba, pring,
-					saveq);
-				spin_lock_irqsave(phba->host->host_lock, iflag);
+								 saveq);
+				spin_lock_irqsave(&phba->hbalock, iflag);
 			} else if (type == LPFC_ABORT_IOCB) {
 				if ((irsp->ulpCommand != CMD_XRI_ABORTED_CX) &&
 				    ((cmdiocbp =
@@ -1403,15 +1655,15 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba * phba,
 					   routine */
 					if (cmdiocbp->iocb_cmpl) {
 						spin_unlock_irqrestore(
-						       phba->host->host_lock,
+						       &phba->hbalock,
 						       iflag);
 						(cmdiocbp->iocb_cmpl) (phba,
 							     cmdiocbp, saveq);
 						spin_lock_irqsave(
-							  phba->host->host_lock,
+							  &phba->hbalock,
 							  iflag);
 					} else
-						lpfc_sli_release_iocbq(phba,
+						__lpfc_sli_release_iocbq(phba,
 								      cmdiocbp);
 				}
 			} else if (type == LPFC_UNKNOWN_IOCB) {
@@ -1428,32 +1680,27 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba * phba,
 						 phba->brd_no, adaptermsg);
 				} else {
 					/* Unknown IOCB command */
-					lpfc_printf_log(phba,
-						KERN_ERR,
-						LOG_SLI,
-						"%d:0335 Unknown IOCB command "
-						"Data: x%x x%x x%x x%x\n",
-						phba->brd_no,
-						irsp->ulpCommand,
-						irsp->ulpStatus,
-						irsp->ulpIoTag,
-						irsp->ulpContext);
+					lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+							"0335 Unknown IOCB "
+							"command Data: x%x "
+							"x%x x%x x%x\n",
+							irsp->ulpCommand,
+							irsp->ulpStatus,
+							irsp->ulpIoTag,
+							irsp->ulpContext);
 				}
 			}
 
 			if (free_saveq) {
-				if (!list_empty(&saveq->list)) {
-					list_for_each_entry_safe(rspiocbp,
-								 next_iocb,
-								 &saveq->list,
-								 list) {
-						list_del(&rspiocbp->list);
-						lpfc_sli_release_iocbq(phba,
-								     rspiocbp);
-					}
+				list_for_each_entry_safe(rspiocbp, next_iocb,
+							 &saveq->list, list) {
+					list_del(&rspiocbp->list);
+					__lpfc_sli_release_iocbq(phba,
+								 rspiocbp);
 				}
-				lpfc_sli_release_iocbq(phba, saveq);
+				__lpfc_sli_release_iocbq(phba, saveq);
 			}
+			rspiocbp = NULL;
 		}
 
 		/*
@@ -1466,7 +1713,7 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba * phba,
 		}
 	} /* while (pring->rspidx != portRspPut) */
 
-	if ((rspiocbp != 0) && (mask & HA_R0RE_REQ)) {
+	if ((rspiocbp != NULL) && (mask & HA_R0RE_REQ)) {
 		/* At least one response entry has been freed */
 		pring->stats.iocb_rsp_full++;
 		/* SET RxRE_RSP in Chip Att register */
@@ -1487,68 +1734,51 @@ lpfc_sli_handle_slow_ring_event(struct lpfc_hba * phba,
 
 	}
 
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
+	spin_unlock_irqrestore(&phba->hbalock, iflag);
 	return rc;
 }
 
-int
+void
 lpfc_sli_abort_iocb_ring(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
 {
+	LIST_HEAD(completions);
 	struct lpfc_iocbq *iocb, *next_iocb;
-	IOCB_t *icmd = NULL, *cmd = NULL;
-	int errcnt;
+	IOCB_t *cmd = NULL;
 
-	errcnt = 0;
+	if (pring->ringno == LPFC_ELS_RING) {
+		lpfc_fabric_abort_hba(phba);
+	}
 
 	/* Error everything on txq and txcmplq
 	 * First do the txq.
 	 */
-	spin_lock_irq(phba->host->host_lock);
-	list_for_each_entry_safe(iocb, next_iocb, &pring->txq, list) {
-		list_del_init(&iocb->list);
-		if (iocb->iocb_cmpl) {
-			icmd = &iocb->iocb;
-			icmd->ulpStatus = IOSTAT_LOCAL_REJECT;
-			icmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
-			spin_unlock_irq(phba->host->host_lock);
-			(iocb->iocb_cmpl) (phba, iocb, iocb);
-			spin_lock_irq(phba->host->host_lock);
-		} else
-			lpfc_sli_release_iocbq(phba, iocb);
-	}
+	spin_lock_irq(&phba->hbalock);
+	list_splice_init(&pring->txq, &completions);
 	pring->txq_cnt = 0;
-	INIT_LIST_HEAD(&(pring->txq));
 
 	/* Next issue ABTS for everything on the txcmplq */
-	list_for_each_entry_safe(iocb, next_iocb, &pring->txcmplq, list) {
-		cmd = &iocb->iocb;
+	list_for_each_entry_safe(iocb, next_iocb, &pring->txcmplq, list)
+		lpfc_sli_issue_abort_iotag(phba, pring, iocb);
 
-		/*
-		 * Imediate abort of IOCB, deque and call compl
-		 */
+	spin_unlock_irq(&phba->hbalock);
 
+	while (!list_empty(&completions)) {
+		iocb = list_get_first(&completions, struct lpfc_iocbq, list);
+		cmd = &iocb->iocb;
 		list_del_init(&iocb->list);
-		pring->txcmplq_cnt--;
 
-		if (iocb->iocb_cmpl) {
+		if (!iocb->iocb_cmpl)
+			lpfc_sli_release_iocbq(phba, iocb);
+		else {
 			cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
 			cmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
-			spin_unlock_irq(phba->host->host_lock);
 			(iocb->iocb_cmpl) (phba, iocb, iocb);
-			spin_lock_irq(phba->host->host_lock);
-		} else
-			lpfc_sli_release_iocbq(phba, iocb);
+		}
 	}
-
-	INIT_LIST_HEAD(&pring->txcmplq);
-	pring->txcmplq_cnt = 0;
-	spin_unlock_irq(phba->host->host_lock);
-
-	return errcnt;
 }
 
 int
-lpfc_sli_brdready(struct lpfc_hba * phba, uint32_t mask)
+lpfc_sli_brdready(struct lpfc_hba *phba, uint32_t mask)
 {
 	uint32_t status;
 	int i = 0;
@@ -1575,7 +1805,8 @@ lpfc_sli_brdready(struct lpfc_hba * phba, uint32_t mask)
 			msleep(2500);
 
 		if (i == 15) {
-			phba->hba_state = LPFC_STATE_UNKNOWN; /* Do post */
+				/* Do post */
+			phba->pport->port_state = LPFC_VPORT_UNKNOWN;
 			lpfc_sli_brdrestart(phba);
 		}
 		/* Read the HBA Host Status Register */
@@ -1584,7 +1815,7 @@ lpfc_sli_brdready(struct lpfc_hba * phba, uint32_t mask)
 
 	/* Check to see if any errors occurred during init */
 	if ((status & HS_FFERM) || (i >= 20)) {
-		phba->hba_state = LPFC_HBA_ERROR;
+		phba->link_state = LPFC_HBA_ERROR;
 		retval = 1;
 	}
 
@@ -1593,7 +1824,7 @@ lpfc_sli_brdready(struct lpfc_hba * phba, uint32_t mask)
 
 #define BARRIER_TEST_PATTERN (0xdeadbeef)
 
-void lpfc_reset_barrier(struct lpfc_hba * phba)
+void lpfc_reset_barrier(struct lpfc_hba *phba)
 {
 	uint32_t __iomem *resp_buf;
 	uint32_t __iomem *mbox_buf;
@@ -1618,12 +1849,12 @@ void lpfc_reset_barrier(struct lpfc_hba * phba)
 	hc_copy = readl(phba->HCregaddr);
 	writel((hc_copy & ~HC_ERINT_ENA), phba->HCregaddr);
 	readl(phba->HCregaddr); /* flush */
-	phba->fc_flag |= FC_IGNORE_ERATT;
+	phba->link_flag |= LS_IGNORE_ERATT;
 
 	if (readl(phba->HAregaddr) & HA_ERATT) {
 		/* Clear Chip error bit */
 		writel(HA_ERATT, phba->HAregaddr);
-		phba->stopped = 1;
+		phba->pport->stopped = 1;
 	}
 
 	mbox = 0;
@@ -1640,7 +1871,7 @@ void lpfc_reset_barrier(struct lpfc_hba * phba)
 
 	if (readl(resp_buf + 1) != ~(BARRIER_TEST_PATTERN)) {
 		if (phba->sli.sli_flag & LPFC_SLI2_ACTIVE ||
-		    phba->stopped)
+		    phba->pport->stopped)
 			goto restore_hc;
 		else
 			goto clear_errat;
@@ -1657,17 +1888,17 @@ clear_errat:
 
 	if (readl(phba->HAregaddr) & HA_ERATT) {
 		writel(HA_ERATT, phba->HAregaddr);
-		phba->stopped = 1;
+		phba->pport->stopped = 1;
 	}
 
 restore_hc:
-	phba->fc_flag &= ~FC_IGNORE_ERATT;
+	phba->link_flag &= ~LS_IGNORE_ERATT;
 	writel(hc_copy, phba->HCregaddr);
 	readl(phba->HCregaddr); /* flush */
 }
 
 int
-lpfc_sli_brdkill(struct lpfc_hba * phba)
+lpfc_sli_brdkill(struct lpfc_hba *phba)
 {
 	struct lpfc_sli *psli;
 	LPFC_MBOXQ_t *pmb;
@@ -1679,26 +1910,22 @@ lpfc_sli_brdkill(struct lpfc_hba * phba)
 	psli = &phba->sli;
 
 	/* Kill HBA */
-	lpfc_printf_log(phba,
-		KERN_INFO,
-		LOG_SLI,
-		"%d:0329 Kill HBA Data: x%x x%x\n",
-		phba->brd_no,
-		phba->hba_state,
-		psli->sli_flag);
+	lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
+			"0329 Kill HBA Data: x%x x%x\n",
+			phba->pport->port_state, psli->sli_flag);
 
 	if ((pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool,
 						  GFP_KERNEL)) == 0)
 		return 1;
 
 	/* Disable the error attention */
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	status = readl(phba->HCregaddr);
 	status &= ~HC_ERINT_ENA;
 	writel(status, phba->HCregaddr);
 	readl(phba->HCregaddr); /* flush */
-	phba->fc_flag |= FC_IGNORE_ERATT;
-	spin_unlock_irq(phba->host->host_lock);
+	phba->link_flag |= LS_IGNORE_ERATT;
+	spin_unlock_irq(&phba->hbalock);
 
 	lpfc_kill_board(phba, pmb);
 	pmb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
@@ -1707,9 +1934,9 @@ lpfc_sli_brdkill(struct lpfc_hba * phba)
 	if (retval != MBX_SUCCESS) {
 		if (retval != MBX_BUSY)
 			mempool_free(pmb, phba->mbox_mem_pool);
-		spin_lock_irq(phba->host->host_lock);
-		phba->fc_flag &= ~FC_IGNORE_ERATT;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_lock_irq(&phba->hbalock);
+		phba->link_flag &= ~LS_IGNORE_ERATT;
+		spin_unlock_irq(&phba->hbalock);
 		return 1;
 	}
 
@@ -1732,22 +1959,22 @@ lpfc_sli_brdkill(struct lpfc_hba * phba)
 	del_timer_sync(&psli->mbox_tmo);
 	if (ha_copy & HA_ERATT) {
 		writel(HA_ERATT, phba->HAregaddr);
-		phba->stopped = 1;
+		phba->pport->stopped = 1;
 	}
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
-	phba->fc_flag &= ~FC_IGNORE_ERATT;
-	spin_unlock_irq(phba->host->host_lock);
+	phba->link_flag &= ~LS_IGNORE_ERATT;
+	spin_unlock_irq(&phba->hbalock);
 
 	psli->mbox_active = NULL;
 	lpfc_hba_down_post(phba);
-	phba->hba_state = LPFC_HBA_ERROR;
+	phba->link_state = LPFC_HBA_ERROR;
 
-	return (ha_copy & HA_ERATT ? 0 : 1);
+	return ha_copy & HA_ERATT ? 0 : 1;
 }
 
 int
-lpfc_sli_brdreset(struct lpfc_hba * phba)
+lpfc_sli_brdreset(struct lpfc_hba *phba)
 {
 	struct lpfc_sli *psli;
 	struct lpfc_sli_ring *pring;
@@ -1758,13 +1985,13 @@ lpfc_sli_brdreset(struct lpfc_hba * phba)
 
 	/* Reset HBA */
 	lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
-			"%d:0325 Reset HBA Data: x%x x%x\n", phba->brd_no,
-			phba->hba_state, psli->sli_flag);
+			"0325 Reset HBA Data: x%x x%x\n",
+			phba->pport->port_state, psli->sli_flag);
 
 	/* perform board reset */
 	phba->fc_eventTag = 0;
-	phba->fc_myDID = 0;
-	phba->fc_prevDID = 0;
+	phba->pport->fc_myDID = 0;
+	phba->pport->fc_prevDID = 0;
 
 	/* Turn off parity checking and serr during the physical reset */
 	pci_read_config_word(phba->pcidev, PCI_COMMAND, &cfg_value);
@@ -1794,12 +2021,12 @@ lpfc_sli_brdreset(struct lpfc_hba * phba)
 		pring->missbufcnt = 0;
 	}
 
-	phba->hba_state = LPFC_WARM_START;
+	phba->link_state = LPFC_WARM_START;
 	return 0;
 }
 
 int
-lpfc_sli_brdrestart(struct lpfc_hba * phba)
+lpfc_sli_brdrestart(struct lpfc_hba *phba)
 {
 	MAILBOX_t *mb;
 	struct lpfc_sli *psli;
@@ -1807,14 +2034,14 @@ lpfc_sli_brdrestart(struct lpfc_hba * phba)
 	volatile uint32_t word0;
 	void __iomem *to_slim;
 
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 
 	psli = &phba->sli;
 
 	/* Restart HBA */
 	lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
-			"%d:0337 Restart HBA Data: x%x x%x\n", phba->brd_no,
-			phba->hba_state, psli->sli_flag);
+			"0337 Restart HBA Data: x%x x%x\n",
+			phba->pport->port_state, psli->sli_flag);
 
 	word0 = 0;
 	mb = (MAILBOX_t *) &word0;
@@ -1828,7 +2055,7 @@ lpfc_sli_brdrestart(struct lpfc_hba * phba)
 	readl(to_slim); /* flush */
 
 	/* Only skip post after fc_ffinit is completed */
-	if (phba->hba_state) {
+	if (phba->pport->port_state) {
 		skip_post = 1;
 		word0 = 1;	/* This is really setting up word1 */
 	} else {
@@ -1840,10 +2067,10 @@ lpfc_sli_brdrestart(struct lpfc_hba * phba)
 	readl(to_slim); /* flush */
 
 	lpfc_sli_brdreset(phba);
-	phba->stopped = 0;
-	phba->hba_state = LPFC_INIT_START;
+	phba->pport->stopped = 0;
+	phba->link_state = LPFC_INIT_START;
 
-	spin_unlock_irq(phba->host->host_lock);
+	spin_unlock_irq(&phba->hbalock);
 
 	memset(&psli->lnk_stat_offsets, 0, sizeof(psli->lnk_stat_offsets));
 	psli->stats_start = get_seconds();
@@ -1877,14 +2104,10 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
 		if (i++ >= 20) {
 			/* Adapter failed to init, timeout, status reg
 			   <status> */
-			lpfc_printf_log(phba,
-					KERN_ERR,
-					LOG_INIT,
-					"%d:0436 Adapter failed to init, "
-					"timeout, status reg x%x\n",
-					phba->brd_no,
-					status);
-			phba->hba_state = LPFC_HBA_ERROR;
+			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+					"0436 Adapter failed to init, "
+					"timeout, status reg x%x\n", status);
+			phba->link_state = LPFC_HBA_ERROR;
 			return -ETIMEDOUT;
 		}
 
@@ -1893,14 +2116,10 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
 			/* ERROR: During chipset initialization */
 			/* Adapter failed to init, chipset, status reg
 			   <status> */
-			lpfc_printf_log(phba,
-					KERN_ERR,
-					LOG_INIT,
-					"%d:0437 Adapter failed to init, "
-					"chipset, status reg x%x\n",
-					phba->brd_no,
-					status);
-			phba->hba_state = LPFC_HBA_ERROR;
+			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+					"0437 Adapter failed to init, "
+					"chipset, status reg x%x\n", status);
+			phba->link_state = LPFC_HBA_ERROR;
 			return -EIO;
 		}
 
@@ -1913,7 +2132,8 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
 		}
 
 		if (i == 15) {
-			phba->hba_state = LPFC_STATE_UNKNOWN; /* Do post */
+				/* Do post */
+			phba->pport->port_state = LPFC_VPORT_UNKNOWN;
 			lpfc_sli_brdrestart(phba);
 		}
 		/* Read the HBA Host Status Register */
@@ -1924,14 +2144,10 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
 	if (status & HS_FFERM) {
 		/* ERROR: During chipset initialization */
 		/* Adapter failed to init, chipset, status reg <status> */
-		lpfc_printf_log(phba,
-				KERN_ERR,
-				LOG_INIT,
-				"%d:0438 Adapter failed to init, chipset, "
-				"status reg x%x\n",
-				phba->brd_no,
-				status);
-		phba->hba_state = LPFC_HBA_ERROR;
+		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+				"0438 Adapter failed to init, chipset, "
+				"status reg x%x\n", status);
+		phba->link_state = LPFC_HBA_ERROR;
 		return -EIO;
 	}
 
@@ -1946,139 +2162,249 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
 }
 
 int
-lpfc_sli_hba_setup(struct lpfc_hba * phba)
+lpfc_sli_hbq_count(void)
+{
+	return ARRAY_SIZE(lpfc_hbq_defs);
+}
+
+static int
+lpfc_sli_hbq_entry_count(void)
+{
+	int  hbq_count = lpfc_sli_hbq_count();
+	int  count = 0;
+	int  i;
+
+	for (i = 0; i < hbq_count; ++i)
+		count += lpfc_hbq_defs[i]->entry_count;
+	return count;
+}
+
+int
+lpfc_sli_hbq_size(void)
+{
+	return lpfc_sli_hbq_entry_count() * sizeof(struct lpfc_hbq_entry);
+}
+
+static int
+lpfc_sli_hbq_setup(struct lpfc_hba *phba)
+{
+	int  hbq_count = lpfc_sli_hbq_count();
+	LPFC_MBOXQ_t *pmb;
+	MAILBOX_t *pmbox;
+	uint32_t hbqno;
+	uint32_t hbq_entry_index;
+
+				/* Get a Mailbox buffer to setup mailbox
+				 * commands for HBA initialization
+				 */
+	pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+
+	if (!pmb)
+		return -ENOMEM;
+
+	pmbox = &pmb->mb;
+
+	/* Initialize the struct lpfc_sli_hbq structure for each hbq */
+	phba->link_state = LPFC_INIT_MBX_CMDS;
+
+	hbq_entry_index = 0;
+	for (hbqno = 0; hbqno < hbq_count; ++hbqno) {
+		phba->hbqs[hbqno].next_hbqPutIdx = 0;
+		phba->hbqs[hbqno].hbqPutIdx      = 0;
+		phba->hbqs[hbqno].local_hbqGetIdx   = 0;
+		phba->hbqs[hbqno].entry_count =
+			lpfc_hbq_defs[hbqno]->entry_count;
+		lpfc_config_hbq(phba, hbqno, lpfc_hbq_defs[hbqno],
+			hbq_entry_index, pmb);
+		hbq_entry_index += phba->hbqs[hbqno].entry_count;
+
+		if (lpfc_sli_issue_mbox(phba, pmb, MBX_POLL) != MBX_SUCCESS) {
+			/* Adapter failed to init, mbxCmd <cmd> CFG_RING,
+			   mbxStatus <status>, ring <num> */
+
+			lpfc_printf_log(phba, KERN_ERR,
+					LOG_SLI | LOG_VPORT,
+					"1805 Adapter failed to init. "
+					"Data: x%x x%x x%x\n",
+					pmbox->mbxCommand,
+					pmbox->mbxStatus, hbqno);
+
+			phba->link_state = LPFC_HBA_ERROR;
+			mempool_free(pmb, phba->mbox_mem_pool);
+			return ENXIO;
+		}
+	}
+	phba->hbq_count = hbq_count;
+
+	mempool_free(pmb, phba->mbox_mem_pool);
+
+	/* Initially populate or replenish the HBQs */
+	for (hbqno = 0; hbqno < hbq_count; ++hbqno) {
+		if (lpfc_sli_hbqbuf_init_hbqs(phba, hbqno))
+			return -ENOMEM;
+	}
+	return 0;
+}
+
+static int
+lpfc_do_config_port(struct lpfc_hba *phba, int sli_mode)
 {
 	LPFC_MBOXQ_t *pmb;
 	uint32_t resetcount = 0, rc = 0, done = 0;
 
 	pmb = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
 	if (!pmb) {
-		phba->hba_state = LPFC_HBA_ERROR;
+		phba->link_state = LPFC_HBA_ERROR;
 		return -ENOMEM;
 	}
 
+	phba->sli_rev = sli_mode;
 	while (resetcount < 2 && !done) {
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(&phba->hbalock);
 		phba->sli.sli_flag |= LPFC_SLI_MBOX_ACTIVE;
-		spin_unlock_irq(phba->host->host_lock);
-		phba->hba_state = LPFC_STATE_UNKNOWN;
+		spin_unlock_irq(&phba->hbalock);
+		phba->pport->port_state = LPFC_VPORT_UNKNOWN;
 		lpfc_sli_brdrestart(phba);
 		msleep(2500);
 		rc = lpfc_sli_chipset_init(phba);
 		if (rc)
 			break;
 
-		spin_lock_irq(phba->host->host_lock);
+		spin_lock_irq(&phba->hbalock);
 		phba->sli.sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
-		spin_unlock_irq(phba->host->host_lock);
+		spin_unlock_irq(&phba->hbalock);
 		resetcount++;
 
-	/* Call pre CONFIG_PORT mailbox command initialization.  A value of 0
-	 * means the call was successful.  Any other nonzero value is a failure,
-	 * but if ERESTART is returned, the driver may reset the HBA and try
-	 * again.
-	 */
+		/* Call pre CONFIG_PORT mailbox command initialization.  A
+		 * value of 0 means the call was successful.  Any other
+		 * nonzero value is a failure, but if ERESTART is returned,
+		 * the driver may reset the HBA and try again.
+		 */
 		rc = lpfc_config_port_prep(phba);
 		if (rc == -ERESTART) {
-			phba->hba_state = 0;
+			phba->link_state = LPFC_LINK_UNKNOWN;
 			continue;
 		} else if (rc) {
 			break;
 		}
 
-		phba->hba_state = LPFC_INIT_MBX_CMDS;
+		phba->link_state = LPFC_INIT_MBX_CMDS;
 		lpfc_config_port(phba, pmb);
 		rc = lpfc_sli_issue_mbox(phba, pmb, MBX_POLL);
-		if (rc == MBX_SUCCESS)
-			done = 1;
-		else {
+		if (rc != MBX_SUCCESS) {
 			lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-				"%d:0442 Adapter failed to init, mbxCmd x%x "
+				"0442 Adapter failed to init, mbxCmd x%x "
 				"CONFIG_PORT, mbxStatus x%x Data: x%x\n",
-				phba->brd_no, pmb->mb.mbxCommand,
-				pmb->mb.mbxStatus, 0);
+				pmb->mb.mbxCommand, pmb->mb.mbxStatus, 0);
+			spin_lock_irq(&phba->hbalock);
 			phba->sli.sli_flag &= ~LPFC_SLI2_ACTIVE;
+			spin_unlock_irq(&phba->hbalock);
+			rc = -ENXIO;
+		} else {
+			done = 1;
+			phba->max_vpi = (phba->max_vpi &&
+					 pmb->mb.un.varCfgPort.gmv) != 0
+				? pmb->mb.un.varCfgPort.max_vpi
+				: 0;
 		}
 	}
-	if (!done)
+
+	if (!done) {
+		rc = -EINVAL;
+		goto do_prep_failed;
+	}
+
+	if ((pmb->mb.un.varCfgPort.sli_mode == 3) &&
+		(!pmb->mb.un.varCfgPort.cMA)) {
+		rc = -ENXIO;
+		goto do_prep_failed;
+	}
+	return rc;
+
+do_prep_failed:
+	mempool_free(pmb, phba->mbox_mem_pool);
+	return rc;
+}
+
+int
+lpfc_sli_hba_setup(struct lpfc_hba *phba)
+{
+	uint32_t rc;
+	int  mode = 3;
+
+	switch (lpfc_sli_mode) {
+	case 2:
+		if (phba->cfg_enable_npiv) {
+			lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_VPORT,
+				"1824 NPIV enabled: Override lpfc_sli_mode "
+				"parameter (%d) to auto (0).\n",
+				lpfc_sli_mode);
+			break;
+		}
+		mode = 2;
+		break;
+	case 0:
+	case 3:
+		break;
+	default:
+		lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_VPORT,
+				"1819 Unrecognized lpfc_sli_mode "
+				"parameter: %d.\n", lpfc_sli_mode);
+
+		break;
+	}
+
+	rc = lpfc_do_config_port(phba, mode);
+	if (rc && lpfc_sli_mode == 3)
+		lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_VPORT,
+				"1820 Unable to select SLI-3.  "
+				"Not supported by adapter.\n");
+	if (rc && mode != 2)
+		rc = lpfc_do_config_port(phba, 2);
+	if (rc)
 		goto lpfc_sli_hba_setup_error;
 
-	if (phba->pci_max_read) {
-		lpfc_set_slim(phba, pmb, SLIM_VAR_MAX_DMA_LENGTH,
-			SLIM_VAL_MAX_DMA_1024);
-		if (lpfc_sli_issue_mbox(phba, pmb, MBX_POLL) != MBX_SUCCESS)
-			if (pmb->mb.mbxStatus != MBXERR_UNKNOWN_CMD) {
-				lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-					"%d:0443 Adapter failed to set maximum"
-					" DMA length mbxStatus x%x \n",
-					phba->brd_no, pmb->mb.mbxStatus);
-			} else {
-				lpfc_printf_log(phba, KERN_WARNING, LOG_INIT,
-					"%d:0444 Adapter does not support "
-					"setting maximum DMA length mbxStatus "
-					"x%x \n", phba->brd_no,
-					pmb->mb.mbxStatus);
-			}
-		else
-				lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
-					"%d:0445 Adapter set maximum"
-					" DMA length to 1024 bytes.\n",
-					phba->brd_no);
+	if (phba->sli_rev == 3) {
+		phba->iocb_cmd_size = SLI3_IOCB_CMD_SIZE;
+		phba->iocb_rsp_size = SLI3_IOCB_RSP_SIZE;
+		phba->sli3_options |= LPFC_SLI3_ENABLED;
+		phba->sli3_options |= LPFC_SLI3_HBQ_ENABLED;
+
+	} else {
+		phba->iocb_cmd_size = SLI2_IOCB_CMD_SIZE;
+		phba->iocb_rsp_size = SLI2_IOCB_RSP_SIZE;
+		phba->sli3_options = 0;
 	}
 
-	rc = lpfc_sli_ring_map(phba, pmb);
+	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+			"0444 Firmware in SLI %x mode. Max_vpi %d\n",
+			phba->sli_rev, phba->max_vpi);
+	rc = lpfc_sli_ring_map(phba);
 
 	if (rc)
 		goto lpfc_sli_hba_setup_error;
 
+	/* Init HBQs */
+
+	if (phba->sli3_options & LPFC_SLI3_HBQ_ENABLED) {
+		rc = lpfc_sli_hbq_setup(phba);
+		if (rc)
+			goto lpfc_sli_hba_setup_error;
+	}
+
 	phba->sli.sli_flag |= LPFC_PROCESS_LA;
 
 	rc = lpfc_config_port_post(phba);
 	if (rc)
 		goto lpfc_sli_hba_setup_error;
 
-	goto lpfc_sli_hba_setup_exit;
-lpfc_sli_hba_setup_error:
-	phba->hba_state = LPFC_HBA_ERROR;
-lpfc_sli_hba_setup_exit:
-	mempool_free(pmb, phba->mbox_mem_pool);
 	return rc;
-}
-
-static void
-lpfc_mbox_abort(struct lpfc_hba * phba)
-{
-	LPFC_MBOXQ_t *pmbox;
-	MAILBOX_t *mb;
-
-	if (phba->sli.mbox_active) {
-		del_timer_sync(&phba->sli.mbox_tmo);
-		phba->work_hba_events &= ~WORKER_MBOX_TMO;
-		pmbox = phba->sli.mbox_active;
-		mb = &pmbox->mb;
-		phba->sli.mbox_active = NULL;
-		if (pmbox->mbox_cmpl) {
-			mb->mbxStatus = MBX_NOT_FINISHED;
-			(pmbox->mbox_cmpl) (phba, pmbox);
-		}
-		phba->sli.sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
-	}
 
-	/* Abort all the non active mailbox commands. */
-	spin_lock_irq(phba->host->host_lock);
-	pmbox = lpfc_mbox_get(phba);
-	while (pmbox) {
-		mb = &pmbox->mb;
-		if (pmbox->mbox_cmpl) {
-			mb->mbxStatus = MBX_NOT_FINISHED;
-			spin_unlock_irq(phba->host->host_lock);
-			(pmbox->mbox_cmpl) (phba, pmbox);
-			spin_lock_irq(phba->host->host_lock);
-		}
-		pmbox = lpfc_mbox_get(phba);
-	}
-	spin_unlock_irq(phba->host->host_lock);
-	return;
+lpfc_sli_hba_setup_error:
+	phba->link_state = LPFC_HBA_ERROR;
+	lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+			"0445 Firmware initialization failed\n");
+	return rc;
 }
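
The setup path above always asks for SLI-3 first (needed for NPIV and HBQs) and silently retries CONFIG_PORT in SLI-2 when the adapter cannot do it. A minimal standalone sketch of that fallback flow, assuming a stub config_port() in place of lpfc_do_config_port():

/* Illustrative sketch of the SLI mode fallback; not driver code. */
#include <stdio.h>

/* Stub standing in for lpfc_do_config_port(); returns 0 on success. */
static int config_port(int mode)
{
	return (mode == 3) ? -1 : 0;	/* pretend the HBA only speaks SLI-2 */
}

int main(void)
{
	int mode = 3;			/* prefer SLI-3 (NPIV, HBQs) */
	int rc = config_port(mode);

	if (rc && mode != 2)		/* SLI-3 refused: drop back to SLI-2 */
		rc = config_port(2);

	printf("config_port: %s\n", rc ? "failed" : "ok");
	return rc ? 1 : 0;
}

The real routine additionally honours the lpfc_sli_mode module parameter and logs message 1820 when SLI-3 is refused before retrying.
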
 
 /*! lpfc_mbox_timeout
@@ -2097,66 +2423,80 @@ lpfc_mbox_abort(struct lpfc_hba * phba)
 void
 lpfc_mbox_timeout(unsigned long ptr)
 {
-	struct lpfc_hba *phba;
+	struct lpfc_hba  *phba = (struct lpfc_hba *) ptr;
 	unsigned long iflag;
+	uint32_t tmo_posted;
 
-	phba = (struct lpfc_hba *)ptr;
-	spin_lock_irqsave(phba->host->host_lock, iflag);
-	if (!(phba->work_hba_events & WORKER_MBOX_TMO)) {
-		phba->work_hba_events |= WORKER_MBOX_TMO;
+	spin_lock_irqsave(&phba->pport->work_port_lock, iflag);
+	tmo_posted = phba->pport->work_port_events & WORKER_MBOX_TMO;
+	if (!tmo_posted)
+		phba->pport->work_port_events |= WORKER_MBOX_TMO;
+	spin_unlock_irqrestore(&phba->pport->work_port_lock, iflag);
+
+	if (!tmo_posted) {
+		spin_lock_irqsave(&phba->hbalock, iflag);
 		if (phba->work_wait)
-			wake_up(phba->work_wait);
+			lpfc_worker_wake_up(phba);
+		spin_unlock_irqrestore(&phba->hbalock, iflag);
 	}
-	spin_unlock_irqrestore(phba->host->host_lock, iflag);
 }
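
The timer callback above posts WORKER_MBOX_TMO under the port lock and wakes the worker only when the bit was not already set, so repeated timer fires collapse into a single wakeup. A minimal userspace sketch of that post-once-then-wake pattern, with pthread primitives standing in for the kernel locks and wait queue (all names here are illustrative):

/* Post-once-then-wake sketch; pthread objects stand in for kernel locks. */
#include <pthread.h>
#include <stdio.h>

#define WORKER_MBOX_TMO	0x100		/* illustrative event bit */

static pthread_mutex_t work_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  work_cv   = PTHREAD_COND_INITIALIZER;
static unsigned int work_events;	/* protected by work_lock */

static void post_mbox_timeout(void)
{
	unsigned int already_posted;

	pthread_mutex_lock(&work_lock);
	already_posted = work_events & WORKER_MBOX_TMO;
	work_events |= WORKER_MBOX_TMO;
	pthread_mutex_unlock(&work_lock);

	if (!already_posted)		/* wake the worker only once per event */
		pthread_cond_signal(&work_cv);
}

int main(void)
{
	post_mbox_timeout();
	post_mbox_timeout();		/* second call finds the bit already set */
	printf("work_events = 0x%x\n", work_events);
	return 0;
}
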
 
 void
 lpfc_mbox_timeout_handler(struct lpfc_hba *phba)
 {
-	LPFC_MBOXQ_t *pmbox;
-	MAILBOX_t *mb;
+	LPFC_MBOXQ_t *pmbox = phba->sli.mbox_active;
+	MAILBOX_t *mb = &pmbox->mb;
+	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_sli_ring *pring;
 
-	spin_lock_irq(phba->host->host_lock);
-	if (!(phba->work_hba_events & WORKER_MBOX_TMO)) {
-		spin_unlock_irq(phba->host->host_lock);
+	if (!(phba->pport->work_port_events & WORKER_MBOX_TMO)) {
 		return;
 	}
 
-	phba->work_hba_events &= ~WORKER_MBOX_TMO;
+	/* Mbox cmd <mbxCommand> timeout */
+	lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
+			"0310 Mailbox command x%x timeout Data: x%x x%x x%p\n",
+			mb->mbxCommand,
+			phba->pport->port_state,
+			phba->sli.sli_flag,
+			phba->sli.mbox_active);
 
-	pmbox = phba->sli.mbox_active;
-	mb = &pmbox->mb;
+	/* Setting state unknown so lpfc_sli_abort_iocb_ring
+	 * would get IOCB_ERROR from lpfc_sli_issue_iocb, allowing
+	 * it to fail all outstanding SCSI IO.
+	 */
+	spin_lock_irq(&phba->pport->work_port_lock);
+	phba->pport->work_port_events &= ~WORKER_MBOX_TMO;
+	spin_unlock_irq(&phba->pport->work_port_lock);
+	spin_lock_irq(&phba->hbalock);
+	phba->link_state = LPFC_LINK_UNKNOWN;
+	phba->pport->fc_flag |= FC_ESTABLISH_LINK;
+	psli->sli_flag &= ~LPFC_SLI2_ACTIVE;
+	spin_unlock_irq(&phba->hbalock);
 
-	/* Mbox cmd <mbxCommand> timeout */
-	lpfc_printf_log(phba,
-		KERN_ERR,
-		LOG_MBOX | LOG_SLI,
-		"%d:0310 Mailbox command x%x timeout Data: x%x x%x x%p\n",
-		phba->brd_no,
-		mb->mbxCommand,
-		phba->hba_state,
-		phba->sli.sli_flag,
-		phba->sli.mbox_active);
-
-	phba->sli.mbox_active = NULL;
-	if (pmbox->mbox_cmpl) {
-		mb->mbxStatus = MBX_NOT_FINISHED;
-		spin_unlock_irq(phba->host->host_lock);
-		(pmbox->mbox_cmpl) (phba, pmbox);
-		spin_lock_irq(phba->host->host_lock);
-	}
-	phba->sli.sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
-
-	spin_unlock_irq(phba->host->host_lock);
-	lpfc_mbox_abort(phba);
+	pring = &psli->ring[psli->fcp_ring];
+	lpfc_sli_abort_iocb_ring(phba, pring);
+
+	lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,
+			"0316 Resetting board due to mailbox timeout\n");
+	/*
+	 * lpfc_offline calls lpfc_sli_hba_down which will clean up
+	 * all outstanding mailbox commands.
+	 */
+	lpfc_offline_prep(phba);
+	lpfc_offline(phba);
+	lpfc_sli_brdrestart(phba);
+	if (lpfc_online(phba) == 0)		/* Initialize the HBA */
+		mod_timer(&phba->fc_estabtmo, jiffies + HZ * 60);
+	lpfc_unblock_mgmt_io(phba);
 	return;
 }
 
 int
-lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
+lpfc_sli_issue_mbox(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox, uint32_t flag)
 {
 	MAILBOX_t *mb;
-	struct lpfc_sli *psli;
+	struct lpfc_sli *psli = &phba->sli;
 	uint32_t status, evtctr;
 	uint32_t ha_copy;
 	int i;
@@ -2164,27 +2504,40 @@ lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
 	volatile uint32_t word0, ldata;
 	void __iomem *to_slim;
 
-	psli = &phba->sli;
+	if (pmbox->mbox_cmpl && pmbox->mbox_cmpl != lpfc_sli_def_mbox_cmpl &&
+		pmbox->mbox_cmpl != lpfc_sli_wake_mbox_wait) {
+		if(!pmbox->vport) {
+			lpfc_printf_log(phba, KERN_ERR,
+					LOG_MBOX | LOG_VPORT,
+					"1806 Mbox x%x failed. No vport\n",
+					pmbox->mb.mbxCommand);
+			dump_stack();
+			return MBXERR_ERROR;
+		}
+	}
 
-	spin_lock_irqsave(phba->host->host_lock, drvr_flag);
+
+
+	spin_lock_irqsave(&phba->hbalock, drvr_flag);
+	psli = &phba->sli;
 
 
 	mb = &pmbox->mb;
 	status = MBX_SUCCESS;
 
-	if (phba->hba_state == LPFC_HBA_ERROR) {
-		spin_unlock_irqrestore(phba->host->host_lock, drvr_flag);
+	if (phba->link_state == LPFC_HBA_ERROR) {
+		spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
 
 		/* Mbox command <mbxCommand> cannot issue */
-		LOG_MBOX_CANNOT_ISSUE_DATA( phba, mb, psli, flag)
-		return (MBX_NOT_FINISHED);
+		LOG_MBOX_CANNOT_ISSUE_DATA(phba, pmbox, psli, flag)
+		return MBX_NOT_FINISHED;
 	}
 
 	if (mb->mbxCommand != MBX_KILL_BOARD && flag & MBX_NOWAIT &&
 	    !(readl(phba->HCregaddr) & HC_MBINT_ENA)) {
-		spin_unlock_irqrestore(phba->host->host_lock, drvr_flag);
-		LOG_MBOX_CANNOT_ISSUE_DATA( phba, mb, psli, flag)
-		return (MBX_NOT_FINISHED);
+		spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
+		LOG_MBOX_CANNOT_ISSUE_DATA(phba, pmbox, psli, flag)
+		return MBX_NOT_FINISHED;
 	}
 
 	if (psli->sli_flag & LPFC_SLI_MBOX_ACTIVE) {
@@ -2194,35 +2547,18 @@ lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
 		 */
 
 		if (flag & MBX_POLL) {
-			spin_unlock_irqrestore(phba->host->host_lock,
-					       drvr_flag);
+			spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
 
 			/* Mbox command <mbxCommand> cannot issue */
-			LOG_MBOX_CANNOT_ISSUE_DATA( phba, mb, psli, flag)
-			return (MBX_NOT_FINISHED);
+			LOG_MBOX_CANNOT_ISSUE_DATA(phba, pmbox, psli, flag);
+			return MBX_NOT_FINISHED;
 		}
 
 		if (!(psli->sli_flag & LPFC_SLI2_ACTIVE)) {
-			spin_unlock_irqrestore(phba->host->host_lock,
-					       drvr_flag);
+			spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
 			/* Mbox command <mbxCommand> cannot issue */
-			LOG_MBOX_CANNOT_ISSUE_DATA( phba, mb, psli, flag)
-			return (MBX_NOT_FINISHED);
-		}
-
-		/* Handle STOP IOCB processing flag. This is only meaningful
-		 * if we are not polling for mbox completion.
-		 */
-		if (flag & MBX_STOP_IOCB) {
-			flag &= ~MBX_STOP_IOCB;
-			/* Now flag each ring */
-			for (i = 0; i < psli->num_rings; i++) {
-				/* If the ring is active, flag it */
-				if (psli->ring[i].cmdringaddr) {
-					psli->ring[i].flag |=
-					    LPFC_STOP_IOCB_MBX;
-				}
-			}
+			LOG_MBOX_CANNOT_ISSUE_DATA(phba, pmbox, psli, flag);
+			return MBX_NOT_FINISHED;
 		}
 
 		/* Another mailbox command is still being processed, queue this
@@ -2231,38 +2567,32 @@ lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
 		lpfc_mbox_put(phba, pmbox);
 
 		/* Mbox cmd issue - BUSY */
-		lpfc_printf_log(phba,
-			KERN_INFO,
-			LOG_MBOX | LOG_SLI,
-			"%d:0308 Mbox cmd issue - BUSY Data: x%x x%x x%x x%x\n",
-			phba->brd_no,
-			mb->mbxCommand,
-			phba->hba_state,
-			psli->sli_flag,
-			flag);
+		lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
+				"(%d):0308 Mbox cmd issue - BUSY Data: "
+				"x%x x%x x%x x%x\n",
+				pmbox->vport ? pmbox->vport->vpi : 0xffffff,
+				mb->mbxCommand, phba->pport->port_state,
+				psli->sli_flag, flag);
 
 		psli->slistat.mbox_busy++;
-		spin_unlock_irqrestore(phba->host->host_lock,
-				       drvr_flag);
-
-		return (MBX_BUSY);
-	}
-
-	/* Handle STOP IOCB processing flag. This is only meaningful
-	 * if we are not polling for mbox completion.
-	 */
-	if (flag & MBX_STOP_IOCB) {
-		flag &= ~MBX_STOP_IOCB;
-		if (flag == MBX_NOWAIT) {
-			/* Now flag each ring */
-			for (i = 0; i < psli->num_rings; i++) {
-				/* If the ring is active, flag it */
-				if (psli->ring[i].cmdringaddr) {
-					psli->ring[i].flag |=
-					    LPFC_STOP_IOCB_MBX;
-				}
-			}
+		spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
+
+		if (pmbox->vport) {
+			lpfc_debugfs_disc_trc(pmbox->vport,
+				LPFC_DISC_TRC_MBOX_VPORT,
+				"MBOX Bsy vport:  cmd:x%x mb:x%x x%x",
+				(uint32_t)mb->mbxCommand,
+				mb->un.varWords[0], mb->un.varWords[1]);
+		}
+		else {
+			lpfc_debugfs_disc_trc(phba->pport,
+				LPFC_DISC_TRC_MBOX,
+				"MBOX Bsy:        cmd:x%x mb:x%x x%x",
+				(uint32_t)mb->mbxCommand,
+				mb->un.varWords[0], mb->un.varWords[1]);
 		}
+
+		return MBX_BUSY;
 	}
 
 	psli->sli_flag |= LPFC_SLI_MBOX_ACTIVE;
@@ -2272,11 +2602,10 @@ lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
 		if (!(psli->sli_flag & LPFC_SLI2_ACTIVE) &&
 		    (mb->mbxCommand != MBX_KILL_BOARD)) {
 			psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
-			spin_unlock_irqrestore(phba->host->host_lock,
-					       drvr_flag);
+			spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
 			/* Mbox command <mbxCommand> cannot issue */
-			LOG_MBOX_CANNOT_ISSUE_DATA( phba, mb, psli, flag);
-			return (MBX_NOT_FINISHED);
+			LOG_MBOX_CANNOT_ISSUE_DATA(phba, pmbox, psli, flag);
+			return MBX_NOT_FINISHED;
 		}
 		/* timeout active mbox command */
 		mod_timer(&psli->mbox_tmo, (jiffies +
@@ -2284,15 +2613,29 @@ lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
 	}
 
 	/* Mailbox cmd <cmd> issue */
-	lpfc_printf_log(phba,
-		KERN_INFO,
-		LOG_MBOX | LOG_SLI,
-		"%d:0309 Mailbox cmd x%x issue Data: x%x x%x x%x\n",
-		phba->brd_no,
-		mb->mbxCommand,
-		phba->hba_state,
-		psli->sli_flag,
-		flag);
+	lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI,
+			"(%d):0309 Mailbox cmd x%x issue Data: x%x x%x "
+			"x%x\n",
+			pmbox->vport ? pmbox->vport->vpi : 0,
+			mb->mbxCommand, phba->pport->port_state,
+			psli->sli_flag, flag);
+
+	if (mb->mbxCommand != MBX_HEARTBEAT) {
+		if (pmbox->vport) {
+			lpfc_debugfs_disc_trc(pmbox->vport,
+				LPFC_DISC_TRC_MBOX_VPORT,
+				"MBOX Send vport: cmd:x%x mb:x%x x%x",
+				(uint32_t)mb->mbxCommand,
+				mb->un.varWords[0], mb->un.varWords[1]);
+		}
+		else {
+			lpfc_debugfs_disc_trc(phba->pport,
+				LPFC_DISC_TRC_MBOX,
+				"MBOX Send:       cmd:x%x mb:x%x x%x",
+				(uint32_t)mb->mbxCommand,
+				mb->un.varWords[0], mb->un.varWords[1]);
+		}
+	}
 
 	psli->slistat.mbox_cmd++;
 	evtctr = psli->slistat.mbox_event;
@@ -2307,7 +2650,7 @@ lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
 		if (mb->mbxCommand == MBX_CONFIG_PORT) {
 			/* copy command data into host mbox for cmpl */
 			lpfc_sli_pcimem_bcopy(mb, &phba->slim2p->mbx,
-					MAILBOX_CMD_SIZE);
+					      MAILBOX_CMD_SIZE);
 		}
 
 		/* First copy mbox command data to HBA SLIM, skip past first
@@ -2359,12 +2702,12 @@ lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
 		/* Wait for command to complete */
 		while (((word0 & OWN_CHIP) == OWN_CHIP) ||
 		       (!(ha_copy & HA_MBATT) &&
-			(phba->hba_state > LPFC_WARM_START))) {
+			(phba->link_state > LPFC_WARM_START))) {
 			if (i-- <= 0) {
 				psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
-				spin_unlock_irqrestore(phba->host->host_lock,
+				spin_unlock_irqrestore(&phba->hbalock,
 						       drvr_flag);
-				return (MBX_NOT_FINISHED);
+				return MBX_NOT_FINISHED;
 			}
 
 			/* Check if we took a mbox interrupt while we were
@@ -2373,12 +2716,12 @@ lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
 			    && (evtctr != psli->slistat.mbox_event))
 				break;
 
-			spin_unlock_irqrestore(phba->host->host_lock,
+			spin_unlock_irqrestore(&phba->hbalock,
 					       drvr_flag);
 
 			msleep(1);
 
-			spin_lock_irqsave(phba->host->host_lock, drvr_flag);
+			spin_lock_irqsave(&phba->hbalock, drvr_flag);
 
 			if (psli->sli_flag & LPFC_SLI2_ACTIVE) {
 				/* First copy command data */
@@ -2409,7 +2752,7 @@ lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
 		if (psli->sli_flag & LPFC_SLI2_ACTIVE) {
 			/* copy results back to user */
 			lpfc_sli_pcimem_bcopy(&phba->slim2p->mbx, mb,
-					MAILBOX_CMD_SIZE);
+					      MAILBOX_CMD_SIZE);
 		} else {
 			/* First copy command data */
 			lpfc_memcpy_from_slim(mb, phba->MBslimaddr,
@@ -2429,23 +2772,25 @@ lpfc_sli_issue_mbox(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmbox, uint32_t flag)
 		status = mb->mbxStatus;
 	}
 
-	spin_unlock_irqrestore(phba->host->host_lock, drvr_flag);
-	return (status);
+	spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
+	return status;
 }
 
-static int
-lpfc_sli_ringtx_put(struct lpfc_hba * phba, struct lpfc_sli_ring * pring,
-		    struct lpfc_iocbq * piocb)
+/*
+ * Caller needs to hold lock.
+ */
+static void
+__lpfc_sli_ringtx_put(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+		    struct lpfc_iocbq *piocb)
 {
 	/* Insert the caller's iocb in the txq tail for later processing. */
 	list_add_tail(&piocb->list, &pring->txq);
 	pring->txq_cnt++;
-	return (0);
 }
 
 static struct lpfc_iocbq *
 lpfc_sli_next_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
-		   struct lpfc_iocbq ** piocb)
+		   struct lpfc_iocbq **piocb)
 {
 	struct lpfc_iocbq * nextiocb;
 
@@ -2458,29 +2803,45 @@ lpfc_sli_next_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	return nextiocb;
 }
 
-int
-lpfc_sli_issue_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+/*
+ * Lockless version of lpfc_sli_issue_iocb.
+ */
+static int
+__lpfc_sli_issue_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 		    struct lpfc_iocbq *piocb, uint32_t flag)
 {
 	struct lpfc_iocbq *nextiocb;
 	IOCB_t *iocb;
 
+	if (piocb->iocb_cmpl && (!piocb->vport) &&
+	   (piocb->iocb.ulpCommand != CMD_ABORT_XRI_CN) &&
+	   (piocb->iocb.ulpCommand != CMD_CLOSE_XRI_CN)) {
+		lpfc_printf_log(phba, KERN_ERR,
+				LOG_SLI | LOG_VPORT,
+				"1807 IOCB x%x failed. No vport\n",
+				piocb->iocb.ulpCommand);
+		dump_stack();
+		return IOCB_ERROR;
+	}
+
+
+
 	/*
 	 * We should never get an IOCB if we are in a < LINK_DOWN state
 	 */
-	if (unlikely(phba->hba_state < LPFC_LINK_DOWN))
+	if (unlikely(phba->link_state < LPFC_LINK_DOWN))
 		return IOCB_ERROR;
 
 	/*
 	 * Check to see if we are blocking IOCB processing because of a
-	 * outstanding mbox command.
+	 * outstanding event.
 	 */
-	if (unlikely(pring->flag & LPFC_STOP_IOCB_MBX))
+	if (unlikely(pring->flag & LPFC_STOP_IOCB_EVENT))
 		goto iocb_busy;
 
-	if (unlikely(phba->hba_state == LPFC_LINK_DOWN)) {
+	if (unlikely(phba->link_state == LPFC_LINK_DOWN)) {
 		/*
-		 * Only CREATE_XRI, CLOSE_XRI, ABORT_XRI, and QUE_RING_BUF
+		 * Only CREATE_XRI, CLOSE_XRI, and QUE_RING_BUF
 		 * can be issued if the link is not up.
 		 */
 		switch (piocb->iocb.ulpCommand) {
@@ -2494,6 +2855,8 @@ lpfc_sli_issue_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 				piocb->iocb_cmpl = NULL;
 			/*FALLTHROUGH*/
 		case CMD_CREATE_XRI_CR:
+		case CMD_CLOSE_XRI_CN:
+		case CMD_CLOSE_XRI_CX:
 			break;
 		default:
 			goto iocb_busy;
@@ -2504,8 +2867,9 @@ lpfc_sli_issue_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	 * attention events.
 	 */
 	} else if (unlikely(pring->ringno == phba->sli.fcp_ring &&
-		   !(phba->sli.sli_flag & LPFC_PROCESS_LA)))
+			    !(phba->sli.sli_flag & LPFC_PROCESS_LA))) {
 		goto iocb_busy;
+	}
 
 	while ((iocb = lpfc_sli_next_iocb_slot(phba, pring)) &&
 	       (nextiocb = lpfc_sli_next_iocb(phba, pring, &piocb)))
@@ -2527,13 +2891,28 @@ lpfc_sli_issue_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
  out_busy:
 
 	if (!(flag & SLI_IOCB_RET_IOCB)) {
-		lpfc_sli_ringtx_put(phba, pring, piocb);
+		__lpfc_sli_ringtx_put(phba, pring, piocb);
 		return IOCB_SUCCESS;
 	}
 
 	return IOCB_BUSY;
 }
 
+
+int
+lpfc_sli_issue_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+		    struct lpfc_iocbq *piocb, uint32_t flag)
+{
+	unsigned long iflags;
+	int rc;
+
+	spin_lock_irqsave(&phba->hbalock, iflags);
+	rc = __lpfc_sli_issue_iocb(phba, pring, piocb, flag);
+	spin_unlock_irqrestore(&phba->hbalock, iflags);
+
+	return rc;
+}
+
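
lpfc_sli_issue_iocb() is now a thin locking shell around __lpfc_sli_issue_iocb(), so callers that already hold hbalock (for example lpfc_sli_issue_abort_iotag() later in this patch) can use the lockless core directly. A minimal userspace sketch of the same locked-wrapper/lockless-core split, with a pthread mutex standing in for the spinlock and simplified names:

/* Sketch of the locked-wrapper-over-lockless-core split; not driver code. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t hbalock = PTHREAD_MUTEX_INITIALIZER;
static int txq_cnt;			/* state the lock protects */

/* Lockless core: caller must already hold hbalock. */
static int __issue_iocb(int tag)
{
	txq_cnt++;			/* touch the protected state */
	return tag;			/* pretend success */
}

/* Public entry point: takes the lock, then calls the __ version. */
static int issue_iocb(int tag)
{
	int rc;

	pthread_mutex_lock(&hbalock);
	rc = __issue_iocb(tag);
	pthread_mutex_unlock(&hbalock);
	return rc;
}

int main(void)
{
	printf("rc=%d txq_cnt=%d\n", issue_iocb(7), txq_cnt);
	return 0;
}
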
 static int
 lpfc_extra_ring_setup( struct lpfc_hba *phba)
 {
@@ -2569,10 +2948,65 @@ lpfc_extra_ring_setup( struct lpfc_hba *phba)
 	return 0;
 }
 
+static void
+lpfc_sli_async_event_handler(struct lpfc_hba * phba,
+	struct lpfc_sli_ring * pring, struct lpfc_iocbq * iocbq)
+{
+	IOCB_t *icmd;
+	uint16_t evt_code;
+	uint16_t temp;
+	struct temp_event temp_event_data;
+	struct Scsi_Host *shost;
+
+	icmd = &iocbq->iocb;
+	evt_code = icmd->un.asyncstat.evt_code;
+	temp = icmd->ulpContext;
+
+	if ((evt_code != ASYNC_TEMP_WARN) &&
+		(evt_code != ASYNC_TEMP_SAFE)) {
+		lpfc_printf_log(phba,
+			KERN_ERR,
+			LOG_SLI,
+			"0327 Ring %d handler: unexpected ASYNC_STATUS"
+			" evt_code 0x%x\n",
+			pring->ringno,
+			icmd->un.asyncstat.evt_code);
+		return;
+	}
+	temp_event_data.data = (uint32_t)temp;
+	temp_event_data.event_type = FC_REG_TEMPERATURE_EVENT;
+	if (evt_code == ASYNC_TEMP_WARN) {
+		temp_event_data.event_code = LPFC_THRESHOLD_TEMP;
+		lpfc_printf_log(phba,
+				KERN_WARNING,
+				LOG_TEMP,
+				"0339 Adapter is very hot, please take "
+				"corrective action. temperature : %d Celsius\n",
+				temp);
+	}
+	if (evt_code == ASYNC_TEMP_SAFE) {
+		temp_event_data.event_code = LPFC_NORMAL_TEMP;
+		lpfc_printf_log(phba,
+				KERN_INFO,
+				LOG_TEMP,
+				"0340 Adapter temperature is OK now. "
+				"temperature : %d Celsius\n",
+				temp);
+	}
+
+	/* Send temperature change event to applications */
+	shost = lpfc_shost_from_vport(phba->pport);
+	fc_host_post_vendor_event(shost, fc_get_event_number(),
+		sizeof(temp_event_data), (char *) &temp_event_data,
+		SCSI_NL_VID_TYPE_PCI | PCI_VENDOR_ID_EMULEX);
+
+}
+
+
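
The temperature handler above is not called directly from the response path; lpfc_sli_setup() below stores it in the ELS ring's lpfc_sli_rcv_async_status pointer, and the ring code invokes it when an ASYNC_STATUS IOCB arrives. A tiny sketch of that per-ring callback dispatch, using simplified stand-in types rather than the driver's structures:

/* Per-ring async callback dispatch sketch; simplified stand-in types. */
#include <stdio.h>

struct ring;
typedef void (*async_handler_t)(struct ring *, int evt_code);

struct ring {
	int ringno;
	async_handler_t rcv_async_status;	/* NULL if ring has no handler */
};

static void temp_event_handler(struct ring *r, int evt_code)
{
	printf("ring %d: async event 0x%x\n", r->ringno, evt_code);
}

int main(void)
{
	struct ring els = { .ringno = 2, .rcv_async_status = temp_event_handler };

	/* Response path: dispatch only if a handler was registered. */
	if (els.rcv_async_status)
		els.rcv_async_status(&els, 0x1);
	return 0;
}
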
 int
 lpfc_sli_setup(struct lpfc_hba *phba)
 {
-	int i, totiocb = 0;
+	int i, totiocbsize = 0;
 	struct lpfc_sli *psli = &phba->sli;
 	struct lpfc_sli_ring *pring;
 
@@ -2597,6 +3031,12 @@ lpfc_sli_setup(struct lpfc_hba *phba)
 			pring->numRiocb += SLI2_IOCB_RSP_R1XTRA_ENTRIES;
 			pring->numCiocb += SLI2_IOCB_CMD_R3XTRA_ENTRIES;
 			pring->numRiocb += SLI2_IOCB_RSP_R3XTRA_ENTRIES;
+			pring->sizeCiocb = (phba->sli_rev == 3) ?
+							SLI3_IOCB_CMD_SIZE :
+							SLI2_IOCB_CMD_SIZE;
+			pring->sizeRiocb = (phba->sli_rev == 3) ?
+							SLI3_IOCB_RSP_SIZE :
+							SLI2_IOCB_RSP_SIZE;
 			pring->iotag_ctr = 0;
 			pring->iotag_max =
 			    (phba->cfg_hba_queue_depth * 2);
@@ -2607,15 +3047,30 @@ lpfc_sli_setup(struct lpfc_hba *phba)
 			/* numCiocb and numRiocb are used in config_port */
 			pring->numCiocb = SLI2_IOCB_CMD_R1_ENTRIES;
 			pring->numRiocb = SLI2_IOCB_RSP_R1_ENTRIES;
+			pring->sizeCiocb = (phba->sli_rev == 3) ?
+							SLI3_IOCB_CMD_SIZE :
+							SLI2_IOCB_CMD_SIZE;
+			pring->sizeRiocb = (phba->sli_rev == 3) ?
+							SLI3_IOCB_RSP_SIZE :
+							SLI2_IOCB_RSP_SIZE;
+			pring->iotag_max = phba->cfg_hba_queue_depth;
 			pring->num_mask = 0;
 			break;
 		case LPFC_ELS_RING:	/* ring 2 - ELS / CT */
 			/* numCiocb and numRiocb are used in config_port */
 			pring->numCiocb = SLI2_IOCB_CMD_R2_ENTRIES;
 			pring->numRiocb = SLI2_IOCB_RSP_R2_ENTRIES;
+			pring->sizeCiocb = (phba->sli_rev == 3) ?
+							SLI3_IOCB_CMD_SIZE :
+							SLI2_IOCB_CMD_SIZE;
+			pring->sizeRiocb = (phba->sli_rev == 3) ?
+							SLI3_IOCB_RSP_SIZE :
+							SLI2_IOCB_RSP_SIZE;
 			pring->fast_iotag = 0;
 			pring->iotag_ctr = 0;
 			pring->iotag_max = 4096;
+			pring->lpfc_sli_rcv_async_status =
+				lpfc_sli_async_event_handler;
 			pring->num_mask = 4;
 			pring->prt[0].profile = 0;	/* Mask 0 */
 			pring->prt[0].rctl = FC_ELS_REQ;
@@ -2643,14 +3098,15 @@ lpfc_sli_setup(struct lpfc_hba *phba)
 			    lpfc_ct_unsol_event;
 			break;
 		}
-		totiocb += (pring->numCiocb + pring->numRiocb);
+		totiocbsize += (pring->numCiocb * pring->sizeCiocb) +
+				(pring->numRiocb * pring->sizeRiocb);
 	}
-	if (totiocb > MAX_SLI2_IOCB) {
+	if (totiocbsize > MAX_SLIM_IOCB_SIZE) {
 		/* Too many cmd / rsp ring entries in SLI2 SLIM */
-		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-				"%d:0462 Too many cmd / rsp ring entries in "
-				"SLI2 SLIM Data: x%x x%x\n",
-				phba->brd_no, totiocb, MAX_SLI2_IOCB);
+		printk(KERN_ERR "%d:0462 Too many cmd / rsp ring entries in "
+		       "SLI2 SLIM Data: x%x x%lx\n",
+		       phba->brd_no, totiocbsize,
+		       (unsigned long) MAX_SLIM_IOCB_SIZE);
 	}
 	if (phba->cfg_multi_ring_support == 2)
 		lpfc_extra_ring_setup(phba);
@@ -2659,15 +3115,16 @@ lpfc_sli_setup(struct lpfc_hba *phba)
 }
 
 int
-lpfc_sli_queue_setup(struct lpfc_hba * phba)
+lpfc_sli_queue_setup(struct lpfc_hba *phba)
 {
 	struct lpfc_sli *psli;
 	struct lpfc_sli_ring *pring;
 	int i;
 
 	psli = &phba->sli;
-	spin_lock_irq(phba->host->host_lock);
+	spin_lock_irq(&phba->hbalock);
 	INIT_LIST_HEAD(&psli->mboxq);
+	INIT_LIST_HEAD(&psli->mboxq_cmpl);
 	/* Initialize list headers for txq and txcmplq as double linked lists */
 	for (i = 0; i < psli->num_rings; i++) {
 		pring = &psli->ring[i];
@@ -2680,86 +3137,142 @@ lpfc_sli_queue_setup(struct lpfc_hba * phba)
 		INIT_LIST_HEAD(&pring->iocb_continueq);
 		INIT_LIST_HEAD(&pring->postbufq);
 	}
-	spin_unlock_irq(phba->host->host_lock);
-	return (1);
+	spin_unlock_irq(&phba->hbalock);
+	return 1;
 }
 
 int
-lpfc_sli_hba_down(struct lpfc_hba * phba)
+lpfc_sli_host_down(struct lpfc_vport *vport)
 {
-	struct lpfc_sli *psli;
+	LIST_HEAD(completions);
+	struct lpfc_hba *phba = vport->phba;
+	struct lpfc_sli *psli = &phba->sli;
 	struct lpfc_sli_ring *pring;
-	LPFC_MBOXQ_t *pmb;
 	struct lpfc_iocbq *iocb, *next_iocb;
-	IOCB_t *icmd = NULL;
+	int i;
+	unsigned long flags = 0;
+	uint16_t prev_pring_flag;
+
+	lpfc_cleanup_discovery_resources(vport);
+
+	spin_lock_irqsave(&phba->hbalock, flags);
+	for (i = 0; i < psli->num_rings; i++) {
+		pring = &psli->ring[i];
+		prev_pring_flag = pring->flag;
+		if (pring->ringno == LPFC_ELS_RING) /* Only slow rings */
+			pring->flag |= LPFC_DEFERRED_RING_EVENT;
+		/*
+		 * Error everything on the txq since these iocbs have not been
+		 * given to the FW yet.
+		 */
+		list_for_each_entry_safe(iocb, next_iocb, &pring->txq, list) {
+			if (iocb->vport != vport)
+				continue;
+			list_move_tail(&iocb->list, &completions);
+			pring->txq_cnt--;
+		}
+
+		/* Next issue ABTS for everything on the txcmplq */
+		list_for_each_entry_safe(iocb, next_iocb, &pring->txcmplq,
+									list) {
+			if (iocb->vport != vport)
+				continue;
+			lpfc_sli_issue_abort_iotag(phba, pring, iocb);
+		}
+
+		pring->flag = prev_pring_flag;
+	}
+
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+
+	while (!list_empty(&completions)) {
+		list_remove_head(&completions, iocb, struct lpfc_iocbq, list);
+
+		if (!iocb->iocb_cmpl)
+			lpfc_sli_release_iocbq(phba, iocb);
+		else {
+			iocb->iocb.ulpStatus = IOSTAT_LOCAL_REJECT;
+			iocb->iocb.un.ulpWord[4] = IOERR_SLI_DOWN;
+			(iocb->iocb_cmpl) (phba, iocb, iocb);
+		}
+	}
+	return 1;
+}
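
lpfc_sli_host_down() above and lpfc_sli_hba_down() below share one idiom: move every affected element onto a local completions list while hbalock is held, then run the completion callbacks only after the lock is dropped. A simplified standalone sketch of that gather-then-complete idiom, with a singly linked list and a pthread mutex as stand-ins:

/* Gather-under-lock, complete-outside-lock sketch; not driver code. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct iocb {
	struct iocb *next;
	int tag;
};

static pthread_mutex_t hbalock = PTHREAD_MUTEX_INITIALIZER;
static struct iocb *txq;		/* protected by hbalock */

static void flush_txq(void)
{
	struct iocb *completions, *iocb;

	pthread_mutex_lock(&hbalock);
	completions = txq;		/* steal the whole queue ... */
	txq = NULL;			/* ... leaving it empty */
	pthread_mutex_unlock(&hbalock);

	while (completions) {		/* complete with the lock dropped */
		iocb = completions;
		completions = iocb->next;
		printf("failing iocb %d\n", iocb->tag);
		free(iocb);
	}
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct iocb *p = malloc(sizeof(*p));
		p->tag = i;
		p->next = txq;
		txq = p;
	}
	flush_txq();
	return 0;
}
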
+
+int
+lpfc_sli_hba_down(struct lpfc_hba *phba)
+{
+	LIST_HEAD(completions);
+	struct lpfc_sli *psli = &phba->sli;
+	struct lpfc_sli_ring *pring;
+	LPFC_MBOXQ_t *pmb;
+	struct lpfc_iocbq *iocb;
+	IOCB_t *cmd = NULL;
 	int i;
 	unsigned long flags = 0;
 
-	psli = &phba->sli;
 	lpfc_hba_down_prep(phba);
 
-	spin_lock_irqsave(phba->host->host_lock, flags);
+	lpfc_fabric_abort_hba(phba);
 
+	spin_lock_irqsave(&phba->hbalock, flags);
 	for (i = 0; i < psli->num_rings; i++) {
 		pring = &psli->ring[i];
-		pring->flag |= LPFC_DEFERRED_RING_EVENT;
+		if (pring->ringno == LPFC_ELS_RING) /* Only slow rings */
+			pring->flag |= LPFC_DEFERRED_RING_EVENT;
 
 		/*
 		 * Error everything on the txq since these iocbs have not been
 		 * given to the FW yet.
 		 */
+		list_splice_init(&pring->txq, &completions);
 		pring->txq_cnt = 0;
 
-		list_for_each_entry_safe(iocb, next_iocb, &pring->txq, list) {
-			list_del_init(&iocb->list);
-			if (iocb->iocb_cmpl) {
-				icmd = &iocb->iocb;
-				icmd->ulpStatus = IOSTAT_LOCAL_REJECT;
-				icmd->un.ulpWord[4] = IOERR_SLI_DOWN;
-				spin_unlock_irqrestore(phba->host->host_lock,
-						       flags);
-				(iocb->iocb_cmpl) (phba, iocb, iocb);
-				spin_lock_irqsave(phba->host->host_lock, flags);
-			} else
-				lpfc_sli_release_iocbq(phba, iocb);
-		}
+	}
+	spin_unlock_irqrestore(&phba->hbalock, flags);
 
-		INIT_LIST_HEAD(&(pring->txq));
+	while (!list_empty(&completions)) {
+		list_remove_head(&completions, iocb, struct lpfc_iocbq, list);
+		cmd = &iocb->iocb;
 
+		if (!iocb->iocb_cmpl)
+			lpfc_sli_release_iocbq(phba, iocb);
+		else {
+			cmd->ulpStatus = IOSTAT_LOCAL_REJECT;
+			cmd->un.ulpWord[4] = IOERR_SLI_DOWN;
+			(iocb->iocb_cmpl) (phba, iocb, iocb);
+		}
 	}
 
-	spin_unlock_irqrestore(phba->host->host_lock, flags);
-
 	/* Return any active mbox cmds */
 	del_timer_sync(&psli->mbox_tmo);
-	spin_lock_irqsave(phba->host->host_lock, flags);
-	phba->work_hba_events &= ~WORKER_MBOX_TMO;
+	spin_lock_irqsave(&phba->hbalock, flags);
+
+	spin_lock(&phba->pport->work_port_lock);
+	phba->pport->work_port_events &= ~WORKER_MBOX_TMO;
+	spin_unlock(&phba->pport->work_port_lock);
+
 	if (psli->mbox_active) {
-		pmb = psli->mbox_active;
-		pmb->mb.mbxStatus = MBX_NOT_FINISHED;
-		if (pmb->mbox_cmpl) {
-			spin_unlock_irqrestore(phba->host->host_lock, flags);
-			pmb->mbox_cmpl(phba,pmb);
-			spin_lock_irqsave(phba->host->host_lock, flags);
-		}
+		list_add_tail(&psli->mbox_active->list, &completions);
+		psli->mbox_active = NULL;
+		psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
 	}
-	psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
-	psli->mbox_active = NULL;
 
-	/* Return any pending mbox cmds */
-	while ((pmb = lpfc_mbox_get(phba)) != NULL) {
+	/* Return any pending or completed mbox cmds */
+	list_splice_init(&phba->sli.mboxq, &completions);
+	list_splice_init(&phba->sli.mboxq_cmpl, &completions);
+	INIT_LIST_HEAD(&psli->mboxq);
+	INIT_LIST_HEAD(&psli->mboxq_cmpl);
+
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+
+	while (!list_empty(&completions)) {
+		list_remove_head(&completions, pmb, LPFC_MBOXQ_t, list);
 		pmb->mb.mbxStatus = MBX_NOT_FINISHED;
 		if (pmb->mbox_cmpl) {
-			spin_unlock_irqrestore(phba->host->host_lock, flags);
 			pmb->mbox_cmpl(phba,pmb);
-			spin_lock_irqsave(phba->host->host_lock, flags);
 		}
 	}
-
-	INIT_LIST_HEAD(&psli->mboxq);
-
-	spin_unlock_irqrestore(phba->host->host_lock, flags);
-
 	return 1;
 }
 
@@ -2781,14 +3294,15 @@ lpfc_sli_pcimem_bcopy(void *srcp, void *destp, uint32_t cnt)
 }
 
 int
-lpfc_sli_ringpostbuf_put(struct lpfc_hba * phba, struct lpfc_sli_ring * pring,
-			 struct lpfc_dmabuf * mp)
+lpfc_sli_ringpostbuf_put(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+			 struct lpfc_dmabuf *mp)
 {
 	/* Stick struct lpfc_dmabuf at end of postbufq so driver can look it up
 	   later */
+	spin_lock_irq(&phba->hbalock);
 	list_add_tail(&mp->list, &pring->postbufq);
-
 	pring->postbufq_cnt++;
+	spin_unlock_irq(&phba->hbalock);
 	return 0;
 }
 
@@ -2801,58 +3315,129 @@ lpfc_sli_ringpostbuf_get(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	struct list_head *slp = &pring->postbufq;
 
 	/* Search postbufq, from the beginning, looking for a match on phys */
+	spin_lock_irq(&phba->hbalock);
 	list_for_each_entry_safe(mp, next_mp, &pring->postbufq, list) {
 		if (mp->phys == phys) {
 			list_del_init(&mp->list);
 			pring->postbufq_cnt--;
+			spin_unlock_irq(&phba->hbalock);
 			return mp;
 		}
 	}
 
+	spin_unlock_irq(&phba->hbalock);
 	lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
-			"%d:0410 Cannot find virtual addr for mapped buf on "
+			"0410 Cannot find virtual addr for mapped buf on "
 			"ring %d Data x%llx x%p x%p x%x\n",
-			phba->brd_no, pring->ringno, (unsigned long long)phys,
+			pring->ringno, (unsigned long long)phys,
 			slp->next, slp->prev, pring->postbufq_cnt);
 	return NULL;
 }
 
 static void
-lpfc_sli_abort_els_cmpl(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-			struct lpfc_iocbq * rspiocb)
+lpfc_sli_abort_els_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			struct lpfc_iocbq *rspiocb)
 {
-	spin_lock_irq(phba->host->host_lock);
+	IOCB_t *irsp = &rspiocb->iocb;
+	uint16_t abort_iotag, abort_context;
+	struct lpfc_iocbq *abort_iocb;
+	struct lpfc_sli_ring *pring = &phba->sli.ring[LPFC_ELS_RING];
+
+	abort_iocb = NULL;
+
+	if (irsp->ulpStatus) {
+		abort_context = cmdiocb->iocb.un.acxri.abortContextTag;
+		abort_iotag = cmdiocb->iocb.un.acxri.abortIoTag;
+
+		spin_lock_irq(&phba->hbalock);
+		if (abort_iotag != 0 && abort_iotag <= phba->sli.last_iotag)
+			abort_iocb = phba->sli.iocbq_lookup[abort_iotag];
+
+		lpfc_printf_log(phba, KERN_INFO, LOG_ELS | LOG_SLI,
+				"0327 Cannot abort els iocb %p "
+				"with tag %x context %x, abort status %x, "
+				"abort code %x\n",
+				abort_iocb, abort_iotag, abort_context,
+				irsp->ulpStatus, irsp->un.ulpWord[4]);
+
+		/*
+		 * make sure we have the right iocbq before taking it
+		 * off the txcmplq and try to call completion routine.
+		 */
+		if (!abort_iocb ||
+		    abort_iocb->iocb.ulpContext != abort_context ||
+		    (abort_iocb->iocb_flag & LPFC_DRIVER_ABORTED) == 0)
+			spin_unlock_irq(&phba->hbalock);
+		else {
+			list_del_init(&abort_iocb->list);
+			pring->txcmplq_cnt--;
+			spin_unlock_irq(&phba->hbalock);
+
+			abort_iocb->iocb_flag &= ~LPFC_DRIVER_ABORTED;
+			abort_iocb->iocb.ulpStatus = IOSTAT_LOCAL_REJECT;
+			abort_iocb->iocb.un.ulpWord[4] = IOERR_SLI_ABORTED;
+			(abort_iocb->iocb_cmpl)(phba, abort_iocb, abort_iocb);
+		}
+	}
+
 	lpfc_sli_release_iocbq(phba, cmdiocb);
-	spin_unlock_irq(phba->host->host_lock);
+	return;
+}
+
+static void
+lpfc_ignore_els_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+		     struct lpfc_iocbq *rspiocb)
+{
+	IOCB_t *irsp = &rspiocb->iocb;
+
+	/* ELS cmd tag <ulpIoTag> completes */
+	lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
+			"0133 Ignoring ELS cmd tag x%x completion Data: "
+			"x%x x%x x%x\n",
+			irsp->ulpIoTag, irsp->ulpStatus,
+			irsp->un.ulpWord[4], irsp->ulpTimeout);
+	if (cmdiocb->iocb.ulpCommand == CMD_GEN_REQUEST64_CR)
+		lpfc_ct_free_iocb(phba, cmdiocb);
+	else
+		lpfc_els_free_iocb(phba, cmdiocb);
 	return;
 }
 
 int
-lpfc_sli_issue_abort_iotag(struct lpfc_hba * phba,
-			   struct lpfc_sli_ring * pring,
-			   struct lpfc_iocbq * cmdiocb)
+lpfc_sli_issue_abort_iotag(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+			   struct lpfc_iocbq *cmdiocb)
 {
+	struct lpfc_vport *vport = cmdiocb->vport;
 	struct lpfc_iocbq *abtsiocbp;
 	IOCB_t *icmd = NULL;
 	IOCB_t *iabt = NULL;
 	int retval = IOCB_ERROR;
 
-	/* There are certain command types we don't want
-	 * to abort.
+	/*
+	 * There are certain command types we don't want to abort.  And we
+	 * don't want to abort commands that are already in the process of
+	 * being aborted.
 	 */
 	icmd = &cmdiocb->iocb;
-	if ((icmd->ulpCommand == CMD_ABORT_XRI_CN) ||
-	    (icmd->ulpCommand == CMD_CLOSE_XRI_CN))
+	if (icmd->ulpCommand == CMD_ABORT_XRI_CN ||
+	    icmd->ulpCommand == CMD_CLOSE_XRI_CN ||
+	    (cmdiocb->iocb_flag & LPFC_DRIVER_ABORTED) != 0)
 		return 0;
 
-	/* If we're unloading, interrupts are disabled so we
-	 * need to cleanup the iocb here.
+	/* If we're unloading, don't abort iocb on the ELS ring, but change the
+	 * callback so that nothing happens when it finishes.
 	 */
-	if (phba->fc_flag & FC_UNLOADING)
+	if ((vport->load_flag & FC_UNLOADING) &&
+	    (pring->ringno == LPFC_ELS_RING)) {
+		if (cmdiocb->iocb_flag & LPFC_IO_FABRIC)
+			cmdiocb->fabric_iocb_cmpl = lpfc_ignore_els_cmpl;
+		else
+			cmdiocb->iocb_cmpl = lpfc_ignore_els_cmpl;
 		goto abort_iotag_exit;
+	}
 
 	/* issue ABTS for this IOCB based on iotag */
-	abtsiocbp = lpfc_sli_get_iocbq(phba);
+	abtsiocbp = __lpfc_sli_get_iocbq(phba);
 	if (abtsiocbp == NULL)
 		return 0;
 
@@ -2868,39 +3453,32 @@ lpfc_sli_issue_abort_iotag(struct lpfc_hba * phba,
 	iabt->ulpLe = 1;
 	iabt->ulpClass = icmd->ulpClass;
 
-	if (phba->hba_state >= LPFC_LINK_UP)
+	if (phba->link_state >= LPFC_LINK_UP)
 		iabt->ulpCommand = CMD_ABORT_XRI_CN;
 	else
 		iabt->ulpCommand = CMD_CLOSE_XRI_CN;
 
 	abtsiocbp->iocb_cmpl = lpfc_sli_abort_els_cmpl;
-	retval = lpfc_sli_issue_iocb(phba, pring, abtsiocbp, 0);
 
-abort_iotag_exit:
+	lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI,
+			 "0339 Abort xri x%x, original iotag x%x, "
+			 "abort cmd iotag x%x\n",
+			 iabt->un.acxri.abortContextTag,
+			 iabt->un.acxri.abortIoTag, abtsiocbp->iotag);
+	retval = __lpfc_sli_issue_iocb(phba, pring, abtsiocbp, 0);
 
-	/* If we could not issue an abort dequeue the iocb and handle
-	 * the completion here.
+abort_iotag_exit:
+	/*
+	 * Caller to this routine should check for IOCB_ERROR
+	 * and handle it properly.  This routine no longer removes the
+	 * iocb from the txcmplq or calls its completion on IOCB_ERROR.
 	 */
-	if (retval == IOCB_ERROR) {
-		list_del(&cmdiocb->list);
-		pring->txcmplq_cnt--;
-
-		if (cmdiocb->iocb_cmpl) {
-			icmd->ulpStatus = IOSTAT_LOCAL_REJECT;
-			icmd->un.ulpWord[4] = IOERR_SLI_ABORTED;
-			spin_unlock_irq(phba->host->host_lock);
-			(cmdiocb->iocb_cmpl) (phba, cmdiocb, cmdiocb);
-			spin_lock_irq(phba->host->host_lock);
-		} else
-			lpfc_sli_release_iocbq(phba, cmdiocb);
-	}
-
-	return 1;
+	return retval;
 }
 
 static int
-lpfc_sli_validate_fcp_iocb(struct lpfc_iocbq *iocbq, uint16_t tgt_id,
-			   uint64_t lun_id, uint32_t ctx,
+lpfc_sli_validate_fcp_iocb(struct lpfc_iocbq *iocbq, struct lpfc_vport *vport,
+			   uint16_t tgt_id, uint64_t lun_id,
 			   lpfc_ctx_cmd ctx_cmd)
 {
 	struct lpfc_scsi_buf *lpfc_cmd;
@@ -2910,6 +3488,9 @@ lpfc_sli_validate_fcp_iocb(struct lpfc_iocbq *iocbq, uint16_t tgt_id,
 	if (!(iocbq->iocb_flag &  LPFC_IO_FCP))
 		return rc;
 
+	if (iocbq->vport != vport)
+		return rc;
+
 	lpfc_cmd = container_of(iocbq, struct lpfc_scsi_buf, cur_iocbq);
 	cmnd = lpfc_cmd->pCmd;
 
@@ -2918,16 +3499,14 @@ lpfc_sli_validate_fcp_iocb(struct lpfc_iocbq *iocbq, uint16_t tgt_id,
 
 	switch (ctx_cmd) {
 	case LPFC_CTX_LUN:
-		if ((cmnd->device->id == tgt_id) &&
+		if (cmnd->device &&
+		    (cmnd->device->id == tgt_id) &&
 		    (cmnd->device->lun == lun_id))
 			rc = 0;
 		break;
 	case LPFC_CTX_TGT:
-		if (cmnd->device->id == tgt_id)
-			rc = 0;
-		break;
-	case LPFC_CTX_CTX:
-		if (iocbq->iocb.ulpContext == ctx)
+		if (cmnd->device &&
+			(cmnd->device->id == tgt_id))
 			rc = 0;
 		break;
 	case LPFC_CTX_HOST:
@@ -2943,17 +3522,18 @@ lpfc_sli_validate_fcp_iocb(struct lpfc_iocbq *iocbq, uint16_t tgt_id,
 }
 
 int
-lpfc_sli_sum_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
-		uint16_t tgt_id, uint64_t lun_id, lpfc_ctx_cmd ctx_cmd)
+lpfc_sli_sum_iocb(struct lpfc_vport *vport, uint16_t tgt_id, uint64_t lun_id,
+		  lpfc_ctx_cmd ctx_cmd)
 {
+	struct lpfc_hba *phba = vport->phba;
 	struct lpfc_iocbq *iocbq;
 	int sum, i;
 
 	for (i = 1, sum = 0; i <= phba->sli.last_iotag; i++) {
 		iocbq = phba->sli.iocbq_lookup[i];
 
-		if (lpfc_sli_validate_fcp_iocb (iocbq, tgt_id, lun_id,
-						0, ctx_cmd) == 0)
+		if (lpfc_sli_validate_fcp_iocb (iocbq, vport, tgt_id, lun_id,
+						ctx_cmd) == 0)
 			sum++;
 	}
 
@@ -2961,20 +3541,18 @@ lpfc_sli_sum_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 }
 
 void
-lpfc_sli_abort_fcp_cmpl(struct lpfc_hba * phba, struct lpfc_iocbq * cmdiocb,
-			   struct lpfc_iocbq * rspiocb)
+lpfc_sli_abort_fcp_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+			struct lpfc_iocbq *rspiocb)
 {
-	spin_lock_irq(phba->host->host_lock);
 	lpfc_sli_release_iocbq(phba, cmdiocb);
-	spin_unlock_irq(phba->host->host_lock);
 	return;
 }
 
 int
-lpfc_sli_abort_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
-		    uint16_t tgt_id, uint64_t lun_id, uint32_t ctx,
-		    lpfc_ctx_cmd abort_cmd)
+lpfc_sli_abort_iocb(struct lpfc_vport *vport, struct lpfc_sli_ring *pring,
+		    uint16_t tgt_id, uint64_t lun_id, lpfc_ctx_cmd abort_cmd)
 {
+	struct lpfc_hba *phba = vport->phba;
 	struct lpfc_iocbq *iocbq;
 	struct lpfc_iocbq *abtsiocb;
 	IOCB_t *cmd = NULL;
@@ -2984,8 +3562,8 @@ lpfc_sli_abort_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 	for (i = 1; i <= phba->sli.last_iotag; i++) {
 		iocbq = phba->sli.iocbq_lookup[i];
 
-		if (lpfc_sli_validate_fcp_iocb (iocbq, tgt_id, lun_id,
-						0, abort_cmd) != 0)
+		if (lpfc_sli_validate_fcp_iocb(iocbq, vport, tgt_id, lun_id,
+					       abort_cmd) != 0)
 			continue;
 
 		/* issue ABTS for this IOCB based on iotag */
@@ -3001,8 +3579,9 @@ lpfc_sli_abort_iocb(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
 		abtsiocb->iocb.un.acxri.abortIoTag = cmd->ulpIoTag;
 		abtsiocb->iocb.ulpLe = 1;
 		abtsiocb->iocb.ulpClass = cmd->ulpClass;
+		abtsiocb->vport = phba->pport;
 
-		if (phba->hba_state >= LPFC_LINK_UP)
+		if (lpfc_is_link_up(phba))
 			abtsiocb->iocb.ulpCommand = CMD_ABORT_XRI_CN;
 		else
 			abtsiocb->iocb.ulpCommand = CMD_CLOSE_XRI_CN;
@@ -3028,16 +3607,16 @@ lpfc_sli_wake_iocb_wait(struct lpfc_hba *phba,
 	wait_queue_head_t *pdone_q;
 	unsigned long iflags;
 
-	spin_lock_irqsave(phba->host->host_lock, iflags);
+	spin_lock_irqsave(&phba->hbalock, iflags);
 	cmdiocbq->iocb_flag |= LPFC_IO_WAKE;
 	if (cmdiocbq->context2 && rspiocbq)
 		memcpy(&((struct lpfc_iocbq *)cmdiocbq->context2)->iocb,
 		       &rspiocbq->iocb, sizeof(IOCB_t));
 
 	pdone_q = cmdiocbq->context_un.wait_queue;
-	spin_unlock_irqrestore(phba->host->host_lock, iflags);
 	if (pdone_q)
 		wake_up(pdone_q);
+	spin_unlock_irqrestore(&phba->hbalock, iflags);
 	return;
 }
 
@@ -3047,11 +3626,12 @@ lpfc_sli_wake_iocb_wait(struct lpfc_hba *phba,
  * lpfc_sli_issue_call since the wake routine sets a unique value and by
  * definition this is a wait function.
  */
+
 int
-lpfc_sli_issue_iocb_wait(struct lpfc_hba * phba,
-			 struct lpfc_sli_ring * pring,
-			 struct lpfc_iocbq * piocb,
-			 struct lpfc_iocbq * prspiocbq,
+lpfc_sli_issue_iocb_wait(struct lpfc_hba *phba,
+			 struct lpfc_sli_ring *pring,
+			 struct lpfc_iocbq *piocb,
+			 struct lpfc_iocbq *prspiocbq,
 			 uint32_t timeout)
 {
 	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(done_q);
@@ -3083,33 +3663,29 @@ lpfc_sli_issue_iocb_wait(struct lpfc_hba * phba,
 	retval = lpfc_sli_issue_iocb(phba, pring, piocb, 0);
 	if (retval == IOCB_SUCCESS) {
 		timeout_req = timeout * HZ;
-		spin_unlock_irq(phba->host->host_lock);
 		timeleft = wait_event_timeout(done_q,
 				piocb->iocb_flag & LPFC_IO_WAKE,
 				timeout_req);
-		spin_lock_irq(phba->host->host_lock);
 
 		if (piocb->iocb_flag & LPFC_IO_WAKE) {
 			lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
-					"%d:0331 IOCB wake signaled\n",
-					phba->brd_no);
+					"0331 IOCB wake signaled\n");
 		} else if (timeleft == 0) {
 			lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-					"%d:0338 IOCB wait timeout error - no "
-					"wake response Data x%x\n",
-					phba->brd_no, timeout);
+					"0338 IOCB wait timeout error - no "
+					"wake response Data x%x\n", timeout);
 			retval = IOCB_TIMEDOUT;
 		} else {
 			lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
-					"%d:0330 IOCB wake NOT set, "
-					"Data x%x x%lx\n", phba->brd_no,
+					"0330 IOCB wake NOT set, "
+					"Data x%x x%lx\n",
 					timeout, (timeleft / jiffies));
 			retval = IOCB_TIMEDOUT;
 		}
 	} else {
 		lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
-				"%d:0332 IOCB wait issue failed, Data x%x\n",
-				phba->brd_no, retval);
+				":0332 IOCB wait issue failed, Data x%x\n",
+				retval);
 		retval = IOCB_ERROR;
 	}
 
@@ -3129,16 +3705,16 @@ lpfc_sli_issue_iocb_wait(struct lpfc_hba * phba,
 }
 
 int
-lpfc_sli_issue_mbox_wait(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq,
+lpfc_sli_issue_mbox_wait(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq,
 			 uint32_t timeout)
 {
 	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(done_q);
 	int retval;
+	unsigned long flag;
 
 	/* The caller must leave context1 empty. */
-	if (pmboxq->context1 != 0) {
-		return (MBX_NOT_FINISHED);
-	}
+	if (pmboxq->context1 != 0)
+		return MBX_NOT_FINISHED;
 
 	/* setup wake call as IOCB callback */
 	pmboxq->mbox_cmpl = lpfc_sli_wake_mbox_wait;
@@ -3153,6 +3729,7 @@ lpfc_sli_issue_mbox_wait(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq,
 				pmboxq->mbox_flag & LPFC_MBX_WAKE,
 				timeout * HZ);
 
+		spin_lock_irqsave(&phba->hbalock, flag);
 		pmboxq->context1 = NULL;
 		/*
 		 * if LPFC_MBX_WAKE flag is set the mailbox is completed
@@ -3160,8 +3737,11 @@ lpfc_sli_issue_mbox_wait(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq,
 		 */
 		if (pmboxq->mbox_flag & LPFC_MBX_WAKE)
 			retval = MBX_SUCCESS;
-		else
+		else {
 			retval = MBX_TIMEOUT;
+			pmboxq->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+		}
+		spin_unlock_irqrestore(&phba->hbalock, flag);
 	}
 
 	return retval;
@@ -3170,14 +3750,27 @@ lpfc_sli_issue_mbox_wait(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq,
 int
 lpfc_sli_flush_mbox_queue(struct lpfc_hba * phba)
 {
+	struct lpfc_vport *vport = phba->pport;
 	int i = 0;
+	uint32_t ha_copy;
 
-	while (phba->sli.sli_flag & LPFC_SLI_MBOX_ACTIVE && !phba->stopped) {
+	while (phba->sli.sli_flag & LPFC_SLI_MBOX_ACTIVE && !vport->stopped) {
 		if (i++ > LPFC_MBOX_TMO * 1000)
 			return 1;
 
-		if (lpfc_sli_handle_mb_event(phba) == 0)
-			i = 0;
+		/*
+		 * Call lpfc_sli_handle_mb_event only if a mailbox cmd
+		 * did finish. This way we won't get the misleading
+		 * "Stray Mailbox Interrupt" message.
+		 */
+		spin_lock_irq(&phba->hbalock);
+		ha_copy = phba->work_ha;
+		phba->work_ha &= ~HA_MBATT;
+		spin_unlock_irq(&phba->hbalock);
+
+		if (ha_copy & HA_MBATT)
+			if (lpfc_sli_handle_mb_event(phba) == 0)
+				i = 0;
 
 		msleep(1);
 	}
@@ -3188,13 +3781,19 @@ lpfc_sli_flush_mbox_queue(struct lpfc_hba * phba)
 irqreturn_t
 lpfc_intr_handler(int irq, void *dev_id, struct pt_regs * regs)
 {
-	struct lpfc_hba *phba;
+	struct lpfc_hba  *phba;
 	uint32_t ha_copy;
 	uint32_t work_ha_copy;
 	unsigned long status;
-	int i;
 	uint32_t control;
 
+	MAILBOX_t *mbox, *pmbox;
+	struct lpfc_vport *vport;
+	struct lpfc_nodelist *ndlp;
+	struct lpfc_dmabuf *mp;
+	LPFC_MBOXQ_t *pmb;
+	int rc;
+
 	/*
 	 * Get the driver's phba structure from the dev_id and
 	 * assume the HBA is not interrupting.
@@ -3204,6 +3803,7 @@ lpfc_intr_handler(int irq, void *dev_id, struct pt_regs * regs)
 	if (unlikely(!phba))
 		return IRQ_NONE;
 
+
 	phba->sli.slistat.sli_intr++;
 
 	/*
@@ -3212,7 +3812,7 @@ lpfc_intr_handler(int irq, void *dev_id, struct pt_regs * regs)
 	 */
 
 	/* Ignore all interrupts during initialization. */
-	if (unlikely(phba->hba_state < LPFC_LINK_DOWN))
+	if (unlikely(phba->link_state < LPFC_LINK_DOWN))
 		return IRQ_NONE;
 
 	/*
@@ -3220,16 +3820,16 @@ lpfc_intr_handler(int irq, void *dev_id, struct pt_regs * regs)
 	 * Clear Attention Sources, except Error Attention (to
 	 * preserve status) and Link Attention
 	 */
-	spin_lock(phba->host->host_lock);
+	spin_lock(&phba->hbalock);
 	ha_copy = readl(phba->HAregaddr);
 	/* If somebody is waiting to handle an eratt don't process it
 	 * here.  The brdkill function will do this.
 	 */
-	if (phba->fc_flag & FC_IGNORE_ERATT)
+	if (phba->link_flag & LS_IGNORE_ERATT)
 		ha_copy &= ~HA_ERATT;
 	writel((ha_copy & ~(HA_LATT | HA_ERATT)), phba->HAregaddr);
 	readl(phba->HAregaddr); /* flush */
-	spin_unlock(phba->host->host_lock);
+	spin_unlock(&phba->hbalock);
 
 	if (unlikely(!ha_copy))
 		return IRQ_NONE;
@@ -3243,36 +3843,62 @@ lpfc_intr_handler(int irq, void *dev_id, struct pt_regs * regs)
 				 * Turn off Link Attention interrupts
 				 * until CLEAR_LA done
 				 */
-				spin_lock(phba->host->host_lock);
+				spin_lock(&phba->hbalock);
 				phba->sli.sli_flag &= ~LPFC_PROCESS_LA;
 				control = readl(phba->HCregaddr);
 				control &= ~HC_LAINT_ENA;
 				writel(control, phba->HCregaddr);
 				readl(phba->HCregaddr); /* flush */
-				spin_unlock(phba->host->host_lock);
+				spin_unlock(&phba->hbalock);
 			}
 			else
 				work_ha_copy &= ~HA_LATT;
 		}
 
 		if (work_ha_copy & ~(HA_ERATT|HA_MBATT|HA_LATT)) {
-			for (i = 0; i < phba->sli.num_rings; i++) {
-				if (work_ha_copy & (HA_RXATT << (4*i))) {
-					/*
-					 * Turn off Slow Rings interrupts
-					 */
-					spin_lock(phba->host->host_lock);
-					control = readl(phba->HCregaddr);
-					control &= ~(HC_R0INT_ENA << i);
+			/*
+			 * Turn off Slow Rings interrupts, LPFC_ELS_RING is
+			 * the only slow ring.
+			 */
+			status = (work_ha_copy &
+				(HA_RXMASK  << (4*LPFC_ELS_RING)));
+			status >>= (4*LPFC_ELS_RING);
+			if (status & HA_RXMASK) {
+				spin_lock(&phba->hbalock);
+				control = readl(phba->HCregaddr);
+
+				lpfc_debugfs_slow_ring_trc(phba,
+				"ISR slow ring:   ctl:x%x stat:x%x isrcnt:x%x",
+				control, status,
+				(uint32_t)phba->sli.slistat.sli_intr);
+
+				if (control & (HC_R0INT_ENA << LPFC_ELS_RING)) {
+					lpfc_debugfs_slow_ring_trc(phba,
+						"ISR Disable ring:"
+						"pwork:x%x hawork:x%x wait:x%x",
+						phba->work_ha, work_ha_copy,
+						(uint32_t)((unsigned long)
+						phba->work_wait));
+
+					control &=
+					    ~(HC_R0INT_ENA << LPFC_ELS_RING);
 					writel(control, phba->HCregaddr);
 					readl(phba->HCregaddr); /* flush */
-					spin_unlock(phba->host->host_lock);
 				}
+				else {
+					lpfc_debugfs_slow_ring_trc(phba,
+						"ISR slow ring:   pwork:"
+						"x%x hawork:x%x wait:x%x",
+						phba->work_ha, work_ha_copy,
+						(uint32_t)((unsigned long)
+						phba->work_wait));
+				}
+				spin_unlock(&phba->hbalock);
 			}
 		}
 
 		if (work_ha_copy & HA_ERATT) {
-			phba->hba_state = LPFC_HBA_ERROR;
+			phba->link_state = LPFC_HBA_ERROR;
 			/*
 			 * There was a link/board error.  Read the
 			 * status register to retrieve the error event
@@ -3287,14 +3913,102 @@ lpfc_intr_handler(int irq, void *dev_id, struct pt_regs * regs)
 			/* Clear Chip error bit */
 			writel(HA_ERATT, phba->HAregaddr);
 			readl(phba->HAregaddr); /* flush */
-			phba->stopped = 1;
+			phba->pport->stopped = 1;
+		}
+
+		if ((work_ha_copy & HA_MBATT) &&
+		    (phba->sli.mbox_active)) {
+			pmb = phba->sli.mbox_active;
+			pmbox = &pmb->mb;
+			mbox = &phba->slim2p->mbx;
+			vport = pmb->vport;
+
+			/* First check out the status word */
+			lpfc_sli_pcimem_bcopy(mbox, pmbox, sizeof(uint32_t));
+			if (pmbox->mbxOwner != OWN_HOST) {
+				/*
+				 * Stray Mailbox Interrupt, mbxCommand <cmd>
+				 * mbxStatus <status>
+				 */
+				lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX |
+						LOG_SLI,
+						"(%d):0304 Stray Mailbox "
+						"Interrupt mbxCommand x%x "
+						"mbxStatus x%x\n",
+						(vport ? vport->vpi : 0),
+						pmbox->mbxCommand,
+						pmbox->mbxStatus);
+			}
+			phba->last_completion_time = jiffies;
+			del_timer_sync(&phba->sli.mbox_tmo);
+
+			phba->sli.mbox_active = NULL;
+			if (pmb->mbox_cmpl) {
+				lpfc_sli_pcimem_bcopy(mbox, pmbox,
+						      MAILBOX_CMD_SIZE);
+			}
+			if (pmb->mbox_flag & LPFC_MBX_IMED_UNREG) {
+				pmb->mbox_flag &= ~LPFC_MBX_IMED_UNREG;
+
+				lpfc_debugfs_disc_trc(vport,
+					LPFC_DISC_TRC_MBOX_VPORT,
+					"MBOX dflt rpi: : status:x%x rpi:x%x",
+					(uint32_t)pmbox->mbxStatus,
+					pmbox->un.varWords[0], 0);
+
+				if ( !pmbox->mbxStatus) {
+					mp = (struct lpfc_dmabuf *)
+						(pmb->context1);
+					ndlp = (struct lpfc_nodelist *)
+						pmb->context2;
+
+					/* Reg_LOGIN of dflt RPI was successful.
+					 * Now let's get rid of the RPI using the
+					 * same mbox buffer.
+					 */
+					lpfc_unreg_login(phba, vport->vpi,
+						pmbox->un.varWords[0], pmb);
+					pmb->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi;
+					pmb->context1 = mp;
+					pmb->context2 = ndlp;
+					pmb->vport = vport;
+					spin_lock(&phba->hbalock);
+					phba->sli.sli_flag &=
+						~LPFC_SLI_MBOX_ACTIVE;
+					spin_unlock(&phba->hbalock);
+					goto send_current_mbox;
+				}
+			}
+			spin_lock(&phba->pport->work_port_lock);
+			phba->pport->work_port_events &= ~WORKER_MBOX_TMO;
+			spin_unlock(&phba->pport->work_port_lock);
+			lpfc_mbox_cmpl_put(phba, pmb);
+		}
+		if ((work_ha_copy & HA_MBATT) &&
+		    (phba->sli.mbox_active == NULL)) {
+send_next_mbox:
+			spin_lock(&phba->hbalock);
+			phba->sli.sli_flag &= ~LPFC_SLI_MBOX_ACTIVE;
+			pmb = lpfc_mbox_get(phba);
+			spin_unlock(&phba->hbalock);
+send_current_mbox:
+			/* Process next mailbox command if there is one */
+			if (pmb != NULL) {
+				rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
+				if (rc == MBX_NOT_FINISHED) {
+					pmb->mb.mbxStatus = MBX_NOT_FINISHED;
+					lpfc_mbox_cmpl_put(phba, pmb);
+					goto send_next_mbox;
+				}
+			}
+
 		}
 
-		spin_lock(phba->host->host_lock);
+		spin_lock(&phba->hbalock);
 		phba->work_ha |= work_ha_copy;
 		if (phba->work_wait)
-			wake_up(phba->work_wait);
-		spin_unlock(phba->host->host_lock);
+			lpfc_worker_wake_up(phba);
+		spin_unlock(&phba->hbalock);
 	}
 
 	ha_copy &= ~(phba->work_ha_mask);
@@ -3306,7 +4020,7 @@ lpfc_intr_handler(int irq, void *dev_id, struct pt_regs * regs)
 	 */
 	status = (ha_copy & (HA_RXMASK  << (4*LPFC_FCP_RING)));
 	status >>= (4*LPFC_FCP_RING);
-	if (status & HA_RXATT)
+	if (status & HA_RXMASK)
 		lpfc_sli_handle_fast_ring_event(phba,
 						&phba->sli.ring[LPFC_FCP_RING],
 						status);
@@ -3319,7 +4033,7 @@ lpfc_intr_handler(int irq, void *dev_id, struct pt_regs * regs)
 		 */
 		status = (ha_copy & (HA_RXMASK  << (4*LPFC_EXTRA_RING)));
 		status >>= (4*LPFC_EXTRA_RING);
-		if (status & HA_RXATT) {
+		if (status & HA_RXMASK) {
 			lpfc_sli_handle_fast_ring_event(phba,
 					&phba->sli.ring[LPFC_EXTRA_RING],
 					status);
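
Throughout the interrupt handler, the Host Attention word is carved into one 4-bit status field per ring, which is what the HA_RXMASK << (4*ring) shifts above compute. A small standalone sketch of that extraction; the 0xf mask value and the sample attention word are assumptions for illustration only:

/* Sketch of per-ring status extraction from the Host Attention word. */
#include <stdio.h>
#include <stdint.h>

#define HA_RXMASK	0xf		/* assumption: 4 status bits per ring */

static uint32_t ring_status(uint32_t ha_copy, int ringno)
{
	/* Each ring owns a 4-bit field; shift it down and mask it off. */
	return (ha_copy & (HA_RXMASK << (4 * ringno))) >> (4 * ringno);
}

int main(void)
{
	uint32_t ha_copy = 0x0420;	/* made-up attention word */

	for (int ring = 0; ring < 4; ring++)
		printf("ring %d status 0x%x\n", ring, ring_status(ha_copy, ring));
	return 0;
}
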
diff --git a/drivers/scsi/lpfc/lpfc_sli.h b/drivers/scsi/lpfc/lpfc_sli.h
index 41c38d3..5fcfe88 100644
--- a/drivers/scsi/lpfc/lpfc_sli.h
+++ b/drivers/scsi/lpfc/lpfc_sli.h
@@ -20,12 +20,12 @@
 
 /* forward declaration for LPFC_IOCB_t's use */
 struct lpfc_hba;
+struct lpfc_vport;
 
 /* Define the context types that SLI handles for abort and sums. */
 typedef enum _lpfc_ctx_cmd {
 	LPFC_CTX_LUN,
 	LPFC_CTX_TGT,
-	LPFC_CTX_CTX,
 	LPFC_CTX_HOST
 } lpfc_ctx_cmd;
 
@@ -43,19 +43,24 @@ struct lpfc_iocbq {
 #define LPFC_IO_WAKE		2	/* High Priority Queue signal flag */
 #define LPFC_IO_FCP		4	/* FCP command -- iocbq in scsi_buf */
 #define LPFC_DRIVER_ABORTED	8	/* driver aborted this request */
+#define LPFC_IO_FABRIC		0x10	/* Iocb send using fabric scheduler */
 
 	uint8_t abort_count;
 	uint8_t rsvd2;
 	uint32_t drvrTimeout;	/* driver timeout in seconds */
+	struct lpfc_vport *vport;/* virtual port pointer */
 	void *context1;		/* caller context information */
 	void *context2;		/* caller context information */
 	void *context3;		/* caller context information */
 	union {
-		wait_queue_head_t  *wait_queue;
-		struct lpfc_iocbq  *rsp_iocb;
-		struct lpfcMboxq   *mbox;
+		wait_queue_head_t    *wait_queue;
+		struct lpfc_iocbq    *rsp_iocb;
+		struct lpfcMboxq     *mbox;
+		struct lpfc_nodelist *ndlp;
 	} context_un;
 
+	void (*fabric_iocb_cmpl) (struct lpfc_hba *, struct lpfc_iocbq *,
+			   struct lpfc_iocbq *);
 	void (*iocb_cmpl) (struct lpfc_hba *, struct lpfc_iocbq *,
 			   struct lpfc_iocbq *);
 
@@ -68,12 +73,14 @@ struct lpfc_iocbq {
 #define IOCB_ERROR          2
 #define IOCB_TIMEDOUT       3
 
-#define LPFC_MBX_WAKE	1
+#define LPFC_MBX_WAKE		1
+#define LPFC_MBX_IMED_UNREG	2
 
 typedef struct lpfcMboxq {
 	/* MBOXQs are used in single linked lists */
 	struct list_head list;	/* ptr to next mailbox command */
 	MAILBOX_t mb;		/* Mailbox cmd */
+	struct lpfc_vport *vport;/* virtual port pointer */
 	void *context1;		/* caller context information */
 	void *context2;		/* caller context information */
 
@@ -85,8 +92,6 @@ typedef struct lpfcMboxq {
 #define MBX_POLL        1	/* poll mailbox till command done, then
 				   return */
 #define MBX_NOWAIT      2	/* issue command then return immediately */
-#define MBX_STOP_IOCB   4	/* Stop iocb processing till mbox cmds
-				   complete */
 
 #define LPFC_MAX_RING_MASK  4	/* max num of rctl/type masks allowed per
 				   ring */
@@ -122,9 +127,7 @@ struct lpfc_sli_ring {
 	uint16_t flag;		/* ring flags */
 #define LPFC_DEFERRED_RING_EVENT 0x001	/* Deferred processing a ring event */
 #define LPFC_CALL_RING_AVAILABLE 0x002	/* indicates cmd was full */
-#define LPFC_STOP_IOCB_MBX       0x010	/* Stop processing IOCB cmds mbox */
 #define LPFC_STOP_IOCB_EVENT     0x020	/* Stop processing IOCB cmds event */
-#define LPFC_STOP_IOCB_MASK      0x030	/* Stop processing IOCB cmds mask */
 	uint16_t abtsiotag;	/* tracks next iotag to use for ABTS */
 
 	uint32_t local_getidx;   /* last available cmd index (from cmdGetInx) */
@@ -135,6 +138,8 @@ struct lpfc_sli_ring {
 	uint8_t ringno;		/* ring number */
 	uint16_t numCiocb;	/* number of command iocb's per ring */
 	uint16_t numRiocb;	/* number of rsp iocb's per ring */
+	uint16_t sizeCiocb;	/* Size of command iocb's in this ring */
+	uint16_t sizeRiocb;	/* Size of response iocb's in this ring */
 
 	uint32_t fast_iotag;	/* max fastlookup based iotag           */
 	uint32_t iotag_ctr;	/* keeps track of the next iotag to use */
@@ -157,6 +162,8 @@ struct lpfc_sli_ring {
 
 	struct lpfc_sli_ring_mask prt[LPFC_MAX_RING_MASK];
 	uint32_t num_mask;	/* number of mask entries in prt array */
+	void (*lpfc_sli_rcv_async_status) (struct lpfc_hba *,
+		struct lpfc_sli_ring *, struct lpfc_iocbq *);
 
 	struct lpfc_sli_ring_stat stats;	/* SLI statistical info */
 
@@ -165,6 +172,31 @@ struct lpfc_sli_ring {
 					struct lpfc_sli_ring *);
 };
 
+/* Structure used for configuring rings to a specific profile or rctl / type */
+struct lpfc_hbq_init {
+	uint32_t rn;		/* Receive buffer notification */
+	uint32_t entry_count;	/* max # of entries in HBQ */
+	uint32_t headerLen;	/* 0 if not profile 4 or 5 */
+	uint32_t logEntry;	/* Set to 1 if this HBQ used for LogEntry */
+	uint32_t profile;	/* Selection profile 0=all, 7=logentry */
+	uint32_t ring_mask;	/* Binds HBQ to a ring e.g. Ring0=b0001,
+				 * ring2=b0100 */
+	uint32_t hbq_index;	/* index of this hbq in ring .HBQs[] */
+
+	uint32_t seqlenoff;
+	uint32_t maxlen;
+	uint32_t seqlenbcnt;
+	uint32_t cmdcodeoff;
+	uint32_t cmdmatch[8];
+	uint32_t mask_count;	/* number of mask entries in hbqMasks array */
+	struct hbq_mask hbqMasks[6];
+
+	/* Non-config rings fields to keep track of buffer allocations */
+	uint32_t buffer_count;	/* number of buffers allocated */
+	uint32_t init_count;	/* number to allocate when initialized */
+	uint32_t add_count;	/* number to allocate when starved */
+};
+
 /* Structure used to hold SLI statistical counters and info */
 struct lpfc_sli_stat {
 	uint64_t mbox_stat_err;  /* Mbox cmds completed status error */
@@ -197,6 +229,7 @@ struct lpfc_sli {
 #define LPFC_SLI_MBOX_ACTIVE      0x100	/* HBA mailbox is currently active */
 #define LPFC_SLI2_ACTIVE          0x200	/* SLI2 overlay in firmware is active */
 #define LPFC_PROCESS_LA           0x400	/* Able to process link attention */
+#define LPFC_BLOCK_MGMT_IO        0x800	/* Don't allow mgmt mbx or iocb cmds */
 
 	struct lpfc_sli_ring ring[LPFC_MAX_RING];
 	int fcp_ring;		/* ring used for FCP initiator commands */
@@ -209,6 +242,7 @@ struct lpfc_sli {
 	uint16_t mboxq_cnt;	/* current length of queue */
 	uint16_t mboxq_max;	/* max length */
 	LPFC_MBOXQ_t *mbox_active;	/* active mboxq information */
+	struct list_head mboxq_cmpl;
 
 	struct timer_list mbox_tmo;	/* Hold clk to timeout active mbox
 					   cmd */
@@ -221,12 +255,6 @@ struct lpfc_sli {
 	struct lpfc_lnk_stat lnk_stat_offsets;
 };
 
-/* Given a pointer to the start of the ring, and the slot number of
- * the desired iocb entry, calc a pointer to that entry.
- * (assume iocb entry size is 32 bytes, or 8 words)
- */
-#define IOCB_ENTRY(ring,slot) ((IOCB_t *)(((char *)(ring)) + ((slot) * 32)))
-
 #define LPFC_MBOX_TMO           30	/* Sec tmo for outstanding mbox
 					   command */
 #define LPFC_MBOX_TMO_FLASH_CMD 300     /* Sec tmo for outstanding FLASH write
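
The removed IOCB_ENTRY() macro assumed a fixed 32-byte IOCB slot; with the new
per-ring sizeCiocb/sizeRiocb fields the slot address has to be derived from the
ring's own entry size. A minimal sketch of that idea follows (illustration
only, assuming the lpfc_sli.h declarations above; example_cmd_iocb is not a
driver function):

	/* Hypothetical replacement for the removed IOCB_ENTRY(ring, slot):
	 * compute a command-IOCB slot address from the per-ring command
	 * IOCB size instead of a hard-coded 32 bytes.
	 */
	static inline IOCB_t *
	example_cmd_iocb(struct lpfc_sli_ring *pring, void *cmd_ring,
			 uint32_t slot)
	{
		return (IOCB_t *)((char *)cmd_ring + slot * pring->sizeCiocb);
	}
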
diff --git a/drivers/scsi/lpfc/lpfc_version.h b/drivers/scsi/lpfc/lpfc_version.h
index 5d07ecd..2dac2dd 100644
--- a/drivers/scsi/lpfc/lpfc_version.h
+++ b/drivers/scsi/lpfc/lpfc_version.h
@@ -18,7 +18,7 @@
  * included with this package.                                     *
  *******************************************************************/
 
-#define LPFC_DRIVER_VERSION "8.1.10.9"
+#define LPFC_DRIVER_VERSION "8.2.0.8"
 
 #define LPFC_DRIVER_NAME "lpfc"
 
diff --git a/drivers/scsi/lpfc/lpfc_vport.c b/drivers/scsi/lpfc/lpfc_vport.c
new file mode 100644
index 0000000..b32c5a6
--- /dev/null
+++ b/drivers/scsi/lpfc/lpfc_vport.c
@@ -0,0 +1,503 @@
+/*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for         *
+ * Fibre Channel Host Bus Adapters.                                *
+ * Copyright (C) 2004-2006 Emulex.  All rights reserved.           *
+ * EMULEX and SLI are trademarks of Emulex.                        *
+ * www.emulex.com                                                  *
+ * Portions Copyright (C) 2004-2005 Christoph Hellwig              *
+ *                                                                 *
+ * This program is free software; you can redistribute it and/or   *
+ * modify it under the terms of version 2 of the GNU General       *
+ * Public License as published by the Free Software Foundation.    *
+ * This program is distributed in the hope that it will be useful. *
+ * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+ * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+ * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+ * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+ * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+ * more details, a copy of which can be found in the file COPYING  *
+ * included with this package.                                     *
+ *******************************************************************/
+
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/idr.h>
+#include <linux/interrupt.h>
+#include <linux/kthread.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport_fc.h>
+#include "lpfc_hw.h"
+#include "lpfc_sli.h"
+#include "lpfc_disc.h"
+#include "lpfc_scsi.h"
+#include "lpfc.h"
+#include "lpfc_logmsg.h"
+#include "lpfc_crtn.h"
+#include "lpfc_version.h"
+#include "lpfc_vport.h"
+#include "lpfc_auth_access.h"
+
+inline void lpfc_vport_set_state(struct lpfc_vport *vport,
+				 enum fc_vport_state new_state)
+{
+
+	/* for all the error states we will set the internal state to FAILED */
+	switch (new_state) {
+	case FC_VPORT_NO_FABRIC_SUPP:
+	case FC_VPORT_NO_FABRIC_RSCS:
+	case FC_VPORT_FABRIC_LOGOUT:
+	case FC_VPORT_FABRIC_REJ_WWN:
+	case FC_VPORT_FAILED:
+		vport->port_state = LPFC_VPORT_FAILED;
+		break;
+	case FC_VPORT_LINKDOWN:
+		vport->port_state = LPFC_VPORT_UNKNOWN;
+		break;
+	default:
+		/* do nothing */
+		break;
+	}
+}
+
+static int
+lpfc_alloc_vpi(struct lpfc_hba *phba)
+{
+	int  vpi;
+
+	spin_lock_irq(&phba->hbalock);
+	/* Start at bit 1 because vpi zero is reserved for the physical port */
+	vpi = find_next_zero_bit(phba->vpi_bmask, (phba->max_vpi + 1), 1);
+	if (vpi > phba->max_vpi)
+		vpi = 0;
+	else
+		set_bit(vpi, phba->vpi_bmask);
+	spin_unlock_irq(&phba->hbalock);
+	return vpi;
+}
+
+static void
+lpfc_free_vpi(struct lpfc_hba *phba, int vpi)
+{
+	spin_lock_irq(&phba->hbalock);
+	clear_bit(vpi, phba->vpi_bmask);
+	spin_unlock_irq(&phba->hbalock);
+}
+
+static int
+lpfc_vport_sparm(struct lpfc_hba *phba, struct lpfc_vport *vport)
+{
+	LPFC_MBOXQ_t *pmb;
+	MAILBOX_t *mb;
+	struct lpfc_dmabuf *mp;
+	int  rc;
+
+	pmb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+	if (!pmb) {
+		return -ENOMEM;
+	}
+	mb = &pmb->mb;
+
+	lpfc_read_sparam(phba, pmb, vport->vpi);
+	/*
+	 * Grab buffer pointer and clear context1 so we can use
+	 * lpfc_sli_issue_mbox_wait
+	 */
+	mp = (struct lpfc_dmabuf *) pmb->context1;
+	pmb->context1 = NULL;
+
+	pmb->vport = vport;
+	rc = lpfc_sli_issue_mbox_wait(phba, pmb, phba->fc_ratov * 2);
+	if (rc != MBX_SUCCESS) {
+		if (signal_pending(current)) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT | LOG_VPORT,
+					 "1830 Signal aborted mbxCmd x%x\n",
+					 mb->mbxCommand);
+			lpfc_mbuf_free(phba, mp->virt, mp->phys);
+			kfree(mp);
+			if (rc != MBX_TIMEOUT)
+				mempool_free(pmb, phba->mbox_mem_pool);
+			return -EINTR;
+		} else {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT | LOG_VPORT,
+					 "1818 VPort failed init, mbxCmd x%x "
+					 "READ_SPARM mbxStatus x%x, rc = x%x\n",
+					 mb->mbxCommand, mb->mbxStatus, rc);
+			lpfc_mbuf_free(phba, mp->virt, mp->phys);
+			kfree(mp);
+			if (rc != MBX_TIMEOUT)
+				mempool_free(pmb, phba->mbox_mem_pool);
+			return -EIO;
+		}
+	}
+
+	memcpy(&vport->fc_sparam, mp->virt, sizeof (struct serv_parm));
+	memcpy(&vport->fc_nodename, &vport->fc_sparam.nodeName,
+	       sizeof (struct lpfc_name));
+	memcpy(&vport->fc_portname, &vport->fc_sparam.portName,
+	       sizeof (struct lpfc_name));
+
+	lpfc_mbuf_free(phba, mp->virt, mp->phys);
+	kfree(mp);
+	mempool_free(pmb, phba->mbox_mem_pool);
+
+	return 0;
+}
+
+static int
+lpfc_valid_wwn_format(struct lpfc_hba *phba, struct lpfc_name *wwn,
+		      const char *name_type)
+{
+	/* ensure that IEEE format 1 addresses
+	 * contain zeros in bits 59-48
+	 */
+	if (!((wwn->u.wwn[0] >> 4) == 1 &&
+	      ((wwn->u.wwn[0] & 0xf) != 0 || (wwn->u.wwn[1] & 0xf) != 0)))
+		return 1;
+
+	lpfc_printf_log(phba, KERN_ERR, LOG_VPORT,
+			"1822 Invalid %s: %02x:%02x:%02x:%02x:"
+			"%02x:%02x:%02x:%02x\n",
+			name_type,
+			wwn->u.wwn[0], wwn->u.wwn[1],
+			wwn->u.wwn[2], wwn->u.wwn[3],
+			wwn->u.wwn[4], wwn->u.wwn[5],
+			wwn->u.wwn[6], wwn->u.wwn[7]);
+	return 0;
+}
+
+static int
+lpfc_unique_wwpn(struct lpfc_hba *phba, struct lpfc_vport *new_vport)
+{
+	struct lpfc_vport *vport;
+	unsigned long flags;
+
+	spin_lock_irqsave(&phba->hbalock, flags);
+	list_for_each_entry(vport, &phba->port_list, listentry) {
+		if (vport == new_vport)
+			continue;
+		/* If they match, return not unique */
+		if (memcmp(&vport->fc_sparam.portName,
+			   &new_vport->fc_sparam.portName,
+			   sizeof(struct lpfc_name)) == 0) {
+			spin_unlock_irqrestore(&phba->hbalock, flags);
+			return 0;
+		}
+	}
+	spin_unlock_irqrestore(&phba->hbalock, flags);
+	return 1;
+}
+
+int
+lpfc_vport_create(struct Scsi_Host *shost, const uint8_t *wwnn,
+		  const uint8_t *wwpn)
+{
+	struct lpfc_nodelist *ndlp;
+	static uint8_t null_name[8] = { 0, 0, 0, 0, 0, 0, 0, 0, };
+	struct lpfc_vport *pport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = pport->phba;
+	struct lpfc_vport *vport = NULL;
+	int instance;
+	int vpi;
+	int rc = VPORT_ERROR;
+	int status;
+
+	if ((phba->sli_rev < 3) ||
+		!(phba->sli3_options & LPFC_SLI3_NPIV_ENABLED)) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_VPORT,
+				"1808 Create VPORT failed: "
+				"NPIV is not enabled: SLImode:%d\n",
+				phba->sli_rev);
+		rc = VPORT_INVAL;
+		goto error_out;
+	}
+
+	vpi = lpfc_alloc_vpi(phba);
+	if (vpi == 0) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_VPORT,
+				"1809 Create VPORT failed: "
+				"Max VPORTs (%d) exceeded\n",
+				phba->max_vpi);
+		rc = VPORT_NORESOURCES;
+		goto error_out;
+	}
+
+
+	/* Assign an unused board number */
+	if ((instance = lpfc_get_instance()) < 0) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_VPORT,
+				"1810 Create VPORT failed: Cannot get "
+				"instance number\n");
+		lpfc_free_vpi(phba, vpi);
+		rc = VPORT_NORESOURCES;
+		goto error_out;
+	}
+
+	vport = lpfc_create_port(phba, instance, &shost->shost_gendev);
+	if (!vport) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_VPORT,
+				"1811 Create VPORT failed: vpi x%x\n", vpi);
+		lpfc_free_vpi(phba, vpi);
+		rc = VPORT_NORESOURCES;
+		goto error_out;
+	}
+
+	vport->vpi = vpi;
+	lpfc_debugfs_initialize(vport);
+
+	if ((status = lpfc_vport_sparm(phba, vport))) {
+		if (status == -EINTR) {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
+					 "1831 Create VPORT Interrupted.\n");
+			rc = VPORT_ERROR;
+		} else {
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
+					 "1813 Create VPORT failed. "
+					 "Cannot get sparam\n");
+			rc = VPORT_NORESOURCES;
+		}
+		lpfc_free_vpi(phba, vpi);
+		destroy_port(vport);
+		goto error_out;
+	}
+
+	memcpy(vport->fc_portname.u.wwn, vport->fc_sparam.portName.u.wwn, 8);
+	memcpy(vport->fc_nodename.u.wwn, vport->fc_sparam.nodeName.u.wwn, 8);
+
+	if (wwnn && memcmp(wwnn, null_name, 8))
+		memcpy(vport->fc_nodename.u.wwn, wwnn, 8);
+	if (wwpn && memcmp(wwpn, null_name, 8))
+		memcpy(vport->fc_portname.u.wwn, wwpn, 8);
+
+	memcpy(&vport->fc_sparam.portName, vport->fc_portname.u.wwn, 8);
+	memcpy(&vport->fc_sparam.nodeName, vport->fc_nodename.u.wwn, 8);
+
+	if (!lpfc_valid_wwn_format(phba, &vport->fc_sparam.nodeName, "WWNN") ||
+	    !lpfc_valid_wwn_format(phba, &vport->fc_sparam.portName, "WWPN")) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
+				 "1821 Create VPORT failed. "
+				 "Invalid WWN format\n");
+		lpfc_free_vpi(phba, vpi);
+		destroy_port(vport);
+		rc = VPORT_INVAL;
+		goto error_out;
+	}
+
+	if (!lpfc_unique_wwpn(phba, vport)) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
+				 "1823 Create VPORT failed. "
+				 "Duplicate WWN on HBA\n");
+		lpfc_free_vpi(phba, vpi);
+		destroy_port(vport);
+		rc = VPORT_INVAL;
+		goto error_out;
+	}
+
+	shost = lpfc_shost_from_vport(vport);
+
+	if ((lpfc_get_security_enabled)(shost)) {
+		spin_lock_irq(&fc_security_user_lock);
+
+		list_add_tail(&vport->sc_users, &fc_security_user_list);
+
+		spin_unlock_irq(&fc_security_user_lock);
+
+		if (fc_service_state == FC_SC_SERVICESTATE_ONLINE) {
+			lpfc_fc_queue_security_work(vport,
+				&vport->sc_online_work);
+		}
+	}
+
+
+	vport->port_type = LPFC_NPIV_PORT;
+	if ((phba->link_state < LPFC_LINK_UP) ||
+	    (phba->fc_topology == TOPOLOGY_LOOP)) {
+		lpfc_vport_set_state(vport, FC_VPORT_LINKDOWN);
+		rc = VPORT_OK;
+		goto out;
+	}
+
+
+	/* Use the Physical nodes Fabric NDLP to determine if the link is
+	 * up and ready to FDISC.
+	 */
+	ndlp = lpfc_findnode_did(phba->pport, Fabric_DID);
+	if (ndlp && ndlp->nlp_state == NLP_STE_UNMAPPED_NODE) {
+		if (phba->link_flag & LS_NPIV_FAB_SUPPORTED) {
+			lpfc_set_disctmo(vport);
+			lpfc_initial_fdisc(vport);
+		} else {
+			lpfc_vport_set_state(vport, FC_VPORT_NO_FABRIC_SUPP);
+			lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS,
+					 "0262 No NPIV Fabric support\n");
+		}
+	} else {
+		lpfc_vport_set_state(vport, FC_VPORT_FAILED);
+	}
+	rc = VPORT_OK;
+
+out:
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
+			"1825 Vport Created.\n");
+	lpfc_host_attrib_init(lpfc_shost_from_vport(vport));
+error_out:
+	return rc;
+}
+
+
+int
+lpfc_vport_delete(struct Scsi_Host *shost)
+{
+	struct lpfc_nodelist *ndlp = NULL;
+	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+	struct lpfc_hba   *phba = vport->phba;
+	long timeout;
+
+	if (vport->port_type == LPFC_PHYSICAL_PORT) {
+		lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
+				 "1812 vport_delete failed: Cannot delete "
+				 "physical host\n");
+		return VPORT_ERROR;
+	}
+	/*
+	 * If we are not unloading the driver then prevent the vport_delete
+	 * from happening until after this vport's discovery is finished.
+	 */
+	if (!(phba->pport->load_flag & FC_UNLOADING)) {
+		int check_count = 0;
+		while (check_count < ((phba->fc_ratov * 3) + 3) &&
+		       vport->port_state > LPFC_VPORT_FAILED &&
+		       vport->port_state < LPFC_VPORT_READY) {
+			check_count++;
+			msleep(1000);
+		}
+		if (vport->port_state > LPFC_VPORT_FAILED &&
+		    vport->port_state < LPFC_VPORT_READY)
+			return -EAGAIN;
+	}
+	/*
+	 * This is a bit of a mess.  We want to ensure the shost doesn't get
+	 * torn down until we're done with the embedded lpfc_vport structure.
+	 *
+	 * Beyond holding a reference for this function, we also need a
+	 * reference for outstanding I/O requests we schedule during delete
+	 * processing.  But once we scsi_remove_host() we can no longer obtain
+	 * a reference through scsi_host_get().
+	 *
+	 * So we take two references here.  We release one reference at the
+	 * bottom of the function -- after delinking the vport.  And we
+	 * release the other at the completion of the unreg_vpi that gets
+	 * initiated after we've disposed of all other resources associated
+	 * with the port.
+	 */
+	if (!scsi_host_get(shost) || !scsi_host_get(shost))
+		return VPORT_INVAL;
+	spin_lock_irq(&phba->hbalock);
+	vport->load_flag |= FC_UNLOADING;
+	spin_unlock_irq(&phba->hbalock);
+	kfree(vport->vname);
+	lpfc_debugfs_terminate(vport);
+	fc_remove_host(lpfc_shost_from_vport(vport));
+	scsi_remove_host(lpfc_shost_from_vport(vport));
+
+	ndlp = lpfc_findnode_did(phba->pport, Fabric_DID);
+	if (ndlp && ndlp->nlp_state == NLP_STE_UNMAPPED_NODE &&
+	    phba->link_state >= LPFC_LINK_UP) {
+		if (vport->cfg_enable_da_id) {
+			timeout = msecs_to_jiffies(phba->fc_ratov * 2000);
+			if (!lpfc_ns_cmd(vport, SLI_CTNS_DA_ID, 0, 0))
+				while (vport->ct_flags && timeout)
+					timeout = schedule_timeout(timeout);
+			else
+				lpfc_printf_log(vport->phba, KERN_WARNING,
+						LOG_VPORT,
+						"1829 CT command failed to "
+						"delete objects on fabric. \n");
+		}
+		/* First look for the Fabric ndlp */
+		ndlp = lpfc_findnode_did(vport, Fabric_DID);
+		if (!ndlp) {
+			/* Cannot find existing Fabric ndlp, allocate one */
+			ndlp = mempool_alloc(phba->nlp_mem_pool, GFP_KERNEL);
+			if (!ndlp)
+				goto skip_logo;
+			lpfc_nlp_init(vport, ndlp, Fabric_DID);
+		} else {
+			lpfc_dequeue_node(vport, ndlp);
+		}
+		vport->unreg_vpi_cmpl = VPORT_INVAL;
+		timeout = msecs_to_jiffies(phba->fc_ratov * 2000);
+		if (!lpfc_issue_els_npiv_logo(vport, ndlp))
+			while (vport->unreg_vpi_cmpl == VPORT_INVAL && timeout)
+				timeout = schedule_timeout(timeout);
+	}
+
+skip_logo:
+	lpfc_cleanup(vport);
+	lpfc_sli_host_down(vport);
+
+	lpfc_stop_vport_timers(vport);
+	lpfc_unreg_all_rpis(vport);
+
+	if (!(phba->pport->load_flag & FC_UNLOADING)) {
+		lpfc_unreg_default_rpis(vport);
+		/*
+		 * Completion of unreg_vpi (lpfc_mbx_cmpl_unreg_vpi)
+		 * does the scsi_host_put() to release the vport.
+		 */
+		lpfc_mbx_unreg_vpi(vport);
+	}
+
+	lpfc_free_vpi(phba, vport->vpi);
+	vport->work_port_events = 0;
+	spin_lock_irq(&phba->hbalock);
+	list_del_init(&vport->listentry);
+	spin_unlock_irq(&phba->hbalock);
+	lpfc_printf_vlog(vport, KERN_ERR, LOG_VPORT,
+			 "1828 Vport Deleted.\n");
+	scsi_host_put(shost);
+	return VPORT_OK;
+}
+
+EXPORT_SYMBOL(lpfc_vport_create);
+EXPORT_SYMBOL(lpfc_vport_delete);
+
+struct lpfc_vport **
+lpfc_create_vport_work_array(struct lpfc_hba *phba)
+{
+	struct lpfc_vport *port_iterator;
+	struct lpfc_vport **vports;
+	int index = 0;
+	vports = kzalloc(LPFC_MAX_VPORTS * sizeof(struct lpfc_vport *),
+			 GFP_KERNEL);
+	if (vports == NULL)
+		return NULL;
+	spin_lock_irq(&phba->hbalock);
+	list_for_each_entry(port_iterator, &phba->port_list, listentry) {
+		if (!scsi_host_get(lpfc_shost_from_vport(port_iterator))) {
+			lpfc_printf_vlog(port_iterator, KERN_WARNING, LOG_VPORT,
+					 "1801 Create vport work array FAILED: "
+					 "cannot do scsi_host_get\n");
+			continue;
+		}
+		vports[index++] = port_iterator;
+	}
+	spin_unlock_irq(&phba->hbalock);
+	return vports;
+}
+
+void
+lpfc_destroy_vport_work_array(struct lpfc_vport **vports)
+{
+	int i;
+	if (vports == NULL)
+		return;
+	for (i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++)
+		scsi_host_put(lpfc_shost_from_vport(vports[i]));
+	kfree(vports);
+}
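
The two work-array helpers above exist so callers can walk phba->port_list
without holding hbalock across per-vport work: the snapshot holds a Scsi_Host
reference for each entry until the array is destroyed. A minimal sketch of the
intended call pattern, assuming the lpfc headers in this patch
(example_walk_vports is illustrative only, not part of the driver):

	static void example_walk_vports(struct lpfc_hba *phba)
	{
		struct lpfc_vport **vports;
		int i;

		/* Snapshot the port list; each entry holds a shost reference. */
		vports = lpfc_create_vport_work_array(phba);
		if (vports == NULL)
			return;

		/* Per-vport work may sleep here; hbalock is not held. */
		for (i = 0; i < LPFC_MAX_VPORTS && vports[i] != NULL; i++) {
			if (vports[i]->port_state == LPFC_VPORT_READY)
				continue;
			/* handle ports still in discovery or failed */
		}

		/* Drop the shost references taken by the snapshot. */
		lpfc_destroy_vport_work_array(vports);
	}
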
diff --git a/drivers/scsi/lpfc/lpfc_vport.h b/drivers/scsi/lpfc/lpfc_vport.h
new file mode 100644
index 0000000..c689ed8
--- /dev/null
+++ b/drivers/scsi/lpfc/lpfc_vport.h
@@ -0,0 +1,115 @@
+/*******************************************************************
+ * This file is part of the Emulex Linux Device Driver for         *
+ * Fibre Channel Host Bus Adapters.                                *
+ * Copyright (C) 2004-2006 Emulex.  All rights reserved.           *
+ * EMULEX and SLI are trademarks of Emulex.                        *
+ * www.emulex.com                                                  *
+ * Portions Copyright (C) 2004-2005 Christoph Hellwig              *
+ *                                                                 *
+ * This program is free software; you can redistribute it and/or   *
+ * modify it under the terms of version 2 of the GNU General       *
+ * Public License as published by the Free Software Foundation.    *
+ * This program is distributed in the hope that it will be useful. *
+ * ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND          *
+ * WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,  *
+ * FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT, ARE      *
+ * DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD *
+ * TO BE LEGALLY INVALID.  See the GNU General Public License for  *
+ * more details, a copy of which can be found in the file COPYING  *
+ * included with this package.                                     *
+ *******************************************************************/
+
+#ifndef _H_LPFC_VPORT
+#define _H_LPFC_VPORT
+
+/* API version values (each will be an individual bit) */
+#define VPORT_API_VERSION_1	0x01
+
+/* Values returned via lpfc_vport_getinfo() */
+struct vport_info {
+
+	uint32_t api_versions;
+	uint8_t linktype;
+#define  VPORT_TYPE_PHYSICAL	0
+#define  VPORT_TYPE_VIRTUAL	1
+
+	uint8_t state;
+#define  VPORT_STATE_OFFLINE	0
+#define  VPORT_STATE_ACTIVE	1
+#define  VPORT_STATE_FAILED	2
+
+	uint8_t fail_reason;
+	uint8_t prev_fail_reason;
+#define  VPORT_FAIL_UNKNOWN	0
+#define  VPORT_FAIL_LINKDOWN	1
+#define  VPORT_FAIL_FAB_UNSUPPORTED	2
+#define  VPORT_FAIL_FAB_NORESOURCES	3
+#define  VPORT_FAIL_FAB_LOGOUT	4
+#define  VPORT_FAIL_ADAP_NORESOURCES	5
+
+	uint8_t node_name[8];	/* WWNN */
+	uint8_t port_name[8];	/* WWPN */
+
+	struct Scsi_Host *shost;
+
+/* Following values are valid only on physical links */
+	uint32_t vports_max;
+	uint32_t vports_inuse;
+	uint32_t rpi_max;
+	uint32_t rpi_inuse;
+#define  VPORT_CNT_INVALID	0xFFFFFFFF
+};
+
+/* data used in link creation */
+struct vport_data {
+	uint32_t api_version;
+
+	uint32_t options;
+#define  VPORT_OPT_AUTORETRY	0x01
+
+	uint8_t node_name[8];	/* WWNN */
+	uint8_t port_name[8];	/* WWPN */
+
+/*
+ *  Upon successful creation, vport_shost will point to the new Scsi_Host
+ *  structure for the new virtual link.
+ */
+	struct Scsi_Host *vport_shost;
+};
+
+/* API function return codes */
+#define VPORT_OK	0
+#define VPORT_ERROR	-1
+#define VPORT_INVAL	-2
+#define VPORT_NOMEM	-3
+#define VPORT_NORESOURCES	-4
+
+int lpfc_vport_create(struct Scsi_Host *, const uint8_t *, const uint8_t *);
+int lpfc_vport_delete(struct Scsi_Host *);
+int lpfc_vport_getinfo(struct Scsi_Host *, struct vport_info *);
+int lpfc_vport_tgt_remove(struct Scsi_Host *, uint, uint);
+struct lpfc_vport **lpfc_create_vport_work_array(struct lpfc_hba *);
+void lpfc_destroy_vport_work_array(struct lpfc_vport **);
+
+/*
+ *  queuecommand VPORT-specific return codes, specified in the SCSI host byte.
+ *  Returned when the virtual link has failed or is not active.
+ */
+#define  DID_VPORT_ERROR	0x0f
+
+#define VPORT_INFO	0x1
+#define VPORT_CREATE	0x2
+#define VPORT_DELETE	0x4
+
+struct vport_cmd_tag {
+	uint32_t cmd;
+	struct vport_data cdata;
+	struct vport_info cinfo;
+	void *vport;
+	int vport_num;
+};
+
+void lpfc_vport_set_state(struct lpfc_vport *vport,
+			  enum fc_vport_state new_state);
+
+#endif /* _H_LPFC_VPORT */
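
The VPORT_* codes defined above are what lpfc_vport_create() and
lpfc_vport_delete() return to management callers. A hypothetical caller sketch,
assuming only the declarations in lpfc_vport.h and standard errno values
(example_create_vport and its errno mapping are illustrative only, not part of
this patch):

	/* Create a vport and translate the VPORT_* result into an
	 * errno-style value for a management interface.
	 */
	static int example_create_vport(struct Scsi_Host *shost,
					const uint8_t *wwnn,
					const uint8_t *wwpn)
	{
		switch (lpfc_vport_create(shost, wwnn, wwpn)) {
		case VPORT_OK:
			return 0;
		case VPORT_NORESOURCES:
			return -ENOMEM;	/* out of VPIs or instance numbers */
		case VPORT_INVAL:
			return -EINVAL;	/* NPIV off, bad WWN, or duplicate WWPN */
		default:
			return -EIO;	/* VPORT_ERROR and anything unexpected */
		}
	}
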