Kernel
Fix CVE-2025-40242.
Andreas Gruenbacher (2):
gfs2: Add proper lockspace locking
gfs2: Fix unlikely race in gdlm_put_lock
fs/gfs2/file.c | 23 ++++++++++++-------
fs/gfs2/glock.c | 5 ++---
fs/gfs2/incore.h | 2 ++
fs/gfs2/lock_dlm.c | 56 ++++++++++++++++++++++++++++++++++------------
4 files changed, 61 insertions(+), 25 deletions(-)
--
2.39.2
From: Eric Sandeen <sandeen(a)redhat.com>
stable inclusion
from stable-v6.6.103
commit d3cc7476b89fb45b7e00874f4f56f6b928467c60
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/8667
CVE: CVE-2025-39835
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=…
--------------------------------
commit ae668cd567a6a7622bc813ee0bb61c42bed61ba7 upstream.
ENODATA (aka ENOATTR) has a very specific meaning in the xfs xattr code;
namely, that the requested attribute name could not be found.
However, a medium error from disk may also return ENODATA. At best,
this medium error may escape to userspace as "attribute not found"
when in fact it's an IO (disk) error.
At worst, we may oops in xfs_attr_leaf_get() when we do:
        error = xfs_attr_leaf_hasname(args, &bp);
        if (error == -ENOATTR) {
                xfs_trans_brelse(args->trans, bp);
                return error;
        }
because an ENODATA/ENOATTR error from disk leaves us with a null bp,
and the xfs_trans_brelse will then null-deref it.
As discussed on the list, we really need to modify the lower level
IO functions to trap all disk errors and ensure that we don't let
unique errors like this leak up into higher xfs functions - many
like this should be remapped to EIO.
However, this patch directly addresses a reported bug in the xattr
code, and should be safe to backport to stable kernels. A larger-scope
patch to handle more unique errors at lower levels can follow later.
(Note, prior to 07120f1abdff we did not oops, but we did return the
wrong error code to userspace.)
Signed-off-by: Eric Sandeen <sandeen(a)redhat.com>
Fixes: 07120f1abdff ("xfs: Add xfs_has_attr and subroutines")
Cc: stable(a)vger.kernel.org # v5.9+
Reviewed-by: Darrick J. Wong <djwong(a)kernel.org>
Signed-off-by: Carlos Maiolino <cem(a)kernel.org>
[ Adjust context: removed metadata health tracking calls ]
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Long Li <leo.lilong(a)huawei.com>
---
fs/xfs/libxfs/xfs_attr_remote.c | 7 +++++++
fs/xfs/libxfs/xfs_da_btree.c | 6 ++++++
2 files changed, 13 insertions(+)
diff --git a/fs/xfs/libxfs/xfs_attr_remote.c b/fs/xfs/libxfs/xfs_attr_remote.c
index 54de405cbab5..4d369876487b 100644
--- a/fs/xfs/libxfs/xfs_attr_remote.c
+++ b/fs/xfs/libxfs/xfs_attr_remote.c
@@ -418,6 +418,13 @@ xfs_attr_rmtval_get(
dblkcnt = XFS_FSB_TO_BB(mp, map[i].br_blockcount);
error = xfs_buf_read(mp->m_ddev_targp, dblkno, dblkcnt,
0, &bp, &xfs_attr3_rmt_buf_ops);
+ /*
+ * ENODATA from disk implies a disk medium failure;
+ * ENODATA for xattrs means attribute not found, so
+ * disambiguate that here.
+ */
+ if (error == -ENODATA)
+ error = -EIO;
if (error)
return error;
diff --git a/fs/xfs/libxfs/xfs_da_btree.c b/fs/xfs/libxfs/xfs_da_btree.c
index 6b5abbcb61c6..6daf13898f33 100644
--- a/fs/xfs/libxfs/xfs_da_btree.c
+++ b/fs/xfs/libxfs/xfs_da_btree.c
@@ -2676,6 +2676,12 @@ xfs_da_read_buf(
error = xfs_trans_read_buf_map(mp, tp, mp->m_ddev_targp, mapp, nmap, 0,
&bp, ops);
+ /*
+ * ENODATA from disk implies a disk medium failure; ENODATA for
+ * xattrs means attribute not found, so disambiguate that here.
+ */
+ if (error == -ENODATA && whichfork == XFS_ATTR_FORK)
+ error = -EIO;
if (error)
goto out_free;
--
2.39.2
From: Hao Dongdong <doubled(a)leap-io-kernel.com>
LeapIO inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8340
------------------------------------------
The LeapRAID driver provides support for LeapRAID PCIe RAID controllers,
enabling communication between the host operating system, firmware, and
hardware for efficient storage management.
The main source files are organized as follows:
leapraid_os.c:
Implements the scsi_host_template functions, PCIe device probing, and
initialization routines, integrating the driver with the Linux SCSI
subsystem.
leapraid_func.c:
Provides the core functional routines that handle low-level interactions
with the controller firmware and hardware, including interrupt handling,
topology management, reset sequence processing, and other related
operations.
leapraid_app.c:
Implements the ioctl interface, providing user-space tools access to device
management and diagnostic operations.
leapraid_transport.c:
Interacts with the Linux SCSI transport layer to add SAS phys and ports.
leapraid_func.h:
Declares common data structures, constants, and function prototypes shared
across the driver.
leapraid.h:
Provides global constants, register mappings, and interface definitions
that facilitate communication between the driver and the controller
firmware.
The leapraid_probe function is called when the driver detects a supported
LeapRAID PCIe device. It allocates and initializes the Scsi_Host structure,
configures hardware and firmware interfaces, and registers the host adapter
with the Linux SCSI mid-layer.
After registration, the driver invokes scsi_scan_host() to initiate device
discovery. The firmware then reports discovered logical and physical
devices to the host through interrupt-driven events and synchronizes their
operational states.
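As a rough sketch only (leapraid_host_template and leapraid_init_hw are
placeholder names, not taken from the driver source), the probe/scan
sequence described above maps onto the standard PCI and SCSI mid-layer
calls roughly as follows:

#include <linux/pci.h>
#include <scsi/scsi_host.h>

static int leapraid_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        struct Scsi_Host *shost;
        struct leapraid_adapter *adapter;
        int rc;

        rc = pci_enable_device(pdev);
        if (rc)
                return rc;
        pci_set_master(pdev);

        /* allocate the Scsi_Host with the adapter state as host private data */
        shost = scsi_host_alloc(&leapraid_host_template, sizeof(*adapter));
        if (!shost) {
                rc = -ENOMEM;
                goto out_disable;
        }
        adapter = shost_priv(shost);
        adapter->shost = shost;
        adapter->pdev = pdev;
        pci_set_drvdata(pdev, adapter);

        /* map registers, set up queues/interrupts, bring the firmware up */
        rc = leapraid_init_hw(adapter);         /* placeholder for the real init path */
        if (rc)
                goto out_put;

        /* register with the SCSI mid-layer, then start device discovery */
        rc = scsi_add_host(shost, &pdev->dev);
        if (rc)
                goto out_put;
        scsi_scan_host(shost);
        return 0;

out_put:
        scsi_host_put(shost);
out_disable:
        pci_disable_device(pdev);
        return rc;
}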
leapraid_adapter is the core data structure that encapsulates all resources
and runtime state information maintained during driver operation, described
as follows:
/**
* struct leapraid_adapter - Main LeapRaid adapter structure
* @list: List head for adapter management
* @shost: SCSI host structure
* @pdev: PCI device structure
* @iomem_base: I/O memory mapped base address
* @rep_msg_host_idx: Host index for reply messages
* @mask_int: Interrupt masking flag
* @timestamp_sync_cnt: Timestamp synchronization counter
* @adapter_attr: Adapter attributes
* @mem_desc: Memory descriptor
* @driver_cmds: Driver commands
* @dynamic_task_desc: Dynamic task descriptor
* @fw_evt_s: Firmware event structure
* @notification_desc: Notification descriptor
* @reset_desc: Reset descriptor
* @scan_dev_desc: Device scan descriptor
* @access_ctrl: Access control
* @fw_log_desc: Firmware log descriptor
* @dev_topo: Device topology
* @boot_devs: Boot devices
* @smart_poll_desc: SMART polling descriptor
*/
struct leapraid_adapter {
struct list_head list;
struct Scsi_Host *shost;
struct pci_dev *pdev;
struct leapraid_reg_base __iomem *iomem_base;
u32 rep_msg_host_idx;
bool mask_int;
u32 timestamp_sync_cnt;
struct leapraid_adapter_attr adapter_attr;
struct leapraid_mem_desc mem_desc;
struct leapraid_driver_cmds driver_cmds;
struct leapraid_dynamic_task_desc dynamic_task_desc;
struct leapraid_fw_evt_struct fw_evt_s;
struct leapraid_notification_desc notification_desc;
struct leapraid_reset_desc reset_desc;
struct leapraid_scan_dev_desc scan_dev_desc;
struct leapraid_access_ctrl access_ctrl;
struct leapraid_fw_log_desc fw_log_desc;
struct leapraid_dev_topo dev_topo;
struct leapraid_boot_devs boot_devs;
struct leapraid_smart_poll_desc smart_poll_desc;
};
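For illustration only (leapraid_slave_configure_sketch is a hypothetical
callback, not the driver's actual implementation), SCSI mid-layer callbacks
would typically recover this per-adapter state from the Scsi_Host private
data, assuming the adapter is embedded there as in the probe sketch above:

static int leapraid_slave_configure_sketch(struct scsi_device *sdev)
{
        struct leapraid_adapter *adapter = shost_priv(sdev->host);

        /* per-device setup would consult adapter->dev_topo, queue depths, etc. */
        dev_info(&adapter->pdev->dev, "configuring target %u, lun %llu\n",
                 sdev->id, (unsigned long long)sdev->lun);
        return 0;
}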
Signed-off-by: Hao Dongdong <doubled(a)leap-io-kernel.com>
---
arch/arm64/configs/openeuler_defconfig | 1 +
arch/x86/configs/openeuler_defconfig | 1 +
drivers/scsi/Kconfig | 1 +
drivers/scsi/Makefile | 1 +
drivers/scsi/leapraid/Kconfig | 14 +
drivers/scsi/leapraid/Makefile | 10 +
drivers/scsi/leapraid/leapraid.h | 2070 +++++
drivers/scsi/leapraid/leapraid_app.c | 675 ++
drivers/scsi/leapraid/leapraid_func.c | 8264 ++++++++++++++++++++
drivers/scsi/leapraid/leapraid_func.h | 1423 ++++
drivers/scsi/leapraid/leapraid_os.c | 2271 ++++++
drivers/scsi/leapraid/leapraid_transport.c | 1256 +++
12 files changed, 15987 insertions(+)
create mode 100644 drivers/scsi/leapraid/Kconfig
create mode 100644 drivers/scsi/leapraid/Makefile
create mode 100644 drivers/scsi/leapraid/leapraid.h
create mode 100644 drivers/scsi/leapraid/leapraid_app.c
create mode 100644 drivers/scsi/leapraid/leapraid_func.c
create mode 100644 drivers/scsi/leapraid/leapraid_func.h
create mode 100644 drivers/scsi/leapraid/leapraid_os.c
create mode 100644 drivers/scsi/leapraid/leapraid_transport.c
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index d190cc0cb030..12a48e4f54c3 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -2465,6 +2465,7 @@ CONFIG_SCSI_HISI_SAS_DEBUGFS_DEFAULT_ENABLE=y
# CONFIG_MEGARAID_LEGACY is not set
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_3SNIC_SSSRAID=m
+CONFIG_SCSI_LEAPRAID=m
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index fdd8d59bad01..2ef8a9d6dcbb 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -2393,6 +2393,7 @@ CONFIG_SCSI_AACRAID=m
# CONFIG_MEGARAID_LEGACY is not set
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_3SNIC_SSSRAID=m
+CONFIG_SCSI_LEAPRAID=m
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index edec9aa0993e..528a62318a48 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -432,6 +432,7 @@ source "drivers/scsi/aic7xxx/Kconfig.aic79xx"
source "drivers/scsi/aic94xx/Kconfig"
source "drivers/scsi/hisi_sas/Kconfig"
source "drivers/scsi/mvsas/Kconfig"
+source "drivers/scsi/leapraid/Kconfig"
config SCSI_MVUMI
tristate "Marvell UMI driver"
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index b27758db0c02..04864ff0db84 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -156,6 +156,7 @@ obj-$(CONFIG_CHR_DEV_SCH) += ch.o
obj-$(CONFIG_SCSI_ENCLOSURE) += ses.o
obj-$(CONFIG_SCSI_HISI_SAS) += hisi_sas/
+obj-$(CONFIG_SCSI_LEAPRAID) += leapraid/
# This goes last, so that "real" scsi devices probe earlier
obj-$(CONFIG_SCSI_DEBUG) += scsi_debug.o
diff --git a/drivers/scsi/leapraid/Kconfig b/drivers/scsi/leapraid/Kconfig
new file mode 100644
index 000000000000..b539183b24a7
--- /dev/null
+++ b/drivers/scsi/leapraid/Kconfig
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+config SCSI_LEAPRAID
+ tristate "LeapIO RAID Adapter"
+ depends on PCI && SCSI
+ select SCSI_SAS_ATTRS
+ help
+ This driver supports LeapIO PCIe-based Storage
+ and RAID controllers.
+
+ <http://www.leap-io.com>
+
+ To compile this driver as a module, choose M here: the
+ resulting kernel module will be named leapraid.
diff --git a/drivers/scsi/leapraid/Makefile b/drivers/scsi/leapraid/Makefile
new file mode 100644
index 000000000000..bdafc036cd00
--- /dev/null
+++ b/drivers/scsi/leapraid/Makefile
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the LEAPRAID drivers.
+#
+
+obj-$(CONFIG_SCSI_LEAPRAID) += leapraid.o
+leapraid-objs += leapraid_func.o \
+ leapraid_os.o \
+ leapraid_transport.o \
+ leapraid_app.o
diff --git a/drivers/scsi/leapraid/leapraid.h b/drivers/scsi/leapraid/leapraid.h
new file mode 100644
index 000000000000..842810d41542
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid.h
@@ -0,0 +1,2070 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+#ifndef LEAPRAID_H
+#define LEAPRAID_H
+
+/* doorbell register definitions */
+#define LEAPRAID_DB_RESET 0x00000000
+#define LEAPRAID_DB_READY 0x10000000
+#define LEAPRAID_DB_OPERATIONAL 0x20000000
+#define LEAPRAID_DB_FAULT 0x40000000
+
+#define LEAPRAID_DB_MASK 0xF0000000
+
+#define LEAPRAID_DB_OVER_TEMPERATURE 0x2810
+
+#define LEAPRAID_DB_USED 0x08000000
+#define LEAPRAID_DB_DATA_MASK 0x0000FFFF
+#define LEAPRAID_DB_FUNC_SHIFT 24
+#define LEAPRAID_DB_ADD_DWORDS_SHIFT 16
+
+/* maximum number of retries waiting for doorbell to become ready */
+#define LEAPRAID_DB_RETRY_COUNT_MAX 10
+/* maximum number of retries waiting for doorbell to become operational */
+#define LEAPRAID_DB_WAIT_OPERATIONAL 10
+/* sleep interval (in seconds) between doorbell polls */
+#define LEAPRAID_DB_POLL_INTERVAL_S 1
+
+/* maximum number of retries waiting for host to end recovery */
+#define LEAPRAID_WAIT_SHOST_RECOVERY 30
+
+/* diagnostic register definitions */
+#define LEAPRAID_DIAG_WRITE_ENABLE 0x00000080
+#define LEAPRAID_DIAG_RESET 0x00000004
+#define LEAPRAID_DIAG_HOLD_ADAPTER_RESET 0x00000002
+
+/* interrupt status register definitions */
+#define LEAPRAID_HOST2ADAPTER_DB_STATUS 0x80000000
+#define LEAPRAID_ADAPTER2HOST_DB_STATUS 0x00000001
+
+/* number of debug log registers */
+#define LEAPRAID_DEBUGLOG_SZ_MAX 16
+
+/* reply post host register defines */
+#define REP_POST_HOST_IDX_REG_CNT 16
+#define LEAPRAID_RPHI_MSIX_IDX_SHIFT 24
+
+/* vphy flags */
+#define LEAPRAID_SAS_PHYINFO_VPHY 0x00001000
+
+/* who_init value identifying the Linux host driver as the initiator */
+#define LEAPRAID_WHOINIT_LINUX_DRIVER 0x04
+
+/* rdpq array mode */
+#define LEAPRAID_ADAPTER_INIT_MSGFLG_RDPQ_ARRAY_MODE 0x01
+
+/* request description flags */
+#define LEAPRAID_REQ_DESC_FLG_SCSI_IO 0x00
+#define LEAPRAID_REQ_DESC_FLG_HPR 0x06
+#define LEAPRAID_REQ_DESC_FLG_DFLT_TYPE 0x08
+
+/* reply description flags */
+#define LEAPRAID_RPY_DESC_FLG_TYPE_MASK 0x0F
+#define LEAPRAID_RPY_DESC_FLG_SCSI_IO_SUCCESS 0x00
+#define LEAPRAID_RPY_DESC_FLG_ADDRESS_REPLY 0x01
+#define LEAPRAID_RPY_DESC_FLG_FP_SCSI_IO_SUCCESS 0x06
+#define LEAPRAID_RPY_DESC_FLG_UNUSED 0x0F
+
+/* MPI functions */
+#define LEAPRAID_FUNC_SCSIIO_REQ 0x00
+#define LEAPRAID_FUNC_SCSI_TMF 0x01
+#define LEAPRAID_FUNC_ADAPTER_INIT 0x02
+#define LEAPRAID_FUNC_GET_ADAPTER_FEATURES 0x03
+#define LEAPRAID_FUNC_CONFIG_OP 0x04
+#define LEAPRAID_FUNC_SCAN_DEV 0x06
+#define LEAPRAID_FUNC_EVENT_NOTIFY 0x07
+#define LEAPRAID_FUNC_FW_DOWNLOAD 0x09
+#define LEAPRAID_FUNC_FW_UPLOAD 0x12
+#define LEAPRAID_FUNC_RAID_ACTION 0x15
+#define LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH 0x16
+#define LEAPRAID_FUNC_SCSI_ENC_PROCESSOR 0x18
+#define LEAPRAID_FUNC_SMP_PASSTHROUGH 0x1A
+#define LEAPRAID_FUNC_SAS_IO_UNIT_CTRL 0x1B
+#define LEAPRAID_FUNC_SATA_PASSTHROUGH 0x1C
+#define LEAPRAID_FUNC_ADAPTER_UNIT_RESET 0x40
+#define LEAPRAID_FUNC_HANDSHAKE 0x42
+#define LEAPRAID_FUNC_LOGBUF_INIT 0x57
+
+/* adapter status values */
+#define LEAPRAID_ADAPTER_STATUS_MASK 0x7FFF
+#define LEAPRAID_ADAPTER_STATUS_SUCCESS 0x0000
+#define LEAPRAID_ADAPTER_STATUS_BUSY 0x0002
+#define LEAPRAID_ADAPTER_STATUS_INTERNAL_ERROR 0x0004
+#define LEAPRAID_ADAPTER_STATUS_INSUFFICIENT_RESOURCES 0x0006
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_ACTION 0x0020
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_TYPE 0x0021
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_PAGE 0x0022
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_DATA 0x0023
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_NO_DEFAULTS 0x0024
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_CANT_COMMIT 0x0025
+#define LEAPRAID_ADAPTER_STATUS_SCSI_RECOVERED_ERROR 0x0040
+#define LEAPRAID_ADAPTER_STATUS_SCSI_DEVICE_NOT_THERE 0x0043
+#define LEAPRAID_ADAPTER_STATUS_SCSI_DATA_OVERRUN 0x0044
+#define LEAPRAID_ADAPTER_STATUS_SCSI_DATA_UNDERRUN 0x0045
+#define LEAPRAID_ADAPTER_STATUS_SCSI_IO_DATA_ERROR 0x0046
+#define LEAPRAID_ADAPTER_STATUS_SCSI_PROTOCOL_ERROR 0x0047
+#define LEAPRAID_ADAPTER_STATUS_SCSI_TASK_TERMINATED 0x0048
+#define LEAPRAID_ADAPTER_STATUS_SCSI_RESIDUAL_MISMATCH 0x0049
+#define LEAPRAID_ADAPTER_STATUS_SCSI_TASK_MGMT_FAILED 0x004A
+#define LEAPRAID_ADAPTER_STATUS_SCSI_ADAPTER_TERMINATED 0x004B
+#define LEAPRAID_ADAPTER_STATUS_SCSI_EXT_TERMINATED 0x004C
+
+/* sge flags */
+#define LEAPRAID_SGE_FLG_LAST_ONE 0x80
+#define LEAPRAID_SGE_FLG_EOB 0x40
+#define LEAPRAID_SGE_FLG_EOL 0x01
+#define LEAPRAID_SGE_FLG_SHIFT 24
+#define LEAPRAID_SGE_FLG_SIMPLE_ONE 0x10
+#define LEAPRAID_SGE_FLG_SYSTEM_ADDR 0x00
+#define LEAPRAID_SGE_FLG_H2C 0x04
+#define LEAPRAID_SGE_FLG_32 0x00
+#define LEAPRAID_SGE_FLG_64 0x02
+
+#define LEAPRAID_IEEE_SGE_FLG_EOL 0x40
+#define LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE 0x00
+#define LEAPRAID_IEEE_SGE_FLG_CHAIN_ONE 0x80
+#define LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR 0x00
+
+#define LEAPRAID_SGE_OFFSET_SIZE 4
+
+/* page and ext page type */
+#define LEAPRAID_CFG_PT_IO_UNIT 0x00
+#define LEAPRAID_CFG_PT_ADAPTER 0x01
+#define LEAPRAID_CFG_PT_BIOS 0x02
+#define LEAPRAID_CFG_PT_RAID_VOLUME 0x08
+#define LEAPRAID_CFG_PT_RAID_PHYSDISK 0x0A
+#define LEAPRAID_CFG_PT_EXTENDED 0x0F
+#define LEAPRAID_CFG_EXTPT_SAS_IO_UNIT 0x10
+#define LEAPRAID_CFG_EXTPT_SAS_EXP 0x11
+#define LEAPRAID_CFG_EXTPT_SAS_DEV 0x12
+#define LEAPRAID_CFG_EXTPT_SAS_PHY 0x13
+#define LEAPRAID_CFG_EXTPT_ENC 0x15
+#define LEAPRAID_CFG_EXTPT_RAID_CONFIG 0x16
+
+/* config page address */
+#define LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP 0x00000000
+#define LEAPRAID_SAS_ENC_CFG_PGAD_HDL 0x10000000
+#define LEAPRAID_SAS_DEV_CFG_PGAD_HDL 0x20000000
+#define LEAPRAID_SAS_EXP_CFG_PGAD_HDL_PHY_NUM 0x10000000
+#define LEAPRAID_SAS_EXP_CFD_PGAD_HDL 0x20000000
+#define LEAPRAID_SAS_EXP_CFG_PGAD_PHYNUM_SHIFT 16
+#define LEAPRAID_RAID_VOL_CFG_PGAD_HDL 0x10000000
+#define LEAPRAID_SAS_PHY_CFG_PGAD_PHY_NUMBER 0x00000000
+#define LEAPRAID_PHYSDISK_CFG_PGAD_PHYSDISKNUM 0x10000000
+
+/* config page operations */
+#define LEAPRAID_CFG_ACT_PAGE_HEADER 0x00
+#define LEAPRAID_CFG_ACT_PAGE_READ_CUR 0x01
+#define LEAPRAID_CFG_ACT_PAGE_WRITE_CUR 0x02
+
+/* bios pages */
+#define LEAPRAID_CFG_PAGE_NUM_BIOS2 0x2
+#define LEAPRAID_CFG_PAGE_NUM_BIOS3 0x3
+
+/* sas device pages */
+#define LEAPRAID_CFG_PAGE_NUM_DEV0 0x0
+
+/* sas device page 0 flags */
+#define LEAPRAID_SAS_DEV_P0_FLG_FP_CAP 0x2000
+#define LEAPRAID_SAS_DEV_P0_FLG_SATA_SMART 0x0040
+#define LEAPRAID_SAS_DEV_P0_FLG_ENC_LEVEL_VALID 0x0002
+#define LEAPRAID_SAS_DEV_P0_FLG_DEV_PRESENT 0x0001
+
+/* sas IO unit pages */
+#define LEAPRAID_CFG_PAGE_NUM_IOUNIT0 0x0
+#define LEAPRAID_CFG_PAGE_NUM_IOUNIT1 0x1
+
+/* sas expander pages */
+#define LEAPRAID_CFG_PAGE_NUM_EXP0 0x0
+#define LEAPRAID_CFG_PAGE_NUM_EXP1 0x1
+
+/* sas enclosure page */
+#define LEAPRAID_CFG_PAGE_NUM_ENC0 0x0
+
+/* sas phy page */
+#define LEAPRAID_CFG_PAGE_NUM_PHY0 0x0
+
+/* raid volume pages */
+#define LEAPRAID_CFG_PAGE_NUM_VOL0 0x0
+#define LEAPRAID_CFG_PAGE_NUM_VOL1 0x1
+
+/* physical disk page */
+#define LEAPRAID_CFG_PAGE_NUM_PD0 0x0
+
+/* adapter page */
+#define LEAPRAID_CFG_PAGE_NUM_ADAPTER1 0x1
+
+#define LEAPRAID_CFG_UNIT_SIZE 4
+
+/* raid volume type and state */
+#define LEAPRAID_VOL_STATE_MISSING 0x00
+#define LEAPRAID_VOL_STATE_FAILED 0x01
+#define LEAPRAID_VOL_STATE_INITIALIZING 0x02
+#define LEAPRAID_VOL_STATE_ONLINE 0x03
+#define LEAPRAID_VOL_STATE_DEGRADED 0x04
+#define LEAPRAID_VOL_STATE_OPTIMAL 0x05
+#define LEAPRAID_VOL_TYPE_RAID0 0x00
+#define LEAPRAID_VOL_TYPE_RAID1E 0x01
+#define LEAPRAID_VOL_TYPE_RAID1 0x02
+#define LEAPRAID_VOL_TYPE_RAID10 0x05
+#define LEAPRAID_VOL_TYPE_UNKNOWN 0xFF
+
+/* raid volume element flags */
+#define LEAPRAID_RAIDCFG_P0_EFLG_MASK_ELEMENT_TYPE 0x000F
+#define LEAPRAID_RAIDCFG_P0_EFLG_VOL_PHYS_DISK_ELEMENT 0x0001
+#define LEAPRAID_RAIDCFG_P0_EFLG_HOT_SPARE_ELEMENT 0x0002
+#define LEAPRAID_RAIDCFG_P0_EFLG_OCE_ELEMENT 0x0003
+
+/* raid action */
+#define LEAPRAID_RAID_ACT_SYSTEM_SHUTDOWN_INITIATED 0x20
+#define LEAPRAID_RAID_ACT_PHYSDISK_HIDDEN 0x24
+
+/* sas negotiated link rates */
+#define LEAPRAID_SAS_NEG_LINK_RATE_MASK_PHYSICAL 0x0F
+#define LEAPRAID_SAS_NEG_LINK_RATE_UNKNOWN_LINK_RATE 0x00
+#define LEAPRAID_SAS_NEG_LINK_RATE_PHY_DISABLED 0x01
+#define LEAPRAID_SAS_NEG_LINK_RATE_NEGOTIATION_FAILED 0x02
+#define LEAPRAID_SAS_NEG_LINK_RATE_SATA_OOB_COMPLETE 0x03
+#define LEAPRAID_SAS_NEG_LINK_RATE_PORT_SELECTOR 0x04
+#define LEAPRAID_SAS_NEG_LINK_RATE_SMP_RESETTING 0x05
+
+#define LEAPRAID_SAS_NEG_LINK_RATE_1_5 0x08
+#define LEAPRAID_SAS_NEG_LINK_RATE_3_0 0x09
+#define LEAPRAID_SAS_NEG_LINK_RATE_6_0 0x0A
+#define LEAPRAID_SAS_NEG_LINK_RATE_12_0 0x0B
+
+#define LEAPRAID_SAS_PRATE_MIN_RATE_MASK 0x0F
+#define LEAPRAID_SAS_HWRATE_MIN_RATE_MASK 0x0F
+
+/* scsi IO control bits */
+#define LEAPRAID_SCSIIO_CTRL_ADDCDBLEN_SHIFT 26
+#define LEAPRAID_SCSIIO_CTRL_NODATATRANSFER 0x00000000
+#define LEAPRAID_SCSIIO_CTRL_WRITE 0x01000000
+#define LEAPRAID_SCSIIO_CTRL_READ 0x02000000
+#define LEAPRAID_SCSIIO_CTRL_BIDIRECTIONAL 0x03000000
+#define LEAPRAID_SCSIIO_CTRL_SIMPLEQ 0x00000000
+#define LEAPRAID_SCSIIO_CTRL_ORDEREDQ 0x00000200
+#define LEAPRAID_SCSIIO_CTRL_CMDPRI 0x00000800
+
+/* scsi state and status */
+#define LEAPRAID_SCSI_STATUS_BUSY 0x08
+#define LEAPRAID_SCSI_STATUS_RESERVATION_CONFLICT 0x18
+#define LEAPRAID_SCSI_STATUS_TASK_SET_FULL 0x28
+
+#define LEAPRAID_SCSI_STATE_RESPONSE_INFO_VALID 0x10
+#define LEAPRAID_SCSI_STATE_TERMINATED 0x08
+#define LEAPRAID_SCSI_STATE_NO_SCSI_STATUS 0x04
+#define LEAPRAID_SCSI_STATE_AUTOSENSE_FAILED 0x02
+#define LEAPRAID_SCSI_STATE_AUTOSENSE_VALID 0x01
+
+/* scsi task management defines */
+#define LEAPRAID_TM_TASKTYPE_ABORT_TASK 0x01
+#define LEAPRAID_TM_TASKTYPE_ABRT_TASK_SET 0x02
+#define LEAPRAID_TM_TASKTYPE_TARGET_RESET 0x03
+#define LEAPRAID_TM_TASKTYPE_LOGICAL_UNIT_RESET 0x05
+#define LEAPRAID_TM_TASKTYPE_CLEAR_TASK_SET 0x06
+#define LEAPRAID_TM_TASKTYPE_QUERY_TASK 0x07
+#define LEAPRAID_TM_TASKTYPE_CLEAR_ACA 0x08
+#define LEAPRAID_TM_TASKTYPE_QUERY_TASK_SET 0x09
+#define LEAPRAID_TM_TASKTYPE_QUERY_ASYNC_EVENT 0x0A
+
+#define LEAPRAID_TM_MSGFLAGS_LINK_RESET 0x00
+#define LEAPRAID_TM_RSP_INVALID_FRAME 0x02
+#define LEAPRAID_TM_RSP_TM_SUCCEEDED 0x08
+#define LEAPRAID_TM_RSP_IO_QUEUED_ON_ADAPTER 0x80
+
+/* scsi sep request defines */
+#define LEAPRAID_SEP_REQ_ACT_WRITE_STATUS 0x00
+#define LEAPRAID_SEP_REQ_FLG_DEVHDL_ADDRESS 0x00
+#define LEAPRAID_SEP_REQ_FLG_ENCLOSURE_SLOT_ADDRESS 0x01
+#define LEAPRAID_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT 0x00000040
+
+/* the capabilities of the adapter */
+#define LEAPRAID_ADAPTER_FEATURES_CAP_ATOMIC_REQ 0x00080000
+#define LEAPRAID_ADAPTER_FEATURES_CAP_RDPQ_ARRAY_CAPABLE 0x00040000
+#define LEAPRAID_ADAPTER_FEATURES_CAP_EVENT_REPLAY 0x00002000
+#define LEAPRAID_ADAPTER_FEATURES_CAP_INTEGRATED_RAID 0x00001000
+
+/* event code definitions for the firmware */
+#define LEAPRAID_EVT_SAS_DEV_STATUS_CHANGE 0x000F
+#define LEAPRAID_EVT_SAS_DISCOVERY 0x0016
+#define LEAPRAID_EVT_SAS_TOPO_CHANGE_LIST 0x001C
+#define LEAPRAID_EVT_SAS_ENCL_DEV_STATUS_CHANGE 0x001D
+#define LEAPRAID_EVT_IR_CHANGE 0x0020
+#define LEAPRAID_EVT_TURN_ON_PFA_LED 0xFFFC
+#define LEAPRAID_EVT_SCAN_DEV_DONE 0xFFFD
+#define LEAPRAID_EVT_REMOVE_DEAD_DEV 0xFFFF
+#define LEAPRAID_MAX_EVENT_NUM 128
+
+#define LEAPRAID_EVT_SAS_DEV_STAT_RC_INTERNAL_DEV_RESET 0x08
+#define LEAPRAID_EVT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET 0x0E
+
+/* raid configuration change event */
+#define LEAPRAID_EVT_IR_RC_VOLUME_ADD 0x01
+#define LEAPRAID_EVT_IR_RC_VOLUME_DELETE 0x02
+#define LEAPRAID_EVT_IR_RC_PD_HIDDEN_TO_ADD 0x03
+#define LEAPRAID_EVT_IR_RC_PD_UNHIDDEN_TO_DELETE 0x04
+#define LEAPRAID_EVT_IR_RC_PD_CREATED_TO_HIDE 0x05
+#define LEAPRAID_EVT_IR_RC_PD_DELETED_TO_EXPOSE 0x06
+
+/* sas topology change event */
+#define LEAPRAID_EVT_SAS_TOPO_ES_NO_EXPANDER 0x00
+#define LEAPRAID_EVT_SAS_TOPO_ES_ADDED 0x01
+#define LEAPRAID_EVT_SAS_TOPO_ES_NOT_RESPONDING 0x02
+#define LEAPRAID_EVT_SAS_TOPO_ES_RESPONDING 0x03
+
+#define LEAPRAID_EVT_SAS_TOPO_RC_MASK 0x0F
+#define LEAPRAID_EVT_SAS_TOPO_RC_CLEAR_MASK 0xF0
+#define LEAPRAID_EVT_SAS_TOPO_RC_TARG_ADDED 0x01
+#define LEAPRAID_EVT_SAS_TOPO_RC_TARG_NOT_RESPONDING 0x02
+#define LEAPRAID_EVT_SAS_TOPO_RC_PHY_CHANGED 0x03
+
+/* sas discovery event defines */
+#define LEAPRAID_EVT_SAS_DISC_RC_STARTED 0x01
+#define LEAPRAID_EVT_SAS_DISC_RC_COMPLETED 0x02
+
+/* enclosure device status change event */
+#define LEAPRAID_EVT_SAS_ENCL_RC_ADDED 0x01
+#define LEAPRAID_EVT_SAS_ENCL_RC_NOT_RESPONDING 0x02
+
+/* device type and identifiers */
+#define LEAPRAID_DEVTYP_SEP 0x00004000
+#define LEAPRAID_DEVTYP_SSP_TGT 0x00000400
+#define LEAPRAID_DEVTYP_STP_TGT 0x00000200
+#define LEAPRAID_DEVTYP_SMP_TGT 0x00000100
+#define LEAPRAID_DEVTYP_SATA_DEV 0x00000080
+#define LEAPRAID_DEVTYP_SSP_INIT 0x00000040
+#define LEAPRAID_DEVTYP_STP_INIT 0x00000020
+#define LEAPRAID_DEVTYP_SMP_INIT 0x00000010
+#define LEAPRAID_DEVTYP_SATA_HOST 0x00000008
+
+#define LEAPRAID_DEVTYP_MASK_DEV_TYPE 0x00000007
+#define LEAPRAID_DEVTYP_NO_DEV 0x00000000
+#define LEAPRAID_DEVTYP_END_DEV 0x00000001
+#define LEAPRAID_DEVTYP_EDGE_EXPANDER 0x00000002
+#define LEAPRAID_DEVTYP_FANOUT_EXPANDER 0x00000003
+
+/* sas control operation */
+#define LEAPRAID_SAS_OP_PHY_LINK_RESET 0x06
+#define LEAPRAID_SAS_OP_PHY_HARD_RESET 0x07
+#define LEAPRAID_SAS_OP_SET_PARAMETER 0x0F
+
+/* boot device defines */
+#define LEAPRAID_BOOTDEV_FORM_MASK 0x0F
+#define LEAPRAID_BOOTDEV_FORM_NONE 0x00
+#define LEAPRAID_BOOTDEV_FORM_SAS_WWID 0x05
+#define LEAPRAID_BOOTDEV_FORM_ENC_SLOT 0x06
+#define LEAPRAID_BOOTDEV_FORM_DEV_NAME 0x07
+
+/**
+ * struct leapraid_reg_base - Register layout of the LeapRAID controller
+ *
+ * @db: Doorbell register used to signal commands or status to firmware
+ * @ws: Write sequence register for synchronizing doorbell operations
+ * @host_diag: Diagnostic register used for status or debug reporting
+ * @r1: Reserved
+ * @host_int_status: Interrupt status register reporting active interrupts
+ * @host_int_mask: Interrupt mask register enabling or disabling sources
+ * @r2: Reserved
+ * @rep_msg_host_idx: Reply message index for the next available reply slot
+ * @r3: Reserved
+ * @debug_log: DebugLog registers for firmware debug and diagnostic output
+ * @r4: Reserved
+ * @atomic_req_desc_post: Atomic register for single descriptor posting
+ * @adapter_log_buf_pos: Adapter log buffer write position
+ * @host_log_buf_pos: Host log buffer write position
+ * @r5: Reserved
+ * @rep_post_reg_idx: Array of reply post index registers, one per queue.
+ * The number of entries is defined by
+ * REP_POST_HOST_IDX_REG_CNT.
+ */
+struct leapraid_reg_base {
+ __le32 db;
+ __le32 ws;
+ __le32 host_diag;
+ __le32 r1[9];
+ __le32 host_int_status;
+ __le32 host_int_mask;
+ __le32 r2[4];
+ __le32 rep_msg_host_idx;
+ __le32 r3[13];
+ __le32 debug_log[LEAPRAID_DEBUGLOG_SZ_MAX];
+ __le32 r4[2];
+ __le32 atomic_req_desc_post;
+ __le32 adapter_log_buf_pos;
+ __le32 host_log_buf_pos;
+ __le32 r5[142];
+ struct leapraid_rep_post_reg_idx {
+ __le32 idx;
+ __le32 r1;
+ __le32 r2;
+ __le32 r3;
+ } rep_post_reg_idx[REP_POST_HOST_IDX_REG_CNT];
+} __packed;
+
+/**
+ * struct leapraid_atomic_req_desc - Atomic request descriptor
+ *
+ * @flg: Descriptor flag indicating the type of request (e.g. SCSI I/O)
+ * @msix_idx: MSI-X vector index used for interrupt routing
+ * @taskid: Unique task identifier associated with this request
+ */
+struct leapraid_atomic_req_desc {
+ u8 flg;
+ u8 msix_idx;
+ __le16 taskid;
+};
+
+/**
+ * union leapraid_rep_desc_union - Unified reply descriptor format
+ *
+ * @dflt_rep: Default reply descriptor containing basic completion info
+ * @dflt_rep.rep_flg: Reply flag indicating reply type or status
+ * @dflt_rep.msix_idx: MSI-X index for interrupt routing
+ * @dflt_rep.taskid: Task identifier matching the submitted request
+ * @r1: Reserved
+ *
+ * @addr_rep: Address reply descriptor used when firmware returns a
+ * memory address associated with the reply
+ * @addr_rep.rep_flg: Reply flag indicating reply type or status
+ * @addr_rep.msix_idx: MSI-X index for interrupt routing
+ * @addr_rep.taskid: Task identifier matching the submitted request
+ * @addr_rep.rep_frame_addr: Physical address of the reply frame
+ *
+ * @words: Raw 64-bit representation of the reply descriptor
+ * @u: Alternative access using 32-bit low/high words
+ * @u.low: Lower 32 bits of the descriptor
+ * @u.high: Upper 32 bits of the descriptor
+ */
+union leapraid_rep_desc_union {
+ struct leapraid_rep_desc {
+ u8 rep_flg;
+ u8 msix_idx;
+ __le16 taskid;
+ u8 r1[4];
+ } dflt_rep;
+ struct leapraid_add_rep_desc {
+ u8 rep_flg;
+ u8 msix_idx;
+ __le16 taskid;
+ __le32 rep_frame_addr;
+ } addr_rep;
+ __le64 words;
+ struct {
+ u32 low;
+ u32 high;
+ } u;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_req - Generic request header
+ *
+ * @func_dep1: Function-dependent parameter (low 16 bits)
+ * @r1: Reserved
+ * @func: Function code identifying the command type
+ * @r2: Reserved
+ */
+struct leapraid_req {
+ __le16 func_dep1;
+ u8 r1;
+ u8 func;
+ u8 r2[8];
+};
+
+/**
+ * struct leapraid_rep - Generic reply header
+ *
+ * @r1: Reserved
+ * @msg_len: Length of the reply message in bytes
+ * @function: Function code corresponding to the request
+ * @r2: Reserved
+ * @adapter_status: Status code reported by the adapter
+ * @r3: Reserved
+ */
+struct leapraid_rep {
+ u8 r1[2];
+ u8 msg_len;
+ u8 function;
+ u8 r2[10];
+ __le16 adapter_status;
+ u8 r3[4];
+};
+
+/**
+ * struct leapraid_sge_simple32 - 32-bit simple scatter-gather entry
+ *
+ * @flg_and_len: Combined field for flags and segment length
+ * @addr: 32-bit physical address of the data buffer
+ */
+struct leapraid_sge_simple32 {
+ __le32 flg_and_len;
+ __le32 addr;
+};
+
+/**
+ * struct leapraid_sge_simple64 - 64-bit simple scatter-gather entry
+ *
+ * @flg_and_len: Combined field for flags and segment length
+ * @addr: 64-bit physical address of the data buffer
+ */
+struct leapraid_sge_simple64 {
+ __le32 flg_and_len;
+ __le64 addr;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_sge_simple_union - Unified 32/64-bit SGE representation
+ *
+ * @flg_and_len: Combined field for flags and segment length
+ * @u.addr32: 32-bit address field
+ * @u.addr64: 64-bit address field
+ */
+struct leapraid_sge_simple_union {
+ __le32 flg_and_len;
+ union {
+ __le32 addr32;
+ __le64 addr64;
+ } u;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_sge_chain_union - Chained scatter-gather entry
+ *
+ * @len: Length of the chain descriptor
+ * @next_chain_offset: Offset to the next SGE chain
+ * @flg: Flags indicating chain or termination properties
+ * @u.addr32: 32-bit physical address
+ * @u.addr64: 64-bit physical address
+ */
+struct leapraid_sge_chain_union {
+ __le16 len;
+ u8 next_chain_offset;
+ u8 flg;
+ union {
+ __le32 addr32;
+ __le64 addr64;
+ } u;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_ieee_sge_simple32 - IEEE 32-bit simple SGE format
+ *
+ * @addr: 32-bit physical address of the data buffer
+ * @flg_and_len: Combined field for flags and data length
+ */
+struct leapraid_ieee_sge_simple32 {
+ __le32 addr;
+ __le32 flg_and_len;
+};
+
+/**
+ * struct leapraid_ieee_sge_simple64 - IEEE 64-bit simple SGE format
+ *
+ * @addr: 64-bit physical address of the data buffer
+ * @len: Length of the data segment
+ * @r1: Reserved
+ * @flg: Flags indicating transfer properties
+ */
+struct leapraid_ieee_sge_simple64 {
+ __le64 addr;
+ __le32 len;
+ u8 r1[3];
+ u8 flg;
+} __packed __aligned(4);
+
+/**
+ * union leapraid_ieee_sge_simple_union - Unified IEEE SGE format
+ *
+ * @simple32: IEEE 32-bit simple SGE entry
+ * @simple64: IEEE 64-bit simple SGE entry
+ */
+union leapraid_ieee_sge_simple_union {
+ struct leapraid_ieee_sge_simple32 simple32;
+ struct leapraid_ieee_sge_simple64 simple64;
+};
+
+/**
+ * union leapraid_ieee_sge_chain_union - Unified IEEE SGE chain format
+ *
+ * @chain32: IEEE 32-bit chain SGE entry
+ * @chain64: IEEE 64-bit chain SGE entry
+ */
+union leapraid_ieee_sge_chain_union {
+ struct leapraid_ieee_sge_simple32 chain32;
+ struct leapraid_ieee_sge_simple64 chain64;
+};
+
+/**
+ * struct leapraid_chain64_ieee_sg - 64-bit IEEE chain SGE descriptor
+ *
+ * @addr: Physical address of the next chain segment
+ * @len: Length of the current SGE
+ * @r1: Reserved
+ * @next_chain_offset: Offset to the next chain element
+ * @flg: Flags that describe SGE attributes
+ */
+struct leapraid_chain64_ieee_sg {
+ __le64 addr;
+ __le32 len;
+ u8 r1[2];
+ u8 next_chain_offset;
+ u8 flg;
+} __packed __aligned(4);
+
+/**
+ * union leapraid_ieee_sge_io_union - IEEE-style SGE union for I/O
+ *
+ * @ieee_simple: Simple IEEE SGE descriptor
+ * @ieee_chain: IEEE chain SGE descriptor
+ */
+union leapraid_ieee_sge_io_union {
+ struct leapraid_ieee_sge_simple64 ieee_simple;
+ struct leapraid_chain64_ieee_sg ieee_chain;
+};
+
+/**
+ * union leapraid_simple_sge_union - Union of simple SGE descriptors
+ *
+ * @leapio_simple: LeapIO-style simple SGE
+ * @ieee_simple: IEEE-style simple SGE
+ */
+union leapraid_simple_sge_union {
+ struct leapraid_sge_simple_union leapio_simple;
+ union leapraid_ieee_sge_simple_union ieee_simple;
+};
+
+/**
+ * union leapraid_sge_io_union - Combined SGE union for all I/O types
+ *
+ * @leapio_simple: LeapIO simple SGE format
+ * @leapio_chain: LeapIO chain SGE format
+ * @ieee_simple: IEEE simple SGE format
+ * @ieee_chain: IEEE chain SGE format
+ */
+union leapraid_sge_io_union {
+ struct leapraid_sge_simple_union leapio_simple;
+ struct leapraid_sge_chain_union leapio_chain;
+ union leapraid_ieee_sge_simple_union ieee_simple;
+ union leapraid_ieee_sge_chain_union ieee_chain;
+};
+
+/**
+ * struct leapraid_cfg_pg_header - Standard configuration page header
+ *
+ * @r1: Reserved
+ * @page_len: Length of the page in 4-byte units
+ * @page_num: Page number
+ * @page_type: Page type
+ */
+struct leapraid_cfg_pg_header {
+ u8 r1;
+ u8 page_len;
+ u8 page_num;
+ u8 page_type;
+};
+
+/**
+ * struct leapraid_cfg_ext_pg_header - Extended configuration page header
+ *
+ * @r1: Reserved
+ * @r2: Reserved
+ * @page_num: Page number
+ * @page_type: Page type
+ * @ext_page_len: Extended page length
+ * @ext_page_type: Extended page type
+ * @r3: Reserved
+ */
+struct leapraid_cfg_ext_pg_header {
+ u8 r1;
+ u8 r2;
+ u8 page_num;
+ u8 page_type;
+ __le16 ext_page_len;
+ u8 ext_page_type;
+ u8 r3;
+};
+
+/**
+ * struct leapraid_cfg_req - Configuration request message
+ *
+ * @action: Requested action type
+ * @sgl_flag: SGL flag field
+ * @chain_offset: Offset to next chain SGE
+ * @func: Function code
+ * @ext_page_len: Extended page length
+ * @ext_page_type: Extended page type
+ * @msg_flag: Message flags
+ * @r1: Reserved
+ * @header: Configuration page header
+ * @page_addr: Address of the page buffer
+ * @page_buf_sge: SGE describing the page buffer
+ */
+struct leapraid_cfg_req {
+ u8 action;
+ u8 sgl_flag;
+ u8 chain_offset;
+ u8 func;
+ __le16 ext_page_len;
+ u8 ext_page_type;
+ u8 msg_flag;
+ u8 r1[12];
+ struct leapraid_cfg_pg_header header;
+ __le32 page_addr;
+ union leapraid_sge_io_union page_buf_sge;
+};
+
+/**
+ * struct leapraid_cfg_rep - Configuration reply message
+ *
+ * @action: Action type from the request
+ * @r1: Reserved
+ * @msg_len: Message length in bytes
+ * @func: Function code
+ * @ext_page_len: Extended page length
+ * @ext_page_type: Extended page type
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @adapter_status: Adapter status code
+ * @r3: Reserved
+ * @header: Configuration page header
+ */
+struct leapraid_cfg_rep {
+ u8 action;
+ u8 r1;
+ u8 msg_len;
+ u8 func;
+ __le16 ext_page_len;
+ u8 ext_page_type;
+ u8 msg_flag;
+ u8 r2[6];
+ __le16 adapter_status;
+ u8 r3[4];
+ struct leapraid_cfg_pg_header header;
+};
+
+/**
+ * struct leapraid_boot_dev_format_sas_wwid - Boot device identified by SAS WWID
+ *
+ * @sas_addr: SAS address of the device
+ * @lun: Logical unit number
+ * @r1: Reserved
+ */
+struct leapraid_boot_dev_format_sas_wwid {
+ __le64 sas_addr;
+ u8 lun[8];
+ u8 r1[8];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_boot_dev_format_enc_slot - Boot device identified by enclosure slot
+ *
+ * @enc_lid: Enclosure logical ID
+ * @r1: Reserved
+ * @slot_num: Slot number in the enclosure
+ * @r2: Reserved
+ */
+struct leapraid_boot_dev_format_enc_slot {
+ __le64 enc_lid;
+ u8 r1[8];
+ __le16 slot_num;
+ u8 r2[6];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_boot_dev_format_dev_name - Boot device by device name
+ *
+ * @dev_name: Device name identifier
+ * @lun: Logical unit number
+ * @r1: Reserved
+ */
+struct leapraid_boot_dev_format_dev_name {
+ __le64 dev_name;
+ u8 lun[8];
+ u8 r1[8];
+} __packed __aligned(4);
+
+/**
+ * union leapraid_boot_dev_format - Boot device format union
+ *
+ * @sas_wwid: Format using SAS WWID and LUN
+ * @enc_slot: Format using enclosure slot and ID
+ * @dev_name: Format using device name and LUN
+ */
+union leapraid_boot_dev_format {
+ struct leapraid_boot_dev_format_sas_wwid sas_wwid;
+ struct leapraid_boot_dev_format_enc_slot enc_slot;
+ struct leapraid_boot_dev_format_dev_name dev_name;
+};
+
+/**
+ * struct leapraid_bios_page2 - BIOS configuration page 2
+ *
+ * @header: Configuration page header
+ * @r1: Reserved
+ * @requested_boot_dev_form: Format type of the requested boot device
+ * @r2: Reserved
+ * @requested_boot_dev: Boot device requested by BIOS or user
+ * @requested_alt_boot_dev_form: Format of the alternate boot device
+ * @r3: Reserved
+ * @requested_alt_boot_dev: Alternate boot device requested
+ * @current_boot_dev_form: Format type of the active boot device
+ * @r4: Reserved
+ * @current_boot_dev: Currently active boot device in use
+ */
+struct leapraid_bios_page2 {
+ struct leapraid_cfg_pg_header header;
+ u8 r1[24];
+ u8 requested_boot_dev_form;
+ u8 r2[3];
+ union leapraid_boot_dev_format requested_boot_dev;
+ u8 requested_alt_boot_dev_form;
+ u8 r3[3];
+ union leapraid_boot_dev_format requested_alt_boot_dev;
+ u8 current_boot_dev_form;
+ u8 r4[3];
+ union leapraid_boot_dev_format current_boot_dev;
+};
+
+/**
+ * struct leapraid_bios_page3 - BIOS configuration page 3
+ *
+ * @header: Configuration page header
+ * @r1: Reserved
+ * @bios_version: BIOS firmware version number
+ * @r2: Reserved
+ */
+struct leapraid_bios_page3 {
+ struct leapraid_cfg_pg_header header;
+ u8 r1[4];
+ __le32 bios_version;
+ u8 r2[84];
+};
+
+/**
+ * struct leapraid_raidvol0_phys_disk - Physical disk in RAID volume
+ *
+ * @r1: Reserved
+ * @phys_disk_num: Physical disk number within the RAID volume
+ * @r2: Reserved
+ */
+struct leapraid_raidvol0_phys_disk {
+ u8 r1[2];
+ u8 phys_disk_num;
+ u8 r2;
+};
+
+/**
+ * struct leapraid_raidvol_p0 - RAID volume configuration page 0
+ *
+ * @header: Configuration page header
+ * @dev_hdl: Device handle for the RAID volume
+ * @volume_state: State of the RAID volume
+ * @volume_type: RAID type
+ * @r1: Reserved
+ * @num_phys_disks: Number of physical disks in the volume
+ * @r2: Reserved
+ * @phys_disk: Array of physical disks in this volume
+ */
+struct leapraid_raidvol_p0 {
+ struct leapraid_cfg_pg_header header;
+ __le16 dev_hdl;
+ u8 volume_state;
+ u8 volume_type;
+ u8 r1[28];
+ u8 num_phys_disks;
+ u8 r2[3];
+ struct leapraid_raidvol0_phys_disk phys_disk[];
+};
+
+/**
+ * struct leapraid_raidvol_p1 - RAID volume configuration page 1
+ *
+ * @header: Configuration page header
+ * @dev_hdl: Device handle of the RAID volume
+ * @r1: Reserved
+ * @wwid: World-wide identifier for the volume
+ * @r2: Reserved
+ */
+struct leapraid_raidvol_p1 {
+ struct leapraid_cfg_pg_header header;
+ __le16 dev_hdl;
+ u8 r1[42];
+ __le64 wwid;
+ u8 r2[8];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_raidpd_p0 - Physical disk configuration page 0
+ *
+ * @header: Configuration page header
+ * @dev_hdl: Device handle of the physical disk
+ * @r1: Reserved
+ * @phys_disk_num: Physical disk number
+ * @r2: Reserved
+ */
+struct leapraid_raidpd_p0 {
+ struct leapraid_cfg_pg_header header;
+ __le16 dev_hdl;
+ u8 r1;
+ u8 phys_disk_num;
+ u8 r2[112];
+};
+
+/**
+ * struct leapraid_sas_io_unit0_phy_info - PHY info for SAS I/O unit
+ *
+ * @port: Port number the PHY belongs to
+ * @port_flg: Flags describing port status
+ * @phy_flg: Flags describing PHY status
+ * @neg_link_rate: Negotiated link rate of the PHY
+ * @controller_phy_dev_info: Controller PHY device info
+ * @attached_dev_hdl: Handle of attached device
+ * @controller_dev_hdl: Handle of the controller device
+ * @r1: Reserved
+ */
+struct leapraid_sas_io_unit0_phy_info {
+ u8 port;
+ u8 port_flg;
+ u8 phy_flg;
+ u8 neg_link_rate;
+ __le32 controller_phy_dev_info;
+ __le16 attached_dev_hdl;
+ __le16 controller_dev_hdl;
+ u8 r1[8];
+};
+
+/**
+ * struct leapraid_sas_io_unit_p0 - SAS I/O unit configuration page 0
+ *
+ * @header: Extended configuration page header
+ * @r1: Reserved
+ * @phy_num: Number of PHYs in this unit
+ * @r2: Reserved
+ * @phy_info: Array of PHY information
+ */
+struct leapraid_sas_io_unit_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[4];
+ u8 phy_num;
+ u8 r2[3];
+ struct leapraid_sas_io_unit0_phy_info phy_info[];
+};
+
+/**
+ * struct leapraid_sas_io_unit1_phy_info - Placeholder for SAS unit page 1 PHY
+ *
+ * @r1: Reserved
+ */
+struct leapraid_sas_io_unit1_phy_info {
+ u8 r1[12];
+};
+
+/**
+ * struct leapraid_sas_io_unit_page1 - SAS I/O unit configuration page 1
+ *
+ * @header: Extended configuration page header
+ * @r1: Reserved
+ * @narrowport_max_queue_depth: Maximum queue depth for narrow ports
+ * @r2: Reserved
+ * @wideport_max_queue_depth: Maximum queue depth for wide ports
+ * @r3: Reserved
+ * @sata_max_queue_depth: Maximum SATA queue depth
+ * @r4: Reserved
+ * @phy_info: Array of PHY info structures
+ */
+struct leapraid_sas_io_unit_page1 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[2];
+ __le16 narrowport_max_queue_depth;
+ u8 r2[2];
+ __le16 wideport_max_queue_depth;
+ u8 r3;
+ u8 sata_max_queue_depth;
+ u8 r4[2];
+ struct leapraid_sas_io_unit1_phy_info phy_info[];
+};
+
+/**
+ * struct leapraid_exp_p0 - SAS expander page 0
+ *
+ * @header: Extended page header
+ * @physical_port: Physical port number
+ * @r1: Reserved
+ * @enc_hdl: Enclosure handle
+ * @sas_address: SAS address of the expander
+ * @r2: Reserved
+ * @dev_hdl: Device handle of this expander
+ * @parent_dev_hdl: Device handle of parent expander
+ * @r3: Reserved
+ * @phy_num: Number of PHYs
+ * @r4: Reserved
+ */
+struct leapraid_exp_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 physical_port;
+ u8 r1;
+ __le16 enc_hdl;
+ __le64 sas_address;
+ u8 r2[4];
+ __le16 dev_hdl;
+ __le16 parent_dev_hdl;
+ u8 r3[4];
+ u8 phy_num;
+ u8 r4[27];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_exp_p1 - SAS expander page 1
+ *
+ * @header: Extended page header
+ * @r1: Reserved
+ * @p_link_rate: PHY link rate
+ * @hw_link_rate: Hardware supported link rate
+ * @attached_dev_hdl: Attached device handle
+ * @r2: Reserved
+ * @neg_link_rate: Negotiated link rate
+ * @r3: Reserved
+ */
+struct leapraid_exp_p1 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[8];
+ u8 p_link_rate;
+ u8 hw_link_rate;
+ __le16 attached_dev_hdl;
+ u8 r2[11];
+ u8 neg_link_rate;
+ u8 r3[12];
+};
+
+/**
+ * struct leapraid_sas_dev_p0 - SAS device page 0
+ *
+ * @header: Extended configuration page header
+ * @slot: Slot number
+ * @enc_hdl: Enclosure handle
+ * @sas_address: SAS address
+ * @parent_dev_hdl: Parent device handle
+ * @phy_num: Number of PHYs
+ * @r1: Reserved
+ * @dev_hdl: Device handle
+ * @r2: Reserved
+ * @dev_info: Device information
+ * @flg: Flags
+ * @physical_port: Physical port number
+ * @max_port_connections: Maximum port connections
+ * @dev_name: Device name
+ * @port_groups: Number of port groups
+ * @r3: Reserved
+ * @enc_level: Enclosure level
+ * @connector_name: Connector identifier
+ * @r4: Reserved
+ */
+struct leapraid_sas_dev_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ __le16 slot;
+ __le16 enc_hdl;
+ __le64 sas_address;
+ __le16 parent_dev_hdl;
+ u8 phy_num;
+ u8 r1;
+ __le16 dev_hdl;
+ u8 r2[2];
+ __le32 dev_info;
+ __le16 flg;
+ u8 physical_port;
+ u8 max_port_connections;
+ __le64 dev_name;
+ u8 port_groups;
+ u8 r3[2];
+ u8 enc_level;
+ u8 connector_name[4];
+ u8 r4[4];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_sas_phy_p0 - SAS PHY configuration page 0
+ *
+ * @header: Extended configuration page header
+ * @r1: Reserved
+ * @attached_dev_hdl: Handle of attached device
+ * @r2: Reserved
+ * @p_link_rate: PHY link rate
+ * @hw_link_rate: Hardware supported link rate
+ * @r3: Reserved
+ * @phy_info: PHY information
+ * @neg_link_rate: Negotiated link rate
+ * @r4: Reserved
+ */
+struct leapraid_sas_phy_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[4];
+ __le16 attached_dev_hdl;
+ u8 r2[6];
+ u8 p_link_rate;
+ u8 hw_link_rate;
+ u8 r3[2];
+ __le32 phy_info;
+ u8 neg_link_rate;
+ u8 r4[3];
+};
+
+/**
+ * struct leapraid_enc_p0 - SAS enclosure page 0
+ *
+ * @header: Extended configuration page header
+ * @r1: Reserved
+ * @enc_lid: Enclosure logical ID
+ * @r2: Reserved
+ * @enc_hdl: Enclosure handle
+ * @r3: Reserved
+ */
+struct leapraid_enc_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[4];
+ __le64 enc_lid;
+ u8 r2[2];
+ __le16 enc_hdl;
+ u8 r3[15];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_raid_cfg_p0_element - RAID configuration element
+ *
+ * @element_flg: Element flags
+ * @vol_dev_hdl: Volume device handle
+ * @r1: Reserved
+ * @phys_disk_dev_hdl: Physical disk device handle
+ */
+struct leapraid_raid_cfg_p0_element {
+ __le16 element_flg;
+ __le16 vol_dev_hdl;
+ u8 r1[2];
+ __le16 phys_disk_dev_hdl;
+};
+
+/**
+ * struct leapraid_raid_cfg_p0 - RAID configuration page 0
+ *
+ * @header: Extended configuration page header
+ * @r1: Reserved
+ * @cfg_num: Configuration number
+ * @r2: Reserved
+ * @elements_num: Number of RAID elements
+ * @r3: Reserved
+ * @cfg_element: Array of RAID elements
+ */
+struct leapraid_raid_cfg_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[3];
+ u8 cfg_num;
+ u8 r2[32];
+ u8 elements_num;
+ u8 r3[3];
+ struct leapraid_raid_cfg_p0_element cfg_element[];
+};
+
+/**
+ * union leapraid_mpi_scsi_io_cdb_union - SCSI I/O CDB or simple SGE
+ *
+ * @cdb32: 32-byte SCSI command descriptor block
+ * @sge: Simple SGE format
+ */
+union leapraid_mpi_scsi_io_cdb_union {
+ u8 cdb32[32];
+ struct leapraid_sge_simple_union sge;
+};
+
+/**
+ * struct leapraid_mpi_scsiio_req - MPI SCSI I/O request
+ *
+ * @dev_hdl: Device handle for the target
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r1: Reserved
+ * @msg_flg: Message flags
+ * @r2: Reserved
+ * @sense_buffer_low_add: Lower 32-bit address of sense buffer
+ * @dma_flag: DMA flags
+ * @r3: Reserved
+ * @sense_buffer_len: Sense buffer length
+ * @r4: Reserved
+ * @sgl_offset0: SGL offset 0
+ * @sgl_offset1: SGL offset 1
+ * @sgl_offset2: SGL offset 2
+ * @sgl_offset3: SGL offset 3
+ * @skip_count: Bytes to skip before transfer
+ * @data_len: Length of data transfer
+ * @bi_dir_data_len: Bi-directional transfer length
+ * @io_flg: I/O flags
+ * @eedp_flag: EEDP flags
+ * @eedp_block_size: EEDP block size
+ * @r5: Reserved
+ * @secondary_ref_tag: Secondary reference tag
+ * @secondary_app_tag: Secondary application tag
+ * @app_tag_trans_mask: Application tag mask
+ * @lun: Logical Unit Number
+ * @ctrl: Control flags
+ * @cdb: SCSI Command Descriptor Block or simple SGE
+ * @sgl: Scatter-gather list
+ */
+struct leapraid_mpi_scsiio_req {
+ __le16 dev_hdl;
+ u8 chain_offset;
+ u8 func;
+ u8 r1[3];
+ u8 msg_flg;
+ u8 r2[4];
+ __le32 sense_buffer_low_add;
+ u8 dma_flag;
+ u8 r3;
+ u8 sense_buffer_len;
+ u8 r4;
+ u8 sgl_offset0;
+ u8 sgl_offset1;
+ u8 sgl_offset2;
+ u8 sgl_offset3;
+ __le32 skip_count;
+ __le32 data_len;
+ __le32 bi_dir_data_len;
+ __le16 io_flg;
+ __le16 eedp_flag;
+ __le16 eedp_block_size;
+ u8 r5[2];
+ __le32 secondary_ref_tag;
+ __le16 secondary_app_tag;
+ __le16 app_tag_trans_mask;
+ u8 lun[8];
+ __le32 ctrl;
+ union leapraid_mpi_scsi_io_cdb_union cdb;
+ union leapraid_sge_io_union sgl;
+};
+
+/**
+ * union leapraid_scsi_io_cdb_union - SCSI I/O CDB or IEEE simple SGE
+ *
+ * @cdb32: 32-byte SCSI CDB
+ * @sge: IEEE simple 64-bit SGE
+ */
+union leapraid_scsi_io_cdb_union {
+ u8 cdb32[32];
+ struct leapraid_ieee_sge_simple64 sge;
+};
+
+/**
+ * struct leapraid_scsiio_req - SCSI I/O request
+ *
+ * @dev_hdl: Device handle
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r1: Reserved
+ * @msg_flg: Message flags
+ * @r2: Reserved
+ * @sense_buffer_low_add: Lower 32-bit address of sense buffer
+ * @dma_flag: DMA flag
+ * @r3: Reserved
+ * @sense_buffer_len: Sense buffer length
+ * @r4: Reserved
+ * @sgl_offset0: SGL offset 0
+ * @sgl_offset1: SGL offset 1
+ * @sgl_offset2: SGL offset 2
+ * @sgl_offset3: SGL offset 3
+ * @skip_count: Bytes to skip before transfer
+ * @data_len: Length of data transfer
+ * @bi_dir_data_len: Bi-directional transfer length
+ * @io_flg: I/O flags
+ * @eedp_flag: EEDP flags
+ * @eedp_block_size: EEDP block size
+ * @r5: Reserved
+ * @secondary_ref_tag: Secondary reference tag
+ * @secondary_app_tag: Secondary application tag
+ * @app_tag_trans_mask: Application tag mask
+ * @lun: Logical Unit Number
+ * @ctrl: Control flags
+ * @cdb: SCSI Command Descriptor Block or simple SGE
+ * @sgl: Scatter-gather list
+ */
+struct leapraid_scsiio_req {
+ __le16 dev_hdl;
+ u8 chain_offset;
+ u8 func;
+ u8 r1[3];
+ u8 msg_flg;
+ u8 r2[4];
+ __le32 sense_buffer_low_add;
+ u8 dma_flag;
+ u8 r3;
+ u8 sense_buffer_len;
+ u8 r4;
+ u8 sgl_offset0;
+ u8 sgl_offset1;
+ u8 sgl_offset2;
+ u8 sgl_offset3;
+ __le32 skip_count;
+ __le32 data_len;
+ __le32 bi_dir_data_len;
+ __le16 io_flg;
+ __le16 eedp_flag;
+ __le16 eedp_block_size;
+ u8 r5[2];
+ __le32 secondary_ref_tag;
+ __le16 secondary_app_tag;
+ __le16 app_tag_trans_mask;
+ u8 lun[8];
+ __le32 ctrl;
+ union leapraid_scsi_io_cdb_union cdb;
+ union leapraid_ieee_sge_io_union sgl;
+};
+
+/**
+ * struct leapraid_scsiio_rep - SCSI I/O response
+ *
+ * @dev_hdl: Device handle
+ * @msg_len: Length of response message
+ * @func: Function code
+ * @r1: Reserved
+ * @msg_flg: Message flags
+ * @r2: Reserved
+ * @scsi_status: SCSI status
+ * @scsi_state: SCSI state
+ * @adapter_status: Adapter status
+ * @r3: Reserved
+ * @transfer_count: Number of bytes transferred
+ * @sense_count: Number of sense bytes
+ * @resp_info: Additional response info
+ * @task_tag: Task identifier
+ * @scsi_status_qualifier: SCSI status qualifier
+ * @bi_dir_trans_count: Bi-directional transfer count
+ * @r4: Reserved
+ */
+struct leapraid_scsiio_rep {
+ __le16 dev_hdl;
+ u8 msg_len;
+ u8 func;
+ u8 r1[3];
+ u8 msg_flg;
+ u8 r2[4];
+ u8 scsi_status;
+ u8 scsi_state;
+ __le16 adapter_status;
+ u8 r3[4];
+ __le32 transfer_count;
+ __le32 sense_count;
+ __le32 resp_info;
+ __le16 task_tag;
+ __le16 scsi_status_qualifier;
+ __le32 bi_dir_trans_count;
+ __le32 r4[3];
+};
+
+/**
+ * struct leapraid_scsi_tm_req - SCSI Task Management request
+ *
+ * @dev_hdl: Device handle
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r1: Reserved
+ * @task_type: Task management function type
+ * @r2: Reserved
+ * @msg_flg: Message flags
+ * @r3: Reserved
+ * @lun: Logical Unit Number
+ * @r4: Reserved
+ * @task_mid: Task identifier
+ * @r5: Reserved
+ */
+struct leapraid_scsi_tm_req {
+ __le16 dev_hdl;
+ u8 chain_offset;
+ u8 func;
+ u8 r1;
+ u8 task_type;
+ u8 r2;
+ u8 msg_flg;
+ u8 r3[4];
+ u8 lun[8];
+ u8 r4[28];
+ __le16 task_mid;
+ u8 r5[2];
+};
+
+/**
+ * struct leapraid_scsi_tm_rep - SCSI Task Management response
+ *
+ * @dev_hdl: Device handle
+ * @msg_len: Length of response message
+ * @func: Function code
+ * @resp_code: Response code
+ * @task_type: Task management type
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @adapter_status: Adapter status
+ * @r3: Reserved
+ * @termination_count: Count of terminated tasks
+ * @response_info: Additional response info
+ */
+struct leapraid_scsi_tm_rep {
+ __le16 dev_hdl;
+ u8 msg_len;
+ u8 func;
+ u8 resp_code;
+ u8 task_type;
+ u8 r1;
+ u8 msg_flag;
+ u8 r2[6];
+ __le16 adapter_status;
+ u8 r3[4];
+ __le32 termination_count;
+ __le32 response_info;
+};
+
+/**
+ * struct leapraid_sep_req - SEP (SCSI Enclosure Processor) request
+ *
+ * @dev_hdl: Device handle
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @act: Action to perform
+ * @flg: Flags
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @slot_status: Slot status
+ * @r3: Reserved
+ * @slot: Slot number
+ * @enc_hdl: Enclosure handle
+ */
+struct leapraid_sep_req {
+ __le16 dev_hdl;
+ u8 chain_offset;
+ u8 func;
+ u8 act;
+ u8 flg;
+ u8 r1;
+ u8 msg_flag;
+ u8 r2[4];
+ __le32 slot_status;
+ u8 r3[12];
+ __le16 slot;
+ __le16 enc_hdl;
+};
+
+/**
+ * struct leapraid_sep_rep - SEP response
+ *
+ * @dev_hdl: Device handle
+ * @msg_len: Message length
+ * @func: Function code
+ * @act: Action performed
+ * @flg: Flags
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @adapter_status: Adapter status
+ * @r3: Reserved
+ * @slot_status: Slot status
+ * @r4: Reserved
+ * @slot: Slot number
+ * @enc_hdl: Enclosure handle
+ */
+struct leapraid_sep_rep {
+ __le16 dev_hdl;
+ u8 msg_len;
+ u8 func;
+ u8 act;
+ u8 flg;
+ u8 r1;
+ u8 msg_flag;
+ u8 r2[6];
+ __le16 adapter_status;
+ u8 r3[4];
+ __le32 slot_status;
+ u8 r4[4];
+ __le16 slot;
+ __le16 enc_hdl;
+};
+
+/**
+ * struct leapraid_adapter_init_req - Adapter initialization request
+ *
+ * @who_init: Initiator of the initialization
+ * @r1: Reserved
+ * @chain_offset: Chain offset
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flg: Message flags
+ * @driver_ver: Driver version
+ * @msg_ver: Message version
+ * @header_ver: Header version
+ * @host_buf_addr: Host buffer address (non adapter-ref)
+ * @r4: Reserved
+ * @host_buf_size: Host buffer size (non adapter-ref)
+ * @host_msix_vectors: Number of host MSI-X vectors
+ * @r6: Reserved
+ * @req_frame_size: Request frame size
+ * @rep_desc_qd: Reply descriptor queue depth
+ * @rep_msg_qd: Reply message queue depth
+ * @sense_buffer_add_high: High 32-bit of sense buffer address
+ * @rep_msg_dma_high: High 32-bit of reply message DMA address
+ * @task_desc_base_addr: Base address of task descriptors
+ * @rep_desc_q_arr_addr: Address of reply descriptor queue array
+ * @rep_msg_addr_dma: Reply message DMA address
+ * @time_stamp: Timestamp
+ */
+struct leapraid_adapter_init_req {
+ u8 who_init;
+ u8 r1;
+ u8 chain_offset;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flg;
+ __le32 driver_ver;
+ __le16 msg_ver;
+ __le16 header_ver;
+ __le32 host_buf_addr;
+ u8 r4[2];
+ u8 host_buf_size;
+ u8 host_msix_vectors;
+ u8 r6[2];
+ __le16 req_frame_size;
+ __le16 rep_desc_qd;
+ __le16 rep_msg_qd;
+ __le32 sense_buffer_add_high;
+ __le32 rep_msg_dma_high;
+ __le64 task_desc_base_addr;
+ __le64 rep_desc_q_arr_addr;
+ __le64 rep_msg_addr_dma;
+ __le64 time_stamp;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_rep_desc_q_arr - Reply descriptor queue array
+ *
+ * @rep_desc_base_addr: Base address of the reply descriptors
+ * @r1: Reserved
+ */
+struct leapraid_rep_desc_q_arr {
+ __le64 rep_desc_base_addr;
+ __le64 r1;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_adapter_init_rep - Adapter initialization reply
+ *
+ * @who_init: Initiator of the initialization
+ * @r1: Reserved
+ * @msg_len: Length of reply message
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ * @adapter_status: Adapter status
+ * @r4: Reserved
+ */
+struct leapraid_adapter_init_rep {
+ u8 who_init;
+ u8 r1;
+ u8 msg_len;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flag;
+ u8 r3[6];
+ __le16 adapter_status;
+ u8 r4[4];
+};
+
+/**
+ * struct leapraid_adapter_log_req - Adapter log request
+ *
+ * @action: Action code
+ * @type: Log type
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @mbox: Mailbox for command-specific parameters
+ * @sge: Scatter-gather entry for data buffer
+ */
+struct leapraid_adapter_log_req {
+ u8 action;
+ u8 type;
+ u8 chain_offset;
+ u8 func;
+ u8 r1[3];
+ u8 msg_flag;
+ u8 r2[4];
+ union {
+ u8 b[12];
+ __le16 s[6];
+ __le32 w[3];
+ } mbox;
+ struct leapraid_sge_simple64 sge;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_adapter_log_rep - Adapter log reply
+ *
+ * @action: Action code echoed
+ * @type: Log type echoed
+ * @msg_len: Length of message
+ * @func: Function code
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @adapter_status: Status returned by adapter
+ */
+struct leapraid_adapter_log_rep {
+ u8 action;
+ u8 type;
+ u8 msg_len;
+ u8 func;
+ u8 r1[3];
+ u8 msg_flag;
+ u8 r2[6];
+ __le16 adapter_status;
+};
+
+/**
+ * struct leapraid_adapter_features_req - Request adapter features
+ *
+ * @r1: Reserved
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ */
+struct leapraid_adapter_features_req {
+ u8 r1[2];
+ u8 chain_offset;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flag;
+ u8 r3[4];
+};
+
+/**
+ * struct leapraid_adapter_features_rep - Adapter features reply
+ *
+ * @msg_ver: Message version
+ * @msg_len: Length of reply message
+ * @func: Function code
+ * @header_ver: Header version
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @adapter_status: Adapter status
+ * @r3: Reserved
+ * @sata_max_qdepth: Max SATA queue depth
+ * @who_init: Who initialized the adapter
+ * @r4: Reserved
+ * @max_msix_vectors: Max MSI-X vectors supported
+ * @req_slot: Number of request slots
+ * @r5: Reserved
+ * @adapter_caps: Adapter capabilities
+ * @fw_version: Firmware version
+ * @sas_wide_max_qdepth: Max wide SAS queue depth
+ * @sas_narrow_max_qdepth: Max narrow SAS queue depth
+ * @r6: Reserved
+ * @hp_slot: Number of high-priority slots
+ * @r7: Reserved
+ * @max_volumes: Maximum supported volumes
+ * @max_dev_hdl: Maximum device handle
+ * @r8: Reserved
+ * @min_dev_hdl: Minimum device handle
+ * @r9: Reserved
+ */
+struct leapraid_adapter_features_rep {
+ u16 msg_ver;
+ u8 msg_len;
+ u8 func;
+ u16 header_ver;
+ u8 r1;
+ u8 msg_flag;
+ u8 r2[6];
+ u16 adapter_status;
+ u8 r3[4];
+ u8 sata_max_qdepth;
+ u8 who_init;
+ u8 r4;
+ u8 max_msix_vectors;
+ __le16 req_slot;
+ u8 r5[2];
+ __le32 adapter_caps;
+ __le32 fw_version;
+ __le16 sas_wide_max_qdepth;
+ __le16 sas_narrow_max_qdepth;
+ u8 r6[10];
+ __le16 hp_slot;
+ u8 r7[3];
+ u8 max_volumes;
+ __le16 max_dev_hdl;
+ u8 r8[2];
+ __le16 min_dev_hdl;
+ u8 r9[6];
+};
+
+/**
+ * struct leapraid_scan_dev_req - Request to scan devices
+ *
+ * @r1: Reserved
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ */
+struct leapraid_scan_dev_req {
+ u8 r1[2];
+ u8 chain_offset;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flag;
+ u8 r3[4];
+};
+
+/**
+ * struct leapraid_scan_dev_rep - Scan devices reply
+ *
+ * @r1: Reserved
+ * @msg_len: Length of message
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ * @adapter_status: Adapter status
+ * @r4: Reserved
+ */
+struct leapraid_scan_dev_rep {
+ u8 r1[2];
+ u8 msg_len;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flag;
+ u8 r3[6];
+ __le16 adapter_status;
+ u8 r4[4];
+};
+
+/**
+ * struct leapraid_evt_notify_req - Event notification request
+ *
+ * @r1: Reserved
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ * @evt_masks: Event masks to enable notifications
+ * @r4: Reserved
+ */
+struct leapraid_evt_notify_req {
+ u8 r1[2];
+ u8 chain_offset;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flag;
+ u8 r3[12];
+ __le32 evt_masks[4];
+ u8 r4[8];
+};
+
+/**
+ * struct leapraid_evt_notify_rep - Event notification reply
+ *
+ * @evt_data_len: Length of event data
+ * @msg_len: Length of message
+ * @func: Function code
+ * @r1: Reserved
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ * @adapter_status: Adapter status
+ * @r4: Reserved
+ * @evt: Event code
+ * @r5: Reserved
+ * @evt_data: Event data array
+ */
+struct leapraid_evt_notify_rep {
+ __le16 evt_data_len;
+ u8 msg_len;
+ u8 func;
+ u8 r1[2];
+ u8 r2;
+ u8 msg_flag;
+ u8 r3[6];
+ __le16 adapter_status;
+ u8 r4[4];
+ __le16 evt;
+ u8 r5[6];
+ __le32 evt_data[];
+};
+
+/**
+ * struct leapraid_evt_data_sas_dev_status_change - SAS device status change
+ *
+ * @task_tag: Task identifier
+ * @reason_code: Reason for status change
+ * @physical_port: Physical port number
+ * @r1: Reserved
+ * @dev_hdl: Device handle
+ * @r2: Reserved
+ * @sas_address: SAS address of device
+ * @lun: Logical Unit Number
+ */
+struct leapraid_evt_data_sas_dev_status_change {
+ __le16 task_tag;
+ u8 reason_code;
+ u8 physical_port;
+ u8 r1[2];
+ __le16 dev_hdl;
+ u8 r2[4];
+ __le64 sas_address;
+ u8 lun[8];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_evt_data_ir_change - IR (Integrated RAID) change event data
+ *
+ * @r1: Reserved
+ * @reason_code: Reason for IR change
+ * @r2: Reserved
+ * @vol_dev_hdl: Volume device handle
+ * @phys_disk_dev_hdl: Physical disk device handle
+ */
+struct leapraid_evt_data_ir_change {
+ u8 r1;
+ u8 reason_code;
+ u8 r2[2];
+ __le16 vol_dev_hdl;
+ __le16 phys_disk_dev_hdl;
+};
+
+/**
+ * struct leapraid_evt_data_sas_disc - SAS discovery event data
+ *
+ * @r1: Reserved
+ * @reason_code: Reason for discovery event
+ * @physical_port: Physical port number where event occurred
+ * @r2: Reserved
+ */
+struct leapraid_evt_data_sas_disc {
+ u8 r1;
+ u8 reason_code;
+ u8 physical_port;
+ u8 r2[5];
+};
+
+/**
+ * struct leapraid_evt_sas_topo_phy_entry - SAS topology PHY entry
+ *
+ * @attached_dev_hdl: Device handle attached to PHY
+ * @link_rate: Current link rate
+ * @phy_status: PHY status flags
+ */
+struct leapraid_evt_sas_topo_phy_entry {
+ __le16 attached_dev_hdl;
+ u8 link_rate;
+ u8 phy_status;
+};
+
+/**
+ * struct leapraid_evt_data_sas_topo_change_list - SAS topology change list
+ *
+ * @encl_hdl: Enclosure handle
+ * @exp_dev_hdl: Expander device handle
+ * @num_phys: Number of PHYs in this entry
+ * @r1: Reserved
+ * @entry_num: Entry index
+ * @start_phy_num: Start PHY number
+ * @exp_status: Expander status
+ * @physical_port: Physical port number
+ * @phy: Array of SAS PHY entries
+ */
+struct leapraid_evt_data_sas_topo_change_list {
+ __le16 encl_hdl;
+ __le16 exp_dev_hdl;
+ u8 num_phys;
+ u8 r1[3];
+ u8 entry_num;
+ u8 start_phy_num;
+ u8 exp_status;
+ u8 physical_port;
+ struct leapraid_evt_sas_topo_phy_entry phy[];
+};
+
+/**
+ * struct leapraid_evt_data_sas_enc_dev_status_change - SAS enclosure device status
+ *
+ * @enc_hdl: Enclosure handle
+ * @reason_code: Reason code for status change
+ * @physical_port: Physical port number
+ * @encl_logical_id: Enclosure logical ID
+ * @num_slots: Number of slots in enclosure
+ * @start_slot: First affected slot
+ * @phy_bits: Bitmap of affected PHYs
+ */
+struct leapraid_evt_data_sas_enc_dev_status_change {
+ __le16 enc_hdl;
+ u8 reason_code;
+ u8 physical_port;
+ __le64 encl_logical_id;
+ __le16 num_slots;
+ __le16 start_slot;
+ __le32 phy_bits;
+};
+
+/**
+ * struct leapraid_io_unit_ctrl_req - IO unit control request
+ *
+ * @op: Operation code
+ * @r1: Reserved
+ * @chain_offset: SGE chain offset
+ * @func: Function code
+ * @dev_hdl: Device handle
+ * @adapter_para: Adapter parameter selector
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @phy_num: PHY number
+ * @r3: Reserved
+ * @adapter_para_value: Value for adapter parameter
+ * @adapter_para_value2: Optional second parameter value
+ * @r4: Reserved
+ */
+struct leapraid_io_unit_ctrl_req {
+ u8 op;
+ u8 r1;
+ u8 chain_offset;
+ u8 func;
+	__le16 dev_hdl;
+ u8 adapter_para;
+ u8 msg_flag;
+ u8 r2[6];
+ u8 phy_num;
+ u8 r3[17];
+ __le32 adapter_para_value;
+ __le32 adapter_para_value2;
+ u8 r4[4];
+};
+
+/**
+ * struct leapraid_io_unit_ctrl_rep - IO unit control reply
+ *
+ * @op: Operation code echoed
+ * @r1: Reserved
+ * @func: Function code
+ * @dev_hdl: Device handle
+ * @r2: Reserved
+ */
+struct leapraid_io_unit_ctrl_rep {
+ u8 op;
+ u8 r1[2];
+ u8 func;
+ __le16 dev_hdl;
+ u8 r2[14];
+};
+
+/**
+ * struct leapraid_raid_act_req - RAID action request
+ *
+ * @act: RAID action code
+ * @r1: Reserved
+ * @func: Function code
+ * @r2: Reserved
+ * @phys_disk_num: Number of physical disks involved
+ * @r3: Reserved
+ * @action_data_sge: SGE describing action-specific data
+ */
+struct leapraid_raid_act_req {
+ u8 act;
+ u8 r1[2];
+ u8 func;
+ u8 r2[2];
+ u8 phys_disk_num;
+ u8 r3[13];
+ struct leapraid_sge_simple_union action_data_sge;
+};
+
+/**
+ * struct leapraid_raid_act_rep - RAID action reply
+ *
+ * @act: RAID action code echoed
+ * @r1: Reserved
+ * @func: Function code
+ * @vol_dev_hdl: Volume device handle
+ * @r2: Reserved
+ * @adapter_status: Status returned by adapter
+ * @r3: Reserved
+ */
+struct leapraid_raid_act_rep {
+ u8 act;
+ u8 r1[2];
+ u8 func;
+ __le16 vol_dev_hdl;
+ u8 r2[8];
+ __le16 adapter_status;
+ u8 r3[76];
+};
+
+/**
+ * struct leapraid_smp_passthrough_req - SMP passthrough request
+ *
+ * @passthrough_flg: Passthrough flags
+ * @physical_port: Target PHY port
+ * @r1: Reserved
+ * @func: Function code
+ * @req_data_len: Request data length
+ * @r2: Reserved
+ * @sas_address: SAS address of target device
+ * @r3: Reserved
+ * @sgl: Scatter-gather list describing request buffer
+ */
+struct leapraid_smp_passthrough_req {
+ u8 passthrough_flg;
+ u8 physical_port;
+ u8 r1;
+ u8 func;
+ __le16 req_data_len;
+ u8 r2[10];
+ __le64 sas_address;
+ u8 r3[8];
+ union leapraid_simple_sge_union sgl;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_smp_passthrough_rep - SMP passthrough reply
+ *
+ * @passthrough_flg: Passthrough flags echoed
+ * @physical_port: Target PHY port
+ * @r1: Reserved
+ * @func: Function code
+ * @resp_data_len: Length of response data
+ * @r2: Reserved
+ * @adapter_status: Adapter status
+ * @r3: Reserved
+ */
+struct leapraid_smp_passthrough_rep {
+ u8 passthrough_flg;
+ u8 physical_port;
+ u8 r1;
+ u8 func;
+ __le16 resp_data_len;
+ u8 r2[8];
+ __le16 adapter_status;
+ u8 r3[12];
+};
+
+/**
+ * struct leapraid_sas_io_unit_ctrl_req - SAS IO unit control request
+ *
+ * @op: Operation code
+ * @r1: Reserved
+ * @func: Function code
+ * @dev_hdl: Device handle
+ * @r2: Reserved
+ */
+struct leapraid_sas_io_unit_ctrl_req {
+ u8 op;
+ u8 r1[2];
+ u8 func;
+ __le16 dev_hdl;
+ u8 r2[38];
+};
+
+#endif /* LEAPRAID_H */
diff --git a/drivers/scsi/leapraid/leapraid_app.c b/drivers/scsi/leapraid/leapraid_app.c
new file mode 100644
index 000000000000..f838bd5aa20e
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid_app.c
@@ -0,0 +1,675 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+
+#include <linux/compat.h>
+#include <linux/module.h>
+#include <linux/miscdevice.h>
+
+#include "leapraid_func.h"
+
+/* ioctl device file */
+#define LEAPRAID_DEV_NAME "leapraid_ctl"
+
+/* ioctl version */
+#define LEAPRAID_IOCTL_VERSION 0x07
+
+/* ioctl command */
+#define LEAPRAID_ADAPTER_INFO 17
+#define LEAPRAID_COMMAND 20
+#define LEAPRAID_EVENTQUERY 21
+#define LEAPRAID_EVENTREPORT 23
+
+/**
+ * struct leapraid_ioctl_header - IOCTL command header
+ * @adapter_id: Adapter identifier
+ * @port_number: Port identifier
+ * @max_data_size: Maximum data size for transfer
+ */
+struct leapraid_ioctl_header {
+ u32 adapter_id;
+ u32 port_number;
+ u32 max_data_size;
+};
+
+/**
+ * struct leapraid_ioctl_diag_reset - Diagnostic reset request
+ * @hdr: Common IOCTL header
+ */
+struct leapraid_ioctl_diag_reset {
+ struct leapraid_ioctl_header hdr;
+};
+
+/**
+ * struct leapraid_ioctl_pci_info - PCI device information
+ * @u: Union holding PCI bus/device/function information
+ * @u.bits.dev: PCI device number
+ * @u.bits.func: PCI function number
+ * @u.bits.bus: PCI bus number
+ * @u.word: Combined representation of PCI BDF
+ * @seg_id: PCI segment identifier
+ */
+struct leapraid_ioctl_pci_info {
+ union {
+ struct {
+ u32 dev:5;
+ u32 func:3;
+ u32 bus:24;
+ } bits;
+ u32 word;
+ } u;
+ u32 seg_id;
+};
+
+/**
+ * struct leapraid_ioctl_adapter_info - Adapter information for IOCTL
+ * @hdr: IOCTL header
+ * @adapter_type: Adapter type identifier
+ * @port_number: Port number
+ * @pci_id: PCI device ID
+ * @revision: Revision number
+ * @sub_dev: Subsystem device ID
+ * @sub_vendor: Subsystem vendor ID
+ * @r0: Reserved
+ * @fw_ver: Firmware version
+ * @bios_ver: BIOS version
+ * @driver_ver: Driver version
+ * @r1: Reserved
+ * @scsi_id: SCSI ID
+ * @r2: Reserved
+ * @pci_info: PCI information structure
+ */
+struct leapraid_ioctl_adapter_info {
+ struct leapraid_ioctl_header hdr;
+ u32 adapter_type;
+ u32 port_number;
+ u32 pci_id;
+ u32 revision;
+ u32 sub_dev;
+ u32 sub_vendor;
+ u32 r0;
+ u32 fw_ver;
+ u32 bios_ver;
+ u8 driver_ver[32];
+ u8 r1;
+ u8 scsi_id;
+ u16 r2;
+ struct leapraid_ioctl_pci_info pci_info;
+};
+
+/**
+ * struct leapraid_ioctl_command - IOCTL command structure
+ * @hdr: IOCTL header
+ * @timeout: Command timeout
+ * @rep_msg_buf_ptr: User pointer to reply message buffer
+ * @c2h_buf_ptr: User pointer to card-to-host data buffer
+ * @h2c_buf_ptr: User pointer to host-to-card data buffer
+ * @sense_data_ptr: User pointer to sense data buffer
+ * @max_rep_bytes: Maximum reply bytes
+ * @c2h_size: Card-to-host data size
+ * @h2c_size: Host-to-card data size
+ * @max_sense_bytes: Maximum sense data bytes
+ * @data_sge_offset: Data SGE offset
+ * @mf: Message frame data (flexible array)
+ */
+struct leapraid_ioctl_command {
+ struct leapraid_ioctl_header hdr;
+ u32 timeout;
+ void __user *rep_msg_buf_ptr;
+ void __user *c2h_buf_ptr;
+ void __user *h2c_buf_ptr;
+ void __user *sense_data_ptr;
+ u32 max_rep_bytes;
+ u32 c2h_size;
+ u32 h2c_size;
+ u32 max_sense_bytes;
+ u32 data_sge_offset;
+ u8 mf[];
+};
+
+static struct leapraid_adapter *leapraid_ctl_lookup_adapter(int adapter_id)
+{
+ struct leapraid_adapter *adapter;
+
+ spin_lock(&leapraid_adapter_lock);
+ list_for_each_entry(adapter, &leapraid_adapter_list, list) {
+ if (adapter->adapter_attr.id == adapter_id) {
+ spin_unlock(&leapraid_adapter_lock);
+ return adapter;
+ }
+ }
+ spin_unlock(&leapraid_adapter_lock);
+
+ return NULL;
+}
+
+static void leapraid_cli_scsiio_cmd(struct leapraid_adapter *adapter,
+ struct leapraid_req *ctl_sp_mpi_req, u16 taskid,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size,
+ u16 dev_hdl, void *psge)
+{
+ struct leapraid_mpi_scsiio_req *scsiio_request =
+ (struct leapraid_mpi_scsiio_req *)ctl_sp_mpi_req;
+
+ scsiio_request->sense_buffer_len = SCSI_SENSE_BUFFERSIZE;
+ scsiio_request->sense_buffer_low_add =
+ leapraid_get_sense_buffer_dma(adapter, taskid);
+ memset((void *)(&adapter->driver_cmds.ctl_cmd.sense),
+ 0, SCSI_SENSE_BUFFERSIZE);
+ leapraid_build_ieee_sg(adapter, psge, h2c_dma_addr,
+ h2c_size, c2h_dma_addr, c2h_size);
+ if (scsiio_request->func == LEAPRAID_FUNC_SCSIIO_REQ)
+ leapraid_fire_scsi_io(adapter, taskid, dev_hdl);
+ else
+ leapraid_fire_task(adapter, taskid);
+}
+
+static void leapraid_ctl_smp_passthrough_cmd(struct leapraid_adapter *adapter,
+ struct leapraid_req *ctl_sp_mpi_req,
+ u16 taskid,
+ dma_addr_t h2c_dma_addr,
+ size_t h2c_size,
+ dma_addr_t c2h_dma_addr,
+ size_t c2h_size,
+ void *psge, void *h2c)
+{
+ struct leapraid_smp_passthrough_req *smp_pt_req =
+ (struct leapraid_smp_passthrough_req *)ctl_sp_mpi_req;
+ u8 *data;
+
+ if (!adapter->adapter_attr.enable_mp)
+ smp_pt_req->physical_port = LEAPRAID_DISABLE_MP_PORT_ID;
+ if (smp_pt_req->passthrough_flg & LEAPRAID_SMP_PT_FLAG_SGL_PTR)
+ data = (u8 *)&smp_pt_req->sgl;
+ else
+ data = h2c;
+
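+	/*
+	 * Mark a link reset as pending for SMP frames whose function and
+	 * operation bytes indicate a phy reset; the flag is cleared again
+	 * in leapraid_ctl_do_command() once the request completes.
+	 */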
+ if (data[1] == LEAPRAID_SMP_FN_REPORT_PHY_ERR_LOG &&
+ (data[10] == 1 || data[10] == 2))
+ adapter->reset_desc.adapter_link_resetting = true;
+ leapraid_build_ieee_sg(adapter, psge, h2c_dma_addr,
+ h2c_size, c2h_dma_addr, c2h_size);
+ leapraid_fire_task(adapter, taskid);
+}
+
+static void leapraid_ctl_fire_ieee_cmd(struct leapraid_adapter *adapter,
+ dma_addr_t h2c_dma_addr,
+ size_t h2c_size,
+ dma_addr_t c2h_dma_addr,
+ size_t c2h_size,
+ void *psge, u16 taskid)
+{
+ leapraid_build_ieee_sg(adapter, psge, h2c_dma_addr, h2c_size,
+ c2h_dma_addr, c2h_size);
+ leapraid_fire_task(adapter, taskid);
+}
+
+static void leapraid_ctl_sata_passthrough_cmd(struct leapraid_adapter *adapter,
+ dma_addr_t h2c_dma_addr,
+ size_t h2c_size,
+ dma_addr_t c2h_dma_addr,
+ size_t c2h_size,
+ void *psge, u16 taskid)
+{
+ leapraid_ctl_fire_ieee_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+}
+
+static void leapraid_ctl_load_fw_cmd(struct leapraid_adapter *adapter,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size,
+ void *psge, u16 taskid)
+{
+ leapraid_ctl_fire_ieee_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+}
+
+static void leapraid_ctl_fire_mpi_cmd(struct leapraid_adapter *adapter,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size,
+ void *psge, u16 taskid)
+{
+ leapraid_build_mpi_sg(adapter, psge, h2c_dma_addr,
+ h2c_size, c2h_dma_addr, c2h_size);
+ leapraid_fire_task(adapter, taskid);
+}
+
+static void leapraid_ctl_sas_io_unit_ctrl_cmd(struct leapraid_adapter *adapter,
+ struct leapraid_req *ctl_sp_mpi_req,
+ dma_addr_t h2c_dma_addr,
+ size_t h2c_size,
+ dma_addr_t c2h_dma_addr,
+ size_t c2h_size,
+ void *psge, u16 taskid)
+{
+ struct leapraid_sas_io_unit_ctrl_req *sas_io_unit_ctrl_req =
+ (struct leapraid_sas_io_unit_ctrl_req *)ctl_sp_mpi_req;
+
+ if (sas_io_unit_ctrl_req->op == LEAPRAID_SAS_OP_PHY_HARD_RESET ||
+ sas_io_unit_ctrl_req->op == LEAPRAID_SAS_OP_PHY_LINK_RESET)
+ adapter->reset_desc.adapter_link_resetting = true;
+ leapraid_ctl_fire_mpi_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+}
+
+static long leapraid_ctl_do_command(struct leapraid_adapter *adapter,
+ struct leapraid_ioctl_command *karg,
+ void __user *mf)
+{
+ struct leapraid_req *leap_mpi_req = NULL;
+ struct leapraid_req *ctl_sp_mpi_req = NULL;
+ u16 taskid;
+ void *h2c = NULL;
+ size_t h2c_size = 0;
+ dma_addr_t h2c_dma_addr = 0;
+ void *c2h = NULL;
+ size_t c2h_size = 0;
+ dma_addr_t c2h_dma_addr = 0;
+ void *psge;
+ unsigned long timeout;
+ u16 dev_hdl = LEAPRAID_INVALID_DEV_HANDLE;
+ bool issue_reset = false;
+ u32 sz;
+ long rc = 0;
+
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc)
+ goto out;
+
+ leap_mpi_req = kzalloc(LEAPRAID_REQUEST_SIZE, GFP_KERNEL);
+ if (!leap_mpi_req) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
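+	/*
+	 * data_sge_offset comes from user space; reject values that would
+	 * overflow or copy past the end of a request frame.
+	 */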
+ if (karg->data_sge_offset * LEAPRAID_SGE_OFFSET_SIZE > LEAPRAID_REQUEST_SIZE ||
+ karg->data_sge_offset > ((UINT_MAX) / LEAPRAID_SGE_OFFSET_SIZE)) {
+ rc = -EINVAL;
+ goto out;
+ }
+
+ if (copy_from_user(leap_mpi_req, mf,
+ karg->data_sge_offset * LEAPRAID_SGE_OFFSET_SIZE)) {
+ rc = -EFAULT;
+ goto out;
+ }
+
+ taskid = adapter->driver_cmds.ctl_cmd.taskid;
+
+ adapter->driver_cmds.ctl_cmd.status = LEAPRAID_CMD_PENDING;
+ memset((void *)(&adapter->driver_cmds.ctl_cmd.reply), 0,
+ LEAPRAID_REPLY_SIEZ);
+ ctl_sp_mpi_req = leapraid_get_task_desc(adapter, taskid);
+ memset(ctl_sp_mpi_req, 0, LEAPRAID_REQUEST_SIZE);
+ memcpy(ctl_sp_mpi_req,
+ leap_mpi_req,
+ karg->data_sge_offset * LEAPRAID_SGE_OFFSET_SIZE);
+
+ if (ctl_sp_mpi_req->func == LEAPRAID_FUNC_SCSIIO_REQ ||
+ ctl_sp_mpi_req->func == LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH ||
+ ctl_sp_mpi_req->func == LEAPRAID_FUNC_SATA_PASSTHROUGH) {
+ dev_hdl = le16_to_cpu(ctl_sp_mpi_req->func_dep1);
+ if (!dev_hdl || dev_hdl > adapter->adapter_attr.features.max_dev_handle) {
+ rc = -EINVAL;
+ goto out;
+ }
+ }
+
+	if (WARN_ON(ctl_sp_mpi_req->func == LEAPRAID_FUNC_SCSI_TMF)) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+ h2c_size = karg->h2c_size;
+ c2h_size = karg->c2h_size;
+ if (h2c_size) {
+ h2c = dma_alloc_coherent(&adapter->pdev->dev, h2c_size,
+ &h2c_dma_addr, GFP_ATOMIC);
+ if (!h2c) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ if (copy_from_user(h2c, karg->h2c_buf_ptr, h2c_size)) {
+ rc = -EFAULT;
+ goto out;
+ }
+ }
+ if (c2h_size) {
+ c2h = dma_alloc_coherent(&adapter->pdev->dev,
+ c2h_size, &c2h_dma_addr, GFP_ATOMIC);
+ if (!c2h) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ }
+
+ psge = (void *)ctl_sp_mpi_req + (karg->data_sge_offset *
+ LEAPRAID_SGE_OFFSET_SIZE);
+ init_completion(&adapter->driver_cmds.ctl_cmd.done);
+
+ switch (ctl_sp_mpi_req->func) {
+ case LEAPRAID_FUNC_SCSIIO_REQ:
+ case LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH:
+ if (test_bit(dev_hdl, (unsigned long *)adapter->dev_topo.dev_removing)) {
+ rc = -EINVAL;
+ goto out;
+ }
+ leapraid_cli_scsiio_cmd(adapter, ctl_sp_mpi_req, taskid,
+ h2c_dma_addr, h2c_size,
+ c2h_dma_addr, c2h_size,
+ dev_hdl, psge);
+ break;
+ case LEAPRAID_FUNC_SMP_PASSTHROUGH:
+ if (!h2c) {
+ rc = -EINVAL;
+ goto out;
+ }
+ leapraid_ctl_smp_passthrough_cmd(adapter,
+ ctl_sp_mpi_req, taskid,
+ h2c_dma_addr, h2c_size,
+ c2h_dma_addr, c2h_size,
+ psge, h2c);
+ break;
+ case LEAPRAID_FUNC_SATA_PASSTHROUGH:
+ if (test_bit(dev_hdl, (unsigned long *)adapter->dev_topo.dev_removing)) {
+ rc = -EINVAL;
+ goto out;
+ }
+ leapraid_ctl_sata_passthrough_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+ break;
+ case LEAPRAID_FUNC_FW_DOWNLOAD:
+ case LEAPRAID_FUNC_FW_UPLOAD:
+ leapraid_ctl_load_fw_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+ break;
+ case LEAPRAID_FUNC_SAS_IO_UNIT_CTRL:
+ leapraid_ctl_sas_io_unit_ctrl_cmd(adapter, ctl_sp_mpi_req,
+ h2c_dma_addr, h2c_size,
+ c2h_dma_addr, c2h_size,
+ psge, taskid);
+ break;
+ default:
+ leapraid_ctl_fire_mpi_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+ break;
+ }
+
+ timeout = karg->timeout;
+ if (timeout < LEAPRAID_CTL_CMD_TIMEOUT)
+ timeout = LEAPRAID_CTL_CMD_TIMEOUT;
+ wait_for_completion_timeout(&adapter->driver_cmds.ctl_cmd.done,
+ timeout * HZ);
+
+ if ((leap_mpi_req->func == LEAPRAID_FUNC_SMP_PASSTHROUGH ||
+ leap_mpi_req->func == LEAPRAID_FUNC_SAS_IO_UNIT_CTRL) &&
+ adapter->reset_desc.adapter_link_resetting) {
+ adapter->reset_desc.adapter_link_resetting = false;
+ }
+ if (!(adapter->driver_cmds.ctl_cmd.status & LEAPRAID_CMD_DONE)) {
+ issue_reset =
+ leapraid_check_reset(
+ adapter->driver_cmds.ctl_cmd.status);
+ goto reset;
+ }
+
+ if (c2h_size) {
+ if (copy_to_user(karg->c2h_buf_ptr, c2h, c2h_size)) {
+ rc = -ENODATA;
+ goto out;
+ }
+ }
+ if (karg->max_rep_bytes) {
+ sz = min_t(u32, karg->max_rep_bytes, LEAPRAID_REPLY_SIEZ);
+ if (copy_to_user(karg->rep_msg_buf_ptr,
+ (void *)&adapter->driver_cmds.ctl_cmd.reply,
+ sz)) {
+ rc = -ENODATA;
+ goto out;
+ }
+ }
+
+ if (karg->max_sense_bytes &&
+ (leap_mpi_req->func == LEAPRAID_FUNC_SCSIIO_REQ ||
+ leap_mpi_req->func == LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH)) {
+ if (!karg->sense_data_ptr)
+ goto out;
+
+ sz = min_t(u32, karg->max_sense_bytes, SCSI_SENSE_BUFFERSIZE);
+ if (copy_to_user(karg->sense_data_ptr,
+ (void *)&adapter->driver_cmds.ctl_cmd.sense,
+ sz)) {
+ rc = -ENODATA;
+ goto out;
+ }
+ }
+reset:
+ if (issue_reset) {
+ rc = -ENODATA;
+ if (leap_mpi_req->func == LEAPRAID_FUNC_SCSIIO_REQ ||
+ leap_mpi_req->func == LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH ||
+ leap_mpi_req->func == LEAPRAID_FUNC_SATA_PASSTHROUGH) {
+ dev_err(&adapter->pdev->dev,
+ "fire tgt reset: hdl=0x%04x\n",
+ le16_to_cpu(leap_mpi_req->func_dep1));
+ leapraid_issue_locked_tm(adapter,
+ le16_to_cpu(leap_mpi_req->func_dep1), 0, 0, 0,
+ LEAPRAID_TM_TASKTYPE_TARGET_RESET, taskid,
+ LEAPRAID_TM_MSGFLAGS_LINK_RESET);
+ } else {
+ dev_info(&adapter->pdev->dev,
+ "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ }
+ }
+out:
+ if (c2h)
+ dma_free_coherent(&adapter->pdev->dev, c2h_size,
+ c2h, c2h_dma_addr);
+ if (h2c)
+ dma_free_coherent(&adapter->pdev->dev, h2c_size,
+ h2c, h2c_dma_addr);
+ kfree(leap_mpi_req);
+ adapter->driver_cmds.ctl_cmd.status = LEAPRAID_CMD_NOT_USED;
+ return rc;
+}
+
+static long leapraid_ctl_get_adapter_info(struct leapraid_adapter *adapter,
+ void __user *arg)
+{
+ struct leapraid_ioctl_adapter_info *karg;
+ ssize_t __maybe_unused ret;
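+	/*
+	 * Each loop iteration fills one host-resident chain segment; the
+	 * final segment needs no further chain element
+	 * (chain_offset_in_cur_seg == 0) and is completed at fill_last_seg.
+	 */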
+ u8 revision;
+
+ karg = kzalloc(sizeof(*karg), GFP_KERNEL);
+ if (!karg)
+ return -ENOMEM;
+
+ pci_read_config_byte(adapter->pdev, PCI_CLASS_REVISION, &revision);
+ karg->revision = revision;
+ karg->pci_id = adapter->pdev->device;
+ karg->sub_dev = adapter->pdev->subsystem_device;
+ karg->sub_vendor = adapter->pdev->subsystem_vendor;
+ karg->pci_info.u.bits.bus = adapter->pdev->bus->number;
+ karg->pci_info.u.bits.dev = PCI_SLOT(adapter->pdev->devfn);
+ karg->pci_info.u.bits.func = PCI_FUNC(adapter->pdev->devfn);
+ karg->pci_info.seg_id = pci_domain_nr(adapter->pdev->bus);
+ karg->fw_ver = adapter->adapter_attr.features.fw_version;
+ ret = strscpy(karg->driver_ver, LEAPRAID_DRIVER_NAME,
+ sizeof(karg->driver_ver));
+ strcat(karg->driver_ver, "-");
+ strcat(karg->driver_ver, LEAPRAID_DRIVER_VERSION);
+ karg->adapter_type = LEAPRAID_IOCTL_VERSION;
+ karg->bios_ver = adapter->adapter_attr.bios_version;
+ if (copy_to_user(arg, karg,
+ sizeof(struct leapraid_ioctl_adapter_info))) {
+ kfree(karg);
+ return -EFAULT;
+ }
+
+ kfree(karg);
+ return 0;
+}
+
+static long leapraid_ctl_ioctl_main(struct file *file, unsigned int cmd,
+ void __user *arg, u8 compat)
+{
+ struct leapraid_ioctl_header ioctl_header;
+ struct leapraid_adapter *adapter;
+ long rc = -ENOIOCTLCMD;
+ int count;
+
+ if (copy_from_user(&ioctl_header, (char __user *)arg,
+ sizeof(struct leapraid_ioctl_header)))
+ return -EFAULT;
+
+ adapter = leapraid_ctl_lookup_adapter(ioctl_header.adapter_id);
+ if (!adapter)
+ return -EFAULT;
+
+ mutex_lock(&adapter->access_ctrl.pci_access_lock);
+
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc)
+ goto out;
+
+ count = LEAPRAID_WAIT_SHOST_RECOVERY;
+ while (count--) {
+ if (!adapter->access_ctrl.shost_recovering)
+ break;
+ ssleep(1);
+ }
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering ||
+ adapter->scan_dev_desc.driver_loading ||
+ adapter->access_ctrl.host_removing) {
+ rc = -EAGAIN;
+ goto out;
+ }
+
+ if (file->f_flags & O_NONBLOCK) {
+ if (!mutex_trylock(&adapter->driver_cmds.ctl_cmd.mutex)) {
+ rc = -EAGAIN;
+ goto out;
+ }
+ } else if (mutex_lock_interruptible(&adapter->driver_cmds.ctl_cmd.mutex)) {
+ rc = -ERESTARTSYS;
+ goto out;
+ }
+
+ switch (_IOC_NR(cmd)) {
+ case LEAPRAID_ADAPTER_INFO:
+ if (_IOC_SIZE(cmd) == sizeof(struct leapraid_ioctl_adapter_info))
+ rc = leapraid_ctl_get_adapter_info(adapter, arg);
+ break;
+ case LEAPRAID_COMMAND:
+ {
+ struct leapraid_ioctl_command __user *uarg;
+ struct leapraid_ioctl_command karg;
+
+ if (copy_from_user(&karg, arg, sizeof(karg))) {
+ rc = -EFAULT;
+ break;
+ }
+
+ if (karg.hdr.adapter_id != ioctl_header.adapter_id) {
+ rc = -EINVAL;
+ break;
+ }
+
+ if (_IOC_SIZE(cmd) == sizeof(struct leapraid_ioctl_command)) {
+ uarg = arg;
+ rc = leapraid_ctl_do_command(adapter, &karg,
+ &uarg->mf);
+ }
+ break;
+ }
+ case LEAPRAID_EVENTQUERY:
+ case LEAPRAID_EVENTREPORT:
+ rc = 0;
+ break;
+ default:
+ pr_err("unknown ioctl opcode=0x%08x\n", cmd);
+ break;
+ }
+ mutex_unlock(&adapter->driver_cmds.ctl_cmd.mutex);
+
+out:
+ mutex_unlock(&adapter->access_ctrl.pci_access_lock);
+ return rc;
+}
+
+static long leapraid_ctl_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ return leapraid_ctl_ioctl_main(file, cmd,
+ (void __user *)arg, 0);
+}
+
+static int leapraid_fw_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ struct leapraid_adapter *adapter;
+ unsigned long length;
+ unsigned long pfn;
+
+ length = vma->vm_end - vma->vm_start;
+
+ adapter = list_first_entry(&leapraid_adapter_list,
+ struct leapraid_adapter, list);
+
+ if (length > (LEAPRAID_SYS_LOG_BUF_SIZE +
+ LEAPRAID_SYS_LOG_BUF_RESERVE)) {
+ dev_err(&adapter->pdev->dev,
+ "requested mapping size is too large!\n");
+ return -EINVAL;
+ }
+
+ if (!adapter->fw_log_desc.fw_log_buffer) {
+ dev_err(&adapter->pdev->dev, "no log buffer!\n");
+ return -EINVAL;
+ }
+
+ pfn = virt_to_phys(adapter->fw_log_desc.fw_log_buffer) >> PAGE_SHIFT;
+
+ if (remap_pfn_range(vma, vma->vm_start, pfn, length,
+ vma->vm_page_prot)) {
+ dev_err(&adapter->pdev->dev,
+ "failed to map memory to user space!\n");
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static const struct file_operations leapraid_ctl_fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = leapraid_ctl_ioctl,
+ .mmap = leapraid_fw_mmap,
+};
+
+static struct miscdevice leapraid_ctl_dev = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = LEAPRAID_DEV_NAME,
+ .fops = &leapraid_ctl_fops,
+};
+
+void leapraid_ctl_init(void)
+{
+ if (misc_register(&leapraid_ctl_dev) < 0)
+ pr_err("%s can't register misc device\n", LEAPRAID_DRIVER_NAME);
+}
+
+void leapraid_ctl_exit(void)
+{
+ misc_deregister(&leapraid_ctl_dev);
+}
diff --git a/drivers/scsi/leapraid/leapraid_func.c b/drivers/scsi/leapraid/leapraid_func.c
new file mode 100644
index 000000000000..c83c30f56805
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid_func.c
@@ -0,0 +1,8264 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+
+#include <linux/module.h>
+
+#include "leapraid_func.h"
+
+static int msix_disable;
+module_param(msix_disable, int, 0444);
+MODULE_PARM_DESC(msix_disable,
+ "disable msix routed interrupts (default=0)");
+
+static int smart_poll;
+module_param(smart_poll, int, 0444);
+MODULE_PARM_DESC(smart_poll,
+		 "check SATA drive health via SMART polling (default=0)");
+
+static int interrupt_mode;
+module_param(interrupt_mode, int, 0444);
+MODULE_PARM_DESC(interrupt_mode,
+		 "intr mode: 0 for MSI-X, 1 for MSI, 2 for legacy (default=0)");
+
+static int max_msix_vectors = -1;
+module_param(max_msix_vectors, int, 0444);
+MODULE_PARM_DESC(max_msix_vectors, " max msix vectors");
+
+static void leapraid_remove_device(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev);
+static void leapraid_set_led(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev, bool on);
+static void leapraid_ublk_io_dev(struct leapraid_adapter *adapter,
+ u64 sas_address,
+ struct leapraid_card_port *port);
+static int leapraid_make_adapter_available(struct leapraid_adapter *adapter);
+static int leapraid_fw_log_init(struct leapraid_adapter *adapter);
+static int leapraid_make_adapter_ready(struct leapraid_adapter *adapter,
+ enum reset_type type);
+
+static inline bool leapraid_is_end_dev(u32 dev_type)
+{
+ return (dev_type & LEAPRAID_DEVTYP_END_DEV) &&
+ ((dev_type & LEAPRAID_DEVTYP_SSP_TGT) ||
+ (dev_type & LEAPRAID_DEVTYP_STP_TGT) ||
+ (dev_type & LEAPRAID_DEVTYP_SATA_DEV));
+}
+
+bool leapraid_pci_removed(struct leapraid_adapter *adapter)
+{
+ struct pci_dev *pdev = adapter->pdev;
+ u32 vendor_id;
+
+ if (pci_bus_read_config_dword(pdev->bus, pdev->devfn, PCI_VENDOR_ID,
+ &vendor_id))
+ return true;
+
+ return ((vendor_id & LEAPRAID_PCI_VENDOR_ID_MASK) !=
+ LEAPRAID_VENDOR_ID);
+}
+
+static bool leapraid_pci_active(struct leapraid_adapter *adapter)
+{
+ return !(adapter->access_ctrl.pcie_recovering ||
+ leapraid_pci_removed(adapter));
+}
+
+void *leapraid_get_reply_vaddr(struct leapraid_adapter *adapter, u32 rep_paddr)
+{
+ if (!rep_paddr)
+ return NULL;
+
+ return adapter->mem_desc.rep_msg +
+ (rep_paddr - (u32)adapter->mem_desc.rep_msg_dma);
+}
+
+void *leapraid_get_task_desc(struct leapraid_adapter *adapter, u16 taskid)
+{
+ return (void *)(adapter->mem_desc.task_desc +
+ (taskid * LEAPRAID_REQUEST_SIZE));
+}
+
+void *leapraid_get_sense_buffer(struct leapraid_adapter *adapter, u16 taskid)
+{
+ return (void *)(adapter->mem_desc.sense_data +
+ ((taskid - 1) * SCSI_SENSE_BUFFERSIZE));
+}
+
+__le32 leapraid_get_sense_buffer_dma(struct leapraid_adapter *adapter,
+ u16 taskid)
+{
+ return cpu_to_le32(adapter->mem_desc.sense_data_dma +
+ ((taskid - 1) * SCSI_SENSE_BUFFERSIZE));
+}
+
+void leapraid_mask_int(struct leapraid_adapter *adapter)
+{
+ u32 reg;
+
+ adapter->mask_int = true;
+ reg = leapraid_readl(&adapter->iomem_base->host_int_mask);
+	reg |= LEAPRAID_TO_SYS_DB_MASK | LEAPRAID_REPLY_INT_MASK |
+	       LEAPRAID_RESET_IRQ_MASK;
+ writel(reg, &adapter->iomem_base->host_int_mask);
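+	/* read back to flush the posted write before continuing */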
+ leapraid_readl(&adapter->iomem_base->host_int_mask);
+}
+
+void leapraid_unmask_int(struct leapraid_adapter *adapter)
+{
+ u32 reg;
+
+ reg = leapraid_readl(&adapter->iomem_base->host_int_mask);
+ reg &= ~LEAPRAID_REPLY_INT_MASK;
+ writel(reg, &adapter->iomem_base->host_int_mask);
+ adapter->mask_int = false;
+}
+
+static void leapraid_flush_io_and_panic(struct leapraid_adapter *adapter)
+{
+ adapter->access_ctrl.adapter_thermal_alert = true;
+ leapraid_smart_polling_stop(adapter);
+ leapraid_fw_log_stop(adapter);
+ leapraid_mq_polling_pause(adapter);
+ leapraid_clean_active_scsi_cmds(adapter);
+}
+
+static void leapraid_check_panic_needed(struct leapraid_adapter *adapter,
+ u32 db, u32 adapter_state)
+{
+ bool fault_1 = adapter_state == LEAPRAID_DB_MASK;
+ bool fault_2 = (adapter_state == LEAPRAID_DB_FAULT) &&
+ ((db & LEAPRAID_DB_DATA_MASK) == LEAPRAID_DB_OVER_TEMPERATURE);
+
+ if (!fault_1 && !fault_2)
+ return;
+
+ if (fault_1)
+ pr_err("%s, doorbell status 0xFFFF!\n", __func__);
+ else
+ pr_err("%s, adapter overheating detected!\n", __func__);
+
+ leapraid_flush_io_and_panic(adapter);
+ panic("%s overheating detected, panic now!!!\n", __func__);
+}
+
+u32 leapraid_get_adapter_state(struct leapraid_adapter *adapter)
+{
+ u32 db;
+ u32 adapter_state;
+
+ db = leapraid_readl(&adapter->iomem_base->db);
+ adapter_state = db & LEAPRAID_DB_MASK;
+ leapraid_check_panic_needed(adapter, db, adapter_state);
+ return adapter_state;
+}
+
+static bool leapraid_wait_adapter_ready(struct leapraid_adapter *adapter)
+{
+ u32 cur_state;
+ u32 cnt = LEAPRAID_ADAPTER_READY_MAX_RETRY;
+
+ do {
+ cur_state = leapraid_get_adapter_state(adapter);
+ if (cur_state == LEAPRAID_DB_READY)
+ return true;
+ if (cur_state == LEAPRAID_DB_FAULT)
+ break;
+ usleep_range(LEAPRAID_ADAPTER_READY_SLEEP_MIN_US,
+ LEAPRAID_ADAPTER_READY_SLEEP_MAX_US);
+ } while (--cnt);
+
+ return false;
+}
+
+static int leapraid_db_wait_int_host(struct leapraid_adapter *adapter)
+{
+ u32 cnt = LEAPRAID_DB_WAIT_MAX_RETRY;
+
+ do {
+ if (leapraid_readl(&adapter->iomem_base->host_int_status) &
+ LEAPRAID_ADAPTER2HOST_DB_STATUS)
+ return 0;
+ udelay(LEAPRAID_DB_WAIT_DELAY_US);
+ } while (--cnt);
+
+ return -EFAULT;
+}
+
+static int leapraid_db_wait_ack_and_clear_int(struct leapraid_adapter *adapter)
+{
+ u32 adapter_state;
+ u32 int_status;
+ u32 cnt;
+
+ cnt = LEAPRAID_ADAPTER_READY_MAX_RETRY;
+ do {
+ int_status =
+ leapraid_readl(&adapter->iomem_base->host_int_status);
+ if (!(int_status & LEAPRAID_HOST2ADAPTER_DB_STATUS)) {
+ return 0;
+ } else if (int_status & LEAPRAID_ADAPTER2HOST_DB_STATUS) {
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (adapter_state == LEAPRAID_DB_FAULT)
+ return -EFAULT;
+ } else if (int_status == 0xFFFFFFFF) {
+ goto out;
+ }
+
+ usleep_range(LEAPRAID_ADAPTER_READY_SLEEP_MIN_US,
+ LEAPRAID_ADAPTER_READY_SLEEP_MAX_US);
+ } while (--cnt);
+
+out:
+ return -EFAULT;
+}
+
+static int leapraid_handshake_func(struct leapraid_adapter *adapter,
+ int req_bytes, u32 *req,
+ int rep_bytes, u16 *rep)
+{
+ int failed, i;
+
+ if ((leapraid_readl(&adapter->iomem_base->db) &
+ LEAPRAID_DB_USED)) {
+ dev_err(&adapter->pdev->dev, "doorbell used\n");
+ return -EFAULT;
+ }
+
+ if (leapraid_readl(&adapter->iomem_base->host_int_status) &
+ LEAPRAID_ADAPTER2HOST_DB_STATUS)
+ writel(0, &adapter->iomem_base->host_int_status);
+
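+	/*
+	 * Handshake protocol: announce the request length via the doorbell,
+	 * then write the request one dword at a time (each write acked by
+	 * the adapter) and read the reply back 16 bits per doorbell read.
+	 */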
+ writel(((LEAPRAID_FUNC_HANDSHAKE << LEAPRAID_DB_FUNC_SHIFT) |
+ ((req_bytes / LEAPRAID_DWORDS_BYTE_SIZE) <<
+ LEAPRAID_DB_ADD_DWORDS_SHIFT)),
+ &adapter->iomem_base->db);
+
+ if (leapraid_db_wait_int_host(adapter)) {
+ dev_err(&adapter->pdev->dev, "%d:wait db interrupt timeout\n",
+ __LINE__);
+ return -EFAULT;
+ }
+
+ writel(0, &adapter->iomem_base->host_int_status);
+
+ if (leapraid_db_wait_ack_and_clear_int(adapter)) {
+ dev_err(&adapter->pdev->dev, "%d:wait ack failure\n",
+ __LINE__);
+ return -EFAULT;
+ }
+
+ for (i = 0, failed = 0;
+ i < req_bytes / LEAPRAID_DWORDS_BYTE_SIZE && !failed;
+ i++) {
+ writel((u32)(req[i]), &adapter->iomem_base->db);
+ if (leapraid_db_wait_ack_and_clear_int(adapter))
+ failed = 1;
+ }
+ if (failed) {
+ dev_err(&adapter->pdev->dev, "%d:wait ack failure\n",
+ __LINE__);
+ return -EFAULT;
+ }
+
+ for (i = 0; i < rep_bytes / LEAPRAID_WORD_BYTE_SIZE; i++) {
+ if (leapraid_db_wait_int_host(adapter)) {
+ dev_err(&adapter->pdev->dev,
+ "%d:wait db interrupt timeout\n", __LINE__);
+ return -EFAULT;
+ }
+ rep[i] = (u16)(leapraid_readl(&adapter->iomem_base->db)
+ & LEAPRAID_DB_DATA_MASK);
+ writel(0, &adapter->iomem_base->host_int_status);
+ }
+
+ if (leapraid_db_wait_int_host(adapter)) {
+ dev_err(&adapter->pdev->dev, "%d:wait db interrupt timeout\n",
+ __LINE__);
+ return -EFAULT;
+ }
+
+ writel(0, &adapter->iomem_base->host_int_status);
+
+ return 0;
+}
+
+int leapraid_check_adapter_is_op(struct leapraid_adapter *adapter)
+{
+ int wait_count = LEAPRAID_DB_WAIT_OPERATIONAL;
+
+ do {
+ if (leapraid_pci_removed(adapter))
+ return -EFAULT;
+
+ if (leapraid_get_adapter_state(adapter) ==
+ LEAPRAID_DB_OPERATIONAL)
+ return 0;
+
+ dev_info(&adapter->pdev->dev,
+ "waiting for adapter to become op status(cnt=%d)\n",
+ LEAPRAID_DB_WAIT_OPERATIONAL - wait_count);
+
+ ssleep(1);
+ } while (--wait_count);
+
+ dev_err(&adapter->pdev->dev,
+ "adapter failed to become op state, last state=%d\n",
+ leapraid_get_adapter_state(adapter));
+
+ return -EFAULT;
+}
+
+struct leapraid_io_req_tracker *leapraid_get_io_tracker_from_taskid(
+ struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct scsi_cmnd *scmd;
+
+ if (WARN_ON(!taskid))
+ return NULL;
+
+ if (WARN_ON(taskid > adapter->shost->can_queue))
+ return NULL;
+
+ scmd = leapraid_get_scmd_from_taskid(adapter, taskid);
+ if (scmd)
+ return leapraid_get_scmd_priv(scmd);
+
+ return NULL;
+}
+
+static u8 leapraid_get_cb_idx(struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct leapraid_driver_cmd *sp_cmd;
+ u8 cb_idx = 0xFF;
+
+ if (WARN_ON(!taskid))
+ return cb_idx;
+
+ list_for_each_entry(sp_cmd, &adapter->driver_cmds.special_cmd_list,
+ list)
+ if (taskid == sp_cmd->taskid ||
+ taskid == sp_cmd->hp_taskid ||
+ taskid == sp_cmd->inter_taskid)
+ return sp_cmd->cb_idx;
+
+ WARN_ON(cb_idx == 0xFF);
+ return cb_idx;
+}
+
+struct scsi_cmnd *leapraid_get_scmd_from_taskid(
+ struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct leapraid_scsiio_req *leap_mpi_req;
+ struct leapraid_io_req_tracker *st;
+ struct scsi_cmnd *scmd;
+ u32 uniq_tag;
+
+ if (taskid <= 0 || taskid > adapter->shost->can_queue)
+ return NULL;
+
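+	/* taskids are block layer tags offset by one */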
+ uniq_tag = taskid - 1;
+ leap_mpi_req = leapraid_get_task_desc(adapter, taskid);
+ if (!leap_mpi_req->dev_hdl)
+ return NULL;
+
+ scmd = scsi_host_find_tag(adapter->shost, uniq_tag);
+ if (scmd) {
+ st = leapraid_get_scmd_priv(scmd);
+ if (st && st->taskid == taskid)
+ return scmd;
+ }
+
+ return NULL;
+}
+
+u16 leapraid_alloc_scsiio_taskid(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd)
+{
+ struct leapraid_io_req_tracker *request;
+ u16 taskid;
+ u32 tag = scmd->request->tag;
+
+ scmd->host_scribble =
+ (unsigned char *)(&adapter->mem_desc.io_tracker[tag]);
+ request = leapraid_get_scmd_priv(scmd);
+ taskid = tag + 1;
+ request->taskid = taskid;
+ request->scmd = scmd;
+ return taskid;
+}
+
+static void leapraid_check_pending_io(struct leapraid_adapter *adapter)
+{
+ if (adapter->access_ctrl.shost_recovering &&
+ adapter->reset_desc.pending_io_cnt) {
+ if (adapter->reset_desc.pending_io_cnt == 1)
+ wake_up(&adapter->reset_desc.reset_wait_queue);
+ adapter->reset_desc.pending_io_cnt--;
+ }
+}
+
+static void leapraid_clear_io_tracker(struct leapraid_adapter *adapter,
+ struct leapraid_io_req_tracker *io_tracker)
+{
+ if (!io_tracker)
+ return;
+
+ if (WARN_ON(io_tracker->taskid == 0))
+ return;
+
+ io_tracker->scmd = NULL;
+}
+
+static bool leapraid_is_fixed_taskid(struct leapraid_adapter *adapter,
+ u16 taskid)
+{
+ return (taskid == adapter->driver_cmds.ctl_cmd.taskid ||
+ taskid == adapter->driver_cmds.driver_scsiio_cmd.taskid ||
+ taskid == adapter->driver_cmds.tm_cmd.hp_taskid ||
+ taskid == adapter->driver_cmds.ctl_cmd.hp_taskid ||
+ taskid == adapter->driver_cmds.scan_dev_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.timestamp_sync_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.raid_action_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.transport_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.cfg_op_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.enc_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.notify_event_cmd.inter_taskid);
+}
+
+void leapraid_free_taskid(struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct leapraid_io_req_tracker *io_tracker;
+ void *task_desc;
+
+ if (leapraid_is_fixed_taskid(adapter, taskid))
+ return;
+
+ if (taskid <= adapter->shost->can_queue) {
+ io_tracker = leapraid_get_io_tracker_from_taskid(adapter,
+ taskid);
+ if (!io_tracker) {
+ leapraid_check_pending_io(adapter);
+ return;
+ }
+
+ task_desc = leapraid_get_task_desc(adapter, taskid);
+ memset(task_desc, 0, LEAPRAID_REQUEST_SIZE);
+ leapraid_clear_io_tracker(adapter, io_tracker);
+ leapraid_check_pending_io(adapter);
+ }
+}
+
+static u8 leapraid_get_msix_idx(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd)
+{
+ return adapter->notification_desc.msix_cpu_map[raw_smp_processor_id()];
+}
+
+static u8 leapraid_get_and_set_msix_idx_from_taskid(
+ struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct leapraid_io_req_tracker *io_tracker = NULL;
+
+ if (taskid <= adapter->shost->can_queue)
+ io_tracker = leapraid_get_io_tracker_from_taskid(adapter,
+ taskid);
+
+ if (!io_tracker)
+ return leapraid_get_msix_idx(adapter, NULL);
+
+ io_tracker->msix_io = leapraid_get_msix_idx(adapter, io_tracker->scmd);
+
+ return io_tracker->msix_io;
+}
+
+void leapraid_fire_scsi_io(struct leapraid_adapter *adapter, u16 taskid,
+ u16 handle)
+{
+ struct leapraid_atomic_req_desc desc;
+
+ desc.flg = LEAPRAID_REQ_DESC_FLG_SCSI_IO;
+ desc.msix_idx = leapraid_get_and_set_msix_idx_from_taskid(adapter,
+ taskid);
+ desc.taskid = cpu_to_le16(taskid);
+ writel((__force u32)cpu_to_le32(*((u32 *)&desc)),
+ &adapter->iomem_base->atomic_req_desc_post);
+}
+
+void leapraid_fire_hpr_task(struct leapraid_adapter *adapter, u16 taskid,
+ u16 msix_task)
+{
+ struct leapraid_atomic_req_desc desc;
+
+ desc.flg = LEAPRAID_REQ_DESC_FLG_HPR;
+ desc.msix_idx = msix_task;
+ desc.taskid = cpu_to_le16(taskid);
+ writel((__force u32)cpu_to_le32(*((u32 *)&desc)),
+ &adapter->iomem_base->atomic_req_desc_post);
+}
+
+void leapraid_fire_task(struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct leapraid_atomic_req_desc desc;
+
+ desc.flg = LEAPRAID_REQ_DESC_FLG_DFLT_TYPE;
+ desc.msix_idx = leapraid_get_and_set_msix_idx_from_taskid(adapter,
+ taskid);
+ desc.taskid = cpu_to_le16(taskid);
+ writel((__force u32)cpu_to_le32(*((u32 *)&desc)),
+ &adapter->iomem_base->atomic_req_desc_post);
+}
+
+void leapraid_clean_active_scsi_cmds(struct leapraid_adapter *adapter)
+{
+ struct leapraid_io_req_tracker *io_tracker;
+ struct scsi_cmnd *scmd;
+ u16 taskid;
+
+ for (taskid = 1; taskid <= adapter->shost->can_queue; taskid++) {
+ scmd = leapraid_get_scmd_from_taskid(adapter, taskid);
+ if (!scmd)
+ continue;
+
+ io_tracker = leapraid_get_scmd_priv(scmd);
+ if (io_tracker && io_tracker->taskid == 0)
+ continue;
+
+ scsi_dma_unmap(scmd);
+ leapraid_clear_io_tracker(adapter, io_tracker);
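+		/*
+		 * Commands cannot be retried once the device or adapter is
+		 * gone; otherwise let the midlayer retry them after reset.
+		 */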
+ if (!leapraid_pci_active(adapter) ||
+ adapter->reset_desc.adapter_reset_results != 0 ||
+ adapter->access_ctrl.adapter_thermal_alert ||
+ adapter->access_ctrl.host_removing)
+			scmd->result =
+				DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+ else
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ scmd->scsi_done(scmd);
+ }
+}
+
+static void leapraid_clean_active_driver_cmd(
+ struct leapraid_driver_cmd *driver_cmd)
+{
+ if (driver_cmd->status & LEAPRAID_CMD_PENDING) {
+ driver_cmd->status |= LEAPRAID_CMD_RESET;
+ complete(&driver_cmd->done);
+ }
+}
+
+static void leapraid_clean_active_driver_cmds(struct leapraid_adapter *adapter)
+{
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.timestamp_sync_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.raid_action_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.driver_scsiio_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.tm_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.transport_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.enc_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.notify_event_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.cfg_op_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.ctl_cmd);
+
+ if (adapter->driver_cmds.scan_dev_cmd.status & LEAPRAID_CMD_PENDING) {
+ adapter->scan_dev_desc.scan_dev_failed = true;
+ adapter->driver_cmds.scan_dev_cmd.status |= LEAPRAID_CMD_RESET;
+ if (adapter->scan_dev_desc.driver_loading) {
+ adapter->scan_dev_desc.scan_start_failed =
+ LEAPRAID_ADAPTER_STATUS_INTERNAL_ERROR;
+ adapter->scan_dev_desc.scan_start = false;
+ } else {
+ complete(&adapter->driver_cmds.scan_dev_cmd.done);
+ }
+ }
+}
+
+static void leapraid_clean_active_cmds(struct leapraid_adapter *adapter)
+{
+ leapraid_clean_active_driver_cmds(adapter);
+ memset(adapter->dev_topo.pending_dev_add, 0,
+ adapter->dev_topo.pending_dev_add_sz);
+ memset(adapter->dev_topo.dev_removing, 0,
+ adapter->dev_topo.dev_removing_sz);
+ leapraid_clean_active_fw_evt(adapter);
+ leapraid_clean_active_scsi_cmds(adapter);
+}
+
+static void leapraid_tgt_not_responding(struct leapraid_adapter *adapter,
+ u16 hdl)
+{
+ struct leapraid_starget_priv *starget_priv = NULL;
+ struct leapraid_sas_dev *sas_dev = NULL;
+ unsigned long flags = 0;
+ u32 adapter_state = 0;
+
+ if (adapter->access_ctrl.pcie_recovering)
+ return;
+
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (adapter_state != LEAPRAID_DB_OPERATIONAL)
+ return;
+
+ if (test_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls))
+ return;
+
+ clear_bit(hdl, (unsigned long *)adapter->dev_topo.pending_dev_add);
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (sas_dev && sas_dev->starget && sas_dev->starget->hostdata) {
+ starget_priv = sas_dev->starget->hostdata;
+ starget_priv->deleted = true;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (starget_priv)
+ starget_priv->hdl = LEAPRAID_INVALID_DEV_HANDLE;
+
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+}
+
+static void leapraid_tgt_rst_send(struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_starget_priv *starget_priv = NULL;
+ struct leapraid_sas_dev *sas_dev = NULL;
+ struct leapraid_card_port *port = NULL;
+ u64 sas_address = 0;
+ unsigned long flags;
+ u32 adapter_state;
+
+ if (adapter->access_ctrl.pcie_recovering)
+ return;
+
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (adapter_state != LEAPRAID_DB_OPERATIONAL)
+ return;
+
+ if (test_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls))
+ return;
+
+ clear_bit(hdl, (unsigned long *)adapter->dev_topo.pending_dev_add);
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (sas_dev && sas_dev->starget && sas_dev->starget->hostdata) {
+ starget_priv = sas_dev->starget->hostdata;
+ starget_priv->deleted = true;
+ sas_address = sas_dev->sas_addr;
+ port = sas_dev->card_port;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (starget_priv) {
+ leapraid_ublk_io_dev(adapter, sas_address, port);
+ starget_priv->hdl = LEAPRAID_INVALID_DEV_HANDLE;
+ }
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+}
+
+static inline void leapraid_single_mpi_sg_append(struct leapraid_adapter *adapter,
+ void *sge, u32 flag_and_len,
+ dma_addr_t dma_addr)
+{
+ if (adapter->adapter_attr.use_32_dma_mask) {
+ ((struct leapraid_sge_simple32 *)sge)->flg_and_len =
+ cpu_to_le32(flag_and_len |
+ (LEAPRAID_SGE_FLG_32 |
+ LEAPRAID_SGE_FLG_SYSTEM_ADDR) <<
+ LEAPRAID_SGE_FLG_SHIFT);
+ ((struct leapraid_sge_simple32 *)sge)->addr =
+ cpu_to_le32(dma_addr);
+ } else {
+ ((struct leapraid_sge_simple64 *)sge)->flg_and_len =
+ cpu_to_le32(flag_and_len |
+ (LEAPRAID_SGE_FLG_64 |
+ LEAPRAID_SGE_FLG_SYSTEM_ADDR) <<
+ LEAPRAID_SGE_FLG_SHIFT);
+ ((struct leapraid_sge_simple64 *)sge)->addr =
+ cpu_to_le64(dma_addr);
+ }
+}
+
+static inline void leapraid_single_ieee_sg_append(void *sge, u8 flag,
+ u8 next_chain_offset,
+ u32 len,
+ dma_addr_t dma_addr)
+{
+ ((struct leapraid_chain64_ieee_sg *)sge)->flg = flag;
+ ((struct leapraid_chain64_ieee_sg *)sge)->next_chain_offset =
+ next_chain_offset;
+ ((struct leapraid_chain64_ieee_sg *)sge)->len = cpu_to_le32(len);
+ ((struct leapraid_chain64_ieee_sg *)sge)->addr = cpu_to_le64(dma_addr);
+}
+
+static void leapraid_build_nodata_mpi_sg(struct leapraid_adapter *adapter,
+ void *sge)
+{
+ leapraid_single_mpi_sg_append(adapter,
+ sge,
+ (u32)((LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL |
+ LEAPRAID_SGE_FLG_SIMPLE_ONE) <<
+ LEAPRAID_SGE_FLG_SHIFT),
+ -1);
+}
+
+void leapraid_build_mpi_sg(struct leapraid_adapter *adapter, void *sge,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size)
+{
+ if (h2c_size && !c2h_size) {
+ leapraid_single_mpi_sg_append(adapter,
+ sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL |
+ LEAPRAID_SGE_FLG_H2C) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ h2c_size,
+ h2c_dma_addr);
+ } else if (!h2c_size && c2h_size) {
+ leapraid_single_mpi_sg_append(adapter,
+ sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ c2h_size,
+ c2h_dma_addr);
+ } else if (h2c_size && c2h_size) {
+ leapraid_single_mpi_sg_append(adapter,
+ sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_H2C) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ h2c_size,
+ h2c_dma_addr);
+ if (adapter->adapter_attr.use_32_dma_mask)
+ sge += sizeof(struct leapraid_sge_simple32);
+ else
+ sge += sizeof(struct leapraid_sge_simple64);
+ leapraid_single_mpi_sg_append(adapter,
+ sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ c2h_size,
+ c2h_dma_addr);
+ } else {
+ return leapraid_build_nodata_mpi_sg(adapter, sge);
+ }
+}
+
+void leapraid_build_ieee_nodata_sg(struct leapraid_adapter *adapter, void *sge)
+{
+ leapraid_single_ieee_sg_append(sge,
+ (LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR |
+ LEAPRAID_IEEE_SGE_FLG_EOL),
+ 0, 0, -1);
+}
+
+int leapraid_build_scmd_ieee_sg(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd, u16 taskid)
+{
+ struct leapraid_scsiio_req *scsiio_req;
+ struct leapraid_io_req_tracker *io_tracker;
+ struct scatterlist *scmd_sg_cur;
+ int sg_entries_left;
+ void *sg_entry_cur;
+ void *host_chain;
+ dma_addr_t host_chain_dma;
+ u8 host_chain_cursor;
+ u32 sg_entries_in_cur_seg;
+ u32 chain_offset_in_cur_seg;
+ u32 chain_len_in_cur_seg;
+
+ io_tracker = leapraid_get_scmd_priv(scmd);
+ scsiio_req = leapraid_get_task_desc(adapter, taskid);
+ scmd_sg_cur = scsi_sglist(scmd);
+ sg_entries_left = scsi_dma_map(scmd);
+ if (sg_entries_left < 0)
+ return -ENOMEM;
+ sg_entry_cur = &scsiio_req->sgl;
+ if (sg_entries_left <= LEAPRAID_SGL_INLINE_THRESHOLD)
+ goto fill_last_seg;
+
+ scsiio_req->chain_offset = LEAPRAID_CHAIN_OFFSET_DWORDS;
+ leapraid_single_ieee_sg_append(sg_entry_cur,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ 0, sg_dma_len(scmd_sg_cur),
+ sg_dma_address(scmd_sg_cur));
+ scmd_sg_cur = sg_next(scmd_sg_cur);
+ sg_entry_cur += LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+ sg_entries_left--;
+
+ host_chain_cursor = 0;
+ host_chain = io_tracker->chain +
+ host_chain_cursor * LEAPRAID_CHAIN_SEG_SIZE;
+ host_chain_dma = io_tracker->chain_dma +
+ host_chain_cursor * LEAPRAID_CHAIN_SEG_SIZE;
+ host_chain_cursor += 1;
+ for (;;) {
+ sg_entries_in_cur_seg =
+ (sg_entries_left <= LEAPRAID_MAX_SGES_IN_CHAIN) ?
+ sg_entries_left : LEAPRAID_MAX_SGES_IN_CHAIN;
+ chain_offset_in_cur_seg =
+ (sg_entries_left == (int)sg_entries_in_cur_seg) ?
+ 0 : sg_entries_in_cur_seg;
+ chain_len_in_cur_seg = sg_entries_in_cur_seg *
+ LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+ if (chain_offset_in_cur_seg)
+ chain_len_in_cur_seg += LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+
+ leapraid_single_ieee_sg_append(sg_entry_cur,
+ LEAPRAID_IEEE_SGE_FLG_CHAIN_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ chain_offset_in_cur_seg, chain_len_in_cur_seg,
+ host_chain_dma);
+ sg_entry_cur = host_chain;
+ if (!chain_offset_in_cur_seg)
+ goto fill_last_seg;
+
+ while (sg_entries_in_cur_seg) {
+ leapraid_single_ieee_sg_append(sg_entry_cur,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ 0, sg_dma_len(scmd_sg_cur),
+ sg_dma_address(scmd_sg_cur));
+ scmd_sg_cur = sg_next(scmd_sg_cur);
+ sg_entry_cur += LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+ sg_entries_left--;
+ sg_entries_in_cur_seg--;
+ }
+ host_chain = io_tracker->chain +
+ host_chain_cursor * LEAPRAID_CHAIN_SEG_SIZE;
+ host_chain_dma = io_tracker->chain_dma +
+ host_chain_cursor * LEAPRAID_CHAIN_SEG_SIZE;
+ host_chain_cursor += 1;
+ }
+
+fill_last_seg:
+ while (sg_entries_left > 0) {
+ u32 flags = LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR;
+ if (sg_entries_left == 1)
+ flags |= LEAPRAID_IEEE_SGE_FLG_EOL;
+ leapraid_single_ieee_sg_append(sg_entry_cur, flags,
+ 0, sg_dma_len(scmd_sg_cur),
+ sg_dma_address(scmd_sg_cur));
+ scmd_sg_cur = sg_next(scmd_sg_cur);
+ sg_entry_cur += LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+ sg_entries_left--;
+ }
+ return 0;
+}
+
+void leapraid_build_ieee_sg(struct leapraid_adapter *adapter, void *sge,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size)
+{
+ if (h2c_size && !c2h_size) {
+ leapraid_single_ieee_sg_append(sge,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_EOL |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ 0,
+ h2c_size,
+ h2c_dma_addr);
+ } else if (!h2c_size && c2h_size) {
+ leapraid_single_ieee_sg_append(sge,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_EOL |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ 0,
+ c2h_size,
+ c2h_dma_addr);
+ } else if (h2c_size && c2h_size) {
+ leapraid_single_ieee_sg_append(sge,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ 0,
+ h2c_size,
+ h2c_dma_addr);
+ sge += LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+ leapraid_single_ieee_sg_append(sge,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR |
+ LEAPRAID_IEEE_SGE_FLG_EOL,
+ 0,
+ c2h_size,
+ c2h_dma_addr);
+ } else {
+ return leapraid_build_ieee_nodata_sg(adapter, sge);
+ }
+}
+
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_from_tgt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_starget_priv *tgt_priv)
+{
+ assert_spin_locked(&adapter->dev_topo.sas_dev_lock);
+ if (tgt_priv->sas_dev)
+ leapraid_sdev_get(tgt_priv->sas_dev);
+
+ return tgt_priv->sas_dev;
+}
+
+struct leapraid_sas_dev *leapraid_get_sas_dev_from_tgt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_starget_priv *tgt_priv)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_from_tgt(adapter, tgt_priv);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return sas_dev;
+}
+
+static struct leapraid_card_port *leapraid_get_port_by_id(
+ struct leapraid_adapter *adapter,
+ u8 port_id, bool skip_dirty)
+{
+ struct leapraid_card_port *port;
+ struct leapraid_card_port *dirty_port = NULL;
+
+ if (!adapter->adapter_attr.enable_mp)
+ port_id = LEAPRAID_DISABLE_MP_PORT_ID;
+
+ list_for_each_entry(port, &adapter->dev_topo.card_port_list, list) {
+ if (port->port_id != port_id)
+ continue;
+
+ if (!(port->flg & LEAPRAID_CARD_PORT_FLG_DIRTY))
+ return port;
+
+ if (skip_dirty && !dirty_port)
+ dirty_port = port;
+ }
+
+ if (dirty_port)
+ return dirty_port;
+
+ if (unlikely(!adapter->adapter_attr.enable_mp)) {
+ port = kzalloc(sizeof(*port), GFP_ATOMIC);
+ if (!port)
+ return NULL;
+
+ port->port_id = LEAPRAID_DISABLE_MP_PORT_ID;
+ list_add_tail(&port->list, &adapter->dev_topo.card_port_list);
+ return port;
+ }
+
+ return NULL;
+}
+
+struct leapraid_vphy *leapraid_get_vphy_by_phy(struct leapraid_card_port *port,
+ u32 phy_seq_num)
+{
+ struct leapraid_vphy *vphy;
+
+ if (!port || !port->vphys_mask)
+ return NULL;
+
+ list_for_each_entry(vphy, &port->vphys_list, list) {
+ if (vphy->phy_mask & BIT(phy_seq_num))
+ return vphy;
+ }
+
+ return NULL;
+}
+
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_addr_and_rphy(
+ struct leapraid_adapter *adapter, u64 sas_address,
+ struct sas_rphy *rphy)
+{
+ struct leapraid_sas_dev *sas_dev;
+
+ assert_spin_locked(&adapter->dev_topo.sas_dev_lock);
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_list, list)
+ if (sas_dev->sas_addr == sas_address &&
+ sas_dev->rphy == rphy) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_init_list,
+ list)
+ if (sas_dev->sas_addr == sas_address &&
+ sas_dev->rphy == rphy) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ return NULL;
+}
+
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_addr(
+ struct leapraid_adapter *adapter,
+ u64 sas_address, struct leapraid_card_port *port)
+{
+ struct leapraid_sas_dev *sas_dev;
+
+ if (!port)
+ return NULL;
+
+ assert_spin_locked(&adapter->dev_topo.sas_dev_lock);
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_list, list)
+ if (sas_dev->sas_addr == sas_address &&
+ sas_dev->card_port == port) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_init_list,
+ list)
+ if (sas_dev->sas_addr == sas_address &&
+ sas_dev->card_port == port) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ return NULL;
+}
+
+struct leapraid_sas_dev *leapraid_get_sas_dev_by_addr(
+ struct leapraid_adapter *adapter,
+ u64 sas_address, struct leapraid_card_port *port)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+
+ if (!port)
+ return NULL;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter, sas_address,
+ port);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return sas_dev;
+}
+
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_hdl(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_sas_dev *sas_dev;
+
+ assert_spin_locked(&adapter->dev_topo.sas_dev_lock);
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_list, list)
+ if (sas_dev->hdl == hdl) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_init_list,
+ list)
+ if (sas_dev->hdl == hdl) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ return NULL;
+}
+
+struct leapraid_sas_dev *leapraid_get_sas_dev_by_hdl(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return sas_dev;
+}
+
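+/*
+ * Unlink the SAS device from whichever device list it is on and drop the
+ * reference that the list held on it.
+ */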
+void leapraid_sas_dev_remove(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev)
+{
+ unsigned long flags;
+ bool del_from_list;
+
+ if (!sas_dev)
+ return;
+
+ del_from_list = false;
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ if (!list_empty(&sas_dev->list)) {
+ list_del_init(&sas_dev->list);
+ del_from_list = true;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (del_from_list)
+ leapraid_sdev_put(sas_dev);
+}
+
+static void leapraid_sas_dev_remove_by_hdl(struct leapraid_adapter *adapter,
+ u16 hdl)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ bool del_from_list;
+
+ if (adapter->access_ctrl.shost_recovering)
+ return;
+
+ del_from_list = false;
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (sas_dev) {
+ if (!list_empty(&sas_dev->list)) {
+ list_del_init(&sas_dev->list);
+ del_from_list = true;
+ leapraid_sdev_put(sas_dev);
+ }
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (del_from_list) {
+ leapraid_remove_device(adapter, sas_dev);
+ leapraid_sdev_put(sas_dev);
+ }
+}
+
+void leapraid_sas_dev_remove_by_sas_address(struct leapraid_adapter *adapter,
+ u64 sas_address,
+ struct leapraid_card_port *port)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ bool del_from_list;
+
+ if (adapter->access_ctrl.shost_recovering)
+ return;
+
+ del_from_list = false;
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter, sas_address,
+ port);
+ if (sas_dev) {
+ if (!list_empty(&sas_dev->list)) {
+ list_del_init(&sas_dev->list);
+ del_from_list = true;
+ leapraid_sdev_put(sas_dev);
+ }
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (del_from_list) {
+ leapraid_remove_device(adapter, sas_dev);
+ leapraid_sdev_put(sas_dev);
+ }
+}
+
+struct leapraid_raid_volume *leapraid_raid_volume_find_by_id(
+ struct leapraid_adapter *adapter, uint id, uint channel)
+{
+ struct leapraid_raid_volume *raid_volume;
+
+ list_for_each_entry(raid_volume, &adapter->dev_topo.raid_volume_list,
+ list) {
+ if (raid_volume->id == id &&
+ raid_volume->channel == channel) {
+ return raid_volume;
+ }
+ }
+
+ return NULL;
+}
+
+struct leapraid_raid_volume *leapraid_raid_volume_find_by_hdl(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_raid_volume *raid_volume;
+
+ list_for_each_entry(raid_volume, &adapter->dev_topo.raid_volume_list,
+ list) {
+ if (raid_volume->hdl == hdl)
+ return raid_volume;
+ }
+
+ return NULL;
+}
+
+static struct leapraid_raid_volume *leapraid_raid_volume_find_by_wwid(
+ struct leapraid_adapter *adapter, u64 wwid)
+{
+ struct leapraid_raid_volume *raid_volume;
+
+ list_for_each_entry(raid_volume, &adapter->dev_topo.raid_volume_list,
+ list) {
+ if (raid_volume->wwid == wwid)
+ return raid_volume;
+ }
+
+ return NULL;
+}
+
+static void leapraid_raid_volume_add(struct leapraid_adapter *adapter,
+ struct leapraid_raid_volume *raid_volume)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ list_add_tail(&raid_volume->list, &adapter->dev_topo.raid_volume_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+}
+
+void leapraid_raid_volume_remove(struct leapraid_adapter *adapter,
+ struct leapraid_raid_volume *raid_volume)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ list_del(&raid_volume->list);
+ kfree(raid_volume);
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+}
+
+static struct leapraid_enc_node *leapraid_enc_find_by_hdl(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_enc_node *enc_dev;
+
+ list_for_each_entry(enc_dev, &adapter->dev_topo.enc_list, list) {
+ if (le16_to_cpu(enc_dev->pg0.enc_hdl) == hdl)
+ return enc_dev;
+ }
+
+ return NULL;
+}
+
+struct leapraid_topo_node *leapraid_exp_find_by_sas_address(
+ struct leapraid_adapter *adapter,
+ u64 sas_address, struct leapraid_card_port *port)
+{
+ struct leapraid_topo_node *sas_exp;
+
+ if (!port)
+ return NULL;
+
+ list_for_each_entry(sas_exp, &adapter->dev_topo.exp_list, list) {
+ if (sas_exp->sas_address == sas_address &&
+ sas_exp->card_port == port)
+ return sas_exp;
+ }
+
+ return NULL;
+}
+
+bool leapraid_scmd_find_by_tgt(struct leapraid_adapter *adapter, uint id,
+ uint channel)
+{
+ struct scsi_cmnd *scmd;
+ int taskid;
+
+ for (taskid = 1; taskid <= adapter->shost->can_queue; taskid++) {
+ scmd = leapraid_get_scmd_from_taskid(adapter, taskid);
+ if (!scmd)
+ continue;
+
+ if (scmd->device->id == id && scmd->device->channel == channel)
+ return true;
+ }
+
+ return false;
+}
+
+bool leapraid_scmd_find_by_lun(struct leapraid_adapter *adapter, uint id,
+ unsigned int lun, uint channel)
+{
+ struct scsi_cmnd *scmd;
+ int taskid;
+
+ for (taskid = 1; taskid <= adapter->shost->can_queue; taskid++) {
+ scmd = leapraid_get_scmd_from_taskid(adapter, taskid);
+ if (!scmd)
+ continue;
+
+ if (scmd->device->id == id &&
+ scmd->device->channel == channel &&
+ scmd->device->lun == lun)
+ return true;
+ }
+
+ return false;
+}
+
+static struct leapraid_topo_node *leapraid_exp_find_by_hdl(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_topo_node *sas_exp;
+
+ list_for_each_entry(sas_exp, &adapter->dev_topo.exp_list, list) {
+ if (sas_exp->hdl == hdl)
+ return sas_exp;
+ }
+
+ return NULL;
+}
+
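+/*
+ * First pass of card-port matching: record how the discovered port compares
+ * with an existing port entry. Entries that are not dirty or do not share
+ * the SAS address are skipped; the rest need the detailed classification in
+ * leapraid_process_card_port_feature().
+ */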
+static enum leapraid_card_port_checking_flg leapraid_get_card_port_feature(
+ struct leapraid_card_port *old_card_port,
+ struct leapraid_card_port *card_port,
+ struct leapraid_card_port_feature *feature)
+{
+ feature->dirty_flg =
+ old_card_port->flg & LEAPRAID_CARD_PORT_FLG_DIRTY;
+ feature->same_addr =
+ old_card_port->sas_address == card_port->sas_address;
+ feature->exact_phy =
+ old_card_port->phy_mask == card_port->phy_mask;
+ feature->phy_overlap =
+ old_card_port->phy_mask & card_port->phy_mask;
+ feature->same_port =
+ old_card_port->port_id == card_port->port_id;
+ feature->cur_chking_old_port = old_card_port;
+
+ if (!feature->dirty_flg || !feature->same_addr)
+ return CARD_PORT_SKIP_CHECKING;
+
+ return CARD_PORT_FURTHER_CHECKING_NEEDED;
+}
+
+static int leapraid_process_card_port_feature(
+ struct leapraid_card_port_feature *feature)
+{
+ struct leapraid_card_port *old_card_port;
+
+ old_card_port = feature->cur_chking_old_port;
+ if (feature->exact_phy) {
+ feature->checking_state = SAME_PORT_WITH_NOTHING_CHANGED;
+ feature->expected_old_port = old_card_port;
+ return 1;
+ } else if (feature->phy_overlap) {
+ if (feature->same_port) {
+ feature->checking_state =
+ SAME_PORT_WITH_PARTIALLY_CHANGED_PHYS;
+ feature->expected_old_port = old_card_port;
+ } else if (feature->checking_state !=
+ SAME_PORT_WITH_PARTIALLY_CHANGED_PHYS) {
+ feature->checking_state =
+ SAME_ADDR_WITH_PARTIALLY_CHANGED_PHYS;
+ feature->expected_old_port = old_card_port;
+ }
+ } else {
+ if (feature->checking_state !=
+ SAME_PORT_WITH_PARTIALLY_CHANGED_PHYS &&
+ feature->checking_state !=
+ SAME_ADDR_WITH_PARTIALLY_CHANGED_PHYS) {
+ feature->checking_state = SAME_ADDR_ONLY;
+ feature->expected_old_port = old_card_port;
+ feature->same_addr_port_count++;
+ }
+ }
+
+ return 0;
+}
+
+static int leapraid_check_card_port(struct leapraid_adapter *adapter,
+ struct leapraid_card_port *card_port,
+ struct leapraid_card_port **expected_card_port,
+ int *count)
+{
+ struct leapraid_card_port *old_card_port;
+ struct leapraid_card_port_feature feature;
+
+ *expected_card_port = NULL;
+ memset(&feature, 0, sizeof(struct leapraid_card_port_feature));
+ feature.expected_old_port = NULL;
+ feature.same_addr_port_count = 0;
+ feature.checking_state = NEW_CARD_PORT;
+
+ list_for_each_entry(old_card_port, &adapter->dev_topo.card_port_list,
+ list) {
+ if (leapraid_get_card_port_feature(old_card_port, card_port,
+ &feature))
+ continue;
+
+ if (leapraid_process_card_port_feature(&feature))
+ break;
+ }
+
+ if (feature.checking_state == SAME_ADDR_ONLY)
+ *count = feature.same_addr_port_count;
+
+ *expected_card_port = feature.expected_old_port;
+ return feature.checking_state;
+}
+
+static void leapraid_del_phy_part_of_anther_port(
+ struct leapraid_adapter *adapter,
+ struct leapraid_card_port *card_port_table, int index,
+ u8 port_count, int offset)
+{
+ struct leapraid_topo_node *card_topo_node;
+ bool found = false;
+ int i;
+
+ card_topo_node = &adapter->dev_topo.card;
+ for (i = 0; i < port_count; i++) {
+ if (i == index)
+ continue;
+
+ if (card_port_table[i].phy_mask & BIT(offset)) {
+ leapraid_transport_detach_phy_to_port(adapter,
+ card_topo_node,
+ &card_topo_node->card_phy[offset]);
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ card_port_table[index].phy_mask |= BIT(offset);
+}
+
+static void leapraid_add_or_del_phys_from_existing_port(
+ struct leapraid_adapter *adapter,
+ struct leapraid_card_port *card_port,
+ struct leapraid_card_port *card_port_table,
+ int index, u8 port_count)
+{
+ struct leapraid_topo_node *card_topo_node;
+ u32 phy_mask_diff;
+ u32 offset = 0;
+
+ card_topo_node = &adapter->dev_topo.card;
+ phy_mask_diff = card_port->phy_mask ^
+ card_port_table[index].phy_mask;
+ for (offset = 0; offset < adapter->dev_topo.card.phys_num; offset++) {
+ if (!(phy_mask_diff & BIT(offset)))
+ continue;
+
+ if (!(card_port_table[index].phy_mask & BIT(offset))) {
+ leapraid_del_phy_part_of_anther_port(adapter,
+ card_port_table,
+ index, port_count,
+ offset);
+ continue;
+ }
+
+ if (card_topo_node->card_phy[offset].phy_is_assigned)
+ leapraid_transport_detach_phy_to_port(adapter,
+ card_topo_node,
+ &card_topo_node->card_phy[offset]);
+
+ leapraid_transport_attach_phy_to_port(adapter,
+ card_topo_node, &card_topo_node->card_phy[offset],
+ card_port->sas_address,
+ card_port);
+ }
+}
+
+struct leapraid_sas_dev *leapraid_get_next_sas_dev_from_init_list(
+ struct leapraid_adapter *adapter)
+{
+ struct leapraid_sas_dev *sas_dev = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ if (!list_empty(&adapter->dev_topo.sas_dev_init_list)) {
+ sas_dev = list_first_entry(&adapter->dev_topo.sas_dev_init_list,
+ struct leapraid_sas_dev, list);
+ leapraid_sdev_get(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return sas_dev;
+}
+
+static bool leapraid_check_boot_dev_internal(u64 sas_address, u64 dev_name,
+ u64 enc_lid, u16 slot,
+ struct leapraid_boot_dev *boot_dev,
+ u8 form)
+{
+ if (!boot_dev)
+ return false;
+
+ switch (form & LEAPRAID_BOOTDEV_FORM_MASK) {
+ case LEAPRAID_BOOTDEV_FORM_SAS_WWID:
+ if (!sas_address)
+ return false;
+
+ return sas_address ==
+ le64_to_cpu(((struct leapraid_boot_dev_format_sas_wwid *)(
+ boot_dev->pg_dev))->sas_addr);
+ case LEAPRAID_BOOTDEV_FORM_ENC_SLOT:
+ if (!enc_lid)
+ return false;
+
+ return (enc_lid == le64_to_cpu(((struct leapraid_boot_dev_format_enc_slot *)(
+ boot_dev->pg_dev))->enc_lid) &&
+ slot == le16_to_cpu(((struct leapraid_boot_dev_format_enc_slot *)(
+ boot_dev->pg_dev))->slot_num));
+ case LEAPRAID_BOOTDEV_FORM_DEV_NAME:
+ if (!dev_name)
+ return false;
+
+ return dev_name == le64_to_cpu(((struct leapraid_boot_dev_format_dev_name *)(
+ boot_dev->pg_dev))->dev_name);
+ case LEAPRAID_BOOTDEV_FORM_NONE:
+ default:
+ return false;
+ }
+}
+
+static void leapraid_try_set_boot_dev(struct leapraid_boot_dev *boot_dev,
+ u64 sas_addr, u64 dev_name,
+ u64 enc_lid, u16 slot,
+ void *dev, u32 chnl)
+{
+ bool matched = false;
+
+ if (boot_dev->dev)
+ return;
+
+ matched = leapraid_check_boot_dev_internal(sas_addr, dev_name, enc_lid,
+ slot, boot_dev,
+ boot_dev->form);
+ if (matched) {
+ boot_dev->dev = dev;
+ boot_dev->chnl = chnl;
+ }
+}
+
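+/*
+ * During the initial device scan, test each discovered device (or RAID
+ * volume) against the requested, alternate and current boot device
+ * descriptors and latch the first match.
+ */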
+static void leapraid_check_boot_dev(struct leapraid_adapter *adapter,
+ void *dev, u32 chnl)
+{
+ u64 sas_addr = 0;
+ u64 dev_name = 0;
+ u64 enc_lid = 0;
+ u16 slot = 0;
+
+ if (!adapter->scan_dev_desc.driver_loading)
+ return;
+
+ switch (chnl) {
+ case RAID_CHANNEL:
+ {
+ struct leapraid_raid_volume *raid_volume =
+ (struct leapraid_raid_volume *)dev;
+
+ sas_addr = raid_volume->wwid;
+ break;
+ }
+ default:
+ {
+ struct leapraid_sas_dev *sas_dev =
+ (struct leapraid_sas_dev *)dev;
+ sas_addr = sas_dev->sas_addr;
+ dev_name = sas_dev->dev_name;
+ enc_lid = sas_dev->enc_lid;
+ slot = sas_dev->slot;
+ break;
+ }
+ }
+
+ leapraid_try_set_boot_dev(&adapter->boot_devs.requested_boot_dev,
+ sas_addr, dev_name, enc_lid,
+ slot, dev, chnl);
+ leapraid_try_set_boot_dev(&adapter->boot_devs.requested_alt_boot_dev,
+ sas_addr, dev_name, enc_lid,
+ slot, dev, chnl);
+ leapraid_try_set_boot_dev(&adapter->boot_devs.current_boot_dev,
+ sas_addr, dev_name, enc_lid,
+ slot, dev, chnl);
+}
+
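+/*
+ * Copy the prepared config request into the internal request frame, fire it
+ * at the adapter and wait up to LEAPRAID_CFG_OP_TIMEOUT seconds for the
+ * completion. The caller inspects cfg_op_cmd.status for the outcome.
+ */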
+static void leapraid_build_and_fire_cfg_req(struct leapraid_adapter *adapter,
+ struct leapraid_cfg_req *leap_mpi_cfgp_req,
+ struct leapraid_cfg_rep *leap_mpi_cfgp_rep)
+{
+ struct leapraid_cfg_req *local_leap_cfg_req;
+
+ memset(leap_mpi_cfgp_rep, 0, sizeof(struct leapraid_cfg_rep));
+ memset((void *)(&adapter->driver_cmds.cfg_op_cmd.reply), 0,
+ sizeof(struct leapraid_cfg_rep));
+ adapter->driver_cmds.cfg_op_cmd.status = LEAPRAID_CMD_PENDING;
+ local_leap_cfg_req = leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.cfg_op_cmd.inter_taskid);
+ memcpy(local_leap_cfg_req, leap_mpi_cfgp_req,
+ sizeof(struct leapraid_cfg_req));
+ init_completion(&adapter->driver_cmds.cfg_op_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.cfg_op_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.cfg_op_cmd.done,
+ LEAPRAID_CFG_OP_TIMEOUT * HZ);
+}
+
+static int leapraid_req_cfg_func(struct leapraid_adapter *adapter,
+ struct leapraid_cfg_req *leap_mpi_cfgp_req,
+ struct leapraid_cfg_rep *leap_mpi_cfgp_rep,
+ void *target_cfg_pg, void *real_cfg_pg_addr,
+ u16 target_real_cfg_pg_sz)
+{
+ u32 adapter_status = UINT_MAX;
+ bool issue_reset = false;
+ u8 retry_cnt;
+ int rc;
+
+ retry_cnt = 0;
+ mutex_lock(&adapter->driver_cmds.cfg_op_cmd.mutex);
+retry:
+ if (retry_cnt) {
+ if (retry_cnt > LEAPRAID_CFG_REQ_RETRY_TIMES) {
+ rc = -EFAULT;
+ goto out;
+ }
+ dev_warn(&adapter->pdev->dev,
+ "cfg-req: retry request, cnt=%u\n", retry_cnt);
+ }
+
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "cfg-req: adapter not operational\n");
+ goto out;
+ }
+
+ leapraid_build_and_fire_cfg_req(adapter, leap_mpi_cfgp_req,
+ leap_mpi_cfgp_rep);
+ if (!(adapter->driver_cmds.cfg_op_cmd.status & LEAPRAID_CMD_DONE)) {
+ retry_cnt++;
+ if (adapter->driver_cmds.cfg_op_cmd.status &
+ LEAPRAID_CMD_RESET) {
+ dev_warn(&adapter->pdev->dev,
+				 "cfg-req: cmd aborted by hard reset, retrying\n");
+ goto retry;
+ }
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering) {
+ dev_err(&adapter->pdev->dev,
+ "cfg-req: cmd not done during %s, skip reset\n",
+ adapter->access_ctrl.shost_recovering ?
+ "shost recovery" : "pcie recovery");
+ issue_reset = false;
+ rc = -EFAULT;
+ } else {
+ dev_err(&adapter->pdev->dev,
+ "cfg-req: cmd timeout, issuing hard reset\n");
+ issue_reset = true;
+ }
+
+ goto out;
+ }
+
+ if (adapter->driver_cmds.cfg_op_cmd.status &
+ LEAPRAID_CMD_REPLY_VALID) {
+ memcpy(leap_mpi_cfgp_rep,
+ (void *)(&adapter->driver_cmds.cfg_op_cmd.reply),
+ sizeof(struct leapraid_cfg_rep));
+ adapter_status = le16_to_cpu(
+ leap_mpi_cfgp_rep->adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status == LEAPRAID_ADAPTER_STATUS_SUCCESS) {
+ if (target_cfg_pg && real_cfg_pg_addr &&
+ target_real_cfg_pg_sz)
+ if (leap_mpi_cfgp_req->action ==
+ LEAPRAID_CFG_ACT_PAGE_READ_CUR)
+ memcpy(target_cfg_pg,
+ real_cfg_pg_addr,
+ target_real_cfg_pg_sz);
+ } else {
+ if (adapter_status !=
+ LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_PAGE)
+ dev_err(&adapter->pdev->dev,
+ "cfg-rep: adapter_status=0x%x\n",
+ adapter_status);
+ rc = -EFAULT;
+ }
+ } else {
+ dev_err(&adapter->pdev->dev, "cfg-rep: reply invalid\n");
+ rc = -EFAULT;
+ }
+
+out:
+ adapter->driver_cmds.cfg_op_cmd.status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&adapter->driver_cmds.cfg_op_cmd.mutex);
+ if (issue_reset) {
+ if (adapter->scan_dev_desc.first_scan_dev_fired) {
+ dev_info(&adapter->pdev->dev,
+ "%s:%d cfg-req: failure, issuing reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ rc = -EFAULT;
+ } else {
+ dev_warn(&adapter->pdev->dev,
+				 "cfg-req: cmd timed out during init, skip reset\n");
+ rc = -EFAULT;
+ }
+ }
+ return rc;
+}
+
+static int leapraid_request_cfg_pg_header(struct leapraid_adapter *adapter,
+ struct leapraid_cfg_req *leap_mpi_cfgp_req,
+ struct leapraid_cfg_rep *leap_mpi_cfgp_rep)
+{
+ return leapraid_req_cfg_func(adapter, leap_mpi_cfgp_req,
+ leap_mpi_cfgp_rep, NULL, NULL, 0);
+}
+
+static int leapraid_request_cfg_pg(struct leapraid_adapter *adapter,
+ struct leapraid_cfg_req *leap_mpi_cfgp_req,
+ struct leapraid_cfg_rep *leap_mpi_cfgp_rep,
+ void *target_cfg_pg, void *real_cfg_pg_addr,
+ u16 target_real_cfg_pg_sz)
+{
+ return leapraid_req_cfg_func(adapter, leap_mpi_cfgp_req,
+ leap_mpi_cfgp_rep, target_cfg_pg,
+ real_cfg_pg_addr, target_real_cfg_pg_sz);
+}
+
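+/*
+ * Read a configuration page in two steps: fetch the page header first to
+ * learn the real page length, then issue a READ_CURRENT with a DMA buffer
+ * of that size and copy the result into the caller's page structure.
+ */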
+int leapraid_op_config_page(struct leapraid_adapter *adapter,
+ void *target_cfg_pg, union cfg_param_1 cfgp1,
+ union cfg_param_2 cfgp2,
+ enum config_page_action cfg_op)
+{
+ struct leapraid_cfg_req leap_mpi_cfgp_req;
+ struct leapraid_cfg_rep leap_mpi_cfgp_rep;
+ u16 real_cfg_pg_sz = 0;
+ void *real_cfg_pg_addr = NULL;
+ dma_addr_t real_cfg_pg_dma = 0;
+ u32 __page_size;
+ int rc;
+
+ memset(&leap_mpi_cfgp_req, 0, sizeof(struct leapraid_cfg_req));
+ leap_mpi_cfgp_req.func = LEAPRAID_FUNC_CONFIG_OP;
+ leap_mpi_cfgp_req.action = LEAPRAID_CFG_ACT_PAGE_HEADER;
+
+ switch (cfg_op) {
+ case GET_BIOS_PG3:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_BIOS;
+ leap_mpi_cfgp_req.header.page_num =
+ LEAPRAID_CFG_PAGE_NUM_BIOS3;
+ __page_size = sizeof(struct leapraid_bios_page3);
+ break;
+ case GET_BIOS_PG2:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_BIOS;
+ leap_mpi_cfgp_req.header.page_num =
+ LEAPRAID_CFG_PAGE_NUM_BIOS2;
+ __page_size = sizeof(struct leapraid_bios_page2);
+ break;
+ case GET_SAS_DEVICE_PG0:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type = LEAPRAID_CFG_EXTPT_SAS_DEV;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_DEV0;
+ __page_size = sizeof(struct leapraid_sas_dev_p0);
+ break;
+ case GET_SAS_IOUNIT_PG0:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type =
+ LEAPRAID_CFG_EXTPT_SAS_IO_UNIT;
+ leap_mpi_cfgp_req.header.page_num =
+ LEAPRAID_CFG_PAGE_NUM_IOUNIT0;
+ __page_size = cfgp1.size;
+ break;
+ case GET_SAS_IOUNIT_PG1:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type =
+ LEAPRAID_CFG_EXTPT_SAS_IO_UNIT;
+ leap_mpi_cfgp_req.header.page_num =
+ LEAPRAID_CFG_PAGE_NUM_IOUNIT1;
+ __page_size = cfgp1.size;
+ break;
+ case GET_SAS_EXPANDER_PG0:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type = LEAPRAID_CFG_EXTPT_SAS_EXP;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_EXP0;
+ __page_size = sizeof(struct leapraid_exp_p0);
+ break;
+ case GET_SAS_EXPANDER_PG1:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type = LEAPRAID_CFG_EXTPT_SAS_EXP;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_EXP1;
+ __page_size = sizeof(struct leapraid_exp_p1);
+ break;
+ case GET_SAS_ENCLOSURE_PG0:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type = LEAPRAID_CFG_EXTPT_ENC;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_ENC0;
+ __page_size = sizeof(struct leapraid_enc_p0);
+ break;
+ case GET_PHY_PG0:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type = LEAPRAID_CFG_EXTPT_SAS_PHY;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_PHY0;
+ __page_size = sizeof(struct leapraid_sas_phy_p0);
+ break;
+ case GET_RAID_VOLUME_PG0:
+ leap_mpi_cfgp_req.header.page_type =
+ LEAPRAID_CFG_PT_RAID_VOLUME;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_VOL0;
+ __page_size = cfgp1.size;
+ break;
+ case GET_RAID_VOLUME_PG1:
+ leap_mpi_cfgp_req.header.page_type =
+ LEAPRAID_CFG_PT_RAID_VOLUME;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_VOL1;
+ __page_size = sizeof(struct leapraid_raidvol_p1);
+ break;
+ case GET_PHY_DISK_PG0:
+ leap_mpi_cfgp_req.header.page_type =
+ LEAPRAID_CFG_PT_RAID_PHYSDISK;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_PD0;
+ __page_size = sizeof(struct leapraid_raidpd_p0);
+ break;
+ default:
+ dev_err(&adapter->pdev->dev,
+ "unsupported config page action=%d!\n", cfg_op);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ leapraid_build_nodata_mpi_sg(adapter,
+ &leap_mpi_cfgp_req.page_buf_sge);
+ rc = leapraid_request_cfg_pg_header(adapter,
+ &leap_mpi_cfgp_req,
+ &leap_mpi_cfgp_rep);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+			"cfg-req: header failed rc=%d\n", rc);
+ goto out;
+ }
+
+ if (cfg_op == GET_SAS_DEVICE_PG0 ||
+ cfg_op == GET_SAS_EXPANDER_PG0 ||
+ cfg_op == GET_SAS_ENCLOSURE_PG0 ||
+ cfg_op == GET_RAID_VOLUME_PG1)
+ leap_mpi_cfgp_req.page_addr = cpu_to_le32(cfgp1.form |
+ cfgp2.handle);
+ else if (cfg_op == GET_PHY_DISK_PG0)
+ leap_mpi_cfgp_req.page_addr = cpu_to_le32(cfgp1.form |
+ cfgp2.form_specific);
+ else if (cfg_op == GET_RAID_VOLUME_PG0)
+ leap_mpi_cfgp_req.page_addr =
+ cpu_to_le32(cfgp2.handle |
+ LEAPRAID_RAID_VOL_CFG_PGAD_HDL);
+ else if (cfg_op == GET_SAS_EXPANDER_PG1)
+ leap_mpi_cfgp_req.page_addr =
+ cpu_to_le32(cfgp2.handle |
+ (cfgp1.phy_number <<
+ LEAPRAID_SAS_EXP_CFG_PGAD_PHYNUM_SHIFT) |
+ LEAPRAID_SAS_EXP_CFG_PGAD_HDL_PHY_NUM);
+ else if (cfg_op == GET_PHY_PG0)
+ leap_mpi_cfgp_req.page_addr = cpu_to_le32(cfgp1.phy_number |
+ LEAPRAID_SAS_PHY_CFG_PGAD_PHY_NUMBER);
+
+ leap_mpi_cfgp_req.action = LEAPRAID_CFG_ACT_PAGE_READ_CUR;
+
+ leap_mpi_cfgp_req.header.page_num = leap_mpi_cfgp_rep.header.page_num;
+ leap_mpi_cfgp_req.header.page_type =
+ leap_mpi_cfgp_rep.header.page_type;
+ leap_mpi_cfgp_req.header.page_len = leap_mpi_cfgp_rep.header.page_len;
+ leap_mpi_cfgp_req.ext_page_len = leap_mpi_cfgp_rep.ext_page_len;
+ leap_mpi_cfgp_req.ext_page_type = leap_mpi_cfgp_rep.ext_page_type;
+
+ real_cfg_pg_sz = (leap_mpi_cfgp_req.header.page_len) ?
+ leap_mpi_cfgp_req.header.page_len * 4 :
+ le16_to_cpu(leap_mpi_cfgp_rep.ext_page_len) * 4;
+ real_cfg_pg_addr = dma_alloc_coherent(&adapter->pdev->dev,
+ real_cfg_pg_sz,
+ &real_cfg_pg_dma,
+ GFP_KERNEL);
+ if (!real_cfg_pg_addr) {
+ dev_err(&adapter->pdev->dev, "cfg-req: dma alloc failed\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ if (leap_mpi_cfgp_req.action == LEAPRAID_CFG_ACT_PAGE_WRITE_CUR) {
+ leapraid_single_mpi_sg_append(adapter,
+ &leap_mpi_cfgp_req.page_buf_sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL |
+ LEAPRAID_SGE_FLG_H2C) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ real_cfg_pg_sz,
+ real_cfg_pg_dma);
+ memcpy(real_cfg_pg_addr, target_cfg_pg,
+ min_t(u16, real_cfg_pg_sz, __page_size));
+ } else {
+ memset(target_cfg_pg, 0, __page_size);
+ leapraid_single_mpi_sg_append(adapter,
+ &leap_mpi_cfgp_req.page_buf_sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ real_cfg_pg_sz,
+ real_cfg_pg_dma);
+ memset(real_cfg_pg_addr, 0,
+ min_t(u16, real_cfg_pg_sz, __page_size));
+ }
+
+ rc = leapraid_request_cfg_pg(adapter,
+ &leap_mpi_cfgp_req,
+ &leap_mpi_cfgp_rep,
+ target_cfg_pg,
+ real_cfg_pg_addr,
+ min_t(u16, real_cfg_pg_sz, __page_size));
+ if (rc) {
+ u32 adapter_status;
+
+ adapter_status = le16_to_cpu(leap_mpi_cfgp_rep.adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status !=
+ LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_PAGE)
+ dev_err(&adapter->pdev->dev,
+ "cfg-req: rc=%d, pg_info: 0x%x, 0x%x, %d\n",
+ rc, leap_mpi_cfgp_req.header.page_type,
+ leap_mpi_cfgp_req.ext_page_type,
+ leap_mpi_cfgp_req.header.page_num);
+ }
+
+ if (real_cfg_pg_addr)
+ dma_free_coherent(&adapter->pdev->dev,
+ real_cfg_pg_sz,
+ real_cfg_pg_addr,
+ real_cfg_pg_dma);
+out:
+ return rc;
+}
+
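+/*
+ * Walk the RAID configuration pages (GET_NEXT semantics on the config
+ * number) until the physical-disk handle is found and report the volume
+ * handle it belongs to. An invalid-page status means the disk is not part
+ * of any configuration; that is reported as success with *vol_hdl == 0.
+ */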
+static int leapraid_cfg_get_volume_hdl_dispatch(
+ struct leapraid_adapter *adapter,
+ struct leapraid_cfg_req *cfg_req,
+ struct leapraid_cfg_rep *cfg_rep,
+ struct leapraid_raid_cfg_p0 *raid_cfg_p0,
+ void *real_cfg_pg_addr,
+ u16 real_cfg_pg_sz,
+ u16 raid_cfg_p0_sz,
+ u16 pd_hdl, u16 *vol_hdl)
+{
+ u16 phys_disk_dev_hdl;
+ u16 adapter_status;
+ u16 element_type;
+ int config_num;
+ int rc, i;
+
+ config_num = 0xff;
+ while (true) {
+ cfg_req->page_addr =
+ cpu_to_le32(config_num +
+ LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP);
+ rc = leapraid_request_cfg_pg(
+ adapter, cfg_req, cfg_rep,
+ raid_cfg_p0, real_cfg_pg_addr,
+ min_t(u16, real_cfg_pg_sz, raid_cfg_p0_sz));
+ adapter_status = le16_to_cpu(cfg_rep->adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (rc) {
+ if (adapter_status ==
+ LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_PAGE) {
+ *vol_hdl = 0;
+ return 0;
+ }
+ return rc;
+ }
+
+ if (adapter_status != LEAPRAID_ADAPTER_STATUS_SUCCESS)
+ return -1;
+
+ for (i = 0; i < raid_cfg_p0->elements_num; i++) {
+ element_type =
+ le16_to_cpu(raid_cfg_p0->cfg_element[i].element_flg) &
+ LEAPRAID_RAIDCFG_P0_EFLG_MASK_ELEMENT_TYPE;
+
+ switch (element_type) {
+ case LEAPRAID_RAIDCFG_P0_EFLG_VOL_PHYS_DISK_ELEMENT:
+ case LEAPRAID_RAIDCFG_P0_EFLG_OCE_ELEMENT:
+ phys_disk_dev_hdl =
+ le16_to_cpu(raid_cfg_p0->cfg_element[i]
+ .phys_disk_dev_hdl);
+ if (phys_disk_dev_hdl == pd_hdl) {
+ *vol_hdl =
+ le16_to_cpu
+ (raid_cfg_p0->cfg_element[i]
+ .vol_dev_hdl);
+ return 0;
+ }
+ break;
+
+ case LEAPRAID_RAIDCFG_P0_EFLG_HOT_SPARE_ELEMENT:
+ *vol_hdl = 0;
+ return 0;
+ default:
+ break;
+ }
+ }
+ config_num = raid_cfg_p0->cfg_num;
+ }
+ return 0;
+}
+
+int leapraid_cfg_get_volume_hdl(struct leapraid_adapter *adapter,
+ u16 pd_hdl, u16 *vol_hdl)
+{
+ struct leapraid_raid_cfg_p0 *raid_cfg_p0 = NULL;
+ struct leapraid_cfg_req cfg_req;
+ struct leapraid_cfg_rep cfg_rep;
+ dma_addr_t real_cfg_pg_dma = 0;
+ void *real_cfg_pg_addr = NULL;
+ u16 real_cfg_pg_sz = 0;
+ int rc, raid_cfg_p0_sz;
+
+ *vol_hdl = 0;
+ memset(&cfg_req, 0, sizeof(struct leapraid_cfg_req));
+ cfg_req.func = LEAPRAID_FUNC_CONFIG_OP;
+ cfg_req.action = LEAPRAID_CFG_ACT_PAGE_HEADER;
+ cfg_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ cfg_req.ext_page_type = LEAPRAID_CFG_EXTPT_RAID_CONFIG;
+ cfg_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_VOL0;
+
+ leapraid_build_nodata_mpi_sg(adapter, &cfg_req.page_buf_sge);
+ rc = leapraid_request_cfg_pg_header(adapter, &cfg_req, &cfg_rep);
+ if (rc)
+ goto out;
+
+ cfg_req.action = LEAPRAID_CFG_ACT_PAGE_READ_CUR;
+ raid_cfg_p0_sz = le16_to_cpu(cfg_rep.ext_page_len) *
+ LEAPRAID_CFG_UNIT_SIZE;
+ raid_cfg_p0 = kmalloc(raid_cfg_p0_sz, GFP_KERNEL);
+ if (!raid_cfg_p0) {
+ rc = -1;
+ goto out;
+ }
+
+ real_cfg_pg_sz = (cfg_req.header.page_len) ?
+ cfg_req.header.page_len * LEAPRAID_CFG_UNIT_SIZE :
+ le16_to_cpu(cfg_rep.ext_page_len) * LEAPRAID_CFG_UNIT_SIZE;
+
+ real_cfg_pg_addr = dma_alloc_coherent(&adapter->pdev->dev,
+ real_cfg_pg_sz, &real_cfg_pg_dma,
+ GFP_KERNEL);
+ if (!real_cfg_pg_addr) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ memset(raid_cfg_p0, 0, raid_cfg_p0_sz);
+ leapraid_single_mpi_sg_append(adapter,
+ &cfg_req.page_buf_sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ real_cfg_pg_sz,
+ real_cfg_pg_dma);
+ memset(real_cfg_pg_addr, 0,
+ min_t(u16, real_cfg_pg_sz, raid_cfg_p0_sz));
+
+ rc = leapraid_cfg_get_volume_hdl_dispatch(adapter,
+ &cfg_req, &cfg_rep,
+ raid_cfg_p0,
+ real_cfg_pg_addr,
+ real_cfg_pg_sz,
+ raid_cfg_p0_sz,
+ pd_hdl, vol_hdl);
+
+out:
+ if (real_cfg_pg_addr)
+ dma_free_coherent(&adapter->pdev->dev,
+ real_cfg_pg_sz, real_cfg_pg_addr,
+ real_cfg_pg_dma);
+ kfree(raid_cfg_p0);
+ return rc;
+}
+
+static int leapraid_get_adapter_phys(struct leapraid_adapter *adapter,
+ u8 *nr_phys)
+{
+ struct leapraid_sas_io_unit_p0 sas_io_unit_page0;
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ int rc = 0;
+
+ *nr_phys = 0;
+ cfgp1.size = sizeof(struct leapraid_sas_io_unit_p0);
+ rc = leapraid_op_config_page(adapter, &sas_io_unit_page0, cfgp1,
+ cfgp2, GET_SAS_IOUNIT_PG0);
+ if (rc)
+ return rc;
+
+ *nr_phys = sas_io_unit_page0.phy_num;
+
+ return 0;
+}
+
+static int leapraid_cfg_get_number_pds(struct leapraid_adapter *adapter,
+ u16 hdl, u8 *num_pds)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_raidvol_p0 raidvol_p0;
+ int rc;
+
+ *num_pds = 0;
+ cfgp1.size = sizeof(struct leapraid_raidvol_p0);
+ cfgp2.handle = hdl;
+ rc = leapraid_op_config_page(adapter, &raidvol_p0, cfgp1,
+ cfgp2, GET_RAID_VOLUME_PG0);
+
+ if (!rc)
+ *num_pds = raidvol_p0.num_phys_disks;
+
+ return rc;
+}
+
+int leapraid_cfg_get_volume_wwid(struct leapraid_adapter *adapter,
+ u16 vol_hdl, u64 *wwid)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_raidvol_p1 raidvol_p1;
+ int rc;
+
+ *wwid = 0;
+ cfgp1.form = LEAPRAID_RAID_VOL_CFG_PGAD_HDL;
+ cfgp2.handle = vol_hdl;
+ rc = leapraid_op_config_page(adapter, &raidvol_p1, cfgp1,
+ cfgp2, GET_RAID_VOLUME_PG1);
+ if (!rc)
+ *wwid = le64_to_cpu(raidvol_p1.wwid);
+
+ return rc;
+}
+
+static int leapraid_get_sas_io_unit_page0(struct leapraid_adapter *adapter,
+ struct leapraid_sas_io_unit_p0 *sas_io_unit_p0,
+ u16 sas_iou_pg0_sz)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+
+ cfgp1.size = sas_iou_pg0_sz;
+ return leapraid_op_config_page(adapter, sas_io_unit_p0, cfgp1,
+ cfgp2, GET_SAS_IOUNIT_PG0);
+}
+
+static int leapraid_get_sas_address(struct leapraid_adapter *adapter,
+ u16 hdl, u64 *sas_address)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+
+ *sas_address = 0;
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = hdl;
+ if ((leapraid_op_config_page(adapter, &sas_dev_p0, cfgp1,
+ cfgp2, GET_SAS_DEVICE_PG0)))
+ return -ENXIO;
+
+ if (hdl <= adapter->dev_topo.card.phys_num &&
+ (!(le32_to_cpu(sas_dev_p0.dev_info) & LEAPRAID_DEVTYP_SEP)))
+ *sas_address = adapter->dev_topo.card.sas_address;
+ else
+ *sas_address = le64_to_cpu(sas_dev_p0.sas_address);
+
+ return 0;
+}
+
+int leapraid_get_volume_cap(struct leapraid_adapter *adapter,
+ struct leapraid_raid_volume *raid_volume)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_raidvol_p0 *raidvol_p0;
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+ struct leapraid_raidpd_p0 raidpd_p0;
+ u8 num_pds;
+ u16 sz;
+
+ if ((leapraid_cfg_get_number_pds(adapter, raid_volume->hdl,
+ &num_pds)) || !num_pds)
+ return -EFAULT;
+
+ raid_volume->pd_num = num_pds;
+ sz = offsetof(struct leapraid_raidvol_p0, phys_disk) +
+ (num_pds * sizeof(struct leapraid_raidvol0_phys_disk));
+ raidvol_p0 = kzalloc(sz, GFP_KERNEL);
+ if (!raidvol_p0)
+ return -EFAULT;
+
+ cfgp1.size = sz;
+ cfgp2.handle = raid_volume->hdl;
+ if ((leapraid_op_config_page(adapter, raidvol_p0, cfgp1, cfgp2,
+ GET_RAID_VOLUME_PG0))) {
+ kfree(raidvol_p0);
+ return -EFAULT;
+ }
+
+ raid_volume->vol_type = raidvol_p0->volume_type;
+ cfgp1.form = LEAPRAID_PHYSDISK_CFG_PGAD_PHYSDISKNUM;
+ cfgp2.form_specific = raidvol_p0->phys_disk[0].phys_disk_num;
+ if (!(leapraid_op_config_page(adapter, &raidpd_p0, cfgp1, cfgp2,
+ GET_PHY_DISK_PG0))) {
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = le16_to_cpu(raidpd_p0.dev_hdl);
+ if (!(leapraid_op_config_page(adapter, &sas_dev_p0, cfgp1,
+ cfgp2, GET_SAS_DEVICE_PG0))) {
+ raid_volume->dev_info =
+ le32_to_cpu(sas_dev_p0.dev_info);
+ }
+ }
+
+ kfree(raidvol_p0);
+ return 0;
+}
+
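+/*
+ * Periodic firmware log poller: mirror the host/adapter log-buffer
+ * positions through the memory-mapped registers and re-arm the delayed
+ * work while the log workqueue is still alive.
+ */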
+static void leapraid_fw_log_work(struct work_struct *work)
+{
+ struct leapraid_adapter *adapter = container_of(work,
+ struct leapraid_adapter, fw_log_desc.fw_log_work.work);
+ struct leapraid_fw_log_info *infom;
+ unsigned long flags;
+
+ infom = (struct leapraid_fw_log_info *)(adapter->fw_log_desc.fw_log_buffer +
+ LEAPRAID_SYS_LOG_BUF_SIZE);
+
+ if (adapter->fw_log_desc.fw_log_init_flag == 0) {
+ infom->user_position =
+ leapraid_readl(&adapter->iomem_base->host_log_buf_pos);
+ infom->adapter_position =
+ leapraid_readl(&adapter->iomem_base->adapter_log_buf_pos);
+ adapter->fw_log_desc.fw_log_init_flag++;
+ }
+
+ writel(infom->user_position, &adapter->iomem_base->host_log_buf_pos);
+ infom->adapter_position =
+ leapraid_readl(&adapter->iomem_base->adapter_log_buf_pos);
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ if (adapter->fw_log_desc.fw_log_wq)
+ queue_delayed_work(adapter->fw_log_desc.fw_log_wq,
+ &adapter->fw_log_desc.fw_log_work,
+ msecs_to_jiffies(LEAPRAID_PCIE_LOG_POLLING_INTERVAL));
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+}
+
+void leapraid_fw_log_stop(struct leapraid_adapter *adapter)
+{
+ struct workqueue_struct *wq;
+ unsigned long flags;
+
+ if (!adapter->fw_log_desc.open_pcie_trace)
+ return;
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ wq = adapter->fw_log_desc.fw_log_wq;
+ adapter->fw_log_desc.fw_log_wq = NULL;
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+ if (wq) {
+ if (!cancel_delayed_work_sync(&adapter->fw_log_desc.fw_log_work))
+ flush_workqueue(wq);
+ destroy_workqueue(wq);
+ }
+}
+
+void leapraid_fw_log_start(struct leapraid_adapter *adapter)
+{
+ unsigned long flags;
+
+ if (!adapter->fw_log_desc.open_pcie_trace)
+ return;
+
+ if (adapter->fw_log_desc.fw_log_wq)
+ return;
+
+ INIT_DELAYED_WORK(&adapter->fw_log_desc.fw_log_work,
+ leapraid_fw_log_work);
+ snprintf(adapter->fw_log_desc.fw_log_wq_name,
+ sizeof(adapter->fw_log_desc.fw_log_wq_name),
+ "poll_%s%u_fw_log",
+ LEAPRAID_DRIVER_NAME, adapter->adapter_attr.id);
+ adapter->fw_log_desc.fw_log_wq =
+ create_singlethread_workqueue(
+ adapter->fw_log_desc.fw_log_wq_name);
+ if (!adapter->fw_log_desc.fw_log_wq)
+ return;
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ if (adapter->fw_log_desc.fw_log_wq)
+ queue_delayed_work(adapter->fw_log_desc.fw_log_wq,
+ &adapter->fw_log_desc.fw_log_work,
+ msecs_to_jiffies(LEAPRAID_PCIE_LOG_POLLING_INTERVAL));
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+}
+
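+/*
+ * Push the current host time to the adapter via a SAS IO-unit control
+ * request so firmware timestamps stay aligned with the host clock. A
+ * timeout may escalate to a full hard reset.
+ */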
+static void leapraid_timestamp_sync(struct leapraid_adapter *adapter)
+{
+ struct leapraid_io_unit_ctrl_req *io_unit_ctrl_req;
+ ktime_t current_time;
+ bool issue_reset = false;
+ u64 time_stamp = 0;
+
+ mutex_lock(&adapter->driver_cmds.timestamp_sync_cmd.mutex);
+ adapter->driver_cmds.timestamp_sync_cmd.status = LEAPRAID_CMD_PENDING;
+ io_unit_ctrl_req =
+ leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.timestamp_sync_cmd.inter_taskid);
+ memset(io_unit_ctrl_req, 0, sizeof(struct leapraid_io_unit_ctrl_req));
+ io_unit_ctrl_req->func = LEAPRAID_FUNC_SAS_IO_UNIT_CTRL;
+ io_unit_ctrl_req->op = LEAPRAID_SAS_OP_SET_PARAMETER;
+ io_unit_ctrl_req->adapter_para = LEAPRAID_SET_PARAMETER_SYNC_TIMESTAMP;
+
+ current_time = ktime_get_real();
+ time_stamp = ktime_to_ms(current_time);
+
+ io_unit_ctrl_req->adapter_para_value =
+ cpu_to_le32(time_stamp & 0xFFFFFFFF);
+ io_unit_ctrl_req->adapter_para_value2 =
+ cpu_to_le32(time_stamp >> 32);
+ init_completion(&adapter->driver_cmds.timestamp_sync_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.timestamp_sync_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.timestamp_sync_cmd.done,
+ LEAPRAID_TIMESTAMP_SYNC_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.timestamp_sync_cmd.status &
+ LEAPRAID_CMD_DONE))
+ issue_reset =
+ leapraid_check_reset(
+ adapter->driver_cmds.timestamp_sync_cmd.status);
+
+ if (issue_reset) {
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ }
+
+ adapter->driver_cmds.timestamp_sync_cmd.status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&adapter->driver_cmds.timestamp_sync_cmd.mutex);
+}
+
+static bool leapraid_should_skip_fault_check(struct leapraid_adapter *adapter)
+{
+ unsigned long flags;
+ bool skip;
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ skip = adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering ||
+ adapter->access_ctrl.host_removing;
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+
+ return skip;
+}
+
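+/*
+ * Periodic fault poller: if the adapter has left the operational state,
+ * attempt a hard reset; every LEAPRAID_TIMESTAMP_SYNC_INTERVAL polls also
+ * resynchronize the firmware timestamp, then re-arm the delayed work.
+ */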
+static void leapraid_check_scheduled_fault_work(struct work_struct *work)
+{
+ struct leapraid_adapter *adapter;
+ unsigned long flags;
+ u32 adapter_state;
+ int rc;
+
+ adapter = container_of(work, struct leapraid_adapter,
+ reset_desc.fault_reset_work.work);
+
+ if (leapraid_should_skip_fault_check(adapter))
+ goto scheduled_timer;
+
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (adapter_state != LEAPRAID_DB_OPERATIONAL) {
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ rc = leapraid_hard_reset_handler(adapter, FULL_RESET);
+ dev_warn(&adapter->pdev->dev, "%s: hard reset: %s\n",
+ __func__, (rc == 0) ? "success" : "failed");
+
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (rc && adapter_state != LEAPRAID_DB_OPERATIONAL)
+ return;
+ }
+
+ if (++adapter->timestamp_sync_cnt >=
+ LEAPRAID_TIMESTAMP_SYNC_INTERVAL) {
+ adapter->timestamp_sync_cnt = 0;
+ leapraid_timestamp_sync(adapter);
+ }
+
+scheduled_timer:
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ if (adapter->reset_desc.fault_reset_wq)
+ queue_delayed_work(adapter->reset_desc.fault_reset_wq,
+ &adapter->reset_desc.fault_reset_work,
+ msecs_to_jiffies(LEAPRAID_FAULT_POLLING_INTERVAL));
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+}
+
+void leapraid_check_scheduled_fault_start(struct leapraid_adapter *adapter)
+{
+ unsigned long flags;
+
+ if (adapter->reset_desc.fault_reset_wq)
+ return;
+
+ adapter->timestamp_sync_cnt = 0;
+ INIT_DELAYED_WORK(&adapter->reset_desc.fault_reset_work,
+ leapraid_check_scheduled_fault_work);
+ snprintf(adapter->reset_desc.fault_reset_wq_name,
+ sizeof(adapter->reset_desc.fault_reset_wq_name),
+ "poll_%s%u_status",
+ LEAPRAID_DRIVER_NAME, adapter->adapter_attr.id);
+ adapter->reset_desc.fault_reset_wq =
+ create_singlethread_workqueue(
+ adapter->reset_desc.fault_reset_wq_name);
+ if (!adapter->reset_desc.fault_reset_wq) {
+ dev_err(&adapter->pdev->dev,
+ "create single thread workqueue failed!\n");
+ return;
+ }
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ if (adapter->reset_desc.fault_reset_wq)
+ queue_delayed_work(adapter->reset_desc.fault_reset_wq,
+ &adapter->reset_desc.fault_reset_work,
+ msecs_to_jiffies(LEAPRAID_FAULT_POLLING_INTERVAL));
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+}
+
+void leapraid_check_scheduled_fault_stop(struct leapraid_adapter *adapter)
+{
+ struct workqueue_struct *wq;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ wq = adapter->reset_desc.fault_reset_wq;
+ adapter->reset_desc.fault_reset_wq = NULL;
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+
+ if (!wq)
+ return;
+
+ if (!cancel_delayed_work_sync(&adapter->reset_desc.fault_reset_work))
+ flush_workqueue(wq);
+ destroy_workqueue(wq);
+}
+
+static bool leapraid_ready_for_scsi_io(struct leapraid_adapter *adapter,
+ u16 hdl)
+{
+ if (adapter->access_ctrl.pcie_recovering ||
+ adapter->access_ctrl.shost_recovering)
+ return false;
+
+ if (leapraid_check_adapter_is_op(adapter))
+ return false;
+
+ if (hdl == LEAPRAID_INVALID_DEV_HANDLE)
+ return false;
+
+ if (test_bit(hdl, (unsigned long *)adapter->dev_topo.dev_removing))
+ return false;
+
+ return true;
+}
+
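+/*
+ * Issue a driver-internal SCSI command described by cmd_desc through the
+ * normal queuecommand path, using a single coherent DMA buffer for the
+ * data phase. Only one internal SCSI IO may be outstanding at a time; a
+ * timeout may trigger a target reset on the handle.
+ */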
+static int leapraid_dispatch_scsi_io(struct leapraid_adapter *adapter,
+ struct leapraid_scsi_cmd_desc *cmd_desc)
+{
+ struct scsi_device *sdev;
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_cmnd *scmd;
+ void *dma_buffer = NULL;
+ dma_addr_t dma_addr = 0;
+ u8 sdev_flg = 0;
+ bool issue_reset = false;
+ int rc = 0;
+
+ if (WARN_ON(!adapter->driver_cmds.internal_scmd))
+ return -EINVAL;
+
+ if (!leapraid_ready_for_scsi_io(adapter, cmd_desc->hdl))
+ return -EINVAL;
+
+ mutex_lock(&adapter->driver_cmds.driver_scsiio_cmd.mutex);
+ if (adapter->driver_cmds.driver_scsiio_cmd.status !=
+ LEAPRAID_CMD_NOT_USED) {
+ rc = -EAGAIN;
+ goto out;
+ }
+ adapter->driver_cmds.driver_scsiio_cmd.status = LEAPRAID_CMD_PENDING;
+
+ __shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (sdev_priv->starget_priv->hdl == cmd_desc->hdl &&
+ sdev_priv->lun == cmd_desc->lun) {
+ sdev_flg = 1;
+ break;
+ }
+ }
+
+ if (!sdev_flg) {
+ rc = -ENXIO;
+ goto out;
+ }
+
+ if (cmd_desc->data_length) {
+ dma_buffer = dma_alloc_coherent(&adapter->pdev->dev,
+ cmd_desc->data_length,
+ &dma_addr, GFP_ATOMIC);
+ if (!dma_buffer) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ if (cmd_desc->dir == DMA_TO_DEVICE)
+ memcpy(dma_buffer, cmd_desc->data_buffer,
+ cmd_desc->data_length);
+ }
+
+ scmd = adapter->driver_cmds.internal_scmd;
+ scmd->device = sdev;
+ scmd->cmd_len = cmd_desc->cdb_length;
+ memcpy(scmd->cmnd, cmd_desc->cdb, cmd_desc->cdb_length);
+ scmd->sc_data_direction = cmd_desc->dir;
+ scmd->sdb.length = cmd_desc->data_length;
+ scmd->sdb.table.nents = 1;
+ scmd->sdb.table.orig_nents = 1;
+ sg_init_one(scmd->sdb.table.sgl, dma_buffer, cmd_desc->data_length);
+ init_completion(&adapter->driver_cmds.driver_scsiio_cmd.done);
+ if (leapraid_queuecommand(adapter->shost, scmd)) {
+ adapter->driver_cmds.driver_scsiio_cmd.status &=
+ ~LEAPRAID_CMD_PENDING;
+ complete(&adapter->driver_cmds.driver_scsiio_cmd.done);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ wait_for_completion_timeout(&adapter->driver_cmds.driver_scsiio_cmd.done,
+ cmd_desc->time_out * HZ);
+
+ if (!(adapter->driver_cmds.driver_scsiio_cmd.status &
+ LEAPRAID_CMD_DONE)) {
+ issue_reset =
+ leapraid_check_reset(
+ adapter->driver_cmds.driver_scsiio_cmd.status);
+ rc = -ENODATA;
+ goto reset;
+ }
+
+ rc = adapter->driver_cmds.internal_scmd->result;
+ if (!rc && cmd_desc->dir == DMA_FROM_DEVICE)
+ memcpy(cmd_desc->data_buffer, dma_buffer,
+ cmd_desc->data_length);
+
+reset:
+ if (issue_reset) {
+ rc = -ENODATA;
+ dev_err(&adapter->pdev->dev, "fire tgt reset: hdl=0x%04x\n",
+ cmd_desc->hdl);
+ leapraid_issue_locked_tm(adapter, cmd_desc->hdl, 0, 0, 0,
+ LEAPRAID_TM_TASKTYPE_TARGET_RESET,
+ adapter->driver_cmds.driver_scsiio_cmd.taskid,
+ LEAPRAID_TM_MSGFLAGS_LINK_RESET);
+ }
+out:
+ if (dma_buffer)
+ dma_free_coherent(&adapter->pdev->dev,
+ cmd_desc->data_length, dma_buffer, dma_addr);
+ adapter->driver_cmds.driver_scsiio_cmd.status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&adapter->driver_cmds.driver_scsiio_cmd.mutex);
+ return rc;
+}
+
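+/*
+ * Send a LOG SENSE to the device and, if the returned data carries the
+ * SMART trip code, flag a SMART fault for the handle.
+ */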
+static int leapraid_dispatch_logsense(struct leapraid_adapter *adapter,
+ u16 hdl, u32 lun)
+{
+ struct leapraid_scsi_cmd_desc *desc;
+ int rc = 0;
+
+ desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+ if (!desc)
+ return -ENOMEM;
+
+ desc->hdl = hdl;
+ desc->lun = lun;
+ desc->data_length = LEAPRAID_LOGSENSE_DATA_LENGTH;
+ desc->dir = DMA_FROM_DEVICE;
+ desc->cdb_length = LEAPRAID_LOGSENSE_CDB_LENGTH;
+ desc->cdb[0] = LOG_SENSE;
+ desc->cdb[2] = LEAPRAID_LOGSENSE_CDB_CODE;
+ desc->cdb[8] = desc->data_length;
+ desc->raid_member = false;
+ desc->time_out = LEAPRAID_LOGSENSE_TIMEOUT;
+
+ desc->data_buffer = kzalloc(desc->data_length, GFP_KERNEL);
+ if (!desc->data_buffer) {
+ kfree(desc);
+ return -ENOMEM;
+ }
+
+ rc = leapraid_dispatch_scsi_io(adapter, desc);
+ if (!rc) {
+ if (((char *)desc->data_buffer)[8] ==
+ LEAPRAID_LOGSENSE_SMART_CODE)
+ leapraid_smart_fault_detect(adapter, hdl);
+ }
+
+ kfree(desc->data_buffer);
+ kfree(desc);
+
+ return rc;
+}
+
+static bool leapraid_smart_poll_check(struct leapraid_adapter *adapter,
+ struct leapraid_sdev_priv *sdev_priv,
+ u32 reset_flg)
+{
+ struct leapraid_sas_dev *sas_dev = NULL;
+
+ if (!sdev_priv || !sdev_priv->starget_priv->card_port)
+ goto out;
+
+ sas_dev = leapraid_get_sas_dev_by_addr(adapter,
+ sdev_priv->starget_priv->sas_address,
+ sdev_priv->starget_priv->card_port);
+ if (!sas_dev || !sas_dev->support_smart)
+ goto out;
+
+ if (reset_flg)
+ sas_dev->led_on = false;
+ else if (sas_dev->led_on)
+ goto out;
+
+ if ((sdev_priv->starget_priv->flg & LEAPRAID_TGT_FLG_RAID_MEMBER) ||
+ (sdev_priv->starget_priv->flg & LEAPRAID_TGT_FLG_VOLUME) ||
+ sdev_priv->block)
+ goto out;
+
+ leapraid_sdev_put(sas_dev);
+ return true;
+
+out:
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+ return false;
+}
+
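+/*
+ * Periodic SMART poller: for every eligible end device (SMART capable,
+ * not a RAID member or volume, not blocked) issue a LOG SENSE to detect
+ * predicted failures, then re-arm the delayed work.
+ */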
+static void leapraid_sata_smart_poll_work(struct work_struct *work)
+{
+ struct leapraid_adapter *adapter =
+ container_of(work, struct leapraid_adapter,
+ smart_poll_desc.smart_poll_work.work);
+ struct scsi_device *sdev;
+ struct leapraid_sdev_priv *sdev_priv;
+ static u32 reset_cnt;
+ bool reset_flg = false;
+
+ if (leapraid_check_adapter_is_op(adapter))
+ goto out;
+
+ reset_flg = (reset_cnt < adapter->reset_desc.reset_cnt);
+ reset_cnt = adapter->reset_desc.reset_cnt;
+
+ __shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (leapraid_smart_poll_check(adapter, sdev_priv, reset_flg))
+ leapraid_dispatch_logsense(adapter,
+ sdev_priv->starget_priv->hdl,
+ sdev_priv->lun);
+ }
+
+out:
+ if (adapter->smart_poll_desc.smart_poll_wq)
+ queue_delayed_work(adapter->smart_poll_desc.smart_poll_wq,
+ &adapter->smart_poll_desc.smart_poll_work,
+ msecs_to_jiffies(LEAPRAID_SMART_POLLING_INTERVAL));
+}
+
+void leapraid_smart_polling_start(struct leapraid_adapter *adapter)
+{
+ if (adapter->smart_poll_desc.smart_poll_wq || !smart_poll)
+ return;
+
+ INIT_DELAYED_WORK(&adapter->smart_poll_desc.smart_poll_work,
+ leapraid_sata_smart_poll_work);
+
+ snprintf(adapter->smart_poll_desc.smart_poll_wq_name,
+ sizeof(adapter->smart_poll_desc.smart_poll_wq_name),
+ "poll_%s%u_smart_poll",
+ LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id);
+ adapter->smart_poll_desc.smart_poll_wq =
+ create_singlethread_workqueue(
+ adapter->smart_poll_desc.smart_poll_wq_name);
+ if (!adapter->smart_poll_desc.smart_poll_wq)
+ return;
+ queue_delayed_work(adapter->smart_poll_desc.smart_poll_wq,
+ &adapter->smart_poll_desc.smart_poll_work,
+ msecs_to_jiffies(LEAPRAID_SMART_POLLING_INTERVAL));
+}
+
+void leapraid_smart_polling_stop(struct leapraid_adapter *adapter)
+{
+ struct workqueue_struct *wq;
+
+ if (!adapter->smart_poll_desc.smart_poll_wq)
+ return;
+
+ wq = adapter->smart_poll_desc.smart_poll_wq;
+ adapter->smart_poll_desc.smart_poll_wq = NULL;
+
+ if (wq) {
+ if (!cancel_delayed_work_sync(&adapter->smart_poll_desc.smart_poll_work))
+ flush_workqueue(wq);
+ destroy_workqueue(wq);
+ }
+}
+
+static void leapraid_fw_work(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt);
+
+static void leapraid_fw_evt_free(struct kref *r)
+{
+ struct leapraid_fw_evt_work *fw_evt;
+
+ fw_evt = container_of(r, struct leapraid_fw_evt_work, refcnt);
+
+ kfree(fw_evt->evt_data);
+ kfree(fw_evt);
+}
+
+static void leapraid_fw_evt_get(struct leapraid_fw_evt_work *fw_evt)
+{
+ kref_get(&fw_evt->refcnt);
+}
+
+static void leapraid_fw_evt_put(struct leapraid_fw_evt_work *fw_work)
+{
+ kref_put(&fw_work->refcnt, leapraid_fw_evt_free);
+}
+
+static struct leapraid_fw_evt_work *leapraid_alloc_fw_evt_work(void)
+{
+ struct leapraid_fw_evt_work *fw_evt =
+ kzalloc(sizeof(*fw_evt), GFP_ATOMIC);
+ if (!fw_evt)
+ return NULL;
+
+ kref_init(&fw_evt->refcnt);
+ return fw_evt;
+}
+
+static void leapraid_run_fw_evt_work(struct work_struct *work)
+{
+ struct leapraid_fw_evt_work *fw_evt =
+ container_of(work, struct leapraid_fw_evt_work, work);
+
+ leapraid_fw_work(fw_evt->adapter, fw_evt);
+}
+
+static void leapraid_fw_evt_add(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ unsigned long flags;
+
+ if (!adapter->fw_evt_s.fw_evt_thread)
+ return;
+
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ leapraid_fw_evt_get(fw_evt);
+ INIT_LIST_HEAD(&fw_evt->list);
+ list_add_tail(&fw_evt->list, &adapter->fw_evt_s.fw_evt_list);
+ INIT_WORK(&fw_evt->work, leapraid_run_fw_evt_work);
+ leapraid_fw_evt_get(fw_evt);
+ queue_work(adapter->fw_evt_s.fw_evt_thread, &fw_evt->work);
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+}
+
+static void leapraid_del_fw_evt_from_list(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ if (!list_empty(&fw_evt->list)) {
+ list_del_init(&fw_evt->list);
+ leapraid_fw_evt_put(fw_evt);
+ }
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+}
+
+static struct leapraid_fw_evt_work *leapraid_next_fw_evt(
+ struct leapraid_adapter *adapter)
+{
+ struct leapraid_fw_evt_work *fw_evt = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ if (!list_empty(&adapter->fw_evt_s.fw_evt_list)) {
+ fw_evt = list_first_entry(&adapter->fw_evt_s.fw_evt_list,
+ struct leapraid_fw_evt_work, list);
+ list_del_init(&fw_evt->list);
+ leapraid_fw_evt_put(fw_evt);
+ }
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+ return fw_evt;
+}
+
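+/*
+ * Drain the firmware event queue: cancel every queued event work item and
+ * drop its reference. The event currently being processed is left to
+ * finish unless it is a dead-device removal.
+ */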
+void leapraid_clean_active_fw_evt(struct leapraid_adapter *adapter)
+{
+ struct leapraid_fw_evt_work *fw_evt;
+ bool rc = false;
+
+ if ((list_empty(&adapter->fw_evt_s.fw_evt_list) &&
+ !adapter->fw_evt_s.cur_evt) || !adapter->fw_evt_s.fw_evt_thread)
+ return;
+
+ adapter->fw_evt_s.fw_evt_cleanup = 1;
+ if (adapter->access_ctrl.shost_recovering &&
+ adapter->fw_evt_s.cur_evt)
+ adapter->fw_evt_s.cur_evt->ignore = 1;
+
+ while ((fw_evt = leapraid_next_fw_evt(adapter)) ||
+ (fw_evt = adapter->fw_evt_s.cur_evt)) {
+ if (fw_evt == adapter->fw_evt_s.cur_evt &&
+ adapter->fw_evt_s.cur_evt->evt_type !=
+ LEAPRAID_EVT_REMOVE_DEAD_DEV) {
+ adapter->fw_evt_s.cur_evt = NULL;
+ continue;
+ }
+
+ rc = cancel_work_sync(&fw_evt->work);
+
+ if (rc)
+ leapraid_fw_evt_put(fw_evt);
+ }
+ adapter->fw_evt_s.fw_evt_cleanup = 0;
+}
+
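+/*
+ * Unblock a device that was internally blocked. If the unblock is rejected
+ * with -EINVAL, re-block and retry the unblock once so the device does not
+ * get stuck in a blocked state.
+ */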
+static void leapraid_internal_dev_ublk(struct scsi_device *sdev,
+ struct leapraid_sdev_priv *sdev_priv)
+{
+ int rc = 0;
+
+ sdev_printk(KERN_WARNING, sdev,
+ "hdl 0x%04x: now internal unblkg dev\n",
+ sdev_priv->starget_priv->hdl);
+ sdev_priv->block = false;
+ rc = scsi_internal_device_unblock_nowait(sdev, SDEV_RUNNING);
+ if (rc == -EINVAL) {
+ sdev_printk(KERN_WARNING, sdev,
+ "hdl 0x%04x: unblkg failed, rc=%d\n",
+ sdev_priv->starget_priv->hdl, rc);
+ sdev_priv->block = true;
+ rc = scsi_internal_device_block_nowait(sdev);
+ if (rc)
+ sdev_printk(KERN_WARNING, sdev,
+ "hdl 0x%04x: blkg failed: earlier unblkg err, rc=%d\n",
+ sdev_priv->starget_priv->hdl, rc);
+
+ sdev_priv->block = false;
+ rc = scsi_internal_device_unblock_nowait(sdev, SDEV_RUNNING);
+ if (rc)
+ sdev_printk(KERN_WARNING, sdev,
+ "hdl 0x%04x: ublkg failed again, rc=%d\n",
+ sdev_priv->starget_priv->hdl, rc);
+ }
+}
+
+static void leapraid_internal_ublk_io_dev_to_running(struct scsi_device *sdev)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+
+ sdev_priv = sdev->hostdata;
+ sdev_priv->block = false;
+ scsi_internal_device_unblock_nowait(sdev, SDEV_RUNNING);
+ sdev_printk(KERN_WARNING, sdev, "%s: ublk hdl 0x%04x\n",
+ __func__, sdev_priv->starget_priv->hdl);
+}
+
+static void leapraid_ublk_io_dev_to_running(
+ struct leapraid_adapter *adapter, u64 sas_addr,
+ struct leapraid_card_port *card_port)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_device *sdev;
+
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->sas_address != sas_addr ||
+ sdev_priv->starget_priv->card_port != card_port)
+ continue;
+
+ if (sdev_priv->block)
+ leapraid_internal_ublk_io_dev_to_running(sdev);
+ }
+}
+
+static void leapraid_ublk_io_dev(struct leapraid_adapter *adapter,
+ u64 sas_addr,
+ struct leapraid_card_port *card_port)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_device *sdev;
+
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv || !sdev_priv->starget_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->sas_address != sas_addr)
+ continue;
+
+ if (sdev_priv->starget_priv->card_port != card_port)
+ continue;
+
+ if (sdev_priv->block)
+ leapraid_internal_dev_ublk(sdev, sdev_priv);
+
+ scsi_device_set_state(sdev, SDEV_OFFLINE);
+ }
+}
+
+static void leapraid_ublk_io_all_dev(struct leapraid_adapter *adapter)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_starget_priv *stgt_priv;
+ struct scsi_device *sdev;
+
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+
+ if (!sdev_priv)
+ continue;
+
+ stgt_priv = sdev_priv->starget_priv;
+ if (!stgt_priv || stgt_priv->deleted)
+ continue;
+
+ if (!sdev_priv->block)
+ continue;
+
+		sdev_printk(KERN_WARNING, sdev, "hdl 0x%04x: unblocking...\n",
+ sdev_priv->starget_priv->hdl);
+ leapraid_internal_dev_ublk(sdev, sdev_priv);
+ }
+}
+
+static void __maybe_unused leapraid_internal_dev_blk(
+ struct scsi_device *sdev,
+ struct leapraid_sdev_priv *sdev_priv)
+{
+ int rc = 0;
+
+ sdev_printk(KERN_INFO, sdev, "internal blkg hdl 0x%04x\n",
+ sdev_priv->starget_priv->hdl);
+ sdev_priv->block = true;
+ rc = scsi_internal_device_block_nowait(sdev);
+ if (rc == -EINVAL)
+ sdev_printk(KERN_WARNING, sdev,
+ "hdl 0x%04x: blkg failed, rc=%d\n",
+			    sdev_priv->starget_priv->hdl, rc);
+}
+
+static void __maybe_unused leapraid_blkio_dev(struct leapraid_adapter *adapter,
+ u16 hdl)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_device *sdev;
+
+ sas_dev = leapraid_get_sas_dev_by_hdl(adapter, hdl);
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->hdl != hdl)
+ continue;
+
+ if (sdev_priv->block)
+ continue;
+
+ if (sas_dev && sas_dev->pend_sas_rphy_add)
+ continue;
+
+ if (sdev_priv->sep) {
+ sdev_printk(KERN_INFO, sdev,
+ "sep hdl 0x%04x skip blkg\n",
+ sdev_priv->starget_priv->hdl);
+ continue;
+ }
+
+ leapraid_internal_dev_blk(sdev, sdev_priv);
+ }
+
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+}
+
+static void leapraid_imm_blkio_to_end_dev(struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_device *sdev;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(
+ adapter,
+ sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+
+ if (sas_dev) {
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+			if (!sdev_priv || !sdev_priv->starget_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->hdl != sas_dev->hdl)
+ continue;
+
+ if (sdev_priv->block)
+ continue;
+
+			if (sas_dev->pend_sas_rphy_add)
+ continue;
+
+ if (sdev_priv->sep) {
+ sdev_printk(KERN_INFO, sdev,
+ "%s skip dev blk for sep hdl 0x%04x\n",
+ __func__,
+ sdev_priv->starget_priv->hdl);
+ continue;
+ }
+
+ leapraid_internal_dev_blk(sdev, sdev_priv);
+ }
+
+ leapraid_sdev_put(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+}
+
+static void leapraid_imm_blkio_set_end_dev_blk_hdls(
+ struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp)
+{
+ struct leapraid_sas_port *sas_port;
+
+ list_for_each_entry(sas_port,
+ &topo_node_exp->sas_port_list, port_list) {
+ if (sas_port->remote_identify.device_type ==
+ SAS_END_DEVICE) {
+ leapraid_imm_blkio_to_end_dev(adapter, sas_port);
+ }
+ }
+}
+
+static void leapraid_imm_blkio_to_kids_attchd_to_ex(
+ struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp);
+
+static void leapraid_imm_blkio_to_sib_exp(
+ struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp)
+{
+ struct leapraid_topo_node *topo_node_exp_sib;
+ struct leapraid_sas_port *sas_port;
+
+ list_for_each_entry(sas_port,
+ &topo_node_exp->sas_port_list, port_list) {
+ if (sas_port->remote_identify.device_type ==
+ SAS_EDGE_EXPANDER_DEVICE ||
+ sas_port->remote_identify.device_type ==
+ SAS_FANOUT_EXPANDER_DEVICE) {
+ topo_node_exp_sib =
+ leapraid_exp_find_by_sas_address(
+ adapter,
+ sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+ leapraid_imm_blkio_to_kids_attchd_to_ex(
+ adapter,
+ topo_node_exp_sib);
+ }
+ }
+}
+
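+/*
+ * Block I/O to every end device attached to @topo_node_exp, then recurse
+ * into any expanders attached to its SAS ports.
+ */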
+static void leapraid_imm_blkio_to_kids_attchd_to_ex(
+ struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp)
+{
+ if (!topo_node_exp)
+ return;
+
+ leapraid_imm_blkio_set_end_dev_blk_hdls(adapter, topo_node_exp);
+
+ leapraid_imm_blkio_to_sib_exp(adapter, topo_node_exp);
+}
+
+static void leapraid_report_sdev_directly(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev)
+{
+ struct leapraid_sas_port *sas_port;
+
+ sas_port = leapraid_transport_port_add(adapter,
+ sas_dev->hdl,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+ if (!sas_port) {
+ leapraid_sas_dev_remove(adapter, sas_dev);
+ return;
+ }
+
+ if (!sas_dev->starget) {
+ if (!adapter->scan_dev_desc.driver_loading) {
+ leapraid_transport_port_remove(adapter,
+ sas_dev->sas_addr,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+ leapraid_sas_dev_remove(adapter, sas_dev);
+ }
+ return;
+ }
+
+ clear_bit(sas_dev->hdl,
+ (unsigned long *)adapter->dev_topo.pending_dev_add);
+}
+
+static struct leapraid_sas_dev *leapraid_init_sas_dev(
+ struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev_p0 *sas_dev_pg0,
+ struct leapraid_card_port *card_port, u16 hdl,
+ u64 parent_sas_addr, u64 sas_addr, u32 dev_info)
+{
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_enc_node *enc_dev;
+
+ sas_dev = kzalloc(sizeof(*sas_dev), GFP_KERNEL);
+ if (!sas_dev)
+ return NULL;
+
+ kref_init(&sas_dev->refcnt);
+ sas_dev->hdl = hdl;
+ sas_dev->dev_info = dev_info;
+ sas_dev->sas_addr = sas_addr;
+ sas_dev->card_port = card_port;
+ sas_dev->parent_sas_addr = parent_sas_addr;
+ sas_dev->phy = sas_dev_pg0->phy_num;
+ sas_dev->enc_hdl = le16_to_cpu(sas_dev_pg0->enc_hdl);
+ sas_dev->dev_name = le64_to_cpu(sas_dev_pg0->dev_name);
+ sas_dev->port_type = sas_dev_pg0->max_port_connections;
+ sas_dev->slot = sas_dev->enc_hdl ? le16_to_cpu(sas_dev_pg0->slot) : 0;
+ sas_dev->support_smart = (le16_to_cpu(sas_dev_pg0->flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_SATA_SMART);
+ if (le16_to_cpu(sas_dev_pg0->flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_ENC_LEVEL_VALID) {
+ sas_dev->enc_level = sas_dev_pg0->enc_level;
+ memcpy(sas_dev->connector_name, sas_dev_pg0->connector_name, 4);
+ sas_dev->connector_name[4] = '\0';
+ } else {
+ sas_dev->enc_level = 0;
+ sas_dev->connector_name[0] = '\0';
+ }
+ if (le16_to_cpu(sas_dev_pg0->enc_hdl)) {
+ enc_dev = leapraid_enc_find_by_hdl(adapter,
+ le16_to_cpu(sas_dev_pg0->enc_hdl));
+ sas_dev->enc_lid = enc_dev ?
+ le64_to_cpu(enc_dev->pg0.enc_lid) : 0;
+ }
+ dev_info(&adapter->pdev->dev,
+		"add dev: hdl=0x%04x, sas addr=0x%016llx, port_type=0x%x\n",
+ hdl, sas_dev->sas_addr, sas_dev->port_type);
+
+ return sas_dev;
+}
+
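+/*
+ * Read SAS device page 0 for @hdl, allocate a leapraid_sas_dev for it and
+ * either queue it on the init list during the initial scan or add it to the
+ * main device list and report it to the SCSI midlayer right away.
+ */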
+static void leapraid_add_dev(struct leapraid_adapter *adapter, u16 hdl)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_pg0;
+ struct leapraid_card_port *card_port;
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ u64 parent_sas_addr;
+ u32 dev_info;
+ u64 sas_addr;
+ u8 port_id;
+
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = hdl;
+ if ((leapraid_op_config_page(adapter, &sas_dev_pg0,
+ cfgp1, cfgp2, GET_SAS_DEVICE_PG0)))
+ return;
+
+ dev_info = le32_to_cpu(sas_dev_pg0.dev_info);
+ if (!(leapraid_is_end_dev(dev_info)))
+ return;
+
+ set_bit(hdl, (unsigned long *)adapter->dev_topo.pending_dev_add);
+ sas_addr = le64_to_cpu(sas_dev_pg0.sas_address);
+ if (!(le16_to_cpu(sas_dev_pg0.flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_DEV_PRESENT))
+ return;
+
+ port_id = sas_dev_pg0.physical_port;
+ card_port = leapraid_get_port_by_id(adapter, port_id, false);
+ if (!card_port)
+ return;
+
+ sas_dev = leapraid_get_sas_dev_by_addr(adapter, sas_addr, card_port);
+ if (sas_dev) {
+ clear_bit(hdl,
+ (unsigned long *)adapter->dev_topo.pending_dev_add);
+ leapraid_sdev_put(sas_dev);
+ return;
+ }
+
+ if (leapraid_get_sas_address(adapter,
+ le16_to_cpu(sas_dev_pg0.parent_dev_hdl),
+ &parent_sas_addr))
+ return;
+
+ sas_dev = leapraid_init_sas_dev(adapter, &sas_dev_pg0, card_port,
+ hdl, parent_sas_addr, sas_addr,
+ dev_info);
+ if (!sas_dev)
+ return;
+ if (adapter->scan_dev_desc.wait_scan_dev_done) {
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ leapraid_sdev_get(sas_dev);
+ list_add_tail(&sas_dev->list,
+ &adapter->dev_topo.sas_dev_init_list);
+ leapraid_check_boot_dev(adapter, sas_dev, 0);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ } else {
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ leapraid_sdev_get(sas_dev);
+ list_add_tail(&sas_dev->list, &adapter->dev_topo.sas_dev_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ leapraid_report_sdev_directly(adapter, sas_dev);
+ }
+}
+
+static void leapraid_remove_device(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev)
+{
+ struct leapraid_starget_priv *starget_priv;
+
+ if (sas_dev->led_on) {
+ leapraid_set_led(adapter, sas_dev, false);
+ sas_dev->led_on = false;
+ }
+
+ if (sas_dev->starget && sas_dev->starget->hostdata) {
+ starget_priv = sas_dev->starget->hostdata;
+ starget_priv->deleted = true;
+ leapraid_ublk_io_dev(adapter,
+ sas_dev->sas_addr, sas_dev->card_port);
+ starget_priv->hdl = LEAPRAID_INVALID_DEV_HANDLE;
+ }
+
+ leapraid_transport_port_remove(adapter,
+ sas_dev->sas_addr,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+
+ dev_info(&adapter->pdev->dev,
+ "remove dev: hdl=0x%04x, sas addr=0x%016llx\n",
+ sas_dev->hdl, (unsigned long long)sas_dev->sas_addr);
+}
+
+static struct leapraid_vphy *leapraid_alloc_vphy(struct leapraid_adapter *adapter,
+ u8 port_id, u8 phy_num)
+{
+ struct leapraid_card_port *port;
+ struct leapraid_vphy *vphy;
+
+ port = leapraid_get_port_by_id(adapter, port_id, false);
+ if (!port)
+ return NULL;
+
+ vphy = leapraid_get_vphy_by_phy(port, phy_num);
+ if (vphy)
+ return vphy;
+
+ vphy = kzalloc(sizeof(*vphy), GFP_KERNEL);
+ if (!vphy)
+ return NULL;
+
+ if (!port->vphys_mask)
+ INIT_LIST_HEAD(&port->vphys_list);
+
+ port->vphys_mask |= BIT(phy_num);
+ vphy->phy_mask |= BIT(phy_num);
+ list_add_tail(&vphy->list, &port->vphys_list);
+ return vphy;
+}
+
+static int leapraid_add_port_to_card_port_list(struct leapraid_adapter *adapter,
+ u8 port_id, bool refresh)
+{
+ struct leapraid_card_port *card_port;
+
+ card_port = leapraid_get_port_by_id(adapter, port_id, false);
+ if (card_port)
+ return 0;
+
+ card_port = kzalloc(sizeof(*card_port), GFP_KERNEL);
+ if (!card_port)
+ return -ENOMEM;
+
+ card_port->port_id = port_id;
+ dev_info(&adapter->pdev->dev,
+ "port: %d is added to card_port list\n",
+ card_port->port_id);
+
+ if (refresh)
+ if (adapter->access_ctrl.shost_recovering)
+ card_port->flg = LEAPRAID_CARD_PORT_FLG_NEW;
+ list_add_tail(&card_port->list, &adapter->dev_topo.card_port_list);
+ return 0;
+}
+
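+/*
+ * Add (@refresh == false) or refresh (@refresh == true) the adapter's own
+ * SAS host phys from SAS IO unit page 0, creating card ports and virtual
+ * phys as needed.
+ */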
+static void leapraid_sas_host_add(struct leapraid_adapter *adapter,
+ bool refresh)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_phy_p0 phy_pg0;
+ struct leapraid_sas_dev_p0 sas_dev_pg0;
+ struct leapraid_enc_p0 enc_pg0;
+ struct leapraid_sas_io_unit_p0 *sas_iou_pg0;
+ u16 sas_iou_pg0_sz;
+ u16 attached_hdl;
+ u8 phys_num;
+ u8 port_id;
+ u8 link_rate;
+ int i;
+
+ if (!refresh) {
+ if (leapraid_get_adapter_phys(adapter, &phys_num) || !phys_num)
+ return;
+
+ adapter->dev_topo.card.card_phy =
+ kcalloc(phys_num,
+ sizeof(struct leapraid_card_phy), GFP_KERNEL);
+ if (!adapter->dev_topo.card.card_phy)
+ return;
+
+ adapter->dev_topo.card.phys_num = phys_num;
+ }
+
+ sas_iou_pg0_sz = offsetof(struct leapraid_sas_io_unit_p0, phy_info) +
+ (adapter->dev_topo.card.phys_num *
+ sizeof(struct leapraid_sas_io_unit0_phy_info));
+ sas_iou_pg0 = kzalloc(sas_iou_pg0_sz, GFP_KERNEL);
+ if (!sas_iou_pg0)
+ goto out;
+
+ if (leapraid_get_sas_io_unit_page0(adapter,
+ sas_iou_pg0,
+ sas_iou_pg0_sz))
+ goto out;
+
+ adapter->dev_topo.card.parent_dev = &adapter->shost->shost_gendev;
+ adapter->dev_topo.card.hdl =
+ le16_to_cpu(sas_iou_pg0->phy_info[0].controller_dev_hdl);
+ for (i = 0; i < adapter->dev_topo.card.phys_num; i++) {
+ if (!refresh) { /* add */
+ cfgp1.phy_number = i;
+ if (leapraid_op_config_page(adapter, &phy_pg0, cfgp1,
+ cfgp2, GET_PHY_PG0))
+ goto out;
+
+ port_id = sas_iou_pg0->phy_info[i].port;
+ if (leapraid_add_port_to_card_port_list(adapter,
+ port_id,
+ false))
+ goto out;
+
+ if ((le32_to_cpu(phy_pg0.phy_info) &
+ LEAPRAID_SAS_PHYINFO_VPHY) &&
+ (phy_pg0.neg_link_rate >> 4) >=
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5) {
+ if (!leapraid_alloc_vphy(adapter, port_id, i))
+ goto out;
+ adapter->dev_topo.card.card_phy[i].vphy = true;
+ }
+
+ adapter->dev_topo.card.card_phy[i].hdl =
+ adapter->dev_topo.card.hdl;
+ adapter->dev_topo.card.card_phy[i].phy_id = i;
+ adapter->dev_topo.card.card_phy[i].card_port =
+ leapraid_get_port_by_id(adapter,
+ port_id,
+ false);
+ leapraid_transport_add_card_phy(
+ adapter,
+ &adapter->dev_topo.card.card_phy[i],
+ &phy_pg0, adapter->dev_topo.card.parent_dev);
+ } else { /* refresh */
+ link_rate = sas_iou_pg0->phy_info[i].neg_link_rate >> 4;
+ port_id = sas_iou_pg0->phy_info[i].port;
+ if (leapraid_add_port_to_card_port_list(adapter,
+ port_id,
+ true))
+ goto out;
+
+ if (le32_to_cpu(sas_iou_pg0->phy_info[i]
+ .controller_phy_dev_info) &
+ LEAPRAID_DEVTYP_SEP &&
+ link_rate >= LEAPRAID_SAS_NEG_LINK_RATE_1_5) {
+ cfgp1.phy_number = i;
+ if ((leapraid_op_config_page(adapter, &phy_pg0,
+ cfgp1, cfgp2,
+ GET_PHY_PG0)))
+ continue;
+
+ if ((le32_to_cpu(phy_pg0.phy_info) &
+ LEAPRAID_SAS_PHYINFO_VPHY)) {
+ if (!leapraid_alloc_vphy(adapter,
+ port_id,
+ i))
+ goto out;
+ adapter->dev_topo.card.card_phy[i].vphy = true;
+ }
+ }
+
+ adapter->dev_topo.card.card_phy[i].hdl =
+ adapter->dev_topo.card.hdl;
+ attached_hdl =
+ le16_to_cpu(sas_iou_pg0->phy_info[i].attached_dev_hdl);
+ if (attached_hdl && link_rate < LEAPRAID_SAS_NEG_LINK_RATE_1_5)
+ link_rate = LEAPRAID_SAS_NEG_LINK_RATE_1_5;
+
+ adapter->dev_topo.card.card_phy[i].card_port =
+ leapraid_get_port_by_id(adapter,
+ port_id,
+ false);
+ if (!adapter->dev_topo.card.card_phy[i].phy) {
+ cfgp1.phy_number = i;
+ if ((leapraid_op_config_page(adapter, &phy_pg0,
+ cfgp1, cfgp2,
+ GET_PHY_PG0)))
+ continue;
+
+ adapter->dev_topo.card.card_phy[i].phy_id = i;
+ leapraid_transport_add_card_phy(adapter,
+ &adapter->dev_topo.card.card_phy[i],
+ &phy_pg0,
+ adapter->dev_topo.card.parent_dev);
+ continue;
+ }
+
+ leapraid_transport_update_links(adapter,
+ adapter->dev_topo.card.sas_address,
+ attached_hdl, i, link_rate,
+ adapter->dev_topo.card.card_phy[i].card_port);
+ }
+ }
+
+ if (!refresh) {
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = adapter->dev_topo.card.hdl;
+ if ((leapraid_op_config_page(adapter, &sas_dev_pg0, cfgp1,
+ cfgp2, GET_SAS_DEVICE_PG0)))
+ goto out;
+
+ adapter->dev_topo.card.enc_hdl =
+ le16_to_cpu(sas_dev_pg0.enc_hdl);
+ adapter->dev_topo.card.sas_address =
+ le64_to_cpu(sas_dev_pg0.sas_address);
+ dev_info(&adapter->pdev->dev,
+ "add host: devhdl=0x%04x, sas addr=0x%016llx, phynums=%d\n",
+ adapter->dev_topo.card.hdl,
+ (unsigned long long)adapter->dev_topo.card.sas_address,
+ adapter->dev_topo.card.phys_num);
+
+ if (adapter->dev_topo.card.enc_hdl) {
+ cfgp1.form = LEAPRAID_SAS_ENC_CFG_PGAD_HDL;
+ cfgp2.handle = adapter->dev_topo.card.enc_hdl;
+ if (!(leapraid_op_config_page(adapter, &enc_pg0,
+ cfgp1, cfgp2,
+ GET_SAS_ENCLOSURE_PG0)))
+ adapter->dev_topo.card.enc_lid =
+ le64_to_cpu(enc_pg0.enc_lid);
+ }
+ }
+out:
+ kfree(sas_iou_pg0);
+}
+
+static int leapraid_internal_exp_add(struct leapraid_adapter *adapter,
+ struct leapraid_exp_p0 *exp_pg0,
+ union cfg_param_1 *cfgp1,
+ union cfg_param_2 *cfgp2,
+ u16 hdl)
+{
+ struct leapraid_topo_node *topo_node_exp;
+ struct leapraid_sas_port *sas_port = NULL;
+ struct leapraid_enc_node *enc_dev;
+ struct leapraid_exp_p1 exp_pg1;
+ int rc = 0;
+ unsigned long flags;
+ u8 port_id;
+ u16 parent_handle;
+ u64 sas_addr_parent = 0;
+ int i;
+
+ port_id = exp_pg0->physical_port;
+ parent_handle = le16_to_cpu(exp_pg0->parent_dev_hdl);
+
+ if (leapraid_get_sas_address(adapter, parent_handle, &sas_addr_parent))
+ return -1;
+
+ topo_node_exp = kzalloc(sizeof(*topo_node_exp), GFP_KERNEL);
+ if (!topo_node_exp)
+ return -1;
+
+ topo_node_exp->hdl = hdl;
+ topo_node_exp->phys_num = exp_pg0->phy_num;
+ topo_node_exp->sas_address_parent = sas_addr_parent;
+ topo_node_exp->sas_address = le64_to_cpu(exp_pg0->sas_address);
+ topo_node_exp->card_port =
+ leapraid_get_port_by_id(adapter, port_id, false);
+ if (!topo_node_exp->card_port) {
+ rc = -1;
+ goto out_fail;
+ }
+
+ dev_info(&adapter->pdev->dev,
+ "add exp: sas addr=0x%016llx, hdl=0x%04x, phdl=0x%04x, phys=%d\n",
+ (unsigned long long)topo_node_exp->sas_address,
+ hdl, parent_handle,
+ topo_node_exp->phys_num);
+ if (!topo_node_exp->phys_num) {
+ rc = -1;
+ goto out_fail;
+ }
+
+ topo_node_exp->card_phy =
+ kcalloc(topo_node_exp->phys_num,
+ sizeof(struct leapraid_card_phy), GFP_KERNEL);
+ if (!topo_node_exp->card_phy) {
+ rc = -1;
+ goto out_fail;
+ }
+
+ INIT_LIST_HEAD(&topo_node_exp->sas_port_list);
+ sas_port = leapraid_transport_port_add(adapter, hdl, sas_addr_parent,
+ topo_node_exp->card_port);
+ if (!sas_port) {
+ rc = -1;
+ goto out_fail;
+ }
+
+ topo_node_exp->parent_dev = &sas_port->rphy->dev;
+ topo_node_exp->rphy = sas_port->rphy;
+ for (i = 0; i < topo_node_exp->phys_num; i++) {
+ cfgp1->phy_number = i;
+ cfgp2->handle = hdl;
+ if ((leapraid_op_config_page(adapter, &exp_pg1, *cfgp1, *cfgp2,
+ GET_SAS_EXPANDER_PG1))) {
+ rc = -1;
+ goto out_fail;
+ }
+
+ topo_node_exp->card_phy[i].hdl = hdl;
+ topo_node_exp->card_phy[i].phy_id = i;
+ topo_node_exp->card_phy[i].card_port =
+ leapraid_get_port_by_id(adapter, port_id, false);
+ if ((leapraid_transport_add_exp_phy(adapter,
+ &topo_node_exp->card_phy[i],
+ &exp_pg1,
+ topo_node_exp->parent_dev))) {
+ rc = -1;
+ goto out_fail;
+ }
+ }
+
+ if (topo_node_exp->enc_hdl) {
+ enc_dev = leapraid_enc_find_by_hdl(adapter,
+ topo_node_exp->enc_hdl);
+ if (enc_dev)
+ topo_node_exp->enc_lid =
+ le64_to_cpu(enc_dev->pg0.enc_lid);
+ }
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_add_tail(&topo_node_exp->list, &adapter->dev_topo.exp_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+ return 0;
+
+out_fail:
+ if (sas_port)
+ leapraid_transport_port_remove(adapter,
+ topo_node_exp->sas_address,
+ sas_addr_parent,
+ topo_node_exp->card_port);
+ kfree(topo_node_exp);
+ return rc;
+}
+
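+/*
+ * Add the expander with handle @hdl, adding its parent expander first when
+ * it is not attached directly to the adapter and is not yet known.
+ */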
+static int leapraid_exp_add(struct leapraid_adapter *adapter, u16 hdl)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_topo_node *topo_node_exp;
+ struct leapraid_exp_p0 exp_pg0;
+ u16 parent_handle;
+ u64 sas_addr, sas_addr_parent = 0;
+ unsigned long flags;
+ u8 port_id;
+ int rc = 0;
+
+ if (!hdl)
+ return -EPERM;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering)
+ return -EPERM;
+
+ cfgp1.form = LEAPRAID_SAS_EXP_CFD_PGAD_HDL;
+ cfgp2.handle = hdl;
+ if ((leapraid_op_config_page(adapter, &exp_pg0, cfgp1, cfgp2,
+ GET_SAS_EXPANDER_PG0)))
+ return -EPERM;
+
+ parent_handle = le16_to_cpu(exp_pg0.parent_dev_hdl);
+ if (leapraid_get_sas_address(adapter, parent_handle, &sas_addr_parent))
+ return -EPERM;
+
+ port_id = exp_pg0.physical_port;
+ if (sas_addr_parent != adapter->dev_topo.card.sas_address) {
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node_exp =
+ leapraid_exp_find_by_sas_address(adapter,
+ sas_addr_parent,
+ leapraid_get_port_by_id(adapter, port_id, false));
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+ if (!topo_node_exp) {
+ rc = leapraid_exp_add(adapter, parent_handle);
+ if (rc != 0)
+ return rc;
+ }
+ }
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ sas_addr = le64_to_cpu(exp_pg0.sas_address);
+ topo_node_exp =
+ leapraid_exp_find_by_sas_address(adapter, sas_addr,
+ leapraid_get_port_by_id(adapter, port_id, false));
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ if (topo_node_exp)
+ return 0;
+
+ return leapraid_internal_exp_add(adapter, &exp_pg0, &cfgp1,
+ &cfgp2, hdl);
+}
+
+static void leapraid_exp_node_rm(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp)
+{
+ struct leapraid_sas_port *sas_port, *sas_port_next;
+ unsigned long flags;
+ int port_id;
+
+ list_for_each_entry_safe(sas_port, sas_port_next,
+ &topo_node_exp->sas_port_list,
+ port_list) {
+ if (adapter->access_ctrl.shost_recovering)
+ return;
+
+ switch (sas_port->remote_identify.device_type) {
+ case SAS_END_DEVICE:
+ leapraid_sas_dev_remove_by_sas_address(
+ adapter,
+ sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+ break;
+ case SAS_EDGE_EXPANDER_DEVICE:
+ case SAS_FANOUT_EXPANDER_DEVICE:
+ leapraid_exp_rm(
+ adapter,
+ sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+ break;
+ default:
+ break;
+ }
+ }
+
+ port_id = topo_node_exp->card_port->port_id;
+ leapraid_transport_port_remove(adapter, topo_node_exp->sas_address,
+ topo_node_exp->sas_address_parent,
+ topo_node_exp->card_port);
+ dev_info(&adapter->pdev->dev,
+ "removing exp: port=%d, sas addr=0x%016llx, hdl=0x%04x\n",
+ port_id, (unsigned long long)topo_node_exp->sas_address,
+ topo_node_exp->hdl);
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_del(&topo_node_exp->list);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+ kfree(topo_node_exp->card_phy);
+ kfree(topo_node_exp);
+}
+
+void leapraid_exp_rm(struct leapraid_adapter *adapter, u64 sas_addr,
+ struct leapraid_card_port *port)
+{
+ struct leapraid_topo_node *topo_node_exp;
+ unsigned long flags;
+
+ if (adapter->access_ctrl.shost_recovering)
+ return;
+
+ if (!port)
+ return;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node_exp = leapraid_exp_find_by_sas_address(adapter,
+ sas_addr,
+ port);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ if (topo_node_exp)
+ leapraid_exp_node_rm(adapter, topo_node_exp);
+}
+
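+/*
+ * Re-read SAS device page 0 for @handle and, if the device is still present,
+ * refresh its handle and enclosure data and unblock its SCSI devices.
+ */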
+static void leapraid_check_device(struct leapraid_adapter *adapter,
+ u64 parent_sas_address, u16 handle,
+ u8 phy_number, u8 link_rate)
+{
+ struct leapraid_sas_dev_p0 sas_device_pg0;
+ struct leapraid_sas_dev *sas_dev = NULL;
+ struct leapraid_enc_node *enclosure_dev = NULL;
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ unsigned long flags;
+ u64 sas_address;
+ struct scsi_target *starget;
+ struct leapraid_starget_priv *sas_target_priv_data;
+ u32 device_info;
+ struct leapraid_card_port *port;
+
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = handle;
+ if ((leapraid_op_config_page(adapter, &sas_device_pg0, cfgp1, cfgp2,
+ GET_SAS_DEVICE_PG0)))
+ return;
+
+ if (phy_number != sas_device_pg0.phy_num)
+ return;
+
+ device_info = le32_to_cpu(sas_device_pg0.dev_info);
+ if (!(leapraid_is_end_dev(device_info)))
+ return;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_address = le64_to_cpu(sas_device_pg0.sas_address);
+ port = leapraid_get_port_by_id(adapter, sas_device_pg0.physical_port,
+ false);
+ if (!port)
+ goto out_unlock;
+
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter, sas_address,
+ port);
+ if (!sas_dev)
+ goto out_unlock;
+
+ if (unlikely(sas_dev->hdl != handle)) {
+ starget = sas_dev->starget;
+ sas_target_priv_data = starget->hostdata;
+ starget_printk(KERN_INFO, starget,
+ "hdl changed from 0x%04x to 0x%04x!\n",
+ sas_dev->hdl, handle);
+ sas_target_priv_data->hdl = handle;
+ sas_dev->hdl = handle;
+ if (le16_to_cpu(sas_device_pg0.flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_ENC_LEVEL_VALID) {
+ sas_dev->enc_level =
+ sas_device_pg0.enc_level;
+ memcpy(sas_dev->connector_name,
+ sas_device_pg0.connector_name, 4);
+ sas_dev->connector_name[4] = '\0';
+ } else {
+ sas_dev->enc_level = 0;
+ sas_dev->connector_name[0] = '\0';
+ }
+ sas_dev->enc_hdl =
+ le16_to_cpu(sas_device_pg0.enc_hdl);
+ enclosure_dev =
+ leapraid_enc_find_by_hdl(adapter, sas_dev->enc_hdl);
+ if (enclosure_dev) {
+ sas_dev->enc_lid =
+ le64_to_cpu(enclosure_dev->pg0.enc_lid);
+ }
+ }
+
+ if (!(le16_to_cpu(sas_device_pg0.flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_DEV_PRESENT))
+ goto out_unlock;
+
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ leapraid_ublk_io_dev_to_running(adapter, sas_address, port);
+ goto out;
+
+out_unlock:
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+out:
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+}
+
+static int leapraid_internal_sas_topo_chg_evt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_card_port *card_port,
+ struct leapraid_topo_node *topo_node_exp,
+ struct leapraid_fw_evt_work *fw_evt,
+ u64 sas_addr, u8 max_phys)
+{
+ struct leapraid_evt_data_sas_topo_change_list *evt_data;
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ u8 phy_number;
+ u8 link_rate, prev_link_rate;
+ u16 reason_code;
+ u16 hdl;
+ int i;
+
+ evt_data = fw_evt->evt_data;
+ for (i = 0; i < evt_data->entry_num; i++) {
+ if (fw_evt->ignore)
+ return 0;
+
+ if (adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering)
+ return 0;
+
+ phy_number = evt_data->start_phy_num + i;
+ if (phy_number >= max_phys)
+ continue;
+
+ reason_code = evt_data->phy[i].phy_status &
+ LEAPRAID_EVT_SAS_TOPO_RC_MASK;
+
+ hdl = le16_to_cpu(evt_data->phy[i].attached_dev_hdl);
+ if (!hdl)
+ continue;
+
+ link_rate = evt_data->phy[i].link_rate >> 4;
+ prev_link_rate = evt_data->phy[i].link_rate & 0xF;
+ switch (reason_code) {
+ case LEAPRAID_EVT_SAS_TOPO_RC_PHY_CHANGED:
+ if (adapter->access_ctrl.shost_recovering)
+ break;
+
+ if (link_rate == prev_link_rate)
+ break;
+
+ leapraid_transport_update_links(adapter, sas_addr,
+ hdl, phy_number,
+ link_rate, card_port);
+ if (link_rate < LEAPRAID_SAS_NEG_LINK_RATE_1_5)
+ break;
+
+ leapraid_check_device(adapter, sas_addr, hdl,
+ phy_number, link_rate);
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock,
+ flags);
+ sas_dev =
+ leapraid_hold_lock_get_sas_dev_by_hdl(
+ adapter, hdl);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock,
+ flags);
+ if (sas_dev) {
+ leapraid_sdev_put(sas_dev);
+ break;
+ }
+ if (!test_bit(hdl, (unsigned long *)adapter->dev_topo.pending_dev_add))
+ break;
+
+ evt_data->phy[i].phy_status &=
+ LEAPRAID_EVT_SAS_TOPO_RC_CLEAR_MASK;
+ evt_data->phy[i].phy_status |=
+ LEAPRAID_EVT_SAS_TOPO_RC_TARG_ADDED;
+ fallthrough;
+
+ case LEAPRAID_EVT_SAS_TOPO_RC_TARG_ADDED:
+ if (adapter->access_ctrl.shost_recovering)
+ break;
+ leapraid_transport_update_links(adapter, sas_addr,
+ hdl, phy_number,
+ link_rate, card_port);
+ if (link_rate < LEAPRAID_SAS_NEG_LINK_RATE_1_5)
+ break;
+ leapraid_add_dev(adapter, hdl);
+ break;
+ case LEAPRAID_EVT_SAS_TOPO_RC_TARG_NOT_RESPONDING:
+ leapraid_sas_dev_remove_by_hdl(adapter, hdl);
+ break;
+ }
+ }
+
+ if (evt_data->exp_status == LEAPRAID_EVT_SAS_TOPO_ES_NOT_RESPONDING &&
+ topo_node_exp)
+ leapraid_exp_rm(adapter, sas_addr, card_port);
+
+ return 0;
+}
+
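+/*
+ * Handle a SAS topology change list event: refresh the host phys, resolve
+ * the expander (or the adapter itself) the event refers to, then process
+ * each per-phy entry.
+ */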
+static int leapraid_sas_topo_chg_evt(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ struct leapraid_topo_node *topo_node_exp;
+ struct leapraid_card_port *card_port;
+ struct leapraid_evt_data_sas_topo_change_list *evt_data;
+ u16 phdl;
+ u8 max_phys;
+ u64 sas_addr;
+ unsigned long flags;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering)
+ return 0;
+
+ evt_data = fw_evt->evt_data;
+ leapraid_sas_host_add(adapter, adapter->dev_topo.card.phys_num);
+
+ if (fw_evt->ignore)
+ return 0;
+
+ phdl = le16_to_cpu(evt_data->exp_dev_hdl);
+ card_port = leapraid_get_port_by_id(adapter,
+ evt_data->physical_port,
+ false);
+ if (evt_data->exp_status == LEAPRAID_EVT_SAS_TOPO_ES_ADDED)
+ if (leapraid_exp_add(adapter, phdl) != 0)
+ return 0;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node_exp = leapraid_exp_find_by_hdl(adapter, phdl);
+ if (topo_node_exp) {
+ sas_addr = topo_node_exp->sas_address;
+ max_phys = topo_node_exp->phys_num;
+ card_port = topo_node_exp->card_port;
+ } else if (phdl < adapter->dev_topo.card.phys_num) {
+ sas_addr = adapter->dev_topo.card.sas_address;
+ max_phys = adapter->dev_topo.card.phys_num;
+ } else {
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock,
+ flags);
+ return 0;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ return leapraid_internal_sas_topo_chg_evt(adapter, card_port,
+ topo_node_exp, fw_evt,
+ sas_addr, max_phys);
+}
+
+static void leapraid_reprobe_lun(struct scsi_device *sdev, void *no_uld_attach)
+{
+ sdev->no_uld_attach = no_uld_attach ? 1 : 0;
+ sdev_printk(KERN_INFO, sdev,
+ "%s raid component to upper layer\n",
+ sdev->no_uld_attach ? "hide" : "expose");
+ WARN_ON(scsi_device_reprobe(sdev));
+}
+
+static void leapraid_sas_pd_add(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+ struct leapraid_sas_dev *sas_dev;
+ u64 sas_address;
+ u16 parent_hdl;
+ u16 hdl;
+
+ hdl = le16_to_cpu(evt_data->phys_disk_dev_hdl);
+ set_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls);
+ sas_dev = leapraid_get_sas_dev_by_hdl(adapter, hdl);
+ if (sas_dev) {
+ leapraid_sdev_put(sas_dev);
+ dev_warn(&adapter->pdev->dev,
+ "dev handle 0x%x already exists\n", hdl);
+ return;
+ }
+
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = hdl;
+ if ((leapraid_op_config_page(adapter, &sas_dev_p0, cfgp1, cfgp2,
+ GET_SAS_DEVICE_PG0))) {
+ dev_warn(&adapter->pdev->dev, "failed to read dev page0\n");
+ return;
+ }
+
+ parent_hdl = le16_to_cpu(sas_dev_p0.parent_dev_hdl);
+ if (!leapraid_get_sas_address(adapter, parent_hdl, &sas_address))
+ leapraid_transport_update_links(adapter, sas_address, hdl,
+ sas_dev_p0.phy_num,
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5,
+ leapraid_get_port_by_id(adapter,
+ sas_dev_p0.physical_port,
+ false));
+ leapraid_add_dev(adapter, hdl);
+}
+
+static void leapraid_sas_pd_delete(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ u16 hdl;
+
+ hdl = le16_to_cpu(evt_data->phys_disk_dev_hdl);
+ leapraid_sas_dev_remove_by_hdl(adapter, hdl);
+}
+
+static void leapraid_sas_pd_hide(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct scsi_target *starget = NULL;
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ u64 volume_wwid = 0;
+ u16 volume_hdl = 0;
+ u16 hdl;
+
+ hdl = le16_to_cpu(evt_data->phys_disk_dev_hdl);
+ leapraid_cfg_get_volume_hdl(adapter, hdl, &volume_hdl);
+ if (volume_hdl)
+ leapraid_cfg_get_volume_wwid(adapter,
+ volume_hdl,
+ &volume_wwid);
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (!sas_dev) {
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return;
+ }
+
+ set_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls);
+ if (sas_dev->starget && sas_dev->starget->hostdata) {
+ starget = sas_dev->starget;
+ starget_priv = starget->hostdata;
+ starget_priv->flg |= LEAPRAID_TGT_FLG_RAID_MEMBER;
+ sas_dev->volume_hdl = volume_hdl;
+ sas_dev->volume_wwid = volume_wwid;
+ leapraid_sdev_put(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ if (starget) {
+ dev_info(&adapter->pdev->dev, "hide sas_dev, hdl=0x%x\n", hdl);
+ starget_for_each_device(starget,
+ (void *)1, leapraid_reprobe_lun);
+ }
+}
+
+static void leapraid_sas_pd_expose(
+ struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct scsi_target *starget = NULL;
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ u16 hdl;
+
+ hdl = le16_to_cpu(evt_data->phys_disk_dev_hdl);
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (!sas_dev) {
+ dev_warn(&adapter->pdev->dev,
+ "%s:%d: sas_dev not found, hdl=0x%x\n",
+ __func__, __LINE__, hdl);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return;
+ }
+
+ sas_dev->volume_hdl = 0;
+ sas_dev->volume_wwid = 0;
+ clear_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls);
+ if (sas_dev->starget && sas_dev->starget->hostdata) {
+ starget = sas_dev->starget;
+ starget_priv = starget->hostdata;
+ starget_priv->flg &= ~LEAPRAID_TGT_FLG_RAID_MEMBER;
+ sas_dev->led_on = false;
+ leapraid_sdev_put(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (starget) {
+ dev_info(&adapter->pdev->dev,
+ "expose sas_dev, hdl=0x%x\n", hdl);
+ starget_for_each_device(starget, NULL, leapraid_reprobe_lun);
+ }
+}
+
+static void leapraid_sas_volume_add(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ struct leapraid_raid_volume *raid_volume;
+ unsigned long flags;
+ u64 wwid;
+ u16 hdl;
+
+ hdl = le16_to_cpu(evt_data->vol_dev_hdl);
+
+ if (leapraid_cfg_get_volume_wwid(adapter, hdl, &wwid)) {
+ dev_warn(&adapter->pdev->dev, "failed to read volume page1\n");
+ return;
+ }
+
+ if (!wwid) {
+ dev_warn(&adapter->pdev->dev, "invalid WWID(handle=0x%x)\n",
+ hdl);
+ return;
+ }
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_wwid(adapter, wwid);
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+
+ if (raid_volume) {
+ dev_warn(&adapter->pdev->dev,
+ "volume handle 0x%x already exists\n", hdl);
+ return;
+ }
+
+ raid_volume = kzalloc(sizeof(*raid_volume), GFP_KERNEL);
+ if (!raid_volume)
+ return;
+
+ raid_volume->id = adapter->dev_topo.sas_id++;
+ raid_volume->channel = RAID_CHANNEL;
+ raid_volume->hdl = hdl;
+ raid_volume->wwid = wwid;
+ leapraid_raid_volume_add(adapter, raid_volume);
+ if (!adapter->scan_dev_desc.wait_scan_dev_done) {
+ if (scsi_add_device(adapter->shost, RAID_CHANNEL,
+ raid_volume->id, 0))
+ leapraid_raid_volume_remove(adapter, raid_volume);
+ dev_info(&adapter->pdev->dev,
+ "add raid volume: hdl=0x%x, wwid=0x%llx\n", hdl, wwid);
+ } else {
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ leapraid_check_boot_dev(adapter, raid_volume, RAID_CHANNEL);
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock,
+ flags);
+ }
+}
+
+static void leapraid_sas_volume_delete(struct leapraid_adapter *adapter,
+ u16 hdl)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_raid_volume *raid_volume;
+ struct scsi_target *starget = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_hdl(adapter, hdl);
+ if (!raid_volume) {
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock,
+ flags);
+ dev_warn(&adapter->pdev->dev,
+ "%s:%d: volume handle 0x%x not found\n",
+ __func__, __LINE__, hdl);
+ return;
+ }
+
+ if (raid_volume->starget) {
+ starget = raid_volume->starget;
+ starget_priv = starget->hostdata;
+ starget_priv->deleted = true;
+ }
+
+ dev_info(&adapter->pdev->dev,
+ "delete raid volume: hdl=0x%x, wwid=0x%llx\n",
+ raid_volume->hdl, raid_volume->wwid);
+ list_del(&raid_volume->list);
+ kfree(raid_volume);
+
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+
+ if (starget)
+ scsi_remove_target(&starget->dev);
+}
+
+static void leapraid_sas_ir_chg_evt(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ struct leapraid_evt_data_ir_change *evt_data;
+
+ evt_data = fw_evt->evt_data;
+
+ switch (evt_data->reason_code) {
+ case LEAPRAID_EVT_IR_RC_VOLUME_ADD:
+ leapraid_sas_volume_add(adapter, evt_data);
+ break;
+ case LEAPRAID_EVT_IR_RC_VOLUME_DELETE:
+ leapraid_sas_volume_delete(adapter,
+ le16_to_cpu(evt_data->vol_dev_hdl));
+ break;
+ case LEAPRAID_EVT_IR_RC_PD_HIDDEN_TO_ADD:
+ leapraid_sas_pd_add(adapter, evt_data);
+ break;
+ case LEAPRAID_EVT_IR_RC_PD_UNHIDDEN_TO_DELETE:
+ leapraid_sas_pd_delete(adapter, evt_data);
+ break;
+ case LEAPRAID_EVT_IR_RC_PD_CREATED_TO_HIDE:
+ leapraid_sas_pd_hide(adapter, evt_data);
+ break;
+ case LEAPRAID_EVT_IR_RC_PD_DELETED_TO_EXPOSE:
+ leapraid_sas_pd_expose(adapter, evt_data);
+ break;
+ default:
+ break;
+ }
+}
+
+static void leapraid_sas_enc_dev_stat_add_node(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_enc_node *enc_node = NULL;
+ int rc;
+
+ enc_node = kzalloc(sizeof(*enc_node), GFP_KERNEL);
+ if (!enc_node)
+ return;
+
+ cfgp1.form = LEAPRAID_SAS_ENC_CFG_PGAD_HDL;
+ cfgp2.handle = hdl;
+ rc = leapraid_op_config_page(adapter, &enc_node->pg0, cfgp1, cfgp2,
+ GET_SAS_ENCLOSURE_PG0);
+ if (rc) {
+ kfree(enc_node);
+ return;
+ }
+ list_add_tail(&enc_node->list, &adapter->dev_topo.enc_list);
+}
+
+static void leapraid_sas_enc_dev_stat_del_node(
+ struct leapraid_enc_node *enc_node)
+{
+ if (!enc_node)
+ return;
+
+ list_del(&enc_node->list);
+ kfree(enc_node);
+}
+
+static void leapraid_sas_enc_dev_stat_chg_evt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ struct leapraid_enc_node *enc_node = NULL;
+ struct leapraid_evt_data_sas_enc_dev_status_change *evt_data;
+ u16 enc_hdl;
+
+ if (adapter->access_ctrl.shost_recovering)
+ return;
+
+ evt_data = fw_evt->evt_data;
+ enc_hdl = le16_to_cpu(evt_data->enc_hdl);
+ if (enc_hdl)
+ enc_node = leapraid_enc_find_by_hdl(adapter, enc_hdl);
+ switch (evt_data->reason_code) {
+ case LEAPRAID_EVT_SAS_ENCL_RC_ADDED:
+ if (!enc_node)
+ leapraid_sas_enc_dev_stat_add_node(adapter, enc_hdl);
+ break;
+ case LEAPRAID_EVT_SAS_ENCL_RC_NOT_RESPONDING:
+ leapraid_sas_enc_dev_stat_del_node(enc_node);
+ break;
+ default:
+ break;
+ }
+}
+
+static void leapraid_remove_unresp_sas_end_dev(
+ struct leapraid_adapter *adapter)
+{
+ struct leapraid_sas_dev *sas_dev, *sas_dev_next;
+ unsigned long flags;
+ LIST_HEAD(head);
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ list_for_each_entry_safe(sas_dev, sas_dev_next,
+ &adapter->dev_topo.sas_dev_init_list, list) {
+ list_del_init(&sas_dev->list);
+ leapraid_sdev_put(sas_dev);
+ }
+ list_for_each_entry_safe(sas_dev, sas_dev_next,
+ &adapter->dev_topo.sas_dev_list, list) {
+ if (!sas_dev->resp)
+ list_move_tail(&sas_dev->list, &head);
+ else
+ sas_dev->resp = false;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ list_for_each_entry_safe(sas_dev, sas_dev_next, &head, list) {
+ leapraid_remove_device(adapter, sas_dev);
+ list_del_init(&sas_dev->list);
+ leapraid_sdev_put(sas_dev);
+ }
+
+ dev_info(&adapter->pdev->dev,
+ "unresponding sas end devices removed\n");
+}
+
+static void leapraid_remove_unresp_raid_volumes(
+ struct leapraid_adapter *adapter)
+{
+ struct leapraid_raid_volume *raid_volume, *raid_volume_next;
+
+ list_for_each_entry_safe(raid_volume, raid_volume_next,
+ &adapter->dev_topo.raid_volume_list, list) {
+ if (!raid_volume->resp)
+ leapraid_sas_volume_delete(adapter, raid_volume->hdl);
+ else
+ raid_volume->resp = false;
+ }
+ dev_info(&adapter->pdev->dev,
+ "unresponding raid volumes removed\n");
+}
+
+static void leapraid_remove_unresp_sas_exp(struct leapraid_adapter *adapter)
+{
+ struct leapraid_topo_node *topo_node_exp, *topo_node_exp_next;
+ unsigned long flags;
+ LIST_HEAD(head);
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_for_each_entry_safe(topo_node_exp, topo_node_exp_next,
+ &adapter->dev_topo.exp_list, list) {
+ if (!topo_node_exp->resp)
+ list_move_tail(&topo_node_exp->list, &head);
+ else
+ topo_node_exp->resp = false;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ list_for_each_entry_safe(topo_node_exp, topo_node_exp_next,
+ &head, list)
+ leapraid_exp_node_rm(adapter, topo_node_exp);
+
+ dev_info(&adapter->pdev->dev,
+ "unresponding sas expanders removed\n");
+}
+
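+/*
+ * After a reset, remove every SAS end device, RAID volume and expander that
+ * did not respond to the rescan, then unblock the devices that remain.
+ */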
+static void leapraid_remove_unresp_dev(struct leapraid_adapter *adapter)
+{
+ leapraid_remove_unresp_sas_end_dev(adapter);
+ if (adapter->adapter_attr.raid_support)
+ leapraid_remove_unresp_raid_volumes(adapter);
+ leapraid_remove_unresp_sas_exp(adapter);
+ leapraid_ublk_io_all_dev(adapter);
+}
+
+static void leapraid_del_dirty_vphy(struct leapraid_adapter *adapter)
+{
+ struct leapraid_card_port *card_port, *card_port_next;
+ struct leapraid_vphy *vphy, *vphy_next;
+
+ list_for_each_entry_safe(card_port, card_port_next,
+ &adapter->dev_topo.card_port_list, list) {
+ if (!card_port->vphys_mask)
+ continue;
+
+ list_for_each_entry_safe(vphy, vphy_next,
+ &card_port->vphys_list, list) {
+ if (!(vphy->flg & LEAPRAID_VPHY_FLG_DIRTY))
+ continue;
+
+ card_port->vphys_mask &= ~vphy->phy_mask;
+ list_del(&vphy->list);
+ kfree(vphy);
+ }
+
+ if (!card_port->vphys_mask && !card_port->sas_address)
+ card_port->flg |= LEAPRAID_CARD_PORT_FLG_DIRTY;
+ }
+}
+
+static void leapraid_del_dirty_card_port(struct leapraid_adapter *adapter)
+{
+ struct leapraid_card_port *card_port, *card_port_next;
+
+ list_for_each_entry_safe(card_port, card_port_next,
+ &adapter->dev_topo.card_port_list, list) {
+ if (!(card_port->flg & LEAPRAID_CARD_PORT_FLG_DIRTY) ||
+ card_port->flg & LEAPRAID_CARD_PORT_FLG_NEW)
+ continue;
+
+ list_del(&card_port->list);
+ kfree(card_port);
+ }
+}
+
+static void leapraid_update_dev_qdepth(struct leapraid_adapter *adapter)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_device *sdev;
+ u16 qdepth;
+
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv || !sdev_priv->starget_priv)
+ continue;
+ sas_dev = sdev_priv->starget_priv->sas_dev;
+ if (sas_dev && sas_dev->dev_info & LEAPRAID_DEVTYP_SSP_TGT)
+ qdepth = (sas_dev->port_type > 1) ?
+ adapter->adapter_attr.wideport_max_queue_depth :
+ adapter->adapter_attr.narrowport_max_queue_depth;
+ else if (sas_dev && sas_dev->dev_info &
+ LEAPRAID_DEVTYP_SATA_DEV)
+ qdepth = adapter->adapter_attr.sata_max_queue_depth;
+ else
+ continue;
+
+ leapraid_adjust_sdev_queue_depth(sdev, qdepth);
+ }
+}
+
+static void leapraid_update_exp_links(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp,
+ u16 hdl)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_exp_p1 exp_p1;
+ int i;
+
+ cfgp2.handle = hdl;
+ for (i = 0; i < topo_node_exp->phys_num; i++) {
+ cfgp1.phy_number = i;
+ if ((leapraid_op_config_page(adapter, &exp_p1, cfgp1, cfgp2,
+ GET_SAS_EXPANDER_PG1)))
+ return;
+
+ leapraid_transport_update_links(adapter,
+ topo_node_exp->sas_address,
+ le16_to_cpu(exp_p1.attached_dev_hdl),
+ i,
+ exp_p1.neg_link_rate >> 4,
+ topo_node_exp->card_port);
+ }
+}
+
+static void leapraid_scan_exp_after_reset(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_topo_node *topo_node_exp;
+ struct leapraid_exp_p0 exp_p0;
+ unsigned long flags;
+ u16 hdl;
+ u8 port_id;
+
+ dev_info(&adapter->pdev->dev, "begin scanning expanders\n");
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (hdl = 0xFFFF, cfgp2.handle = hdl;
+ !leapraid_op_config_page(adapter, &exp_p0, cfgp1, cfgp2,
+ GET_SAS_EXPANDER_PG0);
+ cfgp2.handle = hdl) {
+ hdl = le16_to_cpu(exp_p0.dev_hdl);
+ port_id = exp_p0.physical_port;
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node_exp =
+ leapraid_exp_find_by_sas_address(adapter,
+ le64_to_cpu(exp_p0.sas_address),
+ leapraid_get_port_by_id(adapter,
+ port_id,
+ false));
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock,
+ flags);
+
+ if (topo_node_exp) {
+ leapraid_update_exp_links(adapter, topo_node_exp, hdl);
+ } else {
+ leapraid_exp_add(adapter, hdl);
+
+ dev_info(&adapter->pdev->dev,
+ "add exp: hdl=0x%04x, sas addr=0x%016llx\n",
+ hdl,
+ (unsigned long long)le64_to_cpu(
+ exp_p0.sas_address));
+ }
+ }
+
+ dev_info(&adapter->pdev->dev, "expanders scan complete\n");
+}
+
+static void leapraid_scan_phy_disks_after_reset(
+ struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ union cfg_param_1 cfgp1_extra = {0};
+ union cfg_param_2 cfgp2_extra = {0};
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+ struct leapraid_raidpd_p0 raidpd_p0;
+ struct leapraid_sas_dev *sas_dev;
+ u8 phys_disk_num, port_id;
+ u16 hdl, parent_hdl;
+ u64 sas_addr;
+
+ dev_info(&adapter->pdev->dev, "begin scanning phys disk\n");
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (phys_disk_num = 0xFF, cfgp2.form_specific = phys_disk_num;
+ !leapraid_op_config_page(adapter, &raidpd_p0,
+ cfgp1, cfgp2, GET_PHY_DISK_PG0);
+ cfgp2.form_specific = phys_disk_num) {
+ phys_disk_num = raidpd_p0.phys_disk_num;
+ hdl = le16_to_cpu(raidpd_p0.dev_hdl);
+ sas_dev = leapraid_get_sas_dev_by_hdl(adapter, hdl);
+ if (sas_dev) {
+ leapraid_sdev_put(sas_dev);
+ continue;
+ }
+
+ cfgp1_extra.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2_extra.handle = hdl;
+ if (leapraid_op_config_page(adapter, &sas_dev_p0, cfgp1_extra,
+ cfgp2_extra, GET_SAS_DEVICE_PG0) !=
+ 0)
+ continue;
+
+ parent_hdl = le16_to_cpu(sas_dev_p0.parent_dev_hdl);
+ if (!leapraid_get_sas_address(adapter,
+ parent_hdl,
+ &sas_addr)) {
+ port_id = sas_dev_p0.physical_port;
+ leapraid_transport_update_links(
+ adapter, sas_addr, hdl,
+ sas_dev_p0.phy_num,
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5,
+ leapraid_get_port_by_id(
+ adapter, port_id, false));
+ set_bit(hdl,
+ (unsigned long *)adapter->dev_topo.pd_hdls);
+
+ leapraid_add_dev(adapter, hdl);
+
+ dev_info(&adapter->pdev->dev,
+ "add phys disk: hdl=0x%04x, sas addr=0x%016llx\n",
+ hdl,
+ (unsigned long long)le64_to_cpu(
+ sas_dev_p0.sas_address));
+ }
+ }
+
+ dev_info(&adapter->pdev->dev, "phys disk scan complete\n");
+}
+
+static void leapraid_scan_vol_after_reset(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ union cfg_param_1 cfgp1_extra = {0};
+ union cfg_param_2 cfgp2_extra = {0};
+ struct leapraid_evt_data_ir_change evt_data;
+	struct leapraid_raid_volume *raid_volume;
+ struct leapraid_raidvol_p1 *vol_p1;
+ struct leapraid_raidvol_p0 *vol_p0;
+ unsigned long flags;
+ u16 hdl;
+
+ vol_p0 = kzalloc(sizeof(*vol_p0), GFP_KERNEL);
+ if (!vol_p0)
+ return;
+
+ vol_p1 = kzalloc(sizeof(*vol_p1), GFP_KERNEL);
+ if (!vol_p1) {
+ kfree(vol_p0);
+ return;
+ }
+
+ dev_info(&adapter->pdev->dev, "begin scanning volumes\n");
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (hdl = 0xFFFF, cfgp2.handle = hdl;
+ !leapraid_op_config_page(adapter, vol_p1, cfgp1,
+ cfgp2, GET_RAID_VOLUME_PG1);
+ cfgp2.handle = hdl) {
+ hdl = le16_to_cpu(vol_p1->dev_hdl);
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_wwid(
+ adapter,
+ le64_to_cpu(vol_p1->wwid));
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock,
+ flags);
+ if (raid_volume)
+ continue;
+
+ cfgp1_extra.size = sizeof(struct leapraid_raidvol_p0);
+ cfgp2_extra.handle = hdl;
+ if (leapraid_op_config_page(adapter, vol_p0, cfgp1_extra,
+ cfgp2_extra, GET_RAID_VOLUME_PG0))
+ continue;
+
+ if (vol_p0->volume_state == LEAPRAID_VOL_STATE_OPTIMAL ||
+ vol_p0->volume_state == LEAPRAID_VOL_STATE_ONLINE ||
+ vol_p0->volume_state == LEAPRAID_VOL_STATE_DEGRADED) {
+ memset(&evt_data, 0,
+ sizeof(struct leapraid_evt_data_ir_change));
+ evt_data.reason_code = LEAPRAID_EVT_IR_RC_VOLUME_ADD;
+ evt_data.vol_dev_hdl = vol_p1->dev_hdl;
+ leapraid_sas_volume_add(adapter, &evt_data);
+ dev_info(&adapter->pdev->dev,
+ "add volume: hdl=0x%04x\n",
+				 hdl);
+ }
+ }
+
+ kfree(vol_p0);
+ kfree(vol_p1);
+
+ dev_info(&adapter->pdev->dev, "volumes scan complete\n");
+}
+
+static void leapraid_scan_sas_dev_after_reset(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+ struct leapraid_sas_dev *sas_dev;
+ u16 hdl, parent_hdl;
+ u64 sas_address;
+ u8 port_id;
+
+ dev_info(&adapter->pdev->dev,
+ "begin scanning sas end devices\n");
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (hdl = 0xFFFF, cfgp2.handle = hdl;
+ !leapraid_op_config_page(adapter, &sas_dev_p0, cfgp1, cfgp2,
+ GET_SAS_DEVICE_PG0);
+ cfgp2.handle = hdl) {
+ hdl = le16_to_cpu(sas_dev_p0.dev_hdl);
+ if (!(leapraid_is_end_dev(le32_to_cpu(sas_dev_p0.dev_info))))
+ continue;
+
+ port_id = sas_dev_p0.physical_port;
+ sas_dev = leapraid_get_sas_dev_by_addr(
+ adapter,
+ le64_to_cpu(sas_dev_p0.sas_address),
+ leapraid_get_port_by_id(
+ adapter,
+ port_id,
+ false));
+ if (sas_dev) {
+ leapraid_sdev_put(sas_dev);
+ continue;
+ }
+
+ parent_hdl = le16_to_cpu(sas_dev_p0.parent_dev_hdl);
+ if (!leapraid_get_sas_address(adapter, parent_hdl,
+ &sas_address)) {
+ leapraid_transport_update_links(
+ adapter,
+ sas_address,
+ hdl,
+ sas_dev_p0.phy_num,
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5,
+ leapraid_get_port_by_id(adapter,
+ port_id,
+ false));
+ leapraid_add_dev(adapter, hdl);
+ dev_info(&adapter->pdev->dev,
+ "add sas dev: hdl=0x%04x, sas addr=0x%016llx\n",
+ hdl,
+ (unsigned long long)le64_to_cpu(
+ sas_dev_p0.sas_address));
+ }
+ }
+
+ dev_info(&adapter->pdev->dev, "sas end devices scan complete\n");
+}
+
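+/*
+ * Re-discover the host phys, expanders, RAID physical disks, volumes and
+ * SAS end devices after a controller reset.
+ */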
+static void leapraid_scan_all_dev_after_reset(struct leapraid_adapter *adapter)
+{
+ dev_info(&adapter->pdev->dev, "begin scanning devices\n");
+
+ leapraid_sas_host_add(adapter, adapter->dev_topo.card.phys_num);
+ leapraid_scan_exp_after_reset(adapter);
+ if (adapter->adapter_attr.raid_support) {
+ leapraid_scan_phy_disks_after_reset(adapter);
+ leapraid_scan_vol_after_reset(adapter);
+ }
+ leapraid_scan_sas_dev_after_reset(adapter);
+
+ dev_info(&adapter->pdev->dev, "devices scan complete\n");
+}
+
+static void leapraid_hardreset_async_logic(struct leapraid_adapter *adapter)
+{
+ leapraid_remove_unresp_dev(adapter);
+ leapraid_del_dirty_vphy(adapter);
+ leapraid_del_dirty_card_port(adapter);
+ leapraid_update_dev_qdepth(adapter);
+ leapraid_scan_all_dev_after_reset(adapter);
+
+ if (adapter->scan_dev_desc.driver_loading)
+ leapraid_scan_dev_done(adapter);
+}
+
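+/*
+ * Send a SCSI enclosure processor request through the internal enc_cmd slot
+ * and wait up to LEAPRAID_ENC_CMD_TIMEOUT seconds for its completion.
+ */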
+static int leapraid_send_enc_cmd(struct leapraid_adapter *adapter,
+ struct leapraid_sep_rep *sep_rep,
+ struct leapraid_sep_req *sep_req)
+{
+ void *req;
+ bool reset_flg = false;
+ int rc = 0;
+
+ mutex_lock(&adapter->driver_cmds.enc_cmd.mutex);
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc)
+ goto out;
+
+ adapter->driver_cmds.enc_cmd.status = LEAPRAID_CMD_PENDING;
+ req = leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.enc_cmd.inter_taskid);
+ memset(req, 0, LEAPRAID_REQUEST_SIZE);
+ memcpy(req, sep_req, sizeof(struct leapraid_sep_req));
+ init_completion(&adapter->driver_cmds.enc_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.enc_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.enc_cmd.done,
+ LEAPRAID_ENC_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.enc_cmd.status & LEAPRAID_CMD_DONE)) {
+ reset_flg =
+ leapraid_check_reset(
+ adapter->driver_cmds.enc_cmd.status);
+ rc = -EFAULT;
+ goto do_hard_reset;
+ }
+
+ if (adapter->driver_cmds.enc_cmd.status & LEAPRAID_CMD_REPLY_VALID)
+ memcpy(sep_rep, (void *)(&adapter->driver_cmds.enc_cmd.reply),
+ sizeof(struct leapraid_sep_rep));
+do_hard_reset:
+ if (reset_flg) {
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ }
+
+ adapter->driver_cmds.enc_cmd.status = LEAPRAID_CMD_NOT_USED;
+out:
+ mutex_unlock(&adapter->driver_cmds.enc_cmd.mutex);
+ return rc;
+}
+
+static void leapraid_set_led(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev, bool on)
+{
+ struct leapraid_sep_rep sep_rep;
+ struct leapraid_sep_req sep_req;
+
+ if (!sas_dev)
+ return;
+
+ memset(&sep_req, 0, sizeof(struct leapraid_sep_req));
+ memset(&sep_rep, 0, sizeof(struct leapraid_sep_rep));
+ sep_req.func = LEAPRAID_FUNC_SCSI_ENC_PROCESSOR;
+ sep_req.act = LEAPRAID_SEP_REQ_ACT_WRITE_STATUS;
+ if (on) {
+ sep_req.slot_status =
+ cpu_to_le32(LEAPRAID_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT);
+ sep_req.dev_hdl = cpu_to_le16(sas_dev->hdl);
+ sep_req.flg = LEAPRAID_SEP_REQ_FLG_DEVHDL_ADDRESS;
+ if (leapraid_send_enc_cmd(adapter, &sep_rep, &sep_req)) {
+ leapraid_sdev_put(sas_dev);
+ return;
+ }
+
+ sas_dev->led_on = true;
+ if (sep_rep.adapter_status)
+ leapraid_sdev_put(sas_dev);
+ } else {
+ sep_req.slot_status = 0;
+ sep_req.slot = cpu_to_le16(sas_dev->slot);
+ sep_req.dev_hdl = 0;
+ sep_req.enc_hdl = cpu_to_le16(sas_dev->enc_hdl);
+ sep_req.flg = LEAPRAID_SEP_REQ_FLG_ENCLOSURE_SLOT_ADDRESS;
+ if ((leapraid_send_enc_cmd(adapter, &sep_rep, &sep_req))) {
+ leapraid_sdev_put(sas_dev);
+ return;
+ }
+
+ if (sep_rep.adapter_status) {
+ leapraid_sdev_put(sas_dev);
+ return;
+ }
+ }
+}
+
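+/* Dispatch one queued firmware event to its handler. */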
+static void leapraid_fw_work(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ struct leapraid_sas_dev *sas_dev;
+
+ adapter->fw_evt_s.cur_evt = fw_evt;
+ leapraid_del_fw_evt_from_list(adapter, fw_evt);
+ if (adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering) {
+ leapraid_fw_evt_put(fw_evt);
+ adapter->fw_evt_s.cur_evt = NULL;
+ return;
+ }
+ switch (fw_evt->evt_type) {
+ case LEAPRAID_EVT_SAS_DISCOVERY:
+ {
+ struct leapraid_evt_data_sas_disc *evt_data;
+
+ evt_data = fw_evt->evt_data;
+ if (evt_data->reason_code ==
+ LEAPRAID_EVT_SAS_DISC_RC_STARTED &&
+ !adapter->dev_topo.card.phys_num)
+ leapraid_sas_host_add(adapter, 0);
+ break;
+ }
+ case LEAPRAID_EVT_SAS_TOPO_CHANGE_LIST:
+ leapraid_sas_topo_chg_evt(adapter, fw_evt);
+ break;
+ case LEAPRAID_EVT_IR_CHANGE:
+ leapraid_sas_ir_chg_evt(adapter, fw_evt);
+ break;
+ case LEAPRAID_EVT_SAS_ENCL_DEV_STATUS_CHANGE:
+ leapraid_sas_enc_dev_stat_chg_evt(adapter, fw_evt);
+ break;
+ case LEAPRAID_EVT_REMOVE_DEAD_DEV:
+ while (scsi_host_in_recovery(adapter->shost) ||
+ adapter->access_ctrl.shost_recovering) {
+ if (adapter->access_ctrl.host_removing ||
+ adapter->fw_evt_s.fw_evt_cleanup)
+ goto out;
+
+ ssleep(1);
+ }
+ leapraid_hardreset_async_logic(adapter);
+ break;
+ case LEAPRAID_EVT_TURN_ON_PFA_LED:
+ sas_dev = leapraid_get_sas_dev_by_hdl(adapter,
+ fw_evt->dev_handle);
+ leapraid_set_led(adapter, sas_dev, true);
+ break;
+ case LEAPRAID_EVT_SCAN_DEV_DONE:
+ adapter->scan_dev_desc.scan_start = false;
+ break;
+ default:
+ break;
+ }
+out:
+ leapraid_fw_evt_put(fw_evt);
+ adapter->fw_evt_s.cur_evt = NULL;
+}
+
+static void leapraid_sas_dev_stat_chg_evt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_sas_dev_status_change *event_data)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_sas_dev *sas_dev = NULL;
+ u64 sas_address;
+ unsigned long flags;
+
+ switch (event_data->reason_code) {
+ case LEAPRAID_EVT_SAS_DEV_STAT_RC_INTERNAL_DEV_RESET:
+ case LEAPRAID_EVT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET:
+ break;
+ default:
+ return;
+ }
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+
+ sas_address = le64_to_cpu(event_data->sas_address);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter,
+ sas_address,
+ leapraid_get_port_by_id(adapter,
+ event_data->physical_port,
+ false));
+
+ if (sas_dev && sas_dev->starget) {
+ starget_priv = sas_dev->starget->hostdata;
+ if (starget_priv) {
+ switch (event_data->reason_code) {
+ case LEAPRAID_EVT_SAS_DEV_STAT_RC_INTERNAL_DEV_RESET:
+ starget_priv->tm_busy = true;
+ break;
+ case LEAPRAID_EVT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET:
+ starget_priv->tm_busy = false;
+ break;
+ }
+ }
+ }
+
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+}
+
+static void leapraid_set_volume_delete_flag(struct leapraid_adapter *adapter,
+ u16 handle)
+{
+ struct leapraid_raid_volume *raid_volume;
+ struct leapraid_starget_priv *sas_target_priv_data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_hdl(adapter, handle);
+ if (raid_volume && raid_volume->starget &&
+ raid_volume->starget->hostdata) {
+ sas_target_priv_data = raid_volume->starget->hostdata;
+ sas_target_priv_data->deleted = true;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+}
+
+static void leapraid_check_ir_change_evt(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ u16 phys_disk_dev_hdl;
+
+ switch (evt_data->reason_code) {
+ case LEAPRAID_EVT_IR_RC_VOLUME_DELETE:
+ leapraid_set_volume_delete_flag(adapter,
+ le16_to_cpu(evt_data->vol_dev_hdl));
+ break;
+ case LEAPRAID_EVT_IR_RC_PD_UNHIDDEN_TO_DELETE:
+ phys_disk_dev_hdl =
+ le16_to_cpu(evt_data->phys_disk_dev_hdl);
+ clear_bit(phys_disk_dev_hdl,
+ (unsigned long *)adapter->dev_topo.pd_hdls);
+ leapraid_tgt_rst_send(adapter, phys_disk_dev_hdl);
+ break;
+ }
+}
+
+static void leapraid_topo_del_evts_process_exp_status(
+ struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_sas_topo_change_list *evt_data)
+{
+ struct leapraid_fw_evt_work *fw_evt = NULL;
+ struct leapraid_evt_data_sas_topo_change_list *loc_evt_data = NULL;
+ unsigned long flags;
+ u16 exp_hdl;
+
+ exp_hdl = le16_to_cpu(evt_data->exp_dev_hdl);
+
+ switch (evt_data->exp_status) {
+ case LEAPRAID_EVT_SAS_TOPO_ES_NOT_RESPONDING:
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ list_for_each_entry(fw_evt,
+ &adapter->fw_evt_s.fw_evt_list, list) {
+ if (fw_evt->evt_type !=
+ LEAPRAID_EVT_SAS_TOPO_CHANGE_LIST ||
+ fw_evt->ignore)
+ continue;
+
+ loc_evt_data = fw_evt->evt_data;
+ if ((loc_evt_data->exp_status ==
+ LEAPRAID_EVT_SAS_TOPO_ES_ADDED ||
+ loc_evt_data->exp_status ==
+ LEAPRAID_EVT_SAS_TOPO_ES_RESPONDING) &&
+ le16_to_cpu(loc_evt_data->exp_dev_hdl) == exp_hdl)
+ fw_evt->ignore = 1;
+ }
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+ break;
+ default:
+ break;
+ }
+}
+
+static void leapraid_check_topo_del_evts(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_sas_topo_change_list *evt_data)
+{
+ int reason_code;
+ u16 hdl;
+ int i;
+
+ for (i = 0; i < evt_data->entry_num; i++) {
+ hdl = le16_to_cpu(evt_data->phy[i].attached_dev_hdl);
+ if (!hdl)
+ continue;
+
+ reason_code = evt_data->phy[i].phy_status &
+ LEAPRAID_EVT_SAS_TOPO_RC_MASK;
+ if (reason_code ==
+ LEAPRAID_EVT_SAS_TOPO_RC_TARG_NOT_RESPONDING)
+ leapraid_tgt_not_responding(adapter, hdl);
+ }
+ leapraid_topo_del_evts_process_exp_status(adapter, evt_data);
+}
+
+static bool leapraid_async_process_evt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_evt_notify_rep *event_notify_rep)
+{
+ u16 evt = le16_to_cpu(event_notify_rep->evt);
+ bool exit_flag = false;
+
+ switch (evt) {
+ case LEAPRAID_EVT_SAS_DEV_STATUS_CHANGE:
+ leapraid_sas_dev_stat_chg_evt(adapter,
+ (struct leapraid_evt_data_sas_dev_status_change
+ *)event_notify_rep->evt_data);
+ break;
+ case LEAPRAID_EVT_IR_CHANGE:
+ leapraid_check_ir_change_evt(adapter,
+ (struct leapraid_evt_data_ir_change
+ *)event_notify_rep->evt_data);
+ break;
+ case LEAPRAID_EVT_SAS_TOPO_CHANGE_LIST:
+ leapraid_check_topo_del_evts(adapter,
+ (struct leapraid_evt_data_sas_topo_change_list
+ *)event_notify_rep->evt_data);
+ if (adapter->access_ctrl.shost_recovering) {
+ exit_flag = true;
+ return exit_flag;
+ }
+ break;
+ case LEAPRAID_EVT_SAS_DISCOVERY:
+ case LEAPRAID_EVT_SAS_ENCL_DEV_STATUS_CHANGE:
+ break;
+ default:
+ exit_flag = true;
+ return exit_flag;
+ }
+
+ return exit_flag;
+}
+
+static void leapraid_async_evt_cb_enqueue(
+ struct leapraid_adapter *adapter,
+ struct leapraid_evt_notify_rep *evt_notify_rep)
+{
+ struct leapraid_fw_evt_work *fw_evt;
+ u16 evt_sz;
+
+ fw_evt = leapraid_alloc_fw_evt_work();
+ if (!fw_evt)
+ return;
+
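+ /* evt_data_len is reported by the firmware in 4-byte units */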
+ evt_sz = le16_to_cpu(evt_notify_rep->evt_data_len) * 4;
+ fw_evt->evt_data = kmemdup(evt_notify_rep->evt_data,
+ evt_sz, GFP_ATOMIC);
+ if (!fw_evt->evt_data) {
+ leapraid_fw_evt_put(fw_evt);
+ return;
+ }
+ fw_evt->adapter = adapter;
+ fw_evt->evt_type = le16_to_cpu(evt_notify_rep->evt);
+ leapraid_fw_evt_add(adapter, fw_evt);
+ leapraid_fw_evt_put(fw_evt);
+}
+
+static void leapraid_async_evt_cb(struct leapraid_adapter *adapter,
+ u8 msix_index, u32 rep_paddr)
+{
+ struct leapraid_evt_notify_rep *evt_notify_rep;
+
+ if (adapter->access_ctrl.pcie_recovering)
+ return;
+
+ evt_notify_rep = leapraid_get_reply_vaddr(adapter, rep_paddr);
+ if (unlikely(!evt_notify_rep))
+ return;
+
+ if (leapraid_async_process_evt(adapter, evt_notify_rep))
+ return;
+
+ leapraid_async_evt_cb_enqueue(adapter, evt_notify_rep);
+}
+
+static void leapraid_handle_async_event(struct leapraid_adapter *adapter,
+ u8 msix_index, u32 reply)
+{
+ struct leapraid_evt_notify_rep *leap_mpi_rep =
+ leapraid_get_reply_vaddr(adapter, reply);
+
+ if (!leap_mpi_rep)
+ return;
+
+ if (leap_mpi_rep->func != LEAPRAID_FUNC_EVENT_NOTIFY)
+ return;
+
+ leapraid_async_evt_cb(adapter, msix_index, reply);
+}
+
+void leapraid_async_turn_on_led(struct leapraid_adapter *adapter, u16 handle)
+{
+ struct leapraid_fw_evt_work *fw_event;
+
+ fw_event = leapraid_alloc_fw_evt_work();
+ if (!fw_event)
+ return;
+
+ fw_event->dev_handle = handle;
+ fw_event->adapter = adapter;
+ fw_event->evt_type = LEAPRAID_EVT_TURN_ON_PFA_LED;
+ leapraid_fw_evt_add(adapter, fw_event);
+ leapraid_fw_evt_put(fw_event);
+}
+
+static void leapraid_hardreset_barrier(struct leapraid_adapter *adapter)
+{
+ struct leapraid_fw_evt_work *fw_event;
+
+ fw_event = leapraid_alloc_fw_evt_work();
+ if (!fw_event)
+ return;
+
+ fw_event->adapter = adapter;
+ fw_event->evt_type = LEAPRAID_EVT_REMOVE_DEAD_DEV;
+ leapraid_fw_evt_add(adapter, fw_event);
+ leapraid_fw_evt_put(fw_event);
+}
+
+static void leapraid_scan_dev_complete(struct leapraid_adapter *adapter)
+{
+ struct leapraid_fw_evt_work *fw_evt;
+
+ fw_evt = leapraid_alloc_fw_evt_work();
+ if (!fw_evt)
+ return;
+
+ fw_evt->evt_type = LEAPRAID_EVT_SCAN_DEV_DONE;
+ fw_evt->adapter = adapter;
+ leapraid_fw_evt_add(adapter, fw_evt);
+ leapraid_fw_evt_put(fw_evt);
+}
+
+static u8 leapraid_driver_cmds_done(struct leapraid_adapter *adapter,
+ u16 taskid, u8 msix_index,
+ u32 rep_paddr, u8 cb_idx)
+{
+ struct leapraid_rep *leap_mpi_rep =
+ leapraid_get_reply_vaddr(adapter, rep_paddr);
+ struct leapraid_driver_cmd *sp_cmd, *_sp_cmd = NULL;
+
+ list_for_each_entry(sp_cmd, &adapter->driver_cmds.special_cmd_list,
+ list)
+ if (cb_idx == sp_cmd->cb_idx) {
+ _sp_cmd = sp_cmd;
+ break;
+ }
+
+ if (WARN_ON(!_sp_cmd))
+ return 1;
+ if (WARN_ON(_sp_cmd->status == LEAPRAID_CMD_NOT_USED))
+ return 1;
+ if (WARN_ON(taskid != _sp_cmd->hp_taskid &&
+ taskid != _sp_cmd->taskid &&
+ taskid != _sp_cmd->inter_taskid))
+ return 1;
+
+ _sp_cmd->status |= LEAPRAID_CMD_DONE;
+ if (leap_mpi_rep) {
+ memcpy((void *)(&_sp_cmd->reply), leap_mpi_rep,
+ leap_mpi_rep->msg_len * 4);
+ _sp_cmd->status |= LEAPRAID_CMD_REPLY_VALID;
+
+ if (_sp_cmd->cb_idx == LEAPRAID_SCAN_DEV_CB_IDX) {
+ u16 adapter_status;
+
+ _sp_cmd->status &= ~LEAPRAID_CMD_PENDING;
+ adapter_status =
+ le16_to_cpu(leap_mpi_rep->adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status != LEAPRAID_ADAPTER_STATUS_SUCCESS)
+ adapter->scan_dev_desc.scan_dev_failed = true;
+
+ if (_sp_cmd->async_scan_dev) {
+ if (adapter_status ==
+ LEAPRAID_ADAPTER_STATUS_SUCCESS) {
+ leapraid_scan_dev_complete(adapter);
+ } else {
+ adapter->scan_dev_desc.scan_start_failed =
+ adapter_status;
+ }
+ return 1;
+ }
+
+ complete(&_sp_cmd->done);
+ return 1;
+ }
+
+ if (_sp_cmd->cb_idx == LEAPRAID_CTL_CB_IDX) {
+ struct leapraid_scsiio_rep *scsiio_reply;
+
+ if (leap_mpi_rep->function ==
+ LEAPRAID_FUNC_SCSIIO_REQ ||
+ leap_mpi_rep->function ==
+ LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH) {
+ scsiio_reply =
+ (struct leapraid_scsiio_rep *)leap_mpi_rep;
+ if (scsiio_reply->scsi_state &
+ LEAPRAID_SCSI_STATE_AUTOSENSE_VALID)
+ memcpy((void *)(&adapter->driver_cmds.ctl_cmd.sense),
+ leapraid_get_sense_buffer(adapter, taskid),
+ min_t(u32,
+ SCSI_SENSE_BUFFERSIZE,
+ le32_to_cpu(scsiio_reply->sense_count)));
+ }
+ }
+ }
+
+ _sp_cmd->status &= ~LEAPRAID_CMD_PENDING;
+ complete(&_sp_cmd->done);
+
+ return 1;
+}
+
+static void leapraid_request_descript_handler(struct leapraid_adapter *adapter,
+ union leapraid_rep_desc_union *rpf,
+ u8 req_desc_type, u8 msix_idx)
+{
+ u32 rep;
+ u16 taskid;
+
+ rep = 0;
+ taskid = le16_to_cpu(rpf->dflt_rep.taskid);
+ switch (req_desc_type) {
+ case LEAPRAID_RPY_DESC_FLG_FP_SCSI_IO_SUCCESS:
+ case LEAPRAID_RPY_DESC_FLG_SCSI_IO_SUCCESS:
+ if (taskid <= adapter->shost->can_queue ||
+ taskid == adapter->driver_cmds.driver_scsiio_cmd.taskid) {
+ leapraid_scsiio_done(adapter, taskid, msix_idx, 0);
+ } else {
+ if (leapraid_driver_cmds_done(adapter, taskid,
+ msix_idx, 0,
+ leapraid_get_cb_idx(adapter,
+ taskid)))
+ leapraid_free_taskid(adapter, taskid);
+ }
+ break;
+ case LEAPRAID_RPY_DESC_FLG_ADDRESS_REPLY:
+ rep = le32_to_cpu(rpf->addr_rep.rep_frame_addr);
+ if (rep > ((u32)adapter->mem_desc.rep_msg_dma +
+ adapter->adapter_attr.rep_msg_qd * LEAPRAID_REPLY_SIEZ) ||
+ rep < ((u32)adapter->mem_desc.rep_msg_dma))
+ rep = 0;
+ if (taskid) {
+ if (taskid <= adapter->shost->can_queue ||
+ taskid == adapter->driver_cmds.driver_scsiio_cmd.taskid) {
+ leapraid_scsiio_done(adapter, taskid,
+ msix_idx, rep);
+ } else {
+ if (leapraid_driver_cmds_done(adapter, taskid,
+ msix_idx, rep,
+ leapraid_get_cb_idx(adapter,
+ taskid)))
+ leapraid_free_taskid(adapter, taskid);
+ }
+ } else {
+ leapraid_handle_async_event(adapter, msix_idx, rep);
+ }
+
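+ /* Return the reply frame to the free queue and notify the controller via the host index register */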
+ if (rep) {
+ adapter->rep_msg_host_idx =
+ (adapter->rep_msg_host_idx ==
+ (adapter->adapter_attr.rep_msg_qd - 1)) ?
+ 0 : adapter->rep_msg_host_idx + 1;
+ adapter->mem_desc.rep_msg_addr[adapter->rep_msg_host_idx] =
+ cpu_to_le32(rep);
+ wmb(); /* Make sure that all write ops are in order */
+ writel(adapter->rep_msg_host_idx,
+ &adapter->iomem_base->rep_msg_host_idx);
+ }
+ break;
+ default:
+ break;
+ }
+}
+
+int leapraid_rep_queue_handler(struct leapraid_rq *rq)
+{
+ struct leapraid_adapter *adapter = rq->adapter;
+ union leapraid_rep_desc_union *rep_desc;
+ u8 req_desc_type;
+ u64 finish_cmds;
+ u8 msix_idx;
+
+ msix_idx = rq->msix_idx;
+ finish_cmds = 0;
+ if (!atomic_add_unless(&rq->busy, LEAPRAID_BUSY_LIMIT,
+ LEAPRAID_BUSY_LIMIT))
+ return finish_cmds;
+
+ rep_desc = &rq->rep_desc[rq->rep_post_host_idx];
+ req_desc_type = rep_desc->dflt_rep.rep_flg &
+ LEAPRAID_RPY_DESC_FLG_TYPE_MASK;
+ if (req_desc_type == LEAPRAID_RPY_DESC_FLG_UNUSED) {
+ atomic_dec(&rq->busy);
+ return finish_cmds;
+ }
+
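+ /* Drain reply descriptors until an unused or invalid (all ones) entry is reached */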
+ for (;;) {
+ if (rep_desc->u.low == UINT_MAX ||
+ rep_desc->u.high == UINT_MAX)
+ break;
+
+ leapraid_request_descript_handler(adapter, rep_desc,
+ req_desc_type, msix_idx);
+ dev_dbg(&adapter->pdev->dev,
+ "LEAPRAID_SCSIIO: Handled Desc taskid %d, msix %d\n",
+ le16_to_cpu(rep_desc->dflt_rep.taskid), msix_idx);
+ rep_desc->words = cpu_to_le64(ULLONG_MAX);
+ rq->rep_post_host_idx =
+ (rq->rep_post_host_idx ==
+ (adapter->adapter_attr.rep_desc_qd -
+ LEAPRAID_BUSY_LIMIT)) ?
+ 0 : rq->rep_post_host_idx + 1;
+ req_desc_type =
+ rq->rep_desc[rq->rep_post_host_idx].dflt_rep.rep_flg &
+ LEAPRAID_RPY_DESC_FLG_TYPE_MASK;
+ finish_cmds++;
+ if (req_desc_type == LEAPRAID_RPY_DESC_FLG_UNUSED)
+ break;
+ rep_desc = rq->rep_desc + rq->rep_post_host_idx;
+ }
+
+ if (!finish_cmds) {
+ atomic_dec(&rq->busy);
+ return finish_cmds;
+ }
+
+ wmb(); /* Make sure that all write ops are in order */
+ writel(rq->rep_post_host_idx | ((msix_idx & LEAPRAID_MSIX_GROUP_MASK) <<
+ LEAPRAID_RPHI_MSIX_IDX_SHIFT),
+ &adapter->iomem_base->rep_post_reg_idx[msix_idx /
+ LEAPRAID_MSIX_GROUP_SIZE].idx);
+ atomic_dec(&rq->busy);
+ return finish_cmds;
+}
+
+static irqreturn_t leapraid_irq_handler(int irq, void *bus_id)
+{
+ struct leapraid_rq *rq = bus_id;
+ struct leapraid_adapter *adapter = rq->adapter;
+
+ dev_dbg(&adapter->pdev->dev,
+ "LEAPRAID_SCSIIO: Receive a interrupt, irq %d msix %d\n",
+ irq, rq->msix_idx);
+
+ if (adapter->mask_int)
+ return IRQ_NONE;
+
+ return ((leapraid_rep_queue_handler(rq) > 0) ?
+ IRQ_HANDLED : IRQ_NONE);
+}
+
+void leapraid_sync_irqs(struct leapraid_adapter *adapter, bool poll)
+{
+ struct leapraid_int_rq *int_rq;
+ struct leapraid_blk_mq_poll_rq *blk_mq_poll_rq;
+ unsigned int i;
+
+ if (!adapter->notification_desc.msix_enable)
+ return;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering)
+ return;
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qdex; i++) {
+ int_rq = &adapter->notification_desc.int_rqs[i];
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering)
+ return;
+
+ if (int_rq->rq.msix_idx == 0)
+ continue;
+
+ synchronize_irq(pci_irq_vector(adapter->pdev, int_rq->rq.msix_idx));
+ if (poll)
+ leapraid_rep_queue_handler(&int_rq->rq);
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qcnt; i++) {
+ blk_mq_poll_rq =
+ &adapter->notification_desc.blk_mq_poll_rqs[i];
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering)
+ return;
+
+ if (blk_mq_poll_rq->rq.msix_idx == 0)
+ continue;
+
+ leapraid_rep_queue_handler(&blk_mq_poll_rq->rq);
+ }
+}
+
+void leapraid_mq_polling_pause(struct leapraid_adapter *adapter)
+{
+ int iopoll_q_count =
+ adapter->adapter_attr.rq_cnt -
+ adapter->notification_desc.iopoll_qdex;
+ int qid;
+
+ for (qid = 0; qid < iopoll_q_count; qid++)
+ atomic_set(&adapter->notification_desc.blk_mq_poll_rqs[qid].pause, 1);
+
+ for (qid = 0; qid < iopoll_q_count; qid++) {
+ while (atomic_read(&adapter->notification_desc.blk_mq_poll_rqs[qid].busy)) {
+ cpu_relax();
+ udelay(LEAPRAID_IO_POLL_DELAY_US);
+ }
+ }
+}
+
+void leapraid_mq_polling_resume(struct leapraid_adapter *adapter)
+{
+ int iopoll_q_count =
+ adapter->adapter_attr.rq_cnt -
+ adapter->notification_desc.iopoll_qdex;
+ int qid;
+
+ for (qid = 0; qid < iopoll_q_count; qid++)
+ atomic_set(&adapter->notification_desc.blk_mq_poll_rqs[qid].pause, 0);
+}
+
+static int leapraid_unlock_host_diag(struct leapraid_adapter *adapter,
+ u32 *host_diag)
+{
+ const u32 unlock_seq[] = { 0x0, 0xF, 0x4, 0xB, 0x2, 0x7, 0xD };
+ const int max_retries = LEAPRAID_UNLOCK_RETRY_LIMIT;
+ int retry = 0;
+ unsigned int i;
+
+ *host_diag = 0;
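+ /* Write the magic key sequence to the write-sequence register to enable diagnostic writes */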
+ while (retry++ <= max_retries) {
+ for (i = 0; i < ARRAY_SIZE(unlock_seq); i++)
+ writel(unlock_seq[i], &adapter->iomem_base->ws);
+
+ msleep(LEAPRAID_UNLOCK_SLEEP_MS);
+
+ *host_diag = leapraid_readl(&adapter->iomem_base->host_diag);
+ if (*host_diag & LEAPRAID_DIAG_WRITE_ENABLE)
+ return 0;
+ }
+
+ dev_err(&adapter->pdev->dev, "try host reset timeout!\n");
+ return -EFAULT;
+}
+
+static int leapraid_host_diag_reset(struct leapraid_adapter *adapter)
+{
+ u32 host_diag;
+ u32 cnt;
+
+ dev_info(&adapter->pdev->dev, "entering host diag reset!\n");
+ pci_cfg_access_lock(adapter->pdev);
+
+ mutex_lock(&adapter->reset_desc.host_diag_mutex);
+ if (leapraid_unlock_host_diag(adapter, &host_diag))
+ goto out;
+
+ writel(host_diag | LEAPRAID_DIAG_RESET,
+ &adapter->iomem_base->host_diag);
+
+ msleep(LEAPRAID_MSLEEP_NORMAL_MS);
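+ /* Poll until the controller clears the reset bit or the register reads back an invalid value */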
+ for (cnt = 0; cnt < LEAPRAID_RESET_LOOP_COUNT_DEFAULT; cnt++) {
+ host_diag = leapraid_readl(&adapter->iomem_base->host_diag);
+ if (host_diag == LEAPRAID_INVALID_HOST_DIAG_VAL)
+ goto out;
+
+ if (!(host_diag & LEAPRAID_DIAG_RESET))
+ break;
+
+ msleep(LEAPRAID_RESET_POLL_INTERVAL_MS);
+ }
+
+ writel(host_diag & ~LEAPRAID_DIAG_HOLD_ADAPTER_RESET,
+ &adapter->iomem_base->host_diag);
+ writel(0x0, &adapter->iomem_base->ws);
+ mutex_unlock(&adapter->reset_desc.host_diag_mutex);
+ if (!leapraid_wait_adapter_ready(adapter))
+ goto out;
+
+ pci_cfg_access_unlock(adapter->pdev);
+ dev_info(&adapter->pdev->dev, "host diag success!\n");
+ return 0;
+out:
+ pci_cfg_access_unlock(adapter->pdev);
+ dev_info(&adapter->pdev->dev, "host diag failed!\n");
+ mutex_unlock(&adapter->reset_desc.host_diag_mutex);
+ return -EFAULT;
+}
+
+static int leapraid_find_matching_port(
+ struct leapraid_card_port *card_port_table,
+ u8 count, u8 port_id, u64 sas_addr)
+{
+ int i;
+
+ for (i = 0; i < count; i++) {
+ if (card_port_table[i].port_id == port_id &&
+ card_port_table[i].sas_address == sas_addr)
+ return i;
+ }
+ return -1;
+}
+
+static u8 leapraid_fill_card_port_table(
+ struct leapraid_adapter *adapter,
+ struct leapraid_sas_io_unit_p0 *sas_iounit_p0,
+ struct leapraid_card_port *new_card_port_table)
+{
+ u8 port_entry_num = 0, port_id;
+ u16 attached_hdl;
+ u64 attached_sas_addr;
+ int i, idx;
+
+ for (i = 0; i < adapter->dev_topo.card.phys_num; i++) {
+ if ((sas_iounit_p0->phy_info[i].neg_link_rate >> 4)
+ < LEAPRAID_SAS_NEG_LINK_RATE_1_5)
+ continue;
+
+ attached_hdl =
+ le16_to_cpu(sas_iounit_p0->phy_info[i].attached_dev_hdl);
+ if (leapraid_get_sas_address(adapter,
+ attached_hdl,
+ &attached_sas_addr) != 0)
+ continue;
+
+ port_id = sas_iounit_p0->phy_info[i].port;
+
+ idx = leapraid_find_matching_port(new_card_port_table,
+ port_entry_num,
+ port_id,
+ attached_sas_addr);
+ if (idx >= 0) {
+ new_card_port_table[idx].phy_mask |= BIT(i);
+ } else {
+ new_card_port_table[port_entry_num].port_id = port_id;
+ new_card_port_table[port_entry_num].phy_mask = BIT(i);
+ new_card_port_table[port_entry_num].sas_address =
+ attached_sas_addr;
+ port_entry_num++;
+ }
+ }
+
+ return port_entry_num;
+}
+
+static u8 leapraid_set_new_card_port_table_after_reset(
+ struct leapraid_adapter *adapter,
+ struct leapraid_card_port *new_card_port_table)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_io_unit_p0 *sas_iounit_p0 = NULL;
+ u8 port_entry_num = 0;
+ u16 sz;
+
+ sz = offsetof(struct leapraid_sas_io_unit_p0, phy_info) +
+ (adapter->dev_topo.card.phys_num *
+ sizeof(struct leapraid_sas_io_unit0_phy_info));
+ sas_iounit_p0 = kzalloc(sz, GFP_KERNEL);
+ if (!sas_iounit_p0)
+ return port_entry_num;
+
+ cfgp1.size = sz;
+ if ((leapraid_op_config_page(adapter, sas_iounit_p0, cfgp1, cfgp2,
+ GET_SAS_IOUNIT_PG0)) != 0)
+ goto out;
+
+ port_entry_num = leapraid_fill_card_port_table(adapter,
+ sas_iounit_p0,
+ new_card_port_table);
+out:
+ kfree(sas_iounit_p0);
+ return port_entry_num;
+}
+
+static void leapraid_update_existing_port(struct leapraid_adapter *adapter,
+ struct leapraid_card_port *new_table,
+ int entry_idx, int port_entry_num)
+{
+ struct leapraid_card_port *matched_card_port = NULL;
+ int matched_code;
+ int count = 0, lcount = 0;
+ u64 sas_addr;
+ int i;
+
+ matched_code = leapraid_check_card_port(adapter,
+ &new_table[entry_idx],
+ &matched_card_port,
+ &count);
+
+ if (!matched_card_port)
+ return;
+
+ if (matched_code == SAME_PORT_WITH_PARTIALLY_CHANGED_PHYS ||
+ matched_code == SAME_ADDR_WITH_PARTIALLY_CHANGED_PHYS) {
+ leapraid_add_or_del_phys_from_existing_port(adapter,
+ matched_card_port,
+ new_table,
+ entry_idx,
+ port_entry_num);
+ } else if (matched_code == SAME_ADDR_ONLY) {
+ sas_addr = new_table[entry_idx].sas_address;
+ for (i = 0; i < port_entry_num; i++) {
+ if (new_table[i].sas_address == sas_addr)
+ lcount++;
+ }
+ if (count > 1 || lcount > 1)
+ return;
+
+ leapraid_add_or_del_phys_from_existing_port(adapter,
+ matched_card_port,
+ new_table,
+ entry_idx,
+ port_entry_num);
+ }
+
+ if (matched_card_port->port_id != new_table[entry_idx].port_id)
+ matched_card_port->port_id = new_table[entry_idx].port_id;
+
+ matched_card_port->flg &= ~LEAPRAID_CARD_PORT_FLG_DIRTY;
+ matched_card_port->phy_mask = new_table[entry_idx].phy_mask;
+}
+
+static void leapraid_update_card_port_after_reset(
+ struct leapraid_adapter *adapter)
+{
+ struct leapraid_card_port *new_card_port_table;
+ struct leapraid_card_port *matched_card_port = NULL;
+ u8 port_entry_num = 0;
+ u8 nr_phys;
+ int i;
+
+ if (leapraid_get_adapter_phys(adapter, &nr_phys) || !nr_phys)
+ return;
+
+ adapter->dev_topo.card.phys_num = nr_phys;
+ new_card_port_table = kcalloc(adapter->dev_topo.card.phys_num,
+ sizeof(struct leapraid_card_port),
+ GFP_KERNEL);
+ if (!new_card_port_table)
+ return;
+
+ port_entry_num =
+ leapraid_set_new_card_port_table_after_reset(adapter,
+ new_card_port_table);
+ if (!port_entry_num) {
+ kfree(new_card_port_table);
+ return;
+ }
+
+ list_for_each_entry(matched_card_port,
+ &adapter->dev_topo.card_port_list, list) {
+ matched_card_port->flg |= LEAPRAID_CARD_PORT_FLG_DIRTY;
+ }
+
+ matched_card_port = NULL;
+ for (i = 0; i < port_entry_num; i++)
+ leapraid_update_existing_port(adapter,
+ new_card_port_table,
+ i, port_entry_num);
+
+ kfree(new_card_port_table);
+}
+
+static bool leapraid_is_valid_vphy(
+ struct leapraid_adapter *adapter,
+ struct leapraid_sas_io_unit_p0 *sas_io_unit_p0,
+ int phy_index)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_phy_p0 phy_p0;
+
+ if ((sas_io_unit_p0->phy_info[phy_index].neg_link_rate >> 4) <
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5)
+ return false;
+
+ if (!(le32_to_cpu(sas_io_unit_p0->phy_info[phy_index].controller_phy_dev_info) &
+ LEAPRAID_DEVTYP_SEP))
+ return false;
+
+ cfgp1.phy_number = phy_index;
+ if (leapraid_op_config_page(adapter, &phy_p0, cfgp1, cfgp2,
+ GET_PHY_PG0))
+ return false;
+
+ if (!(le32_to_cpu(phy_p0.phy_info) & LEAPRAID_SAS_PHYINFO_VPHY))
+ return false;
+
+ return true;
+}
+
+static void leapraid_update_vphy_binding(struct leapraid_adapter *adapter,
+ struct leapraid_card_port *card_port,
+ struct leapraid_vphy *vphy,
+ int phy_index, u8 may_new_port_id,
+ u64 attached_sas_addr)
+{
+ struct leapraid_card_port *may_new_card_port;
+ struct leapraid_sas_dev *sas_dev;
+
+ may_new_card_port = leapraid_get_port_by_id(adapter,
+ may_new_port_id,
+ true);
+ if (!may_new_card_port) {
+ may_new_card_port = kzalloc(sizeof(*may_new_card_port),
+ GFP_KERNEL);
+ if (!may_new_card_port)
+ return;
+ may_new_card_port->port_id = may_new_port_id;
+ dev_err(&adapter->pdev->dev,
+ "%s: new card port %p added, port=%d\n",
+ __func__, may_new_card_port, may_new_port_id);
+ list_add_tail(&may_new_card_port->list,
+ &adapter->dev_topo.card_port_list);
+ }
+
+ if (card_port != may_new_card_port) {
+ if (!may_new_card_port->vphys_mask)
+ INIT_LIST_HEAD(&may_new_card_port->vphys_list);
+ may_new_card_port->vphys_mask |= BIT(phy_index);
+ card_port->vphys_mask &= ~BIT(phy_index);
+ list_move(&vphy->list, &may_new_card_port->vphys_list);
+
+ sas_dev = leapraid_get_sas_dev_by_addr(adapter,
+ attached_sas_addr,
+ card_port);
+ if (sas_dev)
+ sas_dev->card_port = may_new_card_port;
+ }
+
+ if (may_new_card_port->flg & LEAPRAID_CARD_PORT_FLG_DIRTY) {
+ may_new_card_port->sas_address = 0;
+ may_new_card_port->phy_mask = 0;
+ may_new_card_port->flg &= ~LEAPRAID_CARD_PORT_FLG_DIRTY;
+ }
+ vphy->flg &= ~LEAPRAID_VPHY_FLG_DIRTY;
+}
+
+static void leapraid_update_vphys_after_reset(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_io_unit_p0 *sas_iounit_p0 = NULL;
+ struct leapraid_card_port *card_port, *card_port_next;
+ struct leapraid_vphy *vphy, *vphy_next;
+ u64 attached_sas_addr;
+ u16 sz;
+ u16 attached_hdl;
+ bool found = false;
+ u8 port_id;
+ int i;
+
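+ /* Mark every vphy dirty; entries revalidated from SAS IO unit page 0 below get the flag cleared */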
+ list_for_each_entry_safe(card_port, card_port_next,
+ &adapter->dev_topo.card_port_list, list) {
+ if (!card_port->vphys_mask)
+ continue;
+
+ list_for_each_entry_safe(vphy, vphy_next,
+ &card_port->vphys_list, list) {
+ vphy->flg |= LEAPRAID_VPHY_FLG_DIRTY;
+ }
+ }
+
+ sz = offsetof(struct leapraid_sas_io_unit_p0, phy_info) +
+ (adapter->dev_topo.card.phys_num *
+ sizeof(struct leapraid_sas_io_unit0_phy_info));
+ sas_iounit_p0 = kzalloc(sz, GFP_KERNEL);
+ if (!sas_iounit_p0)
+ return;
+
+ cfgp1.size = sz;
+ if ((leapraid_op_config_page(adapter, sas_iounit_p0, cfgp1, cfgp2,
+ GET_SAS_IOUNIT_PG0)) != 0)
+ goto out;
+
+ for (i = 0; i < adapter->dev_topo.card.phys_num; i++) {
+ if (!leapraid_is_valid_vphy(adapter, sas_iounit_p0, i))
+ continue;
+
+ attached_hdl =
+ le16_to_cpu(sas_iounit_p0->phy_info[i].attached_dev_hdl);
+ if (leapraid_get_sas_address(adapter, attached_hdl,
+ &attached_sas_addr) != 0)
+ continue;
+
+ found = false;
+ card_port = NULL;
+ card_port_next = NULL;
+ list_for_each_entry_safe(card_port, card_port_next,
+ &adapter->dev_topo.card_port_list,
+ list) {
+ if (!card_port->vphys_mask)
+ continue;
+
+ list_for_each_entry_safe(vphy, vphy_next,
+ &card_port->vphys_list,
+ list) {
+ if (!(vphy->flg & LEAPRAID_VPHY_FLG_DIRTY))
+ continue;
+
+ if (vphy->sas_address != attached_sas_addr)
+ continue;
+
+ if (!(vphy->phy_mask & BIT(i)))
+ vphy->phy_mask = BIT(i);
+
+ port_id = sas_iounit_p0->phy_info[i].port;
+
+ leapraid_update_vphy_binding(adapter,
+ card_port,
+ vphy,
+ i,
+ port_id,
+ attached_sas_addr);
+
+ found = true;
+ break;
+ }
+ if (found)
+ break;
+ }
+ }
+out:
+ kfree(sas_iounit_p0);
+}
+
+static void leapraid_mark_all_dev_deleted(struct leapraid_adapter *adapter)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_device *sdev;
+
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (sdev_priv && sdev_priv->starget_priv)
+ sdev_priv->starget_priv->deleted = true;
+ }
+}
+
+static void leapraid_free_enc_list(struct leapraid_adapter *adapter)
+{
+ struct leapraid_enc_node *enc_dev, *enc_dev_next;
+
+ list_for_each_entry_safe(enc_dev, enc_dev_next,
+ &adapter->dev_topo.enc_list,
+ list) {
+ list_del(&enc_dev->list);
+ kfree(enc_dev);
+ }
+}
+
+static void leapraid_rebuild_enc_list_after_reset(
+ struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_enc_node *enc_node;
+ u16 enc_hdl;
+ int rc;
+
+ leapraid_free_enc_list(adapter);
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (enc_hdl = 0xFFFF; ; enc_hdl = le16_to_cpu(enc_node->pg0.enc_hdl)) {
+ enc_node = kzalloc(sizeof(*enc_node),
+ GFP_KERNEL);
+ if (!enc_node)
+ return;
+
+ cfgp2.handle = enc_hdl;
+ rc = leapraid_op_config_page(adapter, &enc_node->pg0, cfgp1,
+ cfgp2, GET_SAS_ENCLOSURE_PG0);
+ if (rc) {
+ kfree(enc_node);
+ return;
+ }
+
+ list_add_tail(&enc_node->list, &adapter->dev_topo.enc_list);
+ }
+}
+
+static void leapraid_mark_resp_sas_dev(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev_p0 *sas_dev_p0)
+{
+ struct leapraid_starget_priv *starget_priv = NULL;
+ struct leapraid_enc_node *enc_node = NULL;
+ struct leapraid_card_port *card_port;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_target *starget;
+ unsigned long flags;
+
+ card_port = leapraid_get_port_by_id(adapter, sas_dev_p0->physical_port,
+ false);
+ if (sas_dev_p0->enc_hdl) {
+ enc_node = leapraid_enc_find_by_hdl(adapter,
+ le16_to_cpu(
+ sas_dev_p0->enc_hdl));
+ if (!enc_node)
+ dev_info(&adapter->pdev->dev,
+ "enc hdl 0x%04x has no matched enc dev\n",
+ le16_to_cpu(sas_dev_p0->enc_hdl));
+ }
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_list, list) {
+ if (sas_dev->sas_addr == le64_to_cpu(sas_dev_p0->sas_address) &&
+ sas_dev->slot == le16_to_cpu(sas_dev_p0->slot) &&
+ sas_dev->card_port == card_port) {
+ sas_dev->resp = true;
+ starget = sas_dev->starget;
+ if (starget && starget->hostdata) {
+ starget_priv = starget->hostdata;
+ starget_priv->tm_busy = false;
+ starget_priv->deleted = false;
+ } else {
+ starget_priv = NULL;
+ }
+
+ if (starget) {
+ starget_printk(KERN_INFO, starget,
+ "dev: hdl=0x%04x, sas addr=0x%016llx, port_id=%d\n",
+ sas_dev->hdl,
+ (unsigned long long)sas_dev->sas_addr,
+ sas_dev->card_port->port_id);
+ if (sas_dev->enc_hdl != 0)
+ starget_printk(KERN_INFO, starget,
+ "enc info: enc_lid=0x%016llx, slot=%d\n",
+ (unsigned long long)sas_dev->enc_lid,
+ sas_dev->slot);
+ }
+
+ if (le16_to_cpu(sas_dev_p0->flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_ENC_LEVEL_VALID) {
+ sas_dev->enc_level = sas_dev_p0->enc_level;
+ memcpy(sas_dev->connector_name,
+ sas_dev_p0->connector_name, 4);
+ sas_dev->connector_name[4] = '\0';
+ } else {
+ sas_dev->enc_level = 0;
+ sas_dev->connector_name[0] = '\0';
+ }
+
+ sas_dev->enc_hdl =
+ le16_to_cpu(sas_dev_p0->enc_hdl);
+ if (enc_node) {
+ sas_dev->enc_lid =
+ le64_to_cpu(enc_node->pg0.enc_lid);
+ }
+ if (sas_dev->hdl == le16_to_cpu(sas_dev_p0->dev_hdl))
+ goto out;
+
+ dev_info(&adapter->pdev->dev,
+ "hdl changed: 0x%04x -> 0x%04x\n",
+ sas_dev->hdl, le16_to_cpu(sas_dev_p0->dev_hdl));
+ sas_dev->hdl = le16_to_cpu(sas_dev_p0->dev_hdl);
+ if (starget_priv)
+ starget_priv->hdl =
+ le16_to_cpu(sas_dev_p0->dev_hdl);
+ goto out;
+ }
+ }
+out:
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+}
+
+static void leapraid_search_resp_sas_dev(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+ u32 device_info;
+
+ dev_info(&adapter->pdev->dev,
+ "begin searching for sas end devices\n");
+
+ if (list_empty(&adapter->dev_topo.sas_dev_list))
+ goto out;
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (cfgp2.handle = 0xFFFF;
+ !leapraid_op_config_page(adapter, &sas_dev_p0,
+ cfgp1, cfgp2, GET_SAS_DEVICE_PG0);
+ cfgp2.handle = le16_to_cpu(sas_dev_p0.dev_hdl)) {
+ device_info = le32_to_cpu(sas_dev_p0.dev_info);
+ if (!(leapraid_is_end_dev(device_info)))
+ continue;
+
+ leapraid_mark_resp_sas_dev(adapter, &sas_dev_p0);
+ }
+out:
+ dev_info(&adapter->pdev->dev,
+ "sas end devices searching complete\n");
+}
+
+static void leapraid_mark_resp_raid_volume(struct leapraid_adapter *adapter,
+ u64 wwid, u16 hdl)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_raid_volume *raid_volume;
+ struct scsi_target *starget;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ list_for_each_entry(raid_volume,
+ &adapter->dev_topo.raid_volume_list, list) {
+ if (raid_volume->wwid == wwid && raid_volume->starget) {
+ starget = raid_volume->starget;
+ if (starget && starget->hostdata) {
+ starget_priv = starget->hostdata;
+ starget_priv->deleted = false;
+ } else {
+ starget_priv = NULL;
+ }
+
+ raid_volume->resp = true;
+ spin_unlock_irqrestore(
+ &adapter->dev_topo.raid_volume_lock,
+ flags);
+
+ starget_printk(
+ KERN_INFO, raid_volume->starget,
+ "raid volume: hdl=0x%04x, wwid=0x%016llx\n",
+ hdl, (unsigned long long)raid_volume->wwid);
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock,
+ flags);
+ if (raid_volume->hdl == hdl) {
+ spin_unlock_irqrestore(
+ &adapter->dev_topo.raid_volume_lock,
+ flags);
+ return;
+ }
+
+ dev_info(&adapter->pdev->dev,
+ "hdl changed: 0x%04x -> 0x%04x\n",
+ raid_volume->hdl, hdl);
+
+ raid_volume->hdl = hdl;
+ if (starget_priv)
+ starget_priv->hdl = hdl;
+ spin_unlock_irqrestore(
+ &adapter->dev_topo.raid_volume_lock,
+ flags);
+ return;
+ }
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+}
+
+static void leapraid_search_resp_raid_volume(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_1 cfgp1_extra = {0};
+ union cfg_param_2 cfgp2 = {0};
+ union cfg_param_2 cfgp2_extra = {0};
+ struct leapraid_raidvol_p1 raidvol_p1;
+ struct leapraid_raidvol_p0 raidvol_p0;
+ struct leapraid_raidpd_p0 raidpd_p0;
+ u16 hdl;
+ u8 phys_disk_num;
+
+ if (!adapter->adapter_attr.raid_support)
+ return;
+
+ dev_info(&adapter->pdev->dev,
+ "begin searching for raid volumes\n");
+
+ if (list_empty(&adapter->dev_topo.raid_volume_list))
+ goto out;
+
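+ /* Walk all RAID volumes using the firmware's get-next-handle iteration, starting from handle 0xFFFF */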
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (hdl = 0xFFFF, cfgp2.handle = hdl;
+ !leapraid_op_config_page(adapter, &raidvol_p1, cfgp1, cfgp2,
+ GET_RAID_VOLUME_PG1);
+ cfgp2.handle = hdl) {
+ hdl = le16_to_cpu(raidvol_p1.dev_hdl);
+ cfgp1_extra.size = sizeof(struct leapraid_raidvol_p0);
+ cfgp2_extra.handle = hdl;
+ if (leapraid_op_config_page(adapter, &raidvol_p0, cfgp1_extra,
+ cfgp2_extra, GET_RAID_VOLUME_PG0))
+ continue;
+
+ if (raidvol_p0.volume_state == LEAPRAID_VOL_STATE_OPTIMAL ||
+ raidvol_p0.volume_state == LEAPRAID_VOL_STATE_ONLINE ||
+ raidvol_p0.volume_state == LEAPRAID_VOL_STATE_DEGRADED)
+ leapraid_mark_resp_raid_volume(
+ adapter,
+ le64_to_cpu(raidvol_p1.wwid),
+ hdl);
+ }
+
+ memset(adapter->dev_topo.pd_hdls, 0, adapter->dev_topo.pd_hdls_sz);
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (phys_disk_num = 0xFF, cfgp2.form_specific = phys_disk_num;
+ !leapraid_op_config_page(adapter, &raidpd_p0, cfgp1, cfgp2,
+ GET_PHY_DISK_PG0);
+ cfgp2.form_specific = phys_disk_num) {
+ phys_disk_num = raidpd_p0.phys_disk_num;
+ hdl = le16_to_cpu(raidpd_p0.dev_hdl);
+ set_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls);
+ }
+out:
+ dev_info(&adapter->pdev->dev,
+ "raid volumes searching complete\n");
+}
+
+static void leapraid_mark_resp_exp(struct leapraid_adapter *adapter,
+ struct leapraid_exp_p0 *exp_pg0)
+{
+ struct leapraid_enc_node *enc_node = NULL;
+ struct leapraid_topo_node *topo_node_exp;
+ u16 enc_hdl = le16_to_cpu(exp_pg0->enc_hdl);
+ u64 sas_address = le64_to_cpu(exp_pg0->sas_address);
+ u16 hdl = le16_to_cpu(exp_pg0->dev_hdl);
+ u8 port_id = exp_pg0->physical_port;
+ struct leapraid_card_port *card_port = leapraid_get_port_by_id(adapter,
+ port_id,
+ false);
+ unsigned long flags;
+ int i;
+
+ if (enc_hdl)
+ enc_node = leapraid_enc_find_by_hdl(adapter, enc_hdl);
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_for_each_entry(topo_node_exp, &adapter->dev_topo.exp_list, list) {
+ if (topo_node_exp->sas_address != sas_address ||
+ topo_node_exp->card_port != card_port)
+ continue;
+
+ topo_node_exp->resp = true;
+ if (enc_node) {
+ topo_node_exp->enc_lid =
+ le64_to_cpu(enc_node->pg0.enc_lid);
+ topo_node_exp->enc_hdl = le16_to_cpu(exp_pg0->enc_hdl);
+ }
+ if (topo_node_exp->hdl == hdl)
+ goto out;
+
+ dev_info(&adapter->pdev->dev,
+ "hdl changed: 0x%04x -> 0x%04x\n",
+ topo_node_exp->hdl, hdl);
+ topo_node_exp->hdl = hdl;
+ for (i = 0; i < topo_node_exp->phys_num; i++)
+ topo_node_exp->card_phy[i].hdl = hdl;
+ goto out;
+ }
+out:
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+}
+
+static void leapraid_search_resp_exp(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_exp_p0 exp_p0;
+ u64 sas_address;
+ u16 hdl;
+ u8 port;
+
+ dev_info(&adapter->pdev->dev,
+ "begin searching for expanders\n");
+ if (list_empty(&adapter->dev_topo.exp_list))
+ goto out;
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (hdl = 0xFFFF, cfgp2.handle = hdl;
+ !leapraid_op_config_page(adapter, &exp_p0, cfgp1, cfgp2,
+ GET_SAS_EXPANDER_PG0);
+ cfgp2.handle = hdl) {
+ hdl = le16_to_cpu(exp_p0.dev_hdl);
+ sas_address = le64_to_cpu(exp_p0.sas_address);
+ port = exp_p0.physical_port;
+
+ dev_info(&adapter->pdev->dev,
+ "exp detected: hdl=0x%04x, sas=0x%016llx, port=%u",
+ hdl, (unsigned long long)sas_address,
+ ((adapter->adapter_attr.enable_mp) ? (port) :
+ (LEAPRAID_DISABLE_MP_PORT_ID)));
+ leapraid_mark_resp_exp(adapter, &exp_p0);
+ }
+out:
+ dev_info(&adapter->pdev->dev,
+ "expander searching complete\n");
+}
+
+void leapraid_wait_cmds_done(struct leapraid_adapter *adapter)
+{
+ struct leapraid_io_req_tracker *io_req_tracker;
+ unsigned long flags;
+ u16 i;
+
+ adapter->reset_desc.pending_io_cnt = 0;
+ if (!leapraid_pci_active(adapter)) {
+ dev_err(&adapter->pdev->dev,
+ "%s %s: pci error, device reset or unplugged!\n",
+ adapter->adapter_attr.name, __func__);
+ return;
+ }
+
+ if (leapraid_get_adapter_state(adapter) != LEAPRAID_DB_OPERATIONAL)
+ return;
+
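+ /* Count outstanding SCSI commands and wait up to 10 seconds for them to drain */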
+ spin_lock_irqsave(&adapter->dynamic_task_desc.task_lock, flags);
+ for (i = 1; i <= adapter->shost->can_queue; i++) {
+ io_req_tracker = leapraid_get_io_tracker_from_taskid(adapter,
+ i);
+ if (io_req_tracker && io_req_tracker->taskid != 0)
+ if (io_req_tracker->scmd)
+ adapter->reset_desc.pending_io_cnt++;
+ }
+ spin_unlock_irqrestore(&adapter->dynamic_task_desc.task_lock, flags);
+
+ if (!adapter->reset_desc.pending_io_cnt)
+ return;
+
+ wait_event_timeout(adapter->reset_desc.reset_wait_queue,
+ adapter->reset_desc.pending_io_cnt == 0, 10 * HZ);
+}
+
+int leapraid_hard_reset_handler(struct leapraid_adapter *adapter,
+ enum reset_type type)
+{
+ unsigned long flags;
+ int rc;
+
+ if (!mutex_trylock(&adapter->reset_desc.adapter_reset_mutex)) {
+ do {
+ ssleep(1);
+ } while (adapter->access_ctrl.shost_recovering);
+ return adapter->reset_desc.adapter_reset_results;
+ }
+
+ if (!leapraid_pci_active(adapter)) {
+ if (leapraid_pci_removed(adapter)) {
+ dev_info(&adapter->pdev->dev,
+ "pci_dev removed, pausing polling and cleaning cmds\n");
+ leapraid_mq_polling_pause(adapter);
+ leapraid_clean_active_scsi_cmds(adapter);
+ leapraid_mq_polling_resume(adapter);
+ }
+ rc = 0;
+ mutex_unlock(&adapter->reset_desc.adapter_reset_mutex);
+ goto exit_pci_unavailable;
+ }
+
+ dev_info(&adapter->pdev->dev, "starting hard reset\n");
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ adapter->access_ctrl.shost_recovering = true;
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+
+ leapraid_wait_cmds_done(adapter);
+ leapraid_mask_int(adapter);
+ leapraid_mq_polling_pause(adapter);
+ rc = leapraid_make_adapter_ready(adapter, type);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "failed to make adapter ready, rc=%d\n", rc);
+ goto out;
+ }
+
+ rc = leapraid_fw_log_init(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "firmware log init failed\n");
+ goto out;
+ }
+
+ leapraid_clean_active_cmds(adapter);
+ if (adapter->scan_dev_desc.driver_loading &&
+ adapter->scan_dev_desc.scan_dev_failed) {
+ dev_err(&adapter->pdev->dev,
+ "Previous device scan failed or driver loading\n");
+ adapter->access_ctrl.host_removing = true;
+ rc = -EFAULT;
+ goto out;
+ }
+
+ rc = leapraid_make_adapter_available(adapter);
+ if (!rc) {
+ dev_info(&adapter->pdev->dev,
+ "adapter is now available, rebuilding topology\n");
+ if (adapter->adapter_attr.enable_mp) {
+ leapraid_update_card_port_after_reset(adapter);
+ leapraid_update_vphys_after_reset(adapter);
+ }
+ leapraid_mark_all_dev_deleted(adapter);
+ leapraid_rebuild_enc_list_after_reset(adapter);
+ leapraid_search_resp_sas_dev(adapter);
+ leapraid_search_resp_raid_volume(adapter);
+ leapraid_search_resp_exp(adapter);
+ leapraid_hardreset_barrier(adapter);
+ }
+out:
+ dev_info(&adapter->pdev->dev, "hard reset %s\n",
+ ((rc == 0) ? "SUCCESS" : "FAILED"));
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ adapter->reset_desc.adapter_reset_results = rc;
+ adapter->access_ctrl.shost_recovering = false;
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+ adapter->reset_desc.reset_cnt++;
+ mutex_unlock(&adapter->reset_desc.adapter_reset_mutex);
+
+ if (rc)
+ leapraid_clean_active_scsi_cmds(adapter);
+ leapraid_mq_polling_resume(adapter);
+
+ return rc;
+
+exit_pci_unavailable:
+ dev_info(&adapter->pdev->dev, "pcie unavailable!\n");
+ return rc;
+}
+
+static int leapraid_get_adapter_features(struct leapraid_adapter *adapter)
+{
+ struct leapraid_adapter_features_req leap_mpi_req;
+ struct leapraid_adapter_features_rep leap_mpi_rep;
+ u8 fw_major, fw_minor, fw_build, fw_release;
+ u32 db;
+ int r;
+
+ db = leapraid_readl(&adapter->iomem_base->db);
+ if (db & LEAPRAID_DB_USED ||
+ (db & LEAPRAID_DB_MASK) == LEAPRAID_DB_FAULT)
+ return -EFAULT;
+
+ if (((db & LEAPRAID_DB_MASK) != LEAPRAID_DB_READY) &&
+ ((db & LEAPRAID_DB_MASK) != LEAPRAID_DB_OPERATIONAL)) {
+ if (!leapraid_wait_adapter_ready(adapter))
+ return -EFAULT;
+ }
+
+ memset(&leap_mpi_req, 0, sizeof(struct leapraid_adapter_features_req));
+ memset(&leap_mpi_rep, 0, sizeof(struct leapraid_adapter_features_rep));
+ leap_mpi_req.func = LEAPRAID_FUNC_GET_ADAPTER_FEATURES;
+ r = leapraid_handshake_func(adapter,
+ sizeof(struct leapraid_adapter_features_req),
+ (u32 *)&leap_mpi_req,
+ sizeof(struct leapraid_adapter_features_rep),
+ (u16 *)&leap_mpi_rep);
+ if (r) {
+ dev_err(&adapter->pdev->dev,
+ "%s %s: handshake failed, r=%d\n",
+ adapter->adapter_attr.name, __func__, r);
+ return r;
+ }
+
+ memset(&adapter->adapter_attr.features, 0,
+ sizeof(struct leapraid_adapter_features));
+ adapter->adapter_attr.features.req_slot =
+ le16_to_cpu(leap_mpi_rep.req_slot);
+ adapter->adapter_attr.features.hp_slot =
+ le16_to_cpu(leap_mpi_rep.hp_slot);
+ adapter->adapter_attr.features.adapter_caps =
+ le32_to_cpu(leap_mpi_rep.adapter_caps);
+ adapter->adapter_attr.features.max_volumes =
+ leap_mpi_rep.max_volumes;
+ if (!adapter->adapter_attr.features.max_volumes)
+ adapter->adapter_attr.features.max_volumes =
+ LEAPRAID_MAX_VOLUMES_DEFAULT;
+ adapter->adapter_attr.features.max_dev_handle =
+ le16_to_cpu(leap_mpi_rep.max_dev_hdl);
+ if (!adapter->adapter_attr.features.max_dev_handle)
+ adapter->adapter_attr.features.max_dev_handle =
+ LEAPRAID_MAX_DEV_HANDLE_DEFAULT;
+ adapter->adapter_attr.features.min_dev_handle =
+ le16_to_cpu(leap_mpi_rep.min_dev_hdl);
+ if ((adapter->adapter_attr.features.adapter_caps &
+ LEAPRAID_ADAPTER_FEATURES_CAP_INTEGRATED_RAID))
+ adapter->adapter_attr.raid_support = true;
+ if (WARN_ON(!(adapter->adapter_attr.features.adapter_caps &
+ LEAPRAID_ADAPTER_FEATURES_CAP_ATOMIC_REQ)))
+ return -EFAULT;
+ adapter->adapter_attr.features.fw_version =
+ le32_to_cpu(leap_mpi_rep.fw_version);
+
+ fw_major = (adapter->adapter_attr.features.fw_version >> 24) & 0xFF;
+ fw_minor = (adapter->adapter_attr.features.fw_version >> 16) & 0xFF;
+ fw_build = (adapter->adapter_attr.features.fw_version >> 8) & 0xFF;
+ fw_release = adapter->adapter_attr.features.fw_version & 0xFF;
+
+ dev_info(&adapter->pdev->dev,
+ "Firmware version: %u.%u.%u.%u (0x%08x)\n",
+ fw_major, fw_minor, fw_build, fw_release,
+ adapter->adapter_attr.features.fw_version);
+
+ if (fw_major < 2) {
+ dev_err(&adapter->pdev->dev,
+ "Unsupported firmware major version, requires >= 2\n");
+ return -EFAULT;
+ }
+ adapter->shost->max_id = -1;
+
+ return 0;
+}
+
+static inline void leapraid_disable_pcie(struct leapraid_adapter *adapter)
+{
+ mutex_lock(&adapter->access_ctrl.pci_access_lock);
+ if (adapter->iomem_base) {
+ iounmap(adapter->iomem_base);
+ adapter->iomem_base = NULL;
+ }
+ if (pci_is_enabled(adapter->pdev)) {
+ pci_disable_pcie_error_reporting(adapter->pdev);
+ pci_release_regions(adapter->pdev);
+ pci_disable_device(adapter->pdev);
+ }
+ mutex_unlock(&adapter->access_ctrl.pci_access_lock);
+}
+
+static int leapraid_enable_pcie(struct leapraid_adapter *adapter)
+{
+ u64 dma_mask;
+ int rc;
+
+ rc = pci_enable_device(adapter->pdev);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "failed to enable PCI device\n");
+ return rc;
+ }
+
+ rc = pci_request_regions(adapter->pdev, LEAPRAID_DRIVER_NAME);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "failed to obtain PCI resources\n");
+ goto disable_pcie;
+ }
+
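+ /* Prefer 64-bit DMA when dma_addr_t is wide enough, otherwise fall back to a 32-bit mask */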
+ if (sizeof(dma_addr_t) > 4) {
+ dma_mask = DMA_BIT_MASK(64);
+ adapter->adapter_attr.use_32_dma_mask = false;
+ } else {
+ dma_mask = DMA_BIT_MASK(32);
+ adapter->adapter_attr.use_32_dma_mask = true;
+ }
+
+ rc = dma_set_mask_and_coherent(&adapter->pdev->dev, dma_mask);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "failed to set %lld DMA mask\n", dma_mask);
+ goto disable_pcie;
+ }
+ adapter->iomem_base = ioremap(pci_resource_start(adapter->pdev, 0),
+ sizeof(struct leapraid_reg_base));
+ if (!adapter->iomem_base) {
+ dev_err(&adapter->pdev->dev,
+ "failed to map memory for controller registers\n");
+ rc = -ENOMEM;
+ goto disable_pcie;
+ }
+
+ pci_enable_pcie_error_reporting(adapter->pdev);
+ pci_set_master(adapter->pdev);
+
+ return 0;
+
+disable_pcie:
+ return rc;
+}
+
+static void leapraid_cpus_on_irq(struct leapraid_adapter *adapter)
+{
+ struct leapraid_int_rq *int_rq;
+ unsigned int i, base_group, this_group;
+ unsigned int cpu, nr_cpus, total_msix, index = 0;
+
+ total_msix = adapter->notification_desc.iopoll_qdex;
+ nr_cpus = num_online_cpus();
+
+ if (!nr_cpus || !total_msix)
+ return;
+ base_group = nr_cpus / total_msix;
+
+ cpu = cpumask_first(cpu_online_mask);
+ for (index = 0; index < adapter->notification_desc.iopoll_qdex;
+ index++) {
+ int_rq = &adapter->notification_desc.int_rqs[index];
+
+ if (cpu >= nr_cpus)
+ break;
+
+ this_group = base_group +
+ (index < (nr_cpus % total_msix) ? 1 : 0);
+
+ for (i = 0 ; i < this_group ; i++) {
+ adapter->notification_desc.msix_cpu_map[cpu] =
+ int_rq->rq.msix_idx;
+ cpu = cpumask_next(cpu, cpu_online_mask);
+ }
+ }
+}
+
+static void leapraid_map_msix_to_cpu(struct leapraid_adapter *adapter)
+{
+ struct leapraid_int_rq *int_rq;
+ const cpumask_t *affinity_mask;
+ u32 i;
+ u16 cpu;
+
+ if (!adapter->adapter_attr.rq_cnt)
+ return;
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qdex; i++) {
+ int_rq = &adapter->notification_desc.int_rqs[i];
+ affinity_mask = pci_irq_get_affinity(adapter->pdev,
+ int_rq->rq.msix_idx);
+ if (!affinity_mask)
+ goto out;
+
+ for_each_cpu_and(cpu, affinity_mask, cpu_online_mask) {
+ if (cpu >= adapter->notification_desc.msix_cpu_map_sz)
+ break;
+
+ adapter->notification_desc.msix_cpu_map[cpu] =
+ int_rq->rq.msix_idx;
+ }
+ }
+out:
+ leapraid_cpus_on_irq(adapter);
+}
+
+static void leapraid_configure_reply_queue_affinity(
+ struct leapraid_adapter *adapter)
+{
+ if (!adapter || !adapter->notification_desc.msix_enable)
+ return;
+
+ leapraid_map_msix_to_cpu(adapter);
+}
+
+static void leapraid_free_irq(struct leapraid_adapter *adapter)
+{
+ struct leapraid_int_rq *int_rq;
+ unsigned int i;
+
+ if (!adapter->notification_desc.int_rqs)
+ return;
+
+ for (i = 0; i < adapter->notification_desc.int_rqs_allocated; i++) {
+ int_rq = &adapter->notification_desc.int_rqs[i];
+ if (!int_rq)
+ continue;
+
+ irq_set_affinity_hint(pci_irq_vector(adapter->pdev,
+ int_rq->rq.msix_idx), NULL);
+ free_irq(pci_irq_vector(adapter->pdev, int_rq->rq.msix_idx),
+ &int_rq->rq);
+ }
+ adapter->notification_desc.int_rqs_allocated = 0;
+
+ if (!adapter->notification_desc.msix_enable)
+ return;
+
+ pci_free_irq_vectors(adapter->pdev);
+ adapter->notification_desc.msix_enable = false;
+
+ kfree(adapter->notification_desc.blk_mq_poll_rqs);
+ adapter->notification_desc.blk_mq_poll_rqs = NULL;
+
+ kfree(adapter->notification_desc.int_rqs);
+ adapter->notification_desc.int_rqs = NULL;
+
+ kfree(adapter->notification_desc.msix_cpu_map);
+ adapter->notification_desc.msix_cpu_map = NULL;
+}
+
+static inline int leapraid_msix_cnt(struct pci_dev *pdev)
+{
+ return pci_msix_vec_count(pdev);
+}
+
+static inline int leapraid_msi_cnt(struct pci_dev *pdev)
+{
+ return pci_msi_vec_count(pdev);
+}
+
+static int leapraid_setup_irqs(struct leapraid_adapter *adapter)
+{
+ unsigned int i;
+ int rc = 0;
+
+ if (interrupt_mode == 0) {
+ rc = pci_alloc_irq_vectors_affinity(
+ adapter->pdev,
+ adapter->notification_desc.iopoll_qdex,
+ adapter->notification_desc.iopoll_qdex,
+ PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, NULL);
+
+ if (rc < 0) {
+ dev_err(&adapter->pdev->dev,
+ "%d msi/msix vectors alloacted failed!\n",
+ adapter->notification_desc.iopoll_qdex);
+ return rc;
+ }
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qdex; i++) {
+ adapter->notification_desc.int_rqs[i].rq.adapter = adapter;
+ adapter->notification_desc.int_rqs[i].rq.msix_idx = i;
+ atomic_set(&adapter->notification_desc.int_rqs[i].rq.busy, 0);
+ if (interrupt_mode == 0)
+ snprintf(adapter->notification_desc.int_rqs[i].rq.name,
+ LEAPRAID_NAME_LENGTH, "%s%u-MSIx%u",
+ LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id, i);
+ else if (interrupt_mode == 1)
+ snprintf(adapter->notification_desc.int_rqs[i].rq.name,
+ LEAPRAID_NAME_LENGTH, "%s%u-MSI%u",
+ LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id, i);
+
+ rc = request_irq(pci_irq_vector(adapter->pdev, i),
+ leapraid_irq_handler,
+ IRQF_SHARED,
+ adapter->notification_desc.int_rqs[i].rq.name,
+ &adapter->notification_desc.int_rqs[i].rq);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "MSI/MSIx: request_irq %s failed!\n",
+ adapter->notification_desc.int_rqs[i].rq.name);
+ return rc;
+ }
+ adapter->notification_desc.int_rqs_allocated++;
+ }
+
+ return 0;
+}
+
+static int leapraid_setup_legacy_int(struct leapraid_adapter *adapter)
+{
+ int rc;
+
+ adapter->notification_desc.int_rqs[0].rq.adapter = adapter;
+ adapter->notification_desc.int_rqs[0].rq.msix_idx = 0;
+ atomic_set(&adapter->notification_desc.int_rqs[0].rq.busy, 0);
+ snprintf(adapter->notification_desc.int_rqs[0].rq.name,
+ LEAPRAID_NAME_LENGTH, "%s%d-LegacyInt",
+ LEAPRAID_DRIVER_NAME, adapter->adapter_attr.id);
+
+ rc = pci_alloc_irq_vectors_affinity(
+ adapter->pdev,
+ adapter->notification_desc.iopoll_qdex,
+ adapter->notification_desc.iopoll_qdex,
+ PCI_IRQ_LEGACY | PCI_IRQ_AFFINITY,
+ NULL);
+ if (rc < 0) {
+ dev_err(&adapter->pdev->dev,
+ "legacy irq alloacted failed!\n");
+ return rc;
+ }
+
+ rc = request_irq(pci_irq_vector(adapter->pdev, 0),
+ leapraid_irq_handler,
+ IRQF_SHARED,
+ adapter->notification_desc.int_rqs[0].rq.name,
+ &adapter->notification_desc.int_rqs[0].rq);
+ if (rc) {
+ irq_set_affinity_hint(pci_irq_vector(adapter->pdev, 0), NULL);
+ pci_free_irq_vectors(adapter->pdev);
+ dev_err(&adapter->pdev->dev,
+ "Legact Int: request_irq %s failed!\n",
+ adapter->notification_desc.int_rqs[0].rq.name);
+ return -EBUSY;
+ }
+ adapter->notification_desc.int_rqs_allocated = 1;
+ return rc;
+}
+
+static int leapraid_set_legacy_int(struct leapraid_adapter *adapter)
+{
+ int rc;
+
+ adapter->notification_desc.msix_cpu_map_sz = num_online_cpus();
+ adapter->notification_desc.msix_cpu_map =
+ kzalloc(adapter->notification_desc.msix_cpu_map_sz,
+ GFP_KERNEL);
+ if (!adapter->notification_desc.msix_cpu_map)
+ return -ENOMEM;
+
+ adapter->adapter_attr.rq_cnt = 1;
+ adapter->notification_desc.iopoll_qdex =
+ adapter->adapter_attr.rq_cnt;
+ adapter->notification_desc.iopoll_qcnt = 0;
+ dev_info(&adapter->pdev->dev,
+ "Legacy Intr: req queue cnt=%d, intr=%d/poll=%d rep queues!\n",
+ adapter->adapter_attr.rq_cnt,
+ adapter->notification_desc.iopoll_qdex,
+ adapter->notification_desc.iopoll_qcnt);
+ adapter->notification_desc.int_rqs =
+ kcalloc(adapter->notification_desc.iopoll_qdex,
+ sizeof(struct leapraid_int_rq), GFP_KERNEL);
+ if (!adapter->notification_desc.int_rqs) {
+ dev_err(&adapter->pdev->dev,
+ "Legacy Intr: allocate %d intr rep queues failed!\n",
+ adapter->notification_desc.iopoll_qdex);
+ return -ENOMEM;
+ }
+
+ rc = leapraid_setup_legacy_int(adapter);
+
+ return rc;
+}
+
+static int leapraid_set_msix(struct leapraid_adapter *adapter)
+{
+ int iopoll_qcnt = 0;
+ unsigned int i;
+ int rc, msix_cnt;
+
+ if (msix_disable == 1)
+ goto legacy_int;
+
+ msix_cnt = leapraid_msix_cnt(adapter->pdev);
+ if (msix_cnt <= 0) {
+ dev_info(&adapter->pdev->dev, "msix unsupported!\n");
+ goto legacy_int;
+ }
+
+ if (reset_devices)
+ adapter->adapter_attr.rq_cnt = 1;
+ else
+ adapter->adapter_attr.rq_cnt = min_t(int,
+ num_online_cpus(),
+ msix_cnt);
+
+ if (max_msix_vectors > 0)
+ adapter->adapter_attr.rq_cnt = min_t(
+ int, max_msix_vectors, adapter->adapter_attr.rq_cnt);
+
+ if (iopoll_qcnt) {
+ adapter->notification_desc.blk_mq_poll_rqs =
+ kcalloc(iopoll_qcnt,
+ sizeof(struct leapraid_blk_mq_poll_rq),
+ GFP_KERNEL);
+ if (!adapter->notification_desc.blk_mq_poll_rqs)
+ return -ENOMEM;
+ adapter->adapter_attr.rq_cnt =
+ min(adapter->adapter_attr.rq_cnt + iopoll_qcnt,
+ msix_cnt);
+ }
+
+ adapter->notification_desc.iopoll_qdex =
+ adapter->adapter_attr.rq_cnt - iopoll_qcnt;
+
+ adapter->notification_desc.iopoll_qcnt = iopoll_qcnt;
+ dev_info(&adapter->pdev->dev,
+ "MSIx: req queue cnt=%d, intr=%d/poll=%d rep queues!\n",
+ adapter->adapter_attr.rq_cnt,
+ adapter->notification_desc.iopoll_qdex,
+ adapter->notification_desc.iopoll_qcnt);
+
+ adapter->notification_desc.int_rqs =
+ kcalloc(adapter->notification_desc.iopoll_qdex,
+ sizeof(struct leapraid_int_rq), GFP_KERNEL);
+ if (!adapter->notification_desc.int_rqs) {
+ dev_err(&adapter->pdev->dev,
+ "MSIx: allocate %d interrupt reply queues failed!\n",
+ adapter->notification_desc.iopoll_qdex);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qcnt; i++) {
+ adapter->notification_desc.blk_mq_poll_rqs[i].rq.adapter =
+ adapter;
+ adapter->notification_desc.blk_mq_poll_rqs[i].rq.msix_idx =
+ i + adapter->notification_desc.iopoll_qdex;
+ atomic_set(&adapter->notification_desc.blk_mq_poll_rqs[i].rq.busy, 0);
+ snprintf(adapter->notification_desc.blk_mq_poll_rqs[i].rq.name,
+ LEAPRAID_NAME_LENGTH,
+ "%s%u-MQ-Poll%u", LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id, i);
+ atomic_set(&adapter->notification_desc.blk_mq_poll_rqs[i].busy, 0);
+ atomic_set(&adapter->notification_desc.blk_mq_poll_rqs[i].pause, 0);
+ }
+
+ adapter->notification_desc.msix_cpu_map_sz =
+ num_online_cpus();
+ adapter->notification_desc.msix_cpu_map =
+ kzalloc(adapter->notification_desc.msix_cpu_map_sz,
+ GFP_KERNEL);
+ if (!adapter->notification_desc.msix_cpu_map)
+ return -ENOMEM;
+
+ adapter->notification_desc.msix_enable = true;
+ rc = leapraid_setup_irqs(adapter);
+ if (rc) {
+ leapraid_free_irq(adapter);
+ adapter->notification_desc.msix_enable = false;
+ goto legacy_int;
+ }
+
+ return 0;
+
+legacy_int:
+ rc = leapraid_set_legacy_int(adapter);
+
+ return rc;
+}
+
+static int leapraid_set_msi(struct leapraid_adapter *adapter)
+{
+ int iopoll_qcnt = 0;
+ unsigned int i;
+ int rc, msi_cnt;
+
+ if (msix_disable == 1)
+ goto legacy_int1;
+
+ msi_cnt = leapraid_msi_cnt(adapter->pdev);
+ if (msi_cnt <= 0) {
+ dev_info(&adapter->pdev->dev, "msix unsupported!\n");
+ goto legacy_int1;
+ }
+
+ if (reset_devices)
+ adapter->adapter_attr.rq_cnt = 1;
+ else
+ adapter->adapter_attr.rq_cnt = min_t(int,
+ num_online_cpus(),
+ msi_cnt);
+
+ if (max_msix_vectors > 0)
+ adapter->adapter_attr.rq_cnt = min_t(
+ int, max_msix_vectors, adapter->adapter_attr.rq_cnt);
+
+
+ if (iopoll_qcnt) {
+ adapter->notification_desc.blk_mq_poll_rqs =
+ kcalloc(iopoll_qcnt,
+ sizeof(struct leapraid_blk_mq_poll_rq),
+ GFP_KERNEL);
+ if (!adapter->notification_desc.blk_mq_poll_rqs)
+ return -ENOMEM;
+
+ adapter->adapter_attr.rq_cnt =
+ min(adapter->adapter_attr.rq_cnt + iopoll_qcnt,
+ msi_cnt);
+ }
+
+ adapter->notification_desc.iopoll_qdex =
+ adapter->adapter_attr.rq_cnt - iopoll_qcnt;
+ rc = pci_alloc_irq_vectors_affinity(
+ adapter->pdev,
+ 1,
+ adapter->notification_desc.iopoll_qdex,
+ PCI_IRQ_MSI | PCI_IRQ_AFFINITY, NULL);
+ if (rc < 0) {
+ dev_err(&adapter->pdev->dev,
+ "%d msi vectors alloacted failed!\n",
+ adapter->notification_desc.iopoll_qdex);
+ goto legacy_int1;
+ }
+ if (rc != adapter->notification_desc.iopoll_qdex) {
+ adapter->notification_desc.iopoll_qdex = rc;
+ adapter->adapter_attr.rq_cnt =
+ adapter->notification_desc.iopoll_qdex + iopoll_qcnt;
+ }
+ adapter->notification_desc.iopoll_qcnt = iopoll_qcnt;
+ dev_info(&adapter->pdev->dev,
+ "MSI: req queue cnt=%d, intr=%d/poll=%d rep queues!\n",
+ adapter->adapter_attr.rq_cnt,
+ adapter->notification_desc.iopoll_qdex,
+ adapter->notification_desc.iopoll_qcnt);
+
+ adapter->notification_desc.int_rqs =
+ kcalloc(adapter->notification_desc.iopoll_qdex,
+ sizeof(struct leapraid_int_rq),
+ GFP_KERNEL);
+ if (!adapter->notification_desc.int_rqs) {
+ dev_err(&adapter->pdev->dev,
+ "MSI: allocate %d interrupt reply queues failed!\n",
+ adapter->notification_desc.iopoll_qdex);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qcnt; i++) {
+ adapter->notification_desc.blk_mq_poll_rqs[i].rq.adapter =
+ adapter;
+ adapter->notification_desc.blk_mq_poll_rqs[i].rq.msix_idx =
+ i + adapter->notification_desc.iopoll_qdex;
+ atomic_set(
+ &adapter->notification_desc.blk_mq_poll_rqs[i].rq.busy,
+ 0);
+ snprintf(adapter->notification_desc.blk_mq_poll_rqs[i].rq.name,
+ LEAPRAID_NAME_LENGTH,
+ "%s%u-MQ-Poll%u", LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id, i);
+ atomic_set(
+ &adapter->notification_desc.blk_mq_poll_rqs[i].busy,
+ 0);
+ atomic_set(
+ &adapter->notification_desc.blk_mq_poll_rqs[i].pause,
+ 0);
+ }
+
+ adapter->notification_desc.msix_cpu_map_sz = num_online_cpus();
+ adapter->notification_desc.msix_cpu_map =
+ kzalloc(adapter->notification_desc.msix_cpu_map_sz,
+ GFP_KERNEL);
+ if (!adapter->notification_desc.msix_cpu_map)
+ return -ENOMEM;
+
+ adapter->notification_desc.msix_enable = true;
+ rc = leapraid_setup_irqs(adapter);
+ if (rc) {
+ leapraid_free_irq(adapter);
+ adapter->notification_desc.msix_enable = false;
+ goto legacy_int1;
+ }
+
+ return 0;
+
+legacy_int1:
+ rc = leapraid_set_legacy_int(adapter);
+
+ return rc;
+}
+
+static int leapraid_set_notification(struct leapraid_adapter *adapter)
+{
+ int rc = 0;
+
+ if (interrupt_mode == 0) {
+ rc = leapraid_set_msix(adapter);
+ if (rc)
+ pr_err("%s enable MSI-X irq failed!\n", __func__);
+ } else if (interrupt_mode == 1) {
+ rc = leapraid_set_msi(adapter);
+ if (rc)
+ pr_err("%s enable MSI irq failed!\n", __func__);
+ } else if (interrupt_mode == 2) {
+ rc = leapraid_set_legacy_int(adapter);
+ if (rc)
+ pr_err("%s enable legacy irq failed!\n", __func__);
+ }
+
+ return rc;
+}
+
+static void leapraid_disable_pcie_and_notification(
+ struct leapraid_adapter *adapter)
+{
+ leapraid_free_irq(adapter);
+ leapraid_disable_pcie(adapter);
+}
+
+int leapraid_set_pcie_and_notification(struct leapraid_adapter *adapter)
+{
+ int rc;
+
+ rc = leapraid_enable_pcie(adapter);
+ if (rc)
+ goto out_fail;
+
+ leapraid_mask_int(adapter);
+
+ rc = leapraid_set_notification(adapter);
+ if (rc)
+ goto out_fail;
+
+ pci_save_state(adapter->pdev);
+
+ return 0;
+
+out_fail:
+ leapraid_disable_pcie_and_notification(adapter);
+ return rc;
+}
+
+void leapraid_disable_controller(struct leapraid_adapter *adapter)
+{
+ if (!adapter->iomem_base)
+ return;
+
+ leapraid_mask_int(adapter);
+
+ adapter->access_ctrl.shost_recovering = true;
+ leapraid_make_adapter_ready(adapter, PART_RESET);
+ adapter->access_ctrl.shost_recovering = false;
+
+ leapraid_disable_pcie_and_notification(adapter);
+}
+
+static int leapraid_adapter_unit_reset(struct leapraid_adapter *adapter)
+{
+ int rc = 0;
+
+ dev_info(&adapter->pdev->dev, "fire unit reset\n");
+ writel(LEAPRAID_FUNC_ADAPTER_UNIT_RESET << LEAPRAID_DB_FUNC_SHIFT,
+ &adapter->iomem_base->db);
+ if (leapraid_db_wait_ack_and_clear_int(adapter))
+ rc = -EFAULT;
+
+ if (!leapraid_wait_adapter_ready(adapter)) {
+ rc = -EFAULT;
+ goto out;
+ }
+out:
+ dev_info(&adapter->pdev->dev, "unit reset: %s\n",
+ ((rc == 0) ? "SUCCESS" : "FAILED"));
+ return rc;
+}
+
+static int leapraid_make_adapter_ready(struct leapraid_adapter *adapter,
+ enum reset_type type)
+{
+ u32 db;
+ int rc;
+ int count;
+
+ if (!leapraid_pci_active(adapter))
+ return 0;
+
+ count = 0;
+ db = leapraid_readl(&adapter->iomem_base->db);
+ if ((db & LEAPRAID_DB_MASK) == LEAPRAID_DB_RESET) {
+ while ((db & LEAPRAID_DB_MASK) != LEAPRAID_DB_READY) {
+ if (count++ == LEAPRAID_DB_RETRY_COUNT_MAX) {
+ dev_err(&adapter->pdev->dev,
+ "wait adapter ready timeout\n");
+ return -EFAULT;
+ }
+ ssleep(1);
+ db = leapraid_readl(&adapter->iomem_base->db);
+ dev_info(&adapter->pdev->dev,
+ "wait adapter ready, count=%d, db=0x%x\n",
+ count, db);
+ }
+ }
+ if ((db & LEAPRAID_DB_MASK) == LEAPRAID_DB_READY)
+ return 0;
+
+ if (db & LEAPRAID_DB_USED)
+ goto full_reset;
+
+ if ((db & LEAPRAID_DB_MASK) == LEAPRAID_DB_FAULT)
+ goto full_reset;
+
+ if (type == FULL_RESET)
+ goto full_reset;
+
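+ /* Prefer a lighter unit reset when the adapter is operational; fall back to a full host diag reset */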
+ if ((db & LEAPRAID_DB_MASK) == LEAPRAID_DB_OPERATIONAL)
+ if (!(leapraid_adapter_unit_reset(adapter)))
+ return 0;
+
+full_reset:
+ rc = leapraid_host_diag_reset(adapter);
+ return rc;
+}
+
+static void leapraid_fw_log_exit(struct leapraid_adapter *adapter)
+{
+ if (!adapter->fw_log_desc.open_pcie_trace)
+ return;
+
+ if (adapter->fw_log_desc.fw_log_buffer) {
+ dma_free_coherent(&adapter->pdev->dev,
+ (LEAPRAID_SYS_LOG_BUF_SIZE +
+ LEAPRAID_SYS_LOG_BUF_RESERVE),
+ adapter->fw_log_desc.fw_log_buffer,
+ adapter->fw_log_desc.fw_log_buffer_dma);
+ adapter->fw_log_desc.fw_log_buffer = NULL;
+ }
+}
+
+static int leapraid_fw_log_init(struct leapraid_adapter *adapter)
+{
+ struct leapraid_adapter_log_req adapter_log_req;
+ struct leapraid_adapter_log_rep adapter_log_rep;
+ u16 adapter_status;
+ u64 buf_addr;
+ u32 rc;
+
+ if (!adapter->fw_log_desc.open_pcie_trace)
+ return 0;
+
+ if (!adapter->fw_log_desc.fw_log_buffer) {
+ adapter->fw_log_desc.fw_log_buffer =
+ dma_alloc_coherent(
+ &adapter->pdev->dev,
+ (LEAPRAID_SYS_LOG_BUF_SIZE +
+ LEAPRAID_SYS_LOG_BUF_RESERVE),
+ &adapter->fw_log_desc.fw_log_buffer_dma,
+ GFP_KERNEL);
+ if (!adapter->fw_log_desc.fw_log_buffer) {
+ dev_err(&adapter->pdev->dev,
+ "%s: log buf alloc failed.\n",
+ __func__);
+ return -ENOMEM;
+ }
+ }
+
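+	/*
+	 * Hand the firmware the log buffer via handshake: the DMA address
+	 * low and high 32 bits go in mbox words 0 and 1, the usable size
+	 * in word 2.
+	 */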
+ memset(&adapter_log_req, 0, sizeof(struct leapraid_adapter_log_req));
+ adapter_log_req.func = LEAPRAID_FUNC_LOGBUF_INIT;
+ buf_addr = adapter->fw_log_desc.fw_log_buffer_dma;
+
+ adapter_log_req.mbox.w[0] =
+ cpu_to_le32((u32)(buf_addr & 0xFFFFFFFF));
+ adapter_log_req.mbox.w[1] =
+ cpu_to_le32((u32)((buf_addr >> 32) & 0xFFFFFFFF));
+ adapter_log_req.mbox.w[2] =
+ cpu_to_le32(LEAPRAID_SYS_LOG_BUF_SIZE);
+ rc = leapraid_handshake_func(adapter,
+ sizeof(struct leapraid_adapter_log_req),
+ (u32 *)&adapter_log_req,
+ sizeof(struct leapraid_adapter_log_rep),
+ (u16 *)&adapter_log_rep);
+ if (rc != 0) {
+ dev_err(&adapter->pdev->dev, "%s: handshake failed, rc=%d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ adapter_status = le16_to_cpu(adapter_log_rep.adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status != LEAPRAID_ADAPTER_STATUS_SUCCESS) {
+ dev_err(&adapter->pdev->dev, "%s: failed!\n", __func__);
+ rc = -EIO;
+ }
+
+ return rc;
+}
+
+static void leapraid_free_host_memory(struct leapraid_adapter *adapter)
+{
+ unsigned int i;
+
+ if (adapter->mem_desc.task_desc) {
+ dma_free_coherent(&adapter->pdev->dev,
+ adapter->adapter_attr.task_desc_dma_size,
+ adapter->mem_desc.task_desc,
+ adapter->mem_desc.task_desc_dma);
+ adapter->mem_desc.task_desc = NULL;
+ }
+
+ if (adapter->mem_desc.sense_data) {
+ dma_free_coherent(
+ &adapter->pdev->dev,
+ adapter->adapter_attr.io_qd * SCSI_SENSE_BUFFERSIZE,
+ adapter->mem_desc.sense_data,
+ adapter->mem_desc.sense_data_dma);
+ adapter->mem_desc.sense_data = NULL;
+ }
+
+ if (adapter->mem_desc.rep_msg) {
+ dma_free_coherent(
+ &adapter->pdev->dev,
+ adapter->adapter_attr.rep_msg_qd * LEAPRAID_REPLY_SIEZ,
+ adapter->mem_desc.rep_msg,
+ adapter->mem_desc.rep_msg_dma);
+ adapter->mem_desc.rep_msg = NULL;
+ }
+
+ if (adapter->mem_desc.rep_msg_addr) {
+ dma_free_coherent(&adapter->pdev->dev,
+ adapter->adapter_attr.rep_msg_qd *
+ LEAPRAID_REP_MSG_ADDR_SIZE,
+ adapter->mem_desc.rep_msg_addr,
+ adapter->mem_desc.rep_msg_addr_dma);
+ adapter->mem_desc.rep_msg_addr = NULL;
+ }
+
+ if (adapter->mem_desc.rep_desc_seg_maint) {
+ for (i = 0; i < adapter->adapter_attr.rep_desc_q_seg_cnt;
+ i++) {
+ if (adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg) {
+ dma_free_coherent(
+ &adapter->pdev->dev,
+ (adapter->adapter_attr.rep_desc_qd *
+ LEAPRAID_REP_DESC_ENTRY_SIZE) *
+ LEAPRAID_REP_DESC_CHUNK_SIZE,
+ adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg,
+ adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg_dma);
+ adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg = NULL;
+ }
+ }
+
+ if (adapter->mem_desc.rep_desc_q_arr) {
+ dma_free_coherent(
+ &adapter->pdev->dev,
+ adapter->adapter_attr.rq_cnt *
+ LEAPRAID_REP_RQ_CNT_SIZE,
+ adapter->mem_desc.rep_desc_q_arr,
+ adapter->mem_desc.rep_desc_q_arr_dma);
+ adapter->mem_desc.rep_desc_q_arr = NULL;
+ }
+
+ for (i = 0; i < adapter->adapter_attr.rep_desc_q_seg_cnt; i++)
+ kfree(adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_maint);
+ kfree(adapter->mem_desc.rep_desc_seg_maint);
+ }
+
+ if (adapter->mem_desc.io_tracker) {
+ for (i = 0; i < (unsigned int)adapter->shost->can_queue; i++)
+ leapraid_internal_exit_cmd_priv(
+ adapter,
+ adapter->mem_desc.io_tracker + i);
+ kfree(adapter->mem_desc.io_tracker);
+ adapter->mem_desc.io_tracker = NULL;
+ }
+
+ dma_pool_destroy(adapter->mem_desc.sg_chain_pool);
+}
+
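+/*
+ * The adapter is only told the upper 32 bits of the sense and reply
+ * message buffers once (see leapraid_prepare_adp_init_req), so each of
+ * those buffers must not cross a 4GB boundary.
+ */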
+static inline bool leapraid_is_in_same_4g_seg(dma_addr_t start, u32 size)
+{
+ return (upper_32_bits(start) == upper_32_bits(start + size - 1));
+}
+
+int leapraid_internal_init_cmd_priv(struct leapraid_adapter *adapter,
+ struct leapraid_io_req_tracker *io_tracker)
+{
+ io_tracker->chain =
+ dma_pool_alloc(adapter->mem_desc.sg_chain_pool,
+ GFP_KERNEL,
+ &io_tracker->chain_dma);
+
+ if (!io_tracker->chain)
+ return -ENOMEM;
+
+ return 0;
+}
+
+int leapraid_internal_exit_cmd_priv(struct leapraid_adapter *adapter,
+ struct leapraid_io_req_tracker *io_tracker)
+{
+ if (io_tracker && io_tracker->chain)
+ dma_pool_free(adapter->mem_desc.sg_chain_pool,
+ io_tracker->chain,
+ io_tracker->chain_dma);
+
+ return 0;
+}
+
+static int leapraid_request_host_memory(struct leapraid_adapter *adapter)
+{
+ struct leapraid_adapter_features *facts =
+ &adapter->adapter_attr.features;
+ u16 rep_desc_q_cnt_allocated;
+ unsigned int i, j;
+ int rc;
+
+ /* sg table size */
+ adapter->shost->sg_tablesize = LEAPRAID_SG_DEPTH;
+ if (reset_devices)
+ adapter->shost->sg_tablesize =
+ LEAPRAID_KDUMP_MIN_PHYS_SEGMENTS;
+ /* high priority cmds queue depth */
+	adapter->dynamic_task_desc.hp_cmd_qd = LEAPRAID_FIXED_HP_CMDS;
+ /* internal cmds queue depth */
+ adapter->dynamic_task_desc.inter_cmd_qd = LEAPRAID_FIXED_INTER_CMDS;
+ /* adapter cmds total queue depth */
+ if (reset_devices)
+ adapter->adapter_attr.adapter_total_qd =
+ LEAPRAID_DEFAULT_CMD_QD_OFFSET +
+ adapter->dynamic_task_desc.inter_cmd_qd +
+ adapter->dynamic_task_desc.hp_cmd_qd;
+ else
+ adapter->adapter_attr.adapter_total_qd = facts->req_slot +
+ adapter->dynamic_task_desc.hp_cmd_qd;
+ /* reply message queue depth */
+ adapter->adapter_attr.rep_msg_qd =
+ adapter->adapter_attr.adapter_total_qd +
+ LEAPRAID_DEFAULT_CMD_QD_OFFSET;
+ /* reply descriptor queue depth */
+ adapter->adapter_attr.rep_desc_qd =
+ round_up(adapter->adapter_attr.adapter_total_qd +
+ adapter->adapter_attr.rep_msg_qd +
+ LEAPRAID_TASKID_OFFSET_CTRL_CMD,
+ LEAPRAID_REPLY_QD_ALIGNMENT);
+ /* scsi cmd io depth */
+ adapter->adapter_attr.io_qd =
+ adapter->adapter_attr.adapter_total_qd -
+ adapter->dynamic_task_desc.hp_cmd_qd -
+ adapter->dynamic_task_desc.inter_cmd_qd;
+ /* scsi host can queue */
+ adapter->shost->can_queue = adapter->adapter_attr.io_qd -
+ LEAPRAID_TASKID_OFFSET_SCSIIO_CMD;
+ adapter->driver_cmds.ctl_cmd.taskid = adapter->shost->can_queue +
+ LEAPRAID_TASKID_OFFSET_CTRL_CMD;
+ adapter->driver_cmds.driver_scsiio_cmd.taskid =
+ adapter->shost->can_queue +
+ LEAPRAID_TASKID_OFFSET_SCSIIO_CMD;
+
+ /* allocate task descriptor */
+try_again:
+ adapter->adapter_attr.task_desc_dma_size =
+ (adapter->adapter_attr.adapter_total_qd +
+ LEAPRAID_TASKID_OFFSET_CTRL_CMD) *
+ LEAPRAID_REQUEST_SIZE;
+ adapter->mem_desc.task_desc =
+ dma_alloc_coherent(&adapter->pdev->dev,
+ adapter->adapter_attr.task_desc_dma_size,
+ &adapter->mem_desc.task_desc_dma,
+ GFP_KERNEL);
+ if (!adapter->mem_desc.task_desc) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate task descriptor DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+ /* allocate chain message pool */
+ adapter->mem_desc.sg_chain_pool_size =
+ LEAPRAID_DEFAULT_CHAINS_PER_IO * LEAPRAID_CHAIN_SEG_SIZE;
+ adapter->mem_desc.sg_chain_pool =
+ dma_pool_create("leapraid chain pool",
+ &adapter->pdev->dev,
+ adapter->mem_desc.sg_chain_pool_size, 16, 0);
+ if (!adapter->mem_desc.sg_chain_pool) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate chain message DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ /* allocate io tracker to ref scsi io */
+ adapter->mem_desc.io_tracker =
+ kcalloc(adapter->shost->can_queue,
+ sizeof(struct leapraid_io_req_tracker),
+ GFP_KERNEL);
+ if (!adapter->mem_desc.io_tracker) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ for (i = 0; (int)i < adapter->shost->can_queue; i++) {
+ rc = leapraid_internal_init_cmd_priv(
+ adapter,
+ adapter->mem_desc.io_tracker + i);
+ if (rc)
+ goto out;
+ }
+
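+	/*
+	 * Task IDs 1..can_queue are used for SCSI I/O; the slots above
+	 * them are reserved for the driver's control, high-priority and
+	 * internal commands assigned below.
+	 */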
+ adapter->dynamic_task_desc.hp_taskid =
+ adapter->adapter_attr.io_qd +
+ LEAPRAID_HP_TASKID_OFFSET_CTL_CMD;
+ /* allocate static hp taskid */
+ adapter->driver_cmds.ctl_cmd.hp_taskid =
+ adapter->dynamic_task_desc.hp_taskid;
+ adapter->driver_cmds.tm_cmd.hp_taskid =
+ adapter->dynamic_task_desc.hp_taskid +
+ LEAPRAID_HP_TASKID_OFFSET_TM_CMD;
+
+ adapter->dynamic_task_desc.inter_taskid =
+ adapter->dynamic_task_desc.hp_taskid +
+ adapter->dynamic_task_desc.hp_cmd_qd;
+ adapter->driver_cmds.scan_dev_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid;
+ adapter->driver_cmds.cfg_op_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_CFG_OP_CMD;
+ adapter->driver_cmds.transport_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_TRANSPORT_CMD;
+ adapter->driver_cmds.timestamp_sync_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_TIMESTAMP_SYNC_CMD;
+ adapter->driver_cmds.raid_action_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_RAID_ACTION_CMD;
+ adapter->driver_cmds.enc_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_ENC_CMD;
+ adapter->driver_cmds.notify_event_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_NOTIFY_EVENT_CMD;
+ dev_info(&adapter->pdev->dev, "queue depth:\n");
+ dev_info(&adapter->pdev->dev, " host->can_queue: %d\n",
+ adapter->shost->can_queue);
+ dev_info(&adapter->pdev->dev, " io_qd: %d\n",
+ adapter->adapter_attr.io_qd);
+ dev_info(&adapter->pdev->dev, " hpr_cmd_qd: %d\n",
+ adapter->dynamic_task_desc.hp_cmd_qd);
+ dev_info(&adapter->pdev->dev, " inter_cmd_qd: %d\n",
+ adapter->dynamic_task_desc.inter_cmd_qd);
+ dev_info(&adapter->pdev->dev, " adapter_total_qd: %d\n",
+ adapter->adapter_attr.adapter_total_qd);
+
+ dev_info(&adapter->pdev->dev, "taskid range:\n");
+ dev_info(&adapter->pdev->dev,
+ " adapter->dynamic_task_desc.hp_taskid: %d\n",
+ adapter->dynamic_task_desc.hp_taskid);
+ dev_info(&adapter->pdev->dev,
+ " adapter->dynamic_task_desc.inter_taskid: %d\n",
+ adapter->dynamic_task_desc.inter_taskid);
+
+	/*
+	 * Allocate the sense data buffer; the driver maintains it and it
+	 * must not cross a 4GB boundary.
+	 */
+ adapter->mem_desc.sense_data =
+ dma_alloc_coherent(
+ &adapter->pdev->dev,
+ adapter->adapter_attr.io_qd * SCSI_SENSE_BUFFERSIZE,
+ &adapter->mem_desc.sense_data_dma, GFP_KERNEL);
+ if (!adapter->mem_desc.sense_data) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate sense data DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+ if (!leapraid_is_in_same_4g_seg(adapter->mem_desc.sense_data_dma,
+ adapter->adapter_attr.io_qd *
+ SCSI_SENSE_BUFFERSIZE)) {
+ dev_warn(&adapter->pdev->dev,
+ "try 32 bit dma due to sense data is not in same 4g!\n");
+ rc = -EAGAIN;
+ goto out;
+ }
+
+	/* reply message frames; must not cross a 4GB boundary */
+ adapter->mem_desc.rep_msg =
+ dma_alloc_coherent(&adapter->pdev->dev,
+ adapter->adapter_attr.rep_msg_qd *
+ LEAPRAID_REPLY_SIEZ,
+ &adapter->mem_desc.rep_msg_dma,
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_msg) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate reply message DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+ if (!leapraid_is_in_same_4g_seg(adapter->mem_desc.rep_msg_dma,
+ adapter->adapter_attr.rep_msg_qd *
+ LEAPRAID_REPLY_SIEZ)) {
+ dev_warn(&adapter->pdev->dev,
+ "use 32 bit dma due to rep msg is not in same 4g!\n");
+ rc = -EAGAIN;
+ goto out;
+ }
+
+ /* address of reply frame */
+ adapter->mem_desc.rep_msg_addr =
+ dma_alloc_coherent(&adapter->pdev->dev,
+ adapter->adapter_attr.rep_msg_qd *
+ LEAPRAID_REP_MSG_ADDR_SIZE,
+ &adapter->mem_desc.rep_msg_addr_dma,
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_msg_addr) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate reply message address DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+ adapter->adapter_attr.rep_desc_q_seg_cnt =
+ DIV_ROUND_UP(adapter->adapter_attr.rq_cnt,
+ LEAPRAID_REP_DESC_CHUNK_SIZE);
+ adapter->mem_desc.rep_desc_seg_maint =
+ kcalloc(adapter->adapter_attr.rep_desc_q_seg_cnt,
+ sizeof(struct leapraid_rep_desc_seg_maint),
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_desc_seg_maint) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ rep_desc_q_cnt_allocated = 0;
+ for (i = 0; i < adapter->adapter_attr.rep_desc_q_seg_cnt; i++) {
+ adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_maint =
+ kcalloc(LEAPRAID_REP_DESC_CHUNK_SIZE,
+ sizeof(struct leapraid_rep_desc_maint),
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_maint) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg =
+ dma_alloc_coherent(
+ &adapter->pdev->dev,
+ (adapter->adapter_attr.rep_desc_qd *
+ LEAPRAID_REP_DESC_ENTRY_SIZE) *
+ LEAPRAID_REP_DESC_CHUNK_SIZE,
+ &adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg_dma,
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate reply descriptor segment DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ for (j = 0; j < LEAPRAID_REP_DESC_CHUNK_SIZE; j++) {
+ if (rep_desc_q_cnt_allocated >=
+ adapter->adapter_attr.rq_cnt)
+ break;
+ adapter->mem_desc
+ .rep_desc_seg_maint[i]
+ .rep_desc_maint[j]
+ .rep_desc =
+ (void *)((u8 *)(
+ adapter->mem_desc
+ .rep_desc_seg_maint[i]
+ .rep_desc_seg) +
+ j *
+ (adapter->adapter_attr.rep_desc_qd *
+ LEAPRAID_REP_DESC_ENTRY_SIZE));
+ adapter->mem_desc
+ .rep_desc_seg_maint[i]
+ .rep_desc_maint[j]
+ .rep_desc_dma =
+ adapter->mem_desc
+ .rep_desc_seg_maint[i]
+ .rep_desc_seg_dma +
+ j *
+ (adapter->adapter_attr.rep_desc_qd *
+ LEAPRAID_REP_DESC_ENTRY_SIZE);
+ rep_desc_q_cnt_allocated++;
+ }
+ }
+
+ if (!reset_devices) {
+ adapter->mem_desc.rep_desc_q_arr =
+ dma_alloc_coherent(
+ &adapter->pdev->dev,
+ adapter->adapter_attr.rq_cnt *
+ LEAPRAID_REP_RQ_CNT_SIZE,
+ &adapter->mem_desc.rep_desc_q_arr_dma,
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_desc_q_arr) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate reply descriptor queue array DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+ }
+
+ return 0;
+out:
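+	/*
+	 * -EAGAIN means a buffer crossed a 4GB boundary; free everything,
+	 * fall back to a 32-bit DMA mask and redo the allocations.
+	 */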
+ if (rc == -EAGAIN) {
+ leapraid_free_host_memory(adapter);
+ adapter->adapter_attr.use_32_dma_mask = true;
+ rc = dma_set_mask_and_coherent(&adapter->pdev->dev,
+ DMA_BIT_MASK(32));
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "failed to set 32 DMA mask\n");
+ return rc;
+ }
+ goto try_again;
+ }
+ return rc;
+}
+
+static int leapraid_alloc_dev_topo_bitmaps(struct leapraid_adapter *adapter)
+{
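+	/* One bit per possible device handle, rounded up to whole bytes. */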
+ adapter->dev_topo.pd_hdls_sz =
+ adapter->adapter_attr.features.max_dev_handle /
+ LEAPRAID_BITS_PER_BYTE;
+ if (adapter->adapter_attr.features.max_dev_handle %
+ LEAPRAID_BITS_PER_BYTE)
+ adapter->dev_topo.pd_hdls_sz++;
+ adapter->dev_topo.pd_hdls =
+ kzalloc(adapter->dev_topo.pd_hdls_sz, GFP_KERNEL);
+ if (!adapter->dev_topo.pd_hdls)
+ return -ENOMEM;
+
+ adapter->dev_topo.blocking_hdls =
+ kzalloc(adapter->dev_topo.pd_hdls_sz, GFP_KERNEL);
+ if (!adapter->dev_topo.blocking_hdls)
+ return -ENOMEM;
+
+ adapter->dev_topo.pending_dev_add_sz =
+ adapter->adapter_attr.features.max_dev_handle /
+ LEAPRAID_BITS_PER_BYTE;
+ if (adapter->adapter_attr.features.max_dev_handle %
+ LEAPRAID_BITS_PER_BYTE)
+ adapter->dev_topo.pending_dev_add_sz++;
+ adapter->dev_topo.pending_dev_add =
+ kzalloc(adapter->dev_topo.pending_dev_add_sz, GFP_KERNEL);
+ if (!adapter->dev_topo.pending_dev_add)
+ return -ENOMEM;
+
+ adapter->dev_topo.dev_removing_sz =
+ adapter->dev_topo.pending_dev_add_sz;
+ adapter->dev_topo.dev_removing =
+ kzalloc(adapter->dev_topo.dev_removing_sz, GFP_KERNEL);
+ if (!adapter->dev_topo.dev_removing)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void leapraid_free_dev_topo_bitmaps(struct leapraid_adapter *adapter)
+{
+ kfree(adapter->dev_topo.pd_hdls);
+ kfree(adapter->dev_topo.blocking_hdls);
+ kfree(adapter->dev_topo.pending_dev_add);
+ kfree(adapter->dev_topo.dev_removing);
+}
+
+static int leapraid_init_driver_cmds(struct leapraid_adapter *adapter)
+{
+ u32 buffer_size = 0;
+ void *buffer;
+
+ INIT_LIST_HEAD(&adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.scan_dev_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.scan_dev_cmd.cb_idx = LEAPRAID_SCAN_DEV_CB_IDX;
+ list_add_tail(&adapter->driver_cmds.scan_dev_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.cfg_op_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.cfg_op_cmd.cb_idx = LEAPRAID_CONFIG_CB_IDX;
+ mutex_init(&adapter->driver_cmds.cfg_op_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.cfg_op_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.transport_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.transport_cmd.cb_idx = LEAPRAID_TRANSPORT_CB_IDX;
+ mutex_init(&adapter->driver_cmds.transport_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.transport_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.timestamp_sync_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.timestamp_sync_cmd.cb_idx =
+ LEAPRAID_TIMESTAMP_SYNC_CB_IDX;
+ mutex_init(&adapter->driver_cmds.timestamp_sync_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.timestamp_sync_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.raid_action_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.raid_action_cmd.cb_idx =
+ LEAPRAID_RAID_ACTION_CB_IDX;
+ mutex_init(&adapter->driver_cmds.raid_action_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.raid_action_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.driver_scsiio_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.driver_scsiio_cmd.cb_idx =
+ LEAPRAID_DRIVER_SCSIIO_CB_IDX;
+ mutex_init(&adapter->driver_cmds.driver_scsiio_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.driver_scsiio_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
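+	/*
+	 * Carve a single allocation into the internal scsi_cmnd, its
+	 * io_req_tracker (reached via host_scribble), a sense buffer, one
+	 * scatterlist entry and a 32-byte CDB.
+	 */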
+ buffer_size = sizeof(struct scsi_cmnd) +
+ sizeof(struct leapraid_io_req_tracker) +
+ SCSI_SENSE_BUFFERSIZE +
+ sizeof(struct scatterlist);
+ buffer_size += 32;
+ buffer = kzalloc(buffer_size, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+ adapter->driver_cmds.internal_scmd = buffer;
+ buffer = (void *)((u8 *)buffer +
+ sizeof(struct scsi_cmnd) +
+ sizeof(struct leapraid_io_req_tracker));
+ adapter->driver_cmds.internal_scmd->sense_buffer =
+ (unsigned char *)buffer;
+ buffer = (void *)((u8 *)buffer + SCSI_SENSE_BUFFERSIZE);
+ adapter->driver_cmds.internal_scmd->sdb.table.sgl =
+ (struct scatterlist *)buffer;
+ buffer = (void *)((u8 *)buffer + sizeof(struct scatterlist));
+ adapter->driver_cmds.internal_scmd->cmnd = buffer;
+ adapter->driver_cmds.internal_scmd->host_scribble =
+ (unsigned char *)(adapter->driver_cmds.internal_scmd + 1);
+
+ adapter->driver_cmds.enc_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.enc_cmd.cb_idx = LEAPRAID_ENC_CB_IDX;
+ mutex_init(&adapter->driver_cmds.enc_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.enc_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.notify_event_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.notify_event_cmd.cb_idx =
+ LEAPRAID_NOTIFY_EVENT_CB_IDX;
+ mutex_init(&adapter->driver_cmds.notify_event_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.notify_event_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.ctl_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.ctl_cmd.cb_idx = LEAPRAID_CTL_CB_IDX;
+ mutex_init(&adapter->driver_cmds.ctl_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.ctl_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.tm_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.tm_cmd.cb_idx = LEAPRAID_TM_CB_IDX;
+ mutex_init(&adapter->driver_cmds.tm_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.tm_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ return 0;
+}
+
+static void leapraid_unmask_evts(struct leapraid_adapter *adapter, u16 evt)
+{
+ if (evt >= LEAPRAID_MAX_EVENT_NUM)
+ return;
+
+ clear_bit(evt, (unsigned long *)adapter->fw_evt_s.leapraid_evt_masks);
+}
+
+static void leapraid_init_event_mask(struct leapraid_adapter *adapter)
+{
+ int i;
+
+ for (i = 0; i < LEAPRAID_EVT_MASK_COUNT; i++)
+ adapter->fw_evt_s.leapraid_evt_masks[i] = -1;
+ leapraid_unmask_evts(adapter, LEAPRAID_EVT_SAS_DISCOVERY);
+ leapraid_unmask_evts(adapter, LEAPRAID_EVT_SAS_TOPO_CHANGE_LIST);
+ leapraid_unmask_evts(adapter, LEAPRAID_EVT_SAS_ENCL_DEV_STATUS_CHANGE);
+ leapraid_unmask_evts(adapter, LEAPRAID_EVT_SAS_DEV_STATUS_CHANGE);
+ leapraid_unmask_evts(adapter, LEAPRAID_EVT_IR_CHANGE);
+}
+
+static void leapraid_prepare_adp_init_req(
+ struct leapraid_adapter *adapter,
+ struct leapraid_adapter_init_req *init_req)
+{
+ ktime_t cur_time;
+ int i;
+ u32 reply_post_free_ary_sz;
+
+ memset(init_req, 0, sizeof(struct leapraid_adapter_init_req));
+ init_req->func = LEAPRAID_FUNC_ADAPTER_INIT;
+ init_req->who_init = LEAPRAID_WHOINIT_LINUX_DRIVER;
+ init_req->msg_ver = cpu_to_le16(0x0100);
+ init_req->header_ver = cpu_to_le16(0x0000);
+
+ init_req->driver_ver = cpu_to_le32((LEAPRAID_MAJOR_VERSION << 24) |
+ (LEAPRAID_MINOR_VERSION << 16) |
+ (LEAPRAID_BUILD_VERSION << 8) |
+ LEAPRAID_RELEASE_VERSION);
+ if (adapter->notification_desc.msix_enable)
+ init_req->host_msix_vectors = adapter->adapter_attr.rq_cnt;
+
+ init_req->req_frame_size =
+ cpu_to_le16(LEAPRAID_REQUEST_SIZE / LEAPRAID_DWORDS_BYTE_SIZE);
+ init_req->rep_desc_qd =
+ cpu_to_le16(adapter->adapter_attr.rep_desc_qd);
+ init_req->rep_msg_qd =
+ cpu_to_le16(adapter->adapter_attr.rep_msg_qd);
+ init_req->sense_buffer_add_high =
+ cpu_to_le32((u64)adapter->mem_desc.sense_data_dma >> 32);
+ init_req->rep_msg_dma_high =
+ cpu_to_le32((u64)adapter->mem_desc.rep_msg_dma >> 32);
+ init_req->task_desc_base_addr =
+ cpu_to_le64((u64)adapter->mem_desc.task_desc_dma);
+ init_req->rep_msg_addr_dma =
+ cpu_to_le64((u64)adapter->mem_desc.rep_msg_addr_dma);
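+	/*
+	 * Normal boot: pass an RDPQ array with one reply descriptor base
+	 * address per request queue. Under kdump (reset_devices) a single
+	 * contiguous reply descriptor region is used instead.
+	 */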
+ if (!reset_devices) {
+ reply_post_free_ary_sz =
+ adapter->adapter_attr.rq_cnt * LEAPRAID_REP_RQ_CNT_SIZE;
+ memset(adapter->mem_desc.rep_desc_q_arr, 0,
+ reply_post_free_ary_sz);
+
+ for (i = 0; i < adapter->adapter_attr.rq_cnt; i++) {
+ adapter->mem_desc
+ .rep_desc_q_arr[i]
+ .rep_desc_base_addr =
+				cpu_to_le64(
+ (u64)adapter->mem_desc
+ .rep_desc_seg_maint[i /
+ LEAPRAID_REP_DESC_CHUNK_SIZE]
+ .rep_desc_maint[i %
+ LEAPRAID_REP_DESC_CHUNK_SIZE]
+ .rep_desc_dma);
+ }
+
+ init_req->msg_flg =
+ LEAPRAID_ADAPTER_INIT_MSGFLG_RDPQ_ARRAY_MODE;
+ init_req->rep_desc_q_arr_addr =
+ cpu_to_le64((u64)adapter->mem_desc.rep_desc_q_arr_dma);
+ } else {
+ init_req->rep_desc_q_arr_addr =
+ cpu_to_le64((u64)adapter->mem_desc
+ .rep_desc_seg_maint[0]
+ .rep_desc_maint[0]
+ .rep_desc_dma);
+ }
+ cur_time = ktime_get_real();
+ init_req->time_stamp = cpu_to_le64(ktime_to_ms(cur_time));
+}
+
+static int leapraid_send_adapter_init(struct leapraid_adapter *adapter)
+{
+ struct leapraid_adapter_init_req init_req;
+ struct leapraid_adapter_init_rep init_rep;
+ u16 adapter_status;
+ int rc = 0;
+
+ leapraid_prepare_adp_init_req(adapter, &init_req);
+
+ rc = leapraid_handshake_func(adapter,
+ sizeof(struct leapraid_adapter_init_req),
+ (u32 *)&init_req,
+ sizeof(struct leapraid_adapter_init_rep),
+ (u16 *)&init_rep);
+ if (rc != 0) {
+ dev_err(&adapter->pdev->dev, "%s: handshake failed, rc=%d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ adapter_status =
+ le16_to_cpu(init_rep.adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status != LEAPRAID_ADAPTER_STATUS_SUCCESS) {
+ dev_err(&adapter->pdev->dev, "%s: failed\n", __func__);
+ rc = -EIO;
+ }
+
+ adapter->timestamp_sync_cnt = 0;
+ return rc;
+}
+
+static int leapraid_cfg_pages(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_io_unit_page1 *sas_io_unit_page1 = NULL;
+ struct leapraid_bios_page3 bios_page3;
+ struct leapraid_bios_page2 bios_page2;
+ int rc = 0;
+ int sz;
+
+ rc = leapraid_op_config_page(adapter, &bios_page3, cfgp1,
+ cfgp2, GET_BIOS_PG3);
+ if (rc)
+ return rc;
+
+ rc = leapraid_op_config_page(adapter, &bios_page2, cfgp1,
+ cfgp2, GET_BIOS_PG2);
+ if (rc)
+ return rc;
+
+ adapter->adapter_attr.bios_version =
+ le32_to_cpu(bios_page3.bios_version);
+ adapter->adapter_attr.wideport_max_queue_depth =
+ LEAPRAID_SAS_QUEUE_DEPTH;
+ adapter->adapter_attr.narrowport_max_queue_depth =
+ LEAPRAID_SAS_QUEUE_DEPTH;
+ adapter->adapter_attr.sata_max_queue_depth =
+ LEAPRAID_SATA_QUEUE_DEPTH;
+
+ adapter->boot_devs.requested_boot_dev.form =
+ bios_page2.requested_boot_dev_form;
+ memcpy((void *)adapter->boot_devs.requested_boot_dev.pg_dev,
+ (void *)&bios_page2.requested_boot_dev,
+ LEAPRAID_BOOT_DEV_SIZE);
+ adapter->boot_devs.requested_alt_boot_dev.form =
+ bios_page2.requested_alt_boot_dev_form;
+ memcpy((void *)adapter->boot_devs.requested_alt_boot_dev.pg_dev,
+ (void *)&bios_page2.requested_alt_boot_dev,
+ LEAPRAID_BOOT_DEV_SIZE);
+ adapter->boot_devs.current_boot_dev.form =
+ bios_page2.current_boot_dev_form;
+ memcpy((void *)adapter->boot_devs.current_boot_dev.pg_dev,
+ (void *)&bios_page2.current_boot_dev,
+ LEAPRAID_BOOT_DEV_SIZE);
+
+ sz = offsetof(struct leapraid_sas_io_unit_page1, phy_info);
+ sas_io_unit_page1 = kzalloc(sz, GFP_KERNEL);
+ if (!sas_io_unit_page1) {
+ rc = -ENOMEM;
+ return rc;
+ }
+
+ cfgp1.size = sz;
+
+ rc = leapraid_op_config_page(adapter, sas_io_unit_page1, cfgp1,
+ cfgp2, GET_SAS_IOUNIT_PG1);
+ if (rc)
+ goto out;
+
+ if (le16_to_cpu(sas_io_unit_page1->wideport_max_queue_depth))
+ adapter->adapter_attr.wideport_max_queue_depth =
+ le16_to_cpu(
+ sas_io_unit_page1->wideport_max_queue_depth);
+
+ if (le16_to_cpu(sas_io_unit_page1->narrowport_max_queue_depth))
+ adapter->adapter_attr.narrowport_max_queue_depth =
+ le16_to_cpu(
+ sas_io_unit_page1->narrowport_max_queue_depth);
+
+ if (sas_io_unit_page1->sata_max_queue_depth)
+ adapter->adapter_attr.sata_max_queue_depth =
+ sas_io_unit_page1->sata_max_queue_depth;
+
+out:
+ kfree(sas_io_unit_page1);
+ dev_info(&adapter->pdev->dev,
+ "max wp qd=%d, max np qd=%d, max sata qd=%d\n",
+ adapter->adapter_attr.wideport_max_queue_depth,
+ adapter->adapter_attr.narrowport_max_queue_depth,
+ adapter->adapter_attr.sata_max_queue_depth);
+ return rc;
+}
+
+static int leapraid_evt_notify(struct leapraid_adapter *adapter)
+{
+ struct leapraid_evt_notify_req *evt_notify_req;
+ int rc = 0;
+ int i;
+
+ mutex_lock(&adapter->driver_cmds.notify_event_cmd.mutex);
+ adapter->driver_cmds.notify_event_cmd.status = LEAPRAID_CMD_PENDING;
+ evt_notify_req =
+ leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.notify_event_cmd.inter_taskid);
+ memset(evt_notify_req, 0, sizeof(struct leapraid_evt_notify_req));
+ evt_notify_req->func = LEAPRAID_FUNC_EVENT_NOTIFY;
+ for (i = 0; i < LEAPRAID_EVT_MASK_COUNT; i++)
+ evt_notify_req->evt_masks[i] =
+ cpu_to_le32(adapter->fw_evt_s.leapraid_evt_masks[i]);
+ init_completion(&adapter->driver_cmds.notify_event_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.notify_event_cmd.inter_taskid);
+ wait_for_completion_timeout(
+ &adapter->driver_cmds.notify_event_cmd.done,
+ LEAPRAID_NOTIFY_EVENT_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.notify_event_cmd.status &
+ LEAPRAID_CMD_DONE))
+ if (adapter->driver_cmds.notify_event_cmd.status &
+ LEAPRAID_CMD_RESET)
+ rc = -EFAULT;
+ adapter->driver_cmds.notify_event_cmd.status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&adapter->driver_cmds.notify_event_cmd.mutex);
+
+ return rc;
+}
+
+int leapraid_scan_dev(struct leapraid_adapter *adapter, bool async_scan_dev)
+{
+ struct leapraid_scan_dev_req *scan_dev_req;
+ struct leapraid_scan_dev_rep *scan_dev_rep;
+ u16 adapter_status;
+ int rc = 0;
+
+ dev_info(&adapter->pdev->dev,
+ "send device scan, async_scan_dev=%d!\n", async_scan_dev);
+
+ adapter->driver_cmds.scan_dev_cmd.status = LEAPRAID_CMD_PENDING;
+ adapter->driver_cmds.scan_dev_cmd.async_scan_dev = async_scan_dev;
+ scan_dev_req = leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.scan_dev_cmd.inter_taskid);
+ memset(scan_dev_req, 0, sizeof(struct leapraid_scan_dev_req));
+ scan_dev_req->func = LEAPRAID_FUNC_SCAN_DEV;
+
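+	/*
+	 * Asynchronous scan: fire the request and return immediately; the
+	 * completion is handled from the scan-device callback. Synchronous
+	 * scan: fire and wait for completion below.
+	 */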
+ if (async_scan_dev) {
+ adapter->scan_dev_desc.first_scan_dev_fired = true;
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.scan_dev_cmd.inter_taskid);
+ return 0;
+ }
+
+ init_completion(&adapter->driver_cmds.scan_dev_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.scan_dev_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.scan_dev_cmd.done,
+ LEAPRAID_SCAN_DEV_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.scan_dev_cmd.status & LEAPRAID_CMD_DONE)) {
+ dev_err(&adapter->pdev->dev, "device scan timeout!\n");
+ if (adapter->driver_cmds.scan_dev_cmd.status &
+ LEAPRAID_CMD_RESET)
+ rc = -EFAULT;
+ else
+ rc = -ETIME;
+ goto out;
+ }
+
+ scan_dev_rep = (void *)(&adapter->driver_cmds.scan_dev_cmd.reply);
+ adapter_status =
+ le16_to_cpu(scan_dev_rep->adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status != LEAPRAID_ADAPTER_STATUS_SUCCESS) {
+ dev_err(&adapter->pdev->dev, "device scan failure!\n");
+ rc = -EFAULT;
+ goto out;
+ }
+
+out:
+ adapter->driver_cmds.scan_dev_cmd.status = LEAPRAID_CMD_NOT_USED;
+ dev_info(&adapter->pdev->dev,
+ "device scan %s\n", ((rc == 0) ? "SUCCESS" : "FAILED"));
+ return rc;
+}
+
+static void leapraid_init_task_tracker(struct leapraid_adapter *adapter)
+{
+ unsigned long flags;
+ u16 taskid;
+ int i;
+
+ spin_lock_irqsave(&adapter->dynamic_task_desc.task_lock, flags);
+ taskid = 1;
+ for (i = 0; i < adapter->shost->can_queue; i++, taskid++) {
+ adapter->mem_desc.io_tracker[i].taskid = taskid;
+ adapter->mem_desc.io_tracker[i].scmd = NULL;
+ }
+
+ spin_unlock_irqrestore(&adapter->dynamic_task_desc.task_lock, flags);
+}
+
+static void leapraid_init_rep_msg_addr(struct leapraid_adapter *adapter)
+{
+ u32 reply_address;
+ unsigned int i;
+
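+	/*
+	 * Publish the low 32 bits of every reply frame's DMA address; the
+	 * upper 32 bits are programmed once via rep_msg_dma_high.
+	 */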
+ for (i = 0, reply_address = (u32)adapter->mem_desc.rep_msg_dma;
+ i < adapter->adapter_attr.rep_msg_qd;
+ i++, reply_address += LEAPRAID_REPLY_SIEZ) {
+ adapter->mem_desc.rep_msg_addr[i] = cpu_to_le32(reply_address);
+ }
+}
+
+static void init_rep_desc(struct leapraid_rq *rq, int index,
+ union leapraid_rep_desc_union *reply_post_free_contig)
+{
+ struct leapraid_adapter *adapter = rq->adapter;
+ unsigned int i;
+
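+	/*
+	 * Point this queue at its own descriptor segment, or at a slice of
+	 * the single contiguous region under kdump, then mark every entry
+	 * unused (all ones).
+	 */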
+ if (!reset_devices)
+ rq->rep_desc =
+ adapter->mem_desc
+ .rep_desc_seg_maint[index /
+ LEAPRAID_REP_DESC_CHUNK_SIZE]
+ .rep_desc_maint[index %
+ LEAPRAID_REP_DESC_CHUNK_SIZE]
+ .rep_desc;
+ else
+ rq->rep_desc = reply_post_free_contig;
+
+ rq->rep_post_host_idx = 0;
+ for (i = 0; i < adapter->adapter_attr.rep_desc_qd; i++)
+ rq->rep_desc[i].words = cpu_to_le64(ULLONG_MAX);
+}
+
+static void leapraid_init_rep_desc(struct leapraid_adapter *adapter)
+{
+ union leapraid_rep_desc_union *reply_post_free_contig;
+ struct leapraid_int_rq *int_rq;
+ struct leapraid_blk_mq_poll_rq *blk_mq_poll_rq;
+ unsigned int i;
+ int index;
+
+ index = 0;
+ reply_post_free_contig = adapter->mem_desc
+ .rep_desc_seg_maint[0]
+ .rep_desc_maint[0]
+ .rep_desc;
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qdex; i++) {
+ int_rq = &adapter->notification_desc.int_rqs[i];
+ init_rep_desc(&int_rq->rq, index, reply_post_free_contig);
+ if (!reset_devices)
+ index++;
+ else
+ reply_post_free_contig +=
+ adapter->adapter_attr.rep_desc_qd;
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qcnt; i++) {
+ blk_mq_poll_rq = &adapter->notification_desc.blk_mq_poll_rqs[i];
+ init_rep_desc(&blk_mq_poll_rq->rq,
+ index, reply_post_free_contig);
+ if (!reset_devices)
+ index++;
+ else
+ reply_post_free_contig +=
+ adapter->adapter_attr.rep_desc_qd;
+ }
+}
+
+static void leapraid_init_bar_idx_regs(struct leapraid_adapter *adapter)
+{
+ struct leapraid_int_rq *int_rq;
+ struct leapraid_blk_mq_poll_rq *blk_mq_poll_rq;
+ unsigned int i, j;
+
+ adapter->rep_msg_host_idx = adapter->adapter_attr.rep_msg_qd - 1;
+ writel(adapter->rep_msg_host_idx,
+ &adapter->iomem_base->rep_msg_host_idx);
+
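+	/*
+	 * Zero the reply-post host index for every queue, encoding the
+	 * queue's MSI-X index (within its group of 8) in the register's
+	 * MSI-X field.
+	 */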
+ for (i = 0; i < adapter->notification_desc.iopoll_qdex; i++) {
+ int_rq = &adapter->notification_desc.int_rqs[i];
+ for (j = 0; j < REP_POST_HOST_IDX_REG_CNT; j++)
+ writel((int_rq->rq.msix_idx & 7) <<
+ LEAPRAID_RPHI_MSIX_IDX_SHIFT,
+ &adapter->iomem_base->rep_post_reg_idx[j].idx);
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qcnt; i++) {
+ blk_mq_poll_rq =
+ &adapter->notification_desc.blk_mq_poll_rqs[i];
+ for (j = 0; j < REP_POST_HOST_IDX_REG_CNT; j++)
+ writel((blk_mq_poll_rq->rq.msix_idx & 7) <<
+ LEAPRAID_RPHI_MSIX_IDX_SHIFT,
+ &adapter->iomem_base->rep_post_reg_idx[j].idx);
+ }
+}
+
+static int leapraid_make_adapter_available(struct leapraid_adapter *adapter)
+{
+ int rc = 0;
+
+ leapraid_init_task_tracker(adapter);
+ leapraid_init_rep_msg_addr(adapter);
+
+ if (adapter->scan_dev_desc.driver_loading)
+ leapraid_configure_reply_queue_affinity(adapter);
+
+ leapraid_init_rep_desc(adapter);
+ rc = leapraid_send_adapter_init(adapter);
+ if (rc)
+ return rc;
+
+ leapraid_init_bar_idx_regs(adapter);
+ leapraid_unmask_int(adapter);
+ rc = leapraid_cfg_pages(adapter);
+ if (rc)
+ return rc;
+
+ rc = leapraid_evt_notify(adapter);
+ if (rc)
+ return rc;
+
+ if (!adapter->access_ctrl.shost_recovering) {
+ adapter->scan_dev_desc.wait_scan_dev_done = true;
+ return 0;
+ }
+
+	return leapraid_scan_dev(adapter, false);
+}
+
+int leapraid_ctrl_init(struct leapraid_adapter *adapter)
+{
+ u32 cap;
+ int rc = 0;
+
+ rc = leapraid_set_pcie_and_notification(adapter);
+ if (rc)
+ goto out_free_resources;
+
+ pci_set_drvdata(adapter->pdev, adapter->shost);
+
+ pcie_capability_read_dword(adapter->pdev, PCI_EXP_DEVCAP, &cap);
+
+ if (cap & PCI_EXP_DEVCAP_EXT_TAG) {
+ pcie_capability_set_word(adapter->pdev, PCI_EXP_DEVCTL,
+ PCI_EXP_DEVCTL_EXT_TAG);
+ }
+
+ rc = leapraid_make_adapter_ready(adapter, PART_RESET);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "make adapter ready failure\n");
+ goto out_free_resources;
+ }
+
+ rc = leapraid_get_adapter_features(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "get adapter feature failure\n");
+ goto out_free_resources;
+ }
+
+ rc = leapraid_fw_log_init(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "fw log init failure\n");
+ goto out_free_resources;
+ }
+
+ rc = leapraid_request_host_memory(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "request host memory failure\n");
+ goto out_free_resources;
+ }
+
+ init_waitqueue_head(&adapter->reset_desc.reset_wait_queue);
+
+ rc = leapraid_alloc_dev_topo_bitmaps(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "alloc topo bitmaps failure\n");
+ goto out_free_resources;
+ }
+
+ rc = leapraid_init_driver_cmds(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "init driver cmds failure\n");
+ goto out_free_resources;
+ }
+
+ leapraid_init_event_mask(adapter);
+
+ rc = leapraid_make_adapter_available(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "make adapter available failure\n");
+ goto out_free_resources;
+ }
+ return 0;
+
+out_free_resources:
+ adapter->access_ctrl.host_removing = true;
+ leapraid_fw_log_exit(adapter);
+ leapraid_disable_controller(adapter);
+ leapraid_free_host_memory(adapter);
+ leapraid_free_dev_topo_bitmaps(adapter);
+ pci_set_drvdata(adapter->pdev, NULL);
+ return rc;
+}
+
+void leapraid_remove_ctrl(struct leapraid_adapter *adapter)
+{
+ leapraid_check_scheduled_fault_stop(adapter);
+ leapraid_fw_log_stop(adapter);
+ leapraid_fw_log_exit(adapter);
+ leapraid_disable_controller(adapter);
+ leapraid_free_host_memory(adapter);
+ leapraid_free_dev_topo_bitmaps(adapter);
+ leapraid_free_enc_list(adapter);
+ pci_set_drvdata(adapter->pdev, NULL);
+}
+
+void leapraid_free_internal_scsi_cmd(struct leapraid_adapter *adapter)
+{
+ mutex_lock(&adapter->driver_cmds.driver_scsiio_cmd.mutex);
+ kfree(adapter->driver_cmds.internal_scmd);
+ adapter->driver_cmds.internal_scmd = NULL;
+ mutex_unlock(&adapter->driver_cmds.driver_scsiio_cmd.mutex);
+}
diff --git a/drivers/scsi/leapraid/leapraid_func.h b/drivers/scsi/leapraid/leapraid_func.h
new file mode 100644
index 000000000000..9f42763bda72
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid_func.h
@@ -0,0 +1,1423 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+
+#ifndef LEAPRAID_FUNC_H_INCLUDED
+#define LEAPRAID_FUNC_H_INCLUDED
+
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/poll.h>
+#include <linux/errno.h>
+#include <linux/ktime.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_eh.h>
+#include <scsi/scsicam.h>
+#include <scsi/scsi_tcq.h>
+#include <scsi/scsi_dbg.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_transport_sas.h>
+
+#include "leapraid.h"
+
+/* request and reply buffer sizes */
+#define LEAPRAID_REQUEST_SIZE 128
+#define LEAPRAID_REPLY_SIEZ 128
+#define LEAPRAID_CHAIN_SEG_SIZE 128
+#define LEAPRAID_MAX_SGES_IN_CHAIN 7
+#define LEAPRAID_DEFAULT_CHAINS_PER_IO 19
+#define LEAPRAID_DEFAULT_DIX_CHAINS_PER_IO \
+ (2 * LEAPRAID_DEFAULT_CHAINS_PER_IO) /* TODO DIX */
+#define LEAPRAID_IEEE_SGE64_ENTRY_SIZE 16
+#define LEAPRAID_REP_DESC_CHUNK_SIZE 16
+#define LEAPRAID_REP_DESC_ENTRY_SIZE 8
+#define LEAPRAID_REP_MSG_ADDR_SIZE 4
+#define LEAPRAID_REP_RQ_CNT_SIZE 16
+
+#define LEAPRAID_SYS_LOG_BUF_SIZE 0x200000
+#define LEAPRAID_SYS_LOG_BUF_RESERVE 0x1000
+
+/* Driver version and name */
+#define LEAPRAID_DRIVER_NAME "LeapRaid"
+#define LEAPRAID_NAME_LENGTH 48
+#define LEAPRAID_AUTHOR "LeapIO Inc."
+#define LEAPRAID_DESCRIPTION "LeapRaid Driver"
+#define LEAPRAID_DRIVER_VERSION "2.00.00.05"
+#define LEAPRAID_MAJOR_VERSION 2
+#define LEAPRAID_MINOR_VERSION 00
+#define LEAPRAID_BUILD_VERSION 00
+#define LEAPRAID_RELEASE_VERSION 05
+
+/* Device ID */
+#define LEAPRAID_VENDOR_ID 0xD405
+#define LEAPRAID_DEVID_HBA 0x8200
+#define LEAPRAID_DEVID_RAID 0x8201
+
+#define LEAPRAID_PCI_VENDOR_ID_MASK 0xFFFF
+
+/* RAID virtual channel ID */
+#define RAID_CHANNEL 1
+
+/* Scatter/Gather (SG) segment limits */
+#define LEAPRAID_MAX_PHYS_SEGMENTS SG_CHUNK_SIZE
+
+#define LEAPRAID_KDUMP_MIN_PHYS_SEGMENTS 32
+#define LEAPRAID_SG_DEPTH LEAPRAID_MAX_PHYS_SEGMENTS
+
+/* firmware / config page operations */
+#define LEAPRAID_SET_PARAMETER_SYNC_TIMESTAMP 0x81
+#define LEAPRAID_CFG_REQ_RETRY_TIMES 2
+
+/* Hardware access helpers */
+#define leapraid_readl(addr) readl(addr)
+#define leapraid_check_reset(status) \
+ (!((status) & LEAPRAID_CMD_RESET))
+
+/* Polling intervals */
+#define LEAPRAID_PCIE_LOG_POLLING_INTERVAL 1
+#define LEAPRAID_FAULT_POLLING_INTERVAL 1000
+#define LEAPRAID_TIMESTAMP_SYNC_INTERVAL 900
+#define LEAPRAID_SMART_POLLING_INTERVAL (300 * 1000)
+
+/* init mask */
+#define LEAPRAID_RESET_IRQ_MASK 0x40000000
+#define LEAPRAID_REPLY_INT_MASK 0x00000008
+#define LEAPRAID_TO_SYS_DB_MASK 0x00000001
+
+/* queue depth */
+#define LEAPRAID_SATA_QUEUE_DEPTH 32
+#define LEAPRAID_SAS_QUEUE_DEPTH 254
+#define LEAPRAID_RAID_QUEUE_DEPTH 128
+
+/* SCSI device and queue limits */
+#define LEAPRAID_MAX_SECTORS 8192
+#define LEAPRAID_DEF_MAX_SECTORS 32767
+#define LEAPRAID_MAX_CDB_LEN 32
+#define LEAPRAID_MAX_LUNS 16384
+#define LEAPRAID_CAN_QUEUE_MIN 1
+#define LEAPRAID_THIS_ID_NONE -1
+#define LEAPRAID_CMD_PER_LUN 128
+#define LEAPRAID_MAX_SEGMENT_SIZE 0xffffffff
+
+/* SCSI sense and ASC/ASCQ and disk geometry configuration */
+#define DESC_FORMAT_THRESHOLD 0x72
+#define SENSE_KEY_MASK 0x0F
+#define SCSI_SENSE_RESPONSE_CODE_MASK 0x7F
+#define ASC_FAILURE_PREDICTION_THRESHOLD_EXCEEDED 0x5D
+#define LEAPRAID_LARGE_DISK_THRESHOLD 0x200000UL /* in sectors, 1GB */
+#define LEAPRAID_LARGE_DISK_HEADS 255
+#define LEAPRAID_LARGE_DISK_SECTORS 63
+#define LEAPRAID_SMALL_DISK_HEADS 64
+#define LEAPRAID_SMALL_DISK_SECTORS 32
+
+/* SMP (Serial Management Protocol) */
+#define LEAPRAID_SMP_PT_FLAG_SGL_PTR 0x80
+#define LEAPRAID_SMP_FN_REPORT_PHY_ERR_LOG 0x91
+#define LEAPRAID_SMP_FRAME_HEADER_SIZE 4
+#define LEAPRAID_SCSI_HOST_SHIFT 16
+#define LEAPRAID_SCSI_DRIVER_SHIFT 24
+
+/* SCSI ASC/ASCQ definitions */
+#define LEAPRAID_SCSI_ASCQ_DEFAULT 0x00
+#define LEAPRAID_SCSI_ASC_POWER_ON_RESET 0x29
+#define LEAPRAID_SCSI_ASC_INVALID_CMD_CODE 0x20
+#define LEAPRAID_SCSI_ASCQ_POWER_ON_RESET 0x07
+
+/* ---- VPD Page 0x89 (ATA Information) ---- */
+#define LEAPRAID_VPD_PAGE_ATA_INFO 0x89
+#define LEAPRAID_VPD_PG89_MAX_LEN 255
+#define LEAPRAID_VPD_PG89_MIN_LEN 214
+
+/* Byte index for NCQ support flag in VPD Page 0x89 */
+#define LEAPRAID_VPD_PG89_NCQ_BYTE_IDX 213
+#define LEAPRAID_VPD_PG89_NCQ_BIT_SHIFT 4
+#define LEAPRAID_VPD_PG89_NCQ_BIT_MASK 0x1
+
+/* readiness polling: max retries, sleep µs between */
+#define LEAPRAID_ADAPTER_READY_MAX_RETRY 15000
+#define LEAPRAID_ADAPTER_READY_SLEEP_MIN_US 1000
+#define LEAPRAID_ADAPTER_READY_SLEEP_MAX_US 1100
+
+/* Doorbell wait parameters */
+#define LEAPRAID_DB_WAIT_MAX_RETRY 20000
+#define LEAPRAID_DB_WAIT_DELAY_US 500
+
+/* Basic data size definitions */
+#define LEAPRAID_DWORDS_BYTE_SIZE 4
+#define LEAPRAID_WORD_BYTE_SIZE 2
+
+/* SGL threshold and chain offset */
+#define LEAPRAID_SGL_INLINE_THRESHOLD 2
+#define LEAPRAID_CHAIN_OFFSET_DWORDS 7
+
+/* MSI-X group size and mask */
+#define LEAPRAID_MSIX_GROUP_SIZE 8
+#define LEAPRAID_MSIX_GROUP_MASK 7
+
+/* basic constants and limits */
+#define LEAPRAID_BUSY_LIMIT 1
+#define LEAPRAID_INDEX_FIRST 0
+#define LEAPRAID_BITS_PER_BYTE 8
+#define LEAPRAID_INVALID_HOST_DIAG_VAL 0xFFFFFFFF
+
+/* retry / sleep configuration */
+#define LEAPRAID_UNLOCK_RETRY_LIMIT 20
+#define LEAPRAID_UNLOCK_SLEEP_MS 100
+#define LEAPRAID_MSLEEP_SHORT_MS 50
+#define LEAPRAID_MSLEEP_NORMAL_MS 100
+#define LEAPRAID_MSLEEP_LONG_MS 256
+#define LEAPRAID_MSLEEP_EXTRA_LONG_MS 500
+#define LEAPRAID_IO_POLL_DELAY_US 500
+
+/* controller reset loop parameters */
+#define LEAPRAID_RESET_LOOP_COUNT_REF (300000 / 256)
+#define LEAPRAID_RESET_LOOP_COUNT_DEFAULT 10000
+#define LEAPRAID_RESET_POLL_INTERVAL_MS 500
+
+/* Device / Volume configuration */
+#define LEAPRAID_MAX_VOLUMES_DEFAULT 32
+#define LEAPRAID_MAX_DEV_HANDLE_DEFAULT 2048
+#define LEAPRAID_INVALID_DEV_HANDLE 0xFFFF
+
+/* cmd queue depth */
+#define LEAPRAID_COALESCING_DEPTH_MAX 256
+#define LEAPRAID_DEFAULT_CMD_QD_OFFSET 64
+#define LEAPRAID_REPLY_QD_ALIGNMENT 16
+/* task id offset */
+#define LEAPRAID_TASKID_OFFSET_CTRL_CMD 1
+#define LEAPRAID_TASKID_OFFSET_SCSIIO_CMD 2
+#define LEAPRAID_TASKID_OFFSET_CFG_OP_CMD 1
+#define LEAPRAID_TASKID_OFFSET_TRANSPORT_CMD 2
+#define LEAPRAID_TASKID_OFFSET_TIMESTAMP_SYNC_CMD 3
+#define LEAPRAID_TASKID_OFFSET_RAID_ACTION_CMD 4
+#define LEAPRAID_TASKID_OFFSET_ENC_CMD 5
+#define LEAPRAID_TASKID_OFFSET_NOTIFY_EVENT_CMD 6
+
+/* task id offset for high-priority */
+#define LEAPRAID_HP_TASKID_OFFSET_CTL_CMD 0
+#define LEAPRAID_HP_TASKID_OFFSET_TM_CMD 1
+
+/* Event / Boot configuration */
+#define LEAPRAID_EVT_MASK_COUNT 4
+#define LEAPRAID_BOOT_DEV_SIZE 24
+
+/* logsense command definitions */
+#define LEAPRAID_LOGSENSE_DATA_LENGTH 16
+#define LEAPRAID_LOGSENSE_CDB_LENGTH 10
+#define LEAPRAID_LOGSENSE_CDB_CODE 0x6F
+#define LEAPRAID_LOGSENSE_TIMEOUT 5
+#define LEAPRAID_LOGSENSE_SMART_CODE 0x5D
+
+/* cmd timeout */
+#define LEAPRAID_DRIVER_SCSIIO_CMD_TIMEOUT LEAPRAID_LOGSENSE_TIMEOUT
+#define LEAPRAID_CFG_OP_TIMEOUT 15
+#define LEAPRAID_CTL_CMD_TIMEOUT 10
+#define LEAPRAID_SCAN_DEV_CMD_TIMEOUT 300
+#define LEAPRAID_TIMESTAMP_SYNC_CMD_TIMEOUT 10
+#define LEAPRAID_RAID_ACTION_CMD_TIMEOUT 10
+#define LEAPRAID_ENC_CMD_TIMEOUT 10
+#define LEAPRAID_NOTIFY_EVENT_CMD_TIMEOUT 30
+#define LEAPRAID_TM_CMD_TIMEOUT 30
+#define LEAPRAID_TRANSPORT_CMD_TIMEOUT 10
+
+/**
+ * struct leapraid_adapter_features - Features and
+ * capabilities of a LeapRAID adapter
+ *
+ * @req_slot: Number of request slots supported by the adapter
+ * @hp_slot: Number of high-priority slots supported by the adapter
+ * @adapter_caps: Adapter capabilities
+ * @fw_version: Firmware version of the adapter
+ * @max_volumes: Maximum number of RAID volumes supported by the adapter
+ * @max_dev_handle: Maximum device handle supported by the adapter
+ * @min_dev_handle: Minimum device handle supported by the adapter
+ */
+struct leapraid_adapter_features {
+ u16 req_slot;
+ u16 hp_slot;
+ u32 adapter_caps;
+ u32 fw_version;
+ u8 max_volumes;
+ u16 max_dev_handle;
+ u16 min_dev_handle;
+};
+
+/**
+ * struct leapraid_adapter_attr - Adapter attributes and capabilities
+ *
+ * @id: Adapter identifier
+ * @raid_support: Indicates if RAID is supported
+ * @bios_version: Version of the adapter BIOS
+ * @enable_mp: Indicates if multipath (MP) support is enabled
+ * @wideport_max_queue_depth: Maximum queue depth for wide ports
+ * @narrowport_max_queue_depth: Maximum queue depth for narrow ports
+ * @sata_max_queue_depth: Maximum queue depth for SATA
+ * @features: Detailed features of the adapter
+ * @adapter_total_qd: Total queue depth available on the adapter
+ * @io_qd: Queue depth allocated for I/O operations
+ * @rep_msg_qd: Queue depth for reply messages
+ * @rep_desc_qd: Queue depth for reply descriptors
+ * @rep_desc_q_seg_cnt: Number of segments in a reply descriptor queue
+ * @rq_cnt: Number of request queues
+ * @task_desc_dma_size: Size of task descriptor DMA memory
+ * @use_32_dma_mask: Indicates if 32-bit DMA mask is used
+ * @name: Adapter name string
+ */
+struct leapraid_adapter_attr {
+ u8 id;
+ bool raid_support;
+ u32 bios_version;
+ bool enable_mp;
+ u32 wideport_max_queue_depth;
+ u32 narrowport_max_queue_depth;
+ u32 sata_max_queue_depth;
+ struct leapraid_adapter_features features;
+ u32 adapter_total_qd;
+ u32 io_qd;
+ u32 rep_msg_qd;
+ u32 rep_desc_qd;
+ u32 rep_desc_q_seg_cnt;
+ u16 rq_cnt;
+ u32 task_desc_dma_size;
+ bool use_32_dma_mask;
+ char name[LEAPRAID_NAME_LENGTH];
+};
+
+/**
+ * struct leapraid_io_req_tracker - Track a SCSI I/O request
+ * for the adapter
+ *
+ * @taskid: Unique task ID for this I/O request
+ * @scmd: Pointer to the associated SCSI command
+ * @chain_list: List of chain frames associated with this request
+ * @msix_io: MSI-X vector assigned to this I/O request
+ * @chain: Pointer to the chain memory for this request
+ * @chain_dma: DMA address of the chain memory
+ */
+struct leapraid_io_req_tracker {
+ u16 taskid;
+ struct scsi_cmnd *scmd;
+ struct list_head chain_list;
+ u16 msix_io;
+ void *chain;
+ dma_addr_t chain_dma;
+};
+
+/**
+ * struct leapraid_task_tracker - Tracks a task in the adapter
+ *
+ * @taskid: Unique task ID for this tracker
+ * @cb_idx: Callback index associated with this task
+ * @tracker_list: Linked list node to chain this tracker in lists
+ */
+struct leapraid_task_tracker {
+ u16 taskid;
+ u8 cb_idx;
+ struct list_head tracker_list;
+};
+
+/**
+ * struct leapraid_rep_desc_maint - Maintains reply descriptor
+ * memory
+ *
+ * @rep_desc: Pointer to the reply descriptor
+ * @rep_desc_dma: DMA address of the reply descriptor
+ */
+struct leapraid_rep_desc_maint {
+ union leapraid_rep_desc_union *rep_desc;
+ dma_addr_t rep_desc_dma;
+};
+
+/**
+ * struct leapraid_rep_desc_seg_maint - Maintains reply descriptor
+ * segment memory
+ *
+ * @rep_desc_seg: Pointer to the reply descriptor segment
+ * @rep_desc_seg_dma: DMA address of the reply descriptor segment
+ * @rep_desc_maint: Pointer to the array of reply descriptor maintenance structures
+ */
+struct leapraid_rep_desc_seg_maint {
+ void *rep_desc_seg;
+ dma_addr_t rep_desc_seg_dma;
+ struct leapraid_rep_desc_maint *rep_desc_maint;
+};
+
+/**
+ * struct leapraid_mem_desc - Memory descriptor for LeapRaid adapter
+ *
+ * @task_desc: Pointer to task descriptor
+ * @task_desc_dma: DMA address of task descriptor
+ * @sg_chain_pool: DMA pool for SGL chain allocations
+ * @sg_chain_pool_size: Size of the sg_chain_pool
+ * @io_tracker: IO request tracker array
+ * @sense_data: Buffer for SCSI sense data
+ * @sense_data_dma: DMA address of sense_data buffer
+ * @rep_msg: Buffer for reply message
+ * @rep_msg_dma: DMA address of reply message buffer
+ * @rep_msg_addr: Pointer to reply message address
+ * @rep_msg_addr_dma: DMA address of reply message address
+ * @rep_desc_seg_maint: Pointer to reply descriptor segment
+ * @rep_desc_q_arr: Pointer to reply descriptor queue array
+ * @rep_desc_q_arr_dma: DMA address of reply descriptor queue array
+ */
+struct leapraid_mem_desc {
+ void *task_desc;
+ dma_addr_t task_desc_dma;
+ struct dma_pool *sg_chain_pool;
+ u16 sg_chain_pool_size;
+ struct leapraid_io_req_tracker *io_tracker;
+ u8 *sense_data;
+ dma_addr_t sense_data_dma;
+ u8 *rep_msg;
+ dma_addr_t rep_msg_dma;
+ __le32 *rep_msg_addr;
+ dma_addr_t rep_msg_addr_dma;
+ struct leapraid_rep_desc_seg_maint *rep_desc_seg_maint;
+ struct leapraid_rep_desc_q_arr *rep_desc_q_arr;
+ dma_addr_t rep_desc_q_arr_dma;
+};
+
+#define LEAPRAID_FIXED_INTER_CMDS 7
+#define LEAPRAID_FIXED_HP_CMDS 2
+#define LEAPRAID_INTER_HP_CMDS_DIF \
+ (LEAPRAID_FIXED_INTER_CMDS - LEAPRAID_FIXED_HP_CMDS)
+
+#define LEAPRAID_CMD_NOT_USED 0x8000
+#define LEAPRAID_CMD_DONE 0x0001
+#define LEAPRAID_CMD_PENDING 0x0002
+#define LEAPRAID_CMD_REPLY_VALID 0x0004
+#define LEAPRAID_CMD_RESET 0x0008
+
+/**
+ * enum LEAPRAID_CB_INDEX - Callback index for LeapRaid driver
+ *
+ * @LEAPRAID_SCAN_DEV_CB_IDX: Scan device callback index
+ * @LEAPRAID_CONFIG_CB_IDX: Configuration callback index
+ * @LEAPRAID_TRANSPORT_CB_IDX: Transport callback index
+ * @LEAPRAID_TIMESTAMP_SYNC_CB_IDX: Timestamp sync callback index
+ * @LEAPRAID_RAID_ACTION_CB_IDX: RAID action callback index
+ * @LEAPRAID_DRIVER_SCSIIO_CB_IDX: Driver SCSI I/O callback index
+ * @LEAPRAID_SAS_CTRL_CB_IDX: SAS controller callback index
+ * @LEAPRAID_ENC_CB_IDX: Enclosure management callback index
+ * @LEAPRAID_NOTIFY_EVENT_CB_IDX: Notify event callback index
+ * @LEAPRAID_CTL_CB_IDX: Control callback index
+ * @LEAPRAID_TM_CB_IDX: Task management callback index
+ */
+enum LEAPRAID_CB_INDEX {
+ LEAPRAID_SCAN_DEV_CB_IDX = 0x1,
+ LEAPRAID_CONFIG_CB_IDX = 0x2,
+ LEAPRAID_TRANSPORT_CB_IDX = 0x3,
+ LEAPRAID_TIMESTAMP_SYNC_CB_IDX = 0x4,
+ LEAPRAID_RAID_ACTION_CB_IDX = 0x5,
+ LEAPRAID_DRIVER_SCSIIO_CB_IDX = 0x6,
+ LEAPRAID_SAS_CTRL_CB_IDX = 0x7,
+ LEAPRAID_ENC_CB_IDX = 0x8,
+ LEAPRAID_NOTIFY_EVENT_CB_IDX = 0x9,
+ LEAPRAID_CTL_CB_IDX = 0xA,
+ LEAPRAID_TM_CB_IDX = 0xB,
+ LEAPRAID_NUM_CB_IDXS
+};
+
+struct leapraid_default_reply {
+ u8 pad[LEAPRAID_REPLY_SIEZ];
+};
+
+struct leapraid_sense_buffer {
+ u8 pad[SCSI_SENSE_BUFFERSIZE];
+};
+
+/**
+ * struct leapraid_driver_cmd - Driver command tracking structure
+ *
+ * @reply: Default reply structure returned by the adapter
+ * @done: Completion object used to signal command completion
+ * @status: Driver command state flags (LEAPRAID_CMD_*)
+ * @taskid: Unique task identifier for this command
+ * @hp_taskid: Task identifier for high-priority commands
+ * @inter_taskid: Task identifier for internal commands
+ * @cb_idx: Callback index used to identify completion context
+ * @async_scan_dev: True if this command is for asynchronous device scan
+ * @sense: Sense buffer holding error information from device
+ * @mutex: Mutex to protect access to this command structure
+ * @list: List node for linking driver commands into lists
+ */
+struct leapraid_driver_cmd {
+ struct leapraid_default_reply reply;
+ struct completion done;
+ u16 status;
+ u16 taskid;
+ u16 hp_taskid;
+ u16 inter_taskid;
+ u8 cb_idx;
+ bool async_scan_dev;
+ struct leapraid_sense_buffer sense;
+ struct mutex mutex;
+ struct list_head list;
+};
+
+/**
+ * struct leapraid_driver_cmds - Collection of driver command objects
+ *
+ * @special_cmd_list: List head for tracking special driver commands
+ * @scan_dev_cmd: Command used for asynchronous device scan operations
+ * @cfg_op_cmd: Command for configuration operations
+ * @transport_cmd: Command for transport-level operations
+ * @timestamp_sync_cmd: Command for synchronizing timestamp with firmware
+ * @raid_action_cmd: Command for RAID-related management or action requests
+ * @driver_scsiio_cmd: Command used for internal SCSI I/O processing
+ * @enc_cmd: Command for enclosure management operations
+ * @notify_event_cmd: Command for asynchronous event notification handling
+ * @ctl_cmd: Command for generic control or maintenance operations
+ * @tm_cmd: Task management command
+ * @internal_scmd: Pointer to internal SCSI command used by the driver
+ */
+struct leapraid_driver_cmds {
+ struct list_head special_cmd_list;
+ struct leapraid_driver_cmd scan_dev_cmd;
+ struct leapraid_driver_cmd cfg_op_cmd;
+ struct leapraid_driver_cmd transport_cmd;
+ struct leapraid_driver_cmd timestamp_sync_cmd;
+ struct leapraid_driver_cmd raid_action_cmd;
+ struct leapraid_driver_cmd driver_scsiio_cmd;
+ struct leapraid_driver_cmd enc_cmd;
+ struct leapraid_driver_cmd notify_event_cmd;
+ struct leapraid_driver_cmd ctl_cmd;
+ struct leapraid_driver_cmd tm_cmd;
+ struct scsi_cmnd *internal_scmd;
+};
+
+/**
+ * struct leapraid_dynamic_task_desc - Dynamic task descriptor
+ *
+ * @task_lock: Spinlock to protect concurrent access
+ * @hp_taskid: Current high-priority task ID
+ * @hp_cmd_qd: Fixed command queue depth for high-priority tasks
+ * @inter_taskid: Current internal task ID
+ * @inter_cmd_qd: Fixed command queue depth for internal tasks
+ */
+struct leapraid_dynamic_task_desc {
+ spinlock_t task_lock;
+ u16 hp_taskid;
+ u16 hp_cmd_qd;
+ u16 inter_taskid;
+ u16 inter_cmd_qd;
+};
+
+/**
+ * struct leapraid_fw_evt_work - Firmware event work structure
+ *
+ * @list: Linked list node for queuing the work
+ * @adapter: Pointer to the associated LeapRaid adapter
+ * @work: Work structure used by the kernel workqueue
+ * @refcnt: Reference counter for managing the lifetime of this work
+ * @evt_data: Pointer to firmware event data
+ * @dev_handle: Device handle associated with the event
+ * @evt_type: Type of firmware event
+ * @ignore: Flag indicating whether the event should be ignored
+ */
+struct leapraid_fw_evt_work {
+ struct list_head list;
+ struct leapraid_adapter *adapter;
+ struct work_struct work;
+ struct kref refcnt;
+ void *evt_data;
+ u16 dev_handle;
+ u16 evt_type;
+ u8 ignore;
+};
+
+/**
+ * struct leapraid_fw_evt_struct - Firmware event handling structure
+ *
+ * @fw_evt_name: Name of the firmware event
+ * @fw_evt_thread: Workqueue used for processing firmware events
+ * @fw_evt_lock: Spinlock protecting access to the firmware event list
+ * @fw_evt_list: Linked list of pending firmware events
+ * @cur_evt: Pointer to the currently processing firmware event
+ * @fw_evt_cleanup: Flag indicating whether cleanup of events is in progress
+ * @leapraid_evt_masks: Array of event masks for filtering firmware events
+ */
+struct leapraid_fw_evt_struct {
+ char fw_evt_name[48];
+ struct workqueue_struct *fw_evt_thread;
+ spinlock_t fw_evt_lock;
+ struct list_head fw_evt_list;
+ struct leapraid_fw_evt_work *cur_evt;
+ int fw_evt_cleanup;
+	u32 leapraid_evt_masks[LEAPRAID_EVT_MASK_COUNT];
+};
+
+/**
+ * struct leapraid_rq - Represents a LeapRaid request queue
+ *
+ * @adapter: Pointer to the associated LeapRaid adapter
+ * @msix_idx: MSI-X vector index used by this queue
+ * @rep_post_host_idx: Index of the last processed reply descriptor
+ * @rep_desc: Pointer to the reply descriptor associated with this queue
+ * @name: Name of the request queue
+ * @busy: Atomic counter indicating if the queue is busy
+ */
+struct leapraid_rq {
+ struct leapraid_adapter *adapter;
+ u8 msix_idx;
+ u32 rep_post_host_idx;
+ union leapraid_rep_desc_union *rep_desc;
+ char name[LEAPRAID_NAME_LENGTH];
+ atomic_t busy;
+};
+
+/**
+ * struct leapraid_int_rq - Internal request queue for a CPU
+ *
+ * @affinity_hint: CPU affinity mask for the queue
+ * @rq: Underlying LeapRaid request queue structure
+ */
+struct leapraid_int_rq {
+ cpumask_var_t affinity_hint;
+ struct leapraid_rq rq;
+};
+
+/**
+ * struct leapraid_blk_mq_poll_rq - Polling request for LeapRaid blk-mq
+ *
+ * @busy: Atomic flag indicating request is being processed
+ * @pause: Atomic flag to temporarily suspend polling
+ * @rq: The underlying LeapRaid request structure
+ */
+struct leapraid_blk_mq_poll_rq {
+ atomic_t busy;
+ atomic_t pause;
+ struct leapraid_rq rq;
+};
+
+/**
+ * struct leapraid_notification_desc - Notification
+ * descriptor for LeapRaid
+ *
+ * @iopoll_qdex: Index at which I/O polling queues start (number of interrupt queues)
+ * @iopoll_qcnt: Count of I/O polling queues
+ * @msix_enable: Flag indicating MSI-X is enabled
+ * @msix_cpu_map: CPU map for MSI-X interrupts
+ * @msix_cpu_map_sz: Size of the MSI-X CPU map
+ * @int_rqs: Array of interrupt request queues
+ * @int_rqs_allocated: Count of allocated interrupt request queues
+ * @blk_mq_poll_rqs: Array of blk-mq polling requests
+ */
+struct leapraid_notification_desc {
+ u32 iopoll_qdex;
+ u32 iopoll_qcnt;
+ bool msix_enable;
+ u8 *msix_cpu_map;
+ u32 msix_cpu_map_sz;
+ struct leapraid_int_rq *int_rqs;
+ u32 int_rqs_allocated;
+ struct leapraid_blk_mq_poll_rq *blk_mq_poll_rqs;
+};
+
+/**
+ * struct leapraid_reset_desc - Reset descriptor for LeapRaid
+ *
+ * @fault_reset_wq: Workqueue for fault reset operations
+ * @fault_reset_work: Delayed work structure for fault reset
+ * @fault_reset_wq_name: Name of the fault reset workqueue
+ * @host_diag_mutex: Mutex for host diagnostic operations
+ * @adapter_reset_lock: Spinlock for adapter reset operations
+ * @adapter_reset_mutex: Mutex for adapter reset operations
+ * @adapter_link_resetting: Flag indicating if adapter link is resetting
+ * @adapter_reset_results: Results of the adapter reset operation
+ * @pending_io_cnt: Count of pending I/O operations
+ * @reset_wait_queue: Wait queue for reset operations
+ * @reset_cnt: Counter for reset operations
+ */
+struct leapraid_reset_desc {
+ struct workqueue_struct *fault_reset_wq;
+ struct delayed_work fault_reset_work;
+ char fault_reset_wq_name[48];
+ struct mutex host_diag_mutex;
+ spinlock_t adapter_reset_lock;
+ struct mutex adapter_reset_mutex;
+ bool adapter_link_resetting;
+ int adapter_reset_results;
+ int pending_io_cnt;
+ wait_queue_head_t reset_wait_queue;
+ u32 reset_cnt;
+};
+
+/**
+ * struct leapraid_scan_dev_desc - Scan device descriptor for LeapRaid
+ *
+ * @wait_scan_dev_done: Flag indicating if scan device operation is done
+ * @driver_loading: Flag indicating if driver is loading
+ * @first_scan_dev_fired: Flag indicating if first scan device operation fired
+ * @scan_dev_failed: Flag indicating if scan device operation failed
+ * @scan_start: Flag indicating if scan operation started
+ * @scan_start_failed: Count of failed scan start operations
+ */
+struct leapraid_scan_dev_desc {
+ bool wait_scan_dev_done;
+ bool driver_loading;
+ bool first_scan_dev_fired;
+ bool scan_dev_failed;
+ bool scan_start;
+ u16 scan_start_failed;
+};
+
+/**
+ * struct leapraid_access_ctrl - Access control structure for LeapRaid
+ *
+ * @pci_access_lock: Mutex for PCI access control
+ * @adapter_thermal_alert: Flag indicating if adapter thermal alert is active
+ * @shost_recovering: Flag indicating if host is recovering
+ * @host_removing: Flag indicating if host is being removed
+ * @pcie_recovering: Flag indicating if PCIe is recovering
+ */
+struct leapraid_access_ctrl {
+ struct mutex pci_access_lock;
+ bool adapter_thermal_alert;
+ bool shost_recovering;
+ bool host_removing;
+ bool pcie_recovering;
+};
+
+/**
+ * struct leapraid_fw_log_desc - Firmware log descriptor for LeapRaid
+ *
+ * @fw_log_buffer: Buffer for firmware log data
+ * @fw_log_buffer_dma: DMA address of the firmware log buffer
+ * @fw_log_wq_name: Name of the firmware log workqueue
+ * @fw_log_wq: Workqueue for firmware log operations
+ * @fw_log_work: Delayed work structure for firmware log
+ * @open_pcie_trace: Flag indicating if PCIe tracing is open
+ * @fw_log_init_flag: Flag indicating if firmware log is initialized
+ */
+struct leapraid_fw_log_desc {
+ u8 *fw_log_buffer;
+ dma_addr_t fw_log_buffer_dma;
+ char fw_log_wq_name[48];
+ struct workqueue_struct *fw_log_wq;
+ struct delayed_work fw_log_work;
+ int open_pcie_trace;
+ int fw_log_init_flag;
+};
+
+#define LEAPRAID_CARD_PORT_FLG_DIRTY 0x01
+#define LEAPRAID_CARD_PORT_FLG_NEW 0x02
+#define LEAPRAID_DISABLE_MP_PORT_ID 0xFF
+/**
+ * struct leapraid_card_port - Card port structure for LeapRaid
+ *
+ * @list: List head for card port
+ * @vphys_list: List head for virtual phy list
+ * @port_id: Port ID
+ * @sas_address: SAS address
+ * @phy_mask: Bitmask of PHYs belonging to this port
+ * @vphys_mask: Bitmask of virtual PHYs on this port
+ * @flg: Flags for the port
+ */
+struct leapraid_card_port {
+ struct list_head list;
+ struct list_head vphys_list;
+ u8 port_id;
+ u64 sas_address;
+ u32 phy_mask;
+ u32 vphys_mask;
+ u8 flg;
+};
+
+/**
+ * struct leapraid_card_phy - Card phy structure for LeapRaid
+ *
+ * @port_siblings: List head for port siblings
+ * @card_port: Pointer to the card port
+ * @identify: SAS identify structure
+ * @remote_identify: Remote SAS identify structure
+ * @phy: SAS phy structure
+ * @phy_id: Phy ID
+ * @hdl: Handle for the port
+ * @attached_hdl: Handle for the attached port
+ * @phy_is_assigned: Flag indicating if phy is assigned
+ * @vphy: Flag indicating if virtual phy
+ */
+struct leapraid_card_phy {
+ struct list_head port_siblings;
+ struct leapraid_card_port *card_port;
+ struct sas_identify identify;
+ struct sas_identify remote_identify;
+ struct sas_phy *phy;
+ u8 phy_id;
+ u16 hdl;
+ u16 attached_hdl;
+ bool phy_is_assigned;
+ bool vphy;
+};
+
+/**
+ * struct leapraid_topo_node - SAS topology node for LeapRaid
+ *
+ * @list: List head for linking nodes
+ * @sas_port_list: List of SAS ports
+ * @card_port: Associated card port
+ * @card_phy: Associated card PHY
+ * @rphy: SAS remote PHY device
+ * @parent_dev: Parent device pointer
+ * @sas_address: SAS address of this node
+ * @sas_address_parent: Parent node's SAS address
+ * @phys_num: Number of physical links
+ * @hdl: Handle identifier
+ * @enc_hdl: Enclosure handle
+ * @enc_lid: Enclosure logical identifier
+ * @resp: Response status flag
+ */
+struct leapraid_topo_node {
+ struct list_head list;
+ struct list_head sas_port_list;
+ struct leapraid_card_port *card_port;
+ struct leapraid_card_phy *card_phy;
+ struct sas_rphy *rphy;
+ struct device *parent_dev;
+ u64 sas_address;
+ u64 sas_address_parent;
+ u8 phys_num;
+ u16 hdl;
+ u16 enc_hdl;
+ u64 enc_lid;
+ bool resp;
+};
+
+/**
+ * struct leapraid_dev_topo - LeapRaid device topology management structure
+ *
+ * @topo_node_lock: Spinlock for protecting topology node operations
+ * @sas_dev_lock: Spinlock for SAS device list access
+ * @raid_volume_lock: Spinlock for RAID volume list access
+ * @sas_id: SAS domain identifier
+ * @card: Main card topology node
+ * @exp_list: List of expander devices
+ * @enc_list: List of enclosure devices
+ * @sas_dev_list: List of SAS devices
+ * @sas_dev_init_list: List of SAS devices being initialized
+ * @raid_volume_list: List of RAID volumes
+ * @card_port_list: List of card ports
+ * @pd_hdls_sz: Size of the physical disk handles array
+ * @pd_hdls: Array of physical disk handles
+ * @blocking_hdls: Array of blocking handles
+ * @pending_dev_add_sz: Size of the pending device addition array
+ * @pending_dev_add: Array tracking devices pending addition
+ * @dev_removing_sz: Size of the device removal array
+ * @dev_removing: Array tracking devices being removed
+ */
+struct leapraid_dev_topo {
+ spinlock_t topo_node_lock;
+ spinlock_t sas_dev_lock;
+ spinlock_t raid_volume_lock;
+ int sas_id;
+ struct leapraid_topo_node card;
+ struct list_head exp_list;
+ struct list_head enc_list;
+ struct list_head sas_dev_list;
+ struct list_head sas_dev_init_list;
+ struct list_head raid_volume_list;
+ struct list_head card_port_list;
+ u16 pd_hdls_sz;
+ void *pd_hdls;
+ void *blocking_hdls;
+ u16 pending_dev_add_sz;
+ void *pending_dev_add;
+ u16 dev_removing_sz;
+ void *dev_removing;
+};
+
+/**
+ * struct leapraid_boot_dev - Boot device structure for LeapRaid
+ *
+ * @dev: Device pointer
+ * @chnl: Channel number
+ * @form: Form factor
+ * @pg_dev: Config page device content
+ */
+struct leapraid_boot_dev {
+ void *dev;
+ u8 chnl;
+ u8 form;
+ u8 pg_dev[24];
+};
+
+/**
+ * struct leapraid_boot_devs - Boot device management structure
+ * @requested_boot_dev: Requested primary boot device
+ * @requested_alt_boot_dev: Requested alternate boot device
+ * @current_boot_dev: Currently active boot device
+ */
+struct leapraid_boot_devs {
+ struct leapraid_boot_dev requested_boot_dev;
+ struct leapraid_boot_dev requested_alt_boot_dev;
+ struct leapraid_boot_dev current_boot_dev;
+};
+
+/**
+ * struct leapraid_smart_poll_desc - SMART polling descriptor
+ * @smart_poll_wq: Workqueue for SMART polling tasks
+ * @smart_poll_work: Delayed work for SMART polling operations
+ * @smart_poll_wq_name: Workqueue name string
+ */
+struct leapraid_smart_poll_desc {
+ struct workqueue_struct *smart_poll_wq;
+ struct delayed_work smart_poll_work;
+ char smart_poll_wq_name[48];
+};
+
+/**
+ * struct leapraid_adapter - Main LeapRaid adapter structure
+ * @list: List head for adapter management
+ * @shost: SCSI host structure
+ * @pdev: PCI device structure
+ * @iomem_base: I/O memory mapped base address
+ * @rep_msg_host_idx: Host index for reply messages
+ * @mask_int: Interrupt masking flag
+ * @timestamp_sync_cnt: Timestamp synchronization counter
+ * @adapter_attr: Adapter attributes
+ * @mem_desc: Memory descriptor
+ * @driver_cmds: Driver commands
+ * @dynamic_task_desc: Dynamic task descriptor
+ * @fw_evt_s: Firmware event structure
+ * @notification_desc: Notification descriptor
+ * @reset_desc: Reset descriptor
+ * @scan_dev_desc: Device scan descriptor
+ * @access_ctrl: Access control
+ * @fw_log_desc: Firmware log descriptor
+ * @dev_topo: Device topology
+ * @boot_devs: Boot devices
+ * @smart_poll_desc: SMART polling descriptor
+ */
+struct leapraid_adapter {
+ struct list_head list;
+ struct Scsi_Host *shost;
+ struct pci_dev *pdev;
+ struct leapraid_reg_base __iomem *iomem_base;
+ u32 rep_msg_host_idx;
+ bool mask_int;
+ u32 timestamp_sync_cnt;
+
+ struct leapraid_adapter_attr adapter_attr;
+ struct leapraid_mem_desc mem_desc;
+ struct leapraid_driver_cmds driver_cmds;
+ struct leapraid_dynamic_task_desc dynamic_task_desc;
+ struct leapraid_fw_evt_struct fw_evt_s;
+ struct leapraid_notification_desc notification_desc;
+ struct leapraid_reset_desc reset_desc;
+ struct leapraid_scan_dev_desc scan_dev_desc;
+ struct leapraid_access_ctrl access_ctrl;
+ struct leapraid_fw_log_desc fw_log_desc;
+ struct leapraid_dev_topo dev_topo;
+ struct leapraid_boot_devs boot_devs;
+ struct leapraid_smart_poll_desc smart_poll_desc;
+};
+
+union cfg_param_1 {
+ u32 form;
+ u32 size;
+ u32 phy_number;
+};
+
+union cfg_param_2 {
+ u32 handle;
+ u32 form_specific;
+};
+
+enum config_page_action {
+ GET_BIOS_PG2,
+ GET_BIOS_PG3,
+ GET_SAS_DEVICE_PG0,
+ GET_SAS_IOUNIT_PG0,
+ GET_SAS_IOUNIT_PG1,
+ GET_SAS_EXPANDER_PG0,
+ GET_SAS_EXPANDER_PG1,
+ GET_SAS_ENCLOSURE_PG0,
+ GET_PHY_PG0,
+ GET_RAID_VOLUME_PG0,
+ GET_RAID_VOLUME_PG1,
+ GET_PHY_DISK_PG0,
+};
+
+/**
+ * struct leapraid_enc_node - Enclosure node structure
+ * @list: List head for enclosure management
+ * @pg0: Enclosure page 0 data
+ */
+struct leapraid_enc_node {
+ struct list_head list;
+ struct leapraid_enc_p0 pg0;
+};
+
+/**
+ * struct leapraid_raid_volume - RAID volume structure
+ * @list: List head for volume management
+ * @starget: SCSI target structure
+ * @sdev: SCSI device structure
+ * @id: Volume ID
+ * @channel: SCSI channel
+ * @wwid: World Wide Identifier
+ * @hdl: Volume handle
+ * @vol_type: Volume type
+ * @pd_num: Number of physical disks
+ * @resp: Response status
+ * @dev_info: Device information
+ */
+struct leapraid_raid_volume {
+ struct list_head list;
+ struct scsi_target *starget;
+ struct scsi_device *sdev;
+ unsigned int id;
+ unsigned int channel;
+ u64 wwid;
+ u16 hdl;
+ u8 vol_type;
+ u8 pd_num;
+ u8 resp;
+ u32 dev_info;
+};
+
+#define LEAPRAID_TGT_FLG_RAID_MEMBER 0x01
+#define LEAPRAID_TGT_FLG_VOLUME 0x02
+#define LEAPRAID_NO_ULD_ATTACH 1
+/**
+ * struct leapraid_starget_priv - SCSI target private data
+ * @starget: SCSI target structure
+ * @sas_address: SAS address
+ * @hdl: Device handle
+ * @num_luns: Number of LUNs
+ * @flg: Flags
+ * @deleted: Deletion flag
+ * @tm_busy: Task management busy flag
+ * @card_port: Associated card port
+ * @sas_dev: SAS device structure
+ */
+struct leapraid_starget_priv {
+ struct scsi_target *starget;
+ u64 sas_address;
+ u16 hdl;
+ int num_luns;
+ u32 flg;
+ bool deleted;
+ bool tm_busy;
+ struct leapraid_card_port *card_port;
+ struct leapraid_sas_dev *sas_dev;
+};
+
+#define LEAPRAID_DEVICE_FLG_INIT 0x01
+/**
+ * struct leapraid_sdev_priv - SCSI device private data
+ * @starget_priv: Associated target private data
+ * @lun: Logical Unit Number
+ * @flg: Flags
+ * @ncq: NCQ support flag
+ * @block: Block flag
+ * @deleted: Deletion flag
+ * @sep: SEP flag
+ */
+struct leapraid_sdev_priv {
+ struct leapraid_starget_priv *starget_priv;
+ unsigned int lun;
+ u32 flg;
+ bool ncq;
+ bool block;
+ bool deleted;
+ bool sep;
+};
+
+/**
+ * struct leapraid_sas_dev - SAS device structure
+ * @list: List head for device management
+ * @starget: SCSI target structure
+ * @card_port: Associated card port
+ * @rphy: SAS remote PHY
+ * @refcnt: Reference count
+ * @id: Device ID
+ * @channel: SCSI channel
+ * @slot: Slot number
+ * @phy: PHY identifier
+ * @resp: Response status
+ * @led_on: LED state
+ * @sas_addr: SAS address
+ * @dev_name: Device name
+ * @hdl: Device handle
+ * @parent_sas_addr: Parent SAS address
+ * @enc_hdl: Enclosure handle
+ * @enc_lid: Enclosure logical ID
+ * @volume_hdl: Volume handle
+ * @volume_wwid: Volume WWID
+ * @dev_info: Device information
+ * @pend_sas_rphy_add: Pending SAS rphy addition flag
+ * @enc_level: Enclosure level
+ * @port_type: Port type
+ * @connector_name: Connector name
+ * @support_smart: SMART support flag
+ */
+struct leapraid_sas_dev {
+ struct list_head list;
+ struct scsi_target *starget;
+ struct leapraid_card_port *card_port;
+ struct sas_rphy *rphy;
+ struct kref refcnt;
+ unsigned int id;
+ unsigned int channel;
+ u16 slot;
+ u8 phy;
+ bool resp;
+ bool led_on;
+ u64 sas_addr;
+ u64 dev_name;
+ u16 hdl;
+ u64 parent_sas_addr;
+ u16 enc_hdl;
+ u64 enc_lid;
+ u16 volume_hdl;
+ u64 volume_wwid;
+ u32 dev_info;
+ u8 pend_sas_rphy_add;
+ u8 enc_level;
+ u8 port_type;
+ u8 connector_name[5];
+ bool support_smart;
+};
+
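+/*
+ * SAS device objects are reference counted: take references with
+ * leapraid_sdev_get() and drop them with leapraid_sdev_put(), which frees
+ * the object once the last reference is gone.
+ */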
+static inline void leapraid_sdev_free(struct kref *ref)
+{
+ kfree(container_of(ref, struct leapraid_sas_dev, refcnt));
+}
+
+#define leapraid_sdev_get(sdev) kref_get(&(sdev)->refcnt)
+#define leapraid_sdev_put(sdev) kref_put(&(sdev)->refcnt, leapraid_sdev_free)
+
+/**
+ * struct leapraid_sas_port - SAS port structure
+ * @port_list: List head for port management
+ * @phy_list: List of PHYs in this port
+ * @port: SAS port structure
+ * @card_port: Associated card port
+ * @remote_identify: Remote device identification
+ * @rphy: SAS remote PHY
+ * @phys_num: Number of PHYs in this port
+ */
+struct leapraid_sas_port {
+ struct list_head port_list;
+ struct list_head phy_list;
+ struct sas_port *port;
+ struct leapraid_card_port *card_port;
+ struct sas_identify remote_identify;
+ struct sas_rphy *rphy;
+ u8 phys_num;
+};
+
+#define LEAPRAID_VPHY_FLG_DIRTY 0x01
+/**
+ * struct leapraid_vphy - Virtual PHY structure
+ * @list: List head for PHY management
+ * @sas_address: SAS address
+ * @phy_mask: PHY mask
+ * @flg: Flags
+ */
+struct leapraid_vphy {
+ struct list_head list;
+ u64 sas_address;
+ u32 phy_mask;
+ u8 flg;
+};
+
+struct leapraid_tgt_rst_list {
+ struct list_head list;
+ u16 handle;
+ u16 state;
+};
+
+struct leapraid_sc_list {
+ struct list_head list;
+ u16 handle;
+};
+
+struct sense_info {
+ u8 sense_key;
+ u8 asc;
+ u8 ascq;
+};
+
+struct leapraid_fw_log_info {
+ u32 user_position;
+ u32 adapter_position;
+};
+
+/**
+ * enum reset_type - Reset type enumeration
+ * @FULL_RESET: Full hardware reset
+ * @PART_RESET: Partial reset
+ */
+enum reset_type {
+ FULL_RESET,
+ PART_RESET,
+};
+
+enum leapraid_card_port_checking_flg {
+ CARD_PORT_FURTHER_CHECKING_NEEDED = 0,
+ CARD_PORT_SKIP_CHECKING,
+};
+
+enum leapraid_port_checking_state {
+ NEW_CARD_PORT = 0,
+ SAME_PORT_WITH_NOTHING_CHANGED,
+ SAME_PORT_WITH_PARTIALLY_CHANGED_PHYS,
+ SAME_ADDR_WITH_PARTIALLY_CHANGED_PHYS,
+ SAME_ADDR_ONLY,
+};
+
+/**
+ * struct leapraid_card_port_feature - Card port feature
+ * @dirty_flg: Dirty flag indicator
+ * @same_addr: Same address flag
+ * @exact_phy: Exact PHY match flag
+ * @phy_overlap: PHY overlap bitmap
+ * @same_port: Same port flag
+ * @cur_chking_old_port: Current checking old port
+ * @expected_old_port: Expected old port
+ * @same_addr_port_count: Same address port count
+ * @checking_state: Port checking state
+ */
+struct leapraid_card_port_feature {
+ u8 dirty_flg;
+ bool same_addr;
+ bool exact_phy;
+ u32 phy_overlap;
+ bool same_port;
+ struct leapraid_card_port *cur_chking_old_port;
+ struct leapraid_card_port *expected_old_port;
+ int same_addr_port_count;
+ enum leapraid_port_checking_state checking_state;
+};
+
+#define SMP_REPORT_MANUFACTURER_INFORMATION_FRAME_TYPE 0x40
+#define SMP_REPORT_MANUFACTURER_INFORMATION_FUNC 0x01
+
+/*
+ * ref: SAS-2 (INCITS 457-2010) 10.4.3.5
+ */
+struct leapraid_rep_manu_request {
+ u8 smp_frame_type;
+ u8 function;
+ u8 allocated_response_length;
+ u8 request_length;
+};
+
+/*
+ * ref: SAS-2 (INCITS 457-2010) 10.4.3.5
+ */
+struct leapraid_rep_manu_reply {
+ u8 smp_frame_type;
+ u8 function;
+ u8 function_result;
+ u8 response_length;
+ u16 expander_change_count;
+ u8 r1[2];
+ u8 sas_format;
+ u8 r2[3];
+ u8 vendor_identification[SAS_EXPANDER_VENDOR_ID_LEN];
+ u8 product_identification[SAS_EXPANDER_PRODUCT_ID_LEN];
+ u8 product_revision_level[SAS_EXPANDER_PRODUCT_REV_LEN];
+ u8 component_vendor_identification[SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN];
+ u16 component_id;
+ u8 component_revision_level;
+ u8 r3;
+ u8 vendor_specific[8];
+};
+
+/**
+ * struct leapraid_scsi_cmd_desc - SCSI command descriptor
+ * @hdl: Device handle
+ * @lun: Logical Unit Number
+ * @raid_member: RAID member flag
+ * @dir: DMA data direction
+ * @data_length: Data transfer length
+ * @data_buffer: Data buffer pointer
+ * @cdb_length: CDB length
+ * @cdb: Command Descriptor Block
+ * @time_out: Timeout
+ */
+struct leapraid_scsi_cmd_desc {
+ u16 hdl;
+ u32 lun;
+ bool raid_member;
+ enum dma_data_direction dir;
+ u32 data_length;
+ void *data_buffer;
+ u8 cdb_length;
+ u8 cdb[32];
+ u8 time_out;
+};
+
+extern struct list_head leapraid_adapter_list;
+extern spinlock_t leapraid_adapter_lock;
+extern char driver_name[LEAPRAID_NAME_LENGTH];
+
+int leapraid_ctrl_init(struct leapraid_adapter *adapter);
+void leapraid_remove_ctrl(struct leapraid_adapter *adapter);
+void leapraid_check_scheduled_fault_start(struct leapraid_adapter *adapter);
+void leapraid_check_scheduled_fault_stop(struct leapraid_adapter *adapter);
+void leapraid_fw_log_start(struct leapraid_adapter *adapter);
+void leapraid_fw_log_stop(struct leapraid_adapter *adapter);
+int leapraid_set_pcie_and_notification(struct leapraid_adapter *adapter);
+void leapraid_disable_controller(struct leapraid_adapter *adapter);
+int leapraid_hard_reset_handler(struct leapraid_adapter *adapter,
+ enum reset_type type);
+void leapraid_mask_int(struct leapraid_adapter *adapter);
+void leapraid_unmask_int(struct leapraid_adapter *adapter);
+u32 leapraid_get_adapter_state(struct leapraid_adapter *adapter);
+bool leapraid_pci_removed(struct leapraid_adapter *adapter);
+int leapraid_check_adapter_is_op(struct leapraid_adapter *adapter);
+void *leapraid_get_task_desc(struct leapraid_adapter *adapter, u16 taskid);
+void *leapraid_get_sense_buffer(struct leapraid_adapter *adapter, u16 taskid);
+__le32 leapraid_get_sense_buffer_dma(struct leapraid_adapter *adapter,
+ u16 taskid);
+void *leapraid_get_reply_vaddr(struct leapraid_adapter *adapter,
+ u32 phys_addr);
+u16 leapraid_alloc_scsiio_taskid(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd);
+void leapraid_free_taskid(struct leapraid_adapter *adapter, u16 taskid);
+struct leapraid_io_req_tracker *leapraid_get_io_tracker_from_taskid(
+ struct leapraid_adapter *adapter, u16 taskid);
+struct leapraid_io_req_tracker *leapraid_get_scmd_priv(struct scsi_cmnd *scmd);
+struct scsi_cmnd *leapraid_get_scmd_from_taskid(
+ struct leapraid_adapter *adapter, u16 taskid);
+int leapraid_scan_dev(struct leapraid_adapter *adapter, bool async_scan_dev);
+void leapraid_scan_dev_done(struct leapraid_adapter *adapter);
+void leapraid_wait_cmds_done(struct leapraid_adapter *adapter);
+void leapraid_clean_active_scsi_cmds(struct leapraid_adapter *adapter);
+void leapraid_sync_irqs(struct leapraid_adapter *adapter, bool poll);
+int leapraid_rep_queue_handler(struct leapraid_rq *rq);
+void leapraid_mq_polling_pause(struct leapraid_adapter *adapter);
+void leapraid_mq_polling_resume(struct leapraid_adapter *adapter);
+void leapraid_set_tm_flg(struct leapraid_adapter *adapter, u16 handle);
+void leapraid_clear_tm_flg(struct leapraid_adapter *adapter, u16 handle);
+void leapraid_async_turn_on_led(struct leapraid_adapter *adapter, u16 handle);
+int leapraid_issue_locked_tm(struct leapraid_adapter *adapter, u16 handle,
+ uint channel, uint id, uint lun, u8 type,
+ u16 taskid_task, u8 tr_method);
+int leapraid_issue_tm(struct leapraid_adapter *adapter, u16 handle,
+ uint channel, uint id, uint lun, u8 type,
+ u16 taskid_task, u8 tr_method);
+u8 leapraid_scsiio_done(struct leapraid_adapter *adapter, u16 taskid,
+ u8 msix_index, u32 rep);
+int leapraid_get_volume_cap(struct leapraid_adapter *adapter,
+ struct leapraid_raid_volume *raid_volume);
+int leapraid_internal_init_cmd_priv(struct leapraid_adapter *adapter,
+ struct leapraid_io_req_tracker *io_tracker);
+int leapraid_internal_exit_cmd_priv(struct leapraid_adapter *adapter,
+ struct leapraid_io_req_tracker *io_tracker);
+void leapraid_clean_active_fw_evt(struct leapraid_adapter *adapter);
+bool leapraid_scmd_find_by_lun(struct leapraid_adapter *adapter,
+ uint id, unsigned int lun, uint channel);
+bool leapraid_scmd_find_by_tgt(struct leapraid_adapter *adapter,
+ uint id, uint channel);
+struct leapraid_vphy *leapraid_get_vphy_by_phy(struct leapraid_card_port *port,
+ u32 phy);
+struct leapraid_raid_volume *leapraid_raid_volume_find_by_id(
+ struct leapraid_adapter *adapter, uint id, uint channel);
+struct leapraid_raid_volume *leapraid_raid_volume_find_by_hdl(
+ struct leapraid_adapter *adapter, u16 handle);
+struct leapraid_topo_node *leapraid_exp_find_by_sas_address(
+ struct leapraid_adapter *adapter, u64 sas_address,
+ struct leapraid_card_port *port);
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_addr_and_rphy(
+ struct leapraid_adapter *adapter,
+ u64 sas_address, struct sas_rphy *rphy);
+struct leapraid_sas_dev *leapraid_get_sas_dev_by_addr(
+ struct leapraid_adapter *adapter, u64 sas_address,
+ struct leapraid_card_port *port);
+struct leapraid_sas_dev *leapraid_get_sas_dev_by_hdl(
+ struct leapraid_adapter *adapter, u16 handle);
+struct leapraid_sas_dev *leapraid_get_sas_dev_from_tgt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_starget_priv *tgt_priv);
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_from_tgt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_starget_priv *tgt_priv);
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_hdl(
+ struct leapraid_adapter *adapter, u16 handle);
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_addr(
+ struct leapraid_adapter *adapter, u64 sas_address,
+ struct leapraid_card_port *port);
+struct leapraid_sas_dev *leapraid_get_next_sas_dev_from_init_list(
+ struct leapraid_adapter *adapter);
+void leapraid_sas_dev_remove_by_sas_address(
+ struct leapraid_adapter *adapter,
+ u64 sas_address, struct leapraid_card_port *port);
+void leapraid_sas_dev_remove(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev);
+void leapraid_raid_volume_remove(struct leapraid_adapter *adapter,
+ struct leapraid_raid_volume *raid_volume);
+void leapraid_exp_rm(struct leapraid_adapter *adapter,
+ u64 sas_address, struct leapraid_card_port *port);
+void leapraid_build_mpi_sg(struct leapraid_adapter *adapter,
+ void *sge, dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size);
+void leapraid_build_ieee_nodata_sg(struct leapraid_adapter *adapter,
+ void *sge);
+void leapraid_build_ieee_sg(struct leapraid_adapter *adapter,
+ void *psge, dma_addr_t h2c_dma_addr,
+ size_t h2c_size, dma_addr_t c2h_dma_addr,
+ size_t c2h_size);
+int leapraid_build_scmd_ieee_sg(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd, u16 taskid);
+void leapraid_fire_scsi_io(struct leapraid_adapter *adapter,
+ u16 taskid, u16 handle);
+void leapraid_fire_hpr_task(struct leapraid_adapter *adapter, u16 taskid,
+ u16 msix_task);
+void leapraid_fire_task(struct leapraid_adapter *adapter, u16 taskid);
+int leapraid_cfg_get_volume_hdl(struct leapraid_adapter *adapter,
+ u16 pd_handle, u16 *volume_handle);
+int leapraid_cfg_get_volume_wwid(struct leapraid_adapter *adapter,
+ u16 volume_handle, u64 *wwid);
+int leapraid_op_config_page(struct leapraid_adapter *adapter,
+ void *cfgp, union cfg_param_1 cfgp1,
+ union cfg_param_2 cfgp2,
+ enum config_page_action cfg_op);
+void leapraid_adjust_sdev_queue_depth(struct scsi_device *sdev, int qdepth);
+
+int leapraid_ctl_release(struct inode *inode, struct file *filep);
+void leapraid_ctl_init(void);
+void leapraid_ctl_exit(void);
+
+extern struct sas_function_template leapraid_transport_functions;
+extern struct scsi_transport_template *leapraid_transport_template;
+struct leapraid_sas_port *leapraid_transport_port_add(
+ struct leapraid_adapter *adapter, u16 handle, u64 sas_address,
+ struct leapraid_card_port *card_port);
+void leapraid_transport_port_remove(struct leapraid_adapter *adapter,
+ u64 sas_address, u64 sas_address_parent,
+ struct leapraid_card_port *card_port);
+void leapraid_transport_add_card_phy(struct leapraid_adapter *adapter,
+ struct leapraid_card_phy *card_phy,
+ struct leapraid_sas_phy_p0 *phy_pg0,
+ struct device *parent_dev);
+int leapraid_transport_add_exp_phy(struct leapraid_adapter *adapter,
+ struct leapraid_card_phy *card_phy,
+ struct leapraid_exp_p1 *exp_pg1,
+ struct device *parent_dev);
+void leapraid_transport_update_links(struct leapraid_adapter *adapter,
+ u64 sas_address, u16 handle,
+ u8 phy_number, u8 link_rate,
+ struct leapraid_card_port *card_port);
+void leapraid_transport_detach_phy_to_port(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node,
+ struct leapraid_card_phy *card_phy);
+void leapraid_transport_attach_phy_to_port(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *sas_node,
+ struct leapraid_card_phy *card_phy,
+ u64 sas_address,
+ struct leapraid_card_port *card_port);
+int leapraid_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmd);
+void leapraid_smart_polling_start(struct leapraid_adapter *adapter);
+void leapraid_smart_polling_stop(struct leapraid_adapter *adapter);
+void leapraid_smart_fault_detect(struct leapraid_adapter *adapter, u16 hdl);
+void leapraid_free_internal_scsi_cmd(struct leapraid_adapter *adapter);
+
+#endif /* LEAPRAID_FUNC_H_INCLUDED */
diff --git a/drivers/scsi/leapraid/leapraid_os.c b/drivers/scsi/leapraid/leapraid_os.c
new file mode 100644
index 000000000000..44ec2615648f
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid_os.c
@@ -0,0 +1,2271 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+
+#include <linux/module.h>
+
+#include "leapraid_func.h"
+#include "leapraid.h"
+
+LIST_HEAD(leapraid_adapter_list);
+DEFINE_SPINLOCK(leapraid_adapter_lock);
+
+MODULE_AUTHOR(LEAPRAID_AUTHOR);
+MODULE_DESCRIPTION(LEAPRAID_DESCRIPTION);
+MODULE_LICENSE("GPL");
+MODULE_VERSION(LEAPRAID_DRIVER_VERSION);
+
+static int leapraid_ids;
+
+static int open_pcie_trace = 1;
+module_param(open_pcie_trace, int, 0644);
+MODULE_PARM_DESC(open_pcie_trace, "open_pcie_trace: default=1(open)/0(close)");
+
+static int enable_mp = 1;
+module_param(enable_mp, int, 0444);
+MODULE_PARM_DESC(enable_mp,
+ "enable multipath on target device. default=1(enable)");
+
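+/*
+ * Extract sense key / ASC / ASCQ from a sense buffer, handling both the
+ * fixed and the descriptor sense data formats.
+ */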
+static inline void leapraid_get_sense_data(char *sense,
+ struct sense_info *data)
+{
+ bool desc_format = (sense[0] & SCSI_SENSE_RESPONSE_CODE_MASK) >=
+ DESC_FORMAT_THRESHOLD;
+
+ if (desc_format) {
+ data->sense_key = sense[1] & SENSE_KEY_MASK;
+ data->asc = sense[2];
+ data->ascq = sense[3];
+ } else {
+ data->sense_key = sense[2] & SENSE_KEY_MASK;
+ data->asc = sense[12];
+ data->ascq = sense[13];
+ }
+}
+
+static struct Scsi_Host *pdev_to_shost(struct pci_dev *pdev)
+{
+ return pci_get_drvdata(pdev);
+}
+
+static struct leapraid_adapter *pdev_to_adapter(struct pci_dev *pdev)
+{
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+
+ if (!shost)
+ return NULL;
+
+ return shost_priv(shost);
+}
+
+struct leapraid_io_req_tracker *leapraid_get_scmd_priv(struct scsi_cmnd *scmd)
+{
+ return (struct leapraid_io_req_tracker *)scmd->host_scribble;
+}
+
+void leapraid_set_tm_flg(struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_device *sdev;
+ bool skip = false;
+
+ /* don't break out; let shost_for_each_device() release its device reference */
+ shost_for_each_device(sdev, adapter->shost) {
+ if (skip)
+ continue;
+
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->hdl == hdl) {
+ sdev_priv->starget_priv->tm_busy = true;
+ skip = true;
+ }
+ }
+}
+
+void leapraid_clear_tm_flg(struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_device *sdev;
+ bool skip = false;
+
+ /* don't break out; let shost_for_each_device() release its device reference */
+ shost_for_each_device(sdev, adapter->shost) {
+ if (skip)
+ continue;
+
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->hdl == hdl) {
+ sdev_priv->starget_priv->tm_busy = false;
+ skip = true;
+ }
+ }
+}
+
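+/*
+ * Map the outcome of a task management request onto SUCCESS/FAILED by
+ * checking whether the targeted commands (including the driver-internal
+ * SCSI IO and ctl commands) are still outstanding.
+ */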
+static int leapraid_tm_cmd_map_status(struct leapraid_adapter *adapter,
+ uint channel,
+ uint id,
+ uint lun,
+ u8 type,
+ u16 taskid_task)
+{
+ int rc = FAILED;
+
+ if (taskid_task <= adapter->shost->can_queue) {
+ switch (type) {
+ case LEAPRAID_TM_TASKTYPE_ABRT_TASK_SET:
+ case LEAPRAID_TM_TASKTYPE_LOGICAL_UNIT_RESET:
+ if (!leapraid_scmd_find_by_lun(adapter, id, lun,
+ channel))
+ rc = SUCCESS;
+ break;
+ case LEAPRAID_TM_TASKTYPE_TARGET_RESET:
+ if (!leapraid_scmd_find_by_tgt(adapter, id, channel))
+ rc = SUCCESS;
+ break;
+ default:
+ rc = SUCCESS;
+ }
+ }
+
+ if (taskid_task == adapter->driver_cmds.driver_scsiio_cmd.taskid) {
+ if ((adapter->driver_cmds.driver_scsiio_cmd.status &
+ LEAPRAID_CMD_DONE) ||
+ (adapter->driver_cmds.driver_scsiio_cmd.status &
+ LEAPRAID_CMD_NOT_USED))
+ rc = SUCCESS;
+ }
+
+ if (taskid_task == adapter->driver_cmds.ctl_cmd.hp_taskid) {
+ if ((adapter->driver_cmds.ctl_cmd.status &
+ LEAPRAID_CMD_DONE) ||
+ (adapter->driver_cmds.ctl_cmd.status &
+ LEAPRAID_CMD_NOT_USED))
+ rc = SUCCESS;
+ }
+
+ return rc;
+}
+
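+/*
+ * If commands still look outstanding after a task management request,
+ * flush the reply queues (mask interrupts, poll the pending replies,
+ * unmask) and re-check the command map before reporting a result.
+ */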
+static int leapraid_tm_post_processing(struct leapraid_adapter *adapter,
+ u16 hdl, uint channel, uint id,
+ uint lun, u8 type, u16 taskid_task)
+{
+ int rc;
+
+ rc = leapraid_tm_cmd_map_status(adapter, channel, id, lun,
+ type, taskid_task);
+ if (rc == SUCCESS)
+ return rc;
+
+ leapraid_mask_int(adapter);
+ leapraid_sync_irqs(adapter, true);
+ leapraid_unmask_int(adapter);
+
+ rc = leapraid_tm_cmd_map_status(adapter, channel, id, lun, type,
+ taskid_task);
+ return rc;
+}
+
+static void leapraid_build_tm_req(struct leapraid_scsi_tm_req *scsi_tm_req,
+ u16 hdl, uint lun, u8 type, u8 tr_method,
+ u16 target_taskid)
+{
+ memset(scsi_tm_req, 0, sizeof(*scsi_tm_req));
+ scsi_tm_req->func = LEAPRAID_FUNC_SCSI_TMF;
+ scsi_tm_req->dev_hdl = cpu_to_le16(hdl);
+ scsi_tm_req->task_type = type;
+ scsi_tm_req->msg_flg = tr_method;
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK ||
+ type == LEAPRAID_TM_TASKTYPE_QUERY_TASK)
+ scsi_tm_req->task_mid = cpu_to_le16(target_taskid);
+ int_to_scsilun(lun, (struct scsi_lun *)scsi_tm_req->lun);
+}
+
+int leapraid_issue_tm(struct leapraid_adapter *adapter, u16 hdl, uint channel,
+ uint id, uint lun, u8 type,
+ u16 target_taskid, u8 tr_method)
+{
+ struct leapraid_scsi_tm_req *scsi_tm_req;
+ struct leapraid_scsiio_req *scsiio_req;
+ struct leapraid_io_req_tracker *io_req_tracker = NULL;
+ u16 msix_task = 0;
+ bool issue_reset = false;
+ u32 db;
+ int rc;
+
+ lockdep_assert_held(&adapter->driver_cmds.tm_cmd.mutex);
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering) {
+ dev_info(&adapter->pdev->dev,
+ "%s %s: host is recovering, skip tm command!\n",
+ __func__, adapter->adapter_attr.name);
+ return FAILED;
+ }
+
+ db = leapraid_readl(&adapter->iomem_base->db);
+ if (db & LEAPRAID_DB_USED) {
+ dev_info(&adapter->pdev->dev,
+ "%s unexpected db status, issuing hard reset!\n",
+ adapter->adapter_attr.name);
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ rc = leapraid_hard_reset_handler(adapter, FULL_RESET);
+ return (!rc) ? SUCCESS : FAILED;
+ }
+
+ if ((db & LEAPRAID_DB_MASK) == LEAPRAID_DB_FAULT) {
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ rc = leapraid_hard_reset_handler(adapter, FULL_RESET);
+ return (!rc) ? SUCCESS : FAILED;
+ }
+
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK)
+ io_req_tracker = leapraid_get_io_tracker_from_taskid(adapter,
+ target_taskid);
+
+ adapter->driver_cmds.tm_cmd.status = LEAPRAID_CMD_PENDING;
+ scsi_tm_req =
+ leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.tm_cmd.hp_taskid);
+ leapraid_build_tm_req(scsi_tm_req, hdl, lun, type, tr_method,
+ target_taskid);
+ memset((void *)(&adapter->driver_cmds.tm_cmd.reply), 0,
+ sizeof(struct leapraid_scsi_tm_rep));
+ leapraid_set_tm_flg(adapter, hdl);
+ init_completion(&adapter->driver_cmds.tm_cmd.done);
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK &&
+ io_req_tracker &&
+ io_req_tracker->msix_io < adapter->adapter_attr.rq_cnt)
+ msix_task = io_req_tracker->msix_io;
+ else
+ msix_task = 0;
+ leapraid_fire_hpr_task(adapter,
+ adapter->driver_cmds.tm_cmd.hp_taskid,
+ msix_task);
+ wait_for_completion_timeout(&adapter->driver_cmds.tm_cmd.done,
+ LEAPRAID_TM_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.tm_cmd.status & LEAPRAID_CMD_DONE)) {
+ issue_reset =
+ leapraid_check_reset(
+ adapter->driver_cmds.tm_cmd.status);
+ if (issue_reset) {
+ dev_info(&adapter->pdev->dev,
+ "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ rc = leapraid_hard_reset_handler(adapter, FULL_RESET);
+ rc = (!rc) ? SUCCESS : FAILED;
+ goto out;
+ }
+ }
+
+ leapraid_sync_irqs(adapter, false);
+
+ switch (type) {
+ case LEAPRAID_TM_TASKTYPE_TARGET_RESET:
+ case LEAPRAID_TM_TASKTYPE_ABRT_TASK_SET:
+ case LEAPRAID_TM_TASKTYPE_LOGICAL_UNIT_RESET:
+ rc = leapraid_tm_post_processing(adapter, hdl, channel, id, lun,
+ type, target_taskid);
+ break;
+ case LEAPRAID_TM_TASKTYPE_ABORT_TASK:
+ rc = SUCCESS;
+ scsiio_req = leapraid_get_task_desc(adapter, target_taskid);
+ if (le16_to_cpu(scsiio_req->dev_hdl) != hdl)
+ break;
+ dev_err(&adapter->pdev->dev, "%s abort failed, hdl=0x%04x\n",
+ adapter->adapter_attr.name, hdl);
+ rc = FAILED;
+ break;
+ case LEAPRAID_TM_TASKTYPE_QUERY_TASK:
+ rc = SUCCESS;
+ break;
+ default:
+ rc = FAILED;
+ break;
+ }
+
+out:
+ leapraid_clear_tm_flg(adapter, hdl);
+ adapter->driver_cmds.tm_cmd.status = LEAPRAID_CMD_NOT_USED;
+ return rc;
+}
+
+int leapraid_issue_locked_tm(struct leapraid_adapter *adapter, u16 hdl,
+ uint channel, uint id, uint lun, u8 type,
+ u16 target_taskid, u8 tr_method)
+{
+ int rc;
+
+ mutex_lock(&adapter->driver_cmds.tm_cmd.mutex);
+ rc = leapraid_issue_tm(adapter, hdl, channel, id, lun, type,
+ target_taskid, tr_method);
+ mutex_unlock(&adapter->driver_cmds.tm_cmd.mutex);
+
+ return rc;
+}
+
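+/*
+ * Called when a device reports a SMART/failure-prediction event: turn on
+ * its LED asynchronously unless the device is a RAID member or a volume.
+ */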
+void leapraid_smart_fault_detect(struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_target *starget;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (!sas_dev) {
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ goto out;
+ }
+
+ starget = sas_dev->starget;
+ starget_priv = starget->hostdata;
+ if ((starget_priv->flg & LEAPRAID_TGT_FLG_RAID_MEMBER) ||
+ (starget_priv->flg & LEAPRAID_TGT_FLG_VOLUME)) {
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ goto out;
+ }
+
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ leapraid_async_turn_on_led(adapter, hdl);
+out:
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+}
+
+static void leapraid_process_sense_data(struct leapraid_adapter *adapter,
+ struct leapraid_scsiio_rep *scsiio_rep,
+ struct scsi_cmnd *scmd, u16 taskid)
+{
+ struct sense_info data;
+ const void *sense_data;
+ u32 sz;
+
+ if (!(scsiio_rep->scsi_state & LEAPRAID_SCSI_STATE_AUTOSENSE_VALID))
+ return;
+
+ sense_data = leapraid_get_sense_buffer(adapter, taskid);
+ sz = min_t(u32, SCSI_SENSE_BUFFERSIZE,
+ le32_to_cpu(scsiio_rep->sense_count));
+
+ memcpy(scmd->sense_buffer, sense_data, sz);
+ leapraid_get_sense_data(scmd->sense_buffer, &data);
+ if (data.asc == ASC_FAILURE_PREDICTION_THRESHOLD_EXCEEDED)
+ leapraid_smart_fault_detect(adapter,
+ le16_to_cpu(scsiio_rep->dev_hdl));
+}
+
+static void leapraid_handle_data_underrun(
+ struct leapraid_scsiio_rep *scsiio_rep,
+ struct scsi_cmnd *scmd, u32 xfer_cnt)
+{
+ u8 scsi_status = scsiio_rep->scsi_status;
+ u8 scsi_state = scsiio_rep->scsi_state;
+
+ scmd->result = (DID_OK << LEAPRAID_SCSI_HOST_SHIFT) | scsi_status;
+
+ if (scsi_state & LEAPRAID_SCSI_STATE_AUTOSENSE_VALID)
+ return;
+
+ if (xfer_cnt < scmd->underflow) {
+ if (scsi_status == SAM_STAT_BUSY)
+ scmd->result = SAM_STAT_BUSY;
+ else
+ scmd->result = DID_SOFT_ERROR <<
+ LEAPRAID_SCSI_HOST_SHIFT;
+ } else if (scsi_state & (LEAPRAID_SCSI_STATE_AUTOSENSE_FAILED |
+ LEAPRAID_SCSI_STATE_NO_SCSI_STATUS)) {
+ scmd->result = DID_SOFT_ERROR << LEAPRAID_SCSI_HOST_SHIFT;
+ } else if (scsi_state & LEAPRAID_SCSI_STATE_TERMINATED) {
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ } else if (!xfer_cnt && scmd->cmnd[0] == REPORT_LUNS) {
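+ /*
+ * A zero-length REPORT LUNS response is reported as a
+ * CHECK CONDITION with ILLEGAL REQUEST sense rather than
+ * as a successful empty transfer.
+ */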
+ scsiio_rep->scsi_state = LEAPRAID_SCSI_STATE_AUTOSENSE_VALID;
+ scsiio_rep->scsi_status = SAM_STAT_CHECK_CONDITION;
+ scsi_build_sense_buffer(0, scmd->sense_buffer, ILLEGAL_REQUEST,
+ LEAPRAID_SCSI_ASC_INVALID_CMD_CODE,
+ LEAPRAID_SCSI_ASCQ_DEFAULT);
+ scmd->result = (DRIVER_SENSE << LEAPRAID_SCSI_DRIVER_SHIFT) |
+ (DID_OK << LEAPRAID_SCSI_HOST_SHIFT) |
+ SAM_STAT_CHECK_CONDITION;
+ }
+}
+
+static void leapraid_handle_success_status(
+ struct leapraid_scsiio_rep *scsiio_rep,
+ struct scsi_cmnd *scmd,
+ u32 response_code)
+{
+ u8 scsi_status = scsiio_rep->scsi_status;
+ u8 scsi_state = scsiio_rep->scsi_state;
+
+ scmd->result = (DID_OK << LEAPRAID_SCSI_HOST_SHIFT) | scsi_status;
+
+ if (response_code == LEAPRAID_TM_RSP_INVALID_FRAME ||
+ (scsi_state & (LEAPRAID_SCSI_STATE_AUTOSENSE_FAILED |
+ LEAPRAID_SCSI_STATE_NO_SCSI_STATUS)))
+ scmd->result = DID_SOFT_ERROR << LEAPRAID_SCSI_HOST_SHIFT;
+ else if (scsi_state & LEAPRAID_SCSI_STATE_TERMINATED)
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+}
+
+static void leapraid_scsiio_done_dispatch(struct leapraid_adapter *adapter,
+ struct leapraid_scsiio_rep *scsiio_rep,
+ struct leapraid_sdev_priv *sdev_priv,
+ struct scsi_cmnd *scmd,
+ u16 taskid, u32 response_code)
+{
+ u8 scsi_status = scsiio_rep->scsi_status;
+ u8 scsi_state = scsiio_rep->scsi_state;
+ u16 adapter_status;
+ u32 xfer_cnt;
+ u32 sz;
+
+ adapter_status = le16_to_cpu(scsiio_rep->adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+
+ xfer_cnt = le32_to_cpu(scsiio_rep->transfer_count);
+ scsi_set_resid(scmd, scsi_bufflen(scmd) - xfer_cnt);
+
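+ /*
+ * An underrun with nothing transferred and a BUSY, RESERVATION
+ * CONFLICT or TASK SET FULL SCSI status is treated as a plain
+ * status-only completion.
+ */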
+ if (adapter_status == LEAPRAID_ADAPTER_STATUS_SCSI_DATA_UNDERRUN &&
+ xfer_cnt == 0 &&
+ (scsi_status == LEAPRAID_SCSI_STATUS_BUSY ||
+ scsi_status == LEAPRAID_SCSI_STATUS_RESERVATION_CONFLICT ||
+ scsi_status == LEAPRAID_SCSI_STATUS_TASK_SET_FULL)) {
+ adapter_status = LEAPRAID_ADAPTER_STATUS_SUCCESS;
+ }
+
+ switch (adapter_status) {
+ case LEAPRAID_ADAPTER_STATUS_SCSI_DEVICE_NOT_THERE:
+ scmd->result = DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_BUSY:
+ case LEAPRAID_ADAPTER_STATUS_INSUFFICIENT_RESOURCES:
+ scmd->result = SAM_STAT_BUSY;
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_RESIDUAL_MISMATCH:
+ if (xfer_cnt == 0 || scmd->underflow > xfer_cnt)
+ scmd->result = DID_SOFT_ERROR <<
+ LEAPRAID_SCSI_HOST_SHIFT;
+ else
+ scmd->result = (DID_OK << LEAPRAID_SCSI_HOST_SHIFT) |
+ scsi_status;
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_ADAPTER_TERMINATED:
+ if (sdev_priv->block) {
+ scmd->result = DID_TRANSPORT_DISRUPTED <<
+ LEAPRAID_SCSI_HOST_SHIFT;
+ return;
+ }
+
+ if (scmd->device->channel == RAID_CHANNEL &&
+ scsi_state == (LEAPRAID_SCSI_STATE_TERMINATED |
+ LEAPRAID_SCSI_STATE_NO_SCSI_STATUS)) {
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ break;
+ }
+
+ scmd->result = DID_SOFT_ERROR << LEAPRAID_SCSI_HOST_SHIFT;
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_TASK_TERMINATED:
+ case LEAPRAID_ADAPTER_STATUS_SCSI_EXT_TERMINATED:
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_DATA_UNDERRUN:
+ leapraid_handle_data_underrun(scsiio_rep, scmd, xfer_cnt);
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_DATA_OVERRUN:
+ scsi_set_resid(scmd, 0);
+ leapraid_handle_success_status(scsiio_rep, scmd,
+ response_code);
+ break;
+ case LEAPRAID_ADAPTER_STATUS_SCSI_RECOVERED_ERROR:
+ case LEAPRAID_ADAPTER_STATUS_SUCCESS:
+ leapraid_handle_success_status(scsiio_rep, scmd,
+ response_code);
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_PROTOCOL_ERROR:
+ case LEAPRAID_ADAPTER_STATUS_INTERNAL_ERROR:
+ case LEAPRAID_ADAPTER_STATUS_SCSI_IO_DATA_ERROR:
+ case LEAPRAID_ADAPTER_STATUS_SCSI_TASK_MGMT_FAILED:
+ default:
+ scmd->result = DID_SOFT_ERROR << LEAPRAID_SCSI_HOST_SHIFT;
+ break;
+ }
+
+ if (!scmd->result)
+ return;
+
+ scsi_print_command(scmd);
+ dev_warn(&adapter->pdev->dev,
+ "scsiio warn: hdl=0x%x, status are: 0x%x, 0x%x, 0x%x\n",
+ le16_to_cpu(scsiio_rep->dev_hdl), adapter_status,
+ scsi_status, scsi_state);
+
+ if (scsi_state & LEAPRAID_SCSI_STATE_AUTOSENSE_VALID) {
+ struct scsi_sense_hdr sshdr;
+
+ sz = min_t(u32, SCSI_SENSE_BUFFERSIZE,
+ le32_to_cpu(scsiio_rep->sense_count));
+ if (scsi_normalize_sense(scmd->sense_buffer, sz,
+ &sshdr)) {
+ dev_warn(&adapter->pdev->dev,
+ "sense: key=0x%x asc=0x%x ascq=0x%x\n",
+ sshdr.sense_key, sshdr.asc,
+ sshdr.ascq);
+ } else {
+ dev_warn(&adapter->pdev->dev,
+ "sense: invalid sense data\n");
+ }
+ }
+}
+
+u8 leapraid_scsiio_done(struct leapraid_adapter *adapter, u16 taskid,
+ u8 msix_index, u32 rep)
+{
+ struct leapraid_scsiio_rep *scsiio_rep = NULL;
+ struct leapraid_sdev_priv *sdev_priv = NULL;
+ struct scsi_cmnd *scmd = NULL;
+ u32 response_code = 0;
+
+ if (likely(taskid != adapter->driver_cmds.driver_scsiio_cmd.taskid))
+ scmd = leapraid_get_scmd_from_taskid(adapter, taskid);
+ else
+ scmd = adapter->driver_cmds.internal_scmd;
+ if (!scmd)
+ return 1;
+
+ scsiio_rep = leapraid_get_reply_vaddr(adapter, rep);
+ if (!scsiio_rep) {
+ scmd->result = DID_OK << LEAPRAID_SCSI_HOST_SHIFT;
+ goto out;
+ }
+
+ sdev_priv = scmd->device->hostdata;
+ if (!sdev_priv ||
+ !sdev_priv->starget_priv ||
+ sdev_priv->starget_priv->deleted) {
+ scmd->result = DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+ goto out;
+ }
+
+ if (scsiio_rep->scsi_state & LEAPRAID_SCSI_STATE_RESPONSE_INFO_VALID)
+ response_code = le32_to_cpu(scsiio_rep->resp_info) & 0xFF;
+
+ leapraid_process_sense_data(adapter, scsiio_rep, scmd, taskid);
+ leapraid_scsiio_done_dispatch(adapter, scsiio_rep, sdev_priv, scmd,
+ taskid, response_code);
+
+out:
+ scsi_dma_unmap(scmd);
+ if (unlikely(taskid == adapter->driver_cmds.driver_scsiio_cmd.taskid)) {
+ adapter->driver_cmds.driver_scsiio_cmd.status =
+ LEAPRAID_CMD_DONE;
+ complete(&adapter->driver_cmds.driver_scsiio_cmd.done);
+ return 0;
+ }
+ leapraid_free_taskid(adapter, taskid);
+ scmd->scsi_done(scmd);
+ return 0;
+}
+
+static void leapraid_probe_raid(struct leapraid_adapter *adapter)
+{
+ struct leapraid_raid_volume *raid_volume, *raid_volume_next;
+ int rc;
+
+ list_for_each_entry_safe(raid_volume, raid_volume_next,
+ &adapter->dev_topo.raid_volume_list, list) {
+ if (raid_volume->starget)
+ continue;
+
+ rc = scsi_add_device(adapter->shost, RAID_CHANNEL,
+ raid_volume->id, 0);
+ if (rc)
+ leapraid_raid_volume_remove(adapter, raid_volume);
+ }
+}
+
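+/*
+ * Move a SAS device onto the adapter's active device list, dropping the
+ * reference held by whichever list it was previously linked on.
+ */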
+static void leapraid_sas_dev_make_active(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ if (!list_empty(&sas_dev->list)) {
+ list_del_init(&sas_dev->list);
+ leapraid_sdev_put(sas_dev);
+ }
+
+ leapraid_sdev_get(sas_dev);
+ list_add_tail(&sas_dev->list, &adapter->dev_topo.sas_dev_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+}
+
+static void leapraid_probe_sas(struct leapraid_adapter *adapter)
+{
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_sas_port *sas_port;
+
+ for (;;) {
+ sas_dev = leapraid_get_next_sas_dev_from_init_list(adapter);
+ if (!sas_dev)
+ break;
+
+ sas_port = leapraid_transport_port_add(adapter,
+ sas_dev->hdl,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+
+ if (!sas_port)
+ goto remove_dev;
+
+ if (!sas_dev->starget &&
+ !adapter->scan_dev_desc.driver_loading) {
+ leapraid_transport_port_remove(adapter,
+ sas_dev->sas_addr,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+ goto remove_dev;
+ }
+
+ leapraid_sas_dev_make_active(adapter, sas_dev);
+ leapraid_sdev_put(sas_dev);
+ continue;
+
+remove_dev:
+ leapraid_sas_dev_remove(adapter, sas_dev);
+ leapraid_sdev_put(sas_dev);
+ }
+}
+
+static bool leapraid_get_boot_dev(struct leapraid_boot_dev *boot_dev,
+ void **pdev, u32 *pchnl)
+{
+ if (boot_dev->dev) {
+ *pdev = boot_dev->dev;
+ *pchnl = boot_dev->chnl;
+ return true;
+ }
+ return false;
+}
+
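+/*
+ * Probe the boot device, preferring the requested boot device, then the
+ * requested alternate, then the current boot device.
+ */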
+static void leapraid_probe_boot_dev(struct leapraid_adapter *adapter)
+{
+ void *dev = NULL;
+ u32 chnl;
+
+ if (leapraid_get_boot_dev(&adapter->boot_devs.requested_boot_dev, &dev,
+ &chnl))
+ goto boot_dev_found;
+
+ if (leapraid_get_boot_dev(&adapter->boot_devs.requested_alt_boot_dev,
+ &dev, &chnl))
+ goto boot_dev_found;
+
+ if (leapraid_get_boot_dev(&adapter->boot_devs.current_boot_dev, &dev,
+ &chnl))
+ goto boot_dev_found;
+
+ return;
+
+boot_dev_found:
+ switch (chnl) {
+ case RAID_CHANNEL:
+ {
+ struct leapraid_raid_volume *raid_volume =
+ (struct leapraid_raid_volume *)dev;
+
+ if (raid_volume->starget)
+ return;
+
+ /* TODO eedp */
+
+ if (scsi_add_device(adapter->shost, RAID_CHANNEL,
+ raid_volume->id, 0))
+ leapraid_raid_volume_remove(adapter, raid_volume);
+ break;
+ }
+ default:
+ {
+ struct leapraid_sas_dev *sas_dev =
+ (struct leapraid_sas_dev *)dev;
+ struct leapraid_sas_port *sas_port;
+ unsigned long flags;
+
+ if (sas_dev->starget)
+ return;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ list_move_tail(&sas_dev->list,
+ &adapter->dev_topo.sas_dev_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (!sas_dev->card_port)
+ return;
+
+ sas_port = leapraid_transport_port_add(adapter, sas_dev->hdl,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+ if (!sas_port)
+ leapraid_sas_dev_remove(adapter, sas_dev);
+ break;
+ }
+ }
+}
+
+static void leapraid_probe_devices(struct leapraid_adapter *adapter)
+{
+ leapraid_probe_boot_dev(adapter);
+
+ if (adapter->adapter_attr.raid_support) {
+ leapraid_probe_raid(adapter);
+ leapraid_probe_sas(adapter);
+ } else {
+ leapraid_probe_sas(adapter);
+ }
+}
+
+void leapraid_scan_dev_done(struct leapraid_adapter *adapter)
+{
+ if (adapter->scan_dev_desc.wait_scan_dev_done) {
+ adapter->scan_dev_desc.wait_scan_dev_done = false;
+ leapraid_probe_devices(adapter);
+ }
+
+ leapraid_check_scheduled_fault_start(adapter);
+ leapraid_fw_log_start(adapter);
+ adapter->scan_dev_desc.driver_loading = false;
+ leapraid_smart_polling_start(adapter);
+}
+
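+/*
+ * Notify the firmware of an impending shutdown by issuing the
+ * SYSTEM_SHUTDOWN_INITIATED RAID action; only relevant when RAID volumes
+ * are configured.
+ */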
+static void leapraid_ir_shutdown(struct leapraid_adapter *adapter)
+{
+ struct leapraid_raid_act_req *raid_act_req;
+ struct leapraid_raid_act_rep *raid_act_rep;
+ struct leapraid_driver_cmd *raid_action_cmd;
+
+ if (!adapter || !adapter->adapter_attr.raid_support)
+ return;
+
+ if (list_empty(&adapter->dev_topo.raid_volume_list))
+ return;
+
+ if (leapraid_pci_removed(adapter))
+ return;
+
+ raid_action_cmd = &adapter->driver_cmds.raid_action_cmd;
+
+ mutex_lock(&raid_action_cmd->mutex);
+ raid_action_cmd->status = LEAPRAID_CMD_PENDING;
+
+ raid_act_req = leapraid_get_task_desc(adapter,
+ raid_action_cmd->inter_taskid);
+ memset(raid_act_req, 0, sizeof(struct leapraid_raid_act_req));
+ raid_act_req->func = LEAPRAID_FUNC_RAID_ACTION;
+ raid_act_req->act = LEAPRAID_RAID_ACT_SYSTEM_SHUTDOWN_INITIATED;
+
+ dev_info(&adapter->pdev->dev, "ir shutdown start\n");
+ init_completion(&raid_action_cmd->done);
+ leapraid_fire_task(adapter, raid_action_cmd->inter_taskid);
+ wait_for_completion_timeout(&raid_action_cmd->done,
+ LEAPRAID_RAID_ACTION_CMD_TIMEOUT * HZ);
+
+ if (!(raid_action_cmd->status & LEAPRAID_CMD_DONE)) {
+ dev_err(&adapter->pdev->dev,
+ "%s: timeout waiting for ir shutdown\n", __func__);
+ goto out;
+ }
+
+ if (raid_action_cmd->status & LEAPRAID_CMD_REPLY_VALID) {
+ raid_act_rep = (void *)(&raid_action_cmd->reply);
+ dev_info(&adapter->pdev->dev,
+ "ir shutdown done, adapter status=0x%04x\n",
+ le16_to_cpu(raid_act_rep->adapter_status));
+ }
+
+out:
+ raid_action_cmd->status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&raid_action_cmd->mutex);
+}
+
+static const struct pci_device_id leapraid_pci_table[] = {
+ { PCI_DEVICE(LEAPRAID_VENDOR_ID, LEAPRAID_DEVID_HBA) },
+ { PCI_DEVICE(LEAPRAID_VENDOR_ID, LEAPRAID_DEVID_RAID) },
+ { 0, }
+};
+
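+/*
+ * Decide whether a SCSI command may still be issued in the current adapter
+ * state; while the host is being removed only SYNCHRONIZE CACHE and
+ * START STOP UNIT commands are allowed through.
+ */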
+static inline bool leapraid_is_scmd_permitted(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd)
+{
+ u8 opcode;
+
+ if (adapter->access_ctrl.pcie_recovering ||
+ adapter->access_ctrl.adapter_thermal_alert)
+ return false;
+
+ if (adapter->access_ctrl.host_removing) {
+ if (leapraid_pci_removed(adapter))
+ return false;
+
+ opcode = scmd->cmnd[0];
+ if (opcode == SYNCHRONIZE_CACHE || opcode == START_STOP)
+ return true;
+ else
+ return false;
+ }
+ return true;
+}
+
+static bool leapraid_should_queuecommand(struct leapraid_adapter *adapter,
+ struct leapraid_sdev_priv *sdev_priv,
+ struct scsi_cmnd *scmd, int *rc)
+{
+ struct leapraid_starget_priv *starget_priv;
+
+ if (!sdev_priv || !sdev_priv->starget_priv)
+ goto no_connect;
+
+ if (!leapraid_is_scmd_permitted(adapter, scmd))
+ goto no_connect;
+
+ starget_priv = sdev_priv->starget_priv;
+ if (starget_priv->hdl == LEAPRAID_INVALID_DEV_HANDLE)
+ goto no_connect;
+
+ if (sdev_priv->block &&
+ scmd->device->host->shost_state == SHOST_RECOVERY &&
+ scmd->cmnd[0] == TEST_UNIT_READY) {
+ scsi_build_sense_buffer(0, scmd->sense_buffer, UNIT_ATTENTION,
+ LEAPRAID_SCSI_ASC_POWER_ON_RESET,
+ LEAPRAID_SCSI_ASCQ_POWER_ON_RESET);
+ scmd->result = (DRIVER_SENSE << LEAPRAID_SCSI_DRIVER_SHIFT) |
+ (DID_OK << LEAPRAID_SCSI_HOST_SHIFT) |
+ SAM_STAT_CHECK_CONDITION;
+ goto done_out;
+ }
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->reset_desc.adapter_link_resetting) {
+ *rc = SCSI_MLQUEUE_HOST_BUSY;
+ goto out;
+ } else if (starget_priv->deleted || sdev_priv->deleted) {
+ goto no_connect;
+ } else if (starget_priv->tm_busy || sdev_priv->block) {
+ *rc = SCSI_MLQUEUE_DEVICE_BUSY;
+ goto out;
+ }
+
+ return true;
+
+no_connect:
+ scmd->result = DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+done_out:
+ if (likely(scmd != adapter->driver_cmds.internal_scmd))
+ scmd->scsi_done(scmd);
+out:
+ return false;
+}
+
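+/*
+ * Build the SCSIIO control word from the command's data direction, queueing
+ * attributes, I/O priority and CDB length.
+ */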
+static u32 build_scsiio_req_control(struct scsi_cmnd *scmd,
+ struct leapraid_sdev_priv *sdev_priv)
+{
+ u32 control;
+
+ switch (scmd->sc_data_direction) {
+ case DMA_FROM_DEVICE:
+ control = LEAPRAID_SCSIIO_CTRL_READ;
+ break;
+ case DMA_TO_DEVICE:
+ control = LEAPRAID_SCSIIO_CTRL_WRITE;
+ break;
+ default:
+ control = LEAPRAID_SCSIIO_CTRL_NODATATRANSFER;
+ break;
+ }
+
+ control |= LEAPRAID_SCSIIO_CTRL_SIMPLEQ;
+
+ if (sdev_priv->ncq &&
+ (IOPRIO_PRIO_CLASS(req_get_ioprio(scmd->request)) ==
+ IOPRIO_CLASS_RT))
+ control |= LEAPRAID_SCSIIO_CTRL_CMDPRI;
+ if (scmd->cmd_len == 32)
+ control |= 4 << LEAPRAID_SCSIIO_CTRL_ADDCDBLEN_SHIFT;
+
+ return control;
+}
+
+int leapraid_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+{
+ struct leapraid_adapter *adapter = shost_priv(scmd->device->host);
+ struct leapraid_sdev_priv *sdev_priv = scmd->device->hostdata;
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_scsiio_req *scsiio_req;
+ u32 control;
+ u16 taskid;
+ u16 hdl;
+ int rc = 0;
+
+ if (!leapraid_should_queuecommand(adapter, sdev_priv, scmd, &rc))
+ goto out;
+
+ starget_priv = sdev_priv->starget_priv;
+ hdl = starget_priv->hdl;
+ control = build_scsiio_req_control(scmd, sdev_priv);
+
+ if (unlikely(scmd == adapter->driver_cmds.internal_scmd))
+ taskid = adapter->driver_cmds.driver_scsiio_cmd.taskid;
+ else
+ taskid = leapraid_alloc_scsiio_taskid(adapter, scmd);
+ scsiio_req = leapraid_get_task_desc(adapter, taskid);
+
+ /* RAID members are reached through the RAID passthrough function */
+ if (sdev_priv->starget_priv->flg & LEAPRAID_TGT_FLG_RAID_MEMBER)
+ scsiio_req->func = LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH;
+ else
+ scsiio_req->func = LEAPRAID_FUNC_SCSIIO_REQ;
+
+ scsiio_req->dev_hdl = cpu_to_le16(hdl);
+ scsiio_req->data_len = cpu_to_le32(scsi_bufflen(scmd));
+ scsiio_req->ctrl = cpu_to_le32(control);
+ scsiio_req->io_flg = cpu_to_le16(scmd->cmd_len);
+ scsiio_req->msg_flg = 0;
+ scsiio_req->sense_buffer_len = SCSI_SENSE_BUFFERSIZE;
+ scsiio_req->sense_buffer_low_add =
+ leapraid_get_sense_buffer_dma(adapter, taskid);
+ scsiio_req->sgl_offset0 =
+ offsetof(struct leapraid_scsiio_req, sgl) /
+ LEAPRAID_DWORDS_BYTE_SIZE;
+ int_to_scsilun(sdev_priv->lun, (struct scsi_lun *)scsiio_req->lun);
+ memcpy(scsiio_req->cdb.cdb32, scmd->cmnd, scmd->cmd_len);
+ if (scsiio_req->data_len) {
+ if (leapraid_build_scmd_ieee_sg(adapter, scmd, taskid)) {
+ leapraid_free_taskid(adapter, taskid);
+ rc = SCSI_MLQUEUE_HOST_BUSY;
+ goto out;
+ }
+ } else {
+ leapraid_build_ieee_nodata_sg(adapter, &scsiio_req->sgl);
+ }
+
+ if (likely(scsiio_req->func == LEAPRAID_FUNC_SCSIIO_REQ)) {
+ leapraid_fire_scsi_io(adapter, taskid,
+ le16_to_cpu(scsiio_req->dev_hdl));
+ } else {
+ leapraid_fire_task(adapter, taskid);
+ }
+ dev_dbg(&adapter->pdev->dev,
+ "LEAPRAID_SCSIIO: Send Descriptor taskid %d, req type 0x%x\n",
+ taskid, scsiio_req->func);
+out:
+ return rc;
+}
+
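+/*
+ * Common error handler backend for the abort, LUN reset and target reset
+ * entry points: validate the command and its target, then issue the
+ * corresponding task management request.
+ */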
+static int leapraid_error_handler(struct scsi_cmnd *scmd,
+ const char *str, u8 type)
+{
+ struct leapraid_adapter *adapter = shost_priv(scmd->device->host);
+ struct scsi_target *starget = scmd->device->sdev_target;
+ struct leapraid_starget_priv *starget_priv = starget->hostdata;
+ struct leapraid_io_req_tracker *io_req_tracker = NULL;
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_sas_dev *sas_dev = NULL;
+ u16 hdl;
+ int rc;
+
+ dev_info(&adapter->pdev->dev,
+ "EH enter: type=%s, scmd=0x%p, req tag=%d\n", str, scmd,
+ scmd->request->tag);
+ scsi_print_command(scmd);
+
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK) {
+ io_req_tracker = leapraid_get_scmd_priv(scmd);
+ dev_info(&adapter->pdev->dev,
+ "EH ABORT: scmd=0x%p, pending=%u ms, tout=%u ms, req tag=%d\n",
+ scmd,
+ jiffies_to_msecs(jiffies - scmd->jiffies_at_alloc),
+ (scmd->request->timeout / HZ) * 1000,
+ scmd->request->tag);
+ }
+
+ if (leapraid_pci_removed(adapter) ||
+ adapter->access_ctrl.host_removing) {
+ dev_err(&adapter->pdev->dev,
+ "EH %s failed: %s scmd=0x%p\n", str,
+ (adapter->access_ctrl.host_removing ?
+ "shost removing!" : "pci_dev removed!"), scmd);
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK)
+ if (io_req_tracker && io_req_tracker->taskid)
+ leapraid_free_taskid(adapter,
+ io_req_tracker->taskid);
+ scmd->result = DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+#ifdef FAST_IO_FAIL
+ rc = FAST_IO_FAIL;
+#else
+ rc = FAILED;
+#endif
+ goto out;
+ }
+
+ sdev_priv = scmd->device->hostdata;
+ if (!sdev_priv || !sdev_priv->starget_priv) {
+ dev_warn(&adapter->pdev->dev,
+ "EH %s: sdev or starget gone, scmd=0x%p\n",
+ str, scmd);
+ scmd->result = DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+ scmd->scsi_done(scmd);
+ rc = SUCCESS;
+ goto out;
+ }
+
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK) {
+ if (!io_req_tracker) {
+ dev_warn(&adapter->pdev->dev,
+ "EH ABORT: no io tracker, scmd 0x%p\n", scmd);
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ rc = SUCCESS;
+ goto out;
+ }
+
+ if (sdev_priv->starget_priv->flg &
+ LEAPRAID_TGT_FLG_RAID_MEMBER ||
+ sdev_priv->starget_priv->flg & LEAPRAID_TGT_FLG_VOLUME) {
+ dev_err(&adapter->pdev->dev,
+ "EH ABORT: skip RAID/VOLUME target, scmd=0x%p\n",
+ scmd);
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ rc = FAILED;
+ goto out;
+ }
+
+ hdl = sdev_priv->starget_priv->hdl;
+ } else {
+ hdl = 0;
+ if (sdev_priv->starget_priv->flg &
+ LEAPRAID_TGT_FLG_RAID_MEMBER) {
+ sas_dev = leapraid_get_sas_dev_from_tgt(adapter,
+ starget_priv);
+ if (sas_dev)
+ hdl = sas_dev->volume_hdl;
+ } else {
+ hdl = sdev_priv->starget_priv->hdl;
+ }
+
+ if (!hdl) {
+ dev_err(&adapter->pdev->dev,
+ "EH %s failed: target handle is 0, scmd=0x%p\n",
+ str, scmd);
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ rc = FAILED;
+ goto out;
+ }
+ }
+
+ dev_info(&adapter->pdev->dev,
+ "EH issue TM: type=%s, scmd=0x%p, hdl=0x%x\n",
+ str, scmd, hdl);
+
+ rc = leapraid_issue_locked_tm(adapter, hdl, scmd->device->channel,
+ scmd->device->id,
+ (type == LEAPRAID_TM_TASKTYPE_TARGET_RESET ?
+ 0 : scmd->device->lun),
+ type,
+ (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK ?
+ io_req_tracker->taskid : 0),
+ LEAPRAID_TM_MSGFLAGS_LINK_RESET);
+
+out:
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK) {
+ dev_info(&adapter->pdev->dev,
+ "EH ABORT result: %s, scmd=0x%p\n",
+ ((rc == SUCCESS) ? "success" : "failed"), scmd);
+ } else {
+ dev_info(&adapter->pdev->dev,
+ "EH %s result: %s, scmd=0x%p\n",
+ str, ((rc == SUCCESS) ? "success" : "failed"), scmd);
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+ }
+ return rc;
+}
+
+static int leapraid_eh_abort_handler(struct scsi_cmnd *scmd)
+{
+ return leapraid_error_handler(scmd, "ABORT TASK",
+ LEAPRAID_TM_TASKTYPE_ABORT_TASK);
+}
+
+static int leapraid_eh_device_reset_handler(struct scsi_cmnd *scmd)
+{
+ return leapraid_error_handler(scmd, "UNIT RESET",
+ LEAPRAID_TM_TASKTYPE_LOGICAL_UNIT_RESET);
+}
+
+static int leapraid_eh_target_reset_handler(struct scsi_cmnd *scmd)
+{
+ return leapraid_error_handler(scmd, "TARGET RESET",
+ LEAPRAID_TM_TASKTYPE_TARGET_RESET);
+}
+
+static int leapraid_eh_host_reset_handler(struct scsi_cmnd *scmd)
+{
+ struct leapraid_adapter *adapter = shost_priv(scmd->device->host);
+ int rc;
+
+ dev_info(&adapter->pdev->dev,
+ "EH HOST RESET enter: scmd=%p, req tag=%d\n",
+ scmd,
+ scmd->request->tag);
+ scsi_print_command(scmd);
+
+ if (adapter->scan_dev_desc.driver_loading ||
+ adapter->access_ctrl.host_removing) {
+ dev_err(&adapter->pdev->dev,
+ "EH HOST RESET failed: %s scmd=0x%p\n",
+ (adapter->access_ctrl.host_removing ?
+ "shost removing!" : "driver loading!"), scmd);
+ rc = FAILED;
+ goto out;
+ }
+
+ dev_info(&adapter->pdev->dev, "%s:%d issuing hard reset\n",
+ __func__, __LINE__);
+ if (leapraid_hard_reset_handler(adapter, FULL_RESET) < 0)
+ rc = FAILED;
+ else
+ rc = SUCCESS;
+
+out:
+ dev_info(&adapter->pdev->dev, "EH HOST RESET result: %s, scmd=0x%p\n",
+ ((rc == SUCCESS) ? "success" : "failed"), scmd);
+ return rc;
+}
+
+static int leapraid_slave_alloc(struct scsi_device *sdev)
+{
+ struct leapraid_raid_volume *raid_volume;
+ struct leapraid_starget_priv *stgt_priv;
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_adapter *adapter;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_target *tgt;
+ struct Scsi_Host *shost;
+ unsigned long flags;
+
+ sdev_priv = kzalloc(sizeof(*sdev_priv), GFP_KERNEL);
+ if (!sdev_priv)
+ return -ENOMEM;
+
+ sdev_priv->lun = sdev->lun;
+ sdev_priv->flg = LEAPRAID_DEVICE_FLG_INIT;
+ tgt = scsi_target(sdev);
+ stgt_priv = tgt->hostdata;
+ stgt_priv->num_luns++;
+ sdev_priv->starget_priv = stgt_priv;
+ sdev->hostdata = sdev_priv;
+ if ((stgt_priv->flg & LEAPRAID_TGT_FLG_RAID_MEMBER))
+ sdev->no_uld_attach = LEAPRAID_NO_ULD_ATTACH;
+
+ shost = dev_to_shost(&tgt->dev);
+ adapter = shost_priv(shost);
+ if (tgt->channel == RAID_CHANNEL) {
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_id(adapter,
+ tgt->id,
+ tgt->channel);
+ if (raid_volume)
+ raid_volume->sdev = sdev;
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock,
+ flags);
+ }
+
+ if (!(stgt_priv->flg & LEAPRAID_TGT_FLG_VOLUME)) {
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter,
+ stgt_priv->sas_address,
+ stgt_priv->card_port);
+ if (sas_dev && !sas_dev->starget) {
+ sdev_printk(KERN_INFO, sdev,
+ "%s: assign starget to sas_dev\n", __func__);
+ sas_dev->starget = tgt;
+ }
+
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ }
+ return 0;
+}
+
+static int leapraid_slave_cfg_volume(struct scsi_device *sdev)
+{
+ struct Scsi_Host *shost = sdev->host;
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ struct leapraid_raid_volume *raid_volume;
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_sdev_priv *sdev_priv;
+ unsigned long flags;
+ int qd;
+ u16 hdl;
+
+ sdev_priv = sdev->hostdata;
+ starget_priv = sdev_priv->starget_priv;
+ hdl = starget_priv->hdl;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_hdl(adapter, hdl);
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+ if (!raid_volume) {
+ sdev_printk(KERN_WARNING, sdev,
+ "%s: raid_volume not found, hdl=0x%x\n",
+ __func__, hdl);
+ return 1;
+ }
+
+ if (leapraid_get_volume_cap(adapter, raid_volume)) {
+ sdev_printk(KERN_ERR, sdev,
+ "%s: failed to get volume cap, hdl=0x%x\n",
+ __func__, hdl);
+ return 1;
+ }
+
+ qd = (raid_volume->dev_info & LEAPRAID_DEVTYP_SSP_TGT) ?
+ LEAPRAID_SAS_QUEUE_DEPTH : LEAPRAID_SATA_QUEUE_DEPTH;
+ if (raid_volume->vol_type != LEAPRAID_VOL_TYPE_RAID0)
+ qd = LEAPRAID_RAID_QUEUE_DEPTH;
+
+ sdev_printk(KERN_INFO, sdev,
+ "raid volume: hdl=0x%04x, wwid=0x%016llx\n",
+ raid_volume->hdl, (unsigned long long)raid_volume->wwid);
+
+ if (shost->max_sectors > LEAPRAID_MAX_SECTORS)
+ blk_queue_max_hw_sectors(sdev->request_queue,
+ LEAPRAID_MAX_SECTORS);
+
+ leapraid_adjust_sdev_queue_depth(sdev, qd);
+ return 0;
+}
+
+static int leapraid_slave_configure_extra(struct scsi_device *sdev,
+ struct leapraid_sas_dev **psas_dev,
+ u16 vol_hdl, u64 volume_wwid,
+ bool *is_target_ssp, int *qd)
+{
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_sdev_priv *sdev_priv;
+ struct Scsi_Host *shost = sdev->host;
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ unsigned long flags;
+
+ sdev_priv = sdev->hostdata;
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ *is_target_ssp = false;
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter,
+ sdev_priv->starget_priv->sas_address,
+ sdev_priv->starget_priv->card_port);
+ if (!sas_dev) {
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ sdev_printk(KERN_WARNING, sdev,
+ "%s: sas_dev not found, sas=0x%llx\n",
+ __func__, sdev_priv->starget_priv->sas_address);
+ return 1;
+ }
+
+ *psas_dev = sas_dev;
+ sas_dev->volume_hdl = vol_hdl;
+ sas_dev->volume_wwid = volume_wwid;
+ if (sas_dev->dev_info & LEAPRAID_DEVTYP_SSP_TGT) {
+ *qd = (sas_dev->port_type > 1) ?
+ adapter->adapter_attr.wideport_max_queue_depth :
+ adapter->adapter_attr.narrowport_max_queue_depth;
+ *is_target_ssp = true;
+ if (sas_dev->dev_info & LEAPRAID_DEVTYP_SEP)
+ sdev_priv->sep = true;
+ } else {
+ *qd = adapter->adapter_attr.sata_max_queue_depth;
+ }
+
+ sdev_printk(KERN_INFO, sdev,
+ "sdev: dev name=0x%016llx, sas addr=0x%016llx\n",
+ (unsigned long long)sas_dev->dev_name,
+ (unsigned long long)sas_dev->sas_addr);
+ leapraid_sdev_put(sas_dev);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return 0;
+}
+
+static int leapraid_slave_configure(struct scsi_device *sdev)
+{
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_sdev_priv *sdev_priv;
+ struct Scsi_Host *shost = sdev->host;
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_adapter *adapter;
+ u16 hdl, vol_hdl = 0;
+ bool is_target_ssp = false;
+ u64 volume_wwid = 0;
+ int qd = 1;
+
+ adapter = shost_priv(shost);
+ sdev_priv = sdev->hostdata;
+ sdev_priv->flg &= ~LEAPRAID_DEVICE_FLG_INIT;
+ starget_priv = sdev_priv->starget_priv;
+ hdl = starget_priv->hdl;
+ if (starget_priv->flg & LEAPRAID_TGT_FLG_VOLUME)
+ return leapraid_slave_cfg_volume(sdev);
+
+ if (starget_priv->flg & LEAPRAID_TGT_FLG_RAID_MEMBER) {
+ if (leapraid_cfg_get_volume_hdl(adapter, hdl, &vol_hdl)) {
+ sdev_printk(KERN_WARNING, sdev,
+ "%s: get volume hdl failed, hdl=0x%x\n",
+ __func__, hdl);
+ return 1;
+ }
+
+ if (vol_hdl && leapraid_cfg_get_volume_wwid(adapter, vol_hdl,
+ &volume_wwid)) {
+ sdev_printk(KERN_WARNING, sdev,
+ "%s: get wwid failed, volume_hdl=0x%x\n",
+ __func__, vol_hdl);
+ return 1;
+ }
+ }
+
+ if (leapraid_slave_configure_extra(sdev, &sas_dev, vol_hdl,
+ volume_wwid, &is_target_ssp, &qd)) {
+ sdev_printk(KERN_WARNING, sdev,
+ "%s: slave_configure_extra failed\n", __func__);
+ return 1;
+ }
+
+ leapraid_adjust_sdev_queue_depth(sdev, qd);
+ if (is_target_ssp)
+ sas_read_port_mode_page(sdev);
+
+ return 0;
+}
+
+static void leapraid_slave_destroy(struct scsi_device *sdev)
+{
+ struct leapraid_adapter *adapter;
+ struct Scsi_Host *shost;
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_starget_priv *starget_priv;
+ struct scsi_target *stgt;
+ unsigned long flags;
+
+ if (!sdev->hostdata)
+ return;
+
+ stgt = scsi_target(sdev);
+ starget_priv = stgt->hostdata;
+ starget_priv->num_luns--;
+ shost = dev_to_shost(&stgt->dev);
+ adapter = shost_priv(shost);
+ if (!(starget_priv->flg & LEAPRAID_TGT_FLG_VOLUME)) {
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_from_tgt(adapter,
+ starget_priv);
+ if (sas_dev && !starget_priv->num_luns)
+ sas_dev->starget = NULL;
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ }
+
+ kfree(sdev->hostdata);
+ sdev->hostdata = NULL;
+}
+
+static int leapraid_target_alloc_raid(struct scsi_target *tgt)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_raid_volume *raid_volume;
+ struct Scsi_Host *shost = dev_to_shost(&tgt->dev);
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ unsigned long flags;
+
+ starget_priv = (struct leapraid_starget_priv *)tgt->hostdata;
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_id(adapter, tgt->id,
+ tgt->channel);
+ if (raid_volume) {
+ starget_priv->hdl = raid_volume->hdl;
+ starget_priv->sas_address = raid_volume->wwid;
+ starget_priv->flg |= LEAPRAID_TGT_FLG_VOLUME;
+ raid_volume->starget = tgt;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+ return 0;
+}
+
+static int leapraid_target_alloc_sas(struct scsi_target *tgt)
+{
+ struct sas_rphy *rphy;
+ struct Scsi_Host *shost;
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_adapter *adapter;
+ struct leapraid_starget_priv *starget_priv;
+ unsigned long flags;
+
+ shost = dev_to_shost(&tgt->dev);
+ adapter = shost_priv(shost);
+ starget_priv = (struct leapraid_starget_priv *)tgt->hostdata;
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ rphy = dev_to_rphy(tgt->dev.parent);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr_and_rphy(adapter,
+ rphy->identify.sas_address,
+ rphy);
+ if (sas_dev) {
+ starget_priv->sas_dev = sas_dev;
+ starget_priv->card_port = sas_dev->card_port;
+ starget_priv->sas_address = sas_dev->sas_addr;
+ starget_priv->hdl = sas_dev->hdl;
+ sas_dev->channel = tgt->channel;
+ sas_dev->id = tgt->id;
+ sas_dev->starget = tgt;
+ if (test_bit(sas_dev->hdl,
+ (unsigned long *)adapter->dev_topo.pd_hdls))
+ starget_priv->flg |= LEAPRAID_TGT_FLG_RAID_MEMBER;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ return 0;
+}
+
+static int leapraid_target_alloc(struct scsi_target *tgt)
+{
+ struct leapraid_starget_priv *starget_priv;
+
+ starget_priv = kzalloc(sizeof(*starget_priv), GFP_KERNEL);
+ if (!starget_priv)
+ return -ENOMEM;
+
+ tgt->hostdata = starget_priv;
+ starget_priv->starget = tgt;
+ starget_priv->hdl = LEAPRAID_INVALID_DEV_HANDLE;
+ if (tgt->channel == RAID_CHANNEL)
+ return leapraid_target_alloc_raid(tgt);
+
+ return leapraid_target_alloc_sas(tgt);
+}
+
+static void leapraid_target_destroy_raid(struct scsi_target *tgt)
+{
+ struct leapraid_raid_volume *raid_volume;
+ struct Scsi_Host *shost = dev_to_shost(&tgt->dev);
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_id(adapter, tgt->id,
+ tgt->channel);
+ if (raid_volume) {
+ raid_volume->starget = NULL;
+ raid_volume->sdev = NULL;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+}
+
+static void leapraid_target_destroy_sas(struct scsi_target *tgt)
+{
+ struct leapraid_adapter *adapter;
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_starget_priv *starget_priv;
+ struct Scsi_Host *shost;
+ unsigned long flags;
+
+ shost = dev_to_shost(&tgt->dev);
+ adapter = shost_priv(shost);
+ starget_priv = tgt->hostdata;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_from_tgt(adapter,
+ starget_priv);
+ if (sas_dev &&
+ sas_dev->starget == tgt &&
+ sas_dev->id == tgt->id &&
+ sas_dev->channel == tgt->channel)
+ sas_dev->starget = NULL;
+
+ if (sas_dev) {
+ starget_priv->sas_dev = NULL;
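+		/* drop both the lookup reference taken above and the
+		 * reference cached in starget_priv->sas_dev at target_alloc time
+		 */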
+ leapraid_sdev_put(sas_dev);
+ leapraid_sdev_put(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+}
+
+static void leapraid_target_destroy(struct scsi_target *tgt)
+{
+ struct leapraid_starget_priv *starget_priv;
+
+ starget_priv = tgt->hostdata;
+ if (!starget_priv)
+ return;
+
+ if (tgt->channel == RAID_CHANNEL) {
+ leapraid_target_destroy_raid(tgt);
+ goto out;
+ }
+
+ leapraid_target_destroy_sas(tgt);
+
+out:
+ kfree(starget_priv);
+ tgt->hostdata = NULL;
+}
+
+static bool leapraid_scan_check_status(struct leapraid_adapter *adapter,
+ bool *need_hard_reset)
+{
+ u32 adapter_state;
+
+ if (adapter->scan_dev_desc.scan_start) {
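+		/* firmware scan still in progress: only bail out early on a controller fault */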
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (adapter_state == LEAPRAID_DB_FAULT) {
+ *need_hard_reset = true;
+ return true;
+ }
+ return false;
+ }
+
+ if (adapter->driver_cmds.scan_dev_cmd.status & LEAPRAID_CMD_RESET) {
+ dev_err(&adapter->pdev->dev,
+ "device scan: aborted due to reset\n");
+ adapter->driver_cmds.scan_dev_cmd.status =
+ LEAPRAID_CMD_NOT_USED;
+ adapter->scan_dev_desc.driver_loading = false;
+ return true;
+ }
+
+ if (adapter->scan_dev_desc.scan_start_failed) {
+ dev_err(&adapter->pdev->dev,
+ "device scan: failed with adapter_status=0x%08x\n",
+ adapter->scan_dev_desc.scan_start_failed);
+ adapter->scan_dev_desc.driver_loading = false;
+ adapter->scan_dev_desc.wait_scan_dev_done = false;
+ adapter->access_ctrl.host_removing = true;
+ return true;
+ }
+
+ dev_info(&adapter->pdev->dev, "device scan: SUCCESS\n");
+ adapter->driver_cmds.scan_dev_cmd.status = LEAPRAID_CMD_NOT_USED;
+ leapraid_scan_dev_done(adapter);
+ return true;
+}
+
+static int leapraid_scan_finished(struct Scsi_Host *shost, unsigned long time)
+{
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ bool need_hard_reset = false;
+
+ if (time >= (LEAPRAID_SCAN_DEV_CMD_TIMEOUT * HZ)) {
+ adapter->driver_cmds.scan_dev_cmd.status =
+ LEAPRAID_CMD_NOT_USED;
+		dev_err(&adapter->pdev->dev,
+			"device scan: timed out after %d seconds\n",
+			LEAPRAID_SCAN_DEV_CMD_TIMEOUT);
+ adapter->scan_dev_desc.driver_loading = false;
+ return 1;
+ }
+
+ if (!leapraid_scan_check_status(adapter, &need_hard_reset))
+ return 0;
+
+ if (need_hard_reset) {
+ adapter->driver_cmds.scan_dev_cmd.status =
+ LEAPRAID_CMD_NOT_USED;
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ if (leapraid_hard_reset_handler(adapter, PART_RESET))
+ adapter->scan_dev_desc.driver_loading = false;
+ }
+
+ return 1;
+}
+
+static void leapraid_scan_start(struct Scsi_Host *shost)
+{
+ struct leapraid_adapter *adapter = shost_priv(shost);
+
+ adapter->scan_dev_desc.scan_start = true;
+ leapraid_scan_dev(adapter, true);
+}
+
+static int leapraid_calc_max_queue_depth(struct scsi_device *sdev, int qdepth)
+{
+ struct Scsi_Host *shost;
+ int max_depth;
+
+ shost = sdev->host;
+ max_depth = shost->can_queue;
+
+ if (!sdev->tagged_supported)
+ max_depth = 1;
+
+ if (qdepth > max_depth)
+ qdepth = max_depth;
+
+ return qdepth;
+}
+
+static int leapraid_change_queue_depth(struct scsi_device *sdev, int qdepth)
+{
+ qdepth = leapraid_calc_max_queue_depth(sdev, qdepth);
+ scsi_change_queue_depth(sdev, qdepth);
+ return sdev->queue_depth;
+}
+
+void leapraid_adjust_sdev_queue_depth(struct scsi_device *sdev, int qdepth)
+{
+ leapraid_change_queue_depth(sdev, qdepth);
+}
+
+
+static int leapraid_bios_param(struct scsi_device *sdev,
+ struct block_device *bdev,
+ sector_t capacity, int geom[])
+{
+ int heads = 0;
+ int sectors = 0;
+ sector_t cylinders;
+
+ if (scsi_partsize(bdev, capacity, geom))
+ return 0;
+
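+	/* no partition table found: fall back to a heads/sectors guess based on capacity */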
+ if ((ulong)capacity >= LEAPRAID_LARGE_DISK_THRESHOLD) {
+ heads = LEAPRAID_LARGE_DISK_HEADS;
+ sectors = LEAPRAID_LARGE_DISK_SECTORS;
+ } else {
+ heads = LEAPRAID_SMALL_DISK_HEADS;
+ sectors = LEAPRAID_SMALL_DISK_SECTORS;
+ }
+
+ cylinders = capacity;
+ sector_div(cylinders, heads * sectors);
+
+ geom[0] = heads;
+ geom[1] = sectors;
+ geom[2] = cylinders;
+ return 0;
+}
+
+static ssize_t fw_queue_depth_show(struct device *cdev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct leapraid_adapter *adapter = shost_priv(shost);
+
+ return scnprintf(buf, PAGE_SIZE, "%02d\n",
+ adapter->adapter_attr.features.req_slot);
+}
+
+static ssize_t host_sas_address_show(struct device *cdev,
+ struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct leapraid_adapter *adapter = shost_priv(shost);
+
+ return scnprintf(buf, PAGE_SIZE, "0x%016llx\n",
+ (unsigned long long)adapter->dev_topo.card.sas_address);
+}
+
+static DEVICE_ATTR_RO(fw_queue_depth);
+static DEVICE_ATTR_RO(host_sas_address);
+
+static struct device_attribute *leapraid_shost_attrs[] = {
+ &dev_attr_fw_queue_depth,
+ &dev_attr_host_sas_address,
+ NULL,
+};
+
+static ssize_t sas_address_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct leapraid_sdev_priv *sas_device_priv_data = sdev->hostdata;
+
+ return scnprintf(buf, PAGE_SIZE, "0x%016llx\n",
+ (unsigned long long)sas_device_priv_data->starget_priv->sas_address);
+}
+
+static ssize_t sas_device_handle_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct leapraid_sdev_priv *sas_device_priv_data = sdev->hostdata;
+
+ return scnprintf(buf, PAGE_SIZE, "0x%04x\n",
+ sas_device_priv_data->starget_priv->hdl);
+}
+
+static ssize_t sas_ncq_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct leapraid_sdev_priv *sas_device_priv_data = sdev->hostdata;
+
+ return scnprintf(buf, PAGE_SIZE, "%d\n", sas_device_priv_data->ncq);
+}
+
+static ssize_t sas_ncq_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct leapraid_sdev_priv *sas_device_priv_data = sdev->hostdata;
+ unsigned char *vpd_pg89;
+ int ncq_op = 0;
+ bool ncq_supported = false;
+
+ if (kstrtoint(buf, 0, &ncq_op))
+ goto out;
+
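+	/* the ATA Information VPD page (0x89) reports whether the drive supports NCQ */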
+ vpd_pg89 = kmalloc(LEAPRAID_VPD_PG89_MAX_LEN, GFP_KERNEL);
+	if (!vpd_pg89)
+		return -ENOMEM;
+
+ if (!scsi_device_supports_vpd(sdev) ||
+ scsi_get_vpd_page(sdev, LEAPRAID_VPD_PAGE_ATA_INFO,
+ vpd_pg89, LEAPRAID_VPD_PG89_MAX_LEN)) {
+ kfree(vpd_pg89);
+ goto out;
+ }
+
+ ncq_supported = (vpd_pg89[LEAPRAID_VPD_PG89_NCQ_BYTE_IDX] >>
+ LEAPRAID_VPD_PG89_NCQ_BIT_SHIFT) &
+ LEAPRAID_VPD_PG89_NCQ_BIT_MASK;
+ kfree(vpd_pg89);
+ if (ncq_supported)
+ sas_device_priv_data->ncq = ncq_op;
+ return strlen(buf);
+out:
+ return -EINVAL;
+}
+
+static DEVICE_ATTR_RO(sas_address);
+static DEVICE_ATTR_RO(sas_device_handle);
+
+static DEVICE_ATTR_RW(sas_ncq);
+
+static struct device_attribute *leapraid_sdev_attrs[] = {
+ &dev_attr_sas_address,
+ &dev_attr_sas_device_handle,
+ &dev_attr_sas_ncq,
+ NULL,
+};
+
+static struct scsi_host_template leapraid_driver_template = {
+ .module = THIS_MODULE,
+ .name = "LEAPIO RAID Host",
+ .proc_name = LEAPRAID_DRIVER_NAME,
+ .queuecommand = leapraid_queuecommand,
+ .eh_abort_handler = leapraid_eh_abort_handler,
+ .eh_device_reset_handler = leapraid_eh_device_reset_handler,
+ .eh_target_reset_handler = leapraid_eh_target_reset_handler,
+ .eh_host_reset_handler = leapraid_eh_host_reset_handler,
+ .slave_alloc = leapraid_slave_alloc,
+ .slave_destroy = leapraid_slave_destroy,
+ .slave_configure = leapraid_slave_configure,
+ .target_alloc = leapraid_target_alloc,
+ .target_destroy = leapraid_target_destroy,
+ .scan_finished = leapraid_scan_finished,
+ .scan_start = leapraid_scan_start,
+ .change_queue_depth = leapraid_change_queue_depth,
+ .bios_param = leapraid_bios_param,
+ .can_queue = LEAPRAID_CAN_QUEUE_MIN,
+ .this_id = LEAPRAID_THIS_ID_NONE,
+ .sg_tablesize = LEAPRAID_SG_DEPTH,
+ .max_sectors = LEAPRAID_DEF_MAX_SECTORS,
+ .max_segment_size = LEAPRAID_MAX_SEGMENT_SIZE,
+ .cmd_per_lun = LEAPRAID_CMD_PER_LUN,
+ .shost_attrs = leapraid_shost_attrs,
+ .sdev_attrs = leapraid_sdev_attrs,
+ .track_queue_depth = 1,
+};
+
+static void leapraid_lock_init(struct leapraid_adapter *adapter)
+{
+ mutex_init(&adapter->reset_desc.adapter_reset_mutex);
+ mutex_init(&adapter->reset_desc.host_diag_mutex);
+ mutex_init(&adapter->access_ctrl.pci_access_lock);
+
+ spin_lock_init(&adapter->reset_desc.adapter_reset_lock);
+ spin_lock_init(&adapter->dynamic_task_desc.task_lock);
+ spin_lock_init(&adapter->dev_topo.sas_dev_lock);
+ spin_lock_init(&adapter->dev_topo.topo_node_lock);
+ spin_lock_init(&adapter->fw_evt_s.fw_evt_lock);
+ spin_lock_init(&adapter->dev_topo.raid_volume_lock);
+}
+
+static void leapraid_list_init(struct leapraid_adapter *adapter)
+{
+ INIT_LIST_HEAD(&adapter->dev_topo.sas_dev_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.card_port_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.sas_dev_init_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.exp_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.enc_list);
+ INIT_LIST_HEAD(&adapter->fw_evt_s.fw_evt_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.raid_volume_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.card.sas_port_list);
+}
+
+static int leapraid_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct leapraid_adapter *adapter = NULL;
+ struct Scsi_Host *shost = NULL;
+ int rc;
+
+ shost = scsi_host_alloc(&leapraid_driver_template,
+ sizeof(struct leapraid_adapter));
+ if (!shost)
+ return -ENODEV;
+
+ adapter = shost_priv(shost);
+ memset(adapter, 0, sizeof(struct leapraid_adapter));
+ adapter->adapter_attr.id = leapraid_ids++;
+
+ adapter->adapter_attr.enable_mp = enable_mp;
+
+ INIT_LIST_HEAD(&adapter->list);
+ spin_lock(&leapraid_adapter_lock);
+ list_add_tail(&adapter->list, &leapraid_adapter_list);
+ spin_unlock(&leapraid_adapter_lock);
+
+ adapter->shost = shost;
+ adapter->pdev = pdev;
+ adapter->fw_log_desc.open_pcie_trace = open_pcie_trace;
+ leapraid_lock_init(adapter);
+ leapraid_list_init(adapter);
+ sprintf(adapter->adapter_attr.name, "%s%d",
+ LEAPRAID_DRIVER_NAME, adapter->adapter_attr.id);
+
+ shost->max_cmd_len = LEAPRAID_MAX_CDB_LEN;
+ shost->max_lun = LEAPRAID_MAX_LUNS;
+ shost->transportt = leapraid_transport_template;
+ shost->unique_id = adapter->adapter_attr.id;
+
+ snprintf(adapter->fw_evt_s.fw_evt_name,
+ sizeof(adapter->fw_evt_s.fw_evt_name),
+ "fw_event_%s%d", LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id);
+ adapter->fw_evt_s.fw_evt_thread =
+ alloc_ordered_workqueue(adapter->fw_evt_s.fw_evt_name, 0);
+ if (!adapter->fw_evt_s.fw_evt_thread) {
+ rc = -ENODEV;
+ goto evt_wq_fail;
+ }
+
+ adapter->scan_dev_desc.driver_loading = true;
+ if ((leapraid_ctrl_init(adapter))) {
+ rc = -ENODEV;
+ goto ctrl_init_fail;
+ }
+
+	rc = scsi_add_host(shost, &pdev->dev);
+	if (rc)
+		goto scsi_add_shost_fail;
+
+ scsi_scan_host(shost);
+ return 0;
+
+scsi_add_shost_fail:
+ leapraid_remove_ctrl(adapter);
+ctrl_init_fail:
+ destroy_workqueue(adapter->fw_evt_s.fw_evt_thread);
+evt_wq_fail:
+ spin_lock(&leapraid_adapter_lock);
+ list_del(&adapter->list);
+ spin_unlock(&leapraid_adapter_lock);
+ scsi_host_put(shost);
+ return rc;
+}
+
+static void leapraid_cleanup_lists(struct leapraid_adapter *adapter)
+{
+ struct leapraid_raid_volume *raid_volume, *next_raid_volume;
+ struct leapraid_starget_priv *starget_priv_data;
+ struct leapraid_sas_port *leapraid_port, *next_port;
+ struct leapraid_card_port *port, *port_next;
+ struct leapraid_vphy *vphy, *vphy_next;
+
+ list_for_each_entry_safe(raid_volume, next_raid_volume,
+ &adapter->dev_topo.raid_volume_list, list) {
+ if (raid_volume->starget) {
+ starget_priv_data = raid_volume->starget->hostdata;
+ starget_priv_data->deleted = true;
+ scsi_remove_target(&raid_volume->starget->dev);
+ }
+ pr_info("removing hdl=0x%04x, wwid=0x%016llx\n",
+ raid_volume->hdl,
+ (unsigned long long)raid_volume->wwid);
+ leapraid_raid_volume_remove(adapter, raid_volume);
+ }
+
+ list_for_each_entry_safe(leapraid_port, next_port,
+ &adapter->dev_topo.card.sas_port_list,
+ port_list) {
+ if (leapraid_port->remote_identify.device_type ==
+ SAS_END_DEVICE)
+ leapraid_sas_dev_remove_by_sas_address(adapter,
+ leapraid_port->remote_identify.sas_address,
+ leapraid_port->card_port);
+ else if (leapraid_port->remote_identify.device_type ==
+ SAS_EDGE_EXPANDER_DEVICE ||
+ leapraid_port->remote_identify.device_type ==
+ SAS_FANOUT_EXPANDER_DEVICE)
+ leapraid_exp_rm(adapter,
+ leapraid_port->remote_identify.sas_address,
+ leapraid_port->card_port);
+ }
+
+ list_for_each_entry_safe(port, port_next,
+ &adapter->dev_topo.card_port_list, list) {
+ if (port->vphys_mask)
+ list_for_each_entry_safe(vphy, vphy_next,
+ &port->vphys_list, list) {
+ list_del(&vphy->list);
+ kfree(vphy);
+ }
+ list_del(&port->list);
+ kfree(port);
+ }
+
+ if (adapter->dev_topo.card.phys_num) {
+ kfree(adapter->dev_topo.card.card_phy);
+ adapter->dev_topo.card.card_phy = NULL;
+ adapter->dev_topo.card.phys_num = 0;
+ }
+}
+
+static void leapraid_remove(struct pci_dev *pdev)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ struct workqueue_struct *wq;
+ unsigned long flags;
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev, "unable to remove!\n");
+ return;
+ }
+
+ while (adapter->scan_dev_desc.driver_loading)
+ ssleep(1);
+
+ while (adapter->access_ctrl.shost_recovering)
+ ssleep(1);
+
+ adapter->access_ctrl.host_removing = true;
+
+ leapraid_wait_cmds_done(adapter);
+
+ leapraid_smart_polling_stop(adapter);
+ leapraid_free_internal_scsi_cmd(adapter);
+
+ if (leapraid_pci_removed(adapter)) {
+ leapraid_mq_polling_pause(adapter);
+ leapraid_clean_active_scsi_cmds(adapter);
+ }
+ leapraid_clean_active_fw_evt(adapter);
+
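+	/* detach the fw event workqueue under the lock so no new work can be queued, then destroy it */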
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ wq = adapter->fw_evt_s.fw_evt_thread;
+ adapter->fw_evt_s.fw_evt_thread = NULL;
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+ if (wq)
+ destroy_workqueue(wq);
+
+ leapraid_ir_shutdown(adapter);
+ sas_remove_host(shost);
+ leapraid_cleanup_lists(adapter);
+ leapraid_remove_ctrl(adapter);
+ spin_lock(&leapraid_adapter_lock);
+ list_del(&adapter->list);
+ spin_unlock(&leapraid_adapter_lock);
+ scsi_host_put(shost);
+}
+
+static void leapraid_shutdown(struct pci_dev *pdev)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ struct workqueue_struct *wq;
+ unsigned long flags;
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev, "unable to shutdown!\n");
+ return;
+ }
+
+ adapter->access_ctrl.host_removing = true;
+ leapraid_wait_cmds_done(adapter);
+ leapraid_clean_active_fw_evt(adapter);
+
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ wq = adapter->fw_evt_s.fw_evt_thread;
+ adapter->fw_evt_s.fw_evt_thread = NULL;
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+ if (wq)
+ destroy_workqueue(wq);
+
+ leapraid_ir_shutdown(adapter);
+ leapraid_disable_controller(adapter);
+}
+
+static pci_ers_result_t leapraid_pci_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+
+ if (!shost || !adapter) {
+		dev_err(&pdev->dev, "pci error detected: invalid shost or adapter\n");
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ pr_err("%s: pci error detected, state=%d\n",
+ adapter->adapter_attr.name, state);
+
+ switch (state) {
+ case pci_channel_io_normal:
+ return PCI_ERS_RESULT_CAN_RECOVER;
+ case pci_channel_io_frozen:
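+		/* recoverable error: quiesce the host and shut the controller down until slot reset */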
+ adapter->access_ctrl.pcie_recovering = true;
+ scsi_block_requests(adapter->shost);
+ leapraid_smart_polling_stop(adapter);
+ leapraid_check_scheduled_fault_stop(adapter);
+ leapraid_fw_log_stop(adapter);
+ leapraid_disable_controller(adapter);
+ return PCI_ERS_RESULT_NEED_RESET;
+ case pci_channel_io_perm_failure:
+ adapter->access_ctrl.pcie_recovering = true;
+ leapraid_smart_polling_stop(adapter);
+ leapraid_check_scheduled_fault_stop(adapter);
+ leapraid_fw_log_stop(adapter);
+ leapraid_mq_polling_pause(adapter);
+ leapraid_clean_active_scsi_cmds(adapter);
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ return PCI_ERS_RESULT_NEED_RESET;
+}
+
+static pci_ers_result_t leapraid_pci_mmio_enabled(struct pci_dev *pdev)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+
+ if (!shost || !adapter) {
+		dev_err(&pdev->dev,
+			"pci mmio enabled: invalid shost or adapter\n");
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ dev_info(&pdev->dev, "%s: pci error mmio enabled\n",
+ adapter->adapter_attr.name);
+
+ return PCI_ERS_RESULT_RECOVERED;
+}
+
+static pci_ers_result_t leapraid_pci_slot_reset(struct pci_dev *pdev)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ int rc;
+
+ if (!shost || !adapter) {
+		dev_err(&pdev->dev,
+			"pci slot reset: invalid shost or adapter\n");
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ dev_err(&pdev->dev, "%s pci error slot reset\n",
+ adapter->adapter_attr.name);
+
+ adapter->access_ctrl.pcie_recovering = false;
+ adapter->pdev = pdev;
+ pci_restore_state(pdev);
+ if (leapraid_set_pcie_and_notification(adapter))
+ return PCI_ERS_RESULT_DISCONNECT;
+
+ dev_info(&pdev->dev, "%s: hard reset triggered by pci slot reset\n",
+ adapter->adapter_attr.name);
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ rc = leapraid_hard_reset_handler(adapter, FULL_RESET);
+ dev_info(&pdev->dev, "%s hard reset: %s\n",
+ adapter->adapter_attr.name, (rc == 0) ? "success" : "failed");
+
+ return (rc == 0) ? PCI_ERS_RESULT_RECOVERED :
+ PCI_ERS_RESULT_DISCONNECT;
+}
+
+static void leapraid_pci_resume(struct pci_dev *pdev)
+{
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev, "failed to resume\n");
+ return;
+ }
+
+	dev_info(&pdev->dev, "PCI error recovery: resuming\n");
+ pci_aer_clear_nonfatal_status(pdev);
+ leapraid_check_scheduled_fault_start(adapter);
+ leapraid_fw_log_start(adapter);
+ scsi_unblock_requests(adapter->shost);
+ leapraid_smart_polling_start(adapter);
+}
+
+MODULE_DEVICE_TABLE(pci, leapraid_pci_table);
+static struct pci_error_handlers leapraid_err_handler = {
+ .error_detected = leapraid_pci_error_detected,
+ .mmio_enabled = leapraid_pci_mmio_enabled,
+ .slot_reset = leapraid_pci_slot_reset,
+ .resume = leapraid_pci_resume,
+};
+
+#ifdef CONFIG_PM
+static int leapraid_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ pci_power_t device_state;
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev,
+ "suspend failed, invalid host or adapter\n");
+ return -ENXIO;
+ }
+
+ leapraid_smart_polling_stop(adapter);
+ leapraid_check_scheduled_fault_stop(adapter);
+ leapraid_fw_log_stop(adapter);
+ flush_scheduled_work();
+ scsi_block_requests(shost);
+ device_state = pci_choose_state(pdev, state);
+ leapraid_ir_shutdown(adapter);
+
+ dev_info(&pdev->dev, "entering PCI power state D%d, (slot=%s)\n",
+ device_state, pci_name(pdev));
+
+ pci_save_state(pdev);
+ leapraid_disable_controller(adapter);
+ pci_set_power_state(pdev, device_state);
+ return 0;
+}
+
+static int leapraid_resume(struct pci_dev *pdev)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ pci_power_t device_state = pdev->current_state;
+ int rc;
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev,
+ "resume failed, invalid host or adapter\n");
+ return -ENXIO;
+ }
+
+ dev_info(&pdev->dev,
+ "resuming device %s, previous state D%d\n",
+ pci_name(pdev), device_state);
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_enable_wake(pdev, PCI_D0, 0);
+ pci_restore_state(pdev);
+ adapter->pdev = pdev;
+ rc = leapraid_set_pcie_and_notification(adapter);
+ if (rc)
+ return rc;
+
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, PART_RESET);
+ scsi_unblock_requests(shost);
+ leapraid_check_scheduled_fault_start(adapter);
+ leapraid_fw_log_start(adapter);
+ leapraid_smart_polling_start(adapter);
+ return 0;
+}
+#endif /* CONFIG_PM */
+
+static struct pci_driver leapraid_driver = {
+ .name = LEAPRAID_DRIVER_NAME,
+ .id_table = leapraid_pci_table,
+ .probe = leapraid_probe,
+ .remove = leapraid_remove,
+ .shutdown = leapraid_shutdown,
+ .err_handler = &leapraid_err_handler,
+#ifdef CONFIG_PM
+ .suspend = leapraid_suspend,
+ .resume = leapraid_resume,
+#endif /* CONFIG_PM */
+};
+
+static int __init leapraid_init(void)
+{
+ int error;
+
+ pr_info("%s version %s loaded\n", LEAPRAID_DRIVER_NAME,
+ LEAPRAID_DRIVER_VERSION);
+
+ leapraid_transport_template =
+ sas_attach_transport(&leapraid_transport_functions);
+ if (!leapraid_transport_template)
+ return -ENODEV;
+
+ leapraid_ids = 0;
+
+ leapraid_ctl_init();
+
+ error = pci_register_driver(&leapraid_driver);
+ if (error)
+ sas_release_transport(leapraid_transport_template);
+
+ return error;
+}
+
+static void __exit leapraid_exit(void)
+{
+ pr_info("leapraid version %s unloading\n",
+ LEAPRAID_DRIVER_VERSION);
+
+ leapraid_ctl_exit();
+ pci_unregister_driver(&leapraid_driver);
+ sas_release_transport(leapraid_transport_template);
+}
+
+module_init(leapraid_init);
+module_exit(leapraid_exit);
diff --git a/drivers/scsi/leapraid/leapraid_transport.c b/drivers/scsi/leapraid/leapraid_transport.c
new file mode 100644
index 000000000000..d224449732a3
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid_transport.c
@@ -0,0 +1,1256 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+
+#include <scsi/scsi_host.h>
+
+#include "leapraid_func.h"
+
+static struct leapraid_topo_node *leapraid_transport_topo_node_by_sas_addr(
+ struct leapraid_adapter *adapter,
+ u64 sas_addr,
+ struct leapraid_card_port *card_port)
+{
+ if (adapter->dev_topo.card.sas_address == sas_addr)
+ return &adapter->dev_topo.card;
+ else
+ return leapraid_exp_find_by_sas_address(adapter,
+ sas_addr,
+ card_port);
+}
+
+static u8 leapraid_get_port_id_by_expander(struct leapraid_adapter *adapter,
+ struct sas_rphy *rphy)
+{
+ struct leapraid_topo_node *topo_node_exp;
+ unsigned long flags;
+ u8 port_id = 0xFF;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_for_each_entry(topo_node_exp, &adapter->dev_topo.exp_list, list) {
+ if (topo_node_exp->rphy == rphy) {
+ port_id = topo_node_exp->card_port->port_id;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ return port_id;
+}
+
+static u8 leapraid_get_port_id_by_end_dev(struct leapraid_adapter *adapter,
+ struct sas_rphy *rphy)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ u8 port_id = 0xFF;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr_and_rphy(adapter,
+ rphy->identify.sas_address,
+ rphy);
+ if (sas_dev) {
+ port_id = sas_dev->card_port->port_id;
+ leapraid_sdev_put(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ return port_id;
+}
+
+static u8 leapraid_transport_get_port_id_by_rphy(
+ struct leapraid_adapter *adapter,
+ struct sas_rphy *rphy)
+{
+ if (!rphy)
+ return 0xFF;
+
+ switch (rphy->identify.device_type) {
+ case SAS_EDGE_EXPANDER_DEVICE:
+ case SAS_FANOUT_EXPANDER_DEVICE:
+ return leapraid_get_port_id_by_expander(adapter, rphy);
+ case SAS_END_DEVICE:
+ return leapraid_get_port_id_by_end_dev(adapter, rphy);
+ default:
+ return 0xFF;
+ }
+}
+
+static enum sas_linkrate leapraid_transport_convert_phy_link_rate(u8 link_rate)
+{
+ unsigned int i;
+
+ const struct linkrate_map {
+ u8 in;
+ enum sas_linkrate out;
+ } linkrate_table[] = {
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5,
+ SAS_LINK_RATE_1_5_GBPS
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_3_0,
+ SAS_LINK_RATE_3_0_GBPS
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_6_0,
+ SAS_LINK_RATE_6_0_GBPS
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_12_0,
+			SAS_LINK_RATE_12_0_GBPS
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_PHY_DISABLED,
+ SAS_PHY_DISABLED
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_NEGOTIATION_FAILED,
+ SAS_LINK_RATE_FAILED
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_PORT_SELECTOR,
+ SAS_SATA_PORT_SELECTOR
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_SMP_RESETTING,
+ SAS_LINK_RATE_UNKNOWN
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_SATA_OOB_COMPLETE,
+ SAS_LINK_RATE_UNKNOWN
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_UNKNOWN_LINK_RATE,
+ SAS_LINK_RATE_UNKNOWN
+ },
+ };
+
+ for (i = 0; i < ARRAY_SIZE(linkrate_table); i++) {
+ if (linkrate_table[i].in == link_rate)
+ return linkrate_table[i].out;
+ }
+
+ return SAS_LINK_RATE_UNKNOWN;
+}
+
+static void leapraid_set_identify_protocol_flags(u32 dev_info,
+ struct sas_identify *identify)
+{
+ unsigned int i;
+
+ const struct protocol_mapping {
+ u32 mask;
+ u32 *target;
+ u32 protocol;
+ } mappings[] = {
+ {
+ LEAPRAID_DEVTYP_SSP_INIT,
+ &identify->initiator_port_protocols,
+ SAS_PROTOCOL_SSP
+ },
+ {
+ LEAPRAID_DEVTYP_STP_INIT,
+ &identify->initiator_port_protocols,
+ SAS_PROTOCOL_STP
+ },
+ {
+ LEAPRAID_DEVTYP_SMP_INIT,
+ &identify->initiator_port_protocols,
+ SAS_PROTOCOL_SMP
+ },
+ {
+ LEAPRAID_DEVTYP_SATA_HOST,
+ &identify->initiator_port_protocols,
+ SAS_PROTOCOL_SATA
+ },
+ {
+ LEAPRAID_DEVTYP_SSP_TGT,
+ &identify->target_port_protocols,
+ SAS_PROTOCOL_SSP
+ },
+ {
+ LEAPRAID_DEVTYP_STP_TGT,
+ &identify->target_port_protocols,
+ SAS_PROTOCOL_STP
+ },
+ {
+ LEAPRAID_DEVTYP_SMP_TGT,
+ &identify->target_port_protocols,
+ SAS_PROTOCOL_SMP
+ },
+ {
+ LEAPRAID_DEVTYP_SATA_DEV,
+ &identify->target_port_protocols,
+ SAS_PROTOCOL_SATA
+ },
+ };
+
+ for (i = 0; i < ARRAY_SIZE(mappings); i++)
+ if ((dev_info & mappings[i].mask) && mappings[i].target)
+ *mappings[i].target |= mappings[i].protocol;
+}
+
+static int leapraid_transport_set_identify(struct leapraid_adapter *adapter,
+ u16 hdl,
+ struct sas_identify *identify)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_pg0;
+ u32 dev_info;
+
+ if ((adapter->access_ctrl.shost_recovering &&
+ !adapter->scan_dev_desc.driver_loading) ||
+ adapter->access_ctrl.pcie_recovering)
+ return -EFAULT;
+
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = hdl;
+ if ((leapraid_op_config_page(adapter, &sas_dev_pg0, cfgp1,
+ cfgp2, GET_SAS_DEVICE_PG0)))
+ return -ENXIO;
+
+ memset(identify, 0, sizeof(struct sas_identify));
+ dev_info = le32_to_cpu(sas_dev_pg0.dev_info);
+ identify->sas_address = le64_to_cpu(sas_dev_pg0.sas_address);
+ identify->phy_identifier = sas_dev_pg0.phy_num;
+
+ switch (dev_info & LEAPRAID_DEVTYP_MASK_DEV_TYPE) {
+ case LEAPRAID_DEVTYP_NO_DEV:
+ identify->device_type = SAS_PHY_UNUSED;
+ break;
+ case LEAPRAID_DEVTYP_END_DEV:
+ identify->device_type = SAS_END_DEVICE;
+ break;
+ case LEAPRAID_DEVTYP_EDGE_EXPANDER:
+ identify->device_type = SAS_EDGE_EXPANDER_DEVICE;
+ break;
+ case LEAPRAID_DEVTYP_FANOUT_EXPANDER:
+ identify->device_type = SAS_FANOUT_EXPANDER_DEVICE;
+ break;
+ }
+
+ leapraid_set_identify_protocol_flags(dev_info, identify);
+
+ return 0;
+}
+
+static void leapraid_transport_exp_set_edev(struct leapraid_adapter *adapter,
+ void *data_out,
+ struct sas_expander_device *edev)
+{
+ struct leapraid_smp_passthrough_rep *smp_passthrough_rep;
+ struct leapraid_rep_manu_reply *rep_manu_reply;
+ u8 *component_id;
+ ssize_t __maybe_unused ret;
+
+ smp_passthrough_rep =
+ (void *)(&adapter->driver_cmds.transport_cmd.reply);
+ if (le16_to_cpu(smp_passthrough_rep->resp_data_len) !=
+ sizeof(struct leapraid_rep_manu_reply))
+ return;
+
+ rep_manu_reply = data_out + sizeof(struct leapraid_rep_manu_request);
+ ret = strscpy(edev->vendor_id, rep_manu_reply->vendor_identification,
+ SAS_EXPANDER_VENDOR_ID_LEN);
+ ret = strscpy(edev->product_id, rep_manu_reply->product_identification,
+ SAS_EXPANDER_PRODUCT_ID_LEN);
+ ret = strscpy(edev->product_rev,
+ rep_manu_reply->product_revision_level,
+ SAS_EXPANDER_PRODUCT_REV_LEN);
+ edev->level = rep_manu_reply->sas_format & 1;
+ if (edev->level) {
+ ret = strscpy(edev->component_vendor_id,
+ rep_manu_reply->component_vendor_identification,
+ SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN);
+
+ component_id = (u8 *)&rep_manu_reply->component_id;
+ edev->component_id = component_id[0] << 8 | component_id[1];
+ edev->component_revision_id =
+ rep_manu_reply->component_revision_level;
+ }
+}
+
+static int leapraid_transport_exp_report_manu(struct leapraid_adapter *adapter,
+ u64 sas_address,
+ struct sas_expander_device *edev,
+ u8 port_id)
+{
+ struct leapraid_smp_passthrough_req *smp_passthrough_req;
+ struct leapraid_rep_manu_request *rep_manu_request;
+ dma_addr_t h2c_dma_addr;
+ dma_addr_t c2h_dma_addr;
+ bool issue_reset = false;
+ void *data_out = NULL;
+ size_t c2h_size;
+ size_t h2c_size;
+ void *psge;
+ int rc = 0;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering) {
+ return -EFAULT;
+ }
+
+ mutex_lock(&adapter->driver_cmds.transport_cmd.mutex);
+ adapter->driver_cmds.transport_cmd.status = LEAPRAID_CMD_PENDING;
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc)
+ goto out;
+
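+	/* one coherent buffer: the SMP request sits first, the reply area directly behind it */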
+ h2c_size = sizeof(struct leapraid_rep_manu_request);
+ c2h_size = sizeof(struct leapraid_rep_manu_reply);
+ data_out = dma_alloc_coherent(&adapter->pdev->dev,
+ h2c_size + c2h_size,
+ &h2c_dma_addr,
+ GFP_ATOMIC);
+ if (!data_out) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ rep_manu_request = data_out;
+ rep_manu_request->smp_frame_type =
+ SMP_REPORT_MANUFACTURER_INFORMATION_FRAME_TYPE;
+ rep_manu_request->function = SMP_REPORT_MANUFACTURER_INFORMATION_FUNC;
+ rep_manu_request->allocated_response_length = 0;
+ rep_manu_request->request_length = 0;
+
+ smp_passthrough_req =
+ leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.transport_cmd.inter_taskid);
+ memset(smp_passthrough_req, 0,
+ sizeof(struct leapraid_smp_passthrough_req));
+ smp_passthrough_req->func = LEAPRAID_FUNC_SMP_PASSTHROUGH;
+ smp_passthrough_req->physical_port = port_id;
+ smp_passthrough_req->sas_address = cpu_to_le64(sas_address);
+ smp_passthrough_req->req_data_len = cpu_to_le16(h2c_size);
+ psge = &smp_passthrough_req->sgl;
+ c2h_dma_addr = h2c_dma_addr + sizeof(struct leapraid_rep_manu_request);
+ leapraid_build_ieee_sg(adapter, psge, h2c_dma_addr, h2c_size,
+ c2h_dma_addr, c2h_size);
+
+ init_completion(&adapter->driver_cmds.transport_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.transport_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.transport_cmd.done,
+ LEAPRAID_TRANSPORT_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.transport_cmd.status & LEAPRAID_CMD_DONE)) {
+ dev_err(&adapter->pdev->dev,
+ "%s: smp passthrough to exp timeout\n",
+ __func__);
+ if (!(adapter->driver_cmds.transport_cmd.status &
+ LEAPRAID_CMD_RESET))
+ issue_reset = true;
+
+ goto hard_reset;
+ }
+
+ if (adapter->driver_cmds.transport_cmd.status &
+ LEAPRAID_CMD_REPLY_VALID)
+ leapraid_transport_exp_set_edev(adapter, data_out, edev);
+
+hard_reset:
+ if (issue_reset) {
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ }
+out:
+ adapter->driver_cmds.transport_cmd.status = LEAPRAID_CMD_NOT_USED;
+ if (data_out)
+ dma_free_coherent(&adapter->pdev->dev, h2c_size + c2h_size,
+ data_out, h2c_dma_addr);
+
+ mutex_unlock(&adapter->driver_cmds.transport_cmd.mutex);
+ return rc;
+}
+
+static void leapraid_transport_del_port(struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port)
+{
+ dev_info(&sas_port->port->dev,
+ "remove port: sas addr=0x%016llx\n",
+ (unsigned long long)sas_port->remote_identify.sas_address);
+ switch (sas_port->remote_identify.device_type) {
+ case SAS_END_DEVICE:
+ leapraid_sas_dev_remove_by_sas_address(adapter,
+ sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+ break;
+ case SAS_EDGE_EXPANDER_DEVICE:
+ case SAS_FANOUT_EXPANDER_DEVICE:
+ leapraid_exp_rm(adapter, sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+ break;
+ default:
+ break;
+ }
+}
+
+static void leapraid_transport_del_phy(struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port,
+ struct leapraid_card_phy *card_phy)
+{
+ dev_info(&card_phy->phy->dev,
+ "remove phy: sas addr=0x%016llx, phy=%d\n",
+ (unsigned long long)sas_port->remote_identify.sas_address,
+ card_phy->phy_id);
+ list_del(&card_phy->port_siblings);
+ sas_port->phys_num--;
+ sas_port_delete_phy(sas_port->port, card_phy->phy);
+ card_phy->phy_is_assigned = false;
+}
+
+static void leapraid_transport_add_phy(struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port,
+ struct leapraid_card_phy *card_phy)
+{
+ dev_info(&card_phy->phy->dev,
+ "add phy: sas addr=0x%016llx, phy=%d\n",
+ (unsigned long long)sas_port->remote_identify.sas_address,
+ card_phy->phy_id);
+ list_add_tail(&card_phy->port_siblings, &sas_port->phy_list);
+ sas_port->phys_num++;
+ sas_port_add_phy(sas_port->port, card_phy->phy);
+ card_phy->phy_is_assigned = true;
+}
+
+void leapraid_transport_attach_phy_to_port(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node,
+ struct leapraid_card_phy *card_phy,
+ u64 sas_address,
+ struct leapraid_card_port *card_port)
+{
+ struct leapraid_sas_port *sas_port;
+ struct leapraid_card_phy *card_phy_srch;
+
+ if (card_phy->phy_is_assigned)
+ return;
+
+ if (!card_port)
+ return;
+
+ list_for_each_entry(sas_port, &topo_node->sas_port_list, port_list) {
+ if (sas_port->remote_identify.sas_address != sas_address)
+ continue;
+
+ if (sas_port->card_port != card_port)
+ continue;
+
+ list_for_each_entry(card_phy_srch, &sas_port->phy_list,
+ port_siblings) {
+ if (card_phy_srch == card_phy)
+ return;
+ }
+ leapraid_transport_add_phy(adapter, sas_port, card_phy);
+ return;
+ }
+}
+
+void leapraid_transport_detach_phy_to_port(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node,
+ struct leapraid_card_phy *target_card_phy)
+{
+ struct leapraid_sas_port *sas_port, *sas_port_next;
+ struct leapraid_card_phy *cur_card_phy;
+
+ if (!target_card_phy->phy_is_assigned)
+ return;
+
+ list_for_each_entry_safe(sas_port, sas_port_next,
+ &topo_node->sas_port_list, port_list) {
+ list_for_each_entry(cur_card_phy, &sas_port->phy_list,
+ port_siblings) {
+ if (cur_card_phy != target_card_phy)
+ continue;
+
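+			/* last phy on the port: tear the whole port down (unless the host is recovering) */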
+ if (sas_port->phys_num == 1 &&
+ !adapter->access_ctrl.shost_recovering)
+ leapraid_transport_del_port(adapter, sas_port);
+ else
+ leapraid_transport_del_phy(adapter, sas_port,
+ target_card_phy);
+ return;
+ }
+ }
+}
+
+static void leapraid_detach_phy_from_old_port(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node,
+ u64 sas_address,
+ struct leapraid_card_port *card_port)
+{
+ int i;
+
+ for (i = 0; i < topo_node->phys_num; i++) {
+ if (topo_node->card_phy[i].remote_identify.sas_address !=
+ sas_address ||
+ topo_node->card_phy[i].card_port != card_port)
+ continue;
+ if (topo_node->card_phy[i].phy_is_assigned)
+ leapraid_transport_detach_phy_to_port(adapter,
+ topo_node,
+ &topo_node->card_phy[i]);
+ }
+}
+
+static struct leapraid_sas_port *leapraid_prepare_sas_port(
+ struct leapraid_adapter *adapter,
+ u16 handle, u64 sas_address,
+ struct leapraid_card_port *card_port,
+ struct leapraid_topo_node **out_topo_node)
+{
+ struct leapraid_topo_node *topo_node;
+ struct leapraid_sas_port *sas_port;
+ unsigned long flags;
+
+ sas_port = kzalloc(sizeof(*sas_port), GFP_KERNEL);
+ if (!sas_port)
+ return NULL;
+
+ INIT_LIST_HEAD(&sas_port->port_list);
+ INIT_LIST_HEAD(&sas_port->phy_list);
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node = leapraid_transport_topo_node_by_sas_addr(adapter,
+ sas_address,
+ card_port);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ if (!topo_node) {
+ dev_err(&adapter->pdev->dev,
+ "%s: failed to find parent node for sas addr 0x%016llx!\n",
+ __func__, sas_address);
+ kfree(sas_port);
+ return NULL;
+ }
+
+ if (leapraid_transport_set_identify(adapter, handle,
+ &sas_port->remote_identify)) {
+ kfree(sas_port);
+ return NULL;
+ }
+
+ if (sas_port->remote_identify.device_type == SAS_PHY_UNUSED) {
+ kfree(sas_port);
+ return NULL;
+ }
+
+ sas_port->card_port = card_port;
+ *out_topo_node = topo_node;
+
+ return sas_port;
+}
+
+static int leapraid_bind_phys_and_vphy(struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port,
+ struct leapraid_topo_node *topo_node,
+ struct leapraid_card_port *card_port,
+ struct leapraid_vphy **out_vphy)
+{
+ struct leapraid_vphy *vphy = NULL;
+ int i;
+
+ for (i = 0; i < topo_node->phys_num; i++) {
+ if (topo_node->card_phy[i].remote_identify.sas_address !=
+ sas_port->remote_identify.sas_address ||
+ topo_node->card_phy[i].card_port != card_port)
+ continue;
+
+ list_add_tail(&topo_node->card_phy[i].port_siblings,
+ &sas_port->phy_list);
+ sas_port->phys_num++;
+
+ if (topo_node->hdl <= adapter->dev_topo.card.phys_num) {
+ if (!topo_node->card_phy[i].vphy) {
+ card_port->phy_mask |= BIT(i);
+ continue;
+ }
+
+ vphy = leapraid_get_vphy_by_phy(card_port, i);
+ if (!vphy)
+ return -1;
+ }
+ }
+
+ *out_vphy = vphy;
+ return sas_port->phys_num ? 0 : -1;
+}
+
+static struct sas_rphy *leapraid_create_and_register_rphy(
+ struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port,
+ struct leapraid_topo_node *topo_node,
+ struct leapraid_card_port *card_port,
+ struct leapraid_vphy *vphy)
+{
+ struct leapraid_sas_dev *sas_dev = NULL;
+ struct leapraid_card_phy *card_phy;
+ struct sas_port *port;
+ struct sas_rphy *rphy;
+
+ if (!topo_node->parent_dev)
+ return NULL;
+
+ port = sas_port_alloc_num(topo_node->parent_dev);
+ if (sas_port_add(port))
+ return NULL;
+
+ list_for_each_entry(card_phy, &sas_port->phy_list, port_siblings) {
+ sas_port_add_phy(port, card_phy->phy);
+ card_phy->phy_is_assigned = true;
+ card_phy->card_port = card_port;
+ }
+
+ if (sas_port->remote_identify.device_type == SAS_END_DEVICE) {
+ sas_dev = leapraid_get_sas_dev_by_addr(adapter,
+ sas_port->remote_identify.sas_address,
+ card_port);
+ if (!sas_dev)
+ return NULL;
+ sas_dev->pend_sas_rphy_add = 1;
+ rphy = sas_end_device_alloc(port);
+ sas_dev->rphy = rphy;
+
+ if (topo_node->hdl <= adapter->dev_topo.card.phys_num) {
+ if (!vphy)
+ card_port->sas_address = sas_dev->sas_addr;
+ else
+ vphy->sas_address = sas_dev->sas_addr;
+ }
+
+ } else {
+ rphy = sas_expander_alloc(port,
+ sas_port->remote_identify.device_type);
+ if (topo_node->hdl <= adapter->dev_topo.card.phys_num)
+ card_port->sas_address =
+ sas_port->remote_identify.sas_address;
+ }
+
+ rphy->identify = sas_port->remote_identify;
+
+ if (sas_rphy_add(rphy))
+ dev_err(&adapter->pdev->dev,
+ "%s: failed to add rphy\n", __func__);
+
+ if (sas_dev) {
+ sas_dev->pend_sas_rphy_add = 0;
+ leapraid_sdev_put(sas_dev);
+ }
+
+ sas_port->port = port;
+ return rphy;
+}
+
+struct leapraid_sas_port *leapraid_transport_port_add(
+ struct leapraid_adapter *adapter,
+ u16 hdl, u64 sas_address,
+ struct leapraid_card_port *card_port)
+{
+ struct leapraid_card_phy *card_phy, *card_phy_next;
+ struct leapraid_topo_node *topo_node = NULL;
+ struct leapraid_sas_port *sas_port = NULL;
+ struct leapraid_vphy *vphy = NULL;
+ struct sas_rphy *rphy = NULL;
+ unsigned long flags;
+
+ if (!card_port)
+ return NULL;
+
+ sas_port = leapraid_prepare_sas_port(adapter, hdl, sas_address,
+ card_port, &topo_node);
+ if (!sas_port)
+ return NULL;
+
+ leapraid_detach_phy_from_old_port(adapter,
+ topo_node,
+ sas_port->remote_identify.sas_address,
+ card_port);
+
+ if (leapraid_bind_phys_and_vphy(adapter, sas_port, topo_node,
+ card_port, &vphy))
+ goto out_fail;
+
+ rphy = leapraid_create_and_register_rphy(adapter, sas_port, topo_node,
+ card_port, vphy);
+ if (!rphy)
+ goto out_fail;
+
+ dev_info(&rphy->dev,
+ "%s: added dev: hdl=0x%04x, sas addr=0x%016llx\n",
+ __func__, hdl,
+ (unsigned long long)sas_port->remote_identify.sas_address);
+
+ sas_port->rphy = rphy;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_add_tail(&sas_port->port_list, &topo_node->sas_port_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+	if (sas_port->remote_identify.device_type ==
+	    SAS_EDGE_EXPANDER_DEVICE ||
+	    sas_port->remote_identify.device_type ==
+	    SAS_FANOUT_EXPANDER_DEVICE)
+ leapraid_transport_exp_report_manu(adapter,
+ sas_port->remote_identify.sas_address,
+ rphy_to_expander_device(rphy),
+ card_port->port_id);
+
+ return sas_port;
+
+out_fail:
+ list_for_each_entry_safe(card_phy, card_phy_next,
+ &sas_port->phy_list, port_siblings)
+ list_del(&card_phy->port_siblings);
+ kfree(sas_port);
+ return NULL;
+}
+
+static struct leapraid_sas_port *leapraid_find_and_remove_sas_port(
+ struct leapraid_topo_node *topo_node,
+ u64 sas_address,
+ struct leapraid_card_port *remove_card_port,
+ bool *found)
+{
+ struct leapraid_sas_port *sas_port, *sas_port_next;
+
+ list_for_each_entry_safe(sas_port, sas_port_next,
+ &topo_node->sas_port_list, port_list) {
+ if (sas_port->remote_identify.sas_address != sas_address)
+ continue;
+
+ if (sas_port->card_port != remove_card_port)
+ continue;
+
+ *found = true;
+ list_del(&sas_port->port_list);
+ return sas_port;
+ }
+ return NULL;
+}
+
+static void leapraid_cleanup_card_port_and_vphys(
+ struct leapraid_adapter *adapter,
+ u64 sas_address,
+ struct leapraid_card_port *remove_card_port)
+{
+ struct leapraid_card_port *card_port, *card_port_next;
+ struct leapraid_vphy *vphy, *vphy_next;
+
+ if (remove_card_port->vphys_mask) {
+ list_for_each_entry_safe(vphy, vphy_next,
+ &remove_card_port->vphys_list, list) {
+ if (vphy->sas_address != sas_address)
+ continue;
+
+ dev_info(&adapter->pdev->dev,
+ "%s: remove vphy: %p from port: %p, port_id=%d\n",
+ __func__, vphy, remove_card_port,
+ remove_card_port->port_id);
+
+ remove_card_port->vphys_mask &= ~vphy->phy_mask;
+ list_del(&vphy->list);
+ kfree(vphy);
+ }
+
+ if (!remove_card_port->vphys_mask &&
+ !remove_card_port->sas_address) {
+ dev_info(&adapter->pdev->dev,
+ "%s: remove empty hba_port: %p, port_id=%d\n",
+ __func__,
+ remove_card_port,
+ remove_card_port->port_id);
+ list_del(&remove_card_port->list);
+ kfree(remove_card_port);
+ remove_card_port = NULL;
+ }
+ }
+
+ list_for_each_entry_safe(card_port, card_port_next,
+ &adapter->dev_topo.card_port_list, list) {
+ if (card_port != remove_card_port)
+ continue;
+
+ if (card_port->sas_address != sas_address)
+ continue;
+
+ if (!remove_card_port->vphys_mask) {
+ dev_info(&adapter->pdev->dev,
+ "%s: remove hba_port: %p, port_id=%d\n",
+ __func__, card_port, card_port->port_id);
+ list_del(&card_port->list);
+ kfree(card_port);
+ } else {
+ dev_info(&adapter->pdev->dev,
+ "%s: clear sas_address of hba_port: %p, port_id=%d\n",
+ __func__, card_port, card_port->port_id);
+ remove_card_port->sas_address = 0;
+ }
+ break;
+ }
+}
+
+static void leapraid_clear_topo_node_phys(struct leapraid_topo_node *topo_node,
+ u64 sas_address)
+{
+ int i;
+
+ for (i = 0; i < topo_node->phys_num; i++) {
+ if (topo_node->card_phy[i].remote_identify.sas_address ==
+ sas_address) {
+ memset(&topo_node->card_phy[i].remote_identify, 0,
+ sizeof(struct sas_identify));
+ topo_node->card_phy[i].vphy = false;
+ }
+ }
+}
+
+void leapraid_transport_port_remove(struct leapraid_adapter *adapter,
+ u64 sas_address, u64 sas_address_parent,
+ struct leapraid_card_port *remove_card_port)
+{
+ struct leapraid_card_phy *card_phy, *card_phy_next;
+ struct leapraid_sas_port *sas_port = NULL;
+ struct leapraid_topo_node *topo_node;
+ unsigned long flags;
+ bool found = false;
+
+ if (!remove_card_port)
+ return;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+
+ topo_node = leapraid_transport_topo_node_by_sas_addr(adapter,
+ sas_address_parent,
+ remove_card_port);
+ if (!topo_node) {
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock,
+ flags);
+ return;
+ }
+
+ sas_port = leapraid_find_and_remove_sas_port(topo_node, sas_address,
+ remove_card_port, &found);
+
+ if (!found) {
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock,
+ flags);
+ return;
+ }
+
+ if (topo_node->hdl <= adapter->dev_topo.card.phys_num &&
+ adapter->adapter_attr.enable_mp)
+ leapraid_cleanup_card_port_and_vphys(adapter, sas_address,
+ remove_card_port);
+
+ leapraid_clear_topo_node_phys(topo_node, sas_address);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ list_for_each_entry_safe(card_phy, card_phy_next,
+ &sas_port->phy_list, port_siblings) {
+ card_phy->phy_is_assigned = false;
+ if (!adapter->access_ctrl.host_removing)
+ sas_port_delete_phy(sas_port->port, card_phy->phy);
+
+ list_del(&card_phy->port_siblings);
+ }
+
+ if (!adapter->access_ctrl.host_removing)
+ sas_port_delete(sas_port->port);
+
+ dev_info(&adapter->pdev->dev,
+ "%s: removed sas_port for sas addr=0x%016llx\n",
+ __func__, (unsigned long long)sas_address);
+
+ kfree(sas_port);
+}
+
+static void leapraid_init_sas_or_exp_phy(struct leapraid_adapter *adapter,
+ struct leapraid_card_phy *card_phy,
+ struct sas_phy *phy,
+ struct leapraid_sas_phy_p0 *phy_pg0,
+ struct leapraid_exp_p1 *exp_pg1)
+{
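+	/* Exactly one of phy_pg0 (host phy page) or exp_pg1 (expander page) is expected. */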
+ if (exp_pg1 && phy_pg0)
+ return;
+
+ if (!exp_pg1 && !phy_pg0)
+ return;
+
+ phy->identify = card_phy->identify;
+ phy->identify.phy_identifier = card_phy->phy_id;
+ phy->negotiated_linkrate = phy_pg0 ?
+ leapraid_transport_convert_phy_link_rate(
+ phy_pg0->neg_link_rate &
+ LEAPRAID_SAS_NEG_LINK_RATE_MASK_PHYSICAL) :
+ leapraid_transport_convert_phy_link_rate(
+ exp_pg1->neg_link_rate &
+ LEAPRAID_SAS_NEG_LINK_RATE_MASK_PHYSICAL);
+ phy->minimum_linkrate_hw = phy_pg0 ?
+ leapraid_transport_convert_phy_link_rate(
+ phy_pg0->hw_link_rate &
+ LEAPRAID_SAS_HWRATE_MIN_RATE_MASK) :
+ leapraid_transport_convert_phy_link_rate(
+ exp_pg1->hw_link_rate &
+ LEAPRAID_SAS_HWRATE_MIN_RATE_MASK);
+ phy->maximum_linkrate_hw = phy_pg0 ?
+ leapraid_transport_convert_phy_link_rate(
+ phy_pg0->hw_link_rate >> 4) :
+ leapraid_transport_convert_phy_link_rate(
+ exp_pg1->hw_link_rate >> 4);
+ phy->minimum_linkrate = phy_pg0 ?
+ leapraid_transport_convert_phy_link_rate(
+ phy_pg0->p_link_rate &
+ LEAPRAID_SAS_PRATE_MIN_RATE_MASK) :
+ leapraid_transport_convert_phy_link_rate(
+ exp_pg1->p_link_rate &
+ LEAPRAID_SAS_PRATE_MIN_RATE_MASK);
+ phy->maximum_linkrate = phy_pg0 ?
+ leapraid_transport_convert_phy_link_rate(
+ phy_pg0->p_link_rate >> 4) :
+ leapraid_transport_convert_phy_link_rate(
+ exp_pg1->p_link_rate >> 4);
+ phy->hostdata = card_phy->card_port;
+}
+
+void leapraid_transport_add_card_phy(struct leapraid_adapter *adapter,
+ struct leapraid_card_phy *card_phy,
+ struct leapraid_sas_phy_p0 *phy_pg0,
+ struct device *parent_dev)
+{
+ struct sas_phy *phy;
+
+ INIT_LIST_HEAD(&card_phy->port_siblings);
+ phy = sas_phy_alloc(parent_dev, card_phy->phy_id);
+ if (!phy) {
+ dev_err(&adapter->pdev->dev,
+ "%s sas_phy_alloc failed!\n", __func__);
+ return;
+ }
+
+ if ((leapraid_transport_set_identify(adapter, card_phy->hdl,
+ &card_phy->identify))) {
+ dev_err(&adapter->pdev->dev,
+ "%s set phy handle identify failed!\n", __func__);
+ sas_phy_free(phy);
+ return;
+ }
+
+ card_phy->attached_hdl = le16_to_cpu(phy_pg0->attached_dev_hdl);
+ if (card_phy->attached_hdl) {
+ if (leapraid_transport_set_identify(adapter,
+ card_phy->attached_hdl,
+ &card_phy->remote_identify)) {
+ dev_err(&adapter->pdev->dev,
+ "%s set phy attached handle identify failed!\n",
+ __func__);
+ sas_phy_free(phy);
+ return;
+ }
+ }
+
+ leapraid_init_sas_or_exp_phy(adapter, card_phy, phy, phy_pg0, NULL);
+
+ if ((sas_phy_add(phy))) {
+ sas_phy_free(phy);
+ return;
+ }
+
+ card_phy->phy = phy;
+}
+
+int leapraid_transport_add_exp_phy(struct leapraid_adapter *adapter,
+ struct leapraid_card_phy *card_phy,
+ struct leapraid_exp_p1 *exp_pg1,
+ struct device *parent_dev)
+{
+ struct sas_phy *phy;
+
+ INIT_LIST_HEAD(&card_phy->port_siblings);
+ phy = sas_phy_alloc(parent_dev, card_phy->phy_id);
+ if (!phy) {
+ dev_err(&adapter->pdev->dev,
+ "%s sas_phy_alloc failed!\n", __func__);
+ return -EFAULT;
+ }
+
+ if ((leapraid_transport_set_identify(adapter, card_phy->hdl,
+ &card_phy->identify))) {
+ dev_err(&adapter->pdev->dev,
+ "%s set phy hdl identify failed!\n", __func__);
+ sas_phy_free(phy);
+ return -EFAULT;
+ }
+
+ card_phy->attached_hdl = le16_to_cpu(exp_pg1->attached_dev_hdl);
+ if (card_phy->attached_hdl) {
+ if (leapraid_transport_set_identify(adapter,
+ card_phy->attached_hdl,
+ &card_phy->remote_identify)) {
+ dev_err(&adapter->pdev->dev,
+ "%s set phy attached hdl identify failed!\n",
+ __func__);
+			sas_phy_free(phy);
+			return -EFAULT;
+		}
+ }
+
+ leapraid_init_sas_or_exp_phy(adapter, card_phy, phy, NULL, exp_pg1);
+
+ if ((sas_phy_add(phy))) {
+ sas_phy_free(phy);
+ return -EFAULT;
+ }
+
+ card_phy->phy = phy;
+ return 0;
+}
+
+void leapraid_transport_update_links(struct leapraid_adapter *adapter,
+ u64 sas_address, u16 hdl, u8 phy_index,
+ u8 link_rate, struct leapraid_card_port *target_card_port)
+{
+ struct leapraid_topo_node *topo_node;
+ struct leapraid_card_phy *card_phy;
+ struct leapraid_card_port *card_port = NULL;
+ unsigned long flags;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering)
+ return;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node = leapraid_transport_topo_node_by_sas_addr(adapter,
+ sas_address,
+ target_card_port);
+ if (!topo_node) {
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock,
+ flags);
+ return;
+ }
+
+ card_phy = &topo_node->card_phy[phy_index];
+ card_phy->attached_hdl = hdl;
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ if (hdl && link_rate >= LEAPRAID_SAS_NEG_LINK_RATE_1_5) {
+ leapraid_transport_set_identify(adapter, hdl,
+ &card_phy->remote_identify);
+ if (topo_node->hdl <= adapter->dev_topo.card.phys_num &&
+ adapter->adapter_attr.enable_mp) {
+ list_for_each_entry(card_port,
+ &adapter->dev_topo.card_port_list,
+ list) {
+ if (card_port->sas_address == sas_address &&
+ card_port == target_card_port)
+ card_port->phy_mask |=
+ BIT(card_phy->phy_id);
+ }
+ }
+ leapraid_transport_attach_phy_to_port(adapter, topo_node,
+ card_phy,
+ card_phy->remote_identify.sas_address,
+ target_card_port);
+ } else {
+ memset(&card_phy->remote_identify, 0,
+ sizeof(struct sas_identify));
+ }
+
+ if (card_phy->phy)
+ card_phy->phy->negotiated_linkrate =
+ leapraid_transport_convert_phy_link_rate(link_rate);
+}
+
+static int leapraid_dma_map_buffer(struct device *dev, struct bsg_buffer *buf,
+ dma_addr_t *dma_addr,
+ size_t *dma_len, void **p)
+{
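+	/*
+	 * A multi-entry SG list is bounced through a single coherent
+	 * buffer; a single-entry list is DMA-mapped in place.
+	 */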
+ if (buf->sg_cnt > 1) {
+ *p = dma_alloc_coherent(dev, buf->payload_len, dma_addr,
+ GFP_KERNEL);
+ if (!*p)
+ return -ENOMEM;
+
+ *dma_len = buf->payload_len;
+ } else {
+ if (!dma_map_sg(dev, buf->sg_list, 1, DMA_BIDIRECTIONAL))
+ return -ENOMEM;
+
+ *dma_addr = sg_dma_address(buf->sg_list);
+ *dma_len = sg_dma_len(buf->sg_list);
+ *p = NULL;
+ }
+ return 0;
+}
+
+static void leapraid_dma_unmap_buffer(struct device *dev,
+ struct bsg_buffer *buf,
+ dma_addr_t dma_addr,
+ void *p)
+{
+ if (p)
+ dma_free_coherent(dev, buf->payload_len, p, dma_addr);
+ else
+ dma_unmap_sg(dev, buf->sg_list, 1, DMA_BIDIRECTIONAL);
+}
+
+static void leapraid_build_smp_task(struct leapraid_adapter *adapter,
+ struct sas_rphy *rphy,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size)
+{
+ struct leapraid_smp_passthrough_req *smp_passthrough_req;
+ void *psge;
+
+ smp_passthrough_req =
+ leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.transport_cmd.inter_taskid);
+ memset(smp_passthrough_req, 0, sizeof(*smp_passthrough_req));
+
+ smp_passthrough_req->func = LEAPRAID_FUNC_SMP_PASSTHROUGH;
+ smp_passthrough_req->physical_port =
+ leapraid_transport_get_port_id_by_rphy(adapter, rphy);
+ smp_passthrough_req->sas_address = (rphy) ?
+ cpu_to_le64(rphy->identify.sas_address) :
+ cpu_to_le64(adapter->dev_topo.card.sas_address);
+ smp_passthrough_req->req_data_len =
+ cpu_to_le16(h2c_size - LEAPRAID_SMP_FRAME_HEADER_SIZE);
+ psge = &smp_passthrough_req->sgl;
+ leapraid_build_ieee_sg(adapter, psge, h2c_dma_addr,
+ h2c_size - LEAPRAID_SMP_FRAME_HEADER_SIZE,
+ c2h_dma_addr,
+ c2h_size - LEAPRAID_SMP_FRAME_HEADER_SIZE);
+}
+
+static int leapraid_send_smp_req(struct leapraid_adapter *adapter)
+{
+ dev_info(&adapter->pdev->dev,
+ "%s: sending smp request\n", __func__);
+ init_completion(&adapter->driver_cmds.transport_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.transport_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.transport_cmd.done,
+ LEAPRAID_TRANSPORT_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.transport_cmd.status & LEAPRAID_CMD_DONE)) {
+ dev_err(&adapter->pdev->dev, "%s: timeout\n", __func__);
+ if (!(adapter->driver_cmds.transport_cmd.status &
+ LEAPRAID_CMD_RESET)) {
+ dev_info(&adapter->pdev->dev,
+ "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ return -ETIMEDOUT;
+ }
+ }
+
+ dev_info(&adapter->pdev->dev, "%s: smp request complete\n", __func__);
+ if (!(adapter->driver_cmds.transport_cmd.status &
+ LEAPRAID_CMD_REPLY_VALID)) {
+ dev_err(&adapter->pdev->dev,
+ "%s: smp request no reply\n", __func__);
+ return -ENXIO;
+ }
+
+ return 0;
+}
+
+static void leapraid_handle_smp_rep(struct leapraid_adapter *adapter,
+ struct bsg_job *job, void *addr_in,
+ unsigned int *reslen)
+{
+ struct leapraid_smp_passthrough_rep *smp_passthrough_rep;
+
+ smp_passthrough_rep =
+ (void *)(&adapter->driver_cmds.transport_cmd.reply);
+
+ dev_info(&adapter->pdev->dev, "%s: response data len=%d\n",
+ __func__, le16_to_cpu(smp_passthrough_rep->resp_data_len));
+
+ memcpy(job->reply, smp_passthrough_rep, sizeof(*smp_passthrough_rep));
+ job->reply_len = sizeof(*smp_passthrough_rep);
+ *reslen = le16_to_cpu(smp_passthrough_rep->resp_data_len);
+
+ if (addr_in)
+ sg_copy_from_buffer(job->reply_payload.sg_list,
+ job->reply_payload.sg_cnt, addr_in,
+ job->reply_payload.payload_len);
+}
+
+static void leapraid_transport_smp_handler(struct bsg_job *job,
+ struct Scsi_Host *shost,
+ struct sas_rphy *rphy)
+{
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ dma_addr_t c2h_dma_addr;
+ dma_addr_t h2c_dma_addr;
+ void *addr_in = NULL;
+ void *addr_out = NULL;
+ size_t c2h_size;
+ size_t h2c_size;
+ int rc;
+ unsigned int reslen = 0;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering) {
+ rc = -EFAULT;
+ goto done;
+ }
+
+ rc = mutex_lock_interruptible(&adapter->driver_cmds.transport_cmd.mutex);
+ if (rc)
+ goto done;
+
+ adapter->driver_cmds.transport_cmd.status = LEAPRAID_CMD_PENDING;
+ rc = leapraid_dma_map_buffer(&adapter->pdev->dev,
+ &job->request_payload,
+ &h2c_dma_addr, &h2c_size, &addr_out);
+ if (rc)
+ goto release_lock;
+
+ if (addr_out)
+ sg_copy_to_buffer(job->request_payload.sg_list,
+ job->request_payload.sg_cnt, addr_out,
+ job->request_payload.payload_len);
+
+ rc = leapraid_dma_map_buffer(&adapter->pdev->dev, &job->reply_payload,
+ &c2h_dma_addr, &c2h_size, &addr_in);
+ if (rc)
+ goto free_req_buf;
+
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc)
+ goto free_rep_buf;
+
+ leapraid_build_smp_task(adapter, rphy, h2c_dma_addr,
+ h2c_size, c2h_dma_addr, c2h_size);
+
+ rc = leapraid_send_smp_req(adapter);
+ if (rc)
+ goto free_rep_buf;
+
+ leapraid_handle_smp_rep(adapter, job, addr_in, &reslen);
+
+free_rep_buf:
+ leapraid_dma_unmap_buffer(&adapter->pdev->dev, &job->reply_payload,
+ c2h_dma_addr, addr_in);
+free_req_buf:
+ leapraid_dma_unmap_buffer(&adapter->pdev->dev, &job->request_payload,
+ h2c_dma_addr, addr_out);
+release_lock:
+ adapter->driver_cmds.transport_cmd.status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&adapter->driver_cmds.transport_cmd.mutex);
+done:
+ bsg_job_done(job, rc, reslen);
+}
+
+struct sas_function_template leapraid_transport_functions = {
+ .smp_handler = leapraid_transport_smp_handler,
+};
+
+struct scsi_transport_template *leapraid_transport_template;
--
2.25.1
From: Hao Dongdong <doubled(a)leap-io-kernel.com>
LeapIO inclusion
category: feature
bugzilla: https://atomgit.com/openeuler/kernel/issues/8338
------------------------------------------
The LeapRAID driver provides support for LeapRAID PCIe RAID controllers,
enabling communication between the host operating system, firmware, and
hardware for efficient storage management.
The main source files are organized as follows:
leapraid_os.c:
Implements the scsi_host_template functions, PCIe device probing, and
initialization routines, integrating the driver with the Linux SCSI
subsystem.
leapraid_func.c:
Provides the core functional routines that handle low-level interactions
with the controller firmware and hardware, including interrupt handling,
topology management, reset sequence processing, and other related
operations.
leapraid_app.c:
Implements the ioctl interface, giving user-space tools access to device
management and diagnostic operations.
leapraid_transport.c:
Interacts with the Linux SCSI transport layer to add SAS phys and ports.
leapraid_func.h:
Declares common data structures, constants, and function prototypes shared
across the driver.
leapraid.h:
Provides global constants, register mappings, and interface definitions
that facilitate communication between the driver and the controller
firmware.
The leapraid_probe function is called when the driver detects a supported
LeapRAID PCIe device. It allocates and initializes the Scsi_Host structure,
configures hardware and firmware interfaces, and registers the host adapter
with the Linux SCSI mid-layer.
After registration, the driver invokes scsi_scan_host() to initiate device
discovery. The firmware then reports discovered logical and physical
devices to the host through interrupt-driven events and synchronizes their
operational states.
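As a rough illustration only (not the actual leapraid_probe implementation),
the sketch below shows the generic PCI probe/registration pattern described
above; the names example_template and example_probe are placeholders, and the
hardware and firmware interface setup is elided:

#include <linux/module.h>
#include <linux/pci.h>
#include <scsi/scsi_host.h>

static struct scsi_host_template example_template = {
	.module		= THIS_MODULE,
	.name		= "example",
	.this_id	= -1,
};

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct Scsi_Host *shost;
	int rc;

	rc = pci_enable_device(pdev);
	if (rc)
		return rc;
	pci_set_master(pdev);

	/* Allocate the Scsi_Host, reserving room for per-adapter private data. */
	shost = scsi_host_alloc(&example_template, sizeof(void *));
	if (!shost) {
		pci_disable_device(pdev);
		return -ENOMEM;
	}

	/* Hardware and firmware interface setup would happen here. */

	/* Register the host adapter with the SCSI mid-layer. */
	rc = scsi_add_host(shost, &pdev->dev);
	if (rc) {
		scsi_host_put(shost);
		pci_disable_device(pdev);
		return rc;
	}

	/* Kick off discovery; devices are then reported via firmware events. */
	scsi_scan_host(shost);
	return 0;
}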
leapraid_adapter is the core data structure that encapsulates all resources
and runtime state information maintained during driver operation, described
as follows:
/**
 * struct leapraid_adapter - Main LeapRAID adapter structure
 * @list: List head for adapter management
 * @shost: SCSI host structure
 * @pdev: PCI device structure
 * @iomem_base: I/O memory mapped base address
 * @rep_msg_host_idx: Host index for reply messages
 * @mask_int: Interrupt masking flag
 * @timestamp_sync_cnt: Timestamp synchronization counter
 * @adapter_attr: Adapter attributes
 * @mem_desc: Memory descriptor
 * @driver_cmds: Driver commands
 * @dynamic_task_desc: Dynamic task descriptor
 * @fw_evt_s: Firmware event structure
 * @notification_desc: Notification descriptor
 * @reset_desc: Reset descriptor
 * @scan_dev_desc: Device scan descriptor
 * @access_ctrl: Access control
 * @fw_log_desc: Firmware log descriptor
 * @dev_topo: Device topology
 * @boot_devs: Boot devices
 * @smart_poll_desc: SMART polling descriptor
 */
struct leapraid_adapter {
	struct list_head list;
	struct Scsi_Host *shost;
	struct pci_dev *pdev;
	struct leapraid_reg_base __iomem *iomem_base;
	u32 rep_msg_host_idx;
	bool mask_int;
	u32 timestamp_sync_cnt;
	struct leapraid_adapter_attr adapter_attr;
	struct leapraid_mem_desc mem_desc;
	struct leapraid_driver_cmds driver_cmds;
	struct leapraid_dynamic_task_desc dynamic_task_desc;
	struct leapraid_fw_evt_struct fw_evt_s;
	struct leapraid_notification_desc notification_desc;
	struct leapraid_reset_desc reset_desc;
	struct leapraid_scan_dev_desc scan_dev_desc;
	struct leapraid_access_ctrl access_ctrl;
	struct leapraid_fw_log_desc fw_log_desc;
	struct leapraid_dev_topo dev_topo;
	struct leapraid_boot_devs boot_devs;
	struct leapraid_smart_poll_desc smart_poll_desc;
};
Signed-off-by: Hao Dongdong <doubled(a)leap-io-kernel.com>
---
arch/arm64/configs/openeuler_defconfig | 2 +-
arch/x86/configs/openeuler_defconfig | 2 +-
drivers/scsi/Kconfig | 2 +-
drivers/scsi/Makefile | 2 +-
drivers/scsi/leapioraid/Kconfig | 13 -
drivers/scsi/leapioraid/Makefile | 9 -
drivers/scsi/leapioraid/leapioraid.h | 2026 ----
drivers/scsi/leapioraid/leapioraid_app.c | 2253 ----
drivers/scsi/leapioraid/leapioraid_func.c | 7056 ------------
drivers/scsi/leapioraid/leapioraid_func.h | 1262 ---
drivers/scsi/leapioraid/leapioraid_os.c | 9825 -----------------
.../scsi/leapioraid/leapioraid_transport.c | 1926 ----
drivers/scsi/leapraid/Kconfig | 14 +
drivers/scsi/leapraid/Makefile | 10 +
drivers/scsi/leapraid/leapraid.h | 2070 ++++
drivers/scsi/leapraid/leapraid_app.c | 675 ++
drivers/scsi/leapraid/leapraid_func.c | 8264 ++++++++++++++
drivers/scsi/leapraid/leapraid_func.h | 1425 +++
drivers/scsi/leapraid/leapraid_os.c | 2365 ++++
drivers/scsi/leapraid/leapraid_transport.c | 1256 +++
20 files changed, 16083 insertions(+), 24374 deletions(-)
delete mode 100644 drivers/scsi/leapioraid/Kconfig
delete mode 100644 drivers/scsi/leapioraid/Makefile
delete mode 100644 drivers/scsi/leapioraid/leapioraid.h
delete mode 100644 drivers/scsi/leapioraid/leapioraid_app.c
delete mode 100644 drivers/scsi/leapioraid/leapioraid_func.c
delete mode 100644 drivers/scsi/leapioraid/leapioraid_func.h
delete mode 100644 drivers/scsi/leapioraid/leapioraid_os.c
delete mode 100644 drivers/scsi/leapioraid/leapioraid_transport.c
create mode 100644 drivers/scsi/leapraid/Kconfig
create mode 100644 drivers/scsi/leapraid/Makefile
create mode 100644 drivers/scsi/leapraid/leapraid.h
create mode 100644 drivers/scsi/leapraid/leapraid_app.c
create mode 100644 drivers/scsi/leapraid/leapraid_func.c
create mode 100644 drivers/scsi/leapraid/leapraid_func.h
create mode 100644 drivers/scsi/leapraid/leapraid_os.c
create mode 100644 drivers/scsi/leapraid/leapraid_transport.c
diff --git a/arch/arm64/configs/openeuler_defconfig b/arch/arm64/configs/openeuler_defconfig
index 8ecb06017ae9..5f3e7b52e1de 100644
--- a/arch/arm64/configs/openeuler_defconfig
+++ b/arch/arm64/configs/openeuler_defconfig
@@ -2628,7 +2628,7 @@ CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_PS3STOR=m
# CONFIG_SCSI_MPI3MR is not set
CONFIG_SCSI_3SNIC_SSSRAID=m
-CONFIG_SCSI_LEAPIORAID=m
+CONFIG_SCSI_LEAPRAID=m
CONFIG_SCSI_SMARTPQI=m
CONFIG_SCSI_HISI_RAID=m
# CONFIG_SCSI_HPTIOP is not set
diff --git a/arch/x86/configs/openeuler_defconfig b/arch/x86/configs/openeuler_defconfig
index eb98178d843b..6b333cae6b46 100644
--- a/arch/x86/configs/openeuler_defconfig
+++ b/arch/x86/configs/openeuler_defconfig
@@ -2587,7 +2587,7 @@ CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_PS3STOR=m
# CONFIG_SCSI_MPI3MR is not set
CONFIG_SCSI_3SNIC_SSSRAID=m
-CONFIG_SCSI_LEAPIORAID=m
+CONFIG_SCSI_LEAPRAID=m
CONFIG_SCSI_SMARTPQI=m
CONFIG_SCSI_HISI_RAID=m
# CONFIG_SCSI_HPTIOP is not set
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 6e2f48159a44..16ee71ad611b 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -449,6 +449,7 @@ source "drivers/scsi/aic7xxx/Kconfig.aic79xx"
source "drivers/scsi/aic94xx/Kconfig"
source "drivers/scsi/hisi_sas/Kconfig"
source "drivers/scsi/mvsas/Kconfig"
+source "drivers/scsi/leapraid/Kconfig"
config SCSI_MVUMI
tristate "Marvell UMI driver"
@@ -491,7 +492,6 @@ source "drivers/scsi/mpt3sas/Kconfig"
source "drivers/scsi/linkdata/Kconfig"
source "drivers/scsi/mpi3mr/Kconfig"
source "drivers/scsi/sssraid/Kconfig"
-source "drivers/scsi/leapioraid/Kconfig"
source "drivers/scsi/smartpqi/Kconfig"
source "drivers/scsi/hisi_raid/Kconfig"
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index 26fcfc54dc7b..69a9bd41e39f 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -103,7 +103,6 @@ obj-$(CONFIG_SCSI_PS3STOR) += linkdata/
obj-$(CONFIG_SCSI_HISI_RAID) += hisi_raid/
obj-$(CONFIG_SCSI_MPI3MR) += mpi3mr/
obj-$(CONFIG_SCSI_3SNIC_SSSRAID) += sssraid/
-obj-$(CONFIG_SCSI_LEAPIORAID) += leapioraid/
obj-$(CONFIG_SCSI_ACARD) += atp870u.o
obj-$(CONFIG_SCSI_SUNESP) += esp_scsi.o sun_esp.o
obj-$(CONFIG_SCSI_INITIO) += initio.o
@@ -155,6 +154,7 @@ obj-$(CONFIG_CHR_DEV_SCH) += ch.o
obj-$(CONFIG_SCSI_ENCLOSURE) += ses.o
obj-$(CONFIG_SCSI_HISI_SAS) += hisi_sas/
+obj-$(CONFIG_SCSI_LEAPRAID) += leapraid/
# This goes last, so that "real" scsi devices probe earlier
obj-$(CONFIG_SCSI_DEBUG) += scsi_debug.o
diff --git a/drivers/scsi/leapioraid/Kconfig b/drivers/scsi/leapioraid/Kconfig
deleted file mode 100644
index a309d530284b..000000000000
--- a/drivers/scsi/leapioraid/Kconfig
+++ /dev/null
@@ -1,13 +0,0 @@
-#
-# Kernel configuration file for the LEAPIORAID
-#
-
-config SCSI_LEAPIORAID
- tristate "LeapIO RAID Adapter"
- depends on PCI && SCSI
- select SCSI_SAS_ATTRS
- select RAID_ATTRS
- select IRQ_POLL
- help
- This driver supports LEAPIO RAID controller, which supports PCI Express Gen4 interface
- and supports SAS/SATA HDD/SSD.
diff --git a/drivers/scsi/leapioraid/Makefile b/drivers/scsi/leapioraid/Makefile
deleted file mode 100644
index 81f286f44bd0..000000000000
--- a/drivers/scsi/leapioraid/Makefile
+++ /dev/null
@@ -1,9 +0,0 @@
-#
-# Makefile for the LEAPIORAID drivers.
-#
-
-obj-$(CONFIG_SCSI_LEAPIORAID) += leapioraid.o
-leapioraid-objs += leapioraid_func.o \
- leapioraid_os.o \
- leapioraid_transport.o \
- leapioraid_app.o
\ No newline at end of file
diff --git a/drivers/scsi/leapioraid/leapioraid.h b/drivers/scsi/leapioraid/leapioraid.h
deleted file mode 100644
index 30908fffe43b..000000000000
--- a/drivers/scsi/leapioraid/leapioraid.h
+++ /dev/null
@@ -1,2026 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- *
- * Copyright 2000-2020 Broadcom Inc. All rights reserved.
- *
- * Copyright (C) 2024 LeapIO Tech Inc.
- *
- */
-
-#ifndef LEAPIORAID_H
-#define LEAPIORAID_H
-
-typedef u8 U8;
-typedef __le16 U16;
-typedef __le32 U32;
-typedef __le64 U64 __aligned(4);
-
-#define LEAPIORAID_IOC_STATE_RESET (0x00000000)
-#define LEAPIORAID_IOC_STATE_READY (0x10000000)
-#define LEAPIORAID_IOC_STATE_OPERATIONAL (0x20000000)
-#define LEAPIORAID_IOC_STATE_FAULT (0x40000000)
-#define LEAPIORAID_IOC_STATE_COREDUMP (0x50000000)
-#define LEAPIORAID_IOC_STATE_MASK (0xF0000000)
-
-struct LeapioraidSysInterfaceRegs_t {
- U32 Doorbell;
- U32 WriteSequence;
- U32 HostDiagnostic;
- U32 Reserved1;
- U32 DiagRWData;
- U32 DiagRWAddressLow;
- U32 DiagRWAddressHigh;
- U32 Reserved2[5];
- U32 HostInterruptStatus;
- U32 HostInterruptMask;
- U32 DCRData;
- U32 DCRAddress;
- U32 Reserved3[2];
- U32 ReplyFreeHostIndex;
- U32 Reserved4[8];
- U32 ReplyPostHostIndex;
- U32 Reserved5;
- U32 HCBSize;
- U32 HCBAddressLow;
- U32 HCBAddressHigh;
- U32 Reserved6[12];
- U32 Scratchpad[4];
- U32 RequestDescriptorPostLow;
- U32 RequestDescriptorPostHigh;
- U32 AtomicRequestDescriptorPost;
- U32 IocLogBufPosition;
- U32 HostLogBufPosition;
- U32 Reserved7[11];
-};
-
-#define LEAPIORAID_DOORBELL_USED (0x08000000)
-#define LEAPIORAID_DOORBELL_DATA_MASK (0x0000FFFF)
-#define LEAPIORAID_DOORBELL_FUNCTION_SHIFT (24)
-#define LEAPIORAID_DOORBELL_ADD_DWORDS_SHIFT (16)
-
-#define LEAPIORAID_DIAG_RESET_ADAPTER (0x00000004)
-
-#define LEAPIORAID_HIS_SYS2IOC_DB_STATUS (0x80000000)
-#define LEAPIORAID_HIS_IOC2SYS_DB_STATUS (0x00000001)
-
-#define LEAPIORAID_RPHI_MSIX_INDEX_SHIFT (24)
-
-#define LEAPIORAID_REQ_DESCRIPT_FLAGS_SCSI_IO (0x00)
-#define LEAPIORAID_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY (0x06)
-#define LEAPIORAID_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE (0x08)
-#define LEAPIORAID_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO (0x0C)
-
-struct LEAPIORAID_DEFAULT_REQUEST_DESCRIPTOR {
- U8 RequestFlags;
- U8 MSIxIndex;
- U16 SMID;
- U16 LMID;
- U16 DescriptorTypeDependent;
-};
-
-struct LEAPIORAID_HIGH_PRIORITY_REQUEST_DESCRIPTOR {
- U8 RequestFlags;
- U8 MSIxIndex;
- U16 SMID;
- U16 LMID;
- U16 Reserved1;
-};
-
-struct LEAPIORAID_SCSI_IO_REQUEST_DESCRIPTOR {
- U8 RequestFlags;
- U8 MSIxIndex;
- U16 SMID;
- U16 LMID;
- U16 DevHandle;
-};
-
-typedef
-struct LEAPIORAID_SCSI_IO_REQUEST_DESCRIPTOR
- LEAPIORAID_FP_SCSI_IO_REQUEST_DESCRIPTOR;
-
-union LeapioraidReqDescUnion_t {
- struct LEAPIORAID_DEFAULT_REQUEST_DESCRIPTOR Default;
- struct LEAPIORAID_HIGH_PRIORITY_REQUEST_DESCRIPTOR HighPriority;
- struct LEAPIORAID_SCSI_IO_REQUEST_DESCRIPTOR SCSIIO;
- LEAPIORAID_FP_SCSI_IO_REQUEST_DESCRIPTOR FastPathSCSIIO;
- U64 Words;
-};
-
-struct LeapioraidAtomicReqDesc_t {
- U8 RequestFlags;
- U8 MSIxIndex;
- U16 SMID;
-};
-
-#define LEAPIORAID_RPY_DESCRIPT_FLAGS_TYPE_MASK (0x0F)
-#define LEAPIORAID_RPY_DESCRIPT_FLAGS_SCSI_IO_SUCCESS (0x00)
-#define LEAPIORAID_RPY_DESCRIPT_FLAGS_ADDRESS_REPLY (0x01)
-#define LEAPIORAID_RPY_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO_SUCCESS (0x06)
-#define LEAPIORAID_RPY_DESCRIPT_FLAGS_UNUSED (0x0F)
-
-struct LeapioraidDefaultRepDesc_t {
- U8 ReplyFlags;
- U8 MSIxIndex;
- U16 DescriptorTypeDependent1;
- U32 DescriptorTypeDependent2;
-};
-
-struct LEAPIORAID_ADDRESS_REPLY_DESCRIPTOR {
- U8 ReplyFlags;
- U8 MSIxIndex;
- U16 SMID;
- U32 ReplyFrameAddress;
-};
-
-struct LEAPIORAID_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR {
- U8 ReplyFlags;
- U8 MSIxIndex;
- U16 SMID;
- U16 TaskTag;
- U16 Reserved1;
-};
-
-typedef
-struct LEAPIORAID_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR
- LEAPIORAID_FP_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR;
-
-union LeapioraidRepDescUnion_t {
- struct LeapioraidDefaultRepDesc_t Default;
- struct LEAPIORAID_ADDRESS_REPLY_DESCRIPTOR AddressReply;
- struct LEAPIORAID_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR SCSIIOSuccess;
- LEAPIORAID_FP_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR FastPathSCSIIOSuccess;
- U64 Words;
-};
-
-#define LEAPIORAID_FUNC_SCSI_IO_REQUEST (0x00)
-#define LEAPIORAID_FUNC_SCSI_TASK_MGMT (0x01)
-#define LEAPIORAID_FUNC_IOC_INIT (0x02)
-#define LEAPIORAID_FUNC_IOC_FACTS (0x03)
-#define LEAPIORAID_FUNC_CONFIG (0x04)
-#define LEAPIORAID_FUNC_PORT_FACTS (0x05)
-#define LEAPIORAID_FUNC_PORT_ENABLE (0x06)
-#define LEAPIORAID_FUNC_EVENT_NOTIFICATION (0x07)
-#define LEAPIORAID_FUNC_EVENT_ACK (0x08)
-#define LEAPIORAID_FUNC_FW_DOWNLOAD (0x09)
-#define LEAPIORAID_FUNC_FW_UPLOAD (0x12)
-#define LEAPIORAID_FUNC_RAID_ACTION (0x15)
-#define LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH (0x16)
-#define LEAPIORAID_FUNC_SCSI_ENCLOSURE_PROCESSOR (0x18)
-#define LEAPIORAID_FUNC_SMP_PASSTHROUGH (0x1A)
-#define LEAPIORAID_FUNC_SAS_IO_UNIT_CONTROL (0x1B)
-#define LEAPIORAID_FUNC_IO_UNIT_CONTROL (0x1B)
-#define LEAPIORAID_FUNC_SATA_PASSTHROUGH (0x1C)
-#define LEAPIORAID_FUNC_IOC_MESSAGE_UNIT_RESET (0x40)
-#define LEAPIORAID_FUNC_HANDSHAKE (0x42)
-#define LEAPIORAID_FUNC_LOG_INIT (0x57)
-
-#define LEAPIORAID_IOCSTATUS_MASK (0x7FFF)
-#define LEAPIORAID_IOCSTATUS_SUCCESS (0x0000)
-#define LEAPIORAID_IOCSTATUS_INVALID_FUNCTION (0x0001)
-#define LEAPIORAID_IOCSTATUS_BUSY (0x0002)
-#define LEAPIORAID_IOCSTATUS_INVALID_SGL (0x0003)
-#define LEAPIORAID_IOCSTATUS_INTERNAL_ERROR (0x0004)
-#define LEAPIORAID_IOCSTATUS_INVALID_VPID (0x0005)
-#define LEAPIORAID_IOCSTATUS_INSUFFICIENT_RESOURCES (0x0006)
-#define LEAPIORAID_IOCSTATUS_INVALID_FIELD (0x0007)
-#define LEAPIORAID_IOCSTATUS_INVALID_STATE (0x0008)
-#define LEAPIORAID_IOCSTATUS_OP_STATE_NOT_SUPPORTED (0x0009)
-#define LEAPIORAID_IOCSTATUS_INSUFFICIENT_POWER (0x000A)
-
-#define LEAPIORAID_IOCSTATUS_CONFIG_INVALID_ACTION (0x0020)
-#define LEAPIORAID_IOCSTATUS_CONFIG_INVALID_TYPE (0x0021)
-#define LEAPIORAID_IOCSTATUS_CONFIG_INVALID_PAGE (0x0022)
-#define LEAPIORAID_IOCSTATUS_CONFIG_INVALID_DATA (0x0023)
-#define LEAPIORAID_IOCSTATUS_CONFIG_NO_DEFAULTS (0x0024)
-#define LEAPIORAID_IOCSTATUS_CONFIG_CANT_COMMIT (0x0025)
-
-#define LEAPIORAID_IOCSTATUS_SCSI_RECOVERED_ERROR (0x0040)
-#define LEAPIORAID_IOCSTATUS_SCSI_INVALID_DEVHANDLE (0x0042)
-#define LEAPIORAID_IOCSTATUS_SCSI_DEVICE_NOT_THERE (0x0043)
-#define LEAPIORAID_IOCSTATUS_SCSI_DATA_OVERRUN (0x0044)
-#define LEAPIORAID_IOCSTATUS_SCSI_DATA_UNDERRUN (0x0045)
-#define LEAPIORAID_IOCSTATUS_SCSI_IO_DATA_ERROR (0x0046)
-#define LEAPIORAID_IOCSTATUS_SCSI_PROTOCOL_ERROR (0x0047)
-#define LEAPIORAID_IOCSTATUS_SCSI_TASK_TERMINATED (0x0048)
-#define LEAPIORAID_IOCSTATUS_SCSI_RESIDUAL_MISMATCH (0x0049)
-#define LEAPIORAID_IOCSTATUS_SCSI_TASK_MGMT_FAILED (0x004A)
-#define LEAPIORAID_IOCSTATUS_SCSI_IOC_TERMINATED (0x004B)
-#define LEAPIORAID_IOCSTATUS_SCSI_EXT_TERMINATED (0x004C)
-
-#define LEAPIORAID_IOCSTATUS_EEDP_GUARD_ERROR (0x004D)
-#define LEAPIORAID_IOCSTATUS_EEDP_REF_TAG_ERROR (0x004E)
-#define LEAPIORAID_IOCSTATUS_EEDP_APP_TAG_ERROR (0x004F)
-
-#define LEAPIORAID_IOCSTATUS_TARGET_INVALID_IO_INDEX (0x0062)
-#define LEAPIORAID_IOCSTATUS_TARGET_ABORTED (0x0063)
-#define LEAPIORAID_IOCSTATUS_TARGET_NO_CONN_RETRYABLE (0x0064)
-#define LEAPIORAID_IOCSTATUS_TARGET_NO_CONNECTION (0x0065)
-#define LEAPIORAID_IOCSTATUS_TARGET_XFER_COUNT_MISMATCH (0x006A)
-#define LEAPIORAID_IOCSTATUS_TARGET_DATA_OFFSET_ERROR (0x006D)
-#define LEAPIORAID_IOCSTATUS_TARGET_TOO_MUCH_WRITE_DATA (0x006E)
-#define LEAPIORAID_IOCSTATUS_TARGET_IU_TOO_SHORT (0x006F)
-#define LEAPIORAID_IOCSTATUS_TARGET_ACK_NAK_TIMEOUT (0x0070)
-#define LEAPIORAID_IOCSTATUS_TARGET_NAK_RECEIVED (0x0071)
-
-#define LEAPIORAID_IOCSTATUS_SAS_SMP_REQUEST_FAILED (0x0090)
-#define LEAPIORAID_IOCSTATUS_SAS_SMP_DATA_OVERRUN (0x0091)
-#define LEAPIORAID_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE (0x8000)
-
-struct LeapioraidReqHeader_t {
- U16 FunctionDependent1;
- U8 ChainOffset;
- U8 Function;
- U16 FunctionDependent2;
- U8 FunctionDependent3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved1;
-};
-
-struct LeapioraidDefaultRep_t {
- U16 FunctionDependent1;
- U8 MsgLength;
- U8 Function;
- U16 FunctionDependent2;
- U8 FunctionDependent3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved1;
- U16 FunctionDependent5;
- U16 IOCStatus;
- U32 IOCLogInfo;
-};
-
-struct LEAPIORAID_VERSION_STRUCT {
- U8 Dev;
- U8 Unit;
- U8 Minor;
- U8 Major;
-};
-
-union LEAPIORAID_VERSION_UNION {
- struct LEAPIORAID_VERSION_STRUCT Struct;
- U32 Word;
-};
-
-struct LeapioSGESimple32_t {
- U32 FlagsLength;
- U32 Address;
-};
-
-struct LeapioSGESimple64_t {
- U32 FlagsLength;
- U64 Address;
-};
-
-struct LEAPIORAID_SGE_SIMPLE_UNION {
- U32 FlagsLength;
- union {
- U32 Address32;
- U64 Address64;
- } u;
-};
-
-struct LEAPIORAID_SGE_CHAIN_UNION {
- U16 Length;
- U8 NextChainOffset;
- U8 Flags;
- union {
- U32 Address32;
- U64 Address64;
- } u;
-};
-
-#define LEAPIORAID_SGE_FLAGS_LAST_ELEMENT (0x80)
-#define LEAPIORAID_SGE_FLAGS_END_OF_BUFFER (0x40)
-#define LEAPIORAID_SGE_FLAGS_END_OF_LIST (0x01)
-#define LEAPIORAID_SGE_FLAGS_SHIFT (24)
-#define LEAPIORAID_SGE_FLAGS_SIMPLE_ELEMENT (0x10)
-#define LEAPIORAID_SGE_FLAGS_SYSTEM_ADDRESS (0x00)
-#define LEAPIORAID_SGE_FLAGS_HOST_TO_IOC (0x04)
-#define LEAPIORAID_SGE_FLAGS_32_BIT_ADDRESSING (0x00)
-#define LEAPIORAID_SGE_FLAGS_64_BIT_ADDRESSING (0x02)
-
-struct LEAPIORAID_IEEE_SGE_SIMPLE32 {
- U32 Address;
- U32 FlagsLength;
-};
-
-struct LEAPIORAID_IEEE_SGE_SIMPLE64 {
- U64 Address;
- U32 Length;
- U16 Reserved1;
- U8 Reserved2;
- U8 Flags;
-};
-
-union LEAPIORAID_IEEE_SGE_SIMPLE_UNION {
- struct LEAPIORAID_IEEE_SGE_SIMPLE32 Simple32;
- struct LEAPIORAID_IEEE_SGE_SIMPLE64 Simple64;
-};
-
-union LEAPIORAID_IEEE_SGE_CHAIN_UNION {
- struct LEAPIORAID_IEEE_SGE_SIMPLE32 Chain32;
- struct LEAPIORAID_IEEE_SGE_SIMPLE64 Chain64;
-};
-
-struct LEAPIORAID_IEEE_SGE_CHAIN64 {
- U64 Address;
- U32 Length;
- U16 Reserved1;
- U8 NextChainOffset;
- U8 Flags;
-};
-
-union LEAPIORAID_IEEE_SGE_IO_UNION {
- struct LEAPIORAID_IEEE_SGE_SIMPLE64 IeeeSimple;
- struct LEAPIORAID_IEEE_SGE_CHAIN64 IeeeChain;
-};
-
-#define LEAPIORAID_IEEE_SGE_FLAGS_END_OF_LIST (0x40)
-#define LEAPIORAID_IEEE_SGE_FLAGS_SIMPLE_ELEMENT (0x00)
-#define LEAPIORAID_IEEE_SGE_FLAGS_CHAIN_ELEMENT (0x80)
-#define LEAPIORAID_IEEE_SGE_FLAGS_SYSTEM_ADDR (0x00)
-
-union LEAPIORAID_SIMPLE_SGE_UNION {
- struct LEAPIORAID_SGE_SIMPLE_UNION LeapioSimple;
- union LEAPIORAID_IEEE_SGE_SIMPLE_UNION IeeeSimple;
-};
-
-union LEAPIORAID_SGE_IO_UNION {
- struct LEAPIORAID_SGE_SIMPLE_UNION LeapioSimple;
- struct LEAPIORAID_SGE_CHAIN_UNION LeapioChain;
- union LEAPIORAID_IEEE_SGE_SIMPLE_UNION IeeeSimple;
- union LEAPIORAID_IEEE_SGE_CHAIN_UNION IeeeChain;
-};
-
-struct LEAPIORAID_CONFIG_PAGE_HEADER {
- U8 PageVersion;
- U8 PageLength;
- U8 PageNumber;
- U8 PageType;
-};
-
-struct LEAPIORAID_CONFIG_EXTENDED_PAGE_HEADER {
- U8 PageVersion;
- U8 Reserved1;
- U8 PageNumber;
- U8 PageType;
- U16 ExtPageLength;
- U8 ExtPageType;
- U8 Reserved2;
-};
-
-#define LEAPIORAID_CONFIG_PAGETYPE_IO_UNIT (0x00)
-#define LEAPIORAID_CONFIG_PAGETYPE_IOC (0x01)
-#define LEAPIORAID_CONFIG_PAGETYPE_BIOS (0x02)
-#define LEAPIORAID_CONFIG_PAGETYPE_RAID_VOLUME (0x08)
-#define LEAPIORAID_CONFIG_PAGETYPE_MANUFACTURING (0x09)
-#define LEAPIORAID_CONFIG_PAGETYPE_RAID_PHYSDISK (0x0A)
-#define LEAPIORAID_CONFIG_PAGETYPE_EXTENDED (0x0F)
-#define LEAPIORAID_CONFIG_PAGETYPE_MASK (0x0F)
-#define LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_IO_UNIT (0x10)
-#define LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_EXPANDER (0x11)
-#define LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_DEVICE (0x12)
-#define LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_PHY (0x13)
-#define LEAPIORAID_CONFIG_EXTPAGETYPE_LOG (0x14)
-#define LEAPIORAID_CONFIG_EXTPAGETYPE_ENCLOSURE (0x15)
-#define LEAPIORAID_CONFIG_EXTPAGETYPE_RAID_CONFIG (0x16)
-#define LEAPIORAID_CONFIG_EXTPAGETYPE_DRIVER_MAPPING (0x17)
-#define LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_PORT (0x18)
-#define LEAPIORAID_CONFIG_EXTPAGETYPE_EXT_MANUFACTURING (0x1A)
-
-#define LEAPIORAID_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE (0x00000000)
-#define LEAPIORAID_RAID_VOLUME_PGAD_FORM_HANDLE (0x10000000)
-
-#define LEAPIORAID_PHYSDISK_PGAD_FORM_GET_NEXT_PHYSDISKNUM (0x00000000)
-#define LEAPIORAID_PHYSDISK_PGAD_FORM_PHYSDISKNUM (0x10000000)
-
-#define LEAPIORAID_SAS_EXPAND_PGAD_FORM_GET_NEXT_HNDL (0x00000000)
-#define LEAPIORAID_SAS_EXPAND_PGAD_FORM_HNDL_PHY_NUM (0x10000000)
-#define LEAPIORAID_SAS_EXPAND_PGAD_FORM_HNDL (0x20000000)
-#define LEAPIORAID_SAS_EXPAND_PGAD_PHYNUM_SHIFT (16)
-#define LEAPIORAID_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE (0x00000000)
-#define LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE (0x20000000)
-#define LEAPIORAID_SAS_PHY_PGAD_FORM_PHY_NUMBER (0x00000000)
-#define LEAPIORAID_SAS_ENCLOS_PGAD_FORM_GET_NEXT_HANDLE (0x00000000)
-#define LEAPIORAID_SAS_ENCLOS_PGAD_FORM_HANDLE (0x10000000)
-#define LEAPIORAID_RAID_PGAD_FORM_GET_NEXT_CONFIGNUM (0x00000000)
-
-struct LeapioraidCfgReq_t {
- U8 Action;
- U8 SGLFlags;
- U8 ChainOffset;
- U8 Function;
- U16 ExtPageLength;
- U8 ExtPageType;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved1;
- U8 Reserved2;
- U8 ProxyVF_ID;
- U16 Reserved4;
- U32 Reserved3;
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U32 PageAddress;
- union LEAPIORAID_SGE_IO_UNION PageBufferSGE;
-};
-
-#define LEAPIORAID_CONFIG_ACTION_PAGE_HEADER (0x00)
-#define LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT (0x01)
-#define LEAPIORAID_CONFIG_ACTION_PAGE_WRITE_CURRENT (0x02)
-#define LEAPIORAID_CONFIG_ACTION_PAGE_WRITE_NVRAM (0x04)
-
-struct LeapioraidCfgRep_t {
- U8 Action;
- U8 SGLFlags;
- U8 MsgLength;
- U8 Function;
- U16 ExtPageLength;
- U8 ExtPageType;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved1;
- U16 Reserved2;
- U16 IOCStatus;
- U32 IOCLogInfo;
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
-};
-
-struct LeapioraidManP0_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U8 ChipName[16];
- U8 ChipRevision[8];
- U8 BoardName[16];
- U8 BoardAssembly[16];
- U8 BoardTracerNumber[16];
-};
-
-struct LEAPIORAID_MANPAGE7_CONNECTOR_INFO {
- U32 Pinout;
- U8 Connector[16];
- U8 Location;
- U8 ReceptacleID;
- U16 Slot;
- U16 Slotx2;
- U16 Slotx4;
-};
-
-struct LeapioraidIOUnitP0_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U64 UniqueValue;
- union LEAPIORAID_VERSION_UNION NvdataVersionDefault;
- union LEAPIORAID_VERSION_UNION NvdataVersionPersistent;
-};
-
-struct LeapioraidIOUnitP1_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U32 Flags;
-};
-
-#define LEAPIORAID_IOUNITPAGE1_NATIVE_COMMAND_Q_DISABLE (0x00000100)
-#define LEAPIORAID_IOUNITPAGE1_DISABLE_TASK_SET_FULL_HANDLING (0x00000020)
-
-struct LEAPIORAID_IOUNIT8_SENSOR {
- U16 Flags;
- U16 Reserved1;
- U16 Threshold[4];
- U32 Reserved2;
- U32 Reserved3;
- U32 Reserved4;
-};
-
-struct LeapioraidIOUnitP8_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U32 Reserved1;
- U32 Reserved2;
- U8 NumSensors;
- U8 PollingInterval;
- U16 Reserved3;
- struct LEAPIORAID_IOUNIT8_SENSOR Sensor[];
-};
-
-struct LeapioraidIOCP1_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U32 Flags;
- U32 CoalescingTimeout;
- U8 CoalescingDepth;
- U8 PCISlotNum;
- U8 PCIBusNum;
- U8 PCIDomainSegment;
- U32 Reserved1;
- U32 ProductSpecific;
-};
-
-struct LeapioraidIOCP8_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U8 NumDevsPerEnclosure;
- U8 Reserved1;
- U16 Reserved2;
- U16 MaxPersistentEntries;
- U16 MaxNumPhysicalMappedIDs;
- U16 Flags;
- U16 Reserved3;
- U16 IRVolumeMappingFlags;
- U16 Reserved4;
- U32 Reserved5;
-};
-
-#define LEAPIORAID_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE (0x00000003)
-#define LEAPIORAID_IOCPAGE8_IRFLAGS_LOW_VOLUME_MAPPING (0x00000000)
-
-struct LEAPIORAID_BOOT_DEVICE_ADAPTER_ORDER {
- U32 Reserved1;
- U32 Reserved2;
- U32 Reserved3;
- U32 Reserved4;
- U32 Reserved5;
- U32 Reserved6;
-};
-
-struct LEAPIORAID_BOOT_DEVICE_SAS_WWID {
- U64 SASAddress;
- U8 LUN[8];
- U32 Reserved1;
- U32 Reserved2;
-};
-
-struct LEAPIORAID_BOOT_DEVICE_ENCLOSURE_SLOT {
- U64 EnclosureLogicalID;
- U32 Reserved1;
- U32 Reserved2;
- U16 SlotNumber;
- U16 Reserved3;
- U32 Reserved4;
-};
-
-struct LEAPIORAID_BOOT_DEVICE_DEVICE_NAME {
- U64 DeviceName;
- U8 LUN[8];
- U32 Reserved1;
- U32 Reserved2;
-};
-
-union LEAPIORAID_BIOSPAGE2_BOOT_DEVICE {
- struct LEAPIORAID_BOOT_DEVICE_ADAPTER_ORDER AdapterOrder;
- struct LEAPIORAID_BOOT_DEVICE_SAS_WWID SasWwid;
- struct LEAPIORAID_BOOT_DEVICE_ENCLOSURE_SLOT EnclosureSlot;
- struct LEAPIORAID_BOOT_DEVICE_DEVICE_NAME DeviceName;
-};
-
-struct LeapioraidBiosP2_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U32 Reserved1;
- U32 Reserved2;
- U32 Reserved3;
- U32 Reserved4;
- U32 Reserved5;
- U32 Reserved6;
- U8 ReqBootDeviceForm;
- U8 Reserved7;
- U16 Reserved8;
- union LEAPIORAID_BIOSPAGE2_BOOT_DEVICE RequestedBootDevice;
- U8 ReqAltBootDeviceForm;
- U8 Reserved9;
- U16 Reserved10;
- union LEAPIORAID_BIOSPAGE2_BOOT_DEVICE RequestedAltBootDevice;
- U8 CurrentBootDeviceForm;
- U8 Reserved11;
- U16 Reserved12;
- union LEAPIORAID_BIOSPAGE2_BOOT_DEVICE CurrentBootDevice;
-};
-
-#define LEAPIORAID_BIOSPAGE2_FORM_MASK (0x0F)
-#define LEAPIORAID_BIOSPAGE2_FORM_NO_DEVICE_SPECIFIED (0x00)
-#define LEAPIORAID_BIOSPAGE2_FORM_SAS_WWID (0x05)
-#define LEAPIORAID_BIOSPAGE2_FORM_ENCLOSURE_SLOT (0x06)
-#define LEAPIORAID_BIOSPAGE2_FORM_DEVICE_NAME (0x07)
-
-struct LEAPIORAID_ADAPTER_INFO {
- U8 PciBusNumber;
- U8 PciDeviceAndFunctionNumber;
- U16 AdapterFlags;
-};
-
-struct LEAPIORAID_ADAPTER_ORDER_AUX {
- U64 WWID;
- U32 Reserved1;
- U32 Reserved2;
-};
-
-struct LeapioraidBiosP3_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U32 GlobalFlags;
- U32 BiosVersion;
- struct LEAPIORAID_ADAPTER_INFO AdapterOrder[4];
- U32 Reserved1;
- struct LEAPIORAID_ADAPTER_ORDER_AUX AdapterOrderAux[4];
-};
-
-struct LEAPIORAID_RAIDVOL0_PHYS_DISK {
- U8 RAIDSetNum;
- U8 PhysDiskMap;
- U8 PhysDiskNum;
- U8 Reserved;
-};
-
-struct LEAPIORAID_RAIDVOL0_SETTINGS {
- U16 Settings;
- U8 HotSparePool;
- U8 Reserved;
-};
-
-struct LeapioraidRaidVolP0_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U16 DevHandle;
- U8 VolumeState;
- U8 VolumeType;
- U32 VolumeStatusFlags;
- struct LEAPIORAID_RAIDVOL0_SETTINGS VolumeSettings;
- U64 MaxLBA;
- U32 StripeSize;
- U16 BlockSize;
- U16 Reserved1;
- U8 SupportedPhysDisks;
- U8 ResyncRate;
- U16 DataScrubDuration;
- U8 NumPhysDisks;
- U8 Reserved2;
- U8 Reserved3;
- U8 InactiveStatus;
- struct LEAPIORAID_RAIDVOL0_PHYS_DISK PhysDisk[];
-};
-
-#define LEAPIORAID_RAID_VOL_STATE_MISSING (0x00)
-#define LEAPIORAID_RAID_VOL_STATE_FAILED (0x01)
-#define LEAPIORAID_RAID_VOL_STATE_INITIALIZING (0x02)
-#define LEAPIORAID_RAID_VOL_STATE_ONLINE (0x03)
-#define LEAPIORAID_RAID_VOL_STATE_DEGRADED (0x04)
-#define LEAPIORAID_RAID_VOL_STATE_OPTIMAL (0x05)
-#define LEAPIORAID_RAID_VOL_TYPE_RAID0 (0x00)
-#define LEAPIORAID_RAID_VOL_TYPE_RAID1E (0x01)
-#define LEAPIORAID_RAID_VOL_TYPE_RAID1 (0x02)
-#define LEAPIORAID_RAID_VOL_TYPE_RAID10 (0x05)
-#define LEAPIORAID_RAID_VOL_TYPE_UNKNOWN (0xFF)
-
-#define LEAPIORAID_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS (0x00010000)
-
-struct LeapioraidRaidVolP1_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U16 DevHandle;
- U16 Reserved0;
- U8 GUID[24];
- U8 Name[16];
- U64 WWID;
- U32 Reserved1;
- U32 Reserved2;
-};
-
-struct LEAPIORAID_RAIDPHYSDISK0_SETTINGS {
- U16 Reserved1;
- U8 HotSparePool;
- U8 Reserved2;
-};
-
-struct LEAPIORAID_RAIDPHYSDISK0_INQUIRY_DATA {
- U8 VendorID[8];
- U8 ProductID[16];
- U8 ProductRevLevel[4];
- U8 SerialNum[32];
-};
-
-struct LeapioraidRaidPDP0_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U16 DevHandle;
- U8 Reserved1;
- U8 PhysDiskNum;
- struct LEAPIORAID_RAIDPHYSDISK0_SETTINGS PhysDiskSettings;
- U32 Reserved2;
- struct LEAPIORAID_RAIDPHYSDISK0_INQUIRY_DATA InquiryData;
- U32 Reserved3;
- U8 PhysDiskState;
- U8 OfflineReason;
- U8 IncompatibleReason;
- U8 PhysDiskAttributes;
- U32 PhysDiskStatusFlags;
- U64 DeviceMaxLBA;
- U64 HostMaxLBA;
- U64 CoercedMaxLBA;
- U16 BlockSize;
- U16 Reserved5;
- U32 Reserved6;
-};
-
-#define LEAPIORAID_RAID_PD_STATE_NOT_CONFIGURED (0x00)
-#define LEAPIORAID_RAID_PD_STATE_NOT_COMPATIBLE (0x01)
-#define LEAPIORAID_RAID_PD_STATE_OFFLINE (0x02)
-#define LEAPIORAID_RAID_PD_STATE_ONLINE (0x03)
-#define LEAPIORAID_RAID_PD_STATE_HOT_SPARE (0x04)
-#define LEAPIORAID_RAID_PD_STATE_DEGRADED (0x05)
-#define LEAPIORAID_RAID_PD_STATE_REBUILDING (0x06)
-#define LEAPIORAID_RAID_PD_STATE_OPTIMAL (0x07)
-
-#define LEAPIORAID_SAS_NEG_LINK_RATE_MASK_PHYSICAL (0x0F)
-#define LEAPIORAID_SAS_NEG_LINK_RATE_UNKNOWN_LINK_RATE (0x00)
-#define LEAPIORAID_SAS_NEG_LINK_RATE_PHY_DISABLED (0x01)
-#define LEAPIORAID_SAS_NEG_LINK_RATE_NEGOTIATION_FAILED (0x02)
-#define LEAPIORAID_SAS_NEG_LINK_RATE_SATA_OOB_COMPLETE (0x03)
-#define LEAPIORAID_SAS_NEG_LINK_RATE_PORT_SELECTOR (0x04)
-#define LEAPIORAID_SAS_NEG_LINK_RATE_SMP_RESET_IN_PROGRESS (0x05)
-#define LEAPIORAID_SAS_NEG_LINK_RATE_1_5 (0x08)
-#define LEAPIORAID_SAS_NEG_LINK_RATE_3_0 (0x09)
-#define LEAPIORAID_SAS_NEG_LINK_RATE_6_0 (0x0A)
-#define LEAPIORAID_SAS_NEG_LINK_RATE_12_0 (0x0B)
-
-#define LEAPIORAID_SAS_PHYINFO_VIRTUAL_PHY (0x00001000)
-
-#define LEAPIORAID_SAS_PRATE_MIN_RATE_MASK (0x0F)
-#define LEAPIORAID_SAS_HWRATE_MIN_RATE_MASK (0x0F)
-
-struct LEAPIORAID_SAS_IO_UNIT0_PHY_DATA {
- U8 Port;
- U8 PortFlags;
- U8 PhyFlags;
- U8 NegotiatedLinkRate;
- U32 ControllerPhyDeviceInfo;
- U16 AttachedDevHandle;
- U16 ControllerDevHandle;
- U32 DiscoveryStatus;
- U32 Reserved;
-};
-
-struct LeapioraidSasIOUnitP0_t {
- struct LEAPIORAID_CONFIG_EXTENDED_PAGE_HEADER Header;
- U32 Reserved1;
- U8 NumPhys;
- U8 Reserved2;
- U16 Reserved3;
- struct LEAPIORAID_SAS_IO_UNIT0_PHY_DATA PhyData[];
-};
-
-#define LEAPIORAID_SASIOUNIT0_PORTFLAGS_DISCOVERY_IN_PROGRESS (0x08)
-#define LEAPIORAID_SASIOUNIT0_PORTFLAGS_AUTO_PORT_CONFIG (0x01)
-#define LEAPIORAID_SASIOUNIT0_PHYFLAGS_ZONING_ENABLED (0x10)
-#define LEAPIORAID_SASIOUNIT0_PHYFLAGS_PHY_DISABLED (0x08)
-
-struct LEAPIORAID_SAS_IO_UNIT1_PHY_DATA {
- U8 Port;
- U8 PortFlags;
- U8 PhyFlags;
- U8 MaxMinLinkRate;
- U32 ControllerPhyDeviceInfo;
- U16 MaxTargetPortConnectTime;
- U16 Reserved1;
-};
-
-struct LeapioraidSasIOUnitP1_t {
- struct LEAPIORAID_CONFIG_EXTENDED_PAGE_HEADER Header;
- U16 ControlFlags;
- U16 SASNarrowMaxQueueDepth;
- U16 AdditionalControlFlags;
- U16 SASWideMaxQueueDepth;
- U8 NumPhys;
- U8 SATAMaxQDepth;
- U8 ReportDeviceMissingDelay;
- U8 IODeviceMissingDelay;
- struct LEAPIORAID_SAS_IO_UNIT1_PHY_DATA PhyData[];
-};
-
-#define LEAPIORAID_SASIOUNIT1_REPORT_MISSING_TIMEOUT_MASK (0x7F)
-#define LEAPIORAID_SASIOUNIT1_REPORT_MISSING_UNIT_16 (0x80)
-#define LEAPIORAID_SASIOUNIT1_PHYFLAGS_ZONING_ENABLE (0x10)
-#define LEAPIORAID_SASIOUNIT1_PHYFLAGS_PHY_DISABLE (0x08)
-
-struct LeapioraidExpanderP0_t {
- struct LEAPIORAID_CONFIG_EXTENDED_PAGE_HEADER Header;
- U8 PhysicalPort;
- U8 ReportGenLength;
- U16 EnclosureHandle;
- U64 SASAddress;
- U32 DiscoveryStatus;
- U16 DevHandle;
- U16 ParentDevHandle;
- U16 ExpanderChangeCount;
- U16 ExpanderRouteIndexes;
- U8 NumPhys;
- U8 SASLevel;
- U16 Flags;
- U16 STPBusInactivityTimeLimit;
- U16 STPMaxConnectTimeLimit;
- U16 STP_SMP_NexusLossTime;
- U16 MaxNumRoutedSasAddresses;
- U64 ActiveZoneManagerSASAddress;
- U16 ZoneLockInactivityLimit;
- U16 Reserved1;
- U8 TimeToReducedFunc;
- U8 InitialTimeToReducedFunc;
- U8 MaxReducedFuncTime;
- U8 Reserved2;
-};
-
-struct LeapioraidExpanderP1_t {
- struct LEAPIORAID_CONFIG_EXTENDED_PAGE_HEADER Header;
- U8 PhysicalPort;
- U8 Reserved1;
- U16 Reserved2;
- U8 NumPhys;
- U8 Phy;
- U16 NumTableEntriesProgrammed;
- U8 ProgrammedLinkRate;
- U8 HwLinkRate;
- U16 AttachedDevHandle;
- U32 PhyInfo;
- U32 AttachedDeviceInfo;
- U16 ExpanderDevHandle;
- U8 ChangeCount;
- U8 NegotiatedLinkRate;
- U8 PhyIdentifier;
- U8 AttachedPhyIdentifier;
- U8 Reserved3;
- U8 DiscoveryInfo;
- U32 AttachedPhyInfo;
- U8 ZoneGroup;
- U8 SelfConfigStatus;
- U16 Reserved4;
-};
-
-struct LeapioraidSasDevP0_t {
- struct LEAPIORAID_CONFIG_EXTENDED_PAGE_HEADER Header;
- U16 Slot;
- U16 EnclosureHandle;
- U64 SASAddress;
- U16 ParentDevHandle;
- U8 PhyNum;
- U8 AccessStatus;
- U16 DevHandle;
- U8 AttachedPhyIdentifier;
- U8 ZoneGroup;
- U32 DeviceInfo;
- U16 Flags;
- U8 PhysicalPort;
- U8 MaxPortConnections;
- U64 DeviceName;
- U8 PortGroups;
- U8 DmaGroup;
- U8 ControlGroup;
- U8 EnclosureLevel;
- U8 ConnectorName[4];
- U32 Reserved3;
-};
-
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_NO_ERRORS (0x00)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SATA_INIT_FAILED (0x01)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SATA_CAPABILITY_FAILED (0x02)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SATA_AFFILIATION_CONFLICT (0x03)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SATA_NEEDS_INITIALIZATION (0x04)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_ROUTE_NOT_ADDRESSABLE (0x05)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SMP_ERROR_NOT_ADDRESSABLE (0x06)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_DEVICE_BLOCKED (0x07)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_UNKNOWN (0x10)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_AFFILIATION_CONFLICT (0x11)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_DIAG (0x12)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_IDENTIFICATION (0x13)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_CHECK_POWER (0x14)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_PIO_SN (0x15)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_MDMA_SN (0x16)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_UDMA_SN (0x17)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_ZONING_VIOLATION (0x18)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_NOT_ADDRESSABLE (0x19)
-#define LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_MAX (0x1F)
-#define LEAPIORAID_SAS_DEVICE0_FLAGS_FAST_PATH_CAPABLE (0x2000)
-#define LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_ASYNCHRONOUS_NOTIFY (0x0400)
-#define LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_SW_PRESERVE (0x0200)
-#define LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_SMART_SUPPORTED (0x0040)
-#define LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_NCQ_SUPPORTED (0x0020)
-#define LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_FUA_SUPPORTED (0x0010)
-#define LEAPIORAID_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID (0x0002)
-#define LEAPIORAID_SAS_DEVICE0_FLAGS_DEVICE_PRESENT (0x0001)
-
-struct LeapioraidSasPhyP0_t {
- struct LEAPIORAID_CONFIG_EXTENDED_PAGE_HEADER Header;
- U16 OwnerDevHandle;
- U16 Reserved1;
- U16 AttachedDevHandle;
- U8 AttachedPhyIdentifier;
- U8 Reserved2;
- U32 AttachedPhyInfo;
- U8 ProgrammedLinkRate;
- U8 HwLinkRate;
- U8 ChangeCount;
- U8 Flags;
- U32 PhyInfo;
- U8 NegotiatedLinkRate;
- U8 Reserved3;
- U16 Reserved4;
-};
-
-struct LeapioraidSasPhyP1_t {
- struct LEAPIORAID_CONFIG_EXTENDED_PAGE_HEADER Header;
- U32 Reserved1;
- U32 InvalidDwordCount;
- U32 RunningDisparityErrorCount;
- U32 LossDwordSynchCount;
- U32 PhyResetProblemCount;
-};
-
-struct LeapioraidSasEncP0_t {
- struct LEAPIORAID_CONFIG_EXTENDED_PAGE_HEADER Header;
- U32 Reserved1;
- U64 EnclosureLogicalID;
- U16 Flags;
- U16 EnclosureHandle;
- U16 NumSlots;
- U16 StartSlot;
- U8 ChassisSlot;
- U8 EnclosureLevel;
- U16 SEPDevHandle;
- U8 OEMRD;
- U8 Reserved1a;
- U16 Reserved2;
- U32 Reserved3;
-};
-
-#define LEAPIORAID_SAS_ENCLS0_FLAGS_CHASSIS_SLOT_VALID (0x0020)
-
-struct LEAPIORAID_RAIDCONFIG0_CONFIG_ELEMENT {
- U16 ElementFlags;
- U16 VolDevHandle;
- U8 HotSparePool;
- U8 PhysDiskNum;
- U16 PhysDiskDevHandle;
-};
-
-#define LEAPIORAID_RAIDCONFIG0_EFLAGS_MASK_ELEMENT_TYPE (0x000F)
-#define LEAPIORAID_RAIDCONFIG0_EFLAGS_VOL_PHYS_DISK_ELEMENT (0x0001)
-#define LEAPIORAID_RAIDCONFIG0_EFLAGS_HOT_SPARE_ELEMENT (0x0002)
-#define LEAPIORAID_RAIDCONFIG0_EFLAGS_OCE_ELEMENT (0x0003)
-
-struct LeapioraidRaidCfgP0_t {
- struct LEAPIORAID_CONFIG_EXTENDED_PAGE_HEADER Header;
- U8 NumHotSpares;
- U8 NumPhysDisks;
- U8 NumVolumes;
- U8 ConfigNum;
- U32 Flags;
- U8 ConfigGUID[24];
- U32 Reserved1;
- U8 NumElements;
- U8 Reserved2;
- U16 Reserved3;
- struct LEAPIORAID_RAIDCONFIG0_CONFIG_ELEMENT ConfigElement[];
-};
-
-struct LeapioraidFWImgHeader_t {
- U32 Signature;
- U32 Signature0;
- U32 Signature1;
- U32 Signature2;
- union LEAPIORAID_VERSION_UNION LEAPIOVersion;
- union LEAPIORAID_VERSION_UNION FWVersion;
- union LEAPIORAID_VERSION_UNION NVDATAVersion;
- union LEAPIORAID_VERSION_UNION PackageVersion;
- U16 VendorID;
- U16 ProductID;
- U16 ProtocolFlags;
- U16 Reserved26;
- U32 IOCCapabilities;
- U32 ImageSize;
- U32 NextImageHeaderOffset;
- U32 Checksum;
- U32 Reserved38;
- U32 Reserved3C;
- U32 Reserved40;
- U32 Reserved44;
- U32 Reserved48;
- U32 Reserved4C;
- U32 Reserved50;
- U32 Reserved54;
- U32 Reserved58;
- U32 Reserved5C;
- U32 BootFlags;
- U32 FirmwareVersionNameWhat;
- U8 FirmwareVersionName[32];
- U32 VendorNameWhat;
- U8 VendorName[32];
- U32 PackageNameWhat;
- U8 PackageName[32];
- U32 ReservedD0;
- U32 ReservedD4;
- U32 ReservedD8;
- U32 ReservedDC;
- U32 ReservedE0;
- U32 ReservedE4;
- U32 ReservedE8;
- U32 ReservedEC;
- U32 ReservedF0;
- U32 ReservedF4;
- U32 ReservedF8;
- U32 ReservedFC;
-};
-
-struct LEAPIORAID_HASH_EXCLUSION_FORMAT {
- U32 Offset;
- U32 Size;
-};
-
-struct LeapioraidComptImgHeader_t {
- U32 Signature0;
- U32 LoadAddress;
- U32 DataSize;
- U32 StartAddress;
- U32 Signature1;
- U32 FlashOffset;
- U32 FlashSize;
- U32 VersionStringOffset;
- U32 BuildDateStringOffset;
- U32 BuildTimeStringOffset;
- U32 EnvironmentVariableOffset;
- U32 ApplicationSpecific;
- U32 Signature2;
- U32 HeaderSize;
- U32 Crc;
- U8 NotFlashImage;
- U8 Compressed;
- U16 Reserved3E;
- U32 SecondaryFlashOffset;
- U32 Reserved44;
- U32 Reserved48;
- union LEAPIORAID_VERSION_UNION RMCInterfaceVersion;
- union LEAPIORAID_VERSION_UNION Reserved50;
- union LEAPIORAID_VERSION_UNION FWVersion;
- union LEAPIORAID_VERSION_UNION NvdataVersion;
- struct LEAPIORAID_HASH_EXCLUSION_FORMAT HashExclusion[4];
- U32 NextImageHeaderOffset;
- U32 Reserved80[32];
-};
-
-struct LEAPIORAID_SCSI_IO_CDB_EEDP32 {
- U8 CDB[20];
- __be32 PrimaryReferenceTag;
- U16 PrimaryApplicationTag;
- U16 PrimaryApplicationTagMask;
- U32 TransferLength;
-};
-
-union LEAPIO_SCSI_IO_CDB_UNION {
- U8 CDB32[32];
- struct LEAPIORAID_SCSI_IO_CDB_EEDP32 EEDP32;
- struct LEAPIORAID_SGE_SIMPLE_UNION SGE;
-};
-
-struct LeapioSCSIIOReq_t {
- U16 DevHandle;
- U8 ChainOffset;
- U8 Function;
- U16 Reserved1;
- U8 Reserved2;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved3;
- U32 SenseBufferLowAddress;
- U16 SGLFlags;
- U8 SenseBufferLength;
- U8 Reserved4;
- U8 SGLOffset0;
- U8 SGLOffset1;
- U8 SGLOffset2;
- U8 SGLOffset3;
- U32 SkipCount;
- U32 DataLength;
- U32 BidirectionalDataLength;
- U16 IoFlags;
- U16 EEDPFlags;
- U32 EEDPBlockSize;
- U32 SecondaryReferenceTag;
- U16 SecondaryApplicationTag;
- U16 ApplicationTagTranslationMask;
- U8 LUN[8];
- U32 Control;
- union LEAPIO_SCSI_IO_CDB_UNION CDB;
- union LEAPIORAID_SGE_IO_UNION SGL;
-};
-
-#define LEAPIORAID_SCSIIO_MSGFLAGS_SYSTEM_SENSE_ADDR (0x00)
-
-#define LEAPIORAID_SCSIIO_CONTROL_ADDCDBLEN_SHIFT (26)
-#define LEAPIORAID_SCSIIO_CONTROL_NODATATRANSFER (0x00000000)
-#define LEAPIORAID_SCSIIO_CONTROL_WRITE (0x01000000)
-#define LEAPIORAID_SCSIIO_CONTROL_READ (0x02000000)
-#define LEAPIORAID_SCSIIO_CONTROL_BIDIRECTIONAL (0x03000000)
-#define LEAPIORAID_SCSIIO_CONTROL_CMDPRI_SHIFT (11)
-#define LEAPIORAID_SCSIIO_CONTROL_SIMPLEQ (0x00000000)
-#define LEAPIORAID_SCSIIO_CONTROL_ORDEREDQ (0x00000200)
-#define LEAPIORAID_SCSIIO_CONTROL_TLR_ON (0x00000040)
-
-union LEAPIORAID_SCSI_IO_CDB_UNION {
- U8 CDB32[32];
- struct LEAPIORAID_SCSI_IO_CDB_EEDP32 EEDP32;
- struct LEAPIORAID_IEEE_SGE_SIMPLE64 SGE;
-};
-
-struct LeapioraidSCSIIOReq_t {
- U16 DevHandle;
- U8 ChainOffset;
- U8 Function;
- U16 Reserved1;
- U8 Reserved2;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved3;
- U32 SenseBufferLowAddress;
- U8 DMAFlags;
- U8 Reserved5;
- U8 SenseBufferLength;
- U8 Reserved4;
- U8 SGLOffset0;
- U8 SGLOffset1;
- U8 SGLOffset2;
- U8 SGLOffset3;
- U32 SkipCount;
- U32 DataLength;
- U32 BidirectionalDataLength;
- U16 IoFlags;
- U16 EEDPFlags;
- U16 EEDPBlockSize;
- U16 Reserved6;
- U32 SecondaryReferenceTag;
- U16 SecondaryApplicationTag;
- U16 ApplicationTagTranslationMask;
- U8 LUN[8];
- U32 Control;
- union LEAPIORAID_SCSI_IO_CDB_UNION CDB;
- union LEAPIORAID_IEEE_SGE_IO_UNION SGL;
-};
-
-struct LeapioraidSCSIIORep_t {
- U16 DevHandle;
- U8 MsgLength;
- U8 Function;
- U16 Reserved1;
- U8 Reserved2;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved3;
- U8 SCSIStatus;
- U8 SCSIState;
- U16 IOCStatus;
- U32 IOCLogInfo;
- U32 TransferCount;
- U32 SenseCount;
- U32 ResponseInfo;
- U16 TaskTag;
- U16 SCSIStatusQualifier;
- U32 BidirectionalTransferCount;
- U32 EEDPErrorOffset;
- U16 EEDPObservedAppTag;
- U16 EEDPObservedGuard;
- U32 EEDPObservedRefTag;
-};
-
-#define LEAPIORAID_SCSI_STATUS_GOOD (0x00)
-#define LEAPIORAID_SCSI_STATUS_CHECK_CONDITION (0x02)
-#define LEAPIORAID_SCSI_STATUS_CONDITION_MET (0x04)
-#define LEAPIORAID_SCSI_STATUS_BUSY (0x08)
-#define LEAPIORAID_SCSI_STATUS_INTERMEDIATE (0x10)
-#define LEAPIORAID_SCSI_STATUS_INTERMEDIATE_CONDMET (0x14)
-#define LEAPIORAID_SCSI_STATUS_RESERVATION_CONFLICT (0x18)
-#define LEAPIORAID_SCSI_STATUS_COMMAND_TERMINATED (0x22)
-#define LEAPIORAID_SCSI_STATUS_TASK_SET_FULL (0x28)
-#define LEAPIORAID_SCSI_STATUS_ACA_ACTIVE (0x30)
-#define LEAPIORAID_SCSI_STATUS_TASK_ABORTED (0x40)
-#define LEAPIORAID_SCSI_STATE_RESPONSE_INFO_VALID (0x10)
-#define LEAPIORAID_SCSI_STATE_TERMINATED (0x08)
-#define LEAPIORAID_SCSI_STATE_NO_SCSI_STATUS (0x04)
-#define LEAPIORAID_SCSI_STATE_AUTOSENSE_FAILED (0x02)
-#define LEAPIORAID_SCSI_STATE_AUTOSENSE_VALID (0x01)
-
-struct LeapioraidSCSITmgReq_t {
- U16 DevHandle;
- U8 ChainOffset;
- U8 Function;
- U8 Reserved1;
- U8 TaskType;
- U8 Reserved2;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved3;
- U8 LUN[8];
- U32 Reserved4[7];
- U16 TaskMID;
- U16 Reserved5;
-};
-
-#define LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK (0x01)
-#define LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET (0x02)
-#define LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET (0x03)
-#define LEAPIORAID_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET (0x05)
-#define LEAPIORAID_SCSITASKMGMT_TASKTYPE_QUERY_TASK (0x07)
-#define LEAPIORAID_SCSITASKMGMT_MSGFLAGS_LINK_RESET (0x00)
-
-struct LeapioraidSCSITmgRep_t {
- U16 DevHandle;
- U8 MsgLength;
- U8 Function;
- U8 ResponseCode;
- U8 TaskType;
- U8 Reserved1;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved2;
- U16 Reserved3;
- U16 IOCStatus;
- U32 IOCLogInfo;
- U32 TerminationCount;
- U32 ResponseInfo;
-};
-
-#define LEAPIORAID_SCSITASKMGMT_RSP_TM_COMPLETE (0x00)
-#define LEAPIORAID_SCSITASKMGMT_RSP_INVALID_FRAME (0x02)
-#define LEAPIORAID_SCSITASKMGMT_RSP_TM_NOT_SUPPORTED (0x04)
-#define LEAPIORAID_SCSITASKMGMT_RSP_TM_FAILED (0x05)
-#define LEAPIORAID_SCSITASKMGMT_RSP_TM_SUCCEEDED (0x08)
-#define LEAPIORAID_SCSITASKMGMT_RSP_TM_INVALID_LUN (0x09)
-#define LEAPIORAID_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC (0x80)
-
-struct LeapioraidSepReq_t {
- U16 DevHandle;
- U8 ChainOffset;
- U8 Function;
- U8 Action;
- U8 Flags;
- U8 Reserved1;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved2;
- U32 SlotStatus;
- U32 Reserved3;
- U32 Reserved4;
- U32 Reserved5;
- U16 Slot;
- U16 EnclosureHandle;
-};
-
-#define LEAPIORAID_SEP_REQ_ACTION_WRITE_STATUS (0x00)
-#define LEAPIORAID_SEP_REQ_FLAGS_DEVHANDLE_ADDRESS (0x00)
-#define LEAPIORAID_SEP_REQ_FLAGS_ENCLOSURE_SLOT_ADDRESS (0x01)
-#define LEAPIORAID_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT (0x00000040)
-
-struct LeapioraidSepRep_t {
- U16 DevHandle;
- U8 MsgLength;
- U8 Function;
- U8 Action;
- U8 Flags;
- U8 Reserved1;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved2;
- U16 Reserved3;
- U16 IOCStatus;
- U32 IOCLogInfo;
- U32 SlotStatus;
- U32 Reserved4;
- U16 Slot;
- U16 EnclosureHandle;
-};
-
-struct LeapioraidIOCInitReq_t {
- U8 WhoInit;
- U8 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U16 Reserved2;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
- U16 MsgVersion;
- U16 HeaderVersion;
- U32 Reserved5;
- U16 ConfigurationFlags;
- U8 HostPageSize;
- U8 HostMSIxVectors;
- U16 Reserved8;
- U16 SystemRequestFrameSize;
- U16 ReplyDescriptorPostQueueDepth;
- U16 ReplyFreeQueueDepth;
- U32 SenseBufferAddressHigh;
- U32 SystemReplyAddressHigh;
- U64 SystemRequestFrameBaseAddress;
- U64 ReplyDescriptorPostQueueAddress;
- U64 ReplyFreeQueueAddress;
- U64 TimeStamp;
-};
-
-#define LEAPIORAID_WHOINIT_HOST_DRIVER (0x04)
-#define LEAPIORAID_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE (0x01)
-
-struct LeapioraidIOCInitRDPQArrayEntry {
- U64 RDPQBaseAddress;
- U32 Reserved1;
- U32 Reserved2;
-};
-
-struct LeapioraidIOCInitRep_t {
- U8 WhoInit;
- U8 Reserved1;
- U8 MsgLength;
- U8 Function;
- U16 Reserved2;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
- U16 Reserved5;
- U16 IOCStatus;
- U32 IOCLogInfo;
-};
-
-struct LeapioraidIOCLogReq_t {
- U16 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U16 Reserved2;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
- U64 BufAddr;
- U32 BufSize;
-};
-
-struct LeapioraidIOCLogRep_t {
- U16 Reserved1;
- U8 MsgLength;
- U8 Function;
- U16 Reserved2;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
- U16 Reserved5;
- U16 IOCStatus;
- U32 IOCLogInfo;
-};
-
-struct LeapioraidIOCFactsReq_t {
- U16 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U16 Reserved2;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
-};
-
-struct LeapioraidIOCFactsRep_t {
- U16 MsgVersion;
- U8 MsgLength;
- U8 Function;
- U16 HeaderVersion;
- U8 IOCNumber;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved1;
- U16 IOCExceptions;
- U16 IOCStatus;
- U32 IOCLogInfo;
- U8 MaxChainDepth;
- U8 WhoInit;
- U8 NumberOfPorts;
- U8 MaxMSIxVectors;
- U16 RequestCredit;
- U16 ProductID;
- U32 IOCCapabilities;
- union LEAPIORAID_VERSION_UNION FWVersion;
- U16 IOCRequestFrameSize;
- U16 IOCMaxChainSegmentSize;
- U16 MaxInitiators;
- U16 MaxTargets;
- U16 MaxSasExpanders;
- U16 MaxEnclosures;
- U16 ProtocolFlags;
- U16 HighPriorityCredit;
- U16 MaxReplyDescriptorPostQueueDepth;
- U8 ReplyFrameSize;
- U8 MaxVolumes;
- U16 MaxDevHandle;
- U16 MaxPersistentEntries;
- U16 MinDevHandle;
- U8 CurrentHostPageSize;
- U8 Reserved4;
- U8 SGEModifierMask;
- U8 SGEModifierValue;
- U8 SGEModifierShift;
- U8 Reserved5;
-};
-
-#define LEAPIORAID_IOCFACTS_CAPABILITY_ATOMIC_REQ (0x00080000)
-#define LEAPIORAID_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE (0x00040000)
-#define LEAPIORAID_IOCFACTS_CAPABILITY_MSI_X_INDEX (0x00008000)
-#define LEAPIORAID_IOCFACTS_CAPABILITY_EVENT_REPLAY (0x00002000)
-#define LEAPIORAID_IOCFACTS_CAPABILITY_INTEGRATED_RAID (0x00001000)
-#define LEAPIORAID_IOCFACTS_CAPABILITY_TLR (0x00000800)
-#define LEAPIORAID_IOCFACTS_CAPABILITY_MULTICAST (0x00000100)
-#define LEAPIORAID_IOCFACTS_CAPABILITY_BIDIRECTIONAL_TARGET (0x00000080)
-#define LEAPIORAID_IOCFACTS_CAPABILITY_EEDP (0x00000040)
-#define LEAPIORAID_IOCFACTS_CAPABILITY_TASK_SET_FULL_HANDLING (0x00000004)
-#define LEAPIORAID_IOCFACTS_PROTOCOL_SCSI_INITIATOR (0x0002)
-#define LEAPIORAID_IOCFACTS_PROTOCOL_SCSI_TARGET (0x0001)
-
-struct LeapioraidPortFactsReq_t {
- U16 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U16 Reserved2;
- U8 PortNumber;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved3;
-};
-
-struct LeapioraidPortFactsRep_t {
- U16 Reserved1;
- U8 MsgLength;
- U8 Function;
- U16 Reserved2;
- U8 PortNumber;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved3;
- U16 Reserved4;
- U16 IOCStatus;
- U32 IOCLogInfo;
- U8 Reserved5;
- U8 PortType;
- U16 Reserved6;
- U16 MaxPostedCmdBuffers;
- U16 Reserved7;
-};
-
-struct LeapioraidPortEnableReq_t {
- U16 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U8 Reserved2;
- U8 PortFlags;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
-};
-
-struct LeapioraidPortEnableRep_t {
- U16 Reserved1;
- U8 MsgLength;
- U8 Function;
- U8 Reserved2;
- U8 PortFlags;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
- U16 Reserved5;
- U16 IOCStatus;
- U32 IOCLogInfo;
-};
-
-#define LEAPIORAID_EVENT_NOTIFY_EVENTMASK_WORDS (4)
-struct LeapioraidEventNotificationReq_t {
- U16 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U16 Reserved2;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
- U32 Reserved5;
- U32 Reserved6;
- U32 EventMasks[LEAPIORAID_EVENT_NOTIFY_EVENTMASK_WORDS];
- U16 SASBroadcastPrimitiveMasks;
- U16 SASNotifyPrimitiveMasks;
- U32 Reserved8;
-};
-
-struct LeapioraidEventNotificationRep_t {
- U16 EventDataLength;
- U8 MsgLength;
- U8 Function;
- U16 Reserved1;
- U8 AckRequired;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved2;
- U16 Reserved3;
- U16 IOCStatus;
- U32 IOCLogInfo;
- U16 Event;
- U16 Reserved4;
- U32 EventContext;
- U32 EventData[];
-};
-
-#define LEAPIORAID_EVENT_NOTIFICATION_ACK_REQUIRED (0x01)
-#define LEAPIORAID_EVENT_LOG_DATA (0x0001)
-#define LEAPIORAID_EVENT_STATE_CHANGE (0x0002)
-#define LEAPIORAID_EVENT_HARD_RESET_RECEIVED (0x0005)
-#define LEAPIORAID_EVENT_EVENT_CHANGE (0x000A)
-#define LEAPIORAID_EVENT_SAS_DEVICE_STATUS_CHANGE (0x000F)
-#define LEAPIORAID_EVENT_IR_OPERATION_STATUS (0x0014)
-#define LEAPIORAID_EVENT_SAS_DISCOVERY (0x0016)
-#define LEAPIORAID_EVENT_SAS_BROADCAST_PRIMITIVE (0x0017)
-#define LEAPIORAID_EVENT_SAS_INIT_DEVICE_STATUS_CHANGE (0x0018)
-#define LEAPIORAID_EVENT_SAS_INIT_TABLE_OVERFLOW (0x0019)
-#define LEAPIORAID_EVENT_SAS_TOPOLOGY_CHANGE_LIST (0x001C)
-#define LEAPIORAID_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE (0x001D)
-#define LEAPIORAID_EVENT_IR_VOLUME (0x001E)
-#define LEAPIORAID_EVENT_IR_PHYSICAL_DISK (0x001F)
-#define LEAPIORAID_EVENT_IR_CONFIGURATION_CHANGE_LIST (0x0020)
-#define LEAPIORAID_EVENT_LOG_ENTRY_ADDED (0x0021)
-#define LEAPIORAID_EVENT_SAS_QUIESCE (0x0025)
-#define LEAPIORAID_EVENT_TEMP_THRESHOLD (0x0027)
-#define LEAPIORAID_EVENT_SAS_DEVICE_DISCOVERY_ERROR (0x0035)
-
-struct LeapioraidEventDataSasDeviceStatusChange_t {
- U16 TaskTag;
- U8 ReasonCode;
- U8 PhysicalPort;
- U8 ASC;
- U8 ASCQ;
- U16 DevHandle;
- U32 Reserved2;
- U64 SASAddress;
- U8 LUN[8];
-};
-
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_SMART_DATA (0x05)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_UNSUPPORTED (0x07)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET (0x08)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_TASK_ABORT_INTERNAL (0x09)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_ABORT_TASK_SET_INTERNAL (0x0A)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_CLEAR_TASK_SET_INTERNAL (0x0B)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_QUERY_TASK_INTERNAL (0x0C)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_ASYNC_NOTIFICATION (0x0D)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET (0x0E)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_CMP_TASK_ABORT_INTERNAL (0x0F)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_SATA_INIT_FAILURE (0x10)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_EXPANDER_REDUCED_FUNCTIONALITY (0x11)
-#define LEAPIORAID_EVENT_SAS_DEV_STAT_RC_CMP_EXPANDER_REDUCED_FUNCTIONALITY (0x12)
-
-struct LeapioraidEventDataIrOpStatus_t {
- U16 VolDevHandle;
- U16 Reserved1;
- U8 RAIDOperation;
- U8 PercentComplete;
- U16 Reserved2;
- U32 ElapsedSeconds;
-};
-
-#define LEAPIORAID_EVENT_IR_RAIDOP_RESYNC (0x00)
-#define LEAPIORAID_EVENT_IR_RAIDOP_ONLINE_CAP_EXPANSION (0x01)
-#define LEAPIORAID_EVENT_IR_RAIDOP_CONSISTENCY_CHECK (0x02)
-#define LEAPIORAID_EVENT_IR_RAIDOP_BACKGROUND_INIT (0x03)
-#define LEAPIORAID_EVENT_IR_RAIDOP_MAKE_DATA_CONSISTENT (0x04)
-
-struct LeapioraidEventDataIrVol_t {
- U16 VolDevHandle;
- U8 ReasonCode;
- U8 Reserved1;
- U32 NewValue;
- U32 PreviousValue;
-};
-
-#define LEAPIORAID_EVENT_IR_VOLUME_RC_STATE_CHANGED (0x03)
-struct LeapioraidEventDataIrPhyDisk_t {
- U16 Reserved1;
- U8 ReasonCode;
- U8 PhysDiskNum;
- U16 PhysDiskDevHandle;
- U16 Reserved2;
- U16 Slot;
- U16 EnclosureHandle;
- U32 NewValue;
- U32 PreviousValue;
-};
-
-#define LEAPIORAID_EVENT_IR_PHYSDISK_RC_STATE_CHANGED (0x03)
-
-struct LeapioraidEventIrCfgEle_t {
- U16 ElementFlags;
- U16 VolDevHandle;
- U8 ReasonCode;
- U8 PhysDiskNum;
- U16 PhysDiskDevHandle;
-};
-
-#define LEAPIORAID_EVENT_IR_CHANGE_EFLAGS_ELEMENT_TYPE_MASK (0x000F)
-#define LEAPIORAID_EVENT_IR_CHANGE_EFLAGS_VOLUME_ELEMENT (0x0000)
-#define LEAPIORAID_EVENT_IR_CHANGE_EFLAGS_VOLPHYSDISK_ELEMENT (0x0001)
-#define LEAPIORAID_EVENT_IR_CHANGE_EFLAGS_HOTSPARE_ELEMENT (0x0002)
-#define LEAPIORAID_EVENT_IR_CHANGE_RC_ADDED (0x01)
-#define LEAPIORAID_EVENT_IR_CHANGE_RC_REMOVED (0x02)
-#define LEAPIORAID_EVENT_IR_CHANGE_RC_NO_CHANGE (0x03)
-#define LEAPIORAID_EVENT_IR_CHANGE_RC_HIDE (0x04)
-#define LEAPIORAID_EVENT_IR_CHANGE_RC_UNHIDE (0x05)
-#define LEAPIORAID_EVENT_IR_CHANGE_RC_VOLUME_CREATED (0x06)
-#define LEAPIORAID_EVENT_IR_CHANGE_RC_VOLUME_DELETED (0x07)
-#define LEAPIORAID_EVENT_IR_CHANGE_RC_PD_CREATED (0x08)
-#define LEAPIORAID_EVENT_IR_CHANGE_RC_PD_DELETED (0x09)
-
-struct LeapioraidEventDataIrCfgChangeList_t {
- U8 NumElements;
- U8 Reserved1;
- U8 Reserved2;
- U8 ConfigNum;
- U32 Flags;
- struct LeapioraidEventIrCfgEle_t ConfigElement[];
-};
-
-#define LEAPIORAID_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG (0x00000001)
-struct LeapioraidEventDataSasDiscovery_t {
- U8 Flags;
- U8 ReasonCode;
- U8 PhysicalPort;
- U8 Reserved1;
- U32 DiscoveryStatus;
-};
-
-#define LEAPIORAID_EVENT_SAS_DISC_RC_STARTED (0x01)
-
-struct LeapioraidEventDataSasBroadcastPrimitive_t {
- U8 PhyNum;
- U8 Port;
- U8 PortWidth;
- U8 Primitive;
-};
-
-#define LEAPIORAID_EVENT_PRIMITIVE_ASYNCHRONOUS_EVENT (0x04)
-
-struct LEAPIORAID_EVENT_SAS_TOPO_PHY_ENTRY {
- U16 AttachedDevHandle;
- U8 LinkRate;
- U8 PhyStatus;
-};
-
-struct LeapioraidEventDataSasTopoChangeList_t {
- U16 EnclosureHandle;
- U16 ExpanderDevHandle;
- U8 NumPhys;
- U8 Reserved1;
- U16 Reserved2;
- U8 NumEntries;
- U8 StartPhyNum;
- U8 ExpStatus;
- U8 PhysicalPort;
- struct LEAPIORAID_EVENT_SAS_TOPO_PHY_ENTRY PHY[];
-};
-
-#define LEAPIORAID_EVENT_SAS_TOPO_ES_ADDED (0x01)
-#define LEAPIORAID_EVENT_SAS_TOPO_ES_NOT_RESPONDING (0x02)
-#define LEAPIORAID_EVENT_SAS_TOPO_ES_RESPONDING (0x03)
-#define LEAPIORAID_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING (0x04)
-#define LEAPIORAID_EVENT_SAS_TOPO_PHYSTATUS_VACANT (0x80)
-#define LEAPIORAID_EVENT_SAS_TOPO_RC_MASK (0x0F)
-#define LEAPIORAID_EVENT_SAS_TOPO_RC_TARG_ADDED (0x01)
-#define LEAPIORAID_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING (0x02)
-#define LEAPIORAID_EVENT_SAS_TOPO_RC_PHY_CHANGED (0x03)
-#define LEAPIORAID_EVENT_SAS_TOPO_RC_NO_CHANGE (0x04)
-#define LEAPIORAID_EVENT_SAS_TOPO_RC_DELAY_NOT_RESPONDING (0x05)
-
-struct LeapioraidEventDataSasEnclDevStatusChange_t {
- U16 EnclosureHandle;
- U8 ReasonCode;
- U8 PhysicalPort;
- U64 EnclosureLogicalID;
- U16 NumSlots;
- U16 StartSlot;
- U32 PhyBits;
-};
-
-#define LEAPIORAID_EVENT_SAS_ENCL_RC_ADDED (0x01)
-#define LEAPIORAID_EVENT_SAS_ENCL_RC_NOT_RESPONDING (0x02)
-
-struct LeapioraidEventDataSasDeviceDiscoveryError_t {
- U16 DevHandle;
- U8 ReasonCode;
- U8 PhysicalPort;
- U32 Reserved1[2];
- U64 SASAddress;
- U32 Reserved2[2];
-};
-
-#define LEAPIORAID_EVENT_SAS_DISC_ERR_SMP_FAILED (0x01)
-#define LEAPIORAID_EVENT_SAS_DISC_ERR_SMP_TIMEOUT (0x02)
-
-struct LeapioraidEventAckReq_t {
- U16 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U16 Reserved2;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
- U16 Event;
- U16 Reserved5;
- U32 EventContext;
-};
-
-struct LeapioraidFWUploadReq_t {
- U8 ImageType;
- U8 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U16 Reserved2;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
- U32 Reserved5;
- U32 Reserved6;
- U32 Reserved7;
- U32 ImageOffset;
- U32 ImageSize;
- union LEAPIORAID_IEEE_SGE_IO_UNION SGL;
-};
-
-struct LeapioraidFWUploadRep_t {
- U8 ImageType;
- U8 Reserved1;
- U8 MsgLength;
- U8 Function;
- U16 Reserved2;
- U8 Reserved3;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved4;
- U16 Reserved5;
- U16 IOCStatus;
- U32 IOCLogInfo;
- U32 ActualImageSize;
-};
-
-struct LeapioraidIoUnitControlReq_t {
- U8 Operation;
- U8 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U16 DevHandle;
- U8 IOCParameter;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved3;
- U16 Reserved4;
- U8 PhyNum;
- U8 PrimFlags;
- U32 Primitive;
- U8 LookupMethod;
- U8 Reserved5;
- U16 SlotNumber;
- U64 LookupAddress;
- U32 IOCParameterValue;
- U32 IOCParameterValue2;
- U32 Reserved8;
-};
-
-#define LEAPIORAID_CTRL_OP_REMOVE_DEVICE (0x0D)
-
-struct LeapioraidIoUnitControlRep_t {
- U8 Operation;
- U8 Reserved1;
- U8 MsgLength;
- U8 Function;
- U16 DevHandle;
- U8 IOCParameter;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved3;
- U16 Reserved4;
- U16 IOCStatus;
- U32 IOCLogInfo;
-};
-
-struct LEAPIORAID_RAID_ACTION_RATE_DATA {
- U8 RateToChange;
- U8 RateOrMode;
- U16 DataScrubDuration;
-};
-
-struct LEAPIORAID_RAID_ACTION_START_RAID_FUNCTION {
- U8 RAIDFunction;
- U8 Flags;
- U16 Reserved1;
-};
-
-struct LEAPIORAID_RAID_ACTION_STOP_RAID_FUNCTION {
- U8 RAIDFunction;
- U8 Flags;
- U16 Reserved1;
-};
-
-struct LEAPIORAID_RAID_ACTION_HOT_SPARE {
- U8 HotSparePool;
- U8 Reserved1;
- U16 DevHandle;
-};
-
-struct LEAPIORAID_RAID_ACTION_FW_UPDATE_MODE {
- U8 Flags;
- U8 DeviceFirmwareUpdateModeTimeout;
- U16 Reserved1;
-};
-
-union LEAPIORAID_RAID_ACTION_DATA {
- U32 Word;
- struct LEAPIORAID_RAID_ACTION_RATE_DATA Rates;
- struct LEAPIORAID_RAID_ACTION_START_RAID_FUNCTION StartRaidFunction;
- struct LEAPIORAID_RAID_ACTION_STOP_RAID_FUNCTION StopRaidFunction;
- struct LEAPIORAID_RAID_ACTION_HOT_SPARE HotSpare;
- struct LEAPIORAID_RAID_ACTION_FW_UPDATE_MODE FwUpdateMode;
-};
-
-struct LeapioraidRaidActionReq_t {
- U8 Action;
- U8 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U16 VolDevHandle;
- U8 PhysDiskNum;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved2;
- U32 Reserved3;
- union LEAPIORAID_RAID_ACTION_DATA ActionDataWord;
- struct LEAPIORAID_SGE_SIMPLE_UNION ActionDataSGE;
-};
-
-struct LEAPIORAID_RAID_VOL_INDICATOR {
- U64 TotalBlocks;
- U64 BlocksRemaining;
- U32 Flags;
- U32 ElapsedSeconds;
-};
-
-struct LEAPIORAID_RAID_COMPATIBILITY_RESULT_STRUCT {
- U8 State;
- U8 Reserved1;
- U16 Reserved2;
- U32 GenericAttributes;
- U32 OEMSpecificAttributes;
- U32 Reserved3;
- U32 Reserved4;
-};
-
-union LEAPIORAID_RAID_ACTION_REPLY_DATA {
- U32 Word[6];
- struct LEAPIORAID_RAID_VOL_INDICATOR RaidVolumeIndicator;
- U16 VolDevHandle;
- U8 VolumeState;
- U8 PhysDiskNum;
- struct LEAPIORAID_RAID_COMPATIBILITY_RESULT_STRUCT RaidCompatibilityResult;
-};
-
-struct LeapioraidRaidActionRep_t {
- U8 Action;
- U8 Reserved1;
- U8 MsgLength;
- U8 Function;
- U16 VolDevHandle;
- U8 PhysDiskNum;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved2;
- U16 Reserved3;
- U16 IOCStatus;
- U32 IOCLogInfo;
- union LEAPIORAID_RAID_ACTION_REPLY_DATA ActionData;
-};
-
-#define LEAPIORAID_SAS_DEVICE_INFO_SEP (0x00004000)
-#define LEAPIORAID_SAS_DEVICE_INFO_ATAPI_DEVICE (0x00002000)
-#define LEAPIORAID_SAS_DEVICE_INFO_SSP_TARGET (0x00000400)
-#define LEAPIORAID_SAS_DEVICE_INFO_STP_TARGET (0x00000200)
-#define LEAPIORAID_SAS_DEVICE_INFO_SMP_TARGET (0x00000100)
-#define LEAPIORAID_SAS_DEVICE_INFO_SATA_DEVICE (0x00000080)
-#define LEAPIORAID_SAS_DEVICE_INFO_SSP_INITIATOR (0x00000040)
-#define LEAPIORAID_SAS_DEVICE_INFO_STP_INITIATOR (0x00000020)
-#define LEAPIORAID_SAS_DEVICE_INFO_SMP_INITIATOR (0x00000010)
-#define LEAPIORAID_SAS_DEVICE_INFO_SATA_HOST (0x00000008)
-#define LEAPIORAID_SAS_DEVICE_INFO_MASK_DEVICE_TYPE (0x00000007)
-#define LEAPIORAID_SAS_DEVICE_INFO_NO_DEVICE (0x00000000)
-#define LEAPIORAID_SAS_DEVICE_INFO_END_DEVICE (0x00000001)
-#define LEAPIORAID_SAS_DEVICE_INFO_EDGE_EXPANDER (0x00000002)
-#define LEAPIORAID_SAS_DEVICE_INFO_FANOUT_EXPANDER (0x00000003)
-
-struct LeapioraidSmpPassthroughReq_t {
- U8 PassthroughFlags;
- U8 PhysicalPort;
- U8 ChainOffset;
- U8 Function;
- U16 RequestDataLength;
- U8 SGLFlags;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved1;
- U32 Reserved2;
- U64 SASAddress;
- U32 Reserved3;
- U32 Reserved4;
- union LEAPIORAID_SIMPLE_SGE_UNION SGL;
-};
-
-struct LeapioraidSmpPassthroughRep_t {
- U8 PassthroughFlags;
- U8 PhysicalPort;
- U8 MsgLength;
- U8 Function;
- U16 ResponseDataLength;
- U8 SGLFlags;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved1;
- U8 Reserved2;
- U8 SASStatus;
- U16 IOCStatus;
- U32 IOCLogInfo;
- U32 Reserved3;
- U8 ResponseData[4];
-};
-
-struct LeapioraidSasIoUnitControlReq_t {
- U8 Operation;
- U8 Reserved1;
- U8 ChainOffset;
- U8 Function;
- U16 DevHandle;
- U8 IOCParameter;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved3;
- U16 Reserved4;
- U8 PhyNum;
- U8 PrimFlags;
- U32 Primitive;
- U8 LookupMethod;
- U8 Reserved5;
- U16 SlotNumber;
- U64 LookupAddress;
- U32 IOCParameterValue;
- U32 Reserved7;
- U32 Reserved8;
-};
-
-#define LEAPIORAID_SAS_OP_PHY_LINK_RESET (0x06)
-#define LEAPIORAID_SAS_OP_PHY_HARD_RESET (0x07)
-#define LEAPIORAID_SAS_OP_REMOVE_DEVICE (0x0D)
-struct LeapioraidSasIoUnitControlRep_t {
- U8 Operation;
- U8 Reserved1;
- U8 MsgLength;
- U8 Function;
- U16 DevHandle;
- U8 IOCParameter;
- U8 MsgFlags;
- U8 VP_ID;
- U8 VF_ID;
- U16 Reserved3;
- U16 Reserved4;
- U16 IOCStatus;
- U32 IOCLogInfo;
-};
-#endif
diff --git a/drivers/scsi/leapioraid/leapioraid_app.c b/drivers/scsi/leapioraid/leapioraid_app.c
deleted file mode 100644
index 039b4d8ffd02..000000000000
--- a/drivers/scsi/leapioraid/leapioraid_app.c
+++ /dev/null
@@ -1,2253 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Management Module Support for MPT (Message Passing Technology) based
- * controllers
- *
- * Copyright (C) 2013-2021 LSI Corporation
- * Copyright (C) 2013-2021 Avago Technologies
- * Copyright (C) 2013-2021 Broadcom Inc.
- * (mailto:MPT-FusionLinux.pdl@broadcom.com)
- *
- * Copyright (C) 2024 LeapIO Tech Inc.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2
- * of the License, or (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * NO WARRANTY
- * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
- * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
- * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
- * solely responsible for determining the appropriateness of using and
- * distributing the Program and assumes all risks associated with its
- * exercise of rights under this Agreement, including but not limited to
- * the risks and costs of program errors, damage to or loss of data,
- * programs or equipment, and unavailability or interruption of operations.
-
- * DISCLAIMER OF LIABILITY
- * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
- * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
- * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
- * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
- */
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/errno.h>
-#include <linux/init.h>
-#include <linux/slab.h>
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/delay.h>
-#include <linux/compat.h>
-#include <linux/poll.h>
-#include <linux/io.h>
-#include <linux/uaccess.h>
-
-#ifdef __KERNEL__
-#include <linux/miscdevice.h>
-#endif
-#include "leapioraid_func.h"
-
-#define LEAPIORAID_DEV_NAME "leapioraid_ctl"
-
-#define LEAPIORAID_MAGIC_NUMBER 'L'
-#define LEAPIORAID_IOCTL_DEFAULT_TIMEOUT (10)
-
-#define LEAPIORAID_IOCINFO \
- _IOWR(LEAPIORAID_MAGIC_NUMBER, 17, struct leapio_ioctl_iocinfo)
-#define LEAPIORAID_COMMAND \
- _IOWR(LEAPIORAID_MAGIC_NUMBER, 20, struct leapio_ioctl_command)
-#ifdef CONFIG_COMPAT
-#define LEAPIORAID_COMMAND32 \
- _IOWR(LEAPIORAID_MAGIC_NUMBER, 20, struct leapio_ioctl_command32)
-#endif
-#define LEAPIORAID_EVENTQUERY \
- _IOWR(LEAPIORAID_MAGIC_NUMBER, 21, struct leapio_ioctl_eventquery)
-#define LEAPIORAID_EVENTENABLE \
- _IOWR(LEAPIORAID_MAGIC_NUMBER, 22, struct leapio_ioctl_eventenable)
-#define LEAPIORAID_EVENTREPORT \
- _IOWR(LEAPIORAID_MAGIC_NUMBER, 23, struct leapio_ioctl_eventreport)
-#define LEAPIORAID_HARDRESET \
- _IOWR(LEAPIORAID_MAGIC_NUMBER, 24, struct leapio_ioctl_diag_reset)
-#define LEAPIORAID_BTDHMAPPING \
- _IOWR(LEAPIORAID_MAGIC_NUMBER, 31, struct leapio_ioctl_btdh_mapping)
-
-struct leapio_ioctl_header {
- uint32_t ioc_number;
- uint32_t port_number;
- uint32_t max_data_size;
-};
-
-struct leapio_ioctl_diag_reset {
- struct leapio_ioctl_header hdr;
-};
-
-struct leapio_ioctl_pci_info {
- union {
- struct {
- uint32_t device:5;
- uint32_t function:3;
- uint32_t bus:24;
- } bits;
- uint32_t word;
- } u;
- uint32_t segment_id;
-};
-
-struct leapio_ioctl_iocinfo {
- struct leapio_ioctl_header hdr;
- uint32_t adapter_type;
- uint32_t port_number;
- uint32_t pci_id;
- uint32_t hw_rev;
- uint32_t subsystem_device;
- uint32_t subsystem_vendor;
- uint32_t rsvd0;
- uint32_t firmware_version;
- uint32_t bios_version;
- uint8_t driver_version[32];
- uint8_t rsvd1;
- uint8_t scsi_id;
- uint16_t rsvd2;
- struct leapio_ioctl_pci_info pci_information;
-};
-
-#define LEAPIORAID_CTL_EVENT_LOG_SIZE (200)
-struct leapio_ioctl_eventquery {
- struct leapio_ioctl_header hdr;
- uint16_t event_entries;
- uint16_t rsvd;
- uint32_t event_types[LEAPIORAID_EVENT_NOTIFY_EVENTMASK_WORDS];
-};
-
-struct leapio_ioctl_eventenable {
- struct leapio_ioctl_header hdr;
- uint32_t event_types[4];
-};
-
-#define LEAPIORAID_EVENT_DATA_SIZE (192)
-struct LEAPIORAID_IOCTL_EVENTS {
- uint32_t event;
- uint32_t context;
- uint8_t data[LEAPIORAID_EVENT_DATA_SIZE];
-};
-
-struct leapio_ioctl_eventreport {
- struct leapio_ioctl_header hdr;
- struct LEAPIORAID_IOCTL_EVENTS event_data[];
-};
-
-struct leapio_ioctl_command {
- struct leapio_ioctl_header hdr;
- uint32_t timeout;
- void __user *reply_frame_buf_ptr;
- void __user *data_in_buf_ptr;
- void __user *data_out_buf_ptr;
- void __user *sense_data_ptr;
- uint32_t max_reply_bytes;
- uint32_t data_in_size;
- uint32_t data_out_size;
- uint32_t max_sense_bytes;
- uint32_t data_sge_offset;
- uint8_t mf[];
-};
-
-#ifdef CONFIG_COMPAT
-struct leapio_ioctl_command32 {
- struct leapio_ioctl_header hdr;
- uint32_t timeout;
- uint32_t reply_frame_buf_ptr;
- uint32_t data_in_buf_ptr;
- uint32_t data_out_buf_ptr;
- uint32_t sense_data_ptr;
- uint32_t max_reply_bytes;
- uint32_t data_in_size;
- uint32_t data_out_size;
- uint32_t max_sense_bytes;
- uint32_t data_sge_offset;
- uint8_t mf[];
-};
-#endif
-
-struct leapio_ioctl_btdh_mapping {
- struct leapio_ioctl_header hdr;
- uint32_t id;
- uint32_t bus;
- uint16_t handle;
- uint16_t rsvd;
-};
-
-static struct fasync_struct *leapioraid_async_queue;
-static DECLARE_WAIT_QUEUE_HEAD(leapioraid_ctl_poll_wait);
-
-enum leapioraid_block_state {
- NON_BLOCKING,
- BLOCKING,
-};
-
-static void
-leapioraid_ctl_display_some_debug(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- char *calling_function_name,
- struct LeapioraidDefaultRep_t *mpi_reply)
-{
- struct LeapioraidCfgReq_t *mpi_request;
- char *desc = NULL;
-
- if (!(ioc->logging_level & LEAPIORAID_DEBUG_IOCTL))
- return;
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- switch (mpi_request->Function) {
- case LEAPIORAID_FUNC_SCSI_IO_REQUEST:
- {
- struct LeapioSCSIIOReq_t *scsi_request =
- (struct LeapioSCSIIOReq_t *) mpi_request;
- snprintf(ioc->tmp_string, LEAPIORAID_STRING_LENGTH,
- "scsi_io, cmd(0x%02x), cdb_len(%d)",
- scsi_request->CDB.CDB32[0],
- le16_to_cpu(scsi_request->IoFlags) & 0xF);
- desc = ioc->tmp_string;
- break;
- }
- case LEAPIORAID_FUNC_SCSI_TASK_MGMT:
- desc = "task_mgmt";
- break;
- case LEAPIORAID_FUNC_IOC_INIT:
- desc = "ioc_init";
- break;
- case LEAPIORAID_FUNC_IOC_FACTS:
- desc = "ioc_facts";
- break;
- case LEAPIORAID_FUNC_CONFIG:
- {
- struct LeapioraidCfgReq_t *config_request =
- (struct LeapioraidCfgReq_t *) mpi_request;
- snprintf(ioc->tmp_string, LEAPIORAID_STRING_LENGTH,
- "config, type(0x%02x), ext_type(0x%02x), number(%d)",
- (config_request->Header.PageType &
- LEAPIORAID_CONFIG_PAGETYPE_MASK),
- config_request->ExtPageType,
- config_request->Header.PageNumber);
- desc = ioc->tmp_string;
- break;
- }
- case LEAPIORAID_FUNC_PORT_FACTS:
- desc = "port_facts";
- break;
- case LEAPIORAID_FUNC_PORT_ENABLE:
- desc = "port_enable";
- break;
- case LEAPIORAID_FUNC_EVENT_NOTIFICATION:
- desc = "event_notification";
- break;
- case LEAPIORAID_FUNC_FW_DOWNLOAD:
- desc = "fw_download";
- break;
- case LEAPIORAID_FUNC_FW_UPLOAD:
- desc = "fw_upload";
- break;
- case LEAPIORAID_FUNC_RAID_ACTION:
- desc = "raid_action";
- break;
- case LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH:
- {
- struct LeapioSCSIIOReq_t *scsi_request =
- (struct LeapioSCSIIOReq_t *) mpi_request;
- snprintf(ioc->tmp_string, LEAPIORAID_STRING_LENGTH,
- "raid_pass, cmd(0x%02x), cdb_len(%d)",
- scsi_request->CDB.CDB32[0],
- le16_to_cpu(scsi_request->IoFlags) & 0xF);
- desc = ioc->tmp_string;
- break;
- }
- case LEAPIORAID_FUNC_SAS_IO_UNIT_CONTROL:
- desc = "sas_iounit_cntl";
- break;
- case LEAPIORAID_FUNC_SATA_PASSTHROUGH:
- desc = "sata_pass";
- break;
- case LEAPIORAID_FUNC_SMP_PASSTHROUGH:
- desc = "smp_passthrough";
- break;
- }
- if (!desc)
- return;
- pr_info("%s %s: %s, smid(%d)\n",
- ioc->name, calling_function_name, desc, smid);
- if (!mpi_reply)
- return;
- if (mpi_reply->IOCStatus || mpi_reply->IOCLogInfo)
- pr_info(
- "%s \tiocstatus(0x%04x), loginfo(0x%08x)\n",
- ioc->name, le16_to_cpu(mpi_reply->IOCStatus),
- le32_to_cpu(mpi_reply->IOCLogInfo));
- if (mpi_request->Function == LEAPIORAID_FUNC_SCSI_IO_REQUEST ||
- mpi_request->Function ==
- LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH) {
- struct LeapioraidSCSIIORep_t *scsi_reply =
- (struct LeapioraidSCSIIORep_t *) mpi_reply;
- struct leapioraid_sas_device *sas_device = NULL;
-
- sas_device = leapioraid_get_sdev_by_handle(ioc,
- le16_to_cpu(scsi_reply->DevHandle));
- if (sas_device) {
- pr_info("%s \tsas_address(0x%016llx), phy(%d)\n",
- ioc->name, (unsigned long long)
- sas_device->sas_address, sas_device->phy);
- if (sas_device->enclosure_handle != 0)
- pr_info(
- "%s \tenclosure_logical_id(0x%016llx), slot(%d)\n",
- ioc->name, (unsigned long long)
- sas_device->enclosure_logical_id,
- sas_device->slot);
- leapioraid_sas_device_put(sas_device);
- }
- if (scsi_reply->SCSIState || scsi_reply->SCSIStatus)
- pr_info(
- "%s \tscsi_state(0x%02x), scsi_status (0x%02x)\n",
- ioc->name, scsi_reply->SCSIState, scsi_reply->SCSIStatus);
- }
-}
-
-u8
-leapioraid_ctl_done(struct LEAPIORAID_ADAPTER *ioc, u16 smid, u8 msix_index,
- u32 reply)
-{
- struct LeapioraidDefaultRep_t *mpi_reply;
- struct LeapioraidSCSIIORep_t *scsiio_reply;
- const void *sense_data;
- u32 sz;
-
- if (ioc->ctl_cmds.status == LEAPIORAID_CMD_NOT_USED)
- return 1;
- if (ioc->ctl_cmds.smid != smid)
- return 1;
- ioc->ctl_cmds.status |= LEAPIORAID_CMD_COMPLETE;
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (mpi_reply) {
- memcpy(ioc->ctl_cmds.reply, mpi_reply,
- mpi_reply->MsgLength * 4);
- ioc->ctl_cmds.status |= LEAPIORAID_CMD_REPLY_VALID;
- if (mpi_reply->Function == LEAPIORAID_FUNC_SCSI_IO_REQUEST ||
- mpi_reply->Function ==
- LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH) {
- scsiio_reply = (struct LeapioraidSCSIIORep_t *) mpi_reply;
- if (scsiio_reply->SCSIState &
- LEAPIORAID_SCSI_STATE_AUTOSENSE_VALID) {
- sz = min_t(u32, SCSI_SENSE_BUFFERSIZE,
- le32_to_cpu(scsiio_reply->SenseCount));
- sense_data =
- leapioraid_base_get_sense_buffer(ioc, smid);
- memcpy(ioc->ctl_cmds.sense, sense_data, sz);
- }
- }
- }
- leapioraid_ctl_display_some_debug(ioc, smid, "ctl_done", mpi_reply);
- ioc->ctl_cmds.status &= ~LEAPIORAID_CMD_PENDING;
- complete(&ioc->ctl_cmds.done);
- return 1;
-}
-
-static int leapioraid_ctl_check_event_type(
- struct LEAPIORAID_ADAPTER *ioc, u16 event)
-{
- u16 i;
- u32 desired_event;
-
- if (event >= 128 || !event || !ioc->event_log)
- return 0;
- desired_event = (1 << (event % 32));
- if (!desired_event)
- desired_event = 1;
- i = event / 32;
- return desired_event & ioc->event_type[i];
-}
-
-void
-leapioraid_ctl_add_to_event_log(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventNotificationRep_t *mpi_reply)
-{
- struct LEAPIORAID_IOCTL_EVENTS *event_log;
- u16 event;
- int i;
- u32 sz, event_data_sz;
- u8 send_aen = 0;
-
- if (!ioc->event_log)
- return;
- event = le16_to_cpu(mpi_reply->Event);
- if (leapioraid_ctl_check_event_type(ioc, event)) {
- i = ioc->event_context % LEAPIORAID_CTL_EVENT_LOG_SIZE;
- event_log = ioc->event_log;
- event_log[i].event = event;
- event_log[i].context = ioc->event_context++;
- event_data_sz = le16_to_cpu(mpi_reply->EventDataLength) * 4;
- sz = min_t(u32, event_data_sz, LEAPIORAID_EVENT_DATA_SIZE);
- memset(event_log[i].data, 0, LEAPIORAID_EVENT_DATA_SIZE);
- memcpy(event_log[i].data, mpi_reply->EventData, sz);
- send_aen = 1;
- }
- if (event == LEAPIORAID_EVENT_LOG_ENTRY_ADDED ||
- (send_aen && !ioc->aen_event_read_flag)) {
- ioc->aen_event_read_flag = 1;
- wake_up_interruptible(&leapioraid_ctl_poll_wait);
- if (leapioraid_async_queue)
- kill_fasync(&leapioraid_async_queue, SIGIO, POLL_IN);
- }
-}
-
-u8
-leapioraid_ctl_event_callback(
- struct LEAPIORAID_ADAPTER *ioc, u8 msix_index,
- u32 reply)
-{
- struct LeapioraidEventNotificationRep_t *mpi_reply;
-
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (mpi_reply)
- leapioraid_ctl_add_to_event_log(ioc, mpi_reply);
- return 1;
-}
-
-static int
-leapioraid_ctl_verify_adapter(
- int ioc_number, struct LEAPIORAID_ADAPTER **iocpp)
-{
- struct LEAPIORAID_ADAPTER *ioc;
-
- spin_lock(&leapioraid_gioc_lock);
- list_for_each_entry(ioc, &leapioraid_ioc_list, list) {
- if (ioc->id != ioc_number)
- continue;
- spin_unlock(&leapioraid_gioc_lock);
- *iocpp = ioc;
- return ioc_number;
- }
- spin_unlock(&leapioraid_gioc_lock);
- *iocpp = NULL;
- return -1;
-}
-
-void
-leapioraid_ctl_clear_outstanding_ioctls(struct LEAPIORAID_ADAPTER *ioc)
-{
- if (ioc->ctl_cmds.status & LEAPIORAID_CMD_PENDING) {
- ioc->ctl_cmds.status |= LEAPIORAID_CMD_RESET;
- leapioraid_base_free_smid(ioc, ioc->ctl_cmds.smid);
- complete(&ioc->ctl_cmds.done);
- }
-}
-
-void
-leapioraid_ctl_reset_handler(struct LEAPIORAID_ADAPTER *ioc, int reset_phase)
-{
- switch (reset_phase) {
- case LEAPIORAID_IOC_PRE_RESET_PHASE:
- dtmprintk(ioc, pr_info(
- "%s %s: LEAPIORAID_IOC_PRE_RESET_PHASE\n", ioc->name,
- __func__));
- break;
- case LEAPIORAID_IOC_AFTER_RESET_PHASE:
- dtmprintk(ioc, pr_info(
- "%s %s: LEAPIORAID_IOC_AFTER_RESET_PHASE\n", ioc->name,
- __func__));
- leapioraid_ctl_clear_outstanding_ioctls(ioc);
- break;
- case LEAPIORAID_IOC_DONE_RESET_PHASE:
- dtmprintk(ioc, pr_info(
- "%s %s: LEAPIORAID_IOC_DONE_RESET_PHASE\n", ioc->name,
- __func__));
- break;
- }
-}
-
-static int
-leapioraid_ctl_fasync(int fd, struct file *filep, int mode)
-{
- return fasync_helper(fd, filep, mode, &leapioraid_async_queue);
-}
-
-int
-leapioraid_ctl_release(struct inode *inode, struct file *filep)
-{
- return fasync_helper(-1, filep, 0, &leapioraid_async_queue);
-}
-
-static unsigned int
-leapioraid_ctl_poll(struct file *filep, poll_table *wait)
-{
- struct LEAPIORAID_ADAPTER *ioc;
-
- poll_wait(filep, &leapioraid_ctl_poll_wait, wait);
- spin_lock(&leapioraid_gioc_lock);
- list_for_each_entry(ioc, &leapioraid_ioc_list, list) {
- if (ioc->aen_event_read_flag) {
- spin_unlock(&leapioraid_gioc_lock);
- return POLLIN | POLLRDNORM;
- }
- }
- spin_unlock(&leapioraid_gioc_lock);
- return 0;
-}
-
-static int
-leapioraid_ctl_set_task_mid(struct LEAPIORAID_ADAPTER *ioc,
- struct leapio_ioctl_command *karg,
- struct LeapioraidSCSITmgReq_t *tm_request)
-{
- u8 found = 0;
- u16 smid;
- u16 handle;
- struct scsi_cmnd *scmd;
- struct LEAPIORAID_DEVICE *priv_data;
- struct LeapioraidSCSITmgRep_t *tm_reply;
- u32 sz;
- u32 lun;
- char *desc = NULL;
- struct leapioraid_scsiio_tracker *st = NULL;
-
- if (tm_request->TaskType == LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK)
- desc = "abort_task";
- else if (tm_request->TaskType ==
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_QUERY_TASK)
- desc = "query_task";
- else
- return 0;
- lun = scsilun_to_int((struct scsi_lun *)tm_request->LUN);
- handle = le16_to_cpu(tm_request->DevHandle);
- for (smid = ioc->shost->can_queue; smid && !found; smid--) {
- scmd = leapioraid_scsihost_scsi_lookup_get(ioc, smid);
- if (scmd == NULL || scmd->device == NULL ||
- scmd->device->hostdata == NULL)
- continue;
- if (lun != scmd->device->lun)
- continue;
- priv_data = scmd->device->hostdata;
- if (priv_data->sas_target == NULL)
- continue;
- if (priv_data->sas_target->handle != handle)
- continue;
- st = leapioraid_base_scsi_cmd_priv(scmd);
- if ((!st) || (st->smid == 0))
- continue;
- if (!tm_request->TaskMID || tm_request->TaskMID == st->smid) {
- tm_request->TaskMID = cpu_to_le16(st->smid);
- found = 1;
- }
- }
- if (!found) {
- dctlprintk(ioc, pr_info(
- "%s %s: handle(0x%04x), lun(%d), no active mid!!\n",
- ioc->name, desc,
- le16_to_cpu(tm_request->DevHandle),
- lun));
- tm_reply = ioc->ctl_cmds.reply;
- tm_reply->DevHandle = tm_request->DevHandle;
- tm_reply->Function = LEAPIORAID_FUNC_SCSI_TASK_MGMT;
- tm_reply->TaskType = tm_request->TaskType;
- tm_reply->MsgLength =
- sizeof(struct LeapioraidSCSITmgRep_t) / 4;
- tm_reply->VP_ID = tm_request->VP_ID;
- tm_reply->VF_ID = tm_request->VF_ID;
- sz = min_t(u32, karg->max_reply_bytes, ioc->reply_sz);
- if (copy_to_user(karg->reply_frame_buf_ptr, ioc->ctl_cmds.reply,
- sz))
- pr_err("failure at %s:%d/%s()!\n", __FILE__,
- __LINE__, __func__);
- return 1;
- }
- dctlprintk(ioc, pr_info(
- "%s %s: handle(0x%04x), lun(%d), task_mid(%d)\n",
- ioc->name, desc,
- le16_to_cpu(tm_request->DevHandle), lun,
- le16_to_cpu(tm_request->TaskMID)));
- return 0;
-}
-
-static long
-leapioraid_ctl_do_command(struct LEAPIORAID_ADAPTER *ioc,
- struct leapio_ioctl_command karg, void __user *mf)
-{
- struct LeapioraidReqHeader_t *mpi_request = NULL, *request;
- struct LeapioraidDefaultRep_t *mpi_reply;
- u16 smid;
- unsigned long timeout;
- u8 issue_reset;
- u32 sz, sz_arg;
- void *psge;
- void *data_out = NULL;
- dma_addr_t data_out_dma = 0;
- size_t data_out_sz = 0;
- void *data_in = NULL;
- dma_addr_t data_in_dma = 0;
- size_t data_in_sz = 0;
- long ret;
- u16 device_handle = LEAPIORAID_INVALID_DEVICE_HANDLE;
-
- issue_reset = 0;
- if (ioc->ctl_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: ctl_cmd in use\n",
- ioc->name, __func__);
- ret = -EAGAIN;
- goto out;
- }
- ret = leapioraid_wait_for_ioc_to_operational(ioc, 10);
- if (ret)
- goto out;
- mpi_request = kzalloc(ioc->request_sz, GFP_KERNEL);
- if (!mpi_request) {
- ret = -ENOMEM;
- goto out;
- }
- if (karg.data_sge_offset * 4 > ioc->request_sz ||
- karg.data_sge_offset > (UINT_MAX / 4)) {
- ret = -EINVAL;
- goto out;
- }
- if (copy_from_user(mpi_request, mf, karg.data_sge_offset * 4)) {
- pr_err("failure at %s:%d/%s()!\n", __FILE__, __LINE__,
- __func__);
- ret = -EFAULT;
- goto out;
- }
- if (mpi_request->Function == LEAPIORAID_FUNC_SCSI_TASK_MGMT) {
- smid = leapioraid_base_get_smid_hpr(ioc, ioc->ctl_cb_idx);
- if (!smid) {
- pr_err(
- "%s %s: failed obtaining a smid\n", ioc->name,
- __func__);
- ret = -EAGAIN;
- goto out;
- }
- } else {
- smid = ioc->shost->can_queue + LEAPIORAID_INTERNAL_SCSIIO_FOR_IOCTL;
- }
- ret = 0;
- ioc->ctl_cmds.status = LEAPIORAID_CMD_PENDING;
- memset(ioc->ctl_cmds.reply, 0, ioc->reply_sz);
- request = leapioraid_base_get_msg_frame(ioc, smid);
- memset(request, 0, ioc->request_sz);
- memcpy(request, mpi_request, karg.data_sge_offset * 4);
- ioc->ctl_cmds.smid = smid;
- data_out_sz = karg.data_out_size;
- data_in_sz = karg.data_in_size;
- if (mpi_request->Function == LEAPIORAID_FUNC_SCSI_IO_REQUEST ||
- mpi_request->Function == LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH
- || mpi_request->Function == LEAPIORAID_FUNC_SCSI_TASK_MGMT
- || mpi_request->Function == LEAPIORAID_FUNC_SATA_PASSTHROUGH) {
- device_handle = le16_to_cpu(mpi_request->FunctionDependent1);
- if (!device_handle || (device_handle > ioc->facts.MaxDevHandle)) {
- ret = -EINVAL;
- leapioraid_base_free_smid(ioc, smid);
- goto out;
- }
- }
- if (data_out_sz) {
- data_out = dma_alloc_coherent(&ioc->pdev->dev, data_out_sz,
- &data_out_dma, GFP_ATOMIC);
- if (!data_out) {
- ret = -ENOMEM;
- leapioraid_base_free_smid(ioc, smid);
- goto out;
- }
- if (copy_from_user(data_out, karg.data_out_buf_ptr,
- data_out_sz)) {
- pr_err("failure at %s:%d/%s()!\n", __FILE__,
- __LINE__, __func__);
- ret = -EFAULT;
- leapioraid_base_free_smid(ioc, smid);
- goto out;
- }
- }
- if (data_in_sz) {
- data_in = dma_alloc_coherent(&ioc->pdev->dev, data_in_sz,
- &data_in_dma, GFP_ATOMIC);
- if (!data_in) {
- ret = -ENOMEM;
- leapioraid_base_free_smid(ioc, smid);
- goto out;
- }
- }
- psge = (void *)request + (karg.data_sge_offset * 4);
- leapioraid_ctl_display_some_debug(ioc, smid, "ctl_request", NULL);
- init_completion(&ioc->ctl_cmds.done);
- switch (mpi_request->Function) {
- case LEAPIORAID_FUNC_SCSI_IO_REQUEST:
- case LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH:
- {
- struct LeapioSCSIIOReq_t *scsiio_request =
- (struct LeapioSCSIIOReq_t *) request;
- scsiio_request->SenseBufferLength =
- SCSI_SENSE_BUFFERSIZE;
- scsiio_request->SenseBufferLowAddress =
- leapioraid_base_get_sense_buffer_dma(ioc, smid);
- memset(ioc->ctl_cmds.sense, 0, SCSI_SENSE_BUFFERSIZE);
- ioc->build_sg(ioc, psge, data_out_dma, data_out_sz,
- data_in_dma, data_in_sz);
- if (test_bit
- (device_handle, ioc->device_remove_in_progress)) {
- dtmprintk(ioc,
- pr_info(
- "%s handle(0x%04x) :ioctl failed due to device removal in progress\n",
- ioc->name, device_handle));
- leapioraid_base_free_smid(ioc, smid);
- ret = -EINVAL;
- goto out;
- }
- if (mpi_request->Function ==
- LEAPIORAID_FUNC_SCSI_IO_REQUEST)
- ioc->put_smid_scsi_io(ioc, smid, device_handle);
- else
- ioc->put_smid_default(ioc, smid);
- break;
- }
- case LEAPIORAID_FUNC_SCSI_TASK_MGMT:
- {
- struct LeapioraidSCSITmgReq_t *tm_request =
- (struct LeapioraidSCSITmgReq_t *) request;
- dtmprintk(ioc,
- pr_info("%s TASK_MGMT: handle(0x%04x), task_type(0x%02x)\n",
- ioc->name,
- le16_to_cpu(tm_request->DevHandle),
- tm_request->TaskType));
- ioc->got_task_abort_from_ioctl = 1;
- if (tm_request->TaskType ==
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK ||
- tm_request->TaskType ==
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_QUERY_TASK) {
- if (leapioraid_ctl_set_task_mid(ioc, &karg, tm_request)) {
- leapioraid_base_free_smid(ioc, smid);
- ioc->got_task_abort_from_ioctl = 0;
- goto out;
- }
- }
- ioc->got_task_abort_from_ioctl = 0;
- if (test_bit
- (device_handle, ioc->device_remove_in_progress)) {
- dtmprintk(ioc,
- pr_info(
- "%s handle(0x%04x) :ioctl failed due to device removal in progress\n",
- ioc->name, device_handle));
- leapioraid_base_free_smid(ioc, smid);
- ret = -EINVAL;
- goto out;
- }
- leapioraid_scsihost_set_tm_flag(ioc,
- le16_to_cpu(tm_request->DevHandle));
- ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
- data_in_dma, data_in_sz);
- ioc->put_smid_hi_priority(ioc, smid, 0);
- break;
- }
- case LEAPIORAID_FUNC_SMP_PASSTHROUGH:
- {
- struct LeapioraidSmpPassthroughReq_t *smp_request =
- (struct LeapioraidSmpPassthroughReq_t *) mpi_request;
- u8 *data;
-
- if (!ioc->multipath_on_hba)
- smp_request->PhysicalPort = 0xFF;
- if (smp_request->PassthroughFlags &
- 0x80)
- data = (u8 *) &smp_request->SGL;
- else {
- if (unlikely(data_out == NULL)) {
- pr_err(
- "failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- leapioraid_base_free_smid(ioc, smid);
- ret = -EINVAL;
- goto out;
- }
- data = data_out;
- }
- if (data[1] == 0x91 && (data[10] == 1 || data[10] == 2)) {
- ioc->ioc_link_reset_in_progress = 1;
- ioc->ignore_loginfos = 1;
- }
- ioc->build_sg(ioc, psge, data_out_dma, data_out_sz,
- data_in_dma, data_in_sz);
- ioc->put_smid_default(ioc, smid);
- break;
- }
- case LEAPIORAID_FUNC_SATA_PASSTHROUGH:
- {
- ioc->build_sg(ioc, psge, data_out_dma, data_out_sz,
- data_in_dma, data_in_sz);
- if (test_bit
- (device_handle, ioc->device_remove_in_progress)) {
- dtmprintk(ioc,
- pr_info(
- "%s handle(0x%04x) :ioctl failed due to device removal in progress\n",
- ioc->name, device_handle));
- leapioraid_base_free_smid(ioc, smid);
- ret = -EINVAL;
- goto out;
- }
- ioc->put_smid_default(ioc, smid);
- break;
- }
- case LEAPIORAID_FUNC_FW_DOWNLOAD:
- case LEAPIORAID_FUNC_FW_UPLOAD:
- {
- ioc->build_sg(ioc, psge, data_out_dma, data_out_sz,
- data_in_dma, data_in_sz);
- ioc->put_smid_default(ioc, smid);
- break;
- }
- case LEAPIORAID_FUNC_SAS_IO_UNIT_CONTROL:
- {
- struct LeapioraidSasIoUnitControlReq_t *sasiounit_request =
- (struct LeapioraidSasIoUnitControlReq_t *) mpi_request;
- if (sasiounit_request->Operation ==
- LEAPIORAID_SAS_OP_PHY_HARD_RESET
- || sasiounit_request->Operation ==
- LEAPIORAID_SAS_OP_PHY_LINK_RESET) {
- ioc->ioc_link_reset_in_progress = 1;
- ioc->ignore_loginfos = 1;
- }
- }
- fallthrough;
- default:
- ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
- data_in_dma, data_in_sz);
- ioc->put_smid_default(ioc, smid);
- break;
- }
- timeout = karg.timeout;
- if (timeout < LEAPIORAID_IOCTL_DEFAULT_TIMEOUT)
- timeout = LEAPIORAID_IOCTL_DEFAULT_TIMEOUT;
- wait_for_completion_timeout(&ioc->ctl_cmds.done, timeout * HZ);
- if (mpi_request->Function == LEAPIORAID_FUNC_SCSI_TASK_MGMT) {
- struct LeapioraidSCSITmgReq_t *tm_request =
- (struct LeapioraidSCSITmgReq_t *) mpi_request;
- leapioraid_scsihost_clear_tm_flag(ioc,
- le16_to_cpu(tm_request->DevHandle));
- } else if ((mpi_request->Function == LEAPIORAID_FUNC_SMP_PASSTHROUGH
- || mpi_request->Function ==
- LEAPIORAID_FUNC_SAS_IO_UNIT_CONTROL)
- && ioc->ioc_link_reset_in_progress) {
- ioc->ioc_link_reset_in_progress = 0;
- ioc->ignore_loginfos = 0;
- }
- if (!(ioc->ctl_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- leapioraid_check_cmd_timeout(ioc,
- ioc->ctl_cmds.status, mpi_request,
- karg.data_sge_offset, issue_reset);
- goto issue_host_reset;
- }
- mpi_reply = ioc->ctl_cmds.reply;
- if (mpi_reply->Function == LEAPIORAID_FUNC_SCSI_TASK_MGMT &&
- (ioc->logging_level & LEAPIORAID_DEBUG_TM)) {
- struct LeapioraidSCSITmgRep_t *tm_reply =
- (struct LeapioraidSCSITmgRep_t *) mpi_reply;
- pr_info(
- "%s TASK_MGMT: IOCStatus(0x%04x), IOCLogInfo(0x%08x), TerminationCount(0x%08x)\n",
- ioc->name,
- le16_to_cpu(tm_reply->IOCStatus),
- le32_to_cpu(tm_reply->IOCLogInfo),
- le32_to_cpu(tm_reply->TerminationCount));
- }
- if (data_in_sz) {
- if (copy_to_user(karg.data_in_buf_ptr, data_in, data_in_sz)) {
- pr_err("failure at %s:%d/%s()!\n", __FILE__,
- __LINE__, __func__);
- ret = -ENODATA;
- goto out;
- }
- }
- if (karg.max_reply_bytes) {
- sz = min_t(u32, karg.max_reply_bytes, ioc->reply_sz);
- if (copy_to_user(karg.reply_frame_buf_ptr, ioc->ctl_cmds.reply,
- sz)) {
- pr_err("failure at %s:%d/%s()!\n", __FILE__,
- __LINE__, __func__);
- ret = -ENODATA;
- goto out;
- }
- }
- if (karg.max_sense_bytes && (mpi_request->Function ==
- LEAPIORAID_FUNC_SCSI_IO_REQUEST
- || mpi_request->Function ==
- LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH)) {
- if (karg.sense_data_ptr == NULL) {
- pr_err(
- "%s Response buffer provided by application is NULL; Response data will not be returned.\n",
- ioc->name);
- goto out;
- }
- sz_arg = SCSI_SENSE_BUFFERSIZE;
- sz = min_t(u32, karg.max_sense_bytes, sz_arg);
- if (copy_to_user(karg.sense_data_ptr, ioc->ctl_cmds.sense, sz)) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- ret = -ENODATA;
- goto out;
- }
- }
-issue_host_reset:
- if (issue_reset) {
- ret = -ENODATA;
- if ((mpi_request->Function == LEAPIORAID_FUNC_SCSI_IO_REQUEST
- || mpi_request->Function ==
- LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH
- || mpi_request->Function ==
- LEAPIORAID_FUNC_SATA_PASSTHROUGH)) {
- pr_err(
- "%s issue target reset: handle = (0x%04x)\n",
- ioc->name,
- le16_to_cpu(mpi_request->FunctionDependent1));
- leapioraid_halt_firmware(ioc, 0);
- leapioraid_scsihost_issue_locked_tm(ioc,
- le16_to_cpu
- (mpi_request->FunctionDependent1),
- 0, 0, 0,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET,
- smid, 30,
- LEAPIORAID_SCSITASKMGMT_MSGFLAGS_LINK_RESET);
- } else
- leapioraid_base_hard_reset_handler(ioc,
- FORCE_BIG_HAMMER);
- }
-out:
- if (data_in)
- dma_free_coherent(&ioc->pdev->dev, data_in_sz, data_in,
- data_in_dma);
- if (data_out)
- dma_free_coherent(&ioc->pdev->dev, data_out_sz, data_out,
- data_out_dma);
- kfree(mpi_request);
- ioc->ctl_cmds.status = LEAPIORAID_CMD_NOT_USED;
- return ret;
-}
-
-static long
-leapioraid_ctl_getiocinfo(
- struct LEAPIORAID_ADAPTER *ioc, void __user *arg)
-{
- struct leapio_ioctl_iocinfo karg;
- u8 revision;
-
- dctlprintk(ioc, pr_info("%s %s: enter\n", ioc->name,
- __func__));
- memset(&karg, 0, sizeof(karg));
- if (ioc->pfacts)
- karg.port_number = ioc->pfacts[0].PortNumber;
- pci_read_config_byte(ioc->pdev, PCI_CLASS_REVISION, &revision);
- karg.hw_rev = revision;
- karg.pci_id = ioc->pdev->device;
- karg.subsystem_device = ioc->pdev->subsystem_device;
- karg.subsystem_vendor = ioc->pdev->subsystem_vendor;
- karg.pci_information.u.bits.bus = ioc->pdev->bus->number;
- karg.pci_information.u.bits.device = PCI_SLOT(ioc->pdev->devfn);
- karg.pci_information.u.bits.function = PCI_FUNC(ioc->pdev->devfn);
- karg.pci_information.segment_id = pci_domain_nr(ioc->pdev->bus);
- karg.firmware_version = ioc->facts.FWVersion.Word;
- strscpy(karg.driver_version, ioc->driver_name, sizeof(karg.driver_version));
- strcat(karg.driver_version, "-");
- karg.adapter_type = 0x06;
- strcat(karg.driver_version, LEAPIORAID_DRIVER_VERSION);
- karg.adapter_type = 0x07;
- karg.bios_version = le32_to_cpu(ioc->bios_pg3.BiosVersion);
- if (copy_to_user(arg, &karg, sizeof(karg))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- return 0;
-}
-
-static long
-leapioraid_ctl_eventquery(
- struct LEAPIORAID_ADAPTER *ioc, void __user *arg)
-{
- struct leapio_ioctl_eventquery karg;
-
- if (copy_from_user(&karg, arg, sizeof(karg))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- dctlprintk(ioc, pr_info("%s %s: enter\n", ioc->name,
- __func__));
- karg.event_entries = LEAPIORAID_CTL_EVENT_LOG_SIZE;
- memcpy(karg.event_types, ioc->event_type,
- LEAPIORAID_EVENT_NOTIFY_EVENTMASK_WORDS * sizeof(u32));
- if (copy_to_user(arg, &karg, sizeof(karg))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- return 0;
-}
-
-static long
-leapioraid_ctl_eventenable(
- struct LEAPIORAID_ADAPTER *ioc, void __user *arg)
-{
- struct leapio_ioctl_eventenable karg;
-
- if (copy_from_user(&karg, arg, sizeof(karg))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- dctlprintk(ioc, pr_info("%s %s: enter\n", ioc->name,
- __func__));
- memcpy(ioc->event_type, karg.event_types,
- LEAPIORAID_EVENT_NOTIFY_EVENTMASK_WORDS * sizeof(u32));
- leapioraid_base_validate_event_type(ioc, ioc->event_type);
- if (ioc->event_log)
- return 0;
- ioc->event_context = 0;
- ioc->aen_event_read_flag = 0;
- ioc->event_log = kcalloc(LEAPIORAID_CTL_EVENT_LOG_SIZE,
- sizeof(struct LEAPIORAID_IOCTL_EVENTS),
- GFP_KERNEL);
- if (!ioc->event_log) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -ENOMEM;
- }
- return 0;
-}
-
-static long
-leapioraid_ctl_eventreport(
- struct LEAPIORAID_ADAPTER *ioc, void __user *arg)
-{
- struct leapio_ioctl_eventreport karg;
- u32 number_bytes, max_events, max;
- struct leapio_ioctl_eventreport __user *uarg = arg;
-
- if (copy_from_user(&karg, arg, sizeof(karg))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- dctlprintk(ioc, pr_info("%s %s: enter\n", ioc->name,
- __func__));
- number_bytes = karg.hdr.max_data_size -
- sizeof(struct leapio_ioctl_header);
- max_events = number_bytes / sizeof(struct LEAPIORAID_IOCTL_EVENTS);
- max = min_t(u32, LEAPIORAID_CTL_EVENT_LOG_SIZE, max_events);
- if (!max || !ioc->event_log)
- return -ENODATA;
- number_bytes = max * sizeof(struct LEAPIORAID_IOCTL_EVENTS);
- if (copy_to_user(uarg->event_data, ioc->event_log, number_bytes)) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- ioc->aen_event_read_flag = 0;
- return 0;
-}
-
-static long
-leapioraid_ctl_do_reset(
- struct LEAPIORAID_ADAPTER *ioc, void __user *arg)
-{
- struct leapio_ioctl_diag_reset karg;
- int retval;
-
- if (copy_from_user(&karg, arg, sizeof(karg))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- if (ioc->shost_recovery ||
- ioc->pci_error_recovery || ioc->is_driver_loading ||
- ioc->remove_host)
- return -EAGAIN;
- dctlprintk(ioc, pr_info("%s %s: enter\n", ioc->name,
- __func__));
- ioc->reset_from_user = 1;
- scsi_block_requests(ioc->shost);
- retval = leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
- scsi_unblock_requests(ioc->shost);
- pr_info("%s ioctl: host reset: %s\n",
- ioc->name, ((!retval) ? "SUCCESS" : "FAILED"));
- return 0;
-}
-
-static int
-leapioraid_ctl_btdh_search_sas_device(struct LEAPIORAID_ADAPTER *ioc,
- struct leapio_ioctl_btdh_mapping *btdh)
-{
- struct leapioraid_sas_device *sas_device;
- unsigned long flags;
- int rc = 0;
-
- if (list_empty(&ioc->sas_device_list))
- return rc;
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- list_for_each_entry(sas_device, &ioc->sas_device_list, list) {
- if (btdh->bus == 0xFFFFFFFF && btdh->id == 0xFFFFFFFF &&
- btdh->handle == sas_device->handle) {
- btdh->bus = sas_device->channel;
- btdh->id = sas_device->id;
- rc = 1;
- goto out;
- } else if (btdh->bus == sas_device->channel && btdh->id ==
- sas_device->id && btdh->handle == 0xFFFF) {
- btdh->handle = sas_device->handle;
- rc = 1;
- goto out;
- }
- }
-out:
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- return rc;
-}
-
-static int
-leapioraid_ctl_btdh_search_raid_device(struct LEAPIORAID_ADAPTER *ioc,
- struct leapio_ioctl_btdh_mapping *btdh)
-{
- struct leapioraid_raid_device *raid_device;
- unsigned long flags;
- int rc = 0;
-
- if (list_empty(&ioc->raid_device_list))
- return rc;
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
- if (btdh->bus == 0xFFFFFFFF && btdh->id == 0xFFFFFFFF &&
- btdh->handle == raid_device->handle) {
- btdh->bus = raid_device->channel;
- btdh->id = raid_device->id;
- rc = 1;
- goto out;
- } else if (btdh->bus == raid_device->channel && btdh->id ==
- raid_device->id && btdh->handle == 0xFFFF) {
- btdh->handle = raid_device->handle;
- rc = 1;
- goto out;
- }
- }
-out:
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- return rc;
-}
-
-static long
-leapioraid_ctl_btdh_mapping(
- struct LEAPIORAID_ADAPTER *ioc, void __user *arg)
-{
- struct leapio_ioctl_btdh_mapping karg;
- int rc;
-
- if (copy_from_user(&karg, arg, sizeof(karg))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- dctlprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- rc = leapioraid_ctl_btdh_search_sas_device(ioc, &karg);
- if (!rc)
- leapioraid_ctl_btdh_search_raid_device(ioc, &karg);
- if (copy_to_user(arg, &karg, sizeof(karg))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- return 0;
-}
-
-#ifdef CONFIG_COMPAT
-static long
-leapioraid_ctl_compat_command(
- struct LEAPIORAID_ADAPTER *ioc, unsigned int cmd,
- void __user *arg)
-{
- struct leapio_ioctl_command32 karg32;
- struct leapio_ioctl_command32 __user *uarg;
- struct leapio_ioctl_command karg;
-
- if (_IOC_SIZE(cmd) != sizeof(struct leapio_ioctl_command32))
- return -EINVAL;
- uarg = (struct leapio_ioctl_command32 __user *)arg;
- if (copy_from_user(&karg32, (char __user *)arg, sizeof(karg32))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- memset(&karg, 0, sizeof(struct leapio_ioctl_command));
- karg.hdr.ioc_number = karg32.hdr.ioc_number;
- karg.hdr.port_number = karg32.hdr.port_number;
- karg.hdr.max_data_size = karg32.hdr.max_data_size;
- karg.timeout = karg32.timeout;
- karg.max_reply_bytes = karg32.max_reply_bytes;
- karg.data_in_size = karg32.data_in_size;
- karg.data_out_size = karg32.data_out_size;
- karg.max_sense_bytes = karg32.max_sense_bytes;
- karg.data_sge_offset = karg32.data_sge_offset;
- karg.reply_frame_buf_ptr = compat_ptr(karg32.reply_frame_buf_ptr);
- karg.data_in_buf_ptr = compat_ptr(karg32.data_in_buf_ptr);
- karg.data_out_buf_ptr = compat_ptr(karg32.data_out_buf_ptr);
- karg.sense_data_ptr = compat_ptr(karg32.sense_data_ptr);
- return leapioraid_ctl_do_command(ioc, karg, &uarg->mf);
-}
-#endif
-
-static long
-leapioraid_ctl_ioctl_main(
- struct file *file, unsigned int cmd, void __user *arg,
- u8 compat)
-{
- struct LEAPIORAID_ADAPTER *ioc;
- struct leapio_ioctl_header ioctl_header;
- enum leapioraid_block_state state;
- long ret = -ENOIOCTLCMD;
-
- if (copy_from_user(&ioctl_header, (char __user *)arg,
- sizeof(struct leapio_ioctl_header))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- return -EFAULT;
- }
- if (leapioraid_ctl_verify_adapter(ioctl_header.ioc_number,
- &ioc) == -1 || !ioc)
- return -ENODEV;
- mutex_lock(&ioc->pci_access_mutex);
- if (ioc->shost_recovery ||
- ioc->pci_error_recovery || ioc->is_driver_loading ||
- ioc->remove_host) {
- ret = -EAGAIN;
- goto unlock_pci_access;
- }
- state = (file->f_flags & O_NONBLOCK) ? NON_BLOCKING : BLOCKING;
- if (state == NON_BLOCKING) {
- if (!mutex_trylock(&ioc->ctl_cmds.mutex)) {
- ret = -EAGAIN;
- goto unlock_pci_access;
- }
- } else if (mutex_lock_interruptible(&ioc->ctl_cmds.mutex)) {
- ret = -ERESTARTSYS;
- goto unlock_pci_access;
- }
- switch (cmd) {
- case LEAPIORAID_IOCINFO:
- if (_IOC_SIZE(cmd) == sizeof(struct leapio_ioctl_iocinfo))
- ret = leapioraid_ctl_getiocinfo(ioc, arg);
- break;
-#ifdef CONFIG_COMPAT
- case LEAPIORAID_COMMAND32:
-#endif
- case LEAPIORAID_COMMAND:
- {
- struct leapio_ioctl_command __user *uarg;
- struct leapio_ioctl_command karg;
-
-#ifdef CONFIG_COMPAT
- if (compat) {
- ret =
- leapioraid_ctl_compat_command(ioc, cmd, arg);
- break;
- }
-#endif
- if (copy_from_user(&karg, arg, sizeof(karg))) {
- pr_err("failure at %s:%d/%s()!\n",
- __FILE__, __LINE__, __func__);
- ret = -EFAULT;
- break;
- }
- if (karg.hdr.ioc_number != ioctl_header.ioc_number) {
- ret = -EINVAL;
- break;
- }
- if (_IOC_SIZE(cmd) ==
- sizeof(struct leapio_ioctl_command)) {
- uarg = arg;
- ret =
- leapioraid_ctl_do_command(ioc, karg,
- &uarg->mf);
- }
- break;
- }
- case LEAPIORAID_EVENTQUERY:
- if (_IOC_SIZE(cmd) == sizeof(struct leapio_ioctl_eventquery))
- ret = leapioraid_ctl_eventquery(ioc, arg);
- break;
- case LEAPIORAID_EVENTENABLE:
- if (_IOC_SIZE(cmd) == sizeof(struct leapio_ioctl_eventenable))
- ret = leapioraid_ctl_eventenable(ioc, arg);
- break;
- case LEAPIORAID_EVENTREPORT:
- ret = leapioraid_ctl_eventreport(ioc, arg);
- break;
- case LEAPIORAID_HARDRESET:
- if (_IOC_SIZE(cmd) == sizeof(struct leapio_ioctl_diag_reset))
- ret = leapioraid_ctl_do_reset(ioc, arg);
- break;
- case LEAPIORAID_BTDHMAPPING:
- if (_IOC_SIZE(cmd) == sizeof(struct leapio_ioctl_btdh_mapping))
- ret = leapioraid_ctl_btdh_mapping(ioc, arg);
- break;
- default:
- dctlprintk(ioc, pr_err(
- "%s unsupported ioctl opcode(0x%08x)\n",
- ioc->name, cmd));
- break;
- }
- mutex_unlock(&ioc->ctl_cmds.mutex);
-unlock_pci_access:
- mutex_unlock(&ioc->pci_access_mutex);
- return ret;
-}
-
-static long
-leapioraid_ctl_ioctl(
- struct file *file, unsigned int cmd, unsigned long arg)
-{
- long ret;
-
- ret = leapioraid_ctl_ioctl_main(file, cmd, (void __user *)arg, 0);
- return ret;
-}
-
-#ifdef CONFIG_COMPAT
-static long
-leapioraid_ctl_ioctl_compat(
- struct file *file, unsigned int cmd, unsigned long arg)
-{
- long ret;
-
- ret = leapioraid_ctl_ioctl_main(file, cmd, (void __user *)arg, 1);
- return ret;
-}
-#endif
-
-static ssize_t
-version_fw_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%02d.%02d.%02d.%02d\n",
- (ioc->facts.FWVersion.Word & 0xFF000000) >> 24,
- (ioc->facts.FWVersion.Word & 0x00FF0000) >> 16,
- (ioc->facts.FWVersion.Word & 0x0000FF00) >> 8,
- ioc->facts.FWVersion.Word & 0x000000FF);
-}
-static DEVICE_ATTR_RO(version_fw);
-
-static ssize_t
-version_bios_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
- u32 version = le32_to_cpu(ioc->bios_pg3.BiosVersion);
-
- return snprintf(buf, PAGE_SIZE, "%02d.%02d.%02d.%02d\n",
- (version & 0xFF000000) >> 24,
- (version & 0x00FF0000) >> 16,
- (version & 0x0000FF00) >> 8, version & 0x000000FF);
-}
-static DEVICE_ATTR_RO(version_bios);
-
-static ssize_t
-version_leapioraid_show(struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%03x.%02x\n",
- ioc->facts.MsgVersion, ioc->facts.HeaderVersion >> 8);
-}
-static DEVICE_ATTR_RO(version_leapioraid);
-
-static ssize_t
-version_product_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, 16, "%s\n", ioc->manu_pg0.ChipName);
-}
-static DEVICE_ATTR_RO(version_product);
-
-static ssize_t
-version_nvdata_persistent_show(struct device *cdev,
- struct device_attribute *attr, char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%08xh\n",
- le32_to_cpu(ioc->iounit_pg0.NvdataVersionPersistent.Word));
-}
-static DEVICE_ATTR_RO(version_nvdata_persistent);
-
-static ssize_t
-version_nvdata_default_show(struct device *cdev,
- struct device_attribute *attr, char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%08xh\n",
- le32_to_cpu(ioc->iounit_pg0.NvdataVersionDefault.Word));
-}
-static DEVICE_ATTR_RO(version_nvdata_default);
-
-static ssize_t
-board_name_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardName);
-}
-static DEVICE_ATTR_RO(board_name);
-
-static ssize_t
-board_assembly_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardAssembly);
-}
-static DEVICE_ATTR_RO(board_assembly);
-
-static ssize_t
-board_tracer_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardTracerNumber);
-}
-static DEVICE_ATTR_RO(board_tracer);
-
-static ssize_t
-io_delay_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->io_missing_delay);
-}
-static DEVICE_ATTR_RO(io_delay);
-
-static ssize_t
-device_delay_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->device_missing_delay);
-}
-static DEVICE_ATTR_RO(device_delay);
-
-static ssize_t
-fw_queue_depth_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->facts.RequestCredit);
-}
-static DEVICE_ATTR_RO(fw_queue_depth);
-
-static ssize_t
-host_sas_address_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "0x%016llx\n",
- (unsigned long long)ioc->sas_hba.sas_address);
-}
-static DEVICE_ATTR_RO(host_sas_address);
-
-static ssize_t
-logging_level_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%08xh\n", ioc->logging_level);
-}
-
-static ssize_t
-logging_level_store(
- struct device *cdev, struct device_attribute *attr,
- const char *buf, size_t count)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
- int val = 0;
-
- if (kstrtoint(buf, 0, &val))
- return -EINVAL;
- ioc->logging_level = val;
- pr_info("%s logging_level=%08xh\n", ioc->name,
- ioc->logging_level);
- return strlen(buf);
-}
-static DEVICE_ATTR_RW(logging_level);
-
-static ssize_t
-fwfault_debug_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%d\n", ioc->fwfault_debug);
-}
-
-static ssize_t
-fwfault_debug_store(
- struct device *cdev, struct device_attribute *attr,
- const char *buf, size_t count)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
- int val = 0;
-
- if (kstrtoint(buf, 0, &val))
- return -EINVAL;
- ioc->fwfault_debug = val;
- pr_info("%s fwfault_debug=%d\n", ioc->name,
- ioc->fwfault_debug);
- return strlen(buf);
-}
-static DEVICE_ATTR_RW(fwfault_debug);
-
-static
-struct leapioraid_raid_device *leapioraid_ctl_raid_device_find_by_handle(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct leapioraid_raid_device *raid_device, *r;
-
- r = NULL;
- list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
- if (raid_device->handle != handle)
- continue;
- r = raid_device;
- goto out;
- }
-out:
- return r;
-}
-
-u8
-leapioraid_ctl_tm_done(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid, u8 msix_index,
- u32 reply)
-{
- u8 rc;
- unsigned long flags;
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_raid_device *raid_device;
- u16 smid_task_abort;
- u16 handle;
- struct LeapioraidSCSITmgReq_t *mpi_request;
- struct LeapioraidSCSITmgRep_t *mpi_reply =
- leapioraid_base_get_reply_virt_addr(ioc, reply);
-
- rc = 1;
- if (unlikely(!mpi_reply)) {
- pr_err(
- "%s mpi_reply not valid at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- return rc;
- }
- handle = le16_to_cpu(mpi_reply->DevHandle);
- sas_device = leapioraid_get_sdev_by_handle(ioc, handle);
- if (sas_device) {
- smid_task_abort = 0;
- if (mpi_reply->TaskType ==
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK) {
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- smid_task_abort = le16_to_cpu(mpi_request->TaskMID);
- }
- pr_info("\tcomplete: sas_addr(0x%016llx), handle(0x%04x), smid(%d), term(%d)\n",
- (unsigned long long)sas_device->sas_address, handle,
- (smid_task_abort ? smid_task_abort : smid),
- le32_to_cpu(mpi_reply->TerminationCount));
- leapioraid_sas_device_put(sas_device);
- }
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device = leapioraid_ctl_raid_device_find_by_handle(ioc, handle);
- if (raid_device)
- pr_info("\tcomplete: wwid(0x%016llx), handle(0x%04x), smid(%d), term(%d)\n",
- (unsigned long long)raid_device->wwid, handle,
- smid, le32_to_cpu(mpi_reply->TerminationCount));
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- ioc->terminated_tm_count += le32_to_cpu(mpi_reply->TerminationCount);
- if (ioc->out_of_frames) {
- rc = 0;
- leapioraid_base_free_smid(ioc, smid);
- ioc->out_of_frames = 0;
- wake_up(&ioc->no_frames_tm_wq);
- }
- ioc->pending_tm_count--;
- if (!ioc->pending_tm_count)
- wake_up(&ioc->pending_tm_wq);
- return rc;
-}
-
-static void
-leapioraid_ctl_tm_sysfs(struct LEAPIORAID_ADAPTER *ioc, u8 task_type)
-{
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_raid_device *raid_device;
- struct LeapioraidSCSITmgReq_t *mpi_request;
- u16 smid, handle, hpr_smid;
- struct LEAPIORAID_DEVICE *device_priv_data;
- struct LEAPIORAID_TARGET *target_priv_data;
- struct scsi_cmnd *scmd;
- struct scsi_device *sdev;
- unsigned long flags;
- int tm_count;
- int lun;
- u32 doorbell;
- struct leapioraid_scsiio_tracker *st;
- u8 tr_method = 0x00;
-
- if (list_empty(&ioc->sas_device_list))
- return;
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
- if (ioc->shost_recovery || ioc->remove_host) {
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
- pr_err(
- "%s %s: busy : host reset in progress, try later\n",
- ioc->name, __func__);
- return;
- }
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
- scsi_block_requests(ioc->shost);
- init_waitqueue_head(&ioc->pending_tm_wq);
- ioc->ignore_loginfos = 1;
- ioc->pending_tm_count = 0;
- ioc->terminated_tm_count = 0;
- ioc->out_of_frames = 0;
- tm_count = 0;
- switch (task_type) {
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK:
- for (smid = 1; smid <= ioc->shost->can_queue; smid++) {
- if (list_empty(&ioc->hpr_free_list)) {
- ioc->out_of_frames = 1;
- init_waitqueue_head(&ioc->no_frames_tm_wq);
- wait_event_timeout(ioc->no_frames_tm_wq,
- !ioc->out_of_frames, HZ);
- }
- scmd = leapioraid_scsihost_scsi_lookup_get(ioc, smid);
- if (!scmd)
- continue;
- st = leapioraid_base_scsi_cmd_priv(scmd);
- if ((!st) || (st->cb_idx == 0xFF) || (st->smid == 0))
- continue;
- lun = scmd->device->lun;
- device_priv_data = scmd->device->hostdata;
- if (!device_priv_data || !device_priv_data->sas_target)
- continue;
- target_priv_data = device_priv_data->sas_target;
- if (!target_priv_data)
- continue;
- if (target_priv_data->flags &
- LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT ||
- target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_VOLUME)
- continue;
- handle = device_priv_data->sas_target->handle;
- hpr_smid = leapioraid_base_get_smid_hpr(ioc,
- ioc->ctl_tm_cb_idx);
- if (!hpr_smid) {
- pr_err(
- "%s %s: out of hi-priority requests!!\n",
- ioc->name, __func__);
- goto out_of_frames;
- }
- mpi_request =
- leapioraid_base_get_msg_frame(ioc, hpr_smid);
- memset(mpi_request, 0,
- sizeof(struct LeapioraidSCSITmgReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_TASK_MGMT;
- mpi_request->DevHandle = cpu_to_le16(handle);
- mpi_request->TaskType =
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK;
- mpi_request->TaskMID = cpu_to_le16(st->smid);
- int_to_scsilun(lun,
- (struct scsi_lun *)mpi_request->LUN);
- starget_printk(KERN_INFO,
- device_priv_data->sas_target->starget,
- "sending tm: sas_addr(0x%016llx), handle(0x%04x), smid(%d)\n",
- (unsigned long long)
- device_priv_data->sas_target->sas_address, handle, st->smid);
- ioc->pending_tm_count++;
- tm_count++;
- doorbell = leapioraid_base_get_iocstate(ioc, 0);
- if ((doorbell &
- LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_FAULT
- || (doorbell & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP)
- goto fault_in_progress;
- ioc->put_smid_hi_priority(ioc, hpr_smid, 0);
- }
- break;
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET:
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- list_for_each_entry(sas_device, &ioc->sas_device_list, list) {
- if (list_empty(&ioc->hpr_free_list)) {
- spin_unlock_irqrestore(&ioc->sas_device_lock,
- flags);
- ioc->out_of_frames = 1;
- init_waitqueue_head(&ioc->no_frames_tm_wq);
- wait_event_timeout(ioc->no_frames_tm_wq,
- !ioc->out_of_frames, HZ);
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- }
- if (!sas_device->starget)
- continue;
- if (test_bit(sas_device->handle, ioc->pd_handles))
- continue;
- hpr_smid = leapioraid_base_get_smid_hpr(ioc,
- ioc->ctl_tm_cb_idx);
- if (!hpr_smid) {
- pr_err(
- "%s %s: out of hi-priority requests!!\n",
- ioc->name, __func__);
- spin_unlock_irqrestore(&ioc->sas_device_lock,
- flags);
- goto out_of_frames;
- }
- mpi_request =
- leapioraid_base_get_msg_frame(ioc, hpr_smid);
- memset(mpi_request, 0,
- sizeof(struct LeapioraidSCSITmgReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_TASK_MGMT;
- mpi_request->DevHandle =
- cpu_to_le16(sas_device->handle);
- mpi_request->TaskType =
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
- starget_printk(KERN_INFO,
- sas_device->starget,
- "sending tm: sas_addr(0x%016llx), handle(0x%04x), smid(%d)\n",
- (unsigned long long)sas_device->sas_address,
- sas_device->handle,
- hpr_smid);
- ioc->pending_tm_count++;
- tm_count++;
- doorbell = leapioraid_base_get_iocstate(ioc, 0);
- if ((doorbell &
- LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_FAULT
- || (doorbell & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP) {
- spin_unlock_irqrestore(&ioc->sas_device_lock,
- flags);
- goto fault_in_progress;
- }
- ioc->put_smid_hi_priority(ioc, hpr_smid, 0);
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
- if (list_empty(&ioc->hpr_free_list)) {
- spin_unlock_irqrestore(&ioc->raid_device_lock,
- flags);
- ioc->out_of_frames = 1;
- init_waitqueue_head(&ioc->no_frames_tm_wq);
- wait_event_timeout(ioc->no_frames_tm_wq,
- !ioc->out_of_frames, HZ);
- spin_lock_irqsave(&ioc->raid_device_lock,
- flags);
- }
- if (!raid_device->starget)
- continue;
- hpr_smid = leapioraid_base_get_smid_hpr(ioc,
- ioc->ctl_tm_cb_idx);
- if (!hpr_smid) {
- pr_err("%s %s: out of hi-priority requests!!\n",
- ioc->name, __func__);
- spin_unlock_irqrestore(&ioc->raid_device_lock,
- flags);
- goto out_of_frames;
- }
- mpi_request =
- leapioraid_base_get_msg_frame(ioc, hpr_smid);
- memset(mpi_request, 0,
- sizeof(struct LeapioraidSCSITmgReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_TASK_MGMT;
- mpi_request->DevHandle =
- cpu_to_le16(raid_device->handle);
- mpi_request->TaskType =
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
- starget_printk(KERN_INFO,
- raid_device->starget,
- "sending tm: wwid(0x%016llx), handle(0x%04x), smid(%d)\n",
- (unsigned long long)raid_device->wwid,
- raid_device->handle, hpr_smid);
- ioc->pending_tm_count++;
- tm_count++;
- doorbell = leapioraid_base_get_iocstate(ioc, 0);
- if ((doorbell &
- LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_FAULT
- || (doorbell & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP) {
- spin_unlock_irqrestore(&ioc->raid_device_lock,
- flags);
- goto fault_in_progress;
- }
- ioc->put_smid_hi_priority(ioc, hpr_smid, 0);
- }
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- break;
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET:
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET:
- shost_for_each_device(sdev, ioc->shost) {
- if (list_empty(&ioc->hpr_free_list)) {
- ioc->out_of_frames = 1;
- init_waitqueue_head(&ioc->no_frames_tm_wq);
- wait_event_timeout(ioc->no_frames_tm_wq,
- !ioc->out_of_frames, HZ);
- }
- device_priv_data = sdev->hostdata;
- if (!device_priv_data || !device_priv_data->sas_target)
- continue;
- target_priv_data = device_priv_data->sas_target;
- if (!target_priv_data)
- continue;
- if (target_priv_data->flags &
- LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT)
- continue;
- if ((target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_VOLUME)
- && (task_type ==
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET))
- continue;
- handle = device_priv_data->sas_target->handle;
- hpr_smid = leapioraid_base_get_smid_hpr(ioc,
- ioc->ctl_tm_cb_idx);
- if (!hpr_smid) {
- pr_err("%s %s: out of hi-priority requests!!\n",
- ioc->name, __func__);
- scsi_device_put(sdev);
- goto out_of_frames;
- }
- mpi_request =
- leapioraid_base_get_msg_frame(ioc, hpr_smid);
- memset(mpi_request, 0,
- sizeof(struct LeapioraidSCSITmgReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_TASK_MGMT;
- mpi_request->DevHandle = cpu_to_le16(handle);
- mpi_request->TaskType = task_type;
- mpi_request->MsgFlags = tr_method;
- int_to_scsilun(sdev->lun, (struct scsi_lun *)
- mpi_request->LUN);
- sdev_printk(KERN_INFO, sdev,
- "sending tm: sas_addr(0x%016llx), handle(0x%04x), smid(%d)\n",
- (unsigned long long)target_priv_data->sas_address,
- handle, hpr_smid);
- ioc->pending_tm_count++;
- tm_count++;
- doorbell = leapioraid_base_get_iocstate(ioc, 0);
- if ((doorbell &
- LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_FAULT
- || (doorbell & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP) {
- scsi_device_put(sdev);
- goto fault_in_progress;
- }
- ioc->put_smid_hi_priority(ioc, hpr_smid, 0);
- }
- break;
- }
-out_of_frames:
- if (ioc->pending_tm_count)
- wait_event_timeout(ioc->pending_tm_wq,
- !ioc->pending_tm_count, 30 * HZ);
- pr_info("%s task management requests issued(%d)\n",
- ioc->name, tm_count);
- pr_info("%s number IO terminated(%d)\n",
- ioc->name, ioc->terminated_tm_count);
-fault_in_progress:
- scsi_unblock_requests(ioc->shost);
- ioc->ignore_loginfos = 0;
-}
-
-static ssize_t
-task_management_store(
- struct device *cdev, struct device_attribute *attr,
- const char *buf, size_t count)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
- int opcode = 0;
-
- if (kstrtoint(buf, 0, &opcode))
- return -EINVAL;
- switch (opcode) {
- case 1:
- ioc->reset_from_user = 1;
- scsi_block_requests(ioc->shost);
- pr_err("%s sysfs: diag reset issued: %s\n", ioc->name,
- ((!leapioraid_base_hard_reset_handler(ioc,
- FORCE_BIG_HAMMER))
- ? "SUCCESS" : "FAILED"));
- scsi_unblock_requests(ioc->shost);
- break;
- case 2:
- ioc->reset_from_user = 1;
- scsi_block_requests(ioc->shost);
- pr_err("%s sysfs: message unit reset issued: %s\n", ioc->name,
- ((!leapioraid_base_hard_reset_handler(ioc,
- SOFT_RESET)) ?
- "SUCCESS" : "FAILED"));
- scsi_unblock_requests(ioc->shost);
- break;
- case 3:
- pr_err("%s sysfs: TASKTYPE_ABORT_TASK :\n", ioc->name);
- ioc->got_task_abort_from_sysfs = 1;
- leapioraid_ctl_tm_sysfs(ioc,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK);
- ioc->got_task_abort_from_sysfs = 0;
- break;
- case 4:
- pr_err("%s sysfs: TASKTYPE_TARGET_RESET:\n", ioc->name);
- leapioraid_ctl_tm_sysfs(ioc,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET);
- break;
- case 5:
- pr_err("%s sysfs: TASKTYPE_LOGICAL_UNIT_RESET:\n", ioc->name);
- leapioraid_ctl_tm_sysfs(ioc,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET);
- break;
- case 6:
- pr_info("%s sysfs: TASKTYPE_ABRT_TASK_SET\n", ioc->name);
- leapioraid_ctl_tm_sysfs(ioc,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET);
- break;
- default:
- pr_info("%s unsupported opcode(%d)\n",
- ioc->name, opcode);
- break;
- };
- return strlen(buf);
-}
-static DEVICE_ATTR_WO(task_management);
-
-static ssize_t
-ioc_reset_count_show(
- struct device *cdev, struct device_attribute *attr,
- char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%d\n", ioc->ioc_reset_count);
-}
-static DEVICE_ATTR_RO(ioc_reset_count);
-
-static ssize_t
-reply_queue_count_show(struct device *cdev,
- struct device_attribute *attr, char *buf)
-{
- u8 reply_queue_count;
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- if ((ioc->facts.IOCCapabilities &
- LEAPIORAID_IOCFACTS_CAPABILITY_MSI_X_INDEX) && ioc->msix_enable)
- reply_queue_count = ioc->reply_queue_count;
- else
- reply_queue_count = 1;
- return snprintf(buf, PAGE_SIZE, "%d\n", reply_queue_count);
-}
-static DEVICE_ATTR_RO(reply_queue_count);
-
-static ssize_t
-drv_support_bitmap_show(struct device *cdev,
- struct device_attribute *attr, char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "0x%08x\n", ioc->drv_support_bitmap);
-}
-static DEVICE_ATTR_RO(drv_support_bitmap);
-
-static ssize_t
-enable_sdev_max_qd_show(struct device *cdev,
- struct device_attribute *attr, char *buf)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- return snprintf(buf, PAGE_SIZE, "%d\n", ioc->enable_sdev_max_qd);
-}
-
-static ssize_t
-enable_sdev_max_qd_store(struct device *cdev,
- struct device_attribute *attr, const char *buf,
- size_t count)
-{
- struct Scsi_Host *shost = class_to_shost(cdev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- int val = 0;
- struct scsi_device *sdev;
- struct leapioraid_raid_device *raid_device;
- int qdepth;
-
- if (kstrtoint(buf, 0, &val))
- return -EINVAL;
- switch (val) {
- case 0:
- ioc->enable_sdev_max_qd = 0;
- shost_for_each_device(sdev, ioc->shost) {
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data)
- continue;
- sas_target_priv_data = sas_device_priv_data->sas_target;
- if (!sas_target_priv_data)
- continue;
- if (sas_target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_VOLUME) {
- raid_device =
- leapioraid_raid_device_find_by_handle(ioc,
- sas_target_priv_data->handle);
- switch (raid_device->volume_type) {
- case LEAPIORAID_RAID_VOL_TYPE_RAID0:
- if (raid_device->device_info &
- LEAPIORAID_SAS_DEVICE_INFO_SSP_TARGET)
- qdepth =
- LEAPIORAID_SAS_QUEUE_DEPTH;
- else
- qdepth =
- LEAPIORAID_SATA_QUEUE_DEPTH;
- break;
- case LEAPIORAID_RAID_VOL_TYPE_RAID1E:
- case LEAPIORAID_RAID_VOL_TYPE_RAID1:
- case LEAPIORAID_RAID_VOL_TYPE_RAID10:
- case LEAPIORAID_RAID_VOL_TYPE_UNKNOWN:
- default:
- qdepth = LEAPIORAID_RAID_QUEUE_DEPTH;
- }
- } else
- qdepth =
- (sas_target_priv_data->sas_dev->port_type >
- 1) ? ioc->max_wideport_qd : ioc->max_narrowport_qd;
- leapioraid__scsihost_change_queue_depth(sdev, qdepth);
- }
- break;
- case 1:
- ioc->enable_sdev_max_qd = 1;
- shost_for_each_device(sdev, ioc->shost) {
- leapioraid__scsihost_change_queue_depth(sdev,
- shost->can_queue);
- }
- break;
- default:
- return -EINVAL;
- }
- return strlen(buf);
-}
-static DEVICE_ATTR_RW(enable_sdev_max_qd);
-
-static struct attribute *leapioraid_host_attrs[] = {
- &dev_attr_version_fw.attr,
- &dev_attr_version_bios.attr,
- &dev_attr_version_leapioraid.attr,
- &dev_attr_version_product.attr,
- &dev_attr_version_nvdata_persistent.attr,
- &dev_attr_version_nvdata_default.attr,
- &dev_attr_board_name.attr,
- &dev_attr_board_assembly.attr,
- &dev_attr_board_tracer.attr,
- &dev_attr_io_delay.attr,
- &dev_attr_device_delay.attr,
- &dev_attr_logging_level.attr,
- &dev_attr_fwfault_debug.attr,
- &dev_attr_fw_queue_depth.attr,
- &dev_attr_host_sas_address.attr,
- &dev_attr_task_management.attr,
- &dev_attr_ioc_reset_count.attr,
- &dev_attr_reply_queue_count.attr,
- &dev_attr_drv_support_bitmap.attr,
- &dev_attr_enable_sdev_max_qd.attr,
- NULL,
-};
-
-static const struct attribute_group leapioraid_host_attr_group = {
- .attrs = leapioraid_host_attrs
-};
-
-const struct attribute_group *leapioraid_host_groups[] = {
- &leapioraid_host_attr_group,
- NULL
-};
-
-static ssize_t
-sas_address_show(
- struct device *dev, struct device_attribute *attr,
- char *buf)
-{
- struct scsi_device *sdev = to_scsi_device(dev);
- struct LEAPIORAID_DEVICE *sas_device_priv_data = sdev->hostdata;
-
- return snprintf(
- buf, PAGE_SIZE, "0x%016llx\n",
- (unsigned long long)sas_device_priv_data->sas_target->sas_address);
-}
-static DEVICE_ATTR_RO(sas_address);
-
-static ssize_t
-sas_device_handle_show(
- struct device *dev, struct device_attribute *attr,
- char *buf)
-{
- struct scsi_device *sdev = to_scsi_device(dev);
- struct LEAPIORAID_DEVICE *sas_device_priv_data = sdev->hostdata;
-
- return snprintf(buf, PAGE_SIZE, "0x%04x\n",
- sas_device_priv_data->sas_target->handle);
-}
-static DEVICE_ATTR_RO(sas_device_handle);
-
-static ssize_t
-sas_ncq_prio_enable_show(
- struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- struct scsi_device *sdev = to_scsi_device(dev);
- struct LEAPIORAID_DEVICE *sas_device_priv_data = sdev->hostdata;
-
- return snprintf(buf, PAGE_SIZE, "%d\n",
- sas_device_priv_data->ncq_prio_enable);
-}
-
-static ssize_t
-sas_ncq_prio_enable_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t count)
-{
- struct scsi_device *sdev = to_scsi_device(dev);
- struct LEAPIORAID_DEVICE *sas_device_priv_data = sdev->hostdata;
- int ncq_prio_enable = 0;
-
- if (kstrtoint(buf, 0, &ncq_prio_enable))
- return -EINVAL;
- if (!leapioraid_scsihost_ncq_prio_supp(sdev))
- return -EINVAL;
- sas_device_priv_data->ncq_prio_enable = ncq_prio_enable;
- return strlen(buf);
-}
-static DEVICE_ATTR_RW(sas_ncq_prio_enable);
-
-static struct attribute *leapioraid_dev_attrs[] = {
- &dev_attr_sas_address.attr,
- &dev_attr_sas_device_handle.attr,
- &dev_attr_sas_ncq_prio_enable.attr,
- NULL,
-};
-static const struct attribute_group leapioraid_dev_attr_group = {
- .attrs = leapioraid_dev_attrs
-};
-const struct attribute_group *leapioraid_dev_groups[] = {
- &leapioraid_dev_attr_group,
- NULL
-};
-
-static int my_mmap(struct file *filp, struct vm_area_struct *vma)
-{
- struct LEAPIORAID_ADAPTER *ioc;
- unsigned long pfn;
- unsigned long length = vma->vm_end - vma->vm_start;
-
- ioc = list_first_entry(&leapioraid_ioc_list,
- struct LEAPIORAID_ADAPTER, list);
- if (length > (SYS_LOG_BUF_SIZE + SYS_LOG_BUF_RESERVE)) {
- pr_err("Requested mapping size is too large\n");
- return -EINVAL;
- }
- if (ioc->log_buffer == NULL) {
- pr_err("no log buffer\n");
- return -EINVAL;
- }
-
- pfn = virt_to_phys(ioc->log_buffer) >> PAGE_SHIFT;
-
- if (remap_pfn_range(vma, vma->vm_start, pfn, length, vma->vm_page_prot)) {
- pr_err("Failed to map memory to user space\n");
- return -EAGAIN;
- }
-
- return 0;
-}
-
-static const struct
-file_operations leapioraid_ctl_fops = {
- .owner = THIS_MODULE,
- .unlocked_ioctl = leapioraid_ctl_ioctl,
- .poll = leapioraid_ctl_poll,
- .fasync = leapioraid_ctl_fasync,
-#ifdef CONFIG_COMPAT
- .compat_ioctl = leapioraid_ctl_ioctl_compat,
-#endif
- .mmap = my_mmap,
-};
-
-static struct miscdevice leapioraid_ctl_dev = {
- .minor = MISC_DYNAMIC_MINOR,
- .name = LEAPIORAID_DEV_NAME,
- .fops = &leapioraid_ctl_fops,
-};
-
-void leapioraid_ctl_init(void)
-{
- leapioraid_async_queue = NULL;
- if (misc_register(&leapioraid_ctl_dev) < 0)
- pr_err("%s can't register misc device\n",
- LEAPIORAID_DRIVER_NAME);
- init_waitqueue_head(&leapioraid_ctl_poll_wait);
-}
-
-void leapioraid_ctl_exit(void)
-{
- struct LEAPIORAID_ADAPTER *ioc;
-
- list_for_each_entry(ioc, &leapioraid_ioc_list, list) {
- kfree(ioc->event_log);
- }
- misc_deregister(&leapioraid_ctl_dev);
-}
diff --git a/drivers/scsi/leapioraid/leapioraid_func.c b/drivers/scsi/leapioraid/leapioraid_func.c
deleted file mode 100644
index 19fe5e96a9ad..000000000000
--- a/drivers/scsi/leapioraid/leapioraid_func.c
+++ /dev/null
@@ -1,7056 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * This is the Fusion MPT base driver providing common API layer interface
- * for access to MPT (Message Passing Technology) firmware.
- *
- * Copyright (C) 2013-2021 LSI Corporation
- * Copyright (C) 2013-2021 Avago Technologies
- * Copyright (C) 2013-2021 Broadcom Inc.
- * (mailto:MPT-FusionLinux.pdl@broadcom.com)
- *
- * Copyright (C) 2024 LeapIO Tech Inc.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2
- * of the License, or (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * NO WARRANTY
- * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
- * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
- * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
- * solely responsible for determining the appropriateness of using and
- * distributing the Program and assumes all risks associated with its
- * exercise of rights under this Agreement, including but not limited to
- * the risks and costs of program errors, damage to or loss of data,
- * programs or equipment, and unavailability or interruption of operations.
-
- * DISCLAIMER OF LIABILITY
- * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
- * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
- * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
- * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
- */
-
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/errno.h>
-#include <linux/init.h>
-#include <linux/slab.h>
-#include <linux/types.h>
-#include <linux/pci.h>
-#include <linux/kdev_t.h>
-#include <linux/blkdev.h>
-#include <linux/delay.h>
-#include <linux/interrupt.h>
-#include <linux/dma-mapping.h>
-#include <linux/io.h>
-#include <linux/time.h>
-#include <linux/ktime.h>
-#include <linux/kthread.h>
-#include <asm/page.h>
-#include <linux/aer.h>
-#include "leapioraid_func.h"
-#include <linux/net.h>
-#include <net/sock.h>
-#include <linux/inet.h>
-
-static char *dest_ip = "127.0.0.1";
-module_param(dest_ip, charp, 0000);
-MODULE_PARM_DESC(dest_ip, "Destination IP address");
-
-static u16 port_no = 6666;
-module_param(port_no, ushort, 0000);
-MODULE_PARM_DESC(port_no, "Destination Port number");
-static struct sockaddr_in dest_addr;
-static struct socket *sock;
-static struct msghdr msg;
-
-#define LEAPIORAID_LOG_POLLING_INTERVAL 1
-static LEAPIORAID_CALLBACK leapioraid_callbacks[LEAPIORAID_MAX_CALLBACKS];
-#define LEAPIORAID_FAULT_POLLING_INTERVAL 1000
-#define LEAPIORAID_MAX_HBA_QUEUE_DEPTH 1024
-
-static int smp_affinity_enable = 1;
-module_param(smp_affinity_enable, int, 0444);
-MODULE_PARM_DESC(smp_affinity_enable,
- "SMP affinity feature enable/disable Default: enable(1)");
-
-static int max_msix_vectors = -1;
-module_param(max_msix_vectors, int, 0444);
-MODULE_PARM_DESC(max_msix_vectors, " max msix vectors");
-
-static int irqpoll_weight = -1;
-module_param(irqpoll_weight, int, 0444);
-MODULE_PARM_DESC(irqpoll_weight,
- "irq poll weight (default= one fourth of HBA queue depth)");
-
-static int leapioraid_fwfault_debug;
-
-static int perf_mode = -1;
-
-static int poll_queues;
-module_param(poll_queues, int, 0444);
-MODULE_PARM_DESC(poll_queues,
- "Number of queues to be use for io_uring poll mode.\n\t\t"
- "This parameter is effective only if host_tagset_enable=1. &\n\t\t"
- "when poll_queues are enabled then &\n\t\t"
- "perf_mode is set to latency mode. &\n\t\t");
-
-enum leapioraid_perf_mode {
- LEAPIORAID_PERF_MODE_DEFAULT = -1,
- LEAPIORAID_PERF_MODE_BALANCED = 0,
- LEAPIORAID_PERF_MODE_IOPS = 1,
- LEAPIORAID_PERF_MODE_LATENCY = 2,
-};
-
-static void
-leapioraid_base_clear_outstanding_leapioraid_commands(
- struct LEAPIORAID_ADAPTER *ioc);
-static
-int leapioraid_base_wait_on_iocstate(struct LEAPIORAID_ADAPTER *ioc,
- u32 ioc_state, int timeout);
-
-static int
-leapioraid_scsihost_set_fwfault_debug(
- const char *val, const struct kernel_param *kp)
-{
- int ret = param_set_int(val, kp);
- struct LEAPIORAID_ADAPTER *ioc;
-
- if (ret)
- return ret;
- pr_info("setting fwfault_debug(%d)\n",
- leapioraid_fwfault_debug);
- spin_lock(&leapioraid_gioc_lock);
- list_for_each_entry(ioc, &leapioraid_ioc_list, list)
- ioc->fwfault_debug = leapioraid_fwfault_debug;
- spin_unlock(&leapioraid_gioc_lock);
- return 0;
-}
-
-module_param_call(
- leapioraid_fwfault_debug,
- leapioraid_scsihost_set_fwfault_debug,
- param_get_int, &leapioraid_fwfault_debug, 0644);
-
-static inline u32
-leapioraid_base_readl_aero(
- const void __iomem *addr, u8 retry_count)
-{
- u32 i = 0, ret_val;
-
- do {
- ret_val = readl(addr);
- i++;
- } while (ret_val == 0 && i < retry_count);
- return ret_val;
-}
-
-u8
-leapioraid_base_check_cmd_timeout(
- struct LEAPIORAID_ADAPTER *ioc,
- U8 status, void *mpi_request, int sz)
-{
- u8 issue_reset = 0;
-
- if (!(status & LEAPIORAID_CMD_RESET))
- issue_reset = 1;
- pr_err("%s Command %s\n", ioc->name,
- ((issue_reset ==
- 0) ? "terminated due to Host Reset" : "Timeout"));
- leapioraid_debug_dump_mf(mpi_request, sz);
- return issue_reset;
-}
-
-static int
-leapioraid_remove_dead_ioc_func(void *arg)
-{
- struct LEAPIORAID_ADAPTER *ioc = (struct LEAPIORAID_ADAPTER *)arg;
- struct pci_dev *pdev;
-
- if (ioc == NULL)
- return -1;
- pdev = ioc->pdev;
- if (pdev == NULL)
- return -1;
-#if defined(DISABLE_RESET_SUPPORT)
- ssleep(2);
-#endif
-
- pci_stop_and_remove_bus_device(pdev);
- return 0;
-}
-
-u8
-leapioraid_base_pci_device_is_unplugged(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct pci_dev *pdev = ioc->pdev;
- struct pci_bus *bus = pdev->bus;
- int devfn = pdev->devfn;
- u32 vendor_id;
-
- if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendor_id))
- return 1;
- if (vendor_id == 0xffffffff || vendor_id == 0x00000000 ||
- vendor_id == 0x0000ffff || vendor_id == 0xffff0000)
- return 1;
- if ((vendor_id & 0xffff) == 0x0001)
- return 1;
- return 0;
-}
-
-u8
-leapioraid_base_pci_device_is_available(struct LEAPIORAID_ADAPTER *ioc)
-{
- if (ioc->pci_error_recovery
- || leapioraid_base_pci_device_is_unplugged(ioc))
- return 0;
- return 1;
-}
-
-static void
-leapioraid_base_sync_drv_fw_timestamp(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidIoUnitControlReq_t *mpi_request;
- struct LeapioraidIoUnitControlRep_t *mpi_reply;
- u16 smid;
- ktime_t current_time;
- u64 TimeStamp = 0;
- u8 issue_reset = 0;
-
- mutex_lock(&ioc->scsih_cmds.mutex);
- if (ioc->scsih_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s: scsih_cmd in use %s\n", ioc->name, __func__);
- goto out;
- }
- ioc->scsih_cmds.status = LEAPIORAID_CMD_PENDING;
- smid = leapioraid_base_get_smid(ioc, ioc->scsih_cb_idx);
- if (!smid) {
- pr_err("%s: failed obtaining a smid %s\n", ioc->name, __func__);
- ioc->scsih_cmds.status = LEAPIORAID_CMD_NOT_USED;
- goto out;
- }
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->scsih_cmds.smid = smid;
- memset(mpi_request, 0, sizeof(struct LeapioraidIoUnitControlReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_IO_UNIT_CONTROL;
- mpi_request->Operation = 0x0F;
- mpi_request->IOCParameter = 0x81;
- current_time = ktime_get_real();
- TimeStamp = ktime_to_ms(current_time);
- mpi_request->IOCParameterValue = cpu_to_le32(TimeStamp & 0xFFFFFFFF);
- mpi_request->IOCParameterValue2 = cpu_to_le32(TimeStamp >> 32);
- init_completion(&ioc->scsih_cmds.done);
- ioc->put_smid_default(ioc, smid);
- dinitprintk(ioc, pr_err(
- "%s Io Unit Control Sync TimeStamp (sending), @time %lld ms\n",
- ioc->name, TimeStamp));
- wait_for_completion_timeout(&ioc->scsih_cmds.done,
- 10 * HZ);
- if (!(ioc->scsih_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- leapioraid_check_cmd_timeout(ioc,
- ioc->scsih_cmds.status,
- mpi_request,
- sizeof
- (struct LeapioraidSasIoUnitControlReq_t)
- / 4, issue_reset);
- goto issue_host_reset;
- }
- if (ioc->scsih_cmds.status & LEAPIORAID_CMD_REPLY_VALID) {
- mpi_reply = ioc->scsih_cmds.reply;
- dinitprintk(ioc, pr_err(
- "%s Io Unit Control sync timestamp (complete): ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name,
- le16_to_cpu(mpi_reply->IOCStatus),
- le32_to_cpu(mpi_reply->IOCLogInfo)));
- }
-issue_host_reset:
- if (issue_reset)
- leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
- ioc->scsih_cmds.status = LEAPIORAID_CMD_NOT_USED;
-out:
- mutex_unlock(&ioc->scsih_cmds.mutex);
-}
-
-static int
-leapioraid_udp_init(void)
-{
- int ret;
- u32 ip;
-
- if (sock)
- return 0;
- if (!in4_pton(dest_ip, -1, (u8 *) &ip, -1, NULL)) {
- pr_err("Invalid IP address: %s, set to default: 127.0.0.1\n",
- dest_ip);
- dest_ip = "127.0.0.1";
- }
- ret =
- sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, IPPROTO_UDP,
- &sock);
- memset(&dest_addr, 0, sizeof(dest_addr));
- dest_addr.sin_family = AF_INET;
- dest_addr.sin_addr.s_addr = ip;
- dest_addr.sin_port = htons(port_no);
- memset(&msg, 0, sizeof(msg));
- msg.msg_name = &dest_addr;
- msg.msg_namelen = sizeof(struct sockaddr_in);
- return ret;
-}
-
-static void
-leapioraid_udp_exit(void)
-{
- if (sock)
- sock_release(sock);
-}
-
-struct info
-{
- u32 user_position;
- u32 ioc_position;
-};
-int cooo;
-
-static void
-leapioraid_base_pcie_log_work(struct work_struct *work)
-{
- struct LEAPIORAID_ADAPTER *ioc =
- container_of(work,
- struct LEAPIORAID_ADAPTER, pcie_log_work.work);
- unsigned long flags;
- struct info *infom = (struct info *)(ioc->log_buffer + SYS_LOG_BUF_SIZE);
-
- if (cooo == 0) {
- infom->user_position =
- ioc->base_readl(&ioc->chip->HostLogBufPosition, 0);
- infom->ioc_position =
- ioc->base_readl(&ioc->chip->IocLogBufPosition, 0);
- cooo++;
- }
-
- writel(infom->user_position, &ioc->chip->HostLogBufPosition);
- infom->ioc_position = ioc->base_readl(&ioc->chip->IocLogBufPosition, 0);
-
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
- if (ioc->pcie_log_work_q)
- queue_delayed_work(ioc->pcie_log_work_q,
- &ioc->pcie_log_work,
- msecs_to_jiffies(LEAPIORAID_LOG_POLLING_INTERVAL));
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
-}
-
-void
-leapioraid_base_start_log_watchdog(struct LEAPIORAID_ADAPTER *ioc)
-{
- unsigned long flags;
-
- if (ioc->pcie_log_work_q)
- return;
- leapioraid_udp_init();
- INIT_DELAYED_WORK(&ioc->pcie_log_work, leapioraid_base_pcie_log_work);
- snprintf(ioc->pcie_log_work_q_name,
- sizeof(ioc->pcie_log_work_q_name), "poll_%s%u_status",
- ioc->driver_name, ioc->id);
- ioc->pcie_log_work_q =
- create_singlethread_workqueue(ioc->pcie_log_work_q_name);
- if (!ioc->pcie_log_work_q) {
- pr_err("%s %s: failed (line=%d)\n", ioc->name,
- __func__, __LINE__);
- return;
- }
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
- if (ioc->pcie_log_work_q)
- queue_delayed_work(ioc->pcie_log_work_q,
- &ioc->pcie_log_work,
- msecs_to_jiffies(LEAPIORAID_LOG_POLLING_INTERVAL));
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
-}
-
-void
-leapioraid_base_stop_log_watchdog(struct LEAPIORAID_ADAPTER *ioc)
-{
- unsigned long flags;
- struct workqueue_struct *wq;
-
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
- wq = ioc->pcie_log_work_q;
- ioc->pcie_log_work_q = NULL;
- leapioraid_udp_exit();
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
- if (wq) {
- if (!cancel_delayed_work_sync(&ioc->pcie_log_work))
- flush_workqueue(wq);
- destroy_workqueue(wq);
- }
-}
-
-static void
-leapioraid_base_fault_reset_work(struct work_struct *work)
-{
- struct LEAPIORAID_ADAPTER *ioc =
- container_of(work, struct LEAPIORAID_ADAPTER,
- fault_reset_work.work);
- unsigned long flags;
- u32 doorbell;
- int rc;
- struct task_struct *p;
-
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
- if ((ioc->shost_recovery && (ioc->ioc_coredump_loop == 0)) ||
- ioc->pci_error_recovery || ioc->remove_host)
- goto rearm_timer;
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
- doorbell = leapioraid_base_get_iocstate(ioc, 0);
- if ((doorbell & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_MASK) {
- pr_err(
- "%s SAS host is non-operational !!!!\n", ioc->name);
- if (ioc->non_operational_loop++ < 5) {
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock,
- flags);
- goto rearm_timer;
- }
- ioc->remove_host = 1;
- leapioraid_base_pause_mq_polling(ioc);
- ioc->schedule_dead_ioc_flush_running_cmds(ioc);
- p = kthread_run(leapioraid_remove_dead_ioc_func, ioc,
- "%s_dead_ioc_%d", ioc->driver_name, ioc->id);
- if (IS_ERR(p))
- pr_err(
- "%s %s: Running leapioraid_dead_ioc thread failed !!!!\n",
- ioc->name, __func__);
- else
- pr_err(
- "%s %s: Running leapioraid_dead_ioc thread success !!!!\n",
- ioc->name, __func__);
- return;
- }
- if ((doorbell & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_COREDUMP) {
- u8 timeout = (ioc->manu_pg11.CoreDumpTOSec) ?
- ioc->manu_pg11.CoreDumpTOSec :
- 15;
- timeout /= (LEAPIORAID_FAULT_POLLING_INTERVAL / 1000);
- if (ioc->ioc_coredump_loop == 0) {
- leapioraid_base_coredump_info(ioc, doorbell &
- LEAPIORAID_DOORBELL_DATA_MASK);
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock,
- flags);
- ioc->shost_recovery = 1;
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock,
- flags);
- leapioraid_base_pause_mq_polling(ioc);
- leapioraid_scsihost_clear_outstanding_scsi_tm_commands
- (ioc);
- leapioraid_base_mask_interrupts(ioc);
- leapioraid_base_clear_outstanding_leapioraid_commands(ioc);
- leapioraid_ctl_clear_outstanding_ioctls(ioc);
- }
- drsprintk(ioc,
- pr_info("%s %s: CoreDump loop %d.",
- ioc->name, __func__, ioc->ioc_coredump_loop));
- if (ioc->ioc_coredump_loop++ < timeout) {
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock,
- flags);
- goto rearm_timer;
- }
- }
- if (ioc->ioc_coredump_loop) {
- if ((doorbell & LEAPIORAID_IOC_STATE_MASK) !=
- LEAPIORAID_IOC_STATE_COREDUMP)
- pr_err(
- "%s %s: CoreDump completed. LoopCount: %d",
- ioc->name, __func__, ioc->ioc_coredump_loop);
- else
- pr_err(
- "%s %s: CoreDump Timed out. LoopCount: %d",
- ioc->name, __func__, ioc->ioc_coredump_loop);
- ioc->ioc_coredump_loop = 0xFF;
- }
- ioc->non_operational_loop = 0;
- if ((doorbell & LEAPIORAID_IOC_STATE_MASK) !=
- LEAPIORAID_IOC_STATE_OPERATIONAL) {
- rc = leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
- pr_warn("%s %s: hard reset: %s\n", ioc->name,
- __func__, (rc == 0) ? "success" : "failed");
- doorbell = leapioraid_base_get_iocstate(ioc, 0);
- if ((doorbell & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_FAULT) {
- leapioraid_print_fault_code(ioc,
- doorbell &
- LEAPIORAID_DOORBELL_DATA_MASK);
- } else if ((doorbell & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP)
- leapioraid_base_coredump_info(ioc,
- doorbell &
- LEAPIORAID_DOORBELL_DATA_MASK);
- if (rc
- && (doorbell & LEAPIORAID_IOC_STATE_MASK) !=
- LEAPIORAID_IOC_STATE_OPERATIONAL)
- return;
- }
- ioc->ioc_coredump_loop = 0;
- if (ioc->time_sync_interval &&
- ++ioc->timestamp_update_count >= ioc->time_sync_interval) {
- ioc->timestamp_update_count = 0;
- leapioraid_base_sync_drv_fw_timestamp(ioc);
- }
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
-rearm_timer:
- if (ioc->fault_reset_work_q)
- queue_delayed_work(ioc->fault_reset_work_q,
- &ioc->fault_reset_work,
- msecs_to_jiffies(LEAPIORAID_FAULT_POLLING_INTERVAL));
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
-}
-
-static void
-leapioraid_base_hba_hot_unplug_work(struct work_struct *work)
-{
- struct LEAPIORAID_ADAPTER *ioc =
- container_of(work, struct LEAPIORAID_ADAPTER,
- hba_hot_unplug_work.work);
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->hba_hot_unplug_lock, flags);
- if (ioc->shost_recovery || ioc->pci_error_recovery)
- goto rearm_timer;
- if (leapioraid_base_pci_device_is_unplugged(ioc)) {
- if (ioc->remove_host) {
- pr_err("%s The host is removeing!!!\n",
- ioc->name);
- goto rearm_timer;
- }
- ioc->remove_host = 1;
- leapioraid_base_clear_outstanding_leapioraid_commands(ioc);
- leapioraid_base_pause_mq_polling(ioc);
- leapioraid_scsihost_clear_outstanding_scsi_tm_commands(ioc);
- leapioraid_ctl_clear_outstanding_ioctls(ioc);
- }
-rearm_timer:
- if (ioc->hba_hot_unplug_work_q)
- queue_delayed_work(ioc->hba_hot_unplug_work_q,
- &ioc->hba_hot_unplug_work,
- msecs_to_jiffies
- (1000));
- spin_unlock_irqrestore(&ioc->hba_hot_unplug_lock, flags);
-}
-
-void
-leapioraid_base_start_watchdog(struct LEAPIORAID_ADAPTER *ioc)
-{
- unsigned long flags;
-
- if (ioc->fault_reset_work_q)
- return;
- ioc->timestamp_update_count = 0;
- INIT_DELAYED_WORK(&ioc->fault_reset_work,
- leapioraid_base_fault_reset_work);
- snprintf(ioc->fault_reset_work_q_name,
- sizeof(ioc->fault_reset_work_q_name), "poll_%s%u_status",
- ioc->driver_name, ioc->id);
- ioc->fault_reset_work_q =
- create_singlethread_workqueue(ioc->fault_reset_work_q_name);
- if (!ioc->fault_reset_work_q) {
- pr_err("%s %s: failed (line=%d)\n",
- ioc->name, __func__, __LINE__);
- return;
- }
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
- if (ioc->fault_reset_work_q)
- queue_delayed_work(ioc->fault_reset_work_q,
- &ioc->fault_reset_work,
- msecs_to_jiffies(LEAPIORAID_FAULT_POLLING_INTERVAL));
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
- if (ioc->open_pcie_trace)
- leapioraid_base_start_log_watchdog(ioc);
-}
-
-void
-leapioraid_base_stop_watchdog(struct LEAPIORAID_ADAPTER *ioc)
-{
- unsigned long flags;
- struct workqueue_struct *wq;
-
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
- wq = ioc->fault_reset_work_q;
- ioc->fault_reset_work_q = NULL;
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
- if (wq) {
- if (!cancel_delayed_work_sync(&ioc->fault_reset_work))
- flush_workqueue(wq);
- destroy_workqueue(wq);
- }
- if (ioc->open_pcie_trace)
- leapioraid_base_stop_log_watchdog(ioc);
-}
-
-void
-leapioraid_base_start_hba_unplug_watchdog(struct LEAPIORAID_ADAPTER *ioc)
-{
- unsigned long flags;
-
- if (ioc->hba_hot_unplug_work_q)
- return;
- INIT_DELAYED_WORK(&ioc->hba_hot_unplug_work,
- leapioraid_base_hba_hot_unplug_work);
- snprintf(ioc->hba_hot_unplug_work_q_name,
- sizeof(ioc->hba_hot_unplug_work_q_name),
- "poll_%s%u_hba_unplug", ioc->driver_name, ioc->id);
- ioc->hba_hot_unplug_work_q =
- create_singlethread_workqueue(ioc->hba_hot_unplug_work_q_name);
- if (!ioc->hba_hot_unplug_work_q) {
- pr_err("%s %s: failed (line=%d)\n",
- ioc->name, __func__, __LINE__);
- return;
- }
- spin_lock_irqsave(&ioc->hba_hot_unplug_lock, flags);
- if (ioc->hba_hot_unplug_work_q)
- queue_delayed_work(ioc->hba_hot_unplug_work_q,
- &ioc->hba_hot_unplug_work,
- msecs_to_jiffies(LEAPIORAID_FAULT_POLLING_INTERVAL));
- spin_unlock_irqrestore(&ioc->hba_hot_unplug_lock, flags);
-}
-
-void
-leapioraid_base_stop_hba_unplug_watchdog(struct LEAPIORAID_ADAPTER *ioc)
-{
- unsigned long flags;
- struct workqueue_struct *wq;
-
- spin_lock_irqsave(&ioc->hba_hot_unplug_lock, flags);
- wq = ioc->hba_hot_unplug_work_q;
- ioc->hba_hot_unplug_work_q = NULL;
- spin_unlock_irqrestore(&ioc->hba_hot_unplug_lock, flags);
- if (wq) {
- if (!cancel_delayed_work_sync(&ioc->hba_hot_unplug_work))
- flush_workqueue(wq);
- destroy_workqueue(wq);
- }
-}
-
-static void
-leapioraid_base_stop_smart_polling(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct workqueue_struct *wq;
-
- wq = ioc->smart_poll_work_q;
- ioc->smart_poll_work_q = NULL;
- if (wq) {
- if (!cancel_delayed_work(&ioc->smart_poll_work))
- flush_workqueue(wq);
- destroy_workqueue(wq);
- }
-}
-
-void
-leapioraid_base_fault_info(struct LEAPIORAID_ADAPTER *ioc, u16 fault_code)
-{
- pr_err("%s fault_state(0x%04x)!\n",
- ioc->name, fault_code);
-}
-
-void
-leapioraid_base_coredump_info(struct LEAPIORAID_ADAPTER *ioc, u16 fault_code)
-{
- pr_err("%s coredump_state(0x%04x)!\n",
- ioc->name, fault_code);
-}
-
-int
-leapioraid_base_wait_for_coredump_completion(struct LEAPIORAID_ADAPTER *ioc,
- const char *caller)
-{
- u8 timeout =
- (ioc->manu_pg11.CoreDumpTOSec) ? ioc->manu_pg11.CoreDumpTOSec : 15;
- int ioc_state =
- leapioraid_base_wait_on_iocstate(ioc, LEAPIORAID_IOC_STATE_FAULT,
- timeout);
-
- if (ioc_state)
- pr_err("%s %s: CoreDump timed out. (ioc_state=0x%x)\n",
- ioc->name, caller, ioc_state);
- else
- pr_info("%s %s: CoreDump completed. (ioc_state=0x%x)\n",
- ioc->name, caller, ioc_state);
- return ioc_state;
-}
-
-void
-leapioraid_halt_firmware(struct LEAPIORAID_ADAPTER *ioc, u8 set_fault)
-{
- u32 doorbell;
-
- if ((!ioc->fwfault_debug) && (!set_fault))
- return;
- if (!set_fault)
- dump_stack();
- doorbell =
- ioc->base_readl(&ioc->chip->Doorbell,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY);
- if ((doorbell & LEAPIORAID_IOC_STATE_MASK)
- == LEAPIORAID_IOC_STATE_FAULT) {
- leapioraid_print_fault_code(ioc, doorbell);
- } else if ((doorbell & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP)
- leapioraid_base_coredump_info(ioc,
- doorbell &
- LEAPIORAID_DOORBELL_DATA_MASK);
- else {
- writel(0xC0FFEE00, &ioc->chip->Doorbell);
- if (!set_fault)
- pr_err("%s Firmware is halted due to command timeout\n",
- ioc->name);
- }
- if (set_fault)
- return;
- if (ioc->fwfault_debug == 2) {
- for (;;)
- ;
- } else
- panic("panic in %s\n", __func__);
-}
-
-static void
-leapioraid_base_group_cpus_on_irq(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_adapter_reply_queue *reply_q;
- unsigned int i, cpu, group, nr_cpus, nr_msix, index = 0;
- int iopoll_q_count = ioc->reply_queue_count - ioc->iopoll_q_start_index;
- int unmanaged_q_count = ioc->high_iops_queues + iopoll_q_count;
-
- cpu = cpumask_first(cpu_online_mask);
- nr_msix = ioc->reply_queue_count - unmanaged_q_count;
- nr_cpus = num_online_cpus();
- group = nr_cpus / nr_msix;
- list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
- if (reply_q->msix_index < ioc->high_iops_queues ||
- reply_q->msix_index >= ioc->iopoll_q_start_index)
- continue;
- if (cpu >= nr_cpus)
- break;
- if (index < nr_cpus % nr_msix)
- group++;
- for (i = 0; i < group; i++) {
- ioc->cpu_msix_table[cpu] = reply_q->msix_index;
- cpu = cpumask_next(cpu, cpu_online_mask);
- }
- index++;
- }
-}
-
-static void
-leapioraid_base_sas_ioc_info(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidDefaultRep_t *mpi_reply,
- struct LeapioraidReqHeader_t *request_hdr)
-{
- u16 ioc_status = le16_to_cpu(mpi_reply->IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- char *desc = NULL;
- u16 frame_sz;
- char *func_str = NULL;
-
- if (request_hdr->Function == LEAPIORAID_FUNC_SCSI_IO_REQUEST ||
- request_hdr->Function == LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH
- || request_hdr->Function == LEAPIORAID_FUNC_EVENT_NOTIFICATION)
- return;
- if (ioc_status == LEAPIORAID_IOCSTATUS_CONFIG_INVALID_PAGE)
- return;
- switch (ioc_status) {
- case LEAPIORAID_IOCSTATUS_INVALID_FUNCTION:
- desc = "invalid function";
- break;
- case LEAPIORAID_IOCSTATUS_BUSY:
- desc = "busy";
- break;
- case LEAPIORAID_IOCSTATUS_INVALID_SGL:
- desc = "invalid sgl";
- break;
- case LEAPIORAID_IOCSTATUS_INTERNAL_ERROR:
- desc = "internal error";
- break;
- case LEAPIORAID_IOCSTATUS_INVALID_VPID:
- desc = "invalid vpid";
- break;
- case LEAPIORAID_IOCSTATUS_INSUFFICIENT_RESOURCES:
- desc = "insufficient resources";
- break;
- case LEAPIORAID_IOCSTATUS_INSUFFICIENT_POWER:
- desc = "insufficient power";
- break;
- case LEAPIORAID_IOCSTATUS_INVALID_FIELD:
- desc = "invalid field";
- break;
- case LEAPIORAID_IOCSTATUS_INVALID_STATE:
- desc = "invalid state";
- break;
- case LEAPIORAID_IOCSTATUS_OP_STATE_NOT_SUPPORTED:
- desc = "op state not supported";
- break;
- case LEAPIORAID_IOCSTATUS_CONFIG_INVALID_ACTION:
- desc = "config invalid action";
- break;
- case LEAPIORAID_IOCSTATUS_CONFIG_INVALID_TYPE:
- desc = "config invalid type";
- break;
- case LEAPIORAID_IOCSTATUS_CONFIG_INVALID_DATA:
- desc = "config invalid data";
- break;
- case LEAPIORAID_IOCSTATUS_CONFIG_NO_DEFAULTS:
- desc = "config no defaults";
- break;
- case LEAPIORAID_IOCSTATUS_CONFIG_CANT_COMMIT:
- desc = "config can not commit";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_RECOVERED_ERROR:
- case LEAPIORAID_IOCSTATUS_SCSI_INVALID_DEVHANDLE:
- case LEAPIORAID_IOCSTATUS_SCSI_DEVICE_NOT_THERE:
- case LEAPIORAID_IOCSTATUS_SCSI_DATA_OVERRUN:
- case LEAPIORAID_IOCSTATUS_SCSI_DATA_UNDERRUN:
- case LEAPIORAID_IOCSTATUS_SCSI_IO_DATA_ERROR:
- case LEAPIORAID_IOCSTATUS_SCSI_PROTOCOL_ERROR:
- case LEAPIORAID_IOCSTATUS_SCSI_TASK_TERMINATED:
- case LEAPIORAID_IOCSTATUS_SCSI_RESIDUAL_MISMATCH:
- case LEAPIORAID_IOCSTATUS_SCSI_TASK_MGMT_FAILED:
- case LEAPIORAID_IOCSTATUS_SCSI_IOC_TERMINATED:
- case LEAPIORAID_IOCSTATUS_SCSI_EXT_TERMINATED:
- break;
- case LEAPIORAID_IOCSTATUS_EEDP_GUARD_ERROR:
- if (!ioc->disable_eedp_support)
- desc = "eedp guard error";
- break;
- case LEAPIORAID_IOCSTATUS_EEDP_REF_TAG_ERROR:
- if (!ioc->disable_eedp_support)
- desc = "eedp ref tag error";
- break;
- case LEAPIORAID_IOCSTATUS_EEDP_APP_TAG_ERROR:
- if (!ioc->disable_eedp_support)
- desc = "eedp app tag error";
- break;
- case LEAPIORAID_IOCSTATUS_TARGET_INVALID_IO_INDEX:
- desc = "target invalid io index";
- break;
- case LEAPIORAID_IOCSTATUS_TARGET_ABORTED:
- desc = "target aborted";
- break;
- case LEAPIORAID_IOCSTATUS_TARGET_NO_CONN_RETRYABLE:
- desc = "target no conn retryable";
- break;
- case LEAPIORAID_IOCSTATUS_TARGET_NO_CONNECTION:
- desc = "target no connection";
- break;
- case LEAPIORAID_IOCSTATUS_TARGET_XFER_COUNT_MISMATCH:
- desc = "target xfer count mismatch";
- break;
- case LEAPIORAID_IOCSTATUS_TARGET_DATA_OFFSET_ERROR:
- desc = "target data offset error";
- break;
- case LEAPIORAID_IOCSTATUS_TARGET_TOO_MUCH_WRITE_DATA:
- desc = "target too much write data";
- break;
- case LEAPIORAID_IOCSTATUS_TARGET_IU_TOO_SHORT:
- desc = "target iu too short";
- break;
- case LEAPIORAID_IOCSTATUS_TARGET_ACK_NAK_TIMEOUT:
- desc = "target ack nak timeout";
- break;
- case LEAPIORAID_IOCSTATUS_TARGET_NAK_RECEIVED:
- desc = "target nak received";
- break;
- case LEAPIORAID_IOCSTATUS_SAS_SMP_REQUEST_FAILED:
- desc = "smp request failed";
- break;
- case LEAPIORAID_IOCSTATUS_SAS_SMP_DATA_OVERRUN:
- desc = "smp data overrun";
- break;
- default:
- break;
- }
- if (!desc)
- return;
- switch (request_hdr->Function) {
- case LEAPIORAID_FUNC_CONFIG:
- frame_sz = sizeof(struct LeapioraidCfgReq_t) + ioc->sge_size;
- func_str = "config_page";
- break;
- case LEAPIORAID_FUNC_SCSI_TASK_MGMT:
- frame_sz = sizeof(struct LeapioraidSCSITmgReq_t);
- func_str = "task_mgmt";
- break;
- case LEAPIORAID_FUNC_SAS_IO_UNIT_CONTROL:
- frame_sz = sizeof(struct LeapioraidSasIoUnitControlReq_t);
- func_str = "sas_iounit_ctl";
- break;
- case LEAPIORAID_FUNC_SCSI_ENCLOSURE_PROCESSOR:
- frame_sz = sizeof(struct LeapioraidSepReq_t);
- func_str = "enclosure";
- break;
- case LEAPIORAID_FUNC_IOC_INIT:
- frame_sz = sizeof(struct LeapioraidIOCInitReq_t);
- func_str = "ioc_init";
- break;
- case LEAPIORAID_FUNC_PORT_ENABLE:
- frame_sz = sizeof(struct LeapioraidPortEnableReq_t);
- func_str = "port_enable";
- break;
- case LEAPIORAID_FUNC_SMP_PASSTHROUGH:
- frame_sz =
- sizeof(struct LeapioraidSmpPassthroughReq_t) + ioc->sge_size;
- func_str = "smp_passthru";
- break;
- default:
- frame_sz = 32;
- func_str = "unknown";
- break;
- }
- pr_warn("%s ioc_status: %s(0x%04x), request(0x%p), (%s)\n",
- ioc->name, desc, ioc_status, request_hdr, func_str);
- leapioraid_debug_dump_mf(request_hdr, frame_sz / 4);
-}
-
-static void
-leapioraid_base_display_event_data(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventNotificationRep_t *mpi_reply)
-{
- char *desc = NULL;
- u16 event;
-
- if (!(ioc->logging_level & LEAPIORAID_DEBUG_EVENTS))
- return;
- event = le16_to_cpu(mpi_reply->Event);
- if (ioc->warpdrive_msg) {
- switch (event) {
- case LEAPIORAID_EVENT_IR_OPERATION_STATUS:
- case LEAPIORAID_EVENT_IR_VOLUME:
- case LEAPIORAID_EVENT_IR_PHYSICAL_DISK:
- case LEAPIORAID_EVENT_IR_CONFIGURATION_CHANGE_LIST:
- case LEAPIORAID_EVENT_LOG_ENTRY_ADDED:
- return;
- }
- }
- switch (event) {
- case LEAPIORAID_EVENT_LOG_DATA:
- desc = "Log Data";
- break;
- case LEAPIORAID_EVENT_STATE_CHANGE:
- desc = "Status Change";
- break;
- case LEAPIORAID_EVENT_HARD_RESET_RECEIVED:
- desc = "Hard Reset Received";
- break;
- case LEAPIORAID_EVENT_EVENT_CHANGE:
- desc = "Event Change";
- break;
- case LEAPIORAID_EVENT_SAS_DEVICE_STATUS_CHANGE:
- desc = "Device Status Change";
- break;
- case LEAPIORAID_EVENT_IR_OPERATION_STATUS:
- desc = "IR Operation Status";
- break;
- case LEAPIORAID_EVENT_SAS_DISCOVERY:
- {
- struct LeapioraidEventDataSasDiscovery_t *event_data =
- (struct LeapioraidEventDataSasDiscovery_t *) mpi_reply->EventData;
- pr_info("%s SAS Discovery: (%s)",
- ioc->name,
- (event_data->ReasonCode ==
- LEAPIORAID_EVENT_SAS_DISC_RC_STARTED) ? "start" :
- "stop");
- if (event_data->DiscoveryStatus)
- pr_info("discovery_status(0x%08x)",
- le32_to_cpu(event_data->DiscoveryStatus));
- pr_info("\n");
- return;
- }
- case LEAPIORAID_EVENT_SAS_BROADCAST_PRIMITIVE:
- desc = "SAS Broadcast Primitive";
- break;
- case LEAPIORAID_EVENT_SAS_INIT_DEVICE_STATUS_CHANGE:
- desc = "SAS Init Device Status Change";
- break;
- case LEAPIORAID_EVENT_SAS_INIT_TABLE_OVERFLOW:
- desc = "SAS Init Table Overflow";
- break;
- case LEAPIORAID_EVENT_SAS_TOPOLOGY_CHANGE_LIST:
- desc = "SAS Topology Change List";
- break;
- case LEAPIORAID_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE:
- desc = "SAS Enclosure Device Status Change";
- break;
- case LEAPIORAID_EVENT_IR_VOLUME:
- desc = "IR Volume";
- break;
- case LEAPIORAID_EVENT_IR_PHYSICAL_DISK:
- desc = "IR Physical Disk";
- break;
- case LEAPIORAID_EVENT_IR_CONFIGURATION_CHANGE_LIST:
- desc = "IR Configuration Change List";
- break;
- case LEAPIORAID_EVENT_LOG_ENTRY_ADDED:
- desc = "Log Entry Added";
- break;
- case LEAPIORAID_EVENT_TEMP_THRESHOLD:
- desc = "Temperature Threshold";
- break;
- case LEAPIORAID_EVENT_SAS_DEVICE_DISCOVERY_ERROR:
- desc = "SAS Device Discovery Error";
- break;
- }
- if (!desc)
- return;
- pr_info("%s %s\n", ioc->name, desc);
-}
-
-static void
-leapioraid_base_sas_log_info(struct LEAPIORAID_ADAPTER *ioc, u32 log_info)
-{
- union loginfo_type {
- u32 loginfo;
- struct {
- u32 subcode:16;
- u32 code:8;
- u32 originator:4;
- u32 bus_type:4;
- } dw;
- };
- union loginfo_type sas_loginfo;
- char *originator_str = NULL;
-
- sas_loginfo.loginfo = log_info;
- if (sas_loginfo.dw.bus_type != 3)
- return;
- if (log_info == 0x31170000)
- return;
- if (ioc->ignore_loginfos && (log_info == 0x30050000 || log_info ==
- 0x31140000 || log_info == 0x31130000))
- return;
- switch (sas_loginfo.dw.originator) {
- case 0:
- originator_str = "IOP";
- break;
- case 1:
- originator_str = "PL";
- break;
- case 2:
- if (ioc->warpdrive_msg)
- originator_str = "WarpDrive";
- else
- originator_str = "IR";
- break;
- }
- pr_warn("%s log_info(0x%08x):\n\t\t"
- "originator(%s), code(0x%02x), sub_code(0x%04x)\n",
- ioc->name,
- log_info,
- originator_str,
- sas_loginfo.dw.code,
- sas_loginfo.dw.subcode);
-}
-
-static void
-leapioraid_base_display_reply_info(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply)
-{
- struct LeapioraidDefaultRep_t *mpi_reply;
- u16 ioc_status;
- u32 loginfo = 0;
-
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (unlikely(!mpi_reply)) {
- pr_err(
- "%s mpi_reply not valid at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- return;
- }
- ioc_status = le16_to_cpu(mpi_reply->IOCStatus);
- if ((ioc_status & LEAPIORAID_IOCSTATUS_MASK) &&
- (ioc->logging_level & LEAPIORAID_DEBUG_REPLY)) {
- leapioraid_base_sas_ioc_info(ioc, mpi_reply,
- leapioraid_base_get_msg_frame(ioc,
- smid));
- }
- if (ioc_status & LEAPIORAID_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE) {
- loginfo = le32_to_cpu(mpi_reply->IOCLogInfo);
- leapioraid_base_sas_log_info(ioc, loginfo);
- }
-}
-
-u8
-leapioraid_base_done(struct LEAPIORAID_ADAPTER *ioc, u16 smid, u8 msix_index,
- u32 reply)
-{
- struct LeapioraidDefaultRep_t *mpi_reply;
-
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (mpi_reply && mpi_reply->Function == LEAPIORAID_FUNC_EVENT_ACK)
- return leapioraid_check_for_pending_internal_cmds(ioc, smid);
- if (ioc->base_cmds.status == LEAPIORAID_CMD_NOT_USED)
- return 1;
- ioc->base_cmds.status |= LEAPIORAID_CMD_COMPLETE;
- if (mpi_reply) {
- ioc->base_cmds.status |= LEAPIORAID_CMD_REPLY_VALID;
- memcpy(ioc->base_cmds.reply, mpi_reply,
- mpi_reply->MsgLength * 4);
- }
- ioc->base_cmds.status &= ~LEAPIORAID_CMD_PENDING;
- complete(&ioc->base_cmds.done);
- return 1;
-}
-
-static u8
-leapioraid_base_async_event(
- struct LEAPIORAID_ADAPTER *ioc, u8 msix_index, u32 reply)
-{
- struct LeapioraidEventNotificationRep_t *mpi_reply;
- struct LeapioraidEventAckReq_t *ack_request;
- u16 smid;
- struct leapioraid_event_ack_list *delayed_event_ack;
-
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (!mpi_reply)
- return 1;
- if (mpi_reply->Function != LEAPIORAID_FUNC_EVENT_NOTIFICATION)
- return 1;
- leapioraid_base_display_event_data(ioc, mpi_reply);
- if (!(mpi_reply->AckRequired & LEAPIORAID_EVENT_NOTIFICATION_ACK_REQUIRED))
- goto out;
- smid = leapioraid_base_get_smid(ioc, ioc->base_cb_idx);
- if (!smid) {
- delayed_event_ack =
- kzalloc(sizeof(*delayed_event_ack), GFP_ATOMIC);
- if (!delayed_event_ack)
- goto out;
- INIT_LIST_HEAD(&delayed_event_ack->list);
- delayed_event_ack->Event = mpi_reply->Event;
- delayed_event_ack->EventContext = mpi_reply->EventContext;
- list_add_tail(&delayed_event_ack->list,
- &ioc->delayed_event_ack_list);
- dewtprintk(ioc, pr_err(
- "%s DELAYED: EVENT ACK: event (0x%04x)\n",
- ioc->name,
- le16_to_cpu(mpi_reply->Event)));
- goto out;
- }
- ack_request = leapioraid_base_get_msg_frame(ioc, smid);
- memset(ack_request, 0, sizeof(struct LeapioraidEventAckReq_t));
- ack_request->Function = LEAPIORAID_FUNC_EVENT_ACK;
- ack_request->Event = mpi_reply->Event;
- ack_request->EventContext = mpi_reply->EventContext;
- ack_request->VF_ID = 0;
- ack_request->VP_ID = 0;
- ioc->put_smid_default(ioc, smid);
-out:
- leapioraid_scsihost_event_callback(ioc, msix_index, reply);
- leapioraid_ctl_event_callback(ioc, msix_index, reply);
- return 1;
-}
-
-inline
-struct leapioraid_scsiio_tracker *leapioraid_base_scsi_cmd_priv(
- struct scsi_cmnd *scmd)
-{
- return scsi_cmd_priv(scmd);
-}
-
-struct leapioraid_scsiio_tracker *leapioraid_get_st_from_smid(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid)
-{
- struct scsi_cmnd *cmd;
-
- if (WARN_ON(!smid) || WARN_ON(smid >= ioc->hi_priority_smid))
- return NULL;
- cmd = leapioraid_scsihost_scsi_lookup_get(ioc, smid);
- if (cmd)
- return leapioraid_base_scsi_cmd_priv(cmd);
- return NULL;
-}
-
-static u8
-leapioraid_base_get_cb_idx(struct LEAPIORAID_ADAPTER *ioc, u16 smid)
-{
- int i;
- u16 ctl_smid = ioc->shost->can_queue + LEAPIORAID_INTERNAL_SCSIIO_FOR_IOCTL;
- u16 discovery_smid =
- ioc->shost->can_queue + LEAPIORAID_INTERNAL_SCSIIO_FOR_DISCOVERY;
- u8 cb_idx = 0xFF;
-
- if (smid < ioc->hi_priority_smid) {
- struct leapioraid_scsiio_tracker *st;
-
- if (smid < ctl_smid) {
- st = leapioraid_get_st_from_smid(ioc, smid);
- if (st)
- cb_idx = st->cb_idx;
- } else if (smid < discovery_smid)
- cb_idx = ioc->ctl_cb_idx;
- else
- cb_idx = ioc->scsih_cb_idx;
- } else if (smid < ioc->internal_smid) {
- i = smid - ioc->hi_priority_smid;
- cb_idx = ioc->hpr_lookup[i].cb_idx;
- } else if (smid <= ioc->hba_queue_depth) {
- i = smid - ioc->internal_smid;
- cb_idx = ioc->internal_lookup[i].cb_idx;
- }
- return cb_idx;
-}
-
-void
-leapioraid_base_pause_mq_polling(struct LEAPIORAID_ADAPTER *ioc)
-{
- int iopoll_q_count = ioc->reply_queue_count - ioc->iopoll_q_start_index;
- int qid;
-
- for (qid = 0; qid < iopoll_q_count; qid++)
- atomic_set(&ioc->blk_mq_poll_queues[qid].pause, 1);
- for (qid = 0; qid < iopoll_q_count; qid++) {
- while (atomic_read(&ioc->blk_mq_poll_queues[qid].busy)) {
- cpu_relax();
- udelay(500);
- }
- }
-}
-
-void
-leapioraid_base_resume_mq_polling(struct LEAPIORAID_ADAPTER *ioc)
-{
- int iopoll_q_count = ioc->reply_queue_count - ioc->iopoll_q_start_index;
- int qid;
-
- for (qid = 0; qid < iopoll_q_count; qid++)
- atomic_set(&ioc->blk_mq_poll_queues[qid].pause, 0);
-}
-
-void
-leapioraid_base_mask_interrupts(struct LEAPIORAID_ADAPTER *ioc)
-{
- u32 him_register;
-
- ioc->mask_interrupts = 1;
- him_register =
- ioc->base_readl(&ioc->chip->HostInterruptMask,
- LEAPIORAID_READL_RETRY_COUNT_OF_THREE);
- him_register |=
- 0x00000001 + 0x00000008 + 0x40000000;
- writel(him_register, &ioc->chip->HostInterruptMask);
- ioc->base_readl(&ioc->chip->HostInterruptMask,
- LEAPIORAID_READL_RETRY_COUNT_OF_THREE);
-}
-
-void
-leapioraid_base_unmask_interrupts(struct LEAPIORAID_ADAPTER *ioc)
-{
- u32 him_register;
-
- him_register =
- ioc->base_readl(&ioc->chip->HostInterruptMask,
- LEAPIORAID_READL_RETRY_COUNT_OF_THREE);
- him_register &= ~0x00000008;
- writel(him_register, &ioc->chip->HostInterruptMask);
- ioc->mask_interrupts = 0;
-}
-
-union leapioraid_reply_descriptor {
- u64 word;
- struct {
- u32 low;
- u32 high;
- } u;
-};
-
-static int
-leapioraid_base_process_reply_queue(
- struct leapioraid_adapter_reply_queue *reply_q)
-{
- union leapioraid_reply_descriptor rd;
- u64 completed_cmds;
- u8 request_descript_type;
- u16 smid;
- u8 cb_idx;
- u32 reply;
- u8 msix_index = reply_q->msix_index;
- struct LEAPIORAID_ADAPTER *ioc = reply_q->ioc;
- union LeapioraidRepDescUnion_t *rpf;
- u8 rc;
-
- completed_cmds = 0;
- if (!atomic_add_unless(&reply_q->busy, 1, 1))
- return completed_cmds;
- rpf = &reply_q->reply_post_free[reply_q->reply_post_host_index];
- request_descript_type = rpf->Default.ReplyFlags
- & LEAPIORAID_RPY_DESCRIPT_FLAGS_TYPE_MASK;
- if (request_descript_type == LEAPIORAID_RPY_DESCRIPT_FLAGS_UNUSED) {
- atomic_dec(&reply_q->busy);
- return 1;
- }
- cb_idx = 0xFF;
- do {
- rd.word = le64_to_cpu(rpf->Words);
- if (rd.u.low == UINT_MAX || rd.u.high == UINT_MAX)
- goto out;
- reply = 0;
- smid = le16_to_cpu(rpf->Default.DescriptorTypeDependent1);
- if (request_descript_type ==
- LEAPIORAID_RPY_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO_SUCCESS ||
- request_descript_type ==
- LEAPIORAID_RPY_DESCRIPT_FLAGS_SCSI_IO_SUCCESS) {
- cb_idx = leapioraid_base_get_cb_idx(ioc, smid);
- if ((likely(cb_idx < LEAPIORAID_MAX_CALLBACKS)) &&
- (likely(leapioraid_callbacks[cb_idx] != NULL))) {
- rc = leapioraid_callbacks[cb_idx] (ioc, smid,
- msix_index, 0);
- if (rc)
- leapioraid_base_free_smid(ioc, smid);
- }
- } else if (request_descript_type ==
- LEAPIORAID_RPY_DESCRIPT_FLAGS_ADDRESS_REPLY) {
- reply =
- le32_to_cpu(rpf->AddressReply.ReplyFrameAddress);
- if (reply > ioc->reply_dma_max_address
- || reply < ioc->reply_dma_min_address)
- reply = 0;
- if (smid) {
- cb_idx = leapioraid_base_get_cb_idx(ioc, smid);
- if ((likely(cb_idx < LEAPIORAID_MAX_CALLBACKS)) &&
- (likely(leapioraid_callbacks[cb_idx] != NULL))) {
- rc = leapioraid_callbacks[cb_idx] (ioc,
- smid,
- msix_index,
- reply);
- if (reply)
- leapioraid_base_display_reply_info
- (ioc, smid, msix_index,
- reply);
- if (rc)
- leapioraid_base_free_smid(ioc,
- smid);
- }
- } else {
- leapioraid_base_async_event(ioc, msix_index, reply);
- }
- if (reply) {
- ioc->reply_free_host_index =
- (ioc->reply_free_host_index ==
- (ioc->reply_free_queue_depth - 1)) ?
- 0 : ioc->reply_free_host_index + 1;
- ioc->reply_free[ioc->reply_free_host_index] =
- cpu_to_le32(reply);
- wmb(); /* Make sure that all write ops are in order */
- writel(ioc->reply_free_host_index,
- &ioc->chip->ReplyFreeHostIndex);
- }
- }
- rpf->Words = cpu_to_le64(ULLONG_MAX);
- reply_q->reply_post_host_index =
- (reply_q->reply_post_host_index ==
- (ioc->reply_post_queue_depth - 1)) ? 0 :
- reply_q->reply_post_host_index + 1;
- request_descript_type =
- reply_q->reply_post_free[reply_q->reply_post_host_index].Default.ReplyFlags
- & LEAPIORAID_RPY_DESCRIPT_FLAGS_TYPE_MASK;
- completed_cmds++;
- if (completed_cmds >= ioc->thresh_hold) {
- if (ioc->combined_reply_queue) {
- writel(reply_q->reply_post_host_index |
- ((msix_index & 7) <<
- LEAPIORAID_RPHI_MSIX_INDEX_SHIFT),
- ioc->replyPostRegisterIndex[msix_index /
- 8]);
- } else {
- writel(reply_q->reply_post_host_index |
- (msix_index <<
- LEAPIORAID_RPHI_MSIX_INDEX_SHIFT),
- &ioc->chip->ReplyPostHostIndex);
- }
- if (!reply_q->is_blk_mq_poll_q &&
- !reply_q->irq_poll_scheduled) {
- reply_q->irq_poll_scheduled = true;
- irq_poll_sched(&reply_q->irqpoll);
- }
- atomic_dec(&reply_q->busy);
- return completed_cmds;
- }
- if (request_descript_type == LEAPIORAID_RPY_DESCRIPT_FLAGS_UNUSED)
- goto out;
- if (!reply_q->reply_post_host_index)
- rpf = reply_q->reply_post_free;
- else
- rpf++;
- } while (1);
-out:
- if (!completed_cmds) {
- atomic_dec(&reply_q->busy);
- return completed_cmds;
- }
- wmb(); /* Make sure that all write ops are in order */
- if (ioc->combined_reply_queue) {
- writel(reply_q->reply_post_host_index | ((msix_index & 7) <<
- LEAPIORAID_RPHI_MSIX_INDEX_SHIFT),
- ioc->replyPostRegisterIndex[msix_index / 8]);
- } else {
- writel(reply_q->reply_post_host_index | (msix_index <<
- LEAPIORAID_RPHI_MSIX_INDEX_SHIFT),
- &ioc->chip->ReplyPostHostIndex);
- }
- atomic_dec(&reply_q->busy);
- return completed_cmds;
-}
-
-int leapioraid_blk_mq_poll(struct Scsi_Host *shost, unsigned int queue_num)
-{
- struct LEAPIORAID_ADAPTER *ioc =
- (struct LEAPIORAID_ADAPTER *)shost->hostdata;
- struct leapioraid_adapter_reply_queue *reply_q;
- int num_entries = 0;
- int qid = queue_num - ioc->iopoll_q_start_index;
-
- if (atomic_read(&ioc->blk_mq_poll_queues[qid].pause) ||
- !atomic_add_unless(&ioc->blk_mq_poll_queues[qid].busy, 1, 1))
- return 0;
- reply_q = ioc->blk_mq_poll_queues[qid].reply_q;
- num_entries = leapioraid_base_process_reply_queue(reply_q);
- atomic_dec(&ioc->blk_mq_poll_queues[qid].busy);
- return num_entries;
-}
-
-static irqreturn_t
-leapioraid_base_interrupt(int irq, void *bus_id)
-{
- struct leapioraid_adapter_reply_queue *reply_q = bus_id;
- struct LEAPIORAID_ADAPTER *ioc = reply_q->ioc;
-
- if (ioc->mask_interrupts)
- return IRQ_NONE;
- if (reply_q->irq_poll_scheduled)
- return IRQ_HANDLED;
- return ((leapioraid_base_process_reply_queue(reply_q) > 0) ?
- IRQ_HANDLED : IRQ_NONE);
-}
-
-static
-int leapioraid_base_irqpoll(struct irq_poll *irqpoll, int budget)
-{
- struct leapioraid_adapter_reply_queue *reply_q;
- int num_entries = 0;
-
- reply_q = container_of(irqpoll,
- struct leapioraid_adapter_reply_queue, irqpoll);
- if (reply_q->irq_line_enable) {
- disable_irq_nosync(reply_q->os_irq);
- reply_q->irq_line_enable = false;
- }
- num_entries = leapioraid_base_process_reply_queue(reply_q);
- if (num_entries < budget) {
- irq_poll_complete(irqpoll);
- reply_q->irq_poll_scheduled = false;
- reply_q->irq_line_enable = true;
- enable_irq(reply_q->os_irq);
- }
- return num_entries;
-}
-
-static void
-leapioraid_base_init_irqpolls(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_adapter_reply_queue *reply_q, *next;
-
- if (list_empty(&ioc->reply_queue_list))
- return;
- list_for_each_entry_safe(reply_q, next, &ioc->reply_queue_list, list) {
- if (reply_q->is_blk_mq_poll_q)
- continue;
- irq_poll_init(&reply_q->irqpoll, ioc->thresh_hold,
- leapioraid_base_irqpoll);
- reply_q->irq_poll_scheduled = false;
- reply_q->irq_line_enable = true;
- reply_q->os_irq = pci_irq_vector(ioc->pdev,
- reply_q->msix_index);
- }
-}
-
-static inline int
-leapioraid_base_is_controller_msix_enabled(struct LEAPIORAID_ADAPTER *ioc)
-{
- return (ioc->facts.IOCCapabilities &
- LEAPIORAID_IOCFACTS_CAPABILITY_MSI_X_INDEX) && ioc->msix_enable;
-}
-
-void
-leapioraid_base_sync_reply_irqs(struct LEAPIORAID_ADAPTER *ioc, u8 poll)
-{
- struct leapioraid_adapter_reply_queue *reply_q;
-
- if (!leapioraid_base_is_controller_msix_enabled(ioc))
- return;
- list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
- if (ioc->shost_recovery || ioc->remove_host ||
- ioc->pci_error_recovery)
- return;
- if (reply_q->msix_index == 0)
- continue;
- if (reply_q->is_blk_mq_poll_q) {
- leapioraid_base_process_reply_queue(reply_q);
- continue;
- }
- synchronize_irq(pci_irq_vector(ioc->pdev, reply_q->msix_index));
- if (reply_q->irq_poll_scheduled) {
- irq_poll_disable(&reply_q->irqpoll);
- irq_poll_enable(&reply_q->irqpoll);
- if (reply_q->irq_poll_scheduled) {
- reply_q->irq_poll_scheduled = false;
- reply_q->irq_line_enable = true;
- enable_irq(reply_q->os_irq);
- }
- }
- if (poll)
- leapioraid_base_process_reply_queue(reply_q);
- }
-}
-
-void
-leapioraid_base_release_callback_handler(u8 cb_idx)
-{
- leapioraid_callbacks[cb_idx] = NULL;
-}
-
-u8
-leapioraid_base_register_callback_handler(LEAPIORAID_CALLBACK cb_func)
-{
- u8 cb_idx;
-
- for (cb_idx = LEAPIORAID_MAX_CALLBACKS - 1; cb_idx; cb_idx--)
- if (leapioraid_callbacks[cb_idx] == NULL)
- break;
- leapioraid_callbacks[cb_idx] = cb_func;
- return cb_idx;
-}
-
-void
-leapioraid_base_initialize_callback_handler(void)
-{
- u8 cb_idx;
-
- for (cb_idx = 0; cb_idx < LEAPIORAID_MAX_CALLBACKS; cb_idx++)
- leapioraid_base_release_callback_handler(cb_idx);
-}
-
-static void
-leapioraid_base_build_zero_len_sge(
- struct LEAPIORAID_ADAPTER *ioc, void *paddr)
-{
- u32 flags_length = (u32) ((LEAPIORAID_SGE_FLAGS_LAST_ELEMENT |
- LEAPIORAID_SGE_FLAGS_END_OF_BUFFER |
- LEAPIORAID_SGE_FLAGS_END_OF_LIST |
- LEAPIORAID_SGE_FLAGS_SIMPLE_ELEMENT) <<
- LEAPIORAID_SGE_FLAGS_SHIFT);
-
- ioc->base_add_sg_single(paddr, flags_length, -1);
-}
-
-static void
-leapioraid_base_add_sg_single_32(void *paddr, u32 flags_length,
- dma_addr_t dma_addr)
-{
- struct LeapioSGESimple32_t *sgel = paddr;
-
- flags_length |= (LEAPIORAID_SGE_FLAGS_32_BIT_ADDRESSING |
- LEAPIORAID_SGE_FLAGS_SYSTEM_ADDRESS) <<
- LEAPIORAID_SGE_FLAGS_SHIFT;
- sgel->FlagsLength = cpu_to_le32(flags_length);
- sgel->Address = cpu_to_le32(dma_addr);
-}
-
-static void
-leapioraid_base_add_sg_single_64(void *paddr, u32 flags_length,
- dma_addr_t dma_addr)
-{
- struct LeapioSGESimple64_t *sgel = paddr;
-
- flags_length |= (LEAPIORAID_SGE_FLAGS_64_BIT_ADDRESSING |
- LEAPIORAID_SGE_FLAGS_SYSTEM_ADDRESS) <<
- LEAPIORAID_SGE_FLAGS_SHIFT;
- sgel->FlagsLength = cpu_to_le32(flags_length);
- sgel->Address = cpu_to_le64(dma_addr);
-}
-
-static
-struct leapioraid_chain_tracker *leapioraid_base_get_chain_buffer_tracker(
- struct LEAPIORAID_ADAPTER *ioc,
- struct scsi_cmnd *scmd)
-{
- struct leapioraid_chain_tracker *chain_req;
- struct leapioraid_scsiio_tracker *st = leapioraid_base_scsi_cmd_priv(scmd);
- u16 smid = st->smid;
- u8 chain_offset =
- atomic_read(&ioc->chain_lookup[smid - 1].chain_offset);
-
- if (chain_offset == ioc->chains_needed_per_io)
- return NULL;
- chain_req = &ioc->chain_lookup[smid - 1].chains_per_smid[chain_offset];
- atomic_inc(&ioc->chain_lookup[smid - 1].chain_offset);
- return chain_req;
-}
-
-static void
-leapioraid_base_build_sg(struct LEAPIORAID_ADAPTER *ioc, void *psge,
- dma_addr_t data_out_dma, size_t data_out_sz,
- dma_addr_t data_in_dma, size_t data_in_sz)
-{
- u32 sgl_flags;
-
- if (!data_out_sz && !data_in_sz) {
- leapioraid_base_build_zero_len_sge(ioc, psge);
- return;
- }
- if (data_out_sz && data_in_sz) {
- sgl_flags = (LEAPIORAID_SGE_FLAGS_SIMPLE_ELEMENT |
- LEAPIORAID_SGE_FLAGS_END_OF_BUFFER |
- LEAPIORAID_SGE_FLAGS_HOST_TO_IOC);
- sgl_flags = sgl_flags << LEAPIORAID_SGE_FLAGS_SHIFT;
- ioc->base_add_sg_single(psge, sgl_flags |
- data_out_sz, data_out_dma);
- psge += ioc->sge_size;
- sgl_flags = (LEAPIORAID_SGE_FLAGS_SIMPLE_ELEMENT |
- LEAPIORAID_SGE_FLAGS_LAST_ELEMENT |
- LEAPIORAID_SGE_FLAGS_END_OF_BUFFER |
- LEAPIORAID_SGE_FLAGS_END_OF_LIST);
- sgl_flags = sgl_flags << LEAPIORAID_SGE_FLAGS_SHIFT;
- ioc->base_add_sg_single(psge, sgl_flags |
- data_in_sz, data_in_dma);
- } else if (data_out_sz) {
- sgl_flags = (LEAPIORAID_SGE_FLAGS_SIMPLE_ELEMENT |
- LEAPIORAID_SGE_FLAGS_LAST_ELEMENT |
- LEAPIORAID_SGE_FLAGS_END_OF_BUFFER |
- LEAPIORAID_SGE_FLAGS_END_OF_LIST |
- LEAPIORAID_SGE_FLAGS_HOST_TO_IOC);
- sgl_flags = sgl_flags << LEAPIORAID_SGE_FLAGS_SHIFT;
- ioc->base_add_sg_single(psge, sgl_flags |
- data_out_sz, data_out_dma);
- } else if (data_in_sz) {
- sgl_flags = (LEAPIORAID_SGE_FLAGS_SIMPLE_ELEMENT |
- LEAPIORAID_SGE_FLAGS_LAST_ELEMENT |
- LEAPIORAID_SGE_FLAGS_END_OF_BUFFER |
- LEAPIORAID_SGE_FLAGS_END_OF_LIST);
- sgl_flags = sgl_flags << LEAPIORAID_SGE_FLAGS_SHIFT;
- ioc->base_add_sg_single(psge, sgl_flags |
- data_in_sz, data_in_dma);
- }
-}
-
-u32
-leapioraid_base_mod64(u64 dividend, u32 divisor)
-{
- u32 remainder;
-
- if (!divisor) {
- pr_err("leapioraid : DIVISOR is zero, in div fn\n");
- return 0;
- }
- remainder = do_div(dividend, divisor);
- return remainder;
-}
-
-static void
-leapioraid_base_add_sg_single_ieee(void *paddr, u8 flags, u8 chain_offset,
- u32 length, dma_addr_t dma_addr)
-{
- struct LEAPIORAID_IEEE_SGE_CHAIN64 *sgel = paddr;
-
- sgel->Flags = flags;
- sgel->NextChainOffset = chain_offset;
- sgel->Length = cpu_to_le32(length);
- sgel->Address = cpu_to_le64(dma_addr);
-}
-
-static void
-leapioraid_base_build_zero_len_sge_ieee(struct LEAPIORAID_ADAPTER *ioc,
- void *paddr)
-{
- u8 sgl_flags = (LEAPIORAID_IEEE_SGE_FLAGS_SIMPLE_ELEMENT |
- LEAPIORAID_IEEE_SGE_FLAGS_SYSTEM_ADDR |
- LEAPIORAID_IEEE_SGE_FLAGS_END_OF_LIST);
-
- leapioraid_base_add_sg_single_ieee(paddr, sgl_flags, 0, 0, -1);
-}
-
-static int
-leapioraid_base_build_sg_scmd_ieee(struct LEAPIORAID_ADAPTER *ioc,
- struct scsi_cmnd *scmd, u16 smid)
-{
- struct LeapioraidSCSIIOReq_t *mpi_request;
- dma_addr_t chain_dma;
- struct scatterlist *sg_scmd;
- void *sg_local, *chain;
- u32 chain_offset;
- u32 chain_length;
- int sges_left;
- u32 sges_in_segment;
- u8 simple_sgl_flags;
- u8 simple_sgl_flags_last;
- u8 chain_sgl_flags;
- struct leapioraid_chain_tracker *chain_req;
-
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- simple_sgl_flags = LEAPIORAID_IEEE_SGE_FLAGS_SIMPLE_ELEMENT |
- LEAPIORAID_IEEE_SGE_FLAGS_SYSTEM_ADDR;
- simple_sgl_flags_last = simple_sgl_flags |
- LEAPIORAID_IEEE_SGE_FLAGS_END_OF_LIST;
- chain_sgl_flags = LEAPIORAID_IEEE_SGE_FLAGS_CHAIN_ELEMENT |
- LEAPIORAID_IEEE_SGE_FLAGS_SYSTEM_ADDR;
-
- sg_scmd = scsi_sglist(scmd);
- sges_left = scsi_dma_map(scmd);
- if (sges_left < 0) {
- pr_err_ratelimited
- ("sd %s: scsi_dma_map failed: request for %d bytes!\n",
- dev_name(&scmd->device->sdev_gendev), scsi_bufflen(scmd));
- return -ENOMEM;
- }
- sg_local = &mpi_request->SGL;
- sges_in_segment = (ioc->request_sz -
- offsetof(struct LeapioraidSCSIIOReq_t,
- SGL)) / ioc->sge_size_ieee;
- if (sges_left <= sges_in_segment)
- goto fill_in_last_segment;
- mpi_request->ChainOffset = (sges_in_segment - 1) +
- (offsetof(struct LeapioraidSCSIIOReq_t, SGL) / ioc->sge_size_ieee);
- while (sges_in_segment > 1) {
- leapioraid_base_add_sg_single_ieee(sg_local, simple_sgl_flags,
- 0, sg_dma_len(sg_scmd),
- sg_dma_address(sg_scmd));
-
- sg_scmd = sg_next(sg_scmd);
- sg_local += ioc->sge_size_ieee;
- sges_left--;
- sges_in_segment--;
- }
- chain_req = leapioraid_base_get_chain_buffer_tracker(ioc, scmd);
- if (!chain_req)
- return -1;
- chain = chain_req->chain_buffer;
- chain_dma = chain_req->chain_buffer_dma;
- do {
- sges_in_segment = (sges_left <=
- ioc->max_sges_in_chain_message) ? sges_left :
- ioc->max_sges_in_chain_message;
- chain_offset = (sges_left == sges_in_segment) ?
- 0 : sges_in_segment;
- chain_length = sges_in_segment * ioc->sge_size_ieee;
- if (chain_offset)
- chain_length += ioc->sge_size_ieee;
- leapioraid_base_add_sg_single_ieee(sg_local, chain_sgl_flags,
- chain_offset, chain_length,
- chain_dma);
- sg_local = chain;
- if (!chain_offset)
- goto fill_in_last_segment;
- while (sges_in_segment) {
- leapioraid_base_add_sg_single_ieee(sg_local,
- simple_sgl_flags, 0,
- sg_dma_len(sg_scmd),
- sg_dma_address
- (sg_scmd));
-
- sg_scmd = sg_next(sg_scmd);
- sg_local += ioc->sge_size_ieee;
- sges_left--;
- sges_in_segment--;
- }
- chain_req = leapioraid_base_get_chain_buffer_tracker(ioc, scmd);
- if (!chain_req)
- return -1;
- chain = chain_req->chain_buffer;
- chain_dma = chain_req->chain_buffer_dma;
- } while (1);
-fill_in_last_segment:
- while (sges_left > 0) {
- if (sges_left == 1)
- leapioraid_base_add_sg_single_ieee(sg_local,
- simple_sgl_flags_last,
- 0,
- sg_dma_len(sg_scmd),
- sg_dma_address
- (sg_scmd));
- else
- leapioraid_base_add_sg_single_ieee(sg_local,
- simple_sgl_flags, 0,
- sg_dma_len(sg_scmd),
- sg_dma_address
- (sg_scmd));
-
- sg_scmd = sg_next(sg_scmd);
- sg_local += ioc->sge_size_ieee;
- sges_left--;
- }
- return 0;
-}
-
-static void
-leapioraid_base_build_sg_ieee(struct LEAPIORAID_ADAPTER *ioc, void *psge,
- dma_addr_t data_out_dma, size_t data_out_sz,
- dma_addr_t data_in_dma, size_t data_in_sz)
-{
- u8 sgl_flags;
-
- if (!data_out_sz && !data_in_sz) {
- leapioraid_base_build_zero_len_sge_ieee(ioc, psge);
- return;
- }
- if (data_out_sz && data_in_sz) {
- sgl_flags = LEAPIORAID_IEEE_SGE_FLAGS_SIMPLE_ELEMENT |
- LEAPIORAID_IEEE_SGE_FLAGS_SYSTEM_ADDR;
- leapioraid_base_add_sg_single_ieee(psge, sgl_flags, 0,
- data_out_sz, data_out_dma);
- psge += ioc->sge_size_ieee;
- sgl_flags |= LEAPIORAID_IEEE_SGE_FLAGS_END_OF_LIST;
- leapioraid_base_add_sg_single_ieee(psge, sgl_flags, 0,
- data_in_sz, data_in_dma);
- } else if (data_out_sz) {
- sgl_flags = LEAPIORAID_IEEE_SGE_FLAGS_SIMPLE_ELEMENT |
- LEAPIORAID_IEEE_SGE_FLAGS_END_OF_LIST |
- LEAPIORAID_IEEE_SGE_FLAGS_SYSTEM_ADDR;
- leapioraid_base_add_sg_single_ieee(psge, sgl_flags, 0,
- data_out_sz, data_out_dma);
- } else if (data_in_sz) {
- sgl_flags = LEAPIORAID_IEEE_SGE_FLAGS_SIMPLE_ELEMENT |
- LEAPIORAID_IEEE_SGE_FLAGS_END_OF_LIST |
- LEAPIORAID_IEEE_SGE_FLAGS_SYSTEM_ADDR;
- leapioraid_base_add_sg_single_ieee(psge, sgl_flags, 0,
- data_in_sz, data_in_dma);
- }
-}
-
-#define leapioraid_convert_to_kb(x) ((x) << (PAGE_SHIFT - 10))
-static int
-leapioraid_base_config_dma_addressing(struct LEAPIORAID_ADAPTER *ioc,
- struct pci_dev *pdev)
-{
- struct sysinfo s;
- char *desc = "64";
- u64 consistant_dma_mask = DMA_BIT_MASK(64);
- u64 dma_mask = DMA_BIT_MASK(64);
-
- consistant_dma_mask = DMA_BIT_MASK(63);
- dma_mask = DMA_BIT_MASK(63);
- desc = "63";
- ioc->dma_mask = 63;
- if (ioc->use_32bit_dma)
- consistant_dma_mask = DMA_BIT_MASK(32);
- if (sizeof(dma_addr_t) > 4) {
- if (!dma_set_mask(&pdev->dev, dma_mask) &&
- !dma_set_coherent_mask(&pdev->dev, consistant_dma_mask)) {
- ioc->base_add_sg_single =
- &leapioraid_base_add_sg_single_64;
- ioc->sge_size = sizeof(struct LeapioSGESimple64_t);
- if (!ioc->use_32bit_dma)
- goto out;
- return 0;
- }
- }
- if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))
- && !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32))) {
- ioc->base_add_sg_single = &leapioraid_base_add_sg_single_32;
- ioc->sge_size = sizeof(struct LeapioSGESimple32_t);
- desc = "32";
- ioc->dma_mask = 32;
- } else
- return -ENODEV;
-out:
- si_meminfo(&s);
- pr_info("%s %s BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (%ld kB)\n",
- ioc->name, desc, leapioraid_convert_to_kb(s.totalram));
- return 0;
-}
-
-int
-leapioraid_base_check_and_get_msix_vectors(struct pci_dev *pdev)
-{
- int base;
- u16 message_control, msix_vector_count;
-
- base = pci_find_capability(pdev, PCI_CAP_ID_MSIX);
- if (!base)
- return -EINVAL;
- pci_read_config_word(pdev, base + 2, &message_control);
- msix_vector_count = (message_control & 0x3FF) + 1;
- return msix_vector_count;
-}
-
-enum leapioraid_pci_bus_speed {
- LEAPIORAID_PCIE_SPEED_2_5GT = 0x14,
- LEAPIORAID_PCIE_SPEED_5_0GT = 0x15,
- LEAPIORAID_PCIE_SPEED_8_0GT = 0x16,
- LEAPIORAID_PCIE_SPEED_16_0GT = 0x17,
- LEAPIORAID_PCI_SPEED_UNKNOWN = 0xff,
-};
-
-const unsigned char leapioraid_pcie_link_speed[] = {
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCIE_SPEED_2_5GT,
- LEAPIORAID_PCIE_SPEED_5_0GT,
- LEAPIORAID_PCIE_SPEED_8_0GT,
- LEAPIORAID_PCIE_SPEED_16_0GT,
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCI_SPEED_UNKNOWN,
- LEAPIORAID_PCI_SPEED_UNKNOWN
-};
-
-static void
-leapioraid_base_check_and_enable_high_iops_queues(
- struct LEAPIORAID_ADAPTER *ioc,
- int hba_msix_vector_count,
- int iopoll_q_count)
-{
- u16 lnksta;
- enum leapioraid_pci_bus_speed speed;
-
- if (perf_mode == LEAPIORAID_PERF_MODE_IOPS ||
- perf_mode == LEAPIORAID_PERF_MODE_LATENCY || iopoll_q_count) {
- ioc->high_iops_queues = 0;
- return;
- }
- if (perf_mode == LEAPIORAID_PERF_MODE_DEFAULT) {
- pcie_capability_read_word(ioc->pdev, PCI_EXP_LNKSTA, &lnksta);
- speed = leapioraid_pcie_link_speed[lnksta & PCI_EXP_LNKSTA_CLS];
- dev_info(&ioc->pdev->dev, "PCIe device speed is %s\n",
- speed == LEAPIORAID_PCIE_SPEED_2_5GT ? "2.5GHz" :
- speed == LEAPIORAID_PCIE_SPEED_5_0GT ? "5.0GHz" :
- speed == LEAPIORAID_PCIE_SPEED_8_0GT ? "8.0GHz" :
- speed == LEAPIORAID_PCIE_SPEED_16_0GT ? "16.0GHz" :
- "Unknown");
- if (speed < LEAPIORAID_PCIE_SPEED_16_0GT) {
- ioc->high_iops_queues = 0;
- return;
- }
- }
- if (!reset_devices &&
- hba_msix_vector_count == LEAPIORAID_GEN35_MAX_MSIX_QUEUES &&
- num_online_cpus() >= LEAPIORAID_HIGH_IOPS_REPLY_QUEUES &&
- max_msix_vectors == -1)
- ioc->high_iops_queues = LEAPIORAID_HIGH_IOPS_REPLY_QUEUES;
- else
- ioc->high_iops_queues = 0;
-}
-
-void
-leapioraid_base_disable_msix(struct LEAPIORAID_ADAPTER *ioc)
-{
- if (!ioc->msix_enable)
- return;
- pci_free_irq_vectors(ioc->pdev);
- kfree(ioc->blk_mq_poll_queues);
- ioc->msix_enable = 0;
-}
-
-void
-leapioraid_base_free_irq(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_adapter_reply_queue *reply_q, *next;
-
- if (list_empty(&ioc->reply_queue_list))
- return;
- list_for_each_entry_safe(reply_q, next, &ioc->reply_queue_list, list) {
- list_del(&reply_q->list);
- if (reply_q->is_blk_mq_poll_q) {
- kfree(reply_q);
- continue;
- }
- irq_poll_disable(&reply_q->irqpoll);
- if (ioc->smp_affinity_enable)
- irq_set_affinity_hint(pci_irq_vector(ioc->pdev,
- reply_q->msix_index), NULL);
- free_irq(pci_irq_vector(ioc->pdev, reply_q->msix_index),
- reply_q);
- kfree(reply_q);
- }
-}
-
-static int
-leapioraid_base_request_irq(struct LEAPIORAID_ADAPTER *ioc, u8 index)
-{
- struct leapioraid_adapter_reply_queue *reply_q;
- int r;
- u8 qid;
-
- reply_q = kzalloc(sizeof(struct leapioraid_adapter_reply_queue),
- GFP_KERNEL);
- if (!reply_q)
- return -ENOMEM;
-
- reply_q->ioc = ioc;
- reply_q->msix_index = index;
- atomic_set(&reply_q->busy, 0);
- if (index >= ioc->iopoll_q_start_index) {
- qid = index - ioc->iopoll_q_start_index;
- snprintf(reply_q->name, LEAPIORAID_NAME_LENGTH, "%s%u-mq-poll%u",
- ioc->driver_name, ioc->id, qid);
- reply_q->is_blk_mq_poll_q = 1;
- ioc->blk_mq_poll_queues[qid].reply_q = reply_q;
- INIT_LIST_HEAD(&reply_q->list);
- list_add_tail(&reply_q->list, &ioc->reply_queue_list);
- return 0;
- }
- if (ioc->msix_enable)
- snprintf(reply_q->name, LEAPIORAID_NAME_LENGTH, "%s%u-msix%u",
- ioc->driver_name, ioc->id, index);
- else
- snprintf(reply_q->name, LEAPIORAID_NAME_LENGTH, "%s%d",
- ioc->driver_name, ioc->id);
- r = request_irq(pci_irq_vector(ioc->pdev, index), leapioraid_base_interrupt,
- IRQF_SHARED, reply_q->name, reply_q);
- if (r) {
- pr_err("%s unable to allocate interrupt %d!\n", reply_q->name,
- pci_irq_vector(ioc->pdev, index));
- kfree(reply_q);
- return -EBUSY;
- }
-
- INIT_LIST_HEAD(&reply_q->list);
- list_add_tail(&reply_q->list, &ioc->reply_queue_list);
- return 0;
-}
-
-static int leapioraid_base_alloc_irq_vectors(struct LEAPIORAID_ADAPTER *ioc)
-{
- int i, irq_flags = PCI_IRQ_MSIX;
- struct irq_affinity desc = {.pre_vectors = ioc->high_iops_queues };
- struct irq_affinity *descp = &desc;
- int nr_msix_vectors = ioc->iopoll_q_start_index;
-
- if (ioc->smp_affinity_enable)
- irq_flags |= PCI_IRQ_AFFINITY | PCI_IRQ_ALL_TYPES;
- else
- descp = NULL;
- dinitprintk(ioc, pr_err(
- "%s high_iops_queues: %d,\n\t\t"
- "reply_queue_count: %d, nr_msix_vectors: %d\n",
- ioc->name,
- ioc->high_iops_queues,
- ioc->reply_queue_count,
- nr_msix_vectors));
- i = pci_alloc_irq_vectors_affinity(
- ioc->pdev,
- ioc->high_iops_queues,
- nr_msix_vectors, irq_flags, descp);
- return i;
-}
-
-static int
-leapioraid_base_enable_msix(struct LEAPIORAID_ADAPTER *ioc)
-{
- int r, i, msix_vector_count, local_max_msix_vectors;
- int iopoll_q_count = 0;
-
- ioc->msix_load_balance = false;
- msix_vector_count =
- leapioraid_base_check_and_get_msix_vectors(ioc->pdev);
- if (msix_vector_count <= 0) {
- dfailprintk(ioc, pr_info("%s msix not supported\n", ioc->name));
- goto try_ioapic;
- }
- dinitprintk(ioc, pr_err(
- "%s MSI-X vectors supported: %d, no of cores: %d\n",
- ioc->name, msix_vector_count, ioc->cpu_count));
- ioc->reply_queue_count = min_t(int, ioc->cpu_count, msix_vector_count);
- if (!ioc->rdpq_array_enable && max_msix_vectors == -1) {
- if (reset_devices)
- local_max_msix_vectors = 1;
- else
- local_max_msix_vectors = 8;
- } else
- local_max_msix_vectors = max_msix_vectors;
- if (local_max_msix_vectors == 0)
- goto try_ioapic;
- if (!ioc->combined_reply_queue) {
- pr_err(
- "%s combined reply queue is off, so enabling msix load balance\n",
- ioc->name);
- ioc->msix_load_balance = true;
- }
- if (ioc->msix_load_balance)
- ioc->smp_affinity_enable = 0;
- if (!ioc->smp_affinity_enable || ioc->reply_queue_count <= 1)
- ioc->shost->host_tagset = 0;
- if (ioc->shost->host_tagset)
- iopoll_q_count = poll_queues;
- if (iopoll_q_count) {
- ioc->blk_mq_poll_queues = kcalloc(iopoll_q_count,
- sizeof(struct
- leapioraid_blk_mq_poll_queue),
- GFP_KERNEL);
- if (!ioc->blk_mq_poll_queues)
- iopoll_q_count = 0;
- }
- leapioraid_base_check_and_enable_high_iops_queues(ioc,
- msix_vector_count,
- iopoll_q_count);
- ioc->reply_queue_count =
- min_t(int, ioc->reply_queue_count + ioc->high_iops_queues,
- msix_vector_count);
- if (local_max_msix_vectors > 0)
- ioc->reply_queue_count = min_t(int, local_max_msix_vectors,
- ioc->reply_queue_count);
- if (iopoll_q_count) {
- if (ioc->reply_queue_count < (iopoll_q_count + 1))
- iopoll_q_count = 0;
- ioc->reply_queue_count =
- min(ioc->reply_queue_count + iopoll_q_count,
- msix_vector_count);
- }
- ioc->iopoll_q_start_index = ioc->reply_queue_count - iopoll_q_count;
- r = leapioraid_base_alloc_irq_vectors(ioc);
- if (r < 0) {
- pr_warn(
- "%s pci_alloc_irq_vectors failed (r=%d) !!!\n",
- ioc->name, r);
- goto try_ioapic;
- }
- ioc->msix_enable = 1;
- for (i = 0; i < ioc->reply_queue_count; i++) {
- r = leapioraid_base_request_irq(ioc, i);
- if (r) {
- leapioraid_base_free_irq(ioc);
- leapioraid_base_disable_msix(ioc);
- goto try_ioapic;
- }
- }
- dinitprintk(ioc,
- pr_info("%s High IOPs queues : %s\n",
- ioc->name,
- ioc->high_iops_queues ? "enabled" : "disabled"));
- return 0;
-try_ioapic:
- ioc->high_iops_queues = 0;
- dinitprintk(ioc, pr_err(
- "%s High IOPs queues : disabled\n", ioc->name));
- ioc->reply_queue_count = 1;
- ioc->iopoll_q_start_index = ioc->reply_queue_count - 0;
- r = leapioraid_base_request_irq(ioc, 0);
- return r;
-}
-
-static void
-leapioraid_base_import_managed_irqs_affinity(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_adapter_reply_queue *reply_q;
- unsigned int cpu, nr_msix;
- int local_numa_node;
- unsigned int index = 0;
-
- nr_msix = ioc->reply_queue_count;
- if (!nr_msix)
- return;
- if (ioc->smp_affinity_enable) {
- if (ioc->high_iops_queues) {
- local_numa_node = dev_to_node(&ioc->pdev->dev);
- for (index = 0; index < ioc->high_iops_queues; index++) {
- irq_set_affinity_hint(pci_irq_vector(ioc->pdev,
- index),
- cpumask_of_node
- (local_numa_node));
- }
- }
- list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
- const cpumask_t *mask;
-
- if (reply_q->msix_index < ioc->high_iops_queues ||
- reply_q->msix_index >= ioc->iopoll_q_start_index)
- continue;
- mask = pci_irq_get_affinity(ioc->pdev,
- reply_q->msix_index);
- if (!mask) {
- dinitprintk(ioc, pr_warn(
- "%s no affinity for msi %x\n",
- ioc->name,
- reply_q->msix_index));
- goto fall_back;
- }
- for_each_cpu_and(cpu, mask, cpu_online_mask) {
- if (cpu >= ioc->cpu_msix_table_sz)
- break;
- ioc->cpu_msix_table[cpu] = reply_q->msix_index;
- }
- }
- return;
- }
-fall_back:
- leapioraid_base_group_cpus_on_irq(ioc);
-}
-
-static void
-leapioraid_base_assign_reply_queues(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_adapter_reply_queue *reply_q;
- int reply_queue;
-
- if (!leapioraid_base_is_controller_msix_enabled(ioc))
- return;
- if (ioc->msix_load_balance)
- return;
- memset(ioc->cpu_msix_table, 0, ioc->cpu_msix_table_sz);
- if (ioc->reply_queue_count > ioc->facts.MaxMSIxVectors) {
- ioc->reply_queue_count = ioc->facts.MaxMSIxVectors;
- reply_queue = 0;
- list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
- reply_q->msix_index = reply_queue;
- if (++reply_queue == ioc->reply_queue_count)
- reply_queue = 0;
- }
- }
- leapioraid_base_import_managed_irqs_affinity(ioc);
-}
-
-static int
-leapioraid_base_wait_for_doorbell_int(
- struct LEAPIORAID_ADAPTER *ioc, int timeout)
-{
- u32 cntdn, count;
- u32 int_status;
-
- count = 0;
- cntdn = 1000 * timeout;
- do {
- int_status =
- ioc->base_readl(&ioc->chip->HostInterruptStatus,
- LEAPIORAID_READL_RETRY_COUNT_OF_THREE);
- if (int_status & LEAPIORAID_HIS_IOC2SYS_DB_STATUS) {
- dhsprintk(ioc, pr_info(
- "%s %s: successful count(%d), timeout(%d)\n",
- ioc->name, __func__, count,
- timeout));
- return 0;
- }
- usleep_range(1000, 1100);
- count++;
- } while (--cntdn);
- pr_err("%s %s: failed due to timeout count(%d), int_status(%x)!\n",
- ioc->name, __func__, count, int_status);
- return -EFAULT;
-}
-
-static int
-leapioraid_base_spin_on_doorbell_int(struct LEAPIORAID_ADAPTER *ioc,
- int timeout)
-{
- u32 cntdn, count;
- u32 int_status;
-
- count = 0;
- cntdn = 2000 * timeout;
- do {
- int_status =
- ioc->base_readl(&ioc->chip->HostInterruptStatus,
- LEAPIORAID_READL_RETRY_COUNT_OF_THREE);
- if (int_status & LEAPIORAID_HIS_IOC2SYS_DB_STATUS) {
- dhsprintk(ioc, pr_info(
- "%s %s: successful count(%d), timeout(%d)\n",
- ioc->name, __func__, count,
- timeout));
- return 0;
- }
- udelay(500);
- count++;
- } while (--cntdn);
- pr_err("%s %s: failed due to timeout count(%d), int_status(%x)!\n",
- ioc->name, __func__, count, int_status);
- return -EFAULT;
-}
-
-static int
-leapioraid_base_wait_for_doorbell_ack(struct LEAPIORAID_ADAPTER *ioc,
- int timeout)
-{
- u32 cntdn, count;
- u32 int_status;
- u32 doorbell;
-
- count = 0;
- cntdn = 1000 * timeout;
- do {
- int_status =
- ioc->base_readl(&ioc->chip->HostInterruptStatus,
- LEAPIORAID_READL_RETRY_COUNT_OF_THREE);
- if (!(int_status & LEAPIORAID_HIS_SYS2IOC_DB_STATUS)) {
- dhsprintk(ioc, pr_info(
- "%s %s: successful count(%d), timeout(%d)\n",
- ioc->name, __func__, count,
- timeout));
- return 0;
- } else if (int_status & LEAPIORAID_HIS_IOC2SYS_DB_STATUS) {
- doorbell =
- ioc->base_readl(&ioc->chip->Doorbell,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY);
- if ((doorbell & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_FAULT) {
- leapioraid_print_fault_code(ioc, doorbell);
- return -EFAULT;
- }
- if ((doorbell & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP) {
- leapioraid_base_coredump_info(ioc, doorbell);
- return -EFAULT;
- }
- } else if (int_status == 0xFFFFFFFF)
- goto out;
- usleep_range(1000, 1100);
- count++;
- } while (--cntdn);
-out:
- pr_err("%s %s: failed due to timeout count(%d), int_status(%x)!\n",
- ioc->name, __func__, count, int_status);
- return -EFAULT;
-}
-
-static int
-leapioraid_base_wait_for_doorbell_not_used(struct LEAPIORAID_ADAPTER *ioc,
- int timeout)
-{
- u32 cntdn, count;
- u32 doorbell_reg;
-
- count = 0;
- cntdn = 1000 * timeout;
- do {
- doorbell_reg =
- ioc->base_readl(&ioc->chip->Doorbell,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY);
- if (!(doorbell_reg & LEAPIORAID_DOORBELL_USED)) {
- dhsprintk(ioc, pr_info(
- "%s %s: successful count(%d), timeout(%d)\n",
- ioc->name, __func__, count,
- timeout));
- return 0;
- }
- usleep_range(1000, 1100);
- count++;
- } while (--cntdn);
- pr_err("%s %s: failed due to timeout count(%d), doorbell_reg(%x)!\n",
- ioc->name, __func__, count, doorbell_reg);
- return -EFAULT;
-}
-
-static int
-leapioraid_base_handshake_req_reply_wait(struct LEAPIORAID_ADAPTER *ioc,
- int request_bytes, u32 *request,
- int reply_bytes, u16 *reply,
- int timeout)
-{
- struct LeapioraidDefaultRep_t *default_reply
- = (struct LeapioraidDefaultRep_t *) reply;
- int i;
- u8 failed;
- __le32 *mfp;
-
- if ((ioc->base_readl(&ioc->chip->Doorbell,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY) & LEAPIORAID_DOORBELL_USED)) {
- pr_err("%s doorbell is in use (line=%d)\n", ioc->name, __LINE__);
- return -EFAULT;
- }
- if (ioc->base_readl(&ioc->chip->HostInterruptStatus,
- LEAPIORAID_READL_RETRY_COUNT_OF_THREE) &
- LEAPIORAID_HIS_IOC2SYS_DB_STATUS)
- writel(0, &ioc->chip->HostInterruptStatus);
- writel(((LEAPIORAID_FUNC_HANDSHAKE << LEAPIORAID_DOORBELL_FUNCTION_SHIFT)
- | ((request_bytes / 4) << LEAPIORAID_DOORBELL_ADD_DWORDS_SHIFT)),
- &ioc->chip->Doorbell);
- if ((leapioraid_base_spin_on_doorbell_int(ioc, 5))) {
- pr_err("%s doorbell handshake int failed (line=%d)\n",
- ioc->name, __LINE__);
- return -EFAULT;
- }
- writel(0, &ioc->chip->HostInterruptStatus);
- if ((leapioraid_base_wait_for_doorbell_ack(ioc, 5))) {
- pr_err("%s doorbell handshake ack failed (line=%d)\n",
- ioc->name, __LINE__);
- return -EFAULT;
- }
- for (i = 0, failed = 0; i < request_bytes / 4 && !failed; i++) {
- writel((u32) (request[i]), &ioc->chip->Doorbell);
- if ((leapioraid_base_wait_for_doorbell_ack(ioc, 5)))
- failed = 1;
- }
- if (failed) {
- pr_err("%s doorbell handshake sending request failed (line=%d)\n",
- ioc->name, __LINE__);
- return -EFAULT;
- }
- if ((leapioraid_base_wait_for_doorbell_int(ioc, timeout))) {
- pr_err("%s doorbell handshake int failed (line=%d)\n",
- ioc->name, __LINE__);
- return -EFAULT;
- }
- reply[0] =
- (u16) (ioc->base_readl(&ioc->chip->Doorbell,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY)
- & LEAPIORAID_DOORBELL_DATA_MASK);
- writel(0, &ioc->chip->HostInterruptStatus);
- if ((leapioraid_base_wait_for_doorbell_int(ioc, 5))) {
- pr_err("%s doorbell handshake int failed (line=%d)\n",
- ioc->name, __LINE__);
- return -EFAULT;
- }
- reply[1] =
- (u16) (ioc->base_readl(&ioc->chip->Doorbell,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY)
- & LEAPIORAID_DOORBELL_DATA_MASK);
- writel(0, &ioc->chip->HostInterruptStatus);
- for (i = 2; i < default_reply->MsgLength * 2; i++) {
- if ((leapioraid_base_wait_for_doorbell_int(ioc, 5))) {
- pr_err("%s doorbell handshake int failed (line=%d)\n",
- ioc->name, __LINE__);
- return -EFAULT;
- }
- if (i >= reply_bytes / 2)
- ioc->base_readl(&ioc->chip->Doorbell,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY);
- else
- reply[i] =
- (u16) (ioc->base_readl(&ioc->chip->Doorbell,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY)
- & LEAPIORAID_DOORBELL_DATA_MASK);
- writel(0, &ioc->chip->HostInterruptStatus);
- }
- if (leapioraid_base_wait_for_doorbell_int(ioc, 5)) {
- pr_err("%s doorbell handshake int failed (line=%d)\n",
- ioc->name, __LINE__);
- return -EFAULT;
- }
- if (leapioraid_base_wait_for_doorbell_not_used(ioc, 5) != 0) {
- dhsprintk(ioc,
- pr_info("%s doorbell is in use (line=%d)\n",
- ioc->name, __LINE__));
- }
- writel(0, &ioc->chip->HostInterruptStatus);
- if (ioc->logging_level & LEAPIORAID_DEBUG_INIT) {
- mfp = (__le32 *) reply;
- pr_info("%s \toffset:data\n", ioc->name);
- for (i = 0; i < reply_bytes / 4; i++)
- pr_info("%s \t[0x%02x]:%08x\n",
- ioc->name, i * 4, le32_to_cpu(mfp[i]));
- }
- return 0;
-}
-
-static int
-leapioraid_base_wait_on_iocstate(
- struct LEAPIORAID_ADAPTER *ioc, u32 ioc_state,
- int timeout)
-{
- u32 count, cntdn;
- u32 current_state;
-
- count = 0;
- cntdn = 1000 * timeout;
- do {
- current_state = leapioraid_base_get_iocstate(ioc, 1);
- if (current_state == ioc_state)
- return 0;
- if (count && current_state == LEAPIORAID_IOC_STATE_FAULT)
- break;
- usleep_range(1000, 1100);
- count++;
- } while (--cntdn);
- return current_state;
-}
-
-static inline void
-leapioraid_base_dump_reg_set(struct LEAPIORAID_ADAPTER *ioc)
-{
- unsigned int i, sz = 256;
- u32 __iomem *reg = (u32 __iomem *) ioc->chip;
-
- pr_info("%s System Register set:\n", ioc->name);
- for (i = 0; i < (sz / sizeof(u32)); i++)
-		pr_info("%08x: %08x\n", (i * 4), readl(&reg[i]));
-}
-
-int
-leapioraid_base_unlock_and_get_host_diagnostic(
- struct LEAPIORAID_ADAPTER *ioc,
- u32 *host_diagnostic)
-{
- u32 count;
-
- *host_diagnostic = 0;
- count = 0;
- do {
- drsprintk(ioc, pr_info("%s write magic sequence\n", ioc->name));
- writel(0x0, &ioc->chip->WriteSequence);
- writel(0xF, &ioc->chip->WriteSequence);
- writel(0x4, &ioc->chip->WriteSequence);
- writel(0xB, &ioc->chip->WriteSequence);
- writel(0x2, &ioc->chip->WriteSequence);
- writel(0x7, &ioc->chip->WriteSequence);
- writel(0xD, &ioc->chip->WriteSequence);
- msleep(100);
- if (count++ > 20) {
- pr_err("%s Giving up writing magic sequence after 20 retries\n",
- ioc->name);
- leapioraid_base_dump_reg_set(ioc);
- return -EFAULT;
- }
- *host_diagnostic =
- ioc->base_readl(&ioc->chip->HostDiagnostic,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY);
- drsprintk(ioc, pr_info(
- "%s wrote magic sequence: cnt(%d), host_diagnostic(0x%08x)\n",
- ioc->name, count, *host_diagnostic));
- } while ((*host_diagnostic & 0x00000080) == 0);
- return 0;
-}
-
-void
-leapioraid_base_lock_host_diagnostic(struct LEAPIORAID_ADAPTER *ioc)
-{
- drsprintk(ioc, pr_info("%s disable writes to the diagnostic register\n",
- ioc->name));
- writel(0x0, &ioc->chip->WriteSequence);
-}
-
-static int
-leapioraid_base_diag_reset(struct LEAPIORAID_ADAPTER *ioc)
-{
- u32 host_diagnostic;
- u32 ioc_state;
- u32 count;
- u32 hcb_size;
-
- pr_info("%s sending diag reset !!\n", ioc->name);
- drsprintk(ioc,
- pr_info("%s Locking pci cfg space access\n",
- ioc->name));
- pci_cfg_access_lock(ioc->pdev);
- drsprintk(ioc, pr_info("%s clear interrupts\n",
- ioc->name));
- mutex_lock(&ioc->hostdiag_unlock_mutex);
- if (leapioraid_base_unlock_and_get_host_diagnostic
- (ioc, &host_diagnostic)) {
- mutex_unlock(&ioc->hostdiag_unlock_mutex);
- goto out;
- }
- hcb_size =
- ioc->base_readl(&ioc->chip->HCBSize, LEAPIORAID_READL_RETRY_COUNT_OF_THREE);
- drsprintk(ioc,
- pr_info("%s diag reset: issued\n",
- ioc->name));
- writel(host_diagnostic | LEAPIORAID_DIAG_RESET_ADAPTER,
- &ioc->chip->HostDiagnostic);
-#if defined(DISABLE_RESET_SUPPORT)
- count = 0;
- do {
- msleep(50);
- host_diagnostic =
- ioc->base_readl(&ioc->chip->HostDiagnostic,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY);
- if (host_diagnostic == 0xFFFFFFFF)
- goto out;
- else if (count++ >= 300)
- goto out;
- if (!(count % 20))
- pr_info("waiting on diag reset bit to clear, count = %d\n",
- (count / 20));
- } while (host_diagnostic & LEAPIORAID_DIAG_RESET_ADAPTER);
-#else
- msleep(50);
- for (count = 0; count < (300000 / 256); count++) {
- host_diagnostic =
- ioc->base_readl(&ioc->chip->HostDiagnostic,
- LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY);
- if (host_diagnostic == 0xFFFFFFFF) {
- pr_err("%s Invalid host diagnostic register value\n",
- ioc->name);
- leapioraid_base_dump_reg_set(ioc);
- goto out;
- }
- if (!(host_diagnostic & LEAPIORAID_DIAG_RESET_ADAPTER))
- break;
-
- msleep(256);
- }
-#endif
- if (host_diagnostic & 0x00000100) {
- drsprintk(ioc, pr_info(
- "%s restart IOC assuming HCB Address points to good F/W\n",
- ioc->name));
- host_diagnostic &= ~0x00001800;
- host_diagnostic |= 0x00000800;
- writel(host_diagnostic, &ioc->chip->HostDiagnostic);
- drsprintk(ioc, pr_err(
- "%s re-enable the HCDW\n", ioc->name));
- writel(hcb_size | 0x00000001,
- &ioc->chip->HCBSize);
- }
- drsprintk(ioc, pr_info("%s restart the adapter\n",
- ioc->name));
- writel(host_diagnostic & ~0x00000002,
- &ioc->chip->HostDiagnostic);
- leapioraid_base_lock_host_diagnostic(ioc);
- mutex_unlock(&ioc->hostdiag_unlock_mutex);
- drsprintk(ioc, pr_info("%s Wait for FW to go to the READY state\n",
- ioc->name));
- ioc_state =
- leapioraid_base_wait_on_iocstate(
- ioc, LEAPIORAID_IOC_STATE_READY, 20);
- if (ioc_state) {
- pr_err("%s %s: failed going to ready state (ioc_state=0x%x)\n",
- ioc->name, __func__, ioc_state);
- leapioraid_base_dump_reg_set(ioc);
- goto out;
- }
- drsprintk(ioc, pr_err(
- "%s Unlocking pci cfg space access\n", ioc->name));
- pci_cfg_access_unlock(ioc->pdev);
- if (ioc->open_pcie_trace)
- leapioraid_base_trace_log_init(ioc);
- pr_info("%s diag reset: SUCCESS\n", ioc->name);
- return 0;
-out:
- drsprintk(ioc, pr_err(
- "%s Unlocking pci cfg space access\n", ioc->name));
- pci_cfg_access_unlock(ioc->pdev);
- pr_err("%s diag reset: FAILED\n", ioc->name);
- mutex_unlock(&ioc->hostdiag_unlock_mutex);
- return -EFAULT;
-}
-
-static int
-leapioraid_base_wait_for_iocstate(
- struct LEAPIORAID_ADAPTER *ioc, int timeout)
-{
- u32 ioc_state;
- int rc;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- if (!leapioraid_base_pci_device_is_available(ioc))
- return 0;
- ioc_state = leapioraid_base_get_iocstate(ioc, 0);
- dhsprintk(ioc, pr_info("%s %s: ioc_state(0x%08x)\n",
- ioc->name, __func__, ioc_state));
- if (((ioc_state & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_READY) ||
- (ioc_state & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_OPERATIONAL)
- return 0;
- if (ioc_state & LEAPIORAID_DOORBELL_USED) {
- dhsprintk(ioc,
- pr_info("%s unexpected doorbell active!\n", ioc->name));
- goto issue_diag_reset;
- }
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_FAULT) {
- leapioraid_print_fault_code(ioc, ioc_state &
- LEAPIORAID_DOORBELL_DATA_MASK);
- goto issue_diag_reset;
- } else if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP) {
- pr_err("%s %s: Skipping the diag reset here. (ioc_state=0x%x)\n",
- ioc->name, __func__, ioc_state);
- return -EFAULT;
- }
- ioc_state =
- leapioraid_base_wait_on_iocstate(ioc, LEAPIORAID_IOC_STATE_READY,
- timeout);
- if (ioc_state) {
- pr_err("%s %s: failed going to ready state (ioc_state=0x%x)\n",
- ioc->name, __func__, ioc_state);
- return -EFAULT;
- }
-issue_diag_reset:
- rc = leapioraid_base_diag_reset(ioc);
- return rc;
-}
-
-int
-leapioraid_base_check_for_fault_and_issue_reset(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- u32 ioc_state;
- int rc = -EFAULT;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- if (!leapioraid_base_pci_device_is_available(ioc))
- return rc;
- ioc_state = leapioraid_base_get_iocstate(ioc, 0);
- dhsprintk(ioc, pr_info("%s %s: ioc_state(0x%08x)\n",
- ioc->name, __func__, ioc_state));
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_FAULT) {
- leapioraid_print_fault_code(ioc, ioc_state &
- LEAPIORAID_DOORBELL_DATA_MASK);
- leapioraid_base_mask_interrupts(ioc);
- rc = leapioraid_base_diag_reset(ioc);
- } else if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP) {
- leapioraid_base_coredump_info(ioc,
- ioc_state &
- LEAPIORAID_DOORBELL_DATA_MASK);
- leapioraid_base_wait_for_coredump_completion(ioc, __func__);
- leapioraid_base_mask_interrupts(ioc);
- rc = leapioraid_base_diag_reset(ioc);
- }
- return rc;
-}
-
-static int
-leapioraid_base_get_ioc_facts(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidIOCFactsReq_t mpi_request;
- struct LeapioraidIOCFactsRep_t mpi_reply;
- struct leapioraid_facts *facts;
- int mpi_reply_sz, mpi_request_sz, r;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- r = leapioraid_base_wait_for_iocstate(ioc, 10);
- if (r) {
- pr_err(
- "%s %s: failed getting to correct state\n", ioc->name,
- __func__);
- return r;
- }
- mpi_reply_sz = sizeof(struct LeapioraidIOCFactsRep_t);
- mpi_request_sz = sizeof(struct LeapioraidIOCFactsReq_t);
- memset(&mpi_request, 0, mpi_request_sz);
- mpi_request.Function = LEAPIORAID_FUNC_IOC_FACTS;
- r = leapioraid_base_handshake_req_reply_wait(ioc, mpi_request_sz,
- (u32 *) &mpi_request,
- mpi_reply_sz,
- (u16 *) &mpi_reply, 5);
- if (r != 0) {
- pr_err("%s %s: handshake failed (r=%d)\n",
- ioc->name, __func__, r);
- return r;
- }
- facts = &ioc->facts;
- memset(facts, 0, sizeof(struct leapioraid_facts));
- facts->MsgVersion = le16_to_cpu(mpi_reply.MsgVersion);
- facts->HeaderVersion = le16_to_cpu(mpi_reply.HeaderVersion);
- facts->IOCNumber = mpi_reply.IOCNumber;
- pr_info("%s IOC Number : %d\n", ioc->name, facts->IOCNumber);
- ioc->IOCNumber = facts->IOCNumber;
- facts->VP_ID = mpi_reply.VP_ID;
- facts->VF_ID = mpi_reply.VF_ID;
- facts->IOCExceptions = le16_to_cpu(mpi_reply.IOCExceptions);
- facts->MaxChainDepth = mpi_reply.MaxChainDepth;
- facts->WhoInit = mpi_reply.WhoInit;
- facts->NumberOfPorts = mpi_reply.NumberOfPorts;
- facts->MaxMSIxVectors = mpi_reply.MaxMSIxVectors;
- if (ioc->msix_enable && (facts->MaxMSIxVectors <= 16))
- ioc->combined_reply_queue = 0;
- facts->RequestCredit = le16_to_cpu(mpi_reply.RequestCredit);
- facts->MaxReplyDescriptorPostQueueDepth =
- le16_to_cpu(mpi_reply.MaxReplyDescriptorPostQueueDepth);
- facts->ProductID = le16_to_cpu(mpi_reply.ProductID);
- facts->IOCCapabilities = le32_to_cpu(mpi_reply.IOCCapabilities);
- if ((facts->IOCCapabilities & LEAPIORAID_IOCFACTS_CAPABILITY_INTEGRATED_RAID))
- ioc->ir_firmware = 1;
- if ((facts->IOCCapabilities & LEAPIORAID_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE)
- && (!reset_devices))
- ioc->rdpq_array_capable = 1;
- else
- ioc->rdpq_array_capable = 0;
- if (facts->IOCCapabilities & LEAPIORAID_IOCFACTS_CAPABILITY_ATOMIC_REQ)
- ioc->atomic_desc_capable = 1;
- else
- ioc->atomic_desc_capable = 0;
-
- facts->FWVersion.Word = le32_to_cpu(mpi_reply.FWVersion.Word);
- facts->IOCRequestFrameSize = le16_to_cpu(mpi_reply.IOCRequestFrameSize);
- facts->IOCMaxChainSegmentSize =
- le16_to_cpu(mpi_reply.IOCMaxChainSegmentSize);
- facts->MaxInitiators = le16_to_cpu(mpi_reply.MaxInitiators);
- facts->MaxTargets = le16_to_cpu(mpi_reply.MaxTargets);
- ioc->shost->max_id = -1;
- facts->MaxSasExpanders = le16_to_cpu(mpi_reply.MaxSasExpanders);
- facts->MaxEnclosures = le16_to_cpu(mpi_reply.MaxEnclosures);
- facts->ProtocolFlags = le16_to_cpu(mpi_reply.ProtocolFlags);
- facts->HighPriorityCredit = le16_to_cpu(mpi_reply.HighPriorityCredit);
- facts->ReplyFrameSize = mpi_reply.ReplyFrameSize;
- facts->MaxDevHandle = le16_to_cpu(mpi_reply.MaxDevHandle);
- facts->CurrentHostPageSize = mpi_reply.CurrentHostPageSize;
- ioc->page_size = 1 << facts->CurrentHostPageSize;
- if (ioc->page_size == 1) {
- pr_err(
- "%s CurrentHostPageSize is 0: Setting host page to 4k\n",
- ioc->name);
- ioc->page_size = 1 << 12;
- }
- dinitprintk(ioc,
- pr_info("%s CurrentHostPageSize(%d)\n",
- ioc->name, facts->CurrentHostPageSize));
- dinitprintk(ioc,
- pr_info("%s hba queue depth(%d), max chains per io(%d)\n",
- ioc->name, facts->RequestCredit, facts->MaxChainDepth));
- dinitprintk(ioc,
- pr_info("%s request frame size(%d), reply frame size(%d)\n",
- ioc->name,
- facts->IOCRequestFrameSize * 4,
- facts->ReplyFrameSize * 4));
- return 0;
-}
-
-static void
-leapioraid_base_unmap_resources(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct pci_dev *pdev = ioc->pdev;
-
- pr_info("%s %s\n", ioc->name, __func__);
- leapioraid_base_free_irq(ioc);
- leapioraid_base_disable_msix(ioc);
- kfree(ioc->replyPostRegisterIndex);
- mutex_lock(&ioc->pci_access_mutex);
- if (ioc->chip_phys) {
- iounmap(ioc->chip);
- ioc->chip_phys = 0;
- }
-
- pci_release_selected_regions(ioc->pdev, ioc->bars);
- pci_disable_device(pdev);
- mutex_unlock(&ioc->pci_access_mutex);
-}
-
-int
-leapioraid_base_map_resources(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct pci_dev *pdev = ioc->pdev;
- u32 memap_sz;
- u32 pio_sz;
- int i, r = 0, rc;
- u64 pio_chip = 0;
- phys_addr_t chip_phys = 0;
- struct leapioraid_adapter_reply_queue *reply_q;
- int iopoll_q_count = 0;
-
- dinitprintk(ioc, pr_info("%s %s\n",
- ioc->name, __func__));
-
- ioc->bars = pci_select_bars(pdev, IORESOURCE_MEM);
- if (pci_enable_device_mem(pdev)) {
- pr_warn("%s pci_enable_device_mem: failed\n", ioc->name);
- return -ENODEV;
- }
- if (pci_request_selected_regions(pdev, ioc->bars, ioc->driver_name)) {
- pr_warn("%s pci_request_selected_regions: failed\n", ioc->name);
- r = -ENODEV;
- goto out_fail;
- }
-
- pci_set_master(pdev);
-
- if (leapioraid_base_config_dma_addressing(ioc, pdev) != 0) {
- pr_warn("%s no suitable DMA mask for %s\n",
- ioc->name, pci_name(pdev));
- r = -ENODEV;
- goto out_fail;
- }
- for (i = 0, memap_sz = 0, pio_sz = 0; i < DEVICE_COUNT_RESOURCE; i++) {
- if (pci_resource_flags(pdev, i) & IORESOURCE_IO) {
- if (pio_sz)
- continue;
- pio_chip = (u64) pci_resource_start(pdev, i);
- pio_sz = pci_resource_len(pdev, i);
- } else if (pci_resource_flags(pdev, i) & IORESOURCE_MEM) {
- if (memap_sz)
- continue;
- ioc->chip_phys = pci_resource_start(pdev, i);
- chip_phys = ioc->chip_phys;
- memap_sz = pci_resource_len(pdev, i);
- ioc->chip = ioremap(ioc->chip_phys, memap_sz);
- if (ioc->chip == NULL) {
- pr_err("%s unable to map adapter memory!\n",
- ioc->name);
- r = -EINVAL;
- goto out_fail;
- }
- }
- }
- leapioraid_base_mask_interrupts(ioc);
- r = leapioraid_base_get_ioc_facts(ioc);
- if (r) {
- rc = leapioraid_base_check_for_fault_and_issue_reset(ioc);
- if (rc || (leapioraid_base_get_ioc_facts(ioc)))
- goto out_fail;
- }
- if (!ioc->rdpq_array_enable_assigned) {
- ioc->rdpq_array_enable = ioc->rdpq_array_capable;
- ioc->rdpq_array_enable_assigned = 1;
- }
- r = leapioraid_base_enable_msix(ioc);
- if (r)
- goto out_fail;
- iopoll_q_count = ioc->reply_queue_count - ioc->iopoll_q_start_index;
- for (i = 0; i < iopoll_q_count; i++) {
- atomic_set(&ioc->blk_mq_poll_queues[i].busy, 0);
- atomic_set(&ioc->blk_mq_poll_queues[i].pause, 0);
- }
- if (!ioc->is_driver_loading)
- leapioraid_base_init_irqpolls(ioc);
- if (ioc->combined_reply_queue) {
- ioc->replyPostRegisterIndex = kcalloc(ioc->nc_reply_index_count,
- sizeof(resource_size_t *),
- GFP_KERNEL);
- if (!ioc->replyPostRegisterIndex) {
- pr_err("%s allocation for reply Post Register Index failed!!!\n",
- ioc->name);
- r = -ENOMEM;
- goto out_fail;
- }
-
- for (i = 0; i < ioc->nc_reply_index_count; i++) {
- ioc->replyPostRegisterIndex[i] = (resource_size_t *)
- ((u8 *) &ioc->chip->Doorbell +
- 0x0000030C +
- (i * 0x10));
- }
- }
- list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
- if (reply_q->msix_index >= ioc->iopoll_q_start_index) {
- pr_info("%s enabled: index: %d\n",
- reply_q->name, reply_q->msix_index);
- continue;
- }
- pr_info("%s %s: IRQ %d\n",
- reply_q->name,
- ((ioc->msix_enable) ? "PCI-MSI-X enabled" :
- "IO-APIC enabled"), pci_irq_vector(ioc->pdev,
- reply_q->msix_index));
- }
- pr_info("%s iomem(%pap), mapped(0x%p), size(%d)\n",
- ioc->name, &chip_phys, ioc->chip, memap_sz);
- pr_info("%s ioport(0x%016llx), size(%d)\n",
- ioc->name, (unsigned long long)pio_chip, pio_sz);
-
- pci_save_state(pdev);
- return 0;
-out_fail:
- leapioraid_base_unmap_resources(ioc);
- return r;
-}
-
-void *leapioraid_base_get_msg_frame(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid)
-{
- return (void *)(ioc->request + (smid * ioc->request_sz));
-}
-
-void *leapioraid_base_get_sense_buffer(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid)
-{
- return (void *)(ioc->sense + ((smid - 1) * SCSI_SENSE_BUFFERSIZE));
-}
-
-__le32
-leapioraid_base_get_sense_buffer_dma(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid)
-{
- return cpu_to_le32(ioc->sense_dma + ((smid - 1) *
- SCSI_SENSE_BUFFERSIZE));
-}
-
-__le64
-leapioraid_base_get_sense_buffer_dma_64(struct LEAPIORAID_ADAPTER *ioc,
- u16 smid)
-{
- return cpu_to_le64(ioc->sense_dma + ((smid - 1) *
- SCSI_SENSE_BUFFERSIZE));
-}
-
-void *leapioraid_base_get_reply_virt_addr(struct LEAPIORAID_ADAPTER *ioc,
- u32 phys_addr)
-{
- if (!phys_addr)
- return NULL;
- return ioc->reply + (phys_addr - (u32) ioc->reply_dma);
-}
-
-static inline u8
-leapioraid_base_get_msix_index(
- struct LEAPIORAID_ADAPTER *ioc, struct scsi_cmnd *scmd)
-{
- if (ioc->msix_load_balance)
- return ioc->reply_queue_count ?
- leapioraid_base_mod64(atomic64_add_return(1, &ioc->total_io_cnt),
- ioc->reply_queue_count) : 0;
- if (scmd && ioc->shost->nr_hw_queues > 1) {
- u32 tag = blk_mq_unique_tag(scsi_cmd_to_rq(scmd));
-
- return blk_mq_unique_tag_to_hwq(tag) + ioc->high_iops_queues;
- }
- return ioc->cpu_msix_table[raw_smp_processor_id()];
-}
-
-inline unsigned long
-leapioraid_base_sdev_nr_inflight_request(struct LEAPIORAID_ADAPTER *ioc,
- struct scsi_cmnd *scmd)
-{
- return scsi_device_busy(scmd->device);
-}
-
-static inline u8
-leapioraid_base_get_high_iops_msix_index(struct LEAPIORAID_ADAPTER *ioc,
- struct scsi_cmnd *scmd)
-{
- if (leapioraid_base_sdev_nr_inflight_request(ioc, scmd) >
- LEAPIORAID_DEVICE_HIGH_IOPS_DEPTH)
- return
- leapioraid_base_mod64((atomic64_add_return
- (1,
- &ioc->high_iops_outstanding) /
- LEAPIORAID_HIGH_IOPS_BATCH_COUNT),
- LEAPIORAID_HIGH_IOPS_REPLY_QUEUES);
- return leapioraid_base_get_msix_index(ioc, scmd);
-}
-
-u16
-leapioraid_base_get_smid(struct LEAPIORAID_ADAPTER *ioc, u8 cb_idx)
-{
- unsigned long flags;
- struct leapioraid_request_tracker *request;
- u16 smid;
-
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- if (list_empty(&ioc->internal_free_list)) {
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
- pr_err("%s %s: smid not available\n",
- ioc->name, __func__);
- return 0;
- }
- request = list_entry(ioc->internal_free_list.next,
- struct leapioraid_request_tracker, tracker_list);
- request->cb_idx = cb_idx;
- smid = request->smid;
- list_del(&request->tracker_list);
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
- return smid;
-}
-
-u16
-leapioraid_base_get_smid_scsiio(struct LEAPIORAID_ADAPTER *ioc, u8 cb_idx,
- struct scsi_cmnd *scmd)
-{
- struct leapioraid_scsiio_tracker *request;
- u16 smid;
- u32 tag = scsi_cmd_to_rq(scmd)->tag;
- u32 unique_tag;
-
- unique_tag = blk_mq_unique_tag(scsi_cmd_to_rq(scmd));
- tag = blk_mq_unique_tag_to_tag(unique_tag);
- ioc->io_queue_num[tag] = blk_mq_unique_tag_to_hwq(unique_tag);
- request = leapioraid_base_scsi_cmd_priv(scmd);
- smid = tag + 1;
- request->cb_idx = cb_idx;
- request->smid = smid;
- request->scmd = scmd;
- return smid;
-}
-
-u16
-leapioraid_base_get_smid_hpr(struct LEAPIORAID_ADAPTER *ioc, u8 cb_idx)
-{
- unsigned long flags;
- struct leapioraid_request_tracker *request;
- u16 smid;
-
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- if (list_empty(&ioc->hpr_free_list)) {
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
- return 0;
- }
- request = list_entry(ioc->hpr_free_list.next,
- struct leapioraid_request_tracker, tracker_list);
- request->cb_idx = cb_idx;
- smid = request->smid;
- list_del(&request->tracker_list);
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
- return smid;
-}
-
-static void
-leapioraid_base_recovery_check(struct LEAPIORAID_ADAPTER *ioc)
-{
- if (ioc->shost_recovery && ioc->pending_io_count) {
- if (ioc->pending_io_count == 1)
- wake_up(&ioc->reset_wq);
- ioc->pending_io_count--;
- }
-}
-
-void
-leapioraid_base_clear_st(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_scsiio_tracker *st)
-{
- if (!st)
- return;
- if (WARN_ON(st->smid == 0))
- return;
- st->cb_idx = 0xFF;
- st->direct_io = 0;
- st->scmd = NULL;
- atomic_set(&ioc->chain_lookup[st->smid - 1].chain_offset, 0);
-}
-
-void
-leapioraid_base_free_smid(struct LEAPIORAID_ADAPTER *ioc, u16 smid)
-{
- unsigned long flags;
- int i;
- struct leapioraid_scsiio_tracker *st;
- void *request;
-
- if (smid < ioc->hi_priority_smid) {
- st = leapioraid_get_st_from_smid(ioc, smid);
- if (!st) {
- leapioraid_base_recovery_check(ioc);
- return;
- }
- request = leapioraid_base_get_msg_frame(ioc, smid);
- memset(request, 0, ioc->request_sz);
- leapioraid_base_clear_st(ioc, st);
- leapioraid_base_recovery_check(ioc);
- ioc->io_queue_num[smid - 1] = 0xFFFF;
- return;
- }
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- if (smid < ioc->internal_smid) {
- i = smid - ioc->hi_priority_smid;
- ioc->hpr_lookup[i].cb_idx = 0xFF;
- list_add(&ioc->hpr_lookup[i].tracker_list, &ioc->hpr_free_list);
- } else if (smid <= ioc->hba_queue_depth) {
- i = smid - ioc->internal_smid;
- ioc->internal_lookup[i].cb_idx = 0xFF;
- list_add(&ioc->internal_lookup[i].tracker_list,
- &ioc->internal_free_list);
- }
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
-}
-
-#if defined(writeq) && defined(CONFIG_64BIT)
-static inline void
-leapioraid_base_writeq(
- __u64 b, void __iomem *addr, spinlock_t *writeq_lock)
-{
- writeq(b, addr);
-}
-#else
-static inline void
-leapioraid_base_writeq(
- __u64 b, void __iomem *addr, spinlock_t *writeq_lock)
-{
- unsigned long flags;
- __u64 data_out = b;
-
- spin_lock_irqsave(writeq_lock, flags);
- writel((u32) (data_out), addr);
- writel((u32) (data_out >> 32), (addr + 4));
- spin_unlock_irqrestore(writeq_lock, flags);
-}
-#endif
-
-static u8
-leapioraid_base_set_and_get_msix_index(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid)
-{
- struct leapioraid_scsiio_tracker *st;
-
- st = (smid <
- ioc->hi_priority_smid) ? (leapioraid_get_st_from_smid(ioc,
- smid))
- : (NULL);
- if (st == NULL)
- return leapioraid_base_get_msix_index(ioc, NULL);
- st->msix_io = ioc->get_msix_index_for_smlio(ioc, st->scmd);
- return st->msix_io;
-}
-
-static void
-leapioraid_base_put_smid_scsi_io(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u16 handle)
-{
- union LeapioraidReqDescUnion_t descriptor;
- u64 *request = (u64 *) &descriptor;
-
- descriptor.SCSIIO.RequestFlags = LEAPIORAID_REQ_DESCRIPT_FLAGS_SCSI_IO;
- descriptor.SCSIIO.MSIxIndex
- = leapioraid_base_set_and_get_msix_index(ioc, smid);
- descriptor.SCSIIO.SMID = cpu_to_le16(smid);
- descriptor.SCSIIO.DevHandle = cpu_to_le16(handle);
- descriptor.SCSIIO.LMID = 0;
- leapioraid_base_writeq(*request, &ioc->chip->RequestDescriptorPostLow,
- &ioc->scsi_lookup_lock);
-}
-
-static void
-leapioraid_base_put_smid_fast_path(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u16 handle)
-{
- union LeapioraidReqDescUnion_t descriptor;
- u64 *request = (u64 *) &descriptor;
-
- descriptor.SCSIIO.RequestFlags =
- LEAPIORAID_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO;
- descriptor.SCSIIO.MSIxIndex
- = leapioraid_base_set_and_get_msix_index(ioc, smid);
- descriptor.SCSIIO.SMID = cpu_to_le16(smid);
- descriptor.SCSIIO.DevHandle = cpu_to_le16(handle);
- descriptor.SCSIIO.LMID = 0;
- leapioraid_base_writeq(*request, &ioc->chip->RequestDescriptorPostLow,
- &ioc->scsi_lookup_lock);
-}
-
-static void
-leapioraid_base_put_smid_hi_priority(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u16 msix_task)
-{
- union LeapioraidReqDescUnion_t descriptor;
- u64 *request;
-
- request = (u64 *) &descriptor;
- descriptor.HighPriority.RequestFlags =
- LEAPIORAID_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY;
- descriptor.HighPriority.MSIxIndex = msix_task;
- descriptor.HighPriority.SMID = cpu_to_le16(smid);
- descriptor.HighPriority.LMID = 0;
- descriptor.HighPriority.Reserved1 = 0;
- leapioraid_base_writeq(*request, &ioc->chip->RequestDescriptorPostLow,
- &ioc->scsi_lookup_lock);
-}
-
-static void
-leapioraid_base_put_smid_default(struct LEAPIORAID_ADAPTER *ioc, u16 smid)
-{
- union LeapioraidReqDescUnion_t descriptor;
- u64 *request;
-
- request = (u64 *) &descriptor;
- descriptor.Default.RequestFlags =
- LEAPIORAID_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
- descriptor.Default.MSIxIndex
- = leapioraid_base_set_and_get_msix_index(ioc, smid);
- descriptor.Default.SMID = cpu_to_le16(smid);
- descriptor.Default.LMID = 0;
- descriptor.Default.DescriptorTypeDependent = 0;
- leapioraid_base_writeq(*request, &ioc->chip->RequestDescriptorPostLow,
- &ioc->scsi_lookup_lock);
-}
-
-static void
-leapioraid_base_put_smid_scsi_io_atomic(struct LEAPIORAID_ADAPTER *ioc,
- u16 smid, u16 handle)
-{
- struct LeapioraidAtomicReqDesc_t descriptor;
- u32 *request = (u32 *) &descriptor;
-
- descriptor.RequestFlags = LEAPIORAID_REQ_DESCRIPT_FLAGS_SCSI_IO;
- descriptor.MSIxIndex = leapioraid_base_set_and_get_msix_index(ioc, smid);
- descriptor.SMID = cpu_to_le16(smid);
- writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
-}
-
-static void
-leapioraid_base_put_smid_fast_path_atomic(struct LEAPIORAID_ADAPTER *ioc,
- u16 smid, u16 handle)
-{
- struct LeapioraidAtomicReqDesc_t descriptor;
- u32 *request = (u32 *) &descriptor;
-
- descriptor.RequestFlags = LEAPIORAID_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO;
- descriptor.MSIxIndex = leapioraid_base_set_and_get_msix_index(ioc, smid);
- descriptor.SMID = cpu_to_le16(smid);
- writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
-}
-
-static void
-leapioraid_base_put_smid_hi_priority_atomic(struct LEAPIORAID_ADAPTER *ioc,
- u16 smid, u16 msix_task)
-{
- struct LeapioraidAtomicReqDesc_t descriptor;
- u32 *request = (u32 *) &descriptor;
-
- descriptor.RequestFlags = LEAPIORAID_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY;
- descriptor.MSIxIndex = msix_task;
- descriptor.SMID = cpu_to_le16(smid);
- writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
-}
-
-static void
-leapioraid_base_put_smid_default_atomic(struct LEAPIORAID_ADAPTER *ioc,
- u16 smid)
-{
- struct LeapioraidAtomicReqDesc_t descriptor;
- u32 *request = (u32 *)(&descriptor);
-
- descriptor.RequestFlags = LEAPIORAID_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
- descriptor.MSIxIndex = leapioraid_base_set_and_get_msix_index(ioc, smid);
- descriptor.SMID = cpu_to_le16(smid);
- writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost);
-}
-
-static int
-leapioraid_base_display_fwpkg_version(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidFWImgHeader_t *fw_img_hdr;
- struct LeapioraidComptImgHeader_t *cmp_img_hdr;
- struct LeapioraidFWUploadReq_t *mpi_request;
- struct LeapioraidFWUploadRep_t mpi_reply;
- int r = 0, issue_diag_reset = 0;
- u32 package_version = 0;
- void *fwpkg_data = NULL;
- dma_addr_t fwpkg_data_dma;
- u16 smid, ioc_status;
- size_t data_length;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- if (ioc->base_cmds.status & LEAPIORAID_CMD_PENDING) {
- pr_err("%s %s: internal command already in use\n", ioc->name,
- __func__);
- return -EAGAIN;
- }
- data_length = sizeof(struct LeapioraidFWImgHeader_t);
- fwpkg_data = dma_alloc_coherent(&ioc->pdev->dev, data_length,
- &fwpkg_data_dma, GFP_ATOMIC);
- if (!fwpkg_data)
- return -ENOMEM;
-
- smid = leapioraid_base_get_smid(ioc, ioc->base_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- r = -EAGAIN;
- goto out;
- }
- ioc->base_cmds.status = LEAPIORAID_CMD_PENDING;
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->base_cmds.smid = smid;
- memset(mpi_request, 0, sizeof(struct LeapioraidFWUploadReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_FW_UPLOAD;
- mpi_request->ImageType = 0x01;
- mpi_request->ImageSize = data_length;
- ioc->build_sg(ioc, &mpi_request->SGL, 0, 0, fwpkg_data_dma,
- data_length);
- init_completion(&ioc->base_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->base_cmds.done, 15 * HZ);
- dinitprintk(ioc, pr_info("%s %s: complete\n",
- ioc->name, __func__));
- if (!(ioc->base_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- pr_err("%s %s: timeout\n",
- ioc->name, __func__);
- leapioraid_debug_dump_mf(mpi_request,
- sizeof(struct LeapioraidFWUploadReq_t) / 4);
- issue_diag_reset = 1;
- } else {
- memset(&mpi_reply, 0, sizeof(struct LeapioraidFWUploadRep_t));
- if (ioc->base_cmds.status & LEAPIORAID_CMD_REPLY_VALID) {
- memcpy(&mpi_reply, ioc->base_cmds.reply,
- sizeof(struct LeapioraidFWUploadRep_t));
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status == LEAPIORAID_IOCSTATUS_SUCCESS) {
- fw_img_hdr =
- (struct LeapioraidFWImgHeader_t *) fwpkg_data;
- if (le32_to_cpu(fw_img_hdr->Signature) ==
- 0xEB000042) {
- cmp_img_hdr =
- (struct LeapioraidComptImgHeader_t
- *) (fwpkg_data);
- package_version =
- le32_to_cpu(cmp_img_hdr->ApplicationSpecific);
- } else
- package_version =
- le32_to_cpu(fw_img_hdr->PackageVersion.Word);
- if (package_version)
- pr_err(
- "%s FW Package Version(%02d.%02d.%02d.%02d)\n",
- ioc->name,
- ((package_version) & 0xFF000000)
- >> 24,
- ((package_version) & 0x00FF0000)
- >> 16,
- ((package_version) & 0x0000FF00)
- >> 8,
- (package_version) & 0x000000FF);
- } else {
- leapioraid_debug_dump_mf(&mpi_reply,
- sizeof(struct LeapioraidFWUploadRep_t) /
- 4);
- }
- }
- }
- ioc->base_cmds.status = LEAPIORAID_CMD_NOT_USED;
-out:
- if (fwpkg_data)
- dma_free_coherent(&ioc->pdev->dev, data_length, fwpkg_data,
- fwpkg_data_dma);
- if (issue_diag_reset) {
- if (ioc->drv_internal_flags & LEAPIORAID_DRV_INERNAL_FIRST_PE_ISSUED)
- return -EFAULT;
- if (leapioraid_base_check_for_fault_and_issue_reset(ioc))
- return -EFAULT;
- r = -EAGAIN;
- }
- return r;
-}
-
-static void
-leapioraid_base_display_ioc_capabilities(struct LEAPIORAID_ADAPTER *ioc)
-{
- int i = 0;
- char desc[17] = { 0 };
- u8 revision;
- u32 iounit_pg1_flags;
-
- pci_read_config_byte(ioc->pdev, PCI_CLASS_REVISION, &revision);
- strscpy(desc, ioc->manu_pg0.ChipName, sizeof(desc));
- pr_info("%s %s: FWVersion(%02d.%02d.%02d.%02d), ChipRevision(0x%02x)\n",
- ioc->name, desc,
- (ioc->facts.FWVersion.Word & 0xFF000000) >> 24,
- (ioc->facts.FWVersion.Word & 0x00FF0000) >> 16,
- (ioc->facts.FWVersion.Word & 0x0000FF00) >> 8,
- ioc->facts.FWVersion.Word & 0x000000FF, revision);
- pr_info("%s Protocol=(", ioc->name);
- if (ioc->facts.ProtocolFlags & LEAPIORAID_IOCFACTS_PROTOCOL_SCSI_INITIATOR) {
- pr_info("Initiator");
- i++;
- }
- if (ioc->facts.ProtocolFlags & LEAPIORAID_IOCFACTS_PROTOCOL_SCSI_TARGET) {
- pr_info("%sTarget", i ? "," : "");
- i++;
- }
- i = 0;
- pr_info("), ");
- pr_info("Capabilities=(");
- if ((!ioc->warpdrive_msg) && (ioc->facts.IOCCapabilities &
- LEAPIORAID_IOCFACTS_CAPABILITY_INTEGRATED_RAID)) {
- pr_info("Raid");
- i++;
- }
- if (ioc->facts.IOCCapabilities & LEAPIORAID_IOCFACTS_CAPABILITY_TLR) {
- pr_info("%sTLR", i ? "," : "");
- i++;
- }
- if (ioc->facts.IOCCapabilities & LEAPIORAID_IOCFACTS_CAPABILITY_MULTICAST) {
- pr_info("%sMulticast", i ? "," : "");
- i++;
- }
- if (ioc->facts.IOCCapabilities &
- LEAPIORAID_IOCFACTS_CAPABILITY_BIDIRECTIONAL_TARGET) {
- pr_info("%sBIDI Target", i ? "," : "");
- i++;
- }
- if (ioc->facts.IOCCapabilities & LEAPIORAID_IOCFACTS_CAPABILITY_EEDP) {
- pr_info("%sEEDP", i ? "," : "");
- i++;
- }
- if (ioc->facts.IOCCapabilities &
- LEAPIORAID_IOCFACTS_CAPABILITY_TASK_SET_FULL_HANDLING) {
- pr_info("%sTask Set Full", i ? "," : "");
- i++;
- }
- iounit_pg1_flags = le32_to_cpu(ioc->iounit_pg1.Flags);
- if (!(iounit_pg1_flags & LEAPIORAID_IOUNITPAGE1_NATIVE_COMMAND_Q_DISABLE)) {
- pr_info("%sNCQ", i ? "," : "");
- i++;
- }
- pr_info(")\n");
-}
-
-static int
-leapioraid_base_update_ioc_page1_inlinewith_perf_mode(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidIOCP1_t ioc_pg1;
- struct LeapioraidCfgRep_t mpi_reply;
- int rc;
-
- rc = leapioraid_config_get_ioc_pg1(ioc, &mpi_reply, &ioc->ioc_pg1_copy);
- if (rc)
- return rc;
- memcpy(&ioc_pg1, &ioc->ioc_pg1_copy, sizeof(struct LeapioraidIOCP1_t));
- switch (perf_mode) {
- case LEAPIORAID_PERF_MODE_DEFAULT:
- case LEAPIORAID_PERF_MODE_BALANCED:
- if (ioc->high_iops_queues) {
- pr_err(
- "%s Enable int coalescing only for first %d reply queues\n",
- ioc->name, LEAPIORAID_HIGH_IOPS_REPLY_QUEUES);
- ioc_pg1.ProductSpecific = cpu_to_le32(0x80000000 |
- ((1 <<
- LEAPIORAID_HIGH_IOPS_REPLY_QUEUES
- / 8) - 1));
- rc = leapioraid_config_set_ioc_pg1(ioc, &mpi_reply,
- &ioc_pg1);
- if (rc)
- return rc;
- pr_err("%s performance mode: balanced\n", ioc->name);
- return 0;
- }
- fallthrough;
- case LEAPIORAID_PERF_MODE_LATENCY:
- ioc_pg1.CoalescingTimeout = cpu_to_le32(0xa);
- ioc_pg1.Flags |= cpu_to_le32(0x00000001);
- ioc_pg1.ProductSpecific = 0;
- rc = leapioraid_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1);
- if (rc)
- return rc;
- pr_err("%s performance mode: latency\n", ioc->name);
- break;
- case LEAPIORAID_PERF_MODE_IOPS:
- pr_err(
- "%s performance mode: iops with coalescing timeout: 0x%x\n",
- ioc->name, le32_to_cpu(ioc_pg1.CoalescingTimeout));
- ioc_pg1.Flags |= cpu_to_le32(0x00000001);
- ioc_pg1.ProductSpecific = 0;
- rc = leapioraid_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1);
- if (rc)
- return rc;
- break;
- }
- return 0;
-}
-
-static int
-leapioraid_base_assign_fw_reported_qd(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasIOUnitP1_t *sas_iounit_pg1 = NULL;
- int sz;
- int rc = 0;
-
- ioc->max_wideport_qd = LEAPIORAID_SAS_QUEUE_DEPTH;
- ioc->max_narrowport_qd = LEAPIORAID_SAS_QUEUE_DEPTH;
- ioc->max_sata_qd = LEAPIORAID_SATA_QUEUE_DEPTH;
-
- sz = offsetof(struct LeapioraidSasIOUnitP1_t, PhyData);
- sas_iounit_pg1 = kzalloc(sz, GFP_KERNEL);
- if (!sas_iounit_pg1) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return rc;
- }
- rc = leapioraid_config_get_sas_iounit_pg1(ioc, &mpi_reply,
- sas_iounit_pg1, sz);
- if (rc) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- ioc->max_wideport_qd =
- (le16_to_cpu(sas_iounit_pg1->SASWideMaxQueueDepth)) ?
- le16_to_cpu(sas_iounit_pg1->SASWideMaxQueueDepth) :
- LEAPIORAID_SAS_QUEUE_DEPTH;
- ioc->max_narrowport_qd =
- (le16_to_cpu(sas_iounit_pg1->SASNarrowMaxQueueDepth)) ?
- le16_to_cpu(sas_iounit_pg1->SASNarrowMaxQueueDepth) :
- LEAPIORAID_SAS_QUEUE_DEPTH;
- ioc->max_sata_qd = (sas_iounit_pg1->SATAMaxQDepth) ?
- sas_iounit_pg1->SATAMaxQDepth : LEAPIORAID_SATA_QUEUE_DEPTH;
-out:
- dinitprintk(ioc, pr_err(
- "%s MaxWidePortQD: 0x%x MaxNarrowPortQD: 0x%x MaxSataQD: 0x%x\n",
- ioc->name, ioc->max_wideport_qd,
- ioc->max_narrowport_qd, ioc->max_sata_qd));
- kfree(sas_iounit_pg1);
- return rc;
-}
-
-static int
-leapioraid_base_static_config_pages(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidCfgRep_t mpi_reply;
- u32 iounit_pg1_flags;
- int rc;
-
- rc = leapioraid_config_get_manufacturing_pg0(ioc, &mpi_reply,
- &ioc->manu_pg0);
- if (rc)
- return rc;
- if (ioc->ir_firmware) {
- rc = leapioraid_config_get_manufacturing_pg10(ioc, &mpi_reply,
- &ioc->manu_pg10);
- if (rc)
- return rc;
- }
- rc = leapioraid_config_get_manufacturing_pg11(ioc, &mpi_reply,
- &ioc->manu_pg11);
- if (rc)
- return rc;
-
- ioc->time_sync_interval =
- ioc->manu_pg11.TimeSyncInterval & 0x7F;
- if (ioc->time_sync_interval) {
- if (ioc->manu_pg11.TimeSyncInterval & 0x80)
- ioc->time_sync_interval =
- ioc->time_sync_interval * 3600;
- else
- ioc->time_sync_interval =
- ioc->time_sync_interval * 60;
- dinitprintk(ioc, pr_info(
- "%s Driver-FW TimeSync interval is %d seconds.\n\t\t"
- "ManuPg11 TimeSync Unit is in %s's",
- ioc->name,
- ioc->time_sync_interval,
- ((ioc->manu_pg11.TimeSyncInterval & 0x80)
- ? "Hour" : "Minute")));
- }
- rc = leapioraid_base_assign_fw_reported_qd(ioc);
- if (rc)
- return rc;
- rc = leapioraid_config_get_bios_pg2(ioc, &mpi_reply, &ioc->bios_pg2);
- if (rc)
- return rc;
- rc = leapioraid_config_get_bios_pg3(ioc, &mpi_reply, &ioc->bios_pg3);
- if (rc)
- return rc;
- rc = leapioraid_config_get_ioc_pg8(ioc, &mpi_reply, &ioc->ioc_pg8);
- if (rc)
- return rc;
- rc = leapioraid_config_get_iounit_pg0(ioc, &mpi_reply,
- &ioc->iounit_pg0);
- if (rc)
- return rc;
- rc = leapioraid_config_get_iounit_pg1(ioc, &mpi_reply,
- &ioc->iounit_pg1);
- if (rc)
- return rc;
- rc = leapioraid_config_get_iounit_pg8(ioc, &mpi_reply,
- &ioc->iounit_pg8);
- if (rc)
- return rc;
- leapioraid_base_display_ioc_capabilities(ioc);
- iounit_pg1_flags = le32_to_cpu(ioc->iounit_pg1.Flags);
- if ((ioc->facts.IOCCapabilities &
- LEAPIORAID_IOCFACTS_CAPABILITY_TASK_SET_FULL_HANDLING))
- iounit_pg1_flags &=
- ~LEAPIORAID_IOUNITPAGE1_DISABLE_TASK_SET_FULL_HANDLING;
- else
- iounit_pg1_flags |=
- LEAPIORAID_IOUNITPAGE1_DISABLE_TASK_SET_FULL_HANDLING;
- ioc->iounit_pg1.Flags = cpu_to_le32(iounit_pg1_flags);
- rc = leapioraid_config_set_iounit_pg1(ioc, &mpi_reply,
- &ioc->iounit_pg1);
- if (rc)
- return rc;
- if (ioc->iounit_pg8.NumSensors)
- ioc->temp_sensors_count = ioc->iounit_pg8.NumSensors;
-
- rc = leapioraid_base_update_ioc_page1_inlinewith_perf_mode(ioc);
- if (rc)
- return rc;
-
- return 0;
-}
-
-void
-leapioraid_free_enclosure_list(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_enclosure_node *enclosure_dev, *enclosure_dev_next;
-
- list_for_each_entry_safe(enclosure_dev,
- enclosure_dev_next, &ioc->enclosure_list,
- list) {
- list_del(&enclosure_dev->list);
- kfree(enclosure_dev);
- }
-}
-
-static void
-leapioraid_base_release_memory_pools(struct LEAPIORAID_ADAPTER *ioc)
-{
- int i, j;
- int dma_alloc_count = 0;
- struct leapioraid_chain_tracker *ct;
- int count = ioc->rdpq_array_enable ? ioc->reply_queue_count : 1;
-
- dexitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- if (ioc->request) {
- dma_free_coherent(&ioc->pdev->dev, ioc->request_dma_sz,
- ioc->request, ioc->request_dma);
- dexitprintk(ioc,
- pr_info("%s request_pool(0x%p): free\n",
- ioc->name, ioc->request));
- ioc->request = NULL;
- }
- if (ioc->sense) {
- dma_pool_free(ioc->sense_dma_pool, ioc->sense, ioc->sense_dma);
- dma_pool_destroy(ioc->sense_dma_pool);
- dexitprintk(ioc, pr_info("%s sense_pool(0x%p): free\n",
- ioc->name, ioc->sense));
- ioc->sense = NULL;
- }
- if (ioc->reply) {
- dma_pool_free(ioc->reply_dma_pool, ioc->reply, ioc->reply_dma);
- dma_pool_destroy(ioc->reply_dma_pool);
- dexitprintk(ioc, pr_info("%s reply_pool(0x%p): free\n",
- ioc->name, ioc->reply));
- ioc->reply = NULL;
- }
- if (ioc->reply_free) {
- dma_pool_free(ioc->reply_free_dma_pool, ioc->reply_free,
- ioc->reply_free_dma);
- dma_pool_destroy(ioc->reply_free_dma_pool);
- dexitprintk(ioc, pr_info("%s reply_free_pool(0x%p): free\n",
- ioc->name, ioc->reply_free));
- ioc->reply_free = NULL;
- }
- if (ioc->reply_post) {
- dma_alloc_count = DIV_ROUND_UP(count,
- LEAPIORAID_RDPQ_MAX_INDEX_IN_ONE_CHUNK);
- for (i = 0; i < count; i++) {
- if (i % LEAPIORAID_RDPQ_MAX_INDEX_IN_ONE_CHUNK == 0
- && dma_alloc_count) {
- if (ioc->reply_post[i].reply_post_free) {
- dma_pool_free(ioc->reply_post_free_dma_pool,
- ioc->reply_post[i].reply_post_free,
- ioc->reply_post[i].reply_post_free_dma);
- pr_err(
- "%s reply_post_free_pool(0x%p): free\n",
- ioc->name,
- ioc->reply_post[i].reply_post_free);
- ioc->reply_post[i].reply_post_free =
- NULL;
- }
- --dma_alloc_count;
- }
- }
- dma_pool_destroy(ioc->reply_post_free_dma_pool);
- if (ioc->reply_post_free_array && ioc->rdpq_array_enable) {
- dma_pool_free(ioc->reply_post_free_array_dma_pool,
- ioc->reply_post_free_array,
- ioc->reply_post_free_array_dma);
- ioc->reply_post_free_array = NULL;
- }
- dma_pool_destroy(ioc->reply_post_free_array_dma_pool);
- kfree(ioc->reply_post);
- }
- if (ioc->config_page) {
- dexitprintk(ioc, pr_err(
- "%s config_page(0x%p): free\n", ioc->name,
- ioc->config_page));
- dma_free_coherent(&ioc->pdev->dev, ioc->config_page_sz,
- ioc->config_page, ioc->config_page_dma);
- }
- kfree(ioc->hpr_lookup);
- kfree(ioc->internal_lookup);
- if (ioc->chain_lookup) {
- for (i = 0; i < ioc->scsiio_depth; i++) {
- for (j = ioc->chains_per_prp_buffer;
- j < ioc->chains_needed_per_io; j++) {
- ct = &ioc->chain_lookup[i].chains_per_smid[j];
- if (ct && ct->chain_buffer)
- dma_pool_free(ioc->chain_dma_pool,
- ct->chain_buffer,
- ct->chain_buffer_dma);
- }
- kfree(ioc->chain_lookup[i].chains_per_smid);
- }
- dma_pool_destroy(ioc->chain_dma_pool);
- kfree(ioc->chain_lookup);
- ioc->chain_lookup = NULL;
- }
- kfree(ioc->io_queue_num);
- ioc->io_queue_num = NULL;
-}
-
-static int
-leapioraid_check_same_4gb_region(dma_addr_t start_address, u32 pool_sz)
-{
- dma_addr_t end_address;
-
- end_address = start_address + pool_sz - 1;
- if (upper_32_bits(start_address) == upper_32_bits(end_address))
- return 1;
- else
- return 0;
-}
-
-static inline int
-leapioraid_base_reduce_hba_queue_depth(struct LEAPIORAID_ADAPTER *ioc)
-{
- int reduce_sz = 64;
-
- if ((ioc->hba_queue_depth - reduce_sz) >
- (ioc->internal_depth + LEAPIORAID_INTERNAL_SCSIIO_CMDS_COUNT)) {
- ioc->hba_queue_depth -= reduce_sz;
- return 0;
- } else
- return -ENOMEM;
-}
-
-static int
-leapioraid_base_allocate_reply_post_free_array(struct LEAPIORAID_ADAPTER *ioc,
- int reply_post_free_array_sz)
-{
- ioc->reply_post_free_array_dma_pool =
- dma_pool_create("reply_post_free_array pool",
- &ioc->pdev->dev, reply_post_free_array_sz, 16, 0);
- if (!ioc->reply_post_free_array_dma_pool) {
- dinitprintk(ioc,
- pr_err
- ("reply_post_free_array pool: dma_pool_create failed\n"));
- return -ENOMEM;
- }
- ioc->reply_post_free_array =
- dma_pool_alloc(ioc->reply_post_free_array_dma_pool,
- GFP_KERNEL, &ioc->reply_post_free_array_dma);
- if (!ioc->reply_post_free_array) {
- dinitprintk(ioc,
- pr_err
- ("reply_post_free_array pool: dma_pool_alloc failed\n"));
- return -EAGAIN;
- }
- if (!leapioraid_check_same_4gb_region(ioc->reply_post_free_array_dma,
- reply_post_free_array_sz)) {
- dinitprintk(ioc, pr_err(
- "Bad Reply Free Pool! Reply Free (0x%p)\n\t\t"
- "Reply Free dma = (0x%llx)\n",
- ioc->reply_free,
- (unsigned long long)ioc->reply_free_dma));
- ioc->use_32bit_dma = 1;
- return -EAGAIN;
- }
- return 0;
-}
-
-static int
-base_alloc_rdpq_dma_pool(struct LEAPIORAID_ADAPTER *ioc, int sz)
-{
- int i = 0;
- u32 dma_alloc_count = 0;
- int reply_post_free_sz = ioc->reply_post_queue_depth *
- sizeof(struct LeapioraidDefaultRepDesc_t);
- int count = ioc->rdpq_array_enable ? ioc->reply_queue_count : 1;
-
- ioc->reply_post =
- kcalloc(count, sizeof(struct leapioraid_reply_post_struct), GFP_KERNEL);
- if (!ioc->reply_post) {
- pr_err("%s reply_post_free pool: kcalloc failed\n", ioc->name);
- return -ENOMEM;
- }
- dma_alloc_count = DIV_ROUND_UP(
- count, LEAPIORAID_RDPQ_MAX_INDEX_IN_ONE_CHUNK);
- ioc->reply_post_free_dma_pool =
- dma_pool_create("reply_post_free pool", &ioc->pdev->dev, sz, 16, 0);
- if (!ioc->reply_post_free_dma_pool) {
- pr_err("reply_post_free pool: dma_pool_create failed\n");
- return -ENOMEM;
- }
- for (i = 0; i < count; i++) {
- if ((i % LEAPIORAID_RDPQ_MAX_INDEX_IN_ONE_CHUNK == 0) && dma_alloc_count) {
- ioc->reply_post[i].reply_post_free =
- dma_pool_zalloc(ioc->reply_post_free_dma_pool,
- GFP_KERNEL,
- &ioc->reply_post[i].reply_post_free_dma);
- if (!ioc->reply_post[i].reply_post_free) {
- pr_err("reply_post_free pool: dma_pool_alloc failed\n");
- return -EAGAIN;
- }
- if (!leapioraid_check_same_4gb_region
- (ioc->reply_post[i].reply_post_free_dma, sz)) {
- dinitprintk(ioc, pr_err(
- "%s bad Replypost free pool(0x%p) dma = (0x%llx)\n",
- ioc->name,
- ioc->reply_post[i].reply_post_free,
- (unsigned long long)
- ioc->reply_post[i].reply_post_free_dma));
- ioc->use_32bit_dma = 1;
- return -EAGAIN;
- }
- dma_alloc_count--;
- } else {
- ioc->reply_post[i].reply_post_free =
- (union LeapioraidRepDescUnion_t *)
- ((long)ioc->reply_post[i - 1].reply_post_free
- + reply_post_free_sz);
- ioc->reply_post[i].reply_post_free_dma = (dma_addr_t)
- (ioc->reply_post[i - 1].reply_post_free_dma +
- reply_post_free_sz);
- }
- }
- return 0;
-}
-
-static int
-leapioraid_base_allocate_chain_dma_pool(struct LEAPIORAID_ADAPTER *ioc, int sz)
-{
- int i = 0, j = 0;
- struct leapioraid_chain_tracker *ctr;
-
- ioc->chain_dma_pool = dma_pool_create("chain pool", &ioc->pdev->dev,
- ioc->chain_segment_sz, 16, 0);
- if (!ioc->chain_dma_pool) {
- pr_err("%s chain_dma_pool: dma_pool_create failed\n", ioc->name);
- return -ENOMEM;
- }
- for (i = 0; i < ioc->scsiio_depth; i++) {
- for (j = ioc->chains_per_prp_buffer;
- j < ioc->chains_needed_per_io; j++) {
- ctr = &ioc->chain_lookup[i].chains_per_smid[j];
- ctr->chain_buffer = dma_pool_alloc(ioc->chain_dma_pool,
- GFP_KERNEL,
- &ctr->chain_buffer_dma);
- if (!ctr->chain_buffer)
- return -EAGAIN;
- if (!leapioraid_check_same_4gb_region
- (ctr->chain_buffer_dma, ioc->chain_segment_sz)) {
- pr_err(
- "%s buffers not in same 4G! buff=(0x%p) dma=(0x%llx)\n",
- ioc->name,
- ctr->chain_buffer,
- (unsigned long long)ctr->chain_buffer_dma);
- ioc->use_32bit_dma = 1;
- return -EAGAIN;
- }
- }
- }
- dinitprintk(ioc, pr_info(
- "%s chain_lookup depth(%d), frame_size(%d), pool_size(%d kB)\n",
- ioc->name, ioc->scsiio_depth,
- ioc->chain_segment_sz,
- ((ioc->scsiio_depth *
- (ioc->chains_needed_per_io -
- ioc->chains_per_prp_buffer) *
- ioc->chain_segment_sz)) / 1024));
- return 0;
-}
-
-static int
-leapioraid_base_allocate_sense_dma_pool(struct LEAPIORAID_ADAPTER *ioc, int sz)
-{
- ioc->sense_dma_pool =
- dma_pool_create("sense pool", &ioc->pdev->dev, sz, 4, 0);
- if (!ioc->sense_dma_pool) {
- pr_err("%s sense pool: dma_pool_create failed\n", ioc->name);
- return -ENOMEM;
- }
- ioc->sense = dma_pool_alloc(ioc->sense_dma_pool,
- GFP_KERNEL, &ioc->sense_dma);
- if (!ioc->sense) {
- pr_err("%s sense pool: dma_pool_alloc failed\n", ioc->name);
- return -EAGAIN;
- }
- if (!leapioraid_check_same_4gb_region(ioc->sense_dma, sz)) {
- dinitprintk(ioc,
- pr_err("Bad Sense Pool! sense (0x%p) sense_dma = (0x%llx)\n",
- ioc->sense,
- (unsigned long long)ioc->sense_dma));
- ioc->use_32bit_dma = 1;
- return -EAGAIN;
- }
- pr_err(
- "%s sense pool(0x%p) - dma(0x%llx): depth(%d),\n\t\t"
- "element_size(%d), pool_size (%d kB)\n",
- ioc->name,
- ioc->sense,
- (unsigned long long)ioc->sense_dma,
- ioc->scsiio_depth,
- SCSI_SENSE_BUFFERSIZE, sz / 1024);
- return 0;
-}
-
-static int
-leapioraid_base_allocate_reply_free_dma_pool(struct LEAPIORAID_ADAPTER *ioc,
- int sz)
-{
- ioc->reply_free_dma_pool =
- dma_pool_create("reply_free pool", &ioc->pdev->dev, sz, 16, 0);
- if (!ioc->reply_free_dma_pool) {
- pr_err("%s reply_free pool: dma_pool_create failed\n", ioc->name);
- return -ENOMEM;
- }
- ioc->reply_free = dma_pool_alloc(ioc->reply_free_dma_pool,
- GFP_KERNEL, &ioc->reply_free_dma);
- if (!ioc->reply_free) {
- pr_err("%s reply_free pool: dma_pool_alloc failed\n", ioc->name);
- return -EAGAIN;
- }
- if (!leapioraid_check_same_4gb_region(ioc->reply_free_dma, sz)) {
- dinitprintk(ioc, pr_err(
- "Bad Reply Free Pool! Reply Free (0x%p)\n\t\t"
- "Reply Free dma = (0x%llx)\n",
- ioc->reply_free,
- (unsigned long long)ioc->reply_free_dma));
- ioc->use_32bit_dma = 1;
- return -EAGAIN;
- }
- memset(ioc->reply_free, 0, sz);
- dinitprintk(ioc, pr_info(
- "%s reply_free pool(0x%p): depth(%d),\n\t\t"
- "element_size(%d), pool_size(%d kB)\n",
- ioc->name,
- ioc->reply_free,
- ioc->reply_free_queue_depth, 4,
- sz / 1024));
- dinitprintk(ioc,
- pr_info("%s reply_free_dma (0x%llx)\n",
- ioc->name, (unsigned long long)ioc->reply_free_dma));
- return 0;
-}
-
-static int
-leapioraid_base_allocate_reply_pool(struct LEAPIORAID_ADAPTER *ioc, int sz)
-{
- ioc->reply_dma_pool = dma_pool_create("reply pool",
- &ioc->pdev->dev, sz, 4, 0);
- if (!ioc->reply_dma_pool) {
- pr_err("%s reply pool: dma_pool_create failed\n", ioc->name);
- return -ENOMEM;
- }
- ioc->reply = dma_pool_alloc(ioc->reply_dma_pool, GFP_KERNEL,
- &ioc->reply_dma);
- if (!ioc->reply) {
- pr_err("%s reply pool: dma_pool_alloc failed\n", ioc->name);
- return -EAGAIN;
- }
- if (!leapioraid_check_same_4gb_region(ioc->reply_dma, sz)) {
- dinitprintk(ioc,
- pr_err("Bad Reply Pool! Reply (0x%p) Reply dma = (0x%llx)\n",
- ioc->reply,
- (unsigned long long)ioc->reply_dma));
- ioc->use_32bit_dma = 1;
- return -EAGAIN;
- }
- ioc->reply_dma_min_address = (u32) (ioc->reply_dma);
- ioc->reply_dma_max_address = (u32) (ioc->reply_dma) + sz;
- pr_err(
- "%s reply pool(0x%p) - dma(0x%llx): depth(%d)\n\t\t"
- "frame_size(%d), pool_size(%d kB)\n",
- ioc->name,
- ioc->reply,
- (unsigned long long)ioc->reply_dma,
- ioc->reply_free_queue_depth,
- ioc->reply_sz,
- sz / 1024);
- return 0;
-}
-
-static int
-leapioraid_base_allocate_memory_pools(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_facts *facts;
- u16 max_sge_elements;
- u16 chains_needed_per_io;
- u32 sz, total_sz, reply_post_free_sz, rc = 0;
- u32 retry_sz;
- u32 rdpq_sz = 0, sense_sz = 0, reply_post_free_array_sz = 0;
- u16 max_request_credit;
- unsigned short sg_tablesize;
- u16 sge_size;
- int i = 0;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- retry_sz = 0;
- facts = &ioc->facts;
- sg_tablesize = LEAPIORAID_SG_DEPTH;
- if (reset_devices)
- sg_tablesize = min_t(unsigned short, sg_tablesize,
- LEAPIORAID_KDUMP_MIN_PHYS_SEGMENTS);
- if (sg_tablesize < LEAPIORAID_MIN_PHYS_SEGMENTS)
- sg_tablesize = LEAPIORAID_MIN_PHYS_SEGMENTS;
- else if (sg_tablesize > LEAPIORAID_MAX_PHYS_SEGMENTS) {
- sg_tablesize = min_t(unsigned short, sg_tablesize,
- LEAPIORAID_MAX_SG_SEGMENTS);
- pr_warn(
- "%s sg_tablesize(%u) is bigger than kernel defined %s(%u)\n",
- ioc->name,
- sg_tablesize, LEAPIORAID_MAX_PHYS_SEGMENTS_STRING,
- LEAPIORAID_MAX_PHYS_SEGMENTS);
- }
- ioc->shost->sg_tablesize = sg_tablesize;
- ioc->internal_depth = min_t(int, (facts->HighPriorityCredit + (5)),
- (facts->RequestCredit / 4));
- if (ioc->internal_depth < LEAPIORAID_INTERNAL_CMDS_COUNT) {
- if (facts->RequestCredit <= (LEAPIORAID_INTERNAL_CMDS_COUNT +
- LEAPIORAID_INTERNAL_SCSIIO_CMDS_COUNT)) {
- pr_err(
- "%s RequestCredits not enough, it has %d credits\n",
- ioc->name,
- facts->RequestCredit);
- return -ENOMEM;
- }
- ioc->internal_depth = 10;
- }
- ioc->hi_priority_depth = ioc->internal_depth - (5);
- if (reset_devices)
- max_request_credit = min_t(u16, facts->RequestCredit,
- (LEAPIORAID_KDUMP_SCSI_IO_DEPTH +
- ioc->internal_depth));
- else
- max_request_credit = min_t(u16, facts->RequestCredit,
- LEAPIORAID_MAX_HBA_QUEUE_DEPTH);
-retry:
- ioc->hba_queue_depth = max_request_credit + ioc->hi_priority_depth;
- ioc->request_sz = facts->IOCRequestFrameSize * 4;
- ioc->reply_sz = facts->ReplyFrameSize * 4;
- if (facts->IOCMaxChainSegmentSize)
- ioc->chain_segment_sz =
- facts->IOCMaxChainSegmentSize * LEAPIORAID_MAX_CHAIN_ELEMT_SZ;
- else
- ioc->chain_segment_sz =
- LEAPIORAID_DEFAULT_NUM_FWCHAIN_ELEMTS * LEAPIORAID_MAX_CHAIN_ELEMT_SZ;
- sge_size = max_t(u16, ioc->sge_size, ioc->sge_size_ieee);
-retry_allocation:
- total_sz = 0;
- max_sge_elements =
- ioc->request_sz -
- ((sizeof(struct LeapioraidSCSIIOReq_t) -
- sizeof(union LEAPIORAID_IEEE_SGE_IO_UNION)) + 2 * sge_size);
- ioc->max_sges_in_main_message = max_sge_elements / sge_size;
- max_sge_elements = ioc->chain_segment_sz - sge_size;
- ioc->max_sges_in_chain_message = max_sge_elements / sge_size;
- chains_needed_per_io = ((ioc->shost->sg_tablesize -
- ioc->max_sges_in_main_message) /
- ioc->max_sges_in_chain_message)
- + 1;
- if (chains_needed_per_io > facts->MaxChainDepth) {
- chains_needed_per_io = facts->MaxChainDepth;
- ioc->shost->sg_tablesize = min_t(u16,
- ioc->max_sges_in_main_message +
- (ioc->max_sges_in_chain_message *
- chains_needed_per_io),
- ioc->shost->sg_tablesize);
- }
- ioc->chains_needed_per_io = chains_needed_per_io;
- ioc->reply_free_queue_depth = ioc->hba_queue_depth + 64;
- ioc->reply_post_queue_depth = ioc->hba_queue_depth +
- ioc->reply_free_queue_depth + 1;
- if (ioc->reply_post_queue_depth % 16)
- ioc->reply_post_queue_depth +=
- 16 - (ioc->reply_post_queue_depth % 16);
- if (ioc->reply_post_queue_depth >
- facts->MaxReplyDescriptorPostQueueDepth) {
- ioc->reply_post_queue_depth =
- facts->MaxReplyDescriptorPostQueueDepth -
- (facts->MaxReplyDescriptorPostQueueDepth % 16);
- ioc->hba_queue_depth =
- ((ioc->reply_post_queue_depth - 64) / 2) - 1;
- ioc->reply_free_queue_depth = ioc->hba_queue_depth + 64;
- }
- pr_info(
- "%s scatter gather: sge_in_main_msg(%d),\n\t\t"
- "sge_per_chain(%d), sge_per_io(%d), chains_per_io(%d)\n",
- ioc->name,
- ioc->max_sges_in_main_message,
- ioc->max_sges_in_chain_message,
- ioc->shost->sg_tablesize,
- ioc->chains_needed_per_io);
- ioc->scsiio_depth = ioc->hba_queue_depth -
- ioc->hi_priority_depth - ioc->internal_depth;
- ioc->shost->can_queue =
- ioc->scsiio_depth - LEAPIORAID_INTERNAL_SCSIIO_CMDS_COUNT;
- dinitprintk(ioc, pr_info("%s scsi host: can_queue depth (%d)\n", ioc->name,
- ioc->shost->can_queue));
- sz = ((ioc->scsiio_depth + 1) * ioc->request_sz);
- sz += (ioc->hi_priority_depth * ioc->request_sz);
- sz += (ioc->internal_depth * ioc->request_sz);
- ioc->request_dma_sz = sz;
- ioc->request = dma_alloc_coherent(&ioc->pdev->dev, sz,
- &ioc->request_dma, GFP_KERNEL);
- if (!ioc->request) {
- if (ioc->scsiio_depth < LEAPIORAID_SAS_QUEUE_DEPTH) {
- rc = -ENOMEM;
- goto out;
- }
- retry_sz = 64;
- if ((ioc->hba_queue_depth - retry_sz) >
- (ioc->internal_depth + LEAPIORAID_INTERNAL_SCSIIO_CMDS_COUNT)) {
- ioc->hba_queue_depth -= retry_sz;
- goto retry_allocation;
- } else {
- rc = -ENOMEM;
- goto out;
- }
- }
- memset(ioc->request, 0, sz);
- if (retry_sz)
- pr_err(
- "%s request pool: dma_alloc_consistent succeed:\n\t\t"
- "hba_depth(%d), chains_per_io(%d), frame_sz(%d), total(%d kb)\n",
- ioc->name,
- ioc->hba_queue_depth,
- ioc->chains_needed_per_io,
- ioc->request_sz,
- sz / 1024);
- ioc->hi_priority =
- ioc->request + ((ioc->scsiio_depth + 1) * ioc->request_sz);
- ioc->hi_priority_dma =
- ioc->request_dma + ((ioc->scsiio_depth + 1) * ioc->request_sz);
- ioc->internal =
- ioc->hi_priority + (ioc->hi_priority_depth * ioc->request_sz);
- ioc->internal_dma =
- ioc->hi_priority_dma + (ioc->hi_priority_depth * ioc->request_sz);
- pr_info(
- "%s request pool(0x%p) - dma(0x%llx):\n\t\t"
- "depth(%d), frame_size(%d), pool_size(%d kB)\n",
- ioc->name,
- ioc->request,
- (unsigned long long)ioc->request_dma,
- ioc->hba_queue_depth,
- ioc->request_sz,
- (ioc->hba_queue_depth * ioc->request_sz) / 1024);
- total_sz += sz;
- ioc->io_queue_num = kcalloc(ioc->scsiio_depth, sizeof(u16), GFP_KERNEL);
- if (!ioc->io_queue_num) {
- rc = -ENOMEM;
- goto out;
- }
- dinitprintk(ioc, pr_info("%s scsiio(0x%p): depth(%d)\n",
- ioc->name, ioc->request, ioc->scsiio_depth));
- ioc->hpr_lookup = kcalloc(ioc->hi_priority_depth,
- sizeof(struct leapioraid_request_tracker), GFP_KERNEL);
- if (!ioc->hpr_lookup) {
- rc = -ENOMEM;
- goto out;
- }
- ioc->hi_priority_smid = ioc->scsiio_depth + 1;
- dinitprintk(ioc, pr_info(
- "%s hi_priority(0x%p): depth(%d), start smid(%d)\n",
- ioc->name, ioc->hi_priority, ioc->hi_priority_depth,
- ioc->hi_priority_smid));
- ioc->internal_lookup =
- kcalloc(ioc->internal_depth, sizeof(struct leapioraid_request_tracker),
- GFP_KERNEL);
- if (!ioc->internal_lookup) {
- pr_err("%s internal_lookup: kcalloc failed\n",
- ioc->name);
- rc = -ENOMEM;
- goto out;
- }
- ioc->internal_smid = ioc->hi_priority_smid + ioc->hi_priority_depth;
- dinitprintk(ioc, pr_info(
- "%s internal(0x%p): depth(%d), start smid(%d)\n",
- ioc->name, ioc->internal, ioc->internal_depth,
- ioc->internal_smid));
- sz = ioc->scsiio_depth * sizeof(struct leapioraid_chain_lookup);
- ioc->chain_lookup = kzalloc(sz, GFP_KERNEL);
- if (!ioc->chain_lookup) {
- if ((max_request_credit - 64) >
- (ioc->internal_depth + LEAPIORAID_INTERNAL_SCSIIO_CMDS_COUNT)) {
- max_request_credit -= 64;
- leapioraid_base_release_memory_pools(ioc);
- goto retry;
- } else {
- pr_err(
- "%s chain_lookup: __get_free_pages failed\n",
- ioc->name);
- rc = -ENOMEM;
- goto out;
- }
- }
- sz = ioc->chains_needed_per_io * sizeof(struct leapioraid_chain_tracker);
- for (i = 0; i < ioc->scsiio_depth; i++) {
- ioc->chain_lookup[i].chains_per_smid = kzalloc(sz, GFP_KERNEL);
- if (!ioc->chain_lookup[i].chains_per_smid) {
- if ((max_request_credit - 64) >
- (ioc->internal_depth +
- LEAPIORAID_INTERNAL_SCSIIO_CMDS_COUNT)) {
- max_request_credit -= 64;
- leapioraid_base_release_memory_pools(ioc);
- goto retry;
- } else {
- pr_err("%s chain_lookup: kzalloc failed\n", ioc->name);
- rc = -ENOMEM;
- goto out;
- }
- }
- }
- ioc->chains_per_prp_buffer = 0;
- rc = leapioraid_base_allocate_chain_dma_pool(ioc, ioc->chain_segment_sz);
- if (rc == -ENOMEM)
- return -ENOMEM;
- else if (rc == -EAGAIN) {
- if (ioc->use_32bit_dma && ioc->dma_mask > 32)
- goto try_32bit_dma;
- else {
- if ((max_request_credit - 64) >
- (ioc->internal_depth +
- LEAPIORAID_INTERNAL_SCSIIO_CMDS_COUNT)) {
- max_request_credit -= 64;
- leapioraid_base_release_memory_pools(ioc);
- goto retry_allocation;
- } else {
- pr_err("%s chain_lookup: dma_pool_alloc failed\n", ioc->name);
- return -ENOMEM;
- }
- }
- }
- total_sz += ioc->chain_segment_sz *
- ((ioc->chains_needed_per_io - ioc->chains_per_prp_buffer) *
- ioc->scsiio_depth);
- sense_sz = ioc->scsiio_depth * SCSI_SENSE_BUFFERSIZE;
- rc = leapioraid_base_allocate_sense_dma_pool(ioc, sense_sz);
- if (rc == -ENOMEM)
- return -ENOMEM;
- else if (rc == -EAGAIN)
- goto try_32bit_dma;
- total_sz += sense_sz;
- sz = ioc->reply_free_queue_depth * ioc->reply_sz;
- rc = leapioraid_base_allocate_reply_pool(ioc, sz);
- if (rc == -ENOMEM)
- return -ENOMEM;
- else if (rc == -EAGAIN)
- goto try_32bit_dma;
- total_sz += sz;
- sz = ioc->reply_free_queue_depth * 4;
- rc = leapioraid_base_allocate_reply_free_dma_pool(ioc, sz);
- if (rc == -ENOMEM)
- return -ENOMEM;
- else if (rc == -EAGAIN)
- goto try_32bit_dma;
- total_sz += sz;
- reply_post_free_sz = ioc->reply_post_queue_depth *
- sizeof(struct LeapioraidDefaultRepDesc_t);
- rdpq_sz = reply_post_free_sz * LEAPIORAID_RDPQ_MAX_INDEX_IN_ONE_CHUNK;
- if ((leapioraid_base_is_controller_msix_enabled(ioc)
- && !ioc->rdpq_array_enable)
- || (ioc->reply_queue_count < LEAPIORAID_RDPQ_MAX_INDEX_IN_ONE_CHUNK))
- rdpq_sz = reply_post_free_sz * ioc->reply_queue_count;
- rc = base_alloc_rdpq_dma_pool(ioc, rdpq_sz);
- if (rc == -ENOMEM)
- return -ENOMEM;
- else if (rc == -EAGAIN)
- goto try_32bit_dma;
- else {
- if (ioc->rdpq_array_enable && rc == 0) {
- reply_post_free_array_sz = ioc->reply_queue_count *
- sizeof(struct LeapioraidIOCInitRDPQArrayEntry);
- rc = leapioraid_base_allocate_reply_post_free_array(
- ioc, reply_post_free_array_sz);
- if (rc == -ENOMEM)
- return -ENOMEM;
- else if (rc == -EAGAIN)
- goto try_32bit_dma;
- }
- }
- total_sz += rdpq_sz;
- ioc->config_page_sz = 512;
- ioc->config_page = dma_alloc_coherent(&ioc->pdev->dev,
- ioc->config_page_sz,
- &ioc->config_page_dma,
- GFP_KERNEL);
- if (!ioc->config_page) {
- pr_err("%s config page: dma_pool_alloc failed\n", ioc->name);
- rc = -ENOMEM;
- goto out;
- }
- pr_err("%s config page(0x%p) - dma(0x%llx): size(%d)\n",
- ioc->name, ioc->config_page,
- (unsigned long long)ioc->config_page_dma,
- ioc->config_page_sz);
- total_sz += ioc->config_page_sz;
- pr_info("%s Allocated physical memory: size(%d kB)\n",
- ioc->name, total_sz / 1024);
- pr_info(
- "%s Current IOC Queue Depth(%d), Max Queue Depth(%d)\n",
- ioc->name,
- ioc->shost->can_queue,
- facts->RequestCredit);
- return 0;
-try_32bit_dma:
- leapioraid_base_release_memory_pools(ioc);
- if (ioc->use_32bit_dma && (ioc->dma_mask > 32)) {
- if (leapioraid_base_config_dma_addressing(ioc, ioc->pdev) != 0) {
- pr_err("Setting 32 bit coherent DMA mask Failed %s\n",
- pci_name(ioc->pdev));
- return -ENODEV;
- }
- } else if (leapioraid_base_reduce_hba_queue_depth(ioc) != 0)
- return -ENOMEM;
- goto retry_allocation;
-out:
- return rc;
-}
-
-static void
-leapioraid_base_flush_ios_and_panic(
- struct LEAPIORAID_ADAPTER *ioc, u16 fault_code)
-{
- ioc->adapter_over_temp = 1;
- leapioraid_base_stop_smart_polling(ioc);
- leapioraid_base_stop_watchdog(ioc);
- leapioraid_base_stop_hba_unplug_watchdog(ioc);
- leapioraid_base_pause_mq_polling(ioc);
- leapioraid_scsihost_flush_running_cmds(ioc);
- leapioraid_print_fault_code(ioc, fault_code);
-}
-
-u32
-leapioraid_base_get_iocstate(struct LEAPIORAID_ADAPTER *ioc, int cooked)
-{
- u32 s, sc;
-
- s = ioc->base_readl(
- &ioc->chip->Doorbell, LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY);
- sc = s & LEAPIORAID_IOC_STATE_MASK;
- if (sc != LEAPIORAID_IOC_STATE_MASK) {
- if ((sc == LEAPIORAID_IOC_STATE_FAULT) &&
- ((s & LEAPIORAID_DOORBELL_DATA_MASK) ==
- LEAPIORAID_IFAULT_IOP_OVER_TEMP_THRESHOLD_EXCEEDED)) {
- leapioraid_base_flush_ios_and_panic(ioc,
- s &
- LEAPIORAID_DOORBELL_DATA_MASK);
- panic("TEMPERATURE FAULT: STOPPING; panic in %s\n",
- __func__);
- }
- }
- return cooked ? sc : s;
-}
-
-static int
-leapioraid_base_send_ioc_reset(
- struct LEAPIORAID_ADAPTER *ioc, u8 reset_type, int timeout)
-{
- u32 ioc_state;
- int r = 0;
- unsigned long flags;
-
- if (reset_type != LEAPIORAID_FUNC_IOC_MESSAGE_UNIT_RESET) {
- pr_err("%s %s: unknown reset_type\n",
- ioc->name, __func__);
- return -EFAULT;
- }
- if (!(ioc->facts.IOCCapabilities &
- LEAPIORAID_IOCFACTS_CAPABILITY_EVENT_REPLAY))
- return -EFAULT;
- pr_info("%s sending message unit reset !!\n",
- ioc->name);
- writel(reset_type << LEAPIORAID_DOORBELL_FUNCTION_SHIFT,
- &ioc->chip->Doorbell);
- if ((leapioraid_base_wait_for_doorbell_ack(ioc, 15)))
- r = -EFAULT;
- ioc_state = leapioraid_base_get_iocstate(ioc, 0);
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_COREDUMP
- && (ioc->is_driver_loading == 1
- || ioc->fault_reset_work_q == NULL)) {
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
- leapioraid_base_coredump_info(ioc, ioc_state);
- leapioraid_base_wait_for_coredump_completion(ioc, __func__);
- r = -EFAULT;
- goto out;
- }
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
- if (r != 0)
- goto out;
- ioc_state =
- leapioraid_base_wait_on_iocstate(ioc, LEAPIORAID_IOC_STATE_READY,
- timeout);
- if (ioc_state) {
- pr_err("%s %s: failed going to ready state (ioc_state=0x%x)\n",
- ioc->name, __func__, ioc_state);
- r = -EFAULT;
- goto out;
- }
-out:
- pr_info("%s message unit reset: %s\n",
- ioc->name, ((r == 0) ? "SUCCESS" : "FAILED"));
- return r;
-}
-
-int
-leapioraid_wait_for_ioc_to_operational(struct LEAPIORAID_ADAPTER *ioc,
- int wait_count)
-{
- int wait_state_count = 0;
- u32 ioc_state;
-
- if (leapioraid_base_pci_device_is_unplugged(ioc))
- return -EFAULT;
- ioc_state = leapioraid_base_get_iocstate(ioc, 1);
- while (ioc_state != LEAPIORAID_IOC_STATE_OPERATIONAL) {
- if (leapioraid_base_pci_device_is_unplugged(ioc))
- return -EFAULT;
- if (ioc->is_driver_loading)
- return -ETIME;
- if (wait_state_count++ == wait_count) {
- pr_err(
- "%s %s: failed due to ioc not operational\n",
- ioc->name, __func__);
- return -EFAULT;
- }
- ssleep(1);
- ioc_state = leapioraid_base_get_iocstate(ioc, 1);
- pr_info("%s %s: waiting for operational state(count=%d)\n",
- ioc->name, __func__, wait_state_count);
- }
- if (wait_state_count)
- pr_info("%s %s: ioc is operational\n",
- ioc->name, __func__);
- return 0;
-}
-
-int
-leapioraid_base_sas_iounit_control(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidSasIoUnitControlRep_t *mpi_reply,
- struct LeapioraidSasIoUnitControlReq_t *mpi_request)
-{
- u16 smid;
- u8 issue_reset;
- int rc;
- void *request;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- mutex_lock(&ioc->base_cmds.mutex);
- if (ioc->base_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: base_cmd in use\n",
- ioc->name, __func__);
- rc = -EAGAIN;
- goto out;
- }
- rc = leapioraid_wait_for_ioc_to_operational(ioc, 10);
- if (rc)
- goto out;
- smid = leapioraid_base_get_smid(ioc, ioc->base_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- rc = -EAGAIN;
- goto out;
- }
- rc = 0;
- ioc->base_cmds.status = LEAPIORAID_CMD_PENDING;
- request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->base_cmds.smid = smid;
- memcpy(request, mpi_request, sizeof(struct LeapioraidSasIoUnitControlReq_t));
- if (mpi_request->Operation == LEAPIORAID_SAS_OP_PHY_HARD_RESET ||
- mpi_request->Operation == LEAPIORAID_SAS_OP_PHY_LINK_RESET)
- ioc->ioc_link_reset_in_progress = 1;
- init_completion(&ioc->base_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->base_cmds.done,
- msecs_to_jiffies(10000));
- if ((mpi_request->Operation == LEAPIORAID_SAS_OP_PHY_HARD_RESET ||
- mpi_request->Operation == LEAPIORAID_SAS_OP_PHY_LINK_RESET) &&
- ioc->ioc_link_reset_in_progress)
- ioc->ioc_link_reset_in_progress = 0;
- if (!(ioc->base_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- leapioraid_check_cmd_timeout(ioc,
- ioc->base_cmds.status, mpi_request,
- sizeof
- (struct LeapioraidSasIoUnitControlReq_t)
- / 4, issue_reset);
- goto issue_host_reset;
- }
- if (ioc->base_cmds.status & LEAPIORAID_CMD_REPLY_VALID)
- memcpy(mpi_reply, ioc->base_cmds.reply,
- sizeof(struct LeapioraidSasIoUnitControlRep_t));
- else
- memset(mpi_reply, 0, sizeof(struct LeapioraidSasIoUnitControlRep_t));
- ioc->base_cmds.status = LEAPIORAID_CMD_NOT_USED;
- goto out;
-issue_host_reset:
- if (issue_reset)
- leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
- ioc->base_cmds.status = LEAPIORAID_CMD_NOT_USED;
- rc = -EFAULT;
-out:
- mutex_unlock(&ioc->base_cmds.mutex);
- return rc;
-}
-
-int
-leapioraid_base_scsi_enclosure_processor(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidSepRep_t *mpi_reply,
- struct LeapioraidSepReq_t *mpi_request)
-{
- u16 smid;
- u8 issue_reset;
- int rc;
- void *request;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- mutex_lock(&ioc->base_cmds.mutex);
- if (ioc->base_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: base_cmd in use\n",
- ioc->name, __func__);
- rc = -EAGAIN;
- goto out;
- }
- rc = leapioraid_wait_for_ioc_to_operational(ioc, 10);
- if (rc)
- goto out;
- smid = leapioraid_base_get_smid(ioc, ioc->base_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- rc = -EAGAIN;
- goto out;
- }
- rc = 0;
- ioc->base_cmds.status = LEAPIORAID_CMD_PENDING;
- request = leapioraid_base_get_msg_frame(ioc, smid);
- memset(request, 0, ioc->request_sz);
- ioc->base_cmds.smid = smid;
- memcpy(request, mpi_request, sizeof(struct LeapioraidSepReq_t));
- init_completion(&ioc->base_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->base_cmds.done,
- msecs_to_jiffies(10000));
- if (!(ioc->base_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- leapioraid_check_cmd_timeout(ioc,
- ioc->base_cmds.status, mpi_request,
- sizeof(struct LeapioraidSepReq_t) / 4,
- issue_reset);
- goto issue_host_reset;
- }
- if (ioc->base_cmds.status & LEAPIORAID_CMD_REPLY_VALID)
- memcpy(mpi_reply, ioc->base_cmds.reply,
- sizeof(struct LeapioraidSepRep_t));
- else
- memset(mpi_reply, 0, sizeof(struct LeapioraidSepRep_t));
- ioc->base_cmds.status = LEAPIORAID_CMD_NOT_USED;
- goto out;
-issue_host_reset:
- if (issue_reset)
- leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
- ioc->base_cmds.status = LEAPIORAID_CMD_NOT_USED;
- rc = -EFAULT;
-out:
- mutex_unlock(&ioc->base_cmds.mutex);
- return rc;
-}
-
-static int
-leapioraid_base_get_port_facts(struct LEAPIORAID_ADAPTER *ioc, int port)
-{
- struct LeapioraidPortFactsReq_t mpi_request;
- struct LeapioraidPortFactsRep_t mpi_reply;
- struct leapioraid_port_facts *pfacts;
- int mpi_reply_sz, mpi_request_sz, r;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- mpi_reply_sz = sizeof(struct LeapioraidPortFactsRep_t);
- mpi_request_sz = sizeof(struct LeapioraidPortFactsReq_t);
- memset(&mpi_request, 0, mpi_request_sz);
- mpi_request.Function = LEAPIORAID_FUNC_PORT_FACTS;
- mpi_request.PortNumber = port;
- r = leapioraid_base_handshake_req_reply_wait(ioc, mpi_request_sz,
- (u32 *) &mpi_request,
- mpi_reply_sz,
- (u16 *) &mpi_reply, 5);
- if (r != 0) {
- pr_err("%s %s: handshake failed (r=%d)\n",
- ioc->name, __func__, r);
- return r;
- }
- pfacts = &ioc->pfacts[port];
- memset(pfacts, 0, sizeof(struct leapioraid_port_facts));
- pfacts->PortNumber = mpi_reply.PortNumber;
- pfacts->VP_ID = mpi_reply.VP_ID;
- pfacts->VF_ID = mpi_reply.VF_ID;
- pfacts->MaxPostedCmdBuffers =
- le16_to_cpu(mpi_reply.MaxPostedCmdBuffers);
- return 0;
-}
-
-static int
-leapioraid_base_send_ioc_init(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidIOCInitReq_t mpi_request;
- struct LeapioraidIOCInitRep_t mpi_reply;
- int i, r = 0;
- ktime_t current_time;
- u16 ioc_status;
- u32 reply_post_free_ary_sz;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- memset(&mpi_request, 0, sizeof(struct LeapioraidIOCInitReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_IOC_INIT;
- mpi_request.WhoInit = LEAPIORAID_WHOINIT_HOST_DRIVER;
- mpi_request.VF_ID = 0;
- mpi_request.VP_ID = 0;
- mpi_request.MsgVersion = cpu_to_le16(0x0206);
- mpi_request.HeaderVersion = cpu_to_le16(0x3A00);
- mpi_request.HostPageSize = 12;
- if (leapioraid_base_is_controller_msix_enabled(ioc))
- mpi_request.HostMSIxVectors = ioc->reply_queue_count;
- mpi_request.SystemRequestFrameSize = cpu_to_le16(ioc->request_sz / 4);
- mpi_request.ReplyDescriptorPostQueueDepth =
- cpu_to_le16(ioc->reply_post_queue_depth);
- mpi_request.ReplyFreeQueueDepth =
- cpu_to_le16(ioc->reply_free_queue_depth);
- mpi_request.SenseBufferAddressHigh =
- cpu_to_le32((u64) ioc->sense_dma >> 32);
- mpi_request.SystemReplyAddressHigh =
- cpu_to_le32((u64) ioc->reply_dma >> 32);
- mpi_request.SystemRequestFrameBaseAddress =
- cpu_to_le64((u64) ioc->request_dma);
- mpi_request.ReplyFreeQueueAddress =
- cpu_to_le64((u64) ioc->reply_free_dma);
- if (ioc->rdpq_array_enable) {
- reply_post_free_ary_sz = ioc->reply_queue_count *
- sizeof(struct LeapioraidIOCInitRDPQArrayEntry);
- memset(ioc->reply_post_free_array, 0, reply_post_free_ary_sz);
- for (i = 0; i < ioc->reply_queue_count; i++)
- ioc->reply_post_free_array[i].RDPQBaseAddress =
- cpu_to_le64((u64) ioc->reply_post[i].reply_post_free_dma);
- mpi_request.MsgFlags = LEAPIORAID_IOCINIT_MSGFLAG_RDPQ_ARRAY_MODE;
- mpi_request.ReplyDescriptorPostQueueAddress =
- cpu_to_le64((u64) ioc->reply_post_free_array_dma);
- } else {
- mpi_request.ReplyDescriptorPostQueueAddress =
- cpu_to_le64((u64) ioc->reply_post[0].reply_post_free_dma);
- }
- mpi_request.ConfigurationFlags |= 0x0002;
- current_time = ktime_get_real();
- mpi_request.TimeStamp = cpu_to_le64(ktime_to_ms(current_time));
- if (ioc->logging_level & LEAPIORAID_DEBUG_INIT) {
-
- pr_info("%s \toffset:data\n", ioc->name);
- leapioraid_debug_dump_mf(&mpi_request,
- sizeof(struct LeapioraidIOCInitReq_t) / 4);
-
- }
- r = leapioraid_base_handshake_req_reply_wait(ioc,
- sizeof
- (struct LeapioraidIOCInitReq_t),
- (u32 *) &mpi_request,
- sizeof
- (struct LeapioraidIOCInitRep_t),
- (u16 *) &mpi_reply, 30);
- if (r != 0) {
- pr_err("%s %s: handshake failed (r=%d)\n",
- ioc->name, __func__, r);
- return r;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS || mpi_reply.IOCLogInfo) {
- pr_err("%s %s: failed\n", ioc->name,
- __func__);
- r = -EIO;
- }
- ioc->timestamp_update_count = 0;
- return r;
-}
-
-int
-leapioraid_base_trace_log_init(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidIOCLogReq_t mpi_request;
- struct LeapioraidIOCLogRep_t mpi_reply;
- u16 ioc_status;
- u32 r;
-
- dinitprintk(ioc,
- pr_info("%s %s\n", ioc->name, __func__));
- if (ioc->log_buffer == NULL) {
- ioc->log_buffer =
- dma_alloc_coherent(&ioc->pdev->dev,
- (SYS_LOG_BUF_SIZE + SYS_LOG_BUF_RESERVE),
- &ioc->log_buffer_dma, GFP_KERNEL);
- if (!ioc->log_buffer) {
- pr_err("%s: Failed to allocate log_buffer at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return -ENOMEM;
- }
-
- }
- memset(ioc->log_buffer, 0, (SYS_LOG_BUF_SIZE + SYS_LOG_BUF_RESERVE));
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidIOCLogReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_LOG_INIT;
- mpi_request.BufAddr = ioc->log_buffer_dma;
- mpi_request.BufSize = SYS_LOG_BUF_SIZE;
- r = leapioraid_base_handshake_req_reply_wait(ioc,
- sizeof
- (struct LeapioraidIOCLogReq_t),
- (u32 *) &mpi_request,
- sizeof
- (struct LeapioraidIOCLogRep_t),
- (u16 *) &mpi_reply, 30);
- if (r != 0) {
- pr_err("%s %s: handshake failed (r=%d)\n",
- ioc->name, __func__, r);
- return r;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS || mpi_reply.IOCLogInfo) {
- pr_err("%s %s: failed\n", ioc->name,
- __func__);
- r = -EIO;
- }
- return r;
-}
-
-static int
-leapioraid_base_trace_log_exit(struct LEAPIORAID_ADAPTER *ioc)
-{
- if (ioc->log_buffer)
- dma_free_coherent(&ioc->pdev->dev, SYS_LOG_BUF_SIZE,
- ioc->log_buffer, ioc->log_buffer_dma);
- return 0;
-}
-
-u8
-leapioraid_port_enable_done(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply)
-{
- struct LeapioraidDefaultRep_t *mpi_reply;
- u16 ioc_status;
-
- if (ioc->port_enable_cmds.status == LEAPIORAID_CMD_NOT_USED)
- return 1;
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (!mpi_reply)
- return 1;
- if (mpi_reply->Function != LEAPIORAID_FUNC_PORT_ENABLE)
- return 1;
- ioc->port_enable_cmds.status &= ~LEAPIORAID_CMD_PENDING;
- ioc->port_enable_cmds.status |= LEAPIORAID_CMD_COMPLETE;
- ioc->port_enable_cmds.status |= LEAPIORAID_CMD_REPLY_VALID;
- memcpy(ioc->port_enable_cmds.reply, mpi_reply,
- mpi_reply->MsgLength * 4);
- ioc_status = le16_to_cpu(mpi_reply->IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS)
- ioc->port_enable_failed = 1;
- if (ioc->port_enable_cmds.status & LEAPIORAID_CMD_COMPLETE_ASYNC) {
- ioc->port_enable_cmds.status &= ~LEAPIORAID_CMD_COMPLETE_ASYNC;
- if (ioc_status == LEAPIORAID_IOCSTATUS_SUCCESS) {
- leapioraid_port_enable_complete(ioc);
- return 1;
- }
-
- ioc->start_scan_failed = ioc_status;
- ioc->start_scan = 0;
- return 1;
- }
- complete(&ioc->port_enable_cmds.done);
- return 1;
-}
-
-static int
-leapioraid_base_send_port_enable(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidPortEnableReq_t *mpi_request;
- struct LeapioraidPortEnableRep_t *mpi_reply;
- int r = 0;
- u16 smid;
- u16 ioc_status;
-
- pr_info("%s sending port enable !!\n", ioc->name);
- if (ioc->port_enable_cmds.status & LEAPIORAID_CMD_PENDING) {
- pr_err(
- "%s %s: internal command already in use\n", ioc->name,
- __func__);
- return -EAGAIN;
- }
- smid = leapioraid_base_get_smid(ioc, ioc->port_enable_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- return -EAGAIN;
- }
- ioc->port_enable_cmds.status = LEAPIORAID_CMD_PENDING;
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->port_enable_cmds.smid = smid;
- memset(mpi_request, 0, sizeof(struct LeapioraidPortEnableReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_PORT_ENABLE;
- init_completion(&ioc->port_enable_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->port_enable_cmds.done, 300 * HZ);
- if (!(ioc->port_enable_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- pr_err("%s %s: timeout\n",
- ioc->name, __func__);
- leapioraid_debug_dump_mf(mpi_request,
- sizeof(struct LeapioraidPortEnableReq_t) / 4);
- if (ioc->port_enable_cmds.status & LEAPIORAID_CMD_RESET)
- r = -EFAULT;
- else
- r = -ETIME;
- goto out;
- }
- mpi_reply = ioc->port_enable_cmds.reply;
- ioc_status = le16_to_cpu(mpi_reply->IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err(
- "%s %s: failed with (ioc_status=0x%08x)\n", ioc->name,
- __func__, ioc_status);
- r = -EFAULT;
- goto out;
- }
-out:
- ioc->port_enable_cmds.status = LEAPIORAID_CMD_NOT_USED;
- pr_info("%s port enable: %s\n", ioc->name, ((r == 0) ?
- "SUCCESS"
- :
- "FAILED"));
- return r;
-}
-
-int
-leapioraid_port_enable(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidPortEnableReq_t *mpi_request;
- u16 smid;
-
- pr_info("%s sending port enable !!\n", ioc->name);
- if (ioc->port_enable_cmds.status & LEAPIORAID_CMD_PENDING) {
- pr_err(
- "%s %s: internal command already in use\n", ioc->name,
- __func__);
- return -EAGAIN;
- }
- smid = leapioraid_base_get_smid(ioc, ioc->port_enable_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- return -EAGAIN;
- }
- ioc->drv_internal_flags |= LEAPIORAID_DRV_INERNAL_FIRST_PE_ISSUED;
- ioc->port_enable_cmds.status = LEAPIORAID_CMD_PENDING;
- ioc->port_enable_cmds.status |= LEAPIORAID_CMD_COMPLETE_ASYNC;
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->port_enable_cmds.smid = smid;
- memset(mpi_request, 0, sizeof(struct LeapioraidPortEnableReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_PORT_ENABLE;
- ioc->put_smid_default(ioc, smid);
- return 0;
-}
-
-static int
-leapioraid_base_determine_wait_on_discovery(struct LEAPIORAID_ADAPTER *ioc)
-{
- if (ioc->ir_firmware)
- return 1;
- if (!ioc->bios_pg3.BiosVersion)
- return 0;
- if ((ioc->bios_pg2.CurrentBootDeviceForm &
- LEAPIORAID_BIOSPAGE2_FORM_MASK) ==
- LEAPIORAID_BIOSPAGE2_FORM_NO_DEVICE_SPECIFIED &&
- (ioc->bios_pg2.ReqBootDeviceForm &
- LEAPIORAID_BIOSPAGE2_FORM_MASK) ==
- LEAPIORAID_BIOSPAGE2_FORM_NO_DEVICE_SPECIFIED &&
- (ioc->bios_pg2.ReqAltBootDeviceForm &
- LEAPIORAID_BIOSPAGE2_FORM_MASK) ==
- LEAPIORAID_BIOSPAGE2_FORM_NO_DEVICE_SPECIFIED)
- return 0;
- return 1;
-}
-
-static void
-leapioraid_base_unmask_events(struct LEAPIORAID_ADAPTER *ioc, u16 event)
-{
- u32 desired_event;
-
- if (event >= 128)
- return;
- desired_event = (1 << (event % 32));
- if (event < 32)
- ioc->event_masks[0] &= ~desired_event;
- else if (event < 64)
- ioc->event_masks[1] &= ~desired_event;
- else if (event < 96)
- ioc->event_masks[2] &= ~desired_event;
- else if (event < 128)
- ioc->event_masks[3] &= ~desired_event;
-}
-
-static int
-leapioraid_base_event_notification(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidEventNotificationReq_t *mpi_request;
- u16 smid;
- int r = 0;
- int i, issue_diag_reset = 0;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- if (ioc->base_cmds.status & LEAPIORAID_CMD_PENDING) {
- pr_err(
- "%s %s: internal command already in use\n", ioc->name,
- __func__);
- return -EAGAIN;
- }
- smid = leapioraid_base_get_smid(ioc, ioc->base_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- return -EAGAIN;
- }
- ioc->base_cmds.status = LEAPIORAID_CMD_PENDING;
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->base_cmds.smid = smid;
- memset(mpi_request, 0, sizeof(struct LeapioraidEventNotificationReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_EVENT_NOTIFICATION;
- mpi_request->VF_ID = 0;
- mpi_request->VP_ID = 0;
- for (i = 0; i < LEAPIORAID_EVENT_NOTIFY_EVENTMASK_WORDS; i++)
- mpi_request->EventMasks[i] = cpu_to_le32(ioc->event_masks[i]);
- init_completion(&ioc->base_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->base_cmds.done, 30 * HZ);
- if (!(ioc->base_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- pr_err("%s %s: timeout\n",
- ioc->name, __func__);
- leapioraid_debug_dump_mf(mpi_request,
- sizeof(struct LeapioraidEventNotificationReq_t) / 4);
- if (ioc->base_cmds.status & LEAPIORAID_CMD_RESET)
- r = -EFAULT;
- else
- issue_diag_reset = 1;
- } else
- dinitprintk(ioc, pr_info("%s %s: complete\n",
- ioc->name, __func__));
- ioc->base_cmds.status = LEAPIORAID_CMD_NOT_USED;
- if (issue_diag_reset) {
- if (ioc->drv_internal_flags & LEAPIORAID_DRV_INERNAL_FIRST_PE_ISSUED)
- return -EFAULT;
- if (leapioraid_base_check_for_fault_and_issue_reset(ioc))
- return -EFAULT;
- r = -EAGAIN;
- }
- return r;
-}
-
-void
-leapioraid_base_validate_event_type(struct LEAPIORAID_ADAPTER *ioc,
- u32 *event_type)
-{
- int i, j;
- u32 event_mask, desired_event;
- u8 send_update_to_fw;
-
- for (i = 0, send_update_to_fw = 0; i <
- LEAPIORAID_EVENT_NOTIFY_EVENTMASK_WORDS; i++) {
- event_mask = ~event_type[i];
- desired_event = 1;
- for (j = 0; j < 32; j++) {
- if (!(event_mask & desired_event) &&
- (ioc->event_masks[i] & desired_event)) {
- ioc->event_masks[i] &= ~desired_event;
- send_update_to_fw = 1;
- }
- desired_event = (desired_event << 1);
- }
- }
- if (!send_update_to_fw)
- return;
- mutex_lock(&ioc->base_cmds.mutex);
- leapioraid_base_event_notification(ioc);
- mutex_unlock(&ioc->base_cmds.mutex);
-}
-
-int
-leapioraid_base_make_ioc_ready(struct LEAPIORAID_ADAPTER *ioc,
- enum reset_type type)
-{
- u32 ioc_state;
- int rc;
- int count;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- if (!leapioraid_base_pci_device_is_available(ioc))
- return 0;
- ioc_state = leapioraid_base_get_iocstate(ioc, 0);
- dhsprintk(ioc, pr_info("%s %s: ioc_state(0x%08x)\n",
- ioc->name, __func__, ioc_state));
- count = 0;
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_RESET) {
- while ((ioc_state & LEAPIORAID_IOC_STATE_MASK) !=
- LEAPIORAID_IOC_STATE_READY) {
- if (count++ == 10) {
- pr_err(
- "%s %s: failed going to ready state (ioc_state=0x%x)\n",
- ioc->name, __func__, ioc_state);
- return -EFAULT;
- }
- ssleep(1);
- ioc_state = leapioraid_base_get_iocstate(ioc, 0);
- }
- }
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_READY)
- return 0;
- if (ioc_state & LEAPIORAID_DOORBELL_USED) {
- pr_info("%s unexpected doorbell active!\n",
- ioc->name);
- goto issue_diag_reset;
- }
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_FAULT) {
- leapioraid_print_fault_code(ioc, ioc_state &
- LEAPIORAID_DOORBELL_DATA_MASK);
- goto issue_diag_reset;
- }
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_COREDUMP) {
- if (ioc->ioc_coredump_loop != 0xFF) {
- leapioraid_base_coredump_info(ioc, ioc_state &
- LEAPIORAID_DOORBELL_DATA_MASK);
- leapioraid_base_wait_for_coredump_completion(ioc,
- __func__);
- }
- goto issue_diag_reset;
- }
- if (type == FORCE_BIG_HAMMER)
- goto issue_diag_reset;
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_OPERATIONAL)
- if (!
- (leapioraid_base_send_ioc_reset
- (ioc, LEAPIORAID_FUNC_IOC_MESSAGE_UNIT_RESET, 15))) {
- return 0;
- }
-issue_diag_reset:
- rc = leapioraid_base_diag_reset(ioc);
- return rc;
-}
-
-static int
-leapioraid_base_make_ioc_operational(struct LEAPIORAID_ADAPTER *ioc)
-{
- int r, rc, i, index;
- unsigned long flags;
- u32 reply_address;
- u16 smid;
- struct leapioraid_tr_list *delayed_tr, *delayed_tr_next;
- struct leapioraid_sc_list *delayed_sc, *delayed_sc_next;
- struct leapioraid_event_ack_list *delayed_event_ack, *delayed_event_ack_next;
- struct leapioraid_adapter_reply_queue *reply_q;
- union LeapioraidRepDescUnion_t *reply_post_free_contig;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- list_for_each_entry_safe(delayed_tr, delayed_tr_next,
- &ioc->delayed_tr_list, list) {
- list_del(&delayed_tr->list);
- kfree(delayed_tr);
- }
- list_for_each_entry_safe(delayed_tr, delayed_tr_next,
- &ioc->delayed_tr_volume_list, list) {
- list_del(&delayed_tr->list);
- kfree(delayed_tr);
- }
- list_for_each_entry_safe(delayed_tr, delayed_tr_next,
- &ioc->delayed_internal_tm_list, list) {
- list_del(&delayed_tr->list);
- kfree(delayed_tr);
- }
- list_for_each_entry_safe(delayed_sc, delayed_sc_next,
- &ioc->delayed_sc_list, list) {
- list_del(&delayed_sc->list);
- kfree(delayed_sc);
- }
- list_for_each_entry_safe(delayed_event_ack, delayed_event_ack_next,
- &ioc->delayed_event_ack_list, list) {
- list_del(&delayed_event_ack->list);
- kfree(delayed_event_ack);
- }
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- INIT_LIST_HEAD(&ioc->hpr_free_list);
- smid = ioc->hi_priority_smid;
- for (i = 0; i < ioc->hi_priority_depth; i++, smid++) {
- ioc->hpr_lookup[i].cb_idx = 0xFF;
- ioc->hpr_lookup[i].smid = smid;
- list_add_tail(&ioc->hpr_lookup[i].tracker_list,
- &ioc->hpr_free_list);
- }
- INIT_LIST_HEAD(&ioc->internal_free_list);
- smid = ioc->internal_smid;
- for (i = 0; i < ioc->internal_depth; i++, smid++) {
- ioc->internal_lookup[i].cb_idx = 0xFF;
- ioc->internal_lookup[i].smid = smid;
- list_add_tail(&ioc->internal_lookup[i].tracker_list,
- &ioc->internal_free_list);
- }
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
- for (i = 0, reply_address = (u32) ioc->reply_dma;
- i < ioc->reply_free_queue_depth; i++, reply_address +=
- ioc->reply_sz) {
- ioc->reply_free[i] = cpu_to_le32(reply_address);
- }
- if (ioc->is_driver_loading)
- leapioraid_base_assign_reply_queues(ioc);
- index = 0;
- reply_post_free_contig = ioc->reply_post[0].reply_post_free;
- list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
- if (ioc->rdpq_array_enable) {
- reply_q->reply_post_free =
- ioc->reply_post[index++].reply_post_free;
- } else {
- reply_q->reply_post_free = reply_post_free_contig;
- reply_post_free_contig += ioc->reply_post_queue_depth;
- }
- reply_q->reply_post_host_index = 0;
- for (i = 0; i < ioc->reply_post_queue_depth; i++)
- reply_q->reply_post_free[i].Words =
- cpu_to_le64(ULLONG_MAX);
- if (!leapioraid_base_is_controller_msix_enabled(ioc))
- goto skip_init_reply_post_free_queue;
- }
-skip_init_reply_post_free_queue:
- r = leapioraid_base_send_ioc_init(ioc);
- if (r) {
- if (!ioc->is_driver_loading)
- return r;
- rc = leapioraid_base_check_for_fault_and_issue_reset(ioc);
- if (rc || (leapioraid_base_send_ioc_init(ioc)))
- return r;
- }
- ioc->reply_free_host_index = ioc->reply_free_queue_depth - 1;
- writel(ioc->reply_free_host_index, &ioc->chip->ReplyFreeHostIndex);
- list_for_each_entry(reply_q, &ioc->reply_queue_list, list) {
- if (ioc->combined_reply_queue) {
- for (i = 0; i < ioc->nc_reply_index_count; i++)
- writel((reply_q->msix_index & 7) <<
- LEAPIORAID_RPHI_MSIX_INDEX_SHIFT,
- ioc->replyPostRegisterIndex[i]);
- } else {
- writel(reply_q->msix_index << LEAPIORAID_RPHI_MSIX_INDEX_SHIFT,
- &ioc->chip->ReplyPostHostIndex);
- }
- if (!leapioraid_base_is_controller_msix_enabled(ioc))
- goto skip_init_reply_post_host_index;
- }
-skip_init_reply_post_host_index:
- leapioraid_base_unmask_interrupts(ioc);
- r = leapioraid_base_display_fwpkg_version(ioc);
- if (r)
- return r;
- r = leapioraid_base_static_config_pages(ioc);
- if (r)
- return r;
- r = leapioraid_base_event_notification(ioc);
- if (r)
- return r;
- leapioraid_base_start_hba_unplug_watchdog(ioc);
- if (!ioc->shost_recovery) {
- ioc->wait_for_discovery_to_complete =
- leapioraid_base_determine_wait_on_discovery(ioc);
- return r;
- }
- r = leapioraid_base_send_port_enable(ioc);
- if (r)
- return r;
- return r;
-}
-
-void
-leapioraid_base_free_resources(struct LEAPIORAID_ADAPTER *ioc)
-{
- dexitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- if (!ioc->chip_phys)
- return;
- leapioraid_base_mask_interrupts(ioc);
- ioc->shost_recovery = 1;
- leapioraid_base_make_ioc_ready(ioc, SOFT_RESET);
- ioc->shost_recovery = 0;
- leapioraid_base_unmap_resources(ioc);
-}
-
-int
-leapioraid_base_attach(struct LEAPIORAID_ADAPTER *ioc)
-{
- int r, rc, i;
- int cpu_id, last_cpu_id = 0;
-
- dinitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- ioc->cpu_count = num_online_cpus();
- for_each_online_cpu(cpu_id)
- last_cpu_id = cpu_id;
- ioc->cpu_msix_table_sz = last_cpu_id + 1;
- ioc->cpu_msix_table = kzalloc(ioc->cpu_msix_table_sz, GFP_KERNEL);
- ioc->reply_queue_count = 1;
- if (!ioc->cpu_msix_table) {
- r = -ENOMEM;
- goto out_free_resources;
- }
- ioc->rdpq_array_enable_assigned = 0;
- ioc->use_32bit_dma = 0;
- ioc->dma_mask = 64;
- ioc->base_readl = &leapioraid_base_readl_aero;
- ioc->smp_affinity_enable = smp_affinity_enable;
- r = leapioraid_base_map_resources(ioc);
- if (r)
- goto out_free_resources;
- pci_set_drvdata(ioc->pdev, ioc->shost);
- r = leapioraid_base_get_ioc_facts(ioc);
- if (r) {
- rc = leapioraid_base_check_for_fault_and_issue_reset(ioc);
- if (rc || (leapioraid_base_get_ioc_facts(ioc)))
- goto out_free_resources;
- }
-
- ioc->build_sg_scmd = &leapioraid_base_build_sg_scmd_ieee;
- ioc->build_sg = &leapioraid_base_build_sg_ieee;
- ioc->build_zero_len_sge =
- &leapioraid_base_build_zero_len_sge_ieee;
- ioc->sge_size_ieee = sizeof(struct LEAPIORAID_IEEE_SGE_SIMPLE64);
- if (ioc->high_iops_queues)
- ioc->get_msix_index_for_smlio =
- &leapioraid_base_get_high_iops_msix_index;
- else
- ioc->get_msix_index_for_smlio = &leapioraid_base_get_msix_index;
-
- if (ioc->atomic_desc_capable) {
- ioc->put_smid_default =
- &leapioraid_base_put_smid_default_atomic;
- ioc->put_smid_scsi_io =
- &leapioraid_base_put_smid_scsi_io_atomic;
- ioc->put_smid_fast_path =
- &leapioraid_base_put_smid_fast_path_atomic;
- ioc->put_smid_hi_priority =
- &leapioraid_base_put_smid_hi_priority_atomic;
- } else {
- ioc->put_smid_default = &leapioraid_base_put_smid_default;
- ioc->put_smid_scsi_io = &leapioraid_base_put_smid_scsi_io;
- ioc->put_smid_fast_path = &leapioraid_base_put_smid_fast_path;
- ioc->put_smid_hi_priority =
- &leapioraid_base_put_smid_hi_priority;
- }
- ioc->build_sg_mpi = &leapioraid_base_build_sg;
- ioc->build_zero_len_sge_mpi = &leapioraid_base_build_zero_len_sge;
- r = leapioraid_base_make_ioc_ready(ioc, SOFT_RESET);
- if (r)
- goto out_free_resources;
- if (ioc->open_pcie_trace) {
- r = leapioraid_base_trace_log_init(ioc);
- if (r) {
- pr_err("log init failed\n");
- goto out_free_resources;
- }
- }
- ioc->pfacts = kcalloc(ioc->facts.NumberOfPorts,
- sizeof(struct leapioraid_port_facts), GFP_KERNEL);
- if (!ioc->pfacts) {
- r = -ENOMEM;
- goto out_free_resources;
- }
- for (i = 0; i < ioc->facts.NumberOfPorts; i++) {
- r = leapioraid_base_get_port_facts(ioc, i);
- if (r) {
- rc = leapioraid_base_check_for_fault_and_issue_reset
- (ioc);
- if (rc || (leapioraid_base_get_port_facts(ioc, i)))
- goto out_free_resources;
- }
- }
- r = leapioraid_base_allocate_memory_pools(ioc);
- if (r)
- goto out_free_resources;
- if (irqpoll_weight > 0)
- ioc->thresh_hold = irqpoll_weight;
- else
- ioc->thresh_hold = ioc->hba_queue_depth / 4;
- leapioraid_base_init_irqpolls(ioc);
- init_waitqueue_head(&ioc->reset_wq);
- ioc->pd_handles_sz = (ioc->facts.MaxDevHandle / 8);
- if (ioc->facts.MaxDevHandle % 8)
- ioc->pd_handles_sz++;
- ioc->pd_handles = kzalloc(ioc->pd_handles_sz, GFP_KERNEL);
- if (!ioc->pd_handles) {
- r = -ENOMEM;
- goto out_free_resources;
- }
- ioc->blocking_handles = kzalloc(ioc->pd_handles_sz, GFP_KERNEL);
- if (!ioc->blocking_handles) {
- r = -ENOMEM;
- goto out_free_resources;
- }
- ioc->pend_os_device_add_sz = (ioc->facts.MaxDevHandle / 8);
- if (ioc->facts.MaxDevHandle % 8)
- ioc->pend_os_device_add_sz++;
- ioc->pend_os_device_add = kzalloc(ioc->pend_os_device_add_sz,
- GFP_KERNEL);
- if (!ioc->pend_os_device_add)
- goto out_free_resources;
- ioc->device_remove_in_progress_sz = ioc->pend_os_device_add_sz;
- ioc->device_remove_in_progress =
- kzalloc(ioc->device_remove_in_progress_sz, GFP_KERNEL);
- if (!ioc->device_remove_in_progress)
- goto out_free_resources;
- ioc->tm_tr_retry_sz = ioc->facts.MaxDevHandle * sizeof(u8);
- ioc->tm_tr_retry = kzalloc(ioc->tm_tr_retry_sz, GFP_KERNEL);
- if (!ioc->tm_tr_retry)
- goto out_free_resources;
- ioc->fwfault_debug = leapioraid_fwfault_debug;
- mutex_init(&ioc->base_cmds.mutex);
- ioc->base_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
- ioc->base_cmds.status = LEAPIORAID_CMD_NOT_USED;
- ioc->port_enable_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
- ioc->port_enable_cmds.status = LEAPIORAID_CMD_NOT_USED;
- ioc->transport_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
- ioc->transport_cmds.status = LEAPIORAID_CMD_NOT_USED;
- mutex_init(&ioc->transport_cmds.mutex);
- ioc->scsih_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
- ioc->scsih_cmds.status = LEAPIORAID_CMD_NOT_USED;
- mutex_init(&ioc->scsih_cmds.mutex);
- ioc->tm_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
- ioc->tm_cmds.status = LEAPIORAID_CMD_NOT_USED;
- mutex_init(&ioc->tm_cmds.mutex);
- ioc->config_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
- ioc->config_cmds.status = LEAPIORAID_CMD_NOT_USED;
- mutex_init(&ioc->config_cmds.mutex);
- ioc->ctl_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
- ioc->ctl_cmds.sense = kzalloc(SCSI_SENSE_BUFFERSIZE, GFP_KERNEL);
- ioc->ctl_cmds.status = LEAPIORAID_CMD_NOT_USED;
- mutex_init(&ioc->ctl_cmds.mutex);
-
- if (!ioc->base_cmds.reply || !ioc->port_enable_cmds.reply ||
- !ioc->transport_cmds.reply || !ioc->scsih_cmds.reply ||
- !ioc->tm_cmds.reply || !ioc->config_cmds.reply ||
- !ioc->ctl_cmds.reply || !ioc->ctl_cmds.sense) {
- r = -ENOMEM;
- goto out_free_resources;
- }
- for (i = 0; i < LEAPIORAID_EVENT_NOTIFY_EVENTMASK_WORDS; i++)
- ioc->event_masks[i] = -1;
- leapioraid_base_unmask_events(ioc, LEAPIORAID_EVENT_SAS_DISCOVERY);
- leapioraid_base_unmask_events(ioc,
- LEAPIORAID_EVENT_SAS_BROADCAST_PRIMITIVE);
- leapioraid_base_unmask_events(ioc,
- LEAPIORAID_EVENT_SAS_TOPOLOGY_CHANGE_LIST);
- leapioraid_base_unmask_events(ioc,
- LEAPIORAID_EVENT_SAS_DEVICE_STATUS_CHANGE);
- leapioraid_base_unmask_events(ioc,
- LEAPIORAID_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE);
- leapioraid_base_unmask_events(ioc,
- LEAPIORAID_EVENT_IR_CONFIGURATION_CHANGE_LIST);
- leapioraid_base_unmask_events(ioc, LEAPIORAID_EVENT_IR_VOLUME);
- leapioraid_base_unmask_events(ioc, LEAPIORAID_EVENT_IR_PHYSICAL_DISK);
- leapioraid_base_unmask_events(ioc, LEAPIORAID_EVENT_IR_OPERATION_STATUS);
- leapioraid_base_unmask_events(ioc, LEAPIORAID_EVENT_LOG_ENTRY_ADDED);
- leapioraid_base_unmask_events(ioc, LEAPIORAID_EVENT_TEMP_THRESHOLD);
- leapioraid_base_unmask_events(ioc,
- LEAPIORAID_EVENT_SAS_DEVICE_DISCOVERY_ERROR);
- r = leapioraid_base_make_ioc_operational(ioc);
- if (r == -EAGAIN)
- r = leapioraid_base_make_ioc_operational(ioc);
- if (r)
- goto out_free_resources;
- memcpy(&ioc->prev_fw_facts, &ioc->facts,
- sizeof(struct leapioraid_facts));
- ioc->non_operational_loop = 0;
- ioc->ioc_coredump_loop = 0;
- ioc->got_task_abort_from_ioctl = 0;
- ioc->got_task_abort_from_sysfs = 0;
- return 0;
-out_free_resources:
- ioc->remove_host = 1;
- leapioraid_base_free_resources(ioc);
- leapioraid_base_release_memory_pools(ioc);
- pci_set_drvdata(ioc->pdev, NULL);
- kfree(ioc->cpu_msix_table);
- kfree(ioc->pd_handles);
- kfree(ioc->blocking_handles);
- kfree(ioc->tm_tr_retry);
- kfree(ioc->device_remove_in_progress);
- kfree(ioc->pend_os_device_add);
- kfree(ioc->tm_cmds.reply);
- kfree(ioc->transport_cmds.reply);
- kfree(ioc->scsih_cmds.reply);
- kfree(ioc->config_cmds.reply);
- kfree(ioc->base_cmds.reply);
- kfree(ioc->port_enable_cmds.reply);
- kfree(ioc->ctl_cmds.reply);
- kfree(ioc->ctl_cmds.sense);
- kfree(ioc->pfacts);
- ioc->ctl_cmds.reply = NULL;
- ioc->base_cmds.reply = NULL;
- ioc->tm_cmds.reply = NULL;
- ioc->scsih_cmds.reply = NULL;
- ioc->transport_cmds.reply = NULL;
- ioc->config_cmds.reply = NULL;
- ioc->pfacts = NULL;
- return r;
-}
-
-void
-leapioraid_base_detach(struct LEAPIORAID_ADAPTER *ioc)
-{
- dexitprintk(ioc, pr_info("%s %s\n", ioc->name,
- __func__));
- if (ioc->open_pcie_trace)
- leapioraid_base_trace_log_exit(ioc);
- leapioraid_base_stop_watchdog(ioc);
- leapioraid_base_stop_hba_unplug_watchdog(ioc);
- leapioraid_base_free_resources(ioc);
- leapioraid_base_release_memory_pools(ioc);
- leapioraid_free_enclosure_list(ioc);
- pci_set_drvdata(ioc->pdev, NULL);
- kfree(ioc->cpu_msix_table);
- kfree(ioc->pd_handles);
- kfree(ioc->blocking_handles);
- kfree(ioc->tm_tr_retry);
- kfree(ioc->device_remove_in_progress);
- kfree(ioc->pend_os_device_add);
- kfree(ioc->pfacts);
- kfree(ioc->ctl_cmds.reply);
- kfree(ioc->ctl_cmds.sense);
- kfree(ioc->base_cmds.reply);
- kfree(ioc->port_enable_cmds.reply);
- kfree(ioc->tm_cmds.reply);
- kfree(ioc->transport_cmds.reply);
- kfree(ioc->scsih_cmds.reply);
- kfree(ioc->config_cmds.reply);
-}
-
-static void
-leapioraid_base_clear_outstanding_leapioraid_commands(struct LEAPIORAID_ADAPTER
- *ioc)
-{
- struct leapioraid_internal_qcmd *scsih_qcmd, *scsih_qcmd_next;
- unsigned long flags;
-
- if (ioc->transport_cmds.status & LEAPIORAID_CMD_PENDING) {
- ioc->transport_cmds.status |= LEAPIORAID_CMD_RESET;
- leapioraid_base_free_smid(ioc, ioc->transport_cmds.smid);
- complete(&ioc->transport_cmds.done);
- }
- if (ioc->base_cmds.status & LEAPIORAID_CMD_PENDING) {
- ioc->base_cmds.status |= LEAPIORAID_CMD_RESET;
- leapioraid_base_free_smid(ioc, ioc->base_cmds.smid);
- complete(&ioc->base_cmds.done);
- }
- if (ioc->port_enable_cmds.status & LEAPIORAID_CMD_PENDING) {
- ioc->port_enable_failed = 1;
- ioc->port_enable_cmds.status |= LEAPIORAID_CMD_RESET;
- leapioraid_base_free_smid(ioc, ioc->port_enable_cmds.smid);
- if (ioc->is_driver_loading) {
- ioc->start_scan_failed =
- LEAPIORAID_IOCSTATUS_INTERNAL_ERROR;
- ioc->start_scan = 0;
- } else
- complete(&ioc->port_enable_cmds.done);
- }
- if (ioc->config_cmds.status & LEAPIORAID_CMD_PENDING) {
- ioc->config_cmds.status |= LEAPIORAID_CMD_RESET;
- leapioraid_base_free_smid(ioc, ioc->config_cmds.smid);
- ioc->config_cmds.smid = USHORT_MAX;
- complete(&ioc->config_cmds.done);
- }
- spin_lock_irqsave(&ioc->scsih_q_internal_lock, flags);
- list_for_each_entry_safe(scsih_qcmd, scsih_qcmd_next,
- &ioc->scsih_q_intenal_cmds, list) {
- if ((scsih_qcmd->status) & LEAPIORAID_CMD_PENDING) {
- scsih_qcmd->status |= LEAPIORAID_CMD_RESET;
- leapioraid_base_free_smid(ioc, scsih_qcmd->smid);
- }
- }
- spin_unlock_irqrestore(&ioc->scsih_q_internal_lock, flags);
-}
-
-static void
-leapioraid_base_reset_handler(struct LEAPIORAID_ADAPTER *ioc, int reset_phase)
-{
- leapioraid_scsihost_reset_handler(ioc, reset_phase);
- leapioraid_ctl_reset_handler(ioc, reset_phase);
- switch (reset_phase) {
- case LEAPIORAID_IOC_PRE_RESET_PHASE:
- dtmprintk(ioc, pr_info("%s %s: LEAPIORAID_IOC_PRE_RESET_PHASE\n",
- ioc->name, __func__));
- break;
- case LEAPIORAID_IOC_AFTER_RESET_PHASE:
- dtmprintk(ioc, pr_info("%s %s: LEAPIORAID_IOC_AFTER_RESET_PHASE\n",
- ioc->name, __func__));
- leapioraid_base_clear_outstanding_leapioraid_commands(ioc);
- break;
- case LEAPIORAID_IOC_DONE_RESET_PHASE:
- dtmprintk(ioc, pr_info("%s %s: LEAPIORAID_IOC_DONE_RESET_PHASE\n",
- ioc->name, __func__));
- break;
- }
-}
-
-void
-leapioraid_wait_for_commands_to_complete(struct LEAPIORAID_ADAPTER *ioc)
-{
- u32 ioc_state;
- unsigned long flags;
- u16 i;
- struct leapioraid_scsiio_tracker *st;
-
- ioc->pending_io_count = 0;
- if (!leapioraid_base_pci_device_is_available(ioc)) {
- pr_err("%s %s: pci error recovery reset or pci device unplug occurred\n",
- ioc->name, __func__);
- return;
- }
- ioc_state = leapioraid_base_get_iocstate(ioc, 0);
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) !=
- LEAPIORAID_IOC_STATE_OPERATIONAL)
- return;
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- for (i = 1; i <= ioc->scsiio_depth; i++) {
- st = leapioraid_get_st_from_smid(ioc, i);
- if (st && st->smid != 0) {
- if (st->cb_idx != 0xFF)
- ioc->pending_io_count++;
- }
- }
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
- if (!ioc->pending_io_count)
- return;
- wait_event_timeout(ioc->reset_wq, ioc->pending_io_count == 0, 10 * HZ);
-}
-
-static int
-leapioraid_base_check_ioc_facts_changes(struct LEAPIORAID_ADAPTER *ioc)
-{
- u16 pd_handles_sz, tm_tr_retry_sz;
- void *pd_handles = NULL, *blocking_handles = NULL;
- void *pend_os_device_add = NULL, *device_remove_in_progress = NULL;
- u8 *tm_tr_retry = NULL;
- struct leapioraid_facts *old_facts = &ioc->prev_fw_facts;
-
- if (ioc->facts.MaxDevHandle > old_facts->MaxDevHandle) {
- pd_handles_sz = (ioc->facts.MaxDevHandle / 8);
- if (ioc->facts.MaxDevHandle % 8)
- pd_handles_sz++;
- pd_handles = krealloc(ioc->pd_handles, pd_handles_sz,
- GFP_KERNEL);
- if (!pd_handles) {
- pr_err(
- "%s Unable to allocate the memory for pd_handles of sz: %d\n",
- ioc->name, pd_handles_sz);
- return -ENOMEM;
- }
- memset(pd_handles + ioc->pd_handles_sz, 0,
- (pd_handles_sz - ioc->pd_handles_sz));
- ioc->pd_handles = pd_handles;
- blocking_handles =
- krealloc(ioc->blocking_handles, pd_handles_sz, GFP_KERNEL);
- if (!blocking_handles) {
- pr_err(
- "%s Unable to allocate the memory for blocking_handles of sz: %d\n",
- ioc->name, pd_handles_sz);
- return -ENOMEM;
- }
- memset(blocking_handles + ioc->pd_handles_sz, 0,
- (pd_handles_sz - ioc->pd_handles_sz));
- ioc->blocking_handles = blocking_handles;
- ioc->pd_handles_sz = pd_handles_sz;
- pend_os_device_add =
- krealloc(ioc->pend_os_device_add, pd_handles_sz,
- GFP_KERNEL);
- if (!pend_os_device_add) {
- pr_err(
- "%s Unable to allocate the memory for pend_os_device_add of sz: %d\n",
- ioc->name, pd_handles_sz);
- return -ENOMEM;
- }
- memset(pend_os_device_add + ioc->pend_os_device_add_sz, 0,
- (pd_handles_sz - ioc->pend_os_device_add_sz));
- ioc->pend_os_device_add = pend_os_device_add;
- ioc->pend_os_device_add_sz = pd_handles_sz;
- device_remove_in_progress =
- krealloc(ioc->device_remove_in_progress, pd_handles_sz,
- GFP_KERNEL);
- if (!device_remove_in_progress) {
- pr_err(
- "%s Unable to allocate the memory for device_remove_in_progress of sz: %d\n",
- ioc->name, pd_handles_sz);
- return -ENOMEM;
- }
- memset(device_remove_in_progress +
- ioc->device_remove_in_progress_sz, 0,
- (pd_handles_sz - ioc->device_remove_in_progress_sz));
- ioc->device_remove_in_progress = device_remove_in_progress;
- ioc->device_remove_in_progress_sz = pd_handles_sz;
- tm_tr_retry_sz = ioc->facts.MaxDevHandle * sizeof(u8);
- tm_tr_retry = krealloc(ioc->tm_tr_retry, tm_tr_retry_sz,
- GFP_KERNEL);
- if (!tm_tr_retry) {
- pr_err(
- "%s Unable to allocate the memory for tm_tr_retry of sz: %d\n",
- ioc->name, tm_tr_retry_sz);
- return -ENOMEM;
- }
- memset(tm_tr_retry + ioc->tm_tr_retry_sz, 0,
- (tm_tr_retry_sz - ioc->tm_tr_retry_sz));
- ioc->tm_tr_retry = tm_tr_retry;
- ioc->tm_tr_retry_sz = tm_tr_retry_sz;
- }
- memcpy(&ioc->prev_fw_facts, &ioc->facts,
- sizeof(struct leapioraid_facts));
- return 0;
-}
-
-int
-leapioraid_base_hard_reset_handler(
- struct LEAPIORAID_ADAPTER *ioc,
- enum reset_type type)
-{
- int r;
- unsigned long flags;
-
- dtmprintk(ioc, pr_info("%s %s: enter\n", ioc->name,
- __func__));
- if (!mutex_trylock(&ioc->reset_in_progress_mutex)) {
- do {
- ssleep(1);
- } while (ioc->shost_recovery == 1);
- dtmprintk(ioc,
- pr_info("%s %s: exit\n", ioc->name,
- __func__));
- return ioc->ioc_reset_status;
- }
- if (!leapioraid_base_pci_device_is_available(ioc)) {
- pr_err(
- "%s %s: pci error recovery reset or pci device unplug occurred\n",
- ioc->name, __func__);
- if (leapioraid_base_pci_device_is_unplugged(ioc)) {
- leapioraid_base_pause_mq_polling(ioc);
- ioc->schedule_dead_ioc_flush_running_cmds(ioc);
- leapioraid_base_resume_mq_polling(ioc);
- }
- r = 0;
- goto out_unlocked;
- }
- leapioraid_halt_firmware(ioc, 0);
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
- ioc->shost_recovery = 1;
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
- leapioraid_base_get_iocstate(ioc, 0);
- leapioraid_base_reset_handler(ioc, LEAPIORAID_IOC_PRE_RESET_PHASE);
- leapioraid_wait_for_commands_to_complete(ioc);
- leapioraid_base_mask_interrupts(ioc);
- leapioraid_base_pause_mq_polling(ioc);
- r = leapioraid_base_make_ioc_ready(ioc, type);
- if (r)
- goto out;
- leapioraid_base_reset_handler(ioc, LEAPIORAID_IOC_AFTER_RESET_PHASE);
- if (ioc->is_driver_loading && ioc->port_enable_failed) {
- ioc->remove_host = 1;
- r = -EFAULT;
- goto out;
- }
- r = leapioraid_base_get_ioc_facts(ioc);
- if (r)
- goto out;
- r = leapioraid_base_check_ioc_facts_changes(ioc);
- if (r) {
- pr_err(
- "%s Some of the parameters got changed in this\n\t\t"
- "new firmware image and it requires system reboot\n",
- ioc->name);
- goto out;
- }
- if (ioc->rdpq_array_enable && !ioc->rdpq_array_capable)
- panic(
- "%s: Issue occurred with flashing controller firmware.\n\t\t"
- "Please reboot the system and ensure that the correct\n\t\t"
- "firmware version is running\n",
- ioc->name);
- r = leapioraid_base_make_ioc_operational(ioc);
- if (!r)
- leapioraid_base_reset_handler(ioc, LEAPIORAID_IOC_DONE_RESET_PHASE);
-out:
- pr_info("%s %s: %s\n",
- ioc->name, __func__, ((r == 0) ? "SUCCESS" : "FAILED"));
- spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
- ioc->ioc_reset_status = r;
- ioc->shost_recovery = 0;
- spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
- ioc->ioc_reset_count++;
- mutex_unlock(&ioc->reset_in_progress_mutex);
-#if defined(DISABLE_RESET_SUPPORT)
- if (r != 0) {
- struct task_struct *p;
-
- ioc->remove_host = 1;
- ioc->schedule_dead_ioc_flush_running_cmds(ioc);
- p = kthread_run(leapioraid_remove_dead_ioc_func, ioc,
- "leapioraid_dead_ioc_%d", ioc->id);
- if (IS_ERR(p))
- pr_err(
- "%s %s: Running leapioraid_dead_ioc thread failed !!!!\n",
- ioc->name, __func__);
- else
- pr_err(
- "%s %s: Running leapioraid_dead_ioc thread success !!!!\n",
- ioc->name, __func__);
- }
-#else
- if (r != 0)
- ioc->schedule_dead_ioc_flush_running_cmds(ioc);
-#endif
- leapioraid_base_resume_mq_polling(ioc);
-out_unlocked:
- dtmprintk(ioc, pr_info("%s %s: exit\n", ioc->name,
- __func__));
- return r;
-}
-
-struct config_request {
- u16 sz;
- void *page;
- dma_addr_t page_dma;
-};
-
-static void
-leapioraid_config_display_some_debug(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- char *calling_function_name,
- struct LeapioraidDefaultRep_t *mpi_reply)
-{
- struct LeapioraidCfgReq_t *mpi_request;
- char *desc = NULL;
-
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- switch (mpi_request->Header.PageType & LEAPIORAID_CONFIG_PAGETYPE_MASK) {
- case LEAPIORAID_CONFIG_PAGETYPE_IO_UNIT:
- desc = "io_unit";
- break;
- case LEAPIORAID_CONFIG_PAGETYPE_IOC:
- desc = "ioc";
- break;
- case LEAPIORAID_CONFIG_PAGETYPE_BIOS:
- desc = "bios";
- break;
- case LEAPIORAID_CONFIG_PAGETYPE_RAID_VOLUME:
- desc = "raid_volume";
- break;
- case LEAPIORAID_CONFIG_PAGETYPE_MANUFACTURING:
- desc = "manufacturing";
- break;
- case LEAPIORAID_CONFIG_PAGETYPE_RAID_PHYSDISK:
- desc = "physdisk";
- break;
- case LEAPIORAID_CONFIG_PAGETYPE_EXTENDED:
- switch (mpi_request->ExtPageType) {
- case LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_IO_UNIT:
- desc = "sas_io_unit";
- break;
- case LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_EXPANDER:
- desc = "sas_expander";
- break;
- case LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_DEVICE:
- desc = "sas_device";
- break;
- case LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_PHY:
- desc = "sas_phy";
- break;
- case LEAPIORAID_CONFIG_EXTPAGETYPE_LOG:
- desc = "log";
- break;
- case LEAPIORAID_CONFIG_EXTPAGETYPE_ENCLOSURE:
- desc = "enclosure";
- break;
- case LEAPIORAID_CONFIG_EXTPAGETYPE_RAID_CONFIG:
- desc = "raid_config";
- break;
- case LEAPIORAID_CONFIG_EXTPAGETYPE_DRIVER_MAPPING:
- desc = "driver_mapping";
- break;
- case LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_PORT:
- desc = "sas_port";
- break;
- case LEAPIORAID_CONFIG_EXTPAGETYPE_EXT_MANUFACTURING:
- desc = "ext_manufacturing";
- break;
- }
- break;
- }
- if (!desc)
- return;
- pr_info("%s %s: %s(%d), action(%d), form(0x%08x), smid(%d)\n",
- ioc->name, calling_function_name, desc,
- mpi_request->Header.PageNumber, mpi_request->Action,
- le32_to_cpu(mpi_request->PageAddress), smid);
- if (!mpi_reply)
- return;
- if (mpi_reply->IOCStatus || mpi_reply->IOCLogInfo)
- pr_err(
- "%s \tiocstatus(0x%04x), loginfo(0x%08x)\n",
- ioc->name, le16_to_cpu(mpi_reply->IOCStatus),
- le32_to_cpu(mpi_reply->IOCLogInfo));
-}
-
-static int
-leapioraid_config_alloc_config_dma_memory(struct LEAPIORAID_ADAPTER *ioc,
- struct config_request *mem)
-{
- int r = 0;
-
- if (mem->sz > ioc->config_page_sz) {
- mem->page = dma_alloc_coherent(&ioc->pdev->dev, mem->sz,
- &mem->page_dma, GFP_KERNEL);
- if (!mem->page)
- r = -ENOMEM;
- } else {
- mem->page = ioc->config_page;
- mem->page_dma = ioc->config_page_dma;
- }
- ioc->config_vaddr = mem->page;
- return r;
-}
-
-static void
-leapioraid_config_free_config_dma_memory(struct LEAPIORAID_ADAPTER *ioc,
- struct config_request *mem)
-{
- if (mem->sz > ioc->config_page_sz)
- dma_free_coherent(&ioc->pdev->dev, mem->sz, mem->page,
- mem->page_dma);
-}
-
-u8
-leapioraid_config_done(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid, u8 msix_index,
- u32 reply)
-{
- struct LeapioraidDefaultRep_t *mpi_reply;
-
- if (ioc->config_cmds.status == LEAPIORAID_CMD_NOT_USED)
- return 1;
- if (ioc->config_cmds.smid != smid)
- return 1;
- ioc->config_cmds.status |= LEAPIORAID_CMD_COMPLETE;
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (mpi_reply) {
- ioc->config_cmds.status |= LEAPIORAID_CMD_REPLY_VALID;
- memcpy(ioc->config_cmds.reply, mpi_reply,
- mpi_reply->MsgLength * 4);
- }
- ioc->config_cmds.status &= ~LEAPIORAID_CMD_PENDING;
- if (ioc->logging_level & LEAPIORAID_DEBUG_CONFIG)
- leapioraid_config_display_some_debug(
- ioc, smid, "config_done", mpi_reply);
- ioc->config_cmds.smid = USHORT_MAX;
- complete(&ioc->config_cmds.done);
- return 1;
-}
-
-static int
-leapioraid_config_request(
- struct LEAPIORAID_ADAPTER *ioc, struct LeapioraidCfgReq_t *mpi_request,
- struct LeapioraidCfgRep_t *mpi_reply, int timeout,
- void *config_page, u16 config_page_sz)
-{
- u16 smid;
- struct LeapioraidCfgReq_t *config_request;
- int r;
- u8 retry_count, issue_host_reset = 0;
- struct config_request mem;
- u32 ioc_status = UINT_MAX;
- u8 issue_reset;
-
- mutex_lock(&ioc->config_cmds.mutex);
- if (ioc->config_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: config_cmd in use\n",
- ioc->name, __func__);
- mutex_unlock(&ioc->config_cmds.mutex);
- return -EAGAIN;
- }
- retry_count = 0;
- memset(&mem, 0, sizeof(struct config_request));
- mpi_request->VF_ID = 0;
- mpi_request->VP_ID = 0;
- if (config_page) {
- mpi_request->Header.PageVersion = mpi_reply->Header.PageVersion;
- mpi_request->Header.PageNumber = mpi_reply->Header.PageNumber;
- mpi_request->Header.PageType = mpi_reply->Header.PageType;
- mpi_request->Header.PageLength = mpi_reply->Header.PageLength;
- mpi_request->ExtPageLength = mpi_reply->ExtPageLength;
- mpi_request->ExtPageType = mpi_reply->ExtPageType;
- if (mpi_request->Header.PageLength)
- mem.sz = mpi_request->Header.PageLength * 4;
- else
- mem.sz = le16_to_cpu(mpi_reply->ExtPageLength) * 4;
- r = leapioraid_config_alloc_config_dma_memory(ioc, &mem);
- if (r != 0)
- goto out;
- if (mpi_request->Action ==
- LEAPIORAID_CONFIG_ACTION_PAGE_WRITE_CURRENT ||
- mpi_request->Action ==
- LEAPIORAID_CONFIG_ACTION_PAGE_WRITE_NVRAM) {
- ioc->base_add_sg_single(&mpi_request->PageBufferSGE,
- LEAPIORAID_CONFIG_COMMON_WRITE_SGLFLAGS
- | mem.sz, mem.page_dma);
- memcpy(mem.page, config_page,
- min_t(u16, mem.sz, config_page_sz));
- } else {
- memset(config_page, 0, config_page_sz);
- ioc->base_add_sg_single(&mpi_request->PageBufferSGE,
- LEAPIORAID_CONFIG_COMMON_SGLFLAGS
- | mem.sz, mem.page_dma);
- memset(mem.page, 0, min_t(u16, mem.sz, config_page_sz));
- }
- }
-retry_config:
- if (retry_count) {
- if (retry_count > 2) {
- r = -EFAULT;
- goto free_mem;
- }
- pr_info("%s %s: attempting retry (%d)\n",
- ioc->name, __func__, retry_count);
- }
- r = leapioraid_wait_for_ioc_to_operational(ioc,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT);
- if (r) {
- if (r == -ETIME)
- issue_host_reset = 1;
- goto free_mem;
- }
- smid = leapioraid_base_get_smid(ioc, ioc->config_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- ioc->config_cmds.status = LEAPIORAID_CMD_NOT_USED;
- r = -EAGAIN;
- goto free_mem;
- }
- r = 0;
- memset(mpi_reply, 0, sizeof(struct LeapioraidCfgRep_t));
- memset(ioc->config_cmds.reply, 0, sizeof(struct LeapioraidCfgRep_t));
- ioc->config_cmds.status = LEAPIORAID_CMD_PENDING;
- config_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->config_cmds.smid = smid;
- memcpy(config_request, mpi_request, sizeof(struct LeapioraidCfgReq_t));
- if (ioc->logging_level & LEAPIORAID_DEBUG_CONFIG)
- leapioraid_config_display_some_debug(ioc, smid, "config_request", NULL);
- init_completion(&ioc->config_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->config_cmds.done, timeout * HZ);
- if (!(ioc->config_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- if (!(ioc->logging_level & LEAPIORAID_DEBUG_CONFIG))
- leapioraid_config_display_some_debug(ioc, smid,
- "config_request no reply",
- NULL);
- leapioraid_check_cmd_timeout(ioc, ioc->config_cmds.status,
- mpi_request,
- sizeof(struct LeapioraidCfgReq_t) / 4,
- issue_reset);
- pr_info("%s issue_reset=%d\n", __func__, issue_reset);
- retry_count++;
- if (ioc->config_cmds.smid == smid)
- leapioraid_base_free_smid(ioc, smid);
- if (ioc->config_cmds.status & LEAPIORAID_CMD_RESET)
- goto retry_config;
- if (ioc->shost_recovery || ioc->pci_error_recovery) {
- issue_host_reset = 0;
- r = -EFAULT;
- } else
- issue_host_reset = 1;
- goto free_mem;
- }
- if (ioc->config_cmds.status & LEAPIORAID_CMD_REPLY_VALID) {
- memcpy(mpi_reply, ioc->config_cmds.reply,
- sizeof(struct LeapioraidCfgRep_t));
- if ((mpi_request->Header.PageType & 0xF) !=
- (mpi_reply->Header.PageType & 0xF)) {
- if (!(ioc->logging_level & LEAPIORAID_DEBUG_CONFIG))
- leapioraid_config_display_some_debug(ioc, smid,
- "config_request",
- NULL);
- leapioraid_debug_dump_mf(mpi_request, ioc->request_sz / 4);
- leapioraid_debug_dump_reply(mpi_reply, ioc->reply_sz / 4);
- panic(
- "%s %s: Firmware BUG: mpi_reply mismatch:\n\t\t"
- "Requested PageType(0x%02x) Reply PageType(0x%02x)\n",
- ioc->name,
- __func__,
- (mpi_request->Header.PageType & 0xF),
- (mpi_reply->Header.PageType & 0xF));
- }
- if (((mpi_request->Header.PageType & 0xF) ==
- LEAPIORAID_CONFIG_PAGETYPE_EXTENDED) &&
- mpi_request->ExtPageType != mpi_reply->ExtPageType) {
- if (!(ioc->logging_level & LEAPIORAID_DEBUG_CONFIG))
- leapioraid_config_display_some_debug(ioc, smid,
- "config_request",
- NULL);
- leapioraid_debug_dump_mf(mpi_request, ioc->request_sz / 4);
- leapioraid_debug_dump_reply(mpi_reply, ioc->reply_sz / 4);
- panic(
- "%s %s: Firmware BUG: mpi_reply mismatch:\n\t\t"
- "Requested ExtPageType(0x%02x) Reply ExtPageType(0x%02x)\n",
- ioc->name,
- __func__,
- mpi_request->ExtPageType,
- mpi_reply->ExtPageType);
- }
- ioc_status = le16_to_cpu(mpi_reply->IOCStatus)
- & LEAPIORAID_IOCSTATUS_MASK;
- }
- if (retry_count)
- pr_info("%s %s: retry (%d) completed!!\n",
- ioc->name, __func__, retry_count);
- if ((ioc_status == LEAPIORAID_IOCSTATUS_SUCCESS) &&
- config_page && mpi_request->Action ==
- LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT) {
- u8 *p = (u8 *) mem.page;
-
- if (p) {
- if ((mpi_request->Header.PageType & 0xF) !=
- (p[3] & 0xF)) {
- if (!
- (ioc->logging_level & LEAPIORAID_DEBUG_CONFIG))
- leapioraid_config_display_some_debug(ioc, smid,
- "config_request",
- NULL);
- leapioraid_debug_dump_mf(mpi_request,
- ioc->request_sz / 4);
- leapioraid_debug_dump_reply(mpi_reply, ioc->reply_sz / 4);
- leapioraid_debug_dump_config(p, min_t(u16, mem.sz,
- config_page_sz) /
- 4);
- panic(
- "%s %s: Firmware BUG: config page mismatch:\n\t\t"
- "Requested PageType(0x%02x) Reply PageType(0x%02x)\n",
- ioc->name,
- __func__,
- (mpi_request->Header.PageType & 0xF),
- (p[3] & 0xF));
- }
- if (((mpi_request->Header.PageType & 0xF) ==
- LEAPIORAID_CONFIG_PAGETYPE_EXTENDED) &&
- (mpi_request->ExtPageType != p[6])) {
- if (!
- (ioc->logging_level & LEAPIORAID_DEBUG_CONFIG))
- leapioraid_config_display_some_debug(ioc, smid,
- "config_request",
- NULL);
- leapioraid_debug_dump_mf(mpi_request,
- ioc->request_sz / 4);
- leapioraid_debug_dump_reply(mpi_reply, ioc->reply_sz / 4);
- leapioraid_debug_dump_config(p, min_t(u16, mem.sz,
- config_page_sz) /
- 4);
- panic(
- "%s %s: Firmware BUG: config page mismatch:\n\t\t"
- "Requested ExtPageType(0x%02x) Reply ExtPageType(0x%02x)\n",
- ioc->name,
- __func__,
- mpi_request->ExtPageType,
- p[6]);
- }
- }
- memcpy(config_page, mem.page, min_t(u16, mem.sz,
- config_page_sz));
- }
-free_mem:
- if (config_page)
- leapioraid_config_free_config_dma_memory(ioc, &mem);
-out:
- ioc->config_cmds.status = LEAPIORAID_CMD_NOT_USED;
- mutex_unlock(&ioc->config_cmds.mutex);
- if (issue_host_reset) {
- if (ioc->drv_internal_flags & LEAPIORAID_DRV_INERNAL_FIRST_PE_ISSUED) {
- leapioraid_base_hard_reset_handler(ioc,
- FORCE_BIG_HAMMER);
- r = -EFAULT;
- } else {
- if (leapioraid_base_check_for_fault_and_issue_reset
- (ioc))
- return -EFAULT;
- r = -EAGAIN;
- }
- }
- return r;
-}
-
-int
-leapioraid_config_get_manufacturing_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidManP0_t *
- config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_MANUFACTURING;
- mpi_request.Header.PageNumber = 0;
- mpi_request.Header.PageVersion = 0x00;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_manufacturing_pg10(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidManuP10_t *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_MANUFACTURING;
- mpi_request.Header.PageNumber = 10;
- mpi_request.Header.PageVersion = 0x00;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_manufacturing_pg11(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidManuP11_t
- *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_MANUFACTURING;
- mpi_request.Header.PageNumber = 11;
- mpi_request.Header.PageVersion = 0x00;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_set_manufacturing_pg11(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidManuP11_t
- *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_MANUFACTURING;
- mpi_request.Header.PageNumber = 11;
- mpi_request.Header.PageVersion = 0x00;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_WRITE_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_bios_pg2(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidBiosP2_t *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_BIOS;
- mpi_request.Header.PageNumber = 2;
- mpi_request.Header.PageVersion = 0x04;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_bios_pg3(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidBiosP3_t *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_BIOS;
- mpi_request.Header.PageNumber = 3;
- mpi_request.Header.PageVersion = 0x01;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_iounit_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOUnitP0_t *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_IO_UNIT;
- mpi_request.Header.PageNumber = 0;
- mpi_request.Header.PageVersion = 0x02;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_iounit_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOUnitP1_t *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_IO_UNIT;
- mpi_request.Header.PageNumber = 1;
- mpi_request.Header.PageVersion = 0x04;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_set_iounit_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOUnitP1_t *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_IO_UNIT;
- mpi_request.Header.PageNumber = 1;
- mpi_request.Header.PageVersion = 0x04;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_WRITE_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_iounit_pg8(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOUnitP8_t *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_IO_UNIT;
- mpi_request.Header.PageNumber = 8;
- mpi_request.Header.PageVersion = 0x00;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_ioc_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOCP1_t *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_IOC;
- mpi_request.Header.PageNumber = 1;
- mpi_request.Header.PageVersion = 0x00;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_set_ioc_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOCP1_t *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_IOC;
- mpi_request.Header.PageNumber = 1;
- mpi_request.Header.PageVersion = 0x00;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_WRITE_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_ioc_pg8(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOCP8_t *config_page)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_IOC;
- mpi_request.Header.PageNumber = 8;
- mpi_request.Header.PageVersion = 0x00;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_sas_device_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasDevP0_t *config_page,
- u32 form, u32 handle)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_DEVICE;
- mpi_request.Header.PageVersion = 0x09;
- mpi_request.Header.PageNumber = 0;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.PageAddress = cpu_to_le32(form | handle);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_number_hba_phys(struct LEAPIORAID_ADAPTER *ioc,
- u8 *num_phys)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
- u16 ioc_status;
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasIOUnitP0_t config_page;
-
- *num_phys = 0;
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_IO_UNIT;
- mpi_request.Header.PageNumber = 0;
- mpi_request.Header.PageVersion = 0x05;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, &mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, &mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, &config_page,
- sizeof(struct LeapioraidSasIOUnitP0_t));
- if (!r) {
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status == LEAPIORAID_IOCSTATUS_SUCCESS)
- *num_phys = config_page.NumPhys;
- }
-out:
- return r;
-}
-
-int
-leapioraid_config_get_sas_iounit_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasIOUnitP0_t *config_page,
- u16 sz)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_IO_UNIT;
- mpi_request.Header.PageNumber = 0;
- mpi_request.Header.PageVersion = 0x05;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sz);
-out:
- return r;
-}
-
-int
-leapioraid_config_get_sas_iounit_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasIOUnitP1_t *config_page,
- u16 sz)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_IO_UNIT;
- mpi_request.Header.PageNumber = 1;
- mpi_request.Header.PageVersion = 0x09;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sz);
-out:
- return r;
-}
-
-int
-leapioraid_config_set_sas_iounit_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasIOUnitP1_t *config_page,
- u16 sz)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_IO_UNIT;
- mpi_request.Header.PageNumber = 1;
- mpi_request.Header.PageVersion = 0x09;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_WRITE_CURRENT;
- leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page, sz);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_WRITE_NVRAM;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sz);
-out:
- return r;
-}
-
-int
-leapioraid_config_get_expander_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidExpanderP0_t *config_page,
- u32 form, u32 handle)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_EXPANDER;
- mpi_request.Header.PageNumber = 0;
- mpi_request.Header.PageVersion = 0x06;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.PageAddress = cpu_to_le32(form | handle);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_expander_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidExpanderP1_t *config_page,
- u32 phy_number, u16 handle)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_EXPANDER;
- mpi_request.Header.PageNumber = 1;
- mpi_request.Header.PageVersion = 0x02;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.PageAddress =
- cpu_to_le32(LEAPIORAID_SAS_EXPAND_PGAD_FORM_HNDL_PHY_NUM |
- (phy_number << LEAPIORAID_SAS_EXPAND_PGAD_PHYNUM_SHIFT) |
- handle);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_enclosure_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasEncP0_t *config_page,
- u32 form, u32 handle)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_ENCLOSURE;
- mpi_request.Header.PageNumber = 0;
- mpi_request.Header.PageVersion = 0x04;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.PageAddress = cpu_to_le32(form | handle);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_phy_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasPhyP0_t *config_page,
- u32 phy_number)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_PHY;
- mpi_request.Header.PageNumber = 0;
- mpi_request.Header.PageVersion = 0x03;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.PageAddress =
- cpu_to_le32(LEAPIORAID_SAS_PHY_PGAD_FORM_PHY_NUMBER | phy_number);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_phy_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasPhyP1_t *config_page,
- u32 phy_number)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_SAS_PHY;
- mpi_request.Header.PageNumber = 1;
- mpi_request.Header.PageVersion = 0x01;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.PageAddress =
- cpu_to_le32(LEAPIORAID_SAS_PHY_PGAD_FORM_PHY_NUMBER | phy_number);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_raid_volume_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidRaidVolP1_t *config_page,
- u32 form, u32 handle)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_RAID_VOLUME;
- mpi_request.Header.PageNumber = 1;
- mpi_request.Header.PageVersion = 0x03;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.PageAddress = cpu_to_le32(form | handle);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_number_pds(struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, u8 *num_pds)
-{
- struct LeapioraidCfgReq_t mpi_request;
- struct LeapioraidRaidVolP0_t config_page;
- struct LeapioraidCfgRep_t mpi_reply;
- int r;
- u16 ioc_status;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- *num_pds = 0;
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_RAID_VOLUME;
- mpi_request.Header.PageNumber = 0;
- mpi_request.Header.PageVersion = 0x0A;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, &mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.PageAddress =
- cpu_to_le32(LEAPIORAID_RAID_VOLUME_PGAD_FORM_HANDLE | handle);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, &mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, &config_page,
- sizeof(struct LeapioraidRaidVolP0_t));
- if (!r) {
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status == LEAPIORAID_IOCSTATUS_SUCCESS)
- *num_pds = config_page.NumPhysDisks;
- }
-out:
- return r;
-}
-
-int
-leapioraid_config_get_raid_volume_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidRaidVolP0_t *config_page,
- u32 form, u32 handle, u16 sz)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_RAID_VOLUME;
- mpi_request.Header.PageNumber = 0;
- mpi_request.Header.PageVersion = 0x0A;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.PageAddress = cpu_to_le32(form | handle);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sz);
-out:
- return r;
-}
-
-int
-leapioraid_config_get_phys_disk_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidRaidPDP0_t *config_page,
- u32 form, u32 form_specific)
-{
- struct LeapioraidCfgReq_t mpi_request;
- int r;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_RAID_PHYSDISK;
- mpi_request.Header.PageNumber = 0;
- mpi_request.Header.PageVersion = 0x05;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.PageAddress = cpu_to_le32(form | form_specific);
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- r = leapioraid_config_request(ioc, &mpi_request, mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
- sizeof(*config_page));
-out:
- return r;
-}
-
-int
-leapioraid_config_get_volume_handle(struct LEAPIORAID_ADAPTER *ioc,
- u16 pd_handle, u16 *volume_handle)
-{
- struct LeapioraidRaidCfgP0_t *config_page = NULL;
- struct LeapioraidCfgReq_t mpi_request;
- struct LeapioraidCfgRep_t mpi_reply;
- int r, i, config_page_sz;
- u16 ioc_status;
- int config_num;
- u16 element_type;
- u16 phys_disk_dev_handle;
-
- *volume_handle = 0;
- memset(&mpi_request, 0, sizeof(struct LeapioraidCfgReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_CONFIG;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_HEADER;
- mpi_request.Header.PageType = LEAPIORAID_CONFIG_PAGETYPE_EXTENDED;
- mpi_request.ExtPageType = LEAPIORAID_CONFIG_EXTPAGETYPE_RAID_CONFIG;
- mpi_request.Header.PageVersion = 0x00;
- mpi_request.Header.PageNumber = 0;
- ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
- r = leapioraid_config_request(ioc, &mpi_request, &mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
- if (r)
- goto out;
- mpi_request.Action = LEAPIORAID_CONFIG_ACTION_PAGE_READ_CURRENT;
- config_page_sz = (le16_to_cpu(mpi_reply.ExtPageLength) * 4);
- config_page = kmalloc(config_page_sz, GFP_KERNEL);
- if (!config_page) {
- r = -1;
- goto out;
- }
- config_num = 0xff;
- while (1) {
- mpi_request.PageAddress = cpu_to_le32(config_num +
- LEAPIORAID_RAID_PGAD_FORM_GET_NEXT_CONFIGNUM);
- r = leapioraid_config_request(ioc, &mpi_request, &mpi_reply,
- LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT,
- config_page, config_page_sz);
- if (r)
- goto out;
- r = -1;
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status == LEAPIORAID_IOCSTATUS_CONFIG_INVALID_PAGE) {
- *volume_handle = 0;
- r = 0;
- goto out;
- } else if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS)
- goto out;
- for (i = 0; i < config_page->NumElements; i++) {
- element_type =
- le16_to_cpu(config_page->ConfigElement[i].ElementFlags) &
- LEAPIORAID_RAIDCONFIG0_EFLAGS_MASK_ELEMENT_TYPE;
- if (element_type ==
- LEAPIORAID_RAIDCONFIG0_EFLAGS_VOL_PHYS_DISK_ELEMENT
- || element_type ==
- LEAPIORAID_RAIDCONFIG0_EFLAGS_OCE_ELEMENT) {
- phys_disk_dev_handle =
- le16_to_cpu(config_page->ConfigElement[i].PhysDiskDevHandle);
- if (phys_disk_dev_handle == pd_handle) {
- *volume_handle =
- le16_to_cpu
- (config_page->ConfigElement[i].VolDevHandle);
- r = 0;
- goto out;
- }
- } else if (element_type ==
- LEAPIORAID_RAIDCONFIG0_EFLAGS_HOT_SPARE_ELEMENT) {
- *volume_handle = 0;
- r = 0;
- goto out;
- }
- }
- config_num = config_page->ConfigNum;
- }
-out:
- kfree(config_page);
- return r;
-}
-
-int
-leapioraid_config_get_volume_wwid(struct LEAPIORAID_ADAPTER *ioc,
- u16 volume_handle, u64 *wwid)
-{
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidRaidVolP1_t raid_vol_pg1;
-
- *wwid = 0;
- if (!(leapioraid_config_get_raid_volume_pg1(ioc, &mpi_reply,
- &raid_vol_pg1,
- LEAPIORAID_RAID_VOLUME_PGAD_FORM_HANDLE,
- volume_handle))) {
- *wwid = le64_to_cpu(raid_vol_pg1.WWID);
- return 0;
- } else
- return -1;
-}
diff --git a/drivers/scsi/leapioraid/leapioraid_func.h b/drivers/scsi/leapioraid/leapioraid_func.h
deleted file mode 100644
index a4beb1412d66..000000000000
--- a/drivers/scsi/leapioraid/leapioraid_func.h
+++ /dev/null
@@ -1,1262 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * This is the Fusion MPT base driver providing common API layer interface
- * for access to MPT (Message Passing Technology) firmware.
- *
- * Copyright (C) 2013-2021 LSI Corporation
- * Copyright (C) 2013-2021 Avago Technologies
- * Copyright (C) 2013-2021 Broadcom Inc.
- * (mailto:MPT-FusionLinux.pdl@broadcom.com)
- *
- * Copyright (C) 2024 LeapIO Tech Inc.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2
- * of the License, or (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * NO WARRANTY
- * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
- * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
- * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
- * solely responsible for determining the appropriateness of using and
- * distributing the Program and assumes all risks associated with its
- * exercise of rights under this Agreement, including but not limited to
- * the risks and costs of program errors, damage to or loss of data,
- * programs or equipment, and unavailability or interruption of operations.
-
- * DISCLAIMER OF LIABILITY
- * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
- * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
- * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
- * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
- */
-
-#ifndef LEAPIORAID_FUNC_H_INCLUDED
-#define LEAPIORAID_FUNC_H_INCLUDED
-
-#include "leapioraid.h"
-#include <scsi/scsi.h>
-#include <scsi/scsi_cmnd.h>
-#include <scsi/scsi_device.h>
-#include <scsi/scsi_host.h>
-#include <scsi/scsi_tcq.h>
-#include <scsi/scsi_transport_sas.h>
-#include <scsi/scsi_dbg.h>
-#include <scsi/scsi_eh.h>
-#include <linux/pci.h>
-#include <linux/poll.h>
-#include <linux/irq_poll.h>
-
-#ifndef fallthrough
-#define fallthrough
-#endif
-
-#define SYS_LOG_BUF_SIZE (0x200000) //2M
-#define SYS_LOG_BUF_RESERVE (0x1000) //256
-
-#define MAX_UPD_PAYLOAD_SZ (0x4000)
-
-#define LEAPIORAID_DRIVER_NAME "LeapIoRaid"
-#define LEAPIORAID_AUTHOR "LeapIO Inc."
-#define LEAPIORAID_DESCRIPTION "LEAPIO RAID Driver"
-#define LEAPIORAID_DRIVER_VERSION "1.02.02.00"
-#define LEAPIORAID_MAJOR_VERSION (1)
-#define LEAPIORAID_MINOR_VERSION (02)
-#define LEAPIORAID_BUILD_VERSION (02)
-#define LEAPIORAID_RELEASE_VERSION (00)
-
-#define LEAPIORAID_VENDOR_ID (0xD405)
-#define LEAPIORAID_DEVICE_ID_1 (0x1000)
-#define LEAPIORAID_DEVICE_ID_2 (0x1001)
-#define LEAPIORAID_HBA (0x8200)
-#define LEAPIORAID_RAID (0x8201)
-
-#define LEAPIORAID_MAX_PHYS_SEGMENTS SG_CHUNK_SIZE
-
-#define LEAPIORAID_MIN_PHYS_SEGMENTS (16)
-#define LEAPIORAID_KDUMP_MIN_PHYS_SEGMENTS (32)
-
-#define LEAPIORAID_MAX_SG_SEGMENTS SG_MAX_SEGMENTS
-#define LEAPIORAID_MAX_PHYS_SEGMENTS_STRING "SG_CHUNK_SIZE"
-
-#define LEAPIORAID_SG_DEPTH LEAPIORAID_MAX_PHYS_SEGMENTS
-
-
-#define LEAPIORAID_CONFIG_PAGE_DEFAULT_TIMEOUT 15
-#define LEAPIORAID_CONFIG_COMMON_SGLFLAGS ((LEAPIORAID_SGE_FLAGS_SIMPLE_ELEMENT | \
- LEAPIORAID_SGE_FLAGS_LAST_ELEMENT | LEAPIORAID_SGE_FLAGS_END_OF_BUFFER \
- | LEAPIORAID_SGE_FLAGS_END_OF_LIST) << LEAPIORAID_SGE_FLAGS_SHIFT)
-#define LEAPIORAID_CONFIG_COMMON_WRITE_SGLFLAGS ((LEAPIORAID_SGE_FLAGS_SIMPLE_ELEMENT | \
- LEAPIORAID_SGE_FLAGS_LAST_ELEMENT | LEAPIORAID_SGE_FLAGS_END_OF_BUFFER \
- | LEAPIORAID_SGE_FLAGS_END_OF_LIST | LEAPIORAID_SGE_FLAGS_HOST_TO_IOC) \
- << LEAPIORAID_SGE_FLAGS_SHIFT)
-
-#define LEAPIORAID_SATA_QUEUE_DEPTH (32)
-#define LEAPIORAID_SAS_QUEUE_DEPTH (64)
-#define LEAPIORAID_RAID_QUEUE_DEPTH (64)
-#define LEAPIORAID_KDUMP_SCSI_IO_DEPTH (64)
-#define LEAPIORAID_RAID_MAX_SECTORS (128)
-
-#define LEAPIORAID_NAME_LENGTH (48)
-#define LEAPIORAID_DRIVER_NAME_LENGTH (24)
-#define LEAPIORAID_STRING_LENGTH (64)
-
-#define LEAPIORAID_FRAME_START_OFFSET (256)
-#define LEAPIORAID_REPLY_FREE_POOL_SIZE (512)
-#define LEAPIORAID_MAX_CALLBACKS (32)
-#define LEAPIORAID_MAX_HBA_NUM_PHYS (16)
-
-#define LEAPIORAID_INTERNAL_CMDS_COUNT (10)
-#define LEAPIORAID_INTERNAL_SCSIIO_CMDS_COUNT (3)
-#define LEAPIORAID_INTERNAL_SCSIIO_FOR_IOCTL (1)
-#define LEAPIORAID_INTERNAL_SCSIIO_FOR_DISCOVERY (2)
-
-#define LEAPIORAID_INVALID_DEVICE_HANDLE (0xFFFF)
-#define LEAPIORAID_MAX_CHAIN_ELEMT_SZ (16)
-#define LEAPIORAID_DEFAULT_NUM_FWCHAIN_ELEMTS (8)
-#define LEAPIORAID_READL_RETRY_COUNT_OF_THIRTY (30)
-#define LEAPIORAID_READL_RETRY_COUNT_OF_THREE (3)
-
-#define LEAPIORAID_IOC_PRE_RESET_PHASE (1)
-#define LEAPIORAID_IOC_AFTER_RESET_PHASE (2)
-#define LEAPIORAID_IOC_DONE_RESET_PHASE (3)
-
-#define LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT (0x01)
-#define LEAPIORAID_TARGET_FLAGS_VOLUME (0x02)
-#define LEAPIORAID_TARGET_FASTPATH_IO (0x08)
-
-#define LEAPIORAID_DEVICE_HIGH_IOPS_DEPTH (8)
-#define LEAPIORAID_HIGH_IOPS_REPLY_QUEUES (8)
-#define LEAPIORAID_HIGH_IOPS_BATCH_COUNT (16)
-#define LEAPIORAID_GEN35_MAX_MSIX_QUEUES (128)
-#define LEAPIORAID_RDPQ_MAX_INDEX_IN_ONE_CHUNK (16)
-
-#define LEAPIORAID_IFAULT_IOP_OVER_TEMP_THRESHOLD_EXCEEDED (0x2810)
-
-#ifndef DID_TRANSPORT_DISRUPTED
-#define DID_TRANSPORT_DISRUPTED DID_BUS_BUSY
-#endif
-#ifndef ULLONG_MAX
-#define ULLONG_MAX (~0ULL)
-#endif
-#ifndef USHORT_MAX
-#define USHORT_MAX ((u16)(~0U))
-#endif
-#ifndef UINT_MAX
-#define UINT_MAX (~0U)
-#endif
-
-static inline void *leapioraid_shost_private(struct Scsi_Host *shost)
-{
- return (void *)shost->hostdata;
-}
-
-struct LeapioraidManuP10_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- U8 OEMIdentifier;
- U8 Reserved1;
- U16 Reserved2;
- U32 Reserved3;
- U32 GenericFlags0;
- U32 GenericFlags1;
- U32 Reserved4;
- U32 OEMSpecificFlags0;
- U32 OEMSpecificFlags1;
- U32 Reserved5[18];
-};
-
-struct LeapioraidManuP11_t {
- struct LEAPIORAID_CONFIG_PAGE_HEADER Header;
- __le32 Reserved1;
- u8 Reserved2;
- u8 EEDPTagMode;
- u8 Reserved3;
- u8 Reserved4;
- __le32 Reserved5[8];
- u16 AddlFlags2;
- u8 AddlFlags3;
- u8 Reserved6;
- __le32 Reserved7[7];
- u8 AbortTO;
- u8 NumPerDevEvents;
- u8 HostTraceBufferDecrementSizeKB;
- u8 HostTraceBufferFlags;
- u16 HostTraceBufferMaxSizeKB;
- u16 HostTraceBufferMinSizeKB;
- u8 CoreDumpTOSec;
- u8 TimeSyncInterval;
- u16 Reserved9;
- __le32 Reserved10;
-};
-
-struct LEAPIORAID_TARGET {
- struct scsi_target *starget;
- u64 sas_address;
- struct leapioraid_raid_device *raid_device;
- u16 handle;
- int num_luns;
- u32 flags;
- u8 deleted;
- u8 tm_busy;
- struct leapioraid_hba_port *port;
- struct leapioraid_sas_device *sas_dev;
-};
-
-#define LEAPIORAID_DEVICE_FLAGS_INIT (0x01)
-#define LEAPIORAID_DEVICE_TLR_ON (0x02)
-
-struct LEAPIORAID_DEVICE {
- struct LEAPIORAID_TARGET *sas_target;
- unsigned int lun;
- u32 flags;
- u8 configured_lun;
- u8 block;
- u8 deleted;
- u8 tlr_snoop_check;
- u8 ignore_delay_remove;
- u8 ncq_prio_enable;
- unsigned long ata_command_pending;
-};
-
-#define LEAPIORAID_CMND_PENDING_BIT (0)
-#define LEAPIORAID_CMD_NOT_USED (0x8000)
-#define LEAPIORAID_CMD_COMPLETE (0x0001)
-#define LEAPIORAID_CMD_PENDING (0x0002)
-#define LEAPIORAID_CMD_REPLY_VALID (0x0004)
-#define LEAPIORAID_CMD_RESET (0x0008)
-#define LEAPIORAID_CMD_COMPLETE_ASYNC (0x0010)
-
-struct leapioraid_internal_cmd {
- struct mutex mutex;
- struct completion done;
- void *reply;
- void *sense;
- u16 status;
- u16 smid;
-};
-
-struct leapioraid_scsi_io_transfer {
- u16 handle;
- u8 is_raid;
- enum dma_data_direction dir;
- u32 data_length;
- dma_addr_t data_dma;
- u8 sense[SCSI_SENSE_BUFFERSIZE];
- u32 lun;
- u8 cdb_length;
- u8 cdb[32];
- u8 timeout;
- u8 VF_ID;
- u8 VP_ID;
- u8 valid_reply;
- u32 sense_length;
- u16 ioc_status;
- u8 scsi_state;
- u8 scsi_status;
- u32 log_info;
- u32 transfer_length;
-};
-
-struct leapioraid_internal_qcmd {
- struct list_head list;
- void *request;
- void *reply;
- void *sense;
- u16 status;
- u16 smid;
- struct leapioraid_scsi_io_transfer *transfer_packet;
-};
-
-#define LEAPIORAID_WIDE_PORT_API (1)
-#define LEAPIORAID_WIDE_PORT_API_PLUS (1)
-
-struct leapioraid_sas_device {
- struct list_head list;
- struct scsi_target *starget;
- u64 sas_address;
- u64 device_name;
- u16 handle;
- u64 sas_address_parent;
- u16 enclosure_handle;
- u64 enclosure_logical_id;
- u16 volume_handle;
- u64 volume_wwid;
- u32 device_info;
- int id;
- int channel;
- u16 slot;
- u8 phy;
- u8 responding;
- u8 fast_path;
- u8 pfa_led_on;
- struct kref refcount;
- u8 *serial_number;
- u8 pend_sas_rphy_add;
- u8 enclosure_level;
- u8 chassis_slot;
- u8 is_chassis_slot_valid;
- u8 connector_name[5];
- u8 ssd_device;
- u8 supports_sata_smart;
- u8 port_type;
- struct leapioraid_hba_port *port;
- struct sas_rphy *rphy;
-};
-
-static inline
-void leapioraid_sas_device_get(struct leapioraid_sas_device *s)
-{
- kref_get(&s->refcount);
-}
-
-static inline
-void leapioraid_sas_device_free(struct kref *r)
-{
- kfree(container_of(r, struct leapioraid_sas_device, refcount));
-}
-
-static inline
-void leapioraid_sas_device_put(struct leapioraid_sas_device *s)
-{
- kref_put(&s->refcount, leapioraid_sas_device_free);
-}
-
-struct leapioraid_raid_device {
- struct list_head list;
- struct scsi_target *starget;
- struct scsi_device *sdev;
- u64 wwid;
- u16 handle;
- u16 block_sz;
- int id;
- int channel;
- u8 volume_type;
- u8 num_pds;
- u8 responding;
- u8 percent_complete;
- u8 direct_io_enabled;
- u8 stripe_exponent;
- u8 block_exponent;
- u64 max_lba;
- u32 stripe_sz;
- u32 device_info;
- u16 pd_handle[8];
-};
-
-struct leapioraid_boot_device {
- int channel;
- void *device;
-};
-
-struct leapioraid_sas_port {
- struct list_head port_list;
- u8 num_phys;
- struct leapioraid_hba_port *hba_port;
- struct sas_identify remote_identify;
- struct sas_rphy *rphy;
-#if defined(LEAPIORAID_WIDE_PORT_API)
- struct sas_port *port;
-#endif
- struct list_head phy_list;
-};
-
-struct leapioraid_sas_phy {
- struct list_head port_siblings;
- struct sas_identify identify;
- struct sas_identify remote_identify;
- struct sas_phy *phy;
- u8 phy_id;
- u16 handle;
- u16 attached_handle;
- u8 phy_belongs_to_port;
- u8 hba_vphy;
- struct leapioraid_hba_port *port;
-};
-
-struct leapioraid_raid_sas_node {
- struct list_head list;
- struct device *parent_dev;
- u8 num_phys;
- u64 sas_address;
- u16 handle;
- u64 sas_address_parent;
- u16 enclosure_handle;
- u64 enclosure_logical_id;
- u8 responding;
- u8 nr_phys_allocated;
- struct leapioraid_hba_port *port;
- struct leapioraid_sas_phy *phy;
- struct list_head sas_port_list;
- struct sas_rphy *rphy;
-};
-
-struct leapioraid_enclosure_node {
- struct list_head list;
- struct LeapioraidSasEncP0_t pg0;
-};
-
-enum reset_type {
- FORCE_BIG_HAMMER,
- SOFT_RESET,
-};
-
-struct leapioraid_chain_tracker {
- void *chain_buffer;
- dma_addr_t chain_buffer_dma;
-};
-
-struct leapioraid_chain_lookup {
- struct leapioraid_chain_tracker *chains_per_smid;
- atomic_t chain_offset;
-};
-
-struct leapioraid_scsiio_tracker {
- u16 smid;
- struct scsi_cmnd *scmd;
- u8 cb_idx;
- u8 direct_io;
- struct list_head chain_list;
- u16 msix_io;
-};
-
-struct leapioraid_request_tracker {
- u16 smid;
- u8 cb_idx;
- struct list_head tracker_list;
-};
-
-struct leapioraid_tr_list {
- struct list_head list;
- u16 handle;
- u16 state;
-};
-
-struct leapioraid_sc_list {
- struct list_head list;
- u16 handle;
-};
-
-struct leapioraid_event_ack_list {
- struct list_head list;
- U16 Event;
- U32 EventContext;
-};
-
-struct leapioraid_adapter_reply_queue {
- struct LEAPIORAID_ADAPTER *ioc;
- u8 msix_index;
- u32 reply_post_host_index;
- union LeapioraidRepDescUnion_t *reply_post_free;
- char name[LEAPIORAID_NAME_LENGTH];
- atomic_t busy;
- cpumask_var_t affinity_hint;
- u32 os_irq;
- struct irq_poll irqpoll;
- bool irq_poll_scheduled;
- bool irq_line_enable;
- bool is_blk_mq_poll_q;
- struct list_head list;
-};
-
-struct leapioraid_blk_mq_poll_queue {
- atomic_t busy;
- atomic_t pause;
- struct leapioraid_adapter_reply_queue *reply_q;
-};
-
-union leapioraid_version_union {
- struct LEAPIORAID_VERSION_STRUCT Struct;
- u32 Word;
-};
-
-typedef void (*LEAPIORAID_ADD_SGE)(void *paddr, u32 flags_length,
- dma_addr_t dma_addr);
-typedef int (*LEAPIORAID_BUILD_SG_SCMD)(struct LEAPIORAID_ADAPTER *ioc,
- struct scsi_cmnd *scmd, u16 smid);
-typedef void (*LEAPIORAID_BUILD_SG)(struct LEAPIORAID_ADAPTER *ioc, void *psge,
- dma_addr_t data_out_dma, size_t data_out_sz,
- dma_addr_t data_in_dma, size_t data_in_sz);
-typedef void (*LEAPIORAID_BUILD_ZERO_LEN_SGE)(struct LEAPIORAID_ADAPTER *ioc,
- void *paddr);
-typedef void (*PUT_SMID_IO_FP_HIP_TA)(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u16 funcdep);
-typedef void (*PUT_SMID_DEFAULT)(struct LEAPIORAID_ADAPTER *ioc, u16 smid);
-typedef u32(*BASE_READ_REG) (const void __iomem *addr,
- u8 retry_count);
-typedef u8(*GET_MSIX_INDEX) (struct LEAPIORAID_ADAPTER *ioc,
- struct scsi_cmnd *scmd);
-
-struct leapioraid_facts {
- u16 MsgVersion;
- u16 HeaderVersion;
- u8 IOCNumber;
- u8 VP_ID;
- u8 VF_ID;
- u16 IOCExceptions;
- u16 IOCStatus;
- u32 IOCLogInfo;
- u8 MaxChainDepth;
- u8 WhoInit;
- u8 NumberOfPorts;
- u8 MaxMSIxVectors;
- u16 RequestCredit;
- u16 ProductID;
- u32 IOCCapabilities;
- union leapioraid_version_union FWVersion;
- u16 IOCRequestFrameSize;
- u16 IOCMaxChainSegmentSize;
- u16 MaxInitiators;
- u16 MaxTargets;
- u16 MaxSasExpanders;
- u16 MaxEnclosures;
- u16 ProtocolFlags;
- u16 HighPriorityCredit;
- u16 MaxReplyDescriptorPostQueueDepth;
- u8 ReplyFrameSize;
- u8 MaxVolumes;
- u16 MaxDevHandle;
- u16 MaxPersistentEntries;
- u16 MinDevHandle;
- u8 CurrentHostPageSize;
-};
-
-struct leapioraid_port_facts {
- u8 PortNumber;
- u8 VP_ID;
- u8 VF_ID;
- u8 PortType;
- u16 MaxPostedCmdBuffers;
-};
-
-struct leapioraid_reply_post_struct {
- union LeapioraidRepDescUnion_t *reply_post_free;
- dma_addr_t reply_post_free_dma;
-};
-
-struct leapioraid_virtual_phy {
- struct list_head list;
- u64 sas_address;
- u32 phy_mask;
- u8 flags;
-};
-
-#define LEAPIORAID_VPHY_FLAG_DIRTY_PHY (0x01)
-struct leapioraid_hba_port {
- struct list_head list;
- u64 sas_address;
- u32 phy_mask;
- u8 port_id;
- u8 flags;
- u32 vphys_mask;
- struct list_head vphys_list;
-};
-
-#define LEAPIORAID_HBA_PORT_FLAG_DIRTY_PORT (0x01)
-#define LEAPIORAID_HBA_PORT_FLAG_NEW_PORT (0x02)
-#define LEAPIORAID_MULTIPATH_DISABLED_PORT_ID (0xFF)
-
-typedef void (*LEAPIORAID_FLUSH_RUNNING_CMDS)(struct LEAPIORAID_ADAPTER *
- ioc);
-
-struct LEAPIORAID_ADAPTER {
- struct list_head list;
- struct Scsi_Host *shost;
- u8 id;
- u8 IOCNumber;
- int cpu_count;
- char name[LEAPIORAID_NAME_LENGTH];
- char driver_name[LEAPIORAID_DRIVER_NAME_LENGTH];
- char tmp_string[LEAPIORAID_STRING_LENGTH];
- struct pci_dev *pdev;
- struct LeapioraidSysInterfaceRegs_t __iomem *chip;
- phys_addr_t chip_phys;
- int logging_level;
- int fwfault_debug;
- u8 ir_firmware;
- int bars;
- u8 mask_interrupts;
- struct mutex pci_access_mutex;
- char fault_reset_work_q_name[48];
- char hba_hot_unplug_work_q_name[48];
- struct workqueue_struct *fault_reset_work_q;
- struct workqueue_struct *hba_hot_unplug_work_q;
- struct delayed_work fault_reset_work;
- struct delayed_work hba_hot_unplug_work;
- struct workqueue_struct *smart_poll_work_q;
- struct delayed_work smart_poll_work;
- u8 adapter_over_temp;
- char firmware_event_name[48];
- struct workqueue_struct *firmware_event_thread;
- spinlock_t fw_event_lock;
- struct list_head fw_event_list;
- struct leapioraid_fw_event_work *current_event;
- u8 fw_events_cleanup;
- int aen_event_read_flag;
- u8 broadcast_aen_busy;
- u16 broadcast_aen_pending;
- u8 shost_recovery;
- u8 got_task_abort_from_ioctl;
- u8 got_task_abort_from_sysfs;
- struct mutex reset_in_progress_mutex;
- struct mutex hostdiag_unlock_mutex;
- spinlock_t ioc_reset_in_progress_lock;
- spinlock_t hba_hot_unplug_lock;
- u8 ioc_link_reset_in_progress;
- int ioc_reset_status;
- u8 ignore_loginfos;
- u8 remove_host;
- u8 pci_error_recovery;
- u8 wait_for_discovery_to_complete;
- u8 is_driver_loading;
- u8 port_enable_failed;
- u8 start_scan;
- u16 start_scan_failed;
- u8 msix_enable;
- u8 *cpu_msix_table;
- resource_size_t **reply_post_host_index;
- u16 cpu_msix_table_sz;
- u32 ioc_reset_count;
- LEAPIORAID_FLUSH_RUNNING_CMDS schedule_dead_ioc_flush_running_cmds;
- u32 non_operational_loop;
- u8 ioc_coredump_loop;
- u32 timestamp_update_count;
- u32 time_sync_interval;
- u8 multipath_on_hba;
- atomic64_t total_io_cnt;
- atomic64_t high_iops_outstanding;
- bool msix_load_balance;
- u16 thresh_hold;
- u8 high_iops_queues;
- u8 iopoll_q_start_index;
- u32 drv_internal_flags;
- u32 drv_support_bitmap;
- u32 dma_mask;
- bool enable_sdev_max_qd;
- bool use_32bit_dma;
- struct leapioraid_blk_mq_poll_queue *blk_mq_poll_queues;
- u8 scsi_io_cb_idx;
- u8 tm_cb_idx;
- u8 transport_cb_idx;
- u8 scsih_cb_idx;
- u8 ctl_cb_idx;
- u8 ctl_tm_cb_idx;
- u8 base_cb_idx;
- u8 port_enable_cb_idx;
- u8 config_cb_idx;
- u8 tm_tr_cb_idx;
- u8 tm_tr_volume_cb_idx;
- u8 tm_tr_internal_cb_idx;
- u8 tm_sas_control_cb_idx;
- struct leapioraid_internal_cmd base_cmds;
- struct leapioraid_internal_cmd port_enable_cmds;
- struct leapioraid_internal_cmd transport_cmds;
- struct leapioraid_internal_cmd scsih_cmds;
- struct leapioraid_internal_cmd tm_cmds;
- struct leapioraid_internal_cmd ctl_cmds;
- struct leapioraid_internal_cmd config_cmds;
- struct list_head scsih_q_intenal_cmds;
- spinlock_t scsih_q_internal_lock;
- LEAPIORAID_ADD_SGE base_add_sg_single;
- LEAPIORAID_BUILD_SG_SCMD build_sg_scmd;
- LEAPIORAID_BUILD_SG build_sg;
- LEAPIORAID_BUILD_ZERO_LEN_SGE build_zero_len_sge;
- u16 sge_size_ieee;
- LEAPIORAID_BUILD_SG build_sg_mpi;
- LEAPIORAID_BUILD_ZERO_LEN_SGE build_zero_len_sge_mpi;
- u32 event_type[LEAPIORAID_EVENT_NOTIFY_EVENTMASK_WORDS];
- u32 event_context;
- void *event_log;
- u32 event_masks[LEAPIORAID_EVENT_NOTIFY_EVENTMASK_WORDS];
- u8 disable_eedp_support;
- u8 tm_custom_handling;
- u16 max_shutdown_latency;
- u16 max_wideport_qd;
- u16 max_narrowport_qd;
- u8 max_sata_qd;
- struct leapioraid_facts facts;
- struct leapioraid_facts prev_fw_facts;
- struct leapioraid_port_facts *pfacts;
- struct LeapioraidManP0_t manu_pg0;
- struct LeapioraidManuP10_t manu_pg10;
- struct LeapioraidManuP11_t manu_pg11;
- struct LeapioraidBiosP2_t bios_pg2;
- struct LeapioraidBiosP3_t bios_pg3;
- struct LeapioraidIOCP8_t ioc_pg8;
- struct LeapioraidIOUnitP0_t iounit_pg0;
- struct LeapioraidIOUnitP1_t iounit_pg1;
- struct LeapioraidIOUnitP8_t iounit_pg8;
- struct LeapioraidIOCP1_t ioc_pg1_copy;
- struct leapioraid_boot_device req_boot_device;
- struct leapioraid_boot_device req_alt_boot_device;
- struct leapioraid_boot_device current_boot_device;
- struct leapioraid_raid_sas_node sas_hba;
- struct list_head sas_expander_list;
- struct list_head enclosure_list;
- spinlock_t sas_node_lock;
- struct list_head sas_device_list;
- struct list_head sas_device_init_list;
- spinlock_t sas_device_lock;
- struct list_head pcie_device_list;
- struct list_head pcie_device_init_list;
- spinlock_t pcie_device_lock;
- struct list_head raid_device_list;
- spinlock_t raid_device_lock;
- u8 io_missing_delay;
- u16 device_missing_delay;
- int sas_id;
- int pcie_target_id;
- void *blocking_handles;
- void *pd_handles;
- u16 pd_handles_sz;
- void *pend_os_device_add;
- u16 pend_os_device_add_sz;
- u16 config_page_sz;
- void *config_page;
- dma_addr_t config_page_dma;
- void *config_vaddr;
- u16 hba_queue_depth;
- u16 sge_size;
- u16 scsiio_depth;
- u16 request_sz;
- u8 *request;
- dma_addr_t request_dma;
- u32 request_dma_sz;
- spinlock_t scsi_lookup_lock;
- int pending_io_count;
- wait_queue_head_t reset_wq;
- int pending_tm_count;
- u32 terminated_tm_count;
- wait_queue_head_t pending_tm_wq;
- u8 out_of_frames;
- wait_queue_head_t no_frames_tm_wq;
- u16 *io_queue_num;
- u32 page_size;
- struct leapioraid_chain_lookup *chain_lookup;
- struct list_head free_chain_list;
- struct dma_pool *chain_dma_pool;
- u16 max_sges_in_main_message;
- u16 max_sges_in_chain_message;
- u16 chains_needed_per_io;
- u16 chain_segment_sz;
- u16 chains_per_prp_buffer;
- u16 hi_priority_smid;
- u8 *hi_priority;
- dma_addr_t hi_priority_dma;
- u16 hi_priority_depth;
- struct leapioraid_request_tracker *hpr_lookup;
- struct list_head hpr_free_list;
- u16 internal_smid;
- u8 *internal;
- dma_addr_t internal_dma;
- u16 internal_depth;
- struct leapioraid_request_tracker *internal_lookup;
- struct list_head internal_free_list;
- u8 *sense;
- dma_addr_t sense_dma;
- struct dma_pool *sense_dma_pool;
- u16 reply_sz;
- u8 *reply;
- dma_addr_t reply_dma;
- u32 reply_dma_max_address;
- u32 reply_dma_min_address;
- struct dma_pool *reply_dma_pool;
- u16 reply_free_queue_depth;
- __le32 *reply_free;
- dma_addr_t reply_free_dma;
- struct dma_pool *reply_free_dma_pool;
- u32 reply_free_host_index;
- u16 reply_post_queue_depth;
- struct leapioraid_reply_post_struct *reply_post;
- struct dma_pool *reply_post_free_dma_pool;
- struct dma_pool *reply_post_free_array_dma_pool;
- struct LeapioraidIOCInitRDPQArrayEntry *reply_post_free_array;
- dma_addr_t reply_post_free_array_dma;
- u8 reply_queue_count;
- struct list_head reply_queue_list;
- u8 rdpq_array_capable;
- u8 rdpq_array_enable;
- u8 rdpq_array_enable_assigned;
- u8 combined_reply_queue;
- u8 nc_reply_index_count;
- u8 smp_affinity_enable;
- resource_size_t **replyPostRegisterIndex;
- struct list_head delayed_tr_list;
- struct list_head delayed_tr_volume_list;
- struct list_head delayed_internal_tm_list;
- struct list_head delayed_sc_list;
- struct list_head delayed_event_ack_list;
- u32 ring_buffer_offset;
- u32 ring_buffer_sz;
- u8 reset_from_user;
- u8 hide_ir_msg;
- u8 warpdrive_msg;
- u8 mfg_pg10_hide_flag;
- u8 hide_drives;
- u8 atomic_desc_capable;
- BASE_READ_REG base_readl;
- PUT_SMID_IO_FP_HIP_TA put_smid_scsi_io;
- PUT_SMID_IO_FP_HIP_TA put_smid_fast_path;
- PUT_SMID_IO_FP_HIP_TA put_smid_hi_priority;
- PUT_SMID_DEFAULT put_smid_default;
- GET_MSIX_INDEX get_msix_index_for_smlio;
- void *device_remove_in_progress;
- u16 device_remove_in_progress_sz;
- u8 *tm_tr_retry;
- u32 tm_tr_retry_sz;
- u8 temp_sensors_count;
- struct list_head port_table_list;
- u8 *log_buffer;
- dma_addr_t log_buffer_dma;
- char pcie_log_work_q_name[48];
- struct workqueue_struct *pcie_log_work_q;
- struct delayed_work pcie_log_work;
- u32 open_pcie_trace;
-};
-
-#define LEAPIORAID_DEBUG (0x00000001)
-#define LEAPIORAID_DEBUG_MSG_FRAME (0x00000002)
-#define LEAPIORAID_DEBUG_SG (0x00000004)
-#define LEAPIORAID_DEBUG_EVENTS (0x00000008)
-#define LEAPIORAID_DEBUG_EVENT_WORK_TASK (0x00000010)
-#define LEAPIORAID_DEBUG_INIT (0x00000020)
-#define LEAPIORAID_DEBUG_EXIT (0x00000040)
-#define LEAPIORAID_DEBUG_FAIL (0x00000080)
-#define LEAPIORAID_DEBUG_TM (0x00000100)
-#define LEAPIORAID_DEBUG_REPLY (0x00000200)
-#define LEAPIORAID_DEBUG_HANDSHAKE (0x00000400)
-#define LEAPIORAID_DEBUG_CONFIG (0x00000800)
-#define LEAPIORAID_DEBUG_DL (0x00001000)
-#define LEAPIORAID_DEBUG_RESET (0x00002000)
-#define LEAPIORAID_DEBUG_SCSI (0x00004000)
-#define LEAPIORAID_DEBUG_IOCTL (0x00008000)
-#define LEAPIORAID_DEBUG_CSMISAS (0x00010000)
-#define LEAPIORAID_DEBUG_SAS (0x00020000)
-#define LEAPIORAID_DEBUG_TRANSPORT (0x00040000)
-#define LEAPIORAID_DEBUG_TASK_SET_FULL (0x00080000)
-
-#define LEAPIORAID_CHECK_LOGGING(IOC, CMD, BITS) \
-{ \
- if (IOC->logging_level & BITS) \
- CMD; \
-}
-
-#define dprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG)
-#define dsgprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_SG)
-#define devtprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_EVENTS)
-#define dewtprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_EVENT_WORK_TASK)
-#define dinitprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_INIT)
-#define dexitprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_EXIT)
-#define dfailprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_FAIL)
-#define dtmprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_TM)
-#define dreplyprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_REPLY)
-#define dhsprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_HANDSHAKE)
-#define dcprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_CONFIG)
-#define ddlprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_DL)
-#define drsprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_RESET)
-#define dsprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_SCSI)
-#define dctlprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_IOCTL)
-#define dcsmisasprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_CSMISAS)
-#define dsasprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_SAS)
-#define dsastransport(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_SAS_WIDE)
-#define dmfprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_MSG_FRAME)
-#define dtsfprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_TASK_SET_FULL)
-#define dtransportprintk(IOC, CMD) \
- LEAPIORAID_CHECK_LOGGING(IOC, CMD, LEAPIORAID_DEBUG_TRANSPORT)
-
-static inline void
-leapioraid_debug_dump_mf(void *mpi_request, int sz)
-{
- int i;
- __le32 *mfp = (__le32 *) mpi_request;
-
- pr_info("mf:\n\t");
- for (i = 0; i < sz; i++) {
- if (i && ((i % 8) == 0))
- pr_info("\n\t");
- pr_info("%08x ", le32_to_cpu(mfp[i]));
- }
- pr_info("\n");
-}
-
-static inline void
-leapioraid_debug_dump_reply(void *mpi_request, int sz)
-{
- int i;
- __le32 *mfp = (__le32 *) mpi_request;
-
- pr_info("reply:\n\t");
- for (i = 0; i < sz; i++) {
- if (i && ((i % 8) == 0))
- pr_info("\n\t");
- pr_info("%08x ", le32_to_cpu(mfp[i]));
- }
- pr_info("\n");
-}
-
-static inline void
-leapioraid_debug_dump_config(void *mpi_request, int sz)
-{
- int i;
- __le32 *mfp = (__le32 *) mpi_request;
-
- pr_info("config:\n\t");
- for (i = 0; i < sz; i++) {
- if (i && ((i % 8) == 0))
- pr_info("\n\t");
- pr_info("%08x ", le32_to_cpu(mfp[i]));
- }
- pr_info("\n");
-}
-
-#define LEAPIORAID_DRV_INTERNAL_BITMAP_BLK_MQ (0x00000001)
-#define LEAPIORAID_DRV_INERNAL_FIRST_PE_ISSUED (0x00000002)
-
-typedef u8(*LEAPIORAID_CALLBACK) (struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply);
-
-#define SCSIH_MAP_QUEUE(shost) static void leapioraid_scsihost_map_queues(shost)
-
-extern struct list_head leapioraid_ioc_list;
-extern spinlock_t leapioraid_gioc_lock;
-void leapioraid_base_start_watchdog(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_base_stop_watchdog(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_base_start_log_watchdog(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_base_stop_log_watchdog(struct LEAPIORAID_ADAPTER *ioc);
-int leapioraid_base_trace_log_init(struct LEAPIORAID_ADAPTER *ioc);
-int leapioraid_base_attach(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_base_detach(struct LEAPIORAID_ADAPTER *ioc);
-int leapioraid_base_map_resources(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_base_free_resources(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_free_enclosure_list(struct LEAPIORAID_ADAPTER *ioc);
-int leapioraid_base_hard_reset_handler(struct LEAPIORAID_ADAPTER *ioc,
- enum reset_type type);
-void *leapioraid_base_get_msg_frame(struct LEAPIORAID_ADAPTER *ioc, u16 smid);
-void *leapioraid_base_get_sense_buffer(struct LEAPIORAID_ADAPTER *ioc,
- u16 smid);
-__le32 leapioraid_base_get_sense_buffer_dma(struct LEAPIORAID_ADAPTER *ioc,
- u16 smid);
-__le64 leapioraid_base_get_sense_buffer_dma_64(struct LEAPIORAID_ADAPTER *ioc,
- u16 smid);
-void leapioraid_base_sync_reply_irqs(struct LEAPIORAID_ADAPTER *ioc, u8 poll);
-u16 leapioraid_base_get_smid_hpr(struct LEAPIORAID_ADAPTER *ioc, u8 cb_idx);
-u16 leapioraid_base_get_smid_scsiio(struct LEAPIORAID_ADAPTER *ioc, u8 cb_idx,
- struct scsi_cmnd *scmd);
-u16 leapioraid_base_get_smid(struct LEAPIORAID_ADAPTER *ioc, u8 cb_idx);
-void leapioraid_base_free_smid(struct LEAPIORAID_ADAPTER *ioc, u16 smid);
-void leapioraid_base_initialize_callback_handler(void);
-u8 leapioraid_base_register_callback_handler(LEAPIORAID_CALLBACK cb_func);
-void leapioraid_base_release_callback_handler(u8 cb_idx);
-u8 leapioraid_base_done(struct LEAPIORAID_ADAPTER *ioc, u16 smid, u8 msix_index,
- u32 reply);
-u8 leapioraid_port_enable_done(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply);
-void *leapioraid_base_get_reply_virt_addr(struct LEAPIORAID_ADAPTER *ioc,
- u32 phys_addr);
-u32 leapioraid_base_get_iocstate(struct LEAPIORAID_ADAPTER *ioc, int cooked);
-int leapioraid_base_check_and_get_msix_vectors(struct pci_dev *pdev);
-void leapioraid_base_fault_info(struct LEAPIORAID_ADAPTER *ioc, u16 fault_code);
-#define leapioraid_print_fault_code(ioc, fault_code) \
- do { \
- pr_err("%s fault info from func: %s\n", ioc->name, __func__); \
- leapioraid_base_fault_info(ioc, fault_code); \
- } while (0)
-void leapioraid_base_coredump_info(struct LEAPIORAID_ADAPTER *ioc,
- u16 fault_code);
-int leapioraid_base_wait_for_coredump_completion(struct LEAPIORAID_ADAPTER *ioc,
- const char *caller);
-int leapioraid_base_sas_iounit_control(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidSasIoUnitControlRep_t *
- mpi_reply,
- struct LeapioraidSasIoUnitControlReq_t *
- mpi_request);
-int leapioraid_base_scsi_enclosure_processor(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidSepRep_t *mpi_reply,
- struct LeapioraidSepReq_t *mpi_request);
-void leapioraid_base_validate_event_type(struct LEAPIORAID_ADAPTER *ioc,
- u32 *event_type);
-void leapioraid_halt_firmware(struct LEAPIORAID_ADAPTER *ioc, u8 set_fault);
-struct leapioraid_scsiio_tracker *leapioraid_get_st_from_smid(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid);
-void leapioraid_base_clear_st(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_scsiio_tracker *st);
-struct leapioraid_scsiio_tracker *leapioraid_base_scsi_cmd_priv(
- struct scsi_cmnd *scmd);
-int
-leapioraid_base_check_for_fault_and_issue_reset(struct LEAPIORAID_ADAPTER *ioc);
-int leapioraid_port_enable(struct LEAPIORAID_ADAPTER *ioc);
-u8 leapioraid_base_pci_device_is_unplugged(struct LEAPIORAID_ADAPTER *ioc);
-u8 leapioraid_base_pci_device_is_available(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_base_free_irq(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_base_disable_msix(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_wait_for_commands_to_complete(struct LEAPIORAID_ADAPTER *ioc);
-u8 leapioraid_base_check_cmd_timeout(struct LEAPIORAID_ADAPTER *ioc,
- u8 status, void *mpi_request, int sz);
-#define leapioraid_check_cmd_timeout(ioc, status, mpi_request, sz, issue_reset) \
- do { \
- pr_err("%s In func: %s\n", ioc->name, __func__); \
- issue_reset = leapioraid_base_check_cmd_timeout(ioc, status, mpi_request, sz); \
- } while (0)
-int leapioraid_wait_for_ioc_to_operational(struct LEAPIORAID_ADAPTER *ioc,
- int wait_count);
-void leapioraid_base_start_hba_unplug_watchdog(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_base_stop_hba_unplug_watchdog(struct LEAPIORAID_ADAPTER *ioc);
-int leapioraid_base_make_ioc_ready(struct LEAPIORAID_ADAPTER *ioc,
- enum reset_type type);
-void leapioraid_base_mask_interrupts(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_base_unmask_interrupts(struct LEAPIORAID_ADAPTER *ioc);
-int leapioraid_blk_mq_poll(struct Scsi_Host *shost, unsigned int queue_num);
-void leapioraid_base_pause_mq_polling(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_base_resume_mq_polling(struct LEAPIORAID_ADAPTER *ioc);
-int leapioraid_base_unlock_and_get_host_diagnostic(struct LEAPIORAID_ADAPTER
- *ioc, u32 *host_diagnostic);
-void leapioraid_base_lock_host_diagnostic(struct LEAPIORAID_ADAPTER *ioc);
-extern char driver_name[LEAPIORAID_NAME_LENGTH];
-struct scsi_cmnd *leapioraid_scsihost_scsi_lookup_get(struct LEAPIORAID_ADAPTER
- *ioc, u16 smid);
-u8 leapioraid_scsihost_event_callback(struct LEAPIORAID_ADAPTER *ioc,
- u8 msix_index, u32 reply);
-void leapioraid_scsihost_reset_handler(struct LEAPIORAID_ADAPTER *ioc,
- int reset_phase);
-int leapioraid_scsihost_issue_tm(struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- uint channel, uint id, uint lun, u8 type,
- u16 smid_task, u8 timeout, u8 tr_method);
-int leapioraid_scsihost_issue_locked_tm(struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, uint channel, uint id,
- uint lun, u8 type, u16 smid_task,
- u8 timeout, u8 tr_method);
-void leapioraid_scsihost_set_tm_flag(struct LEAPIORAID_ADAPTER *ioc,
- u16 handle);
-void leapioraid_scsihost_clear_tm_flag(struct LEAPIORAID_ADAPTER *ioc,
- u16 handle);
-void leapioraid_expander_remove(
- struct LEAPIORAID_ADAPTER *ioc, u64 sas_address,
- struct leapioraid_hba_port *port);
-void leapioraid_device_remove_by_sas_address(struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address,
- struct leapioraid_hba_port *port);
-u8 leapioraid_check_for_pending_internal_cmds(struct LEAPIORAID_ADAPTER *ioc,
- u16 smid);
-struct leapioraid_hba_port *leapioraid_get_port_by_id(
- struct LEAPIORAID_ADAPTER *ioc, u8 port, u8 skip_dirty_flag);
-struct leapioraid_virtual_phy *leapioraid_get_vphy_by_phy(
- struct LEAPIORAID_ADAPTER *ioc, struct leapioraid_hba_port *port, u32 phy);
-struct leapioraid_raid_sas_node *leapioraid_scsihost_expander_find_by_handle(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle);
-struct leapioraid_raid_sas_node *leapioraid_scsihost_expander_find_by_sas_address(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address,
- struct leapioraid_hba_port *port);
-struct leapioraid_sas_device *__leapioraid_get_sdev_by_addr_and_rphy(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address,
- struct sas_rphy *rphy);
-struct leapioraid_sas_device *leapioraid_get_sdev_by_addr(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address,
- struct leapioraid_hba_port *port);
-struct leapioraid_sas_device *leapioraid_get_sdev_by_handle(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle);
-void leapioraid_scsihost_flush_running_cmds(struct LEAPIORAID_ADAPTER *ioc);
-void leapioraid_port_enable_complete(struct LEAPIORAID_ADAPTER *ioc);
-struct leapioraid_raid_device *leapioraid_raid_device_find_by_handle(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle);
-void leapioraid_scsihost_sas_device_remove(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_device *sas_device);
-void leapioraid_scsihost_clear_outstanding_scsi_tm_commands(
- struct LEAPIORAID_ADAPTER *ioc);
-u32 leapioraid_base_mod64(u64 dividend, u32 divisor);
-void
-leapioraid__scsihost_change_queue_depth(struct scsi_device *sdev, int qdepth);
-u8 leapioraid_scsihost_ncq_prio_supp(struct scsi_device *sdev);
-u8 leapioraid_config_done(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply);
-int leapioraid_config_get_number_hba_phys(struct LEAPIORAID_ADAPTER *ioc,
- u8 *num_phys);
-int leapioraid_config_get_manufacturing_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidManP0_t *
- config_page);
-int leapioraid_config_get_manufacturing_pg10(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidManuP10_t
- *config_page);
-int leapioraid_config_get_manufacturing_pg11(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidManuP11_t
- *config_page);
-int leapioraid_config_set_manufacturing_pg11(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidManuP11_t
- *config_page);
-int leapioraid_config_get_bios_pg2(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidBiosP2_t *config_page);
-int leapioraid_config_get_bios_pg3(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidBiosP3_t *config_page);
-int leapioraid_config_get_iounit_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOUnitP0_t *config_page);
-int leapioraid_config_get_sas_device_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasDevP0_t *config_page,
- u32 form, u32 handle);
-int leapioraid_config_get_sas_iounit_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasIOUnitP0_t *config_page,
- u16 sz);
-int leapioraid_config_get_iounit_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOUnitP1_t *config_page);
-int leapioraid_config_set_iounit_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOUnitP1_t *config_page);
-int leapioraid_config_get_iounit_pg8(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOUnitP8_t *config_page);
-int leapioraid_config_get_sas_iounit_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasIOUnitP1_t *config_page,
- u16 sz);
-int leapioraid_config_set_sas_iounit_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasIOUnitP1_t *config_page,
- u16 sz);
-int leapioraid_config_get_ioc_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOCP1_t *config_page);
-int leapioraid_config_set_ioc_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOCP1_t *config_page);
-int leapioraid_config_get_ioc_pg8(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidIOCP8_t *config_page);
-int leapioraid_config_get_expander_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidExpanderP0_t *config_page,
- u32 form, u32 handle);
-int leapioraid_config_get_expander_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidExpanderP1_t *config_page,
- u32 phy_number, u16 handle);
-int leapioraid_config_get_enclosure_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasEncP0_t *
- config_page, u32 form, u32 handle);
-int leapioraid_config_get_phy_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasPhyP0_t *config_page,
- u32 phy_number);
-int leapioraid_config_get_phy_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidSasPhyP1_t *config_page,
- u32 phy_number);
-int leapioraid_config_get_raid_volume_pg1(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidRaidVolP1_t *config_page,
- u32 form, u32 handle);
-int leapioraid_config_get_number_pds(struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- u8 *num_pds);
-int leapioraid_config_get_raid_volume_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidRaidVolP0_t *config_page,
- u32 form, u32 handle, u16 sz);
-int leapioraid_config_get_phys_disk_pg0(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidCfgRep_t *mpi_reply,
- struct LeapioraidRaidPDP0_t *
- config_page, u32 form,
- u32 form_specific);
-int leapioraid_config_get_volume_handle(struct LEAPIORAID_ADAPTER *ioc,
- u16 pd_handle, u16 *volume_handle);
-int leapioraid_config_get_volume_wwid(struct LEAPIORAID_ADAPTER *ioc,
- u16 volume_handle, u64 *wwid);
-extern const struct attribute_group *leapioraid_host_groups[];
-extern const struct attribute_group *leapioraid_dev_groups[];
-void leapioraid_ctl_init(void);
-void leapioraid_ctl_exit(void);
-u8 leapioraid_ctl_done(struct LEAPIORAID_ADAPTER *ioc, u16 smid, u8 msix_index,
- u32 reply);
-u8 leapioraid_ctl_tm_done(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply);
-void leapioraid_ctl_reset_handler(struct LEAPIORAID_ADAPTER *ioc,
- int reset_phase);
-u8 leapioraid_ctl_event_callback(struct LEAPIORAID_ADAPTER *ioc, u8 msix_index,
- u32 reply);
-void leapioraid_ctl_add_to_event_log(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventNotificationRep_t *
- mpi_reply);
-void leapioraid_ctl_clear_outstanding_ioctls(struct LEAPIORAID_ADAPTER *ioc);
-int leapioraid_ctl_release(struct inode *inode, struct file *filep);
-void ctl_init(void);
-void ctl_exit(void);
-u8 leapioraid_transport_done(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply);
-struct leapioraid_sas_port *leapioraid_transport_port_add(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, u64 sas_address,
- struct leapioraid_hba_port *port);
-void leapioraid_transport_port_remove(struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address, u64 sas_address_parent,
- struct leapioraid_hba_port *port);
-int leapioraid_transport_add_host_phy(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_phy *leapioraid_phy,
- struct LeapioraidSasPhyP0_t phy_pg0,
- struct device *parent_dev);
-int leapioraid_transport_add_expander_phy(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_phy *leapioraid_phy,
- struct LeapioraidExpanderP1_t expander_pg1,
- struct device *parent_dev);
-void leapioraid_transport_update_links(struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address, u16 handle,
- u8 phy_number, u8 link_rate,
- struct leapioraid_hba_port *port);
-extern struct sas_function_template leapioraid_transport_functions;
-extern struct scsi_transport_template *leapioraid_transport_template;
-void
-leapioraid_transport_del_phy_from_an_existing_port(struct LEAPIORAID_ADAPTER
- *ioc,
- struct leapioraid_raid_sas_node *sas_node,
- struct leapioraid_sas_phy
- *leapioraid_phy);
-#if defined(LEAPIORAID_WIDE_PORT_API)
-void
-leapioraid_transport_add_phy_to_an_existing_port(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_sas_node *sas_node,
- struct leapioraid_sas_phy
- *leapioraid_phy,
- u64 sas_address,
- struct leapioraid_hba_port *port);
-#endif
-#endif
diff --git a/drivers/scsi/leapioraid/leapioraid_os.c b/drivers/scsi/leapioraid/leapioraid_os.c
deleted file mode 100644
index c7bb14f2eff9..000000000000
--- a/drivers/scsi/leapioraid/leapioraid_os.c
+++ /dev/null
@@ -1,9825 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Scsi Host Layer for MPT (Message Passing Technology) based controllers
- *
- * Copyright (C) 2013-2021 LSI Corporation
- * Copyright (C) 2013-2021 Avago Technologies
- * Copyright (C) 2013-2021 Broadcom Inc.
- * (mailto:MPT-FusionLinux.pdl@broadcom.com)
- *
- * Copyright (C) 2024 LeapIO Tech Inc.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2
- * of the License, or (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * NO WARRANTY
- * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
- * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
- * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
- * solely responsible for determining the appropriateness of using and
- * distributing the Program and assumes all risks associated with its
- * exercise of rights under this Agreement, including but not limited to
- * the risks and costs of program errors, damage to or loss of data,
- * programs or equipment, and unavailability or interruption of operations.
-
- * DISCLAIMER OF LIABILITY
- * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
- * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
- * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
- * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
- */
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/errno.h>
-#include <linux/blkdev.h>
-#include <linux/sched.h>
-#include <linux/workqueue.h>
-#include <linux/delay.h>
-#include <linux/pci.h>
-#include <linux/interrupt.h>
-#include <asm/unaligned.h>
-#include <linux/aer.h>
-#include <linux/raid_class.h>
-#include "leapioraid_func.h"
-#include <linux/blk-mq-pci.h>
-
-#define RAID_CHANNEL 1
-
-static void leapioraid_scsihost_expander_node_remove(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_sas_node *sas_expander);
-static void leapioraid_firmware_event_work(struct work_struct *work);
-static void leapioraid_firmware_event_work_delayed(struct work_struct *work);
-static enum device_responsive_state
-leapioraid_scsihost_inquiry_vpd_sn(struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- u8 **serial_number);
-static enum device_responsive_state
-leapioraid_scsihost_inquiry_vpd_supported_pages(struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, u32 lun, void *data,
- u32 data_length);
-static enum device_responsive_state leapioraid_scsihost_ata_pass_thru_idd(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle,
- u8 *is_ssd_device,
- u8 tr_timeout,
- u8 tr_method);
-static enum device_responsive_state
-leapioraid_scsihost_wait_for_target_to_become_ready(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, u8 retry_count, u8 is_pd,
- u8 tr_timeout, u8 tr_method);
-static enum device_responsive_state
-leapioraid_scsihost_wait_for_device_to_become_ready(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, u8 retry_count, u8 is_pd,
- int lun, u8 tr_timeout, u8 tr_method);
-static void leapioraid_scsihost_remove_device(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_device *sas_device);
-static int leapioraid_scsihost_add_device(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- u8 retry_count, u8 is_pd);
-static u8 leapioraid_scsihost_check_for_pending_tm(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid);
-static void leapioraid_scsihost_send_event_to_turn_on_pfa_led(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle);
-static void leapioraid_scsihost_complete_devices_scanning(
- struct LEAPIORAID_ADAPTER *ioc);
-
-LIST_HEAD(leapioraid_ioc_list);
-DEFINE_SPINLOCK(leapioraid_gioc_lock);
-
-MODULE_AUTHOR(LEAPIORAID_AUTHOR);
-MODULE_DESCRIPTION(LEAPIORAID_DESCRIPTION);
-MODULE_LICENSE("GPL");
-MODULE_VERSION(LEAPIORAID_DRIVER_VERSION);
-
-static u8 scsi_io_cb_idx = -1;
-static u8 tm_cb_idx = -1;
-static u8 ctl_cb_idx = -1;
-static u8 ctl_tm_cb_idx = -1;
-static u8 base_cb_idx = -1;
-static u8 port_enable_cb_idx = -1;
-static u8 transport_cb_idx = -1;
-static u8 scsih_cb_idx = -1;
-static u8 config_cb_idx = -1;
-static int leapioraid_ids;
-static u8 tm_tr_cb_idx = -1;
-static u8 tm_tr_volume_cb_idx = -1;
-static u8 tm_tr_internal_cb_idx = -1;
-static u8 tm_sas_control_cb_idx = -1;
-static u32 logging_level;
-
-MODULE_PARM_DESC(logging_level,
- " bits for enabling additional logging info (default=0)");
-
-static int open_pcie_trace;
-module_param(open_pcie_trace, int, 0444);
-MODULE_PARM_DESC(open_pcie_trace, "open_pcie_trace: open=1/default=0(close)");
-
-static int disable_discovery = -1;
-module_param(disable_discovery, int, 0444);
-MODULE_PARM_DESC(disable_discovery, "disable discovery");
-
-static struct raid_template *leapioraid_raid_template;
-
-enum device_responsive_state {
- DEVICE_READY,
- DEVICE_RETRY,
- DEVICE_RETRY_UA,
- DEVICE_START_UNIT,
- DEVICE_STOP_UNIT,
- DEVICE_ERROR,
-};
-
-struct sense_info {
- u8 skey;
- u8 asc;
- u8 ascq;
-};
-
-#define LEAPIORAID_TURN_ON_PFA_LED (0xFFFC)
-#define LEAPIORAID_PORT_ENABLE_COMPLETE (0xFFFD)
-#define LEAPIORAID_REMOVE_UNRESPONDING_DEVICES (0xFFFF)
-
-struct leapioraid_fw_event_work {
- struct list_head list;
- struct work_struct work;
- u8 cancel_pending_work;
- struct delayed_work delayed_work;
- u8 delayed_work_active;
- struct LEAPIORAID_ADAPTER *ioc;
- u16 device_handle;
- u8 VF_ID;
- u8 VP_ID;
- u8 ignore;
- u16 event;
- struct kref refcount;
- void *event_data;
- u8 *retries;
-};
-
-static void
-leapioraid_fw_event_work_free(struct kref *r)
-{
- struct leapioraid_fw_event_work *fw_work;
-
- fw_work = container_of(
- r, struct leapioraid_fw_event_work, refcount);
- kfree(fw_work->event_data);
- kfree(fw_work->retries);
- kfree(fw_work);
-}
-
-static void
-leapioraid_fw_event_work_get(
- struct leapioraid_fw_event_work *fw_work)
-{
- kref_get(&fw_work->refcount);
-}
-
-static void
-leapioraid_fw_event_work_put(struct leapioraid_fw_event_work *fw_work)
-{
- kref_put(&fw_work->refcount, leapioraid_fw_event_work_free);
-}
-
-static
-struct leapioraid_fw_event_work *leapioraid_alloc_fw_event_work(int len)
-{
- struct leapioraid_fw_event_work *fw_event;
-
- fw_event = kzalloc(sizeof(*fw_event) + len, GFP_ATOMIC);
- if (!fw_event)
- return NULL;
- kref_init(&fw_event->refcount);
- return fw_event;
-}
-
-static int
-leapioraid_scsihost_set_debug_level(
- const char *val, const struct kernel_param *kp)
-{
- int ret = param_set_int(val, kp);
- struct LEAPIORAID_ADAPTER *ioc;
-
- if (ret)
- return ret;
- pr_info("setting logging_level(0x%08x)\n", logging_level);
- spin_lock(&leapioraid_gioc_lock);
- list_for_each_entry(ioc, &leapioraid_ioc_list, list)
- ioc->logging_level = logging_level;
- spin_unlock(&leapioraid_gioc_lock);
- return 0;
-}
-
-module_param_call(logging_level,
- leapioraid_scsihost_set_debug_level, param_get_int,
- &logging_level, 0644);
-
-static inline int
-leapioraid_scsihost_srch_boot_sas_address(u64 sas_address,
- struct LEAPIORAID_BOOT_DEVICE_SAS_WWID *boot_device)
-{
- return (sas_address == le64_to_cpu(boot_device->SASAddress)) ? 1 : 0;
-}
-
-static inline int
-leapioraid_scsihost_srch_boot_device_name(u64 device_name,
- struct LEAPIORAID_BOOT_DEVICE_DEVICE_NAME *boot_device)
-{
- return (device_name == le64_to_cpu(boot_device->DeviceName)) ? 1 : 0;
-}
-
-static inline int
-leapioraid_scsihost_srch_boot_encl_slot(u64 enclosure_logical_id, u16 slot_number,
- struct LEAPIORAID_BOOT_DEVICE_ENCLOSURE_SLOT *boot_device)
-{
- return (enclosure_logical_id ==
- le64_to_cpu(boot_device->EnclosureLogicalID)
- && slot_number == le16_to_cpu(boot_device->SlotNumber)) ? 1 : 0;
-}
-
-static void
-leapioraid_scsihost_display_enclosure_chassis_info(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_device *sas_device,
- struct scsi_device *sdev,
- struct scsi_target *starget)
-{
- if (sdev) {
- if (sas_device->enclosure_handle != 0)
- sdev_printk(KERN_INFO, sdev,
- "enclosure logical id(0x%016llx), slot(%d)\n",
- (unsigned long long)sas_device->enclosure_logical_id,
- sas_device->slot);
- if (sas_device->connector_name[0] != '\0')
- sdev_printk(KERN_INFO, sdev,
- "enclosure level(0x%04x), connector name( %s)\n",
- sas_device->enclosure_level,
- sas_device->connector_name);
- if (sas_device->is_chassis_slot_valid)
- sdev_printk(KERN_INFO, sdev, "chassis slot(0x%04x)\n",
- sas_device->chassis_slot);
- } else if (starget) {
- if (sas_device->enclosure_handle != 0)
- starget_printk(KERN_INFO, starget,
- "enclosure logical id(0x%016llx), slot(%d)\n",
- (unsigned long long)sas_device->enclosure_logical_id,
- sas_device->slot);
- if (sas_device->connector_name[0] != '\0')
- starget_printk(KERN_INFO, starget,
- "enclosure level(0x%04x), connector name( %s)\n",
- sas_device->enclosure_level,
- sas_device->connector_name);
- if (sas_device->is_chassis_slot_valid)
- starget_printk(KERN_INFO, starget,
- "chassis slot(0x%04x)\n", sas_device->chassis_slot);
- } else {
- if (sas_device->enclosure_handle != 0)
- pr_info("%s enclosure logical id(0x%016llx), slot(%d)\n",
- ioc->name,
- (unsigned long long)sas_device->enclosure_logical_id,
- sas_device->slot);
- if (sas_device->connector_name[0] != '\0')
- pr_info("%s enclosure level(0x%04x),connector name( %s)\n",
- ioc->name,
- sas_device->enclosure_level,
- sas_device->connector_name);
- if (sas_device->is_chassis_slot_valid)
- pr_info("%s chassis slot(0x%04x)\n",
- ioc->name, sas_device->chassis_slot);
- }
-}
-
-struct leapioraid_hba_port *leapioraid_get_port_by_id(
- struct LEAPIORAID_ADAPTER *ioc,
- u8 port_id, u8 skip_dirty_flag)
-{
- struct leapioraid_hba_port *port, *port_next;
-
- if (!ioc->multipath_on_hba)
- port_id = LEAPIORAID_MULTIPATH_DISABLED_PORT_ID;
- list_for_each_entry_safe(port, port_next, &ioc->port_table_list, list) {
- if (port->port_id != port_id)
- continue;
- if (port->flags & LEAPIORAID_HBA_PORT_FLAG_DIRTY_PORT)
- continue;
- return port;
- }
- if (skip_dirty_flag) {
- port = port_next = NULL;
- list_for_each_entry_safe(port, port_next,
- &ioc->port_table_list, list) {
- if (port->port_id != port_id)
- continue;
- return port;
- }
- }
- if (unlikely(!ioc->multipath_on_hba)) {
- port = kzalloc(sizeof(struct leapioraid_hba_port), GFP_ATOMIC);
- if (!port)
- return NULL;
-
- port->port_id = LEAPIORAID_MULTIPATH_DISABLED_PORT_ID;
- pr_err(
- "%s hba_port entry: %p, port: %d is added to hba_port list\n",
- ioc->name, port, port->port_id);
- list_add_tail(&port->list, &ioc->port_table_list);
- return port;
- }
- return NULL;
-}
-
-struct leapioraid_virtual_phy *leapioraid_get_vphy_by_phy(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_hba_port *port, u32 phy)
-{
- struct leapioraid_virtual_phy *vphy, *vphy_next;
-
- if (!port->vphys_mask)
- return NULL;
- list_for_each_entry_safe(vphy, vphy_next, &port->vphys_list, list) {
- if (vphy->phy_mask & (1 << phy))
- return vphy;
- }
- return NULL;
-}
-
-static int
-leapioraid_scsihost_is_boot_device(u64 sas_address, u64 device_name,
- u64 enclosure_logical_id, u16 slot, u8 form,
- union LEAPIORAID_BIOSPAGE2_BOOT_DEVICE *boot_device)
-{
- int rc = 0;
-
- switch (form) {
- case LEAPIORAID_BIOSPAGE2_FORM_SAS_WWID:
- if (!sas_address)
- break;
- rc = leapioraid_scsihost_srch_boot_sas_address(sas_address,
- &boot_device->SasWwid);
- break;
- case LEAPIORAID_BIOSPAGE2_FORM_ENCLOSURE_SLOT:
- if (!enclosure_logical_id)
- break;
- rc = leapioraid_scsihost_srch_boot_encl_slot(
- enclosure_logical_id,
- slot,
- &boot_device->EnclosureSlot);
- break;
- case LEAPIORAID_BIOSPAGE2_FORM_DEVICE_NAME:
- if (!device_name)
- break;
- rc = leapioraid_scsihost_srch_boot_device_name(device_name,
- &boot_device->DeviceName);
- break;
- case LEAPIORAID_BIOSPAGE2_FORM_NO_DEVICE_SPECIFIED:
- break;
- }
- return rc;
-}
-
-static int
-leapioraid_scsihost_get_sas_address(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- u64 *sas_address)
-{
- struct LeapioraidSasDevP0_t sas_device_pg0;
- struct LeapioraidCfgRep_t mpi_reply;
- u32 ioc_status;
-
- *sas_address = 0;
- if ((leapioraid_config_get_sas_device_pg0
- (ioc, &mpi_reply, &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE, handle))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return -ENXIO;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status == LEAPIORAID_IOCSTATUS_SUCCESS) {
- if ((handle <= ioc->sas_hba.num_phys) &&
- (!(le32_to_cpu(sas_device_pg0.DeviceInfo) &
- LEAPIORAID_SAS_DEVICE_INFO_SEP)))
- *sas_address = ioc->sas_hba.sas_address;
- else
- *sas_address = le64_to_cpu(sas_device_pg0.SASAddress);
- return 0;
- }
- if (ioc_status == LEAPIORAID_IOCSTATUS_CONFIG_INVALID_PAGE)
- return -ENXIO;
- pr_err("%s handle(0x%04x), ioc_status(0x%04x), failure at %s:%d/%s()!\n",
- ioc->name, handle, ioc_status,
- __FILE__, __LINE__, __func__);
- return -EIO;
-}
-
-static void
-leapioraid_scsihost_determine_boot_device(
- struct LEAPIORAID_ADAPTER *ioc, void *device,
- u32 channel)
-{
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_raid_device *raid_device;
- u64 sas_address;
- u64 device_name;
- u64 enclosure_logical_id;
- u16 slot;
-
- if (!ioc->is_driver_loading)
- return;
- if (!ioc->bios_pg3.BiosVersion)
- return;
- if (channel == RAID_CHANNEL) {
- raid_device = device;
- sas_address = raid_device->wwid;
- device_name = 0;
- enclosure_logical_id = 0;
- slot = 0;
- } else {
- sas_device = device;
- sas_address = sas_device->sas_address;
- device_name = sas_device->device_name;
- enclosure_logical_id = sas_device->enclosure_logical_id;
- slot = sas_device->slot;
- }
- if (!ioc->req_boot_device.device) {
- if (leapioraid_scsihost_is_boot_device(sas_address, device_name,
- enclosure_logical_id, slot,
- (ioc->bios_pg2.ReqBootDeviceForm &
- LEAPIORAID_BIOSPAGE2_FORM_MASK),
- &ioc->bios_pg2.RequestedBootDevice)) {
- dinitprintk(ioc,
- pr_err(
- "%s %s: req_boot_device(0x%016llx)\n",
- ioc->name, __func__,
- (unsigned long long)sas_address));
- ioc->req_boot_device.device = device;
- ioc->req_boot_device.channel = channel;
- }
- }
- if (!ioc->req_alt_boot_device.device) {
- if (leapioraid_scsihost_is_boot_device(sas_address, device_name,
- enclosure_logical_id, slot,
- (ioc->bios_pg2.ReqAltBootDeviceForm &
- LEAPIORAID_BIOSPAGE2_FORM_MASK),
- &ioc->bios_pg2.RequestedAltBootDevice)) {
- dinitprintk(ioc,
- pr_err(
- "%s %s: req_alt_boot_device(0x%016llx)\n",
- ioc->name, __func__,
- (unsigned long long)sas_address));
- ioc->req_alt_boot_device.device = device;
- ioc->req_alt_boot_device.channel = channel;
- }
- }
- if (!ioc->current_boot_device.device) {
- if (leapioraid_scsihost_is_boot_device(sas_address, device_name,
- enclosure_logical_id, slot,
- (ioc->bios_pg2.CurrentBootDeviceForm &
- LEAPIORAID_BIOSPAGE2_FORM_MASK),
- &ioc->bios_pg2.CurrentBootDevice)) {
- dinitprintk(ioc,
- pr_err(
- "%s %s: current_boot_device(0x%016llx)\n",
- ioc->name, __func__,
- (unsigned long long)sas_address));
- ioc->current_boot_device.device = device;
- ioc->current_boot_device.channel = channel;
- }
- }
-}
-
-static
-struct leapioraid_sas_device *__leapioraid_get_sdev_from_target(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LEAPIORAID_TARGET *tgt_priv)
-{
- struct leapioraid_sas_device *ret;
-
- assert_spin_locked(&ioc->sas_device_lock);
- ret = tgt_priv->sas_dev;
- if (ret)
- leapioraid_sas_device_get(ret);
- return ret;
-}
-
-static
-struct leapioraid_sas_device *leapioraid_get_sdev_from_target(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LEAPIORAID_TARGET *tgt_priv)
-{
- struct leapioraid_sas_device *ret;
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- ret = __leapioraid_get_sdev_from_target(ioc, tgt_priv);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- return ret;
-}
-
-static
-struct leapioraid_sas_device *__leapioraid_get_sdev_by_addr(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address, struct leapioraid_hba_port *port)
-{
- struct leapioraid_sas_device *sas_device;
-
- if (!port)
- return NULL;
- assert_spin_locked(&ioc->sas_device_lock);
- list_for_each_entry(sas_device, &ioc->sas_device_list, list)
- if (sas_device->sas_address == sas_address &&
- sas_device->port == port)
- goto found_device;
- list_for_each_entry(sas_device, &ioc->sas_device_init_list, list)
- if (sas_device->sas_address == sas_address &&
- sas_device->port == port)
- goto found_device;
- return NULL;
-found_device:
- leapioraid_sas_device_get(sas_device);
- return sas_device;
-}
-
-struct leapioraid_sas_device *__leapioraid_get_sdev_by_addr_and_rphy(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address,
- struct sas_rphy *rphy)
-{
- struct leapioraid_sas_device *sas_device;
-
- assert_spin_locked(&ioc->sas_device_lock);
- list_for_each_entry(sas_device, &ioc->sas_device_list, list)
- if (sas_device->sas_address == sas_address &&
- (sas_device->rphy == rphy))
- goto found_device;
- list_for_each_entry(sas_device, &ioc->sas_device_init_list, list)
- if (sas_device->sas_address == sas_address &&
- (sas_device->rphy == rphy))
- goto found_device;
- return NULL;
-found_device:
- leapioraid_sas_device_get(sas_device);
- return sas_device;
-}
-
-struct leapioraid_sas_device *leapioraid_get_sdev_by_addr(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address,
- struct leapioraid_hba_port *port)
-{
- struct leapioraid_sas_device *sas_device = NULL;
- unsigned long flags;
-
- if (!port)
- return sas_device;
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_addr(ioc, sas_address, port);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- return sas_device;
-}
-
-static struct leapioraid_sas_device *__leapioraid_get_sdev_by_handle(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct leapioraid_sas_device *sas_device;
-
- assert_spin_locked(&ioc->sas_device_lock);
- list_for_each_entry(sas_device, &ioc->sas_device_list, list)
- if (sas_device->handle == handle)
- goto found_device;
- list_for_each_entry(sas_device, &ioc->sas_device_init_list, list)
- if (sas_device->handle == handle)
- goto found_device;
- return NULL;
-found_device:
- leapioraid_sas_device_get(sas_device);
- return sas_device;
-}
-
-struct leapioraid_sas_device *leapioraid_get_sdev_by_handle(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct leapioraid_sas_device *sas_device;
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_handle(ioc, handle);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- return sas_device;
-}
-
-void
-leapioraid_scsihost_sas_device_remove(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_device *sas_device)
-{
- unsigned long flags;
- int was_on_sas_device_list = 0;
-
- if (!sas_device)
- return;
- pr_info("%s %s: removing handle(0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, __func__, sas_device->handle,
- (unsigned long long)sas_device->sas_address);
- leapioraid_scsihost_display_enclosure_chassis_info(
- ioc, sas_device, NULL, NULL);
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- if (!list_empty(&sas_device->list)) {
- list_del_init(&sas_device->list);
- was_on_sas_device_list = 1;
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- if (was_on_sas_device_list) {
- kfree(sas_device->serial_number);
- leapioraid_sas_device_put(sas_device);
- }
-}
-
-static void
-leapioraid_scsihost_device_remove_by_handle(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct leapioraid_sas_device *sas_device;
- unsigned long flags;
- int was_on_sas_device_list = 0;
-
- if (ioc->shost_recovery)
- return;
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_handle(ioc, handle);
- if (sas_device) {
- if (!list_empty(&sas_device->list)) {
- list_del_init(&sas_device->list);
- was_on_sas_device_list = 1;
- leapioraid_sas_device_put(sas_device);
- }
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- if (was_on_sas_device_list) {
- leapioraid_scsihost_remove_device(ioc, sas_device);
- leapioraid_sas_device_put(sas_device);
- }
-}
-
-void
-leapioraid_device_remove_by_sas_address(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address, struct leapioraid_hba_port *port)
-{
- struct leapioraid_sas_device *sas_device;
- unsigned long flags;
- int was_on_sas_device_list = 0;
-
- if (ioc->shost_recovery)
- return;
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_addr(ioc, sas_address, port);
- if (sas_device) {
- if (!list_empty(&sas_device->list)) {
- list_del_init(&sas_device->list);
- was_on_sas_device_list = 1;
- leapioraid_sas_device_put(sas_device);
- }
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- if (was_on_sas_device_list) {
- leapioraid_scsihost_remove_device(ioc, sas_device);
- leapioraid_sas_device_put(sas_device);
- }
-}
-
-static void
-leapioraid_scsihost_sas_device_add(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_device *sas_device)
-{
- unsigned long flags;
-
- dewtprintk(ioc, pr_info("%s %s: handle(0x%04x), sas_addr(0x%016llx)\n",
- ioc->name,
- __func__, sas_device->handle,
- (unsigned long long)sas_device->sas_address));
- dewtprintk(ioc,
- leapioraid_scsihost_display_enclosure_chassis_info(ioc, sas_device,
- NULL, NULL));
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- leapioraid_sas_device_get(sas_device);
- list_add_tail(&sas_device->list, &ioc->sas_device_list);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- if (ioc->hide_drives) {
- clear_bit(sas_device->handle, ioc->pend_os_device_add);
- return;
- }
- if (!leapioraid_transport_port_add(ioc, sas_device->handle,
- sas_device->sas_address_parent,
- sas_device->port)) {
- leapioraid_scsihost_sas_device_remove(ioc, sas_device);
- } else if (!sas_device->starget) {
- if (!ioc->is_driver_loading) {
- leapioraid_transport_port_remove(ioc,
- sas_device->sas_address,
- sas_device->sas_address_parent,
- sas_device->port);
- leapioraid_scsihost_sas_device_remove(ioc, sas_device);
- }
- } else
- clear_bit(sas_device->handle, ioc->pend_os_device_add);
-}
-
-static void
-leapioraid_scsihost_sas_device_init_add(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_device *sas_device)
-{
- unsigned long flags;
-
- dewtprintk(ioc, pr_info("%s %s: handle(0x%04x), sas_addr(0x%016llx)\n",
- ioc->name,
- __func__, sas_device->handle,
- (unsigned long long)sas_device->sas_address));
- dewtprintk(ioc,
- leapioraid_scsihost_display_enclosure_chassis_info(ioc, sas_device,
- NULL, NULL));
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- leapioraid_sas_device_get(sas_device);
- list_add_tail(&sas_device->list, &ioc->sas_device_init_list);
- leapioraid_scsihost_determine_boot_device(ioc, sas_device, 0);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
-}
-
-static
-struct leapioraid_raid_device *leapioraid_scsihost_raid_device_find_by_id(
- struct LEAPIORAID_ADAPTER *ioc, int id, int channel)
-{
- struct leapioraid_raid_device *raid_device, *r;
-
- r = NULL;
- list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
- if (raid_device->id == id && raid_device->channel == channel) {
- r = raid_device;
- goto out;
- }
- }
-out:
- return r;
-}
-
-struct leapioraid_raid_device *leapioraid_raid_device_find_by_handle(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct leapioraid_raid_device *raid_device, *r;
-
- r = NULL;
- list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
- if (raid_device->handle != handle)
- continue;
- r = raid_device;
- goto out;
- }
-out:
- return r;
-}
-
-static
-struct leapioraid_raid_device *leapioraid_scsihost_raid_device_find_by_wwid(
- struct LEAPIORAID_ADAPTER *ioc, u64 wwid)
-{
- struct leapioraid_raid_device *raid_device, *r;
-
- r = NULL;
- list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
- if (raid_device->wwid != wwid)
- continue;
- r = raid_device;
- goto out;
- }
-out:
- return r;
-}
-
-static void
-leapioraid_scsihost_raid_device_add(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_device *raid_device)
-{
- unsigned long flags;
- u8 protection_mask;
-
- dewtprintk(ioc, pr_info("%s %s: handle(0x%04x), wwid(0x%016llx)\n",
- ioc->name,
- __func__, raid_device->handle,
- (unsigned long long)raid_device->wwid));
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- list_add_tail(&raid_device->list, &ioc->raid_device_list);
- if (!ioc->disable_eedp_support) {
- protection_mask = scsi_host_get_prot(ioc->shost);
- if (protection_mask & SHOST_DIX_TYPE0_PROTECTION) {
- scsi_host_set_prot(ioc->shost, protection_mask & 0x77);
- pr_err(
- "%s: Disabling DIX0 because of unsupport!\n",
- ioc->name);
- }
- }
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
-}
-
-static void
-leapioraid_scsihost_raid_device_remove(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_device *raid_device)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- list_del(&raid_device->list);
- kfree(raid_device);
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
-}
-
-struct leapioraid_raid_sas_node *leapioraid_scsihost_expander_find_by_handle(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct leapioraid_raid_sas_node *sas_expander, *r;
-
- r = NULL;
- list_for_each_entry(sas_expander, &ioc->sas_expander_list, list) {
- if (sas_expander->handle != handle)
- continue;
- r = sas_expander;
- goto out;
- }
-out:
- return r;
-}
-
-static
-struct leapioraid_enclosure_node *leapioraid_scsihost_enclosure_find_by_handle(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle)
-{
- struct leapioraid_enclosure_node *enclosure_dev, *r;
-
- r = NULL;
- list_for_each_entry(enclosure_dev, &ioc->enclosure_list, list) {
- if (le16_to_cpu(enclosure_dev->pg0.EnclosureHandle) != handle)
- continue;
- r = enclosure_dev;
- goto out;
- }
-out:
- return r;
-}
-
-struct leapioraid_raid_sas_node *leapioraid_scsihost_expander_find_by_sas_address(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address,
- struct leapioraid_hba_port *port)
-{
- struct leapioraid_raid_sas_node *sas_expander, *r;
-
- r = NULL;
- if (!port)
- return r;
- list_for_each_entry(sas_expander, &ioc->sas_expander_list, list) {
- if (sas_expander->sas_address != sas_address ||
- sas_expander->port != port)
- continue;
- r = sas_expander;
- goto out;
- }
-out:
- return r;
-}
-
-static void
-leapioraid_scsihost_expander_node_add(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_sas_node *sas_expander)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- list_add_tail(&sas_expander->list, &ioc->sas_expander_list);
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
-}
-
-static int
-leapioraid_scsihost_is_sas_end_device(u32 device_info)
-{
- if (device_info & LEAPIORAID_SAS_DEVICE_INFO_END_DEVICE &&
- ((device_info & LEAPIORAID_SAS_DEVICE_INFO_SSP_TARGET) |
- (device_info & LEAPIORAID_SAS_DEVICE_INFO_STP_TARGET) |
- (device_info & LEAPIORAID_SAS_DEVICE_INFO_SATA_DEVICE)))
- return 1;
- else
- return 0;
-}
-
-static u8
-leapioraid_scsihost_scsi_lookup_find_by_target(
- struct LEAPIORAID_ADAPTER *ioc, int id,
- int channel)
-{
- int smid;
- struct scsi_cmnd *scmd;
-
- for (smid = 1; smid <= ioc->shost->can_queue; smid++) {
- scmd = leapioraid_scsihost_scsi_lookup_get(ioc, smid);
- if (!scmd)
- continue;
- if (scmd->device->id == id && scmd->device->channel == channel)
- return 1;
- }
- return 0;
-}
-
-static u8
-leapioraid_scsihost_scsi_lookup_find_by_lun(
- struct LEAPIORAID_ADAPTER *ioc, int id,
- unsigned int lun, int channel)
-{
- int smid;
- struct scsi_cmnd *scmd;
-
- for (smid = 1; smid <= ioc->shost->can_queue; smid++) {
- scmd = leapioraid_scsihost_scsi_lookup_get(ioc, smid);
- if (!scmd)
- continue;
- if (scmd->device->id == id &&
- scmd->device->channel == channel &&
- scmd->device->lun == lun)
- return 1;
- }
- return 0;
-}
-
-struct scsi_cmnd *leapioraid_scsihost_scsi_lookup_get(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid)
-{
- struct scsi_cmnd *scmd = NULL;
- struct leapioraid_scsiio_tracker *st;
- struct LeapioraidSCSIIOReq_t *mpi_request;
- u32 unique_tag = smid - 1;
-
- if (smid > 0 && smid <= ioc->shost->can_queue) {
- unique_tag =
- ioc->io_queue_num[smid -
- 1] << BLK_MQ_UNIQUE_TAG_BITS | (smid - 1);
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- if (!mpi_request->DevHandle)
- return scmd;
- scmd = scsi_host_find_tag(ioc->shost, unique_tag);
- if (scmd) {
- st = leapioraid_base_scsi_cmd_priv(scmd);
- if ((!st) || (st->cb_idx == 0xFF) || (st->smid == 0))
- scmd = NULL;
- }
- }
- return scmd;
-}
-
-static void
-leapioraid_scsihost_display_sdev_qd(struct scsi_device *sdev)
-{
- if (sdev->inquiry_len <= 7)
- return;
- sdev_printk(KERN_INFO, sdev,
- "qdepth(%d), tagged(%d), scsi_level(%d), cmd_que(%d)\n",
- sdev->queue_depth, sdev->tagged_supported,
- sdev->scsi_level, ((sdev->inquiry[7] & 2) >> 1));
-}
-
-static int
-leapioraid_scsihost_change_queue_depth(
- struct scsi_device *sdev, int qdepth)
-{
- struct Scsi_Host *shost = sdev->host;
- int max_depth;
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct leapioraid_sas_device *sas_device;
- unsigned long flags;
-
- max_depth = shost->can_queue;
-
- goto not_sata;
-
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data)
- goto not_sata;
- sas_target_priv_data = sas_device_priv_data->sas_target;
- if (!sas_target_priv_data)
- goto not_sata;
- if ((sas_target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_VOLUME))
- goto not_sata;
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device =
- __leapioraid_get_sdev_from_target(ioc, sas_target_priv_data);
- if (sas_device) {
- if (sas_device->device_info & LEAPIORAID_SAS_DEVICE_INFO_SATA_DEVICE)
- max_depth = LEAPIORAID_SATA_QUEUE_DEPTH;
- leapioraid_sas_device_put(sas_device);
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
-not_sata:
- if (!sdev->tagged_supported)
- max_depth = 1;
- if (qdepth > max_depth)
- qdepth = max_depth;
- scsi_change_queue_depth(sdev, qdepth);
- leapioraid_scsihost_display_sdev_qd(sdev);
- return sdev->queue_depth;
-}
-
-void
-leapioraid__scsihost_change_queue_depth(
- struct scsi_device *sdev, int qdepth)
-{
- struct Scsi_Host *shost = sdev->host;
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
-
- if (ioc->enable_sdev_max_qd)
- qdepth = shost->can_queue;
- leapioraid_scsihost_change_queue_depth(sdev, qdepth);
-}
-
-static int
-leapioraid_scsihost_target_alloc(struct scsi_target *starget)
-{
- struct Scsi_Host *shost = dev_to_shost(&starget->dev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_raid_device *raid_device;
- unsigned long flags;
- struct sas_rphy *rphy;
-
- sas_target_priv_data =
- kzalloc(sizeof(struct LEAPIORAID_TARGET), GFP_KERNEL);
- if (!sas_target_priv_data)
- return -ENOMEM;
- starget->hostdata = sas_target_priv_data;
- sas_target_priv_data->starget = starget;
- sas_target_priv_data->handle = LEAPIORAID_INVALID_DEVICE_HANDLE;
- if (starget->channel == RAID_CHANNEL) {
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device = leapioraid_scsihost_raid_device_find_by_id(
- ioc, starget->id, starget->channel);
- if (raid_device) {
- sas_target_priv_data->handle = raid_device->handle;
- sas_target_priv_data->sas_address = raid_device->wwid;
- sas_target_priv_data->flags |=
- LEAPIORAID_TARGET_FLAGS_VOLUME;
- raid_device->starget = starget;
- }
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- return 0;
- }
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- rphy = dev_to_rphy(starget->dev.parent);
- sas_device = __leapioraid_get_sdev_by_addr_and_rphy(ioc,
- rphy->identify.sas_address, rphy);
- if (sas_device) {
- sas_target_priv_data->handle = sas_device->handle;
- sas_target_priv_data->sas_address = sas_device->sas_address;
- sas_target_priv_data->port = sas_device->port;
- sas_target_priv_data->sas_dev = sas_device;
- sas_device->starget = starget;
- sas_device->id = starget->id;
- sas_device->channel = starget->channel;
- if (test_bit(sas_device->handle, ioc->pd_handles))
- sas_target_priv_data->flags |=
- LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT;
- if (sas_device->fast_path)
- sas_target_priv_data->flags |=
- LEAPIORAID_TARGET_FASTPATH_IO;
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- return 0;
-}
-
-static void
-leapioraid_scsihost_target_destroy(struct scsi_target *starget)
-{
- struct Scsi_Host *shost = dev_to_shost(&starget->dev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_raid_device *raid_device;
- unsigned long flags;
-
- sas_target_priv_data = starget->hostdata;
- if (!sas_target_priv_data)
- return;
- if (starget->channel == RAID_CHANNEL) {
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device = leapioraid_scsihost_raid_device_find_by_id(
- ioc, starget->id, starget->channel);
- if (raid_device) {
- raid_device->starget = NULL;
- raid_device->sdev = NULL;
- }
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- goto out;
- }
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device =
- __leapioraid_get_sdev_from_target(ioc, sas_target_priv_data);
- if (sas_device && (sas_device->starget == starget)
- && (sas_device->id == starget->id)
- && (sas_device->channel == starget->channel))
- sas_device->starget = NULL;
- if (sas_device) {
- sas_target_priv_data->sas_dev = NULL;
- leapioraid_sas_device_put(sas_device);
- leapioraid_sas_device_put(sas_device);
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
-out:
- kfree(sas_target_priv_data);
- starget->hostdata = NULL;
-}
-
-static int
-leapioraid_scsihost_slave_alloc(struct scsi_device *sdev)
-{
- struct Scsi_Host *shost;
- struct LEAPIORAID_ADAPTER *ioc;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct scsi_target *starget;
- struct leapioraid_raid_device *raid_device;
- struct leapioraid_sas_device *sas_device;
- unsigned long flags;
-
- sas_device_priv_data =
- kzalloc(sizeof(*sas_device_priv_data), GFP_KERNEL);
- if (!sas_device_priv_data)
- return -ENOMEM;
- sas_device_priv_data->lun = sdev->lun;
- sas_device_priv_data->flags = LEAPIORAID_DEVICE_FLAGS_INIT;
- starget = scsi_target(sdev);
- sas_target_priv_data = starget->hostdata;
- sas_target_priv_data->num_luns++;
- sas_device_priv_data->sas_target = sas_target_priv_data;
- sdev->hostdata = sas_device_priv_data;
- if ((sas_target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT))
- sdev->no_uld_attach = 1;
- shost = dev_to_shost(&starget->dev);
- ioc = leapioraid_shost_private(shost);
- if (starget->channel == RAID_CHANNEL) {
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device = leapioraid_scsihost_raid_device_find_by_id(ioc,
- starget->id,
- starget->channel);
- if (raid_device)
- raid_device->sdev = sdev;
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- }
- if (!(sas_target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_VOLUME)) {
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_addr(ioc,
- sas_target_priv_data->sas_address,
- sas_target_priv_data->port);
- if (sas_device && (sas_device->starget == NULL)) {
- sdev_printk(KERN_INFO, sdev,
- "%s : sas_device->starget set to starget @ %d\n",
- __func__, __LINE__);
- sas_device->starget = starget;
- }
- if (sas_device)
- leapioraid_sas_device_put(sas_device);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- }
- return 0;
-}
-
-static void
-leapioraid_scsihost_slave_destroy(struct scsi_device *sdev)
-{
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct scsi_target *starget;
- struct Scsi_Host *shost;
- struct LEAPIORAID_ADAPTER *ioc;
- struct leapioraid_sas_device *sas_device;
- unsigned long flags;
-
- if (!sdev->hostdata)
- return;
- starget = scsi_target(sdev);
- sas_target_priv_data = starget->hostdata;
- sas_target_priv_data->num_luns--;
- shost = dev_to_shost(&starget->dev);
- ioc = leapioraid_shost_private(shost);
- if (!(sas_target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_VOLUME)) {
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_from_target(ioc,
- sas_target_priv_data);
- if (sas_device && !sas_target_priv_data->num_luns)
- sas_device->starget = NULL;
- if (sas_device)
- leapioraid_sas_device_put(sas_device);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- }
- kfree(sdev->hostdata);
- sdev->hostdata = NULL;
-}
-
-static void
-leapioraid_scsihost_display_sata_capabilities(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, struct scsi_device *sdev)
-{
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasDevP0_t sas_device_pg0;
- u32 ioc_status;
- u16 flags;
- u32 device_info;
-
- if ((leapioraid_config_get_sas_device_pg0
- (ioc, &mpi_reply, &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE, handle))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- flags = le16_to_cpu(sas_device_pg0.Flags);
- device_info = le32_to_cpu(sas_device_pg0.DeviceInfo);
- sdev_printk(KERN_INFO, sdev,
- "atapi(%s), ncq(%s), asyn_notify(%s),\n\t\t"
- "smart(%s), fua(%s), sw_preserve(%s)\n",
- (device_info & LEAPIORAID_SAS_DEVICE_INFO_ATAPI_DEVICE) ? "y" :
- "n",
- (flags & LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_NCQ_SUPPORTED) ? "y"
- : "n",
- (flags & LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_ASYNCHRONOUS_NOTIFY)
- ? "y" : "n",
- (flags & LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_SMART_SUPPORTED) ?
- "y" : "n",
- (flags & LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_FUA_SUPPORTED) ? "y"
- : "n",
- (flags & LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_SW_PRESERVE) ? "y" :
- "n");
-}
-
-static int
-leapioraid_scsihost_is_raid(struct device *dev)
-{
- struct scsi_device *sdev = to_scsi_device(dev);
-
- return (sdev->channel == RAID_CHANNEL) ? 1 : 0;
-}
-
-static void
-leapioraid_scsihost_get_resync(struct device *dev)
-{
- struct scsi_device *sdev = to_scsi_device(dev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(sdev->host);
- static struct leapioraid_raid_device *raid_device;
- unsigned long flags;
- struct LeapioraidRaidVolP0_t vol_pg0;
- struct LeapioraidCfgRep_t mpi_reply;
- u32 volume_status_flags;
- u8 percent_complete;
- u16 handle;
-
- percent_complete = 0;
- handle = 0;
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device = leapioraid_scsihost_raid_device_find_by_id(
- ioc, sdev->id, sdev->channel);
- if (raid_device) {
- handle = raid_device->handle;
- percent_complete = raid_device->percent_complete;
- }
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- if (!handle)
- goto out;
- if (leapioraid_config_get_raid_volume_pg0(ioc, &mpi_reply, &vol_pg0,
- LEAPIORAID_RAID_VOLUME_PGAD_FORM_HANDLE,
- handle,
- sizeof
- (struct LeapioraidRaidVolP0_t))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- percent_complete = 0;
- goto out;
- }
- volume_status_flags = le32_to_cpu(vol_pg0.VolumeStatusFlags);
- if (!(volume_status_flags &
- LEAPIORAID_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS))
- percent_complete = 0;
-out:
- raid_set_resync(leapioraid_raid_template, dev, percent_complete);
-}
-
-static void
-leapioraid_scsihost_get_state(struct device *dev)
-{
- struct scsi_device *sdev = to_scsi_device(dev);
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(sdev->host);
- static struct leapioraid_raid_device *raid_device;
- unsigned long flags;
- struct LeapioraidRaidVolP0_t vol_pg0;
- struct LeapioraidCfgRep_t mpi_reply;
- u32 volstate;
- enum raid_state state = RAID_STATE_UNKNOWN;
- u16 handle = 0;
-
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device = leapioraid_scsihost_raid_device_find_by_id(
- ioc, sdev->id, sdev->channel);
- if (raid_device)
- handle = raid_device->handle;
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- if (!raid_device)
- goto out;
- if (leapioraid_config_get_raid_volume_pg0(ioc, &mpi_reply, &vol_pg0,
- LEAPIORAID_RAID_VOLUME_PGAD_FORM_HANDLE,
- handle,
- sizeof
- (struct LeapioraidRaidVolP0_t))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- volstate = le32_to_cpu(vol_pg0.VolumeStatusFlags);
- if (volstate & LEAPIORAID_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS) {
- state = RAID_STATE_RESYNCING;
- goto out;
- }
- switch (vol_pg0.VolumeState) {
- case LEAPIORAID_RAID_VOL_STATE_OPTIMAL:
- case LEAPIORAID_RAID_VOL_STATE_ONLINE:
- state = RAID_STATE_ACTIVE;
- break;
- case LEAPIORAID_RAID_VOL_STATE_DEGRADED:
- state = RAID_STATE_DEGRADED;
- break;
- case LEAPIORAID_RAID_VOL_STATE_FAILED:
- case LEAPIORAID_RAID_VOL_STATE_MISSING:
- state = RAID_STATE_OFFLINE;
- break;
- }
-out:
- raid_set_state(leapioraid_raid_template, dev, state);
-}
-
-static void
-leapioraid_scsihost_set_level(struct LEAPIORAID_ADAPTER *ioc,
- struct scsi_device *sdev, u8 volume_type)
-{
- enum raid_level level = RAID_LEVEL_UNKNOWN;
-
- switch (volume_type) {
- case LEAPIORAID_RAID_VOL_TYPE_RAID0:
- level = RAID_LEVEL_0;
- break;
- case LEAPIORAID_RAID_VOL_TYPE_RAID10:
- case LEAPIORAID_RAID_VOL_TYPE_RAID1E:
- level = RAID_LEVEL_10;
- break;
- case LEAPIORAID_RAID_VOL_TYPE_RAID1:
- level = RAID_LEVEL_1;
- break;
- }
- raid_set_level(leapioraid_raid_template, &sdev->sdev_gendev, level);
-}
-
-static int
-leapioraid_scsihost_get_volume_capabilities(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_device *raid_device)
-{
- struct LeapioraidRaidVolP0_t *vol_pg0;
- struct LeapioraidRaidPDP0_t pd_pg0;
- struct LeapioraidSasDevP0_t sas_device_pg0;
- struct LeapioraidCfgRep_t mpi_reply;
- u16 sz;
- u8 num_pds;
-
- if ((leapioraid_config_get_number_pds(ioc, raid_device->handle,
- &num_pds)) || !num_pds) {
- dfailprintk(ioc, pr_warn(
- "%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__));
- return 1;
- }
- raid_device->num_pds = num_pds;
- sz = offsetof(struct LeapioraidRaidVolP0_t, PhysDisk) + (num_pds *
- sizeof
- (struct LEAPIORAID_RAIDVOL0_PHYS_DISK));
- vol_pg0 = kzalloc(sz, GFP_KERNEL);
- if (!vol_pg0) {
- dfailprintk(ioc, pr_warn(
- "%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__));
- return 1;
- }
- if ((leapioraid_config_get_raid_volume_pg0(ioc, &mpi_reply, vol_pg0,
- LEAPIORAID_RAID_VOLUME_PGAD_FORM_HANDLE,
- raid_device->handle, sz))) {
- dfailprintk(ioc,
- pr_warn(
- "%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__));
- kfree(vol_pg0);
- return 1;
- }
- raid_device->volume_type = vol_pg0->VolumeType;
- if (!(leapioraid_config_get_phys_disk_pg0(ioc, &mpi_reply,
- &pd_pg0,
- LEAPIORAID_PHYSDISK_PGAD_FORM_PHYSDISKNUM,
- vol_pg0->PhysDisk[0].PhysDiskNum))) {
- if (!
- (leapioraid_config_get_sas_device_pg0
- (ioc, &mpi_reply, &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE,
- le16_to_cpu(pd_pg0.DevHandle)))) {
- raid_device->device_info =
- le32_to_cpu(sas_device_pg0.DeviceInfo);
- }
- }
- kfree(vol_pg0);
- return 0;
-}
-
-static void
-leapioraid_scsihost_enable_tlr(
- struct LEAPIORAID_ADAPTER *ioc, struct scsi_device *sdev)
-{
- u8 data[30];
- u8 page_len, ii;
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct leapioraid_sas_device *sas_device;
-
- if (sdev->type != TYPE_TAPE)
- return;
- if (!(ioc->facts.IOCCapabilities & LEAPIORAID_IOCFACTS_CAPABILITY_TLR))
- return;
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data)
- return;
- sas_target_priv_data = sas_device_priv_data->sas_target;
- if (!sas_target_priv_data)
- return;
- if (leapioraid_scsihost_inquiry_vpd_supported_pages(ioc,
- sas_target_priv_data->handle,
- sdev->lun, data,
- sizeof(data)) !=
- DEVICE_READY) {
- sas_device =
- leapioraid_get_sdev_by_addr(ioc,
- sas_target_priv_data->sas_address,
- sas_target_priv_data->port);
- if (sas_device) {
- sdev_printk(KERN_INFO, sdev,
- "%s: DEVICE NOT READY: handle(0x%04x),\n\t\t"
- "sas_addr(0x%016llx), phy(%d), device_name(0x%016llx)\n",
- __func__,
- sas_device->handle,
- (unsigned long long)sas_device->sas_address,
- sas_device->phy,
- (unsigned long long)sas_device->device_name);
- leapioraid_scsihost_display_enclosure_chassis_info(NULL,
- sas_device,
- sdev, NULL);
- leapioraid_sas_device_put(sas_device);
- }
- return;
- }
- page_len = data[3];
- for (ii = 4; ii < page_len + 4; ii++) {
- if (data[ii] == 0x90) {
- sas_device_priv_data->flags |= LEAPIORAID_DEVICE_TLR_ON;
- return;
- }
- }
-}
-
-static void
-leapioraid_scsihost_enable_ssu_on_sata(
- struct leapioraid_sas_device *sas_device,
- struct scsi_device *sdev)
-{
- if (!(sas_device->device_info & LEAPIORAID_SAS_DEVICE_INFO_SATA_DEVICE))
- return;
- if (sas_device->ssd_device) {
- sdev->manage_system_start_stop = 1;
- sdev->manage_runtime_start_stop = 1;
- }
-}
-
-static int
-leapioraid_scsihost_slave_configure(struct scsi_device *sdev)
-{
- struct Scsi_Host *shost = sdev->host;
- struct LEAPIORAID_ADAPTER *ioc = leapioraid_shost_private(shost);
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_raid_device *raid_device;
- unsigned long flags;
- int qdepth;
- u8 ssp_target = 0;
- char *ds = "";
- char *r_level = "";
- u16 handle, volume_handle = 0;
- u64 volume_wwid = 0;
- u8 *serial_number = NULL;
- enum device_responsive_state retval;
- u8 count = 0;
-
- qdepth = 1;
- sas_device_priv_data = sdev->hostdata;
- sas_device_priv_data->configured_lun = 1;
- sas_device_priv_data->flags &= ~LEAPIORAID_DEVICE_FLAGS_INIT;
- sas_target_priv_data = sas_device_priv_data->sas_target;
- handle = sas_target_priv_data->handle;
- if (sas_target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_VOLUME) {
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device =
- leapioraid_raid_device_find_by_handle(ioc, handle);
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- if (!raid_device) {
- dfailprintk(ioc, pr_warn(
- "%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__,
- __func__));
- return 1;
- }
- if (leapioraid_scsihost_get_volume_capabilities(ioc, raid_device)) {
- dfailprintk(ioc, pr_warn(
- "%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__,
- __func__));
- return 1;
- }
- if (raid_device->device_info &
- LEAPIORAID_SAS_DEVICE_INFO_SSP_TARGET) {
- qdepth = LEAPIORAID_SAS_QUEUE_DEPTH;
- ds = "SSP";
- } else {
- qdepth = LEAPIORAID_SATA_QUEUE_DEPTH;
- if (raid_device->device_info &
- LEAPIORAID_SAS_DEVICE_INFO_SATA_DEVICE)
- ds = "SATA";
- else
- ds = "STP";
- }
- switch (raid_device->volume_type) {
- case LEAPIORAID_RAID_VOL_TYPE_RAID0:
- r_level = "RAID0";
- break;
- case LEAPIORAID_RAID_VOL_TYPE_RAID1E:
- qdepth = LEAPIORAID_RAID_QUEUE_DEPTH;
- if (ioc->manu_pg10.OEMIdentifier &&
- (le32_to_cpu(ioc->manu_pg10.GenericFlags0) &
- 0x00000004) &&
- !(raid_device->num_pds % 2))
- r_level = "RAID10";
- else
- r_level = "RAID1E";
- break;
- case LEAPIORAID_RAID_VOL_TYPE_RAID1:
- qdepth = LEAPIORAID_RAID_QUEUE_DEPTH;
- r_level = "RAID1";
- break;
- case LEAPIORAID_RAID_VOL_TYPE_RAID10:
- qdepth = LEAPIORAID_RAID_QUEUE_DEPTH;
- r_level = "RAID10";
- break;
- case LEAPIORAID_RAID_VOL_TYPE_UNKNOWN:
- default:
- qdepth = LEAPIORAID_RAID_QUEUE_DEPTH;
- r_level = "RAIDX";
- break;
- }
- if (!ioc->warpdrive_msg)
- sdev_printk(
- KERN_INFO, sdev,
- "%s: handle(0x%04x), wwid(0x%016llx), pd_count(%d), type(%s)\n",
- r_level, raid_device->handle,
- (unsigned long long)raid_device->wwid,
- raid_device->num_pds, ds);
- if (shost->max_sectors > LEAPIORAID_RAID_MAX_SECTORS) {
- blk_queue_max_hw_sectors(sdev->request_queue,
- LEAPIORAID_RAID_MAX_SECTORS);
- sdev_printk(KERN_INFO, sdev,
- "Set queue's max_sector to: %u\n",
- LEAPIORAID_RAID_MAX_SECTORS);
- }
- leapioraid__scsihost_change_queue_depth(sdev, qdepth);
- leapioraid_scsihost_set_level(ioc, sdev, raid_device->volume_type);
- return 0;
- }
- if (sas_target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT) {
- if (leapioraid_config_get_volume_handle(ioc, handle,
- &volume_handle)) {
- dfailprintk(ioc, pr_warn(
- "%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__,
- __func__));
- return 1;
- }
- if (volume_handle && leapioraid_config_get_volume_wwid(ioc,
- volume_handle,
- &volume_wwid)) {
- dfailprintk(ioc,
- pr_warn(
- "%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__,
- __func__));
- return 1;
- }
- }
- leapioraid_scsihost_inquiry_vpd_sn(ioc, handle, &serial_number);
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_addr(ioc,
- sas_device_priv_data->sas_target->sas_address,
- sas_device_priv_data->sas_target->port);
- if (!sas_device) {
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- dfailprintk(ioc, pr_warn(
- "%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__));
- kfree(serial_number);
- return 1;
- }
- sas_device->volume_handle = volume_handle;
- sas_device->volume_wwid = volume_wwid;
- sas_device->serial_number = serial_number;
- if (sas_device->device_info & LEAPIORAID_SAS_DEVICE_INFO_SSP_TARGET) {
- qdepth = (sas_device->port_type > 1) ?
- ioc->max_wideport_qd : ioc->max_narrowport_qd;
- ssp_target = 1;
- if (sas_device->device_info & LEAPIORAID_SAS_DEVICE_INFO_SEP) {
- sdev_printk(KERN_WARNING, sdev,
- "set ignore_delay_remove for handle(0x%04x)\n",
- sas_device_priv_data->sas_target->handle);
- sas_device_priv_data->ignore_delay_remove = 1;
- ds = "SES";
- } else
- ds = "SSP";
- } else {
- qdepth = ioc->max_sata_qd;
- if (sas_device->device_info & LEAPIORAID_SAS_DEVICE_INFO_STP_TARGET)
- ds = "STP";
- else if (sas_device->device_info &
- LEAPIORAID_SAS_DEVICE_INFO_SATA_DEVICE)
- ds = "SATA";
- }
- sdev_printk(
- KERN_INFO, sdev,
- "%s: handle(0x%04x), sas_addr(0x%016llx), phy(%d), device_name(0x%016llx)\n",
- ds, handle, (unsigned long long)sas_device->sas_address,
- sas_device->phy,
- (unsigned long long)sas_device->device_name);
- leapioraid_scsihost_display_enclosure_chassis_info(
- NULL, sas_device, sdev, NULL);
- leapioraid_sas_device_put(sas_device);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- if (!ssp_target) {
- leapioraid_scsihost_display_sata_capabilities(ioc, handle, sdev);
- do {
- retval = leapioraid_scsihost_ata_pass_thru_idd(ioc, handle,
- &sas_device->ssd_device, 30, 0);
- } while ((retval == DEVICE_RETRY || retval == DEVICE_RETRY_UA)
- && count++ < 3);
- }
- leapioraid_scsihost_enable_ssu_on_sata(sas_device, sdev);
- if (serial_number)
- sdev_printk(KERN_INFO, sdev, "serial_number(%s)\n",
- serial_number);
- leapioraid__scsihost_change_queue_depth(sdev, qdepth);
- if (ssp_target) {
- sas_read_port_mode_page(sdev);
- leapioraid_scsihost_enable_tlr(ioc, sdev);
- }
-
- return 0;
-}
-
-static int
-leapioraid_scsihost_bios_param(
- struct scsi_device *sdev, struct block_device *bdev,
- sector_t capacity, int params[])
-{
- int heads;
- int sectors;
- sector_t cylinders;
- ulong dummy;
-
- heads = 64;
- sectors = 32;
- dummy = heads * sectors;
- cylinders = capacity;
- sector_div(cylinders, dummy);
- if ((ulong) capacity >= 0x200000) {
- heads = 255;
- sectors = 63;
- dummy = heads * sectors;
- cylinders = capacity;
- sector_div(cylinders, dummy);
- }
- params[0] = heads;
- params[1] = sectors;
- params[2] = cylinders;
- return 0;
-}
-
-static void
-leapioraid_scsihost_response_code(
- struct LEAPIORAID_ADAPTER *ioc, u8 response_code)
-{
- char *desc;
-
- switch (response_code) {
- case LEAPIORAID_SCSITASKMGMT_RSP_TM_COMPLETE:
- desc = "task management request completed";
- break;
- case LEAPIORAID_SCSITASKMGMT_RSP_INVALID_FRAME:
- desc = "invalid frame";
- break;
- case LEAPIORAID_SCSITASKMGMT_RSP_TM_NOT_SUPPORTED:
- desc = "task management request not supported";
- break;
- case LEAPIORAID_SCSITASKMGMT_RSP_TM_FAILED:
- desc = "task management request failed";
- break;
- case LEAPIORAID_SCSITASKMGMT_RSP_TM_SUCCEEDED:
- desc = "task management request succeeded";
- break;
- case LEAPIORAID_SCSITASKMGMT_RSP_TM_INVALID_LUN:
- desc = "invalid lun";
- break;
- case 0xA:
- desc = "overlapped tag attempted";
- break;
- case LEAPIORAID_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC:
- desc = "task queued, however not sent to target";
- break;
- default:
- desc = "unknown";
- break;
- }
- pr_warn("%s response_code(0x%01x): %s\n",
- ioc->name, response_code, desc);
-}
-
-static u8
-leapioraid_scsihost_tm_done(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid, u8 msix_index,
- u32 reply)
-{
- struct LeapioraidDefaultRep_t *mpi_reply;
-
- if (ioc->tm_cmds.status == LEAPIORAID_CMD_NOT_USED)
- return 1;
- if (ioc->tm_cmds.smid != smid)
- return 1;
- ioc->tm_cmds.status |= LEAPIORAID_CMD_COMPLETE;
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (mpi_reply) {
- memcpy(ioc->tm_cmds.reply, mpi_reply, mpi_reply->MsgLength * 4);
- ioc->tm_cmds.status |= LEAPIORAID_CMD_REPLY_VALID;
- }
- ioc->tm_cmds.status &= ~LEAPIORAID_CMD_PENDING;
- complete(&ioc->tm_cmds.done);
- return 1;
-}
-
-void
-leapioraid_scsihost_set_tm_flag(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct scsi_device *sdev;
- u8 skip = 0;
-
- shost_for_each_device(sdev, ioc->shost) {
- if (skip)
- continue;
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data)
- continue;
- if (sas_device_priv_data->sas_target->handle == handle) {
- sas_device_priv_data->sas_target->tm_busy = 1;
- skip = 1;
- ioc->ignore_loginfos = 1;
- }
- }
-}
-
-void
-leapioraid_scsihost_clear_tm_flag(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct scsi_device *sdev;
- u8 skip = 0;
-
- shost_for_each_device(sdev, ioc->shost) {
- if (skip)
- continue;
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data)
- continue;
- if (sas_device_priv_data->sas_target->handle == handle) {
- sas_device_priv_data->sas_target->tm_busy = 0;
- skip = 1;
- ioc->ignore_loginfos = 0;
- }
- }
-}
-
-static int
-leapioraid_scsihost_tm_cmd_map_status(
- struct LEAPIORAID_ADAPTER *ioc, uint channel,
- uint id, uint lun, u8 type, u16 smid_task)
-{
- if (smid_task <= ioc->shost->can_queue) {
- switch (type) {
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET:
- if (!
- (leapioraid_scsihost_scsi_lookup_find_by_target
- (ioc, id, channel)))
- return SUCCESS;
- break;
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET:
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET:
- if (!
- (leapioraid_scsihost_scsi_lookup_find_by_lun
- (ioc, id, lun, channel)))
- return SUCCESS;
- break;
- default:
- return SUCCESS;
- }
- } else if (smid_task == ioc->scsih_cmds.smid) {
- if ((ioc->scsih_cmds.status & LEAPIORAID_CMD_COMPLETE) ||
- (ioc->scsih_cmds.status & LEAPIORAID_CMD_NOT_USED))
- return SUCCESS;
- } else if (smid_task == ioc->ctl_cmds.smid) {
- if ((ioc->ctl_cmds.status & LEAPIORAID_CMD_COMPLETE) ||
- (ioc->ctl_cmds.status & LEAPIORAID_CMD_NOT_USED))
- return SUCCESS;
- }
- return FAILED;
-}
-
-static int
-leapioraid_scsihost_tm_post_processing(struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- uint channel, uint id, uint lun, u8 type,
- u16 smid_task)
-{
- int rc;
-
- rc = leapioraid_scsihost_tm_cmd_map_status(ioc, channel, id, lun, type, smid_task);
- if (rc == SUCCESS)
- return rc;
- pr_err(
- "%s Poll finish of smid(%d),task_type(0x%02x),handle(0x%04x)\n",
- ioc->name,
- smid_task,
- type,
- handle);
- leapioraid_base_mask_interrupts(ioc);
- leapioraid_base_sync_reply_irqs(ioc, 1);
- leapioraid_base_unmask_interrupts(ioc);
- return leapioraid_scsihost_tm_cmd_map_status(
- ioc, channel, id, lun, type, smid_task);
-}
-
-int
-leapioraid_scsihost_issue_tm(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- uint channel, uint id, uint lun, u8 type,
- u16 smid_task, u8 timeout, u8 tr_method)
-{
- struct LeapioraidSCSITmgReq_t *mpi_request;
- struct LeapioraidSCSITmgRep_t *mpi_reply;
- struct LeapioraidSCSIIOReq_t *request;
- u16 smid = 0;
- u32 ioc_state;
- struct leapioraid_scsiio_tracker *scsi_lookup = NULL;
- int rc;
- u16 msix_task = 0;
- u8 issue_reset = 0;
-
- lockdep_assert_held(&ioc->tm_cmds.mutex);
- if (ioc->tm_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_info("%s %s: tm_cmd busy!!!\n",
- __func__, ioc->name);
- return FAILED;
- }
- if (ioc->shost_recovery || ioc->remove_host || ioc->pci_error_recovery) {
- pr_info("%s %s: host reset in progress!\n",
- __func__, ioc->name);
- return FAILED;
- }
- ioc_state = leapioraid_base_get_iocstate(ioc, 0);
- if (ioc_state & LEAPIORAID_DOORBELL_USED) {
- pr_info("%s unexpected doorbell active!\n",
- ioc->name);
- rc = leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
- return (!rc) ? SUCCESS : FAILED;
- }
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) == LEAPIORAID_IOC_STATE_FAULT) {
- leapioraid_print_fault_code(ioc, ioc_state &
- LEAPIORAID_DOORBELL_DATA_MASK);
- rc = leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
- return (!rc) ? SUCCESS : FAILED;
- } else if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP) {
- leapioraid_base_coredump_info(ioc,
- ioc_state &
- LEAPIORAID_DOORBELL_DATA_MASK);
- rc = leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
- return (!rc) ? SUCCESS : FAILED;
- }
- smid = leapioraid_base_get_smid_hpr(ioc, ioc->tm_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- return FAILED;
- }
- if (type == LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK)
- scsi_lookup = leapioraid_get_st_from_smid(ioc, smid_task);
- dtmprintk(ioc, pr_info(
- "%s sending tm: handle(0x%04x),\n\t\t"
- "task_type(0x%02x), timeout(%d) tr_method(0x%x) smid(%d)\n",
- ioc->name,
- handle,
- type,
- timeout,
- tr_method,
- smid_task));
- ioc->tm_cmds.status = LEAPIORAID_CMD_PENDING;
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->tm_cmds.smid = smid;
- memset(mpi_request, 0, sizeof(struct LeapioraidSCSITmgReq_t));
- memset(ioc->tm_cmds.reply, 0, sizeof(struct LeapioraidSCSITmgRep_t));
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_TASK_MGMT;
- mpi_request->DevHandle = cpu_to_le16(handle);
- mpi_request->TaskType = type;
- mpi_request->MsgFlags = tr_method;
- if (type == LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK ||
- type == LEAPIORAID_SCSITASKMGMT_TASKTYPE_QUERY_TASK)
- mpi_request->TaskMID = cpu_to_le16(smid_task);
- int_to_scsilun(lun, (struct scsi_lun *)mpi_request->LUN);
- leapioraid_scsihost_set_tm_flag(ioc, handle);
- init_completion(&ioc->tm_cmds.done);
- if ((type == LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK) &&
- (scsi_lookup && (scsi_lookup->msix_io < ioc->reply_queue_count)))
- msix_task = scsi_lookup->msix_io;
- else
- msix_task = 0;
- ioc->put_smid_hi_priority(ioc, smid, msix_task);
- wait_for_completion_timeout(&ioc->tm_cmds.done, timeout * HZ);
- if (!(ioc->tm_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- leapioraid_check_cmd_timeout(ioc,
- ioc->tm_cmds.status, mpi_request,
- sizeof
- (struct LeapioraidSCSITmgReq_t)
- / 4, issue_reset);
- if (issue_reset) {
- rc = leapioraid_base_hard_reset_handler(ioc,
- FORCE_BIG_HAMMER);
- rc = (!rc) ? SUCCESS : FAILED;
- goto out;
- }
- }
- leapioraid_base_sync_reply_irqs(ioc, 0);
- if (ioc->tm_cmds.status & LEAPIORAID_CMD_REPLY_VALID) {
- mpi_reply = ioc->tm_cmds.reply;
- dtmprintk(ioc, pr_info(
- "%s complete tm: ioc_status(0x%04x),\n\t\t"
- "loginfo(0x%08x), term_count(0x%08x)\n",
- ioc->name,
- le16_to_cpu(mpi_reply->IOCStatus),
- le32_to_cpu(mpi_reply->IOCLogInfo),
- le32_to_cpu(mpi_reply->TerminationCount)));
- if (ioc->logging_level & LEAPIORAID_DEBUG_TM) {
- leapioraid_scsihost_response_code(
- ioc, mpi_reply->ResponseCode);
- if (mpi_reply->IOCStatus)
- leapioraid_debug_dump_mf(
- mpi_request,
- sizeof(struct LeapioraidSCSITmgReq_t) / 4);
- }
- }
- switch (type) {
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK:
- rc = SUCCESS;
- request = leapioraid_base_get_msg_frame(ioc, smid_task);
- if (le16_to_cpu(request->DevHandle) != handle)
- break;
- pr_err(
- "%s Task abort tm failed:\n\t\t"
- "handle(0x%04x), timeout(%d),\n\t\t"
- "tr_method(0x%x), smid(%d), msix_index(%d)\n",
- ioc->name,
- handle,
- timeout,
- tr_method,
- smid_task,
- msix_task);
- rc = FAILED;
- break;
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET:
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET:
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET:
- rc = leapioraid_scsihost_tm_post_processing(
- ioc, handle, channel, id, lun, type, smid_task);
- break;
- case LEAPIORAID_SCSITASKMGMT_TASKTYPE_QUERY_TASK:
- rc = SUCCESS;
- break;
- default:
- rc = FAILED;
- break;
- }
-out:
- leapioraid_scsihost_clear_tm_flag(ioc, handle);
- ioc->tm_cmds.status = LEAPIORAID_CMD_NOT_USED;
- return rc;
-}
-
-int
-leapioraid_scsihost_issue_locked_tm(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- uint channel, uint id, uint lun, u8 type,
- u16 smid_task, u8 timeout, u8 tr_method)
-{
- int ret;
-
- mutex_lock(&ioc->tm_cmds.mutex);
- ret = leapioraid_scsihost_issue_tm(
- ioc, handle, channel, id, lun, type,
- smid_task, timeout, tr_method);
- mutex_unlock(&ioc->tm_cmds.mutex);
- return ret;
-}
-
-static void
-leapioraid_scsihost_tm_display_info(
- struct LEAPIORAID_ADAPTER *ioc,
- struct scsi_cmnd *scmd)
-{
- struct scsi_target *starget = scmd->device->sdev_target;
- struct LEAPIORAID_TARGET *priv_target = starget->hostdata;
- struct leapioraid_sas_device *sas_device = NULL;
- unsigned long flags;
- char *device_str = NULL;
-
- if (!priv_target)
- return;
- if (ioc->warpdrive_msg)
- device_str = "WarpDrive";
- else
- device_str = "volume";
- scsi_print_command(scmd);
- if (priv_target->flags & LEAPIORAID_TARGET_FLAGS_VOLUME) {
- starget_printk(
- KERN_INFO, starget, "%s handle(0x%04x), %s wwid(0x%016llx)\n",
- device_str,
- priv_target->handle, device_str,
- (unsigned long long)priv_target->sas_address);
- } else {
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device =
- __leapioraid_get_sdev_from_target(ioc, priv_target);
- if (sas_device) {
- if (priv_target->flags &
- LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT) {
- starget_printk(KERN_INFO, starget,
- "volume handle(0x%04x), volume wwid(0x%016llx)\n",
- sas_device->volume_handle,
- (unsigned long long)sas_device->volume_wwid);
- }
- starget_printk(KERN_INFO, starget,
- "%s: handle(0x%04x), sas_address(0x%016llx), phy(%d)\n",
- __func__, sas_device->handle,
- (unsigned long long)sas_device->sas_address, sas_device->phy);
- leapioraid_scsihost_display_enclosure_chassis_info(NULL,
- sas_device,
- NULL, starget);
- leapioraid_sas_device_put(sas_device);
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- }
-}
-
-static int
-leapioraid_scsihost_abort(struct scsi_cmnd *scmd)
-{
- struct LEAPIORAID_ADAPTER *ioc
- = leapioraid_shost_private(scmd->device->host);
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- u16 handle;
- int r;
- struct leapioraid_scsiio_tracker *st
- = leapioraid_base_scsi_cmd_priv(scmd);
- u8 timeout = 30;
-
- sdev_printk(
- KERN_INFO, scmd->device,
- "attempting task abort! scmd(0x%p), outstanding for %u ms & timeout %u ms\n",
- scmd, jiffies_to_msecs(jiffies - scmd->jiffies_at_alloc),
- (scsi_cmd_to_rq(scmd)->timeout / HZ) * 1000);
- leapioraid_scsihost_tm_display_info(ioc, scmd);
- if (leapioraid_base_pci_device_is_unplugged(ioc) || ioc->remove_host) {
- sdev_printk(KERN_INFO, scmd->device, "%s scmd(0x%p)\n",
- ((ioc->remove_host) ? ("shost is getting removed!")
- : ("pci device been removed!")), scmd);
- if (st && st->smid)
- leapioraid_base_free_smid(ioc, st->smid);
- scmd->result = DID_NO_CONNECT << 16;
- r = FAILED;
- goto out;
- }
- sas_device_priv_data = scmd->device->hostdata;
- if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
- sdev_printk(KERN_INFO, scmd->device,
- "device been deleted! scmd(0x%p)\n", scmd);
- scmd->result = DID_NO_CONNECT << 16;
- scsi_done(scmd);
- r = SUCCESS;
- goto out;
- }
- if (st == NULL || st->cb_idx == 0xFF) {
- sdev_printk(KERN_INFO, scmd->device,
- "No ref at driver, assuming scmd(0x%p) might have completed\n",
- scmd);
- scmd->result = DID_RESET << 16;
- r = SUCCESS;
- goto out;
- }
- if (sas_device_priv_data->sas_target->flags &
- LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT ||
- sas_device_priv_data->sas_target->flags & LEAPIORAID_TARGET_FLAGS_VOLUME) {
- scmd->result = DID_RESET << 16;
- r = FAILED;
- goto out;
- }
- leapioraid_halt_firmware(ioc, 0);
- handle = sas_device_priv_data->sas_target->handle;
- r = leapioraid_scsihost_issue_locked_tm(
- ioc, handle,
- scmd->device->channel,
- scmd->device->id,
- scmd->device->lun,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK,
- st->smid, timeout, 0);
-out:
- sdev_printk(
- KERN_INFO, scmd->device,
- "task abort: %s scmd(0x%p)\n",
- ((r == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);
- return r;
-}
-
-static int
-leapioraid_scsihost_dev_reset(struct scsi_cmnd *scmd)
-{
- struct LEAPIORAID_ADAPTER *ioc
- = leapioraid_shost_private(scmd->device->host);
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct leapioraid_sas_device *sas_device = NULL;
- u16 handle;
- u8 tr_method = 0;
- u8 tr_timeout = 30;
- int r;
- struct scsi_target *starget = scmd->device->sdev_target;
- struct LEAPIORAID_TARGET *target_priv_data = starget->hostdata;
-
- sdev_printk(KERN_INFO, scmd->device,
- "attempting device reset! scmd(0x%p)\n", scmd);
- leapioraid_scsihost_tm_display_info(ioc, scmd);
- if (leapioraid_base_pci_device_is_unplugged(ioc) || ioc->remove_host) {
- sdev_printk(KERN_INFO, scmd->device, "%s scmd(0x%p)\n",
- ((ioc->remove_host) ? ("shost is getting removed!")
- : ("pci device been removed!")), scmd);
- scmd->result = DID_NO_CONNECT << 16;
- r = FAILED;
- goto out;
- }
- sas_device_priv_data = scmd->device->hostdata;
- if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
- sdev_printk(KERN_INFO, scmd->device,
- "device been deleted! scmd(0x%p)\n", scmd);
- scmd->result = DID_NO_CONNECT << 16;
- scsi_done(scmd);
- r = SUCCESS;
- goto out;
- }
- handle = 0;
- if (sas_device_priv_data->sas_target->flags &
- LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT) {
- sas_device = leapioraid_get_sdev_from_target(ioc,
- target_priv_data);
- if (sas_device)
- handle = sas_device->volume_handle;
- } else
- handle = sas_device_priv_data->sas_target->handle;
- if (!handle) {
- scmd->result = DID_RESET << 16;
- r = FAILED;
- goto out;
- }
- tr_method = LEAPIORAID_SCSITASKMGMT_MSGFLAGS_LINK_RESET;
- r = leapioraid_scsihost_issue_locked_tm(ioc, handle,
- scmd->device->channel,
- scmd->device->id,
- scmd->device->lun,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET,
- 0, tr_timeout, tr_method);
-out:
- sdev_printk(KERN_INFO, scmd->device,
- "device reset: %s scmd(0x%p)\n",
- ((r == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);
- if (sas_device)
- leapioraid_sas_device_put(sas_device);
- return r;
-}
-
-static int
-leapioraid_scsihost_target_reset(struct scsi_cmnd *scmd)
-{
- struct LEAPIORAID_ADAPTER *ioc
- = leapioraid_shost_private(scmd->device->host);
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct leapioraid_sas_device *sas_device = NULL;
- u16 handle;
- u8 tr_method = 0;
- u8 tr_timeout = 30;
- int r;
- struct scsi_target *starget = scmd->device->sdev_target;
- struct LEAPIORAID_TARGET *target_priv_data = starget->hostdata;
-
- starget_printk(KERN_INFO, starget,
- "attempting target reset! scmd(0x%p)\n", scmd);
- leapioraid_scsihost_tm_display_info(ioc, scmd);
- if (leapioraid_base_pci_device_is_unplugged(ioc) || ioc->remove_host) {
- sdev_printk(KERN_INFO, scmd->device, "%s scmd(0x%p)\n",
- ((ioc->remove_host) ? ("shost is getting removed!")
- : ("pci device been removed!")), scmd);
- scmd->result = DID_NO_CONNECT << 16;
- r = FAILED;
- goto out;
- }
- sas_device_priv_data = scmd->device->hostdata;
- if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
- starget_printk(KERN_INFO, starget,
- "target been deleted! scmd(0x%p)\n", scmd);
- scmd->result = DID_NO_CONNECT << 16;
- scsi_done(scmd);
- r = SUCCESS;
- goto out;
- }
- handle = 0;
- if (sas_device_priv_data->sas_target->flags &
- LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT) {
- sas_device = leapioraid_get_sdev_from_target(ioc,
- target_priv_data);
- if (sas_device)
- handle = sas_device->volume_handle;
- } else
- handle = sas_device_priv_data->sas_target->handle;
- if (!handle) {
- scmd->result = DID_RESET << 16;
- r = FAILED;
- goto out;
- }
- tr_method = LEAPIORAID_SCSITASKMGMT_MSGFLAGS_LINK_RESET;
- r = leapioraid_scsihost_issue_locked_tm(ioc, handle,
- scmd->device->channel,
- scmd->device->id, 0,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET,
- 0, tr_timeout, tr_method);
-out:
- starget_printk(KERN_INFO, starget,
- "target reset: %s scmd(0x%p)\n",
- ((r == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);
- if (sas_device)
- leapioraid_sas_device_put(sas_device);
- return r;
-}
-
-static int
-leapioraid_scsihost_host_reset(struct scsi_cmnd *scmd)
-{
- struct LEAPIORAID_ADAPTER *ioc
- = leapioraid_shost_private(scmd->device->host);
- int r, retval;
-
- pr_info("%s attempting host reset! scmd(0x%p)\n",
- ioc->name, scmd);
- scsi_print_command(scmd);
- if (ioc->is_driver_loading || ioc->remove_host) {
- pr_info("%s Blocking the host reset\n",
- ioc->name);
- r = FAILED;
- goto out;
- }
- retval = leapioraid_base_hard_reset_handler(
- ioc, FORCE_BIG_HAMMER);
- r = (retval < 0) ? FAILED : SUCCESS;
-out:
- pr_info("%s host reset: %s scmd(0x%p)\n",
- ioc->name, ((r == SUCCESS) ? "SUCCESS" : "FAILED"),
- scmd);
- return r;
-}
-
-static void
-leapioraid_scsihost_fw_event_add(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- unsigned long flags;
-
- if (ioc->firmware_event_thread == NULL)
- return;
- spin_lock_irqsave(&ioc->fw_event_lock, flags);
- leapioraid_fw_event_work_get(fw_event);
- INIT_LIST_HEAD(&fw_event->list);
- list_add_tail(&fw_event->list, &ioc->fw_event_list);
- INIT_WORK(&fw_event->work, leapioraid_firmware_event_work);
- leapioraid_fw_event_work_get(fw_event);
- queue_work(ioc->firmware_event_thread, &fw_event->work);
- spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
-}
-
-static void
-leapioraid_scsihost_fw_event_del_from_list(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->fw_event_lock, flags);
- if (!list_empty(&fw_event->list)) {
- list_del_init(&fw_event->list);
- leapioraid_fw_event_work_put(fw_event);
- }
- spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
-}
-
-static void
-leapioraid_scsihost_fw_event_requeue(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event, unsigned long delay)
-{
- unsigned long flags;
-
- if (ioc->firmware_event_thread == NULL)
- return;
- spin_lock_irqsave(&ioc->fw_event_lock, flags);
- leapioraid_fw_event_work_get(fw_event);
- list_add_tail(&fw_event->list, &ioc->fw_event_list);
- if (!fw_event->delayed_work_active) {
- fw_event->delayed_work_active = 1;
- INIT_DELAYED_WORK(&fw_event->delayed_work,
- leapioraid_firmware_event_work_delayed);
- }
- queue_delayed_work(ioc->firmware_event_thread, &fw_event->delayed_work,
- msecs_to_jiffies(delay));
- spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
-}
-
-static void
-leapioraid_scsihost_error_recovery_delete_devices(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_fw_event_work *fw_event;
-
- fw_event = leapioraid_alloc_fw_event_work(0);
- if (!fw_event)
- return;
- fw_event->event = LEAPIORAID_REMOVE_UNRESPONDING_DEVICES;
- fw_event->ioc = ioc;
- leapioraid_scsihost_fw_event_add(ioc, fw_event);
- leapioraid_fw_event_work_put(fw_event);
-}
-
-void
-leapioraid_port_enable_complete(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_fw_event_work *fw_event;
-
- fw_event = leapioraid_alloc_fw_event_work(0);
- if (!fw_event)
- return;
- fw_event->event = LEAPIORAID_PORT_ENABLE_COMPLETE;
- fw_event->ioc = ioc;
- leapioraid_scsihost_fw_event_add(ioc, fw_event);
- leapioraid_fw_event_work_put(fw_event);
-}
-
-static struct leapioraid_fw_event_work *dequeue_next_fw_event(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- unsigned long flags;
- struct leapioraid_fw_event_work *fw_event = NULL;
-
- spin_lock_irqsave(&ioc->fw_event_lock, flags);
- if (!list_empty(&ioc->fw_event_list)) {
- fw_event = list_first_entry(&ioc->fw_event_list,
- struct leapioraid_fw_event_work, list);
- list_del_init(&fw_event->list);
- leapioraid_fw_event_work_put(fw_event);
- }
- spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
- return fw_event;
-}
-
-static void
-leapioraid_scsihost_fw_event_cleanup_queue(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_fw_event_work *fw_event;
- bool rc = false;
-
- if ((list_empty(&ioc->fw_event_list) && !ioc->current_event) ||
- !ioc->firmware_event_thread || in_interrupt())
- return;
-
- ioc->fw_events_cleanup = 1;
- if (ioc->shost_recovery && ioc->current_event)
- ioc->current_event->ignore = 1;
- while ((fw_event = dequeue_next_fw_event(ioc)) ||
- (fw_event = ioc->current_event)) {
- if (fw_event == ioc->current_event &&
- ioc->current_event->event !=
- LEAPIORAID_REMOVE_UNRESPONDING_DEVICES) {
- ioc->current_event = NULL;
- continue;
- }
- if (fw_event->event == LEAPIORAID_PORT_ENABLE_COMPLETE) {
- ioc->port_enable_cmds.status |= LEAPIORAID_CMD_RESET;
- ioc->start_scan = 0;
- }
- if (fw_event->delayed_work_active)
- rc = cancel_delayed_work_sync(&fw_event->delayed_work);
- else
- rc = cancel_work_sync(&fw_event->work);
- if (rc)
- leapioraid_fw_event_work_put(fw_event);
- }
- ioc->fw_events_cleanup = 0;
-}
-
-static void
-leapioraid_scsihost_internal_device_block(
- struct scsi_device *sdev,
- struct LEAPIORAID_DEVICE
- *sas_device_priv_data)
-{
- int r = 0;
-
- sdev_printk(KERN_INFO, sdev, "device_block, handle(0x%04x)\n",
- sas_device_priv_data->sas_target->handle);
- sas_device_priv_data->block = 1;
-
- r = scsi_internal_device_block_nowait(sdev);
- if (r == -EINVAL)
- sdev_printk(KERN_WARNING, sdev,
- "device_block failed with return(%d) for handle(0x%04x)\n",
- r, sas_device_priv_data->sas_target->handle);
-}
-
-static void
-leapioraid_scsihost_internal_device_unblock(struct scsi_device *sdev,
- struct LEAPIORAID_DEVICE
- *sas_device_priv_data)
-{
- int r = 0;
-
- sdev_printk(KERN_WARNING, sdev,
- "device_unblock and setting to running, handle(0x%04x)\n",
- sas_device_priv_data->sas_target->handle);
- sas_device_priv_data->block = 0;
-
- r = scsi_internal_device_unblock_nowait(sdev, SDEV_RUNNING);
- if (r == -EINVAL) {
- sdev_printk(KERN_WARNING, sdev,
- "device_unblock failed with return(%d)\n\t\t"
- "for handle(0x%04x) performing a block followed by an unblock\n",
- r,
- sas_device_priv_data->sas_target->handle);
- sas_device_priv_data->block = 1;
- r = scsi_internal_device_block_nowait(sdev);
- if (r)
- sdev_printk(KERN_WARNING, sdev,
- "retried device_block failed with return(%d)\n\t\t"
- "for handle(0x%04x)\n",
- r,
- sas_device_priv_data->sas_target->handle);
- sas_device_priv_data->block = 0;
-
- r = scsi_internal_device_unblock_nowait(sdev, SDEV_RUNNING);
- if (r)
- sdev_printk(KERN_WARNING, sdev,
- "retried device_unblock failed\n\t\t"
- "with return(%d) for handle(0x%04x)\n",
- r,
- sas_device_priv_data->sas_target->handle);
- }
-}
-
-static void
-leapioraid_scsihost_ublock_io_all_device(
- struct LEAPIORAID_ADAPTER *ioc, u8 no_turs)
-{
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct LEAPIORAID_TARGET *sas_target;
- enum device_responsive_state rc;
- struct scsi_device *sdev;
- struct leapioraid_sas_device *sas_device = NULL;
- int count;
- u8 tr_timeout = 30;
- u8 tr_method = 0;
-
- shost_for_each_device(sdev, ioc->shost) {
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data)
- continue;
- sas_target = sas_device_priv_data->sas_target;
- if (!sas_target || sas_target->deleted)
- continue;
- if (!sas_device_priv_data->block)
- continue;
- count = 0;
- if (no_turs) {
- sdev_printk(KERN_WARNING, sdev,
- "device_unblocked, handle(0x%04x)\n",
- sas_device_priv_data->sas_target->handle);
- leapioraid_scsihost_internal_device_unblock(sdev,
- sas_device_priv_data);
- continue;
- }
- do {
- rc = leapioraid_scsihost_wait_for_device_to_become_ready(
- ioc,
- sas_target->handle,
- 0,
- (sas_target->flags
- & LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT),
- sdev->lun,
- tr_timeout,
- tr_method);
- if (rc == DEVICE_RETRY || rc == DEVICE_START_UNIT
- || rc == DEVICE_STOP_UNIT || rc == DEVICE_RETRY_UA)
- ssleep(1);
- } while ((rc == DEVICE_RETRY || rc == DEVICE_START_UNIT ||
- rc == DEVICE_STOP_UNIT || rc == DEVICE_RETRY_UA)
- && count++ < 144);
- sas_device_priv_data->block = 0;
- if (rc != DEVICE_READY)
- sas_device_priv_data->deleted = 1;
- leapioraid_scsihost_internal_device_unblock(
- sdev, sas_device_priv_data);
- if (rc != DEVICE_READY) {
- sdev_printk(KERN_WARNING, sdev,
- "%s: device_offlined, handle(0x%04x)\n",
- __func__,
- sas_device_priv_data->sas_target->handle);
- scsi_device_set_state(sdev, SDEV_OFFLINE);
- sas_device = leapioraid_get_sdev_by_addr(ioc,
- sas_device_priv_data->sas_target->sas_address,
- sas_device_priv_data->sas_target->port);
- if (sas_device) {
- leapioraid_scsihost_display_enclosure_chassis_info(
- NULL,
- sas_device,
- sdev,
- NULL);
- leapioraid_sas_device_put(sas_device);
- }
- } else
- sdev_printk(KERN_WARNING, sdev,
- "device_unblocked, handle(0x%04x)\n",
- sas_device_priv_data->sas_target->handle);
- }
-}
-
-static void
-leapioraid_scsihost_ublock_io_device_wait(
- struct LEAPIORAID_ADAPTER *ioc, u64 sas_address,
- struct leapioraid_hba_port *port)
-{
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct LEAPIORAID_TARGET *sas_target;
- enum device_responsive_state rc;
- struct scsi_device *sdev;
- int count, host_reset_completion_count;
- struct leapioraid_sas_device *sas_device;
- u8 tr_timeout = 30;
- u8 tr_method = 0;
-
- shost_for_each_device(sdev, ioc->shost) {
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data)
- continue;
- sas_target = sas_device_priv_data->sas_target;
- if (!sas_target)
- continue;
- if (sas_target->sas_address != sas_address ||
- sas_target->port != port)
- continue;
- if (sdev->sdev_state == SDEV_OFFLINE) {
- sas_device_priv_data->block = 1;
- sas_device_priv_data->deleted = 0;
- scsi_device_set_state(sdev, SDEV_RUNNING);
- scsi_internal_device_block_nowait(sdev);
- }
- }
- shost_for_each_device(sdev, ioc->shost) {
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data)
- continue;
- sas_target = sas_device_priv_data->sas_target;
- if (!sas_target)
- continue;
- if (sas_target->sas_address != sas_address ||
- sas_target->port != port)
- continue;
- if (!sas_device_priv_data->block)
- continue;
- count = 0;
- do {
- host_reset_completion_count = 0;
- rc = leapioraid_scsihost_wait_for_device_to_become_ready(
- ioc,
- sas_target->handle,
- 0,
- (sas_target->flags & LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT),
- sdev->lun,
- tr_timeout,
- tr_method);
- if (rc == DEVICE_RETRY || rc == DEVICE_START_UNIT
- || rc == DEVICE_STOP_UNIT
- || rc == DEVICE_RETRY_UA) {
- do {
- msleep(500);
- host_reset_completion_count++;
- } while (rc == DEVICE_RETRY &&
- ioc->shost_recovery);
- if (host_reset_completion_count > 1) {
- rc = leapioraid_scsihost_wait_for_device_to_become_ready(
- ioc, sas_target->handle, 0,
- (sas_target->flags
- & LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT),
- sdev->lun, tr_timeout, tr_method);
- if (rc == DEVICE_RETRY
- || rc == DEVICE_START_UNIT
- || rc == DEVICE_STOP_UNIT
- || rc == DEVICE_RETRY_UA)
- msleep(500);
- }
- continue;
- }
- } while ((rc == DEVICE_RETRY || rc == DEVICE_START_UNIT ||
- rc == DEVICE_STOP_UNIT || rc == DEVICE_RETRY_UA)
- && count++ <= 144);
- sas_device_priv_data->block = 0;
- if (rc != DEVICE_READY)
- sas_device_priv_data->deleted = 1;
-
- scsi_internal_device_unblock_nowait(sdev, SDEV_RUNNING);
-
- if (rc != DEVICE_READY) {
- sdev_printk(KERN_WARNING, sdev,
- "%s: device_offlined, handle(0x%04x)\n",
- __func__,
- sas_device_priv_data->sas_target->handle);
- sas_device =
- leapioraid_get_sdev_by_handle(ioc,
- sas_device_priv_data->sas_target->handle);
- if (sas_device) {
- leapioraid_scsihost_display_enclosure_chassis_info(NULL,
- sas_device,
- sdev,
- NULL);
- leapioraid_sas_device_put(sas_device);
- }
- scsi_device_set_state(sdev, SDEV_OFFLINE);
- } else {
- sdev_printk(KERN_WARNING, sdev,
- "device_unblocked, handle(0x%04x)\n",
- sas_device_priv_data->sas_target->handle);
- }
- }
-}
-
-static void
-leapioraid_scsihost_ublock_io_device(
- struct LEAPIORAID_ADAPTER *ioc, u64 sas_address,
- struct leapioraid_hba_port *port)
-{
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct scsi_device *sdev;
-
- shost_for_each_device(sdev, ioc->shost) {
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data || !sas_device_priv_data->sas_target)
- continue;
- if (sas_device_priv_data->sas_target->sas_address
- != sas_address ||
- sas_device_priv_data->sas_target->port != port)
- continue;
- if (sas_device_priv_data->block) {
- leapioraid_scsihost_internal_device_unblock(sdev,
- sas_device_priv_data);
- }
- scsi_device_set_state(sdev, SDEV_OFFLINE);
- }
-}
-
-static void leapioraid_scsihost_block_io_all_device(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct scsi_device *sdev;
-
- shost_for_each_device(sdev, ioc->shost) {
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data)
- continue;
- if (sas_device_priv_data->block)
- continue;
- if (sas_device_priv_data->ignore_delay_remove) {
- sdev_printk(KERN_INFO, sdev,
- "%s skip device_block for SES handle(0x%04x)\n",
- __func__,
- sas_device_priv_data->sas_target->handle);
- continue;
- }
- leapioraid_scsihost_internal_device_block(
- sdev, sas_device_priv_data);
- }
-}
-
-static void
-leapioraid_scsihost_block_io_device(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct scsi_device *sdev;
- struct leapioraid_sas_device *sas_device;
-
- sas_device = leapioraid_get_sdev_by_handle(ioc, handle);
- shost_for_each_device(sdev, ioc->shost) {
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data)
- continue;
- if (sas_device_priv_data->sas_target->handle != handle)
- continue;
- if (sas_device_priv_data->block)
- continue;
- if (sas_device && sas_device->pend_sas_rphy_add)
- continue;
- if (sas_device_priv_data->ignore_delay_remove) {
- sdev_printk(KERN_INFO, sdev,
- "%s skip device_block for SES handle(0x%04x)\n",
- __func__,
- sas_device_priv_data->sas_target->handle);
- continue;
- }
- leapioraid_scsihost_internal_device_block(
- sdev, sas_device_priv_data);
- }
- if (sas_device)
- leapioraid_sas_device_put(sas_device);
-}
-
-static void
-leapioraid_scsihost_block_io_to_children_attached_to_ex(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_sas_node *sas_expander)
-{
- struct leapioraid_sas_port *leapioraid_port;
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_raid_sas_node *expander_sibling;
- unsigned long flags;
-
- if (!sas_expander)
- return;
- list_for_each_entry(leapioraid_port,
- &sas_expander->sas_port_list, port_list) {
- if (leapioraid_port->remote_identify.device_type ==
- SAS_END_DEVICE) {
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_addr(ioc,
- leapioraid_port->remote_identify.sas_address,
- leapioraid_port->hba_port);
- if (sas_device) {
- set_bit(sas_device->handle,
- ioc->blocking_handles);
- leapioraid_sas_device_put(sas_device);
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- }
- }
- list_for_each_entry(leapioraid_port,
- &sas_expander->sas_port_list, port_list) {
- if (leapioraid_port->remote_identify.device_type ==
- SAS_EDGE_EXPANDER_DEVICE ||
- leapioraid_port->remote_identify.device_type ==
- SAS_FANOUT_EXPANDER_DEVICE) {
- expander_sibling =
- leapioraid_scsihost_expander_find_by_sas_address
- (ioc, leapioraid_port->remote_identify.sas_address,
- leapioraid_port->hba_port);
- leapioraid_scsihost_block_io_to_children_attached_to_ex(
- ioc, expander_sibling);
- }
- }
-}
-
-static void
-leapioraid_scsihost_block_io_to_children_attached_directly(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventDataSasTopoChangeList_t *event_data)
-{
- int i;
- u16 handle;
- u16 reason_code;
-
- for (i = 0; i < event_data->NumEntries; i++) {
- handle = le16_to_cpu(event_data->PHY[i].AttachedDevHandle);
- if (!handle)
- continue;
- reason_code = event_data->PHY[i].PhyStatus &
- LEAPIORAID_EVENT_SAS_TOPO_RC_MASK;
- if (reason_code ==
- LEAPIORAID_EVENT_SAS_TOPO_RC_DELAY_NOT_RESPONDING)
- leapioraid_scsihost_block_io_device(ioc, handle);
- }
-}
-
-static void
-leapioraid_scsihost_tm_tr_send(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct LeapioraidSCSITmgReq_t *mpi_request;
- u16 smid;
- struct leapioraid_sas_device *sas_device = NULL;
- struct LEAPIORAID_TARGET *sas_target_priv_data = NULL;
- u64 sas_address = 0;
- unsigned long flags;
- struct leapioraid_tr_list *delayed_tr;
- u32 ioc_state;
- struct leapioraid_hba_port *port = NULL;
- u8 tr_method = 0;
-
- if (ioc->pci_error_recovery) {
- dewtprintk(ioc, pr_info(
- "%s %s: host in pci error recovery: handle(0x%04x)\n",
- __func__, ioc->name, handle));
- return;
- }
- ioc_state = leapioraid_base_get_iocstate(ioc, 1);
- if (ioc_state != LEAPIORAID_IOC_STATE_OPERATIONAL) {
- dewtprintk(ioc, pr_info(
- "%s %s: host is not operational: handle(0x%04x)\n",
- __func__, ioc->name, handle));
- return;
- }
- if (test_bit(handle, ioc->pd_handles))
- return;
- clear_bit(handle, ioc->pend_os_device_add);
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_handle(ioc, handle);
- if (sas_device && sas_device->starget && sas_device->starget->hostdata) {
- sas_target_priv_data = sas_device->starget->hostdata;
- sas_target_priv_data->deleted = 1;
- sas_address = sas_device->sas_address;
- port = sas_device->port;
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- if (!sas_device)
- tr_method = LEAPIORAID_SCSITASKMGMT_MSGFLAGS_LINK_RESET;
-
- if (sas_target_priv_data) {
- dewtprintk(ioc, pr_err(
- "%s %s: setting delete flag: handle(0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, __func__, handle,
- (unsigned long long)sas_address));
- if (sas_device) {
- dewtprintk(ioc,
- leapioraid_scsihost_display_enclosure_chassis_info(
- ioc,
- sas_device,
- NULL,
- NULL));
- }
- leapioraid_scsihost_ublock_io_device(ioc, sas_address, port);
- sas_target_priv_data->handle =
- LEAPIORAID_INVALID_DEVICE_HANDLE;
- }
- smid = leapioraid_base_get_smid_hpr(ioc, ioc->tm_tr_cb_idx);
- if (!smid) {
- delayed_tr = kzalloc(sizeof(*delayed_tr), GFP_ATOMIC);
- if (!delayed_tr)
- goto out;
- INIT_LIST_HEAD(&delayed_tr->list);
- delayed_tr->handle = handle;
- list_add_tail(&delayed_tr->list, &ioc->delayed_tr_list);
- dewtprintk(ioc, pr_err(
- "%s DELAYED:tr:handle(0x%04x), (open)\n",
- ioc->name, handle));
- goto out;
- }
- dewtprintk(ioc, pr_info(
- "%s tr_send:handle(0x%04x), (open), smid(%d), cb(%d)\n",
- ioc->name, handle,
- smid, ioc->tm_tr_cb_idx));
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- memset(mpi_request, 0, sizeof(struct LeapioraidSCSITmgReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_TASK_MGMT;
- mpi_request->DevHandle = cpu_to_le16(handle);
- mpi_request->TaskType = LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
- mpi_request->MsgFlags = tr_method;
- set_bit(handle, ioc->device_remove_in_progress);
- ioc->put_smid_hi_priority(ioc, smid, 0);
-out:
- if (sas_device)
- leapioraid_sas_device_put(sas_device);
-}
-
-static u8
-leapioraid_scsihost_tm_tr_complete(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply)
-{
- u16 handle;
- struct LeapioraidSCSITmgReq_t *mpi_request_tm;
- struct LeapioraidSCSITmgRep_t *mpi_reply =
- leapioraid_base_get_reply_virt_addr(ioc, reply);
- struct LeapioraidSasIoUnitControlReq_t *mpi_request;
- u16 smid_sas_ctrl;
- u32 ioc_state;
- struct leapioraid_sc_list *delayed_sc;
-
- if (ioc->pci_error_recovery) {
- dewtprintk(ioc, pr_info(
- "%s %s: host in pci error recovery\n", __func__,
- ioc->name));
- return 1;
- }
- ioc_state = leapioraid_base_get_iocstate(ioc, 1);
- if (ioc_state != LEAPIORAID_IOC_STATE_OPERATIONAL) {
- dewtprintk(ioc, pr_info(
- "%s %s: host is not operational\n", __func__, ioc->name));
- return 1;
- }
- if (unlikely(!mpi_reply)) {
- pr_err(
- "%s mpi_reply not valid at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- return 1;
- }
- mpi_request_tm = leapioraid_base_get_msg_frame(ioc, smid);
- handle = le16_to_cpu(mpi_request_tm->DevHandle);
- if (handle != le16_to_cpu(mpi_reply->DevHandle)) {
- dewtprintk(ioc, pr_err(
- "%s spurious interrupt: handle(0x%04x:0x%04x), smid(%d)!!!\n",
- ioc->name, handle,
- le16_to_cpu(mpi_reply->DevHandle), smid));
- return 0;
- }
- dewtprintk(ioc, pr_err(
- "%s tr_complete: handle(0x%04x), (open) smid(%d),\n\t\t"
- "ioc_status(0x%04x), loginfo(0x%08x), completed(%d)\n",
- ioc->name,
- handle,
- smid,
- le16_to_cpu(mpi_reply->IOCStatus),
- le32_to_cpu(mpi_reply->IOCLogInfo),
- le32_to_cpu(mpi_reply->TerminationCount)));
- smid_sas_ctrl =
- leapioraid_base_get_smid(ioc, ioc->tm_sas_control_cb_idx);
- if (!smid_sas_ctrl) {
- delayed_sc = kzalloc(sizeof(*delayed_sc), GFP_ATOMIC);
- if (!delayed_sc)
- return leapioraid_scsihost_check_for_pending_tm(ioc, smid);
- INIT_LIST_HEAD(&delayed_sc->list);
- delayed_sc->handle = le16_to_cpu(mpi_request_tm->DevHandle);
- list_add_tail(&delayed_sc->list, &ioc->delayed_sc_list);
- dewtprintk(ioc, pr_err(
- "%s DELAYED:sc:handle(0x%04x), (open)\n",
- ioc->name, handle));
- return leapioraid_scsihost_check_for_pending_tm(ioc, smid);
- }
- dewtprintk(ioc, pr_info(
- "%s sc_send:handle(0x%04x), (open), smid(%d), cb(%d)\n",
- ioc->name, handle,
- smid_sas_ctrl, ioc->tm_sas_control_cb_idx));
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid_sas_ctrl);
- memset(mpi_request, 0, sizeof(struct LeapioraidIoUnitControlReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_IO_UNIT_CONTROL;
- mpi_request->Operation = LEAPIORAID_CTRL_OP_REMOVE_DEVICE;
- mpi_request->DevHandle = mpi_request_tm->DevHandle;
- ioc->put_smid_default(ioc, smid_sas_ctrl);
- return leapioraid_scsihost_check_for_pending_tm(ioc, smid);
-}
-
-inline bool
-leapioraid_scsihost_allow_scmd_to_device(
- struct LEAPIORAID_ADAPTER *ioc,
- struct scsi_cmnd *scmd)
-{
- if (ioc->pci_error_recovery)
- return false;
- if (ioc->adapter_over_temp)
- return false;
- if (ioc->remove_host) {
- if (leapioraid_base_pci_device_is_unplugged(ioc))
- return false;
- switch (scmd->cmnd[0]) {
- case SYNCHRONIZE_CACHE:
- case START_STOP:
- return true;
- default:
- return false;
- }
- }
- return true;
-}
-
-static u8
-leapioraid_scsihost_sas_control_complete(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply)
-{
- struct LeapioraidDefaultRep_t *mpi_reply =
- leapioraid_base_get_reply_virt_addr(ioc, reply);
- u16 dev_handle;
-
- if (likely(mpi_reply)) {
- dev_handle
- = ((struct LeapioraidIoUnitControlRep_t *)mpi_reply)->DevHandle;
- dewtprintk(ioc, pr_err(
- "%s sc_complete:handle(0x%04x), (open) smid(%d),\n\t\t"
- "ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name,
- le16_to_cpu(dev_handle),
- smid,
- le16_to_cpu(mpi_reply->IOCStatus),
- le32_to_cpu(mpi_reply->IOCLogInfo)));
- if (le16_to_cpu(mpi_reply->IOCStatus) ==
- LEAPIORAID_IOCSTATUS_SUCCESS) {
- clear_bit(le16_to_cpu(dev_handle),
- ioc->device_remove_in_progress);
- ioc->tm_tr_retry[le16_to_cpu(dev_handle)] = 0;
- } else if (ioc->tm_tr_retry[le16_to_cpu(dev_handle)] < 3) {
- dewtprintk(ioc, pr_err(
- "%s re-initiating tm_tr_send:handle(0x%04x)\n",
- ioc->name,
- le16_to_cpu(dev_handle)));
- ioc->tm_tr_retry[le16_to_cpu(dev_handle)]++;
- leapioraid_scsihost_tm_tr_send(ioc, le16_to_cpu(dev_handle));
- } else {
- dewtprintk(ioc, pr_err(
- "%s Exiting out of tm_tr_send retries:handle(0x%04x)\n",
- ioc->name,
- le16_to_cpu(dev_handle)));
- ioc->tm_tr_retry[le16_to_cpu(dev_handle)] = 0;
- clear_bit(le16_to_cpu(dev_handle),
- ioc->device_remove_in_progress);
- }
- } else {
- pr_err(
- "%s mpi_reply not valid at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- }
- return leapioraid_check_for_pending_internal_cmds(ioc, smid);
-}
-
-static void
-leapioraid_scsihost_tm_tr_volume_send(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct LeapioraidSCSITmgReq_t *mpi_request;
- u16 smid;
- struct leapioraid_tr_list *delayed_tr;
-
- if (ioc->pci_error_recovery) {
- dewtprintk(ioc, pr_info(
- "%s %s: host reset in progress!\n", __func__, ioc->name));
- return;
- }
- smid = leapioraid_base_get_smid_hpr(ioc, ioc->tm_tr_volume_cb_idx);
- if (!smid) {
- delayed_tr = kzalloc(sizeof(*delayed_tr), GFP_ATOMIC);
- if (!delayed_tr)
- return;
- INIT_LIST_HEAD(&delayed_tr->list);
- delayed_tr->handle = handle;
- list_add_tail(&delayed_tr->list, &ioc->delayed_tr_volume_list);
- dewtprintk(ioc, pr_err(
- "%s DELAYED:tr:handle(0x%04x), (open)\n",
- ioc->name, handle));
- return;
- }
- dewtprintk(ioc, pr_info(
- "%s tr_send:handle(0x%04x), (open), smid(%d), cb(%d)\n",
- ioc->name, handle,
- smid, ioc->tm_tr_volume_cb_idx));
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- memset(mpi_request, 0, sizeof(struct LeapioraidSCSITmgReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_TASK_MGMT;
- mpi_request->DevHandle = cpu_to_le16(handle);
- mpi_request->TaskType = LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
- ioc->put_smid_hi_priority(ioc, smid, 0);
-}
-
-static u8
-leapioraid_scsihost_tm_volume_tr_complete(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply)
-{
- u16 handle;
- struct LeapioraidSCSITmgReq_t *mpi_request_tm;
- struct LeapioraidSCSITmgRep_t *mpi_reply =
- leapioraid_base_get_reply_virt_addr(ioc, reply);
-
- if (ioc->shost_recovery || ioc->pci_error_recovery) {
- dewtprintk(ioc, pr_info(
- "%s %s: host reset in progress!\n", __func__, ioc->name));
- return 1;
- }
- if (unlikely(!mpi_reply)) {
- pr_err(
- "%s mpi_reply not valid at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- return 1;
- }
- mpi_request_tm = leapioraid_base_get_msg_frame(ioc, smid);
- handle = le16_to_cpu(mpi_request_tm->DevHandle);
- if (handle != le16_to_cpu(mpi_reply->DevHandle)) {
- dewtprintk(ioc, pr_err(
- "%s spurious interrupt: handle(0x%04x:0x%04x), smid(%d)!!!\n",
- ioc->name, handle,
- le16_to_cpu(mpi_reply->DevHandle), smid));
- return 0;
- }
- dewtprintk(ioc, pr_err(
- "%s tr_complete:handle(0x%04x), (open) smid(%d),\n\t\t"
- "ioc_status(0x%04x), loginfo(0x%08x), completed(%d)\n",
- ioc->name,
- handle,
- smid,
- le16_to_cpu(mpi_reply->IOCStatus),
- le32_to_cpu(mpi_reply->IOCLogInfo),
- le32_to_cpu(mpi_reply->TerminationCount)));
- return leapioraid_scsihost_check_for_pending_tm(ioc, smid);
-}
-
-static void
-leapioraid_scsihost_tm_internal_tr_send(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct leapioraid_tr_list *delayed_tr;
- struct LeapioraidSCSITmgReq_t *mpi_request;
- u16 smid;
- u8 tr_method = LEAPIORAID_SCSITASKMGMT_MSGFLAGS_LINK_RESET;
-
- smid = leapioraid_base_get_smid_hpr(ioc, ioc->tm_tr_internal_cb_idx);
- if (!smid) {
- delayed_tr = kzalloc(sizeof(*delayed_tr), GFP_ATOMIC);
- if (!delayed_tr)
- return;
- INIT_LIST_HEAD(&delayed_tr->list);
- delayed_tr->handle = handle;
- list_add_tail(&delayed_tr->list,
- &ioc->delayed_internal_tm_list);
- dewtprintk(ioc,
- pr_err(
- "%s DELAYED:tr:handle(0x%04x), (open)\n",
- ioc->name, handle));
- return;
- }
- dewtprintk(ioc, pr_info(
- "%s tr_send:handle(0x%04x), (open), smid(%d), cb(%d)\n",
- ioc->name, handle,
- smid, ioc->tm_tr_internal_cb_idx));
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- memset(mpi_request, 0, sizeof(struct LeapioraidSCSITmgReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_TASK_MGMT;
- mpi_request->DevHandle = cpu_to_le16(handle);
- mpi_request->TaskType = LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
- mpi_request->MsgFlags = tr_method;
- ioc->put_smid_hi_priority(ioc, smid, 0);
-}
-
-static u8
-leapioraid_scsihost_tm_internal_tr_complete(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply)
-{
- struct LeapioraidSCSITmgRep_t *mpi_reply =
- leapioraid_base_get_reply_virt_addr(ioc, reply);
-
- if (likely(mpi_reply)) {
- dewtprintk(ioc, pr_err(
- "%s tr_complete:handle(0x%04x),\n\t\t"
- "(open) smid(%d), ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name,
- le16_to_cpu(mpi_reply->DevHandle),
- smid,
- le16_to_cpu(mpi_reply->IOCStatus),
- le32_to_cpu(mpi_reply->IOCLogInfo)));
- } else {
- pr_err("%s mpi_reply not valid at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- return 1;
- }
- return leapioraid_scsihost_check_for_pending_tm(ioc, smid);
-}
-
-static void
-leapioraid_scsihost_issue_delayed_event_ack(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- U16 event, U32 event_context)
-{
- struct LeapioraidEventAckReq_t *ack_request;
- int i = smid - ioc->internal_smid;
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- ioc->internal_lookup[i].cb_idx = ioc->base_cb_idx;
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
- dewtprintk(ioc, pr_info(
- "%s EVENT ACK: event(0x%04x), smid(%d), cb(%d)\n",
- ioc->name, le16_to_cpu(event),
- smid, ioc->base_cb_idx));
- ack_request = leapioraid_base_get_msg_frame(ioc, smid);
- memset(ack_request, 0, sizeof(struct LeapioraidEventAckReq_t));
- ack_request->Function = LEAPIORAID_FUNC_EVENT_ACK;
- ack_request->Event = event;
- ack_request->EventContext = event_context;
- ack_request->VF_ID = 0;
- ack_request->VP_ID = 0;
- ioc->put_smid_default(ioc, smid);
-}
-
-static void
-leapioraid_scsihost_issue_delayed_sas_io_unit_ctrl(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 smid, u16 handle)
-{
- struct LeapioraidSasIoUnitControlReq_t *mpi_request;
- u32 ioc_state;
- int i = smid - ioc->internal_smid;
- unsigned long flags;
-
- if (ioc->remove_host) {
- dewtprintk(ioc, pr_info(
- "%s %s: host has been removed\n", __func__, ioc->name));
- return;
- } else if (ioc->pci_error_recovery) {
- dewtprintk(ioc, pr_info(
- "%s %s: host in pci error recovery\n", __func__,
- ioc->name));
- return;
- }
- ioc_state = leapioraid_base_get_iocstate(ioc, 1);
- if (ioc_state != LEAPIORAID_IOC_STATE_OPERATIONAL) {
- dewtprintk(ioc, pr_info(
- "%s %s: host is not operational\n", __func__, ioc->name));
- return;
- }
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- ioc->internal_lookup[i].cb_idx = ioc->tm_sas_control_cb_idx;
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
- dewtprintk(ioc, pr_info(
- "%s sc_send:handle(0x%04x), (open), smid(%d), cb(%d)\n",
- ioc->name, handle,
- smid, ioc->tm_sas_control_cb_idx));
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- memset(mpi_request, 0, sizeof(struct LeapioraidIoUnitControlReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_IO_UNIT_CONTROL;
- mpi_request->Operation = 0x0D;
- mpi_request->DevHandle = cpu_to_le16(handle);
- ioc->put_smid_default(ioc, smid);
-}
-
-u8
-leapioraid_check_for_pending_internal_cmds(struct LEAPIORAID_ADAPTER *ioc,
- u16 smid)
-{
- struct leapioraid_sc_list *delayed_sc;
- struct leapioraid_event_ack_list *delayed_event_ack;
-
- if (!list_empty(&ioc->delayed_event_ack_list)) {
- delayed_event_ack = list_entry(ioc->delayed_event_ack_list.next,
- struct leapioraid_event_ack_list, list);
- leapioraid_scsihost_issue_delayed_event_ack(ioc, smid,
- delayed_event_ack->Event,
- delayed_event_ack->EventContext);
- list_del(&delayed_event_ack->list);
- kfree(delayed_event_ack);
- return 0;
- }
- if (!list_empty(&ioc->delayed_sc_list)) {
- delayed_sc = list_entry(ioc->delayed_sc_list.next,
- struct leapioraid_sc_list, list);
- leapioraid_scsihost_issue_delayed_sas_io_unit_ctrl(ioc, smid,
- delayed_sc->handle);
- list_del(&delayed_sc->list);
- kfree(delayed_sc);
- return 0;
- }
- return 1;
-}
-
-static u8
-leapioraid_scsihost_check_for_pending_tm(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid)
-{
- struct leapioraid_tr_list *delayed_tr;
-
- if (!list_empty(&ioc->delayed_tr_volume_list)) {
- delayed_tr = list_entry(ioc->delayed_tr_volume_list.next,
- struct leapioraid_tr_list, list);
- leapioraid_base_free_smid(ioc, smid);
- leapioraid_scsihost_tm_tr_volume_send(ioc, delayed_tr->handle);
- list_del(&delayed_tr->list);
- kfree(delayed_tr);
- return 0;
- }
- if (!list_empty(&ioc->delayed_tr_list)) {
- delayed_tr = list_entry(ioc->delayed_tr_list.next,
- struct leapioraid_tr_list, list);
- leapioraid_base_free_smid(ioc, smid);
- leapioraid_scsihost_tm_tr_send(ioc, delayed_tr->handle);
- list_del(&delayed_tr->list);
- kfree(delayed_tr);
- return 0;
- }
- if (!list_empty(&ioc->delayed_internal_tm_list)) {
- delayed_tr = list_entry(ioc->delayed_internal_tm_list.next,
- struct leapioraid_tr_list, list);
- leapioraid_base_free_smid(ioc, smid);
- leapioraid_scsihost_tm_internal_tr_send(
- ioc, delayed_tr->handle);
- list_del(&delayed_tr->list);
- kfree(delayed_tr);
- return 0;
- }
- return 1;
-}
-
-static void
-leapioraid_scsihost_check_topo_delete_events(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventDataSasTopoChangeList_t *event_data)
-{
- struct leapioraid_fw_event_work *fw_event;
- struct LeapioraidEventDataSasTopoChangeList_t *local_event_data;
- u16 expander_handle;
- struct leapioraid_raid_sas_node *sas_expander;
- unsigned long flags;
- int i, reason_code;
- u16 handle;
-
- for (i = 0; i < event_data->NumEntries; i++) {
- handle = le16_to_cpu(event_data->PHY[i].AttachedDevHandle);
- if (!handle)
- continue;
- reason_code = event_data->PHY[i].PhyStatus &
- LEAPIORAID_EVENT_SAS_TOPO_RC_MASK;
- if (reason_code ==
- LEAPIORAID_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING)
- leapioraid_scsihost_tm_tr_send(ioc, handle);
- }
- expander_handle = le16_to_cpu(event_data->ExpanderDevHandle);
- if (expander_handle < ioc->sas_hba.num_phys) {
- leapioraid_scsihost_block_io_to_children_attached_directly(
- ioc, event_data);
- return;
- }
- if (event_data->ExpStatus ==
- LEAPIORAID_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING) {
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- sas_expander = leapioraid_scsihost_expander_find_by_handle(
- ioc, expander_handle);
- leapioraid_scsihost_block_io_to_children_attached_to_ex(
- ioc, sas_expander);
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- do {
- handle = find_first_bit(ioc->blocking_handles,
- ioc->facts.MaxDevHandle);
- if (handle < ioc->facts.MaxDevHandle)
- leapioraid_scsihost_block_io_device(ioc, handle);
- } while (test_and_clear_bit(handle, ioc->blocking_handles));
- } else if (event_data->ExpStatus ==
- LEAPIORAID_EVENT_SAS_TOPO_ES_RESPONDING)
- leapioraid_scsihost_block_io_to_children_attached_directly(
- ioc, event_data);
- if (event_data->ExpStatus != LEAPIORAID_EVENT_SAS_TOPO_ES_NOT_RESPONDING)
- return;
- spin_lock_irqsave(&ioc->fw_event_lock, flags);
- list_for_each_entry(fw_event, &ioc->fw_event_list, list) {
- if (fw_event->event != LEAPIORAID_EVENT_SAS_TOPOLOGY_CHANGE_LIST ||
- fw_event->ignore)
- continue;
- local_event_data = fw_event->event_data;
- if (local_event_data->ExpStatus ==
- LEAPIORAID_EVENT_SAS_TOPO_ES_ADDED ||
- local_event_data->ExpStatus ==
- LEAPIORAID_EVENT_SAS_TOPO_ES_RESPONDING) {
- if (le16_to_cpu(local_event_data->ExpanderDevHandle) ==
- expander_handle) {
- dewtprintk(ioc, pr_err(
- "%s setting ignoring flag\n",
- ioc->name));
- fw_event->ignore = 1;
- }
- }
- }
- spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
-}
-
-static void
-leapioraid_scsihost_set_volume_delete_flag(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct leapioraid_raid_device *raid_device;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device = leapioraid_raid_device_find_by_handle(
- ioc, handle);
- if (raid_device && raid_device->starget &&
- raid_device->starget->hostdata) {
- sas_target_priv_data = raid_device->starget->hostdata;
- sas_target_priv_data->deleted = 1;
- dewtprintk(ioc, pr_err(
- "%s setting delete flag: handle(0x%04x), wwid(0x%016llx)\n",
- ioc->name, handle,
- (unsigned long long)raid_device->wwid));
- }
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
-}
-
-static void
-leapioraid_scsihost_set_volume_handle_for_tr(
- u16 handle, u16 *a, u16 *b)
-{
- if (!handle || handle == *a || handle == *b)
- return;
- if (!*a)
- *a = handle;
- else if (!*b)
- *b = handle;
-}
-
-static void
-leapioraid_scsihost_check_ir_config_unhide_events(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventDataIrCfgChangeList_t *event_data)
-{
- struct LeapioraidEventIrCfgEle_t *element;
- int i;
- u16 handle, volume_handle, a, b;
- struct leapioraid_tr_list *delayed_tr;
-
- a = 0;
- b = 0;
- element =
- (struct LeapioraidEventIrCfgEle_t *) &event_data->ConfigElement[0];
- for (i = 0; i < event_data->NumElements; i++, element++) {
- if (le32_to_cpu(event_data->Flags) &
- LEAPIORAID_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG)
- continue;
- if (element->ReasonCode ==
- LEAPIORAID_EVENT_IR_CHANGE_RC_VOLUME_DELETED ||
- element->ReasonCode == LEAPIORAID_EVENT_IR_CHANGE_RC_REMOVED) {
- volume_handle = le16_to_cpu(element->VolDevHandle);
- leapioraid_scsihost_set_volume_delete_flag(ioc, volume_handle);
- leapioraid_scsihost_set_volume_handle_for_tr(
- volume_handle, &a, &b);
- }
- }
- element =
- (struct LeapioraidEventIrCfgEle_t *) &event_data->ConfigElement[0];
- for (i = 0; i < event_data->NumElements; i++, element++) {
- if (le32_to_cpu(event_data->Flags) &
- LEAPIORAID_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG)
- continue;
- if (element->ReasonCode == LEAPIORAID_EVENT_IR_CHANGE_RC_UNHIDE) {
- volume_handle = le16_to_cpu(element->VolDevHandle);
- leapioraid_scsihost_set_volume_handle_for_tr(
- volume_handle, &a, &b);
- }
- }
- if (a)
- leapioraid_scsihost_tm_tr_volume_send(ioc, a);
- if (b)
- leapioraid_scsihost_tm_tr_volume_send(ioc, b);
- element =
- (struct LeapioraidEventIrCfgEle_t *) &event_data->ConfigElement[0];
- for (i = 0; i < event_data->NumElements; i++, element++) {
- if (element->ReasonCode != LEAPIORAID_EVENT_IR_CHANGE_RC_UNHIDE)
- continue;
- handle = le16_to_cpu(element->PhysDiskDevHandle);
- volume_handle = le16_to_cpu(element->VolDevHandle);
- clear_bit(handle, ioc->pd_handles);
- if (!volume_handle)
- leapioraid_scsihost_tm_tr_send(ioc, handle);
- else if (volume_handle == a || volume_handle == b) {
- delayed_tr = kzalloc(sizeof(*delayed_tr), GFP_ATOMIC);
- BUG_ON(!delayed_tr);
- INIT_LIST_HEAD(&delayed_tr->list);
- delayed_tr->handle = handle;
- list_add_tail(&delayed_tr->list, &ioc->delayed_tr_list);
- dewtprintk(ioc, pr_err(
- "%s DELAYED:tr:handle(0x%04x), (open)\n",
- ioc->name, handle));
- } else
- leapioraid_scsihost_tm_tr_send(ioc, handle);
- }
-}
-
-static void
-leapioraid_scsihost_check_volume_delete_events(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventDataIrVol_t *event_data)
-{
- u32 state;
-
- if (event_data->ReasonCode != LEAPIORAID_EVENT_IR_VOLUME_RC_STATE_CHANGED)
- return;
- state = le32_to_cpu(event_data->NewValue);
- if (state == LEAPIORAID_RAID_VOL_STATE_MISSING || state ==
- LEAPIORAID_RAID_VOL_STATE_FAILED)
- leapioraid_scsihost_set_volume_delete_flag(
- ioc, le16_to_cpu(event_data->VolDevHandle));
-}
-
-static int
-leapioraid_scsihost_set_satl_pending(
- struct scsi_cmnd *scmd, bool pending)
-{
- struct LEAPIORAID_DEVICE *priv = scmd->device->hostdata;
-
- if (scmd->cmnd[0] != ATA_12 && scmd->cmnd[0] != ATA_16)
- return 0;
- if (pending)
- return test_and_set_bit(LEAPIORAID_CMND_PENDING_BIT,
- &priv->ata_command_pending);
- clear_bit(LEAPIORAID_CMND_PENDING_BIT, &priv->ata_command_pending);
- return 0;
-}
-
-void
-leapioraid_scsihost_flush_running_cmds(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct scsi_cmnd *scmd;
- struct leapioraid_scsiio_tracker *st;
- u16 smid;
- u16 count = 0;
-
- for (smid = 1; smid <= ioc->shost->can_queue; smid++) {
- scmd = leapioraid_scsihost_scsi_lookup_get(ioc, smid);
- if (!scmd)
- continue;
- count++;
- st = leapioraid_base_scsi_cmd_priv(scmd);
- if (st && st->smid == 0)
- continue;
- leapioraid_scsihost_set_satl_pending(scmd, false);
- leapioraid_base_get_msg_frame(ioc, smid);
- scsi_dma_unmap(scmd);
-
- leapioraid_base_clear_st(ioc, st);
- if ((!leapioraid_base_pci_device_is_available(ioc)) ||
- (ioc->ioc_reset_status != 0)
- || ioc->adapter_over_temp || ioc->remove_host)
- scmd->result = DID_NO_CONNECT << 16;
- else
- scmd->result = DID_RESET << 16;
- scsi_done(scmd);
- }
- dtmprintk(ioc, pr_info("%s completing %d cmds\n",
- ioc->name, count));
-}
-
-static inline u8 scsih_is_io_belongs_to_RT_class(
- struct scsi_cmnd *scmd)
-{
- struct request *rq = scsi_cmd_to_rq(scmd);
-
- return (IOPRIO_PRIO_CLASS(req_get_ioprio(rq)) == IOPRIO_CLASS_RT);
-}
-
-static int
-leapioraid_scsihost_qcmd(
- struct Scsi_Host *shost, struct scsi_cmnd *scmd)
-{
- struct LEAPIORAID_ADAPTER *ioc
- = leapioraid_shost_private(scmd->device->host);
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct LeapioraidSCSIIOReq_t *mpi_request;
- u32 mpi_control;
- u16 smid;
- u16 handle;
- int rc = 0;
-
- if (ioc->logging_level & LEAPIORAID_DEBUG_SCSI)
- scsi_print_command(scmd);
- sas_device_priv_data = scmd->device->hostdata;
- if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
- scmd->result = DID_NO_CONNECT << 16;
- scsi_done(scmd);
- goto out;
- }
- if (!(leapioraid_scsihost_allow_scmd_to_device(ioc, scmd))) {
- scmd->result = DID_NO_CONNECT << 16;
- scsi_done(scmd);
- goto out;
- }
- sas_target_priv_data = sas_device_priv_data->sas_target;
- handle = sas_target_priv_data->handle;
- if (handle == LEAPIORAID_INVALID_DEVICE_HANDLE) {
- scmd->result = DID_NO_CONNECT << 16;
- scsi_done(scmd);
- goto out;
- }
- if (sas_device_priv_data->block &&
- scmd->device->host->shost_state == SHOST_RECOVERY &&
- scmd->cmnd[0] == TEST_UNIT_READY) {
- scsi_build_sense(scmd, 0, UNIT_ATTENTION,
- 0x29, 0x07);
- scsi_done(scmd);
- goto out;
- }
- if (ioc->shost_recovery || ioc->ioc_link_reset_in_progress) {
- rc = SCSI_MLQUEUE_HOST_BUSY;
- goto out;
- } else if (sas_target_priv_data->deleted ||
- sas_device_priv_data->deleted) {
- scmd->result = DID_NO_CONNECT << 16;
- scsi_done(scmd);
- goto out;
- } else if (sas_target_priv_data->tm_busy || sas_device_priv_data->block) {
- rc = SCSI_MLQUEUE_DEVICE_BUSY;
- goto out;
- }
- do {
- if (test_bit(LEAPIORAID_CMND_PENDING_BIT,
- &sas_device_priv_data->ata_command_pending)) {
- rc = SCSI_MLQUEUE_DEVICE_BUSY;
- goto out;
- }
- } while (leapioraid_scsihost_set_satl_pending(scmd, true));
- if (scmd->sc_data_direction == DMA_FROM_DEVICE)
- mpi_control = LEAPIORAID_SCSIIO_CONTROL_READ;
- else if (scmd->sc_data_direction == DMA_TO_DEVICE)
- mpi_control = LEAPIORAID_SCSIIO_CONTROL_WRITE;
- else
- mpi_control = LEAPIORAID_SCSIIO_CONTROL_NODATATRANSFER;
- mpi_control |= LEAPIORAID_SCSIIO_CONTROL_SIMPLEQ;
- if (sas_device_priv_data->ncq_prio_enable) {
- if (scsih_is_io_belongs_to_RT_class(scmd))
- mpi_control |= 1 << LEAPIORAID_SCSIIO_CONTROL_CMDPRI_SHIFT;
- }
- if ((sas_device_priv_data->flags & LEAPIORAID_DEVICE_TLR_ON) &&
- scmd->cmd_len != 32)
- mpi_control |= LEAPIORAID_SCSIIO_CONTROL_TLR_ON;
- smid = leapioraid_base_get_smid_scsiio(
- ioc, ioc->scsi_io_cb_idx, scmd);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- rc = SCSI_MLQUEUE_HOST_BUSY;
- leapioraid_scsihost_set_satl_pending(scmd, false);
- goto out;
- }
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- if (scmd->cmd_len == 32)
- mpi_control |= 4 << LEAPIORAID_SCSIIO_CONTROL_ADDCDBLEN_SHIFT;
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_IO_REQUEST;
- if (sas_device_priv_data->sas_target->flags &
- LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT)
- mpi_request->Function =
- LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH;
- else
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_IO_REQUEST;
- mpi_request->DevHandle = cpu_to_le16(handle);
- mpi_request->DataLength = cpu_to_le32(scsi_bufflen(scmd));
- mpi_request->Control = cpu_to_le32(mpi_control);
- mpi_request->IoFlags = cpu_to_le16(scmd->cmd_len);
- mpi_request->MsgFlags = LEAPIORAID_SCSIIO_MSGFLAGS_SYSTEM_SENSE_ADDR;
- mpi_request->SenseBufferLength = SCSI_SENSE_BUFFERSIZE;
- mpi_request->SenseBufferLowAddress =
- leapioraid_base_get_sense_buffer_dma(ioc, smid);
- mpi_request->SGLOffset0 = offsetof(struct LeapioraidSCSIIOReq_t, SGL) / 4;
- int_to_scsilun(sas_device_priv_data->lun, (struct scsi_lun *)
- mpi_request->LUN);
- memcpy(mpi_request->CDB.CDB32, scmd->cmnd, scmd->cmd_len);
- if (mpi_request->DataLength) {
- if (ioc->build_sg_scmd(ioc, scmd, smid)) {
- leapioraid_base_free_smid(ioc, smid);
- rc = SCSI_MLQUEUE_HOST_BUSY;
- leapioraid_scsihost_set_satl_pending(scmd, false);
- goto out;
- }
- } else
- ioc->build_zero_len_sge(ioc, &mpi_request->SGL);
- if (likely(mpi_request->Function == LEAPIORAID_FUNC_SCSI_IO_REQUEST)) {
- if (sas_target_priv_data->flags & LEAPIORAID_TARGET_FASTPATH_IO) {
- mpi_request->IoFlags = cpu_to_le16(scmd->cmd_len | 0x4000);
- ioc->put_smid_fast_path(ioc, smid, handle);
- } else
- ioc->put_smid_scsi_io(ioc, smid,
- le16_to_cpu(mpi_request->DevHandle));
- } else
- ioc->put_smid_default(ioc, smid);
-out:
- return rc;
-}
-
-static void
-leapioraid_scsihost_normalize_sense(
- char *sense_buffer, struct sense_info *data)
-{
- if ((sense_buffer[0] & 0x7F) >= 0x72) {
- data->skey = sense_buffer[1] & 0x0F;
- data->asc = sense_buffer[2];
- data->ascq = sense_buffer[3];
- } else {
- data->skey = sense_buffer[2] & 0x0F;
- data->asc = sense_buffer[12];
- data->ascq = sense_buffer[13];
- }
-}
-
-static void
-leapioraid_scsihost_scsi_ioc_info(
- struct LEAPIORAID_ADAPTER *ioc, struct scsi_cmnd *scmd,
- struct LeapioraidSCSIIORep_t *mpi_reply, u16 smid,
- u8 scsi_status, u16 error_response_count)
-{
- u32 response_info;
- u8 *response_bytes;
- u16 ioc_status = le16_to_cpu(mpi_reply->IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- u8 scsi_state = mpi_reply->SCSIState;
- char *desc_ioc_state = NULL;
- char *desc_scsi_status = NULL;
- char *desc_scsi_state = ioc->tmp_string;
- u32 log_info = le32_to_cpu(mpi_reply->IOCLogInfo);
- struct leapioraid_sas_device *sas_device = NULL;
- struct scsi_target *starget = scmd->device->sdev_target;
- struct LEAPIORAID_TARGET *priv_target = starget->hostdata;
- char *device_str = NULL;
-
- if (!priv_target)
- return;
- if (ioc->warpdrive_msg)
- device_str = "WarpDrive";
- else
- device_str = "volume";
- if (log_info == 0x31170000)
- return;
- switch (ioc_status) {
- case LEAPIORAID_IOCSTATUS_SUCCESS:
- desc_ioc_state = "success";
- break;
- case LEAPIORAID_IOCSTATUS_INVALID_FUNCTION:
- desc_ioc_state = "invalid function";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_RECOVERED_ERROR:
- desc_ioc_state = "scsi recovered error";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_INVALID_DEVHANDLE:
- desc_ioc_state = "scsi invalid dev handle";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_DEVICE_NOT_THERE:
- desc_ioc_state = "scsi device not there";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_DATA_OVERRUN:
- desc_ioc_state = "scsi data overrun";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_DATA_UNDERRUN:
- desc_ioc_state = "scsi data underrun";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_IO_DATA_ERROR:
- desc_ioc_state = "scsi io data error";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_PROTOCOL_ERROR:
- desc_ioc_state = "scsi protocol error";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_TASK_TERMINATED:
- desc_ioc_state = "scsi task terminated";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_RESIDUAL_MISMATCH:
- desc_ioc_state = "scsi residual mismatch";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_TASK_MGMT_FAILED:
- desc_ioc_state = "scsi task mgmt failed";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_IOC_TERMINATED:
- desc_ioc_state = "scsi ioc terminated";
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_EXT_TERMINATED:
- desc_ioc_state = "scsi ext terminated";
- break;
- case LEAPIORAID_IOCSTATUS_EEDP_GUARD_ERROR:
- if (!ioc->disable_eedp_support) {
- desc_ioc_state = "eedp guard error";
- break;
- }
- fallthrough;
- case LEAPIORAID_IOCSTATUS_EEDP_REF_TAG_ERROR:
- if (!ioc->disable_eedp_support) {
- desc_ioc_state = "eedp ref tag error";
- break;
- }
- fallthrough;
- case LEAPIORAID_IOCSTATUS_EEDP_APP_TAG_ERROR:
- if (!ioc->disable_eedp_support) {
- desc_ioc_state = "eedp app tag error";
- break;
- }
- fallthrough;
- case LEAPIORAID_IOCSTATUS_INSUFFICIENT_POWER:
- desc_ioc_state = "insufficient power";
- break;
- default:
- desc_ioc_state = "unknown";
- break;
- }
- switch (scsi_status) {
- case LEAPIORAID_SCSI_STATUS_GOOD:
- desc_scsi_status = "good";
- break;
- case LEAPIORAID_SCSI_STATUS_CHECK_CONDITION:
- desc_scsi_status = "check condition";
- break;
- case LEAPIORAID_SCSI_STATUS_CONDITION_MET:
- desc_scsi_status = "condition met";
- break;
- case LEAPIORAID_SCSI_STATUS_BUSY:
- desc_scsi_status = "busy";
- break;
- case LEAPIORAID_SCSI_STATUS_INTERMEDIATE:
- desc_scsi_status = "intermediate";
- break;
- case LEAPIORAID_SCSI_STATUS_INTERMEDIATE_CONDMET:
- desc_scsi_status = "intermediate condmet";
- break;
- case LEAPIORAID_SCSI_STATUS_RESERVATION_CONFLICT:
- desc_scsi_status = "reservation conflict";
- break;
- case LEAPIORAID_SCSI_STATUS_COMMAND_TERMINATED:
- desc_scsi_status = "command terminated";
- break;
- case LEAPIORAID_SCSI_STATUS_TASK_SET_FULL:
- desc_scsi_status = "task set full";
- break;
- case LEAPIORAID_SCSI_STATUS_ACA_ACTIVE:
- desc_scsi_status = "aca active";
- break;
- case LEAPIORAID_SCSI_STATUS_TASK_ABORTED:
- desc_scsi_status = "task aborted";
- break;
- default:
- desc_scsi_status = "unknown";
- break;
- }
- desc_scsi_state[0] = '\0';
- if (!scsi_state)
- desc_scsi_state = " ";
- if (scsi_state & LEAPIORAID_SCSI_STATE_RESPONSE_INFO_VALID)
- strcat(desc_scsi_state, "response info ");
- if (scsi_state & LEAPIORAID_SCSI_STATE_TERMINATED)
- strcat(desc_scsi_state, "state terminated ");
- if (scsi_state & LEAPIORAID_SCSI_STATE_NO_SCSI_STATUS)
- strcat(desc_scsi_state, "no status ");
- if (scsi_state & LEAPIORAID_SCSI_STATE_AUTOSENSE_FAILED)
- strcat(desc_scsi_state, "autosense failed ");
- if (scsi_state & LEAPIORAID_SCSI_STATE_AUTOSENSE_VALID)
- strcat(desc_scsi_state, "autosense valid ");
- scsi_print_command(scmd);
- if (priv_target->flags & LEAPIORAID_TARGET_FLAGS_VOLUME) {
- pr_warn("%s \t%s wwid(0x%016llx)\n",
- ioc->name, device_str,
- (unsigned long long)priv_target->sas_address);
- } else {
- sas_device = leapioraid_get_sdev_from_target(ioc, priv_target);
- if (sas_device) {
- pr_warn(
- "%s \t%s: sas_address(0x%016llx), phy(%d)\n",
- ioc->name, __func__, (unsigned long long)
- sas_device->sas_address, sas_device->phy);
- leapioraid_scsihost_display_enclosure_chassis_info(ioc,
- sas_device,
- NULL, NULL);
- leapioraid_sas_device_put(sas_device);
- }
- }
- pr_warn(
- "%s \thandle(0x%04x), ioc_status(%s)(0x%04x), smid(%d)\n",
- ioc->name, le16_to_cpu(mpi_reply->DevHandle), desc_ioc_state,
- ioc_status, smid);
- pr_warn("%s \trequest_len(%d), underflow(%d), resid(%d)\n",
- ioc->name, scsi_bufflen(scmd), scmd->underflow,
- scsi_get_resid(scmd));
- pr_warn("%s \ttag(%d), transfer_count(%d), sc->result(0x%08x)\n",
- ioc->name,
- le16_to_cpu(mpi_reply->TaskTag),
- le32_to_cpu(mpi_reply->TransferCount), scmd->result);
- pr_warn("%s \tscsi_status(%s)(0x%02x), scsi_state(%s)(0x%02x)\n",
- ioc->name, desc_scsi_status,
- scsi_status, desc_scsi_state, scsi_state);
- if (scsi_state & LEAPIORAID_SCSI_STATE_AUTOSENSE_VALID) {
- struct sense_info data;
-
- leapioraid_scsihost_normalize_sense(scmd->sense_buffer, &data);
- pr_warn(
- "%s \t[sense_key,asc,ascq]: [0x%02x,0x%02x,0x%02x], count(%d)\n",
- ioc->name,
- data.skey, data.asc, data.ascq,
- le32_to_cpu(mpi_reply->SenseCount));
- }
- if (scsi_state & LEAPIORAID_SCSI_STATE_RESPONSE_INFO_VALID) {
- response_info = le32_to_cpu(mpi_reply->ResponseInfo);
- response_bytes = (u8 *) &response_info;
- leapioraid_scsihost_response_code(ioc, response_bytes[0]);
- }
-}
-
-static void
-leapioraid_scsihost_turn_on_pfa_led(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct LeapioraidSepRep_t mpi_reply;
- struct LeapioraidSepReq_t mpi_request;
- struct leapioraid_sas_device *sas_device;
-
- sas_device = leapioraid_get_sdev_by_handle(ioc, handle);
- if (!sas_device)
- return;
- memset(&mpi_request, 0, sizeof(struct LeapioraidSepReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_SCSI_ENCLOSURE_PROCESSOR;
- mpi_request.Action = LEAPIORAID_SEP_REQ_ACTION_WRITE_STATUS;
- mpi_request.SlotStatus =
- cpu_to_le32(LEAPIORAID_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT);
- mpi_request.DevHandle = cpu_to_le16(handle);
- mpi_request.Flags = LEAPIORAID_SEP_REQ_FLAGS_DEVHANDLE_ADDRESS;
- if ((leapioraid_base_scsi_enclosure_processor(ioc, &mpi_reply,
- &mpi_request)) != 0) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- sas_device->pfa_led_on = 1;
- if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo) {
- dewtprintk(ioc, pr_info(
- "%s enclosure_processor: ioc_status (0x%04x), loginfo(0x%08x)\n",
- ioc->name, le16_to_cpu(mpi_reply.IOCStatus),
- le32_to_cpu(mpi_reply.IOCLogInfo)));
- goto out;
- }
-out:
- leapioraid_sas_device_put(sas_device);
-}
-
-static void
-leapioraid_scsihost_turn_off_pfa_led(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_device *sas_device)
-{
- struct LeapioraidSepRep_t mpi_reply;
- struct LeapioraidSepReq_t mpi_request;
-
- memset(&mpi_request, 0, sizeof(struct LeapioraidSepReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_SCSI_ENCLOSURE_PROCESSOR;
- mpi_request.Action = LEAPIORAID_SEP_REQ_ACTION_WRITE_STATUS;
- mpi_request.SlotStatus = 0;
- mpi_request.Slot = cpu_to_le16(sas_device->slot);
- mpi_request.DevHandle = 0;
- mpi_request.EnclosureHandle = cpu_to_le16(sas_device->enclosure_handle);
- mpi_request.Flags = LEAPIORAID_SEP_REQ_FLAGS_ENCLOSURE_SLOT_ADDRESS;
- if ((leapioraid_base_scsi_enclosure_processor(ioc, &mpi_reply,
- &mpi_request)) != 0) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo) {
- dewtprintk(ioc, pr_info(
- "%s enclosure_processor: ioc_status (0x%04x), loginfo(0x%08x)\n",
- ioc->name, le16_to_cpu(mpi_reply.IOCStatus),
- le32_to_cpu(mpi_reply.IOCLogInfo)));
- return;
- }
-}
-
-static void
-leapioraid_scsihost_send_event_to_turn_on_pfa_led(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle)
-{
- struct leapioraid_fw_event_work *fw_event;
-
- fw_event = leapioraid_alloc_fw_event_work(0);
- if (!fw_event)
- return;
- fw_event->event = LEAPIORAID_TURN_ON_PFA_LED;
- fw_event->device_handle = handle;
- fw_event->ioc = ioc;
- leapioraid_scsihost_fw_event_add(ioc, fw_event);
- leapioraid_fw_event_work_put(fw_event);
-}
-
-static void
-leapioraid_scsihost_smart_predicted_fault(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- u8 from_sata_smart_polling)
-{
- struct scsi_target *starget;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct LeapioraidEventNotificationRep_t *event_reply;
- struct LeapioraidEventDataSasDeviceStatusChange_t *event_data;
- struct leapioraid_sas_device *sas_device;
- ssize_t sz;
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_handle(ioc, handle);
- if (!sas_device)
- goto out_unlock;
-
- starget = sas_device->starget;
- sas_target_priv_data = starget->hostdata;
- if ((sas_target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT)
- || ((sas_target_priv_data->flags & LEAPIORAID_TARGET_FLAGS_VOLUME)))
- goto out_unlock;
- leapioraid_scsihost_display_enclosure_chassis_info(NULL, sas_device, NULL,
- starget);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- if (from_sata_smart_polling)
- leapioraid_scsihost_send_event_to_turn_on_pfa_led(ioc, handle);
- sz = offsetof(struct LeapioraidEventNotificationRep_t, EventData) +
- sizeof(struct LeapioraidEventDataSasDeviceStatusChange_t);
- event_reply = kzalloc(sz, GFP_ATOMIC);
- if (!event_reply) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- event_reply->Function = LEAPIORAID_FUNC_EVENT_NOTIFICATION;
- event_reply->Event =
- cpu_to_le16(LEAPIORAID_EVENT_SAS_DEVICE_STATUS_CHANGE);
- event_reply->MsgLength = sz / 4;
- event_reply->EventDataLength =
- cpu_to_le16(sizeof(struct LeapioraidEventDataSasDeviceStatusChange_t) / 4);
- event_data = (struct LeapioraidEventDataSasDeviceStatusChange_t *)
- event_reply->EventData;
- event_data->ReasonCode = LEAPIORAID_EVENT_SAS_DEV_STAT_RC_SMART_DATA;
- event_data->ASC = 0x5D;
- event_data->DevHandle = cpu_to_le16(handle);
- event_data->SASAddress = cpu_to_le64(sas_target_priv_data->sas_address);
- leapioraid_ctl_add_to_event_log(ioc, event_reply);
- kfree(event_reply);
-out:
- if (sas_device)
- leapioraid_sas_device_put(sas_device);
- return;
-out_unlock:
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- goto out;
-}
-
-static u8
-leapioraid_scsihost_io_done(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid, u8 msix_index,
- u32 reply)
-{
- struct LeapioraidSCSIIOReq_t *mpi_request;
- struct LeapioraidSCSIIORep_t *mpi_reply;
- struct scsi_cmnd *scmd;
- u16 ioc_status, error_response_count = 0;
- u32 xfer_cnt;
- u8 scsi_state;
- u8 scsi_status;
- u32 log_info;
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- u32 response_code = 0;
- struct leapioraid_scsiio_tracker *st;
-
- scmd = leapioraid_scsihost_scsi_lookup_get(ioc, smid);
- if (scmd == NULL)
- return 1;
- leapioraid_scsihost_set_satl_pending(scmd, false);
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (mpi_reply == NULL) {
- scmd->result = DID_OK << 16;
- goto out;
- }
- sas_device_priv_data = scmd->device->hostdata;
- if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
- sas_device_priv_data->sas_target->deleted) {
- scmd->result = DID_NO_CONNECT << 16;
- goto out;
- }
- ioc_status = le16_to_cpu(mpi_reply->IOCStatus);
- st = leapioraid_base_scsi_cmd_priv(scmd);
- if (st->direct_io && ((ioc_status & LEAPIORAID_IOCSTATUS_MASK)
- != LEAPIORAID_IOCSTATUS_SCSI_TASK_TERMINATED)) {
- st->scmd = scmd;
- st->direct_io = 0;
- memcpy(mpi_request->CDB.CDB32, scmd->cmnd, scmd->cmd_len);
- mpi_request->DevHandle =
- cpu_to_le16(sas_device_priv_data->sas_target->handle);
- ioc->put_smid_scsi_io(ioc, smid,
- sas_device_priv_data->sas_target->handle);
- return 0;
- }
- scsi_state = mpi_reply->SCSIState;
- if (scsi_state & LEAPIORAID_SCSI_STATE_RESPONSE_INFO_VALID)
- response_code = le32_to_cpu(mpi_reply->ResponseInfo) & 0xFF;
- if (!sas_device_priv_data->tlr_snoop_check) {
- sas_device_priv_data->tlr_snoop_check++;
- if ((sas_device_priv_data->flags & LEAPIORAID_DEVICE_TLR_ON) &&
- response_code == LEAPIORAID_SCSITASKMGMT_RSP_INVALID_FRAME)
- sas_device_priv_data->flags &= ~LEAPIORAID_DEVICE_TLR_ON;
- }
- if (ioc_status & LEAPIORAID_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE)
- log_info = le32_to_cpu(mpi_reply->IOCLogInfo);
- else
- log_info = 0;
- ioc_status &= LEAPIORAID_IOCSTATUS_MASK;
- scsi_status = mpi_reply->SCSIStatus;
- xfer_cnt = le32_to_cpu(mpi_reply->TransferCount);
- scsi_set_resid(scmd, scsi_bufflen(scmd) - xfer_cnt);
- if (ioc_status == LEAPIORAID_IOCSTATUS_SCSI_DATA_UNDERRUN
- && xfer_cnt == 0
- && (scsi_status == LEAPIORAID_SCSI_STATUS_BUSY
- || scsi_status == LEAPIORAID_SCSI_STATUS_RESERVATION_CONFLICT
- || scsi_status == LEAPIORAID_SCSI_STATUS_TASK_SET_FULL)) {
- ioc_status = LEAPIORAID_IOCSTATUS_SUCCESS;
- }
- if (scsi_state & LEAPIORAID_SCSI_STATE_AUTOSENSE_VALID) {
- struct sense_info data;
- const void *sense_data = leapioraid_base_get_sense_buffer(ioc,
- smid);
- u32 sz = min_t(u32, SCSI_SENSE_BUFFERSIZE,
- le32_to_cpu(mpi_reply->SenseCount));
- memcpy(scmd->sense_buffer, sense_data, sz);
- leapioraid_scsihost_normalize_sense(scmd->sense_buffer, &data);
- if (data.asc == 0x5D)
- leapioraid_scsihost_smart_predicted_fault(ioc,
- le16_to_cpu(mpi_reply->DevHandle),
- 0);
- }
- switch (ioc_status) {
- case LEAPIORAID_IOCSTATUS_BUSY:
- case LEAPIORAID_IOCSTATUS_INSUFFICIENT_RESOURCES:
- scmd->result = SAM_STAT_BUSY;
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_DEVICE_NOT_THERE:
- scmd->result = DID_NO_CONNECT << 16;
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_IOC_TERMINATED:
- if (sas_device_priv_data->block) {
- scmd->result = DID_TRANSPORT_DISRUPTED << 16;
- goto out;
- }
- if (log_info == 0x31110630) {
- if (scmd->retries > 2) {
- scmd->result = DID_NO_CONNECT << 16;
- scsi_device_set_state(scmd->device,
- SDEV_OFFLINE);
- } else {
- scmd->result = DID_SOFT_ERROR << 16;
- scmd->device->expecting_cc_ua = 1;
- }
- break;
- } else if (log_info == 0x32010081) {
- scmd->result = DID_RESET << 16;
- break;
- } else if ((scmd->device->channel == RAID_CHANNEL) &&
- (scsi_state == (LEAPIORAID_SCSI_STATE_TERMINATED |
- LEAPIORAID_SCSI_STATE_NO_SCSI_STATUS))) {
- scmd->result = DID_RESET << 16;
- break;
- }
- scmd->result = DID_SOFT_ERROR << 16;
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_TASK_TERMINATED:
- case LEAPIORAID_IOCSTATUS_SCSI_EXT_TERMINATED:
- scmd->result = DID_RESET << 16;
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_RESIDUAL_MISMATCH:
- if ((xfer_cnt == 0) || (scmd->underflow > xfer_cnt))
- scmd->result = DID_SOFT_ERROR << 16;
- else
- scmd->result = (DID_OK << 16) | scsi_status;
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_DATA_UNDERRUN:
- scmd->result = (DID_OK << 16) | scsi_status;
- if ((scsi_state & LEAPIORAID_SCSI_STATE_AUTOSENSE_VALID))
- break;
- if (xfer_cnt < scmd->underflow) {
- if (scsi_status == SAM_STAT_BUSY)
- scmd->result = SAM_STAT_BUSY;
- else
- scmd->result = DID_SOFT_ERROR << 16;
- } else if (scsi_state & (LEAPIORAID_SCSI_STATE_AUTOSENSE_FAILED |
- LEAPIORAID_SCSI_STATE_NO_SCSI_STATUS))
- scmd->result = DID_SOFT_ERROR << 16;
- else if (scsi_state & LEAPIORAID_SCSI_STATE_TERMINATED)
- scmd->result = DID_RESET << 16;
- else if (!xfer_cnt && scmd->cmnd[0] == REPORT_LUNS) {
- mpi_reply->SCSIState =
- LEAPIORAID_SCSI_STATE_AUTOSENSE_VALID;
- mpi_reply->SCSIStatus = SAM_STAT_CHECK_CONDITION;
- scsi_build_sense(scmd, 0,
- ILLEGAL_REQUEST, 0x20,
- 0);
- }
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_DATA_OVERRUN:
- scsi_set_resid(scmd, 0);
- fallthrough;
- case LEAPIORAID_IOCSTATUS_SCSI_RECOVERED_ERROR:
- case LEAPIORAID_IOCSTATUS_SUCCESS:
- scmd->result = (DID_OK << 16) | scsi_status;
- if (response_code ==
- LEAPIORAID_SCSITASKMGMT_RSP_INVALID_FRAME ||
- (scsi_state & (LEAPIORAID_SCSI_STATE_AUTOSENSE_FAILED |
- LEAPIORAID_SCSI_STATE_NO_SCSI_STATUS)))
- scmd->result = DID_SOFT_ERROR << 16;
- else if (scsi_state & LEAPIORAID_SCSI_STATE_TERMINATED)
- scmd->result = DID_RESET << 16;
- break;
- case LEAPIORAID_IOCSTATUS_EEDP_GUARD_ERROR:
- case LEAPIORAID_IOCSTATUS_EEDP_REF_TAG_ERROR:
- fallthrough;
- case LEAPIORAID_IOCSTATUS_EEDP_APP_TAG_ERROR:
- fallthrough;
- case LEAPIORAID_IOCSTATUS_SCSI_PROTOCOL_ERROR:
- case LEAPIORAID_IOCSTATUS_INVALID_FUNCTION:
- case LEAPIORAID_IOCSTATUS_INVALID_SGL:
- case LEAPIORAID_IOCSTATUS_INTERNAL_ERROR:
- case LEAPIORAID_IOCSTATUS_INVALID_FIELD:
- case LEAPIORAID_IOCSTATUS_INVALID_STATE:
- case LEAPIORAID_IOCSTATUS_SCSI_IO_DATA_ERROR:
- case LEAPIORAID_IOCSTATUS_SCSI_TASK_MGMT_FAILED:
- case LEAPIORAID_IOCSTATUS_INSUFFICIENT_POWER:
- default:
- scmd->result = DID_SOFT_ERROR << 16;
- break;
- }
- if (scmd->result && (ioc->logging_level & LEAPIORAID_DEBUG_REPLY))
- leapioraid_scsihost_scsi_ioc_info(
- ioc, scmd, mpi_reply, smid, scsi_status,
- error_response_count);
-out:
- scsi_dma_unmap(scmd);
- leapioraid_base_free_smid(ioc, smid);
- scsi_done(scmd);
- return 0;
-}
-
-static void
-leapioraid_scsihost_update_vphys_after_reset(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- u16 sz, ioc_status;
- int i;
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasIOUnitP0_t *sas_iounit_pg0 = NULL;
- u16 attached_handle;
- u64 attached_sas_addr;
- u8 found = 0, port_id;
- struct LeapioraidSasPhyP0_t phy_pg0;
- struct leapioraid_hba_port *port, *port_next, *mport;
- struct leapioraid_virtual_phy *vphy, *vphy_next;
- struct leapioraid_sas_device *sas_device;
-
- list_for_each_entry_safe(port, port_next, &ioc->port_table_list, list) {
- if (!port->vphys_mask)
- continue;
- list_for_each_entry_safe(vphy, vphy_next, &port->vphys_list,
- list) {
- vphy->flags |= LEAPIORAID_VPHY_FLAG_DIRTY_PHY;
- }
- }
- sz = offsetof(struct LeapioraidSasIOUnitP0_t, PhyData)
- + (ioc->sas_hba.num_phys
- * sizeof(struct LEAPIORAID_SAS_IO_UNIT0_PHY_DATA));
- sas_iounit_pg0 = kzalloc(sz, GFP_KERNEL);
- if (!sas_iounit_pg0) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- if ((leapioraid_config_get_sas_iounit_pg0(ioc, &mpi_reply,
- sas_iounit_pg0, sz)) != 0)
- goto out;
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS)
- goto out;
- for (i = 0; i < ioc->sas_hba.num_phys; i++) {
- if ((sas_iounit_pg0->PhyData[i].NegotiatedLinkRate >> 4) <
- LEAPIORAID_SAS_NEG_LINK_RATE_1_5)
- continue;
- if (!(le32_to_cpu(sas_iounit_pg0->PhyData[i].ControllerPhyDeviceInfo)
- & LEAPIORAID_SAS_DEVICE_INFO_SEP))
- continue;
- if ((leapioraid_config_get_phy_pg0(ioc, &mpi_reply, &phy_pg0,
- i))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- continue;
- }
- if (!
- (le32_to_cpu(phy_pg0.PhyInfo) &
- LEAPIORAID_SAS_PHYINFO_VIRTUAL_PHY))
- continue;
- attached_handle =
- le16_to_cpu(sas_iounit_pg0->PhyData[i].AttachedDevHandle);
- if (leapioraid_scsihost_get_sas_address
- (ioc, attached_handle, &attached_sas_addr)
- != 0) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- continue;
- }
- found = 0;
- port = port_next = NULL;
- list_for_each_entry_safe(port, port_next, &ioc->port_table_list,
- list) {
- if (!port->vphys_mask)
- continue;
- list_for_each_entry_safe(vphy, vphy_next,
- &port->vphys_list, list) {
- if (!
- (vphy->flags & LEAPIORAID_VPHY_FLAG_DIRTY_PHY))
- continue;
- if (vphy->sas_address != attached_sas_addr)
- continue;
- if (!(vphy->phy_mask & (1 << i)))
- vphy->phy_mask = (1 << i);
- port_id = sas_iounit_pg0->PhyData[i].Port;
- mport =
- leapioraid_get_port_by_id(ioc, port_id, 1);
- if (!mport) {
- mport =
- kzalloc(sizeof(struct leapioraid_hba_port),
- GFP_KERNEL);
- if (!mport) {
- pr_err(
- "%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__,
- __LINE__, __func__);
- break;
- }
- mport->port_id = port_id;
- pr_err(
- "%s %s: hba_port entry: %p, port: %d is added to hba_port list\n",
- ioc->name, __func__, mport,
- mport->port_id);
- list_add_tail(&mport->list,
- &ioc->port_table_list);
- }
- if (port != mport) {
- if (!mport->vphys_mask)
- INIT_LIST_HEAD(&mport->vphys_list);
- mport->vphys_mask |= (1 << i);
- port->vphys_mask &= ~(1 << i);
- list_move(&vphy->list,
- &mport->vphys_list);
- sas_device =
- leapioraid_get_sdev_by_addr(ioc,
- attached_sas_addr,
- port);
- if (sas_device)
- sas_device->port = mport;
- }
- if (mport->flags & LEAPIORAID_HBA_PORT_FLAG_DIRTY_PORT) {
- mport->sas_address = 0;
- mport->phy_mask = 0;
- mport->flags &=
- ~LEAPIORAID_HBA_PORT_FLAG_DIRTY_PORT;
- }
- vphy->flags &= ~LEAPIORAID_VPHY_FLAG_DIRTY_PHY;
- found = 1;
- break;
- }
- if (found)
- break;
- }
- }
-out:
- kfree(sas_iounit_pg0);
-}
-
-static u8
-leapioraid_scsihost_get_port_table_after_reset(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_hba_port *port_table)
-{
- u16 sz, ioc_status;
- int i, j;
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasIOUnitP0_t *sas_iounit_pg0 = NULL;
- u16 attached_handle;
- u64 attached_sas_addr;
- u8 found = 0, port_count = 0, port_id;
-
- sz = offsetof(struct LeapioraidSasIOUnitP0_t, PhyData)
- + (ioc->sas_hba.num_phys
- * sizeof(struct LEAPIORAID_SAS_IO_UNIT0_PHY_DATA));
- sas_iounit_pg0 = kzalloc(sz, GFP_KERNEL);
- if (!sas_iounit_pg0) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return port_count;
- }
- if ((leapioraid_config_get_sas_iounit_pg0(ioc, &mpi_reply,
- sas_iounit_pg0, sz)) != 0)
- goto out;
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS)
- goto out;
- for (i = 0; i < ioc->sas_hba.num_phys; i++) {
- found = 0;
- if ((sas_iounit_pg0->PhyData[i].NegotiatedLinkRate >> 4) <
- LEAPIORAID_SAS_NEG_LINK_RATE_1_5)
- continue;
- attached_handle =
- le16_to_cpu(sas_iounit_pg0->PhyData[i].AttachedDevHandle);
- if (leapioraid_scsihost_get_sas_address
- (ioc, attached_handle, &attached_sas_addr)
- != 0) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- continue;
- }
- for (j = 0; j < port_count; j++) {
- port_id = sas_iounit_pg0->PhyData[i].Port;
- if ((port_table[j].port_id == port_id) &&
- (port_table[j].sas_address == attached_sas_addr)) {
- port_table[j].phy_mask |= (1 << i);
- found = 1;
- break;
- }
- }
- if (found)
- continue;
- port_id = sas_iounit_pg0->PhyData[i].Port;
- port_table[port_count].port_id = port_id;
- port_table[port_count].phy_mask = (1 << i);
- port_table[port_count].sas_address = attached_sas_addr;
- port_count++;
- }
-out:
- kfree(sas_iounit_pg0);
- return port_count;
-}
-
-enum hba_port_matched_codes {
- NOT_MATCHED = 0,
- MATCHED_WITH_ADDR_AND_PHYMASK,
- MATCHED_WITH_ADDR_SUBPHYMASK_AND_PORT,
- MATCHED_WITH_ADDR_AND_SUBPHYMASK,
- MATCHED_WITH_ADDR,
-};
-static int
-leapioraid_scsihost_look_and_get_matched_port_entry(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_hba_port *port_entry,
- struct leapioraid_hba_port **matched_port_entry,
- int *count)
-{
- struct leapioraid_hba_port *port_table_entry, *matched_port = NULL;
- enum hba_port_matched_codes matched_code = NOT_MATCHED;
- int lcount = 0;
-
- *matched_port_entry = NULL;
- list_for_each_entry(port_table_entry, &ioc->port_table_list, list) {
- if (!(port_table_entry->flags & LEAPIORAID_HBA_PORT_FLAG_DIRTY_PORT))
- continue;
- if ((port_table_entry->sas_address == port_entry->sas_address)
- && (port_table_entry->phy_mask == port_entry->phy_mask)) {
- matched_code = MATCHED_WITH_ADDR_AND_PHYMASK;
- matched_port = port_table_entry;
- break;
- }
- if ((port_table_entry->sas_address == port_entry->sas_address)
- && (port_table_entry->phy_mask & port_entry->phy_mask)
- && (port_table_entry->port_id == port_entry->port_id)) {
- matched_code = MATCHED_WITH_ADDR_SUBPHYMASK_AND_PORT;
- matched_port = port_table_entry;
- continue;
- }
- if ((port_table_entry->sas_address == port_entry->sas_address)
- && (port_table_entry->phy_mask & port_entry->phy_mask)) {
- if (matched_code ==
- MATCHED_WITH_ADDR_SUBPHYMASK_AND_PORT)
- continue;
- matched_code = MATCHED_WITH_ADDR_AND_SUBPHYMASK;
- matched_port = port_table_entry;
- continue;
- }
- if (port_table_entry->sas_address == port_entry->sas_address) {
- if (matched_code ==
- MATCHED_WITH_ADDR_SUBPHYMASK_AND_PORT)
- continue;
- if (matched_code == MATCHED_WITH_ADDR_AND_SUBPHYMASK)
- continue;
- matched_code = MATCHED_WITH_ADDR;
- matched_port = port_table_entry;
- lcount++;
- }
- }
- *matched_port_entry = matched_port;
- if (matched_code == MATCHED_WITH_ADDR)
- *count = lcount;
- return matched_code;
-}
-
-static void
-leapioraid_scsihost_del_phy_part_of_anther_port(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_hba_port *port_table,
- int index, u8 port_count, int offset)
-{
- struct leapioraid_raid_sas_node *sas_node = &ioc->sas_hba;
- u32 i, found = 0;
-
- for (i = 0; i < port_count; i++) {
- if (i == index)
- continue;
- if (port_table[i].phy_mask & (1 << offset)) {
- leapioraid_transport_del_phy_from_an_existing_port(
- ioc,
- sas_node,
- &sas_node->phy
- [offset]);
- found = 1;
- break;
- }
- }
- if (!found)
- port_table[index].phy_mask |= (1 << offset);
-}
-
-static void
-leapioraid_scsihost_add_or_del_phys_from_existing_port(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_hba_port *hba_port_entry,
- struct leapioraid_hba_port *port_table,
- int index, u8 port_count)
-{
- u32 phy_mask, offset = 0;
- struct leapioraid_raid_sas_node *sas_node = &ioc->sas_hba;
-
- phy_mask = hba_port_entry->phy_mask ^ port_table[index].phy_mask;
- for (offset = 0; offset < ioc->sas_hba.num_phys; offset++) {
- if (phy_mask & (1 << offset)) {
- if (!(port_table[index].phy_mask & (1 << offset))) {
- leapioraid_scsihost_del_phy_part_of_anther_port(
- ioc,
- port_table,
- index,
- port_count,
- offset);
- } else {
-#if defined(LEAPIORAID_WIDE_PORT_API)
- if (sas_node->phy[offset].phy_belongs_to_port)
- leapioraid_transport_del_phy_from_an_existing_port
- (ioc, sas_node,
- &sas_node->phy[offset]);
- leapioraid_transport_add_phy_to_an_existing_port
- (ioc, sas_node, &sas_node->phy[offset],
- hba_port_entry->sas_address,
- hba_port_entry);
-#endif
- }
- }
- }
-}
-
-static void
-leapioraid_scsihost_del_dirty_vphy(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_hba_port *port, *port_next;
- struct leapioraid_virtual_phy *vphy, *vphy_next;
-
- list_for_each_entry_safe(port, port_next, &ioc->port_table_list, list) {
- if (!port->vphys_mask)
- continue;
- list_for_each_entry_safe(vphy, vphy_next, &port->vphys_list,
- list) {
- if (vphy->flags & LEAPIORAID_VPHY_FLAG_DIRTY_PHY) {
- drsprintk(ioc, pr_err(
- "%s Deleting vphy %p entry from port id: %d\t, Phy_mask 0x%08x\n",
- ioc->name, vphy,
- port->port_id,
- vphy->phy_mask));
- port->vphys_mask &= ~vphy->phy_mask;
- list_del(&vphy->list);
- kfree(vphy);
- }
- }
- if (!port->vphys_mask && !port->sas_address)
- port->flags |= LEAPIORAID_HBA_PORT_FLAG_DIRTY_PORT;
- }
-}
-
-static void
-leapioraid_scsihost_del_dirty_port_entries(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_hba_port *port, *port_next;
-
- list_for_each_entry_safe(port, port_next, &ioc->port_table_list, list) {
- if (!(port->flags & LEAPIORAID_HBA_PORT_FLAG_DIRTY_PORT) ||
- port->flags & LEAPIORAID_HBA_PORT_FLAG_NEW_PORT)
- continue;
- drsprintk(ioc, pr_err(
- "%s Deleting port table entry %p having Port id: %d\t, Phy_mask 0x%08x\n",
- ioc->name, port, port->port_id,
- port->phy_mask));
- list_del(&port->list);
- kfree(port);
- }
-}
-
-static void
-leapioraid_scsihost_sas_port_refresh(struct LEAPIORAID_ADAPTER *ioc)
-{
- u8 port_count = 0;
- struct leapioraid_hba_port *port_table;
- struct leapioraid_hba_port *port_table_entry;
- struct leapioraid_hba_port *port_entry = NULL;
- int i, j, ret, count = 0, lcount = 0;
- u64 sas_addr;
- u8 num_phys;
-
- drsprintk(ioc, pr_err(
- "%s updating ports for sas_host(0x%016llx)\n",
- ioc->name,
- (unsigned long long)ioc->sas_hba.sas_address));
- leapioraid_config_get_number_hba_phys(ioc, &num_phys);
- if (!num_phys) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- if (num_phys > ioc->sas_hba.nr_phys_allocated) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- ioc->sas_hba.num_phys = num_phys;
- port_table = kcalloc(ioc->sas_hba.num_phys,
- sizeof(struct leapioraid_hba_port), GFP_KERNEL);
- if (!port_table)
- return;
- port_count = leapioraid_scsihost_get_port_table_after_reset(
- ioc, port_table);
- if (!port_count)
- return;
- drsprintk(ioc,
- pr_info("%s New Port table\n", ioc->name));
- for (j = 0; j < port_count; j++)
- drsprintk(ioc, pr_err(
- "%s Port: %d\t Phy_mask 0x%08x\t sas_addr(0x%016llx)\n",
- ioc->name, port_table[j].port_id,
- port_table[j].phy_mask,
- port_table[j].sas_address));
- list_for_each_entry(port_table_entry, &ioc->port_table_list, list) {
- port_table_entry->flags |= LEAPIORAID_HBA_PORT_FLAG_DIRTY_PORT;
- }
- drsprintk(ioc,
- pr_info("%s Old Port table\n", ioc->name));
- port_table_entry = NULL;
- list_for_each_entry(port_table_entry, &ioc->port_table_list, list) {
- drsprintk(ioc, pr_err(
- "%s Port: %d\t Phy_mask 0x%08x\t sas_addr(0x%016llx)\n",
- ioc->name, port_table_entry->port_id,
- port_table_entry->phy_mask,
- port_table_entry->sas_address));
- }
- for (j = 0; j < port_count; j++) {
- ret = leapioraid_scsihost_look_and_get_matched_port_entry(ioc,
- &port_table[j],
- &port_entry,
- &count);
- if (!port_entry) {
- drsprintk(ioc, pr_err(
- "%s No Matched entry for sas_addr(0x%16llx), Port:%d\n",
- ioc->name,
- port_table[j].sas_address,
- port_table[j].port_id));
- continue;
- }
- switch (ret) {
- case MATCHED_WITH_ADDR_SUBPHYMASK_AND_PORT:
- case MATCHED_WITH_ADDR_AND_SUBPHYMASK:
- leapioraid_scsihost_add_or_del_phys_from_existing_port(ioc,
- port_entry,
- port_table,
- j,
- port_count);
- break;
- case MATCHED_WITH_ADDR:
- sas_addr = port_table[j].sas_address;
- for (i = 0; i < port_count; i++) {
- if (port_table[i].sas_address == sas_addr)
- lcount++;
- }
- if ((count > 1) || (lcount > 1))
- port_entry = NULL;
- else
- leapioraid_scsihost_add_or_del_phys_from_existing_port
- (ioc, port_entry, port_table, j,
- port_count);
- }
- if (!port_entry)
- continue;
- if (port_entry->port_id != port_table[j].port_id)
- port_entry->port_id = port_table[j].port_id;
- port_entry->flags &= ~LEAPIORAID_HBA_PORT_FLAG_DIRTY_PORT;
- port_entry->phy_mask = port_table[j].phy_mask;
- }
- port_table_entry = NULL;
-}
-
-static
-struct leapioraid_virtual_phy *leapioraid_scsihost_alloc_vphy(
- struct LEAPIORAID_ADAPTER *ioc,
- u8 port_id, u8 phy_num)
-{
- struct leapioraid_virtual_phy *vphy;
- struct leapioraid_hba_port *port;
-
- port = leapioraid_get_port_by_id(ioc, port_id, 0);
- if (!port)
- return NULL;
- vphy = leapioraid_get_vphy_by_phy(ioc, port, phy_num);
- if (!vphy) {
- vphy = kzalloc(sizeof(struct leapioraid_virtual_phy), GFP_KERNEL);
- if (!vphy)
- return NULL;
- if (!port->vphys_mask)
- INIT_LIST_HEAD(&port->vphys_list);
- port->vphys_mask |= (1 << phy_num);
- vphy->phy_mask |= (1 << phy_num);
- list_add_tail(&vphy->list, &port->vphys_list);
- pr_info(
- "%s vphy entry: %p, port id: %d, phy:%d is added to port's vphys_list\n",
- ioc->name, vphy, port->port_id, phy_num);
- }
- return vphy;
-}
-
-static void
-leapioraid_scsihost_sas_host_refresh(struct LEAPIORAID_ADAPTER *ioc)
-{
- u16 sz;
- u16 ioc_status;
- int i;
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasIOUnitP0_t *sas_iounit_pg0 = NULL;
- u16 attached_handle;
- u8 link_rate, port_id;
- struct leapioraid_hba_port *port;
- struct LeapioraidSasPhyP0_t phy_pg0;
-
- dtmprintk(ioc, pr_err(
- "%s updating handles for sas_host(0x%016llx)\n",
- ioc->name,
- (unsigned long long)ioc->sas_hba.sas_address));
- sz = offsetof(struct LeapioraidSasIOUnitP0_t,
- PhyData) +
- (ioc->sas_hba.num_phys * sizeof(struct LEAPIORAID_SAS_IO_UNIT0_PHY_DATA));
- sas_iounit_pg0 = kzalloc(sz, GFP_KERNEL);
- if (!sas_iounit_pg0) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- if ((leapioraid_config_get_sas_iounit_pg0(ioc, &mpi_reply,
- sas_iounit_pg0, sz)) != 0)
- goto out;
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS)
- goto out;
- for (i = 0; i < ioc->sas_hba.num_phys; i++) {
- link_rate = sas_iounit_pg0->PhyData[i].NegotiatedLinkRate >> 4;
- if (i == 0)
- ioc->sas_hba.handle =
- le16_to_cpu(sas_iounit_pg0->PhyData[0].ControllerDevHandle);
- port_id = sas_iounit_pg0->PhyData[i].Port;
- if (!(leapioraid_get_port_by_id(ioc, port_id, 0))) {
- port = kzalloc(sizeof(struct leapioraid_hba_port), GFP_KERNEL);
- if (!port)
- goto out;
-
- port->port_id = port_id;
- pr_info(
- "%s hba_port entry: %p, port: %d is added to hba_port list\n",
- ioc->name, port, port->port_id);
- if (ioc->shost_recovery)
- port->flags = LEAPIORAID_HBA_PORT_FLAG_NEW_PORT;
- list_add_tail(&port->list, &ioc->port_table_list);
- }
- if (le32_to_cpu
- (sas_iounit_pg0->PhyData[i].ControllerPhyDeviceInfo)
- & LEAPIORAID_SAS_DEVICE_INFO_SEP
- && (link_rate >= LEAPIORAID_SAS_NEG_LINK_RATE_1_5)) {
- if ((leapioraid_config_get_phy_pg0
- (ioc, &mpi_reply, &phy_pg0, i))) {
- pr_err(
- "%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- continue;
- }
- if (!
- (le32_to_cpu(phy_pg0.PhyInfo) &
- LEAPIORAID_SAS_PHYINFO_VIRTUAL_PHY))
- continue;
- if (!leapioraid_scsihost_alloc_vphy(ioc, port_id, i))
- goto out;
- ioc->sas_hba.phy[i].hba_vphy = 1;
- }
- ioc->sas_hba.phy[i].handle = ioc->sas_hba.handle;
- attached_handle =
- le16_to_cpu(sas_iounit_pg0->PhyData[i].AttachedDevHandle);
- if (attached_handle
- && link_rate < LEAPIORAID_SAS_NEG_LINK_RATE_1_5)
- link_rate = LEAPIORAID_SAS_NEG_LINK_RATE_1_5;
- ioc->sas_hba.phy[i].port =
- leapioraid_get_port_by_id(ioc, port_id, 0);
- if (!ioc->sas_hba.phy[i].phy) {
- if ((leapioraid_config_get_phy_pg0
- (ioc, &mpi_reply, &phy_pg0, i))) {
- pr_err(
- "%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- continue;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err(
- "%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- continue;
- }
- ioc->sas_hba.phy[i].phy_id = i;
- leapioraid_transport_add_host_phy(ioc,
- &ioc->sas_hba.phy[i],
- phy_pg0,
- ioc->sas_hba.parent_dev);
- continue;
- }
- leapioraid_transport_update_links(ioc, ioc->sas_hba.sas_address,
- attached_handle, i, link_rate,
- ioc->sas_hba.phy[i].port);
- }
-out:
- kfree(sas_iounit_pg0);
-}
-
-static void
-leapioraid_scsihost_sas_host_add(struct LEAPIORAID_ADAPTER *ioc)
-{
- int i;
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasIOUnitP0_t *sas_iounit_pg0 = NULL;
- struct LeapioraidSasIOUnitP1_t *sas_iounit_pg1 = NULL;
- struct LeapioraidSasPhyP0_t phy_pg0;
- struct LeapioraidSasDevP0_t sas_device_pg0;
- struct LeapioraidSasEncP0_t enclosure_pg0;
- u16 ioc_status;
- u16 sz;
- u8 device_missing_delay;
- u8 num_phys, port_id;
- struct leapioraid_hba_port *port;
-
- leapioraid_config_get_number_hba_phys(ioc, &num_phys);
- if (!num_phys) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- ioc->sas_hba.nr_phys_allocated =
- max_t(u8, LEAPIORAID_MAX_HBA_NUM_PHYS, num_phys);
- ioc->sas_hba.phy =
- kcalloc(ioc->sas_hba.nr_phys_allocated,
- sizeof(struct leapioraid_sas_phy),
- GFP_KERNEL);
- if (!ioc->sas_hba.phy) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- ioc->sas_hba.num_phys = num_phys;
- sz = offsetof(struct LeapioraidSasIOUnitP0_t,
- PhyData) +
- (ioc->sas_hba.num_phys
- * sizeof(struct LEAPIORAID_SAS_IO_UNIT0_PHY_DATA));
- sas_iounit_pg0 = kzalloc(sz, GFP_KERNEL);
- if (!sas_iounit_pg0) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- if ((leapioraid_config_get_sas_iounit_pg0(ioc, &mpi_reply,
- sas_iounit_pg0, sz))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus)
- & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- sz = offsetof(struct LeapioraidSasIOUnitP1_t,
- PhyData) +
- (ioc->sas_hba.num_phys
- * sizeof(struct LEAPIORAID_SAS_IO_UNIT1_PHY_DATA));
- sas_iounit_pg1 = kzalloc(sz, GFP_KERNEL);
- if (!sas_iounit_pg1) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- if ((leapioraid_config_get_sas_iounit_pg1(ioc, &mpi_reply,
- sas_iounit_pg1, sz))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- ioc->io_missing_delay = sas_iounit_pg1->IODeviceMissingDelay;
- device_missing_delay = sas_iounit_pg1->ReportDeviceMissingDelay;
- if (device_missing_delay & LEAPIORAID_SASIOUNIT1_REPORT_MISSING_UNIT_16)
- ioc->device_missing_delay = (device_missing_delay &
- LEAPIORAID_SASIOUNIT1_REPORT_MISSING_TIMEOUT_MASK)
- * 16;
- else
- ioc->device_missing_delay = device_missing_delay &
- LEAPIORAID_SASIOUNIT1_REPORT_MISSING_TIMEOUT_MASK;
- ioc->sas_hba.parent_dev = &ioc->shost->shost_gendev;
- for (i = 0; i < ioc->sas_hba.num_phys; i++) {
- if ((leapioraid_config_get_phy_pg0(ioc, &mpi_reply, &phy_pg0,
- i))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- if (i == 0)
- ioc->sas_hba.handle =
- le16_to_cpu(sas_iounit_pg0->PhyData[0].ControllerDevHandle);
- port_id = sas_iounit_pg0->PhyData[i].Port;
- if (!(leapioraid_get_port_by_id(ioc, port_id, 0))) {
- port = kzalloc(sizeof(struct leapioraid_hba_port), GFP_KERNEL);
- if (!port)
- goto out;
-
- port->port_id = port_id;
- pr_info(
- "%s hba_port entry: %p, port: %d is added to hba_port list\n",
- ioc->name, port, port->port_id);
- list_add_tail(&port->list, &ioc->port_table_list);
- }
- if ((le32_to_cpu(phy_pg0.PhyInfo) &
- LEAPIORAID_SAS_PHYINFO_VIRTUAL_PHY)
- && (phy_pg0.NegotiatedLinkRate >> 4) >=
- LEAPIORAID_SAS_NEG_LINK_RATE_1_5) {
- if (!leapioraid_scsihost_alloc_vphy(ioc, port_id, i))
- goto out;
- ioc->sas_hba.phy[i].hba_vphy = 1;
- }
- ioc->sas_hba.phy[i].handle = ioc->sas_hba.handle;
- ioc->sas_hba.phy[i].phy_id = i;
- ioc->sas_hba.phy[i].port =
- leapioraid_get_port_by_id(ioc, port_id, 0);
- leapioraid_transport_add_host_phy(ioc, &ioc->sas_hba.phy[i],
- phy_pg0,
- ioc->sas_hba.parent_dev);
- }
- if ((leapioraid_config_get_sas_device_pg0
- (ioc, &mpi_reply, &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE, ioc->sas_hba.handle))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- ioc->sas_hba.enclosure_handle =
- le16_to_cpu(sas_device_pg0.EnclosureHandle);
- ioc->sas_hba.sas_address = le64_to_cpu(sas_device_pg0.SASAddress);
- pr_info(
- "%s host_add: handle(0x%04x), sas_addr(0x%016llx), phys(%d)\n",
- ioc->name,
- ioc->sas_hba.handle,
- (unsigned long long)ioc->sas_hba.sas_address,
- ioc->sas_hba.num_phys);
- if (ioc->sas_hba.enclosure_handle) {
- if (!(leapioraid_config_get_enclosure_pg0(ioc, &mpi_reply,
- &enclosure_pg0,
- LEAPIORAID_SAS_ENCLOS_PGAD_FORM_HANDLE,
- ioc->sas_hba.enclosure_handle)))
- ioc->sas_hba.enclosure_logical_id =
- le64_to_cpu(enclosure_pg0.EnclosureLogicalID);
- }
-out:
- kfree(sas_iounit_pg1);
- kfree(sas_iounit_pg0);
-}
-
-static int
-leapioraid_scsihost_expander_add(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct leapioraid_raid_sas_node *sas_expander;
- struct leapioraid_enclosure_node *enclosure_dev;
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidExpanderP0_t expander_pg0;
- struct LeapioraidExpanderP1_t expander_pg1;
- u32 ioc_status;
- u16 parent_handle;
- u64 sas_address, sas_address_parent = 0;
- int i;
- unsigned long flags;
- u8 port_id;
- struct leapioraid_sas_port *leapioraid_port = NULL;
- int rc = 0;
-
- if (!handle)
- return -1;
- if (ioc->shost_recovery || ioc->pci_error_recovery)
- return -1;
- if ((leapioraid_config_get_expander_pg0(
- ioc, &mpi_reply, &expander_pg0,
- LEAPIORAID_SAS_EXPAND_PGAD_FORM_HNDL,
- handle))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return -1;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus)
- & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return -1;
- }
- parent_handle = le16_to_cpu(expander_pg0.ParentDevHandle);
- if (leapioraid_scsihost_get_sas_address(
- ioc, parent_handle, &sas_address_parent)
- != 0) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return -1;
- }
- port_id = expander_pg0.PhysicalPort;
- if (sas_address_parent != ioc->sas_hba.sas_address) {
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- sas_expander =
- leapioraid_scsihost_expander_find_by_sas_address(
- ioc,
- sas_address_parent,
- leapioraid_get_port_by_id(ioc, port_id, 0));
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- if (!sas_expander) {
- rc = leapioraid_scsihost_expander_add(ioc, parent_handle);
- if (rc != 0)
- return rc;
- }
- }
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- sas_address = le64_to_cpu(expander_pg0.SASAddress);
- sas_expander = leapioraid_scsihost_expander_find_by_sas_address(
- ioc,
- sas_address,
- leapioraid_get_port_by_id(ioc, port_id, 0));
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- if (sas_expander)
- return 0;
- sas_expander = kzalloc(sizeof(struct leapioraid_raid_sas_node),
- GFP_KERNEL);
- if (!sas_expander)
- return -1;
-
- sas_expander->handle = handle;
- sas_expander->num_phys = expander_pg0.NumPhys;
- sas_expander->sas_address_parent = sas_address_parent;
- sas_expander->sas_address = sas_address;
- sas_expander->port = leapioraid_get_port_by_id(ioc, port_id, 0);
- if (!sas_expander->port) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -1;
- goto out_fail;
- }
- pr_info(
- "%s expander_add: handle(0x%04x), parent(0x%04x), sas_addr(0x%016llx), phys(%d)\n",
- ioc->name,
- handle, parent_handle,
- (unsigned long long)sas_expander->sas_address,
- sas_expander->num_phys);
- if (!sas_expander->num_phys) {
- rc = -1;
- goto out_fail;
- }
- sas_expander->phy = kcalloc(sas_expander->num_phys,
- sizeof(struct leapioraid_sas_phy), GFP_KERNEL);
- if (!sas_expander->phy) {
- rc = -1;
- goto out_fail;
- }
- INIT_LIST_HEAD(&sas_expander->sas_port_list);
- leapioraid_port = leapioraid_transport_port_add(
- ioc, handle,
- sas_address_parent,
- sas_expander->port);
- if (!leapioraid_port) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -1;
- goto out_fail;
- }
- sas_expander->parent_dev = &leapioraid_port->rphy->dev;
- sas_expander->rphy = leapioraid_port->rphy;
- for (i = 0; i < sas_expander->num_phys; i++) {
- if ((leapioraid_config_get_expander_pg1(
- ioc, &mpi_reply,
- &expander_pg1, i,
- handle))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -1;
- goto out_fail;
- }
- sas_expander->phy[i].handle = handle;
- sas_expander->phy[i].phy_id = i;
- sas_expander->phy[i].port =
- leapioraid_get_port_by_id(ioc, port_id, 0);
- if ((leapioraid_transport_add_expander_phy
- (ioc, &sas_expander->phy[i], expander_pg1,
- sas_expander->parent_dev))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -1;
- goto out_fail;
- }
- }
- if (sas_expander->enclosure_handle) {
- enclosure_dev =
- leapioraid_scsihost_enclosure_find_by_handle(
- ioc,
- sas_expander->enclosure_handle);
- if (enclosure_dev)
- sas_expander->enclosure_logical_id =
- le64_to_cpu(enclosure_dev->pg0.EnclosureLogicalID);
- }
- leapioraid_scsihost_expander_node_add(ioc, sas_expander);
- return 0;
-out_fail:
- if (leapioraid_port)
- leapioraid_transport_port_remove(ioc,
- sas_expander->sas_address,
- sas_address_parent,
- sas_expander->port);
- kfree(sas_expander);
- return rc;
-}
-
-void
-leapioraid_expander_remove(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address, struct leapioraid_hba_port *port)
-{
- struct leapioraid_raid_sas_node *sas_expander;
- unsigned long flags;
-
- if (ioc->shost_recovery)
- return;
- if (!port)
- return;
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- sas_expander = leapioraid_scsihost_expander_find_by_sas_address(
- ioc, sas_address, port);
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- if (sas_expander)
- leapioraid_scsihost_expander_node_remove(
- ioc, sas_expander);
-}
-
-static u8
-leapioraid_scsihost_done(
- struct LEAPIORAID_ADAPTER *ioc, u16 smid, u8 msix_index,
- u32 reply)
-{
- struct LeapioraidDefaultRep_t *mpi_reply;
-
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (ioc->scsih_cmds.status == LEAPIORAID_CMD_NOT_USED)
- return 1;
- if (ioc->scsih_cmds.smid != smid)
- return 1;
- ioc->scsih_cmds.status |= LEAPIORAID_CMD_COMPLETE;
- if (mpi_reply) {
- memcpy(ioc->scsih_cmds.reply, mpi_reply,
- mpi_reply->MsgLength * 4);
- ioc->scsih_cmds.status |= LEAPIORAID_CMD_REPLY_VALID;
- }
- ioc->scsih_cmds.status &= ~LEAPIORAID_CMD_PENDING;
- complete(&ioc->scsih_cmds.done);
- return 1;
-}
-
-static int
-leapioraid_scsi_send_scsi_io(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_scsi_io_transfer *transfer_packet,
- u8 tr_timeout, u8 tr_method)
-{
- struct LeapioraidSCSIIORep_t *mpi_reply;
- struct LeapioSCSIIOReq_t *mpi_request;
- u16 smid;
- u8 issue_reset = 0;
- int rc;
- void *priv_sense;
- u32 mpi_control;
- void *psge;
- dma_addr_t data_out_dma = 0;
- dma_addr_t data_in_dma = 0;
- size_t data_in_sz = 0;
- size_t data_out_sz = 0;
- u16 handle;
- u8 retry_count = 0, host_reset_count = 0;
- int tm_return_code;
-
- if (ioc->pci_error_recovery) {
- pr_err("%s %s: pci error recovery in progress!\n",
- ioc->name, __func__);
- return -EFAULT;
- }
- if (ioc->shost_recovery) {
- pr_info("%s %s: host recovery in progress!\n",
- ioc->name, __func__);
- return -EAGAIN;
- }
- handle = transfer_packet->handle;
- if (handle == LEAPIORAID_INVALID_DEVICE_HANDLE) {
- pr_info("%s %s: no device!\n",
- __func__, ioc->name);
- return -EFAULT;
- }
- mutex_lock(&ioc->scsih_cmds.mutex);
- if (ioc->scsih_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: scsih_cmd in use\n",
- ioc->name, __func__);
- rc = -EAGAIN;
- goto out;
- }
-retry_loop:
- if (test_bit(handle, ioc->device_remove_in_progress)) {
- pr_info("%s %s: device removal in progress\n",
- ioc->name, __func__);
- rc = -EFAULT;
- goto out;
- }
- ioc->scsih_cmds.status = LEAPIORAID_CMD_PENDING;
- rc = leapioraid_wait_for_ioc_to_operational(ioc, 10);
- if (rc)
- goto out;
- smid = ioc->shost->can_queue
- + LEAPIORAID_INTERNAL_SCSIIO_FOR_DISCOVERY;
- rc = 0;
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->scsih_cmds.smid = smid;
- memset(mpi_request, 0, sizeof(struct LeapioSCSIIOReq_t));
- if (transfer_packet->is_raid)
- mpi_request->Function =
- LEAPIORAID_FUNC_RAID_SCSI_IO_PASSTHROUGH;
- else
- mpi_request->Function = LEAPIORAID_FUNC_SCSI_IO_REQUEST;
- mpi_request->DevHandle = cpu_to_le16(handle);
- switch (transfer_packet->dir) {
- case DMA_TO_DEVICE:
- mpi_control = LEAPIORAID_SCSIIO_CONTROL_WRITE;
- data_out_dma = transfer_packet->data_dma;
- data_out_sz = transfer_packet->data_length;
- break;
- case DMA_FROM_DEVICE:
- mpi_control = LEAPIORAID_SCSIIO_CONTROL_READ;
- data_in_dma = transfer_packet->data_dma;
- data_in_sz = transfer_packet->data_length;
- break;
- case DMA_BIDIRECTIONAL:
- mpi_control = LEAPIORAID_SCSIIO_CONTROL_BIDIRECTIONAL;
- BUG();
- break;
- default:
- case DMA_NONE:
- mpi_control = LEAPIORAID_SCSIIO_CONTROL_NODATATRANSFER;
- break;
- }
- psge = &mpi_request->SGL;
- ioc->build_sg(
- ioc, psge, data_out_dma,
- data_out_sz, data_in_dma,
- data_in_sz);
- mpi_request->Control = cpu_to_le32(mpi_control |
- LEAPIORAID_SCSIIO_CONTROL_SIMPLEQ);
- mpi_request->DataLength = cpu_to_le32(transfer_packet->data_length);
- mpi_request->MsgFlags = LEAPIORAID_SCSIIO_MSGFLAGS_SYSTEM_SENSE_ADDR;
- mpi_request->SenseBufferLength = SCSI_SENSE_BUFFERSIZE;
- mpi_request->SenseBufferLowAddress =
- leapioraid_base_get_sense_buffer_dma(ioc, smid);
- priv_sense = leapioraid_base_get_sense_buffer(ioc, smid);
- mpi_request->SGLOffset0 = offsetof(struct LeapioSCSIIOReq_t, SGL) / 4;
- mpi_request->IoFlags = cpu_to_le16(transfer_packet->cdb_length);
- int_to_scsilun(transfer_packet->lun, (struct scsi_lun *)
- mpi_request->LUN);
- memcpy(mpi_request->CDB.CDB32, transfer_packet->cdb,
- transfer_packet->cdb_length);
- init_completion(&ioc->scsih_cmds.done);
- if (likely(mpi_request->Function == LEAPIORAID_FUNC_SCSI_IO_REQUEST))
- ioc->put_smid_scsi_io(ioc, smid, handle);
- else
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->scsih_cmds.done,
- transfer_packet->timeout * HZ);
- if (!(ioc->scsih_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- leapioraid_check_cmd_timeout(ioc,
- ioc->scsih_cmds.status,
- mpi_request,
- sizeof(struct LeapioSCSIIOReq_t) / 4,
- issue_reset);
- goto issue_target_reset;
- }
- if (ioc->scsih_cmds.status & LEAPIORAID_CMD_REPLY_VALID) {
- transfer_packet->valid_reply = 1;
- mpi_reply = ioc->scsih_cmds.reply;
- transfer_packet->sense_length =
- le32_to_cpu(mpi_reply->SenseCount);
- if (transfer_packet->sense_length)
- memcpy(transfer_packet->sense, priv_sense,
- transfer_packet->sense_length);
- transfer_packet->transfer_length =
- le32_to_cpu(mpi_reply->TransferCount);
- transfer_packet->ioc_status =
- le16_to_cpu(mpi_reply->IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- transfer_packet->scsi_state = mpi_reply->SCSIState;
- transfer_packet->scsi_status = mpi_reply->SCSIStatus;
- transfer_packet->log_info = le32_to_cpu(mpi_reply->IOCLogInfo);
- }
- goto out;
-issue_target_reset:
- if (issue_reset) {
- pr_info("%s issue target reset: handle(0x%04x)\n", ioc->name, handle);
- tm_return_code =
- leapioraid_scsihost_issue_locked_tm(ioc, handle,
- 0xFFFFFFFF, 0xFFFFFFFF,
- 0,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_TARGET_RESET,
- smid, tr_timeout,
- tr_method);
- if (tm_return_code == SUCCESS) {
- pr_err(
- "%s target reset completed: handle (0x%04x)\n",
- ioc->name, handle);
- if (((ioc->scsih_cmds.status & LEAPIORAID_CMD_COMPLETE)
- && retry_count++ < 3)
- || ((ioc->scsih_cmds.status & LEAPIORAID_CMD_RESET)
- && host_reset_count++ == 0)) {
- pr_info("%s issue retry: handle (0x%04x)\n",
- ioc->name, handle);
- goto retry_loop;
- }
- } else
- pr_err("%s target reset didn't complete: handle(0x%04x)\n",
- ioc->name, handle);
- rc = -EFAULT;
- } else
- rc = -EAGAIN;
-out:
- ioc->scsih_cmds.status = LEAPIORAID_CMD_NOT_USED;
- mutex_unlock(&ioc->scsih_cmds.mutex);
- return rc;
-}
-
-static enum device_responsive_state
-leapioraid_scsihost_determine_disposition(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_scsi_io_transfer *transfer_packet)
-{
- static enum device_responsive_state rc;
- struct sense_info sense_info = { 0, 0, 0 };
- u8 check_sense = 0;
- char *desc = NULL;
-
- if (!transfer_packet->valid_reply)
- return DEVICE_READY;
- switch (transfer_packet->ioc_status) {
- case LEAPIORAID_IOCSTATUS_BUSY:
- case LEAPIORAID_IOCSTATUS_INSUFFICIENT_RESOURCES:
- case LEAPIORAID_IOCSTATUS_SCSI_TASK_TERMINATED:
- case LEAPIORAID_IOCSTATUS_SCSI_IO_DATA_ERROR:
- case LEAPIORAID_IOCSTATUS_SCSI_EXT_TERMINATED:
- rc = DEVICE_RETRY;
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_IOC_TERMINATED:
- if (transfer_packet->log_info == 0x31170000) {
- rc = DEVICE_RETRY;
- break;
- }
- if (transfer_packet->cdb[0] == REPORT_LUNS)
- rc = DEVICE_READY;
- else
- rc = DEVICE_RETRY;
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_DATA_UNDERRUN:
- case LEAPIORAID_IOCSTATUS_SCSI_RECOVERED_ERROR:
- case LEAPIORAID_IOCSTATUS_SUCCESS:
- if (!transfer_packet->scsi_state &&
- !transfer_packet->scsi_status) {
- rc = DEVICE_READY;
- break;
- }
- if (transfer_packet->scsi_state &
- LEAPIORAID_SCSI_STATE_AUTOSENSE_VALID) {
- rc = DEVICE_ERROR;
- check_sense = 1;
- break;
- }
- if (transfer_packet->scsi_state &
- (LEAPIORAID_SCSI_STATE_AUTOSENSE_FAILED |
- LEAPIORAID_SCSI_STATE_NO_SCSI_STATUS |
- LEAPIORAID_SCSI_STATE_TERMINATED)) {
- rc = DEVICE_RETRY;
- break;
- }
- if (transfer_packet->scsi_status >= LEAPIORAID_SCSI_STATUS_BUSY) {
- rc = DEVICE_RETRY;
- break;
- }
- rc = DEVICE_READY;
- break;
- case LEAPIORAID_IOCSTATUS_SCSI_PROTOCOL_ERROR:
- if (transfer_packet->scsi_state & LEAPIORAID_SCSI_STATE_TERMINATED)
- rc = DEVICE_RETRY;
- else
- rc = DEVICE_ERROR;
- break;
- case LEAPIORAID_IOCSTATUS_INSUFFICIENT_POWER:
- default:
- rc = DEVICE_ERROR;
- break;
- }
- if (check_sense) {
- leapioraid_scsihost_normalize_sense(
- transfer_packet->sense, &sense_info);
- if (sense_info.skey == UNIT_ATTENTION)
- rc = DEVICE_RETRY_UA;
- else if (sense_info.skey == NOT_READY) {
- if (sense_info.asc == 0x3a)
- rc = DEVICE_READY;
- else if (sense_info.asc == 0x04) {
- if (sense_info.ascq == 0x03 ||
- sense_info.ascq == 0x0b ||
- sense_info.ascq == 0x0c) {
- rc = DEVICE_ERROR;
- } else
- rc = DEVICE_START_UNIT;
- } else if (sense_info.asc == 0x3e && !sense_info.ascq)
- rc = DEVICE_START_UNIT;
- } else if (sense_info.skey == ILLEGAL_REQUEST &&
- transfer_packet->cdb[0] == REPORT_LUNS) {
- rc = DEVICE_READY;
- } else if (sense_info.skey == MEDIUM_ERROR) {
- if (sense_info.asc == 0x31)
- rc = DEVICE_READY;
- } else if (sense_info.skey == HARDWARE_ERROR) {
- if (sense_info.asc == 0x19)
- rc = DEVICE_READY;
- }
- }
- if (ioc->logging_level & LEAPIORAID_DEBUG_EVENT_WORK_TASK) {
- switch (rc) {
- case DEVICE_READY:
- desc = "ready";
- break;
- case DEVICE_RETRY:
- desc = "retry";
- break;
- case DEVICE_RETRY_UA:
- desc = "retry_ua";
- break;
- case DEVICE_START_UNIT:
- desc = "start_unit";
- break;
- case DEVICE_STOP_UNIT:
- desc = "stop_unit";
- break;
- case DEVICE_ERROR:
- desc = "error";
- break;
- }
- pr_info(
- "%s \tioc_status(0x%04x), loginfo(0x%08x),\n\t\t"
- "scsi_status(0x%02x), scsi_state(0x%02x), rc(%s)\n",
- ioc->name,
- transfer_packet->ioc_status,
- transfer_packet->log_info,
- transfer_packet->scsi_status,
- transfer_packet->scsi_state,
- desc);
- if (check_sense)
- pr_info("%s \t[sense_key,asc,ascq]: [0x%02x,0x%02x,0x%02x]\n",
- ioc->name,
- sense_info.skey, sense_info.asc,
- sense_info.ascq);
- }
- return rc;
-}
-
-static enum device_responsive_state
-leapioraid_scsihost_inquiry_vpd_sn(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- u8 **serial_number)
-{
- struct leapioraid_scsi_io_transfer *transfer_packet;
- enum device_responsive_state rc;
- u8 *inq_data;
- int return_code;
- u32 data_length;
- u8 len;
- u8 tr_timeout = 30;
- u8 tr_method = 0;
-
- inq_data = NULL;
- transfer_packet
- = kzalloc(sizeof(struct leapioraid_scsi_io_transfer), GFP_KERNEL);
- if (!transfer_packet) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = DEVICE_RETRY;
- goto out;
- }
- data_length = 252;
- inq_data = dma_alloc_coherent(&ioc->pdev->dev, data_length,
- &transfer_packet->data_dma, GFP_ATOMIC);
- if (!inq_data) {
- rc = DEVICE_RETRY;
- goto out;
- }
-
- rc = DEVICE_READY;
- memset(inq_data, 0, data_length);
- transfer_packet->handle = handle;
- transfer_packet->dir = DMA_FROM_DEVICE;
- transfer_packet->data_length = data_length;
- transfer_packet->cdb_length = 6;
- transfer_packet->cdb[0] = INQUIRY;
- transfer_packet->cdb[1] = 1;
- transfer_packet->cdb[2] = 0x80;
- transfer_packet->cdb[4] = data_length;
- transfer_packet->timeout = 30;
- tr_method = LEAPIORAID_SCSITASKMGMT_MSGFLAGS_LINK_RESET;
- return_code =
- leapioraid_scsi_send_scsi_io(
- ioc, transfer_packet, tr_timeout, tr_method);
- switch (return_code) {
- case 0:
- rc = leapioraid_scsihost_determine_disposition(
- ioc, transfer_packet);
- if (rc == DEVICE_READY) {
- len = strlen(&inq_data[4]) + 1;
- *serial_number = kmalloc(len, GFP_KERNEL);
- if (*serial_number)
- strscpy(*serial_number, &inq_data[4], len);
- }
- break;
- case -EAGAIN:
- rc = DEVICE_RETRY;
- break;
- case -EFAULT:
- default:
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = DEVICE_ERROR;
- break;
- }
-out:
- if (inq_data)
- dma_free_coherent(&ioc->pdev->dev, data_length, inq_data,
- transfer_packet->data_dma);
- kfree(transfer_packet);
- return rc;
-}
-
-static enum device_responsive_state
-leapioraid_scsihost_inquiry_vpd_supported_pages(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, u32 lun, void *data,
- u32 data_length)
-{
- struct leapioraid_scsi_io_transfer *transfer_packet;
- enum device_responsive_state rc;
- void *inq_data;
- int return_code;
-
- inq_data = NULL;
- transfer_packet = kzalloc(sizeof(struct leapioraid_scsi_io_transfer),
- GFP_KERNEL);
- if (!transfer_packet) {
- rc = DEVICE_RETRY;
- goto out;
- }
- inq_data = dma_alloc_coherent(&ioc->pdev->dev, data_length,
- &transfer_packet->data_dma, GFP_ATOMIC);
- if (!inq_data) {
- rc = DEVICE_RETRY;
- goto out;
- }
- rc = DEVICE_READY;
- memset(inq_data, 0, data_length);
- transfer_packet->handle = handle;
- transfer_packet->dir = DMA_FROM_DEVICE;
- transfer_packet->data_length = data_length;
- transfer_packet->cdb_length = 6;
- transfer_packet->lun = lun;
- transfer_packet->cdb[0] = INQUIRY;
- transfer_packet->cdb[1] = 1;
- transfer_packet->cdb[4] = data_length;
- transfer_packet->timeout = 30;
- return_code = leapioraid_scsi_send_scsi_io(
- ioc, transfer_packet, 30, 0);
- switch (return_code) {
- case 0:
- rc = leapioraid_scsihost_determine_disposition(
- ioc, transfer_packet);
- if (rc == DEVICE_READY)
- memcpy(data, inq_data, data_length);
- break;
- case -EAGAIN:
- rc = DEVICE_RETRY;
- break;
- case -EFAULT:
- default:
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = DEVICE_ERROR;
- break;
- }
-out:
- if (inq_data)
- dma_free_coherent(&ioc->pdev->dev, data_length, inq_data,
- transfer_packet->data_dma);
- kfree(transfer_packet);
- return rc;
-}
-
-static enum device_responsive_state
-leapioraid_scsihost_report_luns(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle, void *data,
- u32 data_length, u8 retry_count, u8 is_pd, u8 tr_timeout,
- u8 tr_method)
-{
- struct leapioraid_scsi_io_transfer *transfer_packet;
- enum device_responsive_state rc;
- void *lun_data;
- int return_code;
- int retries;
-
- lun_data = NULL;
- transfer_packet = kzalloc(sizeof(struct leapioraid_scsi_io_transfer),
- GFP_KERNEL);
- if (!transfer_packet) {
- rc = DEVICE_RETRY;
- goto out;
- }
- lun_data = dma_alloc_coherent(&ioc->pdev->dev, data_length,
- &transfer_packet->data_dma, GFP_ATOMIC);
- if (!lun_data) {
- rc = DEVICE_RETRY;
- goto out;
- }
- for (retries = 0; retries < 4; retries++) {
- rc = DEVICE_ERROR;
- pr_info("%s REPORT_LUNS: handle(0x%04x), retries(%d)\n",
- ioc->name, handle, retries);
- memset(lun_data, 0, data_length);
- transfer_packet->handle = handle;
- transfer_packet->dir = DMA_FROM_DEVICE;
- transfer_packet->data_length = data_length;
- transfer_packet->cdb_length = 12;
- transfer_packet->cdb[0] = REPORT_LUNS;
- transfer_packet->cdb[6] = (data_length >> 24) & 0xFF;
- transfer_packet->cdb[7] = (data_length >> 16) & 0xFF;
- transfer_packet->cdb[8] = (data_length >> 8) & 0xFF;
- transfer_packet->cdb[9] = data_length & 0xFF;
- transfer_packet->timeout = 30;
- transfer_packet->is_raid = is_pd;
- return_code =
- leapioraid_scsi_send_scsi_io(ioc, transfer_packet, tr_timeout,
- tr_method);
- switch (return_code) {
- case 0:
- rc = leapioraid_scsihost_determine_disposition(ioc,
- transfer_packet);
- if (rc == DEVICE_READY) {
- memcpy(data, lun_data, data_length);
- goto out;
- } else if (rc == DEVICE_ERROR)
- goto out;
- break;
- case -EAGAIN:
- rc = DEVICE_RETRY;
- break;
- case -EFAULT:
- default:
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- }
-out:
- if (lun_data)
- dma_free_coherent(&ioc->pdev->dev, data_length, lun_data,
- transfer_packet->data_dma);
- kfree(transfer_packet);
- if ((rc == DEVICE_RETRY || rc == DEVICE_START_UNIT ||
- rc == DEVICE_RETRY_UA) && retry_count >= 144)
- rc = DEVICE_ERROR;
- return rc;
-}
-
-static enum device_responsive_state
-leapioraid_scsihost_start_unit(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle, u32 lun,
- u8 is_pd, u8 tr_timeout, u8 tr_method)
-{
- struct leapioraid_scsi_io_transfer *transfer_packet;
- enum device_responsive_state rc;
- int return_code;
-
- transfer_packet = kzalloc(sizeof(struct leapioraid_scsi_io_transfer),
- GFP_KERNEL);
- if (!transfer_packet) {
- rc = DEVICE_RETRY;
- goto out;
- }
-
- rc = DEVICE_READY;
- transfer_packet->handle = handle;
- transfer_packet->dir = DMA_NONE;
- transfer_packet->lun = lun;
- transfer_packet->cdb_length = 6;
- transfer_packet->cdb[0] = START_STOP;
- transfer_packet->cdb[1] = 1;
- transfer_packet->cdb[4] = 1;
- transfer_packet->timeout = 30;
- transfer_packet->is_raid = is_pd;
- pr_info("%s START_UNIT: handle(0x%04x), lun(%d)\n",
- ioc->name, handle, lun);
- return_code =
- leapioraid_scsi_send_scsi_io(
- ioc, transfer_packet, tr_timeout, tr_method);
- switch (return_code) {
- case 0:
- rc = leapioraid_scsihost_determine_disposition(
- ioc, transfer_packet);
- break;
- case -EAGAIN:
- rc = DEVICE_RETRY;
- break;
- case -EFAULT:
- default:
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = DEVICE_ERROR;
- break;
- }
-out:
- kfree(transfer_packet);
- return rc;
-}
-
-static enum device_responsive_state
-leapioraid_scsihost_test_unit_ready(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle, u32 lun,
- u8 is_pd, u8 tr_timeout, u8 tr_method)
-{
- struct leapioraid_scsi_io_transfer *transfer_packet;
- enum device_responsive_state rc;
- int return_code;
- int sata_init_failure = 0;
-
- transfer_packet = kzalloc(sizeof(struct leapioraid_scsi_io_transfer),
- GFP_KERNEL);
- if (!transfer_packet) {
- rc = DEVICE_RETRY;
- goto out;
- }
- rc = DEVICE_READY;
- transfer_packet->handle = handle;
- transfer_packet->dir = DMA_NONE;
- transfer_packet->lun = lun;
- transfer_packet->cdb_length = 6;
- transfer_packet->cdb[0] = TEST_UNIT_READY;
- transfer_packet->timeout = 30;
- transfer_packet->is_raid = is_pd;
-sata_init_retry:
- pr_info("%s TEST_UNIT_READY: handle(0x%04x), lun(%d)\n",
- ioc->name, handle, lun);
- return_code =
- leapioraid_scsi_send_scsi_io(
- ioc, transfer_packet, tr_timeout, tr_method);
- switch (return_code) {
- case 0:
- rc = leapioraid_scsihost_determine_disposition(
- ioc, transfer_packet);
- if (rc == DEVICE_RETRY &&
- transfer_packet->log_info == 0x31111000) {
- if (!sata_init_failure++) {
- pr_err(
- "%s SATA Initialization Timeout,sending a retry\n",
- ioc->name);
- rc = DEVICE_READY;
- goto sata_init_retry;
- } else {
- pr_err(
- "%s SATA Initialization Failed\n",
- ioc->name);
- rc = DEVICE_ERROR;
- }
- }
- break;
- case -EAGAIN:
- rc = DEVICE_RETRY;
- break;
- case -EFAULT:
- default:
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = DEVICE_ERROR;
- break;
- }
-out:
- kfree(transfer_packet);
- return rc;
-}
-
-static enum device_responsive_state
-leapioraid_scsihost_ata_pass_thru_idd(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- u8 *is_ssd_device, u8 tr_timeout, u8 tr_method)
-{
- struct leapioraid_scsi_io_transfer *transfer_packet;
- enum device_responsive_state rc;
- u16 *idd_data;
- int return_code;
- u32 data_length;
-
- idd_data = NULL;
- transfer_packet = kzalloc(sizeof(struct leapioraid_scsi_io_transfer),
- GFP_KERNEL);
- if (!transfer_packet) {
- rc = DEVICE_RETRY;
- goto out;
- }
- data_length = 512;
- idd_data = dma_alloc_coherent(&ioc->pdev->dev, data_length,
- &transfer_packet->data_dma, GFP_ATOMIC);
- if (!idd_data) {
- rc = DEVICE_RETRY;
- goto out;
- }
- rc = DEVICE_READY;
- memset(idd_data, 0, data_length);
- transfer_packet->handle = handle;
- transfer_packet->dir = DMA_FROM_DEVICE;
- transfer_packet->data_length = data_length;
- transfer_packet->cdb_length = 12;
- transfer_packet->cdb[0] = ATA_12;
- transfer_packet->cdb[1] = 0x8;
- transfer_packet->cdb[2] = 0xd;
- transfer_packet->cdb[3] = 0x1;
- transfer_packet->cdb[9] = 0xec;
- transfer_packet->timeout = 30;
- return_code = leapioraid_scsi_send_scsi_io(
- ioc, transfer_packet, 30, 0);
- switch (return_code) {
- case 0:
- rc = leapioraid_scsihost_determine_disposition(
- ioc, transfer_packet);
- if (rc == DEVICE_READY) {
- if (le16_to_cpu(idd_data[217]) == 1)
- *is_ssd_device = 1;
- }
- break;
- case -EAGAIN:
- rc = DEVICE_RETRY;
- break;
- case -EFAULT:
- default:
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = DEVICE_ERROR;
- break;
- }
-out:
- if (idd_data) {
- dma_free_coherent(&ioc->pdev->dev, data_length, idd_data,
- transfer_packet->data_dma);
- }
- kfree(transfer_packet);
- return rc;
-}
-
-static enum device_responsive_state
-leapioraid_scsihost_wait_for_device_to_become_ready(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, u8 retry_count, u8 is_pd,
- int lun, u8 tr_timeout, u8 tr_method)
-{
- enum device_responsive_state rc;
-
- if (ioc->pci_error_recovery)
- return DEVICE_ERROR;
- if (ioc->shost_recovery)
- return DEVICE_RETRY;
- rc = leapioraid_scsihost_test_unit_ready(
- ioc, handle, lun, is_pd, tr_timeout,
- tr_method);
- if (rc == DEVICE_READY || rc == DEVICE_ERROR)
- return rc;
- else if (rc == DEVICE_START_UNIT) {
- rc = leapioraid_scsihost_start_unit(
- ioc, handle, lun, is_pd, tr_timeout,
- tr_method);
- if (rc == DEVICE_ERROR)
- return rc;
- rc = leapioraid_scsihost_test_unit_ready(
- ioc, handle, lun, is_pd,
- tr_timeout, tr_method);
- }
- if ((rc == DEVICE_RETRY || rc == DEVICE_START_UNIT ||
- rc == DEVICE_RETRY_UA) && retry_count >= 144)
- rc = DEVICE_ERROR;
- return rc;
-}
-
-static enum device_responsive_state
-leapioraid_scsihost_wait_for_target_to_become_ready(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, u8 retry_count, u8 is_pd,
- u8 tr_timeout, u8 tr_method)
-{
- enum device_responsive_state rc;
- struct scsi_lun *lun_data;
- u32 length, num_luns;
- u8 *data;
- int lun;
- struct scsi_lun *lunp;
-
- lun_data =
- kcalloc(255, sizeof(struct scsi_lun), GFP_KERNEL);
- if (!lun_data) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return DEVICE_RETRY;
- }
- rc = leapioraid_scsihost_report_luns(ioc, handle, lun_data,
- 255 * sizeof(struct scsi_lun),
- retry_count, is_pd, tr_timeout, tr_method);
- if (rc != DEVICE_READY)
- goto out;
- data = (u8 *) lun_data;
- length = ((data[0] << 24) | (data[1] << 16) |
- (data[2] << 8) | (data[3] << 0));
- num_luns = (length / sizeof(struct scsi_lun));
- lunp = &lun_data[1];
- lun = (num_luns) ? scsilun_to_int(&lun_data[1]) : 0;
- rc = leapioraid_scsihost_wait_for_device_to_become_ready(
- ioc, handle, retry_count,
- is_pd, lun, tr_timeout,
- tr_method);
- if (rc == DEVICE_ERROR) {
- struct scsi_lun *lunq;
-
- for (lunq = lunp++; lunq <= &lun_data[num_luns]; lunq++) {
- rc = leapioraid_scsihost_wait_for_device_to_become_ready(ioc,
- handle,
- retry_count,
- is_pd,
- scsilun_to_int
- (lunq),
- tr_timeout,
- tr_method);
- if (rc != DEVICE_ERROR)
- goto out;
- }
- }
-out:
- kfree(lun_data);
- return rc;
-}
-
-static u8
-leapioraid_scsihost_check_access_status(
- struct LEAPIORAID_ADAPTER *ioc, u64 sas_address,
- u16 handle, u8 access_status)
-{
- u8 rc = 1;
- char *desc = NULL;
-
- switch (access_status) {
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_NO_ERRORS:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SATA_NEEDS_INITIALIZATION:
- rc = 0;
- break;
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SATA_CAPABILITY_FAILED:
- desc = "sata capability failed";
- break;
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SATA_AFFILIATION_CONFLICT:
- desc = "sata affiliation conflict";
- break;
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_ROUTE_NOT_ADDRESSABLE:
- desc = "route not addressable";
- break;
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SMP_ERROR_NOT_ADDRESSABLE:
- desc = "smp error not addressable";
- break;
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_DEVICE_BLOCKED:
- desc = "device blocked";
- break;
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SATA_INIT_FAILED:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_UNKNOWN:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_AFFILIATION_CONFLICT:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_DIAG:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_IDENTIFICATION:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_CHECK_POWER:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_PIO_SN:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_MDMA_SN:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_UDMA_SN:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_ZONING_VIOLATION:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_NOT_ADDRESSABLE:
- case LEAPIORAID_SAS_DEVICE0_ASTATUS_SIF_MAX:
- desc = "sata initialization failed";
- break;
- default:
- desc = "unknown";
- break;
- }
- if (!rc)
- return 0;
- pr_err(
- "%s discovery errors(%s): sas_address(0x%016llx),\n\t\t"
- "handle(0x%04x)\n",
- ioc->name,
- desc,
- (unsigned long long)sas_address,
- handle);
- return rc;
-}
-
-static void
-leapioraid_scsihost_check_device(struct LEAPIORAID_ADAPTER *ioc,
- u64 parent_sas_address, u16 handle, u8 phy_number,
- u8 link_rate)
-{
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasDevP0_t sas_device_pg0;
- struct leapioraid_sas_device *sas_device = NULL;
- struct leapioraid_enclosure_node *enclosure_dev = NULL;
- u32 ioc_status;
- unsigned long flags;
- u64 sas_address;
- struct scsi_target *starget;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- u32 device_info;
- u8 *serial_number = NULL;
- u8 *original_serial_number = NULL;
- int rc;
- struct leapioraid_hba_port *port;
-
- if ((leapioraid_config_get_sas_device_pg0
- (ioc, &mpi_reply, &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE, handle)))
- return;
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus)
- & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS)
- return;
- if (phy_number != sas_device_pg0.PhyNum)
- return;
- device_info = le32_to_cpu(sas_device_pg0.DeviceInfo);
- if (!(leapioraid_scsihost_is_sas_end_device(device_info)))
- return;
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_address = le64_to_cpu(sas_device_pg0.SASAddress);
- port = leapioraid_get_port_by_id(ioc, sas_device_pg0.PhysicalPort, 0);
- if (!port)
- goto out_unlock;
- sas_device = __leapioraid_get_sdev_by_addr(ioc, sas_address, port);
- if (!sas_device)
- goto out_unlock;
- if (unlikely(sas_device->handle != handle)) {
- starget = sas_device->starget;
- sas_target_priv_data = starget->hostdata;
- starget_printk(KERN_INFO, starget,
- "handle changed from(0x%04x) to (0x%04x)!!!\n",
- sas_device->handle, handle);
- sas_target_priv_data->handle = handle;
- sas_device->handle = handle;
- if (le16_to_cpu(sas_device_pg0.Flags) &
- LEAPIORAID_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID) {
- sas_device->enclosure_level =
- sas_device_pg0.EnclosureLevel;
- memcpy(sas_device->connector_name,
- sas_device_pg0.ConnectorName, 4);
- sas_device->connector_name[4] = '\0';
- } else {
- sas_device->enclosure_level = 0;
- sas_device->connector_name[0] = '\0';
- }
- sas_device->enclosure_handle =
- le16_to_cpu(sas_device_pg0.EnclosureHandle);
- sas_device->is_chassis_slot_valid = 0;
- enclosure_dev =
- leapioraid_scsihost_enclosure_find_by_handle(ioc,
- sas_device->enclosure_handle);
- if (enclosure_dev) {
- sas_device->enclosure_logical_id =
- le64_to_cpu(enclosure_dev->pg0.EnclosureLogicalID);
- if (le16_to_cpu(enclosure_dev->pg0.Flags) &
- LEAPIORAID_SAS_ENCLS0_FLAGS_CHASSIS_SLOT_VALID) {
- sas_device->is_chassis_slot_valid = 1;
- sas_device->chassis_slot =
- enclosure_dev->pg0.ChassisSlot;
- }
- }
- }
- if (!(le16_to_cpu(sas_device_pg0.Flags) &
- LEAPIORAID_SAS_DEVICE0_FLAGS_DEVICE_PRESENT)) {
- pr_err("%s device is not present handle(0x%04x), flags!!!\n",
- ioc->name, handle);
- goto out_unlock;
- }
- if (leapioraid_scsihost_check_access_status(ioc, sas_address, handle,
- sas_device_pg0.AccessStatus))
- goto out_unlock;
- original_serial_number = sas_device->serial_number;
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- leapioraid_scsihost_ublock_io_device_wait(ioc, sas_address, port);
- if (!original_serial_number)
- goto out;
- if (leapioraid_scsihost_inquiry_vpd_sn(ioc, handle, &serial_number) ==
- DEVICE_READY && serial_number) {
- rc = strcmp(original_serial_number, serial_number);
- kfree(serial_number);
- if (!rc)
- goto out;
- leapioraid_device_remove_by_sas_address(ioc, sas_address, port);
- leapioraid_transport_update_links(ioc, parent_sas_address,
- handle, phy_number, link_rate,
- port);
- leapioraid_scsihost_add_device(ioc, handle, 0, 0);
- }
- goto out;
-out_unlock:
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
-out:
- if (sas_device)
- leapioraid_sas_device_put(sas_device);
-}
-
-static int
-leapioraid_scsihost_add_device(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle, u8 retry_count,
- u8 is_pd)
-{
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasDevP0_t sas_device_pg0;
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_enclosure_node *enclosure_dev = NULL;
- u32 ioc_status;
- u64 sas_address;
- u32 device_info;
- enum device_responsive_state rc;
- u8 connector_name[5], port_id;
-
- if ((leapioraid_config_get_sas_device_pg0
- (ioc, &mpi_reply, &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE, handle))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return 0;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus)
- & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return 0;
- }
- device_info = le32_to_cpu(sas_device_pg0.DeviceInfo);
- if (!(leapioraid_scsihost_is_sas_end_device(device_info)))
- return 0;
- set_bit(handle, ioc->pend_os_device_add);
- sas_address = le64_to_cpu(sas_device_pg0.SASAddress);
- if (!(le16_to_cpu(sas_device_pg0.Flags) &
- LEAPIORAID_SAS_DEVICE0_FLAGS_DEVICE_PRESENT)) {
- pr_err("%s device is not present handle(0x04%x)!!!\n",
- ioc->name, handle);
- return 0;
- }
- if (leapioraid_scsihost_check_access_status(
- ioc, sas_address, handle,
- sas_device_pg0.AccessStatus))
- return 0;
- port_id = sas_device_pg0.PhysicalPort;
- sas_device = leapioraid_get_sdev_by_addr(ioc,
- sas_address,
- leapioraid_get_port_by_id(ioc, port_id, 0));
- if (sas_device) {
- clear_bit(handle, ioc->pend_os_device_add);
- leapioraid_sas_device_put(sas_device);
- return 0;
- }
- if (le16_to_cpu(sas_device_pg0.EnclosureHandle)) {
- enclosure_dev =
- leapioraid_scsihost_enclosure_find_by_handle(ioc,
- le16_to_cpu
- (sas_device_pg0.EnclosureHandle));
- if (enclosure_dev == NULL)
- pr_info(
- "%s Enclosure handle(0x%04x)doesn't\n\t\t"
- "match with enclosure device!\n",
- ioc->name,
- le16_to_cpu(sas_device_pg0.EnclosureHandle));
- }
- if (!ioc->wait_for_discovery_to_complete) {
- pr_info(
- "%s detecting: handle(0x%04x), sas_address(0x%016llx), phy(%d)\n",
- ioc->name, handle,
- (unsigned long long)sas_address,
- sas_device_pg0.PhyNum);
- rc = leapioraid_scsihost_wait_for_target_to_become_ready(
- ioc, handle,
- retry_count,
- is_pd, 30, 0);
- if (rc != DEVICE_READY) {
- if (le16_to_cpu(sas_device_pg0.EnclosureHandle) != 0)
- dewtprintk(ioc,
- pr_info("%s %s: device not ready: slot(%d)\n",
- ioc->name, __func__,
- le16_to_cpu(sas_device_pg0.Slot)));
- if (le16_to_cpu(sas_device_pg0.Flags) &
- LEAPIORAID_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID) {
- memcpy(connector_name,
- sas_device_pg0.ConnectorName, 4);
- connector_name[4] = '\0';
- dewtprintk(ioc,
- pr_info(
- "%s %s: device not ready: enclosure level(0x%04x), connector name( %s)\n",
- ioc->name, __func__,
- sas_device_pg0.EnclosureLevel,
- connector_name));
- }
- if ((enclosure_dev)
- && (le16_to_cpu(enclosure_dev->pg0.Flags) &
- LEAPIORAID_SAS_ENCLS0_FLAGS_CHASSIS_SLOT_VALID))
- pr_err(
- "%s chassis slot(0x%04x)\n", ioc->name,
- enclosure_dev->pg0.ChassisSlot);
- if (rc == DEVICE_RETRY || rc == DEVICE_START_UNIT
- || rc == DEVICE_STOP_UNIT || rc == DEVICE_RETRY_UA)
- return 1;
- else if (rc == DEVICE_ERROR)
- return 0;
- }
- }
- sas_device = kzalloc(sizeof(struct leapioraid_sas_device),
- GFP_KERNEL);
- if (!sas_device)
- return 0;
-
- kref_init(&sas_device->refcount);
- sas_device->handle = handle;
- if (leapioraid_scsihost_get_sas_address(ioc,
- le16_to_cpu(sas_device_pg0.ParentDevHandle),
- &sas_device->sas_address_parent) != 0)
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- sas_device->enclosure_handle =
- le16_to_cpu(sas_device_pg0.EnclosureHandle);
- if (sas_device->enclosure_handle != 0)
- sas_device->slot = le16_to_cpu(sas_device_pg0.Slot);
- sas_device->device_info = device_info;
- sas_device->sas_address = sas_address;
- sas_device->port = leapioraid_get_port_by_id(ioc, port_id, 0);
- if (!sas_device->port) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out;
- }
- sas_device->phy = sas_device_pg0.PhyNum;
- sas_device->fast_path = (le16_to_cpu(sas_device_pg0.Flags) &
- LEAPIORAID_SAS_DEVICE0_FLAGS_FAST_PATH_CAPABLE) ?
- 1 : 0;
- sas_device->supports_sata_smart =
- (le16_to_cpu(sas_device_pg0.Flags) &
- LEAPIORAID_SAS_DEVICE0_FLAGS_SATA_SMART_SUPPORTED);
- if (le16_to_cpu(sas_device_pg0.Flags) &
- LEAPIORAID_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID) {
- sas_device->enclosure_level = sas_device_pg0.EnclosureLevel;
- memcpy(sas_device->connector_name,
- sas_device_pg0.ConnectorName, 4);
- sas_device->connector_name[4] = '\0';
- } else {
- sas_device->enclosure_level = 0;
- sas_device->connector_name[0] = '\0';
- }
- sas_device->is_chassis_slot_valid = 0;
- if (enclosure_dev) {
- sas_device->enclosure_logical_id =
- le64_to_cpu(enclosure_dev->pg0.EnclosureLogicalID);
- if (le16_to_cpu(enclosure_dev->pg0.Flags) &
- LEAPIORAID_SAS_ENCLS0_FLAGS_CHASSIS_SLOT_VALID) {
- sas_device->is_chassis_slot_valid = 1;
- sas_device->chassis_slot =
- enclosure_dev->pg0.ChassisSlot;
- }
- }
- sas_device->device_name = le64_to_cpu(sas_device_pg0.DeviceName);
- sas_device->port_type = sas_device_pg0.MaxPortConnections;
- pr_err(
- "%s handle(0x%0x) sas_address(0x%016llx) port_type(0x%0x)\n",
- ioc->name, handle, sas_device->sas_address,
- sas_device->port_type);
- if (ioc->wait_for_discovery_to_complete)
- leapioraid_scsihost_sas_device_init_add(ioc, sas_device);
- else
- leapioraid_scsihost_sas_device_add(ioc, sas_device);
-out:
- leapioraid_sas_device_put(sas_device);
- return 0;
-}
-
-static void
-leapioraid_scsihost_remove_device(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_device *sas_device)
-{
- struct LEAPIORAID_TARGET *sas_target_priv_data;
-
- if (sas_device->pfa_led_on) {
- leapioraid_scsihost_turn_off_pfa_led(ioc, sas_device);
- sas_device->pfa_led_on = 0;
- }
- dewtprintk(ioc, pr_info(
- "%s %s: enter: handle(0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, __func__, sas_device->handle,
- (unsigned long long)sas_device->sas_address));
- dewtprintk(ioc,
- leapioraid_scsihost_display_enclosure_chassis_info(
- ioc, sas_device, NULL, NULL));
- if (sas_device->starget && sas_device->starget->hostdata) {
- sas_target_priv_data = sas_device->starget->hostdata;
- sas_target_priv_data->deleted = 1;
- leapioraid_scsihost_ublock_io_device(
- ioc, sas_device->sas_address,
- sas_device->port);
- sas_target_priv_data->handle =
- LEAPIORAID_INVALID_DEVICE_HANDLE;
- }
- if (!ioc->hide_drives)
- leapioraid_transport_port_remove(ioc,
- sas_device->sas_address,
- sas_device->sas_address_parent,
- sas_device->port);
- pr_info("%s removing handle(0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, sas_device->handle,
- (unsigned long long)sas_device->sas_address);
- leapioraid_scsihost_display_enclosure_chassis_info(ioc, sas_device, NULL, NULL);
- dewtprintk(ioc, pr_info(
- "%s %s: exit: handle(0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, __func__, sas_device->handle,
- (unsigned long long)
- sas_device->sas_address));
- dewtprintk(ioc,
- leapioraid_scsihost_display_enclosure_chassis_info(
- ioc, sas_device, NULL, NULL));
- kfree(sas_device->serial_number);
-}
-
-static void
-leapioraid_scsihost_sas_topology_change_event_debug(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventDataSasTopoChangeList_t *event_data)
-{
- int i;
- u16 handle;
- u16 reason_code;
- u8 phy_number;
- char *status_str = NULL;
- u8 link_rate, prev_link_rate;
-
- switch (event_data->ExpStatus) {
- case LEAPIORAID_EVENT_SAS_TOPO_ES_ADDED:
- status_str = "add";
- break;
- case LEAPIORAID_EVENT_SAS_TOPO_ES_NOT_RESPONDING:
- status_str = "remove";
- break;
- case LEAPIORAID_EVENT_SAS_TOPO_ES_RESPONDING:
- case 0:
- status_str = "responding";
- break;
- case LEAPIORAID_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING:
- status_str = "remove delay";
- break;
- default:
- status_str = "unknown status";
- break;
- }
- pr_info("%s sas topology change: (%s)\n",
- ioc->name, status_str);
- pr_info(
- "\thandle(0x%04x), enclosure_handle(0x%04x)\n\t\t"
- "start_phy(%02d), count(%d)\n",
- le16_to_cpu(event_data->ExpanderDevHandle),
- le16_to_cpu(event_data->EnclosureHandle),
- event_data->StartPhyNum,
- event_data->NumEntries);
- for (i = 0; i < event_data->NumEntries; i++) {
- handle = le16_to_cpu(event_data->PHY[i].AttachedDevHandle);
- if (!handle)
- continue;
- phy_number = event_data->StartPhyNum + i;
- reason_code = event_data->PHY[i].PhyStatus &
- LEAPIORAID_EVENT_SAS_TOPO_RC_MASK;
- switch (reason_code) {
- case LEAPIORAID_EVENT_SAS_TOPO_RC_TARG_ADDED:
- status_str = "target add";
- break;
- case LEAPIORAID_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING:
- status_str = "target remove";
- break;
- case LEAPIORAID_EVENT_SAS_TOPO_RC_DELAY_NOT_RESPONDING:
- status_str = "delay target remove";
- break;
- case LEAPIORAID_EVENT_SAS_TOPO_RC_PHY_CHANGED:
- status_str = "link rate change";
- break;
- case LEAPIORAID_EVENT_SAS_TOPO_RC_NO_CHANGE:
- status_str = "target responding";
- break;
- default:
- status_str = "unknown";
- break;
- }
- link_rate = event_data->PHY[i].LinkRate >> 4;
- prev_link_rate = event_data->PHY[i].LinkRate & 0xF;
- pr_info(
- "\tphy(%02d), attached_handle(0x%04x): %s:\n\t\t"
- "link rate: new(0x%02x), old(0x%02x)\n",
- phy_number,
- handle,
- status_str,
- link_rate,
- prev_link_rate);
- }
-}
-
-static int
-leapioraid_scsihost_sas_topology_change_event(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- int i;
- u16 parent_handle, handle;
- u16 reason_code;
- u8 phy_number, max_phys;
- struct leapioraid_raid_sas_node *sas_expander;
- struct leapioraid_sas_device *sas_device;
- u64 sas_address;
- unsigned long flags;
- u8 link_rate, prev_link_rate;
- int rc;
- int requeue_event;
- struct leapioraid_hba_port *port;
- struct LeapioraidEventDataSasTopoChangeList_t *event_data =
- fw_event->event_data;
-
- if (ioc->logging_level & LEAPIORAID_DEBUG_EVENT_WORK_TASK)
- leapioraid_scsihost_sas_topology_change_event_debug(
- ioc, event_data);
- if (ioc->shost_recovery || ioc->remove_host || ioc->pci_error_recovery)
- return 0;
- if (!ioc->sas_hba.num_phys)
- leapioraid_scsihost_sas_host_add(ioc);
- else
- leapioraid_scsihost_sas_host_refresh(ioc);
- if (fw_event->ignore) {
- dewtprintk(ioc,
- pr_info("%s ignoring expander event\n",
- ioc->name));
- return 0;
- }
- parent_handle = le16_to_cpu(event_data->ExpanderDevHandle);
- port = leapioraid_get_port_by_id(ioc, event_data->PhysicalPort, 0);
- if (event_data->ExpStatus == LEAPIORAID_EVENT_SAS_TOPO_ES_ADDED)
- if (leapioraid_scsihost_expander_add(ioc, parent_handle) != 0)
- return 0;
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- sas_expander = leapioraid_scsihost_expander_find_by_handle(
- ioc, parent_handle);
- if (sas_expander) {
- sas_address = sas_expander->sas_address;
- max_phys = sas_expander->num_phys;
- port = sas_expander->port;
- } else if (parent_handle < ioc->sas_hba.num_phys) {
- sas_address = ioc->sas_hba.sas_address;
- max_phys = ioc->sas_hba.num_phys;
- } else {
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- return 0;
- }
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- for (i = 0, requeue_event = 0; i < event_data->NumEntries; i++) {
- if (fw_event->ignore) {
- dewtprintk(ioc, pr_info(
- "%s ignoring expander event\n",
- ioc->name));
- return 0;
- }
- if (ioc->remove_host || ioc->pci_error_recovery)
- return 0;
- phy_number = event_data->StartPhyNum + i;
- if (phy_number >= max_phys)
- continue;
- reason_code = event_data->PHY[i].PhyStatus &
- LEAPIORAID_EVENT_SAS_TOPO_RC_MASK;
- if ((event_data->PHY[i].PhyStatus &
- LEAPIORAID_EVENT_SAS_TOPO_PHYSTATUS_VACANT) && (reason_code !=
- LEAPIORAID_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING))
- continue;
- if (fw_event->delayed_work_active && (reason_code ==
- LEAPIORAID_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING)) {
- dewtprintk(ioc,
- pr_info(
- "%s ignoring Targ not responding\n\t\t"
- "event phy in re-queued event processing\n",
- ioc->name));
- continue;
- }
- handle = le16_to_cpu(event_data->PHY[i].AttachedDevHandle);
- if (!handle)
- continue;
- link_rate = event_data->PHY[i].LinkRate >> 4;
- prev_link_rate = event_data->PHY[i].LinkRate & 0xF;
- switch (reason_code) {
- case LEAPIORAID_EVENT_SAS_TOPO_RC_PHY_CHANGED:
- if (ioc->shost_recovery)
- break;
- if (link_rate == prev_link_rate)
- break;
- leapioraid_transport_update_links(ioc, sas_address,
- handle, phy_number,
- link_rate, port);
- if (link_rate < LEAPIORAID_SAS_NEG_LINK_RATE_1_5)
- break;
- leapioraid_scsihost_check_device(ioc, sas_address, handle,
- phy_number, link_rate);
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_handle(ioc,
- handle);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- if (sas_device) {
- leapioraid_sas_device_put(sas_device);
- break;
- }
- if (!test_bit(handle, ioc->pend_os_device_add))
- break;
- dewtprintk(ioc, pr_err(
- "%s handle(0x%04x) device not found:\n\t\t"
- "convert event to a device add\n",
- ioc->name, handle));
- event_data->PHY[i].PhyStatus &= 0xF0;
- event_data->PHY[i].PhyStatus |=
- LEAPIORAID_EVENT_SAS_TOPO_RC_TARG_ADDED;
- fallthrough;
- case LEAPIORAID_EVENT_SAS_TOPO_RC_TARG_ADDED:
- if (ioc->shost_recovery)
- break;
- leapioraid_transport_update_links(ioc, sas_address,
- handle, phy_number,
- link_rate, port);
- if (link_rate < LEAPIORAID_SAS_NEG_LINK_RATE_1_5)
- break;
- rc = leapioraid_scsihost_add_device(ioc, handle,
- fw_event->retries[i], 0);
- if (rc) {
- fw_event->retries[i]++;
- requeue_event = 1;
- } else {
- event_data->PHY[i].PhyStatus |=
- LEAPIORAID_EVENT_SAS_TOPO_PHYSTATUS_VACANT;
- }
- break;
- case LEAPIORAID_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING:
- leapioraid_scsihost_device_remove_by_handle(ioc, handle);
- break;
- }
- }
- if (event_data->ExpStatus == LEAPIORAID_EVENT_SAS_TOPO_ES_NOT_RESPONDING
- && sas_expander)
- leapioraid_expander_remove(ioc, sas_address, port);
- return requeue_event;
-}
-
-static void
-leapioraid_scsihost_sas_device_status_change_event_debug(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventDataSasDeviceStatusChange_t *event_data)
-{
- char *reason_str = NULL;
-
- switch (event_data->ReasonCode) {
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_SMART_DATA:
- reason_str = "smart data";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_UNSUPPORTED:
- reason_str = "unsupported device discovered";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET:
- reason_str = "internal device reset";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_TASK_ABORT_INTERNAL:
- reason_str = "internal task abort";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_ABORT_TASK_SET_INTERNAL:
- reason_str = "internal task abort set";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_CLEAR_TASK_SET_INTERNAL:
- reason_str = "internal clear task set";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_QUERY_TASK_INTERNAL:
- reason_str = "internal query task";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_SATA_INIT_FAILURE:
- reason_str = "sata init failure";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET:
- reason_str = "internal device reset complete";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_CMP_TASK_ABORT_INTERNAL:
- reason_str = "internal task abort complete";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_ASYNC_NOTIFICATION:
- reason_str = "internal async notification";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_EXPANDER_REDUCED_FUNCTIONALITY:
- reason_str = "expander reduced functionality";
- break;
- case LEAPIORAID_EVENT_SAS_DEV_STAT_RC_CMP_EXPANDER_REDUCED_FUNCTIONALITY:
- reason_str = "expander reduced functionality complete";
- break;
- default:
- reason_str = "unknown reason";
- break;
- }
- pr_info("%s device status change: (%s)\n"
- "\thandle(0x%04x), sas address(0x%016llx), tag(%d)",
- ioc->name, reason_str, le16_to_cpu(event_data->DevHandle),
- (unsigned long long)le64_to_cpu(event_data->SASAddress),
- le16_to_cpu(event_data->TaskTag));
- if (event_data->ReasonCode == LEAPIORAID_EVENT_SAS_DEV_STAT_RC_SMART_DATA)
- pr_info("%s , ASC(0x%x), ASCQ(0x%x)\n",
- ioc->name, event_data->ASC, event_data->ASCQ);
- pr_info("\n");
-}
-
-static void
-leapioraid_scsihost_sas_device_status_change_event(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventDataSasDeviceStatusChange_t *event_data)
-{
- struct LEAPIORAID_TARGET *target_priv_data;
- struct leapioraid_sas_device *sas_device;
- u64 sas_address;
- unsigned long flags;
-
- if ((ioc->facts.HeaderVersion >> 8) < 0xC)
- return;
- if (event_data->ReasonCode !=
- LEAPIORAID_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET &&
- event_data->ReasonCode !=
- LEAPIORAID_EVENT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET)
- return;
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_address = le64_to_cpu(event_data->SASAddress);
- sas_device = __leapioraid_get_sdev_by_addr(
- ioc, sas_address,
- leapioraid_get_port_by_id(ioc, event_data->PhysicalPort, 0));
- if (!sas_device || !sas_device->starget)
- goto out;
- target_priv_data = sas_device->starget->hostdata;
- if (!target_priv_data)
- goto out;
- if (event_data->ReasonCode ==
- LEAPIORAID_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET)
- target_priv_data->tm_busy = 1;
- else
- target_priv_data->tm_busy = 0;
- if (ioc->logging_level & LEAPIORAID_DEBUG_EVENT_WORK_TASK)
- pr_err(
- "%s %s tm_busy flag for handle(0x%04x)\n", ioc->name,
- (target_priv_data->tm_busy == 1) ? "Enable" : "Disable",
- target_priv_data->handle);
-out:
- if (sas_device)
- leapioraid_sas_device_put(sas_device);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
-}
-
-static void
-leapioraid_scsihost_sas_enclosure_dev_status_change_event_debug(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventDataSasEnclDevStatusChange_t *event_data)
-{
- char *reason_str = NULL;
-
- switch (event_data->ReasonCode) {
- case LEAPIORAID_EVENT_SAS_ENCL_RC_ADDED:
- reason_str = "enclosure add";
- break;
- case LEAPIORAID_EVENT_SAS_ENCL_RC_NOT_RESPONDING:
- reason_str = "enclosure remove";
- break;
- default:
- reason_str = "unknown reason";
- break;
- }
- pr_info(
- "%s enclosure status change: (%s)\n\thandle(0x%04x),\n\t\t"
- "enclosure logical id(0x%016llx) number slots(%d)\n",
- ioc->name,
- reason_str,
- le16_to_cpu(event_data->EnclosureHandle),
- (unsigned long long)le64_to_cpu(event_data->EnclosureLogicalID),
- le16_to_cpu(event_data->StartSlot));
-}
-
-static void
-leapioraid_scsihost_sas_enclosure_dev_status_change_event(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- struct LeapioraidCfgRep_t mpi_reply;
- struct leapioraid_enclosure_node *enclosure_dev = NULL;
- struct LeapioraidEventDataSasEnclDevStatusChange_t *event_data =
- fw_event->event_data;
- int rc;
-
- if (ioc->logging_level & LEAPIORAID_DEBUG_EVENT_WORK_TASK)
- leapioraid_scsihost_sas_enclosure_dev_status_change_event_debug(
- ioc, fw_event->event_data);
- if (ioc->shost_recovery)
- return;
- event_data->EnclosureHandle = le16_to_cpu(event_data->EnclosureHandle);
- if (event_data->EnclosureHandle)
- enclosure_dev =
- leapioraid_scsihost_enclosure_find_by_handle(ioc,
- event_data->EnclosureHandle);
- switch (event_data->ReasonCode) {
- case LEAPIORAID_EVENT_SAS_ENCL_RC_ADDED:
- if (!enclosure_dev) {
- enclosure_dev =
- kzalloc(sizeof(struct leapioraid_enclosure_node), GFP_KERNEL);
- if (!enclosure_dev) {
- pr_err("%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- return;
- }
- rc = leapioraid_config_get_enclosure_pg0(ioc,
- &mpi_reply,
- &enclosure_dev->pg0,
- LEAPIORAID_SAS_ENCLOS_PGAD_FORM_HANDLE,
- event_data->EnclosureHandle);
- if (rc
- || (le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK)) {
- kfree(enclosure_dev);
- return;
- }
- list_add_tail(&enclosure_dev->list,
- &ioc->enclosure_list);
- }
- break;
- case LEAPIORAID_EVENT_SAS_ENCL_RC_NOT_RESPONDING:
- if (enclosure_dev) {
- list_del(&enclosure_dev->list);
- kfree(enclosure_dev);
- }
- break;
- default:
- break;
- }
-}
-
-static void
-leapioraid_scsihost_sas_broadcast_primitive_event(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- struct scsi_cmnd *scmd;
- struct scsi_device *sdev;
- u16 smid, handle;
- u32 lun;
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- u32 termination_count;
- u32 query_count;
- struct LeapioraidSCSITmgRep_t *mpi_reply;
- struct LeapioraidEventDataSasBroadcastPrimitive_t *event_data =
- fw_event->event_data;
- u16 ioc_status;
- unsigned long flags;
- int r;
- u8 max_retries = 0;
- u8 task_abort_retries;
- struct leapioraid_scsiio_tracker *st;
-
- mutex_lock(&ioc->tm_cmds.mutex);
- dewtprintk(ioc,
- pr_info(
- "%s %s: enter: phy number(%d), width(%d)\n",
- ioc->name, __func__,
- event_data->PhyNum, event_data->PortWidth));
- leapioraid_scsihost_block_io_all_device(ioc);
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- mpi_reply = ioc->tm_cmds.reply;
-broadcast_aen_retry:
- if (max_retries++ == 5) {
- dewtprintk(ioc, pr_info("%s %s: giving up\n",
- ioc->name, __func__));
- goto out;
- } else if (max_retries > 1)
- dewtprintk(ioc, pr_info("%s %s: %d retry\n",
- ioc->name, __func__, max_retries - 1));
- termination_count = 0;
- query_count = 0;
- for (smid = 1; smid <= ioc->shost->can_queue; smid++) {
- if (ioc->shost_recovery)
- goto out;
- scmd = leapioraid_scsihost_scsi_lookup_get(ioc, smid);
- if (!scmd)
- continue;
- st = leapioraid_base_scsi_cmd_priv(scmd);
- if (!st || st->smid == 0)
- continue;
- sdev = scmd->device;
- sas_device_priv_data = sdev->hostdata;
- if (!sas_device_priv_data || !sas_device_priv_data->sas_target)
- continue;
- if (sas_device_priv_data->sas_target->flags &
- LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT)
- continue;
- if (sas_device_priv_data->sas_target->flags &
- LEAPIORAID_TARGET_FLAGS_VOLUME)
- continue;
- handle = sas_device_priv_data->sas_target->handle;
- lun = sas_device_priv_data->lun;
- query_count++;
- if (ioc->shost_recovery)
- goto out;
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
- r = leapioraid_scsihost_issue_tm(ioc, handle, 0, 0, lun,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_QUERY_TASK,
- st->smid, 30, 0);
- if (r == FAILED) {
- sdev_printk(KERN_WARNING, sdev,
- "leapioraid_scsihost_issue_tm:\n\t\t"
- "FAILED when sending QUERY_TASK: scmd(%p)\n",
- scmd);
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- goto broadcast_aen_retry;
- }
- ioc_status = le16_to_cpu(mpi_reply->IOCStatus)
- & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- sdev_printk(KERN_WARNING, sdev,
- "query task: FAILED with IOCSTATUS(0x%04x), scmd(%p)\n",
- ioc_status, scmd);
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- goto broadcast_aen_retry;
- }
- if (mpi_reply->ResponseCode ==
- LEAPIORAID_SCSITASKMGMT_RSP_TM_SUCCEEDED ||
- mpi_reply->ResponseCode ==
- LEAPIORAID_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC) {
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- continue;
- }
- task_abort_retries = 0;
-tm_retry:
- if (task_abort_retries++ == 60) {
- dewtprintk(ioc, pr_err(
- "%s %s: ABORT_TASK: giving up\n",
- ioc->name, __func__));
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- goto broadcast_aen_retry;
- }
- if (ioc->shost_recovery)
- goto out_no_lock;
- r = leapioraid_scsihost_issue_tm(ioc, handle, sdev->channel,
- sdev->id, sdev->lun,
- LEAPIORAID_SCSITASKMGMT_TASKTYPE_ABORT_TASK,
- st->smid, 30, 0);
- if (r == FAILED) {
- sdev_printk(KERN_WARNING, sdev,
- "ABORT_TASK: FAILED : scmd(%p)\n", scmd);
- goto tm_retry;
- }
- if (task_abort_retries > 1)
- sdev_printk(KERN_WARNING, sdev,
- "leapioraid_scsihost_issue_tm:\n\t\t"
- "ABORT_TASK: RETRIES (%d): scmd(%p)\n",
- task_abort_retries - 1,
- scmd);
- termination_count += le32_to_cpu(mpi_reply->TerminationCount);
- spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
- }
- if (ioc->broadcast_aen_pending) {
- dewtprintk(ioc,
- pr_info("%s %s: loop back due to pending AEN\n",
- ioc->name, __func__));
- ioc->broadcast_aen_pending = 0;
- goto broadcast_aen_retry;
- }
-out:
- spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
-out_no_lock:
- dewtprintk(ioc, pr_err(
- "%s %s - exit, query_count = %d termination_count = %d\n",
- ioc->name, __func__, query_count,
- termination_count));
- ioc->broadcast_aen_busy = 0;
- if (!ioc->shost_recovery)
- leapioraid_scsihost_ublock_io_all_device(ioc, 1);
- mutex_unlock(&ioc->tm_cmds.mutex);
-}
-
-static void
-leapioraid_scsihost_sas_discovery_event(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- struct LeapioraidEventDataSasDiscovery_t *event_data
- = fw_event->event_data;
-
- if (ioc->logging_level & LEAPIORAID_DEBUG_EVENT_WORK_TASK) {
- pr_info("%s sas discovery event: (%s)",
- ioc->name,
- (event_data->ReasonCode ==
- LEAPIORAID_EVENT_SAS_DISC_RC_STARTED) ? "start" : "stop");
- if (event_data->DiscoveryStatus)
- pr_info("discovery_status(0x%08x)",
- le32_to_cpu(event_data->DiscoveryStatus));
- pr_info("\n");
- }
- if (event_data->ReasonCode == LEAPIORAID_EVENT_SAS_DISC_RC_STARTED &&
- !ioc->sas_hba.num_phys) {
- if (disable_discovery > 0 && ioc->shost_recovery) {
- while (ioc->shost_recovery)
- ssleep(1);
- }
- leapioraid_scsihost_sas_host_add(ioc);
- }
-}
-
-static void
-leapioraid_scsihost_sas_device_discovery_error_event(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- struct LeapioraidEventDataSasDeviceDiscoveryError_t *event_data =
- fw_event->event_data;
-
- switch (event_data->ReasonCode) {
- case LEAPIORAID_EVENT_SAS_DISC_ERR_SMP_FAILED:
- pr_warn(
- "%s SMP command sent to the expander(handle:0x%04x,\n\t\t"
- "sas_address:0x%016llx,physical_port:0x%02x) has failed\n",
- ioc->name,
- le16_to_cpu(event_data->DevHandle),
- (unsigned long long)le64_to_cpu(event_data->SASAddress),
- event_data->PhysicalPort);
- break;
- case LEAPIORAID_EVENT_SAS_DISC_ERR_SMP_TIMEOUT:
- pr_warn(
- "%s SMP command sent to the expander(handle:0x%04x,\n\t\t"
- "sas_address:0x%016llx,physical_port:0x%02x) has timed out\n",
- ioc->name,
- le16_to_cpu(event_data->DevHandle),
- (unsigned long long)le64_to_cpu(event_data->SASAddress),
- event_data->PhysicalPort);
- break;
- default:
- break;
- }
-}
-
-static int
-leapioraid_scsihost_ir_fastpath(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- u8 phys_disk_num)
-{
- struct LeapioraidRaidActionReq_t *mpi_request;
- struct LeapioraidRaidActionRep_t *mpi_reply;
- u16 smid;
- u8 issue_reset = 0;
- int rc = 0;
- u16 ioc_status;
- u32 log_info;
-
- mutex_lock(&ioc->scsih_cmds.mutex);
- if (ioc->scsih_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: scsih_cmd in use\n",
- ioc->name, __func__);
- rc = -EAGAIN;
- goto out;
- }
- ioc->scsih_cmds.status = LEAPIORAID_CMD_PENDING;
- smid = leapioraid_base_get_smid(ioc, ioc->scsih_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- ioc->scsih_cmds.status = LEAPIORAID_CMD_NOT_USED;
- rc = -EAGAIN;
- goto out;
- }
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->scsih_cmds.smid = smid;
- memset(mpi_request, 0, sizeof(struct LeapioraidRaidActionReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_RAID_ACTION;
- mpi_request->Action = 0x24;
- mpi_request->PhysDiskNum = phys_disk_num;
- dewtprintk(ioc, pr_info(
- "%s IR RAID_ACTION: turning fast path on for handle(0x%04x), phys_disk_num (0x%02x)\n",
- ioc->name, handle, phys_disk_num));
- init_completion(&ioc->scsih_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->scsih_cmds.done, 10 * HZ);
- if (!(ioc->scsih_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- leapioraid_check_cmd_timeout(ioc,
- ioc->scsih_cmds.status,
- mpi_request,
- sizeof(struct LeapioraidRaidActionReq_t)
- / 4, issue_reset);
- rc = -EFAULT;
- goto out;
- }
- if (ioc->scsih_cmds.status & LEAPIORAID_CMD_REPLY_VALID) {
- mpi_reply = ioc->scsih_cmds.reply;
- ioc_status = le16_to_cpu(mpi_reply->IOCStatus);
- if (ioc_status & LEAPIORAID_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE)
- log_info = le32_to_cpu(mpi_reply->IOCLogInfo);
- else
- log_info = 0;
- ioc_status &= LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- dewtprintk(ioc, pr_err(
- "%s IR RAID_ACTION: failed: ioc_status(0x%04x), loginfo(0x%08x)!!!\n",
- ioc->name, ioc_status,
- log_info));
- rc = -EFAULT;
- } else
- dewtprintk(ioc, pr_err(
- "%s IR RAID_ACTION: completed successfully\n",
- ioc->name));
- }
-out:
- ioc->scsih_cmds.status = LEAPIORAID_CMD_NOT_USED;
- mutex_unlock(&ioc->scsih_cmds.mutex);
- if (issue_reset)
- leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
- return rc;
-}
-
-static void
-leapioraid_scsihost_reprobe_lun(
- struct scsi_device *sdev, void *no_uld_attach)
-{
- int rc;
-
- sdev->no_uld_attach = no_uld_attach ? 1 : 0;
- sdev_printk(KERN_INFO, sdev, "%s raid component\n",
- sdev->no_uld_attach ? "hiding" : "exposing");
- rc = scsi_device_reprobe(sdev);
- pr_info("%s rc=%d\n", __func__, rc);
-}
-
-static void
-leapioraid_scsihost_sas_volume_add(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventIrCfgEle_t *element)
-{
- struct leapioraid_raid_device *raid_device;
- unsigned long flags;
- u64 wwid;
- u16 handle = le16_to_cpu(element->VolDevHandle);
- int rc;
-
- leapioraid_config_get_volume_wwid(ioc, handle, &wwid);
- if (!wwid) {
- pr_err("%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- return;
- }
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device = leapioraid_scsihost_raid_device_find_by_wwid(
- ioc, wwid);
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- if (raid_device)
- return;
- raid_device = kzalloc(sizeof(struct leapioraid_raid_device),
- GFP_KERNEL);
- if (!raid_device)
- return;
-
- raid_device->id = ioc->sas_id++;
- raid_device->channel = RAID_CHANNEL;
- raid_device->handle = handle;
- raid_device->wwid = wwid;
- leapioraid_scsihost_raid_device_add(ioc, raid_device);
- if (!ioc->wait_for_discovery_to_complete) {
- rc = scsi_add_device(ioc->shost, RAID_CHANNEL,
- raid_device->id, 0);
- if (rc)
- leapioraid_scsihost_raid_device_remove(ioc, raid_device);
- } else {
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- leapioraid_scsihost_determine_boot_device(
- ioc, raid_device, RAID_CHANNEL);
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- }
-}
-
-static void
-leapioraid_scsihost_sas_volume_delete(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle)
-{
- struct leapioraid_raid_device *raid_device;
- unsigned long flags;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct scsi_target *starget = NULL;
-
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device = leapioraid_raid_device_find_by_handle(ioc, handle);
- if (raid_device) {
- if (raid_device->starget) {
- starget = raid_device->starget;
- sas_target_priv_data = starget->hostdata;
- sas_target_priv_data->deleted = 1;
- }
- pr_info("%s removing handle(0x%04x), wwid(0x%016llx)\n",
- ioc->name, raid_device->handle,
- (unsigned long long)raid_device->wwid);
- list_del(&raid_device->list);
- kfree(raid_device);
- }
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- if (starget)
- scsi_remove_target(&starget->dev);
-}
-
-static void
-leapioraid_scsihost_sas_pd_expose(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventIrCfgEle_t *element)
-{
- struct leapioraid_sas_device *sas_device;
- struct scsi_target *starget = NULL;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- unsigned long flags;
- u16 handle = le16_to_cpu(element->PhysDiskDevHandle);
-
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_handle(ioc, handle);
- if (sas_device) {
- sas_device->volume_handle = 0;
- sas_device->volume_wwid = 0;
- clear_bit(handle, ioc->pd_handles);
- if (sas_device->starget && sas_device->starget->hostdata) {
- starget = sas_device->starget;
- sas_target_priv_data = starget->hostdata;
- sas_target_priv_data->flags &=
- ~LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT;
- sas_device->pfa_led_on = 0;
- leapioraid_sas_device_put(sas_device);
- }
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- if (!sas_device)
- return;
- if (starget)
- starget_for_each_device(starget, NULL, leapioraid_scsihost_reprobe_lun);
-}
-
-static void
-leapioraid_scsihost_sas_pd_hide(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventIrCfgEle_t *element)
-{
- struct leapioraid_sas_device *sas_device;
- struct scsi_target *starget = NULL;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- unsigned long flags;
- u16 handle = le16_to_cpu(element->PhysDiskDevHandle);
- u16 volume_handle = 0;
- u64 volume_wwid = 0;
-
- leapioraid_config_get_volume_handle(ioc, handle, &volume_handle);
- if (volume_handle)
- leapioraid_config_get_volume_wwid(ioc, volume_handle,
- &volume_wwid);
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_handle(ioc, handle);
- if (sas_device) {
- set_bit(handle, ioc->pd_handles);
- if (sas_device->starget && sas_device->starget->hostdata) {
- starget = sas_device->starget;
- sas_target_priv_data = starget->hostdata;
- sas_target_priv_data->flags |=
- LEAPIORAID_TARGET_FLAGS_RAID_COMPONENT;
- sas_device->volume_handle = volume_handle;
- sas_device->volume_wwid = volume_wwid;
- leapioraid_sas_device_put(sas_device);
- }
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- if (!sas_device)
- return;
- leapioraid_scsihost_ir_fastpath(ioc, handle, element->PhysDiskNum);
- if (starget)
- starget_for_each_device(starget, (void *)1,
- leapioraid_scsihost_reprobe_lun);
-}
-
-static void
-leapioraid_scsihost_sas_pd_delete(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventIrCfgEle_t *element)
-{
- u16 handle = le16_to_cpu(element->PhysDiskDevHandle);
-
- leapioraid_scsihost_device_remove_by_handle(ioc, handle);
-}
-
-static void
-leapioraid_scsihost_sas_pd_add(struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventIrCfgEle_t *element)
-{
- struct leapioraid_sas_device *sas_device;
- u16 handle = le16_to_cpu(element->PhysDiskDevHandle);
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasDevP0_t sas_device_pg0;
- u32 ioc_status;
- u64 sas_address;
- u16 parent_handle;
-
- set_bit(handle, ioc->pd_handles);
- sas_device = leapioraid_get_sdev_by_handle(ioc, handle);
- if (sas_device) {
- leapioraid_scsihost_ir_fastpath(ioc, handle, element->PhysDiskNum);
- leapioraid_sas_device_put(sas_device);
- return;
- }
- if ((leapioraid_config_get_sas_device_pg0
- (ioc, &mpi_reply, &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE, handle))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- parent_handle = le16_to_cpu(sas_device_pg0.ParentDevHandle);
- if (!leapioraid_scsihost_get_sas_address(ioc, parent_handle, &sas_address))
- leapioraid_transport_update_links(ioc, sas_address, handle,
- sas_device_pg0.PhyNum,
- LEAPIORAID_SAS_NEG_LINK_RATE_1_5,
- leapioraid_get_port_by_id(ioc,
- sas_device_pg0.PhysicalPort,
- 0));
- leapioraid_scsihost_ir_fastpath(ioc, handle, element->PhysDiskNum);
- leapioraid_scsihost_add_device(ioc, handle, 0, 1);
-}
-
-static void
-leapioraid_scsihost_sas_ir_config_change_event_debug(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventDataIrCfgChangeList_t *event_data)
-{
- struct LeapioraidEventIrCfgEle_t *element;
- u8 element_type;
- int i;
- char *reason_str = NULL, *element_str = NULL;
-
- element =
- (struct LeapioraidEventIrCfgEle_t *) &event_data->ConfigElement[0];
- pr_info("%s raid config change: (%s), elements(%d)\n",
- ioc->name,
- (le32_to_cpu(event_data->Flags) &
- LEAPIORAID_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG) ? "foreign" :
- "native", event_data->NumElements);
- for (i = 0; i < event_data->NumElements; i++, element++) {
- switch (element->ReasonCode) {
- case LEAPIORAID_EVENT_IR_CHANGE_RC_ADDED:
- reason_str = "add";
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_REMOVED:
- reason_str = "remove";
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_NO_CHANGE:
- reason_str = "no change";
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_HIDE:
- reason_str = "hide";
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_UNHIDE:
- reason_str = "unhide";
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_VOLUME_CREATED:
- reason_str = "volume_created";
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_VOLUME_DELETED:
- reason_str = "volume_deleted";
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_PD_CREATED:
- reason_str = "pd_created";
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_PD_DELETED:
- reason_str = "pd_deleted";
- break;
- default:
- reason_str = "unknown reason";
- break;
- }
- element_type = le16_to_cpu(element->ElementFlags) &
- LEAPIORAID_EVENT_IR_CHANGE_EFLAGS_ELEMENT_TYPE_MASK;
- switch (element_type) {
- case LEAPIORAID_EVENT_IR_CHANGE_EFLAGS_VOLUME_ELEMENT:
- element_str = "volume";
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_EFLAGS_VOLPHYSDISK_ELEMENT:
- element_str = "phys disk";
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_EFLAGS_HOTSPARE_ELEMENT:
- element_str = "hot spare";
- break;
- default:
- element_str = "unknown element";
- break;
- }
- pr_info(
- "\t(%s:%s), vol handle(0x%04x), pd handle(0x%04x), pd num(0x%02x)\n",
- element_str,
- reason_str, le16_to_cpu(element->VolDevHandle),
- le16_to_cpu(element->PhysDiskDevHandle),
- element->PhysDiskNum);
- }
-}
-
-static void
-leapioraid_scsihost_sas_ir_config_change_event(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- struct LeapioraidEventIrCfgEle_t *element;
- int i;
- u8 foreign_config;
- struct LeapioraidEventDataIrCfgChangeList_t *event_data
- = fw_event->event_data;
-
- if ((ioc->logging_level & LEAPIORAID_DEBUG_EVENT_WORK_TASK)
- && !ioc->warpdrive_msg)
- leapioraid_scsihost_sas_ir_config_change_event_debug(ioc, event_data);
- foreign_config = (le32_to_cpu(event_data->Flags) &
- LEAPIORAID_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG) ? 1 : 0;
- element =
- (struct LeapioraidEventIrCfgEle_t *) &event_data->ConfigElement[0];
- if (ioc->shost_recovery) {
- for (i = 0; i < event_data->NumElements; i++, element++) {
- if (element->ReasonCode ==
- LEAPIORAID_EVENT_IR_CHANGE_RC_HIDE)
- leapioraid_scsihost_ir_fastpath(ioc,
- le16_to_cpu(element->PhysDiskDevHandle),
- element->PhysDiskNum);
- }
- return;
- }
- for (i = 0; i < event_data->NumElements; i++, element++) {
- switch (element->ReasonCode) {
- case LEAPIORAID_EVENT_IR_CHANGE_RC_VOLUME_CREATED:
- case LEAPIORAID_EVENT_IR_CHANGE_RC_ADDED:
- if (!foreign_config)
- leapioraid_scsihost_sas_volume_add(ioc, element);
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_VOLUME_DELETED:
- case LEAPIORAID_EVENT_IR_CHANGE_RC_REMOVED:
- if (!foreign_config)
- leapioraid_scsihost_sas_volume_delete(ioc,
- le16_to_cpu
- (element->VolDevHandle));
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_PD_CREATED:
- leapioraid_scsihost_sas_pd_hide(ioc, element);
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_PD_DELETED:
- leapioraid_scsihost_sas_pd_expose(ioc, element);
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_HIDE:
- leapioraid_scsihost_sas_pd_add(ioc, element);
- break;
- case LEAPIORAID_EVENT_IR_CHANGE_RC_UNHIDE:
- leapioraid_scsihost_sas_pd_delete(ioc, element);
- break;
- }
- }
-}
-
-static void
-leapioraid_scsihost_sas_ir_volume_event(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- u64 wwid;
- unsigned long flags;
- struct leapioraid_raid_device *raid_device;
- u16 handle;
- u32 state;
- int rc;
- struct LeapioraidEventDataIrVol_t *event_data
- = fw_event->event_data;
-
- if (ioc->shost_recovery)
- return;
- if (event_data->ReasonCode != LEAPIORAID_EVENT_IR_VOLUME_RC_STATE_CHANGED)
- return;
- handle = le16_to_cpu(event_data->VolDevHandle);
- state = le32_to_cpu(event_data->NewValue);
- if (!ioc->warpdrive_msg)
- dewtprintk(ioc,
- pr_info("%s %s: handle(0x%04x), old(0x%08x), new(0x%08x)\n",
- ioc->name,
- __func__, handle,
- le32_to_cpu(event_data->PreviousValue),
- state));
- switch (state) {
- case LEAPIORAID_RAID_VOL_STATE_MISSING:
- case LEAPIORAID_RAID_VOL_STATE_FAILED:
- leapioraid_scsihost_sas_volume_delete(ioc, handle);
- break;
- case LEAPIORAID_RAID_VOL_STATE_ONLINE:
- case LEAPIORAID_RAID_VOL_STATE_DEGRADED:
- case LEAPIORAID_RAID_VOL_STATE_OPTIMAL:
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device =
- leapioraid_raid_device_find_by_handle(ioc, handle);
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- if (raid_device)
- break;
- leapioraid_config_get_volume_wwid(ioc, handle, &wwid);
- if (!wwid) {
- pr_err(
- "%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- break;
- }
- raid_device = kzalloc(sizeof(struct leapioraid_raid_device),
- GFP_KERNEL);
- if (!raid_device)
- break;
-
- raid_device->id = ioc->sas_id++;
- raid_device->channel = RAID_CHANNEL;
- raid_device->handle = handle;
- raid_device->wwid = wwid;
- leapioraid_scsihost_raid_device_add(ioc, raid_device);
- rc = scsi_add_device(ioc->shost, RAID_CHANNEL,
- raid_device->id, 0);
- if (rc)
- leapioraid_scsihost_raid_device_remove(ioc, raid_device);
- break;
- case LEAPIORAID_RAID_VOL_STATE_INITIALIZING:
- default:
- break;
- }
-}
-
-static void
-leapioraid_scsihost_sas_ir_physical_disk_event(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- u16 handle, parent_handle;
- u32 state;
- struct leapioraid_sas_device *sas_device;
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasDevP0_t sas_device_pg0;
- u32 ioc_status;
- struct LeapioraidEventDataIrPhyDisk_t *event_data
- = fw_event->event_data;
- u64 sas_address;
-
- if (ioc->shost_recovery)
- return;
- if (event_data->ReasonCode !=
- LEAPIORAID_EVENT_IR_PHYSDISK_RC_STATE_CHANGED)
- return;
- handle = le16_to_cpu(event_data->PhysDiskDevHandle);
- state = le32_to_cpu(event_data->NewValue);
- if (!ioc->warpdrive_msg)
- dewtprintk(ioc,
- pr_info("%s %s: handle(0x%04x), old(0x%08x), new(0x%08x)\n",
- ioc->name,
- __func__, handle,
- le32_to_cpu(event_data->PreviousValue),
- state));
- switch (state) {
- case LEAPIORAID_RAID_PD_STATE_ONLINE:
- case LEAPIORAID_RAID_PD_STATE_DEGRADED:
- case LEAPIORAID_RAID_PD_STATE_REBUILDING:
- case LEAPIORAID_RAID_PD_STATE_OPTIMAL:
- case LEAPIORAID_RAID_PD_STATE_HOT_SPARE:
- set_bit(handle, ioc->pd_handles);
- sas_device = leapioraid_get_sdev_by_handle(ioc, handle);
- if (sas_device) {
- leapioraid_sas_device_put(sas_device);
- return;
- }
- if ((leapioraid_config_get_sas_device_pg0(
- ioc, &mpi_reply,
- &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE,
- handle))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- parent_handle = le16_to_cpu(sas_device_pg0.ParentDevHandle);
- if (!leapioraid_scsihost_get_sas_address
- (ioc, parent_handle, &sas_address))
- leapioraid_transport_update_links(ioc, sas_address,
- handle,
- sas_device_pg0.PhyNum,
- LEAPIORAID_SAS_NEG_LINK_RATE_1_5,
- leapioraid_get_port_by_id
- (ioc,
- sas_device_pg0.PhysicalPort, 0));
- leapioraid_scsihost_add_device(ioc, handle, 0, 1);
- break;
- case LEAPIORAID_RAID_PD_STATE_OFFLINE:
- case LEAPIORAID_RAID_PD_STATE_NOT_CONFIGURED:
- case LEAPIORAID_RAID_PD_STATE_NOT_COMPATIBLE:
- default:
- break;
- }
-}
-
-static void
-leapioraid_scsihost_sas_ir_operation_status_event_debug(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidEventDataIrOpStatus_t *event_data)
-{
- char *reason_str = NULL;
-
- switch (event_data->RAIDOperation) {
- case LEAPIORAID_EVENT_IR_RAIDOP_RESYNC:
- reason_str = "resync";
- break;
- case LEAPIORAID_EVENT_IR_RAIDOP_ONLINE_CAP_EXPANSION:
- reason_str = "online capacity expansion";
- break;
- case LEAPIORAID_EVENT_IR_RAIDOP_CONSISTENCY_CHECK:
- reason_str = "consistency check";
- break;
- case LEAPIORAID_EVENT_IR_RAIDOP_BACKGROUND_INIT:
- reason_str = "background init";
- break;
- case LEAPIORAID_EVENT_IR_RAIDOP_MAKE_DATA_CONSISTENT:
- reason_str = "make data consistent";
- break;
- }
- if (!reason_str)
- return;
- pr_info(
- "%s raid operational status: (%s)\thandle(0x%04x), percent complete(%d)\n",
- ioc->name, reason_str,
- le16_to_cpu(event_data->VolDevHandle),
- event_data->PercentComplete);
-}
-
-static void
-leapioraid_scsihost_sas_ir_operation_status_event(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- struct LeapioraidEventDataIrOpStatus_t *event_data
- = fw_event->event_data;
- static struct leapioraid_raid_device *raid_device;
- unsigned long flags;
- u16 handle;
-
- if ((ioc->logging_level & LEAPIORAID_DEBUG_EVENT_WORK_TASK)
- && !ioc->warpdrive_msg)
- leapioraid_scsihost_sas_ir_operation_status_event_debug(
- ioc, event_data);
- if (event_data->RAIDOperation == LEAPIORAID_EVENT_IR_RAIDOP_RESYNC) {
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- handle = le16_to_cpu(event_data->VolDevHandle);
- raid_device =
- leapioraid_raid_device_find_by_handle(ioc, handle);
- if (raid_device)
- raid_device->percent_complete =
- event_data->PercentComplete;
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- }
-}
-
-static void
-leapioraid_scsihost_prep_device_scan(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct scsi_device *sdev;
-
- shost_for_each_device(sdev, ioc->shost) {
- sas_device_priv_data = sdev->hostdata;
- if (sas_device_priv_data && sas_device_priv_data->sas_target)
- sas_device_priv_data->sas_target->deleted = 1;
- }
-}
-
-static void
-leapioraid_scsihost_update_device_qdepth(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LEAPIORAID_DEVICE *sas_device_priv_data;
- struct leapioraid_sas_device *sas_device;
- struct scsi_device *sdev;
- u16 qdepth;
-
- pr_info("%s Update Devices with FW Reported QD\n",
- ioc->name);
- shost_for_each_device(sdev, ioc->shost) {
- sas_device_priv_data = sdev->hostdata;
- if (sas_device_priv_data && sas_device_priv_data->sas_target) {
- sas_device = sas_device_priv_data->sas_target->sas_dev;
- if (sas_device &&
- sas_device->device_info & LEAPIORAID_SAS_DEVICE_INFO_SSP_TARGET)
- qdepth =
- (sas_device->port_type >
- 1) ? ioc->max_wideport_qd : ioc->max_narrowport_qd;
- else if (sas_device
- && sas_device->device_info &
- LEAPIORAID_SAS_DEVICE_INFO_SATA_DEVICE)
- qdepth = ioc->max_sata_qd;
- else
- continue;
- leapioraid__scsihost_change_queue_depth(sdev, qdepth);
- }
- }
-}
-
-static void
-leapioraid_scsihost_mark_responding_sas_device(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidSasDevP0_t *sas_device_pg0)
-{
- struct LEAPIORAID_TARGET *sas_target_priv_data = NULL;
- struct scsi_target *starget;
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_enclosure_node *enclosure_dev = NULL;
- unsigned long flags;
- struct leapioraid_hba_port *port;
-
- port = leapioraid_get_port_by_id(ioc, sas_device_pg0->PhysicalPort, 0);
- if (sas_device_pg0->EnclosureHandle) {
- enclosure_dev =
- leapioraid_scsihost_enclosure_find_by_handle(ioc,
- le16_to_cpu
- (sas_device_pg0->EnclosureHandle));
- if (enclosure_dev == NULL)
- pr_info(
- "%s Enclosure handle(0x%04x)doesn't match with enclosure device!\n",
- ioc->name, sas_device_pg0->EnclosureHandle);
- }
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- list_for_each_entry(sas_device, &ioc->sas_device_list, list) {
- if ((sas_device->sas_address ==
- le64_to_cpu(sas_device_pg0->SASAddress))
- && (sas_device->slot == le16_to_cpu(sas_device_pg0->Slot))
- && (sas_device->port == port)) {
- sas_device->responding = 1;
- starget = sas_device->starget;
- if (starget && starget->hostdata) {
- sas_target_priv_data = starget->hostdata;
- sas_target_priv_data->tm_busy = 0;
- sas_target_priv_data->deleted = 0;
- } else
- sas_target_priv_data = NULL;
- if (starget) {
- starget_printk(KERN_INFO, starget,
- "handle(0x%04x), sas_address(0x%016llx), port: %d\n",
- sas_device->handle,
- (unsigned long long)sas_device->sas_address,
- sas_device->port->port_id);
- if (sas_device->enclosure_handle != 0)
- starget_printk(KERN_INFO, starget,
- "enclosure logical id(0x%016llx), slot(%d)\n",
- (unsigned long long)
- sas_device->enclosure_logical_id,
- sas_device->slot);
- }
- if (le16_to_cpu(sas_device_pg0->Flags) &
- LEAPIORAID_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID) {
- sas_device->enclosure_level =
- sas_device_pg0->EnclosureLevel;
- memcpy(sas_device->connector_name,
- sas_device_pg0->ConnectorName, 4);
- sas_device->connector_name[4] = '\0';
- } else {
- sas_device->enclosure_level = 0;
- sas_device->connector_name[0] = '\0';
- }
- sas_device->enclosure_handle =
- le16_to_cpu(sas_device_pg0->EnclosureHandle);
- sas_device->is_chassis_slot_valid = 0;
- if (enclosure_dev) {
- sas_device->enclosure_logical_id =
- le64_to_cpu(enclosure_dev->pg0.EnclosureLogicalID);
- if (le16_to_cpu(enclosure_dev->pg0.Flags) &
- LEAPIORAID_SAS_ENCLS0_FLAGS_CHASSIS_SLOT_VALID) {
- sas_device->is_chassis_slot_valid = 1;
- sas_device->chassis_slot =
- enclosure_dev->pg0.ChassisSlot;
- }
- }
- if (sas_device->handle ==
- le16_to_cpu(sas_device_pg0->DevHandle))
- goto out;
- pr_info("\thandle changed from(0x%04x)!!!\n",
- sas_device->handle);
- sas_device->handle =
- le16_to_cpu(sas_device_pg0->DevHandle);
- if (sas_target_priv_data)
- sas_target_priv_data->handle =
- le16_to_cpu(sas_device_pg0->DevHandle);
- goto out;
- }
- }
-out:
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
-}
-
-static void
-leapioraid_scsihost_create_enclosure_list_after_reset(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_enclosure_node *enclosure_dev;
- struct LeapioraidCfgRep_t mpi_reply;
- u16 enclosure_handle;
- int rc;
-
- leapioraid_free_enclosure_list(ioc);
- enclosure_handle = 0xFFFF;
- do {
- enclosure_dev =
- kzalloc(sizeof(struct leapioraid_enclosure_node), GFP_KERNEL);
- if (!enclosure_dev) {
- pr_err("%s failure at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- return;
- }
- rc = leapioraid_config_get_enclosure_pg0(ioc, &mpi_reply,
- &enclosure_dev->pg0,
- LEAPIORAID_SAS_ENCLOS_PGAD_FORM_GET_NEXT_HANDLE,
- enclosure_handle);
- if (rc || (le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK)) {
- kfree(enclosure_dev);
- return;
- }
- list_add_tail(&enclosure_dev->list, &ioc->enclosure_list);
- enclosure_handle =
- le16_to_cpu(enclosure_dev->pg0.EnclosureHandle);
- } while (1);
-}
-
-static void
-leapioraid_scsihost_search_responding_sas_devices(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidSasDevP0_t sas_device_pg0;
- struct LeapioraidCfgRep_t mpi_reply;
- u16 ioc_status;
- u16 handle;
- u32 device_info;
-
- pr_info("%s search for end-devices: start\n",
- ioc->name);
- if (list_empty(&ioc->sas_device_list))
- goto out;
- handle = 0xFFFF;
- while (!(leapioraid_config_get_sas_device_pg0(ioc, &mpi_reply,
- &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE,
- handle))) {
- ioc_status =
- le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_info(
- "%s \tbreak from %s: ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name, __func__, ioc_status,
- le32_to_cpu(mpi_reply.IOCLogInfo));
- break;
- }
- handle = le16_to_cpu(sas_device_pg0.DevHandle);
- device_info = le32_to_cpu(sas_device_pg0.DeviceInfo);
- if (!(leapioraid_scsihost_is_sas_end_device(device_info)))
- continue;
- leapioraid_scsihost_mark_responding_sas_device(
- ioc, &sas_device_pg0);
- }
-out:
- pr_info("%s search for end-devices: complete\n",
- ioc->name);
-}
-
-static void
-leapioraid_scsihost_mark_responding_raid_device(
- struct LEAPIORAID_ADAPTER *ioc, u64 wwid, u16 handle)
-{
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct scsi_target *starget;
- struct leapioraid_raid_device *raid_device;
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
- if (raid_device->wwid == wwid && raid_device->starget) {
- starget = raid_device->starget;
- if (starget && starget->hostdata) {
- sas_target_priv_data = starget->hostdata;
- sas_target_priv_data->deleted = 0;
- } else
- sas_target_priv_data = NULL;
- raid_device->responding = 1;
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- starget_printk(KERN_INFO, raid_device->starget,
- "handle(0x%04x), wwid(0x%016llx)\n",
- handle,
- (unsigned long long)raid_device->wwid);
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- if (raid_device->handle == handle) {
- spin_unlock_irqrestore(&ioc->raid_device_lock,
- flags);
- return;
- }
- pr_info("\thandle changed from(0x%04x)!!!\n",
- raid_device->handle);
- raid_device->handle = handle;
- if (sas_target_priv_data)
- sas_target_priv_data->handle = handle;
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- return;
- }
- }
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
-}
-
-static void
-leapioraid_scsihost_search_responding_raid_devices(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidRaidVolP1_t volume_pg1;
- struct LeapioraidRaidVolP0_t volume_pg0;
- struct LeapioraidRaidPDP0_t pd_pg0;
- struct LeapioraidCfgRep_t mpi_reply;
- u16 ioc_status;
- u16 handle;
- u8 phys_disk_num;
-
- if (!ioc->ir_firmware)
- return;
- pr_info("%s search for raid volumes: start\n",
- ioc->name);
- if (list_empty(&ioc->raid_device_list))
- goto out;
- handle = 0xFFFF;
- while (!(leapioraid_config_get_raid_volume_pg1(ioc, &mpi_reply,
- &volume_pg1,
- LEAPIORAID_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE,
- handle))) {
- ioc_status =
- le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_info("%s \tbreak from %s: ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name, __func__, ioc_status,
- le32_to_cpu(mpi_reply.IOCLogInfo));
- break;
- }
- handle = le16_to_cpu(volume_pg1.DevHandle);
- if (leapioraid_config_get_raid_volume_pg0(ioc, &mpi_reply,
- &volume_pg0,
- LEAPIORAID_RAID_VOLUME_PGAD_FORM_HANDLE,
- handle,
- sizeof
- (struct LeapioraidRaidVolP0_t)))
- continue;
- if (volume_pg0.VolumeState == LEAPIORAID_RAID_VOL_STATE_OPTIMAL ||
- volume_pg0.VolumeState == LEAPIORAID_RAID_VOL_STATE_ONLINE ||
- volume_pg0.VolumeState == LEAPIORAID_RAID_VOL_STATE_DEGRADED)
- leapioraid_scsihost_mark_responding_raid_device(ioc,
- le64_to_cpu
- (volume_pg1.WWID),
- handle);
- }
- phys_disk_num = 0xFF;
- memset(ioc->pd_handles, 0, ioc->pd_handles_sz);
- while (!(leapioraid_config_get_phys_disk_pg0(ioc, &mpi_reply,
- &pd_pg0,
- LEAPIORAID_PHYSDISK_PGAD_FORM_GET_NEXT_PHYSDISKNUM,
- phys_disk_num))) {
- ioc_status =
- le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_info("%s \tbreak from %s: ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name, __func__, ioc_status,
- le32_to_cpu(mpi_reply.IOCLogInfo));
- break;
- }
- phys_disk_num = pd_pg0.PhysDiskNum;
- handle = le16_to_cpu(pd_pg0.DevHandle);
- set_bit(handle, ioc->pd_handles);
- }
-out:
- pr_info("%s search for responding raid volumes: complete\n",
- ioc->name);
-}
-
-static void
-leapioraid_scsihost_mark_responding_expander(
- struct LEAPIORAID_ADAPTER *ioc,
- struct LeapioraidExpanderP0_t *expander_pg0)
-{
- struct leapioraid_raid_sas_node *sas_expander;
- unsigned long flags;
- int i;
- u8 port_id = expander_pg0->PhysicalPort;
- struct leapioraid_hba_port *port = leapioraid_get_port_by_id(
- ioc, port_id, 0);
- struct leapioraid_enclosure_node *enclosure_dev = NULL;
- u16 handle = le16_to_cpu(expander_pg0->DevHandle);
- u16 enclosure_handle = le16_to_cpu(expander_pg0->EnclosureHandle);
- u64 sas_address = le64_to_cpu(expander_pg0->SASAddress);
-
- if (enclosure_handle)
- enclosure_dev =
- leapioraid_scsihost_enclosure_find_by_handle(ioc,
- enclosure_handle);
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- list_for_each_entry(sas_expander, &ioc->sas_expander_list, list) {
- if (sas_expander->sas_address != sas_address ||
- (sas_expander->port != port))
- continue;
- sas_expander->responding = 1;
- if (enclosure_dev) {
- sas_expander->enclosure_logical_id =
- le64_to_cpu(enclosure_dev->pg0.EnclosureLogicalID);
- sas_expander->enclosure_handle =
- le16_to_cpu(expander_pg0->EnclosureHandle);
- }
- if (sas_expander->handle == handle)
- goto out;
- pr_info(
- "\texpander(0x%016llx): handle changed from(0x%04x) to (0x%04x)!!!\n",
- (unsigned long long)sas_expander->sas_address,
- sas_expander->handle, handle);
- sas_expander->handle = handle;
- for (i = 0; i < sas_expander->num_phys; i++)
- sas_expander->phy[i].handle = handle;
- goto out;
- }
-out:
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
-}
-
-static void
-leapioraid_scsihost_search_responding_expanders(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidExpanderP0_t expander_pg0;
- struct LeapioraidCfgRep_t mpi_reply;
- u16 ioc_status;
- u64 sas_address;
- u16 handle;
- u8 port;
-
- pr_info("%s search for expanders: start\n",
- ioc->name);
- if (list_empty(&ioc->sas_expander_list))
- goto out;
- handle = 0xFFFF;
- while (!
- (leapioraid_config_get_expander_pg0
- (ioc, &mpi_reply, &expander_pg0,
- LEAPIORAID_SAS_EXPAND_PGAD_FORM_GET_NEXT_HNDL, handle))) {
- ioc_status =
- le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_info(
- "%s \tbreak from %s: ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name, __func__, ioc_status,
- le32_to_cpu(mpi_reply.IOCLogInfo));
- break;
- }
- handle = le16_to_cpu(expander_pg0.DevHandle);
- sas_address = le64_to_cpu(expander_pg0.SASAddress);
- port = expander_pg0.PhysicalPort;
- pr_info(
- "\texpander present: handle(0x%04x), sas_addr(0x%016llx), port:%d\n",
- handle,
- (unsigned long long)sas_address,
- ((ioc->multipath_on_hba) ?
- (port) : (LEAPIORAID_MULTIPATH_DISABLED_PORT_ID)));
- leapioraid_scsihost_mark_responding_expander(
- ioc, &expander_pg0);
- }
-out:
- pr_info("%s search for expanders: complete\n",
- ioc->name);
-}
-
-static void
-leapioraid_scsihost_remove_unresponding_devices(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_sas_device *sas_device, *sas_device_next;
- struct leapioraid_raid_sas_node *sas_expander, *sas_expander_next;
- struct leapioraid_raid_device *raid_device, *raid_device_next;
- struct list_head tmp_list;
- unsigned long flags;
- LIST_HEAD(head);
-
- pr_info("%s removing unresponding devices: start\n",
- ioc->name);
- pr_err("%s removing unresponding devices: sas end-devices\n",
- ioc->name);
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- list_for_each_entry_safe(sas_device, sas_device_next,
- &ioc->sas_device_init_list, list) {
- list_del_init(&sas_device->list);
- leapioraid_sas_device_put(sas_device);
- }
- list_for_each_entry_safe(sas_device, sas_device_next,
- &ioc->sas_device_list, list) {
- if (!sas_device->responding)
- list_move_tail(&sas_device->list, &head);
- else
- sas_device->responding = 0;
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- list_for_each_entry_safe(sas_device, sas_device_next, &head, list) {
- leapioraid_scsihost_remove_device(ioc, sas_device);
- list_del_init(&sas_device->list);
- leapioraid_sas_device_put(sas_device);
- }
- if (ioc->ir_firmware) {
- pr_info("%s removing unresponding devices: volumes\n",
- ioc->name);
- list_for_each_entry_safe(raid_device, raid_device_next,
- &ioc->raid_device_list, list) {
- if (!raid_device->responding)
- leapioraid_scsihost_sas_volume_delete(ioc,
- raid_device->handle);
- else
- raid_device->responding = 0;
- }
- }
- pr_err("%s removing unresponding devices: expanders\n",
- ioc->name);
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- INIT_LIST_HEAD(&tmp_list);
- list_for_each_entry_safe(sas_expander, sas_expander_next,
- &ioc->sas_expander_list, list) {
- if (!sas_expander->responding)
- list_move_tail(&sas_expander->list, &tmp_list);
- else
- sas_expander->responding = 0;
- }
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- list_for_each_entry_safe(
- sas_expander, sas_expander_next, &tmp_list, list) {
- leapioraid_scsihost_expander_node_remove(ioc, sas_expander);
- }
- pr_err("%s removing unresponding devices: complete\n", ioc->name);
- leapioraid_scsihost_ublock_io_all_device(ioc, 0);
-}
-
-static void
-leapioraid_scsihost_refresh_expander_links(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_sas_node *sas_expander, u16 handle)
-{
- struct LeapioraidExpanderP1_t expander_pg1;
- struct LeapioraidCfgRep_t mpi_reply;
- int i;
-
- for (i = 0; i < sas_expander->num_phys; i++) {
- if ((leapioraid_config_get_expander_pg1(ioc, &mpi_reply,
- &expander_pg1, i,
- handle))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return;
- }
- leapioraid_transport_update_links(ioc,
- sas_expander->sas_address,
- le16_to_cpu(expander_pg1.AttachedDevHandle),
- i,
- expander_pg1.NegotiatedLinkRate >> 4,
- sas_expander->port);
- }
-}
-
-static void
-leapioraid_scsihost_scan_for_devices_after_reset(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidExpanderP0_t expander_pg0;
- struct LeapioraidSasDevP0_t sas_device_pg0;
- struct LeapioraidRaidVolP1_t *volume_pg1;
- struct LeapioraidRaidVolP0_t *volume_pg0;
- struct LeapioraidRaidPDP0_t pd_pg0;
- struct LeapioraidEventIrCfgEle_t element;
- struct LeapioraidCfgRep_t mpi_reply;
- u8 phys_disk_num, port_id;
- u16 ioc_status;
- u16 handle, parent_handle;
- u64 sas_address;
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_raid_sas_node *expander_device;
- static struct leapioraid_raid_device *raid_device;
- u8 retry_count;
- unsigned long flags;
-
- volume_pg0 = kzalloc(sizeof(*volume_pg0), GFP_KERNEL);
- if (!volume_pg0)
- return;
-
- volume_pg1 = kzalloc(sizeof(*volume_pg1), GFP_KERNEL);
- if (!volume_pg1) {
- kfree(volume_pg0);
- return;
- }
- pr_info("%s scan devices: start\n", ioc->name);
- leapioraid_scsihost_sas_host_refresh(ioc);
- pr_info("%s \tscan devices: expanders start\n",
- ioc->name);
- handle = 0xFFFF;
- while (!
- (leapioraid_config_get_expander_pg0
- (ioc, &mpi_reply, &expander_pg0,
- LEAPIORAID_SAS_EXPAND_PGAD_FORM_GET_NEXT_HNDL, handle))) {
- ioc_status =
- le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err(
- "%s \tbreak from expander scan: ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name, ioc_status,
- le32_to_cpu(mpi_reply.IOCLogInfo));
- break;
- }
- handle = le16_to_cpu(expander_pg0.DevHandle);
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- port_id = expander_pg0.PhysicalPort;
- expander_device =
- leapioraid_scsihost_expander_find_by_sas_address(
- ioc,
- le64_to_cpu
- (expander_pg0.SASAddress),
- leapioraid_get_port_by_id
- (ioc,
- port_id,
- 0));
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- if (expander_device)
- leapioraid_scsihost_refresh_expander_links(
- ioc, expander_device, handle);
- else {
- pr_err(
- "%s \tBEFORE adding expander:\n\t\t"
- "handle (0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, handle, (unsigned long long)
- le64_to_cpu(expander_pg0.SASAddress));
- leapioraid_scsihost_expander_add(ioc, handle);
- pr_info(
- "%s \tAFTER adding expander:\n\t\t"
- "handle (0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, handle, (unsigned long long)
- le64_to_cpu(expander_pg0.SASAddress));
- }
- }
- pr_info("%s \tscan devices: expanders complete\n",
- ioc->name);
- if (!ioc->ir_firmware)
- goto skip_to_sas;
- pr_info("%s \tscan devices: phys disk start\n",
- ioc->name);
- phys_disk_num = 0xFF;
- while (!(leapioraid_config_get_phys_disk_pg0(ioc, &mpi_reply,
- &pd_pg0,
- LEAPIORAID_PHYSDISK_PGAD_FORM_GET_NEXT_PHYSDISKNUM,
- phys_disk_num))) {
- ioc_status =
- le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err(
- "%s \tbreak from phys disk scan:\n\t\t"
- "ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name,
- ioc_status,
- le32_to_cpu(mpi_reply.IOCLogInfo));
- break;
- }
- phys_disk_num = pd_pg0.PhysDiskNum;
- handle = le16_to_cpu(pd_pg0.DevHandle);
- sas_device = leapioraid_get_sdev_by_handle(ioc, handle);
- if (sas_device) {
- leapioraid_sas_device_put(sas_device);
- continue;
- }
- if (leapioraid_config_get_sas_device_pg0(ioc, &mpi_reply,
- &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE,
- handle) != 0)
- continue;
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err(
- "%s \tbreak from phys disk scan ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name, ioc_status,
- le32_to_cpu(mpi_reply.IOCLogInfo));
- break;
- }
- parent_handle = le16_to_cpu(sas_device_pg0.ParentDevHandle);
- if (!leapioraid_scsihost_get_sas_address(ioc, parent_handle,
- &sas_address)) {
- pr_err(
- "%s \tBEFORE adding phys disk:\n\t\t"
- "handle (0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, handle, (unsigned long long)
- le64_to_cpu(sas_device_pg0.SASAddress));
- port_id = sas_device_pg0.PhysicalPort;
- leapioraid_transport_update_links(ioc, sas_address,
- handle,
- sas_device_pg0.PhyNum,
- LEAPIORAID_SAS_NEG_LINK_RATE_1_5,
- leapioraid_get_port_by_id
- (ioc, port_id, 0));
- set_bit(handle, ioc->pd_handles);
- retry_count = 0;
- while (leapioraid_scsihost_add_device
- (ioc, handle, retry_count++, 1)) {
- ssleep(1);
- }
- pr_err(
- "%s \tAFTER adding phys disk:\n\t\t"
- "handle (0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, handle, (unsigned long long)
- le64_to_cpu(sas_device_pg0.SASAddress));
- }
- }
- pr_info("%s \tscan devices: phys disk complete\n",
- ioc->name);
- pr_info("%s \tscan devices: volumes start\n",
- ioc->name);
- handle = 0xFFFF;
- while (!(leapioraid_config_get_raid_volume_pg1(ioc, &mpi_reply,
- volume_pg1,
- LEAPIORAID_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE,
- handle))) {
- ioc_status =
- le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err(
- "%s \tbreak from volume scan: ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name, ioc_status,
- le32_to_cpu(mpi_reply.IOCLogInfo));
- break;
- }
- handle = le16_to_cpu(volume_pg1->DevHandle);
- spin_lock_irqsave(&ioc->raid_device_lock, flags);
- raid_device = leapioraid_scsihost_raid_device_find_by_wwid(
- ioc, le64_to_cpu(volume_pg1->WWID));
- spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
- if (raid_device)
- continue;
- if (leapioraid_config_get_raid_volume_pg0(ioc, &mpi_reply,
- volume_pg0,
- LEAPIORAID_RAID_VOLUME_PGAD_FORM_HANDLE,
- handle,
- sizeof
- (struct LeapioraidRaidVolP0_t)))
- continue;
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
- LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err(
- "%s \tbreak from volume scan: ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name, ioc_status,
- le32_to_cpu(mpi_reply.IOCLogInfo));
- break;
- }
- if (volume_pg0->VolumeState == LEAPIORAID_RAID_VOL_STATE_OPTIMAL ||
- volume_pg0->VolumeState == LEAPIORAID_RAID_VOL_STATE_ONLINE ||
- volume_pg0->VolumeState ==
- LEAPIORAID_RAID_VOL_STATE_DEGRADED) {
- memset(&element, 0,
- sizeof(struct LeapioraidEventIrCfgEle_t));
- element.ReasonCode = LEAPIORAID_EVENT_IR_CHANGE_RC_ADDED;
- element.VolDevHandle = volume_pg1->DevHandle;
- pr_info("%s \tBEFORE adding volume: handle (0x%04x)\n",
- ioc->name, volume_pg1->DevHandle);
- leapioraid_scsihost_sas_volume_add(ioc, &element);
- pr_info("%s \tAFTER adding volume: handle (0x%04x)\n",
- ioc->name, volume_pg1->DevHandle);
- }
- }
- pr_info("%s \tscan devices: volumes complete\n",
- ioc->name);
-skip_to_sas:
- pr_info("%s \tscan devices: sas end devices start\n",
- ioc->name);
- handle = 0xFFFF;
- while (!(leapioraid_config_get_sas_device_pg0(ioc, &mpi_reply,
- &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE,
- handle))) {
- ioc_status =
- le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err(
- "%s \tbreak from sas end device scan: ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name, ioc_status,
- le32_to_cpu(mpi_reply.IOCLogInfo));
- break;
- }
- handle = le16_to_cpu(sas_device_pg0.DevHandle);
- if (!
- (leapioraid_scsihost_is_sas_end_device
- (le32_to_cpu(sas_device_pg0.DeviceInfo))))
- continue;
- port_id = sas_device_pg0.PhysicalPort;
- sas_device = leapioraid_get_sdev_by_addr(ioc,
- le64_to_cpu
- (sas_device_pg0.SASAddress),
- leapioraid_get_port_by_id
- (ioc, port_id, 0));
- if (sas_device) {
- leapioraid_sas_device_put(sas_device);
- continue;
- }
- parent_handle = le16_to_cpu(sas_device_pg0.ParentDevHandle);
- if (!leapioraid_scsihost_get_sas_address
- (ioc, parent_handle, &sas_address)) {
- pr_err(
- "%s \tBEFORE adding sas end device:\n\t\t"
- "handle (0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, handle, (unsigned long long)
- le64_to_cpu(sas_device_pg0.SASAddress));
- leapioraid_transport_update_links(ioc, sas_address,
- handle,
- sas_device_pg0.PhyNum,
- LEAPIORAID_SAS_NEG_LINK_RATE_1_5,
- leapioraid_get_port_by_id
- (ioc, port_id, 0));
- retry_count = 0;
- while (leapioraid_scsihost_add_device
- (ioc, handle, retry_count++, 0)) {
- ssleep(1);
- }
- pr_err(
- "%s \tAFTER adding sas end device:\n\t\t"
- "handle (0x%04x), sas_addr(0x%016llx)\n",
- ioc->name, handle, (unsigned long long)
- le64_to_cpu(sas_device_pg0.SASAddress));
- }
- }
- pr_err("%s \tscan devices: sas end devices complete\n", ioc->name);
- kfree(volume_pg0);
- kfree(volume_pg1);
- pr_info("%s scan devices: complete\n", ioc->name);
-}
-
-void
-leapioraid_scsihost_clear_outstanding_scsi_tm_commands(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_internal_qcmd *scsih_qcmd, *scsih_qcmd_next;
- unsigned long flags;
-
- if (ioc->scsih_cmds.status & LEAPIORAID_CMD_PENDING) {
- ioc->scsih_cmds.status |= LEAPIORAID_CMD_RESET;
- leapioraid_base_free_smid(ioc, ioc->scsih_cmds.smid);
- complete(&ioc->scsih_cmds.done);
- }
- if (ioc->tm_cmds.status & LEAPIORAID_CMD_PENDING) {
- ioc->tm_cmds.status |= LEAPIORAID_CMD_RESET;
- leapioraid_base_free_smid(ioc, ioc->tm_cmds.smid);
- complete(&ioc->tm_cmds.done);
- }
- spin_lock_irqsave(&ioc->scsih_q_internal_lock, flags);
- list_for_each_entry_safe(scsih_qcmd, scsih_qcmd_next,
- &ioc->scsih_q_intenal_cmds, list) {
- scsih_qcmd->status |= LEAPIORAID_CMD_RESET;
- leapioraid_base_free_smid(ioc, scsih_qcmd->smid);
- }
- spin_unlock_irqrestore(&ioc->scsih_q_internal_lock, flags);
- memset(ioc->pend_os_device_add, 0, ioc->pend_os_device_add_sz);
- memset(ioc->device_remove_in_progress, 0,
- ioc->device_remove_in_progress_sz);
- memset(ioc->tm_tr_retry, 0, ioc->tm_tr_retry_sz);
- leapioraid_scsihost_fw_event_cleanup_queue(ioc);
- leapioraid_scsihost_flush_running_cmds(ioc);
-}
-
-void
-leapioraid_scsihost_reset_handler(struct LEAPIORAID_ADAPTER *ioc,
- int reset_phase)
-{
- switch (reset_phase) {
- case LEAPIORAID_IOC_PRE_RESET_PHASE:
- dtmprintk(ioc, pr_info(
- "%s %s: LEAPIORAID_IOC_PRE_RESET_PHASE\n",
- ioc->name, __func__));
- break;
- case LEAPIORAID_IOC_AFTER_RESET_PHASE:
- dtmprintk(ioc, pr_info(
- "%s %s: LEAPIORAID_IOC_AFTER_RESET_PHASE\n",
- ioc->name, __func__));
- leapioraid_scsihost_clear_outstanding_scsi_tm_commands(ioc);
- break;
- case LEAPIORAID_IOC_DONE_RESET_PHASE:
- dtmprintk(ioc, pr_info(
- "%s %s: LEAPIORAID_IOC_DONE_RESET_PHASE\n",
- ioc->name, __func__));
- if (!(disable_discovery > 0 && !ioc->sas_hba.num_phys)) {
- if (ioc->multipath_on_hba) {
- leapioraid_scsihost_sas_port_refresh(ioc);
- leapioraid_scsihost_update_vphys_after_reset(ioc);
- }
- leapioraid_scsihost_prep_device_scan(ioc);
- leapioraid_scsihost_create_enclosure_list_after_reset(ioc);
- leapioraid_scsihost_search_responding_sas_devices(ioc);
- leapioraid_scsihost_search_responding_raid_devices(ioc);
- leapioraid_scsihost_search_responding_expanders(ioc);
- leapioraid_scsihost_error_recovery_delete_devices(ioc);
- }
- break;
- }
-}
-
-static void
-leapioraid_fw_work(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_fw_event_work *fw_event)
-{
- ioc->current_event = fw_event;
- leapioraid_scsihost_fw_event_del_from_list(ioc, fw_event);
- if (ioc->remove_host || ioc->pci_error_recovery) {
- leapioraid_fw_event_work_put(fw_event);
- ioc->current_event = NULL;
- return;
- }
- switch (fw_event->event) {
- case LEAPIORAID_REMOVE_UNRESPONDING_DEVICES:
- while (scsi_host_in_recovery(ioc->shost) || ioc->shost_recovery) {
- if (ioc->remove_host || ioc->fw_events_cleanup)
- goto out;
- ssleep(1);
- }
- leapioraid_scsihost_remove_unresponding_devices(ioc);
- leapioraid_scsihost_del_dirty_vphy(ioc);
- leapioraid_scsihost_del_dirty_port_entries(ioc);
- leapioraid_scsihost_update_device_qdepth(ioc);
- leapioraid_scsihost_scan_for_devices_after_reset(ioc);
- if (ioc->is_driver_loading)
- leapioraid_scsihost_complete_devices_scanning(ioc);
- break;
- case LEAPIORAID_PORT_ENABLE_COMPLETE:
- ioc->start_scan = 0;
- dewtprintk(ioc, pr_info(
- "%s port enable: complete from worker thread\n",
- ioc->name));
- break;
- case LEAPIORAID_TURN_ON_PFA_LED:
- leapioraid_scsihost_turn_on_pfa_led(ioc, fw_event->device_handle);
- break;
- case LEAPIORAID_EVENT_SAS_TOPOLOGY_CHANGE_LIST:
- if (leapioraid_scsihost_sas_topology_change_event(ioc, fw_event)) {
- leapioraid_scsihost_fw_event_requeue(ioc, fw_event, 1000);
- ioc->current_event = NULL;
- return;
- }
- break;
- case LEAPIORAID_EVENT_SAS_DEVICE_STATUS_CHANGE:
- if (ioc->logging_level & LEAPIORAID_DEBUG_EVENT_WORK_TASK)
- leapioraid_scsihost_sas_device_status_change_event_debug(
- ioc,
- (struct LeapioraidEventDataSasDeviceStatusChange_t *)
- fw_event->event_data);
- break;
- case LEAPIORAID_EVENT_SAS_DISCOVERY:
- leapioraid_scsihost_sas_discovery_event(
- ioc, fw_event);
- break;
- case LEAPIORAID_EVENT_SAS_DEVICE_DISCOVERY_ERROR:
- leapioraid_scsihost_sas_device_discovery_error_event(
- ioc, fw_event);
- break;
- case LEAPIORAID_EVENT_SAS_BROADCAST_PRIMITIVE:
- leapioraid_scsihost_sas_broadcast_primitive_event(
- ioc, fw_event);
- break;
- case LEAPIORAID_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE:
- leapioraid_scsihost_sas_enclosure_dev_status_change_event(
- ioc, fw_event);
- break;
- case LEAPIORAID_EVENT_IR_CONFIGURATION_CHANGE_LIST:
- leapioraid_scsihost_sas_ir_config_change_event(
- ioc, fw_event);
- break;
- case LEAPIORAID_EVENT_IR_VOLUME:
- leapioraid_scsihost_sas_ir_volume_event(
- ioc, fw_event);
- break;
- case LEAPIORAID_EVENT_IR_PHYSICAL_DISK:
- leapioraid_scsihost_sas_ir_physical_disk_event(
- ioc, fw_event);
- break;
- case LEAPIORAID_EVENT_IR_OPERATION_STATUS:
- leapioraid_scsihost_sas_ir_operation_status_event(
- ioc, fw_event);
- break;
- default:
- break;
- }
-out:
- leapioraid_fw_event_work_put(fw_event);
- ioc->current_event = NULL;
-}
-
-static void
-leapioraid_firmware_event_work(struct work_struct *work)
-{
- struct leapioraid_fw_event_work *fw_event = container_of(work,
- struct leapioraid_fw_event_work,
- work);
-
- leapioraid_fw_work(fw_event->ioc, fw_event);
-}
-
-static void
-leapioraid_firmware_event_work_delayed(struct work_struct *work)
-{
- struct leapioraid_fw_event_work *fw_event = container_of(work,
- struct leapioraid_fw_event_work,
- delayed_work.work);
-
- leapioraid_fw_work(fw_event->ioc, fw_event);
-}
-
-u8
-leapioraid_scsihost_event_callback(struct LEAPIORAID_ADAPTER *ioc,
- u8 msix_index, u32 reply)
-{
- struct leapioraid_fw_event_work *fw_event;
- struct LeapioraidEventNotificationRep_t *mpi_reply;
- u16 event;
- u16 sz;
-
- if (ioc->pci_error_recovery)
- return 1;
-
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (unlikely(!mpi_reply)) {
- pr_err("%s mpi_reply not valid at %s:%d/%s()!\n", ioc->name,
- __FILE__, __LINE__, __func__);
- return 1;
- }
- event = le16_to_cpu(mpi_reply->Event);
- switch (event) {
- case LEAPIORAID_EVENT_SAS_BROADCAST_PRIMITIVE:
- {
- struct LeapioraidEventDataSasBroadcastPrimitive_t *baen_data =
- (struct LeapioraidEventDataSasBroadcastPrimitive_t *)
- mpi_reply->EventData;
- if (baen_data->Primitive !=
- LEAPIORAID_EVENT_PRIMITIVE_ASYNCHRONOUS_EVENT)
- return 1;
- if (ioc->broadcast_aen_busy) {
- ioc->broadcast_aen_pending++;
- return 1;
- }
- ioc->broadcast_aen_busy = 1;
- break;
- }
- case LEAPIORAID_EVENT_SAS_TOPOLOGY_CHANGE_LIST:
- leapioraid_scsihost_check_topo_delete_events(
- ioc,
- (struct LeapioraidEventDataSasTopoChangeList_t *)
- mpi_reply->EventData);
- if (ioc->shost_recovery)
- return 1;
- break;
- case LEAPIORAID_EVENT_IR_CONFIGURATION_CHANGE_LIST:
- leapioraid_scsihost_check_ir_config_unhide_events(
- ioc,
- (struct LeapioraidEventDataIrCfgChangeList_t *)
- mpi_reply->EventData);
- break;
- case LEAPIORAID_EVENT_IR_VOLUME:
- leapioraid_scsihost_check_volume_delete_events(
- ioc,
- (struct LeapioraidEventDataIrVol_t *)
- mpi_reply->EventData);
- break;
- case LEAPIORAID_EVENT_LOG_ENTRY_ADDED:
- fallthrough;
- case LEAPIORAID_EVENT_SAS_DEVICE_STATUS_CHANGE:
- leapioraid_scsihost_sas_device_status_change_event(
- ioc,
- (struct LeapioraidEventDataSasDeviceStatusChange_t *)
- mpi_reply->EventData);
- break;
- case LEAPIORAID_EVENT_IR_OPERATION_STATUS:
- case LEAPIORAID_EVENT_SAS_DISCOVERY:
- case LEAPIORAID_EVENT_SAS_DEVICE_DISCOVERY_ERROR:
- case LEAPIORAID_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE:
- case LEAPIORAID_EVENT_IR_PHYSICAL_DISK:
- break;
- default:
- return 1;
- }
- fw_event = leapioraid_alloc_fw_event_work(0);
- if (!fw_event) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return 1;
- }
- sz = le16_to_cpu(mpi_reply->EventDataLength) * 4;
- fw_event->event_data = kzalloc(sz, GFP_ATOMIC);
- if (!fw_event->event_data) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- leapioraid_fw_event_work_put(fw_event);
- return 1;
- }
- if (event == LEAPIORAID_EVENT_SAS_TOPOLOGY_CHANGE_LIST) {
- struct LeapioraidEventDataSasTopoChangeList_t *topo_event_data =
- (struct LeapioraidEventDataSasTopoChangeList_t *)
- mpi_reply->EventData;
- fw_event->retries = kzalloc(topo_event_data->NumEntries,
- GFP_ATOMIC);
- if (!fw_event->retries) {
- kfree(fw_event->event_data);
- leapioraid_fw_event_work_put(fw_event);
- return 1;
- }
- }
- memcpy(fw_event->event_data, mpi_reply->EventData, sz);
- fw_event->ioc = ioc;
- fw_event->VF_ID = mpi_reply->VF_ID;
- fw_event->VP_ID = mpi_reply->VP_ID;
- fw_event->event = event;
- leapioraid_scsihost_fw_event_add(ioc, fw_event);
- leapioraid_fw_event_work_put(fw_event);
- return 1;
-}
-
-static void
-leapioraid_scsihost_expander_node_remove(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_sas_node *sas_expander)
-{
- struct leapioraid_sas_port *leapioraid_port, *next;
- unsigned long flags;
- int port_id;
-
- list_for_each_entry_safe(leapioraid_port, next,
- &sas_expander->sas_port_list, port_list) {
- if (ioc->shost_recovery)
- return;
- if (leapioraid_port->remote_identify.device_type ==
- SAS_END_DEVICE)
- leapioraid_device_remove_by_sas_address(ioc,
- leapioraid_port->remote_identify.sas_address,
- leapioraid_port->hba_port);
- else if (leapioraid_port->remote_identify.device_type ==
- SAS_EDGE_EXPANDER_DEVICE
- || leapioraid_port->remote_identify.device_type ==
- SAS_FANOUT_EXPANDER_DEVICE)
- leapioraid_expander_remove(ioc,
- leapioraid_port->remote_identify.sas_address,
- leapioraid_port->hba_port);
- }
- port_id = sas_expander->port->port_id;
- leapioraid_transport_port_remove(ioc, sas_expander->sas_address,
- sas_expander->sas_address_parent,
- sas_expander->port);
- pr_info(
- "%s expander_remove: handle(0x%04x), sas_addr(0x%016llx), port:%d\n",
- ioc->name,
- sas_expander->handle,
- (unsigned long long)sas_expander->sas_address,
- port_id);
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- list_del(&sas_expander->list);
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- kfree(sas_expander->phy);
- kfree(sas_expander);
-}
-
-static void
-leapioraid_scsihost_ir_shutdown(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct LeapioraidRaidActionReq_t *mpi_request;
- struct LeapioraidRaidActionRep_t *mpi_reply;
- u16 smid;
-
- if (!ioc->ir_firmware)
- return;
-
- if (list_empty(&ioc->raid_device_list))
- return;
- if (leapioraid_base_pci_device_is_unplugged(ioc))
- return;
- mutex_lock(&ioc->scsih_cmds.mutex);
- if (ioc->scsih_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: scsih_cmd in use\n",
- ioc->name, __func__);
- goto out;
- }
- ioc->scsih_cmds.status = LEAPIORAID_CMD_PENDING;
- smid = leapioraid_base_get_smid(ioc, ioc->scsih_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- ioc->scsih_cmds.status = LEAPIORAID_CMD_NOT_USED;
- goto out;
- }
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->scsih_cmds.smid = smid;
- memset(mpi_request, 0, sizeof(struct LeapioraidRaidActionReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_RAID_ACTION;
- mpi_request->Action = 0x20;
- if (!ioc->warpdrive_msg)
- pr_info("%s IR shutdown (sending)\n",
- ioc->name);
- init_completion(&ioc->scsih_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->scsih_cmds.done, 10 * HZ);
- if (!(ioc->scsih_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- pr_err("%s %s: timeout\n",
- ioc->name, __func__);
- goto out;
- }
- if (ioc->scsih_cmds.status & LEAPIORAID_CMD_REPLY_VALID) {
- mpi_reply = ioc->scsih_cmds.reply;
- if (!ioc->warpdrive_msg)
- pr_info(
- "%s IR shutdown (complete): ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name, le16_to_cpu(mpi_reply->IOCStatus),
- le32_to_cpu(mpi_reply->IOCLogInfo));
- }
-out:
- ioc->scsih_cmds.status = LEAPIORAID_CMD_NOT_USED;
- mutex_unlock(&ioc->scsih_cmds.mutex);
-}
-
-static int
-leapioraid_scsihost_get_shost_and_ioc(struct pci_dev *pdev,
- struct Scsi_Host **shost,
- struct LEAPIORAID_ADAPTER **ioc)
-{
- *shost = pci_get_drvdata(pdev);
- if (*shost == NULL) {
- dev_err(&pdev->dev, "pdev's driver data is null\n");
- return -ENXIO;
- }
- *ioc = leapioraid_shost_private(*shost);
- if (*ioc == NULL) {
- dev_err(&pdev->dev, "shost's private data is null\n");
- return -ENXIO;
- }
- return 0;
-}
-
-static void
-leapioraid_scsihost_remove(struct pci_dev *pdev)
-{
- struct Scsi_Host *shost = NULL;
- struct LEAPIORAID_ADAPTER *ioc = NULL;
- struct leapioraid_sas_port *leapioraid_port, *next_port;
- struct leapioraid_raid_device *raid_device, *next;
- struct LEAPIORAID_TARGET *sas_target_priv_data;
- struct workqueue_struct *wq;
- unsigned long flags;
- struct leapioraid_hba_port *port, *port_next;
- struct leapioraid_virtual_phy *vphy, *vphy_next;
- struct LeapioraidCfgRep_t mpi_reply;
-
- if (leapioraid_scsihost_get_shost_and_ioc(pdev, &shost, &ioc)) {
- dev_err(&pdev->dev, "unable to remove device\n");
- return;
- }
-
- while (ioc->is_driver_loading)
- ssleep(1);
-
- ioc->remove_host = 1;
- leapioraid_wait_for_commands_to_complete(ioc);
- spin_lock_irqsave(&ioc->hba_hot_unplug_lock, flags);
- if (leapioraid_base_pci_device_is_unplugged(ioc)) {
- leapioraid_base_pause_mq_polling(ioc);
- leapioraid_scsihost_flush_running_cmds(ioc);
- }
- leapioraid_scsihost_fw_event_cleanup_queue(ioc);
- spin_unlock_irqrestore(&ioc->hba_hot_unplug_lock, flags);
- spin_lock_irqsave(&ioc->fw_event_lock, flags);
- wq = ioc->firmware_event_thread;
- ioc->firmware_event_thread = NULL;
- spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
- if (wq)
- destroy_workqueue(wq);
- leapioraid_config_set_ioc_pg1(ioc, &mpi_reply,
- &ioc->ioc_pg1_copy);
- leapioraid_scsihost_ir_shutdown(ioc);
- sas_remove_host(shost);
- scsi_remove_host(shost);
- list_for_each_entry_safe(raid_device, next, &ioc->raid_device_list,
- list) {
- if (raid_device->starget) {
- sas_target_priv_data = raid_device->starget->hostdata;
- sas_target_priv_data->deleted = 1;
- scsi_remove_target(&raid_device->starget->dev);
- }
- pr_info("%s removing handle(0x%04x), wwid(0x%016llx)\n",
- ioc->name, raid_device->handle,
- (unsigned long long)raid_device->wwid);
- leapioraid_scsihost_raid_device_remove(ioc, raid_device);
- }
- list_for_each_entry_safe(leapioraid_port, next_port,
- &ioc->sas_hba.sas_port_list, port_list) {
- if (leapioraid_port->remote_identify.device_type ==
- SAS_END_DEVICE)
- leapioraid_device_remove_by_sas_address(ioc,
- leapioraid_port->remote_identify.sas_address,
- leapioraid_port->hba_port);
- else if (leapioraid_port->remote_identify.device_type ==
- SAS_EDGE_EXPANDER_DEVICE
- || leapioraid_port->remote_identify.device_type ==
- SAS_FANOUT_EXPANDER_DEVICE)
- leapioraid_expander_remove(ioc,
- leapioraid_port->remote_identify.sas_address,
- leapioraid_port->hba_port);
- }
- list_for_each_entry_safe(port, port_next, &ioc->port_table_list, list) {
- if (port->vphys_mask) {
- list_for_each_entry_safe(vphy, vphy_next,
- &port->vphys_list, list) {
- list_del(&vphy->list);
- kfree(vphy);
- }
- }
- list_del(&port->list);
- kfree(port);
- }
- if (ioc->sas_hba.num_phys) {
- kfree(ioc->sas_hba.phy);
- ioc->sas_hba.phy = NULL;
- ioc->sas_hba.num_phys = 0;
- }
- leapioraid_base_detach(ioc);
- spin_lock(&leapioraid_gioc_lock);
- list_del(&ioc->list);
- spin_unlock(&leapioraid_gioc_lock);
- scsi_host_put(shost);
-}
-
-static void
-leapioraid_scsihost_shutdown(struct pci_dev *pdev)
-{
- struct Scsi_Host *shost = NULL;
- struct LEAPIORAID_ADAPTER *ioc = NULL;
- struct workqueue_struct *wq;
- unsigned long flags;
- struct LeapioraidCfgRep_t mpi_reply;
-
- if (leapioraid_scsihost_get_shost_and_ioc(pdev, &shost, &ioc)) {
- dev_err(&pdev->dev, "unable to shutdown device\n");
- return;
- }
- ioc->remove_host = 1;
- leapioraid_wait_for_commands_to_complete(ioc);
- leapioraid_scsihost_fw_event_cleanup_queue(ioc);
- spin_lock_irqsave(&ioc->fw_event_lock, flags);
- wq = ioc->firmware_event_thread;
- ioc->firmware_event_thread = NULL;
- spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
- if (wq)
- destroy_workqueue(wq);
- leapioraid_config_set_ioc_pg1(ioc, &mpi_reply,
- &ioc->ioc_pg1_copy);
- leapioraid_scsihost_ir_shutdown(ioc);
- leapioraid_base_mask_interrupts(ioc);
- ioc->shost_recovery = 1;
- leapioraid_base_make_ioc_ready(ioc, SOFT_RESET);
- ioc->shost_recovery = 0;
- leapioraid_base_free_irq(ioc);
- leapioraid_base_disable_msix(ioc);
-}
-
-static void
-leapioraid_scsihost_probe_boot_devices(struct LEAPIORAID_ADAPTER *ioc)
-{
- u32 channel;
- void *device;
- struct leapioraid_sas_device *sas_device;
- struct leapioraid_raid_device *raid_device;
- u16 handle;
- u64 sas_address_parent;
- u64 sas_address;
- unsigned long flags;
- int rc;
- struct leapioraid_hba_port *port;
- u8 protection_mask;
-
- if (!ioc->bios_pg3.BiosVersion)
- return;
-
- device = NULL;
- if (ioc->req_boot_device.device) {
- device = ioc->req_boot_device.device;
- channel = ioc->req_boot_device.channel;
- } else if (ioc->req_alt_boot_device.device) {
- device = ioc->req_alt_boot_device.device;
- channel = ioc->req_alt_boot_device.channel;
- } else if (ioc->current_boot_device.device) {
- device = ioc->current_boot_device.device;
- channel = ioc->current_boot_device.channel;
- }
- if (!device)
- return;
- if (channel == RAID_CHANNEL) {
- raid_device = device;
- if (raid_device->starget)
- return;
- if (!ioc->disable_eedp_support) {
- protection_mask = scsi_host_get_prot(ioc->shost);
- if (protection_mask & SHOST_DIX_TYPE0_PROTECTION) {
- scsi_host_set_prot(ioc->shost,
- protection_mask & 0x77);
- pr_err(
- "%s: Disabling DIX0 because of unsupport!\n",
- ioc->name);
- }
- }
- rc = scsi_add_device(ioc->shost, RAID_CHANNEL,
- raid_device->id, 0);
- if (rc)
- leapioraid_scsihost_raid_device_remove(ioc, raid_device);
- } else {
- sas_device = device;
- if (sas_device->starget)
- return;
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- handle = sas_device->handle;
- sas_address_parent = sas_device->sas_address_parent;
- sas_address = sas_device->sas_address;
- port = sas_device->port;
- list_move_tail(&sas_device->list, &ioc->sas_device_list);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
-
- if (!port)
- return;
-
- if (ioc->hide_drives)
- return;
-
- if (!leapioraid_transport_port_add(ioc, handle,
- sas_address_parent, port)) {
- leapioraid_scsihost_sas_device_remove(ioc, sas_device);
- } else if (!sas_device->starget) {
- if (!ioc->is_driver_loading) {
- leapioraid_transport_port_remove(ioc,
- sas_address,
- sas_address_parent,
- port);
- leapioraid_scsihost_sas_device_remove(ioc, sas_device);
- }
- }
- }
-}
-
-static void
-leapioraid_scsihost_probe_raid(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_raid_device *raid_device, *raid_next;
- int rc;
-
- list_for_each_entry_safe(raid_device, raid_next,
- &ioc->raid_device_list, list) {
- if (raid_device->starget)
- continue;
- rc = scsi_add_device(ioc->shost, RAID_CHANNEL,
- raid_device->id, 0);
- if (rc)
- leapioraid_scsihost_raid_device_remove(ioc, raid_device);
- }
-}
-
-static
-struct leapioraid_sas_device *leapioraid_get_next_sas_device(
- struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_sas_device *sas_device = NULL;
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- if (!list_empty(&ioc->sas_device_init_list)) {
- sas_device = list_first_entry(&ioc->sas_device_init_list,
- struct leapioraid_sas_device, list);
- leapioraid_sas_device_get(sas_device);
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- return sas_device;
-}
-
-static void
-leapioraid_sas_device_make_active(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_device *sas_device)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- if (!list_empty(&sas_device->list)) {
- list_del_init(&sas_device->list);
- leapioraid_sas_device_put(sas_device);
- }
- leapioraid_sas_device_get(sas_device);
- list_add_tail(&sas_device->list, &ioc->sas_device_list);
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
-}
-
-static void
-leapioraid_scsihost_probe_sas(struct LEAPIORAID_ADAPTER *ioc)
-{
- struct leapioraid_sas_device *sas_device;
-
- while ((sas_device = leapioraid_get_next_sas_device(ioc))) {
- if (ioc->hide_drives) {
- leapioraid_sas_device_make_active(ioc, sas_device);
- leapioraid_sas_device_put(sas_device);
- continue;
- }
- if (!leapioraid_transport_port_add(ioc, sas_device->handle,
- sas_device->sas_address_parent,
- sas_device->port)) {
- leapioraid_scsihost_sas_device_remove(ioc, sas_device);
- leapioraid_sas_device_put(sas_device);
- continue;
- } else if (!sas_device->starget) {
- if (!ioc->is_driver_loading) {
- leapioraid_transport_port_remove(ioc,
- sas_device->sas_address,
- sas_device->sas_address_parent,
- sas_device->port);
- leapioraid_scsihost_sas_device_remove(ioc, sas_device);
- leapioraid_sas_device_put(sas_device);
- continue;
- }
- }
- leapioraid_sas_device_make_active(ioc, sas_device);
- leapioraid_sas_device_put(sas_device);
- }
-}
-
-static void
-leapioraid_scsihost_probe_devices(struct LEAPIORAID_ADAPTER *ioc)
-{
- u16 volume_mapping_flags;
-
- if (!(ioc->facts.ProtocolFlags
- & LEAPIORAID_IOCFACTS_PROTOCOL_SCSI_INITIATOR))
- return;
- leapioraid_scsihost_probe_boot_devices(ioc);
-
- if (ioc->ir_firmware) {
- volume_mapping_flags =
- le16_to_cpu(ioc->ioc_pg8.IRVolumeMappingFlags) &
- LEAPIORAID_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE;
- if (volume_mapping_flags ==
- LEAPIORAID_IOCPAGE8_IRFLAGS_LOW_VOLUME_MAPPING) {
- leapioraid_scsihost_probe_raid(ioc);
- leapioraid_scsihost_probe_sas(ioc);
- } else {
- leapioraid_scsihost_probe_sas(ioc);
- leapioraid_scsihost_probe_raid(ioc);
- }
- } else {
- leapioraid_scsihost_probe_sas(ioc);
- }
-}
-
-static void
-leapioraid_scsihost_scan_start(struct Scsi_Host *shost)
-{
- struct LEAPIORAID_ADAPTER *ioc = shost_priv(shost);
- int rc;
-
- if (disable_discovery > 0)
- return;
- ioc->start_scan = 1;
- rc = leapioraid_port_enable(ioc);
- if (rc != 0)
- pr_info("%s port enable: FAILED\n",
- ioc->name);
-}
-
-void
-leapioraid_scsihost_complete_devices_scanning(struct LEAPIORAID_ADAPTER *ioc)
-{
- if (ioc->wait_for_discovery_to_complete) {
- ioc->wait_for_discovery_to_complete = 0;
- leapioraid_scsihost_probe_devices(ioc);
- }
- leapioraid_base_start_watchdog(ioc);
- ioc->is_driver_loading = 0;
-}
-
-static int
-leapioraid_scsihost_scan_finished(
- struct Scsi_Host *shost, unsigned long time)
-{
- struct LEAPIORAID_ADAPTER *ioc = shost_priv(shost);
- u32 ioc_state;
- int issue_hard_reset = 0;
-
- if (disable_discovery > 0) {
- ioc->is_driver_loading = 0;
- ioc->wait_for_discovery_to_complete = 0;
- goto out;
- }
- if (time >= (300 * HZ)) {
- ioc->port_enable_cmds.status = LEAPIORAID_CMD_NOT_USED;
- pr_info("%s port enable: FAILED with timeout (timeout=300s)\n",
- ioc->name);
- ioc->is_driver_loading = 0;
- goto out;
- }
- if (ioc->start_scan) {
- ioc_state = leapioraid_base_get_iocstate(ioc, 0);
- if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_FAULT) {
- leapioraid_print_fault_code(ioc,
- ioc_state &
- LEAPIORAID_DOORBELL_DATA_MASK);
- issue_hard_reset = 1;
- goto out;
- } else if ((ioc_state & LEAPIORAID_IOC_STATE_MASK) ==
- LEAPIORAID_IOC_STATE_COREDUMP) {
- leapioraid_base_coredump_info(ioc,
- ioc_state &
- LEAPIORAID_DOORBELL_DATA_MASK);
- leapioraid_base_wait_for_coredump_completion(ioc,
- __func__);
- issue_hard_reset = 1;
- goto out;
- }
- return 0;
- }
- if (ioc->port_enable_cmds.status & LEAPIORAID_CMD_RESET) {
- pr_err("%s port enable: aborted due to diag reset\n",
- ioc->name);
- ioc->port_enable_cmds.status = LEAPIORAID_CMD_NOT_USED;
- goto out;
- }
- if (ioc->start_scan_failed) {
- pr_info("%s port enable: FAILED with (ioc_status=0x%08x)\n",
- ioc->name, ioc->start_scan_failed);
- ioc->is_driver_loading = 0;
- ioc->wait_for_discovery_to_complete = 0;
- ioc->remove_host = 1;
- goto out;
- }
- pr_info("%s port enable: SUCCESS\n", ioc->name);
- ioc->port_enable_cmds.status = LEAPIORAID_CMD_NOT_USED;
- leapioraid_scsihost_complete_devices_scanning(ioc);
-out:
- if (issue_hard_reset) {
- ioc->port_enable_cmds.status = LEAPIORAID_CMD_NOT_USED;
- if (leapioraid_base_hard_reset_handler(ioc, SOFT_RESET))
- ioc->is_driver_loading = 0;
- }
- return 1;
-}
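
For context on the scan_start/scan_finished pair above: scsi_scan_host() calls ->scan_start() once to kick off discovery and then polls ->scan_finished(shost, elapsed) every few milliseconds until it returns nonzero, which is why scan_finished() both enforces the 300 second ceiling and reports port-enable completion. A minimal sketch of that contract, with a hypothetical my_hba structure and helper standing in for the adapter state (not this driver's code):

#include <scsi/scsi_host.h>

struct my_hba {			/* hypothetical private data */
	bool scan_started;
	int port_enable_done;
};

/* Kick off discovery; must not block. */
static void my_scan_start(struct Scsi_Host *shost)
{
	struct my_hba *hba = shost_priv(shost);

	hba->scan_started = true;
	my_start_port_enable(hba);	/* hypothetical helper */
}

/*
 * Polled by the midlayer with the time elapsed since the scan started
 * (in jiffies); returning nonzero ends the asynchronous scan.
 */
static int my_scan_finished(struct Scsi_Host *shost, unsigned long elapsed)
{
	struct my_hba *hba = shost_priv(shost);

	if (elapsed >= 300 * HZ)	/* give up eventually */
		return 1;
	return hba->port_enable_done;
}
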
-
-SCSIH_MAP_QUEUE(struct Scsi_Host *shost)
-{
- struct LEAPIORAID_ADAPTER *ioc =
- (struct LEAPIORAID_ADAPTER *)shost->hostdata;
- struct blk_mq_queue_map *map;
- int i, qoff, offset;
- int nr_msix_vectors = ioc->iopoll_q_start_index;
- int iopoll_q_count = ioc->reply_queue_count - nr_msix_vectors;
-
- if (shost->nr_hw_queues == 1)
- return;
- for (i = 0, qoff = 0; i < shost->nr_maps; i++) {
- map = &shost->tag_set.map[i];
- map->nr_queues = 0;
- offset = 0;
- if (i == HCTX_TYPE_DEFAULT) {
- map->nr_queues =
- nr_msix_vectors - ioc->high_iops_queues;
- offset = ioc->high_iops_queues;
- } else if (i == HCTX_TYPE_POLL)
- map->nr_queues = iopoll_q_count;
- if (!map->nr_queues)
- BUG_ON(i == HCTX_TYPE_DEFAULT);
- map->queue_offset = qoff;
- if (i != HCTX_TYPE_POLL)
- blk_mq_pci_map_queues(map, ioc->pdev, offset);
- else
- blk_mq_map_queues(map);
- qoff += map->nr_queues;
- }
-}
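
The function above is the blk-mq ->map_queues hook: it divides the host's hardware contexts between the interrupt-driven map (spread over the controller's MSI-X vectors via blk_mq_pci_map_queues()) and the polled map, which has no IRQ affinity and so uses plain blk_mq_map_queues(). A condensed sketch of the same pattern, with invented n_irq_queues/n_poll_queues counters standing in for the driver's bookkeeping:

#include <scsi/scsi_host.h>
#include <linux/blk-mq-pci.h>

struct my_hba {				/* hypothetical private data */
	struct pci_dev *pdev;
	unsigned int n_irq_queues;
	unsigned int n_poll_queues;
};

static void my_map_queues(struct Scsi_Host *shost)
{
	struct my_hba *hba = shost_priv(shost);
	struct blk_mq_queue_map *map;
	int i, qoff = 0;

	for (i = 0; i < shost->nr_maps; i++) {
		map = &shost->tag_set.map[i];
		if (i == HCTX_TYPE_DEFAULT)
			map->nr_queues = hba->n_irq_queues;
		else if (i == HCTX_TYPE_POLL)
			map->nr_queues = hba->n_poll_queues;
		else
			map->nr_queues = 0;
		if (!map->nr_queues)
			continue;
		map->queue_offset = qoff;
		if (i == HCTX_TYPE_DEFAULT)
			/* follow the device's MSI-X affinity masks */
			blk_mq_pci_map_queues(map, hba->pdev, 0);
		else
			blk_mq_map_queues(map);
		qoff += map->nr_queues;
	}
}
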
-
-static struct scsi_host_template leapioraid_driver_template = {
- .module = THIS_MODULE,
- .name = "LEAPIO RAID Host",
- .proc_name = LEAPIORAID_DRIVER_NAME,
- .queuecommand = leapioraid_scsihost_qcmd,
- .target_alloc = leapioraid_scsihost_target_alloc,
- .slave_alloc = leapioraid_scsihost_slave_alloc,
- .slave_configure = leapioraid_scsihost_slave_configure,
- .target_destroy = leapioraid_scsihost_target_destroy,
- .slave_destroy = leapioraid_scsihost_slave_destroy,
- .scan_finished = leapioraid_scsihost_scan_finished,
- .scan_start = leapioraid_scsihost_scan_start,
- .change_queue_depth = leapioraid_scsihost_change_queue_depth,
- .eh_abort_handler = leapioraid_scsihost_abort,
- .eh_device_reset_handler = leapioraid_scsihost_dev_reset,
- .eh_target_reset_handler = leapioraid_scsihost_target_reset,
- .eh_host_reset_handler = leapioraid_scsihost_host_reset,
- .bios_param = leapioraid_scsihost_bios_param,
- .can_queue = 1,
- .this_id = -1,
- .sg_tablesize = LEAPIORAID_SG_DEPTH,
- .max_sectors = 128,
- .max_segment_size = 0xffffffff,
- .cmd_per_lun = 128,
- .shost_groups = leapioraid_host_groups,
- .sdev_groups = leapioraid_dev_groups,
- .track_queue_depth = 1,
- .cmd_size = sizeof(struct leapioraid_scsiio_tracker),
- .map_queues = leapioraid_scsihost_map_queues,
- .mq_poll = leapioraid_blk_mq_poll,
-};
-
-static struct raid_function_template leapioraid_raid_functions = {
- .cookie = &leapioraid_driver_template,
- .is_raid = leapioraid_scsihost_is_raid,
- .get_resync = leapioraid_scsihost_get_resync,
- .get_state = leapioraid_scsihost_get_state,
-};
-
-static int
-leapioraid_scsihost_probe(
- struct pci_dev *pdev, const struct pci_device_id *id)
-{
- struct LEAPIORAID_ADAPTER *ioc;
- struct Scsi_Host *shost = NULL;
- int rv;
-
- shost = scsi_host_alloc(&leapioraid_driver_template,
- sizeof(struct LEAPIORAID_ADAPTER));
- if (!shost)
- return -ENODEV;
- ioc = shost_priv(shost);
- memset(ioc, 0, sizeof(struct LEAPIORAID_ADAPTER));
- ioc->id = leapioraid_ids++;
- sprintf(ioc->driver_name, "%s", LEAPIORAID_DRIVER_NAME);
-
- ioc->combined_reply_queue = 1;
- ioc->nc_reply_index_count = 16;
- ioc->multipath_on_hba = 1;
-
- ioc = leapioraid_shost_private(shost);
- INIT_LIST_HEAD(&ioc->list);
- spin_lock(&leapioraid_gioc_lock);
- list_add_tail(&ioc->list, &leapioraid_ioc_list);
- spin_unlock(&leapioraid_gioc_lock);
- ioc->shost = shost;
- ioc->pdev = pdev;
-
- ioc->scsi_io_cb_idx = scsi_io_cb_idx;
- ioc->tm_cb_idx = tm_cb_idx;
- ioc->ctl_cb_idx = ctl_cb_idx;
- ioc->ctl_tm_cb_idx = ctl_tm_cb_idx;
- ioc->base_cb_idx = base_cb_idx;
- ioc->port_enable_cb_idx = port_enable_cb_idx;
- ioc->transport_cb_idx = transport_cb_idx;
- ioc->scsih_cb_idx = scsih_cb_idx;
- ioc->config_cb_idx = config_cb_idx;
- ioc->tm_tr_cb_idx = tm_tr_cb_idx;
- ioc->tm_tr_volume_cb_idx = tm_tr_volume_cb_idx;
- ioc->tm_tr_internal_cb_idx = tm_tr_internal_cb_idx;
- ioc->tm_sas_control_cb_idx = tm_sas_control_cb_idx;
-
- ioc->logging_level = logging_level;
- ioc->schedule_dead_ioc_flush_running_cmds =
- &leapioraid_scsihost_flush_running_cmds;
- ioc->open_pcie_trace = open_pcie_trace;
- ioc->enable_sdev_max_qd = 0;
- ioc->max_shutdown_latency = 6;
- ioc->drv_support_bitmap |= 0x00000001;
- ioc->drv_support_bitmap |= 0x00000002;
-
- mutex_init(&ioc->reset_in_progress_mutex);
- mutex_init(&ioc->hostdiag_unlock_mutex);
- mutex_init(&ioc->pci_access_mutex);
- spin_lock_init(&ioc->ioc_reset_in_progress_lock);
- spin_lock_init(&ioc->scsi_lookup_lock);
- spin_lock_init(&ioc->sas_device_lock);
- spin_lock_init(&ioc->sas_node_lock);
- spin_lock_init(&ioc->fw_event_lock);
- spin_lock_init(&ioc->raid_device_lock);
- spin_lock_init(&ioc->scsih_q_internal_lock);
- spin_lock_init(&ioc->hba_hot_unplug_lock);
- INIT_LIST_HEAD(&ioc->sas_device_list);
- INIT_LIST_HEAD(&ioc->port_table_list);
- INIT_LIST_HEAD(&ioc->sas_device_init_list);
- INIT_LIST_HEAD(&ioc->sas_expander_list);
- INIT_LIST_HEAD(&ioc->enclosure_list);
- INIT_LIST_HEAD(&ioc->fw_event_list);
- INIT_LIST_HEAD(&ioc->raid_device_list);
- INIT_LIST_HEAD(&ioc->sas_hba.sas_port_list);
- INIT_LIST_HEAD(&ioc->delayed_tr_list);
- INIT_LIST_HEAD(&ioc->delayed_sc_list);
- INIT_LIST_HEAD(&ioc->delayed_event_ack_list);
- INIT_LIST_HEAD(&ioc->delayed_tr_volume_list);
- INIT_LIST_HEAD(&ioc->delayed_internal_tm_list);
- INIT_LIST_HEAD(&ioc->scsih_q_intenal_cmds);
- INIT_LIST_HEAD(&ioc->reply_queue_list);
- sprintf(ioc->name, "%s_cm%d", ioc->driver_name, ioc->id);
-
- shost->max_cmd_len = 32;
- shost->max_lun = 8;
- shost->transportt = leapioraid_transport_template;
- shost->unique_id = ioc->id;
-
- ioc->drv_internal_flags |= LEAPIORAID_DRV_INTERNAL_BITMAP_BLK_MQ;
-
- ioc->disable_eedp_support = 1;
- snprintf(ioc->firmware_event_name, sizeof(ioc->firmware_event_name),
- "fw_event_%s%u", ioc->driver_name, ioc->id);
- ioc->firmware_event_thread =
- alloc_ordered_workqueue(ioc->firmware_event_name, 0);
- if (!ioc->firmware_event_thread) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rv = -ENODEV;
- goto out_thread_fail;
- }
-
- shost->host_tagset = 0;
- ioc->is_driver_loading = 1;
- if ((leapioraid_base_attach(ioc))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rv = -ENODEV;
- goto out_attach_fail;
- }
- ioc->hide_drives = 0;
-
- shost->nr_hw_queues = 1;
- rv = scsi_add_host(shost, &pdev->dev);
- if (rv) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- spin_lock(&leapioraid_gioc_lock);
- list_del(&ioc->list);
- spin_unlock(&leapioraid_gioc_lock);
- goto out_add_shost_fail;
- }
-
- scsi_scan_host(shost);
-
- return 0;
-out_add_shost_fail:
- leapioraid_base_detach(ioc);
-out_attach_fail:
- destroy_workqueue(ioc->firmware_event_thread);
-out_thread_fail:
- spin_lock(&leapioraid_gioc_lock);
- list_del(&ioc->list);
- spin_unlock(&leapioraid_gioc_lock);
- scsi_host_put(shost);
- return rv;
-}
-
-#ifdef CONFIG_PM
-static int
-leapioraid_scsihost_suspend(struct pci_dev *pdev, pm_message_t state)
-{
- struct Scsi_Host *shost = NULL;
- struct LEAPIORAID_ADAPTER *ioc = NULL;
- pci_power_t device_state;
- int rc;
-
- rc = leapioraid_scsihost_get_shost_and_ioc(pdev, &shost, &ioc);
- if (rc) {
- dev_err(&pdev->dev, "unable to suspend device\n");
- return rc;
- }
- leapioraid_base_stop_watchdog(ioc);
- leapioraid_base_stop_hba_unplug_watchdog(ioc);
- scsi_block_requests(shost);
- device_state = pci_choose_state(pdev, state);
- leapioraid_scsihost_ir_shutdown(ioc);
- pr_info("%s pdev=0x%p, slot=%s, entering operating state [D%d]\n",
- ioc->name, pdev,
- pci_name(pdev), device_state);
- pci_save_state(pdev);
- leapioraid_base_free_resources(ioc);
- pci_set_power_state(pdev, device_state);
- return 0;
-}
-
-static int
-leapioraid_scsihost_resume(struct pci_dev *pdev)
-{
- struct Scsi_Host *shost = NULL;
- struct LEAPIORAID_ADAPTER *ioc = NULL;
- pci_power_t device_state = pdev->current_state;
- int r;
-
- r = leapioraid_scsihost_get_shost_and_ioc(pdev, &shost, &ioc);
- if (r) {
- dev_err(&pdev->dev, "unable to resume device\n");
- return r;
- }
- pr_info("%s pdev=0x%p, slot=%s, previous operating state [D%d]\n",
- ioc->name, pdev,
- pci_name(pdev), device_state);
- pci_set_power_state(pdev, PCI_D0);
- pci_enable_wake(pdev, PCI_D0, 0);
- pci_restore_state(pdev);
- ioc->pdev = pdev;
- r = leapioraid_base_map_resources(ioc);
- if (r)
- return r;
- pr_err("%s issuing hard reset as part of OS resume\n",
- ioc->name);
- leapioraid_base_hard_reset_handler(ioc, SOFT_RESET);
- scsi_unblock_requests(shost);
- leapioraid_base_start_watchdog(ioc);
- leapioraid_base_start_hba_unplug_watchdog(ioc);
- return 0;
-}
-#endif
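
A side note on the CONFIG_PM block above: the legacy .suspend/.resume members of struct pci_driver used here are the old-style PCI power-management hooks; new drivers are expected to provide dev_pm_ops instead, where saving config space and changing the device power state are largely handled by the PCI core. A minimal sketch of that modern shape (names and helpers are illustrative only, not taken from this driver):

#include <linux/pci.h>
#include <linux/pm.h>

static int my_suspend(struct device *dev)
{
	struct my_hba *hba = pci_get_drvdata(to_pci_dev(dev));	/* hypothetical */

	my_quiesce(hba);	/* hypothetical: stop watchdogs, block I/O */
	return 0;
}

static int my_resume(struct device *dev)
{
	struct my_hba *hba = pci_get_drvdata(to_pci_dev(dev));

	return my_reinit(hba);	/* hypothetical: remap resources, reset, unblock */
}

static SIMPLE_DEV_PM_OPS(my_pm_ops, my_suspend, my_resume);

The ops are then hooked up with .driver.pm = &my_pm_ops in the struct pci_driver, in place of the legacy .suspend/.resume members.
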
-
-static pci_ers_result_t
-leapioraid_scsihost_pci_error_detected(
- struct pci_dev *pdev, pci_channel_state_t state)
-{
- struct Scsi_Host *shost = NULL;
- struct LEAPIORAID_ADAPTER *ioc = NULL;
-
- if (leapioraid_scsihost_get_shost_and_ioc(pdev, &shost, &ioc)) {
- dev_err(&pdev->dev, "device unavailable\n");
- return PCI_ERS_RESULT_DISCONNECT;
- }
- pr_err("%s PCI error: detected callback, state(%d)!!\n",
- ioc->name, state);
- switch (state) {
- case pci_channel_io_normal:
- return PCI_ERS_RESULT_CAN_RECOVER;
- case pci_channel_io_frozen:
- ioc->pci_error_recovery = 1;
- scsi_block_requests(ioc->shost);
- leapioraid_base_stop_watchdog(ioc);
- leapioraid_base_stop_hba_unplug_watchdog(ioc);
- leapioraid_base_free_resources(ioc);
- return PCI_ERS_RESULT_NEED_RESET;
- case pci_channel_io_perm_failure:
- ioc->pci_error_recovery = 1;
- leapioraid_base_stop_watchdog(ioc);
- leapioraid_base_stop_hba_unplug_watchdog(ioc);
- leapioraid_base_pause_mq_polling(ioc);
- leapioraid_scsihost_flush_running_cmds(ioc);
- return PCI_ERS_RESULT_DISCONNECT;
- }
- return PCI_ERS_RESULT_NEED_RESET;
-}
-
-static pci_ers_result_t
-leapioraid_scsihost_pci_slot_reset(struct pci_dev *pdev)
-{
- struct Scsi_Host *shost = NULL;
- struct LEAPIORAID_ADAPTER *ioc = NULL;
- int rc;
-
- if (leapioraid_scsihost_get_shost_and_ioc(pdev, &shost, &ioc)) {
- dev_err(&pdev->dev, "unable to perform slot reset\n");
- return PCI_ERS_RESULT_DISCONNECT;
- }
- pr_err("%s PCI error: slot reset callback!!\n",
- ioc->name);
- ioc->pci_error_recovery = 0;
- ioc->pdev = pdev;
- pci_restore_state(pdev);
- rc = leapioraid_base_map_resources(ioc);
- if (rc)
- return PCI_ERS_RESULT_DISCONNECT;
- pr_info("%s issuing hard reset as part of PCI slot reset\n",
- ioc->name);
- rc = leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
- pr_info("%s hard reset: %s\n",
- ioc->name, (rc == 0) ? "success" : "failed");
- if (!rc)
- return PCI_ERS_RESULT_RECOVERED;
- else
- return PCI_ERS_RESULT_DISCONNECT;
-}
-
-static void
-leapioraid_scsihost_pci_resume(struct pci_dev *pdev)
-{
- struct Scsi_Host *shost = NULL;
- struct LEAPIORAID_ADAPTER *ioc = NULL;
-
- if (leapioraid_scsihost_get_shost_and_ioc(pdev, &shost, &ioc)) {
- dev_err(&pdev->dev, "unable to resume device\n");
- return;
- }
- pr_err("%s PCI error: resume callback!!\n",
- ioc->name);
-
- pci_aer_clear_nonfatal_status(pdev);
-
- leapioraid_base_start_watchdog(ioc);
- leapioraid_base_start_hba_unplug_watchdog(ioc);
- scsi_unblock_requests(ioc->shost);
-}
-
-static pci_ers_result_t
-leapioraid_scsihost_pci_mmio_enabled(struct pci_dev *pdev)
-{
- struct Scsi_Host *shost = NULL;
- struct LEAPIORAID_ADAPTER *ioc = NULL;
-
- if (leapioraid_scsihost_get_shost_and_ioc(pdev, &shost, &ioc)) {
- dev_err(&pdev->dev, "unable to enable mmio\n");
- return PCI_ERS_RESULT_DISCONNECT;
- }
-
- pr_err("%s: PCI error: mmio enabled callback!!!\n",
- ioc->name);
- return PCI_ERS_RESULT_RECOVERED;
-}
-
-u8 leapioraid_scsihost_ncq_prio_supp(struct scsi_device *sdev)
-{
- u8 ncq_prio_supp = 0;
-
- struct scsi_vpd *vpd;
-
- rcu_read_lock();
- vpd = rcu_dereference(sdev->vpd_pg89);
- if (!vpd || vpd->len < 214)
- goto out;
- ncq_prio_supp = (vpd->data[213] >> 4) & 1;
-out:
- rcu_read_unlock();
- return ncq_prio_supp;
-}
-
-static const struct pci_device_id leapioraid_pci_table[] = {
- { 0x1556, 0x1111, PCI_ANY_ID, PCI_ANY_ID },
- { LEAPIORAID_VENDOR_ID, LEAPIORAID_DEVICE_ID_1, PCI_ANY_ID, PCI_ANY_ID },
- { LEAPIORAID_VENDOR_ID, LEAPIORAID_DEVICE_ID_2, PCI_ANY_ID, PCI_ANY_ID },
- { LEAPIORAID_VENDOR_ID, LEAPIORAID_HBA, PCI_ANY_ID, PCI_ANY_ID },
- { LEAPIORAID_VENDOR_ID, LEAPIORAID_RAID, PCI_ANY_ID, PCI_ANY_ID },
- { 0 }
-};
-
-MODULE_DEVICE_TABLE(pci, leapioraid_pci_table);
-static struct pci_error_handlers leapioraid_err_handler = {
- .error_detected = leapioraid_scsihost_pci_error_detected,
- .mmio_enabled = leapioraid_scsihost_pci_mmio_enabled,
- .slot_reset = leapioraid_scsihost_pci_slot_reset,
- .resume = leapioraid_scsihost_pci_resume,
-};
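
For orientation, the AER core invokes these handlers in a fixed sequence when a PCI channel error is reported: error_detected() first (its return value chooses the path), then mmio_enabled() if every affected driver answered PCI_ERS_RESULT_CAN_RECOVER, then slot_reset() after the link has been reset, and finally resume() once recovery succeeded. A bare skeleton with that contract spelled out in comments (not this driver's code):

#include <linux/pci.h>

static pci_ers_result_t my_error_detected(struct pci_dev *pdev,
					   pci_channel_state_t state)
{
	/* Stop issuing new I/O; answer CAN_RECOVER, NEED_RESET or DISCONNECT. */
	return PCI_ERS_RESULT_NEED_RESET;
}

static pci_ers_result_t my_slot_reset(struct pci_dev *pdev)
{
	/* Link is back: restore config space, re-initialise the device. */
	return PCI_ERS_RESULT_RECOVERED;
}

static void my_pci_resume(struct pci_dev *pdev)
{
	/* Recovery complete: unblock I/O and restart background work. */
}

static const struct pci_error_handlers my_err_handler = {
	.error_detected	= my_error_detected,
	.slot_reset	= my_slot_reset,
	.resume		= my_pci_resume,
};
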
-
-static struct pci_driver leapioraid_driver = {
- .name = LEAPIORAID_DRIVER_NAME,
- .id_table = leapioraid_pci_table,
- .probe = leapioraid_scsihost_probe,
- .remove = leapioraid_scsihost_remove,
- .shutdown = leapioraid_scsihost_shutdown,
- .err_handler = &leapioraid_err_handler,
-#ifdef CONFIG_PM
- .suspend = leapioraid_scsihost_suspend,
- .resume = leapioraid_scsihost_resume,
-#endif
-};
-
-static int
-leapioraid_scsihost_init(void)
-{
- leapioraid_ids = 0;
- leapioraid_base_initialize_callback_handler();
-
- scsi_io_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_scsihost_io_done);
- tm_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_scsihost_tm_done);
- base_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_base_done);
- port_enable_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_port_enable_done);
- transport_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_transport_done);
- scsih_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_scsihost_done);
- config_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_config_done);
- ctl_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_ctl_done);
- ctl_tm_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_ctl_tm_done);
- tm_tr_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_scsihost_tm_tr_complete);
- tm_tr_volume_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_scsihost_tm_volume_tr_complete);
- tm_tr_internal_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_scsihost_tm_internal_tr_complete);
- tm_sas_control_cb_idx =
- leapioraid_base_register_callback_handler(
- leapioraid_scsihost_sas_control_complete);
-
- return 0;
-}
-
-static void
-leapioraid_scsihost_exit(void)
-{
- leapioraid_base_release_callback_handler(scsi_io_cb_idx);
- leapioraid_base_release_callback_handler(tm_cb_idx);
- leapioraid_base_release_callback_handler(base_cb_idx);
- leapioraid_base_release_callback_handler(port_enable_cb_idx);
- leapioraid_base_release_callback_handler(transport_cb_idx);
- leapioraid_base_release_callback_handler(scsih_cb_idx);
- leapioraid_base_release_callback_handler(config_cb_idx);
- leapioraid_base_release_callback_handler(ctl_cb_idx);
- leapioraid_base_release_callback_handler(ctl_tm_cb_idx);
- leapioraid_base_release_callback_handler(tm_tr_cb_idx);
- leapioraid_base_release_callback_handler(tm_tr_volume_cb_idx);
- leapioraid_base_release_callback_handler(tm_tr_internal_cb_idx);
- leapioraid_base_release_callback_handler(tm_sas_control_cb_idx);
-
- raid_class_release(leapioraid_raid_template);
- sas_release_transport(leapioraid_transport_template);
-}
-
-static int __init leapioraid_init(void)
-{
- int error;
-
- pr_info("%s version %s loaded\n", LEAPIORAID_DRIVER_NAME,
- LEAPIORAID_DRIVER_VERSION);
- leapioraid_transport_template =
- sas_attach_transport(&leapioraid_transport_functions);
-
- if (!leapioraid_transport_template)
- return -ENODEV;
-
- leapioraid_raid_template =
- raid_class_attach(&leapioraid_raid_functions);
- if (!leapioraid_raid_template) {
- sas_release_transport(leapioraid_transport_template);
- return -ENODEV;
- }
-
- error = leapioraid_scsihost_init();
- if (error) {
- leapioraid_scsihost_exit();
- return error;
- }
- leapioraid_ctl_init();
- error = pci_register_driver(&leapioraid_driver);
- if (error)
- leapioraid_scsihost_exit();
- return error;
-}
-
-static void __exit leapioraid_exit(void)
-{
- pr_info("leapioraid_ids version %s unloading\n",
- LEAPIORAID_DRIVER_VERSION);
- leapioraid_ctl_exit();
- pci_unregister_driver(&leapioraid_driver);
- leapioraid_scsihost_exit();
-}
-
-module_init(leapioraid_init);
-module_exit(leapioraid_exit);
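
The open-coded module_init()/module_exit() pair above is needed because loading must attach the SAS transport and raid-class templates and register the reply callbacks before the PCI driver goes live; when a module's init and exit would do nothing beyond pci_register_driver()/pci_unregister_driver(), the usual shorthand is simply:

#include <linux/module.h>
#include <linux/pci.h>

/* Generates module_init/module_exit that only (un)register the driver. */
module_pci_driver(my_pci_driver);	/* my_pci_driver: a hypothetical struct pci_driver */
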
diff --git a/drivers/scsi/leapioraid/leapioraid_transport.c b/drivers/scsi/leapioraid/leapioraid_transport.c
deleted file mode 100644
index edddd56128a1..000000000000
--- a/drivers/scsi/leapioraid/leapioraid_transport.c
+++ /dev/null
@@ -1,1926 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * SAS Transport Layer for MPT (Message Passing Technology) based controllers
- *
- * Copyright (C) 2013-2018 LSI Corporation
- * Copyright (C) 2013-2018 Avago Technologies
- * Copyright (C) 2013-2018 Broadcom Inc.
- * (mailto:MPT-FusionLinux.pdl@broadcom.com)
- *
- * Copyright (C) 2024 LeapIO Tech Inc.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2
- * of the License, or (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * NO WARRANTY
- * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
- * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
- * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
- * solely responsible for determining the appropriateness of using and
- * distributing the Program and assumes all risks associated with its
- * exercise of rights under this Agreement, including but not limited to
- * the risks and costs of program errors, damage to or loss of data,
- * programs or equipment, and unavailability or interruption of operations.
-
- * DISCLAIMER OF LIABILITY
- * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
- * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
- * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
- * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
- */
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/errno.h>
-#include <linux/sched.h>
-#include <linux/workqueue.h>
-#include <linux/delay.h>
-#include <linux/pci.h>
-#include <scsi/scsi.h>
-#include <scsi/scsi_cmnd.h>
-#include <scsi/scsi_device.h>
-#include <scsi/scsi_host.h>
-#include <scsi/scsi_transport_sas.h>
-#include <scsi/scsi_dbg.h>
-#include "leapioraid_func.h"
-
-static
-struct leapioraid_raid_sas_node *leapioraid_transport_sas_node_find_by_sas_address(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address, struct leapioraid_hba_port *port)
-{
- if (ioc->sas_hba.sas_address == sas_address)
- return &ioc->sas_hba;
- else
- return leapioraid_scsihost_expander_find_by_sas_address(ioc,
- sas_address,
- port);
-}
-
-static inline u8
-leapioraid_transport_get_port_id_by_sas_phy(struct sas_phy *phy)
-{
- u8 port_id = 0xFF;
- struct leapioraid_hba_port *port = phy->hostdata;
-
- if (port)
- port_id = port->port_id;
- else
- BUG();
- return port_id;
-}
-
-static int
-leapioraid_transport_find_parent_node(
- struct LEAPIORAID_ADAPTER *ioc, struct sas_phy *phy)
-{
- unsigned long flags;
- struct leapioraid_hba_port *port = phy->hostdata;
-
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- if (leapioraid_transport_sas_node_find_by_sas_address(ioc,
- phy->identify.sas_address,
- port) == NULL) {
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- return -EINVAL;
- }
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- return 0;
-}
-
-static u8
-leapioraid_transport_get_port_id_by_rphy(struct LEAPIORAID_ADAPTER *ioc,
- struct sas_rphy *rphy)
-{
- struct leapioraid_raid_sas_node *sas_expander;
- struct leapioraid_sas_device *sas_device;
- unsigned long flags;
- u8 port_id = 0xFF;
-
- if (!rphy)
- return port_id;
- if (rphy->identify.device_type == SAS_EDGE_EXPANDER_DEVICE ||
- rphy->identify.device_type == SAS_FANOUT_EXPANDER_DEVICE) {
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- list_for_each_entry(sas_expander, &ioc->sas_expander_list, list) {
- if (sas_expander->rphy == rphy) {
- port_id = sas_expander->port->port_id;
- break;
- }
- }
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- } else if (rphy->identify.device_type == SAS_END_DEVICE) {
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_addr_and_rphy(
- ioc, rphy->identify.sas_address, rphy);
- if (sas_device) {
- port_id = sas_device->port->port_id;
- leapioraid_sas_device_put(sas_device);
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- }
- return port_id;
-}
-
-static enum sas_linkrate
-leapioraid_transport_convert_phy_link_rate(u8 link_rate)
-{
- enum sas_linkrate rc;
-
- switch (link_rate) {
- case LEAPIORAID_SAS_NEG_LINK_RATE_1_5:
- rc = SAS_LINK_RATE_1_5_GBPS;
- break;
- case LEAPIORAID_SAS_NEG_LINK_RATE_3_0:
- rc = SAS_LINK_RATE_3_0_GBPS;
- break;
- case LEAPIORAID_SAS_NEG_LINK_RATE_6_0:
- rc = SAS_LINK_RATE_6_0_GBPS;
- break;
- case LEAPIORAID_SAS_NEG_LINK_RATE_12_0:
- rc = SAS_LINK_RATE_12_0_GBPS;
- break;
- case LEAPIORAID_SAS_NEG_LINK_RATE_PHY_DISABLED:
- rc = SAS_PHY_DISABLED;
- break;
- case LEAPIORAID_SAS_NEG_LINK_RATE_NEGOTIATION_FAILED:
- rc = SAS_LINK_RATE_FAILED;
- break;
- case LEAPIORAID_SAS_NEG_LINK_RATE_PORT_SELECTOR:
- rc = SAS_SATA_PORT_SELECTOR;
- break;
- case LEAPIORAID_SAS_NEG_LINK_RATE_SMP_RESET_IN_PROGRESS:
- default:
- case LEAPIORAID_SAS_NEG_LINK_RATE_SATA_OOB_COMPLETE:
- case LEAPIORAID_SAS_NEG_LINK_RATE_UNKNOWN_LINK_RATE:
- rc = SAS_LINK_RATE_UNKNOWN;
- break;
- }
- return rc;
-}
-
-static int
-leapioraid_transport_set_identify(
- struct LEAPIORAID_ADAPTER *ioc, u16 handle,
- struct sas_identify *identify)
-{
- struct LeapioraidSasDevP0_t sas_device_pg0;
- struct LeapioraidCfgRep_t mpi_reply;
- u32 device_info;
- u32 ioc_status;
-
- if ((ioc->shost_recovery && !ioc->is_driver_loading)
- || ioc->pci_error_recovery) {
- pr_info("%s %s: host reset in progress!\n",
- __func__, ioc->name);
- return -EFAULT;
- }
- if ((leapioraid_config_get_sas_device_pg0
- (ioc, &mpi_reply, &sas_device_pg0,
- LEAPIORAID_SAS_DEVICE_PGAD_FORM_HANDLE, handle))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return -ENXIO;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s handle(0x%04x), ioc_status(0x%04x)\nfailure at %s:%d/%s()!\n",
- ioc->name, handle,
- ioc_status, __FILE__, __LINE__, __func__);
- return -EIO;
- }
- memset(identify, 0, sizeof(struct sas_identify));
- device_info = le32_to_cpu(sas_device_pg0.DeviceInfo);
- identify->sas_address = le64_to_cpu(sas_device_pg0.SASAddress);
- identify->phy_identifier = sas_device_pg0.PhyNum;
- switch (device_info & LEAPIORAID_SAS_DEVICE_INFO_MASK_DEVICE_TYPE) {
- case LEAPIORAID_SAS_DEVICE_INFO_NO_DEVICE:
- identify->device_type = SAS_PHY_UNUSED;
- break;
- case LEAPIORAID_SAS_DEVICE_INFO_END_DEVICE:
- identify->device_type = SAS_END_DEVICE;
- break;
- case LEAPIORAID_SAS_DEVICE_INFO_EDGE_EXPANDER:
- identify->device_type = SAS_EDGE_EXPANDER_DEVICE;
- break;
- case LEAPIORAID_SAS_DEVICE_INFO_FANOUT_EXPANDER:
- identify->device_type = SAS_FANOUT_EXPANDER_DEVICE;
- break;
- }
- if (device_info & LEAPIORAID_SAS_DEVICE_INFO_SSP_INITIATOR)
- identify->initiator_port_protocols |= SAS_PROTOCOL_SSP;
- if (device_info & LEAPIORAID_SAS_DEVICE_INFO_STP_INITIATOR)
- identify->initiator_port_protocols |= SAS_PROTOCOL_STP;
- if (device_info & LEAPIORAID_SAS_DEVICE_INFO_SMP_INITIATOR)
- identify->initiator_port_protocols |= SAS_PROTOCOL_SMP;
- if (device_info & LEAPIORAID_SAS_DEVICE_INFO_SATA_HOST)
- identify->initiator_port_protocols |= SAS_PROTOCOL_SATA;
- if (device_info & LEAPIORAID_SAS_DEVICE_INFO_SSP_TARGET)
- identify->target_port_protocols |= SAS_PROTOCOL_SSP;
- if (device_info & LEAPIORAID_SAS_DEVICE_INFO_STP_TARGET)
- identify->target_port_protocols |= SAS_PROTOCOL_STP;
- if (device_info & LEAPIORAID_SAS_DEVICE_INFO_SMP_TARGET)
- identify->target_port_protocols |= SAS_PROTOCOL_SMP;
- if (device_info & LEAPIORAID_SAS_DEVICE_INFO_SATA_DEVICE)
- identify->target_port_protocols |= SAS_PROTOCOL_SATA;
- return 0;
-}
-
-u8
-leapioraid_transport_done(struct LEAPIORAID_ADAPTER *ioc, u16 smid,
- u8 msix_index, u32 reply)
-{
- struct LeapioraidDefaultRep_t *mpi_reply;
-
- mpi_reply = leapioraid_base_get_reply_virt_addr(ioc, reply);
- if (ioc->transport_cmds.status == LEAPIORAID_CMD_NOT_USED)
- return 1;
- if (ioc->transport_cmds.smid != smid)
- return 1;
- ioc->transport_cmds.status |= LEAPIORAID_CMD_COMPLETE;
- if (mpi_reply) {
- memcpy(ioc->transport_cmds.reply, mpi_reply,
- mpi_reply->MsgLength * 4);
- ioc->transport_cmds.status |= LEAPIORAID_CMD_REPLY_VALID;
- }
- ioc->transport_cmds.status &= ~LEAPIORAID_CMD_PENDING;
- complete(&ioc->transport_cmds.done);
- return 1;
-}
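
leapioraid_transport_done() above is the completion half of the driver's internal-command pattern that the SMP passthrough helpers below rely on: the submitting thread marks the command pending, posts the request, and sleeps on a struct completion that this reply handler signals. Stripped of the driver specifics, the pattern is roughly as follows (my_cmd, the flag values, and my_post_request() are invented for illustration):

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

#define MY_CMD_PENDING	0x1
#define MY_CMD_COMPLETE	0x2

struct my_cmd {
	struct completion done;
	unsigned int status;
};

static int my_send_and_wait(struct my_cmd *cmd)		/* submitter side */
{
	cmd->status = MY_CMD_PENDING;
	init_completion(&cmd->done);
	my_post_request(cmd);				/* hypothetical: hand off to hardware */
	if (!wait_for_completion_timeout(&cmd->done, 10 * HZ))
		return -ETIMEDOUT;			/* no reply within 10 seconds */
	return 0;
}

static void my_reply_handler(struct my_cmd *cmd)	/* reply/interrupt side */
{
	cmd->status = MY_CMD_COMPLETE;
	complete(&cmd->done);
}

The real code additionally checks the LEAPIORAID_CMD_COMPLETE status bit rather than relying only on the timeout return value, so a command flushed by a controller reset can be told apart from a genuine timeout.
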
-
-#if defined(LEAPIORAID_WIDE_PORT_API)
-struct leapioraid_rep_manu_request {
- u8 smp_frame_type;
- u8 function;
- u8 reserved;
- u8 request_length;
-};
-
-struct leapioraid_rep_manu_reply {
- u8 smp_frame_type;
- u8 function;
- u8 function_result;
- u8 response_length;
- u16 expander_change_count;
- u8 reserved0[2];
- u8 sas_format;
- u8 reserved2[3];
- u8 vendor_id[SAS_EXPANDER_VENDOR_ID_LEN];
- u8 product_id[SAS_EXPANDER_PRODUCT_ID_LEN];
- u8 product_rev[SAS_EXPANDER_PRODUCT_REV_LEN];
- u8 component_vendor_id[SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN];
- u16 component_id;
- u8 component_revision_id;
- u8 reserved3;
- u8 vendor_specific[8];
-};
-
-static int
-leapioraid_transport_expander_report_manufacture(
- struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address,
- struct sas_expander_device *edev,
- u8 port_id)
-{
- struct LeapioraidSmpPassthroughReq_t *mpi_request;
- struct LeapioraidSmpPassthroughRep_t *mpi_reply;
- struct leapioraid_rep_manu_reply *manufacture_reply;
- struct leapioraid_rep_manu_request *manufacture_request;
- int rc;
- u16 smid;
- void *psge;
- u8 issue_reset = 0;
- void *data_out = NULL;
- dma_addr_t data_out_dma;
- dma_addr_t data_in_dma;
- size_t data_in_sz;
- size_t data_out_sz;
-
- if (ioc->shost_recovery || ioc->pci_error_recovery) {
- pr_info("%s %s: host reset in progress!\n",
- __func__, ioc->name);
- return -EFAULT;
- }
- mutex_lock(&ioc->transport_cmds.mutex);
- if (ioc->transport_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: transport_cmds in use\n",
- ioc->name, __func__);
- mutex_unlock(&ioc->transport_cmds.mutex);
- return -EAGAIN;
- }
- ioc->transport_cmds.status = LEAPIORAID_CMD_PENDING;
- rc = leapioraid_wait_for_ioc_to_operational(ioc, 10);
- if (rc)
- goto out;
- smid = leapioraid_base_get_smid(ioc, ioc->transport_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- rc = -EAGAIN;
- goto out;
- }
- rc = 0;
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->transport_cmds.smid = smid;
- data_out_sz = sizeof(struct leapioraid_rep_manu_request);
- data_in_sz = sizeof(struct leapioraid_rep_manu_reply);
- data_out = dma_alloc_coherent(&ioc->pdev->dev, data_out_sz + data_in_sz,
- &data_out_dma, GFP_ATOMIC);
- if (!data_out) {
- rc = -ENOMEM;
- leapioraid_base_free_smid(ioc, smid);
- goto out;
- }
- data_in_dma = data_out_dma + sizeof(struct leapioraid_rep_manu_request);
- manufacture_request = data_out;
- manufacture_request->smp_frame_type = 0x40;
- manufacture_request->function = 1;
- manufacture_request->reserved = 0;
- manufacture_request->request_length = 0;
- memset(mpi_request, 0, sizeof(struct LeapioraidSmpPassthroughReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SMP_PASSTHROUGH;
- mpi_request->PhysicalPort = port_id;
- mpi_request->SASAddress = cpu_to_le64(sas_address);
- mpi_request->RequestDataLength = cpu_to_le16(data_out_sz);
- psge = &mpi_request->SGL;
- ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
- data_in_sz);
- dtransportprintk(ioc,
- pr_info("%s report_manufacture - send to sas_addr(0x%016llx)\n",
- ioc->name,
- (unsigned long long)sas_address));
- init_completion(&ioc->transport_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->transport_cmds.done, 10 * HZ);
- if (!(ioc->transport_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- pr_err("%s %s: timeout\n",
- ioc->name, __func__);
- leapioraid_debug_dump_mf(mpi_request,
- sizeof(struct LeapioraidSmpPassthroughReq_t) / 4);
- if (!(ioc->transport_cmds.status & LEAPIORAID_CMD_RESET))
- issue_reset = 1;
- goto issue_host_reset;
- }
- dtransportprintk(ioc,
- pr_info("%s report_manufacture - complete\n", ioc->name));
- if (ioc->transport_cmds.status & LEAPIORAID_CMD_REPLY_VALID) {
- u8 *tmp;
-
- mpi_reply = ioc->transport_cmds.reply;
- dtransportprintk(ioc, pr_err(
- "%s report_manufacture - reply data transfer size(%d)\n",
- ioc->name,
- le16_to_cpu(mpi_reply->ResponseDataLength)));
- if (le16_to_cpu(mpi_reply->ResponseDataLength) !=
- sizeof(struct leapioraid_rep_manu_reply))
- goto out;
- manufacture_reply = data_out + sizeof(struct leapioraid_rep_manu_request);
- strscpy(edev->vendor_id, manufacture_reply->vendor_id,
- sizeof(edev->vendor_id));
- strscpy(edev->product_id, manufacture_reply->product_id,
- sizeof(edev->product_id));
- strscpy(edev->product_rev, manufacture_reply->product_rev,
- sizeof(edev->product_rev));
- edev->level = manufacture_reply->sas_format & 1;
- if (edev->level) {
- strscpy(edev->component_vendor_id,
- manufacture_reply->component_vendor_id,
- sizeof(edev->component_vendor_id));
- tmp = (u8 *) &manufacture_reply->component_id;
- edev->component_id = tmp[0] << 8 | tmp[1];
- edev->component_revision_id =
- manufacture_reply->component_revision_id;
- }
- } else
- dtransportprintk(ioc, pr_err(
- "%s report_manufacture - no reply\n",
- ioc->name));
-issue_host_reset:
- if (issue_reset)
- leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
-out:
- ioc->transport_cmds.status = LEAPIORAID_CMD_NOT_USED;
- if (data_out)
- dma_free_coherent(&ioc->pdev->dev, data_out_sz + data_in_sz,
- data_out, data_out_dma);
- mutex_unlock(&ioc->transport_cmds.mutex);
- return rc;
-}
-#endif
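
One detail of the report-manufacture helper above that recurs in the phy error-log helper later in this file: the SMP request and its response share a single coherent DMA allocation, with the response region placed immediately after the request, so one dma_alloc_coherent()/dma_free_coherent() pair covers both directions of the transfer. In isolation the layout is just the following (hypothetical placeholder frame types, and GFP_KERNEL used for the sketch where the driver uses GFP_ATOMIC):

#include <linux/dma-mapping.h>
#include <linux/pci.h>
#include <linux/types.h>

struct my_smp_request { u8 frame[16]; };	/* hypothetical placeholder frames */
struct my_smp_reply { u8 frame[64]; };

static int my_smp_roundtrip(struct pci_dev *pdev)
{
	size_t out_sz = sizeof(struct my_smp_request);
	size_t in_sz = sizeof(struct my_smp_reply);
	dma_addr_t out_dma, in_dma;
	void *buf;

	buf = dma_alloc_coherent(&pdev->dev, out_sz + in_sz, &out_dma, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	in_dma = out_dma + out_sz;	/* reply lands right after the request */

	/* ... fill the request at buf, point the SGL at out_dma/in_dma, submit ... */

	dma_free_coherent(&pdev->dev, out_sz + in_sz, buf, out_dma);
	return 0;
}
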
-
-static void
-leapioraid_transport_delete_port(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_port *leapioraid_port)
-{
- u64 sas_address = leapioraid_port->remote_identify.sas_address;
- struct leapioraid_hba_port *port = leapioraid_port->hba_port;
- enum sas_device_type device_type =
- leapioraid_port->remote_identify.device_type;
-
-#if defined(LEAPIORAID_WIDE_PORT_API)
- dev_info(&leapioraid_port->port->dev,
- "remove: sas_addr(0x%016llx)\n",
- (unsigned long long)sas_address);
-#endif
- ioc->logging_level |= LEAPIORAID_DEBUG_TRANSPORT;
- if (device_type == SAS_END_DEVICE)
- leapioraid_device_remove_by_sas_address(ioc, sas_address, port);
- else if (device_type == SAS_EDGE_EXPANDER_DEVICE ||
- device_type == SAS_FANOUT_EXPANDER_DEVICE)
- leapioraid_expander_remove(ioc, sas_address, port);
- ioc->logging_level &= ~LEAPIORAID_DEBUG_TRANSPORT;
-}
-
-#if defined(LEAPIORAID_WIDE_PORT_API)
-static void
-leapioraid_transport_delete_phy(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_port *leapioraid_port,
- struct leapioraid_sas_phy *leapioraid_phy)
-{
- u64 sas_address = leapioraid_port->remote_identify.sas_address;
-
- dev_info(&leapioraid_phy->phy->dev,
- "remove: sas_addr(0x%016llx), phy(%d)\n",
- (unsigned long long)sas_address, leapioraid_phy->phy_id);
- list_del(&leapioraid_phy->port_siblings);
- leapioraid_port->num_phys--;
- sas_port_delete_phy(leapioraid_port->port, leapioraid_phy->phy);
- leapioraid_phy->phy_belongs_to_port = 0;
-}
-
-static void
-leapioraid_transport_add_phy(struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_port *leapioraid_port,
- struct leapioraid_sas_phy *leapioraid_phy)
-{
- u64 sas_address = leapioraid_port->remote_identify.sas_address;
-
- dev_info(&leapioraid_phy->phy->dev,
- "add: sas_addr(0x%016llx), phy(%d)\n", (unsigned long long)
- sas_address, leapioraid_phy->phy_id);
- list_add_tail(&leapioraid_phy->port_siblings,
- &leapioraid_port->phy_list);
- leapioraid_port->num_phys++;
- sas_port_add_phy(leapioraid_port->port, leapioraid_phy->phy);
- leapioraid_phy->phy_belongs_to_port = 1;
-}
-
-void
-leapioraid_transport_add_phy_to_an_existing_port(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_sas_node *sas_node,
- struct leapioraid_sas_phy *leapioraid_phy,
- u64 sas_address,
- struct leapioraid_hba_port *port)
-{
- struct leapioraid_sas_port *leapioraid_port;
- struct leapioraid_sas_phy *phy_srch;
-
- if (leapioraid_phy->phy_belongs_to_port == 1)
- return;
- if (!port)
- return;
- list_for_each_entry(leapioraid_port, &sas_node->sas_port_list,
- port_list) {
- if (leapioraid_port->remote_identify.sas_address != sas_address)
- continue;
- if (leapioraid_port->hba_port != port)
- continue;
- list_for_each_entry(phy_srch, &leapioraid_port->phy_list,
- port_siblings) {
- if (phy_srch == leapioraid_phy)
- return;
- }
- leapioraid_transport_add_phy(ioc, leapioraid_port, leapioraid_phy);
- return;
- }
-}
-#endif
-
-void
-leapioraid_transport_del_phy_from_an_existing_port(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_sas_node *sas_node,
- struct leapioraid_sas_phy *leapioraid_phy)
-{
- struct leapioraid_sas_port *leapioraid_port, *next;
- struct leapioraid_sas_phy *phy_srch;
-
- if (leapioraid_phy->phy_belongs_to_port == 0)
- return;
- list_for_each_entry_safe(leapioraid_port, next,
- &sas_node->sas_port_list, port_list) {
- list_for_each_entry(phy_srch, &leapioraid_port->phy_list,
- port_siblings) {
- if (phy_srch != leapioraid_phy)
- continue;
-#if defined(LEAPIORAID_WIDE_PORT_API)
- if (leapioraid_port->num_phys == 1
- && !ioc->shost_recovery)
- leapioraid_transport_delete_port(ioc, leapioraid_port);
- else
- leapioraid_transport_delete_phy(ioc, leapioraid_port,
- leapioraid_phy);
-#else
- leapioraid_transport_delete_port(ioc, leapioraid_port);
-#endif
- return;
- }
- }
-}
-
-static void
-leapioraid_transport_sanity_check(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_raid_sas_node *sas_node, u64 sas_address,
- struct leapioraid_hba_port *port)
-{
- int i;
-
- for (i = 0; i < sas_node->num_phys; i++) {
- if (sas_node->phy[i].remote_identify.sas_address != sas_address
- || sas_node->phy[i].port != port)
- continue;
- if (sas_node->phy[i].phy_belongs_to_port == 1)
- leapioraid_transport_del_phy_from_an_existing_port(ioc,
- sas_node,
- &sas_node->phy
- [i]);
- }
-}
-
-struct leapioraid_sas_port *leapioraid_transport_port_add(
- struct LEAPIORAID_ADAPTER *ioc,
- u16 handle, u64 sas_address,
- struct leapioraid_hba_port *hba_port)
-{
- struct leapioraid_sas_phy *leapioraid_phy, *next;
- struct leapioraid_sas_port *leapioraid_port;
- unsigned long flags;
- struct leapioraid_raid_sas_node *sas_node;
- struct sas_rphy *rphy;
- struct leapioraid_sas_device *sas_device = NULL;
- int i;
-#if defined(LEAPIORAID_WIDE_PORT_API)
- struct sas_port *port;
-#endif
- struct leapioraid_virtual_phy *vphy = NULL;
-
- if (!hba_port) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return NULL;
- }
- leapioraid_port = kzalloc(sizeof(struct leapioraid_sas_port), GFP_KERNEL);
- if (!leapioraid_port)
- return NULL;
- INIT_LIST_HEAD(&leapioraid_port->port_list);
- INIT_LIST_HEAD(&leapioraid_port->phy_list);
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- sas_node = leapioraid_transport_sas_node_find_by_sas_address(
- ioc,
- sas_address,
- hba_port);
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- if (!sas_node) {
- pr_err("%s %s: Could not find parent sas_address(0x%016llx)!\n",
- ioc->name,
- __func__, (unsigned long long)sas_address);
- goto out_fail;
- }
- if ((leapioraid_transport_set_identify(ioc, handle,
- &leapioraid_port->remote_identify))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out_fail;
- }
- if (leapioraid_port->remote_identify.device_type == SAS_PHY_UNUSED) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out_fail;
- }
- leapioraid_port->hba_port = hba_port;
- leapioraid_transport_sanity_check(ioc, sas_node,
- leapioraid_port->remote_identify.sas_address,
- hba_port);
- for (i = 0; i < sas_node->num_phys; i++) {
- if (sas_node->phy[i].remote_identify.sas_address !=
- leapioraid_port->remote_identify.sas_address ||
- sas_node->phy[i].port != hba_port)
- continue;
- list_add_tail(&sas_node->phy[i].port_siblings,
- &leapioraid_port->phy_list);
- leapioraid_port->num_phys++;
- if (sas_node->handle <= ioc->sas_hba.num_phys) {
- if (!sas_node->phy[i].hba_vphy) {
- hba_port->phy_mask |= (1 << i);
- continue;
- }
- vphy = leapioraid_get_vphy_by_phy(ioc, hba_port, i);
- if (!vphy) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out_fail;
- }
- }
- }
- if (!leapioraid_port->num_phys) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out_fail;
- }
- if (leapioraid_port->remote_identify.device_type == SAS_END_DEVICE) {
- sas_device = leapioraid_get_sdev_by_addr(ioc,
- leapioraid_port->remote_identify.sas_address,
- leapioraid_port->hba_port);
- if (!sas_device) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out_fail;
- }
- sas_device->pend_sas_rphy_add = 1;
- }
-#if defined(LEAPIORAID_WIDE_PORT_API)
- if (!sas_node->parent_dev) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out_fail;
- }
- port = sas_port_alloc_num(sas_node->parent_dev);
- if ((sas_port_add(port))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- goto out_fail;
- }
- list_for_each_entry(leapioraid_phy, &leapioraid_port->phy_list,
- port_siblings) {
- if ((ioc->logging_level & LEAPIORAID_DEBUG_TRANSPORT))
- dev_info(&port->dev,
- "add: handle(0x%04x), sas_addr(0x%016llx), phy(%d)\n",
- handle,
- (unsigned long long)
- leapioraid_port->remote_identify.sas_address,
- leapioraid_phy->phy_id);
- sas_port_add_phy(port, leapioraid_phy->phy);
- leapioraid_phy->phy_belongs_to_port = 1;
- leapioraid_phy->port = hba_port;
- }
- leapioraid_port->port = port;
- if (leapioraid_port->remote_identify.device_type == SAS_END_DEVICE) {
- rphy = sas_end_device_alloc(port);
- sas_device->rphy = rphy;
- if (sas_node->handle <= ioc->sas_hba.num_phys) {
- if (!vphy)
- hba_port->sas_address = sas_device->sas_address;
- else
- vphy->sas_address = sas_device->sas_address;
- }
- } else {
- rphy = sas_expander_alloc(port,
- leapioraid_port->remote_identify.device_type);
- if (sas_node->handle <= ioc->sas_hba.num_phys)
- hba_port->sas_address =
- leapioraid_port->remote_identify.sas_address;
- }
-#else
- leapioraid_phy =
- list_entry(leapioraid_port->phy_list.next, struct leapioraid_sas_phy,
- port_siblings);
- if (leapioraid_port->remote_identify.device_type == SAS_END_DEVICE) {
- rphy = sas_end_device_alloc(leapioraid_phy->phy);
- sas_device->rphy = rphy;
- } else
- rphy = sas_expander_alloc(leapioraid_phy->phy,
- leapioraid_port->remote_identify.device_type);
-#endif
- rphy->identify = leapioraid_port->remote_identify;
- if ((sas_rphy_add(rphy))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- }
- if (leapioraid_port->remote_identify.device_type == SAS_END_DEVICE) {
- sas_device->pend_sas_rphy_add = 0;
- leapioraid_sas_device_put(sas_device);
- }
- dev_info(&rphy->dev,
- "%s: added: handle(0x%04x), sas_addr(0x%016llx)\n",
- __func__, handle, (unsigned long long)
- leapioraid_port->remote_identify.sas_address);
- leapioraid_port->rphy = rphy;
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- list_add_tail(&leapioraid_port->port_list, &sas_node->sas_port_list);
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
-#if defined(LEAPIORAID_WIDE_PORT_API)
- if (leapioraid_port->remote_identify.device_type ==
- LEAPIORAID_SAS_DEVICE_INFO_EDGE_EXPANDER ||
- leapioraid_port->remote_identify.device_type ==
- LEAPIORAID_SAS_DEVICE_INFO_FANOUT_EXPANDER)
- leapioraid_transport_expander_report_manufacture(ioc,
- leapioraid_port->remote_identify.sas_address,
- rphy_to_expander_device
- (rphy),
- hba_port->port_id);
-#endif
- return leapioraid_port;
-out_fail:
- list_for_each_entry_safe(leapioraid_phy, next,
- &leapioraid_port->phy_list, port_siblings)
- list_del(&leapioraid_phy->port_siblings);
- kfree(leapioraid_port);
- return NULL;
-}
-
-void
-leapioraid_transport_port_remove(struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address, u64 sas_address_parent,
- struct leapioraid_hba_port *port)
-{
- int i;
- unsigned long flags;
- struct leapioraid_sas_port *leapioraid_port, *next;
- struct leapioraid_raid_sas_node *sas_node;
- u8 found = 0;
-#if defined(LEAPIORAID_WIDE_PORT_API)
- struct leapioraid_sas_phy *leapioraid_phy, *next_phy;
-#endif
- struct leapioraid_hba_port *hba_port, *hba_port_next = NULL;
- struct leapioraid_virtual_phy *vphy, *vphy_next = NULL;
-
- if (!port)
- return;
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- sas_node = leapioraid_transport_sas_node_find_by_sas_address(
- ioc,
- sas_address_parent,
- port);
- if (!sas_node) {
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- return;
- }
- list_for_each_entry_safe(leapioraid_port, next,
- &sas_node->sas_port_list, port_list) {
- if (leapioraid_port->remote_identify.sas_address != sas_address)
- continue;
- if (leapioraid_port->hba_port != port)
- continue;
- found = 1;
- list_del(&leapioraid_port->port_list);
- goto out;
- }
-out:
- if (!found) {
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- return;
- }
- if ((sas_node->handle <= ioc->sas_hba.num_phys) &&
- (ioc->multipath_on_hba)) {
- if (port->vphys_mask) {
- list_for_each_entry_safe(vphy, vphy_next,
- &port->vphys_list, list) {
- if (vphy->sas_address != sas_address)
- continue;
- pr_err(
- "%s remove vphy entry: %p of port:%p,\n\t\t"
- "from %d port's vphys list\n",
- ioc->name,
- vphy,
- port,
- port->port_id);
- port->vphys_mask &= ~vphy->phy_mask;
- list_del(&vphy->list);
- kfree(vphy);
- }
- if (!port->vphys_mask && !port->sas_address) {
- pr_err(
- "%s remove hba_port entry: %p port: %d\n\t\t"
- "from hba_port list\n",
- ioc->name,
- port,
- port->port_id);
- list_del(&port->list);
- kfree(port);
- }
- }
- list_for_each_entry_safe(hba_port, hba_port_next,
- &ioc->port_table_list, list) {
- if (hba_port != port)
- continue;
- if (hba_port->sas_address != sas_address)
- continue;
- if (!port->vphys_mask) {
- pr_err(
- "%s remove hba_port entry: %p port: %d\n\t\t"
- "from hba_port list\n",
- ioc->name,
- hba_port,
- hba_port->port_id);
- list_del(&hba_port->list);
- kfree(hba_port);
- } else {
- pr_err(
- "%s clearing sas_address from hba_port entry: %p\n\t\t"
- "port: %d from hba_port list\n",
- ioc->name,
- hba_port,
- hba_port->port_id);
- port->sas_address = 0;
- }
- break;
- }
- }
- for (i = 0; i < sas_node->num_phys; i++) {
- if (sas_node->phy[i].remote_identify.sas_address == sas_address) {
- memset(&sas_node->phy[i].remote_identify, 0,
- sizeof(struct sas_identify));
- sas_node->phy[i].hba_vphy = 0;
- }
- }
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
-#if defined(LEAPIORAID_WIDE_PORT_API)
- list_for_each_entry_safe(leapioraid_phy, next_phy,
- &leapioraid_port->phy_list, port_siblings) {
- if ((ioc->logging_level & LEAPIORAID_DEBUG_TRANSPORT))
- pr_info("%s %s: remove: sas_addr(0x%016llx), phy(%d)\n",
- ioc->name, __func__,
- (unsigned long long)
- leapioraid_port->remote_identify.sas_address,
- leapioraid_phy->phy_id);
- leapioraid_phy->phy_belongs_to_port = 0;
- if (!ioc->remove_host)
- sas_port_delete_phy(leapioraid_port->port,
- leapioraid_phy->phy);
- list_del(&leapioraid_phy->port_siblings);
- }
- if (!ioc->remove_host)
- sas_port_delete(leapioraid_port->port);
- pr_info("%s %s: removed: sas_addr(0x%016llx)\n",
- ioc->name, __func__, (unsigned long long)sas_address);
-#else
- if ((ioc->logging_level & LEAPIORAID_DEBUG_TRANSPORT))
- pr_info("%s %s: remove: sas_addr(0x%016llx)\n",
- ioc->name, __func__,
- (unsigned long long)sas_address);
- if (!ioc->remove_host)
- sas_rphy_delete(leapioraid_port->rphy);
- pr_info("%s %s: removed: sas_addr(0x%016llx)\n",
- ioc->name, __func__, (unsigned long long)sas_address);
-#endif
- kfree(leapioraid_port);
-}
-
-int
-leapioraid_transport_add_host_phy(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_phy *leapioraid_phy,
- struct LeapioraidSasPhyP0_t phy_pg0,
- struct device *parent_dev)
-{
- struct sas_phy *phy;
- int phy_index = leapioraid_phy->phy_id;
-
- INIT_LIST_HEAD(&leapioraid_phy->port_siblings);
- phy = sas_phy_alloc(parent_dev, phy_index);
- if (!phy) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return -1;
- }
- if ((leapioraid_transport_set_identify(ioc, leapioraid_phy->handle,
- &leapioraid_phy->identify))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- sas_phy_free(phy);
- return -1;
- }
- phy->identify = leapioraid_phy->identify;
- leapioraid_phy->attached_handle =
- le16_to_cpu(phy_pg0.AttachedDevHandle);
- if (leapioraid_phy->attached_handle)
- leapioraid_transport_set_identify(
- ioc, leapioraid_phy->attached_handle,
- &leapioraid_phy->remote_identify);
- phy->identify.phy_identifier = leapioraid_phy->phy_id;
- phy->negotiated_linkrate =
- leapioraid_transport_convert_phy_link_rate(
- phy_pg0.NegotiatedLinkRate &
- LEAPIORAID_SAS_NEG_LINK_RATE_MASK_PHYSICAL);
- phy->minimum_linkrate_hw =
- leapioraid_transport_convert_phy_link_rate(
- phy_pg0.HwLinkRate &
- LEAPIORAID_SAS_HWRATE_MIN_RATE_MASK);
- phy->maximum_linkrate_hw =
- leapioraid_transport_convert_phy_link_rate(
- phy_pg0.HwLinkRate >> 4);
- phy->minimum_linkrate =
- leapioraid_transport_convert_phy_link_rate(
- phy_pg0.ProgrammedLinkRate &
- LEAPIORAID_SAS_PRATE_MIN_RATE_MASK);
- phy->maximum_linkrate =
- leapioraid_transport_convert_phy_link_rate(
- phy_pg0.ProgrammedLinkRate >> 4);
- phy->hostdata = leapioraid_phy->port;
-#if !defined(LEAPIORAID_WIDE_PORT_API_PLUS)
- phy->local_attached = 1;
-#endif
-#if !defined(LEAPIORAID_WIDE_PORT_API)
- phy->port_identifier = phy_index;
-#endif
- if ((sas_phy_add(phy))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- sas_phy_free(phy);
- return -1;
- }
- if ((ioc->logging_level & LEAPIORAID_DEBUG_TRANSPORT))
- dev_info(&phy->dev,
- "add: handle(0x%04x), sas_addr(0x%016llx)\n"
- "\tattached_handle(0x%04x), sas_addr(0x%016llx)\n",
- leapioraid_phy->handle, (unsigned long long)
- leapioraid_phy->identify.sas_address,
- leapioraid_phy->attached_handle, (unsigned long long)
- leapioraid_phy->remote_identify.sas_address);
- leapioraid_phy->phy = phy;
- return 0;
-}
-
-int
-leapioraid_transport_add_expander_phy(
- struct LEAPIORAID_ADAPTER *ioc,
- struct leapioraid_sas_phy *leapioraid_phy,
- struct LeapioraidExpanderP1_t expander_pg1,
- struct device *parent_dev)
-{
- struct sas_phy *phy;
- int phy_index = leapioraid_phy->phy_id;
-
- INIT_LIST_HEAD(&leapioraid_phy->port_siblings);
- phy = sas_phy_alloc(parent_dev, phy_index);
- if (!phy) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return -1;
- }
- if ((leapioraid_transport_set_identify(ioc, leapioraid_phy->handle,
- &leapioraid_phy->identify))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- sas_phy_free(phy);
- return -1;
- }
- phy->identify = leapioraid_phy->identify;
- leapioraid_phy->attached_handle =
- le16_to_cpu(expander_pg1.AttachedDevHandle);
- if (leapioraid_phy->attached_handle)
- leapioraid_transport_set_identify(
- ioc, leapioraid_phy->attached_handle,
- &leapioraid_phy->remote_identify);
- phy->identify.phy_identifier = leapioraid_phy->phy_id;
- phy->negotiated_linkrate =
- leapioraid_transport_convert_phy_link_rate(
- expander_pg1.NegotiatedLinkRate &
- LEAPIORAID_SAS_NEG_LINK_RATE_MASK_PHYSICAL);
- phy->minimum_linkrate_hw =
- leapioraid_transport_convert_phy_link_rate(
- expander_pg1.HwLinkRate &
- LEAPIORAID_SAS_HWRATE_MIN_RATE_MASK);
- phy->maximum_linkrate_hw =
- leapioraid_transport_convert_phy_link_rate(
- expander_pg1.HwLinkRate >> 4);
- phy->minimum_linkrate =
- leapioraid_transport_convert_phy_link_rate(
- expander_pg1.ProgrammedLinkRate &
- LEAPIORAID_SAS_PRATE_MIN_RATE_MASK);
- phy->maximum_linkrate =
- leapioraid_transport_convert_phy_link_rate(
- expander_pg1.ProgrammedLinkRate >> 4);
- phy->hostdata = leapioraid_phy->port;
-#if !defined(LEAPIORAID_WIDE_PORT_API)
- phy->port_identifier = phy_index;
-#endif
- if ((sas_phy_add(phy))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- sas_phy_free(phy);
- return -1;
- }
- if ((ioc->logging_level & LEAPIORAID_DEBUG_TRANSPORT))
- dev_info(&phy->dev,
- "add: handle(0x%04x), sas_addr(0x%016llx)\n"
- "\tattached_handle(0x%04x), sas_addr(0x%016llx)\n",
- leapioraid_phy->handle, (unsigned long long)
- leapioraid_phy->identify.sas_address,
- leapioraid_phy->attached_handle, (unsigned long long)
- leapioraid_phy->remote_identify.sas_address);
- leapioraid_phy->phy = phy;
- return 0;
-}
-
-void
-leapioraid_transport_update_links(struct LEAPIORAID_ADAPTER *ioc,
- u64 sas_address, u16 handle, u8 phy_number,
- u8 link_rate, struct leapioraid_hba_port *port)
-{
- unsigned long flags;
- struct leapioraid_raid_sas_node *sas_node;
- struct leapioraid_sas_phy *leapioraid_phy;
- struct leapioraid_hba_port *hba_port = NULL;
-
- if (ioc->shost_recovery || ioc->pci_error_recovery)
- return;
- spin_lock_irqsave(&ioc->sas_node_lock, flags);
- sas_node = leapioraid_transport_sas_node_find_by_sas_address(ioc,
- sas_address, port);
- if (!sas_node) {
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- return;
- }
- leapioraid_phy = &sas_node->phy[phy_number];
- leapioraid_phy->attached_handle = handle;
- spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
- if (handle && (link_rate >= LEAPIORAID_SAS_NEG_LINK_RATE_1_5)) {
- leapioraid_transport_set_identify(ioc, handle,
- &leapioraid_phy->remote_identify);
-#if defined(LEAPIORAID_WIDE_PORT_API)
- if ((sas_node->handle <= ioc->sas_hba.num_phys) &&
- (ioc->multipath_on_hba)) {
- list_for_each_entry(hba_port,
- &ioc->port_table_list, list) {
- if (hba_port->sas_address == sas_address &&
- hba_port == port)
- hba_port->phy_mask |=
- (1 << leapioraid_phy->phy_id);
- }
- }
- leapioraid_transport_add_phy_to_an_existing_port(ioc, sas_node,
- leapioraid_phy,
- leapioraid_phy->remote_identify.sas_address,
- port);
-#endif
- } else
- memset(&leapioraid_phy->remote_identify, 0, sizeof(struct
- sas_identify));
- if (leapioraid_phy->phy)
- leapioraid_phy->phy->negotiated_linkrate =
- leapioraid_transport_convert_phy_link_rate(link_rate);
- if ((ioc->logging_level & LEAPIORAID_DEBUG_TRANSPORT))
- dev_info(&leapioraid_phy->phy->dev,
- "refresh: parent sas_addr(0x%016llx),\n"
- "\tlink_rate(0x%02x), phy(%d)\n"
- "\tattached_handle(0x%04x), sas_addr(0x%016llx)\n",
- (unsigned long long)sas_address,
- link_rate, phy_number, handle, (unsigned long long)
- leapioraid_phy->remote_identify.sas_address);
-}
-
-static inline void *phy_to_ioc(struct sas_phy *phy)
-{
- struct Scsi_Host *shost = dev_to_shost(phy->dev.parent);
-
- return leapioraid_shost_private(shost);
-}
-
-static inline void *rphy_to_ioc(struct sas_rphy *rphy)
-{
- struct Scsi_Host *shost = dev_to_shost(rphy->dev.parent->parent);
-
- return leapioraid_shost_private(shost);
-}
-
-struct leapioraid_phy_error_log_request {
- u8 smp_frame_type;
- u8 function;
- u8 allocated_response_length;
- u8 request_length;
- u8 reserved_1[5];
- u8 phy_identifier;
- u8 reserved_2[2];
-};
-
-struct leapioraid_phy_error_log_reply {
- u8 smp_frame_type;
- u8 function;
- u8 function_result;
- u8 response_length;
- __be16 expander_change_count;
- u8 reserved_1[3];
- u8 phy_identifier;
- u8 reserved_2[2];
- __be32 invalid_dword;
- __be32 running_disparity_error;
- __be32 loss_of_dword_sync;
- __be32 phy_reset_problem;
-};
-
-static int
-leapioraid_transport_get_expander_phy_error_log(
- struct LEAPIORAID_ADAPTER *ioc, struct sas_phy *phy)
-{
- struct LeapioraidSmpPassthroughReq_t *mpi_request;
- struct LeapioraidSmpPassthroughRep_t *mpi_reply;
- struct leapioraid_phy_error_log_request *phy_error_log_request;
- struct leapioraid_phy_error_log_reply *phy_error_log_reply;
- int rc;
- u16 smid;
- void *psge;
- u8 issue_reset = 0;
- void *data_out = NULL;
- dma_addr_t data_out_dma;
- u32 sz;
-
- if (ioc->shost_recovery || ioc->pci_error_recovery) {
- pr_info("%s %s: host reset in progress!\n",
- __func__, ioc->name);
- return -EFAULT;
- }
- mutex_lock(&ioc->transport_cmds.mutex);
- if (ioc->transport_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: transport_cmds in use\n",
- ioc->name, __func__);
- mutex_unlock(&ioc->transport_cmds.mutex);
- return -EAGAIN;
- }
- ioc->transport_cmds.status = LEAPIORAID_CMD_PENDING;
- rc = leapioraid_wait_for_ioc_to_operational(ioc, 10);
- if (rc)
- goto out;
- smid = leapioraid_base_get_smid(ioc, ioc->transport_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- rc = -EAGAIN;
- goto out;
- }
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->transport_cmds.smid = smid;
- sz = sizeof(struct leapioraid_phy_error_log_request) +
- sizeof(struct leapioraid_phy_error_log_reply);
- data_out =
- dma_alloc_coherent(&ioc->pdev->dev, sz, &data_out_dma,
- GFP_ATOMIC);
- if (!data_out) {
- pr_err("failure at %s:%d/%s()!\n", __FILE__,
- __LINE__, __func__);
- rc = -ENOMEM;
- leapioraid_base_free_smid(ioc, smid);
- goto out;
- }
- rc = -EINVAL;
- memset(data_out, 0, sz);
- phy_error_log_request = data_out;
- phy_error_log_request->smp_frame_type = 0x40;
- phy_error_log_request->function = 0x11;
- phy_error_log_request->request_length = 2;
- phy_error_log_request->allocated_response_length = 0;
- phy_error_log_request->phy_identifier = phy->number;
- memset(mpi_request, 0, sizeof(struct LeapioraidSmpPassthroughReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SMP_PASSTHROUGH;
- mpi_request->PhysicalPort = leapioraid_transport_get_port_id_by_sas_phy(phy);
- mpi_request->VF_ID = 0;
- mpi_request->VP_ID = 0;
- mpi_request->SASAddress = cpu_to_le64(phy->identify.sas_address);
- mpi_request->RequestDataLength =
- cpu_to_le16(sizeof(struct leapioraid_phy_error_log_request));
- psge = &mpi_request->SGL;
- ioc->build_sg(ioc, psge, data_out_dma,
- sizeof(struct leapioraid_phy_error_log_request),
- data_out_dma + sizeof(struct leapioraid_phy_error_log_request),
- sizeof(struct leapioraid_phy_error_log_reply));
- dtransportprintk(ioc, pr_info(
- "%s phy_error_log - send to sas_addr(0x%016llx), phy(%d)\n",
- ioc->name,
- (unsigned long long)phy->identify.sas_address,
- phy->number));
- init_completion(&ioc->transport_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->transport_cmds.done, 10 * HZ);
- if (!(ioc->transport_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- pr_err("%s %s: timeout\n",
- ioc->name, __func__);
- leapioraid_debug_dump_mf(mpi_request,
- sizeof(struct LeapioraidSmpPassthroughReq_t) / 4);
- if (!(ioc->transport_cmds.status & LEAPIORAID_CMD_RESET))
- issue_reset = 1;
- goto issue_host_reset;
- }
- dtransportprintk(ioc, pr_info("%s phy_error_log - complete\n", ioc->name));
- if (ioc->transport_cmds.status & LEAPIORAID_CMD_REPLY_VALID) {
- mpi_reply = ioc->transport_cmds.reply;
- dtransportprintk(ioc, pr_err(
- "%s phy_error_log - reply data transfer size(%d)\n",
- ioc->name,
- le16_to_cpu(mpi_reply->ResponseDataLength)));
- if (le16_to_cpu(mpi_reply->ResponseDataLength) !=
- sizeof(struct leapioraid_phy_error_log_reply))
- goto out;
- phy_error_log_reply = data_out +
- sizeof(struct leapioraid_phy_error_log_request);
- dtransportprintk(ioc, pr_err(
- "%s phy_error_log - function_result(%d)\n",
- ioc->name,
- phy_error_log_reply->function_result));
- phy->invalid_dword_count =
- be32_to_cpu(phy_error_log_reply->invalid_dword);
- phy->running_disparity_error_count =
- be32_to_cpu(phy_error_log_reply->running_disparity_error);
- phy->loss_of_dword_sync_count =
- be32_to_cpu(phy_error_log_reply->loss_of_dword_sync);
- phy->phy_reset_problem_count =
- be32_to_cpu(phy_error_log_reply->phy_reset_problem);
- rc = 0;
- } else
- dtransportprintk(ioc, pr_err(
- "%s phy_error_log - no reply\n",
- ioc->name));
-issue_host_reset:
- if (issue_reset)
- leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
-out:
- ioc->transport_cmds.status = LEAPIORAID_CMD_NOT_USED;
- if (data_out)
- dma_free_coherent(&ioc->pdev->dev, sz, data_out, data_out_dma);
- mutex_unlock(&ioc->transport_cmds.mutex);
- return rc;
-}
-
-static int
-leapioraid_transport_get_linkerrors(struct sas_phy *phy)
-{
- struct LEAPIORAID_ADAPTER *ioc = phy_to_ioc(phy);
- struct LeapioraidCfgRep_t mpi_reply;
- struct LeapioraidSasPhyP1_t phy_pg1;
- int rc = 0;
-
- rc = leapioraid_transport_find_parent_node(ioc, phy);
- if (rc)
- return rc;
- if (phy->identify.sas_address != ioc->sas_hba.sas_address)
- return leapioraid_transport_get_expander_phy_error_log(ioc, phy);
- if ((leapioraid_config_get_phy_pg1(ioc, &mpi_reply, &phy_pg1,
- phy->number))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return -ENXIO;
- }
- if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo)
- pr_info("%s phy(%d), ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name,
- phy->number,
- le16_to_cpu(mpi_reply.IOCStatus),
- le32_to_cpu(mpi_reply.IOCLogInfo));
- phy->invalid_dword_count = le32_to_cpu(phy_pg1.InvalidDwordCount);
- phy->running_disparity_error_count =
- le32_to_cpu(phy_pg1.RunningDisparityErrorCount);
- phy->loss_of_dword_sync_count =
- le32_to_cpu(phy_pg1.LossDwordSynchCount);
- phy->phy_reset_problem_count =
- le32_to_cpu(phy_pg1.PhyResetProblemCount);
- return 0;
-}
-
-static int
-leapioraid_transport_get_enclosure_identifier(
- struct sas_rphy *rphy, u64 *identifier)
-{
- struct LEAPIORAID_ADAPTER *ioc = rphy_to_ioc(rphy);
- struct leapioraid_sas_device *sas_device;
- unsigned long flags;
- int rc;
-
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_addr_and_rphy(ioc,
- rphy->identify.sas_address, rphy);
- if (sas_device) {
- *identifier = sas_device->enclosure_logical_id;
- rc = 0;
- leapioraid_sas_device_put(sas_device);
- } else {
- *identifier = 0;
- rc = -ENXIO;
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- return rc;
-}
-
-static int
-leapioraid_transport_get_bay_identifier(struct sas_rphy *rphy)
-{
- struct LEAPIORAID_ADAPTER *ioc = rphy_to_ioc(rphy);
- struct leapioraid_sas_device *sas_device;
- unsigned long flags;
- int rc;
-
- spin_lock_irqsave(&ioc->sas_device_lock, flags);
- sas_device = __leapioraid_get_sdev_by_addr_and_rphy(ioc,
- rphy->identify.sas_address, rphy);
- if (sas_device) {
- rc = sas_device->slot;
- leapioraid_sas_device_put(sas_device);
- } else {
- rc = -ENXIO;
- }
- spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
- return rc;
-}
-
-struct leapioraid_phy_control_request {
- u8 smp_frame_type;
- u8 function;
- u8 allocated_response_length;
- u8 request_length;
- u16 expander_change_count;
- u8 reserved_1[3];
- u8 phy_identifier;
- u8 phy_operation;
- u8 reserved_2[13];
- u64 attached_device_name;
- u8 programmed_min_physical_link_rate;
- u8 programmed_max_physical_link_rate;
- u8 reserved_3[6];
-};
-
-struct leapioraid_phy_control_reply {
- u8 smp_frame_type;
- u8 function;
- u8 function_result;
- u8 response_length;
-};
-
-#define LEAPIORAID_SMP_PHY_CONTROL_LINK_RESET (0x01)
-#define LEAPIORAID_SMP_PHY_CONTROL_HARD_RESET (0x02)
-#define LEAPIORAID_SMP_PHY_CONTROL_DISABLE (0x03)
-static int
-leapioraid_transport_expander_phy_control(
- struct LEAPIORAID_ADAPTER *ioc,
- struct sas_phy *phy, u8 phy_operation)
-{
- struct LeapioraidSmpPassthroughReq_t *mpi_request;
- struct LeapioraidSmpPassthroughRep_t *mpi_reply;
- struct leapioraid_phy_control_request *phy_control_request;
- struct leapioraid_phy_control_reply *phy_control_reply;
- int rc;
- u16 smid;
- void *psge;
- u8 issue_reset = 0;
- void *data_out = NULL;
- dma_addr_t data_out_dma;
- u32 sz;
-
- if (ioc->shost_recovery || ioc->pci_error_recovery) {
- pr_info("%s %s: host reset in progress!\n",
- __func__, ioc->name);
- return -EFAULT;
- }
- mutex_lock(&ioc->transport_cmds.mutex);
- if (ioc->transport_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: transport_cmds in use\n",
- ioc->name, __func__);
- mutex_unlock(&ioc->transport_cmds.mutex);
- return -EAGAIN;
- }
- ioc->transport_cmds.status = LEAPIORAID_CMD_PENDING;
- rc = leapioraid_wait_for_ioc_to_operational(ioc, 10);
- if (rc)
- goto out;
- smid = leapioraid_base_get_smid(ioc, ioc->transport_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- rc = -EAGAIN;
- goto out;
- }
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->transport_cmds.smid = smid;
- sz = sizeof(struct leapioraid_phy_control_request) +
- sizeof(struct leapioraid_phy_control_reply);
- data_out =
- dma_alloc_coherent(&ioc->pdev->dev, sz, &data_out_dma,
- GFP_ATOMIC);
- if (!data_out) {
- pr_err("failure at %s:%d/%s()!\n", __FILE__,
- __LINE__, __func__);
- rc = -ENOMEM;
- leapioraid_base_free_smid(ioc, smid);
- goto out;
- }
- rc = -EINVAL;
- memset(data_out, 0, sz);
- phy_control_request = data_out;
- phy_control_request->smp_frame_type = 0x40;
- phy_control_request->function = 0x91;
- phy_control_request->request_length = 9;
- phy_control_request->allocated_response_length = 0;
- phy_control_request->phy_identifier = phy->number;
- phy_control_request->phy_operation = phy_operation;
- phy_control_request->programmed_min_physical_link_rate =
- phy->minimum_linkrate << 4;
- phy_control_request->programmed_max_physical_link_rate =
- phy->maximum_linkrate << 4;
- memset(mpi_request, 0, sizeof(struct LeapioraidSmpPassthroughReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SMP_PASSTHROUGH;
- mpi_request->PhysicalPort = leapioraid_transport_get_port_id_by_sas_phy(phy);
- mpi_request->VF_ID = 0;
- mpi_request->VP_ID = 0;
- mpi_request->SASAddress = cpu_to_le64(phy->identify.sas_address);
- mpi_request->RequestDataLength =
- cpu_to_le16(sizeof(struct leapioraid_phy_error_log_request));
- psge = &mpi_request->SGL;
- ioc->build_sg(ioc, psge, data_out_dma,
- sizeof(struct leapioraid_phy_control_request),
- data_out_dma + sizeof(struct leapioraid_phy_control_request),
- sizeof(struct leapioraid_phy_control_reply));
- dtransportprintk(ioc, pr_info(
- "%s phy_control - send to sas_addr(0x%016llx), phy(%d), opcode(%d)\n",
- ioc->name,
- (unsigned long long)phy->identify.sas_address,
- phy->number, phy_operation));
- init_completion(&ioc->transport_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->transport_cmds.done, 10 * HZ);
- if (!(ioc->transport_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- pr_err("%s %s: timeout\n",
- ioc->name, __func__);
- leapioraid_debug_dump_mf(mpi_request,
- sizeof(struct LeapioraidSmpPassthroughReq_t) / 4);
- if (!(ioc->transport_cmds.status & LEAPIORAID_CMD_RESET))
- issue_reset = 1;
- goto issue_host_reset;
- }
- dtransportprintk(ioc, pr_info(
- "%s phy_control - complete\n", ioc->name));
- if (ioc->transport_cmds.status & LEAPIORAID_CMD_REPLY_VALID) {
- mpi_reply = ioc->transport_cmds.reply;
- dtransportprintk(ioc, pr_err(
- "%s phy_control - reply data transfer size(%d)\n",
- ioc->name,
- le16_to_cpu(mpi_reply->ResponseDataLength)));
- if (le16_to_cpu(mpi_reply->ResponseDataLength) !=
- sizeof(struct leapioraid_phy_control_reply))
- goto out;
- phy_control_reply = data_out +
- sizeof(struct leapioraid_phy_control_request);
- dtransportprintk(ioc, pr_err(
- "%s phy_control - function_result(%d)\n",
- ioc->name,
- phy_control_reply->function_result));
- rc = 0;
- } else
- dtransportprintk(ioc, pr_err(
- "%s phy_control - no reply\n",
- ioc->name));
-issue_host_reset:
- if (issue_reset)
- leapioraid_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
-out:
- ioc->transport_cmds.status = LEAPIORAID_CMD_NOT_USED;
- if (data_out)
- dma_free_coherent(&ioc->pdev->dev, sz, data_out, data_out_dma);
- mutex_unlock(&ioc->transport_cmds.mutex);
- return rc;
-}
-
-static int
-leapioraid_transport_phy_reset(struct sas_phy *phy, int hard_reset)
-{
- struct LEAPIORAID_ADAPTER *ioc = phy_to_ioc(phy);
- struct LeapioraidSasIoUnitControlRep_t mpi_reply;
- struct LeapioraidSasIoUnitControlReq_t mpi_request;
- int rc = 0;
-
- rc = leapioraid_transport_find_parent_node(ioc, phy);
- if (rc)
- return rc;
- if (phy->identify.sas_address != ioc->sas_hba.sas_address)
- return leapioraid_transport_expander_phy_control(ioc, phy,
- (hard_reset ==
- 1) ?
- LEAPIORAID_SMP_PHY_CONTROL_HARD_RESET
- :
- LEAPIORAID_SMP_PHY_CONTROL_LINK_RESET);
- memset(&mpi_request, 0, sizeof(struct LeapioraidSasIoUnitControlReq_t));
- mpi_request.Function = LEAPIORAID_FUNC_SAS_IO_UNIT_CONTROL;
- mpi_request.Operation = hard_reset ?
- LEAPIORAID_SAS_OP_PHY_HARD_RESET : LEAPIORAID_SAS_OP_PHY_LINK_RESET;
- mpi_request.PhyNum = phy->number;
- if ((leapioraid_base_sas_iounit_control(ioc, &mpi_reply, &mpi_request))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- return -ENXIO;
- }
- if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo)
- pr_info("%s phy(%d), ioc_status(0x%04x), loginfo(0x%08x)\n",
- ioc->name,
- phy->number,
- le16_to_cpu(mpi_reply.IOCStatus),
- le32_to_cpu(mpi_reply.IOCLogInfo));
- return 0;
-}
-
-static int
-leapioraid_transport_phy_enable(struct sas_phy *phy, int enable)
-{
- struct LEAPIORAID_ADAPTER *ioc = phy_to_ioc(phy);
- struct LeapioraidSasIOUnitP1_t *sas_iounit_pg1 = NULL;
- struct LeapioraidSasIOUnitP0_t *sas_iounit_pg0 = NULL;
- struct LeapioraidCfgRep_t mpi_reply;
- u16 ioc_status;
- u16 sz;
- int rc = 0;
- int i, discovery_active;
-
- rc = leapioraid_transport_find_parent_node(ioc, phy);
- if (rc)
- return rc;
- if (phy->identify.sas_address != ioc->sas_hba.sas_address)
- return leapioraid_transport_expander_phy_control(ioc, phy,
- (enable ==
- 1) ?
- LEAPIORAID_SMP_PHY_CONTROL_LINK_RESET
- :
- LEAPIORAID_SMP_PHY_CONTROL_DISABLE);
- sz = offsetof(struct LeapioraidSasIOUnitP0_t,
- PhyData) +
- (ioc->sas_hba.num_phys * sizeof(struct LEAPIORAID_SAS_IO_UNIT0_PHY_DATA));
- sas_iounit_pg0 = kzalloc(sz, GFP_KERNEL);
- if (!sas_iounit_pg0) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -ENOMEM;
- goto out;
- }
- if ((leapioraid_config_get_sas_iounit_pg0(ioc, &mpi_reply,
- sas_iounit_pg0, sz))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -ENXIO;
- goto out;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -EIO;
- goto out;
- }
- for (i = 0, discovery_active = 0; i < ioc->sas_hba.num_phys; i++) {
- if (sas_iounit_pg0->PhyData[i].PortFlags &
- LEAPIORAID_SASIOUNIT0_PORTFLAGS_DISCOVERY_IN_PROGRESS) {
- pr_err(
- "%s discovery is active on port = %d, phy = %d:\n\t\t"
- "unable to enable/disable phys, try again later!\n",
- ioc->name,
- sas_iounit_pg0->PhyData[i].Port,
- i);
- discovery_active = 1;
- }
- }
- if (discovery_active) {
- rc = -EAGAIN;
- goto out;
- }
- sz = offsetof(struct LeapioraidSasIOUnitP1_t,
- PhyData) +
- (ioc->sas_hba.num_phys * sizeof(struct LEAPIORAID_SAS_IO_UNIT1_PHY_DATA));
- sas_iounit_pg1 = kzalloc(sz, GFP_KERNEL);
- if (!sas_iounit_pg1) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -ENOMEM;
- goto out;
- }
- if ((leapioraid_config_get_sas_iounit_pg1(ioc, &mpi_reply,
- sas_iounit_pg1, sz))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -ENXIO;
- goto out;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -EIO;
- goto out;
- }
- for (i = 0; i < ioc->sas_hba.num_phys; i++) {
- sas_iounit_pg1->PhyData[i].Port =
- sas_iounit_pg0->PhyData[i].Port;
- sas_iounit_pg1->PhyData[i].PortFlags =
- (sas_iounit_pg0->PhyData[i].PortFlags &
- LEAPIORAID_SASIOUNIT0_PORTFLAGS_AUTO_PORT_CONFIG);
- sas_iounit_pg1->PhyData[i].PhyFlags =
- (sas_iounit_pg0->PhyData[i].PhyFlags &
- (LEAPIORAID_SASIOUNIT0_PHYFLAGS_ZONING_ENABLED +
- LEAPIORAID_SASIOUNIT0_PHYFLAGS_PHY_DISABLED));
- }
- if (enable)
- sas_iounit_pg1->PhyData[phy->number].PhyFlags
- &= ~LEAPIORAID_SASIOUNIT1_PHYFLAGS_PHY_DISABLE;
- else
- sas_iounit_pg1->PhyData[phy->number].PhyFlags
- |= LEAPIORAID_SASIOUNIT1_PHYFLAGS_PHY_DISABLE;
- leapioraid_config_set_sas_iounit_pg1(ioc, &mpi_reply, sas_iounit_pg1,
- sz);
- if (enable)
- leapioraid_transport_phy_reset(phy, 0);
-out:
- kfree(sas_iounit_pg1);
- kfree(sas_iounit_pg0);
- return rc;
-}
-
-static int
-leapioraid_transport_phy_speed(
- struct sas_phy *phy, struct sas_phy_linkrates *rates)
-{
- struct LEAPIORAID_ADAPTER *ioc = phy_to_ioc(phy);
- struct LeapioraidSasIOUnitP1_t *sas_iounit_pg1 = NULL;
- struct LeapioraidSasPhyP0_t phy_pg0;
- struct LeapioraidCfgRep_t mpi_reply;
- u16 ioc_status;
- u16 sz;
- int i;
- int rc = 0;
-
- rc = leapioraid_transport_find_parent_node(ioc, phy);
- if (rc)
- return rc;
- if (!rates->minimum_linkrate)
- rates->minimum_linkrate = phy->minimum_linkrate;
- else if (rates->minimum_linkrate < phy->minimum_linkrate_hw)
- rates->minimum_linkrate = phy->minimum_linkrate_hw;
- if (!rates->maximum_linkrate)
- rates->maximum_linkrate = phy->maximum_linkrate;
- else if (rates->maximum_linkrate > phy->maximum_linkrate_hw)
- rates->maximum_linkrate = phy->maximum_linkrate_hw;
- if (phy->identify.sas_address != ioc->sas_hba.sas_address) {
- phy->minimum_linkrate = rates->minimum_linkrate;
- phy->maximum_linkrate = rates->maximum_linkrate;
- return leapioraid_transport_expander_phy_control(ioc, phy,
- LEAPIORAID_SMP_PHY_CONTROL_LINK_RESET);
- }
- sz = offsetof(struct LeapioraidSasIOUnitP1_t,
- PhyData) +
- (ioc->sas_hba.num_phys * sizeof(struct LEAPIORAID_SAS_IO_UNIT1_PHY_DATA));
- sas_iounit_pg1 = kzalloc(sz, GFP_KERNEL);
- if (!sas_iounit_pg1) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -ENOMEM;
- goto out;
- }
- if ((leapioraid_config_get_sas_iounit_pg1(ioc, &mpi_reply,
- sas_iounit_pg1, sz))) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -ENXIO;
- goto out;
- }
- ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & LEAPIORAID_IOCSTATUS_MASK;
- if (ioc_status != LEAPIORAID_IOCSTATUS_SUCCESS) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -EIO;
- goto out;
- }
- for (i = 0; i < ioc->sas_hba.num_phys; i++) {
- if (phy->number != i) {
- sas_iounit_pg1->PhyData[i].MaxMinLinkRate =
- (ioc->sas_hba.phy[i].phy->minimum_linkrate +
- (ioc->sas_hba.phy[i].phy->maximum_linkrate << 4));
- } else {
- sas_iounit_pg1->PhyData[i].MaxMinLinkRate =
- (rates->minimum_linkrate +
- (rates->maximum_linkrate << 4));
- }
- }
- if (leapioraid_config_set_sas_iounit_pg1
- (ioc, &mpi_reply, sas_iounit_pg1, sz)) {
- pr_err("%s failure at %s:%d/%s()!\n",
- ioc->name, __FILE__, __LINE__, __func__);
- rc = -ENXIO;
- goto out;
- }
- leapioraid_transport_phy_reset(phy, 0);
- if (!leapioraid_config_get_phy_pg0(ioc, &mpi_reply, &phy_pg0,
- phy->number)) {
- phy->minimum_linkrate =
- leapioraid_transport_convert_phy_link_rate(
- phy_pg0.ProgrammedLinkRate &
- LEAPIORAID_SAS_PRATE_MIN_RATE_MASK);
- phy->maximum_linkrate =
- leapioraid_transport_convert_phy_link_rate(
- phy_pg0.ProgrammedLinkRate >> 4);
- phy->negotiated_linkrate =
- leapioraid_transport_convert_phy_link_rate(
- phy_pg0.NegotiatedLinkRate &
- LEAPIORAID_SAS_NEG_LINK_RATE_MASK_PHYSICAL);
- }
-out:
- kfree(sas_iounit_pg1);
- return rc;
-}
-
-static int
-leapioraid_transport_map_smp_buffer(
- struct device *dev, struct bsg_buffer *buf,
- dma_addr_t *dma_addr, size_t *dma_len, void **p)
-{
- if (buf->sg_cnt > 1) {
- *p = dma_alloc_coherent(dev, buf->payload_len, dma_addr,
- GFP_KERNEL);
- if (!*p)
- return -ENOMEM;
- *dma_len = buf->payload_len;
- } else {
- if (!dma_map_sg(dev, buf->sg_list, 1, DMA_BIDIRECTIONAL))
- return -ENOMEM;
- *dma_addr = sg_dma_address(buf->sg_list);
- *dma_len = sg_dma_len(buf->sg_list);
- *p = NULL;
- }
- return 0;
-}
-
-static void
-leapioraid_transport_unmap_smp_buffer(
- struct device *dev, struct bsg_buffer *buf,
- dma_addr_t dma_addr, void *p)
-{
- if (p)
- dma_free_coherent(dev, buf->payload_len, p, dma_addr);
- else
- dma_unmap_sg(dev, buf->sg_list, 1, DMA_BIDIRECTIONAL);
-}
-
-static void
-leapioraid_transport_smp_handler(
- struct bsg_job *job, struct Scsi_Host *shost,
- struct sas_rphy *rphy)
-{
- struct LEAPIORAID_ADAPTER *ioc = shost_priv(shost);
- struct LeapioraidSmpPassthroughReq_t *mpi_request;
- struct LeapioraidSmpPassthroughRep_t *mpi_reply;
- int rc;
- u16 smid;
- u32 ioc_state;
- void *psge;
- dma_addr_t dma_addr_in;
- dma_addr_t dma_addr_out;
- void *addr_in = NULL;
- void *addr_out = NULL;
- size_t dma_len_in;
- size_t dma_len_out;
- u16 wait_state_count;
- unsigned int reslen = 0;
-
- if (ioc->shost_recovery || ioc->pci_error_recovery) {
- pr_info("%s %s: host reset in progress!\n",
- __func__, ioc->name);
- rc = -EFAULT;
- goto job_done;
- }
- rc = mutex_lock_interruptible(&ioc->transport_cmds.mutex);
- if (rc)
- goto job_done;
- if (ioc->transport_cmds.status != LEAPIORAID_CMD_NOT_USED) {
- pr_err("%s %s: transport_cmds in use\n",
- ioc->name, __func__);
- mutex_unlock(&ioc->transport_cmds.mutex);
- rc = -EAGAIN;
- goto job_done;
- }
- ioc->transport_cmds.status = LEAPIORAID_CMD_PENDING;
- rc = leapioraid_transport_map_smp_buffer(
- &ioc->pdev->dev, &job->request_payload,
- &dma_addr_out, &dma_len_out, &addr_out);
- if (rc)
- goto out;
- if (addr_out) {
- sg_copy_to_buffer(job->request_payload.sg_list,
- job->request_payload.sg_cnt, addr_out,
- job->request_payload.payload_len);
- }
- rc = leapioraid_transport_map_smp_buffer(
- &ioc->pdev->dev, &job->reply_payload,
- &dma_addr_in, &dma_len_in, &addr_in);
- if (rc)
- goto unmap_out;
- wait_state_count = 0;
- ioc_state = leapioraid_base_get_iocstate(ioc, 1);
- while (ioc_state != LEAPIORAID_IOC_STATE_OPERATIONAL) {
- if (wait_state_count++ == 10) {
- pr_err(
- "%s %s: failed due to ioc not operational\n",
- ioc->name, __func__);
- rc = -EFAULT;
- goto unmap_in;
- }
- ssleep(1);
- ioc_state = leapioraid_base_get_iocstate(ioc, 1);
- pr_info(
- "%s %s: waiting for operational state(count=%d)\n",
- ioc->name, __func__, wait_state_count);
- }
- if (wait_state_count)
- pr_info("%s %s: ioc is operational\n",
- ioc->name, __func__);
- smid = leapioraid_base_get_smid(ioc, ioc->transport_cb_idx);
- if (!smid) {
- pr_err("%s %s: failed obtaining a smid\n",
- ioc->name, __func__);
- rc = -EAGAIN;
- goto unmap_in;
- }
- rc = 0;
- mpi_request = leapioraid_base_get_msg_frame(ioc, smid);
- ioc->transport_cmds.smid = smid;
- memset(mpi_request, 0, sizeof(struct LeapioraidSmpPassthroughReq_t));
- mpi_request->Function = LEAPIORAID_FUNC_SMP_PASSTHROUGH;
- mpi_request->PhysicalPort = leapioraid_transport_get_port_id_by_rphy(
- ioc, rphy);
- mpi_request->SASAddress = (rphy) ?
- cpu_to_le64(rphy->identify.sas_address) :
- cpu_to_le64(ioc->sas_hba.sas_address);
- mpi_request->RequestDataLength = cpu_to_le16(dma_len_out - 4);
- psge = &mpi_request->SGL;
- ioc->build_sg(ioc, psge, dma_addr_out, dma_len_out - 4, dma_addr_in,
- dma_len_in - 4);
- dtransportprintk(ioc, pr_info(
- "%s %s - sending smp request\n", ioc->name,
- __func__));
- init_completion(&ioc->transport_cmds.done);
- ioc->put_smid_default(ioc, smid);
- wait_for_completion_timeout(&ioc->transport_cmds.done, 10 * HZ);
- if (!(ioc->transport_cmds.status & LEAPIORAID_CMD_COMPLETE)) {
- pr_err("%s %s : timeout\n", __func__, ioc->name);
- leapioraid_debug_dump_mf(mpi_request,
- sizeof(struct LeapioraidSmpPassthroughReq_t) / 4);
- if (!(ioc->transport_cmds.status & LEAPIORAID_CMD_RESET)) {
- leapioraid_base_hard_reset_handler(ioc,
- FORCE_BIG_HAMMER);
- rc = -ETIMEDOUT;
- goto unmap_in;
- }
- }
- dtransportprintk(ioc, pr_info(
- "%s %s - complete\n", ioc->name, __func__));
- if (!(ioc->transport_cmds.status & LEAPIORAID_CMD_REPLY_VALID)) {
- dtransportprintk(ioc, pr_info(
- "%s %s - no reply\n", ioc->name,
- __func__));
- rc = -ENXIO;
- goto unmap_in;
- }
- mpi_reply = ioc->transport_cmds.reply;
- dtransportprintk(ioc,
- pr_info(
- "%s %s - reply data transfer size(%d)\n",
- ioc->name, __func__,
- le16_to_cpu(mpi_reply->ResponseDataLength)));
- memcpy(job->reply, mpi_reply, sizeof(*mpi_reply));
- job->reply_len = sizeof(*mpi_reply);
- reslen = le16_to_cpu(mpi_reply->ResponseDataLength);
- if (addr_in) {
- sg_copy_from_buffer(job->reply_payload.sg_list,
- job->reply_payload.sg_cnt, addr_in,
- job->reply_payload.payload_len);
- }
- rc = 0;
-unmap_in:
- leapioraid_transport_unmap_smp_buffer(
- &ioc->pdev->dev, &job->reply_payload,
- dma_addr_in, addr_in);
-unmap_out:
- leapioraid_transport_unmap_smp_buffer(
- &ioc->pdev->dev, &job->request_payload,
- dma_addr_out, addr_out);
-out:
- ioc->transport_cmds.status = LEAPIORAID_CMD_NOT_USED;
- mutex_unlock(&ioc->transport_cmds.mutex);
-job_done:
- bsg_job_done(job, rc, reslen);
-}
-
-struct sas_function_template leapioraid_transport_functions = {
- .get_linkerrors = leapioraid_transport_get_linkerrors,
- .get_enclosure_identifier = leapioraid_transport_get_enclosure_identifier,
- .get_bay_identifier = leapioraid_transport_get_bay_identifier,
- .phy_reset = leapioraid_transport_phy_reset,
- .phy_enable = leapioraid_transport_phy_enable,
- .set_phy_speed = leapioraid_transport_phy_speed,
- .smp_handler = leapioraid_transport_smp_handler,
-};
-
-struct scsi_transport_template *leapioraid_transport_template;
diff --git a/drivers/scsi/leapraid/Kconfig b/drivers/scsi/leapraid/Kconfig
new file mode 100644
index 000000000000..b539183b24a7
--- /dev/null
+++ b/drivers/scsi/leapraid/Kconfig
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+config SCSI_LEAPRAID
+ tristate "LeapIO RAID Adapter"
+ depends on PCI && SCSI
+ select SCSI_SAS_ATTRS
+ help
+ This driver supports LeapIO PCIe-based Storage
+ and RAID controllers.
+
+ <http://www.leap-io.com>
+
+ To compile this driver as a module, choose M here: the
+ resulting kernel module will be named leapraid.
diff --git a/drivers/scsi/leapraid/Makefile b/drivers/scsi/leapraid/Makefile
new file mode 100644
index 000000000000..bdafc036cd00
--- /dev/null
+++ b/drivers/scsi/leapraid/Makefile
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the LEAPRAID drivers.
+#
+
+obj-$(CONFIG_SCSI_LEAPRAID) += leapraid.o
+leapraid-objs += leapraid_func.o \
+ leapraid_os.o \
+ leapraid_transport.o \
+ leapraid_app.o
diff --git a/drivers/scsi/leapraid/leapraid.h b/drivers/scsi/leapraid/leapraid.h
new file mode 100644
index 000000000000..842810d41542
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid.h
@@ -0,0 +1,2070 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+#ifndef LEAPRAID_H
+#define LEAPRAID_H
+
+/* doorbell register definitions */
+#define LEAPRAID_DB_RESET 0x00000000
+#define LEAPRAID_DB_READY 0x10000000
+#define LEAPRAID_DB_OPERATIONAL 0x20000000
+#define LEAPRAID_DB_FAULT 0x40000000
+
+#define LEAPRAID_DB_MASK 0xF0000000
+
+#define LEAPRAID_DB_OVER_TEMPERATURE 0x2810
+
+#define LEAPRAID_DB_USED 0x08000000
+#define LEAPRAID_DB_DATA_MASK 0x0000FFFF
+#define LEAPRAID_DB_FUNC_SHIFT 24
+#define LEAPRAID_DB_ADD_DWORDS_SHIFT 16
+
+/* maximum number of retries waiting for doorbell to become ready */
+#define LEAPRAID_DB_RETRY_COUNT_MAX 10
+/* maximum number of retries waiting for doorbell to become operational */
+#define LEAPRAID_DB_WAIT_OPERATIONAL 10
+/* sleep interval (in seconds) between doorbell polls */
+#define LEAPRAID_DB_POLL_INTERVAL_S 1
+
+/* maximum number of retries waiting for host to end recovery */
+#define LEAPRAID_WAIT_SHOST_RECOVERY 30
+
+/* diagnostic register definitions */
+#define LEAPRAID_DIAG_WRITE_ENABLE 0x00000080
+#define LEAPRAID_DIAG_RESET 0x00000004
+#define LEAPRAID_DIAG_HOLD_ADAPTER_RESET 0x00000002
+
+/* interrupt status register definitions */
+#define LEAPRAID_HOST2ADAPTER_DB_STATUS 0x80000000
+#define LEAPRAID_ADAPTER2HOST_DB_STATUS 0x00000001
+
+/* number of debug log registers */
+#define LEAPRAID_DEBUGLOG_SZ_MAX 16
+
+/* reply post host register defines */
+#define REP_POST_HOST_IDX_REG_CNT 16
+#define LEAPRAID_RPHI_MSIX_IDX_SHIFT 24
+
+/* vphy flags */
+#define LEAPRAID_SAS_PHYINFO_VPHY 0x00001000
+
+/* WhoInit value: adapter initialized by the Linux host driver */
+#define LEAPRAID_WHOINIT_LINUX_DRIVER 0x04
+
+/* rdpq array mode */
+#define LEAPRAID_ADAPTER_INIT_MSGFLG_RDPQ_ARRAY_MODE 0x01
+
+/* request description flags */
+#define LEAPRAID_REQ_DESC_FLG_SCSI_IO 0x00
+#define LEAPRAID_REQ_DESC_FLG_HPR 0x06
+#define LEAPRAID_REQ_DESC_FLG_DFLT_TYPE 0x08
+
+/* reply description flags */
+#define LEAPRAID_RPY_DESC_FLG_TYPE_MASK 0x0F
+#define LEAPRAID_RPY_DESC_FLG_SCSI_IO_SUCCESS 0x00
+#define LEAPRAID_RPY_DESC_FLG_ADDRESS_REPLY 0x01
+#define LEAPRAID_RPY_DESC_FLG_FP_SCSI_IO_SUCCESS 0x06
+#define LEAPRAID_RPY_DESC_FLG_UNUSED 0x0F
+
+/* MPI functions */
+#define LEAPRAID_FUNC_SCSIIO_REQ 0x00
+#define LEAPRAID_FUNC_SCSI_TMF 0x01
+#define LEAPRAID_FUNC_ADAPTER_INIT 0x02
+#define LEAPRAID_FUNC_GET_ADAPTER_FEATURES 0x03
+#define LEAPRAID_FUNC_CONFIG_OP 0x04
+#define LEAPRAID_FUNC_SCAN_DEV 0x06
+#define LEAPRAID_FUNC_EVENT_NOTIFY 0x07
+#define LEAPRAID_FUNC_FW_DOWNLOAD 0x09
+#define LEAPRAID_FUNC_FW_UPLOAD 0x12
+#define LEAPRAID_FUNC_RAID_ACTION 0x15
+#define LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH 0x16
+#define LEAPRAID_FUNC_SCSI_ENC_PROCESSOR 0x18
+#define LEAPRAID_FUNC_SMP_PASSTHROUGH 0x1A
+#define LEAPRAID_FUNC_SAS_IO_UNIT_CTRL 0x1B
+#define LEAPRAID_FUNC_SATA_PASSTHROUGH 0x1C
+#define LEAPRAID_FUNC_ADAPTER_UNIT_RESET 0x40
+#define LEAPRAID_FUNC_HANDSHAKE 0x42
+#define LEAPRAID_FUNC_LOGBUF_INIT 0x57
+
+/* adapter status values */
+#define LEAPRAID_ADAPTER_STATUS_MASK 0x7FFF
+#define LEAPRAID_ADAPTER_STATUS_SUCCESS 0x0000
+#define LEAPRAID_ADAPTER_STATUS_BUSY 0x0002
+#define LEAPRAID_ADAPTER_STATUS_INTERNAL_ERROR 0x0004
+#define LEAPRAID_ADAPTER_STATUS_INSUFFICIENT_RESOURCES 0x0006
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_ACTION 0x0020
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_TYPE 0x0021
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_PAGE 0x0022
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_DATA 0x0023
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_NO_DEFAULTS 0x0024
+#define LEAPRAID_ADAPTER_STATUS_CONFIG_CANT_COMMIT 0x0025
+#define LEAPRAID_ADAPTER_STATUS_SCSI_RECOVERED_ERROR 0x0040
+#define LEAPRAID_ADAPTER_STATUS_SCSI_DEVICE_NOT_THERE 0x0043
+#define LEAPRAID_ADAPTER_STATUS_SCSI_DATA_OVERRUN 0x0044
+#define LEAPRAID_ADAPTER_STATUS_SCSI_DATA_UNDERRUN 0x0045
+#define LEAPRAID_ADAPTER_STATUS_SCSI_IO_DATA_ERROR 0x0046
+#define LEAPRAID_ADAPTER_STATUS_SCSI_PROTOCOL_ERROR 0x0047
+#define LEAPRAID_ADAPTER_STATUS_SCSI_TASK_TERMINATED 0x0048
+#define LEAPRAID_ADAPTER_STATUS_SCSI_RESIDUAL_MISMATCH 0x0049
+#define LEAPRAID_ADAPTER_STATUS_SCSI_TASK_MGMT_FAILED 0x004A
+#define LEAPRAID_ADAPTER_STATUS_SCSI_ADAPTER_TERMINATED 0x004B
+#define LEAPRAID_ADAPTER_STATUS_SCSI_EXT_TERMINATED 0x004C
+
+/* sge flags */
+#define LEAPRAID_SGE_FLG_LAST_ONE 0x80
+#define LEAPRAID_SGE_FLG_EOB 0x40
+#define LEAPRAID_SGE_FLG_EOL 0x01
+#define LEAPRAID_SGE_FLG_SHIFT 24
+#define LEAPRAID_SGE_FLG_SIMPLE_ONE 0x10
+#define LEAPRAID_SGE_FLG_SYSTEM_ADDR 0x00
+#define LEAPRAID_SGE_FLG_H2C 0x04
+#define LEAPRAID_SGE_FLG_32 0x00
+#define LEAPRAID_SGE_FLG_64 0x02
+
+#define LEAPRAID_IEEE_SGE_FLG_EOL 0x40
+#define LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE 0x00
+#define LEAPRAID_IEEE_SGE_FLG_CHAIN_ONE 0x80
+#define LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR 0x00
+
+#define LEAPRAID_SGE_OFFSET_SIZE 4
+
+/* page and ext page type */
+#define LEAPRAID_CFG_PT_IO_UNIT 0x00
+#define LEAPRAID_CFG_PT_ADAPTER 0x01
+#define LEAPRAID_CFG_PT_BIOS 0x02
+#define LEAPRAID_CFG_PT_RAID_VOLUME 0x08
+#define LEAPRAID_CFG_PT_RAID_PHYSDISK 0x0A
+#define LEAPRAID_CFG_PT_EXTENDED 0x0F
+#define LEAPRAID_CFG_EXTPT_SAS_IO_UNIT 0x10
+#define LEAPRAID_CFG_EXTPT_SAS_EXP 0x11
+#define LEAPRAID_CFG_EXTPT_SAS_DEV 0x12
+#define LEAPRAID_CFG_EXTPT_SAS_PHY 0x13
+#define LEAPRAID_CFG_EXTPT_ENC 0x15
+#define LEAPRAID_CFG_EXTPT_RAID_CONFIG 0x16
+
+/* config page address */
+#define LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP 0x00000000
+#define LEAPRAID_SAS_ENC_CFG_PGAD_HDL 0x10000000
+#define LEAPRAID_SAS_DEV_CFG_PGAD_HDL 0x20000000
+#define LEAPRAID_SAS_EXP_CFG_PGAD_HDL_PHY_NUM 0x10000000
+#define LEAPRAID_SAS_EXP_CFD_PGAD_HDL 0x20000000
+#define LEAPRAID_SAS_EXP_CFG_PGAD_PHYNUM_SHIFT 16
+#define LEAPRAID_RAID_VOL_CFG_PGAD_HDL 0x10000000
+#define LEAPRAID_SAS_PHY_CFG_PGAD_PHY_NUMBER 0x00000000
+#define LEAPRAID_PHYSDISK_CFG_PGAD_PHYSDISKNUM 0x10000000
+
+/* config page operations */
+#define LEAPRAID_CFG_ACT_PAGE_HEADER 0x00
+#define LEAPRAID_CFG_ACT_PAGE_READ_CUR 0x01
+#define LEAPRAID_CFG_ACT_PAGE_WRITE_CUR 0x02
+
+/* bios pages */
+#define LEAPRAID_CFG_PAGE_NUM_BIOS2 0x2
+#define LEAPRAID_CFG_PAGE_NUM_BIOS3 0x3
+
+/* sas device pages */
+#define LEAPRAID_CFG_PAGE_NUM_DEV0 0x0
+
+/* sas device page 0 flags */
+#define LEAPRAID_SAS_DEV_P0_FLG_FP_CAP 0x2000
+#define LEAPRAID_SAS_DEV_P0_FLG_SATA_SMART 0x0040
+#define LEAPRAID_SAS_DEV_P0_FLG_ENC_LEVEL_VALID 0x0002
+#define LEAPRAID_SAS_DEV_P0_FLG_DEV_PRESENT 0x0001
+
+/* sas IO unit pages */
+#define LEAPRAID_CFG_PAGE_NUM_IOUNIT0 0x0
+#define LEAPRAID_CFG_PAGE_NUM_IOUNIT1 0x1
+
+/* sas expander pages */
+#define LEAPRAID_CFG_PAGE_NUM_EXP0 0x0
+#define LEAPRAID_CFG_PAGE_NUM_EXP1 0x1
+
+/* sas enclosure page */
+#define LEAPRAID_CFG_PAGE_NUM_ENC0 0x0
+
+/* sas phy page */
+#define LEAPRAID_CFG_PAGE_NUM_PHY0 0x0
+
+/* raid volume pages */
+#define LEAPRAID_CFG_PAGE_NUM_VOL0 0x0
+#define LEAPRAID_CFG_PAGE_NUM_VOL1 0x1
+
+/* physical disk page */
+#define LEAPRAID_CFG_PAGE_NUM_PD0 0x0
+
+/* adapter page */
+#define LEAPRAID_CFG_PAGE_NUM_ADAPTER1 0x1
+
+#define LEAPRAID_CFG_UNIT_SIZE 4
+
+/* raid volume type and state */
+#define LEAPRAID_VOL_STATE_MISSING 0x00
+#define LEAPRAID_VOL_STATE_FAILED 0x01
+#define LEAPRAID_VOL_STATE_INITIALIZING 0x02
+#define LEAPRAID_VOL_STATE_ONLINE 0x03
+#define LEAPRAID_VOL_STATE_DEGRADED 0x04
+#define LEAPRAID_VOL_STATE_OPTIMAL 0x05
+#define LEAPRAID_VOL_TYPE_RAID0 0x00
+#define LEAPRAID_VOL_TYPE_RAID1E 0x01
+#define LEAPRAID_VOL_TYPE_RAID1 0x02
+#define LEAPRAID_VOL_TYPE_RAID10 0x05
+#define LEAPRAID_VOL_TYPE_UNKNOWN 0xFF
+
+/* raid volume element flags */
+#define LEAPRAID_RAIDCFG_P0_EFLG_MASK_ELEMENT_TYPE 0x000F
+#define LEAPRAID_RAIDCFG_P0_EFLG_VOL_PHYS_DISK_ELEMENT 0x0001
+#define LEAPRAID_RAIDCFG_P0_EFLG_HOT_SPARE_ELEMENT 0x0002
+#define LEAPRAID_RAIDCFG_P0_EFLG_OCE_ELEMENT 0x0003
+
+/* raid action */
+#define LEAPRAID_RAID_ACT_SYSTEM_SHUTDOWN_INITIATED 0x20
+#define LEAPRAID_RAID_ACT_PHYSDISK_HIDDEN 0x24
+
+/* sas negotiated link rates */
+#define LEAPRAID_SAS_NEG_LINK_RATE_MASK_PHYSICAL 0x0F
+#define LEAPRAID_SAS_NEG_LINK_RATE_UNKNOWN_LINK_RATE 0x00
+#define LEAPRAID_SAS_NEG_LINK_RATE_PHY_DISABLED 0x01
+#define LEAPRAID_SAS_NEG_LINK_RATE_NEGOTIATION_FAILED 0x02
+#define LEAPRAID_SAS_NEG_LINK_RATE_SATA_OOB_COMPLETE 0x03
+#define LEAPRAID_SAS_NEG_LINK_RATE_PORT_SELECTOR 0x04
+#define LEAPRAID_SAS_NEG_LINK_RATE_SMP_RESETTING 0x05
+
+#define LEAPRAID_SAS_NEG_LINK_RATE_1_5 0x08
+#define LEAPRAID_SAS_NEG_LINK_RATE_3_0 0x09
+#define LEAPRAID_SAS_NEG_LINK_RATE_6_0 0x0A
+#define LEAPRAID_SAS_NEG_LINK_RATE_12_0 0x0B
+
+#define LEAPRAID_SAS_PRATE_MIN_RATE_MASK 0x0F
+#define LEAPRAID_SAS_HWRATE_MIN_RATE_MASK 0x0F
+
+/* scsi IO control bits */
+#define LEAPRAID_SCSIIO_CTRL_ADDCDBLEN_SHIFT 26
+#define LEAPRAID_SCSIIO_CTRL_NODATATRANSFER 0x00000000
+#define LEAPRAID_SCSIIO_CTRL_WRITE 0x01000000
+#define LEAPRAID_SCSIIO_CTRL_READ 0x02000000
+#define LEAPRAID_SCSIIO_CTRL_BIDIRECTIONAL 0x03000000
+#define LEAPRAID_SCSIIO_CTRL_SIMPLEQ 0x00000000
+#define LEAPRAID_SCSIIO_CTRL_ORDEREDQ 0x00000200
+#define LEAPRAID_SCSIIO_CTRL_CMDPRI 0x00000800
+
+/* scsi state and status */
+#define LEAPRAID_SCSI_STATUS_BUSY 0x08
+#define LEAPRAID_SCSI_STATUS_RESERVATION_CONFLICT 0x18
+#define LEAPRAID_SCSI_STATUS_TASK_SET_FULL 0x28
+
+#define LEAPRAID_SCSI_STATE_RESPONSE_INFO_VALID 0x10
+#define LEAPRAID_SCSI_STATE_TERMINATED 0x08
+#define LEAPRAID_SCSI_STATE_NO_SCSI_STATUS 0x04
+#define LEAPRAID_SCSI_STATE_AUTOSENSE_FAILED 0x02
+#define LEAPRAID_SCSI_STATE_AUTOSENSE_VALID 0x01
+
+/* scsi task management defines */
+#define LEAPRAID_TM_TASKTYPE_ABORT_TASK 0x01
+#define LEAPRAID_TM_TASKTYPE_ABRT_TASK_SET 0x02
+#define LEAPRAID_TM_TASKTYPE_TARGET_RESET 0x03
+#define LEAPRAID_TM_TASKTYPE_LOGICAL_UNIT_RESET 0x05
+#define LEAPRAID_TM_TASKTYPE_CLEAR_TASK_SET 0x06
+#define LEAPRAID_TM_TASKTYPE_QUERY_TASK 0x07
+#define LEAPRAID_TM_TASKTYPE_CLEAR_ACA 0x08
+#define LEAPRAID_TM_TASKTYPE_QUERY_TASK_SET 0x09
+#define LEAPRAID_TM_TASKTYPE_QUERY_ASYNC_EVENT 0x0A
+
+#define LEAPRAID_TM_MSGFLAGS_LINK_RESET 0x00
+#define LEAPRAID_TM_RSP_INVALID_FRAME 0x02
+#define LEAPRAID_TM_RSP_TM_SUCCEEDED 0x08
+#define LEAPRAID_TM_RSP_IO_QUEUED_ON_ADAPTER 0x80
+
+/* scsi sep request defines */
+#define LEAPRAID_SEP_REQ_ACT_WRITE_STATUS 0x00
+#define LEAPRAID_SEP_REQ_FLG_DEVHDL_ADDRESS 0x00
+#define LEAPRAID_SEP_REQ_FLG_ENCLOSURE_SLOT_ADDRESS 0x01
+#define LEAPRAID_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT 0x00000040
+
+/* the capabilities of the adapter */
+#define LEAPRAID_ADAPTER_FEATURES_CAP_ATOMIC_REQ 0x00080000
+#define LEAPRAID_ADAPTER_FEATURES_CAP_RDPQ_ARRAY_CAPABLE 0x00040000
+#define LEAPRAID_ADAPTER_FEATURES_CAP_EVENT_REPLAY 0x00002000
+#define LEAPRAID_ADAPTER_FEATURES_CAP_INTEGRATED_RAID 0x00001000
+
+/* event code definitions for the firmware */
+#define LEAPRAID_EVT_SAS_DEV_STATUS_CHANGE 0x000F
+#define LEAPRAID_EVT_SAS_DISCOVERY 0x0016
+#define LEAPRAID_EVT_SAS_TOPO_CHANGE_LIST 0x001C
+#define LEAPRAID_EVT_SAS_ENCL_DEV_STATUS_CHANGE 0x001D
+#define LEAPRAID_EVT_IR_CHANGE 0x0020
+#define LEAPRAID_EVT_TURN_ON_PFA_LED 0xFFFC
+#define LEAPRAID_EVT_SCAN_DEV_DONE 0xFFFD
+#define LEAPRAID_EVT_REMOVE_DEAD_DEV 0xFFFF
+#define LEAPRAID_MAX_EVENT_NUM 128
+
+#define LEAPRAID_EVT_SAS_DEV_STAT_RC_INTERNAL_DEV_RESET 0x08
+#define LEAPRAID_EVT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET 0x0E
+
+/* raid configuration change event */
+#define LEAPRAID_EVT_IR_RC_VOLUME_ADD 0x01
+#define LEAPRAID_EVT_IR_RC_VOLUME_DELETE 0x02
+#define LEAPRAID_EVT_IR_RC_PD_HIDDEN_TO_ADD 0x03
+#define LEAPRAID_EVT_IR_RC_PD_UNHIDDEN_TO_DELETE 0x04
+#define LEAPRAID_EVT_IR_RC_PD_CREATED_TO_HIDE 0x05
+#define LEAPRAID_EVT_IR_RC_PD_DELETED_TO_EXPOSE 0x06
+
+/* sas topology change event */
+#define LEAPRAID_EVT_SAS_TOPO_ES_NO_EXPANDER 0x00
+#define LEAPRAID_EVT_SAS_TOPO_ES_ADDED 0x01
+#define LEAPRAID_EVT_SAS_TOPO_ES_NOT_RESPONDING 0x02
+#define LEAPRAID_EVT_SAS_TOPO_ES_RESPONDING 0x03
+
+#define LEAPRAID_EVT_SAS_TOPO_RC_MASK 0x0F
+#define LEAPRAID_EVT_SAS_TOPO_RC_CLEAR_MASK 0xF0
+#define LEAPRAID_EVT_SAS_TOPO_RC_TARG_ADDED 0x01
+#define LEAPRAID_EVT_SAS_TOPO_RC_TARG_NOT_RESPONDING 0x02
+#define LEAPRAID_EVT_SAS_TOPO_RC_PHY_CHANGED 0x03
+
+/* sas discovery event defines */
+#define LEAPRAID_EVT_SAS_DISC_RC_STARTED 0x01
+#define LEAPRAID_EVT_SAS_DISC_RC_COMPLETED 0x02
+
+/* enclosure device status change event */
+#define LEAPRAID_EVT_SAS_ENCL_RC_ADDED 0x01
+#define LEAPRAID_EVT_SAS_ENCL_RC_NOT_RESPONDING 0x02
+
+/* device type and identifiers */
+#define LEAPRAID_DEVTYP_SEP 0x00004000
+#define LEAPRAID_DEVTYP_SSP_TGT 0x00000400
+#define LEAPRAID_DEVTYP_STP_TGT 0x00000200
+#define LEAPRAID_DEVTYP_SMP_TGT 0x00000100
+#define LEAPRAID_DEVTYP_SATA_DEV 0x00000080
+#define LEAPRAID_DEVTYP_SSP_INIT 0x00000040
+#define LEAPRAID_DEVTYP_STP_INIT 0x00000020
+#define LEAPRAID_DEVTYP_SMP_INIT 0x00000010
+#define LEAPRAID_DEVTYP_SATA_HOST 0x00000008
+
+#define LEAPRAID_DEVTYP_MASK_DEV_TYPE 0x00000007
+#define LEAPRAID_DEVTYP_NO_DEV 0x00000000
+#define LEAPRAID_DEVTYP_END_DEV 0x00000001
+#define LEAPRAID_DEVTYP_EDGE_EXPANDER 0x00000002
+#define LEAPRAID_DEVTYP_FANOUT_EXPANDER 0x00000003
+
+/* sas control operation */
+#define LEAPRAID_SAS_OP_PHY_LINK_RESET 0x06
+#define LEAPRAID_SAS_OP_PHY_HARD_RESET 0x07
+#define LEAPRAID_SAS_OP_SET_PARAMETER 0x0F
+
+/* boot device defines */
+#define LEAPRAID_BOOTDEV_FORM_MASK 0x0F
+#define LEAPRAID_BOOTDEV_FORM_NONE 0x00
+#define LEAPRAID_BOOTDEV_FORM_SAS_WWID 0x05
+#define LEAPRAID_BOOTDEV_FORM_ENC_SLOT 0x06
+#define LEAPRAID_BOOTDEV_FORM_DEV_NAME 0x07
+
+/**
+ * struct leapraid_reg_base - Register layout of the LeapRAID controller
+ *
+ * @db: Doorbell register used to signal commands or status to firmware
+ * @ws: Write sequence register for synchronizing doorbell operations
+ * @host_diag: Diagnostic register used for status or debug reporting
+ * @r1: Reserved
+ * @host_int_status: Interrupt status register reporting active interrupts
+ * @host_int_mask: Interrupt mask register enabling or disabling sources
+ * @r2: Reserved
+ * @rep_msg_host_idx: Reply message index for the next available reply slot
+ * @r3: Reserved
+ * @debug_log: DebugLog registers for firmware debug and diagnostic output
+ * @r4: Reserved
+ * @atomic_req_desc_post: Atomic register for single descriptor posting
+ * @adapter_log_buf_pos: Adapter log buffer write position
+ * @host_log_buf_pos: Host log buffer write position
+ * @r5: Reserved
+ * @rep_post_reg_idx: Array of reply post index registers, one per queue.
+ * The number of entries is defined by
+ * REP_POST_HOST_IDX_REG_CNT.
+ */
+struct leapraid_reg_base {
+ __le32 db;
+ __le32 ws;
+ __le32 host_diag;
+ __le32 r1[9];
+ __le32 host_int_status;
+ __le32 host_int_mask;
+ __le32 r2[4];
+ __le32 rep_msg_host_idx;
+ __le32 r3[13];
+ __le32 debug_log[LEAPRAID_DEBUGLOG_SZ_MAX];
+ __le32 r4[2];
+ __le32 atomic_req_desc_post;
+ __le32 adapter_log_buf_pos;
+ __le32 host_log_buf_pos;
+ __le32 r5[142];
+ struct leapraid_rep_post_reg_idx {
+ __le32 idx;
+ __le32 r1;
+ __le32 r2;
+ __le32 r3;
+ } rep_post_reg_idx[REP_POST_HOST_IDX_REG_CNT];
+} __packed;
+
+/**
+ * struct leapraid_atomic_req_desc - Atomic request descriptor
+ *
+ * @flg: Descriptor flag indicating the type of request (e.g. SCSI I/O)
+ * @msix_idx: MSI-X vector index used for interrupt routing
+ * @taskid: Unique task identifier associated with this request
+ */
+struct leapraid_atomic_req_desc {
+ u8 flg;
+ u8 msix_idx;
+ __le16 taskid;
+};
+
+/**
+ * union leapraid_rep_desc_union - Unified reply descriptor format
+ *
+ * @dflt_rep: Default reply descriptor containing basic completion info
+ * @dflt_rep.rep_flg: Reply flag indicating reply type or status
+ * @dflt_rep.msix_idx: MSI-X index for interrupt routing
+ * @dflt_rep.taskid: Task identifier matching the submitted request
+ * @r1: Reserved
+ *
+ * @addr_rep: Address reply descriptor used when firmware returns a
+ * memory address associated with the reply
+ * @addr_rep.rep_flg: Reply flag indicating reply type or status
+ * @addr_rep.msix_idx: MSI-X index for interrupt routing
+ * @addr_rep.taskid: Task identifier matching the submitted request
+ * @addr_rep.rep_frame_addr: Physical address of the reply frame
+ *
+ * @words: Raw 64-bit representation of the reply descriptor
+ * @u: Alternative access using 32-bit low/high words
+ * @u.low: Lower 32 bits of the descriptor
+ * @u.high: Upper 32 bits of the descriptor
+ */
+union leapraid_rep_desc_union {
+ struct leapraid_rep_desc {
+ u8 rep_flg;
+ u8 msix_idx;
+ __le16 taskid;
+ u8 r1[4];
+ } dflt_rep;
+ struct leapraid_add_rep_desc {
+ u8 rep_flg;
+ u8 msix_idx;
+ __le16 taskid;
+ __le32 rep_frame_addr;
+ } addr_rep;
+ __le64 words;
+ struct {
+ u32 low;
+ u32 high;
+ } u;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_req - Generic request header
+ *
+ * @func_dep1: Function-dependent parameter (low 16 bits)
+ * @r1: Reserved
+ * @func: Function code identifying the command type
+ * @r2: Reserved
+ */
+struct leapraid_req {
+ __le16 func_dep1;
+ u8 r1;
+ u8 func;
+ u8 r2[8];
+};
+
+/**
+ * struct leapraid_rep - Generic reply header
+ *
+ * @r1: Reserved
+ * @msg_len: Length of the reply message in bytes
+ * @function: Function code corresponding to the request
+ * @r2: Reserved
+ * @adapter_status: Status code reported by the adapter
+ * @r3: Reserved
+ */
+struct leapraid_rep {
+ u8 r1[2];
+ u8 msg_len;
+ u8 function;
+ u8 r2[10];
+ __le16 adapter_status;
+ u8 r3[4];
+};
+
+/**
+ * struct leapraid_sge_simple32 - 32-bit simple scatter-gather entry
+ *
+ * @flg_and_len: Combined field for flags and segment length
+ * @addr: 32-bit physical address of the data buffer
+ */
+struct leapraid_sge_simple32 {
+ __le32 flg_and_len;
+ __le32 addr;
+};
+
+/**
+ * struct leapraid_sge_simple64 - 64-bit simple scatter-gather entry
+ *
+ * @flg_and_len: Combined field for flags and segment length
+ * @addr: 64-bit physical address of the data buffer
+ */
+struct leapraid_sge_simple64 {
+ __le32 flg_and_len;
+ __le64 addr;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_sge_simple_union - Unified 32/64-bit SGE representation
+ *
+ * @flg_and_len: Combined field for flags and segment length
+ * @u.addr32: 32-bit address field
+ * @u.addr64: 64-bit address field
+ */
+struct leapraid_sge_simple_union {
+ __le32 flg_and_len;
+ union {
+ __le32 addr32;
+ __le64 addr64;
+ } u;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_sge_chain_union - Chained scatter-gather entry
+ *
+ * @len: Length of the chain descriptor
+ * @next_chain_offset: Offset to the next SGE chain
+ * @flg: Flags indicating chain or termination properties
+ * @u.addr32: 32-bit physical address
+ * @u.addr64: 64-bit physical address
+ */
+struct leapraid_sge_chain_union {
+ __le16 len;
+ u8 next_chain_offset;
+ u8 flg;
+ union {
+ __le32 addr32;
+ __le64 addr64;
+ } u;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_ieee_sge_simple32 - IEEE 32-bit simple SGE format
+ *
+ * @addr: 32-bit physical address of the data buffer
+ * @flg_and_len: Combined field for flags and data length
+ */
+struct leapraid_ieee_sge_simple32 {
+ __le32 addr;
+ __le32 flg_and_len;
+};
+
+/**
+ * struct leapraid_ieee_sge_simple64 - IEEE 64-bit simple SGE format
+ *
+ * @addr: 64-bit physical address of the data buffer
+ * @len: Length of the data segment
+ * @r1: Reserved
+ * @flg: Flags indicating transfer properties
+ */
+struct leapraid_ieee_sge_simple64 {
+ __le64 addr;
+ __le32 len;
+ u8 r1[3];
+ u8 flg;
+} __packed __aligned(4);
+
+/**
+ * union leapraid_ieee_sge_simple_union - Unified IEEE SGE format
+ *
+ * @simple32: IEEE 32-bit simple SGE entry
+ * @simple64: IEEE 64-bit simple SGE entry
+ */
+union leapraid_ieee_sge_simple_union {
+ struct leapraid_ieee_sge_simple32 simple32;
+ struct leapraid_ieee_sge_simple64 simple64;
+};
+
+/**
+ * union leapraid_ieee_sge_chain_union - Unified IEEE SGE chain format
+ *
+ * @chain32: IEEE 32-bit chain SGE entry
+ * @chain64: IEEE 64-bit chain SGE entry
+ */
+union leapraid_ieee_sge_chain_union {
+ struct leapraid_ieee_sge_simple32 chain32;
+ struct leapraid_ieee_sge_simple64 chain64;
+};
+
+/**
+ * struct leapraid_chain64_ieee_sg - 64-bit IEEE chain SGE descriptor
+ *
+ * @addr: Physical address of the next chain segment
+ * @len: Length of the current SGE
+ * @r1: Reserved
+ * @next_chain_offset: Offset to the next chain element
+ * @flg: Flags that describe SGE attributes
+ */
+struct leapraid_chain64_ieee_sg {
+ __le64 addr;
+ __le32 len;
+ u8 r1[2];
+ u8 next_chain_offset;
+ u8 flg;
+} __packed __aligned(4);
+
+/**
+ * union leapraid_ieee_sge_io_union - IEEE-style SGE union for I/O
+ *
+ * @ieee_simple: Simple IEEE SGE descriptor
+ * @ieee_chain: IEEE chain SGE descriptor
+ */
+union leapraid_ieee_sge_io_union {
+ struct leapraid_ieee_sge_simple64 ieee_simple;
+ struct leapraid_chain64_ieee_sg ieee_chain;
+};
+
+/**
+ * union leapraid_simple_sge_union - Union of simple SGE descriptors
+ *
+ * @leapio_simple: LeapIO-style simple SGE
+ * @ieee_simple: IEEE-style simple SGE
+ */
+union leapraid_simple_sge_union {
+ struct leapraid_sge_simple_union leapio_simple;
+ union leapraid_ieee_sge_simple_union ieee_simple;
+};
+
+/**
+ * union leapraid_sge_io_union - Combined SGE union for all I/O types
+ *
+ * @leapio_simple: LeapIO simple SGE format
+ * @leapio_chain: LeapIO chain SGE format
+ * @ieee_simple: IEEE simple SGE format
+ * @ieee_chain: IEEE chain SGE format
+ */
+union leapraid_sge_io_union {
+ struct leapraid_sge_simple_union leapio_simple;
+ struct leapraid_sge_chain_union leapio_chain;
+ union leapraid_ieee_sge_simple_union ieee_simple;
+ union leapraid_ieee_sge_chain_union ieee_chain;
+};
+
+/**
+ * struct leapraid_cfg_pg_header - Standard configuration page header
+ *
+ * @r1: Reserved
+ * @page_len: Length of the page in 4-byte units
+ * @page_num: Page number
+ * @page_type: Page type
+ */
+struct leapraid_cfg_pg_header {
+ u8 r1;
+ u8 page_len;
+ u8 page_num;
+ u8 page_type;
+};
+
+/**
+ * struct leapraid_cfg_ext_pg_header - Extended configuration page header
+ *
+ * @r1: Reserved
+ * @r2: Reserved
+ * @page_num: Page number
+ * @page_type: Page type
+ * @ext_page_len: Extended page length
+ * @ext_page_type: Extended page type
+ * @r3: Reserved
+ */
+struct leapraid_cfg_ext_pg_header {
+ u8 r1;
+ u8 r2;
+ u8 page_num;
+ u8 page_type;
+ __le16 ext_page_len;
+ u8 ext_page_type;
+ u8 r3;
+};
+
+/**
+ * struct leapraid_cfg_req - Configuration request message
+ *
+ * @action: Requested action type
+ * @sgl_flag: SGL flag field
+ * @chain_offset: Offset to next chain SGE
+ * @func: Function code
+ * @ext_page_len: Extended page length
+ * @ext_page_type: Extended page type
+ * @msg_flag: Message flags
+ * @r1: Reserved
+ * @header: Configuration page header
+ * @page_addr: Address of the page buffer
+ * @page_buf_sge: SGE describing the page buffer
+ */
+struct leapraid_cfg_req {
+ u8 action;
+ u8 sgl_flag;
+ u8 chain_offset;
+ u8 func;
+ __le16 ext_page_len;
+ u8 ext_page_type;
+ u8 msg_flag;
+ u8 r1[12];
+ struct leapraid_cfg_pg_header header;
+ __le32 page_addr;
+ union leapraid_sge_io_union page_buf_sge;
+};
+
+/**
+ * struct leapraid_cfg_rep - Configuration reply message
+ *
+ * @action: Action type from the request
+ * @r1: Reserved
+ * @msg_len: Message length in bytes
+ * @func: Function code
+ * @ext_page_len: Extended page length
+ * @ext_page_type: Extended page type
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @adapter_status: Adapter status code
+ * @r3: Reserved
+ * @header: Configuration page header
+ */
+struct leapraid_cfg_rep {
+ u8 action;
+ u8 r1;
+ u8 msg_len;
+ u8 func;
+ __le16 ext_page_len;
+ u8 ext_page_type;
+ u8 msg_flag;
+ u8 r2[6];
+ __le16 adapter_status;
+ u8 r3[4];
+ struct leapraid_cfg_pg_header header;
+};
+
+/**
+ * struct leapraid_boot_dev_format_sas_wwid - Boot device identified by SAS WWID
+ *
+ * @sas_addr: SAS address of the device
+ * @lun: Logical unit number
+ * @r1: Reserved
+ */
+struct leapraid_boot_dev_format_sas_wwid {
+ __le64 sas_addr;
+ u8 lun[8];
+ u8 r1[8];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_boot_dev_format_enc_slot - Boot device identified by enclosure slot
+ *
+ * @enc_lid: Enclosure logical ID
+ * @r1: Reserved
+ * @slot_num: Slot number in the enclosure
+ * @r2: Reserved
+ */
+struct leapraid_boot_dev_format_enc_slot {
+ __le64 enc_lid;
+ u8 r1[8];
+ __le16 slot_num;
+ u8 r2[6];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_boot_dev_format_dev_name - Boot device identified by device name
+ *
+ * @dev_name: Device name identifier
+ * @lun: Logical unit number
+ * @r1: Reserved
+ */
+struct leapraid_boot_dev_format_dev_name {
+ __le64 dev_name;
+ u8 lun[8];
+ u8 r1[8];
+} __packed __aligned(4);
+
+/**
+ * union leapraid_boot_dev_format - Boot device format union
+ *
+ * @sas_wwid: Format using SAS WWID and LUN
+ * @enc_slot: Format using enclosure slot and ID
+ * @dev_name: Format using device name and LUN
+ */
+union leapraid_boot_dev_format {
+ struct leapraid_boot_dev_format_sas_wwid sas_wwid;
+ struct leapraid_boot_dev_format_enc_slot enc_slot;
+ struct leapraid_boot_dev_format_dev_name dev_name;
+};
+
+/**
+ * struct leapraid_bios_page2 - BIOS configuration page 2
+ *
+ * @header: Configuration page header
+ * @r1: Reserved
+ * @requested_boot_dev_form: Format type of the requested boot device
+ * @r2: Reserved
+ * @requested_boot_dev: Boot device requested by BIOS or user
+ * @requested_alt_boot_dev_form: Format of the alternate boot device
+ * @r3: Reserved
+ * @requested_alt_boot_dev: Alternate boot device requested
+ * @current_boot_dev_form: Format type of the active boot device
+ * @r4: Reserved
+ * @current_boot_dev: Currently active boot device in use
+ */
+struct leapraid_bios_page2 {
+ struct leapraid_cfg_pg_header header;
+ u8 r1[24];
+ u8 requested_boot_dev_form;
+ u8 r2[3];
+ union leapraid_boot_dev_format requested_boot_dev;
+ u8 requested_alt_boot_dev_form;
+ u8 r3[3];
+ union leapraid_boot_dev_format requested_alt_boot_dev;
+ u8 current_boot_dev_form;
+ u8 r4[3];
+ union leapraid_boot_dev_format current_boot_dev;
+};
+
+/**
+ * struct leapraid_bios_page3 - BIOS configuration page 3
+ *
+ * @header: Configuration page header
+ * @r1: Reserved
+ * @bios_version: BIOS firmware version number
+ * @r2: Reserved
+ */
+struct leapraid_bios_page3 {
+ struct leapraid_cfg_pg_header header;
+ u8 r1[4];
+ __le32 bios_version;
+ u8 r2[84];
+};
+
+/**
+ * struct leapraid_raidvol0_phys_disk - Physical disk in RAID volume
+ *
+ * @r1: Reserved
+ * @phys_disk_num: Physical disk number within the RAID volume
+ * @r2: Reserved
+ */
+struct leapraid_raidvol0_phys_disk {
+ u8 r1[2];
+ u8 phys_disk_num;
+ u8 r2;
+};
+
+/**
+ * struct leapraid_raidvol_p0 - RAID volume configuration page 0
+ *
+ * @header: Configuration page header
+ * @dev_hdl: Device handle for the RAID volume
+ * @volume_state: State of the RAID volume
+ * @volume_type: RAID type
+ * @r1: Reserved
+ * @num_phys_disks: Number of physical disks in the volume
+ * @r2: Reserved
+ * @phys_disk: Array of physical disks in this volume
+ */
+struct leapraid_raidvol_p0 {
+ struct leapraid_cfg_pg_header header;
+ __le16 dev_hdl;
+ u8 volume_state;
+ u8 volume_type;
+ u8 r1[28];
+ u8 num_phys_disks;
+ u8 r2[3];
+ struct leapraid_raidvol0_phys_disk phys_disk[];
+};
+
+/**
+ * struct leapraid_raidvol_p1 - RAID volume configuration page 1
+ *
+ * @header: Configuration page header
+ * @dev_hdl: Device handle of the RAID volume
+ * @r1: Reserved
+ * @wwid: World-wide identifier for the volume
+ * @r2: Reserved
+ */
+struct leapraid_raidvol_p1 {
+ struct leapraid_cfg_pg_header header;
+ __le16 dev_hdl;
+ u8 r1[42];
+ __le64 wwid;
+ u8 r2[8];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_raidpd_p0 - Physical disk configuration page 0
+ *
+ * @header: Configuration page header
+ * @dev_hdl: Device handle of the physical disk
+ * @r1: Reserved
+ * @phys_disk_num: Physical disk number
+ * @r2: Reserved
+ */
+struct leapraid_raidpd_p0 {
+ struct leapraid_cfg_pg_header header;
+ __le16 dev_hdl;
+ u8 r1;
+ u8 phys_disk_num;
+ u8 r2[112];
+};
+
+/**
+ * struct leapraid_sas_io_unit0_phy_info - PHY info for SAS I/O unit
+ *
+ * @port: Port number the PHY belongs to
+ * @port_flg: Flags describing port status
+ * @phy_flg: Flags describing PHY status
+ * @neg_link_rate: Negotiated link rate of the PHY
+ * @controller_phy_dev_info: Controller PHY device info
+ * @attached_dev_hdl: Handle of attached device
+ * @controller_dev_hdl: Handle of the controller device
+ * @r1: Reserved
+ */
+struct leapraid_sas_io_unit0_phy_info {
+ u8 port;
+ u8 port_flg;
+ u8 phy_flg;
+ u8 neg_link_rate;
+ __le32 controller_phy_dev_info;
+ __le16 attached_dev_hdl;
+ __le16 controller_dev_hdl;
+ u8 r1[8];
+};
+
+/**
+ * struct leapraid_sas_io_unit_p0 - SAS I/O unit configuration page 0
+ *
+ * @header: Extended configuration page header
+ * @r1: Reserved
+ * @phy_num: Number of PHYs in this unit
+ * @r2: Reserved
+ * @phy_info: Array of PHY information
+ */
+struct leapraid_sas_io_unit_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[4];
+ u8 phy_num;
+ u8 r2[3];
+ struct leapraid_sas_io_unit0_phy_info phy_info[];
+};
+
+/**
+ * struct leapraid_sas_io_unit1_phy_info - Placeholder PHY info for SAS I/O unit page 1
+ *
+ * @r1: Reserved
+ */
+struct leapraid_sas_io_unit1_phy_info {
+ u8 r1[12];
+};
+
+/**
+ * struct leapraid_sas_io_unit_page1 - SAS I/O unit configuration page 1
+ *
+ * @header: Extended configuration page header
+ * @r1: Reserved
+ * @narrowport_max_queue_depth: Maximum queue depth for narrow ports
+ * @r2: Reserved
+ * @wideport_max_queue_depth: Maximum queue depth for wide ports
+ * @r3: Reserved
+ * @sata_max_queue_depth: Maximum SATA queue depth
+ * @r4: Reserved
+ * @phy_info: Array of PHY info structures
+ */
+struct leapraid_sas_io_unit_page1 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[2];
+ __le16 narrowport_max_queue_depth;
+ u8 r2[2];
+ __le16 wideport_max_queue_depth;
+ u8 r3;
+ u8 sata_max_queue_depth;
+ u8 r4[2];
+ struct leapraid_sas_io_unit1_phy_info phy_info[];
+};
+
+/**
+ * struct leapraid_exp_p0 - SAS expander page 0
+ *
+ * @header: Extended page header
+ * @physical_port: Physical port number
+ * @r1: Reserved
+ * @enc_hdl: Enclosure handle
+ * @sas_address: SAS address of the expander
+ * @r2: Reserved
+ * @dev_hdl: Device handle of this expander
+ * @parent_dev_hdl: Device handle of parent expander
+ * @r3: Reserved
+ * @phy_num: Number of PHYs
+ * @r4: Reserved
+ */
+struct leapraid_exp_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 physical_port;
+ u8 r1;
+ __le16 enc_hdl;
+ __le64 sas_address;
+ u8 r2[4];
+ __le16 dev_hdl;
+ __le16 parent_dev_hdl;
+ u8 r3[4];
+ u8 phy_num;
+ u8 r4[27];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_exp_p1 - SAS expander page 1
+ *
+ * @header: Extended page header
+ * @r1: Reserved
+ * @p_link_rate: PHY link rate
+ * @hw_link_rate: Hardware supported link rate
+ * @attached_dev_hdl: Attached device handle
+ * @r2: Reserved
+ * @neg_link_rate: Negotiated link rate
+ * @r3: Reserved
+ */
+struct leapraid_exp_p1 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[8];
+ u8 p_link_rate;
+ u8 hw_link_rate;
+ __le16 attached_dev_hdl;
+ u8 r2[11];
+ u8 neg_link_rate;
+ u8 r3[12];
+};
+
+/**
+ * struct leapraid_sas_dev_p0 - SAS device page 0
+ *
+ * @header: Extended configuration page header
+ * @slot: Slot number
+ * @enc_hdl: Enclosure handle
+ * @sas_address: SAS address
+ * @parent_dev_hdl: Parent device handle
+ * @phy_num: Number of PHYs
+ * @r1: Reserved
+ * @dev_hdl: Device handle
+ * @r2: Reserved
+ * @dev_info: Device information
+ * @flg: Flags
+ * @physical_port: Physical port number
+ * @max_port_connections: Maximum port connections
+ * @dev_name: Device name
+ * @port_groups: Number of port groups
+ * @r3: Reserved
+ * @enc_level: Enclosure level
+ * @connector_name: Connector identifier
+ * @r4: Reserved
+ */
+struct leapraid_sas_dev_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ __le16 slot;
+ __le16 enc_hdl;
+ __le64 sas_address;
+ __le16 parent_dev_hdl;
+ u8 phy_num;
+ u8 r1;
+ __le16 dev_hdl;
+ u8 r2[2];
+ __le32 dev_info;
+ __le16 flg;
+ u8 physical_port;
+ u8 max_port_connections;
+ __le64 dev_name;
+ u8 port_groups;
+ u8 r3[2];
+ u8 enc_level;
+ u8 connector_name[4];
+ u8 r4[4];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_sas_phy_p0 - SAS PHY configuration page 0
+ *
+ * @header: Extended configuration page header
+ * @r1: Reserved
+ * @attached_dev_hdl: Handle of attached device
+ * @r2: Reserved
+ * @p_link_rate: PHY link rate
+ * @hw_link_rate: Hardware supported link rate
+ * @r3: Reserved
+ * @phy_info: PHY information
+ * @neg_link_rate: Negotiated link rate
+ * @r4: Reserved
+ */
+struct leapraid_sas_phy_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[4];
+ __le16 attached_dev_hdl;
+ u8 r2[6];
+ u8 p_link_rate;
+ u8 hw_link_rate;
+ u8 r3[2];
+ __le32 phy_info;
+ u8 neg_link_rate;
+ u8 r4[3];
+};
+
+/**
+ * struct leapraid_enc_p0 - SAS enclosure page 0
+ *
+ * @header: Extended configuration page header
+ * @r1: Reserved
+ * @enc_lid: Enclosure logical ID
+ * @r2: Reserved
+ * @enc_hdl: Enclosure handle
+ * @r3: Reserved
+ */
+struct leapraid_enc_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[4];
+ __le64 enc_lid;
+ u8 r2[2];
+ __le16 enc_hdl;
+ u8 r3[15];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_raid_cfg_p0_element - RAID configuration element
+ *
+ * @element_flg: Element flags
+ * @vol_dev_hdl: Volume device handle
+ * @r1: Reserved
+ * @phys_disk_dev_hdl: Physical disk device handle
+ */
+struct leapraid_raid_cfg_p0_element {
+ __le16 element_flg;
+ __le16 vol_dev_hdl;
+ u8 r1[2];
+ __le16 phys_disk_dev_hdl;
+};
+
+/**
+ * struct leapraid_raid_cfg_p0 - RAID configuration page 0
+ *
+ * @header: Extended configuration page header
+ * @r1: Reserved
+ * @cfg_num: Configuration number
+ * @r2: Reserved
+ * @elements_num: Number of RAID elements
+ * @r3: Reserved
+ * @cfg_element: Array of RAID elements
+ */
+struct leapraid_raid_cfg_p0 {
+ struct leapraid_cfg_ext_pg_header header;
+ u8 r1[3];
+ u8 cfg_num;
+ u8 r2[32];
+ u8 elements_num;
+ u8 r3[3];
+ struct leapraid_raid_cfg_p0_element cfg_element[];
+};
+
+/**
+ * union leapraid_mpi_scsi_io_cdb_union - SCSI I/O CDB or simple SGE
+ *
+ * @cdb32: 32-byte SCSI command descriptor block
+ * @sge: Simple SGE format
+ */
+union leapraid_mpi_scsi_io_cdb_union {
+ u8 cdb32[32];
+ struct leapraid_sge_simple_union sge;
+};
+
+/**
+ * struct leapraid_mpi_scsiio_req - MPI SCSI I/O request
+ *
+ * @dev_hdl: Device handle for the target
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r1: Reserved
+ * @msg_flg: Message flags
+ * @r2: Reserved
+ * @sense_buffer_low_add: Lower 32-bit address of sense buffer
+ * @dma_flag: DMA flags
+ * @r3: Reserved
+ * @sense_buffer_len: Sense buffer length
+ * @r4: Reserved
+ * @sgl_offset0: SGL offset 0
+ * @sgl_offset1: SGL offset 1
+ * @sgl_offset2: SGL offset 2
+ * @sgl_offset3: SGL offset 3
+ * @skip_count: Bytes to skip before transfer
+ * @data_len: Length of data transfer
+ * @bi_dir_data_len: Bi-directional transfer length
+ * @io_flg: I/O flags
+ * @eedp_flag: EEDP flags
+ * @eedp_block_size: EEDP block size
+ * @r5: Reserved
+ * @secondary_ref_tag: Secondary reference tag
+ * @secondary_app_tag: Secondary application tag
+ * @app_tag_trans_mask: Application tag mask
+ * @lun: Logical Unit Number
+ * @ctrl: Control flags
+ * @cdb: SCSI Command Descriptor Block or simple SGE
+ * @sgl: Scatter-gather list
+ */
+struct leapraid_mpi_scsiio_req {
+ __le16 dev_hdl;
+ u8 chain_offset;
+ u8 func;
+ u8 r1[3];
+ u8 msg_flg;
+ u8 r2[4];
+ __le32 sense_buffer_low_add;
+ u8 dma_flag;
+ u8 r3;
+ u8 sense_buffer_len;
+ u8 r4;
+ u8 sgl_offset0;
+ u8 sgl_offset1;
+ u8 sgl_offset2;
+ u8 sgl_offset3;
+ __le32 skip_count;
+ __le32 data_len;
+ __le32 bi_dir_data_len;
+ __le16 io_flg;
+ __le16 eedp_flag;
+ __le16 eedp_block_size;
+ u8 r5[2];
+ __le32 secondary_ref_tag;
+ __le16 secondary_app_tag;
+ __le16 app_tag_trans_mask;
+ u8 lun[8];
+ __le32 ctrl;
+ union leapraid_mpi_scsi_io_cdb_union cdb;
+ union leapraid_sge_io_union sgl;
+};
+
+/**
+ * union leapraid_scsi_io_cdb_union - SCSI I/O CDB or IEEE simple SGE
+ *
+ * @cdb32: 32-byte SCSI CDB
+ * @sge: IEEE simple 64-bit SGE
+ */
+union leapraid_scsi_io_cdb_union {
+ u8 cdb32[32];
+ struct leapraid_ieee_sge_simple64 sge;
+};
+
+/**
+ * struct leapraid_scsiio_req - SCSI I/O request
+ *
+ * @dev_hdl: Device handle
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r1: Reserved
+ * @msg_flg: Message flags
+ * @r2: Reserved
+ * @sense_buffer_low_add: Lower 32-bit address of sense buffer
+ * @dma_flag: DMA flag
+ * @r3: Reserved
+ * @sense_buffer_len: Sense buffer length
+ * @r4: Reserved
+ * @sgl_offset0: SGL offset 0
+ * @sgl_offset1: SGL offset 1
+ * @sgl_offset2: SGL offset 2
+ * @sgl_offset3: SGL offset 3
+ * @skip_count: Bytes to skip before transfer
+ * @data_len: Length of data transfer
+ * @bi_dir_data_len: Bi-directional transfer length
+ * @io_flg: I/O flags
+ * @eedp_flag: EEDP flags
+ * @eedp_block_size: EEDP block size
+ * @r5: Reserved
+ * @secondary_ref_tag: Secondary reference tag
+ * @secondary_app_tag: Secondary application tag
+ * @app_tag_trans_mask: Application tag mask
+ * @lun: Logical Unit Number
+ * @ctrl: Control flags
+ * @cdb: SCSI Command Descriptor Block or simple SGE
+ * @sgl: Scatter-gather list
+ */
+struct leapraid_scsiio_req {
+ __le16 dev_hdl;
+ u8 chain_offset;
+ u8 func;
+ u8 r1[3];
+ u8 msg_flg;
+ u8 r2[4];
+ __le32 sense_buffer_low_add;
+ u8 dma_flag;
+ u8 r3;
+ u8 sense_buffer_len;
+ u8 r4;
+ u8 sgl_offset0;
+ u8 sgl_offset1;
+ u8 sgl_offset2;
+ u8 sgl_offset3;
+ __le32 skip_count;
+ __le32 data_len;
+ __le32 bi_dir_data_len;
+ __le16 io_flg;
+ __le16 eedp_flag;
+ __le16 eedp_block_size;
+ u8 r5[2];
+ __le32 secondary_ref_tag;
+ __le16 secondary_app_tag;
+ __le16 app_tag_trans_mask;
+ u8 lun[8];
+ __le32 ctrl;
+ union leapraid_scsi_io_cdb_union cdb;
+ union leapraid_ieee_sge_io_union sgl;
+};
+
+/**
+ * struct leapraid_scsiio_rep - SCSI I/O response
+ *
+ * @dev_hdl: Device handle
+ * @msg_len: Length of response message
+ * @func: Function code
+ * @r1: Reserved
+ * @msg_flg: Message flags
+ * @r2: Reserved
+ * @scsi_status: SCSI status
+ * @scsi_state: SCSI state
+ * @adapter_status: Adapter status
+ * @r3: Reserved
+ * @transfer_count: Number of bytes transferred
+ * @sense_count: Number of sense bytes
+ * @resp_info: Additional response info
+ * @task_tag: Task identifier
+ * @scsi_status_qualifier: SCSI status qualifier
+ * @bi_dir_trans_count: Bi-directional transfer count
+ * @r4: Reserved
+ */
+struct leapraid_scsiio_rep {
+ __le16 dev_hdl;
+ u8 msg_len;
+ u8 func;
+ u8 r1[3];
+ u8 msg_flg;
+ u8 r2[4];
+ u8 scsi_status;
+ u8 scsi_state;
+ __le16 adapter_status;
+ u8 r3[4];
+ __le32 transfer_count;
+ __le32 sense_count;
+ __le32 resp_info;
+ __le16 task_tag;
+ __le16 scsi_status_qualifier;
+ __le32 bi_dir_trans_count;
+ __le32 r4[3];
+};
+
+/**
+ * struct leapraid_scsi_tm_req - SCSI Task Management request
+ *
+ * @dev_hdl: Device handle
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r1: Reserved
+ * @task_type: Task management function type
+ * @r2: Reserved
+ * @msg_flg: Message flags
+ * @r3: Reserved
+ * @lun: Logical Unit Number
+ * @r4: Reserved
+ * @task_mid: Task identifier
+ * @r5: Reserved
+ */
+struct leapraid_scsi_tm_req {
+ __le16 dev_hdl;
+ u8 chain_offset;
+ u8 func;
+ u8 r1;
+ u8 task_type;
+ u8 r2;
+ u8 msg_flg;
+ u8 r3[4];
+ u8 lun[8];
+ u8 r4[28];
+ __le16 task_mid;
+ u8 r5[2];
+};
+
+/**
+ * struct leapraid_scsi_tm_rep - SCSI Task Management response
+ *
+ * @dev_hdl: Device handle
+ * @msg_len: Length of response message
+ * @func: Function code
+ * @resp_code: Response code
+ * @task_type: Task management type
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @adapter_status: Adapter status
+ * @r3: Reserved
+ * @termination_count: Count of terminated tasks
+ * @response_info: Additional response info
+ */
+struct leapraid_scsi_tm_rep {
+ __le16 dev_hdl;
+ u8 msg_len;
+ u8 func;
+ u8 resp_code;
+ u8 task_type;
+ u8 r1;
+ u8 msg_flag;
+ u8 r2[6];
+ __le16 adapter_status;
+ u8 r3[4];
+ __le32 termination_count;
+ __le32 response_info;
+};
+
+/**
+ * struct leapraid_sep_req - SEP (SCSI Enclosure Processor) request
+ *
+ * @dev_hdl: Device handle
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @act: Action to perform
+ * @flg: Flags
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @slot_status: Slot status
+ * @r3: Reserved
+ * @slot: Slot number
+ * @enc_hdl: Enclosure handle
+ */
+struct leapraid_sep_req {
+ __le16 dev_hdl;
+ u8 chain_offset;
+ u8 func;
+ u8 act;
+ u8 flg;
+ u8 r1;
+ u8 msg_flag;
+ u8 r2[4];
+ __le32 slot_status;
+ u8 r3[12];
+ __le16 slot;
+ __le16 enc_hdl;
+};
+
+/**
+ * struct leapraid_sep_rep - SEP response
+ *
+ * @dev_hdl: Device handle
+ * @msg_len: Message length
+ * @func: Function code
+ * @act: Action performed
+ * @flg: Flags
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @adapter_status: Adapter status
+ * @r3: Reserved
+ * @slot_status: Slot status
+ * @r4: Reserved
+ * @slot: Slot number
+ * @enc_hdl: Enclosure handle
+ */
+struct leapraid_sep_rep {
+ __le16 dev_hdl;
+ u8 msg_len;
+ u8 func;
+ u8 act;
+ u8 flg;
+ u8 r1;
+ u8 msg_flag;
+ u8 r2[6];
+ __le16 adapter_status;
+ u8 r3[4];
+ __le32 slot_status;
+ u8 r4[4];
+ __le16 slot;
+ __le16 enc_hdl;
+};
+
+/**
+ * struct leapraid_adapter_init_req - Adapter initialization request
+ *
+ * @who_init: Initiator of the initialization
+ * @r1: Reserved
+ * @chain_offset: Chain offset
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flg: Message flags
+ * @driver_ver: Driver version
+ * @msg_ver: Message version
+ * @header_ver: Header version
+ * @host_buf_addr: Host buffer address (non adapter-ref)
+ * @r4: Reserved
+ * @host_buf_size: Host buffer size (non adapter-ref)
+ * @host_msix_vectors: Number of host MSI-X vectors
+ * @r6: Reserved
+ * @req_frame_size: Request frame size
+ * @rep_desc_qd: Reply descriptor queue depth
+ * @rep_msg_qd: Reply message queue depth
+ * @sense_buffer_add_high: High 32-bit of sense buffer address
+ * @rep_msg_dma_high: High 32-bit of reply message DMA address
+ * @task_desc_base_addr: Base address of task descriptors
+ * @rep_desc_q_arr_addr: Address of reply descriptor queue array
+ * @rep_msg_addr_dma: Reply message DMA address
+ * @time_stamp: Timestamp
+ */
+struct leapraid_adapter_init_req {
+ u8 who_init;
+ u8 r1;
+ u8 chain_offset;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flg;
+ __le32 driver_ver;
+ __le16 msg_ver;
+ __le16 header_ver;
+ __le32 host_buf_addr;
+ u8 r4[2];
+ u8 host_buf_size;
+ u8 host_msix_vectors;
+ u8 r6[2];
+ __le16 req_frame_size;
+ __le16 rep_desc_qd;
+ __le16 rep_msg_qd;
+ __le32 sense_buffer_add_high;
+ __le32 rep_msg_dma_high;
+ __le64 task_desc_base_addr;
+ __le64 rep_desc_q_arr_addr;
+ __le64 rep_msg_addr_dma;
+ __le64 time_stamp;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_rep_desc_q_arr - Reply descriptor queue array
+ *
+ * @rep_desc_base_addr: Base address of the reply descriptors
+ * @r1: Reserved
+ */
+struct leapraid_rep_desc_q_arr {
+ __le64 rep_desc_base_addr;
+ __le64 r1;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_adapter_init_rep - Adapter initialization reply
+ *
+ * @who_init: Initiator of the initialization
+ * @r1: Reserved
+ * @msg_len: Length of reply message
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ * @adapter_status: Adapter status
+ * @r4: Reserved
+ */
+struct leapraid_adapter_init_rep {
+ u8 who_init;
+ u8 r1;
+ u8 msg_len;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flag;
+ u8 r3[6];
+ __le16 adapter_status;
+ u8 r4[4];
+};
+
+/**
+ * struct leapraid_adapter_log_req - Adapter log request
+ *
+ * @action: Action code
+ * @type: Log type
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @mbox: Mailbox for command-specific parameters
+ * @sge: Scatter-gather entry for data buffer
+ */
+struct leapraid_adapter_log_req {
+ u8 action;
+ u8 type;
+ u8 chain_offset;
+ u8 func;
+ u8 r1[3];
+ u8 msg_flag;
+ u8 r2[4];
+ union {
+ u8 b[12];
+ __le16 s[6];
+ __le32 w[3];
+ } mbox;
+ struct leapraid_sge_simple64 sge;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_adapter_log_rep - Adapter log reply
+ *
+ * @action: Action code echoed
+ * @type: Log type echoed
+ * @msg_len: Length of message
+ * @func: Function code
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @adapter_status: Status returned by adapter
+ */
+struct leapraid_adapter_log_rep {
+ u8 action;
+ u8 type;
+ u8 msg_len;
+ u8 func;
+ u8 r1[3];
+ u8 msg_flag;
+ u8 r2[6];
+ __le16 adapter_status;
+};
+
+/**
+ * struct leapraid_adapter_features_req - Request adapter features
+ *
+ * @r1: Reserved
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ */
+struct leapraid_adapter_features_req {
+ u8 r1[2];
+ u8 chain_offset;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flag;
+ u8 r3[4];
+};
+
+/**
+ * struct leapraid_adapter_features_rep - Adapter features reply
+ *
+ * @msg_ver: Message version
+ * @msg_len: Length of reply message
+ * @func: Function code
+ * @header_ver: Header version
+ * @r1: Reserved
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @adapter_status: Adapter status
+ * @r3: Reserved
+ * @sata_max_qdepth: Maximum SATA queue depth
+ * @who_init: Who initialized the adapter
+ * @r4: Reserved
+ * @max_msix_vectors: Max MSI-X vectors supported
+ * @req_slot: Number of request slots
+ * @r5: Reserved
+ * @adapter_caps: Adapter capabilities
+ * @fw_version: Firmware version
+ * @sas_wide_max_qdepth: Max wide SAS queue depth
+ * @sas_narrow_max_qdepth: Max narrow SAS queue depth
+ * @r6: Reserved
+ * @hp_slot: Number of high-priority slots
+ * @r7: Reserved
+ * @max_volumes: Maximum supported volumes
+ * @max_dev_hdl: Maximum device handle
+ * @r8: Reserved
+ * @min_dev_hdl: Minimum device handle
+ * @r9: Reserved
+ */
+struct leapraid_adapter_features_rep {
+	__le16 msg_ver;
+	u8 msg_len;
+	u8 func;
+	__le16 header_ver;
+	u8 r1;
+	u8 msg_flag;
+	u8 r2[6];
+	__le16 adapter_status;
+ u8 r3[4];
+ u8 sata_max_qdepth;
+ u8 who_init;
+ u8 r4;
+ u8 max_msix_vectors;
+ __le16 req_slot;
+ u8 r5[2];
+ __le32 adapter_caps;
+ __le32 fw_version;
+ __le16 sas_wide_max_qdepth;
+ __le16 sas_narrow_max_qdepth;
+ u8 r6[10];
+ __le16 hp_slot;
+ u8 r7[3];
+ u8 max_volumes;
+ __le16 max_dev_hdl;
+ u8 r8[2];
+ __le16 min_dev_hdl;
+ u8 r9[6];
+};
+
+/**
+ * struct leapraid_scan_dev_req - Request to scan devices
+ *
+ * @r1: Reserved
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ */
+struct leapraid_scan_dev_req {
+ u8 r1[2];
+ u8 chain_offset;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flag;
+ u8 r3[4];
+};
+
+/**
+ * struct leapraid_scan_dev_rep - Scan devices reply
+ *
+ * @r1: Reserved
+ * @msg_len: Length of message
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ * @adapter_status: Adapter status
+ * @r4: Reserved
+ */
+struct leapraid_scan_dev_rep {
+ u8 r1[2];
+ u8 msg_len;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flag;
+ u8 r3[6];
+ __le16 adapter_status;
+ u8 r4[4];
+};
+
+/**
+ * struct leapraid_evt_notify_req - Event notification request
+ *
+ * @r1: Reserved
+ * @chain_offset: Offset for chained SGE
+ * @func: Function code
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ * @evt_masks: Event masks to enable notifications
+ * @r4: Reserved
+ */
+struct leapraid_evt_notify_req {
+ u8 r1[2];
+ u8 chain_offset;
+ u8 func;
+ u8 r2[3];
+ u8 msg_flag;
+ u8 r3[12];
+ __le32 evt_masks[4];
+ u8 r4[8];
+};
+
+/**
+ * struct leapraid_evt_notify_rep - Event notification reply
+ *
+ * @evt_data_len: Length of event data
+ * @msg_len: Length of message
+ * @func: Function code
+ * @r1: Reserved
+ * @r2: Reserved
+ * @msg_flag: Message flags
+ * @r3: Reserved
+ * @adapter_status: Adapter status
+ * @r4: Reserved
+ * @evt: Event code
+ * @r5: Reserved
+ * @evt_data: Event data array
+ */
+struct leapraid_evt_notify_rep {
+ __le16 evt_data_len;
+ u8 msg_len;
+ u8 func;
+ u8 r1[2];
+ u8 r2;
+ u8 msg_flag;
+ u8 r3[6];
+ __le16 adapter_status;
+ u8 r4[4];
+ __le16 evt;
+ u8 r5[6];
+ __le32 evt_data[];
+};
+
+/**
+ * struct leapraid_evt_data_sas_dev_status_change - SAS device status change
+ *
+ * @task_tag: Task identifier
+ * @reason_code: Reason for status change
+ * @physical_port: Physical port number
+ * @r1: Reserved
+ * @dev_hdl: Device handle
+ * @r2: Reserved
+ * @sas_address: SAS address of device
+ * @lun: Logical Unit Number
+ */
+struct leapraid_evt_data_sas_dev_status_change {
+ __le16 task_tag;
+ u8 reason_code;
+ u8 physical_port;
+ u8 r1[2];
+ __le16 dev_hdl;
+ u8 r2[4];
+ __le64 sas_address;
+ u8 lun[8];
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_evt_data_ir_change - IR (Integrated RAID) change event data
+ *
+ * @r1: Reserved
+ * @reason_code: Reason for IR change
+ * @r2: Reserved
+ * @vol_dev_hdl: Volume device handle
+ * @phys_disk_dev_hdl: Physical disk device handle
+ */
+struct leapraid_evt_data_ir_change {
+ u8 r1;
+ u8 reason_code;
+ u8 r2[2];
+ __le16 vol_dev_hdl;
+ __le16 phys_disk_dev_hdl;
+};
+
+/**
+ * struct leapraid_evt_data_sas_disc - SAS discovery event data
+ *
+ * @r1: Reserved
+ * @reason_code: Reason for discovery event
+ * @physical_port: Physical port number where event occurred
+ * @r2: Reserved
+ */
+struct leapraid_evt_data_sas_disc {
+ u8 r1;
+ u8 reason_code;
+ u8 physical_port;
+ u8 r2[5];
+};
+
+/**
+ * struct leapraid_evt_sas_topo_phy_entry - SAS topology PHY entry
+ *
+ * @attached_dev_hdl: Device handle attached to PHY
+ * @link_rate: Current link rate
+ * @phy_status: PHY status flags
+ */
+struct leapraid_evt_sas_topo_phy_entry {
+ __le16 attached_dev_hdl;
+ u8 link_rate;
+ u8 phy_status;
+};
+
+/**
+ * struct leapraid_evt_data_sas_topo_change_list - SAS topology change list
+ *
+ * @encl_hdl: Enclosure handle
+ * @exp_dev_hdl: Expander device handle
+ * @num_phys: Number of PHYs in this entry
+ * @r1: Reserved
+ * @entry_num: Entry index
+ * @start_phy_num: Start PHY number
+ * @exp_status: Expander status
+ * @physical_port: Physical port number
+ * @phy: Array of SAS PHY entries
+ */
+struct leapraid_evt_data_sas_topo_change_list {
+ __le16 encl_hdl;
+ __le16 exp_dev_hdl;
+ u8 num_phys;
+ u8 r1[3];
+ u8 entry_num;
+ u8 start_phy_num;
+ u8 exp_status;
+ u8 physical_port;
+ struct leapraid_evt_sas_topo_phy_entry phy[];
+};
+
+/**
+ * struct leapraid_evt_data_sas_enc_dev_status_change - SAS enclosure device status
+ *
+ * @enc_hdl: Enclosure handle
+ * @reason_code: Reason code for status change
+ * @physical_port: Physical port number
+ * @encl_logical_id: Enclosure logical ID
+ * @num_slots: Number of slots in enclosure
+ * @start_slot: First affected slot
+ * @phy_bits: Bitmap of affected PHYs
+ */
+struct leapraid_evt_data_sas_enc_dev_status_change {
+ __le16 enc_hdl;
+ u8 reason_code;
+ u8 physical_port;
+ __le64 encl_logical_id;
+ __le16 num_slots;
+ __le16 start_slot;
+ __le32 phy_bits;
+};
+
+/**
+ * struct leapraid_io_unit_ctrl_req - IO unit control request
+ *
+ * @op: Operation code
+ * @r1: Reserved
+ * @chain_offset: SGE chain offset
+ * @func: Function code
+ * @dev_hdl: Device handle
+ * @adapter_para: Adapter parameter selector
+ * @msg_flag: Message flags
+ * @r2: Reserved
+ * @phy_num: PHY number
+ * @r3: Reserved
+ * @adapter_para_value: Value for adapter parameter
+ * @adapter_para_value2: Optional second parameter value
+ * @r4: Reserved
+ */
+struct leapraid_io_unit_ctrl_req {
+ u8 op;
+ u8 r1;
+ u8 chain_offset;
+ u8 func;
+	__le16 dev_hdl;
+ u8 adapter_para;
+ u8 msg_flag;
+ u8 r2[6];
+ u8 phy_num;
+ u8 r3[17];
+ __le32 adapter_para_value;
+ __le32 adapter_para_value2;
+ u8 r4[4];
+};
+
+/**
+ * struct leapraid_io_unit_ctrl_rep - IO unit control reply
+ *
+ * @op: Operation code echoed
+ * @r1: Reserved
+ * @func: Function code
+ * @dev_hdl: Device handle
+ * @r2: Reserved
+ */
+struct leapraid_io_unit_ctrl_rep {
+ u8 op;
+ u8 r1[2];
+ u8 func;
+ __le16 dev_hdl;
+ u8 r2[14];
+};
+
+/**
+ * struct leapraid_raid_act_req - RAID action request
+ *
+ * @act: RAID action code
+ * @r1: Reserved
+ * @func: Function code
+ * @r2: Reserved
+ * @phys_disk_num: Number of physical disks involved
+ * @r3: Reserved
+ * @action_data_sge: SGE describing action-specific data
+ */
+struct leapraid_raid_act_req {
+ u8 act;
+ u8 r1[2];
+ u8 func;
+ u8 r2[2];
+ u8 phys_disk_num;
+ u8 r3[13];
+ struct leapraid_sge_simple_union action_data_sge;
+};
+
+/**
+ * struct leapraid_raid_act_rep - RAID action reply
+ *
+ * @act: RAID action code echoed
+ * @r1: Reserved
+ * @func: Function code
+ * @vol_dev_hdl: Volume device handle
+ * @r2: Reserved
+ * @adapter_status: Status returned by adapter
+ * @r3: Reserved
+ */
+struct leapraid_raid_act_rep {
+ u8 act;
+ u8 r1[2];
+ u8 func;
+ __le16 vol_dev_hdl;
+ u8 r2[8];
+ __le16 adapter_status;
+ u8 r3[76];
+};
+
+/**
+ * struct leapraid_smp_passthrough_req - SMP passthrough request
+ *
+ * @passthrough_flg: Passthrough flags
+ * @physical_port: Target PHY port
+ * @r1: Reserved
+ * @func: Function code
+ * @req_data_len: Request data length
+ * @r2: Reserved
+ * @sas_address: SAS address of target device
+ * @r3: Reserved
+ * @sgl: Scatter-gather list describing request buffer
+ */
+struct leapraid_smp_passthrough_req {
+ u8 passthrough_flg;
+ u8 physical_port;
+ u8 r1;
+ u8 func;
+ __le16 req_data_len;
+ u8 r2[10];
+ __le64 sas_address;
+ u8 r3[8];
+ union leapraid_simple_sge_union sgl;
+} __packed __aligned(4);
+
+/**
+ * struct leapraid_smp_passthrough_rep - SMP passthrough reply
+ *
+ * @passthrough_flg: Passthrough flags echoed
+ * @physical_port: Target PHY port
+ * @r1: Reserved
+ * @func: Function code
+ * @resp_data_len: Length of response data
+ * @r2: Reserved
+ * @adapter_status: Adapter status
+ * @r3: Reserved
+ */
+struct leapraid_smp_passthrough_rep {
+ u8 passthrough_flg;
+ u8 physical_port;
+ u8 r1;
+ u8 func;
+ __le16 resp_data_len;
+ u8 r2[8];
+ __le16 adapter_status;
+ u8 r3[12];
+};
+
+/**
+ * struct leapraid_sas_io_unit_ctrl_req - SAS IO unit control request
+ *
+ * @op: Operation code
+ * @r1: Reserved
+ * @func: Function code
+ * @dev_hdl: Device handle
+ * @r2: Reserved
+ */
+struct leapraid_sas_io_unit_ctrl_req {
+ u8 op;
+ u8 r1[2];
+ u8 func;
+ __le16 dev_hdl;
+ u8 r2[38];
+};
+
+#endif /* LEAPRAID_H */
diff --git a/drivers/scsi/leapraid/leapraid_app.c b/drivers/scsi/leapraid/leapraid_app.c
new file mode 100644
index 000000000000..f838bd5aa20e
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid_app.c
@@ -0,0 +1,675 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+
+#include <linux/compat.h>
+#include <linux/module.h>
+#include <linux/miscdevice.h>
+
+#include "leapraid_func.h"
+
+/* ioctl device file */
+#define LEAPRAID_DEV_NAME "leapraid_ctl"
+
+/* ioctl version */
+#define LEAPRAID_IOCTL_VERSION 0x07
+
+/* ioctl command */
+#define LEAPRAID_ADAPTER_INFO 17
+#define LEAPRAID_COMMAND 20
+#define LEAPRAID_EVENTQUERY 21
+#define LEAPRAID_EVENTREPORT 23
+
+/**
+ * struct leapraid_ioctl_header - IOCTL command header
+ * @adapter_id: Adapter identifier
+ * @port_number: Port identifier
+ * @max_data_size: Maximum data size for transfer
+ */
+struct leapraid_ioctl_header {
+ u32 adapter_id;
+ u32 port_number;
+ u32 max_data_size;
+};
+
+/**
+ * struct leapraid_ioctl_diag_reset - Diagnostic reset request
+ * @hdr: Common IOCTL header
+ */
+struct leapraid_ioctl_diag_reset {
+ struct leapraid_ioctl_header hdr;
+};
+
+/**
+ * struct leapraid_ioctl_pci_info - PCI device information
+ * @u: Union holding PCI bus/device/function information
+ * @u.bits.dev: PCI device number
+ * @u.bits.func: PCI function number
+ * @u.bits.bus: PCI bus number
+ * @u.word: Combined representation of PCI BDF
+ * @seg_id: PCI segment identifier
+ */
+struct leapraid_ioctl_pci_info {
+ union {
+ struct {
+ u32 dev:5;
+ u32 func:3;
+ u32 bus:24;
+ } bits;
+ u32 word;
+ } u;
+ u32 seg_id;
+};
+
+/**
+ * struct leapraid_ioctl_adapter_info - Adapter information for IOCTL
+ * @hdr: IOCTL header
+ * @adapter_type: Adapter type identifier
+ * @port_number: Port number
+ * @pci_id: PCI device ID
+ * @revision: Revision number
+ * @sub_dev: Subsystem device ID
+ * @sub_vendor: Subsystem vendor ID
+ * @r0: Reserved
+ * @fw_ver: Firmware version
+ * @bios_ver: BIOS version
+ * @driver_ver: Driver version
+ * @r1: Reserved
+ * @scsi_id: SCSI ID
+ * @r2: Reserved
+ * @pci_info: PCI information structure
+ */
+struct leapraid_ioctl_adapter_info {
+ struct leapraid_ioctl_header hdr;
+ u32 adapter_type;
+ u32 port_number;
+ u32 pci_id;
+ u32 revision;
+ u32 sub_dev;
+ u32 sub_vendor;
+ u32 r0;
+ u32 fw_ver;
+ u32 bios_ver;
+ u8 driver_ver[32];
+ u8 r1;
+ u8 scsi_id;
+ u16 r2;
+ struct leapraid_ioctl_pci_info pci_info;
+};
+
+/**
+ * struct leapraid_ioctl_command - IOCTL command structure
+ * @hdr: IOCTL header
+ * @timeout: Command timeout
+ * @rep_msg_buf_ptr: User pointer to reply message buffer
+ * @c2h_buf_ptr: User pointer to card-to-host data buffer
+ * @h2c_buf_ptr: User pointer to host-to-card data buffer
+ * @sense_data_ptr: User pointer to sense data buffer
+ * @max_rep_bytes: Maximum reply bytes
+ * @c2h_size: Card-to-host data size
+ * @h2c_size: Host-to-card data size
+ * @max_sense_bytes: Maximum sense data bytes
+ * @data_sge_offset: Data SGE offset
+ * @mf: Message frame data (flexible array)
+ */
+struct leapraid_ioctl_command {
+ struct leapraid_ioctl_header hdr;
+ u32 timeout;
+ void __user *rep_msg_buf_ptr;
+ void __user *c2h_buf_ptr;
+ void __user *h2c_buf_ptr;
+ void __user *sense_data_ptr;
+ u32 max_rep_bytes;
+ u32 c2h_size;
+ u32 h2c_size;
+ u32 max_sense_bytes;
+ u32 data_sge_offset;
+ u8 mf[];
+};
+
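+/* Look up an adapter by the ID supplied in the ioctl header. */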
+static struct leapraid_adapter *leapraid_ctl_lookup_adapter(int adapter_id)
+{
+ struct leapraid_adapter *adapter;
+
+ spin_lock(&leapraid_adapter_lock);
+ list_for_each_entry(adapter, &leapraid_adapter_list, list) {
+ if (adapter->adapter_attr.id == adapter_id) {
+ spin_unlock(&leapraid_adapter_lock);
+ return adapter;
+ }
+ }
+ spin_unlock(&leapraid_adapter_lock);
+
+ return NULL;
+}
+
+static void leapraid_cli_scsiio_cmd(struct leapraid_adapter *adapter,
+ struct leapraid_req *ctl_sp_mpi_req, u16 taskid,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size,
+ u16 dev_hdl, void *psge)
+{
+ struct leapraid_mpi_scsiio_req *scsiio_request =
+ (struct leapraid_mpi_scsiio_req *)ctl_sp_mpi_req;
+
+ scsiio_request->sense_buffer_len = SCSI_SENSE_BUFFERSIZE;
+ scsiio_request->sense_buffer_low_add =
+ leapraid_get_sense_buffer_dma(adapter, taskid);
+ memset((void *)(&adapter->driver_cmds.ctl_cmd.sense),
+ 0, SCSI_SENSE_BUFFERSIZE);
+ leapraid_build_ieee_sg(adapter, psge, h2c_dma_addr,
+ h2c_size, c2h_dma_addr, c2h_size);
+ if (scsiio_request->func == LEAPRAID_FUNC_SCSIIO_REQ)
+ leapraid_fire_scsi_io(adapter, taskid, dev_hdl);
+ else
+ leapraid_fire_task(adapter, taskid);
+}
+
+static void leapraid_ctl_smp_passthrough_cmd(struct leapraid_adapter *adapter,
+ struct leapraid_req *ctl_sp_mpi_req,
+ u16 taskid,
+ dma_addr_t h2c_dma_addr,
+ size_t h2c_size,
+ dma_addr_t c2h_dma_addr,
+ size_t c2h_size,
+ void *psge, void *h2c)
+{
+ struct leapraid_smp_passthrough_req *smp_pt_req =
+ (struct leapraid_smp_passthrough_req *)ctl_sp_mpi_req;
+ u8 *data;
+
+ if (!adapter->adapter_attr.enable_mp)
+ smp_pt_req->physical_port = LEAPRAID_DISABLE_MP_PORT_ID;
+ if (smp_pt_req->passthrough_flg & LEAPRAID_SMP_PT_FLAG_SGL_PTR)
+ data = (u8 *)&smp_pt_req->sgl;
+ else
+ data = h2c;
+
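+	/*
+	 * SMP frames that may reset a PHY mark the adapter as link-resetting;
+	 * the flag is cleared again once the command completes.
+	 */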
+ if (data[1] == LEAPRAID_SMP_FN_REPORT_PHY_ERR_LOG &&
+ (data[10] == 1 || data[10] == 2))
+ adapter->reset_desc.adapter_link_resetting = true;
+ leapraid_build_ieee_sg(adapter, psge, h2c_dma_addr,
+ h2c_size, c2h_dma_addr, c2h_size);
+ leapraid_fire_task(adapter, taskid);
+}
+
+static void leapraid_ctl_fire_ieee_cmd(struct leapraid_adapter *adapter,
+ dma_addr_t h2c_dma_addr,
+ size_t h2c_size,
+ dma_addr_t c2h_dma_addr,
+ size_t c2h_size,
+ void *psge, u16 taskid)
+{
+ leapraid_build_ieee_sg(adapter, psge, h2c_dma_addr, h2c_size,
+ c2h_dma_addr, c2h_size);
+ leapraid_fire_task(adapter, taskid);
+}
+
+static void leapraid_ctl_sata_passthrough_cmd(struct leapraid_adapter *adapter,
+ dma_addr_t h2c_dma_addr,
+ size_t h2c_size,
+ dma_addr_t c2h_dma_addr,
+ size_t c2h_size,
+ void *psge, u16 taskid)
+{
+ leapraid_ctl_fire_ieee_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+}
+
+static void leapraid_ctl_load_fw_cmd(struct leapraid_adapter *adapter,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size,
+ void *psge, u16 taskid)
+{
+ leapraid_ctl_fire_ieee_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+}
+
+static void leapraid_ctl_fire_mpi_cmd(struct leapraid_adapter *adapter,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size,
+ void *psge, u16 taskid)
+{
+ leapraid_build_mpi_sg(adapter, psge, h2c_dma_addr,
+ h2c_size, c2h_dma_addr, c2h_size);
+ leapraid_fire_task(adapter, taskid);
+}
+
+static void leapraid_ctl_sas_io_unit_ctrl_cmd(struct leapraid_adapter *adapter,
+ struct leapraid_req *ctl_sp_mpi_req,
+ dma_addr_t h2c_dma_addr,
+ size_t h2c_size,
+ dma_addr_t c2h_dma_addr,
+ size_t c2h_size,
+ void *psge, u16 taskid)
+{
+ struct leapraid_sas_io_unit_ctrl_req *sas_io_unit_ctrl_req =
+ (struct leapraid_sas_io_unit_ctrl_req *)ctl_sp_mpi_req;
+
+ if (sas_io_unit_ctrl_req->op == LEAPRAID_SAS_OP_PHY_HARD_RESET ||
+ sas_io_unit_ctrl_req->op == LEAPRAID_SAS_OP_PHY_LINK_RESET)
+ adapter->reset_desc.adapter_link_resetting = true;
+ leapraid_ctl_fire_mpi_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+}
+
+static long leapraid_ctl_do_command(struct leapraid_adapter *adapter,
+ struct leapraid_ioctl_command *karg,
+ void __user *mf)
+{
+ struct leapraid_req *leap_mpi_req = NULL;
+ struct leapraid_req *ctl_sp_mpi_req = NULL;
+ u16 taskid;
+ void *h2c = NULL;
+ size_t h2c_size = 0;
+ dma_addr_t h2c_dma_addr = 0;
+ void *c2h = NULL;
+ size_t c2h_size = 0;
+ dma_addr_t c2h_dma_addr = 0;
+ void *psge;
+ unsigned long timeout;
+ u16 dev_hdl = LEAPRAID_INVALID_DEV_HANDLE;
+ bool issue_reset = false;
+ u32 sz;
+ long rc = 0;
+
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc)
+ goto out;
+
+ leap_mpi_req = kzalloc(LEAPRAID_REQUEST_SIZE, GFP_KERNEL);
+ if (!leap_mpi_req) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
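+	/*
+	 * data_sge_offset is in LEAPRAID_SGE_OFFSET_SIZE-byte units; reject
+	 * values that would overflow or run past the request frame.
+	 */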
+ if (karg->data_sge_offset * LEAPRAID_SGE_OFFSET_SIZE > LEAPRAID_REQUEST_SIZE ||
+ karg->data_sge_offset > ((UINT_MAX) / LEAPRAID_SGE_OFFSET_SIZE)) {
+ rc = -EINVAL;
+ goto out;
+ }
+
+ if (copy_from_user(leap_mpi_req, mf,
+ karg->data_sge_offset * LEAPRAID_SGE_OFFSET_SIZE)) {
+ rc = -EFAULT;
+ goto out;
+ }
+
+ taskid = adapter->driver_cmds.ctl_cmd.taskid;
+
+ adapter->driver_cmds.ctl_cmd.status = LEAPRAID_CMD_PENDING;
+ memset((void *)(&adapter->driver_cmds.ctl_cmd.reply), 0,
+ LEAPRAID_REPLY_SIEZ);
+ ctl_sp_mpi_req = leapraid_get_task_desc(adapter, taskid);
+ memset(ctl_sp_mpi_req, 0, LEAPRAID_REQUEST_SIZE);
+ memcpy(ctl_sp_mpi_req,
+ leap_mpi_req,
+ karg->data_sge_offset * LEAPRAID_SGE_OFFSET_SIZE);
+
+ if (ctl_sp_mpi_req->func == LEAPRAID_FUNC_SCSIIO_REQ ||
+ ctl_sp_mpi_req->func == LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH ||
+ ctl_sp_mpi_req->func == LEAPRAID_FUNC_SATA_PASSTHROUGH) {
+ dev_hdl = le16_to_cpu(ctl_sp_mpi_req->func_dep1);
+ if (!dev_hdl || dev_hdl > adapter->adapter_attr.features.max_dev_handle) {
+ rc = -EINVAL;
+ goto out;
+ }
+ }
+
+	if (WARN_ON(ctl_sp_mpi_req->func == LEAPRAID_FUNC_SCSI_TMF)) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+ h2c_size = karg->h2c_size;
+ c2h_size = karg->c2h_size;
+ if (h2c_size) {
+ h2c = dma_alloc_coherent(&adapter->pdev->dev, h2c_size,
+ &h2c_dma_addr, GFP_ATOMIC);
+ if (!h2c) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ if (copy_from_user(h2c, karg->h2c_buf_ptr, h2c_size)) {
+ rc = -EFAULT;
+ goto out;
+ }
+ }
+ if (c2h_size) {
+ c2h = dma_alloc_coherent(&adapter->pdev->dev,
+ c2h_size, &c2h_dma_addr, GFP_ATOMIC);
+ if (!c2h) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ }
+
+ psge = (void *)ctl_sp_mpi_req + (karg->data_sge_offset *
+ LEAPRAID_SGE_OFFSET_SIZE);
+ init_completion(&adapter->driver_cmds.ctl_cmd.done);
+
+ switch (ctl_sp_mpi_req->func) {
+ case LEAPRAID_FUNC_SCSIIO_REQ:
+ case LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH:
+ if (test_bit(dev_hdl, (unsigned long *)adapter->dev_topo.dev_removing)) {
+ rc = -EINVAL;
+ goto out;
+ }
+ leapraid_cli_scsiio_cmd(adapter, ctl_sp_mpi_req, taskid,
+ h2c_dma_addr, h2c_size,
+ c2h_dma_addr, c2h_size,
+ dev_hdl, psge);
+ break;
+ case LEAPRAID_FUNC_SMP_PASSTHROUGH:
+ if (!h2c) {
+ rc = -EINVAL;
+ goto out;
+ }
+ leapraid_ctl_smp_passthrough_cmd(adapter,
+ ctl_sp_mpi_req, taskid,
+ h2c_dma_addr, h2c_size,
+ c2h_dma_addr, c2h_size,
+ psge, h2c);
+ break;
+ case LEAPRAID_FUNC_SATA_PASSTHROUGH:
+ if (test_bit(dev_hdl, (unsigned long *)adapter->dev_topo.dev_removing)) {
+ rc = -EINVAL;
+ goto out;
+ }
+ leapraid_ctl_sata_passthrough_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+ break;
+ case LEAPRAID_FUNC_FW_DOWNLOAD:
+ case LEAPRAID_FUNC_FW_UPLOAD:
+ leapraid_ctl_load_fw_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+ break;
+ case LEAPRAID_FUNC_SAS_IO_UNIT_CTRL:
+ leapraid_ctl_sas_io_unit_ctrl_cmd(adapter, ctl_sp_mpi_req,
+ h2c_dma_addr, h2c_size,
+ c2h_dma_addr, c2h_size,
+ psge, taskid);
+ break;
+ default:
+ leapraid_ctl_fire_mpi_cmd(adapter, h2c_dma_addr,
+ h2c_size, c2h_dma_addr,
+ c2h_size, psge, taskid);
+ break;
+ }
+
+ timeout = karg->timeout;
+ if (timeout < LEAPRAID_CTL_CMD_TIMEOUT)
+ timeout = LEAPRAID_CTL_CMD_TIMEOUT;
+ wait_for_completion_timeout(&adapter->driver_cmds.ctl_cmd.done,
+ timeout * HZ);
+
+ if ((leap_mpi_req->func == LEAPRAID_FUNC_SMP_PASSTHROUGH ||
+ leap_mpi_req->func == LEAPRAID_FUNC_SAS_IO_UNIT_CTRL) &&
+ adapter->reset_desc.adapter_link_resetting) {
+ adapter->reset_desc.adapter_link_resetting = false;
+ }
+ if (!(adapter->driver_cmds.ctl_cmd.status & LEAPRAID_CMD_DONE)) {
+ issue_reset =
+ leapraid_check_reset(
+ adapter->driver_cmds.ctl_cmd.status);
+ goto reset;
+ }
+
+ if (c2h_size) {
+ if (copy_to_user(karg->c2h_buf_ptr, c2h, c2h_size)) {
+ rc = -ENODATA;
+ goto out;
+ }
+ }
+ if (karg->max_rep_bytes) {
+ sz = min_t(u32, karg->max_rep_bytes, LEAPRAID_REPLY_SIEZ);
+ if (copy_to_user(karg->rep_msg_buf_ptr,
+ (void *)&adapter->driver_cmds.ctl_cmd.reply,
+ sz)) {
+ rc = -ENODATA;
+ goto out;
+ }
+ }
+
+ if (karg->max_sense_bytes &&
+ (leap_mpi_req->func == LEAPRAID_FUNC_SCSIIO_REQ ||
+ leap_mpi_req->func == LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH)) {
+ if (!karg->sense_data_ptr)
+ goto out;
+
+ sz = min_t(u32, karg->max_sense_bytes, SCSI_SENSE_BUFFERSIZE);
+ if (copy_to_user(karg->sense_data_ptr,
+ (void *)&adapter->driver_cmds.ctl_cmd.sense,
+ sz)) {
+ rc = -ENODATA;
+ goto out;
+ }
+ }
+reset:
+ if (issue_reset) {
+ rc = -ENODATA;
+ if (leap_mpi_req->func == LEAPRAID_FUNC_SCSIIO_REQ ||
+ leap_mpi_req->func == LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH ||
+ leap_mpi_req->func == LEAPRAID_FUNC_SATA_PASSTHROUGH) {
+ dev_err(&adapter->pdev->dev,
+ "fire tgt reset: hdl=0x%04x\n",
+ le16_to_cpu(leap_mpi_req->func_dep1));
+ leapraid_issue_locked_tm(adapter,
+ le16_to_cpu(leap_mpi_req->func_dep1), 0, 0, 0,
+ LEAPRAID_TM_TASKTYPE_TARGET_RESET, taskid,
+ LEAPRAID_TM_MSGFLAGS_LINK_RESET);
+ } else {
+ dev_info(&adapter->pdev->dev,
+ "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ }
+ }
+out:
+ if (c2h)
+ dma_free_coherent(&adapter->pdev->dev, c2h_size,
+ c2h, c2h_dma_addr);
+ if (h2c)
+ dma_free_coherent(&adapter->pdev->dev, h2c_size,
+ h2c, h2c_dma_addr);
+ kfree(leap_mpi_req);
+ adapter->driver_cmds.ctl_cmd.status = LEAPRAID_CMD_NOT_USED;
+ return rc;
+}
+
+static long leapraid_ctl_get_adapter_info(struct leapraid_adapter *adapter,
+ void __user *arg)
+{
+ struct leapraid_ioctl_adapter_info *karg;
+ ssize_t __maybe_unused ret;
+ u8 revision;
+
+ karg = kzalloc(sizeof(*karg), GFP_KERNEL);
+ if (!karg)
+ return -ENOMEM;
+
+ pci_read_config_byte(adapter->pdev, PCI_CLASS_REVISION, &revision);
+ karg->revision = revision;
+ karg->pci_id = adapter->pdev->device;
+ karg->sub_dev = adapter->pdev->subsystem_device;
+ karg->sub_vendor = adapter->pdev->subsystem_vendor;
+ karg->pci_info.u.bits.bus = adapter->pdev->bus->number;
+ karg->pci_info.u.bits.dev = PCI_SLOT(adapter->pdev->devfn);
+ karg->pci_info.u.bits.func = PCI_FUNC(adapter->pdev->devfn);
+ karg->pci_info.seg_id = pci_domain_nr(adapter->pdev->bus);
+ karg->fw_ver = adapter->adapter_attr.features.fw_version;
+ ret = strscpy(karg->driver_ver, LEAPRAID_DRIVER_NAME,
+ sizeof(karg->driver_ver));
+ strcat(karg->driver_ver, "-");
+ strcat(karg->driver_ver, LEAPRAID_DRIVER_VERSION);
+ karg->adapter_type = LEAPRAID_IOCTL_VERSION;
+ karg->bios_ver = adapter->adapter_attr.bios_version;
+ if (copy_to_user(arg, karg,
+ sizeof(struct leapraid_ioctl_adapter_info))) {
+ kfree(karg);
+ return -EFAULT;
+ }
+
+ kfree(karg);
+ return 0;
+}
+
+static long leapraid_ctl_ioctl_main(struct file *file, unsigned int cmd,
+ void __user *arg, u8 compat)
+{
+ struct leapraid_ioctl_header ioctl_header;
+ struct leapraid_adapter *adapter;
+ long rc = -ENOIOCTLCMD;
+ int count;
+
+ if (copy_from_user(&ioctl_header, (char __user *)arg,
+ sizeof(struct leapraid_ioctl_header)))
+ return -EFAULT;
+
+ adapter = leapraid_ctl_lookup_adapter(ioctl_header.adapter_id);
+ if (!adapter)
+ return -EFAULT;
+
+ mutex_lock(&adapter->access_ctrl.pci_access_lock);
+
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc)
+ goto out;
+
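+	/* give an in-progress host recovery a bounded time to finish */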
+ count = LEAPRAID_WAIT_SHOST_RECOVERY;
+ while (count--) {
+ if (!adapter->access_ctrl.shost_recovering)
+ break;
+ ssleep(1);
+ }
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering ||
+ adapter->scan_dev_desc.driver_loading ||
+ adapter->access_ctrl.host_removing) {
+ rc = -EAGAIN;
+ goto out;
+ }
+
+ if (file->f_flags & O_NONBLOCK) {
+ if (!mutex_trylock(&adapter->driver_cmds.ctl_cmd.mutex)) {
+ rc = -EAGAIN;
+ goto out;
+ }
+ } else if (mutex_lock_interruptible(&adapter->driver_cmds.ctl_cmd.mutex)) {
+ rc = -ERESTARTSYS;
+ goto out;
+ }
+
+ switch (_IOC_NR(cmd)) {
+ case LEAPRAID_ADAPTER_INFO:
+ if (_IOC_SIZE(cmd) == sizeof(struct leapraid_ioctl_adapter_info))
+ rc = leapraid_ctl_get_adapter_info(adapter, arg);
+ break;
+ case LEAPRAID_COMMAND:
+ {
+ struct leapraid_ioctl_command __user *uarg;
+ struct leapraid_ioctl_command karg;
+
+ if (copy_from_user(&karg, arg, sizeof(karg))) {
+ rc = -EFAULT;
+ break;
+ }
+
+ if (karg.hdr.adapter_id != ioctl_header.adapter_id) {
+ rc = -EINVAL;
+ break;
+ }
+
+ if (_IOC_SIZE(cmd) == sizeof(struct leapraid_ioctl_command)) {
+ uarg = arg;
+ rc = leapraid_ctl_do_command(adapter, &karg,
+ &uarg->mf);
+ }
+ break;
+ }
+ case LEAPRAID_EVENTQUERY:
+ case LEAPRAID_EVENTREPORT:
+ rc = 0;
+ break;
+ default:
+ pr_err("unknown ioctl opcode=0x%08x\n", cmd);
+ break;
+ }
+ mutex_unlock(&adapter->driver_cmds.ctl_cmd.mutex);
+
+out:
+ mutex_unlock(&adapter->access_ctrl.pci_access_lock);
+ return rc;
+}
+
+static long leapraid_ctl_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ return leapraid_ctl_ioctl_main(file, cmd,
+ (void __user *)arg, 0);
+}
+
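+/*
+ * Map the firmware log buffer of the first registered adapter into
+ * user space via mmap() on the control device.
+ */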
+static int leapraid_fw_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ struct leapraid_adapter *adapter;
+ unsigned long length;
+ unsigned long pfn;
+
+	length = vma->vm_end - vma->vm_start;
+
+	if (list_empty(&leapraid_adapter_list))
+		return -ENODEV;
+
+	adapter = list_first_entry(&leapraid_adapter_list,
+				   struct leapraid_adapter, list);
+
+ if (length > (LEAPRAID_SYS_LOG_BUF_SIZE +
+ LEAPRAID_SYS_LOG_BUF_RESERVE)) {
+ dev_err(&adapter->pdev->dev,
+ "requested mapping size is too large!\n");
+ return -EINVAL;
+ }
+
+ if (!adapter->fw_log_desc.fw_log_buffer) {
+ dev_err(&adapter->pdev->dev, "no log buffer!\n");
+ return -EINVAL;
+ }
+
+ pfn = virt_to_phys(adapter->fw_log_desc.fw_log_buffer) >> PAGE_SHIFT;
+
+ if (remap_pfn_range(vma, vma->vm_start, pfn, length,
+ vma->vm_page_prot)) {
+ dev_err(&adapter->pdev->dev,
+ "failed to map memory to user space!\n");
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static const struct file_operations leapraid_ctl_fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = leapraid_ctl_ioctl,
+ .mmap = leapraid_fw_mmap,
+};
+
+static struct miscdevice leapraid_ctl_dev = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = LEAPRAID_DEV_NAME,
+ .fops = &leapraid_ctl_fops,
+};
+
+void leapraid_ctl_init(void)
+{
+ if (misc_register(&leapraid_ctl_dev) < 0)
+ pr_err("%s can't register misc device\n", LEAPRAID_DRIVER_NAME);
+}
+
+void leapraid_ctl_exit(void)
+{
+ misc_deregister(&leapraid_ctl_dev);
+}
diff --git a/drivers/scsi/leapraid/leapraid_func.c b/drivers/scsi/leapraid/leapraid_func.c
new file mode 100644
index 000000000000..2fc142fff982
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid_func.c
@@ -0,0 +1,8264 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+
+#include <linux/module.h>
+
+#include "leapraid_func.h"
+
+static int msix_disable;
+module_param(msix_disable, int, 0444);
+MODULE_PARM_DESC(msix_disable,
+ "disable msix routed interrupts (default=0)");
+
+static int smart_poll;
+module_param(smart_poll, int, 0444);
+MODULE_PARM_DESC(smart_poll,
+	"check SATA drive health via SMART polling (default=0)");
+
+static int interrupt_mode;
+module_param(interrupt_mode, int, 0444);
+MODULE_PARM_DESC(interrupt_mode,
+ "intr mode: 0 for MSI-X, 1 for MSI, 2 for legacy. (default=0)");
+
+static int poll_queues;
+module_param(poll_queues, int, 0444);
+MODULE_PARM_DESC(poll_queues,
+	"specifies the number of queues for io_uring poll mode (default=0)");
+
+static int max_msix_vectors = -1;
+module_param(max_msix_vectors, int, 0444);
+MODULE_PARM_DESC(max_msix_vectors, "max msix vectors (default=-1)");
+
+static void leapraid_remove_device(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev);
+static void leapraid_set_led(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev, bool on);
+static void leapraid_ublk_io_dev(struct leapraid_adapter *adapter,
+ u64 sas_address,
+ struct leapraid_card_port *port);
+static int leapraid_make_adapter_available(struct leapraid_adapter *adapter);
+static int leapraid_fw_log_init(struct leapraid_adapter *adapter);
+static int leapraid_make_adapter_ready(struct leapraid_adapter *adapter,
+ enum reset_type type);
+
+static inline bool leapraid_is_end_dev(u32 dev_type)
+{
+ return (dev_type & LEAPRAID_DEVTYP_END_DEV) &&
+ ((dev_type & LEAPRAID_DEVTYP_SSP_TGT) ||
+ (dev_type & LEAPRAID_DEVTYP_STP_TGT) ||
+ (dev_type & LEAPRAID_DEVTYP_SATA_DEV));
+}
+
+bool leapraid_pci_removed(struct leapraid_adapter *adapter)
+{
+ struct pci_dev *pdev = adapter->pdev;
+ u32 vendor_id;
+
+ if (pci_bus_read_config_dword(pdev->bus, pdev->devfn, PCI_VENDOR_ID,
+ &vendor_id))
+ return true;
+
+ return ((vendor_id & LEAPRAID_PCI_VENDOR_ID_MASK) !=
+ LEAPRAID_VENDOR_ID);
+}
+
+static bool leapraid_pci_active(struct leapraid_adapter *adapter)
+{
+ return !(adapter->access_ctrl.pcie_recovering ||
+ leapraid_pci_removed(adapter));
+}
+
+void *leapraid_get_reply_vaddr(struct leapraid_adapter *adapter, u32 rep_paddr)
+{
+ if (!rep_paddr)
+ return NULL;
+
+ return adapter->mem_desc.rep_msg +
+ (rep_paddr - (u32)adapter->mem_desc.rep_msg_dma);
+}
+
+void *leapraid_get_task_desc(struct leapraid_adapter *adapter, u16 taskid)
+{
+ return (void *)(adapter->mem_desc.task_desc +
+ (taskid * LEAPRAID_REQUEST_SIZE));
+}
+
+void *leapraid_get_sense_buffer(struct leapraid_adapter *adapter, u16 taskid)
+{
+ return (void *)(adapter->mem_desc.sense_data +
+ ((taskid - 1) * SCSI_SENSE_BUFFERSIZE));
+}
+
+__le32 leapraid_get_sense_buffer_dma(struct leapraid_adapter *adapter,
+ u16 taskid)
+{
+ return cpu_to_le32(adapter->mem_desc.sense_data_dma +
+ ((taskid - 1) * SCSI_SENSE_BUFFERSIZE));
+}
+
+void leapraid_mask_int(struct leapraid_adapter *adapter)
+{
+ u32 reg;
+
+ adapter->mask_int = true;
+ reg = leapraid_readl(&adapter->iomem_base->host_int_mask);
+	reg |= LEAPRAID_TO_SYS_DB_MASK | LEAPRAID_REPLY_INT_MASK |
+		LEAPRAID_RESET_IRQ_MASK;
+ writel(reg, &adapter->iomem_base->host_int_mask);
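+	/* read back to flush the posted write */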
+ leapraid_readl(&adapter->iomem_base->host_int_mask);
+}
+
+void leapraid_unmask_int(struct leapraid_adapter *adapter)
+{
+ u32 reg;
+
+ reg = leapraid_readl(&adapter->iomem_base->host_int_mask);
+ reg &= ~LEAPRAID_REPLY_INT_MASK;
+ writel(reg, &adapter->iomem_base->host_int_mask);
+ adapter->mask_int = false;
+}
+
+static void leapraid_flush_io_and_panic(struct leapraid_adapter *adapter)
+{
+ adapter->access_ctrl.adapter_thermal_alert = true;
+ leapraid_smart_polling_stop(adapter);
+ leapraid_fw_log_stop(adapter);
+ leapraid_mq_polling_pause(adapter);
+ leapraid_clean_active_scsi_cmds(adapter);
+}
+
+static void leapraid_check_panic_needed(struct leapraid_adapter *adapter,
+ u32 db, u32 adapter_state)
+{
+ bool fault_1 = adapter_state == LEAPRAID_DB_MASK;
+ bool fault_2 = (adapter_state == LEAPRAID_DB_FAULT) &&
+ ((db & LEAPRAID_DB_DATA_MASK) == LEAPRAID_DB_OVER_TEMPERATURE);
+
+ if (!fault_1 && !fault_2)
+ return;
+
+ if (fault_1)
+ pr_err("%s, doorbell status 0xFFFF!\n", __func__);
+ else
+ pr_err("%s, adapter overheating detected!\n", __func__);
+
+ leapraid_flush_io_and_panic(adapter);
+ panic("%s overheating detected, panic now!!!\n", __func__);
+}
+
+u32 leapraid_get_adapter_state(struct leapraid_adapter *adapter)
+{
+ u32 db;
+ u32 adapter_state;
+
+ db = leapraid_readl(&adapter->iomem_base->db);
+ adapter_state = db & LEAPRAID_DB_MASK;
+ leapraid_check_panic_needed(adapter, db, adapter_state);
+ return adapter_state;
+}
+
+static bool leapraid_wait_adapter_ready(struct leapraid_adapter *adapter)
+{
+ u32 cur_state;
+ u32 cnt = LEAPRAID_ADAPTER_READY_MAX_RETRY;
+
+ do {
+ cur_state = leapraid_get_adapter_state(adapter);
+ if (cur_state == LEAPRAID_DB_READY)
+ return true;
+ if (cur_state == LEAPRAID_DB_FAULT)
+ break;
+ usleep_range(LEAPRAID_ADAPTER_READY_SLEEP_MIN_US,
+ LEAPRAID_ADAPTER_READY_SLEEP_MAX_US);
+ } while (--cnt);
+
+ return false;
+}
+
+static int leapraid_db_wait_int_host(struct leapraid_adapter *adapter)
+{
+ u32 cnt = LEAPRAID_DB_WAIT_MAX_RETRY;
+
+ do {
+ if (leapraid_readl(&adapter->iomem_base->host_int_status) &
+ LEAPRAID_ADAPTER2HOST_DB_STATUS)
+ return 0;
+ udelay(LEAPRAID_DB_WAIT_DELAY_US);
+ } while (--cnt);
+
+ return -EFAULT;
+}
+
+static int leapraid_db_wait_ack_and_clear_int(struct leapraid_adapter *adapter)
+{
+ u32 adapter_state;
+ u32 int_status;
+ u32 cnt;
+
+ cnt = LEAPRAID_ADAPTER_READY_MAX_RETRY;
+ do {
+ int_status =
+ leapraid_readl(&adapter->iomem_base->host_int_status);
+ if (!(int_status & LEAPRAID_HOST2ADAPTER_DB_STATUS)) {
+ return 0;
+ } else if (int_status & LEAPRAID_ADAPTER2HOST_DB_STATUS) {
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (adapter_state == LEAPRAID_DB_FAULT)
+ return -EFAULT;
+ } else if (int_status == 0xFFFFFFFF) {
+ goto out;
+ }
+
+ usleep_range(LEAPRAID_ADAPTER_READY_SLEEP_MIN_US,
+ LEAPRAID_ADAPTER_READY_SLEEP_MAX_US);
+ } while (--cnt);
+
+out:
+ return -EFAULT;
+}
+
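+/*
+ * Exchange a request/reply pair with the adapter over the doorbell
+ * register: post the handshake function code, write the request one
+ * dword at a time while waiting for the adapter to ack each write,
+ * then read the reply back one 16-bit word at a time.
+ */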
+static int leapraid_handshake_func(struct leapraid_adapter *adapter,
+ int req_bytes, u32 *req,
+ int rep_bytes, u16 *rep)
+{
+ int failed, i;
+
+ if ((leapraid_readl(&adapter->iomem_base->db) &
+ LEAPRAID_DB_USED)) {
+ dev_err(&adapter->pdev->dev, "doorbell used\n");
+ return -EFAULT;
+ }
+
+ if (leapraid_readl(&adapter->iomem_base->host_int_status) &
+ LEAPRAID_ADAPTER2HOST_DB_STATUS)
+ writel(0, &adapter->iomem_base->host_int_status);
+
+ writel(((LEAPRAID_FUNC_HANDSHAKE << LEAPRAID_DB_FUNC_SHIFT) |
+ ((req_bytes / LEAPRAID_DWORDS_BYTE_SIZE) <<
+ LEAPRAID_DB_ADD_DWORDS_SHIFT)),
+ &adapter->iomem_base->db);
+
+ if (leapraid_db_wait_int_host(adapter)) {
+ dev_err(&adapter->pdev->dev, "%d:wait db interrupt timeout\n",
+ __LINE__);
+ return -EFAULT;
+ }
+
+ writel(0, &adapter->iomem_base->host_int_status);
+
+ if (leapraid_db_wait_ack_and_clear_int(adapter)) {
+ dev_err(&adapter->pdev->dev, "%d:wait ack failure\n",
+ __LINE__);
+ return -EFAULT;
+ }
+
+ for (i = 0, failed = 0;
+ i < req_bytes / LEAPRAID_DWORDS_BYTE_SIZE && !failed;
+ i++) {
+ writel((u32)(req[i]), &adapter->iomem_base->db);
+ if (leapraid_db_wait_ack_and_clear_int(adapter))
+ failed = 1;
+ }
+ if (failed) {
+ dev_err(&adapter->pdev->dev, "%d:wait ack failure\n",
+ __LINE__);
+ return -EFAULT;
+ }
+
+ for (i = 0; i < rep_bytes / LEAPRAID_WORD_BYTE_SIZE; i++) {
+ if (leapraid_db_wait_int_host(adapter)) {
+ dev_err(&adapter->pdev->dev,
+ "%d:wait db interrupt timeout\n", __LINE__);
+ return -EFAULT;
+ }
+ rep[i] = (u16)(leapraid_readl(&adapter->iomem_base->db)
+ & LEAPRAID_DB_DATA_MASK);
+ writel(0, &adapter->iomem_base->host_int_status);
+ }
+
+ if (leapraid_db_wait_int_host(adapter)) {
+ dev_err(&adapter->pdev->dev, "%d:wait db interrupt timeout\n",
+ __LINE__);
+ return -EFAULT;
+ }
+
+ writel(0, &adapter->iomem_base->host_int_status);
+
+ return 0;
+}
+
+int leapraid_check_adapter_is_op(struct leapraid_adapter *adapter)
+{
+ int wait_count = LEAPRAID_DB_WAIT_OPERATIONAL;
+
+ do {
+ if (leapraid_pci_removed(adapter))
+ return -EFAULT;
+
+ if (leapraid_get_adapter_state(adapter) ==
+ LEAPRAID_DB_OPERATIONAL)
+ return 0;
+
+ dev_info(&adapter->pdev->dev,
+ "waiting for adapter to become op status(cnt=%d)\n",
+ LEAPRAID_DB_WAIT_OPERATIONAL - wait_count);
+
+ ssleep(1);
+ } while (--wait_count);
+
+ dev_err(&adapter->pdev->dev,
+ "adapter failed to become op state, last state=%d\n",
+ leapraid_get_adapter_state(adapter));
+
+ return -EFAULT;
+}
+
+struct leapraid_io_req_tracker *leapraid_get_io_tracker_from_taskid(
+ struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct scsi_cmnd *scmd;
+
+ if (WARN_ON(!taskid))
+ return NULL;
+
+ if (WARN_ON(taskid > adapter->shost->can_queue))
+ return NULL;
+
+ scmd = leapraid_get_scmd_from_taskid(adapter, taskid);
+ if (scmd)
+ return leapraid_get_scmd_priv(scmd);
+
+ return NULL;
+}
+
+static u8 leapraid_get_cb_idx(struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct leapraid_driver_cmd *sp_cmd;
+ u8 cb_idx = 0xFF;
+
+ if (WARN_ON(!taskid))
+ return cb_idx;
+
+ list_for_each_entry(sp_cmd, &adapter->driver_cmds.special_cmd_list,
+ list)
+ if (taskid == sp_cmd->taskid ||
+ taskid == sp_cmd->hp_taskid ||
+ taskid == sp_cmd->inter_taskid)
+ return sp_cmd->cb_idx;
+
+ WARN_ON(cb_idx == 0xFF);
+ return cb_idx;
+}
+
+struct scsi_cmnd *leapraid_get_scmd_from_taskid(
+ struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct leapraid_scsiio_req *leap_mpi_req;
+ struct leapraid_io_req_tracker *st;
+ struct scsi_cmnd *scmd;
+ u32 uniq_tag;
+
+	if (!taskid || taskid > adapter->shost->can_queue)
+ return NULL;
+
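+	/* rebuild the block layer unique tag from the saved hw queue index */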
+ uniq_tag = adapter->mem_desc.taskid_to_uniq_tag[taskid - 1] <<
+ BLK_MQ_UNIQUE_TAG_BITS | (taskid - 1);
+ leap_mpi_req = leapraid_get_task_desc(adapter, taskid);
+ if (!leap_mpi_req->dev_hdl)
+ return NULL;
+
+ scmd = scsi_host_find_tag(adapter->shost, uniq_tag);
+ if (scmd) {
+ st = leapraid_get_scmd_priv(scmd);
+ if (st && st->taskid == taskid)
+ return scmd;
+ }
+
+ return NULL;
+}
+
+u16 leapraid_alloc_scsiio_taskid(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd)
+{
+ struct leapraid_io_req_tracker *request;
+ u16 taskid;
+	u32 tag;
+ u32 unique_tag;
+
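+	/*
+	 * taskid is the zero-based block layer tag plus one; the hw queue
+	 * index is saved so the scmd can be looked up again by taskid.
+	 */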
+ unique_tag = blk_mq_unique_tag(scsi_cmd_to_rq(scmd));
+ tag = blk_mq_unique_tag_to_tag(unique_tag);
+ adapter->mem_desc.taskid_to_uniq_tag[tag] =
+ blk_mq_unique_tag_to_hwq(unique_tag);
+
+ request = leapraid_get_scmd_priv(scmd);
+ taskid = tag + 1;
+ request->taskid = taskid;
+ request->scmd = scmd;
+ return taskid;
+}
+
+static void leapraid_check_pending_io(struct leapraid_adapter *adapter)
+{
+ if (adapter->access_ctrl.shost_recovering &&
+ adapter->reset_desc.pending_io_cnt) {
+ if (adapter->reset_desc.pending_io_cnt == 1)
+ wake_up(&adapter->reset_desc.reset_wait_queue);
+ adapter->reset_desc.pending_io_cnt--;
+ }
+}
+
+static void leapraid_clear_io_tracker(struct leapraid_adapter *adapter,
+ struct leapraid_io_req_tracker *io_tracker)
+{
+ if (!io_tracker)
+ return;
+
+ if (WARN_ON(io_tracker->taskid == 0))
+ return;
+
+ io_tracker->scmd = NULL;
+}
+
+static bool leapraid_is_fixed_taskid(struct leapraid_adapter *adapter,
+ u16 taskid)
+{
+ return (taskid == adapter->driver_cmds.ctl_cmd.taskid ||
+ taskid == adapter->driver_cmds.driver_scsiio_cmd.taskid ||
+ taskid == adapter->driver_cmds.tm_cmd.hp_taskid ||
+ taskid == adapter->driver_cmds.ctl_cmd.hp_taskid ||
+ taskid == adapter->driver_cmds.scan_dev_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.timestamp_sync_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.raid_action_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.transport_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.cfg_op_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.enc_cmd.inter_taskid ||
+ taskid == adapter->driver_cmds.notify_event_cmd.inter_taskid);
+}
+
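+/*
+ * Return a taskid to the pool; taskids reserved for internal driver
+ * commands are never recycled here.
+ */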
+void leapraid_free_taskid(struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct leapraid_io_req_tracker *io_tracker;
+ void *task_desc;
+
+ if (leapraid_is_fixed_taskid(adapter, taskid))
+ return;
+
+ if (taskid <= adapter->shost->can_queue) {
+ io_tracker = leapraid_get_io_tracker_from_taskid(adapter,
+ taskid);
+ if (!io_tracker) {
+ leapraid_check_pending_io(adapter);
+ return;
+ }
+
+ task_desc = leapraid_get_task_desc(adapter, taskid);
+ memset(task_desc, 0, LEAPRAID_REQUEST_SIZE);
+ leapraid_clear_io_tracker(adapter, io_tracker);
+ leapraid_check_pending_io(adapter);
+ adapter->mem_desc.taskid_to_uniq_tag[taskid - 1] = 0xFFFF;
+ }
+}
+
+static u8 leapraid_get_msix_idx(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd)
+{
+ if (scmd && adapter->shost->nr_hw_queues > 1) {
+ u32 tag = blk_mq_unique_tag(scsi_cmd_to_rq(scmd));
+
+ return blk_mq_unique_tag_to_hwq(tag);
+ }
+ return adapter->notification_desc.msix_cpu_map[raw_smp_processor_id()];
+}
+
+static u8 leapraid_get_and_set_msix_idx_from_taskid(
+ struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct leapraid_io_req_tracker *io_tracker = NULL;
+
+ if (taskid <= adapter->shost->can_queue)
+ io_tracker = leapraid_get_io_tracker_from_taskid(adapter,
+ taskid);
+
+ if (!io_tracker)
+ return leapraid_get_msix_idx(adapter, NULL);
+
+ io_tracker->msix_io = leapraid_get_msix_idx(adapter, io_tracker->scmd);
+
+ return io_tracker->msix_io;
+}
+
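+/*
+ * Requests are handed to the adapter by writing a 32-bit atomic request
+ * descriptor (flags, MSI-X index, taskid) to the request post register.
+ */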
+void leapraid_fire_scsi_io(struct leapraid_adapter *adapter, u16 taskid,
+ u16 handle)
+{
+ struct leapraid_atomic_req_desc desc;
+
+ desc.flg = LEAPRAID_REQ_DESC_FLG_SCSI_IO;
+ desc.msix_idx = leapraid_get_and_set_msix_idx_from_taskid(adapter,
+ taskid);
+ desc.taskid = cpu_to_le16(taskid);
+ writel((__force u32)cpu_to_le32(*((u32 *)&desc)),
+ &adapter->iomem_base->atomic_req_desc_post);
+}
+
+void leapraid_fire_hpr_task(struct leapraid_adapter *adapter, u16 taskid,
+ u16 msix_task)
+{
+ struct leapraid_atomic_req_desc desc;
+
+ desc.flg = LEAPRAID_REQ_DESC_FLG_HPR;
+ desc.msix_idx = msix_task;
+ desc.taskid = cpu_to_le16(taskid);
+ writel((__force u32)cpu_to_le32(*((u32 *)&desc)),
+ &adapter->iomem_base->atomic_req_desc_post);
+}
+
+void leapraid_fire_task(struct leapraid_adapter *adapter, u16 taskid)
+{
+ struct leapraid_atomic_req_desc desc;
+
+ desc.flg = LEAPRAID_REQ_DESC_FLG_DFLT_TYPE;
+ desc.msix_idx = leapraid_get_and_set_msix_idx_from_taskid(adapter,
+ taskid);
+ desc.taskid = cpu_to_le16(taskid);
+ writel((__force u32)cpu_to_le32(*((u32 *)&desc)),
+ &adapter->iomem_base->atomic_req_desc_post);
+}
+
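+/*
+ * Complete every outstanding SCSI command: DID_NO_CONNECT if the adapter
+ * is gone or being removed, DID_RESET otherwise so the midlayer retries.
+ */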
+void leapraid_clean_active_scsi_cmds(struct leapraid_adapter *adapter)
+{
+ struct leapraid_io_req_tracker *io_tracker;
+ struct scsi_cmnd *scmd;
+ u16 taskid;
+
+ for (taskid = 1; taskid <= adapter->shost->can_queue; taskid++) {
+ scmd = leapraid_get_scmd_from_taskid(adapter, taskid);
+ if (!scmd)
+ continue;
+
+ io_tracker = leapraid_get_scmd_priv(scmd);
+ if (io_tracker && io_tracker->taskid == 0)
+ continue;
+
+ scsi_dma_unmap(scmd);
+ leapraid_clear_io_tracker(adapter, io_tracker);
+ if (!leapraid_pci_active(adapter) ||
+ adapter->reset_desc.adapter_reset_results != 0 ||
+ adapter->access_ctrl.adapter_thermal_alert ||
+ adapter->access_ctrl.host_removing)
+ scmd->result = DID_NO_CONNECT << 16;
+ else
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ scsi_done(scmd);
+ }
+}
+
+static void leapraid_clean_active_driver_cmd(
+ struct leapraid_driver_cmd *driver_cmd)
+{
+ if (driver_cmd->status & LEAPRAID_CMD_PENDING) {
+ driver_cmd->status |= LEAPRAID_CMD_RESET;
+ complete(&driver_cmd->done);
+ }
+}
+
+static void leapraid_clean_active_driver_cmds(struct leapraid_adapter *adapter)
+{
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.timestamp_sync_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.raid_action_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.driver_scsiio_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.tm_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.transport_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.enc_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.notify_event_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.cfg_op_cmd);
+ leapraid_clean_active_driver_cmd(&adapter->driver_cmds.ctl_cmd);
+
+ if (adapter->driver_cmds.scan_dev_cmd.status & LEAPRAID_CMD_PENDING) {
+ adapter->scan_dev_desc.scan_dev_failed = true;
+ adapter->driver_cmds.scan_dev_cmd.status |= LEAPRAID_CMD_RESET;
+ if (adapter->scan_dev_desc.driver_loading) {
+ adapter->scan_dev_desc.scan_start_failed =
+ LEAPRAID_ADAPTER_STATUS_INTERNAL_ERROR;
+ adapter->scan_dev_desc.scan_start = false;
+ } else {
+ complete(&adapter->driver_cmds.scan_dev_cmd.done);
+ }
+ }
+}
+
+static void leapraid_clean_active_cmds(struct leapraid_adapter *adapter)
+{
+ leapraid_clean_active_driver_cmds(adapter);
+ memset(adapter->dev_topo.pending_dev_add, 0,
+ adapter->dev_topo.pending_dev_add_sz);
+ memset(adapter->dev_topo.dev_removing, 0,
+ adapter->dev_topo.dev_removing_sz);
+ leapraid_clean_active_fw_evt(adapter);
+ leapraid_clean_active_scsi_cmds(adapter);
+}
+
+static void leapraid_tgt_not_responding(struct leapraid_adapter *adapter,
+ u16 hdl)
+{
+ struct leapraid_starget_priv *starget_priv = NULL;
+ struct leapraid_sas_dev *sas_dev = NULL;
+ unsigned long flags = 0;
+ u32 adapter_state = 0;
+
+ if (adapter->access_ctrl.pcie_recovering)
+ return;
+
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (adapter_state != LEAPRAID_DB_OPERATIONAL)
+ return;
+
+ if (test_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls))
+ return;
+
+ clear_bit(hdl, (unsigned long *)adapter->dev_topo.pending_dev_add);
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (sas_dev && sas_dev->starget && sas_dev->starget->hostdata) {
+ starget_priv = sas_dev->starget->hostdata;
+ starget_priv->deleted = true;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (starget_priv)
+ starget_priv->hdl = LEAPRAID_INVALID_DEV_HANDLE;
+
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+}
+
+static void leapraid_tgt_rst_send(struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_starget_priv *starget_priv = NULL;
+ struct leapraid_sas_dev *sas_dev = NULL;
+ struct leapraid_card_port *port = NULL;
+ u64 sas_address = 0;
+ unsigned long flags;
+ u32 adapter_state;
+
+ if (adapter->access_ctrl.pcie_recovering)
+ return;
+
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (adapter_state != LEAPRAID_DB_OPERATIONAL)
+ return;
+
+ if (test_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls))
+ return;
+
+ clear_bit(hdl, (unsigned long *)adapter->dev_topo.pending_dev_add);
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (sas_dev && sas_dev->starget && sas_dev->starget->hostdata) {
+ starget_priv = sas_dev->starget->hostdata;
+ starget_priv->deleted = true;
+ sas_address = sas_dev->sas_addr;
+ port = sas_dev->card_port;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (starget_priv) {
+ leapraid_ublk_io_dev(adapter, sas_address, port);
+ starget_priv->hdl = LEAPRAID_INVALID_DEV_HANDLE;
+ }
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+}
+
+static inline void leapraid_single_mpi_sg_append(struct leapraid_adapter *adapter,
+ void *sge, u32 flag_and_len,
+ dma_addr_t dma_addr)
+{
+ if (adapter->adapter_attr.use_32_dma_mask) {
+ ((struct leapraid_sge_simple32 *)sge)->flg_and_len =
+ cpu_to_le32(flag_and_len |
+ (LEAPRAID_SGE_FLG_32 |
+ LEAPRAID_SGE_FLG_SYSTEM_ADDR) <<
+ LEAPRAID_SGE_FLG_SHIFT);
+ ((struct leapraid_sge_simple32 *)sge)->addr =
+ cpu_to_le32(dma_addr);
+ } else {
+ ((struct leapraid_sge_simple64 *)sge)->flg_and_len =
+ cpu_to_le32(flag_and_len |
+ (LEAPRAID_SGE_FLG_64 |
+ LEAPRAID_SGE_FLG_SYSTEM_ADDR) <<
+ LEAPRAID_SGE_FLG_SHIFT);
+ ((struct leapraid_sge_simple64 *)sge)->addr =
+ cpu_to_le64(dma_addr);
+ }
+}
+
+static inline void leapraid_single_ieee_sg_append(void *sge, u8 flag,
+ u8 next_chain_offset,
+ u32 len,
+ dma_addr_t dma_addr)
+{
+ ((struct leapraid_chain64_ieee_sg *)sge)->flg = flag;
+ ((struct leapraid_chain64_ieee_sg *)sge)->next_chain_offset =
+ next_chain_offset;
+ ((struct leapraid_chain64_ieee_sg *)sge)->len = cpu_to_le32(len);
+ ((struct leapraid_chain64_ieee_sg *)sge)->addr = cpu_to_le64(dma_addr);
+}
+
+static void leapraid_build_nodata_mpi_sg(struct leapraid_adapter *adapter,
+ void *sge)
+{
+ leapraid_single_mpi_sg_append(adapter,
+ sge,
+ (u32)((LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL |
+ LEAPRAID_SGE_FLG_SIMPLE_ONE) <<
+ LEAPRAID_SGE_FLG_SHIFT),
+ -1);
+}
+
+void leapraid_build_mpi_sg(struct leapraid_adapter *adapter, void *sge,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size)
+{
+ if (h2c_size && !c2h_size) {
+ leapraid_single_mpi_sg_append(adapter,
+ sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL |
+ LEAPRAID_SGE_FLG_H2C) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ h2c_size,
+ h2c_dma_addr);
+ } else if (!h2c_size && c2h_size) {
+ leapraid_single_mpi_sg_append(adapter,
+ sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ c2h_size,
+ c2h_dma_addr);
+ } else if (h2c_size && c2h_size) {
+ leapraid_single_mpi_sg_append(adapter,
+ sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_H2C) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ h2c_size,
+ h2c_dma_addr);
+ if (adapter->adapter_attr.use_32_dma_mask)
+ sge += sizeof(struct leapraid_sge_simple32);
+ else
+ sge += sizeof(struct leapraid_sge_simple64);
+ leapraid_single_mpi_sg_append(adapter,
+ sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ c2h_size,
+ c2h_dma_addr);
+ } else {
+		leapraid_build_nodata_mpi_sg(adapter, sge);
+ }
+}
+
+void leapraid_build_ieee_nodata_sg(struct leapraid_adapter *adapter, void *sge)
+{
+ leapraid_single_ieee_sg_append(sge,
+ (LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR |
+ LEAPRAID_IEEE_SGE_FLG_EOL),
+ 0, 0, -1);
+}
+
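+/*
+ * Build the IEEE SGL for a SCSI command: up to LEAPRAID_SGL_INLINE_THRESHOLD
+ * entries are placed inline in the request frame; larger lists spill into
+ * per-command chain segments of at most LEAPRAID_MAX_SGES_IN_CHAIN entries,
+ * and the final simple element carries the end-of-list flag.
+ */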
+int leapraid_build_scmd_ieee_sg(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd, u16 taskid)
+{
+ struct leapraid_scsiio_req *scsiio_req;
+ struct leapraid_io_req_tracker *io_tracker;
+ struct scatterlist *scmd_sg_cur;
+ int sg_entries_left;
+ void *sg_entry_cur;
+ void *host_chain;
+ dma_addr_t host_chain_dma;
+ u8 host_chain_cursor;
+ u32 sg_entries_in_cur_seg;
+ u32 chain_offset_in_cur_seg;
+ u32 chain_len_in_cur_seg;
+
+ io_tracker = leapraid_get_scmd_priv(scmd);
+ scsiio_req = leapraid_get_task_desc(adapter, taskid);
+ scmd_sg_cur = scsi_sglist(scmd);
+ sg_entries_left = scsi_dma_map(scmd);
+ if (sg_entries_left < 0)
+ return -ENOMEM;
+ sg_entry_cur = &scsiio_req->sgl;
+ if (sg_entries_left <= LEAPRAID_SGL_INLINE_THRESHOLD)
+ goto fill_last_seg;
+
+ scsiio_req->chain_offset = LEAPRAID_CHAIN_OFFSET_DWORDS;
+ leapraid_single_ieee_sg_append(sg_entry_cur,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ 0, sg_dma_len(scmd_sg_cur),
+ sg_dma_address(scmd_sg_cur));
+ scmd_sg_cur = sg_next(scmd_sg_cur);
+ sg_entry_cur += LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+ sg_entries_left--;
+
+ host_chain_cursor = 0;
+ host_chain = io_tracker->chain +
+ host_chain_cursor * LEAPRAID_CHAIN_SEG_SIZE;
+ host_chain_dma = io_tracker->chain_dma +
+ host_chain_cursor * LEAPRAID_CHAIN_SEG_SIZE;
+ host_chain_cursor += 1;
+ for (;;) {
+ sg_entries_in_cur_seg =
+ (sg_entries_left <= LEAPRAID_MAX_SGES_IN_CHAIN) ?
+ sg_entries_left : LEAPRAID_MAX_SGES_IN_CHAIN;
+ chain_offset_in_cur_seg =
+ (sg_entries_left == (int)sg_entries_in_cur_seg) ?
+ 0 : sg_entries_in_cur_seg;
+ chain_len_in_cur_seg = sg_entries_in_cur_seg *
+ LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+ if (chain_offset_in_cur_seg)
+ chain_len_in_cur_seg += LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+
+ leapraid_single_ieee_sg_append(sg_entry_cur,
+ LEAPRAID_IEEE_SGE_FLG_CHAIN_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ chain_offset_in_cur_seg, chain_len_in_cur_seg,
+ host_chain_dma);
+ sg_entry_cur = host_chain;
+ if (!chain_offset_in_cur_seg)
+ goto fill_last_seg;
+
+ while (sg_entries_in_cur_seg) {
+ leapraid_single_ieee_sg_append(sg_entry_cur,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ 0, sg_dma_len(scmd_sg_cur),
+ sg_dma_address(scmd_sg_cur));
+ scmd_sg_cur = sg_next(scmd_sg_cur);
+ sg_entry_cur += LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+ sg_entries_left--;
+ sg_entries_in_cur_seg--;
+ }
+ host_chain = io_tracker->chain +
+ host_chain_cursor * LEAPRAID_CHAIN_SEG_SIZE;
+ host_chain_dma = io_tracker->chain_dma +
+ host_chain_cursor * LEAPRAID_CHAIN_SEG_SIZE;
+ host_chain_cursor += 1;
+ }
+
+fill_last_seg:
+ while (sg_entries_left > 0) {
+ u32 flags = LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR;
+ if (sg_entries_left == 1)
+ flags |= LEAPRAID_IEEE_SGE_FLG_EOL;
+ leapraid_single_ieee_sg_append(sg_entry_cur, flags,
+ 0, sg_dma_len(scmd_sg_cur),
+ sg_dma_address(scmd_sg_cur));
+ scmd_sg_cur = sg_next(scmd_sg_cur);
+ sg_entry_cur += LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+ sg_entries_left--;
+ }
+ return 0;
+}
+
+void leapraid_build_ieee_sg(struct leapraid_adapter *adapter, void *sge,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size)
+{
+ if (h2c_size && !c2h_size) {
+ leapraid_single_ieee_sg_append(sge,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_EOL |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ 0,
+ h2c_size,
+ h2c_dma_addr);
+ } else if (!h2c_size && c2h_size) {
+ leapraid_single_ieee_sg_append(sge,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_EOL |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ 0,
+ c2h_size,
+ c2h_dma_addr);
+ } else if (h2c_size && c2h_size) {
+ leapraid_single_ieee_sg_append(sge,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR,
+ 0,
+ h2c_size,
+ h2c_dma_addr);
+ sge += LEAPRAID_IEEE_SGE64_ENTRY_SIZE;
+ leapraid_single_ieee_sg_append(sge,
+ LEAPRAID_IEEE_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_IEEE_SGE_FLG_SYSTEM_ADDR |
+ LEAPRAID_IEEE_SGE_FLG_EOL,
+ 0,
+ c2h_size,
+ c2h_dma_addr);
+ } else {
+		leapraid_build_ieee_nodata_sg(adapter, sge);
+ }
+}
+
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_from_tgt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_starget_priv *tgt_priv)
+{
+ assert_spin_locked(&adapter->dev_topo.sas_dev_lock);
+ if (tgt_priv->sas_dev)
+ leapraid_sdev_get(tgt_priv->sas_dev);
+
+ return tgt_priv->sas_dev;
+}
+
+struct leapraid_sas_dev *leapraid_get_sas_dev_from_tgt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_starget_priv *tgt_priv)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_from_tgt(adapter, tgt_priv);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return sas_dev;
+}
+
+static struct leapraid_card_port *leapraid_get_port_by_id(
+ struct leapraid_adapter *adapter,
+ u8 port_id, bool skip_dirty)
+{
+ struct leapraid_card_port *port;
+ struct leapraid_card_port *dirty_port = NULL;
+
+ if (!adapter->adapter_attr.enable_mp)
+ port_id = LEAPRAID_DISABLE_MP_PORT_ID;
+
+ list_for_each_entry(port, &adapter->dev_topo.card_port_list, list) {
+ if (port->port_id != port_id)
+ continue;
+
+ if (!(port->flg & LEAPRAID_CARD_PORT_FLG_DIRTY))
+ return port;
+
+ if (skip_dirty && !dirty_port)
+ dirty_port = port;
+ }
+
+ if (dirty_port)
+ return dirty_port;
+
+ if (unlikely(!adapter->adapter_attr.enable_mp)) {
+ port = kzalloc(sizeof(*port), GFP_ATOMIC);
+ if (!port)
+ return NULL;
+
+ port->port_id = LEAPRAID_DISABLE_MP_PORT_ID;
+ list_add_tail(&port->list, &adapter->dev_topo.card_port_list);
+ return port;
+ }
+
+ return NULL;
+}
+
+struct leapraid_vphy *leapraid_get_vphy_by_phy(struct leapraid_card_port *port,
+ u32 phy_seq_num)
+{
+ struct leapraid_vphy *vphy;
+
+ if (!port || !port->vphys_mask)
+ return NULL;
+
+ list_for_each_entry(vphy, &port->vphys_list, list) {
+ if (vphy->phy_mask & BIT(phy_seq_num))
+ return vphy;
+ }
+
+ return NULL;
+}
+
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_addr_and_rphy(
+ struct leapraid_adapter *adapter, u64 sas_address,
+ struct sas_rphy *rphy)
+{
+ struct leapraid_sas_dev *sas_dev;
+
+ assert_spin_locked(&adapter->dev_topo.sas_dev_lock);
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_list, list)
+ if (sas_dev->sas_addr == sas_address &&
+ sas_dev->rphy == rphy) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_init_list,
+ list)
+ if (sas_dev->sas_addr == sas_address &&
+ sas_dev->rphy == rphy) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ return NULL;
+}
+
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_addr(
+ struct leapraid_adapter *adapter,
+ u64 sas_address, struct leapraid_card_port *port)
+{
+ struct leapraid_sas_dev *sas_dev;
+
+ if (!port)
+ return NULL;
+
+ assert_spin_locked(&adapter->dev_topo.sas_dev_lock);
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_list, list)
+ if (sas_dev->sas_addr == sas_address &&
+ sas_dev->card_port == port) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_init_list,
+ list)
+ if (sas_dev->sas_addr == sas_address &&
+ sas_dev->card_port == port) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ return NULL;
+}
+
+struct leapraid_sas_dev *leapraid_get_sas_dev_by_addr(
+ struct leapraid_adapter *adapter,
+ u64 sas_address, struct leapraid_card_port *port)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+
+ if (!port)
+ return NULL;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter, sas_address,
+ port);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return sas_dev;
+}
+
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_hdl(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_sas_dev *sas_dev;
+
+ assert_spin_locked(&adapter->dev_topo.sas_dev_lock);
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_list, list)
+ if (sas_dev->hdl == hdl) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_init_list,
+ list)
+ if (sas_dev->hdl == hdl) {
+ leapraid_sdev_get(sas_dev);
+ return sas_dev;
+ }
+
+ return NULL;
+}
+
+struct leapraid_sas_dev *leapraid_get_sas_dev_by_hdl(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return sas_dev;
+}
+
+void leapraid_sas_dev_remove(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev)
+{
+ unsigned long flags;
+ bool del_from_list;
+
+ if (!sas_dev)
+ return;
+
+ del_from_list = false;
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ if (!list_empty(&sas_dev->list)) {
+ list_del_init(&sas_dev->list);
+ del_from_list = true;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (del_from_list)
+ leapraid_sdev_put(sas_dev);
+}
+
+static void leapraid_sas_dev_remove_by_hdl(struct leapraid_adapter *adapter,
+ u16 hdl)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ bool del_from_list;
+
+ if (adapter->access_ctrl.shost_recovering)
+ return;
+
+ del_from_list = false;
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (sas_dev) {
+ if (!list_empty(&sas_dev->list)) {
+ list_del_init(&sas_dev->list);
+ del_from_list = true;
+ leapraid_sdev_put(sas_dev);
+ }
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (del_from_list) {
+ leapraid_remove_device(adapter, sas_dev);
+ leapraid_sdev_put(sas_dev);
+ }
+}
+
+void leapraid_sas_dev_remove_by_sas_address(struct leapraid_adapter *adapter,
+ u64 sas_address,
+ struct leapraid_card_port *port)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ bool del_from_list;
+
+ if (adapter->access_ctrl.shost_recovering)
+ return;
+
+ del_from_list = false;
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter, sas_address,
+ port);
+ if (sas_dev) {
+ if (!list_empty(&sas_dev->list)) {
+ list_del_init(&sas_dev->list);
+ del_from_list = true;
+ leapraid_sdev_put(sas_dev);
+ }
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (del_from_list) {
+ leapraid_remove_device(adapter, sas_dev);
+ leapraid_sdev_put(sas_dev);
+ }
+}
+
+struct leapraid_raid_volume *leapraid_raid_volume_find_by_id(
+ struct leapraid_adapter *adapter, uint id, uint channel)
+{
+ struct leapraid_raid_volume *raid_volume;
+
+ list_for_each_entry(raid_volume, &adapter->dev_topo.raid_volume_list,
+ list) {
+ if (raid_volume->id == id &&
+ raid_volume->channel == channel) {
+ return raid_volume;
+ }
+ }
+
+ return NULL;
+}
+
+struct leapraid_raid_volume *leapraid_raid_volume_find_by_hdl(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_raid_volume *raid_volume;
+
+ list_for_each_entry(raid_volume, &adapter->dev_topo.raid_volume_list,
+ list) {
+ if (raid_volume->hdl == hdl)
+ return raid_volume;
+ }
+
+ return NULL;
+}
+
+static struct leapraid_raid_volume *leapraid_raid_volume_find_by_wwid(
+ struct leapraid_adapter *adapter, u64 wwid)
+{
+ struct leapraid_raid_volume *raid_volume;
+
+ list_for_each_entry(raid_volume, &adapter->dev_topo.raid_volume_list,
+ list) {
+ if (raid_volume->wwid == wwid)
+ return raid_volume;
+ }
+
+ return NULL;
+}
+
+static void leapraid_raid_volume_add(struct leapraid_adapter *adapter,
+ struct leapraid_raid_volume *raid_volume)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ list_add_tail(&raid_volume->list, &adapter->dev_topo.raid_volume_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+}
+
+void leapraid_raid_volume_remove(struct leapraid_adapter *adapter,
+ struct leapraid_raid_volume *raid_volume)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ list_del(&raid_volume->list);
+ kfree(raid_volume);
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+}
+
+static struct leapraid_enc_node *leapraid_enc_find_by_hdl(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_enc_node *enc_dev;
+
+ list_for_each_entry(enc_dev, &adapter->dev_topo.enc_list, list) {
+ if (le16_to_cpu(enc_dev->pg0.enc_hdl) == hdl)
+ return enc_dev;
+ }
+
+ return NULL;
+}
+
+struct leapraid_topo_node *leapraid_exp_find_by_sas_address(
+ struct leapraid_adapter *adapter,
+ u64 sas_address, struct leapraid_card_port *port)
+{
+ struct leapraid_topo_node *sas_exp;
+
+ if (!port)
+ return NULL;
+
+ list_for_each_entry(sas_exp, &adapter->dev_topo.exp_list, list) {
+ if (sas_exp->sas_address == sas_address &&
+ sas_exp->card_port == port)
+ return sas_exp;
+ }
+
+ return NULL;
+}
+
+bool leapraid_scmd_find_by_tgt(struct leapraid_adapter *adapter, uint id,
+ uint channel)
+{
+ struct scsi_cmnd *scmd;
+ int taskid;
+
+ for (taskid = 1; taskid <= adapter->shost->can_queue; taskid++) {
+ scmd = leapraid_get_scmd_from_taskid(adapter, taskid);
+ if (!scmd)
+ continue;
+
+ if (scmd->device->id == id && scmd->device->channel == channel)
+ return true;
+ }
+
+ return false;
+}
+
+bool leapraid_scmd_find_by_lun(struct leapraid_adapter *adapter, uint id,
+ unsigned int lun, uint channel)
+{
+ struct scsi_cmnd *scmd;
+ int taskid;
+
+ for (taskid = 1; taskid <= adapter->shost->can_queue; taskid++) {
+ scmd = leapraid_get_scmd_from_taskid(adapter, taskid);
+ if (!scmd)
+ continue;
+
+ if (scmd->device->id == id &&
+ scmd->device->channel == channel &&
+ scmd->device->lun == lun)
+ return true;
+ }
+
+ return false;
+}
+
+static struct leapraid_topo_node *leapraid_exp_find_by_hdl(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_topo_node *sas_exp;
+
+ list_for_each_entry(sas_exp, &adapter->dev_topo.exp_list, list) {
+ if (sas_exp->hdl == hdl)
+ return sas_exp;
+ }
+
+ return NULL;
+}
+
+static enum leapraid_card_port_checking_flg leapraid_get_card_port_feature(
+ struct leapraid_card_port *old_card_port,
+ struct leapraid_card_port *card_port,
+ struct leapraid_card_port_feature *feature)
+{
+ feature->dirty_flg =
+ old_card_port->flg & LEAPRAID_CARD_PORT_FLG_DIRTY;
+ feature->same_addr =
+ old_card_port->sas_address == card_port->sas_address;
+ feature->exact_phy =
+ old_card_port->phy_mask == card_port->phy_mask;
+ feature->phy_overlap =
+ old_card_port->phy_mask & card_port->phy_mask;
+ feature->same_port =
+ old_card_port->port_id == card_port->port_id;
+ feature->cur_chking_old_port = old_card_port;
+
+ if (!feature->dirty_flg || !feature->same_addr)
+ return CARD_PORT_SKIP_CHECKING;
+
+ return CARD_PORT_FURTHER_CHECKING_NEEDED;
+}
+
+static int leapraid_process_card_port_feature(
+ struct leapraid_card_port_feature *feature)
+{
+ struct leapraid_card_port *old_card_port;
+
+ old_card_port = feature->cur_chking_old_port;
+ if (feature->exact_phy) {
+ feature->checking_state = SAME_PORT_WITH_NOTHING_CHANGED;
+ feature->expected_old_port = old_card_port;
+ return 1;
+ } else if (feature->phy_overlap) {
+ if (feature->same_port) {
+ feature->checking_state =
+ SAME_PORT_WITH_PARTIALLY_CHANGED_PHYS;
+ feature->expected_old_port = old_card_port;
+ } else if (feature->checking_state !=
+ SAME_PORT_WITH_PARTIALLY_CHANGED_PHYS) {
+ feature->checking_state =
+ SAME_ADDR_WITH_PARTIALLY_CHANGED_PHYS;
+ feature->expected_old_port = old_card_port;
+ }
+ } else {
+ if (feature->checking_state !=
+ SAME_PORT_WITH_PARTIALLY_CHANGED_PHYS &&
+ feature->checking_state !=
+ SAME_ADDR_WITH_PARTIALLY_CHANGED_PHYS) {
+ feature->checking_state = SAME_ADDR_ONLY;
+ feature->expected_old_port = old_card_port;
+ feature->same_addr_port_count++;
+ }
+ }
+
+ return 0;
+}
+
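+/*
+ * Classify a freshly discovered card port against the already known (dirty)
+ * ports with the same SAS address: identical phy mask, partially changed
+ * phys on the same port, same address only, or an entirely new port.
+ */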
+static int leapraid_check_card_port(struct leapraid_adapter *adapter,
+ struct leapraid_card_port *card_port,
+ struct leapraid_card_port **expected_card_port,
+ int *count)
+{
+ struct leapraid_card_port *old_card_port;
+ struct leapraid_card_port_feature feature;
+
+ *expected_card_port = NULL;
+ memset(&feature, 0, sizeof(struct leapraid_card_port_feature));
+ feature.expected_old_port = NULL;
+ feature.same_addr_port_count = 0;
+ feature.checking_state = NEW_CARD_PORT;
+
+ list_for_each_entry(old_card_port, &adapter->dev_topo.card_port_list,
+ list) {
+ if (leapraid_get_card_port_feature(old_card_port, card_port,
+ &feature))
+ continue;
+
+ if (leapraid_process_card_port_feature(&feature))
+ break;
+ }
+
+ if (feature.checking_state == SAME_ADDR_ONLY)
+ *count = feature.same_addr_port_count;
+
+ *expected_card_port = feature.expected_old_port;
+ return feature.checking_state;
+}
+
+static void leapraid_del_phy_part_of_anther_port(
+ struct leapraid_adapter *adapter,
+ struct leapraid_card_port *card_port_table, int index,
+ u8 port_count, int offset)
+{
+ struct leapraid_topo_node *card_topo_node;
+ bool found = false;
+ int i;
+
+ card_topo_node = &adapter->dev_topo.card;
+ for (i = 0; i < port_count; i++) {
+ if (i == index)
+ continue;
+
+ if (card_port_table[i].phy_mask & BIT(offset)) {
+ leapraid_transport_detach_phy_to_port(adapter,
+ card_topo_node,
+ &card_topo_node->card_phy[offset]);
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ card_port_table[index].phy_mask |= BIT(offset);
+}
+
+static void leapraid_add_or_del_phys_from_existing_port(
+ struct leapraid_adapter *adapter,
+ struct leapraid_card_port *card_port,
+ struct leapraid_card_port *card_port_table,
+ int index, u8 port_count)
+{
+ struct leapraid_topo_node *card_topo_node;
+ u32 phy_mask_diff;
+ u32 offset = 0;
+
+ card_topo_node = &adapter->dev_topo.card;
+ phy_mask_diff = card_port->phy_mask ^
+ card_port_table[index].phy_mask;
+ for (offset = 0; offset < adapter->dev_topo.card.phys_num; offset++) {
+ if (!(phy_mask_diff & BIT(offset)))
+ continue;
+
+ if (!(card_port_table[index].phy_mask & BIT(offset))) {
+ leapraid_del_phy_part_of_anther_port(adapter,
+ card_port_table,
+ index, port_count,
+ offset);
+ continue;
+ }
+
+ if (card_topo_node->card_phy[offset].phy_is_assigned)
+ leapraid_transport_detach_phy_to_port(adapter,
+ card_topo_node,
+ &card_topo_node->card_phy[offset]);
+
+ leapraid_transport_attach_phy_to_port(adapter,
+ card_topo_node, &card_topo_node->card_phy[offset],
+ card_port->sas_address,
+ card_port);
+ }
+}
+
+struct leapraid_sas_dev *leapraid_get_next_sas_dev_from_init_list(
+ struct leapraid_adapter *adapter)
+{
+ struct leapraid_sas_dev *sas_dev = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ if (!list_empty(&adapter->dev_topo.sas_dev_init_list)) {
+ sas_dev = list_first_entry(&adapter->dev_topo.sas_dev_init_list,
+ struct leapraid_sas_dev, list);
+ leapraid_sdev_get(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return sas_dev;
+}
+
+static bool leapraid_check_boot_dev_internal(u64 sas_address, u64 dev_name,
+ u64 enc_lid, u16 slot,
+ struct leapraid_boot_dev *boot_dev,
+ u8 form)
+{
+ if (!boot_dev)
+ return false;
+
+ switch (form & LEAPRAID_BOOTDEV_FORM_MASK) {
+ case LEAPRAID_BOOTDEV_FORM_SAS_WWID:
+ if (!sas_address)
+ return false;
+
+ return sas_address ==
+ le64_to_cpu(((struct leapraid_boot_dev_format_sas_wwid *)(
+ boot_dev->pg_dev))->sas_addr);
+ case LEAPRAID_BOOTDEV_FORM_ENC_SLOT:
+ if (!enc_lid)
+ return false;
+
+ return (enc_lid == le64_to_cpu(((struct leapraid_boot_dev_format_enc_slot *)(
+ boot_dev->pg_dev))->enc_lid) &&
+ slot == le16_to_cpu(((struct leapraid_boot_dev_format_enc_slot *)(
+ boot_dev->pg_dev))->slot_num));
+ case LEAPRAID_BOOTDEV_FORM_DEV_NAME:
+ if (!dev_name)
+ return false;
+
+ return dev_name == le64_to_cpu(((struct leapraid_boot_dev_format_dev_name *)(
+ boot_dev->pg_dev))->dev_name);
+ case LEAPRAID_BOOTDEV_FORM_NONE:
+ default:
+ return false;
+ }
+}
+
+static void leapraid_try_set_boot_dev(struct leapraid_boot_dev *boot_dev,
+ u64 sas_addr, u64 dev_name,
+ u64 enc_lid, u16 slot,
+ void *dev, u32 chnl)
+{
+ bool matched = false;
+
+ if (boot_dev->dev)
+ return;
+
+ matched = leapraid_check_boot_dev_internal(sas_addr, dev_name, enc_lid,
+ slot, boot_dev,
+ boot_dev->form);
+ if (matched) {
+ boot_dev->dev = dev;
+ boot_dev->chnl = chnl;
+ }
+}
+
+static void leapraid_check_boot_dev(struct leapraid_adapter *adapter,
+ void *dev, u32 chnl)
+{
+ u64 sas_addr = 0;
+ u64 dev_name = 0;
+ u64 enc_lid = 0;
+ u16 slot = 0;
+
+ if (!adapter->scan_dev_desc.driver_loading)
+ return;
+
+ switch (chnl) {
+ case RAID_CHANNEL:
+ {
+ struct leapraid_raid_volume *raid_volume =
+ (struct leapraid_raid_volume *)dev;
+
+ sas_addr = raid_volume->wwid;
+ break;
+ }
+ default:
+ {
+ struct leapraid_sas_dev *sas_dev =
+ (struct leapraid_sas_dev *)dev;
+ sas_addr = sas_dev->sas_addr;
+ dev_name = sas_dev->dev_name;
+ enc_lid = sas_dev->enc_lid;
+ slot = sas_dev->slot;
+ break;
+ }
+ }
+
+ leapraid_try_set_boot_dev(&adapter->boot_devs.requested_boot_dev,
+ sas_addr, dev_name, enc_lid,
+ slot, dev, chnl);
+ leapraid_try_set_boot_dev(&adapter->boot_devs.requested_alt_boot_dev,
+ sas_addr, dev_name, enc_lid,
+ slot, dev, chnl);
+ leapraid_try_set_boot_dev(&adapter->boot_devs.current_boot_dev,
+ sas_addr, dev_name, enc_lid,
+ slot, dev, chnl);
+}
+
+static void leapraid_build_and_fire_cfg_req(struct leapraid_adapter *adapter,
+ struct leapraid_cfg_req *leap_mpi_cfgp_req,
+ struct leapraid_cfg_rep *leap_mpi_cfgp_rep)
+{
+ struct leapraid_cfg_req *local_leap_cfg_req;
+
+ memset(leap_mpi_cfgp_rep, 0, sizeof(struct leapraid_cfg_rep));
+ memset((void *)(&adapter->driver_cmds.cfg_op_cmd.reply), 0,
+ sizeof(struct leapraid_cfg_rep));
+ adapter->driver_cmds.cfg_op_cmd.status = LEAPRAID_CMD_PENDING;
+ local_leap_cfg_req = leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.cfg_op_cmd.inter_taskid);
+ memcpy(local_leap_cfg_req, leap_mpi_cfgp_req,
+ sizeof(struct leapraid_cfg_req));
+ init_completion(&adapter->driver_cmds.cfg_op_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.cfg_op_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.cfg_op_cmd.done,
+ LEAPRAID_CFG_OP_TIMEOUT * HZ);
+}
+
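+/*
+ * Issue a config page request and wait for completion.  Requests terminated
+ * by a hard reset are retried up to LEAPRAID_CFG_REQ_RETRY_TIMES; a timeout
+ * outside of host/PCIe recovery escalates to a hard reset.
+ */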
+static int leapraid_req_cfg_func(struct leapraid_adapter *adapter,
+ struct leapraid_cfg_req *leap_mpi_cfgp_req,
+ struct leapraid_cfg_rep *leap_mpi_cfgp_rep,
+ void *target_cfg_pg, void *real_cfg_pg_addr,
+ u16 target_real_cfg_pg_sz)
+{
+ u32 adapter_status = UINT_MAX;
+ bool issue_reset = false;
+ u8 retry_cnt;
+ int rc;
+
+ retry_cnt = 0;
+ mutex_lock(&adapter->driver_cmds.cfg_op_cmd.mutex);
+retry:
+ if (retry_cnt) {
+ if (retry_cnt > LEAPRAID_CFG_REQ_RETRY_TIMES) {
+ rc = -EFAULT;
+ goto out;
+ }
+ dev_warn(&adapter->pdev->dev,
+ "cfg-req: retry request, cnt=%u\n", retry_cnt);
+ }
+
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "cfg-req: adapter not operational\n");
+ goto out;
+ }
+
+ leapraid_build_and_fire_cfg_req(adapter, leap_mpi_cfgp_req,
+ leap_mpi_cfgp_rep);
+ if (!(adapter->driver_cmds.cfg_op_cmd.status & LEAPRAID_CMD_DONE)) {
+ retry_cnt++;
+ if (adapter->driver_cmds.cfg_op_cmd.status &
+ LEAPRAID_CMD_RESET) {
+ dev_warn(&adapter->pdev->dev,
+				 "cfg-req: cmd interrupted by hard reset\n");
+ goto retry;
+ }
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering) {
+ dev_err(&adapter->pdev->dev,
+ "cfg-req: cmd not done during %s, skip reset\n",
+ adapter->access_ctrl.shost_recovering ?
+ "shost recovery" : "pcie recovery");
+ issue_reset = false;
+ rc = -EFAULT;
+ } else {
+ dev_err(&adapter->pdev->dev,
+ "cfg-req: cmd timeout, issuing hard reset\n");
+ issue_reset = true;
+ }
+
+ goto out;
+ }
+
+ if (adapter->driver_cmds.cfg_op_cmd.status &
+ LEAPRAID_CMD_REPLY_VALID) {
+ memcpy(leap_mpi_cfgp_rep,
+ (void *)(&adapter->driver_cmds.cfg_op_cmd.reply),
+ sizeof(struct leapraid_cfg_rep));
+ adapter_status = le16_to_cpu(
+ leap_mpi_cfgp_rep->adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status == LEAPRAID_ADAPTER_STATUS_SUCCESS) {
+ if (target_cfg_pg && real_cfg_pg_addr &&
+ target_real_cfg_pg_sz)
+ if (leap_mpi_cfgp_req->action ==
+ LEAPRAID_CFG_ACT_PAGE_READ_CUR)
+ memcpy(target_cfg_pg,
+ real_cfg_pg_addr,
+ target_real_cfg_pg_sz);
+ } else {
+ if (adapter_status !=
+ LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_PAGE)
+ dev_err(&adapter->pdev->dev,
+ "cfg-rep: adapter_status=0x%x\n",
+ adapter_status);
+ rc = -EFAULT;
+ }
+ } else {
+ dev_err(&adapter->pdev->dev, "cfg-rep: reply invalid\n");
+ rc = -EFAULT;
+ }
+
+out:
+ adapter->driver_cmds.cfg_op_cmd.status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&adapter->driver_cmds.cfg_op_cmd.mutex);
+ if (issue_reset) {
+ if (adapter->scan_dev_desc.first_scan_dev_fired) {
+ dev_info(&adapter->pdev->dev,
+ "%s:%d cfg-req: failure, issuing reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ rc = -EFAULT;
+ } else {
+ dev_warn(&adapter->pdev->dev,
+				 "cfg-req: cmd failed during init, skip reset\n");
+ rc = -EFAULT;
+ }
+ }
+ return rc;
+}
+
+static int leapraid_request_cfg_pg_header(struct leapraid_adapter *adapter,
+ struct leapraid_cfg_req *leap_mpi_cfgp_req,
+ struct leapraid_cfg_rep *leap_mpi_cfgp_rep)
+{
+ return leapraid_req_cfg_func(adapter, leap_mpi_cfgp_req,
+ leap_mpi_cfgp_rep, NULL, NULL, 0);
+}
+
+static int leapraid_request_cfg_pg(struct leapraid_adapter *adapter,
+ struct leapraid_cfg_req *leap_mpi_cfgp_req,
+ struct leapraid_cfg_rep *leap_mpi_cfgp_rep,
+ void *target_cfg_pg, void *real_cfg_pg_addr,
+ u16 target_real_cfg_pg_sz)
+{
+ return leapraid_req_cfg_func(adapter, leap_mpi_cfgp_req,
+ leap_mpi_cfgp_rep, target_cfg_pg,
+ real_cfg_pg_addr, target_real_cfg_pg_sz);
+}
+
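+/*
+ * Read a config page in two steps: fetch the page header to learn the real
+ * page length, then read the current page into a coherent DMA buffer and
+ * copy it back into the caller's buffer.
+ */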
+int leapraid_op_config_page(struct leapraid_adapter *adapter,
+ void *target_cfg_pg, union cfg_param_1 cfgp1,
+ union cfg_param_2 cfgp2,
+ enum config_page_action cfg_op)
+{
+ struct leapraid_cfg_req leap_mpi_cfgp_req;
+ struct leapraid_cfg_rep leap_mpi_cfgp_rep;
+ u16 real_cfg_pg_sz = 0;
+ void *real_cfg_pg_addr = NULL;
+ dma_addr_t real_cfg_pg_dma = 0;
+ u32 __page_size;
+ int rc;
+
+ memset(&leap_mpi_cfgp_req, 0, sizeof(struct leapraid_cfg_req));
+ leap_mpi_cfgp_req.func = LEAPRAID_FUNC_CONFIG_OP;
+ leap_mpi_cfgp_req.action = LEAPRAID_CFG_ACT_PAGE_HEADER;
+
+ switch (cfg_op) {
+ case GET_BIOS_PG3:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_BIOS;
+ leap_mpi_cfgp_req.header.page_num =
+ LEAPRAID_CFG_PAGE_NUM_BIOS3;
+ __page_size = sizeof(struct leapraid_bios_page3);
+ break;
+ case GET_BIOS_PG2:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_BIOS;
+ leap_mpi_cfgp_req.header.page_num =
+ LEAPRAID_CFG_PAGE_NUM_BIOS2;
+ __page_size = sizeof(struct leapraid_bios_page2);
+ break;
+ case GET_SAS_DEVICE_PG0:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type = LEAPRAID_CFG_EXTPT_SAS_DEV;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_DEV0;
+ __page_size = sizeof(struct leapraid_sas_dev_p0);
+ break;
+ case GET_SAS_IOUNIT_PG0:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type =
+ LEAPRAID_CFG_EXTPT_SAS_IO_UNIT;
+ leap_mpi_cfgp_req.header.page_num =
+ LEAPRAID_CFG_PAGE_NUM_IOUNIT0;
+ __page_size = cfgp1.size;
+ break;
+ case GET_SAS_IOUNIT_PG1:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type =
+ LEAPRAID_CFG_EXTPT_SAS_IO_UNIT;
+ leap_mpi_cfgp_req.header.page_num =
+ LEAPRAID_CFG_PAGE_NUM_IOUNIT1;
+ __page_size = cfgp1.size;
+ break;
+ case GET_SAS_EXPANDER_PG0:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type = LEAPRAID_CFG_EXTPT_SAS_EXP;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_EXP0;
+ __page_size = sizeof(struct leapraid_exp_p0);
+ break;
+ case GET_SAS_EXPANDER_PG1:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type = LEAPRAID_CFG_EXTPT_SAS_EXP;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_EXP1;
+ __page_size = sizeof(struct leapraid_exp_p1);
+ break;
+ case GET_SAS_ENCLOSURE_PG0:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type = LEAPRAID_CFG_EXTPT_ENC;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_ENC0;
+ __page_size = sizeof(struct leapraid_enc_p0);
+ break;
+ case GET_PHY_PG0:
+ leap_mpi_cfgp_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ leap_mpi_cfgp_req.ext_page_type = LEAPRAID_CFG_EXTPT_SAS_PHY;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_PHY0;
+ __page_size = sizeof(struct leapraid_sas_phy_p0);
+ break;
+ case GET_RAID_VOLUME_PG0:
+ leap_mpi_cfgp_req.header.page_type =
+ LEAPRAID_CFG_PT_RAID_VOLUME;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_VOL0;
+ __page_size = cfgp1.size;
+ break;
+ case GET_RAID_VOLUME_PG1:
+ leap_mpi_cfgp_req.header.page_type =
+ LEAPRAID_CFG_PT_RAID_VOLUME;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_VOL1;
+ __page_size = sizeof(struct leapraid_raidvol_p1);
+ break;
+ case GET_PHY_DISK_PG0:
+ leap_mpi_cfgp_req.header.page_type =
+ LEAPRAID_CFG_PT_RAID_PHYSDISK;
+ leap_mpi_cfgp_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_PD0;
+ __page_size = sizeof(struct leapraid_raidpd_p0);
+ break;
+ default:
+ dev_err(&adapter->pdev->dev,
+ "unsupported config page action=%d!\n", cfg_op);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ leapraid_build_nodata_mpi_sg(adapter,
+ &leap_mpi_cfgp_req.page_buf_sge);
+ rc = leapraid_request_cfg_pg_header(adapter,
+ &leap_mpi_cfgp_req,
+ &leap_mpi_cfgp_rep);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+			"cfg-req: header failed, rc=%d\n", rc);
+ goto out;
+ }
+
+ if (cfg_op == GET_SAS_DEVICE_PG0 ||
+ cfg_op == GET_SAS_EXPANDER_PG0 ||
+ cfg_op == GET_SAS_ENCLOSURE_PG0 ||
+ cfg_op == GET_RAID_VOLUME_PG1)
+ leap_mpi_cfgp_req.page_addr = cpu_to_le32(cfgp1.form |
+ cfgp2.handle);
+ else if (cfg_op == GET_PHY_DISK_PG0)
+ leap_mpi_cfgp_req.page_addr = cpu_to_le32(cfgp1.form |
+ cfgp2.form_specific);
+ else if (cfg_op == GET_RAID_VOLUME_PG0)
+ leap_mpi_cfgp_req.page_addr =
+ cpu_to_le32(cfgp2.handle |
+ LEAPRAID_RAID_VOL_CFG_PGAD_HDL);
+ else if (cfg_op == GET_SAS_EXPANDER_PG1)
+ leap_mpi_cfgp_req.page_addr =
+ cpu_to_le32(cfgp2.handle |
+ (cfgp1.phy_number <<
+ LEAPRAID_SAS_EXP_CFG_PGAD_PHYNUM_SHIFT) |
+ LEAPRAID_SAS_EXP_CFG_PGAD_HDL_PHY_NUM);
+ else if (cfg_op == GET_PHY_PG0)
+ leap_mpi_cfgp_req.page_addr = cpu_to_le32(cfgp1.phy_number |
+ LEAPRAID_SAS_PHY_CFG_PGAD_PHY_NUMBER);
+
+ leap_mpi_cfgp_req.action = LEAPRAID_CFG_ACT_PAGE_READ_CUR;
+
+ leap_mpi_cfgp_req.header.page_num = leap_mpi_cfgp_rep.header.page_num;
+ leap_mpi_cfgp_req.header.page_type =
+ leap_mpi_cfgp_rep.header.page_type;
+ leap_mpi_cfgp_req.header.page_len = leap_mpi_cfgp_rep.header.page_len;
+ leap_mpi_cfgp_req.ext_page_len = leap_mpi_cfgp_rep.ext_page_len;
+ leap_mpi_cfgp_req.ext_page_type = leap_mpi_cfgp_rep.ext_page_type;
+
+	real_cfg_pg_sz = (leap_mpi_cfgp_req.header.page_len) ?
+			 leap_mpi_cfgp_req.header.page_len *
+			 LEAPRAID_CFG_UNIT_SIZE :
+			 le16_to_cpu(leap_mpi_cfgp_rep.ext_page_len) *
+			 LEAPRAID_CFG_UNIT_SIZE;
+ real_cfg_pg_addr = dma_alloc_coherent(&adapter->pdev->dev,
+ real_cfg_pg_sz,
+ &real_cfg_pg_dma,
+ GFP_KERNEL);
+ if (!real_cfg_pg_addr) {
+ dev_err(&adapter->pdev->dev, "cfg-req: dma alloc failed\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ if (leap_mpi_cfgp_req.action == LEAPRAID_CFG_ACT_PAGE_WRITE_CUR) {
+ leapraid_single_mpi_sg_append(adapter,
+ &leap_mpi_cfgp_req.page_buf_sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL |
+ LEAPRAID_SGE_FLG_H2C) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ real_cfg_pg_sz,
+ real_cfg_pg_dma);
+ memcpy(real_cfg_pg_addr, target_cfg_pg,
+ min_t(u16, real_cfg_pg_sz, __page_size));
+ } else {
+ memset(target_cfg_pg, 0, __page_size);
+ leapraid_single_mpi_sg_append(adapter,
+ &leap_mpi_cfgp_req.page_buf_sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ real_cfg_pg_sz,
+ real_cfg_pg_dma);
+ memset(real_cfg_pg_addr, 0,
+ min_t(u16, real_cfg_pg_sz, __page_size));
+ }
+
+ rc = leapraid_request_cfg_pg(adapter,
+ &leap_mpi_cfgp_req,
+ &leap_mpi_cfgp_rep,
+ target_cfg_pg,
+ real_cfg_pg_addr,
+ min_t(u16, real_cfg_pg_sz, __page_size));
+ if (rc) {
+ u32 adapter_status;
+
+ adapter_status = le16_to_cpu(leap_mpi_cfgp_rep.adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status !=
+ LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_PAGE)
+ dev_err(&adapter->pdev->dev,
+ "cfg-req: rc=%d, pg_info: 0x%x, 0x%x, %d\n",
+ rc, leap_mpi_cfgp_req.header.page_type,
+ leap_mpi_cfgp_req.ext_page_type,
+ leap_mpi_cfgp_req.header.page_num);
+ }
+
+ if (real_cfg_pg_addr)
+ dma_free_coherent(&adapter->pdev->dev,
+ real_cfg_pg_sz,
+ real_cfg_pg_addr,
+ real_cfg_pg_dma);
+out:
+ return rc;
+}
+
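+/*
+ * Walk the RAID configuration pages (GET_NEXT loop over the config number)
+ * looking for the element that references @pd_hdl, and return the handle of
+ * the owning volume; *vol_hdl is left at 0 when no volume owns the disk.
+ */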
+static int leapraid_cfg_get_volume_hdl_dispatch(
+ struct leapraid_adapter *adapter,
+ struct leapraid_cfg_req *cfg_req,
+ struct leapraid_cfg_rep *cfg_rep,
+ struct leapraid_raid_cfg_p0 *raid_cfg_p0,
+ void *real_cfg_pg_addr,
+ u16 real_cfg_pg_sz,
+ u16 raid_cfg_p0_sz,
+ u16 pd_hdl, u16 *vol_hdl)
+{
+ u16 phys_disk_dev_hdl;
+ u16 adapter_status;
+ u16 element_type;
+ int config_num;
+ int rc, i;
+
+ config_num = 0xff;
+ while (true) {
+ cfg_req->page_addr =
+ cpu_to_le32(config_num +
+ LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP);
+ rc = leapraid_request_cfg_pg(
+ adapter, cfg_req, cfg_rep,
+ raid_cfg_p0, real_cfg_pg_addr,
+ min_t(u16, real_cfg_pg_sz, raid_cfg_p0_sz));
+ adapter_status = le16_to_cpu(cfg_rep->adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (rc) {
+ if (adapter_status ==
+ LEAPRAID_ADAPTER_STATUS_CONFIG_INVALID_PAGE) {
+ *vol_hdl = 0;
+ return 0;
+ }
+ return rc;
+ }
+
+ if (adapter_status != LEAPRAID_ADAPTER_STATUS_SUCCESS)
+ return -1;
+
+ for (i = 0; i < raid_cfg_p0->elements_num; i++) {
+ element_type =
+ le16_to_cpu(raid_cfg_p0->cfg_element[i].element_flg) &
+ LEAPRAID_RAIDCFG_P0_EFLG_MASK_ELEMENT_TYPE;
+
+ switch (element_type) {
+ case LEAPRAID_RAIDCFG_P0_EFLG_VOL_PHYS_DISK_ELEMENT:
+ case LEAPRAID_RAIDCFG_P0_EFLG_OCE_ELEMENT:
+ phys_disk_dev_hdl =
+ le16_to_cpu(raid_cfg_p0->cfg_element[i]
+ .phys_disk_dev_hdl);
+ if (phys_disk_dev_hdl == pd_hdl) {
+ *vol_hdl =
+ le16_to_cpu
+ (raid_cfg_p0->cfg_element[i]
+ .vol_dev_hdl);
+ return 0;
+ }
+ break;
+
+ case LEAPRAID_RAIDCFG_P0_EFLG_HOT_SPARE_ELEMENT:
+ *vol_hdl = 0;
+ return 0;
+ default:
+ break;
+ }
+ }
+ config_num = raid_cfg_p0->cfg_num;
+ }
+ return 0;
+}
+
+int leapraid_cfg_get_volume_hdl(struct leapraid_adapter *adapter,
+ u16 pd_hdl, u16 *vol_hdl)
+{
+ struct leapraid_raid_cfg_p0 *raid_cfg_p0 = NULL;
+ struct leapraid_cfg_req cfg_req;
+ struct leapraid_cfg_rep cfg_rep;
+ dma_addr_t real_cfg_pg_dma = 0;
+ void *real_cfg_pg_addr = NULL;
+ u16 real_cfg_pg_sz = 0;
+ int rc, raid_cfg_p0_sz;
+
+ *vol_hdl = 0;
+ memset(&cfg_req, 0, sizeof(struct leapraid_cfg_req));
+ cfg_req.func = LEAPRAID_FUNC_CONFIG_OP;
+ cfg_req.action = LEAPRAID_CFG_ACT_PAGE_HEADER;
+ cfg_req.header.page_type = LEAPRAID_CFG_PT_EXTENDED;
+ cfg_req.ext_page_type = LEAPRAID_CFG_EXTPT_RAID_CONFIG;
+ cfg_req.header.page_num = LEAPRAID_CFG_PAGE_NUM_VOL0;
+
+ leapraid_build_nodata_mpi_sg(adapter, &cfg_req.page_buf_sge);
+ rc = leapraid_request_cfg_pg_header(adapter, &cfg_req, &cfg_rep);
+ if (rc)
+ goto out;
+
+ cfg_req.action = LEAPRAID_CFG_ACT_PAGE_READ_CUR;
+ raid_cfg_p0_sz = le16_to_cpu(cfg_rep.ext_page_len) *
+ LEAPRAID_CFG_UNIT_SIZE;
+ raid_cfg_p0 = kmalloc(raid_cfg_p0_sz, GFP_KERNEL);
+ if (!raid_cfg_p0) {
+ rc = -1;
+ goto out;
+ }
+
+ real_cfg_pg_sz = (cfg_req.header.page_len) ?
+ cfg_req.header.page_len * LEAPRAID_CFG_UNIT_SIZE :
+ le16_to_cpu(cfg_rep.ext_page_len) * LEAPRAID_CFG_UNIT_SIZE;
+
+ real_cfg_pg_addr = dma_alloc_coherent(&adapter->pdev->dev,
+ real_cfg_pg_sz, &real_cfg_pg_dma,
+ GFP_KERNEL);
+ if (!real_cfg_pg_addr) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ memset(raid_cfg_p0, 0, raid_cfg_p0_sz);
+ leapraid_single_mpi_sg_append(adapter,
+ &cfg_req.page_buf_sge,
+ ((LEAPRAID_SGE_FLG_SIMPLE_ONE |
+ LEAPRAID_SGE_FLG_LAST_ONE |
+ LEAPRAID_SGE_FLG_EOB |
+ LEAPRAID_SGE_FLG_EOL) <<
+ LEAPRAID_SGE_FLG_SHIFT) |
+ real_cfg_pg_sz,
+ real_cfg_pg_dma);
+ memset(real_cfg_pg_addr, 0,
+ min_t(u16, real_cfg_pg_sz, raid_cfg_p0_sz));
+
+ rc = leapraid_cfg_get_volume_hdl_dispatch(adapter,
+ &cfg_req, &cfg_rep,
+ raid_cfg_p0,
+ real_cfg_pg_addr,
+ real_cfg_pg_sz,
+ raid_cfg_p0_sz,
+ pd_hdl, vol_hdl);
+
+out:
+ if (real_cfg_pg_addr)
+ dma_free_coherent(&adapter->pdev->dev,
+ real_cfg_pg_sz, real_cfg_pg_addr,
+ real_cfg_pg_dma);
+ kfree(raid_cfg_p0);
+ return rc;
+}
+
+static int leapraid_get_adapter_phys(struct leapraid_adapter *adapter,
+ u8 *nr_phys)
+{
+ struct leapraid_sas_io_unit_p0 sas_io_unit_page0;
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ int rc = 0;
+
+ *nr_phys = 0;
+ cfgp1.size = sizeof(struct leapraid_sas_io_unit_p0);
+ rc = leapraid_op_config_page(adapter, &sas_io_unit_page0, cfgp1,
+ cfgp2, GET_SAS_IOUNIT_PG0);
+ if (rc)
+ return rc;
+
+ *nr_phys = sas_io_unit_page0.phy_num;
+
+ return 0;
+}
+
+static int leapraid_cfg_get_number_pds(struct leapraid_adapter *adapter,
+ u16 hdl, u8 *num_pds)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_raidvol_p0 raidvol_p0;
+ int rc;
+
+ *num_pds = 0;
+ cfgp1.size = sizeof(struct leapraid_raidvol_p0);
+ cfgp2.handle = hdl;
+ rc = leapraid_op_config_page(adapter, &raidvol_p0, cfgp1,
+ cfgp2, GET_RAID_VOLUME_PG0);
+
+ if (!rc)
+ *num_pds = raidvol_p0.num_phys_disks;
+
+ return rc;
+}
+
+int leapraid_cfg_get_volume_wwid(struct leapraid_adapter *adapter,
+ u16 vol_hdl, u64 *wwid)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_raidvol_p1 raidvol_p1;
+ int rc;
+
+ *wwid = 0;
+ cfgp1.form = LEAPRAID_RAID_VOL_CFG_PGAD_HDL;
+ cfgp2.handle = vol_hdl;
+ rc = leapraid_op_config_page(adapter, &raidvol_p1, cfgp1,
+ cfgp2, GET_RAID_VOLUME_PG1);
+ if (!rc)
+ *wwid = le64_to_cpu(raidvol_p1.wwid);
+
+ return rc;
+}
+
+static int leapraid_get_sas_io_unit_page0(struct leapraid_adapter *adapter,
+ struct leapraid_sas_io_unit_p0 *sas_io_unit_p0,
+ u16 sas_iou_pg0_sz)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+
+ cfgp1.size = sas_iou_pg0_sz;
+ return leapraid_op_config_page(adapter, sas_io_unit_p0, cfgp1,
+ cfgp2, GET_SAS_IOUNIT_PG0);
+}
+
+static int leapraid_get_sas_address(struct leapraid_adapter *adapter,
+ u16 hdl, u64 *sas_address)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+
+ *sas_address = 0;
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = hdl;
+ if ((leapraid_op_config_page(adapter, &sas_dev_p0, cfgp1,
+ cfgp2, GET_SAS_DEVICE_PG0)))
+ return -ENXIO;
+
+ if (hdl <= adapter->dev_topo.card.phys_num &&
+ (!(le32_to_cpu(sas_dev_p0.dev_info) & LEAPRAID_DEVTYP_SEP)))
+ *sas_address = adapter->dev_topo.card.sas_address;
+ else
+ *sas_address = le64_to_cpu(sas_dev_p0.sas_address);
+
+ return 0;
+}
+
+int leapraid_get_volume_cap(struct leapraid_adapter *adapter,
+ struct leapraid_raid_volume *raid_volume)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_raidvol_p0 *raidvol_p0;
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+ struct leapraid_raidpd_p0 raidpd_p0;
+ u8 num_pds;
+ u16 sz;
+
+ if ((leapraid_cfg_get_number_pds(adapter, raid_volume->hdl,
+ &num_pds)) || !num_pds)
+ return -EFAULT;
+
+ raid_volume->pd_num = num_pds;
+ sz = offsetof(struct leapraid_raidvol_p0, phys_disk) +
+ (num_pds * sizeof(struct leapraid_raidvol0_phys_disk));
+ raidvol_p0 = kzalloc(sz, GFP_KERNEL);
+ if (!raidvol_p0)
+ return -EFAULT;
+
+ cfgp1.size = sz;
+ cfgp2.handle = raid_volume->hdl;
+ if ((leapraid_op_config_page(adapter, raidvol_p0, cfgp1, cfgp2,
+ GET_RAID_VOLUME_PG0))) {
+ kfree(raidvol_p0);
+ return -EFAULT;
+ }
+
+ raid_volume->vol_type = raidvol_p0->volume_type;
+ cfgp1.form = LEAPRAID_PHYSDISK_CFG_PGAD_PHYSDISKNUM;
+ cfgp2.form_specific = raidvol_p0->phys_disk[0].phys_disk_num;
+ if (!(leapraid_op_config_page(adapter, &raidpd_p0, cfgp1, cfgp2,
+ GET_PHY_DISK_PG0))) {
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = le16_to_cpu(raidpd_p0.dev_hdl);
+ if (!(leapraid_op_config_page(adapter, &sas_dev_p0, cfgp1,
+ cfgp2, GET_SAS_DEVICE_PG0))) {
+ raid_volume->dev_info =
+ le32_to_cpu(sas_dev_p0.dev_info);
+ }
+ }
+
+ kfree(raidvol_p0);
+ return 0;
+}
+
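+/*
+ * Periodic firmware log poller: mirror the host and adapter log buffer
+ * positions through the BAR registers and re-arm the delayed work while
+ * the log workqueue is still alive.
+ */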
+static void leapraid_fw_log_work(struct work_struct *work)
+{
+ struct leapraid_adapter *adapter = container_of(work,
+ struct leapraid_adapter, fw_log_desc.fw_log_work.work);
+ struct leapraid_fw_log_info *infom;
+ unsigned long flags;
+
+ infom = (struct leapraid_fw_log_info *)(adapter->fw_log_desc.fw_log_buffer +
+ LEAPRAID_SYS_LOG_BUF_SIZE);
+
+ if (adapter->fw_log_desc.fw_log_init_flag == 0) {
+ infom->user_position =
+ leapraid_readl(&adapter->iomem_base->host_log_buf_pos);
+ infom->adapter_position =
+ leapraid_readl(&adapter->iomem_base->adapter_log_buf_pos);
+ adapter->fw_log_desc.fw_log_init_flag++;
+ }
+
+ writel(infom->user_position, &adapter->iomem_base->host_log_buf_pos);
+ infom->adapter_position =
+ leapraid_readl(&adapter->iomem_base->adapter_log_buf_pos);
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ if (adapter->fw_log_desc.fw_log_wq)
+ queue_delayed_work(adapter->fw_log_desc.fw_log_wq,
+ &adapter->fw_log_desc.fw_log_work,
+ msecs_to_jiffies(LEAPRAID_PCIE_LOG_POLLING_INTERVAL));
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+}
+
+void leapraid_fw_log_stop(struct leapraid_adapter *adapter)
+{
+ struct workqueue_struct *wq;
+ unsigned long flags;
+
+ if (!adapter->fw_log_desc.open_pcie_trace)
+ return;
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ wq = adapter->fw_log_desc.fw_log_wq;
+ adapter->fw_log_desc.fw_log_wq = NULL;
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+ if (wq) {
+ if (!cancel_delayed_work_sync(&adapter->fw_log_desc.fw_log_work))
+ flush_workqueue(wq);
+ destroy_workqueue(wq);
+ }
+}
+
+void leapraid_fw_log_start(struct leapraid_adapter *adapter)
+{
+ unsigned long flags;
+
+ if (!adapter->fw_log_desc.open_pcie_trace)
+ return;
+
+ if (adapter->fw_log_desc.fw_log_wq)
+ return;
+
+ INIT_DELAYED_WORK(&adapter->fw_log_desc.fw_log_work,
+ leapraid_fw_log_work);
+ snprintf(adapter->fw_log_desc.fw_log_wq_name,
+ sizeof(adapter->fw_log_desc.fw_log_wq_name),
+ "poll_%s%u_fw_log",
+ LEAPRAID_DRIVER_NAME, adapter->adapter_attr.id);
+ adapter->fw_log_desc.fw_log_wq =
+ create_singlethread_workqueue(
+ adapter->fw_log_desc.fw_log_wq_name);
+ if (!adapter->fw_log_desc.fw_log_wq)
+ return;
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ if (adapter->fw_log_desc.fw_log_wq)
+ queue_delayed_work(adapter->fw_log_desc.fw_log_wq,
+ &adapter->fw_log_desc.fw_log_work,
+ msecs_to_jiffies(LEAPRAID_PCIE_LOG_POLLING_INTERVAL));
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+}
+
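+/*
+ * Push the host wall-clock time (in milliseconds) to the adapter through a
+ * SAS IO unit control request, split across the two 32-bit parameter fields.
+ */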
+static void leapraid_timestamp_sync(struct leapraid_adapter *adapter)
+{
+ struct leapraid_io_unit_ctrl_req *io_unit_ctrl_req;
+ ktime_t current_time;
+ bool issue_reset = false;
+ u64 time_stamp = 0;
+
+ mutex_lock(&adapter->driver_cmds.timestamp_sync_cmd.mutex);
+ adapter->driver_cmds.timestamp_sync_cmd.status = LEAPRAID_CMD_PENDING;
+ io_unit_ctrl_req =
+ leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.timestamp_sync_cmd.inter_taskid);
+ memset(io_unit_ctrl_req, 0, sizeof(struct leapraid_io_unit_ctrl_req));
+ io_unit_ctrl_req->func = LEAPRAID_FUNC_SAS_IO_UNIT_CTRL;
+ io_unit_ctrl_req->op = LEAPRAID_SAS_OP_SET_PARAMETER;
+ io_unit_ctrl_req->adapter_para = LEAPRAID_SET_PARAMETER_SYNC_TIMESTAMP;
+
+ current_time = ktime_get_real();
+ time_stamp = ktime_to_ms(current_time);
+
+ io_unit_ctrl_req->adapter_para_value =
+ cpu_to_le32(time_stamp & 0xFFFFFFFF);
+ io_unit_ctrl_req->adapter_para_value2 =
+ cpu_to_le32(time_stamp >> 32);
+ init_completion(&adapter->driver_cmds.timestamp_sync_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.timestamp_sync_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.timestamp_sync_cmd.done,
+ LEAPRAID_TIMESTAMP_SYNC_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.timestamp_sync_cmd.status &
+ LEAPRAID_CMD_DONE))
+ issue_reset =
+ leapraid_check_reset(
+ adapter->driver_cmds.timestamp_sync_cmd.status);
+
+ if (issue_reset) {
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ }
+
+ adapter->driver_cmds.timestamp_sync_cmd.status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&adapter->driver_cmds.timestamp_sync_cmd.mutex);
+}
+
+static bool leapraid_should_skip_fault_check(struct leapraid_adapter *adapter)
+{
+ unsigned long flags;
+ bool skip;
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ skip = adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering ||
+ adapter->access_ctrl.host_removing;
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+
+ return skip;
+}
+
+static void leapraid_check_scheduled_fault_work(struct work_struct *work)
+{
+ struct leapraid_adapter *adapter;
+ unsigned long flags;
+ u32 adapter_state;
+ int rc;
+
+ adapter = container_of(work, struct leapraid_adapter,
+ reset_desc.fault_reset_work.work);
+
+ if (leapraid_should_skip_fault_check(adapter))
+ goto scheduled_timer;
+
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (adapter_state != LEAPRAID_DB_OPERATIONAL) {
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ rc = leapraid_hard_reset_handler(adapter, FULL_RESET);
+ dev_warn(&adapter->pdev->dev, "%s: hard reset: %s\n",
+ __func__, (rc == 0) ? "success" : "failed");
+
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (rc && adapter_state != LEAPRAID_DB_OPERATIONAL)
+ return;
+ }
+
+ if (++adapter->timestamp_sync_cnt >=
+ LEAPRAID_TIMESTAMP_SYNC_INTERVAL) {
+ adapter->timestamp_sync_cnt = 0;
+ leapraid_timestamp_sync(adapter);
+ }
+
+scheduled_timer:
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ if (adapter->reset_desc.fault_reset_wq)
+ queue_delayed_work(adapter->reset_desc.fault_reset_wq,
+ &adapter->reset_desc.fault_reset_work,
+ msecs_to_jiffies(LEAPRAID_FAULT_POLLING_INTERVAL));
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+}
+
+void leapraid_check_scheduled_fault_start(struct leapraid_adapter *adapter)
+{
+ unsigned long flags;
+
+ if (adapter->reset_desc.fault_reset_wq)
+ return;
+
+ adapter->timestamp_sync_cnt = 0;
+ INIT_DELAYED_WORK(&adapter->reset_desc.fault_reset_work,
+ leapraid_check_scheduled_fault_work);
+ snprintf(adapter->reset_desc.fault_reset_wq_name,
+ sizeof(adapter->reset_desc.fault_reset_wq_name),
+ "poll_%s%u_status",
+ LEAPRAID_DRIVER_NAME, adapter->adapter_attr.id);
+ adapter->reset_desc.fault_reset_wq =
+ create_singlethread_workqueue(
+ adapter->reset_desc.fault_reset_wq_name);
+ if (!adapter->reset_desc.fault_reset_wq) {
+ dev_err(&adapter->pdev->dev,
+ "create single thread workqueue failed!\n");
+ return;
+ }
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ if (adapter->reset_desc.fault_reset_wq)
+ queue_delayed_work(adapter->reset_desc.fault_reset_wq,
+ &adapter->reset_desc.fault_reset_work,
+ msecs_to_jiffies(LEAPRAID_FAULT_POLLING_INTERVAL));
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+}
+
+void leapraid_check_scheduled_fault_stop(struct leapraid_adapter *adapter)
+{
+ struct workqueue_struct *wq;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ wq = adapter->reset_desc.fault_reset_wq;
+ adapter->reset_desc.fault_reset_wq = NULL;
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+
+ if (!wq)
+ return;
+
+ if (!cancel_delayed_work_sync(&adapter->reset_desc.fault_reset_work))
+ flush_workqueue(wq);
+ destroy_workqueue(wq);
+}
+
+static bool leapraid_ready_for_scsi_io(struct leapraid_adapter *adapter,
+ u16 hdl)
+{
+ if (adapter->access_ctrl.pcie_recovering ||
+ adapter->access_ctrl.shost_recovering)
+ return false;
+
+ if (leapraid_check_adapter_is_op(adapter))
+ return false;
+
+ if (hdl == LEAPRAID_INVALID_DEV_HANDLE)
+ return false;
+
+ if (test_bit(hdl, (unsigned long *)adapter->dev_topo.dev_removing))
+ return false;
+
+ return true;
+}
+
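+/*
+ * Send an internally generated SCSI command through the regular
+ * queuecommand path using the driver-owned scsi_cmnd and a coherent bounce
+ * buffer; a timeout escalates to a target reset.
+ */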
+static int leapraid_dispatch_scsi_io(struct leapraid_adapter *adapter,
+ struct leapraid_scsi_cmd_desc *cmd_desc)
+{
+ struct scsi_device *sdev;
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_cmnd *scmd;
+ void *dma_buffer = NULL;
+ dma_addr_t dma_addr = 0;
+ u8 sdev_flg = 0;
+ bool issue_reset = false;
+ int rc = 0;
+
+ if (WARN_ON(!adapter->driver_cmds.internal_scmd))
+ return -EINVAL;
+
+ if (!leapraid_ready_for_scsi_io(adapter, cmd_desc->hdl))
+ return -EINVAL;
+
+ mutex_lock(&adapter->driver_cmds.driver_scsiio_cmd.mutex);
+ if (adapter->driver_cmds.driver_scsiio_cmd.status !=
+ LEAPRAID_CMD_NOT_USED) {
+ rc = -EAGAIN;
+ goto out;
+ }
+ adapter->driver_cmds.driver_scsiio_cmd.status = LEAPRAID_CMD_PENDING;
+
+ __shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (sdev_priv->starget_priv->hdl == cmd_desc->hdl &&
+ sdev_priv->lun == cmd_desc->lun) {
+ sdev_flg = 1;
+ break;
+ }
+ }
+
+ if (!sdev_flg) {
+ rc = -ENXIO;
+ goto out;
+ }
+
+ if (cmd_desc->data_length) {
+ dma_buffer = dma_alloc_coherent(&adapter->pdev->dev,
+ cmd_desc->data_length,
+ &dma_addr, GFP_ATOMIC);
+ if (!dma_buffer) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ if (cmd_desc->dir == DMA_TO_DEVICE)
+ memcpy(dma_buffer, cmd_desc->data_buffer,
+ cmd_desc->data_length);
+ }
+
+ scmd = adapter->driver_cmds.internal_scmd;
+ scmd->device = sdev;
+ scmd->cmd_len = cmd_desc->cdb_length;
+ memcpy(scmd->cmnd, cmd_desc->cdb, cmd_desc->cdb_length);
+ scmd->sc_data_direction = cmd_desc->dir;
+ scmd->sdb.length = cmd_desc->data_length;
+ scmd->sdb.table.nents = 1;
+ scmd->sdb.table.orig_nents = 1;
+ sg_init_one(scmd->sdb.table.sgl, dma_buffer, cmd_desc->data_length);
+ init_completion(&adapter->driver_cmds.driver_scsiio_cmd.done);
+ if (leapraid_queuecommand(adapter->shost, scmd)) {
+ adapter->driver_cmds.driver_scsiio_cmd.status &=
+ ~LEAPRAID_CMD_PENDING;
+ complete(&adapter->driver_cmds.driver_scsiio_cmd.done);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ wait_for_completion_timeout(&adapter->driver_cmds.driver_scsiio_cmd.done,
+ cmd_desc->time_out * HZ);
+
+ if (!(adapter->driver_cmds.driver_scsiio_cmd.status &
+ LEAPRAID_CMD_DONE)) {
+ issue_reset =
+ leapraid_check_reset(
+ adapter->driver_cmds.driver_scsiio_cmd.status);
+ rc = -ENODATA;
+ goto reset;
+ }
+
+ rc = adapter->driver_cmds.internal_scmd->result;
+ if (!rc && cmd_desc->dir == DMA_FROM_DEVICE)
+ memcpy(cmd_desc->data_buffer, dma_buffer,
+ cmd_desc->data_length);
+
+reset:
+ if (issue_reset) {
+ rc = -ENODATA;
+ dev_err(&adapter->pdev->dev, "fire tgt reset: hdl=0x%04x\n",
+ cmd_desc->hdl);
+ leapraid_issue_locked_tm(adapter, cmd_desc->hdl, 0, 0, 0,
+ LEAPRAID_TM_TASKTYPE_TARGET_RESET,
+ adapter->driver_cmds.driver_scsiio_cmd.taskid,
+ LEAPRAID_TM_MSGFLAGS_LINK_RESET);
+ }
+out:
+ if (dma_buffer)
+ dma_free_coherent(&adapter->pdev->dev,
+ cmd_desc->data_length, dma_buffer, dma_addr);
+ adapter->driver_cmds.driver_scsiio_cmd.status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&adapter->driver_cmds.driver_scsiio_cmd.mutex);
+ return rc;
+}
+
+static int leapraid_dispatch_logsense(struct leapraid_adapter *adapter,
+ u16 hdl, u32 lun)
+{
+ struct leapraid_scsi_cmd_desc *desc;
+ int rc = 0;
+
+ desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+ if (!desc)
+ return -ENOMEM;
+
+ desc->hdl = hdl;
+ desc->lun = lun;
+ desc->data_length = LEAPRAID_LOGSENSE_DATA_LENGTH;
+ desc->dir = DMA_FROM_DEVICE;
+ desc->cdb_length = LEAPRAID_LOGSENSE_CDB_LENGTH;
+ desc->cdb[0] = LOG_SENSE;
+ desc->cdb[2] = LEAPRAID_LOGSENSE_CDB_CODE;
+ desc->cdb[8] = desc->data_length;
+ desc->raid_member = false;
+ desc->time_out = LEAPRAID_LOGSENSE_TIMEOUT;
+
+ desc->data_buffer = kzalloc(desc->data_length, GFP_KERNEL);
+ if (!desc->data_buffer) {
+ kfree(desc);
+ return -ENOMEM;
+ }
+
+ rc = leapraid_dispatch_scsi_io(adapter, desc);
+ if (!rc) {
+ if (((char *)desc->data_buffer)[8] ==
+ LEAPRAID_LOGSENSE_SMART_CODE)
+ leapraid_smart_fault_detect(adapter, hdl);
+ }
+
+ kfree(desc->data_buffer);
+ kfree(desc);
+
+ return rc;
+}
+
+static bool leapraid_smart_poll_check(struct leapraid_adapter *adapter,
+ struct leapraid_sdev_priv *sdev_priv,
+ u32 reset_flg)
+{
+ struct leapraid_sas_dev *sas_dev = NULL;
+
+ if (!sdev_priv || !sdev_priv->starget_priv->card_port)
+ goto out;
+
+ sas_dev = leapraid_get_sas_dev_by_addr(adapter,
+ sdev_priv->starget_priv->sas_address,
+ sdev_priv->starget_priv->card_port);
+ if (!sas_dev || !sas_dev->support_smart)
+ goto out;
+
+ if (reset_flg)
+ sas_dev->led_on = false;
+ else if (sas_dev->led_on)
+ goto out;
+
+ if ((sdev_priv->starget_priv->flg & LEAPRAID_TGT_FLG_RAID_MEMBER) ||
+ (sdev_priv->starget_priv->flg & LEAPRAID_TGT_FLG_VOLUME) ||
+ sdev_priv->block)
+ goto out;
+
+ leapraid_sdev_put(sas_dev);
+ return true;
+
+out:
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+ return false;
+}
+
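+/*
+ * Periodic work that issues a LOG SENSE to every SMART-capable SATA end
+ * device and re-arms itself for as long as the smart poll workqueue exists.
+ */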
+static void leapraid_sata_smart_poll_work(struct work_struct *work)
+{
+ struct leapraid_adapter *adapter =
+ container_of(work, struct leapraid_adapter,
+ smart_poll_desc.smart_poll_work.work);
+ struct scsi_device *sdev;
+ struct leapraid_sdev_priv *sdev_priv;
+ static u32 reset_cnt;
+ bool reset_flg = false;
+
+ if (leapraid_check_adapter_is_op(adapter))
+ goto out;
+
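+	/* A host reset since the previous poll clears the cached LED state. */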
+ reset_flg = (reset_cnt < adapter->reset_desc.reset_cnt);
+ reset_cnt = adapter->reset_desc.reset_cnt;
+
+ __shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (leapraid_smart_poll_check(adapter, sdev_priv, reset_flg))
+ leapraid_dispatch_logsense(adapter,
+ sdev_priv->starget_priv->hdl,
+ sdev_priv->lun);
+ }
+
+out:
+ if (adapter->smart_poll_desc.smart_poll_wq)
+ queue_delayed_work(adapter->smart_poll_desc.smart_poll_wq,
+ &adapter->smart_poll_desc.smart_poll_work,
+ msecs_to_jiffies(LEAPRAID_SMART_POLLING_INTERVAL));
+}
+
+void leapraid_smart_polling_start(struct leapraid_adapter *adapter)
+{
+ if (adapter->smart_poll_desc.smart_poll_wq || !smart_poll)
+ return;
+
+ INIT_DELAYED_WORK(&adapter->smart_poll_desc.smart_poll_work,
+ leapraid_sata_smart_poll_work);
+
+ snprintf(adapter->smart_poll_desc.smart_poll_wq_name,
+ sizeof(adapter->smart_poll_desc.smart_poll_wq_name),
+ "poll_%s%u_smart_poll",
+ LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id);
+ adapter->smart_poll_desc.smart_poll_wq =
+ create_singlethread_workqueue(
+ adapter->smart_poll_desc.smart_poll_wq_name);
+ if (!adapter->smart_poll_desc.smart_poll_wq)
+ return;
+ queue_delayed_work(adapter->smart_poll_desc.smart_poll_wq,
+ &adapter->smart_poll_desc.smart_poll_work,
+ msecs_to_jiffies(LEAPRAID_SMART_POLLING_INTERVAL));
+}
+
+void leapraid_smart_polling_stop(struct leapraid_adapter *adapter)
+{
+ struct workqueue_struct *wq;
+
+ if (!adapter->smart_poll_desc.smart_poll_wq)
+ return;
+
+ wq = adapter->smart_poll_desc.smart_poll_wq;
+ adapter->smart_poll_desc.smart_poll_wq = NULL;
+
+ if (wq) {
+ if (!cancel_delayed_work_sync(&adapter->smart_poll_desc.smart_poll_work))
+ flush_workqueue(wq);
+ destroy_workqueue(wq);
+ }
+}
+
+static void leapraid_fw_work(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt);
+
+static void leapraid_fw_evt_free(struct kref *r)
+{
+ struct leapraid_fw_evt_work *fw_evt;
+
+ fw_evt = container_of(r, struct leapraid_fw_evt_work, refcnt);
+
+ kfree(fw_evt->evt_data);
+ kfree(fw_evt);
+}
+
+static void leapraid_fw_evt_get(struct leapraid_fw_evt_work *fw_evt)
+{
+ kref_get(&fw_evt->refcnt);
+}
+
+static void leapraid_fw_evt_put(struct leapraid_fw_evt_work *fw_work)
+{
+ kref_put(&fw_work->refcnt, leapraid_fw_evt_free);
+}
+
+static struct leapraid_fw_evt_work *leapraid_alloc_fw_evt_work(void)
+{
+ struct leapraid_fw_evt_work *fw_evt =
+ kzalloc(sizeof(*fw_evt), GFP_ATOMIC);
+ if (!fw_evt)
+ return NULL;
+
+ kref_init(&fw_evt->refcnt);
+ return fw_evt;
+}
+
+static void leapraid_run_fw_evt_work(struct work_struct *work)
+{
+ struct leapraid_fw_evt_work *fw_evt =
+ container_of(work, struct leapraid_fw_evt_work, work);
+
+ leapraid_fw_work(fw_evt->adapter, fw_evt);
+}
+
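+/*
+ * Queue a firmware event for deferred handling; one reference is held by
+ * the event list and another by the queued work item.
+ */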
+static void leapraid_fw_evt_add(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ unsigned long flags;
+
+ if (!adapter->fw_evt_s.fw_evt_thread)
+ return;
+
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ leapraid_fw_evt_get(fw_evt);
+ INIT_LIST_HEAD(&fw_evt->list);
+ list_add_tail(&fw_evt->list, &adapter->fw_evt_s.fw_evt_list);
+ INIT_WORK(&fw_evt->work, leapraid_run_fw_evt_work);
+ leapraid_fw_evt_get(fw_evt);
+ queue_work(adapter->fw_evt_s.fw_evt_thread, &fw_evt->work);
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+}
+
+static void leapraid_del_fw_evt_from_list(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ if (!list_empty(&fw_evt->list)) {
+ list_del_init(&fw_evt->list);
+ leapraid_fw_evt_put(fw_evt);
+ }
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+}
+
+static struct leapraid_fw_evt_work *leapraid_next_fw_evt(
+ struct leapraid_adapter *adapter)
+{
+ struct leapraid_fw_evt_work *fw_evt = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ if (!list_empty(&adapter->fw_evt_s.fw_evt_list)) {
+ fw_evt = list_first_entry(&adapter->fw_evt_s.fw_evt_list,
+ struct leapraid_fw_evt_work, list);
+ list_del_init(&fw_evt->list);
+ leapraid_fw_evt_put(fw_evt);
+ }
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+ return fw_evt;
+}
+
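+/*
+ * Drain queued firmware events: cancel their work items and drop the
+ * references taken when they were added to the event list.
+ */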
+void leapraid_clean_active_fw_evt(struct leapraid_adapter *adapter)
+{
+ struct leapraid_fw_evt_work *fw_evt;
+ bool rc = false;
+
+ if ((list_empty(&adapter->fw_evt_s.fw_evt_list) &&
+ !adapter->fw_evt_s.cur_evt) || !adapter->fw_evt_s.fw_evt_thread)
+ return;
+
+ adapter->fw_evt_s.fw_evt_cleanup = 1;
+ if (adapter->access_ctrl.shost_recovering &&
+ adapter->fw_evt_s.cur_evt)
+ adapter->fw_evt_s.cur_evt->ignore = 1;
+
+ while ((fw_evt = leapraid_next_fw_evt(adapter)) ||
+ (fw_evt = adapter->fw_evt_s.cur_evt)) {
+ if (fw_evt == adapter->fw_evt_s.cur_evt &&
+ adapter->fw_evt_s.cur_evt->evt_type !=
+ LEAPRAID_EVT_REMOVE_DEAD_DEV) {
+ adapter->fw_evt_s.cur_evt = NULL;
+ continue;
+ }
+
+ rc = cancel_work_sync(&fw_evt->work);
+
+ if (rc)
+ leapraid_fw_evt_put(fw_evt);
+ }
+ adapter->fw_evt_s.fw_evt_cleanup = 0;
+}
+
+static void leapraid_internal_dev_ublk(struct scsi_device *sdev,
+ struct leapraid_sdev_priv *sdev_priv)
+{
+ int rc = 0;
+
+ sdev_printk(KERN_WARNING, sdev,
+ "hdl 0x%04x: now internal unblkg dev\n",
+ sdev_priv->starget_priv->hdl);
+ sdev_priv->block = false;
+ rc = scsi_internal_device_unblock_nowait(sdev, SDEV_RUNNING);
+ if (rc == -EINVAL) {
+ sdev_printk(KERN_WARNING, sdev,
+ "hdl 0x%04x: unblkg failed, rc=%d\n",
+ sdev_priv->starget_priv->hdl, rc);
+ sdev_priv->block = true;
+ rc = scsi_internal_device_block_nowait(sdev);
+ if (rc)
+ sdev_printk(KERN_WARNING, sdev,
+ "hdl 0x%04x: blkg failed: earlier unblkg err, rc=%d\n",
+ sdev_priv->starget_priv->hdl, rc);
+
+ sdev_priv->block = false;
+ rc = scsi_internal_device_unblock_nowait(sdev, SDEV_RUNNING);
+ if (rc)
+ sdev_printk(KERN_WARNING, sdev,
+ "hdl 0x%04x: ublkg failed again, rc=%d\n",
+ sdev_priv->starget_priv->hdl, rc);
+ }
+}
+
+static void leapraid_internal_ublk_io_dev_to_running(struct scsi_device *sdev)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+
+ sdev_priv = sdev->hostdata;
+ sdev_priv->block = false;
+ scsi_internal_device_unblock_nowait(sdev, SDEV_RUNNING);
+ sdev_printk(KERN_WARNING, sdev, "%s: ublk hdl 0x%04x\n",
+ __func__, sdev_priv->starget_priv->hdl);
+}
+
+static void leapraid_ublk_io_dev_to_running(
+ struct leapraid_adapter *adapter, u64 sas_addr,
+ struct leapraid_card_port *card_port)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_device *sdev;
+
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->sas_address != sas_addr ||
+ sdev_priv->starget_priv->card_port != card_port)
+ continue;
+
+ if (sdev_priv->block)
+ leapraid_internal_ublk_io_dev_to_running(sdev);
+ }
+}
+
+static void leapraid_ublk_io_dev(struct leapraid_adapter *adapter,
+ u64 sas_addr,
+ struct leapraid_card_port *card_port)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_device *sdev;
+
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv || !sdev_priv->starget_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->sas_address != sas_addr)
+ continue;
+
+ if (sdev_priv->starget_priv->card_port != card_port)
+ continue;
+
+ if (sdev_priv->block)
+ leapraid_internal_dev_ublk(sdev, sdev_priv);
+
+ scsi_device_set_state(sdev, SDEV_OFFLINE);
+ }
+}
+
+static void leapraid_ublk_io_all_dev(struct leapraid_adapter *adapter)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_starget_priv *stgt_priv;
+ struct scsi_device *sdev;
+
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+
+ if (!sdev_priv)
+ continue;
+
+ stgt_priv = sdev_priv->starget_priv;
+ if (!stgt_priv || stgt_priv->deleted)
+ continue;
+
+ if (!sdev_priv->block)
+ continue;
+
+		sdev_printk(KERN_WARNING, sdev, "hdl 0x%04x: ublkg...\n",
+			    sdev_priv->starget_priv->hdl);
+		leapraid_internal_dev_ublk(sdev, sdev_priv);
+ }
+}
+
+static void __maybe_unused leapraid_internal_dev_blk(
+ struct scsi_device *sdev,
+ struct leapraid_sdev_priv *sdev_priv)
+{
+ int rc = 0;
+
+ sdev_printk(KERN_INFO, sdev, "internal blkg hdl 0x%04x\n",
+ sdev_priv->starget_priv->hdl);
+ sdev_priv->block = true;
+ rc = scsi_internal_device_block_nowait(sdev);
+	if (rc == -EINVAL)
+		sdev_printk(KERN_WARNING, sdev,
+			    "hdl 0x%04x: blkg failed, rc=%d\n",
+			    sdev_priv->starget_priv->hdl, rc);
+}
+
+static void __maybe_unused leapraid_blkio_dev(struct leapraid_adapter *adapter,
+ u16 hdl)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_device *sdev;
+
+ sas_dev = leapraid_get_sas_dev_by_hdl(adapter, hdl);
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->hdl != hdl)
+ continue;
+
+ if (sdev_priv->block)
+ continue;
+
+ if (sas_dev && sas_dev->pend_sas_rphy_add)
+ continue;
+
+ if (sdev_priv->sep) {
+ sdev_printk(KERN_INFO, sdev,
+ "sep hdl 0x%04x skip blkg\n",
+ sdev_priv->starget_priv->hdl);
+ continue;
+ }
+
+ leapraid_internal_dev_blk(sdev, sdev_priv);
+ }
+
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+}
+
+static void leapraid_imm_blkio_to_end_dev(struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_device *sdev;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(
+ adapter,
+ sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+
+ if (sas_dev) {
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->hdl != sas_dev->hdl)
+ continue;
+
+ if (sdev_priv->block)
+ continue;
+
+			if (sas_dev->pend_sas_rphy_add)
+				continue;
+
+ if (sdev_priv->sep) {
+ sdev_printk(KERN_INFO, sdev,
+ "%s skip dev blk for sep hdl 0x%04x\n",
+ __func__,
+ sdev_priv->starget_priv->hdl);
+ continue;
+ }
+
+ leapraid_internal_dev_blk(sdev, sdev_priv);
+ }
+
+ leapraid_sdev_put(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+}
+
+static void leapraid_imm_blkio_set_end_dev_blk_hdls(
+ struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp)
+{
+ struct leapraid_sas_port *sas_port;
+
+ list_for_each_entry(sas_port,
+ &topo_node_exp->sas_port_list, port_list) {
+ if (sas_port->remote_identify.device_type ==
+ SAS_END_DEVICE) {
+ leapraid_imm_blkio_to_end_dev(adapter, sas_port);
+ }
+ }
+}
+
+static void leapraid_imm_blkio_to_kids_attchd_to_ex(
+ struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp);
+
+static void leapraid_imm_blkio_to_sib_exp(
+ struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp)
+{
+ struct leapraid_topo_node *topo_node_exp_sib;
+ struct leapraid_sas_port *sas_port;
+
+ list_for_each_entry(sas_port,
+ &topo_node_exp->sas_port_list, port_list) {
+ if (sas_port->remote_identify.device_type ==
+ SAS_EDGE_EXPANDER_DEVICE ||
+ sas_port->remote_identify.device_type ==
+ SAS_FANOUT_EXPANDER_DEVICE) {
+ topo_node_exp_sib =
+ leapraid_exp_find_by_sas_address(
+ adapter,
+ sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+ leapraid_imm_blkio_to_kids_attchd_to_ex(
+ adapter,
+ topo_node_exp_sib);
+ }
+ }
+}
+
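+/*
+ * Recursively block I/O to every end device attached below an expander,
+ * including devices behind child expanders.
+ */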
+static void leapraid_imm_blkio_to_kids_attchd_to_ex(
+ struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp)
+{
+ if (!topo_node_exp)
+ return;
+
+ leapraid_imm_blkio_set_end_dev_blk_hdls(adapter, topo_node_exp);
+
+ leapraid_imm_blkio_to_sib_exp(adapter, topo_node_exp);
+}
+
+static void leapraid_report_sdev_directly(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev)
+{
+ struct leapraid_sas_port *sas_port;
+
+ sas_port = leapraid_transport_port_add(adapter,
+ sas_dev->hdl,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+ if (!sas_port) {
+ leapraid_sas_dev_remove(adapter, sas_dev);
+ return;
+ }
+
+ if (!sas_dev->starget) {
+ if (!adapter->scan_dev_desc.driver_loading) {
+ leapraid_transport_port_remove(adapter,
+ sas_dev->sas_addr,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+ leapraid_sas_dev_remove(adapter, sas_dev);
+ }
+ return;
+ }
+
+ clear_bit(sas_dev->hdl,
+ (unsigned long *)adapter->dev_topo.pending_dev_add);
+}
+
+static struct leapraid_sas_dev *leapraid_init_sas_dev(
+ struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev_p0 *sas_dev_pg0,
+ struct leapraid_card_port *card_port, u16 hdl,
+ u64 parent_sas_addr, u64 sas_addr, u32 dev_info)
+{
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_enc_node *enc_dev;
+
+ sas_dev = kzalloc(sizeof(*sas_dev), GFP_KERNEL);
+ if (!sas_dev)
+ return NULL;
+
+ kref_init(&sas_dev->refcnt);
+ sas_dev->hdl = hdl;
+ sas_dev->dev_info = dev_info;
+ sas_dev->sas_addr = sas_addr;
+ sas_dev->card_port = card_port;
+ sas_dev->parent_sas_addr = parent_sas_addr;
+ sas_dev->phy = sas_dev_pg0->phy_num;
+ sas_dev->enc_hdl = le16_to_cpu(sas_dev_pg0->enc_hdl);
+ sas_dev->dev_name = le64_to_cpu(sas_dev_pg0->dev_name);
+ sas_dev->port_type = sas_dev_pg0->max_port_connections;
+ sas_dev->slot = sas_dev->enc_hdl ? le16_to_cpu(sas_dev_pg0->slot) : 0;
+ sas_dev->support_smart = (le16_to_cpu(sas_dev_pg0->flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_SATA_SMART);
+ if (le16_to_cpu(sas_dev_pg0->flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_ENC_LEVEL_VALID) {
+ sas_dev->enc_level = sas_dev_pg0->enc_level;
+ memcpy(sas_dev->connector_name, sas_dev_pg0->connector_name, 4);
+ sas_dev->connector_name[4] = '\0';
+ } else {
+ sas_dev->enc_level = 0;
+ sas_dev->connector_name[0] = '\0';
+ }
+ if (le16_to_cpu(sas_dev_pg0->enc_hdl)) {
+ enc_dev = leapraid_enc_find_by_hdl(adapter,
+ le16_to_cpu(sas_dev_pg0->enc_hdl));
+ sas_dev->enc_lid = enc_dev ?
+ le64_to_cpu(enc_dev->pg0.enc_lid) : 0;
+ }
+	dev_info(&adapter->pdev->dev,
+		 "add dev: hdl=0x%04x, sas addr=0x%016llx, port_type=0x%x\n",
+		 hdl, (unsigned long long)sas_dev->sas_addr, sas_dev->port_type);
+
+ return sas_dev;
+}
+
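+/*
+ * Handle a newly reported end device: read SAS Device Page 0, allocate a
+ * leapraid_sas_dev and either park it on the init list during the initial
+ * scan or report it to the SCSI midlayer right away.
+ */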
+static void leapraid_add_dev(struct leapraid_adapter *adapter, u16 hdl)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_pg0;
+ struct leapraid_card_port *card_port;
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ u64 parent_sas_addr;
+ u32 dev_info;
+ u64 sas_addr;
+ u8 port_id;
+
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = hdl;
+ if ((leapraid_op_config_page(adapter, &sas_dev_pg0,
+ cfgp1, cfgp2, GET_SAS_DEVICE_PG0)))
+ return;
+
+ dev_info = le32_to_cpu(sas_dev_pg0.dev_info);
+ if (!(leapraid_is_end_dev(dev_info)))
+ return;
+
+ set_bit(hdl, (unsigned long *)adapter->dev_topo.pending_dev_add);
+ sas_addr = le64_to_cpu(sas_dev_pg0.sas_address);
+ if (!(le16_to_cpu(sas_dev_pg0.flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_DEV_PRESENT))
+ return;
+
+ port_id = sas_dev_pg0.physical_port;
+ card_port = leapraid_get_port_by_id(adapter, port_id, false);
+ if (!card_port)
+ return;
+
+ sas_dev = leapraid_get_sas_dev_by_addr(adapter, sas_addr, card_port);
+ if (sas_dev) {
+ clear_bit(hdl,
+ (unsigned long *)adapter->dev_topo.pending_dev_add);
+ leapraid_sdev_put(sas_dev);
+ return;
+ }
+
+ if (leapraid_get_sas_address(adapter,
+ le16_to_cpu(sas_dev_pg0.parent_dev_hdl),
+ &parent_sas_addr))
+ return;
+
+ sas_dev = leapraid_init_sas_dev(adapter, &sas_dev_pg0, card_port,
+ hdl, parent_sas_addr, sas_addr,
+ dev_info);
+ if (!sas_dev)
+ return;
+ if (adapter->scan_dev_desc.wait_scan_dev_done) {
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ leapraid_sdev_get(sas_dev);
+ list_add_tail(&sas_dev->list,
+ &adapter->dev_topo.sas_dev_init_list);
+ leapraid_check_boot_dev(adapter, sas_dev, 0);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ } else {
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ leapraid_sdev_get(sas_dev);
+ list_add_tail(&sas_dev->list, &adapter->dev_topo.sas_dev_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ leapraid_report_sdev_directly(adapter, sas_dev);
+ }
+}
+
+static void leapraid_remove_device(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev)
+{
+ struct leapraid_starget_priv *starget_priv;
+
+ if (sas_dev->led_on) {
+ leapraid_set_led(adapter, sas_dev, false);
+ sas_dev->led_on = false;
+ }
+
+ if (sas_dev->starget && sas_dev->starget->hostdata) {
+ starget_priv = sas_dev->starget->hostdata;
+ starget_priv->deleted = true;
+ leapraid_ublk_io_dev(adapter,
+ sas_dev->sas_addr, sas_dev->card_port);
+ starget_priv->hdl = LEAPRAID_INVALID_DEV_HANDLE;
+ }
+
+ leapraid_transport_port_remove(adapter,
+ sas_dev->sas_addr,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+
+ dev_info(&adapter->pdev->dev,
+ "remove dev: hdl=0x%04x, sas addr=0x%016llx\n",
+ sas_dev->hdl, (unsigned long long)sas_dev->sas_addr);
+}
+
+static struct leapraid_vphy *leapraid_alloc_vphy(struct leapraid_adapter *adapter,
+ u8 port_id, u8 phy_num)
+{
+ struct leapraid_card_port *port;
+ struct leapraid_vphy *vphy;
+
+ port = leapraid_get_port_by_id(adapter, port_id, false);
+ if (!port)
+ return NULL;
+
+ vphy = leapraid_get_vphy_by_phy(port, phy_num);
+ if (vphy)
+ return vphy;
+
+ vphy = kzalloc(sizeof(*vphy), GFP_KERNEL);
+ if (!vphy)
+ return NULL;
+
+ if (!port->vphys_mask)
+ INIT_LIST_HEAD(&port->vphys_list);
+
+ port->vphys_mask |= BIT(phy_num);
+ vphy->phy_mask |= BIT(phy_num);
+ list_add_tail(&vphy->list, &port->vphys_list);
+ return vphy;
+}
+
+static int leapraid_add_port_to_card_port_list(struct leapraid_adapter *adapter,
+ u8 port_id, bool refresh)
+{
+ struct leapraid_card_port *card_port;
+
+ card_port = leapraid_get_port_by_id(adapter, port_id, false);
+ if (card_port)
+ return 0;
+
+ card_port = kzalloc(sizeof(*card_port), GFP_KERNEL);
+ if (!card_port)
+ return -ENOMEM;
+
+ card_port->port_id = port_id;
+ dev_info(&adapter->pdev->dev,
+ "port: %d is added to card_port list\n",
+ card_port->port_id);
+
+	if (refresh && adapter->access_ctrl.shost_recovering)
+		card_port->flg = LEAPRAID_CARD_PORT_FLG_NEW;
+
+	list_add_tail(&card_port->list, &adapter->dev_topo.card_port_list);
+ return 0;
+}
+
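+/*
+ * Build (refresh == false) or refresh (refresh == true) the adapter's own
+ * phy and port topology from SAS IO Unit Page 0.
+ */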
+static void leapraid_sas_host_add(struct leapraid_adapter *adapter,
+ bool refresh)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_phy_p0 phy_pg0;
+ struct leapraid_sas_dev_p0 sas_dev_pg0;
+ struct leapraid_enc_p0 enc_pg0;
+ struct leapraid_sas_io_unit_p0 *sas_iou_pg0;
+ u16 sas_iou_pg0_sz;
+ u16 attached_hdl;
+ u8 phys_num;
+ u8 port_id;
+ u8 link_rate;
+ int i;
+
+ if (!refresh) {
+ if (leapraid_get_adapter_phys(adapter, &phys_num) || !phys_num)
+ return;
+
+ adapter->dev_topo.card.card_phy =
+ kcalloc(phys_num,
+ sizeof(struct leapraid_card_phy), GFP_KERNEL);
+ if (!adapter->dev_topo.card.card_phy)
+ return;
+
+ adapter->dev_topo.card.phys_num = phys_num;
+ }
+
+ sas_iou_pg0_sz = offsetof(struct leapraid_sas_io_unit_p0, phy_info) +
+ (adapter->dev_topo.card.phys_num *
+ sizeof(struct leapraid_sas_io_unit0_phy_info));
+ sas_iou_pg0 = kzalloc(sas_iou_pg0_sz, GFP_KERNEL);
+ if (!sas_iou_pg0)
+ goto out;
+
+ if (leapraid_get_sas_io_unit_page0(adapter,
+ sas_iou_pg0,
+ sas_iou_pg0_sz))
+ goto out;
+
+ adapter->dev_topo.card.parent_dev = &adapter->shost->shost_gendev;
+ adapter->dev_topo.card.hdl =
+ le16_to_cpu(sas_iou_pg0->phy_info[0].controller_dev_hdl);
+ for (i = 0; i < adapter->dev_topo.card.phys_num; i++) {
+ if (!refresh) { /* add */
+ cfgp1.phy_number = i;
+ if (leapraid_op_config_page(adapter, &phy_pg0, cfgp1,
+ cfgp2, GET_PHY_PG0))
+ goto out;
+
+ port_id = sas_iou_pg0->phy_info[i].port;
+ if (leapraid_add_port_to_card_port_list(adapter,
+ port_id,
+ false))
+ goto out;
+
+ if ((le32_to_cpu(phy_pg0.phy_info) &
+ LEAPRAID_SAS_PHYINFO_VPHY) &&
+ (phy_pg0.neg_link_rate >> 4) >=
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5) {
+ if (!leapraid_alloc_vphy(adapter, port_id, i))
+ goto out;
+ adapter->dev_topo.card.card_phy[i].vphy = true;
+ }
+
+ adapter->dev_topo.card.card_phy[i].hdl =
+ adapter->dev_topo.card.hdl;
+ adapter->dev_topo.card.card_phy[i].phy_id = i;
+ adapter->dev_topo.card.card_phy[i].card_port =
+ leapraid_get_port_by_id(adapter,
+ port_id,
+ false);
+ leapraid_transport_add_card_phy(
+ adapter,
+ &adapter->dev_topo.card.card_phy[i],
+ &phy_pg0, adapter->dev_topo.card.parent_dev);
+ } else { /* refresh */
+ link_rate = sas_iou_pg0->phy_info[i].neg_link_rate >> 4;
+ port_id = sas_iou_pg0->phy_info[i].port;
+ if (leapraid_add_port_to_card_port_list(adapter,
+ port_id,
+ true))
+ goto out;
+
+ if (le32_to_cpu(sas_iou_pg0->phy_info[i]
+ .controller_phy_dev_info) &
+ LEAPRAID_DEVTYP_SEP &&
+ link_rate >= LEAPRAID_SAS_NEG_LINK_RATE_1_5) {
+ cfgp1.phy_number = i;
+ if ((leapraid_op_config_page(adapter, &phy_pg0,
+ cfgp1, cfgp2,
+ GET_PHY_PG0)))
+ continue;
+
+ if ((le32_to_cpu(phy_pg0.phy_info) &
+ LEAPRAID_SAS_PHYINFO_VPHY)) {
+ if (!leapraid_alloc_vphy(adapter,
+ port_id,
+ i))
+ goto out;
+ adapter->dev_topo.card.card_phy[i].vphy = true;
+ }
+ }
+
+ adapter->dev_topo.card.card_phy[i].hdl =
+ adapter->dev_topo.card.hdl;
+ attached_hdl =
+ le16_to_cpu(sas_iou_pg0->phy_info[i].attached_dev_hdl);
+ if (attached_hdl && link_rate < LEAPRAID_SAS_NEG_LINK_RATE_1_5)
+ link_rate = LEAPRAID_SAS_NEG_LINK_RATE_1_5;
+
+ adapter->dev_topo.card.card_phy[i].card_port =
+ leapraid_get_port_by_id(adapter,
+ port_id,
+ false);
+ if (!adapter->dev_topo.card.card_phy[i].phy) {
+ cfgp1.phy_number = i;
+ if ((leapraid_op_config_page(adapter, &phy_pg0,
+ cfgp1, cfgp2,
+ GET_PHY_PG0)))
+ continue;
+
+ adapter->dev_topo.card.card_phy[i].phy_id = i;
+ leapraid_transport_add_card_phy(adapter,
+ &adapter->dev_topo.card.card_phy[i],
+ &phy_pg0,
+ adapter->dev_topo.card.parent_dev);
+ continue;
+ }
+
+ leapraid_transport_update_links(adapter,
+ adapter->dev_topo.card.sas_address,
+ attached_hdl, i, link_rate,
+ adapter->dev_topo.card.card_phy[i].card_port);
+ }
+ }
+
+ if (!refresh) {
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = adapter->dev_topo.card.hdl;
+ if ((leapraid_op_config_page(adapter, &sas_dev_pg0, cfgp1,
+ cfgp2, GET_SAS_DEVICE_PG0)))
+ goto out;
+
+ adapter->dev_topo.card.enc_hdl =
+ le16_to_cpu(sas_dev_pg0.enc_hdl);
+ adapter->dev_topo.card.sas_address =
+ le64_to_cpu(sas_dev_pg0.sas_address);
+ dev_info(&adapter->pdev->dev,
+ "add host: devhdl=0x%04x, sas addr=0x%016llx, phynums=%d\n",
+ adapter->dev_topo.card.hdl,
+ (unsigned long long)adapter->dev_topo.card.sas_address,
+ adapter->dev_topo.card.phys_num);
+
+ if (adapter->dev_topo.card.enc_hdl) {
+ cfgp1.form = LEAPRAID_SAS_ENC_CFG_PGAD_HDL;
+ cfgp2.handle = adapter->dev_topo.card.enc_hdl;
+ if (!(leapraid_op_config_page(adapter, &enc_pg0,
+ cfgp1, cfgp2,
+ GET_SAS_ENCLOSURE_PG0)))
+ adapter->dev_topo.card.enc_lid =
+ le64_to_cpu(enc_pg0.enc_lid);
+ }
+ }
+out:
+ kfree(sas_iou_pg0);
+}
+
+static int leapraid_internal_exp_add(struct leapraid_adapter *adapter,
+ struct leapraid_exp_p0 *exp_pg0,
+ union cfg_param_1 *cfgp1,
+ union cfg_param_2 *cfgp2,
+ u16 hdl)
+{
+ struct leapraid_topo_node *topo_node_exp;
+ struct leapraid_sas_port *sas_port = NULL;
+ struct leapraid_enc_node *enc_dev;
+ struct leapraid_exp_p1 exp_pg1;
+ int rc = 0;
+ unsigned long flags;
+ u8 port_id;
+ u16 parent_handle;
+ u64 sas_addr_parent = 0;
+ int i;
+
+ port_id = exp_pg0->physical_port;
+ parent_handle = le16_to_cpu(exp_pg0->parent_dev_hdl);
+
+ if (leapraid_get_sas_address(adapter, parent_handle, &sas_addr_parent))
+ return -1;
+
+ topo_node_exp = kzalloc(sizeof(*topo_node_exp), GFP_KERNEL);
+ if (!topo_node_exp)
+ return -1;
+
+ topo_node_exp->hdl = hdl;
+ topo_node_exp->phys_num = exp_pg0->phy_num;
+ topo_node_exp->sas_address_parent = sas_addr_parent;
+ topo_node_exp->sas_address = le64_to_cpu(exp_pg0->sas_address);
+ topo_node_exp->card_port =
+ leapraid_get_port_by_id(adapter, port_id, false);
+ if (!topo_node_exp->card_port) {
+ rc = -1;
+ goto out_fail;
+ }
+
+ dev_info(&adapter->pdev->dev,
+ "add exp: sas addr=0x%016llx, hdl=0x%04x, phdl=0x%04x, phys=%d\n",
+ (unsigned long long)topo_node_exp->sas_address,
+ hdl, parent_handle,
+ topo_node_exp->phys_num);
+ if (!topo_node_exp->phys_num) {
+ rc = -1;
+ goto out_fail;
+ }
+
+ topo_node_exp->card_phy =
+ kcalloc(topo_node_exp->phys_num,
+ sizeof(struct leapraid_card_phy), GFP_KERNEL);
+ if (!topo_node_exp->card_phy) {
+ rc = -1;
+ goto out_fail;
+ }
+
+ INIT_LIST_HEAD(&topo_node_exp->sas_port_list);
+ sas_port = leapraid_transport_port_add(adapter, hdl, sas_addr_parent,
+ topo_node_exp->card_port);
+ if (!sas_port) {
+ rc = -1;
+ goto out_fail;
+ }
+
+ topo_node_exp->parent_dev = &sas_port->rphy->dev;
+ topo_node_exp->rphy = sas_port->rphy;
+ for (i = 0; i < topo_node_exp->phys_num; i++) {
+ cfgp1->phy_number = i;
+ cfgp2->handle = hdl;
+ if ((leapraid_op_config_page(adapter, &exp_pg1, *cfgp1, *cfgp2,
+ GET_SAS_EXPANDER_PG1))) {
+ rc = -1;
+ goto out_fail;
+ }
+
+ topo_node_exp->card_phy[i].hdl = hdl;
+ topo_node_exp->card_phy[i].phy_id = i;
+ topo_node_exp->card_phy[i].card_port =
+ leapraid_get_port_by_id(adapter, port_id, false);
+ if ((leapraid_transport_add_exp_phy(adapter,
+ &topo_node_exp->card_phy[i],
+ &exp_pg1,
+ topo_node_exp->parent_dev))) {
+ rc = -1;
+ goto out_fail;
+ }
+ }
+
+ if (topo_node_exp->enc_hdl) {
+ enc_dev = leapraid_enc_find_by_hdl(adapter,
+ topo_node_exp->enc_hdl);
+ if (enc_dev)
+ topo_node_exp->enc_lid =
+ le64_to_cpu(enc_dev->pg0.enc_lid);
+ }
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_add_tail(&topo_node_exp->list, &adapter->dev_topo.exp_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+ return 0;
+
+out_fail:
+ if (sas_port)
+ leapraid_transport_port_remove(adapter,
+ topo_node_exp->sas_address,
+ sas_addr_parent,
+ topo_node_exp->card_port);
+ kfree(topo_node_exp);
+ return rc;
+}
+
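+/*
+ * Add an expander by handle, recursively adding its parent expander first
+ * when that parent is not the adapter itself and is not known yet.
+ */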
+static int leapraid_exp_add(struct leapraid_adapter *adapter, u16 hdl)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_topo_node *topo_node_exp;
+ struct leapraid_exp_p0 exp_pg0;
+ u16 parent_handle;
+ u64 sas_addr, sas_addr_parent = 0;
+ unsigned long flags;
+ u8 port_id;
+ int rc = 0;
+
+ if (!hdl)
+ return -EPERM;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering)
+ return -EPERM;
+
+ cfgp1.form = LEAPRAID_SAS_EXP_CFD_PGAD_HDL;
+ cfgp2.handle = hdl;
+ if ((leapraid_op_config_page(adapter, &exp_pg0, cfgp1, cfgp2,
+ GET_SAS_EXPANDER_PG0)))
+ return -EPERM;
+
+ parent_handle = le16_to_cpu(exp_pg0.parent_dev_hdl);
+ if (leapraid_get_sas_address(adapter, parent_handle, &sas_addr_parent))
+ return -EPERM;
+
+ port_id = exp_pg0.physical_port;
+ if (sas_addr_parent != adapter->dev_topo.card.sas_address) {
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node_exp =
+ leapraid_exp_find_by_sas_address(adapter,
+ sas_addr_parent,
+ leapraid_get_port_by_id(adapter, port_id, false));
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+ if (!topo_node_exp) {
+ rc = leapraid_exp_add(adapter, parent_handle);
+ if (rc != 0)
+ return rc;
+ }
+ }
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ sas_addr = le64_to_cpu(exp_pg0.sas_address);
+ topo_node_exp =
+ leapraid_exp_find_by_sas_address(adapter, sas_addr,
+ leapraid_get_port_by_id(adapter, port_id, false));
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ if (topo_node_exp)
+ return 0;
+
+ return leapraid_internal_exp_add(adapter, &exp_pg0, &cfgp1,
+ &cfgp2, hdl);
+}
+
+static void leapraid_exp_node_rm(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp)
+{
+ struct leapraid_sas_port *sas_port, *sas_port_next;
+ unsigned long flags;
+ int port_id;
+
+ list_for_each_entry_safe(sas_port, sas_port_next,
+ &topo_node_exp->sas_port_list,
+ port_list) {
+ if (adapter->access_ctrl.shost_recovering)
+ return;
+
+ switch (sas_port->remote_identify.device_type) {
+ case SAS_END_DEVICE:
+ leapraid_sas_dev_remove_by_sas_address(
+ adapter,
+ sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+ break;
+ case SAS_EDGE_EXPANDER_DEVICE:
+ case SAS_FANOUT_EXPANDER_DEVICE:
+ leapraid_exp_rm(
+ adapter,
+ sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+ break;
+ default:
+ break;
+ }
+ }
+
+ port_id = topo_node_exp->card_port->port_id;
+ leapraid_transport_port_remove(adapter, topo_node_exp->sas_address,
+ topo_node_exp->sas_address_parent,
+ topo_node_exp->card_port);
+ dev_info(&adapter->pdev->dev,
+ "removing exp: port=%d, sas addr=0x%016llx, hdl=0x%04x\n",
+ port_id, (unsigned long long)topo_node_exp->sas_address,
+ topo_node_exp->hdl);
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_del(&topo_node_exp->list);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+ kfree(topo_node_exp->card_phy);
+ kfree(topo_node_exp);
+}
+
+void leapraid_exp_rm(struct leapraid_adapter *adapter, u64 sas_addr,
+ struct leapraid_card_port *port)
+{
+ struct leapraid_topo_node *topo_node_exp;
+ unsigned long flags;
+
+ if (adapter->access_ctrl.shost_recovering)
+ return;
+
+ if (!port)
+ return;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node_exp = leapraid_exp_find_by_sas_address(adapter,
+ sas_addr,
+ port);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ if (topo_node_exp)
+ leapraid_exp_node_rm(adapter, topo_node_exp);
+}
+
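+/*
+ * Revalidate an end device after a link change: if the firmware handle has
+ * changed, refresh the cached handle and enclosure data, then unblock I/O
+ * if the device is still present.
+ */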
+static void leapraid_check_device(struct leapraid_adapter *adapter,
+ u64 parent_sas_address, u16 handle,
+ u8 phy_number, u8 link_rate)
+{
+ struct leapraid_sas_dev_p0 sas_device_pg0;
+ struct leapraid_sas_dev *sas_dev = NULL;
+ struct leapraid_enc_node *enclosure_dev = NULL;
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ unsigned long flags;
+ u64 sas_address;
+ struct scsi_target *starget;
+ struct leapraid_starget_priv *sas_target_priv_data;
+ u32 device_info;
+ struct leapraid_card_port *port;
+
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = handle;
+ if ((leapraid_op_config_page(adapter, &sas_device_pg0, cfgp1, cfgp2,
+ GET_SAS_DEVICE_PG0)))
+ return;
+
+ if (phy_number != sas_device_pg0.phy_num)
+ return;
+
+ device_info = le32_to_cpu(sas_device_pg0.dev_info);
+ if (!(leapraid_is_end_dev(device_info)))
+ return;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_address = le64_to_cpu(sas_device_pg0.sas_address);
+ port = leapraid_get_port_by_id(adapter, sas_device_pg0.physical_port,
+ false);
+ if (!port)
+ goto out_unlock;
+
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter, sas_address,
+ port);
+ if (!sas_dev)
+ goto out_unlock;
+
+ if (unlikely(sas_dev->hdl != handle)) {
+ starget = sas_dev->starget;
+ sas_target_priv_data = starget->hostdata;
+ starget_printk(KERN_INFO, starget,
+ "hdl changed from 0x%04x to 0x%04x!\n",
+ sas_dev->hdl, handle);
+ sas_target_priv_data->hdl = handle;
+ sas_dev->hdl = handle;
+ if (le16_to_cpu(sas_device_pg0.flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_ENC_LEVEL_VALID) {
+ sas_dev->enc_level =
+ sas_device_pg0.enc_level;
+ memcpy(sas_dev->connector_name,
+ sas_device_pg0.connector_name, 4);
+ sas_dev->connector_name[4] = '\0';
+ } else {
+ sas_dev->enc_level = 0;
+ sas_dev->connector_name[0] = '\0';
+ }
+ sas_dev->enc_hdl =
+ le16_to_cpu(sas_device_pg0.enc_hdl);
+ enclosure_dev =
+ leapraid_enc_find_by_hdl(adapter, sas_dev->enc_hdl);
+ if (enclosure_dev) {
+ sas_dev->enc_lid =
+ le64_to_cpu(enclosure_dev->pg0.enc_lid);
+ }
+ }
+
+ if (!(le16_to_cpu(sas_device_pg0.flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_DEV_PRESENT))
+ goto out_unlock;
+
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ leapraid_ublk_io_dev_to_running(adapter, sas_address, port);
+ goto out;
+
+out_unlock:
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+out:
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+}
+
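+/*
+ * Walk the phy entries of a SAS topology change list event and add, update
+ * or remove attached devices according to each entry's reason code.
+ */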
+static int leapraid_internal_sas_topo_chg_evt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_card_port *card_port,
+ struct leapraid_topo_node *topo_node_exp,
+ struct leapraid_fw_evt_work *fw_evt,
+ u64 sas_addr, u8 max_phys)
+{
+ struct leapraid_evt_data_sas_topo_change_list *evt_data;
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ u8 phy_number;
+ u8 link_rate, prev_link_rate;
+ u16 reason_code;
+ u16 hdl;
+ int i;
+
+ evt_data = fw_evt->evt_data;
+ for (i = 0; i < evt_data->entry_num; i++) {
+ if (fw_evt->ignore)
+ return 0;
+
+ if (adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering)
+ return 0;
+
+ phy_number = evt_data->start_phy_num + i;
+ if (phy_number >= max_phys)
+ continue;
+
+ reason_code = evt_data->phy[i].phy_status &
+ LEAPRAID_EVT_SAS_TOPO_RC_MASK;
+
+ hdl = le16_to_cpu(evt_data->phy[i].attached_dev_hdl);
+ if (!hdl)
+ continue;
+
+ link_rate = evt_data->phy[i].link_rate >> 4;
+ prev_link_rate = evt_data->phy[i].link_rate & 0xF;
+ switch (reason_code) {
+ case LEAPRAID_EVT_SAS_TOPO_RC_PHY_CHANGED:
+ if (adapter->access_ctrl.shost_recovering)
+ break;
+
+ if (link_rate == prev_link_rate)
+ break;
+
+ leapraid_transport_update_links(adapter, sas_addr,
+ hdl, phy_number,
+ link_rate, card_port);
+ if (link_rate < LEAPRAID_SAS_NEG_LINK_RATE_1_5)
+ break;
+
+ leapraid_check_device(adapter, sas_addr, hdl,
+ phy_number, link_rate);
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock,
+ flags);
+ sas_dev =
+ leapraid_hold_lock_get_sas_dev_by_hdl(
+ adapter, hdl);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock,
+ flags);
+ if (sas_dev) {
+ leapraid_sdev_put(sas_dev);
+ break;
+ }
+ if (!test_bit(hdl, (unsigned long *)adapter->dev_topo.pending_dev_add))
+ break;
+
+ evt_data->phy[i].phy_status &=
+ LEAPRAID_EVT_SAS_TOPO_RC_CLEAR_MASK;
+ evt_data->phy[i].phy_status |=
+ LEAPRAID_EVT_SAS_TOPO_RC_TARG_ADDED;
+ fallthrough;
+
+ case LEAPRAID_EVT_SAS_TOPO_RC_TARG_ADDED:
+ if (adapter->access_ctrl.shost_recovering)
+ break;
+ leapraid_transport_update_links(adapter, sas_addr,
+ hdl, phy_number,
+ link_rate, card_port);
+ if (link_rate < LEAPRAID_SAS_NEG_LINK_RATE_1_5)
+ break;
+ leapraid_add_dev(adapter, hdl);
+ break;
+ case LEAPRAID_EVT_SAS_TOPO_RC_TARG_NOT_RESPONDING:
+ leapraid_sas_dev_remove_by_hdl(adapter, hdl);
+ break;
+ }
+ }
+
+ if (evt_data->exp_status == LEAPRAID_EVT_SAS_TOPO_ES_NOT_RESPONDING &&
+ topo_node_exp)
+ leapraid_exp_rm(adapter, sas_addr, card_port);
+
+ return 0;
+}
+
+static int leapraid_sas_topo_chg_evt(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ struct leapraid_topo_node *topo_node_exp;
+ struct leapraid_card_port *card_port;
+ struct leapraid_evt_data_sas_topo_change_list *evt_data;
+ u16 phdl;
+ u8 max_phys;
+ u64 sas_addr;
+ unsigned long flags;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering)
+ return 0;
+
+ evt_data = fw_evt->evt_data;
+ leapraid_sas_host_add(adapter, adapter->dev_topo.card.phys_num);
+
+ if (fw_evt->ignore)
+ return 0;
+
+ phdl = le16_to_cpu(evt_data->exp_dev_hdl);
+ card_port = leapraid_get_port_by_id(adapter,
+ evt_data->physical_port,
+ false);
+ if (evt_data->exp_status == LEAPRAID_EVT_SAS_TOPO_ES_ADDED)
+ if (leapraid_exp_add(adapter, phdl) != 0)
+ return 0;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node_exp = leapraid_exp_find_by_hdl(adapter, phdl);
+ if (topo_node_exp) {
+ sas_addr = topo_node_exp->sas_address;
+ max_phys = topo_node_exp->phys_num;
+ card_port = topo_node_exp->card_port;
+ } else if (phdl < adapter->dev_topo.card.phys_num) {
+ sas_addr = adapter->dev_topo.card.sas_address;
+ max_phys = adapter->dev_topo.card.phys_num;
+ } else {
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock,
+ flags);
+ return 0;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ return leapraid_internal_sas_topo_chg_evt(adapter, card_port,
+ topo_node_exp, fw_evt,
+ sas_addr, max_phys);
+}
+
+static void leapraid_reprobe_lun(struct scsi_device *sdev, void *no_uld_attach)
+{
+ sdev->no_uld_attach = no_uld_attach ? 1 : 0;
+ sdev_printk(KERN_INFO, sdev,
+ "%s raid component to upper layer\n",
+ sdev->no_uld_attach ? "hide" : "expose");
+ WARN_ON(scsi_device_reprobe(sdev));
+}
+
+static void leapraid_sas_pd_add(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+ struct leapraid_sas_dev *sas_dev;
+ u64 sas_address;
+ u16 parent_hdl;
+ u16 hdl;
+
+ hdl = le16_to_cpu(evt_data->phys_disk_dev_hdl);
+ set_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls);
+ sas_dev = leapraid_get_sas_dev_by_hdl(adapter, hdl);
+ if (sas_dev) {
+ leapraid_sdev_put(sas_dev);
+ dev_warn(&adapter->pdev->dev,
+ "dev handle 0x%x already exists\n", hdl);
+ return;
+ }
+
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = hdl;
+ if ((leapraid_op_config_page(adapter, &sas_dev_p0, cfgp1, cfgp2,
+ GET_SAS_DEVICE_PG0))) {
+ dev_warn(&adapter->pdev->dev, "failed to read dev page0\n");
+ return;
+ }
+
+ parent_hdl = le16_to_cpu(sas_dev_p0.parent_dev_hdl);
+ if (!leapraid_get_sas_address(adapter, parent_hdl, &sas_address))
+ leapraid_transport_update_links(adapter, sas_address, hdl,
+ sas_dev_p0.phy_num,
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5,
+ leapraid_get_port_by_id(adapter,
+ sas_dev_p0.physical_port,
+ false));
+ leapraid_add_dev(adapter, hdl);
+}
+
+static void leapraid_sas_pd_delete(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ u16 hdl;
+
+ hdl = le16_to_cpu(evt_data->phys_disk_dev_hdl);
+ leapraid_sas_dev_remove_by_hdl(adapter, hdl);
+}
+
+static void leapraid_sas_pd_hide(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct scsi_target *starget = NULL;
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ u64 volume_wwid = 0;
+ u16 volume_hdl = 0;
+ u16 hdl;
+
+ hdl = le16_to_cpu(evt_data->phys_disk_dev_hdl);
+ leapraid_cfg_get_volume_hdl(adapter, hdl, &volume_hdl);
+ if (volume_hdl)
+ leapraid_cfg_get_volume_wwid(adapter,
+ volume_hdl,
+ &volume_wwid);
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (!sas_dev) {
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return;
+ }
+
+ set_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls);
+ if (sas_dev->starget && sas_dev->starget->hostdata) {
+ starget = sas_dev->starget;
+ starget_priv = starget->hostdata;
+ starget_priv->flg |= LEAPRAID_TGT_FLG_RAID_MEMBER;
+ sas_dev->volume_hdl = volume_hdl;
+ sas_dev->volume_wwid = volume_wwid;
+ leapraid_sdev_put(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ if (starget) {
+ dev_info(&adapter->pdev->dev, "hide sas_dev, hdl=0x%x\n", hdl);
+ starget_for_each_device(starget,
+ (void *)1, leapraid_reprobe_lun);
+ }
+}
+
+static void leapraid_sas_pd_expose(
+ struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct scsi_target *starget = NULL;
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ u16 hdl;
+
+ hdl = le16_to_cpu(evt_data->phys_disk_dev_hdl);
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (!sas_dev) {
+ dev_warn(&adapter->pdev->dev,
+ "%s:%d: sas_dev not found, hdl=0x%x\n",
+ __func__, __LINE__, hdl);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return;
+ }
+
+ sas_dev->volume_hdl = 0;
+ sas_dev->volume_wwid = 0;
+ clear_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls);
+ if (sas_dev->starget && sas_dev->starget->hostdata) {
+ starget = sas_dev->starget;
+ starget_priv = starget->hostdata;
+ starget_priv->flg &= ~LEAPRAID_TGT_FLG_RAID_MEMBER;
+ sas_dev->led_on = false;
+ leapraid_sdev_put(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (starget) {
+ dev_info(&adapter->pdev->dev,
+ "expose sas_dev, hdl=0x%x\n", hdl);
+ starget_for_each_device(starget, NULL, leapraid_reprobe_lun);
+ }
+}
+
+static void leapraid_sas_volume_add(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ struct leapraid_raid_volume *raid_volume;
+ unsigned long flags;
+ u64 wwid;
+ u16 hdl;
+
+ hdl = le16_to_cpu(evt_data->vol_dev_hdl);
+
+ if (leapraid_cfg_get_volume_wwid(adapter, hdl, &wwid)) {
+ dev_warn(&adapter->pdev->dev, "failed to read volume page1\n");
+ return;
+ }
+
+ if (!wwid) {
+ dev_warn(&adapter->pdev->dev, "invalid WWID(handle=0x%x)\n",
+ hdl);
+ return;
+ }
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_wwid(adapter, wwid);
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+
+ if (raid_volume) {
+ dev_warn(&adapter->pdev->dev,
+ "volume handle 0x%x already exists\n", hdl);
+ return;
+ }
+
+ raid_volume = kzalloc(sizeof(*raid_volume), GFP_KERNEL);
+ if (!raid_volume)
+ return;
+
+ raid_volume->id = adapter->dev_topo.sas_id++;
+ raid_volume->channel = RAID_CHANNEL;
+ raid_volume->hdl = hdl;
+ raid_volume->wwid = wwid;
+ leapraid_raid_volume_add(adapter, raid_volume);
+ if (!adapter->scan_dev_desc.wait_scan_dev_done) {
+ if (scsi_add_device(adapter->shost, RAID_CHANNEL,
+ raid_volume->id, 0))
+ leapraid_raid_volume_remove(adapter, raid_volume);
+ dev_info(&adapter->pdev->dev,
+ "add raid volume: hdl=0x%x, wwid=0x%llx\n", hdl, wwid);
+ } else {
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ leapraid_check_boot_dev(adapter, raid_volume, RAID_CHANNEL);
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock,
+ flags);
+ }
+}
+
+static void leapraid_sas_volume_delete(struct leapraid_adapter *adapter,
+ u16 hdl)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_raid_volume *raid_volume;
+ struct scsi_target *starget = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_hdl(adapter, hdl);
+ if (!raid_volume) {
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock,
+ flags);
+ dev_warn(&adapter->pdev->dev,
+ "%s:%d: volume handle 0x%x not found\n",
+ __func__, __LINE__, hdl);
+ return;
+ }
+
+ if (raid_volume->starget) {
+ starget = raid_volume->starget;
+ starget_priv = starget->hostdata;
+ starget_priv->deleted = true;
+ }
+
+ dev_info(&adapter->pdev->dev,
+ "delete raid volume: hdl=0x%x, wwid=0x%llx\n",
+ raid_volume->hdl, raid_volume->wwid);
+ list_del(&raid_volume->list);
+ kfree(raid_volume);
+
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+
+ if (starget)
+ scsi_remove_target(&starget->dev);
+}
+
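+/*
+ * Dispatch an IR change event: RAID volume add/delete and physical disk
+ * hide/expose/add/delete transitions.
+ */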
+static void leapraid_sas_ir_chg_evt(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ struct leapraid_evt_data_ir_change *evt_data;
+
+ evt_data = fw_evt->evt_data;
+
+ switch (evt_data->reason_code) {
+ case LEAPRAID_EVT_IR_RC_VOLUME_ADD:
+ leapraid_sas_volume_add(adapter, evt_data);
+ break;
+ case LEAPRAID_EVT_IR_RC_VOLUME_DELETE:
+ leapraid_sas_volume_delete(adapter,
+ le16_to_cpu(evt_data->vol_dev_hdl));
+ break;
+ case LEAPRAID_EVT_IR_RC_PD_HIDDEN_TO_ADD:
+ leapraid_sas_pd_add(adapter, evt_data);
+ break;
+ case LEAPRAID_EVT_IR_RC_PD_UNHIDDEN_TO_DELETE:
+ leapraid_sas_pd_delete(adapter, evt_data);
+ break;
+ case LEAPRAID_EVT_IR_RC_PD_CREATED_TO_HIDE:
+ leapraid_sas_pd_hide(adapter, evt_data);
+ break;
+ case LEAPRAID_EVT_IR_RC_PD_DELETED_TO_EXPOSE:
+ leapraid_sas_pd_expose(adapter, evt_data);
+ break;
+ default:
+ break;
+ }
+}
+
+static void leapraid_sas_enc_dev_stat_add_node(
+ struct leapraid_adapter *adapter, u16 hdl)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_enc_node *enc_node = NULL;
+ int rc;
+
+ enc_node = kzalloc(sizeof(*enc_node), GFP_KERNEL);
+ if (!enc_node)
+ return;
+
+ cfgp1.form = LEAPRAID_SAS_ENC_CFG_PGAD_HDL;
+ cfgp2.handle = hdl;
+ rc = leapraid_op_config_page(adapter, &enc_node->pg0, cfgp1, cfgp2,
+ GET_SAS_ENCLOSURE_PG0);
+ if (rc) {
+ kfree(enc_node);
+ return;
+ }
+ list_add_tail(&enc_node->list, &adapter->dev_topo.enc_list);
+}
+
+static void leapraid_sas_enc_dev_stat_del_node(
+ struct leapraid_enc_node *enc_node)
+{
+ if (!enc_node)
+ return;
+
+ list_del(&enc_node->list);
+ kfree(enc_node);
+}
+
+static void leapraid_sas_enc_dev_stat_chg_evt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ struct leapraid_enc_node *enc_node = NULL;
+ struct leapraid_evt_data_sas_enc_dev_status_change *evt_data;
+ u16 enc_hdl;
+
+ if (adapter->access_ctrl.shost_recovering)
+ return;
+
+ evt_data = fw_evt->evt_data;
+ enc_hdl = le16_to_cpu(evt_data->enc_hdl);
+ if (enc_hdl)
+ enc_node = leapraid_enc_find_by_hdl(adapter, enc_hdl);
+ switch (evt_data->reason_code) {
+ case LEAPRAID_EVT_SAS_ENCL_RC_ADDED:
+ if (!enc_node)
+ leapraid_sas_enc_dev_stat_add_node(adapter, enc_hdl);
+ break;
+ case LEAPRAID_EVT_SAS_ENCL_RC_NOT_RESPONDING:
+ leapraid_sas_enc_dev_stat_del_node(enc_node);
+ break;
+ default:
+ break;
+ }
+}
+
+static void leapraid_remove_unresp_sas_end_dev(
+ struct leapraid_adapter *adapter)
+{
+ struct leapraid_sas_dev *sas_dev, *sas_dev_next;
+ unsigned long flags;
+ LIST_HEAD(head);
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ list_for_each_entry_safe(sas_dev, sas_dev_next,
+ &adapter->dev_topo.sas_dev_init_list, list) {
+ list_del_init(&sas_dev->list);
+ leapraid_sdev_put(sas_dev);
+ }
+ list_for_each_entry_safe(sas_dev, sas_dev_next,
+ &adapter->dev_topo.sas_dev_list, list) {
+ if (!sas_dev->resp)
+ list_move_tail(&sas_dev->list, &head);
+ else
+ sas_dev->resp = false;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ list_for_each_entry_safe(sas_dev, sas_dev_next, &head, list) {
+ leapraid_remove_device(adapter, sas_dev);
+ list_del_init(&sas_dev->list);
+ leapraid_sdev_put(sas_dev);
+ }
+
+ dev_info(&adapter->pdev->dev,
+ "unresponding sas end devices removed\n");
+}
+
+static void leapraid_remove_unresp_raid_volumes(
+ struct leapraid_adapter *adapter)
+{
+ struct leapraid_raid_volume *raid_volume, *raid_volume_next;
+
+ list_for_each_entry_safe(raid_volume, raid_volume_next,
+ &adapter->dev_topo.raid_volume_list, list) {
+ if (!raid_volume->resp)
+ leapraid_sas_volume_delete(adapter, raid_volume->hdl);
+ else
+ raid_volume->resp = false;
+ }
+ dev_info(&adapter->pdev->dev,
+ "unresponding raid volumes removed\n");
+}
+
+static void leapraid_remove_unresp_sas_exp(struct leapraid_adapter *adapter)
+{
+ struct leapraid_topo_node *topo_node_exp, *topo_node_exp_next;
+ unsigned long flags;
+ LIST_HEAD(head);
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_for_each_entry_safe(topo_node_exp, topo_node_exp_next,
+ &adapter->dev_topo.exp_list, list) {
+ if (!topo_node_exp->resp)
+ list_move_tail(&topo_node_exp->list, &head);
+ else
+ topo_node_exp->resp = false;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ list_for_each_entry_safe(topo_node_exp, topo_node_exp_next,
+ &head, list)
+ leapraid_exp_node_rm(adapter, topo_node_exp);
+
+ dev_info(&adapter->pdev->dev,
+ "unresponding sas expanders removed\n");
+}
+
+static void leapraid_remove_unresp_dev(struct leapraid_adapter *adapter)
+{
+ leapraid_remove_unresp_sas_end_dev(adapter);
+ if (adapter->adapter_attr.raid_support)
+ leapraid_remove_unresp_raid_volumes(adapter);
+ leapraid_remove_unresp_sas_exp(adapter);
+ leapraid_ublk_io_all_dev(adapter);
+}
+
+static void leapraid_del_dirty_vphy(struct leapraid_adapter *adapter)
+{
+ struct leapraid_card_port *card_port, *card_port_next;
+ struct leapraid_vphy *vphy, *vphy_next;
+
+ list_for_each_entry_safe(card_port, card_port_next,
+ &adapter->dev_topo.card_port_list, list) {
+ if (!card_port->vphys_mask)
+ continue;
+
+ list_for_each_entry_safe(vphy, vphy_next,
+ &card_port->vphys_list, list) {
+ if (!(vphy->flg & LEAPRAID_VPHY_FLG_DIRTY))
+ continue;
+
+ card_port->vphys_mask &= ~vphy->phy_mask;
+ list_del(&vphy->list);
+ kfree(vphy);
+ }
+
+ if (!card_port->vphys_mask && !card_port->sas_address)
+ card_port->flg |= LEAPRAID_CARD_PORT_FLG_DIRTY;
+ }
+}
+
+static void leapraid_del_dirty_card_port(struct leapraid_adapter *adapter)
+{
+ struct leapraid_card_port *card_port, *card_port_next;
+
+ list_for_each_entry_safe(card_port, card_port_next,
+ &adapter->dev_topo.card_port_list, list) {
+ if (!(card_port->flg & LEAPRAID_CARD_PORT_FLG_DIRTY) ||
+ card_port->flg & LEAPRAID_CARD_PORT_FLG_NEW)
+ continue;
+
+ list_del(&card_port->list);
+ kfree(card_port);
+ }
+}
+
+static void leapraid_update_dev_qdepth(struct leapraid_adapter *adapter)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_device *sdev;
+ u16 qdepth;
+
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv || !sdev_priv->starget_priv)
+ continue;
+ sas_dev = sdev_priv->starget_priv->sas_dev;
+ if (sas_dev && sas_dev->dev_info & LEAPRAID_DEVTYP_SSP_TGT)
+ qdepth = (sas_dev->port_type > 1) ?
+ adapter->adapter_attr.wideport_max_queue_depth :
+ adapter->adapter_attr.narrowport_max_queue_depth;
+ else if (sas_dev && sas_dev->dev_info &
+ LEAPRAID_DEVTYP_SATA_DEV)
+ qdepth = adapter->adapter_attr.sata_max_queue_depth;
+ else
+ continue;
+
+ leapraid_adjust_sdev_queue_depth(sdev, qdepth);
+ }
+}
+
+static void leapraid_update_exp_links(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node_exp,
+ u16 hdl)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_exp_p1 exp_p1;
+ int i;
+
+ cfgp2.handle = hdl;
+ for (i = 0; i < topo_node_exp->phys_num; i++) {
+ cfgp1.phy_number = i;
+ if ((leapraid_op_config_page(adapter, &exp_p1, cfgp1, cfgp2,
+ GET_SAS_EXPANDER_PG1)))
+ return;
+
+ leapraid_transport_update_links(adapter,
+ topo_node_exp->sas_address,
+ le16_to_cpu(exp_p1.attached_dev_hdl),
+ i,
+ exp_p1.neg_link_rate >> 4,
+ topo_node_exp->card_port);
+ }
+}
+
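+/*
+ * After a reset, walk Expander Page 0 with the GET_NEXT loop and refresh
+ * links for known expanders or add the ones discovered for the first time.
+ */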
+static void leapraid_scan_exp_after_reset(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_topo_node *topo_node_exp;
+ struct leapraid_exp_p0 exp_p0;
+ unsigned long flags;
+ u16 hdl;
+ u8 port_id;
+
+ dev_info(&adapter->pdev->dev, "begin scanning expanders\n");
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (hdl = 0xFFFF, cfgp2.handle = hdl;
+ !leapraid_op_config_page(adapter, &exp_p0, cfgp1, cfgp2,
+ GET_SAS_EXPANDER_PG0);
+ cfgp2.handle = hdl) {
+ hdl = le16_to_cpu(exp_p0.dev_hdl);
+ port_id = exp_p0.physical_port;
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node_exp =
+ leapraid_exp_find_by_sas_address(adapter,
+ le64_to_cpu(exp_p0.sas_address),
+ leapraid_get_port_by_id(adapter,
+ port_id,
+ false));
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock,
+ flags);
+
+ if (topo_node_exp) {
+ leapraid_update_exp_links(adapter, topo_node_exp, hdl);
+ } else {
+ leapraid_exp_add(adapter, hdl);
+
+ dev_info(&adapter->pdev->dev,
+ "add exp: hdl=0x%04x, sas addr=0x%016llx\n",
+ hdl,
+ (unsigned long long)le64_to_cpu(
+ exp_p0.sas_address));
+ }
+ }
+
+ dev_info(&adapter->pdev->dev, "expanders scan complete\n");
+}
+
+static void leapraid_scan_phy_disks_after_reset(
+ struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ union cfg_param_1 cfgp1_extra = {0};
+ union cfg_param_2 cfgp2_extra = {0};
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+ struct leapraid_raidpd_p0 raidpd_p0;
+ struct leapraid_sas_dev *sas_dev;
+ u8 phys_disk_num, port_id;
+ u16 hdl, parent_hdl;
+ u64 sas_addr;
+
+ dev_info(&adapter->pdev->dev, "begin scanning phys disk\n");
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (phys_disk_num = 0xFF, cfgp2.form_specific = phys_disk_num;
+ !leapraid_op_config_page(adapter, &raidpd_p0,
+ cfgp1, cfgp2, GET_PHY_DISK_PG0);
+ cfgp2.form_specific = phys_disk_num) {
+ phys_disk_num = raidpd_p0.phys_disk_num;
+ hdl = le16_to_cpu(raidpd_p0.dev_hdl);
+ sas_dev = leapraid_get_sas_dev_by_hdl(adapter, hdl);
+ if (sas_dev) {
+ leapraid_sdev_put(sas_dev);
+ continue;
+ }
+
+ cfgp1_extra.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2_extra.handle = hdl;
+		if (leapraid_op_config_page(adapter, &sas_dev_p0, cfgp1_extra,
+					    cfgp2_extra, GET_SAS_DEVICE_PG0))
+			continue;
+
+ parent_hdl = le16_to_cpu(sas_dev_p0.parent_dev_hdl);
+ if (!leapraid_get_sas_address(adapter,
+ parent_hdl,
+ &sas_addr)) {
+ port_id = sas_dev_p0.physical_port;
+ leapraid_transport_update_links(
+ adapter, sas_addr, hdl,
+ sas_dev_p0.phy_num,
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5,
+ leapraid_get_port_by_id(
+ adapter, port_id, false));
+ set_bit(hdl,
+ (unsigned long *)adapter->dev_topo.pd_hdls);
+
+ leapraid_add_dev(adapter, hdl);
+
+ dev_info(&adapter->pdev->dev,
+ "add phys disk: hdl=0x%04x, sas addr=0x%016llx\n",
+ hdl,
+ (unsigned long long)le64_to_cpu(
+ sas_dev_p0.sas_address));
+ }
+ }
+
+ dev_info(&adapter->pdev->dev, "phys disk scan complete\n");
+}
+
+static void leapraid_scan_vol_after_reset(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ union cfg_param_1 cfgp1_extra = {0};
+ union cfg_param_2 cfgp2_extra = {0};
+ struct leapraid_evt_data_ir_change evt_data;
+	struct leapraid_raid_volume *raid_volume;
+ struct leapraid_raidvol_p1 *vol_p1;
+ struct leapraid_raidvol_p0 *vol_p0;
+ unsigned long flags;
+ u16 hdl;
+
+ vol_p0 = kzalloc(sizeof(*vol_p0), GFP_KERNEL);
+ if (!vol_p0)
+ return;
+
+ vol_p1 = kzalloc(sizeof(*vol_p1), GFP_KERNEL);
+ if (!vol_p1) {
+ kfree(vol_p0);
+ return;
+ }
+
+ dev_info(&adapter->pdev->dev, "begin scanning volumes\n");
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (hdl = 0xFFFF, cfgp2.handle = hdl;
+ !leapraid_op_config_page(adapter, vol_p1, cfgp1,
+ cfgp2, GET_RAID_VOLUME_PG1);
+ cfgp2.handle = hdl) {
+ hdl = le16_to_cpu(vol_p1->dev_hdl);
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_wwid(
+ adapter,
+ le64_to_cpu(vol_p1->wwid));
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock,
+ flags);
+ if (raid_volume)
+ continue;
+
+ cfgp1_extra.size = sizeof(struct leapraid_raidvol_p0);
+ cfgp2_extra.handle = hdl;
+ if (leapraid_op_config_page(adapter, vol_p0, cfgp1_extra,
+ cfgp2_extra, GET_RAID_VOLUME_PG0))
+ continue;
+
+ if (vol_p0->volume_state == LEAPRAID_VOL_STATE_OPTIMAL ||
+ vol_p0->volume_state == LEAPRAID_VOL_STATE_ONLINE ||
+ vol_p0->volume_state == LEAPRAID_VOL_STATE_DEGRADED) {
+ memset(&evt_data, 0,
+ sizeof(struct leapraid_evt_data_ir_change));
+ evt_data.reason_code = LEAPRAID_EVT_IR_RC_VOLUME_ADD;
+ evt_data.vol_dev_hdl = vol_p1->dev_hdl;
+ leapraid_sas_volume_add(adapter, &evt_data);
+ dev_info(&adapter->pdev->dev,
+ "add volume: hdl=0x%04x\n",
+				 hdl);
+ }
+ }
+
+ kfree(vol_p0);
+ kfree(vol_p1);
+
+ dev_info(&adapter->pdev->dev, "volumes scan complete\n");
+}
+
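+/*
+ * After a controller reset, walk SAS device page 0 entries and add any
+ * SAS end device that is not already in the driver's device list.
+ */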
+static void leapraid_scan_sas_dev_after_reset(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+ struct leapraid_sas_dev *sas_dev;
+ u16 hdl, parent_hdl;
+ u64 sas_address;
+ u8 port_id;
+
+ dev_info(&adapter->pdev->dev,
+ "begin scanning sas end devices\n");
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (hdl = 0xFFFF, cfgp2.handle = hdl;
+ !leapraid_op_config_page(adapter, &sas_dev_p0, cfgp1, cfgp2,
+ GET_SAS_DEVICE_PG0);
+ cfgp2.handle = hdl) {
+ hdl = le16_to_cpu(sas_dev_p0.dev_hdl);
+ if (!(leapraid_is_end_dev(le32_to_cpu(sas_dev_p0.dev_info))))
+ continue;
+
+ port_id = sas_dev_p0.physical_port;
+ sas_dev = leapraid_get_sas_dev_by_addr(
+ adapter,
+ le64_to_cpu(sas_dev_p0.sas_address),
+ leapraid_get_port_by_id(
+ adapter,
+ port_id,
+ false));
+ if (sas_dev) {
+ leapraid_sdev_put(sas_dev);
+ continue;
+ }
+
+ parent_hdl = le16_to_cpu(sas_dev_p0.parent_dev_hdl);
+ if (!leapraid_get_sas_address(adapter, parent_hdl,
+ &sas_address)) {
+ leapraid_transport_update_links(
+ adapter,
+ sas_address,
+ hdl,
+ sas_dev_p0.phy_num,
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5,
+ leapraid_get_port_by_id(adapter,
+ port_id,
+ false));
+ leapraid_add_dev(adapter, hdl);
+ dev_info(&adapter->pdev->dev,
+ "add sas dev: hdl=0x%04x, sas addr=0x%016llx\n",
+ hdl,
+ (unsigned long long)le64_to_cpu(
+ sas_dev_p0.sas_address));
+ }
+ }
+
+ dev_info(&adapter->pdev->dev, "sas end devices scan complete\n");
+}
+
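+/*
+ * Full device rescan after a reset: host phys first, then expanders,
+ * then (when integrated RAID is supported) phys disks and volumes, and
+ * finally SAS end devices.
+ */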
+static void leapraid_scan_all_dev_after_reset(struct leapraid_adapter *adapter)
+{
+ dev_info(&adapter->pdev->dev, "begin scanning devices\n");
+
+ leapraid_sas_host_add(adapter, adapter->dev_topo.card.phys_num);
+ leapraid_scan_exp_after_reset(adapter);
+ if (adapter->adapter_attr.raid_support) {
+ leapraid_scan_phy_disks_after_reset(adapter);
+ leapraid_scan_vol_after_reset(adapter);
+ }
+ leapraid_scan_sas_dev_after_reset(adapter);
+
+ dev_info(&adapter->pdev->dev, "devices scan complete\n");
+}
+
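+/*
+ * Deferred post-reset cleanup, run from the firmware event worker:
+ * remove devices that stopped responding, drop dirty vphys and card
+ * ports, restore queue depths and rescan the whole topology.
+ */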
+static void leapraid_hardreset_async_logic(struct leapraid_adapter *adapter)
+{
+ leapraid_remove_unresp_dev(adapter);
+ leapraid_del_dirty_vphy(adapter);
+ leapraid_del_dirty_card_port(adapter);
+ leapraid_update_dev_qdepth(adapter);
+ leapraid_scan_all_dev_after_reset(adapter);
+
+ if (adapter->scan_dev_desc.driver_loading)
+ leapraid_scan_dev_done(adapter);
+}
+
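+/*
+ * Issue a SCSI enclosure processor request via the internal enclosure
+ * command slot and wait for completion; on timeout the command status
+ * may trigger a full adapter reset.
+ */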
+static int leapraid_send_enc_cmd(struct leapraid_adapter *adapter,
+ struct leapraid_sep_rep *sep_rep,
+ struct leapraid_sep_req *sep_req)
+{
+ void *req;
+ bool reset_flg = false;
+ int rc = 0;
+
+ mutex_lock(&adapter->driver_cmds.enc_cmd.mutex);
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc)
+ goto out;
+
+ adapter->driver_cmds.enc_cmd.status = LEAPRAID_CMD_PENDING;
+ req = leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.enc_cmd.inter_taskid);
+ memset(req, 0, LEAPRAID_REQUEST_SIZE);
+ memcpy(req, sep_req, sizeof(struct leapraid_sep_req));
+ init_completion(&adapter->driver_cmds.enc_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.enc_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.enc_cmd.done,
+ LEAPRAID_ENC_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.enc_cmd.status & LEAPRAID_CMD_DONE)) {
+ reset_flg =
+ leapraid_check_reset(
+ adapter->driver_cmds.enc_cmd.status);
+ rc = -EFAULT;
+ goto do_hard_reset;
+ }
+
+ if (adapter->driver_cmds.enc_cmd.status & LEAPRAID_CMD_REPLY_VALID)
+ memcpy(sep_rep, (void *)(&adapter->driver_cmds.enc_cmd.reply),
+ sizeof(struct leapraid_sep_rep));
+do_hard_reset:
+ if (reset_flg) {
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ }
+
+ adapter->driver_cmds.enc_cmd.status = LEAPRAID_CMD_NOT_USED;
+out:
+ mutex_unlock(&adapter->driver_cmds.enc_cmd.mutex);
+ return rc;
+}
+
+static void leapraid_set_led(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev, bool on)
+{
+ struct leapraid_sep_rep sep_rep;
+ struct leapraid_sep_req sep_req;
+
+ if (!sas_dev)
+ return;
+
+ memset(&sep_req, 0, sizeof(struct leapraid_sep_req));
+ memset(&sep_rep, 0, sizeof(struct leapraid_sep_rep));
+ sep_req.func = LEAPRAID_FUNC_SCSI_ENC_PROCESSOR;
+ sep_req.act = LEAPRAID_SEP_REQ_ACT_WRITE_STATUS;
+ if (on) {
+ sep_req.slot_status =
+ cpu_to_le32(LEAPRAID_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT);
+ sep_req.dev_hdl = cpu_to_le16(sas_dev->hdl);
+ sep_req.flg = LEAPRAID_SEP_REQ_FLG_DEVHDL_ADDRESS;
+ if (leapraid_send_enc_cmd(adapter, &sep_rep, &sep_req)) {
+ leapraid_sdev_put(sas_dev);
+ return;
+ }
+
+ sas_dev->led_on = true;
+ if (sep_rep.adapter_status)
+ leapraid_sdev_put(sas_dev);
+ } else {
+ sep_req.slot_status = 0;
+ sep_req.slot = cpu_to_le16(sas_dev->slot);
+ sep_req.dev_hdl = 0;
+ sep_req.enc_hdl = cpu_to_le16(sas_dev->enc_hdl);
+ sep_req.flg = LEAPRAID_SEP_REQ_FLG_ENCLOSURE_SLOT_ADDRESS;
+ if ((leapraid_send_enc_cmd(adapter, &sep_rep, &sep_req))) {
+ leapraid_sdev_put(sas_dev);
+ return;
+ }
+
+ if (sep_rep.adapter_status) {
+ leapraid_sdev_put(sas_dev);
+ return;
+ }
+ }
+}
+
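+/*
+ * Handle one queued firmware event; events are dropped early when the
+ * host is being removed or PCIe error recovery is in progress.
+ */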
+static void leapraid_fw_work(struct leapraid_adapter *adapter,
+ struct leapraid_fw_evt_work *fw_evt)
+{
+ struct leapraid_sas_dev *sas_dev;
+
+ adapter->fw_evt_s.cur_evt = fw_evt;
+ leapraid_del_fw_evt_from_list(adapter, fw_evt);
+ if (adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering) {
+ leapraid_fw_evt_put(fw_evt);
+ adapter->fw_evt_s.cur_evt = NULL;
+ return;
+ }
+ switch (fw_evt->evt_type) {
+ case LEAPRAID_EVT_SAS_DISCOVERY:
+ {
+ struct leapraid_evt_data_sas_disc *evt_data;
+
+ evt_data = fw_evt->evt_data;
+ if (evt_data->reason_code ==
+ LEAPRAID_EVT_SAS_DISC_RC_STARTED &&
+ !adapter->dev_topo.card.phys_num)
+ leapraid_sas_host_add(adapter, 0);
+ break;
+ }
+ case LEAPRAID_EVT_SAS_TOPO_CHANGE_LIST:
+ leapraid_sas_topo_chg_evt(adapter, fw_evt);
+ break;
+ case LEAPRAID_EVT_IR_CHANGE:
+ leapraid_sas_ir_chg_evt(adapter, fw_evt);
+ break;
+ case LEAPRAID_EVT_SAS_ENCL_DEV_STATUS_CHANGE:
+ leapraid_sas_enc_dev_stat_chg_evt(adapter, fw_evt);
+ break;
+ case LEAPRAID_EVT_REMOVE_DEAD_DEV:
+ while (scsi_host_in_recovery(adapter->shost) ||
+ adapter->access_ctrl.shost_recovering) {
+ if (adapter->access_ctrl.host_removing ||
+ adapter->fw_evt_s.fw_evt_cleanup)
+ goto out;
+
+ ssleep(1);
+ }
+ leapraid_hardreset_async_logic(adapter);
+ break;
+ case LEAPRAID_EVT_TURN_ON_PFA_LED:
+ sas_dev = leapraid_get_sas_dev_by_hdl(adapter,
+ fw_evt->dev_handle);
+ leapraid_set_led(adapter, sas_dev, true);
+ break;
+ case LEAPRAID_EVT_SCAN_DEV_DONE:
+ adapter->scan_dev_desc.scan_start = false;
+ break;
+ default:
+ break;
+ }
+out:
+ leapraid_fw_evt_put(fw_evt);
+ adapter->fw_evt_s.cur_evt = NULL;
+}
+
+static void leapraid_sas_dev_stat_chg_evt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_sas_dev_status_change *event_data)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_sas_dev *sas_dev = NULL;
+ u64 sas_address;
+ unsigned long flags;
+
+ switch (event_data->reason_code) {
+ case LEAPRAID_EVT_SAS_DEV_STAT_RC_INTERNAL_DEV_RESET:
+ case LEAPRAID_EVT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET:
+ break;
+ default:
+ return;
+ }
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+
+ sas_address = le64_to_cpu(event_data->sas_address);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter,
+ sas_address,
+ leapraid_get_port_by_id(adapter,
+ event_data->physical_port,
+ false));
+
+ if (sas_dev && sas_dev->starget) {
+ starget_priv = sas_dev->starget->hostdata;
+ if (starget_priv) {
+ switch (event_data->reason_code) {
+ case LEAPRAID_EVT_SAS_DEV_STAT_RC_INTERNAL_DEV_RESET:
+ starget_priv->tm_busy = true;
+ break;
+ case LEAPRAID_EVT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET:
+ starget_priv->tm_busy = false;
+ break;
+ }
+ }
+ }
+
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+}
+
+static void leapraid_set_volume_delete_flag(struct leapraid_adapter *adapter,
+ u16 handle)
+{
+ struct leapraid_raid_volume *raid_volume;
+ struct leapraid_starget_priv *sas_target_priv_data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_hdl(adapter, handle);
+ if (raid_volume && raid_volume->starget &&
+ raid_volume->starget->hostdata) {
+ sas_target_priv_data = raid_volume->starget->hostdata;
+ sas_target_priv_data->deleted = true;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+}
+
+static void leapraid_check_ir_change_evt(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_ir_change *evt_data)
+{
+ u16 phys_disk_dev_hdl;
+
+ switch (evt_data->reason_code) {
+ case LEAPRAID_EVT_IR_RC_VOLUME_DELETE:
+ leapraid_set_volume_delete_flag(adapter,
+ le16_to_cpu(evt_data->vol_dev_hdl));
+ break;
+ case LEAPRAID_EVT_IR_RC_PD_UNHIDDEN_TO_DELETE:
+ phys_disk_dev_hdl =
+ le16_to_cpu(evt_data->phys_disk_dev_hdl);
+ clear_bit(phys_disk_dev_hdl,
+ (unsigned long *)adapter->dev_topo.pd_hdls);
+ leapraid_tgt_rst_send(adapter, phys_disk_dev_hdl);
+ break;
+ }
+}
+
+static void leapraid_topo_del_evts_process_exp_status(
+ struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_sas_topo_change_list *evt_data)
+{
+ struct leapraid_fw_evt_work *fw_evt = NULL;
+ struct leapraid_evt_data_sas_topo_change_list *loc_evt_data = NULL;
+ unsigned long flags;
+ u16 exp_hdl;
+
+ exp_hdl = le16_to_cpu(evt_data->exp_dev_hdl);
+
+ switch (evt_data->exp_status) {
+ case LEAPRAID_EVT_SAS_TOPO_ES_NOT_RESPONDING:
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ list_for_each_entry(fw_evt,
+ &adapter->fw_evt_s.fw_evt_list, list) {
+ if (fw_evt->evt_type !=
+ LEAPRAID_EVT_SAS_TOPO_CHANGE_LIST ||
+ fw_evt->ignore)
+ continue;
+
+ loc_evt_data = fw_evt->evt_data;
+ if ((loc_evt_data->exp_status ==
+ LEAPRAID_EVT_SAS_TOPO_ES_ADDED ||
+ loc_evt_data->exp_status ==
+ LEAPRAID_EVT_SAS_TOPO_ES_RESPONDING) &&
+ le16_to_cpu(loc_evt_data->exp_dev_hdl) == exp_hdl)
+ fw_evt->ignore = 1;
+ }
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+ break;
+ default:
+ break;
+ }
+}
+
+static void leapraid_check_topo_del_evts(struct leapraid_adapter *adapter,
+ struct leapraid_evt_data_sas_topo_change_list *evt_data)
+{
+ int reason_code;
+ u16 hdl;
+ int i;
+
+ for (i = 0; i < evt_data->entry_num; i++) {
+ hdl = le16_to_cpu(evt_data->phy[i].attached_dev_hdl);
+ if (!hdl)
+ continue;
+
+ reason_code = evt_data->phy[i].phy_status &
+ LEAPRAID_EVT_SAS_TOPO_RC_MASK;
+ if (reason_code ==
+ LEAPRAID_EVT_SAS_TOPO_RC_TARG_NOT_RESPONDING)
+ leapraid_tgt_not_responding(adapter, hdl);
+ }
+ leapraid_topo_del_evts_process_exp_status(adapter, evt_data);
+}
+
+static bool leapraid_async_process_evt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_evt_notify_rep *event_notify_rep)
+{
+ u16 evt = le16_to_cpu(event_notify_rep->evt);
+ bool exit_flag = false;
+
+ switch (evt) {
+ case LEAPRAID_EVT_SAS_DEV_STATUS_CHANGE:
+ leapraid_sas_dev_stat_chg_evt(adapter,
+ (struct leapraid_evt_data_sas_dev_status_change
+ *)event_notify_rep->evt_data);
+ break;
+ case LEAPRAID_EVT_IR_CHANGE:
+ leapraid_check_ir_change_evt(adapter,
+ (struct leapraid_evt_data_ir_change
+ *)event_notify_rep->evt_data);
+ break;
+ case LEAPRAID_EVT_SAS_TOPO_CHANGE_LIST:
+ leapraid_check_topo_del_evts(adapter,
+ (struct leapraid_evt_data_sas_topo_change_list
+ *)event_notify_rep->evt_data);
+ if (adapter->access_ctrl.shost_recovering) {
+ exit_flag = true;
+ return exit_flag;
+ }
+ break;
+ case LEAPRAID_EVT_SAS_DISCOVERY:
+ case LEAPRAID_EVT_SAS_ENCL_DEV_STATUS_CHANGE:
+ break;
+ default:
+ exit_flag = true;
+ return exit_flag;
+ }
+
+ return exit_flag;
+}
+
+static void leapraid_async_evt_cb_enqueue(
+ struct leapraid_adapter *adapter,
+ struct leapraid_evt_notify_rep *evt_notify_rep)
+{
+ struct leapraid_fw_evt_work *fw_evt;
+ u16 evt_sz;
+
+ fw_evt = leapraid_alloc_fw_evt_work();
+ if (!fw_evt)
+ return;
+
+ evt_sz = le16_to_cpu(evt_notify_rep->evt_data_len) * 4;
+ fw_evt->evt_data = kmemdup(evt_notify_rep->evt_data,
+ evt_sz, GFP_ATOMIC);
+ if (!fw_evt->evt_data) {
+ leapraid_fw_evt_put(fw_evt);
+ return;
+ }
+ fw_evt->adapter = adapter;
+ fw_evt->evt_type = le16_to_cpu(evt_notify_rep->evt);
+ leapraid_fw_evt_add(adapter, fw_evt);
+ leapraid_fw_evt_put(fw_evt);
+}
+
+static void leapraid_async_evt_cb(struct leapraid_adapter *adapter,
+ u8 msix_index, u32 rep_paddr)
+{
+ struct leapraid_evt_notify_rep *evt_notify_rep;
+
+ if (adapter->access_ctrl.pcie_recovering)
+ return;
+
+ evt_notify_rep = leapraid_get_reply_vaddr(adapter, rep_paddr);
+ if (unlikely(!evt_notify_rep))
+ return;
+
+ if (leapraid_async_process_evt(adapter, evt_notify_rep))
+ return;
+
+ leapraid_async_evt_cb_enqueue(adapter, evt_notify_rep);
+}
+
+static void leapraid_handle_async_event(struct leapraid_adapter *adapter,
+ u8 msix_index, u32 reply)
+{
+ struct leapraid_evt_notify_rep *leap_mpi_rep =
+ leapraid_get_reply_vaddr(adapter, reply);
+
+ if (!leap_mpi_rep)
+ return;
+
+ if (leap_mpi_rep->func != LEAPRAID_FUNC_EVENT_NOTIFY)
+ return;
+
+ leapraid_async_evt_cb(adapter, msix_index, reply);
+}
+
+void leapraid_async_turn_on_led(struct leapraid_adapter *adapter, u16 handle)
+{
+ struct leapraid_fw_evt_work *fw_event;
+
+ fw_event = leapraid_alloc_fw_evt_work();
+ if (!fw_event)
+ return;
+
+ fw_event->dev_handle = handle;
+ fw_event->adapter = adapter;
+ fw_event->evt_type = LEAPRAID_EVT_TURN_ON_PFA_LED;
+ leapraid_fw_evt_add(adapter, fw_event);
+ leapraid_fw_evt_put(fw_event);
+}
+
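+/*
+ * Queue a REMOVE_DEAD_DEV event so the post-reset cleanup runs from the
+ * firmware event worker once host recovery has finished.
+ */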
+static void leapraid_hardreset_barrier(struct leapraid_adapter *adapter)
+{
+ struct leapraid_fw_evt_work *fw_event;
+
+ fw_event = leapraid_alloc_fw_evt_work();
+ if (!fw_event)
+ return;
+
+ fw_event->adapter = adapter;
+ fw_event->evt_type = LEAPRAID_EVT_REMOVE_DEAD_DEV;
+ leapraid_fw_evt_add(adapter, fw_event);
+ leapraid_fw_evt_put(fw_event);
+}
+
+static void leapraid_scan_dev_complete(struct leapraid_adapter *adapter)
+{
+ struct leapraid_fw_evt_work *fw_evt;
+
+ fw_evt = leapraid_alloc_fw_evt_work();
+ if (!fw_evt)
+ return;
+
+ fw_evt->evt_type = LEAPRAID_EVT_SCAN_DEV_DONE;
+ fw_evt->adapter = adapter;
+ leapraid_fw_evt_add(adapter, fw_evt);
+ leapraid_fw_evt_put(fw_evt);
+}
+
+static u8 leapraid_driver_cmds_done(struct leapraid_adapter *adapter,
+ u16 taskid, u8 msix_index,
+ u32 rep_paddr, u8 cb_idx)
+{
+ struct leapraid_rep *leap_mpi_rep =
+ leapraid_get_reply_vaddr(adapter, rep_paddr);
+ struct leapraid_driver_cmd *sp_cmd, *_sp_cmd = NULL;
+
+ list_for_each_entry(sp_cmd, &adapter->driver_cmds.special_cmd_list,
+ list)
+ if (cb_idx == sp_cmd->cb_idx) {
+ _sp_cmd = sp_cmd;
+ break;
+ }
+
+ if (WARN_ON(!_sp_cmd))
+ return 1;
+ if (WARN_ON(_sp_cmd->status == LEAPRAID_CMD_NOT_USED))
+ return 1;
+ if (WARN_ON(taskid != _sp_cmd->hp_taskid &&
+ taskid != _sp_cmd->taskid &&
+ taskid != _sp_cmd->inter_taskid))
+ return 1;
+
+ _sp_cmd->status |= LEAPRAID_CMD_DONE;
+ if (leap_mpi_rep) {
+ memcpy((void *)(&_sp_cmd->reply), leap_mpi_rep,
+ leap_mpi_rep->msg_len * 4);
+ _sp_cmd->status |= LEAPRAID_CMD_REPLY_VALID;
+
+ if (_sp_cmd->cb_idx == LEAPRAID_SCAN_DEV_CB_IDX) {
+ u16 adapter_status;
+
+ _sp_cmd->status &= ~LEAPRAID_CMD_PENDING;
+ adapter_status =
+ le16_to_cpu(leap_mpi_rep->adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status != LEAPRAID_ADAPTER_STATUS_SUCCESS)
+ adapter->scan_dev_desc.scan_dev_failed = true;
+
+ if (_sp_cmd->async_scan_dev) {
+ if (adapter_status ==
+ LEAPRAID_ADAPTER_STATUS_SUCCESS) {
+ leapraid_scan_dev_complete(adapter);
+ } else {
+ adapter->scan_dev_desc.scan_start_failed =
+ adapter_status;
+ }
+ return 1;
+ }
+
+ complete(&_sp_cmd->done);
+ return 1;
+ }
+
+ if (_sp_cmd->cb_idx == LEAPRAID_CTL_CB_IDX) {
+ struct leapraid_scsiio_rep *scsiio_reply;
+
+ if (leap_mpi_rep->function ==
+ LEAPRAID_FUNC_SCSIIO_REQ ||
+ leap_mpi_rep->function ==
+ LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH) {
+ scsiio_reply =
+ (struct leapraid_scsiio_rep *)leap_mpi_rep;
+ if (scsiio_reply->scsi_state &
+ LEAPRAID_SCSI_STATE_AUTOSENSE_VALID)
+ memcpy((void *)(&adapter->driver_cmds.ctl_cmd.sense),
+ leapraid_get_sense_buffer(adapter, taskid),
+ min_t(u32,
+ SCSI_SENSE_BUFFERSIZE,
+ le32_to_cpu(scsiio_reply->sense_count)));
+ }
+ }
+ }
+
+ _sp_cmd->status &= ~LEAPRAID_CMD_PENDING;
+ complete(&_sp_cmd->done);
+
+ return 1;
+}
+
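+/*
+ * Dispatch a single reply descriptor: fast-path/success descriptors
+ * complete the SCSI command directly, address replies carry a reply
+ * frame that is routed to the SCSI, driver-command or async event
+ * handlers and then put back on the reply free queue.
+ */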
+static void leapraid_request_descript_handler(struct leapraid_adapter *adapter,
+ union leapraid_rep_desc_union *rpf,
+ u8 req_desc_type, u8 msix_idx)
+{
+ u32 rep;
+ u16 taskid;
+
+ rep = 0;
+ taskid = le16_to_cpu(rpf->dflt_rep.taskid);
+ switch (req_desc_type) {
+ case LEAPRAID_RPY_DESC_FLG_FP_SCSI_IO_SUCCESS:
+ case LEAPRAID_RPY_DESC_FLG_SCSI_IO_SUCCESS:
+ if (taskid <= adapter->shost->can_queue ||
+ taskid == adapter->driver_cmds.driver_scsiio_cmd.taskid) {
+ leapraid_scsiio_done(adapter, taskid, msix_idx, 0);
+ } else {
+ if (leapraid_driver_cmds_done(adapter, taskid,
+ msix_idx, 0,
+ leapraid_get_cb_idx(adapter,
+ taskid)))
+ leapraid_free_taskid(adapter, taskid);
+ }
+ break;
+ case LEAPRAID_RPY_DESC_FLG_ADDRESS_REPLY:
+ rep = le32_to_cpu(rpf->addr_rep.rep_frame_addr);
+ if (rep > ((u32)adapter->mem_desc.rep_msg_dma +
+ adapter->adapter_attr.rep_msg_qd * LEAPRAID_REPLY_SIEZ) ||
+ rep < ((u32)adapter->mem_desc.rep_msg_dma))
+ rep = 0;
+ if (taskid) {
+ if (taskid <= adapter->shost->can_queue ||
+ taskid == adapter->driver_cmds.driver_scsiio_cmd.taskid) {
+ leapraid_scsiio_done(adapter, taskid,
+ msix_idx, rep);
+ } else {
+ if (leapraid_driver_cmds_done(adapter, taskid,
+ msix_idx, rep,
+ leapraid_get_cb_idx(adapter,
+ taskid)))
+ leapraid_free_taskid(adapter, taskid);
+ }
+ } else {
+ leapraid_handle_async_event(adapter, msix_idx, rep);
+ }
+
+ if (rep) {
+ adapter->rep_msg_host_idx =
+ (adapter->rep_msg_host_idx ==
+ (adapter->adapter_attr.rep_msg_qd - 1)) ?
+ 0 : adapter->rep_msg_host_idx + 1;
+ adapter->mem_desc.rep_msg_addr[adapter->rep_msg_host_idx] =
+ cpu_to_le32(rep);
+ wmb(); /* Make sure that all write ops are in order */
+ writel(adapter->rep_msg_host_idx,
+ &adapter->iomem_base->rep_msg_host_idx);
+ }
+ break;
+ default:
+ break;
+ }
+}
+
+int leapraid_rep_queue_handler(struct leapraid_rq *rq)
+{
+ struct leapraid_adapter *adapter = rq->adapter;
+ union leapraid_rep_desc_union *rep_desc;
+ u8 req_desc_type;
+ u64 finish_cmds;
+ u8 msix_idx;
+
+ msix_idx = rq->msix_idx;
+ finish_cmds = 0;
+ if (!atomic_add_unless(&rq->busy, LEAPRAID_BUSY_LIMIT,
+ LEAPRAID_BUSY_LIMIT))
+ return finish_cmds;
+
+ rep_desc = &rq->rep_desc[rq->rep_post_host_idx];
+ req_desc_type = rep_desc->dflt_rep.rep_flg &
+ LEAPRAID_RPY_DESC_FLG_TYPE_MASK;
+ if (req_desc_type == LEAPRAID_RPY_DESC_FLG_UNUSED) {
+ atomic_dec(&rq->busy);
+ return finish_cmds;
+ }
+
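+	/*
+	 * Drain the reply post queue until an unused descriptor or an
+	 * all-ones placeholder is reached; the updated host index is
+	 * written back to the controller once, after the loop.
+	 */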
+ for (;;) {
+ if (rep_desc->u.low == UINT_MAX ||
+ rep_desc->u.high == UINT_MAX)
+ break;
+
+ leapraid_request_descript_handler(adapter, rep_desc,
+ req_desc_type, msix_idx);
+ dev_dbg(&adapter->pdev->dev,
+ "LEAPRAID_SCSIIO: Handled Desc taskid %d, msix %d\n",
+			le16_to_cpu(rep_desc->dflt_rep.taskid), msix_idx);
+ rep_desc->words = cpu_to_le64(ULLONG_MAX);
+ rq->rep_post_host_idx =
+ (rq->rep_post_host_idx ==
+ (adapter->adapter_attr.rep_desc_qd -
+ LEAPRAID_BUSY_LIMIT)) ?
+ 0 : rq->rep_post_host_idx + 1;
+ req_desc_type =
+ rq->rep_desc[rq->rep_post_host_idx].dflt_rep.rep_flg &
+ LEAPRAID_RPY_DESC_FLG_TYPE_MASK;
+ finish_cmds++;
+ if (req_desc_type == LEAPRAID_RPY_DESC_FLG_UNUSED)
+ break;
+ rep_desc = rq->rep_desc + rq->rep_post_host_idx;
+ }
+
+ if (!finish_cmds) {
+ atomic_dec(&rq->busy);
+ return finish_cmds;
+ }
+
+ wmb(); /* Make sure that all write ops are in order */
+ writel(rq->rep_post_host_idx | ((msix_idx & LEAPRAID_MSIX_GROUP_MASK) <<
+ LEAPRAID_RPHI_MSIX_IDX_SHIFT),
+ &adapter->iomem_base->rep_post_reg_idx[msix_idx /
+ LEAPRAID_MSIX_GROUP_SIZE].idx);
+ atomic_dec(&rq->busy);
+ return finish_cmds;
+}
+
+static irqreturn_t leapraid_irq_handler(int irq, void *bus_id)
+{
+ struct leapraid_rq *rq = bus_id;
+ struct leapraid_adapter *adapter = rq->adapter;
+
+ dev_dbg(&adapter->pdev->dev,
+		"LEAPRAID_SCSIIO: Received an interrupt, irq %d msix %d\n",
+ irq, rq->msix_idx);
+
+ if (adapter->mask_int)
+ return IRQ_NONE;
+
+ return ((leapraid_rep_queue_handler(rq) > 0) ?
+ IRQ_HANDLED : IRQ_NONE);
+}
+
+void leapraid_sync_irqs(struct leapraid_adapter *adapter, bool poll)
+{
+ struct leapraid_int_rq *int_rq;
+ struct leapraid_blk_mq_poll_rq *blk_mq_poll_rq;
+ unsigned int i;
+
+ if (!adapter->notification_desc.msix_enable)
+ return;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering)
+ return;
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qdex; i++) {
+ int_rq = &adapter->notification_desc.int_rqs[i];
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering)
+ return;
+
+ if (int_rq->rq.msix_idx == 0)
+ continue;
+
+ synchronize_irq(pci_irq_vector(adapter->pdev, int_rq->rq.msix_idx));
+ if (poll)
+ leapraid_rep_queue_handler(&int_rq->rq);
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qcnt; i++) {
+ blk_mq_poll_rq =
+ &adapter->notification_desc.blk_mq_poll_rqs[i];
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering)
+ return;
+
+ if (blk_mq_poll_rq->rq.msix_idx == 0)
+ continue;
+
+ leapraid_rep_queue_handler(&blk_mq_poll_rq->rq);
+ }
+}
+
+void leapraid_mq_polling_pause(struct leapraid_adapter *adapter)
+{
+ int iopoll_q_count =
+ adapter->adapter_attr.rq_cnt -
+ adapter->notification_desc.iopoll_qdex;
+ int qid;
+
+ for (qid = 0; qid < iopoll_q_count; qid++)
+ atomic_set(&adapter->notification_desc.blk_mq_poll_rqs[qid].pause, 1);
+
+ for (qid = 0; qid < iopoll_q_count; qid++) {
+ while (atomic_read(&adapter->notification_desc.blk_mq_poll_rqs[qid].busy)) {
+ cpu_relax();
+ udelay(LEAPRAID_IO_POLL_DELAY_US);
+ }
+ }
+}
+
+void leapraid_mq_polling_resume(struct leapraid_adapter *adapter)
+{
+ int iopoll_q_count =
+ adapter->adapter_attr.rq_cnt -
+ adapter->notification_desc.iopoll_qdex;
+ int qid;
+
+ for (qid = 0; qid < iopoll_q_count; qid++)
+ atomic_set(&adapter->notification_desc.blk_mq_poll_rqs[qid].pause, 0);
+}
+
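+/*
+ * Write the diagnostic unlock key sequence to the write-sequence
+ * register until the controller sets the diag write-enable bit or the
+ * retry limit is reached.
+ */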
+static int leapraid_unlock_host_diag(struct leapraid_adapter *adapter,
+ u32 *host_diag)
+{
+ const u32 unlock_seq[] = { 0x0, 0xF, 0x4, 0xB, 0x2, 0x7, 0xD };
+ const int max_retries = LEAPRAID_UNLOCK_RETRY_LIMIT;
+ int retry = 0;
+ unsigned int i;
+
+ *host_diag = 0;
+ while (retry++ <= max_retries) {
+ for (i = 0; i < ARRAY_SIZE(unlock_seq); i++)
+ writel(unlock_seq[i], &adapter->iomem_base->ws);
+
+ msleep(LEAPRAID_UNLOCK_SLEEP_MS);
+
+ *host_diag = leapraid_readl(&adapter->iomem_base->host_diag);
+ if (*host_diag & LEAPRAID_DIAG_WRITE_ENABLE)
+ return 0;
+ }
+
+	dev_err(&adapter->pdev->dev,
+		"timed out waiting for diag write enable!\n");
+ return -EFAULT;
+}
+
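+/*
+ * Reset the controller through the host diagnostic register: unlock
+ * diag access, assert the reset bit, poll for it to clear and wait for
+ * the controller to report ready again.
+ */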
+static int leapraid_host_diag_reset(struct leapraid_adapter *adapter)
+{
+ u32 host_diag;
+ u32 cnt;
+
+ dev_info(&adapter->pdev->dev, "entering host diag reset!\n");
+ pci_cfg_access_lock(adapter->pdev);
+
+ mutex_lock(&adapter->reset_desc.host_diag_mutex);
+ if (leapraid_unlock_host_diag(adapter, &host_diag))
+ goto out;
+
+ writel(host_diag | LEAPRAID_DIAG_RESET,
+ &adapter->iomem_base->host_diag);
+
+ msleep(LEAPRAID_MSLEEP_NORMAL_MS);
+ for (cnt = 0; cnt < LEAPRAID_RESET_LOOP_COUNT_DEFAULT; cnt++) {
+ host_diag = leapraid_readl(&adapter->iomem_base->host_diag);
+ if (host_diag == LEAPRAID_INVALID_HOST_DIAG_VAL)
+ goto out;
+
+ if (!(host_diag & LEAPRAID_DIAG_RESET))
+ break;
+
+ msleep(LEAPRAID_RESET_POLL_INTERVAL_MS);
+ }
+
+ writel(host_diag & ~LEAPRAID_DIAG_HOLD_ADAPTER_RESET,
+ &adapter->iomem_base->host_diag);
+ writel(0x0, &adapter->iomem_base->ws);
+	mutex_unlock(&adapter->reset_desc.host_diag_mutex);
+	if (!leapraid_wait_adapter_ready(adapter))
+		goto out_unlocked;
+
+	pci_cfg_access_unlock(adapter->pdev);
+	dev_info(&adapter->pdev->dev, "host diag success!\n");
+	return 0;
+out:
+	mutex_unlock(&adapter->reset_desc.host_diag_mutex);
+out_unlocked:
+	pci_cfg_access_unlock(adapter->pdev);
+	dev_info(&adapter->pdev->dev, "host diag failed!\n");
+	return -EFAULT;
+}
+
+static int leapraid_find_matching_port(
+ struct leapraid_card_port *card_port_table,
+ u8 count, u8 port_id, u64 sas_addr)
+{
+ int i;
+
+ for (i = 0; i < count; i++) {
+ if (card_port_table[i].port_id == port_id &&
+ card_port_table[i].sas_address == sas_addr)
+ return i;
+ }
+ return -1;
+}
+
+static u8 leapraid_fill_card_port_table(
+ struct leapraid_adapter *adapter,
+ struct leapraid_sas_io_unit_p0 *sas_iounit_p0,
+ struct leapraid_card_port *new_card_port_table)
+{
+ u8 port_entry_num = 0, port_id;
+ u16 attached_hdl;
+ u64 attached_sas_addr;
+ int i, idx;
+
+ for (i = 0; i < adapter->dev_topo.card.phys_num; i++) {
+ if ((sas_iounit_p0->phy_info[i].neg_link_rate >> 4)
+ < LEAPRAID_SAS_NEG_LINK_RATE_1_5)
+ continue;
+
+ attached_hdl =
+ le16_to_cpu(sas_iounit_p0->phy_info[i].attached_dev_hdl);
+ if (leapraid_get_sas_address(adapter,
+ attached_hdl,
+ &attached_sas_addr) != 0)
+ continue;
+
+ port_id = sas_iounit_p0->phy_info[i].port;
+
+ idx = leapraid_find_matching_port(new_card_port_table,
+ port_entry_num,
+ port_id,
+ attached_sas_addr);
+ if (idx >= 0) {
+ new_card_port_table[idx].phy_mask |= BIT(i);
+ } else {
+ new_card_port_table[port_entry_num].port_id = port_id;
+ new_card_port_table[port_entry_num].phy_mask = BIT(i);
+ new_card_port_table[port_entry_num].sas_address =
+ attached_sas_addr;
+ port_entry_num++;
+ }
+ }
+
+ return port_entry_num;
+}
+
+static u8 leapraid_set_new_card_port_table_after_reset(
+ struct leapraid_adapter *adapter,
+ struct leapraid_card_port *new_card_port_table)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_io_unit_p0 *sas_iounit_p0 = NULL;
+ u8 port_entry_num = 0;
+ u16 sz;
+
+ sz = offsetof(struct leapraid_sas_io_unit_p0, phy_info) +
+ (adapter->dev_topo.card.phys_num *
+ sizeof(struct leapraid_sas_io_unit0_phy_info));
+ sas_iounit_p0 = kzalloc(sz, GFP_KERNEL);
+ if (!sas_iounit_p0)
+ return port_entry_num;
+
+ cfgp1.size = sz;
+ if ((leapraid_op_config_page(adapter, sas_iounit_p0, cfgp1, cfgp2,
+ GET_SAS_IOUNIT_PG0)) != 0)
+ goto out;
+
+ port_entry_num = leapraid_fill_card_port_table(adapter,
+ sas_iounit_p0,
+ new_card_port_table);
+out:
+ kfree(sas_iounit_p0);
+ return port_entry_num;
+}
+
+static void leapraid_update_existing_port(struct leapraid_adapter *adapter,
+ struct leapraid_card_port *new_table,
+ int entry_idx, int port_entry_num)
+{
+ struct leapraid_card_port *matched_card_port = NULL;
+ int matched_code;
+ int count = 0, lcount = 0;
+ u64 sas_addr;
+ int i;
+
+ matched_code = leapraid_check_card_port(adapter,
+ &new_table[entry_idx],
+ &matched_card_port,
+ &count);
+
+ if (!matched_card_port)
+ return;
+
+ if (matched_code == SAME_PORT_WITH_PARTIALLY_CHANGED_PHYS ||
+ matched_code == SAME_ADDR_WITH_PARTIALLY_CHANGED_PHYS) {
+ leapraid_add_or_del_phys_from_existing_port(adapter,
+ matched_card_port,
+ new_table,
+ entry_idx,
+ port_entry_num);
+ } else if (matched_code == SAME_ADDR_ONLY) {
+ sas_addr = new_table[entry_idx].sas_address;
+ for (i = 0; i < port_entry_num; i++) {
+ if (new_table[i].sas_address == sas_addr)
+ lcount++;
+ }
+ if (count > 1 || lcount > 1)
+ return;
+
+ leapraid_add_or_del_phys_from_existing_port(adapter,
+ matched_card_port,
+ new_table,
+ entry_idx,
+ port_entry_num);
+ }
+
+ if (matched_card_port->port_id != new_table[entry_idx].port_id)
+ matched_card_port->port_id = new_table[entry_idx].port_id;
+
+ matched_card_port->flg &= ~LEAPRAID_CARD_PORT_FLG_DIRTY;
+ matched_card_port->phy_mask = new_table[entry_idx].phy_mask;
+}
+
+static void leapraid_update_card_port_after_reset(
+ struct leapraid_adapter *adapter)
+{
+ struct leapraid_card_port *new_card_port_table;
+ struct leapraid_card_port *matched_card_port = NULL;
+ u8 port_entry_num = 0;
+ u8 nr_phys;
+ int i;
+
+ if (leapraid_get_adapter_phys(adapter, &nr_phys) || !nr_phys)
+ return;
+
+ adapter->dev_topo.card.phys_num = nr_phys;
+ new_card_port_table = kcalloc(adapter->dev_topo.card.phys_num,
+ sizeof(struct leapraid_card_port),
+ GFP_KERNEL);
+ if (!new_card_port_table)
+ return;
+
+ port_entry_num =
+ leapraid_set_new_card_port_table_after_reset(adapter,
+ new_card_port_table);
+	if (!port_entry_num)
+		goto out;
+
+ list_for_each_entry(matched_card_port,
+ &adapter->dev_topo.card_port_list, list) {
+ matched_card_port->flg |= LEAPRAID_CARD_PORT_FLG_DIRTY;
+ }
+
+ matched_card_port = NULL;
+	for (i = 0; i < port_entry_num; i++)
+		leapraid_update_existing_port(adapter,
+					      new_card_port_table,
+					      i, port_entry_num);
+out:
+	kfree(new_card_port_table);
+}
+
+static bool leapraid_is_valid_vphy(
+ struct leapraid_adapter *adapter,
+ struct leapraid_sas_io_unit_p0 *sas_io_unit_p0,
+ int phy_index)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_phy_p0 phy_p0;
+
+ if ((sas_io_unit_p0->phy_info[phy_index].neg_link_rate >> 4) <
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5)
+ return false;
+
+ if (!(le32_to_cpu(sas_io_unit_p0->phy_info[phy_index].controller_phy_dev_info) &
+ LEAPRAID_DEVTYP_SEP))
+ return false;
+
+ cfgp1.phy_number = phy_index;
+ if (leapraid_op_config_page(adapter, &phy_p0, cfgp1, cfgp2,
+ GET_PHY_PG0))
+ return false;
+
+ if (!(le32_to_cpu(phy_p0.phy_info) & LEAPRAID_SAS_PHYINFO_VPHY))
+ return false;
+
+ return true;
+}
+
+static void leapraid_update_vphy_binding(struct leapraid_adapter *adapter,
+ struct leapraid_card_port *card_port,
+ struct leapraid_vphy *vphy,
+ int phy_index, u8 may_new_port_id,
+ u64 attached_sas_addr)
+{
+ struct leapraid_card_port *may_new_card_port;
+ struct leapraid_sas_dev *sas_dev;
+
+ may_new_card_port = leapraid_get_port_by_id(adapter,
+ may_new_port_id,
+ true);
+ if (!may_new_card_port) {
+ may_new_card_port = kzalloc(sizeof(*may_new_card_port),
+ GFP_KERNEL);
+ if (!may_new_card_port)
+ return;
+ may_new_card_port->port_id = may_new_port_id;
+		dev_info(&adapter->pdev->dev,
+ "%s: new card port %p added, port=%d\n",
+ __func__, may_new_card_port, may_new_port_id);
+ list_add_tail(&may_new_card_port->list,
+ &adapter->dev_topo.card_port_list);
+ }
+
+ if (card_port != may_new_card_port) {
+ if (!may_new_card_port->vphys_mask)
+ INIT_LIST_HEAD(&may_new_card_port->vphys_list);
+ may_new_card_port->vphys_mask |= BIT(phy_index);
+ card_port->vphys_mask &= ~BIT(phy_index);
+ list_move(&vphy->list, &may_new_card_port->vphys_list);
+
+ sas_dev = leapraid_get_sas_dev_by_addr(adapter,
+ attached_sas_addr,
+ card_port);
+ if (sas_dev)
+ sas_dev->card_port = may_new_card_port;
+ }
+
+ if (may_new_card_port->flg & LEAPRAID_CARD_PORT_FLG_DIRTY) {
+ may_new_card_port->sas_address = 0;
+ may_new_card_port->phy_mask = 0;
+ may_new_card_port->flg &= ~LEAPRAID_CARD_PORT_FLG_DIRTY;
+ }
+ vphy->flg &= ~LEAPRAID_VPHY_FLG_DIRTY;
+}
+
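+/*
+ * After a reset, re-match each virtual phy against SAS IO unit page 0
+ * and move it to whichever card port now owns its attached SAS address.
+ */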
+static void leapraid_update_vphys_after_reset(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_io_unit_p0 *sas_iounit_p0 = NULL;
+ struct leapraid_card_port *card_port, *card_port_next;
+ struct leapraid_vphy *vphy, *vphy_next;
+ u64 attached_sas_addr;
+ u16 sz;
+ u16 attached_hdl;
+ bool found = false;
+ u8 port_id;
+ int i;
+
+ list_for_each_entry_safe(card_port, card_port_next,
+ &adapter->dev_topo.card_port_list, list) {
+ if (!card_port->vphys_mask)
+ continue;
+
+ list_for_each_entry_safe(vphy, vphy_next,
+ &card_port->vphys_list, list) {
+ vphy->flg |= LEAPRAID_VPHY_FLG_DIRTY;
+ }
+ }
+
+ sz = offsetof(struct leapraid_sas_io_unit_p0, phy_info) +
+ (adapter->dev_topo.card.phys_num *
+ sizeof(struct leapraid_sas_io_unit0_phy_info));
+ sas_iounit_p0 = kzalloc(sz, GFP_KERNEL);
+ if (!sas_iounit_p0)
+ return;
+
+ cfgp1.size = sz;
+ if ((leapraid_op_config_page(adapter, sas_iounit_p0, cfgp1, cfgp2,
+ GET_SAS_IOUNIT_PG0)) != 0)
+ goto out;
+
+ for (i = 0; i < adapter->dev_topo.card.phys_num; i++) {
+ if (!leapraid_is_valid_vphy(adapter, sas_iounit_p0, i))
+ continue;
+
+ attached_hdl =
+ le16_to_cpu(sas_iounit_p0->phy_info[i].attached_dev_hdl);
+ if (leapraid_get_sas_address(adapter, attached_hdl,
+ &attached_sas_addr) != 0)
+ continue;
+
+ found = false;
+ card_port = NULL;
+ card_port_next = NULL;
+ list_for_each_entry_safe(card_port, card_port_next,
+ &adapter->dev_topo.card_port_list,
+ list) {
+ if (!card_port->vphys_mask)
+ continue;
+
+ list_for_each_entry_safe(vphy, vphy_next,
+ &card_port->vphys_list,
+ list) {
+ if (!(vphy->flg & LEAPRAID_VPHY_FLG_DIRTY))
+ continue;
+
+ if (vphy->sas_address != attached_sas_addr)
+ continue;
+
+ if (!(vphy->phy_mask & BIT(i)))
+ vphy->phy_mask = BIT(i);
+
+ port_id = sas_iounit_p0->phy_info[i].port;
+
+ leapraid_update_vphy_binding(adapter,
+ card_port,
+ vphy,
+ i,
+ port_id,
+ attached_sas_addr);
+
+ found = true;
+ break;
+ }
+ if (found)
+ break;
+ }
+ }
+out:
+ kfree(sas_iounit_p0);
+}
+
+static void leapraid_mark_all_dev_deleted(struct leapraid_adapter *adapter)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_device *sdev;
+
+ shost_for_each_device(sdev, adapter->shost) {
+ sdev_priv = sdev->hostdata;
+ if (sdev_priv && sdev_priv->starget_priv)
+ sdev_priv->starget_priv->deleted = true;
+ }
+}
+
+static void leapraid_free_enc_list(struct leapraid_adapter *adapter)
+{
+ struct leapraid_enc_node *enc_dev, *enc_dev_next;
+
+ list_for_each_entry_safe(enc_dev, enc_dev_next,
+ &adapter->dev_topo.enc_list,
+ list) {
+ list_del(&enc_dev->list);
+ kfree(enc_dev);
+ }
+}
+
+static void leapraid_rebuild_enc_list_after_reset(
+ struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_enc_node *enc_node;
+ u16 enc_hdl;
+ int rc;
+
+ leapraid_free_enc_list(adapter);
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (enc_hdl = 0xFFFF; ; enc_hdl = le16_to_cpu(enc_node->pg0.enc_hdl)) {
+ enc_node = kzalloc(sizeof(*enc_node),
+ GFP_KERNEL);
+ if (!enc_node)
+ return;
+
+ cfgp2.handle = enc_hdl;
+ rc = leapraid_op_config_page(adapter, &enc_node->pg0, cfgp1,
+ cfgp2, GET_SAS_ENCLOSURE_PG0);
+ if (rc) {
+ kfree(enc_node);
+ return;
+ }
+
+ list_add_tail(&enc_node->list, &adapter->dev_topo.enc_list);
+ }
+}
+
+static void leapraid_mark_resp_sas_dev(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev_p0 *sas_dev_p0)
+{
+ struct leapraid_starget_priv *starget_priv = NULL;
+ struct leapraid_enc_node *enc_node = NULL;
+ struct leapraid_card_port *card_port;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_target *starget;
+ unsigned long flags;
+
+ card_port = leapraid_get_port_by_id(adapter, sas_dev_p0->physical_port,
+ false);
+ if (sas_dev_p0->enc_hdl) {
+ enc_node = leapraid_enc_find_by_hdl(adapter,
+ le16_to_cpu(
+ sas_dev_p0->enc_hdl));
+ if (!enc_node)
+ dev_info(&adapter->pdev->dev,
+ "enc hdl 0x%04x has no matched enc dev\n",
+ le16_to_cpu(sas_dev_p0->enc_hdl));
+ }
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ list_for_each_entry(sas_dev, &adapter->dev_topo.sas_dev_list, list) {
+ if (sas_dev->sas_addr == le64_to_cpu(sas_dev_p0->sas_address) &&
+ sas_dev->slot == le16_to_cpu(sas_dev_p0->slot) &&
+ sas_dev->card_port == card_port) {
+ sas_dev->resp = true;
+ starget = sas_dev->starget;
+ if (starget && starget->hostdata) {
+ starget_priv = starget->hostdata;
+ starget_priv->tm_busy = false;
+ starget_priv->deleted = false;
+ } else {
+ starget_priv = NULL;
+ }
+
+ if (starget) {
+ starget_printk(KERN_INFO, starget,
+ "dev: hdl=0x%04x, sas addr=0x%016llx, port_id=%d\n",
+ sas_dev->hdl,
+ (unsigned long long)sas_dev->sas_addr,
+ sas_dev->card_port->port_id);
+ if (sas_dev->enc_hdl != 0)
+ starget_printk(KERN_INFO, starget,
+ "enc info: enc_lid=0x%016llx, slot=%d\n",
+ (unsigned long long)sas_dev->enc_lid,
+ sas_dev->slot);
+ }
+
+ if (le16_to_cpu(sas_dev_p0->flg) &
+ LEAPRAID_SAS_DEV_P0_FLG_ENC_LEVEL_VALID) {
+ sas_dev->enc_level = sas_dev_p0->enc_level;
+ memcpy(sas_dev->connector_name,
+ sas_dev_p0->connector_name, 4);
+ sas_dev->connector_name[4] = '\0';
+ } else {
+ sas_dev->enc_level = 0;
+ sas_dev->connector_name[0] = '\0';
+ }
+
+ sas_dev->enc_hdl =
+ le16_to_cpu(sas_dev_p0->enc_hdl);
+ if (enc_node) {
+ sas_dev->enc_lid =
+ le64_to_cpu(enc_node->pg0.enc_lid);
+ }
+ if (sas_dev->hdl == le16_to_cpu(sas_dev_p0->dev_hdl))
+ goto out;
+
+ dev_info(&adapter->pdev->dev,
+ "hdl changed: 0x%04x -> 0x%04x\n",
+				 sas_dev->hdl,
+				 le16_to_cpu(sas_dev_p0->dev_hdl));
+ sas_dev->hdl = le16_to_cpu(sas_dev_p0->dev_hdl);
+ if (starget_priv)
+ starget_priv->hdl =
+ le16_to_cpu(sas_dev_p0->dev_hdl);
+ goto out;
+ }
+ }
+out:
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+}
+
+static void leapraid_search_resp_sas_dev(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_p0;
+ u32 device_info;
+
+ dev_info(&adapter->pdev->dev,
+ "begin searching for sas end devices\n");
+
+ if (list_empty(&adapter->dev_topo.sas_dev_list))
+ goto out;
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (cfgp2.handle = 0xFFFF;
+ !leapraid_op_config_page(adapter, &sas_dev_p0,
+ cfgp1, cfgp2, GET_SAS_DEVICE_PG0);
+ cfgp2.handle = le16_to_cpu(sas_dev_p0.dev_hdl)) {
+ device_info = le32_to_cpu(sas_dev_p0.dev_info);
+ if (!(leapraid_is_end_dev(device_info)))
+ continue;
+
+ leapraid_mark_resp_sas_dev(adapter, &sas_dev_p0);
+ }
+out:
+ dev_info(&adapter->pdev->dev,
+		 "sas end device search complete\n");
+}
+
+static void leapraid_mark_resp_raid_volume(struct leapraid_adapter *adapter,
+ u64 wwid, u16 hdl)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_raid_volume *raid_volume;
+ struct scsi_target *starget;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ list_for_each_entry(raid_volume,
+ &adapter->dev_topo.raid_volume_list, list) {
+ if (raid_volume->wwid == wwid && raid_volume->starget) {
+ starget = raid_volume->starget;
+ if (starget && starget->hostdata) {
+ starget_priv = starget->hostdata;
+ starget_priv->deleted = false;
+ } else {
+ starget_priv = NULL;
+ }
+
+ raid_volume->resp = true;
+ spin_unlock_irqrestore(
+ &adapter->dev_topo.raid_volume_lock,
+ flags);
+
+ starget_printk(
+ KERN_INFO, raid_volume->starget,
+ "raid volume: hdl=0x%04x, wwid=0x%016llx\n",
+ hdl, (unsigned long long)raid_volume->wwid);
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock,
+ flags);
+ if (raid_volume->hdl == hdl) {
+ spin_unlock_irqrestore(
+ &adapter->dev_topo.raid_volume_lock,
+ flags);
+ return;
+ }
+
+ dev_info(&adapter->pdev->dev,
+ "hdl changed: 0x%04x -> 0x%04x\n",
+ raid_volume->hdl, hdl);
+
+ raid_volume->hdl = hdl;
+ if (starget_priv)
+ starget_priv->hdl = hdl;
+ spin_unlock_irqrestore(
+ &adapter->dev_topo.raid_volume_lock,
+ flags);
+ return;
+ }
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+}
+
+static void leapraid_search_resp_raid_volume(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_1 cfgp1_extra = {0};
+ union cfg_param_2 cfgp2 = {0};
+ union cfg_param_2 cfgp2_extra = {0};
+ struct leapraid_raidvol_p1 raidvol_p1;
+ struct leapraid_raidvol_p0 raidvol_p0;
+ struct leapraid_raidpd_p0 raidpd_p0;
+ u16 hdl;
+ u8 phys_disk_num;
+
+ if (!adapter->adapter_attr.raid_support)
+ return;
+
+ dev_info(&adapter->pdev->dev,
+ "begin searching for raid volumes\n");
+
+ if (list_empty(&adapter->dev_topo.raid_volume_list))
+ goto out;
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (hdl = 0xFFFF, cfgp2.handle = hdl;
+ !leapraid_op_config_page(adapter, &raidvol_p1, cfgp1, cfgp2,
+ GET_RAID_VOLUME_PG1);
+ cfgp2.handle = hdl) {
+ hdl = le16_to_cpu(raidvol_p1.dev_hdl);
+ cfgp1_extra.size = sizeof(struct leapraid_raidvol_p0);
+ cfgp2_extra.handle = hdl;
+ if (leapraid_op_config_page(adapter, &raidvol_p0, cfgp1_extra,
+ cfgp2_extra, GET_RAID_VOLUME_PG0))
+ continue;
+
+ if (raidvol_p0.volume_state == LEAPRAID_VOL_STATE_OPTIMAL ||
+ raidvol_p0.volume_state == LEAPRAID_VOL_STATE_ONLINE ||
+ raidvol_p0.volume_state == LEAPRAID_VOL_STATE_DEGRADED)
+ leapraid_mark_resp_raid_volume(
+ adapter,
+ le64_to_cpu(raidvol_p1.wwid),
+ hdl);
+ }
+
+ memset(adapter->dev_topo.pd_hdls, 0, adapter->dev_topo.pd_hdls_sz);
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (phys_disk_num = 0xFF, cfgp2.form_specific = phys_disk_num;
+ !leapraid_op_config_page(adapter, &raidpd_p0, cfgp1, cfgp2,
+ GET_PHY_DISK_PG0);
+ cfgp2.form_specific = phys_disk_num) {
+ phys_disk_num = raidpd_p0.phys_disk_num;
+ hdl = le16_to_cpu(raidpd_p0.dev_hdl);
+ set_bit(hdl, (unsigned long *)adapter->dev_topo.pd_hdls);
+ }
+out:
+ dev_info(&adapter->pdev->dev,
+		 "raid volume search complete\n");
+}
+
+static void leapraid_mark_resp_exp(struct leapraid_adapter *adapter,
+ struct leapraid_exp_p0 *exp_pg0)
+{
+ struct leapraid_enc_node *enc_node = NULL;
+ struct leapraid_topo_node *topo_node_exp;
+ u16 enc_hdl = le16_to_cpu(exp_pg0->enc_hdl);
+ u64 sas_address = le64_to_cpu(exp_pg0->sas_address);
+ u16 hdl = le16_to_cpu(exp_pg0->dev_hdl);
+ u8 port_id = exp_pg0->physical_port;
+ struct leapraid_card_port *card_port = leapraid_get_port_by_id(adapter,
+ port_id,
+ false);
+ unsigned long flags;
+ int i;
+
+ if (enc_hdl)
+ enc_node = leapraid_enc_find_by_hdl(adapter, enc_hdl);
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_for_each_entry(topo_node_exp, &adapter->dev_topo.exp_list, list) {
+ if (topo_node_exp->sas_address != sas_address ||
+ topo_node_exp->card_port != card_port)
+ continue;
+
+ topo_node_exp->resp = true;
+ if (enc_node) {
+ topo_node_exp->enc_lid =
+ le64_to_cpu(enc_node->pg0.enc_lid);
+ topo_node_exp->enc_hdl = le16_to_cpu(exp_pg0->enc_hdl);
+ }
+ if (topo_node_exp->hdl == hdl)
+ goto out;
+
+ dev_info(&adapter->pdev->dev,
+ "hdl changed: 0x%04x -> 0x%04x\n",
+ topo_node_exp->hdl, hdl);
+ topo_node_exp->hdl = hdl;
+ for (i = 0; i < topo_node_exp->phys_num; i++)
+ topo_node_exp->card_phy[i].hdl = hdl;
+ goto out;
+ }
+out:
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+}
+
+static void leapraid_search_resp_exp(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_exp_p0 exp_p0;
+ u64 sas_address;
+ u16 hdl;
+ u8 port;
+
+ dev_info(&adapter->pdev->dev,
+ "begin searching for expanders\n");
+ if (list_empty(&adapter->dev_topo.exp_list))
+ goto out;
+
+ cfgp1.form = LEAPRAID_SAS_CFG_PGAD_GET_NEXT_LOOP;
+ for (hdl = 0xFFFF, cfgp2.handle = hdl;
+ !leapraid_op_config_page(adapter, &exp_p0, cfgp1, cfgp2,
+ GET_SAS_EXPANDER_PG0);
+ cfgp2.handle = hdl) {
+ hdl = le16_to_cpu(exp_p0.dev_hdl);
+ sas_address = le64_to_cpu(exp_p0.sas_address);
+ port = exp_p0.physical_port;
+
+ dev_info(&adapter->pdev->dev,
+			 "exp detected: hdl=0x%04x, sas=0x%016llx, port=%u\n",
+ hdl, (unsigned long long)sas_address,
+ ((adapter->adapter_attr.enable_mp) ? (port) :
+ (LEAPRAID_DISABLE_MP_PORT_ID)));
+ leapraid_mark_resp_exp(adapter, &exp_p0);
+ }
+out:
+ dev_info(&adapter->pdev->dev,
+		 "expander search complete\n");
+}
+
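+/*
+ * Count SCSI commands still tracked as outstanding and give them up to
+ * ten seconds to drain before a reset proceeds.
+ */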
+void leapraid_wait_cmds_done(struct leapraid_adapter *adapter)
+{
+ struct leapraid_io_req_tracker *io_req_tracker;
+ unsigned long flags;
+ u16 i;
+
+ adapter->reset_desc.pending_io_cnt = 0;
+ if (!leapraid_pci_active(adapter)) {
+ dev_err(&adapter->pdev->dev,
+ "%s %s: pci error, device reset or unplugged!\n",
+ adapter->adapter_attr.name, __func__);
+ return;
+ }
+
+ if (leapraid_get_adapter_state(adapter) != LEAPRAID_DB_OPERATIONAL)
+ return;
+
+ spin_lock_irqsave(&adapter->dynamic_task_desc.task_lock, flags);
+ for (i = 1; i <= adapter->shost->can_queue; i++) {
+ io_req_tracker = leapraid_get_io_tracker_from_taskid(adapter,
+ i);
+ if (io_req_tracker && io_req_tracker->taskid != 0)
+ if (io_req_tracker->scmd)
+ adapter->reset_desc.pending_io_cnt++;
+ }
+ spin_unlock_irqrestore(&adapter->dynamic_task_desc.task_lock, flags);
+
+ if (!adapter->reset_desc.pending_io_cnt)
+ return;
+
+ wait_event_timeout(adapter->reset_desc.reset_wait_queue,
+ adapter->reset_desc.pending_io_cnt == 0, 10 * HZ);
+}
+
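+/*
+ * Top-level hard reset: quiesce I/O, polling and interrupts, bring the
+ * controller back to an operational state and rebuild the topology
+ * bookkeeping. A concurrent caller waits for the in-progress reset and
+ * returns its result.
+ */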
+int leapraid_hard_reset_handler(struct leapraid_adapter *adapter,
+ enum reset_type type)
+{
+ unsigned long flags;
+ int rc;
+
+ if (!mutex_trylock(&adapter->reset_desc.adapter_reset_mutex)) {
+ do {
+ ssleep(1);
+ } while (adapter->access_ctrl.shost_recovering);
+ return adapter->reset_desc.adapter_reset_results;
+ }
+
+ if (!leapraid_pci_active(adapter)) {
+ if (leapraid_pci_removed(adapter)) {
+ dev_info(&adapter->pdev->dev,
+ "pci_dev removed, pausing polling and cleaning cmds\n");
+ leapraid_mq_polling_pause(adapter);
+ leapraid_clean_active_scsi_cmds(adapter);
+ leapraid_mq_polling_resume(adapter);
+ }
+ rc = 0;
+ goto exit_pci_unavailable;
+ }
+
+ dev_info(&adapter->pdev->dev, "starting hard reset\n");
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ adapter->access_ctrl.shost_recovering = true;
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+
+ leapraid_wait_cmds_done(adapter);
+ leapraid_mask_int(adapter);
+ leapraid_mq_polling_pause(adapter);
+ rc = leapraid_make_adapter_ready(adapter, type);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "failed to make adapter ready, rc=%d\n", rc);
+ goto out;
+ }
+
+ rc = leapraid_fw_log_init(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "firmware log init failed\n");
+ goto out;
+ }
+
+ leapraid_clean_active_cmds(adapter);
+ if (adapter->scan_dev_desc.driver_loading &&
+ adapter->scan_dev_desc.scan_dev_failed) {
+ dev_err(&adapter->pdev->dev,
+			"device scan failed during driver load\n");
+ adapter->access_ctrl.host_removing = true;
+ rc = -EFAULT;
+ goto out;
+ }
+
+ rc = leapraid_make_adapter_available(adapter);
+ if (!rc) {
+ dev_info(&adapter->pdev->dev,
+ "adapter is now available, rebuilding topology\n");
+ if (adapter->adapter_attr.enable_mp) {
+ leapraid_update_card_port_after_reset(adapter);
+ leapraid_update_vphys_after_reset(adapter);
+ }
+ leapraid_mark_all_dev_deleted(adapter);
+ leapraid_rebuild_enc_list_after_reset(adapter);
+ leapraid_search_resp_sas_dev(adapter);
+ leapraid_search_resp_raid_volume(adapter);
+ leapraid_search_resp_exp(adapter);
+ leapraid_hardreset_barrier(adapter);
+ }
+out:
+ dev_info(&adapter->pdev->dev, "hard reset %s\n",
+ ((rc == 0) ? "SUCCESS" : "FAILED"));
+
+ spin_lock_irqsave(&adapter->reset_desc.adapter_reset_lock, flags);
+ adapter->reset_desc.adapter_reset_results = rc;
+ adapter->access_ctrl.shost_recovering = false;
+ spin_unlock_irqrestore(&adapter->reset_desc.adapter_reset_lock, flags);
+ adapter->reset_desc.reset_cnt++;
+ mutex_unlock(&adapter->reset_desc.adapter_reset_mutex);
+
+ if (rc)
+ leapraid_clean_active_scsi_cmds(adapter);
+	leapraid_mq_polling_resume(adapter);
+	return rc;
+
+exit_pci_unavailable:
+	dev_info(&adapter->pdev->dev, "pcie unavailable!\n");
+	mutex_unlock(&adapter->reset_desc.adapter_reset_mutex);
+	return rc;
+}
+
+static int leapraid_get_adapter_features(struct leapraid_adapter *adapter)
+{
+ struct leapraid_adapter_features_req leap_mpi_req;
+ struct leapraid_adapter_features_rep leap_mpi_rep;
+ u8 fw_major, fw_minor, fw_build, fw_release;
+ u32 db;
+ int r;
+
+ db = leapraid_readl(&adapter->iomem_base->db);
+ if (db & LEAPRAID_DB_USED ||
+ (db & LEAPRAID_DB_MASK) == LEAPRAID_DB_FAULT)
+ return -EFAULT;
+
+ if (((db & LEAPRAID_DB_MASK) != LEAPRAID_DB_READY) &&
+ ((db & LEAPRAID_DB_MASK) != LEAPRAID_DB_OPERATIONAL)) {
+ if (!leapraid_wait_adapter_ready(adapter))
+ return -EFAULT;
+ }
+
+ memset(&leap_mpi_req, 0, sizeof(struct leapraid_adapter_features_req));
+ memset(&leap_mpi_rep, 0, sizeof(struct leapraid_adapter_features_rep));
+ leap_mpi_req.func = LEAPRAID_FUNC_GET_ADAPTER_FEATURES;
+ r = leapraid_handshake_func(adapter,
+ sizeof(struct leapraid_adapter_features_req),
+ (u32 *)&leap_mpi_req,
+ sizeof(struct leapraid_adapter_features_rep),
+ (u16 *)&leap_mpi_rep);
+ if (r) {
+ dev_err(&adapter->pdev->dev,
+ "%s %s: handshake failed, r=%d\n",
+ adapter->adapter_attr.name, __func__, r);
+ return r;
+ }
+
+ memset(&adapter->adapter_attr.features, 0,
+ sizeof(struct leapraid_adapter_features));
+ adapter->adapter_attr.features.req_slot =
+ le16_to_cpu(leap_mpi_rep.req_slot);
+ adapter->adapter_attr.features.hp_slot =
+ le16_to_cpu(leap_mpi_rep.hp_slot);
+ adapter->adapter_attr.features.adapter_caps =
+ le32_to_cpu(leap_mpi_rep.adapter_caps);
+ adapter->adapter_attr.features.max_volumes =
+ leap_mpi_rep.max_volumes;
+ if (!adapter->adapter_attr.features.max_volumes)
+ adapter->adapter_attr.features.max_volumes =
+ LEAPRAID_MAX_VOLUMES_DEFAULT;
+ adapter->adapter_attr.features.max_dev_handle =
+ le16_to_cpu(leap_mpi_rep.max_dev_hdl);
+ if (!adapter->adapter_attr.features.max_dev_handle)
+ adapter->adapter_attr.features.max_dev_handle =
+ LEAPRAID_MAX_DEV_HANDLE_DEFAULT;
+ adapter->adapter_attr.features.min_dev_handle =
+ le16_to_cpu(leap_mpi_rep.min_dev_hdl);
+ if ((adapter->adapter_attr.features.adapter_caps &
+ LEAPRAID_ADAPTER_FEATURES_CAP_INTEGRATED_RAID))
+ adapter->adapter_attr.raid_support = true;
+ if (WARN_ON(!(adapter->adapter_attr.features.adapter_caps &
+ LEAPRAID_ADAPTER_FEATURES_CAP_ATOMIC_REQ)))
+ return -EFAULT;
+ adapter->adapter_attr.features.fw_version =
+ le32_to_cpu(leap_mpi_rep.fw_version);
+
+ fw_major = (adapter->adapter_attr.features.fw_version >> 24) & 0xFF;
+ fw_minor = (adapter->adapter_attr.features.fw_version >> 16) & 0xFF;
+ fw_build = (adapter->adapter_attr.features.fw_version >> 8) & 0xFF;
+ fw_release = adapter->adapter_attr.features.fw_version & 0xFF;
+
+ dev_info(&adapter->pdev->dev,
+ "Firmware version: %u.%u.%u.%u (0x%08x)\n",
+ fw_major, fw_minor, fw_build, fw_release,
+ adapter->adapter_attr.features.fw_version);
+
+ if (fw_major < 2) {
+ dev_err(&adapter->pdev->dev,
+ "Unsupported firmware major version, requires >= 2\n");
+ return -EFAULT;
+ }
+ adapter->shost->max_id = -1;
+
+ return 0;
+}
+
+static inline void leapraid_disable_pcie(struct leapraid_adapter *adapter)
+{
+ mutex_lock(&adapter->access_ctrl.pci_access_lock);
+ if (adapter->iomem_base) {
+ iounmap(adapter->iomem_base);
+ adapter->iomem_base = NULL;
+ }
+ if (pci_is_enabled(adapter->pdev)) {
+ pci_release_regions(adapter->pdev);
+ pci_disable_device(adapter->pdev);
+ }
+ mutex_unlock(&adapter->access_ctrl.pci_access_lock);
+}
+
+static int leapraid_enable_pcie(struct leapraid_adapter *adapter)
+{
+ u64 dma_mask;
+ int rc;
+
+ rc = pci_enable_device(adapter->pdev);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "failed to enable PCI device\n");
+ return rc;
+ }
+
+ rc = pci_request_regions(adapter->pdev, LEAPRAID_DRIVER_NAME);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "failed to obtain PCI resources\n");
+		goto disable_device;
+ }
+
+ if (sizeof(dma_addr_t) > 4) {
+ dma_mask = DMA_BIT_MASK(64);
+ adapter->adapter_attr.use_32_dma_mask = false;
+ } else {
+ dma_mask = DMA_BIT_MASK(32);
+ adapter->adapter_attr.use_32_dma_mask = true;
+ }
+
+ rc = dma_set_mask_and_coherent(&adapter->pdev->dev, dma_mask);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+			"failed to set DMA mask 0x%llx\n", dma_mask);
+ goto disable_pcie;
+ }
+ adapter->iomem_base = ioremap(pci_resource_start(adapter->pdev, 0),
+ sizeof(struct leapraid_reg_base));
+ if (!adapter->iomem_base) {
+ dev_err(&adapter->pdev->dev,
+ "failed to map memory for controller registers\n");
+ rc = -ENOMEM;
+ goto disable_pcie;
+ }
+
+ pci_set_master(adapter->pdev);
+
+ return 0;
+
+disable_pcie:
+	pci_release_regions(adapter->pdev);
+disable_device:
+	pci_disable_device(adapter->pdev);
+	return rc;
+}
+
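+/*
+ * Spread online CPUs evenly across the interrupt-driven reply queues by
+ * filling the msix_cpu_map table.
+ */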
+static void leapraid_cpus_on_irq(struct leapraid_adapter *adapter)
+{
+ struct leapraid_int_rq *int_rq;
+ unsigned int i, base_group, this_group;
+ unsigned int cpu, nr_cpus, total_msix, index = 0;
+
+ total_msix = adapter->notification_desc.iopoll_qdex;
+ nr_cpus = num_online_cpus();
+
+ if (!nr_cpus || !total_msix)
+ return;
+ base_group = nr_cpus / total_msix;
+
+ cpu = cpumask_first(cpu_online_mask);
+ for (index = 0; index < adapter->notification_desc.iopoll_qdex;
+ index++) {
+ int_rq = &adapter->notification_desc.int_rqs[index];
+
+ if (cpu >= nr_cpus)
+ break;
+
+ this_group = base_group +
+ (index < (nr_cpus % total_msix) ? 1 : 0);
+
+ for (i = 0 ; i < this_group ; i++) {
+ adapter->notification_desc.msix_cpu_map[cpu] =
+ int_rq->rq.msix_idx;
+ cpu = cpumask_next(cpu, cpu_online_mask);
+ }
+ }
+}
+
+static void leapraid_map_msix_to_cpu(struct leapraid_adapter *adapter)
+{
+ struct leapraid_int_rq *int_rq;
+ const cpumask_t *affinity_mask;
+ u32 i;
+ u16 cpu;
+
+ if (!adapter->adapter_attr.rq_cnt)
+ return;
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qdex; i++) {
+ int_rq = &adapter->notification_desc.int_rqs[i];
+ affinity_mask = pci_irq_get_affinity(adapter->pdev,
+ int_rq->rq.msix_idx);
+ if (!affinity_mask)
+ goto out;
+
+ for_each_cpu_and(cpu, affinity_mask, cpu_online_mask) {
+ if (cpu >= adapter->notification_desc.msix_cpu_map_sz)
+ break;
+
+ adapter->notification_desc.msix_cpu_map[cpu] =
+ int_rq->rq.msix_idx;
+ }
+ }
+out:
+ leapraid_cpus_on_irq(adapter);
+}
+
+static void leapraid_configure_reply_queue_affinity(
+ struct leapraid_adapter *adapter)
+{
+ if (!adapter || !adapter->notification_desc.msix_enable)
+ return;
+
+ leapraid_map_msix_to_cpu(adapter);
+}
+
+static void leapraid_free_irq(struct leapraid_adapter *adapter)
+{
+ struct leapraid_int_rq *int_rq;
+ unsigned int i;
+
+ if (!adapter->notification_desc.int_rqs)
+ return;
+
+ for (i = 0; i < adapter->notification_desc.int_rqs_allocated; i++) {
+ int_rq = &adapter->notification_desc.int_rqs[i];
+ if (!int_rq)
+ continue;
+
+ irq_set_affinity_hint(pci_irq_vector(adapter->pdev,
+ int_rq->rq.msix_idx), NULL);
+ free_irq(pci_irq_vector(adapter->pdev, int_rq->rq.msix_idx),
+ &int_rq->rq);
+ }
+ adapter->notification_desc.int_rqs_allocated = 0;
+
+ if (!adapter->notification_desc.msix_enable)
+ return;
+
+ pci_free_irq_vectors(adapter->pdev);
+ adapter->notification_desc.msix_enable = false;
+
+ kfree(adapter->notification_desc.blk_mq_poll_rqs);
+ adapter->notification_desc.blk_mq_poll_rqs = NULL;
+
+ kfree(adapter->notification_desc.int_rqs);
+ adapter->notification_desc.int_rqs = NULL;
+
+ kfree(adapter->notification_desc.msix_cpu_map);
+ adapter->notification_desc.msix_cpu_map = NULL;
+}
+
+static inline int leapraid_msix_cnt(struct pci_dev *pdev)
+{
+ return pci_msix_vec_count(pdev);
+}
+
+static inline int leapraid_msi_cnt(struct pci_dev *pdev)
+{
+ return pci_msi_vec_count(pdev);
+}
+
+static int leapraid_setup_irqs(struct leapraid_adapter *adapter)
+{
+ unsigned int i;
+ int rc = 0;
+
+ if (interrupt_mode == 0) {
+ rc = pci_alloc_irq_vectors_affinity(
+ adapter->pdev,
+ adapter->notification_desc.iopoll_qdex,
+ adapter->notification_desc.iopoll_qdex,
+ PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, NULL);
+
+ if (rc < 0) {
+ dev_err(&adapter->pdev->dev,
+				"failed to allocate %d msi/msix vectors!\n",
+ adapter->notification_desc.iopoll_qdex);
+ return rc;
+ }
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qdex; i++) {
+ adapter->notification_desc.int_rqs[i].rq.adapter = adapter;
+ adapter->notification_desc.int_rqs[i].rq.msix_idx = i;
+ atomic_set(&adapter->notification_desc.int_rqs[i].rq.busy, 0);
+ if (interrupt_mode == 0)
+ snprintf(adapter->notification_desc.int_rqs[i].rq.name,
+ LEAPRAID_NAME_LENGTH, "%s%u-MSIx%u",
+ LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id, i);
+ else if (interrupt_mode == 1)
+ snprintf(adapter->notification_desc.int_rqs[i].rq.name,
+ LEAPRAID_NAME_LENGTH, "%s%u-MSI%u",
+ LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id, i);
+
+ rc = request_irq(pci_irq_vector(adapter->pdev, i),
+ leapraid_irq_handler,
+ IRQF_SHARED,
+ adapter->notification_desc.int_rqs[i].rq.name,
+ &adapter->notification_desc.int_rqs[i].rq);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "MSI/MSIx: request_irq %s failed!\n",
+ adapter->notification_desc.int_rqs[i].rq.name);
+ return rc;
+ }
+ adapter->notification_desc.int_rqs_allocated++;
+ }
+
+ return 0;
+}
+
+static int leapraid_setup_legacy_int(struct leapraid_adapter *adapter)
+{
+ int rc;
+
+ adapter->notification_desc.int_rqs[0].rq.adapter = adapter;
+ adapter->notification_desc.int_rqs[0].rq.msix_idx = 0;
+ atomic_set(&adapter->notification_desc.int_rqs[0].rq.busy, 0);
+ snprintf(adapter->notification_desc.int_rqs[0].rq.name,
+ LEAPRAID_NAME_LENGTH, "%s%d-LegacyInt",
+ LEAPRAID_DRIVER_NAME, adapter->adapter_attr.id);
+
+ rc = pci_alloc_irq_vectors_affinity(
+ adapter->pdev,
+ adapter->notification_desc.iopoll_qdex,
+ adapter->notification_desc.iopoll_qdex,
+ PCI_IRQ_LEGACY | PCI_IRQ_AFFINITY,
+ NULL);
+ if (rc < 0) {
+ dev_err(&adapter->pdev->dev,
+			"legacy irq allocation failed!\n");
+ return rc;
+ }
+
+ rc = request_irq(pci_irq_vector(adapter->pdev, 0),
+ leapraid_irq_handler,
+ IRQF_SHARED,
+ adapter->notification_desc.int_rqs[0].rq.name,
+ &adapter->notification_desc.int_rqs[0].rq);
+ if (rc) {
+ irq_set_affinity_hint(pci_irq_vector(adapter->pdev, 0), NULL);
+ pci_free_irq_vectors(adapter->pdev);
+ dev_err(&adapter->pdev->dev,
+			"Legacy Int: request_irq %s failed!\n",
+ adapter->notification_desc.int_rqs[0].rq.name);
+ return -EBUSY;
+ }
+ adapter->notification_desc.int_rqs_allocated = 1;
+ return rc;
+}
+
+static int leapraid_set_legacy_int(struct leapraid_adapter *adapter)
+{
+ int rc;
+
+ adapter->notification_desc.msix_cpu_map_sz = num_online_cpus();
+ adapter->notification_desc.msix_cpu_map =
+ kzalloc(adapter->notification_desc.msix_cpu_map_sz,
+ GFP_KERNEL);
+ if (!adapter->notification_desc.msix_cpu_map)
+ return -ENOMEM;
+
+ adapter->adapter_attr.rq_cnt = 1;
+ adapter->notification_desc.iopoll_qdex =
+ adapter->adapter_attr.rq_cnt;
+ adapter->notification_desc.iopoll_qcnt = 0;
+ dev_info(&adapter->pdev->dev,
+ "Legacy Intr: req queue cnt=%d, intr=%d/poll=%d rep queues!\n",
+ adapter->adapter_attr.rq_cnt,
+ adapter->notification_desc.iopoll_qdex,
+ adapter->notification_desc.iopoll_qcnt);
+ adapter->notification_desc.int_rqs =
+ kcalloc(adapter->notification_desc.iopoll_qdex,
+ sizeof(struct leapraid_int_rq), GFP_KERNEL);
+ if (!adapter->notification_desc.int_rqs) {
+ dev_err(&adapter->pdev->dev,
+			"Legacy Intr: failed to allocate %d intr rep queues!\n",
+ adapter->notification_desc.iopoll_qdex);
+ return -ENOMEM;
+ }
+
+ rc = leapraid_setup_legacy_int(adapter);
+
+ return rc;
+}
+
+static int leapraid_set_msix(struct leapraid_adapter *adapter)
+{
+ int iopoll_qcnt = 0;
+ unsigned int i;
+ int rc, msix_cnt;
+
+ if (msix_disable == 1)
+ goto legacy_int;
+
+ msix_cnt = leapraid_msix_cnt(adapter->pdev);
+ if (msix_cnt <= 0) {
+ dev_info(&adapter->pdev->dev, "msix unsupported!\n");
+ goto legacy_int;
+ }
+
+ if (reset_devices)
+ adapter->adapter_attr.rq_cnt = 1;
+ else
+ adapter->adapter_attr.rq_cnt = min_t(int,
+ num_online_cpus(),
+ msix_cnt);
+
+ if (max_msix_vectors > 0)
+ adapter->adapter_attr.rq_cnt = min_t(
+ int, max_msix_vectors, adapter->adapter_attr.rq_cnt);
+
+ if (adapter->adapter_attr.rq_cnt <= 1)
+ adapter->shost->host_tagset = 0;
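+	/*
+	 * blk-mq poll queues are only carved out when host_tagset is in
+	 * use and fewer poll queues are requested than request queues.
+	 */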
+ if (adapter->shost->host_tagset) {
+ iopoll_qcnt = poll_queues;
+ if (iopoll_qcnt >= adapter->adapter_attr.rq_cnt)
+ iopoll_qcnt = 0;
+ }
+ if (iopoll_qcnt) {
+ adapter->notification_desc.blk_mq_poll_rqs =
+ kcalloc(iopoll_qcnt,
+ sizeof(struct leapraid_blk_mq_poll_rq),
+ GFP_KERNEL);
+ if (!adapter->notification_desc.blk_mq_poll_rqs)
+ return -ENOMEM;
+ adapter->adapter_attr.rq_cnt =
+ min(adapter->adapter_attr.rq_cnt + iopoll_qcnt,
+ msix_cnt);
+ }
+
+ adapter->notification_desc.iopoll_qdex =
+ adapter->adapter_attr.rq_cnt - iopoll_qcnt;
+
+ adapter->notification_desc.iopoll_qcnt = iopoll_qcnt;
+ dev_info(&adapter->pdev->dev,
+ "MSIx: req queue cnt=%d, intr=%d/poll=%d rep queues!\n",
+ adapter->adapter_attr.rq_cnt,
+ adapter->notification_desc.iopoll_qdex,
+ adapter->notification_desc.iopoll_qcnt);
+
+ adapter->notification_desc.int_rqs =
+ kcalloc(adapter->notification_desc.iopoll_qdex,
+ sizeof(struct leapraid_int_rq), GFP_KERNEL);
+ if (!adapter->notification_desc.int_rqs) {
+ dev_err(&adapter->pdev->dev,
+			"MSIx: failed to allocate %d interrupt reply queues!\n",
+ adapter->notification_desc.iopoll_qdex);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qcnt; i++) {
+ adapter->notification_desc.blk_mq_poll_rqs[i].rq.adapter =
+ adapter;
+ adapter->notification_desc.blk_mq_poll_rqs[i].rq.msix_idx =
+ i + adapter->notification_desc.iopoll_qdex;
+ atomic_set(&adapter->notification_desc.blk_mq_poll_rqs[i].rq.busy, 0);
+ snprintf(adapter->notification_desc.blk_mq_poll_rqs[i].rq.name,
+ LEAPRAID_NAME_LENGTH,
+ "%s%u-MQ-Poll%u", LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id, i);
+ atomic_set(&adapter->notification_desc.blk_mq_poll_rqs[i].busy, 0);
+ atomic_set(&adapter->notification_desc.blk_mq_poll_rqs[i].pause, 0);
+ }
+
+ adapter->notification_desc.msix_cpu_map_sz =
+ num_online_cpus();
+ adapter->notification_desc.msix_cpu_map =
+ kzalloc(adapter->notification_desc.msix_cpu_map_sz,
+ GFP_KERNEL);
+ if (!adapter->notification_desc.msix_cpu_map)
+ return -ENOMEM;
+ memset(adapter->notification_desc.msix_cpu_map, 0,
+ adapter->notification_desc.msix_cpu_map_sz);
+
+ adapter->notification_desc.msix_enable = true;
+ rc = leapraid_setup_irqs(adapter);
+ if (rc) {
+ leapraid_free_irq(adapter);
+ adapter->notification_desc.msix_enable = false;
+ goto legacy_int;
+ }
+
+ return 0;
+
+legacy_int:
+ rc = leapraid_set_legacy_int(adapter);
+
+ return rc;
+}
+
+static int leapraid_set_msi(struct leapraid_adapter *adapter)
+{
+ int iopoll_qcnt = 0;
+ unsigned int i;
+ int rc, msi_cnt;
+
+ if (msix_disable == 1)
+ goto legacy_int1;
+
+ msi_cnt = leapraid_msi_cnt(adapter->pdev);
+ if (msi_cnt <= 0) {
+		dev_info(&adapter->pdev->dev, "msi unsupported!\n");
+ goto legacy_int1;
+ }
+
+ if (reset_devices)
+ adapter->adapter_attr.rq_cnt = 1;
+ else
+ adapter->adapter_attr.rq_cnt = min_t(int,
+ num_online_cpus(),
+ msi_cnt);
+
+ if (max_msix_vectors > 0)
+ adapter->adapter_attr.rq_cnt = min_t(
+ int, max_msix_vectors, adapter->adapter_attr.rq_cnt);
+
+ if (adapter->adapter_attr.rq_cnt <= 1)
+ adapter->shost->host_tagset = 0;
+ if (adapter->shost->host_tagset) {
+ iopoll_qcnt = poll_queues;
+ if (iopoll_qcnt >= adapter->adapter_attr.rq_cnt)
+ iopoll_qcnt = 0;
+ }
+
+ if (iopoll_qcnt) {
+ adapter->notification_desc.blk_mq_poll_rqs =
+ kcalloc(iopoll_qcnt,
+ sizeof(struct leapraid_blk_mq_poll_rq),
+ GFP_KERNEL);
+ if (!adapter->notification_desc.blk_mq_poll_rqs)
+ return -ENOMEM;
+
+ adapter->adapter_attr.rq_cnt =
+ min(adapter->adapter_attr.rq_cnt + iopoll_qcnt,
+ msi_cnt);
+ }
+
+ adapter->notification_desc.iopoll_qdex =
+ adapter->adapter_attr.rq_cnt - iopoll_qcnt;
+ rc = pci_alloc_irq_vectors_affinity(
+ adapter->pdev,
+ 1,
+ adapter->notification_desc.iopoll_qdex,
+ PCI_IRQ_MSI | PCI_IRQ_AFFINITY, NULL);
+ if (rc < 0) {
+ dev_err(&adapter->pdev->dev,
+			"allocating %d MSI vectors failed!\n",
+ adapter->notification_desc.iopoll_qdex);
+ goto legacy_int1;
+ }
+ if (rc != adapter->notification_desc.iopoll_qdex) {
+ adapter->notification_desc.iopoll_qdex = rc;
+ adapter->adapter_attr.rq_cnt =
+ adapter->notification_desc.iopoll_qdex + iopoll_qcnt;
+ }
+ adapter->notification_desc.iopoll_qcnt = iopoll_qcnt;
+ dev_info(&adapter->pdev->dev,
+ "MSI: req queue cnt=%d, intr=%d/poll=%d rep queues!\n",
+ adapter->adapter_attr.rq_cnt,
+ adapter->notification_desc.iopoll_qdex,
+ adapter->notification_desc.iopoll_qcnt);
+
+ adapter->notification_desc.int_rqs =
+ kcalloc(adapter->notification_desc.iopoll_qdex,
+ sizeof(struct leapraid_int_rq),
+ GFP_KERNEL);
+ if (!adapter->notification_desc.int_rqs) {
+ dev_err(&adapter->pdev->dev,
+			"MSI: failed to allocate %d interrupt reply queues!\n",
+ adapter->notification_desc.iopoll_qdex);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qcnt; i++) {
+ adapter->notification_desc.blk_mq_poll_rqs[i].rq.adapter =
+ adapter;
+ adapter->notification_desc.blk_mq_poll_rqs[i].rq.msix_idx =
+ i + adapter->notification_desc.iopoll_qdex;
+ atomic_set(
+ &adapter->notification_desc.blk_mq_poll_rqs[i].rq.busy,
+ 0);
+ snprintf(adapter->notification_desc.blk_mq_poll_rqs[i].rq.name,
+ LEAPRAID_NAME_LENGTH,
+ "%s%u-MQ-Poll%u", LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id, i);
+ atomic_set(
+ &adapter->notification_desc.blk_mq_poll_rqs[i].busy,
+ 0);
+ atomic_set(
+ &adapter->notification_desc.blk_mq_poll_rqs[i].pause,
+ 0);
+ }
+
+ adapter->notification_desc.msix_cpu_map_sz = num_online_cpus();
+ adapter->notification_desc.msix_cpu_map =
+ kzalloc(adapter->notification_desc.msix_cpu_map_sz,
+ GFP_KERNEL);
+ if (!adapter->notification_desc.msix_cpu_map)
+ return -ENOMEM;
+ memset(adapter->notification_desc.msix_cpu_map, 0,
+ adapter->notification_desc.msix_cpu_map_sz);
+
+ adapter->notification_desc.msix_enable = true;
+ rc = leapraid_setup_irqs(adapter);
+ if (rc) {
+ leapraid_free_irq(adapter);
+ adapter->notification_desc.msix_enable = false;
+ goto legacy_int1;
+ }
+
+ return 0;
+
+legacy_int1:
+ rc = leapraid_set_legacy_int(adapter);
+
+ return rc;
+}
+
+static int leapraid_set_notification(struct leapraid_adapter *adapter)
+{
+ int rc = 0;
+
+ if (interrupt_mode == 0) {
+ rc = leapraid_set_msix(adapter);
+ if (rc)
+ pr_err("%s enable MSI-X irq failed!\n", __func__);
+ } else if (interrupt_mode == 1) {
+ rc = leapraid_set_msi(adapter);
+ if (rc)
+ pr_err("%s enable MSI irq failed!\n", __func__);
+ } else if (interrupt_mode == 2) {
+ rc = leapraid_set_legacy_int(adapter);
+ if (rc)
+ pr_err("%s enable legacy irq failed!\n", __func__);
+ }
+
+ return rc;
+}
+
+static void leapraid_disable_pcie_and_notification(
+ struct leapraid_adapter *adapter)
+{
+ leapraid_free_irq(adapter);
+ leapraid_disable_pcie(adapter);
+}
+
+int leapraid_set_pcie_and_notification(struct leapraid_adapter *adapter)
+{
+ int rc;
+
+ rc = leapraid_enable_pcie(adapter);
+ if (rc)
+ goto out_fail;
+
+ leapraid_mask_int(adapter);
+
+ rc = leapraid_set_notification(adapter);
+ if (rc)
+ goto out_fail;
+
+ pci_save_state(adapter->pdev);
+
+ return 0;
+
+out_fail:
+ leapraid_disable_pcie_and_notification(adapter);
+ return rc;
+}
+
+void leapraid_disable_controller(struct leapraid_adapter *adapter)
+{
+ if (!adapter->iomem_base)
+ return;
+
+ leapraid_mask_int(adapter);
+
+ adapter->access_ctrl.shost_recovering = true;
+ leapraid_make_adapter_ready(adapter, PART_RESET);
+ adapter->access_ctrl.shost_recovering = false;
+
+ leapraid_disable_pcie_and_notification(adapter);
+}
+
+static int leapraid_adapter_unit_reset(struct leapraid_adapter *adapter)
+{
+ int rc = 0;
+
+ dev_info(&adapter->pdev->dev, "fire unit reset\n");
+ writel(LEAPRAID_FUNC_ADAPTER_UNIT_RESET << LEAPRAID_DB_FUNC_SHIFT,
+ &adapter->iomem_base->db);
+ if (leapraid_db_wait_ack_and_clear_int(adapter))
+ rc = -EFAULT;
+
+ if (!leapraid_wait_adapter_ready(adapter)) {
+ rc = -EFAULT;
+ goto out;
+ }
+out:
+ dev_info(&adapter->pdev->dev, "unit reset: %s\n",
+ ((rc == 0) ? "SUCCESS" : "FAILED"));
+ return rc;
+}
+
+static int leapraid_make_adapter_ready(struct leapraid_adapter *adapter,
+ enum reset_type type)
+{
+ u32 db;
+ int rc;
+ int count;
+
+ if (!leapraid_pci_active(adapter))
+ return 0;
+
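+	/*
+	 * Wait for the doorbell to leave the reset state, then pick the
+	 * cheapest recovery: nothing if the adapter is already ready, a
+	 * unit reset if it is operational, or a full diagnostic reset
+	 * otherwise.
+	 */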
+ count = 0;
+ db = leapraid_readl(&adapter->iomem_base->db);
+ if ((db & LEAPRAID_DB_MASK) == LEAPRAID_DB_RESET) {
+ while ((db & LEAPRAID_DB_MASK) != LEAPRAID_DB_READY) {
+ if (count++ == LEAPRAID_DB_RETRY_COUNT_MAX) {
+ dev_err(&adapter->pdev->dev,
+ "wait adapter ready timeout\n");
+ return -EFAULT;
+ }
+ ssleep(1);
+ db = leapraid_readl(&adapter->iomem_base->db);
+ dev_info(&adapter->pdev->dev,
+ "wait adapter ready, count=%d, db=0x%x\n",
+ count, db);
+ }
+ }
+ if ((db & LEAPRAID_DB_MASK) == LEAPRAID_DB_READY)
+ return 0;
+
+ if (db & LEAPRAID_DB_USED)
+ goto full_reset;
+
+ if ((db & LEAPRAID_DB_MASK) == LEAPRAID_DB_FAULT)
+ goto full_reset;
+
+ if (type == FULL_RESET)
+ goto full_reset;
+
+ if ((db & LEAPRAID_DB_MASK) == LEAPRAID_DB_OPERATIONAL)
+ if (!(leapraid_adapter_unit_reset(adapter)))
+ return 0;
+
+full_reset:
+ rc = leapraid_host_diag_reset(adapter);
+ return rc;
+}
+
+static void leapraid_fw_log_exit(struct leapraid_adapter *adapter)
+{
+ if (!adapter->fw_log_desc.open_pcie_trace)
+ return;
+
+ if (adapter->fw_log_desc.fw_log_buffer) {
+ dma_free_coherent(&adapter->pdev->dev,
+ (LEAPRAID_SYS_LOG_BUF_SIZE +
+ LEAPRAID_SYS_LOG_BUF_RESERVE),
+ adapter->fw_log_desc.fw_log_buffer,
+ adapter->fw_log_desc.fw_log_buffer_dma);
+ adapter->fw_log_desc.fw_log_buffer = NULL;
+ }
+}
+
+static int leapraid_fw_log_init(struct leapraid_adapter *adapter)
+{
+ struct leapraid_adapter_log_req adapter_log_req;
+ struct leapraid_adapter_log_rep adapter_log_rep;
+ u16 adapter_status;
+ u64 buf_addr;
+ u32 rc;
+
+ if (!adapter->fw_log_desc.open_pcie_trace)
+ return 0;
+
+ if (!adapter->fw_log_desc.fw_log_buffer) {
+ adapter->fw_log_desc.fw_log_buffer =
+ dma_alloc_coherent(
+ &adapter->pdev->dev,
+ (LEAPRAID_SYS_LOG_BUF_SIZE +
+ LEAPRAID_SYS_LOG_BUF_RESERVE),
+ &adapter->fw_log_desc.fw_log_buffer_dma,
+ GFP_KERNEL);
+ if (!adapter->fw_log_desc.fw_log_buffer) {
+ dev_err(&adapter->pdev->dev,
+ "%s: log buf alloc failed.\n",
+ __func__);
+ return -ENOMEM;
+ }
+ }
+
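+	/*
+	 * Hand the log buffer to firmware: the mailbox words carry the
+	 * 64-bit DMA address (low/high) and the buffer size.
+	 */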
+ memset(&adapter_log_req, 0, sizeof(struct leapraid_adapter_log_req));
+ adapter_log_req.func = LEAPRAID_FUNC_LOGBUF_INIT;
+ buf_addr = adapter->fw_log_desc.fw_log_buffer_dma;
+
+ adapter_log_req.mbox.w[0] =
+ cpu_to_le32((u32)(buf_addr & 0xFFFFFFFF));
+ adapter_log_req.mbox.w[1] =
+ cpu_to_le32((u32)((buf_addr >> 32) & 0xFFFFFFFF));
+ adapter_log_req.mbox.w[2] =
+ cpu_to_le32(LEAPRAID_SYS_LOG_BUF_SIZE);
+ rc = leapraid_handshake_func(adapter,
+ sizeof(struct leapraid_adapter_log_req),
+ (u32 *)&adapter_log_req,
+ sizeof(struct leapraid_adapter_log_rep),
+ (u16 *)&adapter_log_rep);
+ if (rc != 0) {
+ dev_err(&adapter->pdev->dev, "%s: handshake failed, rc=%d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ adapter_status = le16_to_cpu(adapter_log_rep.adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status != LEAPRAID_ADAPTER_STATUS_SUCCESS) {
+ dev_err(&adapter->pdev->dev, "%s: failed!\n", __func__);
+ rc = -EIO;
+ }
+
+ return rc;
+}
+
+static void leapraid_free_host_memory(struct leapraid_adapter *adapter)
+{
+ unsigned int i;
+
+ if (adapter->mem_desc.task_desc) {
+ dma_free_coherent(&adapter->pdev->dev,
+ adapter->adapter_attr.task_desc_dma_size,
+ adapter->mem_desc.task_desc,
+ adapter->mem_desc.task_desc_dma);
+ adapter->mem_desc.task_desc = NULL;
+ }
+
+ if (adapter->mem_desc.sense_data) {
+ dma_free_coherent(
+ &adapter->pdev->dev,
+ adapter->adapter_attr.io_qd * SCSI_SENSE_BUFFERSIZE,
+ adapter->mem_desc.sense_data,
+ adapter->mem_desc.sense_data_dma);
+ adapter->mem_desc.sense_data = NULL;
+ }
+
+ if (adapter->mem_desc.rep_msg) {
+ dma_free_coherent(
+ &adapter->pdev->dev,
+ adapter->adapter_attr.rep_msg_qd * LEAPRAID_REPLY_SIEZ,
+ adapter->mem_desc.rep_msg,
+ adapter->mem_desc.rep_msg_dma);
+ adapter->mem_desc.rep_msg = NULL;
+ }
+
+ if (adapter->mem_desc.rep_msg_addr) {
+ dma_free_coherent(&adapter->pdev->dev,
+ adapter->adapter_attr.rep_msg_qd *
+ LEAPRAID_REP_MSG_ADDR_SIZE,
+ adapter->mem_desc.rep_msg_addr,
+ adapter->mem_desc.rep_msg_addr_dma);
+ adapter->mem_desc.rep_msg_addr = NULL;
+ }
+
+ if (adapter->mem_desc.rep_desc_seg_maint) {
+ for (i = 0; i < adapter->adapter_attr.rep_desc_q_seg_cnt;
+ i++) {
+ if (adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg) {
+ dma_free_coherent(
+ &adapter->pdev->dev,
+ (adapter->adapter_attr.rep_desc_qd *
+ LEAPRAID_REP_DESC_ENTRY_SIZE) *
+ LEAPRAID_REP_DESC_CHUNK_SIZE,
+ adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg,
+ adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg_dma);
+ adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg = NULL;
+ }
+ }
+
+ if (adapter->mem_desc.rep_desc_q_arr) {
+ dma_free_coherent(
+ &adapter->pdev->dev,
+ adapter->adapter_attr.rq_cnt *
+ LEAPRAID_REP_RQ_CNT_SIZE,
+ adapter->mem_desc.rep_desc_q_arr,
+ adapter->mem_desc.rep_desc_q_arr_dma);
+ adapter->mem_desc.rep_desc_q_arr = NULL;
+ }
+
+ for (i = 0; i < adapter->adapter_attr.rep_desc_q_seg_cnt; i++)
+ kfree(adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_maint);
+ kfree(adapter->mem_desc.rep_desc_seg_maint);
+ }
+
+ kfree(adapter->mem_desc.taskid_to_uniq_tag);
+ adapter->mem_desc.taskid_to_uniq_tag = NULL;
+
+ dma_pool_destroy(adapter->mem_desc.sg_chain_pool);
+}
+
+static inline bool leapraid_is_in_same_4g_seg(dma_addr_t start, u32 size)
+{
+ return (upper_32_bits(start) == upper_32_bits(start + size - 1));
+}
+
+int leapraid_internal_init_cmd_priv(struct leapraid_adapter *adapter,
+ struct leapraid_io_req_tracker *io_tracker)
+{
+ io_tracker->chain =
+ dma_pool_alloc(adapter->mem_desc.sg_chain_pool,
+ GFP_KERNEL,
+ &io_tracker->chain_dma);
+
+ if (!io_tracker->chain)
+ return -ENOMEM;
+
+ return 0;
+}
+
+int leapraid_internal_exit_cmd_priv(struct leapraid_adapter *adapter,
+ struct leapraid_io_req_tracker *io_tracker)
+{
+ if (io_tracker && io_tracker->chain)
+ dma_pool_free(adapter->mem_desc.sg_chain_pool,
+ io_tracker->chain,
+ io_tracker->chain_dma);
+
+ return 0;
+}
+
+static int leapraid_request_host_memory(struct leapraid_adapter *adapter)
+{
+ struct leapraid_adapter_features *facts =
+ &adapter->adapter_attr.features;
+ u16 rep_desc_q_cnt_allocated;
+ unsigned int i, j;
+ int rc;
+
+ /* sg table size */
+ adapter->shost->sg_tablesize = LEAPRAID_SG_DEPTH;
+ if (reset_devices)
+ adapter->shost->sg_tablesize =
+ LEAPRAID_KDUMP_MIN_PHYS_SEGMENTS;
+ /* high priority cmds queue depth */
+ adapter->dynamic_task_desc.hp_cmd_qd = facts->hp_slot;
+ adapter->dynamic_task_desc.hp_cmd_qd = LEAPRAID_FIXED_HP_CMDS;
+ /* internal cmds queue depth */
+ adapter->dynamic_task_desc.inter_cmd_qd = LEAPRAID_FIXED_INTER_CMDS;
+ /* adapter cmds total queue depth */
+ if (reset_devices)
+ adapter->adapter_attr.adapter_total_qd =
+ LEAPRAID_DEFAULT_CMD_QD_OFFSET +
+ adapter->dynamic_task_desc.inter_cmd_qd +
+ adapter->dynamic_task_desc.hp_cmd_qd;
+ else
+ adapter->adapter_attr.adapter_total_qd = facts->req_slot +
+ adapter->dynamic_task_desc.hp_cmd_qd;
+ /* reply message queue depth */
+ adapter->adapter_attr.rep_msg_qd =
+ adapter->adapter_attr.adapter_total_qd +
+ LEAPRAID_DEFAULT_CMD_QD_OFFSET;
+ /* reply descriptor queue depth */
+ adapter->adapter_attr.rep_desc_qd =
+ round_up(adapter->adapter_attr.adapter_total_qd +
+ adapter->adapter_attr.rep_msg_qd +
+ LEAPRAID_TASKID_OFFSET_CTRL_CMD,
+ LEAPRAID_REPLY_QD_ALIGNMENT);
+ /* scsi cmd io depth */
+ adapter->adapter_attr.io_qd =
+ adapter->adapter_attr.adapter_total_qd -
+ adapter->dynamic_task_desc.hp_cmd_qd -
+ adapter->dynamic_task_desc.inter_cmd_qd;
+ /* scsi host can queue */
+ adapter->shost->can_queue = adapter->adapter_attr.io_qd -
+ LEAPRAID_TASKID_OFFSET_SCSIIO_CMD;
+ adapter->driver_cmds.ctl_cmd.taskid = adapter->shost->can_queue +
+ LEAPRAID_TASKID_OFFSET_CTRL_CMD;
+ adapter->driver_cmds.driver_scsiio_cmd.taskid =
+ adapter->shost->can_queue +
+ LEAPRAID_TASKID_OFFSET_SCSIIO_CMD;
+
+ /* allocate task descriptor */
+try_again:
+ adapter->adapter_attr.task_desc_dma_size =
+ (adapter->adapter_attr.adapter_total_qd +
+ LEAPRAID_TASKID_OFFSET_CTRL_CMD) *
+ LEAPRAID_REQUEST_SIZE;
+ adapter->mem_desc.task_desc =
+ dma_alloc_coherent(&adapter->pdev->dev,
+ adapter->adapter_attr.task_desc_dma_size,
+ &adapter->mem_desc.task_desc_dma,
+ GFP_KERNEL);
+ if (!adapter->mem_desc.task_desc) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate task descriptor DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+ /* allocate chain message pool */
+ adapter->mem_desc.sg_chain_pool_size =
+ LEAPRAID_DEFAULT_CHAINS_PER_IO * LEAPRAID_CHAIN_SEG_SIZE;
+ adapter->mem_desc.sg_chain_pool =
+ dma_pool_create("leapraid chain pool",
+ &adapter->pdev->dev,
+ adapter->mem_desc.sg_chain_pool_size, 16, 0);
+ if (!adapter->mem_desc.sg_chain_pool) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate chain message DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+	/* allocate the taskid to unique-tag map for SCSI I/O lookup */
+
+ adapter->mem_desc.taskid_to_uniq_tag =
+ kcalloc(adapter->shost->can_queue, sizeof(u16), GFP_KERNEL);
+ if (!adapter->mem_desc.taskid_to_uniq_tag) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ adapter->dynamic_task_desc.hp_taskid =
+ adapter->adapter_attr.io_qd +
+ LEAPRAID_HP_TASKID_OFFSET_CTL_CMD;
+ /* allocate static hp taskid */
+ adapter->driver_cmds.ctl_cmd.hp_taskid =
+ adapter->dynamic_task_desc.hp_taskid;
+ adapter->driver_cmds.tm_cmd.hp_taskid =
+ adapter->dynamic_task_desc.hp_taskid +
+ LEAPRAID_HP_TASKID_OFFSET_TM_CMD;
+
+ adapter->dynamic_task_desc.inter_taskid =
+ adapter->dynamic_task_desc.hp_taskid +
+ adapter->dynamic_task_desc.hp_cmd_qd;
+ adapter->driver_cmds.scan_dev_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid;
+ adapter->driver_cmds.cfg_op_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_CFG_OP_CMD;
+ adapter->driver_cmds.transport_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_TRANSPORT_CMD;
+ adapter->driver_cmds.timestamp_sync_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_TIMESTAMP_SYNC_CMD;
+ adapter->driver_cmds.raid_action_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_RAID_ACTION_CMD;
+ adapter->driver_cmds.enc_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_ENC_CMD;
+ adapter->driver_cmds.notify_event_cmd.inter_taskid =
+ adapter->dynamic_task_desc.inter_taskid +
+ LEAPRAID_TASKID_OFFSET_NOTIFY_EVENT_CMD;
+ dev_info(&adapter->pdev->dev, "queue depth:\n");
+ dev_info(&adapter->pdev->dev, " host->can_queue: %d\n",
+ adapter->shost->can_queue);
+ dev_info(&adapter->pdev->dev, " io_qd: %d\n",
+ adapter->adapter_attr.io_qd);
+ dev_info(&adapter->pdev->dev, " hpr_cmd_qd: %d\n",
+ adapter->dynamic_task_desc.hp_cmd_qd);
+ dev_info(&adapter->pdev->dev, " inter_cmd_qd: %d\n",
+ adapter->dynamic_task_desc.inter_cmd_qd);
+ dev_info(&adapter->pdev->dev, " adapter_total_qd: %d\n",
+ adapter->adapter_attr.adapter_total_qd);
+
+ dev_info(&adapter->pdev->dev, "taskid range:\n");
+ dev_info(&adapter->pdev->dev,
+ " adapter->dynamic_task_desc.hp_taskid: %d\n",
+ adapter->dynamic_task_desc.hp_taskid);
+ dev_info(&adapter->pdev->dev,
+ " adapter->dynamic_task_desc.inter_taskid: %d\n",
+ adapter->dynamic_task_desc.inter_taskid);
+
+	/*
+	 * Allocate the driver-maintained sense data DMA buffer;
+	 * it must reside within a single 4GB segment.
+	 */
+ adapter->mem_desc.sense_data =
+ dma_alloc_coherent(
+ &adapter->pdev->dev,
+ adapter->adapter_attr.io_qd * SCSI_SENSE_BUFFERSIZE,
+ &adapter->mem_desc.sense_data_dma, GFP_KERNEL);
+ if (!adapter->mem_desc.sense_data) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate sense data DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+ if (!leapraid_is_in_same_4g_seg(adapter->mem_desc.sense_data_dma,
+ adapter->adapter_attr.io_qd *
+ SCSI_SENSE_BUFFERSIZE)) {
+ dev_warn(&adapter->pdev->dev,
+			 "falling back to 32-bit DMA: sense data crosses a 4GB boundary!\n");
+ rc = -EAGAIN;
+ goto out;
+ }
+
+	/* reply message frames must reside within a single 4GB segment */
+ adapter->mem_desc.rep_msg =
+ dma_alloc_coherent(&adapter->pdev->dev,
+ adapter->adapter_attr.rep_msg_qd *
+ LEAPRAID_REPLY_SIEZ,
+ &adapter->mem_desc.rep_msg_dma,
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_msg) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate reply message DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+ if (!leapraid_is_in_same_4g_seg(adapter->mem_desc.rep_msg_dma,
+ adapter->adapter_attr.rep_msg_qd *
+ LEAPRAID_REPLY_SIEZ)) {
+ dev_warn(&adapter->pdev->dev,
+			 "falling back to 32-bit DMA: reply messages cross a 4GB boundary!\n");
+ rc = -EAGAIN;
+ goto out;
+ }
+
+ /* address of reply frame */
+ adapter->mem_desc.rep_msg_addr =
+ dma_alloc_coherent(&adapter->pdev->dev,
+ adapter->adapter_attr.rep_msg_qd *
+ LEAPRAID_REP_MSG_ADDR_SIZE,
+ &adapter->mem_desc.rep_msg_addr_dma,
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_msg_addr) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate reply message address DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+ adapter->adapter_attr.rep_desc_q_seg_cnt =
+ DIV_ROUND_UP(adapter->adapter_attr.rq_cnt,
+ LEAPRAID_REP_DESC_CHUNK_SIZE);
+ adapter->mem_desc.rep_desc_seg_maint =
+ kcalloc(adapter->adapter_attr.rep_desc_q_seg_cnt,
+ sizeof(struct leapraid_rep_desc_seg_maint),
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_desc_seg_maint) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ rep_desc_q_cnt_allocated = 0;
+ for (i = 0; i < adapter->adapter_attr.rep_desc_q_seg_cnt; i++) {
+ adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_maint =
+ kcalloc(LEAPRAID_REP_DESC_CHUNK_SIZE,
+ sizeof(struct leapraid_rep_desc_maint),
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_maint) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg =
+ dma_alloc_coherent(
+ &adapter->pdev->dev,
+ (adapter->adapter_attr.rep_desc_qd *
+ LEAPRAID_REP_DESC_ENTRY_SIZE) *
+ LEAPRAID_REP_DESC_CHUNK_SIZE,
+ &adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg_dma,
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_desc_seg_maint[i].rep_desc_seg) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate reply descriptor segment DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ for (j = 0; j < LEAPRAID_REP_DESC_CHUNK_SIZE; j++) {
+ if (rep_desc_q_cnt_allocated >=
+ adapter->adapter_attr.rq_cnt)
+ break;
+ adapter->mem_desc
+ .rep_desc_seg_maint[i]
+ .rep_desc_maint[j]
+ .rep_desc =
+ (void *)((u8 *)(
+ adapter->mem_desc
+ .rep_desc_seg_maint[i]
+ .rep_desc_seg) +
+ j *
+ (adapter->adapter_attr.rep_desc_qd *
+ LEAPRAID_REP_DESC_ENTRY_SIZE));
+ adapter->mem_desc
+ .rep_desc_seg_maint[i]
+ .rep_desc_maint[j]
+ .rep_desc_dma =
+ adapter->mem_desc
+ .rep_desc_seg_maint[i]
+ .rep_desc_seg_dma +
+ j *
+ (adapter->adapter_attr.rep_desc_qd *
+ LEAPRAID_REP_DESC_ENTRY_SIZE);
+ rep_desc_q_cnt_allocated++;
+ }
+ }
+
+ if (!reset_devices) {
+ adapter->mem_desc.rep_desc_q_arr =
+ dma_alloc_coherent(
+ &adapter->pdev->dev,
+ adapter->adapter_attr.rq_cnt *
+ LEAPRAID_REP_RQ_CNT_SIZE,
+ &adapter->mem_desc.rep_desc_q_arr_dma,
+ GFP_KERNEL);
+ if (!adapter->mem_desc.rep_desc_q_arr) {
+ dev_err(&adapter->pdev->dev,
+ "failed to allocate reply descriptor queue array DMA!\n");
+ rc = -ENOMEM;
+ goto out;
+ }
+ }
+
+ return 0;
+out:
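+	/*
+	 * -EAGAIN means a buffer straddled a 4GB boundary: free
+	 * everything, switch to a 32-bit DMA mask and retry.
+	 */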
+ if (rc == -EAGAIN) {
+ leapraid_free_host_memory(adapter);
+ adapter->adapter_attr.use_32_dma_mask = true;
+ rc = dma_set_mask_and_coherent(&adapter->pdev->dev,
+ DMA_BIT_MASK(32));
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+				"failed to set 32-bit DMA mask\n");
+ return rc;
+ }
+ goto try_again;
+ }
+ return rc;
+}
+
+static int leapraid_alloc_dev_topo_bitmaps(struct leapraid_adapter *adapter)
+{
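+	/* one bit per device handle, rounded up to whole bytes */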
+ adapter->dev_topo.pd_hdls_sz =
+ adapter->adapter_attr.features.max_dev_handle /
+ LEAPRAID_BITS_PER_BYTE;
+ if (adapter->adapter_attr.features.max_dev_handle %
+ LEAPRAID_BITS_PER_BYTE)
+ adapter->dev_topo.pd_hdls_sz++;
+ adapter->dev_topo.pd_hdls =
+ kzalloc(adapter->dev_topo.pd_hdls_sz, GFP_KERNEL);
+ if (!adapter->dev_topo.pd_hdls)
+ return -ENOMEM;
+
+ adapter->dev_topo.blocking_hdls =
+ kzalloc(adapter->dev_topo.pd_hdls_sz, GFP_KERNEL);
+ if (!adapter->dev_topo.blocking_hdls)
+ return -ENOMEM;
+
+ adapter->dev_topo.pending_dev_add_sz =
+ adapter->adapter_attr.features.max_dev_handle /
+ LEAPRAID_BITS_PER_BYTE;
+ if (adapter->adapter_attr.features.max_dev_handle %
+ LEAPRAID_BITS_PER_BYTE)
+ adapter->dev_topo.pending_dev_add_sz++;
+ adapter->dev_topo.pending_dev_add =
+ kzalloc(adapter->dev_topo.pending_dev_add_sz, GFP_KERNEL);
+ if (!adapter->dev_topo.pending_dev_add)
+ return -ENOMEM;
+
+ adapter->dev_topo.dev_removing_sz =
+ adapter->dev_topo.pending_dev_add_sz;
+ adapter->dev_topo.dev_removing =
+ kzalloc(adapter->dev_topo.dev_removing_sz, GFP_KERNEL);
+ if (!adapter->dev_topo.dev_removing)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void leapraid_free_dev_topo_bitmaps(struct leapraid_adapter *adapter)
+{
+ kfree(adapter->dev_topo.pd_hdls);
+ kfree(adapter->dev_topo.blocking_hdls);
+ kfree(adapter->dev_topo.pending_dev_add);
+ kfree(adapter->dev_topo.dev_removing);
+}
+
+static int leapraid_init_driver_cmds(struct leapraid_adapter *adapter)
+{
+ u32 buffer_size = 0;
+ void *buffer;
+
+ INIT_LIST_HEAD(&adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.scan_dev_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.scan_dev_cmd.cb_idx = LEAPRAID_SCAN_DEV_CB_IDX;
+ list_add_tail(&adapter->driver_cmds.scan_dev_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.cfg_op_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.cfg_op_cmd.cb_idx = LEAPRAID_CONFIG_CB_IDX;
+ mutex_init(&adapter->driver_cmds.cfg_op_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.cfg_op_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.transport_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.transport_cmd.cb_idx = LEAPRAID_TRANSPORT_CB_IDX;
+ mutex_init(&adapter->driver_cmds.transport_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.transport_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.timestamp_sync_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.timestamp_sync_cmd.cb_idx =
+ LEAPRAID_TIMESTAMP_SYNC_CB_IDX;
+ mutex_init(&adapter->driver_cmds.timestamp_sync_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.timestamp_sync_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.raid_action_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.raid_action_cmd.cb_idx =
+ LEAPRAID_RAID_ACTION_CB_IDX;
+ mutex_init(&adapter->driver_cmds.raid_action_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.raid_action_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.driver_scsiio_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.driver_scsiio_cmd.cb_idx =
+ LEAPRAID_DRIVER_SCSIIO_CB_IDX;
+ mutex_init(&adapter->driver_cmds.driver_scsiio_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.driver_scsiio_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
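+	/*
+	 * Single allocation for the driver's internal SCSI command,
+	 * carved into the scsi_cmnd + I/O tracker, the sense buffer
+	 * and one scatterlist entry.
+	 */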
+ buffer_size = sizeof(struct scsi_cmnd) +
+ sizeof(struct leapraid_io_req_tracker) +
+ SCSI_SENSE_BUFFERSIZE +
+ sizeof(struct scatterlist);
+ buffer = kzalloc(buffer_size, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+ adapter->driver_cmds.internal_scmd = buffer;
+ buffer = (void *)((u8 *)buffer +
+ sizeof(struct scsi_cmnd) +
+ sizeof(struct leapraid_io_req_tracker));
+ adapter->driver_cmds.internal_scmd->sense_buffer =
+ (unsigned char *)buffer;
+ buffer = (void *)((u8 *)buffer + SCSI_SENSE_BUFFERSIZE);
+ adapter->driver_cmds.internal_scmd->sdb.table.sgl =
+ (struct scatterlist *)buffer;
+ buffer = (void *)((u8 *)buffer + sizeof(struct scatterlist));
+
+ adapter->driver_cmds.enc_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.enc_cmd.cb_idx = LEAPRAID_ENC_CB_IDX;
+ mutex_init(&adapter->driver_cmds.enc_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.enc_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.notify_event_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.notify_event_cmd.cb_idx =
+ LEAPRAID_NOTIFY_EVENT_CB_IDX;
+ mutex_init(&adapter->driver_cmds.notify_event_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.notify_event_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.ctl_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.ctl_cmd.cb_idx = LEAPRAID_CTL_CB_IDX;
+ mutex_init(&adapter->driver_cmds.ctl_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.ctl_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ adapter->driver_cmds.tm_cmd.status = LEAPRAID_CMD_NOT_USED;
+ adapter->driver_cmds.tm_cmd.cb_idx = LEAPRAID_TM_CB_IDX;
+ mutex_init(&adapter->driver_cmds.tm_cmd.mutex);
+ list_add_tail(&adapter->driver_cmds.tm_cmd.list,
+ &adapter->driver_cmds.special_cmd_list);
+
+ return 0;
+}
+
+static void leapraid_unmask_evts(struct leapraid_adapter *adapter, u16 evt)
+{
+ if (evt >= LEAPRAID_MAX_EVENT_NUM)
+ return;
+
+ clear_bit(evt, (unsigned long *)adapter->fw_evt_s.leapraid_evt_masks);
+}
+
+static void leapraid_init_event_mask(struct leapraid_adapter *adapter)
+{
+ int i;
+
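+	/* mask all events, then unmask only those the driver handles */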
+ for (i = 0; i < LEAPRAID_EVT_MASK_COUNT; i++)
+ adapter->fw_evt_s.leapraid_evt_masks[i] = -1;
+ leapraid_unmask_evts(adapter, LEAPRAID_EVT_SAS_DISCOVERY);
+ leapraid_unmask_evts(adapter, LEAPRAID_EVT_SAS_TOPO_CHANGE_LIST);
+ leapraid_unmask_evts(adapter, LEAPRAID_EVT_SAS_ENCL_DEV_STATUS_CHANGE);
+ leapraid_unmask_evts(adapter, LEAPRAID_EVT_SAS_DEV_STATUS_CHANGE);
+ leapraid_unmask_evts(adapter, LEAPRAID_EVT_IR_CHANGE);
+}
+
+static void leapraid_prepare_adp_init_req(
+ struct leapraid_adapter *adapter,
+ struct leapraid_adapter_init_req *init_req)
+{
+ ktime_t cur_time;
+ int i;
+ u32 reply_post_free_ary_sz;
+
+ memset(init_req, 0, sizeof(struct leapraid_adapter_init_req));
+ init_req->func = LEAPRAID_FUNC_ADAPTER_INIT;
+ init_req->who_init = LEAPRAID_WHOINIT_LINUX_DRIVER;
+ init_req->msg_ver = cpu_to_le16(0x0100);
+ init_req->header_ver = cpu_to_le16(0x0000);
+
+ init_req->driver_ver = cpu_to_le32((LEAPRAID_MAJOR_VERSION << 24) |
+ (LEAPRAID_MINOR_VERSION << 16) |
+ (LEAPRAID_BUILD_VERSION << 8) |
+ LEAPRAID_RELEASE_VERSION);
+ if (adapter->notification_desc.msix_enable)
+ init_req->host_msix_vectors = adapter->adapter_attr.rq_cnt;
+
+ init_req->req_frame_size =
+ cpu_to_le16(LEAPRAID_REQUEST_SIZE / LEAPRAID_DWORDS_BYTE_SIZE);
+ init_req->rep_desc_qd =
+ cpu_to_le16(adapter->adapter_attr.rep_desc_qd);
+ init_req->rep_msg_qd =
+ cpu_to_le16(adapter->adapter_attr.rep_msg_qd);
+ init_req->sense_buffer_add_high =
+ cpu_to_le32((u64)adapter->mem_desc.sense_data_dma >> 32);
+ init_req->rep_msg_dma_high =
+ cpu_to_le32((u64)adapter->mem_desc.rep_msg_dma >> 32);
+ init_req->task_desc_base_addr =
+ cpu_to_le64((u64)adapter->mem_desc.task_desc_dma);
+ init_req->rep_msg_addr_dma =
+ cpu_to_le64((u64)adapter->mem_desc.rep_msg_addr_dma);
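+	/*
+	 * Normal boot: hand the per-queue reply descriptor base
+	 * addresses to firmware as an RDPQ array. Kdump boot uses a
+	 * single contiguous reply descriptor region instead.
+	 */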
+ if (!reset_devices) {
+ reply_post_free_ary_sz =
+ adapter->adapter_attr.rq_cnt * LEAPRAID_REP_RQ_CNT_SIZE;
+ memset(adapter->mem_desc.rep_desc_q_arr, 0,
+ reply_post_free_ary_sz);
+
+ for (i = 0; i < adapter->adapter_attr.rq_cnt; i++) {
+ adapter->mem_desc
+ .rep_desc_q_arr[i]
+ .rep_desc_base_addr =
+ cpu_to_le64 (
+ (u64)adapter->mem_desc
+ .rep_desc_seg_maint[i /
+ LEAPRAID_REP_DESC_CHUNK_SIZE]
+ .rep_desc_maint[i %
+ LEAPRAID_REP_DESC_CHUNK_SIZE]
+ .rep_desc_dma);
+ }
+
+ init_req->msg_flg =
+ LEAPRAID_ADAPTER_INIT_MSGFLG_RDPQ_ARRAY_MODE;
+ init_req->rep_desc_q_arr_addr =
+ cpu_to_le64((u64)adapter->mem_desc.rep_desc_q_arr_dma);
+ } else {
+ init_req->rep_desc_q_arr_addr =
+ cpu_to_le64((u64)adapter->mem_desc
+ .rep_desc_seg_maint[0]
+ .rep_desc_maint[0]
+ .rep_desc_dma);
+ }
+ cur_time = ktime_get_real();
+ init_req->time_stamp = cpu_to_le64(ktime_to_ms(cur_time));
+}
+
+static int leapraid_send_adapter_init(struct leapraid_adapter *adapter)
+{
+ struct leapraid_adapter_init_req init_req;
+ struct leapraid_adapter_init_rep init_rep;
+ u16 adapter_status;
+ int rc = 0;
+
+ leapraid_prepare_adp_init_req(adapter, &init_req);
+
+ rc = leapraid_handshake_func(adapter,
+ sizeof(struct leapraid_adapter_init_req),
+ (u32 *)&init_req,
+ sizeof(struct leapraid_adapter_init_rep),
+ (u16 *)&init_rep);
+ if (rc != 0) {
+ dev_err(&adapter->pdev->dev, "%s: handshake failed, rc=%d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ adapter_status =
+ le16_to_cpu(init_rep.adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status != LEAPRAID_ADAPTER_STATUS_SUCCESS) {
+ dev_err(&adapter->pdev->dev, "%s: failed\n", __func__);
+ rc = -EIO;
+ }
+
+ adapter->timestamp_sync_cnt = 0;
+ return rc;
+}
+
+static int leapraid_cfg_pages(struct leapraid_adapter *adapter)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_io_unit_page1 *sas_io_unit_page1 = NULL;
+ struct leapraid_bios_page3 bios_page3;
+ struct leapraid_bios_page2 bios_page2;
+ int rc = 0;
+ int sz;
+
+ rc = leapraid_op_config_page(adapter, &bios_page3, cfgp1,
+ cfgp2, GET_BIOS_PG3);
+ if (rc)
+ return rc;
+
+ rc = leapraid_op_config_page(adapter, &bios_page2, cfgp1,
+ cfgp2, GET_BIOS_PG2);
+ if (rc)
+ return rc;
+
+ adapter->adapter_attr.bios_version =
+ le32_to_cpu(bios_page3.bios_version);
+ adapter->adapter_attr.wideport_max_queue_depth =
+ LEAPRAID_SAS_QUEUE_DEPTH;
+ adapter->adapter_attr.narrowport_max_queue_depth =
+ LEAPRAID_SAS_QUEUE_DEPTH;
+ adapter->adapter_attr.sata_max_queue_depth =
+ LEAPRAID_SATA_QUEUE_DEPTH;
+
+ adapter->boot_devs.requested_boot_dev.form =
+ bios_page2.requested_boot_dev_form;
+ memcpy((void *)adapter->boot_devs.requested_boot_dev.pg_dev,
+ (void *)&bios_page2.requested_boot_dev,
+ LEAPRAID_BOOT_DEV_SIZE);
+ adapter->boot_devs.requested_alt_boot_dev.form =
+ bios_page2.requested_alt_boot_dev_form;
+ memcpy((void *)adapter->boot_devs.requested_alt_boot_dev.pg_dev,
+ (void *)&bios_page2.requested_alt_boot_dev,
+ LEAPRAID_BOOT_DEV_SIZE);
+ adapter->boot_devs.current_boot_dev.form =
+ bios_page2.current_boot_dev_form;
+ memcpy((void *)adapter->boot_devs.current_boot_dev.pg_dev,
+ (void *)&bios_page2.current_boot_dev,
+ LEAPRAID_BOOT_DEV_SIZE);
+
+ sz = offsetof(struct leapraid_sas_io_unit_page1, phy_info);
+ sas_io_unit_page1 = kzalloc(sz, GFP_KERNEL);
+ if (!sas_io_unit_page1) {
+ rc = -ENOMEM;
+ return rc;
+ }
+
+ cfgp1.size = sz;
+
+ rc = leapraid_op_config_page(adapter, sas_io_unit_page1, cfgp1,
+ cfgp2, GET_SAS_IOUNIT_PG1);
+ if (rc)
+ goto out;
+
+ if (le16_to_cpu(sas_io_unit_page1->wideport_max_queue_depth))
+ adapter->adapter_attr.wideport_max_queue_depth =
+ le16_to_cpu(
+ sas_io_unit_page1->wideport_max_queue_depth);
+
+ if (le16_to_cpu(sas_io_unit_page1->narrowport_max_queue_depth))
+ adapter->adapter_attr.narrowport_max_queue_depth =
+ le16_to_cpu(
+ sas_io_unit_page1->narrowport_max_queue_depth);
+
+ if (sas_io_unit_page1->sata_max_queue_depth)
+ adapter->adapter_attr.sata_max_queue_depth =
+ sas_io_unit_page1->sata_max_queue_depth;
+
+out:
+ kfree(sas_io_unit_page1);
+ dev_info(&adapter->pdev->dev,
+ "max wp qd=%d, max np qd=%d, max sata qd=%d\n",
+ adapter->adapter_attr.wideport_max_queue_depth,
+ adapter->adapter_attr.narrowport_max_queue_depth,
+ adapter->adapter_attr.sata_max_queue_depth);
+ return rc;
+}
+
+static int leapraid_evt_notify(struct leapraid_adapter *adapter)
+{
+ struct leapraid_evt_notify_req *evt_notify_req;
+ int rc = 0;
+ int i;
+
+ mutex_lock(&adapter->driver_cmds.notify_event_cmd.mutex);
+ adapter->driver_cmds.notify_event_cmd.status = LEAPRAID_CMD_PENDING;
+ evt_notify_req =
+ leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.notify_event_cmd.inter_taskid);
+ memset(evt_notify_req, 0, sizeof(struct leapraid_evt_notify_req));
+ evt_notify_req->func = LEAPRAID_FUNC_EVENT_NOTIFY;
+ for (i = 0; i < LEAPRAID_EVT_MASK_COUNT; i++)
+ evt_notify_req->evt_masks[i] =
+ cpu_to_le32(adapter->fw_evt_s.leapraid_evt_masks[i]);
+ init_completion(&adapter->driver_cmds.notify_event_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.notify_event_cmd.inter_taskid);
+ wait_for_completion_timeout(
+ &adapter->driver_cmds.notify_event_cmd.done,
+ LEAPRAID_NOTIFY_EVENT_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.notify_event_cmd.status &
+ LEAPRAID_CMD_DONE))
+ if (adapter->driver_cmds.notify_event_cmd.status &
+ LEAPRAID_CMD_RESET)
+ rc = -EFAULT;
+ adapter->driver_cmds.notify_event_cmd.status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&adapter->driver_cmds.notify_event_cmd.mutex);
+
+ return rc;
+}
+
+int leapraid_scan_dev(struct leapraid_adapter *adapter, bool async_scan_dev)
+{
+ struct leapraid_scan_dev_req *scan_dev_req;
+ struct leapraid_scan_dev_rep *scan_dev_rep;
+ u16 adapter_status;
+ int rc = 0;
+
+ dev_info(&adapter->pdev->dev,
+ "send device scan, async_scan_dev=%d!\n", async_scan_dev);
+
+ adapter->driver_cmds.scan_dev_cmd.status = LEAPRAID_CMD_PENDING;
+ adapter->driver_cmds.scan_dev_cmd.async_scan_dev = async_scan_dev;
+ scan_dev_req = leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.scan_dev_cmd.inter_taskid);
+ memset(scan_dev_req, 0, sizeof(struct leapraid_scan_dev_req));
+ scan_dev_req->func = LEAPRAID_FUNC_SCAN_DEV;
+
+ if (async_scan_dev) {
+ adapter->scan_dev_desc.first_scan_dev_fired = true;
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.scan_dev_cmd.inter_taskid);
+ return 0;
+ }
+
+ init_completion(&adapter->driver_cmds.scan_dev_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.scan_dev_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.scan_dev_cmd.done,
+ LEAPRAID_SCAN_DEV_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.scan_dev_cmd.status & LEAPRAID_CMD_DONE)) {
+ dev_err(&adapter->pdev->dev, "device scan timeout!\n");
+ if (adapter->driver_cmds.scan_dev_cmd.status &
+ LEAPRAID_CMD_RESET)
+ rc = -EFAULT;
+ else
+ rc = -ETIME;
+ goto out;
+ }
+
+ scan_dev_rep = (void *)(&adapter->driver_cmds.scan_dev_cmd.reply);
+ adapter_status =
+ le16_to_cpu(scan_dev_rep->adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+ if (adapter_status != LEAPRAID_ADAPTER_STATUS_SUCCESS) {
+ dev_err(&adapter->pdev->dev, "device scan failure!\n");
+ rc = -EFAULT;
+ goto out;
+ }
+
+out:
+ adapter->driver_cmds.scan_dev_cmd.status = LEAPRAID_CMD_NOT_USED;
+ dev_info(&adapter->pdev->dev,
+ "device scan %s\n", ((rc == 0) ? "SUCCESS" : "FAILED"));
+ return rc;
+}
+
+static void leapraid_init_task_tracker(struct leapraid_adapter *adapter)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dynamic_task_desc.task_lock, flags);
+
+ spin_unlock_irqrestore(&adapter->dynamic_task_desc.task_lock, flags);
+}
+
+static void leapraid_init_rep_msg_addr(struct leapraid_adapter *adapter)
+{
+ u32 reply_address;
+ unsigned int i;
+
+ for (i = 0, reply_address = (u32)adapter->mem_desc.rep_msg_dma;
+ i < adapter->adapter_attr.rep_msg_qd;
+ i++, reply_address += LEAPRAID_REPLY_SIEZ) {
+ adapter->mem_desc.rep_msg_addr[i] = cpu_to_le32(reply_address);
+ }
+}
+
+static void init_rep_desc(struct leapraid_rq *rq, int index,
+ union leapraid_rep_desc_union *reply_post_free_contig)
+{
+ struct leapraid_adapter *adapter = rq->adapter;
+ unsigned int i;
+
+ if (!reset_devices)
+ rq->rep_desc =
+ adapter->mem_desc
+ .rep_desc_seg_maint[index /
+ LEAPRAID_REP_DESC_CHUNK_SIZE]
+ .rep_desc_maint[index %
+ LEAPRAID_REP_DESC_CHUNK_SIZE]
+ .rep_desc;
+ else
+ rq->rep_desc = reply_post_free_contig;
+
+ rq->rep_post_host_idx = 0;
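+	/* mark every descriptor slot unused (all ones) */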
+ for (i = 0; i < adapter->adapter_attr.rep_desc_qd; i++)
+ rq->rep_desc[i].words = cpu_to_le64(ULLONG_MAX);
+}
+
+static void leapraid_init_rep_desc(struct leapraid_adapter *adapter)
+{
+ union leapraid_rep_desc_union *reply_post_free_contig;
+ struct leapraid_int_rq *int_rq;
+ struct leapraid_blk_mq_poll_rq *blk_mq_poll_rq;
+ unsigned int i;
+ int index;
+
+ index = 0;
+ reply_post_free_contig = adapter->mem_desc
+ .rep_desc_seg_maint[0]
+ .rep_desc_maint[0]
+ .rep_desc;
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qdex; i++) {
+ int_rq = &adapter->notification_desc.int_rqs[i];
+ init_rep_desc(&int_rq->rq, index, reply_post_free_contig);
+ if (!reset_devices)
+ index++;
+ else
+ reply_post_free_contig +=
+ adapter->adapter_attr.rep_desc_qd;
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qcnt; i++) {
+ blk_mq_poll_rq = &adapter->notification_desc.blk_mq_poll_rqs[i];
+ init_rep_desc(&blk_mq_poll_rq->rq,
+ index, reply_post_free_contig);
+ if (!reset_devices)
+ index++;
+ else
+ reply_post_free_contig +=
+ adapter->adapter_attr.rep_desc_qd;
+ }
+}
+
+static void leapraid_init_bar_idx_regs(struct leapraid_adapter *adapter)
+{
+ struct leapraid_int_rq *int_rq;
+ struct leapraid_blk_mq_poll_rq *blk_mq_poll_rq;
+ unsigned int i, j;
+
+ adapter->rep_msg_host_idx = adapter->adapter_attr.rep_msg_qd - 1;
+ writel(adapter->rep_msg_host_idx,
+ &adapter->iomem_base->rep_msg_host_idx);
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qdex; i++) {
+ int_rq = &adapter->notification_desc.int_rqs[i];
+ for (j = 0; j < REP_POST_HOST_IDX_REG_CNT; j++)
+ writel((int_rq->rq.msix_idx & 7) <<
+ LEAPRAID_RPHI_MSIX_IDX_SHIFT,
+ &adapter->iomem_base->rep_post_reg_idx[j].idx);
+ }
+
+ for (i = 0; i < adapter->notification_desc.iopoll_qcnt; i++) {
+ blk_mq_poll_rq =
+ &adapter->notification_desc.blk_mq_poll_rqs[i];
+ for (j = 0; j < REP_POST_HOST_IDX_REG_CNT; j++)
+ writel((blk_mq_poll_rq->rq.msix_idx & 7) <<
+ LEAPRAID_RPHI_MSIX_IDX_SHIFT,
+ &adapter->iomem_base->rep_post_reg_idx[j].idx);
+ }
+}
+
+static int leapraid_make_adapter_available(struct leapraid_adapter *adapter)
+{
+ int rc = 0;
+
+ leapraid_init_task_tracker(adapter);
+ leapraid_init_rep_msg_addr(adapter);
+
+ if (adapter->scan_dev_desc.driver_loading)
+ leapraid_configure_reply_queue_affinity(adapter);
+
+ leapraid_init_rep_desc(adapter);
+ rc = leapraid_send_adapter_init(adapter);
+ if (rc)
+ return rc;
+
+ leapraid_init_bar_idx_regs(adapter);
+ leapraid_unmask_int(adapter);
+ rc = leapraid_cfg_pages(adapter);
+ if (rc)
+ return rc;
+
+ rc = leapraid_evt_notify(adapter);
+ if (rc)
+ return rc;
+
+ if (!adapter->access_ctrl.shost_recovering) {
+ adapter->scan_dev_desc.wait_scan_dev_done = true;
+ return 0;
+ }
+
+ rc = leapraid_scan_dev(adapter, false);
+ if (rc)
+ return rc;
+
+ return rc;
+}
+
+int leapraid_ctrl_init(struct leapraid_adapter *adapter)
+{
+ u32 cap;
+ int rc = 0;
+
+ rc = leapraid_set_pcie_and_notification(adapter);
+ if (rc)
+ goto out_free_resources;
+
+ pci_set_drvdata(adapter->pdev, adapter->shost);
+
+ pcie_capability_read_dword(adapter->pdev, PCI_EXP_DEVCAP, &cap);
+
+ if (cap & PCI_EXP_DEVCAP_EXT_TAG) {
+ pcie_capability_set_word(adapter->pdev, PCI_EXP_DEVCTL,
+ PCI_EXP_DEVCTL_EXT_TAG);
+ }
+
+ rc = leapraid_make_adapter_ready(adapter, PART_RESET);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "make adapter ready failure\n");
+ goto out_free_resources;
+ }
+
+ rc = leapraid_get_adapter_features(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "get adapter feature failure\n");
+ goto out_free_resources;
+ }
+
+ rc = leapraid_fw_log_init(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "fw log init failure\n");
+ goto out_free_resources;
+ }
+
+ rc = leapraid_request_host_memory(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "request host memory failure\n");
+ goto out_free_resources;
+ }
+
+ init_waitqueue_head(&adapter->reset_desc.reset_wait_queue);
+
+ rc = leapraid_alloc_dev_topo_bitmaps(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "alloc topo bitmaps failure\n");
+ goto out_free_resources;
+ }
+
+ rc = leapraid_init_driver_cmds(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev, "init driver cmds failure\n");
+ goto out_free_resources;
+ }
+
+ leapraid_init_event_mask(adapter);
+
+ rc = leapraid_make_adapter_available(adapter);
+ if (rc) {
+ dev_err(&adapter->pdev->dev,
+ "make adapter available failure\n");
+ goto out_free_resources;
+ }
+ return 0;
+
+out_free_resources:
+ adapter->access_ctrl.host_removing = true;
+ leapraid_fw_log_exit(adapter);
+ leapraid_disable_controller(adapter);
+ leapraid_free_host_memory(adapter);
+ leapraid_free_dev_topo_bitmaps(adapter);
+ pci_set_drvdata(adapter->pdev, NULL);
+ return rc;
+}
+
+void leapraid_remove_ctrl(struct leapraid_adapter *adapter)
+{
+ leapraid_check_scheduled_fault_stop(adapter);
+ leapraid_fw_log_stop(adapter);
+ leapraid_fw_log_exit(adapter);
+ leapraid_disable_controller(adapter);
+ leapraid_free_host_memory(adapter);
+ leapraid_free_dev_topo_bitmaps(adapter);
+ leapraid_free_enc_list(adapter);
+ pci_set_drvdata(adapter->pdev, NULL);
+}
+
+void leapraid_free_internal_scsi_cmd(struct leapraid_adapter *adapter)
+{
+ mutex_lock(&adapter->driver_cmds.driver_scsiio_cmd.mutex);
+ kfree(adapter->driver_cmds.internal_scmd);
+ adapter->driver_cmds.internal_scmd = NULL;
+ mutex_unlock(&adapter->driver_cmds.driver_scsiio_cmd.mutex);
+}
diff --git a/drivers/scsi/leapraid/leapraid_func.h b/drivers/scsi/leapraid/leapraid_func.h
new file mode 100644
index 000000000000..2c51ef359b7e
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid_func.h
@@ -0,0 +1,1425 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+
+#ifndef LEAPRAID_FUNC_H_INCLUDED
+#define LEAPRAID_FUNC_H_INCLUDED
+
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/poll.h>
+#include <linux/errno.h>
+#include <linux/ktime.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_eh.h>
+#include <scsi/scsicam.h>
+#include <scsi/scsi_tcq.h>
+#include <scsi/scsi_dbg.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_transport_sas.h>
+
+#include "leapraid.h"
+
+#include <linux/blk-mq-pci.h>
+/* request and reply buffer sizes */
+#define LEAPRAID_REQUEST_SIZE 128
+#define LEAPRAID_REPLY_SIEZ 128
+#define LEAPRAID_CHAIN_SEG_SIZE 128
+#define LEAPRAID_MAX_SGES_IN_CHAIN 7
+#define LEAPRAID_DEFAULT_CHAINS_PER_IO 19
+#define LEAPRAID_DEFAULT_DIX_CHAINS_PER_IO \
+ (2 * LEAPRAID_DEFAULT_CHAINS_PER_IO) /* TODO DIX */
+#define LEAPRAID_IEEE_SGE64_ENTRY_SIZE 16
+#define LEAPRAID_REP_DESC_CHUNK_SIZE 16
+#define LEAPRAID_REP_DESC_ENTRY_SIZE 8
+#define LEAPRAID_REP_MSG_ADDR_SIZE 4
+#define LEAPRAID_REP_RQ_CNT_SIZE 16
+
+#define LEAPRAID_SYS_LOG_BUF_SIZE 0x200000
+#define LEAPRAID_SYS_LOG_BUF_RESERVE 0x1000
+
+/* Driver version and name */
+#define LEAPRAID_DRIVER_NAME "LeapRaid"
+#define LEAPRAID_NAME_LENGTH 48
+#define LEAPRAID_AUTHOR "LeapIO Inc."
+#define LEAPRAID_DESCRIPTION "LeapRaid Driver"
+#define LEAPRAID_DRIVER_VERSION "2.00.00.05"
+#define LEAPRAID_MAJOR_VERSION 2
+#define LEAPRAID_MINOR_VERSION 00
+#define LEAPRAID_BUILD_VERSION 00
+#define LEAPRAID_RELEASE_VERSION 05
+
+/* Device ID */
+#define LEAPRAID_VENDOR_ID 0xD405
+#define LEAPRAID_DEVID_HBA 0x8200
+#define LEAPRAID_DEVID_RAID 0x8201
+
+#define LEAPRAID_PCI_VENDOR_ID_MASK 0xFFFF
+
+/* RAID virtual channel ID */
+#define RAID_CHANNEL 1
+
+/* Scatter/Gather (SG) segment limits */
+#define LEAPRAID_MAX_PHYS_SEGMENTS SG_CHUNK_SIZE
+
+#define LEAPRAID_KDUMP_MIN_PHYS_SEGMENTS 32
+#define LEAPRAID_SG_DEPTH LEAPRAID_MAX_PHYS_SEGMENTS
+
+/* firmware / config page operations */
+#define LEAPRAID_SET_PARAMETER_SYNC_TIMESTAMP 0x81
+#define LEAPRAID_CFG_REQ_RETRY_TIMES 2
+
+/* Hardware access helpers */
+#define leapraid_readl(addr) readl(addr)
+#define leapraid_check_reset(status) \
+ (!((status) & LEAPRAID_CMD_RESET))
+
+/* Polling intervals */
+#define LEAPRAID_PCIE_LOG_POLLING_INTERVAL 1
+#define LEAPRAID_FAULT_POLLING_INTERVAL 1000
+#define LEAPRAID_TIMESTAMP_SYNC_INTERVAL 900
+#define LEAPRAID_SMART_POLLING_INTERVAL (300 * 1000)
+
+/* init mask */
+#define LEAPRAID_RESET_IRQ_MASK 0x40000000
+#define LEAPRAID_REPLY_INT_MASK 0x00000008
+#define LEAPRAID_TO_SYS_DB_MASK 0x00000001
+
+/* queue depth */
+#define LEAPRAID_SATA_QUEUE_DEPTH 32
+#define LEAPRAID_SAS_QUEUE_DEPTH 254
+#define LEAPRAID_RAID_QUEUE_DEPTH 128
+
+/* SCSI device and queue limits */
+#define LEAPRAID_MAX_SECTORS 8192
+#define LEAPRAID_DEF_MAX_SECTORS 32767
+#define LEAPRAID_MAX_CDB_LEN 32
+#define LEAPRAID_MAX_LUNS 16384
+#define LEAPRAID_CAN_QUEUE_MIN 1
+#define LEAPRAID_THIS_ID_NONE -1
+#define LEAPRAID_CMD_PER_LUN 128
+#define LEAPRAID_MAX_SEGMENT_SIZE 0xffffffff
+
+/* SCSI sense, ASC/ASCQ and disk geometry configuration */
+#define DESC_FORMAT_THRESHOLD 0x72
+#define SENSE_KEY_MASK 0x0F
+#define SCSI_SENSE_RESPONSE_CODE_MASK 0x7F
+#define ASC_FAILURE_PREDICTION_THRESHOLD_EXCEEDED 0x5D
+#define LEAPRAID_LARGE_DISK_THRESHOLD 0x200000UL /* in sectors, 1GB */
+#define LEAPRAID_LARGE_DISK_HEADS 255
+#define LEAPRAID_LARGE_DISK_SECTORS 63
+#define LEAPRAID_SMALL_DISK_HEADS 64
+#define LEAPRAID_SMALL_DISK_SECTORS 32
+
+/* SMP (Serial Management Protocol) */
+#define LEAPRAID_SMP_PT_FLAG_SGL_PTR 0x80
+#define LEAPRAID_SMP_FN_REPORT_PHY_ERR_LOG 0x91
+#define LEAPRAID_SMP_FRAME_HEADER_SIZE 4
+#define LEAPRAID_SCSI_HOST_SHIFT 16
+#define LEAPRAID_SCSI_DRIVER_SHIFT 24
+
+/* SCSI ASC/ASCQ definitions */
+#define LEAPRAID_SCSI_ASCQ_DEFAULT 0x00
+#define LEAPRAID_SCSI_ASC_POWER_ON_RESET 0x29
+#define LEAPRAID_SCSI_ASC_INVALID_CMD_CODE 0x20
+#define LEAPRAID_SCSI_ASCQ_POWER_ON_RESET 0x07
+
+/* ---- VPD Page 0x89 (ATA Information) ---- */
+#define LEAPRAID_VPD_PAGE_ATA_INFO 0x89
+#define LEAPRAID_VPD_PG89_MAX_LEN 255
+#define LEAPRAID_VPD_PG89_MIN_LEN 214
+
+/* Byte index for NCQ support flag in VPD Page 0x89 */
+#define LEAPRAID_VPD_PG89_NCQ_BYTE_IDX 213
+#define LEAPRAID_VPD_PG89_NCQ_BIT_SHIFT 4
+#define LEAPRAID_VPD_PG89_NCQ_BIT_MASK 0x1
+
+/* readiness polling: max retries, sleep µs between */
+#define LEAPRAID_ADAPTER_READY_MAX_RETRY 15000
+#define LEAPRAID_ADAPTER_READY_SLEEP_MIN_US 1000
+#define LEAPRAID_ADAPTER_READY_SLEEP_MAX_US 1100
+
+/* Doorbell wait parameters */
+#define LEAPRAID_DB_WAIT_MAX_RETRY 20000
+#define LEAPRAID_DB_WAIT_DELAY_US 500
+
+/* Basic data size definitions */
+#define LEAPRAID_DWORDS_BYTE_SIZE 4
+#define LEAPRAID_WORD_BYTE_SIZE 2
+
+/* SGL threshold and chain offset */
+#define LEAPRAID_SGL_INLINE_THRESHOLD 2
+#define LEAPRAID_CHAIN_OFFSET_DWORDS 7
+
+/* MSI-X group size and mask */
+#define LEAPRAID_MSIX_GROUP_SIZE 8
+#define LEAPRAID_MSIX_GROUP_MASK 7
+
+/* basic constants and limits */
+#define LEAPRAID_BUSY_LIMIT 1
+#define LEAPRAID_INDEX_FIRST 0
+#define LEAPRAID_BITS_PER_BYTE 8
+#define LEAPRAID_INVALID_HOST_DIAG_VAL 0xFFFFFFFF
+
+/* retry / sleep configuration */
+#define LEAPRAID_UNLOCK_RETRY_LIMIT 20
+#define LEAPRAID_UNLOCK_SLEEP_MS 100
+#define LEAPRAID_MSLEEP_SHORT_MS 50
+#define LEAPRAID_MSLEEP_NORMAL_MS 100
+#define LEAPRAID_MSLEEP_LONG_MS 256
+#define LEAPRAID_MSLEEP_EXTRA_LONG_MS 500
+#define LEAPRAID_IO_POLL_DELAY_US 500
+
+/* controller reset loop parameters */
+#define LEAPRAID_RESET_LOOP_COUNT_REF (300000 / 256)
+#define LEAPRAID_RESET_LOOP_COUNT_DEFAULT 10000
+#define LEAPRAID_RESET_POLL_INTERVAL_MS 500
+
+/* Device / Volume configuration */
+#define LEAPRAID_MAX_VOLUMES_DEFAULT 32
+#define LEAPRAID_MAX_DEV_HANDLE_DEFAULT 2048
+#define LEAPRAID_INVALID_DEV_HANDLE 0xFFFF
+
+/* cmd queue depth */
+#define LEAPRAID_COALESCING_DEPTH_MAX 256
+#define LEAPRAID_DEFAULT_CMD_QD_OFFSET 64
+#define LEAPRAID_REPLY_QD_ALIGNMENT 16
+/* task id offset */
+#define LEAPRAID_TASKID_OFFSET_CTRL_CMD 1
+#define LEAPRAID_TASKID_OFFSET_SCSIIO_CMD 2
+#define LEAPRAID_TASKID_OFFSET_CFG_OP_CMD 1
+#define LEAPRAID_TASKID_OFFSET_TRANSPORT_CMD 2
+#define LEAPRAID_TASKID_OFFSET_TIMESTAMP_SYNC_CMD 3
+#define LEAPRAID_TASKID_OFFSET_RAID_ACTION_CMD 4
+#define LEAPRAID_TASKID_OFFSET_ENC_CMD 5
+#define LEAPRAID_TASKID_OFFSET_NOTIFY_EVENT_CMD 6
+
+/* task id offset for high-priority */
+#define LEAPRAID_HP_TASKID_OFFSET_CTL_CMD 0
+#define LEAPRAID_HP_TASKID_OFFSET_TM_CMD 1
+
+/* Event / Boot configuration */
+#define LEAPRAID_EVT_MASK_COUNT 4
+#define LEAPRAID_BOOT_DEV_SIZE 24
+
+/* logsense command definitions */
+#define LEAPRAID_LOGSENSE_DATA_LENGTH 16
+#define LEAPRAID_LOGSENSE_CDB_LENGTH 10
+#define LEAPRAID_LOGSENSE_CDB_CODE 0x6F
+#define LEAPRAID_LOGSENSE_TIMEOUT 5
+#define LEAPRAID_LOGSENSE_SMART_CODE 0x5D
+
+/* cmd timeout */
+#define LEAPRAID_DRIVER_SCSIIO_CMD_TIMEOUT LEAPRAID_LOGSENSE_TIMEOUT
+#define LEAPRAID_CFG_OP_TIMEOUT 15
+#define LEAPRAID_CTL_CMD_TIMEOUT 10
+#define LEAPRAID_SCAN_DEV_CMD_TIMEOUT 300
+#define LEAPRAID_TIMESTAMP_SYNC_CMD_TIMEOUT 10
+#define LEAPRAID_RAID_ACTION_CMD_TIMEOUT 10
+#define LEAPRAID_ENC_CMD_TIMEOUT 10
+#define LEAPRAID_NOTIFY_EVENT_CMD_TIMEOUT 30
+#define LEAPRAID_TM_CMD_TIMEOUT 30
+#define LEAPRAID_TRANSPORT_CMD_TIMEOUT 10
+
+/**
+ * struct leapraid_adapter_features - Features and
+ * capabilities of a LeapRAID adapter
+ *
+ * @req_slot: Number of request slots supported by the adapter
+ * @hp_slot: Number of high-priority slots supported by the adapter
+ * @adapter_caps: Adapter capabilities
+ * @fw_version: Firmware version of the adapter
+ * @max_volumes: Maximum number of RAID volumes supported by the adapter
+ * @max_dev_handle: Maximum device handle supported by the adapter
+ * @min_dev_handle: Minimum device handle supported by the adapter
+ */
+struct leapraid_adapter_features {
+ u16 req_slot;
+ u16 hp_slot;
+ u32 adapter_caps;
+ u32 fw_version;
+ u8 max_volumes;
+ u16 max_dev_handle;
+ u16 min_dev_handle;
+};
+
+/**
+ * struct leapraid_adapter_attr - Adapter attributes and capabilities
+ *
+ * @id: Adapter identifier
+ * @raid_support: Indicates if RAID is supported
+ * @bios_version: Version of the adapter BIOS
+ * @enable_mp: Indicates if multipath (MP) support is enabled
+ * @wideport_max_queue_depth: Maximum queue depth for wide ports
+ * @narrowport_max_queue_depth: Maximum queue depth for narrow ports
+ * @sata_max_queue_depth: Maximum queue depth for SATA
+ * @features: Detailed features of the adapter
+ * @adapter_total_qd: Total queue depth available on the adapter
+ * @io_qd: Queue depth allocated for I/O operations
+ * @rep_msg_qd: Queue depth for reply messages
+ * @rep_desc_qd: Queue depth for reply descriptors
+ * @rep_desc_q_seg_cnt: Number of segments in a reply descriptor queue
+ * @rq_cnt: Number of request queues
+ * @task_desc_dma_size: Size of task descriptor DMA memory
+ * @use_32_dma_mask: Indicates if 32-bit DMA mask is used
+ * @name: Adapter name string
+ */
+struct leapraid_adapter_attr {
+ u8 id;
+ bool raid_support;
+ u32 bios_version;
+ bool enable_mp;
+ u32 wideport_max_queue_depth;
+ u32 narrowport_max_queue_depth;
+ u32 sata_max_queue_depth;
+ struct leapraid_adapter_features features;
+ u32 adapter_total_qd;
+ u32 io_qd;
+ u32 rep_msg_qd;
+ u32 rep_desc_qd;
+ u32 rep_desc_q_seg_cnt;
+ u16 rq_cnt;
+ u32 task_desc_dma_size;
+ bool use_32_dma_mask;
+ char name[LEAPRAID_NAME_LENGTH];
+};
+
+/**
+ * struct leapraid_io_req_tracker - Track a SCSI I/O request
+ * for the adapter
+ *
+ * @taskid: Unique task ID for this I/O request
+ * @scmd: Pointer to the associated SCSI command
+ * @chain_list: List of chain frames associated with this request
+ * @msix_io: MSI-X vector assigned to this I/O request
+ * @chain: Pointer to the chain memory for this request
+ * @chain_dma: DMA address of the chain memory
+ */
+struct leapraid_io_req_tracker {
+ u16 taskid;
+ struct scsi_cmnd *scmd;
+ struct list_head chain_list;
+ u16 msix_io;
+ void *chain;
+ dma_addr_t chain_dma;
+};
+
+/**
+ * struct leapraid_task_tracker - Tracks a task in the adapter
+ *
+ * @taskid: Unique task ID for this tracker
+ * @cb_idx: Callback index associated with this task
+ * @tracker_list: Linked list node to chain this tracker in lists
+ */
+struct leapraid_task_tracker {
+ u16 taskid;
+ u8 cb_idx;
+ struct list_head tracker_list;
+};
+
+/**
+ * struct leapraid_rep_desc_maint - Maintains reply descriptor
+ * memory
+ *
+ * @rep_desc: Pointer to the reply descriptor
+ * @rep_desc_dma: DMA address of the reply descriptor
+ */
+struct leapraid_rep_desc_maint {
+ union leapraid_rep_desc_union *rep_desc;
+ dma_addr_t rep_desc_dma;
+};
+
+/**
+ * struct leapraid_rep_desc_seg_maint - Maintains reply descriptor
+ * segment memory
+ *
+ * @rep_desc_seg: Pointer to the reply descriptor segment
+ * @rep_desc_seg_dma: DMA address of the reply descriptor segment
+ * @rep_desc_maint: Pointer to the main reply descriptor structure
+ */
+struct leapraid_rep_desc_seg_maint {
+ void *rep_desc_seg;
+ dma_addr_t rep_desc_seg_dma;
+ struct leapraid_rep_desc_maint *rep_desc_maint;
+};
+
+/**
+ * struct leapraid_mem_desc - Memory descriptor for LeapRaid adapter
+ *
+ * @task_desc: Pointer to task descriptor
+ * @task_desc_dma: DMA address of task descriptor
+ * @sg_chain_pool: DMA pool for SGL chain allocations
+ * @sg_chain_pool_size: Size of the sg_chain_pool
+ * @taskid_to_uniq_tag: Mapping from task ID to unique tag
+ * @sense_data: Buffer for SCSI sense data
+ * @sense_data_dma: DMA address of sense_data buffer
+ * @rep_msg: Buffer for reply message
+ * @rep_msg_dma: DMA address of reply message buffer
+ * @rep_msg_addr: Pointer to reply message address
+ * @rep_msg_addr_dma: DMA address of reply message address
+ * @rep_desc_seg_maint: Pointer to reply descriptor segment
+ * @rep_desc_q_arr: Pointer to reply descriptor queue array
+ * @rep_desc_q_arr_dma: DMA address of reply descriptor queue array
+ */
+struct leapraid_mem_desc {
+ void *task_desc;
+ dma_addr_t task_desc_dma;
+ struct dma_pool *sg_chain_pool;
+ u16 sg_chain_pool_size;
+ u16 *taskid_to_uniq_tag;
+ u8 *sense_data;
+ dma_addr_t sense_data_dma;
+ u8 *rep_msg;
+ dma_addr_t rep_msg_dma;
+ __le32 *rep_msg_addr;
+ dma_addr_t rep_msg_addr_dma;
+ struct leapraid_rep_desc_seg_maint *rep_desc_seg_maint;
+ struct leapraid_rep_desc_q_arr *rep_desc_q_arr;
+ dma_addr_t rep_desc_q_arr_dma;
+};
+
+#define LEAPRAID_FIXED_INTER_CMDS 7
+#define LEAPRAID_FIXED_HP_CMDS 2
+#define LEAPRAID_INTER_HP_CMDS_DIF \
+ (LEAPRAID_FIXED_INTER_CMDS - LEAPRAID_FIXED_HP_CMDS)
+
+#define LEAPRAID_CMD_NOT_USED 0x8000
+#define LEAPRAID_CMD_DONE 0x0001
+#define LEAPRAID_CMD_PENDING 0x0002
+#define LEAPRAID_CMD_REPLY_VALID 0x0004
+#define LEAPRAID_CMD_RESET 0x0008
+
+/**
+ * enum LEAPRAID_CB_INDEX - Callback index for LeapRaid driver
+ *
+ * @LEAPRAID_SCAN_DEV_CB_IDX: Scan device callback index
+ * @LEAPRAID_CONFIG_CB_IDX: Configuration callback index
+ * @LEAPRAID_TRANSPORT_CB_IDX: Transport callback index
+ * @LEAPRAID_TIMESTAMP_SYNC_CB_IDX: Timestamp sync callback index
+ * @LEAPRAID_RAID_ACTION_CB_IDX: RAID action callback index
+ * @LEAPRAID_DRIVER_SCSIIO_CB_IDX: Driver SCSI I/O callback index
+ * @LEAPRAID_SAS_CTRL_CB_IDX: SAS controller callback index
+ * @LEAPRAID_ENC_CB_IDX: Enclosure callback index
+ * @LEAPRAID_NOTIFY_EVENT_CB_IDX: Notify event callback index
+ * @LEAPRAID_CTL_CB_IDX: Control callback index
+ * @LEAPRAID_TM_CB_IDX: Task management callback index
+ * @LEAPRAID_NUM_CB_IDXS: Number of defined callback indexes
+ */
+enum LEAPRAID_CB_INDEX {
+ LEAPRAID_SCAN_DEV_CB_IDX = 0x1,
+ LEAPRAID_CONFIG_CB_IDX = 0x2,
+ LEAPRAID_TRANSPORT_CB_IDX = 0x3,
+ LEAPRAID_TIMESTAMP_SYNC_CB_IDX = 0x4,
+ LEAPRAID_RAID_ACTION_CB_IDX = 0x5,
+ LEAPRAID_DRIVER_SCSIIO_CB_IDX = 0x6,
+ LEAPRAID_SAS_CTRL_CB_IDX = 0x7,
+ LEAPRAID_ENC_CB_IDX = 0x8,
+ LEAPRAID_NOTIFY_EVENT_CB_IDX = 0x9,
+ LEAPRAID_CTL_CB_IDX = 0xA,
+ LEAPRAID_TM_CB_IDX = 0xB,
+ LEAPRAID_NUM_CB_IDXS
+};
+
+struct leapraid_default_reply {
+ u8 pad[LEAPRAID_REPLY_SIEZ];
+};
+
+struct leapraid_sense_buffer {
+ u8 pad[SCSI_SENSE_BUFFERSIZE];
+};
+
+/**
+ * struct leapraid_driver_cmd - Driver command tracking structure
+ *
+ * @reply: Default reply structure returned by the adapter
+ * @done: Completion object used to signal command completion
+ * @status: Status code returned by the firmware
+ * @taskid: Unique task identifier for this command
+ * @hp_taskid: Task identifier for high-priority commands
+ * @inter_taskid: Task identifier for internal commands
+ * @cb_idx: Callback index used to identify completion context
+ * @async_scan_dev: True if this command is for asynchronous device scan
+ * @sense: Sense buffer holding error information from device
+ * @mutex: Mutex to protect access to this command structure
+ * @list: List node for linking driver commands into lists
+ */
+struct leapraid_driver_cmd {
+ struct leapraid_default_reply reply;
+ struct completion done;
+ u16 status;
+ u16 taskid;
+ u16 hp_taskid;
+ u16 inter_taskid;
+ u8 cb_idx;
+ bool async_scan_dev;
+ struct leapraid_sense_buffer sense;
+ struct mutex mutex;
+ struct list_head list;
+};
+
+/**
+ * struct leapraid_driver_cmds - Collection of driver command objects
+ *
+ * @special_cmd_list: List head for tracking special driver commands
+ * @scan_dev_cmd: Command used for asynchronous device scan operations
+ * @cfg_op_cmd: Command for configuration operations
+ * @transport_cmd: Command for transport-level operations
+ * @timestamp_sync_cmd: Command for synchronizing timestamp with firmware
+ * @raid_action_cmd: Command for RAID-related management or action requests
+ * @driver_scsiio_cmd: Command used for internal SCSI I/O processing
+ * @enc_cmd: Command for enclosure management operations
+ * @notify_event_cmd: Command for asynchronous event notification handling
+ * @ctl_cmd: Command for generic control or maintenance operations
+ * @tm_cmd: Task management command
+ * @internal_scmd: Pointer to internal SCSI command used by the driver
+ */
+struct leapraid_driver_cmds {
+ struct list_head special_cmd_list;
+ struct leapraid_driver_cmd scan_dev_cmd;
+ struct leapraid_driver_cmd cfg_op_cmd;
+ struct leapraid_driver_cmd transport_cmd;
+ struct leapraid_driver_cmd timestamp_sync_cmd;
+ struct leapraid_driver_cmd raid_action_cmd;
+ struct leapraid_driver_cmd driver_scsiio_cmd;
+ struct leapraid_driver_cmd enc_cmd;
+ struct leapraid_driver_cmd notify_event_cmd;
+ struct leapraid_driver_cmd ctl_cmd;
+ struct leapraid_driver_cmd tm_cmd;
+ struct scsi_cmnd *internal_scmd;
+};
+
+/**
+ * struct leapraid_dynamic_task_desc - Dynamic task descriptor
+ *
+ * @task_lock: Spinlock to protect concurrent access
+ * @hp_taskid: Current high-priority task ID
+ * @hp_cmd_qd: Fixed command queue depth for high-priority tasks
+ * @inter_taskid: Current internal task ID
+ * @inter_cmd_qd: Fixed command queue depth for internal tasks
+ */
+struct leapraid_dynamic_task_desc {
+ spinlock_t task_lock;
+ u16 hp_taskid;
+ u16 hp_cmd_qd;
+ u16 inter_taskid;
+ u16 inter_cmd_qd;
+};
+
+/**
+ * struct leapraid_fw_evt_work - Firmware event work structure
+ *
+ * @list: Linked list node for queuing the work
+ * @adapter: Pointer to the associated LeapRaid adapter
+ * @work: Work structure used by the kernel workqueue
+ * @refcnt: Reference counter for managing the lifetime of this work
+ * @evt_data: Pointer to firmware event data
+ * @dev_handle: Device handle associated with the event
+ * @evt_type: Type of firmware event
+ * @ignore: Flag indicating whether the event should be ignored
+ */
+struct leapraid_fw_evt_work {
+ struct list_head list;
+ struct leapraid_adapter *adapter;
+ struct work_struct work;
+ struct kref refcnt;
+ void *evt_data;
+ u16 dev_handle;
+ u16 evt_type;
+ u8 ignore;
+};
+
+/**
+ * struct leapraid_fw_evt_struct - Firmware event handling structure
+ *
+ * @fw_evt_name: Name of the firmware event
+ * @fw_evt_thread: Workqueue used for processing firmware events
+ * @fw_evt_lock: Spinlock protecting access to the firmware event list
+ * @fw_evt_list: Linked list of pending firmware events
+ * @cur_evt: Pointer to the currently processing firmware event
+ * @fw_evt_cleanup: Flag indicating whether cleanup of events is in progress
+ * @leapraid_evt_masks: Array of event masks for filtering firmware events
+ */
+struct leapraid_fw_evt_struct {
+ char fw_evt_name[48];
+ struct workqueue_struct *fw_evt_thread;
+ spinlock_t fw_evt_lock;
+ struct list_head fw_evt_list;
+ struct leapraid_fw_evt_work *cur_evt;
+ int fw_evt_cleanup;
+ u32 leapraid_evt_masks[LEAPRAID_EVT_MASK_COUNT];
+};
+
+/**
+ * struct leapraid_rq - Represents a LeapRaid request queue
+ *
+ * @adapter: Pointer to the associated LeapRaid adapter
+ * @msix_idx: MSI-X vector index used by this queue
+ * @rep_post_host_idx: Index of the last processed reply descriptor
+ * @rep_desc: Pointer to the reply descriptor associated with this queue
+ * @name: Name of the request queue
+ * @busy: Atomic counter indicating if the queue is busy
+ */
+struct leapraid_rq {
+ struct leapraid_adapter *adapter;
+ u8 msix_idx;
+ u32 rep_post_host_idx;
+ union leapraid_rep_desc_union *rep_desc;
+ char name[LEAPRAID_NAME_LENGTH];
+ atomic_t busy;
+};
+
+/**
+ * struct leapraid_int_rq - Internal request queue for a CPU
+ *
+ * @affinity_hint: CPU affinity mask for the queue
+ * @rq: Underlying LeapRaid request queue structure
+ */
+struct leapraid_int_rq {
+ cpumask_var_t affinity_hint;
+ struct leapraid_rq rq;
+};
+
+/**
+ * struct leapraid_blk_mq_poll_rq - Polling request for LeapRaid blk-mq
+ *
+ * @busy: Atomic flag indicating request is being processed
+ * @pause: Atomic flag to temporarily suspend polling
+ * @rq: The underlying LeapRaid request structure
+ */
+struct leapraid_blk_mq_poll_rq {
+ atomic_t busy;
+ atomic_t pause;
+ struct leapraid_rq rq;
+};
+
+/**
+ * struct leapraid_notification_desc - Notification
+ * descriptor for LeapRaid
+ *
+ * @iopoll_qdex: Index of the I/O polling queue
+ * @iopoll_qcnt: Count of I/O polling queues
+ * @msix_enable: Flag indicating MSI-X is enabled
+ * @msix_cpu_map: CPU map for MSI-X interrupts
+ * @msix_cpu_map_sz: Size of the MSI-X CPU map
+ * @int_rqs: Array of interrupt request queues
+ * @int_rqs_allocated: Count of allocated interrupt request queues
+ * @blk_mq_poll_rqs: Array of blk-mq polling requests
+ */
+struct leapraid_notification_desc {
+ u32 iopoll_qdex;
+ u32 iopoll_qcnt;
+ bool msix_enable;
+ u8 *msix_cpu_map;
+ u32 msix_cpu_map_sz;
+ struct leapraid_int_rq *int_rqs;
+ u32 int_rqs_allocated;
+ struct leapraid_blk_mq_poll_rq *blk_mq_poll_rqs;
+};
+
+/**
+ * struct leapraid_reset_desc - Reset descriptor for LeapRaid
+ *
+ * @fault_reset_wq: Workqueue for fault reset operations
+ * @fault_reset_work: Delayed work structure for fault reset
+ * @fault_reset_wq_name: Name of the fault reset workqueue
+ * @host_diag_mutex: Mutex for host diagnostic operations
+ * @adapter_reset_lock: Spinlock for adapter reset operations
+ * @adapter_reset_mutex: Mutex for adapter reset operations
+ * @adapter_link_resetting: Flag indicating if adapter link is resetting
+ * @adapter_reset_results: Results of the adapter reset operation
+ * @pending_io_cnt: Count of pending I/O operations
+ * @reset_wait_queue: Wait queue for reset operations
+ * @reset_cnt: Counter for reset operations
+ */
+struct leapraid_reset_desc {
+ struct workqueue_struct *fault_reset_wq;
+ struct delayed_work fault_reset_work;
+ char fault_reset_wq_name[48];
+ struct mutex host_diag_mutex;
+ spinlock_t adapter_reset_lock;
+ struct mutex adapter_reset_mutex;
+ bool adapter_link_resetting;
+ int adapter_reset_results;
+ int pending_io_cnt;
+ wait_queue_head_t reset_wait_queue;
+ u32 reset_cnt;
+};
+
+/**
+ * struct leapraid_scan_dev_desc - Scan device descriptor
+ * for LeapRaid
+ *
+ * @wait_scan_dev_done: Flag indicating if scan device operation is done
+ * @driver_loading: Flag indicating if driver is loading
+ * @first_scan_dev_fired: Flag indicating if first scan device operation fired
+ * @scan_dev_failed: Flag indicating if scan device operation failed
+ * @scan_start: Flag indicating if scan operation started
+ * @scan_start_failed: Count of failed scan start operations
+ */
+struct leapraid_scan_dev_desc {
+ bool wait_scan_dev_done;
+ bool driver_loading;
+ bool first_scan_dev_fired;
+ bool scan_dev_failed;
+ bool scan_start;
+ u16 scan_start_failed;
+};
+
+/**
+ * struct leapraid_access_ctrl - Access control structure for LeapRaid
+ *
+ * @pci_access_lock: Mutex for PCI access control
+ * @adapter_thermal_alert: Flag indicating if adapter thermal alert is active
+ * @shost_recovering: Flag indicating if host is recovering
+ * @host_removing: Flag indicating if host is being removed
+ * @pcie_recovering: Flag indicating if PCIe is recovering
+ */
+struct leapraid_access_ctrl {
+ struct mutex pci_access_lock;
+ bool adapter_thermal_alert;
+ bool shost_recovering;
+ bool host_removing;
+ bool pcie_recovering;
+};
+
+/**
+ * struct leapraid_fw_log_desc - Firmware log descriptor for LeapRaid
+ *
+ * @fw_log_buffer: Buffer for firmware log data
+ * @fw_log_buffer_dma: DMA address of the firmware log buffer
+ * @fw_log_wq_name: Name of the firmware log workqueue
+ * @fw_log_wq: Workqueue for firmware log operations
+ * @fw_log_work: Delayed work structure for firmware log
+ * @open_pcie_trace: Flag indicating if PCIe tracing is open
+ * @fw_log_init_flag: Flag indicating if firmware log is initialized
+ */
+struct leapraid_fw_log_desc {
+ u8 *fw_log_buffer;
+ dma_addr_t fw_log_buffer_dma;
+ char fw_log_wq_name[48];
+ struct workqueue_struct *fw_log_wq;
+ struct delayed_work fw_log_work;
+ int open_pcie_trace;
+ int fw_log_init_flag;
+};
+
+#define LEAPRAID_CARD_PORT_FLG_DIRTY 0x01
+#define LEAPRAID_CARD_PORT_FLG_NEW 0x02
+#define LEAPRAID_DISABLE_MP_PORT_ID 0xFF
+/**
+ * struct leapraid_card_port - Card port structure for LeapRaid
+ *
+ * @list: List head for card port
+ * @vphys_list: List head for virtual phy list
+ * @port_id: Port ID
+ * @sas_address: SAS address
+ * @phy_mask: Mask of phy
+ * @vphys_mask: Mask of virtual phy
+ * @flg: Flags for the port
+ */
+struct leapraid_card_port {
+ struct list_head list;
+ struct list_head vphys_list;
+ u8 port_id;
+ u64 sas_address;
+ u32 phy_mask;
+ u32 vphys_mask;
+ u8 flg;
+};
+
+/**
+ * struct leapraid_card_phy - Card phy structure for LeapRaid
+ *
+ * @port_siblings: List head for port siblings
+ * @card_port: Pointer to the card port
+ * @identify: SAS identify structure
+ * @remote_identify: Remote SAS identify structure
+ * @phy: SAS phy structure
+ * @phy_id: Phy ID
+ * @hdl: Handle for the port
+ * @attached_hdl: Handle for the attached port
+ * @phy_is_assigned: Flag indicating if phy is assigned
+ * @vphy: Flag indicating if virtual phy
+ */
+struct leapraid_card_phy {
+ struct list_head port_siblings;
+ struct leapraid_card_port *card_port;
+ struct sas_identify identify;
+ struct sas_identify remote_identify;
+ struct sas_phy *phy;
+ u8 phy_id;
+ u16 hdl;
+ u16 attached_hdl;
+ bool phy_is_assigned;
+ bool vphy;
+};
+
+/**
+ * struct leapraid_topo_node - SAS topology node for LeapRaid
+ *
+ * @list: List head for linking nodes
+ * @sas_port_list: List of SAS ports
+ * @card_port: Associated card port
+ * @card_phy: Associated card PHY
+ * @rphy: SAS remote PHY device
+ * @parent_dev: Parent device pointer
+ * @sas_address: SAS address of this node
+ * @sas_address_parent: Parent node's SAS address
+ * @phys_num: Number of PHYs
+ * @hdl: Handle identifier
+ * @enc_hdl: Enclosure handle
+ * @enc_lid: Enclosure logical identifier
+ * @resp: Response status flag
+ */
+struct leapraid_topo_node {
+ struct list_head list;
+ struct list_head sas_port_list;
+ struct leapraid_card_port *card_port;
+ struct leapraid_card_phy *card_phy;
+ struct sas_rphy *rphy;
+ struct device *parent_dev;
+ u64 sas_address;
+ u64 sas_address_parent;
+ u8 phys_num;
+ u16 hdl;
+ u16 enc_hdl;
+ u64 enc_lid;
+ bool resp;
+};
+
+/**
+ * struct leapraid_dev_topo - LeapRaid device topology management structure
+ *
+ * @topo_node_lock: Spinlock for protecting topology node operations
+ * @sas_dev_lock: Spinlock for SAS device list access
+ * @raid_volume_lock: Spinlock for RAID volume list access
+ * @sas_id: SAS domain identifier
+ * @card: Main card topology node
+ * @exp_list: List of expander devices
+ * @enc_list: List of enclosure devices
+ * @sas_dev_list: List of SAS devices
+ * @sas_dev_init_list: List of SAS devices being initialized
+ * @raid_volume_list: List of RAID volumes
+ * @card_port_list: List of card ports
+ * @pd_hdls_sz: Size of the physical disk handle array
+ * @pd_hdls: Array of physical disk handles
+ * @blocking_hdls: Array of blocking handles
+ * @pending_dev_add_sz: Size of the pending device addition array
+ * @pending_dev_add: Array tracking devices pending addition
+ * @dev_removing_sz: Size of the device removal tracking array
+ * @dev_removing: Array tracking devices being removed
+ */
+struct leapraid_dev_topo {
+ spinlock_t topo_node_lock;
+ spinlock_t sas_dev_lock;
+ spinlock_t raid_volume_lock;
+ int sas_id;
+ struct leapraid_topo_node card;
+ struct list_head exp_list;
+ struct list_head enc_list;
+ struct list_head sas_dev_list;
+ struct list_head sas_dev_init_list;
+ struct list_head raid_volume_list;
+ struct list_head card_port_list;
+ u16 pd_hdls_sz;
+ void *pd_hdls;
+ void *blocking_hdls;
+ u16 pending_dev_add_sz;
+ void *pending_dev_add;
+ u16 dev_removing_sz;
+ void *dev_removing;
+};
+
+/**
+ * struct leapraid_boot_dev - Boot device structure for LeapRaid
+ *
+ * @dev: Device pointer
+ * @chnl: Channel number
+ * @form: Boot device form (device specifier format from the BIOS pages)
+ * @pg_dev: Config page device content
+ */
+struct leapraid_boot_dev {
+ void *dev;
+ u8 chnl;
+ u8 form;
+ u8 pg_dev[LEAPRAID_BOOT_DEV_SIZE];
+};
+
+/**
+ * struct leapraid_boot_devs - Boot device management structure
+ * @requested_boot_dev: Requested primary boot device
+ * @requested_alt_boot_dev: Requested alternate boot device
+ * @current_boot_dev: Currently active boot device
+ */
+struct leapraid_boot_devs {
+ struct leapraid_boot_dev requested_boot_dev;
+ struct leapraid_boot_dev requested_alt_boot_dev;
+ struct leapraid_boot_dev current_boot_dev;
+};
+
+/**
+ * struct leapraid_smart_poll_desc - SMART polling descriptor
+ * @smart_poll_wq: Workqueue for SMART polling tasks
+ * @smart_poll_work: Delayed work for SMART polling operations
+ * @smart_poll_wq_name: Workqueue name string
+ */
+struct leapraid_smart_poll_desc {
+ struct workqueue_struct *smart_poll_wq;
+ struct delayed_work smart_poll_work;
+ char smart_poll_wq_name[48];
+};
+
+/**
+ * struct leapraid_adapter - Main LeapRaid adapter structure
+ * @list: List head for adapter management
+ * @shost: SCSI host structure
+ * @pdev: PCI device structure
+ * @iomem_base: I/O memory mapped base address
+ * @rep_msg_host_idx: Host index for reply messages
+ * @mask_int: Interrupt masking flag
+ * @timestamp_sync_cnt: Timestamp synchronization counter
+ * @adapter_attr: Adapter attributes
+ * @mem_desc: Memory descriptor
+ * @driver_cmds: Driver commands
+ * @dynamic_task_desc: Dynamic task descriptor
+ * @fw_evt_s: Firmware event structure
+ * @notification_desc: Notification descriptor
+ * @reset_desc: Reset descriptor
+ * @scan_dev_desc: Device scan descriptor
+ * @access_ctrl: Access control
+ * @fw_log_desc: Firmware log descriptor
+ * @dev_topo: Device topology
+ * @boot_devs: Boot devices
+ * @smart_poll_desc: SMART polling descriptor
+ */
+struct leapraid_adapter {
+ struct list_head list;
+ struct Scsi_Host *shost;
+ struct pci_dev *pdev;
+ struct leapraid_reg_base __iomem *iomem_base;
+ u32 rep_msg_host_idx;
+ bool mask_int;
+ u32 timestamp_sync_cnt;
+
+ struct leapraid_adapter_attr adapter_attr;
+ struct leapraid_mem_desc mem_desc;
+ struct leapraid_driver_cmds driver_cmds;
+ struct leapraid_dynamic_task_desc dynamic_task_desc;
+ struct leapraid_fw_evt_struct fw_evt_s;
+ struct leapraid_notification_desc notification_desc;
+ struct leapraid_reset_desc reset_desc;
+ struct leapraid_scan_dev_desc scan_dev_desc;
+ struct leapraid_access_ctrl access_ctrl;
+ struct leapraid_fw_log_desc fw_log_desc;
+ struct leapraid_dev_topo dev_topo;
+ struct leapraid_boot_devs boot_devs;
+ struct leapraid_smart_poll_desc smart_poll_desc;
+};
+
+union cfg_param_1 {
+ u32 form;
+ u32 size;
+ u32 phy_number;
+};
+
+union cfg_param_2 {
+ u32 handle;
+ u32 form_specific;
+};
+
+enum config_page_action {
+ GET_BIOS_PG2,
+ GET_BIOS_PG3,
+ GET_SAS_DEVICE_PG0,
+ GET_SAS_IOUNIT_PG0,
+ GET_SAS_IOUNIT_PG1,
+ GET_SAS_EXPANDER_PG0,
+ GET_SAS_EXPANDER_PG1,
+ GET_SAS_ENCLOSURE_PG0,
+ GET_PHY_PG0,
+ GET_RAID_VOLUME_PG0,
+ GET_RAID_VOLUME_PG1,
+ GET_PHY_DISK_PG0,
+};
+
+/**
+ * struct leapraid_enc_node - Enclosure node structure
+ * @list: List head for enclosure management
+ * @pg0: Enclosure page 0 data
+ */
+struct leapraid_enc_node {
+ struct list_head list;
+ struct leapraid_enc_p0 pg0;
+};
+
+/**
+ * struct leapraid_raid_volume - RAID volume structure
+ * @list: List head for volume management
+ * @starget: SCSI target structure
+ * @sdev: SCSI device structure
+ * @id: Volume ID
+ * @channel: SCSI channel
+ * @wwid: World Wide Identifier
+ * @hdl: Volume handle
+ * @vol_type: Volume type
+ * @pd_num: Number of physical disks
+ * @resp: Response status
+ * @dev_info: Device information
+ */
+struct leapraid_raid_volume {
+ struct list_head list;
+ struct scsi_target *starget;
+ struct scsi_device *sdev;
+ unsigned int id;
+ unsigned int channel;
+ u64 wwid;
+ u16 hdl;
+ u8 vol_type;
+ u8 pd_num;
+ u8 resp;
+ u32 dev_info;
+};
+
+#define LEAPRAID_TGT_FLG_RAID_MEMBER 0x01
+#define LEAPRAID_TGT_FLG_VOLUME 0x02
+#define LEAPRAID_NO_ULD_ATTACH 1
+/**
+ * struct leapraid_starget_priv - SCSI target private data
+ * @starget: SCSI target structure
+ * @sas_address: SAS address
+ * @hdl: Device handle
+ * @num_luns: Number of LUNs
+ * @flg: Flags
+ * @deleted: Deletion flag
+ * @tm_busy: Task management busy flag
+ * @card_port: Associated card port
+ * @sas_dev: SAS device structure
+ */
+struct leapraid_starget_priv {
+ struct scsi_target *starget;
+ u64 sas_address;
+ u16 hdl;
+ int num_luns;
+ u32 flg;
+ bool deleted;
+ bool tm_busy;
+ struct leapraid_card_port *card_port;
+ struct leapraid_sas_dev *sas_dev;
+};
+
+#define LEAPRAID_DEVICE_FLG_INIT 0x01
+/**
+ * struct leapraid_sdev_priv - SCSI device private data
+ * @starget_priv: Associated target private data
+ * @lun: Logical Unit Number
+ * @flg: Flags
+ * @ncq: Native Command Queuing support flag
+ * @block: Block flag
+ * @deleted: Deletion flag
+ * @sep: SEP flag
+ */
+struct leapraid_sdev_priv {
+ struct leapraid_starget_priv *starget_priv;
+ unsigned int lun;
+ u32 flg;
+ bool ncq;
+ bool block;
+ bool deleted;
+ bool sep;
+};
+
+/**
+ * struct leapraid_sas_dev - SAS device structure
+ * @list: List head for device management
+ * @starget: SCSI target structure
+ * @card_port: Associated card port
+ * @rphy: SAS remote PHY
+ * @refcnt: Reference count
+ * @id: Device ID
+ * @channel: SCSI channel
+ * @slot: Slot number
+ * @phy: PHY identifier
+ * @resp: Response status
+ * @led_on: LED state
+ * @sas_addr: SAS address
+ * @dev_name: Device name
+ * @hdl: Device handle
+ * @parent_sas_addr: Parent SAS address
+ * @enc_hdl: Enclosure handle
+ * @enc_lid: Enclosure logical ID
+ * @volume_hdl: Volume handle
+ * @volume_wwid: Volume WWID
+ * @dev_info: Device information
+ * @pend_sas_rphy_add: Pending SAS rphy addition flag
+ * @enc_level: Enclosure level
+ * @port_type: Port type
+ * @connector_name: Connector name
+ * @support_smart: SMART support flag
+ */
+struct leapraid_sas_dev {
+ struct list_head list;
+ struct scsi_target *starget;
+ struct leapraid_card_port *card_port;
+ struct sas_rphy *rphy;
+ struct kref refcnt;
+ unsigned int id;
+ unsigned int channel;
+ u16 slot;
+ u8 phy;
+ bool resp;
+ bool led_on;
+ u64 sas_addr;
+ u64 dev_name;
+ u16 hdl;
+ u64 parent_sas_addr;
+ u16 enc_hdl;
+ u64 enc_lid;
+ u16 volume_hdl;
+ u64 volume_wwid;
+ u32 dev_info;
+ u8 pend_sas_rphy_add;
+ u8 enc_level;
+ u8 port_type;
+ u8 connector_name[5];
+ bool support_smart;
+};
+
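+
+/*
+ * Reference counting helpers for struct leapraid_sas_dev: a device object
+ * is freed only when the final reference taken with leapraid_sdev_get()
+ * is dropped through leapraid_sdev_put().
+ */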
+static inline void leapraid_sdev_free(struct kref *ref)
+{
+ kfree(container_of(ref, struct leapraid_sas_dev, refcnt));
+}
+
+#define leapraid_sdev_get(sdev) kref_get(&(sdev)->refcnt)
+#define leapraid_sdev_put(sdev) kref_put(&(sdev)->refcnt, leapraid_sdev_free)
+
+/**
+ * struct leapraid_sas_port - SAS port structure
+ * @port_list: List head for port management
+ * @phy_list: List of PHYs in this port
+ * @port: SAS port structure
+ * @card_port: Associated card port
+ * @remote_identify: Remote device identification
+ * @rphy: SAS remote PHY
+ * @phys_num: Number of PHYs in this port
+ */
+struct leapraid_sas_port {
+ struct list_head port_list;
+ struct list_head phy_list;
+ struct sas_port *port;
+ struct leapraid_card_port *card_port;
+ struct sas_identify remote_identify;
+ struct sas_rphy *rphy;
+ u8 phys_num;
+};
+
+#define LEAPRAID_VPHY_FLG_DIRTY 0x01
+/**
+ * struct leapraid_vphy - Virtual PHY structure
+ * @list: List head for PHY management
+ * @sas_address: SAS address
+ * @phy_mask: PHY mask
+ * @flg: Flags
+ */
+struct leapraid_vphy {
+ struct list_head list;
+ u64 sas_address;
+ u32 phy_mask;
+ u8 flg;
+};
+
+struct leapraid_tgt_rst_list {
+ struct list_head list;
+ u16 handle;
+ u16 state;
+};
+
+struct leapraid_sc_list {
+ struct list_head list;
+ u16 handle;
+};
+
+struct sense_info {
+ u8 sense_key;
+ u8 asc;
+ u8 ascq;
+};
+
+struct leapraid_fw_log_info {
+ u32 user_position;
+ u32 adapter_position;
+};
+
+/**
+ * enum reset_type - Reset type enumeration
+ * @FULL_RESET: Full hardware reset
+ * @PART_RESET: Partial reset
+ */
+enum reset_type {
+ FULL_RESET,
+ PART_RESET,
+};
+
+enum leapraid_card_port_checking_flg {
+ CARD_PORT_FURTHER_CHECKING_NEEDED = 0,
+ CARD_PORT_SKIP_CHECKING,
+};
+
+enum leapraid_port_checking_state {
+ NEW_CARD_PORT = 0,
+ SAME_PORT_WITH_NOTHING_CHANGED,
+ SAME_PORT_WITH_PARTIALLY_CHANGED_PHYS,
+ SAME_ADDR_WITH_PARTIALLY_CHANGED_PHYS,
+ SAME_ADDR_ONLY,
+};
+
+/**
+ * struct leapraid_card_port_feature - Card port feature
+ * @dirty_flg: Dirty flag indicator
+ * @same_addr: Same address flag
+ * @exact_phy: Exact PHY match flag
+ * @phy_overlap: PHY overlap bitmap
+ * @same_port: Same port flag
+ * @cur_chking_old_port: Old port currently being checked
+ * @expected_old_port: Expected old port
+ * @same_addr_port_count: Same address port count
+ * @checking_state: Port checking state
+ */
+struct leapraid_card_port_feature {
+ u8 dirty_flg;
+ bool same_addr;
+ bool exact_phy;
+ u32 phy_overlap;
+ bool same_port;
+ struct leapraid_card_port *cur_chking_old_port;
+ struct leapraid_card_port *expected_old_port;
+ int same_addr_port_count;
+ enum leapraid_port_checking_state checking_state;
+};
+
+#define SMP_REPORT_MANUFACTURER_INFORMATION_FRAME_TYPE 0x40
+#define SMP_REPORT_MANUFACTURER_INFORMATION_FUNC 0x01
+
+/* ref: SAS-2 (INCITS 457-2010) 10.4.3.5 */
+struct leapraid_rep_manu_request {
+ u8 smp_frame_type;
+ u8 function;
+ u8 allocated_response_length;
+ u8 request_length;
+};
+
+/* ref: SAS-2 (INCITS 457-2010) 10.4.3.5 */
+struct leapraid_rep_manu_reply {
+ u8 smp_frame_type;
+ u8 function;
+ u8 function_result;
+ u8 response_length;
+ u16 expander_change_count;
+ u8 r1[2];
+ u8 sas_format;
+ u8 r2[3];
+ u8 vendor_identification[SAS_EXPANDER_VENDOR_ID_LEN];
+ u8 product_identification[SAS_EXPANDER_PRODUCT_ID_LEN];
+ u8 product_revision_level[SAS_EXPANDER_PRODUCT_REV_LEN];
+ u8 component_vendor_identification[SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN];
+ u16 component_id;
+ u8 component_revision_level;
+ u8 r3;
+ u8 vendor_specific[8];
+};
+
+/**
+ * struct leapraid_scsi_cmd_desc - SCSI command descriptor
+ * @hdl: Device handle
+ * @lun: Logical Unit Number
+ * @raid_member: RAID member flag
+ * @dir: DMA data direction
+ * @data_length: Data transfer length
+ * @data_buffer: Data buffer pointer
+ * @cdb_length: CDB length
+ * @cdb: Command Descriptor Block
+ * @time_out: Command timeout in seconds
+ */
+struct leapraid_scsi_cmd_desc {
+ u16 hdl;
+ u32 lun;
+ bool raid_member;
+ enum dma_data_direction dir;
+ u32 data_length;
+ void *data_buffer;
+ u8 cdb_length;
+ u8 cdb[32];
+ u8 time_out;
+};
+
+extern struct list_head leapraid_adapter_list;
+extern spinlock_t leapraid_adapter_lock;
+extern char driver_name[LEAPRAID_NAME_LENGTH];
+
+int leapraid_ctrl_init(struct leapraid_adapter *adapter);
+void leapraid_remove_ctrl(struct leapraid_adapter *adapter);
+void leapraid_check_scheduled_fault_start(struct leapraid_adapter *adapter);
+void leapraid_check_scheduled_fault_stop(struct leapraid_adapter *adapter);
+void leapraid_fw_log_start(struct leapraid_adapter *adapter);
+void leapraid_fw_log_stop(struct leapraid_adapter *adapter);
+int leapraid_set_pcie_and_notification(struct leapraid_adapter *adapter);
+void leapraid_disable_controller(struct leapraid_adapter *adapter);
+int leapraid_hard_reset_handler(struct leapraid_adapter *adapter,
+ enum reset_type type);
+void leapraid_mask_int(struct leapraid_adapter *adapter);
+void leapraid_unmask_int(struct leapraid_adapter *adapter);
+u32 leapraid_get_adapter_state(struct leapraid_adapter *adapter);
+bool leapraid_pci_removed(struct leapraid_adapter *adapter);
+int leapraid_check_adapter_is_op(struct leapraid_adapter *adapter);
+void *leapraid_get_task_desc(struct leapraid_adapter *adapter, u16 taskid);
+void *leapraid_get_sense_buffer(struct leapraid_adapter *adapter, u16 taskid);
+__le32 leapraid_get_sense_buffer_dma(struct leapraid_adapter *adapter,
+ u16 taskid);
+void *leapraid_get_reply_vaddr(struct leapraid_adapter *adapter,
+ u32 phys_addr);
+u16 leapraid_alloc_scsiio_taskid(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd);
+void leapraid_free_taskid(struct leapraid_adapter *adapter, u16 taskid);
+struct leapraid_io_req_tracker *leapraid_get_io_tracker_from_taskid(
+ struct leapraid_adapter *adapter, u16 taskid);
+struct leapraid_io_req_tracker *leapraid_get_scmd_priv(struct scsi_cmnd *scmd);
+struct scsi_cmnd *leapraid_get_scmd_from_taskid(
+ struct leapraid_adapter *adapter, u16 taskid);
+int leapraid_scan_dev(struct leapraid_adapter *adapter, bool async_scan_dev);
+void leapraid_scan_dev_done(struct leapraid_adapter *adapter);
+void leapraid_wait_cmds_done(struct leapraid_adapter *adapter);
+void leapraid_clean_active_scsi_cmds(struct leapraid_adapter *adapter);
+void leapraid_sync_irqs(struct leapraid_adapter *adapter, bool poll);
+int leapraid_rep_queue_handler(struct leapraid_rq *rq);
+int leapraid_blk_mq_poll(struct Scsi_Host *shost, unsigned int queue_num);
+void leapraid_mq_polling_pause(struct leapraid_adapter *adapter);
+void leapraid_mq_polling_resume(struct leapraid_adapter *adapter);
+void leapraid_set_tm_flg(struct leapraid_adapter *adapter, u16 handle);
+void leapraid_clear_tm_flg(struct leapraid_adapter *adapter, u16 handle);
+void leapraid_async_turn_on_led(struct leapraid_adapter *adapter, u16 handle);
+int leapraid_issue_locked_tm(struct leapraid_adapter *adapter, u16 handle,
+ uint channel, uint id, uint lun, u8 type,
+ u16 taskid_task, u8 tr_method);
+int leapraid_issue_tm(struct leapraid_adapter *adapter, u16 handle,
+ uint channel, uint id, uint lun, u8 type,
+ u16 taskid_task, u8 tr_method);
+u8 leapraid_scsiio_done(struct leapraid_adapter *adapter, u16 taskid,
+ u8 msix_index, u32 rep);
+int leapraid_get_volume_cap(struct leapraid_adapter *adapter,
+ struct leapraid_raid_volume *raid_volume);
+int leapraid_internal_init_cmd_priv(struct leapraid_adapter *adapter,
+ struct leapraid_io_req_tracker *io_tracker);
+int leapraid_internal_exit_cmd_priv(struct leapraid_adapter *adapter,
+ struct leapraid_io_req_tracker *io_tracker);
+void leapraid_clean_active_fw_evt(struct leapraid_adapter *adapter);
+bool leapraid_scmd_find_by_lun(struct leapraid_adapter *adapter,
+ uint id, unsigned int lun, uint channel);
+bool leapraid_scmd_find_by_tgt(struct leapraid_adapter *adapter,
+ uint id, uint channel);
+struct leapraid_vphy *leapraid_get_vphy_by_phy(struct leapraid_card_port *port,
+ u32 phy);
+struct leapraid_raid_volume *leapraid_raid_volume_find_by_id(
+ struct leapraid_adapter *adapter, uint id, uint channel);
+struct leapraid_raid_volume *leapraid_raid_volume_find_by_hdl(
+ struct leapraid_adapter *adapter, u16 handle);
+struct leapraid_topo_node *leapraid_exp_find_by_sas_address(
+ struct leapraid_adapter *adapter, u64 sas_address,
+ struct leapraid_card_port *port);
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_addr_and_rphy(
+ struct leapraid_adapter *adapter,
+ u64 sas_address, struct sas_rphy *rphy);
+struct leapraid_sas_dev *leapraid_get_sas_dev_by_addr(
+ struct leapraid_adapter *adapter, u64 sas_address,
+ struct leapraid_card_port *port);
+struct leapraid_sas_dev *leapraid_get_sas_dev_by_hdl(
+ struct leapraid_adapter *adapter, u16 handle);
+struct leapraid_sas_dev *leapraid_get_sas_dev_from_tgt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_starget_priv *tgt_priv);
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_from_tgt(
+ struct leapraid_adapter *adapter,
+ struct leapraid_starget_priv *tgt_priv);
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_hdl(
+ struct leapraid_adapter *adapter, u16 handle);
+struct leapraid_sas_dev *leapraid_hold_lock_get_sas_dev_by_addr(
+ struct leapraid_adapter *adapter, u64 sas_address,
+ struct leapraid_card_port *port);
+struct leapraid_sas_dev *leapraid_get_next_sas_dev_from_init_list(
+ struct leapraid_adapter *adapter);
+void leapraid_sas_dev_remove_by_sas_address(
+ struct leapraid_adapter *adapter,
+ u64 sas_address, struct leapraid_card_port *port);
+void leapraid_sas_dev_remove(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev);
+void leapraid_raid_volume_remove(struct leapraid_adapter *adapter,
+ struct leapraid_raid_volume *raid_volume);
+void leapraid_exp_rm(struct leapraid_adapter *adapter,
+ u64 sas_address, struct leapraid_card_port *port);
+void leapraid_build_mpi_sg(struct leapraid_adapter *adapter,
+ void *sge, dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size);
+void leapraid_build_ieee_nodata_sg(struct leapraid_adapter *adapter,
+ void *sge);
+void leapraid_build_ieee_sg(struct leapraid_adapter *adapter,
+ void *psge, dma_addr_t h2c_dma_addr,
+ size_t h2c_size, dma_addr_t c2h_dma_addr,
+ size_t c2h_size);
+int leapraid_build_scmd_ieee_sg(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd, u16 taskid);
+void leapraid_fire_scsi_io(struct leapraid_adapter *adapter,
+ u16 taskid, u16 handle);
+void leapraid_fire_hpr_task(struct leapraid_adapter *adapter, u16 taskid,
+ u16 msix_task);
+void leapraid_fire_task(struct leapraid_adapter *adapter, u16 taskid);
+int leapraid_cfg_get_volume_hdl(struct leapraid_adapter *adapter,
+ u16 pd_handle, u16 *volume_handle);
+int leapraid_cfg_get_volume_wwid(struct leapraid_adapter *adapter,
+ u16 volume_handle, u64 *wwid);
+int leapraid_op_config_page(struct leapraid_adapter *adapter,
+ void *cfgp, union cfg_param_1 cfgp1,
+ union cfg_param_2 cfgp2,
+ enum config_page_action cfg_op);
+void leapraid_adjust_sdev_queue_depth(struct scsi_device *sdev, int qdepth);
+
+int leapraid_ctl_release(struct inode *inode, struct file *filep);
+void leapraid_ctl_init(void);
+void leapraid_ctl_exit(void);
+
+extern struct sas_function_template leapraid_transport_functions;
+extern struct scsi_transport_template *leapraid_transport_template;
+struct leapraid_sas_port *leapraid_transport_port_add(
+ struct leapraid_adapter *adapter, u16 handle, u64 sas_address,
+ struct leapraid_card_port *card_port);
+void leapraid_transport_port_remove(struct leapraid_adapter *adapter,
+ u64 sas_address, u64 sas_address_parent,
+ struct leapraid_card_port *card_port);
+void leapraid_transport_add_card_phy(struct leapraid_adapter *adapter,
+ struct leapraid_card_phy *card_phy,
+ struct leapraid_sas_phy_p0 *phy_pg0,
+ struct device *parent_dev);
+int leapraid_transport_add_exp_phy(struct leapraid_adapter *adapter,
+ struct leapraid_card_phy *card_phy,
+ struct leapraid_exp_p1 *exp_pg1,
+ struct device *parent_dev);
+void leapraid_transport_update_links(struct leapraid_adapter *adapter,
+ u64 sas_address, u16 handle,
+ u8 phy_number, u8 link_rate,
+ struct leapraid_card_port *card_port);
+void leapraid_transport_detach_phy_to_port(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node,
+ struct leapraid_card_phy *card_phy);
+void leapraid_transport_attach_phy_to_port(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *sas_node,
+ struct leapraid_card_phy *card_phy,
+ u64 sas_address,
+ struct leapraid_card_port *card_port);
+int leapraid_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmd);
+void leapraid_smart_polling_start(struct leapraid_adapter *adapter);
+void leapraid_smart_polling_stop(struct leapraid_adapter *adapter);
+void leapraid_smart_fault_detect(struct leapraid_adapter *adapter, u16 hdl);
+void leapraid_free_internal_scsi_cmd(struct leapraid_adapter *adapter);
+
+#endif /* LEAPRAID_FUNC_H_INCLUDED */
diff --git a/drivers/scsi/leapraid/leapraid_os.c b/drivers/scsi/leapraid/leapraid_os.c
new file mode 100644
index 000000000000..be0f7cbb6684
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid_os.c
@@ -0,0 +1,2365 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+
+#include <linux/module.h>
+
+#include "leapraid_func.h"
+#include "leapraid.h"
+
+LIST_HEAD(leapraid_adapter_list);
+DEFINE_SPINLOCK(leapraid_adapter_lock);
+
+MODULE_AUTHOR(LEAPRAID_AUTHOR);
+MODULE_DESCRIPTION(LEAPRAID_DESCRIPTION);
+MODULE_LICENSE("GPL");
+MODULE_VERSION(LEAPRAID_DRIVER_VERSION);
+
+static int leapraid_ids;
+
+static int open_pcie_trace = 1;
+module_param(open_pcie_trace, int, 0644);
+MODULE_PARM_DESC(open_pcie_trace, "enable PCIe trace logging: default=1 (enabled), 0 (disabled)");
+
+static int enable_mp = 1;
+module_param(enable_mp, int, 0444);
+MODULE_PARM_DESC(enable_mp,
+ "enable multipath on target devices: default=1 (enabled)");
+
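+
+/*
+ * Extract key/ASC/ASCQ from a sense buffer. Descriptor-format sense
+ * (response codes 0x72/0x73) keeps them in bytes 1-3, fixed-format sense
+ * in bytes 2, 12 and 13.
+ */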
+static inline void leapraid_get_sense_data(char *sense,
+ struct sense_info *data)
+{
+ bool desc_format = (sense[0] & SCSI_SENSE_RESPONSE_CODE_MASK) >=
+ DESC_FORMAT_THRESHOLD;
+
+ if (desc_format) {
+ data->sense_key = sense[1] & SENSE_KEY_MASK;
+ data->asc = sense[2];
+ data->ascq = sense[3];
+ } else {
+ data->sense_key = sense[2] & SENSE_KEY_MASK;
+ data->asc = sense[12];
+ data->ascq = sense[13];
+ }
+}
+
+static struct Scsi_Host *pdev_to_shost(struct pci_dev *pdev)
+{
+ return pci_get_drvdata(pdev);
+}
+
+static struct leapraid_adapter *pdev_to_adapter(struct pci_dev *pdev)
+{
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+
+ if (!shost)
+ return NULL;
+
+ return shost_priv(shost);
+}
+
+struct leapraid_io_req_tracker *leapraid_get_scmd_priv(struct scsi_cmnd *scmd)
+{
+ return scsi_cmd_priv(scmd);
+}
+
+void leapraid_set_tm_flg(struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_device *sdev;
+ bool skip = false;
+
+ /* iterate to the end: shost_for_each_device() holds a device reference that breaking out early would leak */
+ shost_for_each_device(sdev, adapter->shost) {
+ if (skip)
+ continue;
+
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->hdl == hdl) {
+ sdev_priv->starget_priv->tm_busy = true;
+ skip = true;
+ }
+ }
+}
+
+void leapraid_clear_tm_flg(struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_sdev_priv *sdev_priv;
+ struct scsi_device *sdev;
+ bool skip = false;
+
+ /* iterate to the end: shost_for_each_device() holds a device reference that breaking out early would leak */
+ shost_for_each_device(sdev, adapter->shost) {
+ if (skip)
+ continue;
+
+ sdev_priv = sdev->hostdata;
+ if (!sdev_priv)
+ continue;
+
+ if (sdev_priv->starget_priv->hdl == hdl) {
+ sdev_priv->starget_priv->tm_busy = false;
+ skip = true;
+ }
+ }
+}
+
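+
+/*
+ * Check whether the command(s) targeted by a task management request have
+ * already completed; returns SUCCESS only when no matching command is
+ * still outstanding.
+ */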
+static int leapraid_tm_cmd_map_status(struct leapraid_adapter *adapter,
+ uint channel,
+ uint id,
+ uint lun,
+ u8 type,
+ u16 taskid_task)
+{
+ int rc = FAILED;
+
+ if (taskid_task <= adapter->shost->can_queue) {
+ switch (type) {
+ case LEAPRAID_TM_TASKTYPE_ABRT_TASK_SET:
+ case LEAPRAID_TM_TASKTYPE_LOGICAL_UNIT_RESET:
+ if (!leapraid_scmd_find_by_lun(adapter, id, lun,
+ channel))
+ rc = SUCCESS;
+ break;
+ case LEAPRAID_TM_TASKTYPE_TARGET_RESET:
+ if (!leapraid_scmd_find_by_tgt(adapter, id, channel))
+ rc = SUCCESS;
+ break;
+ default:
+ rc = SUCCESS;
+ }
+ }
+
+ if (taskid_task == adapter->driver_cmds.driver_scsiio_cmd.taskid) {
+ if ((adapter->driver_cmds.driver_scsiio_cmd.status &
+ LEAPRAID_CMD_DONE) ||
+ (adapter->driver_cmds.driver_scsiio_cmd.status &
+ LEAPRAID_CMD_NOT_USED))
+ rc = SUCCESS;
+ }
+
+ if (taskid_task == adapter->driver_cmds.ctl_cmd.hp_taskid) {
+ if ((adapter->driver_cmds.ctl_cmd.status &
+ LEAPRAID_CMD_DONE) ||
+ (adapter->driver_cmds.ctl_cmd.status &
+ LEAPRAID_CMD_NOT_USED))
+ rc = SUCCESS;
+ }
+
+ return rc;
+}
+
+static int leapraid_tm_post_processing(struct leapraid_adapter *adapter,
+ u16 hdl, uint channel, uint id,
+ uint lun, u8 type, u16 taskid_task)
+{
+ int rc;
+
+ rc = leapraid_tm_cmd_map_status(adapter, channel, id, lun,
+ type, taskid_task);
+ if (rc == SUCCESS)
+ return rc;
+
+ leapraid_mask_int(adapter);
+ leapraid_sync_irqs(adapter, true);
+ leapraid_unmask_int(adapter);
+
+ rc = leapraid_tm_cmd_map_status(adapter, channel, id, lun, type,
+ taskid_task);
+ return rc;
+}
+
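+
+/*
+ * Fill in a SCSI task management request; ABORT TASK and QUERY TASK also
+ * carry the taskid of the command being targeted.
+ */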
+static void leapraid_build_tm_req(struct leapraid_scsi_tm_req *scsi_tm_req,
+ u16 hdl, uint lun, u8 type, u8 tr_method,
+ u16 target_taskid)
+{
+ memset(scsi_tm_req, 0, sizeof(*scsi_tm_req));
+ scsi_tm_req->func = LEAPRAID_FUNC_SCSI_TMF;
+ scsi_tm_req->dev_hdl = cpu_to_le16(hdl);
+ scsi_tm_req->task_type = type;
+ scsi_tm_req->msg_flg = tr_method;
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK ||
+ type == LEAPRAID_TM_TASKTYPE_QUERY_TASK)
+ scsi_tm_req->task_mid = cpu_to_le16(target_taskid);
+ int_to_scsilun(lun, (struct scsi_lun *)scsi_tm_req->lun);
+}
+
+int leapraid_issue_tm(struct leapraid_adapter *adapter, u16 hdl, uint channel,
+ uint id, uint lun, u8 type,
+ u16 target_taskid, u8 tr_method)
+{
+ struct leapraid_scsi_tm_req *scsi_tm_req;
+ struct leapraid_scsiio_req *scsiio_req;
+ struct leapraid_io_req_tracker *io_req_tracker = NULL;
+ u16 msix_task = 0;
+ bool issue_reset = false;
+ u32 db;
+ int rc;
+
+ lockdep_assert_held(&adapter->driver_cmds.tm_cmd.mutex);
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.host_removing ||
+ adapter->access_ctrl.pcie_recovering) {
+ dev_info(&adapter->pdev->dev,
+ "%s %s: host is recovering, skip tm command!\n",
+ __func__, adapter->adapter_attr.name);
+ return FAILED;
+ }
+
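+
+ /*
+ * A doorbell still marked in use or a controller fault state means the
+ * adapter cannot accept a TM request; recover with a hard reset instead.
+ */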
+ db = leapraid_readl(&adapter->iomem_base->db);
+ if (db & LEAPRAID_DB_USED) {
+ dev_info(&adapter->pdev->dev,
+ "%s unexpected db status, issuing hard reset!\n",
+ adapter->adapter_attr.name);
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ rc = leapraid_hard_reset_handler(adapter, FULL_RESET);
+ return (!rc) ? SUCCESS : FAILED;
+ }
+
+ if ((db & LEAPRAID_DB_MASK) == LEAPRAID_DB_FAULT) {
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ rc = leapraid_hard_reset_handler(adapter, FULL_RESET);
+ return (!rc) ? SUCCESS : FAILED;
+ }
+
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK)
+ io_req_tracker = leapraid_get_io_tracker_from_taskid(adapter,
+ target_taskid);
+
+ adapter->driver_cmds.tm_cmd.status = LEAPRAID_CMD_PENDING;
+ scsi_tm_req =
+ leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.tm_cmd.hp_taskid);
+ leapraid_build_tm_req(scsi_tm_req, hdl, lun, type, tr_method,
+ target_taskid);
+ memset((void *)(&adapter->driver_cmds.tm_cmd.reply), 0,
+ sizeof(struct leapraid_scsi_tm_rep));
+ leapraid_set_tm_flg(adapter, hdl);
+ init_completion(&adapter->driver_cmds.tm_cmd.done);
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK &&
+ io_req_tracker &&
+ io_req_tracker->msix_io < adapter->adapter_attr.rq_cnt)
+ msix_task = io_req_tracker->msix_io;
+ else
+ msix_task = 0;
+ leapraid_fire_hpr_task(adapter,
+ adapter->driver_cmds.tm_cmd.hp_taskid,
+ msix_task);
+ wait_for_completion_timeout(&adapter->driver_cmds.tm_cmd.done,
+ LEAPRAID_TM_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.tm_cmd.status & LEAPRAID_CMD_DONE)) {
+ issue_reset =
+ leapraid_check_reset(
+ adapter->driver_cmds.tm_cmd.status);
+ if (issue_reset) {
+ dev_info(&adapter->pdev->dev,
+ "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ rc = leapraid_hard_reset_handler(adapter, FULL_RESET);
+ rc = (!rc) ? SUCCESS : FAILED;
+ goto out;
+ }
+ }
+
+ leapraid_sync_irqs(adapter, false);
+
+ switch (type) {
+ case LEAPRAID_TM_TASKTYPE_TARGET_RESET:
+ case LEAPRAID_TM_TASKTYPE_ABRT_TASK_SET:
+ case LEAPRAID_TM_TASKTYPE_LOGICAL_UNIT_RESET:
+ rc = leapraid_tm_post_processing(adapter, hdl, channel, id, lun,
+ type, target_taskid);
+ break;
+ case LEAPRAID_TM_TASKTYPE_ABORT_TASK:
+ rc = SUCCESS;
+ scsiio_req = leapraid_get_task_desc(adapter, target_taskid);
+ if (le16_to_cpu(scsiio_req->dev_hdl) != hdl)
+ break;
+ dev_err(&adapter->pdev->dev, "%s abort failed, hdl=0x%04x\n",
+ adapter->adapter_attr.name, hdl);
+ rc = FAILED;
+ break;
+ case LEAPRAID_TM_TASKTYPE_QUERY_TASK:
+ rc = SUCCESS;
+ break;
+ default:
+ rc = FAILED;
+ break;
+ }
+
+out:
+ leapraid_clear_tm_flg(adapter, hdl);
+ adapter->driver_cmds.tm_cmd.status = LEAPRAID_CMD_NOT_USED;
+ return rc;
+}
+
+int leapraid_issue_locked_tm(struct leapraid_adapter *adapter, u16 hdl,
+ uint channel, uint id, uint lun, u8 type,
+ u16 target_taskid, u8 tr_method)
+{
+ int rc;
+
+ mutex_lock(&adapter->driver_cmds.tm_cmd.mutex);
+ rc = leapraid_issue_tm(adapter, hdl, channel, id, lun, type,
+ target_taskid, tr_method);
+ mutex_unlock(&adapter->driver_cmds.tm_cmd.mutex);
+
+ return rc;
+}
+
+void leapraid_smart_fault_detect(struct leapraid_adapter *adapter, u16 hdl)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_target *starget;
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_hdl(adapter, hdl);
+ if (!sas_dev) {
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ goto out;
+ }
+
+ starget = sas_dev->starget;
+ starget_priv = starget->hostdata;
+ if ((starget_priv->flg & LEAPRAID_TGT_FLG_RAID_MEMBER) ||
+ (starget_priv->flg & LEAPRAID_TGT_FLG_VOLUME)) {
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ goto out;
+ }
+
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ leapraid_async_turn_on_led(adapter, hdl);
+out:
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+}
+
+static void leapraid_process_sense_data(struct leapraid_adapter *adapter,
+ struct leapraid_scsiio_rep *scsiio_rep,
+ struct scsi_cmnd *scmd, u16 taskid)
+{
+ struct sense_info data;
+ const void *sense_data;
+ u32 sz;
+
+ if (!(scsiio_rep->scsi_state & LEAPRAID_SCSI_STATE_AUTOSENSE_VALID))
+ return;
+
+ sense_data = leapraid_get_sense_buffer(adapter, taskid);
+ sz = min_t(u32, SCSI_SENSE_BUFFERSIZE,
+ le32_to_cpu(scsiio_rep->sense_count));
+
+ memcpy(scmd->sense_buffer, sense_data, sz);
+ leapraid_get_sense_data(scmd->sense_buffer, &data);
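+ /*
+ * ASC 0x5D reports a SMART failure-prediction threshold exceeded;
+ * flag the device so its fault LED can be turned on.
+ */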
+ if (data.asc == ASC_FAILURE_PREDICTION_THRESHOLD_EXCEEDED)
+ leapraid_smart_fault_detect(adapter,
+ le16_to_cpu(scsiio_rep->dev_hdl));
+}
+
+static void leapraid_handle_data_underrun(
+ struct leapraid_scsiio_rep *scsiio_rep,
+ struct scsi_cmnd *scmd, u32 xfer_cnt)
+{
+ u8 scsi_status = scsiio_rep->scsi_status;
+ u8 scsi_state = scsiio_rep->scsi_state;
+
+ scmd->result = (DID_OK << LEAPRAID_SCSI_HOST_SHIFT) | scsi_status;
+
+ if (scsi_state & LEAPRAID_SCSI_STATE_AUTOSENSE_VALID)
+ return;
+
+ if (xfer_cnt < scmd->underflow) {
+ if (scsi_status == SAM_STAT_BUSY)
+ scmd->result = SAM_STAT_BUSY;
+ else
+ scmd->result = DID_SOFT_ERROR <<
+ LEAPRAID_SCSI_HOST_SHIFT;
+ } else if (scsi_state & (LEAPRAID_SCSI_STATE_AUTOSENSE_FAILED |
+ LEAPRAID_SCSI_STATE_NO_SCSI_STATUS)) {
+ scmd->result = DID_SOFT_ERROR << LEAPRAID_SCSI_HOST_SHIFT;
+ } else if (scsi_state & LEAPRAID_SCSI_STATE_TERMINATED) {
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ } else if (!xfer_cnt && scmd->cmnd[0] == REPORT_LUNS) {
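+ /*
+ * A zero-length REPORT LUNS response cannot be used; report a
+ * CHECK CONDITION with INVALID COMMAND OPERATION CODE so the
+ * midlayer can fall back to a sequential LUN scan.
+ */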
+ scsiio_rep->scsi_state = LEAPRAID_SCSI_STATE_AUTOSENSE_VALID;
+ scsiio_rep->scsi_status = SAM_STAT_CHECK_CONDITION;
+ scsi_build_sense(scmd, 0, ILLEGAL_REQUEST,
+ LEAPRAID_SCSI_ASC_INVALID_CMD_CODE,
+ LEAPRAID_SCSI_ASCQ_DEFAULT);
+ }
+}
+
+static void leapraid_handle_success_status(
+ struct leapraid_scsiio_rep *scsiio_rep,
+ struct scsi_cmnd *scmd,
+ u32 response_code)
+{
+ u8 scsi_status = scsiio_rep->scsi_status;
+ u8 scsi_state = scsiio_rep->scsi_state;
+
+ scmd->result = (DID_OK << LEAPRAID_SCSI_HOST_SHIFT) | scsi_status;
+
+ if (response_code == LEAPRAID_TM_RSP_INVALID_FRAME ||
+ (scsi_state & (LEAPRAID_SCSI_STATE_AUTOSENSE_FAILED |
+ LEAPRAID_SCSI_STATE_NO_SCSI_STATUS)))
+ scmd->result = DID_SOFT_ERROR << LEAPRAID_SCSI_HOST_SHIFT;
+ else if (scsi_state & LEAPRAID_SCSI_STATE_TERMINATED)
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+}
+
+static void leapraid_scsiio_done_dispatch(struct leapraid_adapter *adapter,
+ struct leapraid_scsiio_rep *scsiio_rep,
+ struct leapraid_sdev_priv *sdev_priv,
+ struct scsi_cmnd *scmd,
+ u16 taskid, u32 response_code)
+{
+ u8 scsi_status = scsiio_rep->scsi_status;
+ u8 scsi_state = scsiio_rep->scsi_state;
+ u16 adapter_status;
+ u32 xfer_cnt;
+ u32 sz;
+
+ adapter_status = le16_to_cpu(scsiio_rep->adapter_status) &
+ LEAPRAID_ADAPTER_STATUS_MASK;
+
+ xfer_cnt = le32_to_cpu(scsiio_rep->transfer_count);
+ scsi_set_resid(scmd, scsi_bufflen(scmd) - xfer_cnt);
+
+ if (adapter_status == LEAPRAID_ADAPTER_STATUS_SCSI_DATA_UNDERRUN &&
+ xfer_cnt == 0 &&
+ (scsi_status == LEAPRAID_SCSI_STATUS_BUSY ||
+ scsi_status == LEAPRAID_SCSI_STATUS_RESERVATION_CONFLICT ||
+ scsi_status == LEAPRAID_SCSI_STATUS_TASK_SET_FULL)) {
+ adapter_status = LEAPRAID_ADAPTER_STATUS_SUCCESS;
+ }
+
+ switch (adapter_status) {
+ case LEAPRAID_ADAPTER_STATUS_SCSI_DEVICE_NOT_THERE:
+ scmd->result = DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_BUSY:
+ case LEAPRAID_ADAPTER_STATUS_INSUFFICIENT_RESOURCES:
+ scmd->result = SAM_STAT_BUSY;
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_RESIDUAL_MISMATCH:
+ if (xfer_cnt == 0 || scmd->underflow > xfer_cnt)
+ scmd->result = DID_SOFT_ERROR <<
+ LEAPRAID_SCSI_HOST_SHIFT;
+ else
+ scmd->result = (DID_OK << LEAPRAID_SCSI_HOST_SHIFT) |
+ scsi_status;
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_ADAPTER_TERMINATED:
+ if (sdev_priv->block) {
+ scmd->result = DID_TRANSPORT_DISRUPTED <<
+ LEAPRAID_SCSI_HOST_SHIFT;
+ return;
+ }
+
+ if (scmd->device->channel == RAID_CHANNEL &&
+ scsi_state == (LEAPRAID_SCSI_STATE_TERMINATED |
+ LEAPRAID_SCSI_STATE_NO_SCSI_STATUS)) {
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ break;
+ }
+
+ scmd->result = DID_SOFT_ERROR << LEAPRAID_SCSI_HOST_SHIFT;
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_TASK_TERMINATED:
+ case LEAPRAID_ADAPTER_STATUS_SCSI_EXT_TERMINATED:
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_DATA_UNDERRUN:
+ leapraid_handle_data_underrun(scsiio_rep, scmd, xfer_cnt);
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_DATA_OVERRUN:
+ scsi_set_resid(scmd, 0);
+ leapraid_handle_success_status(scsiio_rep, scmd,
+ response_code);
+ break;
+ case LEAPRAID_ADAPTER_STATUS_SCSI_RECOVERED_ERROR:
+ case LEAPRAID_ADAPTER_STATUS_SUCCESS:
+ leapraid_handle_success_status(scsiio_rep, scmd,
+ response_code);
+ break;
+
+ case LEAPRAID_ADAPTER_STATUS_SCSI_PROTOCOL_ERROR:
+ case LEAPRAID_ADAPTER_STATUS_INTERNAL_ERROR:
+ case LEAPRAID_ADAPTER_STATUS_SCSI_IO_DATA_ERROR:
+ case LEAPRAID_ADAPTER_STATUS_SCSI_TASK_MGMT_FAILED:
+ default:
+ scmd->result = DID_SOFT_ERROR << LEAPRAID_SCSI_HOST_SHIFT;
+ break;
+ }
+
+ if (!scmd->result)
+ return;
+
+ scsi_print_command(scmd);
+ dev_warn(&adapter->pdev->dev,
+ "scsiio warn: hdl=0x%x, status are: 0x%x, 0x%x, 0x%x\n",
+ le16_to_cpu(scsiio_rep->dev_hdl), adapter_status,
+ scsi_status, scsi_state);
+
+ if (scsi_state & LEAPRAID_SCSI_STATE_AUTOSENSE_VALID) {
+ struct scsi_sense_hdr sshdr;
+
+ sz = min_t(u32, SCSI_SENSE_BUFFERSIZE,
+ le32_to_cpu(scsiio_rep->sense_count));
+ if (scsi_normalize_sense(scmd->sense_buffer, sz,
+ &sshdr)) {
+ dev_warn(&adapter->pdev->dev,
+ "sense: key=0x%x asc=0x%x ascq=0x%x\n",
+ sshdr.sense_key, sshdr.asc,
+ sshdr.ascq);
+ } else {
+ dev_warn(&adapter->pdev->dev,
+ "sense: invalid sense data\n");
+ }
+ }
+}
+
+u8 leapraid_scsiio_done(struct leapraid_adapter *adapter, u16 taskid,
+ u8 msix_index, u32 rep)
+{
+ struct leapraid_scsiio_rep *scsiio_rep = NULL;
+ struct leapraid_sdev_priv *sdev_priv = NULL;
+ struct scsi_cmnd *scmd = NULL;
+ u32 response_code = 0;
+
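+
+ /*
+ * Internal SCSI I/O issued by the driver uses the reserved
+ * driver_scsiio_cmd taskid and is completed through its completion
+ * object instead of scsi_done().
+ */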
+ if (likely(taskid != adapter->driver_cmds.driver_scsiio_cmd.taskid))
+ scmd = leapraid_get_scmd_from_taskid(adapter, taskid);
+ else
+ scmd = adapter->driver_cmds.internal_scmd;
+ if (!scmd)
+ return 1;
+
+ scsiio_rep = leapraid_get_reply_vaddr(adapter, rep);
+ if (!scsiio_rep) {
+ scmd->result = DID_OK << LEAPRAID_SCSI_HOST_SHIFT;
+ goto out;
+ }
+
+ sdev_priv = scmd->device->hostdata;
+ if (!sdev_priv ||
+ !sdev_priv->starget_priv ||
+ sdev_priv->starget_priv->deleted) {
+ scmd->result = DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+ goto out;
+ }
+
+ if (scsiio_rep->scsi_state & LEAPRAID_SCSI_STATE_RESPONSE_INFO_VALID)
+ response_code = le32_to_cpu(scsiio_rep->resp_info) & 0xFF;
+
+ leapraid_process_sense_data(adapter, scsiio_rep, scmd, taskid);
+ leapraid_scsiio_done_dispatch(adapter, scsiio_rep, sdev_priv, scmd,
+ taskid, response_code);
+
+out:
+ scsi_dma_unmap(scmd);
+ if (unlikely(taskid == adapter->driver_cmds.driver_scsiio_cmd.taskid)) {
+ adapter->driver_cmds.driver_scsiio_cmd.status =
+ LEAPRAID_CMD_DONE;
+ complete(&adapter->driver_cmds.driver_scsiio_cmd.done);
+ return 0;
+ }
+ leapraid_free_taskid(adapter, taskid);
+ scsi_done(scmd);
+ return 0;
+}
+
+static void leapraid_probe_raid(struct leapraid_adapter *adapter)
+{
+ struct leapraid_raid_volume *raid_volume, *raid_volume_next;
+ int rc;
+
+ list_for_each_entry_safe(raid_volume, raid_volume_next,
+ &adapter->dev_topo.raid_volume_list, list) {
+ if (raid_volume->starget)
+ continue;
+
+ rc = scsi_add_device(adapter->shost, RAID_CHANNEL,
+ raid_volume->id, 0);
+ if (rc)
+ leapraid_raid_volume_remove(adapter, raid_volume);
+ }
+}
+
+static void leapraid_sas_dev_make_active(struct leapraid_adapter *adapter,
+ struct leapraid_sas_dev *sas_dev)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ if (!list_empty(&sas_dev->list)) {
+ list_del_init(&sas_dev->list);
+ leapraid_sdev_put(sas_dev);
+ }
+
+ leapraid_sdev_get(sas_dev);
+ list_add_tail(&sas_dev->list, &adapter->dev_topo.sas_dev_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+}
+
+static void leapraid_probe_sas(struct leapraid_adapter *adapter)
+{
+ struct leapraid_sas_dev *sas_dev;
+ bool added;
+
+ for (;;) {
+ sas_dev = leapraid_get_next_sas_dev_from_init_list(adapter);
+ if (!sas_dev)
+ break;
+
+ added = leapraid_transport_port_add(adapter,
+ sas_dev->hdl,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+
+ if (!added)
+ goto remove_dev;
+
+ if (!sas_dev->starget &&
+ !adapter->scan_dev_desc.driver_loading) {
+ leapraid_transport_port_remove(adapter,
+ sas_dev->sas_addr,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+ goto remove_dev;
+ }
+
+ leapraid_sas_dev_make_active(adapter, sas_dev);
+ leapraid_sdev_put(sas_dev);
+ continue;
+
+remove_dev:
+ leapraid_sas_dev_remove(adapter, sas_dev);
+ leapraid_sdev_put(sas_dev);
+ }
+}
+
+static bool leapraid_get_boot_dev(struct leapraid_boot_dev *boot_dev,
+ void **pdev, u32 *pchnl)
+{
+ if (boot_dev->dev) {
+ *pdev = boot_dev->dev;
+ *pchnl = boot_dev->chnl;
+ return true;
+ }
+ return false;
+}
+
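+
+/*
+ * Probe boot devices in priority order: the requested boot device first,
+ * then the requested alternate, then the currently active boot device.
+ */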
+static void leapraid_probe_boot_dev(struct leapraid_adapter *adapter)
+{
+ void *dev = NULL;
+ u32 chnl;
+
+ if (leapraid_get_boot_dev(&adapter->boot_devs.requested_boot_dev, &dev,
+ &chnl))
+ goto boot_dev_found;
+
+ if (leapraid_get_boot_dev(&adapter->boot_devs.requested_alt_boot_dev,
+ &dev, &chnl))
+ goto boot_dev_found;
+
+ if (leapraid_get_boot_dev(&adapter->boot_devs.current_boot_dev, &dev,
+ &chnl))
+ goto boot_dev_found;
+
+ return;
+
+boot_dev_found:
+ switch (chnl) {
+ case RAID_CHANNEL:
+ {
+ struct leapraid_raid_volume *raid_volume =
+ (struct leapraid_raid_volume *)dev;
+
+ if (raid_volume->starget)
+ return;
+
+ /* TODO eedp */
+
+ if (scsi_add_device(adapter->shost, RAID_CHANNEL,
+ raid_volume->id, 0))
+ leapraid_raid_volume_remove(adapter, raid_volume);
+ break;
+ }
+ default:
+ {
+ struct leapraid_sas_dev *sas_dev =
+ (struct leapraid_sas_dev *)dev;
+ struct leapraid_sas_port *sas_port;
+ unsigned long flags;
+
+ if (sas_dev->starget)
+ return;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ list_move_tail(&sas_dev->list,
+ &adapter->dev_topo.sas_dev_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ if (!sas_dev->card_port)
+ return;
+
+ sas_port = leapraid_transport_port_add(adapter, sas_dev->hdl,
+ sas_dev->parent_sas_addr,
+ sas_dev->card_port);
+ if (!sas_port)
+ leapraid_sas_dev_remove(adapter, sas_dev);
+ break;
+ }
+ }
+}
+
+static void leapraid_probe_devices(struct leapraid_adapter *adapter)
+{
+ leapraid_probe_boot_dev(adapter);
+
+ if (adapter->adapter_attr.raid_support) {
+ leapraid_probe_raid(adapter);
+ leapraid_probe_sas(adapter);
+ } else {
+ leapraid_probe_sas(adapter);
+ }
+}
+
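+/*
+ * Called once the firmware device scan completes: probe the discovered
+ * devices and (re)start the scheduled fault check, firmware logging and
+ * SMART polling.
+ */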
+void leapraid_scan_dev_done(struct leapraid_adapter *adapter)
+{
+ if (adapter->scan_dev_desc.wait_scan_dev_done) {
+ adapter->scan_dev_desc.wait_scan_dev_done = false;
+ leapraid_probe_devices(adapter);
+ }
+
+ leapraid_check_scheduled_fault_start(adapter);
+ leapraid_fw_log_start(adapter);
+ adapter->scan_dev_desc.driver_loading = false;
+ leapraid_smart_polling_start(adapter);
+}
+
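+/*
+ * Notify the firmware that a system shutdown has been initiated
+ * (LEAPRAID_RAID_ACT_SYSTEM_SHUTDOWN_INITIATED). Skipped when the adapter
+ * has no RAID support, no volumes, or the PCI device is already gone.
+ */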
+static void leapraid_ir_shutdown(struct leapraid_adapter *adapter)
+{
+ struct leapraid_raid_act_req *raid_act_req;
+ struct leapraid_raid_act_rep *raid_act_rep;
+ struct leapraid_driver_cmd *raid_action_cmd;
+
+ if (!adapter || !adapter->adapter_attr.raid_support)
+ return;
+
+ if (list_empty(&adapter->dev_topo.raid_volume_list))
+ return;
+
+ if (leapraid_pci_removed(adapter))
+ return;
+
+ raid_action_cmd = &adapter->driver_cmds.raid_action_cmd;
+
+ mutex_lock(&raid_action_cmd->mutex);
+ raid_action_cmd->status = LEAPRAID_CMD_PENDING;
+
+ raid_act_req = leapraid_get_task_desc(adapter,
+ raid_action_cmd->inter_taskid);
+ memset(raid_act_req, 0, sizeof(struct leapraid_raid_act_req));
+ raid_act_req->func = LEAPRAID_FUNC_RAID_ACTION;
+ raid_act_req->act = LEAPRAID_RAID_ACT_SYSTEM_SHUTDOWN_INITIATED;
+
+ dev_info(&adapter->pdev->dev, "ir shutdown start\n");
+ init_completion(&raid_action_cmd->done);
+ leapraid_fire_task(adapter, raid_action_cmd->inter_taskid);
+ wait_for_completion_timeout(&raid_action_cmd->done,
+ LEAPRAID_RAID_ACTION_CMD_TIMEOUT * HZ);
+
+ if (!(raid_action_cmd->status & LEAPRAID_CMD_DONE)) {
+ dev_err(&adapter->pdev->dev,
+ "%s: timeout waiting for ir shutdown\n", __func__);
+ goto out;
+ }
+
+ if (raid_action_cmd->status & LEAPRAID_CMD_REPLY_VALID) {
+ raid_act_rep = (void *)(&raid_action_cmd->reply);
+ dev_info(&adapter->pdev->dev,
+ "ir shutdown done, adapter status=0x%04x\n",
+ le16_to_cpu(raid_act_rep->adapter_status));
+ }
+
+out:
+ raid_action_cmd->status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&raid_action_cmd->mutex);
+}
+
+static const struct pci_device_id leapraid_pci_table[] = {
+ { PCI_DEVICE(LEAPRAID_VENDOR_ID, LEAPRAID_DEVID_HBA) },
+ { PCI_DEVICE(LEAPRAID_VENDOR_ID, LEAPRAID_DEVID_RAID) },
+ { 0, }
+};
+
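+/*
+ * Decide whether a SCSI command may be sent to the firmware in the current
+ * adapter state; during host removal only SYNCHRONIZE CACHE and START STOP
+ * are still allowed.
+ */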
+static inline bool leapraid_is_scmd_permitted(struct leapraid_adapter *adapter,
+ struct scsi_cmnd *scmd)
+{
+ u8 opcode;
+
+ if (adapter->access_ctrl.pcie_recovering ||
+ adapter->access_ctrl.adapter_thermal_alert)
+ return false;
+
+ if (adapter->access_ctrl.host_removing) {
+ if (leapraid_pci_removed(adapter))
+ return false;
+
+ opcode = scmd->cmnd[0];
+ if (opcode == SYNCHRONIZE_CACHE || opcode == START_STOP)
+ return true;
+ else
+ return false;
+ }
+ return true;
+}
+
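+/*
+ * Gate a command before it is turned into a SCSIIO request. Returns false
+ * when the command has been completed or must be requeued here; *rc carries
+ * the SCSI_MLQUEUE_* status for the requeue cases.
+ */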
+static bool leapraid_should_queuecommand(struct leapraid_adapter *adapter,
+ struct leapraid_sdev_priv *sdev_priv,
+ struct scsi_cmnd *scmd, int *rc)
+{
+ struct leapraid_starget_priv *starget_priv;
+
+ if (!sdev_priv || !sdev_priv->starget_priv)
+ goto no_connect;
+
+ if (!leapraid_is_scmd_permitted(adapter, scmd))
+ goto no_connect;
+
+ starget_priv = sdev_priv->starget_priv;
+ if (starget_priv->hdl == LEAPRAID_INVALID_DEV_HANDLE)
+ goto no_connect;
+
+ if (sdev_priv->block &&
+ scmd->device->host->shost_state == SHOST_RECOVERY &&
+ scmd->cmnd[0] == TEST_UNIT_READY) {
+ scsi_build_sense(scmd, 0, UNIT_ATTENTION,
+ LEAPRAID_SCSI_ASC_POWER_ON_RESET,
+ LEAPRAID_SCSI_ASCQ_POWER_ON_RESET);
+ goto done_out;
+ }
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->reset_desc.adapter_link_resetting) {
+ *rc = SCSI_MLQUEUE_HOST_BUSY;
+ goto out;
+ } else if (starget_priv->deleted || sdev_priv->deleted) {
+ goto no_connect;
+ } else if (starget_priv->tm_busy || sdev_priv->block) {
+ *rc = SCSI_MLQUEUE_DEVICE_BUSY;
+ goto out;
+ }
+
+ return true;
+
+no_connect:
+ scmd->result = DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+done_out:
+ if (likely(scmd != adapter->driver_cmds.internal_scmd))
+ scsi_done(scmd);
+out:
+ return false;
+}
+
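+/*
+ * Translate the data direction, queueing and CDB-length attributes of a
+ * SCSI command into the control word of a LeapRAID SCSIIO request.
+ */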
+static u32 build_scsiio_req_control(struct scsi_cmnd *scmd,
+ struct leapraid_sdev_priv *sdev_priv)
+{
+ u32 control;
+
+ switch (scmd->sc_data_direction) {
+ case DMA_FROM_DEVICE:
+ control = LEAPRAID_SCSIIO_CTRL_READ;
+ break;
+ case DMA_TO_DEVICE:
+ control = LEAPRAID_SCSIIO_CTRL_WRITE;
+ break;
+ default:
+ control = LEAPRAID_SCSIIO_CTRL_NODATATRANSFER;
+ break;
+ }
+
+ control |= LEAPRAID_SCSIIO_CTRL_SIMPLEQ;
+
+ if (sdev_priv->ncq &&
+ (IOPRIO_PRIO_CLASS(req_get_ioprio(scsi_cmd_to_rq(scmd))) ==
+ IOPRIO_CLASS_RT))
+ control |= LEAPRAID_SCSIIO_CTRL_CMDPRI;
+ if (scmd->cmd_len == 32)
+ control |= 4 << LEAPRAID_SCSIIO_CTRL_ADDCDBLEN_SHIFT;
+
+ return control;
+}
+
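+/*
+ * queuecommand entry point: validate the command against the current device
+ * and adapter state, build the SCSIIO request (including its IEEE SGL) and
+ * fire it to the firmware.
+ */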
+int leapraid_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+{
+ struct leapraid_adapter *adapter = shost_priv(scmd->device->host);
+ struct leapraid_sdev_priv *sdev_priv = scmd->device->hostdata;
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_scsiio_req *scsiio_req;
+ u32 control;
+ u16 taskid;
+ u16 hdl;
+ int rc = 0;
+
+ if (!leapraid_should_queuecommand(adapter, sdev_priv, scmd, &rc))
+ goto out;
+
+ starget_priv = sdev_priv->starget_priv;
+ hdl = starget_priv->hdl;
+ control = build_scsiio_req_control(scmd, sdev_priv);
+
+ if (unlikely(scmd == adapter->driver_cmds.internal_scmd))
+ taskid = adapter->driver_cmds.driver_scsiio_cmd.taskid;
+ else
+ taskid = leapraid_alloc_scsiio_taskid(adapter, scmd);
+ scsiio_req = leapraid_get_task_desc(adapter, taskid);
+
+	if (starget_priv->flg & LEAPRAID_TGT_FLG_RAID_MEMBER)
+		scsiio_req->func = LEAPRAID_FUNC_RAID_SCSIIO_PASSTHROUGH;
+	else
+		scsiio_req->func = LEAPRAID_FUNC_SCSIIO_REQ;
+
+ scsiio_req->dev_hdl = cpu_to_le16(hdl);
+ scsiio_req->data_len = cpu_to_le32(scsi_bufflen(scmd));
+ scsiio_req->ctrl = cpu_to_le32(control);
+ scsiio_req->io_flg = cpu_to_le16(scmd->cmd_len);
+ scsiio_req->msg_flg = 0;
+ scsiio_req->sense_buffer_len = SCSI_SENSE_BUFFERSIZE;
+ scsiio_req->sense_buffer_low_add =
+ leapraid_get_sense_buffer_dma(adapter, taskid);
+ scsiio_req->sgl_offset0 =
+ offsetof(struct leapraid_scsiio_req, sgl) /
+ LEAPRAID_DWORDS_BYTE_SIZE;
+ int_to_scsilun(sdev_priv->lun, (struct scsi_lun *)scsiio_req->lun);
+ memcpy(scsiio_req->cdb.cdb32, scmd->cmnd, scmd->cmd_len);
+ if (scsiio_req->data_len) {
+ if (leapraid_build_scmd_ieee_sg(adapter, scmd, taskid)) {
+ leapraid_free_taskid(adapter, taskid);
+ rc = SCSI_MLQUEUE_HOST_BUSY;
+ goto out;
+ }
+ } else {
+ leapraid_build_ieee_nodata_sg(adapter, &scsiio_req->sgl);
+ }
+
+ if (likely(scsiio_req->func == LEAPRAID_FUNC_SCSIIO_REQ)) {
+ leapraid_fire_scsi_io(adapter, taskid,
+ le16_to_cpu(scsiio_req->dev_hdl));
+ } else {
+ leapraid_fire_task(adapter, taskid);
+ }
+ dev_dbg(&adapter->pdev->dev,
+ "LEAPRAID_SCSIIO: Send Descriptor taskid %d, req type 0x%x\n",
+ taskid, scsiio_req->func);
+out:
+ return rc;
+}
+
+static int leapraid_init_cmd_priv(struct Scsi_Host *shost,
+ struct scsi_cmnd *scmd)
+{
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ struct leapraid_io_req_tracker *io_tracker;
+
+ io_tracker = leapraid_get_scmd_priv(scmd);
+ leapraid_internal_init_cmd_priv(adapter, io_tracker);
+
+ return 0;
+}
+
+static int leapraid_exit_cmd_priv(struct Scsi_Host *shost,
+ struct scsi_cmnd *scmd)
+{
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ struct leapraid_io_req_tracker *io_tracker;
+
+ io_tracker = leapraid_get_scmd_priv(scmd);
+ leapraid_internal_exit_cmd_priv(adapter, io_tracker);
+
+ return 0;
+}
+
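+/*
+ * Common error-handler backend for the abort/LUN-reset/target-reset
+ * callbacks: resolve the device handle for the task management request and
+ * issue it through leapraid_issue_locked_tm().
+ */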
+static int leapraid_error_handler(struct scsi_cmnd *scmd,
+ const char *str, u8 type)
+{
+ struct leapraid_adapter *adapter = shost_priv(scmd->device->host);
+ struct scsi_target *starget = scmd->device->sdev_target;
+ struct leapraid_starget_priv *starget_priv = starget->hostdata;
+ struct leapraid_io_req_tracker *io_req_tracker = NULL;
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_sas_dev *sas_dev = NULL;
+ u16 hdl;
+ int rc;
+
+ dev_info(&adapter->pdev->dev,
+ "EH enter: type=%s, scmd=0x%p, req tag=%d\n", str, scmd,
+ scsi_cmd_to_rq(scmd)->tag);
+ scsi_print_command(scmd);
+
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK) {
+ io_req_tracker = leapraid_get_scmd_priv(scmd);
+ dev_info(&adapter->pdev->dev,
+ "EH ABORT: scmd=0x%p, pending=%u ms, tout=%u ms, req tag=%d\n",
+ scmd,
+ jiffies_to_msecs(jiffies - scmd->jiffies_at_alloc),
+			 jiffies_to_msecs(scsi_cmd_to_rq(scmd)->timeout),
+ scsi_cmd_to_rq(scmd)->tag);
+ }
+
+ if (leapraid_pci_removed(adapter) ||
+ adapter->access_ctrl.host_removing) {
+ dev_err(&adapter->pdev->dev,
+ "EH %s failed: %s scmd=0x%p\n", str,
+ (adapter->access_ctrl.host_removing ?
+ "shost removing!" : "pci_dev removed!"), scmd);
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK)
+ if (io_req_tracker && io_req_tracker->taskid)
+ leapraid_free_taskid(adapter,
+ io_req_tracker->taskid);
+ scmd->result = DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+#ifdef FAST_IO_FAIL
+ rc = FAST_IO_FAIL;
+#else
+ rc = FAILED;
+#endif
+ goto out;
+ }
+
+ sdev_priv = scmd->device->hostdata;
+ if (!sdev_priv || !sdev_priv->starget_priv) {
+ dev_warn(&adapter->pdev->dev,
+ "EH %s: sdev or starget gone, scmd=0x%p\n",
+ str, scmd);
+ scmd->result = DID_NO_CONNECT << LEAPRAID_SCSI_HOST_SHIFT;
+ scsi_done(scmd);
+ rc = SUCCESS;
+ goto out;
+ }
+
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK) {
+ if (!io_req_tracker) {
+ dev_warn(&adapter->pdev->dev,
+ "EH ABORT: no io tracker, scmd 0x%p\n", scmd);
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ rc = SUCCESS;
+ goto out;
+ }
+
+ if (sdev_priv->starget_priv->flg &
+ LEAPRAID_TGT_FLG_RAID_MEMBER ||
+ sdev_priv->starget_priv->flg & LEAPRAID_TGT_FLG_VOLUME) {
+ dev_err(&adapter->pdev->dev,
+ "EH ABORT: skip RAID/VOLUME target, scmd=0x%p\n",
+ scmd);
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ rc = FAILED;
+ goto out;
+ }
+
+ hdl = sdev_priv->starget_priv->hdl;
+ } else {
+ hdl = 0;
+ if (sdev_priv->starget_priv->flg &
+ LEAPRAID_TGT_FLG_RAID_MEMBER) {
+ sas_dev = leapraid_get_sas_dev_from_tgt(adapter,
+ starget_priv);
+ if (sas_dev)
+ hdl = sas_dev->volume_hdl;
+ } else {
+ hdl = sdev_priv->starget_priv->hdl;
+ }
+
+ if (!hdl) {
+ dev_err(&adapter->pdev->dev,
+ "EH %s failed: target handle is 0, scmd=0x%p\n",
+ str, scmd);
+ scmd->result = DID_RESET << LEAPRAID_SCSI_HOST_SHIFT;
+ rc = FAILED;
+ goto out;
+ }
+ }
+
+ dev_info(&adapter->pdev->dev,
+ "EH issue TM: type=%s, scmd=0x%p, hdl=0x%x\n",
+ str, scmd, hdl);
+
+ rc = leapraid_issue_locked_tm(adapter, hdl, scmd->device->channel,
+ scmd->device->id,
+ (type == LEAPRAID_TM_TASKTYPE_TARGET_RESET ?
+ 0 : scmd->device->lun),
+ type,
+ (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK ?
+ io_req_tracker->taskid : 0),
+ LEAPRAID_TM_MSGFLAGS_LINK_RESET);
+
+out:
+ if (type == LEAPRAID_TM_TASKTYPE_ABORT_TASK) {
+ dev_info(&adapter->pdev->dev,
+ "EH ABORT result: %s, scmd=0x%p\n",
+ ((rc == SUCCESS) ? "success" : "failed"), scmd);
+ } else {
+ dev_info(&adapter->pdev->dev,
+ "EH %s result: %s, scmd=0x%p\n",
+ str, ((rc == SUCCESS) ? "success" : "failed"), scmd);
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+ }
+ return rc;
+}
+
+static int leapraid_eh_abort_handler(struct scsi_cmnd *scmd)
+{
+ return leapraid_error_handler(scmd, "ABORT TASK",
+ LEAPRAID_TM_TASKTYPE_ABORT_TASK);
+}
+
+static int leapraid_eh_device_reset_handler(struct scsi_cmnd *scmd)
+{
+ return leapraid_error_handler(scmd, "UNIT RESET",
+ LEAPRAID_TM_TASKTYPE_LOGICAL_UNIT_RESET);
+}
+
+static int leapraid_eh_target_reset_handler(struct scsi_cmnd *scmd)
+{
+ return leapraid_error_handler(scmd, "TARGET RESET",
+ LEAPRAID_TM_TASKTYPE_TARGET_RESET);
+}
+
+static int leapraid_eh_host_reset_handler(struct scsi_cmnd *scmd)
+{
+ struct leapraid_adapter *adapter = shost_priv(scmd->device->host);
+ int rc;
+
+ dev_info(&adapter->pdev->dev,
+ "EH HOST RESET enter: scmd=%p, req tag=%d\n",
+ scmd,
+ scsi_cmd_to_rq(scmd)->tag);
+ scsi_print_command(scmd);
+
+ if (adapter->scan_dev_desc.driver_loading ||
+ adapter->access_ctrl.host_removing) {
+ dev_err(&adapter->pdev->dev,
+ "EH HOST RESET failed: %s scmd=0x%p\n",
+ (adapter->access_ctrl.host_removing ?
+ "shost removing!" : "driver loading!"), scmd);
+ rc = FAILED;
+ goto out;
+ }
+
+ dev_info(&adapter->pdev->dev, "%s:%d issuing hard reset\n",
+ __func__, __LINE__);
+ if (leapraid_hard_reset_handler(adapter, FULL_RESET) < 0)
+ rc = FAILED;
+ else
+ rc = SUCCESS;
+
+out:
+ dev_info(&adapter->pdev->dev, "EH HOST RESET result: %s, scmd=0x%p\n",
+ ((rc == SUCCESS) ? "success" : "failed"), scmd);
+ return rc;
+}
+
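+/*
+ * Allocate the per-LUN private data and link it with the target private
+ * data; also record the scsi_device in the matching RAID volume, or assign
+ * the starget to the matching SAS device.
+ */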
+static int leapraid_slave_alloc(struct scsi_device *sdev)
+{
+ struct leapraid_raid_volume *raid_volume;
+ struct leapraid_starget_priv *stgt_priv;
+ struct leapraid_sdev_priv *sdev_priv;
+ struct leapraid_adapter *adapter;
+ struct leapraid_sas_dev *sas_dev;
+ struct scsi_target *tgt;
+ struct Scsi_Host *shost;
+ unsigned long flags;
+
+ sdev_priv = kzalloc(sizeof(*sdev_priv), GFP_KERNEL);
+ if (!sdev_priv)
+ return -ENOMEM;
+
+ sdev_priv->lun = sdev->lun;
+ sdev_priv->flg = LEAPRAID_DEVICE_FLG_INIT;
+ tgt = scsi_target(sdev);
+ stgt_priv = tgt->hostdata;
+ stgt_priv->num_luns++;
+ sdev_priv->starget_priv = stgt_priv;
+ sdev->hostdata = sdev_priv;
+ if ((stgt_priv->flg & LEAPRAID_TGT_FLG_RAID_MEMBER))
+ sdev->no_uld_attach = LEAPRAID_NO_ULD_ATTACH;
+
+ shost = dev_to_shost(&tgt->dev);
+ adapter = shost_priv(shost);
+ if (tgt->channel == RAID_CHANNEL) {
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_id(adapter,
+ tgt->id,
+ tgt->channel);
+ if (raid_volume)
+ raid_volume->sdev = sdev;
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock,
+ flags);
+ }
+
+ if (!(stgt_priv->flg & LEAPRAID_TGT_FLG_VOLUME)) {
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter,
+ stgt_priv->sas_address,
+ stgt_priv->card_port);
+ if (sas_dev && !sas_dev->starget) {
+ sdev_printk(KERN_INFO, sdev,
+ "%s: assign starget to sas_dev\n", __func__);
+ sas_dev->starget = tgt;
+ }
+
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ }
+ return 0;
+}
+
+static int leapraid_slave_cfg_volume(struct scsi_device *sdev)
+{
+ struct Scsi_Host *shost = sdev->host;
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ struct leapraid_raid_volume *raid_volume;
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_sdev_priv *sdev_priv;
+ unsigned long flags;
+ int qd;
+ u16 hdl;
+
+ sdev_priv = sdev->hostdata;
+ starget_priv = sdev_priv->starget_priv;
+ hdl = starget_priv->hdl;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_hdl(adapter, hdl);
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+ if (!raid_volume) {
+ sdev_printk(KERN_WARNING, sdev,
+ "%s: raid_volume not found, hdl=0x%x\n",
+ __func__, hdl);
+ return 1;
+ }
+
+ if (leapraid_get_volume_cap(adapter, raid_volume)) {
+ sdev_printk(KERN_ERR, sdev,
+ "%s: failed to get volume cap, hdl=0x%x\n",
+ __func__, hdl);
+ return 1;
+ }
+
+ qd = (raid_volume->dev_info & LEAPRAID_DEVTYP_SSP_TGT) ?
+ LEAPRAID_SAS_QUEUE_DEPTH : LEAPRAID_SATA_QUEUE_DEPTH;
+ if (raid_volume->vol_type != LEAPRAID_VOL_TYPE_RAID0)
+ qd = LEAPRAID_RAID_QUEUE_DEPTH;
+
+ sdev_printk(KERN_INFO, sdev,
+ "raid volume: hdl=0x%04x, wwid=0x%016llx\n",
+ raid_volume->hdl, (unsigned long long)raid_volume->wwid);
+
+ if (shost->max_sectors > LEAPRAID_MAX_SECTORS)
+ blk_queue_max_hw_sectors(sdev->request_queue,
+ LEAPRAID_MAX_SECTORS);
+
+ leapraid_adjust_sdev_queue_depth(sdev, qd);
+ return 0;
+}
+
+static int leapraid_slave_configure_extra(struct scsi_device *sdev,
+ struct leapraid_sas_dev **psas_dev,
+ u16 vol_hdl, u64 volume_wwid,
+ bool *is_target_ssp, int *qd)
+{
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_sdev_priv *sdev_priv;
+ struct Scsi_Host *shost = sdev->host;
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ unsigned long flags;
+
+ sdev_priv = sdev->hostdata;
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ *is_target_ssp = false;
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr(adapter,
+ sdev_priv->starget_priv->sas_address,
+ sdev_priv->starget_priv->card_port);
+ if (!sas_dev) {
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ sdev_printk(KERN_WARNING, sdev,
+ "%s: sas_dev not found, sas=0x%llx\n",
+ __func__, sdev_priv->starget_priv->sas_address);
+ return 1;
+ }
+
+ *psas_dev = sas_dev;
+ sas_dev->volume_hdl = vol_hdl;
+ sas_dev->volume_wwid = volume_wwid;
+ if (sas_dev->dev_info & LEAPRAID_DEVTYP_SSP_TGT) {
+ *qd = (sas_dev->port_type > 1) ?
+ adapter->adapter_attr.wideport_max_queue_depth :
+ adapter->adapter_attr.narrowport_max_queue_depth;
+ *is_target_ssp = true;
+ if (sas_dev->dev_info & LEAPRAID_DEVTYP_SEP)
+ sdev_priv->sep = true;
+ } else {
+ *qd = adapter->adapter_attr.sata_max_queue_depth;
+ }
+
+ sdev_printk(KERN_INFO, sdev,
+ "sdev: dev name=0x%016llx, sas addr=0x%016llx\n",
+ (unsigned long long)sas_dev->dev_name,
+ (unsigned long long)sas_dev->sas_addr);
+ leapraid_sdev_put(sas_dev);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ return 0;
+}
+
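+/*
+ * Finish per-LUN setup: volumes are handled by leapraid_slave_cfg_volume(),
+ * RAID members get their owning volume handle/WWID resolved, and plain SAS
+ * devices get a queue depth matching their port type.
+ */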
+static int leapraid_slave_configure(struct scsi_device *sdev)
+{
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_sdev_priv *sdev_priv;
+ struct Scsi_Host *shost = sdev->host;
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_adapter *adapter;
+ u16 hdl, vol_hdl = 0;
+ bool is_target_ssp = false;
+ u64 volume_wwid = 0;
+ int qd = 1;
+
+ adapter = shost_priv(shost);
+ sdev_priv = sdev->hostdata;
+ sdev_priv->flg &= ~LEAPRAID_DEVICE_FLG_INIT;
+ starget_priv = sdev_priv->starget_priv;
+ hdl = starget_priv->hdl;
+ if (starget_priv->flg & LEAPRAID_TGT_FLG_VOLUME)
+ return leapraid_slave_cfg_volume(sdev);
+
+ if (starget_priv->flg & LEAPRAID_TGT_FLG_RAID_MEMBER) {
+ if (leapraid_cfg_get_volume_hdl(adapter, hdl, &vol_hdl)) {
+ sdev_printk(KERN_WARNING, sdev,
+ "%s: get volume hdl failed, hdl=0x%x\n",
+ __func__, hdl);
+ return 1;
+ }
+
+ if (vol_hdl && leapraid_cfg_get_volume_wwid(adapter, vol_hdl,
+ &volume_wwid)) {
+ sdev_printk(KERN_WARNING, sdev,
+ "%s: get wwid failed, volume_hdl=0x%x\n",
+ __func__, vol_hdl);
+ return 1;
+ }
+ }
+
+ if (leapraid_slave_configure_extra(sdev, &sas_dev, vol_hdl,
+ volume_wwid, &is_target_ssp, &qd)) {
+ sdev_printk(KERN_WARNING, sdev,
+ "%s: slave_configure_extra failed\n", __func__);
+ return 1;
+ }
+
+ leapraid_adjust_sdev_queue_depth(sdev, qd);
+ if (is_target_ssp)
+ sas_read_port_mode_page(sdev);
+
+ return 0;
+}
+
+static void leapraid_slave_destroy(struct scsi_device *sdev)
+{
+ struct leapraid_adapter *adapter;
+ struct Scsi_Host *shost;
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_starget_priv *starget_priv;
+ struct scsi_target *stgt;
+ unsigned long flags;
+
+ if (!sdev->hostdata)
+ return;
+
+ stgt = scsi_target(sdev);
+ starget_priv = stgt->hostdata;
+ starget_priv->num_luns--;
+ shost = dev_to_shost(&stgt->dev);
+ adapter = shost_priv(shost);
+ if (!(starget_priv->flg & LEAPRAID_TGT_FLG_VOLUME)) {
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_from_tgt(adapter,
+ starget_priv);
+ if (sas_dev && !starget_priv->num_luns)
+ sas_dev->starget = NULL;
+ if (sas_dev)
+ leapraid_sdev_put(sas_dev);
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+ }
+
+ kfree(sdev->hostdata);
+ sdev->hostdata = NULL;
+}
+
+static int leapraid_target_alloc_raid(struct scsi_target *tgt)
+{
+ struct leapraid_starget_priv *starget_priv;
+ struct leapraid_raid_volume *raid_volume;
+ struct Scsi_Host *shost = dev_to_shost(&tgt->dev);
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ unsigned long flags;
+
+ starget_priv = (struct leapraid_starget_priv *)tgt->hostdata;
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_id(adapter, tgt->id,
+ tgt->channel);
+ if (raid_volume) {
+ starget_priv->hdl = raid_volume->hdl;
+ starget_priv->sas_address = raid_volume->wwid;
+ starget_priv->flg |= LEAPRAID_TGT_FLG_VOLUME;
+ raid_volume->starget = tgt;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+ return 0;
+}
+
+static int leapraid_target_alloc_sas(struct scsi_target *tgt)
+{
+ struct sas_rphy *rphy;
+ struct Scsi_Host *shost;
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_adapter *adapter;
+ struct leapraid_starget_priv *starget_priv;
+ unsigned long flags;
+
+ shost = dev_to_shost(&tgt->dev);
+ adapter = shost_priv(shost);
+ starget_priv = (struct leapraid_starget_priv *)tgt->hostdata;
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ rphy = dev_to_rphy(tgt->dev.parent);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr_and_rphy(adapter,
+ rphy->identify.sas_address,
+ rphy);
+ if (sas_dev) {
+ starget_priv->sas_dev = sas_dev;
+ starget_priv->card_port = sas_dev->card_port;
+ starget_priv->sas_address = sas_dev->sas_addr;
+ starget_priv->hdl = sas_dev->hdl;
+ sas_dev->channel = tgt->channel;
+ sas_dev->id = tgt->id;
+ sas_dev->starget = tgt;
+ if (test_bit(sas_dev->hdl,
+ (unsigned long *)adapter->dev_topo.pd_hdls))
+ starget_priv->flg |= LEAPRAID_TGT_FLG_RAID_MEMBER;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ return 0;
+}
+
+static int leapraid_target_alloc(struct scsi_target *tgt)
+{
+ struct leapraid_starget_priv *starget_priv;
+
+ starget_priv = kzalloc(sizeof(*starget_priv), GFP_KERNEL);
+ if (!starget_priv)
+ return -ENOMEM;
+
+ tgt->hostdata = starget_priv;
+ starget_priv->starget = tgt;
+ starget_priv->hdl = LEAPRAID_INVALID_DEV_HANDLE;
+ if (tgt->channel == RAID_CHANNEL)
+ return leapraid_target_alloc_raid(tgt);
+
+ return leapraid_target_alloc_sas(tgt);
+}
+
+static void leapraid_target_destroy_raid(struct scsi_target *tgt)
+{
+ struct leapraid_raid_volume *raid_volume;
+ struct Scsi_Host *shost = dev_to_shost(&tgt->dev);
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->dev_topo.raid_volume_lock, flags);
+ raid_volume = leapraid_raid_volume_find_by_id(adapter, tgt->id,
+ tgt->channel);
+ if (raid_volume) {
+ raid_volume->starget = NULL;
+ raid_volume->sdev = NULL;
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.raid_volume_lock, flags);
+}
+
+static void leapraid_target_destroy_sas(struct scsi_target *tgt)
+{
+ struct leapraid_adapter *adapter;
+ struct leapraid_sas_dev *sas_dev;
+ struct leapraid_starget_priv *starget_priv;
+ struct Scsi_Host *shost;
+ unsigned long flags;
+
+ shost = dev_to_shost(&tgt->dev);
+ adapter = shost_priv(shost);
+ starget_priv = tgt->hostdata;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_from_tgt(adapter,
+ starget_priv);
+ if (sas_dev &&
+ sas_dev->starget == tgt &&
+ sas_dev->id == tgt->id &&
+ sas_dev->channel == tgt->channel)
+ sas_dev->starget = NULL;
+
+ if (sas_dev) {
+ starget_priv->sas_dev = NULL;
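+		/*
+		 * Put twice: once for the reference taken by the lookup above
+		 * and once for the reference held via starget_priv->sas_dev
+		 * since target_alloc.
+		 */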
+ leapraid_sdev_put(sas_dev);
+ leapraid_sdev_put(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+}
+
+static void leapraid_target_destroy(struct scsi_target *tgt)
+{
+ struct leapraid_starget_priv *starget_priv;
+
+ starget_priv = tgt->hostdata;
+ if (!starget_priv)
+ return;
+
+ if (tgt->channel == RAID_CHANNEL) {
+ leapraid_target_destroy_raid(tgt);
+ goto out;
+ }
+
+ leapraid_target_destroy_sas(tgt);
+
+out:
+ kfree(starget_priv);
+ tgt->hostdata = NULL;
+}
+
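+/*
+ * Poll the state of the initial device scan. Returns true once the scan has
+ * finished (successfully or not); *need_hard_reset is set when the adapter
+ * reports a fault while the scan is still running.
+ */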
+static bool leapraid_scan_check_status(struct leapraid_adapter *adapter,
+ bool *need_hard_reset)
+{
+ u32 adapter_state;
+
+ if (adapter->scan_dev_desc.scan_start) {
+ adapter_state = leapraid_get_adapter_state(adapter);
+ if (adapter_state == LEAPRAID_DB_FAULT) {
+ *need_hard_reset = true;
+ return true;
+ }
+ return false;
+ }
+
+ if (adapter->driver_cmds.scan_dev_cmd.status & LEAPRAID_CMD_RESET) {
+ dev_err(&adapter->pdev->dev,
+ "device scan: aborted due to reset\n");
+ adapter->driver_cmds.scan_dev_cmd.status =
+ LEAPRAID_CMD_NOT_USED;
+ adapter->scan_dev_desc.driver_loading = false;
+ return true;
+ }
+
+ if (adapter->scan_dev_desc.scan_start_failed) {
+ dev_err(&adapter->pdev->dev,
+ "device scan: failed with adapter_status=0x%08x\n",
+ adapter->scan_dev_desc.scan_start_failed);
+ adapter->scan_dev_desc.driver_loading = false;
+ adapter->scan_dev_desc.wait_scan_dev_done = false;
+ adapter->access_ctrl.host_removing = true;
+ return true;
+ }
+
+ dev_info(&adapter->pdev->dev, "device scan: SUCCESS\n");
+ adapter->driver_cmds.scan_dev_cmd.status = LEAPRAID_CMD_NOT_USED;
+ leapraid_scan_dev_done(adapter);
+ return true;
+}
+
+static int leapraid_scan_finished(struct Scsi_Host *shost, unsigned long time)
+{
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ bool need_hard_reset = false;
+
+ if (time >= (LEAPRAID_SCAN_DEV_CMD_TIMEOUT * HZ)) {
+ adapter->driver_cmds.scan_dev_cmd.status =
+ LEAPRAID_CMD_NOT_USED;
+		dev_err(&adapter->pdev->dev,
+			"device scan: failed with timeout %ds\n",
+			LEAPRAID_SCAN_DEV_CMD_TIMEOUT);
+ adapter->scan_dev_desc.driver_loading = false;
+ return 1;
+ }
+
+ if (!leapraid_scan_check_status(adapter, &need_hard_reset))
+ return 0;
+
+ if (need_hard_reset) {
+ adapter->driver_cmds.scan_dev_cmd.status =
+ LEAPRAID_CMD_NOT_USED;
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ if (leapraid_hard_reset_handler(adapter, PART_RESET))
+ adapter->scan_dev_desc.driver_loading = false;
+ }
+
+ return 1;
+}
+
+static void leapraid_scan_start(struct Scsi_Host *shost)
+{
+ struct leapraid_adapter *adapter = shost_priv(shost);
+
+ adapter->scan_dev_desc.scan_start = true;
+ leapraid_scan_dev(adapter, true);
+}
+
+static int leapraid_calc_max_queue_depth(struct scsi_device *sdev, int qdepth)
+{
+ struct Scsi_Host *shost;
+ int max_depth;
+
+ shost = sdev->host;
+ max_depth = shost->can_queue;
+
+ if (!sdev->tagged_supported)
+ max_depth = 1;
+
+ if (qdepth > max_depth)
+ qdepth = max_depth;
+
+ return qdepth;
+}
+
+static int leapraid_change_queue_depth(struct scsi_device *sdev, int qdepth)
+{
+ qdepth = leapraid_calc_max_queue_depth(sdev, qdepth);
+ scsi_change_queue_depth(sdev, qdepth);
+ return sdev->queue_depth;
+}
+
+void leapraid_adjust_sdev_queue_depth(struct scsi_device *sdev, int qdepth)
+{
+ leapraid_change_queue_depth(sdev, qdepth);
+}
+
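+/*
+ * Map the blk-mq hardware contexts onto the adapter's interrupt-driven and
+ * polled reply queues; the default map follows the PCI MSI-X affinity and
+ * the poll map is laid out after it.
+ */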
+static void leapraid_map_queues(struct Scsi_Host *shost)
+{
+ struct leapraid_adapter *adapter;
+ struct blk_mq_queue_map *queue_map;
+ int msix_queue_count;
+ int poll_queue_count;
+ int queue_offset;
+ int map_index;
+
+	adapter = shost_priv(shost);
+ if (shost->nr_hw_queues == 1)
+ goto out;
+
+ msix_queue_count = adapter->notification_desc.iopoll_qdex;
+ poll_queue_count = adapter->adapter_attr.rq_cnt - msix_queue_count;
+
+ queue_offset = 0;
+ for (map_index = 0; map_index < shost->nr_maps; map_index++) {
+ queue_map = &shost->tag_set.map[map_index];
+ queue_map->nr_queues = 0;
+
+ switch (map_index) {
+ case HCTX_TYPE_DEFAULT:
+ queue_map->nr_queues = msix_queue_count;
+ queue_map->queue_offset = queue_offset;
+ BUG_ON(!queue_map->nr_queues);
+ blk_mq_pci_map_queues(queue_map, adapter->pdev, 0);
+ break;
+ case HCTX_TYPE_POLL:
+ queue_map->nr_queues = poll_queue_count;
+ queue_map->queue_offset = queue_offset;
+ blk_mq_map_queues(queue_map);
+ break;
+ default:
+ queue_map->queue_offset = queue_offset;
+ blk_mq_pci_map_queues(queue_map, adapter->pdev, 0);
+ break;
+ }
+ queue_offset += queue_map->nr_queues;
+ }
+
+out:
+ return;
+}
+
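+/*
+ * mq_poll callback: service the polled reply queue matching queue_num
+ * unless polling is paused or the queue is already being processed.
+ */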
+int leapraid_blk_mq_poll(struct Scsi_Host *shost, unsigned int queue_num)
+{
+	struct leapraid_adapter *adapter = shost_priv(shost);
+ struct leapraid_blk_mq_poll_rq *blk_mq_poll_rq;
+ int num_entries;
+ int qid = queue_num - adapter->notification_desc.iopoll_qdex;
+
+ if (atomic_read(&adapter->notification_desc.blk_mq_poll_rqs[qid].pause) ||
+ !atomic_add_unless(&adapter->notification_desc.blk_mq_poll_rqs[qid].busy, 1, 1))
+ return 0;
+
+ blk_mq_poll_rq = &adapter->notification_desc.blk_mq_poll_rqs[qid];
+ num_entries = leapraid_rep_queue_handler(&blk_mq_poll_rq->rq);
+ atomic_dec(&adapter->notification_desc.blk_mq_poll_rqs[qid].busy);
+ return num_entries;
+}
+
+static int leapraid_bios_param(struct scsi_device *sdev,
+ struct block_device *bdev,
+ sector_t capacity, int geom[])
+{
+ int heads = 0;
+ int sectors = 0;
+ sector_t cylinders;
+
+ if (scsi_partsize(bdev, capacity, geom))
+ return 0;
+
+ if ((ulong)capacity >= LEAPRAID_LARGE_DISK_THRESHOLD) {
+ heads = LEAPRAID_LARGE_DISK_HEADS;
+ sectors = LEAPRAID_LARGE_DISK_SECTORS;
+ } else {
+ heads = LEAPRAID_SMALL_DISK_HEADS;
+ sectors = LEAPRAID_SMALL_DISK_SECTORS;
+ }
+
+ cylinders = capacity;
+ sector_div(cylinders, heads * sectors);
+
+ geom[0] = heads;
+ geom[1] = sectors;
+ geom[2] = cylinders;
+ return 0;
+}
+
+static ssize_t fw_queue_depth_show(struct device *cdev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct leapraid_adapter *adapter = shost_priv(shost);
+
+ return sysfs_emit(buf, "%02d\n",
+ adapter->adapter_attr.features.req_slot);
+}
+
+static ssize_t host_sas_address_show(struct device *cdev,
+ struct device_attribute *attr, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct leapraid_adapter *adapter = shost_priv(shost);
+
+ return sysfs_emit(buf, "0x%016llx\n",
+ (unsigned long long)adapter->dev_topo.card.sas_address);
+}
+
+static DEVICE_ATTR_RO(fw_queue_depth);
+static DEVICE_ATTR_RO(host_sas_address);
+
+static struct attribute *leapraid_shost_attrs[] = {
+ &dev_attr_fw_queue_depth.attr,
+ &dev_attr_host_sas_address.attr,
+ NULL,
+};
+
+ATTRIBUTE_GROUPS(leapraid_shost);
+
+static ssize_t sas_address_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct leapraid_sdev_priv *sas_device_priv_data = sdev->hostdata;
+
+ return sysfs_emit(buf, "0x%016llx\n",
+ (unsigned long long)sas_device_priv_data->starget_priv->sas_address);
+}
+
+static ssize_t sas_device_handle_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct leapraid_sdev_priv *sas_device_priv_data = sdev->hostdata;
+
+ return sysfs_emit(buf, "0x%04x\n",
+ sas_device_priv_data->starget_priv->hdl);
+}
+
+static ssize_t sas_ncq_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct leapraid_sdev_priv *sas_device_priv_data = sdev->hostdata;
+
+ return sysfs_emit(buf, "%d\n", sas_device_priv_data->ncq);
+}
+
+static ssize_t sas_ncq_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct leapraid_sdev_priv *sas_device_priv_data = sdev->hostdata;
+ struct scsi_vpd *vpd_pg89;
+ int ncq_op = 0;
+ bool ncq_supported = false;
+
+ if (kstrtoint(buf, 0, &ncq_op))
+ goto out;
+
+ rcu_read_lock();
+ vpd_pg89 = rcu_dereference(sdev->vpd_pg89);
+ if (!vpd_pg89 || vpd_pg89->len < LEAPRAID_VPD_PG89_MIN_LEN) {
+ rcu_read_unlock();
+ goto out;
+ }
+
+ ncq_supported = (vpd_pg89->data[LEAPRAID_VPD_PG89_NCQ_BYTE_IDX] >>
+ LEAPRAID_VPD_PG89_NCQ_BIT_SHIFT) &
+ LEAPRAID_VPD_PG89_NCQ_BIT_MASK;
+ rcu_read_unlock();
+ if (ncq_supported)
+ sas_device_priv_data->ncq = ncq_op;
+	return count;
+out:
+ return -EINVAL;
+}
+
+static DEVICE_ATTR_RO(sas_address);
+static DEVICE_ATTR_RO(sas_device_handle);
+
+static DEVICE_ATTR_RW(sas_ncq);
+
+static struct attribute *leapraid_sdev_attrs[] = {
+ &dev_attr_sas_address.attr,
+ &dev_attr_sas_device_handle.attr,
+ &dev_attr_sas_ncq.attr,
+ NULL,
+};
+
+ATTRIBUTE_GROUPS(leapraid_sdev);
+
+static struct scsi_host_template leapraid_driver_template = {
+ .module = THIS_MODULE,
+ .name = "LEAPIO RAID Host",
+ .proc_name = LEAPRAID_DRIVER_NAME,
+ .queuecommand = leapraid_queuecommand,
+ .cmd_size = sizeof(struct leapraid_io_req_tracker),
+ .init_cmd_priv = leapraid_init_cmd_priv,
+ .exit_cmd_priv = leapraid_exit_cmd_priv,
+ .eh_abort_handler = leapraid_eh_abort_handler,
+ .eh_device_reset_handler = leapraid_eh_device_reset_handler,
+ .eh_target_reset_handler = leapraid_eh_target_reset_handler,
+ .eh_host_reset_handler = leapraid_eh_host_reset_handler,
+ .slave_alloc = leapraid_slave_alloc,
+ .slave_destroy = leapraid_slave_destroy,
+ .slave_configure = leapraid_slave_configure,
+ .target_alloc = leapraid_target_alloc,
+ .target_destroy = leapraid_target_destroy,
+ .scan_finished = leapraid_scan_finished,
+ .scan_start = leapraid_scan_start,
+ .change_queue_depth = leapraid_change_queue_depth,
+ .map_queues = leapraid_map_queues,
+ .mq_poll = leapraid_blk_mq_poll,
+ .bios_param = leapraid_bios_param,
+ .can_queue = LEAPRAID_CAN_QUEUE_MIN,
+ .this_id = LEAPRAID_THIS_ID_NONE,
+ .sg_tablesize = LEAPRAID_SG_DEPTH,
+ .max_sectors = LEAPRAID_DEF_MAX_SECTORS,
+ .max_segment_size = LEAPRAID_MAX_SEGMENT_SIZE,
+ .cmd_per_lun = LEAPRAID_CMD_PER_LUN,
+ .shost_groups = leapraid_shost_groups,
+ .sdev_groups = leapraid_sdev_groups,
+ .track_queue_depth = 1,
+};
+
+static void leapraid_lock_init(struct leapraid_adapter *adapter)
+{
+ mutex_init(&adapter->reset_desc.adapter_reset_mutex);
+ mutex_init(&adapter->reset_desc.host_diag_mutex);
+ mutex_init(&adapter->access_ctrl.pci_access_lock);
+
+ spin_lock_init(&adapter->reset_desc.adapter_reset_lock);
+ spin_lock_init(&adapter->dynamic_task_desc.task_lock);
+ spin_lock_init(&adapter->dev_topo.sas_dev_lock);
+ spin_lock_init(&adapter->dev_topo.topo_node_lock);
+ spin_lock_init(&adapter->fw_evt_s.fw_evt_lock);
+ spin_lock_init(&adapter->dev_topo.raid_volume_lock);
+}
+
+static void leapraid_list_init(struct leapraid_adapter *adapter)
+{
+ INIT_LIST_HEAD(&adapter->dev_topo.sas_dev_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.card_port_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.sas_dev_init_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.exp_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.enc_list);
+ INIT_LIST_HEAD(&adapter->fw_evt_s.fw_evt_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.raid_volume_list);
+ INIT_LIST_HEAD(&adapter->dev_topo.card.sas_port_list);
+}
+
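+/*
+ * PCI probe: allocate the Scsi_Host, initialize adapter locks, lists and
+ * the firmware event workqueue, bring the controller up and register the
+ * host with the SCSI midlayer.
+ */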
+static int leapraid_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct leapraid_adapter *adapter = NULL;
+ struct Scsi_Host *shost = NULL;
+ int iopoll_q_count = 0;
+ int rc;
+
+ shost = scsi_host_alloc(&leapraid_driver_template,
+ sizeof(struct leapraid_adapter));
+ if (!shost)
+ return -ENODEV;
+
+ adapter = shost_priv(shost);
+ memset(adapter, 0, sizeof(struct leapraid_adapter));
+ adapter->adapter_attr.id = leapraid_ids++;
+
+ adapter->adapter_attr.enable_mp = enable_mp;
+
+ INIT_LIST_HEAD(&adapter->list);
+ spin_lock(&leapraid_adapter_lock);
+ list_add_tail(&adapter->list, &leapraid_adapter_list);
+ spin_unlock(&leapraid_adapter_lock);
+
+ adapter->shost = shost;
+ adapter->pdev = pdev;
+ adapter->fw_log_desc.open_pcie_trace = open_pcie_trace;
+ leapraid_lock_init(adapter);
+ leapraid_list_init(adapter);
+ sprintf(adapter->adapter_attr.name, "%s%d",
+ LEAPRAID_DRIVER_NAME, adapter->adapter_attr.id);
+
+ shost->max_cmd_len = LEAPRAID_MAX_CDB_LEN;
+ shost->max_lun = LEAPRAID_MAX_LUNS;
+ shost->transportt = leapraid_transport_template;
+ shost->unique_id = adapter->adapter_attr.id;
+
+ snprintf(adapter->fw_evt_s.fw_evt_name,
+ sizeof(adapter->fw_evt_s.fw_evt_name),
+ "fw_event_%s%d", LEAPRAID_DRIVER_NAME,
+ adapter->adapter_attr.id);
+ adapter->fw_evt_s.fw_evt_thread =
+ alloc_ordered_workqueue(adapter->fw_evt_s.fw_evt_name, 0);
+ if (!adapter->fw_evt_s.fw_evt_thread) {
+ rc = -ENODEV;
+ goto evt_wq_fail;
+ }
+
+ shost->host_tagset = 1;
+ adapter->scan_dev_desc.driver_loading = true;
+	if (leapraid_ctrl_init(adapter)) {
+ rc = -ENODEV;
+ goto ctrl_init_fail;
+ }
+
+ shost->nr_hw_queues = 1;
+ if (shost->host_tagset) {
+ shost->nr_hw_queues = adapter->adapter_attr.rq_cnt;
+ iopoll_q_count = adapter->adapter_attr.rq_cnt -
+ adapter->notification_desc.iopoll_qdex;
+ shost->nr_maps = iopoll_q_count ? 3 : 1;
+ dev_info(&adapter->pdev->dev,
+ "max scsi io cmds %d shared with nr_hw_queues=%d\n",
+ shost->can_queue, shost->nr_hw_queues);
+ }
+
+ rc = scsi_add_host(shost, &pdev->dev);
+ if (rc) {
+ spin_lock(&leapraid_adapter_lock);
+ list_del(&adapter->list);
+ spin_unlock(&leapraid_adapter_lock);
+ goto scsi_add_shost_fail;
+ }
+
+ scsi_scan_host(shost);
+ return 0;
+
+scsi_add_shost_fail:
+ leapraid_remove_ctrl(adapter);
+ctrl_init_fail:
+ destroy_workqueue(adapter->fw_evt_s.fw_evt_thread);
+evt_wq_fail:
+ spin_lock(&leapraid_adapter_lock);
+ list_del(&adapter->list);
+ spin_unlock(&leapraid_adapter_lock);
+ scsi_host_put(shost);
+ return rc;
+}
+
+static void leapraid_cleanup_lists(struct leapraid_adapter *adapter)
+{
+ struct leapraid_raid_volume *raid_volume, *next_raid_volume;
+ struct leapraid_starget_priv *starget_priv_data;
+ struct leapraid_sas_port *leapraid_port, *next_port;
+ struct leapraid_card_port *port, *port_next;
+ struct leapraid_vphy *vphy, *vphy_next;
+
+ list_for_each_entry_safe(raid_volume, next_raid_volume,
+ &adapter->dev_topo.raid_volume_list, list) {
+ if (raid_volume->starget) {
+ starget_priv_data = raid_volume->starget->hostdata;
+ starget_priv_data->deleted = true;
+ scsi_remove_target(&raid_volume->starget->dev);
+ }
+ pr_info("removing hdl=0x%04x, wwid=0x%016llx\n",
+ raid_volume->hdl,
+ (unsigned long long)raid_volume->wwid);
+ leapraid_raid_volume_remove(adapter, raid_volume);
+ }
+
+ list_for_each_entry_safe(leapraid_port, next_port,
+ &adapter->dev_topo.card.sas_port_list,
+ port_list) {
+ if (leapraid_port->remote_identify.device_type ==
+ SAS_END_DEVICE)
+ leapraid_sas_dev_remove_by_sas_address(adapter,
+ leapraid_port->remote_identify.sas_address,
+ leapraid_port->card_port);
+ else if (leapraid_port->remote_identify.device_type ==
+ SAS_EDGE_EXPANDER_DEVICE ||
+ leapraid_port->remote_identify.device_type ==
+ SAS_FANOUT_EXPANDER_DEVICE)
+ leapraid_exp_rm(adapter,
+ leapraid_port->remote_identify.sas_address,
+ leapraid_port->card_port);
+ }
+
+ list_for_each_entry_safe(port, port_next,
+ &adapter->dev_topo.card_port_list, list) {
+ if (port->vphys_mask)
+ list_for_each_entry_safe(vphy, vphy_next,
+ &port->vphys_list, list) {
+ list_del(&vphy->list);
+ kfree(vphy);
+ }
+ list_del(&port->list);
+ kfree(port);
+ }
+
+ if (adapter->dev_topo.card.phys_num) {
+ kfree(adapter->dev_topo.card.card_phy);
+ adapter->dev_topo.card.card_phy = NULL;
+ adapter->dev_topo.card.phys_num = 0;
+ }
+}
+
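+/*
+ * PCI remove: quiesce outstanding work, notify the firmware of the
+ * shutdown, detach from the SAS transport layer and free all topology
+ * objects before releasing the Scsi_Host.
+ */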
+static void leapraid_remove(struct pci_dev *pdev)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ struct workqueue_struct *wq;
+ unsigned long flags;
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev, "unable to remove!\n");
+ return;
+ }
+
+ while (adapter->scan_dev_desc.driver_loading)
+ ssleep(1);
+
+ while (adapter->access_ctrl.shost_recovering)
+ ssleep(1);
+
+ adapter->access_ctrl.host_removing = true;
+
+ leapraid_wait_cmds_done(adapter);
+
+ leapraid_smart_polling_stop(adapter);
+ leapraid_free_internal_scsi_cmd(adapter);
+
+ if (leapraid_pci_removed(adapter)) {
+ leapraid_mq_polling_pause(adapter);
+ leapraid_clean_active_scsi_cmds(adapter);
+ }
+ leapraid_clean_active_fw_evt(adapter);
+
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ wq = adapter->fw_evt_s.fw_evt_thread;
+ adapter->fw_evt_s.fw_evt_thread = NULL;
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+ if (wq)
+ destroy_workqueue(wq);
+
+ leapraid_ir_shutdown(adapter);
+ sas_remove_host(shost);
+ leapraid_cleanup_lists(adapter);
+ leapraid_remove_ctrl(adapter);
+ spin_lock(&leapraid_adapter_lock);
+ list_del(&adapter->list);
+ spin_unlock(&leapraid_adapter_lock);
+ scsi_host_put(shost);
+}
+
+static void leapraid_shutdown(struct pci_dev *pdev)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ struct workqueue_struct *wq;
+ unsigned long flags;
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev, "unable to shutdown!\n");
+ return;
+ }
+
+ adapter->access_ctrl.host_removing = true;
+ leapraid_wait_cmds_done(adapter);
+ leapraid_clean_active_fw_evt(adapter);
+
+ spin_lock_irqsave(&adapter->fw_evt_s.fw_evt_lock, flags);
+ wq = adapter->fw_evt_s.fw_evt_thread;
+ adapter->fw_evt_s.fw_evt_thread = NULL;
+ spin_unlock_irqrestore(&adapter->fw_evt_s.fw_evt_lock, flags);
+ if (wq)
+ destroy_workqueue(wq);
+
+ leapraid_ir_shutdown(adapter);
+ leapraid_disable_controller(adapter);
+}
+
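+/*
+ * AER error_detected callback: block I/O and shut the controller down for
+ * frozen channels, or abandon it entirely on permanent failure.
+ */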
+static pci_ers_result_t leapraid_pci_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev, "failed to error detected for device\n");
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ pr_err("%s: pci error detected, state=%d\n",
+ adapter->adapter_attr.name, state);
+
+ switch (state) {
+ case pci_channel_io_normal:
+ return PCI_ERS_RESULT_CAN_RECOVER;
+ case pci_channel_io_frozen:
+ adapter->access_ctrl.pcie_recovering = true;
+ scsi_block_requests(adapter->shost);
+ leapraid_smart_polling_stop(adapter);
+ leapraid_check_scheduled_fault_stop(adapter);
+ leapraid_fw_log_stop(adapter);
+ leapraid_disable_controller(adapter);
+ return PCI_ERS_RESULT_NEED_RESET;
+ case pci_channel_io_perm_failure:
+ adapter->access_ctrl.pcie_recovering = true;
+ leapraid_smart_polling_stop(adapter);
+ leapraid_check_scheduled_fault_stop(adapter);
+ leapraid_fw_log_stop(adapter);
+ leapraid_mq_polling_pause(adapter);
+ leapraid_clean_active_scsi_cmds(adapter);
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ return PCI_ERS_RESULT_NEED_RESET;
+}
+
+static pci_ers_result_t leapraid_pci_mmio_enabled(struct pci_dev *pdev)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev,
+ "failed to enable mmio for device\n");
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ dev_info(&pdev->dev, "%s: pci error mmio enabled\n",
+ adapter->adapter_attr.name);
+
+ return PCI_ERS_RESULT_RECOVERED;
+}
+
+static pci_ers_result_t leapraid_pci_slot_reset(struct pci_dev *pdev)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ int rc;
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev,
+ "failed to slot reset for device\n");
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ dev_err(&pdev->dev, "%s pci error slot reset\n",
+ adapter->adapter_attr.name);
+
+ adapter->access_ctrl.pcie_recovering = false;
+ adapter->pdev = pdev;
+ pci_restore_state(pdev);
+ if (leapraid_set_pcie_and_notification(adapter))
+ return PCI_ERS_RESULT_DISCONNECT;
+
+ dev_info(&pdev->dev, "%s: hard reset triggered by pci slot reset\n",
+ adapter->adapter_attr.name);
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ rc = leapraid_hard_reset_handler(adapter, FULL_RESET);
+ dev_info(&pdev->dev, "%s hard reset: %s\n",
+ adapter->adapter_attr.name, (rc == 0) ? "success" : "failed");
+
+ return (rc == 0) ? PCI_ERS_RESULT_RECOVERED :
+ PCI_ERS_RESULT_DISCONNECT;
+}
+
+static void leapraid_pci_resume(struct pci_dev *pdev)
+{
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev, "failed to resume\n");
+ return;
+ }
+
+ dev_err(&pdev->dev, "PCI error resume!\n");
+ pci_aer_clear_nonfatal_status(pdev);
+ leapraid_check_scheduled_fault_start(adapter);
+ leapraid_fw_log_start(adapter);
+ scsi_unblock_requests(adapter->shost);
+ leapraid_smart_polling_start(adapter);
+}
+
+MODULE_DEVICE_TABLE(pci, leapraid_pci_table);
+static struct pci_error_handlers leapraid_err_handler = {
+ .error_detected = leapraid_pci_error_detected,
+ .mmio_enabled = leapraid_pci_mmio_enabled,
+ .slot_reset = leapraid_pci_slot_reset,
+ .resume = leapraid_pci_resume,
+};
+
+#ifdef CONFIG_PM
+static int leapraid_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ pci_power_t device_state;
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev,
+ "suspend failed, invalid host or adapter\n");
+ return -ENXIO;
+ }
+
+ leapraid_smart_polling_stop(adapter);
+ leapraid_check_scheduled_fault_stop(adapter);
+ leapraid_fw_log_stop(adapter);
+ scsi_block_requests(shost);
+ device_state = pci_choose_state(pdev, state);
+ leapraid_ir_shutdown(adapter);
+
+ dev_info(&pdev->dev, "entering PCI power state D%d, (slot=%s)\n",
+ device_state, pci_name(pdev));
+
+ pci_save_state(pdev);
+ leapraid_disable_controller(adapter);
+ pci_set_power_state(pdev, device_state);
+ return 0;
+}
+
+static int leapraid_resume(struct pci_dev *pdev)
+{
+ struct leapraid_adapter *adapter = pdev_to_adapter(pdev);
+ struct Scsi_Host *shost = pdev_to_shost(pdev);
+ pci_power_t device_state = pdev->current_state;
+ int rc;
+
+ if (!shost || !adapter) {
+ dev_err(&pdev->dev,
+ "resume failed, invalid host or adapter\n");
+ return -ENXIO;
+ }
+
+ dev_info(&pdev->dev,
+ "resuming device %s, previous state D%d\n",
+ pci_name(pdev), device_state);
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_enable_wake(pdev, PCI_D0, 0);
+ pci_restore_state(pdev);
+ adapter->pdev = pdev;
+ rc = leapraid_set_pcie_and_notification(adapter);
+ if (rc)
+ return rc;
+
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, PART_RESET);
+ scsi_unblock_requests(shost);
+ leapraid_check_scheduled_fault_start(adapter);
+ leapraid_fw_log_start(adapter);
+ leapraid_smart_polling_start(adapter);
+ return 0;
+}
+#endif /* CONFIG_PM */
+
+static struct pci_driver leapraid_driver = {
+ .name = LEAPRAID_DRIVER_NAME,
+ .id_table = leapraid_pci_table,
+ .probe = leapraid_probe,
+ .remove = leapraid_remove,
+ .shutdown = leapraid_shutdown,
+ .err_handler = &leapraid_err_handler,
+#ifdef CONFIG_PM
+ .suspend = leapraid_suspend,
+ .resume = leapraid_resume,
+#endif /* CONFIG_PM */
+};
+
+static int __init leapraid_init(void)
+{
+ int error;
+
+ pr_info("%s version %s loaded\n", LEAPRAID_DRIVER_NAME,
+ LEAPRAID_DRIVER_VERSION);
+
+ leapraid_transport_template =
+ sas_attach_transport(&leapraid_transport_functions);
+ if (!leapraid_transport_template)
+ return -ENODEV;
+
+ leapraid_ids = 0;
+
+ leapraid_ctl_init();
+
+ error = pci_register_driver(&leapraid_driver);
+ if (error)
+ sas_release_transport(leapraid_transport_template);
+
+ return error;
+}
+
+static void __exit leapraid_exit(void)
+{
+ pr_info("leapraid version %s unloading\n",
+ LEAPRAID_DRIVER_VERSION);
+
+ leapraid_ctl_exit();
+ pci_unregister_driver(&leapraid_driver);
+ sas_release_transport(leapraid_transport_template);
+}
+
+module_init(leapraid_init);
+module_exit(leapraid_exit);
diff --git a/drivers/scsi/leapraid/leapraid_transport.c b/drivers/scsi/leapraid/leapraid_transport.c
new file mode 100644
index 000000000000..d224449732a3
--- /dev/null
+++ b/drivers/scsi/leapraid/leapraid_transport.c
@@ -0,0 +1,1256 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2025 LeapIO Tech Inc.
+ *
+ * LeapRAID Storage and RAID Controller driver.
+ */
+
+#include <scsi/scsi_host.h>
+
+#include "leapraid_func.h"
+
+static struct leapraid_topo_node *leapraid_transport_topo_node_by_sas_addr(
+ struct leapraid_adapter *adapter,
+ u64 sas_addr,
+ struct leapraid_card_port *card_port)
+{
+ if (adapter->dev_topo.card.sas_address == sas_addr)
+ return &adapter->dev_topo.card;
+ else
+ return leapraid_exp_find_by_sas_address(adapter,
+ sas_addr,
+ card_port);
+}
+
+static u8 leapraid_get_port_id_by_expander(struct leapraid_adapter *adapter,
+ struct sas_rphy *rphy)
+{
+ struct leapraid_topo_node *topo_node_exp;
+ unsigned long flags;
+ u8 port_id = 0xFF;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_for_each_entry(topo_node_exp, &adapter->dev_topo.exp_list, list) {
+ if (topo_node_exp->rphy == rphy) {
+ port_id = topo_node_exp->card_port->port_id;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ return port_id;
+}
+
+static u8 leapraid_get_port_id_by_end_dev(struct leapraid_adapter *adapter,
+ struct sas_rphy *rphy)
+{
+ struct leapraid_sas_dev *sas_dev;
+ unsigned long flags;
+ u8 port_id = 0xFF;
+
+ spin_lock_irqsave(&adapter->dev_topo.sas_dev_lock, flags);
+ sas_dev = leapraid_hold_lock_get_sas_dev_by_addr_and_rphy(adapter,
+ rphy->identify.sas_address,
+ rphy);
+ if (sas_dev) {
+ port_id = sas_dev->card_port->port_id;
+ leapraid_sdev_put(sas_dev);
+ }
+ spin_unlock_irqrestore(&adapter->dev_topo.sas_dev_lock, flags);
+
+ return port_id;
+}
+
+static u8 leapraid_transport_get_port_id_by_rphy(
+ struct leapraid_adapter *adapter,
+ struct sas_rphy *rphy)
+{
+ if (!rphy)
+ return 0xFF;
+
+ switch (rphy->identify.device_type) {
+ case SAS_EDGE_EXPANDER_DEVICE:
+ case SAS_FANOUT_EXPANDER_DEVICE:
+ return leapraid_get_port_id_by_expander(adapter, rphy);
+ case SAS_END_DEVICE:
+ return leapraid_get_port_id_by_end_dev(adapter, rphy);
+ default:
+ return 0xFF;
+ }
+}
+
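+/* Map LeapRAID negotiated link rate codes onto the SAS transport enum. */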
+static enum sas_linkrate leapraid_transport_convert_phy_link_rate(u8 link_rate)
+{
+ unsigned int i;
+
+ const struct linkrate_map {
+ u8 in;
+ enum sas_linkrate out;
+ } linkrate_table[] = {
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_1_5,
+ SAS_LINK_RATE_1_5_GBPS
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_3_0,
+ SAS_LINK_RATE_3_0_GBPS
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_6_0,
+ SAS_LINK_RATE_6_0_GBPS
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_12_0,
+			SAS_LINK_RATE_12_0_GBPS
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_PHY_DISABLED,
+ SAS_PHY_DISABLED
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_NEGOTIATION_FAILED,
+ SAS_LINK_RATE_FAILED
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_PORT_SELECTOR,
+ SAS_SATA_PORT_SELECTOR
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_SMP_RESETTING,
+ SAS_LINK_RATE_UNKNOWN
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_SATA_OOB_COMPLETE,
+ SAS_LINK_RATE_UNKNOWN
+ },
+ {
+ LEAPRAID_SAS_NEG_LINK_RATE_UNKNOWN_LINK_RATE,
+ SAS_LINK_RATE_UNKNOWN
+ },
+ };
+
+ for (i = 0; i < ARRAY_SIZE(linkrate_table); i++) {
+ if (linkrate_table[i].in == link_rate)
+ return linkrate_table[i].out;
+ }
+
+ return SAS_LINK_RATE_UNKNOWN;
+}
+
+static void leapraid_set_identify_protocol_flags(u32 dev_info,
+ struct sas_identify *identify)
+{
+ unsigned int i;
+
+ const struct protocol_mapping {
+ u32 mask;
+ u32 *target;
+ u32 protocol;
+ } mappings[] = {
+ {
+ LEAPRAID_DEVTYP_SSP_INIT,
+ &identify->initiator_port_protocols,
+ SAS_PROTOCOL_SSP
+ },
+ {
+ LEAPRAID_DEVTYP_STP_INIT,
+ &identify->initiator_port_protocols,
+ SAS_PROTOCOL_STP
+ },
+ {
+ LEAPRAID_DEVTYP_SMP_INIT,
+ &identify->initiator_port_protocols,
+ SAS_PROTOCOL_SMP
+ },
+ {
+ LEAPRAID_DEVTYP_SATA_HOST,
+ &identify->initiator_port_protocols,
+ SAS_PROTOCOL_SATA
+ },
+ {
+ LEAPRAID_DEVTYP_SSP_TGT,
+ &identify->target_port_protocols,
+ SAS_PROTOCOL_SSP
+ },
+ {
+ LEAPRAID_DEVTYP_STP_TGT,
+ &identify->target_port_protocols,
+ SAS_PROTOCOL_STP
+ },
+ {
+ LEAPRAID_DEVTYP_SMP_TGT,
+ &identify->target_port_protocols,
+ SAS_PROTOCOL_SMP
+ },
+ {
+ LEAPRAID_DEVTYP_SATA_DEV,
+ &identify->target_port_protocols,
+ SAS_PROTOCOL_SATA
+ },
+ };
+
+ for (i = 0; i < ARRAY_SIZE(mappings); i++)
+ if ((dev_info & mappings[i].mask) && mappings[i].target)
+ *mappings[i].target |= mappings[i].protocol;
+}
+
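+/*
+ * Fill a sas_identify structure from SAS Device Page 0 for the given
+ * device handle.
+ */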
+static int leapraid_transport_set_identify(struct leapraid_adapter *adapter,
+ u16 hdl,
+ struct sas_identify *identify)
+{
+ union cfg_param_1 cfgp1 = {0};
+ union cfg_param_2 cfgp2 = {0};
+ struct leapraid_sas_dev_p0 sas_dev_pg0;
+ u32 dev_info;
+
+ if ((adapter->access_ctrl.shost_recovering &&
+ !adapter->scan_dev_desc.driver_loading) ||
+ adapter->access_ctrl.pcie_recovering)
+ return -EFAULT;
+
+ cfgp1.form = LEAPRAID_SAS_DEV_CFG_PGAD_HDL;
+ cfgp2.handle = hdl;
+ if ((leapraid_op_config_page(adapter, &sas_dev_pg0, cfgp1,
+ cfgp2, GET_SAS_DEVICE_PG0)))
+ return -ENXIO;
+
+ memset(identify, 0, sizeof(struct sas_identify));
+ dev_info = le32_to_cpu(sas_dev_pg0.dev_info);
+ identify->sas_address = le64_to_cpu(sas_dev_pg0.sas_address);
+ identify->phy_identifier = sas_dev_pg0.phy_num;
+
+ switch (dev_info & LEAPRAID_DEVTYP_MASK_DEV_TYPE) {
+ case LEAPRAID_DEVTYP_NO_DEV:
+ identify->device_type = SAS_PHY_UNUSED;
+ break;
+ case LEAPRAID_DEVTYP_END_DEV:
+ identify->device_type = SAS_END_DEVICE;
+ break;
+ case LEAPRAID_DEVTYP_EDGE_EXPANDER:
+ identify->device_type = SAS_EDGE_EXPANDER_DEVICE;
+ break;
+ case LEAPRAID_DEVTYP_FANOUT_EXPANDER:
+ identify->device_type = SAS_FANOUT_EXPANDER_DEVICE;
+ break;
+ }
+
+ leapraid_set_identify_protocol_flags(dev_info, identify);
+
+ return 0;
+}
+
+static void leapraid_transport_exp_set_edev(struct leapraid_adapter *adapter,
+ void *data_out,
+ struct sas_expander_device *edev)
+{
+ struct leapraid_smp_passthrough_rep *smp_passthrough_rep;
+ struct leapraid_rep_manu_reply *rep_manu_reply;
+ u8 *component_id;
+ ssize_t __maybe_unused ret;
+
+ smp_passthrough_rep =
+ (void *)(&adapter->driver_cmds.transport_cmd.reply);
+ if (le16_to_cpu(smp_passthrough_rep->resp_data_len) !=
+ sizeof(struct leapraid_rep_manu_reply))
+ return;
+
+ rep_manu_reply = data_out + sizeof(struct leapraid_rep_manu_request);
+ ret = strscpy(edev->vendor_id, rep_manu_reply->vendor_identification,
+ SAS_EXPANDER_VENDOR_ID_LEN);
+ ret = strscpy(edev->product_id, rep_manu_reply->product_identification,
+ SAS_EXPANDER_PRODUCT_ID_LEN);
+ ret = strscpy(edev->product_rev,
+ rep_manu_reply->product_revision_level,
+ SAS_EXPANDER_PRODUCT_REV_LEN);
+ edev->level = rep_manu_reply->sas_format & 1;
+ if (edev->level) {
+ ret = strscpy(edev->component_vendor_id,
+ rep_manu_reply->component_vendor_identification,
+ SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN);
+
+ component_id = (u8 *)&rep_manu_reply->component_id;
+ edev->component_id = component_id[0] << 8 | component_id[1];
+ edev->component_revision_id =
+ rep_manu_reply->component_revision_level;
+ }
+}
+
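+/*
+ * Issue an SMP REPORT MANUFACTURER INFORMATION request to an expander and
+ * copy the result into the sas_expander_device attributes.
+ */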
+static int leapraid_transport_exp_report_manu(struct leapraid_adapter *adapter,
+ u64 sas_address,
+ struct sas_expander_device *edev,
+ u8 port_id)
+{
+ struct leapraid_smp_passthrough_req *smp_passthrough_req;
+ struct leapraid_rep_manu_request *rep_manu_request;
+ dma_addr_t h2c_dma_addr;
+ dma_addr_t c2h_dma_addr;
+ bool issue_reset = false;
+ void *data_out = NULL;
+ size_t c2h_size;
+ size_t h2c_size;
+ void *psge;
+ int rc = 0;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering) {
+ return -EFAULT;
+ }
+
+ mutex_lock(&adapter->driver_cmds.transport_cmd.mutex);
+ adapter->driver_cmds.transport_cmd.status = LEAPRAID_CMD_PENDING;
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc)
+ goto out;
+
+ h2c_size = sizeof(struct leapraid_rep_manu_request);
+ c2h_size = sizeof(struct leapraid_rep_manu_reply);
+ data_out = dma_alloc_coherent(&adapter->pdev->dev,
+ h2c_size + c2h_size,
+ &h2c_dma_addr,
+ GFP_ATOMIC);
+ if (!data_out) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ rep_manu_request = data_out;
+ rep_manu_request->smp_frame_type =
+ SMP_REPORT_MANUFACTURER_INFORMATION_FRAME_TYPE;
+ rep_manu_request->function = SMP_REPORT_MANUFACTURER_INFORMATION_FUNC;
+ rep_manu_request->allocated_response_length = 0;
+ rep_manu_request->request_length = 0;
+
+ smp_passthrough_req =
+ leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.transport_cmd.inter_taskid);
+ memset(smp_passthrough_req, 0,
+ sizeof(struct leapraid_smp_passthrough_req));
+ smp_passthrough_req->func = LEAPRAID_FUNC_SMP_PASSTHROUGH;
+ smp_passthrough_req->physical_port = port_id;
+ smp_passthrough_req->sas_address = cpu_to_le64(sas_address);
+ smp_passthrough_req->req_data_len = cpu_to_le16(h2c_size);
+ psge = &smp_passthrough_req->sgl;
+ c2h_dma_addr = h2c_dma_addr + sizeof(struct leapraid_rep_manu_request);
+ leapraid_build_ieee_sg(adapter, psge, h2c_dma_addr, h2c_size,
+ c2h_dma_addr, c2h_size);
+
+ init_completion(&adapter->driver_cmds.transport_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.transport_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.transport_cmd.done,
+ LEAPRAID_TRANSPORT_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.transport_cmd.status & LEAPRAID_CMD_DONE)) {
+ dev_err(&adapter->pdev->dev,
+ "%s: smp passthrough to exp timeout\n",
+ __func__);
+ if (!(adapter->driver_cmds.transport_cmd.status &
+ LEAPRAID_CMD_RESET))
+ issue_reset = true;
+
+ goto hard_reset;
+ }
+
+ if (adapter->driver_cmds.transport_cmd.status &
+ LEAPRAID_CMD_REPLY_VALID)
+ leapraid_transport_exp_set_edev(adapter, data_out, edev);
+
+hard_reset:
+ if (issue_reset) {
+ dev_info(&adapter->pdev->dev, "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ }
+out:
+ adapter->driver_cmds.transport_cmd.status = LEAPRAID_CMD_NOT_USED;
+ if (data_out)
+ dma_free_coherent(&adapter->pdev->dev, h2c_size + c2h_size,
+ data_out, h2c_dma_addr);
+
+ mutex_unlock(&adapter->driver_cmds.transport_cmd.mutex);
+ return rc;
+}
+
+static void leapraid_transport_del_port(struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port)
+{
+ dev_info(&sas_port->port->dev,
+ "remove port: sas addr=0x%016llx\n",
+ (unsigned long long)sas_port->remote_identify.sas_address);
+ switch (sas_port->remote_identify.device_type) {
+ case SAS_END_DEVICE:
+ leapraid_sas_dev_remove_by_sas_address(adapter,
+ sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+ break;
+ case SAS_EDGE_EXPANDER_DEVICE:
+ case SAS_FANOUT_EXPANDER_DEVICE:
+ leapraid_exp_rm(adapter, sas_port->remote_identify.sas_address,
+ sas_port->card_port);
+ break;
+ default:
+ break;
+ }
+}
+
+static void leapraid_transport_del_phy(struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port,
+ struct leapraid_card_phy *card_phy)
+{
+ dev_info(&card_phy->phy->dev,
+ "remove phy: sas addr=0x%016llx, phy=%d\n",
+ (unsigned long long)sas_port->remote_identify.sas_address,
+ card_phy->phy_id);
+ list_del(&card_phy->port_siblings);
+ sas_port->phys_num--;
+ sas_port_delete_phy(sas_port->port, card_phy->phy);
+ card_phy->phy_is_assigned = false;
+}
+
+static void leapraid_transport_add_phy(struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port,
+ struct leapraid_card_phy *card_phy)
+{
+ dev_info(&card_phy->phy->dev,
+ "add phy: sas addr=0x%016llx, phy=%d\n",
+ (unsigned long long)sas_port->remote_identify.sas_address,
+ card_phy->phy_id);
+ list_add_tail(&card_phy->port_siblings, &sas_port->phy_list);
+ sas_port->phys_num++;
+ sas_port_add_phy(sas_port->port, card_phy->phy);
+ card_phy->phy_is_assigned = true;
+}
+
+void leapraid_transport_attach_phy_to_port(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node,
+ struct leapraid_card_phy *card_phy,
+ u64 sas_address,
+ struct leapraid_card_port *card_port)
+{
+ struct leapraid_sas_port *sas_port;
+ struct leapraid_card_phy *card_phy_srch;
+
+ if (card_phy->phy_is_assigned)
+ return;
+
+ if (!card_port)
+ return;
+
+ list_for_each_entry(sas_port, &topo_node->sas_port_list, port_list) {
+ if (sas_port->remote_identify.sas_address != sas_address)
+ continue;
+
+ if (sas_port->card_port != card_port)
+ continue;
+
+ list_for_each_entry(card_phy_srch, &sas_port->phy_list,
+ port_siblings) {
+ if (card_phy_srch == card_phy)
+ return;
+ }
+ leapraid_transport_add_phy(adapter, sas_port, card_phy);
+ return;
+ }
+}
+
+void leapraid_transport_detach_phy_to_port(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node,
+ struct leapraid_card_phy *target_card_phy)
+{
+ struct leapraid_sas_port *sas_port, *sas_port_next;
+ struct leapraid_card_phy *cur_card_phy;
+
+ if (!target_card_phy->phy_is_assigned)
+ return;
+
+ list_for_each_entry_safe(sas_port, sas_port_next,
+ &topo_node->sas_port_list, port_list) {
+ list_for_each_entry(cur_card_phy, &sas_port->phy_list,
+ port_siblings) {
+ if (cur_card_phy != target_card_phy)
+ continue;
+
+ if (sas_port->phys_num == 1 &&
+ !adapter->access_ctrl.shost_recovering)
+ leapraid_transport_del_port(adapter, sas_port);
+ else
+ leapraid_transport_del_phy(adapter, sas_port,
+ target_card_phy);
+ return;
+ }
+ }
+}
+
+static void leapraid_detach_phy_from_old_port(struct leapraid_adapter *adapter,
+ struct leapraid_topo_node *topo_node,
+ u64 sas_address,
+ struct leapraid_card_port *card_port)
+{
+ int i;
+
+ for (i = 0; i < topo_node->phys_num; i++) {
+ if (topo_node->card_phy[i].remote_identify.sas_address !=
+ sas_address ||
+ topo_node->card_phy[i].card_port != card_port)
+ continue;
+ if (topo_node->card_phy[i].phy_is_assigned)
+ leapraid_transport_detach_phy_to_port(adapter,
+ topo_node,
+ &topo_node->card_phy[i]);
+ }
+}
+
+static struct leapraid_sas_port *leapraid_prepare_sas_port(
+ struct leapraid_adapter *adapter,
+ u16 handle, u64 sas_address,
+ struct leapraid_card_port *card_port,
+ struct leapraid_topo_node **out_topo_node)
+{
+ struct leapraid_topo_node *topo_node;
+ struct leapraid_sas_port *sas_port;
+ unsigned long flags;
+
+ sas_port = kzalloc(sizeof(*sas_port), GFP_KERNEL);
+ if (!sas_port)
+ return NULL;
+
+ INIT_LIST_HEAD(&sas_port->port_list);
+ INIT_LIST_HEAD(&sas_port->phy_list);
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node = leapraid_transport_topo_node_by_sas_addr(adapter,
+ sas_address,
+ card_port);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ if (!topo_node) {
+ dev_err(&adapter->pdev->dev,
+ "%s: failed to find parent node for sas addr 0x%016llx!\n",
+ __func__, sas_address);
+ kfree(sas_port);
+ return NULL;
+ }
+
+ if (leapraid_transport_set_identify(adapter, handle,
+ &sas_port->remote_identify)) {
+ kfree(sas_port);
+ return NULL;
+ }
+
+ if (sas_port->remote_identify.device_type == SAS_PHY_UNUSED) {
+ kfree(sas_port);
+ return NULL;
+ }
+
+ sas_port->card_port = card_port;
+ *out_topo_node = topo_node;
+
+ return sas_port;
+}
+
+static int leapraid_bind_phys_and_vphy(struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port,
+ struct leapraid_topo_node *topo_node,
+ struct leapraid_card_port *card_port,
+ struct leapraid_vphy **out_vphy)
+{
+ struct leapraid_vphy *vphy = NULL;
+ int i;
+
+ for (i = 0; i < topo_node->phys_num; i++) {
+ if (topo_node->card_phy[i].remote_identify.sas_address !=
+ sas_port->remote_identify.sas_address ||
+ topo_node->card_phy[i].card_port != card_port)
+ continue;
+
+ list_add_tail(&topo_node->card_phy[i].port_siblings,
+ &sas_port->phy_list);
+ sas_port->phys_num++;
+
+ if (topo_node->hdl <= adapter->dev_topo.card.phys_num) {
+ if (!topo_node->card_phy[i].vphy) {
+ card_port->phy_mask |= BIT(i);
+ continue;
+ }
+
+ vphy = leapraid_get_vphy_by_phy(card_port, i);
+ if (!vphy)
+ return -1;
+ }
+ }
+
+ *out_vphy = vphy;
+ return sas_port->phys_num ? 0 : -1;
+}
+
+static struct sas_rphy *leapraid_create_and_register_rphy(
+ struct leapraid_adapter *adapter,
+ struct leapraid_sas_port *sas_port,
+ struct leapraid_topo_node *topo_node,
+ struct leapraid_card_port *card_port,
+ struct leapraid_vphy *vphy)
+{
+ struct leapraid_sas_dev *sas_dev = NULL;
+ struct leapraid_card_phy *card_phy;
+ struct sas_port *port;
+ struct sas_rphy *rphy;
+
+ if (!topo_node->parent_dev)
+ return NULL;
+
+ port = sas_port_alloc_num(topo_node->parent_dev);
+ if (sas_port_add(port))
+ return NULL;
+
+ list_for_each_entry(card_phy, &sas_port->phy_list, port_siblings) {
+ sas_port_add_phy(port, card_phy->phy);
+ card_phy->phy_is_assigned = true;
+ card_phy->card_port = card_port;
+ }
+
+ if (sas_port->remote_identify.device_type == SAS_END_DEVICE) {
+ sas_dev = leapraid_get_sas_dev_by_addr(adapter,
+ sas_port->remote_identify.sas_address,
+ card_port);
+ if (!sas_dev)
+ return NULL;
+ sas_dev->pend_sas_rphy_add = 1;
+ rphy = sas_end_device_alloc(port);
+ sas_dev->rphy = rphy;
+
+ if (topo_node->hdl <= adapter->dev_topo.card.phys_num) {
+ if (!vphy)
+ card_port->sas_address = sas_dev->sas_addr;
+ else
+ vphy->sas_address = sas_dev->sas_addr;
+ }
+
+ } else {
+ rphy = sas_expander_alloc(port,
+ sas_port->remote_identify.device_type);
+ if (topo_node->hdl <= adapter->dev_topo.card.phys_num)
+ card_port->sas_address =
+ sas_port->remote_identify.sas_address;
+ }
+
+ rphy->identify = sas_port->remote_identify;
+
+ if (sas_rphy_add(rphy))
+ dev_err(&adapter->pdev->dev,
+ "%s: failed to add rphy\n", __func__);
+
+ if (sas_dev) {
+ sas_dev->pend_sas_rphy_add = 0;
+ leapraid_sdev_put(sas_dev);
+ }
+
+ sas_port->port = port;
+ return rphy;
+}
+
+struct leapraid_sas_port *leapraid_transport_port_add(
+ struct leapraid_adapter *adapter,
+ u16 hdl, u64 sas_address,
+ struct leapraid_card_port *card_port)
+{
+ struct leapraid_card_phy *card_phy, *card_phy_next;
+ struct leapraid_topo_node *topo_node = NULL;
+ struct leapraid_sas_port *sas_port = NULL;
+ struct leapraid_vphy *vphy = NULL;
+ struct sas_rphy *rphy = NULL;
+ unsigned long flags;
+
+ if (!card_port)
+ return NULL;
+
+ sas_port = leapraid_prepare_sas_port(adapter, hdl, sas_address,
+ card_port, &topo_node);
+ if (!sas_port)
+ return NULL;
+
+ leapraid_detach_phy_from_old_port(adapter,
+ topo_node,
+ sas_port->remote_identify.sas_address,
+ card_port);
+
+ if (leapraid_bind_phys_and_vphy(adapter, sas_port, topo_node,
+ card_port, &vphy))
+ goto out_fail;
+
+ rphy = leapraid_create_and_register_rphy(adapter, sas_port, topo_node,
+ card_port, vphy);
+ if (!rphy)
+ goto out_fail;
+
+ dev_info(&rphy->dev,
+ "%s: added dev: hdl=0x%04x, sas addr=0x%016llx\n",
+ __func__, hdl,
+ (unsigned long long)sas_port->remote_identify.sas_address);
+
+ sas_port->rphy = rphy;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ list_add_tail(&sas_port->port_list, &topo_node->sas_port_list);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ if (sas_port->remote_identify.device_type ==
+ LEAPRAID_DEVTYP_EDGE_EXPANDER ||
+ sas_port->remote_identify.device_type ==
+ LEAPRAID_DEVTYP_FANOUT_EXPANDER)
+ leapraid_transport_exp_report_manu(adapter,
+ sas_port->remote_identify.sas_address,
+ rphy_to_expander_device(rphy),
+ card_port->port_id);
+
+ return sas_port;
+
+out_fail:
+ list_for_each_entry_safe(card_phy, card_phy_next,
+ &sas_port->phy_list, port_siblings)
+ list_del(&card_phy->port_siblings);
+ kfree(sas_port);
+ return NULL;
+}
+
+static struct leapraid_sas_port *leapraid_find_and_remove_sas_port(
+ struct leapraid_topo_node *topo_node,
+ u64 sas_address,
+ struct leapraid_card_port *remove_card_port,
+ bool *found)
+{
+ struct leapraid_sas_port *sas_port, *sas_port_next;
+
+ list_for_each_entry_safe(sas_port, sas_port_next,
+ &topo_node->sas_port_list, port_list) {
+ if (sas_port->remote_identify.sas_address != sas_address)
+ continue;
+
+ if (sas_port->card_port != remove_card_port)
+ continue;
+
+ *found = true;
+ list_del(&sas_port->port_list);
+ return sas_port;
+ }
+ return NULL;
+}
+
+static void leapraid_cleanup_card_port_and_vphys(
+ struct leapraid_adapter *adapter,
+ u64 sas_address,
+ struct leapraid_card_port *remove_card_port)
+{
+ struct leapraid_card_port *card_port, *card_port_next;
+ struct leapraid_vphy *vphy, *vphy_next;
+
+ if (remove_card_port->vphys_mask) {
+ list_for_each_entry_safe(vphy, vphy_next,
+ &remove_card_port->vphys_list, list) {
+ if (vphy->sas_address != sas_address)
+ continue;
+
+ dev_info(&adapter->pdev->dev,
+ "%s: remove vphy: %p from port: %p, port_id=%d\n",
+ __func__, vphy, remove_card_port,
+ remove_card_port->port_id);
+
+ remove_card_port->vphys_mask &= ~vphy->phy_mask;
+ list_del(&vphy->list);
+ kfree(vphy);
+ }
+
+ if (!remove_card_port->vphys_mask &&
+ !remove_card_port->sas_address) {
+ dev_info(&adapter->pdev->dev,
+ "%s: remove empty hba_port: %p, port_id=%d\n",
+ __func__,
+ remove_card_port,
+ remove_card_port->port_id);
+ list_del(&remove_card_port->list);
+ kfree(remove_card_port);
+ remove_card_port = NULL;
+ }
+ }
+
+ list_for_each_entry_safe(card_port, card_port_next,
+ &adapter->dev_topo.card_port_list, list) {
+ if (card_port != remove_card_port)
+ continue;
+
+ if (card_port->sas_address != sas_address)
+ continue;
+
+ if (!remove_card_port->vphys_mask) {
+ dev_info(&adapter->pdev->dev,
+ "%s: remove hba_port: %p, port_id=%d\n",
+ __func__, card_port, card_port->port_id);
+ list_del(&card_port->list);
+ kfree(card_port);
+ } else {
+ dev_info(&adapter->pdev->dev,
+ "%s: clear sas_address of hba_port: %p, port_id=%d\n",
+ __func__, card_port, card_port->port_id);
+ remove_card_port->sas_address = 0;
+ }
+ break;
+ }
+}
+
+static void leapraid_clear_topo_node_phys(struct leapraid_topo_node *topo_node,
+ u64 sas_address)
+{
+ int i;
+
+ for (i = 0; i < topo_node->phys_num; i++) {
+ if (topo_node->card_phy[i].remote_identify.sas_address ==
+ sas_address) {
+ memset(&topo_node->card_phy[i].remote_identify, 0,
+ sizeof(struct sas_identify));
+ topo_node->card_phy[i].vphy = false;
+ }
+ }
+}
+
+void leapraid_transport_port_remove(struct leapraid_adapter *adapter,
+ u64 sas_address, u64 sas_address_parent,
+ struct leapraid_card_port *remove_card_port)
+{
+ struct leapraid_card_phy *card_phy, *card_phy_next;
+ struct leapraid_sas_port *sas_port = NULL;
+ struct leapraid_topo_node *topo_node;
+ unsigned long flags;
+ bool found = false;
+
+ if (!remove_card_port)
+ return;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+
+ topo_node = leapraid_transport_topo_node_by_sas_addr(adapter,
+ sas_address_parent,
+ remove_card_port);
+ if (!topo_node) {
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock,
+ flags);
+ return;
+ }
+
+ sas_port = leapraid_find_and_remove_sas_port(topo_node, sas_address,
+ remove_card_port, &found);
+
+ if (!found) {
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock,
+ flags);
+ return;
+ }
+
+ if (topo_node->hdl <= adapter->dev_topo.card.phys_num &&
+ adapter->adapter_attr.enable_mp)
+ leapraid_cleanup_card_port_and_vphys(adapter, sas_address,
+ remove_card_port);
+
+ leapraid_clear_topo_node_phys(topo_node, sas_address);
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ list_for_each_entry_safe(card_phy, card_phy_next,
+ &sas_port->phy_list, port_siblings) {
+ card_phy->phy_is_assigned = false;
+ if (!adapter->access_ctrl.host_removing)
+ sas_port_delete_phy(sas_port->port, card_phy->phy);
+
+ list_del(&card_phy->port_siblings);
+ }
+
+ if (!adapter->access_ctrl.host_removing)
+ sas_port_delete(sas_port->port);
+
+ dev_info(&adapter->pdev->dev,
+ "%s: removed sas_port for sas addr=0x%016llx\n",
+ __func__, (unsigned long long)sas_address);
+
+ kfree(sas_port);
+}
+
+static void leapraid_init_sas_or_exp_phy(struct leapraid_adapter *adapter,
+ struct leapraid_card_phy *card_phy,
+ struct sas_phy *phy,
+ struct leapraid_sas_phy_p0 *phy_pg0,
+ struct leapraid_exp_p1 *exp_pg1)
+{
+ if (exp_pg1 && phy_pg0)
+ return;
+
+ if (!exp_pg1 && !phy_pg0)
+ return;
+
+ phy->identify = card_phy->identify;
+ phy->identify.phy_identifier = card_phy->phy_id;
+ phy->negotiated_linkrate = phy_pg0 ?
+ leapraid_transport_convert_phy_link_rate(
+ phy_pg0->neg_link_rate &
+ LEAPRAID_SAS_NEG_LINK_RATE_MASK_PHYSICAL) :
+ leapraid_transport_convert_phy_link_rate(
+ exp_pg1->neg_link_rate &
+ LEAPRAID_SAS_NEG_LINK_RATE_MASK_PHYSICAL);
+ phy->minimum_linkrate_hw = phy_pg0 ?
+ leapraid_transport_convert_phy_link_rate(
+ phy_pg0->hw_link_rate &
+ LEAPRAID_SAS_HWRATE_MIN_RATE_MASK) :
+ leapraid_transport_convert_phy_link_rate(
+ exp_pg1->hw_link_rate &
+ LEAPRAID_SAS_HWRATE_MIN_RATE_MASK);
+ phy->maximum_linkrate_hw = phy_pg0 ?
+ leapraid_transport_convert_phy_link_rate(
+ phy_pg0->hw_link_rate >> 4) :
+ leapraid_transport_convert_phy_link_rate(
+ exp_pg1->hw_link_rate >> 4);
+ phy->minimum_linkrate = phy_pg0 ?
+ leapraid_transport_convert_phy_link_rate(
+ phy_pg0->p_link_rate &
+ LEAPRAID_SAS_PRATE_MIN_RATE_MASK) :
+ leapraid_transport_convert_phy_link_rate(
+ exp_pg1->p_link_rate &
+ LEAPRAID_SAS_PRATE_MIN_RATE_MASK);
+ phy->maximum_linkrate = phy_pg0 ?
+ leapraid_transport_convert_phy_link_rate(
+ phy_pg0->p_link_rate >> 4) :
+ leapraid_transport_convert_phy_link_rate(
+ exp_pg1->p_link_rate >> 4);
+ phy->hostdata = card_phy->card_port;
+}
+
+void leapraid_transport_add_card_phy(struct leapraid_adapter *adapter,
+ struct leapraid_card_phy *card_phy,
+ struct leapraid_sas_phy_p0 *phy_pg0,
+ struct device *parent_dev)
+{
+ struct sas_phy *phy;
+
+ INIT_LIST_HEAD(&card_phy->port_siblings);
+ phy = sas_phy_alloc(parent_dev, card_phy->phy_id);
+ if (!phy) {
+ dev_err(&adapter->pdev->dev,
+ "%s sas_phy_alloc failed!\n", __func__);
+ return;
+ }
+
+ if ((leapraid_transport_set_identify(adapter, card_phy->hdl,
+ &card_phy->identify))) {
+ dev_err(&adapter->pdev->dev,
+ "%s set phy handle identify failed!\n", __func__);
+ sas_phy_free(phy);
+ return;
+ }
+
+ card_phy->attached_hdl = le16_to_cpu(phy_pg0->attached_dev_hdl);
+ if (card_phy->attached_hdl) {
+ if (leapraid_transport_set_identify(adapter,
+ card_phy->attached_hdl,
+ &card_phy->remote_identify)) {
+ dev_err(&adapter->pdev->dev,
+ "%s set phy attached handle identify failed!\n",
+ __func__);
+ sas_phy_free(phy);
+ return;
+ }
+ }
+
+ leapraid_init_sas_or_exp_phy(adapter, card_phy, phy, phy_pg0, NULL);
+
+ if ((sas_phy_add(phy))) {
+ sas_phy_free(phy);
+ return;
+ }
+
+ card_phy->phy = phy;
+}
+
+int leapraid_transport_add_exp_phy(struct leapraid_adapter *adapter,
+ struct leapraid_card_phy *card_phy,
+ struct leapraid_exp_p1 *exp_pg1,
+ struct device *parent_dev)
+{
+ struct sas_phy *phy;
+
+ INIT_LIST_HEAD(&card_phy->port_siblings);
+ phy = sas_phy_alloc(parent_dev, card_phy->phy_id);
+ if (!phy) {
+ dev_err(&adapter->pdev->dev,
+ "%s sas_phy_alloc failed!\n", __func__);
+ return -EFAULT;
+ }
+
+ if ((leapraid_transport_set_identify(adapter, card_phy->hdl,
+ &card_phy->identify))) {
+ dev_err(&adapter->pdev->dev,
+ "%s set phy hdl identify failed!\n", __func__);
+ sas_phy_free(phy);
+ return -EFAULT;
+ }
+
+ card_phy->attached_hdl = le16_to_cpu(exp_pg1->attached_dev_hdl);
+ if (card_phy->attached_hdl) {
+ if (leapraid_transport_set_identify(adapter,
+ card_phy->attached_hdl,
+ &card_phy->remote_identify)) {
+ dev_err(&adapter->pdev->dev,
+ "%s set phy attached hdl identify failed!\n",
+ __func__);
+ sas_phy_free(phy);
+ }
+ }
+
+ leapraid_init_sas_or_exp_phy(adapter, card_phy, phy, NULL, exp_pg1);
+
+ if ((sas_phy_add(phy))) {
+ sas_phy_free(phy);
+ return -EFAULT;
+ }
+
+ card_phy->phy = phy;
+ return 0;
+}
+
+void leapraid_transport_update_links(struct leapraid_adapter *adapter,
+ u64 sas_address, u16 hdl, u8 phy_index,
+ u8 link_rate, struct leapraid_card_port *target_card_port)
+{
+ struct leapraid_topo_node *topo_node;
+ struct leapraid_card_phy *card_phy;
+ struct leapraid_card_port *card_port = NULL;
+ unsigned long flags;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering)
+ return;
+
+ spin_lock_irqsave(&adapter->dev_topo.topo_node_lock, flags);
+ topo_node = leapraid_transport_topo_node_by_sas_addr(adapter,
+ sas_address,
+ target_card_port);
+ if (!topo_node) {
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock,
+ flags);
+ return;
+ }
+
+ card_phy = &topo_node->card_phy[phy_index];
+ card_phy->attached_hdl = hdl;
+ spin_unlock_irqrestore(&adapter->dev_topo.topo_node_lock, flags);
+
+ if (hdl && link_rate >= LEAPRAID_SAS_NEG_LINK_RATE_1_5) {
+ leapraid_transport_set_identify(adapter, hdl,
+ &card_phy->remote_identify);
+ if (topo_node->hdl <= adapter->dev_topo.card.phys_num &&
+ adapter->adapter_attr.enable_mp) {
+ list_for_each_entry(card_port,
+ &adapter->dev_topo.card_port_list,
+ list) {
+ if (card_port->sas_address == sas_address &&
+ card_port == target_card_port)
+ card_port->phy_mask |=
+ BIT(card_phy->phy_id);
+ }
+ }
+ leapraid_transport_attach_phy_to_port(adapter, topo_node,
+ card_phy,
+ card_phy->remote_identify.sas_address,
+ target_card_port);
+ } else {
+ memset(&card_phy->remote_identify, 0,
+ sizeof(struct sas_identify));
+ }
+
+ if (card_phy->phy)
+ card_phy->phy->negotiated_linkrate =
+ leapraid_transport_convert_phy_link_rate(link_rate);
+}
+
+static int leapraid_dma_map_buffer(struct device *dev, struct bsg_buffer *buf,
+ dma_addr_t *dma_addr,
+ size_t *dma_len, void **p)
+{
+ if (buf->sg_cnt > 1) {
+ *p = dma_alloc_coherent(dev, buf->payload_len, dma_addr,
+ GFP_KERNEL);
+ if (!*p)
+ return -ENOMEM;
+
+ *dma_len = buf->payload_len;
+ } else {
+ if (!dma_map_sg(dev, buf->sg_list, 1, DMA_BIDIRECTIONAL))
+ return -ENOMEM;
+
+ *dma_addr = sg_dma_address(buf->sg_list);
+ *dma_len = sg_dma_len(buf->sg_list);
+ *p = NULL;
+ }
+ return 0;
+}
+
+static void leapraid_dma_unmap_buffer(struct device *dev,
+ struct bsg_buffer *buf,
+ dma_addr_t dma_addr,
+ void *p)
+{
+ if (p)
+ dma_free_coherent(dev, buf->payload_len, p, dma_addr);
+ else
+ dma_unmap_sg(dev, buf->sg_list, 1, DMA_BIDIRECTIONAL);
+}
+
+static void leapraid_build_smp_task(struct leapraid_adapter *adapter,
+ struct sas_rphy *rphy,
+ dma_addr_t h2c_dma_addr, size_t h2c_size,
+ dma_addr_t c2h_dma_addr, size_t c2h_size)
+{
+ struct leapraid_smp_passthrough_req *smp_passthrough_req;
+ void *psge;
+
+ smp_passthrough_req =
+ leapraid_get_task_desc(adapter,
+ adapter->driver_cmds.transport_cmd.inter_taskid);
+ memset(smp_passthrough_req, 0, sizeof(*smp_passthrough_req));
+
+ smp_passthrough_req->func = LEAPRAID_FUNC_SMP_PASSTHROUGH;
+ smp_passthrough_req->physical_port =
+ leapraid_transport_get_port_id_by_rphy(adapter, rphy);
+ smp_passthrough_req->sas_address = (rphy) ?
+ cpu_to_le64(rphy->identify.sas_address) :
+ cpu_to_le64(adapter->dev_topo.card.sas_address);
+ smp_passthrough_req->req_data_len =
+ cpu_to_le16(h2c_size - LEAPRAID_SMP_FRAME_HEADER_SIZE);
+ psge = &smp_passthrough_req->sgl;
+ leapraid_build_ieee_sg(adapter, psge, h2c_dma_addr,
+ h2c_size - LEAPRAID_SMP_FRAME_HEADER_SIZE,
+ c2h_dma_addr,
+ c2h_size - LEAPRAID_SMP_FRAME_HEADER_SIZE);
+}
+
+static int leapraid_send_smp_req(struct leapraid_adapter *adapter)
+{
+ dev_info(&adapter->pdev->dev,
+ "%s: sending smp request\n", __func__);
+ init_completion(&adapter->driver_cmds.transport_cmd.done);
+ leapraid_fire_task(adapter,
+ adapter->driver_cmds.transport_cmd.inter_taskid);
+ wait_for_completion_timeout(&adapter->driver_cmds.transport_cmd.done,
+ LEAPRAID_TRANSPORT_CMD_TIMEOUT * HZ);
+ if (!(adapter->driver_cmds.transport_cmd.status & LEAPRAID_CMD_DONE)) {
+ dev_err(&adapter->pdev->dev, "%s: timeout\n", __func__);
+ if (!(adapter->driver_cmds.transport_cmd.status &
+ LEAPRAID_CMD_RESET)) {
+ dev_info(&adapter->pdev->dev,
+ "%s:%d call hard_reset\n",
+ __func__, __LINE__);
+ leapraid_hard_reset_handler(adapter, FULL_RESET);
+ return -ETIMEDOUT;
+ }
+ }
+
+ dev_info(&adapter->pdev->dev, "%s: smp request complete\n", __func__);
+ if (!(adapter->driver_cmds.transport_cmd.status &
+ LEAPRAID_CMD_REPLY_VALID)) {
+ dev_err(&adapter->pdev->dev,
+ "%s: smp request no reply\n", __func__);
+ return -ENXIO;
+ }
+
+ return 0;
+}
+
+static void leapraid_handle_smp_rep(struct leapraid_adapter *adapter,
+ struct bsg_job *job, void *addr_in,
+ unsigned int *reslen)
+{
+ struct leapraid_smp_passthrough_rep *smp_passthrough_rep;
+
+ smp_passthrough_rep =
+ (void *)(&adapter->driver_cmds.transport_cmd.reply);
+
+ dev_info(&adapter->pdev->dev, "%s: response data len=%d\n",
+ __func__, le16_to_cpu(smp_passthrough_rep->resp_data_len));
+
+ memcpy(job->reply, smp_passthrough_rep, sizeof(*smp_passthrough_rep));
+ job->reply_len = sizeof(*smp_passthrough_rep);
+ *reslen = le16_to_cpu(smp_passthrough_rep->resp_data_len);
+
+ if (addr_in)
+ sg_copy_from_buffer(job->reply_payload.sg_list,
+ job->reply_payload.sg_cnt, addr_in,
+ job->reply_payload.payload_len);
+}
+
+static void leapraid_transport_smp_handler(struct bsg_job *job,
+ struct Scsi_Host *shost,
+ struct sas_rphy *rphy)
+{
+ struct leapraid_adapter *adapter = shost_priv(shost);
+ dma_addr_t c2h_dma_addr;
+ dma_addr_t h2c_dma_addr;
+ void *addr_in = NULL;
+ void *addr_out = NULL;
+ size_t c2h_size;
+ size_t h2c_size;
+ int rc;
+ unsigned int reslen = 0;
+
+ if (adapter->access_ctrl.shost_recovering ||
+ adapter->access_ctrl.pcie_recovering) {
+ rc = -EFAULT;
+ goto done;
+ }
+
+ rc = mutex_lock_interruptible(&adapter->driver_cmds.transport_cmd.mutex);
+ if (rc)
+ goto done;
+
+ adapter->driver_cmds.transport_cmd.status = LEAPRAID_CMD_PENDING;
+ rc = leapraid_dma_map_buffer(&adapter->pdev->dev,
+ &job->request_payload,
+ &h2c_dma_addr, &h2c_size, &addr_out);
+ if (rc)
+ goto release_lock;
+
+ if (addr_out)
+ sg_copy_to_buffer(job->request_payload.sg_list,
+ job->request_payload.sg_cnt, addr_out,
+ job->request_payload.payload_len);
+
+ rc = leapraid_dma_map_buffer(&adapter->pdev->dev, &job->reply_payload,
+ &c2h_dma_addr, &c2h_size, &addr_in);
+ if (rc)
+ goto free_req_buf;
+
+ rc = leapraid_check_adapter_is_op(adapter);
+ if (rc)
+ goto free_rep_buf;
+
+ leapraid_build_smp_task(adapter, rphy, h2c_dma_addr,
+ h2c_size, c2h_dma_addr, c2h_size);
+
+ rc = leapraid_send_smp_req(adapter);
+ if (rc)
+ goto free_rep_buf;
+
+ leapraid_handle_smp_rep(adapter, job, addr_in, &reslen);
+
+free_rep_buf:
+ leapraid_dma_unmap_buffer(&adapter->pdev->dev, &job->reply_payload,
+ c2h_dma_addr, addr_in);
+free_req_buf:
+ leapraid_dma_unmap_buffer(&adapter->pdev->dev, &job->request_payload,
+ h2c_dma_addr, addr_out);
+release_lock:
+ adapter->driver_cmds.transport_cmd.status = LEAPRAID_CMD_NOT_USED;
+ mutex_unlock(&adapter->driver_cmds.transport_cmd.mutex);
+done:
+ bsg_job_done(job, rc, reslen);
+}
+
+struct sas_function_template leapraid_transport_functions = {
+ .smp_handler = leapraid_transport_smp_handler,
+};
+
+struct scsi_transport_template *leapraid_transport_template;
--
2.25.1
From: Song Liu <song(a)kernel.org>
stable inclusion
from stable-v6.6.119
commit 7ea2ea68df08bcfabb59bf0012120ffec4901d21
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8287
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 56b3c85e153b84f27e6cff39623ba40a1ad299d3 ]
When livepatch is attached to the same function as bpf trampoline with
a fexit program, bpf trampoline code calls register_ftrace_direct()
twice. The first time will fail with -EAGAIN, and the second time it
will succeed. This requires register_ftrace_direct() to unregister
the address on the first attempt. Otherwise, the bpf trampoline cannot
attach. Here is an easy way to reproduce this issue:
insmod samples/livepatch/livepatch-sample.ko
bpftrace -e 'fexit:cmdline_proc_show {}'
ERROR: Unable to attach probe: fexit:vmlinux:cmdline_proc_show...
Fix this by cleaning up the hash when register_ftrace_function_nolock hits
errors.
Also, move the code that resets ops->func and ops->trampoline into the
error path of register_ftrace_direct(), and add a helper function
reset_direct() that is shared by register_ftrace_direct() and
unregister_ftrace_direct().
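For context, a minimal sketch of the caller-side retry this unblocks (illustrative only; attach_direct_with_retry() is a hypothetical helper, not the actual bpf trampoline code, and the trampoline-regeneration step is elided):
/* Hypothetical helper -- shows why a failed register must clean up. */
static int attach_direct_with_retry(struct ftrace_ops *ops, unsigned long addr)
{
	int err;

	err = register_ftrace_direct(ops, addr);
	if (err != -EAGAIN)
		return err;

	/*
	 * -EAGAIN: an IPMODIFY user (e.g. livepatch) already owns the
	 * function.  The real caller regenerates its trampoline for the
	 * shared-IPMODIFY case and retries; the retry can only succeed if
	 * the failed call above undid its direct-hash and ops changes.
	 */
	return register_ftrace_direct(ops, addr);
}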
Fixes: d05cb470663a ("ftrace: Fix modification of direct_function hash while in use")
Cc: stable(a)vger.kernel.org # v6.6+
Reported-by: Andrey Grodzovsky <andrey.grodzovsky(a)crowdstrike.com>
Closes: https://lore.kernel.org/live-patching/c5058315a39d4615b333e485893345be@crow…
Cc: Steven Rostedt (Google) <rostedt(a)goodmis.org>
Cc: Masami Hiramatsu (Google) <mhiramat(a)kernel.org>
Acked-and-tested-by: Andrey Grodzovsky <andrey.grodzovsky(a)crowdstrike.com>
Signed-off-by: Song Liu <song(a)kernel.org>
Reviewed-by: Jiri Olsa <jolsa(a)kernel.org>
Link: https://lore.kernel.org/r/20251027175023.1521602-2-song@kernel.org
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Acked-by: Steven Rostedt (Google) <rostedt(a)goodmis.org>
[ moved cleanup to reset_direct() ]
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Pan Taixi <pantaixi1(a)huawei.com>
---
kernel/bpf/trampoline.c | 4 ----
kernel/trace/ftrace.c | 20 ++++++++++++++------
2 files changed, 14 insertions(+), 10 deletions(-)
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index bcf8e4bad210..08325a65835b 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -460,10 +460,6 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
* BPF_TRAMP_F_SHARE_IPMODIFY is set, we can generate the
* trampoline again, and retry register.
*/
- /* reset fops->func and fops->trampoline for re-register */
- tr->fops->func = NULL;
- tr->fops->trampoline = 0;
-
/* reset im->image memory attr for arch_prepare_bpf_trampoline */
set_memory_nx((long)im->image, 1);
set_memory_rw((long)im->image, 1);
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 398992597685..44e31eebb7aa 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5370,6 +5370,17 @@ static void remove_direct_functions_hash(struct ftrace_hash *hash, unsigned long
}
}
+static void reset_direct(struct ftrace_ops *ops, unsigned long addr)
+{
+ struct ftrace_hash *hash = ops->func_hash->filter_hash;
+
+ remove_direct_functions_hash(hash, addr);
+
+ /* cleanup for possible another register call */
+ ops->func = NULL;
+ ops->trampoline = 0;
+}
+
/**
* register_ftrace_direct - Call a custom trampoline directly
* for multiple functions registered in @ops
@@ -5465,6 +5476,8 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
ops->direct_call = addr;
err = register_ftrace_function_nolock(ops);
+ if (err)
+ reset_direct(ops, addr);
out_unlock:
mutex_unlock(&direct_mutex);
@@ -5497,7 +5510,6 @@ EXPORT_SYMBOL_GPL(register_ftrace_direct);
int unregister_ftrace_direct(struct ftrace_ops *ops, unsigned long addr,
bool free_filters)
{
- struct ftrace_hash *hash = ops->func_hash->filter_hash;
int err;
if (check_direct_multi(ops))
@@ -5507,13 +5519,9 @@ int unregister_ftrace_direct(struct ftrace_ops *ops, unsigned long addr,
mutex_lock(&direct_mutex);
err = unregister_ftrace_function(ops);
- remove_direct_functions_hash(hash, addr);
+ reset_direct(ops, addr);
mutex_unlock(&direct_mutex);
- /* cleanup for possible another register call */
- ops->func = NULL;
- ops->trampoline = 0;
-
if (free_filters)
ftrace_free_filter(ops);
return err;
--
2.34.1
[PATCH openEuler-1.0-LTS] ubi: Fix possible null-ptr-deref in ubi_free_volume()
by Wang Zhaolong 07 Jan '26
From: Yang Yingliang <yangyingliang(a)huawei.com>
stable inclusion
from stable-v4.19.276
commit 45b2c5ca4d2edae70f19fdb086bd927840c4c309
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/12950
CVE: CVE-2023-54087
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit c15859bfd326c10230f09cb48a17f8a35f190342 ]
It will cause a null-ptr-deref in the following case:
uif_init()
  ubi_add_volume()
    cdev_add() -> if it fails, call kill_volumes()
    device_register()
  kill_volumes() -> if ubi_add_volume() fails, call this function
    ubi_free_volume()
      cdev_del()
      device_unregister() -> trying to delete a not added device,
                             it causes null-ptr-deref
So ubi_free_volume() deletes devices whether they were added or not,
which causes a null-ptr-deref.
Handle the error case while calling ubi_add_volume() to fix this
problem. If adding the volume fails, set the corresponding vol to
NULL so it cannot be accessed in kill_volumes(), and release the
resources in the ubi_add_volume() error path.
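For context, the unwind helper involved looks roughly like this (reconstructed for illustration from the upstream UBI code, not part of this patch); it only walks non-NULL slots, which is why clearing ubi->volumes[i] on failure is enough to keep it away from the half-added volume:
/* Illustrative sketch of the unwind helper; not part of this patch. */
static void kill_volumes(struct ubi_device *ubi)
{
	int i;

	for (i = 0; i < ubi->vtbl_slots; i++)
		if (ubi->volumes[i])
			ubi_free_volume(ubi, ubi->volumes[i]);
}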
Fixes: 801c135ce73d ("UBI: Unsorted Block Images")
Suggested-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang(a)huawei.com>
Reviewed-by: Zhihao Cheng <chengzhihao1(a)huawei.com>
Signed-off-by: Richard Weinberger <richard(a)nod.at>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Wang Zhaolong <wangzhaolong(a)huaweicloud.com>
---
drivers/mtd/ubi/build.c | 1 +
drivers/mtd/ubi/vmt.c | 12 ++++++------
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
index df888c34499e..3e7e5b51eafd 100644
--- a/drivers/mtd/ubi/build.c
+++ b/drivers/mtd/ubi/build.c
@@ -478,10 +478,11 @@ static int uif_init(struct ubi_device *ubi)
for (i = 0; i < ubi->vtbl_slots; i++)
if (ubi->volumes[i]) {
err = ubi_add_volume(ubi, ubi->volumes[i]);
if (err) {
ubi_err(ubi, "cannot add volume %d", i);
+ ubi->volumes[i] = NULL;
goto out_volumes;
}
}
return 0;
diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
index 405cc5289d89..c5dec58846ce 100644
--- a/drivers/mtd/ubi/vmt.c
+++ b/drivers/mtd/ubi/vmt.c
@@ -593,29 +593,29 @@ int ubi_add_volume(struct ubi_device *ubi, struct ubi_volume *vol)
dev = MKDEV(MAJOR(ubi->cdev.dev), vol->vol_id + 1);
err = cdev_add(&vol->cdev, dev, 1);
if (err) {
ubi_err(ubi, "cannot add character device for volume %d, error %d",
vol_id, err);
+ vol_release(&vol->dev);
return err;
}
vol->dev.release = vol_release;
vol->dev.parent = &ubi->dev;
vol->dev.devt = dev;
vol->dev.class = &ubi_class;
vol->dev.groups = volume_dev_groups;
dev_set_name(&vol->dev, "%s_%d", ubi->ubi_name, vol->vol_id);
err = device_register(&vol->dev);
- if (err)
- goto out_cdev;
+ if (err) {
+ cdev_del(&vol->cdev);
+ put_device(&vol->dev);
+ return err;
+ }
self_check_volumes(ubi);
return err;
-
-out_cdev:
- cdev_del(&vol->cdev);
- return err;
}
/**
* ubi_free_volume - free volume.
* @ubi: UBI device description object
--
2.34.3
[PATCH OLK-6.6] cifs: Fix establishing NetBIOS session for SMB2+ connection
by Wang Zhaolong 07 Jan '26
From: Pali Rohár <pali(a)kernel.org>
stable inclusion
from stable-v6.6.93
commit ee68e068cf92f8192ef61557789a6bb7f4b49ce0
category: bugfix
bugzilla: https://atomgit.com/openeuler/kernel/issues/8333
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id…
--------------------------------
[ Upstream commit 781802aa5a5950f99899f13ff9d760f5db81d36d ]
Function ip_rfc1001_connect(), which establishes the NetBIOS session for SMB
connections, currently uses the smb_send() function to send the NetBIOS
Session Request packet. That function expects the passed buffer to be an SMB
packet, and for SMB2+ connections it mangles the packet header, which breaks
the prepared NetBIOS Session Request packet. The result is that it sends a
garbage packet for SMB2+ connections, which an SMB2+ server cannot parse.
smb_send() does not mangle packets for SMB1 connections, so it happens to
work for SMB1.
Fix this problem by using smb_send_kvec() instead of smb_send(): it does not
mangle the prepared packet and sends it as-is. Its API simply takes a
struct msghdr (kvec) instead of a packet buffer.
The [MS-SMB2] specification allows the SMB2 protocol to use NetBIOS as a
transport protocol. NetBIOS can be used over TCP via port 139, so this is a
valid configuration, just not a common one. Even recent Windows versions
(e.g. Windows Server 2022) still support this configuration: SMB over TCP
port 139, including for the modern SMB2 and SMB3 dialects.
This change fixes SMB2 and SMB3 connections over TCP port 139, which require
establishing a NetBIOS session. Tested that this change fixes establishing
SMB2 and SMB3 connections with Windows Server 2022.
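For illustration, the kvec/msghdr pattern the fix relies on looks roughly like this (a hedged sketch; send_raw_packet() is a hypothetical wrapper, while smb_send_kvec(), iov_iter_kvec() and the struct types are the ones used by the patch):
/* Hypothetical wrapper: send a pre-built buffer as-is, no header mangling. */
static int send_raw_packet(struct TCP_Server_Info *server, void *buf, size_t len)
{
	struct msghdr msg = {};
	struct kvec iov = { .iov_base = buf, .iov_len = len };
	size_t sent = 0;
	int rc;

	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iov, 1, len);
	rc = smb_send_kvec(server, &msg, &sent);
	if (rc < 0 || sent != len)
		return rc < 0 ? rc : -ECONNABORTED;
	return 0;
}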
Signed-off-by: Pali Rohár <pali(a)kernel.org>
Signed-off-by: Steve French <stfrench(a)microsoft.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Wang Zhaolong <wangzhaolong(a)huaweicloud.com>
---
fs/smb/client/cifsproto.h | 3 +++
fs/smb/client/connect.c | 20 +++++++++++++++-----
fs/smb/client/transport.c | 2 +-
3 files changed, 19 insertions(+), 6 deletions(-)
diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
index 7f97e5468652..858492220437 100644
--- a/fs/smb/client/cifsproto.h
+++ b/fs/smb/client/cifsproto.h
@@ -29,10 +29,13 @@ extern void cifs_buf_release(void *);
extern struct smb_hdr *cifs_small_buf_get(void);
extern void cifs_small_buf_release(void *);
extern void free_rsp_buf(int, void *);
extern int smb_send(struct TCP_Server_Info *, struct smb_hdr *,
unsigned int /* length */);
+extern int smb_send_kvec(struct TCP_Server_Info *server,
+ struct msghdr *msg,
+ size_t *sent);
extern unsigned int _get_xid(void);
extern void _free_xid(unsigned int);
#define get_xid() \
({ \
unsigned int __xid = _get_xid(); \
diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
index 23e0b902239a..3eae50bfdb6b 100644
--- a/fs/smb/client/connect.c
+++ b/fs/smb/client/connect.c
@@ -3049,12 +3049,14 @@ ip_rfc1001_connect(struct TCP_Server_Info *server)
* some servers require RFC1001 sessinit before sending
* negprot - BB check reconnection in case where second
* sessinit is sent but no second negprot
*/
struct rfc1002_session_packet req = {};
- struct smb_hdr *smb_buf = (struct smb_hdr *)&req;
+ struct msghdr msg = {};
+ struct kvec iov = {};
unsigned int len;
+ size_t sent;
req.trailer.session_req.called_len = sizeof(req.trailer.session_req.called_name);
if (server->server_RFC1001_name[0] != 0)
rfc1002mangle(req.trailer.session_req.called_name,
@@ -3079,23 +3081,31 @@ ip_rfc1001_connect(struct TCP_Server_Info *server)
/*
* As per rfc1002, @len must be the number of bytes that follows the
* length field of a rfc1002 session request payload.
*/
- len = sizeof(req) - offsetof(struct rfc1002_session_packet, trailer.session_req);
+ len = sizeof(req.trailer.session_req);
+ req.type = RFC1002_SESSION_REQUEST;
+ req.flags = 0;
+ req.length = cpu_to_be16(len);
+ len += offsetof(typeof(req), trailer.session_req);
+ iov.iov_base = &req;
+ iov.iov_len = len;
+ iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iov, 1, len);
+ rc = smb_send_kvec(server, &msg, &sent);
+ if (rc < 0 || len != sent)
+ return (rc == -EINTR || rc == -EAGAIN) ? rc : -ECONNABORTED;
- smb_buf->smb_buf_length = cpu_to_be32((RFC1002_SESSION_REQUEST << 24) | len);
- rc = smb_send(server, smb_buf, len);
/*
* RFC1001 layer in at least one server requires very short break before
* negprot presumably because not expecting negprot to follow so fast.
* This is a simple solution that works without complicating the code
* and causes no significant slowing down on mount for everyone else
*/
usleep_range(1000, 2000);
- return rc;
+ return 0;
}
static int
generic_ip_connect(struct TCP_Server_Info *server)
{
diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
index 8f045db44319..1d796400680c 100644
--- a/fs/smb/client/transport.c
+++ b/fs/smb/client/transport.c
@@ -177,11 +177,11 @@ delete_mid(struct mid_q_entry *mid)
* @sent: amount of data sent on socket is stored here
*
* Our basic "send data to server" function. Should be called with srv_mutex
* held. The caller is responsible for handling the results.
*/
-static int
+int
smb_send_kvec(struct TCP_Server_Info *server, struct msghdr *smb_msg,
size_t *sent)
{
int rc = 0;
int retries = 0;
--
2.34.3
Backport the following patches for OLK-5.10:
- e14f9d96acb7 mm/memory-failure: support disabling soft offline for HugeTLB pages
- 2f0c01a690d1 mm/memory-failure: userspace controls soft-offlining pages
- 9bb3fd60f91e arm64: mm: Add copy mc support for all migrate_page
- 45dbef4c04f6 mm: page_eject: Add mc support during offline page
- b7f5b9ef641a mm: Update PF_COREDUMP_MCS to PF_MCS
- 17d08829674a mm/hwpoison: add migrate_page_mc_extra()
- c10a520c4102 mm/hwpoison: introduce copy_mc_highpages
- d689cd1c1479 mm/hwpoison: arm64: introduce copy_mc_highpage
Add copy-mc support for page migration and support for disabling soft
offline for HugeTLB pages.
Since UCE kernel recovery is needed by this, it should be enabled with
the following step:
- echo 1 > /proc/sys/kernel/uce_kernel_recovery
Disable soft offline support for hugetlb with the following step:
- echo 3 > /proc/sys/vm/enable_soft_offline
Since ARCH_ENABLE_HUGEPAGE_MIGRATION is not enabled in openeuler_defconfig
on arm64, there is no need to merge the soft-offlining related patches.
Jiaqi Yan (1):
mm/memory-failure: userspace controls soft-offlining pages
Kyle Meyer (1):
mm/memory-failure: support disabling soft offline for HugeTLB pages
Wupeng Ma (2):
uce: add copy_mc_highpage{s}
arm64: mm: Add copy mc support for migrate_page
.../ABI/testing/sysfs-memory-page-offline | 3 +
include/linux/highmem.h | 55 +++++++++++++
include/linux/mm.h | 1 +
kernel/sysctl.c | 9 +++
mm/memory-failure.c | 25 +++++-
mm/migrate.c | 79 ++++++++++++++++---
6 files changed, 162 insertions(+), 10 deletions(-)
--
2.43.0
From: Paulo Alcantara <pc(a)manguebit.org>
mainline inclusion
from mainline-v6.18
commit 3184b6a5a24ec9ee74087b2a550476f386df7dc2
category: bugfix
bugzilla: https://atomgit.com/src-openeuler/kernel/issues/11577
CVE: CVE-2025-68295
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?…
--------------------------------
When having a multiuser mount with domain= specified and using
cifscreds, cifs_set_cifscreds() will end up setting @ctx->domainname,
so it needs to be freed before leaving cifs_construct_tcon().
This fixes the following memory leak reported by kmemleak:
mount.cifs //srv/share /mnt -o domain=ZELDA,multiuser,...
su - testuser
cifscreds add -d ZELDA -u testuser
...
ls /mnt/1
...
umount /mnt
echo scan > /sys/kernel/debug/kmemleak
cat /sys/kernel/debug/kmemleak
unreferenced object 0xffff8881203c3f08 (size 8):
comm "ls", pid 5060, jiffies 4307222943
hex dump (first 8 bytes):
5a 45 4c 44 41 00 cc cc ZELDA...
backtrace (crc d109a8cf):
__kmalloc_node_track_caller_noprof+0x572/0x710
kstrdup+0x3a/0x70
cifs_sb_tlink+0x1209/0x1770 [cifs]
cifs_get_fattr+0xe1/0xf50 [cifs]
cifs_get_inode_info+0xb5/0x240 [cifs]
cifs_revalidate_dentry_attr+0x2d1/0x470 [cifs]
cifs_getattr+0x28e/0x450 [cifs]
vfs_getattr_nosec+0x126/0x180
vfs_statx+0xf6/0x220
do_statx+0xab/0x110
__x64_sys_statx+0xd5/0x130
do_syscall_64+0xbb/0x380
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Fixes: f2aee329a68f ("cifs: set domainName when a domain-key is used in multiuser")
Signed-off-by: Paulo Alcantara (Red Hat) <pc(a)manguebit.org>
Reviewed-by: David Howells <dhowells(a)redhat.com>
Cc: Jay Shin <jaeshin(a)redhat.com>
Cc: stable(a)vger.kernel.org
Cc: linux-cifs(a)vger.kernel.org
Signed-off-by: Steve French <stfrench(a)microsoft.com>
Conflicts:
fs/smb/client/connect.c
fs/cifs/connect.c
[The code has been refactored multiple times.]
Signed-off-by: Wang Zhaolong <wangzhaolong(a)huaweicloud.com>
---
fs/cifs/connect.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index 287230b70e51..f779f1f4593d 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -5219,10 +5219,11 @@ cifs_construct_tcon(struct cifs_sb_info *cifs_sb, kuid_t fsuid)
if (cap_unix(ses))
reset_cifs_unix_caps(0, tcon, NULL, vol_info);
out:
kfree(vol_info->username);
+ kfree(vol_info->domainname);
kfree_sensitive(vol_info->password);
kfree(vol_info);
return tcon;
}
--
2.34.3
CVE-2025-40328 fixes
Henrique Carvalho (2):
smb: client: fix potential UAF in smb2_close_cached_fid()
smb: client: fix incomplete backport in cfids_invalidation_worker()
fs/smb/client/cached_dir.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
--
2.34.3